source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 17537918 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In perl, the * is usually greedy, unless you add a ? after it. When * is used against a group, however, the situation seems different. My question is "why". Consider this example: my $text = 'f fjfj ff';my (@matches) = $text =~ m/((?:fj)*)/;print "@matches\n";# --> ""@matches = $text =~ m/((?:fj)+)/;print "@matches\n";# --> "fjfj" In the first match, perl lazily prints out nothing, though it could have matched something, as is demonstrated in the second match. Oddly, the behavior of * is greedy as expected when the content of the group is just . instead of actual characters: @matches = $text =~ m/((?:..)*)/;print "@matches\n";# --> 'f fjfj f' Note: The above was tested on perl 5.12. Note: It doesn't matter whether I use capturing or non-capturing parentheses for the inner group.
Now provide the response and nothing else.
| This isn't a matter of greedy or lazy repetition. (?:fj)* is greedily matching as many repetitions of "fj" as it can, but it will successfully match zero repetitions. When you try to match it against the string "f fjfj ff" , it will first attempt to match at position zero (before the first "f"). The maximum number of times you can successfully match "fj" at position zero is zero, so the pattern successfully matches the empty string. Since the pattern successfully matched at position zero, we're done, and the engine has no reason to try a match at a later position. The moral of the story is: don't write a pattern that can match nothing, unless you want it to match nothing. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17537918', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/973402/']} | jdg_80241 |
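The same zero-repetition trap exists in other backtracking regex engines; this quick Python `re` sketch (an illustrative translation, not the original Perl) reproduces the three matches from the question:

```python
import re

text = "f fjfj ff"

# (?:fj)* is greedy, but zero repetitions of "fj" already succeed at
# position 0, so the overall match there is the empty string.
star = re.search(r"((?:fj)*)", text).group(1)

# (?:fj)+ requires at least one repetition, so the engine keeps sliding
# the start position forward until "fjfj" matches at index 2.
plus = re.search(r"((?:fj)+)", text).group(1)

# (?:..)* can match any pair of characters, so at position 0 it greedily
# consumes four pairs: "f fjfj f" (eight characters of the nine).
dots = re.search(r"((?:..)*)", text).group(1)

print(repr(star), repr(plus), repr(dots))
```

As in Perl, the first pattern matches the empty string at position zero, so the moral of the answer — don't write a pattern that can match nothing unless you want it to — applies identically here.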
stackexchange | llm_judgeable_groundtruth_similarity | 385482 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
If $G=(V,E)$ is a simple, undirected graph, then $C\subseteq V$ is said to be a vertex cover if $C\cap e\neq \varnothing$ for all $e\in E$ . Is there an infinite graph $G=(V,E)$ such that for any vertex cover $C$ there is a vertex cover $C'\subseteq C$ with $C'\neq C$ ?
Now provide the response and nothing else.
| No, by Zorn's Lemma! It suffices to check that the intersection of a chain of vertex covers is a vertex cover. If the intersection $C$ fails to be a vertex cover, then there is some edge $(v,w)$ such that neither $v$ nor $w$ is in $C$ . But then both $v$ and $w$ are excluded at some point in the chain, so not every set in the chain is a vertex cover. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/385482', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8628/']} | jdg_80242 |
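For readers who want to poke at the chain argument concretely, here is a small finite Python sanity check (purely illustrative — Zorn's lemma is only needed for infinite chains, and the graph and chain below are made up):

```python
# Path graph 0-1-2-3; edges are frozensets of two vertices.
edges = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})]

def is_cover(c):
    # A vertex cover meets every edge.
    return all(c & e for e in edges)

# A descending chain of vertex covers ...
chain = [{0, 1, 2, 3}, {1, 2, 3}, {1, 2}]
assert all(is_cover(c) for c in chain)

# ... whose intersection is again a vertex cover, as in the proof.
intersection = set.intersection(*chain)
assert intersection == {1, 2} and is_cover(intersection)
```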
stackexchange | llm_judgeable_groundtruth_similarity | 751648 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given any tensor, we can obtain a symmetric tensor through the symmetrising operator: $T_{uv} \rightarrow T_{(uv)}=\frac{1}{n!}(T_{uv}+T_{vu})$, where $n$ is the order of the tensor, and you have to take all possible permutations of the indices. And for anti-symmetrising tensors, you have to take alternating sums of the permuted tensor components, and you get an anti-symmetric tensor. So this is a map from a tensor space to itself. Is this map only a change of basis map? Is it a one-one map?
Now provide the response and nothing else.
| It is true that given any $p$-covariant (or $p$-contravariant, that is not important) tensor $T$ you can define two new tensors. First, the symmetric tensor $T_S$, given in coordinates by $$(T_S)_{i_1,\dots,i_p} = \frac{1}{p!} \sum_{\sigma \in \mathfrak{S}_p} T_{i_{\sigma(1)}, \dots, i_{\sigma(p)}} \, ,$$ where $\mathfrak{S}_p$ is the symmetric group of order $p$ and $i_j$ runs from 1 to $n= \mbox{dim}(E)$ ($E$ the vector space considered), $j=1,2,\dots, p$. And also the skew-symmetric tensor $T_A$ ($A$ from antisymmetric), in coordinates reading $$(T_A)_{i_1,\dots,i_p} = \frac{1}{p!} \sum_{\sigma \in \mathfrak{S}_p} \mbox{sgn}(\sigma) \, T_{i_{\sigma(1)}, \dots, i_{\sigma(p)}} .$$ So you are right, but your formulae are wrong. For $p=2$ you recover the usual matrix decomposition. However, @Andrew D. Hwang is right: for $p>2$ the usual formula fails (and, if I am not wrong, you need some representation theory to find the correct decomposition). And the two spaces cannot be in one-to-one correspondence, because their dimensions are different. This already happens in the usual case $p=2$, when skew-symmetric tensors have dimension $n(n-1)/2$ and the symmetric ones have dimension $n(n+1)/2$. For arbitrary $p$, the dimension of the space of all $p$-covariant (contravariant, remember the construction is the same) skew-symmetric tensors is the binomial coefficient $n$ over $p$ if $p<n$, and zero for $p>n$. Finally, notice that the skew-symmetrisation of every symmetric tensor will be the zero tensor. It is true, however, that if your vector space has an inner product, the spaces of $p$-skew-symmetric tensors and $(n-p)$-skew-symmetric tensors are isomorphic. The isomorphism is called the Hodge dual operator (also Hodge star operator or Hodge dual star operator). (An isomorphism exists even if you do not have an inner product, since two finite-dimensional vector spaces with the same dimension are isomorphic.) Skew-symmetric covariant tensors are very important in differential geometry. You can find more information by looking up exterior algebra, exterior calculus, $p$-forms... I hope my answer helps you. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/751648', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/82336/']} | jdg_80243 |
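The two coordinate formulas above are straightforward to implement. This pure-Python sketch (the dict-of-index-tuples representation is my own choice for illustration) also checks the remark that skew-symmetrising a symmetric tensor yields the zero tensor:

```python
from itertools import permutations
from math import factorial

def symmetrize(T, sign=False):
    """(Anti)symmetrise a p-index tensor given as {(i1,..,ip): value}.

    Implements (T_S)_{i1..ip} = (1/p!) * sum over permutations sigma,
    with the sgn(sigma) factor when sign=True.
    """
    p = len(next(iter(T)))
    out = {}
    for idx in T:
        total = 0.0
        for perm in permutations(range(p)):
            # Parity of the permutation: count inversions.
            inv = sum(1 for a in range(p) for b in range(a + 1, p)
                      if perm[a] > perm[b])
            factor = (-1) ** inv if sign else 1
            total += factor * T.get(tuple(idx[k] for k in perm), 0.0)
        out[idx] = total / factorial(p)
    return out

# A symmetric 2-index tensor: its antisymmetrisation is the zero tensor.
T = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 3.0}
assert all(v == 0.0 for v in symmetrize(T, sign=True).values())
```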
stackexchange | llm_judgeable_groundtruth_similarity | 70209118 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Python 3.9. In my unittest, I want to instantiate ServiceProvider, but it has to use mock_get_service in the test, not the imported get_service, since this makes network calls. I have worked through many solutions, including the original named above, with no success. Every time, the original get_service is called and crashes. app_module: from services import get_serviceclass ServiceProvider() def find_service(self, service_id): return get_service(service_id) unittest_module: from app_module import ServiceProviderdef mock_get_service(service_id): """ Test method: return the named service. """ if service_id == 'ServiceA': service = ServiceA() else: service = ServiceB() return serviceclass Test_ServiceProvider(unittest.TestCase): def test1_find_service(self): with patch('app_module.get_service') as mocked_get_service: mocked_get_service.side_effect = mock_get_service server = ServiceProvider() serv = server.find_service('ServiceA') self.assertIsInstance(serv, ServiceA)
Now provide the response and nothing else.
| Given that we want to swap words x and y , and that we don't care about the situation where they overlap, we can: split the string on occurrences of x within each piece, replace y with x join the pieces with y Essentially, we use split points within the string as a temporary marker to avoid the problem with sequential replacements. Thus: def swap_words(s, x, y): return y.join(part.replace(y, x) for part in s.split(x)) Test it: >>> swap_words('apples and avocados and avocados and apples', 'apples', 'avocados')'avocados and apples and apples and avocados'>>> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/70209118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17576469/']} | jdg_80244 |
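To see why the split/join trick is needed at all, compare it with the obvious two-pass replace, which destroys information after the first pass (same strings as the answer's test):

```python
def swap_words(s, x, y):
    # The answer's trick: split on x, replace y inside each piece, rejoin with y.
    return y.join(part.replace(y, x) for part in s.split(x))

s = "apples and avocados and avocados and apples"

# Naive sequential replacement clobbers everything: after the first
# replace there is no way to tell old "avocados" from new ones.
naive = s.replace("apples", "avocados").replace("avocados", "apples")
assert naive == "apples and apples and apples and apples"

assert swap_words(s, "apples", "avocados") == \
    "avocados and apples and apples and avocados"
```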
stackexchange | llm_judgeable_groundtruth_similarity | 49316751 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm learning SpringBoot2.0 with Java8 . And I followed some blog-making tutorial example. The tutorial source code is: @GetMapping("/{id}/edit")public String edit(@PathVariable Long id, Model model) { model.addAttribute("categoryDto", categoryService.findOne(id)); return "category/edit";} But this code is throwing this error: categoryService.findOne(id) I'm thinking about changing the JPA findOne() method to Optional< S > How to solve that? More info: This is the categoryService method: public Category findOne(Long id) { return categoryRepository.findOne(id);}
Now provide the response and nothing else.
| From at least the 2.0 version, Spring Data JPA modified findOne() . Now, findOne() has neither the same signature nor the same behavior. Previously, it was defined in the CrudRepository interface as: T findOne(ID primaryKey); Now, the single findOne() method that you will find in CrudRepository is the one defined in the QueryByExampleExecutor interface as: <S extends T> Optional<S> findOne(Example<S> example); That is ultimately implemented by SimpleJpaRepository , the default implementation of the CrudRepository interface. This method is a query-by-example search and you don't want that as a replacement. In fact, the method with the same behavior is still there in the new API, but the method name has changed. It was renamed from findOne() to findById() in the CrudRepository interface : Optional<T> findById(ID id); Now it returns an Optional , which is not so bad, since it helps prevent NullPointerException . So, the actual method to invoke is now Optional<T> findById(ID id) . How to use that? Start by learning Optional usage. Here's important information from its specification: A container object which may or may not contain a non-null value. If a value is present, isPresent() will return true and get() will return the value. Additional methods that depend on the presence or absence of a contained value are provided, such as orElse() (return a default value if the value is not present) and ifPresent() (execute a block of code if the value is present). Some hints on how to use Optional with Optional<T> findById(ID id) . Generally, as you look for an entity by id, you want to return it or perform some particular processing if it is not retrieved. Here are three classical usage examples. Suppose that if the entity is found you want to get it, and otherwise you want to get a default value.
You could write : Foo foo = repository.findById(id) .orElse(new Foo()); or get a null default value if it makes sense (same behavior as before the API change) : Foo foo = repository.findById(id) .orElse(null); Suppose that if the entity is found you want to return it, else you want to throw an exception. You could write : return repository.findById(id) .orElseThrow(() -> new EntityNotFoundException(id)); Suppose you want to apply a different processing according to whether the entity is found or not (without necessarily throwing an exception). You could write : Optional<Foo> fooOptional = fooRepository.findById(id);if (fooOptional.isPresent()) { Foo foo = fooOptional.get(); // processing with foo ...} else { // alternative processing....} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/49316751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3820817/']} | jdg_80245 |
stackexchange | llm_judgeable_groundtruth_similarity | 767684 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to create ZIP archives on demand, using either Python zipfile module or unix command line utilities. Resources to be zipped are often > 1GB and not necessarily compression-friendly. How do I efficiently estimate its creation time / size?
Now provide the response and nothing else.
| Extract a bunch of small parts from the big file. Maybe 64 chunks of 64k each. Randomly selected. Concatenate the data, compress it, measure the time and the compression ratio. Since you've randomly selected parts of the file chances are that you have compressed a representative subset of the data. Now all you have to do is to estimate the time for the whole file based on the time of your test-data. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/767684', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/53330/']} | jdg_80246 |
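A minimal Python sketch of that sampling idea, using stdlib zlib as a stand-in for ZIP's deflate step (the chunk count, chunk size, and compression level are assumptions to tune, not fixed recommendations):

```python
import os, random, tempfile, zlib

def estimate_ratio(path, chunks=64, chunk_size=64 * 1024, seed=0):
    """Estimate a file's deflate ratio from randomly sampled chunks."""
    rng = random.Random(seed)
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        sample = bytearray()
        for _ in range(chunks):
            # Pick a random offset and read one chunk from it.
            f.seek(rng.randrange(max(size - chunk_size, 1)))
            sample += f.read(chunk_size)
    return len(zlib.compress(bytes(sample), 6)) / max(len(sample), 1)

# Demo on a throwaway file of highly repetitive (compression-friendly) data.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"abcd" * 500_000)          # ~2 MB
ratio = estimate_ratio(tmp.name)
os.unlink(tmp.name)
```

Multiply the sampled ratio by the full file size for a size estimate, and time the `zlib.compress` call on the sample to extrapolate creation time.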
stackexchange | llm_judgeable_groundtruth_similarity | 47211319 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Python to pull data from Googles adwords API. I'd like to put that data into a Pandas DataFrame so I can perform analysis on the data. I'm using examples provided by Google here. Below is my attempt to try to get the output to be read as a pandas dataframe: from googleads import adwordsimport pandas as pdimport numpy as np# Initialize appropriate service.adwords_client = adwords.AdWordsClient.LoadFromStorage()report_downloader = adwords_client.GetReportDownloader(version='v201710')# Create report query.report_query = ('''select Date, Clicksfrom ACCOUNT_PERFORMANCE_REPORTduring LAST_7_DAYS''')df = pd.read_csv(report_downloader.DownloadReportWithAwql( report_query, 'CSV', client_customer_id='xxx-xxx-xxxx', # denotes which adw account to pull from skip_report_header=True, skip_column_header=False, skip_report_summary=True, include_zero_impressions=True)) The output is the data in what looks like csv format and an error. Day,Clicks2017-11-05,420612017-11-07,457922017-11-03,368742017-11-02,397902017-11-06,449342017-11-08,456312017-11-04,36031---------------------------------------------------------------------------ValueError Traceback (most recent call last)<ipython-input-5-cc25e32c9f3a> in <module>() 25 skip_column_header=False, 26 skip_report_summary=True,---> 27 include_zero_impressions=True))/anaconda/lib/python3.6/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, 
use_unsigned, low_memory, buffer_lines, memory_map, float_precision) 653 skip_blank_lines=skip_blank_lines) 654 --> 655 return _read(filepath_or_buffer, kwds) 656 657 parser_f.__name__ = name/anaconda/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds) 390 compression = _infer_compression(filepath_or_buffer, compression) 391 filepath_or_buffer, _, compression = get_filepath_or_buffer(--> 392 filepath_or_buffer, encoding, compression) 393 kwds['compression'] = compression 394 /anaconda/lib/python3.6/site-packages/pandas/io/common.py in get_filepath_or_buffer(filepath_or_buffer, encoding, compression) 208 if not is_file_like(filepath_or_buffer): 209 msg = "Invalid file path or buffer object type: {_type}"--> 210 raise ValueError(msg.format(_type=type(filepath_or_buffer))) 211 212 return filepath_or_buffer, None, compressionValueError: Invalid file path or buffer object type: <class 'NoneType'> I know I am missing something fundamental and I do not fully understand how to get data into a pandas dataframe. Any help would be greatly appreciated.
Now provide the response and nothing else.
| So I was able to find out the answer to my own question if anybody is curious or had the same problem I did. I had to import io and wrote the output from the adwords query to a string that I named output . I then used the seek() method to start from the beginning and read that using pandas read_csv . from googleads import adwordsimport pandas as pdimport numpy as npimport io# Define output as a stringoutput = io.StringIO()# Initialize appropriate service.adwords_client = adwords.AdWordsClient.LoadFromStorage()report_downloader = adwords_client.GetReportDownloader(version='v201710')# Create report query.report_query = ('''select Date, HourOfDay, Clicksfrom ACCOUNT_PERFORMANCE_REPORTduring LAST_7_DAYS''')# Write query result to output filereport_downloader.DownloadReportWithAwql( report_query, 'CSV', output, client_customer_id='xxx-xxx-xxx', # denotes which adw account to pull from skip_report_header=True, skip_column_header=False, skip_report_summary=True, include_zero_impressions=False)output.seek(0)df = pd.read_csv(output)df.head() | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47211319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7556913/']} | jdg_80247 |
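The load-bearing detail in that fix is `output.seek(0)`: writing leaves the in-memory buffer's cursor at the end of the data, so a reader sees nothing. The pattern is easy to demonstrate with just the standard library (`csv` stands in for pandas here, and the sample CSV rows are made up):

```python
import csv, io

output = io.StringIO()

# Stand-in for report_downloader.DownloadReportWithAwql(..., output, ...):
# whatever writes the CSV leaves the buffer's cursor at the end.
output.write("Day,Clicks\n2017-11-05,42061\n2017-11-07,45792\n")

# Without the rewind, any reader starts at EOF and sees zero rows.
output.seek(0)

rows = list(csv.DictReader(output))
```

`pd.read_csv(output)` behaves the same way — it reads from the buffer's current cursor position, which is why the rewind must come first.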
stackexchange | llm_judgeable_groundtruth_similarity | 20173996 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to save a string in a column of a table in MySQL. Sometimes it is 2 characters and sometimes much larger, up to 50000 characters. Is it a good idea to use varchar(50000)? If I only save 2 characters in the column, does it use all 4998 bytes or just 2 bytes?
Now provide the response and nothing else.
| As documented under Data Type Storage Requirements : Storage Requirements for String Types In the following table, M represents the declared column length in characters for nonbinary string types and bytes for binary string types. L represents the actual length in bytes of a given string value.

Data Type | Storage Required
---|---
CHAR(M) | M × w bytes, 0 <= M <= 255, where w is the number of bytes required for the maximum-length character in the character set
BINARY(M) | M bytes, 0 <= M <= 255
VARCHAR(M), VARBINARY(M) | L + 1 bytes if column values require 0-255 bytes, L + 2 bytes if values may require more than 255 bytes
TINYBLOB, TINYTEXT | L + 1 bytes, where L < 2^8
BLOB, TEXT | L + 2 bytes, where L < 2^16
MEDIUMBLOB, MEDIUMTEXT | L + 3 bytes, where L < 2^24
LONGBLOB, LONGTEXT | L + 4 bytes, where L < 2^32
ENUM('value1','value2',...) | 1 or 2 bytes, depending on the number of enumeration values (65,535 values maximum)
SET('value1','value2',...) | 1, 2, 3, 4 or 8 bytes, depending on the number of set members (64 members maximum)

Variable-length string types are stored using a length prefix plus data. The length prefix requires from one to four bytes depending on the data type, and the value of the prefix is L (the byte length of the string). For example, storage for a MEDIUMTEXT value requires L bytes to store the value plus three bytes to store the length of the value. To calculate the number of bytes used to store a particular CHAR , VARCHAR , or TEXT column value, you must take into account the character set used for that column and whether the value contains multi-byte characters. In particular, when using the utf8 (or utf8mb4) Unicode character set, you must keep in mind that not all characters use the same number of bytes and can require up to three (four) bytes per character. For a breakdown of the storage used for different categories of utf8 or utf8mb4 characters, see Section 10.1.10, "Unicode Support". VARCHAR , VARBINARY , and the BLOB and TEXT types are variable-length types. For each, the storage requirements depend on these factors: The actual length of the column value The column's maximum possible length The character set used for the column, because some character sets contain multi-byte characters For example, a VARCHAR(255) column can hold a string with a maximum length of 255 characters. Assuming that the column uses the latin1 character set (one byte per character), the actual storage required is the length of the string ( L ), plus one byte to record the length of the string. For the string 'abcd' , L is 4 and the storage requirement is five bytes.
If the same column is instead declared to use the ucs2 double-byte character set, the storage requirement is 10 bytes: The length of 'abcd' is eight bytes and the column requires two bytes to store lengths because the maximum length is greater than 255 (up to 510 bytes). Therefore, in answer your question: If I only save 2 characters in the column, does it use all 4998 bytes or just 2 bytes? A VARCHAR(50000) column storing a 2-character string would require L +2 bytes, where L is the number of bytes required to encode that 2-character string in the column's character set: it certainly will not use "all 4998 bytes". | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20173996', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2922211/']} | jdg_80248 |
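The VARCHAR rule quoted above is easy to turn into a back-of-the-envelope calculator. This little helper is my own illustration (it treats the charset as a fixed bytes-per-character factor, which is a simplification for variable-width charsets like utf8):

```python
def varchar_storage_bytes(length_chars, max_chars, bytes_per_char=1):
    """Bytes MySQL needs for one VARCHAR value: L + 1 if the column's
    maximum byte length is <= 255, otherwise L + 2 (length prefix)."""
    L = length_chars * bytes_per_char
    prefix = 1 if max_chars * bytes_per_char <= 255 else 2
    return L + prefix

# A 2-character latin1 string in VARCHAR(50000) costs 4 bytes, not ~50000.
assert varchar_storage_bytes(2, 50000) == 4
# 'abcd' in VARCHAR(255) latin1: 5 bytes, as in the answer's example.
assert varchar_storage_bytes(4, 255) == 5
# 'abcd' in VARCHAR(255) ucs2 (2 bytes/char): 10 bytes, also as above.
assert varchar_storage_bytes(4, 255, bytes_per_char=2) == 10
```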
stackexchange | llm_judgeable_groundtruth_similarity | 164748 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering if there is an easy counter example to what follows: Suppose that $E$ is contractible CW-complex and $G_{1}, G_{2}$ are two isomorphic groups acting freely and continuously on $E$. Is it true that the two actions are conjugated ?
Now provide the response and nothing else.
| You can make a free group of rank two act on the plane (or the hyperbolic plane if you prefer) in two ways such that the orbit spaces are not homeomorphic: one is a punctured torus and the other is a three times punctured sphere. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/164748', 'https://mathoverflow.net', 'https://mathoverflow.net/users/21369/']} | jdg_80249 |
stackexchange | llm_judgeable_groundtruth_similarity | 310001 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
A friend of mine joked that Zorn's lemma must be true because it's used in functional analysis, which gives results about PDEs that are then used to make planes, and the planes fly. I'm not super convinced. Is there a direct line of reasoning from Zorn's Lemma to a physical prediction? I'm thinking something like "Zorn's Lemma implies a theorem that says a certain differential equation has a certain property, and that equation models a phenomena that indeed has that property".
Now provide the response and nothing else.
| There are a lot of arguments that can be applied here, and the question linked in the comments already gives several of these, but there is one that I really like, and which I don't remember having seen a lot. Of course no argument of this sort can be fully rigorous, as it always starts from some assumption on what physics is supposed to be about and what the real world is... so this is only one answer among many possible ones. Our standing assumption is "in physics and real-world applications we only care about observable things". The short version of the argument is that every relevant rule or theorem of physics should be written in terms of geometric sequents (in the sense of geometric logic), as only those correspond to statements about observable things. If this is the case, then Barr's theorem shows that for any such theorem you can prove from your rules using the axiom of choice, you can also prove it without using either the axiom of choice or the law of excluded middle. So, AC (and the law of excluded middle) have no "observable" consequences. Let me clarify what I mean by that: If $x$ is some physical quantity (a real parameter like the mass, speed, position or temperature of something in some units) then propositions like "x<10" are propositions that I call 'observable', because if they are true there is a finite-time experiment that can prove it: if $x$ is indeed <10 then a good enough approximation of the value of $x$ will prove it. (I'm ignoring quantum mechanics, where the issue is more that position, speed and so on cannot really be defined rather than that they cannot be observed with arbitrary precision; in quantum mechanics the observable property would be about the probability of some event occurring... it might require a probabilistic refinement of the discussion here, though.)
By contrast, the statement "$x \leqslant 10$" is not observable in the same sense, because if it happens that $x$ is really equal to $10$, then no measurement of $x$ at any given precision would be able to prove that $x \leqslant 10$: you will always get that $x$ is in some open interval around 10. Now in logical terms, if you have certain observable propositions, you can take a finite "AND", an infinite "OR", or apply some existential quantification to them and obtain another observable proposition; but negation, the infinite "AND", or implication will in general take you out of the realm of observable propositions. In categorical logic, the propositions that are formed from certain 'atomic' propositions using infinite OR, finite AND, and existential quantification are called "geometric propositions". One calls a "geometric sequent" something of the form $\forall x_1,x_2,\dots,x_n, P \Rightarrow Q$, with $P$ and $Q$ geometric propositions. I claim that any rule or theorem of physics should have this form, i.e. it should say that "if some observations are made then I know I will be able to make some other observations". (This also includes things like $P \Rightarrow False$, i.e. "I'll never make such observations".) Barr's theorem shows that if from some axioms that are geometric sequents, and using all of classical logic and the axiom of choice (in particular Zorn's lemma), you can deduce some other geometric sequent, then there exists a similar proof that uses neither the axiom of choice nor the law of excluded middle. So in the end, you can freely use the axiom of choice wherever you want and you know that any theorem about things you can actually observe in the real world will have a constructive proof. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/310001', 'https://mathoverflow.net', 'https://mathoverflow.net/users/128590/']} | jdg_80250 |
stackexchange | llm_judgeable_groundtruth_similarity | 39021503 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have a custom UIView class class MessageBox: UIView { override init(frame: CGRect) { super.init(frame: frame) createSubViews() } required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) createSubViews() } func createSubViews() { let testView = UIView(frame: self.frame) testView.backgroundColor = UIColor.brown self.addSubview(testView) }} I added a UIView inside the storyboard and gave it some constraints: 100 from the top (superview), 0 from the left and right, height is 180 But when I run the app the brown subview I created in the code is way too big. I printed self.frame in my custom view and it turns out that the frame is (0,0,1000,1000) . But why? I set constraints, it should be something like (0,0,deviceWith, 180) . What did I do wrong? EDIT: That's my Storyboard setup:
Now provide the response and nothing else.
| Short and simple answer: You're doing it too early. Detailed answer: When a view is initialized from an Interface Builder file (a xib or a storyboard) its frame is initially set to the frame it has in Interface Builder. You can look at it as a temporary placeholder. When using Auto Layout the constraints are resolved (= the view's actual frame is computed) inside the view's layoutSubviews() method. Thus, there are two possible solutions for your problem: (preferable) If you use Auto Layout, use it throughout your view. Either add your testView in Interface Builder as well and create an outlet for it or create your testView in code as you do, then set its translatesAutoresizingMaskIntoConstraints property to false (to sort of "activate Auto Layout") and add the required constraints for it in code. Set your testView 's frame after the MessageBox view's frame itself has been set by the layout engine. The only place where you can be sure that the system has resolved the view's frame from the constraints is when layoutSubviews() is called. override func layoutSubviews() { super.layoutSubviews() testView.frame = self.bounds} (Note the use of bounds rather than frame here: a subview's frame is expressed in its superview's coordinate space, so bounds is what makes testView fill the MessageBox. You also need to declare your testView as a property / global variable, of course.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39021503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3134192/']} | jdg_80251 |
stackexchange | llm_judgeable_groundtruth_similarity | 56721584 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to make a GitLab CI/CD pipeline for deploying my MEAN application. I have three stages: 1. test 2. build 3. deploy. The build stage has a build_angular job which generates an artifact. Now I want to use this artifact in the next stage, i.e. deploy. The deploy job tells me that it has downloaded the artifact (image has been attached), but now I want to extract it, and I don't know where the artifact is being downloaded. The path where the artifact is downloaded is not mentioned anywhere in the docs.
Now provide the response and nothing else.
| GitLab is cleaning the working directory between two subsequent jobs. That's why you have to use artifacts and dependencies to pass files between jobs. When the "deploy" job says that the build artifact have been downloaded, it simply means that they have been recreated as they were before. The location of the downloaded artifacts matches the location of the artifact paths (as declared in the .yml file). Example build_job: stage: build script: - echo "create bin/ directory" - make artifacts: paths: - bin/deploy_job: stage: deploy script: - ls bin/ dependencies: - build_job Directory bin/ is passed to deploy_job from build_job . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56721584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8420068/']} | jdg_80252 |
stackexchange | llm_judgeable_groundtruth_similarity | 9400959 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering if there is a way to replace all instances of CRLF with '\n'. Is there a way to accomplish that?
Now provide the response and nothing else.
| What have you tried that is not working? CRLF means carriage return, line feed. Carriage return is \r, line feed is \n, so replacing CRLF with line feed would be:

value = value.Replace("\r\n", "\n");

This is just like any other string replace. It is important to note that Replace is a method which returns a new string, so you need to assign the result back to some variable; in most cases it will probably be the same variable name you were already using. Edit: based on your comment below, you are probably mistaking CRLF for LF based on Notepad++'s end of line conversion setting. Open your file in a hex editor to see what is really there. You will see CRLF as 0D 0A (so carriage return is 0D and line feed is 0A). Notepad++ will show you what you want to see. Check your end of line conversion by clicking Edit > EOL Conversion > UNIX Format and it will show LF instead of CRLF, even if a CRLF is there. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9400959', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/729820/']} | jdg_80253 |
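To make the hex-editor point concrete, here is a small illustrative sketch in Python (not the C# from the answer, but the same idea): the CRLF pair is visible as the bytes 0D 0A, and since the replace operation returns a new string, the result must be assigned back.

```python
# Illustrative sketch (Python, not the answer's C#): normalize CRLF to LF
# and inspect the raw bytes, mirroring the hex-editor check in the answer.
text = "line one\r\nline two\r\n"

# The CRLF pair shows up as the bytes 0D 0A in the encoded data.
raw = text.encode("ascii")
assert b"\x0d\x0a" in raw

# Like C#'s String.Replace, str.replace returns a NEW string,
# so the result must be assigned back to a variable.
normalized = text.replace("\r\n", "\n")
print(normalized.encode("ascii"))  # no 0D bytes remain
```

The same "assign the result back" rule applies in both languages, since strings are immutable in each.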
stackexchange | llm_judgeable_groundtruth_similarity | 17628436 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a database and a table with the below attributes:
Column: Date
Type: TIMESTAMP
Attributes: ON UPDATE CURRENT_TIMESTAMP
Default: CURRENT_TIMESTAMP
So the data shows up in this format: "2013-07-06 14:32:18". I'm looking for a SQL query that retrieves rows in "2013-07-06". What I've tried:
select * from posts where date between "2013-07-05 00:00:00" and "2013-07-07 00:00:00";
Now provide the response and nothing else.
| You can use MySQL's DATE() function to obtain only the date part of your TIMESTAMP:

SELECT * FROM posts WHERE DATE(date) = '2013-07-06'

However, such a query cannot benefit from an index on the date column (if one exists); instead, one can use a range as follows:

SELECT * FROM posts WHERE date >= '2013-07-06' AND date < '2013-07-07'

MySQL will implicitly add a time of 00:00:00 to the date literals, but you can include them explicitly if so preferred. Note that BETWEEN ... AND ... is inclusive of its operands, so BETWEEN "2013-07-05 00:00:00" AND "2013-07-07 00:00:00" will include all records on 2013-07-05 and 2013-07-06 as well as any records at 00:00:00 on 2013-07-07 . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17628436', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2439524/']} | jdg_80254 |
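As an illustrative sketch of the two query styles, here is a runnable version using Python's sqlite3 as a stand-in for MySQL (an assumption for portability: SQLite also has a date() function and compares ISO timestamp strings lexicographically; table and column names mirror the question):

```python
# Demonstrate function-on-column vs half-open-range filtering, using SQLite
# as a stand-in for MySQL (illustrative only; the data rows are made up).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, date TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)", [
    (1, "2013-07-05 23:59:59"),
    (2, "2013-07-06 14:32:18"),
    (3, "2013-07-07 00:00:00"),
])

# Applying a function to the column (cannot use an index on `date`):
by_func = conn.execute(
    "SELECT id FROM posts WHERE date(date) = '2013-07-06'").fetchall()

# Half-open range on the raw column (index-friendly):
by_range = conn.execute(
    "SELECT id FROM posts WHERE date >= '2013-07-06' AND date < '2013-07-07'"
).fetchall()

print(by_func, by_range)  # both select only row 2
```

Note how the half-open range (`>=` start, `<` next day) avoids the BETWEEN pitfall of including records at exactly 00:00:00 on the following day.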
stackexchange | llm_judgeable_groundtruth_similarity | 236923 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A small company has a big list of customers. The list contains personal information like names, phone numbers, addresses, info (like what products were bought) and some small notes. The data is stored on the computer in the office of the company. The company does not use any cloud service for anything. Backups are manually created and saved in a different location. A computer administrator arranges this. The company has a website and webhosting, and a webmail. Emails are received and stored at the webhosting account (so where the website and email run). Up to now, the 'only' data on the webhost is:
- The website (with images, webpages and data for a content management system, like news articles, etc.). This is stored in a MySQL database like on almost every webhosting.
- The email is hosted there.
All other data like quotations to customers, invoices, and contact information is stored on a computer. Until now there was no reason to save this to some online server. Although this data regularly appears in email messages, which are on the webserver. Now the company wants employees to be able to remotely load and edit that offline data from a phone or a laptop. This automatically means that the offline data should be accessible via the internet, which means it must be stored at some server. Most data like contact info and addresses can be stored in tables in the MySQL database on the webhost. The person who manages the website has a very good ICT background and could create some API-like access point to load customer details (of course with a secure authentication mechanism). This has a lot of possibilities for the future too. The security risks (threats) I'm concerned about are:
- The webhosting leaking data (but that would mean all email and website data is leaked too).
- The API authentication mechanism for accessing the data remotely must be 99.9% secure.
- Keeping as much data offline as workable feels safer than storing it on a webserver.
How do other organisations manage this when the need arises to access data via internet? Could someone assess the mentioned security risks and help us decide what to do?We mainly wonder how other organisations would handle this. It has something to do with company growth.Does someone have experience with this situation?
Now provide the response and nothing else.
| Not as good as a password manager, better than nothing. Security often comes at the price of convenience, and convenience often comes at the price of security. Password managers built into browsers are primarily there for convenience, and security plays a lesser role. The reason for this decision is that regular users are more easily convinced to use a system that is convenient for them, rather than a system that is more secure, but harder to use. Since you asked for Pros and Cons of real life scenarios, I'll detail the pros and cons of using a browser's built-in password manager in comparison to an offline password manager like Keepass, and not using a password manager at all.

Using a browser's built-in password manager:

Pros
- You already have it. Everyone uses a browser these days, and all major browsers come with built-in password managers. This means that from a regular user's point of view, the barrier to entry is incredibly low.
- It discourages password reuse. People dislike remembering passwords, so they certainly won't remember one password per site. If the browser automatically suggests a strong password upon registering, then the user will not be tempted to re-use an existing password for it. Furthermore, the passwords suggested by the password manager will likely not be cracked by any attackers, should hashes ever be stolen.
- It syncs to my other devices. A user can register to a service on their computer and then use the same service on their phone without having to worry about syncing passwords. That is a huge plus!

Cons
- It doesn't defend against local attacks. Attackers which may have access to the computer of the user (think jealous girlfriend, not government agency) may be able to get the passwords rather easily. With access to the browser, for example when a user forgot to lock their computer, all passwords can be read out in a matter of minutes. It should be noted that local attacks are not something every user is concerned with. For example, I am living alone and don't really have a local attacker in my threat model.
- It increases attack surface. If synchronization between devices is enabled, then the security of all my accounts is tied to the security of the account of my browser vendor. Furthermore, while it is unlikely, but not impossible, an attacker could steal the encrypted credentials of all users and then begin to crack them bit by bit. Since my passphrase is 72 characters long and I use a YubiKey, I am not all too worried about people logging into my Google account - but it could be a reason why you wouldn't want to enable synchronization.
- Vendor Lock-In. To my knowledge, browsers don't yet offer some way to export all credentials into some unified format. This means, if you have 200 credentials saved into Chrome and decide to switch to Firefox...well, have fun.

Using a dedicated offline password manager:

Pros
- Better protection against local attacks. Since passwords are stored in an encrypted file, which is protected by a master password and optionally a key file as well, an attacker with local access will likely not be able to steal your passwords. In the case of you forgetting to lock your PC, many password managers have settings to "forget" the key after a period of inactivity. While that doesn't completely protect you, it is much better than a built-in password manager.
- Better configurability. A browser's built-in password manager may struggle with sites which have less than desirable password policies, so a dedicated password manager can generate passwords according to a specified ruleset.
- Works outside the browser. Sometimes, I need to generate passwords to send via other means, such as text messages, or use them inside a VM. In this case, a browser's built-in password manager just doesn't cut it.

Cons
- Higher barrier to entry. Users which don't have a background in security are unlikely to look for a password manager, and if they do, they're likely daunted by the different offers, such as online password managers, offline password managers, etc. Furthermore, most offline password managers have quite a few very handy features, which are likely confusing for someone who "just wants to use the internet".
- Doesn't sync between devices. Since my passwords are just a file, it doesn't automagically sync between my devices. If I have a desktop PC, a laptop and a smart phone, then either I manually sync my password database between them, or I use some third-party service to do that for me (like Dropbox, Google Drive, etc.)

Not using a password manager:

Pros
- Works everywhere. Often times, I need to enter some form of authentication where I simply can't use a password manager, such as for my BitLocker password or my Windows domain password. In such cases, remembering a strong passphrase is the only option I have.

Cons
- Encourages weak passwords. I won't remember a password like _B+5ZzRk!4vd2+5Q?qw$=9V , that's a fact. I may remember a long passphrase, such as Two Blue Bunnies jump over the Square Tree and Explode. , but that's quite a handful to type every time I want to log in. As such, most users will gravitate towards the shortest possible password they deem "good enough", and usually that's something like BostonNovember2020 .
- Encourages password reuse. "I already thought of a password or passphrase, and now you want me to remember a second one? A third even!? What do you mean I need to remember a new pass phrase for every service I use?!" - That's why people re-use passwords.

Verdict

Dedicated password managers are a great high-security option, that offer a ton of configurability. However, as many tools aimed at experts, they can be difficult to use for beginners. Built-in password managers strike a good middle-ground for non-tech people, increasing their security against likely threats, while making them a bit less secure against unlikely threats. Using no password manager at all should only be used in situations where no password manager is available.

Appendix A: How can a local attacker actually get the passwords?

One of the downsides of browser-based password managers is that they don't protect against local attacks. I will explain one possible local attack for demonstration purposes: Let us assume Bob stores all his passwords in Chrome. Bob leaves the house to buy groceries and forgot to lock his computer. Eve, Bob's jealous girlfriend, suspects that Bob may not be faithful to her, and wishes to check his various online accounts to see if he has been flirting with other women. She has about 30 minutes of direct access to his browser. The first thing Eve does is to open Chrome's internal password page. This can be done via the settings menu, or by directly calling the URL chrome://settings/passwords . In this list, Eve sees all pages for which passwords are saved, as well as the corresponding usernames. This alone already may be revealing sensitive information. On this page, Eve could click the "Show Password" button, but that prompts her to enter Bob's OS password, which she does not have. However, she can still manually recover the password for arbitrary sites. For example, she is most concerned with the sites bookofface.es and bird.app . She can manually navigate to these sites, where she is met with a login form. The browser will automatically enter Bob's credentials into the login form, but the password is blanked out, due to the type="password" property of the input field. By opening the developer console and removing that attribute, the password is revealed in plaintext. While this process is not as fast as a "copy all passwords" script, it is still doable within a reasonable amount of time. Depending on the goals of the attackers, certain sites may be more relevant than others. So while Bob may have passwords saved for hundreds of sites, Eve may only be interested in a couple. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/236923', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/167223/']} | jdg_80255 |
stackexchange | llm_judgeable_groundtruth_similarity | 34843177 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to get Angular2 to work with my Asp.Net WebApi 2 server. I managed to handle some GET requests correctly, however this POST request behaves strangely. I receive an OK (200) response from my server, yet the following code treats it as an error:

public Register(){
    this.accountService.Register(this.Name, this.Password, this.RepeatPassword, this.Email, this.Skype, this.Website).subscribe(
        () => {
            //this is what's supposed to be called, but isn't
            this.accountService.Login(this.Name, this.Password).subscribe(
                res => {
                    console.log(res);
                    localStorage.setItem('token', res);
                    localStorage.setItem('user', this.Name);
                    this.router.navigate(['Home']);
                },
                error2 => {
                    console.log(error2.Message);
                }
            );
        },
        error => {
            //the response gets here, instead of being handled above
            console.log(error.Message);
        }
    );
}

Here is the Register method of the accountService:

public Register (userName:string, password:string, confirmPassword:string, email:string, skype:string, website:string){
    return this.http.post(this.Uri + 'api/Account/Register',
        JSON.stringify({
            UserName: userName,
            Password: password,
            ConfirmPassword: confirmPassword,
            Email: email,
            Skype: skype,
            Website: website
        }),
        this.GetRequestOptions()
    ).map((res: Response) => res.json());
}
Now provide the response and nothing else.
| Finally found the answer. Ajax, when it's expecting JSON, fires off the error callback after getting HTTP 200 with a malformed JSON body (including an empty file). The fix was to replace all empty responses with {}. To do this in an ASP.Net WebApi2 server, you need to use return Ok("{}"); instead of just return Ok(); at the end of the method being called. An alternative solution (and most likely a better one) is to change the calling code - in particular, .map((res: Response) => res.json()) is the bit of code that failed - a conditional checking if res is empty could be added before calling res.json() . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34843177', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5534746/']} | jdg_80256 |
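The underlying issue - an empty body is not valid JSON, while {} is - can be sketched in a few lines of Python (illustrative only, not the original C#/TypeScript; the parse_body helper is hypothetical):

```python
# Sketch of why an empty HTTP 200 body trips a JSON-expecting client:
# an empty string is not valid JSON, while "{}" is.
import json

def parse_body(body: str):
    """Mimic a client that tries to parse every response body as JSON."""
    try:
        return ("ok", json.loads(body))
    except ValueError as e:
        return ("error", str(e))

print(parse_body(""))    # ('error', ...) -- malformed, like res.json() on an empty response
print(parse_body("{}"))  # ('ok', {})
```

This is why returning a literal "{}" body (or guarding the res.json() call) resolves the error even though the HTTP status is 200.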
stackexchange | llm_judgeable_groundtruth_similarity | 11180378 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am wondering why I keep getting error: flexible array member not at end of struct error when I call malloc. I have a struct with a variable length array, and I keep getting this error. The struct is:

typedef struct {
    size_t N;
    double data[];
    int label[];
} s_col;

and the call to malloc is:

col = malloc(sizeof(s_col) + lc * (sizeof(double) + sizeof(int)));

Is this the correct call to malloc?
Now provide the response and nothing else.
| You can only have one flexible array member in a struct, and it must always be the last member of the struct. In other words, in this case you've gone wrong before you call malloc, to the point that there's really no way to call malloc correctly for this struct. To do what you seem to want (arrays of the same number of data and label members), you could consider something like:

struct my_pair {
    double data;
    int label;
};

typedef struct {
    size_t N;
    struct my_pair data_label[];
};

Note that this is somewhat different though: instead of an array of doubles followed by an array of ints, it gives you an array of one double followed by one int, then the next double, next int, and so on. Whether this is close enough to the same or not will depend on how you're using the data (e.g., for passing to an external function that expects a contiguous array, you'll probably have to do things differently). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11180378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/808713/']} | jdg_80257 |
stackexchange | llm_judgeable_groundtruth_similarity | 62975325 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Not sure if title is correct terminology. If you have to compare the characters in 2 strings (A, B) and count the number of matches of chars in B against A:

sum([ch in A for ch in B])

is faster on %timeit than

sum(ch in A for ch in B)

I understand that the first one will create a list of bool, and then sum the values of 1. The second one is a generator. I'm not clear on what it is doing internally and why it is slower? Thanks. Edit with %timeit results:

10 characters
  generator expression: 10000 loops, best of 3: 112 µs per loop
  list:                 10000 loops, best of 3: 94.6 µs per loop

1000 characters
  generator expression: 100 loops, best of 3: 8.5 ms per loop
  list:                 100 loops, best of 3: 6.9 ms per loop

10,000 characters
  generator expression: 10 loops, best of 3: 87.5 ms per loop
  list:                 10 loops, best of 3: 76.1 ms per loop

100,000 characters
  generator expression: 1 loop, best of 3: 908 ms per loop
  list:                 1 loop, best of 3: 840 ms per loop
Now provide the response and nothing else.
| I took a look at the disassembly of each construct (using dis ). I did this by declaring these two functions:

def list_comprehension():
    return sum([ch in A for ch in B])

def generation_expression():
    return sum(ch in A for ch in B)

and then calling dis.dis with each function. For the list comprehension:

 0 BUILD_LIST               0
 2 LOAD_FAST                0 (.0)
 4 FOR_ITER                12 (to 18)
 6 STORE_FAST               1 (ch)
 8 LOAD_FAST                1 (ch)
10 LOAD_GLOBAL              0 (A)
12 COMPARE_OP               6 (in)
14 LIST_APPEND              2
16 JUMP_ABSOLUTE            4
18 RETURN_VALUE

and for the generator expression:

 0 LOAD_FAST                0 (.0)
 2 FOR_ITER                14 (to 18)
 4 STORE_FAST               1 (ch)
 6 LOAD_FAST                1 (ch)
 8 LOAD_GLOBAL              0 (A)
10 COMPARE_OP               6 (in)
12 YIELD_VALUE
14 POP_TOP
16 JUMP_ABSOLUTE            2
18 LOAD_CONST               0 (None)
20 RETURN_VALUE

The disassembly for the actual summation is:

 0 LOAD_GLOBAL              0 (sum)
 2 LOAD_CONST               1 (<code object <genexpr> at 0x7f49dc395240, file "/home/mishac/dev/python/kintsugi/KintsugiModels/automated_tests/a.py", line 12>)
 4 LOAD_CONST               2 ('generation_expression.<locals>.<genexpr>')
 6 MAKE_FUNCTION            0
 8 LOAD_GLOBAL              1 (B)
10 GET_ITER
12 CALL_FUNCTION            1
14 CALL_FUNCTION            1
16 RETURN_VALUE

but this sum disassembly was constant between both your examples, with the only difference being the loading of generation_expression.<locals>.<genexpr> vs list_comprehension.<locals>.<listcomp> (so just loading a different local variable). The differing bytecode instructions between the first two disassemblies are LIST_APPEND for the list comprehension vs. the conjunction of YIELD_VALUE and POP_TOP for the generator expression. I won't pretend I know the intrinsics of Python bytecode, but what I gather from this is that the generator expression is implemented as a queue, where the value is generated and then popped. This popping doesn't have to happen in a list comprehension, leading me to believe there'll be a slight amount of overhead in using generators. Now this doesn't mean that generators are always going to be slower.
Generators excel at being memory-efficient, so there will be a threshold N such that list comprehensions will perform slightly better before this threshold (because memory use won't be a problem), but after this threshold, generators will significantly perform better. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62975325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11135300/']} | jdg_80258 |
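For readers who want to reproduce the comparison, here is a self-contained sketch (exact opcodes and timings vary by CPython version; the test strings A and B are arbitrary data chosen here, not from the question):

```python
# Reproduce the list-comprehension vs generator-expression comparison.
# dis.dis(...) on either function shows the differing instructions
# (e.g. LIST_APPEND vs YIELD_VALUE/POP_TOP on older CPython versions).
import dis
import timeit

A = "f fjfj ff"
B = "fjfj" * 1000  # 4000 characters, all of which occur in A

def list_comprehension():
    return sum([ch in A for ch in B])

def generator_expression():
    return sum(ch in A for ch in B)

# Both compute the same count.
assert list_comprehension() == generator_expression()

for fn in (list_comprehension, generator_expression):
    t = timeit.timeit(fn, number=200)
    print(f"{fn.__name__}: {t:.4f}s")

# Uncomment to inspect the bytecode yourself:
# dis.dis(list_comprehension)
# dis.dis(generator_expression)
```

On small inputs the list comprehension usually wins, consistent with the timings in the question; the crossover in favor of the generator only appears once memory pressure matters.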
stackexchange | llm_judgeable_groundtruth_similarity | 17415709 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How might I change the verb of a WebClient request? It seems to only allow/default to POST, even in the case of DownloadString.

try {
    WebClient client = new WebClient();
    client.QueryString.Add("apiKey", TRANSCODE_KEY);
    client.QueryString.Add("taskId", taskId);
    string response = client.DownloadString(TRANSCODE_URI + "task");
    result = JsonConvert.DeserializeObject<Dictionary<string, dynamic>>(response);
} catch (Exception ex) {
    result = null;
    error = ex.Message + " " + ex.InnerException;
}

And Fiddler says:

POST http://someservice?apikey=20130701-234126753-X7384&taskId=20130701-234126753-258877330210884 HTTP/1.1
Content-Length: 0
Now provide the response and nothing else.
| If you use HttpWebRequest instead you would get more control of the call. You can change the REST verb via the Method property (default is GET):

HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(HostURI);
request.Method = "GET";
String test = String.Empty;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    Stream dataStream = response.GetResponseStream();
    StreamReader reader = new StreamReader(dataStream);
    test = reader.ReadToEnd();
    reader.Close();
    dataStream.Close();
}
DeserializeObject(test ...)
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17415709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/654434/']} | jdg_80259 |
stackexchange | llm_judgeable_groundtruth_similarity | 62945049 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I really like the addition of records in Java 14, at least as a preview feature, as it helps to reduce my need to use lombok for simple, immutable "data holders". But I'm having an issue with the implementation of nullable components. I'm trying to avoid returning null in my codebase to indicate that a value might not be present. Therefore I currently often use something like the following pattern with lombok:

@Value
public class MyClass {
    String id;
    @Nullable String value;

    Optional<String> getValue() { // overwrite the generated getter
        return Optional.ofNullable(this.value);
    }
}

When I try the same pattern now with records, this is not allowed stating incorrect component accessor return type :

record MyRecord (String id, @Nullable String value){
    Optional<String> value(){
        return Optional.ofNullable(this.value);
    }
}

Since I thought the usage of Optional s as return types is now preferred, I'm really wondering why this restriction is in place. Is my understanding of the usage wrong? How can I achieve the same, without adding another accessor with another signature which does not hide the default one? Should Optional not be used in this case at all?
Now provide the response and nothing else.
| A record comprises attributes that primarily define its state. The derivation of the accessors, constructors, etc. is completely based on this state of the records. Now in your example, the state of the attribute value is null , hence the access using the default implementation ends up providing the true state. To provide customized access to this attribute you are instead looking for an overridden API that wraps the actual state and further provides an Optional return type. Of course, as you mentioned one of the ways to deal with it would be to have a custom implementation included in the record definition itself:

record MyClass(String id, String value) {
    Optional<String> getValue() {
        return Optional.ofNullable(value());
    }
}

Alternatively, you could decouple the read and write APIs from the data carrier in a separate class and pass on the record instance to them for custom accesses. The most relevant quote from JEP 384: Records that I found would be (formatting mine):

A record declares its state -- the group of variables -- and commits to an API that matches that state. This means that records give up a freedom that classes usually enjoy -- the ability to decouple a class's API from its internal representation -- but in return, records become significantly more concise. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62945049', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/519910/']} | jdg_80260 |
stackexchange | llm_judgeable_groundtruth_similarity | 47826730 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am resizing my RGB images stored in a folder (two classes) using the following code:

from keras.preprocessing.image import ImageDataGenerator
dataset = ImageDataGenerator()
dataset.flow_from_directory('/home/1', target_size=(50,50), save_to_dir='/home/resized', class_mode='binary', save_prefix='N', save_format='jpeg', batch_size=10)

My data tree is like the following:

1/
  1_1/
    img1.jpg
    img2.jpg
    ........
  1_2/
    IMG1.jpg
    IMG2.jpg
    ........
resized/
  1_1/  (here i want to save resized images of 1_1)
  2_2/  (here i want to save resized images of 1_2)

After running the code i am getting the following output but not images:

Found 271 images belonging to 2 classes.
Out[12]: <keras.preprocessing.image.DirectoryIterator at 0x7f22a3569400>

How to save images?
Now provide the response and nothing else.
| Here's a very simple version of saving augmented images of one image wherever you want:

Step 1. Initialize the image data generator. Here we figure out what changes we want to make to the original image and generate the augmented images. You can read up about the different effects here - https://keras.io/preprocessing/image/

datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
                             height_shift_range=0.1, shear_range=0.15,
                             zoom_range=0.1, channel_shift_range=10,
                             horizontal_flip=True)

Step 2. Here we pick the original image to perform the augmentation on and read in the image:

image_path = 'C:/Users/Darshil/gitly/Deep-Learning/My Projects/CNN_Keras/test_augment/caty.jpg'
image = np.expand_dims(ndimage.imread(image_path), 0)

Step 3. Pick where you want to save the augmented images:

save_here = 'C:/Users/Darshil/gitly/Deep-Learning/My Projects/CNN_Keras/test_augment'

Step 4. We fit the original image:

datagen.fit(image)

Step 5. Iterate over the images and save using the "save_to_dir" parameter:

for x, val in zip(datagen.flow(image,      # image we chose
        save_to_dir=save_here,             # this is where we figure out where to save
        save_prefix='aug',                 # it will save the images as 'aug_0912' some number for every new augmented image
        save_format='png'),
    range(10)):                            # here we define a range because we want 10 augmented images otherwise it will keep looping forever I think
    pass

 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47826730', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8293726/']} | jdg_80261 |
stackexchange | llm_judgeable_groundtruth_similarity | 1720191 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the importance of the Pattern.compile() method? Why do I need to compile the regex string before getting the Matcher object? For example:

String regex = "((\\S+)\\s*some\\s*";
Pattern pattern = Pattern.compile(regex); // why do I need to compile
Matcher matcher = pattern.matcher(text);
Now provide the response and nothing else.
| The compile() method is always called at some point; it's the only way to create a Pattern object. So the question is really, why should you call it explicitly ? One reason is that you need a reference to the Matcher object so you can use its methods, like group(int) to retrieve the contents of capturing groups. The only way to get ahold of the Matcher object is through the Pattern object's matcher() method, and the only way to get ahold of the Pattern object is through the compile() method. Then there's the find() method which, unlike matches() , is not duplicated in the String or Pattern classes. The other reason is to avoid creating the same Pattern object over and over. Every time you use one of the regex-powered methods in String (or the static matches() method in Pattern), it creates a new Pattern and a new Matcher. So this code snippet:

for (String s : myStringList) {
    if ( s.matches("\\d+") ) {
        doSomething();
    }
}

...is exactly equivalent to this:

for (String s : myStringList) {
    if ( Pattern.compile("\\d+").matcher(s).matches() ) {
        doSomething();
    }
}

Obviously, that's doing a lot of unnecessary work. In fact, it can easily take longer to compile the regex and instantiate the Pattern object, than it does to perform an actual match. So it usually makes sense to pull that step out of the loop. You can create the Matcher ahead of time as well, though they're not nearly so expensive:

Pattern p = Pattern.compile("\\d+");
Matcher m = p.matcher("");
for (String s : myStringList) {
    if ( m.reset(s).matches() ) {
        doSomething();
    }
}

If you're familiar with .NET regexes, you may be wondering if Java's compile() method is related to .NET's RegexOptions.Compiled modifier; the answer is no. Java's Pattern.compile() method is merely equivalent to .NET's Regex constructor.
When you specify the Compiled option:

Regex r = new Regex(@"\d+", RegexOptions.Compiled);

...it compiles the regex directly to CIL byte code, allowing it to perform much faster, but at a significant cost in up-front processing and memory use--think of it as steroids for regexes. Java has no equivalent; there's no difference between a Pattern that's created behind the scenes by String#matches(String) and one you create explicitly with Pattern#compile(String) . (EDIT: I originally said that all .NET Regex objects are cached, which is incorrect. Since .NET 2.0, automatic caching occurs only with static methods like Regex.Matches() , not when you call a Regex constructor directly. ref ) | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1720191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/207647/']} | jdg_80262 |
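A rough analogue of pulling compilation out of the loop exists in Python's re module as well (Python shown here purely for illustration; the list contents are made-up test data):

```python
# Precompile-and-reuse, the same idea as Pattern.compile() in Java.
import re

my_string_list = ["123", "abc", "42", ""]

# Implicit: re.fullmatch compiles (and internally caches) the pattern
# on every call.
hits_implicit = [s for s in my_string_list if re.fullmatch(r"\d+", s)]

# Explicit: compile once, reuse the pattern object inside the loop.
digits = re.compile(r"\d+")
hits_explicit = [s for s in my_string_list if digits.fullmatch(s)]

print(hits_implicit, hits_explicit)  # ['123', '42'] twice
```

As in Java, the explicit object also exposes richer methods (match objects, groups) that the one-shot convenience functions hide.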
stackexchange | llm_judgeable_groundtruth_similarity | 94331 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How does using SSL protect against DNS spoofing? Since DNS is at a lower level, it always works the same whether the user is visiting an HTTP or HTTPS site.
Now provide the response and nothing else.
| Assume you managed to poison the DNS cache for securesite.com with an IP that you control. Now, when the client visits https://securesite.com , it will resolve to your IP address. As part of the SSL handshake process, your server will need to send a valid certificate for securesite.com which contains the public key. At this point, you have 2 options. 1) Send the legitimate certificate. This will check out since the certificate is signed by a trusted CA. The client will then encrypt the master secret using the public key. It breaks down at this point, because without the private key, you cannot decrypt the master secret and thus you can't finish the connection setup. 2) Send a self signed certificate. However, since it is not signed by a trusted CA, a warning will show on the client's browser. If the client choose to proceed anyway, then you have successfully carried out the attack. DNS spoofing will generally not work on HTTPS websites unless the client chooses to ignore the warning signs or if you manage to obtain the private key for the site. | {} | {'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/94331', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/81301/']} | jdg_80263 |
stackexchange | llm_judgeable_groundtruth_similarity | 5360306 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to HPC and the task at hand is to do a performance analysis and comparison between MPICH and OpenMPI on a cluster which comprises IBM servers equipped with dual-core AMD Opteron processors, running on ClusterVisionOS. Which benchmark program should I pick to compare the MPICH and OpenMPI implementations? I am not sure if the High-Performance Linpack Benchmark can help, as I am not attempting to measure the performance of the cluster itself. Kindly suggest. Thank you
Now provide the response and nothing else.
The classic examples are: NAS Parallel Benchmarks - they are representative numerical kernels that you'd see in a lot of scientific computing applications. These admittedly have a lot of computation but also have the communications patterns you'd expect to see in real applications, so they are fairly relevant. Or, if you really just want MPI "microbenchmarks", the OSU benchmarks or the Intel MPI Benchmarks are well known choices. These run zillions of tests -- ping-pong, broadcast, etc -- of various sizes and configurations, so you end up with a very large amount of data. The good news is that if you run these with the two MPIs, you'll know exactly where each one is stronger or weaker. MPICH and OpenMPI are both actively maintained and very solid, and have a long-standing friendly rivalry; so I'd be very surprised if you found one to be consistently faster than the other. We have had both on our system, and there were differences with the default settings on real applications, but usually fairly small, some favouring one, some favouring the other. But to really find out which is better for a particular application, you need to do more than run with the default parameters; both implementations have a large number of variables dealing with how they handle collectives (OpenMPI 1.5.x has very interesting-looking hierarchical collectives I haven't played with yet), etc. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5360306', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/667025/']} | jdg_80264
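To make the ping-pong idea concrete without a cluster, here is a toy sketch of the sweep structure such microbenchmarks use (pure Python, with an in-process echo standing in for the remote rank — a real run would use MPI point-to-point calls instead):

```python
import time

def echo(payload: bytes) -> bytes:
    # Stand-in for the remote rank bouncing the message back;
    # real benchmarks do a send/recv pair here.
    return payload

def pingpong(size: int, iters: int = 1000) -> float:
    """Mean round-trip time in seconds for messages of `size` bytes."""
    payload = b"x" * size
    start = time.perf_counter()
    for _ in range(iters):
        echo(payload)
    return (time.perf_counter() - start) / iters

# Sweep message sizes from latency-bound (tiny) to bandwidth-bound
# (large), the way the OSU and Intel MPI benchmark suites do.
results = {size: pingpong(size) for size in (1, 1024, 65536)}
for size, rtt in sorted(results.items()):
    print(f"{size:>6} B: {rtt * 1e6:.3f} us/round-trip")
```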
stackexchange | llm_judgeable_groundtruth_similarity | 14894035 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I add a rectangular overlay (which should be more like a rectangular frame) in the camera preview? My application consists of a button that opens the camera upon clicking. I need the overlay in that camera preview. The code for the java file is: public class MainActivity extends Activity { Button b; Intent i; Bitmap bmp; final static int cameraData = 0; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); b = (Button) findViewById(R.id.button); InputStream is = getResources().openRawResource(R.drawable.ic_launcher); bmp = BitmapFactory.decodeStream(is); b.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // TODO Auto-generated method stub i = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE); startActivityForResult(i, cameraData); } }); }} The Layout file is as follows: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"xmlns:tools="http://schemas.android.com/tools"android:layout_width="match_parent"android:layout_height="match_parent"android:orientation="vertical"tools:context=".MainActivity" ><TextView android:id="@+id/text" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerHorizontal="true" android:layout_gravity="center" android:text="click to take photo" /><Button android:id="@+id/button" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="click" android:layout_gravity="center" /></LinearLayout>
Now provide the response and nothing else.
First, create a public class that extends View. Inside its onDraw() method, draw your rectangle. For example: public class Box extends View { private Paint paint = new Paint(); Box(Context context) { super(context); } @Override protected void onDraw(Canvas canvas) { // Override the onDraw() Method super.onDraw(canvas); paint.setStyle(Paint.Style.STROKE); paint.setColor(Color.GREEN); paint.setStrokeWidth(10); //center int x0 = canvas.getWidth()/2; int y0 = canvas.getHeight()/2; int dx = canvas.getHeight()/3; int dy = canvas.getHeight()/3; //draw guide box canvas.drawRect(x0-dx, y0-dy, x0+dx, y0+dy, paint); }} Then, in your camera preview Activity, have an instance of your Box class: Box box = new Box(this); Finally: addContentView(box, new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT)); A green rectangle will be drawn onto your camera preview. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14894035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1957189/']} | jdg_80265
stackexchange | llm_judgeable_groundtruth_similarity | 30355151 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am facing a weird issue with the Mac version of Android Studio (i.e., I am not able to copy ( ⌘ + C ) and paste ( ⌘ + V ) any text). I have no idea why it is behaving strangely like this. My question may be silly, but I have been facing this issue since I switched from Windows to Mac. Any solutions and alternatives would be very helpful for me. I have been completely stuck on this issue for the past two days.
Now provide the response and nothing else.
There are several reasons why you could be experiencing this: you are using ClipMenu and Android Studio doesn't like it, or your keybinding is messed up. To solve the keybinding issue, press File -> Settings and check what the keybindings are; you may have changed them or imported settings from somebody else. See the image as a reference. Note: the screenshot filters on "copy"; you need to do this for paste too. To fix the issue with ClipMenu: right-click on "Application/IntelliJ IDEA 14.app", choose "Show Package Contents", edit bin/idea.properties, add the property ide.mac.useNativeClipboard=True, and restart IntelliJ IDEA. There is a YouTrack issue with further information | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30355151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3298272/']} | jdg_80266
stackexchange | llm_judgeable_groundtruth_similarity | 126251 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This question of course began as a slightly irreverent play on the riddle, "Can God make a stone so big He can't lift it?" Then I wondered about the answer. Is it possible to exhibit a number that is provably a product of exactly two primes without knowing which primes? More to the point, without having the theoretical and technological means to compute which primes? If so, how is that done, and how old are the results that are needed in order to do it? And if not, does recent theory and technology make it impossible?
Now provide the response and nothing else.
| The answer appears to be yes, that we can construct such numbers at present. The techniques that have been used recently have their roots around 1985 when elliptic curves were first applied to cryptography and factorization and when personal computers with RAM by the megabyte became common. I would like to thank Charles for reminding me that a product of exactly two primes is called a semiprime. Chris K. Caldwell, a professor at the University of Tennessee at Martin whose current research interest is prime number theory, writes that " small examples of proven, unfactored, semiprimes can be easily constructed ." What is easy for him is not so easy for me, but it might not be too hard if I would re-read my copy of Bressoud's Factorization and Primality Testing . Proven, unfactored semiprimes are called "interesting semiprimes" by Don Reble, a software consultant who took up the problem from (at least his interpretation of) remarks by Ed Pegg, Jr. There are at least two examples online, a 1084-digit interesting semiprime constructed by Don Reble and a 5061-digit interesting semiprime constructed by David Broadhurst , a theoretical high energy physicist. Reble's interesting semiprime is in a text file that presents some parameters for a proof and the proof itself. It relies on properties of elliptic curves and is therefore currently over my head. Part of Reble's proof is that his semiprime survives a check that it is not a base-two strong probable prime. Broadhurst's interesting semiprime is in a text file that can be input to Pari. He has written there the relatively elementary conditions and the parameters that he used in order to prove that his number is a semiprime, basing his work on Reble's. He provides the location of a certificate that one of his parameters was proven prime using the free-of-cost, closed-source program Primo by Marcel Martin. Primo is an implementation of elliptic curve primality proving . 
For suggesting the problem, Broadhurst thanked Reble and Phil Carmody, a Linux kernel developer and researcher in high-performance numerical computing. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/126251', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/25495/']} | jdg_80267 |
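The "base-two strong probable prime" check mentioned in Reble's proof is short to implement. A sketch (standard Miller–Rabin with the single base 2; 2047 = 23 × 89 is the smallest semiprime that passes it):

```python
def is_sprp_base2(n: int) -> bool:
    """Strong probable-prime test to base 2: True for every odd prime,
    and also for the rare composites called strong pseudoprimes."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

print(is_sprp_base2(104729))  # large prime -> True
print(is_sprp_base2(341))     # 11*31, Fermat pseudoprime, still caught -> False
print(is_sprp_base2(2047))    # 23*89, smallest base-2 strong pseudoprime -> True
```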
stackexchange | llm_judgeable_groundtruth_similarity | 342803 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Dirac measure is defined by $$\delta_x(A)= \begin{cases} 1 &\text{if $x \in A$}\\ 0 &\text{if $x \notin A$}\\ \end{cases}$$ Let $f:X\rightarrow \mathbb{R}$ be a function. Can anyone show me why $\int f \, d\delta_x=f(x)$? Any help is appreciated.
Now provide the response and nothing else.
| Hint If $g=\sum_i c_i 1_{A_i}$ is a step function, then $$\int g d \delta_x = \sum_i c_i \delta_x(A_i)=\sum_i c_i 1_{A_i}(x)=g(x) $$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/342803', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/67856/']} | jdg_80268 |
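To get from the hint to the full claim (for $f$ measurable, say nonnegative or $\delta_x$-integrable), one can add the limiting step, e.g.:

```latex
\text{For } f\ge 0 \text{ measurable, choose simple functions } g_n \uparrow f \text{ pointwise.}\\
\text{By monotone convergence and the hint, }
\int f\,d\delta_x \;=\; \lim_{n\to\infty}\int g_n\,d\delta_x \;=\; \lim_{n\to\infty} g_n(x) \;=\; f(x).\\
\text{For general } f, \text{ write } f=f^{+}-f^{-} \text{ and apply the above to each part: }
\int f\,d\delta_x \;=\; f^{+}(x)-f^{-}(x) \;=\; f(x).
```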
stackexchange | llm_judgeable_groundtruth_similarity | 16653004 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am struggling to understand the scope of models and their bindings in respect of directives which have limited scope. I get that restricting the scope on a directive means that controller.$scope and directive.scope are no longer the same thing. However, I am confused about how the placing of models either within the directive template or in the html affects data binding. I feel I'm missing something very fundamental and to move on I need to understand this. Take the following code (fiddle here: http://jsfiddle.net/2ams6/ ) JavaScript var app = angular.module('app',[]);app.controller('Ctrl',function($scope){});app.directive('testel', function(){ return { restrict: 'E', scope: { title: '@' }, transclude: true, template: '<div ng-transclude>'+ '<h3>Template title: {{title}}</h3>' + '<h3>Template data.title:{{data.title}}</h3>' + '</div>' } }); HTML <div ng-app='app'> <div ng-controller="Ctrl"> <input ng-model="data.title"> <testel title="{{data.title}}"> <h3>Transclude title:{{title}}</span></h3> <h3>Transclude data.title:{{data.title}}</h3> </testel> </div></div> The model only updates {{title}} within the template, and {{data.title}} in the transclusion. Why not {{title}} in the transclusion nor {{data.title}} in the template? Moving the input to within the transclusion like so (fiddle here: http://jsfiddle.net/eV8q8/1/ ): <div ng-controller="Ctrl"> <testel title="{{data.title}}"> <input ng-model="data.title"> <h3>Transclude title: <span style="color:red">{{title}}</span></h3> <h3>Transclude data.title: <span style="color:red">{{data.title}}</span></h3> </testel></div> now means only transclude {{data:title}} gets updated. Why not either template {{title}} or {{data.title}} , nor transclude {{title}} ? 
And finally, moving the input to within the template, like so (fiddle here: http://jsfiddle.net/4ngmf/2/ ): template: '<div ng-transclude>' + '<input ng-model="data.title" />' + '<h3>Template title: {{title}}</h3>' + '<h3>Template data.title: {{data.title}}</h3>' + '</div>' Now means that only template {{data.title}} gets updated. Again, why not the other 3 bindings? I hope there is something obvious staring me in the face and I'm missing it. If you get me to get this, I'll buy you a beer, or give you some points, or some other such thing. Many thanks.
Now provide the response and nothing else.
| Your fiddles create three scopes: a scope associated with controller Ctrl , because of ng-controller a directive transcluded scope, because of transclude: true a directive isolate scope, because of scope: { ... } In fiddle1, before we type anything into the text box we have the following: Scope 003 is the scope associated with the controller. Since we didn't type into the textbox yet, there is no data property. In isolate scope 004, we see that a title property was created, but it is empty. It is empty because the parent scope doesn't have a data.title property yet. After typing my title into the textbox, we now have: Controller scope 003 now has a new data object property (which is why it is colored yellow), which has a title property now set to my title . Since isolate scope property title is one-way databound to the interpolated value of data.title , it also gets the value my title (the value is colored yellow because it changed). The transcluded scope prototypically inherits from the controller scope, so inside the transcluded HTML, angular can follow the prototype chain and find $scope.data.title in the parent scope (but $scope.title doesn't exist there). The isolate scope only has access to its own properties, hence only property title . In fiddle2, before typing we have the same picture as in fiddle1. After typing my title : Notice where the new data.title property showed up -- on the transcluded scope. The isolate scope is still looking for data.title on the controller scope, but its not there this time, so its title property value remains empty. In fiddle3, before typing we have the same picture as in fiddle1. After typing my title : Notice where the new data.title property showed up -- on the isolate scope. None of the other scopes have access to the isolate scope, so the string my title won't appear anywhere else. Update for Angular v1.2: With change eed299a Angular now clears the transclusion point before transcluding, so the Template title: ... 
and Template data.title: ... parts won't show up unless you modify the template such that ng-transclude is by itself, such as: '<h3>Template title: <span style="color:red">{{title}}</span></h3>' +'<h3>Template data.title: <span style="color:red">{{data.title}}</span></h3>' +'<div ng-transclude></div>' In the update below for Angular v1.3, this template change was made. Update for Angular v1.3+: Since Angular v1.3, the transcluded scope is now a child of the directive's isolate scope, rather than a child of the controller scope. So in fiddle1, before we type anything: The pictures in this update are drawn with the Peri$scope tool, so the pictures are a bit different. The @ indicates we have an isolate scope property that uses the @ syntax, and the pink background means that the tool was unable to find an ancestor reference for the mapping (which is true, since we didn't type anything in to the textbox yet). After typing my title into the textbox, we now have: Isolate properties that use @ binding will always show the interpolated string result in the isolate scope after the @ symbol. Peri$scope was also able to find this exact string value in an ancestor scope, so it also shows a reference to that property. In fiddle 2, before typing we have the same picture as in fiddle1. After typing my title : Notice where the new data.title property showed up -- on the transcluded scope. The isolate scope is still looking for data.title on the controller scope, but its not there this time, so its title property value remains empty. In fiddle3, before typing we have the same picture as in fiddle1. After typing my title : Notice where the new data.title property showed up -- on the isolate scope. 
Even though the transcluded scope has access to the isolate scope via the $parent relationship, it won't look there for title or data.title -- it will only look in the controller scope (i.e., it will follow the prototypal inheritance), and the controller scope doesn't have these properties defined. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16653004', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2298108/']} | jdg_80269 |
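A language-neutral way to see the lookup rules (a rough model, not Angular code): prototypal scope inheritance behaves like Python's `collections.ChainMap`, where a miss on the child falls through to the parent — which is exactly why the transcluded scope can reach the controller's `data.title` while the isolate scope cannot:

```python
from collections import ChainMap

# Controller scope after typing into the textbox.
controller_scope = {"data": {"title": "my title"}}

# Transcluded scope: prototypally inherits from the controller scope,
# so lookups fall through to the parent on a miss.
transcluded_scope = ChainMap({}, controller_scope)

# Isolate scope: only its own explicitly bound properties.
isolate_scope = {"title": "my title"}

assert transcluded_scope["data"]["title"] == "my title"  # found via parent
assert "title" not in transcluded_scope                  # defined nowhere on the chain
assert "data" not in isolate_scope                       # isolated from the controller
```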
stackexchange | llm_judgeable_groundtruth_similarity | 103711 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The title sounds a bit philosophical, but it is about mathematics. Let me explain. Consider a first order theory $T$, which is an extension of Peano Arithmetic. Call this theory "good" if it is consistent and satisfies the following Property : For any $\phi\in\Sigma^0_{n+1}$ such that $T\vdash \phi$ there exists $\psi\in\Pi^0_{n}$ such that $T\vdash\psi$ and $PA\vdash\psi\to\phi$. Question 1 . Is $ZFC$ "good"? Question 2 . The same for $ZFC+something$ (from the lot of proposed new axioms). Motivation . If $ZFC$ is not "good" then there (EDIT) may be theorems which can be proved in $ZFC$ even though they are false (in the standard model of PA). I believe that $ZFC$ is "good". However, I would like to know if there is a formal proof. (Admittedly, I don't have the slightest idea what a proof may look like). By the way, "goodness" implies consistency, hence a proper proof requires some new axioms (a large cardinal, perhaps). (EDIT). As Andreas Blass pointed out correctly, even if a theory is not "good" in the above sense, it does not yet follow that some of the theorems are wrong (an obvious fact which I had missed somehow). Still, the question of whether ZFC is "good" may be of some interest, in my opinion. Question 3 . Is "goodness" equivalent to consistency? (I doubt this). EDIT: (Clarification). In this question, the theory $T$ is supposed to be "good" and at least as strong as $ZFC$. (Thus, the answer to Question 1 must be yes). The question is whether $T\vdash Con(T)\to Good(T)$, where $Good(T)$ is a formalization of "goodness"; note that $Good(T)\in \Pi^0_{2}$. P.S. Is there a standard term for "good"?
Now provide the response and nothing else.
$\def\zfc{\mathrm{ZFC}}\def\pa{\mathrm{PA}}$First, there is no consistent recursively axiomatizable theory extending Robinson's arithmetic which has the property of having existential witnesses as described by Sridhar Ramesh. Let $\pi=\forall x\,\theta(x)$ be a true but $T$-unprovable $\Pi^0_1$ sentence with $\theta$ bounded, which exists by Gödel's theorem. Then $\exists y\,(\theta(y)\to\forall x\,\theta(x))$ is a tautology, but there is no $n\in\omega$ such that $T\vdash\theta(n)\to\forall x\,\theta(x)$: since $\pi$ is true, $\theta(n)$ is provable in Robinson's arithmetic, hence $T$ would prove $\pi$. In fact, an iteration of the same idea shows that the only consistent theory with the property of having existential witnesses is the true arithmetic $\mathrm{Th}(\mathbb N)$. The situation with goodness is more complicated: there are good theories, such as any consistent theory axiomatizable over $\pa$ by a set of $\Pi^0_1$ sentences. Nevertheless, neither $\zfc$ nor any of its recursively axiomatized extensions is good. Let $T=\zfc$, or more generally, let $T$ be any recursively axiomatizable extension of $\pa$ which proves the local $\Sigma^0_1$-reflection principle for $\pa$. Let $\Box_\pa$ denote the provability predicate for $\pa$, and $T_{\Pi^0_1}$ the set of all $\Pi^0_1$ theorems of $T$. By a theorem of Lindström, there exists a $\Pi^0_1$ sentence $\pi$ such that $\pa+\pi$ is a $\Sigma^0_1$-conservative extension of $\pa+T_{\Pi^0_1}$. $T$ proves the reflection principle$$\tag{$*$}\Box_\pa(\neg\pi)\to\neg\pi$$which can be written as a $\Sigma^0_2$ sentence, hence assuming $T$ is good, $(*)$ is provable in $\pa+T_{\Pi^0_1}$, and a fortiori in $\pa+\pi$. But then $\pa+\pi$ proves its own consistency, hence by Gödel's theorem, it is inconsistent. By $\Sigma^0_1$-conservativity, $\pa+T_{\Pi^0_1}$ is also inconsistent, hence $T$ is inconsistent, contradicting its goodness.
Reference: Per Lindström, On partially conservative sentences and interpretability , Proc. AMS 91 (1984), no. 3, pp. 436–443. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/103711', 'https://mathoverflow.net', 'https://mathoverflow.net/users/9833/']} | jdg_80270
stackexchange | llm_judgeable_groundtruth_similarity | 42277639 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a new API that I'm building with ASP.NET Core, and I can't get any data POST'ed to an endpoint. Here's what the endpoint looks like: [HttpPost][Route("StudentResults")]public async Task<IActionResult> GetStudentResults([FromBody]List<string> userSocs, [FromBody]int collegeId){ var college = await _collegeService.GetCollegeByID(collegeId); // var occupations = await _laborMarketService.GetOccupationProgramsBySocsAndCollege(userSocs, college); return Ok();} And here's what my payload that I'm sending through Postman looks like: { "userSocs": [ "291123", "291171", "312021", "291071", "152031", "533011" ], "collegeId": 1} I'm making sure that I have postman set as a POST, with Content-Type application/json . What am I doing wrong?
Now provide the response and nothing else.
You always get null because you need to encapsulate all your post variables inside only one object. Like this: public class MyPostModel { public List<string> userSocs {get; set;} public int collegeId {get; set;}} and then public async Task<IActionResult> GetStudentResults([FromBody] MyPostModel postModel) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42277639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1226755/']} | jdg_80271
stackexchange | llm_judgeable_groundtruth_similarity | 25194450 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
searching for an answer to the question above in Stackoverflow I have always bumped into a similar - but not equal - question: whether it is possible to create a new rails app with an older version than the latest installed on the computer (one of the most popular posts of this kind is Specifying rails version to use when creating a new application ). However, what I'm interested in is knowing if it's possible to run a command such as 'rails __2.1.0__ new myapp' even if that specific rails version is not yet installed on my computer, so that when it runs it automatically installs this rails version plus creates all the starting files (in which the Gemfile contains all the compatible gems of that specific version already). As an example... Right now I'm following the Rails Tutorial book by Michael Hartl and we are asked to use the ruby version 2.0.0, the rails version 4.0.8 and to include the following info into the Gemfile: source 'https://rubygems.org'ruby '2.0.0'#ruby-gemset=railstutorial_rails_4_0gem 'rails', '4.0.8'group :development do gem 'sqlite3', '1.3.8'endgem 'sass-rails', '4.0.1'gem 'uglifier', '2.1.1'gem 'coffee-rails', '4.0.1'gem 'jquery-rails', '3.0.4'gem 'turbolinks', '1.1.1'gem 'jbuilder', '1.0.2'group :doc do gem 'sdoc', '0.3.20', require: falseend It happens that I have by default ruby-2.1.2 and rails 4.1.4, so when I have wanted to follow Hartl's book I have had to create a new rails application (which sets up the Gemfile according to rails 4.1.4) and after that I have had to cd into the new app , run $ gem install rails --version 4.0.8 to install the version, substitute the default Gemfile that came with rails 4.1.4 for the above code , then run bundle install and bundle update . It seems to work, but all in all it is a rather tedious and annoying solution. Couldn't this be solved, like I wrote at the beginning, with a 'rails ____2.1.0____ new myapp' in which version 2.1.0 (that I do not have installed) gets installed at that moment?
I'm sure there has to be an easier way to get started with a different rails version project, I just don't find it or try to solve it with the wrong commands. I'm sure the solution I implemented wasn't good enough either since whenever I try to create another rails app using a version that I have allegedly already installed (2.0.0) this is what I get from the Terminal: Desktop$ rails _2.0.0_ new myapp/Users/gaa/.rvm/rubies/ruby-2.1.2/lib/ruby/site_ruby/2.1.0/rubygems/dependency.rb:313:in `to_specs': Could not find 'railties' (= 2.0.0) - did find: [railties-4.1.4,railties-4.1.1] (Gem::LoadError)Checked in 'GEM_PATH=/Users/gaa/.rvm/gems/ruby-2.1.2:/Users/gaa/.rvm/gems/ruby-2.1.2@global', execute `gem env` for more informationfrom /Users/gaa/.rvm/rubies/ruby-2.1.2/lib/ruby/site_ruby/2.1.0/rubygems/dependency.rb:322:in `to_spec'from /Users/gaa/.rvm/rubies/ruby-2.1.2/lib/ruby/site_ruby/2.1.0/rubygems/core_ext/kernel_gem.rb:53:in `gem'from /Users/gaa/.rvm/gems/ruby-2.1.2/bin/rails:22:in `<main>'from /Users/gaa/.rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `eval'from /Users/gaa/.rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `<main>' I would really appreciate a helping hand.
Now provide the response and nothing else.
I think what you're after is this: gem install rails -v 4.0.8rails _4.0.8_ new myapp That would generate a rails 4.0.8 app, and create a default Gemfile locked at 4.0.8. Less tedious. Rails needs to be installed, to run the rails command. gem is the installer. Ruby needs to be installed, to run gem . Ruby evolves and fixes bugs, old projects may need older versions etc. You probably want a ruby version manager which gets its own installation (chruby, rbenv, rvm) that should precede the gem install ... . You can lock the ruby version for a project, in the Gemfile too e.g.: source 'https://rubygems.org'ruby '2.1.2' | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25194450', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3920354/']} | jdg_80272
stackexchange | llm_judgeable_groundtruth_similarity | 2538604 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $A$ be a unital C*-algebra, and $x\in A^+$. We know $\sqrt{x}$ is the unique positive square root of $x$ and is a limit of polynomials in $x$. My question: what is an explicit form of these polynomials? I thought about using the functional calculus and finding polynomials that converge to the function $\sqrt{t}$. I also thought about the Maclaurin series, but I could not find anything. Please advise me. Thanks a lot.
Now provide the response and nothing else.
| There are many ways to find a sequence of polynomials that converge to a continuous function (the sequence is not unique). In this case, you can consider the polynomials defined inductively by$$p_0(t) = 0, p_{n+1}(t) = p_n(t) + \frac{1}{2}[t-p_n^2(t)]$$This is a sequence that converges pointwise to $\sqrt{t}$. However, they are also a monotone sequence, so by Dini's theorem, they must converge uniformly on compact sets. Hence, $p_n(x)$ converges to $\sqrt{x}$. Note that $p_n(0) = 0$ for all $n\in \mathbb{N}$, which is also sometimes useful (if you want to show that $\sqrt{x}$ is a compact operator whenever $x$ is compact, for instance) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2538604', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/156453/']} | jdg_80273 |
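Numerically, the recursion from the answer is easy to watch converge. A sketch for scalar $t\in[0,1]$ (for an element $x$ with $\|x\|\le 1$; in general one first rescales by $\|x\|$):

```python
def p(t: float, n: int) -> float:
    """Value at t of the n-th polynomial of the recursion
    p_0 = 0,  p_{k+1}(t) = p_k(t) + (t - p_k(t)**2) / 2."""
    v = 0.0
    for _ in range(n):
        v = v + (t - v * v) / 2.0
    return v

vals = [p(0.25, n) for n in range(8)]
assert all(a <= b for a, b in zip(vals, vals[1:]))   # monotone increase (Dini)
assert abs(p(0.25, 100) - 0.5) < 1e-9                # p_n(1/4) -> sqrt(1/4)
assert p(0.0, 50) == 0.0                             # p_n(0) = 0 for all n
```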
stackexchange | llm_judgeable_groundtruth_similarity | 9491114 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Goal : I want to have a custom search (geocode) function and be able to list and click each result and display it on the map. Wanted : Adjust map bounds / zoom level accordingly, i.e. searching for "MA, USA" should zoom the map to let me see the whole Massachusetts state, while searching for "Boston, MA, USA" should zoom on the Boston area. When listing multiple results, the same should apply when clicking on a result. Issue : I can use the geometry.bounds object with fitBounds - but - some results you get using the geocoder do not have that geometry.bounds object. A quick example: searching for "Boston" returns Boston, MA, USA Boston, IN, USA Boston, KY, USA Boston, GA 31626, USA Boston, Salem, VA 22713, USA Boston, NY 14025, USA Both "Boston, KY" and "Boston, NY 14025" do not have bounds. Question : Is there a reliable way to display any geocoder result on a map at the appropriate zoom level? Right now I am using something like this, but I find it ugly and it doesn't solve the zoom issue if (results[0].geometry.bounds) { map.fitBounds(results[0].geometry.bounds);} else { map.setCenter(results[0].geometry.location); // eventually set zoom here to some middle-range value (ugly)}
Now provide the response and nothing else.
Right. After trying both methods and testing, it turns out that: the geometry.bounds object is "optionally returned" as the doc says; we don't exactly know what the geometry.bounds object is based on; geometry.bounds "may not match the recommended viewport" and often doesn't; geometry.bounds returns a square or rectangle of any size and shape while the viewport functions always return a rectangle with the same aspect ratio (around 1.43), whatever your map container dimensions are, as far as I tested. Below is the example of San Francisco, CA, mentioned in the doc. In red using geometry.bounds and in blue using the viewport functions. The solution is simple enough and is reliable. var resultBounds = new google.maps.LatLngBounds( results[0].geometry.viewport.getSouthWest(), results[0].geometry.viewport.getNorthEast());map.fitBounds(resultBounds); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9491114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1238965/']} | jdg_80274
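The fixed-aspect behaviour measured above can be mimicked in a few lines. A toy sketch (plain Python on a flat plane, not the Maps API — real map maths involves the Mercator projection) of what a fitBounds-style viewport must do: expand the requested rectangle about its centre until it matches the viewport's aspect ratio:

```python
def fit_to_aspect(west, south, east, north, aspect=1.43):
    """Return (west, south, east, north) grown about the centre so
    that width / height == aspect, like a fixed-ratio viewport."""
    cx, cy = (west + east) / 2.0, (south + north) / 2.0
    w, h = east - west, north - south
    if w / h < aspect:
        w = h * aspect   # too narrow: widen
    else:
        h = w / aspect   # too wide: heighten
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)

w, s, e, n = fit_to_aspect(0.0, 0.0, 1.0, 1.0)
assert abs((e - w) / (n - s) - 1.43) < 1e-9   # unit square grown to 1.43:1
```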
stackexchange | llm_judgeable_groundtruth_similarity | 39348234 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to add validation to a Spring Boot project. So I put the @NotNull annotation on entity fields. In the controller I check it like this: @RequestMapping(value="", method = RequestMethod.POST)public DataResponse add(@RequestBody @Valid Status status, BindingResult bindingResult) { if(bindingResult.hasErrors()) { return new DataResponse(false, bindingResult.toString()); } statusService.add(status); return new DataResponse(true, "");} This works. But when I change the input to List<Status> statuses , it doesn't work. @RequestMapping(value="/bulk", method = RequestMethod.POST)public List<DataResponse> bulkAdd(@RequestBody @Valid List<Status> statuses, BindingResult bindingResult) { // some code here} Basically, what I want is to apply the validation check from the add method to each Status object in the request-body list. So the sender will know which objects are faulty and which are not. How can I do this in a simple, fast way?
Now provide the response and nothing else.
| My immediate suggestion is to wrap the List in another POJO bean, and use that as the request body parameter. In your example:

@RequestMapping(value="/bulk", method = RequestMethod.POST)
public List<DataResponse> bulkAdd(@RequestBody @Valid StatusList statusList, BindingResult bindingResult) {
    // some code here
}

and StatusList.java will be

@Valid
private List<Status> statuses;
// Getter
// Setter
// Constructors

I did not try it though. Update: The accepted answer in this SO link gives a good explanation of why bean validation is not supported on Lists. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39348234', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2702786/']} | jdg_80275 |
stackexchange | llm_judgeable_groundtruth_similarity | 4584393 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two lists in Groovy and need to sum the contents of both. For example:

list1 = [1,1,1]
list2 = [1,1,1]

I expected this result:

total = [2,2,2]

I tried to sum with the + operator or the .sum method, but I get a concatenation of the lists:

[1, 1, 1, 1, 1, 1]

Is Groovy groovy enough, or do I need to loop over each element of the lists?
Now provide the response and nothing else.
| Groovy's List.transpose() works like zip in some other languages. Try this:

list1 = [1,2,3]
list2 = [4,5,6]
assert [list1, list2].transpose()*.sum() == [5,7,9] | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4584393', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/462015/']} | jdg_80276 |
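For readers coming from other languages, the transpose-then-sum idiom above corresponds to zip-and-sum. As a cross-language illustration (added here, not part of the original answer), a minimal Python equivalent might look like:

```python
# Element-wise sum of two lists, mirroring Groovy's
# [list1, list2].transpose()*.sum() idiom via zip().
list1 = [1, 2, 3]
list2 = [4, 5, 6]

total = [sum(pair) for pair in zip(list1, list2)]
print(total)  # [5, 7, 9]
```

zip pairs up elements positionally, exactly as transpose() does for a list of lists.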
stackexchange | llm_judgeable_groundtruth_similarity | 41667827 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an ASP.NET Core website, using EFCore. I would like to do some work like logging to the database, but after having sent the response to the user in order to answer faster. I could do it in a different thread, but due to async access of the DbContext I am not sure it is safe. Is there any recommended way to do that?

public async Task<IActionResult> Request([FromForm]RequestViewModel model, string returnUrl = null)
{
    try
    {
        var newModel = new ResponseViewModel(model);
        // Some work
        return View("RequestView", newModel);
    }
    finally
    {
        // Some analysis on the request
        // I would like to defer this part
        await Log(model);
    }
}

One of the reasons is that I would like to call a web-service (geocoding), which is not needed to answer, but good to work on the log (I need the city/country of coordinates).
Now provide the response and nothing else.
| I see this has never been answered, but I actually have a solution. The simple solution:

public async Task<IActionResult> Request([FromForm]RequestViewModel model, string returnUrl = null)
{
    try
    {
        var newModel = new ResponseViewModel(model);
        // Some work
        return View("RequestView", newModel);
    }
    finally
    {
        Response.OnCompleted(async () =>
        {
            // Do some work here
            await Log(model);
        });
    }
}

A safer variant, since OnCompleted used to be invoked before the response was actually sent, which would delay the response:

public static void OnCompleted2(this HttpResponse resp, Func<Task> callback)
{
    resp.OnCompleted(() =>
    {
        Task.Run(() =>
        {
            try
            {
                callback.Invoke();
            }
            catch {}
        });
        return Task.CompletedTask;
    });
}

and call Response.OnCompleted2(async () => { /* some async work */ }) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41667827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4881677/']} | jdg_80277 |
stackexchange | llm_judgeable_groundtruth_similarity | 5488363 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently have the following code in the HomeController of my MVC project: public class HomeController : Controller{ public ActionResult Index() { MyDataContext dc = new MyDataContext(); IQueryable<Table1Data> j = from n in dc.Table1 select n; return View(j); } So that works okay, but now I want to pass a second table through to the same view. So I was thinking I should be able to do something like this: public class HomeController : Controller{ public ActionResult Index() { MyDataContext dc = new MyDataContext(); IQueryable<Table1Data> j = from n in dc.Table1 select n; IQueryable<Table2Data> l = from k in dc.Table2 select k; return View(j, l); } Is there a way to have the view accept two models like this or, alternatively, a way to merge the two result sets (the two tables are not linked in any way?
Now provide the response and nothing else.
| Yes there is, but not quite like that. The way to do what you wish to do is to create a custom ViewModel class. This class ( MyPageViewModel ) would have two (or more) properties, one for each of your objects. In your view, you would access them using Model.Table1Data and Model.Table2Data . A custom ViewModel class is very simple: public class MyPageViewModel{ public IQueryable<Table1Data> Table1Data { get; set; } public IQueryable<Table2Data> Table2Data { get; set; }} You view would need to be strongly typed to this custom ViewModel class. <%@ Page Title="MyPage" MasterPageFile="~/Application/Master Pages/Site.Master" Inherits="System.Web.Mvc.ViewPage(Of MyAppNamespace.MyPageViewModel)" %> Don't try to type that youself; easier to create a new view and check "strongly typed" view, and specify your New Custom Viewmodel class. Then your action Controller method would be: public class HomeController : Controller{ public ActionResult Index() { MyDataContext dc = new MyDataContext(); MyPageViewModel vm = new MyPageViewModel(); vm.Table1Data = from n in dc.Table1 select n; vm.Table1Data = from k in dc.Table2 select k; return View(vm); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5488363', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166556/']} | jdg_80278 |
stackexchange | llm_judgeable_groundtruth_similarity | 9333814 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been working with icefaces since its version 1.8 and now I am getting into the latest version (3.0). I think my problem is not version related, but the solution could be.I am programming an application which uses ICEpush. The model is like this:Several devices are connected to the server via java.net.socket and running in an independent thread and every message could trigger an ajax push event. When I use PushRenderer.Render("group") I get this:java.lang.RuntimeException: FacesContext is not present for thread Thread[172.17.1.49,5,main].When I use SessionRenderer.Render("group") it works as expected when I go the main page of the devices and I can see how the information changes on real time for everyone. The problem I find is that when I try to navigate to another view in my application I see that it is done in a very slow way and the buttons after I click on them do not work. When I refresh the page they start working again until I navigate to the page of the devices where I get the problem again.I have tried with the PortableRenderer but if I use PushRenderer.getPortableRenderer() I get a nullPointerException; and, if try to use PushRenderer.getPortableRenderer(FacesContext.getCurrentInstance()) I find that this method has changed its parameter to ServletContext that I do not know where I can find it. I hope you can help me with this. thanks in advance.
Now provide the response and nothing else.
| InsertOnSubmit is a Linq-to-SQL method and not in the Entity Framework. However, since our project was a conversion from Linq-to-SQL we have some extension methods that might help: public static class ObjectContextExtensions{ public static void SubmitChanges(this ObjectContext context) { context.SaveChanges(); } public static void InsertOnSubmit<T>(this ObjectQuery<T> table, T entity) { table.Context.AddObject(GetEntitySetName(table.Context, entity.GetType()), entity); } public static void InsertAllOnSubmit<T>(this ObjectQuery<T> table, IEnumerable<T> entities) { var entitySetName = GetEntitySetName(table.Context, typeof(T)); foreach (var entity in entities) { table.Context.AddObject(entitySetName, entity); } } public static void DeleteAllOnSubmit<T>(this ObjectQuery<T> table, IEnumerable<T> entities) where T : EntityObject, new() { var entitiesList = entities.ToList(); foreach (var entity in entitiesList) { if (null == entity.EntityKey) { SetEntityKey(table.Context, entity); } var toDelete = (T)table.Context.GetObjectByKey(entity.EntityKey); if (null != toDelete) { table.Context.DeleteObject(toDelete); } } } public static void SetEntityKey<TEntity>(this ObjectContext context, TEntity entity) where TEntity : EntityObject, new() { entity.EntityKey = context.CreateEntityKey(GetEntitySetName(context, entity.GetType()), entity); } public static string GetEntitySetName(this ObjectContext context, Type entityType) { return EntityHelper.GetEntitySetName(entityType, context); }} Where EntityHelper is as per the MyExtensions open source library . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9333814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/890326/']} | jdg_80279 |
stackexchange | llm_judgeable_groundtruth_similarity | 4315970 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Does anyone know the solution to this problem? I'm not sure if it's asking to find the cardinality first, and whether I should use the additive counting principle or the multiplicative one? Would help me a lot, thanks! Given is a standard $52$ -card poker game. A hand is a non-ordered selection of five Cards from the game. Consider the following question and the answer to it: Question: How many hands are there that have all four colors? Answer: We choose a card from each color, so there are $(13^4)$ possibilities for this. $48$ possibilities remain when we choose the fifth card. So there are a total of $(13^4) \times 48$ hands that contain all four suits. Is the solution correct? If not, what is wrong with the answer and what is the correct solution?
Now provide the response and nothing else.
| There will be double-counting in your result. In fact, every choice is counted twice: suppose two of your cards are chosen to be e.g. diamonds. That choice is counted twice: once with the first diamond card chosen as you did your first choice of four cards (and the second chosen as the "fifth" card), and once the other way round. Thus, the total number is half of what you've got, i.e. $$\frac{1}{2}\left(13^4\times 48\right)=685,464$$ You can see this also the following way: first pick a suit ( $4$ ways). Pick two cards from that suit ( $13\choose 2$ ways). Pick one card from each of the three remaining suits ( $13^3$ ways). The total is: $$4{13\choose 2}\times 13^3=685,464$$ which is the same as we got before. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4315970', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/997440/']} | jdg_80280 |
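The count above is easy to verify by brute force. The following Python sketch (an added illustration, not part of the original answer; it takes a few seconds to enumerate all $\binom{52}{5} = 2{,}598{,}960$ hands) counts the hands containing all four suits:

```python
from itertools import combinations

# Represent cards as 0..51; card i belongs to suit i // 13.
count = sum(
    1
    for hand in combinations(range(52), 5)
    if len({card // 13 for card in hand}) == 4
)

print(count)              # 685464
print((13**4 * 48) // 2)  # 685464, matching the corrected formula
```

Both the corrected formula $\frac{1}{2}(13^4 \times 48)$ and the alternative $4\binom{13}{2} \times 13^3$ agree with the exhaustive count.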
stackexchange | llm_judgeable_groundtruth_similarity | 20993 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was recently thinking about keeping my computer secure from physical access, and I decided that the safest way to keep my data under lock and key was to do just that: create a lock and key out of a USB drive. My idea is this: a USB drive would act as a "key", preventing login unless it was inserted. Is this possible, and can it be done for a reasonable price? If so, how could I manage that, and what sort of programs should I look into? I am running Mac OS X Mountain Lion, if it is of any consequence.
Now provide the response and nothing else.
| I don't think he is referring to the USB-key solution as a sole authentication method, but as an addition to the normal username-password authentication. There are two methods I can think of in order to implement this:

1. Encrypt the entire hard drive, and put the key-file on a USB drive.
2. Leave the disk unencrypted and use the USB drive for login purposes only.

Let's review:

1. Use truecrypt or dm-crypt (or their Mac OS X equivalents) in order to encrypt the entire hard drive using a key-file, storing it on a USB drive. The obvious and major drawback: losing the USB drive will cause the loss of the entire filesystem.
2. Use solutions like pam_usb ( http://pamusb.org/ )

Do not forget to take remote login considerations into account! | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/20993', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/5877/']} | jdg_80281 |
stackexchange | llm_judgeable_groundtruth_similarity | 34327545 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to time a code I've got in C++. I have an inner and an outer loop that I want to time separately, but at the same time. For some reason when I do this one of the instances returns 1.84467e+13 and always this exact number. Why is this happening? Here is a minimum working example that replicates the effect on my machine:

#include <iostream>
#include <stdlib.h>
#include <time.h>

using namespace std;

int main()
{
    long int i, j;
    clock_t start, finish, tick, tock;
    double a = 0.0;
    double adding_time, runtime;

    start = clock();
    for(i=0; i<10; i++)
    {
        a = 0.0;
        tick = clock();
        for(j=0; j<10000000; j++)
        {
            a += 1;
        }
        tock = clock();
        adding_time = (double)(tick - tock)/CLOCKS_PER_SEC;
        cout << "Computation time:" << adding_time << endl;
    }
    finish = clock();
    runtime = (double)(finish - start)/CLOCKS_PER_SEC;
    cout << "Total computation time:" << runtime << endl;
}
Now provide the response and nothing else.
| Your clock_t is apparently an unsigned 64-bit type. You're taking tick - tock , where tock was measured after tick , so if there's any difference between the two at all, it's going to try to produce a negative number--but since it's an unsigned type, that's wrapping around to become something close to the largest number that can be represented in that type. Obviously, you really want to use tock-tick instead. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34327545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4636048/']} | jdg_80282 |
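To see how the question's 1.84467e+13 arises, one can model the unsigned 64-bit subtraction directly. The Python sketch below is an added illustration (assuming a 64-bit unsigned clock_t and CLOCKS_PER_SEC = 1,000,000, which are typical but platform-dependent; the tick/tock values are made up):

```python
# Model clock_t as an unsigned 64-bit integer: a "negative" difference
# wraps around to a value near 2**64.
CLOCKS_PER_SEC = 1_000_000  # typical value on Linux/glibc

tick = 5_000_000   # earlier clock() reading
tock = 5_250_000   # later clock() reading, so tick - tock < 0

wrapped = (tick - tock) % 2**64      # what unsigned subtraction produces
adding_time = wrapped / CLOCKS_PER_SEC
print(adding_time)  # ~1.84467e+13, matching the question's constant output
```

Dividing a value near 2**64 by a CLOCKS_PER_SEC of 10**6 lands at roughly 1.84467e+13 regardless of the actual loop timings, which is why the printed number never changes.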
stackexchange | llm_judgeable_groundtruth_similarity | 50942 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Solomon LM1112SYL 4x16 LCD. The display has 16 pins and the datasheet ( http://www.allelectronics.com/mas_assets/spec/LCD-108.pdf ) shows that pins 15 & 16 are for the backlight power supply. When I power up the LCD without having any connections made to pins 15 & 16, the backlight is on. I would like to have the ability to dim or power off the backlight. NOTE: clabacchio has another post about this module at LCD Module Datasheet Or Other Info . I see that on the back of the module it has a block silk-screened on it which appears to be for the backlight:

VDD/VSS  JI,KI
15K/16A  JK
15A/16K  JA

I believe what this information is telling me is that JI is VDD and KI is VSS. If the JK pads are shorted, then pin 15 will be the cathode and pin 16 will be the anode to the backlight LEDs. If the JA pads are shorted, then pin 15 will be the anode and pin 16 will be the cathode to the backlight LEDs. I have not followed the traces yet to confirm this. Does anyone know if my assumption is correct? I have not tried it yet as I don't want to ruin anything, but I may give it a go once I trace the connections and make certain there will be no ill effects. Update: Thanks, Fido UK, for pointing out what should have been obvious but escaped me for a time. For some reason I was thinking that there was a via or something where those blobs of solder were. I took my solder wick out and unsoldered the blobs and found jumper pads underneath. Works great now.
Now provide the response and nothing else.
| The backlight is being powered from the supply to the LCD controller - pins 1 & 2; this is from the solder blobs marked JI & KI near R9. To run the backlight from pins 15 & 16 you need to unsolder the JI & KI blobs first, then solder the JA or JK links.

If you want pin 15 to be +5v use both JA links (pin 16 will be ground).
If you want pin 16 to be +5v use both JK links (pin 15 will be ground).

J stands for jumper - I means internal, K means pin 15 is the Cathode (I know, but it is standard to use K as a short form of Cathode), A means pin 15 is the Anode... | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/50942', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/16782/']} | jdg_80283 |
stackexchange | llm_judgeable_groundtruth_similarity | 4162811 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $X = \{(x,y) \in \mathbb{R}^2 : y = 0\} \cup \{(x,y)\in \mathbb{R}^2: -x+y = 1\}$ . We want to prove that there is no continuous onto map $$A:=X \cup \{(x,y) \in \mathbb{R}^2: x=0\} \overset{f}{\longrightarrow} B:=X \cup \{(x,y) \in \mathbb{R}^2 : y=1\}$$ considering both $A$ and $B$ equipped with the Euclidean topology. Our approach to the problem has been to remove the point $p = (0,1)$ , which is the intersection of two lines, from the domain, and observe that this leaves $A \setminus \{p\}$ with three path-connected components. Then, regardless of where $f(p)$ lands in the codomain of the function, the number of connected components of $B \setminus \{f(p)\}$ is not preserved. Our main doubt is whether we can use this fact to prove that such a function $f$ does not exist. In other words, is the number of path-connected components preserved by a continuous and surjective function between topological spaces?

EDIT: As mathcounterexamples.net has suggested in his answer, we cannot use the reasoning that $f$ preserves the number of path-connected components of $A$ and $B$ . We are now wondering what other method of proof we could use to solve the problem. Could it be, maybe, something related to the fact that $A$ contains a closed path (triangle) and $B$ contains none? Thanks in advance for your help and answers.
Now provide the response and nothing else.
| The number of (path)-connected components is indeed not preserved under a continuous surjective map. Let's consider $f : X \to C$ where $X = \{(x,0) \in \mathbb R^2 \mid x \in \mathbb R\}$ , $C \subseteq \mathbb R^2$ is the circle of radius one centered on the origin and $f(x) = (\cos x, \sin x)$ . $X$ and $C$ are connected. $f$ is onto and continuous. However $X \setminus \{(0,0)\}$ has two (path)-connected components while $f[X \setminus \{(0,0)\}] = C$ has only one. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4162811', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/601608/']} | jdg_80284 |
stackexchange | llm_judgeable_groundtruth_similarity | 306370 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like help solving the following analytically $$\frac{\partial^2 u(x)}{\partial x^2} = - p(x)$$ where $ p(x) = \begin{cases} 3 & : 0 < x < 0.4\\ x & : 0.5 < x < 1\\ 0 & : else \end{cases} $ and $u(0) = u(1) = 0$. I'm stuck on this; I have googled and looked at textbooks, but what I would like is a worked example. (Not homework per se; something I need to understand before I attempt the homework!)
Now provide the response and nothing else.
| Here is the approach using Green's function $G(x,y)$, which by definition is the solution of $-u''=\delta_y$ with zero boundary values, where $y\in (0,1)$ is fixed and $\delta_y$ is the Dirac delta . The presence of $\delta_y$ says that $u'(y+)-u'(y-)=-1$, and $u'$ is piecewise constant elsewhere. Hence,$$u(x)=\begin{cases} ax,\quad &x<y \\ (a-1)(x-1), \quad &x>y\end{cases}\tag{1}$$Since we don't want discontinuities in $u$ (they would contribute $\delta'_y$ to $u''$, which is a higher order singularity than we want), we must have $ay=(a-1)(y-1)$. The only solution is $a=1-y$. Thus, $$G(x,y)=\begin{cases} (1-y)x,\quad &x<y \\ y(1-x), \quad &x>y\end{cases}\tag{2}$$or in more compact (and obviously symmetric ) form, $$G(x,y)=\min(x,y)-xy\tag{3}$$This is how $G(x,1/3)$ looks: Returning to the general problem $-u''=p$, we can now write down its solution at once:$$u(x)=\int_0^1 G(x,y)\,p(y)\,dy \tag{4}$$ To evaluate the integral (4) for the function in your example, you should consider the cases $x\le 0.4$, $0.4\le x\le 0.5$ and $x\ge 0.5$ separately. If $x\le 0.4$: $$u(x)=\int_0^{x} y(1-x)\,3\,dy + \int_x^{0.4} x(1-y)\,3\,dy + \int_{0.5}^1 x(1-y)\,y\,dy \tag{5}$$ If $0.4\le x\le 0.5$: $$u(x)=\int_0^{0.4} y(1-x)\,3\,dy + \int_{0.5}^1 x(1-y)\,y\,dy \tag{6}$$ If $x\ge 0.5$: $$u(x)=\int_0^{0.4} y(1-x)\,3\,dy + \int_{0.5}^x y(1-x)\,y\,dy+\int_{x}^1 x(1-y)\,y\,dy \tag{7}$$ And this is the graph of the solution: Note that it is concave, and flat between $0.4$ and $0.5$ as it should be. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/306370', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/57142/']} | jdg_80285 |
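As a numerical sanity check on formula (4), the Python sketch below (added for illustration; the grid size and the hand-rolled trapezoid rule are arbitrary choices) evaluates $u(x)=\int_0^1 G(x,y)\,p(y)\,dy$ for the question's piecewise $p$ and confirms the boundary values and that $u$ has zero second derivative on $(0.4, 0.5)$, where $p$ vanishes:

```python
import numpy as np

def G(x, y):
    # Green's function for -u'' on [0, 1] with zero boundary values:
    # G(x, y) = min(x, y) - x*y
    return np.minimum(x, y) - x * y

def p(y):
    # The question's load: 3 on (0, 0.4), y on (0.5, 1), 0 elsewhere.
    return np.where((y > 0) & (y < 0.4), 3.0,
                    np.where((y > 0.5) & (y < 1), y, 0.0))

ys = np.linspace(0.0, 1.0, 200_001)

def u(x):
    # u(x) = integral of G(x, y) p(y) dy over [0, 1], trapezoid rule.
    f = G(x, ys) * p(ys)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(ys)))

print(abs(u(0.0)), abs(u(1.0)))         # both ~0: boundary conditions hold
print(u(0.42) + u(0.48) - 2 * u(0.45))  # ~0: u'' = 0 where p = 0
```

The vanishing second difference shows the solution is a straight segment on the interval where the load is zero, consistent with $-u''=p=0$ there.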
stackexchange | llm_judgeable_groundtruth_similarity | 4753930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got an array of hashes representing objects as a response to an API call. I need to pull data from some of the hashes, and one particular key serves as an id for the hash object. I would like to convert the array into a hash with the keys as the ids, and the values as the original hash with that id. Here's what I'm talking about:

api_response = [
  { :id => 1, :foo => 'bar' },
  { :id => 2, :foo => 'another bar' },
  # ..
]
ideal_response = {
  1 => { :id => 1, :foo => 'bar' },
  2 => { :id => 2, :foo => 'another bar' },
  # ..
}

There are two ways I could think of doing this:

1. Map the data to the ideal_response (below)
2. Use api_response.find { |x| x[:id] == i } for each record I need to access.
3. A method I'm unaware of, possibly involving a way of using map to build a hash, natively.

My method of mapping:

keys = data.map { |x| x[:id] }
mapped = Hash[*keys.zip(data).flatten]

I can't help but feel like there is a more performant, tidier way of doing this. Option 2 is very performant when there are a very minimal number of records that need to be accessed. Mapping excels here, but it starts to break down when there are a lot of records in the response. Thankfully, I don't expect there to be more than 50-100 records, so mapping is sufficient. Is there a smarter, tidier, or more performant way of doing this in Ruby?
Now provide the response and nothing else.
| Ruby <= 2.0 > Hash[api_response.map { |r| [r[:id], r] }]#=> {1=>{:id=>1, :foo=>"bar"}, 2=>{:id=>2, :foo=>"another bar"}} However, Hash::[] is pretty ugly and breaks the usual left-to-right OOP flow. That's why Facets proposed Enumerable#mash : > require 'facets'> api_response.mash { |r| [r[:id], r] }#=> {1=>{:id=>1, :foo=>"bar"}, 2=>{:id=>2, :foo=>"another bar"}} This basic abstraction (convert enumerables to hashes) was asked to be included in Ruby long ago, alas, without luck . Note that your use case is covered by Active Support: Enumerable#index_by Ruby >= 2.1 [UPDATE] Still no love for Enumerable#mash , but now we have Array#to_h . It creates an intermediate array, but it's better than nothing: > object = api_response.map { |r| [r[:id], r] }.to_h | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4753930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/203130/']} | jdg_80286 |
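For comparison across languages (an illustration added here, not part of the original answer), the same index-by-id transformation in Python is a dict comprehension, playing the role of Ruby's map { ... }.to_h:

```python
# Index a list of records by their "id" field, mirroring the Ruby
# api_response.map { |r| [r[:id], r] }.to_h idiom.
api_response = [
    {"id": 1, "foo": "bar"},
    {"id": 2, "foo": "another bar"},
]

by_id = {record["id"]: record for record in api_response}
print(by_id[2]["foo"])  # another bar
```

As with the Ruby versions, lookup afterwards is O(1) per id instead of a linear find per record.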
stackexchange | llm_judgeable_groundtruth_similarity | 2217868 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Basically my question is whether the typical mathematician would consider it important to remember purely computational tricks like "trig substitution" which only help one solve obscure integrals that don't ever seem to be of interest in pure math. I can definitely understand things like integration by parts or even partial fractions but I took Calc II last semester and I have already forgotten most of the material. Can the typical mathematician pull things like this out of his rear or does one have to teach to remember things like that?
Now provide the response and nothing else.
| Talking about computational tricks more generally than just integrals, it wildly depends on the field of mathematics. Some fields have a lot of things that turn out to be fundamental but that, to an outsider, seem like computational tricks more than deep theorems, or even just very useful tools. Most fields will have something. Just to name a few examples:

- Computing continued fractions and rational approximations quickly is a very useful skill if you work in the right subfield of number theory, but is a curiosity to most mathematicians.
- If you study fields related to Complex Analysis, especially from a more geometric view-point, you'll develop massive skill at evaluating wonky integrals using clever contours.
- If you're a combinatorialist, you might spend a lot of time calculating the values of summations that require arcane tricks and clever substitutions.
- As has been mentioned in the comments, trig substitutions are secretly super important, especially things like $\tan(t/2)$.

So what about your situation? First of all, integration by parts is an exceptionally important theorem in general. I personally think they give people the wrong formula in school and that it should be written as $$\int u\,dv +\int v\,du =uv\quad \text{a.k.a.}\quad u\frac{dv}{dx}+v\frac{du}{dx}=\frac{d}{dx}(uv)$$ because once I realized that integration by parts and the product rule were the same thing, remembering the formula and also figuring out how to do it in concrete instances became way better. Most mathematicians don't have a whole list of formulae memorized, but rather know how to derive them, because they understand the things that are hiding behind the curtain. So learn that. Learn how to think about the problems, and the particular tricks will be five minutes of scratch paper away whenever you need them. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2217868', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/338912/']} | jdg_80287 |
stackexchange | llm_judgeable_groundtruth_similarity | 2038570 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know the answer is supposed to be $$a_{2n} = (2n-1) a_{2n-2}$$ Can someone please explain why we shouldn't have $\binom{2n}{2}$ in place of $2n-1$? Doesn't it matter which two people are paired off out of the $2n$ people, hence generating a different case each time for the remaining $(2n-2)$ people?
Now provide the response and nothing else.
| Consider the following three theorems:

Theorem ( Soundness ) Let $T$ be a first order theory in a given language $\mathcal L$ and let $\phi$ be an $\mathcal L$-formula. If there is a formal proof of $\phi$ from $T$ (in symbols $T \vdash \phi$), then for every model $\mathcal M$ of $T$ (i.e. $\mathcal M \models \psi$ for all $\psi \in T$) we have $\mathcal M \models \phi$. In simpler words: If $\phi$ is provable from $T$, then it is true in every model of $T$.

Theorem ( Completeness ) Let $T$ be a first order theory in a given language $\mathcal L$ and let $\phi$ be an $\mathcal L$-formula. If for every model $\mathcal M$ of $T$ we have that $\mathcal M \models \phi$, then $T \vdash \phi$, i.e. there is a formal proof of $\phi$ from $T$. In simpler words: If $\phi$ is true in every model of $T$, then there is a formal proof $T \vdash \phi$ witnessing this fact. (This is a special property of first order logic. There are other logics that don't satisfy completeness.)

These two theorems combined are very promising: In a sense Soundness says that everything provable is true. On the other hand Completeness says that everything universally true always has a formal proof. Great! However, there is one case not handled yet. What if $\phi$ is true in some models of $T$ and false in other models of $T$? Such a formula $\phi$ can't have a formal proof and also its negation $\neg \phi$ cannot be proved. It was hoped that this doesn't occur for nice theories $T$. For example, if $T$ is complete, this never occurs. A (somewhat shocking - at least at that time -) result due to Gödel says that any natural theory $T$ has a formula $\phi$ such that $T \not \vdash \phi$ and $T \not \vdash \neg \phi$. In other words: If $T$ is consistent, then there are models $\mathcal M, \mathcal N$, such that $\mathcal M \models T \cup \{ \phi \}$ and $\mathcal N \models T \cup \{ \neg \phi \}$.

More precisely:

Theorem ( Incompleteness ) Let $T$ be a recursively enumerable, consistent theory that can interpret Peano Arithmetic. Then there is a statement $\phi$ such that $T \not \vdash \phi$ and $T \not \vdash \neg \phi$. In fact, this is true for $\phi \equiv \operatorname{Con}(T)$.

Hence there is a true statement that $T$ cannot decide. People often emphasize how bad this result is when it comes to Hilbert's program or the quest for a formal foundation of mathematics. While their concern is justified to some extent, I personally view the Incompleteness Theorem in another light: The proof of the Incompleteness Theorem boils down to a (very clever) fixed point argument that works precisely because $T$ is recursively enumerable and can code formal proofs. Take either of those properties away and it doesn't work anymore. To me this says that consistent theories of a decent proof-theoretical complexity can't be coded as very simple subsets of $\mathbb N$. In other words: Any reasonable coding of such a theory has a nontrivial Turing degree. From a modern point of view this isn't surprising at all - it's to be expected.

Finally, let us compare these three theorems in light of your question. Soundness and Completeness tell us that our notion of formal proof is correct (both mathematically and philosophically). We can prove precisely those statements that $T$ implies (model-theoretically), and that's exactly what we want. Incompleteness, on the other hand, tells us that all consistent, sufficiently complex (in terms of proof theory) theories that can be enumerated by a Turing machine have a (true) statement that they do not decide. Or - in other words - any consistent, sufficiently complex and complete theory cannot be enumerated by a Turing machine. This really says that any reasonable coding of such a theory must be pretty complex.
| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2038570', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/366375/']} | jdg_80288 |
stackexchange | llm_judgeable_groundtruth_similarity | 381756 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
This last week I spent all effort into learning Puppet. Now I suffer from a mental buffer overflow and little confidence of ever being able to tame this beast. I came across many annotated examples but due to their countless variations I fail to discern between recommended (recent) Puppet style and conventions, and ad hoc "works for me" approaches. I can't stand it because it seems to be about basic level stuff. So. Using Puppet to manage groups and users, users' primary group equal to their own username, other groups could be lan for lan logins, wheel for admins, shell for users with a shell on arbitrary nodes, mail for users, daemons for various daemons. Admin logins will be on all nodes and to make things worse a lan login could be a shell login too. From what I understand it's okay to define a user multiple times if you make use of virtual definitions that are realised at some point. Sounds fab, so how does that work with multiple groups for a user? Say Bob can use both LAN nodes and node beastie.wan; is his login thebob then defined two times, in lanusers.pp with groups => ["lan"] and in shellusers.pp with groups => ["shell"] ? What if Bob wants his lan password to be separate from his shell password? The code I currently use has no virtual definitions, users are just hard coded single inclusions. At one point I came across an example using virtuals and this is where I got stuck because I don't understand how to expand the code so that Puppet creates a primary group and the required groups I defined first and then joins the user in those groups. Right. Please cluebat me properly.
Now provide the response and nothing else.
| Now I suffer from a mental buffer overflow and little confidence of ever being able to tame this beast. First: Relax. I've learned that, when you're new to something with a learning curve such as Puppet, it is pretty easy to become overwhelmed and not be able to get much done. is his login thebob then defined two times, in lanusers.pp with groups => ["lan"] and in shellusers.pp with groups => ["shell"]? Nope. Virtually define it in one place (maybe users.pp ) with groups => ['shell', 'lan',] . On the nodes, realize the users you need. For example, if for node beamin we want all shell users: node beamin { Account <| groups == 'shell' |>} What if Bob wants his lan password to be separate from his shell password? Then Bob should probably get 2 different accounts with different login names. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/381756', 'https://serverfault.com', 'https://serverfault.com/users/93703/']} | jdg_80289 |
stackexchange | llm_judgeable_groundtruth_similarity | 1853 |
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The English Language is not well-suited to talking about artificial intelligence, which makes it difficult for humans to communicate to each other about what an AI is actually "doing". Thus, it may make more sense to use "human-like" terms to describe the actions of machinery, even when the internal properties of the machinery do not resemble the internal properties of humanity. Anthropomorphic language had been used a lot in technology (see the Hacker's Dictionary definition of anthropomorphization , which attempts to justify computer programmers' use of anthromporhic terms when describing technology), but as AI continues to advance, it may be useful to consider the tradeoffs of using anthropomorphic language in communicating to both technical audiences and non-technical audiences. How can we get a good handle on AI if we can't even describe what we're doing? Suppose I want to develop an algorithm that display a list of related articles. There are two ways by which I can explain how the algorithm works to a layman: Very Anthropomorphic - The algorithm reads all the articles on a website, and display the articles that are very similar to the article you are looking at. Very Technical - The algorithm converts each article into a "bag-of-words", and then compare the "bag-of-words" of each article to determine what articles share the most common words. The articles that share the most words in the bags are the ones that are displayed to the user. Obviously, #2 may be more "technically correct" than #1. By detailing the implementation of the algorithm, it makes it easier for someone to understand how to fix the algorithm if it produces an output that we disagree with heavily. But #1 is more readable, elegant, and easier to understand. It provides a general sense of what the algorithm is doing, instead of how the algorithm is doing it. 
By abstracting away the implementation details of how a computer "reads" the article, we can then focus on using the algorithm in real-world scenarios. Should I, therefore, prefer to use the anthropomorphic language as emphasized by Statement #1? If not, why not? P.S.: If the answer depends on the audience that I am speaking to (a non-technical audience might prefer #1, while a technical audience may prefer #2), then let me know that as well.
Now provide the response and nothing else.
| If clarity is your goal, you should attempt to avoid anthropomorphic language - doing so runs the danger of misleading even yourself about the capabilities of the program. This is a pernicious trap in AI research, with numerous cases where even experienced researchers have ascribed a greater degree of understanding to a program than is actually merited. Douglas Hofstadter describes the issue at some length in a chapter entitled "The Ineradicable Eliza Effect and Its Dangers", and there is also a famous paper by Drew McDermott, entitled "Artificial Intelligence meets Natural Stupidity". Hence, in general one should make a particular effort to avoid anthropomorphism in AI. However, when speaking to a non-technical audience, 'soundbite' descriptions are (as in any complex discipline) acceptable provided you let the audience know that they are getting the simplified version. | {} | {'log_upvote_score': 4, 'links': ['https://ai.stackexchange.com/questions/1853', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/181/']} | jdg_80290
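The question's "very technical" description is concrete enough to sketch. A minimal bag-of-words overlap comparison in Python (illustrative only — the function names are made up, and a real recommender would at least weight terms, e.g. with TF-IDF):

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    # "Reads" the article: lowercase it and count each word.
    return Counter(text.lower().split())

def shared_words(a: str, b: str) -> int:
    # Multiset intersection: how many words the two bags share.
    return sum((bag_of_words(a) & bag_of_words(b)).values())

def most_related(article: str, others):
    # The candidate whose bag shares the most words with ours.
    return max(others, key=lambda other: shared_words(article, other))
```

Note how naturally the anthropomorphic gloss ("reads the articles, shows the most similar ones") maps onto three short functions — which is exactly the trade-off between the two registers that the question is probing.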
stackexchange | llm_judgeable_groundtruth_similarity | 24024702 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to decrypt a password. The password is encrypted with password_hash function. $password = 'examplepassword';$crypted = password_hash($password, PASSWORD_DEFAULT); Now, let's assume that $crypted is stored in a database (there's a "users" table, with usernames, passwords, etc) and I need to do a login: I have to see if the password entered by the user matches the encrypted password stored in the database. This is the sql code... $sql_script = 'select * from USERS where username="'.$username.'" and password="'.$inputpassword.'"'; ...but $inputpassword is not encrypted, so it's not equal to what is stored in the password field of the table users... So, there's a function to decrypt after the use of password_hash ? Or should I change my encrypt method? Or what else?
Now provide the response and nothing else.
| Bcrypt is a one-way hashing algorithm, you can't decrypt hashes. Use password_verify to check whether a password matches the stored hash: <?php// See the password_hash() example to see where this came from.$hash = '$2y$07$BCryptRequires22Chrcte/VlQH0piJtjXl.0t1XkA8pw9dMXTpOq';if (password_verify('rasmuslerdorf', $hash)) { echo 'Password is valid!';} else { echo 'Invalid password.';} In your case, run the SQL query using only the username: $sql_script = 'SELECT * FROM USERS WHERE username=?'; And do the password validation in PHP using a code that is similar to the example above. The way you are constructing the query is very dangerous. If you don't parameterize the input properly, the code will be vulnerable to SQL injection attacks. See this Stack Overflow answer on how to prevent SQL injection. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24024702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3569234/']} | jdg_80291 |
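PHP's password_hash/password_verify pair has no exact stdlib counterpart in Python (bcrypt itself needs a third-party package), but the same verify-don't-decrypt pattern can be sketched with PBKDF2 from hashlib. The "iterations$salt$digest" storage format below is an assumption for illustration, not PHP's format:

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 100_000) -> str:
    # One-way: store the salt and digest; there is nothing to "decrypt" later.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    # Re-hash the candidate with the stored salt and compare in constant time.
    iterations, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))
```

As in the answer: fetch the row by username only (with a parameterized query), then run the verify step in application code.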
stackexchange | llm_judgeable_groundtruth_similarity | 42015397 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It appears as though the current version of the Google Calendar API is v3, which in the past I believe has supported both OAuth and simple API Keys for a JavaScript/Browser implementation. As of 2/2/17: is it possible to still use the Google Calendar API with an API Key, or is required that any access (even access to a public calendar) to the API go through OAuth2.0? According to the Google Calendar API, "Your application must use OAuth 2.0 to authorize requests" ( https://developers.google.com/google-apps/calendar/auth ), but my API request is opening a public calendar and does not require a sign-in as none of the data is private.
Now provide the response and nothing else.
| Well, it seems that the Google Calendar now needs an authorized tokens to access its own API. It is stated in their documentation that every request your application sends to the Google Calendar API must include an authorization token. The token also identifies your application to Google. I found here in this question that you cannot access your calendar information using API Key. API keys (or simple API access key) are not authorized tokens and can only be used for some API calls such as a Google search query etc; API keys will not let you access any user specific data, which I am assuming is your objective through this calendar application. I tried to access a Google calendar request by using an API key, but I get an error 401 "Login Required" https://www.googleapis.com/calendar/v3/users/me/calendarList/primary?key=YOUR_API_KEY To verify that the API_KEY is the problem, I try to use it in a Google Maps request. https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap And I got here a successful response. I hope this information helps you. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42015397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2730985/']} | jdg_80292 |
stackexchange | llm_judgeable_groundtruth_similarity | 16609528 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I check if a string starts with a capital letter, in cases where the first letter might not be in the A-Z range but from other languages as well, AND simultaneously that the first character is not a number? Examples: "This is string" - match; "this is string" - not match; "5 not good" - not match; "Увеличи starts capital" - match; "мащабиране no capital" - not match. In PHP: if (preg_match('/??/i', $str) ) { echo 'yeeee haaa';}
Now provide the response and nothing else.
| Use this regex: preg_match('#^\p{Lu}#u', $str) \p{Lu} will match any character that has Unicode character property of uppercase letter. Demo on regex101 (please ignore the flags m and g , they are for demonstration purpose only) When you are dealing with a Unicode string (or more specifically, in the case of preg_ functions, the Unicode string must be in UTF-8 encoding), you must always use the u flag to make the engine treat the input string and the pattern with character semantics. Otherwise, by default, preg_ functions treat the pattern and input string as an array of bytes and produce an unexpected result for characters outside the ASCII range. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16609528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/725097/']} | jdg_80293 |
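For comparison, Python's stdlib re module does not support \p{Lu}, but the same Unicode-aware check can be sketched with str.isupper(), which consults the full Unicode category data rather than just A-Z (a stand-in illustration, not the PCRE approach from the answer):

```python
def starts_with_capital(s: str) -> bool:
    # True only when the first character is an uppercase letter in any
    # script; digits and punctuation have no case, so they return False.
    return bool(s) and s[0].isupper()
```

As with the PCRE u modifier, the key point is operating on decoded characters, not raw bytes.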
stackexchange | llm_judgeable_groundtruth_similarity | 186370 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $J$ is a singular matrix, then $(J^T J)^{-1}$ is singular too. I'm trying to prove that $J^T J+\lambda I$ is a singular matrix, where $I$ denotes identity matrix. Any suggestions please?Thanks
Now provide the response and nothing else.
| Let $x$ be nonzero such that $Jx=0$. Then $J^\mathrm{T} Jx=0$, so $J^\mathrm{T} J$ is singular, and $(J^\mathrm{T} J)^{-1}$ does not even exist. As for the second sentence in your question, $J$=the zero matrix and its transpose are certainly singular, but $J^\mathrm{T} J+\lambda I$ will certainly not be if $\lambda\neq 0$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/186370', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24958/']} | jdg_80294 |
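Both claims in the answer are easy to check numerically. A pure-Python 2×2 sketch (no numpy assumed; for real J, the Gram matrix J^T J is positive semidefinite, so any λ > 0 makes J^T J + λI positive definite and hence invertible):

```python
def det2(m):
    # Determinant of a 2x2 matrix given as nested lists.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def gram_plus_lambda(j, lam):
    # Compute J^T J + lam * I for a 2x2 matrix J.
    jt_j = [
        [sum(j[k][r] * j[k][c] for k in range(2)) for c in range(2)]
        for r in range(2)
    ]
    return [[jt_j[r][c] + (lam if r == c else 0) for c in range(2)]
            for r in range(2)]
```

A singular J gives det(J^T J) = 0, while adding λI with λ > 0 restores invertibility.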
stackexchange | llm_judgeable_groundtruth_similarity | 2575234 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $k$ be a field; I wonder how to find the spectrum of $k[x]$ localized at $(x)$. I know the prime ideals of $k[x]$ are the ideals generated by irreducible polynomials, but I have no idea what the prime ideals should look like in $k[x]_{(x)}$. Any solution or reference will be greatly appreciated!
Now provide the response and nothing else.
| Hint: For any commutative ring $R$ and prime ideal $P$, the prime ideals of $R$ localized at $P$ are in bijection with the primes of $R$ that are contained in $P$. So your question can be answered by instead looking at the prime ideals of $k[x]$ contained in $(x)$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2575234', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/185631/']} | jdg_80295 |
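Following the hint to its end (using the standard fact, not stated in the question, that $k[x]$ is a PID), the conclusion can be recorded as:

```latex
% Primes of the localization correspond to primes of k[x] contained in (x).
% In the PID k[x], a nonzero prime (f) with f irreducible satisfies
% (f) \subseteq (x) iff x \mid f, i.e. (f) = (x).  Hence only (0) and (x)
% survive, and
\operatorname{Spec} k[x]_{(x)} \;=\; \bigl\{\, (0),\ (x)\,k[x]_{(x)} \,\bigr\},
% a two-point space: the generic point (0) and the unique closed point,
% the maximal ideal generated by x.
```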
stackexchange | llm_judgeable_groundtruth_similarity | 521334 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I set the initial value of a databound drop down list in ASP.NET? For instance, I want the values, but the first value to display should be -- Select One ---, with a null value.
Now provide the response and nothing else.
| I think what you want to do is this: <asp:DropDownList ID="DropDownList1" runat="server" AppendDataBoundItems="true"> <asp:ListItem Text="--Select One--" Value="" /> </asp:DropDownList> Make sure the 'AppendDataBoundItems' is set to true or else you will clear the '--Select One--' list item when you bind your data. If you have the 'AutoPostBack' property of the drop down list set to true you will have to also set the 'CausesValidation' property to true then use a 'RequiredFieldValidator' to make sure the '--Select One--' option doesn't cause a postback. <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server" ControlToValidate="DropDownList1"></asp:RequiredFieldValidator> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/521334', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4241/']} | jdg_80296 |
stackexchange | llm_judgeable_groundtruth_similarity | 15900860 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is an example webpage in which the school id is found: <html><body><div><table style="height:2px"><tr><td style="height:10px"><div id="school"> Currently I have this JavaScript code; it detects the school id well and gives me the DOM path to that div element. But now I also want to check whether there is a height style property and, if there is a value, show that value with alert(y). var path = [];var el = document.getElementById('school');do { var qq = path.unshift(el.nodeName); if (el.nodeName.currentStyle){ var y = qq.currentStyle["Height"]; alert(y);} else {var y = document.defaultView.getComputedStyle(el.nodeName,null).getPropertyValue("Height");alert(y);}} while ((el.nodeName.toLowerCase() != 'html') && (el = el.parentNode)) But that doesn't work. Question: I need help reading all the current height values (along the path to the div id="school" element). Normally it should alert me these values: 2px, 10px. If you need more information, just comment below.
Now provide the response and nothing else.
| After searching for days, here is a working solution to return images to all other applications (tested for GMail and WhatsApp). First, you need to set an intent-filter in your AndroidManifest.xml (Inside application > activity ). This will list your application when other apps are calling for this intent (like when requesting an image). Note : WhatsApp is using the action.PICK - intent. Adding all intent-filters below even though will provide great compatibility with other apps. <intent-filter> <action android:name="android.intent.action.PICK" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.OPENABLE" /> <data android:mimeType="image/*" /> </intent-filter> <intent-filter> <action android:name="android.intent.action.SEND" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.OPENABLE" /> <data android:mimeType="image/*" /> </intent-filter> <intent-filter> <action android:name="android.intent.action.GET_CONTENT" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.OPENABLE" /> <data android:mimeType="image/*" /> </intent-filter> The second thing you need to care for is responding to an idling intent.This should consist of two parts: First you should check whether your application has been executed to return an image or if its run all by itself. Intent intent = getIntent(); if (intent!=null && intent.getType()!=null) //check if any application has executed your app { if(intent.getType().indexOf("image/") != -1) isinint=true; //check if the requested type is an image. If true, set a public static boolean, f.e. named isinint to true. Default is false. } Now, when the user has picked an image, set the result as following. Due to memory issues, you should copy the file you want to return onto the sdcard and return the Uri. 
if(isinint) //check if any app cares for the result { Intent shareIntent = new Intent(android.content.Intent.ACTION_SEND, Uri.fromFile(openprev)); //Create a new intent. First parameter means that you want to send the file. The second parameter is the URI pointing to a file on the sd card. (openprev has the datatype File) ((Activity) context).setResult(Activity.RESULT_OK, shareIntent); //set the file/intent as result ((Activity) context).finish(); //close your application and get back to the requesting application like GMail and WhatsApp return; //do not execute code below, not important } Note! : You can leave out ((Activity) context) when calling the data in OnCreate or similiar void's. As i use this snippet in another void, i need to provide a context in any case that has to be defined as displayed. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15900860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1731468/']} | jdg_80297 |
stackexchange | llm_judgeable_groundtruth_similarity | 150537 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
This is basically a repost of this math.se question. At the time I was writing this I thought it has to have a straightforward solution so I posted it there. Now I am not so sure about it being so easy. The problem is as follows. Let $f \in \{0,1\}^k$ and let $S_n(f)$ be the number of strings of $\{0,1\}^n$ that do not contain $f$ as a substring. Is it true that $$|f| > |f'| \implies S_n(f) > S_n(f')$$ I am aware that the transfer-matrix method could be used to compute $S_n(f)$ given a concrete $f.$ I don't know though if it offers a solution to this problem or perhaps if there's any other obvious reason why this is true.
Now provide the response and nothing else.
| Your conjecture is true. Here is a proof. Define $\operatorname{Av}_n(w)$ to be the number of binary words of length $n$ which avoid the pattern $w$. Let $u$ and $v$ be binary words with $|u| = k$ and $|v|=m$ with $k < m$. We will show that $\operatorname{Av}_n(u) < \operatorname{Av}_n(v)$ for all $n$. Define the special words $$M_n = \overbrace{00\cdots00}^n\qquad\text{and}\qquad L_n = \overbrace{00\cdots01}^n.$$ We can show fairly easily using the cluster method of Goulden and Jackson (and other ways as well, though the cluster method works easily for any pattern) that words avoiding $M_n$ and words avoiding $L_n$ have the generating functions $$m_n(x) = \sum_{r \geq 0}\operatorname{Av}_r(M_n)x^r = \frac{1-x^n}{1-2x+x^{n+1}}$$and $$\ell_n(x) = \sum_{r\geq 0} \operatorname{Av}_r(L_n)x^r = \frac{1}{1-2x+x^n}.$$ Moreover, of all words $w$ with $|w|=n$, $M_n$ is the most avoided word and $L_n$ is the least avoided word. Formally, for $w$ with $|w|=n$ and all $r$ $$\operatorname{Av}_r(L_n) \leq \operatorname{Av}_r(w) \leq \operatorname{Av}_r(M_n).$$ This can be seen probabilistically by observing that the number of occurrences of a pattern of length $n$ in all words of length $r$ is independent of what the pattern is. Since $M_n$ "packs" the most easily (i.e., has a lot of overlaps) and $L_n$ does not "pack" at all (i.e., cannot overlap itself), it follows that $M_n$ appears as a pattern in less words overall than any other pattern and $L_n$ appears as a pattern in more words overall than any other pattern. It should also be obvious that $\operatorname{Av}_r(L_n) \leq \operatorname{Av}_r(L_{n+1})$ for all $r$. We need to prove one more fact: $\operatorname{Av}_r(M_{s-1}) < \operatorname{Av}_r(L_s)$ for all $r \geq s-1$. Edit: As @DavidSpeyer pointed out in a comment, this is easily proved by observing that $M_{s-1}$ is a subword of $L_s$. I've removed my lengthier argument, but left the generating functions $m_n(x)$ and $\ell_n(x)$ defined above. 
We now combine all of our results: for $r \geq k$ $$\operatorname{Av}_r(u) \leq \operatorname{Av}_r(M_k) < \operatorname{Av}_r(L_{k+1}) \leq \operatorname{Av}_r(L_m) \leq \operatorname{Av}_r(v).\;\;\square$$ | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/150537', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1737/']} | jdg_80298 |
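The extremal claims in the proof ($M_n$ most avoided, $L_n$ least avoided, and the strict jump between lengths) are easy to sanity-check by brute force for small sizes — a throwaway helper, not part of the cluster-method machinery cited in the answer:

```python
from itertools import product

def av(r: int, pattern: str) -> int:
    # Number of binary words of length r avoiding `pattern` as a factor.
    return sum(pattern not in "".join(w) for w in product("01", repeat=r))
```

For r = 6 and patterns of length 3 this reproduces the generating-function counts: Av_6(000) = 44 (the maximum), Av_6(001) = 33 (the minimum), and Av_6(000) < Av_6(0001) = 52, matching the key strict inequality Av_r(M_k) < Av_r(L_{k+1}).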
stackexchange | llm_judgeable_groundtruth_similarity | 10294481 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to redirect every http://test.com request to http://www.test.com . How can this be done. In the server block I tried adding rewrite ^/(.*) http://www.test.com/$1 permanent; but in browser it says The page isn't redirecting properly Firefox has detected that the server is redirecting the request for this address in a way that will never complete. My server block looks like server { listen 80; server_name test.com; client_max_body_size 10M; client_body_buffer_size 128k; root /home/test/test/public; passenger_enabled on; rails_env production; #rewrite ^/(.*) http://www.test.com/$1 permanent; #rewrite ^(.*)$ $scheme://www.test.com$1; error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } }
Now provide the response and nothing else.
| Best way to do what you want is to add another server block: server { #implemented by default, change if you need different ip or port #listen *:80 | *:8000; server_name test.com; return 301 $scheme://www.test.com$request_uri;} And edit your main server block server_name variable as following: server_name www.test.com; Important : New server block is the right way to do this, if is evil . You must use locations and servers instead of if if it's possible. Rewrite is sometimes evil too , so replaced it with return . | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/10294481', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/457036/']} | jdg_80299 |
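The rewritten configuration is declarative, but its behaviour can be sketched as a tiny function — a hypothetical model of the redirect logic for testing purposes, not anything nginx itself runs:

```python
def handle(scheme: str, host: str, request_uri: str):
    # Models: server_name test.com; return 301 $scheme://www.test.com$request_uri;
    if host == "test.com":
        return 301, f"{scheme}://www.test.com{request_uri}"
    # Otherwise the www.test.com server block serves the request itself,
    # so no second redirect is issued and the loop from the question is gone.
    return 200, None
```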
stackexchange | llm_judgeable_groundtruth_similarity | 344698 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am interested in the standard name for the following weak form of $k$-differentiability. Definition. A function $f:\mathbb R\to\mathbb R$ is called Taylor $k$-differentiable at a point $x_0$ if there are real numbers $a_0,\dots,a_k$ such that $$f(x)=\sum_{n=0}^k\tfrac1{n!}a_n(x-x_0)^n+o(|x-x_0|^k).$$ The numbers $a_0,\dots,a_k$ are unique (if they exist) and can be called the derivatives $f^{(0)}(x_0),\dots,f^{(k)}(x_0)$ of $f$ at $x_0$. The Taylor formula says that each $k$-differentiable function at a point $x_0$ is Taylor $k$-differentiable. The converse is not true: the Dirichlet-like function $$f(x)=\begin{cases}x^{k+1}&\mbox{if $x$ is rational};\\0&\mbox{if $x$ is irrational}\end{cases}$$ is Taylor $k$-differentiable at zero (with all derivatives $f^{(0)}(0)=\dots=f^{(k)}(0)=0$) but is discontinuous at all non-zero points, so is not $k$-differentiable in the standard sense. So, my question: does Taylor $k$-differentiability have some standard name, accepted in the literature? If yes, what is a suitable reference? Thanks.
Now provide the response and nothing else.
| Start with $$(\phi-i)(\phi-(p+1-i))=\phi^2-\phi(p+1)+i(p+1-i)=p(i-\phi)+(1+i-i^2).$$ Using this for $i=1,2,\ldots,(p-1)/2$ we get $$\prod_{j=1}^p(\phi-j)=\left(\phi-\frac{p+1}2\right)\prod_{i=1}^{(p-1)/2}\left(p(i-\phi)+(1+i-i^2)\right).$$ Expand the brackets and take it modulo $p^2\mathbb{Z}[\phi]$ . We get $$M:=\left(\phi-\frac{p+1}2\right)\prod_{i=1}^{(p-1)/2}\left(1+i-i^2\right)+p\left(\phi-\frac12\right)\prod_{i=1}^{(p-1)/2}\left(1+i-i^2\right)\sum_{i=1}^{(p-1)/2}\frac{i-\phi}{1+i-i^2}.$$ Denote $T=\prod_{i=1}^{(p-1)/2}\left(1+i-i^2\right)$ . Considering $M$ modulo $p$ we get $-\sqrt{5}T/2$ , thus by your calculation of $M$ modulo $p$ we get $T\equiv -2\pmod p$ . Next, we should look mod $p$ for $$\frac{M-\sqrt{5}}{p}=-\sqrt{5}\frac{T+2}{2p}+T\left(-\frac12-\frac{\sqrt{5}}2\sum_{i=1}^{(p-1)/2}\frac{i-\phi}{1+i-i^2}\right).$$ And here we should look for ``rational'' part, which must be equal to $\frac12$ modulo $p$ . This rational part equals $$T\left(-\frac12-\frac54\sum_{i=1}^{(p-1)/2}\frac{1}{1+i-i^2}\right),$$ and our claim reduces to $$\sum_{i=1}^{(p-1)/2}\frac{1}{1+i-i^2}\equiv -\frac15 \pmod p.$$ This can not be hard and it is not. We have $1+i-i^2=5/4-(i-1/2)^2$ . The guys $(i-1/2)^2$ run over the set $\mathcal{R}$ of all non-zero quadratic residues when $i$ goes from 1 to $(p-1)/2$ (indeed, they are quadratic residues for sure, non-zero and mutually distinct: if $(i-1/2)^2=(j-1/2)^2$ , then either $i=j$ or $p$ divides $i+j-1$ which can non be in our range). So we should have $\sum_{r\in \mathcal{R}} 1/(5/4-r)=-1/5$ . Denote $f(x)=\prod(x-r)=x^{(p-1)/2}-1$ . We should prove $\frac{f'(5/4)}{f(5/4)}=-1/5$ . This is true: $f(5/4)=-2$ , $f'(5/4)=\frac{p-1}2\cdot (-1)\cdot \frac45=\frac25$ . | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/344698', 'https://mathoverflow.net', 'https://mathoverflow.net/users/61536/']} | jdg_80300 |
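The closing congruence $\sum_{i=1}^{(p-1)/2} 1/(1+i-i^2) \equiv -1/5 \pmod p$ is easy to spot-check numerically. A sketch for small primes $p \equiv \pm 2 \pmod 5$ (the relevant case, where 5 is a quadratic non-residue so every denominator $1+i-i^2$ is nonzero mod $p$); the modular-inverse form pow(x, -1, p) needs Python ≥ 3.8:

```python
def lhs(p: int) -> int:
    # sum_{i=1}^{(p-1)/2} (1 + i - i^2)^{-1}  (mod p), via modular inverses.
    return sum(pow(1 + i - i * i, -1, p)
               for i in range(1, (p - 1) // 2 + 1)) % p

def rhs(p: int) -> int:
    # -1/5 reduced mod p.
    return (-pow(5, -1, p)) % p
```

For p = 7, for instance, both sides reduce to 4 mod 7.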
stackexchange | llm_judgeable_groundtruth_similarity | 55022 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does DiscretizeGraphics seem to work on one GraphicsComplex and not the other? Here is an example that works: v = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};p1 = Graphics[GraphicsComplex[v, Polygon[{1, 2, 3, 4}]]];DiscretizeGraphics[p1] But this one does not: p2 = Graphics3D[First@ParametricPlot3D[{Cos[t], Sin[u], c Sin[t]}, {u, 0, 2 Pi}, {t, 0, 2 Pi}]];DiscretizeGraphics[p2](*The function DiscretizeGraphics is not implemented for \ GraphicsComplex[{{0.9999999999998993`,4.487989505128125`*^-7,*) But p2 is a GraphicsComplex? Looking at FullForm[p2] Here is the FullForm for p1 Are not p1 and p2 both GraphicsComplex? p1 is 2D and p2 is 3D, but are they not both considered GraphicsComplex? It would be good to know exactly what can and what cannot be discretized. I tried to find this, but could not. All I see so far are usage examples. reference: http://www.wolfram.com/mathematica/new-in-10/data-and-mesh-regions/discretizing-graphics.html http://reference.wolfram.com/language/ref/DiscretizeGraphics.html?q=DiscretizeGraphics I also looked at possible issues, and did not notice anything about this. The only one that came close is "multiple volume primitives is not supported". Is this the case here?
Now provide the response and nothing else.
| In the end this is a bug and I filed that. Now, what is going on: If you extract the coords and polygons from the GraphicsComplex and try to set up a MeshRegion you get a warning: gc = First@ ParametricPlot3D[{Cos[t], Sin[u], Sin[t]}, {u, 0, 2 Pi}, {t, 0, 2 Pi}];ply = Cases[(gc)[[2]], _Polygon, Infinity]MeshRegion[gc[[1]], ply]MeshRegion::coplnr: "The vertices in the polygon Polygon[{{1129,1621,705,100}}] are not coplanar." I guess that is what is happening internally and then the conversion is rejected. It could have given a better message, though. All of the Graphics(3D) functions were written before the MeshRegion functionality became available and used their own mesh format. For graphics it is not too important that the underlying mesh is of a good quality (e.g. no non coplanar elements). They human eye is very forgiving in that sense. But for computations over meshes it is essential that the underlying mesh has a reasonable quality. In this case the ParametricPlot3D needs to get rid of those non coplanar elements. To get a discretized cylinder could use DiscretizeGraphics[Cylinder[]] | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/55022', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/70/']} | jdg_80301 |
stackexchange | llm_judgeable_groundtruth_similarity | 40526990 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between use and include in Twig? Documentation : include The include statement includes a template and returns the rendered content of that template into the current one: {% include 'header.html' %}Body here...{% include 'footer.html' %} use The use statement tells Twig to import the blocks defined in blocks.html into the current template (it's like macros, but for blocks): blocks.html {% block sidebar %}{% endblock %} main.html {% extends "base.html" %}{% use "blocks.html" %}{% block title %}{% endblock %}{% block content %}{% endblock %} Possible answer: I think this should explain the difference: include is to get all the code from an external file and import it into your actual file at the right location of the call. use is completely different, as it parses the linked file to find a particular section of code and then overwrites the blocks with the same name, in your current file, with the ones found in this external file. include is like "go find this file and render it with my page here". use is "parse this other file to find block definitions to use instead of my own defined here". If the use command finds nothing matching the task, nothing is displayed at all from this file. Question: is the explanation correct? Are there any other explanations of this difference?
Now provide the response and nothing else.
| After months, I am posting an answer for any further reference to this question. I also added some description for extends & import & macro & embed for more clearance: There are various types of inheritance and code reuse in Twig: Include Main Goal: Code Reuse Use Case: Using header.html.twig & footer.html.twig inside base.html.twig . header.html.twig <nav> <div>Homepage</div> <div>About</div></nav> footer.html.twig <footer> <div>Copyright</div></footer> base.html.twig {% include 'header.html.twig' %}<main>{% block main %}{% endblock %}</main>{% include 'footer.html.twig' %} Extends Main Goal: Vertical Reuse Use Case: Extending base.html.twig inside homepage.html.twig & about.html.twig . base.html.twig {% include 'header.html.twig' %}<main>{% block main %}{% endblock %}</main>{% include 'footer.html.twig' %} homepage.html.twig {% extends 'base.html.twig' %}{% block main %}<p>Homepage</p>{% endblock %} about.html.twig {% extends 'base.html.twig' %}{% block main %}<p>About page</p>{% endblock %} Use Main Goal: Horizontal Reuse Use Case: sidebar.html.twig in single.product.html.twig & single.service.html.twig . sidebar.html.twig {% block sidebar %}<aside>This is sidebar</aside>{% endblock %} single.product.html.twig {% extends 'product.layout.html.twig' %}{% use 'sidebar.html.twig' %}{% block main %}<main>Product page</main>{% endblock %} single.service.html.twig {% extends 'service.layout.html.twig' %}{% use 'sidebar.html.twig' %}{% block main %}<main>Service page</main>{% endblock %} Notes: It's like macros, but for blocks. The use tag only imports a template if it does not extend another template, if it does not define macros, and if the body is empty. Macro Main Goal: Reusable Markup with Variables Use Case: A function which gets some variables and outputs some markup. 
form.html.twig {% macro input(name, value, type) %} <input type="{{ type|default('text') }}" name="{{ name }}" value="{{ value|e }}" />{% endmacro %} profile.service.html.twig {% import "form.html.twig" as form %}<form action="/login" method="post"> <div>{{ form.input('username') }}</div> <div>{{ form.input('password') }}</div> <div>{{ form.input('submit', 'Submit', 'submit') }}</div></form> Embed Main Goal: Block Overriding Use Case: Embedding pagination.html.twig in product.table.html.twig & service.table.html.twig . pagination.html.twig <div id="pagination"> <div>{% block first %}{% endblock %}</div> {% for i in (min + 1)..(max - 1) %} <div>{{ i }}</div> {% endfor %} <div>{% block last %}{% endblock %}</div></div> product.table.html.twig {% set min, max = 1, products.itemPerPage %}{% embed 'pagination.html.twig' %} {% block first %}First Product Page{% endblock %} {% block last %}Last Product Page{% endblock %}{% endembed %} service.table.html.twig {% set min, max = 1, services.itemPerPage %}{% embed 'pagination.html.twig' %} {% block first %}First Service Page{% endblock %} {% block last %}Last Service Page{% endblock %}{% endembed %} Please note that embedded file ( pagination.html.twig ) has access to the current context ( min , max variables). Also you may pass extra variables to the embedded file: pagination.html.twig <p>{{ count }} items</p><div> <div>{% block first %}{% endblock %}</div> {% for i in (min + 1)..(max - 1) %} <div>{{ i }}</div> {% endfor %} <div>{% block last %}{% endblock %}</div></div> product.table.html.twig {% set min, max = 1, products|length %}{% embed 'pagination.html.twig' with {'count': products|length } %} {% block first %}First Product Page{% endblock %} {% block last %}Last Product Page{% endblock %}{% endembed %} Note: It has functionality of both Use & Include together. 
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/40526990', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5104596/']} | jdg_80302 |
stackexchange | llm_judgeable_groundtruth_similarity | 188658 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I found the following command to repeat a character in Linux: printf 'H%.0s' {1..5000} > H.txt I want, for example, H to repeat 5000 times. What does %.0s mean here?
Now provide the response and nothing else.
| That command depends on the shell generating 5000 arguments, and passing them to printf which then ignores them. While it may seem pretty quick - and is relative to some things - the shell must still generate all of those strings as args (and delimit them) and so on. Besides the fact that the generated Hs can't be printed until the shell first iterates to 5000, that command also costs in memory all that it takes to store and delimit the numeric string arguments to printf plus the Hs. Just as simply you can do: printf %05000s|tr \ H ...which generates a string of 5000 spaces - which, at least, are usually only a single byte per and cost nothing to delimit because they are not delimited. A few tests indicate that even for as few as 5000 bytes the cost of the fork and the pipe required for tr is worth it even in this case, and it almost always is when the numbers get higher. I ran... time bash -c 'printf H%.0s {1..5000}' >/dev/null ...and... time bash -c 'printf %05000s|tr \ H' >/dev/null Each about 5 times a piece (nothing scientific here - only anecdotal) and the brace expansion version averaged a little over .02 seconds in total processing time, but the tr version came in at around .012 seconds total on average - and the tr version beat it every time. I can't say I'm surprised - {brace expansion} is a useful interactive shell shorthand feature, but is usually a rather wasteful thing to do where any kind of scripting is concerned. The common form: for i in {[num]..[num]}; do ... ...when you think about it, is really two for loops - the first is internal and implied in that the shell must loop in some way to generate those iterators before saving them all and iterating them again for your for loop. Such things are usually better done like: iterator=$startuntil [ "$((iterator+=interval))" -gt "$end" ]; do ... ...because you store only a very few values and overwrite them as you go as well as doing the iteration while you generate the iterables. 
Anyway, like the space padding mentioned before, you can also use printf to zeropad an arbitrary number of digits, of course, like: printf %05000d I do both without arguments because for every argument specified in printf 's format string when an argument is not found the null string is used - which is interpreted as a zero for a digit argument or an empty string for a string. This is the other (and - in my opinion - more efficient) side of the coin when compared with the command in the question - while it is possible to get nothing from something as you do when you printf %.0 length strings for each argument, so also is it possible to get something from nothing. Quicker still for large amounts of generated bytes you can use dd like: printf \\0| dd bs=64k conv=sync ...and w/ regular files dd 's seek=[num] argument can be used to greater advantage. You can get 64k newlines rather than nulls if you add ,unblock cbs=1 to the above and from there could inject arbitrary strings per line with paste and /dev/null - but in that case, if it is available to you, you might as well use: yes 'output string forever' Here are some more dd examples anyway: dd bs=5000 seek=1 if=/dev/null of=./H.txt ...which creates (or truncates) a \0NUL filled file in the current directory named H.txt of size 5000 bytes. dd seeks straight to the offset and NUL-fills all behind it. <&1 dd bs=5000 conv=sync,noerror count=1 | tr \\0 H >./H.txt ...which creates a file of same name and size but filled w/ H chars. It takes advantage of dd 's spec'd behavior of writing out at least one full null-block in case of a read error when noerror and sync conversions are specified (and - without count= - would likely go on longer than you could want) , and intentionally redirects a writeonly file descriptor at dd 's stdin. | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/188658', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/83681/']} | jdg_80303 |
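The key to the question — why `%.0s` swallows every argument — is that a precision of 0 on a string conversion truncates it to zero characters. Python's `%`-formatting shares printf's precision semantics, so the effect can be checked without a shell (a minimal sketch, not part of the original answer):

```python
# "%.Ns" truncates a string conversion to at most N characters; with
# N = 0 every argument collapses to the empty string. So the shell's
# printf 'H%.0s' {1..5000} emits the literal H once per argument.

# Mimic the 5000 brace-expanded arguments:
out = "".join("H%.0s" % (i,) for i in range(1, 5001))

print(len(out))   # 5000
print(out[:5])    # HHHHH
```

The same reasoning explains why the format string is re-applied once per leftover argument: printf cycles its format until all arguments are consumed.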
stackexchange | llm_judgeable_groundtruth_similarity | 139305 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently working on an indie project, so I don't exactly have the luxury of throughout human testing or external code review β however, I don't see any difficult bugs in my current code (I fix them as I see them, and most of the time they're just wrong field names and such, things that you fix in a minute or two), and I test it after implementing any feature before pushing it. Lately my LOC number was about 400 a day (for the record, it's C#), and I'm not only implementing new systems, but also rewrite things I've already written and fix some bugs. Should I be bothered? Is it the sign that I need to stop and review all the code I've been writing up to this date and refactor it?
Now provide the response and nothing else.
| LOC is probably one of the most abused metrics, and as a result is probably one of the more useless measures of code quality, and an even more useless measurement of programming effort. Yes, that's a bold statement for me to make, and no, I can't point you to studies proving my point. However, I can state with hard earned experience that when you start worrying about how much code you've written, you're probably worrying about the wrong problems. You first need to ask yourself what it is you are trying to measure or prove, and whether this proof is merely out of interest, or to support a wider quality improvement and where you need to use this information to get buy-in from your team/management to do something about it. One of the things that I tend to use LOC for is a bit of a sanity check. If I find myself writing a lot of code, I become more interested in LOC per method, or LOC per class, rather than LOC over all. These measurements might be indicators that you have further refactoring to do if you're feeling a little OCD about how well factored your code should be. Very large classes might need to be refactored into a few smaller classes, and long multi-line methods might need to be broken down into several methods, other classes, or may even indicate some repetition that could be removed. Notice I used the word "might" several times there. The reality is that LOC provides only a possible indicator, and no real guarantee that your code may need to change. The real question to ask is whether the code behaves as required and as expected. If so, then your next question is whether or not you will be able to maintain the code easily, and whether you will have the time either now or in the future to make changes to working code to reduce your maintenance overheads in the future. 
Often, lots of code means that you will have more to maintain later, but sometimes even well-factored code can stretch out to hundreds of lines of code, and yes, you can sometimes find yourself writing hundreds of lines of code in a day. Experience however tells me that if I am sustaining an output of hundreds of lines of new code each day, that often there is a risk that much of the code has been inappropriately cut and paste from somewhere else, and that in itself may indicate problems with duplication and maintenance, but again that is no guarantee, so I tend to rely on what my experience and instincts tell me based on how the tasks at hand were completed. The best way to avoid the dilemma posed in your question IMHO is to forget about LOC, and refactor ALL of the time. Write your code test first, implement to fail, refactor to pass, then see what may be refactored there and then to improve the code. You'll leave the task knowing that you've double-checked your work already, and you won't be so concerned about second-guessing yourself in the future. Realistically speaking, if you use a test-first approach as I've described, any LOC/day measurement on your completed code will really mean you've written 3-5 times the measured amount, with that effort hidden successfully by your ongoing refactoring efforts. | {} | {'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/139305', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/33609/']} | jdg_80304 |
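The answer's suggestion to look at LOC per method rather than total LOC can be automated. A hedged stdlib-only sketch using `ast` (Python 3.8+ for `end_lineno`; the function and variable names here are illustrative, not from the answer):

```python
import ast

def loc_per_function(source: str) -> dict:
    """Rough lines-of-code per function, as a refactoring sanity check.

    Per the answer above, a large count only *might* indicate a method
    worth splitting; it is an indicator, not a verdict.
    """
    sizes = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            sizes[node.name] = node.end_lineno - node.lineno + 1
    return sizes

example = """
def short():
    return 1

def longer(x):
    y = x + 1
    z = y * 2
    return z
"""
print(loc_per_function(example))  # {'short': 2, 'longer': 4}
```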
stackexchange | llm_judgeable_groundtruth_similarity | 29842 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way for two people to securely authenticate each other without access to any special software (like PGP)? For example, when speaking over the phone or by sending SMSes to each other or using some other text channel. Assume they can securely exchange any necessary secrets beforehand and also have access to software then, but at the time they need to do the authentication they have only basic phones or perhaps calculators. Obviously the simplest thing is for each person to have a password that they say to the other, but this is not secure, because one person always has to go first. If the other is an impersonator they now have one of the passwords. So I'm probably looking for some kind of a challenge-response mechanism here - but whatever works.
Now provide the response and nothing else.
| So I'm probably looking for some kind of a challenge-response mechanism here I'd guess so. Print up a few pages of text in the following format: # Challenge Response # Challenge Response1 monkey character 2 sinew orange3 bottle helmet 4 glass glove You'll both have the same list. Whenever you authenticate it doesn't matter who goes first. Provide a challenge word and check the response. You might say, "3 - Bottle," and wait for your friend to look it up saying, "Helmet." Both of you cross the number off the list and never use it again. They're authenticated by providing the correct word in response. Then they pick a number and do the same thing to guard against a case where your friend may have challenged an attacker, thus providing them with a number-challenge pairing and using it in replay. The words can be random letters, the result of picking random lines from a very long wordlist, etc as long as they're randomly chosen. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/29842', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/20139/']} | jdg_80305 |
stackexchange | llm_judgeable_groundtruth_similarity | 14940660 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When sending a request to /customers/41224d776a326fb40f000001 and a document with _id 41224d776a326fb40f000001 does not exist, doc is null and I'm returning a 404 : Controller.prototype.show = function(id, res) { this.model.findById(id, function(err, doc) { if (err) { throw err; } if (!doc) { res.send(404); } return res.send(doc); }); }; However, when _id does not match what Mongoose expects as "format" (I suppose) for example with GET /customers/foo a strange error is returned: CastError: Cast to ObjectId failed for value "foo" at path "_id". So what's this error?
Now provide the response and nothing else.
| Mongoose's findById method casts the id parameter to the type of the model's _id field so that it can properly query for the matching doc. This is an ObjectId but "foo" is not a valid ObjectId so the cast fails. This doesn't happen with 41224d776a326fb40f000001 because that string is a valid ObjectId. One way to resolve this is to add a check prior to your findById call to see if id is a valid ObjectId or not like so: if (id.match(/^[0-9a-fA-F]{24}$/)) { // Yes, it's a valid ObjectId, proceed with `findById` call.} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/14940660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/220180/']} | jdg_80306 |
stackexchange | llm_judgeable_groundtruth_similarity | 276052 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get the current system status (current CPU, RAM, free disk space, etc.) in Python? Ideally, it would work for both Unix and Windows platforms. There seems to be a few possible ways of extracting that from my search: Using a library such as PSI (that currently seems not actively developed and not supported on multiple platforms) or something like pystatgrab (again no activity since 2007 it seems and no support for Windows). Using platform specific code such as using a os.popen("ps") or similar for the *nix systems and MEMORYSTATUS in ctypes.windll.kernel32 (see this recipe on ActiveState ) for the Windows platform. One could put a Python class together with all those code snippets. It's not that those methods are bad but is there already a well-supported, multi-platform way of doing the same thing?
Now provide the response and nothing else.
| The psutil library gives you information about CPU, RAM, etc., on a variety of platforms: psutil is a module providing an interface for retrieving information on running processes and system utilization (CPU, memory) in a portable way by using Python, implementing many functionalities offered by tools like ps, top and Windows task manager. It currently supports Linux, Windows, OSX, Sun Solaris, FreeBSD, OpenBSD and NetBSD, both 32-bit and 64-bit architectures, with Python versions from 2.6 to 3.5 (users of Python 2.4 and 2.5 may use 2.1.3 version). Some examples: #!/usr/bin/env pythonimport psutil# gives a single float valuepsutil.cpu_percent()# gives an object with many fieldspsutil.virtual_memory()# you can convert that object to a dictionary dict(psutil.virtual_memory()._asdict())# you can have the percentage of used RAMpsutil.virtual_memory().percent79.2# you can calculate percentage of available memorypsutil.virtual_memory().available * 100 / psutil.virtual_memory().total20.8 Here's other documentation that provides more concepts and interest concepts: https://psutil.readthedocs.io/en/latest/ | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/276052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/35935/']} | jdg_80307 |
stackexchange | llm_judgeable_groundtruth_similarity | 22166295 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a function to execute shell commands and capture its output in a batch script. :runShellCmd setlocal EnableDelayedExpansion SET lf=- FOR /F "delims=" %%i IN ('%~1') DO if "%out%" == "" (set out=%%i) else (set out=!out!%lf%%%i) echo "Cmd output: %out%" SET "funOut=%out%"ENDLOCAL & IF "%~1" NEQ "" SET %~2=%out%goto :EOF I have been successful in passing simple commands and getting output. But for calls like CALL :runShellCmd "echo Jatin Kumar | find /c /i "jatin"" it fails with error unexpected | character . I know we need to escape | with ^ in for but if I try to pass ^| in the function argument string, it changes it to ^^| which again throws error. Am I missing something?
Now provide the response and nothing else.
| This is an effect of the CALL command. The CALL command doubles all carets in one of the batch parser phases. Normally you wouldn't see this, as the carets will be used as an escape charater directly after the doubling. See this call echo ^^^^call call echo ^^^^call call call echo ^^^^call echo "^^^^"call call echo "^^^^"call call call echo "^^^^" Output ^^^^^^"^^^^^^^^""^^^^^^^^^^^^^^^^""^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^" But how do you can escape your pipe character? You can't! But you can add a caret remover in your function. :runShellCmd setlocal EnableDelayedExpansion set "param=%~1" set param set "param=!param:^^=^!" for .... ('!param!') Or you could use an escaping trick when calling your function. set "caret=^"CALL :runShellCmd "echo Jatin Kumar %%caret%%| find /c /i " This works, as the %%caret%% will be expanded after the CALL caret doubling phase . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22166295', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/573167/']} | jdg_80308 |
stackexchange | llm_judgeable_groundtruth_similarity | 24266 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a set of points in the 3D space and I want to draw the bounding convex polyhedron of those points. A very naΓ―ve solution would be to draw a triangle between every three points: Graphics3D[ {EdgeForm[None], Polygon[Part[pts, #] & /@ Subsets[Range[Length@pts], {3}]]}, Axes -> True] However, this method is very inefficient. It takes a few seconds to be drawn on my laptop and whenever I want to rotate it using the mouse, it doesn't move smoothly.I know what the issue is: I am drawing too many triangles, many of them are not even visible (since they are placed inside the polyhedron). But how can I fix it? Here is an example and the set of the points: pts={{0., 0., 0.}, {0., 0., 1.79176}, {0., 1.79176, 0.}, {1.79176, 0., 0.}, {0., 1.79176, 0.}, {0., 0., 1.79176}, {1.79176, 0., 0.}, {0., 1.79176, 0.606136}, {0., 0.606136, 1.79176}, {0., 1.79176, 0.606136}, {0., 0.606136, 1.79176}, {1.79176, 0., 0.606136}, {0.606136, 0., 1.79176}, {1.79176, 0.606136, 0.}, {0.606136, 1.79176, 0.}, {1.79176, 0., 0.606136}, {0.606136, 0., 1.79176}, {1.79176, 0.606136, 0.}, {0.606136, 1.79176, 0.}, {1.79176, 0., 0.}, {0., 1.79176, 0.}, {0., 0., 1.79176}, {1.79176, 0., 0.606136}, {0.606136, 0., 1.79176}, {1.79176, 0.606136, 0.}, {0.606136, 1.79176, 0.}, {0., 1.79176, 0.606136}, {0., 0.606136, 1.79176}, {0., 1.79176, 0.606136}, {0., 0.606136, 1.79176}, {1.79176, 0.606136, 0.}, {0.606136, 1.79176, 0.}, {1.79176, 0., 0.606136}, {0.606136, 0., 1.79176}, {1.79176, 0.606136, 0.}, {0.606136, 1.79176, 0.}, {1.79176, 0., 0.606136}, {0.606136, 0., 1.79176}, {1.79176, 0.606136, 0.}, {0.606136, 1.79176, 0.}, {1.79176, 0., 0.606136}, {0.606136, 0., 1.79176}, {0., 1.79176, 0.606136}, {0., 0.606136, 1.79176}, {0., 1.79176, 0.606136}, {0., 0.606136, 1.79176}, {1.79176, 0.606136, 0.374693}, {1.79176, 0.374693, 0.606136}, {0.606136, 1.79176, 0.374693}, {0.374693, 1.79176, 0.606136}, {0.606136, 0.374693, 1.79176}, {0.374693, 0.606136, 1.79176}, {1.79176, 0.606136, 0.374693}, {1.79176, 0.374693, 0.606136}, 
{0.606136, 1.79176, 0.374693}, {0.374693, 1.79176, 0.606136}, {0.606136, 0.374693, 1.79176}, {0.374693, 0.606136, 1.79176}, {1.79176, 0.606136, 0.374693}, {1.79176, 0.374693, 0.606136}, {0.606136, 1.79176, 0.374693}, {0.374693, 1.79176, 0.606136}, {0.606136, 0.374693, 1.79176}, {0.374693, 0.606136, 1.79176}, {1.79176, 0.606136, 0.374693}, {1.79176, 0.374693, 0.606136}, {0.606136, 1.79176, 0.374693}, {0.374693, 1.79176, 0.606136}, {0.606136, 0.374693, 1.79176}, {0.374693, 0.606136, 1.79176}, {1.79176, 0.606136, 0.374693}, {1.79176, 0.374693, 0.606136}, {0.606136, 1.79176, 0.374693}, {0.374693, 1.79176, 0.606136}, {0.606136, 0.374693, 1.79176}, {0.374693, 0.606136, 1.79176}, {1.79176, 0.606136, 0.374693}, {1.79176, 0.374693, 0.606136}, {0.606136, 1.79176, 0.374693}, {0.374693, 1.79176, 0.606136}, {0.606136, 0.374693, 1.79176}, {0.374693, 0.606136, 1.79176}}
Now provide the response and nothing else.
| The documented (!) TetGenConvexHull can compute the convex hull. Then using a GraphicsComplex will be efficient: << TetGenLink`{coords, incidences} = TetGenConvexHull[pts];Graphics3D[{EdgeForm[], GraphicsComplex[coords, Polygon[incidences]]}] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/24266', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/573/']} | jdg_80309 |
stackexchange | llm_judgeable_groundtruth_similarity | 7382937 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need help in creating my application, I create a database and connect it to my application my database contain manager table having a id, name, etc of fields. when I enter any id in textfield it must compare the value of textfield with database values how can I do this?
Now provide the response and nothing else.
| When you want to run a query and show the subset of data in a control, a DataView could help you. That's just one example, look at the MSDN example for DataView , that explains where you should use DataViews with DataTables... DataTable A datatable is an in-memory representation of a single database table. You can think of it as having columns and rows in the same way. The DataTable is a central object in the ADO.NET library. Other objects that use the DataTable include the DataSet and the DataView. Look at MSDN The DataTable class for more details. DataView A dataview is a view on a datatable, a bit like a sql view. It allows you to filter and sort the rows - often for binding to a windows form control. Additionally, a DataView can be customized to present a subset of data from the DataTable. This capability allows you to have two controls bound to the same DataTable, but showing different versions of the data. For example, one control may be bound to a DataView showing all of the rows in the table, while a second may be configured to display only the rows that have been deleted from the DataTable. The DataTable also has a DefaultView property which returns the default DataView for the table. Look at MSDN DataView class for more details. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7382937', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/939815/']} | jdg_80310 |
stackexchange | llm_judgeable_groundtruth_similarity | 56495475 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my ASP.NET core web application, I want to have an action that runs only in development mode. In production mode, maybe a 404 error will be good enough. Is it possible to do that?
Now provide the response and nothing else.
| This can be achieved by injecting IHostEnvironment into your controller and using its IsDevelopment() method inside of the action itself. Here's a complete example that returns a 404 when running in anything other than the Development environment: public class SomeController : Controller{ private readonly IHostEnvironment hostEnvironment; public SomeController(IHostEnvironment hostEnvironment) { this.hostEnvironment = hostEnvironment; } public IActionResult SomeAction() { if (!hostEnvironment.IsDevelopment()) return NotFound(); // Otherwise, return something else for Development. }} If you want to apply this more globally or perhaps you just want to separate out the concerns, Daboul explains how to do so with an action filter in this answer . For ASP.NET Core < 3.0, use IHostingEnvironment in place of IHostEnvironment . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56495475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10255020/']} | jdg_80311 |
stackexchange | llm_judgeable_groundtruth_similarity | 2672484 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Just a thought, but would using an IFRAME over a DIV essentially make that element isolated from the window in a way that slow scripts running in the IFRAME wouldn't affect the other frames/window?
Now provide the response and nothing else.
| Yes for the first part, an iframe will "sort-of" isolate your window from the script in the iframe. However, the parent window can still be accessed via window.parent . For the second part: No, it will not make it so slow scripts in the iframe won't affect other frames/windows. Your main window object and its child nodes all run in the same thread. JavaScript is single threaded [Ignore webworkers in this case, you can't pass dom elements between them anyway], so the only reason you can access the parent-window/child-iframe's window object is because they're on the same thread. To provide a quick example: Create a page called main.html In that page, have an iframe src="iframe.html" Next to the iframe, have a button with whatever text you want, I don't care. In iframe.html, window.onload = function(){ while(1){} }; Access iframe.html. You'll notice that when you put your mouse cursor over the button, it doesn't respond/redraw. This is because the browser is frozen. Source: I tried getting multithreading like this too. Learned the hard way =) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2672484', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/233427/']} | jdg_80312 |
stackexchange | llm_judgeable_groundtruth_similarity | 6981916 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here my code I used below code to calculate the distance between two location using their latitude and longitude. It is giving wrong distance. sometimes getting right and sometimes getting irrelevant distance. We are getting lat1 and lng1 from database. //getting lat2 and lng2 from GPS as belowpublic class MyLocationListener implements LocationListener { @Override public void onLocationChanged(Location loc) { lat2=loc.getLatitude(); lng2=loc.getLongitude(); String Text = "My current location is: " +"Latitud = "+ loc.getLatitude() +"Longitud = " + loc.getLongitude(); //System.out.println("Lat & Lang form Loc"+Text); //Toast.makeText( getApplicationContext(), Text,Toast.LENGTH_SHORT).show(); } @Override public void onProviderDisabled(String provider) { } @Override public void onProviderEnabled(String provider) { } @Override public void onStatusChanged(String provider, int status, Bundle extras) { } //Calculating distance double earthRadius = 3958.75; double dLat = Math.toRadians(lat1-lat2); double dLng = Math.toRadians(lng1-lng2); double a = Math.sin(dLat/2) * Math.sin(dLat/2) + Math.cos(Math.toRadians(lat2)) * Math.cos(Math.toRadians(lat1)) * Math.sin(dLng/2) * Math.sin(dLng/2); double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)); double dist = earthRadius * c;
Now provide the response and nothing else.
| Here getting distance in miles (mi) private double distance(double lat1, double lon1, double lat2, double lon2) { double theta = lon1 - lon2; double dist = Math.sin(deg2rad(lat1)) * Math.sin(deg2rad(lat2)) + Math.cos(deg2rad(lat1)) * Math.cos(deg2rad(lat2)) * Math.cos(deg2rad(theta)); dist = Math.acos(dist); dist = rad2deg(dist); dist = dist * 60 * 1.1515; return (dist);}private double deg2rad(double deg) { return (deg * Math.PI / 180.0);}private double rad2deg(double rad) { return (rad * 180.0 / Math.PI);} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6981916', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/501733/']} | jdg_80313 |
stackexchange | llm_judgeable_groundtruth_similarity | 7300160 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I always read things about how certain functions within the C programming language are optimized by being written in assembly. Let me apologize if that sentence sounds a little misguided. So, I'll put it clearly: How is it that when you call some functions like strlen on UNIX/C systems, the actual function you're calling is written in assembly? Can you write assembly right into C programs somehow or is it an external call situation? Is it part of the C standard to be able to do this, or is it an operating system specific thing?
Now provide the response and nothing else.
| The C standard dictates what each library function must do rather than how it is implemented. Almost all known implementations of C are compiled into machine language. It is up to the implementers of the C compiler/library how they choose to implement functions like strlen . They could choose to implement it in C and compile it to an object, or they could choose to write it in assembly and assemble it to an object. Or they could implement it some other way. It doesn't matter so long as you get the right effect and result when you call strlen . Now, as it happens, many C toolsets do allow you to write inline assembly, but that is absolutely not part of the standard. Any such facilties have to be included as extensions to the C standard. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7300160', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/857994/']} | jdg_80314 |
stackexchange | llm_judgeable_groundtruth_similarity | 805469 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading through Joel David Hamkins' set theory lecture notes . On page 14, on the subject of inaccessible cardinals and submodels of ZFC in $V$, he defines a universe cardinal to be a cardinal $\gamma$ such that $V_\gamma$ models ZFC. For example, if an inaccessible cardinal $\kappa$ exists, then $\kappa$ is a universe cardinal. But furthermore, by the Löwenheim-Skolem theorem, there are then lots of models of ZFC inside $V_\kappa$, and Hamkins says that this proves that $\kappa$ is the $\kappa$th universe cardinal. Specifically, $\{\gamma < \kappa: V_\gamma \prec V_\kappa\}$ is evidently closed in $\kappa$, and is "unbounded by Löwenheim-Skolem". This is confusing to me. The Löwenheim-Skolem theorem says that $V_\kappa$ contains submodels of arbitrary cardinality. But in order to have any universe cardinals at all, you need models of ZFC of the form $V_\gamma$, which don't seem to come from Löwenheim-Skolem. Why should these exist? Is there something you can do to a model to put it in the form $V_\gamma$? Relatedly, it's also true (this is in Kanamori's Higher Infinite ) that the only models for second-order ZFC are $V_\kappa$ for $\kappa$ inaccessible. Can someone explain the difference between these two concepts? I'm finding the distinction between first-order and second-order Replacement a little hair-splitting. I hope I've phrased this the right way -- I'm a set theory rookie. Thanks in advance!
Now provide the response and nothing else.
| Yes, you do need a tiny additional argument to get universe cardinals below an inaccessible $\kappa$. The additional step is an elementary chain argument, which tends to occur relatively frequently. Starting from an inaccessible $\kappa$, Löwenheim-Skolem gives a small elementary submodel $M_0\prec V_\kappa$. We can then put $M_0$ inside $V_{\alpha_0}$ for some cardinal $\alpha_0<\kappa$. Applying Löwenheim-Skolem again gives a new elementary submodel $V_{\alpha_0}\subseteq M_1\prec V_\kappa$. We then put $M_1$ into some $V_{\alpha_1}$ again. After repeating this $\omega$ many times we get intertwined chains of models $M_n$ of ZFC and initial segments $V_{\alpha_n}$ of the universe which both union up to $V_{\sup \alpha_n}$, which is elementary in $V_{\kappa}$. This implies that $\sup\alpha_n<\kappa$ is a universe cardinal. By the way, the terminology has shifted slightly since Joel wrote those notes and the term now used for these cardinals is worldly cardinals . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/805469', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2215/']} | jdg_80315
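The intertwined chains in the answer above can be summarised in one display (my notation for the same construction):

```latex
M_0 \subseteq V_{\alpha_0} \subseteq M_1 \subseteq V_{\alpha_1} \subseteq \cdots,
\qquad M_n \prec V_\kappa \ \text{for every } n,
\qquad \bigcup_n M_n \;=\; \bigcup_n V_{\alpha_n} \;=\; V_{\sup_n \alpha_n} \prec V_\kappa ,
```

where the final elementarity holds because a union of a chain of elementary submodels of a fixed structure is again an elementary submodel of it.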
stackexchange | llm_judgeable_groundtruth_similarity | 44346 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having a problem verifying my answer to this question:Solve for x: $$\left| \begin{array}{cc}x+3 & 2 \\1 & x+2 \end{array} \right| = 0$$ I get: $(x+3)(x+2)-2=0$ $(x+3)(x+2)=2$thus: $x+3=2$ and $x+2=2$ $x=-1$ and $x=0$ The book says that $x=-1$ and $x=-4$ is the correct answer. I tried doing it a different way by expanding and got totally different answers: $x^2+5x=-4$ $x(x+5)=-4$ $x=-4$ and $x=-9$ What is going on here?
Now provide the response and nothing else.
| No. What you have done is not valid: $$(x+3) \cdot (x+2)=2$$ does not imply $x+3=2$ or $x+2=2$. For example, you can have $x+3 = \frac{1}{2}$ and $x+2 = 4$ which on multiplying gives $2$. Or you can have $x+3 = \frac{1}{8}$ and $x+2 =16$ which on multiplying gives $2$. So there are infinitely many possibilities which you can choose. So this is not the correct way to do it. You have the determinant as $(x+3) \cdot (x+2) -2 = x^{2} +5x+4$. This can be written as $(x+1)(x+4)=0$ which says that $x=-1,-4$. If you have an equation of the form $f(x) \cdot g(x) =0$, then either $f(x)=0$ or $g(x)=0$. I think this is where you have got confused. But if $f(x) \cdot g(x)= K$ for $K \in \mathbb{R}$, then you can't conclude $f(x)=K$ or $g(x)=K$; if both were equal to $K$, their product would be $K^{2}$, not $K$. You can have $f(x)= \frac{1}{K}$ and $g(x)=K^{2}$, but there are infinitely many such factorisations, so you can't pin down the values this way. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/44346', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_80316
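As a quick sanity check (my addition, not part of the answer above), one can substitute the roots back into the determinant expression and also see that the asker's candidate $x=0$ fails:

```python
# The determinant from the question: (x+3)(x+2) - 2,
# which should vanish exactly at the roots x = -1 and x = -4.
def det(x):
    return (x + 3) * (x + 2) - 2

values_at_roots = [det(-1), det(-4)]  # both 0
value_at_zero = det(0)                # (3)(2) - 2 = 4, not 0
```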
stackexchange | llm_judgeable_groundtruth_similarity | 2202524 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Yesterday one of my friend and I was discussing about the Principle of Mathematical Induction. After some time he made the following, Theorem. Let $\emptyset \subset X\subseteq Y$ and $f:X\to Y$ be a function such that, for all $y\in Y\setminus X$ there exists $x\in X$ such that $f(x)=y$ . $f(X)\subseteq X$ . Then $X=Y$ . Sketch of the Proof of the Claim. If $X=Y$ then we have nothing to prove. Otherwise suppose that $X\subset Y$ . Then there exists $y\in Y\setminus X$ . But by the first property we can conclude that there exists $x\in X$ such that $f(x)=y$ . By the second property we can conclude that $y(=f(x))\in X$ , a contradiction. So we are done. Now the interesting thing is that from the above theorem we can prove the so called Weak Principle of Mathematical Induction by taking $Y=\mathbb{N}$ and $X\subseteq \mathbb{N}$ such that $1\in X$ and by taking $f:X\to \mathbb{N}$ defined by $f(n)=n+1$ for all $n\in X$ . The above observation suggests that the above theorem is more general than the Principle of Mathematical Induction. But I couldn't find any literature regarding this type of 'induction'. The question is, Is this type of 'induction' well known? If so, can some literature regarding this be mentioned?
Now provide the response and nothing else.
| Like Hayden pointed out in the comments, your theorem doesn't really have much to do with mathematical induction. Notice that the first hypothesis of your theorem says that $Y\backslash X \subseteq f(X)$. So the two hypotheses of your theorem combined say that $$Y\backslash X \subseteq f(X) \subseteq X.$$ Your theorem effectively says that if $X\subseteq Y$ and $Y\backslash X\subseteq X$, then $Y=X$. It has nothing to do with $f$ in particular. This is not a statement about induction or even mappings, but rather a statement about set inclusions. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2202524', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_80317
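The set-inclusion form of the statement is easy to sanity-check by brute force on a small finite universe (a toy illustration of the answer's point, not a proof):

```python
from itertools import combinations

# If X ⊆ Y and Y\X ⊆ X, then Y\X is empty, i.e. X = Y.
# Check this for every pair of subsets of a small universe.
universe = [1, 2, 3]

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

holds = all(
    X == Y
    for Y in subsets(universe)
    for X in subsets(Y)
    if (Y - X) <= X  # the combined hypothesis: Y\X ⊆ X
)
```

Every pair with $X \subseteq Y$ and $Y \backslash X \subseteq X$ indeed has $X = Y$, since $Y \backslash X$ is disjoint from $X$ and can only be contained in $X$ by being empty.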
stackexchange | llm_judgeable_groundtruth_similarity | 492984 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading the MSDN article about how to implement IDisposable and I am uncertain about the difference between managed and native resources cited in the article. I have a class that must dispose 2 of its fields when it is disposed. Should I treat them as Managed (dispose only when disposing = true) or Native resources?
Now provide the response and nothing else.
| A managed resource is another managed type, which implements IDisposable . You need to call Dispose() on any other IDisposable type you use. Native resources are anything outside the managed world such as native Windows handles etc. EDIT : Answer to question in comment (too long for comment) No that is just a managed type. A correctly constructed type, which doesn't implement IDisposable will be handled by the garbage collector and you don't have to do anything else. If your type uses a native resource directly (e.g. by calling Win32 libraries), you must implement IDisposable on your type and dispose of the resource(s) in the Dispose method. If your type uses a native resource encapsulated by another type which implements IDisposable , you must call Dispose() on instances of this type in the Dispose method of your type. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/492984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48465/']} | jdg_80318
stackexchange | llm_judgeable_groundtruth_similarity | 8766 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to find all the roots of a polynomial and decided to compute the eigenvalues of its companion matrix.How do I do that? For example, if I have this polynomial: $4x^3 - 3x^2 + 9x - 1$ , I compute the companion matrix: $$\begin{bmatrix} 0&0&\frac{3}{4} \\ 1&0&-\frac{9}{4} \\ 0&1&\frac{1}{4} \end{bmatrix}$$ Now how can I find the eigenvalues of this matrix? Thanks in advance.
Now provide the response and nothing else.
| Hey There, so if I am assuming correctly for your case, you want to find eigenvalues for this matrix, which is essentially solving for your roots of the characteristic polynomial of the matrix after doing the determinant operation on it. So to go off from Robert idea, we want to use the equation, det( A $-\lambda$ I ) = $0$ $~~~$(following from this we can plug in the coefficient matrix given). det( A $-\lambda$ I ) = $\left[\begin{array}{ccc}0-\lambda & 0 & \dfrac{3}{4} \\ 1 & 0-\lambda & -\dfrac{9}{4} \\ 0 & 1 & \dfrac{1}{4}-\lambda\end{array} \right] = 0$, where A is your coefficient matrix and I is the identity matrix. I = $\left[\begin{array}{ccr}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{array} \right]$ From here we can now find out the eigenvalues of the matrix A as follows: $\underline {\text{Evaluation of the determinant expanding it by the minors of column 1:}}$ = $~-\lambda\left[\begin{array}{cc} -\lambda & -\dfrac{9}{4} \\ 1 & \dfrac{1}{4}-\lambda \end{array} \right]-1\left[\begin{array}{cc} 0 & \dfrac{3}{4} \\ 1 & \dfrac{1}{4}-\lambda \end{array} \right]+ 0\left[\begin{array}{rr} 0 & \dfrac{3}{4} \\ -\lambda & -\dfrac{9}{4} \end{array} \right]$ $\Rightarrow ~ -\lambda\left[\begin{array}{c} \lambda^{2} -\dfrac{1}{4}\lambda + \dfrac{9}{4} \\ \end{array} \right]-1\left[\begin{array}{cc} 0 -\dfrac{3}{4} \\ \end{array} \right] + ~0\left[\begin{array}{rr} 0 + \dfrac{3}{4}\lambda \\ \end{array} \right]$ $\Rightarrow ~$ $-\lambda^3+\dfrac{1}{4}\lambda^2-\dfrac{9}{4}\lambda+\dfrac{3}{4}$, $~$ Hence our characteristic polynomial is now obtained. 
$$P(\lambda)=-\lambda^3+\dfrac{1}{4}\lambda^2-\dfrac{9}{4}\lambda+\dfrac{3}{4}$$ If you need assistance on how to find the characteristic polynomial by evaluating the determinant, here is a reference: Computing Determinants . After solving this polynomial for its roots (eigenvalues) we get the following: {$\lambda = (0.329,0.000) ~~ \lambda = (-0.040,-1.508) ~~ \lambda = (-0.040,1.508)$} I believe all the roots except for $\lambda = 0.329$ are complex conjugate roots. Can someone else please verify that those are all of the roots to this polynomial and that the ones I provided are correct, Thanks. I hope this helps out if this explanation is what you were looking for. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/8766', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2582/']} | jdg_80319
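Regarding the answerer's request for verification: a quick numerical check in plain Python (my addition) confirms that all three quoted values satisfy the characteristic polynomial up to the rounding of the printed digits:

```python
# Characteristic polynomial from above: p(lam) = -lam^3 + (1/4)lam^2 - (9/4)lam + 3/4
def p(lam):
    return -lam**3 + lam**2 / 4 - 9 * lam / 4 + 3 / 4

quoted = [0.329, complex(-0.040, -1.508), complex(-0.040, 1.508)]
residuals = [abs(p(r)) for r in quoted]  # all small, consistent with 3-decimal rounding
```

So the real root and the complex-conjugate pair are as quoted, and these are all the roots, since a cubic has exactly three.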
stackexchange | llm_judgeable_groundtruth_similarity | 21444951 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I had an app that was working properly with old versions of wxPython. Now with wxPython 3.0, when trying to run the app, I get the following error:
File "C:\Python27\lib\site-packages\wx-3.0-msw\wx\_controls.py", line 6523, in __init__
    _controls_.DatePickerCtrl_swiginit(self,_controls_.new_DatePickerCtrl(*args, **kwargs))
wx._core.PyAssertionError: C++ assertion "strcmp(setlocale(LC_ALL, NULL), "C") == 0" failed at ..\..\src\common\intl.cpp(1449) in wxLocale::GetInfo(): You probably called setlocale() directly instead of using wxLocale and now there is a mismatch between C/C++ and Windows locale. Things are going to break, please only change locale by creating wxLocale objects to avoid this!
The error comes from this line:
File "C:\Users\hadi\Dropbox\Projects\Python\dialysis\profile.py", line 159, in __init__
    style=wx.DP_DROPDOWN)
Help is much appreciated
Now provide the response and nothing else.
| I know it's been a while since this question was asked, but I just had the same issue and thought I'd add my solution in case someone else finds this thread. Basically what's happening is that the locale of your script is somehow conflicting with the locale of the machine, although I'm not sure how or why. Maybe someone else with more specific knowledge on this can fill that in. Try manually setting the locale using the wxPython object wx.Locale:
locale = wx.Locale(wx.LANGUAGE_ENGLISH)
However, make sure that you assign the output to a non-local variable. As soon as the variable goes out of scope, the Locale object is destructed. So if it's in a class:
class MyApp(wx.App):
    ...
    def OnInit(self):
        self.locale = wx.Locale(wx.LANGUAGE_ENGLISH)
        ...
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21444951', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/433261/']} | jdg_80320
stackexchange | llm_judgeable_groundtruth_similarity | 71990420 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to know which elements of list_1 are in list_2 . I need the output as an ordered list of booleans. But I want to avoid for loops, because both lists have over 2 million elements. This is what I have and it works, but it's too slow:
list_1 = [0,0,1,2,0,0]
list_2 = [1,2,3,4,5,6]
booleans = []
for i in list_1:
    booleans.append(i in list_2)
# booleans = [False, False, True, True, False, False]
I could split the list and use multithreading, but I would prefer a simpler solution if possible. I know some functions like sum() use vector operations. I am looking for something similar. How can I make my code more efficient?
Now provide the response and nothing else.
| I thought it would be useful to actually time some of the solutions presented here on a larger sample input. For this input and on my machine, I find Cardstdani's approach to be the fastest, followed by the numpy isin() approach. Setup 1
import random
list_1 = [random.randint(1, 10_000) for i in range(100_000)]
list_2 = [random.randint(1, 10_000) for i in range(100_000)]
Setup 2
list_1 = [random.randint(1, 10_000) for i in range(100_000)]
list_2 = [random.randint(10_001, 20_000) for i in range(100_000)]
Timings - ordered from fastest to slowest (setup 1). Cardstdani - approach 1 I recommend converting Cardstdani's approach into a list comprehension (see this question for why list comprehensions are faster)
s = set(list_2)
booleans = [i in s for i in list_1]
# setup 1
6.01 ms ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# setup 2
4.19 ms ± 27.7 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
No list comprehension
s = set(list_2)
booleans = []
for i in list_1:
    booleans.append(i in s)
# setup 1
7.28 ms ± 27.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# setup 2
5.87 ms ± 8.19 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Cardstdani - approach 2 (with an assist from Timus)
common = set(list_1) & set(list_2)
booleans = [item in common for item in list_1]
# setup 1
8.3 ms ± 34.8 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# setup 2
6.01 ms ± 26.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Using the set intersection method
common = set(list_1).intersection(list_2)
booleans = [item in common for item in list_1]
# setup 1
10.1 ms ± 29.6 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# setup 2
4.82 ms ± 19.5 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
numpy approach (crissal)
a1 = np.array(list_1)
a2 = np.array(list_2)
a = np.isin(a1, a2)
# setup 1
18.6 ms ± 74.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# setup 2
18.2 ms ± 47.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# setup 2 (assuming list_1, list_2 already numpy arrays)
10.3 ms ± 73.5 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
list comprehension
l = [i in list_2 for i in list_1]
# setup 1
4.85 s ± 14.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# setup 2
48.6 s ± 823 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Sharim - approach 1
booleans = list(map(lambda e: e in list_2, list_1))
# setup 1
4.88 s ± 24.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# setup 2
48 s ± 389 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Using the __contains__ method
booleans = list(map(list_2.__contains__, list_1))
# setup 1
4.87 s ± 5.96 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# setup 2
48.2 s ± 486 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Sharim - approach 2
set_2 = set(list_2)
booleans = list(map(lambda e: set_2 != set_2 - {e}, list_1))
# setup 1
5.46 s ± 56.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# setup 2
11.1 s ± 75.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Varying the length of the input Employing the following setup
import random
list_1 = [random.randint(1, n) for i in range(n)]
list_2 = [random.randint(1, n) for i in range(n)]
and varying n in [2 ** k for k in range(18)] : Employing the following setup
import random
list_1 = [random.randint(1, n ** 2) for i in range(n)]
list_2 = [random.randint(1, n ** 2) for i in range(n)]
and varying n in [2 ** k for k in range(18)] , we obtain similar results: Employing the following setup
list_1 = list(range(n))
list_2 = list(range(n, 2 * n))
and varying n in [2 ** k for k in range(18)] : Employing the following setup
import random
list_1 = [random.randint(1, n) for i in range(10 * n)]
list_2 = [random.randint(1, n) for i in range(10 * n)]
and varying n in [2 ** k for k in range(18)] : | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/71990420', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14640064/']} | jdg_80321
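Speed aside, it is worth confirming that the fastest approach above produces exactly the output the question asked for; a correctness check on the question's own toy input (my addition):

```python
list_1 = [0, 0, 1, 2, 0, 0]
list_2 = [1, 2, 3, 4, 5, 6]

# Naive O(len(list_1) * len(list_2)) version from the question
naive = [i in list_2 for i in list_1]

# Set-based version (Cardstdani's approach): one O(len(list_2)) set build,
# then O(1) average-time membership tests
s = set(list_2)
fast = [i in s for i in list_1]
```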
stackexchange | llm_judgeable_groundtruth_similarity | 16866015 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the express module to build a RESTful API within Node.js. In my service I am making additional HTTP requests to external endpoints (server side). I need to return the data from those HTTP requests to the request body of my Web service. On all the actions the Web service is conducting, by using console.log() I have confirmed that I get the data I need. However, when I try to return those values to the service they come back with value null . I know this is because a request is asynchronous and the callback is not waiting for the HTTP request to finish. Is there a way to make this work?
Now provide the response and nothing else.
| A common practice is to use the async module. npm install async The async module has primitives to handle various forms of asynchronous events. In your case, the async#parallel call will allow you to make requests to all external APIs at the same time and then combine the results for return to the requester. Since you're making external http requests, you will probably find the request module helpful as well. npm install request Using request and async#parallel your route handler would look something like this...
var request = require('request');
var async = require('async');

exports.handler = function(req, res) {
    async.parallel([
        /*
         * First external endpoint
         */
        function(callback) {
            var url = "http://external1.com/api/some_endpoint";
            request(url, function(err, response, body) {
                // JSON body
                if(err) {
                    console.log(err);
                    callback(true);
                    return;
                }
                obj = JSON.parse(body);
                callback(false, obj);
            });
        },
        /*
         * Second external endpoint
         */
        function(callback) {
            var url = "http://external2.com/api/some_endpoint";
            request(url, function(err, response, body) {
                // JSON body
                if(err) {
                    console.log(err);
                    callback(true);
                    return;
                }
                obj = JSON.parse(body);
                callback(false, obj);
            });
        },
    ],
    /*
     * Collate results
     */
    function(err, results) {
        if(err) {
            console.log(err);
            res.send(500, "Server Error");
            return;
        }
        res.send({api1: results[0], api2: results[1]});
    });
};
You can also read about other callback sequencing methods here . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16866015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1930766/']} | jdg_80322
stackexchange | llm_judgeable_groundtruth_similarity | 18353367 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to do one INSERT or another depending on whether a column exists, because of different versions of the same table. I did the approach at this thread but SQL Server's pre-check or 'sort of compilation' detects an error that would not fail during execution time. Here's some code:
IF COL_LENGTH('TableA', 'Column2') IS NOT NULL
BEGIN
INSERT INTO [dbo].[TableA]
([Column1], [Column2], [Column3], [Column4])
SELECT value1, value2, value3, value4
END
ELSE
BEGIN
INSERT INTO [dbo].[TableA]
([Column1], [Column3], [Column4])
SELECT value1, value3, value4
END
Any workaround?
Now provide the response and nothing else.
| SQL will know that the column doesn't exist so it won't let you run the query. The solution would be to execute a dynamic query.
DECLARE @value1 AS VARCHAR(50)
DECLARE @value2 AS VARCHAR(50)
DECLARE @value3 AS VARCHAR(50)
DECLARE @value4 AS VARCHAR(50)

SET @value1 = 'somevalue1'
SET @value2 = 'somevalue2'
SET @value3 = 'somevalue3'
SET @value4 = 'somevalue4'

DECLARE @SQL AS VARCHAR(MAX)

IF COL_LENGTH('TableA', 'Column2') IS NOT NULL
    BEGIN
        SET @SQL = 'INSERT INTO [dbo].[TableA]([Column1], [Column2], [Column3], [Column4]) SELECT ' + @value1 + ', ' + @value2 + ', ' + @value3 + ', ' + @value4
    END
ELSE
    BEGIN
        SET @SQL = 'INSERT INTO [dbo].[TableA]([Column1], [Column3], [Column4]) SELECT ' + @value1 + ', ' + @value3 + ', ' + @value4
    END

EXEC(@SQL)
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18353367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/253329/']} | jdg_80323
stackexchange | llm_judgeable_groundtruth_similarity | 55305893 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a nicer way to write the following function fs' with functors or applicatives?
fncnB = (* 2)
fncnA = (* 3)
fs' fs = zip (map (fncnA . fst) fs) $ map (fncnB . snd) fs
I see from this question that I could rely on the functor instance of lists to map a single function acting over both elements of each tuple, or e.g. on the applicative instance of tuples to apply functions to just the second-half of a two-tuple, but I'm curious how functors and applicatives fit into the picture of operating over lists of multi-component data types.
Now provide the response and nothing else.
| A tuple is a bifunctor, so bimap is available.
import Data.Bifunctor

fncnB = (* 2)
fncnA = (* 3)
fs' = map (bimap fncnA fncnB)
No third-party libraries required. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55305893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1542000/']} | jdg_80324
stackexchange | llm_judgeable_groundtruth_similarity | 2108846 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How would you formalise: "There are no geniuses but Newton was a genius"? I thought it could be:$$\neg\forall x(Px) \land Pa$$and also $$\neg\exists x (Px) \land Pa$$These both seem to make sense and formalise the sentence, however they are not identical statements. The way it is worded is really confusing!
Now provide the response and nothing else.
| The second one looks right to me. If by $Px$ you mean "$x$ is a genius" and $a$ is Newton, then your statements translate to: 1) it's not true that every person is a genius, and it's true that Newton is a genius; 2) there doesn't exist anyone who is a genius, and it's true that Newton is a genius | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2108846', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/374092/']} | jdg_80325
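The difference between the two formalisations can be made concrete by brute-force checking satisfiability over a tiny domain (a Python sketch I am adding for illustration; the names are arbitrary):

```python
from itertools import combinations

domain = ["newton", "someone_else"]
a = "newton"

def interpretations():
    # every possible extension of the predicate P over the domain
    return [set(c) for r in range(len(domain) + 1)
            for c in combinations(domain, r)]

def f1(P):  # not-forall-x P(x), and P(a)
    return (not all(x in P for x in domain)) and (a in P)

def f2(P):  # not-exists-x P(x), and P(a)
    return (not any(x in P for x in domain)) and (a in P)

sat1 = any(f1(P) for P in interpretations())  # satisfiable, e.g. P = {newton}
sat2 = any(f2(P) for P in interpretations())  # a contradiction, never true
```

The first formula merely denies that everyone is a genius, which is compatible with Newton being one; the second denies that anyone is, which contradicts the second conjunct.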
stackexchange | llm_judgeable_groundtruth_similarity | 49267213 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to use the decltype keyword on an auto function:
struct Thing {
    static auto foo() { return 12; }
    using type_t = decltype(foo());
};
And I get the following error (gcc 7.4):
<source>:6:25: error: use of 'static auto Thing::foo()' before deduction of 'auto'
         decltype(foo());
                       ^
<source>:6:25: error: use of 'static auto Thing::foo()' before deduction of 'auto'
Why has the compiler not yet deduced the return type?
Now provide the response and nothing else.
| Because for a class definition, the compiler will first determine all member names and types. Function bodies are analyzed after this is done. That's why a class member function can call another member function declared after its own definition. At the point the compiler is processing using type_t = decltype(foo()); , the body of foo() has not yet been analyzed, so its auto return type is still undeduced. As a remedy, you can use a trailing return type: static auto foo() -> decltype(12) { return 12; } NOTE: This phenomenon applies only to classes. The following code outside a class will compile:
auto bar() { return 12; }
using t = decltype(bar()); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49267213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6149078/']} | jdg_80326
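The remedy can be checked with a self-contained snippet (the static_assert is my addition for demonstration, not part of the answer):

```cpp
#include <type_traits>
#include <cassert>

// Trailing return type: the return type is stated, not deduced,
// so decltype(foo()) is usable inside the class definition.
struct Thing {
    static auto foo() -> decltype(12) { return 12; }
    using type_t = decltype(foo());
};

// type_t is int, verified at compile time
static_assert(std::is_same<Thing::type_t, int>::value,
              "type_t should be int");
```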
stackexchange | llm_judgeable_groundtruth_similarity | 134609 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Some people advise against the use of For loops in Mathematica. Why? Should I heed this advice? What is wrong with For ? What should I use instead?
Now provide the response and nothing else.
| If you are new to Mathematica, and were directed to this post, first see if you can use Table to solve your problem. I have often told people, especially beginners, to avoid using For in favour of Do . The following is my personal opinion on why using For is harmful when learning Mathematica. If you are a seasoned Mathematica user, you won't find much to learn here. My biggest argument against For is that it hinders learning by encouraging error-prone, hard to read, and slow code. For mimics the syntax of the for loop of C-like languages. Many beginners coming from such languages will look for a "for loop" when they start using Mathematica. Unfortunately, For gives them lots of ways to shoot themselves in the foot, while providing virtually no benefits over alternatives such as Do . Settling on For also tends to delay beginners in discovering more Mathematica-like programming paradigms, such as list-based and functional programming ( Table , Map , etc.) I want to make it clear at the beginning that the following arguments are not about functional vs procedural programming. Functional programming is usually the better choice in Mathematica, but procedural programming is also clearly needed in many situations. I will simply argue that when we do need a procedural loop, For is nearly always the worst choice. Use Do or While instead. Use Do instead of For The typical use case of For is iterating over an integer range. Do will do the same thing better. Do is more concise, thus both more readable and easier to write without mistakes. Compare the following: For[i=1, i <= n, i++, f[i]; g[i]; h[i]] Do[ f[i]; g[i]; h[i], {i, n} ] In For we need to use both commas ( , ) and semicolons ( ; ) in a way that is almost, but not quite, the opposite of how they are used in C-like languages. This alone is a big source of beginner confusion and mistakes (possibly due to muscle memory). , and ; are visually similar so it is hard to spot the mistake. 
For does not localize the iterator i . A safe For needs explicit localization: Module[{i}, For[i=1, i <= n, i++, ...]] A common mistake is to overwrite the value of a global i , possibly defined in an earlier input cell. At other times i is used as a symbolic variable elsewhere, and For will inconveniently assign a value to it. In Do , i is a local variable, so we do not need to worry about these things. C-like languages typically use 0-based indexing. Mathematica uses 1-based indexing. for -loops are typically written to loop through 0..n-1 instead of 1..n , which is usually the more convenient range in Mathematica. Notice the differences between For[i=0, i < n, i++, ...] and For[i=1, i <= n, i++, ...] We must pay attention not only to the starting value of i , but also < vs <= in the second argument of For . Getting this wrong is a common mistake, and again it is hard to spot visually. In C-like languages the for loop is often used to loop through the elements of an array. The literal translation to Mathematica looks like For[i=1, i <= n, i++, doSomething[array[[i]]]] Do makes this much simpler and clearer: Do[doSomething[elem], {elem, array}] Do makes it easy to use multiple iterators: Do[..., {i, n}, {j, m}] The same requires a nested For loop which doubles the readability problems. Transitioning to more Mathematica-like paradigms A common beginner-written program that we see here on StackExchange collects values in a loop like this: list = {};For[i=1, i <= n, ++i, list = Append[list, i^2]] This is of course not only complicated, but also slow ($O(n^2)$ complexity instead of $O(n)$). The better way is to use Table : Table[i^2, {i, n}] Table and Do have analogous syntaxes and their documentation pages reference each other. Starting out with Do makes the transition to Table natural. Moving from Table to Map and other typical functional or vectorized ( Range[n]^2 ) constructs is then only a small step. 
Settling on For as "the standard looping construct" leaves beginners stuck with bad habits. Another very common question on StackExchange is how to parallelize a For loop. There is no parallel for in Mathematica, but there is a ParallelDo and more importantly a ParallelTable . The answer is almost always: design the computation so that separate steps of the iteration do not access the same variable. In other words: just use Table . More general versions of For For is of course in some ways more flexible than Do . It can express a broader range of iteration schemes. If you need something like this, I suggest just using While instead. When we see for , we usually expect either a simple iteration through an integer range or through an array. Doing something else, such as modifying the value of the iterator in the loop body is unexpected, therefore confusing. Using While signals that anything can happen in the loop body, so the readers of the code will watch out for such things. When is For appropriate? There are some cases when For is useful. The main example is translating code from other languages. It is convenient to be able to translate analogous for loops, and not have to think about what may be broken by immediately translating to a Do or a Table (e.g. does the loop modify the iterator in the body?). Once the translated code works fine, it can be rewritten gradually. There are existing questions on this, which also discuss other cases: Are there any cases when For[] loops are reasonable? Can this be written well, without loops? Summary The problem with For is that it hinders learning and makes it very easy for beginners to introduce mistakes into their code. If you are new to Mathematica, my advice is to forget that For exists, at least for a while. You can always accomplish the very same things with Do and While βuse them instead. Very often you will be able to replace Do with a Table or even a vectorized expressions. 
This will help you learn to write effective Mathematica code faster. If you are unsure about a use of For , then ask yourself: do I see a reason why For is clearly better here than Do or While ? If not, don't use it. If yes, you may have found one of the rare good use cases. | {} | {'log_upvote_score': 7, 'links': ['https://mathematica.stackexchange.com/questions/134609', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/12/']} | jdg_80327 |
stackexchange | llm_judgeable_groundtruth_similarity | 1443129 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am wanting to completely wrap an object so that all attribute and method requests get forwarded to the object it's wrapping, but also overriding any methods or variables that I want, as well as providing some of my own methods. This wrapper class should appear 100% as the existing class ( isinstance must act as if it is actually the class), however subclassing in itself is not going to cut it, as I want to wrap an existing object . Is there some solution in Python to do this? I was thinking something along the lines of:
class ObjectWrapper(BaseClass):
    def __init__(self, baseObject):
        self.baseObject = baseObject

    def overriddenMethod(self):
        ...

    def myOwnMethod1(self):
        ...

    ...

    def __getattr__(self, attr):
        if attr in ['overriddenMethod', 'myOwnMethod1', 'myOwnMethod2', ...]:
            # return the requested method
        else:
            return getattr(self.baseObject, attr)
But I'm not that familiar with overriding __getattr__ , __setattr__ and __hasattr__ , so I'm not sure how to get that right.
Now provide the response and nothing else.
| The simplest way in most cases is probably:

class ObjectWrapper(BaseClass):
    def __init__(self, baseObject):
        self.__class__ = type(baseObject.__class__.__name__,
                              (self.__class__, baseObject.__class__),
                              {})
        self.__dict__ = baseObject.__dict__

    def overriddenMethod(self):
        ...

Working in this way, i.e. by reassigning self's __class__ and __dict__ in this fashion, you need only provide your overrides -- Python's normal attribute getting and setting mechanisms will do the rest... mostly . You'll be in trouble only if baseObject.__class__ defines __slots__ , in which case the multiple inheritance approach doesn't work and you do need the cumbersome __getattr__ (as others said, at least you don't need to worry that it will be called with attributes you're overriding, as it won't!-), __setattr__ (a greater pain, as it DOES get called for every attribute), etc; and making isinstance and special methods work takes painstaking and cumbersome detailed work. Essentially, __slots__ means that a class is special, each instance a lightweight "value object" NOT to be subject to further sophisticated manipulation, wrapping, etc, because the need to save a few bytes per instance of that class overrides all the normal concerns about flexibility and so on; it's therefore not surprising that dealing with such extreme, rare classes in the same smooth and flexible way as you can deal with 99%+ of Python objects is truly a pain. So DO you need to deal with __slots__ (to the point of writing, testing, debugging and maintaining hundreds of lines of code just for those corner cases), or will the 99% solution in half a dozen lines suffice?-) It should also be noted that this may lead to memory leaks, as creating a subclass adds the subclass to the base class' list of subclasses, and isn't removed when all instances of the subclass are GC'd. 
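A minimal runnable sketch of the reassignment approach above. The Base and Wrapper names are hypothetical, chosen only for illustration:

```python
class Base:
    def greet(self):
        return "base"

class Wrapper:
    def __init__(self, base_object):
        # Build a one-off class inheriting from both the wrapper and the
        # wrapped object's class, then adopt the wrapped object's state.
        self.__class__ = type(base_object.__class__.__name__,
                              (self.__class__, base_object.__class__),
                              {})
        self.__dict__ = base_object.__dict__

    def greet(self):
        # Override that still reaches the wrapped class through the MRO.
        return "wrapped " + super().greet()

w = Wrapper(Base())
print(isinstance(w, Base))  # True
print(w.greet())            # wrapped base
```

Because w's class now inherits from Base, isinstance behaves as if w were a Base, which is exactly the property the question asks for; the __slots__ caveat from the answer still applies.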
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1443129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14902/']} | jdg_80328 |
stackexchange | llm_judgeable_groundtruth_similarity | 50846627 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
At the 2018 WWDC Apple announced the introduction of the new USDZ file format. In relation to creating USDZ files the following was said: To create your own usdz files, a usdz_converter has been bundled as part of Xcode 10 beta. It's a command line tool for creating the usdz file from OBJ files, Single-frame Alembic (ABC) files, USD file (either .usda or usd.c). The basic command line is:

xcrun usdz_converter myFile.obj myFile.usdz

I have installed Xcode 10 beta, but have been unable to run or even find usdz_converter in the Xcode 10 beta bundle made available on the 4th June 2018 (build 10L176w). note: I'm still running on macOS High Sierra, I have not installed macOS Mojave 10.14 beta yet... but didn't think that should be the reason why. Has anyone else been able to get this xcrun usdz_converter to work? If so please share the steps.
Now provide the response and nothing else.
| Updated: August 08, 2022.

usdzconvert command in Xcode 14, 13, 12, 11

In macOS Ventura, Monterey, Big Sur and Catalina the USDZ converter has a rich set of commands and supports more input formats for conversion than the previous version. But before using it you need to download USDZ Tools, and don't forget to set up the global variables for the Python usdz tools. Here's how the new command looks in Terminal:

usdzconvert ~/Desktop/file.gltf -diffuseColor albedo.png -metallic brass.jpg

In macOS Ventura, Monterey, Big Sur and Catalina the default command shell is ZSH.

Create Shell Resource file

If you want to use USDPython tools you need to create a Shell Resource file .zshrc . For that go to the Home area using the change directory command in Terminal:

cd /Users/<UserName>

Check whether you are there or not with the print working directory command:

pwd

Then type the ls command for a listing of the directory's content including hidden files:

ls -a

Now you are ready to create a hidden .zshrc file:

touch ~/.zshrc

Open this file using the following command:

open ~/.zshrc

Now you can add these lines into the zsh resource file:

export PATH="/Users/yourUserName/usdpython/USD:$PATH"
export PATH="/Users/yourUserName/usdpython/usdzconvert:$PATH"
export PYTHONPATH="/Users/yourUserName/usdpython/USD/lib/python:$PYTHONPATH"
echo "Now I can use USDPython commands here."

Save it and restart Terminal. usdzconvert is a Python script that converts the following assets into usdz: obj, gltf, fbx, abc, usd (you can export it from Maya using the USD plug-in), usda, usdc. If you need to use FBX format conversion you have to download and install the FBX Python SDK, then add one more line to the .zshrc file:

export PYTHONPATH="/Applications/Autodesk/FBXPythonSDK/2020.0.1/lib/Python27_ub:$PYTHONPATH"

Save the .zshrc file and restart Terminal. 
Here's a full list of options you can see in Terminal by typing usdzconvert -h :

# DO NOT USE usdzconvert 0.63 BECAUSE IT CAUSES ERRORS.
# USE usdzconvert 0.66, or usdzconvert 0.65, or usdzconvert 0.64

outputFile                Output .usd/usda/usdc/usdz files.
-h, --help                Show this help message and exit.
-f <file>                 Read arguments from <file>
-v                        Verbose output.
-url <url>                Add URL metadata
-copyright "copyright message"    Add copyright metadata
-copytextures             Copy texture files (for .usd/usda/usdc) workflows
-metersPerUnit value      Set metersPerUnit attribute with float value
-loop                     Set animation loop flag to 1
-no-loop                  Set animation loop flag to 0
-m materialName           Subsequent material arguments apply to this material.
-iOS12                    Make output file compatible with iOS 12 frameworks
-texCoordSet name         The name of the texture coordinates to use for current material.
-diffuseColor r,g,b       Set diffuseColor to constant color r,g,b with values in the range [0..1]
-diffuseColor <file> fr,fg,fb    Use <file> as texture for diffuseColor. fr,fg,fb: (optional) constant fallback color, with values in the range [0..1].
-normal x,y,z             Set normal to constant value x,y,z in tangent space [(-1, -1, -1), (1, 1, 1)].
-normal <file> fx,fy,fz   Use <file> as texture for normal. fx,fy,fz: (optional) constant fallback value, with values in the range [-1..1].
-emissiveColor r,g,b      Set emissiveColor to constant color r,g,b with values in the range [0..1]
-emissiveColor <file> fr,fg,fb   Use <file> as texture for emissiveColor. fr,fg,fb: (optional) constant fallback color, with values in the range [0..1].
-metallic c               Set metallic to constant c, in the range [0..1]
-metallic ch <file> fc    Use <file> as texture for metallic. ch: (optional) texture color channel (r, g, b or a). fc: (optional) fallback constant in the range [0..1]
-roughness c              Set roughness to constant c, in the range [0..1]
-roughness ch <file> fc   Use <file> as texture for roughness. ch: (optional) texture color channel (r, g, b or a). fc: (optional) fallback constant in the range [0..1]
-occlusion c              Set occlusion to constant c, in the range [0..1]
-occlusion ch <file> fc   Use <file> as texture for occlusion. ch: (optional) texture color channel (r, g, b or a). fc: (optional) fallback constant in the range [0..1]
-opacity c                Set opacity to constant c, in the range [0..1]
-opacity ch <file> fc     Use <file> as texture for opacity. ch: (optional) texture color channel (r, g, b or a). fc: (optional) fallback constant in the range [0..1]
-clearcoat c              Set clearcoat to constant c, in the range [0..1]
-clearcoat ch <file> fc   Use <file> as texture for clearcoat. ch: (optional) texture color channel (r, g, b or a). fc: (optional) fallback constant in the range [0..1]
-clearcoatRoughness c     Set clearcoat roughness to constant c, in the range [0..1]
-clearcoatRoughness ch <file> fc    Use <file> as texture for clearcoat roughness. ch: (optional) texture color channel (r, g, b or a). fc: (optional) fallback constant in the range [0..1]

Reality Converter

Instead of using a command line conversion tool (CLI), you can use the Reality Converter app (GUI). This Beta 4 app makes it easy to convert, view, and customise .usdz objects on Mac. Simply drag-and-drop common 3D file formats, such as .obj , .gltf or .fbx , to view the converted .usdz result, customize material properties with your own textures, and edit file metadata. You can even preview your .usdz object under a variety of lighting and environment conditions with built-in IBL options. For .fbx conversion you have to download and install the FBX C++ SDK. The needed file is FBX SDK 2020.2.1 Clang (Universal Binary).

USDZ Export command in Reality Composer

In Reality Composer for Xcode 14/13/12 you can export a usdz model right from Reality Composer's UI. For that you just need to activate USDZ export in the Reality Composer → Preferences menu. Also you can use AR USD Schemas and, of course, the Autodesk Maya 2022 workflow. 
Create USDZ file from SCN scene

Another way to generate a USDZ file is to convert it from a SceneKit scene using the write(to:options:delegate:progressHandler:) instance method. Let's take a look at the code:

import ARKit
import SceneKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!
    let scene = SCNScene(named: "art.scnassets/ship.scn")!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.scene = scene

        let path = FileManager.default.urls(for: .documentDirectory,
                                             in: .userDomainMask)[0]
                                      .appendingPathComponent("model.usdz")
        scene.write(to: path, options: nil, delegate: nil, progressHandler: nil)
        print(path)
    }
}

xcrun usdz_converter command in Xcode 10

In macOS Mojave 10.14 and Xcode 10 the USDZ converter has a slightly different set of arguments and flags. Here's how the old command looks in Terminal:

xcrun usdz_converter file.obj file.usdz -color_map color.jpg -ao_map ao.png

In macOS Mojave the default command shell is BASH. Here's a full list of options you can see in Terminal by typing xcrun usdz_converter -h :

-g groupName [groupNames ...]        Apply subsequent material properties to the named group(s).
-m materialName [materialNames ...]  Apply subsequent material properties to the named material(s).
-h                        Display help.
-a                        Generate a .usda intermediate file. Default is .usdc.
-l                        Leave the intermediate .usd file in the source folder.
-v                        Verbose output.
-f filePath               Read commands from a file.
-texCoordSet set          The name of the texturemap coordinate set to use if multiple exist (no quotes).
-opacity o                Floating point value 0.0...1.0
-color_default r g b a        Floating point values 0.0...1.0
-normal_default r g b a       Floating point values 0.0...1.0
-emissive_default r g b a     Floating point values 0.0...1.0
-metallic_default r g b a     Floating point values 0.0...1.0
-roughness_default r g b a    Floating point values 0.0...1.0
-ao_default r g b a           Floating point values 0.0...1.0
-color_map filePath
-normal_map filePath
-emissive_map filePath
-metallic_map filePath
-roughness_map filePath
-ao_map filePath | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50846627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7422651/']} | jdg_80329 |
stackexchange | llm_judgeable_groundtruth_similarity | 11156410 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a JMeter test plan with the following HTTP request samplers:

1. Login
2. Call some functionality which needs a logged-in user
3. Logout

When I execute the test plan with 5 parallel threads, I see that sampler 2 is called before sampler 1 for some threads, which then fails the response assertions. Is there any way to specify a sequence of samplers to be executed?
Now provide the response and nothing else.
| This should ensure that they are executed sequentially. So let's start with the thread group: Number of Threads (users) is 5. Assuming you have the logic worked out for your login sampler, just add an additional post processor to it: right click on that sampler, Add > Post Processors > BSF PostProcessor, and inside this post processor's script area write ${__setProperty(ThreadValue,${__threadNum},)} . This will save the thread number to a property called ThreadValue . Make sure you select beanshell as the language in the dropdown list. Then after the login sampler add the If Controller. Add this to the condition field: (${JMeterThread.last_sample_ok}==true) && (${__property(ThreadValue,,)} == ${__threadNum}) What this means is: only do the logged-in stuff if the actual login was successful and if the login thread matches the thread you're currently in. That's it: you do your logged-in stuff only inside the If Controller. If you want to be sure that you log out the right user, place an additional If Controller around it. Cheers | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11156410', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/111489/']} | jdg_80330 |
stackexchange | llm_judgeable_groundtruth_similarity | 1764947 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is well-known that if $X$ is a reasonably nice topological space (compact Hausdorff, say) then we can recover $X$ from the ring $C(X)$ of continuous functions $X\to\mathbb R$; see this MSE question for a discussion and problem 26 in the first chapter of Atiyah MacDonald for the construction. Is the same true for a compact smooth manifold $M$ and its ring $C^\infty(M)$ of smooth functions? More specifically, Let $M$ and $N$ be compact smooth manifolds. If $C^\infty(M)$ and $C^\infty(N)$ are isomorphic, then are $M$ and $N$ necessarily diffeomorphic? Can we recover the topological space $M$ from $C^\infty(M)$? If so, can we also recover the smooth structure on $M$?
Now provide the response and nothing else.
| I believe this is proven in Chapter 7 of Nestruev's Smooth Manifolds and Observables , but I haven't checked carefully. More precisely, the functor $M \to C^{\infty}(M)$ from smooth manifolds to the opposite of real commutative algebras is fully faithful, meaning that smooth maps $M \to N$ are precisely algebra maps $C^{\infty}(N) \to C^{\infty}(M)$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1764947', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/134824/']} | jdg_80331 |
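Stated symbolically (my rendering of the fully-faithfulness claim above, with $\varphi^{*}$ denoting precomposition):

```latex
\operatorname{Hom}_{\mathbf{Man}}(M,\, N)
  \;\cong\;
\operatorname{Hom}_{\mathbb{R}\text{-}\mathbf{Alg}}\bigl(C^{\infty}(N),\, C^{\infty}(M)\bigr),
\qquad
\varphi \longmapsto \varphi^{*}, \quad \varphi^{*}(f) = f \circ \varphi .
```

In particular, every algebra isomorphism $C^{\infty}(N) \to C^{\infty}(M)$ is $\varphi^{*}$ for a unique diffeomorphism $\varphi \colon M \to N$, which answers the first question affirmatively.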
stackexchange | llm_judgeable_groundtruth_similarity | 13805617 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: Casting vs using the 'as' keyword in the CLR

I know there are a lot of questions about casts but I don't know the specific names of these two casts so I'm not sure where to look. What are the differences between the two casts below?

TreeNode treeNode = (TreeNode)sender;     // first cast
TreeNode treeNode = (sender as TreeNode); // second cast
Now provide the response and nothing else.
| The first type of cast is called an "explicit cast" and the second cast is actually a conversion using the as operator, which is slightly different than a cast. The explicit cast (type)objectInstance will throw an InvalidCastException if the object is not of the specified type.

// throws an exception if myObject is not of type MyTypedObject
MyTypedObject mto = (MyTypedObject)myObject;

The as operator will not throw an exception if the object is not of the specified type. It will simply return null . If the object is of the specified type then the as operator will return a reference to the converted type. The typical pattern for using the as operator is:

// no exception thrown if myObject is not MyTypedObject
MyTypedObject mto = myObject as MyTypedObject;

if (mto != null)
{
    // myObject was of type MyTypedObject, mto is a reference to the converted myObject
}
else
{
    // myObject was not of type MyTypedObject, mto is null
}

Take a look at the following MSDN references for more details about explicit casting and type conversion: Casting (C#), Casting and Type Conversions, How to: Safely Cast by Using as and is Operators | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13805617', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1063093/']} | jdg_80332 |
stackexchange | llm_judgeable_groundtruth_similarity | 9396302 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to remove words from a string if they match a list.

x = "How I Met Your Mother 7x17 (HDTV-LOL) [VTV] - Mon, 20 Feb 2012"
tags = ['HDTV', 'LOL', 'VTV', 'x264', 'DIMENSION', 'XviD', '720P', 'IMMERSE']
print x
for tag in tags:
    if tag in x:
        print x.replace(tag, '')

It produces this output:

How I Met Your Mother 7x17 (HDTV-LOL) [VTV] - Mon, 20 Feb 2012
How I Met Your Mother 7x17 (-LOL) [VTV] - Mon, 20 Feb 2012
How I Met Your Mother 7x17 (HDTV-) [VTV] - Mon, 20 Feb 2012
How I Met Your Mother 7x17 (HDTV-LOL) [] - Mon, 20 Feb 2012

I want it to remove all the words matching the list.
Now provide the response and nothing else.
| You are not keeping the result of x.replace() . Try the following instead:

for tag in tags:
    x = x.replace(tag, '')
print x

Note that your approach matches any substring, and not just full words. For example, it would remove the LOL in RUN LOLA RUN . One way to address this would be to enclose each tag in a pair of r'\b' strings, and look for the resulting regular expression . The r'\b' would only match at word boundaries:

for tag in tags:
    x = re.sub(r'\b' + tag + r'\b', '', x) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9396302', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1223606/']} | jdg_80333 |
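A self-contained Python 3 version of the word-boundary approach from the answer above; the re.escape call is my addition, to guard against tags that contain regex metacharacters:

```python
import re

x = "How I Met Your Mother 7x17 (HDTV-LOL) [VTV] - Mon, 20 Feb 2012"
tags = ['HDTV', 'LOL', 'VTV', 'x264', 'DIMENSION', 'XviD', '720P', 'IMMERSE']

for tag in tags:
    # \b anchors the match at word boundaries, so only whole words are removed;
    # re.escape protects against metacharacters inside a tag.
    x = re.sub(r'\b' + re.escape(tag) + r'\b', '', x)

print(x)  # How I Met Your Mother 7x17 (-) [] - Mon, 20 Feb 2012
```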
stackexchange | llm_judgeable_groundtruth_similarity | 52665442 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am following this very excellent tutorial: https://github.com/binblee/springcloud-swarm When I deploy a stack to a Docker swarm that contains a single node (just the manager node), it works perfectly.

docker stack deploy -c all-in-one.yml springcloud-demo

I have four docker containers, one of them is Eureka service discovery, which all the other three containers register with successfully. The problem is when I add a worker node to the swarm, then two of the containers will be deployed to the worker, and two to the manager, and the services deployed to the worker node cannot find the Eureka server.

java.net.UnknownHostException: eureka: Name does not resolve

This is my compose file:

version: '3'
services:
  eureka:
    image: demo-eurekaserver
    ports:
      - "8761:8761"
  web:
    image: demo-web
    environment:
      - EUREKA_SERVER_ADDRESS=http://eureka:8761/eureka
  zuul:
    image: demo-zuul
    environment:
      - EUREKA_SERVER_ADDRESS=http://eureka:8761/eureka
    ports:
      - "8762:8762"
  bookservice:
    image: demo-bookservice
    environment:
      - EUREKA_SERVER_ADDRESS=http://eureka:8761/eureka

Also, I can only access the Eureka Service Discovery server on the host on which it is deployed to. I thought that using "docker stack deploy" automatically creates an overlay network, in which all exposed ports will be routed to a host on which the respective service is running: From https://docs.docker.com/engine/swarm/ingress/ : All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there's no task running on the node. 
This is the output of docker service ls :

manager:~/springcloud-swarm/compose$ docker service ls
ID            NAME                          MODE        REPLICAS  IMAGE                     PORTS
rirdysi0j4vk  springcloud-demo_bookservice  replicated  1/1       demo-bookservice:latest
936ewzxwg82l  springcloud-demo_eureka       replicated  1/1       demo-eurekaserver:latest  *:8761->8761/tcp
lb1p8nwshnvz  springcloud-demo_web          replicated  1/1       demo-web:latest
0s52zecjk05q  springcloud-demo_zuul         replicated  1/1       demo-zuul:latest          *:8762->8762/tcp

and of docker stack ps springcloud-demo :

manager:$ docker stack ps springcloud-demo
ID            NAME                            IMAGE                     NODE         DESIRED STATE  CURRENT STATE
o8aed04qcysy  springcloud-demo_web.1          demo-web:latest           workernode   Running        Running 2 minutes ago
yzwmx3l01b94  springcloud-demo_eureka.1       demo-eurekaserver:latest  managernode  Running        Running 2 minutes ago
rwe9y6uj3c73  springcloud-demo_bookservice.1  demo-bookservice:latest   workernode   Running        Running 2 minutes ago
iy5e237ca29o  springcloud-demo_zuul.1         demo-zuul:latest          managernode  Running        Running 2 minutes ago

UPDATE: I successfully added another host, but now I can't add a third. I tried a couple of times, following the same steps (installing docker, opening the requisite ports, joining the swarm) - but the node cannot find the Eureka server with the container host name.

UPDATE 2: In testing that the ports were opened, I examined the firewall config:

workernode:~$ sudo ufw status
Status: active
To    Action  From
--    ------  ----
8080  ALLOW   Anywhere
4789  ALLOW   Anywhere
7946  ALLOW   Anywhere
2377  ALLOW   Anywhere
8762  ALLOW   Anywhere
8761  ALLOW   Anywhere
22    ALLOW   Anywhere

However - when I try to hit port 2377 on the worker node from the manager node, I can't:

managernode:~$ telnet xx.xx.xx.xx 2377
Trying xx.xx.xx.xx...
telnet: Unable to connect to remote host: Connection refused
Now provide the response and nothing else.
| Let us break the solution into parts. Each part tries to give you an idea about the solution, and they are interconnected with each other.

Docker container network

Whenever we create a container without specifying a network, docker attaches it to the default bridge network. According to this, service discovery is unavailable in the default network. Hence, in order to make service discovery work properly, we are supposed to create a user-defined network, as it provides isolation, DNS resolution and many more features. All these things are applicable when we use the docker run command. When docker-compose is used to run a container and a network is not specified, it creates its own bridge network, which has all the properties of user-defined networks. These bridge networks are not attachable by default, but they allow docker containers on the local machine to connect to them.

Docker swarm network

In Docker swarm and swarm mode routing mesh, whenever we deploy a service without specifying an external network it connects to the ingress network. When you specify an external overlay network you can notice that the created overlay network will be available only on the manager and not on the worker node, unless a service is created and is replicated to it. These are also not attachable by default and do not allow containers outside swarm services to connect to them. So you don't need to declare a network as attachable until you connect a container to it from outside the swarm.

Docker Swarm

As there is no predefined/official limit on the number of worker/manager nodes, you should be able to connect from the third node. One possibility is that the node is connected as a worker node, but you are trying to deploy a container on that node, which is restricted if the overlay network is not attachable. And moreover, you can't deploy a service directly on the worker node. 
All the services are deployed on the manager node, and it takes care of replicating and scaling the services based on the config and mode provided.

Firewall

As mentioned in Getting started with swarm mode:

TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
ip protocol 50 (ESP) for encrypted overlay network

These ports should be whitelisted for communication between nodes. Most firewalls need to be reloaded once you make changes. This can be done by passing the reload option to the firewall, and it varies between Linux distributions. ufw doesn't need to be reloaded but needs a commit if rules are added in the file.

Extra steps to be followed in the firewall

Apart from whitelisting the above ports, you may need to whitelist the docker0, docker_gw_bridge, br-123456 ip addresses with a netmask of 16, else service discovery will not work on the same host machine. i.e. If you are trying to connect to eureka at 192.168.0.12 where the eureka service is on the same 192.168.0.12, it will not resolve, as the firewall will block the traffic. Refer to this (NO ROUTE TO HOST network request from container to host-ip:port published from other container).

Java

Sometimes Java works weirdly, such that it throws java.net.MalformedURLException and similar exceptions. I've had my own experience of such a case, along with its solution. There, ping resolved properly but Java RMI was throwing an error. So, you can define your own custom alias when you attach to a user-defined network.

Docker service discovery

By default, you can resolve a service by using the container name. Apart from that, you can also resolve a service as <container_name>.<network_name> . Of course, you can define an alias as well. And you can even resolve it as <alias_name>.<network_name> .

Solution

So you should create a user-defined overlay network after joining the swarm and then deploy services. 
In the services, you should mention the external network as defined here, along with making the changes in the firewall. If you want to allow external containers to connect to the network, you should make the network attachable. Since you haven't provided enough details on what's happening with the third server, I assume that you are trying to deploy a container there which is denied by the docker overlay network, as the network is not attachable. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52665442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1786712/']} | jdg_80334 |
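As a sketch of the "Solution" step above: create a user-defined overlay network once on a manager node with `docker network create --driver overlay springcloud-net` (the network name is illustrative; add `--attachable` only if standalone containers outside the swarm must join it, as the answer notes), then reference it as external in the compose file:

```yaml
# Requires: docker network create --driver overlay springcloud-net  (run once on a manager)
version: '3'
services:
  eureka:
    image: demo-eurekaserver
    ports:
      - "8761:8761"
    networks:
      - springcloud-net
  web:
    image: demo-web
    environment:
      - EUREKA_SERVER_ADDRESS=http://eureka:8761/eureka
    networks:
      - springcloud-net
networks:
  springcloud-net:
    external: true
```

The zuul and bookservice services from the original file would be attached the same way.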
stackexchange | llm_judgeable_groundtruth_similarity | 29075920 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With the layout below the buttons are being shown on top of the page. How can I make them appear below, after the container, down on the page?

<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:id="@+id/container"
    xmlns:android="http://schemas.android.com/apk/res/android">

    <LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:orientation="horizontal"
        android:layout_gravity="center_vertical|center_horizontal">

        <Button
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center_horizontal"
            android:id="@+id/Main"
            android:onClick="ClickHomePage"
            android:text="@string/Home" />

        <Button
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center_horizontal"
            android:id="@+id/map"
            android:onClick="ClickClients"
            android:text="@string/Clients" >
        </Button>
    </LinearLayout>
</LinearLayout>

The MainActivity looks as follows and the HomePage.xml is also below:

public class MainActivity extends ActionBarActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GetButtonClicked(5);
        setContentView(R.layout.activity_main);
    }

    public void GetButtonClicked(int position) {
        // update the main content by replacing fragments
        Fragment fragment = null;
        switch (position) {
            case 0:
                fragment = new FirstPageFragment();
                break;
            default:
                fragment = new FirstPageFragment();
                break;
        }
        FragmentManager fragmentManager = getSupportFragmentManager();
        fragmentManager.beginTransaction()
                .replace(R.id.container, fragment)
                .commit();
    }

    public void ClickHomePage(View view) {
        GetButtonClicked(0);
    }

    public void ClickClients(View view) {
        GetButtonClicked();
    }
}

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.test.FirstPageFragment"
    android:background="@drawable/theme"></FrameLayout>

This is the FirstPage:

public class FirstPageFragment extends Fragment {

    public FirstPageFragment() {
        // Required empty public constructor
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        // Inflate the layout for this fragment
        return inflater.inflate(R.layout.fragment_first_page, container, false);
    }
}
Now provide the response and nothing else.
| You can't query arbitrary properties of the NSManagedObject with a predicate for a NSFetchRequest. This will only work for attributes that are defined in your entity. NSManagedObjectContext has two ways to retrieve an object with an NSManagedObjectID. The first one raises an exception if the object does not exist in the context:

managedObjectContext.objectWithID(objectID)

The second will fail by returning nil:

var error: NSError?
if let object = managedObjectContext.existingObjectWithID(objectID, error: &error) {
    // do something with it
}
else {
    println("Can't find object \(error)")
}

If you have a URI instead of a NSManagedObjectID you have to turn it into a NSManagedObjectID first. The persistentStoreCoordinator is used for this:

let objectID = managedObjectContext.persistentStoreCoordinator!.managedObjectIDForURIRepresentation(uri) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29075920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4657208/']} | jdg_80335 |
stackexchange | llm_judgeable_groundtruth_similarity | 6263630 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to connect to a remote sftp server over ssh with JSch (0.1.44-1) but during session.connect(); I am getting this exception:

com.jcraft.jsch.JSchException: Algorithm negotiation fail
    at com.jcraft.jsch.Session.receive_kexinit(Session.java:529)
    at com.jcraft.jsch.Session.connect(Session.java:291)
    at com.jcraft.jsch.Session.connect(Session.java:154)
...

Logs from JSch:

INFO: Connecting to xx.xx.xx.xx port 22
INFO: Connection established
INFO: Remote version string: SSH-2.0-WeOnlyDo 2.0.6
INFO: Local version string: SSH-2.0-JSCH-0.1.44
INFO: CheckCiphers: aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-ctr,arcfour,arcfour128,arcfour256
INFO: aes256-ctr is not available.
INFO: aes192-ctr is not available.
INFO: aes256-cbc is not available.
INFO: aes192-cbc is not available.
INFO: arcfour256 is not available.
INFO: SSH_MSG_KEXINIT sent
INFO: SSH_MSG_KEXINIT received
INFO: Disconnecting from xx.xx.xx.xx port 22

I am able to log in to the remote server with the linux sftp command. I was trying to find any kind of clue on the internet but I failed. 
Debug output from the linux sftp command:

OpenSSH_5.5p1-DAM_1.2, OpenSSL 0.9.8r 8 Feb 2011
debug1: Reading configuration data /etc/DAM/ssh/ssh_config
debug1: Applying options for *
debug1: Applying options for *.*
debug1: Connecting to xx.xx.xx.xx [xx.xx.xx.xx] port 22.
debug1: Connection established.
debug1: identity file /**/spv_id_rsa.key type -1
debug1: identity file /**/spv_id_rsa.key-cert type -1
debug1: Remote protocol version 2.0, remote software version WeOnlyDo 2.0.6
debug1: no match: WeOnlyDo 2.0.6
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.5p1-DAM_1.2
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes256-cbc hmac-md5 none
debug1: kex: client->server aes256-cbc hmac-md5 none
debug1: sending SSH2_MSG_KEXDH_INIT
debug1: expecting SSH2_MSG_KEXDH_REPLY
debug1: Host 'xx.xx.xx.xx' is known and matches the RSA host key.
debug1: Found key in ~/.ssh/known_hosts:8
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /**/spv_id_rsa.key
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: Sending subsystem: sftp
Connected to xx.xx.xx.xx.
sftp>
Now provide the response and nothing else.
| There are a couple of places that SSH clients and servers try and agree on a common implementation. Two I know of are encryption and compression. The server and client produce a list of available options and then the best available option in both lists is chosen. If there is no acceptable option in the lists then it fails with the error you got. I'm guessing from the debug output here but it looks like the only server options for encryption are "aes256-cbc hmac-md5 none". JSch doesn't do hmac-md5 and aes256-cbc is disabled because of your Java policy files. Two things you could try are... To increase the available encryption libraries on the server, install unrestricted policy files on your client, enabling aes256-cbc (make sure the message saying it is disabled goes away; those policy files are notoriously easy to install on the wrong JVM) from the site:

For JDK 1.6: http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html
For JDK 1.7: http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
For JDK 1.8: http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

or try and disable encryption. The first is ideal if you have access to the server (trust me, aes128-cbc is plenty of encryption), but the second is easy enough to quickly test out the theory. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6263630', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/452435/']} | jdg_80336 |
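The negotiation step this answer describes is easy to model: the first entry in the client's preference list that also appears in the server's list wins, and an empty intersection is exactly the "Algorithm negotiation fail" in the question. A small Python sketch, illustrative only (this is not JSch code, and the server's offer list here is a guess from the debug output, as in the answer):

```python
def negotiate(client_prefs, server_prefs):
    """Return the first client-preferred algorithm the server also offers."""
    for alg in client_prefs:
        if alg in server_prefs:
            return alg
    raise ValueError("Algorithm negotiation fail")

# Ciphers JSch still had after its availability check, per the log above:
client = ["aes128-ctr", "aes128-cbc", "3des-ctr", "arcfour", "arcfour128"]

print(negotiate(client, ["aes256-cbc", "aes128-cbc"]))  # a shared cipher exists
try:
    negotiate(client, ["aes256-cbc"])  # guessed sole offer of the WeOnlyDo server
except ValueError as e:
    print(e)  # the failure seen in the question
```

With unlimited-strength policy files installed, "aes256-cbc" would reappear in the client list and the second call would succeed as well.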
stackexchange | llm_judgeable_groundtruth_similarity | 14229346 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to find out how to read an excel file via PHP. My specific use case is using PHPExcel from within Yii. I have followed numerous tutorials and I am always stuck at one point: "ZipArchive::getFromName(): Invalid or unitialized Zip object". I have added the extensions, loader, etc. but nothing seems to be working. Is there any way around this, or do I need to get another library?

Here is the code in my controller:

Yii::import('application.vendors.PHPExcel.PHPExcel', true);
$objReader = PHPExcel_IOFactory::createReader('Excel2007');
$objPHPExcel = $objReader->load('c:\cctv.xls'); //$file --> your filepath and filename
$objWorksheet = $objPHPExcel->getActiveSheet();
$highestRow = $objWorksheet->getHighestRow(); // e.g. 10
$highestColumn = $objWorksheet->getHighestColumn(); // e.g 'F'
$highestColumnIndex = PHPExcel_Cell::columnIndexFromString($highestColumn); // e.g. 5
echo '<table>' . "\n";
for ($row = 2; $row <= $highestRow; ++$row) {
    echo '<tr>' . "\n";
    for ($col = 0; $col <= $highestColumnIndex; ++$col) {
        echo '<td>' . $objWorksheet->getCellByColumnAndRow($col, $row)->getValue() . '</td>' . "\n";
    }
    echo '</tr>' . "\n";
}
echo '</table>' . "\n";

This is the detailed error:

C:\wamp\www\example\protected\vendors\PHPExcel\PHPExcel\Reader\Excel2007.php(272)

    }

    public function _getFromZipArchive(ZipArchive $archive, $fileName = '')
    {
        // Root-relative paths
        if (strpos($fileName, '//') !== false) {
            $fileName = substr($fileName, strpos($fileName, '//') + 1);
        }
        $fileName = PHPExcel_Shared_File::realpath($fileName);

        // Apache POI fixes
        $contents = $archive->getFromName($fileName);
        if ($contents === false) {
            $contents = $archive->getFromName(substr($fileName, 1));
        }
        /*
        if (strpos($contents, '<?xml') !== false && strpos($contents, '<?xml') !== 0) {
            $contents = substr($contents, strpos($contents, '<?xml'));
        }
        var_dump($fileName);
        var_dump($contents);

Stack Trace

C:\wamp\www\trunk\protected\vendors\PHPExcel\PHPExcel\Reader\Excel2007.php(272): ZipArchive->getFromName("_rels/.rels")

        $fileName = substr($fileName, strpos($fileName, '//') + 1);
    }
    $fileName = PHPExcel_Shared_File::realpath($fileName);

    // Apache POI fixes
    $contents = $archive->getFromName($fileName);
    if ($contents === false) {
        $contents = $archive->getFromName(substr($fileName, 1));
    }

C:\wamp\www\example\protected\vendors\PHPExcel\PHPExcel\Reader\Excel2007.php(312): PHPExcel_Reader_Excel2007->_getFromZipArchive(ZipArchive, "_rels/.rels")

        $excel->removeCellXfByIndex(0); // remove the default style
    }

    $zip = new ZipArchive;
    $zip->open($pFilename);

    $rels = simplexml_load_string($this->_getFromZipArchive($zip, "_rels/.rels")); //~http://schemas.openxmlformats.org/package/2006/relationships");
    foreach ($rels->Relationship as $rel) {
        switch ($rel["Type"]) {
            case "http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties":
                $xmlCore = simplexml_load_string($this->_getFromZipArchive($zip, "{$rel['Target']}"));
                if (is_object($xmlCore)) {

C:\wamp\www\example\protected\controllers\AdminController.php(58): PHPExcel_Reader_Excel2007->load("c:\cctv.xls")

    public function actionCreateSource()
    {
        Yii::import('application.vendors.PHPExcel.PHPExcel', true);
        $objReader = PHPExcel_IOFactory::createReader('Excel2007');
        $objPHPExcel = $objReader->load('c:\cctv.xls'); //$file --> your filepath and filename
        $objWorksheet = $objPHPExcel->getActiveSheet();
        $highestRow = $objWorksheet->getHighestRow(); // e.g. 10
        $highestColumn = $objWorksheet->getHighestColumn(); // e.g 'F'
        $highestColumnIndex = PHPExcel_Cell::columnIndexFromString($highestColumn); // e.g. 5
        echo '<table>' . "\n";
Now provide the response and nothing else.
| It looks like you set PHPExcel to explicitly use the 2007 format, but you're trying to open an XLS file. While I'm not 100% sure, I'm going to guess the zip error is because it's trying to unzip the XLS file, and that's going to fail as it's not zipped. The php zip extension appears to be working, as the error is from the extension - Invalid or unitialized Zip object. My guess is that you're getting an invalid Zip object since you're not dealing with a zip file. If you're trying to open an XLS file, you probably want:

$objReader = PHPExcel_IOFactory::createReader('Excel5');

Alternatively, you could remove the explicit mode, and just rely upon the automatic file type resolution, e.g.:

$objPHPExcel = PHPExcel_IOFactory::load("c:\cctv.xls"); // Remove the createReader line before this

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14229346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1069988/']} | jdg_80337 |
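The format mix-up this answer describes can be checked directly from a file's first bytes: an .xlsx is a ZIP container (it begins with the standard PK\x03\x04 signature), while a legacy .xls is an OLE2 compound file, which is why handing it to a ZIP reader fails. A hypothetical sniffing helper in Python, for illustration only (this is not part of PHPExcel, though it is roughly what automatic file type resolution has to do):

```python
ZIP_MAGIC = b"PK\x03\x04"                          # .xlsx: zipped OOXML container
OLE2_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"   # legacy .xls: OLE2 compound file

def sniff_spreadsheet(header: bytes) -> str:
    """Guess the appropriate reader from the first bytes of the file."""
    if header.startswith(ZIP_MAGIC):
        return "Excel2007"  # a real ZIP: the Excel2007 reader can open it
    if header.startswith(OLE2_MAGIC):
        return "Excel5"     # binary BIFF: trying to unzip this is what fails
    return "unknown"
```

For the question's c:\cctv.xls, the header would match the OLE2 magic, confirming that the Excel5 reader (or plain IOFactory::load) is the right call.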
stackexchange | llm_judgeable_groundtruth_similarity | 30466 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a DFA for L = {w : every run of a's has length either two or three}. This is my attempt at the solution... I feel like I am missing something?
Now provide the response and nothing else.
| There's a significant difference between the question as you pose it and the question posed in the exercise. The question asks for an example of a set of regular languages $L_{1}, L_{2}, \ldots$ such that their union$$L = \bigcup_{i=1}^{\infty}L_{i}$$is not regular. Note the range of the union: $1$ to $\infty$. Regular languages are closed under finite union, and the proofs runs along the lines that you sketch in the question, however this falls apart under infinite union. We can show this by taking $L_{i} = \{0^{i}1^{i}\}$ for each $i$ (with $\Sigma = \{0,1\}$). The infinite union of these languages of course gives the canonical non-regular (context-free) language $L = \{0^{i}1^{i}\mid i \in \mathbb{N}\}$. As an aside, we can see easily where the normal proof fails. Imagine the the same construction where we add a new start state and $\varepsilon$-transitions to the old start states. If we do this with an infinite set of automata we have build an automata with an infinite number of states, obviously contradicting the definition of a finite automata. Lastly, I'm guessing the confusion may arise from the phrasing of the original question, which starts "Donner deux exemples des suites de langages...", which is ( roughly , my French is a bit rusty, but externally verified!) "Give two examples of sequences of languages...", rather than "Give two examples of languages...". An incautious reading may mistake the second for the first though. | {} | {'log_upvote_score': 6, 'links': ['https://cs.stackexchange.com/questions/30466', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/22254/']} | jdg_80338 |
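The counterexample in this answer can be made concrete: each L_i = {0^i 1^i} is a one-word, hence regular, language that a fixed regular expression recognizes, while deciding membership in the infinite union {0^n 1^n : n >= 1} requires unbounded counting, which no finite automaton has. A small Python illustration (not a proof, just the two recognizers side by side):

```python
import re

def member_of_Li(w: str, i: int) -> bool:
    """L_i = {0^i 1^i} is finite, hence regular: one regex per i suffices."""
    return re.fullmatch(f"0{{{i}}}1{{{i}}}", w) is not None

def member_of_union(w: str) -> bool:
    """The union {0^n 1^n : n >= 1} needs a counter whose range is
    unbounded, something a fixed finite automaton cannot supply."""
    n = len(w) // 2
    return n >= 1 and w == "0" * n + "1" * n
```

For example, member_of_Li("000111", 3) and member_of_union("000111") are both true, but no single i makes member_of_Li agree with member_of_union on all inputs.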
stackexchange | llm_judgeable_groundtruth_similarity | 24092 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
'Everyone' knows when storing passwords in a database this should be done securely for example by using salts and an appropriate hashing algorithm. Most OTP (One Time Password) solutions are based on a long secret random seed value. Somehow this needs to be stored at the validating server as well to be able to verify the submitted OTP value. In a way these seeds have the same role as user supplied passwords and need to be stored equally secure. Salting & hashing will not work here as this will break the OTP algorithm. Most small OTP-tokens are physically secured by being more or less tamper-proof but this does not apply to the server. How can the seeds be stored in a proper way? And if there is a solution for storing the seeds without hashing, why not apply the same method to regular passwords?
Now provide the response and nothing else.
| Roughly speaking, the best you can do is to harden the server to make it as resistant to compromise as possible. Ideally, you would store the seed in a hardware security module (HSM; aka a crypto co-processor). You'd make sure the seed never leaves the HSM, i.e., do all cryptographic computations in the HSM. This offers better protection, though it is admittedly more expensive. But, as you correctly point out, you cannot store the seed in hashed form. The seed must be stored in the clear, so if that server gets compromised , you are in big trouble . This means it is absolutely vital that you protect that server just as well as you are able to. OTP seeds are different from passwords. People tend to use the same password on multiple sites; that doesn't happen with OTP seeds. Hashing passwords is used partly to protect users' passwords, so that if site X's database is breached, then X's users' accounts on other sites aren't compromised. That threat simply doesn't apply to OTP seeds. Also, with passwords, you can hash passwords. If you can, you might as well, since it does help mitigate some risks. (And passwords are so widely used, and used by developers who are not security experts, that it is a near given that many sites who use passwords will experience a security breach at some point.) Since you can't hash OTP seeds, this mitigation simply isn't available for OTP seeds β so you'll have to use other methods to protect your OTP seeds. Fortunately, only very security-aware sites should be storing their own OTP seeds, so if one is optimistic, one might hope that they are in a better position to apply other defenses. Anyway, since OTP seeds have different characteristics from passwords, you shouldn't assume that every mitigation for passwords will necessarily transfer over to OTP seeds as well. 
| {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/24092', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/7319/']} | jdg_80339 |
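The reason the seed cannot be hashed is visible in the verification math itself: the server must recompute the exact keyed MAC the token computes, so it needs the raw seed as the HMAC key. A minimal HOTP (RFC 4226) verifier in Python, illustrative rather than hardened; in the HSM setup the answer recommends, this is the computation that would run inside the module so the seed never leaves it:

```python
import hashlib
import hmac
import struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: the seed is the HMAC *key*, so the server must hold
    it in recoverable (clear or HSM-protected) form -- a hash of the seed
    could never reproduce this MAC."""
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(seed: bytes, counter: int, submitted: str) -> bool:
    return hmac.compare_digest(hotp(seed, counter), submitted)

# RFC 4226 Appendix D test vectors for the seed "12345678901234567890":
seed = b"12345678901234567890"
assert hotp(seed, 0) == "755224"
assert hotp(seed, 1) == "287082"
```

Contrast this with password checking, where the server only needs hash(salt + password) to compare, so the stored value can be one-way.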
stackexchange | llm_judgeable_groundtruth_similarity | 49287078 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having two different config maps, test-configmap and common-config. I tried to mount them at the same location, but one config map overwrote the other. Then I read about subPath, and it did not work.

deploy.yaml

apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: testing
spec:
  replicas: 1
  template:
    metadata:
      name: testing
      labels:
        app: testing
    spec:
      containers:
      - name: testing-container
        image: testing
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: __PORT__
        volumeMounts:
        - name: commonconfig-volume
          mountPath: /usr/src/app/config/test.config
          subPath: test.config
      volumes:
      - name: commonconfig-volume
        configMap:
          name: test-configmap
      - name: commonconfig-volume
        configMap:
          name: common-config

Error:

The Deployment "testing" is invalid: spec.template.spec.volumes[1].name: Duplicate value: "commonconfig-volume"

I am not sure if merging two config maps is achievable or not. And if yes, then how should I do it?
Now provide the response and nothing else.
| You have to use special projected volumes to achieve that. Example deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testing
  template:
    metadata:
      name: testing
      labels:
        app: testing
    spec:
      containers:
      - name: testing-container
        image: testing
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: __PORT__
        volumeMounts:
        - name: commonconfig-volume
          mountPath: /usr/src/app/config
      volumes:
      - name: commonconfig-volume
        projected:
          sources:
          - configMap:
              name: test-configmap
          - configMap:
              name: common-config

You can use a secret the same way as a configMap. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49287078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5157915/']} | jdg_80340 |
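Conceptually, a projected volume lays the keys of every listed source into one mount directory, which is exactly the merge the question was after. A rough Python model of that flattening, as an illustration of the semantics only (it assumes the two ConfigMaps use distinct keys and simply rejects colliding paths; it is not how kubelet is implemented):

```python
def project(sources: list) -> dict:
    """Flatten several ConfigMap-like data dicts into one 'directory' mapping."""
    merged = {}
    for data in sources:
        for key, value in data.items():
            if key in merged:
                raise ValueError(f"conflicting path in projected volume: {key}")
            merged[key] = value
    return merged

test_configmap = {"test.config": "mode=test"}         # hypothetical contents
common_config = {"common.config": "log_level=info"}   # hypothetical contents

# Both files end up side by side under /usr/src/app/config:
print(sorted(project([test_configmap, common_config])))
```

This also shows why the original manifest failed: two volumes with the same name are two competing directories, whereas one projected volume is a single directory fed by several sources.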