source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 338520 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am wondering if there is a difference between Haskell's type classes and Go's interfaces. Both define types based on functions, in the sense that a value matches a type if the functions required by the type are defined for the value. Are there any differences, or are these just two names for the same thing?
Now provide the response and nothing else.
| The two concepts are very very similar. In normal OOP languages, we attach a vtable (or for interfaces: itable) to each object: | thisv+---+---+---+| V | a | b | the object with fields a, b+---+---+---+ | v +---+---+---+ | o | p | q | the vtable with method slots o(), p(), q() +---+---+---+ This allows us to invoke methods similar to this->vtable.p(this) . In Haskell, the method table is more like an implicit hidden argument: method :: Class a => a -> a -> Int would look like the C++ function template<typename A>int method(Class<A>*, A*, A*) where Class<A> is an instance of typeclass Class for type A . A method would be invoked like typeclass_instance->p(value_ptr); The instance is separate from the values. The values still retain their actual type. While typeclasses allow some polymorphism, this is not subtyping polymorphism. That makes it impossible to make a list of values that satisfy a Class . E.g. assuming we have instance Class Int ... and instance Class String ... , we cannot create a heterogeneous list type like [Class] that has values like [42, "foo"] . (This is possible when you use the “existential types” extension, which effectively switches to the Go approach). In Go, a value doesn't implement a fixed set of interfaces. Consequently it can't have a vtable pointer. Instead, pointers to interface types are implemented as fat pointers that include one pointer to the data, another pointer to the itable: `this` fat pointer +---+---+ | | | +---+---+ ____/ \_________v v+---+---+---+ +---+---+| o | p | q | | a | b | the data with+---+---+---+ +---+---+ fields a, bitable with methodslots o(), p(), q()this.itable->p(this.data_ptr) The itable is combined with the data into a fat pointer when you cast from an ordinary value to an interface type. Once you have an interface type, the actual type of the data has become irrelevant. In fact, you can't access the fields directly without going through methods or downcasting the interface (which may fail). Go's approach to interface dispatch comes at a cost: each polymorphic pointer is twice as large as a normal pointer. Also, casting from one interface to another involves copying the method pointers to a new vtable. But once we've constructed the itable, this allows us to cheaply dispatch method calls to many interfaces, something which traditional OOP languages suffer with. Here, m is the number of methods in the target interface, and b is the number of base classes: C++ does object slicing or needs to chase virtual inheritance pointers when casting, but then has simple vtable access. O(1) or O(b) cost of upcasting, but O(1) method dispatch. The Java Hotspot VM doesn't have to do anything when upcasting, but upon interface method lookup does a linear search through all itables implemented by that class. O(1) upcasting, but O(b) method dispatch. Python doesn't have to do anything when upcasting, but uses a linear search through a C3-linearized base class list. O(1) upcasting, but O(b²) method dispatch? I'm not sure what the algorithmic complexity of C3 is. The .NET CLR uses an approach similar to Hotspot but adds another level of indirection in an attempt to optimize for memory usage. O(1) upcasting, but O(b) method dispatch. The typical complexity for method dispatch is much better since method lookup can often be cached, but the worst case complexities are quite horrible. In comparison, Go has O(1) or O(m) upcasting, and O(1) method dispatch. 
Haskell has no upcasting (constraining a type with a type class is a compile-time effect), and O(1) method dispatch. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/338520', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/96131/']} | jdg_81141 |
stackexchange | llm_judgeable_groundtruth_similarity | 13010411 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to download a file when a user clicks on a download link. In the controller: $response = new Response(); $response->headers->set('Content-type', 'application/octect-stream'); $response->headers->set('Content-Disposition', sprintf('attachment; filename="%s"', $filename)); $response->headers->set('Content-Length', filesize($filename)); return $response; This is opening the dialog box to save the file, but it says the file is 0 bytes. And changing it to: $response = new Response(); $response->headers->set('Content-type', 'application/octect-stream'); $response->headers->set('Content-Disposition', sprintf('attachment; filename="%s"', $filename)); $response->headers->set('Content-Length', filesize($filename)); $response->headers->set('Content-Transfer-Encoding', 'binary'); $response->setContent(readfile($filename)); return $response; I get a bunch of weird characters instead of the file download dialog box. Finally, switching the "setContent" line to: $response->setContent(file_get_contents($filename)); It returns a PHP error: Fatal error: Allowed memory size... Any clues on how to achieve this? I've done it before in PHP (without MVC), but I don't know what can be missing to do it through Symfony2... Maybe the solution is setting the memory_limit in PHP.INI, but I guess it's not the best practice...
Now provide the response and nothing else.
| I finally solved this without X-SendFile (which is probably the best practice). Anyway, for those who can't get X-Sendfile apache module to work (shared hosting), here's a solution: // Generate response$response = new Response();// Set headers$response->headers->set('Cache-Control', 'private');$response->headers->set('Content-type', mime_content_type($filename));$response->headers->set('Content-Disposition', 'attachment; filename="' . basename($filename) . '";');$response->headers->set('Content-length', filesize($filename));// Send headers before outputting anything$response->sendHeaders();$response->setContent(file_get_contents($filename));return $response; | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13010411', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1380191/']} | jdg_81142 |
stackexchange | llm_judgeable_groundtruth_similarity | 42115930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I have a unique validation rule on 2 fields? a. The application should not allow two people to have the same first name and last name. It is allowed that the user fills in only a first name or only a last name, because the user may have only one of them. b. But if the user enters only a first name (Glen), no other person in the table should have the same (first name = 'Glen' and last name = null). Another 'Glen Smith' is OK. I tried the following rule. It works great when both fields (first and last name) are not null: 'firstName' => 'unique:people,firstName,NULL,id,lastName,' . $request->lastName This rule fails on b. when only one field is present. Any hint?
Now provide the response and nothing else.
| The built-in unique validator wouldn't really support what you're trying to do. Its purpose is to ensure that a single value is unique in the database, rather than a composite of two values. However, you can create a custom validator : Validator::extend('uniqueFirstAndLastName', function ($attribute, $value, $parameters, $validator) { $count = DB::table('people')->where('firstName', $value) ->where('lastName', $parameters[0]) ->count(); return $count === 0;}); You could then access this new rule with: 'firstName' => "uniqueFirstAndLastName:{$request->lastName}" You'll probably find you might need to tweak your database query a little bit as it's untested. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42115930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3393718/']} | jdg_81143
stackexchange | llm_judgeable_groundtruth_similarity | 53244964 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've used Vert.x toolkit for creating reactive applications with support for relational DBs like MySQL and Postgres . I know Spring provides reactive support for some NoSQL DBs like Cassandra and Mongo but are they willing to provide the same for relational DBs?
Now provide the response and nothing else.
| What's the idea behind the Spring Framework? Spring Framework is a library to improve developer productivity, and so are Spring's portfolio projects such as Spring Data, Spring Security, Spring Cloud. These projects build on top of existing APIs which are either standardized through a JSR or a JEP or on top of libraries that have proved to be useful and widely used. The Spring team does not build drivers for databases or other integrations, that's up to the database/driver vendors. WebFlux compared to Vert.x Spring WebFlux is an good example for a typical Spring module. It builds on top of existing non-blocking servers (Project Reactor via netty, Undertow, and Jetty). WebFlux provides a runtime container for non-blocking, reactive applications leveraging Spring components to assist in developing and running such applications. Vert.x is an excellent example of an integrated environment that provides its own low-level implementations. Vert.x is heavily optimized and such an eco-system requires optimized integrations. Vert.x came up with own implementations for various databases and provides APIs that work well in a Vert.x context but these APIs are not JDBC. Relational Database APIs As M-Razavi already mentioned, Java uses JDBC to integrate with relational databases and JDBC is of a blocking nature – there's nothing sensible one could do about to mitigate the blocking nature of JDBC. Offloading JDBC calls to an Executor (typically Thread pool) is limited in its usefulness as the pool eventually saturates with requests). TL;DR, there's no API available on top of which we could provide a reactive relational database integration. So what are the options? M-Razavi already mentioned ADBA that is an initiative from Oracle to provide a standardized API for asynchronous database access in Java using futures. Everything in ADBA is still work in progress and the team behind ADBA is happy to get feedback. A bunch of Postgres folks is working on a Postgres ADBA driver that can be used for first experiments. However, ADBA is a future goal and I expect that we don't see ADBA released with Java 12. There are a couple of independent drivers such as Reactiverse's reactive-pg-client . These drivers come with a vendor-specific API and aren't really suited for a broader integration in Spring. We would need to provide additional layers to expose a common API, and new drivers couldn't be just plugged into your application so it works-out-of-the-box™. Having a standard API allows pluggability, so there's huge value in having a standard API. R2DBC to the rescue? Lacking a standard API and the non-availability of drivers, a team at Pivotal started to investigate on a reactive relational API that would be an ideal fit for reactive programming purposes. They came up with R2DBC which stands for Reactive Relational Database Connectivity. As of now, R2DBC is an incubator project to evaluate the feasibility and to start discussions whether driver vendors are interested at all in supporting reactive/non-blocking/asynchronous drivers. As of now, there are three driver implementations: PostgreSQL H2 Microsoft SQL Server R2DBC comes with an API specification ( r2dbc-spi ) and a client ( r2dbc-client ) that makes the SPI usable for applications. We started exploring on a Spring Data R2DBC integration that provides reactive APIs through a database client and by supporting reactive repositories. 
R2DBC and its eco-system are still young and ask for experiments and feedback to collect use cases and to see whether a reactive relational database integration would make sense. Right now, you can consume R2DBC through Spring Data and the following snippet shows DatabaseClient usage: PostgresqlConnectionFactory connectionFactory = new PostgresqlConnectionFactory(…);DatabaseClient databaseClient = DatabaseClient.create(connectionFactory);Mono<Integer> count = databaseClient.execute() .sql("INSERT INTO legoset (id, name, manual) VALUES($1, $2, $3)") .bind("$1", 42055) .bind("$2", "Description") .bindNull("$3", Integer.class) .fetch() .rowsUpdated();Flux<Map<String, Object>> rows = databaseClient.execute() .sql("SELECT id, name, manual FROM legoset") .fetch() .all(); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53244964', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3582996/']} | jdg_81144 |
stackexchange | llm_judgeable_groundtruth_similarity | 3175072 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I thought the C/C++ vs C#/Java performance question was well trodden, meaning that I'd read enough evidence to suggest that the VM languages are not necessarily any slower than the "close-to-silicon" languages. Mostly because the JIT compiler can do optimizations that the statically compiled languages cannot. However, I recently received a CV from a guy who claims that Java-based high frequency trading is always beaten by C++, and that he'd been in a situation where this was the case. A quick browse on job sites indeed shows that HFT applicants need knowledge of C++, and a look at Wilmott forum shows all the practitioners talking about C++. Is there any particular reason why this is the case? I would have thought that with modern financial business being somewhat complex, a VM language with type safety, managed memory, and a rich library would be preferred. Productivity is higher that way. Plus, JIT compilers are getting better and better. They can do optimizations as the program is running, so you'd think they's use that run-time info to beat the performance of the unmanaged program. Perhaps these guys are writing the critical bits in C++ and and calling them from a managed environment (P/Invoke etc)? Is that possible? Finally, does anyone have experience with the central question in this, which is why in this domain unmanaged code is without doubt preferred over managed? As far as I can tell, the HFT guys need to react as fast as possible to incoming market data, but this is not necessarily a hard realtime requirement. You're worse off if you're slow, that's for sure, but you don't need to guarantee a certain speed on each response, you just need a fast average. EDIT Right, a couple of good answers thus far, but pretty general (well-trodden ground). Let me specify what kind of program HFT guys would be running. The main criterion is responsiveness. When an order hits the market, you want to be the first to be able to react to it. If you're late, someone else might take it before you, but each firm has a slightly different strategy, so you might be OK if one iteration is a bit slow. The program runs all day long, with almost no user intervention. Whatever function is handling each new piece of market data is run dozens (even hundreds) of times a second. These firms generally have no limit as to how expensive the hardware is.
Now provide the response and nothing else.
| Firstly, 1 ms is an eternity in HFT. If you think it is not then it would be good to do a bit more reading about the domain. (It is like being 100 miles away from the exchange.) Throughput and latency are deeply intertwined as the formulae in any elementary queuing theory textbook will tell you. The same formulae will show jitter values (frequently dominated by the standard deviation of CPU queue delay if the network fabric is right and you have not configured quite enough cores). One of the problems with HFT arbitrage is that once you decide to capture a spread, there are two legs (or more) to realize the profit. If you fail to hit all legs you can be left with a position that you really don't want (and a subsequent loss) - after all you were arbitraging not investing. You don't want positions unless your strategy is predicting the (VERY near term!!!) future (and this, believe it or not, is done VERY successfully). If you are 1 ms away from exchange then some significant fraction of your orders won't be executed and what you wanted will be picked off. Most likely the ones that have executed one leg will end up losers or at least not profitable. Whatever your strategy is for argument's sake let us say it ends up a 55%/45% win/loss ratio. Even a small change in the win/loss ratio can have in big change in profitability. re: "run dozens (even hundreds)" seems off by orders of magnitude Even looking at 20000 ticks a second seems low, though this might be the average for the entire day for the instrument set that he is looking at. There is high variability in the rates seen in any given second. I will give an example. In some of my testing I look at 7 OTC stocks (CSCO,GOOG,MSFT,EBAY,AAPL,INTC,DELL) in the middle of the day the per second rates for this stream can range from 0 mps (very very rare) to almost almost 2000 quotes and trades per peak second. (see why I think the 20000 above is low.) I build infrastructure and measurement software for this domain and the numbers we talk about are 100000's and millions per second. I have C++ producer/consumer infrastructure libraries that can push almost 5000000 (5 million) messages/second between producer and consumer, (32 bit,2.4 GHz cores). These are 64 byte messages with new, construct, enqueue, synchronize , on the producer side and synchronize,dequeue,touch every byte,run virtual destructor,free on the consumer side. Now admittedly that is a simple benchmark with no Socket IO (and socket IO can be ugly) as would be at the end points of the end point pipe stages. It is ALL custom synchronization classes that only synchronize when empty, custom allocators, custom lock free queues and lists, occasional STL(with custom allocators) but more often custom intrusive collections (of which I have a significant library). More than once I have given a vendor in this arena a quadruple (and more) in throughput without increased batching at the socket endpoints. I have OrderBook and OrderBook::Universe classes that take less than 2us for new, insert, find, partial fill, find, second fill, erase, delete sequence when averaged over 22000 instruments. The benchmark iterates over all 22000 instruments serially between the insert first fill and last fill so there are no cheap caching tricks involved. Operations into the same book are separated by accesses of 22000 different books. These are very much NOT the caching characteristics of real data. Real data is much more localized in time and consecutive trades frequently hit the same book. 
All of this work involves careful consideration of the constants and caching characteristics in any of the algorithmic costs of the collections used. (Sometimes it seems that the K's in K O(n) K O(n*log n) etc., etc., etc. are dismissed a bit too glibly) I work on the Marketdata infrastructure side of things. It is inconceivable to even think of using java or a managed environment for this work. And when you can get this kind of performance with C++ and I think it is quite hard to get million+/mps performance with a managed environment) I can't imagine any of the significant investment banks or hedge funds (for whom a $250000 salary for a top notch C++ programmer is nothing) not going with C++. Is anybody out there really getting 2000000+/mps performance out of a managed environment? I know a few people in this arena and no one ever bragged about it to me. And I think 2mm in a managed environment would have some bragging rights. I know of one major player's FIX order decoder doing 12000000 field decodes/sec. (3Ghz CPU) It is C++ and the guy who wrote it almost challenged anybody to come up with somethingin a managed environment that is even half that speed. Technologically it is an interesting area with lots of fun performance challenges. Consider the options market when the underlying security changes - there might be say 6 outstanding price points with 3 or 4 different expiration dates. Now for each trade there were probably 10-20 quotes. Those quotes can trigger price changes in the options.So for each trade there might be 100 or 200 changes in options quotes. It is just a ton of data - not a Large Hadron Collider collision-detector-like amount of data but still a bit of a challenge. It is a bit different than dealing with keystrokes. Even the debate about FPGA's goes on. Many people take the position that a well coded parser running on 3GHZ commodity HW can beat a 500MHz FPGA. But even if a tiny bit slower (not saying they are) FPGA based systems can tend to have tighter delay distributions. (Read "tend" - this is not a blanket statement) Of course if you have a great C++ parser that you push through a Cfront and then push that through the FPGA image generator... But that another debate... | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3175072', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/233076/']} | jdg_81145 |
stackexchange | llm_judgeable_groundtruth_similarity | 554387 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Imagine that you are asked to infer some causal effect -- a change in an outcome $y$ in response to some variable $x$ . But, the person asking for this directs you to use a predictive model to do so. Here's the setup: $x$ is confounded inasmuch as there is some unobserved $u$ that is causally linked to both $y$ and $x$ . We have a classical omitted variables bias. We have high dimensional covariates $\mathbf{Z}$ that are not independent of $y$ or $x$ and/or $u$ You are asked to train a suite of predictive models -- neural networks, boosted trees, whatever -- denoted $g_i([x, \mathbf{Z}]) + \epsilon$ where $i$ indexes different models, and then select among them model $i$ that minimizes some metric of predictive skill. RMSE, for instance. Based on the chosen model, you are asked to report $$\frac{\partial \hat{y}}{\partial x} = \frac{\partial \hat{g}_i([x, \mathbf{Z}])}{\partial x}$$ You know that $$E\left[\frac{\partial \hat{y}}{\partial x}\right] \neq \frac{\partial y}{\partial x}$$ in the population, because the error term includes the omitted variable, so therefore $$\frac{\partial \epsilon}{\partial x} \neq 0 \text{ in the population, despite the fact that } \frac{\partial \hat\epsilon}{\partial x} = 0$$ in any reasonable model $g$ . On top of omitted variables bias, there may be bias from regularization too! Further assume that you have some causal model -- say an instrumental variables regression, utilizing some suitable instrument $w$ for $x$ . It's one of the models in your suite of models, but its predictive skill in terms of cross-validated RMSE is worse than the others. The best model is the one that produces the consistent causal estimate, right? But: How would you explain this to someone in layperson's terms? The person asking for analysis doesn't understand causal inference, and needs to be educated. However, they don't understand math and have little attention span. How can you effectively convey the basic point that causal methods are required, and predictive methods are inappropriate? No math, lots of stories, pithy sentences.
Now provide the response and nothing else.
| First of all, I don't think this should be treated as a strict dichotomy: "predictive models can never establish causal inference." There are various situations in which a predictive model gives us "pretty darn good" confidence that a given causal relationship exists. So what I'd say is that predictive models - no matter how sophisticated - are often insufficient to establish causality with a high degree of confidence. Now, how to explain this to people who don't know stats/math at all? Here's one approach: You've heard it said that "correlation is not causation." What that means is just that just because two variables (call them A and B) are correlated, that doesn't mean one causes the other. This can happen when the correlation is due to a third "confounding" variable that is correlated with both A and B. For example: just because having a college degree is correlated with high earnings as an adult doesn't mean that getting a degree CAUSED those earnings to go up - it could be that "having rich parents" both allows people to get a degree and then separately helps them earn more (even if going to college actually does nothing). Predictive models try to account for this problem by statistically "controlling for" confounding variables. So in the above case we could use statistical modeling to analyze the relationship between a degree and earnings after accounting for the fact that people with rich parents are more likely to have a degree. Unfortunately, it's never possible in practice to control for EVERY confounding variable. This is partly because important variables (like the student's "personal motivation") may not exist or be impossible to measure. Even controlling for "parents being rich" is tricky - what single number could perfectly capture a family's entire economic situation? And how can we be sure that the data we have are accurate? Do any of us know PRECISELY how "rich" our parents were when we were growing up? Another problem with predictive models is that even if you COULD control for everything they can't distinguish between A causing B or B causing A. So if we were trying to analyze the effect of depression on opiate use, no matter what control variables we include we can't be sure that the effect we observe isn't just due to opiate use CAUSING depression. Note that this is probably NOT a problem for our earlier example because it's impossible for your earnings as an adult to CAUSE you to have gone to college earlier in your life. This is one way in which our theoretical understanding of how these variables work helps us to understand the threats to causal inference. The only way to completely ensure that a relationship between A and B is causal is to experimentally control how people get "assigned" to different values of A (e.g. to get a college degree or not). If assignment to A is completely random then we can be sure that NOTHING else influenced A, which means that you don't have to worry about ANY confounding variables (even B) in analyzing the relationship between A and B. However, for reasons that are obvious when we're considering college degrees, random assignment is often infeasible or downright unethical. So we have to use other research design approaches to approximate the causal power of random assignment. 
Critically, these other approaches (instrumental variables, regression discontinuity, natural experiments) rely on the features of the world itself, and the data collection process, rather than statistical/mathematical methods, to address issues of confounding variables. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/554387', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/17359/']} | jdg_81146 |
stackexchange | llm_judgeable_groundtruth_similarity | 19377437 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to write my own expected condition. What I need... I have an iframe. and I also have an image in it. I want to continue processing when image's scr will change. What I did: class url_changed_condition(object): ''' Checks whether url in iframe has changed or not ''' def __init__(self, urls): self._current_url, self._new_url = urls def __call__(self, ignored): return self._current_url != self._new_url and later in the code I have: def process_image(self, locator, current_url): try: WebDriverWait(self.driver, 10).until(ec.presence_of_element_located((By.TAG_NAME, u"iframe"))) iframe = self.driver.find_element(*locator) if iframe: print "Iframe found!" self.driver.switch_to_frame(iframe) WebDriverWait(self.driver, 10).until(ec.presence_of_element_located((By.XPATH, u"//div"))) # WebDriverWait(self.driver, 10).until( # url_changed_condition( # (current_url, self.driver.find_element(By.XPATH, u"//a/img").get_attribute(u"src")))) img_url = self.driver.find_element(By.XPATH, u"//a/img").get_attribute(u"src") print img_url self.search_dict[self._search_item].append(img_url) self.driver.switch_to_default_content() except NoSuchElementException as NSE: print "iframe not found! {0}".format(NSE.msg) except: print "something went wrong" import traceback import sys type_, value_, trace_ = sys.exc_info() print type_, value_ print traceback.format_tb(trace_) finally: return current_url This code works, but returns the same url multiple times. The problem is when I uncomment url_changed_condition it falls with TimeoutException in (current_url, self.driver.find_element(By.XPATH, u"//a/img").get_attribute(u"src")) The line below it works fine... I don't get it.
Now provide the response and nothing else.
| Looks like this topic misses an example of a Custom Expected Condition . It is actually pretty easy. First of all, what is an Expected Condition in Python selenium bindings : it is a new-style class (based on object ) it has __call__() magic method defined which returns a boolean There is a big set of built-in expected condition classes . Let's work through example. Let's say we want to wait until an element's text will start with a desired text : from selenium.webdriver.support import expected_conditions as ECclass wait_for_text_to_start_with(object): def __init__(self, locator, text_): self.locator = locator self.text = text_ def __call__(self, driver): try: element_text = EC._find_element(driver, self.locator).text return element_text.startswith(self.text) except StaleElementReferenceException: return False Usage: WebDriverWait(driver, 10).until(wait_for_text_to_start_with((By.ID, 'myid'), "Hello, World!")) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19377437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1776946/']} | jdg_81147 |
stackexchange | llm_judgeable_groundtruth_similarity | 28593290 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Fill array a from a[0] to a[n-1]: generate random numbers until you get one that is not already in the previous indexes. This is my implementation: public static int[] first(int n) { int[] a = new int[n]; int count = 0; while (count != n) { boolean isSame = false; int rand = r.nextInt(n) + 1; for (int i = 0; i < n; i++) { if(a[i] == rand) isSame = true; } if (isSame == false){ a[count] = rand; count++; } } return a;} I thought it was N^2 but it's apparently N^2logN and I'm not sure when the log function is considered.
Now provide the response and nothing else.
| The 0 entry is filled immediately. The 1 entry has probability 1 - 1 / n = (n - 1) / n of getting filled by a random number. So we need on average n / (n - 1) random numbers to fill the second position. In general, for the k entry we need on average n / (n - k) random numbers and for each number we need k comparisons to check if it's unique. So we need n * 1 / (n - 1) + n * 2 / (n - 2) + ... + n * (n - 1) / 1 comparisons on average. If we consider the right half of the sum, we see that this half is greater than n * (n / 2) * (1 / (n / 2) + 1 / (n / 2 - 1) + ... + 1 / 1) The sum of the fractions is known to be Θ(log(n)) because it's an harmonic series . So the whole sum is Ω(n^2*log(n)) . In a similar way, we can show the sum to be O(n^2*log(n)) . This means on average we need Θ(n^2*log(n)) operations. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28593290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4209304/']} | jdg_81148 |
stackexchange | llm_judgeable_groundtruth_similarity | 387481 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Following along in QFT (Kaku) he introduces the Casimir Operators (Momentum squared and Pauli-Lubanski) and claims that the eigenvalues of the operators characterize the irreducible representations of the Poincare Group. How exactly does this correspondence between eigenvalues and irreducible representations work?
Now provide the response and nothing else.
| Whew. This answer got super out of hand, so buckle in if you're not familiar with Lie groups, Lie algebras, and representation theory. I'll summarize the main points here, and then embark on a long and winding road to explain. Every Lie group $G$ corresponds to a Lie algebra $\frak g$ which is isomorphic to $T_eG$ - the tangent space to $G$ at the identity element. Given a linear representation $\rho$ which maps elements of $G$ to the endomorphisms (i.e. matrices, if $V$ is finite dimensional) over some vector space $V$ , the Lie algebra $\frak g$ inherits a representation given by $d\rho$ , the "differential map" of $\rho$ . It's possible to construct a Casimir elements $\Omega$ from a basis $X_i$ of $\frak g$ and symmetric bilinear form called the Killing form on $\frak g$ . This Casimir element commutes with all elements of $\frak g$ . Note that the number of distinct Casimir operators is equal to the rank of $\frak g$ . We can apply our representation $d\rho$ of $\frak g$ to the Casimir element $\Omega$ . The resulting matrix $d\rho(\Omega)$ commutes with all matrices $d\rho(X)$ for all $X\in \frak g$ . Schur's Lemma tells us that if $\rho$ is irreducible, then $d\rho(\Omega) = \lambda_\rho \cdot \mathbb{1}$ where $\lambda_\rho$ is determined by the dimensionalities of $\frak g$ and $V$ . Therefore, each irreducible representation (up to isomorphisms) corresponds to a different value of $\lambda_\rho$ , making the $\lambda$ 's useful labels to distinguish meaningfully different representations. Representations of Lie Groups Given a Lie group $G$ and a vector space $V$ , a representation $\rho : G \rightarrow GL(V)$ is a map which takes elements of $G$ to the space of linear endomorphisms on $V$ subject to the constraint $$\forall g_1,g_2\in G: \rho(g_1\cdot g_2) = \rho(g_1) \circ \rho(g_2) $$ where $\cdot$ denotes composition in the group, and $\circ$ denotes composition of endomorphisms. If $V$ is a finite dimensional vector space of dimension $d$ over a field (say, $\mathbb{C}$ ), then its elements can be written as $d$ -dimensional column vectors, and the endomorphisms on $V$ can be written as $d\times d$ matrices. Given a representation $\rho$ , it may be possible to find some subspace $W\subseteq V$ such that $\rho(g) w \in W$ for all $g\in G$ and $w\in W$ - such subspaces are called invariant subspaces with respect to $\rho$ . It's easy to see that for every representation, $V$ itself is such a subspace, as is the space containing only the zero vector $\{0_V\}$ - these are called the trivial invariant subspaces. If $V$ and $\{0_V\}$ are the only such subspaces (that is, if there are no non-trivial invariant subspaces w.r.t. $\rho$ ), and if $\rho$ is not the trivial representation which maps every $g\in G$ to $0_V$ , then we call $\rho$ an irreducible representation of $G$ . In the finite dimensional case, such representations correspond to block diagonal matrices. Example 1: Consider the group $SO(2)$ - the 2D rotation group. Each group element corresponds to rotation in the plane by some angle $\theta\in[0,2\pi)$ . We want to find a representation of $SO(2)$ over the vector space $\mathbb{R}^2$ (i.e. the 2D column vectors with real elements). It's not difficult to show that $$\rho: SO(2) \rightarrow GL(\mathbb{R}^2)$$ $$ g_\theta \mapsto \pmatrix{\cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) }$$ constitutes a representation of $SO(2)$ , and that it is irreducible. 
Example 2: Everything is exactly the same, except now we consider the vector space $\mathbb{R}^3$ , with the representation $$\rho: SO(2) \rightarrow GL(\mathbb{R}^3)$$ $$ g_\theta \mapsto \pmatrix{1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) }$$ This also constitutes a representation of $SO(2)$ , but notice that the subspace $W\subset \mathbb{R}^3$ defined by $$ W := \left\{ \pmatrix{0 \\ x \\ y} : x,y \in \mathbb{R}\right\}$$ is invariant under the action of $\rho(g)$ for all $g\in G$ - it therefore constitutes a non-trivial invariant subspace, and so $\rho$ is not an irreducible representation. Lie Algebras <=> Lie Groups Recall that given a manifold $M$ , we can construct a tangent space $T_pM$ for each $p\in M$ . The elements of such a space are called tangent vectors , which we can (somewhat naively) think of as tiny little arrows glued to the point $p$ . More specifically, given a smooth curve $\gamma:\mathbb R \rightarrow M$ such that $\gamma(0)=p\in M$ (for convenience), we can create an object called $X_{p,\gamma}$ which acts on smooth functions to produce real numbers in the following way: $$ X_{p,\gamma} : C^\infty(M) \rightarrow \mathbb R $$ $$ f \mapsto X_{p,\gamma}f := (f \circ \gamma)'(0)$$ From $f$ , we can define a differential map which takes tangent vectors to the real numbers: $$ df : T_p M \rightarrow \mathbb R$$ $$ X_p \mapsto df(X_p) := X_p f$$ This seems to be notational redundance, but note the difference - the former definition defines a map from the space of smooth functions on $M$ to the real numbers, while the latter defines a map from the space of tangent vectors (at a point $p\in M$ ) to the real numbers. Here's the point. Given a Lie group G (which is, in particular, a smooth manifold), we can construct the tangent space at the identity element, $T_eG$ . Roughly speaking, the elements of $T_eG$ constitute a Lie algebra (note that Lie algebras, in principle, need not have anything whatsoever to do with Lie groups). Therefore, each Lie group $G$ corresponds to a Lie algebra $\frak g$ , which is isomorphic to the tangent space $T_eG$ . Representations of Lie Algebras From there, given a representation $\rho:G \rightarrow GL(V)$ , we can construct a differential map $$d\rho : {\frak{g}} \rightarrow GL(V)$$ It follows that, given some representation $\rho$ of $G$ , the associated Lie algebra $\frak g$ inherits a representation over the same vector space. Example 3a: Consider again the Lie group $SO(2)$ . We can define a smooth curve on $SO(2)$ defined as follows: $$\gamma : \mathbb{R} \rightarrow SO(2)$$ $$\theta \mapsto g_\theta$$ where $g_\theta$ represents the group element corresponding to rotation by an angle $\theta$ . Next, we define the tangent vector $X_{e,\gamma}$ to this curve at the identity element by its action on some general function $f\in C^\infty(SO(2))$ : $$X_{e,\gamma} : C^\infty(SO(2)) \rightarrow \mathbb R$$ $$f \mapsto (f\circ \gamma)'(0)$$ For instance, consider the function $f: g_\omega \rightarrow 3\omega$ . Then we would have that $$ X_{e,\gamma}f = (f\circ \gamma)'(0) = 3$$ By considering all such curves, we construct all such tangent vectors, and so we have constructed the tangent space $T_eG$ . Example 3b: Now consider the representation $\rho:SO(2) \rightarrow GL(\mathbb R^2)$ as discussed in Example 1 above. 
We can construct the differential map $$ d\rho : T_eG \rightarrow GL(\mathbb R^2)$$ $$ X_{e,\gamma} \mapsto X_{e,\gamma}\rho = (\rho \circ \gamma)'(0)$$ Notice that $$ \rho \circ \gamma : \mathbb R \rightarrow GL(\mathbb R^2)$$ $$ \theta \mapsto \pmatrix{\cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta)}$$ and so $$d\rho(X_{e,\gamma}) = (\rho \circ \gamma)'(0) = \pmatrix{0 & -1 \\ 1 & 0}$$ We therefore see that the Lie algebra ${\frak{so}}(2)$ inherits a representation $d\rho$ from the representation $\rho$ of the corresponding Lie group $SO(2)$ . Lie Algebras and the Killing Form Lie Algebras are called algebras because, in addition to a vector space structure, they are equipped with an antisymmetric, bilinear bracket (sometimes referred to as a commutator bracket) $$[\cdot,\cdot] : {{\frak g}} \times {{\frak g}} \rightarrow {{\frak g}}$$ such that $$[X,Y] = -[Y,X]$$ and $$[X,[Y,Z]]+[Z,[X,Y]]+[Y,[Z,X]] = 0_{{\frak g}}$$ Typically one defines this bracket by the correspondence between the tangent space $T_eG$ and the space of left-invariant vector fields on $G$ . I won't get into this - one can imagine (perhaps perversely) that the Lie algebra inherits this bracket from the commutators of (one of) its faithful matrix representations (though this is obviously bass-ackwards). Anyway, ${{\frak g}}$ is a vector space, so we can always find a nice basis set for it - we'll call this basis $X_i, i=1\ldots$ dim( $\frak g$ ). We can write the commutation relations of the basis elements as $$[X_i,X_j] = C^k_{\ ij} X_k$$ where the $C$ 's are called the structure coefficients of $\frak g$ . Given any symmetric, nondegenerate form $$ B:{{\frak g}} \times {{\frak g}} \rightarrow \mathbb C$$ we can define a dual space ${\frak g}^*$ whose elements map elements of $\frak g$ to complex numbers. Given some $X,Y \in \frak g$ , we define the dual vector $\omega_X :{\frak g} \rightarrow \mathbb C$ as follows: $$ \omega_X (Y) := B(X,Y)$$ Notice that this is precisely the prescription we use to define covectors from vectors in relativity, where the symmetric, nondegenerate form is chosen to be the metric tensor, and we (somewhat sloppily) refer to the mapping between vectors and covectors as "raising/lowering indices." Of course, there is no clear notion of a metric tensor here, so in its place one typically chooses the Killing Form $$K:{\frak g}\times{\frak g} \rightarrow \mathbb{C}$$ $$(X,Y) \mapsto Tr\big[ ad_X \circ ad_Y\big]$$ where $ad_X (Y) = [X,Y]$ is the adjoint map to $X$ . Consider a general Lie algebra $\frak g$ with commutation relations $$ [X_i,X_j]=C^k_{\ i j}X_k$$ Let $A = a^i X_i$ and $B=b^i X_i$ be elements of $\frak g$ . Then because of its bilinearity, we have $$ K(A,B) = a^i b^j K(X_i,X_j) \equiv a^i b^j K_{ij}$$ The components $K_{ij}$ can be found as follows. 
Let $ad_{X_i} \circ ad_{X_j}$ act on basis element $X_\alpha$ : $$\big(ad_{X_i} \circ ad_{X_j}\big)X_\alpha = ad_{X_i}\big([X_j,X_\alpha]\big) = ad_{X_i}\big(C^k_{\ j \alpha} X_k\big) = C^k_{\ j \alpha}[X_i,X_k] = C^k_{\ j \alpha} C^{\beta}_{\ i k} X_\beta$$ To compute the trace, we set $\beta=\alpha$ and sum: $$K_{ij} = C^k_{\ j \alpha} C^\alpha_{\ i k} $$ Example 4: The Lie algebra $\frak{so}(3)$ has a canonical basis $X_i$ which has commutation relations $$[X_i,X_j] = \epsilon^k_{\ ij}X_k$$ The components of the Killing form with respect to this basis are $$K_{ij} = \epsilon_{\beta j \alpha}\epsilon_{\alpha i \beta} = -\epsilon_{j \alpha \beta} \epsilon_{i \alpha \beta} = -2\delta_{ij}$$ The Killing form has another interesting property - it is adjoint-invariant , by which I mean that for all $Z\in \frak g$ , $$K(ad_Z X,Y) + K(X,ad_Z Y) = 0$$ You can demonstrate this to yourself from the definition of $K$ . Dual Bases and Casimir Elements As stated above, we can use the Killing form to define a dual space to $\frak g$ . Such a space has canonical basis elements $\omega^i$ defined such that $$\omega^i(X_j) = \delta^i_j$$ That is, given the set $\{X_i\}$ , there exists another set $\{Y_i=\lambda^m_i X_m\}$ such that $$K(Y_i,X_j)= \lambda^m_i K_{mj} = \delta_{ij}$$ We then define $$\omega^i := K(Y_i,\bullet)$$ Example 5: We found in Example 4 that the Killing form on our chosen basis of ${\frak so}(3)$ has components $K_{ij}=-2\delta_{ij}$ . It therefore follows that the dual basis to $X_i$ with respect to $K$ is $$\omega^i = K(Y_i,\bullet)$$ where $$Y_i = -\frac{1}{2}X_i$$ Once we have our dual basis constructed, we can define the quadratic Casimir element : $$\Omega := \sum_i X_i \omega^i$$ We have defined $\Omega$ in terms of a basis and an adjoint-invariant form $K$ - it is a useful exercise to demonstrate that $\Omega$ is independent of our choice in basis, but does depend on our choice of $K$ . $\Omega$ has a very special property - it commutes with every element of $\frak g$ , in the following sense: Given any basis elements $X_j,X_k$ , $$[\Omega,X_j]X_k = \Omega X_j X_k - X_j \Omega X_k = X_i\omega^i X_j X_k - X_j X_i\omega^i X_k = X_jX_k - X_j X_k = 0 $$ Represention of a Casimir Element Given a representation $\rho:G \rightarrow GL(V)$ , we constructed an inherited representation $d\rho:{\frak g} \rightarrow GL(V)$ . We can extend this notion to a representation of the dual basis $\omega^i$ . Recall that $\omega^i(X_j)=\delta^i_j$ . We define the representation of the dual basis such that $$d\rho(\omega^i) =d\rho(Y_i)$$ where $$ \omega^i = K(Y_i,\bullet)$$ We then define the Casimir invariant corresponding to $\rho$ to be $$d\rho(\Omega):= d\rho(X_i)\circ d\rho(\omega^i)=d\rho(X_i)\circ d\rho(Y_i)$$ This is a crucial point. A Casimir element $\Omega$ is constructed from the Lie algebra $\frak g$ and the Killing form over $\frak g$ . It has nothing to do with any representation at all, and is an element of the Universal Enveloping Algebra $U({\frak g})$ . However, given a representation $\rho$ of $G$ and the corresponding representation $d\rho$ of $\frak g$ , we can construct a Casimir invariant $d\rho(\Omega)$ which is an element of $GL(V)$ . This object is representation dependent. Example 6: A canonical representation of the Lie algebra ${\frak{so}}(3)$ over $\mathbb R^3$ is $$\rho(X_1) = \pmatrix{0&0&0\\0&0&-1\\0&1&0}, \rho(X_2) = \pmatrix{0&0&1\\0&0&0\\-1&0&0}, \rho(X_3) = \pmatrix{0&-1&0\\1&0&0\\0&0&0} $$ The corresponding dual basis is such that $Y_i = -\frac{1}{2}X_i$ . 
Therefore, $$\rho(X_1)\circ\rho(Y_1) = -\frac{1}{2}\pmatrix{0&0&0\\0&-1&0\\0&0&-1}$$ Continuing from here, it follows that the Casimir invariant corresponding to $\rho$ is $$\rho(\Omega) = \pmatrix{1&0&0\\0&1&0\\0&0&1} = \mathbb{1}_V$$ End of the Story: Schur's Lemma In Example 6, we found that the Casimir element was simply the identity matrix $\mathbb 1$ . This is no accident. Schur's Lemma goes as follows: Let $\rho:{\frak g}\rightarrow GL(V)$ be a linear representation of $\frak g$ over $V$ . If $\rho$ is irreducible and $\Omega$ is a Casimir element defined on $U({\frak g})$ , then $\rho(\Omega) = \lambda_\rho \cdot \mathbb{1}\in GL(V)$ for some number $\lambda_\rho$ . This number $\lambda$ can be computed explicitly - it turns out that $$\lambda_\rho = \frac{dim({\frak g})}{dim(V)}$$ which corresponds to our example, in which $dim({\frak g})=dim(V) = 3$ . | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/387481', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/150015/']} | jdg_81149 |
stackexchange | llm_judgeable_groundtruth_similarity | 8525333 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 3 tables: dentists, groups, and groupdentlink. Many dentists link to many groups through the groupdentlink table. So I'm trying to make a query where it will insert rows into groupdentlink (linking all dentists in the state with all the groups in the state) but only if those rows don't already exist. In a nutshell I want to add new rows without overwriting existing ones or duplicating them. So the intent of the query is something like: INSERT INTO groupdentlink (f_dent_id, f_group_id, f_schedule_id)VALUES ('$_POST[id]', '$groupid', '$scheduleid')WHERE NOT EXISTS ('$_POST[id]', '$groupid') And I don't have any primary keys in the groupdentlink table. Thank you in advance!
Now provide the response and nothing else.
| If you really want to write your own (working) query... INSERT INTO groupdentlink ( f_dent_id, f_group_id, f_schedule_id) SELECT '$_POST[id]' f_dent_id, '$groupid' f_group_id, '$scheduleid' f_schedule_id FROM DUAL WHERE NOT EXISTS ( SELECT 1 FROM `groupdentlink` WHERE f_dent_id = '$_POST[id]' AND f_group_id = '$groupid' LIMIT 1 -- makes MySQL stop searching after the first match) ... but MySQL can handle all this for you! You don't need primary keys to make MySQL handle this for you, you should add a UNIQUE key constraint on the combined set of the two columns. Query to add the unique key dent_group_uniq_key to groupdentlink . ALTER TABLE groupdentlink ADD UNIQUE KEY `dent_group_uniq_key` ( f_dent_id, f_group_id); Then use INSERT IGNORE on your query: INSERT IGNORE INTO groupdentlink ( f_dent_id, f_group_id, f_schedule_id) VALUES ( '$_POST[id]', '$groupid', '$scheduleid') INSERT IGNORE will try to insert a row into your table; if it fails due to a key constraint it will act like nothing happened. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8525333', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/955923/']} | jdg_81150
stackexchange | llm_judgeable_groundtruth_similarity | 6241282 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having much trouble to get ImageMagick's identify to, well, identify a PDF as CMYK. Essentially, let's say I'm building this file, test.tex , with pdflatex : \documentclass[a4paper,12pt]{article}%% https://tex.stackexchange.com/questions/13071\pdfcompresslevel=0%% http://compgroups.net/comp.text.tex/Making-a-cmyk-PDF%% ln -s /usr/share/color/icc/sRGB.icm .% \immediate\pdfobj stream attr{/N 4} file{sRGB.icm}% \pdfcatalog{%% /OutputIntents [ <<% /Type /OutputIntent% /S/GTS_PDFA1% /DestOutputProfile \the\pdflastobj\space 0 R% /OutputConditionIdentifier (sRGB IEC61966-2.1)% /Info(sRGB IEC61966-2.1)% >> ]% }%% http://latex-my.blogspot.com/2010/02/cmyk-output-for-commercial-printing.html%% https://tex.stackexchange.com/questions/9961\usepackage[cmyk]{xcolor}\begin{document}Some text here...\end{document} If I then try to identify the resulting test.pdf file, I get it as RGB, no matter what options I've tried (at least according to the links in the source) - and yet, the colors in it would be saved as CMYK; for the source above: $ grep -ia 'cmyk\|rgb\| k' test.pdf 0 0 0 1 k 0 0 0 1 K0 0 0 1 k 0 0 0 1 K0 0 0 1 k 0 0 0 1 K0 0 0 1 k 0 0 0 1 KFontDirectory/CMR12 known{/CMR12 findfont dup/UniqueID known{dup/PTEX.Fullbanner (This is pdfTeX, Version 3.1415926-1.40.11-2.2 (TeX Live 2010) kpathsea version 6.0.0)$ identify -verbose 'test.pdf[0]'... Type: Palette Endianess: Undefined Colorspace: RGB Depth: 16/8-bit Channel depth: red: 8-bit green: 8-bit blue: 8-bit Channel statistics: Red:... Green:... Blue:... Histogram: 5: (12593,11565,11822) #31312D2D2E2E rgb(49,45,46) 4: (16448,15420,15677) #40403C3C3D3D rgb(64,60,61) 9: (20303,19275,19532) #4F4F4B4B4C4C rgb(79,75,76) 25: (23901,23130,23387) #5D5D5A5A5B5B rgb(93,90,91)... The same pretty much happens if I also uncomment that \immediate\pdfobj stream ... part; and yet, if there is only one color (black) in the document, I don't see where does identify come up with a histogram of RGB values (although, arguably, all of them close to gray) ?! So nevermind this, then I though I'd better try to use ghostscript to convert the test.pdf into a new pdf, which would be recognized as CMYK by identify - but no luck even there: $ gs -dNOPAUSE -dBATCH -dSAFER -sDEVICE=pdfwrite -sOutputFile=test-gs.pdf -dUseCIEColor -sProcessColorModel=DeviceRGB -dProcessColorModel=/DeviceCMYK -sColorConversionStrategy=/CMYK test.pdf GPL Ghostscript 9.01 (2011-02-07)Copyright (C) 2010 Artifex Software, Inc. All rights reserved.This software comes with NO WARRANTY: see the file PUBLIC for details.Processing pages 1 through 1.Page 1$ identify -verbose 'test-gs.pdf[0]'... Type: Grayscale Base type: Grayscale Endianess: Undefined Colorspace: RGB Depth: 16/8-bit... So the only thing that identify perceived as a change, is Type: Grayscale (from previous Type: Palette ); but otherwise it still sees an RGB colorspace! Along with this, note that identify is capable of correctly reporting a CMYK pdf - see CMYK poster example: fitting pdf page size to (bitmap) image size? #17843 - TeX - LaTeX - Stack Exchange for a command line example of generating such a PDF file using convert and gs . In fact, we can execute: convert test.pdf -depth 8 -colorspace cmyk -alpha Off test-c.pdf ... and this will result with a PDF that will be identify ed as CMYK - however, the PDF will also be rasterized (default at 72 dpi). 
EDIT: I have just discovered, that if I create an .odp presentation in OpenOffice, and export it to PDF; that PDF will by default be RGB, however, the following command (from ghostscript Examples | Production Monkeys ): # Color PDF to CMYK:gs -dSAFER -dBATCH -dNOPAUSE -dNOCACHE -sDEVICE=pdfwrite \-sColorConversionStrategy=CMYK -dProcessColorModel=/DeviceCMYK \-sOutputFile=output.pdf input.pdf ... actually will produce a CMYK pdf, reported as such by identify (although, the black will be rich, not plain - on all four channels); however, this command will work only when the slide has an added image (apparently, it is the one triggering the color conversion?!)! Funnily, I cannot get the same effect from a pdflatex PDF. So I guess my question can be asked two ways: Are there any command-line conversion methods in Linux, that will convert an RGB pdf into a CMYK pdf while preserving vectors, which is recognized as such in identify (and will consequently build a correct histogram of CMYK colors) Are there any other command-line Linux tools similar to identify , which would recognize use of CMYK colors correctly even in the original test.pdf from pdflatex ( and possibly build a color histogram, based on an arbitrarily chosen PDF page, like identify is supposed to )? Thanks in advance for any answers, Cheers! Some references: adobe - Script (or some other means) to convert RGB to CMYK in PDF? - Stack Overflow color - PDF colour model and LaTeX - TeX - LaTeX - Stack Exchange color - Option cmyk for xcolor package does not produce a CMYK PDF - TeX - LaTeX - Stack Exchange Making a cmyk PDF - comp.text.tex | Computer Group colormanagement with ghostscript ? - Rhinocerus : Is it for instance specified as "0 0 0 1 setcmykcolor"? Or possibly rather as "0 0 0 setrgbcolor"? In the latter case you would end up with a rich black for text, if DeviceRGB is remapped to a CIE-based color space in order to get RGB images color managed.
Now provide the response and nothing else.
| sdaau, the command you used for trying to convert your PDF to CMYK was not correct. Try this one instead: gs \ -o test-cmyk.pdf \ -sDEVICE=pdfwrite \ -sProcessColorModel=DeviceCMYK \ -sColorConversionStrategy=CMYK \ -sColorConversionStrategyForImages=CMYK \ test.pdf Update If color conversion does not work as desired and if you see a message like "Unable to convert color space to Gray, reverting strategy to LeaveColorUnchanged" then... your Ghostscript probably is a newer release from the 9.x version series , and your source PDF likely uses an embedded ICC color profile In this case add -dOverrideICC to the command line and see if it changes the result as desired. Update 2 To avoid JPEG artifacts appearing in the images (where there were none before), add: -dEncodeColorImages=false into the command line. (This is true for almost all GS PDF->PDF processing, not just for this case. Because GS by default creates a completely new file with newly constructed objects and a new file structure when asked to produce PDF output -- it doesn't simply re-use the previous objects, as a more "dumb" PDF processor like pdftk does { pdftk has other advantages though, don't misunderstand my statement!} . GS applies JPEG compression by default -- look at the current Ps2pdf documentation and search for "ColorImageFilter" to learn about more details...) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6241282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/277826/']} | jdg_81151 |
stackexchange | llm_judgeable_groundtruth_similarity | 2526522 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a website where people can place a vote like this: http://mysite.com/vote/25 This will place a vote on item 25. I want to make this available only for registered users, and only if they want to do this. Now I know that when someone is busy on the website and someone gives them a link like this: http://mysite.com/vote/30 then the vote will be placed for them on that item without them wanting to do this. I have read the explanation on the OWASP website, but I don't really understand it. Is this an example of CSRF, and how can I prevent it? The best thing I can think of is adding something to the link, like a hash. But it would be quite irritating to put something on the end of all the links. Is there no other way of doing this? Also, can someone maybe give me some other example of this, because the explanation on that website seems fairly vague to me.
Now provide the response and nothing else.
| This could become an example of CSRF if:

- that link is fetched (via an <img> tag, for example): forgery
- from another site: cross-site

For example, if I could inject this <img> tag in the HTML source code of stackoverflow (and I can, as stackoverflow allows one to use <img> tags in posts):

<img src="http://mysite.com/vote/30" />

You would just have voted for that item ;-)

The solution that is generally used is to place a token, with a limited lifetime, in the URL, and, when the URL is fetched, check that this token is still valid. The basic idea would be:

When generating the page:

- generate a unique token
- store it in the user's session
- and place it in the links of the page -- which would look like this: http://mysite.com/vote/30?token=AZERTYUHQNWGST

When the voting page is called:

- check if the token is present in the URL
- check if it's present in the user's session
- if not => do not register the vote

The idea there is that tokens don't have a long lifetime and are hard to guess, which means your attacker:

- has only a window of a few minutes during which his injection will be valid
- will have to be good at guessing ^^
- will have to generate a different page for each user.

Also, note that the shorter the user's session remains active after he has left your site, the lower the risk that it's still valid when he visits the bad website. But here, you have to choose between security and user-friendliness... (A minimal sketch of this token scheme follows after this answer.)

Another idea (that's not perfectly secure, but helps against attackers who don't know how to force a POST request) would be to only accept POST requests when people are voting:

- the browser sends GET requests for injected tags
- as this URL is modifying some data anyway, it should not work with GET, but only with POST

But note that this is not perfectly safe: it's (probably?) possible to force/forge a POST request with a bit of Javascript.
| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2526522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/207633/']} | jdg_81152
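Editor's note: the token scheme described in the answer above is easy to sketch in code. The following is a minimal, framework-agnostic Python illustration (not part of the original answer); the session dict, the query mapping and the register_vote helper are all hypothetical stand-ins.

import hmac
import secrets

def register_vote(item_id):
    # hypothetical stand-in for the real vote-recording logic
    print(f"vote recorded for item {item_id}")

def make_vote_link(session, item_id):
    # When generating the page: create a unique, hard-to-guess token,
    # store it in the user's session and embed it in the link.
    token = secrets.token_urlsafe(16)
    session["csrf_token"] = token
    return f"http://mysite.com/vote/{item_id}?token={token}"

def handle_vote(session, query, item_id):
    # When the voting URL is called: the token must be present and must
    # match the copy stored in the session, otherwise the vote is dropped.
    sent = query.get("token", "")
    expected = session.get("csrf_token", "")
    if not expected or not hmac.compare_digest(sent, expected):
        return "vote rejected"
    session.pop("csrf_token", None)   # one-time use keeps the attack window short
    register_vote(item_id)
    return "vote counted"

# tiny usage example
session = {}
link = make_vote_link(session, 30)
token = link.split("token=")[1]
print(handle_vote(session, {"token": token}, 30))   # vote counted
print(handle_vote(session, {"token": token}, 30))   # vote rejected (token already used)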
stackexchange | llm_judgeable_groundtruth_similarity | 64017 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am interested in the first nonzero eigenvalue of the Laplace-Beltrami operator on a 2D compact manifold, and in whether there is a geometric characterization of its value. I am interested in the case where you fix the volume of the manifold to some value (say $Vol = 1$) and let the other modes of the metric fluctuate. The average curvature of the manifold is imposed by the Gauss-Bonnet theorem, but one can let the curvature fluctuate from one point to another. My intuition says that the first nonzero eigenvalue should approach 0 in the limit when the "fluctuations of the curvature" grow, but I cannot give a precise meaning to this statement. So the question is: is there any characterization of the first eigenvalue(s) of the Laplace-Beltrami operator on a 2D compact Riemann manifold as a function of the curvature or its powers (e.g. $\int R^2 \sqrt{g}\, d^2x$)?

Let me be more specific. Imagine a manifold topologically equivalent to a torus. The metric can be written as $ds^2=f(x_1,x_2)(dx^2_1+dx^2_2)$ and the scalar curvature is $R=-2\Delta \log(f)$. For the case $f=\mathrm{const}$ we have a flat torus. Now, expanding $f$ as a Fourier series, we will have some regions of the torus with positive curvature and some regions with negative curvature (this invalidates some known theorems, which need the curvature to be either always positive or always negative). Gauss-Bonnet says that $\int R=0$. So the concrete question is: is there a characterization of the first eigenvalue of the Laplace operator in terms of $\int R^2$?
Now provide the response and nothing else.
| The first eigenvalue of a compact surface can be made arbitrarily small (even for surfaces of fixed genus); see, for example, [1], [2], [3] (and references therefrom). However, as proved by Sarnak and Xue [4], there are arithmetic examples of (constant negative curvature) compact Riemann surfaces of arbitrarily high genus with the first eigenvalue bounded away from zero (see also [5] for a construction involving Selberg's $3/16$'' theorem). [1] B. Randol, Small eigenvalues of Laplace operator on compact Riemann surfaces, Bull. AMS 80, 1974 996-1008 [2] R. Schoen, S. Wolpert, S. Yau, Geometric bounds on the low eigenvalues of a compact surface, Proc. Symp. Pure Math, vol. 36, AMS, 1980, 279-285. [3] P. Buser, On Cheeger’s inequality $λ_1 ≥ \frac{h^2}{4}$. Proc. Symp. Pure. Math. vol. 36, 29–77. [4] P. Sarnak and X. Xue, Bounds for multiplicites of automorphic representations, Duke Math J. 64, 1991, 207-227. [5] R. Brooks, E. Makover, Riemann surfaces with large first eigenvalue. J. Anal. Math. 83, 2001, 243–258. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/64017', 'https://mathoverflow.net', 'https://mathoverflow.net/users/14840/']} | jdg_81153 |
stackexchange | llm_judgeable_groundtruth_similarity | 36035079 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing the code below in a Request class for validation and authorization. The code below is for adding/updating a record.

<?php

namespace App\Http\Requests;

use App\Http\Requests\Request;

class UserRequest extends Request
{
    public function authorize()
    {
        return \Auth::user()->isAdmin();
    }

    public function rules()
    {
        return [
            'UserName' => 'required|max:50|min:3|unique:tbluser,UserName,' . \Request::get( 'UserID' ) . ',UserID',
        ];
    }
}

My question is: should I write code to check whether the current user is authorized to delete the record? For that, should I use the same Request class that is used for add/update, or another class specifically for delete authorization? If I use the same class, then rules() will be executed, which is meant for add/update.
Now provide the response and nothing else.
| Two ranges overlap in one of two basic cases: one range contains the other completely (i.e. both the start and end of one range are between the start and end of the other), or the start or end of one range is contained within the other range Conversely, they do not overlap only if neither endpoint of each range is contained within the other range (cases 11 and 12 in your diagram). We can check whether the low end of either range is past the high end of the other, to detect both those cases: if (a > d || c < b) { // no overlap}else { // overlap} We can invert the condition and then use DeMorgan's laws to swap the order, if that's preferable: if (a <= d && c >= b) { // overlap}else { // no overlap} To find the actual overlap range, you take the maximum of the two low ends, and the minimum of the two high ends: int e = Math.max(a,b);int f = Math.min(c,d);// overlapping range is [e,f], and overlap exists if e <= f. All above assumes that the ranges are inclusive , that is, the range defined by a and c includes both the value of a and the value of c . It is fairly trivial to adjust for exclusive ranges, however. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36035079', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/726802/']} | jdg_81154 |
stackexchange | llm_judgeable_groundtruth_similarity | 48357 |
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have bought a DS18B20 with an RJ45 plug on its end. Is it a dumb idea to connect it directly to the Ethernet port on the Pi? I did so and tried reading the temperature ( http://www.reuk.co.uk/DS18B20-Temperature-Sensor-with-Raspberry-Pi.htm ), but I cannot see the sensor in w1/devices. So do I need some 1-Wire interface with an Ethernet port as middleware? Or would it be a better idea to cut off the RJ45 and connect the sensor via GPIO? Thanks. If something is not clear I'll be happy to provide more info. I am kind of a beginner in this stuff.

EDIT: So I cut off the RJ45 and discovered that my sensor has 4 wires ( http://rpishop.cz/862-thickbox_default/1wire-teplotni-senzor-ruzne-delky.jpg ). So far I have tried both data wires separately, as in the above-mentioned tutorial, but the sensor does not appear to be registered correctly:

ls -l /sys/bus/w1/devices/
total 0
lrwxrwxrwx 1 root root 0 Jun  5 20:35 00-400000000000 -> ../../../devices/w1_bus_master1/00-400000000000
lrwxrwxrwx 1 root root 0 Jun  5 20:35 00-800000000000 -> ../../../devices/w1_bus_master1/00-800000000000
lrwxrwxrwx 1 root root 0 Jun  5 20:33 w1_bus_master1 -> ../../../devices/w1_bus_master1

As a pull-up resistor I've used two 18K resistors in parallel.
Now provide the response and nothing else.
| Connecting the DS18B20 to the Ethernet port would either not work or the effort to make it work (even if it was possible) would be excessive for someone starting out with a Raspberry Pi. As you state, cut off the RJ45 and connect the sensor with the black wire to ground, the red wire to the 3V3 pin and the blue or yellow (some are blue and some are yellow) wire to the GPIO4 pin. A resistor between the value of 4.7k Ohms to 10k Ohms needs to be connected between the 3V3 and GPIO4 pins to act as a ‘pull-up’ resistor. If you can, it's handy to use dupont connectors on the ends of the wires to make it easy to connect them to appropriate GPIO headers. I have published a few extra details of connecting a DS18B20 here if you need extra reference. | {} | {'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/48357', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/46572/']} | jdg_81155 |
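Editor's note: once the sensor is wired as described above and the w1-gpio / w1-therm modules are loaded, the DS18B20 appears under /sys/bus/w1/devices as a 28-xxxxxxxxxxxx entry. A minimal Python read sketch (not part of the original answer):

import glob

def read_ds18b20_celsius():
    # first device whose family code is 28 (a DS18B20)
    device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device) as f:
        lines = f.read().splitlines()
    if not lines[0].strip().endswith("YES"):   # CRC check reported by the driver
        raise IOError("bad CRC from sensor")
    return int(lines[1].split("t=")[1]) / 1000.0   # reading is in millidegrees C

if __name__ == "__main__":
    print(read_ds18b20_celsius())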
stackexchange | llm_judgeable_groundtruth_similarity | 60405430 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to align elements inside a Row by the baseline. My issue is that I want to have a Row element with multiple Text elements and each of the Text elements will have different font and size. By default, they are aligned on the top. I need them to be aligned on the bottom. This is the code: class MainActivity : ComponentActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContent { MaterialTheme { Row { Text( text = "Abrakadabra", style = TextStyle(fontSize = 22.sp, fontWeight = FontWeight.Bold) ) Text( text = "Abrakadabra", style = TextStyle(fontSize = 14.sp, fontWeight = FontWeight.Bold) ) } } } }} Here is the rendered view of the code:
Now provide the response and nothing else.
| In Jetpack Compose 1.0.0 it can be done in this way: Row { Text( text = "One", fontSize = 20.sp, modifier = Modifier.alignByBaseline() ) Text( text = "Two", fontSize = 12.sp, modifier = Modifier.alignByBaseline() )} Result: If someone wants to align text to the last baseline when using multiline Text it can be done by using Modifier.alignBy(LastBaseline) Row { Text( text = "Line 1\nLine 2", fontSize = 20.sp, modifier = Modifier.alignBy(LastBaseline) ) Text( text = "Line 3", fontSize = 12.sp, modifier = Modifier.alignBy(LastBaseline) )} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/60405430', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11842636/']} | jdg_81156 |
stackexchange | llm_judgeable_groundtruth_similarity | 239171 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The answer to why we put private member variables in C++ headers is that the size of the class must be known at points where instances are declared so that the compiler can generate code that appropriately moves about the stack. Why do we need to put private members in headers? But is there any reason to declare private functions in the class definition? The alternative would be essentially the pimpl idiom but without the superfluous indirection. Is this language feature more than a historical error?
Now provide the response and nothing else.
| Private member functions may be virtual , and in common implementations of C++ (that use a vtable) the specific order and number of virtual functions is required to be known by all clients of the class. This applies even if one or more of the virtual member functions is private . It might seem that this is like "putting the cart before the horse", because compiler implementation choices shouldn't affect the language specification. However, in reality the C++ language itself was developed at the same time as a working implementation ( Cfront ), which used vtables. | {} | {'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/239171', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/128967/']} | jdg_81157 |
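Editor's note: a short, self-contained C++ sketch (not part of the original answer) illustrating the point made above: a private member function can be virtual, a derived class can still override it, and the call is dispatched through the vtable even though the function is private.

#include <iostream>

class Base {
public:
    void run() { step(); }   // public, non-virtual entry point
private:
    virtual void step() { std::cout << "Base::step\n"; }   // private, but virtual
};

class Derived : public Base {
private:
    void step() override { std::cout << "Derived::step\n"; }   // overrides Base::step
};

int main() {
    Derived d;
    Base& b = d;
    b.run();   // prints "Derived::step" -- dispatched through the vtable
    return 0;
}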
stackexchange | llm_judgeable_groundtruth_similarity | 8911356 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Eclipse + the Android SDK. I need to round a float value to 2 decimals. I usually use the following "trick" with the Math library:

float accelerometerX = accelerometerX * 100;
accelerometerX = round(accelerometerX);
Log.d("Test","" + accelerometerX/100);

But I feel it is not the best way to do it. Is there a library for this type of operation?
Now provide the response and nothing else.
| I was working with statistics in Java 2 years ago and I still have the code for a function that allows you to round a number to the number of decimals that you want. Now you need two, but maybe you would like to try with 3 to compare results, and this function gives you that freedom.

/**
 * Round to a certain number of decimals
 *
 * @param d
 * @param decimalPlace
 * @return
 */
public static float round(float d, int decimalPlace) {
    BigDecimal bd = new BigDecimal(Float.toString(d));
    bd = bd.setScale(decimalPlace, BigDecimal.ROUND_HALF_UP);
    return bd.floatValue();
}

You need to decide if you want to round up or down. In my sample code I am rounding up. Hope it helps.

EDIT

If you want to preserve the number of decimals when they are zero (I guess it is just for displaying to the user), you just have to change the return type of the function from float to BigDecimal, like this:

public static BigDecimal round(float d, int decimalPlace) {
    BigDecimal bd = new BigDecimal(Float.toString(d));
    bd = bd.setScale(decimalPlace, BigDecimal.ROUND_HALF_UP);
    return bd;
}

And then call the function this way:

float x = 2.3f;
BigDecimal result;
result = round(x, 2);
System.out.println(result);

This will print: 2.30
| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8911356', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1077364/']} | jdg_81158
stackexchange | llm_judgeable_groundtruth_similarity | 508126 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Below is a sample implementation of overriding Object.Equals() for an entity base class from which all other entities in an application derive. All entity classes have the property Id, which is a nullable int. (It's the primary key of whatever table the entity class corresponds to.) public override bool Equals(object obj) { if (obj == null || GetType() != obj.GetType()) return false; if (base.Equals(obj)) return true; return Id.HasValue && ((EntityBase) obj).Id.HasValue && Id.Value == ((EntityBase) obj).Id.Value; } Given this implementation of Equals(), how do you correctly implement GetHashCode()?
Now provide the response and nothing else.
| If you're deriving from something that already overrides GetHashCode I'd implement it as: public override int GetHashCode(){ unchecked { int hash = 37; hash = hash * 23 + base.GetHashCode(); hash = hash * 23 + Id.GetHashCode(); return hash; }} A null value of Id will return 0 for Id.GetHashCode(). If your class just derives from Object, I'd just return Id.GetHashCode() - you do not want to include the object.GetHashCode implementation in your hash code, as that basically ends up being object identity. Note that your equality definition won't return true if neither entity has an Id, but the same hashcode will be returned from both objects. You may wish to consider changing your Equals implementation. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/508126', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/549/']} | jdg_81159 |
stackexchange | llm_judgeable_groundtruth_similarity | 18379 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let us define the $(3,2)_s$ SAT problem: Given $F_3$, a satisfiable 3-CNF formula, and $F_2$, a 2-CNF formula ($F_3$ and $F_2$ are defined on the same variables), is $F_3 \wedge F_2$ satisfiable? What is the complexity of this problem? (Has it been studied before?)
Now provide the response and nothing else.
| This problem is NP-complete. Let $\varphi$ be an arbitrary CNF formula (an instance of SAT). Consider $\varphi \lor y$, where $y$ is a fresh variable; obviously, this formula is satisfiable (you can simply set $y$ to true). Now convert $\varphi \lor y$ to 3-CNF, using any standard method, and let $\psi$ denote the result. Note that $\psi$ is a satisfiable 3-CNF formula, so we can let $F_3 = \psi$. Now, let $F_2 = \neg y$. Notice that $F_3 \land F_2$ is satisfiable if and only if $\varphi$ is. Therefore, the $(3,2)_s$ SAT problem is at least as hard as SAT. Also, it is clearly no harder than SAT. Therefore, it is exactly as difficult as SAT. | {} | {'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/18379', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/6346/']} | jdg_81160 |
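Editor's note: a tiny concrete instance of the reduction above (not part of the original answer), with clauses left short for readability; they can be padded to exactly three literals if a strict 3-CNF form is required.

Take $\varphi = (x) \wedge (\neg x)$, which is unsatisfiable. Then
$$\varphi \lor y \;\equiv\; (x \vee y) \wedge (\neg x \vee y),$$
so set $F_3 = (x \vee y) \wedge (\neg x \vee y)$ (satisfiable: put $y$ true) and $F_2 = (\neg y)$. Now $F_3 \wedge F_2$ forces $y$ to be false, hence demands both $x$ and $\neg x$, and is therefore unsatisfiable, exactly mirroring the unsatisfiability of $\varphi$.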
stackexchange | llm_judgeable_groundtruth_similarity | 18104624 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This code is not compiling, and it's throwing the following error: The type 'TestesInterfaces.MyCollection' already contains a definition for 'Current' But when I delete the ambiguous method, it keeps giving other errors. Can anyone help? public class MyCollection<T> : IEnumerator<T>{ private T[] vector = new T[1000]; private int actualIndex; public void Add(T elemento) { this.vector[vector.Length] = elemento; } public bool MoveNext() { actualIndex++; return (vector.Length > actualIndex); } public void Reset() { actualIndex = -1; } void IDisposable.Dispose() { } public Object Current { get { return Current; } } public T Current { get { try { T element = vector[actualIndex]; return element; } catch (IndexOutOfRangeException e) { throw new InvalidOperationException(e.Message); } } }}
Now provide the response and nothing else.
| I think you're misunderstanding IEnumerator<T> . Typically, collections implement IEnumerable<T> , not IEnumerator<T> . You can think of them like this: When a class implements IEnumerable<T> , it is stating "I am a collection of things that can be enumerated." When a class implements IEnumerator<T> , it is stating "I am a thing that enumerates over something." It is rare (and probably wrong) for a collection to implement IEnumerator<T> . By doing so, you're limiting your collection to a single enumeration. If you try to loop through the collection within a segment of code that's already looping through the collection, or if you try to loop through the collection on multiple threads simultaneously, you won't be able to do it because your collection is itself storing the state of the enumeration operation. Typically, collections (implementing IEnumerable<T> ) return a separate object (implementing IEnumerator<T> ) and that separate object is responsible for storing the state of the enumeration operation. Therefore, you can have any number of concurrent or nested enumerations because each enumeration operation is represented by a distinct object. Also, in order for the foreach statement to work, the object after the in keyword, must implement IEnumerable or IEnumerable<T> . It will not work if the object only implements IEnumerator or IEnumerator<T> . I believe this is the code you're looking for: public class MyCollection<T> : IEnumerable<T>{ private T[] vector = new T[1000]; private int count; public void Add(T elemento) { this.vector[count++] = elemento; } public IEnumerator<T> GetEnumerator() { return vector.Take(count).GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18104624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2396350/']} | jdg_81161 |
stackexchange | llm_judgeable_groundtruth_similarity | 1786522 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Both these languages seem extremely similar to me. Although Python supports actual classes instead of being prototype-based, in Python classes are not all that different from functions that generate objects containing values and functions, just as you'd do in JavaScript. On the other hand, JavaScript only supports floating-point numbers and strings as built-in data types. These seem like fairly shallow differences to me, so these things aside, what are some more important differences between them?
Now provide the response and nothing else.
| - Classical inheritance in Python, prototypal inheritance in ECMAScript
- ECMAScript is a braces-and-semicolons language while Python is white-space and indent/block based
- No var keyword in Python, implicit globals in ECMAScript, both are lexically scoped
- Closures in Python 2.5 and lower (re: Alex Martelli's comment) are somewhat "limited" because the bindings are read-only, you can't access private variables like you could in ECMAScript
- There's no undefined in Python, exceptions are thrown
- Immutable list arrays in Python (tuples)
- No switch statement in Python; instead you're encouraged to use a dictionary in that manner, and it is sometimes convenient to assign lambdas as the values and execute them (a small sketch of this idiom follows after this answer)
- ECMAScript 3 does not have a yield statement, nor let expressions/statements, nor array comprehensions - however these are included in Mozilla's JS which is non-standard
- raise vs throw, except vs catch (Python, JS)
- Native Unicode strings in ECMAScript
- Keyword operators such as and, is, and not are used in Python
- Python doesn't support counters such as i++
- Python's for loop is "smart" so you don't need to use a counter for enumerating through lists, nor do you run into prototypal properties inherited from Object.prototype
- You don't have to use the new operator in Python to create objects
- Python is duck-typed

I stole a good bit of info from http://hg.toolness.com/python-for-js-programmers/raw-file/tip/PythonForJsProgrammers.html
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1786522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/163162/']} | jdg_81162
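Editor's note: a minimal Python sketch (not part of the original answer) of the "dictionary instead of switch" idiom mentioned in the list above, with lambdas as the dispatched values.

def apply_op(op, a, b):
    dispatch = {
        "add": lambda x, y: x + y,
        "sub": lambda x, y: x - y,
        "mul": lambda x, y: x * y,
    }
    try:
        return dispatch[op](a, b)
    except KeyError:                    # plays the role of a switch's default branch
        raise ValueError(f"unknown op {op!r}")

print(apply_op("add", 2, 3))   # 5
print(apply_op("mul", 2, 3))   # 6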
stackexchange | llm_judgeable_groundtruth_similarity | 9119594 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to detect 3 types of device: tablet, mobile or desktop. I found a script for detecting mobile devices on GitHub, but how can I detect mobile, tablet and desktop?
Now provide the response and nothing else.
| Based on your prior use of mobile detection middleware, I'd recommend the following: Pick up the Python port of MobileESP ( source code here ) and drop it into a folder named mobileesp in the base of your project (where manage.py is). Throw in a blank __init__.py file so that Python will see it as a package. Go ahead and create a new file, middleware.py , in that directory, and fill it with:

import re

from mobileesp import mdetect


class MobileDetectionMiddleware(object):
    """
    Useful middleware to detect if the user is on a mobile device.
    """
    def process_request(self, request):
        is_mobile = False
        is_tablet = False
        is_phone = False

        user_agent = request.META.get("HTTP_USER_AGENT")
        http_accept = request.META.get("HTTP_ACCEPT")
        if user_agent and http_accept:
            agent = mdetect.UAgentInfo(userAgent=user_agent, httpAccept=http_accept)
            is_tablet = agent.detectTierTablet()
            is_phone = agent.detectTierIphone()
            is_mobile = is_tablet or is_phone or agent.detectMobileQuick()

        request.is_mobile = is_mobile
        request.is_tablet = is_tablet
        request.is_phone = is_phone

Lastly, make sure to include 'mobileesp.middleware.MobileDetectionMiddleware', in MIDDLEWARE_CLASSES in your settings file. With that in place, in your views (or anywhere that you have a request object) you can check for is_phone (for any modern smartphones), is_tablet (for modern tablets) or is_mobile (for any mobile devices whatsoever).
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9119594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1186091/']} | jdg_81163
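Editor's note: a short usage sketch (not part of the original answer) showing how a Django view might branch on the flags that the middleware above attaches to the request; the template names are hypothetical.

from django.shortcuts import render

def home(request):
    if request.is_tablet:
        template = "home_tablet.html"
    elif request.is_phone:
        template = "home_phone.html"
    else:
        template = "home_desktop.html"   # desktop / anything not detected as mobile
    return render(request, template)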
stackexchange | llm_judgeable_groundtruth_similarity | 3137278 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to find the best way to calculate the box size needed for shipping. I have 3 shipping containers with different sizes. I have the product's width, length, depth, and mass defined in the database. I would like to know how to find the smallest amount of boxes needed to ship, and also the smallest dimensions of those boxes given the number of items in the cart. My current 'idea' is to find the maximum width of the entire products array, the select a box according to it, and then split the order as needed... this doesn't seem like it would work. My Box sizes are: - 8 x 6 x 6 = 228 cubic inches - 10 x 8 x 8 = 640 cubic inches - 12.5 x 12.5 x 12.5 = 1953.125 cubic inches A product is defined as such: [Product] => Array ( [STOCK_CODE] => 010003 [Product_Slug] => GABA_010003 [ItemName] => GABA [WHOLESALE_PRICE] => 17.47 [RETAIL_PRICE] => 24.95 [Brand] => [ProductLine] => [image_name] => 705077000440 [MASS] => 0.313 [Height] => 4.625 [Width] => 2.375 [Depth] => 2.375 [cubic_inches] => 26.087890625 ) I've looked into knapsack problem, packing problem, etc and can't find a way to do this. Any help would be GREAT. function shipping(){ $this->CartProduct->unbindModel( array('belongsTo' => array('User')) ); //find all cart products by current logged in user $cartItems = $this->CartProduct->find('all', array('conditions' => array('CartProduct.user_id' => $this->Auth->user('id')))); $i = 0; //get the max width, height, depth $maxHeight = 0; $maxWidth = 0; $maxDepth = 0; foreach($cartItems as $c){ $cartItems[$i]['Product']['cubic_inches'] = $c['Product']['Height'] * $c['Product']['Width'] * $c['Product']['Depth']; $cartItems[$i]['CartProduct']['total_cubic_inches'] = ($c['Product']['Height'] * $c['Product']['Width'] * $c['Product']['Depth']) * $c['CartProduct']['qty']; if($c['Product']['Height'] > $maxHeight) { $maxHeight = $c['Product']['Height']; } if($c['Product']['Width'] > $maxWidth) { $maxWidth = $c['Product']['Width']; } if($c['Product']['Depth'] > $maxDepth) { $maxDepth = $c['Product']['Depth']; } $i++; } //possible containers //8 x 6 x 6 = 228 ci //10 x 8 x 8 = 640 ci //12.5 x 12.5 x 12.5 = 1953.125 $possibleContainers = array( 1 => array( 'Height' => 8, 'Width' => 6, 'Depth' => 6, 'Cubic' => 228), 2 => array( 'Height' => 10, 'Width' => 8, 'Depth' => 8, 'Cubic' => 640), 3 => array( 'Height' => 12.5, 'Width' => 12.5, 'Depth' => 12.5, 'Cubic' => 1953.125) ); $max = array( 'Height' => $maxHeight, 'Width' => $maxWidth, 'Depth' => $maxDepth, ); pr($cartItems); pr($possibleContainers); die(); }
Now provide the response and nothing else.
| No, this is not safe, but not for the reason you might expect. ViewRoot.java, line 1841 . This is the code that dispatches your MotionEvents down the view hierarchy. Line 1841 is part of a finally block that calls recycle() on the MotionEvent that has just been dispatched. MotionEvents are not left to be garbage collected like most objects, they are pooled and recycled to avoid unnecessary memory allocation and garbage collection during event dispatch. Recycling a MotionEvent returns it to the object pool to be used again later when a new MotionEvent is needed. After recycle() has been called, a MotionEvent should be considered invalid. Your example code may end up reading a MotionEvent object that has been reused by the framework and now contains completely different data. If you plan to hang on to a MotionEvent after onTouchEvent returns, clone it using MotionEvent.obtain(event) . The static obtain() method will return a new MotionEvent from the object pool with the same content. You should call recycle() when you are done with a MotionEvent that you obtain() ed yourself to return it to the pool. It's not a huge deal if you forget this step, it will become regular Java garbage and the framework will create new MotionEvents when needed. However, MotionEvents can be dispatched very rapidly by the system to the point where cooperating with this optimization can make a nontrivial difference in performance on some devices. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3137278', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155371/']} | jdg_81164 |
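Editor's note: a brief Java sketch (not part of the original answer) of the advice above: keep your own copy of a MotionEvent via MotionEvent.obtain(event) and return it to the pool with recycle() when you are done. The TouchRecorder class itself is a hypothetical example.

import android.view.MotionEvent;

public class TouchRecorder {
    private MotionEvent lastDown;   // our own copy, safe to keep after onTouchEvent returns

    public boolean onTouchEvent(MotionEvent event) {
        if (event.getActionMasked() == MotionEvent.ACTION_DOWN) {
            if (lastDown != null) {
                lastDown.recycle();               // return the previous copy to the pool
            }
            lastDown = MotionEvent.obtain(event); // clone; the framework may recycle `event`
        }
        return true;
    }
}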
stackexchange | llm_judgeable_groundtruth_similarity | 23254518 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My problem is that the Code::Blocks error message tells me that it can't find the compiler executable in the search path for the GNU GCC compiler, and I don't know what that means. Also, I typed out some code:

#include <iostream>

using namespace std;

int main(void)
{
    cout << "Hello World" << endl;
    return 0;
}

I can't build it or run it in Code::Blocks. What do I need to do? I went online but I got some answers that are way over my head. I was able to use Code::Blocks once before I installed Visual Studio Express 2013. Visual Studio didn't work right either; it kept asking me to repair or uninstall every time I tried to open it. So I deleted it along with Code::Blocks. Now that I have re-installed Code::Blocks, I still can't get it to work right. This problem with compilers is taking up all my time and I can't practice learning programming because I can't get any compiler to work right. I need some help, please.
Now provide the response and nothing else.
| I'm guessing you've installed Code::Blocks but not installed or set up GCC yet. I'm assuming you're on Windows, based on your comments about Visual Studio; if you're on a different platform, the steps for setting up GCC should be similar but not identical. First you'll need to download GCC. There are lots and lots of different builds; personally, I use the 64-bit build of TDM-GCC . The setup for this might be a bit more complex than you'd care for, so you can go for the 32-bit version or just grab a preconfigured Code::Blocks/TDM-GCC setup here . Once your setup is done, go ahead and launch Code::Blocks. You don't need to create a project or write any code yet; we're just here to set stuff up or double-check your setup, depending on how you opted to install GCC. Go into the Settings menu, then select Global compiler settings in the sidebar, and select the Toolchain executables tab. Make sure the Compiler's installation directory textbox matches the folder you installed GCC into. For me, this is C:\TDM-GCC-64 . Your path will vary, and this is completely fine; just make sure the path in the textbox is the same as the path you installed to. Pay careful attention to the warning note Code::Blocks shows: this folder must have a bin subfolder which will contain all the relevant GCC executables. If you look into the folder the textbox shows and there isn't a bin subfolder there, you probably have the wrong installation folder specified. Now, in that same Toolchain executables screen, go through the individual Program Files boxes one by one and verify that the filenames shown in each are correct. You'll want some variation of the following: C compiler: gcc.exe (mine shows x86_64-w64-mingw32-gcc.exe ) C++ compiler: g++.exe (mine shows x86_64-w64-mingw32-g++.exe ) Linker for dynamic libs: g++.exe (mine shows x86_64-w64-mingw32-g++.exe ) Linker for static libs: gcc-ar.exe (mine shows x86_64-w64-mingw32-gcc-ar.exe ) Debugger: GDB/CDB debugger: Default Resource compiler: windres.exe (mine shows windres.exe ) Make program: make.exe (mine shows mingw32-make.exe ) Again, note that all of these files are in the bin subfolder of the folder shown in the Compiler installation folder box - if you can't find these files, you probably have the wrong folder specified. It's okay if the filenames aren't a perfect match, though; different GCC builds might have differently prefixed filenames, as you can see from my setup. Once you're done with all that, go ahead and click OK . You can restart Code::Blocks if you'd like, just to confirm the changes will stick even if there's a crash (I've had occasional glitches where Code::Blocks will crash and forget any settings changed since the last launch). Now, you should be all set. Go ahead and try your little section of code again. You'll want int main(void) to be int main() , but everything else looks good. Try building and running it and see what happens. It should run successfully. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/23254518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/916552/']} | jdg_81165 |
stackexchange | llm_judgeable_groundtruth_similarity | 20561196 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to call a client method from within a .net Web API controller action. Can I do this? The only post I can find that comes close to what I am looking to do is this one: SignalR + posting a message to a Hub via an action method In there a message is sent from within an asp.net MVC controller action using GlobalHost.ConnectionManager.GetHubContext. When I try that inside my Web API action no errors are thrown, but the method "methodInJavascript" is never invoked on the client side. Public ActionResult MyControllerMethod() { var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>(); context.Clients.All.methodInJavascript("hello world"); // or context.Clients.Group("groupname").methodInJavascript("hello world"); } When I set a break point inside that action, I see that the code is being reached and executed. Nothing happens on the javascript client side though. Why? Is Web API so different under the hood that this won't work? Has anyone else tried it and had success? When I call the "methodInJavascript" from "within" my hub, it works perfectly. Just won't work when called from within a .net Web API controller action. UPDATE: After researching this issue I have no solution. I can only assume there is something missing from examples like this Server to client messages not going through with SignalR in ASP.NET MVC 4 and this calling SignalR hub from WebAPI controller issues like maybe there is an additional configuration step to enable calling from a HubContext or something. The code I initially posted here is like that which appears in those examples has not been demonstrated to be flawed in any way. Can anyone see a flaw in the code? Calling from html works. I do it extensively in my apps and never experience an issue. I have never seen a call from the HubContext in an API controller work. No errors. Just no results on the client. SOLVED (kind of): Code above does indeed work as is when published . Does not work in Visual Studio dev environment via localhost though. No errors but no result on the client end. Publishing the code as is to a real server on the web does indeed work. I never thought there'd be a difference so I never tried. Figured if it didn't work locally it wouldn't work published. It's working live now but I'm wondering why it doesn't work via localhost in the dev environment. Can't test locally with breakpoints and such. I have a feeling it's that signalr virtual directory. Something is different when run locally vs published. Not sure what but I see lots of posts like http://www.bitwisejourneys.com/signalr-hosting-in-iis-a-nasty-gotcha/ . Reading now to see if there's a way to have it work both locally and published.
Now provide the response and nothing else.
| I came across the same issue a couple of days ago. It took me two days to find a solution and resolve it. After some serious investigation, the root cause of the problem turned out to be the SignalR dependency resolver that I had set as a custom resolver. In the end I found this link, which says:

Replacing the DependencyResolver

You can change the DependencyResolver to use your DI container of choice by setting GlobalHost.DependencyResolver. NOTE: DO NOT override the global resolver in PreApplicationStart, it will not work, or it'll work only sometimes. Do it in PostApplicationStart (using WebActivator) or in Global.asax.

The important part here is the NOTE. Of course, after SignalR 2.0 this documentation has become deprecated. So I mixed some of it with the new SignalR API. The new SignalR API does not use WebActivatorEx anymore; OwinStartup is preferred instead of WebActivator.

[assembly: OwinStartupAttribute(typeof(YourNamespace.Startup))]
namespace YourNamespace
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            //IoC container registration process
            UnityConfig.RegisterComponents();
            UnityConfig.Container.RegisterType<AHub, AHub>();

            HubConfiguration config = new HubConfiguration();
            config.EnableJavaScriptProxies = true;

            //You should remove your dependency resolver code from here to the Global.asax Application_Start method,
            //before setting the MVC properties.
            //config.Resolver = new SignalrDefaultDependencyResolver(UnityConfig.Container); // your dependency resolver

            app.MapSignalR(config);
        }
    }
}

And in your Global.asax:

namespace YourNamespace
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            //Here your SignalR dependency resolver
            GlobalHost.DependencyResolver = new SignalrDefaultDependencyResolver(UnityConfig.Container);

            //other settings go on
            AreaRegistration.RegisterAllAreas();
            GlobalConfiguration.Configure(WebApiConfig.Register);
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
        }
    }
}

I don't want to paste all the code here, just enough to show the real problem. For me everything works fine now, and dependency injection works too. But the bad part is that everywhere I searched, David Fowler was saying "It's by design". I started to wonder whether this design is really necessary or a mistake. Hope it helps somebody else who is researching the same problem.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20561196', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/804064/']} | jdg_81166
stackexchange | llm_judgeable_groundtruth_similarity | 10260526 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I started writing some basic tests in JMeter and was surprised that the measurements are so different from those from Apache ab. I have a gigabit LAN connecting an Intel i7 server running Nginx and an i5 test machine running JMeter or ab. Initially, I am simply testing the out-of-the box Nginx home page response rate. ab -c 1 -n 100 http://testserver.local/ gives Document Path: /Document Length: 151 bytesConcurrency Level: 1Time taken for tests: 0.078 secondsComplete requests: 100Failed requests: 0Write errors: 0Total transferred: 38400 bytesHTML transferred: 15100 bytesRequests per second: 1280.77 [#/sec] (mean)Time per request: 0.781 [ms] (mean)Time per request: 0.781 [ms] (mean, across all concurrent requests)Transfer rate: 480.29 [Kbytes/sec] received This result is consistently reproducible, +/- a few percent. In JMeter, I have a 1-user 100-loop thread group containing: an HTTP header manager setting Accept-Encoding: gzip an HTTP Get / sampler a summary report listener With only 100 samples, this gives wildly inconsistent results each time I run it. But the most startling fact is that the throughput is reported as low as 40 requests per second (not 1280). The highest recorded rate was 1030, and this was achieved only when I increased to 10,000 samples. Am I right in thinking that JMeter is the wrong tool for simple load tests because its overheads are too high to allow accurate measurements?
Now provide the response and nothing else.
| Jmeter tells you how long each request actually took. AB just does some very basic math to get the overall average. So, the direct answer to your question is that jmeter gets it right and ab just makes a rough guess by giving you the mean across everything. But, sure, if you put the two tools side by side and rate them for speed then it is clearly the case that ab is going to out perform jmeter. Jmeter just does more, it records more data and is processing more logic so it takes longer to turn around a single request. The simple fact is that Jmeter is a fully featured load testing tool, AB is, well, not. The thing is, the aim of a load testing tool is not to be the fastest kid on the block, instead it is about being able to build a realistic representation of the sort of load your app might be hit with when it goes live. In this respect jmeter wins hands down, so it really depends on what your requirements are. If you just want to generate as many requests as possible using the least amount of hardware then ab is a nice choice but if you want to build a representative test, with transactional journeys, conditional logic and all sorts of other useful stuff, then jmeter is the way to go. Think of it like this: they are both Apache projects but AB was, I think, designed to test the apache web server, JMeter, however, was designed to test Tomcat. Now, I'm guessing that jmeter was producing inconsistent results because it was hitting a limit on the machine it was running on. I'm betting you were running in GUI mode and had at least one listener active, like this you are asking the tool to do a lot. If you need a high rate of requests then Jmeter has a lean and mean mode. Typically, for large volumes the best practice is to execute tests at the command line with very few listeners; there's lots of info about this subject on the apache jmeter site. Another point you should consider, if you're really getting into load testing, is that in order to really get benefit from this sort of thing you need to first decide what sort of load you need your site to support and only then should you design a test that represents this. This is achieved using pacing and simulated wait times. The problem with telling a thread that is should just go away and run as fast as it possibly can is that it will iterate as fast as its local conditions allow it to, but there will always be something that puts the breaks on, even ab is limited; no matter how lightweight a tool is it still does something . But if you pace your requests then you remove this problem and as a rather useful added bonus you end up with consistency between runs and between builds of the code, so even if your server speeds up or slows down (with changes to the code base) your test will still make the same rate of requests - which is pretty useful for benchmarking. If you want to take JMeter further then have a look at the Constant Throughput Timer and then use multiple threads to build the level of traffic you need to represent. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10260526', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/204313/']} | jdg_81167 |
stackexchange | llm_judgeable_groundtruth_similarity | 21128275 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to access a RESTful web service I have written: http://localhost:8080/dukegen/ws/family/1 but getting a 404 using the address bar in the browser and do not know why. I am trying to return JSON. I have put Jackson 2 on my classpath: <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>2.3.1</version></dependency> Here is the server output: Jan 14, 2014 8:29:55 PM org.springframework.web.servlet.handler.AbstractUrlHandlerMapping registerHandlerINFO: Mapped URL path [/ws/family/{familyId}] onto handler 'familyResource'Jan 14, 2014 8:29:55 PM org.springframework.web.servlet.handler.AbstractUrlHandlerMapping registerHandlerINFO: Mapped URL path [/ws/family/{familyId}.*] onto handler 'familyResource'Jan 14, 2014 8:29:55 PM org.springframework.web.servlet.handler.AbstractUrlHandlerMapping registerHandlerINFO: Mapped URL path [/ws/family/{familyId}/] onto handler 'familyResource'Jan 14, 2014 8:29:55 PM org.springframework.web.servlet.FrameworkServlet initServletBeanINFO: FrameworkServlet 'dispatcher': initialization completed in 360 msJan 14, 2014 8:29:55 PM org.springframework.web.servlet.DispatcherServlet noHandlerFoundWARNING: No mapping found for HTTP request with URI [/dukegen/ws/family/1] in DispatcherServlet with name 'dispatcher' Here is my Controller: @Controller@RequestMapping("ws")public class FamilyResource { @RequestMapping(value="family/{familyId}", method = RequestMethod.GET, produces="application/json") public @ResponseBody Family getFamily(@PathVariable long familyId) { .... builds Family object .... return family; }} Here is my dispatcher set up in web.xml: <servlet> <servlet-name>dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>classpath:/mvcContext.xml</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>dispatcher</servlet-name> <url-pattern>/ws/*</url-pattern> </servlet-mapping> My mvcContext.xml: <?xml version="1.0" encoding="UTF-8"?><beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd"> <context:component-scan base-package="ws.hamacher.dukegen.resource"/></beans> Any help would appreciated.
Now provide the response and nothing else.
| Couple of things are not correct here. First, in your request mapping, the mapping should be consistent.Your class should be mapped to "/ws" and your method which produces the result should be "/family/{familyId}" In your web.xml you have configured the servlet to respond to /ws/* and your controller is Request Mapped to ws again.This wont work. Once "/ws/*" is intercepted by your servlet, it should not be repeated in the Request Mappings. The Controller responds to only the URL pattern within its context. Whatever is after "/ws" in your URL is only in the context of the controller. I generally prefer the servlet to be mapped to "/" and all further resolutions coded within the controller. Just my preference, though. So the correct configuration is web.xml <servlet> <servlet-name>dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>classpath:/mvcContext.xml</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>dispatcher</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> and the controller @Controller @RequestMapping("/ws") public class FamilyResource { @RequestMapping(value="/family/{familyId}", method = RequestMethod.GET, produces="application/json") public @ResponseBody Family getFamily(@PathVariable long familyId) { .... builds Family object .... return family; } } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21128275', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1440434/']} | jdg_81168 |
stackexchange | llm_judgeable_groundtruth_similarity | 41028114 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an array with nested objects, such as this one: [ {"name": "1", "children": [{"name": "1.1", "children":"1.2"}]}, {"id": "2", "thing": [{"name": "2.1", "children":"2.2"}]}, {"name": "3", "stuff": [{"name": "3.1", "children":"3.2"}]},] The objects can contain values of different types, including other, nested objects. I want to convert this array to CSV format. I've tried to iterate with for .. in loops, regular nested for loops, .map() and recursion. I think recursion might be the only way to solve this particular problem, though. For the CSV field names I want to use the sequence of keys that lead to the value. For the given example, the CSV result I'm looking for is: name, children.name, children.children,id, thing.name, thing.children, stuff.name, stuff.children1, 1.1, 1.2,,,,2,2.1,2.23,,,,3,3.1,3.2
Now provide the response and nothing else.
| You could use this ES6 function to create the 2D array you are looking for, which you can then easily transform to CSV: function pivot(arr) { var mp = new Map(); function setValue(a, path, val) { if (Object(val) !== val) { // primitive value var pathStr = path.join('.'); var i = (mp.has(pathStr) ? mp : mp.set(pathStr, mp.size)).get(pathStr); a[i] = val; } else { for (var key in val) { setValue(a, key == '0' ? path : path.concat(key), val[key]); } } return a; } var result = arr.map( obj => setValue([], [], obj) ); return [[...mp.keys()], ...result];}function toCsv(arr) { return arr.map( row => row.map ( val => isNaN(val) ? JSON.stringify(val) : +val ).join(',') ).join('\n');}// Sample datavar arr = [ {"name": "1", "children": [{"name": "1.1", "children":"1.2"}]}, {"id": "2", "thing": [{"name": "2.1", "children":"2.2"}]}, {"name": "3", "stuff": [{"name": "3.1", "children":"3.2"}]},];// Conversion to 2D array and then to CSV:console.log(toCsv(pivot(arr))); .as-console-wrapper { max-height: 100% !important; top: 0; } For other ways to convert a 2D array to CSV, see this Q&A . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41028114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1789573/']} | jdg_81169 |
stackexchange | llm_judgeable_groundtruth_similarity | 37772798 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to update Composer without any luck! What I have tried: $ composer self-update [InvalidArgumentException]Command "self-update" is not defined. $ sudo -H composer self-update [InvalidArgumentException]Command "self-update" is not defined. $ sudo apt-get install composer Reading package lists... Done Building dependency tree Readingstate information... Done composer is already the newest version. Thefollowing packages were automatically installed and are no longerrequired: libntdb1 linux-headers-4.2.0-30linux-headers-4.2.0-30-generic linux-image-4.2.0-30-genericlinux-image-extra-4.2.0-30-generic python-ntdb Use 'apt-getautoremove' to remove them. 0 upgraded, 0 newly installed, 0 to removeand 10 not upgraded. I am trying to self-update Composer because I am facing the following each time I try: $ composer update Loading composer repositories with package information Updatingdependencies (including require-dev)[RuntimeException]Could not load package rmrevin/yii2-fontawesome in http://packagist.org : [UnexpectedValueException] Could not parseversion constraint v4.1 . : Invalid version string "v4.1. "[UnexpectedValueException]Could not parse version constraint v4.1. : Invalid version string"v4.1. " How can I fix this issue? My PHP version is: php --version PHP 5.6.11-1ubuntu3.4 (cli) Copyright (c) 1997-2015 The PHP GroupZend Engine v2.6.0, Copyright (c) 1998-2015 Zend Technologieswith Zend OPcache v7.0.6-dev, Copyright (c) 1999-2015, by Zend Technologies My composer version is: composer --version Composer version @package_branch_alias_version@ (@package_version@)@release_date@
Now provide the response and nothing else.
| As Waqleh said , you have to uninstall PHP Composer and install it again. First, execute: sudo apt-get remove composer Then, execute these commands. The checksum here is for Composer 1.10.13, but you'll get the newest Composer (2.0.4 at the moment of editing this answer) when running the first line, so be sure to check in https://getcomposer.org/download/ : php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"php -r "if (hash_file('sha384', 'composer-setup.php') === '8a6138e2a05a8c28539c9f0fb361159823655d7ad2deecb371b04a83966c61223adc522b0189079e3e9e277cd72b8897') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"php composer-setup.phpphp -r "unlink('composer-setup.php');" Now move file composer.phar to a directory that is in your path (from Installation - Linux / Unix / macOS ): sudo mv composer.phar /usr/local/bin/composer And execute composer from any directory. That's all! PS: If you're using PhpStorm (or maybe other IDEs), you'll have to close it and open it again. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/37772798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1431096/']} | jdg_81170 |
stackexchange | llm_judgeable_groundtruth_similarity | 4461947 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This question is closely related to this question , but I am not happy with the answers there for several reasons which I will explain in a second. The limit $\lim_{n\to\infty}n(\sqrt[n]{n}-1)=\infty$ , where $n$ is a natural number, is easy to see by expanding the left side with the help of the exponential series. Indeed, we have $$n(\sqrt[n]{n}-1)=\ln(n)+\frac{1}{2}\cdot\frac{1}{n}\cdot\ln(n)^2+\frac{1}{6}\cdot\frac{1}{n^2}\cdot\ln(n)^3+\cdots\geq\ln(n)\,.$$ Since $\ln(n)$ grows arbitrary large with $n$ large, the limit is proven. I found this limit as an exercise in Analysis 1 by K. Königsberger, 5.8 Exercises, 3(b). I am using an old printing and the numbering might have changed, but it is in the very beginning of the book in a chapter about sequences. At that stage of the book the exponential series as well as logarithms have not yet been introduced and very few means are available. For educational purposes, I am looking for a really elementary proof which uses a different bound from below which in turn goes to infinity. The book suggests that such a proof must exist but I cannot find one. Can you please help me to find such proof? What is available at this stage is the Bernoulli inequality and the expansion of $(1+x)^n$ for a natural number $n$ and arbitrary $x$ , plus some very basic limits like $\sqrt[n]{n}\to 1$ , etc. which all can be done elementary. Thank you for your time and help!
Now provide the response and nothing else.
| For each $M \in \mathbb{R}$ , choose positive integers $k$ and $N$ so that $\frac{k}{2} > M$ and $N = 2^k$ . Then for each $n \geq N$ , we have \begin{align*}n \bigl( n^{1/n} - 1 \bigr)&\geq n \bigl( 2^{k/n} - 1 \bigr) \\&= n \cdot \frac{2^{k/n} - 1}{2^{n/n} - 1} \\&= n \cdot \frac{1 + 2^{1/n} + \cdots + 2^{(k-1)/n}}{1 + 2^{1/n} + \cdots + 2^{(n-1)/n}} \\&\geq n \cdot \frac{k}{2n} = \frac{k}{2} > M\end{align*} and therefore the sequence diverges to $+\infty$ . Remark. Note that the above argument essentially proves the inequality $$ n \bigl( n^{1/n} - 1 \bigr) \geq \frac{1}{2}\lfloor \log_2 n \rfloor, $$ so we can't really avoid the logarithm from entering this picture, as @Thomas Andrews anticipated. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4461947', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/213008/']} | jdg_81171 |
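A quick numerical check of the bound in the closing remark (the particular value of $n$ is chosen purely for illustration): for $n = 2048 = 2^{11}$ one gets $$n\bigl(n^{1/n}-1\bigr) = 2048\bigl(e^{(\ln 2048)/2048}-1\bigr) \approx 2048\cdot 0.00373 \approx 7.64,$$ while $\tfrac{1}{2}\lfloor \log_2 2048\rfloor = 5.5$ and $\ln 2048 \approx 7.62$, so at this point the sequence already sits above both the elementary bound from this answer and the logarithmic bound quoted in the question.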
stackexchange | llm_judgeable_groundtruth_similarity | 302307 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Reading on the no communication theorem and quantum teleportation, I have difficulty understanding the following thought experiment. A light source is emitting photons at a regular intervals $T$ in a vacuum. The photons travel through a geometry that disperses it in the desired directions (something like a diffraction aperture). Two observers in the same frame of reference as the source possess clocks synchronized with that of the light source. Observer $D1$ is nearer the source and will choose whether or not to observe each photon by time $t1$. Observer $D2$ further from the source, will observe every photon from the source. Observer $D2$ will not, though, be able to observe photons that have interacted at $D1$ because all their energy will be intercepted at $D1$ and will not propagate as far as $D2$. Can the photon theoretically at least, be delivered such that the interaction probability at $D1$ is 100% and $D2$ is 100%? Would it not possible to send a bit stream from $D1$ to $D2$ if observer $D1$ 2.a. intercepts photons to indicate "FALSE" and 2.b. moves their detector, allowing photons to pass to indicate "TRUE"? This question is, can observer $D2$'s measurements determine what $D1$ did? Would the transmission time of the bit stream from $D1$ to $D2$ be independent of their separation (dependent rather on the difference in their distance from the source)? If those assumptions were correct the message transmission could be made to exceed $c$ - so I suspect assumption 1 is wrong.
Now provide the response and nothing else.
| Your error is in assuming that if D1 does not measure a photon, it will necessarily be measured by D2. But that is not true! If you're sending half the probability amplitude to D1, and half to D2, than whenever D2 tries to measure a photon, it will see one exactly half the time. Similarly, D1 will see one exactly half the time. You won't be able to use this information to determine whether D1 has made a measurement, since D2 sees a photon 50% of the time when D1 has made a measurement, and 50% of the time when D1 has not made a measurement. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/302307', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/140002/']} | jdg_81172 |
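To put one number behind the 50/50 statement above (the kets are only illustrative labels, not notation taken from the question): if the source plus diffracting geometry prepares the single-photon state $$|\psi\rangle=\tfrac{1}{\sqrt{2}}\bigl(|\text{towards }D1\rangle+|\text{towards }D2\rangle\bigr),$$ then the probability that $D2$ fires is $|1/\sqrt{2}|^{2}=1/2$, and this is unchanged by whether a detector is present in the $D1$ arm: removing $D1$ does not redirect that half of the amplitude towards $D2$; it simply lets it propagate away undetected, so $D2$'s statistics carry no information about $D1$'s choice.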
stackexchange | llm_judgeable_groundtruth_similarity | 25871 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Hertzsprung-Russell diagram that categorizes star types uses the letter codes O, B, A, F, G, K, and M to indicate a star's temperature/color. Hottest (blue) is O and coolest (red) is M. What do the letters themselves relate to? For example, why is "O" used for hottest/blue?
Now provide the response and nothing else.
| When astronomers started to get spectra of stars and began classifying them, the initial classification was based on the strength of the Balmer absorption lines in the spectra. The Balmer lines are created by electrons in hydrogen atoms that are currently in the second energy level (N=2) absorbing energy and jumping up to higher levels. The stars with the strongest lines were given a designation of A, the second strongest B, etc. on down the line. Later, the association with temperature was made and many of the classes were combined and reordered based on the temperature of the star. However, the main letter designations were kept even though they were now "out of order". So the letters actually correspond to the relative strength of the Hydrogen absorption lines. The reason the O stars, which are hottest, are so far down on the scale is that they are so hot that Hydrogen is fully (or very nearly fully) ionized. If the atom has been ionized, it by definition doesn't have any electrons in the N=2 level since it doesn't have any electrons at all. Since there are no electrons in the proper level to form the line, the line is weak as it is only created when a chance recombination occurs. The cooler M stars, at the other end of the main sequence, also show very weak Hydrogen lines. The reason for this is that they are too cool to even get the electrons excited enough to get up to the N=2 energy state. If there are no electrons in N=2, there is no way to get the electron to absorb energy and jump higher and form the Balmer lines. The hydrogen lines that are seen here are due to the few higher velocity atoms on the long tail of the velocity distribution. When they collide, they have enough energy to get their electrons up to N=2. As the temperature increases, the energy available to excite the atoms increases as well. As you move up from M class stars you get to spectral types K, G, F, and then A, each one increasing in strength of the lines. The A stars, at about 10,000 K, are just the right temperature to push a much larger fraction of the hydrogen atoms into the higher N=2 state where they can then be knocked higher and produce the Balmer series. Since there are a lot of electrons in N=2, you get strong lines. The next spectral class up is B, which has weaker lines because at this point you've started to ionize the hydrogen atoms and therefore have fewer atoms with electrons in the N=2 state. And then come the O stars, which are nearly fully ionized and so are very weak in the Balmer series. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/25871', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']} | jdg_81173 |
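As a rough back-of-the-envelope illustration of how steeply the $N=2$ population depends on temperature (the figures are estimates, not taken from the answer above): the $n=1\to n=2$ gap in hydrogen is $\Delta E\approx 10.2\ \text{eV}$, so the Boltzmann ratio $$\frac{N_2}{N_1}=\frac{g_2}{g_1}\,e^{-\Delta E/kT}=4\,e^{-\Delta E/kT}$$ is of order $2\times10^{-10}$ at $T\approx 5000\ \text{K}$ but of order $3\times10^{-5}$ at $T\approx 10{,}000\ \text{K}$, roughly a factor of $10^{5}$ increase; pushing much hotter than that starts ionizing the hydrogen instead, which is exactly the competition between excitation and ionization described above and why the Balmer lines peak near spectral type A.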
stackexchange | llm_judgeable_groundtruth_similarity | 20638136 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is the error I get when I run any rake command: undefined method 'desc' for Sinatra::Application:Class # app.rbrequire 'sinatra'require 'sinatra/activerecord'require 'sinatra/contrib'get '/' do puts "Hello World"end# config.rurequire "./app"run Sinatra::Application# Rakefilerequire './app'require 'sinatra/activerecord/rake'# Gemfilesource 'https://rubygems.org'ruby '2.0.0'gem 'activerecord', '~> 4.0.2'gem 'sinatra', '~> 1.4.4'gem 'sinatra-activerecord', '~> 1.2.3'gem 'sinatra-contrib', '~> 1.4.2' Full trace: /Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-contrib-1.4.2/lib/sinatra/namespace.rb:269:in `method_missing'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-activerecord-1.2.3/lib/sinatra/activerecord/tasks.rake:4:in `block in <top (required)>'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-contrib-1.4.2/lib/sinatra/namespace.rb:126:in `class_eval'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-contrib-1.4.2/lib/sinatra/namespace.rb:126:in `block in new'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-contrib-1.4.2/lib/sinatra/namespace.rb:118:in `initialize'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-contrib-1.4.2/lib/sinatra/namespace.rb:118:in `new'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-contrib-1.4.2/lib/sinatra/namespace.rb:118:in `new'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-contrib-1.4.2/lib/sinatra/namespace.rb:142:in `namespace'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-1.4.4/lib/sinatra/base.rb:1972:in `block (2 levels) in delegate'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-activerecord-1.2.3/lib/sinatra/activerecord/tasks.rake:3:in `<top (required)>'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/activesupport-4.0.2/lib/active_support/dependencies.rb:223:in `load'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/activesupport-4.0.2/lib/active_support/dependencies.rb:223:in `block in load'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/activesupport-4.0.2/lib/active_support/dependencies.rb:214:in `load_dependency'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/activesupport-4.0.2/lib/active_support/dependencies.rb:223:in `load'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/sinatra-activerecord-1.2.3/lib/sinatra/activerecord/rake.rb:77:in `<top (required)>'/Users/j/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/site_ruby/2.0.0/rubygems/core_ext/kernel_require.rb:51:in `require'/Users/j/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/site_ruby/2.0.0/rubygems/core_ext/kernel_require.rb:51:in `require'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/backports-3.3.5/lib/backports/tools.rb:328:in `require_with_backports'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/activesupport-4.0.2/lib/active_support/dependencies.rb:229:in `block in require'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/activesupport-4.0.2/lib/active_support/dependencies.rb:214:in `load_dependency'/Users/j/.rvm/gems/ruby-2.0.0-p247/gems/activesupport-4.0.2/lib/active_support/dependencies.rb:229:in `require'/Users/j/Desktop/app/Rakefile:2:in `<top (required)>'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/rake_module.rb:25:in `load'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/rake_module.rb:25:in `load_rakefile'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/application.rb:637:in `raw_load_rakefile'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/application.rb:94:in `block in load_rakefile'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/application.rb:165:in 
`standard_exception_handling'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/application.rb:93:in `load_rakefile'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/application.rb:77:in `block in run'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/application.rb:165:in `standard_exception_handling'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/lib/rake/application.rb:75:in `run'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/gems/rake-10.1.0/bin/rake:33:in `<top (required)>'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/bin/rake:23:in `load'/Users/j/.rvm/gems/ruby-2.0.0-p247@global/bin/rake:23:in `<main>'/Users/j/.rvm/gems/ruby-2.0.0-p247/bin/ruby_executable_hooks:15:in `eval'/Users/j/.rvm/gems/ruby-2.0.0-p247/bin/ruby_executable_hooks:15:in `<main>'
Now provide the response and nothing else.
| The Sinatra namespace extension from Sinatra contrib is interfering with Rake’s own namespace support. They both define a namespace method, and the Sinatra contrib version is being called (incorrectly) from the Sinatra-ActiveRecord Rake tasks . If you’re not using the namespaces from Sinatra-contrib, then the easiest solution would be to only require those extensions that you need ; e.g. change require 'sinatra/contrib' to require 'sinatra/whatever'require 'sinatra/anotherextension' If you are using Sinatra namespaces then I think you may be able to get round this by moving to a modular style app . Change your app.rb to something like require 'sinatra/base' # note this has changed from just 'sinatra'require 'sinatra/activerecord'require 'sinatra/contrib'class MyApp < Sinatra::Base register Sinatra::Contrib get '/' do "Hello World" end # other routes etc. as neededend Then in your config.ru you need run MyApp rather then run Sinatra::Application (of course you can – and should – give your class a better name). This avoids the collision of the two namespace methods, since the Sinatra version is only available in your application class, not the top level. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20638136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/912404/']} | jdg_81174 |
stackexchange | llm_judgeable_groundtruth_similarity | 3826542 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Coming from a C++ background, I'm used to sticking the const keyword into function definitions to make objects being passed in read-only values. However, I've found out that this is not possible in C# (please correct me if I'm wrong). After some Googling, I arrived at the conclusion that the only way to make a read-only object is to write an interface that only has 'get' properties and pass that in instead. Elegant, I must say. public interface IFoo{ IMyValInterface MyVal{ get; }}public class Foo : IFoo{ private ConcreteMyVal _myVal; public IMyValInterface MyVal { get { return _myVal; } }} I would pass it into: public void SomeFunction(IFoo fooVar){ // Cannot modify fooVar, Excellent!!} This is fine. However, in the rest of my code, I would like to modify my object normally. Adding a 'set' property to the interface would break my read-only restriction. I can add a 'set' property to Foo (and not IFoo ), but the signature expects an interface rather than a concrete object. I would have to do some casting. // Add this to class Foo. Might assign null if cast fails??set { _myVal = value as ConcreteMyVal; }// Somewhere else in the code...IFoo myFoo = new Foo;(myFoo as Foo).MyFoo = new ConcreteMyVal(); Is there a more elegant way of replicating const or making read-only function parameters without adding another property or a function?
Now provide the response and nothing else.
| I think you may be looking for a solution involving two interfaces in which one inherits from the other: public interface IReadableFoo{ IMyValInterface MyVal { get; }}public interface IWritableFoo : IReadableFoo{ IMyValInterface MyVal { set; }}public class Foo : IWritableFoo { private ConcreteMyVal _myVal; public IMyValInterface MyVal { get { return _myVal; } set { _myVal = value as ConcreteMyVal; } }} Then you can declare methods whose parameter type “tells” whether it plans on changing the variable or not: public void SomeFunction(IReadableFoo fooVar){ /* Cannot modify fooVar, excellent! */ }public void SomeOtherFunction(IWritableFoo fooVar){ /* Can modify fooVar, take care! */ } This mimics compile-time checks similar to constness in C++. As Eric Lippert correctly pointed out, this is not the same as immutability. But as a C++ programmer I think you know that. By the way, you can achieve slightly better compile-time checking if you declare the type of the property in the class as ConcreteMyVal and implement the interface properties separately (note that explicit interface implementations must not carry an access modifier): public class Foo : IWritableFoo { private ConcreteMyVal _myVal; public ConcreteMyVal MyVal { get { return _myVal; } set { _myVal = value; } } IMyValInterface IReadableFoo.MyVal { get { return MyVal; } } IMyValInterface IWritableFoo.MyVal { /* (or use “(ConcreteMyVal)value” if you want it to throw) */ set { MyVal = value as ConcreteMyVal; } }} This way, the setter can only throw when accessed through the interface, but not when accessed through the class. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3826542', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/318811/']} | jdg_81175 |
stackexchange | llm_judgeable_groundtruth_similarity | 249427 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Now I'm confused with what "a linear transformation" means. In linear algebra textbook, I learned that a linear transformation is $T:V \to W$, where V,W are vector spaces, which satisfies additivity and homogeneity, in other words, $T(u+v)=Tu+Tv, T(av)=aTv$ for all $u,v \in V$ and $a \in F$ But in my complex analysis textbook, $\displaystyle f(z)=\frac{az+b}{cz+d}$, $a,b,c,d \in \mathbb C$, is introduced as an example of a linear transformation. However, this function $f$ doesn't seem to follow the definition from linear algebra. Indeed, $f(0) \neq 0$. Is it like there are two kinds of linear transformations in mathematics, or they are actually the same thing but I don't get it well?
Now provide the response and nothing else.
| $z \to \dfrac{az+b}{cz+d}$ is more properly called a fractional linear transformation (or linear fractional transformation, or Möbius transformation). It is not the same as a linear transformation, although abuse of the language sometimes does take place. EDIT: For example, Ford's "Automorphic Functions", first published 1929, defines $z' = \dfrac{az+b}{cz+d}$ as a "linear transformation": in a footnote he says 'This is more properly called a "linear fractional transformation"; but we shall use the briefer designation.' http://books.google.ca/books?id=aqPvo173YIIC&pg=PA1 | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/249427', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/37026/']} | jdg_81176 |
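To see where the word "linear" comes from, and why such a map is still not linear in the vector-space sense, one standard observation is the matrix correspondence $$z\mapsto\frac{az+b}{cz+d},\quad ad-bc\neq 0\qquad\longleftrightarrow\qquad\begin{pmatrix} a & b\\ c & d\end{pmatrix},$$ where $z$ is written projectively as the column $(z,1)^{T}$: composing two fractional linear transformations corresponds to multiplying their matrices, and two matrices define the same map exactly when they differ by a nonzero scalar factor. So a Möbius transformation is a projective linear map of $\mathbb{C}\cup\{\infty\}$ rather than a linear map $V\to W$ in the linear-algebra sense, which is consistent with $f(0)=b/d$ being nonzero in general.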
stackexchange | llm_judgeable_groundtruth_similarity | 35837214 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Laravel version 5.2 and don't know how to redirect Laravel default page to my template 404 page
Now provide the response and nothing else.
| use abort(404); Some exceptions describe HTTP error codes from the server. For example, this may be a "page not found" error (404), an "unauthorized error" (401) or even a developer-generated 500 error. In order to generate such a response from anywhere in your application, use the following: abort(404); If you invoke abort(404); anywhere in your route or controller it will throw a NotFoundHttpException, which looks for a Blade template to display in the resources/views/errors/ directory with a filename matching the error code. Example: in your app/Http/routes.php Route::get('/test', function(){ return abort(404);}); in your resources/views/errors/ directory create 404.blade.php; notice that the name of the file corresponds to the code passed to abort(404); Reference: https://laravel.com/docs/5.2/errors#http-exceptions | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35837214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6027641/']} | jdg_81177 |
stackexchange | llm_judgeable_groundtruth_similarity | 474020 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently had my second lecture in Thermodynamics, a long lecture which involved the first law and a portion of the second law. At some point during the lecture we defined entropy as the change of heat energy per unit temperature. From this we derived a general expression for entropy (using laws derived for ideal gases) in which it was clear that it depended on the change of temperature and volume through the process as well as the number of moles. I have also learned that entropy is a measure of disorder in a system which was nonsense to me especially that I don't understand how disorder (chaotic movement of particles) is related to the change in heat energy per unit temperature, it's more related to specific heat if you ask me, nonetheless in attempts to understand what's useful in knowing what's the amount of disorder in a system I learnt that it measures the state of reversibility of the process which still doesn't make sense when trying to relate it with the "change in heat energy per unit temperature". TL;DR: I need an answer these questions: A process has an entropy of X what does this tell me? Another process has higher entropy what does this tell me? How can I relate the definition of entropy to "change in heat energy per unit time"? Please don't explain using statistical thermodynamics.
Now provide the response and nothing else.
| This is a big topic with many aspects but let me start with the reason why entropy and the second law was needed. You know the first law is conservation of energy. If a hot body is placed in contact with a cold body heat normally flows from the hot body to the cold . Energy lost by the hot body equals energy gained by the cold body. Energy is conserved and the first law obeyed. But that law would also be satisfied if the same amount of heat flowed in the other direction. However one never sees that happen naturally (without doing work). What's more, after transferring heat from hot to cold you would not expect it to spontaneously reverse itself. The process is irreversible. The Clausius form of the second law states that heat flows spontaneously from hot to cold. Clausius developed the property of entropy to create this as a general state function that could eventually be determined independently of trying to map just heat flow. ADDENDUM 1: Found a little more time to bring this to the next level. This will tie in what I said above to the actual second law and the property of entropy. So we needed a new law and property that would be violated if heat flowed naturally from a cold body to a hot body. The property is called entropy, $S$ , which obeys the following inequality: $$\Delta S_{tot}=\Delta S_{sys}+\Delta S_{surr}≥0$$ Where $\Delta S_{tot}$ is the total entropy change of the system plus the surrounding (entropy change of the universe) for any process where the system and surroundings interact. The equality applies if the process is reversible, and the inequality if it is irreversible. Since all real processes are irreversible (explained below), the law tells us that the total entropy of the universe increases as a result of a real process. The property of entropy is defined as $$dS=\frac {dQ_{rev}}{T}$$ where $dQ$ is a reversible differential transfer of heat and $T$ is the temperature at which it is transferred. Although it is defined for a reversible transfer of heat, it applies to any process between two states. If the process occurs at constant temperature, we can say $$\Delta S=\frac{Q}{T}$$ where $Q$ is the heat transferred to the system at constant temperature. We apply this new law to our hot and cold bodies and call them bodies $A$ and $B$ . To make things simple, we stipulate that the bodies are massive enough (or the amount of heat $Q$ transferred small enough) that their temperatures stay constant during the heat transfer Applying the second law to our bodies: $$\Delta S_{tot}=\frac{-Q}{T_A}+\frac{+Q}{T_B}$$ The minus sign for body $A$ simply means the entropy decrease for that body because heat is transferred out, and the positive sign for body $B$ means its entropy has increased because heat is transferred in. From the equation, we observe that for all $T_{A}>T_{B}$ , $\Delta S_{tot}>0$ . We further note that as the two temperatures get closer and closer to each other, $\Delta S_{tot}$ goes to $0$ . But if $T_{A}<T_{B}$ meaning heat transfers from the cold body to the hot body, $\Delta S$ would be less than zero, violating the second law. Thus the second law precludes that natural transfer of heat from a cold body to a hot body. Note that for $\Delta S_{tot}=0$ the temperatures would have to be equal. But we know that heat will not flow unless there is a temperature difference. So we see that for all real heat transfer processes, such processes are irreversible. Irreversibility and entropy increase is not limited to heat transfer processes. 
Any process goes from a state of disequilibrium to equilibrium. Besides heat, you have processes involving pressure differentials (pressure disequilibrium). These processes are also irreversible and generate entropy. ADDENDUM 2: This will focus on the specific questions no. 1 and 2 in your post, that is 1. A process has an entropy of X what does this tell me? 2. Another process has higher entropy what does this tell me? Before answering this, it has been said that when the change in entropy, $\Delta S$ , is positive, “heat has entered the system”. It should be noted that heat entering the system is a sufficient condition for a positive entropy change, but it is not a necessary condition. As I said above, irreversibility and entropy generation is not limited to heat transfer processes. For example, an irreversible adiabatic expansion results in an increase in entropy, although no heat transfer occurs. An example is the free adiabatic expansion of an ideal gas, a.k.a. a Joule expansion. A rigid insulated chamber is partitioned into two equal volumes. On one side of the partition is an ideal gas. On the other side a vacuum. An opening is then created in the partition allowing the gas to freely expand into the evacuated half. The process is irreversible since the gas will not all return to its original half of the chamber without doing external work (compressing it). Since there was no heat transfer between the gas and the surroundings, $Q=0$ , and since the gas expanded into a vacuum without the chamber walls expanding, the gas does no work, $W=0$ . From the first law, $\Delta U=Q-W=0$ . For an ideal gas, in any process, $\Delta U=C_{v}\Delta T$ . Therefore there is no change in temperature. The end result is the volume of the gas doubles, the pressure halves, and the temperature remains the same. We can determine the change in entropy for this process by devising a convenient reversible path to return the system to its original state, so that the overall change in entropy for the system is zero. The obvious choice is a reversible isothermal (constant temperature) compression process. The work done on the gas in the isothermal compression equals the heat transferred out of the gas to the surroundings (increasing its entropy) and the change in internal energy is zero. Since this occurs at constant temperature we have, for the gas (system), $$\Delta S=-\frac{Q}{T}$$ Since we have returned the system to its original state, the overall change in entropy of the system is zero. Therefore, the change in entropy due to the free expansion had to be $$\Delta S_{exp}=+\frac{Q}{T}$$ We could also determine $\Delta S$ by combining the first law and the definition of entropy. This gives the second equation in Jeffery's answer, which for the case of no temperature change ( $dT=0$ ) gives us, for one mole of an ideal gas, $$\Delta S=R\ln\frac{V_{f}}{V_i}$$ or, in the case of our free expansion where the volume doubles, $$\Delta S=R\ln2$$ Therefore, $$\Delta S=\frac{Q}{T}=R\ln2$$ Now, to answer your questions, what does this tell us? And what does another process having higher entropy tell us? Or, to put it another way, why should we care? One thing it tells us is that, in the case of an ideal gas, an irreversible (free) adiabatic expansion of an ideal gas results in a lost opportunity to do work. In the free adiabatic expansion, no work was done.
If, however, the process was a reversible adiabatic process against a variable external pressure (constant entropy process), such that $Pv^k$ =constant ( $k=\frac{C_{p}}{C_{v}})$ the gas would have performed work on the surroundings equal to $$W=\frac{(P_{f}V_{f}-P_{i}V_{i})}{(1-k)}$$ Bottom line: One of the ramifications of an irreversible expansion process is that the work performed will be less than that for the same process carried out reversibly, due to the generation of entropy in the irreversible process. Irreversible processes lower the thermal efficiency of a system in performing work. Hope this helps. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/474020', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/208112/']} | jdg_81178 |
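A short numerical illustration of the bookkeeping above (the figures are invented purely for illustration): let $Q=1000\ \text{J}$ flow from a reservoir at $T_A=400\ \text{K}$ to one at $T_B=300\ \text{K}$. Then $$\Delta S_{tot}=-\frac{1000}{400}+\frac{1000}{300}\approx -2.50+3.33=+0.83\ \text{J/K}>0,$$ so the transfer is permitted and irreversible, while the reversed transfer would give $-0.83\ \text{J/K}$ and is forbidden by the second law; similarly, the Joule expansion of one mole of ideal gas generates $\Delta S=R\ln 2\approx 8.314\times 0.693\approx 5.8\ \text{J/K}$ even though $Q=0$.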
stackexchange | llm_judgeable_groundtruth_similarity | 79294 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Considering the list of rules: r= {"a" -> "1", "b" -> "2", "c" -> 3, d -> 231, "e" -> 1.25}; I want to have a function f[..._] := SomeFunction(key, value) so that I can give r to f : f[r] and will do SomeFunction(key, value) on all elements of r . p.s. Same question stands for Association. Any idea?Thanks.
Now provide the response and nothing else.
| this would be a definition which does what you want for a list of rules: f[r : {__Rule}] := someFunction @@@ r and this would be one which handles the Association case: f[a_Association] := someFunction @@@ Normal[a] As mentioned by Gerli in a comment in version 10.1 one can also use KeyValueMap for the second case, for which that new function was added: f[a_Association] := KeyValueMap[someFunction,a] both assume that you want to return a list of returnvalues of someFunction ... | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/79294', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/22935/']} | jdg_81179 |
stackexchange | llm_judgeable_groundtruth_similarity | 204669 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Even though sometimes software bugs and vulnerabilities are deemed as the same concept, there must be at least one distinct aspect between them, and I think the most prominent one is exploitability (the latter one having the property). What I'm curious about is, even after seeing many cases that divide-by-zero bugs are reported as software problems, I can hardly come up with any attack (other than DoS) using divide-by-zero bugs. I know not all kinds of bugs have the same impact upon a system in terms of security, but is there any attack method that uses divide-by-zero bugs to achieve something different than DoS, like privilege escalation for example?
Now provide the response and nothing else.
| At issue is that an exception handler will be invoked to handle the division by zero. In general, attackers know that exception handlers are not as well-tested as regular code flows. Your main logic flow might be sound and thoroughly tested, but an exception handler can be triggered by interrupts occurring anywhere in the code within its scope. int myFunction(int a, int b, SomeState state) { state(UNINITIALIZED); try { state.something(a/b); state(NORMAL); } catch () { state.something(b/a); state(INVERTED); } return retval;} This horrible pseudocode sort of illustrates one way the flaw could be exploited. Let's say that an uninitialized state is somehow vulnerable. If this routine is called, the state is first uninitialized. If b is zero, it catches the exception and tries to do some other logic. But if both a and b are zero, it throws again, leaving state uninitialized. The division by zero itself wasn't the vulnerability, it's the bad code around it that's possible to exploit. | {} | {'log_upvote_score': 8, 'links': ['https://security.stackexchange.com/questions/204669', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/199967/']} | jdg_81180 |
stackexchange | llm_judgeable_groundtruth_similarity | 27077878 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to build a UI for my Android app which contains a vertically scrollable page of horizontally scrollable carousels (something like what the Netflix app does). How is this type of behaviour accomplished? A basic implementation would be enough to get me started. There are a few other requirements for the UI, which I'll include here for reference, since it may impact what classes or libraries I can use. 1) Vertical scrolling between carousels should be smooth, but when user releases, the UI should "snap to" the closest carousel (so the user is always on a carousel row, not between two carousels). 2) Horizontal scrolling on a carousel should be smooth, but when user releases, the UI should "snap to" the closest item in the carousel. 3) Should be possible to overlay additional information over an item in the carousel 4) UI should be adaptable to any screen size. 5) Should be navigable with the arrow keys (for touchscreen-less devices) 6) Should work on a wide range of Android versions (possibly through the support library) 7) Should be OK to use in an open-source app licensed under the GPL Acceptable answers DO NOT have to meet all of these requirements. At a minimum, a good answer should involve navigating multiple carousels (versus only one carousel). Here is a mock-up of basically what I am envisioning (I'm flexible, doesn't have to look like this.. point is just to clarify what I am talking about -- each row would contain a lot of items that could be scrolled left and right, and the whole page could be scrolled up and down)
Now provide the response and nothing else.
| Main Idea In order to have a flexible design and unlimited items, you can create a RecyclerView as a root view with LinearLayoutManager.VERTICAL as its LayoutManager. For each row you can put another RecyclerView, but now with LinearLayoutManager.HORIZONTAL as its LayoutManager. Result Source Code Requirements 1) Vertical scrolling between carousels should be smooth, but when user releases, the UI should "snap to" the closest carousel (so the user is always on a carousel row, not between two carousels). 2) Horizontal scrolling on a carousel should be smooth, but when user releases, the UI should "snap to" the closest item in the carousel. In order to achieve those I used an OnScrollListener, and when the state goes to SCROLL_STATE_IDLE I check the top and bottom views to see which of them has the larger visible region, then scroll to that position. For each row I do the same with the left and right views in that row's adapter. In this way one side of your carousels or rows always fits; for example, if the top is fitted the bottom is not, or vice versa. I think if you play with it a little more you can achieve that, but you must know the dimensions of the window and change the dimensions of the carousels at runtime. 3) Should be possible to overlay additional information over an item in the carousel If you use RelativeLayout or FrameLayout as the root view of each item you can put information on top of each other; as you can see, the numbers are on top of the images. 4) UI should be adaptable to any screen size. If you know how to support multiple screen sizes you can do so easily; if you do not, read the document: Supporting Multiple Screens 5) Should be navigable with the arrow keys (for touchscreen-less devices) Use the function mRecyclerView.scrollToPosition(position); 6) Should work on a wide range of Android versions (possibly through the support library) import android.support.v7.widget.RecyclerView; 7) Should be OK to use in an open-source app licensed under the GPL Ok, happy coding!! | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27077878', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1002212/']} | jdg_81181 |
stackexchange | llm_judgeable_groundtruth_similarity | 27202353 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to open the AVD manager on Ubuntu Linux from the command line?
Now provide the response and nothing else.
| For those coming here because the android avd command has been deprecated*, here's how to do it now: Using Android Studio Open Android Studio with a valid studio project (if you don't have one, simply create an empty one). After you have your project open, go to Tools => Android => AVD Manager . Note that this menu does not appear if you don't have a valid Android Studio project open. Now you can safely close Android Studio if you only want the AVD Manager. From command line If you want to use the command line you'll first need to create the virtual device using avdmanager , located under $ANDROID_HOME/tools/bin/avdmanager like so: $ANDROID_HOME/tools/bin/avdmanager create avd --force --name testAVD --abi google_apis/x86_64 --package 'system-images;android-23;google_apis;x86_64' After you have created the device to emulate, you need to open it with emulator , which is located under $ANDROID_HOME/tools/emulator . The command for the previously created device should be something like: emulator -avd testAVD If it crashes saying something like... [140022905190208]:ERROR:./android/qt/qt_setup.cpp:28:Qt library not found at ../emulator/lib64/qt/lib Ensure you're running the emulator command inside the $ANDROID_HOME/tools folder, otherwise it may not work. If you have already added a device VM you can list them all with emulator -list-avds Check out the official documentation for more details (thanks to Jeff Xiao for this—in comments). * The deprecation note says: The android command is no longer available. For manual SDK and AVD management, please use Android Studio. For command-line tools, use tools/bin/sdkmanager and tools/bin/avdmanager | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27202353', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4251700/']} | jdg_81182 |
stackexchange | llm_judgeable_groundtruth_similarity | 18918579 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm struggling with setting up a login system for an app i'm creating. I'm able to set cookies for when the user is logged in or out. I don't think that testing every view if the user is logged in is a very elegant solution, and i'm afraid a page here and there may fall through the cracks (it's a rather large app). I'm thinking the best way would be to intercept route changes somehow and check if the user is logged in, otherwise send them to a login/create user page. I've found a few methods, but nothing seems to be officially documented. Has anyone used this type of method in a real world case, and was it effective? My route file looks like this: 'use strict';app.config(['$routeProvider', function ($routeProvider) { $routeProvider // LOGIN .when('/User/LoginUser', {templateUrl: 'views/user/login.html',controller: 'loginCtrl'}) ....... more routes here....... // DEFAULT .otherwise({redirectTo: '/'});}]); Any help or suggestions, or points to documented real world examples of how I would do something like this would be greatly appreciated!
Now provide the response and nothing else.
| You can intercept route changes as you suggested and act accordingly, using the following example as a basis: $rootScope.$on('$routeChangeStart', function (event, next) { var userAuthenticated = ...; /* Check if the user is logged in */ if (!userAuthenticated && !next.isLogin) { /* You can save the user's location to take him back to the same page after he has logged-in */ $rootScope.savedLocation = $location.url(); $location.path('/User/LoginUser'); } }); Also, add isLogin: true to the route definition of your login page, like this: $routeProvider // LOGIN .when('/User/LoginUser', {templateUrl: 'views/user/login.html',controller: 'loginCtrl', isLogin: true}) Good luck with your project! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18918579', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/350899/']} | jdg_81183 |
stackexchange | llm_judgeable_groundtruth_similarity | 55120 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can one show :$$\tan\frac{2\pi}{13}\tan\frac{5\pi}{13}\tan\frac{6\pi}{13}=\sqrt{65+18\sqrt{13}}?$$
Now provide the response and nothing else.
| HINT: The number $\alpha=\sqrt{65+18\sqrt{13}}$ is one of the four solutions of the equation $(\alpha^2-65)^2-4212=0$. Then, show that the LHS is also a solution of the same equation. The other three solutions must be disregarded somehow. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/55120', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6715/']} | jdg_81184 |
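For the right-hand side, the verification suggested in the hint is immediate: with $\alpha=\sqrt{65+18\sqrt{13}}$ one has $\alpha^{2}-65=18\sqrt{13}$, so $$(\alpha^{2}-65)^{2}=18^{2}\cdot 13=4212,$$ and $\alpha$ is indeed a root of $(\alpha^{2}-65)^{2}-4212=0$, whose four roots are $\pm\sqrt{65\pm 18\sqrt{13}}$. The remaining work, as the hint says, is to show that the product $\tan\frac{2\pi}{13}\tan\frac{5\pi}{13}\tan\frac{6\pi}{13}$ satisfies the same quartic and then to exclude the other three roots.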
stackexchange | llm_judgeable_groundtruth_similarity | 45430852 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following code: template<int value>constexpr int foo = value;template<typename... Ts>constexpr int sum(Ts... args) { return foo<(args + ...)>;}int main() { static_assert(sum(10, 1) == 11);} clang 4.0.1 gives me the following error: main.cpp:6:17: error: non-type template argument is not a constant expression return foo<(args + ...)>; ^~~~ This surprised me. Every argument is known at compile time, sum is marked as constexpr , so I see no reason why the fold expression can't be evaluated at compile time. Naturally, this also fails with the same error message: constexpr int result = (args + ...); // in sum [expr.prim.fold] isn't very helpful, it's very short and only describes the syntax allowed. Trying out newer versions of clang also gives the same result, as does gcc. Are they actually allowed or not?
Now provide the response and nothing else.
| A constant expression is allowed to contain a fold expression. It is not allowed to use the value of a function parameter, unless the function call is itself part of the entire constant expression. By way of example: constexpr int foo(int x) { // bar<x>(); // ill-formed return x; // ok}constexpr int y = foo(42); The variable y needs to be initialized with a constant expression. foo(42) is an acceptable constant expression because even though calling foo(42) involves performing an lvalue-to-rvalue conversion on the parameter x in order to return its value, that parameter was created within the entire constant expression foo(42) so its value is statically known. But x itself is not a constant expression within foo . An expression which is not a constant expression in the context where it occurs can nevertheless be part of a larger constant expression. The argument to a non-type template parameter must be a constant expression in and of itself, but x is not. So the commented-out line is ill-formed. Likewise your (args + ...) fails to be a constant expression (and hence cannot be used as a template argument) since it performs lvalue-to-rvalue conversion on the parameters of sum . However, if the function sum is called with constant expression arguments, the function call as a whole can be a constant expression even if (args + ...) appears within it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45430852', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3980929/']} | jdg_81185 |
stackexchange | llm_judgeable_groundtruth_similarity | 9270734 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I get the following error from ssh: Permissions 0777 for '/Users/username/.ssh/id_rsa' are too open.It is recommended that your private key files are NOT accessible by others.This private key will be ignored. What permissions should I give to the id_rsa file?
Now provide the response and nothing else.
| The keys need to be read-writable only by you: chmod 600 ~/.ssh/id_rsa Alternatively, the keys can be only readable by you (this also blocks your write access): chmod 400 ~/.ssh/id_rsa 600 appears to be better in most cases, because you don't need to change file permissions later to edit it. (See the comments for more nuances) The relevant portion from the manpage ( man ssh ) ~/.ssh/id_rsa Contains the private key for authentication. These files contain sensitive data and should be readable by the user but not accessible by others (read/write/execute). ssh will simply ignore a private key file if it is accessible by others. It is possible to specify a passphrase when generating the key which will be used to encrypt the sensitive part of this file using 3DES. ~/.ssh/identity.pub ~/.ssh/id_dsa.pub ~/.ssh/id_ecdsa.pub ~/.ssh/id_rsa.pub Contains the public key for authentication. These files are not sensitive and can (but need not) be readable by anyone. | {} | {'log_upvote_score': 13, 'links': ['https://Stackoverflow.com/questions/9270734', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/960956/']} | jdg_81186 |
stackexchange | llm_judgeable_groundtruth_similarity | 5576619 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I did some research on this topic, and there are some experts who have said that it is not possible , so I would like to ask for an alternative solution. My situation: Page A: [checkout.php] Customer fills in their billing details. Page B: [process.php] Generate an invoice number and store customer details in database. Page C: [thirdparty.com] Third Payment Gateway (ONLY ACCEPT POST DATA). Customer fills in their details and sets up their cart in Page A, then POSTs to Page B. Inside process.php, store the POSTed data inside the database and generate an invoice number. After that, POST the customer data and invoice number to thirdparty.com payment gateway. The problem is doing POST in page B. cURL is able to POST the data to Page C, but the problem is the page didn't redirect to page C. The customer needs to fill in Credit Card details on Page C. The third party payment gateway did give us the API sample, the sample is POST the invoice number together with customer detail.We don't want the system to generate an excess of unwanted invoice numbers. Is there any solution for this?Our current solution is for the customer to fill detail in Page A, then in Page B we create another page showing all the customer details there, where the user can click a CONFIRM button to POST to Page C. Our goal is for customers to only have to click once. Hope my question is clear :)
Now provide the response and nothing else.
| Generate a form on Page B with all the required data and action set to Page C and submit it with JavaScript on page load. Your data will be sent to Page C without much hassle to the user. This is the only way to do it. A redirect is a 303 HTTP header that you can read up on http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html , but I'll quote some of it: The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable. The only way to achieve what you're doing is with a intermediate page that sends the user to Page C. Here's a small/simple snippet on how you can achieve that: <form id="myForm" action="Page_C.php" method="post"><?php foreach ($_POST as $a => $b) { echo '<input type="hidden" name="'.htmlentities($a).'" value="'.htmlentities($b).'">'; }?></form><script type="text/javascript"> document.getElementById('myForm').submit();</script> You should also have a simple "confirm" form inside a noscript tag to make sure users without Javascript will be able to use your service. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/5576619', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129209/']} | jdg_81187 |
stackexchange | llm_judgeable_groundtruth_similarity | 32853167 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I maintain a web application that have a page with the JSF tag <f:event . I have rewrote a method in a service class for it to throw a business exception. However, when the business exception is thrown, it isn't caught in managed bean and the exception is showed on the page. Seems that my code try/catch doesn't work. In XHTML: <f:event listener="#{resourceBean.init(enrollment)}" type="preRenderView" /> Listener method in Managed Bean: private boolean canCreateResource;public void init(Enrollment enrollment) { (...) try { canCreateResource = resourceService.canCreateResource(enrollment); } catch (BusinessException e) { canCreateResource = false; }} Method in service class: public boolean canCreateResource(Enrollment enrollment) { if (...) { if (mandateService.isCoordinator(user, course)) { return true; } else { throw new BusinessException("Undefined business rule."); } } return false;} From what I read on other sites, I suppose I have to implement some JSF's handler class. But which and how? EDITED OBS 1: The BusinessException class extends RuntimeException class. OBS 2: The attribute canCreateResource was created to control the render of a button.
Now provide the response and nothing else.
| It's because you threw a RuntimeException from an EJB. When such RuntimeException is not annotated with @ApplicationException , then the EJB container will wrap it in an javax.ejb.EJBException and rethrow it. This is done so because runtime exceptions are usually only used to indicate bugs in code logic, i.e. programmer's mistakes and not enduser's mistakes. You know, NullPointerException , IllegalArgumentException , IndexOutOfBoundsException , NumberFormatException and friends. This allows the EJB client to have one catch-all point for such runtime exceptions, like catch (EJBException e) { There's a bug in the service layer or in the way how we are using it! } If you had tried catch (Exception e) and inspected the actual exception, then you'd have noticed that. Fix your BusinessException class accordingly to add that annotation, it will then be recognized as a real application exception and not be wrapped in an EJBException : @ApplicationException(rollback=true)public class BusinessException extends RuntimeException { // ...} Do note that in case you throw an non- RuntimeException , then you still need to keep the annotation on that, explicitly with rollback=true , because by default it wouldn't perform a rollback, on the contrary to a RuntimeException without the annotation. @ApplicationException(rollback=true)public class BusinessException extends Exception { // ...} Summarized: RuntimeException thrown from transactional EJB method will perform full rollback, but exception will be wrapped in EJBException . RuntimeException with @ApplicationException from transactional EJB method will only perform full rollback when rollback=true is explicitly set. Exception from transactional EJB method will not perform full rollback. Exception with @ApplicationException from transactional EJB method will only perform full rollback when rollback=true is explicitly set. Note that @ApplicationException is inherited over all subclasses of the custom exception, so you don't need to repeat it over all of them. Best would be to have it as an abstract class. See also the examples in the related question linked below. See also: Letting the presentation layer (JSF) handle business exceptions from service layer (EJB) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32853167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2179818/']} | jdg_81188 |
stackexchange | llm_judgeable_groundtruth_similarity | 2583472 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for a Regex that allows me to validate json. I am very new to Regex's and i know enough that parsing with Regex is bad but can it be used to validate?
Now provide the response and nothing else.
| Yes, a complete regex validation is possible. Most modern regex implementations allow for recursive regexpressions, which can verify a complete JSON serialized structure. The json.org specification makes it quite straightforward. $pcre_regex = ' / (?(DEFINE) (?<number> -? (?= [1-9]|0(?!\d) ) \d+ (\.\d+)? ([eE] [+-]? \d+)? ) (?<boolean> true | false | null ) (?<string> " ([^"\\\\]* | \\\\ ["\\\\bfnrt\/] | \\\\ u [0-9a-f]{4} )* " ) (?<array> \[ (?: (?&json) (?: , (?&json) )* )? \s* \] ) (?<pair> \s* (?&string) \s* : (?&json) ) (?<object> \{ (?: (?&pair) (?: , (?&pair) )* )? \s* \} ) (?<json> \s* (?: (?&number) | (?&boolean) | (?&string) | (?&array) | (?&object) ) \s* ) ) \A (?&json) \Z /six '; It works quite well in PHP with the PCRE functions . Should work unmodified in Perl; and can certainly be adapted for other languages. Also it succeeds with the JSON test cases . Simpler RFC4627 verification A simpler approach is the minimal consistency check as specified in RFC4627, section 6 . It's however just intended as security test and basic non-validity precaution: var my_JSON_object = !(/[^,:{}\[\]0-9.\-+Eaeflnr-u \n\r\t]/.test( text.replace(/"(\\.|[^"\\])*"/g, ''))) && eval('(' + text + ')'); | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2583472', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/79722/']} | jdg_81189 |
stackexchange | llm_judgeable_groundtruth_similarity | 261243 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using ASP.NET MVC 4 and SQL Server 2012. I have no problem going for either of these solutions but I want to know which one would fit better in the particular application I'm creating now. I'm making a sort of a library system application. The only images the application will store are: Users' profile pictures (1 per user) Libraries' profile pictures (1 per library) Book authors' pictures (1 per author) Book cover images (1 per book) None of the images stored will need to be of high quality. So obviously the images are not a big part of the application and considering this, I am currently storing the images in my database. Though I noticed something that I think makes for a major con. The images are retrieved using a Url.ActionLink which leads to a Controller action method that accepts (for example) a bookId parameter before reaching to the database, retrieving that book's image and sending it back. So if a user were to search for books and he ends up getting a list of books along with their images before him, there will be a new call to the action method that retrieves images and a new query to the database for each image listed. Is there a better way of handling this? Should I reconsider keeping my images in the database or is it not a big issue? EDIT: Wouldn't consider this question a duplicate because it pertains to a specific type of an application that handles very little image data as as opposed to any type of an application (like in the linked question ). Of course when you're not sure how much image data you're gonna be storing it's always best to store images on the file system. This is not such a case. The nature of the answers here versus the answers there shows this.
Now provide the response and nothing else.
| Why not do both? Database is the backing, ultimate store for the image. The public side can be a simple read from db but you can easily extend that into a read-through disk cache and also take advantage of numerous infrastructure tricks to better enhance performance. The wins here are: simpler data backup -- database backups are fun and easy, no file system touching necessary simpler dev story -- you don't need to handle a bunch of files, just get the dev team a copy of the database easier to write the image -- file systems are a PITA, from permissions, to changing locations from dev->qa->production, to contention to lack of transactions. If the writes are going to the db then you solve 90% of those problems. modern databases can handle it -- presuming we are talking fairly typical image file sizes measuring under a few megabytes most modern database systems can handle them very well. It is not the database murdering issue that some older articles would have you believe. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/261243', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/154465/']} | jdg_81190 |
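To make the "read-through disk cache" suggestion above concrete, here is a minimal ASP.NET MVC sketch; the repository interface, the cache folder, and the assumption that covers are stored as JPEG bytes are illustrative choices for this example, not something prescribed by the answer:

    using System.IO;
    using System.Web.Hosting;
    using System.Web.Mvc;

    public class BookCoversController : Controller
    {
        private readonly IBookRepository _repository; // hypothetical data-access interface
        private static readonly string CacheDir = HostingEnvironment.MapPath("~/App_Data/CoverCache");

        public BookCoversController(IBookRepository repository) { _repository = repository; }

        [OutputCache(Duration = 3600, VaryByParam = "bookId")] // let ASP.NET cache the HTTP response too
        public ActionResult Cover(int bookId)
        {
            string cachedFile = Path.Combine(CacheDir, bookId + ".jpg");

            // Read-through: serve from disk if this image has already been materialized.
            if (System.IO.File.Exists(cachedFile))
                return File(cachedFile, "image/jpeg");

            byte[] imageBytes = _repository.GetCoverImage(bookId); // single database round-trip
            if (imageBytes == null)
                return HttpNotFound();

            Directory.CreateDirectory(CacheDir);
            System.IO.File.WriteAllBytes(cachedFile, imageBytes); // populate the disk cache

            return File(imageBytes, "image/jpeg");
        }
    }

A view can then point its img tags at Url.Action("Cover", "BookCovers", new { bookId = book.Id }); after the first request per book, the bytes come from disk (or straight from the output cache) instead of a fresh database query each time a list of books is rendered.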
stackexchange | llm_judgeable_groundtruth_similarity | 1350598 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a command which I am executing from my XAML file using the following standard syntax: <Button Content="Zoom" Command="{Binding MyViewModel.ZoomCommand}"/> This worked fine until I realized that I needed TWO pieces of information from the view in order to make this operation complete the way users expect (the width and height of the canvas specfically). It seems like it's possible to pass an array as an argument to my command, but I don't see there being a way to specify the binding to my two canvas properties in the CommandParameter: <Button Content="Zoom" Command="{Binding MyViewModel.ZoomCommand" CommandParameter="{Binding ElementName=MyCanvas, Path=Width}"/> How do I pass both Width and Height to my command? It doesn't seem like this is possible using commands from XAML and I need to wire up a click handler in my codebehind to get this information to pass to my zoom method.
Now provide the response and nothing else.
| Firstly, if you're doing MVVM you would typically have this information available to your VM via separate properties bound from the view. That saves you having to pass any parameters at all to your commands. However, you could also multi-bind and use a converter to create the parameters: <Button Content="Zoom" Command="{Binding MyViewModel.ZoomCommand"> <Button.CommandParameter> <MultiBinding Converter="{StaticResource YourConverter}"> <Binding Path="Width" ElementName="MyCanvas"/> <Binding Path="Height" ElementName="MyCanvas"/> </MultiBinding> </Button.CommandParameter></Button> In your converter: public class YourConverter : IMultiValueConverter{ public object Convert(object[] values, ...) { return values.Clone(); } ...} Then, in your command execution logic: public void OnExecute(object parameter){ var values = (object[])parameter; var width = (double)values[0]; var height = (double)values[1];} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1350598', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156789/']} | jdg_81191 |
stackexchange | llm_judgeable_groundtruth_similarity | 6374437 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
.NET's AES does not implement CTR directly. It only implements CBC, CFB, CTS, ECB and OFB. Can I use any of these modes and securely implement CTR around them, or do I need to use a different library altogether?
Now provide the response and nothing else.
| Yes, you can build a CTR using .NET's AES in ECB mode and a counter, that you yourself initialize and increment, for each block encrypted. An example of this is the WinZipAes encryption stream, which is part of the open-source DotNetZip. WinZip specifies the use of AES encryption for encrypted ZIP files, using AES in CTR mode. DotNetZip implements the CTR mode using ECB and the counter. See here for some comments. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6374437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/521757/']} | jdg_81192 |
stackexchange | llm_judgeable_groundtruth_similarity | 32354107 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Java enums are classes. They are compiled as classes. How will the following example be compiled?What is its "class version"?What is the exact class code? I want actual Java code. public enum Ordinals { FIRST("st"), SECOND("nd"), THIRD("rd"); private String notation; private Ordinals(String notation) { this.notation = notation; } public String getNotation() { return notation; }}
Now provide the response and nothing else.
| Each enum class is compiled as a class being a subclass of java.lang.Enum . Each enum constant becomes a static final constant within that class. Then, an array $VALUES is created with all of the enum constants, in order of declaration. You can disassemble the code using the command javap -p -c Ordinals (on the compiled .class file) to find out the details. Compiled from "Ordinals.java"public final class Ordinals extends java.lang.Enum<Ordinals> { public static final Ordinals FIRST; public static final Ordinals SECOND; public static final Ordinals THIRD; private java.lang.String notation; // your custom field private static final Ordinals[] $VALUES; // all enum constants public static Ordinals[] values(); // every enum class has this static method Code: 0: getstatic #1 // Field $VALUES:[LOrdinals; 3: invokevirtual #2 // Method "[LOrdinals;".clone:()Ljava/lang/Object; 6: checkcast #3 // class "[LOrdinals;" 9: areturn public static Ordinals valueOf(java.lang.String); // every enum class has this static method Code: 0: ldc_w #4 // class Ordinals 3: aload_0 4: invokestatic #5 // Method java/lang/Enum.valueOf:(Ljava/lang/Class;Ljava/lang/String;)Ljava/lang/Enum; 7: checkcast #4 // class Ordinals 10: areturn private Ordinals(java.lang.String); Code: 0: aload_0 1: aload_1 2: iload_2 3: invokespecial #6 // Method java/lang/Enum."<init>":(Ljava/lang/String;I)V 6: aload_0 7: aload_3 8: putfield #7 // Field notation:Ljava/lang/String; 11: return public java.lang.String getNotation(); Code: 0: aload_0 1: getfield #7 // Field notation:Ljava/lang/String; 4: areturn static {}; // fills the $VALUES array and initializes the static fields corresponding to the enum constants Code: 0: new #4 // class Ordinals 3: dup 4: ldc #8 // String FIRST 6: iconst_0 7: ldc #9 // String st 9: invokespecial #10 // Method "<init>":(Ljava/lang/String;ILjava/lang/String;)V 12: putstatic #11 // Field FIRST:LOrdinals; 15: new #4 // class Ordinals 18: dup 19: ldc #12 // String SECOND 21: iconst_1 22: ldc #13 // String nd 24: invokespecial #10 // Method "<init>":(Ljava/lang/String;ILjava/lang/String;)V 27: putstatic #14 // Field SECOND:LOrdinals; 30: new #4 // class Ordinals 33: dup 34: ldc #15 // String THIRD 36: iconst_2 37: ldc #16 // String rd 39: invokespecial #10 // Method "<init>":(Ljava/lang/String;ILjava/lang/String;)V 42: putstatic #17 // Field THIRD:LOrdinals; 45: iconst_3 46: anewarray #4 // class Ordinals 49: dup 50: iconst_0 51: getstatic #11 // Field FIRST:LOrdinals; 54: aastore 55: dup 56: iconst_1 57: getstatic #14 // Field SECOND:LOrdinals; 60: aastore 61: dup 62: iconst_2 63: getstatic #17 // Field THIRD:LOrdinals; 66: aastore 67: putstatic #1 // Field $VALUES:[LOrdinals; 70: return } That would translate back to Java as public final class Ordinals extends java.lang.Enum<Ordinals> { public static final Ordinals FIRST; public static final Ordinals SECOND; public static final Ordinals THIRD; private String notation; private static final Ordinals[] $VALUES; public static Ordinals[] values() { return $VALUES.clone(); } public static Ordinals valueOf(String name) { return (Ordinals) Enum.valueOf(Ordinals.class, name); } private Ordinals(String name, int ordinal, String notation) { super(name, ordinal); this.notation = notation } static { FIRST = new Ordinals("FIRST", 0, "st"); SECOND = new Ordinals("SECOND", 1, "nd"); THIRD = new Ordinals("THIRD", 2, "rd"); Ordinals[] $VALUES = new Ordinals[3]; $VALUES[0] = FIRST; $VALUES[1] = SECOND; $VALUES[2] = THIRD; Ordinals.$VALUES = $VALUES; }} Class version is totally 
unrelated to that - it depends on the version of the Java compiler that you used (or on the explicit setting on the compiler to force it to compile for an older Java version). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32354107', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3453226/']} | jdg_81193 |
stackexchange | llm_judgeable_groundtruth_similarity | 275 |
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My e46 M3 would not start when I got to the parking garage after work. When I first turned the key, I heard a sound like grinding gears, but I was so startled that I immediately stopped trying to crank the engine. The interior lights and radio flashed for a fraction of a second when this happened, and the clock on the dash reset itself. A couple of seconds later I tried cranking again, but all I got was the sound of the starter relay clicking away. The engine did not turn over, nor did I hear any grinding noises. Is it the starter, or something else? How would I check this? I should also mention that I was able to push-start the car and get it running, since it has a manual transmission. Thanks!
Now provide the response and nothing else.
| Your battery was dead. In many cars, when the battery can't provide enough cranking amps, you will hear a clicking instead of the normal starting sound. The starter battery draws a lot of current. When that happens, the voltage available to the rest of the system drops dramatically, especially if the battery is weak. That's why your clock reset itself. Hook up a smart charger. A cheap one is a Battery Tender Jr . Avoid a dumb charger, which will gladly push 13.8V at 2A in to your battery until all the electrolyte has boiled away. A good charger will have bulk, absorption, and float stages. A really good one will have a desulfate stage, which can help revive an older battery. You need to figure out whether the problem is in your charging system or in the battery. A mechanic or a battery shop will have a testing tool that will measure the CCA of your battery. To get the best possible test, charge the battery to full, then let it rest disconnected for 24 hours. If the number is very low, your battery can't do the job. Letting a desulfating charger work for a long time may make a difference, but if this is your daily driver and you don't have a spare battery, you probably should just replace it. If you decide to replace your battery, check out How do I apply dielectric grease to my battery? If the battery checks out, then the problem is in your charging system. Perhaps there's a loose connection, or perhaps your alternator is bad. See your service manual for instructions on troubleshooting the charging system. You'll definitely want a good multimeter at this point, with 3 1/2 digits of precision. Remember that batteries have lead, produce hydrogen, and can get acid on things. Nitrile gloves, goggles or face mask, clothes you want holes in, and wash up well afterward. | {} | {'log_upvote_score': 5, 'links': ['https://mechanics.stackexchange.com/questions/275', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/116/']} | jdg_81194 |
stackexchange | llm_judgeable_groundtruth_similarity | 44183478 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Below is the my code : <?php$sql2 = "Select * from table ";for($i=0;$i<count($val);$i++){$sql2.=" where(column1 LIKE '%".$val[$i]."%' OR " . "column2 LIKE '%".$val[$i]."%' OR " . "column3 LIKE '%".$val[$i]."%' OR " . "column4 LIKE '%".$val[$i]."%' OR " . "column5 LIKE '%".$val[$i]."%' OR " . "column6 LIKE '%".$val[$i]."%' OR " . "column7 LIKE '%".$val[$i]."%') ";} Now how to add mysql_real_escape_string here ? option is like to do ->where('status=:status', [':status' => $status]) but here how can i pass such params ?
Now provide the response and nothing else.
| The variable you call video actually holds the playlist information , not the video information. You can find a list of the individual video information in the playlist's entries attribute. See below for a possible fix. I renamed your video variable to playlist and took the freedom to rewrite it a bit and add output: import textwrapimport youtube_dlplaylists = [ "https://www.youtube.com/playlist?list=PLRQGRBgN_EnrPrgmMGvrouKn7VlGGCx8m"]for playlist in playlists: with youtube_dl.YoutubeDL({"ignoreerrors": True, "quiet": True}) as ydl: playlist_dict = ydl.extract_info(playlist, download=False) # Pretty-printing the video information (optional) for video in playlist_dict["entries"]: print("\n" + "*" * 60 + "\n") if not video: print("ERROR: Unable to get info. Continuing...") continue for prop in ["thumbnail", "id", "title", "description", "duration"]: print(prop + "\n" + textwrap.indent(str(video.get(prop)), " | ", lambda _: True) ) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44183478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1900235/']} | jdg_81195 |
stackexchange | llm_judgeable_groundtruth_similarity | 152606 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading about SQL injection and saw this, which got me thinking: input fields as small as possible to reduce the likelihood of a hacker being able to squeeze SQL code into the field without it being truncated (which usually leads to a T-SQL syntax error). Source: Microsoft SQL Server 2008 R2 Unleashed What is the shortest field size where SQL injection can cause harm? With harm being database modification or returning results not intended by design. Including an end comment marker ( -- ) in a two-character field would not cause harm; it would just cause a failed query. The potential hacker might learn the field is susceptible to injection, but they are unable to leverage it.
Now provide the response and nothing else.
| No, there is no length that is too short to be exploitable (at least in some situations). A length-filter is not a valid protection against SQL injection, and prepared statements really are the only proper defense. A length filter is however a good measure as defense in depth (as are integer filters, alphanum filters, etc). There are many situations where e.g. valid input could never be above say 30 characters, but where meaningful exploitation requires more. It should (but probably doesn't) go without saying that any filtering as defense in depth must be taking place server-side as anything client-side can simply be bypassed. Restriction Bypass Restriction clauses (e.g. AND / OR ) can be bypassed by two characters, which can cause real harm, not just a failed query. The most simple example is a login (other examples would be the unauthorized deletion of additional data): SELECT * FROM users WHERE userid = [id] AND password = [password] Injection: id = 1#password = wrong_password Payload: 2 chars DoS DoS attacks require very few characters. In a MySQL example, it takes 7 for the actual call + x for the given seconds + whatever is needed to be able to call the function and fix the query. Example: SELECT * FROM users WHERE userid = [id] Injection (this is a valid injection, a longer form would be 1 AND sleep(99) ): sleep(99) Payload: 9 chars Reading Data If the data is displayed, the length depends mainly on the table and column name. I'll assume equal column count for all tables (it may happen, and it saves characters). Example: SELECT * FROM comments WHERE commentid = [id] Injection: 1 union select * from users Payload: 27 chars. Editing Data Unauthorized database modifications can also be achieved with few characters. Example: UPDATE users SET password = '[password]' WHERE id = [id] Injection (into password): ',isadmin='1 Payload: 12 chars A restriction bypass would also work (the result is that all passwords are now empty*): '# Payload: 2 chars * The password example is used for simplicity; passwords should be hashed making the example impossible. The example still applies in all similar situations (updating a username, updating permissions, and so on) | {} | {'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/152606', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/24064/']} | jdg_81196 |
stackexchange | llm_judgeable_groundtruth_similarity | 1553896 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm asked to show that if $h(1)=0$ and $h'(x)={1\over{x}}$, then for $a,b>0$, $h(ab)=h(a)+h(b)$. I'm expected to use implicit differentiation to show this property.
Now provide the response and nothing else.
| Make successive divisions by $2$ and note the remainders, until the quotient is $0$:$$\begin{array}{r|cc}76&0\\38&0\\19&1\\9&1&\uparrow\\4&0\\2&0\\1&1\end{array}$$The binary digits of $76$ are $\;\color{red}{1001100}_2$. In other words$$76=2^6+2^3+2^2.$$ This is because, if you write the Euclidean division equalities for each of these divisions, you have ( Horner scheme )\begin{align*}76&=2\cdot 38 =2(2\cdot 19)=2(2(2\cdot 9+1))=2(2(2(2\cdot 4+1)+1))\\&=2(2(2(2(2\cdot 2)+1)+1))=2(2(2(2(2(2\cdot 1))+1)+1))\end{align*} | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1553896', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/266798/']} | jdg_81197 |
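The successive-division procedure in the answer translates directly to code. A small sketch, shown in Java purely as an illustration:

```java
class ToBinary {
    // Divide by 2 repeatedly; the remainders, read last-to-first, are the bits.
    static String toBinary(int n) {
        if (n == 0) return "0";
        StringBuilder bits = new StringBuilder();
        while (n > 0) {
            bits.append(n % 2); // remainder of this division
            n /= 2;             // quotient feeds the next division
        }
        return bits.reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary(76)); // prints 1001100
    }
}
```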
stackexchange | llm_judgeable_groundtruth_similarity | 7110042 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to make some changes to the Checkout controller in my Magento store, however, when I try to put the controller file into my local directory it does not seem to be respecting the local file for this. I have checked that the files are in the same spot in their respective code pools and the files are named the same thing, it just will not do it. /app/code/core/Mage/Checkout/controllers|CartController.php /app/code/local/Mage/Checkout/controllers|CartController.php Those are the file paths for the two files I have. Is there something special about the checkout files in Magento that prevent them from being overridden?
Now provide the response and nothing else.
| FYI, controller overriding doesn't work just by placing the related file in local folder with the same folder structure. Rather you need to follow the following syntax for the purpose: <config> <frontend> [1] <routers> <checkout> [2] <args> <modules> <yourModule before="Mage_Checkout">Namespace_Module_Overwrite_Checkout</yourModule> [3] </modules> </args> </checkout> </routers> </frontend></config> [1] Depending on the controllers area ,"frontend" or "admin"[2] Router node of the module to be overridden (look it up in the config.xml of that module)[3] The tag can be anything, it must be unique within the node.Your_Module_Overwrite_Checkout (is mapped to directory)Your/Module/controllers/Overwrite/Checkout/Hope this helps.RegardsMagePsycho | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7110042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/365709/']} | jdg_81198 |
stackexchange | llm_judgeable_groundtruth_similarity | 29424944 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing my first app in Kotlin after 3 years of experience with Android.Just confused as to how to utilize itemClickListener with a RecyclerView in Kotlin. I have tried the trait (edit: now interface) approach, very Java-like public class MainActivity : ActionBarActivity() { protected override fun onCreate(savedInstanceState: Bundle?) { // set content view etc go above this line class itemClickListener : ItemClickListener { override fun onItemClick(view: View, position: Int) { Toast.makeText(this@MainActivity, "TEST: " + position, Toast.LENGTH_SHORT).show() } } val adapter = DrawerAdapter(itemClickListener()) mRecyclerView.setAdapter(adapter) } trait ItemClickListener { fun onItemClick(view: View, position: Int) }} That seemed very redundant so I tried the inner class approach: inner class ItemClickListener { fun onItemClick(view: View, position: Int) { startActivityFromFragmentForResult<SelectExerciseActivity>(SELECT_EXERCISES) }} And then just setting the adapter's click listener like this: val adapter = WorkoutsAdapter(ItemClickListener()) But I'm still not satisfied with this because I think there might be a better, cleaner way. I'm trying to essentially achieve something like this: RecyclerView onClick Any suggestions? Ended up going with a variation of the approved answer Defined the function in the activity: val itemOnClick: (View, Int, Int) -> Unit = { view, position, type -> Log.d(TAG, "test")} Passed the function itself on to the adapter like this: class ExercisesAdapter(val itemClickListener: (View, Int, Int) -> Unit) : RecyclerView.Adapter<RecyclerView.ViewHolder>() { override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RecyclerView.ViewHolder { // other stuff up here val vhExercise = ExerciseVH(view) // view holder // on to the view holder through the extension function vhExercise.onClick(itemClickListener) }} Extension function by Loop in the approved answer below. fun <T : RecyclerView.ViewHolder> T.onClick(event: (view: View, position: Int, type: Int) -> Unit): T { itemView.setOnClickListener { event.invoke(it, getAdapterPosition(), getItemViewType()) } return this}
Now provide the response and nothing else.
| My solution is like a combination of the previous ones with a super clean call from the activity. ContactAdapter: class ContactAdapter @Inject constructor() : RecyclerView.Adapter<ContactAdapter.ViewHolder>() { var onItemClick: ((Contact) -> Unit)? = null var contacts: List<Contact> = emptyList() ... override fun onBindViewHolder(holder: ViewHolder, position: Int) { val contact = contacts[position] holder.email.text = contact.email } inner class ViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) { val email: TextView = itemView.email init { itemView.setOnClickListener { onItemClick?.invoke(contacts[adapterPosition]) } } }} ContactActivity: override fun setupRecyclerAdapter() { recyclerView.adapter = contactAdapter recyclerView.layoutManager = LinearLayoutManager(this) contactAdapter.onItemClick = { contact -> // do something with your item Log.d("TAG", contact.email) }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29424944', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/507142/']} | jdg_81199 |
stackexchange | llm_judgeable_groundtruth_similarity | 4191912 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been preventing updates to certain models by using this in the model: def update self.errors.add_to_base( "Cannot update a #{ self.to_s }" )end I'm now writing a plugin that delivers some extra functionality to the model, and I need to update one field in the model. If I weren't using a plugin I would do this directly in the model... def update if self.changed == ['my_field'] super else self.errors.add_to_base( "Cannot update a #{ self.to_s }" ) endend I can't do the same from my plugin since I don't know if the update behaviour is the ActiveRecord default, or has been overridden to prevent updates. Is there another way to prevent record updates while allowing me to override for a specific field (and only in the instance where my plugin is applied to this model).
Now provide the response and nothing else.
| First, you should be using a before_update callback for that sort of thing rather than overriding update. Second, you can store the updatable attributes on the model, and then update them with the plugin. I just wrote this in the browser, so it could be wrong. attr_accessor :updatable_attributes before_update :prevent_update private def prevent_update return true if self.changed == self.updatable_attributes self.errors.add_to_base "Cannot update a #{ self.to_s }" false endend | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4191912', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/154274/']} | jdg_81200 |
stackexchange | llm_judgeable_groundtruth_similarity | 45105164 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
To clear all bits you often see an exclusive or as in XOR eax, eax . Is there such a trick for the opposite too? All I can think of is to invert the zeroes with an extra instruction.
Now provide the response and nothing else.
| For most architectures with fixed-width instructions, the answer will probably be a boring one instruction mov of a sign-extended or inverted immediate, or a mov lo/high pair. e.g. on ARM, mvn r0, #0 (move-not). See gcc asm output for x86, ARM, ARM64, and MIPS, on the Godbolt compiler explorer . IDK anything about zseries asm or machine code. In ARM, eor r0,r0,r0 is significantly worse than a mov-immediate. It depends on the old value, with no special-case handling. Memory dependency-ordering rules prevent an ARM uarch from special-casing it even if they wanted to. Same goes for most other RISC ISAs with weakly-ordered memory but that don't require barriers for memory_order_consume (in C++11 terminology). x86 xor-zeroing is special because of its variable-length instruction set.Historically, 8086 xor ax,ax was fast directly because it was small. Since the idiom became widely used (and zeroing is much more common than all-ones), CPU designers gave it special support, and now xor eax,eax is faster than mov eax,0 on Intel Sandybridge-family and some other CPUs, even without considering direct and indirect code-size effects. See What is the best way to set a register to zero in x86 assembly: xor, mov or and? for as many micro-architectural benefits as I've been able to dig up. If x86 had a fixed-width instruction-set, I wonder if mov reg, 0 would have gotten as much special treatment as xor-zeroing has? Perhaps, because dependency-breaking before writing the low8 or low16 is important. The standard options for best performance: mov eax, -1 : 5 bytes, using the mov r32, imm32 encoding. (There is no sign-extending mov r32, imm8 , unfortunately). Excellent performance on all CPUs. 6 bytes for r8d-r15d (REX prefix). mov rax, -1 : 7 bytes, using the mov r/m64, sign-extended-imm32 encoding. (Not the REX.W=1 version of the eax version. That would be 10-byte mov r64, imm64 ). Excellent performance on all CPUs. The weird options that save some code-size usually at the expense of performance : (See also Tips for golfing in x86/x64 machine code ) xor eax,eax / dec rax (or not rax ): 5 bytes (4 for 32-bit eax , or 3 bytes in 32-bit mode where 1-byte dec eax exists. 64-bit mode used those 1-byte instructions as REX prefixes). Downside: two uops for the front-end. Still only one unfused-domain uop for the scheduler/execution units on recent Intel where xor-zeroing is handled in the front-end. mov -immediate always needs an execution unit. (But integer ALU throughput is rarely a bottleneck for instructions that can use any port; the extra front-end pressure is the problem) xor ecx,ecx / lea eax, [rcx-1] 5 bytes total for 2 constants (6 bytes for rax ): leaves a separate zeroed register . If you already want a zeroed register, there is almost no downside to this. lea can run on fewer ports than mov r,i on most CPUs, but since this is the start of a new dependency chain, the CPU can run it in any spare execution-port cycle after it issues. The same trick works for any two nearby constants, if you do the first one with mov reg, imm32 (or push imm8/pop) and the second with lea r32, [base + disp8] . disp8 has a range of -128 to +127, otherwise you need a disp32 . After a loop you may have a known-zero register, but LEA relative to it creates a false dependency, while mov-immediate wouldn't. Branch prediction + speculative exec can break control dependencies, although loop branches often mispredict their last iteration unless the trip count is low. 
or eax, -1 : 3 bytes (4 for rax ), using the or r/m32, sign-extended-imm8 encoding. Downside: false dependency on the old value of the register. push -1 / pop rax : 3 bytes. Slow but small. Recommended only for exploits / code-golf. Works for any sign-extended-imm8 , unlike most of the others. Downsides: uses store and load execution units, not ALU. (Possibly a throughput advantage in a rare cases on AMD Bulldozer-family where there are only two integer execution pipes, but decode/issue/retire throughput is higher than that. But don't try it without testing.) store/reload latency means rax won't be ready for ~5 cycles after this executes on Skylake, for example. (Intel): puts the stack-engine into rsp-modified mode, so the next time you read rsp directly it will take a stack-sync uop. (e.g. for add rsp, 28 , or for mov eax, [rsp+8] ). The store could miss in cache, triggering extra memory traffic. (Possible if you haven't touched the stack inside a long loop). Vector regs are different Setting vector registers to all-ones with pcmpeqd xmm0,xmm0 is special-cased on most CPUs as dependency-breaking (not Silvermont/KNL), but still needs an execution unit to actually write the ones. pcmpeqb/w/d/q all work, but q is slower on some CPUs and has longer machine code. For AVX2 , the ymm equivalent vpcmpeqd ymm0, ymm0, ymm0 is also the best choice. (Or b/w are equivalent, but vpcmpeqq has longer machine code.) For AVX without AVX2 the choice is less clear: there is no one obvious best approach. Compilers use various strategies : gcc prefers to load a 32-byte constant with vmovdqa , while older clang uses 128-bit vpcmpeqd followed by a cross-lane vinsertf128 to fill the high half. Newer clang uses vxorps to zero a register then vcmptrueps to fill it with ones. This is the moral equivalent of the vpcmpeqd approach, but the vxorps is needed to break the dependency on the prior version of the register and vcmptrueps has a latency of 3. It makes a reasonable default choice. Doing a vbroadcastss from a 32-bit value is probably strictly better than the load approach, but it is hard to get compilers to generate this. The best approach probably depends on the surrounding code. Fastest way to set __m256 value to all ONE bits AVX512 compares are only available with a mask register (like k0 ) as the destination, so compilers are currently using vpternlogd zmm0,zmm0,zmm0, 0xff as the 512b all-ones idiom. (0xff makes every element of the 3-input truth-table a 1 ). This is not special-cased as dependency-breaking on KNL or SKL, but it has 2-per-clock throughput on Skylake-AVX512. This beats using a narrower dependency-breaking AVX all-ones and broadcasting or shuffling it. If you need to re-generate all-ones inside a loop, obviously the most efficient way is to use a vmov* to copy an all-ones register. This doesn't even use an execution unit on modern CPUs (but still takes front-end issue bandwidth). But if you're out of vector registers, loading a constant or [v]pcmpeq[b/w/d] are good choices. For AVX512, it's worth trying VPMOVM2D zmm0, k0 or maybe VPBROADCASTD zmm0, eax . Each has only 1c throughput , but they should break dependencies on the old value of zmm0 (unlike vpternlogd ). They require a mask or integer register which you initialized outside the loop with kxnorw k1,k0,k0 or mov eax, -1 . For AVX512 mask registers , kxnorw k1,k0,k0 works, but it's not dependency-breaking on current CPUs. 
Intel's optimization manual suggests using it for generating an all-ones before a gather instruction, but recommends avoiding using the same input register as the output. This avoids making an otherwise-independent gather dependent on a previous one in a loop. Since k0 is often unused, it's usually a good choice to read from. I think vpcmpeqd k1, zmm0,zmm0 would work, but it's probably not special-cased as a k1=1 idiom with no dependency on zmm0. (To set all 64 bits instead of just the low 16, use AVX512BW vpcmpeqb or kxnorq ) On Skylake-AVX512, k instructions that operate on mask registers only run on a single port , even simple ones like kandw . (Also note that Skylake-AVX512 won't run vector uops on port1 when there are any 512b operations in the pipe, so execution unit throughput can be a real bottleneck.) There is no kmov k0, imm , only moves from integer or memory. Probably there are no k instructions where same,same is detected as special, so the hardware in the issue/rename stage doesn't look for k registers. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45105164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2428640/']} | jdg_81201 |
stackexchange | llm_judgeable_groundtruth_similarity | 40319045 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to write a unit test for component used in my service.Component and service work fine. Component: import {Component} from '@angular/core';import {PonyService} from '../../services';import {Pony} from "../../models/pony.model";@Component({ selector: 'el-ponies', templateUrl: 'ponies.component.html', providers: [PonyService]})export class PoniesComponent { ponies: Array<Pony>; constructor(private ponyService: PonyService) { this.ponies = this.ponyService.getPonies(2); } refreshPonies() { this.ponies = this.ponyService.getPonies(3); }} Service: import {Injectable} from "@angular/core";import {Http} from "@angular/http";import {Pony} from "../../models/pony.model";@Injectable()export class PonyService { constructor(private http: Http) {} getPonies(count: number): Array<Pony> { let toReturn: Array<Pony> = []; this.http.get('http://localhost:8080/js-backend/ponies') .subscribe(response => { response.json().forEach((tmp: Pony)=> { toReturn.push(tmp); }); if (count && count % 2 === 0) { toReturn.splice(0, count); } else { toReturn.splice(count); } }); return toReturn; }} Component unit test: import {TestBed} from "@angular/core/testing";import {PoniesComponent} from "./ponies.component";import {PonyComponent} from "../pony/pony.component";import {PonyService} from "../../services";import {Pony} from "../../models/pony.model";describe('Ponies component test', () => { let poniesComponent: PoniesComponent; beforeEach(() => { TestBed.configureTestingModule({ declarations: [PoniesComponent, PonyComponent], providers: [{provide: PonyService, useClass: MockPonyService}] }); poniesComponent = TestBed.createComponent(PoniesComponent).componentInstance; }); it('should instantiate component', () => { expect(poniesComponent instanceof PoniesComponent).toBe(true, 'should create PoniesComponent'); });});class MockPonyService { getPonies(count: number): Array<Pony> { let toReturn: Array<Pony> = []; if (count === 2) { toReturn.push(new Pony('Rainbow Dash', 'green')); toReturn.push(new Pony('Pinkie Pie', 'orange')); } if (count === 3) { toReturn.push(new Pony('Fluttershy', 'blue')); toReturn.push(new Pony('Rarity', 'purple')); toReturn.push(new Pony('Applejack', 'yellow')); } return toReturn; };} Part of package.json: { ... "dependencies": { "@angular/core": "2.0.0", "@angular/http": "2.0.0", ... }, "devDependencies": { "jasmine-core": "2.4.1", "karma": "1.2.0", "karma-jasmine": "1.0.2", "karma-phantomjs-launcher": "1.0.2", "phantomjs-prebuilt": "2.1.7", ... }} When I execute 'karma start' I get this error Error: Error in ./PoniesComponent class PoniesComponent_Host - inline template:0:0 caused by: No provider for Http! in config/karma-test-shim.js It looks like karma uses PonyService instead of mocking it as MockPonyService , in spite of this line: providers: [{provide: PonyService, useClass: MockPonyService}] . The question: How I should mock the service?
Now provide the response and nothing else.
| It's because of this @Component({ providers: [PonyService] <======}) This makes it so that the service is scoped to the component, which means that Angular will create it for each component, and also means that it supercedes any global providers configured at the module level. This includes the mock provider that you configure in the test bed. To get around this, Angular provides the TestBed.overrideComponent method, which allows us to override things like the @Component.providers and @Component.template . TestBed.configureTestingModule({ declarations: [PoniesComponent, PonyComponent]}).overrideComponent(PoniesComponent, { set: { providers: [ {provide: PonyService, useClass: MockPonyService} ] }}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40319045', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3120210/']} | jdg_81202 |
stackexchange | llm_judgeable_groundtruth_similarity | 9097044 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have written a Windows Forms application and now I want to write some unit tests for it (not exactly test driven development seeing as I am writing the tests after I have developed, but better late than never!) My question is that with such an application how do you go about writing the unit tests, given that nearly all of the methods and events are private? I have heard of NUnit Forms but I hear good and bad things about it, also there has been no real development on that project for a while so it looks abandoned. Also, is it generally accepted that the project has adequate unit testing in place if I wrote unit test cases for all of the events that a user would trigger by clicking/pressing buttons, or would I have to go and write unit test cases for all methods and figure out a way to test my private methods? EDIT: My business logic is separated from my presentation logic; there are 1 or 2 public methods my business logic exposes so the form can access them, but what about all the private methods that are in the business logic?
Now provide the response and nothing else.
| The first thing I would do is to ensure that you have proper separation of your business logic from your form. Basically, using an MVC pattern. Then, you can easily test everything outside the form, as if the form didn't even exist. Now, this could still leave some untested form-specific functionality. I.E., is the form wired-up to the service correctly? For this, then you could still consider something like NUnit Forms or another alternative. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9097044', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/678011/']} | jdg_81203 |
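A rough sketch of the separation the answer recommends: a "humble" view behind an interface, with the testable logic in a presenter. It is shown in Java for brevity, but the same shape applies directly to a C# WinForms form; `PriceService` and the member names are invented for the example.

```java
interface CheckoutView {
    String enteredQuantity();
    void showTotal(String total);
    void showError(String message);
}

interface PriceService {
    String totalFor(int quantity);
}

class CheckoutPresenter {
    private final CheckoutView view;
    private final PriceService prices;

    CheckoutPresenter(CheckoutView view, PriceService prices) {
        this.view = view;
        this.prices = prices;
    }

    // The form's click handler only delegates here, so a unit test can drive
    // this method with a fake view and never touch the real form.
    void onCalculateClicked() {
        try {
            int qty = Integer.parseInt(view.enteredQuantity());
            view.showTotal(prices.totalFor(qty));
        } catch (NumberFormatException e) {
            view.showError("Quantity must be a number");
        }
    }
}
```

With this split, the only untested code left in the form itself is the thin wiring that forwards events to the presenter.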
stackexchange | llm_judgeable_groundtruth_similarity | 11029256 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: A concise explanation of nil v. empty v. blank in Ruby on Rails Can anyone tell me the difference between nil? , blank? and empty? in Ruby?
Now provide the response and nothing else.
| In Ruby, nil is an object (a single instance of the class NilClass ). This means that methods can be called on it. nil? is a standard method in Ruby that can be called on all objects and returns true for the nil object and false for anything else. empty? is a standard Ruby method on some objects like Arrays, Hashes and Strings. Its exact behaviour will depend on the specific object, but typically it returns true if the object contains no elements. blank? is not a standard Ruby method but is added to all objects by Rails and returns true for nil , false , empty, or a whitespace string. Because empty? is not defined for all objects, you would get a NoMethodError if you called empty? on nil , so to avoid having to write things like if x.nil? || x.empty? , Rails adds the blank? method. After answering, I found an earlier question, " How to understand nil vs. empty vs. blank in Rails (and Ruby) ", so you should check the answers to that too. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11029256', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/830785/']} | jdg_81204 |
stackexchange | llm_judgeable_groundtruth_similarity | 79242 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This article claims that God has been translated to 'a god' in the New World Translation as a result of the article 'ho' not proceeding the word God. Is this a valid argument or are there other areas in scripture or the Koine Greek language that allow this to be the case while also allowing 'a god' to be rendered as 'God'? Excerpt: Is the rendering “a god” consistent with the rules of Greek grammar?Some reference books argue strongly that the Greek text must betranslated, “The Word was God.” But not all agree. In his article“Qualitative Anarthrous Predicate Nouns: Mark 15:39 and John 1:1,”Philip B. Harner said that such clauses as the one in John 1:1, “withan anarthrous predicate preceding the verb, are primarily qualitativein meaning. They indicate that the logos has the nature of theos.” Hesuggests: “Perhaps the clause could be translated, ‘the Word had thesame nature as God.’” (Journal of Biblical Literature, 1973, pp. 85,87) Thus, in this text, the fact that the word the·osʹ in its secondoccurrence is without the definite article (ho) and is placed beforethe verb in the sentence in Greek is significant. Interestingly,translators that insist on rendering John 1:1, “The Word was God,” donot hesitate to use the indefinite article (a, an) in their renderingof other passages where a singular anarthrous predicate noun occursbefore the verb. Thus at John 6:70, JB and KJ both refer to JudasIscariot as “a devil,” and at John 9:17 they describe Jesus as “aprophet.” https://www.jw.org/en/library/books/Reasoning-From-the-Scriptures/Trinity/
Now provide the response and nothing else.
| I agree with the part of the OP quotation that says: Philip B. Harner said that such clauses as the one in John 1:1, “withan anarthrous predicate preceding the verb, are primarily qualitativein meaning. They indicate that the logos has the nature of theos.” Hesuggests: “Perhaps the clause could be translated, ‘the Word had thesame nature as God.’” (Journal of Biblical Literature, 1973, pp. 85,87) This also agree with Daniel Wallace - see the appendix below. However, this very assertion militates against the translation in John 1:1c of "a god". Indeed, the qualitative idea demands the translation, "the Word was God (in nature)" precisely as Harner suggests. The big problem with the NWT translation is its inconsistence about this very matter - in other places where we have anarthrous "theos", the same translators do NOT use "a god" such as: John 1:6 (NWT) - There came a man who was sent as a representative of God; his name was John [note - no "a" before "God" despite the lack of article] John 1:12 (NWT) - However, to all who did receive him, he gave authority to become God’s children [note - no "a" before "God" despite the lack of article] John 1:13 (NWT) - And they were born, not from blood or from a fleshly will or from man’s will, but from God [note - no "a" before "God" despite the lack of article] I could quote dozens more. The point is clear - the NWT translators translate according to a pre-existing theology rather than according to the established rules of grammar - see appendix below. APPENDIX - Grammar of "ho theos". Concerning John 1:1c, Daniel B Wallace, in his Greek Grammar Beyond the Basics, page 269, says this: The most likely candidate for Θεὸς is qualitative. This is true bothgrammatically (for the largest proportion of pre-verbal anarthrouspredicate nominatives fall into this category) and theologically (boththe theology of the Fourth Gospel and the NT as a whole). There is abalance between the Word's deity, which was already present in thebeginning (Ἐν ἀρχῇ ... Θεὸς ἦν [1:1], and his humanity, which wasadded later (σὰρξ ἐγένετο [1:14]). The grammatical structure of thesetwo statements mirrors each other; both emphasize the nature of theWord, rather than his identity. But Θεὸς was his nature from eternity(hence εἰμί is used), while σὰρξ was added at the incarnation (henceγίνομαι is used.) The following quote is taken from an article by the esteemed NT linguist and editor, Bruce Metzger found here >> http://www.bible-researcher.com/metzger.jw.html Some years ago Dr. Ernest Cadman Colwell of the University of Chicagopointed out in a study of the Greek definite article that, “A definitepredicate nominative has the article when it follows the verb; it doesnot have the article when it precedes the verb. … The opening verse ofJohn’s Gospel contains one of the many passages where this rulesuggests the translation of a predicate as a definite noun. Theabsence of the article [before θεος] does not make the predicateindefinite or qualitative when it precedes the verb; it is indefinitein this position only when the context demands it. The context makesno such demand in the Gospel of John, for this statement cannot beregarded as strange in the prologue of the gospel which reaches itsclimax in the confession of Thomas [John 20:28, ‘My Lord and myGod’].” In a lengthy Appendix in the Jehovah’s Witnesses’ translation, whichwas added to support the mistranslation of John 1:1, there are quotedthirty-five other passages in John where the predicate noun has thedefinite article in Greek. 
20 These are intended to prove that theabsence of the article in John 1:1 requires that θεος must betranslated “a god.” None of the thirty-five instances is parallel,however, for in every case the predicate noun stands after the verb,and so, according to Colwell’s rule, properly has the article. So far,therefore, from being evidence against the usual translation of John1:1, these instances add confirmation to the full enunciation of therule of the Greek definite article. | {} | {'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/79242', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/10176/']} | jdg_81205 |
stackexchange | llm_judgeable_groundtruth_similarity | 1857004 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the coefficient of $x^{41}$ in $(x^5 + x^6 + x^7 + x^8 + x^9)^5$? Using summation of a G.P., this is equivalent to finding the coefficient of $x^{41}$ in $$\left(x^5 \left(\frac{1-x^5}{1-x}\right)\right)^5$$ and thus finding the coefficient of $x^{16}$ in $(\frac{1-x^5}{1-x})^5$. How to proceed after this?
Now provide the response and nothing else.
| In other words, how many ways are there to write$$ 41 = a + b+c+d+e $$where $a,b,c,d,e$ are integers between $5$ and $9$, inclusive. The highest sum we can make is $9\cdot 5=45$, so to get $41$ instead we need to remove $4$ units from among the 5 variables. Fortunately, even removing all $4$ of them from the same variable still leaves $5$, which is a valid value. So what we're counting is the same as How many ways are there to write $4$ as an (ordered) sum of $5$ non-negative integers? This is a standard combinatorial problem; by the stars-and-bars formula, the answer is $$\binom{4+5-1}{4} = 70$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1857004', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/350844/']} | jdg_81206 |
stackexchange | llm_judgeable_groundtruth_similarity | 7166954 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've tried to setup Phonegap on Android and deviceready won't fire. The reason is that DeviceInfo.uuid is always null/undefined. It seems like the non-javascript parts of phonegap isn't loaded correctly, but I can't see exactly what. For everything outside the www directory I'm using the code provided in the sample directory of the phonegap download. Anyone know what may be causing this? <!DOCTYPE html><html> <head> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no;" /> <meta http-equiv="Content-type" content="text/html; charset=utf-8"> <script type="text/javascript" charset="utf-8" src="javascripts/phonegap-1.0.0.js"></script> <script src="http://debug.phonegap.com/target/target-script-min.js#something"></script> <script type="text/javascript" charset="utf-8"> function onBodyLoad() { var initialize = function() { window.console.log("deviceready||resume"); }; document.addEventListener("deviceready", initialize); document.addEventListener("resume", initialize); window.console.log("onBodyLoad!"); } </script> </head> <body onload="onBodyLoad()"> <h1>Herro World</h1> </body></html>
Now provide the response and nothing else.
| In case someone else stumble on this problem. I hadn't realized that phonegap-1.0.0.js is different for the iPhone and Android version. It has the same name, but the content is different. Thus, one must load the correct file. I solved it like this: <script type="text/javascript"> // Atrocious way of loading two diffent phonegap scripts, but other loading methods won't work. // also there shouldn't be two scripts to begin with -- so much for cross-platform. var userAgent = navigator.userAgent.toLowerCase(); if (userAgent.match(/android/)) { document.write("<script type='text/javascript' src='javascripts\/phonegap-android-1.0.0.js'><\/script>"); } else { document.write("<script type='text/javascript' src='javascripts\/phonegap-iphone-1.0.0.js'><\/script>"); }</script> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7166954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/118007/']} | jdg_81207 |
stackexchange | llm_judgeable_groundtruth_similarity | 68825 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Constructing quantum field theories is a well-known problem. In Euclidean space, you want to define a certain measure on the space of distributions on R^n. The trickiness is that the detailed properties of the distributions that you get is sensitive to the dimension of the theory and the precise form of the action. In classical mathematics, measures are hard to define, because one has to worry about somebody well-ordering your space of distributions, or finding a Hamel basis for it, or some other AC idiocy. I want to sidestep these issues, because they are stupid, they are annoying, and they are irrelevant. Physicists know how to define these measures algorithmically in many cases, so that there is a computer program which will generate a random distribution with the right probability to be a pick from the measure (were it well defined for mathematicians). I find it galling that there is a construction which can be carried out on a computer, which will asymptotically converge to a uniquely defined random object, which then defines a random-picking notion of measure which is good enough to compute any correlation function or any other property of the measure, but which is not sufficient by itself to define a measure within the field of mathematics, only because of infantile Axiom of Choice absurdities. So is the following physics construction mathematically rigorous? Question: Given a randomized algorithm P which with certainty generates a distribution $\rho$, does P define a measure on any space of distributions which includes all possible outputs with certain probability? This is a no-brainer in the Solovay universe, where every subset S of the unit interval [0,1] has a well defined Lebesgue measure. Given a randomized computation in Solovay-land which will produce an element of some arbitrary set U with certainty, there is the associated map from the infinite sequence of random bits, which can be thought of as a random element of [0,1], into U, and one can then define the measure of any subset S of U to be the Lebesgue measure of the inverse image of S under this map. Any randomized algorithm which converges to a unique element of U defines a measure on U. Question: Is it trivial to de-Solovay this construction? Is there is a standard way of converting an arbitrary convergent random computation into a measure, that doesn't involve a detour into logic or forcing? The same procedure should work for any random algorithm, or for any map, random or not. EDIT: (in response to Andreas Blass) The question is how to translate the theorems one can prove when every subset of U gets an induced measure into the same theorems in standard set theory. You get stuck precisely in showing that the set of measurable subsets of U is sufficiently rich (even though we know from Solovay's construction that they might as well be assumed to be everything!) The most boring standard example is the free scalar fields in a periodic box with all side length L. To generate a random field configuration, you pick every Fourier mode $\phi(k_1,...k_n)$ as a Gaussian with inverse variance $k^2/L^d$, then take the Fourier transform to define a distribution on the box. This defines a distribution, since the convolution with any smooth test function gives a sum in Fourier space which is convergent with certain probability. So in Solovay land, we are free to conclude that it defines a measure on the space of all distributions dual to smooth test functions. 
But the random free field is constructed in recent papers of Sheffield and coworkers by a much more laborious route, using the exact same idea, but with a serious detour into functional analysis to show that the measure exists (see for instance theorem 2.3 in http://arxiv.org/PS_cache/math/pdf/0312/0312099v3.pdf ). This kind of thing drives me up the wall, because in a Solovay universe, there is nothing to do--- the maps defined are automatically measurable. I want to know if there is a meta-theorem which guarantees that Sheffield stuff had to come out right without any work, just by knowing that the Solovay world is consistent. In other words, is the construction: pick a random Gaussian free field by choosing each Fourier component as a random gaussian of appropriate width and fourier transforming considered a rigorous construction of measure without any further rigamarole? EDIT IN RESPONSE TO COMMENTS: I realize that I did not specify what is required from a measure to define a quantum field theory, but this is well known in mathematical physics, and also explicitly spelled out in Sheffield's paper. I realize now that it was never clearly stated in the question I asked (and I apologize to Andreas Blass and others who made thoughtful comments below). For a measure to define a quantum field theory (or a statistical field theory), you have to be able to compute reasonably arbitrary correlation functions over the space of random distributions. These correlation functions are averages of certain real valued functions on a randomly chosen distribution--- not necessarily polynomials, but for the usual examples, they always are. By "reasonably arbitrary" I actually mean "any real valued function except for some specially constructed axiom of choice nonsense counterexample". I don't know what these distribtions look like a-priory, so honestly, I don't know how to say anything at all about them. You only know what distributions you get out after you define the measure, generate some samples, and seeing what properties they have. But in Solovay-land (a universe where every subset S of [0,1] is forced to have Lebesgue measure equal to the probability that a randomly chosen real number happens to be an element of S) you don't have to know anything. The moment you have a randomized algorithm that converges to an element of some set of distributions U, you can immediately define a measure, and the expectation value of any real valued function on U is equal to the integral of this function over U against that measure. This works for any function and any distribution space, without any topology or Borel Sets, without knowing anything at all, because there are no measurability issues--- all the subsets of [0,1] are measurable. Then once you have the measure, you can prove that the distributions are continuous functions, or have this or that singularity structure, or whatever, just by studying different correlation functions. For Sheffield, the goal was to show that the level sets of the distributions are well defined and given by a particular SLE in 2d, but whatever. I am not hung up on 2d, or SLE. If one were to suggest that this is the proper way to do field theory, and by "one" I mean "me", then one would get laughed out of town. So one must make sure that there isn't some simple way to de-Solovay such a construction for a general picking algorithm. This is my question. 
EDIT (in response to a comment by Qiaochu Yuan): In my view, operator algebras are not a good substitute for measure theory for defining general Euclidean quantum fields. For Euclidean fields, statistical fields really, you are interested in any question one can ask about typical picks from a statistical distribution, for example "What is the SLE structure of the level sets in 2d?" (Sheffield's problem), "What is the structure of the discontinuity set?", "Which nonlinear functions of a given smeared-by-a-test-function-field are certainly bounded?" etc, etc. The answer to all these questions (probably even just the solution to all the moment problems) contains all the interesting information in the measure, so if you have some non-measure substitute, you should be able to reconstruct the measure from it, and vice-versa. Why hide the measure? The only reason would be to prevent someone from bringing up set-theoretic AC constructions. For the quantities which can be computed by a stochastic computation, it is traditional to ignore all issues of measurability. This is completely justified in a Solovay universe, where there are no issues of measurability. I think that any reluctance to use the language of measure theory is due solely to the old paradoxes.
Now provide the response and nothing else.
| I don't know anything about the space of all distributions dual to smooth test functions, but do know a fair bit about computable measure theory (from a certain perspective). First, you mention that you have a computable algorithm which generates a probability distribution. I believe you are saying that you have a computable algorithm from $[0,1]$ (or technically the space of infinite binary sequences) to some set $U$ where $U$ is the space of distributions of some type. Say your map is $f$. How are you describing the element $f(x) \in U$? In computable analysis, there is a standard way to talk about these things. We can describe each element of $U$ with an infinite code (although each element has more than one code). Then $f$ works as follows: It reads the bits of $x$; from those bits, it starts to write out the code for the $f(x)$. The more bits of $x$ known, the more bits of the code for $f(x)$ known. (Note, not every space has such a nice encoding. If the space isn't separable, there isn't a good way to describe each object while still preserving the important properties, namely the topology. Is say, in your example above, the space of distributions that are dual to smooth test functions, is it a separable space--maybe in a weak topology? Does the encoding you use for elements of $U$ generate the same topology?) The important property of such a computable map is that it must be continuous (in the topology generated by the encoding, but these usually coincide with the topology of the space). Since $f$ is continuous, we know we can induce a Borel measure on $U$ as follows. If $S$ is an open set then $f^{-1}(S)$ is open and $\mu(f^{-1}(S))$ is known. Similarly, with any Borel sets, hence you have a Borel measure. Borel measures are sufficient for most applications I can think of (you can integrate continuous functions and from them, define and integrate the L^p functions), but once again, I don't know anything about your applications. Also, if the function $f$ doesn't always converge to a point in $U$, but only does so almost everywhere, the function $f$ is not continuous, but it is still fairly nice and I believe stuff can be said about the measure, although I need to think about it. Update: If $f$ converges with probability one, then the set of input points that $f$ converges on is a measure one $G_{\delta}$ set, in particular it is Borel. The function remains continuous on that domain (in the restricted topology). Hence there is still an induced Borel measure on the target space. (Take a Borel set; map it back. It is Borel on the restricted domain, and hence Borel on [0,1]). Update: Also, I am assuming that your algorithm directly computes the output from the input. I will give an example what I mean. Say one want to compute a real number. To compute it directly, I should be able to ask the algorithm to give me that number within $n$ decimal places with an error bound of $1/10^n$. An indirect algorithm works as follows: The computer just gives me a sequence of approximations that converge to the number. The computer may say $0,0,0,...$ so I think it converges to 0, but at some point it starts to change to $1,1,1,...$. I can never be sure if my approximation is close to the final answer. Even if your algorithm is of the indirect type, it doesn't matter for your applications. It will still generate a Borel map, albeit a more complex one than continuous, and hence it will generate a Borel measure on the target space. 
(The almost everywhere concerns are similar; they also go up in complexity, but are still Borel.) Without knowing more about your application it is difficult for me to say much specific to your case. Am I correct in my understanding of your construction, especially the computable side of it? For example, is this the way you describe the computable map from $[0,1]$ to $U$? On a more general note, much of measure theory has been developed in a set theoretic framework. This isn't very helpful with computable concerns. But using various other definitions of measures, one is able to once again talk about measure theory with an eye to what can and cannot be computed. I hope this helps, and that I didn't just trivialize your question. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/68825', 'https://mathoverflow.net', 'https://mathoverflow.net/users/14689/']} | jdg_81208 |
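A compact way to state the pushforward construction described in the answer above, assuming (as that answer does) that the randomized computation is encoded by a Borel map $f$ from the unit interval of random bit-sequences into the target space $U$ of distributions; the notation here is generic rather than taken from either paper: $$\mu_U(S) := \lambda\!\left(f^{-1}(S)\right), \qquad S \in \mathcal{B}(U),$$ where $\lambda$ is Lebesgue measure on $[0,1]$ and $\mathcal{B}(U)$ is the Borel $\sigma$-algebra of $U$. Countable additivity of $\mu_U$ follows from that of $\lambda$, because preimages commute with countable unions; this is the precise sense in which a convergent randomized algorithm induces a Borel measure on its target space.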
stackexchange | llm_judgeable_groundtruth_similarity | 65832275 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to add a new SSH key to my GitHub account, but it says the key is already in use. I have hundreds of repos, and I don't want to click through each one to find out which is using this SSH key for deployments. Is there a way to automate this?
Now provide the response and nothing else.
| You need to add parentheses after return: const CustomInput = ({children, value, onChange}: CustomInputProps) => { return ( <div> <label htmlFor="search">{children}</label> <input id="search" type="text" value={value} onChange={onChange} /> </div> )} https://stackblitz.com/edit/react-ts-pb6jpc?embed=1&file=index.tsx If you are writing const CustomInput = ({children, value, onChange}: CustomInputProps) => { return <div> <label htmlFor="search">{children}</label> <input id="search" type="text" value={value} onChange={onChange} /> </div>} this is transformed (by JavaScript's automatic semicolon insertion) as const CustomInput = ({children, value, onChange}: CustomInputProps) => { return; <div> <label htmlFor="search">{children}</label> <input id="search" type="text" value={value} onChange={onChange} /> </div>} so your function basically returns undefined and is interpreted as const CustomInput = ({children, value, onChange}: CustomInputProps) => { return undefined; // nothing after return counts} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65832275', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5547737/']} | jdg_81209
stackexchange | llm_judgeable_groundtruth_similarity | 10584670 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to convert a ResultSet to an XML file.I've first used this example for the serialization. import org.w3c.dom.bootstrap.DOMImplementationRegistry;import org.w3c.dom.Document;import org.w3c.dom.ls.DOMImplementationLS;import org.w3c.dom.ls.LSSerializer;...DOMImplementationRegistry registry = DOMImplementationRegistry.newInstance();DOMImplementationLS impl = (DOMImplementationLS)registry.getDOMImplementation("LS");... LSSerializer writer = impl.createLSSerializer();String str = writer.writeToString(document); After I made this work, I tried to validate my XML file, there were a couple of warnings.One about not having a doctype. So I tried another way to implement this. I came across the Transformer class. This class lets me set the encoding, doctype, etc. The previous implementation supports automatic namespace fix-up. The following does not. private static Document toDocument(ResultSet rs) throws Exception { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); DocumentBuilder builder = factory.newDocumentBuilder(); Document doc = builder.newDocument(); URL namespaceURL = new URL("http://www.w3.org/2001/XMLSchema-instance"); String namespace = "xmlns:xsi="+namespaceURL.toString(); Element messages = doc.createElementNS(namespace, "messages"); doc.appendChild(messages); ResultSetMetaData rsmd = rs.getMetaData(); int colCount = rsmd.getColumnCount(); String attributeValue = "true"; String attribute = "xsi:nil"; rs.beforeFirst(); while(rs.next()) { amountOfRecords = 0; Element message = doc.createElement("message"); messages.appendChild(message); for(int i = 1; i <= colCount; i++) { Object value = rs.getObject(i); String columnName = rsmd.getColumnName(i); Element messageNode = doc.createElement(columnName); if(value != null) { messageNode.appendChild(doc.createTextNode(value.toString())); } else { messageNode.setAttribute(attribute, attributeValue); } message.appendChild(messageNode); } amountOfRecords++; } logger.info("Amount of records archived: " + amountOfRecords); TransformerFactory tff = TransformerFactory.newInstance(); Transformer tf = tff.newTransformer(); tf.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes"); tf.setOutputProperty(OutputKeys.ENCODING, "UTF-8"); tf.setOutputProperty(OutputKeys.INDENT, "yes"); BufferedWriter bf = createFile(); StreamResult sr = new StreamResult(bf); DOMSource source = new DOMSource(doc); tf.transform(source, sr); return doc;} While I was testing the previous implementation I got an TransformationException: Namespace for prefix 'xsi' has not been declared. As you can see I've tried to add a namespace with the xsi prefix to the root element of my document. After testing this I still got the Exception. What is the correct way to set namespaces and their prefixes? Edit: Another problem I have with the first implementation is that the last element in the XML document doesn't have the last three closing tags.
Now provide the response and nothing else.
| The correct way to set a node on a namespace-aware document is by using: rootNode.createElementNS("http://example/namespace", "PREFIX:aNodeName"); So you can replace "PREFIX" with your own custom prefix and replace "aNodeName" with the name of your node. To avoid having each node carry its own namespace declaration, you can define the namespaces as attributes on your root node like so: rootNode.setAttribute("xmlns:PREFIX", "http://example/namespace"); Please be sure to set: documentBuilderFactory.setNamespaceAware(true) Otherwise you won't get namespace awareness. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10584670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1389183/']} | jdg_81210
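For illustration, the fragments in the answer above can be assembled into one small namespace-aware builder. This is only a sketch: the namespace URI, the ex prefix, and the messages/message element names are placeholders invented here, not names taken from the question's code.

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class NamespaceExample {
    public static void main(String[] args) throws Exception {
        // The factory must be namespace-aware, otherwise createElementNS
        // and prefixed lookups will not behave as expected.
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.newDocument();

        // Root element created in the namespace, with a prefix, and the
        // prefix declared once as an attribute on the root.
        Element root = doc.createElementNS("http://example/namespace", "ex:messages");
        root.setAttribute("xmlns:ex", "http://example/namespace");
        doc.appendChild(root);

        // Children reuse the same namespace URI and prefix, so they do not
        // need their own xmlns declarations.
        Element message = doc.createElementNS("http://example/namespace", "ex:message");
        message.setTextContent("hello");
        root.appendChild(message);
    }
}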
stackexchange | llm_judgeable_groundtruth_similarity | 53515314 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using HangFire to send emails to users in the background, regularly. I'm obtaining email addresses from database, but I'm not sure whether I'm "injecting" database context to service that's responsible for sending emails correctly This works correctly, is there a better way to do it? public void Configure(IApplicationBuilder app, IHostingEnvironment env, Context context){ (...) app.UseHangfireDashboard(); app.UseHangfireServer(new BackgroundJobServerOptions { HeartbeatInterval = new System.TimeSpan(0, 0, 5), ServerCheckInterval = new System.TimeSpan(0, 0, 5), SchedulePollingInterval = new System.TimeSpan(0, 0, 5) }); RecurringJob.AddOrUpdate(() => new MessageService(context).Send(), Cron.Daily); (...) app.UseMvc();} public class MessageService{ private Context ctx; public MessageService(Context c) { ctx = c; } public void Send() { var emails = ctx.Users.Select(x => x.Email).ToList(); foreach (var email in emails) { sendEmail(email, "sample body"); } }}
Now provide the response and nothing else.
| I just looked to the similar question and did not find the information in one place, so posting my solution here. Assume you have your Context configured as a service, i.e. public void ConfigureServices(IServiceCollection services){ services.AddControllers(); .... services.AddDbContext<Context>(options => { ... }); ....} This makes the IServiceProvider capable to resolve the Context dependency. Next, we need to update the MessageService class in order to not hold the Context forever but instantiate it only to perform the task. public class MessageService{ IServiceProvider _serviceProvider; public MessageService(IServiceProvider serviceProvider) { _serviceProvider = serviceProvider; } public void Send() { using (IServiceScope scope = _serviceProvider.CreateScope()) using (Context ctx = scope.ServiceProvider.GetRequiredService<Context>()) { var emails = ctx.Users.Select(x => x.Email).ToList(); foreach (var email in emails) { sendEmail(email, "sample body"); } } }} And finally we ask Hangfire to instantiate the MessageService for us, it will also kindly resolve the IServiceProvider dependency for us: RecurringJob.AddOrUpdate<MessageService>(x => x.Send(), Cron.Daily); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53515314', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10522960/']} | jdg_81211 |
stackexchange | llm_judgeable_groundtruth_similarity | 7342389 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In this earlier question the OP asked the following problem: Given a rectangular grid where some squares are empty and some are filled, what is the largest number of 2x1 dominoes that can be placed into the world such that no two dominos overlap and no domino is atop a filled square? The (quite beautiful!) answer to this problem recognized that this is equivalent to finding a maximum bipartite matching in a specially-constructed graph. In this graph, each empty square has a node that is linked to each of its neighbors by an edge. Each domino then corresponds to an edge in the graph such that its endpoints are not covered by any other edge. Consequently, any set of edges that don't share a vertex (a matching) corresponds to an arrangement of dominoes, and vice-versa. My question is a generalization of this earlier one: Given a rectangular grid where some squares are empty and some are filled, what is the largest number of M x N dominoes (for a given M and N) that can be placed into the world such that no two dominos overlap and no domino is atop a filled square? I cannot see how to convert this into a matching problem as was done in the previous case. However, I also don't see any particular reason why this problem would immediately be NP-hard, so there may be a polynomial time solution to the problem. Is there an efficient algorithm for solving this problem? Or does anyone have a reduction that would show that this problem is NP-hard? Thanks so much!
Now provide the response and nothing else.
| This problem is definitely NP-hard and I can prove it. There is a reduction from 3-SAT to this problem. Specifically, it's a reduction from 3-SAT to the subproblem of this problem in which the dominoes are 1x3. There may also be other reductions for other specific sizes, but this one definitely works. Essentially, in this reduction, we're going to use domino positions to encode either true or false. In specific, I'm going to adopt the same notation as the other solution, which is to say that I'll use asterisks to indicate open spaces on the grid. I'll also use sets of three capital letters to represent dominoes and lower case letters to represent "signals" which are spaces which may or may not be filled depending on the state of the system. To embed a 3-SAT problem into this space, we're going to need a set of what I'll call gadgets which allow only certain states to be possible. Most of these gadgets will have a fixed number of dominoes in them. The exception will be the gadgets which represent the clauses which will have one extra domino if the clause is true (satisfied) but not when it is false (unsatisfied). We can interconnect these gadgets using paths. Together this will allow us to build a 3-SAT circuit. We will have a base number of dominoes since each path and gadget will take a standard amount of dominoes, we can add those up to get a base number k and then each clause gadget can have one extra domino if it is true, so if all clauses can be made true (and hence the expression satisfied) and there are n clauses, then the maximum number of dominoes will be n+k. If not, then the maximum number will be less than n+k. This is the basic form of the reduction. Next I will describe the gadgets and give examples. Similar to the other answer, we're going to have two positions which encode true and false for a given variable. So, I'll start with a single tile which can be in two possible places. **** This can either be covered with one domino like AAA* or *AAA Obviously, this cannot be covered with 2 dominoes and covering it with 0 dominoes would never be maximal. For my purposes, we're going to consider a protrusion to represent the value "false" and a lack of protrusion to represent "true". So we can view this part as having carrying two signals: x**y And in this case, only one of x or y will be covered, so we can consider the signals to be x and the logical not of x. For our purposes, whichever is covered is false, whichever is not covered is true. Next, we can transmit signals simply through straight can curved paths. If we have x*****y We will again have exactly two dominoes and result in either x or y being covered, but not both. ***y**x Will have exactly the same behavior. So we can use this to create long and curving paths in lengths which are increments of 3. However, not all lengths we might want to use are increments of 3, so we need an additional gadget to move a different distance. I call this the fiddler gadget and it's only purpose is to move the signal slightly uneven distances to make things connect successfully. Its input comes from x and output goes to y and it merely transmits the same signal along. It looks like this: ***y ***x It always contains exactly two dominoes and is filled in one of the following two ways: BBB* ABBB * A AAA *AX If we're going to model 3-SAT, however, we need more than this. Specifically, we need some way to model the clauses. To do this, we have a gadget where one extra domino can be packed in if the clause is true. 
The clause will be true when one or more of its inputs is true. In this case, that means that we can pack one extra domino in when at least one of the inputs does not protrude. It will look like this: *x*y* * z If we add an extra path to each for clarity, then it looks like this: * * * * * ****** * **** If x,y, and z are all false, then they'll all have protrusions and it will be filled like this: A B C D C D*C*D* * EEEF Where the rest of dominoes A,B, and F continue on down a path somewhere. If at least one of inputs is true, then we can pack in one extra domino (G) like so: C B A D A B C D C D C D C D or C D or C DGGGD* *CGGG *CGD* * * G EEEF EEEF GEEE However, even if all inputs are true, then we cannot pack in more than one domino. That scenario would look like this: C D C D C D***** * *EEE And as you can see, we can only insert exactly one extra domino into the empty space, not two. Now, if terms were never repeated, then we'd be done (or very nearly done). However, they can be repeated, so next, we need a signal splitter so that one variable can appear inmultiple terms. To do this, we utilize the following gadget: y*** ***z * * *** *** x In this gadget x is the input and y and z are the outputs. In this gadget, we can always pack 5 dominoes. If x protrudes than packing 5 dominoes will always require covering y and z as well. If x does not protrude, then covering y and z is not required. The packing where x does not protrude looks like this: yAAA BBBz C D CED CED E When x does protrude (we use X to indicate the end of the domino protruding into space x), the maximal packing necessarily covers both y and z: AAAC DBBB C D C*D EEE X I will take a moment to note that it would be possible to pack this with five dominoes when x is not protruding in such a way that either y or z protrude. However, doing so would result in terms which could be true (not protruding) becoming false (protruding). Allowing some of the terms (not variables, but actual terms in the clauses) to differ in value only by becoming false unnecessarily will never result in being able to satisfy an otherwise unsatisfiable expression. If our 3-SAT expression was (x | y | z) & (!x | y | !z) then allowing both x and !x to be false would only make things harder. If we were to allow both ends of something to be true, this would result in incorrect solutions, but we do not do this in this scheme. To frame it in terms of our specific problem, protruding unnecessarily will never result in more dominoes being able to be packed in later down the line. With paths and these three gadgets, we can now solve planar 3-SAT, which would be the sub-problem of 3-SAT where if we draw a graph where the terms and clauses are vertices and there is an edge between every term and every clause which contains that term, that the graph is planar. I believe that planar 3-SAT is probably NP-hard because planar 1-in-3-SAT is, but in case it's not, we can use gadgets to do a signal crossing. But it's really quite complex (if anyone sees a simpler way, please let me know) so first I'm going to do an example of solving planar 3-SAT with this system. So, a simple planar 3-SAT problem would be (x | y | z) & (!x | y | !z). Obviously, this is satisfiable, using any assignment where y is true or several other assignments. 
We will build our dominoes problem thus: ******* * * * * **** *** * **** **** * * * * * ******* * * * * * * * * * *z*x* ***** * * **** **** * * *** *** * * * y Notice that we had to use fiddlers at the top to get things to space correctly or else this would've been substantially less complex. Adding up the total dominoes from gadgets and paths we have 1 splitter (5 dominoes), two fiddlers (2 dominoes each), and a total of 13 regular paths, for a grand total of 5 + 2*2 + 13 = 22 dominoes guaranteed, even if the clauses cannot be satisfied. If they can be, then we will have 2 more dominoes which can be filled in for a total of 24. One optimal packing with 24 dominoes is as follows: QRRRSSS Q T Q T OPPP *UT O U*ON UVVV N W N W M IIIJJJK W M H K X M H K X *zGH* LLLX* G * GEEE FFF* B D BCD BCD C A A A This tiling contains 24 dominoes, so we can know that the original expression is satisfiable. In this case, the tiling corresponds to make y and x true and z false. Notice that this is not the only tiling (and not the only satisfying assignment of boolean values), but that there is no other tiling which will increase the number of tiles beyond 24, so it is a maximum tiling. (If you don't want to count all the dominoes you can note that I used every letter except for Y and Z.) If the maximal tiling had contained either 22 or 23 dominoes, then we would know that one of the clauses was not satisfied (GGG and/or LLL dominoes would not be able to be placed) and hence we would know that the original expression was not satisfiable. In order to be certain that we can do this even if planar 3-SAT isn't NP-hard, we can build a gadget which allows paths to cross. This gadget is unfortunately kind of big and complex, but it's the smallest one I was able to figure out. I'll first describe the pieces and then the whole gadget. Piece 1: Crossover point. x and y are the inputs. a,b,and c are the outputs. They will need to be combined using other gadgets to actually relay x and y to the opposite side of each other. ***c * *** * * * * * * *** *** ax*yb This gadget will always fit exactly 7 dominoes. There are four possible input combinations. If neither input protrudes (both are true) than no output will protrude and it will be filled as in (tt1) or (tt2) below. If only input x protrudes then only c will protrude as in (ft) below. If only input y protrudes then either output a or c will protrude as in (tf) below. And if input x and y both protrude then output c protrudes as in (ff) below. (tt) AAAc (ft) AAAc (tf) AAAc (ff) BAAA * * * B BBB BBB BBB CBD C D C D C D C D C D C D C D C D C D C D C D E G EEE EEE EEE EFG FFF FFF FFF EFG aGGGb aXGGG GGGYb aXFYb I have not included the possibility that in the (ft) or (tf) scenarios that c could be covered instead of a or b. This is possible within the scope of this gadget but once combined with other gadgets to form the complete crossover, if it were to do so, it would never result in a larger number of clauses being satisfied so we can exclude it. With that in mind, we can then observe that in this case the value of the input x is equal to the value of b & c and the input y is equal to the value of a & c (note that this would be logical or rather than logical and if protrusion were considered true rather than false). So we just need to split c and then use a logical and gadget to connect connect the values of c with a and b respectively and we will then have successfully completed our cross over. 
The logical and is our simplest gadget so far and it looks like this: **** * x*y You might actually note that there's one embedded towards the top of the crossover point gadget. This gadget will always contain precisely 2 dominoes. One will be at the top to serve as the output. The other one serves as a switch which will be horizontally oriented only if both x and y are true (non-protruding) and vertically oriented otherwise as we can see in the following diagrams: BBB* ABBB ABBB ABBB * A A A AAA XAy xAY XAY Thus we can complete the crossover by splitting c and then adding two of these gates, one for a & c and one for b & c. Putting it all together requires also adding some fiddler gadgets and looks like this: ******* **** * * * * * *** * *** *** *** * * * **** * **** * * * * **** * *** * *** * *** * **** * * **** y * * * * x * * * * * * * **** *** **** * *** *** *** **********x*y************* I'm not going to fill in example tilings for that. You'll have to do it yourself if you want to see it in action. So, hooray! We can now do arbitrary 3-SAT. I should take a moment to note that doing this will be a polynomial time transformation because even in the worst case, we can just make a big grid with all of the variables and their opposites along the top and all the terms on the side and do O(n^2) crossovers. So there is a simple, polynomial-time algorithm for laying this all out and the maximum size of the transformed problem is polynomial in the size of the input problem. QED. Edit note:Following Tom Sirgedas's excellent work in finding a mistake in the splitter gadget, I've made some changes to the answer. Essentially, my old splitter looked like this and could be packed with 6 when x does not protrude (rather than the 5 I had intended) like this: y*** ***z AAAC DBBB * * C D *** C*D *** EEE *x* FFF So I revised it by removing the two spaces on either side of x. This eliminates the six domino packing while still allowing a 5-domino packing in which y and z are uncovered when x is uncovered. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7342389', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/501557/']} | jdg_81212 |
stackexchange | llm_judgeable_groundtruth_similarity | 154046 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am struggling with the semantics of continuous random variables. For example, we do maximum likelihood estimation, in which we try to find the parameter $\theta$ which, for some observed data $D$, maximizes the likelihood $P(\theta|D)$. But my understanding of this is $$P(\theta = x) = P(x\leq\theta\leq x) = \int_x^xp(t)dt = 0$$ so I am not sure how any $\theta$ can result in a non-zero probability. Intuitively I understand what it means to find the "most probable" $\theta$, but I am having trouble uniting it with the formal definition. EDIT: In my class we defined $L(\theta:D)=P(D|\theta)=\prod_i P(D_i|\theta)$ (assuming i.i.d, where $D_i$ are the observations). Then we want to find $\text{argmax}_\theta \prod_i P(D_i|\theta)$. I was incorrect above about finding $P(\theta)$, but it seems to me we're still trying to find the maximal probability, where all probabilities are zero. Some answerers suggested that we're actually trying to find the max probability density but I don't understand why this is true.
Now provide the response and nothing else.
| It seems to me like whoever defined it for you was being hand-wavy (and, I would argue, careless). For continuous random variables, the likelihood is defined to be the joint density of the data $\mathcal D$ when taken as a function of the unknown parameter $\theta$, i.e. for a vector-valued observation $\mathbf x$ we have $$L(\theta | \mathbf x) := f(\mathbf x | \theta) $$ where $f$ is the density of a random vector $\mathbf X$. For discrete random vectors, instead take $f$ to be a probability mass function. This is the definition of likelihood that you should take. Then, to get the MLE, take an $\arg \max$ over $\theta$. To get a definition that is less arbitrary in the distinction between continuous and discrete rvs, one needs to introduce measure theory and the notion of the Radon-Nikodym derivative, in which case we can generalize the notion of density so that mass functions are a type of density, and the arbitrariness vanishes. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/154046', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2549/']} | jdg_81213
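To make the density-versus-probability point concrete, here is the textbook normal-location example; the unit variance is an assumption made only to keep the algebra short. For an i.i.d. sample $x_1,\dots,x_n$ from $N(\theta,1)$, $$L(\theta \mid x_1,\dots,x_n) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\tfrac{1}{2}(x_i-\theta)^2\right), \qquad \ell(\theta) = -\tfrac{1}{2}\sum_{i=1}^{n}(x_i-\theta)^2 + \text{const},$$ and setting $\ell'(\theta) = \sum_i (x_i-\theta) = 0$ gives $\hat\theta_{\mathrm{MLE}} = \bar x$. Every event $\{X_i = x_i\}$ has probability zero, yet the joint density $f(\mathbf x \mid \theta)$ is a perfectly good objective to maximize, which is exactly the distinction drawn in the answer above.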
stackexchange | llm_judgeable_groundtruth_similarity | 4562326 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having an argument with a colleague about $i$, the square root of $-1$. My argument goes as follows: $$\begin{align*}i &= \sqrt{-1}\\i^2 &= -1\\\frac{i^2}{i} &= -\frac{1}{i}\\i &= -\frac{1}{i}\end{align*}$$ and then through substitution, since both sides are defined and not variables, $$\mathrm{i} = \frac{-1}{\frac{-1}{\frac{-1}{\frac{-1}{\dots}}}}$$ Is this still equal to $\sqrt{-1}$? I argue that it is, although this would mean that there is some real correlation to imaginary numbers.
Now provide the response and nothing else.
| It is true that $i = \dfrac{-1}{i}$ It is also true that $i = \dfrac{~-1~~}{~\left(\frac{-1}{i}\right)~}$ Further, it is also true that the sequence $a_1 = i$ and $a_{n+1}=-1/a_{n}$ will equal $i$ for all values of $n$ and $\lim\limits_{n\to\infty} a_n = i$ as it is simply a constant sequence. That is to say... if you imagine your tower of division " from the bottom up " then yes, it will equal $i$ as you suggest. That is... if you prefer $(\dots/(-1/(-1/(-1/i))))$ The problem is that this is not how $-1/(-1/(-1/(-1/(\cdots))))$ is generally interpreted. The way you have written it makes it sound as though we are considering this fraction as being defined " from the top down. " Any time we are dealing with an infinite number of operations or the like, we must be very clear how we are defining what it is we are looking at and this is generally done by way of sequences and limits of sequences. If the limit of the sequence converges to a number we can say the expression "equals" that number. The way you have this written now, it makes it sound as though we are considering $b_1 = -1$ and $b_{n+1}=b_n/(-1)$ ... that each time we consider " more " of the tower of divisions, we are doing so by taking what we already have as the numerator and then dividing with something additional as the denominator. In such an interpretation of what you have written, this sequence does not converge. Instead, it will alternate between the two values of $-1$ and $1$ . Since this does not converge, that makes the expression you wrote undefined. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4562326', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1112104/']} | jdg_81214 |
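Writing out the first few terms of the two sequences described in the answer above makes the contrast explicit: $$a_1 = i,\quad a_{n+1} = -\frac{1}{a_n} \;\Longrightarrow\; a_2 = -\frac{1}{i} = i,\; a_3 = i,\;\dots \quad \text{(constant, so } \lim_{n\to\infty} a_n = i\text{)},$$ $$b_1 = -1,\quad b_{n+1} = \frac{b_n}{-1} \;\Longrightarrow\; b_2 = 1,\; b_3 = -1,\; b_4 = 1,\;\dots \quad \text{(oscillates, so no limit exists)}.$$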
stackexchange | llm_judgeable_groundtruth_similarity | 473241 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a script and I need to build the tar command dynamically. Here are two examples to illustrate what I'm trying to do : #!/bin/bashTAR_ME="/tmp"EXCLUDE=("/tmp/hello hello" "/tmp/systemd*" "/tmp/Temp*")_tar="tar "`printf -- '--exclude="%s" ' "${EXCLUDE[@]}"`" -zcf tmp.tar.gz"echo COMMAND: "${_tar}"${_tar} "$TAR_ME"echo -e "\n\nNEXT:\n\n"EXCLUDE=("--exclude=/tmp/hello\ hello" "--exclude=/tmp/systemd*" "--exclude=/tmp/Temp*")_tar="tar "`printf -- '%s ' "${EXCLUDE[@]}"`" -zcf test.tar.gz"echo COMMAND: "${_tar}"${_tar} "$TAR_ME" I want to be able to use _tar as a command, I've been able to make it work with classic path, but I need it to work with spaces in folders' name.And every single time I got errors that look like : COMMAND: tar --exclude="/tmp/hello hello" --exclude="/tmp/systemd*" --exclude="/tmp/Temp*" -zcf tmp.tar.gz /tmptar: hello": Cannot stat: No such file or directoryCOMMAND: tar --exclude=/tmp/hello\ hello --exclude=/tmp/systemd* --exclude=/tmp/Temp* -zcf test.tar.gz tar: hello: Cannot stat: No such file or directory Just one thing you need to know, I need my script to work on very old machines, meaning I can't use last bash features.
Now provide the response and nothing else.
| Don't try to make an executable string. Instead build the arguments in an array and use that when calling tar (you are already using an array properly for EXCLUDE ): #!/bin/bashdirectory=/tmpexclude=( "hello hello" "systemd*" "Temp*" )# Now build the list of "--exclude" options from the "exclude" array:for elem in "${exclude[@]}"; do exclude_opts+=( --exclude="$directory/$elem" )done# Run tartar -cz -f tmp.tar.gz "${exclude_opts[@]}" "$directory" With /bin/sh : #!/bin/shdirectory=/tmpset -- "hello hello" "systemd*" "Temp*"# Now build the list of "--exclude" options from the "$@" list# (overwriting the values in $@ while doing so):for elem do set -- "$@" --exclude="$directory/$elem" shiftdone# Run tartar -cz -f tmp.tar.gz "$@" "$directory" Note the quoting of $@ in the sh code and of both ${exclude[@]} and ${exclude_opts[@]} in the bash code. This ensures that the lists are expanded to individually quoted elements. Related: How can we run a command stored in a variable? | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/473241', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/268346/']} | jdg_81215 |
stackexchange | llm_judgeable_groundtruth_similarity | 3290293 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I ran into this issue during my homework. Using the rules of logarithms, I need to prove that $$-2\ln\bigg(\frac{2}{\sqrt{6}}\bigg)=\ln3-\ln2$$ So here were my steps: First step: $$-2\ln\bigg(\frac{2}{\sqrt{6}}\bigg)=\ln\left(\bigg(\frac{2}{\sqrt{6}}\bigg)^{-2}\right)$$ And that's as far as I got, because now I want to use the form $\ln(a/b) = \ln(a) - \ln(b)$ , but first I need to reduce the fraction because it is raised to the $-2$ . How do I evaluate $\bigg(\frac{2}{\sqrt{6}}\bigg)^{-2}$ ? Thanks
Now provide the response and nothing else.
| By definition $$a^{-k} = \frac 1{a^k}$$ So $$\left(\frac{2}{\sqrt{6}}\right)^{-2} =\frac 1{\left(\frac{2}{\sqrt{6}}\right)^{2}}=$$ $$\frac 1{\left(\frac {2^2}{\sqrt 6^2}\right)}=\frac {\sqrt 6^2}{2^2}=\frac 64=\frac 32$$ It will help to realize that $(\frac ab)^{-1} = 1/(a/b) = \frac ba$ and that $(\frac ab)^k = \frac {a^k}{b^k}$ to realize that that means $$\left(\frac ab\right)^{-k} = \frac 1{\left(\frac ab\right)^k}= \frac 1{\left(\frac {a^k}{b^k}\right)} = \frac {b^k}{a^k}.$$ (Also $(\frac ab)^{-k} = [(\frac ab)^{-1}]^k = (\frac ba)^k=\frac {b^k}{a^k}$ or that $(\frac ab)^{-k} = \frac {a^{-k}}{b^{-k}} = (1/a^k)/(1/b^k) = \frac {b^k}{a^k}$ .) In any event $$\left(\frac {2}{\sqrt 6}\right)^{-2} = \left(\frac {\sqrt 6}{ 2}\right)^2 = \frac {\sqrt 6^2}{2^2} = \frac 64 = \frac 32.$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3290293', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/688286/']} | jdg_81216 |
stackexchange | llm_judgeable_groundtruth_similarity | 15422838 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an error when trying to work with @JsonIdentityInfo jackson annotation. When I try to deserialize the object I get the following exception: Could not read JSON: Already had POJO for id (java.lang.Integer) [1] (through reference chain: eu.cobiz.web.domain.Site["operators"]->eu.yavix.web.domain.Account["image"]->eu.cobiz.web.domain.Image["@Image"]);nested exception is com.fasterxml.jackson.databind.JsonMappingException : Already had POJO for id (java.lang.Integer) [1] (through reference chain: eu.yavix.web.domain.Site["operators"]->eu.cobiz.web.domain.Account["image"]->eu.cobiz.web.domain.Image["@Image"]) The JSON I am trying to deserialize looks like: {"@Site": 1,"siteId": 1,"name": "0","address": { "@Address": 2, "addressId": 4, "number": "22"},"operators": [ { "accountId": 1, "email": "[email protected]", "image": { "@Image": 1, "imageId": 1, "uri": "http://icons.iconarchive.com/icons/deleket/purple-monsters/128/Alien-awake-icon.png" } }, { "accountId": 2, "email": "[email protected]", "image": { "@Image": 2, "imageId": 2, "uri": "http://icons.iconarchive.com/icons/deleket/purple-monsters/128/Alien-awake-icon.png" } }]} My domain object is annotated with @JsonIdentityInfo(generator = ObjectIdGenerators.IntSequenceGenerator.class, property = "@Image") The problem arises on @Id annotation since if I remove the annotation the problem disappears (as I did for account) but on my understanding the new feature is useful for cyclic dependencies which is useful for me in other scenarios. There shouldn't be a conflict between the 2 images since they are different objects. How can I solve this or what is the problem?
Now provide the response and nothing else.
| You should use the scope parameter when annotating the ids. The deserializer will then make sure the id is unique within the scope. From Annotation Type JsonIdentityInfo : Scope is used to define applicability of an Object Id: all ids must be unique within their scope; where scope is defined as combination of this value and generator type. e.g. @JsonIdentityInfo(generator=ObjectIdGenerators.IntSequenceGenerator.class,property="@id", scope = Account.class) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15422838', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1413837/']} | jdg_81217
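Applied to the domain classes named in the question, the scoped annotations would look roughly like the sketch below; the field lists are abridged and the class bodies are only illustrative, the point being that each class gets its own scope so an Image with id 1 and a Site with id 1 no longer collide.

import com.fasterxml.jackson.annotation.JsonIdentityInfo;
import com.fasterxml.jackson.annotation.ObjectIdGenerators;

// Ids generated here only need to be unique among Image instances.
@JsonIdentityInfo(generator = ObjectIdGenerators.IntSequenceGenerator.class,
        property = "@Image", scope = Image.class)
class Image {
    public int imageId;
    public String uri;
}

// A second class may reuse the same numeric ids without clashing,
// because its ids live in the Site.class scope.
@JsonIdentityInfo(generator = ObjectIdGenerators.IntSequenceGenerator.class,
        property = "@Site", scope = Site.class)
class Site {
    public int siteId;
    public String name;
}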
stackexchange | llm_judgeable_groundtruth_similarity | 253985 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to make some differential measurements on a device, but both nodes need to be floating from earth. I found out the hard way that the negative clips of my oscilloscope voltage probes are connected to the mains earth conductor, but I don't have access to a differential probe. I was wondering if there is an easy solution to this, and I figured this: inside the scope I have seen that the mains are only connected to an old-style power transformer, while the earth lead is screwed to the metal chassis. Since of course the transformer output is isolated from its input, I think that if I just remove the earth screw from the chassis and isolate the cable I can now connect my device to the scope, avoiding the pesky ground current that destroyed my first board. The negative terminal I would hook to the probe clip is about 60V above earth, and the maximum current it can source is just above 30mA. The input transformer can surely withstand a 60V offset, right? I think that this is safe because the shell of my scope is all made of plastic; the only metallic parts are the BNC connectors and a couple of blade connectors that I use to calibrate my probes, but they should all be safe. Is it ok to remove the earth connector from my scope to make such a measurement? I have seen this suggestion all over the web, so I think it must be safe, but I want to ask a board of professionals just to be extra sure.
Now provide the response and nothing else.
| The best answer is to isolate the equipment you are trying to measure. When it is isolated, you are free to ground any one point, which will be done through the scope probe ground lead. That keeps the scope, its chassis, and anything it may be connected to or touching from becoming live. If isolating the circuit you are trying to measure is not possible, then a portable battery-operated scope is the next best option. These are intended in part for this sort of thing, and are usually well insulated, unlike scopes intended to sit on a bench. If you are really, really careful and can ensure that everyone else around you, or anyone that might enter the room, will be really careful, then you can put the isolation transformer on the scope. Before you do that, very carefully make sure the scope isn't touching anything conductive, and that there are no connections from it to anything else, like a USB connection to a PC or something. This way you get what you suggest without modifying the scope. I just can't recommend what you suggest. You may think you found the only connection to ground, but the scope wasn't designed to be modified like that, so you don't really know without testing. A Hi-pot test would tell you, but may also blow up the scope in the process. Do it right. Use an isolation transformer, preferably on the equipment under test, not on the scope. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/253985', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/16993/']} | jdg_81218
stackexchange | llm_judgeable_groundtruth_similarity | 42440061 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hey there I am searching for a function which is printing a dynamic variable as completely as possible to the console in Dart language. In PHP for instance I would use var_dump() in order to get all information about a variable. In JavaScript I would do one of the following: 1) Convert Object to JSON and print console.log(JSON.stringify(obj)) 2) Or a custom function like this: function dump(arr,level) { var dumped_text = ""; if(!level) level = 0; //The padding given at the beginning of the line. var level_padding = ""; for(var j=0;j<level+1;j++) level_padding += " "; if(typeof(arr) == 'object') { //Array/Hashes/Objects for(var item in arr) { var value = arr[item]; if(typeof(value) == 'object') { //If it is an array, dumped_text += level_padding + "'" + item + "' ...\n"; dumped_text += dump(value,level+1); } else { dumped_text += level_padding + "'" + item + "' => \"" + value + "\"\n"; } } } else { //Stings/Chars/Numbers etc. dumped_text = "===>"+arr+"<===("+typeof(arr)+")"; } return dumped_text; } However in Dart if I do print(variable) I get something like Instance of 'FooBarObject' . I cannot just convert an object to JSON like in JavaScript as this is not possible for all objects. So my question in detail: Is where a function or custom function in dart which can print a variable with unknown type (object, array, etc.) including all (public) properties as well as nested objects and variables to my console? Or which function is closest to this desire?
Now provide the response and nothing else.
| There is no built-in function that generates such an output. print(variable) prints variable.toString() and Instance of 'FooBarObject' is the default implementation. You can override it in custom classes and print something different. You can use reflection ( https://www.dartlang.org/articles/libraries/reflection-with-mirrors ) to build a function yourself that investigates all kinds of properties of an instance and prints it the way you want. There is almost no limit to what you can do, and for debugging purposes it's definitely a fine option. For production web applications it should be avoided because it seriously limits tree-shaking and will cause the build output size to increase notably. Flutter (mobile) doesn't support reflection at all. You can also use one of the JSON serialization packages that make it easy to add serialization to custom classes and then print the serialized value. For example https://pub.dartlang.org/packages/dson I think there are others, but I don't know about their (dis)advantages, because I usually roll my own using https://pub.dartlang.org/packages/source_gen | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42440061', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/576746/']} | jdg_81219
stackexchange | llm_judgeable_groundtruth_similarity | 55275025 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to Angular. Actually, I'm trying to subscribe data from a service and that data, I'm passing to form control of mine from (example, it's like an edit form). import { Component, OnInit } from '@angular/core';import { FormBuilder, FormGroup, Validators, FormArray, FormControl } from '@angular/forms';import { ActivatedRoute, Router } from '@angular/router';import { QuestionService } from '../shared/question.service';@Component({ selector: 'app-update-que', templateUrl: './update-que.component.html', styleUrls: ['./update-que.component.scss']})export class UpdateQueComponent implements OnInit { questionsTypes = ['Text Type', 'Multiple choice', 'Single Select']; selectedQuestionType: string = ""; question: any = {}; constructor(private route: ActivatedRoute, private router: Router, private qService: QuestionService, private fb: FormBuilder) { } ngOnInit() { this.getQuebyid(); } getQuebyid(){ this.route.params.subscribe(params => { this.qService.editQue([params['id']]).subscribe(res =>{ this.question = res; }); }); } editqueForm = this.fb.group({ user: [''], questioning: ['', Validators.required], questionType: ['', Validators.required], options: new FormArray([]) }) setValue(){ this.editqueForm.setValue({user: this.question.user, questioning: this.question.questioning}) }} If I use [(ngModule)] on my form field to set the value to my element it is working fine and showing a warning it'll be deprecated in Angular 7 version. <textarea formControlName="questioning" [(ngModule)]="question.questioning" cols="70" rows="4"></textarea> So, I set the values to my form control by doing below but the element is not showing those values. setValue(){ this.editqueForm.setValue({user: this.question.user, questioning: this.question.questioning})} Can anyone tell me how to set values to my reactive form?
Now provide the response and nothing else.
| Setting or Updating of Reactive Forms Form Control values can be done using both patchValue and setValue. However, it might be better to use patchValue in some instances. patchValue does not require all controls to be specified within the parameters in order to update/set the value of your Form Controls. On the other hand, setValue requires all Form Control values to be filled in, and it will return an error if any of your controls are not specified within the parameter. In this scenario, we will want to use patchValue, since we are only updating user and questioning : this.qService.editQue([params["id"]]).subscribe(res => { this.question = res; this.editqueForm.patchValue({ user: this.question.user, questioning: this.question.questioning });}); EDIT: If you feel like doing some of ES6's Object Destructuring , you may be interested to do this instead const { user, questioning } = this.question;this.editqueForm.patchValue({ user, questioning}); Ta-dah! | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/55275025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5122147/']} | jdg_81220 |
stackexchange | llm_judgeable_groundtruth_similarity | 19779097 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an ng-switch with different contents; one of them is a gallery, and I have to set a jQuery event on all images after the ng-switch-when=gallery content is ready, but no luck, because it's firing the event before the content is loaded. The gallery is the third ng-switch-when, so I can't set the events with an ng-init. So, does ng-switch have a content-loaded event or directive? <a href="" ng-click="menuSelection='gallery';loadLibraryGallery();">GALERÍA</a> Controller $scope.loadLibraryGallery = function(){//I have to set this event to children after ng-switch-when=gallery is loaded $('.popup-gallery').magnificPopup(); }; Any ideas, please? Thank you.
Now provide the response and nothing else.
| You can view the ngSwitch directive's source on GitHub. It never calls any $broadcast or $emit functions, so no, it doesn't have such an event. You could try creating your own directive, which is really simple. This directive is going to be linked after the HTML code has been inserted. Let's say this is your code: html: <ANY ng-switch="expression"> <ANY ng-switch-when="matchValue1" data-gallery>...</ANY> <ANY ng-switch-when="matchValue2">...</ANY> <ANY ng-switch-default>...</ANY></ANY> js: angular.module('app').directive('gallery', function(){ return function ($scope, $element, attrs) { $element.find('.popup-gallery').magnificPopup(); }}); Remember that you should never modify the DOM, or call code that modifies the DOM, from the controller. That is the directive's job. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19779097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1621139/']} | jdg_81221
stackexchange | llm_judgeable_groundtruth_similarity | 17862185 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
[Disclaimer: there may be more pythonic ways of doing what I want to do, but I want to know how python's scoping works here] I'm trying to find a way to make a decorator that does something like injecting a name into the scope of another function (such that the name does not leak outside the decorator's scope). For example, if I have a function that says to print a variable named var that has not been defined, I would like to define it within a decorator where it is called. Here is an example that breaks: c = 'Message'def decorator_factory(value): def msg_decorator(f): def inner_dec(*args, **kwargs): var = value res = f(*args, **kwargs) return res return inner_dec return msg_decorator@decorator_factory(c)def msg_printer(): print varmsg_printer() I would like it to print " Message ", but it gives: NameError: global name 'var' is not defined The traceback even points to wher var is defined: <ipython-input-25-34b84bee70dc> in inner_dec(*args, **kwargs) 8 def inner_dec(*args, **kwargs): 9 var = value---> 10 res = f(*args, **kwargs) 11 return res 12 return inner_dec So I don't understand why it can't find var . Is there any way to do something like this?
Now provide the response and nothing else.
| You can't. Scoped names (closures) are determined at compile time, you cannot add more at runtime. The best you can hope to achieve is to add global names, using the function's own global namespace: def decorator_factory(value): def msg_decorator(f): def inner_dec(*args, **kwargs): g = f.__globals__ # use f.func_globals for py < 2.6 sentinel = object() oldvalue = g.get('var', sentinel) g['var'] = value try: res = f(*args, **kwargs) finally: if oldvalue is sentinel: del g['var'] else: g['var'] = oldvalue return res return inner_dec return msg_decorator f.__globals__ is the global namespace for the wrapped function, so this works even if the decorator lives in a different module. If var was defined as a global already, it is replaced with the new value, and after calling the function, the globals are restored. This works because any name in a function that is not assigned to, and is not found in a surrounding scope, is marked as a global instead. Demo: >>> c = 'Message'>>> @decorator_factory(c)... def msg_printer():... print var... >>> msg_printer()Message>>> 'var' in globals()False But instead of decorating, I could just as well have defined var in the global scope directly . Note that altering the globals is not thread safe, and any transient calls to other functions in the same module will also still see this same global. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17862185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/386279/']} | jdg_81222 |
stackexchange | llm_judgeable_groundtruth_similarity | 4339220 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that I can map a UNC path to a local drive letter. However, I am wondering if there is a way to map a UNC path to a local folder. I have a program that has a specific folder hard coded into the program and I am wanting to try and create a folder with the same name that is mapped to a UNC path so that the data can be accessed from a network share. Is this doable? Specifically this is on a Windows 2003 server.
Now provide the response and nothing else.
| Yes, there is a way to map a UNC path to a local folder: C:\>mklink /D Develop \\obsidian\Develop symbolic link created for Develop <<===>> \\obsidian\Develop This is because I want a build server to use my own PC's Develop folder as its Develop folder: 10/20/2012 11:01 AM <SYMLINKD> Develop [\\obsidian\Develop] And there you have it. MKLINK [[/D] | [/H] | [/J]] Link Target /D Creates a directory symbolic link. Default is a file symbolic link. /H Creates a hard link instead of a symbolic link. /J Creates a Directory Junction. Link specifies the new symbolic link name. Target specifies the path (relative or absolute) that the new link refers to. Note: In my actual situation I needed another level of redirection, because the program I'm using realized that Develop was a symbolic link, pointing to a remote machine, and refused to comply. I told the program to shut up and do what it's told by giving it a junction that points to a local resource. 10/20/2012 11:06 AM <JUNCTION> Develop [C:\Develop2\] 10/20/2012 11:01 AM <SYMLINKD> Develop2 [\\obsidian\Develop] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4339220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58365/']} | jdg_81223
stackexchange | llm_judgeable_groundtruth_similarity | 2232089 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some XML <Users> <User Name="Z"/> <User Name="D"/> <User Name="A"/></Users> I want to sort that by Name. I load that XML using XDocument. How can I view that XML sorted by Name?
Now provide the response and nothing else.
| You can sort using LINQ to XML, if you are not tied to XmlDocument: XDocument input = XDocument.Load(@"input.xml");XDocument output = new XDocument( new XElement("Users", from node in input.Root.Elements() orderby node.Attribute("Name").Value descending select node)); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2232089', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/117899/']} | jdg_81224
stackexchange | llm_judgeable_groundtruth_similarity | 4284 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for an intuitive explanation of the bias-variance tradeoff, both in general and specifically in the context of linear regression.
Now provide the response and nothing else.
| Imagine some 2D data--let's say height versus weight for students at a high school--plotted on a pair of axes. Now suppose you fit a straight line through it. This line, which of course represents a set of predicted values, has zero statistical variance. But the bias is (probably) high--i.e., it doesn't fit the data very well. Next, suppose you model the data with a high-degree polynomial spline. You're not satisfied with the fit, so you increase the polynomial degree until the fit improves (and it will, to arbitrary precision, in fact). Now you have a situation with bias that tends to zero, but the variance is very high. Note that the bias-variance trade-off doesn't describe a proportional relationship--i.e., if you plot bias versus variance you won't necessarily see a straight line through the origin with slope -1. In the polynomial spline example above, reducing the degree almost certainly increases the variance much less than it decreases the bias. The bias-variance tradeoff is also embedded in the sum-of-squares error function. Below, I have rewritten (but not altered) the usual form of this equation to emphasize this: $$E\left(\left(y - \dot{f}(x)\right)^2\right) = \sigma^2 + \left[f(x) - \frac{1}{\kappa}\sum_{i=0}^nf(x_n)\right]^2+\frac{\sigma^2}{\kappa}$$ On the right-hand side, there are three terms: the first of these is just the irreducible error (the variance in the data itself); this is beyond our control so ignore it. The second term is the square of the bias ; and the third is the variance . It's easy to see that as one goes up the other goes down--they can't both vary together in the same direction. Put another way, you can think of least-squares regression as (implicitly) finding the optimal combination of bias and variance from among candidate models. | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/4284', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/439/']} | jdg_81225 |
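To make the trade-off concrete, here is a small NumPy-only simulation (not part of the original answer; the sine target, noise level, and polynomial degrees are arbitrary choices). It refits polynomials of several degrees on many noisy resamples and measures the squared bias and the variance of the resulting predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(2 * np.pi * x)   # the "true" regression function
x_train = np.linspace(0, 1, 30)
x_test = np.linspace(0, 1, 200)
noise_sd = 0.3

def predictions(degree, n_datasets=500):
    """Fit a polynomial of `degree` on many noisy training sets; return
    the test-point predictions, one row per training set."""
    preds = np.empty((n_datasets, x_test.size))
    for i in range(n_datasets):
        y = true_f(x_train) + rng.normal(0.0, noise_sd, x_train.size)
        coeffs = np.polyfit(x_train, y, degree)
        preds[i] = np.polyval(coeffs, x_test)
    return preds

for degree in (1, 3, 9):
    p = predictions(degree)
    bias_sq = np.mean((p.mean(axis=0) - true_f(x_test)) ** 2)
    variance = np.mean(p.var(axis=0))
    print(f"degree {degree}: bias^2 ~ {bias_sq:.3f}, variance ~ {variance:.3f}")
```

The straight line (degree 1) should show large squared bias and tiny variance, the degree-9 fit the reverse, mirroring the two extremes described above.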
stackexchange | llm_judgeable_groundtruth_similarity | 12423 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Successfully delivering email to my clients is a pretty high priority at my company. We are of course fully opt-in, paid clients, no possibility of spam, the whole thing. The information we email is specifically requested from us by the recipient. We have done some of the easy stuff like setting up SPF records on our outgoing mail servers. What other steps can we take to ensure that our email goes through, in terms of email server setup and configuration ( similar to SPF, etc ) Alternatively, are there 3rd party email services that we can pay for that already manage these issues?
Now provide the response and nothing else.
| You'll probably want to look into DKIM as well: http://www.dkim.org/ Yesmail is a mail marketing company that takes care of a lot of these issues. Other things that you can do are Run your emails through Spamassassin before sending them, to make sure they don't score too high. Avoid HTML Avoid URLs Make sure your users know to add your outbound email address to the address book or whitelist, and then stick with that address. Related to the above, try to have just one outbound email address that the messages come from. Make sure your mailserver's IP address resolves back to the domain name it claims when talking to other mail servers. Use TLS if possible. All of the above are basically things that spam filters look for when evaluating a message, so your ultimate goal is to avoid as many triggers as possible. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/12423', 'https://serverfault.com', 'https://serverfault.com/users/1030/']} | jdg_81226 |
stackexchange | llm_judgeable_groundtruth_similarity | 39589636 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a new game project at the moment that will consist of a React Native front-end and a Lambda-based back-end. The app requires some real time features such as active user records, geofencing, etc. I was looking at Firebase's Realtime Database that looks like a really elegant solution for real-time data sync but I don't think AWS has anything quite like it. The 3 options I could think of for "serverless" realtime using only AWS services are: Option 1: AWS IoT Messaging over WebSockets This one is quite obvious, a managed WebSockets connection through the IoT SDK. I was thinking of triggering Lambdas in response to inbound and outbound events and just use WebSockets as the realtime layer, building custom handling logic on the app client as you typically would. The downside to this, at least compared to Firebase, is that I will have to handle the data in the events myself which will add another layer of management on top of WebSockets and will have to be standardized with the API data layer in the application's stores. Pros: Scalable bi-directional realtime connection Cons: Only works when the app is open Message structure needs to be implemented Multiple transport layers to be managed Option 2: Push-triggered re-fetch Another option is to use push notifications as real-time triggers but use a regular HTTP request to API Gateway to actually get the updated payload. I like this approach because it sticks to only one transport layer and a single source of truth for application state. It will also trigger updates when the app is not open since these are Push Notifications. The downside is that this is a lot of custom work with potentially difficult mappings between push notifications to the data that needs to be fetched. Pros: Push notifications work even when app is closed Single source of truth, transport layer Cons: Most custom solution Will involve many more HTTP requests overall Option 3: Cognito Sync This is newer to me and I'm not sure if it can actually be interfaced with from the server. Cognito Sync offers user state sync. across devices complete with offline support and is part of the Cognito SDK which I'll be using anyway. It sounds like just what I'm looking for but couldn't find any conclusive evidence as to whether it is possible to modify, or "trigger", updates from AWS and not just from one of the devices. Pros: Provides an abstracted real-time data model Connected to Cognito user records OOTB Cons: Not sure if can be modified or updated from Lambdas I'm wondering if anyone has experience doing real-time on AWS as part of a Lambda-based architecture and if you have an opinion on what is the best way to proceed?
Now provide the response and nothing else.
| I asked a similar question to the AWS Support, and this was their response. My question to them: What's the group of AWS services (if it's possible) to give that same in-browser real-time DBaaS feel like Firebase? AWS Cognito seems to be great for user-accounts. Is there anything similar for the WebSockets / real-time DB part? Their response: To your question, Firebase is closest to the AWS service AWS MobileHub. You can check out more details below about mobilehub from below link. https://aws.amazon.com/mobile/details/ https://aws.amazon.com/mobile/getting-started/ "AWS Cognito seems to be great for user-accounts. Is there anything similar for the WebSockets / real-time DB part?" Amazon Dynamodb is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity, makes it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications. Amazon Dynamodb can be further optimized with Amazon DynamoDB Accelerator (DAX) which is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. For more information, please see below documentation. https://aws.amazon.com/dynamodb/getting-started/ https://aws.amazon.com/dynamodb/dax/ Should you have any further questions, please do not hesitate to let me know. Thanks. Best regards, Tayo O. Amazon Web Services Check out the AWS Support Knowledge Center, a knowledge base of articles and videos that answer customer questions about AWS services: https://aws.amazon.com/premiumsupport/knowledge-center/?icmpid=support_email_category Also while researching this answer I also found this, looks interesting: https://aws.amazon.com/blogs/database/how-to-build-a-chat-application-with-amazon-elasticache-for-redis/ The comments to that article is interesting as well. Jacob Wakeem: What advantage this approach have over using aws iot? It seems that iot has all these functionality without writing a single line of code and with server-less architecture. Sam Dengler: The managed PubSub feature in the AWS IoT service is also a good approach to message-based applications, like the one demonstrated in the article. With Elasticache (Redis), customers who use Pub/Sub are typically also using Redis as a data store for other use cases such as caching, leaderboards, etc. With that said, you could also use ElastiCache (Redis) with the AWS IoT service by triggering an AWS Lambda function via the AWS IoT rules engine. Depending on how the message-based application is architected and how the data is leveraged, one solution may be a better fit than the other. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39589636', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1815952/']} | jdg_81227 |
stackexchange | llm_judgeable_groundtruth_similarity | 13010354 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
since I'm using WebSocket connections on more regular bases, I was interested in how things work under the hood. So I digged into the endless spec documents for a while, but so far I couldn't really find anything about chunking the transmission stream itself . The WebSocket protocol calls it data frames (which describes the pure data stream, so its also called non-control frames ). As far as I understood the spec, there is no defined max-length and no defined MTU (maximum transfer unit) value, that in turn means a single WebSocket data-frame may contain, by spec(!), an infinite amount of data (please correct me if I'm wrong here, I'm still a student on this). After reading that, I instantly setup my little Node WebSocket server. Since I have a strong Ajax history (also on streaming and Comet), my expectations originaly were like, " there must be some kind of interactive mode for reading data while it is transfered ". But I am wrong there, ain't I ? I started out small, with 4kb of data. server testSocket.emit( 'data', new Array( 4096 ).join( 'X' ) ); and like expected this arrives on the client as one data-chunk client wsInstance.onmessage = function( data ) { console.log( data.length ); // 4095}; so I increased the payload and I actually was expecting again, that at some point, the client-side onmessage handler will fire repeatly, effectivley chunking the transmission. But to my shock, it never happened ( node-server , tested on firefox , chrome and safari client-side). My biggest payload was 80 MB testSocket.emit( 'data', new Array( 1024*1024*80 ).join( 'X' ) ); and it still arrived in one big data-chunk on the client. Of course, this takes a while even if you have a pretty good connection. Questions here are is there any possiblity to chunk those streams, similar to the XHR readyState3 mode ? is there any size limit for a single ws data-frame ? are websockets not supposed to transfer such large payloads? (which would make me wonder again why there isn't a defined max-size) I might still look from the wrong perspective on WebSockets, probably the need for sending large data-amounts is just not there and you should chunk/split any data logically yourself before sending ?
Now provide the response and nothing else.
| First, you need to differentiate between the WebSocket protocol and the WebSocket API within browsers . The WebSocket protocol has a frame-size limit of 2^63 octets, but a WebSocket message can be composed of an unlimited number of frames. The WebSocket API within browsers does not expose a frame-based or streaming API, but only a message-based API. The payload of an incoming message is always completely buffered up (within the browser's WebSocket implementation) before providing it to JavaScript. APIs of other WebSocket implementations may provide frame- or streaming-based access to payload transferred via the WebSocket protocol. For example, AutobahnPython does. You can read more in the examples here https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/websocket/streaming . Disclosure: I am original author of Autobahn and work for Tavendo. More considerations: As long as there is no frame/streaming API in browser JS WebSocket API, you can only receive/send complete WS messages. A single (plain) WebSocket connection cannot interleave the payload of multiple messages. So i.e. if you use large messages, those are delivered in order, and you won't be able to send small messages in between while a big message is still on the fly. There is an upcoming WebSocket extension (extensions are a builtin mechanism to extend the protocol): WebSocket multiplexing. This allows to have multiple (logical) WebSocket connections over a single underlying TCP connection, which has multiple advantages. Note also: you can open multiple WS connections (over different underlying TCPs) to a single target server from a single JS / HTML page today . Note also: you can do "chunking" yourself in application layer: send your stuff in smaller WS messages a reassemble yourself. I agree, in an ideal world, you'd have message/frame/streaming API in browser plus WebSocket multiplexing. That would give all the power and convenience. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13010354', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1386886/']} | jdg_81228 |
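Here is one way the "chunk it yourself at the application layer" suggestion can look. This is an illustrative sketch, not part of the original answer: the envelope fields (id, seq, total, data) are invented for the example, and Python is used only for brevity — the same framing works with JSON.parse inside a browser onmessage handler.

```python
import base64
import json
import math

def chunk_message(payload: bytes, msg_id: str, chunk_size: int = 64 * 1024):
    """Yield small JSON text messages that together carry one large payload."""
    total = max(1, math.ceil(len(payload) / chunk_size))
    for seq in range(total):
        piece = payload[seq * chunk_size:(seq + 1) * chunk_size]
        yield json.dumps({
            "id": msg_id,
            "seq": seq,
            "total": total,
            "data": base64.b64encode(piece).decode("ascii"),
        })

class Reassembler:
    """Collect chunks on the receiving side; returns the full payload
    once the last piece of a given message id has arrived."""
    def __init__(self):
        self._parts = {}

    def feed(self, text_message: str):
        msg = json.loads(text_message)
        parts = self._parts.setdefault(msg["id"], {})
        parts[msg["seq"]] = base64.b64decode(msg["data"])
        if len(parts) == msg["total"]:
            del self._parts[msg["id"]]
            return b"".join(parts[i] for i in range(msg["total"]))
        return None  # still waiting for more chunks

# round-trip check
r = Reassembler()
out = None
for frame in chunk_message(b"x" * 200_000, "msg-1"):
    out = r.feed(frame) or out
assert out == b"x" * 200_000
```

Because each frame is small, other messages can be interleaved between the chunks of a large payload — exactly what sending it as one huge WebSocket message prevents.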
stackexchange | llm_judgeable_groundtruth_similarity | 56305613 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to set docker up on a new system, and when running docker info I get: docker -v=> Docker version 18.09.5, build e8ff056docker info=> Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/info: dial unix /var/run/docker.sock: connect: permission denied Following the docs, I've tried: sudo usermod -a -G docker $USER Which returns no output. When I then run groups: groups=> mark adm cdrom sudo dip plugdev lpadmin sambashare I can see a docker group exists: less /etc/group | grep docker=> docker:x:131:mark And can see that it owns a socket running where the error message states: ls -la /var/run/ | grep docker=> drwx------ 5 root root 120 May 25 14:54 docker-rw-r--r-- 1 root root 5 May 25 14:54 docker.pidsrw-rw---- 1 root docker 0 May 25 14:54 docker.sock So why can't I add myself to that group with sudo usermod -a -G docker $USER ?
Now provide the response and nothing else.
| You need to reload your shell in order for the changes to take effect; often you need to restart your shell process and possibly even restart your computer, e.g. sudo usermod -aG docker $USERsudo reboot See the answer by @4Z4T4R and give it a thumbs up: https://stackoverflow.com/a/66297855/7961500 Load changes without quitting your shell To avoid starting a new shell, you can run the following (it doesn't seem to work in all environments): exec su -l $USER This will create a new subshell with the loaded changes and replace your current shell with it. If nothing else is working for you and you just need to get it working now, another way is to change your primary group. This is only a temporary solution, as with any new shell you will need to apply it again: export my_group=$(id -gn)newgrp dockernewgrp $my_group Documentation You can also look at the official documentation here: https://docs.docker.com/engine/install/linux-postinstall/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56305613', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6535997/']} | jdg_81229
stackexchange | llm_judgeable_groundtruth_similarity | 17825588 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
static inline __printf(2, 3)int dev_err(const struct device *dev, const char *fmt, ...){ return 0; } What is __printf() doing and what does the third arg(...) of dev_err mean?I'm able to visualize that this function is some kind of generic function. What does it do?
Now provide the response and nothing else.
| Everything there is pretty much standard (variable argument processing) except for the __printf(2,3) . This modifier to the function (similar to the static or inline modifiers) tells the compiler that it should check the format string at argument 2 ( fmt ) against the actual parameters starting at argument 3, using the printf style format specifiers. In other words, calling it with: dev_err (pDev, "%d", 1.0); would flag a warning because the format string and actual arguments don't match. The ... simply indicates that there are a variable number of arguments coming after the format string, similar to the way printf itself is implemented. C has long had the capability to process variable argument lists, the __printf() modifier simply gives the compiler a bit of extra information so that it can validate your use of the function. Linux defines __printf(a, b) as __attribute__((format(printf, a, b))) and gcc allows that second format to specify the varargs-checking attribute as per here (paraphrased below): format (archetype, string-index, first-to-check): The format attribute specifies that a function takes printf, scanf, strftime or strfmon style arguments which should be type-checked against a format string. For example, the declaration: extern int my_printf (void *my_object, const char *my_format, ...) __attribute__ ((format (printf, 2, 3))); causes the compiler to check the arguments in calls to my_printf for consistency with the printf style format string argument my_format. In the example above, the format string (my_format) is the second argument of the function my_print, and the arguments to check start with the third argument, so the correct parameters for the format attribute are 2 and 3. The format attribute allows you to identify your own functions which take format strings as arguments, so that GCC can check the calls to these functions for errors. As to what the function itself does, not a lot other than returning zero :-) It's almost certainly a placeholder should you ever wish to actually implement a real dev_err() function. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17825588', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2400613/']} | jdg_81230 |
stackexchange | llm_judgeable_groundtruth_similarity | 8683391 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Preamble: My core question is very similar to this one: How can I write a clean Repository without exposing IQueryable to the rest of my application? which has remained unanswered. I am hoping that if I approach the problem in a different way, and ask a slightly different question, I may get a result. I will repeat some of the content from this other question to avoid requiring readers to read it for context. Problem: I'm working with POCO entities, and Entity Framework 4. I am trying to allow for sophisticated ad-hoc filtering of entity sets at the application layer, while simultaneously trying to avoid exposing IQueryable<T> past my repository boundary. This is leaving me with some complications. I do not want to create a single massive filter method on the repository that takes a huge list of parameters, such as: IEnumerable GetFilteredCustomers(string nameFilter, string addressFilter, bool isActive, int customerId, ...) Not only is this extremely cumbersome to use, but it's super ugly to look at, especially if it's mostly a bunch of nulls, etc. It's also not as maintainable as I would like. I do not want to create a huge set of filter methods on the repository, such as: IEnumerable GetActiveCustomers()IEnumerable GetCustomersByName() There are a number of problems with this approach, including needing a huge list of methods which grows to n! where n is the number of available filter conditions if I want to be able to combine them in arbitrary ways. (i.e. all active customers with name George). Also highly difficult to maintain. I do not want to create chainable methods (Fluent Interface) that manipulate IEnumerable<T> , because ultimately that involves bringing back a huge result set from the database and filtering it down in-memory which is not a scalable solution. I can't create a Fluent Interface that manipulates IQueryable<T> because as I've already said, I don't want to expose the IQueryable<T> past the repositories. I'd like to avoid simply rehashing the single massive filter method by passing in an object full of parameters instead of a large parameter list, although at this point this might be the least ugly solution. Ideas: Ultimately, I think an ideal solution would be discovering some way to create a full query that doesn't know the source, and store it as a parameter. I could then pass that into the repository, where the source is known, and apply the query to the source and return the results. To clarify; in contrast to simply creating an object of parameters as mentioned above, I'd like to use the raw LINQ queries, but store them in a variable somehow, and apply them to a data source later. I suspect the return type would have to be known ahead of time, but I'm perfectly fine with defining that and having it known in advance. To view it from yet another perspective, consider the following: IQueryable<Customer> filteredCustomers = customerRepository.GetAll() .Where(c => c.FirstName == "Dave") .Where(c => c.IsActive == true) .Where(c => c.HasAddress == true) ; I want to package up the three Where clauses as a query object, completely separate from the customerRepository.GetAll(), pass it around as a parameter and apply it later.
Now provide the response and nothing else.
| Sure. You can write a method like: public Expression<Func<Customer, bool>> GetDave(){ return c => c.FirstName == "Dave" && c.IsActive && c.HasAddress;} ...and repository methods like: public IEnumerable<Customer> GetOneGuy(Expression<Func<Customer, bool>> criteria){ return Context.Customers.Where(criteria);} ...and call: var dave = Repository.GetOneGuy(this.GetDave()).Single(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8683391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/495000/']} | jdg_81231 |
stackexchange | llm_judgeable_groundtruth_similarity | 33880306 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a PHP script on my Apache server that has to send a cURL request to a partner's server. The partner gave me a .pem file that I have to attach to every call I make to their API. My PHP script is the following: $ch = curl_init();curl_setopt($ch, CURLOPT_URL, $url);curl_setopt($ch, CURLOPT_SSLCERT, "test.pem" );curl_setopt($ch,CURLOPT_SSLCERTTYPE,"PEM");curl_setopt($ch, CURLOPT_RETURNTRANSFER, True);curl_setopt($ch, CURLOPT_POST, True);curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);curl_setopt($ch, CURLOPT_POSTFIELDS, $json);curl_setopt($ch, CURLOPT_VERBOSE, true);$result = curl_exec($ch);if(!$result){ echo "Curl Error: " . curl_error($ch);}else{ echo "Success: ". $result;}curl_close($ch); It returns: Curl Error: unable to set private key file: 'test.pem' type PEM Note that the partner sent me the .pem file and says "it has no passphrase"
Now provide the response and nothing else.
| I think that you need to use tmpfile() and stream_get_meta_data . Write the PEM contents (not just the file name) into the temporary file, then point cURL at that file's path: $pemFile = tmpfile();fwrite($pemFile, file_get_contents("test.pem")); // the actual PEM contents, not the path$tempPemPath = stream_get_meta_data($pemFile);$tempPemPath = $tempPemPath['uri'];curl_setopt($ch, CURLOPT_SSLCERT, $tempPemPath); Source: This answer here on SO helped me with a similar problem. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33880306', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_81232
stackexchange | llm_judgeable_groundtruth_similarity | 145303 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Another question about the convergence notes by Dr. Pete Clark: http://alpha.math.uga.edu/~pete/convergence.pdf (I'm almost at the filters chapter! Getting very excited now!) On page 15, Proposition 4.6 states that for the following three properties of a topological space $X$ , $(i)$ $X$ has a countable base, $(ii)$ $X$ is separable, $(iii)$ $X$ is Lindelof (every open cover admits a countable subcover), we always have $(i)\Rightarrow (ii)$ and $(i)\Rightarrow (iii)$ . Also, if $X$ is metrizable, we have $(iii)\Rightarrow (i)$ , and thus all three are equivalent . This last part confuses me. We establish all the implications claimed in the proof, but there seems to be a missing link in the claim that all three are equivalent: namely $(ii)\Rightarrow (iii)$ .
Now provide the response and nothing else.
| Note that a separable metric space has a countable basis. Specifically, we take a countable dense subset $S$ and take the set of balls centered at $s$ with radius $1/n$ for each $n \in N$, $s \in S$. This can be checked to be a basis. So then $(ii) \Rightarrow (i)$ is proven, which is the missing link. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/145303', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/16938/']} | jdg_81233 |
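A sketch of the omitted "can be checked" step (the argument is standard; the notation here is mine). Given an open set $U$ and $x \in U$, choose $\varepsilon > 0$ with $B(x,\varepsilon) \subseteq U$, then $n$ with $2/n < \varepsilon$, and then $s \in S$ with $d(x,s) < 1/n$ (possible because $S$ is dense). Then $x \in B(s, 1/n)$, and every $y \in B(s,1/n)$ satisfies $d(x,y) \le d(x,s) + d(s,y) < 2/n < \varepsilon$, so $$x \in B\!\left(s,\tfrac{1}{n}\right) \subseteq B(x,\varepsilon) \subseteq U,$$ which is exactly what being a basis requires. Since there are only countably many pairs $(s,n)$, this basis is countable, giving the missing $(ii)\Rightarrow(i)$; combined with $(i)\Rightarrow(ii)$, $(i)\Rightarrow(iii)$ and, for metrizable $X$, $(iii)\Rightarrow(i)$, all three properties are equivalent.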
stackexchange | llm_judgeable_groundtruth_similarity | 1048446 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Dijkstra's algorithm is a very good approach to the shortest path problem. But is it optimal? Are there better algorithms for unweighted graphs?
Now provide the response and nothing else.
| Although simple to implement, Dijkstra's shortest-path algorithm is not optimal. A guaranteed linear time, linear space (in the number of edges) algorithm is referenced by the Wikipedia article Shortest path problem as: Thorup, Mikkel (1999) "Undirected single-source shortest paths with positive integer weights in linear time". Journal of the ACM 46 (3): p. 362–394 As Mikkel Thorup points out in the abstract of the above: Thus, any implementation of Dijkstra's algorithm sorts the vertices according to their distances from [ single source ] s. However, we do not know how to sort in linear time. Here, a deterministic linear time and linear space algorithm is presented for the undirected single source shortest paths problem with positive integer weights. The algorithm avoids the sorting bottleneck by building a hierarchical bucketing structure, identifying vertex pairs that may be visited in any order. This effectively removes the dependency on number of vertices $V$ from $O(E + V\ln V)$ leaving only $O(E)$, where $E$ is the number of edges. Asano and Imai (2000) have an early but accessible account, Practical Efficiency of the Linear-Time Algorithm for the Single Source Shortest Path Problem . Slides from a 2009 talk by Nick Prühs are at Implementation of Thorup's Linear Time Algorithm for Undirected Single Source Shortest Paths With Positive Integer Weights . We remark that linear-time is (quasi)optimal since in the worst case a shortest path consists of all the edges, and hence requires linear time to form the path. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1048446', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/182026/']} | jdg_81234 |
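For the unweighted case the question asks about, Thorup's machinery is not even needed: plain breadth-first search already finds shortest paths in $O(V+E)$. A minimal sketch (not from the cited papers; the adjacency-list format is an assumption of this example):

```python
from collections import deque

def bfs_shortest_path(adj, source, target):
    """Shortest path in an unweighted graph in O(V + E).
    `adj` maps each node to an iterable of its neighbours."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:                 # walk back through parents
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in parent:         # first visit = shortest distance
                parent[v] = u
                queue.append(v)
    return None                         # target not reachable

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_shortest_path(graph, 0, 3))   # [0, 1, 3]
```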
stackexchange | llm_judgeable_groundtruth_similarity | 212664 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to use a VoronoiMesh to define a TogglerBar or SetterBar type Control ? For example, I can customise the looks of a TogglerBar Control[{{a, 0, ""}, {1 -> Spacer[{.1, .1}], 4 -> Spacer[{.1, 20}], 7 -> Spacer[{.1, .1}], 2 -> Spacer[{20, .1}], 5 -> Spacer[{20, 20}], 8 -> Spacer[{20, .1}], 3 -> Spacer[{.1, .1}], 6 -> Spacer[{.1, 20}], 9 -> Spacer[{.1, .1}]}, Appearance -> "Vertical" -> {3, 3}, ControlType -> TogglerBar}] But this doesn't change the rectangle-shaped buttons and I don't think this is the way to go about if I want to define a VoronoiMesh with clickable and "toggleable" cells. I want something like where each cell is selected/unselected whenever I click it, adding/removing a correspondent number to a list, for example, in the case of a TogglerBar . I would like this to also work as a SetterBar . Any ideas? Edit 1: Thank you all for your answers. As a follow up, I'm now interested in developing a TogglerBar -type object that allows users to hold and drag the mouse to select/deselect several cells . If you have time, please take a look at it, I'm a bit a clueless on how to do this, so any hint or idea is welcome. Edit 2: Following Lukas Lang's answer below, I also tried to vary the grid size in Manipulate Manipulate[x, Control[{n, 2, 10, 1}], Control[{{x, 3, ""}, MeshSetterBar[VoronoiMesh@RandomReal[{0, 1}, {n, 2}]]}]] However, this doesn't seem to behave as expected. Instead, I get Any idea why, and how to fix this? I tried Dynamic , but didn't work. Edit 3: As a third and (hopefully) final edit, thanks to Lukas Lang's answer, I was able to solve the original question. Now I just need to define several toggler-type meshes of the same shape. One naive attempt is simply Manipulate[Null, Dynamic@Grid[{ {Control[{n, 2, 10, 1}]}, {Control[{{x, {}, ""}, MeshTogglerBar[VoronoiMesh@RandomReal[{0, 1}, {n, 2}]]}]}, {Control[{{y, {}, ""}, MeshTogglerBar[VoronoiMesh@RandomReal[{0, 1}, {n, 2}]]}]} }]] Which naturally doesn't yield meshes with the same shape, due to the randomness in defining the points. How can I solve this? I have tried to define the mesh outside, then I lose the dynamic update of the mesh-dependent control. I would like something like the following, where I'm able to independently update similarly shaped meshes
Now provide the response and nothing else.
| Here are implementations for a MeshTogglerBar and MeshSetterBar based on my answer here (code below). Both implementations use Mouseover and EventHandler to handle detection of the polygon below the cursor for you. Compared to the NearestFunction approach, this is far more performant (since it is done by the front-end), it also works nicely for other types of meshes, where the cell below the cursor is not necessarily the one with the closest center. TogglerBar SetterBar Code MeshTogglerBar[mesh_] := iMeshTogglerBar[#, mesh] &Dynamic[MeshTogglerBar[mesh_]] ^:= Dynamic[iMeshTogglerBar[#, mesh] &]MeshTogglerBar[Dynamic@var_, mesh_] := iMeshTogglerBar[Dynamic@var, mesh]iMeshTogglerBar[Dynamic@var_, mesh_] := Module[ {prims = MeshPrimitives[mesh, 2]}, With[ { active = Append[dragAction]@Table[Unique["active"], Length@prims], n = Length@prims }, DynamicModule[ active, Graphics[ { FaceForm@White, EdgeForm@Blue, MapIndexed[ With[ {v = active[[#2[[1]]]]}, EventHandler[ Style[ Annotation[#, ""], TagBoxOptions -> { BaseStyle -> FEPrivate`Which[ FEPrivate`SameQ[v, True], {Lighter@Blue, EdgeForm@{Thick, Blue}}, FrontEnd`CurrentValue@"MouseOver", LightBlue, True, {} ] } ], { "MouseEntered" :> FEPrivate`If[ FEPrivate`And[ FrontEnd`CurrentValue[{"MouseButtonTest", 1}], FEPrivate`UnsameQ[v, dragAction] ], FEPrivate`Set[v, dragAction]; var[[#2[[1]]]] = dragAction ], {"MouseDown", 1} :> ( FEPrivate`Set[dragAction, FEPrivate`UnsameQ[v, True]]; FEPrivate`Set[v, dragAction]; var[[#2[[1]]]] = dragAction ) } ] ] &, prims ] }, ImageSize -> Medium ], Initialization :> ( If[ListQ@var, var = TrueQ /@ PadLeft[var, n, False], var = ConstantArray[False, n] ]; MapThread[Set, {Most@active, var}] ) ] ] ]MeshSetterBar[mesh_] := iMeshSetterBar[#, mesh] &Dynamic[MeshSetterBar[mesh_]] ^:= Dynamic[iMeshSetterBar[#, mesh] &]MeshSetterBar[Dynamic@var_, mesh_] := iMeshSetterBar[Dynamic@var, mesh]iMeshSetterBar[Dynamic@var_, mesh_] := DynamicModule[ {active}, Graphics[ { FaceForm@White, EdgeForm@Blue, MapIndexed[ EventHandler[ Style[ Annotation[#, ""], TagBoxOptions -> { BaseStyle -> FEPrivate`Which[ FEPrivate`SameQ[active, #2[[1]]], {Lighter@Blue, EdgeForm@{Thick, Blue}}, FrontEnd`CurrentValue@"MouseOver", LightBlue, True, {} ] } ], {"MouseClicked" :> ( FEPrivate`Set[active, #2[[1]]]; var = #2[[1]] ) } ] &, MeshPrimitives[mesh, 2] ] }, ImageSize -> Medium ], Initialization :> (active =var) ]SeedRandom[1]mesh = VoronoiMesh@RandomReal[{0, 1}, {10, 2}]Dynamic@xMeshSetterBar[Dynamic@x, mesh]Dynamic@xMeshTogglerBar[Dynamic@x, mesh] Notes Some notes on the implementation (you can find some more in my answer linked above): Since everything is handled by the front-end, these controls will have excellent performance For the MeshTogglerBar , we have to generate a list of state variables (one per cell). This is because the front-end cannot manipulate lists, so each cell needs a separate variable The default values of the state variables are set in the Initialization property of the DynamicModule to ensure that the values are not prematurely inserted anywhere. The dynamic styling is done via TagBoxOptions -> {BaseStyle -> {...}} . This is done since we need to set the styles via an option for the front-end-only solution to work. The Annotation[...] / TagBoxOptions trick is to ensure that any type of primitive is styled, not only Polygon s. The controlled variables are kept separate from the DynamicModule variables used to store the state of the control. This ensures that the front-end ↔ kernel communication is kept to a minimum (i.e. 
only when a click has happened is the kernel variable updated). For the MeshTogglerBar , we trigger on both "MouseEntered" and "MouseDown" to enable dragging over many elements to toggle them. The state of the first element is stored in dragAction , to ensure that dragging sets all elements to the same state instead of toggling them back and forth The iMeshTogglerBar / iMeshSetterBar functions are there so the control can be easily used inside Manipulate : Manipulate[ x, {{x, 3}, MeshSetterBar[mesh]} ] Similarly, the Dynamic[MeshSetterBar[_]] / Dynamic[MeshTogglerBar[_]] type definitions are to ensure that the controls work inside of Manipulate when the controls depend on other variables: Manipulate[x, {n, 2, 10, 1}, {{x, 3, ""}, MeshSetterBar[VoronoiMesh@RandomReal[{0, 1}, {n, 2}]]} ] The additional definition is necessary, since Manipulate wraps control specifications in Dynamic if any other manipulate variables occur in the specifications. This prevents Manipulator from seeing the Function expression, since it is not evaluated. The additional upvalue forces evaluation into something with an explicit Function in those cases. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/212664', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/64001/']} | jdg_81235 |
stackexchange | llm_judgeable_groundtruth_similarity | 9191 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The claim that the Abrahamic god is omnipotent, omniscient and omnibenevolent is popular among theologians, practitioners of Abrahamic religions and atheists alike (I am referring to the idea of god, regardless of whether there is a being that fits the bill or not). Many theological arguments rely on some or all of these attributes, e.g. the problem of evil (requiring all three), the impossibility of coexistence of free will and omniscience, the famous "could god make a boulder so heavy even he himself could not lift it" (omnipotence), et cetera. Do these claims appear in major religious texts, such as the Torah, Bible, or Koran? Some examples: An article from the Christian Research Journal asserts that god is omnipotent, omniscient and omnibenevolent. An educational books publisher claiming to have their material used in 3500 educational institutions in the UK alone, as well as 85 other countries, claims the Abrahamic god possesses all of these attributes. Article by Rabbi Dovid Gottleib dedicated to the problem of evil in Judaism. It claims god is omnipotent, omniscient and just.
Now provide the response and nothing else.
| Methodology: simple google searches, restrict results to direct quotes that are clear and straightforward and from the old testament (since the OP asks for Abrahamic books, not sure which he means but certainly not new testament). Omnipotence: "O Sovereign Lord! You have made the heavens and earth by Your great power. Nothing is too hard for You!" (Jeremiah 32:17). as cited by discover God: God is all powerful and parallel translations from bible.cc and from the Christianity SE site, in https://christianity.stackexchange.com/questions/2643/biblical-evidence-for-omnipotence-omniscience the 1st answer cites Genesis 17:1-2 Genesis 17:1 (ESV) 1 When Abram was ninety-nine years old the LORD appeared to Abram and said to him, "I am God Almighty; walk before me, and be blameless, Parallel versions of Genesis 17:1 As the OP of the Christianity SE question noted, here in Genesis GOD itself is making the claim that "I am God Almighty" rather than it simply being a record of praise from one of his believers, who might be offering mistaken praise. [Note: this was all still written down by believers, obviously...] Omniscience: Hearken unto this, O Job: stand still, and consider the wondrous works of God. Dost thou know when God disposed them, and caused the light of his cloud to shine? Dost thou know the balancings of the clouds, the wondrous works of Him which is perfect in knowledge ... (Job 37:14-18) Parallel versions of Job 37:16 from bible.cc Also a claim is made that God sees what everyone is doing: Psalm 33:13,14 states, "The Lord looketh from heaven; he beholdeth all the sons of men. From the place of his habitation he looketh upon all the inhabitants of the earth." Psalm 33:14 parallel versions from bible.cc Both of these cites were from The Omniscience of God: Old Testament teaching Omnibenevolence A google search for "omnibenevolent god old testament" does not yield any positive claims among the first page of article titles. However, a search for "God is good to all" quickly yielded: New International Version (©1984) The LORD is good to all; he has compassion on all he has made. from http://bible.cc/psalms/145-9.htm and that seems a pretty clear statement of Omnibenevolence. For counterexamples to omnibenevolence a simple google search is all that is required. | {} | {'log_upvote_score': 4, 'links': ['https://skeptics.stackexchange.com/questions/9191', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/6189/']} | jdg_81236 |
stackexchange | llm_judgeable_groundtruth_similarity | 3362959 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any easy way to get a DateTime 's " TimeMin " and " TimeMax "? TimeMin : The very first moment of the day. There is no DateTime that occurs before this one and still occurs on the same day. TimeMax : The very last moment of the day. There is no DateTime that occurs after this one and still occurs on the same day. These values would be helpful for filtering and doing date-related queries.
Now provide the response and nothing else.
| Here are two extensions I use to do exactly that. /// <summary> /// Gets the 12:00:00 instance of a DateTime /// </summary> public static DateTime AbsoluteStart(this DateTime dateTime) { return dateTime.Date; } /// <summary> /// Gets the 11:59:59 instance of a DateTime /// </summary> public static DateTime AbsoluteEnd(this DateTime dateTime) { return AbsoluteStart(dateTime).AddDays(1).AddTicks(-1); } This allows you to write: DateTime.Now.AbsoluteEnd() || DateTime.Now.AbsoluteStart() or DateTime partyTime = new DateTime(1999, 12, 31);Console.WriteLine("Start := " + partyTime.AbsoluteStart().ToString());Console.WriteLine("End := " + partyTime.AbsoluteEnd().ToString()); | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3362959', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/139698/']} | jdg_81237 |
stackexchange | llm_judgeable_groundtruth_similarity | 1383373 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $a$ and $b$ are positive numbers, what is the value of $\displaystyle \int_0^\infty \frac{e^{ax}-e^{bx}}{(1+e^{ax})(1+e^{bx})}dx$. A: $0$ B: $1$ C: $a-b$ D: $(a-b)\log 2$ E: $\frac{a-b}{ab}\log 2$ I really don't see how to start this one, I'm not so great with integrals.
Now provide the response and nothing else.
| One indirect approach: Write $$f(a,b) = \int_0^\infty \frac{e^{ax}-e^{bx}}{(1+e^{ax})(1+e^{bx})}dx$$ Then changing variables by $x = ku$, for some positive $k$, $$f(a,b) = k \int_0^\infty \frac{e^{kau}-e^{kbu}}{(1+e^{kau})(1+e^{kbu})}du = k\, f(ka,kb) $$ The only$^*$ answer which obeys the relation $$f(a,b) = k \,f(ka,kb)$$ is Option E. $^*$Footnote: As pointed out in the comments, Option A does follow this relation as well but is easy to rule out on other grounds. To unpack my original thinking: Option A is not possible as we can make the integrand positive: for $a > b$, $f(a,b) > 0$ Option B is not possible as $f(a,b)$ cannot be independent of $a$ and $b$, e.g., without explicitly calculating, it looks clear that $\partial_a f(a,b) \neq 0$. Now we're down to Options C, D or E. Since this is a question from a timed test and mathematicians are ruthlessly efficient (aka lazy), I don't want to evaluate the integral. Instead, can I find quickly some sort of scaling argument to rule out Options C and D? My 'fear' is that $f(ka,kb) = k\,f(a,b)$ which will instead rule out Option E and then I'll be stuck having to calculate the integral in order to differentiate between C and D. But no! Instead $f(a,b) = k\,f(ka,kb)$. Good! | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1383373', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/149032/']} | jdg_81238 |
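For completeness, the scaling argument can be confirmed by evaluating the integral directly (this verification is not in the original answer). By partial fractions, $$\frac{e^{ax}-e^{bx}}{(1+e^{ax})(1+e^{bx})} = \frac{1}{1+e^{bx}} - \frac{1}{1+e^{ax}},$$ and for any $c>0$, the substitution $u = 1+e^{-cx}$ gives $$\int_0^\infty \frac{dx}{1+e^{cx}} = \int_0^\infty \frac{e^{-cx}\,dx}{1+e^{-cx}} = \frac{1}{c}\int_1^2 \frac{du}{u} = \frac{\log 2}{c}.$$ Hence $$\int_0^\infty \frac{e^{ax}-e^{bx}}{(1+e^{ax})(1+e^{bx})}\,dx = \frac{\log 2}{b} - \frac{\log 2}{a} = \frac{a-b}{ab}\log 2,$$ which is Option E, in agreement with the scaling argument.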
stackexchange | llm_judgeable_groundtruth_similarity | 6215239 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a fragment (F1) with a public method like this public void asd() { if (getActivity() == null) { Log.d("yes","it is null"); }} and yes when I call it (from the Activity), it is null... FragmentTransaction transaction1 = getSupportFragmentManager().beginTransaction();F1 f1 = new F1();transaction1.replace(R.id.upperPart, f1);transaction1.commit();f1.asd(); It must be something that I am doing very wrong, but I don't know what that is.
Now provide the response and nothing else.
| commit schedules the transaction, i.e. it doesn't happen straightaway but is scheduled as work on the main thread the next time the main thread is ready. I'd suggest adding an onAttach(Activity activity) method to your Fragment and putting a break point on it and seeing when it is called relative to your call to asd() . You'll see that it is called after the method where you make the call to asd() exits. The onAttach call is where the Fragment is attached to its activity and from this point getActivity() will return non-null (nb there is also an onDetach() call). | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6215239', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/706780/']} | jdg_81239 |
stackexchange | llm_judgeable_groundtruth_similarity | 7670112 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two Numpy arrays (3-dimensional uint8) converted from PIL images. I want to find if the first image contains the second image, and if so, find out the coordinates of the top-left pixel inside the first image where the match is. Is there a way to do that purely in Numpy, in a fast enough way, rather than using (4! very slow) pure Python loops? 2D example: a = numpy.array([ [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])b = numpy.array([ [2, 3], [6, 7]]) How to do something like this? position = a.find(b) position would then be (0, 2) .
Now provide the response and nothing else.
| I'm doing this with OpenCV 's matchTemplate function. There is an excellent python binding to OpenCV which uses numpy internally, so images are just numpy arrays. For example, let's assume you have a 100x100 pixel BGR file testimage.bmp . We take a 10x10 sub-image at position (30,30) and find it in the original. import cv2import numpy as npimage = cv2.imread("testimage.bmp")template = image[30:40,30:40,:]result = cv2.matchTemplate(image,template,cv2.TM_CCOEFF_NORMED)print np.unravel_index(result.argmax(),result.shape) Output: (30, 30) You can choose between several algorithms to match the template to the original, cv2.TM_CCOEFF_NORMED is just one of them. See the documentation for more details, some algorithms indicate matches as minima, others as maxima in the result array. A word of warning: OpenCV uses BGR channel order by default, so be careful, e.g. when you compare an image you loaded with cv2.imread to an image you converted from PIL to numpy. You can always use cv2.cvtColor to convert between formats. To find all matches above a given threshold confidence , I use something along the lines of this to extract the matching coordinates from my result array: match_indices = np.arange(result.size)[(result>confidence).flatten()]np.unravel_index(match_indices,result.shape) This gives a tuple of arrays of length 2, each of which is a matching coordinate. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7670112', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/981515/']} | jdg_81240 |
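If pulling in OpenCV is not an option and an exact match is all that is needed, the same search can be done in pure NumPy with a sliding window view (requires NumPy ≥ 1.20; this variant is not in the original answer). The comparison materializes a boolean array roughly the size of all windows, which is fine for moderately sized images:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def find_exact(a, b):
    """All top-left positions where sub-array `b` occurs exactly inside `a`."""
    windows = sliding_window_view(a, b.shape)      # a strided view, no copy
    match_axes = tuple(range(-b.ndim, 0))          # compare each full window
    mask = (windows == b).all(axis=match_axes)
    return [tuple(pos[:2]) for pos in np.argwhere(mask)]

a = np.array([[0, 1, 2, 3],
              [4, 5, 6, 7],
              [8, 9, 10, 11]])
b = np.array([[2, 3],
              [6, 7]])
print(find_exact(a, b))   # [(0, 2)]
```

The same function works on (height, width, 3) image arrays, since only the trailing axes of each window are compared. For near-matches under noise or compression, matchTemplate above remains the right tool; the exact comparison only finds bit-identical regions.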