| Column | Type | Values / lengths |
|---|---|---|
| source | stringclasses | 1 value |
| task_type | stringclasses | 1 value |
| in_source_id | stringlengths | 1 to 8 |
| prompt | stringlengths | 209 to 40.4k |
| gold_standard_solution | stringlengths | 0 to 56.7k |
| verification_info | stringclasses | 1 value |
| metadata | stringlengths | 138 to 225 |
| problem_id | stringlengths | 9 to 10 |
stackexchange
llm_judgeable_groundtruth_similarity
26496223
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an auto-generated SQL script to run on 5.6.17-157.2. It worked fine on 5.5.33-1.17.1. Every SQL statement shows \g at the end. For example CREATE TABLE articoli(ID INT,titolo LONGTEXT,sottotitolo LONGTEXT,descrizione LONGTEXT,note LONGTEXT,nomeopzione1 LONGTEXT,nomeopzione2 LONGTEXT,nomeopzione3 LONGTEXT,pagina CHAR(100),sottopagina SMALLINT,plain_titolo CHAR(200),plain_sottotitolo CHAR(200),nomeopzione4 LONGTEXT,KEY (ID),KEY (pagina),KEY (sottopagina));\g What changed between the two version to break query execution? How can I tell 5.6 to accept \g and don't care? I can't just change the SQL. It's auto-generated code that must run as final step of a monstrous software abomination "daily update" ( https://serverfault.com/questions/458340/euro-character-messed-up-during-ftp-transfer ) [Update] Better change the question: it's not enough to know what is that. I need to get the queries running. Now provide the response and nothing else.
MySQL already does accept `\g`, but it must follow a SQL statement. `\g` is basically the same as `;` — wait, no dash: `\g` is basically the same as `;`. That is, it is the terminator for a statement, meaning "send it to the server for parsing and execution". Your sample shows a `CREATE TABLE` statement terminated by both a semicolon and `\g`. The `CREATE TABLE` statement runs, because it has a semicolon, but then the client tries to run another statement terminator without a statement. Try this:

```
mysql> ;
ERROR: No query specified
```

Of course no query was specified; this is just a semicolon with no query. It's the same with a line containing nothing but `\g`:

```
mysql> \g
ERROR: No query specified
```

And if you run a real query followed by a redundant terminator of either type, you get something similar: the first query runs, then the empty query fails:

```
mysql> select 123; ;
+-----+
| 123 |
+-----+
| 123 |
+-----+
ERROR: No query specified

mysql> select 123; \g
+-----+
| 123 |
+-----+
| 123 |
+-----+
ERROR: No query specified
```

I don't know what you mean about this code being generated and unchangeable. You'll have to change it, because what you've got won't work. I would suggest you strip out the `\g` from your file before trying to run it. Here's an example of a file containing the bad empty-query pattern, and using `sed` to remove the redundant `\g`:

```
$ cat > bad.sql
select 123; \g
$ sed -e 's/\\g//g' bad.sql
select 123;
```
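If the file really can't be regenerated, one way out is to filter it on the fly before it reaches the client. A minimal sketch (the file name and the final pipe into `mysql` are illustrative, not from the question):

```shell
# build a sample file containing the redundant terminator
printf 'select 123; \\g\n' > bad.sql

# strip every \g token before feeding the script to the client,
# e.g.: sed -e 's/\\g//g' bad.sql | mysql mydb
sed -e 's/\\g//g' bad.sql
```

The `sed` pass leaves the semicolon-terminated statements untouched and only removes the `\g` tokens, so the client never sees an empty query.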
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26496223', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471213/']}
jdg_77541
stackexchange
llm_judgeable_groundtruth_similarity
317606
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering if long long specifies a single datatype then why don't things like int int work? I meant obviously that's not a data type but there is a long data type. Essentially what I'm asking is: int a = 0; //okaylong b = 0; //still finelong long c = 0; //really long number but its okay....int int d = 0; //error why? Now provide the response and nothing else.
The difference between `long long` and `int int` is that `long` modifies a type rather than being a type itself. `long` is really shorthand for `long int`, and `long long` is shorthand for `long long int`. More specifically, `int` is a *type specifier*, just like `char` or `bool`, while `long` is a *type modifier*; other type modifiers are `unsigned`, `signed`, and `short`. If one of the modifiers is missing, the type falls back to a default: if there is no `signed` or `unsigned`, the type will be signed, and if there is no `short` or `long`, the default size depends on the compiler and the architecture. Take a look at the table on Wikipedia for a full list of how the different type specifiers and modifiers can be combined. Edit: in current versions of the C and C++ standards, `long` and `short` are actually type specifiers in their own right. This doesn't change the way things can be combined, though.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/317606', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/227228/']}
jdg_77542
stackexchange
llm_judgeable_groundtruth_similarity
18989
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is the set $S$ = $\lbrace M \mid M \text{ is a Turing machine and }L(M)=\lbrace \langle M\rangle\rbrace\rbrace$ empty? In other words is there a Turing machine $M$ that only accepts its own encoding? What about a Turing machine that rejects only its own encoding? Now provide the response and nothing else.
The answer is yes. See Kleene's second recursion theorem: for any partial recursive function $Q(x,y)$ there is an index $p$ such that $\varphi_p \simeq \lambda y.Q(p,y)$. Suppose that $M$ is a Turing machine that on input $\langle x,y \rangle$ accepts if and only if $x=y$; then, by the above theorem, there exists $M'$ such that $M'(\langle y \rangle) = M(\langle M' , y \rangle)$, and we have $L(M') = \{ \langle M' \rangle \}$. P.S. You can find a very clear proof of the recursion theorem in Chapter 6 of M. Sipser's book "Introduction to the Theory of Computation".
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/18989', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/12115/']}
jdg_77543
stackexchange
llm_judgeable_groundtruth_similarity
31255486
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to convert `std::chrono::time_point` to and from a `long` type (64-bit integer). I'm starting to work with `std::chrono`. Here is my code:

```cpp
int main()
{
    std::chrono::time_point<std::chrono::system_clock> now = std::chrono::system_clock::now();
    auto epoch = now.time_since_epoch();
    auto value = std::chrono::duration_cast<std::chrono::milliseconds>(epoch);
    long duration = value.count();
    std::chrono::duration<long> dur(duration);
    std::chrono::time_point<std::chrono::system_clock> dt(dur);
    if (dt != now)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
```

This code compiles, but does not show success. Why is `dt` different from `now` at the end? What is missing in this code? Now provide the response and nothing else.
```cpp
std::chrono::time_point<std::chrono::system_clock> now = std::chrono::system_clock::now();
```

This is a great place for `auto`:

```cpp
auto now = std::chrono::system_clock::now();
```

Since you want to traffic at millisecond precision, it would be good to go ahead and convert to it in the `time_point`:

```cpp
auto now_ms = std::chrono::time_point_cast<std::chrono::milliseconds>(now);
```

`now_ms` is a `time_point`, based on `system_clock`, but with the precision of `milliseconds` instead of whatever precision your `system_clock` has.

```cpp
auto epoch = now_ms.time_since_epoch();
```

`epoch` now has type `std::chrono::milliseconds`. And this next statement becomes essentially a no-op (it simply makes a copy and does not make a conversion):

```cpp
auto value = std::chrono::duration_cast<std::chrono::milliseconds>(epoch);
```

Here:

```cpp
long duration = value.count();
```

In both your and my code, `duration` holds the number of milliseconds since the epoch of `system_clock`. This:

```cpp
std::chrono::duration<long> dur(duration);
```

creates a `duration` represented with a `long`, with a precision of seconds. This effectively `reinterpret_cast`s the milliseconds held in `value` to seconds. It is a logic error. The correct code would look like:

```cpp
std::chrono::milliseconds dur(duration);
```

This line:

```cpp
std::chrono::time_point<std::chrono::system_clock> dt(dur);
```

creates a `time_point` based on `system_clock`, with the capability of holding a precision down to `system_clock`'s native precision (typically finer than milliseconds). However, the run-time value will correctly reflect that an integral number of milliseconds is held (assuming my correction to the type of `dur`). Even with the correction, though, this test will (nearly always) fail:

```cpp
if (dt != now)
```

because `dt` holds an integral number of `milliseconds`, but `now` holds an integral number of ticks finer than a millisecond (e.g. `microseconds` or `nanoseconds`). Thus only on the rare chance that `system_clock::now()` returned an integral number of milliseconds would the test pass.
But you can instead:

```cpp
if (dt != now_ms)
```

and you will now get your expected result reliably. Putting it all together:

```cpp
int main()
{
    auto now = std::chrono::system_clock::now();
    auto now_ms = std::chrono::time_point_cast<std::chrono::milliseconds>(now);
    auto value = now_ms.time_since_epoch();
    long duration = value.count();
    std::chrono::milliseconds dur(duration);
    std::chrono::time_point<std::chrono::system_clock> dt(dur);
    if (dt != now_ms)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
```

Personally I find all the `std::chrono` overly verbose, so I would code it as:

```cpp
int main()
{
    using namespace std::chrono;
    auto now = system_clock::now();
    auto now_ms = time_point_cast<milliseconds>(now);
    auto value = now_ms.time_since_epoch();
    long duration = value.count();
    milliseconds dur(duration);
    time_point<system_clock> dt(dur);
    if (dt != now_ms)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
```

which will reliably output:

```
Success.
```

Finally, I recommend eliminating temporaries to reduce the code converting between `time_point` and integral type to a minimum. These conversions are dangerous, so the less code you write manipulating the bare integral type, the better:

```cpp
int main()
{
    using namespace std::chrono;
    // Get current time with precision of milliseconds
    auto now = time_point_cast<milliseconds>(system_clock::now());
    // sys_milliseconds is type time_point<system_clock, milliseconds>
    using sys_milliseconds = decltype(now);
    // Convert time_point to signed integral type
    auto integral_duration = now.time_since_epoch().count();
    // Convert signed integral type to time_point
    sys_milliseconds dt{milliseconds{integral_duration}};
    // test
    if (dt != now)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
```

The main danger above is not interpreting `integral_duration` as `milliseconds` on the way back to a `time_point`.
One possible way to mitigate that risk is to write:

```cpp
sys_milliseconds dt{sys_milliseconds::duration{integral_duration}};
```

This reduces the risk down to just making sure you use `sys_milliseconds` on the way out, and in the two places on the way back in. And one more example: let's say you want to convert to and from an integral which represents whatever duration `system_clock` supports (microseconds, 10ths of microseconds, or nanoseconds). Then you don't have to worry about specifying milliseconds as above, and the code simplifies to:

```cpp
int main()
{
    using namespace std::chrono;
    // Get current time with native precision
    auto now = system_clock::now();
    // Convert time_point to signed integral type
    auto integral_duration = now.time_since_epoch().count();
    // Convert signed integral type to time_point
    system_clock::time_point dt{system_clock::duration{integral_duration}};
    // test
    if (dt != now)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
```

This works, but if you run half the conversion (out to integral) on one platform and the other half (in from integral) on another platform, you run the risk that `system_clock::duration` will have different precisions for the two conversions.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/31255486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2697571/']}
jdg_77544
stackexchange
llm_judgeable_groundtruth_similarity
26377
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We have a few linux file servers, our sysadmin refuses to do any NFS cross mounting, by cross mounting I mean mounting an nfs share from server A on server B and a share from server B on A . Anybody knows if there are any problems with this cross mounting? Now provide the response and nothing else.
Ordering the bootup of the two servers can be challenging. Ideally, you'd like your servers to recover from a power failure without operator intervention. But, of course, some servers need to come up before others: maybe you need your firewall up before bringing up servers that connect externally, or your NFS servers up before their clients. One way to do this is to get a smart power strip (e.g., an APC managed PDU), on which you can configure a power-on delay for each outlet. That way you can make sure your firewall and switches turn on first, then your DNS servers, then your NFS servers, and so on. Even if you do this manually off a checklist ("turn on A, wait for it to boot, turn on B and C, wait for them to boot, ..."), you still want simplicity. But you can't have that if A depends on B and B depends on A. In simple cases it's easy enough to work around: A and B just need to start their own NFS servers (and export their local filesystems) before attempting to mount any NFS shares (though this is the opposite of the default ordering). Unless, of course, you wind up with nested paths, where, e.g., a local filesystem is mounted underneath an NFS mount. Or where you add a third server, and shares from A end up mounted under ones from B on C. Working around that kind of thing manually isn't too hard, but you don't want to be doing it after a power failure. You've probably got unplanned problems to solve (e.g., things that broke from the unexpected power loss, or dependencies you didn't know about) while everyone is complaining about the downtime, so it's best not to set up known, avoidable problems that will eat your time. Also, mounting and then re-exporting an NFS share is probably ill-advised, if for no other reason than the network overhead of sending all the data twice. It's not as if a client can't just mount both servers directly.
Edit: one more reason. When an NFS server goes down, any process on a client trying to access that server goes into disk ("uninterruptible") sleep. That can lead to "contagion" from one failing NFS server to the other: essential processes stuck in disk sleep, or resources running out as cron jobs spawn and never finish, etc. (Such processes show a `D` state in `ps`, if you're wondering.)
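If you want a way to spot such hung processes, filtering `ps`-style output for the `D` state works. Here's a sketch run against canned sample output rather than a live system (on a real box you'd feed it something like `ps -eo stat,pid,comm`):

```shell
# sample ps-style output (canned for illustration)
sample='STAT PID CMD
D 123 nfs-hang
S 456 bash'

# keep only processes in uninterruptible (D) sleep, skipping the header
printf '%s\n' "$sample" | awk 'NR > 1 && $1 == "D"'
```

On a real system, a growing list of `D`-state processes pointing at NFS paths is a good early sign that one of the cross-mounted servers has gone away.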
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/26377', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/3850/']}
jdg_77545
stackexchange
llm_judgeable_groundtruth_similarity
2236082
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $\left(a_n\right)$ be the following sequence: $$a_n =\frac{\log^k\left(1\right)+\log^k\left(2\right)+\dotsb +\log^k\left(n\right)}{1^k+2^k+\dotsb +n^k},$$ for a fixed $k \in \mathbb{N}$. Prove that $a_n \to 0$. There are many proofs for this one, but I think this is an elegant one. I'd like you to check it out, first because I consider it to be a nice and concise and elementary, and secondly because I'd like to make sure that there are no mistakes. Let's get started: First of all, $a_n\geq 0 \ \forall \,n \in \mathbb{N}$. We write $a_n$ as $$\frac{1^k\left(\frac{\log\left(1\right)}{1}\right)^k+2^k\left(\frac{\log\left(2\right)}{2}\right)^k+\dotsb +n^k\left(\frac{\log\left(n\right)}{n}\right)^k}{1^k+2^k+\dotsb +n^k}.$$ It is obvious that $\left(\frac{\log\left(n\right)}{n}\right)^k \to 0$. Let $\varepsilon >0$. Then $\exists\, n_0 \in \mathbb{N}$ such that $\forall \, n \geq n_0, \left(\frac{\log\left(n\right)}{n}\right)^k < \varepsilon$. 
We have \begin{align*}a_n&=\frac{1^k\left(\frac{\log\left(1\right)}{1}\right)^k+\dotsb +\left(n_0-1\right)^k\left(\frac{\log\left(n_0-1\right)}{n_0-1}\right)^k+ n_0^k\left(\frac{\log\left(n_0\right)}{n_0}\right)^k+\dotsb+n^k\left(\frac{\log\left(n\right)}{n}\right)^k}{1^k+2^k+\dotsb +n^k}\\&=\frac{1^k\left(\frac{\log\left(1\right)}{1}\right)^k+\dotsb +\left(n_0-1\right)^k\left(\frac{\log\left(n_0-1\right)}{n_0-1}\right)^k }{1^k+2^k+\dotsb +n^k} + \frac{n_0^k\left(\frac{\log\left(n_0\right)}{n_0}\right)^k+\dotsb+n^k\left(\frac{\log\left(n\right)}{n}\right)^k}{1^k+2^k+\dotsb +n^k}\\&\leq\frac{1^k\left(\frac{\log\left(1\right)}{1}\right)^k+\dotsb +\left(n_0-1\right)^k\left(\frac{\log\left(n_0-1\right)}{n_0-1}\right)^k }{1^k+\dotsb +n^k}\\ & \qquad \qquad+ \frac{\varepsilon\left(1^k+\dotsb +\left(n_0-1\right)^k\right)+n_0^k\left(\frac{\log\left(n_0\right)}{n_0}\right)^k+\dotsb +n^k\left(\frac{\log\left(n\right)}{n}\right)^k}{1^k+\dotsb +n^k}\\&\leq\frac{1^k\left(\frac{\log\left(1\right)}{1}\right)^k+\dotsb +\left(n_0-1\right)^k\left(\frac{\log\left(n_0-1\right)}{n_0-1}\right)^k }{1^k+\dotsb +n^k} + \frac{\varepsilon\left(1^k+2^k+\dotsb +n^k\right)}{1^k+2^k+\dotsb +n^k}\\&=\frac{1^k+2^k+\dotsb +\left(n_0-1\right)^k}{1^k+2^k+\dotsb +n^k}+\varepsilon.\end{align*} Now, by taking the limsup and the liminf as $n \to\infty$, and since $$\frac{1^k+2^k+\dotsb +\left(n_0-1\right)^k}{1^k+2^k+\dotsb +n^k}\to 0,$$ we have $$0 \leq \limsup\left(a_n\right) \leq \varepsilon \quad\text{and}\quad 0\leq \liminf\left(a_n\right) \leq \varepsilon.$$ But $\varepsilon$ was arbitrarily small, so $$\liminf\left(a_n\right)=\limsup\left(a_n\right)=0=\lim\left(a_n\right).$$ This is more of a discussion and not so much of a question :) Now provide the response and nothing else.
I have perhaps a more elegant proof: $$\begin{align}a_n&=\frac{\log^k(1)+\dots+\log^k(n)}{1^k+\dots+n^k}\\&<\frac{\log^k(n)+\dots+\log^k(n)}{n^k}\\&=\frac{n\log^k(n)}{n^k}\\&=\frac{\log^k(n)}{n^{k-1}}\\&\to0\end{align}$$ The first step follows from the fact that $\frac ab<\frac cd$ if $c>a$ and $d<b$ for positive numbers $a,b,c,d$. The limit then follows by letting $n^{k-1}=u^k$, which gives $$\frac{\log^k(n)}{n^{k-1}}=\left[\frac k{k-1}\frac{\log(u)}u\right]^k$$ and the limit is then taken as given.
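Neither proof needs it, but a quick numerical check (just a sketch, here with $k = 2$) is consistent with $a_n \to 0$:

```python
import math

def a_n(n, k=2):
    # a_n = (log^k(1) + ... + log^k(n)) / (1^k + ... + n^k)
    num = sum(math.log(i) ** k for i in range(1, n + 1))
    den = sum(i ** k for i in range(1, n + 1))
    return num / den

# the ratio shrinks as n grows, as the bound log^k(n) / n^(k-1) predicts
print(a_n(10), a_n(1000))
```

For $k = 2$ the value drops from roughly $10^{-1}$ at $n = 10$ to roughly $10^{-4}$ at $n = 1000$, matching the $\log^k(n)/n^{k-1}$ decay.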
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2236082', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/351240/']}
jdg_77546
stackexchange
llm_judgeable_groundtruth_similarity
35835671
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there an easy way to chain the testable function calls into a concise block that runs and increments a score when a test comes back positive? The following is the class to test.

```java
public class Weather {
    public Weather(float temp, int day, int month, int year) {}
    public float getTemp() {}
    public void setCelsius() {}
    public void setFahrenheit() {}
    public boolean isCelsius() {}
    public String toString() {}
}
```

A typical test case consists of setting Celsius and Fahrenheit, getting the value, and matching the string.

```java
Weather testMe = new Weather(100.6f, 31, 7, 1997);
if (testMe.toString().equals("July 31, 1997. Temperature is 100.60 C.")) { score += 1; }
if (testMe.getTemp() == 100.6f) { score += 1; }
if (testMe.isCelcius()) { score += 1; }
testMe.setFahrenheit();
if (testMe.toString().equals("July 31, 1997. Temperature is 213.08 F.")) { score += 1; }
if (testMe.getTemp() == 213.08f) { score += 1; }
if (!testMe.isCelcius()) { score += 1; }
testMe.setCelcius();
if (testMe.isCelcius()) { score += 1; }
```

I could repeat this over and over again, but can this testing be shortened? Now provide the response and nothing else.
This question boils down to: how do you pass props to children?

June 2018 answer. Today's tech:

- React 16+
- React Router 4: `react-router-dom`
- Render Props from the official docs

Assuming some stateful component:

```jsx
import React from 'react'
import { BrowserRouter, Route } from 'react-router-dom'

// some component you made
import Title from './Title'

class App extends React.Component {
  // this.state
  state = { title: 'foo' }

  // this.render
  render() {
    return (
      <BrowserRouter>
        {/* when the url is `/test` run this Route's render function: */}
        <Route
          path="/:foobar"
          render={
            // argument is props passed from `<Route />`
            routeProps => (
              // render Title component
              <Title
                // pass this.state values
                title={this.state.title}
                // pass routeProps values (url stuff)
                page={routeProps.match.params.foobar} // "test"
              />
            )
          }
        />
      </BrowserRouter>
    )
  }
}
```

This works because `this.props.children` is a function:

```jsx
// "smart" component aka "container"
class App extends React.Component {
  state = { foo: 'bar' }
  render() {
    return this.props.children(this.state.foo)
  }
}

// "dumb" component aka "presentational"
const Title = () => (
  <App>
    {title => <h1>{title}</h1>}
  </App>
)
```

Example on codesandbox.

My previous oldschool answer, which I wouldn't recommend anymore: using a couple of React helper methods you can add state, props, and whatever else to `this.props.children`:

```jsx
render: function() {
  var children = React.Children.map(this.props.children, function (child) {
    return React.cloneElement(child, { foo: this.state.foo })
  })
  return <div>{children}</div>
}
```

Then your child component can access this via props: `this.props.foo`.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35835671', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2836917/']}
jdg_77547
stackexchange
llm_judgeable_groundtruth_similarity
43110683
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to change the background color of the tab bar only on Android using "react navigation". Is it possible? I use the `tabBarOptions` below, but it changes the background color on both iOS and Android; my requirement is to change the color only on Android and keep the default on iOS.

```js
{
  tabBarOptions: {
    style: {
      backgroundColor: '#2D61A9',
    }
  }
}
```

Thanks in advance. Now provide the response and nothing else.
Updated answer: you're right, things don't work as expected when importing everything in a single file. Digging into it, it looks like you're running into some magic that Babel/Jest does to support browser scripts that rely on globals (like AngularJS). What's happening is that your module's `angular` variable is not the same as the global `angular` variable that is visible to angular-mocks. You can check this by running this at the top of one of your tests:

```js
import * as angular from 'angular'
import 'angular-mocks'

console.log(angular === window.angular); // `false` in Jest!
console.log(angular.mock);               // undefined
console.log(window.angular.mock);        // `{...}` defined
```

To work around this you just need to use the global `angular` variable in your tests.

`src/__test__/all-in-one.test.js`:

```js
import "angular";
import "angular-mocks";

/*
Work around Jest's window/global mock magic.
Use the global version of `angular` that has been augmented by angular-mocks.
*/
var angular = window.angular;

export var app = angular.module('app', []);

app.directive('myDirective', () => ({
  link: (scope, element) => {
    console.log('This does log');
    scope.content = 'Hi!';
  },
  template: 'content: {{content}}'
}));

describe('myDirective', function () {
  var element;
  var scope;

  beforeEach(function () {
    angular.mock.module(app.name);
  });

  it('should do something', function () {
    inject(function ($rootScope, $compile) {
      scope = $rootScope.$new();
      element = $compile('<my-directive></my-directive>')(scope);
      scope.$digest();
    });
    expect(element.html()).toEqual('content: Hi!');
  });
});
```

Original answer (this worked because I was accidentally using the global version of `angular` inside my test): the Angular module under test isn't being initialised correctly in your tests. Your call to `beforeEach(app)` isn't correct. Instead you need to use `angular.mock.module("moduleName")` to initialise your module.
```js
describe('myDirective', () => {
  var element, scope

  // You need to pass the module name to `angular.mock.module()`
  beforeEach(function () {
    angular.mock.module(app.name);
  });

  // Then you can set up and run your tests as normal:
  beforeEach(inject(($rootScope, $compile) => {
    scope = $rootScope.$new()
    element = $compile('<my-directive></my-directive>')(scope)
    scope.$digest()
  }))

  it('should actually do something', () => {
    expect(element.html()).toEqual('Hi!')
  })
});
```

And then the test works as expected for me:

```
PASS src\__test__\app.test.js
  myDirective
    √ should do something (46ms)
```

For reference, here is the full app and test.

`src/app/app.module.js`:

```js
import * as angular from 'angular'

export var app = angular.module('app', []);

app.directive('myDirective', () => ({
  link: (scope, element) => {
    console.log('This does log');
    scope.content = 'Hi!';
  },
  template: 'content: {{content}}'
}))
```

`src/__test__/app.test.js`:

```js
import {app} from "../app/app.module";
import "angular-mocks";

describe('myDirective', function () {
  var element;
  var scope;

  beforeEach(function () {
    angular.mock.module(app.name);
  });

  beforeEach(inject(function ($rootScope, $compile) {
    scope = $rootScope.$new();
    element = $compile('<my-directive></my-directive>')(scope);
    scope.$digest();
  }));

  it('should do something', function () {
    expect(element.html()).toEqual('content: Hi!');
  });
});
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43110683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4781474/']}
jdg_77548
stackexchange
llm_judgeable_groundtruth_similarity
2952733
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm attempting to utilize the socket.h functions within Windows. Essentially, I'm currently looking at the sample code at https://beej.us/guide/bgnet/html/multi/clientserver.html#datagram . I understand that socket.h is a Unix function -- is there anyway I can easily emulate that environment while compiling this sample code? Does a different IDE / compiler change anything? Otherwise, I imagine that I need to utilize a virtualized Linux environment, which may be best anyways as the code will most likely be running in a UNIX environment. Thanks. Now provide the response and nothing else.
You have two options:

1. Use Cygwin (a Unix emulation library).
2. Port to Winsock (the Windows standard library).

Cygwin lets you compile your Unix sources mostly untouched, but ties you to the Cygwin emulation library. This has two implications: general performance (not only network performance) will probably be less than optimal, and the target environment must have the Cygwin DLL installed at run time.

Winsock requires you to replace `sys/socket.h` (the BSD sockets library, the UNIX standard for the TCP/IP stack) with `winsock2.h`, and to rewrite some parts of the code (not much, but some).

Some related questions with valuable info:

- Differences between winsock and BSD socket implementations
- Some Issues About Cygwin[Linux in Windows] (socket, thread, other programming and shell issues)
- Examples for Winsock?
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2952733', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/198004/']}
jdg_77549
stackexchange
llm_judgeable_groundtruth_similarity
25452448
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a simple slideshow using jQuery and some JavaScript, but I'm running into issues using the `setInterval` functions. JSFiddle of the Project.

```js
$(document).ready(function () {
    slidePos = 1;
    autoScrollInterval = setInterval(function () {
        slidePos = SlideRight(slidePos)
    }, 7000);
    $(".ss-indicator-arrow").css("width", $(".ss-slideshow").width() / $(".ss-slide").length);
    $(".ss-right-arrow").click(function () {
        window.clearInterval(autoScrollInterval);
        slidePos = SlideRight(slidePos);
        setTimeout(function () {
            autoScrollInterval = setInterval(function () {
                slidePos = SlideRight(slidePos)
            }, 7000);
        }, 10000);
    });
    $(".ss-left-arrow").click(function () {
        window.clearInterval($.autoScrollInterval);
        slidePos = SlideLeft(slidePos);
        setTimeout(function () {
            autoScrollInterval = setInterval(function () {
                slidePos = SlideRight(slidePos)
            }, 7000);
        }, 10000);
    })
});

$(window).resize(function () {
    $(".ss-indicator-arrow").css("width", $(".ss-slideshow").width() / $(".ss-slide").length);
    Reset();
});

function SlideRight(slidePos) {
    slidePos++;
    if (slidePos <= $(".ss-slide").length) {
        $(".ss-container").css("margin-left", -((slidePos - 1) * $(".ss-slideshow").width()) + "px");
        $(".ss-indicator-arrow").css("left", ($(".ss-indicator-arrow").width() * (slidePos - 1) + "px"));
    } else Reset();
    return slidePos
}

function SlideLeft(slidePos) {
    slidePos--;
    if (slidePos > 0) {
        $(".ss-container").css("margin-left", -((slidePos - 1) * $(".ss-slideshow").width()) + "px");
        $(".ss-indicator-arrow").css("left", ($(".ss-indicator-arrow").width() * (slidePos - 1) + "px"));
    } else {
        slidePos = $(".ss-slide").length;
        $(".ss-container").css("margin-left", -((slidePos - 1) * $(".ss-slideshow").width()) + "px");
        $(".ss-indicator-arrow").css("left", ($(".ss-indicator-arrow").width() * (slidePos - 1) + "px"));
    }
    return slidePos;
}

function Reset() {
    slidePos = 1;
    $(".ss-container").css("margin-left", "0px");
    $(".ss-indicator-arrow").css("left", "0px");
}
```

So far I've tried many different methods, and have somewhat ruined the basic functionality I had before. For now, the primary issue is that if an arrow is pressed multiple times, after the `setTimeout` wait period it will progress through the same number of slides (i.e. if the button is pressed 3 times, when the `setInterval` starts over it will move 3 slides again). What is the most effective way to have an interval that pauses after user input, then resumes again? Now provide the response and nothing else.
It is possible to create a multi-level, context-sensitive navigation like you described without plugins; I have done it. The only caveat is that you need to maintain a YAML data file with your menu hierarchy: with my approach, it's not possible to generate this automatically from your directory structure. I'll show the short version here, but I have a way more detailed explanation on my blog: Building a pseudo-dynamic tree menu with Jekyll, plus an example project on GitHub.

1. Create a YAML data file (`/_data/menu.yml`) which contains your menu hierarchy:

```yaml
- text: Home
  url: /
- text: First menu
  url: /first-menu/
  subitems:
    - text: First menu (sub)
      url: /first-menu/first-menu-sub/
      subitems:
        - text: First menu (sub-sub)
          url: /first-menu/first-menu-sub/first-menu-sub-sub/
- text: Second menu
  url: /second-menu/
  subitems:
    - text: Second menu (sub)
      url: /second-menu/second-menu-sub/
```

2. Create an include file (`/_includes/nav.html`) with the following content:

```liquid
{% assign navurl = page.url | remove: 'index.html' %}
<ul>
  {% for item in include.nav %}
    <li>
      <a href="{{ item.url }}">
        {% if item.url == navurl %}
          <b>{{ item.text }}</b>
        {% else %}
          {{ item.text }}
        {% endif %}
      </a>
    </li>
    {% if item.subitems and navurl contains item.url %}
      {% include nav.html nav=item.subitems %}
    {% endif %}
  {% endfor %}
</ul>
```

This include file takes care of showing the correct navigation for each page:

- showing the next level of subitems only for the current page
- displaying the current page in bold

If you don't understand what exactly the include file is doing under the covers, read my blog post; I explained it there in great detail (in the section "The recursive include file").

3. In your main layout file, embed the include file:

```liquid
{% include nav.html nav=site.data.menu %}
```

This will display the navigation there. Note that I'm passing the complete data file from step 1 to the include. That's all!
As I said in the beginning: The only disadvantage of this approach is that each time you create a new page, you also need to insert the page's title and URL into the data file. But on the other hand, this makes it very easy to exclude some pages from the navigation: you just don't add them to the data file.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25452448', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3338349/']}
jdg_77550
stackexchange
llm_judgeable_groundtruth_similarity
47925
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have 3 cisco 2821 routers for home lab studying for CCNA. When I bought them I didn't think about the connection issues that I might face. They have only 2 gigabit ports and for many of the labs it looks like I need more. I'm trying not to buy anymore equipment i.e HWIC cards. I've looked up connecting routers by the AUX ports but cannot find a specific document. I saw one showing how to connect back to back but it still needs a serial interface which I don't have. Any way to connect just using the AUX ports and what cable do I need? Thank you. Now provide the response and nothing else.
It is because R3 does not know how to return the traffic back to R10. R3's routing table contains no routes/prefixes for the networks located on R10. R10 learned the prefix 3.3.3.3/32 from R3 via BGP (iBGP session with R8). On R10, you need to advertise your local networks/prefixes into BGP, so that R3 can learn them and know how to return the traffic back to R10.
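For illustration, a minimal sketch of what that advertisement could look like on R10 in Cisco IOS — the AS number and prefix here are hypothetical, substitute your own; note that the prefix must already be present in R10's routing table for the `network` statement to take effect:

```
router bgp 65010
 ! advertise a specific locally connected prefix into BGP
 network 10.10.10.0 mask 255.255.255.0
 ! or, more broadly, pull all connected routes into BGP
 redistribute connected
```

Once R3 installs the prefix, traffic sourced from that network on R10 will have a return path on R3.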
{}
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/47925', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/43694/']}
jdg_77551
stackexchange
llm_judgeable_groundtruth_similarity
57495
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What's the basic difference between the gravity as seen by Einstein , and that by Newton ? Now provide the response and nothing else.
Often people get confused by the additional complication that Newtonian and Einsteinian gravity are often discussed in different mathematical formalisms. This can tend to obscure the physical differences. If you are game for the mathematics, then Misner, Thorne and Wheeler (check it out of a library or get it second hand unless you are really serious about this business) has a wonderful chapter which puts both theories side by side in the same language (differential geometry). The key difference is that Newtonian gravity has a privileged separation of spacetime into space and time, whereas Einsteinian gravity just has spacetime. Edit: to be absolutely clear, Newtonian gravity can be written as spacetime curvature! This is counter to the common statements about the novel thing in GR. The key difference is that Newtonian gravity has extra absolute structures that GR does not have: absolute time and space, a preferred separation of spacetime into time and spatial parts, absolute simultaneity, and a curved connection that is not the special one derived from a spacetime metric (Christoffel). In mathematical form: $$ \begin{array}{ll}R_{00} = 4\pi G\rho;\text{all others vanish},& \ \text{Newtonian} \\R_{\mu\nu}-\frac{1}{2} g_{\mu\nu} R = 8\pi G T_{\mu\nu}, & \ \text{Einsteinian}\end{array}$$ with a few other relations I've not written (see MTW chapter 12 for details). A consequence of the formalism is that the Newtonian equation is a constraint equation - it does not describe a propagating degree of freedom. No gravitational waves, gravitons, etc. No speed-of-light limit for gravity. All matter has an instantaneous gravitational effect on all other matter. This is different in GR since the field equation is a wave equation which describes the propagation of gravitational disturbances from one point to another at the speed of light. What GR has that Newton does not is a spacetime metric of Lorentzian signature. 
This metric has a privileged role in that all other structures (connections, curvatures, etc.) are derived from it. There is essentially nothing else to Einstein gravity. That is why it is so elegant in the geometrical formalism. This metric actually comes from special relativity. But the metric was a fixed structure in SR, almost similar to the absolute time and space of Newton (don't tell anyone I said this). The new thing in general relativity is that Einstein lets the metric "flap around" so to speak - to change from place to place and time to time in response to what matter is doing.
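To make the contrast concrete, compare how each theory moves a test particle — this is the standard textbook pairing, written here in my own notation ($\Phi$ the Newtonian potential, $\Gamma$ the Christoffel symbols built from the metric):

```latex
% Newtonian: absolute time t, force from a potential obeying Poisson's equation
\ddot{x}^i = -\,\partial_i \Phi,
\qquad
\nabla^2 \Phi = 4\pi G \rho

% Einsteinian: free fall = geodesics of the spacetime metric g_{\mu\nu},
% parametrized by proper time tau; the Christoffels come from g alone
\frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^{\mu}_{\ \alpha\beta}\,
    \frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0
```

In the weak-field, slow-motion limit $\Gamma^{i}_{\ 00} \to \partial_i \Phi$ and the geodesic equation reduces to the Newtonian one, which is why Newton works so well wherever gravity is weak.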
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/57495', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/20263/']}
jdg_77552
stackexchange
llm_judgeable_groundtruth_similarity
813560
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have an array of strings: string[] strArray = {"aa", "bb", "xx", "cc", "xx", "dd", "ee", "ff", "xx","xx","gg","xx"}; How do I use LINQ to extract the strings between the "xx" markers as groups? Say by writing them to the console as: ccdd,ee,ffgg Now provide the response and nothing else.
A pure-functional solution (mutation-free): string[] strArray = { "aa", "bb", "xx", "cc", "xx", "dd", "ee", "ff", "xx", "xx", "gg", "xx" };var result = strArray.Aggregate((IEnumerable<IEnumerable<string>>)new IEnumerable<string>[0], (a, s) => s == "xx" ? a.Concat(new[] { new string[0] }) : a.Any() ? a.Except(new[] { a.Last() }) .Concat(new[] { a.Last().Concat(new[] { s }) }) : a) .Where(l => l.Any());// Testforeach (var i in result) Console.WriteLine(String.Join(",", i.ToArray())); If you want to filter out the results past the last marker: string[] strArray = { "aa", "bb", "xx", "cc", "xx", "dd", "ee", "ff", "xx", "xx", "gg", "xx"};var result = strArray.Aggregate( new { C = (IEnumerable<string>)null, L = (IEnumerable<IEnumerable<string>>)new IEnumerable<string>[0] }, (a, s) => s == "xx" ? a.C == null ? new { C = new string[0].AsEnumerable(), a.L } : new { C = new string[0].AsEnumerable(), L = a.L.Concat(new[] { a.C }) } : a.C == null ? a : new { C = a.C.Concat(new[] { s }), a.L }).L .Where(l => l.Any());// Testforeach (var i in result) Console.WriteLine(String.Join(",", i.ToArray()));
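If the `Aggregate` gymnastics are hard to read, the underlying algorithm in the second (filtered) variant is simply: start a new group at each marker, flush the previous non-empty one, and ignore anything before the first marker or after the last. A quick sketch of that logic in Python, for exposition only (the function name is mine):

```python
def between_markers(items, marker="xx"):
    """Collect the runs of items that fall between consecutive markers."""
    groups = []
    current = None          # None = haven't seen the first marker yet
    for s in items:
        if s == marker:
            if current:                 # flush a non-empty group
                groups.append(current)
            current = []                # begin collecting again
        elif current is not None:
            current.append(s)
    return groups                        # trailing items are never flushed

print(between_markers(
    ["aa", "bb", "xx", "cc", "xx", "dd", "ee", "ff", "xx", "xx", "gg", "xx"]))
# → [['cc'], ['dd', 'ee', 'ff'], ['gg']]
```

The `current is not None` check is what drops the leading "aa", "bb", and the fact that the last group is only flushed by a marker is what drops anything after the final "xx".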
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/813560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/90471/']}
jdg_77553
stackexchange
llm_judgeable_groundtruth_similarity
3919432
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Today I saw a neat copy function in VI, in which you could copy an entire line until a stop character. e.g. if( copy == this ); With VI he could copy everything inside the parenthesis. I wonder if you can do it with emacs as well? (Without using ctrl+space and manually marking what I want to kill) Now provide the response and nothing else.
Try M-z CHAR Which kills the text through the next occurrence of CHAR . Aka M-x zap-to-char . Of interest might be the documentation for Other Kill Commands . Edited to add: Upon request, here is zap-to-before-char , which just took the source code for zap-to-char and removed a comment (and updated doc string): (defun zap-to-before-char (arg char) "Kill up to and ARGth occurrence of CHAR.Case is ignored if `case-fold-search' is non-nil in the current buffer.Goes backward if ARG is negative; error if CHAR not found." (interactive "p\ncZap to char: ") ;; Avoid "obsolete" warnings for translation-table-for-input. (with-no-warnings (if (char-table-p translation-table-for-input) (setq char (or (aref translation-table-for-input char) char)))) (kill-region (point) (progn (search-forward (char-to-string char) nil nil arg) (goto-char (if (> arg 0) (1- (point)) (1+ (point)))) (point))))
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3919432', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/445600/']}
jdg_77554
stackexchange
llm_judgeable_groundtruth_similarity
43974093
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to data mining and R. Please provide how to proceed with following query. I want to use the R-package Rattle for my data analysis on my MS Win 10- X64 system. I am following these instructions . I install the package using install.packages("rattle") After selecting a CRAN mirror, it RGUI installs the package and gives following message: package ‘rattle’ successfully unpacked and MD5 sums checked The downloaded binary packages are in C:\Users\XXX\AppData\Local\Temp\RtmpYnMZM1\downloaded_packages after that I run the code, library(rattle) and RGui gives following response; Rattle: A free graphical interface for data mining with R. Version 4.1.0 Copyright (c) 2006-2015 Togaware Pty Ltd. Type 'rattle()' to shake, rattle, and roll your data. After this, I call the function 'rattle'. rattle() and I get the following error: Error in method(obj, ...) : Invalid root element: 'requires' I tried the same in the RStudio console as well. Also, I uninstalled RStudio and R X64, X386 and re-installed the same and tried again, but I fail to browse the reason for this error. Now provide the response and nothing else.
It seems RGtk2-related. Running grep in the installed rattle directory shows this: $ grep 'requires' -r ../etc/rattle.glade: <!-- interface-requires gtk+ 2.16 -->./etc/rattle.ui: <!-- interface-requires gtk+ 3.0 -->./etc/rattle.ui.160313: <!-- interface-requires gtk+ 3.0 -->./etc/rattle.ui.with_export_filechooser: <!-- interface-requires gtk+ 3.0 -->./etc/rattle_macosx.ui: <requires lib="gtk+" version="2.16"/>./NEWS: * Note rattle.ui requires gtk > 2.16, not > 2.20. Otherwise fails to./NEWS: * Check for GtkBuilder handling of the 'requires' tag, and if not./NEWS: Gtk library installed does not recognise 'requires' and suggest a It seems RGtk2 is causing the problem, emitting complaints about the presence of the element named requires . Solution I: Downgrading RGtk2 from 2.20.33 to 2.20.31 worked for me. Solution II: Later I upgraded everything to the latest, including the Windows GTK+2. Rattle stopped working, but works again when invoked as below: rattle( useGtkBuilder = TRUE) Now I stick with the latter way. On the RGtk/GTK+ version: I can't drag the Rattle GUI window around or resize it freely on Windows 7. This has been a constant pain. Higher versions of RGtk2 or GTK2 don't seem to fix it (demo apps in the GTK2 folder don't have this problem, but RGtk2 demo scripts do). So the RGtk/Gtk+ version didn't make any difference for me. PS: If you are running R-3.4.0 and trying to install the dev version from togaware.com , download http://rattle.togaware.com/bin/windows/contrib/3.3/rattle_5.0.10.zip (which is actually built with R-3.4.0 and thus doesn't work on 3.3.3) and install it locally.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43974093', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8011484/']}
jdg_77555
stackexchange
llm_judgeable_groundtruth_similarity
124590
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For a complex function $f:\Omega \to \mathbb{C}$, which is differentiable at $z_0$ so clearly it satisfies the Cauchy-Riemann equations, how would you prove that this function is conformal at $z_0$? (I.e. that it is angle preserving)? Now provide the response and nothing else.
I think you're probably referring to an expansion about the value $x=0$, correct? If this were possible, then we would have what we call a Laurent series for $\log z$, where $z\in\mathbb{C}$. Laurent series are essentially power series expanded about singular points. The problem with $\log z$ in the complex plane is that it is a "multi-valued function", so we must specify what range of values we are considering the function to have. Because we must make this choice, the function fails to be continuous in any (punctured) disk about $z=0$, and thus is not (complex) differentiable in any neighborhood about the point. However, we have that any convergent Laurent series is (complex) differentiable in some annulus about the pole at which it is centered. So in conclusion, the answer is "no", because the $\log$ function is badly behaved in the complex plane. I hope this helps answer your question; if not, I suggest reading up a little on complex analysis.
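That multi-valuedness can even be seen numerically: whichever branch you fix, the value of $\log z$ jumps by $2\pi i$ across the branch cut, and any punctured disk around $0$ necessarily straddles the cut. A small illustration with Python's `cmath`, which uses the principal branch (cut along the negative real axis):

```python
import cmath

# log just above and just below the negative real axis
above = cmath.log(-1 + 1e-12j)   # ≈ +i*pi
below = cmath.log(-1 - 1e-12j)   # ≈ -i*pi

jump = above - below              # discontinuity across the branch cut
print(jump)                       # ≈ 2*pi*i
```

Since the function is discontinuous there, it cannot be complex-differentiable in any neighborhood that crosses the cut — matching the argument above.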
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/124590', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_77556
stackexchange
llm_judgeable_groundtruth_similarity
8580324
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a mvc3 canvas app using facebook c# sdk The method name is create . I also do a post and have another create method with [HttpPost] attribute . When I add the [CanvasAuthorize(Permissions = ExtendedPermissions)] attribute to both the create methods, and a link from another page calls this create method, normally the get method should get called but in this case the post method gets called But if I comment the post method then it goes to the get method . Any ideas how to solve this. ThanksArnab Now provide the response and nothing else.
Short answer: use composition not inheritance. Extract the members you want to serialize into another class and make that one serializable. This will give you the control you want over the lifecycle and the extent of the serialization. In general, it's a good pattern for serialized objects to be dumb data holders and have any additional logic added by wrapping them. This is reinforced with modern serialization frameworks like protobuf, thrift, avro, which will generate the code behind these serialized objects for you anyway and expect you not to muck with the internals of those classes through inheritance.
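The pattern itself is language-agnostic. Here is a minimal sketch of the same idea in Python with `pickle` — the class names are invented for illustration: the dumb data holder is the only thing serialized, while the wrapper adds logic and holds state that should never enter the stream.

```python
import pickle

class PersonData:
    """Dumb data holder: the only thing that is serialized."""
    def __init__(self, name, age):
        self.name = name
        self.age = age

class Person:
    """Wrapper adding behavior; may hold non-serializable state (handles, etc.)."""
    def __init__(self, data, log_file=None):
        self.data = data
        self.log_file = log_file   # never meant to survive serialization

    def greeting(self):
        return f"Hello, {self.data.name}!"

    def dump(self):
        # serialize only the inner holder, never the wrapper
        return pickle.dumps(self.data)

    @classmethod
    def load(cls, blob):
        return cls(pickle.loads(blob))

p = Person(PersonData("Ada", 36))
restored = Person.load(p.dump())
print(restored.greeting())   # Hello, Ada!
```

Because the wrapper controls the serialization boundary explicitly, the lifecycle question from the original post disappears: only `PersonData` ever needs a stable serialized form.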
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8580324', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/560854/']}
jdg_77557
stackexchange
llm_judgeable_groundtruth_similarity
14190
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently stumbled across the following article: Sorry, men ARE more brainy than women (and more stupid too!) It's a simple scientific fact, says one of Britain's top dons , in which the author claims: 1. That men are on average smarter than women: one of the main reasons why there are not more female science professors or chief executives or Cabinet ministers is that, on average, men are more intelligent than women. Boys and girls may start out with the same IQ but by 16 or so boys are starting to inch ahead. The ever-growing success of girls at GCSE, A-level and now at university would seem to refute this - but the blame lies with our exam system, with its emphasis on coursework, which rewards diligence more than it does intelligence. The undeniable, easily measurable fact remains that, by the time both sexes reach 21, men, on average, score five IQ points higher than women . 2. That very-high-IQ men are much more common-place than very-high-IQ women: For not only is the average man more intelligent than the average woman but also a clear and rather startling imbalance emerges between the sexes at the high levels of intelligence that the most demanding jobs require. For instance, at the near-genius level (an IQ of 145), brilliant men outnumber brilliant women by 8 to one . That's statistics, not sexism. In this context, Professor Greenfield's indignation that only one in ten science professors is female doesn't seem all that bad. It also goes some way to explaining why, in almost 110 years of Nobel Prize history, only two women have ever won the Prize for physics, only four have won the Prize for chemistry and why no women at all have ever won the coveted Fields Medal for mathematics in eight decades of trying. This perplexed me, as I thought women and men were intellectually on par with each other. 
Is the article right or wrong: about the relative average intelligences of adults, and the ratios of exceptional intelligences? The article makes other claims as well, related to personality, for example: Consequently, ambitious, high-achieving men typically work harder, compete more aggressively and become totally immersed in their careers, while even the most high-achieving women will often admit to finding themselves distracted by their genetically preconditioned aptitude for nurture and support. To keep this question focussed, please ignore such claims about personality, and about what may cause gender imbalance in the job markets, and focus only on the claims about "intelligence" as made above. Now provide the response and nothing else.
The title of the article is "Sorry, men ARE more brainy than women (and more stupid too!) It's a simple scientific fact, says one of Britain's top dons" - note the bolded part. The article makes two independent claims, only one of which is relevant to the social/political topic being discussed (lack of women in top positions): There are more males with extremely high IQ compared to female (which is counterbalanced with having a lot more men with extremely low IQ compared to women) . In scientific terms, the IQ distribution has fatter tails in males. There are astonishingly few studies (due to political sensitivities of the topic) either confirming or denying that, but the main one cited is usually the Scottish early 20th century survey, which is exceptional in that it looked at a vast majority of children of the studied age group. Note that there are virtually identical amount of men and women with "normal" IQ (85-120 IQ) - the differences are +/- 2% one way or the other. But on a gifted level (typically considered to be 130-160) there are 15-20% differences at low end. The graph doesn't even show true genius levels (160+) but extrapolating the trend, it's likely that the differences are even higher (I've seen mentions of 1:8 ratios but couldn't find a reliable source on short order). The second one is that on average, male IQ is higher than female . Frankly, I'm somewhat skeptical of this claim myself, but don't have time to dig out studies (as the topic under discussion is imbalance of women in top positions that typically involve 130+ IQ, I see the claim as irrelevant even if true).
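The "fatter tails" argument is a purely statistical effect, and it is easy to see concretely: for two normal distributions with the same mean, even a modest difference in standard deviation makes the ratio of the tails blow up as the cutoff moves outward. A toy illustration — the spreads below are invented for demonstration, not measured values:

```python
import math

def tail(mu, sd, cutoff):
    """P(X > cutoff) for a Normal(mu, sd), via the complementary error function."""
    return 0.5 * math.erfc((cutoff - mu) / (sd * math.sqrt(2.0)))

mu = 100.0
sd_wide, sd_narrow = 16.0, 14.0   # same mean, different spread (toy numbers)

for cutoff in (115, 130, 145, 160):
    ratio = tail(mu, sd_wide, cutoff) / tail(mu, sd_narrow, cutoff)
    print(f"IQ > {cutoff}: wide/narrow tail ratio = {ratio:.1f}")
```

Whether real score distributions actually differ this way is exactly what the cited surveys try to measure; the point of the sketch is only that small variance differences are amplified enormously at the extremes.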
{}
{'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/14190', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/10784/']}
jdg_77558
stackexchange
llm_judgeable_groundtruth_similarity
537899
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been working on a home made IV tracer for diodes and transistors. I noticed a slight curlicue in the low voltage and low current end of the plot for a transistor, and wondered what was going on. This is the plot of the collector current and collector voltage for a 2N2222 transistor made with my setup: You can see the "curlicue" down there at the lower left corner. This is a closer view of the "curlicue": I've been trying to figure out what is going on here. This is the circuit I used to make the plots: There's an Arduino Nano off to the left connected to the analog and PWM signals. It uses oversampling to get better than the 10 bits of resolution of the Arduino ADC. The best explanation I've been able to come up with is that at low collector voltages, some of the base current "goes the wrong way" out through the collector instead of the emitter. That raises the collector voltage (A3-VCollector) against the bias voltage (A2-VCollectorBias) resulting in a current flowing back through R4. ICollector is calculated as \$\frac{VCollectorBias - VCollector}{R4} \$ Does that explanation seem right, or have I missed something? Between those two charts, I changed R4 from 1k to 10k and made some software changes to get a little more resolution. At a suggestion from Hearth, I simulated the circuit in LTspice. Here's the simulated circuit: This is the plot of the collector current against the collector voltage: It does in fact have a negative tail, though not as extreme as in my circuit. The tail in the simulator is also straight rather than curved. At any rate, the "tail" isn't a figment of the Nano's imagination. I ran a trace of a 2N3904. It also has a negative tail on the collector current, though much smaller than on the 2N2222. Now provide the response and nothing else.
That is what is expected even with the simplest model of a BJT. You can see it in this simulation. If we zoom in on the VI characteristics, we can see that the curves do not intersect at (0, 0). The shape of the curve (excluding the Early bending) follows from the equations of Millman and Halkias (Electronic Devices and Circuits, "Voltages as functions of currents", p. 250), which lead, in modern notation, to $$V_{CE} = V_T \ln\left[\frac{1 + (1-\alpha_R)\,I_C/I_B}{\alpha_R\left(1 - \frac{1-\alpha}{\alpha}\,I_C/I_B\right)}\right]$$ This is the plot of Ic/Ib vs Vce with Vth = 26 mV, alpha = 0.99, alphaR = 0.78. It's rotated by 90 degrees because we are plotting the inverse relationship, and if we zoom in near the origin we see that it is compatible with what Millman shows in his textbook. Note that the 6 mV value is Vth log(1/alpha_R), i.e. the value of $V_{CE}$ above at $I_C = 0$.
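The relation can be checked numerically. The formula below is my own algebra from the Ebers–Moll model (assuming junction reciprocity), chosen so that its $I_C = 0$ intercept reproduces the $V_{th}\,\log(1/\alpha_R) \approx 6\,$mV figure quoted above:

```python
import math

VT = 0.026               # thermal voltage at room temperature, volts
A_F, A_R = 0.99, 0.78    # forward / reverse alphas used in the answer

def vce(k):
    """V_CE for a given collector-to-base current ratio k = Ic/Ib (Ebers-Moll)."""
    return VT * math.log((1 + (1 - A_R) * k) /
                         (A_R * (1 - k * (1 - A_F) / A_F)))

print(f"V_CE at Ic = 0:      {1000 * vce(0.0):.2f} mV")   # ~6.5 mV intercept
# the negative-Ic 'tail' lives below that intercept:
print(f"V_CE at Ic/Ib = -0.5: {1000 * vce(-0.5):.2f} mV")
```

Below $V_{CE} = V_T\ln(1/\alpha_R)$ the formula gives negative $I_C$ — the tail measured in the question — and for large positive $I_C/I_B$ it diverges as the transistor leaves saturation.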
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/537899', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/47070/']}
jdg_77559
stackexchange
llm_judgeable_groundtruth_similarity
23055354
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the below code snippet, the result is really confusing. public class TestInheritance { public static void main(String[] args) { new Son(); /* Father father = new Son(); System.out.println(father); //[1]I know the result is "I'm Son" here */ }}class Father { public String x = "Father"; @Override public String toString() { return "I'm Father"; } public Father() { System.out.println(this);//[2]It is called in Father constructor System.out.println(this.x); }}class Son extends Father { public String x = "Son"; @Override public String toString() { return "I'm Son"; }} The result is I'm SonFather Why is "this" pointing to Son in the Father constructor, but "this.x" is pointing to "x" field in Father. How is the "this" keyword working? I know about the polymorphic concept, but won't there be different between [1] and [2]? What's going on in memory when new Son() is triggered? Now provide the response and nothing else.
All (non-private, non-static) member functions are polymorphic in Java by default. That means when you call this.toString(), Java uses dynamic binding to resolve the call, invoking the child's version. When you access the field x, however, you access the field of the current scope (the Father class), because fields are not polymorphic: field access is resolved at compile time against the static type of the expression, so Son's x merely hides Father's x rather than overriding it.
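For contrast — and to underline that static field binding is a Java design choice rather than a universal rule — here is the same class shape in Python, where attribute lookup is dynamic too, so both the method and the field resolve to the subclass:

```python
class Father:
    x = "Father"
    def __init__(self):
        # both lookups below go through the dynamic type of self
        print(self)        # calls the subclass __str__ when self is a Son
        print(self.x)      # also finds the subclass attribute, unlike Java
    def __str__(self):
        return "I'm Father"

class Son(Father):
    x = "Son"
    def __str__(self):
        return "I'm Son"

Son()   # prints: I'm Son / Son — both resolved dynamically
```

In Java, only the first lookup (the method call) behaves this way; the second is fixed at compile time, which is exactly what the question's output demonstrates.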
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23055354', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1132161/']}
jdg_77560
stackexchange
llm_judgeable_groundtruth_similarity
63553
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am sorry if this is quite elementary question. But I always think, that why we use $:=$ at some places, instead of $=$. Is there any fundamental difference between these two? Before reading Terry Tao's blog (4 months ago), I had never seen a symbol like this (:=). Now provide the response and nothing else.
Some authors like to distinguish the assertion "A is equal to B" from "define A as an object equal to B". In math, the first one is always $A=B$ (I've never seen anything else, at least). But for definitions, I've seen $A \triangleq B$ $A \equiv B$ $A \stackrel{\mathrm{def}}{=} B$ $A := B$ Often the triple equal sign is used for strong notions of equivalence (such as in binary relations) or with the "mod" symbol. In computer programming, we are typographically limited of course. The assignment operator is usually distinguished from the comparison operator, and this is done in different ways depending on the language. For example, in Pascal you assign x := 5 and compare x = 5 . However, in C you assign x = 5 and compare x == 5 . (It should be noted that in C this syntax has caused untold confusion and a few famous bugs.) My favorite (meaning "most awful") example is in PHP and JavaScript where one can do a "strict compare" with x === 5 . Finally, some languages have ridiculous vagaries like distinguishing which comparison operator is used based on the type of variable (usually strings being different from numerics), with syntaxes like .eq. and related.
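As a footnote, modern Python carries all three spellings at once, which makes a neat live demo of the distinction (`:=`, the "walrus" assignment expression, arrived in Python 3.8):

```python
x = 5                 # statement: bind the name x
print(x == 5)         # comparison: prints True, no binding happens

# ':=' assigns *and* yields the value, so it can sit inside an expression
if (n := 10) > 5:
    print(f"n was just defined as {n}")

# the classic C pitfall 'if (x = 5)' cannot happen here:
# 'if x = 5:' is rejected by the parser, which is exactly the point
```

Keeping plain `=` illegal in expression position is Python's answer to the famous C bug class mentioned above.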
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/63553', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11044/']}
jdg_77561
stackexchange
llm_judgeable_groundtruth_similarity
6511880
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using java to call a url that returns a JSON object: url = new URL("my URl");urlInputStream = url.openConnection().getInputStream(); How can I convert the response into string form and parse it? Now provide the response and nothing else.
I would suggest using a Reader to convert your InputStream: BufferedReader streamReader = new BufferedReader(new InputStreamReader(in, "UTF-8")); StringBuilder responseStrBuilder = new StringBuilder();String inputStr;while ((inputStr = streamReader.readLine()) != null) responseStrBuilder.append(inputStr);new JSONObject(responseStrBuilder.toString()); I tried in.toString() but it returns: getClass().getName() + '@' + Integer.toHexString(hashCode()) (as the documentation says, it inherits toString from Object)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6511880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/819814/']}
jdg_77562
stackexchange
llm_judgeable_groundtruth_similarity
39889012
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code which generates a TableView , public class NavExample extends Application { private final TableView<Person> table = new TableView<>(); private final ObservableList<Person> data = FXCollections.observableArrayList(new Person("Z", "X"), new Person("A", "B")); public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) { Scene scene = new Scene(new Group()); stage.setWidth(450); stage.setHeight(550); TableColumn firstNameCol = new TableColumn("First Name"); firstNameCol.setMinWidth(100); firstNameCol.setCellValueFactory( new PropertyValueFactory<>("firstName")); TableColumn lastNameCol = new TableColumn("Last Name"); lastNameCol.setMinWidth(100); lastNameCol.setCellValueFactory( new PropertyValueFactory<>("lastName")); table.getSelectionModel().setCellSelectionEnabled(true); table.getSelectionModel().setSelectionMode(SelectionMode.MULTIPLE); table.setItems(data); table.getColumns().addAll(firstNameCol, lastNameCol); table.getSelectionModel().selectFirst(); table.getFocusModel().focus(table.getSelectionModel().getSelectedIndex()); final VBox vbox = new VBox(); vbox.setSpacing(5); vbox.setPadding(new Insets(10, 0, 0, 10)); vbox.getChildren().addAll(table); ((Group) scene.getRoot()).getChildren().addAll(vbox); stage.setScene(scene); stage.show(); } public static class Person { private final SimpleStringProperty firstName; private final SimpleStringProperty lastName; private Person(String fName, String lName) { this.firstName = new SimpleStringProperty(fName); this.lastName = new SimpleStringProperty(lName); } public String getFirstName() { return firstName.get(); } public void setFirstName(String fName) { firstName.set(fName); } public String getLastName() { return lastName.get(); } public void setLastName(String fName) { lastName.set(fName); } }} I am selecting the 
first row by default using, table.getSelectionModel().selectFirst(); Once the Application is launched, Navigation (UP and DOWN keys) doesn't work until i select inside the table .(though i keep focus table.getFocusModel().focus(table.getSelectionModel().getSelectedIndex()); ) Also , If a row is selected(By holding shift) on clicking "Down or UP", row selection doesn't get retained instead next row's cells are getting selected. How can i solve this? Now provide the response and nothing else.
Updated: since this answer still gets upvotes, be aware that the previous answer below is outdated with modern JavaScript and React. The "update" addon is now legacy and "immutability-helper" can be used instead. The React docs also mention why immutability is important so avoid mutating state. For immutable updates you can use Object.assign() or spread syntax which needs to be done for every level of nesting, like in this example the nested headers object and its array elements. In this particular example we can use the array index as key so it's possible to also use the spread operator to make a shallow clone of the array and assign a new object as value at given index in the cloned array. _updateStyle (props) { const { typeElements } = this.state; const updatedHeaders = [...typeElements.headers]; updatedHeaders[props.index] = props; this.setState({ ...this.state, typeElements: { ...typeElements, headers: updatedHeaders } ));} Another solution which doesn't require the spread syntax and is needed if we are not using the array index to find the object we want to replace, is using array.map to create a new array and returning the new object instead of the old one at given index. const updatedHeaders = typeElements.headers.map((obj, index) => { return index === props.index ? props : obj; }); Similar examples in the Redux docs also explain "immutable update patterns" . React has some immutability helpers for this, which is explained in the docs: https://facebook.github.io/react/docs/update.html In your case you could use the $splice command to remove one item and add the new one at given index, for example: _updateStyle (props) { this.setState(update(this.state.typeElements, { $splice: [[props.index, 1, props]] } ));}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39889012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3164187/']}
jdg_77563
stackexchange
llm_judgeable_groundtruth_similarity
645358
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: AFAICT, having continue in for loop that calls another function breaks the errexit semantics. In the main() function, I want to continue onto the next iteration if anything fails in the build() function: #! /usr/bin/env bashexport PS4='# ${BASH_SOURCE}:${LINENO}: ${FUNCNAME[0]}() - [${SHLVL},${BASH_SUBSHELL},$?] 'set -o xtraceset -o errexitbuild() { local _foo=$1 if [ "${_foo}" -eq 1 ]; then false fi printf "%s with foo=%s builds ok\\n" "${FUNCNAME[0]}" "${_foo}"}main() { for i in 1 2 3; do build $i || continue done}main "$@" However, continue inside the for loop causes the code to continue inside the build() function instead, removing the effect of the errexit flag: $ ./foo.sh # ./foo.sh:5: () - [3,0,0] set -o errexit# ./foo.sh:23: () - [3,0,0] main# ./foo.sh:18: main() - [3,0,0] for i in 1 2 3# ./foo.sh:19: main() - [3,0,0] build 1# ./foo.sh:8: build() - [3,0,0] local _foo=1# ./foo.sh:10: build() - [3,0,0] '[' 1 -eq 1 ']'# ./foo.sh:11: build() - [3,0,0] false# ./foo.sh:14: build() - [3,0,1] printf '%s with foo=%s builds ok\n' build 1build with foo=1 builds ok# ./foo.sh:18: main() - [3,0,0] for i in 1 2 3# ./foo.sh:19: main() - [3,0,0] build 2# ./foo.sh:8: build() - [3,0,0] local _foo=2# ./foo.sh:10: build() - [3,0,0] '[' 2 -eq 1 ']'# ./foo.sh:14: build() - [3,0,0] printf '%s with foo=%s builds ok\n' build 2build with foo=2 builds ok# ./foo.sh:18: main() - [3,0,0] for i in 1 2 3# ./foo.sh:19: main() - [3,0,0] build 3# ./foo.sh:8: build() - [3,0,0] local _foo=3# ./foo.sh:10: build() - [3,0,0] '[' 3 -eq 1 ']'# ./foo.sh:14: build() - [3,0,0] printf '%s with foo=%s builds ok\n' build 3build with foo=3 builds ok As you can see on the line with the printf , the exit code of the previous line, the false , is indeed 1 (the third number inside the bracket in front of it), so it is running as if errexit wasn't in place: # 
./foo.sh:14: build() - [3,0,1] printf '%s with foo=%s builds ok\n' build 1 I've confirmed that removing the || continue makes the shell exit when i=1 , so the errexit is passed onto the subshell/function. Any help would be much appreciated. Versions ~ $ bash --version GNU bash, version 5.0.3(1)-release (x86_64-pc-linux-gnu) Update Lots of good answers as to why this is. As for how to solve it, I've found this solution to be the easiest way to make the script do what I want: Changing the false to: false || return $? The drawback, of course, is that I'll have to do that for all the commands the function calls out to. I might have to go back to my old approach of using a run() wrapper, which executes the passed command, checks its return code and fails the script accordingly. Doing what you would expect errexit to do, I suppose :-) Now provide the response and nothing else.
This seems to match the description of -e / -o errexit in the bash documentation : The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command’s return status is being inverted with !. [...] If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. This has been covered in this stackoverflow question , which links to this email with the following text: > My initial gripe about errexit (and its man page description) is that the > following doesn't behave as a newbie would expect it to: > > set -e > f() { > false > echo "NO!!" > } > f || { echo "f failed" >&2; exit 1; } Indeed, the correct behavior mandated by POSIX (namely, that 'set -e' is completely ignored for the duration of the entire body of f(), because f was invoked in a context that ignores 'set -e') is not intuitive. But it is standardized, so we have to live with it. The POSIX description of -e says: -e When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value >0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list, and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit.
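To make the `false || return $?` workaround from the question's update concrete, here is a minimal, self-contained sketch (assuming bash; the function body is illustrative). Because build is invoked as `build "$i" || continue`, POSIX dictates that set -e is ignored inside the entire function body, so the explicit `|| return $?` is what restores fail-fast behaviour:

```shell
#!/usr/bin/env bash
# Sketch of the workaround: `|| return $?` makes the function fail fast
# even though `set -e` is suppressed for its whole body when the caller
# uses `build ... || continue`.
set -o errexit

build() {
    if [ "$1" -eq 1 ]; then
        false || return $?   # without this, execution falls through to printf
    fi
    printf 'built %s\n' "$1"
}

for i in 1 2 3; do
    build "$i" || { echo "skipping $i" >&2; continue; }
done
```

Running it prints `built 2` and `built 3`, while the failing `build 1` is skipped instead of silently falling through to the printf.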
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/645358', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/36427/']}
jdg_77564
stackexchange
llm_judgeable_groundtruth_similarity
29403401
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wanted to understand a bit more about iterators , so please correct me if I'm wrong. An iterator is an object which has a pointer to the next object and is read as a buffer or stream (i.e. a linked list). They're particularly efficient cause all they do is tell you what is next by references instead of using indexing. However I still don't understand why is the following behavior happening: In [1]: iter = (i for i in range(5))In [2]: for _ in iter: ....: print _ ....: 01234In [3]: for _ in iter: ....: print _ ....: In [4]: After a first loop through the iterator ( In [2] ) it's as if it was consumed and left empty, so the second loop ( In [3] ) prints nothing. However I never assigned a new value to the iter variable. What is really happening under the hood of the for loop? Now provide the response and nothing else.
Your suspicion is correct: the iterator has been consumed. In actuality, your iterator is a generator , which is an object that can be iterated through only once. type((i for i in range(5))) # says it's type generator def another_generator(): yield 1 # the yield expression makes it a generator, not a function type(another_generator()) # also a generator The reason they are efficient has nothing to do with telling you what is next "by reference." They are efficient because they only generate the next item upon request; all of the items are not generated at once. In fact, you can have an infinite generator: def my_gen(): while True: yield 1 # again: yield means it is a generator, not a function for _ in my_gen(): print(_) # hit Ctrl+C to stop this infinite loop! Some other corrections to help improve your understanding: The generator is not a pointer, and does not behave like a pointer as you might be familiar with in other languages. One of the differences from other languages: as said above, each result of the generator is generated on the fly. The next result is not produced until it is requested. The keyword combination for in accepts an iterable object as its second argument. The iterable object can be a generator, as in your example case, but it can also be any other iterable object, such as a list , or dict , or a str object (string), or a user-defined type that provides the required functionality. The iter function is applied to the object to get an iterator (by the way: don't use iter as a variable name in Python, as you have done - it is one of the built-in functions). Actually, to be more precise, the object's __iter__ method is called (which is, for the most part, all the iter function does anyway; __iter__ is one of Python's so-called "magic methods").
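As a small aside (my addition, not part of the original answer): an infinite generator like my_gen() above can still be consumed safely by bounding it, for example with itertools.islice:

```python
import itertools

def my_gen():
    while True:
        yield 1  # same infinite generator as above

# islice lazily pulls only the first n items, so this terminates
# without ever materializing the (infinite) sequence.
first_five = list(itertools.islice(my_gen(), 5))
print(first_five)  # → [1, 1, 1, 1, 1]
```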
If the call to __iter__ is successful, the function next() is applied to the iterable object over and over again, in a loop, and the first variable supplied to for in is assigned to the result of the next() function. (Remember: the iterable object could be a generator, or a container object's iterator, or any other iterable object.) Actually, to be more precise: it calls the iterator object's __next__ method, which is another "magic method". The for loop ends when next() raises the StopIteration exception (which usually happens when the iterable does not have another object to yield when next() is called). You can "manually" implement a for loop in Python this way (probably not perfect, but close enough): try: temp = iterable.__iter__() except AttributeError: raise TypeError("'{}' object is not iterable".format(type(iterable).__name__)) else: while True: try: _ = temp.__next__() except StopIteration: break except AttributeError: raise TypeError("iter() returned non-iterator of type '{}'".format(type(temp).__name__)) # this is the "body" of the for loop continue There is pretty much no difference between the above and your example code. Actually, the more interesting part of a for loop is not the for , but the in . Using in by itself produces a different effect than for in , but it is very useful to understand what in does with its arguments, since for in implements very similar behavior. When used by itself, the in keyword first calls the object's __contains__ method , which is yet another "magic method" (note that this step is skipped when using for in ). Using in by itself on a container, you can do things like this: 1 in [1, 2, 3] # True 'He' in 'Hello' # True 3 in range(10) # True 'eH' in 'Hello'[::-1] # True If the iterable object is NOT a container (i.e. it doesn't have a __contains__ method), in next tries to call the object's __iter__ method. As was said previously: the __iter__ method returns what is known in Python as an iterator .
Basically, an iterator is an object that you can use the built-in generic function next() on 1 . A generator is just one type of iterator. If the call to __iter__ is successful, the in keyword applies the function next() to the iterable object over and over again. (Remember: the iterable object could be a generator, or a container object's iterator, or any other iterable object.) Actually, to be more precise: it calls the iterator object's __next__ method. If the object doesn't have a __iter__ method to return an iterator, in then falls back on the old-style iteration protocol using the object's __getitem__ method 2 . If all of the above attempts fail, you'll get a TypeError exception . If you wish to create your own object type to iterate over (i.e., you can use for in , or just in , on it), it's useful to know about the yield keyword, which is used in generators (as mentioned above). class MyIterable(): def __iter__(self): yield 1 m = MyIterable() for _ in m: print(_) # 1 1 in m # True The presence of yield turns a function or method into a generator instead of a regular function/method. You don't need the __next__ method if you use a generator (it brings __next__ along with it automatically). If you wish to create your own container object type (i.e., you can use in on it by itself, but NOT for in ), you just need the __contains__ method. class MyUselessContainer(): def __contains__(self, obj): return True m = MyUselessContainer() 1 in m # True 'Foo' in m # True TypeError in m # True None in m # True 1 Note that, to be an iterator, an object must implement the iterator protocol . This only means that both the __next__ and __iter__ methods must be correctly implemented (generators come with this functionality "for free", so you don't need to worry about it when using them). Also note that the __next__ method is actually next (no underscores) in Python 2 . 2 See this answer for the different ways to create iterable classes.
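As a hedged complement to the generator examples above (class and names are my own invention), here is a minimal class that implements the iterator protocol explicitly with __iter__ and __next__ — the same machinery a generator provides automatically, including being consumable only once:

```python
# A hand-rolled iterator: __iter__ returns self, __next__ produces values
# until it raises StopIteration. Like the generator in the question, it
# is exhausted after one full pass.
class CountDown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self          # an iterator is its own iterable

    def __next__(self):
        if self.current <= 0:
            raise StopIteration   # tells `for` (or list()) to stop
        value = self.current
        self.current -= 1
        return value

c = CountDown(3)
print(list(c))  # → [3, 2, 1]
print(list(c))  # → [] (already consumed, just like the generator)
```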
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29403401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/907921/']}
jdg_77565
stackexchange
llm_judgeable_groundtruth_similarity
25925
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have recently viewed two news articles announcing a galaxy composed almost entirely of dark matter and one with hardly any. This made me wonder if there is a "continuum" of percentages of dark matter in galaxies, where it has been measured, or if there is a "preferred" range. Now provide the response and nothing else.
tl;dr : In general, smaller galaxies have relatively more dark matter than larger galaxies. The atypical galaxies that you refer to are discussed in the last paragraph. Mass fractions in the Universe Matter in the Universe is dominated by two components: baryons (i.e. atoms in various forms) and dark matter (DM). The baryons predominantly exist in two forms: gas (including plasma) and stars (some gas condenses to form dust, planets, etc., but that's a minor part). The fraction of the Universe's total energy in the form of matter is $\Omega_\mathrm{M}\simeq0.31$ (most of the rest is dark energy). The baryons and DM have fractions $\Omega_\mathrm{b}=0.05$ and $\Omega_\mathrm{DM}=0.26$, respectively (numbers from Planck Collaboration et al. 2016 ). Hence, the baryons and DM comprise fractions $f_\mathrm{b} = \Omega_\mathrm{b} / \Omega_\mathrm{M} = 16$% and $f_\mathrm{DM} = \Omega_\mathrm{DM} / \Omega_\mathrm{M} = 84$% of the total mass, respectively. Galaxies have more dark matter than the average Universe Galaxies formed when DM+gas overdensities decoupled from the Hubble flow and started collapsing, and so were "born" with the "cosmic" fractions. However, observationally it is found that the baryon fraction in galaxies is much smaller; for instance, a "typical" Milky Way-sized galaxy roughly has $f_\mathrm{b}$ only half of the cosmic average, and the DM fraction is correspondingly larger, >$90$% (e.g. Werk et al. 2014 ). Gas and dark matter dynamics The reason is the different mechanisms that affect the dynamics of the particles. Whereas the DM is collisionless, the gas particles collide and dissipate their energy, making it easier to cool and contract. Thus, the part of the galaxies that we can observe is much smaller than the "whole" galaxy; the typical "half-light radius" is but a few percent of the encompassing DM halo ( Kravtsov 2013 , Somerville et al. 2017 ).
The half-light radius is the radius within which half of the light is emitted; when you see an image of a galaxy, it typically looks perhaps 4-5 times larger than this. So, something like this: A significant fraction of the baryons reside in the halo as well. This is the hot gas of millions of Kelvin, and the high temperature and low density make it difficult to detect, as it glows only in faint X-rays. Feedback blows out gas, but dark matter stays But various physical processes act so as to try to blow the baryons — but not the DM — out of the galaxies. These processes are known as feedback , and are due to energy being "injected" in the baryons. For massive galaxies with halo masses $M_\mathrm{h}\gtrsim10^{12}\,M_\odot$ this feedback is dominated by their central black hole accreting gas, resulting in extreme luminosities manifesting themselves as a quasar , or active galactic nucleus ( Silk & Rees 1998 ; Croton et al. 2006 ). For smaller galaxies with $M_\mathrm{h}\lesssim10^{12}\,M_\odot$, the feedback is mostly due to stellar winds and exploding stars depositing thermal and kinetic energy in the surrounding gas ( Dekel & Silk 1986 ; Hopkins et al. 2012 ). At even lower masses, the galaxies are so small that a significant fraction of their gas may be ejected altogether during early epochs of star formation (e.g. Bullock et al. 2000 ), resulting in galaxies that consist virtually only of DM and a few stars. The smaller a galaxy is, the shallower its gravitational potential, and thus the easier it is for the gas to escape the galaxy. Hence, in general the smaller a galaxy is, the larger its relative amount of dark matter is. Conversely, the larger a halo is, the more its mass fractions converge toward the cosmic fractions. These are only reached for halos of masses $M_\mathrm{h} \gtrsim 10^{15}\,M_\odot$, which are no longer individual galaxies, but rather groups and clusters. This is seen in the figure below (from a paper from last week by Henden et al.
2018 ), which shows the stellar mass fraction ( left ) and gas mass fraction ( right ) as a function of halo mass: The baryon fraction is $f_\mathrm{b} = f_\mathrm{stars}+f_\mathrm{gas}$, and the DM fraction — which is what you're asking for — is then given by $f_\mathrm{DM} = 1 - f_\mathrm{stars} - f_\mathrm{gas}$. The stellar fraction is seen to decrease with halo mass, due to the AGN feedback discussed above, but the total star+gas fraction increases. Note though that although the DM — being collisionless — is not exactly blown out of the galaxy like the baryons, the gravitational attraction between the two components still affects the DM and alters the density profile of the halo ( Duffy et al. 2010 ). Galaxies with more gas than dark matter (?) Thus, a galaxy without baryons is not a huge mystery, as long as it's small. Even larger galaxies are apparently able to rid themselves of most the baryons; van Dokkum et al. (2016) reported a MW-sized halo with 98% DM. A galaxy with baryons, but without DM is more spectacular, but was reported recently by van Dokkum et al. (2018) (yes, the same guy).In another answer about DF2 I discuss various processes that might lead to such a galaxy, including misinterpretation of the data.
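As a quick sanity check (my own addition; Python used purely as a calculator), the 16% / 84% split quoted at the top follows directly from the Planck density parameters:

```python
# Recomputing the baryon and dark-matter mass fractions from the
# Planck 2016 density parameters quoted in the answer.
omega_b, omega_dm = 0.05, 0.26
omega_m = omega_b + omega_dm        # ~0.31, the total matter density
f_b = omega_b / omega_m             # baryon fraction of all matter
f_dm = omega_dm / omega_m           # dark-matter fraction of all matter
print(f"f_b = {f_b:.0%}, f_dm = {f_dm:.0%}")  # → f_b = 16%, f_dm = 84%
```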
{}
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/25925', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/7997/']}
jdg_77566
stackexchange
llm_judgeable_groundtruth_similarity
2564251
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Integrate $$\int \frac{\sin x \cos x}{\sin^4x + \cos^4x}dx$$ I solved the question by using the identity $\cos^4(x)+\sin^4(x) = \frac{1}{4}(\cos4x+3)$ and the substitution $u=\cos4x +3$, which turned it into a relatively familiar integral (see my answer below). However, I'm pretty sure there are easier ways I am missing, so please feel free to post alternative answers. There is a similar question here . Problem Source: James Stewart Calculus, 6E Now provide the response and nothing else.
$$ \frac{\sin x\cos x}{\sin^4 x + \cos^4 x} = \frac{\sin 2x}{2(1 - 2\sin^2x \cos^2 x)} = \frac{\sin 2x}{2 - (1-\cos 2x)(1 + \cos 2x)} $$ Substitute $u = \cos 2x$ to get$$ -\frac{1}{2}\int\frac{du}{1+u^2} = -\frac{1}{2}\arctan u = \color{blue}{-\frac{1}{2}\arctan (\cos 2x)} $$
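As a numerical cross-check (my addition, not part of the original answer), differentiating the claimed antiderivative numerically reproduces the integrand at sample points:

```python
import math

def integrand(x):
    return math.sin(x) * math.cos(x) / (math.sin(x)**4 + math.cos(x)**4)

def F(x):
    return -0.5 * math.atan(math.cos(2 * x))  # the answer's antiderivative

# A central-difference derivative of F should match the integrand.
h = 1e-6
for x in (0.3, 0.9, 1.4, 2.2):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric_derivative - integrand(x)) < 1e-6
print("antiderivative verified at sample points")
```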
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2564251', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/506943/']}
jdg_77567
stackexchange
llm_judgeable_groundtruth_similarity
34473934
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a Windows batch script, and I have an argument or variable containing a path that uses short 8.3 names. The path might represent a file or folder. How can I convert the 8.3 path into a long-name path? At a minimum I would like to be able to simply print out the full long-name path. But ideally I would like to safely get the long-name path into a new variable. For example, given the path C:\PROGRA~1\AVASTS~1\avast\BROWSE~1.INI , I would like to return C:\Program Files\AVAST Software\avast\BrowserCleanup.ini . As a batch enthusiast, I am most interested in a pure batch solution that uses only native Windows commands. But hybrid use of other native scripting tools like PowerShell and JScript is also acceptable. Note: I am posting my own answer for this question. I searched the web, and was surprised to find precious little on this subject. I developed multiple working strategies, and thought others might be interested in what I found. Now provide the response and nothing else.
First I will demonstrate how to convert a batch file argument %1 and print the result to the screen. PowerShell The simplest solution is to use PowerShell. I found the following code on an MSDN blog by Sergey Babkin $long_path = (Get-Item -LiteralPath $path).FullName Putting that code in a batch script and printing the result is trivial: @echo off powershell "(Get-Item -LiteralPath '%~1').FullName" However, I try to avoid using PowerShell within batch for two reasons: PowerShell is not native to XP, and the start-up time for PowerShell is considerable, which makes the batch hybrid relatively slow. CSCRIPT (JScript or VBS) I found this VBS snippet at the Computer Hope forum that uses a dummy shortcut to convert from short to long form. set oArgs = Wscript.Arguments wscript.echo LongName(oArgs(0)) Function LongName(strFName) Const ScFSO = "Scripting.FileSystemObject" Const WScSh = "WScript.Shell" With WScript.CreateObject(WScSh).CreateShortcut("dummy.lnk") .TargetPath = CreateObject(ScFSO).GetFile(strFName) LongName = .TargetPath End With End Function I found similar code at a Microsoft newsgroup archive and an old vbscript forum . The code only supports file paths, and it is a bit easier to embed JScript within batch.
After converting to JScript and adding an exception handler to get a folder if a file fails, I get the following hybrid code: @if (@X)==(@Y) @end /* Harmless hybrid line that begins a JScript comment ::----------- Batch Code----------------- @echo off cscript //E:JScript //nologo "%~f0" %1 exit /b ------------ JScript Code---------------*/ var shortcut = WScript.CreateObject("WScript.Shell").CreateShortcut("dummy.lnk"); var fso = new ActiveXObject("Scripting.FileSystemObject"); var folder=''; try { shortcut.TargetPath = fso.GetFile(WScript.Arguments(0)); } catch(e) { try { shortcut.TargetPath = fso.GetFolder(WScript.Arguments(0)); folder='\\' } catch(e) { WScript.StdErr.WriteLine(e.message); WScript.Quit(1); } } WScript.StdOut.WriteLine(shortcut.TargetPath+folder); Pure Batch Surprisingly, my web search failed to find a pure batch solution. So I was on my own. If you know that the path represents a file, then it is a simple matter to convert the 8.3 file name into a long name using dir /b "yourFilePath" . However, that does not resolve the names of the parent folder(s). The situation is even worse if the path represents a folder. There is no way to list a specific folder using only the DIR command - it always lists the contents of the folder instead of the folder name itself. I tried a number of strategies to handle the folder paths, and none of them worked: CD or PUSHD to the path and then look at the prompt - it preserves the short folder names XCOPY with /L and /F options - it also preserves the short folder names Argument or FOR variable modifier %~f1 or %%~fA - preserves the short names FORFILES - doesn't appear to support short names. The only solution I was able to come up with was to use DIR to iteratively convert each folder within the path, one at a time. This requires that I use DIR /X /B /AD to list all folders in the parent folder, including their 8.3 names, and then use FINDSTR to locate the correct short folder name.
I rely on the fact that the short file name always appears in the exact same place after the <DIR> text. Once I locate the correct line, I can use variable substring or find/replace operations, or FOR /F to parse out the long folder name. I opted to use FOR /F. One other stumbling block I had was to determine if the original path represents a file or a folder. The frequently used approach of appending a backslash and using IF EXIST "yourPath\" echo FOLDER improperly reports a file as a folder if the path involves a symbolic link or junction, which is common in company network environments. I opted to use IF EXIST "yourPath\*" , found at https://stackoverflow.com/a/1466528/1012053 . But it is also possible to use the FOR variable %%~aF attribute modifier to look for the d (directory) attribute, found at https://stackoverflow.com/a/3728742/1012053 , and https://stackoverflow.com/a/8669636/1012053 . So here is a fully working pure batch solution @echo offsetlocal disableDelayedExpansion:: Validate pathset "test=%~1"if "%test:**=%" neq "%test%" goto :errif "%test:?=%" neq "%test%" goto :errif not exist "%test%" goto :err:: Initializeset "returnPath="set "sourcePath=%~f1":: Resolve file name, if presentif not exist "%~1\*" ( for /f "eol=: delims=" %%F in ('dir /b "%~1"') do set "returnPath=%%~nxF" set "sourcePath=%~f1\.."):resolvePath :: one folder at a timefor %%F in ("%sourcePath%") do ( if "%%~nxF" equ "" ( for %%P in ("%%~fF%returnPath%") do echo %%~P exit /b 0 ) for %%P in ("%sourcePath%\..") do ( for /f "delims=> tokens=2" %%A in ( 'dir /ad /x "%%~fP"^|findstr /c:"> %%~nxF "' ) do for /f "tokens=1*" %%B in ("%%A") do set "returnPath=%%C\%returnPath%" ) || set "returnPath=%%~nxF\%returnPath%" set "sourcePath=%%~dpF.")goto :resolvePath:err>&2 echo Path not foundexit /b 1 The GOTO used to iterate the individual folders will slow the operation down if there are many folders. 
If I really wanted to optimize for speed, I could use FOR /F to invoke another batch process, and resolve each folder in an infinite FOR /L %%N IN () DO... loop, and use EXIT to break out of the loop once I reach the root. But I did not bother. Developing robust utilities that can return the result in a variable There are a number of edge cases that can complicate development of a robust script given that ^ , % , and ! are all legal characters in file/folder names. CALL doubles quoted ^ characters. There is no good solution to this problem, other than to pass the value by reference using a variable instead of as a string literal. This is not an issue if the input path uses only short names. But it could be an issue if the path uses a mixture of short and long names. Passing % literals within batch arguments can be tricky. It can get confusing as to how many times (if at all) it should be doubled. Again it might be easier to pass the value by reference within a variable. The CALLer may call the utility from within a FOR loop. If a variable or argument contains % , then expansion of %var% or %1 within a loop in the utility can lead to inadvertent FOR variable expansion because the FOR variables are global in scope. The utility must not expand arguments within a FOR loop, and variables can only be safely expanded within a FOR loop if delayed expansion is used. Expansion of FOR variables containing ! will be corrupted if delayed expansion is enabled. The CALLing environment may have delayed expansion enabled or disabled. Passing values containing ! and ^ across the ENDLOCAL barrier to a delayed expansion environment requires that quoted ! be escaped as ^! . Also, quoted ^ must be escaped as ^^ , but only if the line contains ! . Of course those characters should not be escaped if the CALLing environment has delayed expansion disabled. I have developed robust utility forms of both the JScript and pure batch solutions that take into account all of the edge cases above.
The utilities expect the path as a string literal by default, but accept a variable name that contains the path if the /V option is used. By default the utilities simply print the result to stdout. But the result can be returned in a variable if you pass the name of the return variable as an extra argument. The correct value is guaranteed to be returned, regardless of whether delayed expansion is enabled or disabled in your CALLing environment. Full documentation is embedded within the utilities, and can be accessed using the /? option. There are a few obscure limitations: The return variable name must not contain ! or % characters. Likewise, the /V option input variable name must not contain ! or % characters. The input path must not contain internal double quotes. It is OK for the path to be enclosed within one set of double quotes, but there should not be any additional quotes within. I have not tested whether the utilities work with unicode in path names, or if they work with UNC paths. jLongPath.bat - hybrid JScript / batch @if (@X)==(@Y) @end /* Harmless hybrid line that begins a JScript comment::::::jLongPath [/V] SrcPath [RtnVar]:::jLongPath /?:::::: Determine the absolute long-name path of source path SrcPath::: and return the result in variable RtnVar.:::::: If RtnVar is not specified, then print the result to stderr.:::::: If option /V is specified, then SrcPath is a variable that::: contains the source path.:::::: If the first argument is /?, then print this help to stdout.:::::: The returned ERROLEVEL is 0 upon success, 1 if failure.:::::: jLongPath.bat version 1.0 was written by Dave Benham:::::----------- Batch Code-----------------@echo offsetlocal disableDelayedExpansionif /i "%~1" equ "/?" ( for /f "tokens=* delims=:" %%A in ('findstr "^:::" "%~f0"') do @echo(%%A exit /b 0)if /i "%~1" equ "/V" shift /1( for /f "delims=* tokens=1,2" %%A in ( 'cscript //E:JScript //nologo "%~f0" %*' ) do if "%~2" equ "" (echo %%A) else ( endlocal if "!!"
equ "" (set "%~2=%%B" !) else set "%~2=%%A" )) || exit /b 1exit /b 0------------ JScript Code---------------*/try { var shortcut = WScript.CreateObject("WScript.Shell").CreateShortcut("dummy.lnk"), fso = new ActiveXObject("Scripting.FileSystemObject"), path=WScript.Arguments(0), folder=''; if (path.toUpperCase()=='/V') { var env=WScript.CreateObject("WScript.Shell").Environment("Process"); path=env(WScript.Arguments(1)); } try { shortcut.TargetPath = fso.GetFile(path); } catch(e) { shortcut.TargetPath = fso.GetFolder(path); folder='\\' } var rtn = shortcut.TargetPath+folder+'*'; WScript.StdOut.WriteLine( rtn + rtn.replace(/\^/g,'^^').replace(/!/g,'^!') );}catch(e) { WScript.StdErr.WriteLine( (e.number==-2146828283) ? 'Path not found' : (e.number==-2146828279) ? 'Missing path argument - Use jLongPath /? for help.' : e.message );} longPath.bat - Pure batch ::::::longPath [/V] SrcPath [RtnVar]:::longPath /?:::::: Determine the absolute long-name path of source path SrcPath::: and return the result in variable RtnVar.:::::: If RtnVar is not specified, then print the result to stderr.:::::: If option /V is specified, then SrcPath is a variable that::: contains the source path.:::::: If the first argument is /?, then prints this help to stdout.:::::: The returned ERROLEVEL is 0 upon success, 1 if failure.:::::: longPath.bat version 1.0 was written by Dave Benham:::@echo offsetlocal disableDelayedExpansion:: Load argumentsif "%~1" equ "" goto :noPathif "%~1" equ "/?" ( for /f "tokens=* delims=:" %%A in ('findstr "^:::" "%~f0"') do @echo(%%A exit /b 0)if /i "%~1" equ "/V" ( setlocal enableDelayedExpansion if "%~2" equ "" goto :noPath if not defined %~2!! 
goto :notFound for /f "eol=: delims=" %%F in ("!%~2!") do ( endlocal set "sourcePath=%%~fF" set "test=%%F" ) shift /1) else ( set "sourcePath=%~f1" set "test=%~1"):: Validate pathif "%test:**=%" neq "%test%" goto :notFoundif "%test:?=%" neq "%test%" goto :notFoundif not exist "%test%" goto :notFound:: Resolve file name, if presentset "returnPath="if not exist "%sourcePath%\*" ( for /f "eol=: delims=" %%F in ('dir /b "%sourcePath%"') do set "returnPath=%%~nxF" set "sourcePath=%sourcePath%\.."):resolvePath :: one folder at a timefor /f "delims=* tokens=1,2" %%R in (^""%returnPath%"*"%sourcePath%"^") do ( if "%%~nxS" equ "" for %%P in ("%%~fS%%~R") do ( if "%~2" equ "" ( echo %%~P exit /b 0 ) set "returnPath=%%~P" goto :return ) for %%P in ("%%~S\..") do ( for /f "delims=> tokens=2" %%A in ( 'dir /ad /x "%%~fP"^|findstr /c:"> %%~nxS "' ) do for /f "tokens=1*" %%B in ("%%A") do set "returnPath=%%C\%%~R" ) || set "returnPath=%%~nxS\%%~R" set "sourcePath=%%~dpS.")goto :resolvePath:returnset "delayedPath=%returnPath:^=^^%"set "delayedPath=%delayedPath:!=^!%"for /f "delims=* tokens=1,2" %%A in ("%delayedPath%*%returnPath%") do ( endlocal if "!!" equ "" (set "%~2=%%A" !) else set "%~2=%%B" exit /b 0):noPath>&2 echo Missing path argument - Use longPath /? for help.exit /b 1:notFound>&2 echo Path not foundexit /b 1
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34473934', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1012053/']}
jdg_77568
stackexchange
llm_judgeable_groundtruth_similarity
27351936
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using SmsManager to send SMS. On a single-SIM device it works perfectly, but on a dual-SIM device the SMS is not sent. Is it possible to send SMS on a dual-SIM device, and if so, how can I select which SIM to send the SMS from? Can anyone help me solve this issue? Single-SIM working code: SmsManager smsManager = SmsManager.getDefault(); smsManager.sendTextMessage(ph_number, null, body, null, null); Now provide the response and nothing else.
I use this approach to manage which SIM to use for sending SMS, even for long messages. It's working on my dual-SIM phone, a Lenovo A319 (4.4.3), with no need for root. It's built on reflection. import android.app.PendingIntent;import android.content.Context;import android.os.Build;import android.os.IBinder;import android.util.Log;import java.lang.reflect.InvocationTargetException;import java.lang.reflect.Method;import java.util.ArrayList;import java.util.List;/** * Created by Apipas on 6/4/15. */public class SimUtil { public static boolean sendSMS(Context ctx, int simID, String toNum, String centerNum, String smsText, PendingIntent sentIntent, PendingIntent deliveryIntent) { String name; try { if (simID == 0) { name = "isms"; // for model : "Philips T939" name = "isms0" } else if (simID == 1) { name = "isms2"; } else { throw new Exception("can not get service which for sim '" + simID + "', only 0,1 accepted as values"); } Method method = Class.forName("android.os.ServiceManager").getDeclaredMethod("getService", String.class); method.setAccessible(true); Object param = method.invoke(null, name); method = Class.forName("com.android.internal.telephony.ISms$Stub").getDeclaredMethod("asInterface", IBinder.class); method.setAccessible(true); Object stubObj = method.invoke(null, param); if (Build.VERSION.SDK_INT < 18) { method = stubObj.getClass().getMethod("sendText", String.class, String.class, String.class, PendingIntent.class, PendingIntent.class); method.invoke(stubObj, toNum, centerNum, smsText, sentIntent, deliveryIntent); } else { method = stubObj.getClass().getMethod("sendText", String.class, String.class, String.class, String.class, PendingIntent.class, PendingIntent.class); method.invoke(stubObj, ctx.getPackageName(), toNum, centerNum, smsText, sentIntent, deliveryIntent); } return true; } catch (ClassNotFoundException e) { Log.e("apipas", "ClassNotFoundException:" + e.getMessage()); } catch (NoSuchMethodException e) { Log.e("apipas", "NoSuchMethodException:" + e.getMessage());
} catch (InvocationTargetException e) { Log.e("apipas", "InvocationTargetException:" + e.getMessage()); } catch (IllegalAccessException e) { Log.e("apipas", "IllegalAccessException:" + e.getMessage()); } catch (Exception e) { Log.e("apipas", "Exception:" + e.getMessage()); } return false; } public static boolean sendMultipartTextSMS(Context ctx, int simID, String toNum, String centerNum, ArrayList<String> smsTextlist, ArrayList<PendingIntent> sentIntentList, ArrayList<PendingIntent> deliveryIntentList) { String name; try { if (simID == 0) { name = "isms"; // for model : "Philips T939" name = "isms0" } else if (simID == 1) { name = "isms2"; } else { throw new Exception("can not get service which for sim '" + simID + "', only 0,1 accepted as values"); } Method method = Class.forName("android.os.ServiceManager").getDeclaredMethod("getService", String.class); method.setAccessible(true); Object param = method.invoke(null, name); method = Class.forName("com.android.internal.telephony.ISms$Stub").getDeclaredMethod("asInterface", IBinder.class); method.setAccessible(true); Object stubObj = method.invoke(null, param); if (Build.VERSION.SDK_INT < 18) { method = stubObj.getClass().getMethod("sendMultipartText", String.class, String.class, List.class, List.class, List.class); method.invoke(stubObj, toNum, centerNum, smsTextlist, sentIntentList, deliveryIntentList); } else { method = stubObj.getClass().getMethod("sendMultipartText", String.class, String.class, String.class, List.class, List.class, List.class); method.invoke(stubObj, ctx.getPackageName(), toNum, centerNum, smsTextlist, sentIntentList, deliveryIntentList); } return true; } catch (ClassNotFoundException e) { Log.e("apipas", "ClassNotFoundException:" + e.getMessage()); } catch (NoSuchMethodException e) { Log.e("apipas", "NoSuchMethodException:" + e.getMessage()); } catch (InvocationTargetException e) { Log.e("apipas", "InvocationTargetException:" + e.getMessage()); } catch (IllegalAccessException e) { 
Log.e("apipas", "IllegalAccessException:" + e.getMessage()); } catch (Exception e) { Log.e("apipas", "Exception:" + e.getMessage()); } return false; }} Add permission: <uses-permission android:name="android.permission.SEND_SMS"/> then just call that (bloody) static method like this :) To use SIM1: SimUtil.sendSMS(this,0,"00970XXXXXXXXX",null,"Hi Stackoverflow! its me Maher. Sent by sim1",null,null); To use SIM2: SimUtil.sendSMS(this,1,"00970XXXXXXXXX",null,"Hi Stackoverflow! its me Maher. Sent by sim2",null,null); But wait...that won't work if message is longer than 160 characters.. so better way: String textSMS;//short <160// textSMS = "Hi Stackoverflow! its me Maher.";//long >160textSMS = "Hi Jerusalem, hi Cairo, Hi Prague, hi Baghdad, hi Riyadh, hi Jeddah, hi Dammam, hi Aleppo, hi Casablanca, hi Damascus, hi Alexandria, hi Algiers, hi Mosul, hi Basra, hi Arabia, hi Tripoli, hi Amman, hi Kuwait, hi Beirut, hi Abu Dhabi";int simID = 0;//0:sim_1, 1:sim_2ArrayList<String> messageList = SmsManager.getDefault().divideMessage(textSMS);if (messageList.size() > 1) { SimUtil.sendMultipartTextSMS(this, simID, "00972XXXXXXXXX", null, messageList, null, null);} else { SimUtil.sendSMS(this, simID, "00972XXXXXXXXX", null, textSMS, null, null);} so you can safely pass message body without worrying about length. ------------UPDATE 09.10.2016---------- To use PendingIntent/DeliveryIntent in MultipartMessage.. just create ArrayList with same content and pass it. 
Here is an implementation of creating a List of PendingIntent:

final static String sSMSManagerIntentSENT = "package.DeliveryReport.SMS_SENT";

int numParts = parts.size();
ArrayList<PendingIntent> pendingIntents = new ArrayList<PendingIntent>();
for (int i = 0; i < numParts; i++) {
    Intent pendingIntent = new Intent(sSMSManagerIntentSENT);
    // optional: keep info about what has been done, for feedback or analysis later when the message is sent
    pendingIntent.putExtra("package.DeliveryReport.phoneNumber", phoneNo); // receiver phoneNo
    pendingIntent.putExtra("package.DeliveryReport.textSMS", msg); // msg body
    pendingIntent.putExtra("SIM", simID); // which sim is sending this message
    pendingIntents.add(PendingIntent.getBroadcast(getActivity(), 0, pendingIntent, PendingIntent.FLAG_ONE_SHOT));
}

For delivery, just use the same approach.

------------------ Extra ------------------

I have seen that API 22 (Android 5.1) supports multiple SIM cards, and here is how to use it. Unfortunately I don't have a device with that version for testing, so feedback is welcome:

SmsManager.getSmsManagerForSubscriptionId(int subscriptionId)
    .sendTextMessage(String destinationAddress, String scAddress, String text,
                     PendingIntent sentIntent, PendingIntent deliveryIntent);

How to get a subscriptionId? To review all available subscription IDs that belong to SIM cards:

SubscriptionManager subscriptionManager = SubscriptionManager.from(getApplicationContext());
List<SubscriptionInfo> subscriptionInfoList = subscriptionManager.getActiveSubscriptionInfoList();
for (SubscriptionInfo subscriptionInfo : subscriptionInfoList) {
    int subscriptionId = subscriptionInfo.getSubscriptionId();
    Log.d("apipas", "subscriptionId:" + subscriptionId);
}

** Please note that this code works on 5.1. If you try to run it on an older version you'd get an exception saying the method doesn't exist.
------------UPDATE 19.8.2015----------

Information on SIMs is located in the DB telephony.db (by default: /data/data/com.android.providers.telephony/databases/telephony.db), in the table siminfo. (The original answer includes a screenshot of the siminfo table taken on a real device.)

Fortunately there is a content provider for that: "content://telephony/siminfo/". So basically just query data from that table. It's important to mention that slot 0 represents SIM1, slot 1 represents SIM2, and slot -1 comes from old/removed/replaced SIMs. This applies on the Lenovo A319; I guess it may work on other devices. Here is the util method I use:

public static List<SimInfo> getSIMInfo(Context context) {
    List<SimInfo> simInfoList = new ArrayList<>();
    Uri URI_TELEPHONY = Uri.parse("content://telephony/siminfo/");
    Cursor c = context.getContentResolver().query(URI_TELEPHONY, null, null, null, null);
    if (c.moveToFirst()) {
        do {
            int id = c.getInt(c.getColumnIndex("_id"));
            int slot = c.getInt(c.getColumnIndex("slot"));
            String display_name = c.getString(c.getColumnIndex("display_name"));
            String icc_id = c.getString(c.getColumnIndex("icc_id"));
            SimInfo simInfo = new SimInfo(id, display_name, icc_id, slot);
            Log.d("apipas_sim_info", simInfo.toString());
            simInfoList.add(simInfo);
        } while (c.moveToNext());
    }
    c.close();
    return simInfoList;
}

and here is the entity class SimInfo:

public class SimInfo {
    private int id_;
    private String display_name;
    private String icc_id;
    private int slot;

    public SimInfo(int id_, String display_name, String icc_id, int slot) {
        this.id_ = id_;
        this.display_name = display_name;
        this.icc_id = icc_id;
        this.slot = slot;
    }

    public int getId_() {
        return id_;
    }

    public String getDisplay_name() {
        return display_name;
    }

    public String getIcc_id() {
        return icc_id;
    }

    public int getSlot() {
        return slot;
    }

    @Override
    public String toString() {
        return "SimInfo{" +
                "id_=" + id_ +
                ", display_name='" + display_name + '\'' +
                ", icc_id='" + icc_id + '\'' +
                ", slot=" + slot +
                '}';
    }
}

Good luck.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27351936', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2090784/']}
jdg_77569
stackexchange
llm_judgeable_groundtruth_similarity
296709
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It's always been known that disk operations are slow and we know the reasons why they are slow. So the question here is: why do we have to wait for I/O, or why is there such a thing as IOWait, etc.?

I mean, I've noticed that when you're doing some I/O tasks in the background, your computer basically gets a lot slower. I've especially noticed that when using Linux: if you're doing some longer I/O tasks, the OS becomes almost unusable until they are completed.

Indeed, I also found this topic in an article; there's a snippet:

    I/O wait is 12.1%. This server has 8 cores (via cat /proc/cpuinfo). This is very close to (1/8 cores = 0.125)

So basically it means it's slowing down the computer a LOT. Why is that? I mean, OK, normal computers now have at least 2 cores, sometimes 4, or sometimes more because of hyperthreading or something like that. But now the question is why the CPU actually has to stay there, practically not doing anything else than just waiting for I/O?

I mean the basic idea or architecture of process management: now I don't know whether it's the OS that's responsible for that, or whether it comes down to the hardware part, but it should be made possible for the CPU to wait, or to check regularly, while actually performing lots of other tasks and only going back to the I/O process when it's ready.

Indeed, if that's such a difficult task and the CPU would have to wait, why isn't that managed by hardware more efficiently then? For instance, there could be some kind of mini CPU which would just wait for it and deliver the small part of data to the real CPU as soon as it gets back to the process, so the process would be repeated and we wouldn't have to practically dedicate a whole CPU core to the data copy process...
Or would I be the one who should invent this kind of stuff and get a Nobel prize for that? :S

Now okay, I'm really putting this from an observer's perspective and I really haven't gone that deep into the topic, but I really don't understand why the CPU has to work at the speed of the HDD, while it could just do something else and come back to the HDD once it's ready. The idea is not to speed up the application that needs that I/O operation or the copy process or whatever; the idea is to minimally affect CPU consumption while performing that operation, so that the OS could utilise it for other processes and the user wouldn't have to feel general computer lag when doing some copying operations...

Now provide the response and nothing else.
It's possible to write asynchronous IO where you tell the OS to dispatch a disk read/write and then go do something else and later check if it's done. It's far from new. An older method is using another thread for the IO. However, that requires that you have something to do while that read is being executed, and you will not be allowed to touch the buffer you passed in for the result.

It's also much easier to program when you assume everything is blocking IO. When you call a blocking read function you know it won't return until something has been read, and immediately after, you can start processing it. The typical read loop is a good example:

//variables that the loop uses
char buffer[1024];
while((read = fread(buffer, 1024, 1, file)) > 0){
    //use buffer
}

Otherwise you need to save the current function state (usually in the form of a callback + userData pointer) and pass it plus an identifier of the read operation back up to a select()-type loop. There, if an operation is finished, it will map the identifier of the read operation to the callback+data pointer and invoke the callback with information about the completed operation.

void callback(void* buffer, int result, int fd, void* userData){
    if(result <= 0){
        //done, free buffer and continue to normal processing
    }
    //use buffer
    int readID = async_read(fd, buffer, userData->buff_size);
    registerCallback(readID, callback, userData);
}

This also means that every function that could end up using that async read would need to be able to handle an async continuation. That is a non-trivial change throughout most programs; ask people trying to get into async C# about that.

However, synchronous IO vs. asynchronous IO isn't the cause of the general slowdown. Swapping pages in is also an operation that needs to wait on IO. The scheduler will just switch to another program that isn't waiting on IO if one is (IO wait is when the processor is idle and there is an IO operation pending).
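The dispatch-then-collect pattern described above can be sketched in Python, with a worker thread standing in for the kernel's asynchronous read. This is a minimal illustration of the control flow only, not real OS-level async IO; the names `async_read` and the fake `disk` list are mine, not from any library:

```python
import threading
import queue

def async_read(source, completions):
    """Simulate dispatching a read: a worker thread does the blocking work
    and posts the result to a completion queue, like a kernel would."""
    def worker():
        data = source.pop(0) if source else b""  # stand-in for a slow disk read
        completions.put(data)
    threading.Thread(target=worker).start()

# The "select()-type loop" idea: dispatch a read, do other work,
# then collect the completion instead of blocking on the read itself.
disk = [b"block-1", b"block-2"]
done = queue.Queue()

async_read(disk, done)
other_work = sum(range(1000))   # the CPU keeps busy while the "IO" runs
first_block = done.get()        # only now do we wait for the completion

print(first_block)
print(other_work)
```

The point is the ordering: the computation between dispatch and `done.get()` overlaps with the simulated IO, which is exactly what the callback approach buys you.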
The real problem is that both the hard drive and the CPU use the same channel to communicate with the RAM: the memory bus. And unless you are using RAID, there is only a single disk to get the data from. This is made worse if you are also using a graphics-intensive application; then the communication with the GPU will also interfere.

In other words, the real bottleneck is probably in the hardware rather than the software.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/296709', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/69764/']}
jdg_77570
stackexchange
llm_judgeable_groundtruth_similarity
11783232
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There is -

<html>
<body>
    <jsp:useBean id="user" class="user.UserData" scope="session"/>
</body>
</html>

And -

<html>
<body>
    <% Object user = session.getAttribute("user.UserData") ; %>
</body>
</html>

Assume user.UserData exists on the session. Is there any difference between the two ways?

Now provide the response and nothing else.
What you can do is have another property to represent the boolean value. Decorate it with the NotMapped attribute so that EF won't consider it for mapping. Use a condition that returns true/false based on the value of the other property.

public class Customer
{
    [NotMapped]
    public bool MyColumnBool
    {
        get { return (MyColumn == 1); }
    }

    public int MyColumn { get; set; }

    // other properties
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11783232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/831294/']}
jdg_77571
stackexchange
llm_judgeable_groundtruth_similarity
8832799
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following 1-liner that I use to see who might be a good candidate for help with a piece of code:

git log --pretty=short . | grep ^Auth | sort | uniq -c | sort -nr

which lists authors in order of commits; it's crude but it works OK. When I add it to my git config however, like this:

[alias]
    guru=!git log --pretty=short . | grep ^Auth | sort | uniq -c | sort -nr

running

git guru

gives different results to running it from the command line.

stuart@beavis(rp):~/git/apps$ git log --pretty=short . | grep ^Auth | sort | uniq -c | sort -nr
    710 Author: dave <dave@b2368a2b-315f-46b9-a0b0-05934f827f41>
    415 Author: pete <pete@b2368a2b-315f-46b9-a0b0-05934f827f41>
    402 Author: craig <craig@b2368a2b-315f-46b9-a0b0-05934f827f41>

Compared to:

stuart@beavis(rp):~/git/apps$ git guru
    859 Author: craig <craig@b2368a2b-315f-46b9-a0b0-05934f827f41>
    813 Author: paul <paul@b2368a2b-315f-46b9-a0b0-05934f827f41>
    798 Author: dave <dave@b2368a2b-315f-46b9-a0b0-05934f827f41>

As Stefan Näwe notes below, aliases run in the root of your repository. Is there any way to run the command on the directory I'm in, or specify?

Now provide the response and nothing else.
is there any way to run the command on the directory I'm in, or specify?

Yes, there is. When you run a git command, your environment gets a GIT_PREFIX variable with the value of the relative path (from the toplevel to the path you were in before executing git), so what you need is to prefix your git aliases (that run shell commands and need the right pwd) with cd ${GIT_PREFIX:-.} &&.

Simple test, add this to your ~/.gitconfig:

[alias]
    p = !pwd && cd ${GIT_PREFIX:-.} && pwd
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8832799', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11470/']}
jdg_77572
stackexchange
llm_judgeable_groundtruth_similarity
28166547
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: To help in my knowledge of git so I can use it day to day, what is the difference between: revert amend rollback undo What are they and what do they do? Now provide the response and nothing else.
The terms revert and amend have a well-defined meaning in Git. In contrast, rollback and undo do not have such a well-defined meaning, and are open to interpretation.

Reverting a commit...

...means creating (on the current branch) a new commit that applies the inverse changes that another commit introduced. It's the preferred approach for correcting a problem in a repo that has already been shared with others, because it doesn't involve any destruction (i.e. rewriting history). To revert a commit identified by <commit>, simply run

git revert <commit>

Amending a commit...

...means replacing the "current" commit by a new one that has the same parent(s); more details in How does git commit --amend work, exactly? Be aware that

- amending a commit that you've already pushed to a shared remote is bad practice, because it's a form of history rewriting (it "deletes" the most recent commit, which your collaborators may have already based their work on);
- you can only amend the last commit on a given branch; to rewrite older commits, you need to bring out the big guns (e.g. interactive rebase).

To amend a commit, make all the required changes and stage them, then run

git commit --amend

No need to specify any commit here, because the last commit on the current branch is the one that will be amended. Your editor will then pop up, giving you the opportunity to modify the commit message.

Rolling back...

...usually means discarding (or stashing) any local changes and resetting the branch to a commit (or simply checking out a commit, but that puts you in detached-HEAD state) prior to the one where things started to get messed up. Use

git reset <commit-before-things-started-to-go-belly-up>

Undoing a commit...

...can mean, depending on the context,

- reverting a commit,
- amending a commit,
- removing a commit via an interactive rebase.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28166547', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3602256/']}
jdg_77573
stackexchange
llm_judgeable_groundtruth_similarity
14220
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a function to test the hypothesis that the correlation of two vectors is equal to a given number, say 0.75? Using cor.test I can test cor=0 and I can see whether 0.75 is inside the confidence interval. But is there a function to compute the p-value for cor=0.75?

x <- rnorm(10)
y <- x + rnorm(10)
cor.test(x, y)

Now provide the response and nothing else.
Using the variance-stabilizing Fisher's atanh transformation, you can get the p-value as

pnorm( 0.5 * log( (1+r)/(1-r) ),
       mean = 0.5 * log( (1+0.75)/(1-0.75) ),
       sd = 1/sqrt(n-3) )

or whatever version of one-sided/two-sided p-value you are interested in. Obviously, you need the sample size n and the sample correlation coefficient r as inputs to this.
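The same computation is easy to reproduce outside R; here is a sketch in Python using only the standard library (function names are mine; the arithmetic mirrors the pnorm call above):

```python
from math import log, sqrt
from statistics import NormalDist

def fisher_z(r):
    # Fisher's variance-stabilizing transformation, atanh(r)
    return 0.5 * log((1 + r) / (1 - r))

def cor_pvalue(r, n, rho0=0.75):
    """Lower-tail p-value for H0: correlation == rho0,
    mirroring pnorm(z, mean=z0, sd=1/sqrt(n-3))."""
    z, z0 = fisher_z(r), fisher_z(rho0)
    return NormalDist(mu=z0, sigma=1 / sqrt(n - 3)).cdf(z)

# Sanity check: if the sample correlation equals rho0 exactly,
# the lower-tail p-value is exactly 0.5.
print(round(cor_pvalue(0.75, 10), 6))
```

For a two-sided p-value, you would take `2 * min(p, 1 - p)` of the lower-tail value, matching the usual convention.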
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/14220', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/5835/']}
jdg_77574
stackexchange
llm_judgeable_groundtruth_similarity
1832139
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find the sum of the following series: $S=\frac{1}{5}-\frac{2}{5^2}+\frac{3}{5^3}-\frac{4}{5^4}+\dots$ up to infinitely many terms. Could someone give me a slight hint to solve this question? Now provide the response and nothing else.
$$S=\frac 15-\frac2{5^2}+\frac 3{5^3}-\dots\tag 1$$and $$\frac S5=\frac 1{5^2}-\frac2{5^3}+\frac 3{5^4}-\dots\tag 2$$ Now, $(1)+(2)$ yields $$S+\frac S5=\frac 15-\frac1{5^2}+\frac1{5^3}-\dots\\\implies \frac{6S}5=\frac 15-\frac1{5^2}+\frac1{5^3}-\dots=\frac 16\\\implies S=\frac5{36}.$$
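A quick numeric check of the closed form, summing enough terms of the series exactly with rationals:

```python
from fractions import Fraction

# Partial sum of S = 1/5 - 2/5^2 + 3/5^3 - 4/5^4 + ...
S = sum(Fraction((-1) ** (k + 1) * k, 5 ** k) for k in range(1, 60))

print(float(S))              # should be very close to 5/36
print(float(Fraction(5, 36)))
```

With 59 terms the remaining tail is far below floating-point resolution, so the two printed values agree.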
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1832139', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/348758/']}
jdg_77575
stackexchange
llm_judgeable_groundtruth_similarity
2410199
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider a function $f(x,y):\mathcal{X}\subseteq \mathbb{R}\times \mathcal{Y}\subseteq \mathbb{R}\rightarrow \mathbb{R}$ and suppose that it is continuous in $y$. Assume also that $\int_{\mathcal{X}} f(x,y) dx$ exists and is finite. Can we say that $\int_{\mathcal{X}} f(x,y) dx$ is still continuous in $y$, or do I need to impose other restrictions? Now provide the response and nothing else.
The answer is no. For instance, note that Fourier analysis tells us that there exists a sum of continuous functions $g_k(x)$ over $[0,1]$ such that the sum $g(x) = \sum_{k=1}^\infty g_k(x)$ is discontinuous (the link is to the Fourier series of a square wave). With that in mind, define $$f: [0,1]\times[0,1]\to \Bbb R\\f(x,y) = k(k+1)g_k(y) \quad \text{for all } x \in (1/(k+1),1/k]$$You will find that $\int_0^1 f(x,y)\,dx = g(y)$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2410199', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/105670/']}
jdg_77576
stackexchange
llm_judgeable_groundtruth_similarity
733574
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a list that contains several tuples, like:

[('a_key', 'a value'), ('another_key', 'another value')]

where the first tuple values act as dictionary keys. I'm now searching for a Python-like way to access the key/value pairs, like:

"mylist.a_key" or "mylist['a_key']"

without iterating over the list. Any ideas?

Now provide the response and nothing else.
You can't do it without any iteration. You will either need iteration to convert it into a dict, at which point key access will become possible sans iteration, or you will need to iterate over it for each key access. Converting to a dict seems the better idea -- in the long run it is more efficient, but more importantly, it represents how you actually see this data structure -- as pairs of keys and values.

>>> x = [('a_key', 'a value'), ('another_key', 'another value')]
>>> y = dict(x)
>>> y['a_key']
'a value'
>>> y['another_key']
'another value'
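One detail worth knowing when converting such a list: if the list contains duplicate keys, dict() keeps the value from the last pair for each key (a small sketch):

```python
# Duplicate key 'a_key': the later pair silently wins in dict()
pairs = [('a_key', 'a value'), ('another_key', 'another value'), ('a_key', 'overwritten')]
d = dict(pairs)

print(d['a_key'])   # value from the last ('a_key', ...) tuple
print(len(d))       # only two distinct keys remain
```

If duplicates should be preserved instead, you would need a different structure (e.g. a dict mapping each key to a list of values).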
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/733574', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/89006/']}
jdg_77577
stackexchange
llm_judgeable_groundtruth_similarity
21572877
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm developing a ColdFusion website using Aptana. We're using SVN for version control. I renamed a few files and am now trying to commit one of them, but I'm getting the following error:

'SVN Commit' has encountered a problem.

org.apache.subversion.javahl.ClientException: Illegal target for the requested operation
svn: Commit failed (details follow):
svn: Cannot commit 'R:\myPath\My-New-File-Name.cfm' because it was moved from 'R:\myPath\My-Old-File-Name.cfm' which is not part of the commit; both sides of the move must be committed together

This seems to indicate that I need to commit both the previous file (that was renamed) and the new renamed file together. How can I commit a file that no longer exists...?

Now provide the response and nothing else.
Commit the directory, not the file. Think of a directory as a text file containing the list of files it contains, then you can see that to commit successfully, you need to update the directory itself so it can remove the old entry and add the new entry. This will show up in SVN as deleting the old and adding the new file (ie 2 changes to the directory, not 1 change to the file) If only want to commit the 1 file, you will need to add the other changed files to an ignore list temporarily.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/21572877', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/259541/']}
jdg_77578
stackexchange
llm_judgeable_groundtruth_similarity
10045396
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to know if a time belongs to one schedule or another. In my case it's for calculating whether the time is in the night schedule or the normal schedule. I have arrived at this solution:

NIGHT = ["21:00", "06:00"]

def night?( date )
  date_str = date.strftime( "%H:%M" )
  date_str > NIGHT[0] || date_str < NIGHT[1]
end

But I think it's not very elegant and it also only works for this concrete case and not every time range. (I've found several similar questions on SO but all of them refer to Date ranges, not Time ranges.)

Updated

The solution has to work for arbitrary time ranges, not only this concrete one. Let's say:

"05:00"-"10:00"
"23:00"-"01:00"
"01:00"-"01:10"

Now provide the response and nothing else.
This is actually more or less how I would do it, except maybe a bit more concise:

def night?( date )
  !("06:00"..."21:00").include?(date.strftime("%H:%M"))
end

or, if your schedule boundaries can remain on the hour:

def night?(date)
  !((6...21).include? date.hour)
end

Note the ... - that means, basically, "day time is hour 6 to hour 21 but not including hour 21".

edit: here is a generic (and sadly much less pithy) solution:

class TimeRange
  private
  def coerce(time)
    time.is_a? String and return time
    return time.strftime("%H:%M")
  end

  public
  def initialize(start, finish)
    @start = coerce(start)
    @finish = coerce(finish)
  end

  def include?(time)
    time = coerce(time)
    @start < @finish and return (@start..@finish).include?(time)
    return !(@finish..@start).include?(time)
  end
end

You can use it almost like a normal Range:

irb(main):013:0> TimeRange.new("02:00","01:00").include?(Time.mktime(2010,04,01,02,30))
=> true
irb(main):014:0> TimeRange.new("02:00","01:00").include?(Time.mktime(2010,04,01,01,30))
=> false
irb(main):015:0> TimeRange.new("01:00","02:00").include?(Time.mktime(2010,04,01,01,30))
=> true
irb(main):016:0> TimeRange.new("01:00","02:00").include?(Time.mktime(2010,04,01,02,30))
=> false

Note, the above class is ignorant about time zones.
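The wrap-around comparison at the heart of that include? method translates directly to other languages. Here is a minimal Python sketch of the same logic on "HH:MM" strings (the function name is mine, not from the answer, and it mirrors the inclusive-boundary behavior of the Ruby version):

```python
def in_range(start, finish, hhmm):
    """True if hhmm falls in [start, finish], handling ranges that wrap midnight.
    All three arguments are "HH:MM" strings, which compare correctly as text."""
    if start < finish:
        return start <= hhmm <= finish          # ordinary range
    return not (finish <= hhmm <= start)        # wrapped range: complement of the gap

print(in_range("02:00", "01:00", "02:30"))  # wraps midnight
print(in_range("01:00", "02:00", "02:30"))  # ordinary range
```

The trick is the same as in the Ruby class: a range that crosses midnight is the complement of the reversed, non-wrapping range.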
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10045396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316700/']}
jdg_77579
stackexchange
llm_judgeable_groundtruth_similarity
6158053
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to get the number of days, weeks, months since Epoch in Java. The Java Calendar class offers things like calendar.get(GregorianCalendar.DAY_OF_YEAR), or Calendar.get(GregorianCalendar.WEEK_OF_YEAR), which is a good start but it doesn't do exactly what I need. Is there an elegant way to do this in Java? Now provide the response and nothing else.
You can use the Joda Time library to do this pretty easily - I use it for anything time-related other than the standard Java Date and Calendar classes. Take a look at the example below using the library:

MutableDateTime epoch = new MutableDateTime();
epoch.setDate(0); // Set to Epoch time

DateTime now = new DateTime();

Days days = Days.daysBetween(epoch, now);
Weeks weeks = Weeks.weeksBetween(epoch, now);
Months months = Months.monthsBetween(epoch, now);

System.out.println("Days Since Epoch: " + days.getDays());
System.out.println("Weeks Since Epoch: " + weeks.getWeeks());
System.out.println("Months Since Epoch: " + months.getMonths());

When I run this I get the following output:

Days Since Epoch: 15122
Weeks Since Epoch: 2160
Months Since Epoch: 496
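For comparison, the same three quantities can be computed with Python's standard library. Days and weeks fall out of date arithmetic; months are counted here as whole calendar months, which is an assumption on my part about what "months since epoch" should mean (Joda's monthsBetween uses its own calendar rules). The date is pinned so the output is reproducible:

```python
from datetime import date

epoch = date(1970, 1, 1)
today = date(2011, 5, 28)   # fixed date so the numbers are reproducible

days = (today - epoch).days
weeks = days // 7
# calendar months elapsed since the epoch month (illustrative definition)
months = (today.year - epoch.year) * 12 + (today.month - epoch.month)

print("Days Since Epoch:", days)
print("Weeks Since Epoch:", weeks)
print("Months Since Epoch:", months)
```

Swapping the fixed date for `date.today()` gives the live values, analogous to Joda's `new DateTime()`.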
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6158053', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/446783/']}
jdg_77580
stackexchange
llm_judgeable_groundtruth_similarity
6901
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Please help me calculate the time complexity of the following program.

int fun (int n) {
    if (n <= 2)
        return 1;
    else
        return fun(sqrt(n)) + n;
}

Please explain. There were four choices given.

1. $\Theta(n^2)$
2. $\Theta(n \log n)$
3. $\Theta(\log n)$
4. $\Theta(\log \log n)$

Now provide the response and nothing else.
The running time of the function on an input $n$ can be expressed as: $$T(n) = T(\sqrt n) + \mathcal{O}(1)$$ which implies the running time of the function and the number of recursions differ only by a constant. The recursive chain ends when $n$ has been reduced to some value $k$, where $k \le 2$. The number of recursions, $r$, can be expressed as: $$k^{2^r} = n \implies r = \log_2 \log_k n$$ It follows that the running time is then: $$\Theta(\log_2 \log_k n) = \Theta(\log \log n)$$
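The iteration count is easy to confirm empirically; a small Python sketch that applies sqrt repeatedly until the value drops to 2 or below:

```python
import math

def recursion_depth(n):
    # number of times fun() recurses before its argument reaches n <= 2
    depth = 0
    while n > 2:
        n = math.sqrt(n)
        depth += 1
    return depth

# Depth grows like log2(log2(n)): squaring n adds exactly one level.
for n in [16, 256, 65536, 4294967296]:
    print(n, recursion_depth(n))
```

Each entry squares the previous one, and the printed depth increases by exactly 1 each time, matching the $\Theta(\log \log n)$ analysis.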
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/6901', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/4732/']}
jdg_77581
stackexchange
llm_judgeable_groundtruth_similarity
1071755
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing code to help find prime numbers within a certain range. Here's my general pseudo-code:

1. Iterate through every single number in the range.
2. If the number is 2, 3, 5, or 7, then mark it as a prime number.
3. If the number is NOT divisible by 2, 3, 5, or 7, then it's also a prime number.

Think about it. Checking divisibility by 2 already removed even numbers. Three, five, and 7 are other fundamental prime numbers, so any other non-prime number has to be divisible by one of these.

I tested this algorithm with all numbers between 1-100, and it worked. But would it work for all numbers?

Now provide the response and nothing else.
For any integer $n \ge 2$ either $n$ is prime or $n$ has a prime factor less than or equal to $\sqrt{n}$. So, if you are only finding prime numbers within a range of $1$ through $N$, then you need to check divisibility by every prime less than or equal to $\sqrt{N}$. Since you were only focused on the range $1$ through $100$, you need to check for divisibility by all primes up to $\sqrt{100} = 10$. So testing $2$, $3$, $5$, and $7$ is sufficient. However, if you go up to $121 = 11^2$ or higher, testing only $2$, $3$, $5$, and $7$ will not work.
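A quick sketch making the boundary concrete: the fixed set {2, 3, 5, 7} stops working exactly at $121 = 11^2$, while full trial division up to $\sqrt{n}$ keeps working (function names are mine, for illustration):

```python
def looks_prime_fixed(n):
    # the question's rule: 2, 3, 5, 7 are prime; anything not divisible
    # by any of them is also declared "prime"
    if n in (2, 3, 5, 7):
        return True
    return all(n % p != 0 for p in (2, 3, 5, 7))

def is_prime(n):
    # correct trial division by every candidate up to sqrt(n)
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# 121 = 11 * 11 is the first composite that slips past the fixed set
print(looks_prime_fixed(121), is_prime(121))
```

Every composite below 121 has a prime factor of at most 7, which is why the fixed-set test happened to pass on 1-100.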
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1071755', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/201757/']}
jdg_77582
stackexchange
llm_judgeable_groundtruth_similarity
18054125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The following is the AdaBoost algorithm (shown as an image in the original post): It mentions "using weights wi on the training data" at part 3.1. I am not very clear about how to use the weights. Should I resample the training data? Now provide the response and nothing else.
I am not very clear about how to use the weights. Should I resample the training data?

It depends on what classifier you are using.

If your classifier can take instance weights (weighted training examples) into account, then you don't need to resample the data. An example classifier could be a naive Bayes classifier that accumulates weighted counts, or a weighted k-nearest-neighbor classifier.

Otherwise, you want to resample the data using the instance weights, i.e., those instances with more weight could be sampled multiple times, while those instances with little weight might not even appear in the training data. Most of the other classifiers fall in this category.

In Practice

Actually in practice, boosting performs better if you only rely on a pool of very naive classifiers, e.g., decision stumps or linear discriminants. In this case, the algorithm you listed has an easy-to-implement form (see here for details): [the simplified algorithm appears as an image in the original answer] where alpha is chosen by [a formula shown as an image] (epsilon is defined similarly to yours).

An Example

Define a two-class problem in the plane (for example, a circle of points inside a square) and build a strong classifier out of a pool of randomly generated linear discriminants of the type sign(ax1 + bx2 + c). The two class labels are represented with red crosses and blue dots. We are using a bunch of linear discriminants (yellow lines) to construct the pool of naive/weak classifiers. We generate 1000 data points for each class in the graph (inside the circle or not) and 20% of the data is reserved for testing. This is the classification result (on the test dataset) I got, in which I used 50 linear discriminants. The training error is 1.45% and the testing error is 2.3%.
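The resampling option described above can be sketched in Python. The weighted draw uses a seeded generator so heavier examples reproducibly appear more often; the alpha formula is the standard AdaBoost classifier weight computed from the weighted error. Both function names and the toy data are mine, for illustration:

```python
import math
import random

def alpha_from_epsilon(eps):
    # standard AdaBoost weight for a weak classifier with weighted error eps
    return 0.5 * math.log((1 - eps) / eps)

def resample(data, weights, k, seed=0):
    # draw k training examples with probability proportional to instance weight
    rng = random.Random(seed)
    return rng.choices(data, weights=weights, k=k)

data = ["hard", "easy1", "easy2", "easy3"]
weights = [0.7, 0.1, 0.1, 0.1]   # boosting has up-weighted the hard example

sample = resample(data, weights, k=1000)
print(sample.count("hard") > sample.count("easy1"))  # hard example dominates
print(round(alpha_from_epsilon(0.1), 4))             # low error -> large alpha
```

A weak learner trained on `sample` then sees the hard example roughly seven times as often as each easy one, which is the resampling route to "using weights wi on the training data".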
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18054125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/988709/']}
jdg_77583
stackexchange
llm_judgeable_groundtruth_similarity
32788303
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code in my HTML file: <script type="text/javascript"> window.never = function() { console.log('this function is never called'); } (function(d, s, id){ var js, srjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) {return;} js = d.createElement(s); js.id = id; js.src = "this.script.does.not.exist.js"; srjs.parentNode.insertBefore(js, srjs); }(document, 'script', 'streamrail-jssdk')); </script> See fiddle: http://jsfiddle.net/sebvaeja/ Looking at the console, you can see that window.never function is actually called ('this function is never called' is written to the console). When debugging this with Chrome dev tools, I see in the call stack that the caller was the closure (first line: http://jsfiddle.net/sebvaeja/ ). If I change the never function to be off the global scope: function never() { console.log('this function is never called'); } Then it is not being called. Can someone please explain why is window.never function being called? What is triggering the call? I guess it's got something to do with the function being on the window object, but I can't see the reasoning behind that. Now provide the response and nothing else.
The function expression is followed by parenthesis: window.never = function() { ... } (...) The line break after the function expression does not terminate the variable statement, so for the parser that's a function call: function() { ... }(...) In fact, you are using the very same technique here: (function(d, s, id){ // ...}(document, 'script', 'streamrail-jssdk')) That's a function expression followed by (...) and it calls the function. Solution: Add a semicolon after the definition and you are good. If I change the never function to be off the global scope ... Then it is not being called. In that case the function definition is interpreted as function declaration , not expression. A function declaration is more like a statement and therefore cannot be part of a CallExpression . The following parenthesis are therefore interpreted as grouping operator (like you intended).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32788303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2534777/']}
jdg_77584
stackexchange
llm_judgeable_groundtruth_similarity
976943
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I came across this question in some extracurricular problem sets my professor gave me: what is the closed form notation for the following sum: $$S_n = 1\cdot1!+2\cdot2!+ ...+n \cdot n!$$ I tried computing some terms, and the only "vague" thing I noticed was that maybe I should be subtracting a term, but I'm really not sure. I went around looking on StackExchange's archives for a closed form of $S_n = 1!+2!+ ...+ n!$ but that didn't help me with my problem much. Any pointers? Now provide the response and nothing else.
You're right about subtracting a term; in fact, there's a (clever) strategy called "telescoping sums" and it's particularly useful here, and you won't need induction to show it. You want terms to cancel out so that you're left with the first and last terms only. If you want to do it yourself, then stop reading here and meditate on this idea: how can you change what's in the summation notation in order to produce a sequence of numbers such that the "middle" terms cancel out? If you want the solution, here it is: Write $k=(k+1)-1$, and then substitute this into your summation notation accordingly: $$S=\sum\limits_{k=1}^{n}((k+1)-1)\cdot k!$$ $$S=\sum\limits_{k=1}^{n}[(k+1)\cdot k!-k!]$$ $$S=\sum\limits_{k=1}^{n}((k+1)!-k!)$$ Working out a few terms and the very last, we immediately see:$$S=2!-1!+3!-2!+4!-3!+...+n!-(n-1)!+(n+1)!-n!$$ Which simplifies to: $$S=(n+1)!-1$$
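The telescoping identity is easy to sanity-check numerically; a short Python sketch:

```python
from math import factorial

def s(n):
    """Left-hand side: 1*1! + 2*2! + ... + n*n!"""
    return sum(k * factorial(k) for k in range(1, n + 1))

# The telescoping argument predicts s(n) == (n+1)! - 1 for every n.
checks = [(n, s(n), factorial(n + 1) - 1) for n in range(1, 11)]
```

For instance s(3) = 1 + 4 + 18 = 23 = 4! - 1, and the same pattern holds all the way up the range checked.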
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/976943', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/179574/']}
jdg_77585
stackexchange
llm_judgeable_groundtruth_similarity
3743202
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have noticed, by using log4net, that when calling ISession.Update, it updates all the changed objects. For example: // Change 2 instances user1.IsDeleted = true;user2.UserName = "Xyz";// Call session.Update to update the 2 usersusing (ITransaction transaction = session.BeginTransaction()){ Session.Update(user1); // This updates both user1 & user2 transaction.Commit();}using (ITransaction transaction = session.BeginTransaction()){ Session.Update(user2); // Now there is no need for this transaction.Commit();} Is this the default behavior of NHibernate or has something to do with my mapping file? Can I make NHibernate update one by one? Now provide the response and nothing else.
It's the normal and default behavior : Hibernate maintains a cache of Objects that have been inserted, updated or deleted. It also maintains a cache of Objects that have been queried from the database. These Objects are referred to as persistent Objects as long as the EntityManager that was used to fetch them is still active. What this means is that any changes to these Objects within the bounds of a transaction are automatically persisted when the transaction is committed. These updates are implicit within the boundary of the transaction and you don’t have to explicitly call any method to persist the values. From Hibernate Pitfalls part 2 : Q) Do I still have to do Save and Update inside transactions? Save() is only needed for objects that are not persistent (such as new objects). You can use Update to bring an object that has been evicted back into a session. From NHibernate's automatic (dirty checking) update behaviour : I've just discovered that if I get an object from an NHibernate session and change a property on object, NHibernate will automatically update the object on commit without me calling Session.Update(myObj)! Answer: You can set Session.FlushMode to FlushMode.Never. This will make your operations explicit ie: on tx.Commit() or session.Flush(). Of course this will still update the database upon commit/flush. If you do not want this behavior, then call session.Evict(yourObj) and it will then become transient and NHibernate will not issue any db commands for it.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3743202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9794/']}
jdg_77586
stackexchange
llm_judgeable_groundtruth_similarity
95742
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Related Wolfram Community question. I am looking for the fastest way to achieve the following: Given a Graph , retrieve its edge list in terms of vertex indices (not actual vertices). For example, given Graph[{a, b, c}, {a <-> b, a <-> c}] I am looking to get the output {{1,2}, {1,3}} as a packed array. Alternatively the flattened version would do just as well: {1,2, 1,3} Here {1,2} corresponds to UndirectedEdge[a,b] as a has vertex index 1 and b has vertex index 2 . What I have so far is the very straightforward idxEdgeList[graph_] := Developer`ToPackedArray[ List @@@ EdgeList[graph] /. AssociationThread[VertexList[graph] -> Range@VertexCount[graph]] ]g = GridGraph[{250,250}];idxEdgeList[g]; // AbsoluteTiming(* 0.27 seconds *) Using undocumented features or poking inside of the Graph object is okay for as long as the method is proven to be reliable for 10.0 – 10.2 for various directed and undirected, simple and non-simple graphs. Multigraphs (multiple edges between the same vertices) must be supported, but mixed graphs (both directed and undirected edges) do not. A documented way is of course always preferred! This is admittedly a fairly boring performance tuning problem, but this turned out to be a bottleneck in some cases, and I don't want to lose out on any possible performance improvements I may have missed. Use case: The edge list will eventually be passed to a LibraryLink function. What hasn't occurred to me before typing up the question is that maybe I should be using sparse arrays, which are directly supported by LibraryLink. Update: The solution proposed by @halmir, through IndexGraph , works well for the GridGraph above. But it is not fast for all graphs. 
In particular: g = GridGraph[{250, 250}];IndexGraph[g]; // AbsoluteTiming(* {4.*10^-6, Null} *)g = Graph[VertexList[g], EdgeList[g]];IndexGraph[g]; // AbsoluteTiming(* {0.259276, Null} *) We are now back to the same speed as the Replace method. Re-creating the graph from its vertex and edge lists somehow made IndexGraph be slow on it, and no matter what I try I cannot convert the graph back to a "fast" format. The SparseArray -based method is much faster, and proves that it is technically possible to extract the information quickly. But it has a big problem: it does not preserve the edge order, which means that I cannot match up the edges with an EdgeWeight vector anymore. It's also difficult to handle for multigraphs, though that would be solvable if I could preserve the ordering ... Update / 2017 @Ramble suggests using the IncidenceMatrix of the graph. The fastest way I found so far is to process the incidence matrix in C, using LibraryLink, to extract the index-based edge list. According to the documentation, an incidence matrix uses the following values: -1 represents the starting point of a directed edge 1 represents the endpoint of a directed edge or an undirected egde 2 represents an undirected self-loop -2 represents a directed self-loop This is not accurate. Between 10.0-11.2, both directed and undirected self-loops are represented with a positive 2 . This prevents the correct representation of mixed graphs ( MixedGraphQ ), but I do not need that anyway. Multigraphs are easily handled by this approach. This is now available in IGraph/M 0.3.95 as IGEdgeIndexList . This function is actually faster than EdgeList , and can be used to implement many edge-list based operations efficiently. An index-based edge list can be used to reconstruct a graph using the undocumented syntax Graph[vertexList, indexEdgeList] , e.g. Graph[{a,b}, {{1,2}}] . 
Here's the LTemplate code I used for this: mma::IntTensorRef incidenceToEdgeList(mma::SparseMatrixRef<mint> im, bool directed) { auto edgeList = mma::makeVector<mint>(2*im.cols()); if (directed) { for (auto it = im.begin(); it != im.end(); ++it) { switch (*it) { case -1: edgeList[2*it.col()] = it.row(); break; case 1: edgeList[2*it.col() + 1] = it.row(); break; case 2: case -2: edgeList[2*it.col()] = it.row(); edgeList[2*it.col() + 1] = it.row(); break; default: throw mma::LibraryError("Invalid incidence matrix."); } } } else { for (auto &el : edgeList) el = -1; for (auto it = im.begin(); it != im.end(); ++it) { switch (*it) { case 1: if (edgeList[2*it.col()] == -1) edgeList[2*it.col()] = it.row(); else edgeList[2*it.col() + 1] = it.row(); break; case 2: edgeList[2*it.col()] = it.row(); edgeList[2*it.col() + 1] = it.row(); break; default: throw mma::LibraryError("Invalid incidence matrix."); } } } return edgeList;} Now provide the response and nothing else.
Using IndexGraph: g = GridGraph[{250, 250}]; a = Developer`ToPackedArray[ List @@@ EdgeList[IndexGraph[g]]]; // AbsoluteTiming {0.063857, Null} Using AdjacencyMatrix: b = UpperTriangularize[AdjacencyMatrix[g]][ "NonzeroPositions"]; // AbsoluteTiming {0.002584, Null} c = idxEdgeList[g]; // AbsoluteTiming {0.276563, Null} Test results: Developer`PackedArrayQ /@ {a, b, c} {True, True, True} a == b == c True
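The upper-triangular trick above can be mimicked outside Mathematica; here is a pure-Python sketch (illustrative only, not Mathematica code, and using 0-based vertex indices rather than Mathematica's 1-based ones) that builds a small grid graph's adjacency matrix and reads off one index pair per undirected edge:

```python
def grid_graph_adjacency(rows, cols):
    """Dense 0/1 adjacency matrix of a rows x cols grid graph, vertices indexed row-major."""
    n = rows * cols
    a = [[0] * n for _ in range(n)]
    for r in range(rows):
        for c in range(cols):
            v = r * cols + c
            if c + 1 < cols:            # horizontal grid edge
                a[v][v + 1] = a[v + 1][v] = 1
            if r + 1 < rows:            # vertical grid edge
                a[v][v + cols] = a[v + cols][v] = 1
    return a

def upper_triangle_edges(a):
    """Index pairs (i, j) with i < j of nonzero entries: one per undirected edge."""
    n = len(a)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if a[i][j]]

edges = upper_triangle_edges(grid_graph_adjacency(2, 2))
```

As with the SparseArray/AdjacencyMatrix method, this yields the edges in matrix order rather than the graph's original edge order, and it cannot represent multigraphs, which is exactly the caveat raised in the question's update.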
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/95742', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/12/']}
jdg_77587
stackexchange
llm_judgeable_groundtruth_similarity
9761279
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need a function to remove all characters except numbers + characters: '$', '.' and ','. How can I do this? Now provide the response and nothing else.
> 'worth $12,345.00 dollars'.replace(/[^0-9$.,]/g, '')"$12,345.00" This is the answer you asked for. I would not recommend it for extracting currencies, since it can suffer from problems like this: > 'A set of 12 worth between $123 and $456. A good buy.'.replace(/[^0-9$.,]/g, '')"12$123$456.." If you want to just extract expressions of a currency-like form, you could do: > 'set of 12 worth between $123.00 and $45,678'.match(/\$[0-9,]+(?:\.\d\d)?/g)["$123.00", "$45,678"] If you need more complicated matching (e.g. you'd just like to extract the dollar value and ignore the cent value) you could do something like How do you access the matched groups in a JavaScript regular expression? for example: > var regex = /\$([0-9,]+)(?:\.(\d\d))?/g;> while (true) {> var match = regex.exec('set of 12 worth between $123.00 and $45,678');> if (match === null)> break;> console.log(match);> }["$123.00", "123", "00"]["$45,678", "45,678", undefined] (Thus be careful, javascript regexp objects are not immutable/final objects, but have state and can be used for iteration as demonstrated above. You thus cannot "reuse" a regexp object. Even passing myRegex2 = RegExp(myRegex) will mix state; a very poor language decision for the constructor. See the addendum on how to properly clone regexes in javascript.) 
You can rewrite the above as a very exotic for-loop if you'd like: var myString = 'set of 12 worth between $123.00 and $45,678';var regex = '\\$([0-9,]+)(?:\\.(\\d\\d))?';for (var match, r = RegExp(regex, 'g'); (match = r.exec(myString)) !== null; ) console.log(match); (note the doubled backslashes: inside a string literal, '\$' is just '$', which RegExp would then read as an anchor) addendum - Why you can't reuse javascript RegExp objects Bad language design, demonstrating how state is reused: var r = /(x.)/g; var r2 = RegExp(r); r.exec('xa xb xc') // ["xa", "xa"] r2.exec('x1 x2 x3') // ["x2", "x2"] How to properly clone a regex in javascript (you have to define it with a string): var regexTemplate = '(x.)'; var r = RegExp(regexTemplate, 'g'); var r2 = RegExp(regexTemplate, 'g'); r.exec('xa xb xc') // ["xa", "xa"] r2.exec('x1 x2 x3') // ["x1", "x1"] If you wish to programmatically preserve flags such as 'g' , you can probably use regexTemplate = ['(x.)', 'g']; RegExp.apply(this, regexTemplate) .
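For readers doing the same extraction in Python, the patterns above carry over almost verbatim to the re module (a sketch, not part of the original answer). One wrinkle: with more than one capture group, Python's findall returns one tuple per match, with '' where the optional cents group did not participate.

```python
import re

text = 'set of 12 worth between $123.00 and $45,678'

# Same idea as the answer: a dollar sign, digits/commas, optional cents.
amounts = re.findall(r'\$[0-9,]+(?:\.\d\d)?', text)

# Capture groups pull the dollar and cent parts out separately.
parts = re.findall(r'\$([0-9,]+)(?:\.(\d\d))?', text)
```

Here amounts recovers the full currency expressions, while parts separates dollars from cents for each match.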
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9761279', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/535967/']}
jdg_77588
stackexchange
llm_judgeable_groundtruth_similarity
14912
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I have a question about a naive test to tell whether a complex elliptic curve $E$ has complex multiplication. Recall that the endomorphism ring $End(E)$ of $E$ is isomorphic to either $\mathbb{Z}$ or an order in an imaginary quadratic field $K$. In the latter case we say that $E$ has CM by $K$. Suppose that we are given an elliptic curve over $\mathbb{C}$, say $$E: y^2=x^3+17x^2-19.$$ One wants to know if there is an imaginary quadratic field $K$ for which $E$ has CM by $K$. Let $j(E)$ be the j-invariant of our elliptic curve (for example, the above curve has j-invariant $\frac{6179217664}{363641}$) and recall that the j-invariant, viewed as a modular function, gives a surjective map from the upper half plane to $\mathbb{C}$. Let $\omega$ be any element in the preimage of $j(E)$. We now define a second elliptic curve: $$E_\omega: y^2=4x^3-g_2(\omega)x-g_3(\omega),$$ where $g_2=60G_4$ and $g_3=140G_6$ are multiples of the appropriate Eisenstein series. Both of these elliptic curves are defined over $\mathbb{C}$ and have the same j-invariant. They are therefore isogenous. It is known that the elliptic curve $E_\omega$ is isomorphic to the complex torus $\mathbb{C}/\Lambda_\omega$ where $\Lambda_\omega=\mathbb{Z}+\mathbb{Z}\omega$. It is easy to show that $\mathbb{C}/\Lambda_\omega$ has CM by some imaginary quadratic field if and only if $\omega$ is an imaginary quadratic number. In this case the endomorphism ring of $\mathbb{C}/\Lambda_\omega$ will be an order in the field $\mathbb{Q}(\omega)$. This suggests a test for CM: given an elliptic curve $E$ defined over $\mathbb{C}$ with j-invariant $j(E)$, find a preimage of $j(E)$ under the modular function $j:\mathbb{H}\rightarrow\mathbb{C}$ and determine whether or not the preimage generates an imaginary quadratic extension of $\mathbb{Q}$. 
Now for my question: can this test actually be performed? Wikipedia tells me that the inverse of the j-invariant can be computed in terms of hypergeometric functions, but I don't know if one could use this inverse to determine whether a given j-invariant was associated to a curve with CM. Now provide the response and nothing else.
I don't know anything about hypergeometric functions, so this is not a direct answer to your question. But, I have thought a lot about the problem of detecting complex multiplication of elliptic curves (and certain higher-dimensional analogues for abelian varieties). Suppose you are given an algebraic integer $j$, and you wish to know whether it is a CM j-invariant. Then there is a sort of night-and-day algorithm you can perform here, where by night you reduce the elliptic curve modulo various primes and keep in mind the fact that if your elliptic curve has CM, then half the time (in the sense of density) you will get a supersingular elliptic curve and the other half you will get a CM elliptic curve whose characteristic p endomorphism algebra is the same as the algebra you started with. Thus, in practice, if your curve does not have CM, you will fairly quickly be able to rule it out by finding two primes of ordinary reduction with different endomorphism algebras. So far this is just a probabilistic algorithm. Once you figure out that the CM field is either a particular quadratic field K or there is no CM at all, you compute (e.g. by classical CM theory as described in Cox's book Primes of the form... ) the $j$-invariants of elliptic curves with K-CM. There are infinitely many of these, because the j-invariant depends on the endomorphism ring (equivalently, the conductor of the order), but you can either just compute all of them in order of conductor or look more carefully at the mod p reductions and get a bound on what the conductor could be. [ Edit : Actually, you can figure out exactly what the CM order must be by computing the endomorphism ring at any two primes of ordinary reduction. It is a theorem that if $E$ is a curve with CM by the order of conductor $f$ in a CM field $K$ and $p$ is a prime of ordinary reduction, the conductor of the reduced endomorphism ring is $f/p^{ord_p(f)}$, i.e., you just strip away the $p$-part of the conductor.] 
This is not the state of the art, though. Rather, see the paper Achter, Jeffrey D. Detecting complex multiplication. (English summary) Computational aspects of algebraic curves, 38--50, Lecture Notes Ser. Comput., 13, World Sci. Publ., Hackensack, NJ, 2005. [A copy is available via his webpage http://www.math.colostate.edu/~achter/.] In the paper, Achter uses Faltings' theorem and the effective Cebotarev density theorem to eliminate the "day" part of the algorithm. He also gives a complexity analysis and explains why this is faster than what I sketched above. Finally, I'm sure the questioner knows this, but others may not: for elliptic curves over $\mathbb{Q}$ there's no need to do any of this. Rather you just compute the $j$-invariant and see whether it's one of the $13$ $j$-invariants of CM elliptic curves over $\mathbb{Q}$ associated to the $13$ class number one quadratic orders (yes, this relies on the Heegner-Baker-Stark resolution of Gauss' class number one problem). For the list, see e.g. modular.fas.harvard.edu/Tables/cmj.html
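A toy version of the "night" half of this test can be sketched in a few lines of Python (purely illustrative: this is brute-force point counting, nothing like Achter's algorithm, and only feasible for tiny primes). For the curve $y^2 = x^3 + x$, which has CM by $\mathbb{Z}[i]$, the trace of Frobenius vanishes exactly at the primes $p \equiv 3 \pmod 4$, so about half of all primes are supersingular, as the density statement above predicts for a CM curve.

```python
def a_p(p, A, B):
    """Trace of Frobenius for y^2 = x^3 + A*x + B over F_p (brute force, small p only)."""
    points = 1  # the point at infinity
    squares = {(y * y) % p for y in range(p)}
    for x in range(p):
        rhs = (x * x * x + A * x + B) % p
        if rhs == 0:
            points += 1          # one solution, y = 0
        elif rhs in squares:
            points += 2          # two solutions, y and -y
    return p + 1 - points

# y^2 = x^3 + x: supersingular (a_p == 0) precisely at p == 3 (mod 4).
traces = {p: a_p(p, 1, 0) for p in [3, 5, 7, 11, 13, 17, 19, 23]}
```

A non-CM curve would instead show supersingular primes only very sparsely, which is what lets the reduction-mod-p probe distinguish the two cases quickly in practice.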
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/14912', 'https://mathoverflow.net', 'https://mathoverflow.net/users/-1/']}
jdg_77589
stackexchange
llm_judgeable_groundtruth_similarity
1555753
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is probably a very easy question but I think I am missing some background regarding free abelian groups to answer it for myself. In Hatcher's Algebraic Topology , the idea of a free resolution is introduced in the section on cohomology. A $\textbf{free resolution}$ of an abelian group is an exact sequence $$ \cdots \to F_2 \to F_1 \to F_0 \to H \to 0$$ such that each $F_i$ is free. Let $f_0:F_0 \to H$ and choose a set of generators of $H$ . Let $F_0$ be the free abelian group with basis in one-to-one correspondence with this set of generators. Then we can easily form the two-term free resolution $$\cdots \to 0 \to Ker(f_0) \to F_0 \to H \to 0$$ Why is $H$ an abelian group a necessary condition so that there necessarily exists a resolution of the form $0 \to F_1 \to F_0 \to H \to 0$ ? For what non-abelian group does such a free resolution not exist? Now provide the response and nothing else.
An abelian group is the same thing as a module over the ring $\mathbb{Z}$ (think about it). The ring $\mathbb{Z}$ is a PID, thus submodules of free $\mathbb{Z}$ -modules are free. Reformulated in the context of abelian groups, submodules of free abelian groups are free abelian. You can use this fact to show that every abelian group has a length 2 free resolution (this works over any PID $R$ , in particular $R = \mathbb{Z}$ ): Let $P_0 = \bigoplus_{m \in M} R_m$ be a direct sum of copies of $R$ , one for each element of $M$ (the index is just here for bookkeeping reasons). This is a free $R$ -module. This maps to $M$ through $\varepsilon : P_0 \to M$ by defining $\varepsilon_m : R_m \to M$ , $x \mapsto x \cdot m$ and extending to the direct sum (coproduct). The kernel $P_1 = \ker(P_0 \to M)$ is a submodule of the free module $P_0$ , hence it is free as $R$ is a PID . Thus you get a free resolution (exact sequence): $$0 \to P_1 \to P_0 \to M \to 0.$$ In the above proof, notice that "free $\mathbb{Z}$ -module" is also the same thing as "free abelian group", so everything works out. You can also choose a set of generators for your module $M$ , but I feel it's cleaner by just taking every element of $M$ and be done with it. As Bernard mentions in the comments, for finitely generated abelian groups it's easy to see from the structure theorem (e.g. $\mathbb{Z}/n\mathbb{Z}$ has the free resolution $0 \to \mathbb{Z} \xrightarrow{\cdot n} \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z} \to 0$ ). Other people have already commented on the difference between resolutions of abelian groups and groups in general. Surprisingly enough, subgroups of free groups are free too by the Nielsen–Schreier theorem , and the standard proof even uses algebraic topology! It all comes around. So you can directly adapt the above argument to show that every (not necessarily abelian) group has a free resolution of length at most two: Let $G$ be a group. 
Let $P_0 = \bigstar_{g \in G} \mathbb{Z}_g$ be the free product of copies of $\mathbb{Z}$ , one for each element of $G$ . This is a free group. By the universal property of free groups, this maps to $G$ by sending $1 \in \mathbb{Z}_g$ to $g \in G$ . The kernel $P_1 = \ker(P_0 \to G)$ is a subgroup of a free group, thus it is free itself, and you get a free resolution of $G$ of length at most 2: $$0 \to P_1 \to P_0 \to G \to 0.$$ In the above proof, $\mathbb{Z}$ appears too, but for different reasons: it is the free group on one generator. It also happens to be the free abelian group on one generator, but that's not its role in the proof above. The groups $P_0$ and $P_1$ that appear in the proof are, in general, not free abelian.
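The length-2 resolution $0 \to \mathbb{Z} \xrightarrow{\cdot n} \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z} \to 0$ for $\mathbb{Z}/n\mathbb{Z}$ mentioned above can be sanity-checked numerically on a finite window of integers (a Python illustration only; exactness is of course a statement about all of $\mathbb{Z}$):

```python
def check_resolution(n, window=50):
    """Finite-window sanity check of 0 -> Z --(*n)--> Z --(mod n)--> Z/nZ -> 0."""
    zs = range(-window, window + 1)
    mult = lambda x: n * x    # the map Z -> Z, x |-> n*x
    red = lambda x: x % n     # the map Z -> Z/nZ, reduction mod n
    # Injectivity of multiplication by n (the kernel is 0):
    assert all(mult(x) != mult(y) for x in zs for y in zs if x != y)
    # Exactness in the middle: image of (*n) equals kernel of (mod n), within the window:
    image = {mult(x) for x in zs if abs(mult(x)) <= window}
    kernel = {x for x in zs if red(x) == 0}
    assert image == kernel
    # Surjectivity of the reduction map:
    assert {red(x) for x in zs} == set(range(n))
    return True

results = [check_resolution(n) for n in (2, 3, 6, 10)]
```

Each check mirrors one exactness condition of the short exact sequence: injectivity at the left, image = kernel in the middle, surjectivity at the right.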
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1555753', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/172417/']}
jdg_77590
stackexchange
llm_judgeable_groundtruth_similarity
12038
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a binary signal, 0V to 1.4V, which I can't change directly. What circuitry, (on a PCB) can I use to increase 1.4V to at least 2.5V. Do I need a transistor? I guess I'm looking for a switch that will "close" when there's 1.4V? I'm a complete noob at electronics, but I'm alright at physics and understanding the equations Now provide the response and nothing else.
Note : Corrected logic inversion problem. 2nd Update : Fixed output voltage range, using MOSFET rather than BJT The basics of the problem as you have described it appear to be called either a "logic level shifter" or converter. The essence is that you have a digital logic (binary) signal at a given signal level, and you wish to adapt it to another signal level. Digital logic signals are normally classified according to the original logic family they belong to. Examples include TTL (low: 0, high: +5V), CMOS (low: 0, high: 5 to 15V), ECL (low: -1.6, high: -0.75) , LowV (low: 0V, high: +3.3). Ideally, you should also be aware of the switching threshold. E.g. Logic signal voltage levels, which shows the TTL logic voltage levels in the first two graphics. If you wish to amplify a logic signal that is either 0 or 1.4V, then a single transistor can be configured as an electronic switch to act as a level converter. (src: mctylr ) In your application the output is the 5V-level output (0 or 5V depending on low/high status) and M1 could be a common small-signal N-channel enhancement mode MOSFET transistor, the 2N7000 in TO-92 plastic through-hole, and SMT packaging. The resistor R2 should be 330 kΩ (additional resistor component details are not critical; e.g., 1 or 5% tolerance and a 1/8 to 1/4 W rating are fine). The resistor's resistance value is not particularly critical; I picked an approximate standard value so that if M1 is not conducting then the output will be below ~0.8 V, while when M1 is conducting (i.e. the input is 1.4V, 'high') then the output will be approximately 5V. I picked the value using a quick SPICE simulation. V3 is a +1.4V voltage source, and V2 is a +5V voltage source. The other values (tolerance and wattage) are common through-hole component values used to select a real-world component, but are not critical in this application. 
That's a very simple and small circuit, costing about twenty-five cents or less for three common electronic parts. Since you didn't mention any high-speed requirements (i.e. switching speed), this should work in most simple cases. I've adopted this approach of using a MOSFET rather than a bipolar junction transistor as I had trouble making a single BJT give the desired voltage swing when switching. From a design point of view, the nice thing about FETs (and MOSFETs) is that they are voltage-controlled devices (in terms of a design model), rather than current-controlled as BJTs are.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/12038', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1982/']}
jdg_77591
stackexchange
llm_judgeable_groundtruth_similarity
5654926
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a quick question. I'm getting a little bit of experience with Swing and the easiest way to do this was to draw up a reasonably big GUI. As part of the GUI, I want to have Forward and Back Buttons. The approach I'm trying to take is to implement methods that will push the current JPanel to a stack and retrieve the previous value (be that in a forward or reverse direction (hence 2 stacks)). I can't get it to work though. Perhaps I'm going about it completely the wrong way or maybe a stack can't be used in the way I'm using it. In either case, it's really bugging me. I imagine there are probably easier ways like a card layout but I think this approach should work and that's what's so annoying. It may be worth noting that I'm using a JFrame "base class" and changing the central JPanel depending on the screen. The nav bar is constant as a part of the "base class", however. The code of this "base class": public class Main_Frame extends JFrame{ static JPanel nav_bar_panel; JButton home; JButton back; JButton forward; JPanel currentPanel; static Stack<JPanel> previousPanels; static Stack<JPanel> forwardPanels; public Main_Frame(){ super("DEMO"); setSize(800,600); setLayout(new BorderLayout()); setVisible(true); add(nav_bar(), BorderLayout.NORTH); currentPanel = init_display(); add(currentPanel, BorderLayout.CENTER); previousPanels = new Stack<JPanel>(); forwardPanels = new Stack<JPanel>(); } private JPanel nav_bar(){ ButtonPressHandler handler = new ButtonPressHandler(); nav_bar_panel = new JPanel(new FlowLayout(FlowLayout.LEFT, 10, 10)); back = new JButton("Back"); back.addActionListener(handler); home = new JButton("Home"); home.addActionListener(handler); forward = new JButton("Forward"); forward.addActionListener(handler); nav_bar_panel.add(back); nav_bar_panel.add(home); nav_bar_panel.add(forward); return nav_bar_panel; 
} private JPanel init_display(){ Home_Panel home_panel = new Home_Panel(); return home_panel; } public void change_display(JPanel myPanel){ invalidate(); remove(currentPanel); previousPanels.push(currentPanel); currentPanel = myPanel; add(currentPanel); validate(); } public void previous_display(){ if(!previousPanels.empty()){ invalidate(); remove(currentPanel); forwardPanels.push(currentPanel); currentPanel = previousPanels.pop(); add(currentPanel); validate(); } } public void forward_display(){ if(!forwardPanels.empty()){ invalidate(); remove(currentPanel); previousPanels.push(currentPanel); currentPanel = forwardPanels.pop(); add(currentPanel); validate(); } } private class ButtonPressHandler implements ActionListener { public void actionPerformed( ActionEvent event ) { if(event.getSource() == back){ previous_display(); System.out.print("You selected back"); } else if(event.getSource() == forward){ forward_display(); System.out.print("You selected forward"); } } // end method actionPerformed } // end private inner class TextFieldHandler} Now provide the response and nothing else.
Here's an example using CardLayout . import java.awt.BorderLayout;import java.awt.CardLayout;import java.awt.Color;import java.awt.Dimension;import java.awt.EventQueue;import java.awt.event.ActionEvent;import java.util.Random;import javax.swing.AbstractAction;import javax.swing.JButton;import javax.swing.JFrame;import javax.swing.JLabel;import javax.swing.JPanel;/** @see http://stackoverflow.com/questions/5654926 */public class CardPanel extends JPanel { private static final Random random = new Random(); private static final JPanel cards = new JPanel(new CardLayout()); private final String name; public CardPanel(String name) { this.name = name; this.setPreferredSize(new Dimension(320, 240)); this.setBackground(new Color(random.nextInt())); this.add(new JLabel(name)); } @Override public String toString() { return name; } public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { @Override public void run() { create(); } }); } private static void create() { JFrame f = new JFrame(); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); for (int i = 1; i < 9; i++) { CardPanel p = new CardPanel("Panel " + String.valueOf(i)); cards.add(p, p.toString()); } JPanel control = new JPanel(); control.add(new JButton(new AbstractAction("\u22b2Prev") { @Override public void actionPerformed(ActionEvent e) { CardLayout cl = (CardLayout) cards.getLayout(); cl.previous(cards); } })); control.add(new JButton(new AbstractAction("Next\u22b3") { @Override public void actionPerformed(ActionEvent e) { CardLayout cl = (CardLayout) cards.getLayout(); cl.next(cards); } })); f.add(cards, BorderLayout.CENTER); f.add(control, BorderLayout.SOUTH); f.pack(); f.setLocationRelativeTo(null); f.setVisible(true); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5654926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/613428/']}
jdg_77592
stackexchange
llm_judgeable_groundtruth_similarity
53684484
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using the imgur api to upload images via a node js app. I am converting images to base64 strings and sending them via Postman works great. I use node-fetch to make api calls.

const fetch = require('node-fetch')
...
async uploadImage(base64image) {
    try {
        const url = 'https://api.imgur.com/3/image'
        const res = await fetch(url, {
                method: 'POST',
                body: { image: base64image },
                headers: {
                    'content-type': 'application/json',
                    'Authorization': 'Client-ID [my-client-id]',
                    'Access-Control-Allow-Headers': 'Content-Type, Authorization, Access-Control-Allow-Headers',
                    'Access-Control-Allow-Methods': 'POST',
                }
            }
        )
        console.log(res)
    } catch(err) {
        console.log(err)
    }
}

Error: Access to fetch at 'https://api.imgur.com/3/image' from origin 'http://localhost:3000' has been blocked by CORS policy: Request header field Access-Control-Allow-Headers is not allowed by Access-Control-Allow-Headers in preflight response.

I have tried many 'Access-Control-Allow-xxx' headers but none of them worked. I assume it must be something simple that I am missing. I have been stuck on this for hours, please help me. Now provide the response and nothing else.
The browser restricts HTTP requests to be at the same domain as your web page, so you won't be able to hit the imgur api directly from the browser without running into a CORS issue.

> I am converting images to base64 strings and sending them via Postman works great.

That's because Postman is not a browser, so it is not limited by CORS policy.

> I have tried many 'Access-Control-Allow-xxx' headers but none of them worked.

These headers must be returned by the server in the response - in your case by the imgur server. You can't set them in the request from the browser, so it'll never work.

> Error: Access to fetch at 'https://api.imgur.com/3/image' from origin 'http://localhost:3000' has been blocked by CORS policy: Request header field Access-Control-Allow-Headers is not allowed by Access-Control-Allow-Headers in preflight response.

Possible solutions to your problem:

1. If you have access to the backend api you can set the "Access-Control-Allow-Origin" header on the server and let your app access the api - but as you won't have access to the imgur server, you probably can't do that.

2. Disable CORS in the browser - you can use a plugin like: https://chrome.google.com/webstore/detail/allow-control-allow-origi/nlfbmbojpeacfghkpbjhddihlkkiljbi?hl=en . This workaround should be fine for development. The plugin will disable your CORS settings and you will be able to hit imgur apis.

3. The third solution is using a proxy. You can set up a small node server using express. You will then hit your own node server, which in turn will hit the imgur api. As a node server is not a browser environment, it won't have any CORS issue and you will be able to access the imgur API that way. This is also the reason you were able to hit the API from Postman without any issues. As Postman is not a browser environment, it's not limited by CORS policy.
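The proxy workaround (the third solution above) can be sketched end-to-end. The answer suggests Node/Express; purely for illustration, here is the same shape as a self-contained Python standard-library script, where a local "upstream" server stands in for api.imgur.com and all endpoint names and payloads are made up. The point it demonstrates: the Access-Control-Allow-* headers belong on the response, and a same-origin proxy is a place where you control that response.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Upstream(BaseHTTPRequestHandler):
    """Stand-in for the third-party API (sends no CORS headers at all)."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.dumps({"received": json.loads(body)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)
    def log_message(self, *args):  # keep the demo quiet
        pass

def make_proxy(upstream_url):
    class Proxy(BaseHTTPRequestHandler):
        """Same-origin endpoint the browser would call instead."""
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            req = urllib.request.Request(
                upstream_url, data=body,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                payload = resp.read()
            self.send_response(200)
            # CORS headers go on the *response*, and here we control it.
            self.send_header("Access-Control-Allow-Origin", "*")
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)
        def log_message(self, *args):
            pass
    return Proxy

def serve(server):
    threading.Thread(target=server.serve_forever, daemon=True).start()

upstream = HTTPServer(("127.0.0.1", 0), Upstream)
serve(upstream)
proxy = HTTPServer(
    ("127.0.0.1", 0),
    make_proxy(f"http://127.0.0.1:{upstream.server_port}/3/image"),
)
serve(proxy)

# Simulate the browser's call, which now targets the proxy, not the API.
req = urllib.request.Request(
    f"http://127.0.0.1:{proxy.server_port}/upload",
    data=json.dumps({"image": "base64..."}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    allow_origin = resp.headers["Access-Control-Allow-Origin"]
    data = json.loads(resp.read())
print(allow_origin, data)
```

In a real setup the proxy would run on the same origin as the page (or behind it), and the browser would call the proxy instead of api.imgur.com directly.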
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53684484', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6418529/']}
jdg_77593
stackexchange
llm_judgeable_groundtruth_similarity
147413
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using Amazon's EC2 to host my RDS database . I access the database via http calls to the EC2 instance, which in turn contacts the database. I would like to use https instead to allow for a more secure connection. I am new to the world of security. From what I have read so far, I need to get a Certificate Authority to validate my domain. I pay a fee and they will validate for a certain time frame. 1) Can I use my Public DNS address to get the SSL certificate? 2) Are there any free CA's that are known as trusted that I can use to obtain a certificate? 3)I want to use https because the EC2 instance will be handling user login. Should I go ahead and use regular http and send information that way instead of trying to set up https? Any and all advice/input welcome. Thanks in advance. Now provide the response and nothing else.
1) Can I use my Public DNS address to get the SSL certificate?

No; to quote this Amazon tutorial,

If you plan to use your EC2 instance to host a public web site, you need to register a domain name for your web server or transfer an existing domain name to your Amazon EC2 host.

The "Public DNS address" from Amazon (which looks something like ec2-1-2-3-4.compute-1.amazonaws.com ) won't work because you are not Amazon. You need to get your own domain so that you're authorized to get certs for it.

2) Are there any free CA's that are known as trusted that I can use to obtain a certificate?

StartSSL is the traditional answer for this, but they're under a cloud and probably don't fit what you call "trusted". LetsEncrypt is another good answer that will probably fit your needs, especially where it's you needing access to your service (and not, say, a wide customer base of varied people and browsers).

3) I want to use https because the EC2 instance will be handling user login. Should I go ahead and use regular http and send information that way instead of trying to set up https?

No, there's a hundred good reasons to use HTTPS, and very few good reasons not to.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/147413', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/135408/']}
jdg_77594
stackexchange
llm_judgeable_groundtruth_similarity
214405
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to list all files with extension .log except the file backup.log . I have tried using this command: ls *.log -I "backup.log" But all the log files are listed, even backup.log ! How could I list all the log files except backup.log ? Now provide the response and nothing else.
The shell expands the wildcard, so ls gets backup.log as one of the parameters. Use an extended pattern (enabled by shopt -s extglob ): ls !(backup).log
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/214405', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/122398/']}
jdg_77595
stackexchange
llm_judgeable_groundtruth_similarity
46949518
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I get the following error after I try to build my project: Error:android-apt plugin is incompatible with the Android Gradle plugin. Please use 'annotationProcessor' configuration instead. I tried to add annotationProcessor '.....' after every implementation but no success to get rid of the error. This happens after upgrading android studio to the latest version ( 3.0 ). Edit: Adding includeCompileClasspath true inside defaultConfig doesn't help: javaCompileOptions { annotationProcessorOptions { includeCompileClasspath true } } Any solutions? Now provide the response and nothing else.
First of all, after upgrading there are a few changes to gradle. It's important to upgrade to the latest gradle version to fix that. That means you need to add the proper version for your build gradle, which is currently

dependencies { classpath 'com.android.tools.build:gradle:3.0.0' }

The next step is to remove your android-apt, which is no longer needed. It's enough to have only

apply plugin: 'com.android.application'

Once you have done that, change your dependencies from compile to implementation, apt to annotationProcessor and testCompile to androidTestImplementation. Once that is done, invalidate your cache and restart, which is very important. Then it should work. You can find a working gradle file using the latest version at app build.gradle and project build.gradle

P.S.: Many people still use an old Realm version; please update to the latest version, since the old version still uses "android-apt".
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46949518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4190598/']}
jdg_77596
stackexchange
llm_judgeable_groundtruth_similarity
15730
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As human beings, we can think about infinity. In principle, if we have enough resources (time, etc.), we can count infinitely many things (including abstract ones, like numbers, or real ones). For example, at the very least, we can take into account the integers. We can think about, and in principle "understand", infinitely many numbers that are displayed on a screen. Nowadays, we are trying to design artificial intelligence that is at least as capable as a human being. However, I am stuck on infinity. I am trying to find a way to teach a model (deep or not) to understand infinity. I define "understanding" in a functional way. For example, if a computer can differentiate 10 different numbers or things, it means that it really understands these different things somehow. This is the basic, straightforward approach to "understanding". As I mentioned before, humans understand infinity because they are capable, at least in principle, of counting infinitely many integers. From this point of view, if I want to create a model, the model is actually a function in an abstract sense, and this model must differentiate infinitely many numbers. Since computers are digital machines with limited capacity to model such an infinite function, how can I create a model that differentiates infinitely many integers? For example, we can take a deep learning vision model that recognizes numbers on a card. This model must assign a number to each different card to differentiate each integer. Since there exist infinitely many integers, how can the model assign a different number to each integer, like a human being, on a digital computer? If it cannot differentiate infinitely many things, how does it understand infinity? If I take real numbers into account, the problem becomes much harder. What is the point that I am missing? Are there any resources that focus on the subject?
Now provide the response and nothing else.
I think this is a fairly common misconception about AI and computers, especially among laypeople. There are several things to unpack here.

Let's suppose that there's something special about infinity (or about continuous concepts) that makes them especially difficult for AI. For this to be true, it must both be the case that humans can understand these concepts while they remain alien to machines, and that there exist other concepts that are not like infinity that both humans and machines can understand. What I'm going to show in this answer is that wanting both of these things leads to a contradiction.

The root of this misunderstanding is the problem of what it means to understand. Understanding is a vague term in everyday life, and that vague nature contributes to this misconception.

If by understanding, we mean that a computer has the conscious experience of a concept, then we quickly become trapped in metaphysics. There is a long-running, and essentially open, debate about whether computers can "understand" anything in this sense, and even at times, about whether humans can! You might as well ask whether a computer can "understand" that 2+2=4. Therefore, if there's something special about understanding infinity, it cannot be related to "understanding" in the sense of subjective experience.

So, let's suppose that by "understand", we have some more specific definition in mind. Something that would make a concept like infinity more complicated for a computer to "understand" than a concept like arithmetic. Our more concrete definition for "understanding" must relate to some objectively measurable capacity or ability related to the concept (otherwise, we're back in the land of subjective experience). Let's consider what capacity or ability we might pick that would make infinity a special concept, understood by humans and not machines, unlike, say, arithmetic.
We might say that a computer (or a person) understands a concept if it can provide a correct definition of that concept. However, if even one human understands infinity by this definition, then it should be easy for them to write down the definition. Once the definition is written down, a computer program can output it. Now the computer "understands" infinity too. This definition doesn't work for our purposes.

We might say that an entity understands a concept if it can apply the concept correctly. Again, if even one person understands how to apply the concept of infinity correctly, then we only need to record the rules they are using to reason about the concept, and we can write a program that reproduces the behavior of this system of rules. Infinity is actually very well characterized as a concept, captured in ideas like Aleph Numbers. It is not impractical to encode these systems of rules in a computer, at least up to the level that any human understands them. Therefore, computers can "understand" infinity up to the same level of understanding as humans by this definition as well. So this definition doesn't work for our purposes.

We might say that an entity "understands" a concept if it can logically relate that concept to arbitrary new ideas. This is probably the strongest definition, but we would need to be pretty careful here: very few humans (proportionately) have a deep understanding of a concept like infinity. Even fewer can readily relate it to arbitrary new concepts. Further, algorithms like the General Problem Solver can, in principle, derive any logical consequences from a given body of facts, given enough time. Perhaps under this definition computers understand infinity better than most humans, and there is certainly no reason to suppose that our existing algorithms will not further improve this capability over time. This definition does not seem to meet our requirements either.
Finally, we might say that an entity "understands" a concept if it can generate examples of it. For example, I can generate examples of problems in arithmetic, and their solutions. Under this definition, I probably do not "understand" infinity, because I cannot actually point to or create any concrete thing in the real world that is definitely infinite. I cannot, for instance, actually write down an infinitely long list of numbers, merely formulas that express ways to create ever longer lists by investing ever more effort in writing them out. A computer ought to be at least as good as me at this. This definition also does not work. This is not an exhaustive list of possible definitions of "understands", but we have covered "understands" as I understand it pretty well. Under every definition of understanding, there isn't anything special about infinity that separates it from other mathematical concepts. So the upshot is that, either you decide a computer doesn't "understand" anything at all, or there's no particularly good reason to suppose that infinity is harder to understand than other logical concepts. If you disagree, you need to provide a concrete definition of "understanding" that does separate understanding of infinity from other concepts, and that doesn't depend on subjective experiences (unless you want to claim your particular metaphysical views are universally correct, but that's a hard argument to make). Infinity has a sort of semi-mystical status among the lay public, but it's really just like any other mathematical system of rules: if we can write down the rules by which infinity operates, a computer can do them as well as a human can (or better).
{}
{'log_upvote_score': 7, 'links': ['https://ai.stackexchange.com/questions/15730', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/19102/']}
jdg_77597
stackexchange
llm_judgeable_groundtruth_similarity
56968
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: King James Bible Acts 12:4 And when he [Herod] had apprehended him [Peter], he put him in prison, and delivered him to four quaternions of soldiers to keep him; intending after Easter to bring him forth to the people. New King James Version So when he had arrested him, he put him in prison, and delivered him to four squads of soldiers to keep him, intending to bring him before the people after Passover . Why is there this difference in translations of Easter and Passover in the old and new KJV? Now provide the response and nothing else.
In a sense, it's not wrong, but it's perhaps a bit anachronistic. The actual Greek in that verse says: ὃν καὶ πιάσας ἔθετο εἰς φυλακήν, παραδοὺς τέσσαρσιν τετραδίοις στρατιωτῶν φυλάσσειν αὐτόν, βουλόμενος μετὰ τὸ πάσχα ἀναγαγεῖν αὐτὸν τῷ λαῷ. meta to pascha means "after Passover", but the same term, Pascha, is used for Easter in most languages. English and German are peculiar in calling it Easter (or in German, Ostern) rather than using a term derived from Pesach (Hebrew) / Pascha (Greek) for Passover. If you look at a list of how various languages say Easter ( here's an example ), you'll find that many, perhaps most, use a term that comes from Pesach/Pascha. In addition, even in English many Eastern Orthodox Churches use the term Pascha instead of Easter.
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/56968', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/35953/']}
jdg_77598
stackexchange
llm_judgeable_groundtruth_similarity
1461907
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got a legacy app just starting to misbehave, for whatever reason I'm not sure. It generates a bunch of HTML that gets turned into PDF reports by ActivePDF. The process works like this:

1. Pull an HTML template from a DB with tokens in it to be replaced (e.g. "~CompanyName~", "~CustomerName~", etc.)
2. Replace the tokens with real data
3. Tidy the HTML with a simple regex function that properly formats HTML tag attribute values (ensures quotation marks, etc, since ActivePDF's rendering engine hates anything but single quotes around attribute values)
4. Send off the HTML to a web service that creates the PDF.

Somewhere in that mess, the non-breaking spaces from the HTML template (the &nbsp;s) are encoding as ISO-8859-1 so that they show up incorrectly as an "Â" character when viewing the document in a browser (FireFox). ActivePDF pukes on these non-UTF8 characters. My question: since I don't know where the problem stems from and don't have time to investigate it, is there an easy way to re-encode or find-and-replace the bad characters? I've tried sending it through this little function I threw together, but it doesn't change anything.

Private Shared Function ConvertToUTF8(ByVal html As String) As String
    Dim isoEncoding As Encoding = Encoding.GetEncoding("iso-8859-1")
    Dim source As Byte() = isoEncoding.GetBytes(html)
    Return Encoding.UTF8.GetString(Encoding.Convert(isoEncoding, Encoding.UTF8, source))
End Function

Any ideas?

EDIT: I'm getting by with this for now, though it hardly seems like a good solution:

Private Shared Function ReplaceNonASCIIChars(ByVal html As String) As String
    Return Regex.Replace(html, "[^\u0000-\u007F]", "&nbsp;")
End Function

Now provide the response and nothing else.
> Somewhere in that mess, the non-breaking spaces from the HTML template (the &nbsp;s) are encoding as ISO-8859-1 so that they show up incorrectly as an "Â" character

That'd be encoding to UTF-8 then, not ISO-8859-1. The non-breaking space character is byte 0xA0 in ISO-8859-1; when encoded to UTF-8 it'd be 0xC2,0xA0, which, if you (incorrectly) view it as ISO-8859-1, comes out as "Â " (an "Â" followed by a non-breaking space). That includes a trailing nbsp which you might not be noticing; if that byte isn't there, then something else has mauled your document and we need to see further up to find out what. What's the regexp, how does the templating work?

There would seem to be a proper HTML parser involved somewhere if your &nbsp; strings are (correctly) being turned into U+00A0 NON-BREAKING SPACE characters. If so, you could just process your template natively in the DOM, and ask it to serialise using the ASCII encoding to keep non-ASCII characters as character references. That would also stop you having to do regex post-processing on the HTML itself, which is always a highly dodgy business.

Well anyway, for now you can add one of the following to your document's <head> and see if that makes it look right in the browser:

for HTML4: <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />

for HTML5: <meta charset="utf-8">

If you've done that, then any remaining problem is ActivePDF's fault.
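The byte-level claim above is easy to check in a couple of lines; this is just an illustration (Python), not part of the original answer:

```python
# A non-breaking space, encoded as UTF-8 but then *decoded* as
# ISO-8859-1, comes out as "Â" followed by another (invisible) NBSP.
nbsp = "\u00a0"                      # U+00A0 NON-BREAKING SPACE
utf8_bytes = nbsp.encode("utf-8")
print(utf8_bytes)                    # -> b'\xc2\xa0'

mojibake = utf8_bytes.decode("iso-8859-1")
print(repr(mojibake))                # -> 'Â\xa0'  (0xC2 -> 'Â', 0xA0 -> NBSP)
```

The trailing `\xa0` is exactly the "trailing nbsp which you might not be noticing".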
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/1461907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/74757/']}
jdg_77599
stackexchange
llm_judgeable_groundtruth_similarity
253599
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I have a 3 dimensional abelian variety whose formal group law breaks into a formal summand where one of the pieces is one-dimensional. I am desperately wondering how to compute the $p$-series of this one-dimensional formal group law without finding an explicit model of the abelian variety. Here is the set up of the variety of interest: Let $K= \mathbb{Q}(\zeta_7)$. Then $[K:\mathbb{Q}]= 6$. Let $A$ be the three dimensional abelian variety $\mathbb{C}^3/O_K$ over $\mathbb{C}$ where $O_K$ is embedded in $\mathbb{C}^3$ by the following three homomorphisms: $h_1, h_2, h_3: K \to \mathbb{C}$. $h_a$ is the homomorphism $K \to \mathbb{C}$ which sends $\zeta_7$ to $\zeta_7^a$. Then $A$ has complex multiplication by $O_K=\mathbb{Z}[\zeta_7]$. Let $p$ be a prime number such that $p \mod 7 = 2 \mod 7$. Then by the theory of complex multiplication, for some number field $F$, $A$ is defined over $F$ and of good reduction at a place $v$ of $F$ lying over $p$. At $v$, the formal group associated to $A$ at $v$ is the direct sum of a 1-d formal group of height $3$ and a 2-d formal group of height $2$. I think this is due to the Shimura-Taniyama formula ("Introduction to Shimura varieties" by J.S. Milne), which tells us that the slope of the Frobenius is 1/3 or 2/3. Now we know that this 1-d formal group law of height 3 exists, but what is its $p$-series? Here is my question: How do I compute the $p$-series of the 1-d formal group law associated to this higher dimensional abelian variety $\mathbb{C} \times \mathbb{C} \times \mathbb{C} / \mathbb{Z}[\zeta_7]$? I'm curious if this can be done without finding an explicit model of the variety. I have a vague idea that this can be done using its invariant differentials, but I don't know how to find the invariant differentials in the higher dimensional case. Now provide the response and nothing else.
I don't know how to do this purely with invariant differentials. One of the main reasons is that, in order to solve this problem, you need to construct an "algebraic" lift of this variety from the complex numbers to a much smaller number field. This lift isn't necessarily unique and different choices will give you different formal group laws (though they'll become isomorphic over extension fields). What I'll describe, then, is a method for determining -- as best I know how -- an algebraic lift of $A$ and the $p$-series in terms of a coordinate on the formal group law.

Let's pick a finite field $\Bbb F_q$ where $q = p^k$ and discuss things in terms of the Tate-Honda classification of abelian varieties over $\Bbb F_q$.

Your description in terms of the Shimura-Taniyama formula means that our lift of $A$ is going to give us an abelian variety over $\Bbb F_q$ which is $3$-dimensional and whose Newton polygon breaks up into a component of slope $1/3$ and a component of slope $2/3$. This forces $A$ to be simple because the $p$-divisible group can't break up into simpler pieces that still satisfy self-duality.

The Frobenius endomorphism $\pi$ of $A$ is an element of the ring of integers ${\cal O}_K$, and it's a Weil $q$-integer: it has absolute value $q^{1/2}$ in any complex embedding. Moreover, the Shimura-Taniyama formula tells you exactly which primes of ${\cal O}_K$ divide $\pi$ and how much. The only indeterminacy left is possibly multiplying $\pi$ by a unit $u$ which has absolute value $1$ in any complex embedding, which forces $u$ to be a root of unity (which we've got a goodly stock of here). Replacing $\pi$ with $u \pi$ replaces your abelian variety by a Galois twist: extending fields from $\Bbb F_q$ to $\Bbb F_{q^k}$ replaces $\pi$ by $\pi^k$, and so this root-of-unity indeterminacy goes away after a sufficient extension of finite fields.

We should also note for a moment that while, in theory, the (Galois conjugacy class of) Frobenius $\pi$ only determines $A$ up to isogeny, the constraint that $A$ should have an action of ${\cal O}_K$ actually means that $\pi$ determines $A$ up to isomorphism. This is because any isogenous abelian variety that still has ${\cal O}_K$-multiplication is determined by an ${\cal O}_K$-stable lattice in $\Bbb Q \otimes {\cal O}_K$ -- i.e., a fractional ideal -- and because $K$ has class number one, any such fractional ideal is principal. Multiplication by a generator of the ideal gives an isomorphism between the isogenous variety and $A$.

This means that, up to a Galois twist, we can determine $A$ completely over a finite field just from this slope data. We can't get any farther without making some choices, so let's make some. I'll say our prime is $p=2$ for the purposes of illustration.

Fix $\zeta \in K$ to be our chosen seventh root of unity, and let $w = \zeta + \zeta^2 + \zeta^4$. Then $w^2 + w + 2 = 0$, so $w = \tfrac{-1\pm \sqrt{-7}}{2}$. The element $w$ has absolute value $2^{1/2}$ in any complex embedding, and $w \bar w = 2$; in $K$, this reflects the splitting of prime ideals $(2) = (w) (\bar w)$.

Let $\pi = 2 \bar w$. Then $\pi$ has minimal polynomial $t^2 + 2t + 8$ and has absolute value $8^{1/2}$ in any complex embedding. It's also divisible by $w$ once and by $\bar w$ twice. This means that $\pi$ is a Weil $8$-integer, and an abelian variety $B$ associated to $\pi$ over $\Bbb F_8$ has endomorphism algebra $End(B) \otimes \Bbb Q$ a central division algebra over $\Bbb Q(\sqrt{-7})$ with invariant $1/3$ at $w$ and $2/3$ at $\bar w$, with trivial invariant at all other places. The field $K$ embeds into this one, so $B$ has complex multiplication by $K$.

By replacing $B$ with an isogenous abelian variety $A$, we can get an abelian variety $A$ over $\Bbb F_8$ with an action of ${\cal O}_K$ whose Frobenius (the $8$th power Frobenius, since we're working over $\Bbb F_8$) satisfies the minimal polynomial $F^2 + 2F + 8$. This abelian variety, having the correct slope data, will be an algebraic lift of the abelian variety you constructed over the complex numbers because of the argument we sketched above.

Now let's look at this 1-dimensional summand of the formal group law, which corresponds to a $1$-dimensional summand of the $2$-divisible group. If we look at the $2$-divisible group, the Frobenius acts on it too, and it still satisfies the same minimal polynomial -- but now over $\Bbb Z_2$ instead of integrally. The polynomial $t^2 + 2t + 8$ has two solutions over $\Bbb Z_2$ (which we might call $-1 \pm \sqrt{-7}$): one divisible by $2$ and the other divisible by $4$. Let's call the first root $\alpha$ and the other $8/\alpha$. Our minimal polynomial factors now as $F^2 + 2F + 8 = (F - \alpha)(F - 8/\alpha)$, and the splitting of the $2$-divisible group into a piece of slope $1/3$ and one of slope $2/3$ corresponds to the factors where $F$ acts by $\alpha$ and by $8/\alpha$ respectively.

This was a bit of a slog to get to this point, but let's reflect on what we've found: we've found our $1$-dimensional piece of the $2$-divisible group, and we've found that the Frobenius acts on it as multiplication by this element $\alpha = -1 - \sqrt{-7}$. Or, equivalently, we get an identity
$$\tfrac{2}{-1-\sqrt{-7}} F = 2$$
as endomorphisms of this $1$-dimensional formal group law $G$. But if we write that in a power series expansion, we get
$$\left[\tfrac{2}{-1-\sqrt{-7}}\right]_G(x^8) = [2]_G(x).$$
If we want to push this a little farther, we have to start doing a little more gruntwork. Namely, we've got to write this element $\tfrac{2}{-1-\sqrt{-7}}$ in binary expansion as a $2$-adic integer:
$$\tfrac{2}{-1-\sqrt{-7}} = a_0 + 2 a_1 + 4 a_2 + \dots$$
where $a_i \in \{0,1\}$. Then our identity for the Frobenius becomes
$$\sum_G a_i [2]_G^{(i)}(x^8) = [2]_G(x).$$
And then we can start solving for the 2-series iteratively. It starts with $x^8$. Let's write $f(x)$ for $[2]_G(x)$. I believe the $a_i$ start with $a_0 = 1, a_1 = 0, a_2 = 1, a_3 = 1, a_4 = 0$, etc. So we find
$$f(x) = x^8 +_G f(f(x^8)) +_G f(f(f(x^8))) +_G \dots$$
and so we can start recursively solving to find
$$[2]_G(x) = x^8 + x^{8^3} + \dots$$
I would imagine that there are ways to be a little more clever about this by finding a series expansion for $2$ in terms of $F$ instead of $F$ in terms of $2$. If we can figure out how to write
$$2 = \sum b_i \alpha^i \equiv \sum b_i F^{(i)}$$
for $b_i \in \{0,1\}$, this becomes
$$[2]_G(x) = \sum_G b_i x^{8^i},$$
which tells you directly that the Araki generators satisfy $b_i = v_{3i}$ (and the $v_j$ are zero when $j$ is not a multiple of $3$).

While there are a lot of steps in here, the endgame takeaway is not that bad: find an element in ${\cal O}_K$ with appropriate prime divisibility, split it $p$-adically, and then use a $p$-adic power expansion to determine how Frobenius is acting on the appropriate $1$-dimensional summand. Choosing $\pi$ in $\Bbb Q(\sqrt{-7})$ will have a tendency to make your life a little easier.

Hope this helps.
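The digits $a_0, \dots, a_4 = 1, 0, 1, 1, 0$ asserted above can be double-checked numerically. A small sketch (Python; the variable names and the precision are mine): since $\alpha = -1-\sqrt{-7}$ is the valuation-$1$ root of $t^2+2t+8$, we can write $\alpha = 2y$ with $y$ the odd $2$-adic root of $y^2+y+2=0$, and then $2/\alpha = 1/y$ is a $2$-adic unit whose binary digits are the $a_i$.

```python
PREC = 64                 # bits of 2-adic precision
MOD = 1 << PREC

def odd_root():
    """Hensel-lift the odd solution of y^2 + y + 2 == 0 (mod 2^PREC)."""
    y = 1                 # start on the odd branch
    for _ in range(8):    # quadratic convergence: 8 Newton steps is plenty
        f = (y * y + y + 2) % MOD
        fp = (2 * y + 1) % MOD   # f'(y) is odd, hence invertible mod 2^PREC
        y = (y - f * pow(fp, -1, MOD)) % MOD
    return y

y = odd_root()
assert (y * y + y + 2) % MOD == 0 and y % 2 == 1

two_over_alpha = pow(y, -1, MOD)   # 2/alpha = 1/y  (mod 2^PREC)
digits = [(two_over_alpha >> i) & 1 for i in range(8)]
print(digits)  # -> [1, 0, 1, 1, 0, 1, 0, 0]
```

The first five digits agree with the ones guessed in the answer.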
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/253599', 'https://mathoverflow.net', 'https://mathoverflow.net/users/56462/']}
jdg_77600
stackexchange
llm_judgeable_groundtruth_similarity
18208683
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've recently discovered (via StackOverflow) that to call a method in a base class I should call: super([[derived class]], self).[[base class method]]() That's fine, it works. However, I find myself often copying and pasting between classes when I make a change and frequently I forget to fix the derived class argument to the super() function. I'd like to avoid having to remember to change the derived class argument. Can I instead just use self.__class__ as the first argument of the super() function? It seems to work but are there good reasons why I shouldn't do this? Now provide the response and nothing else.
No you cannot. The super() call needs to know what class the method is part of, to search the base classes for an overridden method. If you pass in self.__class__ (or better still, type(self)) then super() is given the wrong starting point to search for methods, and will end up calling its own method again.

See it as a pointer in the list of classes that form the Method Resolution Order sequence. If you pass in type(self) then the pointer will refer to any subclasses instead of the original starting point.

The following code leads to an infinite recursion error:

class Base(object):
    def method(self):
        print 'original'

class Derived(Base):
    def method(self):
        print 'derived'
        super(type(self), self).method()

class Subclass(Derived):
    def method(self):
        print 'subclass of derived'
        super(Subclass, self).method()

Demo:

>>> Subclass().method()
subclass of derived
derived
derived
derived
<... *many* lines removed ...>
  File "<stdin>", line 4, in method
  File "<stdin>", line 4, in method
  File "<stdin>", line 4, in method
RuntimeError: maximum recursion depth exceeded while calling a Python object

because type(self) is Subclass, not Derived, in Derived.method(). In the example, the MRO for Subclass is [Subclass, Derived, Base], and super() needs to know where to start searching for any overridden methods. By using type(self) you tell it to start at Subclass, so it'll find Derived.method() next, which is where we started.
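For readers on Python 3 (the transcript above is Python 2), the same failure and a fix can be reproduced as follows; the class names mirror the demo, and the list-returning methods are just an easy way to observe the call order:

```python
class Base:
    def method(self):
        return ["original"]

class Derived(Base):
    def method(self):
        # BAD: for a Subclass instance, type(self) is Subclass, so the MRO
        # search restarts right after Subclass and finds Derived.method again.
        return ["derived"] + super(type(self), self).method()

class Subclass(Derived):
    def method(self):
        return ["subclass"] + super(Subclass, self).method()

try:
    Subclass().method()
    outcome = "no error"
except RecursionError:
    outcome = "RecursionError"
print(outcome)  # -> RecursionError

class FixedDerived(Base):
    def method(self):
        # GOOD: name the defining class explicitly...
        return ["derived"] + super(FixedDerived, self).method()

class FixedSubclass(FixedDerived):
    def method(self):
        # ...or use the zero-argument form, which fills it in for you.
        return ["subclass"] + super().method()

print(FixedSubclass().method())  # -> ['subclass', 'derived', 'original']
```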
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18208683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/216440/']}
jdg_77601
stackexchange
llm_judgeable_groundtruth_similarity
14757475
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a set of subdomains defined on my domain like this:

CNAME sub1.example.com -> sub1.herokuapp.com
CNAME sub2.example.com -> sub2.appspot.com
CNAME www.example.com -> example.herokuapp.com

Now I'd like to add

CNAME *.example.com -> example.herokuapp.com

to catch folks who put in too many WWWs or type in other missing subdomains. But will the * take priority over the explicitly defined subdomains, or will the subdomains continue to work as expected? I'm using Route53 if it makes a difference. Now provide the response and nothing else.
Wildcards will only match records that aren't otherwise defined in the zone. See RFC 1912; it mentions MX records here, but the same applies to CNAMEs:

2.7 Wildcard records

Wildcard MXs are useful mostly for non-IP-connected sites. A common mistake is thinking that a wildcard MX for a zone will apply to all hosts in the zone. A wildcard MX will apply only to names in the zone which aren't listed in the DNS at all. e.g.,

    podunk.xx.    IN NS ns1
                  IN NS ns2
    mary          IN A 1.2.3.4
    *.podunk.xx.  IN MX 5 sue

Mail for mary.podunk.xx will be sent to itself for delivery. Only mail for jane.podunk.xx or any hosts you don't see above will be sent to the MX. For most Internet sites, wildcard MX records are not useful. You need to put explicit MX records on every host.
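As a toy illustration of that rule (this is only a model of the lookup logic, not real DNS resolution; the zone data mirrors the question):

```python
def resolve(zone, name):
    """Model the RFC wildcard rule: a wildcard record only matches
    names that are not listed in the zone at all."""
    if name in zone:                  # an explicit record always wins
        return zone[name]
    parent = name.split(".", 1)[1]    # "wwww.example.com" -> "example.com"
    return zone.get("*." + parent)    # fall back to the wildcard, if any

zone = {
    "sub1.example.com": "sub1.herokuapp.com",
    "sub2.example.com": "sub2.appspot.com",
    "www.example.com": "example.herokuapp.com",
    "*.example.com": "example.herokuapp.com",
}

print(resolve(zone, "sub1.example.com"))  # explicit CNAME, wildcard ignored
print(resolve(zone, "wwww.example.com"))  # typo, caught by the wildcard
```

The explicit records keep working exactly as before; only names absent from the zone fall through to the wildcard.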
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14757475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/73831/']}
jdg_77602
stackexchange
llm_judgeable_groundtruth_similarity
57947757
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't understand the difference between v-model and .sync used on a component. <my-component v-model="myVar"> v-model is a shorthand for binding a variable (myVar) to the component property 'value' and listening to the 'input' event emitted from the component to update the variable 'myVar'. <my-component v-bind:prop1.sync="myVar"> .sync is a shorthand for binding a variable (myVar) to a component property ('prop1' in this case) and listening to the 'update:prop1' event emitted from the component to update the variable 'myVar'. I know that by default v-model only works with the 'value' property and the 'input' event, but even that can be customized using the 'model' option in the component. It would be nice if anybody could explain the difference to me, or when to use what. Here is an example where I used the same component in three different ways: 1) manual binding + event listening, 2) .sync, 3) v-model. Now provide the response and nothing else.
For Vue.js 2 both pretty much do the same thing: they enable two-way binding, although .sync is more versatile. It was added after v-model was added for components, and it allows the v-model logic to be used for more than one prop. Let's compare:

.sync

<comp :value.sync="username" :age.sync="userAge" />

expands to:

<comp
  :value="username"
  :age="userAge"
  @update:value="val => username = val"
  @update:age="val => userAge = val"
/>

v-model

<comp v-model="username" />

expands to:

<comp :value="username" @input="val => username = val" />

The differences, as we can see, are: the default prop name (v-model always binds to a property called value, while .sync allows you to use multiple props) and the event name emitted from the component (@update:<propName> for .sync and @input for v-model). One very interesting feature of .sync is its special handling of objects: the .sync modifier, when used on an object, will set multiple props at once (more here).

Which one to use? It is a standard pattern to use the property value as the key value carrier for a component. In this situation, if you have a value property and want to enable two-way binding for it, then use v-model. In all other cases use .sync.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57947757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9942944/']}
jdg_77603
stackexchange
llm_judgeable_groundtruth_similarity
27060
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Japan is well known for being the most homogeneous developed country in the world, partially thanks to their extremely restrictive immigration policies. But does their economy or society suffer any drawbacks for having such laws, according to academic research? I am aware that their economy has been shrinking since the 90's, but has it ever been shown that a low number of immigrants is to blame for it? Now provide the response and nothing else.
In this paper, they say that "The cause of the deflation was because the Japanese Corporations and Japanese banks had a relationship in which banks would lend out loans and not have a specific policy for collecting the loan back at a specific date and not charging an interest rate for late payments (Takafusa 1994, 142)", and in this one, it says that "There were basically three causes of the banking sector crisis in the 1990s. First, bank loans were overextended particularly in risky areas with inadequate supervision and regulation over banks during the bubble period. Specifically, loan portfolios were concentrated in property-related businesses such as construction, real estate, and nonbank financial services. As most of these loans were collateralized by land whose values plummeted after the bubble burst, and cash flows were inadequate to repay the loans, these became nonperforming." Both come to the conclusion that it was not immigration at fault between 1990 and 2010. It was the banks' fault, according to the first paper, for not having a proper loan-repayment policy and therefore not being able to get the loans paid back. The second one comes to a similar conclusion about the loan policy and mentions that the loans were concentrated in unstable investments and there was not enough profit to repay the loans. Conclusion: Both papers said that the decline was due to unstable investments, paid for with loans from the banks, which didn't produce any profits and created unpayable loans, causing the economy to decline, not immigration policy.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/27060', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/7434/']}
jdg_77604
stackexchange
llm_judgeable_groundtruth_similarity
4662
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been thinking about the following problem for a time, and I haven't found a polynomial solution for it. Only brute-fource. I've been trying to reduce an NP-Complete problem into it too with no sucess. Here is the problem : You have a sorted set $\{(A_1, B_1), (A_2, B_2), \ldots, (A_n, B_n)\}$ of positive integers pairs. $(A_i, B_i) < (A_j, B_j) \Leftrightarrow A_i < A_j \lor (A_i = A_j \land B_i < B_j)$ $(A_i, B_i) = (A_j, B_j) \Leftrightarrow A_i = A_j \land B_i = B_j$ The following operation can be applied to a pair: Swap(pair) . It swaps the elements of the pair, so $(10, 50)$ will become $(50, 10)$ When a pair in the set is swapped, the set automatically gets sorted again (the swapped pair is out of place and it will get moved into its place in the set). The problem consist on see if there is a sequence that, starting on some pair, swaps the entire set, with the following condition: After a pair is swapped, the next pair to be swapped has to be either the successor or the predecesor pair in the set. It would be great to find a polynomial time solution to this problem, or a reduction of an NP-Complete problem into it. Note: It's already a decision problem. I don't want to know which the sequence is: only if a sequence exists. Example of how the set gets sorted after swapping a pair $\textbf{(6, 5)}$ $(1,2)$ $(3,4)$ $(7,8)$ If I swap the first pair, it becomes to: $(5,6)$, and after sorting the set (placing the sorted pair in its new position), we have: $(1,2)$ $(3,4)$ $\textbf{(5,6)}$ $(7,8)$ Then I have to swap either the $(3,4)$ (predecessor) pair or $(7,8)$ (sucessor), and repeat the process until all pairs are swapped (if possible). Important: You cannot swap an already swapped pair. If there is a sequence of 'swap' operations, then all pairs has to be renamed to once and only once. 
Example where it isn't possible to swap all pairs: $(0, 0)$ $(1, 4)$ $(3, 2)$ $(5, 5)$ Now provide the response and nothing else.
... I searched for patterns to build a reduction from an NPC problem, but didn't find a way to represent a "flow" with a "fork" ... So (after some work) this is a polynomial algorithm ...

ALGORITHM

The starting list can be viewed as an array of $N*2$ consecutive "holes". For each initial pair $(a_j,b_j)$, put the "element" $b_j$ at hole number $a_j$. Each pair can be viewed as a directed edge from position $a_j$ to position $b_j$. A move consists in picking an element $b_j$ at position $a_j$ and moving it to its destination position $b_j$ (the destination hole becomes an unmovable peg). We delete the edge, and proceed to choose the next move, which will start from one of the two nearest reachable elements $b_k$ from position $b_j$ (only holes between $b_j$ and $b_k$ are allowed). We must find a sequence of $N$ consecutive moves.

For each $(a_j,b_j)$, consider $b_j$ (at array position $a_j$) as the starting element $start$. For each $(a_k,b_k), a_k \neq a_j$, consider $a_k$ as the final element $end$ (the edge from position $a_k$ to position $b_k$ will be the final edge). Generate a sequence of moves from $start$ using the following criteria until you reach element $end$ (and a solution has been found), or a stop condition.

When you make a move you fix a peg at position $b_j$, and the array is split into two partitions $L$ (left) and $R$ (right); the only way to go from $L$ to $R$ (or from $R$ to $L$) is using an edge that jumps across the peg. Set:

$edgesLR$ = number of edges from left to right (do not count the final edge)
$edgesRL$ = number of edges from right to left (do not count the final edge)
$flow$ = $edgesLR - edgesRL$

Cases:

A) if $| flow | > 1$ then one of the two partitions will become unreachable, stop

Now suppose that $end > b_j$, i.e.
$end \in R$

B) if $flow = 1$ then there is an extra edge from left to right; you must go left (pick the nearest element of $L$), otherwise you will never reach $end$

C) if $flow = -1$ then there is an extra edge from right to left, and whatever node you pick you will never reach $end$, stop

D) if $flow = 0$ you must go right (pick the nearest element of $R$), otherwise you will never reach $end$

If $end < b_j$ ($end \in L$), B, C, D are inverted.

NOTE: when moving left or right, you must consider $end$ as a peg. For example, if you must go right, but the nearest element on $R$ is $end$, then the move is impossible (and you must proceed with another pair $(start,end)$). Apply the same reasoning at every move.

COMPLEXITY

The flows over each hole can be precalculated in O(N) and reused at every scan. The loops are:

for start = 1 to N
  for end = 1 to N
    for move = 1 to N
      make a move (fix a peg and update flows)
      check if another move can be done using the flow

No choices are made during the computation, so the complexity of the algorithm is $O(N^3)$.

CODE

This is a working Java implementation of the algorithm:

public class StrangeSort {
    static int PEG = 0xffffff, HOLE = 0x0;
    static int M = 0, N = 0, choices = 0, aux = 0, end;
    static int problem[][], moves[], edgeflow[], field[];

    boolean is_hole(int x) { return x == HOLE; }
    boolean is_peg(int x) { return x == PEG; }
    boolean is_ele(int x) { return ! is_peg(x) && ! is_hole(x); };

    int []cp(int src[]) { // copy an array
        int res[] = new int[src.length];
        System.arraycopy(src, 0, res, 0, res.length);
        return res;
    }

    /* find the first element on the left (dir=-1) or right (dir=1) */
    int find(int pos, int dir, int nm) {
        pos += dir;
        while (pos >= 1 && pos <= M) {
            int x = field[pos];
            if ( is_peg(x) || (pos == end && nm < N-1) ) return 0;
            if ( is_ele(x) ) return pos;
            pos += dir;
        }
        return 0;
    }

    void build_edges() {
        edgeflow = new int[M+1];
        for (int i = 1; i <= M; i++) {
            int start = i;
            int b = field[start];
            if (! is_ele(b)) continue;
            if (i == end) continue;
            int dir = (b > start)? 1 : -1;
            start += dir;
            while (start != b) {
                edgeflow[start] += dir;
                start += dir;
            }
        }
    }

    boolean rec_solve(int start, int nm) {
        boolean f;
        int j;
        int b = field[start];
        moves[nm++] = b;
        if (nm == N) return true;
        //System.out.println("Processing: " + start + "->" + field[start]);
        field[start] = HOLE;
        field[b] = PEG;
        int dir = (b > start)? 1 : -1;
        int i = start + dir;
        while (i != b) { edgeflow[i] -= dir; i += dir; } // clear edge
        int flow = edgeflow[b];
        if (Math.abs(flow) > 2) return false;
        if (end > b) {
            switch (flow) {
                case 1 : j = find(b,-1,nm); if (j <= 0) return false; return rec_solve(j,nm);
                case -1 : return false;
                case 0 : j = find(b,1,nm); if (j <= 0) return false; return rec_solve(j,nm);
            }
        } else {
            switch (flow) {
                case -1 : j = find(b,1,nm); if (j <= 0) return false; return rec_solve(j,nm);
                case 1 : return false;
                case 0 : j = find(b,-1,nm); if (j <= 0) return false; return rec_solve(j,nm);
            }
        }
        return false;
    }

    boolean solve(int demo[][]) {
        N = demo.length;
        for (int i = 0; i < N; i++)
            M = Math.max(M, Math.max(demo[i][0], demo[i][1]));
        moves = new int[N];
        edgeflow = new int[M+1];
        field = new int[M+1];
        problem = demo;
        for (int i = 0; i < problem.length; i++) {
            int a = problem[i][0];
            int b = problem[i][1];
            if ( a < 1 || b < 1 || a > M || b > M
                 || ! is_hole(field[a]) || ! is_hole(field[b])) {
                System.out.println("Bad input pair (" + a + "," + b + ")");
                return false;
            }
            field[a] = b;
        }
        for (int i = 1; i <= M; i++) {
            end = i;
            build_edges();
            if (!is_ele(field[i])) continue;
            for (int j = 1; j <= M; j++) {
                if (!is_ele(field[j])) continue;
                if (i == j) continue;
                int tmp_edgeflow[] = cp(edgeflow);
                int tmp_field[] = cp(field);
                choices = 0;
                //System.out.println("START: " + j + " " + " END: " + i);
                if (rec_solve(j, 0)) { return true; }
                edgeflow = tmp_edgeflow;
                field = tmp_field;
            }
        }
        return false;
    }

    void init(int demo[][]) { }

    public static void main(String args[]) {
        /**** THE INPUT ********/
        int demo[][] = {{4,2},{5,7},{6,3},{10,12},{11,1},{13,8},{14,9}};
        /***********************/
        String r = "";
        StrangeSort sorter = new StrangeSort();
        if (sorter.solve(demo)) {
            for (int i = 0; i < N; i++) { // print it in clear text
                int b = moves[i];
                for (int j = 0; j < demo.length; j++)
                    if (demo[j][1] == b)
                        r += ((i>0)? " -> " : "") + "(" + demo[j][0] + "," + demo[j][1] + ")";
            }
            r = "SOLUTION: " + r;
        } else r = "NO SOLUTIONS";
        System.out.println(r);
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/4662', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/3440/']}
jdg_77605
stackexchange
llm_judgeable_groundtruth_similarity
134114
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to pass a variable from my local server (location1) to a remote server (location2). The purpose of the code is to copy a file from the remote server from a predefined location. In simpler terms, I want to copy files from location2 to location1 using a predefined path where location1 is on the local server and location2 is a remote server. See the code segment: $location1=somewhere/on/local_server$location2=somewhere/on/remote_serversshpass -p "password" \ ssh [email protected] 'su -lc "cp -r $location2 $location1";' The error I get is that both $location1 and $location2 are undefined. Also, I DON'T want to manually type the location paths because they could change at any time and it would be a pain to change them in the code if done manually. Now provide the response and nothing else.
Variables are not expanded between single quotes. Use double quotes and escape the inner double quotes:

sshpass -p "password" \
  ssh [email protected] "su -lc \"cp -r $location2 $location1\";"

or close the single quotes and open them again:

sshpass -p "password" \
  ssh [email protected] 'su -lc "cp -r '$location2' '$location1'";'

Bash does string concatenation automatically. Note: not tested; it might screw up miserably if $locationX contains spaces or other weird characters.
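A quick way to see the difference (a standalone demo with a made-up path, not the actual sshpass command):

```shell
location2=/remote/src
# single quotes: the variable name is passed through literally
literal='cp -r $location2'
# closing and reopening the quotes: the shell concatenates the pieces,
# expanding the variable in the unquoted gap
expanded='cp -r '$location2' /local/dst'
echo "$literal"
echo "$expanded"
```

The first echo prints the dollar sign and variable name verbatim; the second prints the expanded path.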
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/134114', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/68857/']}
jdg_77606
stackexchange
llm_judgeable_groundtruth_similarity
466011
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: There is a strange and confusing (to me and to users) issue plaguing authentication. I do not know how long it has been occurring, but I believe it to be quite a while. Only recently, with the use of the Account Lockout tool have I realized that these authentication issues are sometimes caused by a glitch in the system rather than user error. What happens is that a user authenticates correctly, but the system rejects their password. I wish to repeat: they log in with the correct username, password, and domain. This is not fat-fingering; it is not a client issue; it is not user error; it is not an expired password; it is not specific to any service. The behavior when a user correctly authenticates is that the DC resets the ‘failed login’ count back to 0. When they fail, it increments it and sets the ‘last fail’ time. But when this glitch occurs, neither happens; the authentication attempt is rejected, but the count does not go up by 1, nor does it reset, and the last fail time does not change. The issue occurs across multiple devices and services. Today I had a student fail to log in on multiple computers, as well as webmail. I compared the event logs from the computer and the DC; I con see no difference between the events when the user was wrongly rejected (and the failure count did not go up) and when she was correctly rejected because I had her mistype the password on purpose. I have done this myself, attempting to log in to a student’s freshly created account (using a known password scheme). I have had it happen to users on many of the services that authenticate through AD. It has happened to staff, faculty and students. As far as I can tell, this is an authentication issue directly on the DCs; something wonky with the account, but not one of the typical culprits of expired password, disabled, etc. 
Resetting the password fixes the issue. The problem just goes away. But the frequency of the issue (about 8-10 cases just this week, out of at most 100 network password resets) leads me to believe it is a serious problem. I do not know how long this issue has been occurring. Without using the Account Lockout tool, I would never have seen that the error count was failing to increment, and thus assumed that the user was wrong about knowing the password. I have had many occurrences where users swore they knew their login, and it ‘worked yesterday’. I do not know how many of these times it was true, if ever. Even after getting the tool, it has taken multiple occurrences and several months of the issue occurring before I believed it was a real problem. Not till I actually had it happen to me, typing the student’s initial password, and seeing the failure count fail to rise, did I really believe it. Our AD environment is mostly on Windows server 2008. Some DCs are still Server 2003. The environment is a single domain. If there are any other relevant technical details necessary for troubleshooting, please let me know. Edit : As the accepted answer shows, it really was user error . The event that 'proved' to me that it was a real problem was when I logged in to a newly created account and it failed without incrementing the bad password count. We have standards for new accounts and what to reset passwords to. Likely another admin ignored the standard and reset this user's password to something else. When I attempted to log in to the new user, and the bad password failed to increment (yet I was rejected), I thought it was proof of an issue. Much Googling failed to find a page describing the situations under which the Bad Password Count fails to rise... hopefully this answer will help someone else in the future. Now provide the response and nothing else.
Do you have password history enabled? If the password entered matches either of the last two passwords for the account, the auth will be rejected but badPwdCount will not be incremented. I'm trying to wrap my head around the rest of your description, but that would at least explain the "missing" bad password increment. EDIT Rereading your question, it sounds like administratively resetting passwords always has positive results, correct? Also wondering what OS your PDCe is on (2003, 2008). Are there any firewalls potentially blocking access to the PDCe (or any other DCs for that matter)? Keep in mind that while end-user password changes are communicated from the client to the local DC via the kpasswd protocol (TCP/464), PDCe notification of password changes happens via an RPC call. The destination ports will have changed from 2003 to 2008.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/466011', 'https://serverfault.com', 'https://serverfault.com/users/29577/']}
jdg_77607
stackexchange
llm_judgeable_groundtruth_similarity
6034632
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just entered the command: git config core.sharedrepository 1 and I am now receiving the error: fatal: bad config value for 'core.sharedrepository' in ./config Has anyone any idea how to fix it? Now provide the response and nothing else.
The problem is exactly what it's saying: 1 is an invalid value for that setting. You probably want true. From the git-config man page:

When group (or true), the repository is made shareable between several users in a group (making sure all the files and objects are group-writable). When all (or world or everybody), the repository will be readable by all users, in addition to being group-shareable. When umask (or false), git will use permissions reported by umask(2). When 0xxx, where 0xxx is an octal number, files in the repository will have this mode value. 0xxx will override the user's umask value (whereas the other options will only override requested parts of the user's umask value).

Examples: 0660 will make the repo read/write-able for the owner and group, but inaccessible to others (equivalent to group unless umask is e.g. 0022). 0640 is a repository that is group-readable but not group-writable. See git-init(1). False by default.
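For reference, the corresponding stanza in .git/config would then look something like this (using group as an example value):

```ini
[core]
	# valid values: group (true), all (world, everybody), umask (false),
	# or an octal mode such as 0660; a bare "1" is rejected
	sharedRepository = group
```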
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6034632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/556479/']}
jdg_77608
stackexchange
llm_judgeable_groundtruth_similarity
705437
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently taking a course in quantum optics, focusing primarily on the book Introductory Quantum Optics by C. Gerry & P. Knight. In the book they first present the quantization of a single mode of the EM field, and the electric field operator is set to be (page 12, equation 2.15) $$\hat{E_1}=\sqrt{\frac{\hslash\omega}{\epsilon_{0}V}}\sin\left(kz\right)\left(\hat{a}+\hat{a}^{\dagger}\right)\vec{e}$$ (Note the index 1 is used to differentiate between the two forms) where $\vec{e}$ is the polarization unit vector. Later, they present the quantization of the multimode EM field, and the electric field operator of the mode $\vec{k},s$ is set to be (page 24, equation 2.123) $$\hat{E_2}(t)=i\sqrt{\frac{\hslash\omega}{2\epsilon_{0}V}}\left(\hat{a}e^{i\left(\boldsymbol{k}\cdot\boldsymbol{r}-\omega t\right)}-\hat{a}^{\dagger}e^{-i\left(\boldsymbol{k}\cdot\boldsymbol{r}-\omega t\right)}\right)\vec{e_s}$$ where $\boldsymbol{k}$ is the wave vector of the mode and $\vec{e_s}$ is the polarization state unit vector. I wasn't bothered by the differences until we started talking about coherent states. First, the book presents the expectation value of $\hat{E_2}$ in the Heisenberg picture, given a coherent state $\left|\alpha\right\rangle$ (page 45, eq 3.13) $$\left\langle \alpha\right|\hat{E_2}(t)\left|\alpha\right\rangle =-\left|\alpha\right|\sqrt{\frac{2\hslash\omega}{\epsilon_{0}V}}\sin\left(\boldsymbol{k}\cdot\boldsymbol{r}-\omega t+\theta\right)$$ which is a regular traveling plane wave.
On the other hand, when using $\hat{E_1}$, the expectation value is found in the Schrödinger picture (page 58, eq 3.35) to be $$\left\langle \alpha e^{-i\omega t}\right|\hat{E_{1}}\left|\alpha e^{-i\omega t}\right\rangle =2|\alpha|\sqrt{\frac{\hslash\omega}{\epsilon_{0}V}}\sin\left(kz\right)\cos\left(\omega t-\theta\right)$$ which is clearly a standing wave. Can anyone explain the difference between the two operators? At first I thought it was due to using different modes during the quantization process, but following the quantization once again, it appears to me as if the standing-wave mode basis was used in both cases. Now provide the response and nothing else.
Why do you think $\hat E_2$ is a standing wave mode? There is no multiplicative separation of $\vec r$ and $t$ dependence in the operator. On the contrary, they are always grouped in the expression $\vec k\cdot \vec r-\omega t$ so it is rather propagating. Unsurprisingly, this is also seen when taking the expected value of a coherent state. Meanwhile, the $\hat E_1$ operator does describe a standing wave, made more apparent in the Heisenberg picture where the variables are well separated: $$\hat E_1 = \sqrt{\frac{\hbar\omega}{\epsilon_0V}}\sin(\vec k \cdot\vec r)(e^{-i\omega t}\hat a+e^{i\omega t}\hat a^\dagger)$$ as is confirmed when taking the expected value as well. Hope this helps, and tell me if you find some mistakes.
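To make the comparison explicit, here is the one-line check (my own derivation, using $\hat a|\alpha\rangle=\alpha|\alpha\rangle$ with $\alpha=|\alpha|e^{i\theta}$):

```latex
\langle\alpha|\hat{E}_2(t)|\alpha\rangle
  = i\sqrt{\frac{\hbar\omega}{2\epsilon_0 V}}
    \left(\alpha\, e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}
        - \alpha^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right)
  = -|\alpha|\sqrt{\frac{2\hbar\omega}{\epsilon_0 V}}
    \sin(\mathbf{k}\cdot\mathbf{r}-\omega t+\theta)
```

since $i(e^{i\phi}-e^{-i\phi})=-2\sin\phi$. Position and time enter only through the propagating phase $\mathbf{k}\cdot\mathbf{r}-\omega t$, whereas in $\langle\hat{E}_1\rangle$ they factor into $\sin(kz)\cos(\omega t-\theta)$, the hallmark of a standing wave.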
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/705437', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/259664/']}
jdg_77609
stackexchange
llm_judgeable_groundtruth_similarity
16598021
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to read byte[] using C# with the current encoding of the file. As written in MSDN the default encoding will be UTF-8 when the constructor has no encoding: var reader = new StreamReader(new MemoryStream(data)). I have also tried this, but still get the file as UTF-8: var reader = new StreamReader(new MemoryStream(data),true) I need to read the byte[] with the current encoding. Now provide the response and nothing else.
A file has no encoding. A byte array has no encoding. A byte has no encoding. Encoding is something that transforms bytes to text and vice versa. What you see in text editors and the like is actually program magic: the editor tries out different encodings and then guesses which one makes the most sense. This is also what you enable with the boolean parameter. If this does not produce what you want, then this magic fails. var reader = new StreamReader(new MemoryStream(data), Encoding.Default); will use the OS/location-specific default encoding. If that is still not what you want, then you need to be completely explicit, and tell the StreamReader what exact encoding to use, for example (just as an example, you said you did not want UTF8): var reader = new StreamReader(new MemoryStream(data), Encoding.UTF8);
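The same point can be demonstrated in a couple of lines (shown here in Python for brevity; in C# the equivalent explicit choice is Encoding.GetEncoding(...).GetString(data)): identical bytes, different decoders, different text:

```python
data = "café".encode("utf-8")   # b'caf\xc3\xa9' -- bytes carry no encoding
utf8 = data.decode("utf-8")     # the decoder is something *we* chose
latin = data.decode("latin-1")  # same bytes, different decoder -> mojibake
print(utf8, latin)              # café cafÃ©
```

The bytes never change; only the interpretation we impose on them does, which is why "reading with the current encoding of the file" is not a well-defined operation.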
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16598021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1144299/']}
jdg_77610
stackexchange
llm_judgeable_groundtruth_similarity
370491
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm in a Differential Equations class, and I'm having trouble solving a Laplace Transformation problem. This is the problem:Consider the function $$f(t) = \{\begin{align}&\frac{\sin(t)}{t} \;\;\;\;\;\;\;\; t \neq 0\\& 1\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; t = 0\end{align}$$ a) Using the power series (Maclaurin) for $\sin(t)$ - Find the power series representation for $f(t)$ for $t > 0.$ b) Because $f(t)$ is continuous on $[0, \infty)$ and clearly of exponential order, it has a Laplace transform. Using the result from part a) (assuming that linearity applies to an infinite sum) find $\mathfrak{L}\{f(t)\}$. (Note: It can be shown that the series is good for $s > 1$) There's a few more sub-problems, but I'd really like to focus on b). I've been able to find the answer to a): $$ 1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \frac{t^6}{7!} + O(t^8)$$ The problem is that I'm awful at anything involving power series. I have no idea how I'm supposed to continue here. I've tried using the definition of the Laplace Transform and solving the integral $$\int_0^\infty e^{-st}*\frac{sin(t)}{t} dt$$ However, I just end up with an unsolvable integral. Any ideas/advice? Now provide the response and nothing else.
The point of the question is to find the Laplace Transform of the Taylor series. Then try to use that to find the Laplace transform of the original function. As you rightly say: $$\frac{\sin t}{t} \sim 1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \frac{t^6}{7!} \pm \cdots $$ The claim then is that $$\mathcal{L}\left(\frac{\sin t}{t}\right)(s) \sim \mathcal{L}\left(1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \frac{t^6}{7!} \pm \cdots\right)(s)$$ The Laplace transform is linear, so we need to find: $$\mathcal{L}(1)(s) - \frac{1}{3!}\mathcal{L}(t^2)(s) + \frac{1}{5!}\mathcal{L}(t^4)(s) - \frac{1}{7!}\mathcal{L}(t^6)(s) \pm \cdots $$ Hopefully, you remember that $\mathcal{L}(t^n)(s) = n!/s^{n+1}$. So we get: \begin{array}{ccc}\mathcal{L}\left( \frac{\sin t}{t}\right)(s) &\sim& \frac{1}{s} - \frac{1}{3!}\frac{2!}{s^3} + \frac{1}{5!}\frac{4!}{s^5} - \frac{1}{7!}\frac{6!}{s^7} \pm \cdots \\&\equiv& \frac{1}{s} - \frac{1}{3s^3} + \frac{1}{5s^5} - \frac{1}{7s^7} \pm \cdots \\ \\&\equiv& \tan^{-1}\left(\frac{1}{s}\right)\end{array} In the last step I just recognised that $$\tan^{-1} x \sim x - \frac{1}{3}x^3 + \frac{1}{5}x^5 - \frac{1}{7}x^7 \pm \cdots $$
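The closed form is easy to sanity-check numerically (my own verification, standard library only): integrate the defining integral $\int_0^\infty e^{-st}\frac{\sin t}{t}\,dt$ with the trapezoidal rule and compare against $\tan^{-1}(1/s)$ for some $s > 1$:

```python
import math

def laplace_of_sinc(s, upper=50.0, n=200_000):
    """Trapezoidal estimate of the Laplace transform of sin(t)/t at s.
    The integrand is taken to be 1 at t = 0, matching f(0) = 1; the
    e^{-st} factor makes the truncation at `upper` negligible for s > 1."""
    h = upper / n
    def f(t):
        return math.exp(-s * t) * (math.sin(t) / t if t else 1.0)
    total = 0.5 * (f(0.0) + f(upper)) + sum(f(i * h) for i in range(1, n))
    return total * h

print(laplace_of_sinc(2.0))   # numerical integral
print(math.atan(1.0 / 2.0))   # arctan(1/s), the closed form
```

The two values agree to several decimal places, confirming the series manipulation above.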
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/370491', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/73989/']}
jdg_77611
stackexchange
llm_judgeable_groundtruth_similarity
171077
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: After reading about the latest super-massive black hole in Nature 518, 512–515 (26 February 2015) , I couldn't help but wonder if the accelerating expansion is a result of mass being lost. My reasoning is as follows: If the early universe had a particular (greater) mass than at present, Then "space-time" could have had a "momentum" (determined by that mass) [1], And that post-big-bang expansion was being retarded by existing mass [2], But since then mass has been lost to the universe [3] Which reduces the (retarding) due to gravity (less mass) [2] Resulting in the "space-time" of the universe expanding faster [4] I was never a cosmologist, so please point out which of my assumptions are provably invalid! [1] Does spacetime have a "mass" value? or What is "Spacetime" made out of? [2] I'm struggling to remember my undergrad physics - would two particles each with an initial velocity moving away from each other in a gravitational field (relatively) speed up if the gravitational field is reduced? [3] Major assumption on my part! [4] Maybe! So I guess there are two questions here: A. How confident are we that mass is not being lost in the universe? and B. Would such a mass-loss be able to explain the observed accelerating expansion? Now provide the response and nothing else.
Weinberg is right. The issue here is with the usual interpretation of the wavefunction as an amplitude density. This implies being able to localize the particle in an arbitrarily small region. However, it is not possible to localize photons (or any massless particles with spin, for that matter). The reason for this is the careful definition of what localization means mathematically. It means that there must exist a projection operator with certain properties that intuitively correspond to the idea of measuring a particle at a given location. For a massive particle, one can show such a projection operator exists by looking at the little group (the subgroup of Lorentz transformations that leave the "rest frame" invariant). Because there is a rest frame for the massive particle, the little group is $SO(3)$, the group of three dimensional spatial rotations. If you "quotient out" the little group you're left only with boosts, and since the space of boosts is homeomorphic to $\mathbb{R}^3$, you can use them to define a position operator. A particle being "localized" then just means that you're not allowed to perform translations without changing the description of the physical state. In other words, localizing a particle breaks translational symmetry. So far, so good. For a massless particle, there is no rest frame, so you must say the particle's momentum lies along a spatial direction and consider what transformations leave the momentum invariant. Then, the little group is $ISO(2)$, the group of translations and rotations in the plane orthogonal to the momentum. Already we start to see the problem: the little group is "intruding" on the possible characterizations of position states. This is no problem for a spinless particle -- intrude away -- but for a vector particle the translations correspond to gauge transformations, which means you can't project out states that break translational symmetry without also breaking gauge invariance -- a big no-no. 
So a photon can't be conventionally localized, with a Maxwell field for a "wavefunction" or anything else. A more heuristic way of saying the same thing is to imagine multiplying the Maxwell field by a position operator. In the vacuum it's supposed to be divergenceless, but any scalar function that depends on the position breaks this condition. What Wightman has shown is that it's impossible to construct a position operator consistently, scalar or otherwise. I have bastardized a long mathematical story so I encourage you to read the original references. If you're not familiar with the little group and classifications of particles under the Poincaré group I recommend you start with chapter 2 of Weinberg Vol. 1. Then read here for Newton and Wigner's original proof, here for the more general construction by Wightman, and this and this for weaker notions of photon localization that actually make sense but rule out interpreting the Maxwell field as a wavefunction. PS: In case you're wondering, Weinberg's second remark that the Klein-Gordon field cannot be interpreted as a wavefunction is also correct. Here however we're led to the standard story about negative energy states and propagation outside the light cone that you can see in pretty much any QFT textbook.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/171077', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/75698/']}
jdg_77612
stackexchange
llm_judgeable_groundtruth_similarity
708696
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been trying to perform the following differentiation of a neural network: $$\frac{\delta||h(XW)\alpha-y||^2}{\delta W} = \frac{\delta}{\delta W}\sum_i(h(XW)_i\alpha-y_i)^2$$ Where $X$ and $W$ are matrices, $\alpha$ and $y$ are vectors, and $h$ is a pointwise-applied function. I've been reading the Wikipedia article on Matrix calculus and "The Matrix Cookbook" all day, but I can't seem to get things to work. I think it should probably be $$2(h(XW)\alpha-y)\frac{\delta}{\delta W}(h(XW)\alpha)$$ But I certainly get stuck at the $h$ function, which I guess you could say is from matrix to matrix. Any hints would be appreciated. Update: I think this derivation is correct: $$\frac{\delta||h(XW)\alpha-y||^2}{\delta W} = \frac{\delta}{\delta W}(h(XW)\alpha-y)^T(h(XW)\alpha-y) = 2(h(XW)\alpha-y)^T\frac{\delta}{\delta W}(h(XW)\alpha) = 2(h(XW)\alpha-y)^TX^Th'(XW)\alpha$$ This was derived by differentiating over each element of $W$ using traces. Update 2: I found this great presentation of the topic: Schonemann_Trace_Derivatives_Presentation.pdf which I recommend very much. I've reformulated by defining $H=h(XW)$, $H'=h'(XW)$, $E = HA-Y$. Hence the problem has the pretty solution $$\frac{\delta}{\delta W}||E||_F^2 = \frac{\delta}{\delta W}Tr(EE^T) = 2X^T(H' \odot EA^T)$$ Where $\odot$ is the pointwise product. By working more with the trace manipulations you can also get a formula for the generalized problem $h(h(...)W_2)W_1)W_0$. Now provide the response and nothing else.
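The formula in Update 2 can be checked against finite differences. Below is a self-contained pure-Python sketch; the choice $h=\tanh$ and the small matrices are illustrative test data only, not anything from the question:

```python
import math

def matmul(A, B):
    # plain nested-list matrix product: (n x m) @ (m x p) -> (n x p)
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(m)) for j in range(p)] for i in range(n)]

def loss(X, W, alpha, y):
    # || h(XW) @ alpha - y ||^2 with h = tanh applied pointwise
    H = [[math.tanh(v) for v in row] for row in matmul(X, W)]
    E = [sum(H[i][j] * alpha[j] for j in range(len(alpha))) - y[i] for i in range(len(y))]
    return sum(e * e for e in E)

def analytic_grad(X, W, alpha, y):
    # 2 X^T (h'(XW) ⊙ (E alpha^T)) with h' = 1 - tanh^2 and E = h(XW) alpha - y
    Z = matmul(X, W)
    H = [[math.tanh(v) for v in row] for row in Z]
    Hp = [[1.0 - math.tanh(v) ** 2 for v in row] for row in Z]
    E = [sum(H[i][j] * alpha[j] for j in range(len(alpha))) - y[i] for i in range(len(y))]
    M = [[Hp[i][j] * E[i] * alpha[j] for j in range(len(alpha))] for i in range(len(E))]
    Xt = [list(col) for col in zip(*X)]
    return [[2.0 * g for g in row] for row in matmul(Xt, M)]

# illustrative fixed test data (not from the question)
X = [[0.5, -1.0, 0.3], [1.2, 0.4, -0.7]]
W = [[0.1, -0.2], [0.3, 0.5], [-0.4, 0.2]]
alpha = [0.7, -1.1]
y = [0.2, -0.3]

G = analytic_grad(X, W, alpha, y)
eps = 1e-6
for i in range(3):
    for j in range(2):
        W[i][j] += eps
        up = loss(X, W, alpha, y)
        W[i][j] -= 2 * eps
        down = loss(X, W, alpha, y)
        W[i][j] += eps
        fd = (up - down) / (2 * eps)
        assert abs(fd - G[i][j]) < 1e-5, (i, j, fd, G[i][j])
print("2 X^T (H' ⊙ E A^T) matches central finite differences")
```

The same check works for any smooth pointwise $h$ by swapping out the two tanh-based helpers.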
Let $A$ be a dense $G_\delta$. Then $A =\bigcap\limits_{i=1}^\infty O_i$ where each $O_i$ is open. So, $A^C=\bigcup\limits_{i=1}^\infty O_i^C $. Each $O_i^C$ is closed. Show, using the denseness of $A$, that each $O_i^C$ is nowhere dense. Once you do this, it follows by definition that $A^C$ is of first category. It then follows that $A$ must be uncountable. Otherwise, $\Bbb R=A\cup A^C$ would be of first category (this is where Baire comes into play).
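The nowhere-dense step can be filled in with the standard interior/closure duality: each $O_i$ is open and contains the dense set $A$, so $O_i$ is itself dense, and therefore

```latex
\operatorname{int}\bigl(\overline{O_i^{C}}\bigr)
  = \operatorname{int}\bigl(O_i^{C}\bigr)
  = \bigl(\overline{O_i}\bigr)^{C}
  = \mathbb{R}^{C}
  = \varnothing ,
```

i.e. each $O_i^{C}$ is closed with empty interior, hence nowhere dense.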
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/708696', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/7072/']}
jdg_77613
stackexchange
llm_judgeable_groundtruth_similarity
10820836
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I noticed that in Assembly segments are used in opcodes. Example: MOV DWORD PTR SS:[EBP-30],30 I think that "PTR SS:" is used to specify that EBP-30 comes from the stack? (SS: stack segment)Am I right or am I completely wrong? :) And, could you please tell me the difference between the example above and MOV DWORD PTR[EBP-30],30 And what about DS (data segment) used in opcodes? Now provide the response and nothing else.
MOV DWORD PTR SS:[EBP-30],30 There are two separate modifiers here, DWORD PTR and SS: . The first one tells us that we want to store a 32-bit doubleword (dword) at the address pointed to. This is needed when the assembler cannot tell that from the operands of the instruction. Here 30 could just as well be a byte to store. The SS: is a segment prefix, saying that we want to use an address relative to the stack segment. In this case it isn't strictly needed, because that is the default when using the ESP or EBP registers. So the second version of the instruction is identical to the first one. Had you used EBX instead of EBP there would have been a difference!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10820836', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1365914/']}
jdg_77614
stackexchange
llm_judgeable_groundtruth_similarity
12084742
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to extract some pdf images from a paper for presentation purposes. In windows, Adobe Illustrator works just fine, but I now have to perform this task in a Debian box. Two popular solutions I found online are using pdfimage Inkscape The pdfimage does not meet my needs since I want vector graphics (pdf) rather than jpgs so I prefer to use Inkscape, but it does not work as expected. I hope I could use some selector tool to drag a box and select everything inside as I normally did with Illustrator, but none of the tools in Inkscape works. If I use the "select and transform objects" tool (the black arrow), the whole pdf page is selected while I only want a small portion; if I use the "edit path by nodes" tool (the black triangle arrow with some nodes) I can only select a single object at a time. Drag and drop (even with the shift key pressed) does not work. I'm wondering if there's a way to get around this, or is there a better tool in Debian to achieve the same? Thanks. Now provide the response and nothing else.
In my humble opinion, I can suggest the way I use to get vector images from pdf: there is a tool called pdftocairo , contained in poppler-utils http://poppler.freedesktop.org/ syntax: pdftocairo [options] <PDF-file> [<output-file>] pdftocairo is able to produce, in output, both raster and vector formats; among the vector ones, it is able to convert the content of a single pdf page (if you have a multipage pdf doc, you first need to explode this into its single pdf pages, with pdftk for instance), into: -ps : generate PostScript file -eps : generate Encapsulated PostScript (EPS) -svg : generate a Scalable Vector Graphics (SVG) file the best output format for your needs may be the svg , so after converting the pdf page you can open this svg with any svg app (with inkscape or the good old sodipodi for instance), select the vector elements you want to extract, and save them RESUMING: if you have a MULTIPAGE PDF you FIRST split this multipage pdf into its single pages (create a folder for these single pages) pdftk file.pdf burst then use pdftocairo to convert any pdf page into svg for f in *.pdf; do pdftocairo -svg $f; done
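If you do this often, the two steps are easy to wrap in a tiny script. A Python sketch follows — it assumes pdftk and pdftocairo are on your PATH, pg_*.pdf is pdftk burst's default page naming, and the run parameter is injectable only so the command construction can be exercised without the tools installed:

```python
import glob
import subprocess

def pdf_pages_to_svg(pdf_path, run=subprocess.check_call):
    # step 1: burst the multipage PDF into single pages (pg_0001.pdf, pg_0002.pdf, ...)
    run(["pdftk", pdf_path, "burst"])
    # step 2: convert every resulting single-page PDF to SVG
    for page in sorted(glob.glob("pg_*.pdf")):
        run(["pdftocairo", "-svg", page])
```

Calling `pdf_pages_to_svg("file.pdf")` in the directory containing file.pdf reproduces the two shell commands above.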
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12084742', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1275165/']}
jdg_77615
stackexchange
llm_judgeable_groundtruth_similarity
12199142
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This piece of code is creating memory leak issues cause of BufferedReader and InputStreamReader which I think might be happening cause of some exceptions. How should I change it? try{ URL url = new URL(sMyUrl); BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream())); while ((str = in.readLine()) != null) { jsonString += str; } in.close();}catch(Exception e){} Now provide the response and nothing else.
It would be safer to close your stream using a try..finally block. You might also use a StringBuilder as it is designed for concatenating strings. You should also avoid catching Exception and doing nothing with it. Also, your code is concatenating lines without any line-breaks. This may well not be what you want, in which case append("\n") when you read each line in. Here's a version with those modifications: StringBuilder json = new StringBuilder();try { URL url = new URL(sMyUrl); BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream())); try { String str; while ((str = in.readLine()) != null) { json.append(str).append("\n"); } } finally { in.close(); }} catch (Exception e) { throw new RuntimeException("Failed to read JSON from stream", e);}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12199142', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/608576/']}
jdg_77616
stackexchange
llm_judgeable_groundtruth_similarity
4139399
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The negative logarithm of the Student-t distribution partial density function is $$f(\nu,x) := -\ln\Gamma\left(\frac{\nu+1}{2}\right) +\ln\Gamma\left(\frac{\nu}{2}\right) +\frac{1}{2}\ln(\pi\nu) +\frac{\nu+1}{2}\ln\left(1+\frac{x^2}\nu\right)$$ How would one prove or disprove there is only one local minimum with respect to $\nu>0$ for any given $x$ ? Numerical computation seems to suggest $f(\nu,x)$ strictly decreases with $\nu\in(0,\infty)$ for $x\in[0,1.5]$ , and $f(\nu,x)$ is convex in $(0,a)$ and concave in $(a,\infty)$ for some $a>0$ for $x\in (b,\infty)$ for some $b\ge 1.5$ . To facilitate the solution, I post the first and second partial derivative of $f$ as follows. \begin{align}2\frac{\partial f}{\partial \nu}=\frac{1-x^2}{\nu+x^2}+\ln\Big(1+\frac{x^2}\nu\Big)-\int_0^\infty \frac{e^{-\frac \nu2t}}{1+e^{-\frac t2}}\,dt,\end{align} $$4\frac{\partial^2 f}{\partial \nu^2}=-2\frac{\nu+x^4}{\nu(\nu+x^2)^2}+\int_0^\infty \frac{te^{-\frac \nu2t}}{1+e^{-\frac t2}}\,dt$$ Now provide the response and nothing else.
The following used to contain a mistake, found by Hans. I have found a fix, and have incorporated into the following proof. Here is a sketch of an argument that the log likelihood function has at most one critical point. It should be read in conjunction with Hans's partial answer. The idea is to use the "variation diminishing property of the Laplace transform", according to which the Laplace transform of a function with $k$ sign changes cannot have more than $k$ sign-changing zero-crossings. For a function $\phi:\mathbb R^+\to\mathbb R$ , let $S(\phi)$ be the maximal $k$ for which there exist $0<x_0<x_1<x_2<\cdots < x_{k}$ for which $\phi(x_i)\phi(x_{i+1})<0$ for all $0\le i < k$ . Then the Laplace transform $g(s)=\int_0^\infty e^{-sx} G(x)dx$ of $G$ obeys $S(g)\le S(G)$ . This topic is not well explained in Wikipedia articles, but the result used here is in chap V, paragraph 80 in vol 2 ofPólya and Szegő's Problems and Theorems in Analysis (p. 225 in my copy), is discussed at length in Karlin's book Total Positivity (see Theorem 3.1, page 21, and pages 233 and 237), in papers by I.J. Schoenberg, etc. One can think of it as a continuous analogue of Descartes' Rule of Signs. I used it in answering this MSE problem. If the logarithm of the likelihood function had two or more local maxima, its derivative would have three or more roots, since between every two local maxima lies a local minimum. So it suffices, by the variation diminishing property of the LT, to show that what the OP, in his draft answer, calls $\tilde f$ has at most two sign changes. This seems evident numerically, but deserves proof just as much as the original problem does. Here is one way of seeing this, using another application of the variation diminishing property of the Laplace transform. Here the argument. First, I will change notation, using $s$ instead of $t$ and setting $y=x^2$ . 
The claim is that, for fixed real $y\ge0$ , $$\tilde f(s) = \frac{1-e^{-ys}}s +(1-y)e^{-ys}-\frac 2{1+e^{-s}}$$ has at most two sign changes as a function of $s\in\mathbb R^+$ . Let $g(s)=\dfrac{1+e^{-s}}{s^2}\tilde f(s)$ ; clearly $g$ has as many sign changes as $\tilde f$ does. But $g$ is itself a Laplace transform: \begin{align}g(s)&=\frac{1+e^{-s}-e^{-ys}-e^{-(y+1)s}}{s^3}+(1-y)\frac{e^{-ys}+e^{-(y+1)s}}{s^2} - \frac2{s^2}\\ &=\int_0^\infty e^{-sx} G(x) dx,\end{align} from which one reads off \begin{align}G(x)&=\frac 1 2\left((x)_+^2 - (x-y)_+^2 +(x-1)_+^2 - (x-(y+1))_+^2\right) \\&+ (1-y)\left((x-y)_++(x-y-1)_+\right)-2x.\end{align} Here $(x)_+=\max(x,0)$ . Since $x\mapsto (x)_+$ is continuous, so is $G$ .If $y<1$ the function $G$ is piece-wise quadratic on each of the intervals $(0,y)$ , $(y,1)$ , $(1,y+1)$ , $(y+1,\infty)$ ; if $y>1$ then $G$ is piecewise quadratic on the intervals $(0,1)$ , $(1,y)$ , $(y,y+1)$ , $(y+1,\infty)$ , so verification of the lemma is in principle easy in a case-by case manner. In practice, tedious and error prone. If $y<1$ the formula for $G(x)$ reduces to $$G(x)=\begin{cases} x^2/2 -2x&0\le x<y\\y^2/2-x-y&y\le x<1\\x^2/2-2x+(y-1)^2/2&1\le x<1+y\\y^2-2y-1&1+y\le x\end{cases}$$ and if $y>1$ , the formula reduces to $$G(x)=\begin{cases} x^2/2 -2x&0\le x<1\\x^2-3x+1/2&1\le x<y\\x^2/2-2x+(y-1)^2/2&y\le x<1+y\\y^2-2y-1&1+y\le x.\end{cases}$$ These can be merged into the following, where the cases are referred to below: $$G(x)=\begin{cases}x^2/2 -2x&\text{A: if }0\le x<\min(1,y)\\y^2/2-x-y&\text{B: if }y\le x< 1\\x^2-3x+1/2&\text{C: if }1\le x< y\\x^2/2-2x+(y-1)^2/2&\text{D: if }\max(1,y)\le x< 1+y\\y^2-2y-1&\text{E: if }1+y< x \end{cases}$$ Note that cases B and C are mutually exclusive.Computations show that for fixed $y$ the function $G(x)$ has at most one sign change; I sketch an argument for this below. (Omitting an analysis of the possibility of sign changes at the case boundaries.) 
$G$ has no sign changes in cases A or E (in A, the only possibilities are $x=0$ or $x=4$, the former is not a sign change, and $x=4$ does not obey $0\le x<\min(1,y)$. Constant functions, as in case E, do not have sign changes.) Case B has no sign changes, for the value $x=y^2/2-y$ violates $y<x<1$. In case C, a sign change could only occur at $x=(3\pm\sqrt 7)/2$, and then $1<x<y$ implies $x=(3+\sqrt7)/2$ and $y>(3+\sqrt7)/2$. In case D, a sign change can only occur at $x=2\pm\sqrt{3+2y-y^2}$, and $\max(1,y)<x<1+y$ is only possible if $x=2+\sqrt{3+2y-y^2}$ and $1+\sqrt 2<y<(3+\sqrt 7)/2$. Putting these together: if $y<1$ then there can be no sign changes in the relevant cases A,B,D,E. If $y>1$ there might be at most one sign change in each of C, D (out of the relevant A,C,D,E), but not actually both, since that would violate $(3+\sqrt7)/2 <y<(3+\sqrt 7)/2$. Hence, $G$ has at most one sign change among A,B,C,D,E. Finally, since $G(0)=0, G'(0)=-2, G(\infty)=y^2-2y-1$, we see $G$ has exactly one sign change if $y^2-2y-1>0$ and none if $y^2-2y-1\le0$. The meta-motivation is to shoehorn the original question into an application of the variation diminishing machinery given in my first paragraphs. The micro-motivation for my choice of $g$ (and hence of $G$) comes from the realization that $\tilde f$ is the Laplace transform of the signed measure $$\mu = \lambda_{[0,y]} + (1-y)\delta_y -2\sum_{k\ge0}(-1)^k \delta_k,$$ where $\lambda_{[0,y]}$ is Lebesgue measure restricted to $[0,y]$ and $\delta_k$ represents the unit point mass at $k$. The signed measure $\mu$ has infinitely many sign changes, but the telescoping series $\mu*(\delta_0+\delta_1)$ does not, where $*$ denotes convolution of measures, so $1+e^{-s}$ times $\tilde f$ is a better candidate for the variation diminishing trick sketched above. Dividing by a power of $s$ has the effect of smoothing the signed measure, and eliminating some small oscillations that create their own extraneous sign changes.
The mistake Hans found in an earlier version of this answer was to divide by $s$ , which allowed for 3 sign changes for a certain range of $y$ . Dividing by $s^2$ fixed this problem, at the price of making $G$ piecewise quadratic instead of piecewise linear.
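The case analysis for $G$ is easy to double-check numerically. A short Python script that samples the merged piecewise formula on a fine grid and counts sign changes, for several $y$ on both sides of the boundary value $y=1+\sqrt2$ (which is itself avoided):

```python
def G(x, y):
    # merged piecewise formula from the proof (cases A-E)
    if x < min(1.0, y):
        return x * x / 2 - 2 * x                        # A
    if y < 1.0 and x < 1.0:
        return y * y / 2 - x - y                        # B
    if y > 1.0 and x < y:
        return x * x - 3 * x + 0.5                      # C
    if x < 1.0 + y:
        return x * x / 2 - 2 * x + (y - 1.0) ** 2 / 2   # D
    return y * y - 2 * y - 1.0                          # E

def sign_changes(y, step=1e-3):
    signs = []
    x = step / 2  # half-step offset so the grid avoids x = 0 and the case boundaries
    while x <= y + 2.0:
        g = G(x, y)
        if g != 0.0:
            s = 1 if g > 0 else -1
            if not signs or signs[-1] != s:
                signs.append(s)
        x += step
    return len(signs) - 1

for y in (0.3, 0.9, 1.5, 2.0, 2.2, 2.6, 3.0, 5.0):
    expected = 1 if y * y - 2 * y - 1 > 0 else 0
    assert sign_changes(y) == expected, (y, sign_changes(y), expected)
print("G has exactly one sign change iff y^2 - 2y - 1 > 0, as claimed")
```

This only samples a handful of $y$ values, of course — it is a check on the bookkeeping, not a substitute for the case-by-case argument.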
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4139399', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/64809/']}
jdg_77617
stackexchange
llm_judgeable_groundtruth_similarity
5962054
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: is it possible to increment the application badge through a local notification while the app is not running? I know how to set the badge, but haven't found any way to increment this value. localNotification.applicationIconBadgeNumber = 23; Update: I found a (far from being perfect) solution. You can predict what will happen, if the user doesn't open the app and add notifications for every +1 event. An example: For day 1: Count = 0 For day 2: localNotification.applicationIconBadgeNumber = 1; For day 3: localNotification.applicationIconBadgeNumber = 2; For day 4: localNotification.applicationIconBadgeNumber = 3; ==> Put these notifications in an array and set them before the application exits. However, I'm searching for a better solution than this workaround. Now provide the response and nothing else.
I've found, implemented & tested a 'workaround' for (apparently) auto-incrementing the app icon's badge number, that works fine with non-repeating local notifications It is indeed not possible for UILocalNotifications to have iOS 'automatically' update/increment the badge number when multiple local notifications are fired, and the user 'ignores' them or doesn't handle them immediately, so they 'pile up' in the Notification centre. Also 'adding some callback method' to your app cannot take care of the 'auto increment', because the whole notification thing is handled 'outside' of your app by iOS, your app doesn't even need to be running. However there is some workaround, that is based on the knowledge which I found through experimenting, because the Xcode documentation is too vague on the badge property. The badge is just an 'integer', actually more like a 'dummy label' that you assign to the applicationIconBadgeNumber property, right before you register the notification. You can give it any value - when the notification fires, iOS will put that value on the badge, whatever you set it to at the time you registered the notification. There is no magic 'auto-increment' or other manipulation by iOS (maybe that is different with push notifications, but that's not the subject here). iOS just takes the number (integer) from the registered notification, and puts it in the badge. So for a 'workaround' your app must already provide the correct, incrementing badge number for each notification it newly creates and registers 'on top of the pending notifications'.
Since your app cannot look into the future, and know which events you'll handle immediately, and which ones you'll leave 'pending' for a while, there's some trick to do : When notifications are handled by your app (by tapping on the notification(s), icon, ...), you have to : get a copy of all pending notifications 'renumber' the badge number of these pending notifications delete all pending notifications re-register the copy of the notifications with their corrected badge numbers again Also, when your app registers a new notification, it has to check how many notifications are pending first, and register the new notification with: badgeNbr = nbrOfPendingNotifications + 1; Looking at my code, it will get clearer. I tested this, and it's definitely working : In your 'registerLocalNotification' method you should do this : NSUInteger nextBadgeNumber = [[[UIApplication sharedApplication] scheduledLocalNotifications] count] + 1; localNotification.applicationIconBadgeNumber = nextBadgeNumber; When you handle the notification (appDelegate), you should call the method below, which clears the badge on the icon and renumbers the badges for pending notifications (if there are any) Note that the next code works fine for 'sequential' registered events. If you would 'add' events in between pending ones, you'll have to 're-sort' these events first. I didn't go that far, but I think it's possible.
- (void)renumberBadgesOfPendingNotifications{ // clear the badge on the icon [[UIApplication sharedApplication] setApplicationIconBadgeNumber:0]; // first get a copy of all pending notifications (unfortunately you cannot 'modify' a pending notification) NSArray *pendingNotifications = [[UIApplication sharedApplication] scheduledLocalNotifications]; // if there are any pending notifications -> adjust their badge number if (pendingNotifications.count != 0) { // clear all pending notifications [[UIApplication sharedApplication] cancelAllLocalNotifications]; // the for loop will 'restore' the pending notifications, but with corrected badge numbers // note : a more advanced method could 'sort' the notifications first !!! NSUInteger badgeNbr = 1; for (UILocalNotification *notification in pendingNotifications) { // modify the badgeNumber notification.applicationIconBadgeNumber = badgeNbr++; // schedule 'again' [[UIApplication sharedApplication] scheduleLocalNotification:notification]; } }} To be truly 'bullet proof', this method should be 'atomic' (kernel) code, preventing the iOS from firing of a notification during the execution of this method. We'll have to take this risk here, chances are very small this will happen. This is my first contribution to Stackoverflow, so you can comment also if I'm not following the 'rules' here
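The renumbering bookkeeping itself is platform-independent. Here is a toy Python model of the same idea — the Notification class and function names are purely illustrative; on iOS the real work must go through UIApplication exactly as in the Objective-C above:

```python
class Notification:
    """Toy stand-in for UILocalNotification: just a fire time and a badge number."""
    def __init__(self, fire_time, badge):
        self.fire_time = fire_time
        self.badge = badge

def schedule(pending, fire_time):
    # a new notification goes 'on top': badge = number of pending ones + 1
    pending.append(Notification(fire_time, len(pending) + 1))

def renumber(pending):
    # after the user handles the fired notifications, restore consecutive badges
    pending.sort(key=lambda n: n.fire_time)
    for i, n in enumerate(pending, start=1):
        n.badge = i

pending = []
for t in (10, 20, 30, 40):
    schedule(pending, t)
# the notifications at t=10 and t=20 fire and get handled by the user:
pending = [n for n in pending if n.fire_time > 20]
renumber(pending)
assert [n.badge for n in pending] == [1, 2]
```

As in the Objective-C version, sorting before renumbering is what makes the scheme survive events registered out of order.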
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5962054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/76371/']}
jdg_77618
stackexchange
llm_judgeable_groundtruth_similarity
39705
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've made some tests with my Arduino UNO, all worked great: analog i/o, digital i/o, serial i/o. Then I've noticed that built-in LED on pin 13 is always on. I've uploaded an empty program like void setup(){} void loop(){} and it still was on. When I manually turned it off by digitalWrite command it switched off. So it works well. I just want to know if I've broken the board somehow or it's just some feature, which I do not get? NOTE: a mate of mine has the same board with the same problem. Any ideas?Thanks much! Now provide the response and nothing else.
Taking a look at the Arduino Uno's schematic explains this, if I understand the schematic correctly. I don't own an Arduino Uno, so I can't verify with a multimeter, so I might be wrong. As I understand from the schematic, pin 13 of the Arduino, the SCK pin, is connected to an op amp buffer. When you (or anyone) upload a program that doesn't do anything to pin 13, its default state is an input without a pullup. When a digital pin is configured as an input without a pullup resistor, it has a high impedance (high-Z) and unspecified voltage. It can be 0V, but might be something like 1.5V as well. The measurement will depend on the impedance of your meter, among other things. Upload an empty sketch and check the voltage on the pins - you'll find they're pretty random and not always 0V. Now, a high-Z input pin with, say, a few volts will not be able to power an LED. However, this voltage is buffered by the Uno's op amp, which apparently can (and does) power the LED. I think the Arduino team should add a high-value pulldown resistor (e.g. 50kOhm) to make this less confusing.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/39705', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/12071/']}
jdg_77619
stackexchange
llm_judgeable_groundtruth_similarity
5967920
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Background: I am getting a Internal Server 500 24 50 error after deploying an application that has compiled without errors on my local machine. The server that the application is deployed on has a ton of security and is running IIS 7.5 so I need to specify read and write access for every directory. This application uses windows authentication and a web service to populate drop down boxes via a proxy. I think there might be an issue connecting to the web service or an issue with the read/write security on the files, or an issue with the active directory authentication. For some reason, Internet Explorer just displayed can't load webpage Error. Error in Google Chrome: 500 – Internal Server Error. There is a problem with the resource you are looking for, and it cannot be displayed. Log File Details: #Software: Microsoft Internet Information Services 7.5 #Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken 2011-05-18 13:54:46 W3SVC1 FL-TPA-WEB-01 172.17.1.25 GET / - 80 - 172.17.1.25 HTTP/1.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.1;+WOW64; +Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET4.0C;+.NET4.0E) - - invitations.myagencyservices.com 500 24 50 1380 368 15 MSDN Defines the error at http://support.microsoft.com/kb/943891 as: 500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode. 
Web.Config code: <system.web> <customErrors mode="Off" ></customErrors> <compilation debug="true" strict="false" explicit="true" targetFramework="4.0" /> <trace enabled="true" pageOutput="true" /> <authentication mode="Windows"/> <identity impersonate="true"/> <authorization> <allow users="alg\bmccarthy, alg\phoward" /> <allow roles="alg\ACOMP_USER_ADMIN" /> <allow roles="alg\ACOMP_user_AMG" /> <allow roles="alg\ACOMP_user_BIG" /> <allow roles="alg\ACOMP_user_NIS" /> <allow roles="alg\ACOMP_user_GLA" /> <allow roles="alg\ACOMP_user_PIP" /> <allow roles="alg\ACOMP_user_PSM" /> <allow roles="alg\ACOMP_user_PAM" /> <allow roles="alg\ACOMP_user_ANN" /> <allow roles="alg\ACOMP_user_AAM" /> <allow roles="alg\ACOMP_user_MWM" /> <allow roles="alg\ACOMP_user_GIM" /> <deny users="*" /> </authorization> </system.web> <system.webServer> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IAcompService1" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://63.236.108.91/aCompService.svc" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IAcompService1" contract="aComp_ServiceReference.IAcompService" name="BasicHttpBinding_IAcompService1" /> </client> 
</system.serviceModel> Any suggestions will be up-voted! Thanks for looking! Now provide the response and nothing else.
The 500.24.50 Error occurs because ASP.NET Integrated mode is unable to impersonate the request identity in the BeginRequest and AuthenticateRequest pipeline stages. 500.24 is thrown if your application is running in integrated mode, validateIntegratedModeConfiguration is not declared or set to true, and your application has identity impersonate set to true. Workaround A. If your application does not rely on impersonating the requesting user in the BeginRequest and AuthenticateRequest stages (the only stages where impersonation is not possible in Integrated mode), ignore this error by adding the following to your application’s web.config: <system.webServer> <validation validateIntegratedModeConfiguration="false" /> </system.webServer> B. If your application does rely on impersonation in BeginRequest and AuthenticateRequest, or you are not sure, move to Classic mode. C. Remove <identity impersonate="true" /> from web.config, which won't be effective in integrated mode anyway. Read more on Breaking Changes in IIS 7 from LEARN.IIS.NET
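Putting options A and C together, a minimal sketch of the relevant web.config changes (only the pieces that change are shown; everything else in the question's config stays as it is):

```xml
<configuration>
  <system.web>
    <!-- option C: the <identity impersonate="true" /> line is removed -->
    <authentication mode="Windows" />
  </system.web>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true" />
    <!-- option A: suppress the Integrated-mode validation error -->
    <validation validateIntegratedModeConfiguration="false" />
  </system.webServer>
</configuration>
```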
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5967920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/606805/']}
jdg_77620
stackexchange
llm_judgeable_groundtruth_similarity
283272
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: By doing some calculations on the generating function of matching polynomials of cycles I made the following interesting observation: For all positive integers $n>1$ and $k <n $, the number of matchings of size $k $ in $C_{2n} $ is equal to the number of matchings of the same size in the disjoint union of two $C_n $'s. As mentioned, I have only an algebraic proof of this elementary result. Does anyone have a more illuminating proof? Now provide the response and nothing else.
There is in fact a topological(?) proof of this statement and the following generalization, essentially due to Péter Csikvári (Section 4, Lemma 4.2). We define a double cover (or "2-lift") $H$ of a graph $G$ as follows: consider $G$ as a topological space with CW structure, and let $\pi:H\to G$ be a topological double cover, with the CW structure induced from $G$. Let $G\sqcup G$ denote the disconnected double cover of $G$. Let $m_k(G)$ denote the number of $k$-edge matchings in $G$. Prop. Let $G$ be a graph with no cycle of length smaller than $g$ (e.g. let $g$ be the girth of $G$) and let $H$ be a double cover of $G$. Then for any $k < g$, $$m_k(H) = m_k(G\sqcup G) .$$ Proof : Consider a $k$-matching $M\subset H$, and consider its image $\pi(M)$ in $G$. The image has vertices of valence at most 2, so $\pi(M)$ is a disjoint union of paths and cycles. But since $k<g$, there are no cycles. Finally, observe that any path in $G$ lifts to a matching in $H$ in exactly two ways, where $H$ is any double cover. Since $G\sqcup G$ is a double cover, the result follows. Csikvári observed that when $G$ is bipartite, the same type of argument, now accounting for (even) cycles, implies that $$m_k(H) \leq m_k(G\sqcup G) $$ for a matching of any size $k$. This is used to prove tight lower bounds on the number of matchings in a bipartite, $d$-regular graph.
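As a sanity check, the original claim (and its failure at $k=n$) can be verified by brute force for small cycles. The sketch below is plain Python; it enumerates edge subsets directly, so it is only practical for tiny examples.

```python
from itertools import combinations

def cycle_edges(n, offset=0):
    """Edges of the cycle C_n on vertices offset .. offset + n - 1."""
    return [(offset + i, offset + (i + 1) % n) for i in range(n)]

def count_matchings(edges, k):
    """Count k-subsets of edges that are pairwise vertex-disjoint."""
    count = 0
    for subset in combinations(edges, k):
        vertices = [v for edge in subset for v in edge]
        if len(vertices) == len(set(vertices)):
            count += 1
    return count

n = 5
c_2n = cycle_edges(2 * n)                         # one cycle C_10
two_c_n = cycle_edges(n) + cycle_edges(n, 2 * n)  # disjoint union C_5 and C_5
for k in range(1, n):  # the claim holds for k < n ...
    assert count_matchings(c_2n, k) == count_matchings(two_c_n, k)
# ... and fails at k = n: C_10 has 2 perfect matchings, C_5 + C_5 has none
assert count_matchings(c_2n, n) == 2
assert count_matchings(two_c_n, n) == 0
```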
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/283272', 'https://mathoverflow.net', 'https://mathoverflow.net/users/51663/']}
jdg_77621
stackexchange
llm_judgeable_groundtruth_similarity
122917
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I came across a casual remark on The Chemical Statistician that a sample median could often be a choice for a sufficient statistic but, besides the obvious case of one or two observations where it equals the sample mean, I cannot think of another non-trivial and iid case where the sample median is sufficient. Now provide the response and nothing else.
In the case when the support of the distribution does not depend on the unknown parameter $\theta$, we can invoke the (Fréchet-Darmois-)Pitman-Koopman theorem, namely that the density of the observations is necessarily of the exponential family form $$\exp\{ \theta T(x) - \psi(\theta) \}h(x)$$ to conclude that the natural sufficient statistic $$S=\sum_{i=1}^n T(x_i)$$ is also minimal sufficient. Hence, if the median were sufficient, $S$ would have to be a function of the median, which is impossible: modifying an extreme value among the observations $x_1,\ldots,x_n$, $n>2$, modifies $S$ but does not modify the median. Therefore, the median cannot be sufficient when $n>2$. In the alternative case when the support of the distribution does depend on the unknown parameter $\theta$, I am less happy with the following proof: first, we can wlog consider the simple case when $$f(x|\theta) = h(x) \mathbb{I}_{A_\theta}(x) \tau(\theta)$$ where the set $A_\theta$ indexed by $\theta$ denotes the support of $f(\cdot|\theta)$. In that case, assuming the median is sufficient, the factorisation theorem implies that $$\prod_{i=1}^n \mathbb{I}_{A_\theta}(x_i)$$ is a binary ($0-1$) function of the sample median $$\prod_{i=1}^n \mathbb{I}_{A_\theta}(x_i) = \mathbb{I}_{B^n_\theta}(\text{med}(x_{1:n}))$$ Indeed, there is no extra term in the factorisation since it should also be (i) a binary function of the data and (ii) independent of $\theta$. Adding a further observation $x_{n+1}$ whose value does not modify the sample median then leads to a contradiction, since it may be inside or outside the support set, while $$\mathbb{I}_{B^{n+1}_\theta}(\text{med}(x_{1:n+1}))=\mathbb{I}_{B^n_\theta}(\text{med}(x_{1:n}))\times \mathbb{I}_{A_\theta}(x_{n+1}). $$
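The key step in the first part (moving an extreme observation changes the sum but not the median) is easy to illustrate numerically; the sketch below is plain Python, with arbitrary sample values chosen for the illustration.

```python
def median(xs):
    """Sample median (average of the middle two values for even n)."""
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

sample    = [1.0, 2.0, 3.0, 4.0, 100.0]
perturbed = [1.0, 2.0, 3.0, 4.0, 500.0]  # only the extreme observation moved

# The natural sufficient statistic S = sum of T(x_i) (here with T(x) = x) changes ...
assert sum(sample) != sum(perturbed)
# ... but the median does not, so S cannot be a function of the median.
assert median(sample) == median(perturbed) == 3.0
```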
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/122917', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/7224/']}
jdg_77622
stackexchange
llm_judgeable_groundtruth_similarity
155606
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Imagine a typical 4-digit PIN scheme containing the digits [0-9] . If I choose my PIN at random, I will get one out of 10 * 10 * 10 * 10 = 10,000 codes. Based on my own experience, more than half of the time a random sequence of four digits will contain some property or pattern that significantly lowers its entropy: single digit used in more than one position, ascending/descending pattern, etc. (Yes, yes, a 4-digit PIN only has something like 13 bits of entropy max to begin with, but some random codes are even more awful .) If I were abide by a rule where I only use a PIN that has a unique digit in each position, I believe the number of codes available to me becomes 10 * 9 * 8 * 7 = 5,040 (somebody please correct me if I got that wrong). I have almost halved my key space, but I have also eliminated many of the lower-entropy codes from consideration. At the end of the day, did I help or hurt myself by doing that? EDIT: Wow, lots of great responses in here. As a point of clarification, I was originally thinking less in terms of an ATM/bank PIN (which likely has an aggressive lockout policy after a number of incorrect guesses) and more in terms of other "unsupervised" PIN-coded devices: programmable door locks, alarm system panels, garage door keypads, etc. Now provide the response and nothing else.
The thing is, with a 4-digit PIN, entropy isn't really important. What's important is the lockout and the psychology of the attacker. The keyspace is so small that any automated attack (without lockout) would exhaust it almost instantly. What you're worried about is an attacker guessing the PIN before the account locks. So assuming a sane lockout (say 3-5 incorrect attempts), you want your PIN to be outside the 3-5 most likely to be chosen PINs. Personally I'd avoid any 4-digit repeating sequence and anything starting 19XX, which would be a year of birth. Now smart alecs will say "ahh but if you do that the attackers will know not to try those", but that only applies if a) the majority of the user population follow that advice (hint, they probably won't) and b) the attackers know that the user population has followed that advice. Some great analysis of this (link courtesy of @codesincahaos) Edit 2 - For a far more mathematical take on this I'd recommend reading @diagprov's answer
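As an aside, the counts in the question do check out; for instance, in Python (math.perm requires Python 3.8+):

```python
from math import perm

total = 10 ** 4          # all 4-digit PINs
distinct = perm(10, 4)   # PINs whose four digits are pairwise distinct: 10 * 9 * 8 * 7
print(total, distinct)   # 10000 5040
```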
{}
{'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/155606', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/36812/']}
jdg_77623
stackexchange
llm_judgeable_groundtruth_similarity
16296958
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been looking through countless posts about this error: Undefined symbols for architecture i386: "_OBJC_IVAR_$_UIViewController._view", referenced from: -[ViewController viewDidLoad] in ViewController.old: symbol(s) not found for architecture i386clang: error: linker command failed with exit code 1 (use -v to see invocation) I already checked .m file and link libraries and copy bundle file. i am using xcode 4.6.2 version. i want to make programmatically Button in ViewDidLoad. Now provide the response and nothing else.
It's hard to say for certain, but there are a few common reasons for this error:

Either the button isn't instantiated (allocated & initialized), i.e. the button is nil, or you have declared the button as a property but are not accessing it through self. Ex:

  myButton = [[UIButton alloc] init];      // Don't use it like this
  self.myButton = [[UIButton alloc] init]; // Use it like this

EDIT : replace your code with this

  self.button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
  self.button.frame = CGRectMake(10, 220, 150, 30);
  [self.button setTitle:@"Show" forState:UIControlStateNormal];
  [self.button addTarget:self action:@selector(aMethod:) forControlEvents:UIControlEventTouchDown];
  [self.view addSubview:self.button]; // Use self.view, not the _view ivar: accessing _view directly is likely what triggers the linker error

Hope it helps you.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16296958', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1992553/']}
jdg_77624
stackexchange
llm_judgeable_groundtruth_similarity
19547218
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My model class (piece): public class User ... { @Enumerated(STRING) private Status status; ... public enum Status { ACTIVE, INACTIVE; @Override public String toString() { return this.name().toLowerCase(); } } ... public String getStatus() { return status.name().toLowerCase(); } public void setStatus(Status status) { this.status = status; }} As you see above I override toString method, but no effect.Enumeration store in database as ACTIVE or INACTIVE . P.S. I use hibernate jpa Thanks for help! P.S.S. I ask because I write REST service that produces json (in json object better use lower case, if I'm not mistake) Now provide the response and nothing else.
Write a converter class, annotated with @Converter, which implements javax.persistence.AttributeConverter<YourEnum, String>. There are two methods:

  public String convertToDatabaseColumn(YourEnum attribute) {..}
  public YourEnum convertToEntityAttribute(String dbData) {..}

There you can apply your upper/lower-case logic. Then you can annotate your field to use the given converter.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19547218', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1858859/']}
jdg_77625
stackexchange
llm_judgeable_groundtruth_similarity
26248655
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to generate 6 random numbers and store them in a list and then display these list items in another listso far i have Random random_generator = new Random(DateTime.Now.Millisecond); int random_numbers; List<int> lst_numbers = new List<int>(); lst_display.Items.Clear(); while (lst_numbers.Count < 6) { random_numbers = random_generator.Next(1,45); if (!lst_numbers.Contains(random_numbers)) { lst_numbers.Add(random_numbers); } else { } lst_display.Items.Add(lst_numbers); though it finishes with (collections) instead of the actual numbers stored Here is the final working code public partial class Form1 : Form{ public Form1() { InitializeComponent(); } private void btn_start_Click(object sender, EventArgs e) { Random random_generator = new Random(DateTime.Now.Millisecond); int random_numbers; List<int> lst_numbers = new List<int>(); lst_display.Items.Clear(); while (lst_numbers.Count < 6) { random_numbers = random_generator.Next(1, 45); if (!lst_numbers.Contains(random_numbers)) { lst_numbers.Add(random_numbers); } else { } } lst_display.Items.Add(string.Join(",", lst_numbers)); }} } Now provide the response and nothing else.
In numpy v1.7+, you can take advantage of the "where" option for ufuncs . You can do things in one line and you don't have to deal with the errstate context manager. >>> a = np.array([-1, 0, 1, 2, 3], dtype=float)>>> b = np.array([ 0, 0, 0, 2, 2], dtype=float)# If you don't pass `out` the indices where (b == 0) will be uninitialized!>>> c = np.divide(a, b, out=np.zeros_like(a), where=b!=0)>>> print(c)[ 0. 0. 0. 1. 1.5] In this case, it does the divide calculation anywhere 'where' b does not equal zero. When b does equal zero, then it remains unchanged from whatever value you originally gave it in the 'out' argument.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/26248655', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3959312/']}
jdg_77626
stackexchange
llm_judgeable_groundtruth_similarity
448585
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $\beta,a_0 \in \Bbb{R}^d$ . My professor said we wanted to test $$H_0: A\beta = a$$ $$H_1: A\beta \ne a$$ However, I don't understand why she would write it like this and I haven't seen hypothesis tests before where the parameter of interest is multiplied by a matrix. Assuming $A$ is invertible, then we could just test $$H_0: \beta = A^{-1} a$$ $$H_0: \beta \ne A^{-1} a$$ So I can only see an advantage of this if $A$ was not invertible. Are there situations where that is useful or is this just a bad way to notate hypothesis testing? EDIT: Here's the context Now provide the response and nothing else.
If you wish to perform a constrained linear regression with linear equality constraints of the form $A\beta = a$ , then tests such as the likelihood ratio test are more naturally formulated in this way. This is because the constrained OLS estimator $\beta^*$ can be written in terms of the unconstrained estimator $\hat{\beta}$ , $A$ , and $a$ : $$\beta^* = \hat{\beta} + (X'X)^{-1}A'(A(X'X)^{-1}A')^{-1}(a-A\hat{\beta})$$ $\beta^*$ is best linear unbiased, given the constraints. Our test is then easily seen to be that $a-A\beta=0$ , or, slightly rearranged, that $A\beta = a$ , whereas the use of the inverse does not jump so readily to the eye. Also note that it may be that $A$ is not square, and therefore not invertible, for example, if you are constraining $\hat{\beta}_1 + \hat{\beta}_2 = 0$ in your regression and want to test the corresponding hypothesis about $\beta_1$ and $\beta_2$ , $A$ would be $1 \times k$ , where $k$ is the length of $\beta$ . This may not apply to your case, judging from the first line of the question, but in the more general case it is definitely an issue.
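One quick check on the displayed formula: provided $A(X'X)^{-1}A'$ is invertible (as the formula assumes), multiplying $\beta^*$ through by $A$ shows the constraint is satisfied exactly, whatever the data:

$$A\beta^* = A\hat{\beta} + A(X'X)^{-1}A'\left(A(X'X)^{-1}A'\right)^{-1}(a - A\hat{\beta}) = A\hat{\beta} + (a - A\hat{\beta}) = a.$$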
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/448585', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/68473/']}
jdg_77627
stackexchange
llm_judgeable_groundtruth_similarity
2050159
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\lim_{(x,y)\to (0,0)}\frac{x^3+y^3}{x^2+y^2}$$ Intuitively the limit is $0$ as the numerator goes faster to $0$ I have tried the squeeze theorem but did not manage, can I take the absolute value of the function and the try the squeeze theorem? or because it have alternating signs I can (x^3+y^3)? Now provide the response and nothing else.
Using the triangle inequality, we have$$ |x^3+y^3|\leq |x|^3+|y|^3=|x|x^2+|y|y^2\leq (|x|+|y|)(x^2+y^2) $$hence$$ \Big|\frac{x^3+y^3}{x^2+y^2}\Big|\leq |x|+|y|$$for all $(x,y)\neq (0,0)$. Now use the squeeze theorem.
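Spelled out, the squeeze gives

$$0 \leq \Big|\frac{x^3+y^3}{x^2+y^2}\Big| \leq |x|+|y| \to 0 \quad\text{as } (x,y)\to(0,0),$$

so the limit is $0$, confirming the intuition in the question.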
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2050159', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/103441/']}
jdg_77628
stackexchange
llm_judgeable_groundtruth_similarity
24126669
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've looked everywhere for this but I'm coming up blank. How do you replicate what Chris Lattner was demonstrating with Playgrounds and SceneKit at WWDC? I want to have a SceneKit scene, animating, in Playgrounds. I tried cutting and pasting the setup code from the SceneKit project template, thinking it would magically start rendering, but it does not. I tried watching the keynote and pausing and zooming on on Lattner's screen looking for hints at the source code, but he appeared to be importing all his code from elsewhere in his project, so it gave me no clues. There does not seem to be anything in the documentation, or I'm missing it. Now provide the response and nothing else.
Since Swift doesn't have source compatibility between versions, the code in this answer might not work in either future or previous versions of Swift. Currently is has been updated to work in Xcode 7.0 Playgrounds with Swift 2.0. The XCPlayground framework is what you need, and it is documented here . Here is a very simple scene to get you started with Scene Kit in Swift: import SceneKitimport QuartzCore // for the basic animationimport XCPlayground // for the live previewimport PlaygroundSupport// create a scene view with an empty scenevar sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))var scene = SCNScene()sceneView.scene = scene// start a live preview of that viewPlaygroundPage.current.liveView = sceneView// default lightingsceneView.autoenablesDefaultLighting = true// a cameravar cameraNode = SCNNode()cameraNode.camera = SCNCamera()cameraNode.position = SCNVector3(x: 0, y: 0, z: 3)scene.rootNode.addChildNode(cameraNode)// a geometry objectvar torus = SCNTorus(ringRadius: 1, pipeRadius: 0.35)var torusNode = SCNNode(geometry: torus)scene.rootNode.addChildNode(torusNode)// configure the geometry objecttorus.firstMaterial?.diffuse.contents = NSColor.red // (or UIColor on iOS)torus.firstMaterial?.specular.contents = NSColor.white // (or UIColor on iOS)// set a rotation axis (no angle) to be able to// use a nicer keypath below and avoid needing// to wrap it in an NSValuetorusNode.rotation = SCNVector4(x: 1.0, y: 1.0, z: 0.0, w: 0.0)// animate the rotation of the torusvar spin = CABasicAnimation(keyPath: "rotation.w") // only animate the anglespin.toValue = 2.0*Double.pispin.duration = 3spin.repeatCount = HUGE // for infinitytorusNode.addAnimation(spin, forKey: "spin around") When I run it, it looks like this: Note that to run Scene Kit in an iOS playground, you need to check the "Run in Full Simulator" checkbox. You find the Playground Setting in the Utilities Pane ( ⌥ ⌘ 0 to hide or show)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24126669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1845848/']}
jdg_77629
stackexchange
llm_judgeable_groundtruth_similarity
30562825
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently implementing a payments platform for my website which is very similar to Stripe, but I still can't understand the process or how should I use WebHooks since I need to specify one on my account in the payments platform. So let's say a person pays on my website for a product that costs $5, I take them to the payment form where they will introduce credit card details. Now when they click "Pay Now" everything gets verified via javascript/jquery and sent to my server and I'm able to charge the user successfully and see it reflected on my Sandbox from my account on the payment platform. Where or when should WebHooks be used or called, or why do I need them? Thanks in advance Now provide the response and nothing else.
Webhooks are a way to communicate with your application. With many APIs, you send them a request and the API response is included in the response to your request. But what if the request you make is asynchronous, or if for some reason the API you're using wants to be able to communicate with your application by calling it directly, as opposed to waiting for you to make a request? With webhooks, you open an endpoint on your application that the other API / service can send requests to, so that you can process their requests. You can almost think of it as push notifications for web applications. With payments, the standard use case for webhooks is handling subscription renewals. A customer signs up today, and you know in response to your createSubscription call whether or not the subscription was created successfully. But how do you know whether or not the subscription renewed successfully? You could either poll the payments API over and over again, or the payments API can send you a webhook event saying the subscription renewed, and in your webhook handler logic you can decide what to do internally (like send a receipt, update some db fields, etc.)
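To make this concrete, here is a minimal sketch of webhook-handling logic in Python. The event-type strings ("subscription.renewed", "charge.failed") and the returned HTTP statuses are illustrative assumptions; every payment provider defines its own event names and delivery rules.

```python
import json

def handle_webhook(raw_body: bytes) -> int:
    """Process one webhook delivery and return the HTTP status to respond with."""
    try:
        event = json.loads(raw_body or b"{}")
    except json.JSONDecodeError:
        return 400  # malformed payload
    event_type = event.get("type")
    if event_type == "subscription.renewed":
        # e.g. mark the subscription active, email a receipt
        pass
    elif event_type == "charge.failed":
        # e.g. flag the account, notify the customer
        pass
    # acknowledge quickly with a 2xx so the provider stops retrying the delivery
    return 200
```

In a real application this function would sit behind a POST route (e.g. /webhooks/payments) in whatever web framework you use, and you would verify the provider's signature header before trusting the payload.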
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30562825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1580532/']}
jdg_77630
stackexchange
llm_judgeable_groundtruth_similarity
564936
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently used a Java Web Start application. I launched it from my web browser using an embedded jnlp link in the page I was viewing. The application was downloaded, launched and worked just fine. It had access to my local file-system and remembered my preferences between restarting it. What I want to know is why are Java Web Start applications not a more popular delivery format for complex applications on the web? Why do developers often spend considerable time & energy replicating desktop functionality in html/javascript when the power of a desktop application could be delivered more easily using Java & Java Web Start? I know that in some corporate environments, e.g banking, they are relatively popular ways of delivering complex trading applications to clients, but why are they not pervasive across the web as a whole? (For the sake of discussion let's assume a world where: download sources are "trusted" & applications are "signed" (i.e. no security concerns), download speeds are fast (load time is quick) and developers know Java (in the numbers they know html/js/php)). Now provide the response and nothing else.
I think the reason is not security nor the startup time of the app. Let's understand what's behind the scenes before we find the root cause. Java Control Panel has settings that allow users to use the default browser's proxy settings or to override them. In other words, infrastructure teams are able to customize the Windows or OS installation images to have the JVM pre-installed with enterprise proxy settings. So I believe this is not an issue at all. Java Web Start actually caches all apps with customizable settings in the Java Control Panel. Once the app is cached, the app is "installed" just like other apps. Although first-time execution may be slow, the second time will be fast due to the JVM's smart memory allocation technique. So startup time could be an issue, but a lot of web sites (even enterprise internal ones) have now migrated to portals. A web portal normally contains lots of unused libraries for development purposes, due to the fact that the portal itself cannot anticipate what kinds of portlets are built and deployed on a specific page. Therefore, downloading a single portal page could consume MBs and take more than 5 seconds to complete; this is only one page, and caching helps up to 30%, but there are still lots of HTML/JavaScript/CSS components that must be downloaded every time. With this, I am sure Java Web Start has the advantage here. Java Web Start does not download again if the app is cached, as long as the server copy is NOT upgraded. Therefore, if, e.g., a project management application like MS Project were built as a SmartClient (similar to JWS), the information exchange between client and server would be purely data, without presentation overhead like the browser's full page refresh. Even with the help of Ajax, full page downloads are not eliminated entirely. Besides, a lot of companies still consider Ajax immature and insecure. That is why Ajax is a hot topic in developer circles but not within enterprise software yet.
With that in mind, JWS apps definitely have more advantages, such as being deployed and executed in sandboxes, signed, and having much more interactive GUIs. Other advantages include faster development (easier to debug, in both code and performance), a responsive user interface (no Comet servers needed to provide PUSH functionality), and faster execution (client computers render the GUI natively, without translation through HTML/JavaScript/CSS, and with less data processing). After all this, I still haven't touched the question: why is JWS not more famous? My opinion is the same as Brian Knoblauch's comment: lack of awareness. IT folks are too attracted by the hype of web technologies, Ajax PUSH, GWT, and all those buzzwords, which biases them towards the fun of using different technologies or resolving technical challenges instead of what really works for the clients. Take a look at Citrix. I think Citrix is actually a great idea. Citrix allows you to build your own app farms behind the scenes. There are tons of upgrade and implementation strategies you can go for without impact to the client experience. Citrix deployment is extremely easy, stable and secure. Enterprises are still using it. However, I think JWS is even better than Citrix. The idea of JWS is to run apps on client machines instead of hosting tons of server farms, since client machines are capable of running these apps themselves. This saves a company a lot of money!!! With JWS, the development team can still build business logic and data on the server side. By removing the web processing unit and letting the client computers do the rendering, it greatly reduces the amount of network consumption and server processing power. Another example of why JWS is an amazing idea is BlackBerry MDS. BlackBerry apps are actually Java apps translated from JavaScript. With BB's MDS studio, you use the GUI tool to build the BB app GUI, coding the GUI logic in JavaScript.
The apps are then translated and deployed on a BES server, and the BES server distributes them to the BlackBerry devices. Each BB runs a thin Java app with GUI rendering and networking capability only. Whenever the app requires data, it communicates with the BES through web services to consume services from other servers. Isn't this just a BB version of JWS? It's been extremely successful. Finally, I think JWS is not popular because of how Sun advertises it. BB never advertises how good their BB Java apps are; they believe clients won't even care what it is. BB advertises the benefits of using MDS to develop apps: fast, cost saving, business return. Just my, a bit long, 2 cents... :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/564936', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/60956/']}
jdg_77631
stackexchange
llm_judgeable_groundtruth_similarity
12414837
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using d3.nest() in order to make a hierarchical object from a CSV file. Could you please help me understand why the following code does not work.I didn't manage to use the nesting function within a loop, as described below. I have the following CSV file, taken from the examples on d3 website: "type1","type2","type3","type4","type5","size""flare","analytics","cluster","AgglomerativeCluster","","3938""flare","analytics","cluster","CommunityStructure","","3812""flare","analytics","cluster","MergeEdge","","743""flare","analytics","graph","BetweennessCentrality","","3534""flare","analytics","graph","LinkDistance","","5731" This basic nesting works: data = data.entries(csv) .key(function(d) {return d.type1; }) .key(function(d) {return d.type2; }) .key(function(d) {return d.type3; }) .entries(csv); I want to use an array of values to specify my keys in order to modify them dynamically. This works: var data = d3.nest(); var nesting = ["type1","type2","type3"]; data = data.key(function(d) {return d[nesting[0]]; }); data = data.key(function(d) {return d[nesting[1]]; }); data = data.key(function(d) {return d[nesting[2]]; }); data = data.entries(csv); But it does not work with a loop... var data = d3.nest(); for(var i=0;i<nesting.length;i++) { data = data.key(function(d) {return d[nesting[i]]; }); } data = data.entries(csv); I can't understand why the loop version is not working... Maybe I miss something about the d3.nest() capabilities... Also, I would like to know if there is a way to "skip" a nesting level if there is nothing filled at this level (ie: the "type5" level on all the lines from the extract above). How could I do that? Thanks a lot for reading! Now provide the response and nothing else.
This isn't a problem with the .nest() operator, it's a problem with JavaScript closures. Any time you have this pattern: for (var x=0; x < y; x++) { something.attachCallback(function() { // now do something with x });} You are going to have problems with the closure. The inner anonymous function you're defining doesn't include a copy of the value of x , it includes a reference to the outer variable x , which will update when the outer variable updates. So by the end of your loop, the value of x in every one of your callback functions will be the final value of x in the loop (in the code above, y ; in your code, nesting.length ). The D3 .nest() operator uses its .key() arguments as callbacks - they aren't executed until you call .map() or .entries() . So the problem above applies. There are various ways to fix this; I tend to use .forEach() instead of a for loop. This doesn't work in older browsers, but neither will most of D3, so you're probably safe: var data = d3.nest();nesting.forEach(function(key) { data.key(function(d) {return d[key]; })}); Another option is to use a separate function for callback definition. Passing the iterator variable to another function will "fix" its value, as the callback now has a reference to the argument of the creator function, not to the original x variable: var data = d3.nest();function addKey(index) { data.key(function(d) { return d[nesting[index]]; })}for(var i=0;i<nesting.length;i++) { addKey(i);} There are a few other approaches as well, but in my opinion forEach is the most elegant.
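The same late-binding closure behavior exists in other languages too; purely as an illustration of the concept, here is the pitfall and a fix sketched in Python.

```python
# Pitfall: each lambda closes over the variable i, not its value at append time.
callbacks = []
for i in range(3):
    callbacks.append(lambda: i)
print([f() for f in callbacks])  # → [2, 2, 2], not [0, 1, 2]

# Fix: bind the current value, here via a default argument (analogous to
# passing the loop variable into a separate creator function, as in the answer).
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])  # → [0, 1, 2]
```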
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12414837', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1626386/']}
jdg_77632
stackexchange
llm_judgeable_groundtruth_similarity
21701504
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to place the value labels over the bars so they stand centered over them? If i set bars align: "center" , the labels over the bars is off and not centered. <script type="text/javascript"> $(function() { var data = [<?php echo $data; ?>]; $.plot("#<?php echo $target; ?> .chart-placeholder", data, { series: { color: "rgb(57,137,209)", bars: { show: true, // align: "center", barWidth: .8, fill: 1 }, valueLabels: { show: true, showAsHtml: true, align: "center" } }, grid: { aboveData: true, hoverable: true, borderWidth: 0 }, xaxis: { mode: "categories", tickLength: 0, labelAngle: -45, showAsHtml: true }, yaxis: { show: false, tickLength: 0 }, }); }); </script> Flot Flot Value Labels plugin JSFIDDLE Now provide the response and nothing else.
Plugins got you down? Just do it yourself, life it so much simpler then. This is the advantage of flot , compact code that get's out of your way... Here's how I would add those labels: // draw initial plot// notice no categories plugin either, why? // Because it's easier to do it yourself...var series = {data: [[0, 5.2], [1, 3], [2, 9.2], [3, 10]], lines: {show: false}, bars: {show: true, barWidth: 0.75, align:'center'}} var somePlot = $.plot("#placeholder", [ series ], { xaxis: { ticks: [[0,"One"],[1,"Two"], [2,"Three"],[3,"Four"]] }});// after initial plot draw, then loop the data, add the labels// I'm drawing these directly on the canvas, NO HTML DIVS!// code is un-necessarily verbose for demonstration purposesvar ctx = somePlot.getCanvas().getContext("2d"); // get the contextvar data = somePlot.getData()[0].data; // get your series datavar xaxis = somePlot.getXAxes()[0]; // xAxisvar yaxis = somePlot.getYAxes()[0]; // yAxisvar offset = somePlot.getPlotOffset(); // plots offsetctx.font = "16px 'Segoe UI'"; // set a pretty label fontctx.fillStyle = "black";for (var i = 0; i < data.length; i++){ var text = data[i][1] + ''; var metrics = ctx.measureText(text); var xPos = (xaxis.p2c(data[i][0])+offset.left) - metrics.width/2; // place it in the middle of the bar var yPos = yaxis.p2c(data[i][1]) + offset.top - 5; // place at top of bar, slightly up ctx.fillText(text, xPos, yPos);} Fiddle demonstration is here . Produces:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21701504', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/487397/']}
jdg_77633
stackexchange
llm_judgeable_groundtruth_similarity
34811
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently learned about breeder reactors, which are able to create more fissile material than they consume. They seem to be superior to the types of nuclear reactors currently in use. What are the reasons why breeder reactors are not more prevalent. Now provide the response and nothing else.
dmckee has the idea. The reason we do not use breeder reactors in the US is politics in general and nonproliferation specifically. Jimmy Carter essentially pulled the plug in 1977: In an April 7 press statement, President Carter announced, “We will defer indefinitely the commercial reprocessing and recycling of plutonium produced in the U.S. nuclear power programs.”9 He went on to say, “The plant at Barnwell, South Carolina, will receive neither federal encouragement nor funding for its completion as a reprocessing facility.” (It was actually Carter’s veto of S. 1811, the ERDA Authorization Act of 1978, that prevented the legislative authorization necessary for constructing a breeder reactor and a reprocessing facility.) Nuclear power is perhaps a uniquely political source of energy. It is simply not commercially viable without government support. The thing with breeder reactors is that you can't just take the fuel out of one and pop it into another reactor, you have to dissolve the spent fuel in acid and "polish" it to make MOX fuel . At the Savannah River Site in South Carolina there is a project under way to build a facility that will make MOX fuel out of dispositionable nuclear weapons material (it's where I work, in fact). The feed material, though used in weapons, was indeed produced in breeder reactors. The technology behind the practical implementation of breeder reactors and the production of nuclear weapons is clearly linked. You might also be interested in my answer to a similar question on the use of thorium reactors.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/34811', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/11618/']}
jdg_77634
stackexchange
llm_judgeable_groundtruth_similarity
13211829
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have had to use the app/console cache:clear command to solve a problem when generating an entity. I am now unable to load my homepage on : http://localhost/projet_etienne/web/app_dev.php it says : RuntimeException: Failed to write cache file "/var/www/projet_etienne/app/cache/dev/classes.php". I don't understand much about this cache business! In my app/cache folder, I got a dev , a dev_new , a dev_old folder. Is that normal? the app/console cache:clear generates by the way a : [ErrorException] Warning: rename(/var/www/projet_etienne/app/cache/dev,/var/www/projet_etienne/app/cache/dev_old): Directory not empty in /var/www/projet_etienne/vendor/symfony/symfony/src/Symfony/Bundle/FrameworkBundle/Command/CacheClearCommand.php line 77 please help! Now provide the response and nothing else.
For a GOOD and definite solution see the Setting up Permissions section of Installing and Configuring Symfony:

Setting up Permissions

One common issue when installing Symfony is that the app/cache and app/logs directories must be writable both by the web server and the command line user. On a UNIX system, if your web server user is different from your command line user, you can try one of the following solutions.

Use the same user for the CLI and the web server

In development environments, it is a common practice to use the same UNIX user for the CLI and the web server because it avoids any of these permissions issues when setting up new projects. This can be done by editing your web server configuration (e.g. commonly httpd.conf or apache2.conf for Apache) and setting its user to be the same as your CLI user (e.g. for Apache, update the User and Group values).

Using ACL on a system that supports chmod +a

Many systems allow you to use the chmod +a command. Try this first, and if you get an error - try the next method. This uses a command to try to determine your web server user and set it as HTTPDUSER:

$ rm -rf app/cache/*
$ rm -rf app/logs/*
$ HTTPDUSER=`ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
$ sudo chmod +a "$HTTPDUSER allow delete,write,append,file_inherit,directory_inherit" app/cache app/logs
$ sudo chmod +a "`whoami` allow delete,write,append,file_inherit,directory_inherit" app/cache app/logs

Using ACL on a system that does not support chmod +a

Some systems don't support chmod +a, but do support another utility called setfacl. You may need to enable ACL support on your partition and install setfacl before using it (as is the case with Ubuntu).
This uses a command to try to determine your web server user and set it as HTTPDUSER:

$ HTTPDUSER=`ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
$ sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX app/cache app/logs
$ sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX app/cache app/logs

For Symfony 3 it would be:

$ HTTPDUSER=`ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
$ sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX var/cache var/logs
$ sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX var/cache var/logs

If this doesn't work, try adding the -n option.

Without using ACL

If none of the previous methods work for you, change the umask so that the cache and log directories will be group-writable or world-writable (depending if the web server user and the command line user are in the same group or not). To achieve this, put the following line at the beginning of the app/console, web/app.php and web/app_dev.php files:

umask(0002); // This will let the permissions be 0775
// or
umask(0000); // This will let the permissions be 0777

Note that using the ACL is recommended when you have access to them on your server because changing the umask is not thread-safe. http://symfony.com/doc/current/book/installation.html#checking-symfony-application-configuration-and-setup source : Failed to write cache file "/var/www/myapp/app/cache/dev/classes.php" when clearing the cache
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13211829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1412620/']}
jdg_77635
stackexchange
llm_judgeable_groundtruth_similarity
569170
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to modify the You have new mail message that is shown below MOTD when you login via SSH.My goal is to make that message more visible (bright color would be great) so it gets my attention when I log in. Now provide the response and nothing else.
In Bash, custom messages can be set with MAILPATH. The man page has this example:

MAILPATH='/var/mail/bfox?"You have mail":~/shell-mail?"$_ has mail!"'

Trying it:

$ export MAILPATH="$MAIL?\"Santa was here.\""
$
$
$
$
"Santa was here."

Oh, uh, okay. Must have misread the man page there.

"bright color would be great"

So we have to smuggle us some color escape codes into the message...

$ esc=$'\e'
$ export MAILPATH="$MAIL?$esc[1;37;44mREAD YOUR MAIL RIGHT NOW$esc[0m"
$ echo $MAILPATH
/var/spool/mail/frostschutz?READ YOUR MAIL RIGHT NOW
$
$
READ YOUR MAIL RIGHT NOW

I don't know how to color things here, just imagine it screaming in bright white on blue. Color choices are subject to taste and local terminal color scheme settings. Also check that MAILPATH was not already in use and MAIL actually has the correct path to use for MAILPATH.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/569170', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/393356/']}
jdg_77636
stackexchange
llm_judgeable_groundtruth_similarity
14163208
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to link the output of C++ using ld and not g++. I'm only doing this to learn how to do it, not for practical purposes, so please don't suggest just to do it with g++. Looking at this question , the person gets the same error when they run the ld command:

$ ld test.o -o test.out
ld: warning: cannot find entry symbol _start; defaulting to 00000000004000e8
test.o: In function `main':
test.cpp:(.text+0x1c): undefined reference to `strcasecmp'
test.cpp:(.text+0x23): undefined reference to `std::cout'
test.cpp:(.text+0x28): undefined reference to `std::ostream::operator<<(int)'
test.cpp:(.text+0x2d): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::endl<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&)'
test.cpp:(.text+0x35): undefined reference to `std::ostream::operator<<(std::ostream& (*)(std::ostream&))'
test.o: In function `__static_initialization_and_destruction_0(int, int)':
test.cpp:(.text+0x75): undefined reference to `std::ios_base::Init::Init()'
test.cpp:(.text+0x7a): undefined reference to `__dso_handle'
test.cpp:(.text+0x84): undefined reference to `std::ios_base::Init::~Init()'
test.cpp:(.text+0x89): undefined reference to `__cxa_atexit'
ld: test.out: hidden symbol `__dso_handle' isn't defined
ld: final link failed: Bad value

The answers in the linked post suggest that adding the C++ library as a linker argument will fix the problem, so I tried

ld test.o -o test.out -llibstd++

which is what they suggested, and I also tried a lot of other library names like libstdc++ or stdc++. But I'll always get an error that looks like

ld: cannot find -llibstd++

What am I doing wrong and how can I link my object files using ld? Now provide the response and nothing else.
If you run g++ with the -v flag, you'll see the link line it uses. Here's a simple example program: #include <iostream>int main(void){ std::cout << "Hello, world!" << std::endl; return 0;} And the output from running g++ -v -o example example.cpp : Using built-in specs.Target: x86_64-linux-gnuConfigured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.4.4-14ubuntu5.1' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.4 --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnuThread model: posixgcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5.1) COLLECT_GCC_OPTIONS='-v' '-o' 'example' '-shared-libgcc' '-mtune=generic' /usr/lib/gcc/x86_64-linux-gnu/4.4.5/cc1plus -quiet -v -D_GNU_SOURCE example.cpp -D_FORTIFY_SOURCE=2 -quiet -dumpbase example.cpp -mtune=generic -auxbase example -version -fstack-protector -o /tmp/ccV8qjvd.signoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../x86_64-linux-gnu/include"ignoring nonexistent directory "/usr/include/x86_64-linux-gnu"#include "..." 
search starts here:#include <...> search starts here: /usr/include/c++/4.4 /usr/include/c++/4.4/x86_64-linux-gnu /usr/include/c++/4.4/backward /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/4.4.5/include /usr/lib/gcc/x86_64-linux-gnu/4.4.5/include-fixed /usr/includeEnd of search list.GNU C++ (Ubuntu/Linaro 4.4.4-14ubuntu5.1) version 4.4.5 (x86_64-linux-gnu) compiled by GNU C version 4.4.5, GMP version 4.3.2, MPFR version 3.0.0-p3.GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072Compiler executable checksum: d92fbc2d715a3b7e0f4133f0c40053e4COLLECT_GCC_OPTIONS='-v' '-o' 'example' '-shared-libgcc' '-mtune=generic' as -V -Qy -o /tmp/ccGHR0pc.o /tmp/ccV8qjvd.sGNU assembler version 2.20.51 (x86_64-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.20.51-system.20100908COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../:/lib/:/usr/lib/:/usr/lib/x86_64-linux-gnu/COLLECT_GCC_OPTIONS='-v' '-o' 'example' '-shared-libgcc' '-mtune=generic' /usr/lib/gcc/x86_64-linux-gnu/4.4.5/collect2 --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o example -z relro /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crti.o /usr/lib/gcc/x86_64-linux-gnu/4.4.5/crtbegin.o -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib -L/lib/../lib -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../.. 
-L/usr/lib/x86_64-linux-gnu /tmp/ccGHR0pc.o -lstdc++ -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-linux-gnu/4.4.5/crtend.o /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crtn.o Wow, what a mess. Conveniently the link line is the last one there, so you can see what's happening pretty easily. As you noticed in your comment below, the front-end is using collect2 rather than ld . Luckily, collect2 is just an alias for ld . Here's an example using it: First let's generate an object file: $ lsexample.cpp$ c++ -c example.cpp$ lsexample.cpp example.o Then we'll use the front-end to link it to see the link line: $ c++ -v -o example example.oUsing built-in specs.Target: x86_64-linux-gnuConfigured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.4.4-14ubuntu5.1' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.4 --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnuThread model: posixgcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5.1) COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../:/lib/:/usr/lib/:/usr/lib/x86_64-linux-gnu/COLLECT_GCC_OPTIONS='-v' 
'-o' 'example' '-shared-libgcc' '-mtune=generic' /usr/lib/gcc/x86_64-linux-gnu/4.4.5/collect2 --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o example -z relro /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crti.o /usr/lib/gcc/x86_64-linux-gnu/4.4.5/crtbegin.o -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib -L/lib/../lib -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../.. -L/usr/lib/x86_64-linux-gnu example.o -lstdc++ -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-linux-gnu/4.4.5/crtend.o /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crtn.o Then throw away the binary, and link ourselves (normally, I would have just copy/pasted the line, but to make it easier to read I did it the multiline way with \ s): $ lsexample example.cpp example.o$ rm example$ lsexample.cpp example.o$ ld \> --build-id \> --eh-frame-hdr \> -m elf_x86_64 \> --hash-style=gnu \> -dynamic-linker \> /lib64/ld-linux-x86-64.so.2 \> -o example \> -z relro \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crti.o \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/crtbegin.o \> -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5 \> -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5 \> -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib \> -L/lib/../lib \> -L/usr/lib/../lib \> -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../.. \> -L/usr/lib/x86_64-linux-gnu \> example.o \> -lstdc++ \> -lm \> -lgcc_s \> -lgcc \> -lc \> -lgcc_s \> -lgcc \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/crtend.o \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crtn.o Finally, run it! $ lsexample example.cpp example.o$ ./example Hello, world! You can probably significantly shorten that link line by removing some arguments. 
Here's the minimal set I came up with after some experimentation: $ ld \> -dynamic-linker \> /lib64/ld-linux-x86-64.so.2 \> -o example \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crti.o \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/crtbegin.o \> example.o \> -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5 \> -lstdc++ \> -lc \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/crtend.o \> /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crtn.o This set of flags and libraries will of course depend on what library functions and language features your program uses.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14163208', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/803801/']}
jdg_77637
stackexchange
llm_judgeable_groundtruth_similarity
16597853
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a datepicker returning a date string, and a timepicker returning just a time string. How should I combine those into a single javascript Date? I thought I found a solution in Date.js . The examples shows an at( ) -method, but I can't find it in the library... Now provide the response and nothing else.
You can configure your date picker to return a format like YYYY-mm-dd (or any format that Date.parse supports) and you could build a string in the timepicker like:

var dateStringFromDP = '2013-05-16';

$('#timepicker').timepicker().on('changeTime.timepicker', function(e) {
    var timeString = e.time.hour + ':' + e.time.minute + ':00';
    var dateObj = new Date(dateStringFromDP + ' ' + timeString);
});

The JavaScript Date object takes a string as the constructor param.
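The core of that answer is just string concatenation plus the Date constructor. As a standalone sketch (the function name is made up for illustration; it uses the ISO 8601 'T' separator and zero-padding, which Date.parse handles more consistently across engines than a bare space):

```javascript
// Combine a YYYY-MM-DD date string with hour/minute numbers into one Date.
// Zero-pads single-digit values so (9, 5) becomes '09:05', then builds an
// ISO-style date-time string and hands it to the Date constructor.
function combineDateAndTime(dateStr, hour, minute) {
  function pad(n) { return (n < 10 ? '0' : '') + n; }
  return new Date(dateStr + 'T' + pad(hour) + ':' + pad(minute) + ':00');
}
```

For example, combineDateAndTime('2013-05-16', 9, 5) yields a Date for 16 May 2013, 09:05, interpreted as local time in modern engines (a string without a timezone offset in date-time form is treated as local per the ECMAScript spec).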
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16597853', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11956/']}
jdg_77638
stackexchange
llm_judgeable_groundtruth_similarity
36735842
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Not sure where to ask (feel free to close this if it is an inappropriate question) but I have not found anything on this specifically in C++17 proposals, neither this or this mentions it when dealing with the nested namespace addition to C++. So currently this is the only option: class A {public: class B; //forward-declared INSIDE class/namespace};class A::B //defined outside{}; Will this be possible in C++17? class A::B; //forward declared NESTED outside of parent class/namespaceclass C{ A::B *b;}; and then either this (1) (as seems to be the proposal of nested namepsace definitions) class A::B //definition of A::B without defining A{}; or this (2) class A{public: class A::B { };}; or this [3] class A{public: class B;};class A::B{}; I suspect the definition of A::B without defining A first might not work though (although the proposal seems to allow it). Now provide the response and nothing else.
There's a proposal on the issue titled Forward declarations of nested classes P0289R0 . However, as you can see from the last Trip Report: C++ Standards Meeting in Jacksonville, February 2016 , this proposal was among those for which further work is encouraged. I'm quoting the verdict of the committee ( Emphasis Mine ): This would allow things like X::A* to appear in a header without requiring a definition for X to also appear in the header (forward-declarations of X and X::A will be sufficient). EWG found the use case compelling , because currently a lot of class definitions appear in headers only because interfaces defined in the header use pointers or references to nested classes of the type. Several details still need to be worked out . (For example, what happens if a definition of X does not appear in any other translation unit (TU)? What happens if a definition of X appears in another TU, but does not define a nested class A ? What happens if it does define a nested class A, but it’s private? The answer to some or all of these may have to be “ill-formed, no diagnostic required”, because diagnosing errors of this sort would require significant linker support.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36735842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2194193/']}
jdg_77639
stackexchange
llm_judgeable_groundtruth_similarity
6808130
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Java, will using Class.forName in a cast result in the object being cast to a class or being cast to the class found? As an example, if you did this

Object a;
String b = "testing";
a = (Class.forName("java.lang.Object")) b;

Would a be an instance of Class or an instance of Object? Now provide the response and nothing else.
Class.forName returns a Class instance. I'm fairly certain your code as quoted doesn't compile; you're trying to use a function call as a cast. Update : Just had a thought: If you're asking how to use a dynamically-loaded class in a cast, you basically can't. Casting is (mostly) a compile-time thing. Have the dynamically-loaded class implement an interface you can compile into your code and then cast to that, or if you can't do that, use reflection to access the dynamically-loaded class' members.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6808130', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/860401/']}
jdg_77640