id | question | title | tags | accepted_answer |
---|---|---|---|---|
_codereview.3367 | Please review this:<!DOCTYPE html > <html dir=ltr lang=en-UK> <head> <meta http-equiv=Content-Type content=text/html; charset=utf-8 /> <title>Fahad | Just another web designer</title> <!--Styling Starts--> </script> <style type=text/css> h1 { font-size:100px; display:inline-block; vertical-align:top; padding:5px; width:300px; } .menu { padding-top:90px; width:200px; display:inline-block; float:left; } .menu ul { list-style-type:none; } .container { color:#666; } a:link { color:#666; } a:hover { color:#333; } a:active { color:#666; } a:visited { color:#666; } a { text-decoration:none; } p { font-family:Verdana, Geneva, sans-serif; font-size:24px; } x { font-size:36px; font-family:Verdana, Verdana, Geneva, sans-serif; color:#06F; } #para { ; padding-left:200px; padding-right:200px; } #bio { float:left; height:200px; width:200px; } #bio h2 { font-size:40px; display:inline-block; vertical-align:top; padding:5px; width:300px; } #achievements { height:200px; width:200px; float:right; } #achievements h2 { font-size:40px; display:inline-block; vertical-align:top; padding:5px; width:300px; } #dreams { float:left; height:200px; width:200px; padding-left:150px; } #dreams h2 { font-size:40px; display:inline-block; vertical-align:top; padding:5px; width:300px; } </style> </head> <body> <div class=container> <h1>Fahad</h1> <div class=menu> <ul> <li><a href=http://www.facebook.com/fahd92>Home</a> </li> <li><a href=http://www.facebook.com/fahd92>Blog</a></li> <li><a href=http://www.facebook.com/fahd92>About</a></li> <li><a href=http://www.facebook.com/fahd92>Contact</a></li> </ul> </div > <div id=para> <p>I m a <x>programmer</x> and <del>now starting my career as</del> a<x>Web designer</x> . It takes a lot of time to start writing <x>beutifully</x>. I am not very good at spelling so do not point out <x>spelling mistakes</x>.</p> <div id=bio><!--Biography starts--> <h2>Biography</h2> <p>I was born in bla year <x>foo</x> date. I like working on <x>computers</x></p> </div><!--Biography ends--> <div id=achievements><!--Achievements starts--> <h2>Achievements</h2> <p>Don't know what to put in it. I have a lot of dreams to <x>accomplish</x></p> </div><!--Achievements ends--> <div id=dreams><!--Dreams starts--> <h2>Dreams</h2> <p>I <x>dream</x> a lot. Why shouldn't I? Dreams are <x>cool</x>. They make us <x>ambitious</x>.</p> </div><!--Dreams ends--> </div><!--para ends--> <!--menu div ends--></div><!--Container Div ends--> </body> </html> <!-- www.000webhost.com Analytics Code --><script type=text/javascript src=http://analytics.hosting24.com/count.php></script><noscript><a href=http://www.hosting24.com/><img src=http://analytics.hosting24.com/count.php alt=web hosting /></a></noscript><!-- End Of Analytics Code --> | Website built with HTML and CSS | html;css;html5 | Two notes:Line 2: use en-GBLine 5: </script> without <script>You had better not use the 000webhost.com analytics code, but other services such as Google Analytics or Clicky.And there is no <x> tag in HTML. You should use <b> instead, or even <span class=x>Shorten your codeInstead of: .container { color:#666; } a:link { color:#666; } a:hover { color:#333; } a:active { color:#666; } a:visited { color:#666; } use: .container,a:link,a:visited { color:#666; } a:hover { color:#333; } a:active { color:#666; }Finally, there is a problem with 1042x768 screens: The Dreams box isn't in the correct position. |
_unix.116030 | I am in doubt whether I have partitioned my hdd correctly as GPT on a BIOS motherboard. I used gparted to partition and I don't know if I aligned the beginning/end of the disk correctly, used correct flags etc. The disk in question is sdc: $ sudo lsblk -fNAME FSTYPE LABEL MOUNTPOINTsda sda1 ntfs System Reserved sda2 ntfs win7 sda3 ntfs WINYANCI sdb sdb1 sdb5 ext4 YAHSI sdc sdc1 sdc2 swap [SWAP]sdc3 ext4 /sdc4 ext4 /homesdc5 ext4 store1 sdc6 ntfs store2 sdd sdd1 sdd2 ntfs DEPO sdd5 ntfs HUSUSI sr0 here is what gdisk shows:$ sudo gdisk /dev/sdcGPT fdisk (gdisk) version 0.8.8Partition table scan: MBR: protective BSD: not present APM: not present GPT: presentFound valid GPT with protective MBR; using GPT.Command (? for help): pDisk /dev/sdc: 1953525168 sectors, 931.5 GiBLogical sector size: 512 bytesDisk identifier (GUID): 2758BB06-C7E7-451B-9C92-F1B278721BB6Partition table holds up to 128 entriesFirst usable sector is 34, last usable sector is 1953525134Partitions will be aligned on 2048-sector boundariesTotal free space is 3437 sectors (1.7 MiB)Number Start (sector) End (sector) Size Code Name 1 2048 6143 2.0 MiB EF02 2 6144 8394751 4.0 GiB 8200 3 8394752 76754943 32.6 GiB EF00 4 76754944 174409727 46.6 GiB 0700 5 174409728 1346283519 558.8 GiB 0700 6 1346283520 1953523711 289.6 GiB 0700 and parted shows this.Are there any mistakes? | Partitioning correctly for GPT in BIOS system | partition;gpt | To check if all partitions are properly aligned to 1MiB you should divide the start sector by 8. Let's see:1346283520/8=168285440174409728/8=2180121676754944/8=95943688394752/8=10493446144/8=7682048/8=256It looks good. The next thing is to check if the size of the partitions can also be divided by 8:(19535237111346283520+1)/8=75905024(1346283519174409728+1)/8=146484224(17440972776754944+1)/8=12206848(767549438394752+1)/8=8545024(83947516144+1)/8=1048576(61432048+1)/8=512It also looks good. You divide by 8 because of the technology called advanced format. Just look at the following images:But this concerns only disks that have this feature. If you don't have a disk with the advanced format technology, it doesn't matter what alignment you use. Most of modern disks use this thing, and most of partition tools align partitions to 1MiB by default. |
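A minimal sketch (not part of the accepted answer) that automates the divisibility check described above, assuming 512-byte logical sectors and 4096-byte Advanced Format physical sectors; the sector numbers are copied from the gdisk output in the question.

```python
# Check 4 KiB (Advanced Format) alignment: with 512-byte logical sectors,
# a partition is aligned when its start sector and its length in sectors
# are both multiples of 8.  Sector numbers taken from the gdisk output above.
partitions = [
    (1, 2048, 6143),
    (2, 6144, 8394751),
    (3, 8394752, 76754943),
    (4, 76754944, 174409727),
    (5, 174409728, 1346283519),
    (6, 1346283520, 1953523711),
]

SECTORS_PER_PHYSICAL = 4096 // 512  # = 8

for number, start, end in partitions:
    length = end - start + 1  # gdisk end sectors are inclusive
    start_ok = start % SECTORS_PER_PHYSICAL == 0
    size_ok = length % SECTORS_PER_PHYSICAL == 0
    print(f"partition {number}: start {'aligned' if start_ok else 'MISALIGNED'}, "
          f"size {'multiple of 8' if size_ok else 'NOT a multiple of 8'}")
```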
_cogsci.17453 | If I press my eyes I can see all kind of things: sparkling blue dots (which sometimes seem random and sometimes there seems to be a pattern in them), growing or diminishing rings of all kinds of color (I once read that these circles are also present in the retina of the non-yet-born, to provide some preparation), vague black-and-white faces, and many more, sometimes strange, sometimes for a short time recognizable forms. So, what strange capers are the different light receptors, or neurons in or behind the retinas of my eyes performing? | What happens in my retina if I press on my eyeballs? | neurobiology;vision;neurophysiology | Short answerPressure phosphenes are believed to be induced by sensory neurons in the retina downstream from the photoreceptors due to stretch-mediated activation.BackgroundYou are referring to pressure phosphenes. They are visual perceptions induced by applying pressure to the eye ball. They are often described as a glow with arcuate or circular characteristics and are generally perceived in the visual field opposite to the area of pressure. If the object applying the pressure is small, the center of the perceived area appears light with a dark surround and a bright outer portion. However, there is substantial inter-individual variability in how pressure phosphenes are perceived (Chew et al., 2005). Eyeball deformation leads to an activation of ON-center ganglion cells in the retina, while OFF-center ganglion cells are inhibited. It is thought that the activation of ON-center and inhibition of OFF-center ganglion cells by eyeball deformation are caused by retinal stretching, which may also lead to horizontal cell stretch. Stretching the horizontal cell membrane probably generates an increase in membrane sodium conductivity and a depolarization of the membrane potential. This depolarization of the horizontal cell membrane potential is transmitted either directly or indirectly (via receptor synapses) from the horizontal to the bipolar cells. The bipolar cells, in turn, can activate or inhibit the ganglion cells (Grsser et al., 1989). Note that in this model, the photoreceptor cells in the retina are not involved - it occurs downstream in the secondary sensory neurons.References- Chew et al., Eye (2005) 19: 6835- Grsser et al., Physiol Bohemoslov (1989); 38(4): 289-30 |
_codereview.2236 | This is a follow-up to the question that I posted earlier regarding my interpreter.After a lot of help, I refactored my code and added more functionality to it. Now, it allows users to declare functions like sofunc function_name param1 param2 param3 . / param3 * param1 param2 or like thisfunc to_radians deg . * deg / 3.14159 180and also like thisfunc sind theta . sin to_radians thetaAs you can see, you would use a period (.) to tell the interpreter to stop taking in parameters and start reading the body of the function.On top of that, it can read from source code.Here's an example of such a source code: ; Variablesdef PI * 4 atan 1; Functionsfunc to_rad deg . * deg / PI 180func sind theta . sin to_rad theta; Run the programsin to_rad 45sind 45Which will then output0.7071070.707107If you have ever programmed using a Lisp-like programming language -- such as scheme -- you would notice that there are a lot of similarities with what I made. In fact, it was inspired by those programming languages.Anyways, there's one limitation behind the code for the interpreter: when it encounters an error from within the user's source code, it's programmed to crash completely. That's not a problem. I did that on purpose. It's one way indicates to the user that there was an error.Now, I want to implement a more appropriate error handler. But, before I do that, I would like to ensure that my code is easier to maintain.I have a hunch that I should separate my code into multiple files. But that's just one, and even that I need help on how I should approach it.What would be your suggestion?#include <iostream>#include <vector>#include <string>#include <fstream>#include <sstream>#include <cmath>#include <cstdlib> // mmocny: I needed to add this to use atof#include <functional>using namespace std;//----------------------------------class Variable{public: Variable(const string& name, double val) : name_(name), val_(val) // mmocny: Use initializer lists { } // mmocny: get_* syntax is less common in C++ than in java etc. 
const string& name() const { return name_; } // mmocny: Don't mark as inline (they already are, and its premature optimization) double val() const { return val_; } // mmocny: Again, don't mark as inlineprivate: string name_; // mmocny: suggest renaming name_ or _name: Easier to spot member variables in method code, and no naming conflicts with methods double val_;};//----------------------------------class Func{public: Func(const string& name, const vector<string>& params, const string& instr) : name_(name), params_(params), instr_(instr) { } const string& name() const { return name_; } const vector<string>& params() const { return params_; } const string& body() const { return instr_; }private: string name_; vector<string> params_; string instr_;};//----------------------------------// mmocny: Replace print_* methods with operator<< so that other (non cout) streams can be used.// mmocny: Alternatively, define to_string()/str() methods which can also be piped out to different streamsstd::ostream & operator<<(std::ostream & out, Variable const & v){ return out << v.name() << , << v.val() << endl;}std::ostream & operator<<(std::ostream & out, Func const & f){ out << Name: << f.name() << endl << Params: << endl; for (vector<string>::const_iterator it = f.params().begin(), end = f.params().end(); it != end; ++it) { out << << *it << endl; } cout << endl; cout << Body: << f.body();}std::ostream & operator<<(std::ostream & out, vector<Variable> const & v){ for (vector<Variable>::const_iterator it = v.begin(), end = v.end(); it != end; ++it ) // mmocny: Use iterators rather than index access { out << *it << endl; } return out;}std::ostream & operator<<(std::ostream & out, vector<Func> const & v){ for (vector<Func>::const_iterator it = v.begin(), end = v.end(); it != end; ++it) { out << *it << endl; } return out;}//----------------------------------class Interpreter{public: const vector<Variable>& variables() const { return variables_; } const vector<Func>& functions() const { return functions_; } // mmocny: replace istringstream with istream // mmocny: you only need to predeclare this one function double operate(const string& op, istream& in, vector<Variable>& v); double operate(const string& op, istream& in); double get_func_variable(const string& op, istream& in, vector<Variable> v) { // mmocny: instead of using a vector<Variable> you should be using a map/unordered_map<string,double> and doing a key lookup here for (int size = v.size(), i = size - 1; i >= 0; i--) { if (op == v[i].name()) return v[i].val(); } for (int size = functions_.size(), i = 0; i < size; i++) { if (op == functions_[i].name()) { vector<Variable> copy = v; vector<Variable> params; for (int size_p = functions_[i].params().size(), j = 0; j < size_p; j++) { string temp; in >> temp; params.push_back(Variable(functions_[i].params()[j], operate(temp, in, v))); } /*for (vector<string>::const_iterator it = functions_[i].params().begin(), end = functions_[i].params().end(); it != end; ++it) { string temp; in >> temp; params.push_back(Variable(*it, operate(temp, in, v))); }*/ for (int size_p = params.size(), j = 0; j < size_p; j++) { copy.push_back(params[j]); } /*for (vector<Variable>::const_iterator it = params.begin(), end = params.end(); it != end; ++it) { copy.push_back(*it); }*/ istringstream iss(functions_[i].body()); string temp; iss >> temp; return operate(temp, iss, copy); } } // mmocny: what do you do if you don't find the variable? 
int char_to_int = op[0]; cout << char_to_int << endl; cout << ' << op << ' is not recognized. << endl; throw std::exception(); // mmocny: You should do something better than throw a generic exception() } double get_func_variable(const string& op, istream& in) { return get_func_variable(op, in, variables_); } //---------------------------------- bool is_number(const string& op) { // mmocny: someone else already mentioned: what if op is empty? int char_to_int = op[0]; // mmocny: couple notes here: // 1) chars are actually numbers you can reference directly, and not need magic constants // 2) functions in the form if (...) return true; else return false; should just be reduced to return (...); // 3) is_number functionality already exists in libc as isdigit() // 4) long term, you are probably going to want to improve this function.. what about negative numbers? numbers in the form .02? etc.. //return (char_to_int >= '0' && char_to_int <= '9'); return isdigit(char_to_int); } //---------------------------------- template< class Operator > double perform_action(istream& in, Operator op, vector<Variable>& v) { string left; in >> left; double result = operate(left, in, v); // mmocny: This is a big one: for correctness, you must calculate result of left BEFORE you read right string right; in >> right; return op(result, operate(right, in, v)); } template< class Operator > double perform_action(istream& in, Operator op) { return perform_action(in, op, variables_); } //---------------------------------- void define_new_var(istream& in) { string name; in >> name; string temp; in >> temp; double value = operate(temp, in); variables_.push_back(Variable(name, value)); } //---------------------------------- void define_new_func(istream& in) { string name; in >> name; string temp; vector<string> params; do { in >> temp; if (temp == .) 
break; params.push_back(temp); } while (temp != .); string body = ; while (in >> temp) { body += temp + ; } Func fu(name, params, body); functions_.push_back(fu); }private: vector<Variable> variables_; vector<Func> functions_;};//----------------------------------double Interpreter::operate(const string& op, istream& in, vector<Variable>& v){ double value; if (op == +) value = perform_action(in, plus<double>(), v); else if (op == -) value = perform_action(in, minus<double>(), v); else if(op == *) value = perform_action(in, multiplies<double>(), v); else if (op == /) value = perform_action(in, divides<double>(), v); /*else if (op == %) value = perform_action(in, modulus<double>());*/ else if (op == sin) { string temp; in >> temp; value = sin(operate(temp, in, v)); } else if (op == cos) { string temp; in >> temp; value = cos(operate(temp, in, v)); } else if (op == tan) { string temp; in >> temp; value = tan(operate(temp, in, v)); } else if (op == asin) { string temp; in >> temp; value = asin(operate(temp, in, v)); } else if (op == acos) { string temp; in >> temp; value = acos(operate(temp, in, v)); } else if (op == atan) { string temp; in >> temp; value = atan(operate(temp, in, v)); } else if (is_number(op)) value = atof(op.c_str()); // mmocny: consider using boost::lexical_cast<>, or strtod (maybe) else value = get_func_variable(op, in, v); return value;}double Interpreter::operate(const string& op, istream& in){ return operate(op, in, variables_);}//----------------------------------void run_code(Interpreter& interpret, const string& op, istream& in){ if (op == def) interpret.define_new_var(in); else if (op == func) interpret.define_new_func(in); else if (op[0] == ';' || op.empty()) return; else cout << endl << interpret.operate(op, in) << endl;}//----------------------------------bool is_all_blank(const string& line){ for (int i = 0; i < line.size(); i++) { if (line[i] != ' ') return false; } return true;}//----------------------------------int main(){ cout << endl << LePN Programming Language << endl; Interpreter interpret; while (cin) { cout << endl << > ; string temp; getline(cin, temp); if (temp.empty()) // mmocny: This also handles the case of CTRL+D continue; istringstream iss(temp); string op; iss >> op; if (op == quit) break; else if (op == show_vars) std::cout << interpret.variables() << std::endl; else if (op == show_func) std::cout << interpret.functions() << std::endl; else if (op == open) { string filename; if (iss) { iss >> filename; } else { cin >> filename; } ifstream file(filename.c_str()); while (file && !file.eof()) { string line; getline(file, line); istringstream temp_stream(line); if (!temp_stream || is_all_blank(line)) { continue; } temp_stream >> op; int char_to_int = op[0]; run_code(interpret, op, temp_stream); } } else run_code(interpret, op, iss); }} | Preparing interpreter for error-handling | c++;interpreter | If you're going to get serious on this project, here are some questions worth considering:Are you using version control? We all have our preferences here but the important thing is you're using something. It gives you a way to hit 'UNDO' in case you make a big oopsie.Are there any unit tests? If you're currently not unit testing I would strongly suggest you start doing that now before your interpreter gets bigger. I like to use google test but there are many other C++ testing frameworks to choose from. 
Unit testing gives you a way to better control entropy in your project and this becomes more important as your project grows.

Is the code well separated into different appropriate source files? Just like you wouldn't stuff every sentence into one gigantic paragraph in an essay, you don't want to stuff all your classes, functions and variables into one source. In C++ it's customary to put each defined class in its own respective header (.h) and implementation (.cpp) file.

Now some specific comments regarding your presented code:

There is no clear separation between the lexing stage, parsing stage and processing in your design; such a separation is the usual expected approach when writing a translator, compiler or interpreter. It seems like these stages are mixed in together in an ad-hoc fashion inside your interpreter class. This coupling and lack of clear separation in your code will make it harder to maintain later when you add more stuff to it.

Consider typedef'ing your std::vector usage to make it more container-agnostic. If you decide to change std::vector to a std::map later on you won't have to change it all over the place. Also consider using std::back_insert_iterator instead of directly calling std::vector::push_back to make it even more container-agnostic.

This function can be expressed more directly:

    bool is_all_blank(const string& line)
    {
        // Shouldn't it check for other possible whitespace too? Like tab for example
        return line.find_first_not_of(" \t") == string::npos;
    }

While skimming through your code I notice it's really bare of any comments. You might want to look into fixing that. One more point, when reorganizing your code into multiple source files do not use any using namespace directives in the header files. Reason being you don't want this header to pollute the namespace of the components that include this header. |
_unix.107800 | I have a file servers.txt, with list of servers:server1.mydomain.comserver2.mydomain.comserver3.mydomain.comwhen I read the file line by line with while and echo each line, all works as expected. All lines are printed.$ while read HOST ; do echo $HOST ; done < servers.txtserver1.mydomain.comserver2.mydomain.comserver3.mydomain.comHowever, when I want to ssh to all servers and execute a command, suddenly my while loop stops working:$ while read HOST ; do ssh $HOST uname -a ; done < servers.txtLinux server1 2.6.30.4-1 #1 SMP Wed Aug 12 19:55:12 EDT 2009 i686 GNU/LinuxThis only connects to the first server in the list, not to all of them. I don't understand what is happening here. Can somebody please explain?This is even stranger, since using for loop works fine:$ for HOST in $(cat servers.txt ) ; do ssh $HOST uname -a ; doneLinux server1 2.6.30.4-1 #1 SMP Wed Aug 12 19:55:12 EDT 2009 i686 GNU/LinuxLinux server2 2.6.30.4-1 #1 SMP Wed Aug 12 19:55:12 EDT 2009 i686 GNU/LinuxLinux server3 2.6.30.4-1 #1 SMP Wed Aug 12 19:55:12 EDT 2009 i686 GNU/LinuxIt must be something specific to ssh, because other commands work fine, such as ping:$ while read HOST ; do ping -c 1 $HOST ; done < servers.txt | Using while loop to ssh to multiple servers | ssh;scripting;io redirection | ssh is reading the rest of your standard input.while read HOST ; do ; done < servers.txtread reads from stdin. The < redirects stdin from a file. Unfortunately, the command you're trying to run also reads stdin, so it winds up eating the rest of your file. You can see it clearly with:$ while read HOST ; do echo start $HOST end; cat; done < servers.txt start server1.mydomain.com endserver2.mydomain.comserver3.mydomain.comNotice how cat ate (and echoed) the remaining two lines. (Had read done it as expected, each line would have the start and end around the host.)Why does for work?Your for line doesn't redirect to stdin. (In fact, it reads the entire contents of the servers.txt file into memory before the first iteration). So ssh continues to read its stdin from the terminal (or possibly nothing, depending on how your script is called). SolutionAt least in bash, you can have read use a different file descriptor.while read -u10 HOST ; do ssh $HOST uname -a ; done 10< servers.txt# ^^^^ ^^ought to work. 10 is just an arbitrary file number I picked. 0, 1, and 2 have defined meanings, and typically opening files will start from the first available number (so 3 is next to be used). 10 is thus high enough to stay out of the way, but low enough to be under the limit in some shells. Plus its a nice round number...Alternative Solution 1: -nAs McNisse points out in his/her answer, the OpenSSH client has an -n option that'll prevent it from reading stdin. This works well in the particular case of ssh, but of course other commands may lack thisthe other solutions work regardless of which command is eating your stdin.Alternative Solution 2: second redirectYou can apparently (as in, I tried it, it works in my version of Bash at least...) do a second redirect, which looks something like this:while read HOST ; do ssh $HOST uname -a < /dev/null; done < servers.txtYou can use this with any command, but it'll be difficult if you actually want terminal input going to the command. |
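The same pitfall and fix can be illustrated outside the shell. This is a rough Python sketch, not from the accepted answer, that walks servers.txt and gives each ssh child /dev/null as stdin, playing the role of `ssh -n` or the `10<` redirect: the loop owns the server list, so no child process can swallow the remaining lines.

```python
# Rough illustration (not from the answer) of the same idea in Python.
import subprocess

with open("servers.txt") as server_list:
    for line in server_list:
        host = line.strip()
        if not host:
            continue
        # stdin=subprocess.DEVNULL plays the role of `ssh -n` / the 10< redirect:
        # the child cannot consume the rest of the server list.
        result = subprocess.run(
            ["ssh", host, "uname", "-a"],
            stdin=subprocess.DEVNULL,
            capture_output=True,
            text=True,
        )
        print(result.stdout, end="")
```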
_cstheory.21226 | The definition of algebraic poset in Continuous Lattices and Domains, Definition I-4.2, says that, for all $x \in L$, the set $A(x) = {\downarrow} x \cap K(L)$ should be a directed set, and$x = \bigsqcup ({\downarrow} x \cap K(L)$.Here $L$ is a poset, $K(L)$ is the set of compact elements of $L$, and ${\downarrow} x$ means $\{y \mid y \sqsubseteq x\}$.I was a bit surprised by the first condition. It is an easy argument to show that, if $k_1$ and $k_2$ are in $A(x)$ then $k_1 \sqcup k_2$ is also in $A(x)$. So, all nonempty finite subsets of $A(x)$ have upper bounds in it. The only question is whether the empty subset has an upper bound in it, i.e., whether $A(x)$ is nonempty in the first place. So,Is it ok to replace the first condition with $A(x)$ is nonempty?What is an example of a situation where $A(x)$ is empty?Note added: How is $k_1 \sqcup k_2$ in A(x)? First, since $k_1 \sqsubseteq x$ and $k_2 \sqsubseteq x$, we have $k_1 \sqcup k_2 \sqsubseteq x$. Second, $k_1$ and $k_2$ are compact. So, any directed set that goes beyond them must pass them. Suppose a directed set $u$ also goes beyond $k_1 \sqcup k_2$, i.e., $k_1 \sqcup k_2 \sqsubseteq \bigsqcup u$. Since it has gone beyond $k_1$ and $k_2$, it must have passed them, i.e., there are elements $y_1, y_2 \in u$ such that $k_1 \sqsubseteq y_1$ and $k_2 \sqsubseteq y_2$. Since $u$ is a directed set, it must have an upper bound for $y_1$ and $y_2$, say $y$. Now, $k_1 \sqcup k_2 \sqsubseteq y \in d$. This shows that $k_1 \sqcup k_2$ is compact. The two pieces together say $k_1 \sqcup k_2 \in A(x)$. | Is this an equivalent condition for algebraic posets? | domain theory | An example where $A(x)$ is empty is the set of real numbers $\mathbb{R}$ with the usual ordering. It has no compact elements at all.If we assume the second condition then $A(x)$ cannot be empty: if $A(x) = \emptyset$ then by the second condition $x$ is the empty join, therefore the least element of $L$, which is compact, therefore $x \in A(x) = \emptyset$, a contradiction.Your proposal to replace the first condition with non-emptyness does not work. Consider the poset $L$ which consists of two copies of $\mathbb{N}$ and $\infty$, where we write $\iota_1(n)$ and $\iota_2(n)$ for the two copies of $n$, ordered by:$\iota_1(m) \leq \iota_1(n) \iff m \leq n$$\iota_2(m) \leq \iota_2(n) \iff m \leq n$$x \leq \infty$ for all $x$.In words, we have two incomparable chains with a common supremum. All elements are compact except $\infty$. Now:${\downarrow}x \cap K(L) \neq \emptyset$, obviously.$x = \bigvee ({\downarrow}x \cap K(L))$, obviously.The set ${\downarrow}\infty \cap K(L) = \mathbb{N} + \mathbb{N}$ is not directed. |
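As a concrete illustration (my own sketch, not from the answer), a finite truncation of the counterexample can be checked mechanically: a single chain is directed, but the union of the two chains, i.e. the compact elements strictly below the top, is not.

```python
# Finite truncation of the counterexample: two incomparable chains with a
# common top.  The union of the chains fails directedness because elements
# from different chains never have an upper bound inside the set.
N = 5  # arbitrary truncation of the two copies of the naturals

def leq(x, y):
    """x <= y: within a chain compare indices; everything is <= the top."""
    if y == "top":
        return True
    if x == "top":
        return False
    return x[0] == y[0] and x[1] <= y[1]

chain_a = [("a", k) for k in range(N)]
chain_b = [("b", k) for k in range(N)]
A = chain_a + chain_b  # the compact elements strictly below the top

def is_directed(subset):
    return all(
        any(leq(x, z) and leq(y, z) for z in subset)
        for x in subset
        for y in subset
    )

print(is_directed(chain_a))  # True: a single chain is directed
print(is_directed(A))        # False: ("a", 0) and ("b", 0) have no upper bound in A
```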
_unix.2651 | Is there any way to disable the animation which occurs when attempting to unlock gnome Screensaver and failing? Currently the password dialog shakes back and forth and for some reason on some laptops, this animation causes the entire system to be unresponsive for about 5 - 7 seconds. Can be very frustrating because it increases the chances of a second, third, etc mistyped passsword.I'm currently on Fedora 11 but this issue also occurs in Ubuntu. The Ubuntu change log calls this feature Shake the dialog when authentication failsgnome-screensaver (0.0.17-0ubuntu1) dapper; urgency=low * New upstream release: - 0.0.16: - Shake the dialog when authentication fails -- Daniel Holbach <[email protected]> Mon, 24 Oct 2005 21:14:22 +0200 | How to disable Shake password dialog when authentication fails of Gnome Screensaver? | gnome;gui;authentication;screensaver | For a moment, I thought that this might be inherited from the GDM configuration (since the GDM login screen does the same thing), but apparently it's not.After checking a few other places without any luck, I decided to find out for myself and took a look at the source code(v2.30). The code responsible for the shaking only checks to make sure the dialog isn't already being shaken. It makes no checks against any configuration, so there doesn't appear to be a way to disable it without changing the code itself.You might try switching to xscreensaver and see if that helps. |
_unix.282368 | I have a MySQL database backup in a gz file. When trying to uncompress it I get the following:gzip: db_stepup.sql.gz: not in gzip formatI read that sometimes is just a matter of removing the gz extension. So I did that and I'm able to see the file up a point. (352, 'bs', 'lv', 'Bosnian'),(353, 'bs', 'lt', 'Bosnian'),(354, 'bs', 'mk', 'Bosnian'),(355, 'bs', 'mt', 'Bosnian')\8B\00\00\00\00\00\00}\9D\EE\DF\CF_\C1 \DBx<\95\F7\CC8O3c\CF\D8\C7c\8F\E3\D8s&]MWW\95*\B3\AA\AB2\AB\DDO@\83\84A\B2el\A1;\81$\8B\AB<\B4\F4\AD' \CEI6\D2\FFp2\F7\DA{\FB\B2 <nk\F4\AD\F5[;\F7\FAv\DE\F6\F7\FF \C7\F7\A2$\FD\FE\BEX\A9\FF\A1\FD[\BF\FF2\AB\A7\E3\FE\F4\FE\F1\FB\9D\9AA\9D\FAj\AE\D5\E9\C0W\A8\A5V\EB\FD#\92\D3\E4o\E34\D0\EAz\DFWC\A8\A2\E9\95\D9\AF\B4\F2\FE\C9XDh\FEi\BD-\EC^i\9B\98\D8XY\F8\89\98/\FD\00\B5\85Am\FF\E7\DBR\B7\85\D8z\EF\F7{7\BE<\B8\F7\D9\DE\CE\DE\C7\ED\FF~\D2\FD\AF\F4w\88\B5\9F\9E\82a\BD\F0U0\AC7\90\9EI_\CA \D8\F8At this point the file gets scrambled. It looks like a problem of text encoding, is there a way to recover the data in the file?Here's the file if you want to take a look at it | Corrupted gz file | compression;gzip | The file starts out in plain ASCII so it's uncompressed.$ hexdump -C db_stepup.sql.gz | less00000000 2d 2d 20 70 68 70 4d 79 41 64 6d 69 6e 20 53 51 |-- phpMyAdmin SQ|00000010 4c 20 44 75 6d 70 0a 2d 2d 20 76 65 72 73 69 6f |L Dump.-- versio|00000020 6e 20 34 2e 31 2e 31 34 2e 38 0a 2d 2d 20 68 74 |n 4.1.14.8.-- ht|[...]That goes on for a while until somewhere in the middle it turns binary.00012390 27 2c 20 27 6d 74 27 2c 20 27 42 6f 73 6e 69 61 |', 'mt', 'Bosnia|000123a0 6e 27 29 1f 8b 08 00 00 00 00 00 00 03 7d 9d db |n')..........}..|000123b0 93 14 d7 95 ee df cf 5f c1 db 78 22 3c 11 95 f7 |......._..x<...|It starts with 1f 8b 08 ... (which for some reason did not show as such in the output you posted), it could be a valid gzip header. The starting point is 000123a3 so let's split it off...$ dd if=db_stepup.sql.gz bs=$((0x000123a3)) skip=1 | gunzip | less,(356, 'bs', 'mo', 'Bosnian'),(357, 'bs', 'mn', 'Bosnian'),(358, 'bs', 'ne', 'Bosnian'),[...]And hey, that seems to be the data where it left off. For some strange reason phpMyAdmin seems to have decided to use gzip in the middle of the output...Stitching it back together:$ dd if=db_stepup.sql.gz bs=$((0x000123a3)) count=1 > db_stepup.stitch.sql$ dd if=db_stepup.sql.gz bs=$((0x000123a3)) skip=1 | gunzip >> db_stepup.stitch.sqlIf you're looking for a way to find such offsets automatically (maybe you have more broken files like that), there's this nice little tool called binwalk which can also look for known file headers in the middle of files.$ binwalk db_stepup.sql.gz DECIMAL HEXADECIMAL DESCRIPTION--------------------------------------------------------------------------------74659 0x123A3 gzip compressed data, from Unix, NULL date: Thu Jan 1 00:00:00 197092556 0x1698C gzip compressed data, from Unix, NULL date: Thu Jan 1 00:00:00 1970110522 0x1AFBA gzip compressed data, from Unix, NULL date: Thu Jan 1 00:00:00 1970[...]As you can see it has the same result (0x123A3 offset). It finds more than one because gzip comes in blocks / chunks (you can even concatenate multiple gzip files) and each block has the same distinct header. |
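The same carving can be sketched in Python, scanning for the gzip magic bytes 1f 8b 08 and splitting there. This is an alternative to the dd/binwalk commands above, not part of the accepted answer, and it assumes the compressed stream runs to the end of the file.

```python
# Find the first gzip magic sequence (1f 8b 08), keep the plain-text prefix
# as-is, and gunzip the remainder.  File names follow the question.
import gzip

with open("db_stepup.sql.gz", "rb") as f:
    data = f.read()

offset = data.find(b"\x1f\x8b\x08")  # same offset hexdump/binwalk reported (0x123a3)
if offset == -1:
    raise SystemExit("no gzip header found")

with open("db_stepup.stitch.sql", "wb") as out:
    out.write(data[:offset])                   # uncompressed first part
    out.write(gzip.decompress(data[offset:]))  # decompressed remainder

print(f"gzip stream starts at offset {offset:#x}")
```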
_softwareengineering.214056 | I've worked in a team that developed and gave support for SW practices tools. Those tools were written (and will be written) in many programming languages.According to Scrum, a given story, can be implemented by several team members (according to what I know).When organized in a team that is oriented to a specific SW domain/framework (.NET for example), this seems possible and even reasonable.But if your team needs to write in Python, C#, Java etc. for several tools, some with different orientation (DB/UI), is this possible? How does the division of labor should be made in this case? Thanks. | Can a team that uses Scrum achieve Co-Dev and domain expertise if it handles many SW domains? | agile;scrum | I understand the 'all team members' are equal in scrum as an ideal.Practically, you'll always have people better at some jobs than others. But it's all about reducing the bus factor: voluntarily assigning work to someone who's not the best at it so should a bus happen, or should your expert be overloaded with other things, work can still happen.The non expert will of course require help/assistance from the expert. But it is better to lower your velocity and take advantage of the help when it is available rather than be stuck when it's not (hospital/overwork case). |
_webmaster.1982 | I work for a company that setup a website, this one thelinearshop.com (warning this site contains malware so don't go there if you are concerned), before I started working for them. The site uses OSCommerce. It now looks like someone was able to infect it with a malware link and jumble the site itself and when I visit the site I get a warning page from Google with the choice to leave, go to http://safebrowsing.clients.google.com/safebrowsing/diagnostic?client=Firefox&hl=en-US&site=http://thelinearshop.com/ for info on why it is getting blocked, or to ignore the issue.I have looked at the site and someone definitely hacked it. I have never run into this before and I am not sure how to proceed in order to get this fixed. I know I need to get the virus out but I am also concerned that whoever set the site up used an older version of OSCommerce and that it must not be very secure. Is that the case? | How to fix a site that Google tells you is infected | security;oscommerce | Your best bet, if it is possible, is to set up a development version of their site and try to upgrade it to a newer version of OSCommerce and see if it works properly. I don't use OSCommerce but I would think they would offer upgrade scripts or something similar to help automate the process. Assuming it works properly I would then upgrade the live site to the new version. That way you are sure you have all of the latest patches and have closed whatever hole(s) were originally exploited.Once you've cleaned up the site create a Google Webmaster account for this website if you haven't already. In there you can request that Google check the site again and have them remove the unsafe website label from their listings. |
_codereview.83336 | I'm kinda new at this jQuery stuff and I've gathered up this much to make my code functional, but I would really want to shorten it. I've already tried $('#Monday','#Tuesday'....) to group them, but with no success.$('#Monday').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } $(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});});$('#Tuesday').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } $(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});});$('#Wednesday').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } $(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});});$('#Thursday').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } $(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});});$('#Friday').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } $(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});});$('#Saturday').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } $(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});});$('#Sunday').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } $(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});}); | Adding click handler to multiple IDs | javascript;jquery | You should add a class to all elements, something like .day, and then use that class in your code:$('.day').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } 
$(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});});Another way would be:$('#Monday, #Tuesday, #Wendnesday').click(function() { if ($(this).hasClass(active)){ $(this).switchClass( active, inactive, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', true); } else { $(this).switchClass( inactive, active, 200, easeInOutQuad ); $(this).parent().find('.miniTime').prop('disabled', false); } $(this).find('input').prop('checked', function( foo, oldValue ) {return !oldValue});}); |
_unix.226316 | I used Luks to encrypt my whole hard drive, and created that sda drive as a physical volume. Under that volume, I create a volume group vg00 with my 3 logical volumes lv00_root, lv01_home, and lv02_swapHere is my logical volumes, contained in the volume group vg00 on physical volume /dev/sdaxubuntu@xubuntu:~$ sudo lvdisplay /dev/vg00 --- Logical volume --- LV Path /dev/vg00/lv00_root LV Name lv00_root VG Name vg00 LV UUID 9bzRlY-LWT3-YBV5-yK9U-s3yT-n8MR-B5HjAP LV Write Access read/write LV Creation host, time xubuntu, 2015-08-28 05:11:15 +0000 LV Status available # open 1 LV Size 12.00 GiB Current LE 3072 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 252:1 --- Logical volume --- LV Path /dev/vg00/lv01_home LV Name lv01_home VG Name vg00 LV UUID B9Ykg2-65Aq-fOS2-1T9I-msfW-OlLf-yMDJT5 LV Write Access read/write LV Creation host, time xubuntu, 2015-08-28 05:11:29 +0000 LV Status available # open 1 LV Size 15.00 GiB Current LE 3840 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 252:2 --- Logical volume --- LV Path /dev/vg00/lv02_swap LV Name lv02_swap VG Name vg00 LV UUID HHiMFa-D9fi-RH6B-ITN6-olQW-Fx0A-FSSzsY LV Write Access read/write LV Creation host, time xubuntu, 2015-08-28 05:11:41 +0000 LV Status available # open 0 LV Size 2.00 GiB Current LE 512 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 252:3My fstab is as follows(note I already tried putting /dev/sda UUID as an entry, now commented):# <file system> <mount point> <type> <options> <dump> <pass>#UUID=d6055580-65af-4ef0-aba5-dfcecaa0c82f none luks defaults 0 1/dev/mapper/vg00-lv00_root / ext4 errors=remount-ro 0 1# /boot was on /dev/sdb1 during installationUUID=05b8b1aa-4067-4938-ad02-72c3d7fb7331 /boot ext2 defaults $/dev/mapper/vg00-lv01_home /home ext4 defaults 0 2/dev/mapper/vg00-lv02_swap none swap sw 0 0My /etc/crypttab is as follows:roothd UUID=d6055580-65af-4ef0-aba5-dfcecaa0c82f none luksAt the bootloader unlock screen, I would like to input my one existing passphrase for /dev/sda drive and unlock the volume group vg00 which contains the all logical volumes.I did run update-initramfs -u after any change to the fstab or crypttabWhen I boot, after the grub screen I am dropped to the Ubuntu shell with: ALERT! /dev/mapper/vg00-lv00_root does not exist .I have suspicion that it is a problem with grub due to grub.cfg having its linux initramfs generated image set to the path of /dev/mapper/vg00-lv00_root. I obviously need to decrypt /dev/sda parent physical volume before grub could access this. FYI I roughly followed this and this tutorialAny help is appreciated | Encrypted boot drops to shell, how to set LVM crypttab consistently? 'ALERT! {logical volume path} does not exist' | linux;ubuntu;lvm;encryption;luks | null |
_cstheory.21534 | While discussion strong normalization proofs, this comment contrasts the normal forms model with purely syntactic methods.This brings me back to a more basic question: can we still distinguish syntactic and semantic constructions strictly, in the face of syntax-based models? What about term models for algebras, Henkin models for first-order logics? What about structural operational semantics? Since term models can be isomorphic to syntax, it seems hard to make a firm distinction.Until I studied the difference between proof theory and model theory in logic, I was even baffled by the idea that static type systems are a syntactic method. After all, a type system reasons about types, which are an abstraction of program behavior (and with dependent types, an arbitrarily precise one). | Can we distinguish strictly syntactic and semantic methods in programming language? | lo.logic;pl.programming languages;operational semantics | No, you cannot strictly distinguish syntactic from semantic methods, but the distinction still ends up making sense. Structural operational semantics is not denotational, because it is not a compositional method of giving semantics to a programming language. However, you can build denotational models out of a structural operational semantics by using a realizability or logical relations method. As an example, see Robert Harper's Constructing Type Systems over Operational Semantics.Term models are denotational, but generally semanticists are not satisfied with them. What they usually want is a category of models in which the term model is initial, which can be used to prove soundness and completeness results. (The soundness and completeness of the typed lambda calculus for cartesian closed categories is the paradigmatic example; see Alex Simpson's Categorical Completeness Results for the Simply-Typed $\lambda$-calculus for some details.)In the other direction, if you have a denotational semantics, you might want to figure out what the syntax for it is. Then you want to go and find a syntax and abstract machine whose term model can serve as an intitial object in a suitable category of models.For instance, game semantics began its life as a purely semantic construction, and eventually led to work on operational game semantics --- a recent example of which is Alexis Goyet's The lambda lambda-bar calculus: A dual calculus for unconstrained strategies. Overall, you can think of structural operational semantics as a way of specifying abstract machines, which we hope are easy to implement. A denotational semantics gives a compositional model of a language, which we hope is easy to reason about. If we have both, then we can both implement and reason about the language. Normalization theorems are an interesting ambiguous case. Usually, to prove normalization, you need a semantic model (typically a logical relation). However, once you know that normalization holds, many properties can now be proved by induction on normal forms, which is a purely syntactic argument. For weak logics (anything up to first-order logic without induction, roughly), you can prove normalization syntactically, using the technique of hereditary substitution. In these logics, the subformula property holds, and so you can prove normalization by induction on types. See Frank Pfenning's paper Structural Cut Elimination for an explanation of how this works. |
_cs.39821 | How would I go about constructing a nondeterministic 1-counter automaton for the language $L$ that is the complement of the palindromes $\overline{L}=\{ww^{rev}\}$ over a 2 symbol alphabet $\Sigma = \{0,1\}$? The definition I am using for 1-NCAs consists of:$S$, the set of states,$\Sigma$, the alphabet ($\Sigma = \{0,1\}$ in our case),$s_{0} \in S$, the initial state,$F \subseteq S \times \{\text{zero},\text{nonzero}\}$, the set of accepting states, and$\delta : S\times\{\text{zero},\text{nonzero}\}\times A \rightarrow S \times \{-1,0,1\}$, the transition function.Given a word $w = x_1x_2\ldots x_ny_1y_2\ldots y_n$, I know that if $x_i$ is not equal to $y_{n-i+1}$, the palindrome requirement fails. I have a feeling that the counter has something to do with that distance in between those letters but formalizing this is what I am stuck at. Since a 1-NCA is the same thing as a NPDA (nondeterministic pushdown automaton) with only one symbol in the stack alphabet, I would accept answers using such PDAs also. This is especially since I find very few texts discussing counter automata. | Construction of a counter automaton for the complement of the palindromes | context free;automata;pushdown automata | If you know that $L$ can not be recognised by a $k$-DCA for any $k$, this gives the hint that you need to use nondeterminism to do it with 1 counter.What we can do then is guess where the mismatch characters are, in particular, what we'll do is check that $x_{i} \neq y_{n-i+1}$ by using the counter to determine first that we're at position $i$ in the input (or rather, nondeterministically guessing that some $i$ is the right index to remember), then guessing nondeterministically that we've reached position $n-i+1$, checking that the two symbols at those positions are different, then counting down to check that we actually were at the right spot (i.e. the counter will have to be back to zero exactly when we reach the end of the input).One version of this looks like:So $s_{0}$ counts up until the symbol just before the one we're interested in. We nondeterministically guess to stop and record what symbol we see by moving to a different state, then $s_{1}$ and $s_{2}$ basically ignore the rest of the input until we guess we've reached the $(n-i+1)^{th}$ symbol, at which point we make sure it's different to the one at $i$, then $s_{3}$ counts down for the rest of the input. If there's no input left when the counter hits zero, we can move to $s_{4}$ and accept. If we move to $s_{4}$ too early, we can't accept because there's still input to be processed (so you could take some of the nondeterminism out by putting in an explicit sink state etc.), if we guessed to early which one was the mismatched symbol, then we can't reduce the counter to zero, so we can't get to $s_{4}$. |
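A brute-force check of the nondeterministic guesses described above can make the construction concrete. The sketch below is mine, not the automaton itself: it tries every choice of the first position i, and the counter in the construction is exactly what forces the second guessed position to be n-i+1.

```python
# Brute-force simulation of the nondeterministic guesses: guess an index i,
# remember the symbol there, and check that the symbol at the mirrored
# position n-i+1 differs.  Acceptance means some run of guesses succeeds.
def mismatch_accepts(word):
    n = len(word)
    for i in range(1, n + 1):  # 1-based position of the first guessed symbol
        j = n - i + 1          # mirrored position the counter enforces
        if j > i and word[i - 1] != word[j - 1]:
            return True
    return False

for w in ["0110", "0101", "1001", "10", "01"]:
    print(w, mismatch_accepts(w))
# "0110" and "1001" have the form w·w^rev, so no guess succeeds;
# "0101", "10" and "01" contain a mismatched mirrored pair and are accepted.
```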
_vi.3115 | I have a file with a bunch of user defaults in. I want to change some of the text, but I'm struggling coming up with a matcher and replacer. Using the following example:################################################################################ Trackpad, mouse, keyboard, Bluetooth accessories, and input ################################################################################# Trackpad: enable tap to click for this user and for the login screendefaults write com.apple.driver.AppleBluetoothMultitouch.trackpad Clicking -bool trueI'd like to replace # Trackpad: ... with running Trackpad: ...Breaking the problem down, I came up with something using a regex tester:/\n\n#\s(.*)/gIf I try and use this in Vim it doesn't work for me::/\n\n#\s(.*)/running \1/gI guess my problem boils down to two specific questions:How can I avoid searching for \n characters, and instead make sure # doesn't appear at the end of the search group?How can I effectively use capture groups?There are some great answers below. Hard to choose between all three, however I feel the chosen answer is the most accurate for my original spec. I recommend you try all three answers with the actual file to see how you feel about them. | Find and replace using regular expressions | regular expression;substitute | Just to be clear I believe you asked for this to be the result of the substitution?################################################################################ Trackpad, mouse, keyboard, Bluetooth accessories, and input ################################################################################running Trackpad: enable tap to click for this user and for the login screendefaults write com.apple.driver.AppleBluetoothMultitouch.trackpad Clicking -bool trueIn that case, I recommend the following command::%s/\n\n#\s+(.*)/^M^Mrunning \1/Explanation of the pattern:s/PATTERN/REPLACEMENT/ is the substitute command. The percent sign in :%s makes it work on the whole file, rather than just the current line.The \n\n says that the line of interest must occur after a blank line. If you didn't care about the preceding blank line, then ^ would suffice.#\s\+ matches a hash character followed by one or more whitespace characters. \(.*\) captures all subsequent text on the line.Explanation of the replacement text^M^M inserts two ends of lines to replace the \n\n that were present in the pattern. Otherwise, the text would get moved to the end of the line preceding the blank line. To type each ^M, press Ctrl-V Ctrl-M.Then, insert the string running, followed by whatever was captured in the parentheses within double-quotes. |
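To make the pattern and replacement logic explicit outside Vim, here is an illustrative Python re equivalent (my sketch, not part of the accepted answer): a blank line followed by a line starting with "# " has the "# " replaced by "running " while the blank line is kept.

```python
# The same substitution expressed with Python's re module, as an illustration
# of the pattern's logic.  The sample text is a shortened version of the file
# from the question.
import re

text = """\
###############################################################################
# Trackpad, mouse, keyboard, Bluetooth accessories, and input
###############################################################################

# Trackpad: enable tap to click for this user and for the login screen
defaults write com.apple.driver.AppleBluetoothMultitouch.trackpad Clicking -bool true
"""

result = re.sub(r"\n\n#\s+(.*)", r"\n\nrunning \1", text)
print(result)
```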
_datascience.19199 | I want to solve the problem of finding a parameter vector for an image filter (let us assume we know nothing about how the filter works, but we can feed it an input image and a set of parameters to produce an output image).Thus, having a set $\{{I_k, J_k:=F_{\alpha}(I_k)}\}_{k\in\overline{1,N}}$ of $I_k$ images together with their filtered counterparts, $J_k$, what solutions would you recommend for finding $\alpha^\ast$ such that given $I^\ast$ the result $F_{\alpha^\ast}(I^\ast)$ is in the same style as the one of the $N$ training correspondence pairs.I suppose one option is to use a convnet to transform $I_k$ into a feature vector, $v_k$, and then concatenate $\alpha_k$ to obtain $u_k =(v_k,\alpha_k)$. Once this is done, use a regression method to estimate the $\alpha^\ast$ part of $u^\ast$.I would like to find an alternative solution to what seems like a candidate for the style transfer approach (e.g. https://arxiv.org/pdf/1703.07511.pdf). That approach seems to solve the problem differently, and I envision situations where I need to simply use a filter rather than let a network guess the style of that filter.Additional details and possible assumptionsGiven the invoked no free lunch prospects, let us assume, for a paeticular problem from this class, that $F$ is a non-linear kernel-based filter that maps $I$ to $J$ as a result of an iterative and convergent process. More specifically, let $F$ be a mean shift filter with the $\alpha=(\rho, \sigma_s,\sigma_r)$ using a concatenated Gaussian kernel and a Parzen window of size $\rho$. Intuitively, I would be tempted to guess that this filter is not smooth w.r.t. $\alpha$, but a formal investigation is required (I suspect it is not smooth given that infinitesimal changes in the size of the window could shift the output towards another mode, indicating a step function behaviour).In general, it is correct to assume that $\alpha \in \mathbb{R}^d$, with $d \ll N$. Given the goal of finding $\alpha$ when both the filter action is known (either via numerical computation in general, or, in closed-form if the filter is a gaussian blur, for example), we can be confident that the $N$ input samples have non-constant $\alpha_k$ vector values to start with.But for sake of generalizability, it would be more elegant to pursue a solution that does not need to know how the filter operates without actually applying it to an input. The first approach suggested in the comments and based on convnets seems to fit this scenario and the optimization problem is taking into account the filter error. However, it would be interesting to hear more opinions, perhaps involving shallow approaches, even at the expense of designing the solution to address the concrete mean shift filter example from above. | Finding parameters of image filter using classified pairs | regression;convnet;supervised learning;parameter estimation | Your parameter $\alpha$ has fairly low dimension. Therefore, I recommend that you apply optimization methods directly to try to find the best $\alpha$ (without trying to use convolutional neural networks and regression for this purpose).Define a distance measure on images, $\|I-J\|$, to represent how dissimilar images $I,J$ are. 
You might use the squared $L_2$ norm for this, for instance.

Now, the loss for a particular parameter choice $\alpha$ is
$$L(\alpha) = \sum_{k=1}^N \|F_\alpha(I_k)-J_k\|.$$
We can now formulate your problem as follows: given a training set of images $(I_k,J_k)$, find the parameter $\alpha$ that minimizes the loss $L(\alpha)$.

A reasonable approach is to use some optimization procedure to solve this problem. You might use stochastic gradient descent, for instance. Because there might be multiple local minima, I would suggest that you start multiple instances of gradient descent from different starting points: use a grid search over the starting point. Since your $\alpha$ is only three-dimensional, it's not difficult to do a grid search over this three-dimensional space and then start gradient descent from each point in the grid. Stochastic gradient descent will allow you to deal with fairly large values of $N$.

This does require you to be able to compute gradients for $L(\alpha)$. Depending on the filter $F_\alpha$, it might be possible to symbolically calculate the gradients (perhaps with the help of a framework for this, such as Tensorflow); if that's too hard, you can use black-box methods to estimate the gradient by evaluating $L(\cdot)$ at multiple points.

If $L_2$ distance doesn't capture similarity in your domain, you could consider other distance measures as well.

I expect this is likely to be a more promising approach than what you sketched in the question, using convolutional networks and a regression model. (For one thing, there's no reason to expect the mapping from features of $I_k$ to features of $J_k$ to be linear, so there's no reason to expect linear regression to be effective here.) |
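A minimal sketch of this recipe, assuming a placeholder filter and synthetic training pairs: a coarse grid of starting points followed by a derivative-free local search (Nelder-Mead rather than SGD, which avoids needing gradients if $F_\alpha$ is not smooth in $\alpha$). The name apply_filter and the data are assumptions for illustration, not the poster's actual filter.

```python
# Sketch: grid of starting points + local optimisation of L(alpha).
import itertools
import numpy as np
from scipy.optimize import minimize

def apply_filter(image, alpha):
    # Placeholder standing in for the real black-box filter F_alpha
    # (e.g. mean shift with alpha = (rho, sigma_s, sigma_r)).
    rho, sigma_s, sigma_r = alpha
    return sigma_s * image + sigma_r * image ** 2 + 0.01 * rho

def loss(alpha, pairs):
    # L(alpha): sum over training pairs of the squared L2 distance.
    return sum(np.sum((apply_filter(I, alpha) - J) ** 2) for I, J in pairs)

# Synthetic training pairs generated from a known parameter vector.
rng = np.random.default_rng(0)
true_alpha = np.array([3.0, 0.8, 0.1])
pairs = [(img, apply_filter(img, true_alpha))
         for img in (rng.random((32, 32)) for _ in range(8))]

# Coarse grid of starting points over the 3-d parameter space, then a
# derivative-free local search from each one; keep the best result.
starts = itertools.product([1.0, 5.0], [0.5, 1.5], [0.0, 0.5])
best = min((minimize(loss, np.array(s), args=(pairs,), method="Nelder-Mead")
            for s in starts),
           key=lambda res: res.fun)
print("estimated alpha:", best.x, "loss:", best.fun)
```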
_unix.23901 | I have a cPanel server I want to install some packages on. I connected to the server as root and ran the following commands to update and delete all the existing repositories:yum updateyum clean allrm -f /etc/yum.repos.d/*rm -rf /var/cache/yum/*My last command was to install the PDO PHP module, but that's when I got an error:root@linux [~/TMP]# pecl install pdoWARNING: pecl/PDO is deprecated in favor of channel://http://svn.php.net/viewvc/php/php-src/trunk/ext/pdo//ext/PDOdownloading PDO-1.0.3.tgz ...Starting to download PDO-1.0.3.tgz (52,613 bytes).............done: 52,613 bytes12 source files, buildingrunning: phpizeConfiguring for:PHP Api Version: 20041225Zend Module Api No: 20060613Zend Extension Api No: 220060519building in /var/tmp/pear-build-root/PDO-1.0.3running: /root/tmp/pear/PDO/configurechecking for grep that handles long lines and -e... /bin/grepchecking for egrep... /bin/grep -Echecking for a sed that does not truncate output... /bin/sedchecking for cc... ccchecking for C compiler default output file name... a.outchecking whether the C compiler works... configure: error: in `/var/tmp/pear-build-root/PDO-1.0.3':configure: error: cannot run C compiled programs.If you meant to cross compile, use `--host'.See `config.log' for more details.ERROR: `/root/tmp/pear/PDO/configure' failedroot@linux [~/TMP]# pecl install bdoNo releases available for package pecl.php.net/bdoinstall failedWhat causes this error? How can I fix it?Edit: Also, when I run yum install php*, after checking dependencies I get this:--> Finished Dependency ResolutionError: Package: rrdtool-php-1.3.8-6.el6.x86_64 (base) Requires: php(zend-abi) = 20090626Edit: I uploaded my config.log and config.status files | Cannot run C compiled programs error when installing PDO PHP module | centos;yum;repository | null |
_codereview.147225 | I have created an hex viewer in python, as a timed-challenge by friend. My implementation was in the form of a class, with a separate main file running with argparse to allow choosing the file (runs with a demo file by default).I was pretty satisfied with the final result. However, I have used many list comprehensions and mapping to cut up times. How can I improve the code or considerate styling standards? Any other advice regarding the code or the functionality?The code is divided into 3 files, first one is general utils for the task, second is the main class and the third is the runner:gen.pyimport stringdef hexa (num, fill = 2): return hex(num)[2:].lower().zfill(fill)def bina (num, fill = 8): return bin(num)[2:].zfill(fill)def chunks (arr, size = 1): return [arr[i: i+size] for i in range(0, len(arr), size)]def lmap (func, iterable): return list(map(func, iterable))hex_digits_chunks = chunks(lmap(hexa, range(16)), 4)printable_ascii = lmap(ord, string.digits + string.ascii_letters + string.punctuation)hexview.pyfrom gen import *class HexViewer (): def __init__ (self, file): self.data = open(file, 'rb').read() self.hex_data = lmap(hexa, self.data) self.hex_chunks = chunks(chunks(self.hex_data, 4), 4) self.ascii_data = [(chr(int(byte, 16)) if int(byte, 16) in printable_ascii else '.') for byte in self.hex_data] self.ascii_chunks = chunks(self.ascii_data, 16) self.rows = len(self.hex_chunks) self.addresses = lmap(lambda o: hexa(o * 16, 8), range(0, self.rows)) def __str__ (self): table_format = ' {:<15}{:<60}{:<20}\n' str_rep = '' str_rep += table_format.format( 'address'.upper(), ' '.join(' '.join(x) for x in hex_digits_chunks), 'ascii'.upper()) str_rep += '\n' for i in range(self.rows): str_rep += table_format.format( self.addresses[i], ' '.join(' '.join(x) for x in self.hex_chunks[i]), ''.join(self.ascii_chunks[i])) return str_repmain.pyimport tracebackimport argparsefrom gen import *from hexview import *try: parser = argparse.ArgumentParser(description='Hexadeciaml viewer.') parser.add_argument('file', type=str, nargs='?', default='demo.exe', help='the file to process') args = parser.parse_args() print('\n\n') print(HexViewer(args.file))except SystemExit: passexcept: traceback.print_exc()Demo:C:\...\Hexed> main.pyADDRESS 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f ASCII00000000 4d 5a 90 00 03 00 00 00 04 00 00 00 ff ff 00 00 MZ..............00000010 b8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00 [email protected] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................00000030 00 00 00 00 00 00 00 00 00 00 00 00 80 00 00 00 ................00000040 0e 1f ba 0e 00 b4 09 cd 21 b8 01 4c cd 21 54 68 ........!..L.!Th00000050 69 73 20 70 72 6f 67 72 61 6d 20 63 61 6e 6e 6f is.program.canno00000060 74 20 62 65 20 72 75 6e 20 69 6e 20 44 4f 53 20 t.be.run.in.DOS.00000070 6d 6f 64 65 2e 0d 0d 0a 24 00 00 00 00 00 00 00 mode....$.......00000080 50 45 00 00 4c 01 02 00 00 00 00 00 00 00 00 00 PE..L.............. and you got the idea ... | Python Hex Viewer | python;parsing;file;console | General commentsThis code is way too complicated for what it is achieving. It's core is located withing Hexviewer.__str__ with a little bit of preprocessing around. There is no need for a class here, a simple function with tiny helpers should suffice.Also, even though separating concerns between files is a good thing for complex projects, I find it adds complexity for such small task. 
You also fall into the bad habit of using from <xxx> import * to try and avoid this complexity.Lastly, I have a hard time understanding the logic behing your exceptions handling. Nothing in your code explicitly generate a SystemExit so you can drop this except clause. Especially if you plan on doing nothing and exit anyway And the bare except to just print the exception is just useless as it is the default behaviour anyway.Utilitiesgen.py is a terrible name for a file holding general utilities functions; as gen mostly associates to generate/generation. utils.py is more common in the Python's world.In this file, bina is never used, lmap should be replaced by list-comprehensions at the calling point, and chunks very look like the itertools recipe grouper.Also, printable_ascii being a list is a poor choice knowing that it will be used for existence checking. You should at least use a set instead, but I would favor using two constants BEGIN_PRINTABLE = 33 and END_PRINTABLE = 126 as all these characters are contiguous in the ASCII table.File processingFirst off, you open the file but never close it: time to get yourself familiar with the with statement.Second, instead of pre-processing the file at once and printing it later, you could read it by blocks of 16 bytes and process them before going to the next block. It will save a lot of memory and allow you to view very large files.Third, instead of building the whole string at once and returning it, you can yield each block of processed 16 bytes and let the caller be responsible of iterating over them to perform the desired operation (or feed them to '\n'.join for what it's worth).Fourth, you don't necessarily need to use hex or chr to convert integers to characters before formatting them: format specifiers x and c for integers can perform the same operations. And you can also mix them with 0>? where ? is an integer to perform the role of zfill. Example:>>> '{:0>4x}'.format(23)'0017'>>> '{:c}'.format(102)'f'Proposed improvementsimport itertoolsimport argparseBEGIN_PRINTABLES = 33END_PRINTABLES = 126def hex_group_formatter(iterable): chunks = [iter(iterable)] * 4 return ' '.join( ' '.join(format(x, '0>2x') for x in chunk) for chunk in itertools.zip_longest(*chunks, fillvalue=0))def ascii_group_formatter(iterable): return ''.join( chr(x) if BEGIN_PRINTABLES <= x <= END_PRINTABLES else '.' for x in iterable)def hex_viewer(filename, chunk_size=16): header = hex_group_formatter(range(chunk_size)) yield 'ADDRESS {:<53} ASCII'.format(header) yield '' template = '{:0>8x} {:<53} {}' with open(filename, 'rb') as stream: for chunk_count in itertools.count(1): chunk = stream.read(chunk_size) if not chunk: return yield template.format( chunk_count * chunk_size, hex_group_formatter(chunk), ascii_group_formatter(chunk))if __name__ == '__main__': parser = argparse.ArgumentParser(description='Hexadeciaml viewer.') parser.add_argument('file', nargs='?', default='demo.exe', help='the file to process') args = parser.parse_args() print('\n\n') for line in hex_viewer(args.file): print(line)You may also want to replace the magic number 53 with something that depend of chunk_size. Given the implementation of hex_group_formatter, it should be math.ceil(chunk_size/4) * 14 - 3. |
_unix.61602 | I run my local testing server inside a VM. I often have to open multiple terminal windows in it, and all of them have to be logged in as root. Usually this is accomplished by typing su in every tab, a total of 7-8 times. Is there a way to enter the root password only once, so that the next terminal tab/window I open is already logged in as root? Sort of like the way the current working directory is maintained? | How to start subsequent shells as root? | bash;ubuntu;root;autologin | null
_softwareengineering.35491 | Is there a resource or site which illustrates building the same application (desktop or web) using several different contrasting architectures? Such as MVP versus MVVM versus MVC, etc. It would be very helpful to see how they look side-by-side using real-world code instead of comparing written theory to written theory. I've often found that something can be described well in a book, but when you go to implement it, the subtleties and weaknesses of the theory become readily apparent. | Examples of different architecture methodologies | architecture;comparison;mvc;mvvm | The caveat with this approach is that certain frameworks lend themselves to a given architecture. WPF and SL for instance lend themselves to an MVVM pattern due to the binding nature of the technologies.Since no specific technology was mentioned you could take a look at this thesis which attempts to compare/contrast MVC/MVP/MVVM using Silverlight.EDIT:I know you don't want theory but this blog post is beneficial in understanding the differences across the above mentioned architectures...MVC/MVP/MVVM. |
_softwareengineering.254967 | I am writing a GtkGrid-like container for my GUI library for Go, and I'm trying to write the actual layout part of the code.Basically, I have an unordered list of controls. Each control is a rectangle, and each list entry contains a pointer to the control to its north, south, east, and west. What I don't know is how to convert this back into a grid efficiently.At first I thought I could use the top-left control to mark (0, 0) on the grid. But what's the top-left of[ ][ ][X][ ][X][X][X][X][X]Any other solution I can think of runs in at least O(n^k) time, since I need to traverse the list multiple times to figure out what each row is, etc.So what other approach can I use? Thanks.The algorithm will need to be able to start from any control in the list. (Go programmers: the reason for this is that the list is a map.)I can say that there will be no freestanding controls (all controls must be attached to another control, but can be attached on any side at any time). (Except, of course, if there is only one control at all, but I imagine this scenario falling out of the solution.)(I hope this is in the right subdomain...) | I have an unordered list of rectangles and their neighbors on four sides with no origin. How can I efficiently convert this into a grid? | algorithms;data structures;go | Are you guaranteed to get to any rectangle from any other? (are there any gaps?) I'm presuming so, otherwise you can't determine what the relative coordinates are for the map. In that case, pick one arbitrarily and make that 0,0. Then it becomes a simple matter of walking the graph and assigning coordinates. If you require no negatives, then you'll need a second pass to shift everything +x/+y, but it should be O(n) at worst. |
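To illustrate the graph walk this answer describes, here is a short sketch (written in Python as neutral pseudocode rather than the asker's Go; the north/south/east/west attribute names are placeholders for however the container actually stores its neighbour pointers). Each control is visited once and each of its four links is inspected once, so the walk plus the final shifting pass stays O(n).

from collections import deque

def assign_grid_coordinates(start):
    # Breadth-first walk over the neighbour links; the arbitrarily chosen
    # start control becomes (0, 0). North/south move along -y/+y here.
    coords = {start: (0, 0)}
    queue = deque([start])
    offsets = {"north": (0, -1), "south": (0, 1), "west": (-1, 0), "east": (1, 0)}
    while queue:
        control = queue.popleft()
        x, y = coords[control]
        for side, (dx, dy) in offsets.items():
            neighbour = getattr(control, side)
            if neighbour is not None and neighbour not in coords:
                coords[neighbour] = (x + dx, y + dy)
                queue.append(neighbour)
    # Second pass: shift everything so no coordinate is negative.
    min_x = min(x for x, _ in coords.values())
    min_y = min(y for _, y in coords.values())
    return {c: (x - min_x, y - min_y) for c, (x, y) in coords.items()}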
_unix.360518 | I have a laptop connected to a external monitor. I want to configure the display settings automatically when the external monitor is connected or disconnected. So, i understand that this can be accomplished using udev events and a bash script using xrandr.This is the bash script#!/bin/shset -eVGA_STATUS=$(</sys/class/drm/card0/card0-VGA-1/status)if [ connected == $VGA_STATUS ]; then /usr/bin/xrandr --output LVDS1 --noprimary --auto /usr/bin/xrandr --output VGA1 --primary --mode 1920x1080 --left-of LVDS1 echo VGA plugged in >> /home/my_username/monitor.logelse /usr/bin/xrandr --output VGA1 --off /usr/bin/xrandr --output LVDS1 --primary --auto echo External monitor disconnected >> /home/my_username/monitor.log exitfiThis is the udev ruleKERNEL==card0, SUBSYSTEM==drm, ACTION==change, ENV{DISPLAY}=:0, ENV{XAUTHORITY}=/run/user/1000/gdm/Xauthority, RUN+=/bin/bash /path/to/my/script.shI tested the bash script and it works in both cases. The problem i have is that the udev rule only works when the monitor gets disconnected but not in the other case. The log file shows the External monitor disconnected message but it doesn't show the VGA plugged in message.This is the output of xrandr -q when i disconnect the external monitor:Screen 0: minimum 8 x 8, current 1366 x 768, maximum 32767 x 32767LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 310mm x 170mm 1366x768 59.98*+ 1024x768 60.00 1024x576 60.00 960x540 60.00 800x600 60.32 56.25 864x486 60.00 640x480 59.94 720x405 60.00 680x384 60.00 640x360 60.00 DP1 disconnected (normal left inverted right x axis y axis)DP2 disconnected (normal left inverted right x axis y axis)DP3 disconnected (normal left inverted right x axis y axis)HDMI1 disconnected (normal left inverted right x axis y axis)HDMI2 disconnected (normal left inverted right x axis y axis)HDMI3 disconnected (normal left inverted right x axis y axis)VGA1 disconnected (normal left inverted right x axis y axis)VIRTUAL1 disconnected (normal left inverted right x axis y axis)This is the output of xrandr -q after the external monitor is connected:Screen 0: minimum 8 x 8, current 1366 x 768, maximum 32767 x 32767LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 310mm x 170mm 1366x768 59.98*+ 1024x768 60.00 1024x576 60.00 960x540 60.00 800x600 60.32 56.25 864x486 60.00 640x480 59.94 720x405 60.00 680x384 60.00 640x360 60.00 DP1 disconnected (normal left inverted right x axis y axis)DP2 disconnected (normal left inverted right x axis y axis)DP3 disconnected (normal left inverted right x axis y axis)HDMI1 disconnected (normal left inverted right x axis y axis)HDMI2 disconnected (normal left inverted right x axis y axis)HDMI3 disconnected (normal left inverted right x axis y axis)VGA1 connected (normal left inverted right x axis y axis) 1920x1080 60.00 + 1600x900 60.00 1280x1024 75.02 60.02 1152x864 75.00 1024x768 75.03 60.00 800x600 75.00 60.32 640x480 75.00 59.94 720x400 70.08 VIRTUAL1 disconnected (normal left inverted right x axis y axis)This is the output of xrandr -q after the external monitor is connected and i run the bash script manually:Screen 0: minimum 8 x 8, current 3286 x 1080, maximum 32767 x 32767LVDS1 connected 1366x768+1920+0 (normal left inverted right x axis y axis) 310mm x 170mm 1366x768 59.98*+ 1024x768 60.00 1024x576 60.00 960x540 60.00 800x600 60.32 56.25 864x486 60.00 640x480 59.94 720x405 60.00 680x384 60.00 640x360 60.00 DP1 disconnected (normal left inverted right x axis y axis)DP2 disconnected (normal left 
inverted right x axis y axis)DP3 disconnected (normal left inverted right x axis y axis)HDMI1 disconnected (normal left inverted right x axis y axis)HDMI2 disconnected (normal left inverted right x axis y axis)HDMI3 disconnected (normal left inverted right x axis y axis)VGA1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 530mm x 300mm 1920x1080 60.00*+ 1600x900 60.00 1280x1024 75.02 60.02 1152x864 75.00 1024x768 75.03 60.00 800x600 75.00 60.32 640x480 75.00 59.94 720x400 70.08 VIRTUAL1 disconnected (normal left inverted right x axis y axis)What i'm doing wrong? Any help is apreciated.By the way i'm in Fedora 25 using i3.Thanks in advance | Udev rule for multi monitor hotplug | x11;udev;xrandr | null |
_codereview.171164 | Q1. Can this code be simplified/ made more efficient? Q2. Why does the following code have to be indented in the way that it is?used.append(guess) and so_far = newQ3. Out of interest, is there a way to print the hangman figure once and just replace the previous figure, to avoid mass text/ reloading in the Python Shell?The Following code has been slightly adapted from Python Programming for the absolute beginner - Hangman Game#Python 3.4.3, MAC OSX (Latest)#Hangman Gameimport randomfrom time import sleepHANGMAN = ( ------ | | | | | | | | |----------, ------ | | | O | | | | | |----------, ------ | | | O | -+- | | | | | ----------, ------ | | | O | /-+- | | | | | ----------, ------ | | | O | /-+-/ | | | | | ----------, ------ | | | O | /-+-/ | | | | | | ----------, ------ | | | O | /-+-/ | | | | | | | | | ----------, ------ | | | O | /-+-/ | | | | | | | | | | | ----------)WORDS = (APPLE, ORACLE, MIMO, TESLA)word = random.choice(WORDS)POSITIVE_SAYINGS = (Well done!, Awesome!, You Legend!)MAX_WRONG = len(word) - 1so_far = (-) * len(word)used = []wrong = 0print(\t \t Welcome to Hangman!)print()input(Press Enter to START: )while wrong < MAX_WRONG and so_far != word: print() print(HANGMAN[wrong]) print(Word so far: , so_far) print(Letters used: , used) guess = input(Guess a letter: ).upper() sleep(1) # Time delay - allows userfriendly reading print() while guess in used: print(Try again... You've already used this letter) guess = input(Guess a letter: ).upper() sleep(1) print() used.append(guess) if guess in word: print(random.choice(POSITIVE_SAYINGS),...Updating word so far...) new = for i in range(len(word)): if guess == word[i]: new += guess else: new += so_far[i] so_far = new else: print(INCORRECT! Try again!) wrong += 1print(Calculating result...)sleep(1)if wrong == MAX_WRONG: print(UNLUCKY! Better luck next time!)else: print(WINNER! Congratulations!)print()print()input(Press Enter to Leave: ) | Python Simple Hangman Game | python;beginner;python 3.x;hangman | This sure looks like a fun game!So far, it's pretty well constructed, so I just have some suggestions:To answer your questions...Q1. Can this code be simplified/ made more efficient?Yes. I recommend putting it in a class for simplicity. As far as efficiency goes, the largest running time is O(n), so it is already efficient. It might be possible to simplify it to allow the max running time to be O(log n), but I don't know how.Class version:# Whenever possible, only import specific functions from the libraryfrom random import choicefrom time import sleepclass Hangman: A Hangman class. _HANGMAN = ( ------ | | | | | | | | | ---------- , ------ | | | O | | | | | | ---------- , ------ | | | O | -+- | | | | | ---------- , ------ | | | O | /-+- | | | | | ---------- , ------ | | | O | /-+-/ | | | | | ---------- , ------ | | | O | /-+-/ | | | | | | ---------- , ------ | | | O | /-+-/ | | | | | | | | | ---------- , ------ | | | O | /-+-/ | | | | | | | | | | | ---------- ) _WORDS = (APPLE, ORACLE, MIMO, TESLA) _POSITIVE_SAYINGS = (Well done!, Awesome!, You Legend!) def __init__(self): The Python constructor for this class. self._word = choice(self._WORDS) self._so_far = - * len(self._word) self._used = [] self._wrong_answers = 0 def play(self): This is the main driver of the game. Plays the game. self._reset_game() self._start_game() # The amount of incorrect answers should be no greater than the length # of HANGMAN. # # Use the length of HANGMAN to ensure there's no index # overflow error when printing current progress. 
while self._wrong_answers < len(self._HANGMAN) and self._so_far != self._word: self._print_current_progress() guess = self._user_guess() self._check_answer(guess) self._print_result() self._play_again() # --------------------------------- # Private methods def _check_answer(self, guess): Checks to see if the user's guess is correct. :param guess: User's guess if guess in self._word: print(choice(self._POSITIVE_SAYINGS), ...Updating word so far...) for i in range(len(self._word)): if guess == self._word[i]: # so_far is spliced this way: # so_far [from the start : up until, but not including the # position of the correctly guessed letter] # + guessed letter # + so_far [from the position next to the # correctly guessed letter : to the end] self._so_far = self._so_far[:i] + guess + self._so_far[i+1:] else: print(INCORRECT! Try again!) self._wrong_answers += 1 def _play_again(self): Asks the user if he or she would like to play again. If the user wants to play again, calls play(). Otherwise, thanks the user for playing. print(Would you like to play again?) user_input = input(Enter Y for yes or N for no: ).upper() if user_input == Y: self.play() else: print() print(Thank you for playing!) def _print_current_progress(self): Prints the current progress of the game. print() print(self._HANGMAN[self._wrong_answers]) print(Word so far: , self._so_far) print(Letters used: , sorted(self._used)) def _print_result(self): Prints the result (win or lose). sleep(1) print() print(Calculating result...) sleep(1) print() if self._wrong_answers == len(self._HANGMAN): print(UNLUCKY! Better luck next time!) else: print(WINNER! Congratulations!) def _reset_game(self): Resets the game by calling the constructor. self.__init__() def _start_game(self): Starts the game by printing an introduction and asks the user to hit the ENTER key to continue. print() print(\t\tWelcome to Hangman!) print() input(Press Enter to START:) def _user_guess(self): Asks for the user to guess a letter. :returns: User's guessed letter. guess = input(Guess a letter: ).upper() sleep(1) # Time delay - allows user friendly reading print() while guess in self._used: print(Try again... You've already used this letter) guess = input(Guess a letter: ).upper() sleep(1) print() self._used.append(guess) return guessif __name__ == '__main__': game = Hangman() game.play()This is also available on secret GistIf you want to just focus on the method part, you can see the method version by clicking on my Github Gist linkQ2. Why does the following code have to be indented in the way that it is?used.append(guess) andso_far = newFrom the code in your question, used.append(guess) needs to be outside of the while loop or the guessed letter will be added to the used list repeatedly as long as the while condition is true.so_far = new needs to be outside of the for loop for the same reason. In my suggested code, you will have no need to use the new variable because string splicing is used instead.Q3. Out of interest, is there a way to print the hangman figure once and just replace the previous figure, to avoid mass text/ reloading in the Python Shell?Currently, I don't know. I'll keep it in my mind and let you know if I find a way.Update:In one of your comments below, you asked if the code above was OOP. The answer is kind-of, but no. I didn't define a model or the logic in a separate entity. However, I did create a Gist link that is the OOP version of the code. You can click on this Gist link to see the OOP version. |
_unix.294976 | I have LMDE 2 and I'm wondering if there is any practical way to sync my comic books over to my iPad Mini running iOS 9 so I can read them on YACReader or a similar app. I've messed around with ifuse and libimobiledevice but I can't get the iPad documents folder to mount. Any suggestions?Also I'm taking this laptop/iPad with me on deployment so network-related options aren't viable. | How to sync comic books from LMDE 2 to iPad mini | linux mint;lmde;ios | null |
_unix.87182 | I was wondering, is there any differences between Debian Standard and GNOME versions? Isn't Debian under GNOME by default? | What's the difference between Debian Standard and Gnome? | debian;gnome;standard | There should really be a page somewhere explaining the heterogeneous nature of GNU/Linux for people coming from monolithic mainstream OS's like windows or OSX (maybe there is already and I don't know about it). My point with heterogeneous vs. monolithic is that while windows and OSX are both essentially gigantic, singular pieces of integrated software, linux is a collection of pieces and often one piece can be interchanged with a different, parallel piece. Thus the final product varies a great deal; it is easy to end up with a system that may be completely unrecognizable to another linux user.Although linux is fine colloquially (that's how I was using it above), the proper name of the OS is actually GNU/Linux because linux is just the kernel (below, I use small l linux in the colloquial sense and capital L Linux to refer to just the kernel). The fundamental userspace (native libraries, common unix tools) is a completely separate project usable with various unix-like kernels, including Linux, although Linux is by far the most popular one.Both GNU and Linux are publicly distributed as source code. However, that's not much good to most people unless it is compiled into binary executable form. Because that is a complex task, various pre-compiled GNU/Linux distributions exist, of which Debian is one. Point being, Debian doesn't actually write most of the software in the distribution -- the GNU and Linux crew did.Distributions generally contain a lot more software than the kernel and fundamental userspace, however. For example, the basic layer of the graphical desktop used on linux is the Xorg server. Xorg is another independent organization, and X is also used on other (unix-like) operating systems. X itself is a sort of minimal, behind-the-scenes entity from a user perspective. It does not provide snazzy widget sets, etc; these come from a window manager (WM) and, optionally, a desktop environment (DE).There are a variety of DE's available for use with X on linux. GNOME is one of them, and it is the default used by Debian. GNOME is also a GNU project. Note that you don't have to use GNOME with Debian, you could also use one of the other available DEs (and/or WMs).So to answer your question more specifically:Isn't Debian under Gnome by default ?No. Debian is an independent organization, and Gnome is a project maintained by GNU, a separate independent organization. Your version of Gnome was compiled from the GNU source code by Debian. |
_computergraphics.3891 | Do current WebGL capabilities allow the use of clustered lighting? If not, what is missing to allow that? Basically WebGL uses GLES 2, so is that possible with GLES 2? I want to implement this one: Clustered Shading. And use this one as a fallback: Alternative to Clustered shading | Can I Implement Clustered Lighting with WebGL? | rendering;lighting;webgl | null
_softwareengineering.92575 | The big project I'm working on for a couple years now is a control (and everything) application of an advanced device, heart of its firmware.The device is quite advanced, with more different functionalities than I could say from memory, and 98% of them are handled by this one huge executable. In one hand, the program is quite maintainable, well modularized inside, properly documented, there's a reasonable separation of functionalities by directories and files and so on. But in the end it gets all clustered into one application that does everything from remote database communication, touchscreen handling, handling a dozen various communication protocols, measurements, several control algorithms, video capture, sunrise time and date of easter (seriously, and they are needed for very serious purposes!)... In general, stuff that is very thinly related, often related only through some data that trickles between some far modules.It could be done as several separate executables communicating with each other, say, over sockets, with more specific purpose, maybe loaded/unloaded as needed, and so on. No specific reason why it is made this way.In one hand, it works, and it works okay. The project is more simple, without maintaining build of multiple binaries. The internal structure is easier too, when you can just call a method or read a variable instead of talking over sockets or shared memory. But in the other hand, the size, the scale of this thing just creeps me out, it feels like piloting Titanic. I was always taught to modularize, and clumping everything into one gargantuan file feels wrong. One problem I know is a heavy crash of one (even insignificant) module crashes all - but code quality assures this doesn't really happen in release versions. Otherwise, internal separation and defensive programming assures this will still run mostly correctly even if half of the internal modules fail normally for some reason.What other dangers did I overlook? Why does this creep me out? Is this just irrational fear of unknown? Is making serious big projects this way an accepted practice? Either calm my fears or give me a good reason to refactor version 2.0 into multiple smaller binaries. | Dangers of huge monolithic application | architecture;project;scalability;risk assesment | Except for the tiny comment at the end (second, supervising CPU) you could be describing my company. Yup, we need Easter too.Well, we're a bit further along. We did split the big executable and tried to use standard components for standard bits. Not exactly the big improvement you'd hope for. In fact, performance is becoming a major pain even on beefier hardware. And maintenance costs haven't really gone down now that there's tons of code to serialize and synchronize data over narrow interfaces.The lesson I've learned? Having a single executable is a well-proven solution for small systems, and we have decades of experience in managing it. All our tools support that natively. Modularity can be done within a single executable, too, and when you need to compromise on modularity for other reasons the hack remains small. |
_unix.237449 | I have some shared folders in my network that I would like to have mounted in my Raspberry Pi 2 once it boots. I have edited my /etc/fstab and if I manually run mount -a, I manage to have access to them. I have found different solutions around the internet, but none seems to be working for me.As a quick solution, I have created a script that mounts those volumes (with mount -a) and added it to my .bashrc; so when I connect via ssh with that user, I get access to them.I have tried adding this script to /etc/network/if-up.d/ but it does not seem to be working. Why is this not working? Which is a better way to do it?P.S.: I am using an osmc version of Raspbian (I am not sure if this is just Raspbian with an osmc service or a different distro). So when I run uname -a, I get:Linux osmc 4.2.1-1-osmc #1 SMP PREEMPT Wed Sep 23 17:57:49 UTC 2015 armv7l GNU/Linux | Mounting a network volume once Wifi is set up | boot;mount;networking | Sometimes, figuring out why the mechanisms that are supposed to work don't work is more work than just doing what you need to do some other more bulletproof way.You want to mount a shared folder at boot, but obviously you have to wait until the network is up (note that networking and the network is up tend to be distinctly different things WRT boot services). The canonical way to do this is obviously via ifup or a boot service with a dependency on a service that is supposed to establish the network connection (again, distinct from networking). But we've already decided these mechanisms aren't reliable in this case for whatever reason. To come up with a more bulletproof method, we need to reduce this task to its most basic elements, and resort to using shell tools to deal with these elements in a basic, fundamental way. This could start by coming up with some kind of test to determine if the network is up, and waiting until this test succeeds.Or we could skip that test and just test the result of mount itself, which will fail if the network share is unavailable.#!/bin/bash exec &> /var/log/mountshare.log gap=3 attempts=40 while [ $attempts -gt 0 ]; do mount -v /foo /bar if [ $? -eq 0 ]; then break; fi attempts=$(($attempts - 1)) sleep $gap doneTo explain:exec &> /var/log/mountshare.log redirects all output from this script to a file, overwriting any previous version. This is a bashism, which is why the shebang line isn't simply #!/bin/sh. However, using bash on a normal linux system is usually a safe bet.The value of gap and attempts mean this will be tried every three seconds for two minutes -- plenty of time. You could set gap down to one if you like. This is an incredibly minor task that could actually run forever at one second intervals without impairing the system.The mount line is specific (obviously you need to customize this). Since you know exactly what you want to do, it is preferable to mount -a and ensures our test is more bulletproof. If you have a listing in fstab, you should be able to use one argument (the device path or other identifier), but again, using a complete, specific command is easy and better fulfills that bulletproof criteria.The test, if [ $? -eq 0 ] checks the exit status of mount. When it succeeds, it returns 0; when it fails, it returns some other value. Note that mount will spit an error in that case, and this output will end up in the log file. Since we used -v, it will also produce a message when the call succeeds.The next two lines decrement the counter (attempts) and add a sleep. The sleep is very very important. 
Do not do this without a sleep. Without that, this goes from being an incredibly minor task that could actually run forever to something that is going to hog a lot of processor time. To use this, put it in, e.g., /usr/local/bin. Ensure it is owned root.root (sudo chown root:root mountmyshare.sh) and executable (chmod 755 mountmyshare.sh). You could put it anywhere if you don't want it in $PATH since we are going to use an absolute, bulletproof path to invoke it anyway. Then at the top of /etc/rc.local: /usr/local/bin/mountmyshare.sh & The & is critical; it forks this into the background. I've said put it at the top in case you have other stuff there that may screw this up. This won't screw up anything else. While /etc/rc.local is sort of a legacy mechanism, it is still run at the end of the boot sequence on all reasonably normal linux systems as far as I am aware.
_codereview.102510 | I've been implementing a custom internal server error page in ASP.Net MVC which will check if the current user is either an administrator or accessing the page from localhost, and if so, show them a whole bunch of details about the error to debug it with, otherwise just send them to a basic HTML error page.So far, it works great, but one problem I had was that if there is an error in a partial view on the page, the system gets stuck in a loop trying to report the error.To avoid this, I'm storing a temporary counter of how many times the current action has requested the error page in TempData, but I find the amount of lines and style of the code to get, set and check this variable a bit verbose:using System.Web.Mvc;namespace Test{ public class ErrorController : Controller { [ActionName(500)] public ActionResult InternalServerError(string aspxerrorpath = null) { int detectRedirectLoop = (TempData.Peek(redirectLoop) as int?) ?? 0; TempData[redirectLoop] = detectRedirectLoop + 1; if((int) TempData.Peek(redirectLoop) <= 1) { // Check if user is admin or running locally and display error if so } return Redirect(/GeneralError.htm); } }}Is there a better/prettier/shorter way of doing this? | Storing an integer counter in indexer | c#;asp.net mvc | I suggest the following:public class ErrorController : Controller{ private const string RedirectLoopCounterName = RedirectLoopCounter; private const int MaxRedirectLoopCount = 1; private int RedirectLoopCounter { get { return ((int?)TempData.Peek(RedirectLoopCounterName)) ?? 0; } set { TempData[RedirectLoopCounterName] = value; } } private int IncreaseRedirectLoopCounter() { return ++RedirectLoopCounter; } [ActionName(500)] public ActionResult InternalServerError(string aspxerrorpath = null) { var isRedirectLoop = IncreaseRedirectLoopCounter() > MaxRedirectLoopCount; if(isRedirectLoop) { return Redirect(/GeneralError.htm); } // Check if user is admin or running locally and display error if so }}Create a constant for the counter's name as already mentioned by @BCdotWEBCreate a property for getting and setting its valueCreate a method for actually increasing the counters valueAdditionaly you could replace the 1 by a constantFinally you can replace the condition by a helper variable to document it betterI would also invert the if |
_unix.57435 | How can I create (if the folder is new) or change (if the folder already exists) permissions for a folder? In particular, I want to create a subdirectory in /etc/bind/ where I can put the named_dump.db file, so I have to create the new directory with write permission for the bind user and group. First of all, is that the right approach? And what if I want to change the permissions of /etc/bind/ itself and save named_dump.db there? | Change/Create permissions for a folder | permissions;directory;users;group;bind | null
_cstheory.38386 | A bigotous program is a program which decides if its input is semantically equivalent to itself. Of course, this is impossible in a Turing-complete language due to Rice's theorem. In fact, it's pretty hard to find any computational model that supports such a program. Indeed, even extending a computational model with the ability to write bigots often results in contradictions. What are some computational models that do support bigots? | Which computational models support bigotous programs? | computability;semantics;decidability | null
_unix.9900 | I decided to connect my PC to the laptop via firewire. All interfaces are up, but no packets can be send/recieved.I'm using Linux Slackware64-current (kernel 2.6.37.4)Laptop (Dell Vostro 3700) has this adapter:root@trium:~# lspci | grep 139414:00.3 FireWire (IEEE 1394): Ricoh Co Ltd FireWire Host Controller (rev 01)Old ieee1394 stack was removed in new kernels. thats why I use new:root@trium:~# modprobe firewire-netroot@trium:~# lsmod | grep firefirewire_net 12930 0 firewire_ohci 27301 0 firewire_core 51107 2 firewire_net,firewire_ohcidmesg shows:firewire_net: firewire0: IPv4 over FireWire on device 47203fc0434fc000firewire_net: max_rec 0 out of rangefirewire_core: refreshed device fw01st string is good, but the 2nd is strange for me. what does it mean? maybe it is the source of the problem?ok. lets configure the interface:root@trium:~# ifconfig firewire0 192.168.1.2 netmask 255.255.255.0 uproot@trium:~# ifconfigfirewire0 Link encap:UNSPEC HWaddr 47-20-3F-C0-43-4F-C0-00-00-00-00-00-00-00-00-00 inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:20 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:24 errors:0 dropped:0 overruns:0 frame:0 TX packets:24 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2445 (2.3 KiB) TX bytes:2445 (2.3 KiB)Same configuration steps were made on my PC. OS is the same. FireWire device - 05:06.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev 46)dmesg showed:firewire_net: firewire0: IPv4 over FireWire on device 4d5a900003000000firewire_net: max_rec 0 out of rangefirewire_core: refreshed device fw0interface configuration -root@bium:~# ifconfig firewire0 192.168.1.3 netmask 255.255.255.0root@bium:~# ifconfigfirewire0 Link encap:UNSPEC HWaddr 4D-5A-90-00-03-00-00-00-00-00-00-00-00-00-00-00 inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:20 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:107 errors:0 dropped:0 overruns:0 frame:0 TX packets:107 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:70120 (68.4 KiB) TX bytes:70120 (68.4 KiB)Now lets ping PC from laptop - root@trium:~# ping -c 3 192.168.1.3PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.From 192.168.1.2 icmp_seq=1 Destination Host UnreachableFrom 192.168.1.2 icmp_seq=2 Destination Host UnreachableFrom 192.168.1.2 icmp_seq=3 Destination Host Unreachable--- 192.168.1.3 ping statistics ---3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 1999mspipe 3Yes. the cable is connected. Laptop has 4pin port, and PC has 6pin port. AFAIK 6pin uses two pins for power supplyment. I have the coresponding cable, so i don't think that the problem is here.thats all. thanks | Network via ieee1394 is unreachable (PC and laptop) | networking;ping;firewire | null |
_cs.47173 | I am reading about red-black trees in Introduction to Algorithms, Second Edition by Cormen. Pages 316, 317.I don't understand why we need to rotate the given tree. See (c) and (d) in the attached picture.The author says that the following properties of a red-black tree can be violated after a new node has been inserted:the root is black;children of a red node are black.The second property is restored in (b). The first property wasn't violated because we didn't insert the tree root. So why do we need to perform any other actions after recoloring? | Cormen. Red-black trees. Why do we need to rotate the tree after fixing its properties? | algorithms;trees | After recoloring in (b), the following rule is still violated:children of a red node are black.Recoloring while preserving black height in step (b) introduces new double red nodes 2 and 7, which remains after step (c) and finally eliminated in step (d). |
_webmaster.30013 | As more and more browsers adopt stringent checks around preventing insecure content served up on secure sites, there is impact on the use of ad networks which in many cases don't support the secure content.What are some options that exist to continue to leverage the HTTP based ads in a secure site? Note that I am aware of HTTPS ad network providers, but would still like to explore workarounds allowing the use of HTTP (insecure) ad providers.I see through this question that proxying the requests through an SSL landing page could be an option. Does anyone have experience doing that or more information on this technique?What other things have people been doing in this context?Regards. | Using HTTP (insecure) ad providers on HTTPS (secure) site | https;http;advertising | I would think that using information served over HTTP on a HTTPS page would defeat some of the purpose of using HTTPS in the first place. HTTPS is meant to be more secure than HTTP, but once you start using HTTP on an HTTPS page you are only as secure as your weakest link, HTTP.You said you were aware of HTTPS advertising networks, that is probably your safest bet. If you do try a proxy solution you should make sure you read the Terms of Service, some advertising networks are strict about how you use their code/service. |
_unix.377377 | I followed this guide: https://askubuntu.com/questions/425140/unable-toboot-with-nvidia-gtx-750-ti-even-with-latest-beta-driversBut, even though this resolved the Nouveau error, it freezes here and doesnt boot further. (1050Ti)[ 12.241960] iwlwifi 0000:02:00.0: firmware: failed to load iwlwifi-7265D-26.ucode (-2) [ 12.241963] iwlwifi 0000:02:00.0: Direct firmware load for iwlwifi-7265D-26.ucode failed with error -2 [ 12.241984] iwlwifi 0000:02:00.0: firmware: failed to load iwlwifi-7265D-25.ucode (-2) [ 12.241986] iwlwifi 0000:02:00.0: Direct firmware load for iwlwifi-7265D-25.ucode failed with error -2 [ 12.568391] iwlwifi 0000:02:00.0: firmware: direct-loading firmware iwlwifi-7265D-24.ucode This happens, and I the boot doesnt progress further.I am able to boot into recovery mode. Please assist, thank you. | Nvidia Driver && iw1wifi.ucode | wifi;drivers;nvidia;freeze | null |
_unix.174936 | I have found many tutorials about creating a bootable live Linux image, but I want to create a very particular USB Linux pendrive that allows me: to boot the Linux image using the native hardware if started at boot time (so not virtualized); to use the remaining space for storage purposes on Windows (so not formatted with an ext filesystem); and to start as a virtualized OS if I launch it from another already running OS, e.g. Windows. The distro that I have chosen for the installation on the pendrive is Kali, but I don't know how to do all this. Could someone help me? Thanks | How to create a dual-mode USB Linux drive, bootable and startable directly from a Windows OS as a virtual machine | usb drive;live usb;kali linux | null
_computerscience.3795 | So I just recently learned about the Compute Shader and it looks from what I have picked up the same idea as parallel programming you would do with CUDA or OpenCL, but in the shader pipeline.If I want to draw a million cubes in a scene should I be using one method over the other or both. If both how do you split that up so the GPU isn't trying to parallel compute both the shader and another process at the same time | Compute Shader vs CUDA/OpenCL | compute shader;opencl;cuda | null |
_cs.28780 | Dynamical systems are those whose evolution can be described by a rule, evolves with time and is deterministic. In this context can I say that Neural networks have a rule of evolution which is the activation function $f(\text{sum of product of weights and features})$ ? Are neural networks dynamical systems, linear or nonlinear dynamical systems? Can somebody please shed some light on this? | Are neural networks dynamical systems? | terminology;artificial intelligence;neural networks | A particular neural network does not evolve with time. Its weights are fixed, so it defines a fixed, deterministic function from the input space to the output space.The weights are typically derived through a training process (e.g., backpropagation). One could imagine building a system that periodically re-applies the training process to generate new weights every so often. Such a system would indeed evolve over time. However, it would be more accurate to call this a system that includes a neural network as one component of it.Anyway, at this point we are probably descending into quibbling over terminology, which might not be very productive. This site format is a better fit for objectively answerable questions with some substantive technical content. |
_webmaster.74302 | I'm managing a Google Analytics implementation on a fairly large e-commerce site. We're a big enough site to have multiple subdomains... So, we have:www.ourshop.comwww.community.ourshop.comwww.help.ourshop.comUnfortunately, we are seeing a significant amount of self-referral traffic from these sub-domains, and our understanding is that this will dramatically impact all of our session and attribution metrics. Since we're using Universal Analytics, we added all of the domains and subdomains (without the WWWs) to the referral exclusion list in GA. Unfortunately, no dice. We're still getting tons of self referrals.Additional notes: All of our sites push users over to HTTPS by default. Also, we have a reasonably large number of 301 redirects throughout the site to push people from old URLs to new, or to send them from HTTP to HTTPS, etc.How can we eliminate the self-referrals? | Google Analytics - Referral Exclusion List Not Working | google analytics | null |
_unix.267954 | All the distros I tried with KDE solved my tearing problem. I would love to use KDE, but it's too heavy on my PC. I usually use Xfce, but MATE is fine too. I would love to get either Xfce or MATE working without screen tearing. How hard can it be if there are other desktops that work just fine? By the way, I have an ATI Radeon HD 3400 which has no drivers from their site. | Screen tearing in MATE with Compiz, no screen tearing with KDE or Unity | kde;xfce;mate | null
_softwareengineering.83259 | I'm working on an investigation on the NLP field. Since software engineering and software documentation are not the primary concerns of my whole investigation I decided to do it using XP.My question is, can you recomend:Any tool you've used to manage XP projectsAny recomendation/link on how to manage investigation projectsI'm very biased with XP because I've used it before, but if anyone has a better methodology you can post it as well | Any recommendation to manage an investigation projects? | methodology;development methodologies;extreme programming | XPlanner is my favorite XP/Scrum planning and management tool. It isn't heavy on reporting, but it gives me what I need. I've also used Rally, which is more suited to large organizations and is a bit heavy for the daily team tasks. If you need detailed reports for upper management, it does a good job.As for how to manage investigation, I have my teams set definable goals. Tasks have definable success/failure/completion states and aren't vague. So we end up with tons of short tasks like Test REST interface for performance capabilities and Examine proxy/firewall restrictions for client rather than Explore REST. New tasks go up all the time and forces us to have very short sprints. In the range of 3-5 days. You just can't plan exploration out much further than that in the beginning because you don't know what you don't know. |
_webapps.106295 | My company uses a Gmail-based email server. For integration with a partner's system I have been asked for my Gmail user ID, here is what they wrote:there are usually two email associated, one should be [email protected],the other one should be [email protected] the web reveals this question on StackOverflow:How to get Google User ID something which looks like 1242343543557656,using the GMail address in Android?QUESTION: As a simple web user, how can I get my ID?Or is there no such thing?I could not find it at https://myaccount.google.com | How to get my Gmail User ID? (a number like 1242343543557656) | gmail;google account;g suite | null |
_webapps.50864 | I want to hide the names of my followers on Facebook. This image shows what I'm trying to achieve:How can I do that? | How to hide Facebook followers list | facebook;followers | null |
_webmaster.67464 | I'm receiving the following errors in Google's Structured Data Testing Tool:It seems like the page for the category galeria does not contain these fields. There are also some more errors like this (all with categories links). The strange thing is that the first category doesn't show those errors.Anyway, I added the entry-title fields:<section class=content> <div class=page-title pad group> <h1 class=entry-title><i class=fa fa-folder-open></i>Categora <span>Biologa</span></h1> </div> ...</section>but the error is still there.Also, author and update doesn't seem to make sense here (categories), so should those messages be ignored?Update:Here is a link to Google's Structured Data Testing Tool for the page to demonstrate this, and to view the HTML code by clicking on the HTML tab. | Rich snippet errors in Google's Structured Data Testing Tool for category pages | rich snippets;google rich snippets tool | The errors were not due to the categories or tag pages, but to the slider and the sidebar posts in home page. So to solve it I just had to add rich snippets to all the posts shown there (also in categories and tags pages). |
_softwareengineering.355515 | I am currently reading and working through Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin. The author talks about how a function should do one thing only, and thus be relatively short. Specifically Martin writes:This implies that the blocks within if statements, else statements, while statements, and so on should be one line long. Probably that line should be a function call. Not only does this keep the enclosing function small, but it also adds documentary value because the function called within the block can have a nicely descriptive name.This also implies that functions should not be large enough to hold nested structures. Therefore, the indent level of a function should not be greater than one or two. This, of course, makes the functions easier to read and understandThis makes sense, but seems to conflict with examples of what I see as clean code. Take the following method for example: public static boolean millerRabinPrimeTest(final int n) { final int nMinus1 = n - 1; final int s = Integer.numberOfTrailingZeros(nMinus1); final int r = nMinus1 >> s; //r must be odd, it is not checked here int t = 1; if (n >= 2047) { t = 2; } if (n >= 1373653) { t = 3; } if (n >= 25326001) { t = 4; } // works up to 3.2 billion, int range stops at 2.7 so we are safe :-) BigInteger br = BigInteger.valueOf(r); BigInteger bn = BigInteger.valueOf(n); for (int i = 0; i < t; i++) { BigInteger a = BigInteger.valueOf(SmallPrimes.PRIMES[i]); BigInteger bPow = a.modPow(br, bn); int y = bPow.intValue(); if ((1 != y) && (y != nMinus1)) { int j = 1; while ((j <= s - 1) && (nMinus1 != y)) { long square = ((long) y) * y; y = (int) (square % n); if (1 == y) { return false; } // definitely composite j++; } if (nMinus1 != y) { return false; } // definitely composite } } return true; // definitely prime }}This code is taken from the Apache Commons source code repo at: https://github.com/apache/commons-math/blob/master/src/main/java/org/apache/commons/math4/primes/SmallPrimes.javaThe method looks very readable to me. For algorithm implementations like this one (implementation of Miller-Rabin Probabilistic Primality Test), is it suitable to keep the code as is and still consider it 'clean', as defined in the book?Or would even something already as readable as this benefit from extracting methods to make the algorithm essentially a series calls to functions that do one thing only?One quick example of a method extraction might be moving the first three if-statements to a function like: private static int getTValue(int n) { int t = 1; if (n >= 2047) { t = 2; } if (n >= 1373653) { t = 3; } if (n >= 25326001) { t = 4; } return t; }Note: This question is different that the possible duplicate (though that question is helpful to me too), because I am trying to determine if I am understanding the intention of the author of Clean Code and I am providing a specific example to make things more concrete. | Trouble grasping what clean code looks like in real life | clean code | Clean code is not an end in itself, it is a means to an end. The main purpose of refactoring bigger functions into smaller ones and cleaning up the code in other ways is to keep the code evolvable and maintainable. When picking such a very specific mathematical algorithm like the Miller-Rabin prime test from a text book, most programmers do not want to evolve it. Their standard goal is to transfer it from the pseudo code of the text book correctly into the programming language of their environment. 
For this purpose, I would recommend following the textbook as closely as possible, which typically means not refactoring. However, for someone working as a mathematician in that field who is trying to change or improve that algorithm, IMHO splitting this function up into smaller, well-named ones, or replacing the bunch of magic numbers with named constants, can help make changes to the code easier, just as it does for any other kind of code.
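As a small illustration of the named-constants suggestion above, here is what pulling the witness-count thresholds out of millerRabinPrimeTest might look like. It is a sketch in Python rather than the question's Java, and the constant and function names are mine, not from the post; the threshold values are the ones already present in the posted code.

MILLER_RABIN_ROUNDS_BY_BOUND = (
    (25326001, 4),
    (1373653, 3),
    (2047, 2),
)

def required_rounds(n):
    # Mirrors the original chain of if-statements: the largest bound that
    # n reaches determines how many witness rounds are needed.
    for bound, rounds in MILLER_RABIN_ROUNDS_BY_BOUND:
        if n >= bound:
            return rounds
    return 1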
_unix.381909 | Is there a way to know which files are affected by any command that's entered in terminal? For instance, we all know what passwd command does, and which file it affects. But is there a definite way to know the exact file name any command affects?I have a command that adds user to a file, which implements some policies on those users. But I'm not sure which file it adds to. | Files Affected By Command | command line;file comparison | null |
_cogsci.17395 | I'm an Information Technology student (bachelor's degree) and as a summer project, I'd like to develop a small alarm system for ALS patients in case of emergency situations.I thought about tracking the eye movements of the patients via electrodes and then send the eye movement signals to the software system which I'm going to develop and then process the data. I'm planning to use Python programming language since I already have some experience on.However, I don't have any experience with using electrodes, processing the signals to detect eye movements. Do you know any good and simple online sources for the beginners like me? I can't afford to buy books so I'd be grateful if you can suggest me free online sources. | How to track eye movements with electrodes? | reference request;methodology;eye movement;brain computer interface | null |
_cs.28662 | In the third page (the third paragraph in the right column) of the paper Time, Clocks, and the Ordering of Events in a Distributed System by Leslie Lamport, it says thatThe reader may find it helpful to visualize a two-dimensional spatial network of processes, which yields a three-dimensional space-time diagram. Processes and messages are still represented by lines, but tick lines become two-dimensional surfaces.However I failed to visualize them in my head, especially the two-dimensional surfaces. Could anyone make more explanations? An illustrative picture would be excellent. | Visualization of Lamport's three-dimensional space-time diagram with introduced tick lines | distributed systems;intuition;clocks | Here's my take:...and here's a 3D version without the nice colors.Vertical black lines are processes; magenta lines are messages (wavy lines); spheres are events, and time is on the vertical (blue) axis. The planes are 'tick lines.' In Dr. Lamport's original diagrams, messages are only passed between adjacent processes. Presumably in a real distributed system any process can send a message to any other process, but this is hard to draw clearly in a 2D diagram. |
_unix.335830 | I'm looking for a way to switch repositories from the CD to the mirror ones, without going into /etc/apt/sources.list and changing its content to the Internet mirror list. IMHO it is an absolute waste of time to do that just for one or two applications not included in the CD repo... Something similar to yum, for instance: yum install --enablerepo=repository_name_here package_name. Is there anything similar in Debian? | Switching repositories in Debian 8 | debian;repository | null
_unix.165644 | I have a file like this (this is a sample, the file contains more lines like these):first line sss case-2-hello-world other wordssecond line other wordsthird line sss case-1-love-you other wordsfourth line other wordsfifth line other wordssixth line sss case-6-not-work other wordsAnd I would like to convert it to:pp:12 pme:4 plan:cpu_bind=hello mem_bind=world second line other words pp:6 pme:2 plan:cpu_bind=love mem_bind=youfourth line other words fifth line other words pp:36 pme:12 plan:cpu_bind=not mem_bind=work First, identify the lines with the pattern sss. Second, extract the number. Third, calculate the pp and pme: pp=number*6 and pme=number*2. Fourth, split the words that contain numbers in that line and assign them to cpu_bind and mem_bind. Fifth, put them together to replace the lines.For example, I identify the line first line sss case-2-hello-world other wordsby sss, the number is 2. After that, I need to calculate pp=2*6 pme=2*2. Split the string case-2-hello-world into parts and assign hello to cpu_bind and world to mem_bind. At the end, I need to get pp:12 pme:4 plan:cpu_bind=hello mem_bind=worldreplacing the original line.Note: sss can be anywhere in that line but occur only one time. sss is only pattern can be used to identify which line need to be substituted. There are other words contain number and other numbers int the line. The pattern case-number-cpu_bind-mem_bind have four parts. Its order is fixed and can be spit by -. | how to substitute some lines of one file according to the line's content | text processing | This is complex enough that I would use a full fledged programming language to do it. For example, in Perl:$ perl -ne 'if(/\ssss\s+/ && /(\S+-\d+-\S+)/){ @F=split(/-/,$1); print pp:, 6 * $F[1], pme:,2*$F[1], plan:cpu_bind=$F[2] mem_bind=$F[3]\n }else{print}' file Or, golfed a little but following the same idea : $ perl -lpe '/\ssss\s+/&&do{/(\S+-\d+-\S+)/;@F=split(/-/,$1); $_=pp:.6*$F[1]. pme:.2*$F[1]. plan:cpu_bind=$F[2] mem_bind=$F[3]}' file Note that this makes a few assumptions that might not be true (but I can't know from your question):It assumes that the word immediately after sss is the one you care about. It assumes that this word will always be divided by - into sub words.It assumes that the word will always have 4 parts, with case being the first, the number second, then two words that should be assigned to cpu_bind and mem_bind. Assuming these assumptions are correct, here is the same thing as a commented script:#!/usr/bin/env perl## Read the input file line by linewhile (<>) { ## If this line matches whitespace (\s), then sss, then one ## or more whitespace character, identify the string of interest ## by looking for non-whitespace characters (\S+), -, then ## numbers (\d+), then - and more non-whitespace characters and ## save them as $1. if(/\ssss\s+/ && /(\S+-\d+-\S+)/){ ## Split the word captured above into the @F array ## by cutting it on - @F=split(/-/,$1); ## Start printing. print pp:, ## 6 * the 2nd element in the array (the number) 6 * $F[1], pme:,2*$F[1], ## The third element ($F[2]) is the 1st word ## and the fourth element ($F[3]) is the 2nd word. plan:cpu_bind=$F[2] mem_bind=$F[3]\n } ## If this line does not match sss, print it. else{print}} |
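An addition, not part of the accepted answer above: a rough Python 3 equivalent of the Perl approach, under the same assumptions (the marker sss occurs once per relevant line, and the word of interest always has the shape case-<number>-<cpu_bind>-<mem_bind>).

#!/usr/bin/env python3
import re
import sys

word = re.compile(r'case-(\d+)-(\S+?)-(\S+)')

with open(sys.argv[1]) as f:
    for line in f:
        m = word.search(line) if ' sss ' in line else None
        if m:
            n = int(m.group(1))
            # pp = number * 6, pme = number * 2, as described in the question
            print('pp:{} pme:{} plan:cpu_bind={} mem_bind={}'.format(
                n * 6, n * 2, m.group(2), m.group(3)))
        else:
            print(line, end='')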
_unix.317608 | i try to run the below command at the same time but only first one can run but second one get blocked by same process id.sh ./controller.sh $myfile/a.sh start '1' 'today'sh ./controller.sh $myfile/a.sh start '2' 'early'controller.shprogpath=$1prog=$(basename $progpath)get_pid() { echo `ps -ef | grep $prog | grep -v grep | grep -v $0 | awk '{print $2}'`} local pids=$(get_pid) if [ -n $pids ]; then echo $prog (pid $pids) is already running! return 0 fiHow can i run the 2 command successful by changing the controller.sh? | How to run same process with passing different parameter? | shell script | null |
_codereview.141546 | I want to be able to find out the value of a point inside a polygon if I have assigned weights to its vertices.In the figure below, some weights have been applied to the green dots. For some red interior point, I need to calculateI have use the inverse of the distance from the point to a vertex as my correction factor.This is my code so far to get the weighted value at one red point: def add_weights(poly, weights): poly.weights = [float(w) for w in weights]+[weights[0]] #need to add the first weight # at the end to account for # the first point being added to close the loopdef distance(a,b): dist = ( (b.x - a.x)**2 + (b.y - a.y)**2 )**0.5 if dist == 0: dist = 0.000000001 return distdef get_weighted_sum(poly, point): return sum([poly.weights[n]/distance(point,p) for n,p in enumerate(poly.points_shapely) if poly.weights[n] != 'nan'])def get_weighted_dist(poly, point): return sum([1/distance(point,p) for n,p in enumerate(poly.points_shapely) if poly.weights[n] != 'nan'])def get_point_weighted_value(poly, point): return get_weighted_sum(poly,point)/get_weighted_dist(poly,point)Note that my polygon can be irregular. Also, not all the vertices might have a weight, in which case the vertex is skipped for the calculation: in the example above it would mean that the value of the weight would only be calculated using two vertices rather than three.And this is a test code sample:import shapely.geometry as shapelyclass MyPoly(shapely.Polygon): def __init__(self,points): closed_path = list(points)+[points[0]] super(MyPoly,self).__init__(closed_path) self.points = closed_path self.points_shapely = [shapely.Point(p[0],p[1]) for p in closed_path]def convert_to_shapely_poly(poly): poly_shapely = MyPoly(poly) return poly_shapely #inputspoly = [[0,0],[5,10],[10,0]]#conversion to shapely formatmypoly = convert_to_shapely_poly(poly)#adding weights to the vertices of the polygoneadd_weights(mypoly,[2,11,5])#calculate the weighted value of a point inside the polygon:get_point_weighted_value(mypoly,shapely.Point(7,2))Any suggestion on improving this code would be welcome! | Given a polygon with weighted vertices, calculate a value for an interior point | python;algorithm;computational geometry | null |
_softwareengineering.93350 | I have seen many times that if we build a website for a client then there is a possibility that this site gets changed over a period of time.I was thinking that from now onwards whichever site I make I will host a copy of the site on a personal server. Like client1.myserver.com so that even if they change it I have the copy of it.So that if I need to show someone or I need to refer myself few things I have the proof there.I will not make them public but will password protect it.I want to know whether this is legal and a good idea or not. | Is it legal or good idea to have a backup of all client sites on my own server | web development;web | I think that's a bad idea.I am not a lawyer, but to me this must be explicitly expressed in a legal document. You might do yourself more harm than gain benefits. If you develop software you are not responsible for the back up. If you want to keep the copy, that's another responsibility, which actually include far more than you might think now. Password protection is a weak argument. To start with, make it storage redundant, encrypt the data, secure access to it - that's hard work.Given malicious user gains access to your storage, he subsequently can hack into the sites you have developed, as he sees the code.Weight all the benefits and risks, check the license agreement, be extremely cautious and consult with an IP lawyer if you are to make a backup of the source code (or binaries). For your portfolio, you can make a few GUI screenshots and then ask your employers to provide a reference for your work upon request from prospective employers. You are usually not allowed to tell many details about which specific work you have carried out by Non-Disclosure Agreement. |
_datascience.6904 | We are working with a complex application i.e. a physical measurement in a lab, that has approximately 230 different input parameters, many of which are ranges or multiple-value.The application produces a single output, which is then verified in an external (physical) process. At the end of the process the individual tests are marked as success or fail. That is, despite the many input parameters, the output is assessed in a boolean manner.When tests fail, the parameters are 'loosened' slightly and re-tested.We have about 20,000 entries in our database, with both success and fail, and we are considering a machine learning application to help in two areas:1) Initial selection of optimum parameters2) Suggestions for how to tune the parameters after a fail Many of the input parameters are strongly related to each other. I studied computer science in the mid-90s, when the focus was mostly expert systems and neural networks. We also have access to some free CPU hours of Microsoft Azure Machine Learning. What type of machine learning would fit these use-cases? | Which type of machine learning to use | machine learning;neural network | With using R, You could look at trees / randomforests. Since you have correlated variables, you could look into Projection pursuit classification trees (R package pptree). And there soon will be a ppforest package. But this is still under development. You could also combine randomforest with the package forestFloor to see the curvature of the randomforest and work from there. |
_unix.197935 | My current .bash_profile is equal to the below. I add some color and add a command that outputs whether or not I am in a git repository to my PS1 for my bash profile.ORIG=$PS1PS1=\[${txtund}${green}\]LOCAL\[\[${reset}\];PS1+=\$(prompt_git \${white} on ${violet}\);PS1+=\[${reset}\];PS1+= - \u\$: ;The problem is that when I run long commands, it is rewriting over the line. I want to word wrap my commands so that as I write they go to the next line. What do I need to wrap the PS1 in for this to occur?UPDATE -ORIG=$PS1PS1=\[${txtund}${green}\]LOCAL\[\[${reset}\];PS1+=\$(prompt_git \\[${white}\] on \[${violet}\]\);PS1+=\[${reset}\];PS1+= - \u\$: ;I have increased the number of escapes as per the comment from below. The wrap is still not working. Any other suggestions? | How to wrap bash commands after adding color | bash;prompt | ORIG=$PS1PS1=\[${txtund}${green}\]LOCAL\[\[${reset}\];PS1+=\$(prompt_git \\[${white}\] on \[${violet}\]\);PS1+=\[${reset}\];PS1+=\[ - \u\$: \];I have escaped both the colors as well as the final line of text. This solves my issue. It is through using [ ] and escaping colors as well as text, I am able to word wrap my commands in bash correctly. |
_unix.368087 | I am currently installing a Linux distro onto the SSD of a new desktop in my home. This PC will also have a HDD for storage which will be permanently mounted on this Linux OS, but I would also like to set up this HDD so that it can be accessible by all (Windows and Linux) machines across my home through the network.Is this possible, and if so should I have it formatted as NTFS? | Sharing a network drive between Linux and Windows | linux;networking;windows;storage;file format | null |
_cstheory.16867 | Hej guys,I'm working on customizing a Vehicle Routing Problem for a practical case, which is characterized as follows:The set of customers does not change over time, but their respective prizes are degressing in a non-linear fashion (deterministic).Traveling costs do not change.Need to visit every customer exactly once.So I've been digging through tons of papers and such, but I can't find anything similar to the given case .. The Dynamic VRP goes way too far, with stochastics and redefining the routes in an ongoing fashion, which I do not need in this special case, because the degression of prizes is deterministic. I thought about the Traveling Repairman Problem, but I can't come up with a way to use it .. The VRP with Time Windows seems to be a completely different case as I have no opening or closing times.Maybe someone got any hints for me ? I'm pretty sure this case has been dealt with, as it seems to be practically relevant.Any help is appreciated !Edit: as mentioned in the comments I should provide a sufficient version of the VRP:Input:N: given set of customers,D: given set of demands of said customers,C: given set of traveling costs, where c_ij is the weight of the arc (i,j),M: maximum number of vehicles that may be used,V: fixed costs of using a vehicle,Q: capacity of a vehicle,P: set of prizes for every customer,f(t): a function describing the degression of the prizes,(I'm not sure yet whether I want to visit every customer or not) Output:Optimal routes with the following objective: minimize total costs of travel, minimize number of vehicles used, maximize sum of prizes. Whether finding the optimal solution is possible or not is not really of interest in the first place.[As I looked for questions related to the VRP, TSP and similar models, these were posted in theoretical CS, but now I see that this section is definitely not the right one for me ..] | PCVRP with prizes reduced over time | co.combinatorics;combinatorics | This has been called Discounted Reward. In my brutish opinion, steer toward the CS guys and away from the OR guys. I just like the lit better.See: Approximation Algorithms for Orienteering and Discounted-Reward TSP. FOCS 2003 Which I just stole from my colleague's desk. They model reward as $\lambda^{-t}$, where $t$ is time, and $\lambda$ is constant, but otherwise it is the standard orienteering formulation: Maximize reward, subject to a budget, or minimize budget subject to a reward constraint. |
_unix.279603 | I am using Debian Jessie with the Mate desktop and I would like to remove systemd from my system. I am following these instructions but as soon as I enterapt-get remove --purge --auto-remove systemdapt-get lists the mate desktop packages mate-desktop-environment, etc as packages to be removed.I knew that Gnome 3 had a dependency on systemd but I thought Mate hadn't: I got Mate running fine on FreeBSD, obviously without systemd.Am I doing something wrong, or otherwise are there perhaps alternative Mate packages for Debian that do not rely on systemd? | Using mate desktop without systemd | debian;systemd;mate | You should be able to run MATE without using systemd as your system's init, but as it stands in Debian currently you need to have the systemd package installed, because mate-desktop-environment ends up depending on libpam-systemd which depends on systemd. To use all this alongside sysvinit (or Upstart) you need to install systemd-shim instead of systemd-sysv. |
_cs.48908 | Does Schaefer's dichotomy theorem establish that a general 3-sat clause cannot be transformed into an equivalent set of 2-sat/Hornsat/affine clauses (using auxiliary variables) or just that this would be very unlikely in that it would imply P = NP? I ask because I know that there are some types of problems involving 3-literal-clauses which can be transformed into equivalent 2-sat/Hornsat clauses. I'm thinking specifically of 2-or-3 SAT which can be solved by 2-sat/anti-Hornsat clauses or 1-or-3 SAT which can be solved using affine clauses. | Schaefer's dichotomy theorem and reformulating 3-literal clauses | complexity theory;np complete;satisfiability;p vs np;2 sat | Schaefer's dichotomy theorem doesn't purport to claim anything about what transformations might be possible / not possible.However, as Yuval says, we don't need Schaefer's theorem. We already know that 3SAT is NP-complete. Therefore, we know that if there is a polynomial-time transformation that transforms a 3SAT instance into an equisatisfiable 2SAT instance, then P = NP. Put another way, if P $\ne$ NP (as many researchers suspect to be the case), then there is no polynomial-time transformation that transforms a 3SAT instance into an equisatisfiable 2SAT instance. Put yet another way, finding such a transformation that provably works is at least as hard as proving that P = NP.The same holds for a transformation to any other set of formulas that are known to have a polynomial-time algorithm for testing satisfiable (e.g., the conjunction of Horn clauses). |
_softwareengineering.331139 | We currently have 4 environments for a project (local/dev, test, acceptance, production). We created some integration tests which want to run. The big question we now have is where to run them?Run them on all environments. The pro's for this is that we know for sure it works on all environments. The cons are that a) you don't want to stress your production environment with tests, b) Not all tests can be preformed on production since the external references from third parties of also productionRun them on single environment. So you pick one of the four. The problem with this is that environment specific settings (database, config) can cause bugs. So if you tests run fine on the test environment bug due to a config bug in production it breaks.Create a production environment on the fly overwrite only the settings you really need for the test and run your tests on those. The pros of this approach is that for production you filter out the environment specific settings bugs as much as possible. They can still occur on the other environment, but the consequences are less. The cons are that this approach takes quite some time to set up, can it be skipped by minimized the differences between the environments..I'm wondering, is there a standard for this? What would normally be done? | Where do I run my integration tests | continuous integration;integration tests | You'd run them on the test/acceptance environment. The acceptance environment is supposed to mimic production (as much as possible). The only differences should be configuration and possibly amount of servers (this depends on how you deploy). I use automated tools which validate config (and fails the deployment before it starts or during it), maybe you could use a similar approach. My deployment tool (Octopus Deploy) can handle rollbacks. Sidenote 1: You should never write tests against production nor should you clone your production environment. You could try using a Blue Green Deployment method and this would mean you'd be able to sense checkwhat is deployed in your inactive production environment before switching your load balancer(s) over. Possibly even run some validation tools for whatever has been deployed.Sidenote 2: I don't know why you have both test and acceptance environments, unless this is some sort of business requirement but I'd run integration tests on both. We have, development, quality (acceptance) and production. Our integration tests (database connectivity etc) run on all but production. Our acceptance tests run on quality (acceptance). You could run your integration tests locally if you have all of the neccessary systems installed locally i.e. if you're connecting to SQL, Mongo, Redis but for us that's not possible. |
_unix.191732 | I want to build FreeBSD custom Image for my Raspberry Pi2. I have read about the boot process in RPi2. I also went through the chrochet build instruction document.I learned from some websites that the firmware is not up-todate. So BSD cannot be booted on RPi2 . Please provide pointers to build custom image of FREEBSD for RPi2 | FreeBSD on Raspberry Pi2 | freebsd;raspberry pi;u boot;bootable | The latest release 11 of FreeBSD added support for BCM2836 making it compatible for Pi2.https://wiki.freebsd.org/FreeBSD/arm/Raspberry%20Pi |
_vi.13308 | I have the following line of code: autocmd FileType javascript execute setlocal formatprg=.g:prettier_eslint_path2.\\ --eslint-config-path . g:eslintrc_full_path .\\I can see this when I start vim in the directory which has .eslintrc:ESLINTRC PATH.eslintrcPrettier ESLINT PATH/Users/localuser/lendi/lendi-app/node_modules/.bin/prettier-eslint^@/Users/localuser/lendi/lendi-app/node_modules/.bin/prettier-eslintHas ESLINTRC/Users/localuser/lendi/lendi-app/.eslintrc--- Auto-Commands ---Press ENTER or type command to continueI have the g:eslintrc_full_path logged out correctly, how do I pass it from the variable to formatprg correctly.If I remove the args, I get an empty output as the result of the file.If I remove the \\ it informs that it is valid to do so.I dont understand how to pass arguments to the cli script when using a variable. I did not find any examples of string interpolation when reading through http://learnvimscriptthehardway.stevelosh.com/chapters/26.htmlAlso, the following code also causes a problem: autocmd FileType javascript execute setlocal formatprg=.g:prettier_eslint_path2.\\ --eslint-config-path\\ .g:eslintrc_full_pathand deletes the entire file. I have tried: autocmd FileType javascript execute setlocal formatprg=.g:prettier_eslint_path2.\\ --eslint-config-path\\ .g:eslintrc_full_pathbut it deletes the contents of the file as well.I have tried it with only --stdin like this but it fails and converts everything to double quotes: autocmd FileType javascript execute setlocal formatprg=.g:prettier_eslint_path2.\\ --stdinI have tried adding in the single quote as an argument here but it still converts everything to double-quotes.autocmd FileType javascript let &l:formatprg= g:prettier_eslint_path2.\\ --single-quote\\ --eslint-config-path\\ . g:eslintrc_full_pathAlso, logging &l:formatprg does not work:autocmd FileType javascript let &l:formatprg = g:prettier_eslint_path2.' --single-quote --eslint-config-path '. g:eslintrc_full_pathechom &l:formatprgI tried checking the messages:and it shows:/Users/localuser/lendi/lendi-app/node_modules/.bin/prettier-eslint --single-quote --write --eslint-config-path /Users/localuser/lendi/lendi-app/.eslintrcwhich should work. How does neoformat pass the file to the cli script.How do I solve this problem?I have also created a issue on neoformat for the same:https://github.com/sbdchd/neoformat/issues/112 | How to pass argument to Neoformat for prettier-eslint-cli from a variable in vimrc | vimrc;vimscript;plugin neoformat | null |
_cstheory.32857 | As I have seen a few questions on calculi floating around, I figure this is the right place to ask. Suppose we have an extended process $P \equiv \nu \tilde{n}.\left( \{M/x\} \mid N \right)$ where $\{M/x\}$ denotes an active substitutions and $M$ and $N$ are some terms.The question i ask myself is, whether we generally substitute $x$ for $M$ in $N$ if $x \in N$, or whether it depends on if $x \in \tilde{n}$ such that it is bound by $\nu \tilde{n}$. In other words, are processes computing in parallel in scope of active substitutions, and if yes, on what conditions?Despite skimming through several papers, i did not manage to get a clear cut answer. | Scope of active substitutions in the applied $\pi$-calculus | pi calculus | null |
_unix.353038 | I have a backup dump file of all MySQL databases (50 databases) on a server, and I want to restore a single database from this dump file. Is that possible? Any suggestions? Thanks in advance. | how to restore a mysql database from full database backup | mysql;xampp | null
_codereview.87160 | How can I imrpove the performance of this T-SQL statement?SELECT Code, Description INTO [DBApps_Pulse_WarehouseAreas].[dbo].[Tbl_Partmaster]FROM SYSDataBridge.dbo.View_Pulse_Partmaster_SynchronizedALTER TABLE [DBApps_Pulse_WarehouseAreas].[dbo].[Tbl_Partmaster] ADD PRIMARY KEY (Code) | Perform SELECT INTO Statement and add Primary Key afterwards | performance;sql;sql server;t sql | Your current code effectively writes the table twice.Once as a heap and then again as a clustered index (with an intermediate sort step) after you add the primary key.I'd just create the table upfront with the desired primary key and insert into it.USE DBApps_Pulse_WarehouseAreas;CREATE TABLE [dbo].[Tbl_Partmaster] ( Code VARCHAR(10) NOT NULL CONSTRAINT PK_Tbl_Partmaster PRIMARY KEY, Description NVARCHAR(500) NOT NULL );INSERT INTO [dbo].[Tbl_Partmaster] WITH (TABLOCKX)SELECT Code, DescriptionFROM SYSDataBridge.dbo.View_Pulse_Partmaster_Synchronized; For at least three versions of SQL Server an insert into an empty B tree has been able to be minimally logged.In order to avoid page splits there might be a sort operator in the execution plan if the estimated number of rows is sufficiently high (or if the source has an index that could potentially present the selected columns in order of code this direct insert might avoid the need to sort all together).Finally avoid Hungarian notation such as Tbl_. At best it adds nothing but clutter. At worst at some point you might want to refactor the database and transparently replace the table with a view without updating calling code. Then the Tbl_ prefix would be down right misleading. |
_unix.85788 | I got audio working in Arch Linux without problems yesterday, but at some point it stopped working again:$ alsamixer cannot open mixer: No such file or directory$ vlc foo.mp4...[0x7f1388006be8] alsa audio output error: cannot open ALSA device default: No such file or directoryalsamixer runs fine as root, so it seems to be a permission problem. All the audio devices are owned by root:audio:$ ls -l /dev/sndtotal 0drwxr-xr-x 2 root root 80 Aug 6 22:54 by-pathcrw-rw---- 1 root audio 116, 7 Aug 6 22:54 controlC0crw-rw---- 1 root audio 116, 10 Aug 6 22:54 controlC1crw-rw---- 1 root audio 116, 6 Aug 6 22:54 hwC0D0crw-rw---- 1 root audio 116, 9 Aug 6 22:54 hwC1D0crw-rw---- 1 root audio 116, 5 Aug 6 22:54 pcmC0D0ccrw-rw---- 1 root audio 116, 4 Aug 6 22:54 pcmC0D0pcrw-rw---- 1 root audio 116, 3 Aug 6 22:54 pcmC0D1pcrw-rw---- 1 root audio 116, 2 Aug 6 22:54 pcmC0D2ccrw-rw---- 1 root audio 116, 8 Aug 6 22:54 pcmC1D3pcrw-rw---- 1 root audio 116, 1 Aug 6 22:54 seqcrw-rw---- 1 root audio 116, 33 Aug 6 22:54 timer$ getfacl /dev/snd/*getfacl: Removing leading '/' from absolute path names# file: dev/snd/by-path# owner: root# group: rootuser::rwxgroup::r-xother::r-x# file: dev/snd/controlC0# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/controlC1# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/hwC0D0# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/hwC1D0# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/pcmC0D0c# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/pcmC0D0p# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/pcmC0D1p# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/pcmC0D2c# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/pcmC1D3p# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/seq# owner: root# group: audiouser::rw-group::rw-other::---# file: dev/snd/timer# owner: root# group: audiouser::rw-group::rw-other::---I'm not in the audio group, as recommended by the ALSA instructions:$ groupswheel usersA discussion pointed me to an explanation of why adding the group is unnecessary and sometimes harmful. From there the user groups info said None of these groups is needed for standard desktop permissions like sound, 3D, printing, mounting, etc. as long as the logind session isn't broken. 
Following directions to session permission troubleshooting, I finally found a discrepancy - it says the output of loginctl show-session $XDG_SESSION_ID should contain Remote=no and Active=yes, but I get the following:$ loginctl show-session $XDG_SESSION_IDControlGroupHierarchy=/userResetControllers=cpuNAutoVTs=6KillExcludeUsers=rootKillUserProcesses=noIdleHint=yesIdleSinceHint=0IdleSinceHintMonotonic=0InhibitDelayMaxUSec=5sHandlePowerKey=poweroffHandleSuspendKey=suspendHandleHibernateKey=hibernateHandleLidSwitch=suspendIdleAction=ignoreIdleActionUSec=30minPreparingForShutdown=noPreparingForSleep=noFrom there I found info on preserving the session, which doesn't seem to be applicable - my /etc/X11/xinit/xserverrc is unmodified since the installation:$ ls -l /etc/X11/xinit/xserverrc-rw-r--r-- 1 root root 132 Oct 31 2012 /etc/X11/xinit/xserverrcSLiM seems to be running fine:$ systemctl status slim.serviceslim.service - SLiM Simple Login Manager Loaded: loaded (/usr/lib/systemd/system/slim.service; enabled) Active: active (running) since Sat 2013-08-10 00:00:39 CEST Main PID: 258 (slim) CGroup: name=systemd:/system/slim.service 258 /usr/bin/slim -nodaemon 292 /usr/bin/X -nolisten tcp vt07 -auth /var/run/slim.auth 416 /usr/bin/gnome-keyring-daemon --daemonize --login 418 awesome 423 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 472 xscreensaver -no-splash 479 firefox 481 java -Xmx192M -jar /usr/share/java/jedit/jedit.jar -reuseview 483 pidgin 485 xterm 500 bash 581 /usr/lib/at-spi2-core/at-spi-bus-launcher 641 xterm 643 bash 1006 git gui 1007 wish /usr/lib/git-core/git-gui -- 1367 dbus-launch --autolaunch e943bbb765d74fceb0393a55ceebfd1d --binary-syntax --close-stderr 1368 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 1403 ekiga 1405 /usr/lib/GConf/gconfd-2 1476 thunar 1478 /usr/lib/xfce4/xfconf/xfconfd 1589 systemctl status slim.serviceWhat do I need to do to fix the logind session, and thus (presumably) the audio permissions? | How to preserve the logind session in Arch Linux? | arch linux;audio;systemd;alsa;logind | null |
_unix.20784 | What's the most concise way to resolve a hostname to an IP address in a Bash script? I'm using Arch Linux. | How can I resolve a hostname to an IP address in a Bash script? | linux;bash;networking;dns | You can use getent, which comes with glibc (so you almost certainly have it on Linux). This resolves using gethostbyaddr/gethostbyname2, and so also will check /etc/hosts/NIS/etc:getent hosts unix.stackexchange.com | awk '{ print $1 }'Or, as Heinzi said below, you can use dig with the +short argument (queries DNS servers directly, does not look at /etc/hosts/NSS/etc) :dig +short unix.stackexchange.comIf dig +short is unavailable, any one of the following should work. All of these query DNS directly and ignore other means of resolution:host unix.stackexchange.com | awk '/has address/ { print $4 }'nslookup unix.stackexchange.com | awk '/^Address: / { print $2 }'dig unix.stackexchange.com | awk '/^;; ANSWER SECTION:$/ { getline ; print $5 }'If you want to only print one IP, then add the exit command to awk's workflow.dig +short unix.stackexchange.com | awk '{ print ; exit }'getent hosts unix.stackexchange.com | awk '{ print $1 ; exit }'host unix.stackexchange.com | awk '/has address/ { print $4 ; exit }'nslookup unix.stackexchange.com | awk '/^Address: / { print $2 ; exit }'dig unix.stackexchange.com | awk '/^;; ANSWER SECTION:$/ { getline ; print $5 ; exit }' |
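An addition, not part of the accepted answer: if the surrounding script happens to be Python rather than pure bash, the standard library can do the same lookup. socket.gethostbyname resolves through NSS (like getent), so /etc/hosts is honoured too; the hostname below is just the example used in the answer.

import socket

print(socket.gethostbyname('unix.stackexchange.com'))   # first IPv4 address

# All addresses, IPv4 and IPv6 (entries may repeat per socket type):
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo('unix.stackexchange.com', None):
    print(sockaddr[0])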
_unix.311819 | what causes processes with no name in htop? http://image.prntscr.com/image/5ef407a1f99a4c9692db179a3afb2516.pngthis is a fully up to date debian 8.6 system, running htop 1.0.3 as root, amd64. and unix.stackexchange.com seems to shrink the image to an unreadable size, i recommend opening the image url http://image.prntscr.com/image/5ef407a1f99a4c9692db179a3afb2516.png directly | what causes htop processes with no name? | process;htop | htop displays the process's command line with spaces between the arguments. (The first argument, argument number 0, is conventionally the command name passed by the parent process.)A process may overwrite its command line arguments with a string of the same length or shorter. A few programs use this to convey information about the state of the program. Screen sets the first argument (command name) to uppercase in the background process that manages the sessions and leaves the usually lowercase command name in the front-end process that runs in a terminal that's attached to the session..It's also possible to start a process with no command line arguments. It's very unusual: conventionally the first argument is the command name. But it's technically possible.While this could be a display bug, or the effect of a command name containing carriage returns, the most likely explanation is that this process (currently) has no arguments. You can check by asking the kernel directly:cat -A /proc/12727/cmdline; echoThis displays the arguments with control characters replaced by a visual representation. The arguments are separated by ^@.You can find other information by exploring /proc/12727, for example /proc/12727/exe is a symbolic link to the executable that's running in this process and /proc/12727/fd shows what files the process has open. You can also display this information with lsof -p12727.ps l 12727 will show other information about this process, in particular its parent process ID (PPID). (You can also configure htop to show this information by activating the corresponding column in the settings.) |
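A small add-on sketch, not from the answer above: the same check as cat -A /proc/PID/cmdline, done from Python. Arguments in /proc/<pid>/cmdline are separated by NUL bytes, so an empty result means the process really has no arguments for htop to display.

import sys

pid = sys.argv[1] if len(sys.argv) > 1 else 'self'
with open('/proc/{}/cmdline'.format(pid), 'rb') as f:
    raw = f.read()

args = [chunk.decode('utf-8', 'replace') for chunk in raw.split(b'\0') if chunk]
print('{} argument(s):'.format(len(args)), args)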
_datascience.13044 | I'm not quite sure I understand the bag-of-visual-words representation, so I may misformulate my question. What I'm currently looking for is an open source library (possibly with a Python API). I give it pictures as input, and its output is a set of (sparse) features, so that I can do my own processing based on these features. Ideally, I would like this piece of software to work without an internet connection (so that I can work with it while on a plane). EDIT: I just learnt that Facebook recently (summer 2016) released some of its image recognition code (namely multipathnet, deepmask and sharpmask) | Is there an open source implementation for bag-of-visual words? | image classification;computer vision | There is one implementation of BoVW in OpenCV. You can find the documentation here: http://docs.opencv.org/2.4/modules/features2d/doc/object_categorization.html
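A minimal offline sketch of the OpenCV bag-of-visual-words classes pointed to by the answer; this is an illustration added here, not from the original post. The image paths are placeholders, and exact constructor names vary a little between OpenCV versions (SIFT_create is assumed to be available, as in OpenCV 4.4+).

import cv2
import numpy as np

# Placeholder training images; replace with your own grayscale images.
train_images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ['a.jpg', 'b.jpg']]

sift = cv2.SIFT_create()              # older builds: cv2.xfeatures2d.SIFT_create()
trainer = cv2.BOWKMeansTrainer(100)   # vocabulary of 100 visual words

for img in train_images:
    _, desc = sift.detectAndCompute(img, None)
    if desc is not None:
        trainer.add(np.float32(desc))

vocabulary = trainer.cluster()        # k-means over all collected descriptors

extractor = cv2.BOWImgDescriptorExtractor(sift, cv2.BFMatcher(cv2.NORM_L2))
extractor.setVocabulary(vocabulary)

# Histogram of visual words (the sparse feature vector) for one image:
hist = extractor.compute(train_images[0], sift.detect(train_images[0], None))
print(hist.shape)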
_unix.368598 | I'm going to reduce SSD writes, so I'd like to keep changes to some directories (like /var/log or browser cache) in memory and syncing to permanent storage periodically. I should reduce memory size as much as possible, so only changes should be saved to memory;changes in memory should be stored compressed (zram)after periodical (10-30 min or so on poweroff time) syncing to persistent storage changes should be removed from memory.What is a better way to do this? I can get first 2 items using overlayfs, but I'm not sure how to sync merged dir to lower online and clean upper dir in proper way. | Proper way to create persistent RAM fs | synchronization;overlayfs | null |
_softwareengineering.39687 | We have all seen integer, floating point, string, and the occasional decimal type. What are some of the strangest or most unique types you have encountered, useful or not? | Interesting or unique types in programming languages? | programming languages;language features;type systems;data types | null
_codereview.134065 | I have to modify the number of points in this XML in order to test the performance of another program of mine. Here is an example of the XML I have to modify.performance.xml:<?xml version=1.0 encoding=UTF-8 ?><!-- ICCP Local Control Center Configuration --><!DOCTYPE LocalControlCenter SYSTEM C:\Program Files\SISCO\IccpCfg\IccpCfg.dtd><LocalControlCenter xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance xsi:noNamespaceSchemaLocation=C:\Program Files\SISCO\IccpCfg\IccpCfg.xsd> <Name>Osiris_Control_Center</Name> <MaxDsTs>10</MaxDsTs> <MaxDataSets>10</MaxDataSets> <MaxMmsMsgSize>32000</MaxMmsMsgSize> <Description>calling site</Description> <LocalObjects> <LocalDataValues Count=15> <Ldv> <Name>Osiris_Local_Data_0001</Name> <DataType>State</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0002</Name> <DataType>StateQ</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0003</Name> <DataType>StateQTimeTag</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0004</Name> <DataType>StateExtended</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0005</Name> <DataType>StateQTimeTagExtended</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0006</Name> <DataType>Real</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0007</Name> <DataType>RealQ</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0008</Name> <DataType>RealQTimeTag</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0009</Name> <DataType>RealExtended</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0010</Name> <DataType>RealQTimeTagExtended</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0011</Name> <DataType>Discrete</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0012</Name> <DataType>DiscreteQ</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0013</Name> <DataType>DiscreteQTimeTag</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0014</Name> <DataType>DiscreteExtended</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> <Ldv> <Name>Osiris_Local_Data_0015</Name> <DataType>DiscreteQTimeTagExtended</DataType> <NormalSource>Telemetered</NormalSource> </Ldv> </LocalDataValues> <LocalDevices Count=1> <Ldev> <DeviceName>Osiris_Device</DeviceName> <DeviceType>Discrete</DeviceType> <Sbo>N</Sbo> <ChkBackId>0</ChkBackId> <SelTime>30</SelTime> <TagEn>N</TagEn> </Ldev> </LocalDevices> <LocalInfoMsgs Count=0> </LocalInfoMsgs> </LocalObjects> <RemoteControlCenters Count=1> <RemoteControlCenter> <Name>flcon1</Name> <Version>2000-08</Version> <Description>The site that Listens for the connect request</Description> <BilateralTable> <Name>Osiris_Bilateral_1</Name> <Id>1</Id> <LocalDomain>Osiris</LocalDomain> <RemoteDomain>flcon1</RemoteDomain> <ShortestInterval>1</ShortestInterval> <Blocks>1,2</Blocks> </BilateralTable> <Associations Count=1> <Association> <Name>PrimaryLink</Name> <LocalAr>Osiris</LocalAr> <RemoteAr>flcon1</RemoteAr> <ConnectRole>Called</ConnectRole> <AssocRetryTime>10</AssocRetryTime> <InitiateTimeout>30</InitiateTimeout> <ConcludeTimeout>30</ConcludeTimeout> <AssocHeartbeatTime>10</AssocHeartbeatTime> 
<ServiceRole>Client</ServiceRole> <ServiceRole>Server</ServiceRole> <MaxMmsMsgSize>32000</MaxMmsMsgSize> <MaxReqPend>5</MaxReqPend> <MaxIndPend>5</MaxIndPend> <MaxNest>5</MaxNest> </Association> </Associations> <ServerObjects> <NumVccDv>0</NumVccDv> <NumVccDev>0</NumVccDev> <NumVccInfoMsg>0</NumVccInfoMsg> <NumVccDs>0</NumVccDs> <NumIccDv>15</NumIccDv> <NumIccDev>0</NumIccDev> <NumIccInfoMsg>0</NumIccInfoMsg> <NumIccDs>0</NumIccDs> <ServerDataValues Count=15> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0001</ObjName> <DataType>State</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0002</ObjName> <DataType>StateQ</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0003</ObjName> <DataType>StateQTimeTag</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0004</ObjName> <DataType>StateExtended</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0005</ObjName> <DataType>StateQTimeTagExtended</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0006</ObjName> <DataType>Real</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0007</ObjName> <DataType>RealQ</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0008</ObjName> <DataType>RealQTimeTag</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0009</ObjName> <DataType>RealExtended</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0010</ObjName> <DataType>RealQTimeTagExtended</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0011</ObjName> <DataType>Discrete</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0012</ObjName> <DataType>DiscreteQ</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0013</ObjName> <DataType>DiscreteQTimeTag</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0014</ObjName> <DataType>DiscreteExtended</DataType> <ReadOnly>Y</ReadOnly> </Sdv> <Sdv> <ObjName Scope=ICC>Osiris_Local_Data_0015</ObjName> <DataType>DiscreteQTimeTagExtended</DataType> <ReadOnly>Y</ReadOnly> </Sdv> </ServerDataValues> <ServerDevices Count=0> </ServerDevices> <ServerInfoMsgs Count=0> </ServerInfoMsgs> <ServerDataSets Count=0> </ServerDataSets> </ServerObjects> <ClientObjects> <NumVccDv>0</NumVccDv> <NumVccDev>0</NumVccDev> <NumVccInfoMsg>0</NumVccInfoMsg> <NumIccDv>15</NumIccDv> <NumIccDev>0</NumIccDev> <NumIccInfoMsg>0</NumIccInfoMsg> <NumDs>1</NumDs> <NumDsTs>1</NumDsTs> <ClientDataValues Count=15> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0001</ObjName> <DataType>State</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0002</ObjName> <DataType>StateQ</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0003</ObjName> <DataType>StateQTimeTag</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0004</ObjName> <DataType>StateExtended</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0005</ObjName> <DataType>StateQTimeTagExtended</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0006</ObjName> <DataType>Real</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0007</ObjName> <DataType>RealQ</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName 
Scope=ICC>Osiris_Test_Data_0008</ObjName> <DataType>RealQTimeTag</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0009</ObjName> <DataType>RealExtended</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0010</ObjName> <DataType>RealQTimeTagExtended</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0011</ObjName> <DataType>Discrete</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0012</ObjName> <DataType>DiscreteQ</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0013</ObjName> <DataType>DiscreteQTimeTag</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0014</ObjName> <DataType>DiscreteExtended</DataType> <ReadOnly>Y</ReadOnly> </Cdv> <Cdv> <ObjName Scope=ICC>Osiris_Test_Data_0015</ObjName> <DataType>DiscreteQTimeTagExtended</DataType> <ReadOnly>Y</ReadOnly> </Cdv> </ClientDataValues> <ClientDevices Count=0> </ClientDevices> <ClientInfoMsgs Count=0> </ClientInfoMsgs> <ClientDataSets Count=1> <Cds> <Name>Test_Set</Name> <Transfer_Set_Name>Y</Transfer_Set_Name> <Transfer_Set_Time_Stamp>Y</Transfer_Set_Time_Stamp> <DSConditions_Detected>Y</DSConditions_Detected> <Event_Code_Detected>Y</Event_Code_Detected> <CdsVars Count=15> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0001/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0002/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0003/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0004/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0005/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0006/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0007/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0008/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0009/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0010/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0011/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0012/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0013/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0014/> <CdsVar Scope=ICC Type=Dv Name=Osiris_Test_Data_0015/> </CdsVars> </Cds> </ClientDataSets> <ClientDataSetTransferSets Count=1> <Cdsts> <DsName>Test_Set</DsName> <AssocName>PrimaryLink</AssocName> <Interval>5</Interval> <Rbe>N</Rbe> <AllChangesReported>N</AllChangesReported> <BufferTime>2</BufferTime> <Integrity>0</Integrity> <StartTime>0</StartTime> <Critical>N</Critical> <BlockData>N</BlockData> <Tle>60</Tle> <DsCondInterval>Y</DsCondInterval> <DsCondIntegrity>N</DsCondIntegrity> <DsCondChange>N</DsCondChange> <DsCondOperator>N</DsCondOperator> <DsCondExternal>N</DsCondExternal> </Cdsts> </ClientDataSetTransferSets> <ClientDiscovery Enable=N> <AssocName></AssocName> <Execute>Never</Execute> <GetNameList>N</GetNameList> <RemoveMissing>Y</RemoveMissing> <RemoveMistyped>Y</RemoveMistyped> <RemoveReadError>Y</RemoveReadError> <DbDeleteMissing>Y</DbDeleteMissing> <DbCorrectMistyped>Y</DbCorrectMistyped> <CfgGetVaa>N</CfgGetVaa> <CfgRead>Y</CfgRead> <NewAddDv>N</NewAddDv> <DbAddNew>N</DbAddNew> <NewReadOnly>N</NewReadOnly> <NewGetVaa>N</NewGetVaa> <NewRead>N</NewRead> <DbAutoAccept>N</DbAutoAccept> <WriteXml>N</WriteXml> </ClientDiscovery> <ClientAutoDsts Enable=N> <AssocName>PrimaryLink</AssocName> <AssignByType>N</AssignByType> <AutoParam Enable=N Type=All> <AutoDs> <MaxDstsPduSize>32000</MaxDstsPduSize> <MaxDvPerDs>0</MaxDvPerDs> <Conservative>Y</Conservative> <Transfer_Set_Name>Y</Transfer_Set_Name> 
<Transfer_Set_Time_Stamp>Y</Transfer_Set_Time_Stamp> <DSConditions_Detected>Y</DSConditions_Detected> <Event_Code_Detected>N</Event_Code_Detected> </AutoDs> <AutoDsts> <StartTime>0</StartTime> <Rbe>Y</Rbe> <AllChangesReported>N</AllChangesReported> <Critical>N</Critical> <BlockData>N</BlockData> <DsCondInterval>N</DsCondInterval> <DsCondIntegrity>Y</DsCondIntegrity> <DsCondChange>Y</DsCondChange> <DsCondOperator>N</DsCondOperator> <DsCondExternal>N</DsCondExternal> <Interval>10</Interval> <Integrity>30</Integrity> <BufferTime>2</BufferTime> <Tle>60</Tle> </AutoDsts> </AutoParam> <AutoParam Enable=N Type=Real> <AutoDs> <MaxDstsPduSize>32000</MaxDstsPduSize> <MaxDvPerDs>0</MaxDvPerDs> <Conservative>Y</Conservative> <Transfer_Set_Name>Y</Transfer_Set_Name> <Transfer_Set_Time_Stamp>Y</Transfer_Set_Time_Stamp> <DSConditions_Detected>Y</DSConditions_Detected> <Event_Code_Detected>N</Event_Code_Detected> </AutoDs> <AutoDsts> <StartTime>0</StartTime> <Rbe>Y</Rbe> <AllChangesReported>N</AllChangesReported> <Critical>N</Critical> <BlockData>N</BlockData> <DsCondInterval>N</DsCondInterval> <DsCondIntegrity>Y</DsCondIntegrity> <DsCondChange>Y</DsCondChange> <DsCondOperator>N</DsCondOperator> <DsCondExternal>N</DsCondExternal> <Interval>10</Interval> <Integrity>30</Integrity> <BufferTime>2</BufferTime> <Tle>60</Tle> </AutoDsts> </AutoParam> <AutoParam Enable=N Type=Discrete> <AutoDs> <MaxDstsPduSize>32000</MaxDstsPduSize> <MaxDvPerDs>0</MaxDvPerDs> <Conservative>Y</Conservative> <Transfer_Set_Name>Y</Transfer_Set_Name> <Transfer_Set_Time_Stamp>Y</Transfer_Set_Time_Stamp> <DSConditions_Detected>Y</DSConditions_Detected> <Event_Code_Detected>N</Event_Code_Detected> </AutoDs> <AutoDsts> <StartTime>0</StartTime> <Rbe>Y</Rbe> <AllChangesReported>N</AllChangesReported> <Critical>N</Critical> <BlockData>N</BlockData> <DsCondInterval>N</DsCondInterval> <DsCondIntegrity>Y</DsCondIntegrity> <DsCondChange>Y</DsCondChange> <DsCondOperator>N</DsCondOperator> <DsCondExternal>N</DsCondExternal> <Interval>10</Interval> <Integrity>30</Integrity> <BufferTime>2</BufferTime> <Tle>60</Tle> </AutoDsts> </AutoParam> <AutoParam Enable=N Type=State> <AutoDs> <MaxDstsPduSize>32000</MaxDstsPduSize> <MaxDvPerDs>0</MaxDvPerDs> <Conservative>Y</Conservative> <Transfer_Set_Name>Y</Transfer_Set_Name> <Transfer_Set_Time_Stamp>Y</Transfer_Set_Time_Stamp> <DSConditions_Detected>Y</DSConditions_Detected> <Event_Code_Detected>N</Event_Code_Detected> </AutoDs> <AutoDsts> <StartTime>0</StartTime> <Rbe>Y</Rbe> <AllChangesReported>N</AllChangesReported> <Critical>N</Critical> <BlockData>N</BlockData> <DsCondInterval>N</DsCondInterval> <DsCondIntegrity>Y</DsCondIntegrity> <DsCondChange>Y</DsCondChange> <DsCondOperator>N</DsCondOperator> <DsCondExternal>N</DsCondExternal> <Interval>10</Interval> <Integrity>30</Integrity> <BufferTime>2</BufferTime> <Tle>60</Tle> </AutoDsts> </AutoParam> </ClientAutoDsts> </ClientObjects> </RemoteControlCenter> </RemoteControlCenters></LocalControlCenter>To streamline the setup, I created this Python script (despite me not knowing Python very well). 
It uses the standard Python library, and I would like to keep it that way.setup.py:from xml.etree import ElementTree as etreedef strip(elem): for elem in elem.getiterator(): if(elem.text): elem.text = elem.text.strip() if(elem.tail): elem.tail = elem.tail.strip()def removeBranch(tree, name): root = tree.getroot() parent_map = dict((c, p) for p in tree.getiterator() for c in p) for item in list(root.getiterator(name)): parent_map[item].remove(item)def addLocalDataBranch(tree, num): parent = tree.find('.//LocalDataValues') parent.set('Count', str(num)) for val in xrange(1, num+1): item = etree.SubElement(parent, 'Ldv') name = etree.SubElement(item, 'Name') name.text = 'Osiris_Local_Data_' + str(val) type = etree.SubElement(item, 'DataType') type.text = 'RealQTimeTagExtended' source = etree.SubElement(item, 'NormalSource') source.text = 'Telemetered'def addServerDataBranch(tree, num): obj = tree.find('.//ServerObjects/NumIccDv') obj.text = str(num) parent = tree.find('.//ServerDataValues') parent.set('Count', str(num)) for val in xrange(1, num+1): item = etree.SubElement(parent, 'Sdv') name = etree.SubElement(item, 'ObjName') name.set('Scope', 'ICC') name.text = 'Osiris_Local_Data_' + str(val) type = etree.SubElement(item, 'DataType') type.text = 'RealQTimeTagExtended' source = etree.SubElement(item, 'ReadOnly') source.text = 'Y'def addClientDataBranch(tree, num): obj = tree.find('.//ClientObjects/NumIccDv') obj.text = str(num) parent = tree.find('.//ClientDataValues') parent.set('Count', str(num)) for val in xrange(1, num+1): item = etree.SubElement(parent, 'Cdv') name = etree.SubElement(item, 'ObjName') name.set('Scope', 'ICC') name.text = 'Osiris_Test_Data_' + str(val) type = etree.SubElement(item, 'DataType') type.text = 'RealQTimeTagExtended' source = etree.SubElement(item, 'ReadOnly') source.text = 'Y'def addDataSetBranch(tree, num): parent = tree.find('.//CdsVars') parent.set('Count', str(num)) for val in xrange(1, num+1): item = etree.SubElement(parent, 'CdsVar') item.set('Scope', 'ICC') item.set('Type', 'Dv') item.set('Name', 'Osiris_Test_Data_' + str(val))def editInterval(tree, num): parent = tree.find('.//Interval') parent.text = str(num)if __name__ == '__main__': import sys if len(sys.argv[1:]) != 3: print('Usage: python setup.py [xml] [points] [time]') sys.exit(1) tree = etree.parse(sys.argv[1]) removeBranch(tree, 'Ldv') removeBranch(tree, 'Sdv') removeBranch(tree, 'Cdv') removeBranch(tree, 'CdsVar') addLocalDataBranch(tree, int(sys.argv[2])) addServerDataBranch(tree, int(sys.argv[2])) addClientDataBranch(tree, int(sys.argv[2])) addDataSetBranch(tree, int(sys.argv[2])) editInterval(tree, int(sys.argv[3])) strip(tree.getroot()) #print etree.tostring(tree.getroot(), 'utf-8') file = open(sys.argv[1], 'wb') file.write(etree.tostring(tree.getroot(), 'utf-8')) file.close()As you can see, there is lots of repetition, and I'm sure there's more Python-ic ways to approach this problem. Any suggestions? 
| #TODO Remove duplication in XML parsing | python;beginner;parsing;python 2.7;xml | Low hanging fruitMost of this will come down to reading PEP8, however I'll state them here anyway.From the top down:If statements don't need brackets, so don't put them around them.Functions at module level should have two new lines between them and other things.Rather than one as you have now.You should use snake_case for functions and variables.You can use a dictionary comprehension, rather than a generator comprehension fed to a dict constructor.You should put a space both sides of operators, num + 1, the only exception to this is to show precedence.Don't overwrite builtins type. Instead you can use type_ or a synonym.Put all imports at the top of the script.Use with rather than manually opening and closing.Use a main function, this keeps things out of the global scope.Removing repetitive codeAt first I was doing some funky stuff to remove your duplicate code, but you can keep it simple.Just use generators, and make a few more functions.I'd make four more functions to reduce the repetition of your code.You can also make set_obj_text which changes the attribute text.And so you can use this very small function:def set_obj_text(node, num, obj): node.find(obj).text = str(num)This will allow us to remove editInterval and instead use set_obj_text(tree, num, './/Interval').After this is where we'll remove more of the duplicate code.By first getting a parent, we can then use a generator we can lazily get all the items.This is used in most of the other functions.This can be just be finding the node, and then the for loop with a yield of etree.SubElement.def get_items(node, num, parent, item): parent_ = node.find(parent) parent_.set('Count', str(num)) for i in xrange(1, num + 1): yield i, etree.SubElement(parent_, item)This will allow us to change all the add*Branch functions.Allowing add_data_set_branch to be:def add_data_set_branch(node, num): for val, item in get_items(node, num, './/CdsVars', 'CdsVar'): item.set('Scope', 'ICC') item.set('Type', 'Dv') item.set('Name', 'Osiris_Test_Data_' + str(val))After this we can change all the add*Branch except add_data_set_branch to remove more duplicate code.This is as you mostly only change the nodes that you visit, being parent, item, name, name.text, source and the source.text.And so I'd make a function that takes these nodes names and does the repeated code.To make the return easier to use I'd use collections.namedtuple so that we can use ret.name rather than say ret[2].This can result in:def walk_branch(node, num, custom_values): v = custom_values.get for val, item in get_items(node, num, v('parent'), v('item')): name = etree.SubElement(item, v('name')) name.text = v('name_text') + str(val) type_ = etree.SubElement(item, 'DataType') type_.text = 'RealQTimeTagExtended' source = etree.SubElement(item, v('source')) source.text = v('source_text') yield branch(val, item, name, type_, source)And allows us to change add_server_data_branch to:def add_server_data_branch(node, num): set_obj_text(node, num, './/ServerObjects/NumIccDv') for ret in walk_branch( node, num, { 'parent': './/ServerDataValues', 'item': 'Sdv', 'name': 'ObjName', 'name_text': 'Osiris_Local_Data_', 'source': 'ReadOnly', 'source_text': 'Y', }): ret.name.set('Scope', 'ICC')After this, I'd change how you call remove_branch.Instead of manually calling it a lot, you can use a for loop.And you can do the same for all the data*branchs.for branch in ['Ldv', 'Sdv', 'Cdv', 'CdsVar']: remove_branch(tree, 
branch)for fn in [add_local_data_branch, add_server_data_branch, add_client_data_branch, add_data_set_branch]: fn(tree, args.points)This removes most of the duplicate code.Finally I'd use argparse rather than sys.argv.This is as it checks the input for you so you can make it check if points is a number and it exit with a message about it.Rather than Python abruptly exiting with a ValueError.You can also add help information which will display if you use file.py -h,and it also automates your message 'Usage: python setup.py [xml] [points] [time]'.The only downside to this is you'll need to read more documentation to add more features to the argument input.All the above changes resulted in me getting:from xml.etree import ElementTree as etreefrom collections import namedtupleimport argparsebranch = namedtuple('Branch', ['val', 'item', 'name', 'type', 'source'])def set_obj_text(node, num, obj): node.find(obj).text = str(num)def get_items(node, num, parent, item): parent_ = node.find(parent) parent_.set('Count', str(num)) for i in xrange(1, num + 1): yield i, etree.SubElement(parent_, item)def walk_branch(node, num, custom_values): v = custom_values.get for val, item in get_items(node, num, v('parent'), v('item')): name = etree.SubElement(item, v('name')) name.text = v('name_text') + str(val) type_ = etree.SubElement(item, 'DataType') type_.text = 'RealQTimeTagExtended' source = etree.SubElement(item, v('source')) source.text = v('source_text') yield branch(val, item, name, type_, source)def strip(elem): for elem in elem.getiterator(): if elem.text: elem.text = elem.text.strip() if elem.tail: elem.tail = elem.tail.strip()def remove_branch(node, name): parent_map = {c: p for p in node.getiterator() for c in p} for item in node.getroot().getiterator(name): parent_map[item].remove(item)def add_local_data_branch(node, num): # Consume generator for _ in walk_branch( node, num, { 'parent': './/LocalDataValues', 'item': 'Ldv', 'name': 'Name', 'name_text': 'Osiris_Local_Data_', 'source': 'NormalSource', 'source_text': 'Telemetered', }): passdef add_server_data_branch(node, num): set_obj_text(node, num, './/ServerObjects/NumIccDv') for ret in walk_branch( node, num, { 'parent': './/ServerDataValues', 'item': 'Sdv', 'name': 'ObjName', 'name_text': 'Osiris_Local_Data_', 'source': 'ReadOnly', 'source_text': 'Y', }): ret.name.set('Scope', 'ICC')def add_client_data_branch(node, num): set_obj_text(node, num, './/ClientObjects/NumIccDv') for ret in walk_branch( node, num, { 'parent': './/ClientDataValues', 'item': 'Cdv', 'name': 'ObjName', 'name_text': 'Osiris_Test_Data_', 'source': 'ReadOnly', 'source_text': 'Y', }): ret.name.set('Scope', 'ICC')def add_data_set_branch(node, num): for val, item in get_items(node, num, './/CdsVars', 'CdsVar'): item.set('Scope', 'ICC') item.set('Type', 'Dv') item.set('Name', 'Osiris_Test_Data_' + str(val))def get_arguments(): parser = argparse.ArgumentParser(description='Convert XML to test performance.') parser.add_argument('xml', help='path to xml file.') parser.add_argument('points', type=int, help='') parser.add_argument('time', type=int, help='') return parser.parse_args()def main(): args = get_arguments() tree = etree.parse(args.xml) for branch in ['Ldv', 'Sdv', 'Cdv', 'CdsVar']: remove_branch(tree, branch) for fn in [add_local_data_branch, add_server_data_branch, add_client_data_branch, add_data_set_branch]: fn(tree, args.points) set_obj_text(tree, args.time, './/Interval') strip(tree.getroot()) with open(args.xml, 'wb') as f: f.write(etree.tostring(tree.getroot(), 
'utf-8'))if __name__ == '__main__': main() |
_unix.379834 | My install of Linux throws the following errors upon boot: efi: requested map not found. esrt: ESRT header is not in the memory map. i8042: Can't read CTR while initializing i8042. Would these errors be indicative of disk corruption? Thanks! | Linux Throws Error: Can't read CTR while initializing i8042? | boot;encryption | null |
_unix.327250 | I have a systemd service file where ExecStartPost is used to start a long running process.Is this process affected in any way by reload being called on the service (assuming ExecReload doesn't do anything related to that process)?What about when stop is called?Will calling start on a stopped service invoke the ExecStartPost commands again? | Does restarting, reloading or stopping a systemd service have any effect on ExecStartPost? | systemd | null |
_unix.86720 | I am running Arch based on the Linux 3.10.5-1 kernel. The system uses the new de-facto naming conventions of ethernet interfaces enp*s* and wlp* etc. This is a problem however, as my educational institution is using a program called Maple 17. Maple's licensing system is dependant on the existence of an interface named eth0 because it must retrieve the MAC address of it to verify the license. It's a bad solution, but I have to work around it.This means I will need an eth0 interface with any MAC address at all (As I can retrieve a new license file for the new MAC address) that doesn't necessarily have to work. In fact, it should just be down at all times. I reckon there are several ways to attempt to solve this issue, but I haven't been able to find anything about any of the ideas.Creating an adapter without connectivityCreating an alias for enp3s0 named eth0Renaming enp3s0 or the loopback interface.The things I was able to find only covered changing to the newer conventions and on older versions of udev. They only worked on RHEL and SuSe anyway. I tried it without luck though. (persistent-net-names.rules and net-name-slot.rules, both of them just made my actual interface stop working and my wlan interface disappear) | Can I create a virtual ethernet interface named eth0? | networking;linux kernel;iproute | Sure. You can create a tap device fairly easily, either with tunctl (from uml-utilities, at least on Debian):# tunctl -t eth0Set 'eth0' persistent and owned by uid 0# ifconfig eth0eth0 Link encap:Ethernet HWaddr a6:9b:fe:d8:d9:5e BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)Or with ip:# ip tuntap add dev eth0 mode tap# ip link ls dev eth07: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 500 link/ether 0e:55:9b:6f:57:6c brd ff:ff:ff:ff:ff:ffProbably you should prefer the second method, as ip is preferred network tool on Linux, and you likely already have it installed.Also, both of these are creating the tap device with aI'd guessrandom local MAC, you can set the MAC to a fixed value in any of the normal ways. |
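A minimal sketch of that last step, using the iproute2 commands already shown; the MAC address below is an arbitrary, locally administered value chosen only for illustration, since any address will do once a new license file is issued for it:

```sh
# Create the dummy eth0 as a tap device and pin its MAC address.
ip tuntap add dev eth0 mode tap
ip link set dev eth0 address 02:00:00:00:00:01   # arbitrary, locally administered
ip link set dev eth0 down                        # keep it down; it never carries traffic
ip link show dev eth0                            # confirm the name and MAC the license check will see
```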
_cs.74053 | In Sipser, it is given on p. 284: Let t(n) be a function, where t(n) ≥ n. Then every t(n)-time non-deterministic single-tape Turing machine has an equivalent 2^O(t(n))-time deterministic single-tape Turing machine. In the given proof, at the end they conclude O(t(n) · b^t(n)) = 2^O(t(n)), where b is the maximum number of legal choices given by N's transition function. I am unable to understand how they concluded this. | Time complexity of NTM vs TM | complexity theory;turing machines;time complexity;nondeterminism | null |
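For what it's worth, the arithmetic behind that step can be sketched as follows; this is only the bound-counting, not Sipser's full argument, and it assumes b ≥ 2 is the constant branching factor of N:

```latex
% Each of the two factors is at most an exponential in t(n):
\[
  t(n)\, b^{t(n)} \;\le\; 2^{t(n)} \cdot 2^{t(n)\log_2 b}
                  \;=\; 2^{(1+\log_2 b)\, t(n)}
                  \;=\; 2^{O(t(n))}
\]
% Since b is a fixed constant, (1 + log2 b) is a constant factor inside the
% exponent, and that constant is exactly what the O(.) in the exponent absorbs.
```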
_unix.382853 | blockdev has this option --getmaxsect to get max sectors per request. The BLOCKDEV(8) manual page, however, doesn't state what max sectors per request means. I ran blockdev --getmaxsect /dev/sda on my system and it printed 2560. | blockdev command: what is maximum sectors per request? | kernel;hard disk;command;io;block device | blockdev is a basic interface to the block device ioctls; in --getmaxsect's case, it uses BLKSECTGET, which returns the maximum number of sectors for a request in the queue associated with the block device. There doesn't seem to be much documentation on this; include/linux/blkdev.h is relevant. It is mentioned briefly in chapter 12 of Linux Device Drivers, 2nd edition: BLKSECTGET, BLKSECTSET: These commands retrieve and set the maximum number of sectors per request (as stored in max_sectors). In summary, blockdev --getmaxsect returns the maximum number of sectors which can be used in a single request to the block device. |
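A quick way to sanity-check that figure against the request-queue limits exposed in sysfs; /dev/sda and the usual 512-byte sector unit are assumptions here, so the numbers shown are illustrative rather than guaranteed:

```sh
# Sectors per request as reported via the BLKSECTGET ioctl (512-byte units).
blockdev --getmaxsect /dev/sda              # e.g. 2560
# The same limit as the block layer expresses it, in kilobytes.
cat /sys/block/sda/queue/max_sectors_kb     # e.g. 1280  (2560 * 512 B = 1280 KiB)
# Hard ceiling imposed by the hardware/driver, for comparison.
cat /sys/block/sda/queue/max_hw_sectors_kb
```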
_unix.88744 | On ubuntu this file exists: /var/log/syslog. However the same file does not appear on CentOS Distributions. What is the equivalent file on CentOS? | what is the CentOS equivalent of /var/log/syslog (on Ubuntu)? | centos;logs | Red Hat family distributions (including CentOS and Fedora) use /var/log/messages and /var/log/secure where Debian-family distributions use /var/log/syslog and /var/log/auth.log.Note that in newer Fedora (or RHEL/CentOS 7 if someone has gone out of their way to configure it this way), you may have no traditional syslog daemon running. In that case, the same data can be shown with journalctl (which defaults to producing text output in the syslog format). |
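For the journal-only setup mentioned at the end, a few illustrative journalctl invocations (output scope can differ slightly between distributions, so treat these as starting points):

```sh
# Everything logged since the current boot, in the familiar syslog-style text format.
journalctl -b
# Follow new messages as they arrive, roughly like tail -f /var/log/messages.
journalctl -f
# Filter by the originating program, e.g. only sshd messages.
journalctl _COMM=sshd
```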
_softwareengineering.266243 | In the code below, I believe it would look more appropriate to make the method argument be of type Comparable[] instead of Object[]. The first reason it would be more appropriate is that one can be safe from runtime exceptions because every passed object has to implement interface Comparable. Second, every object in Java inherits from class Object so one can typecast any object as (Object) to use the methods of class Object.package java.util; public class Arrays {.........public static void sort(Object[] a) { Object[] aux = (Object[])a.clone(); mergeSort(aux, a, 0, a.length, 0); }.........}Is there a reason why type Object[] is preferred over type Comparable[] as an argument type? | Why use arg type `class Object` instead of `Comparable[]`? | java;object oriented;interfaces | null |
_webmaster.19762 | I have a website named indonesiasumateratravel.com, and when I search on Google something like 'indonesiasumateratravel' or something else, my website's name appears telling me it has been hacked. It looks like this:How can I remove that result, it's really annoying me and customer of course.@All:Thank you for the quick responses..!It turns out that the only solution is to report it directly to google, because I realized that the keyword indonesiasuamteratravel is under full control of the rogue website. And now that I've reported it to google, and everything is okay.hehehehe...:-) | Removing a Website Name in Another Website From Google Index | seo;google;google search | null |
_webmaster.105229 | In this article it says that you always should describe the picture with your alt text. For example a picture of a nervous worker on a website dedicated to negotiating higher salaries within the workplace should be named nervous worker shaking boss's hand.But logic states that the alt text for that image should be anxiety while negotiating salary because that is descriptive and more keyword rich. However it is mentioned that including keywords actually hurts your ranking.... I don't get it?How is describing a picture so specifically to the point you don't include a keyword help SEO? | Does SEO improve from picture alt text that doesn't include keywords? | seo;alt attribute | null |
_codereview.32213 | So I have this Pygame 3.3.2 code of 2 classes. I tried to make it as simpler as possible, but ofcourse to show the problems I have with thedesign.import pygameimport sysclass MainWindow(object):''' Handles main surface ''' def __init__(self, width, height): self.WIDTH = width self.HEIGHT = height self.SURFACE = pygame.display.set_mode((width, height), 0, 32) pygame.display.set_caption('Placeholder') self.draw_rect = DrawRect(self) # <-- this line looks really strange to me def update(self): pygame.display.update()class DrawRect(object):''' Draw rect on the surface ''' def __init__(self, window): self.window = window self.my_rect = pygame.Rect((50, 50), (50, 50)) def update(self): pygame.draw.rect(self.window.SURFACE, (0, 0, 255), self.my_rect)def main(): pygame.init() Window = MainWindow(900, 500) Draw = DrawRect(Window) Window.draw_rect.update() # <-- this looks very missleading to me Window.update()main()According to StackOverflow question -If B want to expose the complete interface of A - Indicates Inheritance.If B want to expose only some part of A - Indicates Composition.I don't need all of the content of the MainWindow so I use composition.The naming conventions and specialy the line Window.draw_rect.update() in the main() function.Later on, I will use a class Player, do I need to put something like self.player = Player(self), inside the MainWindow __init__ method?Let's say I want to use the width, height of the window to perform some method for positioning the Player.Is there a better way to write this code, to look profesional and clear? | Object Composition | python;python 3.x;pygame | null |
_softwareengineering.320143 | When designing REST endpoints idempotency is a crucial tool. Say we have a HTTP endpoint accepting PUT which we would like to be idempotent. In case a client makes the same request multiple times, should the server always return the same, or is it acceptable to return an error the second time?If the answer is yes is restapitutorial.com wrong in writing the following?From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result. In other words, making multiple identical requests has the same effect as making a single request. Note that while idempotent operations produce the same result on the server (no side effects), the response itself may not be the same (e.g. a resource's state may change between requestsIf the answer is yes is a document database implementing optimistic concurrency using get+put not idempotent since the second call would result in an conflict?If the answer is yes is there a term for services providing the lesser form of guarantee that making the same request more than once has the same effect on the server state as making it exactly once? | Should an idempotent service always return the same | rest;web services | According to the HTTP 1.1 specification, idempotence is defined as: (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. As this definition only discusses the side-effects of the request, and not the content returned, it is acceptable to return different content. |
_cogsci.5966 | The concept of fairness, in my opinion, is somehow defined by the concept of empathy through some searches by Daniel Goleman in his book Social Intelligence. The book deals with scientific measurement through MRI. Ethic and Moral are also trying to define what is fair. Can you give me some hint achieving a definition of what is fair? i.e. why killing people, slavery, exploiting are not fair stuff? | Is there a scientific definition of fairness? | social psychology | I'd say that 'fairness' would be best treated as a narrow concept not encompassing the whole of empathy/ethics/morals, otherwise the term ceases to be specific and useful. I.e., something could be perceived as both cruel/evil but still fair/just; and harsh 'eye for eye' style punishments are perceived as fair in some cultures but still require some disconnect of empathy.I believe that experiments of Kahneman and others such as the Dictator game are a good place to start - as this way can measure [an aspect of] fairness in a somewhat objective way and observe how the concept of fairness is different for different cultures. |
_unix.271150 | Hello I can't find the option to reassign shortcuts in xfce4-terminal 0.6.3. I'd like to reassign ctrl+c to copy, ctrl+v to paste and ctrl+shift+c to kill process. I know I can do that easily under gnome-terminal but since I'm using xfce I would like to avoid installing all the dependencies for gnome-terminal. Any idea on how to achieve that ? | How to set ctrl+c to copy, ctrl+v to paste and ctrl+shift+c to kill process in xfce4-terminal? | keyboard shortcuts;xfce4 terminal | In xfce terminal go to Edit, hover your mouse over Copyand press ctrl+c.Same goes for paste.Kill process gets automatically reassigned to ctrl+c+shift. |
_unix.348537 | I have a small Linux device that may choose to sync it's time with a handheld device when said device connects to it. My program's been running as root, and I've just been using date --set commands. But I'm trying to move said program to a less privileged user. Since I'm using systemd now, I think I should be using timedatectl to set the time rather than date directly. I've proven to myself how I can do this root. But I don't know how to drive it from non-root.I could use a specific sudo item, but I was hoping to not have my program running sudo. If that's the only way though, I know how to do that. If that's the only way, then just answer that :)I hoped that if I made my user a member of the systemd-timesync group, I might be able to, but with or without said group, I get the following error:> timedatectl set-time 2017-3-2 01:40:30Failed to set time: The name org.freedesktop.PolicyKit1 was not provided by any .service filesI have no idea what that means, or how to fix it, or if I should, or if it's possible. | Allow non-root user to use timedatectl | debian;permissions;systemd;date | I would use sudo. You express reluctance to that approach, but you would only be granting your user root access to run the single timedatectl command.This ought be able to be solved with PolicyKit as well, but it would effectively have the same result of allowing a user to run a single command as root. So the risks would be similar-- and you already understand how to solve the problem with sudo. |
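A sketch of what that single-command grant could look like; the account name, the binary path, and the choice to skip the password prompt are all assumptions to adapt, and the file should be created with visudo:

```sh
# /etc/sudoers.d/timesync  (edit with: visudo -f /etc/sudoers.d/timesync)
# Let the unprivileged service account run only timedatectl as root.
svcuser ALL=(root) NOPASSWD: /usr/bin/timedatectl
# The program would then invoke, for example:
#   sudo timedatectl set-time '2017-03-02 01:40:30'
```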
_softwareengineering.206086 | I'm confused by this post by Mark Seeman.And his comment on IInitializable below:The problem with an Initialize method is the same as with Property Injection (A.K.A. Setter Injection): it creates a temporal coupling between the Initialize method and all other members of the class. Unless you truly can invoke any other member of the class without first invoking the Initialize method, such API design is deceitful and will lead to run-time exceptions. It also becomes much harder to ensure that the object is always in a consistent state.The same time he writes:This issue is similar to the issue of invoking virtual members from the constructor. Conceptually, an injected dependency is equivalent to a virtual member.I thinks this statement is true only if admit that constructed != initialized.What we get now:Dependencies are injected in constructor but it is not recommended to use them.Initialize phase brings complexity and should be avoided.Isn't it contradictory?Imagine class needs to set its state using the provided dependencies. Loading saved setting for example.Init is bad, constructor is bad, so where to perform this operation?And another point:Are not methods like Connection.Open() just another name for Initialize? Question:So can anyone describe a good initialization pattern in the context of Dependency Injection that addresses the concerns Mark Seeman brings up? | Separation of construction and initialization | architecture;object oriented design;constructors;initialization;construction | null |
_reverseengineering.4552 | I'm currently researching my home router (D-Link 2760-U) which has a (kind-of) proprietary ISP firmware. What I'm trying to achieve is understanding how the router is persisting it's configuration settings across reboots.mount output:/dev/mtdblock0 on / type squashfs (ro,relatime)/proc on /proc type proc (rw,relatime)tmpfs on /var type tmpfs (rw,relatime,size=420k)tmpfs on /mnt type tmpfs (rw,relatime,size=16k)sysfs on /sys type sysfs (rw,relatime)I did nvram show and it does contain some wireless configuration such as the encryption in use, wireless modes, pre-shared key but that's about it. Obviously the router has other configurations (such as DNS, PPP, Port forwards etc) that must be stored elsewhere. As it can be observed, all mounted filesystems are volatile and thus configuration cannot be stored in them.Where can the information be saved besides filesystem and NVRAM? How do I go about finding this out? | How can I figure where my router stores it's configuration across reboots? | linux;firmware;embedded | I have played with at least one (not a D-Link) Linux-based router that stored its configuration in a bare mtd partition and accessed it using a proprietary binary. It compressed the data using LZMA or something but within that, you could see passwords in clear-text (not good!)D-Link actually have a very good GPL source code system, go to http://tsd.dlink.com.tw/downloads2008list.asp?SourceType=download&OS=GPL and type in 2760 and you can download the entire buildroot and source for that router.Even if they happen to use some non-standard proprietary mechanism for saving configuration, you should be able to get some idea of where it is hidden from examining the GPL sources... |
_cogsci.10235 | It's generally accepted that males tend to have a more negative attitude toward homosexuality than females:Three studies conducted with students at six different universities revealed a consistent tendency for heterosexual males to express more hostile attitudes than heterosexual females, especially toward gay men. (Herek 1998)Men were more negative than women toward homosexual persons and homosexual behavior, but the sexes viewed gay civil rights similarly. Men's attitudes toward homosexual persons were particularly negative when the person being rated was a gay man or of unspecified sex. (Kite, Whitley Jr. 2009) There's research into finding explanations as to why there is this distinction (e.g. Whitley Jr. 2001), but it does not seem to cover my specific question:Q: Are heterosexual males more likely to view homosexuality as a mental illness (or as some kind of a sickness) than heterosexual females?The homophobia scales I've looked at (Szymanski and Chung 2008, Wright Jr., Adams, Bernat 1999, Raja, Stokes 1998, and Bouton et al. 1987) don't ask questions about whether or not homosexuality is considered a mental illness, but do ask if it's considered a sin or disgusting. | Are males more likely to view homosexuals as mentally ill than females? | sexuality;attitudes;gender | null |
_unix.321812 | I want to loop a process in a bash script, it is a process which should run forever but which sometimes fails.When it fails, it outputs >>747;3R to its last line, but keeps running.I tried (just for testing)while [ 1 ]do mono Program.exe last_pid=$1 sleep 3000 kill $last_piddonebut it doesn't work at all, the process mono Program.exe just runs forever (until it crashes, but even then my script does nothing.) | Restart a running process if it outputs a particular string? | bash;shell script;shell;scripting | null |
_softwareengineering.117943 | A lot of scripting languages like Perl, Awk, Bash, PHP, etc. use a $ sign before identifier names. I've tried to look up why but never had a satisfactory answer. | Why is $ in identifier names for so many languages? | history;scripting | The Bourne shell or its predecessor was probably the origin of the convention. Shells tend to use $foo to refer to a shell variable or environment variable, whereas just foo is taken as a literal string (which could be a command name, a file name, or just a word to be passed to another command).It's probably a matter of minimizing how much you have to type. In most programming languages, most of what you type is going to be keywords (from a small fixed set) and names of things that have been declared; it makes sense to use just a name for those. String literals are enclosed in quotation marks (single or double) to distinguish them syntactically.In an interactive shell, most of the things you type are going to be command names and file names, so having an undecorated form for them makes sense. Variable names aren't used quite as often, so preceding them with $ is a straightforward way to distinguish them.The trailing $ that some BASICs use for string variables (as opposed to numeric variables) was probably an inspiration as well.Perl's choice to use $ for scalar variables was undoubtedly inspired by shells such as the Bourne shell, and early versions of Perl commonly treated barewords as string literals (print FOO and print FOO were equivalent). This use of barewords has been deprecated in more modern versions of Perl, but the sigils remain, and are mostly used to distinguish different kinds of variables. Perhaps if Perl were being designed from scratch, scalar variables might be denoted by an empty sigil, but that didn't happen.(BTW, Awk doesn't use sigils. It does have a $ syntax, but it refers to field numbers. NF is the number of fields; $NF is the last (the NFth) field.) |
_unix.27594 | I understand that if you want to modify who can use sudo and what they can do with it that you should use visudo. I know I'm not supposed to directly modify the /etc/sudoers file myself.What is it that visudo does that directly modifying the file doesn't do? What can go wrong? | Why do we need to use visudo instead of directly modifying the sudoers file? | sudo | visudo checks the file syntax before actually overwriting the sudoers file.If you use a plain editor, mess up the syntax, and save... sudo will (probably) stop working, and, since /etc/sudoers is only modifiable by root, you're stuck (unless you have another way of gaining root).Additionally it ensures that the edits will be one atomic operation. This locking is important if you need to ensure nobody else can mess up your carefully considered config changes. For editing other files as root besides /etc/sudoers there is the sudoedit command which also guard against such editing conflicts. |
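Two small illustrations of the points above (the file names are only examples): visudo can also syntax-check without opening an editor, and sudoedit extends similar protection to other root-owned files:

```sh
# Check the active sudoers configuration for syntax errors, without editing it.
visudo -c
# Check a specific candidate snippet before dropping it into /etc/sudoers.d/.
visudo -cf /etc/sudoers.d/myrule
# Edit some other root-owned file; the editor runs unprivileged on a temporary copy.
sudoedit /etc/hosts
```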
_unix.61866 | During some basic RHEL training, I came across this blurb: Although it's possible to create more, RHEL will recognize only up to 16 partitions on any individual SATA, SCSI, PATA, or virtual hard drive. That sentence seems to conflict with itself. If RHEL can't recognize more than 16 partitions, why would I ever want to create more than 16? | Why would I want to create more partitions if RHEL will only recognize up to 16? | linux;rhel;partition | There was a bit of discussion about that topic in an old bug report on exactly that limit:They used to reside in different (smaller) disks (and may go back). Several partitions give me more flexibility to move them around using labels.I wasn't using ext3 before, so smaller partitions made shorter fscks in the case of power-downs.I'm too lazy to use quotas to limit dept. disk usageBut even then the short answer was: anyone who needs even 16 partitions is insane, :).Nowadays we have LVM and those limits do not matter anymore. :) |
_cs.62998 | (I was not really sure in which stackexchange I should put this question. I hope CS is ok.)Let's say I have the following tree A. Every node has a unique ID, which will be counted up, beginning from 0. The ID counting is taking place in breadth-first order.Now I want to pick out a part of the graph A. The information of the subtree is given by a node ID in the original tree A.ExampleLet's say the new tree starts with the node ID 3 of the original tree. For this example, the tree depth has an offset of 1 (because 3 is in depth 1 of the original node). The result is the followingYou might have noticed that the new tree have the same structure than the old tree. The unique ID of each node in the new tree, will also be counted up in breadth-first order. In the brackets, you can see the link to the original node tree A.BackgroundThe reason, why I need this node link is the fact that I need to copy some node properties from the original tree node to the new tree node.QuestionWhat is the mathematical relation between the unique node IDs of the new tree and the node IDs of the original tree? I want to compute the ID's of the old tree, based on the ID's of the new tree.Thanks for any help. | Get mathematical relation of node IDs of a subtree based on a given tree | data structures;trees;subtree | null |
_unix.215461 | I have tried to transfer about 50 Gb files from a Redhat Linux variant unsuccessfully to my Debian 8.1.I would like to find other ways than external HDD to move data. There are USB3 connections and HDMI to both machines but nothing else. I am not allowed to install BTsync to transfer the files fast between each other. How can you mass transfer of big files easily between two Linux boxes of different variants? | Mass transfer big files from one Linux box to another Linux box? | usb;file transfer | The fact that one machine is running Red Hat and the other Debian won't cause you any problems. For most intents and purposes, the differences between distributions are insignificant.Realistically, you have two options for your data transfer:Using a removable disk, connected using USB or eSATA or similar.Using the network. Once both machines can connect to one another over the network, you can use any one of a variety of tools to do the file transfer. You mentioned that you cannot using BitTorrent Sync but something like rsync may well be an option or, failing, that sftp or scp. |
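As a concrete illustration of the rsync route (host name, user, and paths are placeholders; -P keeps partially transferred files, so an interrupted 50 GB run can be resumed by simply re-running the command):

```sh
# Push the files over SSH, preserving permissions and timestamps, with progress output.
rsync -avP /data/bigfiles/ user@debian-box:/data/bigfiles/
# If the transfer is interrupted, re-run the same command; already-copied data is skipped.
```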
_unix.348802 | I'm doing a backup scheme wherein we create an uncompressed .tar file, add any new or changed files to it for a week or two. Now, at the end of this, is there an easy way to add compression to the .tar file so that we can start a new one? Or do I just have to run a separate gzip command or similar?Thanks in advance! | Adding compression to .tar file? | compression;tar | null |
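For what it's worth, a minimal sketch of the separate-compression route the question alludes to (file names are examples; gzip replaces the .tar with a .tar.gz in place):

```sh
# Append the final batch of new/changed files to the uncompressed archive.
tar -rf weekly-backup.tar changed-file-1 changed-file-2
# Then compress the finished archive as a separate step.
gzip weekly-backup.tar        # leaves weekly-backup.tar.gz and removes the .tar
# Note: tar -r (append) only works on uncompressed archives, which is why the
# compression has to come last rather than being added to the existing .tar.
```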
_softwareengineering.84167 | This summer I'd like to develop a number of applications, all of them relatively small, yet risky -they're complex, I'm inexperienced-. I'm gonna work on my own because my classmates or other IT people I know really don't have a genuine interest in programming (I swear!).Besides from making good code, I'd like to analyze, design and plan everything correctly. Probably applying standard methodologies would be excesive for a single-person project wouldn't it be?So, what lifecycle and diagrams should I use? In college I've been taught only a very superficial approach to software engineering. | What design and planning techniques are the most suitable for individual projects? | design;requirements;methodology;project planning | I would recommend to focus on one little project at a time. If they are complex, you probably want to spend some time thinking first, so you can penetrate the problem domain at least to the degree where you have an idea of how to solve the central problem. Once you have that understanding, and often little sketches are a good way of getting there, then start by implementing the code from the start with unit tests, where each test expresses either a desired result or the ability of the code to handle defective or incomplete input.For a one man team, most of the standard methodologies are over the top, but the principle can still be applied, albeit at a much smaller scale. I would not recommend spending too much time on beautiful diagrams, hand-drawn sketches will do fine. And since this sounds like you are trying to learn new things about programming, don't worry about lifecycle. |
_unix.59096 | I have added (via sudo nano) to /etc/xdg/lxsession/LXDE/autostart the line: python /path/to/script.py. This launches script.py on boot, which is what I want. However, I can't see the output from script.py about what it is doing; autostart just runs it in the background. How can I get script.py to run in a terminal? I'm using Debian. | Run autostart program in a terminal instead of having it run hidden in background | debian;boot;lxde | You should see the output from your program in ~/.xsession-errors. If you want to run the command in a terminal itself, you have to install a terminal emulator which allows you to specify the command to be executed (most of them should support it), e.g. for xterm you can run: xterm -e python /path/to/script and place that line in your autostart file. |
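If xterm feels too bare, LXDE's own terminal accepts a command in much the same way; -e/--command is assumed to be available in the installed lxterminal version, and the quoting keeps the interpreter and script together as a single argument:

```sh
# Line to place in /etc/xdg/lxsession/LXDE/autostart
# (or the per-user copy under ~/.config/lxsession/LXDE/autostart):
lxterminal -e "python /path/to/script.py"
```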
_softwareengineering.144080 | I need to generate debug output in a java appliation that cannot be debugged by placing break points and traces. I've thus far been placing the debug flag as a static variable in one of the classes and then accessing it from all other classes. But this I suspect will eventually cause massive issues later on. What's the best way to do this? | How to pass around a debug flag variable? | java | Consider using a logging framework like log4j. Log4j is very flexible and extendible. This is clearly the best option.If you cannot use such a framework, you have to tell all parts of your program generating debug output, whether this is a debug run or not. This can be done in two ways. Either using static access to some variable/singleton or passing the information to the relevant parts. Both solutions have advantages and disadvantages. In general the static/singleton pattern is considered harmful because it introduces a global variable and therefore global state which can make testing difficult. However, as long as you are only generating output which does not interfere with any other part of your program you should be fine.The other approach is to pass the information via a constructor for example. This approach does not hide the dependency of the object on the information and is preferable in theory. However, now you have to pass this information to all classes possibly generating debug output. If you got lots of these classes and/or they are buried deep in the system, you will have to do a lot of boilerplate coding. Dependency injection frameworks (e.g. Guice) are designed to help you with this kind of boilerplate code. If you already use DI this is probably the best way. But if this feels like using a sledge hammer to crack a nut, you should consider using the static access (which is btw what log4j does). |
_unix.84534 | I'm trying to map <c-y>, (from zencoding-vim) to <c-m>. I tried map <C-m> <C-y>, and map <C-m> <C-y>,<cr> but neither seems to be working. Maybe it's something with the comma? | Vim map not working | vim;vimrc | It's not the comma, it's Ctrl-M, which is the terminal control sequence for Return and the same as pressing the Return key. Try typing ls in your terminal and press Ctrl-M. You'll run into the same problem trying to map Ctrl-H, which is the terminal control sequence for Backspace. This is not a Vim issue, since Vim can't distinguish between Ctrl-M and Return key presses. |