id | question | title | tags | accepted_answer
---|---|---|---|---|
_webmaster.77776 | Ever since I started using my own Apache module on my server, I noticed that each Apache process size now went up from 30-50 MB to 60-70 MB+. I'm not sure if it is my module that is causing it, or if it's some random PHP code someone used that is eating up memory. What I want to know is if there's a way that I can list the modules Apache has loaded along with the memory consumption for each. I did try httpd -M but all I get is a list of loaded modules along with an indication as to whether they're static or dynamic. | How can I view the memory size of modules used by Apache? | apache | null |
_webmaster.23259 | Here is my concern. I have the logo image, along with the entire header section of the site, in an 'included' file. How do I present that image in the image sitemap? Should I give the URL of the 'included' file that only contains part of each page and as such wouldn't make much sense to a user who would get it in a search result? Or is there another way of providing Google with the URL to that image? | Logo Image in Image Sitemap | seo;xml sitemap | "How do I present that image in the image sitemap?" You probably don't need to at all. Why are you asking? It might provide some more context. "Should I give the URL of the 'included' file?" No. Sitemaps are intended for indexing pages, not the individual bits that make them up. The reason you're unclear where to add a reference to the logo is that the image extension tags are more intended for pointing out the important images in the page, which is generally going to be those in content, not your layout; the layout is just decoration for these purposes. (Else why not also add all your background images, arrows, etc.?) Remember: this is for search engines, and they mostly don't care what your site looks like, just what the content is. If you happen to have a page on your site that specifically provides things like brand assets, then do it there and include it in the image tags for that page. Some companies do this for press use, for example. If this is important to you, then I'd probably just list it under the home page. I would not recommend including it for every page, just because it technically happens to appear on all of them. That'd be a waste of space and possibly trigger some alarms (that's speculative). "Is there another way of providing Google with the URL to that image?" I'm not aware of anything official or widely-supported at this time. But as a bit of trivia, I'll mention rel=logo here. Note that it's just an unofficial microformat proposal right now, not even a few months old yet. But if it's something you find interesting, you might implement it on your site (takes minimal effort) in support and maybe it'll take off over time. |
_codereview.67144 | If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.Could someone point out if there is something wrong with the code; where I should optimize/modify it?import java.util.Scanner;public class SumBelowN { private double getSum(int N) { int n1 = 3; int next_n1 = 3; int n2 = 5; int next_n2 = 5; double sum = 0; while (next_n1 < N || next_n2 < N) { if (next_n1 == next_n2) { sum += next_n1; next_n1 += n1; next_n2 += n2; continue; } else if (next_n1 < next_n2) { sum += next_n1; next_n1 += n1; continue; } else { sum += next_n2; next_n2 += n2; continue; } } return sum; } public static void main(String[] args) { Scanner sc = new Scanner(System.in); int i = sc.nextInt(); // print the sum of all the numbers below 'i' // which are divisible by both 3 and 5. SumBelowN sumBelowN = new SumBelowN(); System.out.println(Sum: + sumBelowN.getSum(i)); sc.close(); }} | Effeciency Project Euler #1 | java;programming challenge | I'm assuming that you did read about the solution that runs in constant time, but wanted to add your own non-constant solution.continueYour continue statements are unnecessary; you can just remove them.NamingYour naming violates every standard for Java. Java uses camelCase and doesn't allow names to start with uppercase letters.It is also generally discouraged to use numbers in names if they can be avoided, which in this case they can: n1 could be called three, and n2 could be five. If you want to stay flexible, use firstMultiple and secondMultiple or something similarly expressive.ConstantsAs n1 and n2 are constants, I would declare them as static fields.CommentsAs your solution is not straightforward, I would add a comment or two about how it works.EfficiencyIn case you're interested, here are the results that I get if I compare your solution with the solutions from the other thread (mentioned in the comments):name per call (mean, ns) per call (median, ns) 95th percentile (ns) total (ms) runsthree loops 280 277 256 0.224 1000000without function calls 392 392 362 0.3136 1000000getSum 694 695 641 0.5552 1000000with function calls 3739 3766 3338 2.9912 1000000 |
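The answer above alludes to the constant-time solution without showing it. For readers who have not seen it, here is a minimal sketch of the closed-form inclusion-exclusion approach, written in Python purely for brevity (the helper names are illustrative, not from the original thread):

```python
def sum_multiples_below(k, n):
    # Sum of k, 2k, 3k, ... strictly below n: k * (1 + 2 + ... + m) with m = (n - 1) // k
    m = (n - 1) // k
    return k * m * (m + 1) // 2

def euler1(n):
    # Multiples of 3 plus multiples of 5, minus multiples of 15 (counted twice)
    return (sum_multiples_below(3, n)
            + sum_multiples_below(5, n)
            - sum_multiples_below(15, n))

assert euler1(10) == 23    # matches the example in the question
print(euler1(1000))        # 233168
```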
_webapps.85240 | I am creating a survey in Google Forms that I'm going to use inside our organization. I am going to send the form link to everybody inside the organization using their internal email. The form will not require Google login to participate, but I need to get only one response per person (one response per employee email id). If the same employee tries to participate in the survey a second time, the form should be able to identify that he has already participated in the survey and prevent the second submission. To solve this issue I planned to do one of three things: 1. Applying a unique id to each link which I am sending to an employee, recording the unique id after submission, and, if the person tries to participate in the survey with the same unique id link, crosschecking the recorded unique id to stop the second submission. 2. Recording their IP address instead of a unique id and doing a crosscheck to prevent a second submission. 3. Recording their email id and doing a crosscheck to prevent a second submission. Of the three things, which one is possible? I am very new to Google Forms. | How to limit one response per email for Google Forms | google forms | null |
_unix.235040 | I want to set up an SSH tunnel to my MySQL database on a remote server. Can I create an SSH user that has only the minimum permissions that are strictly necessary to access MySQL, for example with a database management application like HeidiSQL or Sequel Pro? I don't want the user to be able to access anything else. | How do I create a SSH user that can only access MySQL? | ssh;mysql;ssh tunneling | You can add a user without a valid login shell:# useradd -s /sbin/nologin dbuserSet their password:# passwd dbuserOr leave it unset and make SSH keys:(on local machine)$ ssh-keygen(on remote machine)# su -s /bin/bash - dbuser$ cat local_id_rsa.pub >>~/.ssh/authorized_keysAt this point, you can use SSH to create the tunnel:ssh -TfnN -L localhost:<local_port>:localhost:<db_server_port> dbuser@remote_hostIf you used SSH keys to authenticate, this will work automatically, otherwise you'll need to type in a password. ssh will go to background immediately after authenticating, and will not attempt to execute any command, but the tunnel will be open. Type 'localhost' and the <local_port> value into your local DB client, and it'll work. However:$ ssh dbuser@remote_hostdbuser@remote_host's password:Last login: Fri Oct 9 09:27:24 2015 from local_hostThis account is currently not available.Connection to remote_host closed.SSH will not execute any shell or command as the remote user; /sbin/nologin will kick it out every time. |
_unix.37832 | I am trying to install the octave software package on a RHEL 6 Workstation. I have installed the epel-release 6.5 package to enable the EPEL package repository. When I try to install the octave using yum, the following errors are returned:Error: Package: 6:octave-3.4.2-2.el6.x86_64 (epel) Requires: libfftw3.so.3()(64bit)Error: Package: 6:octave-3.4.2-2.el6.x86_64 (epel) Requires: libfftw3f.so.3()(64bit)Error: Package: 6:octave-3.4.2-2.el6.x86_64 (epel) Requires: libglpk.so.0()(64bit)I tried to use yum to search for the packages libfftw3, libfftw3f, libglpk, fftw3,fftw3f, and glpk. However, it was not able to find any of these packages. I am wondering if anybody knowsShould I try to find the packages by the names fftw3,fftw3f, and glpk? Or should I search for the names libfftw3, libfftw3f, libglpk?Does this mean I have to try to find the required dependency packages online? Is there a reliable website providing these RPM packages for RHEL Workstation 6? | Resolve missing package dependencies when trying to install octave | rhel;package management;yum;libraries;octave | null |
_unix.13996 | Many times I have an SSH session that doesn't respond anymore (for example, when I lose internet connection and then reconnect). Ctrl+C, Ctrl+D, Ctrl+Z and a zillion of key presses don't have any effect.Most of the time I already have tmux or byobu running already, so I can just start another terminal and reconnect. However it does feel cumbersome. How can I disconnect SSH from the current terminal? | How can I break away from an SSH session that has crashed? | ssh;terminal | Use the escape character (normally, the tilde ~) to control an SSH session:~ followed by . closes the SSH connection;~ followed by Ctrl+Z suspends the SSH process;~ followed by another ~ sends a literal ~.You can set the escape character using the -e option to ssh.Additionally, remember thatYou should also remember to press Enter before ~. The escape character works when it is the first character in the line. And also you can use ~ and later ? to get help from the ssh client. (Thanks to the comment by Lukasz Stelmach.) |
_unix.187571 | I can't believe I am asking this question here myself. Apologies if this is not the right place for this question.I've been using linux since 1996, SuSE version 4.1...And yet I am still not able to put together a well performing machine.I bought a simple box a few months ago. I am a software developer. I don't need high end graphics nor RAID nor stuff like that. My main goal is to have a fast responsive machine.I installed arch linux on it, and i3 as desktop environment - really it should be quite fast.And yet...the machine responsiveness is very low, especially graphics lag A LOT. I know I didn't opt for a high end graphics engine - but shouldn't such a modern machine be enough for lag-free youtube and other video watching? Lag-free firefox browsing?Maybe I just misconfigured my machine, and that's what my question is about. Some hardware info follows below.As you can see, I have a AMD E-350 processor with 8GB-RAM - I should think this would be quite enough for a standard linux installation...What I also did is, as the MoBo comes with a GPU/processor bundle, I thought that maybe upgrading just the graphics card may have some impact - nope. I bought a AMD HD5450 card with 1GB RAM (!!!) - and still I can't enjoy a fluent video on youtube, nor a sufficient experience web browsing...(maybe it's even better to have the original GPU than the added card I bought?). Of course I know that modern browsers are pretty demanding, but I should think such a machine should be able to handle it?I read the posts on the arch linux forum about optimizing video, and AMD cards are not very well reviewed really...There are infinite posts about how to choose your distro, there are hardware compatibility lists - but are there any tutorials or guides which actually help setting up a fast and responsive machine?Is my machine maybe badly configured?Does it just have a bad hardware set up? Should I maybe replace one or the other piece to cheaply get satisfactory performance?Am I just asking too much from my cheap hardware?I am considering replacing the whole MoBo (I know it was cheap and I shouldn't expect too much from it, but I thought I should be able to tweak it...)(lshw):*-memory description: System memory physical id: 0 size: 7967MiB*-cpu product: AMD E-350 Processor vendor: Advanced Micro Devices [AMD] physical id: 1 bus info: cpu@0 size: 800MHz capacity: 800MHz width: 64 bits capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni monitor ssse3 cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch ibs skinit wdt arat hw_pstate npt lbrv svm_lock nrip_save pausefilter vmmcall cpufreq *-pci:0 description: Host bridge product: Family 14h Processor Root Complex vendor: Advanced Micro Devices, Inc. [AMD] physical id: 100 bus info: pci@0000:00:00.0 version: 00 width: 32 bits clock: 66MHz configuration: latency=32 *-pci:0 description: PCI bridge product: Family 14h Processor Root Port vendor: Advanced Micro Devices, Inc. 
[AMD] physical id: 4 bus info: pci@0000:00:04.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:16 ioport:e000(size=4096) memory:fea00000-feafffff ioport:c0000000(size=268435456) *-display description: VGA compatible controller product: Cedar [Radeon HD 5000/6000/7350/8350 Series] vendor: Advanced Micro Devices, Inc. [AMD/ATI] physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: vga_controller bus_master cap_list rom configuration: driver=radeon latency=0 resources: irq:41 memory:c0000000-cfffffff memory:fea20000-fea3ffff ioport:e000(size=256) memory:fea00000-fea1ffff | how to choose appropriate linux hardware - poor graphics performance | linux;arch linux;performance;hardware;graphics | null |
_unix.320523 | I created a directory named $pattern and now when ever i try to remove it, it says pattern: Undefined variable.I have tried:$ rm -r $pattern$ rm -rf $pattern$ rm $ option[value='2016'] | How to remove a folder that starts with $? | shell;quoting;filenames | $, space, ' and [ are special characters in most shells. To remove their special meaning, you have to use the quoting mechanisms of the shell.The quoting syntax varies very much with the shell.In all shells that I know, you can use single quotes to quote all characters but single quote, backslash and newline (in Bourne-like shells, it does quote the last two as well, except in backticks for \ in some).rm -r '$pattern'Should work in most common shells.rm -r \$patternWould work (except inside backticks for Bourne-like ones) in all shells but those of the rc family.Same for:rm \$option[value='2016']In rc-like shells, you'd use:rm '$option[value=''2016'']' |
_cstheory.22138 | After reading a related question, about non-constructive existence proofs of algorithms, I was wondering if there are methods of showing existence of small (say, state-wise) computation machines without actually building it.Formally: suppose we are given some language $L\subseteq \Sigma^*$ and fix some computation model (NFAs / turing machine/ etc.).Are there any non-constructive existence results showing a $n$-state machine for $L$ exists, but without the ability of finding (in $poly(n,|\Sigma|)$ time) it?For example, is there any regular language $L$ for which we can show $nsc(L)\leq n$ but we don't know how to build a $n$-state automaton for?($nsc(L)$ is the non-deterministic state complexity of $L$, i.e. the number of states in the minimal NFA that accepts $L$).EDIT: after some discussion with Marzio (thanks !) I think I can better formulate the question as follows:Is there a language $L$ and a computation model for which the following holds:We know how to build a machine that compute $L$ that has $m$ states.We have a proof that $n$-states machine for $L$ exists (where $n << m$), but either we can't find it at all or it'd take exponential time to compute it. | Are there non-constructive proofs of existence of small Turing machines / NFAs? | cc.complexity theory;automata theory;turing machines | null |
_webapps.6270 | I recently signed up for Skype, and unfortunately, have forgotten my password. I went to reset my password with their online form, where it asked for the email address I signed up with. When I signed up, I used a gmail plus address ([email protected]), which the signup form did not complain about, and which they happily sent an automated email to. The initial stage of the reset form doesn't complain either, but on the page after I click submit (where it should be sending me the reset code) it says that the email address is invalid. I haven't been able to find any information on how to get this to work... does anybody know of a way for me to reset my Skype password when my email address seems to be broken in their system? I've never purchased any services from them, so their alternate method of providing your email address and billing records won't work. Any help would be much appreciated! | Unable to reset Skype password using Gmail plus address | gmail;account management;skype for web;password recovery | It appears that Skype has updated their system, and it finally let me reset my password recently when I went back to check (they never emailed me back). It's possible the problems will return eventually, so I changed my email address with them to one without the +. |
_unix.227258 | I've been running Gnome (v1 and v2) and use xfce now. But ever since I gave Unity a try when Ubuntu shipped it as default a couple years ago (and have been using Win7 for some games), I've been looking for a dock with simple mechanisms that both systems (Unity/Win7) provide:Windows key to open a menu where I can start to type to start programs directly. (The dock doesn't need to do that, xfce-whisker-menu does this job fine already.)Pin apps to the dock so they stay at the same position in the dock, whether started or not. (This is common, awn and cairo dock for example do that.)Start or switch to the app at n-th position via Winkey+n. (This seems to be the feature thats hard to come by.)I know I'm essentially describing Unity and in theory should give it another try, but I don't want to change my xfce setup I have. Does anybody know a nice way to add that to xfce/xfwm?Asked on askUbuntu already, but there wasn't any answer yet. | Dock (functionally) similar to Unity or Windows 7 | xfce;dock | I found DockbarX that does what I want it to do. It isn't perfect from my point of view, but I'll talk to the devs about that, maybe they can enhance those parts. It even integrates into xfce-panel, for those who might need it. |
_codereview.64872 | I've been working on this challenge for a week:There are two sequences. The first sequence consists of digits 0 and 1, the second one consists of letters A and B. The challenge is to determine whether it's possible to transform a given binary sequence into a string sequence using the following rules:0 can be transformed into non empty sequence of letters A (A, AA, AAA etc.) 1 can be transformed into non empty sequence of letters A (A, AA, AAA etc.) or to non empty sequence of letters B (B, BB, BBB etc) e.g.Examples:1010 AAAAABBBBAAAA -> Yes00 AAAAAA -> Yes01001110 AAAABAAABBBBBBAAAAAAA -> Yes1100110 BBAABABBA -> NoHere is the first small sample to feed the program and here is the second large sample.I have 3 working solutions, but they're painfully slow.My first idea was to build a regex for each test case, test if it matched and map Yes or Noputs IO.readlines(ARGV[0]).map { |l| l.chomp.split(' ') }.map { |a| index = 0 a[1].match(a[0].chars.to_a.inject('^') { |result, digit| result + (digit == '1' ? '([A|B])\\' : '(A)\\') + #{index += 1}{0,} } + '$') ? 'Yes' : 'No'}It works fine with the small sample, but takes forever with the large one.So I thought about backtracking and tried itdef check_char_number(char, number) (number == '0' && char == 'A') || number == '1'enddef check(string, numbers, current_char) return true if string.empty? && numbers.empty? return false if string.empty? return false if current_char != string[0] && !check_char_number(string[0], numbers[0]) if string[0] == current_char r = check(string[1..-1], numbers, current_char) return r if r end return check(string[1..-1], numbers[1..-1], string[0]) if check_char_number(string[0], numbers[0]) return falseendputs IO.readlines(ARGV[0]).map(&:chomp).map(&:split).map { |a| r = 0 l1 = a[0].length l2 = a[1].length if l1 > l2 or l1 < 1 or l2 < 1 r = 'No' else r = check(a[1], a[0], nil) ? 'Yes' : 'No' puts finished: #{r} end r}But it's still so slowSo I was getting desperate and started looking around on GitHub to see how other people did it, and found something interesting in pythonI don't have any academic degree in CS I learned everything with books and MIT courses and I can't tell if what he's using is a known technique to solve this sort of problems. I feel like I barely understand it (spent a lot of time with the debugger to see what happens step by step) and I'd really love to know if a Wikipedia page could explain this better or if this is completely home-made.I ported it ruby and my second question is: even though it is significantly faster than the previous two why is it sill five to seven times slower than his solution:def check_number_with_partial_string(number, partial_string) #45% of runtime spent here! return !partial_string.include?('B') if number == '0' return !(partial_string.include?('A') && partial_string.include?('B')) if number == '1'enddef check_matching(numbers, string, matching_matrix, row = 0, column = 0) return if matching_matrix[-1][-1] == true return if row >= matching_matrix.size or column >= matching_matrix[row].size return if matching_matrix[row][column] (column...string.length).each do |c| matching_matrix[row][c] = false if check_number_with_partial_string(numbers[row], string[column...(c + 1)]) matching_matrix[row][c] = true check_matching(numbers, string, matching_matrix, row + 1, c + 1) end endendputs IO.readlines(ARGV[0]).map(&:chomp).map(&:split).map { |a| matrix = Array.new(a[0].length).map { |b| Array.new(a[1].length) } check_matching(a[0], a[1], matrix) matrix[-1][-1] ? 
'Yes' : 'No'}I profiled it and it seems 45% of the time is spent in the first little methodTime profiler | CodeEval Sequence Transformation | ruby;programming challenge;regex;comparative review;time limit exceeded | I focussed exclusively on your last bit of code. I'm afraid I only managed a ~1.8x speed increase. I think a lot of comes down to Ruby's itself not being the quickest thing going - at least not when it comes to this particular algorithm.Long story short: Convert your number and letter strings to integer arrays. For whatever reason, that's where I found a performance increase. I'll leave the explanation to someone with deeper knowledge of Ruby's string handling.There may be a totally different way to do this, which'll be super-fast in Ruby, but I haven't investigated that. Generally, though, Ruby aims for expressiveness and readability, and this algorithm doesn't play to Ruby's strengths.Anyway, review time! I've only changed minor stuff to make it more Ruby-like; the algorithm itself is as it ever was (for good or bad)You'll still find that the majority of the time is spent in that first method. For one, it's being called constantly, and for another, it's doing the most work. Anything else in the code is just loops and boolean logic. But check_number_with_partial_string is the one actually doing the real checking of numbers to letters. So it's no wonder it weighs heavy on the performance.check_number_with_partial_stringThe logic's iffy. By which I mean that there is literally one if too many. The number will be either 1 or 0, so why the extra check? Also, no need to (potentially) call include?('B') twice. Store it once, use it twice.There might also be a micro-optimization here if we flip the second return value around. If we've stored includes?('B') already and it's false, we can check that first, and, since the expression uses &&, the check for includes?('A') doesn't need to run.Should maybe just be called partial_match?check_matchingFor the sake of line-length, I'd probably just call the matrix matrix. It's not about to be confused with anythingFavor || over or for boolean logic (here's a good explanation)matching_matrix[row][c] = false is unnecessary since the initial value of that index is nil which is already false'y. But since check_number_with_partial_string returns a bool, we can skip a bit and just assign its return value to the index.The overall name is a bit misleading. I'd imagine it would return something, since it's called check - but it doesn't.Read loopRight now, the entire map must run before anything is printed. You can use IO#each_line instead of loading everything into memory at once. And print for each test case. 
I've changed it, but the original may be faster(!)Use do..end for multiline blocks (though if you keep the puts outside, the lower precedence of do..end compared to {...} will trip you up)Use Array.new with a block instead of creating and then mapping an array to fill itPull named variables from the split string to keep things readable (no opaque array indices)In the end, I had:def check_number_with_partial_string(number, string) includes_b = string.include?(66) # ASCII 'B' return !includes_b if number == 0 !(includes_b && string.include?(65)) # ASCII 'A'enddef check_matching(numbers, string, matrix, row = 0, column = 0) return if matrix[-1][-1] return if row >= matrix.size || column >= matrix[row].size return if matrix[row][column] (column...string.length).each do |c| matrix[row][c] = check_number_with_partial_string(numbers[row], string[column...(c + 1)]) check_matching(numbers, string, matrix, row + 1, c + 1) if matrix[row][c] endendFile.open(ARGV[0]).each_line do |line| digits, string = line.chomp.split digits = digits.chars.map(&:to_i) string = string.chars.map(&:ord) matrix = Array.new(digits.length) { Array.new(string.length) } check_matching(digits, string, matrix) puts matrix[-1][-1] ? 'Yes' : 'No'endAgain, as far as I can tell, any increased performance came from mapping the two strings to integers.For some sample input I generated (where everything always matches), I got ~43 sec for the code above, versus ~83 sec for the original. That's obviously not the most rigorous test, but it seemed consistent with different input sizes.Maybe there are more gains to be had somewhere, but I didn't find 'em (but I didn't try to do more overall restructuring either). Maybe someone else will spot something. But again (again) this just seems to me to be an algorithm that doesn't play to Ruby's strengths. |
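For what it's worth, the matrix technique the asker found does have a standard name: it is a dynamic programming table indexed by (digit position, character position). Below is a compact sketch of that DP, written in Python only to keep it independent of the Ruby code above; the function name and recurrence layout are mine, not from the thread:

```python
def transformable(digits, letters):
    # dp[i][j]: the first i digits can produce exactly the first j letters,
    # with the i-th piece (a non-empty run of one letter) ending at position j.
    n, m = len(digits), len(letters)
    dp = [[False] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = True
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = letters[j - 1]
            # '0' may only expand to A's; '1' may expand to A's or B's.
            if digits[i - 1] == '0' and c != 'A':
                continue
            starts_here = dp[i - 1][j - 1]                               # piece i begins at j
            extends = j >= 2 and letters[j - 2] == c and dp[i][j - 1]    # piece i grows by one
            dp[i][j] = starts_here or extends
    return dp[n][m]

# Examples from the challenge statement:
assert transformable("1010", "AAAAABBBBAAAA")
assert transformable("00", "AAAAAA")
assert transformable("01001110", "AAAABAAABBBBBBAAAAAAA")
assert not transformable("1100110", "BBAABABBA")
```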
_unix.7917 | I'm looking for a sticky notes/knotes/etc. type of program with one difference: I can stick a note on a window, rather than the desktop, and it will remain attached to the window. Basically, I want to be able to annotate windows. I want to be able to say "this xterm is the one where I'm doing project X", "this application is running test Y", "this emacs is working on this bug", etc. | sticking stickies to windows | software rec;gui;desktop | null |
_softwareengineering.179096 | I've been taking a look at Clojure lately and I stumbled upon this post on Stackoverflow that indicates some projects following best practices, and overall good Clojure code. I wanted to get my head around the language after reading some basic tutorials so I took a look at some real-world projects.After looking at ClojureScript and Compojure (two the the aforementioned good projects), I just feel like Clojure is a joke. I don't understand why someone would pick Clojure over say, Ruby or Python, two languages that I love and have such a clean syntax and are very easy to pick up whereas Clojure uses so much parenthesis and symbols everywhere that it ruins the readability for me.I think that Ruby and Python are beautiful, readable and elegant. They are easy to read even for someone who does not know the language inside out. However, Clojure is opaque to me and I feel like I must know every tiny detail about the language implementation in order to be able to understand any code.So please, enlighten me! What is so good about Clojure?What is the absolute minimum that I should know about the language in order to appreciate it? | What's so great about Clojure? | clojure | For the background you gave, if I may paraphrase:You're familiar with Ruby/Python.You don't see the advantages of Clojure yet.You don't find either Lisp or Clojure syntax clear....I think the best answer is to read the book Clojure Programming by Emerick, Carper and Grand. The book has numerous explicit code comparisons with Python, Ruby, and Java, and has text explanations addressing coders from those languages. Personally, I came to Clojure after building good-sized projects w/ Python, and having some Lisp experience; reading that book helped convince me to start using Clojure not just in side-projects but for professional ends.To address your two questions directly:What's so good about Clojure? Plenty of answers on this site and elsewhere, e.g. see https://www.quora.com/Why-would-someone-learn-ClojureWhat is the absolute minimum that I should know about the language in order to appreciate it? I'd suggest knowing the big ideas behind Clojure's design, as articulated in both Clojure Programming and The Joy of Clojure books-wise, and in Rich Hickey's talks, esp. the talk Simple Made Easy. Once you know the what/why you can then start understanding the how when reading Clojure code, esp. how to change your thinking from classes, objects, state/mutation to just functions and data (higher-order functions, maps/sets/sequences, types). Additional suggestions: Lisp's elegance and power is partly from its minimalist and utterly consistent syntax. It's much easier to appreciate that with a good editor, e.g. Emacs with clojure-mode and ParEdit. As you get more familiar with it, the syntax fades away and you'll see semantics, intentions, and concise abstractions. Secondly, don't start by reading the source for ClojureScript or Compojure, those are too-much-at-once; try some 4clojure.org problems, and compare solutions with the top coders there. If you see 4-6 other solutions, invariably someone will have written a truly idiomatic, succinct FP-style solution which you can compare with a clumsy, verbose, and needlessly complicated imperative-style solution. |
_unix.306686 | I am running Ubuntu 16.04.1 LTS, with a 4.4.0-36 kernel. I don't think that this matters a lot though, because I've seen terrible kernel behaviour across many distributions/kernel versions.Consider the following starting point:My system has let's say 4GB of memory and a swap partition of 4GB.I am running a desktop environment with GNOME, a Browser and an IDE currently open.I'm doing some programming, and in the course of that, I introduce a bug which causes my program to allocate an infinite amount of memory.So far so good, this is a situation that I should recover from, right? Here's what I want to happen:My operating system realizes that the program that I wrote is rogue and not behaving correctly.It kills my program, possibly with a helpful debug or log messageI am able to resume my activitiesBut this is not what happens with a more or less default install of Ubuntu. what happens is this:The buggy program is allowed to run indefinitelyEventually, when all the physical memory is exhausted, the system starts swapping. As the buggy program claims more and more memory, all the pages of my desktop programs are pushed out to the swap. My hard drive ist starting to make very very scary sounds My user-interface becomes completely unresponsive. Clicks are registered minutes after they happened, mouse movement is often ignored.Any access to the machine effectively grinds to a halt. This includes changing virtual terminals, logging in via virtual terminals and logging in via SSH.If I do manage to somehow kill the offending program, not everything is fine. Far from it.All of the user interface and system services are now swapped out, and start to get slowly swapped back in to the physical ram. My hard drive is still going crazy at this point.It takes about 0.5 - 1 hour for my system to calm down. Rebooting the machine without pushing the reset button also takes forever.Who would have thought that something simple as running out of memory causes such a cataclysmic event?I have several questions:Who thought that using swap on a desktop system would be a good idea and why??Should I simply disable swap? What are the reasons not to?Is there a better way than disabling swapping? For example, is there a way to limit the swapping to the rogue program and retain my system services and user interface in physical memory? | OOM situations handled horribly - better to disable swap? | ubuntu;memory;swap;out of memory | null |
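This row has no accepted answer, but since the question asks specifically about containing a single runaway process, one commonly used mitigation is to cap that process's own address space before it starts allocating. A hedged sketch using only Python's standard-library resource module (the 2 GiB limit is an arbitrary example, not a recommendation):

```python
import resource

# Cap this process's virtual address space; a runaway allocation then fails with
# MemoryError instead of pushing the whole desktop into swap.
limit_bytes = 2 * 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))

try:
    hog = []
    while True:
        hog.append(bytearray(100 * 1024 * 1024))  # simulate the buggy allocator
except MemoryError:
    print("allocation refused by the rlimit; the rest of the system stays responsive")
```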
_unix.180221 | I am accessing remote machine in Linux using Bash scripting.My code is as follows #!/bin/bash ssh -i manu_bp.pem [email protected] <<EOF sudo -s cat /opt/revsw-config/apache/rijomon2_revsw_net/ui-config.json > r1.txt echo `$(jq .rev_component_co.mode r1.txt)` EOFI am not able to print the jq value. If I write this command as echo $(jq '.rev_component_co.mode' r1.txt) in the terminal of the remote machine, it's showing the exact result. But if I write it in a script and try to execute, it shows the errorjq: r1.txt: No such file or directory | Run a command under sudo over SSH | bash;shell script;ssh;quoting | null |
_softwareengineering.305933 | I'm working on a REST api following the JSON api specification and I'm struggling with the no data responses (described here).A server MUST respond with 404 Not Found when processing a request to fetch a single resource that does not exist, except when the request warrants a 200 OK response with null as the primary data (as described above).The described above refers to the previous section of the specification :A server MUST respond to a successful request to fetch an individual resource with a resource object or null provided as the response document's primary data.null is only an appropriate response when the requested URL is one that might correspond to a single resource, but doesn't currently.I don't understand when I need to return a HTTP 404 Not Found and when I need with HTTP 200 OK with {data:null}.For example, if I have the following URL :http://example.org/users/52This URL is correct, but the User referenced by the ID 52 doesn't exist.What is the correct response ? A 404 or a 200 data: null ? | JSON API specification : When do I need to return a 404 not found? | rest;api;json | If the user 52 doesn't exist, return HTTP 404. Returning HTTP 200 is misleading.Think about it from the point of view of the client. Would the following dialog make sense to you?Client: please, I want to know something about user 52.Server: of course (HTTP 200). What do you want to know?Client: I want to know the basic information about the user.Server: There is no information whatsoever about this user; I don't even know what are you talking about.The case where you could use HTTP 200 with null is when you don't have information about a part of the entry being requested. Imagine a case where the users have a purchase history, unless they registered very recently and haven't been approved yet (and cannot order anything). For an ordinary, approved user, the response will be:HTTP/1.1 200 OK{ id: 51, name: Laura Norman, purchase-history: [ { ... }, ... ]}Instead, the user 52 doesn't have a purchase history:HTTP/1.1 200 OK{ id: 52, name: David Johnson, purchase-history: null}This is semantically different from an empty sequence. purchase-history: [] means that the user has a history, but haven't purchased anything yet. A null has a very different meaning. |
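The distinction the answer draws maps directly onto route handlers. A small illustrative sketch in Python/Flask (Flask and the sample data are my choices for brevity; they are not part of the JSON API spec):

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)
users = {51: {"id": 51, "name": "Laura Norman", "purchase-history": []}}

@app.route("/users/<int:user_id>")
def get_user(user_id):
    user = users.get(user_id)
    if user is None:
        abort(404)                       # the resource itself does not exist -> 404
    return jsonify({"data": user})       # 200; a missing *part* of the entry is just null inside
```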
_unix.47427 | For the experiment purpose I need to login to my own PC from the remote PCs which are under the same domain, i.e: university network. How can I do remote login in this case using SSH? As an example, I have accounts in two machines in University X which are me@machine_a.cs.x.ca and me@machine_b.cs.x.ca, my personal laptop account is nawshad@ubuntu. What extra info do I need to provide? | How to login from remote server to my own PC? | ssh;ssh tunneling | As daveh's answer points out, that could be as simple as just issuing ssh [email protected], chances are that your PC is not accessible directly from the internet, i.e. it sits behind a router of some sort.One option is to tell the router to let traffic to your PC pass through. How you do this depends on the router. This options only works if you have administrative access to the router, and that there are no other routers involved.Another option is to create a reverse SSH tunnel, i.e. from your PC, you log into one (or both) of your university machines, creating at the same time a tunnel from the university machines to the SSH port of your PC. You leave that connection on when you are at university, which allows you to log back into your PC using the tunnel that was created by the SSH session.This process has been described by http://www.vdomck.org/2005/11/reversing-ssh-connection.html; sorry for the link, but I don't want to copy all the information from there to here.In principle, the command you issue from your PC is (assuming you want to connect from me@machine_b.cs.x.ca)ssh -f -N -R 10000:localhost:22 me@machine_b.cs.x.caThen, when at university, you can connect to your home PC with the following command:ssh -p 10000 nawshad@localhostYou can change the 10000 in both commands to a different value; just make sure that it is larger than 1024.Note: while this tunnel is alive, everyone with access to machine_b.cs.x.ca can try to log into your system; make sure that you have good passwords.To shut down the tunnel, just kill the corresponding ssh process, e.g. withpkill -f 'ssh -f -N -R 10000:localhost:22 me@machine_b.cs.x.ca' |
_webmaster.99824 | I am getting Missing: author, Missing: entry-title, Missing: updated microformat error in Google's structured data testing tool.I am showing the title in different way on my page that I cannot simple use <h1 class=entry-title>title...</h1>. Is it possible to use this with in meta so that it (title) does not show on the page? Like we can do in Schema.org or Open Graph<meta property=og:title content=title... /><meta itemprop=headline content=title... />The solution I can think of <span class=entry-title>title...</span> and then set display: none for span .entry-title{display: none;} in style sheet.Just FYI, it is a WordPress blog site. | hEntry microformats tags within | html;microformats | null |
_webapps.104916 | So today, Netflix officially launched a new thumbs up / thumbs down rating system, completely replacing its 5 star rating system. While I have many complaints and questions about this, for the most part, okay. I won't argue.What I am puzzled about however is what happened to all the hundreds of movies and shows that I've already rated over the years? How did the 5-star rating get converted over to the new 2-thumb rating?The way I used the 5-star rating system was like so:1 Star = Absolutely hated it, nothing good about it at all.2 Stars = Hated it, but at least they put some effort in.3 Stars = Meh, it was okay but not worth my time.4 Stars = It was great, but could have been better. [Most Common]5 Stars = Loved it, would recommend to others.Obviously, the above is now impossible to reflect with just a thumb up or down. I'm guessing, or rather hoping, that it may have been done like so:1 Star = Thumb Down2 Stars = Thumb Down3 Stars = No Thumb4 Stars = Thumb Up5 Stars = Thumb UpHow exactly did all my historical ratings get transferred over to this new system, if at all? What algorithm did they use to decide? | What happened with all my previous ratings on Netflix? | netflix | null |
_webmaster.4174 | How do I write an IDN domain name with IDN codes? E.g. mjölk.com with IDN codes? (mjölk is milk in Swedish.) And is there any list of available IDN codes and letters? E.g. all possible for .com? | How do I write IDN domains? | domains;internationalization;idn;punycode | 
_cogsci.8200 | Could anyone tell me if there are any serious neuroscientific mechanisms that have been proposed to explain how feelings of body states take on a positive or negative character (valence)? An example biocomputational approach based on Bayesian inference can be found here. Also related to the Bayesian brain: there is evidence that dopamine communicates prediction error, which has been proposed as a unifying principle for cognition - so I was wondering whether there were any comprehensive theories applying predictive coding to emotion (i.e. valence as prediction error of interoceptive (body) perception). Of course, any theories of valence with similarly broad import would be interesting too. | Are there any serious neuroscientific theories of emotional valence? | neurobiology;emotion | There is no such universally accepted model. It seems to me that this a specific instance of the hard problem: how can matter have a subjective experience at all?Of course we know positive experience is associated with dopaminergic action, and often the mechanism has the evolutionary function of surviving and reproducing... and we can argue the same for noxious stimuli - they signal danger in the environment and lead to aversive behaviors.But this still ignores the phenomenology step, between stimulus and behavior; the nature of the subjective experience. Friston's approach is interesting. I wonder if there is any relationship to be found betweem it and Tononi's model of consciousness (integrated informatiom theory). |
_codereview.66907 | In my application, I have a message dispatcher. Each message gets relayed to a dispatcher thread.In some scenarios, I can get two responses from the third party in the same millisecond:An I received your message ACKnowledgementMy response to your message RESPONSE When I get both messages at the same time, this can create a race condition. It is important to have my EventHandler handle the ACK before I get the RESPONSE, but I cannot guarantee ordering from the Asynchronous networking IO library; I must do it myself.Below is a working MVCE of this scenario. Most of it is just framework, it's not how it works in the real code, but it's enough to get the bit I'm asking about - the ordering logic - to show it's value. The Proposed Solution to this race condition is the ensureAck method, which is called in the handleResponse method. If you comment out that call, and run the application repeatedly, you will see that there is no guarantee of method ordering. Additionally, and this is the difficult bit: I don't want the RESPONSE to be ignored if, for some reason, the ACK never shows up (a dropped network packet or something) - I want the RESPONSE to get handled, reluctantly, eventually.Aside: the bit about NOTHING messages are just to warm up the ExecutorService so you can actually see the race condition.How is my solution?import java.util.concurrent.ExecutorService;import java.util.concurrent.Executors;public class RaceSolution { private static enum State { NEW, READY; } private static enum Type { ACK, RESPONSE, NOTHING; } private static class EventHandler { private volatile State state = State.NEW; private final Object lock = new Object(); public void handleEvent(Event event) { switch(event.t) { case ACK: handleAck(event); break; case RESPONSE: handleResponse(event); break; default: } } private void handleResponse(Event event) { // COMMENT OUT THE NEXT LINE TO SEE THE RACE CONDITION ensureAck(); synchronized(lock) { System.out.println(Received response event: + state); } } private void handleAck(Event event) { synchronized(lock) { System.out.println(Received ack Event: + state); state = State.READY; } } /* THIS IS THE IMPORTANT BIT */ private void ensureAck() { int tries = 0; try { while(state == State.NEW && tries < 10) { Thread.sleep(1); tries++; } } catch (InterruptedException e) { } } } private static class Event { Type t; public Event(Type t) { this.t = t; } } public static void main(String... args) throws InterruptedException { RaceSolution solution = new RaceSolution(); Event nothing = new Event(Type.NOTHING); Event ack = new Event(Type.ACK); Event response = new Event(Type.RESPONSE); for(int i = 0; i < 10; i++) { solution.submit(nothing); } Thread.sleep(100); solution.submit(ack); solution.submit(response); solution.shutdown(); } private final ExecutorService dispatcher = Executors.newCachedThreadPool(); private final EventHandler handler = new EventHandler(); public void shutdown() { if (!dispatcher.isShutdown()) { dispatcher.shutdownNow(); } } public void submit(final Event e) { dispatcher.execute(new Runnable() { @Override public void run() { handler.handleEvent(e); } }); }} | Solving a race condition | java;concurrency | Concurrency StrategyIn general, using volatile is complicated, partially because the meaning of volatile changed in Java 1.4, and also because it is hard to spot. I recommend against using it at all. 
Instead, you should use a more visible concurrency mechanism like synchronization, java.util.concurrent.* classes, java.util.concurrent.locks.*, or java.util.concurrent.atomics.*So, while I recommend against volatile, what's a real problem is using multiple different locking schemes in the same code, and you use both volatile and synchronized.The use of synchronization as your concurrency strategy should be fine all on its own in this case.GeneralThe private final Object lock = new Object() is great. Good to be final, and I prefer that to locking on the instance itself (like synchronized(this)). Makes it more clear.Println in a synchronized block is almost never a good idea:synchronized(lock) { System.out.println(Received ack Event: + state); state = State.READY;}A better solution would be:State copy = null;synchronized(lock) { copy = state; state = State.READY;}System.out.println(Received ack Event: + copy);Timed sleeps while waiting for a lock are a horrible solution:Thread.sleep(1);Better algorithmThe right way to solve this problem, though is to used timed-waits on the lock:private void handleAck(Event event) { State copy = null; synchronized(lock) { copy = state; state = State.READY; // tell any waiting threads the state has changed. lock.notifyAll(); } System.out.println(Received ack Event: + copy);}/* THIS IS THE IMPORTANT BIT */private void ensureAck() { // how long to wait... 10 milliseconds try { synchronized(lock) { if (state == State.NEW) { long waited = 0; // milliseconds long start = System.currentTimeMillis(); while (state == State.NEW && waited < 10) { lock.wait(10 - waited); waited = System.currentTimeMillis() - start; } } } } catch (InterruptedException e) { // do more than nothing..... }}The above strategy does not need anything to be volatile, and it does not 'poll' the state, it returns immediately when the state changes.... |
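The key idea in the accepted answer, wait on the lock with a timeout and have the ACK handler notify, is not Java-specific. For readers more comfortable outside Java, here is the same pattern sketched with Python's threading.Condition; the names are illustrative, not a port of the reviewed class:

```python
import threading

class EventHandler:
    def __init__(self):
        self._cond = threading.Condition()
        self._acked = False

    def handle_ack(self):
        with self._cond:
            self._acked = True
            self._cond.notify_all()      # wake any response handler waiting for the ACK

    def handle_response(self, timeout=0.01):
        with self._cond:
            # Wait until the ACK arrives or the deadline passes, then proceed either way,
            # so a lost ACK never blocks the response forever.
            self._cond.wait_for(lambda: self._acked, timeout=timeout)
            print("handling response, ack seen:", self._acked)
```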
_cstheory.21424 | the paper In defense of the Simplex Algorithm's worst-case behavior Disser/Skutella [1] was recently cited on this tcs.se site by saeed on another interesting question. the paper introduces the idea of NP mighty algorithms (p3 def2). it follows a fruitful/continuing line of research analyzing P$\stackrel{?}{=}$NP wrt the simplex algorithm and linear programming of which there have been other major recent advances, eg results by Pokutta et al in [2] showing that the P-time TSP polytope must have an unlikely/restrictive form (commentaries in Barriers to P/NP proofs, RJLipton, also Stating P$\stackrel{?}{=}$NP without TMs). question (possibly with multiple leading answers): the Disser/Skutella paper has closely related ideas but does not seem to explicitly reformulate the P$\stackrel{?}{=}$NP question. what is an equivalent way to state/study it in their introduced schema/framework of NP Mighty algorithms? what is a/the basic open problem in simplex/linear programming complexity theory that is equivalent to P$\stackrel{?}{=}$NP?(somewhat related question: the Disser/Skutella paper also refers to Klee-Minty cubes, long used to show worst-case behavior on the simplex algorithm. are there any results relating lower bounds on them to general algorithmic lower bounds and/or complexity class separations eg P$\stackrel{?}{=}$NP etc?)[1] In defense of the Simplex Algorithm's worst-case behavior Dissker/Skutella[2] Exponential Lower Bounds for Polytopes in Combinatorial Optimization Fiorini et al | equivalent way(s) of expressing P=?NP problem in linear programming? | lower bounds;linear programming;p vs np;barriers;simplex | This answer is for your somewhat related question:In the following recent paper, Klee-Minty cubes were used explicitly to show that there exists a pivoting rule for the simplex method (not one of the standard ones, like Dantzig's pivoting rule analysed by Disser and Skutella) for which it is PSPACE-complete to decide whether a variable enters the basis for the simplex method with this pivoting rule. It was left open (Conjecture 3 in the paper) whether the same result holds for Dantzig's pivot rule. I. Adler, C. Papadimitriou, and A. Rubinstein (2014).On Simplex Pivoting Rules and Complexity Theory.In Proc. of IPCO, pp. 13-24.In the following paper we prove that conjecture and strengthen Disser and Skutella's result to show two PSPACE-completeness results for the simplex method with Dantzig's pivoting rule (one for whether a variable enters the basis along the way and another for whether a variable is in the final solution found by Dantzig's pivot rule).J. Fearnley and R. Savani (2015).The Complexity of the Simplex Method. In Proc. of STOC, pp. 201-208. http://arxiv.org/abs/1404.0605We use the known connection between linear programs and Markov decision processes (MDPs). A central part of our construction are the exponential-time examples due to Condon and Melekopoglou for single-switch policy iteration for MDPs. Those examples use a reflected binary Gray code, as used for Klee-Minty cubes. We use a modification of those examples as a clock to drive iterated circuit evaluation in our construction. |
_codereview.132586 | The following is my implementation of the absolute value function in the MIPS assembly language. I would very much like to hear constructive criticism of my code from you as well as any words of wisdom you want to pass my way. The code is heavily commented.################################################################################# ## File name: abs.a ## ## This program prints on the console the absolute value of the number declared ## in the data section. ## ################################################################################################################## ## System call code constants ## #################################SYS_PRINT_INT = 0x01SYS_PRINT_STRING = 0x04SYS_EXIT = 0x0A.globl __start .datax: .word 0xFFFFFFFC # 0xFFFFFFFC = -4endl: .asciiz \n .text__start: lw $a0, x # Load $a0 with the value of x jal _abs # Call the abs subroutine add $t0, $v0, $zero # Move the result into $t0 li $v0, SYS_PRINT_INT # Print the result move $a0, $t0 syscall li $v0, SYS_PRINT_STRING # Print a newline la $a0, endl syscall li $v0, SYS_EXIT # Exit the program syscall################################################################################# ## The abs function takes a 32-bit singed two's compliment number as an ## argument and returns its absolute value. ## ## $a0 - the number to find the absolute value of ## $v0 - the result of the function ## ## ## Implementation details: ## ## If the number is negative, we need to negate it to get its absolute value. ## If it's positive, we don't have to do anything. We just return the number ## itself. To find out whether a number is negative or positive, we need to ## examine its most significant bit. If it's got a zero in there, the number is ## positive. If it's instead got a one in there, and since we're using a singed ## two's compliment binary representation, the number is negative. So, we shift ## the number 16 positions to the left and save the result in the $t0 register. ## Then, we use a bit mask of 0x8000 (1000 0000 0000 0000 in binary) to extract ## the most significant bit. And lastly, we use the branch on equal to zero ## operation to either save the number in the $v0 register and return from the ## function if the number is positive or negate it if it's negative. To negate ## it, we use the nor oration and add one to it. ## #################################################################################_abs: srl $t0, $a0, 16 # Shit 16 positions to the right andi $t0, $t0, 0x8000 # Extract the most significant bit beqz $t0, skip # If it's zero, jump to skip nor $a0, $a0, $zero # If it doesn't equal to zero, addi $a0, 1 # make it positiveskip: add $v0, $a0, $zero # Move the result into $v0 jr $ra # Return to the call site################################################################################# ## end of function ## #################################################################################Test:$ spim -notrap file abs.a$ 4 | Absolute value function in MIPS | assembly | Since the only really good answers will be of the form Why didn't you just...?, this seems like a duplicate of https://stackoverflow.com/questions/2312543/absolute-value-in-mipsRe your extensive comments: Probably too extensive, given that the name of the function (_abs) tells the reader everything he needs to know (via man abs).Also, if you're going to write a ton of comments for the reader to wade through, please make sure they're correct! You misspelled at least signed and complement in the first sentence alone. |
_softwareengineering.38423 | Possible Duplicate:When are Getters and Setters Justified Is it a good programming practice to not use getters and setters in trivial parts of code? | Is it a good programming practice to not use getters and setters in trivial parts of code? | programming languages | null |
_unix.17040 | I've two configuration files, the original from the package manager and a customized one modified by myself. I've added some comments to describe behavior.How can I run diff on the configuration files, skipping the comments? A commented line is defined by:optional leading whitespace (tabs and spaces)hash sign (#)anything other characterThe (simplest) regular expression skipping the first requirement would be #.*. I tried the --ignore-matching-lines=RE (-I RE) option of GNU diff 3.0, but I couldn't get it working with that RE. I also tried .*#.* and .*\#.* without luck. Literally putting the line (Port 631) as RE does not match anything, neither does it help to put the RE between slashes.As suggested in diff tool's flavor of regex seems lacking?, I tried grep -G:grep -G '#.*' fileThis seems to match the comments, but it does not work for diff -I '#.*' file1 file2.So, how should this option be used? How can I make diff skip certain lines (in my case, comments)? Please do not suggest greping the file and comparing the temporary files. | How to diff files ignoring comments (lines starting with #)? | regular expression;diff | According to Gilles, the -I option only ignores a line if nothing else inside that set matches except for the match of -I. I didn't fully get it until I tested it.The TestThree files are involved in my test:File test1: textFile test2: text #commentFile test3: changed text #commentThe commands:$ # comparing files with comment-only changes$ diff -u -I '#.*' test{1,2}$ # comparing files with both comment and regular changes$ diff -u -I '#.*' test{2,3}--- test2 2011-07-20 16:38:59.717701430 +0200+++ test3 2011-07-20 16:39:10.187701435 +0200@@ -1,2 +1,2 @@-text+changed text #commentThe alternative waySince there is no answer so far explaining how to use the -I option correctly, I'll provide an alternative which works in bash shells:diff -u -B <(grep -vE '^\s*(#|$)' test1) <(grep -vE '^\s*(#|$)' test2)diff -u - unified diff-B - ignore blank lines<(command) - a bash feature called process substitution which opens a file descriptor for the command, this removes the need for a temporary filegrep - command for printing lines (not) matching a pattern-v - show non-matching linesE - use extended regular expressions'^\s*(#|$)' - a regular expression matching comments and empty lines^ - match the beginning of a line\s* - match whitespace (tabs and spaces) if any(#|$) match a hash mark, or alternatively, the end of a line |
_unix.118013 | I'm not sure why, but my app never executes when I reboot my machine. In crontab (crontab -e) I see: @reboot /root/myscriptname.sh. The script file contains: #!/bin/bash followed by nohup mono-sgen /root/myapp.exe /path/file > /dev/null 2>&1 &. I run ps aux | grep mono and I don't see my app. If I run /root/myscriptname.sh from the command line it runs fine. I tried using bash /root/myscriptname.sh in crontab but that didn't solve it. How do I execute mono-sgen /root/myapp.exe /path/file > /dev/null 2>&1 at boot without putting it directly in crontab (I want to leave it in an easy-to-disable sh file)? | Cron not running my script | cron | null
_codereview.28207 | I'm trying to find the closest point (Euclidean distance) from a user-inputted point to a list of 50,000 points that I have. Note that the list of points changes all the time. and the closest distance depends on when and where the user clicks on the point.#find the nearest point from a given point to a large list of pointsimport numpy as npdef distance(pt_1, pt_2): pt_1 = np.array((pt_1[0], pt_1[1])) pt_2 = np.array((pt_2[0], pt_2[1])) return np.linalg.norm(pt_1-pt_2)def closest_node(node, nodes): pt = [] dist = 9999999 for n in nodes: if distance(node, n) <= dist: dist = distance(node, n) pt = n return pta = []for x in range(50000): a.append((np.random.randint(0,1000),np.random.randint(0,1000)))some_pt = (1, 2)closest_node(some_pt, a) | Finding the closest point to a list of points | python;performance;numpy;clustering | It will certainly be faster if you vectorize the distance calculations:def closest_node(node, nodes): nodes = np.asarray(nodes) dist_2 = np.sum((nodes - node)**2, axis=1) return np.argmin(dist_2)There may be some speed to gain, and a lot of clarity to lose, by using one of the dot product functions:def closest_node(node, nodes): nodes = np.asarray(nodes) deltas = nodes - node dist_2 = np.einsum('ij,ij->i', deltas, deltas) return np.argmin(dist_2)Ideally, you would already have your list of point in an array, not a list, which will speed things up a lot. |
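If the same set of 50,000 points is queried many times between changes, a spatial index can beat even the vectorised linear scan shown in the answer; a rough sketch with SciPy's cKDTree (assuming SciPy is available, and rebuilding the tree whenever the point list changes):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
nodes = rng.integers(0, 1000, size=(50_000, 2))   # same shape of data as the question

tree = cKDTree(nodes)             # build once per change of the point list
dist, idx = tree.query((1, 2))    # nearest neighbour of the clicked point
print(idx, nodes[idx], dist)
```

Building the tree costs roughly O(n log n), but each subsequent click is then answered in about O(log n) instead of a full pass over all points.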
_codereview.92817 | I am creating a true or false quiz game and when the user chooses an answer a tick or cross will appear on the screen telling the user whether they are right or wrong and then automatically bring the next question in.Currently I am using 2 almost identical functions that draw the next question from my question array (one for each button). It appears to work but seems very cluttered and illogical. Is there a more elegant way to code the logic to this?Also, the concept of a previous question button has caused a lot of issues. Through a lot of research I am still not sure how I would incorporate one into the program.@IBAction func trueButton(sender: AnyObject) { delay(2) { nextQuestionTrue() } }@IBAction func falseButton(sender: AnyObject) { delay(2) { nextQuestionFalse() }}func nextQuestionTrue() -> QuizQuestion { // initialize current question var currentQuestion: QuizQuestion = QuizQuestion(question: , answer: false, explanation: ) if currentIndex < questions.count { currentQuestion = questions[currentIndex] } if currentQuestion.answer == true { answerLabel.text = correct } else if currentQuestion.answer == false { answerLabel.text = Incorrect } questionLabel.text = currentQuestion.question currentIndex++ return currentQuestion}func nextQuestionFalse() -> QuizQuestion { // initialize current question var currentQuestion: QuizQuestion = QuizQuestion(question: , answer: false, explanation: ) if currentIndex < questions.count { currentQuestion = questions[currentIndex] } if currentQuestion.answer == false { answerLabel.text = correct } else if currentQuestion.answer == true { answerLabel.text = Incorrect } questionLabel.text = currentQuestion.question currentIndex++ return currentQuestion} | True or false quiz logic | swift;quiz | Personally I would combine the two @IBActions into a single one and identify the pressed button from the sender object.I would also recommend you to look into the Ternary Conditional Operator, which can make your simple if statements much cleaner, like the following:if currentQuestion.answer == true { answerLabel.text = correct} else if currentQuestion.answer == false { answerLabel.text = Incorrect}// The previous is exactly the same as the following:answerLabel.text = (currentQuestion.answer == true) ? correct : IncorrectYou can read more about the Ternary Conditional Operator here.You can see my result from trying to recreate your current setup as inspiration (FYI, my buttons have the titles True and False, which I make use of to turn them into booleans)://// ViewController.swift// TrueFalseQuiz//// Created by Stefan Veis Pennerup on 06/06/15.// Copyright (c) 2015 Kumuluzz. All rights reserved.//import UIKitclass ViewController: UIViewController { // MARK: - Question models // Notice: This is not the best practice for storing the model in terms of the MVC pattern, // this is just for illustration purposes. let questions = [ QuizQuestion(question: Do I like coffee?, answer: true, explanation: Because it's awesome!), QuizQuestion(question: Is bacon god's gift to mankind?, answer: true, explanation: Because it's awesome!), QuizQuestion(question: Should I take a nap right now?, answer: false, explanation: You gotta review some code!)] var currentQuestionIndex = 0 // MARK: - Storyboard outlets @IBOutlet weak var questionLabel: UILabel! @IBOutlet weak var answerLabel: UILabel! 
// MARK: - Lifecycle methods override func viewDidLoad() { super.viewDidLoad() // Initializes the first question self.answerLabel.text = self.questionLabel.text = questions[currentQuestionIndex].question } // MARK: - Storyboard actions /** The target action for both the true false buttons */ @IBAction func answerPressed(sender: UIButton) { // Exits if there aren't any questions left if currentQuestionIndex >= questions.count { return } // Retrieves the user's answer and figures out if it correct let userAnswer = sender.currentTitle let isAnswerCorrect = userAnswer?.toBool() == questions[currentQuestionIndex].answer // Prints appropiate message answerLabel.text = (isAnswerCorrect) ? Correct : Incorrect // Updates with a new question currentQuestionIndex++ let isThereAnyQuestionsLeft = currentQuestionIndex < questions.count questionLabel.text = (isThereAnyQuestionsLeft) ? questions[currentQuestionIndex].question : No more questions }}// Extends the standard String class to convert a string to a booleanextension String { func toBool() -> Bool? { switch self { case True, true, yes, 1: return true case False, false, no, 0: return false default: return nil } }}Could you describe a bit more about the previous question button? Should it replace the current question? Should it show the user's answer or just start over with the question? UPDATE 1:This update is concerning the previous question funcionality outlined by OP in the comments of this answer.Regarding next and back:I would add two new buttons (a next and back button, surprise, surprise) and create two separate storyboard actions for each of them. Each method would respectively increment or decrement the current question index. Both of the methods would then update the currently displayed question.// MARK: - Next/back methods@IBAction func nextPressed(sender: UIButton) { currentQuestionIndex++ if currentQuestionIndex == questions.count { currentQuestionIndex = 0 } updateCurrentQuestion()}@IBAction func backPressed(sender: UIButton) { currentQuestionIndex-- if currentQuestionIndex < 0 { currentQuestionIndex = questions.count - 1 } updateCurrentQuestion()}func updateCurrentQuestion() { answerLabel.text = questionLabel.text = questions[currentQuestionIndex].question}Regarding submitting all:In the code outline aboved there are 2 main varibles related to your questions model:let questions: [QuizQuestion]var currentQuestionIndex: IntTo handle the feature for storing the user's answers, then I would add another property to your QuizQuestion model to hold the user's answers. This means your model would look something similar to the following://// QuizQuestion.swift// TrueFalseQuiz//// Created by Stefan Veis Pennerup on 06/06/15.// Copyright (c) 2015 Kumuluzz. All rights reserved.//import Foundationclass QuizQuestion { let question: String! let answer: Bool! let explanation: String! var usersAnswer: Bool? init(question: String, answer: Bool, explanation: String) { self.question = question self.answer = answer self.explanation = explanation }}Since usersAnswer is an Optional, then it will initially be nil. 
Whenever the method answerPressed is called, then you should add the following line to store the answer, thus overwriting nil: let userAnswer = sender.currentTitle questions[currentQuestionIndex].usersAnswer = userAnswer?.toBool()I would then add a new button called finish or submit all and connect the following storyboard action, which determines if all the questions have been answered before continuing.// MARK: - Finishing @IBAction func submitAll(sender: UIButton) { let hasAnsweredAllQuestions = questions.reduce(true) { (x, q) in x && (q.usersAnswer != nil) } println(has user answered all questions?: \(hasAnsweredAllQuestions))}UPDATE 2:Explanation of reduce for OP.reduce can be quite hard to grasp initially, but once you understand it, then it can be super powerful. In short, it takes all the values of an array and compute it into a single value (could be an Int, Bool or something else). You can read more about it here: https://stackoverflow.com/questions/28283451/ios-swift-reduce-function.The line let hasAnsweredAllQuestions = questions.reduce(true) { (x, q) in x && (q.usersAnswer != nil) } could be expanded to something similar the following:var hasAnsweredAllQuestions = truefor q in questions { if (q.usersAnswer != nil) { hasAnsweredAllQuestions = true } else { hasAnsweredAllQuestions = false break; }}As you can see it simply loops over all the questions and checks if the user has given an answer (if the answer is not nil).Other functions which are equally powerful as reduce are map and filter which you can try to do more research on. Regarding checking the user's answer with the correct answer, then you could consider using map, but I'll leave you with that hint for now ;) |
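The fold shown in UPDATE 2 is language-agnostic; for comparison only, the same "has the user answered every question" check written in Python, with invented field names standing in for the Swift model:

```python
questions = [
    {"answer": True,  "users_answer": True},
    {"answer": False, "users_answer": None},   # not answered yet
]

# Equivalent of the Swift reduce: collapse the whole list into one boolean.
has_answered_all = all(q["users_answer"] is not None for q in questions)
print(has_answered_all)   # False until every question has a non-nil answer
```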
_unix.352920 | I have a file created with mysqldump that is 11GB. I need to use the mysql command to import it into a database, but I need to add USE db1; at the top. It would take forever to rewrite the file. Is there a way I can concatenate another file at the beginning of the input redirect to fool it into looking at it as a single file? text.txt contains: USE db1; and sql_out.sql contains the data from mysqldump (created with the --skip-add-drop-table and --no-create-info options). Command attempted: mysql --host=<host> --user=<user> --password=<pwd> < echo $(cat text.txt sql_out.sql). When I do that I get: echo: No such file or directory. If I try it without the echo, I get: $(cat text.txt sql_out.sql): ambiguous redirect. Is there a way to do this? | using cat at input file | command line;mysql;input | You can pipe it in: cat text.txt sql_out.sql | mysql --host=... Alternatively, to avoid having to create a new file: (echo "USE db1;"; cat sql_out.sql) | mysql --host=...
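If a shell pipeline is awkward to embed (for example inside a larger job), the same "stream the header, then the dump" idea can be sketched in Python; the host, user and file names below are placeholders, not values from the question.

```python
import subprocess

with subprocess.Popen(["mysql", "--host=HOST", "--user=USER", "--password=PWD"],
                      stdin=subprocess.PIPE) as mysql:
    mysql.stdin.write(b"USE db1;\n")          # the header that would go in text.txt
    with open("sql_out.sql", "rb") as dump:
        for chunk in iter(lambda: dump.read(1 << 20), b""):
            mysql.stdin.write(chunk)          # stream the 11GB dump, no temp file
    mysql.stdin.close()
    mysql.wait()
```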
_codereview.47488 | After asking a question on StackOverflow about the two .NET caching systems taking dependencies on each other I gave implementing them a go. I posted them as my own answer there but before accepting the answer I'd like some second opinions on the implementations.HttpCacheChangeMonitorAllows ObjectCache items to take a dependency on HTTPCachepublic class HttpCacheChangeMonitor : ChangeMonitor{ private readonly string _uniqueId = Guid.NewGuid().ToString(N, CultureInfo.InvariantCulture); private readonly string[] _httpCacheKeys; public override string UniqueId { get { return _uniqueId; } } public HttpCacheChangeMonitor(string httpCacheKey) : this(new[] { httpCacheKey }) { } public HttpCacheChangeMonitor(string[] httpCacheKeys) { _httpCacheKeys = httpCacheKeys; Initialise(); } private void Initialise() { HttpRuntime.Cache.Add(_uniqueId, _uniqueId, new CacheDependency(null, _httpCacheKeys), DateTime.MaxValue, Cache.NoSlidingExpiration, CacheItemPriority.NotRemovable, Callback); InitializationComplete(); } private void Callback(string key, object value, CacheItemRemovedReason reason) { OnChanged(null); } protected override void Dispose(bool disposing) { Debug.WriteLine( _uniqueId + notifying cache of change., HttpCacheChangeMonitor); HttpRuntime.Cache.Remove(_uniqueId); }}Testpublic class HttpCacheChangeMonitorTests{ [Fact] public void ChangeMonitorTest() { HttpRuntime.Cache.Add(ChangeMonitorTest1, , null, Cache.NoAbsoluteExpiration, new TimeSpan(0,10,0), CacheItemPriority.Normal, null); HttpRuntime.Cache.Add(ChangeMonitorTest2, , null, Cache.NoAbsoluteExpiration, new TimeSpan(0, 10, 0), CacheItemPriority.Normal, null); using (MemoryCache cache = new MemoryCache(TestCache, new NameValueCollection())) { // Add data to cache for (int idx = 0; idx < 10; idx++) { cache.Add(Key + idx, Value + idx, GetPolicy(idx)); } long middleCount = cache.GetCount(); // Flush cached items associated with NamedData change monitors HttpRuntime.Cache.Remove(ChangeMonitorTest1); long finalCount = cache.GetCount(); Assert.Equal(10, middleCount); Assert.Equal(5, middleCount - finalCount); HttpRuntime.Cache.Remove(ChangeMonitorTest2); } } private static CacheItemPolicy GetPolicy(int idx) { string name = (idx % 2 == 0) ? 
ChangeMonitorTest1 : ChangeMonitorTest2; CacheItemPolicy cip = new CacheItemPolicy(); cip.AbsoluteExpiration = System.DateTimeOffset.UtcNow.AddHours(1); cip.ChangeMonitors.Add(new HttpCacheChangeMonitor(name)); return cip; }}MemoryCacheDependencyAllows HttpCache items to take a dependency on MemoryCachepublic class MemoryCacheDependency : CacheDependency{ private readonly string _uniqueId = Guid.NewGuid().ToString(N, CultureInfo.InvariantCulture); private readonly IEnumerable<string> _cacheKeys; private readonly MemoryCache _cache; public override string GetUniqueID() { return _uniqueId; } public MemoryCacheDependency(MemoryCache cache, string cacheKey) : this(cache, new[] { cacheKey }) { } public MemoryCacheDependency(MemoryCache cache, IEnumerable<string> cacheKeys) { _cache = cache; _cacheKeys = cacheKeys; Initialise(); } private void Initialise() { var monitor = _cache.CreateCacheEntryChangeMonitor(_cacheKeys); CacheItemPolicy pol = new CacheItemPolicy{AbsoluteExpiration = DateTime.MaxValue, Priority = CacheItemPriority.NotRemovable}; pol.ChangeMonitors.Add(monitor); pol.RemovedCallback = Callback; _cache.Add(_uniqueId, _uniqueId, pol); FinishInit(); } private void Callback(CacheEntryRemovedArguments arguments) { NotifyDependencyChanged(arguments.Source, EventArgs.Empty); } protected override void DependencyDispose() { Debug.WriteLine( _uniqueId + notifying cache of change., ObjectCacheDependency); _cache.Remove(_uniqueId); base.DependencyDispose(); }}Testpublic class MemoryCacheDependencyTests{ [Fact] public void CacheDependencyTest() { using (MemoryCache cache = new MemoryCache(TestCache, new NameValueCollection())) { cache.Add(HttpCacheTest1, DateTime.Now, new CacheItemPolicy {SlidingExpiration = new TimeSpan(0, 10, 0)}); cache.Add(HttpCacheTest2, DateTime.Now, new CacheItemPolicy {SlidingExpiration = new TimeSpan(0, 10, 0)}); // Add data to cache for (int idx = 0; idx < 10; idx++) { HttpRuntime.Cache.Add(Key + idx, Value + idx, GetDependency(cache, idx), Cache.NoAbsoluteExpiration, new TimeSpan(0,10,0), CacheItemPriority.NotRemovable, null); } int middleCount = HttpRuntime.Cache.Count; // Flush cached items associated with NamedData change monitors cache.Remove(HttpCacheTest1); int finalCount = HttpRuntime.Cache.Count; Assert.Equal(10, middleCount); Assert.Equal(5, middleCount - finalCount); } } private static CacheDependency GetDependency(MemoryCache cache, int idx) { string name = (idx % 2 == 0) ? HttpCacheTest1 : HttpCacheTest2; return new MemoryCacheDependency(cache, name); }} | HTTPCache and MemoryCache cross dependency | c#;.net;cache | Overall looks very good, clean and well done. I only see minor tiny little things...Nitpick - Initialise vs InitializeThis is just a minor nitpick, but it's kind of jumping at me:private void Initialise(){ HttpRuntime.Cache.Add(_uniqueId, _uniqueId, new CacheDependency(null, _httpCacheKeys), DateTime.MaxValue, Cache.NoSlidingExpiration, CacheItemPriority.NotRemovable, Callback); InitializationComplete();}I find it weird that a method called Initialise() is calling one that's called InitializationComplete() - I think for consistency it would be better named Initialize().Also the MSDN for InitializationComplete() says it's called from the constructor of derived classes to indicate that initialization is finished. 
- you are calling it from within the constructor, and I'm not familiar with using the ChangeMonitor class but it might be clearer to have that call explicitly show up in the constructor (similar to how InitializeComponents() remains in a form's constructor no matter what):public HttpCacheChangeMonitor(string[] httpCacheKeys){ _httpCacheKeys = httpCacheKeys; Initialize(); InitializationComplete();}But then again it's no biggie, the code is very clear and easy to follow.Temporal CouplingThat said I like constructors that do very little, and this is what you've got here, but the Initialise method seems of very little use, and has temporal coupling with the setting of _httpCacheKeys - I mean, if your constructor looked like this:public HttpCacheChangeMonitor(string[] httpCacheKeys){ Initialise(); _httpCacheKeys = httpCacheKeys;}I'd expect it to blow up. One way to eliminate the temporal coupling, would be to take the array as a parameter:private void Initialize(string[] httpCacheKeys){ HttpRuntime.Cache.Add(_uniqueId, _uniqueId, new CacheDependency(null, httpCacheKeys), DateTime.MaxValue, Cache.NoSlidingExpiration, CacheItemPriority.NotRemovable, Callback); InitializationComplete();}...and then the order of operations doesn't matter anymore:public HttpCacheChangeMonitor(string[] httpCacheKeys){ Initialize(httpCacheKeys); _httpCacheKeys = httpCacheKeys;}(haven't looked at the test code) |
_unix.67471 | Is it possible to configure for example 4 desktops and switch them separately for each monitor? | KDE; each monitor another desktop | fedora;kde;monitors;amd | I think the following is appropriate to your question, unless you're asking about entirely separate displays and input (like a multi-user/seat setup):I did this about 10 years ago as well. One advantage to the separate desktop setup is.... they're separate. Each monitor is a different $DISPLAY in X. You can run different window managers and DEs in each. The major disadvantage, as I recall, is that.... they're separate. there's no general way to switch a windows from one desktop to another. You'd have to quit the app and restart it on the appropriate display. Another problem, is this require much mucking about in your /etc/X11/xorg.conf and I will freely admit that I haven't had to manually modify that file in many years.I do not at all remember what I did way back then.If you had a NVIDIA gpu, which you appear not to have, the nvidia-settings tool will explictely allow you to set the monitors each as a Separate X Screen. This would eliminate manual modification the xorg.conf and that's generally a good thing.On the AMD side? Here's a general discussion (old, I know, and not about an amd card, but it still ) that may give you point to start, perhaps: Non-Twinview Non-Xinerama Xorg.com. The key setting is Option Xinerama off in the Section ServerLayout section.Here's my ServerLayout section from xorg.conf in the separate mode:Section ServerLayout Identifier Layout0 Screen 0 Screen0 0 0 Screen 1 Screen1 1920 0 InputDevice Keyboard0 CoreKeyboard InputDevice Mouse0 CorePointer Option Xinerama 0EndSectionThat pretty much did it. KDE worked, though Panels were only on one monitor. Cinnamon worked, but only in fall-back mode. To get anything on the other monitors, I explicitly had to change the DISPLAY to :0.0 or :0.1 and start another application. It would then appear on the monitor/display indicated. Once I seem to recall having had separate windowmanagers working, but my xorg.conf-fu is lacking these days. |
_cstheory.3847 | Could anyone give an intuitive (not intutionistic) explanation of the correspondence of left introduction and elimination of implication in Sequent Calculus (SC) and Natural Deduction (ND) respectively? I know they should by the symmetry of SC but I don't see how they correspond to each other. More generally, I don't understand how left introduction of a connective in SC renders exactly the elimination of it in ND intuitively. | About the correspondence of left introduction and elimination of implication in Sequent Calculus and in Natural Deduction resp. | lo.logic;proof theory;sequent calculus;natural deduction | In computational terms, a sequent calculus term is a sequentialization of a lambda term. That is, viewed as a type system, sequent calculus can be seen as giving an evaluation order of a lambda-term.So, suppose $\Gamma \vdash e_1 : A \to B$, and $\Gamma \vdash e_2 : A$. In natural deduction, the term for implication elimination is $\Gamma \vdash e_1\;e_2 : B$ -- that is, application. However, note that this doesn't tell us whether to evaluate $e_1$ first or $e_2$ first. In sequent calculus, a corresponding proof term assignment must look like $$\mathsf{let}\; f = e_1 \;\mathsf{in}\;\mathsf{let}\;v = e_2\;\mathsf{in}\;\mathsf{let}\; x = f\;v \;\mathsf{in}\;x$$ or else it must look like $$\mathsf{let}\; v = e_2\;\mathsf{in}\;\mathsf{let}\;f = e_1 \;\mathsf{in}\;\mathsf{let}\; x = f\;v \;\mathsf{in}\;x$$ Here, the let-form is the proof term corresponding to the use of the cut-rule, and since all of the left rules act only on hypotheses/variables, this forces us to bind all intermediate results to variables. This requirement ensures that we are forced to explicitly say which of $e_1$ or $e_2$ we reduce and bind to a variable first. For purely functional languages, this explicitness doesn't matter, but as you add effects it becomes easier to work with sequent calculi. This is why things like calculi of control effects/classical logic are typically presented sequent calculus. |
_unix.65836 | I successfully installed a script to automatically launch from /etc/init.d on my new Raspberry Pi. Unfortunately, it is a node.js app that never returns, and therefore hangs the device during boot (this is on Debian). Yes, I'm an idiot. Is there a secret handshake I can do during boot to prevent it from running my init.d script, so I can get to a login and a shell to fix it? | init.d script causes boot hang | debian;init script;raspberry pi;init.d | Assuming that the node.js init script runs before sshd or any other external access script (otherwise, you could just log in remotely, disable the script, and then reboot), the easiest thing to do is to take your SD card to another computer and mount it there, find the init script, and move it out of the init directory. Yes, it requires an external system, but you needed an external system to prepare the flash disk anyway, so I hope you still have one around. There's also a safe mode for Raspbian, but it sounds like you aren't running that. Here are relevant forum links in case they might help: Poor scripting stops my pi from booting, and Safe Mode.
_unix.166933 | I have a server with two NIC and it's running Ubuntu Server 14.04. The first one is the interface p3p1 and it's connected to the subnet 192.168.1.0/24 with the IP 192.168.1.100. The second one is the interface em1 and it's connected to the subnet 192.168.100.0/24 with the IP 192.168.100.1.My server can ping all the hosts of the subnet 192.168.1.0/24, but it can't ping the hosts of 192.168.100.0/24.When I try to ping to host (192.168.100.20) on the subnet 192.168.100.0/24, I can see the ARP requests of my server and the ARP response of the host telling his MAC address to the server. But when I try to see the arp table of the server, it's telling: ? (192.168.100.20) at <incomplete> on em1When I try to ping to the server (192.168.100.1) with the host (192.168.1.20), I can see the ARP requests of my host, but I don't get responses of the server.If I add manually the MAC address of the host on the server ARP table, the ping works.I think the ARP service it doesn't work for the em1 interface, but I don't know how to repair.There is my config :ARP Tablethegorlie@serv-io ~> arp -a? (192.168.100.20) at <incomplete> on em1? (192.168.1.1) at e0:ce:c3:f5:be:56 [ether] on p3p1? (192.168.1.14) at 08:3e:8e:dd:05:e7 [ether] on p3p1ifconfigem1 Link encap:Ethernet HWaddr 74:d4:35:e7:62:16 inet addr:192.168.100.1 Bcast:192.168.100.255 Mask:255.255.255.0 inet6 addr: xxxx::xxxx:35ff:fee7:6216/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:72 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:5422 (5.4 KB) Interrupt:20 Memory:f7e00000-f7e20000lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:909 errors:0 dropped:0 overruns:0 frame:0 TX packets:909 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:91970 (91.9 KB) TX bytes:91970 (91.9 KB)p3p1 Link encap:Ethernet HWaddr 74:d4:35:e7:62:14 inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: xxxx:xxxx:xxxx:3a80:76d4:35ff:fee7:6214/64 Scope:Global inet6 addr: xxxx::xxxx:xxxx:fee7:6214/64 Scope:Link inet6 addr: xxxx:xxxx:xxxx:3a80:2d32:f878:e435:69ec/64 Scope:Global UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:27756 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:19ARP manual requestthegorlie@serv-io ~> arping -c 1 -I em1 192.168.100.20ARPING 192.168.100.20 from 192.168.100.1 em1Sent 1 probes (1 broadcast(s))Received 0 response(s)Wireshark capture on the host when server ping the hostGiga-Byt_e7:62:16 Broadcast ARP 60 Who has 192.168.100.20? Tell 192.168.100.1Sony_c8:7a:a3 Giga-Byt_e7:62:16 ARP 42 192.168.100.20 is at 30:f9:ed:c8:7a:a3Giga-Byt_e7:62:16 Broadcast ARP 60 Who has 192.168.100.20? Tell 192.168.100.1Sony_c8:7a:a3 Giga-Byt_e7:62:16 ARP 42 192.168.100.20 is at 30:f9:ed:c8:7a:a3Giga-Byt_e7:62:16 Broadcast ARP 60 Who has 192.168.100.20? Tell 192.168.100.1Sony_c8:7a:a3 Giga-Byt_e7:62:16 ARP 42 192.168.100.20 is at 30:f9:ed:c8:7a:a3Wireshark capture on the host when host ping the serverSony_c8:7a:a3 Broadcast ARP 42 Who has 192.168.100.1? Tell 192.168.100.20Sony_c8:7a:a3 Broadcast ARP 42 Who has 192.168.100.1? Tell 192.168.100.20Sony_c8:7a:a3 Broadcast ARP 42 Who has 192.168.100.1? Tell 192.168.100.20Sony_c8:7a:a3 Broadcast ARP 42 Who has 192.168.100.1? 
Tell 192.168.100.20 Sony_c8:7a:a3 Broadcast ARP 42 Who has 192.168.100.1? Tell 192.168.100.20 | Network interface not registering ARP response | linux;ubuntu;networking;networkcard;arp | I found the problem: the driver had some bugs. I downloaded the most recent version of the driver for my Ethernet card (Intel Ethernet Connection I217-V), which is embedded in the motherboard (GA-Z97N-WIFI), and installed it. It now works with no problem.
_cs.12354 | As I understand emulation a rule of the thumb says that a computer must be around a order of magnitude more powerful to emulate another system without resorting to tricks. This is because for every clock cycle of the system to be emulated, the host machine must process that cycle in the correct time and also process its own cycles.A Xbox 360 uses a 3.2GHz PowerPC Tri-Core Xenon processor, would this mean that emulating the console on a 3GHz PC CPU would be impossible, even if the host CPU was more powerful but not faster for clock cycles?To give a more straightforward example. A i7 that runs at the same clock speed as a Pentium 4 will not be able to emulate the Pentium 4, even though the i7 is many times more powerful. In addition, does this mean that to emulate the Xbox 360, The host computer must run at a speed of between 9GHz and 3.2GHz to be able to truly emulate a Xbox 360? If this is the case, surely there must be a way around this limitation? If we had a host computer with 10x3GHz CPU's could this system use parallelzation to somehow emulate a 3 core 3.2GHz system.Also the emulate should run at the speed that the user expected of the emulated system, I know if you slowed down the Xbox 360 on the emulator by a few times then it would be easier, I want to know if we can emulate a system at the speed it was meant to run.If this is not possible does that mean that for emulation, clock speed is king? And there is no way around this? Even if your host machine had a thousand CPU's running at 2GHz it could never ever emulate a Xbox 360 at its native speed?By the way I am sure this is in the domain of computer science. I am not sure exactly what it comes under but the simulation of a system within a system and inherent limitations is certainly applicable to computer science.What about this: could a 4 core 3GHz i7 CPU emulate in proper time a 3.8GHz single core Pentium 4? | Is it impossible to truly emulate a system if its actual clock speed is the same? | simulation | That very much depends on how well the instruction set of the CPU to be emulated maps to the instruction set of the emulating CPU. Certainly, clock speed is not the deciding factor. It might very well be that a slower (clock wise) CPU A can emulate a faster CPU B.This can be both because of more powerful instructions (e.g. SSE instructions that allow doing the same operation to multiple values at once), or improvements in instruction level parallelism (e.g. more adders, multipliers etc. so that more instructions can be executed simultaniously, smarter recognition of independent instructions). There are also strong contributions from larger caches and smarter branch prediction. All these factors (and probably many more that I don't know about) can be improved to increase the average number of instructions executed without changing the clock speed. Since clock-speed increases stalled some years ago, this kind of improvement has been a major contributor to performance increases in CPUs.Of course it also strongly depends on the skill of the person writing the emulator. There will always be some overhead involved in translating the instructions for CPU B to instructions that work on A. The larger the difference between the two CPUs, the larger the overhead. |
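As a back-of-the-envelope illustration of the answer's point that clock speed alone is not the budget, consider effective instruction throughput rather than GHz; the numbers below are invented for illustration, not measurements of any real CPU.

```python
host = {"ghz": 3.0, "instr_per_cycle": 4}     # wider, smarter modern core
target = {"ghz": 3.2, "instr_per_cycle": 1}   # older, narrower core to emulate

host_rate = host["ghz"] * host["instr_per_cycle"]       # ~12 G instructions/s
target_rate = target["ghz"] * target["instr_per_cycle"] # ~3.2 G instructions/s
print(host_rate / target_rate)   # ~3.75x headroom before emulation overhead
```

Whether that headroom is enough then depends on the per-instruction translation overhead, which is exactly the "skill of the emulator author" factor the answer ends on.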
_unix.369089 | I'm not able to redirect output of command into a file when ran with cronjob [root@mail /]# crontab -l */1 * * * * /sbin/ausearch -i > /rummy[root@mail /]# cat /rummyIt's weird that when I dont give -i option , I'm able to redirect it very well.[root@mail /]# crontab -l */1 * * * * /sbin/ausearch > /rummy[root@mail /]# cat /rummy usage: ausearch [options] -a,--event <Audit event id> search based on audit event id --arch <CPU> search based on the CPU architecture -c,--comm <Comm name> search based on command line name - - - It there any syntax error or I'm missing here something? Note - ausearch -i fetches me below output on terminal and on redirecting output to file , it redirects it as it is. [root@server ~]# ausearch -i type=DAEMON_START msg=audit(05/22/2017 11:14:10.391:6858) : auditd start, ver=2.4.5 format=raw kernel=2.6.32-696.el6.x86_64 auid=unset pid=1319 subj=system_u:system_r:auditd_t:s0 res=success ----type=CONFIG_CHANGE msg=audit(05/22/2017 11:14:10.519:5) : audit_backlog_limit=320 old=64 auid=unset ses=unset subj=system_u:system_r:auditctl_t:s0 res=yes ----type=USER_ACCT msg=audit(05/22/2017 11:20:01.108:6) : user pid=2073 uid=root auid=unset ses=unset subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:accounting acct=root exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success' ----type=CRED_ACQ msg=audit(05/22/2017 11:20:01.108:7) : user pid=2073 uid=root auid=unset ses=unset subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred acct=root exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success' ----type=LOGIN msg=audit(05/22/2017 11:20:01.119:8) : pid=2073 uid=root subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old auid=unset new auid=root old ses=unset new ses=1 ---- | cronjob not redirecting output of command when used with option | cron;linux audit | The command does not produce output, but runs ok.You can see this because the file rummy got created.The ausearch utility seems to expect a search criteria, and the empty output could be due to you not providing one.See the ausearch manual on your system for further information.After a bit of reading of the ausearch manual, I found the following:--input-logs Use the log file location from auditd.conf as input for searching. This is needed if you are using ausearch from a cron job.Doing some Googling confirms that this indeed may be the issue. One email describes the problem:You need to use the --input-logs option. If ausearch sees stdin as a pipe, it assumes that is where it gets its data from. The input logs option tells it to ignore the fact that stdin is a pipe and process the logs. Aureport has the same problem and option to fix it.This was fixed in the 1.6.7 general release and backported to the 1.6.5 RHEL5 release.There also seems to be users who does not solve this by using --input-logs, but it's not clear what else may be wrong as there are never any followups from them. |
_webapps.92151 | I wish to link to a specific portion of a Wikipedia article. For example in this article about Mahatma Gandhi, I wish to create a link such that clicking on the link takes the user directly to the Struggle for Indian Independence section without showing the other details above. Is this possible? | How can one link to a specific portion of Wikipedia article? | wikipedia | Use the built in index in the Content box. If you click on Struggle for Indian Independence you'll get this URL https://en.wikipedia.org/wiki/Mahatma_Gandhi#Struggle_for_Indian_Independence_.281915.E2.80.9347.29which will link directly to that portion. It won't hide the upper portion but it will scroll to the appropriate place. |
_webmaster.3688 | I'm developing a website where users will be the ones who add new input, validate new suggestions and correct incorrect information. (Much like this site).To make sure that abuse of these rights by users is minimized, I need to develop a good reputation system which encourages good behaviour, much like this site's reputation, again.I would like to learn more about this subject, which is why I ask: are there any discussions about the development of user contributed websites? Any resources, studies or anything else that could help me out to develop a system that will work?Thanks in advance.- Tom | Are there discussions about developing a user contributed website? | user engagement;contribute | There is a good google tech talk about Building web reputation systemsYahoo patterns has a section on different reputation systems. |
_softwareengineering.244446 | I've been looking for some implementations of Service Layer and Controller interaction in blogs and in some open source projects. All of them seem to refer DbContext object in repository classes but avoided to use in service classes. Service classes essentially using a IQueryable<T> references of DbSet<T>. I want to know why this practice is good and why DbContext shouldn't have a reference in Service Layer. | Why DbContext object shouldn't be referred in Service Layer? | c#;entity framework | NOTE: I'm writing this answer as it would apply to a Programmers.SE question.A big part of application design is abstraction i.e. hiding away lower-level details and providing a simple conceptual interface to consumers. A related principle is encapsulation - the internal details of an operation should remain within the component and should not leak into the outside world.Typically, we would like to have all DB-related code in a single location. So we create a dedicated layer for it and encapsulate all DB logic in that layer. As far as the service layer is concerned, it does not know anything about how the data is retrieved. It just asks for some data from the data layer, magically gets it, and runs business logic on it.This makes it very easy to change the DB logic. For example, swapping the underlying database with an equivalent one has negligible impact on other areas of the application. Because the service knows nothing about the DB, as long as it gets the same data back, it will continue working exactly as before.Note that this also applies to other layers. The UI layer knows nothing of business logic. It talks in terms of domain-level abstractions and shows them to the consumer. How those domain objects are generated and manipulated is the job of the business layer and hidden within it. |
_cs.65180 | I am trying to understand how many states should be there in a Finite Automata which does not accept anything. I thought it to be containing only one non-final state, the starting state, with all transitions going to itself. But one of my teachers mentioned that the set of final states cannot be empty. Does that mean there will be two states states, one start state and one final state, with no transition between the two? | Automata for empty language | automata;finite automata | null |
_unix.163545 | I was asked the following: Which one, Firefox or Chrome, downloads and renders the page https://unix.stackexchange.com/ faster. Is there a command in Linux to measure that? | Command to compare speed of web-browsers | web | null |
_softwareengineering.160251 | I am managing a team of 6 people that recently moved to Scrum.We have a Scrum Master (one of the developers in the team) and a Product Owner.Since I have quite a lot of free time (because a lot of management work that I used to do is now done by the Scrum Master and Product Owner), and since I want to remain technically relevant, I am doing some technical development work.I act as a part of the development team, commit to some of the stories in each sprint, and participate in all the meetings as a part of the team.Do you think it is a good idea? Can it contradict the self-organization of the team? | Being a team manager and a developer in a Scrum team | scrum;management | Have a read of Roy Osherove's developing thoughts on team leadership in an Agile world at 5whys.comHe talks a lot about three key stages a team goes through as it evolves from Waterfall to Scrum.Survival phase (where most teams I see are in) -- in which team has no time to learn -- requires a more command and control type of leadership to create that learning time from nothing.Learning phase -- where a team has time to learn and is using it -- requires coach like leader, with bursts of control when things will take too long to learn the hard way (choosing no source control, for example)Self Organization Phase -- Where teams can solve their own problems -- requires more of a facilitator type of leader, that does not tell people what to do, but simply provides constraints and end goals. The team will get there on its own.When I came across Roy's ideas, at OpenVolcano '10, I was at a complete loss as to why my team had stopped improving. Then I realised that the team had crossed from Survival to Learning and I hadn't changed my style of management at all. I did so and it helped a lot.So I suggest figuring out which of those three phases you're in and managing accordingly.Also, make a decision now and be a leader or a developer. Don't fall into the trap of thinking you have spare time until you're well into the Self-Organisation phase. And, if you get there, realise that you're a good team leader (it's difficult) and move on to another team rather than reintegrating yourself. |
_webapps.3862 | (this is copied from my question on superuser.com)I'm house-hunting at the moment and I'm trying to geek it out er I mean streamline the decision-making process. I'm currently using google maps's my maps feature to store pins to properties. I create one map per estate agent, then put relevant into into the individual pins. The idea being, I can look at the map and quickly choose which property to view next.However the pins don't currently link back to the map they're owned by, so you have to hunt a bit to get the estate agent info, it's a hassle to get all maps displayed in a new session if you have lots of agents, and each pin doesn't automatically show its bubble so you have to do lots of clicking to see all the info you want.I've tried Evernote, but despite its tag system initially showing some promise, I can't find a way to seemlessly integrate maps. A few google searches don't turn anything up either.Even the big sites, like http://www.rightmove.co.uk, don't seem to provide any maps integration by default. You can see an individual house's location, but not all results of a search.So is there a web site or windows program I could use to do something like this? Viewing all properties on a map is a must, as is quick access to contact details. | Google Maps mashup for notes/househunting | google maps;database | As much as it pains me to say it - use Bing Maps instead.You can store notes, Photo URLs and website URLs to keep tabs on your listings. The My Places lists are also a lot easier for collating into a form of grouping (useful for your Estate Agent example).As you mentioned http://rightmove.co.uk it may be worth looking at other data sets in the UK - to help (and not just http://data.gov.uk).Mapumental is superb for looking at lifestyle/travel time balance.National Statistics shows slow moving trends in your area (such as local wealth/health for the last x years)About My Place is great for finding out house prices in and around the area (shows schools, etc but with no real context)Police Maps will show whether you need to worry about beefing up your locks/gates/grilles/shutters/alarm when moving inFix My Street helps you make an educated decision on how well the council look after the area (if something is reported several times over a 2 year+ period and the council still do nothing about it - then avoid the area) |
_webapps.100227 | I'm using Google Sheets to manage a membership and it means I'm regularly importing columns between sheets (within the same file). The sheet is starting to slow down. Would it be better to use Filter, Index-match, Vlookup, or importrange? | In Google Sheets, which of the functions are computationally 'cheapest'? | google spreadsheets;worksheet function | null |
_unix.64960 | I'm run a script with cron:*/10 * * * * flock -n /tmp/lock scriptI have to make sure that only one instance of the script is running at the same time, and for that I'm using flock. The problem is that sometimes this script starts a daemon, in this case, the daemon blocks the following script executions. I know a possible solution would be unlock the file at the end of the script, but is it possible do it directly in the cron command? | Lock a script starting daemons | cron;daemon;lock | null |
_unix.350892 | After reinstalling Ubuntu again on the machine, the GRUB Rescue screen appeared as it says hd1 not a known file system. I did the following steps:set prefix=(hd0,gpt7)/boot/grubset root=(hd0,gpt7)insmod normalnormalThis led me to boot Ubuntu. And I did my GRUB update too. But the main problem was I have to enter the above steps in order to view the GRUB OS selection page. Can I avoid this steps and go directly to the main Grub Selection page? | Avoid GRUB Rescue on Boot | ubuntu;grub2;rescue | null |
_unix.115082 | I'm testing SMART support on some Compact Flash cards. After running smartctl -A on my card I'm getting the output below (also available here: http://pastebin.com/BX8GcLCX). The UPDATED column says offline, does anyone know exactly what that means? UPDATE - it means the data is only collected offline.Also all the values seem to be at their defaults of 100 (except powercycle count). Does anyone know how to get the card to report it's values? The card I'm testing is an ATP AF1GCFI.Additionally if I try and run an offline test with smartctl --test=short /dev/sda I get back Warning: device does not support Self-Test functions. Given the fact that the parameters can only be reported offline, does this mean I can't get any SMART data at all?=== START OF READ SMART DATA SECTION ===SMART Attributes Data Structure revision number: 1 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x0000 100 100 000 Old_age Offline - 0 2 Throughput_Performance 0x0000 100 100 000 Old_age Offline - 0 5 Reallocated_Sector_Ct 0x0000 100 100 000 Old_age Offline - 0 7 Seek_Error_Rate 0x0000 100 100 000 Old_age Offline - 0 8 Seek_Time_Performance 0x0000 100 100 000 Old_age Offline - 0 12 Power_Cycle_Count 0x0000 100 100 000 Old_age Offline - 358 195 Hardware_ECC_Recovered 0x0000 100 100 000 Old_age Offline - 0 196 Reallocated_Event_Count 0x0000 100 100 000 Old_age Offline - 0 197 Current_Pending_Sector 0x0000 100 100 000 Old_age Offline - 0 198 Offline_Uncorrectable 0x0000 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0000 100 100 000 Old_age Offline - 0 200 Multi_Zone_Error_Rate 0x0000 100 100 000 Old_age Offline - 0 | Understanding smartctl output for a CF card | flash memory;smart | Most of the standard fields for SMART data were defined with only rotational, magnetic harddrives in mind. None of these really appear appropriate for your CF card.Vendors are able to define their own attributes as well and those are not standardized. smartmontools is distributed a database (it's stored /var/lib/smartmontools/drivedb/drivedb.h on my debian machine.) that defines custom/special/overrides for different model harddrives. You'll probably have to input details for your CF card into such a database.If you look at the atpinc.com website, you'll see that you can email their sales team to request a copy of the specifications. The specifications document should list which SMART attributes the device supports, what they're representing, and how to interpret them.Also, you'll get more SMART information if you use -a instead of -A. You can force an offline selftest by using smartctl -t offline /dev/XXX and the device may support automatic, periodic offline testing with smartctl -o on /dev/XXX.You can run an offline selftest (any of the selftests, actually) while using the drive. Performance may be impacted, but you wont break anything.Email ATP and ask em for the docs.Good luck. |
_scicomp.19689 | I am trying to solve the Navier Stokes equations using the finite element method. I plan on using the pressure correction method to deal with the pressure and an implicit time stepping scheme for diffusion and an explicit time stepping scheme for convection. My question is: How does one deal with the non-linear term using the finite element method? I have investigated a couple approaches. For example, I understand for steady problems it is common to use a Newton iteration. Since I am interested in unsteady problems I don't think this method is suitable. I have also seen an approach where part of the non-linear term is frozen at the last time step so that at each time step the problem appears as a linear advection problem, i.e.$u_{i}\frac{\partial{u_{i}}}{\partial{x_{j}}} \approx{} u_{i}^{n-1}\frac{\partial{u_{i}^{n}}}{\partial{x_{j}}}$In other words the velocity at the previous time step ($u_{i}^{n-1}$) is treated as constant and no shape function expansion is used in approximating this term. Is this method any good? What methods should I use to deal with the non-linear term in the unsteady Navier Stokes equations where accuracy, ease of implementation, stability, and robustness to different Reynolds numbers are all considerations?Thanks | How to deal with nonlinear term in Navier Stokes equations (finite element code) | finite element;fluid dynamics;navier stokes | You can absolutely use the Newton method to linearize the system of equations that results after you have discretized in time. It's a pretty common approach. It might be overkill in lots of cases, but it's not inappropriate. Freezing like you suggest and not iterating at all within a timestep can lead to some pretty awful solutions unless your timestep is miniscule. But you don't want to do that since you are interested in implicit approaches. Write the split equations, and then discretize in time to give a non-linear PDE in space. Then apply Newton's method. Then discretize in space with FEM. |
_unix.291261 | I want to delete certain word in multiple locations ( say I want to delete the word involve form Vi editor and it appear 10000 times) from a vi editor? | How do I delete the repeating word in vi editor? | vi | null |
_unix.321506 | I have a development board which has an older version of Linux installed on it. The vendor supplies an image for the device with a heavily modified linux kernel, some loadable kernel modules, and some example software.I would like to install a newer version of the linux kernel on the device, but the vendor has no support for this, as their modified linux kernel is based off of an older kernel version.What I don't understand, is why start hacking away at the linux kernel, when you can make the kernel compatible with the device it is running on by writing drivers as kernel modules. It could be easily recompiled for any kernel version without problems. This way, if the vendor only supports a certain kernel version, you are stuck :(But there must be some reason I am missing, because I see many projects use this approach of grabbing some version of the kernel, and heavily modifying it to fit their board. What I would be interested in, is:Why modify the linux kernel instead of creating a kernel module?What can be done if I need to run a newer kernel, but I get no support from the vendor (Device drivers should work on newer versions of the kernel...) | Why modify the linux kernel instead of creating a kernel module? | linux;kernel;drivers;kernel modules | This question has a lot of assumptions in it.Here are some reasons.The kernel interface is not stable so a module for one version may not compile for a different version.The kernel may not expose a required facility.The kernel may expose a required facility but not in a way that is acceptable, for example requiring the module to have a particular license.The people writing the code found it quicker to write the code this way.As to your options if you need a newer kernel.find someone else who has already ported the codeport it yourselfpay someone else to port it (may not need money, beer, flattery and curiosity may work). |
_codereview.58544 | I would love to have your reviews of this sign in and sign up system.You can see the main structure:For now, I have a total of 13 files (index.php, top_bar.php, header.php, container.php, footer.php, sign_in.php, sign_up.php, members.php, help.php, contact_us.php, change_theme.php, rules.php and the style.css.This is how my main page functions:index.php:<!DOCTYPE html><html lang=en-us> <head> <meta charset=UTF-8> <?php if(strpos($_SERVER[SCRIPT_FILENAME], index) !== false) { ?> <title>Test - Forums</title> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], members) !== false) { ?> <title>Test - Members</title> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], sign_up) !== false) { ?> <title>Test - Sign Up</title> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], sign_in) !== false) { ?> <title>Test - Sign In</title> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], change_theme) !== false) { ?> <title>Test - Change Theme</title> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], contact_us) !== false) { ?> <title>Test - Contact Us</title> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], help) !== false) { ?> <title>Test - Help</title> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], rules) !== false) { ?> <title>Test - Rules</title> <?php } ?> <link href=css/style.css rel=stylesheet type=text/css> <link href=//maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css rel=stylesheet type=text/css> </head> <body> <?php include(top_bar.php);?> <?php include(header.php);?> <?php include(container.php);?> <?php include(footer.php);?> </body></html>This is how my main pages (top_bar.php, header.php, container.php and footer.php) are structured:top_bar.php:<!-- TOP BAR --><div id=top_bar> <div class=wrapper> <div id=top_bar_links> <ul> <?php $full_name = $_SERVER[PHP_SELF]; $name_array = explode(/,$full_name); $count = count($name_array); $page_name = $name_array[$count-1]; ?> <li id=home> <a href=../>Home</a> </li> <li> <a class=<?php echo ($page_name==index.php)?active:;?> href=.>Forums</a> </li> <li> <a class=<?php echo ($page_name==members.php)?active:;?> href=members.php>Members</a> </li> </ul> </div> </div></div>header.php:<!-- HEADER --><div id=header> <div class=wrapper> <h1 id=logo> <a href=.>Test</a> </h1> <div id=member_links> <ul> <li id=sign_up> <a href=sign_up.php>Sign Up</a> </li> <li id=sign_in> <a href=sign_in.php>Sign In</a> </li> </ul> </div> </div></div>container.php:<!-- CONTAINER --><div class=wrapper> <div id=container> <div id=breadcrumb_top> <div class=breadcrumb_links> <ul> </ul> </div> </div> <?php if(strpos($_SERVER[SCRIPT_FILENAME], index) !== false) { ?> <h1 style=margin-bottom: 15px;>Forums</h1> <h3 id=category_title>Categories</h3> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], members) !== false) { ?> <h1 style=margin-bottom: 15px;>Members</h1> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], sign_up) !== false) { ?> <h1 style=text-align: center; margin-bottom: 15px;>Sign Up</h1> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], sign_in) !== false) { ?> <h1 style=text-align: center; margin-bottom: 15px;>Sign In</h1> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], change_theme) !== false) { ?> <h1 style=margin-bottom: 15px;>Change Theme</h1> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], contact_us) !== false) { ?> <h1 style=margin-bottom: 15px;>Contact Us</h1> <?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], help) !== false) { ?> <h1 style=margin-bottom: 15px;>Help</h1> 
<?php } ?> <?php if(strpos($_SERVER[SCRIPT_FILENAME], rules) !== false) { ?> <h1 style=margin-bottom: 10px;>Rules</h1> <p style=margin-bottom: 15px;>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Proin eu nibh turpis. Nunc sit amet auctor elit. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In malesuada lobortis tempus. Integer auctor condimentum sapien, non scelerisque eros cursus et. In vel leo elementum, iaculis tellus sit amet, vestibulum quam. Etiam dapibus pulvinar risus, vestibulum rhoncus sapien commodo vitae. Etiam sit amet ultrices dui. Suspendisse luctus fringilla eros. Nam vitae metus porttitor, sagittis arcu eleifend, malesuada odio. Aliquam erat volutpat.</p> <p style=margin-bottom: 15px;>Pellentesque id velit a elit porttitor sollicitudin et vulputate nisl. Donec eu purus non libero porta malesuada et non lorem. Vestibulum ultrices vitae elit vitae accumsan. Quisque euismod, quam sed ornare ultrices, magna mi posuere massa, vel placerat ipsum est quis erat. Aliquam non libero mauris. Etiam ligula velit, commodo et feugiat ac, porta eu orci. Donec laoreet ipsum in urna auctor, vitae malesuada nibh consequat. Donec sit amet libero vitae erat rhoncus venenatis. Maecenas nec pretium justo, eget fermentum tellus. Ut aliquet tellus venenatis posuere fermentum. Fusce mattis velit et tellus suscipit consectetur.</p> <?php } ?> <div id=breadcrumb_bottom> <div class=breadcrumb_links> <ul> </ul> </div> </div> </div></div>footer.php:<!-- FOOTER --><div class=wrapper> <div id=footer> <div id=footer_links> <ul> <li id=change_theme> <a href=change_theme.php>Change Theme</a> </li> <li id=contact_us> <a href=contact_us.php>Contact Us</a> </li> <li id=help> <a href=help.php>Help</a> </li> <li id=rules> <a href=rules.php>Rules</a> </li> </ul> </div> <p id=footer_copyright>Forum software coded by Dylan - 2014</p> </div></div>What do you think? Is it good?I would like your help, as I am more knowledgable in HTML and CSS. I don't have much knowledge in PHP. | Forum Sign up and Sign in software | php;html | null |
_unix.193550 | I'm seeing this in the journal:Mar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (II) Using input driver 'evdev' for 'TOSHIBA Web Camera'Mar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (**) TOSHIBA Web Camera: always reports core eventsMar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (**) evdev: TOSHIBA Web Camera: Device: /dev/input/event12Mar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (--) evdev: TOSHIBA Web Camera: Vendor 0xbda Product 0x58e5Mar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (--) evdev: TOSHIBA Web Camera: Found keysMar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (II) evdev: TOSHIBA Web Camera: Configuring as keyboardMar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (**) Option config_info udev:/sys/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.5/3-1.5:1.0/input/input15/event12Mar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (II) XINPUT: Adding extended input device TOSHIBA Web Camera (type: KEYBOARD, id 9)Mar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (**) Option xkb_rules evdevMar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (**) Option xkb_model pc104Mar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (**) Option xkb_layout mt,fr,gbMar 27 14:01:02 user-PC gdm-Xorg-:0[424]: (**) Option xkb_variant ,oss,Why is a Webcam being configured as a keyboard (see 6th line above)? Is my system hacked? Is someone using my Webcam driver to pass commands to my PC? | Is my webcam driver being used to hack my computer? | arch linux;security | null |
_codereview.165283 | Inspired by another question here on Code Review, I decided to try implementing a Fraction type in Rust.Requirements:Able to be added, subtracted, multiplied, dividedAble to be compared (equality and ordering)Able to be converted to the floating point representation of the fractionWhen printed to screen, automatically simplify the fractionI created methods for arithmetic, as well as implementing Eq, PartialEq, and PartialOrd. As far as I can tell, I can't implement Ord itself, as the f64 type cannot be fully ordered. In my implementation for fmt::Display, I simplify the fraction and remove any '1' denominators from the display.Ideally I'd place this into a module for use in my other programs, but I haven't wrapped my head around the crate / module system yet.#![crate_type = lib]use std::fmt;use std::cmp;//////////pub struct Fraction { numerator: i64, denominator: i64,}impl fmt::Display for Fraction { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { // Reduce, THEN write let temp: Fraction = self.reduce(); if temp.denominator == 1 { write!(f, {}, temp.numerator) } else { write!(f, {}/{}, temp.numerator, temp.denominator) } }}impl cmp::PartialEq for Fraction { fn eq(&self, other: &Fraction) -> bool { // Simplify both before comparing let simp_self = self.reduce(); let simp_other = other.reduce(); simp_self.numerator == simp_other.numerator && simp_self.denominator == simp_other.denominator }}impl cmp::Eq for Fraction {}impl cmp::PartialOrd for Fraction { fn partial_cmp(&self, other: &Fraction) -> Option<cmp::Ordering> { self.to_decimal().partial_cmp(&other.to_decimal()) }}impl Fraction { /// Creates a new fraction with the given numerator and denominator /// Panics if given a denominator of 0 pub fn new(numerator: i64, denominator: i64) -> Fraction { if denominator == 0 { panic!(Tried to create a fraction with a denominator of 0!) 
} if denominator < 0 { // If the denominator is negative, multiply both by -1 Fraction { numerator: -numerator, denominator: -denominator } } else { Fraction { numerator: numerator, denominator: denominator } } } /// Returns a new Fraction equal to this Fraction plus another pub fn add<'a>(&self, other: &'a Fraction) -> Fraction { Fraction { numerator: (self.numerator * other.denominator + other.numerator * self.denominator), denominator: (self.denominator * other.denominator) } } /// Returns a new Fraction equal to this Fraction minus another pub fn subtract<'a>(&self, other: &'a Fraction) -> Fraction { Fraction { numerator: (self.numerator * other.denominator - other.numerator * self.denominator), denominator: (self.denominator * other.denominator) } } /// Returns a new Fraction equal to this Fraction multiplied by another pub fn multiply<'a>(&self, other: &'a Fraction) -> Fraction { Fraction { numerator: (self.numerator * other.numerator), denominator: (self.denominator * other.denominator) } } /// Returns a new Fraction equal to this Fraction divided by another pub fn divide<'a>(&self, other: &'a Fraction) -> Fraction { Fraction { numerator: (self.numerator * other.denominator), denominator: (self.denominator * other.numerator) } } /// Returns a new Fraction that is equal to this one, but simplified pub fn reduce(&self) -> Fraction { // Divide numerator and denominator by gcd [use absolute value because negatives] let _gcd = gcd(self.numerator.abs(), self.denominator.abs()); Fraction { numerator: (self.numerator / _gcd) , denominator: (self.denominator / _gcd) } } /// Returns a decimal equivalent to this Fraction pub fn to_decimal(&self) -> f64 { self.numerator as f64/ self.denominator as f64 }}//////////// Calculate the greatest common denominator for two numberspub fn gcd(a: i64, b: i64) -> i64 { // Terminal cases if a == b { return a } if a == 0 { return b } if b == 0 { return a } if a % 2 == 0 { // a is even if b % 2 != 0 { // b is odd return gcd(a/2, b) } else { // a and b are even return gcd(a/2, b/2) * 2 } } // a is odd if b % 2 == 0 { // b is even return gcd(a, b/2) } // Reduce larger argument if a > b { return gcd((a - b)/2, b) } return gcd((b - a)/2, a)}#[test]fn ordering_test() { let a = Fraction::new(1, 2); let b = Fraction::new(3, 4); let c = Fraction::new(4, 3); let d = Fraction::new(-1, 2); assert!(a < b); assert!(a <= b); assert!(c > b); assert!(c >= a); assert!(d < a);}#[test]fn equality_test() { let a = Fraction::new(1, 2); let b = Fraction::new(2, 4); let c = Fraction::new(5, 5); assert!(a == b); assert!(a != c);}#[test]fn arithmetic_test() { let a = Fraction::new(1, 2); let b = Fraction::new(3, 4); assert!(a.add(&a) == Fraction::new(1, 1)); assert!(a.subtract(&a) == Fraction::new(0, 5)); assert!(a.multiply(&b) == Fraction::new(3, 8)); assert!(a.divide(&b) == Fraction::new(4, 6));} | Fraction type in Rust | rust;rational numbers | There's no need to specify the crate type. Cargo knows if it's a library or a binary.Combine imports at the same level using the use path::to::{a, b, c} syntax.There's no need to specify the type most times; let the compiler do type inference when it can (in Display::fmt).Rust style has the curly braces on the same line as the else. Instead of }else {Use} else {Almost always derive(Debug) on your types.I try to avoid panics as much as possible, so I'd probably return a Result from Fraction::new. 
Good to see you documented the panic though.When constructing a struct, if you repeat yourself (Thing { foo: foo, bar: bar }), that can be simplified to Thing { foo, bar }.Use the Enter key a bit more; spread your arithmetic constructors across a few lines to allow for readability. All the math intertwined makes it look complicated.Your inherent arithmetic functions don't need explicit lifetimes as you aren't using the lifetime on the output. Remove them and let lifetime elision take over.Can choose to use Self instead of Fraction inside the impl blocks if you'd like.Don't prefix a variable with an underscore. This indicates that a variable has to exist but is intentially not used. There's no issues with shadowing a function with a variable if you aren't going to call it again.Even better than the inherent arithmetic methods, you can implement the std::ops traits. This gives you the ability to use + - * and /,as well as allowing you to implement the operators for both owned and borrowed values.I'd use a match to tighten up the GCD even / odd logic and avoid so many returns.There's some comments that repeat what the code is doing, but don't explain the why. Can be removed or improved.use std::{cmp, fmt};use std::ops::{Add, Sub, Mul, Div};#[derive(Debug)]pub struct Fraction { numerator: i64, denominator: i64,}impl Fraction { /// Creates a new fraction with the given numerator and denominator /// Panics if given a denominator of 0 pub fn new(numerator: i64, denominator: i64) -> Self { if denominator == 0 { panic!(Tried to create a fraction with a denominator of 0!) } if denominator < 0 { Self { numerator: -numerator, denominator: -denominator } } else { Self { numerator, denominator } } } /// Returns a new Fraction that is equal to this one, but simplified pub fn reduce(&self) -> Self { // Use absolute value because negatives let gcd = gcd(self.numerator.abs(), self.denominator.abs()); Self { numerator: (self.numerator / gcd), denominator: (self.denominator / gcd), } } /// Returns a decimal equivalent to this Fraction pub fn to_decimal(&self) -> f64 { self.numerator as f64/ self.denominator as f64 }}impl fmt::Display for Fraction { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let temp = self.reduce(); if temp.denominator == 1 { write!(f, {}, temp.numerator) } else { write!(f, {}/{}, temp.numerator, temp.denominator) } }}impl cmp::PartialEq for Fraction { fn eq(&self, other: &Fraction) -> bool { let simp_self = self.reduce(); let simp_other = other.reduce(); simp_self.numerator == simp_other.numerator && simp_self.denominator == simp_other.denominator }}impl cmp::Eq for Fraction {}impl cmp::PartialOrd for Fraction { fn partial_cmp(&self, other: &Fraction) -> Option<cmp::Ordering> { self.to_decimal().partial_cmp(&other.to_decimal()) }}impl<'a> Add for &'a Fraction { type Output = Fraction; fn add(self, other: Self) -> Fraction { Fraction { numerator: (self.numerator * other.denominator + other.numerator * self.denominator), denominator: (self.denominator * other.denominator), } }}impl<'a> Sub for &'a Fraction { type Output = Fraction; fn sub(self, other: Self) -> Fraction { Fraction { numerator: (self.numerator * other.denominator - other.numerator * self.denominator), denominator: (self.denominator * other.denominator), } }}impl<'a> Mul for &'a Fraction { type Output = Fraction; fn mul(self, other: Self) -> Fraction { Fraction { numerator: (self.numerator * other.numerator), denominator: (self.denominator * other.denominator), } }}impl<'a> Div for &'a Fraction { type Output = 
Fraction; fn div(self, other: Self) -> Fraction { Fraction { numerator: (self.numerator * other.denominator), denominator: (self.denominator * other.numerator), } }}// Calculate the greatest common denominator for two numberspub fn gcd(a: i64, b: i64) -> i64 { // Terminal cases if a == b { return a } if a == 0 { return b } if b == 0 { return a } let a_is_even = a % 2 == 0; let b_is_even = b % 2 == 0; match (a_is_even, b_is_even) { (true, true) => gcd(a/2, b/2) * 2, (true, false) => gcd(a/2, b), (false, true) => gcd(a, b/2), (false, false) => { if a > b { gcd((a - b)/2, b) } else { gcd((b - a)/2, a) } } }}#[test]fn ordering_test() { let a = Fraction::new(1, 2); let b = Fraction::new(3, 4); let c = Fraction::new(4, 3); let d = Fraction::new(-1, 2); assert!(a < b); assert!(a <= b); assert!(c > b); assert!(c >= a); assert!(d < a);}#[test]fn equality_test() { let a = Fraction::new(1, 2); let b = Fraction::new(2, 4); let c = Fraction::new(5, 5); assert!(a == b); assert!(a != c);}#[test]fn arithmetic_test() { let a = Fraction::new(1, 2); let b = Fraction::new(3, 4); assert!(&a + &a == Fraction::new(1, 1)); assert!(&a - &a == Fraction::new(0, 5)); assert!(&a * &b == Fraction::new(3, 8)); assert!(&a / &b == Fraction::new(4, 6));}I can't implement Ord itself, as the f64 type cannot be fully ordered.Floating point numbers cannot be ordered, it's true, but you only have a floating point number because you chose to implement it that way. Since you maintain your data as integral values, you could choose to convert both fractions to equivalent fractions with a common denominator and then compare the numerator directly. |
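The closing suggestion above (compare through a common denominator rather than through f64) can be made concrete with integer cross-multiplication: because new() keeps denominators strictly positive, comparing a/b with c/d reduces to comparing a*d with c*b. A small sketch of that check, in Python only for brevity:
# Exact fraction comparison without floating point; valid because the
# constructor guarantees positive denominators.
def cmp_fractions(a_num, a_den, c_num, c_den):
    assert a_den > 0 and c_den > 0
    lhs, rhs = a_num * c_den, c_num * a_den
    return (lhs > rhs) - (lhs < rhs)    # -1, 0 or 1, so a total order is possible

print(cmp_fractions(1, 2, 3, 4))    # -1: 1/2 < 3/4
print(cmp_fractions(-1, 2, 1, 2))   # -1: -1/2 < 1/2
print(cmp_fractions(2, 4, 1, 2))    #  0: 2/4 == 1/2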
_softwareengineering.41668 | I've been working as a freelancer for about two years in vWorker. Any person can visit a coder's profile, and see in how many projects the coder has worked on, (if the coders allows) see how much the coder obtained in each project, ratings, feedbacks, etc.Would be good to include a freelancer account in your Resume / CV when applying for a job?Is it something you would do if you have finished several projects there? | Would be good to include your freelancer account in your Resume / CV when applying for a job? | freelancing;interview;resume | If your profile doesn't show lots of good feedback, a reasonable workload, and a pay that is in line with what you're going to ask for, leave it out. Beyond that, just follow standard guidelines for writing a resume - make sure that you highlight the parts that are most relevant to the work you're applying to, and spent less space on the parts that are not relevant/likely to distract. Provide as much detail about the work that you've performed on vWorker as is relevant, don't just include just the reference to vWorker when describing the work you've done there. Use the reference as proof that the work was done, and was quality work. |
_unix.222897 | I was wondering whether there is any way to search for and increase the values between the 2nd tilde symbol and the 3rd tilde symbol in each line of a txt file. Maybe vi can do this? For example, I have a test.txt file with 2 lines in it:
A~Test1~9463~testA
B~Test2~4825~testB
and it should be changed to:
A~Test1~8352~testA
B~Test2~3714~testB | search and increase the values between the second and third tilde symbols in each line of a txt file | linux;bash;shell;shell script;vi | null
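A minimal sketch of the field manipulation described above, in Python. The question never pins down what the adjustment should be (the sample output actually lowers each value by 1111), so adjust() below is only a placeholder assumption:
# Rewrite the third ~-separated field of every line; adjust() is a placeholder,
# since the exact change to apply is not specified in the question.
def adjust(value):
    return value - 1111   # assumption taken from the question's example

with open("test.txt") as src, open("test.out", "w") as dst:
    for line in src:
        fields = line.rstrip("\n").split("~")
        fields[2] = str(adjust(int(fields[2])))
        dst.write("~".join(fields) + "\n")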
_cs.7743 | I am looking to calculate the physical address corresponding to a logical address in a paging memory management scheme. I just want to make sure I am getting the calculation right, as I fear I could be wrong somewhere.So, the data I have is as follows: The logical address: $717$Logical memory size: $1024$ bytes ($4$ pages)Page Table:\begin{array}{| c | c |}\hlinePage\ Number & Frame\ Number\\ \hline0 & 5\\ \hline1 & 2\\ \hline2 & 7\\ \hline3 & 0\\ \hline\end{array}Physical memory: $16$ framesSo, with $1024$ bytes in the logical memory, and $4$ pages, then each page is $256$ bytes. Therefore, the size of the physical memory must be $4096$, right? ($256 \times 16$). Then, to calculate the logical address offset: $$1024 \mod 717 = 307$$Is that how we calculate the offset? And, we can assume that $717$ is in page $2$ ($\frac{1024}{717} = 2.8$)? So, according to the page table, the corresponding frame number is $3$. And so to get the physical address, we multiply the frame number and page size? $$2 \times 256 = 768$$Then, do we add the offset, like so: $$768 + 307 = 1,075$$Thank you for taking the time to read. If I don't quite have this correct, would you be able to advise on the correct protocol to calculating this? | How to work out physical address corresponding to logical address? | operating systems;memory management;paging | You are correct in your reasoning that the pages are $256$ bytes and that the physical memory capacity is $4096$ bytes.However, there are errors after that.The offset is the distance (in bytes) relative to the start of the page. I.e., logical_address mod page_size. The bits for this portion of the logical address are not translated (given power of two page size).The logical (virtual) page number is number of (virtual) page counting from zero. I.e., $$\frac{logical\_address}{page\_size}$$As you noted, the physical page is determined by the translation table, indexed using the logical (virtual) address.Once the physical page number had been found, the physical address of the start of that page is found by multiplying the physical page number by the page size. The offset is then added to determine the precise physical address. I.e., $$(physical\_page\_number \times page\_size) + offset$$So a logical address of, e.g., $508$, with $256$ byte pages would have an offset of$$508 \mod 256 = 252$$The logical/virtual page number would be$$\frac{508}{256} = 1$$With the given translation table, logical page $1$ translates to the physical page number $2$. The physical address would then be$$physical\_page\_number \times page\_size + offset = 2 \times 256 + 252 = 764$$ |
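The translation steps from this answer, collected into a small function (Python used only for illustration); the 256-byte pages and the page table are the ones given in the question:
PAGE_SIZE = 256
PAGE_TABLE = {0: 5, 1: 2, 2: 7, 3: 0}    # logical page -> physical frame

def translate(logical_address):
    offset = logical_address % PAGE_SIZE          # untranslated low bits
    logical_page = logical_address // PAGE_SIZE   # which virtual page
    frame = PAGE_TABLE[logical_page]              # table lookup
    return frame * PAGE_SIZE + offset

print(translate(508))   # 764, matching the worked example
print(translate(717))   # logical page 2 -> frame 7 -> 7*256 + 205 = 1997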
_softwareengineering.169863 | I am working on saving state of an object to a database using the data mapper pattern, but I am looking for suggestions/guidance on the validation and error message generation step (step 4 below). Here are the general steps as I see them for doing this:(1) The data mapper is used to get current info (assoc array) about the objectin db:+=====================================================+ | person_id | name | favorite_color | age | +=====================================================+ | 1 | Andy | Green | 24 | +-----------------------------------------------------+mapper returns associative array, eg. Person_Mapper::getPersonById($id) : $person_row = array( 'person_id' => 1, 'name' => 'Andy', 'favorite_color' => 'Green', 'age' => '24',); (2) the Person object constructor takes this array as an argument, populating its fields.class Person { protected $person_id; protected $name; protected $favorite_color; protected $age; function __construct(array $person_row) { $this->person_id = $person_row['person_id']; $this->name = $person_row['name']; $this->favorite_color = $person_row['favorite_color']; $this->age = $person_row['age']; } // getters and setters... public function toArray() { return array( 'person_id' => $this->person_id, 'name' => $this->name, 'favorite_color' => $this->favorite_color, 'age' => $this->age, ); }}(3a) (GET request) Inputs of an HTML form that is used to change info about the person is populated using Person::getters<form> <input type=text name=name value=<?=$person->getName()?> /> <input type=text name=favorite_color value=<?=$person->getFavColor()?> /> <input type=text name=age value=<?=$person->getAge()?> /></form>(3b) (POST request) Person object is altered with the POST data using Person::setters$person->setName($_POST['name']);$person->setFavColor($_POST['favorite_color']);$person->setAge($_POST['age']);*(4) Validation and error message generation on a per-field basis - Should this take place in the person object or the person mapper object? - Should data be validated BEFORE being placed into fields of the person object? (5) Data mapper saves the person object (updates row in the database):$person_mapper->savePerson($person);// the savePerson method uses $person->toArray() // to get data in a more digestible format for the // db gateway used by person_mapperAny guidance, suggestions, criticism, or name-calling would be greatly appreciated. | Validation and Error Generation when using the Data Mapper Pattern | php;object oriented;state | I think you should define in the person object and not in the mapping object the validation logic.You get the POST data and assign it to the person object, and if there is any error, it should happen there, before you reach the mappign step. In fact, you should never call savePerson() when the data provided is not correct. |
_codereview.8930 | I've been playing with Clojure for the last few evenings, going through the well known 99 problems (I'm using a set adapted for Scala).Problem 26Given a set S and a no. of items K, returns all possible combinations of K items that can be taken from set S.Here's my solution:(defn combinations [k s] (cond (> k (count s)) nil ;not enough items in sequence to form a valid combination (= k (count s)) [s] ;only one combination available: all items (= 1 k) (map vector s) ;every item (on its own) is a valid combination :else (reduce concat (map-indexed (fn [i x] (map #(cons x %) (combinations (dec k) (drop (inc i) s)))) s))))(combinations 3 ['a 'b 'c 'd 'f])My basic solution is to take each item from the given sequence (map-indexed) and recurse to generate combinations of size K - 1 from the remaining sequence. The termination conditions are described above.I'm still a complete Clojure newbie and would welcome comments on structure, efficiency, readability, resemblance to idiomatic Clojure, etc. Feel free to be brutal, but please remember I've been doing Clojure for only a few hours :)I'm less interested in alternative mathematical methods for generating k-combinations, more interested in feedback on whether this is passable Clojure. | Enumerate k-combinations in Clojure (P26 from 99 problems) | clojure;combinatorics | I'm currently reading Joy of Clojure so I'm (very) far from being fluent in Clojure but what I noticed is:your solution is clever but quite complicated, you use imperative habits like indexed iterationtry to keep with simple abstractions like sequence first and rest and the solution will work with any Clojure collection - see example belowyour solution use cond with three checks for k - consider using condpHere's my code:(defn subsets [n items](cond (= n 0) '(()) (empty? items) '() :else (concat (map #(cons (first items) %) (subsets (dec n) (rest items))) (subsets n (rest items))))) |
_scicomp.18744 | I am trying to simulate the phase separation of a binary mixture. If the free energy F is known as a function of the concentration $c$, the dynamical equation is:$\frac{\partial c(x,t)}{\partial t}=\frac{d^2}{dx^2} \frac{\delta F[c]}{\delta c}$For the Flory-Huggins free energy we have:$\frac{\delta F[c]}{\delta c}=\log\left(\frac{c}{1-c}\right) + \chi c^2 + \gamma (c')^2$First term is entropy, second term is attraction between particles ($\chi<0$) and third term is similar as a surface tension.I use time forward and space centered differentiation. Even for very small $dt$ I get numerical instabilities. I first thought the term $\gamma(c'^2)$ was responsible, but instabilities remain even without.Here is my Matlab code for no-flux boundary conditions, clear and simplified as much as I can. I am aware the boundary condition implementation may not be correct but I don't think the problem comes from this.What should I do?function phaseSep ()clear all;clf;N = 101; %number of grid points in xdx = 1/(N - 1);x = 0:dx:1; %vector of x valuesT = 1e3; %number of time stepsdt = 1e-8;% Second derivativefunction y=mylaplace(f,i) y = f(i+1)-2*f(i)+f(i-1);endfunction y=dfdc(C) p=-0.01; g=0.01; for i=2:N-1 y(i) = log(C(i)/(1-C(i)))+p*C(i)+g*mylaplace(C,i)/dx^2; end % No-flux Boundary conditions y(1)=y(2); y(N)=y(N-1);end%Initial concentration for i=1:N C(i)=0.2+0.1*tanh(10*(x(i)-0.5));end% Plot initial concentrationhold;plot(x,C);% iteratefor t=1:T Z=dfdc(C); for i=2:N-1 C(i) = C(i) + (mylaplace(Z,i)/dx^2)*dt; end %No flux boundary conditions C(1) = C(2); C(N) = C(N-1);end% Plot final concentrationplot(x,C);end | Modified diffusion equation and unstabilities | finite difference;diffusion | At first glance, it looks like you are using the method of lines with forward Euler time steps. What WolfgangBangerth is getting at with his example is that even for a simple heat equation, the stability limit of forward Euler (namely, that $|\lambda\Delta{t}| < 1$) combined with the eigenvalues induced by a finite difference approximation of the diffusion operator result in a very stringent limit on the time step relative to the spatial discretization. For a second-order centered finite difference approximation to the second-order derivative term, this limit is of the form $\Delta{t} < h^{2}/(2a)$, where $a$ is the diffusivity and $h$ is the grid spacing of the spatial discretization (assumed uniform). You should check out a text such as LeVeque's Finite Difference Methods for Ordinary and Partial Differential Equations for a basic overview of the theory. In terms of practical advice, you probably want to replace the time discretization you are currently using with an L-stable implicit method. To start, you might try using ode23tb in MATLAB. |
_unix.235262 | I'm currently trying to burn the Ubuntu operating system onto a USB flash drive of mine, so I can boot onto Ubuntu with my Chromebook. In order to do this, I executed the following dd command:sudo dd if=ubuntu-14.04.3-desktop-i386.iso of=/dev/sda1So far, this has been running for 18 hours, and it's appeared have done nothing, except produce the following output:0+0 records in0+0 records out0 bytes (0 B) copied, 0.01807 s, 0.0 kB/sIs this a normal behavior for the dd command? Have I done something wrong?To clarify, I'm running this command on a Raspberry Pi, running Debian. | The dd command isn't appear to accomplish anything | usb;dd;burning | Yeah, so turns out that my Ubuntu disk image had somehow been completely destroyed, and was empty. I simply had thought it hadn't stopped and had kept executing simply because it didn't return to a prompt like this:pi@raspberrypi ~ $ |
_unix.387246 | I'm trying to get the file of the current date with the following command in HP-UX Unix:$ lt ABC.LOG* |grep `date +%b %d`But, it's giving me the error:ksh: : cannot executegrep: can't open %dAny suggestions? | quotes inside backticks inside quotes in ksh | quoting;ksh;timestamps;hp ux | The error stems from the quoting of the arguments of grep and the fact that backticks don't do nesting very well:grep `date +%b %d`This is better written asgrep `date +'%b %d'`... or even better,grep $(date +'%b %d')In fact, with $(...) instead of backticks, you should be able to keep the inner double quotes:grep $(date +%b %d)An alternative to grepping the output of ls would be to dofind . -type f -name ABC.LOG* -ctime -1This would find all regular files (-type f) in the current directory whose names matches the given pattern and whose ctime is less than 24 hours since the current time. A file's ctime is the time when the last modification of the file's data or metadata was made.This is not exactly equivalent to what you're trying to achieve though. This also recurses into subdirectories. |
_unix.202034 | Can I forward traffic coming into a dummy interface on to another interface? Or is it not a real interface at all even?Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- dummy0 eth6 anywhere anywhere 0 0 ACCEPT all -- eth6 dummy0 anywhere anywhere I want all traffic reaching eth6 to go to dummy0, and all traffic reaching dummy0 to go to eth6.Should I be doing something else really? (I can't use bridges or bonding). | Forward traffic coming into dummy interface on to another interface(?) | iptables;network interface;forwarding | null |
_unix.325017 | I'm using Fedora 25 on ASUS laptop connected to the LG external monitor.The monitor (unlike the laptop's screen itself) doesn't show TTY full screen, while it's okay with the GUI (that's i3wm or Gnome) after startx command. It might be because the laptops resolution (1366x768) is lower than the monitors, and CLI unlike GUI doesn't resolve it automatically.One reason why I want to use TTY is because sometimes the colors (esp. of CMus and Ranger) mess up on the urxvt terminal in GUI.How can I make the TTY full screen in the ~1400x1050 monitor? | Change TTY resolution on Fedora | fedora;tty;i3;resolution;rxvt | null |
_unix.10428 | I have a development server, which is only accessible from 127.0.0.1:8000, not 192.168.1.x:8000. As a quick hack, is there a way to set up something to listen on another port (say, 8001) so that from the local network I could connect 192.168.1.x:8001 and it would tunnel the traffic between the client and 127.0.0.1:8000? | Simple way to create a tunnel from one local port to another? | networking;tcp;tunneling;port forwarding | Using ssh is the easiest solution.ssh -g -L 8001:localhost:8000 -f -N [email protected] forwards the local port 8001 on your workstation to the localhost address on remote-server.com port 8000.-g means allow other clients on my network to connect to port 8001 on my workstation. Otherwise only local clients on your workstation can connect to the forwarded port.-N means all I am doing is forwarding ports, don't start a shell.-f means fork into background after a successful SSH connection and log-in.Port 8001 will stay open for many connections, until ssh dies or is killed. If you happen to be on Windows, the excellent SSH client PuTTY can do this as well. Use 8001 as the local port and localhost:8000 and the destination and add a local port forwarding in settings. You can add it after a successful connect with PuTTY. |
_webapps.5605 | I have a feed from Google Books that lists some books I have. I want to create a new feed that has the Books' prices (as used books, going to feed it to a next service to get the prices). Yahoo pipes loop module does not allow operator modules or user input modules. How can iterate though the Google Books' Feed items so that I can use it for a next service say a Fetch Page Module ? | Using Yahoo Pipes and loops | yahoo pipes | null |
_codereview.158350 | I am trying to develop an optimal evaluation function to use in minimax/alpha-beta algorithm for developing tic-tac-toe AI. I am counting number of circles/crosses in a row/column/diagonal with empty space behind it (with three-in-a-row, there is no empty space). Based on number of symbols in such line, I multiply the separate scores with \$10^{\text{counter} - 1}\$, which results in \$1,10 \text{ or } 100\$ points. I am sure much can be improved, because optimal solution is rarely found and I am having problems using this function in alphabeta algorithmMy question is - How can this function be improved? Small pieces of code and suggestions appreciated.My code:private int h(int[][] field, int depth, int player) //final score of the node { if (win(field, 1)) //if human won return -1000; //very bad for MAX=computer if (win(field, 0)) //if computer won return 1000; int heuristics = individualScore(field, 0) - individualScore(field, 1); return heuristics; }private int individualScore(int[][] field, int player) { int sum = 0; int otherPlayer = -1; if (player == 0) //if computer is the current player otherPlayer = 1; //other player is human else otherPlayer = 0;//Vice versa for (int i = 0; i < 3; i++) // rows { int counter = 0; bool rowAvailable = true; for (int l = 0; l < 3; l++) { if (field[i][l] == player) counter++; if (field[i][l] == otherPlayer) { rowAvailable = false; break; } } if (rowAvailable && counter > 0) sum += (int)Math.Pow(10, counter - 1); } for (int i = 0; i < 3; i++) // columns { int counter = 0; bool columnAvailable = true; for (int k = 0; k < 3; k++) { if (field[k][i] == player) counter++; if (field[k][i] == otherPlayer) { columnAvailable = false; break; } } if (columnAvailable && counter > 0) sum += (int)Math.Pow(10, counter - 1); } int counterD = 0; bool diagonalAvailable = true; for (int i = 0; i < 3; i++) //diagonals { if (field[i][i] == player) counterD++; if (field[i][i] == otherPlayer) { diagonalAvailable = false; break; } } if (diagonalAvailable && counterD > 0) sum += (int)Math.Pow(10, counterD - 1); counterD = 0; diagonalAvailable = true; int j = 0; for (int i = 2; i >= 0; i--) { if (field[i][j] == player) counterD++; if (field[i][j] == otherPlayer) { diagonalAvailable = false; break; } } if (diagonalAvailable && counterD > 0) sum += (int)Math.Pow(10, counterD - 1); return sum; } | Trying to develop evaluation function for minimax/alphabeta algorithm | c#;algorithm | null |
_webmaster.24094 | I'm installing Apache, PHP and MySql on Windows 7 (I need to run Apache under current user instead of a service so I can't use wamp or similar). The thing is PHP does not get to load mysql extensions, using phpinfo() I can see mysqlnd, not mysql, mysqli or pdo_mysql.I remember having trouble with this in previous installations and having to download these DLL's from somewhere, so maybe it's happening again, are the mysql dll's coming with PHP version incorrect? What else can I do to properly install mysql extensions? | PHP 5.3.8 does not active mysql extensions | php;mysql | null |
_vi.12348 | I am searching a Plugin for autocompletion which does not have any requirements to lua ruby or python. I would love to use YouCompleteMe but it requires python, which is not availiable on my server. At all vim there does not have lua, python or ruby support, which makes it really hard to find anything. Maybe there is something written in Go like for example fzf, which could be easily added to vim without sudo permission. | Autocompletion Plugin for VIM without external requirements | autocompletion | Try vim-complete https://github.com/lifepillar/vim-mucomplete. It is a minimalistic autocompletion plugin, written in VimL. |
_unix.168013 | At work I usually run Windows 8 but I'm looking to use Linux since I'm mostly a Linux user. How do I connect to it usually? I just type //MIKE-SERVER then hit ENTER key in address bar of a file browser. How would I do that on Linux? It's a Windows Server. | How to connect to a Windows server on Ubuntu? | linux;windows;file server | null |
_unix.260749 | Lets take this command for example$ time ssh ec2 lswww appsreal 0m0.554suser 0m0.004ssys 0m0.000s$ time ssh ec2 ls 1>/dev/nullreal 0m0.554suser 0m0.004ssys 0m0.000s$ time ssh ec2 ls 2>/dev/nullwww appsreal 0m0.554suser 0m0.004ssys 0m0.000sClearly ls output come from fd 1. What about time output. Which file descriptor does it use? How can I find it?Other than this, instead of redirectly them individually, how can I redirect all of them at once.. i.e. all>/dev/null. I am not looking for 1>/dev/null 2>&1... | How to get file descriptor number of a output and how to redirect all of them at once | shell;io redirection;output;file descriptors;time utility | null |
_webmaster.13826 | I have a site where I anticipate to have a large amount of users. I've heard that it is a good idea to separate user content (uploaded images) and place them on a separate static server where lighttpd would serve the content. This is supposed to speed up the requests significantly.My question is:Roughly how much improvement can I expect form doing this?How is it done? Users on my site upload files but then how do I automate the transfer process? What's the best practice? Rsync?Any other tips? ideas? I'd really appreciate your input. | Separating static/uploaded content from the site | php;apache;content;cdn;lighttpd | Worry about Rsync later - for now you can do it all on the one box.Your main domain is http://www.example.com/ on this domain you serve cookies (yum!).The non-www redirects to the above.Your static domain is http://static.example.com/ and on this domain you don't serve cookies or any other header items that are not really needed. Put js+css on there compressed and images served uncompressed. Set anything served on static to be public cache-able and with an expiry date some time into the future. Now setup your domains to point to the same place on the file system, i.e. if you really want to load test.jpg you can get exactly the same file from either place. Do this with httpd.conf settings and two virtual server entries.When your traffic gets to big levels you can migrate your static content server to a different box and rsync the whole document root to the new box. A cron job can be setup to do this every 5 minutes or so and the 404 of the static can redirect to the www to pick up on any resources not yet rsynced. |
_webmaster.50658 | In the MediaWiki editor are there buttons for bold, italic and other styling, but not one for adding the <code> ... </code> tags around the selected text.Can such a button be added to the editor? | Possible to add code tag button in MediaWiki editor? | mediawiki | Make sure your wiki has the WikiEditor extension installed (installed by default).This extension allows you to customize the toolbar. They have a full guide over at http://www.mediawiki.org/wiki/Extension:WikiEditor/Toolbar_customization |
_codereview.149163 | This is my implementation of pricing an exotic option (in this case an up-and-in barrier option) using the Monte Carlo simulation in Python. I use NumPy where I can. Any ideas to optimize this code?################################################# Exotic Barrier Option (up-and-in) Pricer # By: DudeWah#################################################-----------------------------------------------################################################# Monte Carlo Simulation pricing an up-and-in # # call option. This call option is a barrier ## option in which pyoffs are zero unless the ## asset crosses some predifned barrier at some ## time in [0,T]. If the barrier is crossed, ## the payoff becomes that of a European call. ## Note: Monte Carlo tends to overestimate the ## price of an option. Same fo Barrier Options. ##################################################-----------------------------------------------#import librariesimport numpy as npfrom matplotlib import pyplot as pltfrom matplotlib import stylefrom random import gaussfrom math import exp, sqrt#-----------------------------------------------#Class for all parameters#-----------------------------------------------class parameters(): parameters to be used for program #-------------------------- def __init__(self): initializa parameters self.S = 5 #underlying asset price self.v = 0.30 #volatility self.r = 0.05 #10 year risk free rate self.T = 365.0/365.0 #years until maturity self.K = 6 #strike price self.B = 8 #barrier price self.delta_t = .001 #timestep self.N = self.T/self.delta_t #Number of discrete time points self.simulations = 1000 #num of simulations #--------------------------- def print_parameters(self): print parameters print --------------------------------------------- print --------------------------------------------- print Pricing an up-and-in option print --------------------------------------------- print Parameters of Barrier Option Pricer: print --------------------------------------------- print Underlying Asset Price = , self.S print Volatility = , self.v print Risk-Free 10 Year Treasury Rate =, self.r print Years Until Expiration = , self.T print Time-Step = , self.delta_t print Discrete time points =, self.N print Number of Simulations = , self.simulations print --------------------------------------------- print ---------------------------------------------#-----------------------------------------------#Class for Monte Carlo#----------------------------------------------- class up_and_in_mc(parameters): This is the class to price the barrier option defined as an up and in option. Paramaters are fed in as a subclass #--------------------------- def __init__(self): initialize the class including the subclass parameters.__init__(self) self.payoffs = [] self.price_trajectories = [] self.discount_factor = exp(-self.r * self.T) #--------------------------- def call_payoff(self,s): use to price a call self.cp = max(s - self.K, 0.0) return self.cp #--------------------------- def calculate_payoff_vector(self): Main iterative loop to run the Monte Carlo simulation. 
Returns a vector of different potential payoffs for i in xrange(0, self.simulations): self.stock_path = [] self.S_j = self.S for j in xrange(0, int(self.N-1)): self.xi = gauss(0,1.0) self.S_j *= (exp((self.r-.5*self.v*self.v) * self.delta_t + self.v *sqrt(self.delta_t) * self.xi)) self.stock_path.append(self.S_j) self.price_trajectories.append(self.stock_path) if max(self.stock_path) > self.B: self.payoffs.append(self.call_payoff(self.stock_path[-1])) elif max(self.stock_path) < self.B: self.payoffs.append(0) return self.payoffs #--------------------------- def compute_price(self): Uses payoff vector and discount factor to compute the price of the option. Numpy used for efficiency self.np_payoffs = np.array(self.payoffs, dtype=float) self.np_Vi = self.discount_factor*self.np_payoffs self.price = np.average(self.np_Vi) #--------------------------- def print_price(self): prints the option price to terminal print str(Call Price: %.4f) % self.price print --------------------------------------------- #--------------------------- def calc_statistics(self): uses payoffs and price to calc variance, standard deviation, and a 95% confidence interval for price self.variance = np.var(self.np_Vi) self.sd = np.std(self.np_Vi) #95% C.I. uses 1.96 z-value self.CI = [self.price - (1.96*self.sd/sqrt(float(self.simulations))), self.price + (1.96*self.sd/sqrt(float(self.simulations)))] #--------------------------- def print_statistics(self): prints option statistics that were calculated to the terminal print Variance: %.4f % self.variance print Standard Deviation: %.4f % self.sd print 95% Confidence Interval:, self.CI print --------------------------------------------- print ---------------------------------------------\n #--------------------------- def plot_trajectories(self): uses matplotlib to plot the trajectory of each individual stock stored in earlier trajectory array print Creating Plot... #use numpy to plot self.np_price_trajectories = np.array(self.price_trajectories, dtype=float) self.times = np.linspace(0, self.T, self.N-1) #style/plot/barrier line style.use('dark_background') self.fig = plt.figure() self.ax1 = plt.subplot2grid((1,1),(0,0)) for sublist in self.np_price_trajectories: if max(sublist) > self.B: self.ax1.plot(self.times,sublist,color = 'cyan') else: self.ax1.plot(self.times,sublist,color = '#e2fb86') plt.axhline(y=8,xmin=0,xmax=self.T,linewidth=2, color = 'red', label = 'Barrier') #rotate and add grid for label in self.ax1.xaxis.get_ticklabels(): label.set_rotation(45) self.ax1.grid(True) #plotting stuff plt.xticks(np.arange(0, self.T+self.delta_t, .1)) plt.suptitle('Stock Price Trajectory', fontsize=40) plt.legend() self.leg = plt.legend(loc= 2) self.leg.get_frame().set_alpha(0.4) plt.xlabel('Time (in years)', fontsize = 30) plt.ylabel('Price', fontsize= 30) plt.show()#-----------------------------------------------#Main Program#-----------------------------------------------#Initialize and print parametersprm = parameters()prm.print_parameters()#Price/print the optionui_mc = up_and_in_mc()ui_mc.calculate_payoff_vector()ui_mc.compute_price()ui_mc.print_price()#caclulate/print statsui_mc.calc_statistics()ui_mc.print_statistics()#plotui_mc.plot_trajectories() | Barrier Option Pricing using Python | python;finance;simulation;matplotlib | null |
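A vectorized NumPy version of the pricer above, for comparison: it replaces the per-step Python loops, uses the same parameters as the class, and its result still carries ordinary Monte Carlo error:
import numpy as np

S0, v, r, T = 5.0, 0.30, 0.05, 1.0   # same parameters as the class above
K, B = 6.0, 8.0
dt = 0.001
steps = int(T / dt)
n_sims = 1000

rng = np.random.default_rng(0)
z = rng.standard_normal((n_sims, steps))
increments = (r - 0.5 * v * v) * dt + v * np.sqrt(dt) * z
paths = S0 * np.exp(np.cumsum(increments, axis=1))       # shape (n_sims, steps)

knocked_in = paths.max(axis=1) > B                       # barrier crossed at any step
payoffs = np.where(knocked_in, np.maximum(paths[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoffs.mean()
stderr = payoffs.std(ddof=1) / np.sqrt(n_sims)

print("up-and-in call price ~", round(price, 4),
      "+/-", round(1.96 * np.exp(-r * T) * stderr, 4))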
_softwareengineering.92158 | Let's assume the following two assumptions are true.Your entire userbase has broadband access everywhereThere is an imaginary browser X that implements the entire draft specification of the HTML5 and WHATWG groups, consistently and all users use browser X.What are the intrinsic limitations of a commercial public HTML5 web application that we need commercial public desktop applications for?I'm interested in the limitations of plugin-less web applications that don't rely on Flash/Java/SilverLight/etc bridges for extra features nor rely on browser plugins for extra features.Possible Limitations that don't apply:Databases? We have WebSQL and indexedDB.File IO? We have the HTML5 File API which does both reading and writing.Speed? With the recent JavaScript engine race, the browser is no longer slow. Native C++ is only 3 times faster then chrome's V8 engine.Development Tools? The web has matured and there is a whole range of tools available which are too numerous to list.Closed Source? Yes, all the code is open source. This is a double-edged sword and there are numerous opinions on use of closed source or open source code. I personally believe the advantages of open source code outweigh the disadvantages.JavaScript/HTML5? Arguments along the likes of I personally think HTML5 and EcmaScript are horrible development platforms do not count.Known Limitations:Real time / security (top secret) critical code does not belong on the web nor can it. It needs to be written in a low level, highly controllable language like C or C++.Any tool that needs to interact with a foreign 3rd party piece of hardware attached to your computer will have a difficult time talking to your web application. There is also a whole suite of programs that do not belong on the web. Operation systems, drivers, server software, low level APIs. I'm aware of that but I don't classify them as commercial public applications, these are the type of software that can be pre-installed on computers.As an aside, I know the two assumptions are horribly unrealistic, but we might achieve them in 5/10/20/30 years. I'm interested in the type of applications and the features of the applications that make them completely incompatible with the web.Motivation:Google applicationsMicrosoft Office365Web application listAdobe AviaryThe point:Given the set of problems where a desktop application is a valid solution. Why is a web application not a valid solution?How do I identify whether or not I can use a web application as a solution.I've tried to remove the main difficulties with web applications (internet connection and browser support) by asserting they don't exist. As a further aside, HTML5 offline applications and Modernizr are on track to solving both those issues.What are the other difficulties with web application development? | Are there any limitations of an idealistic HTML5 web application | web development;javascript;web applications;web;desktop application | Off top of my head...access proprietary hardware that exports its I/O by other means than a file. Be that scientific equipment, industrial machinery, or plain CD recorder and a digitizer tablet with tilt support.only HTTP and a small family of other protocols. You can't create sockets as you wish, transferring whatever binary data you desire. That vastly limits connectivity with other systems and services.No sane developer will create graphics-intensive game in Javascript. Broadband is not nearly comparable to DVD/HDD throughputs often needed. 
Support for 3D in Canvas is vastly inferior to what you get with game engines. No way to support joystick, multiple simultaneous keypresses, the open nature makes cheating easy. But primarily, the performance drop is not acceptable.Heavy sandboxing. You won't get stuff that deeply integrates into the OS. Screenshots, antivirus, virtual drives, background tasks a'la system tray, administrative tasks etc.can't be mission-critical. Depending on broadband at all times to run their basic software is not the preferred way most companies like to run. |
_unix.378839 | I had a power issue in the house recently, and had issue getting my file server disks to mount. Turns out that one of the devices had renamed itself from sdb to sdd, and all of the LVM metadata is now missing. Using pvscan, lvscan, vgscan, etc all only show my system partition. Another reboot and the devices seemed to go back to they were before: sdb and sdc. I've managed to reassemble the raid using mdadm, but was unable to use vgcfgrestore to recreate my lvm configuration because apparently the UUID of my raid device has changed. My original VG was named vg0. Here's the result of vgcfgrestore: Couldn't find device with uuid 3fgedF-F7Dc-c300-svuP-b3Q3-qSnb-CukkLq. Cannot restore Volume Group vg0 with 1 PVs marked as missing. Restore failed.My /etc/lvm/backup/vg0 file shows this:vg0 { id = 3JWsYl-FmEP-gpsa-7grO-VlLU-x7uC-EevgFc seqno = 3 format = lvm2 # informational status = [RESIZEABLE, READ, WRITE] flags = [] extent_size = 8192 # 4 Megabytes max_lv = 0 max_pv = 0 metadata_copies = 0 physical_volumes { pv0 { id = 3fgedF-F7Dc-c300-svuP-b3Q3-qSnb-CukkLq device = /dev/md0 # Hint only status = [ALLOCATABLE] flags = [] dev_size = 3907028992 # 1.81935 Terabytes pe_start = 384 pe_count = 476932 # 1.81935 Terabytes } } logical_volumes { data { id = Sqjebo-rnKh-mgQH-a90E-Q0n7-idp1-1xPP56 status = [READ, WRITE, VISIBLE] flags = [] segment_count = 1 segment1 { start_extent = 0 extent_count = 476932 # 1.81935 Terabytes type = striped stripe_count = 1 # linear stripes = [ pv0, 0 ] } } }}So the issue it seems I'm having is that the pv UUID is no longer valid, and I'm not even sure now what to use. The raid I managed to reassemble with --scan auto-named to /dev/md1, but even changing that in the vg0 backup file had no effect. I'm still not sure what the new pv UUID is. # cat /proc/mdstatPersonalities : [raid1] md1 : active raid1 sdc1[1] sdb1[0] 1953383488 blocks super 1.2 [2/2] [UU] bitmap: 0/15 pages [0KB], 65536KB chunkunused devices: <none>Again, pvs, lvs, and vgs all show only my root/system volumes and vg's, nothing from vg0. Any suggestions on next steps? Both drives are full of data (most of which is backed up) but I'd like to take whatever steps I can to save the filesystems.EDIT: Displaying the head of both disks (/dev/md1 shows garbage). I notice that only one of them has a LABELONE label:[root@host ~]# head /dev/sdb1N+y{GyRjedi:1RUY1iSnZsH$WYuQ>4vg0 {id = IwXCM3-LnxU-Oguo-PXiN-nXwq-VFaU-ZmgySsseqno = 1format = lvm2status = [RESIZEABLE, READ, WRITE]flags = []extent_size = 8192max_lv = 0max_pv = 0metadata_copies = 0[root@host ~]# head /dev/sdc1LABELONEpu+ LVM2 0013fgedFF7Dcc300svuPb3Q3qSnbCukkLqN+y{GyRjedi:1RUYPFlO!H$WYuQ9vg0 {id = IwXCM3-LnxU-Oguo-PXiN-nXwq-VFaU-ZmgySsseqno = 1format = lvm2status = [RESIZEABLE, READ, WRITE]flags = []extent_size = 8192max_lv = 0max_pv = 0metadata_copies = 0So now the 50 cent question: how do I recover the LVM labels without damaging the underlying filesystem?UPDATE:So I was basically able to successfully execute vgcfgrestore to a valid copy of my lvm backup config using a new PV UUID, and assembled /dev/md0 with that one drive, but now I'm getting a message that my PV is smaller than the allocated space. Basically it's reporting that my physical extents dropped from 476932 to 476900. The size of the disk hasn't changed, and I verified that the PV actually does have the correct number of extents available: (see the last line)[root@host /]# pvs -v --segments /dev/md0 Using physical volume(s) on command line. 
Wiping cache of LVM-capable devices Wiping internal VG cache Device /dev/md0 has size of 3906766976 sectors which is smaller than corresponding PV size of 3907028992 sectors. Was device resized? One or more devices used as PVs in VG vg0 have changed sizes. PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges /dev/md0 vg0 lvm2 a--u 1.82t 0 0 476932 data 0 linear /dev/md0:0-476931The last line shows that it's reporting extents from 0-476931, which is the correct size. I thought perhaps that the LVM headers itself may consume some space, but this isn't a new volume, it's been used for years without any issue and has never been resized. The volume is showing as suspended: LV Status suspended # open 0I attempted to extend my PV with a USB thumbdrive (didn't think it would work, and it didn't) thinking if I could even temporarily mount this filesystem I could copy off the data and then create the whole raid from scratch, but of course that was not effective. Any thoughts on possible next steps to save the data? | LVM metadata missing, trying to recreate raid 1 with LVM | raid;lvm;software raid;mdadm;raid1 | null |
_softwareengineering.312931 | From the Apache non-lawyer version of the license:It requires you to:include a copy of the license in any redistribution you may make that includes Apache software;provide clear attribution to The Apache Software Foundation for any distributions that include Apache software.The license itself clarifies derivative work:For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereofMy question is what feels like a common use case, but I'm not exactly sure where it fits into their license.If I am using an Apache 2.0 licensed software purely as an API for an internal project (not modifying the licensed code at all), what are my responsibilities from a licensing perspective? It seems my work is not a derivative work since I am not modifying the actual code.This link seems to suggest I only need to include it in a NOTICE file. Are there other obligations?This feels like a really common use case but I can't seem to find much actually referencing this in any of the sources I quoted from? | What are my obligations when using an Apache 2.0 library in an internal project? | licensing;apache license;apache2 | In general, licensing obligations pivot on the subject of distribution. If you don't distribute, you generally have no obligations, because you're not impacting anyone outside of your organization.For completeness, I will talk a bit about derivative works. Because the GPL depends greatly on the definition of derivative work, it discusses this matter in greater detail in their license and on their pages, so it provides a good platform for discussion. In layman's terms, the GPL considers a separate program (i.e. not a derivative work) to be a program that:Communicates at arm's length with the covered work, andThe covered work can operate successfully without it.By arm's length, they mean something like a command-line interface or some sort of published specification for a runtime plugin, and not a DLL reference (i.e. compilation of the two works together). But since you'll never distribute outside of the organization, the issue of derivative works is moot. |
_unix.41453 | I am trying to make cow-copies of some files/directories, but of the several ways I know of, all seem sub-optimal.For example, btrfs can, with the use of cp --reflink=auto quickly generate cow-copies of files.What I have tried:Symlinks: No good. Renamed file, broken link.Hardlinks: Better, but still no good. Changes to one file will change the other, and I don't necessarily want the other file changed.Create a snapshot of the dataset, then clone the snapshot: This can work, but not well. Often I'm not looking for a copy of the whole dataset, or for the copies to act like another dataset. Then there are the parent/child relationships between the clone/snapshot/original, which as I understand it are hard, if not impossible to break.Using zfs send/receive, and enabled dedup, replicate the dataset to a new dataset: This avoids the parent/child relationships of using a clone, but still needlessly creates another dataset, and still suffers from the slowness involved in the files having to be read 100% and the blocks referenced again instead of written.Copy files and let dedup do its job: This works, but is slow because the file(s) have to be 100% read and then the blocks referenced again instead of writing.Slowness of zfs send/receive and physically copying or rsyncing is further exacerbated because most things are stored compressed, and have to be decompressed during read, then compressed before dedup kicks in to reference duplicate blocks.In all of my research, I have not been able to find anything remotely resembling the simplicity of --reflink in btrfs.So, is there a way to create cow-copies in ZFS? Or is physically copying and letting dedup do its job the only real option? | Is there a way to create cow-copies in ZFS? | freebsd;zfs | I think option 3 as you have described above is probably your best bet. The biggest problem with what you want is that ZFS really only handles this copy-on-write at the dataset/snapshot level.I would strongly suggest avoiding using dedup unless you have verified that it works well with your exact environment. I have personal experience with dedup working great until one more user or VM store is moved in, and then it falls off a performance cliff and causes a lot of problems. Just because it looks like it's working great with your first ten users, your machine might fall over when you add the eleventh (or twelfth, or thirteenth, or whatever). If you want to go this route, make absolutely sure that you have a test environment that exactly mimics your production environment and that it works well in that environment.Back to option 3, you'll need to set up a specific data set to hold each of the file system trees that you want to manage in this way. Once you've got it set up and initially populated, take your snapshots (one per dataset that will differ slightly) and promote then into clones. Never touch the original dataset again.Yes, this solution has problems. I'm not saying it doesn't, but given the restrictions of ZFS, it's still probably the best one. I did find this reference to someone using clones effectively: http://thegreyblog.blogspot.com/2009/05/sparing-disk-space-with-zfs-clones.htmlI'm not real familiar with btrfs, but if it supports the options that you want, have you considered setting up a separate server just to support these datasets, using Linux and btrfs on that server? |
_unix.304800 | I've installed all dependencies needed to run cmake -G Unix Makefiles command. This command is executed successfully. Next, I ran make command and I've get following error:In file included from /home/hubert/Pobrane/miktex-2.9-2016-08-17/Programs/DviWare/dvipdfm-x/source/dpxfile.c:39:0:/home/hubert/Pobrane/miktex-2.9-2016-08-17/Programs/DviWare/dvipdfm-x/source/dpxfile.c: In function dpx_create_temp_file:/home/hubert/Pobrane/miktex-2.9-2016-08-17/Programs/DviWare/dvipdfm-x/source/dpxfile.c:827:15: error: _MAX_PATH undeclared (first use in this function) tmp = NEW(_MAX_PATH + 1, char); ^/home/hubert/Pobrane/miktex-2.9-2016-08-17/Programs/DviWare/dvipdfm-x/source/mem.h:37:50: note: in definition of macro NEW #define NEW(n,type) (type *) new(((uint32_t)(n))*sizeof(type)) ^/home/hubert/Pobrane/miktex-2.9-2016-08-17/Programs/DviWare/dvipdfm-x/source/dpxfile.c:827:15: note: each undeclared identifier is reported only once for each function it appears in tmp = NEW(_MAX_PATH + 1, char); ^/home/hubert/Pobrane/miktex-2.9-2016-08-17/Programs/DviWare/dvipdfm-x/source/mem.h:37:50: note: in definition of macro NEW #define NEW(n,type) (type *) new(((uint32_t)(n))*sizeof(type)) ^Programs/DviWare/dvipdfm-x/CMakeFiles/MiKTeX209-dvipdfmx.dir/build.make:206: polecenia dla obiektu 'Programs/DviWare/dvipdfm-x/CMakeFiles/MiKTeX209-dvipdfmx.dir/source/dpxfile.c.o' nie powiody simake[2]: *** [Programs/DviWare/dvipdfm-x/CMakeFiles/MiKTeX209-dvipdfmx.dir/source/dpxfile.c.o] Bd 1CMakeFiles/Makefile2:3759: polecenia dla obiektu 'Programs/DviWare/dvipdfm-x/CMakeFiles/MiKTeX209-dvipdfmx.dir/all' nie powiody simake[1]: *** [Programs/DviWare/dvipdfm-x/CMakeFiles/MiKTeX209-dvipdfmx.dir/all] Bd 2Makefile:149: polecenia dla obiektu 'all' nie powiody simake: *** [all] Bd 2I also get this error when I try compile MikTex using sudo make, make install and sudo make install. I've system Linux Mint 18 Sarah 64-bit on Toshiba Satellite C660D-102 computer. Can anyone help me? | Compilation error when I try compile MikTex | compiling;make | null |
_unix.377537 | Explanation: I need a script or program to discover devices on my network. I was thinking of maybe doing the scan with nmap, and I need to display just the name of the device, what it is, and the IP address/MAC address of the device. I would like to do the scan in the background and only display the desired information in the form of a list.

Example output after the script/program runs:

There are 2 hosts up
Host 1: Lenovo-PC | 192.168.1.86 | 0A:65:3F:2B:F1 | Windows
Host 2: LG-3444 | 192.168.1.89 | A9:B2:C3:D4:E5 | LG Electronics

etc. ... you get the point. PS: BTW, these are examples, not real IPs.

Overview: So I want to scan my network for devices/hosts and display important info about each one in a list (using a bash script, Python, or anything that can achieve this). | Script or Program to discover hosts on network | bash;scripting;python | null
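A minimal bash sketch of the nmap-based approach the question mentions; the subnet is an assumption, and nmap generally needs to run as root to report MAC addresses and vendor strings:

#!/bin/bash
# Ping-scan the (assumed) subnet and list each host that responded.
subnet="192.168.1.0/24"

nmap -sn "$subnet" -oG - | awk '/Up$/{print $2}' | while read -r ip; do
    # Reverse-resolve a name if the local DNS/router provides one
    name=$(getent hosts "$ip" | awk '{print $2}')
    echo "Host: ${name:-unknown} | $ip"
done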
_codereview.166582 | how would I go about making this code cleaner? Right now it's a pain to look at and try to understand.from msvcrt import getwch, kbhitfrom os import systemfrom time import sleepfrom random import randintdef check_type(num, type): # Check if int, if not return 0 if type == int: try: float(num) except: print( Error: Not number.) print() return 0 else: # Check if has decimal try: int(num) except: print( Error: Number can't have a decimal.) print() return 0 # Check if float elif type == float: try: float(num) except: print( Error: Not number.) print() return 0 # Check if string elif type == str: try: str(num) except: print( Error: Not string.) print() return 0 return type(num)exit = 0while exit == 0: print(Game settings) Board board = [] # Board size input board_size = check_type(input( Board size: ), int) if board_size == 0: continue aspect_ratio = 16 / 9 for num in range(board_size): board.append([.] * round(board_size * (aspect_ratio))) Variables # Defaults x and y to middle of the board x = round((board_size * (aspect_ratio)) / 2) - 1 y = round(board_size / 2) # Defaults to moving right directionx = 1 directiony = 0 key_press = 0 count = -1 apple_count = 0 x2 = [] y2 = [] applex = 0 appley = 0 apple_count_count = 0 # Speed input while 1: speed = check_type(input( Speed (lower = faster): ), float) if speed == 0: continue else: break speed_increment = 0 x2.append(x) y2.append(y) Main while board[y][x] != x: # Loops board while no key presses while kbhit() == 0: x += directionx y += directiony # Border overflow check if x >= round(board_size * (aspect_ratio)) and directionx == 1: x = 0 elif x < 0 and directionx == -1: x = round(board_size * (aspect_ratio)) - 1 if y >= board_size and directiony == 1: y = 0 elif y < 0 and directiony == -1: y = board_size - 1 # Checks to see if crash if board[y][x] == x: while 1: print() print(You have crashed!) # User input while 1: exit = check_type(input(Restart? (yes/no): ), str) if exit == 0: continue else: break if exit.lower() == yes: exit = 0 break elif exit.lower() == no: exit = 1 break system(cls) break # If eat apple elif board[y][x] == o: apple_count += 1 apple_count_count = 0 # Updates x position board[y][x] = X x2.append(x) y2.append(y) # Makes body small x board[y2[count + 1]][x2[count + 1]] = x # Tail eater if count - apple_count >= 0: board[y2[count - apple_count]][x2[count - apple_count]] = . 
count += 1 # Only allows one apple at a time if apple_count_count <= 10: # Makes sure apple doesnt spawn on anything while apple_count_count == 10: applex = randint(0, round(board_size * (aspect_ratio)) - 1) appley = randint(0, board_size - 1) if board[appley][applex] != .: continue board[appley][applex] = o break apple_count_count += 1 # Clears previous board then prints updated one system(cls) for row in board: print( .join(row)) # Prints extra info print(Speedup: +%d%% Apple count: %d % (round(((speed - (speed - speed_increment)) / speed) * 100), apple_count)) # Delay the loop sleep(round(speed - speed_increment, 2)) if speed - speed_increment >= speed / 2: speed_increment += speed / 1000 key_press = 0 # Game logic else: key = getwch() if key == w and directiony != 1 and key_press == 0: directionx = 0 directiony = -1 key_press = 1 elif key == a and directionx != 1 and key_press == 0: directionx = -1 directiony = 0 key_press = 1 elif key == s and directiony != -1 and key_press == 0: directionx = 0 directiony = 1 key_press = 1 elif key == d and directionx != -1 and key_press == 0: directionx = 1 directiony = 0 key_press = 1I was thinking about turning some while loops into functions but I'm not sure if that would make the game run slower. I'm planning on getting into website back-end development so knowing how to write code with good performance would be handy.Also, how should I go about commenting? Creating a newline at the end of every section I comment doesn't seem efficient at all. Is there some sort of guideline about commenting?Any help would be greatly appreciated! | Snake game made with Python | python;game;snake game | FunctionsYou can greatly improve your use of functions. For example, you could wrap the game logic into a function called game_logic() and call that. The same goes for your 'main' loop, which can be wrapped in game_loop.Improving check_type()Using a plain except: clause is not a good idea. Especially for longer and more complicated projects, if something else raises an exception while the try / except block is being executed and no specific exception is required, it will also run. To avoid this, use except ValueError:.Returning 0 can confuse people because it looks similar to exit code (0). To avoid the confusion, return False.Simplifying game logicTo shorten the length of 'game logic' statements, you could put the second part in a boolean variable:conditional = not directory and not key_pressif key in [w, a, s, d]: # This is true for all keys in [w a s d] key_press = Trueif key == w and conditional: directionx, directiony = 0, -1 # Use multiple assignment to shorten codeelif key == a and conditional: directionx, directiony = -1, 0elif key == s and conditional: directionx, directiony = 0, 1elif key == d and conditional: directionx, directiony = 1, 0Don't use os.system()If you need to clear the screen, you can do something like print(\n * y) where y is the vertical height of your terminal. You could also use a carriage return. This might look a lot cleaner (depending on your implementation).Questionable codeHere's some things I thought were quite unusual:Why are you starting a print() statement with a 'space'? And why are you doing that inconsistently?Why are you using docstrings where they are not at all needed and especially in cases where you'd much prefer -no- documentation?You have some redundant comments, for example # Speed input to explain the use of a variable called speed_input. 
You can greatly simplify boolean logic, instead of using if x == 0 or if x == 1 (etc.), use if not x and if x. This makes the code easier to read.You are using 1 and 0 a lot for boolean logic, which I find quite confusing. Where possible, it would, for readability, be a better choice to use True and False: while True: etc.It is not needed to check exit for being a string. Any input will automatically be a string, and there's no need to convert it here.BugsIf there's an error converting board_size to integer, the board size will remain 0. Set a default size to avoid this.Trying to turn a float into an integer will not raise an exception, so using that to check for decimal numbers is useless.I have left out comments on the many overly complicated while, if and else blocks, the unnecessary variables, the shortage of whitespace and the fact that there is no main() function / guard. |
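To illustrate the game_logic()/game_loop suggestion above and the missing main() guard, a minimal structural sketch (the function names and the state dictionary are suggestions, not code from the original script):

def setup():
    """Collect the game settings (board size, speed) once, up front."""
    return {"board_size": 10, "speed": 0.5, "game_over": False}

def game_loop(state):
    """One frame: move the snake, check collisions, handle apples, redraw."""
    # ...the body of the existing outer while loop would move here...
    state["game_over"] = True  # stub so this sketch terminates

def main():
    state = setup()
    while not state["game_over"]:
        game_loop(state)

if __name__ == "__main__":
    main()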
_unix.354037 | If I have the following directoriesa/ b/ c/d/ c/f/ g/How can I find -name something -type f in just a/ c/f/ and g/ without having to iterate through the list of directories I want and running find on it individually?This is partially answered here: find exclude directoryBut I have 100s of directories to include and exclude so I was looking for a better solution where I could just specify a list of directories as an argument or a file with the directory names in them.Thanks. | find items only in a list of directories | find | null |
_webmaster.19946 | I've been using MS Frontpage 2003 to maintain our company website for years. Looking for a replacement that can:Import/convert a MS FrontPage website and modernize it (clean up the HTML to make it standards compliant, etc.)Supports (or converts) the substitutions (Include Page and Text substitutions that are done when the page is published (so they become static HTML).Leverages my knowledge of FrontPageLooks like the likely contender is Web Expressions but I'm open to objective suggestions. | What is a good replacement for MS Frontpage? | software | Expression Web will do what you need to do. It should be comfortable for you to use since you're familiar with the Microsoft UI paradigm. It also corrects a number of problems that the old Frontpage had.While there are some other very good web editors available there will be a significant learning curve for them. |
_codereview.27528 | I've been getting help with this script seen below. It allows one input box to be used to put values into a text area and also display a list of the items. The items can be deleted once inserted:http://jsfiddle.net/ZTuDJ/50/// If JS enabled, disable main input$(#responsibilities).prop('disabled', true);// $(#responsibilities).addClass(hidden);// If JS enabled then add fields$(#resp).append('<input placeholder=Add responsibility id=resp_input ></input><input type=button value=Add id=add> ');// Add items to input fieldvar eachline='';$(#add).click(function(){ var lines = $('#resp_input').val().split('\n'); var lines2 = $('#responsibilities').val().split('\n'); if(lines2.length>10)return false; for(var i = 0;i < lines.length;i++){ if(lines[i]!='' && i+lines2.length<11){ eachline += lines[i] + '\n'; } } $('#responsibilities').text($(<div> + eachline + </div>).text()).before(<li>+$(<p>+lines+</p>).text()+<span class='right'><a href='#'>Remove</a></span></li>); $('#resp_input').val('');}); $(document).on('click', '.right a', function(){ var el = $(this).closest('li') var index = $('li').index(el); var text = eachline.split('\n'); text.splice(index, 1); eachline = text.join('\n') $('#responsibilities').text(eachline) el.remove()})I was wondering if it would be possible to tag links in the list with the following information:<span refid=1>Remove</span>By tagging the links in this way I was thinking I could easily point to them with jquery if I wanted to delete it or later edit it.Is there room to simplify this script in this way? | Jquery improvement of this script - Adding and removing items from list | javascript;jquery | What's good here:It's fairly simpleUsing prop over attrWhat I did not likeUsing HTML strings in code makes code very hard to modify later on. It's hard to debug and might cause potential issues (like making XSS easy for example).No separation of concerns. You're treating your HTML like your source of knowledge instead of backing up your data with a model in the background. This is very harmful and I personally consider it a big design flaw. Your business logic (in this case the list items) and your presentational layer (the DOM nodes) is the same here. Every time you want to operate on the list you have to query the DOM. This is not only slow, but it becomes very nasty very fast as your code logic begins to expand.There are frameworks like KnockoutJS that do data binding making this trivial to accomplish in just a few lines of code. However since you wanted a solution that has no 'magic' (Which I totally appreciate by the way) let's see what would be a simple JavaScript solution would look like.Short note, I'm doing JavaScript with function constructors here since a lot of people find it easier, personally I'd use object initializers.Here is a working fiddleFirst, we'll have a data model for a responsibility.//This is our element in the view model.function Responsibility(text){ this.text=text;}Next, we'll need to store all our responsibilities. We'll use an array:// a list of our responsibilities, an actual view model to back our datavar responsibilities = []; Now, we'll want a render function, that'll take our elements, and turn them into actual DOM objects. We'll spit it into two. 
First, we'll render the list item:

function renderResponsibility(rep, deleteClick){
    var el = document.createElement("li");
    var rem = document.createElement("a");
    rem.textContent = "Remove";
    rem.onclick = deleteClick;
    var cont = document.createElement("span");
    cont.textContent = rep.text;
    el.appendChild(cont);
    el.appendChild(rem);
    return el;
}

On renderResponsibility another alternative would be to use a templating engine, but you said no extra libraries so I'm keeping my word :)

Now, our general render:

//render our actual elements
function render(){
    respList.innerHTML = "";
    // note, foreach needs a modern browser but can easily be shimmed
    responsibilities.forEach(function(responsibility, i){
        var el = renderResponsibility(responsibility, function(){
            responsibilities.splice(i, 1); //remove the element
            render(); //re-render
        });
        respList.appendChild(el);
    });
    //update the text area
    respTextArea.textContent = responsibilities.map(function(elem){
        return elem.text; //get the text.
    }).join("\n");
}

Finally, let's add the functionality to the Add button:

//events
addButton.onclick = function(e){
    var text = newResp.value;
    var resp = new Responsibility(text);
    responsibilities.push(resp);
    render();
    newResp.value = "";
}

And we're done :)

Just to tease:
Here is a solution in KnockoutJS
Here is a shorter solution in KnockoutJS without using an object for responsibility
Here is a solution in AngularJS

Also here is a more 'jQuery' version of the vanilla solution. As you can see a DOM manipulation library isn't very useful when building an application.
_unix.180099 | Env.# uname -aLinux FriendlyARM 3.0.8-FriendlyARM #1 PREEMPT Tue Oct 30 10:33:04 CST 2012 armv7l GNU/LinuxProblemWhen trying to run my chrome executable I am getting this:[root@FriendlyARM chromium]# ./chrome./chrome: error while loading shared libraries: libattr.so.1: cannot open shared object file: No such file or directoryPrint shared library dependencies, indeed libattr.so.1 is not foundldd ./chrome...libp11-kit.so.0 => /usr/lib/arm-linux-gnueabihf/libp11-kit.so.0 (0x469ab000)libXau.so.6 => /usr/lib/arm-linux-gnueabihf/libXau.so.6 (0x469be000)libXdmcp.so.6 => /usr/lib/arm-linux-gnueabihf/libXdmcp.so.6 (0x469c8000)libattr.so.1 => not found <======= NOT FOUNDlibgpg-error.so.0 => /lib/arm-linux-gnueabihf/libgpg-error.so.0 (0x469d3000)I have found a libattr.so.1 library that I copied to /usr/lib and created a symbolic link to /libbut chrome still can't find it.What could I try to fix this?UPDATE 20150120file libXau.so.6.0.0debian_wheezy_arm-sysroot/usr/lib/arm-linux-gnueabihf/libXau.so.6.0.0: ELF 32-bit LSB shared object, ARM, version 1 (SYSV), dynamically linked, BuildID[sha1]=0xcbd329ab335e695742bac844bfcb02c83e8fac78, strippedfile libattr.so.1.1.0libattr.so.1.1.0: ELF 32-bit LSB shared object, ARM, version 1 (SYSV), dynamically linked, not strippedReadelf libattr$ readelf -A libattr.so.1.1.0 Attribute Section: aeabiFile Attributes Tag_CPU_name: 7-A Tag_CPU_arch: v7 Tag_CPU_arch_profile: Application Tag_ARM_ISA_use: Yes Tag_THUMB_ISA_use: Thumb-1 <===== Tag_FP_arch: VFPv3 Tag_Advanced_SIMD_arch: NEONv1 <===== Tag_ABI_PCS_wchar_t: 4 Tag_ABI_FP_denormal: Needed Tag_ABI_FP_exceptions: Needed Tag_ABI_FP_number_model: IEEE 754 Tag_ABI_align_needed: 8-byte Tag_ABI_align_preserved: 8-byte, except leaf SP Tag_ABI_enum_size: int Tag_ABI_HardFP_use: SP and DP <============= Tag_ABI_optimization_goals: Aggressive SpeedReadelf libXauvagrant@vagrant:/vagrant_data$ readelf -A libXau.so.6.0.0 Attribute Section: aeabiFile Attributes Tag_CPU_name: 7-A Tag_CPU_arch: v7 Tag_CPU_arch_profile: Application Tag_ARM_ISA_use: Yes Tag_THUMB_ISA_use: Thumb-2 <===== Tag_FP_arch: VFPv3-D16 Tag_ABI_PCS_wchar_t: 4 Tag_ABI_FP_denormal: Needed Tag_ABI_FP_exceptions: Needed Tag_ABI_FP_number_model: IEEE 754 Tag_ABI_align_needed: 8-byte Tag_ABI_align_preserved: 8-byte, except leaf SP Tag_ABI_enum_size: int Tag_ABI_HardFP_use: SP and DP <======= Tag_ABI_VFP_args: VFP registers Tag_ABI_optimization_goals: Aggressive Speed Tag_DIV_use: Not allowedor with grep FPvagrant@vagrant:/vagrant_data$ readelf -A libattr.so.1.1.0 | grep FP Tag_FP_arch: VFPv3 Tag_ABI_FP_denormal: Needed Tag_ABI_FP_exceptions: Needed Tag_ABI_FP_number_model: IEEE 754 Tag_ABI_HardFP_use: SP and DPvagrant@vagrant:/vagrant_data$ readelf -A libXau.so.6.0.0 | grep FP Tag_FP_arch: VFPv3-D16 Tag_ABI_FP_denormal: Needed Tag_ABI_FP_exceptions: Needed Tag_ABI_FP_number_model: IEEE 754 Tag_ABI_HardFP_use: SP and DP Tag_ABI_VFP_args: VFP registers <===== CONFIRM it's ARMHF (?)Apparently my libattr is not is not using the correct ABI. I found another library that should work better hereARMHF libattrvagrant@vagrant:/vagrant_data/libattr-2.4.47-armhf-1/lib$ readelf -A libattr.so.1.1.0 | grep FP Tag_FP_arch: VFPv3-D16 Tag_ABI_FP_denormal: Needed Tag_ABI_FP_exceptions: Needed Tag_ABI_FP_number_model: IEEE 754 Tag_ABI_HardFP_use: SP and DP Tag_ABI_VFP_args: VFP registers | chromium compiled for ARM, libattr.so.1 not found | chrome;arm;shared library | You show the output of file libattr.so.1.1.0, but the executable is looking for libattr.so.1. 
That's not the same name. Normally libattr.so.1 should be a symbolic link to libattr.so.1.1.0, and the right way to create this symbolic link is to run the program ldconfig. So make sure that you put libattr.so.1.1.0 where you want it (/usr/local/lib would be a good idea, /usr/lib is for files installed by the package manager) and that you have run ldconfig. Run ldconfig -v and check that it tells you that it created the desired symbolic link.If that's not the issue, it's possible that you grabbed an incompatible libattr.so.1. There are several ABIs on ARM, depending on which instructions programs are allowed to use (allowing programs to use more instructions limits them to recent, high-end processors). Your system is evidently based on gnueabihf, i.e. ARM EABI with GNU libc and with hardware floating point (hard float) support the armhf architecture of Debian. Make sure that libattr.so.1 is also from armhf and not e.g. from armeabi (ARM EABI without hardware floating point). You can check the ABI that a library (or an executable) is for with readelf -A libattr.so.1 libXau.so.6.0.0. Look in particular for Tag_ABI_VFP_args the values must match. |
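A short sketch of the check-and-fix sequence described above; /usr/local/lib is the suggested location for a manually installed library:

# put the manually obtained library where local libraries belong
cp libattr.so.1.1.0 /usr/local/lib/

# let ldconfig create the libattr.so.1 symlink and refresh the loader cache
ldconfig -v | grep libattr

# confirm the ABI matches (armhf builds report "Tag_ABI_VFP_args: VFP registers")
readelf -A /usr/local/lib/libattr.so.1 | grep -i vfp_args

# the loader should now resolve the dependency
ldd ./chrome | grep libattr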
_webmaster.44467 | I want to use my YouTube channel and video pages to promote my website. I have read a website that talks about backlinks and anchors which use something like: <a href=http://foo.com>foo</a> to create the backlink . However as I know I cannot create a backlink like that in video description. How I can create a backlink for my website in YouTube? | How to create a backlink in my youtube channel/video pages to promote my website? | seo;backlinks;youtube | null |
_cs.71831 | I have an image of terrain that has presumably had an embossed effect applied (exact technique unknown) and I would like to convert that image into a grayscale depth/displacement map (ie. Digital Elevation Model), effectively un-embossing the image. How would I do that? Are there any tools/libraries that might help me?Example of embossed input imageExample of desired output (from a different input) | How to un-emboss an image? | image processing;graphics | null |
_unix.220559 | I want to set my keyboard so that pressing space and releasing will emit a normal space, but holding space and pressing one of i/j/k/l will emit an arrow key.Is it possible to do that?Thanks! | Remapping space + ijkl to arrow keys | keyboard layout;xkb | null |
_unix.192823 | I just spent a few frustrating hours troubleshooting on a Raspberry Pi 2 only to find out that it is impossible to print to my Canon printer natively using CUPS. Thankfully, my printer supports Google Cloud Print and I could just use a nice tool called CUPS Cloud Print to redirect my printing jobs to Google Cloud Print services and then to my printer without the need for drivers, however it is not optimal because there is a large lack in features (such as no duplexing).I have an old desktop computer laying around as a free NAS box running Ubuntu 14.04.01 and I was wondering if it would be possible to use that as my CUPS server with amd64 drivers so I could use CUPS on my Raspberry Pi 2 to send jobs to my printer over the server. My current printer is a Canon MF8580Cdw and it supports Google Cloud Print, Airprint, and tons of other network printing solutions. Is there maybe an easier way that I've been missing? | Can I print via CUPS on ARM device without drivers by installing CUPS onto an x86 server with drivers? | ubuntu;raspberry pi;printing;cups;raspbian | null |
_computerscience.1987 | A paper$^1$ I'm reading says fluence measures the incoming radiance from all directions and that fluence is similar to irradiance. It's defined by $\phi(x) = \int_{4 \pi} L(x, \vec{\omega'}) d\omega' $ (The tick mark seems to emphasize $\vec{\omega'}$ varies over the integration, rather than being a fixed direction.)Intuitively, radiance is the amount of light given off in a direction but attenuated by how far off the surface is from being perpendicular to the light direction. $L=\frac{d^2\Phi}{d\omega dA \cos \theta}$Irradiance is the area light density of a surface. $E=\frac{d\Phi}{dA}=\int_{2 \pi } L_i(\vec{\omega}) \cos \theta d\omega$Intensity is the amount of light hitting a surface from a particular direction. $I(\vec{\omega})=\frac{d\Phi}{d\omega}$It also looks like you can write the radiance as irradiance times intensity. $L=E\cdot I(\vec{\omega})$Looking at the formulas, it seems like irradiance sums radiance over the hemisphere over a surface point and attenuated by how much the surface orients toward the light source, while fluence sums up radiance over the whole sphere surrounding a point (to measure inscattering).Wikipedia says, radiant fluence is the radiant energy received by a surface per unit area, or equivalently the irradiance of a surface integrated over time of irradiation. But this definition doesn't seem to match the one given above since there's no time involved.https://www.researchgate.net/post/What_is_the_definition_and_difference_between_light_dose_and_light_fluence_ratehttp://omlc.org/education/ece532/class1/irradiance.htmlReal-Time Rendering of Translucent Materials with Directional Subsurface Scattering, Alessandro Dal Corso, 2014 | What's the difference between irradiance and fluence/radiant exposure? | rendering;physically based;physics | I can only cite what I have learned in my lecture on Global Illumination techniques which was unfortunately some time ago:Radiant Power : The amount of energy emitted by a light source in unit time. denoted by $\Phi$ and is measured in Watt which equals to Joules per second. (This does not specify any area!)Irradiance: The irradiance denotes the incident radiant energy on a surface. It is defined as $E=\frac{d\Phi}{dA}$ - This quantity denotes the incoming energy per area and is measure ind $\frac{W}{m^2}$ Note here: radiant energy is measured in Watt which is Joules per second!Radiant exitance: This is the reverse direction of Irradiance and is the energy an area emits in unit time and has the same definiton and units as Irradiance only with reverse direction $M = B = \frac{d\Phi}{dA}$. This is also often called radiosity B.Radiance: Is the power per projected area per solid angle. It is defined as $L=\frac{d\Phi}{d \omega * dA * cos\theta}=[\frac{W}{sr*m^2}]$ where $\omega$ denotes the solid angle, $\theta$ denotes the angle between emitting surface normal and the direction the energy is traveling to. So this is not attenuated by the distance but by the angles.In general as I learned the distance between emitting and receiving surface is only important for Radiance because the projected area will be smaller if they are farther apart.The time you are missing in your definitions seem to come from the conversion of Watt to Joules Per Second. |
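Putting the two definitions from the question side by side makes the difference explicit (same incoming radiance, different integration domain and weighting):

$E(x) = \int_{2\pi} L_i(x,\vec{\omega'}) \cos\theta' \, d\omega'$ (irradiance: hemisphere above the surface, weighted by the cosine of the angle to the surface normal)

$\phi(x) = \int_{4\pi} L(x, \vec{\omega'}) \, d\omega'$ (fluence: full sphere around the point, no cosine factor)

So fluence requires no surface normal at all, which is why it is the natural quantity at a point inside a scattering medium, while irradiance is tied to a surface element. The time-integrated quantity Wikipedia describes (radiant exposure, in J/m^2) is this integrated over the irradiation time; rendering papers often use the name fluence for the instantaneous quantity above, which is strictly the fluence rate (W/m^2).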
_webmaster.21493 | If you had a list of URLs and you wanted to show which of them was more important than the others (by click-through), how would you gain the number of clicks for a given link so that you could sort them, possibly showing that some links are a thousand times more visited than the others?The output could be:1 : link1 : 105002 : link2 : 92003 : link3 : 81004 : link4 : 70005 : link5 : 2506 : link6 : 1007 : link7 : 20 | How can I obtain click data to sort a list of URLs? | analytics;usability;visitors;data;usage data | null |
_softwareengineering.149353 | This question is based on some testing I've done recently in order to better understand the SQL indexing system.

Table with 500k entries, InnoDB engine. This is a simple select query on a varchar field. The name column is not indexed. After indexing, these are the results: all good until this point. When I try to place a wildcard at the beginning of the search key, even though the column is indexed, the result time is the one from the non-indexed case.

I'm wondering why this is happening. Is it because the index cannot be used any more? Is this solvable in any way? Can I achieve a better search time even though I'm using % both at the beginning and at the end of the search string?Thank you! | Wildcard use in select statement on indexed varchar column | mysql;indexing | If you want to see what indices MySQL is using, try using EXPLAIN:http://dev.mysql.com/doc/refman/5.5/en/explain.html

As for how indices are used, when you have a wildcard at the start and end of the LIKE expression, e.g. '%something%', then no, the index is not used, as per this reference:

The following SELECT statements do not use indexes:
SELECT * FROM tbl_name WHERE key_col LIKE '%Patrick%';
SELECT * FROM tbl_name WHERE key_col LIKE other_col;
In the first statement, the LIKE value begins with a wildcard character. In the second statement, the LIKE value is not a constant.

http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
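A small sketch of the EXPLAIN comparison suggested in the answer (table and column names are illustrative):

-- trailing wildcard: the B-tree index on `name` can be used as a range scan
EXPLAIN SELECT * FROM people WHERE name LIKE 'Patr%';

-- leading wildcard: no usable prefix, so MySQL falls back to scanning every row
EXPLAIN SELECT * FROM people WHERE name LIKE '%atrick%';

In the first case the type column of the EXPLAIN output typically shows range with the index listed under key; in the second it shows ALL with key empty.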
_softwareengineering.332070 | I am currently implementing my own programming language. Until now I have written:

An Error class for errors (to be thrown) encountered while processing the input source code;
Some SyntaxError functions (each one with a different tag, passed with a template, like SyntaxError<malformed_number>() or SyntaxError<unexpected_symbol>) which help construct Error objects with different error messages (for now I only have this, but I will soon have FunctionErrors, IndexErrors and so on);
A Token class which contains the information about each token (the line in the string at which it was encountered, the type which can be string, number... things like that);
Finally, a lexer function which takes as input a string of source code and returns a std::list<Token*> object. Internally, it uses a few helper functions (buildname, builsymbol, skipcomment...) to help keep the code organized.

I didn't follow any particular programming technique while coding this, but I now feel the urge to follow those that go under the Test-Driven Development name. But I have a few concerns:

First, what do I have to test in the above code? Of course the lexer function should be tested deeply, and maybe also the helper functions it uses (I guess), but what about the other things?

Second, how do I test functions with non-trivial output? Online I see a lot of examples comparing simple numbers or strings, but how does one write a test for a function which returns a list of pointers to objects without having to write too much for a single test case? | TDD on an already started project | c++;tdd;extreme programming;technique | TDD is less a testing technique and more a coding technique. The question you need to ask here is not "shall I test this class or not" but "I want to implement a small change in my code base; how can I write a test which is red before the change and becomes green after the change?"

For example, a small change might be the introduction of a new reserved word in your programming language. You write a test for your lexer to return a special token for this - but since you did not implement the feature, the test will fail. Then you implement the feature - the test goes green. Next you refactor the code, run the test again, and it goes red because you made an error. You fix that error, the test goes green. Next feature: using the new reserved word in a specifically wrong manner shall return a SyntaxError exception - write a new test for this - ... - I guess you get the idea.

As you see, it does not really matter for applying TDD whether there is an existing code base or not - TDD is about verifying changes to a code base, not about testing the whole code base in each and every way. Of course, if a code base was written without TDD testability in mind, it can become hard to write tests for certain changes. So if you really want to switch to TDD, it might be easier to do this early.
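A minimal red/green illustration of the cycle the answer describes, using plain assert instead of a test framework; the toy Token and lexer below are stand-ins so the sketch compiles on its own, not the question's real classes:

#include <cassert>
#include <list>
#include <string>

struct Token { std::string type, text; };

// Stub lexer: it does not know about the new reserved word yet,
// so the test below starts out "red".
std::list<Token> lexer(const std::string& src) {
    return { Token{"name", src.substr(0, src.find(' '))} };
}

void test_new_reserved_word_is_tokenized() {
    std::list<Token> tokens = lexer("let x");
    assert(!tokens.empty());
    assert(tokens.front().type == "keyword");  // fails until the feature is implemented
}

int main() {
    test_new_reserved_word_is_tokenized();
    return 0;
}

Implementing the keyword handling in the lexer (and nothing more) is what turns this test green; the same pattern works against the real std::list<Token*> interface from the question.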
_softwareengineering.250793 | When developing for the web, I often find myself wanting to pass a few variables from the server scripts to my javascript - data pulled from a database, and set differently on different pages running basically the same code. (Typically, this is something along the lines of example.com/slideshows/1 needing to know that that slideshow 1 has 8 images, and what their urls are, but that's just a random example; I'm looking for a general answer.)My normal approach is to include a short block of inline javascript that sets a few global variables or calls some initializer functions, which is straightforward and effective. It is not, however, Content Security Policy-friendly, since inline javascript is blocked by default, and not without reason. (I know the policy can be set to allow inline javascript if you have access to it, but even if you do that opens potential security issues.) I need a strategy that plays nicely with proper CSPs.I know that it could be done with an ajax call, but that's overkill for something that only needs to be set once, and adds an extra call that slows down the page load. In some cases, like the slideshow I mentioned, this could be handled by having the javascript examine other dynamically-constructed page elements to reconstruct the desired data, which is fine, but not a very general solution. What's the best way to handle this?(I almost posted this question on SO, but it seems kind of subjective; I can think of ways to do this, I just don't like any of them.) | What's the Content-Security-Policy-friendly way to hand data from the server to the browser on page load? | web development;javascript | null |
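One common CSP-compatible pattern for the kind of one-time handoff described in the question is to embed the values as inert data in the markup and read them from an external script; a sketch (the element id, data layout and initSlideshow function are made up for illustration):

<!-- emitted by the server-side template -->
<div id="slideshow" data-config='{"imageCount": 8, "urls": ["/img/1.jpg", "/img/2.jpg"]}'></div>
<script src="/js/slideshow.js"></script>

// /js/slideshow.js -- an external file, so no inline-script exemption is needed
var el = document.getElementById('slideshow');
var config = JSON.parse(el.dataset.config);
initSlideshow(config.urls, config.imageCount);  // hypothetical initializer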
_softwareengineering.231914 | http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/I've been reading the article and I haven't really grasped it yet. When is the private key given to the client?Say I have a JavaScript client - where would I store this private key (once assigned) - a cookie?I think that this maybe is done when authenticating a user, if a JavaScript client makes a post request and authenticates a user, the server validates the input and gives back user id ( a public key ) and then some generated string (stored in a table) = the private key?And then just follow the approach (quoted) from article:[CLIENT] Before making the REST API call, combine a bunch of unique data together (this is typically all the parameters and values you intend on sending, it is the data argument in the code snippets on AWSs site)[CLIENT] Hash (HMAC-SHA1 or SHA256 preferably) the blob of data data (from Step #1) with your private key assigned to you by the system.[CLIENT] Send the server the following data: Some user-identifiable information like an API Key, client ID, user ID or something else it can use to identify who you are. This is the public API key, never the private API key. This is a public value that anyone (even evil masterminds can know and you dont mind). It is just a way for the system to know WHO is sending the request, not if it should trust the sender or not (it will figure that out based on the HMAC). Send the HMAC (hash) you generated. Send all the data (parameters and values) you were planning on sending anyway. Probably unencrypted if they are harmless values, like mode=start&number=4&order=desc or other operating nonsense. If the values are private, youll need to encrypt them.(OPTIONAL) The only way to protect against replay attacks on your API is to include a timestamp of time kind along with the request so the server can decide if this is an old request, and deny it. The timestamp must be included into the HMAC generation (effectively stamping a created-on time on the hash) in addition to being checked within acceptable bounds on the server.[SERVER] Receive all the data from the client.[SERVER] (see OPTIONAL) Compare the current servers timestamp to the timestamp the client sent. Make sure the difference between the two timestamps it within an acceptable time limit (5-15mins maybe) to hinder replay attacks. NOTE: Be sure to compare the same timezones and watch out for issues that popup with daylight savings time change-overs. UPDATE: As correctly pointed out by a few folks, just use UTC time and forget about the DST issues.[SERVER] Using the user-identifying data sent along with the request (e.g. API Key) look the user up in the DB and load their private key.[SERVER] Re-combine the same data together that the client did in the same way the client did it. Then hash (generate HMAC) that data blob using the private key you looked up from the DB. (see OPTIONAL) If you are protecting against replay attacks, include the timestamp from the client in the HMAC re-calculation on the server. 
Since you already determined this timestamp was within acceptable bounds to be accepted, you have to re-apply it to the hash calculation to make sure it was the same timestamp sent from the client originally, and not a made-up timestamp from a man-in-the-middle attack.[SERVER] Run that mess of data through the HMAC hash, exactly like you did on the client.[SERVER] Compare the hash you just got on the server, with the hash the client sent you; if they match, then the client is considered legit, so process the command. Otherwise reject the command!On second note, I believe it would be more readable from the article itself...Am I getting this right? | When is the private key given to the client? | security;api;server;client | null |
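A compact sketch of the client-side steps 1-3 quoted above, using Python's standard library; the exact parameter layout and field names are illustrative, not taken from the article:

import hashlib
import hmac
import time

def sign_request(public_key, private_key, params):
    """Combine the request data, HMAC it with the private key, and return what gets sent."""
    params = dict(params, timestamp=int(time.time()))  # the optional replay-protection step
    blob = "&".join("%s=%s" % (k, params[k]) for k in sorted(params))
    signature = hmac.new(private_key.encode(), blob.encode(), hashlib.sha256).hexdigest()
    return {"api_key": public_key, "signature": signature, "params": params}

# The server repeats the same blob construction with the private key it looked up
# for that api_key and compares digests (hmac.compare_digest) before trusting the call.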
_codereview.113018 | The problem is to find all divisors of a given integer n.(It may be better to implement a true prime generator, rather than a fixed set, as discussed with examples here, but that was not my immediate concern. Also, I am aware that there are much faster algorithms to generate fixed number of prime numbersI am avoiding them until I fully understand the algorithm.)import functoolsimport itertoolsimport operatordef prime_generator(n): Sieve of Eratosthenes Create a candidate list within which non-primes will be marked as None. cand = [i for i in range(3, n + 1, 2)] end = int(n ** 0.5) // 2 # Loop over candidates (cand), marking out each multiple. for i in range(end): if cand[i]: cand[cand[i] + i::cand[i]] = [None] * ( (n // cand[i]) - (n // (2 * cand[i])) - 1) # Filter out non-primes and return the list. return [2] + [i for i in cand if i]primes_list = prime_generator(100000)def factorize(n): prime_multiples = [] for item in primes_list: if item > n: break else: while n > 1: if n % item == 0: n //= item prime_multiples.append(item) else: break return prime_multiplesdef calculate_divisors(n): prime_multiples_list = factorize(n) construct unique combinations A, B, B, C --> A, B, C, AB, AC, BB, BC, ABC, ABB, BBC unique_combinations = set() for i in range(1, len(prime_multiples_list)): unique_combinations.update( set(itertools.combinations(prime_multiples_list, i))) # multiply elements of each unique combination combination_product = list(functools.reduce(operator.mul, i) for i in unique_combinations) combination_product.sort() return combination_productprint(calculate_divisors(12500))>>> [2, 4, 5, 10, 20, 25, 50, 100, 125, 250, 500, 625, 1250, 2500, 3125, 6250]In the context of the above algorithm:Can I shorten factorize function? (For example, I couldnt find a way to turn into a list comprehension. I suspect there should be a shorthand via functoolsor itertools.)Is there a more Pythonic implementation of calculate_divisors function? | Finding all divisors of an integer | python;python 3.x | Lazy generatorsMost often in Python you do not want to build an actual list and return it.If I want to sum the divisors of a number, and there are many, I will waste a lot of space if I put them in a list first.Also, not creating a list is even shorter, you just give out the elements one by one from the function:def factorize(n): for item in primes_list: if item > n: break else: while n > 1: if n % item == 0: n //= item yield item else: breakReduce nestingYou got some serious nesting going on, each level of nesting is a level of complexity so getting it a bit down is positive. Luckily if we break we do not need an else: the code after will not be executed anyway:def factorize(n): for item in primes_list: if item > n: break while n > 1: if n % item != 0: break n //= item yield itemStill two nested loops, but the situation is becoming more manageable.Now let me re-factor the while loop away:def how_many_times_divides(n, div): >>> list(how_many_times_divides(40, 2)) [2, 2, 2] while n > 1: if n % div != 0: break n //= div yield divAnd now factorize is starting to look nice and small:def factorize(n): >>> list(factorize(480)) [2, 2, 2, 2, 2, 3, 5] for item in primes_list: if item > n: break yield from how_many_times_divides(n, item)DoctestsYou may have noted that I added some examples of usage in the doc-strings of this functions. 
They are actually automatically runnable with doctests and I highly recommend them when writing numerical code.Avoid unnecessary assignments & mutationIt is just simpler to return the expression:combination_product = list(functools.reduce(operator.mul, i) for i in unique_combinations)combination_product.sort()return combination_productBecomes:return sorted(functools.reduce(operator.mul, i) for i in unique_combinations)Note that I omitted list as it was not really needed. |
_unix.229129 | My file a contains the text bcd\\\\.

With bash, I read the file and print characters from the 4th to 8th position as:

tmp=$(cat a)
echo ${tmp:3:4}

It prints: \\\\

All happy. Now I use Python's array slicing to print characters from the 4th to 8th position as:

>>> f = open('a')
>>> v=f.read()
>>> v[3:7]

It prints: '\\\\\\\\'

Why do bash and Python behave differently when there are backslashes? | Python vs bash string slicing | python;quoting;string | It is a matter of how Python displays strings. Observe:

>>> f = open('a')
>>> v=f.read()
>>> v[3:7]
'\\\\\\\\'
>>> print v[3:7]
\\\\

When displaying v[3:7], the backslashes are escaped. When printing, print v[3:7], they are not escaped.

Other examples: The line in your file should end with a newline character. In that case, observe:

>>> v[-1]
'\n'
>>> print v[-1]

>>>

The newline character is displayed as a backslash-n. It prints as a newline. The results for tab are similar:

>>> s='a\tb'
>>> s
'a\tb'
>>> print s
a   b
_codereview.40970 | I am using a combination of knockoutJS and jQuery. I have a number of jQuery plugins which perform particular re-usable functions, such as a numeric spinbox. I have written a binding handler to allow me to bind an observable property to the spinbox, it looks like this:ko.bindingHandlers.spinbox = { init: function (element, valueAccessor, allBindings, viewModel, bindingContext) { var value = valueAccessor(); $(element).spinbox({ min: value.min === undefined ? -5 : ko.unwrap(value.min), max: value.max === undefined ? 5 : ko.unwrap(value.max), scale: value.scale === undefined ? 2 : ko.unwrap(value.scale), step: value.step === undefined ? 0.1 : ko.unwrap(value.step), bigStep: value.bigStep === undefined ? 0.5 : ko.unwrap(value.bigStep) }).change(function() { value.data(parseFloat($(this).val())); }); }, update: function (element, valueAccessor, allBindings, viewModel, bindingContext) { var value = valueAccessor(); var valueUnwrapped = ko.unwrap(value.data); $(element).val(valueUnwrapped.toFixed(1)); }};and requires the binding to be set up something like:data-bind=spinbox: {data: myProperty, min: -100, max: 100}As you can see, I have used the object notation to sent multiple parameters to the binding (min, max etc) - this is similar to the syntax that the knockout template binding uses. However I am not sure this is a sensible way to do this. Should I have instead used allBindings and passed these along as separate values like:data-bind=spinbox: myProperty, min: -100, max: 100I am interested in opinions on complex knockout custom bindings. | knockout binding handler for custom components | javascript;jquery;knockout.js | I like your approach much better than using separate values.The code reads really well, I only have 1 minor suggestion.If you had a helper function like this:function unwrapOrDefault( gift , defaultValue ){ return gift === undefined ? defaultValue : ko.unwrap( gift ), }then you could have min: unwrapOrDefault(value.min, -5), max: unwrapOrDefault(value.max, 5 ); scale: unwrapOrDefault(value.scale, 2); step: unwrapOrDefault(value.step, 0.1); bigStep: unwrapOrDefault(value.bigStep, 0.5); |