_cs.11114
Some sets of ordered binary trees can be represented as a CFG with rules of the form

A -> aBC
A -> b

where A, B, C are nonterminals and a and b are terminals representing internal nodes and leaf nodes respectively. The tree can be recovered from any word in the language by a preorder traversal. The set of all such grammars forms a class of languages which is a subset of the context-free languages but isomorphic to a superset of the regular languages (by unary-encoding the alphabet and adding a dummy terminal for the second nonterminal in every production). It is obviously closed under union, as you can simply concatenate the lists of productions to get a new tree grammar. My question is whether this class is closed under intersection. I have been unable to prove that it is either closed or not closed, and I figured I should see if anyone else can see how to do this.
Closure under intersection of context-free binary trees
context free;formal grammars;closure properties
If I read your question correctly, there are two types of terminal symbols: disjoint sets for internal labels and leaf labels. In that way the structure of the tree is uniquely determined by the string. Then your formalism is known as regular tree grammars. A production $A \to a BC$ labels a tree node by $a$ while attaching children $B$ and $C$. This formalism is closed under intersection. This can be proved precisely as for finite-state automata or right-linear grammars: simulate the two grammars in parallel, as a product construction. With productions $A \to_1 a BC$ and $P \to_2 a QR$ we join them to $(A,P) \to a (B,Q)(C,R)$, and $A \to_1 a$ and $P \to_2 a$ are joined to $(A,P) \to a$.
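To make the product construction above concrete, here is a minimal Java sketch; the Production record, the pair-naming scheme and the flat rule lists are illustrative assumptions, not part of the original answer.

import java.util.ArrayList;
import java.util.List;

public class TreeGrammarProduct {

    // head -> terminal left right (branch rule) when left/right are non-null,
    // head -> terminal (leaf rule) when they are null.
    record Production(String head, char terminal, String left, String right) {
        boolean isLeaf() { return left == null; }
    }

    static String pair(String x, String y) { return "(" + x + "," + y + ")"; }

    // Joins every compatible pair of rules: branch with branch, leaf with
    // leaf, matching terminals. (A full construction would also prune
    // nonterminal pairs unreachable from the joined start symbol.)
    static List<Production> intersect(List<Production> g1, List<Production> g2) {
        List<Production> joined = new ArrayList<>();
        for (Production p : g1) {
            for (Production q : g2) {
                if (p.terminal() != q.terminal() || p.isLeaf() != q.isLeaf()) {
                    continue;
                }
                if (p.isLeaf()) {
                    joined.add(new Production(pair(p.head(), q.head()),
                                              p.terminal(), null, null));
                } else {
                    joined.add(new Production(pair(p.head(), q.head()),
                                              p.terminal(),
                                              pair(p.left(), q.left()),
                                              pair(p.right(), q.right())));
                }
            }
        }
        return joined;
    }
}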
_softwareengineering.240980
Please consider the following implementation of the Decorator design pattern: WordBank objects store strings and return them to the client through the method getWords(). The decorator class, WordSorter, is a subclass of WordBank (as in the Decorator pattern). However, its implementation of getWords() sorts through the strings and removes some of them before returning the array to the client. For example, a WordSorter might delete all of the strings in the bank that start with the letter 'a', and only after that return the array to the client. Does this violate the Liskov Substitution Principle? Since some implementations of WordBank return all the strings while others first sort through the strings and only return some of them, I'm not sure if it's safe to say that a WordSorter can be used anywhere any other WordBank is used. Or am I understanding this principle wrong?
Does this Decorator implementation violate the Liskov Substitution Principle?
design;design patterns;object oriented;solid;liskov substitution
A WordSorter is-a WordBank, so code that works with a WordBank should also work when a WordSorter is used instead of a WordBank. On the other hand, a WordSorter is-not-a SomeWordBank. The compiler won't even let you use a WordSorter in place of a SomeWordBank, so the issue does not even begin to arise. There might be an LSP violation, but there doesn't appear to be one from the minimal specification you've given. Does WordBank guarantee, for example, that one can add arbitrary strings to it and retrieve them all in the same order later? Then sorting the words would indeed break that contract, and code that works for correct WordBanks can be broken by substituting a WordSorter. Whether there is such a guarantee is impossible to tell from the minimal UML diagram you've shown. If, for example, WordBank's contract says all words which are added are included in the result of getWords, then:

bank.add(w);
AssertContains(bank.getWords(), w);

should always work. That code would break if bank was a WordSorter, and it's WordSorter's fault for breaking the contract and hence violating the LSP. But if WordBank offers no such guarantee, then the above code is in the wrong (in the same way assert x*y == 42 will usually fail) and WordSorter is a perfectly well-behaved subclass.
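As a runnable illustration of the contract test sketched above, here is a minimal Java sketch; WordBank and WordSorter are simplified stand-ins for the classes in the question, and the "every added word is returned" contract is an assumption.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class WordBank {
    protected final List<String> words = new ArrayList<>();

    public void add(String w) { words.add(w); }

    // Assumed contract: returns every word that was added.
    public List<String> getWords() { return new ArrayList<>(words); }
}

class WordSorter extends WordBank {
    @Override
    public List<String> getWords() {
        // Drops words starting with 'a' -- breaks the assumed contract.
        List<String> result = new ArrayList<>();
        for (String w : words) {
            if (!w.startsWith("a")) {
                result.add(w);
            }
        }
        Collections.sort(result);
        return result;
    }
}

public class LspDemo {
    public static void main(String[] args) {
        WordBank bank = new WordSorter();   // the substitution point
        bank.add("apple");
        if (!bank.getWords().contains("apple")) {
            // Only a violation if WordBank actually promises retrieval.
            System.out.println("Contract broken: added word missing from getWords()");
        }
    }
}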
_unix.97028
I have just installed the system (Manjaro) and I have one major problem: it says that my network cable is unplugged, even though it is plugged in. I have a Realtek RTL-8110SC/8169SC Gigabit Ethernet card, and under Windows it works with no problem. If I do ifconfig in the terminal, I get:

enp7s1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        ether 14:da:e9:21:fd:bf txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 0 (Local Loopback)
        RX packets 384 bytes 30176 (29.4 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 384 bytes 30176 (29.4 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

The enp7s1 should be my card (I recognise the MAC), but usually isn't it ethX? Also, here is the output of the lsmod command:

Module                 Size   Used by
rfcomm                 51153  4
fuse                   74541  3
raid1                  27772  1
bnep                   11037  2
usblp                  12722  0
bluetooth              308366 10 bnep,rfcomm
rfkill                 15666  3 bluetooth
iTCO_wdt               5407   0
iTCO_vendor_support    1929   1 iTCO_wdt
mxm_wmi                1467   0
evdev                  9880   5
joydev                 9663   0
hid_generic            1153   0
md_mod                 105782 1 raid1
coretemp               6038   0
kvm_intel              128977 0
kvm                    376330 1 kvm_intel
snd_hda_codec_hdmi     29733  1
microcode              13172  0
psmouse                85132  0
serio_raw              5041   0
snd_hda_codec_realtek  35645  1
r8169                  57640  0
lpc_ich                12849  0
mii                    4027   1 r8169
i2c_i801               11237  0
snd_hda_intel          35309  5
snd_hda_codec          147506 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel
snd_hwdep              6332   1 snd_hda_codec
snd_pcm                77765  3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
snd_page_alloc         7202   2 snd_pcm,snd_hda_intel
snd_timer              18718  1 snd_pcm
snd                    58950  17 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_hda_codec,snd_hda_intel
acpi_cpufreq           10502  0
soundcore              5418   1 snd
mperf                  1203   1 acpi_cpufreq
i7core_edac            17669  0
edac_core              44137  2 i7core_edac
asus_atk0110           12000  0
wmi                    8283   1 mxm_wmi
processor              27755  1 acpi_cpufreq
button                 4669   0
nfs                    145074 0
lockd                  76805  1 nfs
sunrpc                 221055 2 nfs,lockd
fscache                44575  1 nfs
ext4                   456475 1
crc16                  1359   2 ext4,bluetooth
mbcache                5866   1 ext4
jbd2                   81946  1 ext4
hid_logitech_dj        10567  0
usbhid                 41466  0
hid                    88502  3 hid_generic,usbhid,hid_logitech_dj
sr_mod                 14898  0
sd_mod                 30730  4
cdrom                  34848  1 sr_mod
ata_generic            3370   0
pata_acpi              3387   0
crc32c_intel           14185  0
firewire_ohci          31837  0
firewire_core          51955  1 firewire_ohci
ahci                   22792  5
uhci_hcd               24531  0
ehci_pci               4120   0
libahci                21169  1 ahci
crc_itu_t              1363   1 firewire_core
xhci_hcd               89423  0
ehci_hcd               47672  1 ehci_pci
libata                 171016 4 ahci,pata_acpi,libahci,ata_generic
usbcore                177151 6 usblp,uhci_hcd,ehci_hcd,ehci_pci,usbhid,xhci_hcd
scsi_mod               127772 3 libata,sd_mod,sr_mod
usb_common             1648   1 usbcore
radeon                 807573 2
i2c_algo_bit           5391   1 radeon
drm_kms_helper         35438  1 radeon
ttm                    65324  1 radeon
drm                    231136 4 ttm,drm_kms_helper,radeon
i2c_core               23720  5 drm,i2c_i801,drm_kms_helper,i2c_algo_bit,radeon

I already tried to change the interface from down to up, but it remains down without displaying any errors. I have kernel 3.10.1-1-MANJARO (linux310). How can I solve this? P.S.: Excuse my English, of course.
Manjaro Linux - cable reported unplugged even though it is plugged in
networkcard
null
_softwareengineering.264716
I'm rather new to distributed computing and would like some assistance with the overall architecture of my application. My application has Jobs that can be added to a JobQueue. One or more JobRunner instances can then be set up to run the jobs on the queue and generate JobResults. The JobResults will then be sent to some destination like a report, log file, email notification, etc. However, I also want to be able to group a related set of Jobs into a JobSet, which in turn will be processed into a JobSetResult that contains all the corresponding JobResults. Each Job, however, will still be processed independently by a JobRunner. Once all the JobResults are collected, the final JobSetResult will be sent to some destination like a log or email notification. For example, a user may create a set of jobs to process a list of files. They would create a JobSet containing a number of FileProcessingJobs and submit it to be run. I obviously don't want the user to get an email notification for every file, but only the final JobSetResult when the entire JobSet is complete. I'm having trouble figuring out the best way to keep track of all this in a distributed environment. Is there some existing architectural design pattern which matches what I'm trying to do?
Distributing a set of Jobs across multiple computers
distributed computing
null
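No answer was accepted for the question above; as a hedged illustration of the JobSet bookkeeping it describes (often called a fan-out/fan-in or aggregator pattern), here is a minimal single-process Java sketch. The class names mirror the question; everything else is an assumption.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Tracks completion of a JobSet: each JobRunner reports its JobResult here,
// and only the runner that completes the last job triggers the notification.
public class JobSetTracker {

    record JobResult(String jobId, boolean success) {}

    private final Map<String, AtomicInteger> remaining = new ConcurrentHashMap<>();
    private final Map<String, List<JobResult>> results = new ConcurrentHashMap<>();

    public void registerJobSet(String jobSetId, int jobCount) {
        remaining.put(jobSetId, new AtomicInteger(jobCount));
        results.put(jobSetId, new CopyOnWriteArrayList<>());
    }

    // Called by a JobRunner when one Job of the set finishes.
    public void reportResult(String jobSetId, JobResult result) {
        results.get(jobSetId).add(result);
        if (remaining.get(jobSetId).decrementAndGet() == 0) {
            // Last job done: assemble the JobSetResult and notify once.
            System.out.println("JobSet " + jobSetId + " complete: "
                    + results.remove(jobSetId));
            remaining.remove(jobSetId);
        }
    }
}

In a truly distributed deployment the counter and result list would live in shared storage (a database or a message broker) rather than in process memory, but the decrement-to-zero trigger is the same.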
_codereview.18799
I wrote a little code to list a number's prime factors:

import java.util.Scanner;
import java.util.Vector;

public class Factorise2
{
    public static Vector<Integer> get_prime_factors(int number)
    {
        //Get the absolute value so that the algorithm works for negative numbers
        int absoluteNumber = Math.abs(number);
        Vector<Integer> primefactors = new Vector<Integer>();

        //Get the square root so that we can break earlier if it's prime
        for (int j = 2; j <= absoluteNumber;)
        {
            //Test for divisibility by j
            if (absoluteNumber % j == 0)
            {
                primefactors.add(j);
                absoluteNumber /= j;
                if (j > (int) Math.sqrt(absoluteNumber))
                {
                    break;
                }
            }
            else j++;
        }
        return primefactors;
    }

    public static void main(String[] args)
    {
        //Declare and initialise variables
        int number;
        int count = 1;
        Scanner scan = new Scanner(System.in);

        //Get a number to work with
        System.out.println("Enter integer to analyse:");
        number = scan.nextInt();

        //Get the prime factors of the number
        Vector<Integer> primefactors = get_prime_factors(number);

        //Group the factors together and display them on the screen
        System.out.print("Prime factors of " + number + " are ");
        primefactors.add(0);
        for (int a = 0; a < primefactors.size() - 1; a++)
        {
            if (primefactors.elementAt(a) == primefactors.elementAt(a + 1))
            {
                count++;
            }
            else
            {
                System.out.print(primefactors.elementAt(a) + "(" + count + ") ");
                count = 1;
            }
        }
    }
}

I decided that I would try to optimise the algorithm by skipping the divisibility tests against composite numbers.

import java.util.Scanner;
import java.util.Vector;

public class Factorise2
{
    public static Vector<Integer> get_prime_factors(int number)
    {
        //Get the absolute value so that the algorithm works for negative numbers
        int absoluteNumber = Math.abs(number);
        Vector<Integer> primefactors = new Vector<Integer>();
        Vector<Integer> newprimes = new Vector<Integer>();
        boolean newprime = true;
        int b;

        //Get the square root so that we can break earlier if it's prime
        for (int j = 2; j <= absoluteNumber;)
        {
            //Test for divisibility by j, and add to the list of prime factors if it's divisible.
            if (absoluteNumber % j == 0)
            {
                primefactors.add(j);
                absoluteNumber /= j;
                if (newprime && j > (int) Math.sqrt(absoluteNumber))
                {
                    break;
                }
                newprime = false;
            }
            else
            {
                for (int a = 0; a < newprimes.size();)
                {
                    //Change j to the next prime
                    b = newprimes.elementAt(a);
                    if (j % b == 0)
                    {
                        j++;
                        a = 0;
                    }
                    else
                    {
                        a++;
                    }
                }
                //Add j as a new known prime
                newprimes.add(j);
                newprime = true;
            }
        }
        return primefactors;
    }

    public static void main(String[] args)
    {
        //Declare and initialise variables
        int number;
        int count = 1;
        Scanner scan = new Scanner(System.in);

        //Get a number to work with
        System.out.println("Enter integer to analyse:");
        number = scan.nextInt();

        //Get the prime factors of the number
        Vector<Integer> primefactors = get_prime_factors(number);

        //Group the factors together and display them on the screen
        System.out.print("Prime factors of " + number + " are ");
        primefactors.add(0);
        for (int a = 0; a < primefactors.size() - 1; a++)
        {
            if (primefactors.elementAt(a) == primefactors.elementAt(a + 1))
            {
                count++;
            }
            else
            {
                System.out.print(primefactors.elementAt(a) + "(" + count + ") ");
                count = 1;
            }
        }
    }
}

I can't see anything that I have done wrong, but it is much slower. On 9876103, for example, it takes too long to wait for it to report back that its only prime factor is itself. Can anyone see why it is eating CPU cycles?
Listing a number's prime factors
java;performance;beginner;primes
"I decided that I would try to optimise the algorithm, by skipping testing for divisibility with composite numbers."

That is only worthwhile if you factorise a lot of numbers, and then you need to remember the list of known primes between the different factorisations. In your case, the change is a massive pessimisation, because now you check each potential divisor for primality, which in the best case takes one division, and in the worst case about 2*sqrt(j)/log(j) divisions. The worst case, which is common enough, takes much more time than a simple division by j to check whether j is a divisor. You have changed the algorithm from O(sqrt(n)) complexity for the simple trial division to about O(n^0.75) (ignoring logarithmic factors) in good cases, and about O(n^1.5) in the worst case (when n is a prime).
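For contrast, a minimal sketch of the plain trial division the answer favours (an illustration, not the original poster's code): at most O(sqrt(n)) divisions, and no primality testing of the candidates.

import java.util.ArrayList;
import java.util.List;

public class TrialDivision {
    public static List<Integer> primeFactors(int number) {
        List<Integer> factors = new ArrayList<>();
        int n = Math.abs(number);
        for (int j = 2; (long) j * j <= n; ++j) {
            while (n % j == 0) {   // strip every power of j
                factors.add(j);
                n /= j;
            }
        }
        if (n > 1) {
            factors.add(n);        // the remaining cofactor is prime
        }
        return factors;
    }

    public static void main(String[] args) {
        System.out.println(primeFactors(9876103)); // [9876103] if it is prime
    }
}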
_softwareengineering.131926
We have 7 developers in a team and need to double our development pace in a short period of time (around one month). I know there is a common-sense rule that if you hire more developers, you only lose productivity for the first few months. The project is an e-commerce web service and has around 270K lines of code. My idea for now is to divide the project into two more or less independent sub-projects and let the new team work on the smaller of the two sub-projects, while the current team works on the main project. Namely, the new team will work on checkout functionality, which will eventually become an independent web service in order to decrease coupling. This way, the new team works on a project with only 100K lines of code. My question is: will this approach help newbie developers to adapt easily to the new project? What are other ways to extend the development team rapidly without waiting two months until newbies start producing more software than bugs?

======= UPDATE

This enterprise failed completely, but not for the reasons you guys mentioned. First of all, I was misinformed about the size and capability of the new team; I should have evaluated them myself. Second, hiring turned out to be a hard job at that site. At the site of the main office hiring was much easier, but in the city of the second team there was apparently a shortage of developers with the required qualifications. As a result, instead of the projected 1.5 months the job stretched to about 4.5 months and was cancelled in the middle by top management. Another mistake I made (and was warned about it by Alex D) is that I was trying to sell refactoring to the top management. You never sell refactoring, only features. The startup turned out to be successful anyway. The refactoring that never happened turned into technical debt: the system became more monolithic and less maintainable, and developer productivity gradually decreased. I am not on the team now, but I do hope they complete it in the near future. Otherwise, I wouldn't give a penny for the project's survival.
Will giving new recruits a separate subproject from experienced developers help the newbies ramp up more quickly?
productivity;agile;team
null
_codereview.112305
I have this program that solves a \$(n^2 - 1)\$-puzzles for general \$n\$. I have three solvers:BidirectionalBFSPathFinderAStarPathFinderDialAStarPathFinder AStarPathFinder relies on java.util.PriorityQueue and DialAStarPathFinder uses so called Dial's heap which is a very natural choice in this setting: all priorities are non-negative integers and the set of all possible priorities is small (should be \$\{ 0, 1, 2, \dots, k \}\$, where \$k \approx 100\$ for \$n = 4\$). DialHeap.java:package net.coderodde.puzzle;import java.util.HashMap;import java.util.Map;import java.util.NoSuchElementException;/** * This class implements Dial's heap. * * @author Rodion rodde Efremov * @version 1.6 (Nov 16, 2015) * @param <E> the type of the actual elements being stored. */public class DialHeap<E> { private static final int INITIAL_CAPACITY = 64; private static final class DialHeapNode<E> { E element; int priority; DialHeapNode<E> prev; DialHeapNode<E> next; DialHeapNode(E element, int priority) { this.element = element; this.priority = priority; } } private final Map<E, DialHeapNode<E>> map = new HashMap<>(); private DialHeapNode<E>[] table = new DialHeapNode[INITIAL_CAPACITY]; private int size; private int minimumPriority = Integer.MAX_VALUE; public void add(E element, int priority) { checkPriority(priority); if (map.containsKey(element)) { return; } ensureCapacity(priority); DialHeapNode<E> newnode = new DialHeapNode(element, priority); newnode.next = table[priority]; if (table[priority] != null) { table[priority].prev = newnode; } if (minimumPriority > priority) { minimumPriority = priority; } table[priority] = newnode; map.put(element, newnode); ++size; } public void decreasePriority(E element, int priority) { checkPriority(priority); // Get the actual heap node storing 'element'. DialHeapNode<E> targetHeapNode = map.get(element); if (targetHeapNode == null) { // 'element' not in this heap. return; } // Read the current priority of the 'element'. int currentPriority = targetHeapNode.priority; if (priority >= currentPriority) { // No improvement possible. return; } unlink(targetHeapNode); targetHeapNode.prev = null; targetHeapNode.next = table[priority]; targetHeapNode.priority = priority; if (table[priority] != null) { table[priority].prev = targetHeapNode; } if (minimumPriority > priority) { minimumPriority = priority; } table[priority] = targetHeapNode; } public E extractMinimum() { if (size == 0) { throw new NoSuchElementException(Extracting from an empty heap.); } DialHeapNode<E> targetNode = table[minimumPriority]; table[minimumPriority] = targetNode.next; if (table[minimumPriority] != null) { table[minimumPriority].prev = null; } else { if (size == 1) { // Extracting the very last element. Reset to maximum value. minimumPriority = Integer.MAX_VALUE; } else { minimumPriority++; while (minimumPriority < table.length && table[minimumPriority] == null) { ++minimumPriority; } } } --size; E element = targetNode.element; map.remove(element); return element; } public int size() { return size; } private void ensureCapacity(int capacity) { if (table.length <= capacity) { int newCapacity = Integer.highestOneBit(capacity) << 1; DialHeapNode<E>[] newTable = new DialHeapNode[newCapacity]; System.arraycopy(table, 0, newTable, 0, table.length); System.out.println(table.length + -> + newCapacity); table = newTable; } } private void checkPriority(int priority) { if (priority < 0) { throw new IllegalArgumentException( Heap does not handle negative priorities. 
Received: + priority); } } private void unlink(DialHeapNode<E> node) { int priority = node.priority; if (node.next != null) { node.next.prev = node.prev; } if (node.prev != null) { node.prev.next = node.next; } else { table[priority] = node.next; } }}PuzzleNode.java:package net.coderodde.puzzle;import java.util.ArrayList;import java.util.Arrays;import java.util.List;import java.util.Random;/** * This class implements a puzzle node for {@code n^2 - 1} - puzzle. * * @author Rodion rodde Efremov * @version 1.6 (Nov 16, 2015) */public class PuzzleNode { private final byte[][] matrix; private byte emptyTileX; private byte emptyTileY; private int hashCode; public PuzzleNode(int n) { this.matrix = new byte[n][n]; byte entry = 1; for (int y = 0; y < n; ++y) { for (int x = 0; x < n; ++x) { matrix[y][x] = entry++; } } matrix[n - 1][n - 1] = 0; hashCode = Arrays.deepHashCode(matrix); emptyTileX = (byte)(n - 1); emptyTileY = (byte)(n - 1); } private PuzzleNode(PuzzleNode node) { int n = node.matrix.length; this.matrix = new byte[n][n]; for (int y = 0; y < n; ++y) { for (int x = 0; x < n; ++x) { this.matrix[y][x] = node.matrix[y][x]; } } this.hashCode = Arrays.deepHashCode(this.matrix); this.emptyTileX = node.emptyTileX; this.emptyTileY = node.emptyTileY; } @Override public boolean equals(Object o) { if (o == null) { return false; } if (!o.getClass().equals(this.getClass())) { return false; } PuzzleNode other = (PuzzleNode) o; if (this.hashCode != other.hashCode) { return false; } return Arrays.deepEquals(this.matrix, other.matrix); } @Override public int hashCode() { return hashCode; } public PuzzleNode up() { if (emptyTileY == 0) { return null; } PuzzleNode ret = new PuzzleNode(this); ret.matrix[emptyTileY][emptyTileX] = this.matrix[emptyTileY - 1] [emptyTileX]; ret.matrix[--ret.emptyTileY][emptyTileX] = 0; ret.hashCode = Arrays.deepHashCode(ret.matrix); return ret; } public PuzzleNode right() { if (emptyTileX == this.matrix.length - 1) { return null; } PuzzleNode ret = new PuzzleNode(this); ret.matrix[emptyTileY][emptyTileX] = this.matrix[emptyTileY] [emptyTileX + 1]; ret.matrix[emptyTileY][++ret.emptyTileX] = 0; ret.hashCode = Arrays.deepHashCode(ret.matrix); return ret; } public PuzzleNode down() { if (emptyTileY == matrix.length - 1) { return null; } PuzzleNode ret = new PuzzleNode(this); ret.matrix[emptyTileY][emptyTileX] = this.matrix[emptyTileY + 1] [emptyTileX]; ret.matrix[++ret.emptyTileY][emptyTileX] = 0; ret.hashCode = Arrays.deepHashCode(ret.matrix); return ret; } public PuzzleNode left() { if (emptyTileX == 0) { return null; } PuzzleNode ret = new PuzzleNode(this); ret.matrix[emptyTileY][emptyTileX] = this.matrix[emptyTileY] [emptyTileX - 1]; ret.matrix[emptyTileY][--ret.emptyTileX] = 0; ret.hashCode = Arrays.deepHashCode(ret.matrix); return ret; } public List<PuzzleNode> children() { List<PuzzleNode> childrenList = new ArrayList<>(4); insert(childrenList, up()); insert(childrenList, right()); insert(childrenList, down()); insert(childrenList, left()); return childrenList; } public List<PuzzleNode> parents() { List<PuzzleNode> parentList = new ArrayList<>(4); insert(parentList, up()); insert(parentList, right()); insert(parentList, down()); insert(parentList, left()); return parentList; } public int getDegree() { return matrix.length; } public byte get(int x, int y) { return matrix[y][x]; } public PuzzleNode randomSwap(Random rnd) { final PuzzleNode newNode = new PuzzleNode(this); int degree = this.matrix.length; int sourceX = rnd.nextInt(degree); int sourceY = rnd.nextInt(degree); for 
(;;) { if (matrix[sourceY][sourceX] == 0) { sourceX = rnd.nextInt(degree); sourceY = rnd.nextInt(degree); } else { break; } } for (;;) { int targetX = sourceX; int targetY = sourceY; switch (rnd.nextInt(4)) { case 0: --targetX; break; case 1: ++targetX; break; case 2: --targetY; break; case 3: ++targetY; break; } if (targetX < 0 || targetY < 0) { continue; } if (targetX >= degree || targetY >= degree) { continue; } if (matrix[targetY][targetX] == 0) { continue; } byte tmp = newNode.matrix[sourceY][sourceX]; newNode.matrix[sourceY][sourceX] = newNode.matrix[targetY][targetX]; newNode.matrix[targetY][targetX] = tmp; newNode.hashCode = Arrays.deepHashCode(newNode.matrix); return newNode; } } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append([) .append(emptyTileX) .append(, ) .append(emptyTileY) .append(]\n); int n = this.matrix.length; for (int y = 0; y < n; ++y) { for (int x = 0; x < n; ++x) { sb.append(String.format(%2d, matrix[y][x])).append(' '); } if (y < n - 1) { sb.append('\n'); } } return sb.toString(); } private static void insert(List<PuzzleNode> list, PuzzleNode node) { if (node != null) { list.add(node); } }}PathFinder.java:package net.coderodde.puzzle;import java.util.ArrayList;import java.util.Collections;import java.util.List;import java.util.Map;/** * This interface defines the API for path finding algorithms and a couple of * methods for constructing shortest paths. * * @author Rodion rodde Efremov * @version 1.6 (Nov 16, 2015) */public interface PathFinder { public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target); default List<PuzzleNode> tracebackPath(PuzzleNode target, Map<PuzzleNode, PuzzleNode> parentMap) { List<PuzzleNode> path = new ArrayList<>(); PuzzleNode current = target; while (current != null) { path.add(current); current = parentMap.get(current); } Collections.<PuzzleNode>reverse(path); return path; } default List<PuzzleNode> tracebackPath(PuzzleNode touchNode, Map<PuzzleNode, PuzzleNode> PARENTSA, Map<PuzzleNode, PuzzleNode> PARENTSB) { List<PuzzleNode> path = tracebackPath(touchNode, PARENTSA); PuzzleNode current = PARENTSB.get(touchNode); while (current != null) { path.add(current); current = PARENTSB.get(current); } return path; }}BidirectionalBFSPathFinder.java:package net.coderodde.puzzle;import java.util.ArrayDeque;import java.util.ArrayList;import java.util.Collections;import java.util.Deque;import java.util.HashMap;import java.util.List;import java.util.Map;import java.util.Objects;/** * This class implements bidirectional breadth-first search. * * @author Rodion rodde Efremov * @version 1.6 (Nov 16, 2015) */public class BidirectionalBFSPathFinder implements PathFinder { @Override public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target) { Objects.requireNonNull(source, The source node is null.); Objects.requireNonNull(target, The target node is null.); if (source.equals(target)) { // Bidirectional algorithms do not handle correctly the case where // the source and target nodes are the same. 
returnTarget(target); } Deque<PuzzleNode> QUEUE_A = new ArrayDeque<>(); Deque<PuzzleNode> QUEUE_B = new ArrayDeque<>(); Map<PuzzleNode, PuzzleNode> PARENTS_A = new HashMap<>(); Map<PuzzleNode, PuzzleNode> PARENTS_B = new HashMap<>(); Map<PuzzleNode, Integer> DISTANCE_A = new HashMap<>(); Map<PuzzleNode, Integer> DISTANCE_B = new HashMap<>(); QUEUE_A.addLast(source); QUEUE_B.addLast(target); PARENTS_A.put(source, null); PARENTS_B.put(target, null); DISTANCE_A.put(source, 0); DISTANCE_B.put(target, 0); int bestCost = Integer.MAX_VALUE; PuzzleNode touchNode = null; while (!QUEUE_A.isEmpty() && !QUEUE_B.isEmpty()) { if (touchNode != null) { if (bestCost < DISTANCE_A.get(QUEUE_A.getFirst()) + DISTANCE_B.get(QUEUE_B.getFirst())) { return tracebackPath(touchNode, PARENTS_A, PARENTS_B); } } if (QUEUE_A.size() < QUEUE_B.size()) { PuzzleNode current = QUEUE_A.removeFirst(); if (DISTANCE_B.containsKey(current)) { int cost = DISTANCE_A.get(current) + DISTANCE_B.get(current); if (bestCost > cost) { bestCost = cost; touchNode = current; } } for (PuzzleNode child : current.children()) { if (!DISTANCE_A.containsKey(child)) { DISTANCE_A.put(child, DISTANCE_A.get(current) + 1); PARENTS_A.put(child, current); QUEUE_A.addLast(child); } } } else { PuzzleNode current = QUEUE_B.removeFirst(); if (DISTANCE_A.containsKey(current)) { int cost = DISTANCE_A.get(current) + DISTANCE_B.get(current); if (bestCost > cost) { bestCost = cost; touchNode = current; } } for (PuzzleNode parent : current.parents()) { if (!DISTANCE_B.containsKey(parent)) { DISTANCE_B.put(parent, DISTANCE_B.get(current) + 1); PARENTS_B.put(parent, current); QUEUE_B.addLast(parent); } } } } return Collections.<PuzzleNode>emptyList(); } private List<PuzzleNode> returnTarget(PuzzleNode target) { List<PuzzleNode> path = new ArrayList<>(1); path.add(target); return path; }}AStarPathFinder.java:package net.coderodde.puzzle;import java.util.Collections;import java.util.HashMap;import java.util.HashSet;import java.util.List;import java.util.Map;import java.util.Objects;import java.util.PriorityQueue;import java.util.Queue;import java.util.Set;/** * This class implements A* pathfinding algorithm. 
* * @author Rodion rodde Efremov * @version 1.6 (Nov 16, 2015) */public class AStarPathFinder implements PathFinder { private int[] targetXArray; private int[] targetYArray; private void processTarget(PuzzleNode target) { int n = target.getDegree(); this.targetXArray = new int[n * n]; this.targetYArray = new int[n * n]; for (int y = 0; y < n; ++y) { for (int x = 0; x < n; ++x) { byte entry = target.get(x, y); targetXArray[entry] = x; targetYArray[entry] = y; } } } @Override public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target) { Objects.requireNonNull(source, The source node is null.); Objects.requireNonNull(target, The target node is null.); processTarget(target); Queue<NodeHeapEntry> OPEN = new PriorityQueue<>(); Set<PuzzleNode> CLOSED = new HashSet<>(); Map<PuzzleNode, PuzzleNode> PARENTS = new HashMap<>(); Map<PuzzleNode, Integer> DISTANCE = new HashMap<>(); OPEN.add(new NodeHeapEntry(source, 0)); DISTANCE.put(source, 0); PARENTS.put(source, null); while (!OPEN.isEmpty()) { PuzzleNode current = OPEN.remove().node; if (current.equals(target)) { return tracebackPath(target, PARENTS); } if (CLOSED.contains(current)) { continue; } CLOSED.add(current); for (PuzzleNode child : current.children()) { if (!CLOSED.contains(child)) { int g = DISTANCE.get(current) + 1; if (!DISTANCE.containsKey(child) || DISTANCE.get(child) > g) { PARENTS.put(child, current); DISTANCE.put(child, g); OPEN.add(new NodeHeapEntry(child, g + h(child))); } } } } return Collections.<PuzzleNode>emptyList(); } private int h(PuzzleNode node) { int n = node.getDegree(); int distance = 0; for (int y = 0; y < n; ++y) { for (int x = 0; x < n; ++x) { byte entry = node.get(x, y); if (entry != 0) { distance += Math.abs(x - targetXArray[entry]) + Math.abs(y - targetYArray[entry]); } } } return distance; } private static final class NodeHeapEntry implements Comparable<NodeHeapEntry> { PuzzleNode node; int priority; NodeHeapEntry(PuzzleNode node, int priority) { this.node = node; this.priority = priority; } @Override public int compareTo(NodeHeapEntry o) { return Integer.compare(priority, o.priority); } }}DialAStarPathFinder.java:package net.coderodde.puzzle;import java.util.Collections;import java.util.HashMap;import java.util.HashSet;import java.util.List;import java.util.Map;import java.util.Objects;import java.util.Set;/** * This class implements A* pathfinding algorithm using Dial's heap. 
* * @author Rodion rodde Efremov * @version 1.6 (Nov 16, 2015) */public class DialAStarPathFinder implements PathFinder { private int[] targetXArray; private int[] targetYArray; private void processTarget(PuzzleNode target) { int n = target.getDegree(); this.targetXArray = new int[n * n]; this.targetYArray = new int[n * n]; for (int y = 0; y < n; ++y) { for (int x = 0; x < n; ++x) { byte entry = target.get(x, y); targetXArray[entry] = x; targetYArray[entry] = y; } } } @Override public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target) { Objects.requireNonNull(source, The source node is null.); Objects.requireNonNull(target, The target node is null.); processTarget(target); DialHeap<PuzzleNode> OPEN = new DialHeap<>(); Set<PuzzleNode> CLOSED = new HashSet<>(); Map<PuzzleNode, PuzzleNode> PARENTS = new HashMap<>(); Map<PuzzleNode, Integer> DISTANCE = new HashMap<>(); OPEN.add(source, h(source)); DISTANCE.put(source, 0); PARENTS.put(source, null); while (OPEN.size() > 0) { PuzzleNode current = OPEN.extractMinimum(); if (current.equals(target)) { return tracebackPath(target, PARENTS); } if (CLOSED.contains(current)) { continue; } CLOSED.add(current); for (PuzzleNode child : current.children()) { if (!CLOSED.contains(child)) { int g = DISTANCE.get(current) + 1; if (!DISTANCE.containsKey(child)) { PARENTS.put(child, current); DISTANCE.put(child, g); OPEN.add(child, g + h(child)); } else if (DISTANCE.get(child) > g) { PARENTS.put(child, current); DISTANCE.put(child, g); OPEN.decreasePriority(child, g + h(child)); } } } } return Collections.<PuzzleNode>emptyList(); } private int h(PuzzleNode node) { int n = node.getDegree(); int distance = 0; for (int y = 0; y < n; ++y) { for (int x = 0; x < n; ++x) { byte entry = node.get(x, y); if (entry != 0) { distance += Math.abs(x - targetXArray[entry]) + Math.abs(y - targetYArray[entry]); } } } return distance; }}PerformanceDemo.java:import java.util.List;import java.util.Random;import net.coderodde.puzzle.AStarPathFinder;import net.coderodde.puzzle.BidirectionalBFSPathFinder;import net.coderodde.puzzle.DialAStarPathFinder;import net.coderodde.puzzle.PathFinder;import net.coderodde.puzzle.PuzzleNode;public class PerformanceDemo { public static void main(String[] args) { int SWAPS = 16; PuzzleNode target = new PuzzleNode(4); PuzzleNode source = target; long seed = System.nanoTime(); Random random = new Random(seed); for (int i = 0; i < SWAPS; ++i) { source = source.randomSwap(random); } System.out.println(Seed: + seed); profile(new BidirectionalBFSPathFinder(), source, target); profile(new AStarPathFinder(), source, target); profile(new DialAStarPathFinder(), source, target); } private static void profile(PathFinder finder, PuzzleNode source, PuzzleNode target) { long startTime = System.nanoTime(); List<PuzzleNode> path = finder.search(source, target); long endTime = System.nanoTime(); System.out.printf(%s in %.2f milliseconds. 
Path length: %d\n, finder.getClass().getSimpleName(), (endTime - startTime) / 1e6, path.size()); }}DialHeapTest.java:package net.coderodde.puzzle;import java.util.NoSuchElementException;import org.junit.Test;import static org.junit.Assert.*;import org.junit.Before;public class DialHeapTest { private DialHeap<Integer> heap; @Before public void before() { heap = new DialHeap<>(); } @Test public void test() { for (int i = 9, size = 0; i >= 0; --i, ++size) { assertEquals(size, heap.size()); heap.add(i, i); assertEquals(size + 1, heap.size()); } int i = 0; while (heap.size() > 0) { assertEquals(Integer.valueOf(i++), heap.extractMinimum()); } try { heap.extractMinimum(); fail(Heap should have thrown NoSuchElementException.); } catch (NoSuchElementException ex) { } // 9 -> 14 // 8 -> 13 // ... // 0 -> 5 for (i = 9; i >= 0; --i) { heap.add(i, i + 5); } for (i = 5; i < 10; ++i) { heap.decreasePriority(i, i - 5); } for (i = 0; i < 5; ++i) { assertEquals(Integer.valueOf(i + 5), heap.extractMinimum()); } for (i = 0; i < 5; ++i) { assertEquals(Integer.valueOf(i), heap.extractMinimum()); } // Test that the heap expands its internal array whenever exceeding its // size. for (i = 0; i < 1000; ++i) { heap.add(i, i); } heap.add(10_000, 32_000); while (heap.size() > 0) { heap.extractMinimum(); } heap.add(1, 1); heap.add(0, 0); heap.decreasePriority(0, 10); assertEquals(Integer.valueOf(0), heap.extractMinimum()); assertEquals(Integer.valueOf(1), heap.extractMinimum()); assertEquals(0, heap.size()); heap.add(1, 1); heap.add(0, 2); heap.add(0, 0); assertEquals(Integer.valueOf(1), heap.extractMinimum()); assertEquals(Integer.valueOf(0), heap.extractMinimum()); }}The best figure I got so far:Seed: 665685156966189BidirectionalBFSPathFinder in 6929.62 milliseconds. Path length: 33AStarPathFinder in 458.56 milliseconds. Path length: 33DialAStarPathFinder in 104.86 milliseconds. Path length: 33Any critique is much appreciated.
Comparing puzzle solvers in Java
java;algorithm;pathfinding;sliding tile puzzle;priority queue
General

This is a perfect example, as multiple implementations of an interface make it very robust, so I have only a little to address.

Normalization

Break your implementations of the search(.., ..) methods into smaller pieces. By doing so, try to preserve/improve locality, avoid rampant parameter declarations and avoid passing working references around. Especially your search() method of the BidirectionalBFSPathFinder is very long. Extracting methods and introducing inner working classes should help.

Multiple return statements, break, continue

If you experience difficulties extracting methods or other classes, then maybe multiple return statements within a method are the problem. Try to have only one return statement per method, at the end. This will make sure that your code can be refactored and extended easily. break and continue cause the same problems. Avoid them in general and look for other structures that preserve a well-defined control flow. Often break is used after checking a condition. That condition should be where it is supposed to be: the loop header/footer. If you have multiple break statements, the breaking conditions are spread all over the place, but they should all be in ONE place.

default methods in interfaces

Do not use default methods in interfaces as a substitute for introducing an abstract class. As they have public scope, these methods are accessible to any client and confuse them about the usage. Introduce an abstract class where you provide general functionality for every concrete implementation of PathFinder, and extend it (AbstractPathFinder). Then you get rid of the public scope and make it protected so subclasses can access these utility methods. Your interface will then look as clean as heaven:

public interface PathFinder {
    public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target);
}
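A minimal sketch of the AbstractPathFinder the reviewer suggests; the traceback helpers from the interface move into protected scope (the bodies are condensed from the code under review, and the class relies on PuzzleNode/PathFinder from the post):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hosts the shared traceback utilities so they are no longer part of the
// public PathFinder API; concrete finders extend this class instead.
public abstract class AbstractPathFinder implements PathFinder {

    protected List<PuzzleNode> tracebackPath(PuzzleNode target,
                                             Map<PuzzleNode, PuzzleNode> parentMap) {
        List<PuzzleNode> path = new ArrayList<>();
        for (PuzzleNode current = target;
                current != null;
                current = parentMap.get(current)) {
            path.add(current);
        }
        Collections.reverse(path);
        return path;
    }

    protected List<PuzzleNode> tracebackPath(PuzzleNode touchNode,
                                             Map<PuzzleNode, PuzzleNode> parentsA,
                                             Map<PuzzleNode, PuzzleNode> parentsB) {
        // Forward half via the single-map overload, then append the other half.
        List<PuzzleNode> path = tracebackPath(touchNode, parentsA);
        for (PuzzleNode current = parentsB.get(touchNode);
                current != null;
                current = parentsB.get(current)) {
            path.add(current);
        }
        return path;
    }
}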
_cs.61066
My question is about how to compute a database of initial moves for Connect Four, where each board constellation is classified as leading to a win, draw or loss. John Tromp has already done this for the standard 7x6 board (https://tromp.github.io/c4/c4.html) and for all board configurations with 8 tiles. Since I'm going to try to build an AI for a larger board, these are not usable at all. Moreover, I'd like to know more about the theory behind it: how is it possible to determine the outcome of an initial position with e.g. 8 placed tiles without searching the whole game tree (which shouldn't be feasible, I guess)?
Compute connect-four opening moves
artificial intelligence;heuristics
null
_unix.279266
I'm running CentOS 7 and I want Icemon for icecc. If there is a package I can install, great, but I don't know of one, so I'm attempting to build it from scratch, and that relies on cmake. My missing packages include:

Qt5Core
Qt5Gui
Qt5Widgets
Qt5
Icecream

And optionally:

Docbook2X

I've installed Qt5 Base and Qt5 GUI through yum, so I'm perplexed how that hasn't satisfied the requirements. I've built and installed Icecream (and configured it as a service), so I don't know why it can't find the requisite library and header files. Icecream also depends on Docbook2X for its man pages (meaning I don't have them installed, either), and because it doesn't exist as an available package, I've only just downloaded the source for that; I haven't tried building it yet. I am not an IT guy or a DevOp (yet), I'm just a dev trying to cut my teeth on Linux beyond vim and gcc (the only two things I've ever known on this platform). Are there CentOS 7 friendly packages for these things? Or how do I resolve this misconfiguration? This is the output:

[mred@matt icemon]$ cmake .
-- Could NOT find Docbook2X (missing: DOCBOOK_TO_MAN_EXECUTABLE)
--
-- The following OPTIONAL packages have been found:
 * Git
 * Doxygen, Doxygen documentation generator
   Needed for generating API documentation (make doc)
-- The following REQUIRED packages have been found:
 * Qt5Core
 * Qt5Gui (required version >= 5.6.0)
 * Qt5Widgets
 * Qt5 (required version >= 5.2.0)
 * Icecream, Package providing API for accessing icecc information. Provides 'icecc/comm.h' header, <http://en.opensuse.org/Icecream>
-- The following OPTIONAL packages have not been found:
 * Docbook2X, docbook2X converts DocBook documents into the traditional Unix man page format, <http://docbook2x.sourceforge.net/>
   Required for man-page generation
-- Configuring done
-- Generating done
-- Build files have been written to: /home/mred/d2/icemon

[mred@matt icemon]$ yum list installed | grep qt5
qt5-qtbase.x86_64                 5.6.0-7.el7  @epel
qt5-qtbase-common.noarch          5.6.0-7.el7  @epel
qt5-qtbase-devel.x86_64           5.6.0-7.el7  @epel
qt5-qtbase-gui.x86_64             5.6.0-7.el7  @epel
qt5-qtconfiguration.x86_64        0.3.0-2.el7  @epel
qt5-qtconfiguration-devel.x86_64  0.3.0-2.el7  @epel
qt5-qtdeclarative.x86_64          5.6.0-3.el7  @epel
qt5-qtx11extras.x86_64            5.6.0-3.el7  @epel
qt5-qtx11extras-devel.x86_64      5.6.0-3.el7  @epel
qt5-qtxmlpatterns.x86_64          5.6.0-4.el7  @epel
qt5-rpm-macros.noarch             5.6.0-7.el7  @epel
Trying to build icecc icemon, cmake can't find Qt5
centos;qt;cmake
null
_reverseengineering.3997
I loaded an MS-Windows executable in OllyDbg, but as soon as I hit Run from the Debug menu a message shows up:

Breakpoint set at address 76A010B1 is corrupt (contains hex code instead of int3 ...)

And the program doesn't run; rather, it breaks into OllyDbg. I am puzzled. What is really going on? I see an isdebugger call; fixing it also doesn't make the program run. I suppose it's using some advanced anti-debugging technique. Any suggestions? Here is the log from windbg:

(a9c.1fd4): Break instruction exception - code 80000003 (first chance)
*** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\Windows\SYSTEM32\ntdll.dll -
*** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\Windows\SYSTEM32\KERNEL32.DLL -
eax=7fe73000 ebx=00000000 ecx=00000000 edx=775edbeb esi=00000000 edi=00000000
eip=7757f9fc esp=0be4ff58 ebp=0be4ff84 iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!DbgBreakPoint:
7757f9fc cc int 3
Program won't run in OllyDbg
ollydbg
null
_softwareengineering.339032
We were taught that objects are self-contained things with data and behaviour, and therefore they should have methods that act on their attributes. But there are several situations where this coupling between entities and their behaviour is not observed:

Entity frameworks (like .NET with POCO and Enterprise Java with POJO entities) stipulate that CRUD/persistence/search operations should be done by external entity managers (and repositories) and not by the entities themselves, i.e. we have code entityManager.save(entity) and not entity.save().

Business rule engines are the most visible examples of separating business logic from the entities - business rules form completely different code (even in different languages) from entities, e.g. JBoss Drools or IBM ILOG or other rule engines. The business rule paradigm can be seen as an extrapolation of OO programming - in OO we consider data and methods, but in the semantic web we can consider ontologies and logical/business rules that act on the ABox or TBox of ontologies - two completely different languages and reasoning systems.

Distributed computing stipulates the use of serialization and deserialization of objects and communication of those objects across the network - we have XML, native binary formats or JSON for this. Usually only data are communicated over the network and business logic is kept in one layer, and there is no technology for moving business logic across networks and platforms; e.g. there is no automatic translation and communication of business logic when Java entities are translated into JSON objects and exposed through a REST API to an Angular 2 frontend. Business logic is usually kept on one side (e.g. in Java).

It is said that an OO domain model should reflect/model the real world. And sometimes the real-world objects do not have business logic inside them. E.g. there are concepts of calculators - tax calculators, salary calculators, etc. Therefore we write calculator.recalcTaxes(invoice) and not invoice.recalcTaxes(). The former approach allows us to apply different calculators in different cases, e.g. across legislations. We are not forced to build a complex inheritance hierarchy simply because there are different business methods; we simply apply different business services/calculators to the same data.

Considering those arguments for separating business logic from data/entities - how acceptable is it to make this separation the general rule of design for my project of business software? What are the arguments against separating business logic from data?
How acceptable is to keep business logic outside entities (in separate service classes)?
design patterns;object oriented design;business logic;business rules;distributed system
null
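No answer was accepted above; to make the entity-plus-service split the question describes concrete, here is a minimal hedged Java sketch. The Invoice fields and the TaxCalculator interface are illustrative assumptions, not a prescribed design.

import java.math.BigDecimal;

// Anemic entity: pure data, no business logic.
class Invoice {
    BigDecimal netAmount;
    String countryCode;

    Invoice(BigDecimal netAmount, String countryCode) {
        this.netAmount = netAmount;
        this.countryCode = countryCode;
    }
}

// Behaviour lives in a swappable service, so different legislations plug in
// different calculators without touching the entity.
interface TaxCalculator {
    BigDecimal taxFor(Invoice invoice);
}

class FlatRateTaxCalculator implements TaxCalculator {
    private final BigDecimal rate;

    FlatRateTaxCalculator(BigDecimal rate) { this.rate = rate; }

    @Override
    public BigDecimal taxFor(Invoice invoice) {
        return invoice.netAmount.multiply(rate);
    }
}

public class TaxDemo {
    public static void main(String[] args) {
        Invoice invoice = new Invoice(new BigDecimal("100.00"), "DE");
        TaxCalculator calculator = new FlatRateTaxCalculator(new BigDecimal("0.19"));
        System.out.println("Tax: " + calculator.taxFor(invoice)); // Tax: 19.0000
    }
}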
_scicomp.18772
I am looking at a heat initial value problem\begin{align}\frac{\partial u}{\partial t}-\nabla^2u = f\quad&\text{in}\quad \Omega\times(0,T)\\u = g \quad&\text{on}\quad \partial\Omega\times(0,T)\\u = u_0\quad&\text{in}\quad \Omega\times\{0\}\end{align}which I have successfully solved with the finite element method along with a time-stepping scheme, so that is not my question. My question, however, is this: it seems to work to just solve the boundary value problem\begin{align}-\nabla^2u = f\quad&\text{in}\quad \Omega\\u = g \quad&\text{on}\quad \partial\Omega\\\end{align}for various fixed times, e.g. $f(x,y,0), f(x,y,0.1),$ etc. This seems to give a good approximation to the solution of the initial value problem, and I don't understand why. It simply ignores the time derivative, it seems to me (of course, the function $f$ is the same for both problems, so the information about the time dependence is not lost, but it still confuses me). Is this simply the method of lines? Thanks
Solving Initial Value problem ignoring the time-derivative
finite element;time integration
null
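No answer was accepted above; one standard way to see why the per-time elliptic solves can work is the quasi-static argument - a hedged sketch, assuming $f$ varies slowly in time and the boundary data are treated the same way in both problems:

$$
\text{Let } v(t) \text{ solve } -\nabla^2 v(t) = f(t),\ v(t)=g \text{ on } \partial\Omega, \text{ at frozen time } t,
\text{ and set } w = u - v. \text{ Then } \frac{\partial w}{\partial t}-\nabla^2 w = -\frac{\partial v}{\partial t},\ w=0 \text{ on } \partial\Omega,
$$
$$
\text{and a standard energy estimate with the smallest Dirichlet eigenvalue } \lambda_1>0 \text{ of } -\nabla^2 \text{ gives }
\|w(t)\| \le e^{-\lambda_1 t}\,\|w(0)\| + \int_0^t e^{-\lambda_1 (t-s)}\Big\|\frac{\partial v}{\partial s}\Big\|\,ds .
$$

So if $f$ (and hence $v$) changes slowly compared to the decay rate $\lambda_1$, the transient dies out and $u(t)$ tracks the stationary solution $v(t)$; this is a quasi-static regime rather than the method of lines, which would instead discretize space and keep the time derivative.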
_cs.2118
There is a family of random graphs $G(n, p)$ with $n$ nodes (due to Gilbert). Each possible edge is independently inserted into $G(n, p)$ with probability $p$. Let $X_k$ be the number of cliques of size $k$ in $G(n, p)$.I know that $\mathbb{E}(X_k)=\tbinom{n}{k}\cdot p^{\tbinom{k}{2}}$, but how do I prove it?How to show that $\mathbb{E}(X_{\log_2n})\ge1$ for $n\to\infty$? And how to show that $\mathbb{E}(X_{c\cdot\log_2n}) \to 0$ for $n\to\infty$ and a fixed, arbitrary constant $c>1$?
Number of cliques in random graphs
graph theory;combinatorics;probability theory;random graphs
So basically there are three questions involved.I know that $E(X_k)=\tbinom{n}{k}\cdot p^{\tbinom{k}{2}}$, but how do I prove it?You use the linearity of expectation and some smart re-writing. First of all, note that$$ X_k = \sum_{T \subseteq V, \, |T|=k} \mathbb{1}[T \text{ is clique}].$$Now, when taking the expectation of $X_k$, one can simply draw the sum out (due to linearity) and obtain$$ \mathrm{E}(X_k) = \sum_{T \subseteq V, \, |T|=k} \mathrm{E}(\mathbb{1}[T \text{ is clique}]) = \sum_{T \subseteq V, \, |T|=k} \mathrm{Pr}[T \text{ is clique}]$$By drawing out the sum, we eliminated all possible dependencies between subsets of nodes. Hence, what is the probability that $T$ is a clique? Well, no matter what $T$ consists of, all edge probabilities are equal. Therefore, $\mathrm{Pr}[T \text{ is clique}] = p^{k \choose 2}$, since all edges in this subgraph must be present. And then, the inner term of the sum does not depend on $T$ anymore, leaving us with $\mathrm{E}(X_k) = p^{k \choose 2} \sum_{T \subseteq V, \, |T|=k} 1 = {n \choose k} \cdot p^{k \choose 2}$.How to show that for $n\rightarrow\infty$: $E(X_{\log_2n})\ge1$I am not entirely sure whether this is even correct. Applying a bound on the binomial coefficient, we obtain$$E(X_{\log n}) = {n \choose \log n} \cdot p^{\log n \choose 2} \leq \left(\frac{n e p^\frac{(\log n)}{4}}{\log n}\right)^{\log n} = \left(\frac{ne \cdot n^{(\log p) / 4}}{\log n}\right)^{\log n}.$$(Note that I roughly upper bounded $p^\frac{-1 + \log n}{2}$ by $p^\frac{\log n}{4}$.) However, now one could choose $p = 0.001$, and obtain that $\log_2 0.001 \approx -9.96$, which makes the whole term go to $0$ for large $n$. Are you maybe missing some assumptions on $p$?
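A hedged addendum to the answerer's caveat: the thresholds $\log_2 n$ and $c\log_2 n$ in the question are the classic ones for $p = 1/2$. Assuming that value, a sketch of the first limit:

$$
\mathrm{E}(X_{\log_2 n}) = \binom{n}{\log_2 n}\, 2^{-\binom{\log_2 n}{2}}
\;\ge\; \left(\frac{n}{\log_2 n}\right)^{\log_2 n} n^{-(\log_2 n - 1)/2}
= n^{\frac{1}{2}\log_2 n \,-\, \log_2\log_2 n \,+\, \frac{1}{2}} \;\longrightarrow\; \infty,
$$

using $\binom{n}{k} \ge (n/k)^k$ and $2^{-k(k-1)/2} = n^{-(\log_2 n - 1)/2}$ for $k = \log_2 n$; in particular $\mathrm{E}(X_{\log_2 n}) \ge 1$ for all large $n$.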
_unix.64504
I need to use the same disk, and so the same bootloader, on devices with different motherboards. Each motherboard has its own way of mapping devices, so sometimes the boot disk is mapped as hda, other times as hdc. I already tried to manage this boot with different device assignments: before starting, check how the board mapped the boot device (hda, hdc, etc.) and, based on that mapping, fix the boot parameter. I also tried to manage the boot from the DOM with different device names by replacing the device name with the LABEL option (GRUB bootloader), but it didn't work.

This works:

title Linux 2.4.37.9
root (hd0,0)
kernel /boot/vmlinuz-2.4.37.9 ro root=/dev/hda1 console=ttyS0,9600 console=tty0 apm=off

This didn't work (kernel unable to find root=LABEL=Flash-Root):

title Linux 2.4.37.9
root (hd0,0)
kernel /boot/vmlinuz-2.4.37.9 ro root=LABEL=Flash-Root console=ttyS0,9600 console=tty0 apm=off

Some guys suggested, as an alternative solution, managing initrd, so I am now trying to manage and fix the boot parameter through the linuxrc script. My first question is about documentation for nash, the scripting interpreter used by linuxrc. I haven't found documentation about how to use nash and, most importantly, how to use nash for linuxrc. Does someone know where I can find some documentation and samples? My second (and last) question is about how I can check from inside linuxrc which device (hda1, hdc1, etc.) is valid and, based on that, set the right value for the /proc/sys/kernel/real-root-dev variable. I thought of checking the disk using fdisk, but this program requires some libraries to be loaded inside the initrd, so I'm looking for a solution which needs less space.
How to customize initrd via linuxrc
linux kernel;initrd
null
_datascience.20081
I need some help understanding my partial dependence plots for binary features passed to a GradientBoostClassifier when comparing them to the feature importances. For some background, my goal here is to investigate user churn. My classifications are: 0 = not churned, 1 = churned.First of all when I plot the feature importances I get some interesting results which do not agree with the feature importances from my random forest. I put this down to algorithmic differences since many do align or are similar, but my specific concern is the binary feature 'registeredEmail', which is clearly very important in the random forest model, but not in the other (gradient boost, as shown). This is not my main concern, just adding in case relevant to my question.Secondly, and this is the part I am most confused by. Why does my feature importance chart show that 'playerInAlliance' (another binary feature) is significantly more important than 'registeredEmail', since when I check the partial dependencies there is a much steeper slope for 'registeredEmail'. I would interpret this as the churned prediction being highly influenced by a player not having a registered email, compared to being less likely to predict churn if the player does have their email registered. Is this correct?Please note these plots are in descending order of feature importance, from left to right. Also ignore the title.Comparing this to 'playerInAlliance' I can see that no matter if the player is in an alliance or not, this feature is influential in predicting a churned player. I don't understand how to interpret this, as if there's no real difference between players churning when in an alliance compared to having no alliance, then why is my GradientBoostClassifier considering it to be a highly important and highly dependent feature?Finally, if I plot both of these binary features against each other on one plot, we can see that dependence on a player not having a registered email is massively higher than if the player is in an alliance or not. So why is the registeredEmail feature less significant in terms of importance?TL;DR: How do I interpret Binary features in partial dependence plots?Any help appreciated.
Feature Importance and Partial Dependence plots seem to disagree?
machine learning;python;visualization;feature selection;scikit learn
null
_softwareengineering.147353
I have a very simple program in Ruby that opens a dictionary file, sorted-words.txt and prints out all the words in pig-latin. I tested its speed, and the first time I ran it, the program finished in about 1.6 seconds. The second time I ran the program, it took about 2.1 seconds. Between the two runs, there was maybe about a 10 second wait, me looking at the results and running the program again. I waited a minute, ran it again, and it ran in 1.8 seconds. Although the variation is slight, it is still there. Why is there this variation?
File Processing Times
optimization;file handling;speed;files
Variations like that are typically due to operating system and hardware variance. The operating system runs processes like your program on a time-shared basis, so even for a very small and simple program, the multitasking of background programs will slightly affect the results. On the hardware level you've got several things going on, but mostly it's file I/O. During file I/O the program loads into RAM, then communicates with the CPU, which is multitasking. It then has to send to, wait for and receive from the hard drive. Since the hard drive is a sequential device, you get variance in access and read times, because the head may not be in the same place every time. To test this, get a solid-state drive and a standard hard drive and test them in the same system. There should be less variance on the solid-state system. If not, it's some other issue.
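To observe this variance directly, here is a minimal Java sketch (an illustration, not the Ruby program from the question) that times the same file-reading task several times in a row:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TimingVariance {
    public static void main(String[] args) throws IOException {
        Path file = Path.of(args[0]);   // any reasonably large text file
        for (int run = 1; run <= 5; ++run) {
            long start = System.nanoTime();
            long lines = Files.readAllLines(file).size();   // read the whole file
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Later runs are often faster because the OS page cache keeps the
            // file in RAM after the first read; background load adds jitter.
            System.out.printf("run %d: %d lines in %d ms%n", run, lines, elapsedMs);
        }
    }
}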
_softwareengineering.164939
I have a lot of time invested in creating WordPress templates. I want to release combinations of these templates, along with different styles and fancy front pages, as premium WordPress themes. What I need to know is: what does "premium" mean? What do people expect of a GPL theme vs. a premium theme? Are there features that are considered required to be premium? Are there features that are in demand but considered exceptional, i.e. not part of every premium theme? How can I tell the difference? I have heard tongue-in-cheek answers that say that any theme that makes money is premium, but I mean to ask what gives an outstanding theme its quality. Why is it worth more? I am technically able to do many things, but as a lone developer with a family to feed, I can't afford to spend time on features that no one cares about. I have to try to isolate the things that people want. This is serious food and rent to me. How can I get this kind of info so I can make my project successful?
By what features and qualities are free and premium themes differentiated
web development;project management
Build a website (if you haven't already), put your themes up there and make them available for download. If you want to give some away, call those Free; the ones you want to sell, call Premium, and charge money for them. You will probably also want to offer an Exclusive license which will make that person the owner of that theme, which would mean they pay a lot more, and you remove it from your site. Premium implies that it is custom, professional (in the sense that it stands up to other commercial designs), and created with the intent that it be great. Sloppy work is generally not accepted as premium work, although the creator of sloppy work can call it premium; it would just not be as premium as other, more refined work. There are also a lot of sites out there to which you can upload your theme and which will pay you when someone pays a fee to download it. In that case you'd be listed in their premium section. I hope you like this premium answer.
_unix.240377
From within a bash script, how can I use sed to write to a file whose filename is stored in a bash variable? Output redirection won't do, because I want to edit one file in place, pulling lines that match a regex into a different file and deleting them from the first file. Something like:

sed -i '/^my regex here$/{;w file2;d;}' file1

...but where file1 and file2 are both actually variables that hold the filenames. (Such as file2=$(mktemp).) So what I really want is variable expansion for the filename, but no expansion for the rest of the sed command, and leaving the whole thing as a single argument passed to the sed command. For whatever reason, the following does not work:

sed -i '/my regex here/{;w '$file2';d;}' $file1

It says unmatched { and I can't see why. Any way I can do this?
How to use sed to write to a filename stored in a variable?
sed
Because you're using a single sed expression, everything that follows after the w (including the }) is interpreted as the wfile name: "The argument wfile shall terminate the editing command." You can see that if you add a second command }, e.g. like:

sed -e '/my regex here/{w '$file2';d;}' -e '}' $file1

then the lines matching my regex here will be saved in a file named whatever;d;} where whatever is whatever $file2 expands to. The correct syntax is via separate commands, either with several expressions:

sed -e '/my regex here/{w '$file2 -e 'd' -e '}' $file1

or one command per line:

sed '/my regex here/{w '$file2'
d}' $file1
_unix.181340
I've made a mistake: I removed myself from the 'sudo' group, because I forgot the -a option (I wanted to add the 'video' group):

sudo usermod -G video $USER

Now, when I try to call sudo I get this message:

orangepi@orangepi:~$ sudo apt-get update
[sudo] password for orangepi:
Sorry, user orangepi is not allowed to execute '/usr/bin/apt-get update' as root on orangepi.

I have read some solutions (like visudo, or sudo adduser <username> sudo) but the problem persists. I can become root by calling su. Some information:

orangepi@orangepi:~$ groups
orangepi video # I would like to add sudo, groups and others

visudo:

Defaults env_reset
Defaults mail_badpass
Defaults secure_path=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
%shutdown ALL=(root) NOPASSWD: /usr/lib/arm-linux-gnueabihf/xfce4/xfsm-shutdown-helper
# Allow sudo ifup without a password
orangepi ALL = (root) NOPASSWD: /sbin/ifup

How can I repair this?
How to add sudo group
sudo;group
Once you have become root via su, do:

adduser orangepi sudo

If you don't have adduser on your system, try usermod -a to append to the groups list:

usermod -a -G sudo orangepi

You might also want to investigate which groups your user is a member of by default, and add those back as well (such as the group named after your user, adm, etc.).

Alternatively, you can use su -c:

su -c "adduser orangepi sudo"
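A quick way to verify afterwards (note that new group membership only takes effect on the next login, so log out and back in, or start a fresh login shell, first):

id orangepi
groups orangepi

Both should now list sudo among the groups.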
_cs.76952
Let's say we have a string $s$. We define a unique substring in $s$ as a substring that occurs only once in $s$. How can we efficiently find such a substring with the smallest length in $s$?The most obvious solution is in $O(n^3)$ by checking every substring. What if we can preprocess the string?
Efficiently find smallest unique substring
algorithms;data structures;regular expressions;strings;pattern recognition
null
_unix.101352
I tried setting up git-shell on our CentOS (6.4) system (after getting this working correctly on Ubuntu 13.10; maybe a cross-platform hot mess?)

My /etc/passwd shows:

git:x:500:500:Web Archive VCS:/home/git:/usr/bin/git-shell

and my shell commands are in /home/git/git-shell-commands:

[root@domain git]# cd /home/git/git-shell-commands/ && tree
.
 addkey
 create
 drop
 help
 list

But ssh'ing in is still giving me:

Last login: Fri Nov 15 12:14:49 2013 from localhost
fatal: What do you think I am? A shell?
Connection to localhost closed.

I am working off of this resource:
http://planzero.org/blog/2012/10/24/hosting_an_admin-friendly_git_server_with_git-shell

There was some confusion that this was licensed GIT commands (push/pull etc) but this is a restricted shell with preset commands! Please anyone reading this make note ;)

Installer script if you want to see the steps:
https://github.com/ehime/bash-tools/blob/master/git-server-setup.sh

EDIT

I still have not been able to resolve this over the weekend. I HAVE added:

# add to shells
echo '/usr/bin/git-shell' >> /etc/shells

# Prevent full login for security reasons
chsh -s /usr/bin/git-shell git

And have double checked that git-shell actually exists in /usr/bin:

[root@domain bin]# ll /usr/bin | grep git
-rwxr-xr-x. 105 root root 1138056 Mar  4  2013 git
-rwxr-xr-x.   1 root root 1138056 Mar  4  2013 git-receive-pack
-rwxr-xr-x.   1 root root  457272 Mar  4  2013 git-shell
-rwxr-xr-x.   1 root root 1138056 Mar  4  2013 git-upload-archive
-rwxr-xr-x.   1 root root  467536 Mar  4  2013 git-upload-pack

This IS a root account that I am dealing with though, could that have something to do with it?
Granting access to a restricted git shell
shell;ssh;centos;git
As it turns out, this feature was introduced in git 1.7.4. git --version gave me 1.7.1 on a base CentOS 6.4 install, so that was the beginning of the issue =/ If you experience this problem, check your git version. Here is an updater script that I wrote to aid you in your troubles.

#!/bin/bash
# Git updater for RHEL systems
# CPR : Jd Daniel :: Ehime-ken
# MOD : 2013-11-18 @ 09:28:49
# REF : http://goo.gl/ditKWu
# VER : Version 1.1

# ROOT check
if [[ $EUID -ne 0 ]]; then
  echo "This script must be run as su" 1>&2 ; exit 1
fi

yum install -y perl-ExtUtils-MakeMaker gettext-devel expat-devel curl-devel zlib-devel openssl-devel

cd /usr/local/src
git clone git://git.kernel.org/pub/scm/git/git.git && cd git
make && make prefix=/usr install

git --version
exit 0

Thanks to everyone who took the time to look into this, I appreciate it greatly.
_codereview.9876
I am parsing a response from the server, and in case it contains the fields chunk_number (J_ID_CHUNK_NUMBER) and total_chunk_number (J_ID_CHUNK_TOTAL), I want to check whether I should request another chunk or not. Not a complicated task, yet I'm unsure which would be the better way to implement it.

Option 1 - using try/catch:

private int getNextChunkNumber(JSONObject jUpdate) {
    try {
        int current;
        int total;
        current = jUpdate.getInt(J_ID_CHUNK_NUMBER);
        total = jUpdate.getInt(J_ID_CHUNK_TOTAL);
        if (current < total) {
            return current + 1;
        }
    } catch (JSONException e) {
        // This is an empty exception because it merges with the default result (-1)
    }
    return -1;
}

Option 2 - using the has method:

private int getNextChunkNumber(JSONObject jUpdate) {
    int current;
    if (jUpdate.has(J_ID_CHUNK_NUMBER) && jUpdate.has(J_ID_CHUNK_TOTAL)) {
        current = jUpdate.getInt(J_ID_CHUNK_NUMBER);
        if (current < jUpdate.getInt(J_ID_CHUNK_TOTAL)) {
            return current + 1;
        }
    }
    return -1;
}
Check if a value exists or catch an exception
java;json
I would recommend that you use option 2, especially since you have the ability to avoid the exception (by using the has method).

Exceptions should be used in exceptional circumstances, e.g. you might expect getInt to throw an exception if J_ID_CHUNK_TOTAL exists but does not contain a value which can be parsed to an integer; in that case you would want to wrap the call in a catch.

I would also suggest that you do a null check against the jUpdate object before using it.
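If it helps, a minimal sketch of option 2 with that null check folded in (same names as in the question; note that getInt can still throw if a stored value is not an integer, which the has guard does not cover):

private int getNextChunkNumber(JSONObject jUpdate) {
    if (jUpdate == null
            || !jUpdate.has(J_ID_CHUNK_NUMBER)
            || !jUpdate.has(J_ID_CHUNK_TOTAL)) {
        return -1;
    }
    int current = jUpdate.getInt(J_ID_CHUNK_NUMBER);
    return current < jUpdate.getInt(J_ID_CHUNK_TOTAL) ? current + 1 : -1;
}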
_opensource.1443
What is the typical situation when some company is found to violate some copyleft free software license and refuses to fix it? Can the company then be forced to disclose any proprietary code that is linked with the copylefted code?
Must a company disclose proprietary source code if it violates a copyleft license?
copyleft;enforcement
null
_softwareengineering.323812
Consider a web page where a user changes his password after clicking on an emailed password reset link. The page will be implemented against an API using ajax, rather than as an ordinary form submission. That is, this question is about designing an API endpoint for changing a password.

Requirements

Any time a password is changed, the change must be recorded in the database in the password_changes table:

password_changes
 - user_id
 - date_changed

Specific Question

Thinking about this design from the perspective of API endpoints that create, update, or delete abstract resources, does it make more sense to think about the password change as:

We're updating the user resource, and hence should PUT a new password onto /user/{id} or perhaps /user/{id}/password? In this model, recording the change in password_changes is just an irrelevant implementation detail, and has no bearing on our concept of the user resource or the name of our endpoint.

We're creating a password_change. Thus we should POST to /user/{id}/password_change, or something similar. In this model, the change to the user resource is just a side effect of creating a password_change.

In this particular example, I think most people would choose option 1. And indeed it feels more natural to me too. But why? Is this just a matter of judgment, taste, or intuition? Or is there some codified principle of REST, or even of the basic concepts of resources and HTTP actions, that option 2 clearly violates?

More General Question

What if we provide a history of password changes as a new endpoint: /user/{id}/password_changes? That seems like a reasonable service. But if we stick with design 1, now we make password changes by making PUTs to /user but view them with GETs to /user/{id}/password_changes. And then do we just disallow other HTTP actions to password_changes (PUT, POST, DELETE)? That seems inconsistent... or maybe not?

Again, what are the rules (or even heuristics) governing these decisions? I can make judgment calls, but I'm looking for something more definitive.
What API design makes the most sense for changing a user's password?
rest;api;api design;http
"In this particular example, I think most people would choose option 1. And indeed it feels more natural to me too. But why?"

Not sure there's going to be a definitive answer to this, but my guess: because password, the entity, has a relatively straightforward representation as a document; HTTP's raison d'être is document transfer, after all. It's pretty well designed for that.

Additionally, there's a simplicity to password - the current state is a simple value type, and consequently there's no confusion about what modify should mean. Having only a single method available to mutate the state of the resource doesn't cause any problems, because there's only one way you would want to: by completely replacing one representation with another, which is exactly the semantic of the PUT method in HTTP.

"Generally speaking, a password change is not idempotent." Yes, but that's a moot point here; the choice of method in this case is driven more by what should happen if a request is not acknowledged. Do we get to retry? If the message travels through an intermediate, is it allowed to retry the message rather than immediately returning a failure to the client? (Admittedly, if we are throwing secrets around, we should be using an encrypted channel, which means that the intermediates can't see what's going on.) We aren't sending the request twice because we want the change applied twice; we are sending it twice to ensure that it is delivered at least once. To ensure that the side effect on the domain doesn't get repeated, use a conditional PUT, so that the server can distinguish between two copies of the same request and two different requests that happen to contain the same representation of the password.

"What if we provide a history of password changes as a new endpoint: /user/{id}/password_changes? That seems like a reasonable service." Yup. I would probably lean toward the spelling /user/{id}/password/history; it feels natural to make the audit log subordinate to the password resource.

"And then do we just disallow other HTTP actions to password_changes (PUT, POST, DELETE)? That seems inconsistent... or maybe not?" I don't see a consistency issue. Your domain model is the book of record for password changes; that history is not subject to override by the clients -- the domain reserves that privilege for itself. So the resource is read only. Ta-da?

All that said:

"We're creating a password_change. Thus we should POST to /user/{id}/password_change, or something similar. In this model, the change to the user resource is just a side effect of creating a password_change." This isn't wrong either; if you twist it around a little bit, you are basically describing AtomPub: you are publishing an entry (the PasswordChanged event) into a feed (the history), and the new state of the resource is derived from looking at the history. In other words, RESTful event sourcing.

It's a bit crooked; the usual idiom is that commands are sent to an aggregate in the domain model, and the state changes caused by the command are published to a stream, which is exposed via an atom feed. In this use case we turn it around, somewhat, by the fact that the human being, rather than the domain, is the book of record. Maybe.

Jim Webber describes REST application protocols as an analog to a 1950s-style office, where you get things done by passing documents into inboxes.
You deliver a ChangePasswordRequest to the server, and as a side effect the domain model updates itself, which has the cascading side effect of changing the state of other resources, like the audit log.That's perhaps overkill for a simple ReplacePassword protocol, but is useful to keep in mind when things get to be more complicated.
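To make the conditional PUT concrete, a hypothetical wire-level sketch (the path, ETag value, and JSON shape are all illustrative, nothing here is prescribed by REST):

PUT /user/42/password HTTP/1.1
Host: example.org
If-Match: "etag-from-the-last-GET"
Content-Type: application/json

{ "password": "the-new-secret" }

If the precondition fails, the server answers 412 Precondition Failed; that is how two copies of the same request get distinguished from two different requests that happen to carry the same representation.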
_codereview.158855
I wanted to create a type or class which can be used to handle angles. I started by defining a class, to make it possible to store degrees in Integral, radians in Floating, etc. But I realized that I would run into complex problems if I continued that way, for example handling the addition of two angles like a Radian Double and a Degree Int. To keep it simple, easy to use and develop, I decided to create one type only, and write the functions around it.

module Data.Angle where

import Data.Fixed -- mod'

data Angle a = Radians a deriving (Eq, Show)

-- Creating Angle from a value

-- | Create an Angle with the given degrees
angleFromDegrees :: (Integral d, Floating r) => d -> Angle r
angleFromDegrees x = Radians $ (realToFrac x) * pi/180

-- | Create an Angle with the given turns
angleFromTurns :: (Real t, Floating r) => t -> Angle r
angleFromTurns x = Radians $ (realToFrac x) * pi*2

-- | Create an Angle with the given turns
angleFromRadians :: (Floating r) => r -> Angle r
angleFromRadians x = Radians x

-- Get the value from Angle

-- | Get degrees from an Angle
angleValueDegrees :: (Floating r, RealFrac r, Integral d) => Angle r -> d
angleValueDegrees (Radians x) = round $ x / pi * 180.0

-- | Get radians from an Angle
angleValueRadians :: (Floating r) => Angle r -> r
angleValueRadians (Radians x) = x

-- | Get turns from Angle
angleValueTurns :: (Floating r) => Angle r -> r
angleValueTurns (Radians x) = x / (pi*2)

-- Basic functions

-- | Adding two angles
addAngle :: (Floating a) => Angle a -> Angle a -> Angle a
addAngle (Radians r1) (Radians r2) = Radians $ r1 + r2

-- | Normalize Angle: transforming back to (0-2pi)
normAngle :: (Floating a, Real a) => Angle a -> Angle a
normAngle (Radians r) = Radians $ mod' r (pi*2)

-- | Add two angles and normalize the result
addAngleNorm :: (Floating a, Real a) => Angle a -> Angle a -> Angle a
addAngleNorm a b = normAngle $ addAngle a b

-- | Distance between two angles
distAngle :: (Floating a, Real a) => Angle a -> Angle a -> Angle a
distAngle (Radians r1) (Radians r2) = Radians $ if (a' < b') then a' else b'
    where a' = mod' (r1-r2) (pi*2)
          b' = mod' (r2-r1) (pi*2)

-- | Flip Angle
flipAngle :: (Floating a) => Angle a -> Angle a
flipAngle (Radians r) = Radians (-r)

-- | Flip Angle and normalize the result
flipAngleNorm :: (Floating a, Real a) => Angle a -> Angle a
flipAngleNorm = normAngle . flipAngle

-- | Add degrees to Angle
addAngleDegrees :: (Floating r, Integral d) => Angle r -> d -> Angle r
addAngleDegrees ang deg = addAngle ang $ angleFromDegrees deg

-- | Add radians to Angle
addAngleRadians :: (Floating r) => Angle r -> r -> Angle r
addAngleRadians (Radians r1) r2 = Radians $ r1 + r2

-- | Add turns to Angle
addAngleTurns :: (Floating r, Real t) => Angle r -> t -> Angle r
addAngleTurns ang turn = addAngle ang $ angleFromTurns turn

-- Trigonometric functions

-- | Sine of the angle
sinAngle :: (Floating a) => Angle a -> a
sinAngle (Radians r) = sin r

-- | Cosine of the angle
cosAngle :: (Floating a) => Angle a -> a
cosAngle (Radians r) = cos r

-- | Tangent of the angle
tanAngle :: (Floating a) => Angle a -> a
tanAngle (Radians r) = tan r

-- | Cotangent of the angle
cotAngle :: (Floating a) => Angle a -> a
cotAngle (Radians r) = 1 / (tan r)

-- Inverse trigonometric functions

-- | Create angle from inverse sine
asinAngle :: (Floating a) => a -> Angle a
asinAngle x = Radians $ asin x

-- | Create angle from inverse cosine
acosAngle :: (Floating a) => a -> Angle a
acosAngle x = Radians $ acos x

-- | Create angle from inverse tangent
atanAngle :: (Floating a) => a -> Angle a
atanAngle x = Radians $ atan x

-- | Create angle from inverse cotangent
acotAngle :: (Floating a) => a -> Angle a
acotAngle x = Radians $ (pi/2) - (atan x)

The finished library is available here: Haskell Angle Library
Angle type in Haskell
haskell
I would use newtype instead of data to define Angle, like this:

newtype Angle a = Radians { angleValueRadians :: a }

This will automatically create the angleValueRadians function to get the angle value, and will be a bit more efficient.

Here is a good answer explaining the differences between data, newtype, and type: https://stackoverflow.com/a/21081227/1525759

You could use function composition to define many of your functions instead of pattern matching (pattern matching isn't bad, I just prefer this style):

sinAngle :: (Floating a) => Angle a -> a
sinAngle = sin . angleValueRadians

Also, if you made Angle an instance of Applicative, you could define addAngle like this:

addAngle :: (Floating a) => Angle a -> Angle a -> Angle a
addAngle r1 r2 = (+) <$> r1 <*> r2

-- or even this, because `f <$> x <*> y` is the same as `liftA2 f x y`:
addAngle = liftA2 (+)

Here's how you could do that:

instance Functor Angle where
    fmap f (Radians x) = Radians (f x)

instance Applicative Angle where
    pure = Radians
    Radians f <*> r = fmap f r
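As a quick GHCi check of the pieces above (this assumes the deriving (Eq, Show) from your original data declaration is kept on the newtype):

ghci> angleValueRadians (addAngle (Radians 1.0) (Radians 0.5))
1.5
ghci> fmap (*2) (Radians 0.25)
Radians {angleValueRadians = 0.5}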
_codereview.138994
This code was tested on IBM's 5-qubit processor as an implementation of Grover's search algorithm, but for an unknown number of solutions. This code is based on Arthur Pittenger's book, An Introduction to Quantum Computing Algorithms, and the mathematical structure was posted as a proof-verification question here.

h q[0];
h q[1];
x q[2];
s q[0];
cx q[1], q[2];
t q[3];
cx q[0], q[2];
s q[3];
x q[0];
z q[1];
s q[2];
tdg q[3];
cx q[0], q[2];
id q[3];
cx q[1], q[2];
h q[0];
h q[1];
h q[2];
x q[0];
x q[1];
x q[2];
cx q[0], q[2];
h q[0];
cx q[1], q[2];
s q[2];
h q[2];
tdg q[2];
h q[2];
measure q[0];
measure q[1];
measure q[2];
measure q[3];
measure q[4];

For successful implementations of Grover's algorithm there has to be a bias towards |0> outputs, while the output of |1> only occurs upon a successful query of the qubit oracle. In line with this, the output produces a probabilistic distribution across all possible output permutations of the 5-qubit system, with a |0> bias. Since I am using the amplitude amplification method, or reflection about the mean, the output will show a fluctuation in probabilities based on the solution to the problem presented. The fifth qubit Q4 is unused and is reserved for problem encoding.

While IBM allows anyone to publicly access their 5-qubit processor, you will need to be upgraded to Expert User status to copy/paste the QASM code provided. Otherwise, as a Standard User, you can build this same algorithm by following the drag-and-drop operations provided. I've already run this experiment on their live system using 4096 shots, as I do with all tests I run live.

I'd like feedback on whether I need to keep the Q4 qubit reserved strictly for problem encoding, or if a problem would be more optimally encoded distributed across all qubits.
Unstructured quantum search algorithm for unknown solutions
algorithm;search;qasm;quantum computing
null
_opensource.5049
I have created a piece of software that I have chosen to license under the GPL. Its configuration is technically also code. Are any run-time configuration files also licensed under GPL? Do I have to disclose them if someone asks for them?The configuration is in a separate repository from the main code, but is imported (linked, I suppose) in the code. There is confidential information in the config files.
Licensing of configuration files that are technically code
licensing
First of all, if you have created the software yourself you are not bound by the GPL in any way even if you have chosen to distribute it under the GPL. You own the software, you don't need a license to use it, and you can do anything you want with it. You do not need to disclose anything. The rest of my answer is only relevant for users bound by the GPL.Regarding the configuration files, if they are required to build the software, they are decidedly a part of the corresponding source mentioned in Section 1 of the GPLv3:The Corresponding Source for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activitiesOr complete source in Section 3 in the GPLv2:For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.As long as your software can be built and used without the configuration data itself, the configuration data doesn't belong to the complete/corresponding source. The configuration template files probably do though, but that's no issue I assume.
_cs.52305
I got this question:

Let $A \oplus B = (A\cap \bar{B})\cup(\bar{A}\cap B)$. Prove that $NP = coNP$ if and only if for all $A,B\in NP$ we also have $A \oplus B\in NP$.

But I don't know how to prove the direction that if for all $A,B\in NP$ we have $A \oplus B\in NP$, then $NP = coNP$. I have tried to build non-deterministic machines for $\bar{A}$ and $\bar B$, but I don't know how. What I tried: say I want to build the non-deterministic machine for $\bar{A}$. I know that if $w\in A \oplus B$ and $w\in B$, then for sure $w \in \bar A$. My problem is with the condition "if $w\notin A \oplus B$ and $w\notin B$ then $w \in \bar A$", because I don't know how to check that $w\notin A \oplus B$ and $w\notin B$, since I don't know that $NP = coNP$. So how can I build non-deterministic machines for $\bar{A}$ and $\bar B$ when I don't know how to decide whether a word is not in a language?
Proving that if A ⊕ B ∈ NP then NP = coNP
complexity theory;time complexity
Thanks to G. Bach and Yuval Filmus.

Suppose that for every $A, B \in NP$ we have $A \oplus B \in NP$. Let $E$ be some language $E\in NP$. Since $\Sigma^*\in NP$, we get $E \oplus \Sigma^* \in NP$. But $E \oplus \Sigma^* = \bar E$, so $\bar E \in NP$, and we get that $NP \subseteq coNP$.

Now let $D$ be some language $D\in coNP$ (so $\bar D \in NP$). Since $\Sigma^*\in NP$, we get $\bar D \oplus \Sigma^* \in NP$. But $\bar D \oplus \Sigma^* = D$, so $D \in NP$, and we get that $coNP \subseteq NP$.

Hence $NP = coNP$.
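For completeness, the identity used twice above follows directly from the definition of $\oplus$:

$$E \oplus \Sigma^* = (E \cap \overline{\Sigma^*}) \cup (\bar{E} \cap \Sigma^*) = (E \cap \emptyset) \cup \bar{E} = \bar{E}.$$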
_reverseengineering.14032
I have a lab from Computer Systems: A Programmer's Perspective. The task is to exploit a buffer overflow to call the bang function. In phase 2 I must set global_value = cookie to complete it. Using IDA Pro I tried to call the gets function with the address of global_value as its parameter, intending to then enter the input global_value=cookie, but the input is never read and it returns the address of global_value. Why is the input not read, and why does it return the address of global_value?

Details at: http://csapp.cs.cmu.edu/3e/buflab32.pdf

My string:

python -c 'print "A"*44 + "\xFA\x8C\x04\x08" + "\xA8\x8C\x04\x08" + "\x08\xD1\x04\x08"' | ./linux_server

\xFA\x8C\x04\x08 is the address of the gets function
\xA8\x8C\x04\x08 is the address of the bang function
\x08\xD1\x04\x08 is the address of global_value

// getbuf function to get input
.text:080491F4 ; Attributes: bp-based frame
.text:080491F4
.text:080491F4 public getbuf
.text:080491F4 getbuf proc near ; CODE XREF: test+Fp
.text:080491F4
.text:080491F4 var_28= byte ptr -28h
.text:080491F4
.text:080491F4 push ebp
.text:080491F5 mov ebp, esp
.text:080491F7 sub esp, 38h
.text:080491FA lea eax, [ebp+var_28]
.text:080491FD mov [esp], eax
.text:08049200 call Gets
.text:08049205 mov eax, 1
.text:0804920A leave
.text:0804920B retn
.text:0804920B getbuf endp
.text:0804920B
.text:0804920C
.text:0804920C ; =============== S U B R O U T I N E =======================================
.text:0804920C
.text:0804920C ; Attributes: bp-based frame
.text:0804920C
.text:0804920C public getbufn
.text:0804920C getbufn proc near

function bang:

int global_value = 0;
void bang(int val)
{
    if (global_value == cookie) {
        printf("Bang!: You set global_value to 0x%x\n", global_value);
        validate(2);
    } else
        printf("Misfire: global_value = 0x%x\n", global_value);
    exit(0);
}
Bufbomb phase2?
exploit;buffer overflow;stack
null
_softwareengineering.256602
In my opinion it just inverts the inversion and could lead new users (including myself) to make incorrect assumptions about using IoC containers. It can be used for the Service Locator (anti-)pattern of course, but that doesn't sound like a strong reason to me (it could be a separate class at the end of the day). There will probably be at least one call to get the root object to start the program, but it could be named and designed (signature and contract) accordingly, to avoid calling it for more than one reason. I am more interested in single-point-of-entry classic apps rather than server-side web apps.
Why do IoC containers provide public Resolve method(s)?
design;dependency injection;anti patterns;ioc;ioc containers
null
_unix.333282
I want to create multiple chroot environments, but I faced an issue with mounting devpts. Here are the steps to reproduce:

> mkdir -p {1,2}/{proc,sys,dev/pts}
> mount -v -t sysfs sysfs 1/sys/
> mount -v -t proc proc 1/proc/
> mount -v -o bind /dev 1/dev/
> mount -v -o bind /dev/pts 1/dev/pts
> mount -v -t sysfs sysfs 2/sys/
> mount -v -t proc proc 2/proc/
> mount -v -o bind /dev 2/dev/
> mount | grep /root/ | awk '{print $3}' | sort
/root/1/dev
/root/1/dev/pts
/root/1/proc
/root/1/sys
/root/2/dev
/root/2/proc
/root/2/sys

If I mount '/dev/pts' to the '2/dev/pts' directory, I get duplicate mount points:

> mount -v -o bind /dev/pts 2/dev/pts
mount: /dev/pts bound on /root/2/dev/pts.

As you can see, after these actions the system creates two mount points for '/root/1/dev/pts':

> mount | grep /root/ | awk '{print $3}' | sort
/root/1/dev
/root/1/dev/pts <---
/root/1/dev/pts <---
/root/1/proc
/root/1/sys
/root/2/dev
/root/2/dev/pts <---
/root/2/proc
/root/2/sys

If I unmount the first mount point, the second one will be unmounted too:

> umount -v /root/1/dev/pts
umount: /root/1/dev/pts unmounted

> mount | grep /root/ | awk '{print $3}' | sort
/root/1/dev
/root/1/dev/pts <---
/root/1/proc
/root/1/sys
/root/2/dev
/root/2/proc
/root/2/sys

Could you please explain to me why this happens?
simultaneous mounts of devpts
linux;mount;devices
null
_unix.382564
I installed Ubuntu 16.04 on my laptop a few months ago and suddenly I can't get past the splash screen. (I see the red dots gradually becoming white and vice versa, so something is happening, but I can't get past this screen.) I thus decided to boot into recovery mode to update and upgrade all my packages. But once I get there I have no internet access (with or without an Ethernet cable). This is what I get when trying to enable networking: [screenshot showing the error from the title: etc/resolv.conf: no such file or directory]

Could someone explain to me how to get internet access, so I can hopefully solve my other issue and be able to log in again on my machine?
grub recovery mode: etc/resolv.conf: no such file or directory
ubuntu;networking;networkmanager;resolv.conf
null
_webapps.23462
TrelloScrum is a Chrome extension for doing Scrum in Trello.It 'has access to all your Trello data'.Does it transmit anything to a server? If so, what information does this server retain?
What kind of data does TrelloScrum keep?
trello;google chrome
TrelloScrum stores nothing on any server! You can check it out in the code on GitHub.

Disclaimer: I'm a TrelloScrum co-author.
_unix.197699
Suppose I wanted to create a script that would prompt for a file to be searched for. Such that:

echo "What file are you searching for?"
read filename
filepath=/test/dir1
file=$filename

for I in file
do
echo $I
find $filepath $I
done

This seems to do a continual search for the filename specified. However, I get a "find: file: No such file or directory".
How to retrieve a file from a prompt?
shell script
First of all, for I in file should probably be for I in $file; the dollar sign is needed when retrieving a variable's value.

I would alter your script in the following ways:

echo "What file names are you searching for?"
read files
filepath=/test/dir1

for f in $files
do
    echo $f
    find $filepath/$f
done

I removed the assignment to $file from $filename, and assumed you meant for the user to enter several whitespace-separated file names. Then I passed find a single path, constructed by separating the directory name and the file name with a single slash. If you meant to look for a file named like this at an arbitrary depth, you may use find $filepath -name $f.
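One further tweak worth considering is quoting the directory variable so paths with spaces don't get split, while deliberately leaving $files unquoted so the shell still splits it into the individual names:

for f in $files
do
    echo "$f"
    find "$filepath" -name "$f"
done

Quoting "$f" in the -name test also stops the shell from glob-expanding any wildcard characters the user types before find sees them.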
_ai.2122
New to the topic, I think I have figured out how to implement a multilayer perceptron (MLP) ANN, and was wondering if there are any simple data sets to test an MLP ANN on, i.e. with a small number of inputs and outputs. I'm not getting the expected results from the UCI cancer data set, and I was hoping someone could save me some time and point me to some data they have used before. Maybe something starting slightly more complex than XOR?
Is there any simple testing data?
neural networks
null
_cs.318
I was reading an article that describes the switch between user-space and kernel-space that happens upon a system call. The article says: "An application expects the completion of the system call before resuming user-mode execution." Until now I assumed that some system calls are blocking, whereas others are non-blocking. With the comment above, I am now confused. Does this mean that all system calls are blocking, or did I misunderstand a concept?
Are all system calls blocking?
operating systems;os kernel
You seem to be overloading the term 'blocking'.

For any context switch you make to the kernel, you have to wait for it to switch back to user mode before your application can continue. This is not what is usually called 'blocking'. In the current kernel design, blocking calls are calls where the kernel returns only when the request is complete (or an error happens). These calls usually take longer and usually lead to your process being scheduled out. For instance, many I/O calls are blocking.

There are system calls which provide asynchronous I/O, and they are non-blocking. Note that a context switch still happens here; only now the application has to take care of the asynchronous nature of the call.

The paper seems to aim to do away with this context switching back and forth (exception-less system calls) and tries to make all the calls asynchronous.
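To make the distinction concrete, here is a minimal POSIX sketch (not from the article) of turning the same descriptor non-blocking, after which read() returns -1 with errno set to EAGAIN instead of putting the process to sleep:

#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* Flip O_NONBLOCK on an already-open descriptor; the context switch
   into the kernel still happens on every read(), but the call no
   longer blocks waiting for data. */
int make_nonblocking(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}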
_webmaster.96651
I am getting many Soft 404 errors for my site. What is the reason behind this error? How can I stop getting it?
What does Soft 404 mean in GWT?
google;google search console;soft 404
null
_unix.367210
I have a data file that looks like:

1
2 4 5 6 7 19
20
22
24 26 27 29 30 31 32 34 40 50 56 58
234 235 270 500
1234 1235 1236 1237
2300

I want to split the rows with more than 4 columns into smaller rows with a maximum of 4 columns each. Therefore the output should be:

1
2 4 5 6
7 19
20
22
24 26 27 29
30 31 32 34
40 50 56 58
234 235 270 500
1234 1235 1236 1237
2300

Any suggestions please? Please consider that my real data file is huge.
How to split rows in a huge data file based on the number of columns within them in Linux?
linux;text processing;awk
With awk:

awk '{ if (NF > 4) for (i = 5; i <= NF; i += 4) $i = "\n" $i } 1' file

With sed:

sed 's/ /\n/4;T;P;D' file

With perl:

perl -lpe '$c = 0; s/ /++$c % 4 ? " " : "\n"/goe' file

Output:

1
2 4 5 6
7 19
20
22
24 26 27 29
30 31 32 34
40 50 56 58
234 235 270 500
1234 1235 1236 1237
2300
_webmaster.99645
I've set up OpenCart 2 on my server. The products I sell have 21% BTW (VAT). When I add a product with a price, OpenCart assumes the price is without taxes, but the prices I type in are already including the taxes. How can I let OpenCart assume the prices I type in include the taxes? Or if that isn't possible by default, how would I change OpenCart's source code for that? I don't want to use an extension
How do I configure OpenCart for prices that already include tax?
ecommerce;opencart
null
_webapps.22617
I was looking for a way to do a Delayed Send of email in Gmail. Boomerang looks good but asks for information from my Google Account:

B4g.baydin.com is asking for some information from your Google Account [email protected]
Email address: [email protected]
Gmail

How can I tell if this is safe?
Is it safe to allow the Gmail Boomerang plugin to access my Google Account?
gmail
null
_unix.134117
I've been having some instability issues with my LDAP server. I have ~2000 machines connecting to it. Using netstat -pant | grep slapd, I can typically see 1500+ connections from the clients to the server at any given moment. But every so often, the connection count drops to zero. At that point the clients start having problems with jobs that rely on LDAP. I have to restart slapd on the LDAP server to get it to start accepting connections again. Sometime I have to restart the ldap daemon on the clients too.I've been told by a vendor that I need to increase the hard and soft nofile limits for the LDAP user so that it can accept more connections. Current settings:ldap@myldapserver:~> ulimit -Hn8192ldap@myldapserver:~> ulimit -Sn4500The vendor suggests 65000 hard and 16000 soft. ldap soft nofile 16000ldap hard nofile 65000This seems like a pretty dramatic increase and I can't help but wonder if this will have any adverse effects on the server. The LDAP server is a single purpose server. Should I be worried?
What are the effects of increasing hard and soft limits for ldap user
limit;ulimit;openldap
null
_cs.22668
I need to use discrete differential evolutionary algorithm for assigning discrete values from set size $L$ to vectors of size $D$ where $L$ could be smaller, equal or larger than $D$. Elements of vector $X$ could take the same values of other elements. My question is if we have a population of size $NP$ with each vector $X$ in the population of size $D$. How do we actually apply the mutation operand:$$V_{j,i}^{G+1} = X_{j, r_1}^{G} + F\cdot (X_{j, r_2}^{G}-X_{j, r_3}^{G})$$where $i$, $r_1$, $r_2$, $r_3$ are references to vectors in $NP$ and none is equal to the other, $J$ is an index in vector $X$, and $F$ is a random number between $0$ and $1.2$.Suppose $X_{r_1}^{G}$ is equal to $\{4, 1, 3, 2, 2, 0\}$ and $X_{r_2}^{G}$ is equal to $\{2, 2, 3, 0, 4, 2\}$ and $X_{r_3}^{G}$ is equal to $\{1, 2, 3, 3, 0, 1\}$Could anyone explain in detail the steps (through example if possible) on how to get the mutant vector $V_{j,i}^{G+1}$
Mutation and crossover operations in discrete differential evolutionary operations?
algorithms;optimization;heuristics;evolutionary computing
null
_softwareengineering.266249
According to Wikipedia, a single version of the truth is described as "a technical concept describing the data warehousing ideal of having either a single centralised database, or at least a distributed synchronised database, which stores all of an organisation's data in a consistent and non-redundant form".

A single source of the truth is described as something that "refers to a data storage principle to always source a particular piece of information from one place".

What is the difference between a single centralized database and sourcing information from one place? What does it mean to store data in a consistent and non-redundant form? How does this relate to scenarios where data is stored in multiple sources which are then coalesced? In contrast, how do you describe the situation when you are attempting to have consistent definitions, e.g. the definition of a customer in one source is the same as in another source?
What is the difference between a single version of the truth and a single source of the truth when dealing with data?
data
null
_unix.226370
I set up a ssh server on Android using SSH Server by Ice Code App.Settings of ssh server: http://i.stack.imgur.com/ZlbzJ.jpghttp://i.stack.imgur.com/b1X4i.jpg http://i.stack.imgur.com/CgT4d.jpg Settings of ssh user t: http://i.stack.imgur.com/DHxMT.jpg http://i.stack.imgur.com/efqVA.jpgAfter upgrading from Android 4.3 to 4.4.2, sftp from Nautilus or FileZilla in Ubuntu to Android can't read or write the content of /sdcard any more. But ssh and scp from terminal in Ubuntu can still read, write and transfer files in /sdcard. I connect using the same ssh user t in sftp from Nautilus, and ssh and scp from terminal, although why is the ssh prompt u0_a116@C6730 rather than t@c6730?If they use the same ssh user, why is there the difference in read permission?In ssh to Android, I see the read, write and excecution permissions for others users are not allowed:u0_a116@C6730:/ $ ls -l...lrwxrwxrwx root root 1970-01-11 17:07 sdcard -> /storage/sdcard0...u0_a116@C6730:/sdcard $ ls -l drwxrwx--- root sdcard_r 1980-01-01 00:00 Alarmsdrwxrwx--x root sdcard_r 2015-07-22 08:05 Androiddrwxrwx--- root sdcard_r 2015-08-18 22:41 Cardboarddrwxrwx--- root sdcard_r 2015-08-20 19:40 DCIMdrwxrwx--- root sdcard_r 2015-08-29 21:22 Download...I guess my ssh user t belongs to the other group. but don't know how to confirm that. I couldn't add the permissions:u0_a116@C6730:/sdcard $ chmod -R a+r Download/ Bad mode What does the Bad mode error mean?What are some solutions?Can I change file permissions on Android directly? Do I need root? But I heard that rooting my device Kyocera Hydro Icon with Android 4.4.2 isn't possible due to locked bootloader? Can I downgrade Android from 4.4.2 to 4.3?Can I upgrade my ssh user account to be one level above others, so as to have write and read permissions? Do I need root? Will that solve the problem, given the difference in read permissions between sftp from Nautilus and ssh and scp from terminal under the same ssh user?can we make sftp in Nautilus work in the same way as ssh and scp in terminal to read and write the content of /sdcard?I found a discussion at http://forum.xda-developers.com/google-nexus-5/general/sdcard-problems-upgrading-android-t2938749, where the suggestions seem to require root, right?Thanks.
Can't show the content of /sdcard of my Android phone from sftp in Ubuntu's Nautilus or FileZilla
ubuntu;ssh;android;sftp;filezilla
null
_unix.11713
How do the commands top and ps calculate CPU utilization using the /proc/[$pid]/stat file? Also how do they obtain memory utilization information about the process?
Workings of top and ps commands
linux;command line
null
_unix.259974
I always use if for this type of stuff, and I haven't really thought about it, but let's say something simple like this:

case $1 in
test)
echo test
;;
test2)
echo test
;;
test3)
echo test3
esac

vs.

if [[ $1 == test ]]
then
echo test
elif [[ $1 == test2 ]]
then
echo test2
elif [[ $1 == test3 ]]
then
echo test3
fi

What are the bigger differences in this case? Is either of them objectively better for a given task?
What is the bigger difference in this case when using case or if?
bash;case
null
_softwareengineering.355790
I'm creating a diagramming application and am trying to find a good design for dealing with selection of objects and operations on the selection.

First some context ([diagram omitted]):

- The individually selectable objects in the diagram are the green block, the blue triangles and the black line. There could be multiple of each.
- Blue triangles are considered part of the block and are always on the edge of a block.
- The user must be able to select any combination of the above objects.
- There is an object in the application for each of the visible objects.
- These objects know how to draw themselves on the canvas.
- The block maintains the information about which spots on its edge are taken by which triangles.

Desired behaviour:

- The appearance of objects should be different based on whether they are part of the selection.
- The application should allow the user to select all objects in a rectangular area.
- For dragging a selection:
  - if blocks (and optionally some lines and triangles) are selected, dragging should move all selected objects together as one group
  - if only triangles of the same block are selected, then dragging should rotate them around their block along the edge

Design constraints/preferences:

- Keep knowledge of what dragging does to a group of objects in one place and out of the objects themselves.
- Do not put detailed knowledge of all diagram objects into the object that manages the dragging operation.

What would be a good design for implementing this behaviour in the application?
Good OO design for selected object manipulation in a diagramming application
design;object oriented design;gui
null
_unix.217840
A service on a Linux server is only able to do full backups, where each backup is a .tar archive (no compression). Many contents of the archive do not change from day to day. Each .tar file is about 3 GB (slowly increasing from day to day).

I want to transfer the backups to another server, which archives them. The transfer is done through the internet. A requirement is that the backups are not altered (the result is again a list of .tar files whose md5 sums are still identical to the original files on the server).

I'm currently using rsync to transfer the files, which works great, but all files are transferred at their full size. As far as I know, rsync does some kind of deduplication on transfers, but only on a per-file level (right?). Is there any way to transfer a few similar files through an SSH connection without retransmitting identical chunks of the files (so some kind of deduplication), that:

- does not require write access on the server (no unpacking of the tar files)
- is tolerant of connection losses (does not leave temp files on aborts and detects incorrectly transmitted files)
- is able to resume the transfer after connection losses (does not retransmit all files if the connection aborts)
- does not require any additional tools on the server (besides the standard unix toolchain including rsync)
- still uses a client-initiated SSH connection for the transfer
transfer many similar files over ssh
ssh;file transfer;deduplication
One thing you might do is (on the receiving side) copy the last backup file to the new name before starting rsync. Then it will transfer only the diffs between what you have and what you should have.

If you do this and you use rsync -u (update only, based on timestamp), be careful to ensure that your copy is older than the new source file.
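A sketch of that idea with hypothetical file names (yesterday's archive seeds today's, so rsync's delta algorithm only sends the changed blocks; --partial keeps an interrupted transfer around so a rerun can resume from it):

# on the archiving host
cp -p archive/backup-2015-01-01.tar archive/backup-2015-01-02.tar
rsync -av --partial server:/backups/backup-2015-01-02.tar archive/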
_cs.80468
I have 4 XOR gates, 1 magnitude comparator, and 1 full subtractor (it takes x, y and puts out x-y). I get two 2-bit numbers X, Y (X = x1,x0; Y = y1,y0) and need to output |X-Y|. Please help, thanks =)
Boolean algebra hard question help pls!
boolean algebra
null
_unix.12725
lc is a tool to count lines of code in C files. The makefile of lc is given below:

SHELL=/bin/sh
CC=cc

# Objects we link together.
OBJ=lc.o get.o

all: $(OBJ)
	$(CC) -o lc $(OBJ)

lc.o: lc.c lc.h
get.o: get.c lc.h

I have the source on my desktop. So what should I enter on the shell command line? I have tried /home/desktop/lc/lc.c. It's not running.
How to build the lc tool in Linux?
compiling
null
_unix.111630
I have got RHEL 6 running on MBR (Master Boot Record); management is done using the fdisk and parted commands. I would like to know the steps or procedure to convert the existing system to GPT (GUID Partition Table), which can be managed using gdisk and gparted, for large disk support.

I also want to make sure whether the system has a normal BIOS or an EFI BIOS. Is there a command for that? And if the system has a normal BIOS, how do I implement GPT?
MBR to GPT Conversion of existing RHEL 6 system
linux;ubuntu;rhel
null
_unix.322245
I'm using an application on an embedded Linux platform. I copied this application to a NAND device with a JFFS2 filesystem and ran it from there, and system performance is significantly degraded. I think there should be no difference between the two scenarios beyond startup time, because system memory is not full (around 50% used) and the application is not large compared to it (4 MB application, 128 MB RAM). So my question is: why does this performance degradation happen? What are the possible problems?
Is there a difference between running an application from NAND and from a ramdisk?
filesystems;embedded;virtual memory;memory management
null
_datascience.15271
A classification system I built is going to go into production soon (it'll be part of a larger dashboard), and I'm looking for ways to better visualize and convey to business folks the results of a classification.

Basically, given old data on which the model was trained, I predict the classes of new data, with the goal being to show whether the class distributions for the new data are statistically indistinguishable from the class distributions observed in the old data. So, if there are three classes A, B, and C in the old data, with proportions of 50%, 30%, and 20%, respectively, I compare the classification distribution for the new data with those original, observed proportions.

Outside of a confusion matrix (which I think is probably inappropriate for most dashboard users), how else can I effectively present these results? I was thinking of a bar chart like this: [bar chart mockup omitted]
Visualizing results of a classification problem, excluding confusion matrices?
classification;visualization
Here are a couple of options [example charts omitted]. I prefer the stacked column chart, which could also be annotated with the actual proportions if needed.
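For what it's worth, a minimal matplotlib sketch of such a stacked column chart (the proportions here are made up for illustration):

import matplotlib.pyplot as plt

old = {"A": 0.50, "B": 0.30, "C": 0.20}   # observed in the training data
new = {"A": 0.55, "B": 0.28, "C": 0.17}   # predicted on the new data

bottoms = [0.0, 0.0]
for cls in ["A", "B", "C"]:
    heights = [old[cls], new[cls]]
    plt.bar(["old", "new"], heights, bottom=bottoms, label=cls)
    bottoms = [b + h for b, h in zip(bottoms, heights)]

plt.ylabel("share of each class")
plt.legend()
plt.show()

Normalizing each column to 1 keeps the comparison about proportions rather than volumes.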
_unix.36920
Having spent most of my Linux life using Debian, I've been having a look at other distros and am really surprised at the extent to which they don't provide a smooth upgrade between versions. Debian is infinitely upgradeable, and I've upgraded through a few major stable versions now. I'm talking about well-supported distros like Fedora (and derivatives), even Ubuntu and derivatives, and even stable server-oriented distros like CentOS. Is it because Debian's package management system and package upgrade scripts are just far more advanced than anything other distros have to offer? Or is reinstalling from scratch on a major version upgrade just a better idea overall, regardless of distro?
Why do most distros (other than Debian) recommend/require a full reinstall when upgrading to a new version?
package management;distros;upgrade;reinstall
null
_unix.166176
I'm working on a script to install a package from a URL. The script needs to install the package if it isn't installed and forcibly replace the existing version with the specified RPM if it is already installed. Unfortunately both yum install and yum update return 1 if the package is already at the right version. How do I just tell yum to absolutely, positively install an RPM and only return an exit code if there's an actual error?
How to install or upgrade package with yum with a zero exit code?
yum
Instead of your script calling yum to do the download and install, I would just make the script download the file (with e.g. curl or wget) and then force the installation of the downloaded .rpm file:

rpm --install --force file_name.rpm

As the OP indicated, rpm can download from the URL directly without a problem. From the man page:

INSTALLING, UPGRADING, AND REMOVING PACKAGES:
    rpm {-i|--install} [install-options] PACKAGE_FILE ...
    rpm {-U|--upgrade} [install-options] PACKAGE_FILE ...
    rpm {-F|--freshen} [install-options] PACKAGE_FILE ...
    <snip>
    In these options, PACKAGE_FILE can be either rpm binary file or ASCII
    package manifest (see PACKAGE SELECTION OPTIONS), and may be specified
    as an ftp or http URL, in which case the package will be downloaded
    before being installed. See FTP/HTTP OPTIONS for information on rpm's
    internal ftp and http client support.
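If the package may or may not already be present, rpm's upgrade mode is handy, because it installs when the package is absent and replaces it when present; with --force it also accepts re-installing the same version with exit code 0 (the URL here is a hypothetical placeholder):

rpm --upgrade --force http://repo.example.com/path/package.rpm \
    && echo "installed or replaced OK"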
_codereview.105208
For our first assignment in our algorithms class we are supposed to solve several different questions using information from a 2D array. We are also supposed to optimize our code to get more marks on the assignment, and we are only allowed to use 1D and 2D arrays as data structures.

The problem was to find the unique values in a 2D array (int data[][]) of 1000x250 values, where the values range from 2000000 to 2200000. My idea was to make a 1D array (with 200000 indexes), look at each number in the 2D array, see if it matches anything prior to it in the 1D array, and if not, add it. However, together with all the other questions, it takes a few seconds before it will actually finish.

int[] uniqueValues = new int[200000];
boolean isUnique = true;
int uniqueCounter = 0;

for (int i = 0; i < data.length; i++) {
    for (int j = 0; j < data[i].length; j++) {
        for (int x = 0; x < uniqueCounter; x++) {
            if (data[i][j] != uniqueValues[x]) {
                isUnique = true;
            } else {
                isUnique = false;
                break;
            }
        }
        if (isUnique) {
            uniqueValues[uniqueCounter] = data[i][j];
            uniqueCounter++;
        }
    }
}
Unique value finder
java;performance;algorithm;array;memory optimization
null
_codereview.139704
This is my simple game that gives you a country and has you enter the corresponding capital. What do you think? (Before running it you need to create score.txt with a line in it, since the program reads the previous score from that file.)

package com.company;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.Random;
import java.util.Scanner;

public class Main {

    public static void main(String[] args) throws FileNotFoundException {
        Random r = new Random();
        Scanner input = new Scanner(System.in);
        String[] capital = {"Lisbon", "Madrid", "Paris", "Berlin", "Warsaw", "Kiev", "Moscow", "Prague", "Rome"};
        String[] country = {"Portugal", "Spain", "France", "Germany", "Poland", "Ukraine", "Russia", "Czech Republic", "Italy"};
        Boolean[] was = new Boolean[9];
        int score = 0;
        int good = 0;
        int bad = 0;
        File file = new File("score.txt");
        Scanner lastscoreopener = new Scanner(file);
        String lastscore = lastscoreopener.nextLine();
        System.out.print("Your last score was: " + lastscore + "\n");
        for (int d = 1; d < 4; d++) {
            int randomint = r.nextInt(9);
            while (was[randomint] == Boolean.FALSE) {
                randomint = r.nextInt(9);
            }
            was[randomint] = false;
            while (was[randomint] == true);
            for (int x = 0; x < 3; x++) {
                System.out.print("What's capital of " + country[randomint] + "\n");
                String answer = input.nextLine();
                if (answer.toLowerCase().equals(capital[randomint].toLowerCase())) {
                    if (x == 0) {
                        score = score + 3;
                        good++;
                    } else if (x == 1) {
                        score = score + 2;
                        good++;
                    } else if (x == 2) {
                        score = score + 1;
                        good++;
                    }
                    System.out.print("Good. Your score is " + score + ". \n");
                    break;
                } else {
                    if (x != 2) {
                        System.out.print("Bad. You have " + (2 - x) + " chances. ");
                    }
                    if (x == 2) {
                        System.out.print("Bad. Good answer is " + capital[randomint] + "\n");
                    }
                    bad++;
                }
            }
        }
        System.out.print("End of game. Good answers: " + good + ". Bad answers: " + bad + ". Score: " + score);
        PrintWriter saver = new PrintWriter("score.txt");
        saver.println(score);
        saver.close();
    }
}
Capitals and countries game
java;beginner;quiz
Match what you say to what you are actually doing

for (int d = 1; d < 4; d++) {

You want to loop three times. Why does it say 4? Either

for (int d = 1; d <= 3; d++) {

or more idiomatically

for (int d = 0; d < 3; d++) {

Since you never use d except for iteration, there seems no reason to start from 1 instead of 0.

Use helper objects

class Country {
    private final String name;
    private final String capital;

    public Country(String capital, String name) {
        this.name = name;
        this.capital = capital;
    }

    public String getName() {
        return name;
    }

    public String getCapital() {
        return capital;
    }
}

Now you can say

List<Country> countries = new ArrayList<>();
countries.add(new Country("Lisbon", "Portugal"));

Same pattern for the other countries. Now if you add a new country, you can't accidentally add the capital at a different index than the country name.

Consider making this a class field:

private static final List<Country> countries = new ArrayList<>();

static {
    countries.add(new Country("Lisbon", "Portugal"));

Don't forget to close the static initializer block after adding all the countries and capitals:

}

Either way, you can then say

List<Country> choices = new ArrayList<>(countries);

You use it like

Country choice = choices.remove(r.nextInt(choices.size()));

Now you don't need a was variable. This is self-maintaining: it only picks from the entries still in the list. Also, if you add an entry to the list, it will automatically be included. You don't have to go through your code changing all your magic numbers to match each other.
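Putting those last pieces together, a sketch of the question loop with the draw-without-replacement list (three rounds, no repeats):

List<Country> choices = new ArrayList<>(countries);
for (int round = 0; round < 3; round++) {
    Country choice = choices.remove(r.nextInt(choices.size()));
    System.out.println("What's the capital of " + choice.getName() + "?");
    // ...compare input.nextLine() with choice.getCapital() as before
}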
_datascience.22088
I am extracting tweets on brands for sentiment analysis. I am using twitteR package on R. Is there a specific way by which I can only collect tweets from a location ex NY or Paris or Canada etc. I have tried Geocode but here I have to give lat and long data and the vicinity range.
How to collect tweets by geo-location?
r;nlp;sentiment analysis
null
_cs.7453
Apparently, if ${\sf P}={\sf NP}$, all languages in ${\sf P}$ except for $\emptyset$ and $\Sigma^*$ would be ${\sf NP}$-complete.Why these two languages in particular? Can't we reduce any other language in ${\sf P}$ to them by outputting them when accepting or not accepting?
If P = NP, why wouldn't $\emptyset$ and $\Sigma^*$ be NP-complete?
complexity theory;np complete;reductions
null
_codereview.36686
Other people will be looking at this jQuery code and I'm not an expert with jQuery, so I'm asking: how can I make this jQuery code shorter, more efficient, and easier to read?

/* HEADER NAVIGATION */

/* navigation tabs click highlight */
$(".header-main-tabs").click(function() {
    $(".header-main-tabs").removeClass("header-tab-selected");
    $(this).addClass("header-tab-selected");
});

/* drop menu hide and show for desktop */
$(".header-main-tabs").hover(function() {
    $(this).children(".header-drop-menu").toggleClass("show-drop-menu-hover");
});

/* search input hide and show when search icon is pressed */
$("#search-icon-container span").click(function() {
    $(this).toggleClass("fa-times");
    $("#search-input-container").fadeToggle("fast");
});

/* mobile navigation */
/* show mobile tabs when toggle nav mobile button is clicked and when browser width is over 990px */
/* toggle mobile navigation when nav button is clicked */
$("#toggle-mobile-nav").click(function() {
    $("#nav-tabs-list").slideToggle();
    $("#toggle-mobile-nav").toggleClass("toggle-mobile-nav-clicked");
});

var browserWidth = $(window).width();
$(window).resize(function() {
    browserWidth = $(window).width();
    if (browserWidth > 990) {
        $("#nav-tabs-list").show();
    }
});

$('#nav-tabs-list li').click(function() {
    if (browserWidth <= 990) {
        $('#nav-tabs-list').slideUp();
    }
});

/* always show drop menu when on mobile version (when browser width is below 960px) */
$(document).ready(function() {
    fadeMobile();
});
$(window).resize(function() {
    fadeMobile();
});

function fadeMobile() {
    browserWidth = $(window).width();
    if (browserWidth < 990) {
        $("#nav-tabs-list").hide();
        $('.header-drop-menu').show();
        $("#toggle-mobile-nav").removeClass("toggle-mobile-nav-clicked");
    }
}
Window-size-dependent navigation menu with animation effects
javascript;jquery;animation
null
_unix.184244
I have long heard that the CentOS 6.5 kernel (2009) is "too old" compared to the CentOS 7 kernel (2013). I want to better understand what "too old" means here. Comparing 2.6.32 to 3.10, what are the things I will not be able to do on 2.6 compared to 3.10? Any answer here would be valuable. Thanks!
CentOS 6.5 kernel vs CentOS 7 kernel
centos
null
_softwareengineering.297769
TL;DR below.

I'm working on a game server (in Java, but that part is less important), and have decided to split up the server logic from the engine logic, in part because they're in two different logical domains. The server will interact directly with the client and handle the networking aspect of things, while the engine will deal with in-game events. I would like these to be as separate as possible, so that the engine doesn't deal with packets and networking, and the server doesn't deal with player movement or game events. However, these two will have to remain coupled because, although they are in different domains, they are part of the same process.

For example, when the user launches their client and connects to the server, the server must send a message to the engine to create a new game session. The engine then initializes the world, adds the player to it, and registers the proper in-game event listeners. At this point, the engine tells the server that the world is ready. The server then tells the client to load resources (images, landscape data, text, etc), registers packet handlers, and everything is ready to be played.

We also have to take it the other way. If the player disconnects their client, or logs out, then the server is the first to know of this state change. The server session may stay active, in case it's just a slight network hiccup, but eventually it must tell the game engine to do any cleanup.

There can be many implementations of the engine and server. For example, they can be running in the same JVM instance, they could be on different instances using socket channels to communicate between each other, or we could have the engine on another server, using something like a REST service to communicate messages between them.

TL;DR

What sort of design patterns or strategies can I use to send messages between two coupled components that should remain as separate as possible, without introducing an intermediary library?
Prevent circular dependencies without introducing intermediary library
java;design patterns;object oriented design
null
_webmaster.63583
I have a handful of subdomains set up as redirects because we are using them for QR codes.I want to be able to track the QR code redirects (which are already set up and printed so no changing them at this point) and see the effectiveness of each.Here's two examples: http://qr.glorkianwarrior.com and http://ad.glorkianwarrior.com are set up to forward to our iTunes page (later on this year it may forward to Google Play or a specific landing page), is there any way on my server to track the redirect from the subdomain to iTunes and see where traffic is coming from first?I have the redirects set up through cPanel presently using subdomains.Edit: From the research I've seen I can't track a 301 directly. If I redirect to an internal page and then do a timed redirect to the iTunes link, how long will it take for the tracking script to track a hit?
Is it possible to track redirects to external sites from our subdomains?
google analytics;redirects;analytics;tracking;cpanel
You can do this with PHP and Google Analytics. For each subdomain, run an index.php file that will send data to Google Analytics and then give a header redirect response to send the visitor to iTunes. Google will record the events in Analytics and the user won't see anything different.

Here is an example for a PHP redirect:
https://stackoverflow.com/questions/768431/how-to-make-a-redirect-in-php

Here is the documentation for Google's Measurement Protocol API:
https://developers.google.com/analytics/devguides/collection/protocol/v1/

You can get details on the API here:
https://developers.google.com/analytics/devguides/collection/protocol/v1/reference

There is a PHP library on GitHub for this:
https://github.com/google/google-api-php-client
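A hedged sketch gluing the answer's two pieces together (a Measurement Protocol hit, then the redirect); the tracking ID, page path, and target URL are placeholders:

    <?php
    // Hypothetical index.php for qr.glorkianwarrior.com
    $hit = http_build_query(array(
        'v'   => 1,                  // Measurement Protocol version
        'tid' => 'UA-XXXXXXX-1',     // placeholder property ID
        'cid' => uniqid('', true),   // anonymous client ID
        't'   => 'pageview',
        'dh'  => $_SERVER['HTTP_HOST'],
        'dp'  => '/qr-redirect',
    ));
    // Fire-and-forget hit (needs allow_url_fopen; use cURL otherwise).
    @file_get_contents('https://www.google-analytics.com/collect?' . $hit);
    header('Location: https://itunes.apple.com/[your-app]', true, 302);
    exit;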
_unix.73835
I am seeing tons of these errors in my .xsession-errors:

    krunner(8135)/libakonadi Akonadi::SessionPrivate::socketError: Socket error occurred: QLocalSocket::connectToServer: Connection refused

This akonadi/nepomuk thingy has always been a mystery to me. I never asked for it and yet it is on my computer, messing up my log files. Ideally I would like to get rid of it. My questions are:

1. What do I have to do in order to get rid of akonadi and nepomuk?
2. What are these guys supposed to do anyway?
3. What would I lose if I got rid of them?
4. In case they do something useful, do you have any ideas how I can avoid the error message above?
Akonadi floods my .xsession-errors
linux;kde;kmail
null
_codereview.33772
In the application I'm building, the user is able to define 'types', where each 'type' has a set of 'attributes'. The user is able to create instances of products by defining a value for each attribute the product's type has.

A pic of the schema: [image not preserved in this extract]

I'm creating the query where the user specifies the attribute values and the product type, and with that I should return all the product ids that meet the query. The problem I see in my query is that I'm performing a whole select * from attributes_products ... for each attribute that the product's type has. Is there a way to optimize this? If I create an index on the column attributes_products.product_id, would this query actually be optimal?

Example of a query where I'm looking for a product whose type has 3 attributes:

    select p.id
    from Products as p
    where exists(
        select * from attributes_products
        where product_id = p.id AND
              attribute_id = 27 AND
              value = 'some_value') AND
    exists(
        select * from attributes_products
        where product_id = p.id AND
              attribute_id = 28 AND
              value = 'other_value') AND
    exists(
        select * from attributes_products
        where product_id = p.id AND
              attribute_id = 29 AND
              value = 'oother_value')

Many thanks.

Conclusions: So, Gareth Rees (selected answer) proposed another solution which involves multiple joins.

Here is the explanation of his query (done by PgAdmin): [image not preserved in this extract]

This is the explanation of the original query: [image not preserved in this extract]

I believe that the selected answer is slightly faster, but consumes a lot more memory (because of the triple join). I believe that my original query is slightly slower (very slightly, since there's an index on the attributes_products table) but a lot more efficient in memory.
Help optimizing this query with multiple where exists
optimization;sql;postgresql
SQL allows you to join the same table multiple times, so what you need here is:

    SELECT p.id FROM products AS p
    JOIN attributes_products AS ap1
      ON ap1.product_id = p.id AND ap1.attribute_id = 27 AND ap1.value = '...'
    JOIN attributes_products AS ap2
      ON ap2.product_id = p.id AND ap2.attribute_id = 28 AND ap2.value = '...'
    JOIN attributes_products AS ap3
      ON ap3.product_id = p.id AND ap3.attribute_id = 29 AND ap3.value = '...'

Here's the toy MySQL database that I'm using to answer this question:

    CREATE TABLE products (
        id INTEGER PRIMARY KEY AUTO_INCREMENT
    );
    CREATE TABLE attributes_products (
        product_id INTEGER NOT NULL,
        attribute_id INTEGER NOT NULL,
        value CHAR(40)
    );
    CREATE INDEX ap_product ON attributes_products (product_id);
    CREATE INDEX ap_attribute ON attributes_products (attribute_id);
    INSERT INTO products VALUES (1);
    INSERT INTO products VALUES (2);
    INSERT INTO attributes_products VALUES (1, 27, 'a');
    INSERT INTO attributes_products VALUES (1, 28, 'b');
    INSERT INTO attributes_products VALUES (1, 29, 'c');

With my query above, MySQL reports the following query plan:

    +----+-------------+-------+--------+-------------------------+--------------+---------+---------------------+------+--------------------------+
    | id | select_type | table | type   | possible_keys           | key          | key_len | ref                 | rows | Extra                    |
    +----+-------------+-------+--------+-------------------------+--------------+---------+---------------------+------+--------------------------+
    | 1  | SIMPLE      | ap1   | ref    | ap_product,ap_attribute | ap_attribute | 4       | const               | 1    | Using where              |
    | 1  | SIMPLE      | ap2   | ref    | ap_product,ap_attribute | ap_attribute | 4       | const               | 1    | Using where              |
    | 1  | SIMPLE      | ap3   | ref    | ap_product,ap_attribute | ap_attribute | 4       | const               | 1    | Using where              |
    | 1  | SIMPLE      | p     | eq_ref | PRIMARY                 | PRIMARY      | 4       | temp.ap3.product_id | 1    | Using where; Using index |
    +----+-------------+-------+--------+-------------------------+--------------+---------+---------------------+------+--------------------------+

See the MySQL documentation for an explanation of the EXPLAIN output. This looks better than the plan for the OP's query:

    +----+--------------------+---------------------+-------+-------------------------+--------------+---------+-------+------+--------------------------+
    | id | select_type        | table               | type  | possible_keys           | key          | key_len | ref   | rows | Extra                    |
    +----+--------------------+---------------------+-------+-------------------------+--------------+---------+-------+------+--------------------------+
    | 1  | PRIMARY            | p                   | index | NULL                    | PRIMARY      | 4       | NULL  | 2    | Using where; Using index |
    | 4  | DEPENDENT SUBQUERY | attributes_products | ref   | ap_product,ap_attribute | ap_attribute | 4       | const | 1    | Using where              |
    | 3  | DEPENDENT SUBQUERY | attributes_products | ref   | ap_product,ap_attribute | ap_attribute | 4       | const | 1    | Using where              |
    | 2  | DEPENDENT SUBQUERY | attributes_products | ref   | ap_product,ap_attribute | ap_attribute | 4       | const | 1    | Using where              |
    +----+--------------------+---------------------+-------+-------------------------+--------------+---------+-------+------+--------------------------+

But results will vary from one database to another: a good query planner might be able to make something efficient out of the OP's query.
_softwareengineering.115368
Before you whip out your snarky comments, I know -- this is a nooby question. This is my first time using a C based language.

I'm an undergrad student learning Objective-C for a computer science course on mobile development. I know that, in an academic setting, a lot of real-world considerations aren't necessary since you're building smaller projects, working in smaller teams, etc. But our professor demands -- and Xcode supports -- .h header files for every .m implementation file. To me, it kind of seems like busywork. I have to make sure I copy every method signature and instance variable over to the other file. If I change one file, I have to make sure it's consistent with the other file. It seems like just a bunch of small annoyances like that.

But I know there has to be some real-world use for header files. A great answer would address both:

1. What is a header file useful for that an implementation file isn't suited for? What is its purpose?
2. Why do we as programmers have to manually write our header files? It seems like they could easily be generated automatically.

Thanks in advance!
Why do we need to write a header file?
objective c;headers
In short:

- The header file defines the API for a module. It's a contract listing which methods a third party can call. The module can be considered a black box to third parties.
- The implementation implements the module. It is the inside of the black box. As a developer of a module you have to write this, but as a user of a third-party module you shouldn't need to know anything about the implementation. The header should contain all the information you need.

Some parts of a header file could be auto-generated: the method declarations. This would require you to annotate the implementation, as there are likely to be private methods in the implementation which don't form part of the API and don't belong in the header.

Header files sometimes have other information in them: type definitions, constant definitions, etc. These belong in the header file, and not in the implementation.
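A minimal illustration of the contract/black-box split described above, in plain C rather than Objective-C (file names are hypothetical):

    /* stack.h : the contract third parties see */
    #ifndef STACK_H
    #define STACK_H
    typedef struct Stack Stack;             /* opaque type: internals hidden */
    Stack *stack_new(void);
    void   stack_push(Stack *s, int value);
    int    stack_pop(Stack *s);
    #endif

    /* stack.c : the inside of the black box */
    #include "stack.h"
    #include <stdlib.h>
    struct Stack { int data[64]; int top; };
    /* private helper: deliberately absent from the header */
    static int is_empty(const Stack *s) { return s->top == 0; }
    Stack *stack_new(void) { return calloc(1, sizeof(Stack)); }
    void stack_push(Stack *s, int v) { s->data[s->top++] = v; }
    int stack_pop(Stack *s) { return is_empty(s) ? 0 : s->data[--s->top]; }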
_softwareengineering.188696
I am programming in PHP and I have an include file that I have to, well, include as part of my script. The include file is 400 MB, and it contains an array of objects which are nothing more than configuration settings for a larger project. Here is an example of the contents:

    $obj = new obj();
    $obj->name = "myname";
    ....
    $objs[$obj->name] = $obj->name;
    {repeat}
    ....
    return $objs;

This process repeats itself 40,000 times, and ultimately the file is 650,000 lines long (it is generated dynamically, of course).

If I am simply trying to load a file that is 400 MB, why then would memory usage increase to 6 GB? Even if the file is loaded into memory, wouldn't it only take up 400 MB of RAM?
Please help me understand the relationship between script file size and memory usage?
memory;memory usage
There's not just the file. There's the parse tree that gets generated from parsing the file, the bytecode it gets compiled to, the memory taken up by the variables defined in it, etc. 6GB still sounds like a lot, but for a 650,000 line file it doesn't surprise me much.At some point along the way, someone should probably have thought of using a database. :) Unless you're using every item in that file every time, loading half a million lines of stuff to pull out a few things is incredibly wasteful.
_unix.124393
I'm using Debian 'Jessie'. Sometimes my computer freezes, and then I can't use Ctrl+Alt+Del to reboot, Ctrl+Alt+Backspace to kill the X Window System, nor Ctrl+Alt+F1 to open a new shell. I've read on several sites that during a freeze you can use the basic kernel commands issued by pressing Alt+SysRq (holding Alt+SysRq and pressing the R E I S U B keys one at a time).

But on my computer that 'trick' isn't working when it's frozen. Has the kernel frozen as well? I heard that one of the best things about Linux was that you never had to turn off the computer by holding the power button, but it's not proving true for me :/
Why isn't REISUB working on Debian?
debian;kernel;reboot
Magic keys tend to be disabled in Debian these days, so you can't just hard-reboot your machine or kill all your X processes by pressing a few keys accidentally.

The X Ctrl+Alt+Backspace key sequence is controlled by the DontZap option in /etc/X11/xorg.conf -- man xorg.conf for more details. I think you want this, though:

    Section "ServerFlags"
        Option "DontZap" "false"
    EndSection

The sysrq keys are controlled by the kernel options during kernel compile time, boot time, and also sysctl options. To enable them on Debian, put

    kernel.sysrq=1

into /etc/sysctl.conf, and either reload that file (sysctl -p /etc/sysctl.conf; man sysctl for more), or just edit the file and reboot.
_unix.179145
I am running a lubuntu 14.04 server and it's always worked well, except for the last couple of days it keeps dropping off the LAN. I am using a couple of powerline adapters to connect it up, as WiFi was never very solid on it.

Anyway, I plugged a screen and mouse/keyboard into it to try to work out what's up with it, and I tailed the syslog. Here is an excerpt: http://pastebin.com/4ftXsai4

Can someone please give me any clues to what the problem might be? Or any tips on how to track down the issue. I have unplugged the Ethernet cable from my router and rebooted it all, to no avail.

EDIT #1: I'm not sure what happened, but the server has been up and connected all day today. I am tailing a log file via a remote SSH connection and it hasn't disconnected once. It could have been that the WiFi card was playing up, or that the router was struggling to assign DHCP leases, although if it was a DHCP problem, why would it keep disconnecting?
Understanding network problem (syslog)
ubuntu;networking;syslog;lan
null
_unix.12468
I have a Dell 710 with quad Broadcom NetXtreme 5709s. In the name of expediency, I'm trying to boot off the Squeeze live CD, but the Broadcom drivers are in non-free, so they don't come up when you boot.

No problem, I think to myself. I will sneaker-net the bnx2-firmware deb and all will be good. I can see the interfaces in lspci, I have unpacked the deb and successfully executed modprobe bnx2; however, I still can't see the interfaces in ip link show. What else should I do to bring these interfaces up without a reboot?

EDIT: I have old entries in /var/log/kern.log about the failure to load bnx2 at boot, but the modprobe completes successfully with no other log entries...

    $ lsmod | grep bnx
    bnx2                   57385  0
Debian Live - modprobe failed to bring up Broadcom ethernet interfaces
linux;debian;ethernet;broadcom;modprobe
The firmware must be present at the time you load the driver. So be sure to unload the module and reload it:

    # <install firmware>
    rmmod bnx2
    modprobe bnx2

For some drivers (I don't know about this one), you may need to unload auxiliary modules that it's using. lsmod | grep bnx2 will show what modules bnx2 uses. Call rmmod on all of them in reverse dependency order.

Most modules emit some log messages when they're loaded and they find a potential device, sometimes even if they don't find a potential device. These logs would be in /var/log/kern.log, at least on Debian and Ubuntu.
_unix.50016
Ok so I want to view the RAM usage and have it run continuously:

    root@another:/etc/mysql/database_backups# free -mt
                 total       used       free     shared    buffers     cached
    Mem:           998        870        127          0         92        362
    -/+ buffers/cache:        415        582
    Swap:         2047         31       2016
    Total:        3046        902       2144
    root@another:/etc/mysql/database_backups# free -mts
    free: option requires an argument -- 's'
    usage: free [-b|-k|-m|-g] [-l] [-o] [-t] [-s delay] [-c count] [-V]
      -b,-k,-m,-g show output in bytes, KB, MB, or GB
      -l show detailed low and high memory statistics
      -o use old format (no -/+buffers/cache line)
      -t display total for RAM + swap
      -s update every [delay] seconds
      -c update [count] times
      -V display version information and exit

The first command works great, but adding the s flag gives me an error, even though I can clearly see that it is an available flag. Any ideas on what I am doing wrong, and whether there is a better way of doing this?
Why is this flag not working?
linux;ubuntu;command line
null
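For the record (no accepted answer was recorded): -s is not a bare flag, it is an option that takes the delay as its argument, which is exactly what the "option requires an argument" error is saying. A hedged example:

    free -mt -s 2      # refresh every 2 seconds
    # or lean on watch instead:
    watch -n 2 free -mt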
_vi.11382
If I

1. open (vanilla) vim by running $ vim [-u NONE],
2. switch to insert mode by pressing a,
3. add an 'a' by pressing a again,
4. switch back to normal mode by pressing ESC, and
5. repeat steps 2 and 4 several times,

I see my cursor switching from on top of the 'a' (normal mode) to after the 'a' (insert mode) and back instantly as I press the corresponding keys.

If I now

1. exit vim by typing :q! and confirming with RETURN/ENTER,
2. start a (vanilla) screen session by running $ screen [-c /dev/null], and
3. repeat the above exercise,

the switch from normal to insert mode after pressing a is still instant, but there is a notable delay between me pressing ESC and vim switching back to normal mode.

I observed this with Screen version 4.01.00devel (GNU) and both VIM - Vi IMproved 7.4 and NVIM 0.2.0-dev.

How can I configure [n]vim/screen to avoid this delay?
delay between hitting `ESC` (in insert mode) and switching to normal mode within `screen`
escape;gnu screen
I found the solution on http://vim.wikia.com/wiki/GNU_Screen_integration:

Getting the Esc key to work

If you use Vim under Screen, you might find that the Esc key doesn't work. To fix this, add the following to your .screenrc:

    maptimeout 5

This may be necessary so Screen will wait no more than 5 milliseconds between characters when detecting an input sequence.
_softwareengineering.84144
I am currently working on a money tracking/invoice creation app that I intend to release for free. The app can be broken down into three parts:

1. The framework, a generic, all-purpose collection of classes (PHP/MySQL)
2. The app itself (PHP/JavaScript)
3. The design (images)

I am trying to find licenses that fit three different purposes.

I want to release the framework under a license that specifies that:

- The framework is open-source, free, and cannot be sold.
- However, the framework can be used in commercial products, as long as no author names are removed from the code and the framework's source is available (a link to my SourceForge page in the about page will do... even a little one, hidden in a subpage or in the FAQ, as long as people really looking for it can find it).
- Code that uses my framework doesn't have to be open-sourced. I don't want to stop people from releasing non-open-source, commercial products. Too many times I have been blocked by this when working for a client; I don't want to inflict the same problems on the community. Furthermore, I will surely use my framework myself for closed-source projects for clients.

I want to release the app part under an open-source, free license that disallows any attempt to sell it (but allows forks, as long as they stay open-source and free).

I want to release the design (icons, backgrounds) under a free license for non-commercial projects only.

Additionally, if it is possible (if such a license exists), I would like to remove all constraints, even for commercial products, as long as the project is led by a one-man (or one-woman) team. In other words, I'd like freelancers to be able to fully enjoy complete freedom, but have some restrictions for companies.

It might be worth mentioning that although the framework is totally custom code, the app will contain some third-party code, namely jQuery, and maybe some other JavaScript components.

I am aware this is a very specific question that doesn't necessarily help the coding community, just me, but I don't know where else to turn.
Mix three different licenses for an open-source software
open source;licensing
My first impression: for the framework, LGPL; for the app, GPL; for the design, CC. Add appropriate exception clauses where necessary. Don't worry about anybody selling the framework (without an app) - that won't happen anyway.
_softwareengineering.129847
I've seen objects created in Java code without storing a reference to the object. For example, in an Eclipse plugin I've seen an SWT Shell created like so:

    new Shell();

This new Shell object is not stored in a variable, but will remain referenced until the window is disposed, which [I believe?] happens by default when the window is closed.

Is it bad practice to create objects like this without storing a reference to them? Or was the library poorly designed? What if I don't have need of a reference, but only want the side effects of the object? Should I store a reference anyway?

UPDATE: Admittedly, my above example is poor. While I have seen UI elements created like this, creating an SWT Shell like this would probably be pointless, because you need to call the open method on the Shell instance. There are better examples provided by aix, such as the following from the Java concurrency tutorial:

    (new HelloThread()).start();

This practice is seen in many contexts, so the question remains: is it good practice?
Is it bad practice to create new objects without storing them?
java
There's an element of personal preference to this, but I think that not storing the reference is not necessarily a bad practice.Consider the following hypothetical example:new SingleFileProcessor().process(file);If a new processor object needs to be created for every file, and is not needed after the process() call, there's no point in storing a reference to it.Here is another example, taken from the Java concurrency tutorial:(new HelloThread()).start();I've seen lots of other examples when the reference is not stored, and that read perfectly fine to my eye, such as:String str = new StringBuilder().append(x).append(y).append(z).toString();(The StringBuilder object is not kept.)There are similar patterns involving common.lang's HashCodeBuilder et al.
_unix.370124
I am currently looking at splitting the interface between cfg80211 and mac80211, so that the control plane can run on a separate server for a WiFi AP. Could you please let me know the feasibility of this?
Control and data plane split in Wifi stack
linux kernel;wlan
null
_softwareengineering.110978
My team (~10 devs) has recently migrated to Maven (multi-module project, ca. 50 modules) and we now use Jenkins for continuous integration. As the overall setup is running, we are planning to include code analysis and reporting tools (e.g., Checkstyle and Cobertura).

As far as I can see, we basically have two options to generate such reports: either use Maven's reporting plug-ins to generate a site, and/or use Jenkins plug-ins to do the reporting. Which approach is better, and for which reasons? It seems both have their merits:

Maven reporting:
- everything is configured in a single place (the pom files)
- report generation on a local machine (i.e., without Jenkins) is straightforward and does not require any additional steps

Jenkins reporting:
- adjusting the reporting configuration seems more intuitive than with Maven and does not interfere with the main task of Maven in our setup (building and deploying)
- a single web interface to check the overall status of the project, so that the reported issues are easier to find and harder to ignore (i.e., no need to click through the Maven site, some nice history plots, notifications, etc.)

Has anyone made a similar decision in the past, or is this a non-issue for some reason? (How well does it work? Any regrets?) Is it advisable to go both ways, or is that setup too tedious to maintain? Are there aspects of the problem that I am unaware of, or some advantages I have not noticed (e.g., is Maven-based reporting integrated nicely with m2eclipse)? Any pitfalls or typical problems that we are likely to encounter?

EDIT: We tried out Sonar, as suggested below (including its Jenkins plug-in). So far, it works pretty well. Sonar is relatively simple to install and works on top of our Maven setup (one just invokes mvn sonar:sonar or re-configures a Jenkins job, that's all). This means we do not have to maintain a duplicate configuration, as all our Maven settings are used automatically (the only exception being exclude patterns). The web interface is nice, and -- even better -- there is an Eclipse plug-in that retrieves all issues automatically, so that no one really has to browse to the Sonar website, as all issues can be displayed directly in the IDE.
Code Analysis & Reporting: Maven vs. Jenkins
development process;reporting;maven;jenkins;report generation
Since you already have your projects ready for Maven and Jenkins is installed, you may go one step further and install Sonar. It is an incredible code-quality tool: it runs Cobertura, Checkstyle, FindBugs, and many other analysis tools at the same time. That way, the code quality reporting is done by Sonar, not by Maven or Jenkins.

You can install Sonar on the same machine where Jenkins is installed, or somewhere else. If you just want to test it, you can perform the standard install (it uses its own HSQLDB if you don't configure the database yourself) and you are almost ready to go. The last thing you need is to configure the relationship between Jenkins and Sonar. This is done via the Sonar plugin for Jenkins. Once you have installed the plugin:

1. Indicate where Sonar can be located in the Jenkins configuration.
2. For each build job you want to be analysed, go into the job configuration and check the Sonar checkbox.

Be careful about one thing: the analysis may take quite long for big projects. But then again, you may configure special nightly Sonar builds in addition to the standard builds.

Hope that helped.
_scicomp.1421
Suppose I have two matrices, Nx2 and Mx2, representing N and M 2D vectors respectively. Is there a simple and good way to calculate distances between each vector pair (n, m)?

The easy but inefficient way is of course:

    d = zeros(N, M);
    for i = 1:N,
      for j = 1:M,
        d(i,j) = norm(n(i,:) - m(j,:));
      endfor;
    endfor;

The closest answer I've found is bsxfun, used like so:

    bsxfun(inline("x-y"), [1,2,3,4], [3;4;5;6])
    ans =
      -2  -1   0   1
      -3  -2  -1   0
      -4  -3  -2  -1
      -5  -4  -3  -2
Octave: calculate distance between two matrices of vectors
performance;octave;vectorization
null
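No accepted answer was recorded; for completeness, a standard vectorized sketch (assuming the two matrices are called A and B, of sizes Nx2 and Mx2):

    % pairwise distances via |a-b|^2 = |a|^2 + |b|^2 - 2*a.b
    D2 = bsxfun(@plus, sum(A.^2, 2), sum(B.^2, 2)') - 2*A*B';
    D  = sqrt(max(D2, 0));   % clamp tiny negative round-off before the sqrt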
_unix.279054
I am gearing up to do some scientific experiments that involve modifying the Linux kernel to collect data on the internal runtime states of certain modules. We want to do the experiments over a handful of kernel versions that represent what's actually being used in real datacentres.

Question: How do I come up with the list of kernel versions to test? I can certainly pick some versions myself, but it would have more scientific credibility if there were some reference we could point to to justify our choices. Even something informal like DistroWatch.com, but for kernel versions, would be helpful. Or possibly scraping download statistics from the package managers of popular distributions, if that data is public.

[Note: I asked this first on ServerFault, but it was closed as not relevant to systems administrators. Hopefully it's more on-topic here.]
How to find out which Linux kernel versions are the most prevalent?
linux kernel
null
_unix.333301
    #!/bin/bash
    #
    echo $PWD
    cd /home/<my username>/<long path>
    echo $PWD

What I get when executing it with bash script.sh:

    /home/<my username>
    : No such file or directorye/<my username>/<long path>
    /home/<my username>

Or with bash . script.sh:

    .: .: is a directory

It looks like in the first case it has just skipped the first 4 characters (/hom) of the address line for no reason. And in the second case, what the hell is .: .:? It's absolutely ungooglable. And of course when I copy-paste the line cd /home/<my username>/<long path> into the terminal, it works like it should.

EDIT: IT WAS ALL ABOUT ONE MISSING SPACE SYMBOL AT THE END OF THE PATH, THANK YOU.
one simple little cd doesn't work in script
bash;scripting
You should check the script for any hidden characters before your /home part (that would explain why it can't cd to it, and why the display truncates part of it too). For example:

    cat -ve THEFILE

-e will mark each end of line with a $, and -v will show some of the control characters in the form ^x, e.g. ^M for the control character Carriage Return.

Fix it: for this, type a working example in your shell, then copy it using the mouse, edit the script, delete the line, and paste the one you copied in its place. (In vi, if the script is exactly as described (and the faulty line is line 4): you go to line 4 with 4G, then delete that line with dd, and go into Insert mode on the line above with O (capital o). Then you can paste the lines you copied with the mouse, press Escape to go back to command mode, and :wq to write the changes to the file and quit vi.)

You may want to compare the output of $(pwd) with the value of $PWD: try replacing the echo $PWD with: pwd ; echo PWD=$PWD

Finally: bash . script.sh should be bash ./script.sh. The one you typed asks bash to execute . with the argument script.sh, and . being a directory, bash complains. When invoked in this way, the complaint is usually of the form: program_name: some message. Here bash tries to execute the "program" ., so its error message uses .: as the program-name prefix, and the message it displays is .: is a directory, indicating that it couldn't execute it and why (it is a directory, not a bash script).

Note that when you invoke script.sh in this way (bash ./script.sh), you ask your current shell to invoke a bash subshell that will execute script.sh and exit. Only that bash subshell will be echoing PWD, then cd-ing to the directory, then echoing the new PWD. When that bash exits, your current shell is still in the original directory. If you want a file to make changes in your current shell, source it instead: . ./script.sh, or in bash you can also do source ./script.sh. (Note: . is the more portable way to source a file. Note 2: giving a path for the file to source is recommended in recent shells, i.e. . script.sh may work too, but it is recommended to specify the local path, such as . ./script.sh.)
_scicomp.24206
I'm having difficulty with numerically solving the inviscid Burgers equation. Godunov's scheme is used in most of what I've found in the literature. Now my question is whether using a Crank-Nicolson scheme is wrong or not.

$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}=0$

with this BC: at $t>0, x=0$: $u=1$.

Using a finite volume method:

$\frac{\partial }{\partial t}\int u\,dv + \int u\frac{\partial u}{\partial x}\,dv=0$

$\frac{\partial u}{\partial t}\Delta v +(uuA)_{e}-(uuA)_{w}=0$

$\frac{\partial u}{\partial t} + F_{e}u_{e}-F_{w}u_{w}=0$

$F=\frac{u}{\Delta x}$

and for discretization in time I used Crank-Nicolson:

$u_{p}^{n+1}-u_{p}^{n}=\frac{\Delta t}{2}(-F_{e}u_{e}+F_{w}u_{w})^{n+1}+\frac{\Delta t}{2}(-F_{e}u_{e}+F_{w}u_{w})^{n}$

The final form of the equation (upwind scheme) is:

$(1+\frac{\Delta t}{2}F_{e})u_{P}^{n+1}=(-\frac{\Delta t}{2}F_{w}) u_{W}^{n+1} +u_{P}^{n}+\frac{\Delta t}{2}(F_{w}u_{w}-F_{e}u_{e})^{n}$

and for the first block:

$(1+\frac{\Delta t}{2}F_{e})u_{P}^{n+1}= u_{P}^{n}+\frac{\Delta t}{2}(-F_{e}u_{e})^{n}+\Delta t F_{w}u_{w}$

After that I tried to solve the resulting set of linear algebraic equations. The result in every time step seems to converge, but as time passes the magnitude of the velocity keeps increasing, which is an indication of a mistake.
Numerical solution of burgers equation with finite volume method and crank-nicolson
pde;finite volume;crank nicolson
If I understand correctly, you are using a centered finite difference in space and the implicit trapezoidal method in time. That scheme is unconditionally absolutely stable, but will generate spurious oscillations. So you should expect to see some increase in the maximum value of $u$, but it shouldn't blow up. If it blows up, you have an implementation error (bug).I'll add that this is a lousy method for a purely hyperbolic PDE, since it generates oscillations and isn't very accurate for large time steps anyway.
_webmaster.102840
In Google Webmaster Tools we can change the domain; this tool will transfer all indexed pages to another domain. I know this technique, but I have another problem.

I have a website that ranks for multiple keywords. I need to build another website for one of the keywords from my site. For example, "web design" and "loan software" both have good rankings for my site. I need to transfer "loan software" to a new website built just around that topic.
How transfer Google index for a single keyword to another domain
google;keywords;ranking;transfer
As far as I know, there is no way to transfer a specific keyword from one site to another site, because keywords are indexed by Google according to the content of your site.

However, you can create a new site and then redirect all the links that are being indexed for "loan software" from the old site to the new site. Of course, you'll have to transfer the contents of those links to the new site as well (or at least have very similar contents at the redirected links). That way Google will give some of the ranking points to your new site and index your desired "loan software" keyword for the new site.

However, keep in mind that you may not get the same ranking points on the new site, since Google SERP ranking may depend on other metrics (e.g., backlinks, PageRank, authority, etc.) that may not be immediately transferable to the new site.
_unix.109895
I'm mostly interested in annotating a PDF file with text at a predetermined position. GUIs and command-line utilities are both OK, but only free software solutions, please. However, I included image additions for completeness. To be clear, the annotations must become part of the PDF file, otherwise it is not useful.

There are two similar questions on Ask Ubuntu, but they are both a couple of years old. These are "How can I add text and images (for example, a signature) to a PDF?" and "How can I edit a picture into an existing PDF file?"

I've tried Xournal, which does work. However, I think a little tutorial about how to do this would be good, so if you want to add a small tutorial on how to use Xournal to accomplish these tasks, please add an answer.

I also tried updf, which didn't work for me, though this answer and this one, for example, say it can. I rebuilt the package (which is pure Python) on Debian Wheezy, using the sources from the updf PPA. It seems quite primitive, and the Save As dialog did not even have a save button. If other people have had different experiences, please post.

For each answer, please provide a brief tutorial, with screenshots if appropriate, as to how you accomplished this task.
Text annotations and image additions to PDF file using free software
pdf;free software
Okular can make annotations on PDFs, as of the version in Debian 8 (Jessie). This is the version:

    okular --version
    Qt: 4.8.6
    KDE Development Platform: 4.14.2
    Okular: 0.20.2

Here is how it works. For details, see the Annotation reference page from the Okular manual. To quote that page:

Since Okular 0.15 you can also save annotations directly into PDF files. This feature is only available if Okular has been built with version 0.20 or later of the Poppler rendering library.

First, you need to annotate the PDF. You can do this via the menu or via a keystroke. You can find the tools under Tools->Review, or via the keystroke F6. This will bring up a menu on the left, with a variety of options. Probably the best option for inline annotations is Inline Note. Follow the instructions in the link to save the note. As noted in the link, the background color, font and other features of the note can be customized. See also: Change and save pdf annotation setting in Okular?.

By default, the annotation information is stored in xml files located in ~/.kde/share/apps/okular/docdata, or more generally, in the location

    $(kde4-config --localprefix)/share/apps/okular/docdata/

To save the annotation to the PDF file, as is desirable, you need to save the annotations back to the file using Save As.

The annotation is seen by xpdf and evince (which throws the warning "WARNING **: Unimplemented annotation: POPPLER_ANNOT_FREE_TEXT. It is a known issue and it might be implemented in the future." but still shows the annotation), but not by acroread (9.5.5) or the PDF plugin of Chromium (45.0.2454.85). It also prints OK using gtklp, a CUPS frontend.
_webapps.74912
I have a need to email a form submission to two respondent addresses each time the form is submitted. In a perfect world I would be able to place two email fields on the form and select both in the mail To field on the submissions page. This doesn't look possible. I see in the forums that this should be possible by placing a semicolon between the addresses and using a single email field. However the validation for the email field rejects the address when you use the semicolon as a separator. Is there another way to do this that I am missing?
How do I send confirmation emails to multiple respondent addresses when entries are submitted in Cognito Forms?
email;cognito forms
null
_unix.255274
I have a tmux session inside PuTTY. It was fine for many months, but now I see a strange thing. When I maximize a normal PuTTY window, and the cursor is not on a new line, I get some weird ANSI sequences in the shell. E.g., I maximized 4 times to get:

    0;44;8m 0;46;8m 0;50;8m 0;55;9m

It happens only on maximize, not on restore. If the cursor is on a new line, then the codes printed start with ^[[< and then the ANSI sequences. E.g., when I maximized 4 times, ensuring that the cursor was on a new line, I got:

    ^[[<0;64;8m ^[[<0;138;8m ^[[<0;95;8m ^[[<0;79;7m

What is happening? I restarted the session and the issue is not happening now. How can I debug it the next time it happens?
Maximizing tmux session shows weird ANSI Sequences
tmux;escape characters;putty;ansi term
null
_scicomp.25110
I have long believed that FEM/CFD is supposed to be faster on a GPU; here I am using CUDA as a concrete example. However, I have not been able to find a convincing paper whose benchmarks actually say to me, "Yes, this is true!". Can I please be pointed to one? Or, if not, are there any reasons why a GPU can perform poorly compared to a CPU for CFD/FEM? Could it have anything to do with sparse matrix structure, in terms of performance indices like speed, degree of parallelism, etc.?
The real myth of GPU (specifically CUDA) really speed up FEM/CFD
finite element;fluid dynamics;performance;cuda
Here's the deal with GPUs. On a GPU, every single core is slow. Really slow. However, you have thousands of cores. If you can effectively use the thousands of cores at a time, then your algorithm will run better on the GPU. If you cannot, then it will run much slower on the GPU.

Linear algebra is one domain where parallelism is really well established. Thus the best way to write for a GPU is to essentially have the GPU do all of the linear algebra: it essentially becomes a card to compute Ax=b and A*B much faster than the CPU (this fact is pretty easy to check: look at the numerous benchmarks, or even just open up MATLAB and type in A*B for both CPU matrices and GPU matrices). But there's a caveat: data transfer to GPUs is really slow. Also, memory allocation on GPUs is really slow. So while the linear algebra is fast, you have to deal with the fact that:

- Serial performance is awful.
- Allocating memory dynamically on the GPU destroys performance.
- Transferring back and forth between the CPU and GPU is slow.

This puts constraints on your algorithm: you need to try to leave as much on the GPU as possible, transferring back and forth the minimum amount, while trying to keep serial parts from running on the GPU. Likewise, the GPU can easily do the linear algebra 1000x faster than the CPU (which is usually the performance bottleneck), so in many cases you can effectively manage this dilemma and end up with large performance gains.

One interesting alternative is the Xeon Phi. These cards have much faster data transfer, much better serial performance, and can allocate memory much better. However, the tradeoff is that they are less specialized as a dumb linear algebra solver, so you pay a heftier price, and in return their linear algebra performance is about half that of a GPU. However, they can be much easier to develop code for (OpenMP parallel codes will use them automatically, and you can use a Xeon Phi card as another node via MPI, so if you've already parallelized a code you can use the same code with the Phi) and, by allowing you to more effectively keep data on the accelerator or using the increased data transfer speeds, can be much faster than a GPU in real-world (S)PDE solving. Of course, it depends greatly on the implementation.
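A quick way to reproduce the A*B observation from the answer (MATLAB with the Parallel Computing Toolbox; the size is illustrative):

    A = rand(4000, 'single');              % dense matrix on the CPU
    tic; B = A * A; toc                    % CPU multiply
    G = gpuArray(A);                       % one-off transfer to the GPU
    tic; H = G * G; wait(gpuDevice); toc   % GPU multiply; wait() forces completion
    C = gather(H);                         % transfer back: the slow part the answer warns about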
_codereview.14182
Any comments are welcome. However, I'd like to call specific attention to my... interpretation of the N-tier application architecture and how I'm consuming data. Note that the namespacing and inline comments have been factored out for brevity.

The Data Tier (DataModel.csproj)

    // Base class for EF4 Entities
    public abstract class EntityBase {
        public int Id { get; set; }
    }

    // Entities
    public class Account : EntityBase {
        public string UserName { get; set; }
        public string Password { get; set; }
        public byte[] Salt { get; set; }
        public string EmailAddress { get; set; }
    }

    public class Entry : EntityBase {
        public string Title { get; set; }
        public DateTime Created { get; set; }
        public Account Owner { get; set; }
    }

    // EF4 Code-First Data Context
    public class DataContext : DbContext {
        public DbSet<Account> Accounts { get; set; }
        public DbSet<Entry> Entries { get; set; }
    }

    // Repository Pattern
    public interface IRepository<TEntity> where TEntity : EntityBase {
        TEntity Create();
        void Delete(TEntity entity);
        TEntity Get(int id);
        TEntity Get(Expression<Func<TEntity, bool>> where);
        void Insert(TEntity entity);
        void Save();
        IQueryable<TEntity> Where(Expression<Func<TEntity, bool>> where);
    }

    public class Repository<T> : IRepository<T> where T : EntityBase, new() {
        private readonly DataContext context;
        private readonly DbSet<T> set;

        public Repository(DataContext context) {
            // context injected via ioc.
            this.context = context;
            this.set = this.context.Set<T>();
        }

        // Implementation of IRepository<T>
    }

Business Logic Tier (Runtime.csproj)

    // IOC Container
    public static class IOC {
        public static readonly IWindsorContainer Container = new WindsorContainer();

        static IOC() {
            InstallAssembly("Presentation");
            InstallAssembly("Runtime");
            InstallAssembly("DataModel");
        }

        public static void InstallAssembly(string name) {
            Container.Install(FromAssembly.Named(name));
        }
    }

    // Also includes System.Configuration ConfigSections, HttpModules and Provider classes
    // which are strongly tied to IRepository<>.

Presentation Layer (Mvc App) (Presentation.csproj)

    protected void Application_Start() {
        AreaRegistration.RegisterAllAreas();
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);

        // Controller Factory instance makes initial reference to IOC class, causing
        // the installers to run.
        ControllerBuilder.Current.SetControllerFactory(new Runtime.Mvc.WindsorControllerFactory());
    }

    // Data-Driven Controller
    public class DashboardController : SecureController {
        private readonly IRepository<Entry> entries;

        public DashboardController(IRepository<Entry> entries) {
            // entries injected via ioc.
            this.entries = entries;
        }

        [ChildActionOnly, HttpGet]
        public ActionResult RecentEntries() {
            var viewModel = new RecentEntriesViewModel( entries.Where( stuffHappens ) );
            return PartialView(viewModel);
        }
    }

I feel like letting IRepository<> reach the presentation layer may be a violation of the pattern. Exposing an IQueryable<> through IRepository<>.Where seems like a bad idea at the presentation level: there's no way to moderate the use of the IQueryable, and it could lead to brittle code. Inputs?
ASP.NET MVC using an N-Tier Model and DDD
c#;asp.net mvc 3;mvc
null
_scicomp.21372
When reading the literature about the finite element method, the term "hanging nodes" is often encountered. Could anyone tell me what exactly a hanging node is?
What is a hanging node in the finite element meshing?
finite element;mesh generation
The following picture illustrates a mesh with a hanging node and a mesh containing no hanging node: [figure not preserved in this extract]

Usually with a finite element mesh the vertices are shared with their other neighbouring elements, but the circled node does not belong to the bottom triangle. We call this node a hanging node. This commonly occurs during the process of adaptive mesh refinement. Hanging nodes are either removed - as is done in the above picture by connecting it to another vertex and thus creating two new elements - or handled by imposing constraints on the system.
_datascience.11325
Using Orange, I would like to be able to do real-time analysis or visualizations on streaming data. I would appreciate any input on the matter!
In Orange, is it possible to analyze or visualize real time, streaming data?
time series;visualization;orange
null
_unix.274595
I'm new in Unix, and I'm learning about sed command. I need to write a very small script for searching a number in a file and replacing it. I tried doing grep, and than deleting the specific phone number from a file, and then adding another one. But i was told that i can use sed in a script to replace. I know how to use the sed command on a command line for search, and replacing something. But how do i write a script, which lets the user search a number, and then let the user enter the number which they want to replace the old number with. So far what i did was use grep to find a number, that works. but now I'm stuck at how can i let the user add in the new number, so the new number can replace the old one. I tried piping grep to sed, and vice versa. No luck so far. I sound redundant, :/ but I'm really frustrated. Any help will be appreciated :D.
How do i create a sed script to prompt the user to replace a number in a file?
sed;awk;scripting;grep;replace
null
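No accepted answer was recorded; a hedged sketch of the kind of script being asked for (the file name phones.txt is hypothetical, and -i assumes GNU sed):

    #!/bin/sh
    printf 'Number to search for: '
    read old
    if ! grep -q "$old" phones.txt; then
        echo "$old not found." >&2
        exit 1
    fi
    printf 'Replacement number: '
    read new
    sed -i "s/$old/$new/g" phones.txt   # on BSD/macOS use: sed -i '' ...
    echo "Done."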
_webapps.80676
I'm experimenting with Cognito Forms and the JSON WebHooks. The problem I'm running into is how to access images (or any file) uploaded with a form. I need to get the image's URL to download it, but the JSON data received only has an Id field and the file name, and no URL. How can I use the Id to get the full file URL? This will be done automatically by a custom application, so I need it to work programmatically, NOT as a manual ("go to this page and click here") process. Thanks if you can help.

A sample of the JSON data received from the image upload widget looks like this:

    [
      {
        "ContentType": "image/png",
        "Id": "F-RvLAskaPGomphWSa4LFsXk",
        "Name": "Screen Shot 2015-03-08 at 1.34.19 PM.png",
        "Size": 1044716
      }
    ]
How to access full URL to uploaded files in Cognito Forms with JSON WebHook
cognito forms
null
_softwareengineering.194979
Which reasoning would you follow to choose one license over another? Which literature would you expect someone to read if he wants to make a meaningful decision about licensing?

I deliberately don't go into much detail about my current project, because I am looking to learn how to make this decision correctly without getting a degree in law first.
FOSS licensing decision: What to read? What factors to consider?
open source;licensing;decisions
null
_cs.16703
Let $T$ be a tree with a weight function on the edges $w:E\to X$, where $(X,\oplus)$ is a monoid.

Define $f(u,v) = \bigoplus_{i=1}^k w(e_i)$, where $e_1,\ldots,e_k$ are the edges on the unique path from $u$ to $v$.

Can we preprocess the tree using linear (or close to linear) space, so that we can answer queries $f(u,v)$ in $O(\mathrm{polylog}(n))$ monoid operations? The problem becomes more interesting if we allow removing edges and adding an edge between two vertices in distinct trees.

We can also ask the question on a DAG, with weights from a commutative monoid. We want to preprocess in linear time, and query the result of $f(u,v)= \bigoplus_{e} w(e)$, where $e$ ranges over the edges in some path from $u$ to $v$, in $O(\mathrm{polylog}(n))$ time.

If we consider the very special case where the entire graph is just a path, then what we want is a dynamic version of a Fenwick tree. A finger tree can solve that problem in $O(\log n)$ time.
Data structure for finding the sum of edge weights on a path
data structures;graphs
null
_unix.228073
According to this post, stat can be used to show the atime on Linux, but FreeBSD 10.1 doesn't have GNU stat. How do I list the atime for files?
How to list atime for files?
linux;bash;freebsd;ls;stat
    ls -lu

where -l provides the long listing format and -u makes it display the access time instead of the modification time (combine it with -t, i.e. ls -ltu, to also sort by access time).
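If sorting isn't wanted, FreeBSD's own stat(1) can print the access time directly; a hedged addition to the accepted answer:

    # BSD stat: %Sa = access time as a string, %N = file name
    stat -f '%Sa %N' *
    # long listing sorted by access time needs -t as well:
    ls -ltu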
_unix.41740
I understand that the -exec can take a + option to mimic the behaviour of xargs. Is there any situation where you'd prefer one form over the other? I personally tend to prefer the first form, if only to avoid using a pipe. I figure surely the developers of find must've done the appropriate optimizations. Am I correct?
Find -exec + vs find | xargs. Which one to choose?
find;pipe;xargs
You might want to chain calls to find (once you've learned that it is possible, which might be today). This is of course only possible as long as you stay in find; once you pipe to xargs, it's out of scope.

A small example, two files a.lst and b.lst:

    cat a.lst
    fuddel.sh
    fiddel.sh

    cat b.lst
    fuddel.sh

No trick here - simply the fact that both contain "fuddel" but only one contains "fiddel".

Assume we didn't know that. We search for a file which matches 2 conditions:

    find -exec grep -q fuddel {} \; -exec grep -q fiddel {} \; -ls
    192097    4 -rw-r--r--   1 stefan   stefan   20 Jun 27 17:05 ./a.lst

Well - maybe you know the syntax for grep or another program to pass both strings as a condition, but that's not the point. Every program which can return true or false, given a file as an argument, can be used here - grep was just a popular example.

And note, you may follow find -exec with other find commands, like -ls or -delete or something similar. Note that -delete not only does rm (removing files), but rmdir (removing directories) too. Such a chain is read as an AND combination of commands, as long as not otherwise specified (namely with an -or switch (and parens (which need masking))). So you aren't leaving the find chain, which is a handy thing.

I don't see any advantage in using xargs, since you have to be careful in passing the files, which is something find doesn't need to do - it automatically handles passing each file as a single argument for you.

If you believe you need some masking for find's {} braces, feel free to visit my question which asks for evidence. My assertion is: you don't.
_softwareengineering.117411
I've been wanting to try out graphics in Haskell. From what I've seen, the available libraries are either front-ends to C/C++ libraries, or an abstraction of them with minimal features. The high-level libraries do not seem to suit my needs, and so I'm left with lower-level front-ends.

What I need is to render tiles and text - basics for a very simple game. I know how to do this with C, and was thinking I could write the graphics in C and interface it with Haskell. The alternative is to write the graphics using a Haskell library.

My question is: can the available Haskell libraries achieve what I want? I do not want to bend over backwards; if C can do it better, then I would like to know.
Haskell GUI: how much can be done with Haskell?
libraries;gui;haskell
null
_unix.368314
I am trying to speed test some new LTO tape drives but cannot seem to send data to the tape via dd for any block size above 327,680 bytes. I must have a 1M blocksize for my application.

    [root@host]# mt -f /dev/nst0 status
    BOT ONLINE IM_REP_EN
    [root@host]# dd if=/dev/zero of=/dev/nst0 bs=327679
    <this transfers data fine>
    [root@host]# dd if=/dev/zero of=/dev/nst0 bs=327680
    <this transfers data fine>
    [root@host]# dd if=/dev/zero of=/dev/nst0 bs=327681
    Device or resource busy

I have spent many hours trying to debug this. I rebuilt kernels and updated drivers and firmware.

REVELATION: The output of dmesg shows that there is a bufsize somewhere that is set at the exact critical value at which my block sizes hit a wall:

    [root@host]# dmesg | grep bufsize
    [    9.114532] st: Version 20160209, fixed bufsize 327680, s/g segs 64

Anyone know where I can change this bufsize value?
Anyone know where to change this bufsize value? ( st / mt LTO tape drives )
centos;buffer;tape
null
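No accepted answer was recorded. Assuming the kernel's st driver was built with its usual module parameters (see Documentation/scsi/st.txt), the buffer size is adjustable at module load time; treat this as a sketch to verify against your own kernel:

    # current value, if the parameter is exposed:
    cat /sys/module/st/parameters/buffer_kbs
    # reload st with a 1 MiB buffer (no tape device may be in use):
    modprobe -r st && modprobe st buffer_kbs=1024
    # if st is built in, boot with: st.buffer_kbs=1024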
_webmaster.20627
I am new to SEO for blogs (to be more precise, WordPress). I want only pages with a single article to be indexed. This is not because I am afraid of duplicate content, but because I am afraid a person coming from a search engine will land on a multi-post page (like a tag page or a month page), only to find that the keyword he/she looked for matches two irrelevant posts. I also won't know which post the visitor wanted if he/she lands on a tag archive page, because it won't be recorded in the stats.

So to achieve this I should add noindex (now I know there is no need to explicitly specify 'follow') to tag/category/author/date archive pages. What I am wondering is whether I should do this to the index pages (and pages 2/3/... of them) as well. Would this have bad side effects?
Is 'noindex, follow' a good idea for blog's index page?
seo;wordpress;noindex
null
_scicomp.14369
I would greatly appreciate it if you could share some reasons why the Conjugate Gradient iteration for Ax = b might not converge. My matrix A is symmetric positive definite. Thank you so much!

Edit with more information: my matrix is the reduced Hessian in an optimization algorithm for problems with simple constraints. The matrix is 50% sparse and I am using MATLAB.
What are some reasons that Conjugate Gradient iteration does not converge?
iterative method;convergence;conjugate gradient
null
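No accepted answer was recorded; one sanity check worth keeping next to the question (MATLAB, matching the asker's setup): CG's convergence theory requires the matrix to be truly symmetric positive definite, and a reduced Hessian often isn't away from the solution. A hedged diagnostic sketch for a matrix A:

    fprintf('symmetry error: %g\n', norm(A - A', 'fro'));
    [~, flag] = chol((A + A')/2);    % flag == 0 iff positive definite
    fprintf('chol flag (0 = SPD): %d\n', flag);
    fprintf('condition estimate: %g\n', condest(A));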
_unix.276488
When I highlight multiple files in ranger from the same dir, I can open them all in vim with r, then 0 or @, and then typing vim. But if I highlight files from different dirs, then this doesn't work. Any ideas why, and how to get it working?

Bonus question: I would like to open all the files with vim's -o flag (to split the window), but that doesn't work either, not even on files in the same dir.
Open multiple files from ranger in vim
vim;ranger
null
_codereview.164670
I implemented, somewhat closely based on this implementation, a blocking queue with emphasis on filling up first. So if one producer and one consumer thread are using the queue, the producing side gets prioritized.

Code:

    // BQueue.hpp
    //
    #pragma once
    #ifndef _BQUEUE_HPP
    #define _BQUEUE_HPP

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <class T> class BQueue {
    public:
        BQueue(size_t size);
        void push(T item);
        bool pop(T &item);
        /* ALTERNATIVE 1
        void push(std::unique_ptr<T> item);
        bool pop(std::unique_ptr<T> &item);
        */
    private:
        std::mutex _mutex;
        //std::queue<std::unique_ptr<T>> _queue; // ALTERNATIVE 1
        std::queue<T> _queue;
        size_t _size;
        std::condition_variable _condition_full;
        std::condition_variable _condition_empty;
    };

    #include "BQueue.hxx"
    #endif // _BQUEUE_HPP

implementation:

    // BQueue.hxx
    #include "BQueue.hpp"

    #include <condition_variable>
    #include <cstdlib>
    #include <iostream>
    #include <mutex>
    #include <queue>

    template <class T> BQueue<T>::BQueue(size_t size) : _size(size) {}

    // ALTERNATIVE 1
    // template <class T> void BQueue<T>::push(std::unique_ptr<T> item) {
    template <class T> void BQueue<T>::push(T item) {
        std::unique_lock<std::mutex> lock(_mutex);
        while (_queue.size() >= _size) {
            _condition_full.wait(lock, [&]() { return (_queue.size() < _size); });
        }
        _queue.push(std::move(item));
        // _queue.push(item); // ALTERNATIVE 1
        // if queue is full, notify consumption part first
        if (_queue.size() >= _size) {
            _condition_empty.notify_one();
        }
        _condition_full.notify_one();
    }

    // template <class T> bool BQueue<T>::pop(T &item) {
    template <class T> bool BQueue<T>::pop(T &item) {
        std::unique_lock<std::mutex> lock(_mutex);
        while (_queue.empty()) {
            if (!_condition_empty.wait_for(lock, std::chrono::seconds(1),
                                           [&]() { return !_queue.empty(); })) {
                // waited too long for input.
                return false;
            }
        }
        item = std::move(_queue.front());
        // item = _queue.front(); // ALTERNATIVE
        _queue.pop();
        /* THIS FOLLOWING CODE MAY BE NEEDED;
         * KEEP IT IN CASE
         * _condition_empty.notify_one();
         * // if queue is empty, notify production
         * if (_queue.empty()) { //*/
        _condition_full.notify_one();
        // assert(_queue.size() < _size); //*/
        return true;
    }

main:

    // alternatively: class Resource
    struct Resource {
        int x;
    };

    int main() {
        BQueue<std::unique_ptr<struct Resource>> q{40};
        //BQueue<struct Resource> q_alternative1{40};
        std::unique_ptr<struct Resource> res1{new struct Resource};
        res1->x = 42;
        q.push(std::move(res1));
        q.push(std::move(std::unique_ptr<struct Resource>{new struct Resource}));
        for (size_t i = 0; i < 30; i++) {
            std::unique_ptr<struct Resource> res{new struct Resource};
            res->x = i;
            q.push(std::move(res));
        }
        for (size_t i = 0; i < 15; i++)
            q.pop(res1);
        return 0;
    }

Now, I came across the question of resource handling. Which makes more sense: using the code as shown, or implementing the whole thing with std::unique_ptr where necessary (see ALTERNATIVE 1)? std::move even works with non-resource-wrapped types, apparently. And is the main function leak-free, or did I miss anything? Does it work for shared_ptr types as well, or do I have to specialize because pop(T &item) takes a reference?
Blocking Queue implementation with std::unique_ptr
c++;c++11;concurrency;queue
null
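No accepted answer was recorded; on the shared_ptr part of the question specifically, a hedged sketch of why the same template compiles for values, unique_ptr and shared_ptr alike (std::move degrades to a copy for copy-only types, and pop(T&) binds to any T):

    BQueue<int> qi{4};                      // plain values: std::move just copies the int
    qi.push(7);
    int v;
    qi.pop(v);

    BQueue<std::shared_ptr<Resource>> qs{4};
    qs.push(std::make_shared<Resource>()); // moved into the queue
    std::shared_ptr<Resource> sp;
    qs.pop(sp);                             // pop(T&) binds fine; no specialization needed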