id | question | title | tags | accepted_answer
---|---|---|---|---|
_unix.111437 | I'm getting the same error that other questions mention, such as: Installation Error: Kernel Panic - not syncing, and https://askubuntu.com/questions/41930/kernel-panic-not-syncing-vfs-unable-to-mount-root-fs-on-unknown-block0-0. However, I don't think their solutions fit my scenario. This started happening when I tried to install Ubuntu 12.04 into a new partition (/dev/sda5). The system has two hard disks (sda and sdb), and both already hold two other OSs: Windows 8 on sda and Ubuntu 13.04 on sdb. So when I installed Ubuntu 12.04 I told it to install the boot loader on sdb (thinking it would overwrite the existing one), but it failed. So I told it to use sda instead, and then it apparently didn't fail. However, when starting up the computer, the previous Ubuntu 13.04 boot menu appears, with no entry for the new Ubuntu 12.04. I tried to boot manually from the other disk (sda) and it starts Windows, so I guess the boot loader couldn't be installed anywhere in the end. So, to make the sdb boot loader recognize the new Ubuntu 12.04 partition, I started Ubuntu 13.04 and ran sudo update-grub from there; this command found the new OS, and there's a new entry in grub now. But when I try to boot it, I get: error: no such device: 15ee4fac-ec5d-409b-88c2-7350e76d2e98 / Press any key to continue... Then I press a key, and get: Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0). The solutions provided in other questions don't really apply to me, I think, because I have many partitions with different OSs, while the other questions seemed to focus on a single Linux OS. So I'm not sure I can safely run update-initramfs -u -k <version>, or where to run it from (a LiveCD?), or how to guess the proper kernel version. | Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) | partition;system installation;dual boot;boot loader | null |
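One way the asker could proceed is the chroot-from-a-live-session repair. The sketch below is a dry-run: it only prints the commands for review instead of executing them. /dev/sda5 (the new Ubuntu 12.04 root) and sdb come from the question; the kernel version 3.2.0-23-generic is an assumption — whatever `ls /lib/modules` shows inside the chroot is what you would actually pass to -k.

```shell
# Dry-run sketch: collect the live-session repair steps, then print them
# for review instead of executing anything.
steps=$(cat <<'EOF'
sudo mount /dev/sda5 /mnt                 # the new Ubuntu 12.04 root
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt ls /lib/modules          # shows the kernel version to pass to -k
sudo chroot /mnt update-initramfs -u -k 3.2.0-23-generic
sudo chroot /mnt grub-install /dev/sdb    # put GRUB on the disk the BIOS boots
sudo chroot /mnt update-grub
EOF
)
printf '%s\n' "$steps"
```

Run the printed commands one at a time from the LiveCD session once the device names are confirmed with blkid.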
_unix.283597 | I have a C program which I want to run as a daemon. I'm working on Ubuntu 14.04 LTS. What is the right way to do that? Can anyone help? | How to run a C program as a daemon? | ubuntu | null |
_codereview.48895 | Here's how I wrote the Haskell splitAt function:

splitAt' :: Int -> [a] -> ([a], [a])
splitAt' n ys
  | n < 0     = ([], ys)
  | otherwise = splitAt'' n ys []
  where
    splitAt'' a (x:xs) acc
      | a == 0    = (acc, x:xs)
      | null xs   = (acc ++ [x], [])
      | otherwise = splitAt'' (a-1) xs (acc ++ [x])

I don't like that I'm using the append (++) function to add an element to the end of my acc(umulator). But, given the importance of ordering, I'm not sure how to avoid using it. Please review this code as well. | Haskell#splitAt | haskell;reinventing the wheel | You should be able to implement it without ++.

splitAt' :: Int -> [a] -> ([a], [a])
splitAt' 0 ys = ([], ys)
splitAt' _ [] = ([], [])
splitAt' n (y:ys)
  | n < 0     = ([], (y:ys))
  | otherwise = ((y:a), b)
  where (a, b) = splitAt' (n - 1) ys |
_unix.102061 | When I start an SSH session that executes a long-running command, what happens with Ctrl+C (SIGINT) handling?I can see that the SSH session is closed, but I am not sure who gets the SIGINT first: is it...the remote long-running command? that is, (a) the signal handler in the remote command is called and stops the remote command, (b) the shell that spawned it detects that the command stopped, and stops as well (c) the remote sshd detects the shell stopped, so it closes the connectionorthe local ssh receives the signal, and closes the connection.I think that (1) is happening, but want to make sure. I am also uncertain about what happens with shell handling of SIGINTs in this case. For example, if I ...ssh remote 'while true ; do sleep 1 ; date ; done'and Ctrl+C, then the remote connection is dropped. Is there a way to run the remote command under a shell that will stay alive after Ctrl+C? That is, in this case, stop the loop and allow me to keep working on the remote shell? | Ctrl-C handling in SSH session | ssh;signals | ssh can be invoked in a few different ways, each resulting slightly different treatment of terminal-initiated signals like Ctrl-C.ssh remotehost will run an interactive session on remotehost. On the client side, ssh will try to set the tty used by stdin to raw mode, and sshd on the remote host will allocate a pseudo-tty and run your shell as a login shell (e.g. -bash).Setting raw mode means that characters that would normally send signals (such as Ctrl-C and Ctrl-\) are instead just inserted into the input stream. ssh will send such characters as-is to the remote host, where they will likely send SIGINT or SIGQUIT and, typically, kill any command and return you to a shell on the remote host. The ssh connection will stay alive, as long as the remote shell is alive.ssh -t remotehost command args ... will run an interactive session on remotehost, just like the above, except on the remote side, your_shell -c command args ... will be run. 
As above, if you type Ctrl-C, it will be sent to the remote host, where the command will likely receive SIGINT and immediately exit, and then the remote shell will exit. The remote sshd then closes the connection, and ssh reports Connection to remotehost closed.ssh remotehost command args ... will run a non-interactive session on remotehost. On the client side, ssh will not set the tty to raw mode (well, except to read in a password or passphrase). If you type Ctrl-C, ssh will get sent SIGINT and will immediately be terminated, without even issuing a Connection to remotehost closed message.The your_shell -c command args ... processes will likely remain running on the remote host. Either they will exit on their own, or one process will try to write data to the now-closed ssh socket, which will cause a (typically) fatal SIGPIPE signal to be sent to it. |
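The death-by-SIGINT described above can be demonstrated locally, without an ssh connection: shells report a command killed by signal N with exit status 128+N, so SIGINT (signal 2) yields 130. This sketch assumes GNU coreutils timeout and uses it to deliver the interrupt that Ctrl-C would normally send. As for the question's last part — keeping the remote shell alive after Ctrl-C — the usual practical answer is to run the long command inside screen or tmux on the remote host, so the interrupt kills only the foreground command, not the session.

```shell
# Deliver SIGINT to a long-running command the way Ctrl-C would,
# and observe the resulting exit status (128 + 2 = 130).
# Assumes GNU coreutils 'timeout' with --preserve-status.
status=0
timeout --preserve-status -s INT 1 sleep 30 || status=$?
echo "sleep exited with status $status"
```

Without --preserve-status, timeout would report its own generic 124 instead of the signal-derived status of the child.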
_unix.362478 | I've installed Linux Mint 18.1 64-Bit to get away from Windows, however I'm having a few issues. I selected to encrypt my hard drive at the installation of Linux Mint, when it installs and shows the screen where you enter your password there is no input from the keyboard...I managed to gain access by setting no splash in the grub file at /etc/default/grub to use shell access to enter the password, not very ideal but it is the only way for me to gain access after hard rebooting to use recovery mode.So I thought why not update the kernel and see if that works. I tried 4.4 and no luck, I tried 4.8 and no luck and I tried 4.10 and no luck.So after getting to 4.10 on the kernel I added hid-generic to /etc/initramfs-tools/modules and then updated initramfs and still no luck entering my password at the splash screen.Also on Mint on all the kernels I tried restarting does not work either, i choose to restart my PC, all I get after Mint has gone is a black screen and my keyboard lights flash 2 times before the PC just sits there with a black screen.This is like very basic stuff that should be working, I love Linux and moving from Windows is what I've always wanted to do but this basic functionality is such a deal breaker for me.Is there anything I can do or at least explain why everything mentioned does not work correctly?My hardware:Linux Mint 18.1Kernel: 4.10.0-20CPU Intel i7-6700KRAM: 32 GB DDR4Boot Drive NVME M.2 512GB SamsungGPU: Nvidia 1080 with their drivers installedLatest updates from the Mint update manager | Linux Mint 18.1 Keyboard not working at cryptsetup before boot | linux mint;linux kernel;keyboard;cryptsetup | null |
_webmaster.7035 | I want to make a clean URL from a normal URL:

// Normal URL
home.php?kategori=dress

// Clean URL
home.php/dress

How can I build a clean URL like the one above? | Basic way to make a clean URL from a normal URL | web development;php | You're looking for mod_rewrite |
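A minimal .htaccess sketch of what that answer points at; the parameter name (kategori) and script name come from the question, while the character class is an assumption to widen or narrow to taste:

```apache
RewriteEngine On
# Don't rewrite requests for files or directories that really exist
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# /dress  ->  home.php?kategori=dress
RewriteRule ^([A-Za-z0-9_-]+)/?$ home.php?kategori=$1 [L,QSA]
```

The QSA flag keeps any extra query-string parameters, and L stops further rewriting once the rule matches.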
_unix.94316 | I've been getting some suggestions on how to figure out why my serial port is busy. Specifically, when I try to start gammu-smsd, it refuses to start on /dev/ttyS0 because it says that port is busy:sudo /etc/init.d/gammu-smsd startSep 30 16:16:51 porkypig gammu-smsd[25355]: Starting phone communication...Sep 30 16:16:51 porkypig gammu-smsd[25355]: gammu: [Gammu - 1.26.1 built 21:46:06 Nov 24 2009 using GCC 4.4]Sep 30 16:16:51 porkypig gammu-smsd[25355]: gammu: [Connection - at115200]Sep 30 16:16:51 porkypig gammu-smsd[25355]: gammu: [Connection index - 0]Sep 30 16:16:51 porkypig gammu-smsd[25355]: gammu: [Model type - ]Sep 30 16:16:51 porkypig gammu-smsd[25355]: gammu: [Device - /dev/ttyS0]Sep 30 16:16:51 porkypig gammu-smsd[25355]: gammu: [Runing on - Linux, kernel 2.6.32-42-server (#95-Ubuntu SMP Wed Jul 25 16:10:49 UTC 2012)]Sep 30 16:16:51 porkypig gammu-smsd[25355]: gammu: [System error - open in serial_open, 16, Device or resource busy]Sep 30 16:16:51 porkypig gammu-smsd[25355]: gammu: Init:GSM_TryGetModel failed with error DEVICEOPENERROR[2]: Error opening device. Unknown, busy or no permissions.Sep 30 16:16:51 porkypig gammu-smsd[25355]: Can't open device (Error opening device. Unknown, busy or no permissions.:2)Sep 30 16:16:51 porkypig gammu-smsd[25355]: Using PGSQL serviceSep 30 16:16:51 porkypig gammu-smsd[25355]: Disconnecting from PostgreSQLI used two different commands. Both of them find different processes culpable. 
First I try fuser:fuser -m -u /dev/ttyS0 /dev/ttyS0: 21624(guarddoggps)cd /proc/21624cat statusName: dropboxState: S (sleeping)Tgid: 21624Pid: 21624PPid: 1TracerPid: 0Uid: 1001 1001 1001 1001Gid: 1001 1001 1001 1001FDSize: 64Groups: 5 27 1001 5004 VmPeak: 873732 kBVmSize: 806040 kBVmLck: 0 kBVmHWM: 207668 kBVmRSS: 131864 kBVmData: 547820 kBVmStk: 160 kBVmExe: 3524 kBVmLib: 29660 kBVmPTE: 1244 kBThreads: 21SigQ: 0/16382SigPnd: 0000000000000000ShdPnd: 0000000000000000SigBlk: 0000000000000000SigIgn: 0000000001001000SigCgt: 00000001800004c8CapInh: 0000000000000000CapPrm: 0000000000000000CapEff: 0000000000000000CapBnd: ffffffffffffffffCpus_allowed: ffCpus_allowed_list: 0-7Mems_allowed: 00000000,00000001Mems_allowed_list: 0voluntary_ctxt_switches: 202nonvoluntary_ctxt_switches: 1So fuser says dropbox is using it.Then I use lsof:sudo lsof | grep ttyS0screen 23520 root 6u CHR 4,64 0t0 1421 /dev/ttyS0lsof says screen (rather than dropbox) is using it. So which of these programs (dropbox or screen) is really causing gammu-smsd to refuse to start because of the resourcing being busy? | fuser vs lsof to check files in use | lsof;fuser | The short answer is: screen.The slightly longer answer is that the -m flag to fuser tells it to list everything using the mountpoint. Depending on your setup, that probably means all of /dev, but it could also be /. Clearly not what you intended. You'll get a very long list if you do fuser -vm /dev/ttyS0, over 60 lines on my system.Take off the -m and it'll probably give you the same answer as lsof did. |
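The per-file versus per-mount difference in the answer is easy to reproduce: hold a file open, then compare what each tool reports. A sketch — it assumes psmisc's fuser and lsof are installed, and degrades to a notice if they aren't:

```shell
# Hold a temp file open on fd 3, then compare per-file vs per-mount queries.
tmp=$(mktemp)
exec 3<"$tmp"    # this shell now holds the file open

if command -v fuser >/dev/null 2>&1; then
    echo "== fuser on the file itself (who uses THIS file):"
    fuser -v "$tmp" 2>&1 || true
    echo "== word count of fuser -m output (who uses the WHOLE filesystem):"
    fuser -m "$tmp" 2>/dev/null | wc -w || true
else
    echo "fuser not installed (package psmisc)"
fi

if command -v lsof >/dev/null 2>&1; then
    echo "== lsof on the file (per-file, like fuser without -m):"
    lsof "$tmp" || true
else
    echo "lsof not installed"
fi

exec 3<&-        # release the file
rm -f "$tmp"
demo_done=yes
echo "demo finished"
```

The -m run typically lists far more PIDs, because it reports every process with anything open on the same filesystem — exactly why the question's fuser -m blamed an unrelated process.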
_webapps.99283 | I have come across a rather strange situation. I use Google Apps for Education at work. I have an account that essentially works as a ticketing system for certain types of requests. I use a coding system to name the requests so that I can find them easily later. The coding system is as follows:[Category] [(YYYY-MM-DD)] [Title] [Coding Symbols]An example is H3 (2016-10-07) Formation of Govt American Rev +>Nearly all of the time, this works perfectly. However, I have come across one specific example in which it does not work:Gv (2016-09-20) Q1 Before the ConstitutionI have no idea why this one case does not work, but I have narrowed the problem to the date. When I search for Gv Q1 Before the Constitution, The result comes up without a problem. However, any combination of search terms that includes (2016-09-20) generates results that do not include the relevant email strand. The same is true when I search for 2016-09-20, 2016-09-20, or (2016-09-20). Just to be sure I hadn't accidentally use some other random character that looked just like one of the intended ones, I replied to that same email strand with a wide variety of variations of the dysfunctional search term, including (2016-09-20), 2016-09-20, 2016-09-20, and 2016 09 20. Considering the last two, you'd think that searching for 2016-09-20 would include the desired conversation in the results, but no such luck.Now, it can't be something specific to the date itself, since I can find the email strands for the requests named AU (2016-09-20) U1 Native Americans +> and UAM (2016-09-20) Election Theory > without a problem. I also don't believe that the problem is related to Gmail attempting to use the text string as an actual date of an email message, since none of the three conversations with (2016-09-20) have a message sent on that date (so there is no reason that Gv (2016-09-20) Q1 Before the Constitution should have different results).Does anyone have an explanation for this? 
More specifically, can anyone provide me with a way to force Gmail search to return all email strands that contain the exact text string (2016-09-20)? | Gmail search sometimes fails when text looks like a date | gmail;gmail search | null |
_vi.8111 | I want to insert the Unicode character Thorn into my document: http://www.fileformat.info/info/unicode/char/00de/index.htm. The character seems to be inserted when I type i<C-v>u00DE<esc>, but it is displayed as the replacement character: http://www.fileformat.info/info/unicode/char/fffd/index.htm. I assume this is because there is no glyph that vim can map to the Unicode character. I am not sure how to proceed to get vim to display that character correctly. Note: This is different from the question marked as duplicate in that I already insert the correct Unicode character; the problem is that it is not displayed correctly. | Latin Capital Letter Thorn | unicode;font | null |
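No answer was accepted here, but when a correctly-inserted code point renders as U+FFFD the usual checklist is encoding and font, not the insertion itself. A config sketch of things to verify — the guifont value is only an example of a font known to contain Þ:

```vim
" Inside vim, confirm the buffer really holds U+00DE: put the cursor on
" the character and press ga (prints its decimal/hex/octal value).
set encoding=utf-8        " vim's internal encoding
set fileencoding=utf-8    " encoding used when writing the file out
" GUI vim draws its own glyphs, so pick a font that has one for U+00DE:
" set guifont=DejaVu\ Sans\ Mono\ 11
" In terminal vim, the terminal emulator's font is what matters instead.
" Unrelated tip: the digraph TH (i_CTRL-K T H) also inserts Þ.
```

If ga reports 0x00DE, the buffer is fine and the replacement glyph is purely a display-side (font or terminal) problem.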
_unix.101175 | I have the following variable:

x=envVar

and 'envVar' is the name of one of my environment variables, containing a path (of a folder, for instance). So I'd like to do cd $x, but it doesn't work. How can I use x's value as the name of the environment variable? I wasn't able to make it work with eval. | How to retrieve environment variable from string name in KSH | environment variables;ksh | If you have ksh 93, you can declare x to be a reference to a variable name:

$ ksh --version
  version         sh (AT&T Research) 93u+ 2012-08-01
$ ksh -c '
  envVar=foo
  x=envVar
  nameref x
  echo $x
'
foo |
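For ksh88 or other shells without nameref, the eval route the asker tried can be made to work; the quoting is the usual stumbling block. A sketch, where envVar=/tmp is a stand-in for the real variable:

```shell
envVar=/tmp          # stand-in: the environment variable holding a path
x=envVar             # x holds the *name* of that variable

# Expand "the variable whose name is in x" -- note the escaped first $:
eval "target=\$$x"
echo "target is $target"
cd "$target" && pwd
```

The escaped `\$` survives the first pass, so eval sees and evaluates `target=$envVar`; writing `eval cd $x` fails because it only ever sees the literal name.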
_unix.289680 | If I am correct, the output of free comes from reading /proc/meminfo. In the output of top, does the memory summary (the part not specific to any process) also come from /proc/meminfo? And which system files does the per-process memory information come from? Thanks. | Does top read some system files? | linux;top;virtual memory | null |
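Both guesses are right on Linux: free and top's summary lines read /proc/meminfo, while the per-process numbers come from /proc/<pid>/status (VmRSS, VmSize, ...) or /proc/<pid>/statm. A small parsing sketch — the sample text stands in for the real file so the snippet runs anywhere:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:  value kB' lines into {key: int}.

    Non-numeric lines (like 'Name: bash' in /proc/<pid>/status) are skipped,
    so the same parser works on both files."""
    info = {}
    for line in text.splitlines():
        key, sep, rest = line.partition(":")
        fields = rest.split()
        if sep and fields and fields[0].lstrip("-").isdigit():
            info[key.strip()] = int(fields[0])
    return info

# Stand-in sample so the sketch runs without a Linux /proc mounted:
sample = "MemTotal:  2048000 kB\nMemFree:   512000 kB\nCached:    256000 kB"
print(parse_meminfo(sample)["MemFree"])   # 512000

# On a real Linux system you would read the actual files:
#   with open("/proc/meminfo") as f:     summary = parse_meminfo(f.read())
#   with open("/proc/self/status") as f: mine = parse_meminfo(f.read())
#   (VmRSS / VmSize in the latter are the per-process figures top shows)
```

proc(5) documents every field; note that most values are in kB, which top then rescales for display.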
_codereview.46032 | I'm coding a little class hierarchy printing tool for easy show hierarchies of java classes.This is my current code:import java.io.FileInputStream;import java.io.FileOutputStream;import java.io.PrintStream;import java.util.ArrayList;import java.util.Arrays;import java.util.Collections;import java.util.List;import java.util.TreeMap;import javax.swing.JDialog;import jdk.nashorn.internal.runtime.PropertyMap; // since Java 8, just for testingimport com.sun.javafx.collections.TrackableObservableList;public class PrintClassHierarchy { private static final String PADDING = ; private static final String PADDING_WITH_COLUMN = | ; private static final String PADDING_WITH_ENTRY = |--- ; private static final String BASE_CLASS = Object.class.getName(); private TreeMap<String, List<String>> entries; private boolean[] moreToCome; public static void main(final String[] args) { new PrintClassHierarchy().printHierarchy( PrintStream.class, FileOutputStream.class, FileInputStream.class, TrackableObservableList.class, PropertyMap.class, JDialog.class ); } public void printHierarchy(final Class<?>... clazzes) { // clean values entries = new TreeMap<>(); moreToCome = new boolean[99]; // get all entries of tree traverseClasses(clazzes); // print collected entries as ASCII tree printHierarchy(BASE_CLASS, 0); } private void printHierarchy(final String node, final int level) { for (int i = 1; i < level; i++) { System.out.print(moreToCome[i - 1] ? PADDING_WITH_COLUMN : PADDING); } if (level > 0) { System.out.print(PADDING_WITH_ENTRY); } System.out.println(node); if (entries.containsKey(node)) { final List<String> list = entries.get(node); for (int i = 0; i < list.size(); i++) { moreToCome[level] = (i < list.size() - 1); printHierarchy(list.get(i), level + 1); } } } private void traverseClasses(final Class<?>... 
clazzes) { Arrays.asList(clazzes).forEach(c -> traverseClasses(c, 0)); } private void traverseClasses(final Class<?> clazz, final int level) { final Class<?> superClazz = clazz.getSuperclass(); if (superClazz == null) { return; } final String name = clazz.getName(); final String superName = superClazz.getName(); if (entries.containsKey(superName)) { final List<String> list = entries.get(superName); if (!list.contains(name)) { list.add(name); Collections.sort(list); // SortedList } } else { entries.put(superName, new ArrayList<String>(Arrays.asList(name))); } traverseClasses(superClazz, level + 1); }}This prints out:java.lang.Object |--- java.awt.Component | |--- java.awt.Container | |--- java.awt.Window | |--- java.awt.Dialog | |--- javax.swing.JDialog |--- java.io.InputStream | |--- java.io.FileInputStream |--- java.io.OutputStream | |--- java.io.FileOutputStream | |--- java.io.FilterOutputStream | |--- java.io.PrintStream |--- java.util.AbstractCollection | |--- java.util.AbstractList | |--- javafx.collections.ObservableListBase | |--- javafx.collections.ModifiableObservableListBase | |--- com.sun.javafx.collections.ObservableListWrapper | |--- com.sun.javafx.collections.TrackableObservableList |--- jdk.nashorn.internal.runtime.PropertyMapThis output is already valid. I want to know, if this is a good approach to do the task, or if there is another (better) way to do this?I'm not sure, if the entries and moreToCome global variables are the way to go.Or if you have general improvements, please let me know!Edit 1:I reworked the code to find out the maximum level in hierarchy to discover how big the boolean array has to be. Also I reworked the method calls and put the traverseClasses into the constructor as recommended. Furthermore I renamed some variables to more descriptive names and added some comments. 
moreToCome is now moreClassesInHierarchy.Second approach:import java.io.FileInputStream;import java.io.FileOutputStream;import java.io.PrintStream;import java.util.ArrayList;import java.util.Arrays;import java.util.Collections;import java.util.List;import java.util.TreeMap;import javax.swing.JDialog;import jdk.nashorn.internal.runtime.PropertyMap; // since Java 8, just for testingimport com.sun.javafx.collections.TrackableObservableList;public class PrintClassHierarchy { private static final String PADDING = ; private static final String PADDING_WITH_COLUMN = | ; private static final String PADDING_WITH_ENTRY = |--- ; private static final String BASE_CLASS = Object.class.getName(); private final TreeMap<String, List<String>> subClazzEntries; private final boolean[] moreClassesInHierarchy; private int maxLevel = 0; public static void main(final String[] args) { new PrintClassHierarchy( PrintStream.class, FileOutputStream.class, FileInputStream.class, TrackableObservableList.class, PropertyMap.class, JDialog.class ).printHierarchy(); } public PrintClassHierarchy(final Class<?>... clazzes) { subClazzEntries = new TreeMap<>(); // get all entries of tree traverseClasses(clazzes); // initialize array with size of maximum class hierarchy level moreClassesInHierarchy = new boolean[maxLevel]; } public void printHierarchy() { // print collected entries as ASCII tree printHierarchy(BASE_CLASS, 0); } private void printHierarchy(final String clazzName, final int level) { for (int i = 1; i < level; i++) { // The flag moreToCome holds an identifier, if there is another class // on the specific value, that comes beneath the current class. // So print either '|' or ' '. System.out.print(moreClassesInHierarchy[i - 1] ? 
PADDING_WITH_COLUMN : PADDING); } if (level > 0) { System.out.print(PADDING_WITH_ENTRY); } System.out.println(clazzName); if (subClazzEntries.containsKey(clazzName)) { final List<String> list = subClazzEntries.get(clazzName); for (int i = 0; i < list.size(); i++) { // if there is another class that comes beneath the next class, // flag this level moreClassesInHierarchy[level] = (i < list.size() - 1); printHierarchy(list.get(i), level + 1); } } } private void traverseClasses(final Class<?>... clazzes) { // do the traverseClasses on each provided class (possible since Java 8) Arrays.asList(clazzes).forEach(c -> traverseClasses(c, 0)); } private void traverseClasses(final Class<?> clazz, final int level) { final Class<?> superClazz = clazz.getSuperclass(); if (level > maxLevel) { // discover maximum level maxLevel = level; } if (superClazz == null) { // we arrived java.lang.Object return; } final String name = clazz.getName(); final String superName = superClazz.getName(); if (subClazzEntries.containsKey(superName)) { final List<String> list = subClazzEntries.get(superName); if (!list.contains(name)) { list.add(name); Collections.sort(list); // SortedList } } else { subClazzEntries.put(superName, new ArrayList<String>(Arrays.asList(name))); } traverseClasses(superClazz, level + 1); }}Edit 2:Now I implemented the Stack solution from Uri Agassi in my code.This is much more cleaner, since I don't have to care about the Stack size and the current level. It just fits the requirement!Furthermore I got rid of the TreeMap since I don't really need a sorted map. 
Therefore Map (HashMap) remains.Third approach:import java.io.FileInputStream;import java.io.FileOutputStream;import java.io.PrintStream;import java.util.ArrayList;import java.util.Arrays;import java.util.Collections;import java.util.HashMap;import java.util.List;import java.util.Map;import java.util.Stack;import javax.swing.JDialog;import jdk.nashorn.internal.runtime.PropertyMap; // since Java 8, just for testingimport com.sun.javafx.collections.TrackableObservableList;public class PrintClassHierarchy { private static final String PADDING = ; private static final String PADDING_WITH_COLUMN = | ; private static final String PADDING_WITH_ENTRY = |--- ; private static final String BASE_CLASS = Object.class.getName(); private final Map<String, List<String>> subClazzEntries = new HashMap<>(); public static void main(final String[] args) { new PrintClassHierarchy( PrintStream.class, FileOutputStream.class, FileInputStream.class, TrackableObservableList.class, PropertyMap.class, JDialog.class ).printHierarchy(); } public PrintClassHierarchy(final Class<?>... clazzes) { // get all entries of tree traverseClasses(clazzes); } public void printHierarchy() { // print collected entries as ASCII tree printHierarchy(BASE_CLASS, new Stack<Boolean>()); } private void printHierarchy(final String clazzName, final Stack<Boolean> moreClassesInHierarchy) { if (!moreClassesInHierarchy.empty()) { for (final Boolean hasColumn : moreClassesInHierarchy.subList(0, moreClassesInHierarchy.size() - 1)) { System.out.print(hasColumn.booleanValue() ? 
PADDING_WITH_COLUMN : PADDING); } } if (!moreClassesInHierarchy.empty()) { System.out.print(PADDING_WITH_ENTRY); } System.out.println(clazzName); if (subClazzEntries.containsKey(clazzName)) { final List<String> list = subClazzEntries.get(clazzName); for (int i = 0; i < list.size(); i++) { // if there is another class that comes beneath the next class, flag this level moreClassesInHierarchy.push(new Boolean(i < list.size() - 1)); printHierarchy(list.get(i), moreClassesInHierarchy); moreClassesInHierarchy.removeElementAt(moreClassesInHierarchy.size() - 1); } } } private void traverseClasses(final Class<?>... clazzes) { // do the traverseClasses on each provided class (possible since Java 8) Arrays.asList(clazzes).forEach(c -> traverseClasses(c, 0)); } private void traverseClasses(final Class<?> clazz, final int level) { final Class<?> superClazz = clazz.getSuperclass(); if (superClazz == null) { // we arrived java.lang.Object return; } final String name = clazz.getName(); final String superName = superClazz.getName(); if (subClazzEntries.containsKey(superName)) { final List<String> list = subClazzEntries.get(superName); if (!list.contains(name)) { list.add(name); Collections.sort(list); // SortedList } } else { subClazzEntries.put(superName, new ArrayList<String>(Arrays.asList(name))); } traverseClasses(superClazz, level + 1); }} | Class for printing class hierarchy as text | java;classes;tree;inheritance | null |
_computergraphics.326 | To render an image for use with red-and-blue 3D glasses, the usual way is to render from one point of view, convert it to a single intensity (greyscale) value per pixel, and put that into the red color channel. Then render from a slightly different point of view, convert that to greyscale again, and put it into the blue channel. This can be prohibitive when rendering a single time is already very costly. Are there any methods by which you could take a single render, and the corresponding depth buffer, and come up with something suitable for both red and blue channels? | Is it possible to render red/blue 3D from one image and a depth buffer? | rendering;3d;stereo rendering | I've done some VR research; this comes up a lot, since rendering the scene multiple times (especially at predicted VR resolutions) is expensive. The basic problem is that two views provide more information than only one. In particular, you have two slices of the light field instead of one. It's related to depth of field: screen-space methods are fundamentally incorrect. There has been some work in this area, most of it related to reprojection techniques that try to do some kind of geometry-aware hole-filling in the final image. This sort of works. As far as I know, the best approach so far is to render the scene directly for the dominant eye, and then reproject it to the other one. |
_unix.369374 | I'm going to copy this largely from a post of mine on reddit, because I'm inhibited by a broken wrist on my dominant hand:Due to that broken wrist I've been reduced to one-handed hunt'n'peck.And the solution to that is what I'm trying to work out. What I want to do is set up a system event that - while happening - switches my keyboard to a horizontally flipped layout.Ideally, I'd like to place a game controller on the floor and be able to flip my keyboard by pushing a button with my toe.Some specs:Using a PC and running Linux Mint 17, with a KDE 4.13.2 desktop.Bonus: At work I use a Macbook Pro with an up-to-date MacOS and would like to do the same.Also note that I'm using a Dvorak keyboard map, and can touch-type that, but not on a QWERTY, so I'll have to stick to Dvorak for this to work.So, in KDE, I can see that in System Settings -> Hardware -> Input Devices -> Keyboard, I have a Switching to another layout option. Unfortunately, all of the options involve keyboard events.Also under System Settings I can go to Common Appearance and Behavior -> Shortcuts and Gestures -> Custom Shortcuts, and capture events to run anything I want. Unfortunately, it too only captures keyboard events.At that point I'm quite stuck.I've no Idea what to do for MacOS, But haven't really tried either; trying to get my home machine going first.I also found a partial solution from none other than Randall Munroe of XKCD fame.I'll have to edit that to match my Dvorak layout, but no big deal. The problem is that it still depends on a keyboard event.Any suggestions on how to trigger that key signal upon clicking a controller button?----edit----As per dirkt's suggestion, Here's the output from evtest on the down event:Event: time 1496762047.394575, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002Event: time 1496762047.394575, type 1 (EV_KEY), code 289 (BTN_THUMB), value 1And of course the up event is the same, but with a BTN_THUMB value of zero. 
| How to change keyboard layout with a game controller | linux mint;keyboard layout;keyboard event | null |
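No answer was recorded, but the question's evtest output gives everything needed to script the toggle. The sketch below separates the decision logic (testable anywhere) from the device loop, which assumes the python-evdev package and a hypothetical device node; "mirrored" is a placeholder for however the flipped layout is loaded — for instance the mirrored xmodmap file from the XKCD idea the asker mentions.

```python
# Constants taken from the question's evtest output:
#   type 1 = EV_KEY, code 289 = BTN_THUMB, value 1 press / 0 release.
EV_KEY = 1
BTN_THUMB = 289

def layout_for_event(ev_type, code, value):
    """Decide which layout to activate for one input event.

    Hold-to-flip: pressing the pad button switches to the mirrored
    layout, releasing it restores the normal one.  Returns None for
    events we don't care about (e.g. the MSC_SCAN lines)."""
    if ev_type != EV_KEY or code != BTN_THUMB:
        return None
    return "mirrored" if value == 1 else "normal"

# Device loop -- requires python-evdev and a real controller node:
# from evdev import InputDevice
# import subprocess
# dev = InputDevice("/dev/input/event5")            # hypothetical node
# for event in dev.read_loop():
#     which = layout_for_event(event.type, event.code, event.value)
#     if which == "mirrored":
#         subprocess.run(["xmodmap", "mirrorboard.xmodmap"])  # placeholder file
#     elif which == "normal":
#         subprocess.run(["setxkbmap", "dvorak"])
print(layout_for_event(EV_KEY, BTN_THUMB, 1))   # mirrored
```

Running the loop as the desktop user (not root) keeps xmodmap/setxkbmap pointed at the right X display; the user may need to be in the input group to open the event node.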
_webmaster.52764 | I have identified some bot traffic in GA, but to filter it accurately I need to filter it by two dimensions, namely Browser and ISPTo be clear, I don't want to apply both a filter to block the entire ISP and the entire Browser segments, but only the combination of the twoTo illustrate:ga profile filter by two dimensions http://c714091.r91.cf2.rackcdn.com/87cff1afa51dfe665e62b5dbb0c22e0345557e0918.pngCould someone explain how to do this? It's not apparent using the interface and I'm not able to find any documentation about it | Google Analytics - Profile filter with more than one dimension? | google analytics | null |
_codereview.25917 | <?phpsession_start();include_once(model/Model.php);class Controller { public $model; public function __construct() { $this->model = new Model(); } public function invoke() { if (isset($_GET['page']) && $_GET['page'] == registration) { if(isset($_POST['username']) && isset($_POST['password'])) { // data send for registration $user = $this->model->userRegistration($_POST['username'], $_POST['password'], $_POST['name'], $_POST['familyname'], $_POST['country'], $_POST['age'], $_POST['degree']); if($user->IsRegistered){ include 'view/registered.php'; } else { include 'view/registration.php'; } } else { include 'view/registration.php'; } } // if end for registration page elseif (isset($_GET['page']) && $_GET['page'] == login) { if(isset($_POST['username']) && isset($_POST['password'])) { // data send for login $user = $this->model->userLogin($_POST['username'], $_POST['password']); if($user->IsLogin){ $_SESSION[username] = $user->Username; include 'view/home.php'; } else { include 'view/login.php'; } } else { include 'view/login.php'; } } // elseif end for login page elseif (isset($_GET['page']) && $_GET['page'] == edit_registered) { if(isset($_REQUEST['eid'])) { // data send for edit registration for a particular user $user = $this->model->userEditRegistration($_POST['username'], $_POST['password'], $_POST['name'], $_POST['familyname'], $_POST['country'], $_POST['age'], $_POST['degree'], $_REQUEST['eid']); if($user->IsRegistered){ include 'view/registered.php'; } else { include 'view/edit_registered.php'; } } else { include 'view/edit_registered.php'; } } // elseif end for edit registration page elseif (isset($_GET['page']) && $_GET['page'] == delete_registered) { if(isset($_REQUEST['id'])) { // id send for delete registration for a particular user $user = $this->model->userDeleteRegistration($_REQUEST['id']); if($user->IsRegistered){ include 'view/registration.php'; } else { include 'view/edit_registered.php'; } } else { include 
'view/edit_registered.php'; } } // elseif end for edit registration page since after delete it is redirecting to this page elseif (isset($_GET['page']) && $_GET['page'] == logout) { // session is destroyed unset($_SESSION[username]); include 'view/home.php'; } // elseif end for logout elseif (isset($_GET['page']) && $_GET['page'] == intro) { include 'view/intro.php'; } // elseif end for content page intro elseif (isset($_GET['page']) && $_GET['page'] == leedsu) { include 'view/leedsu.php'; } // elseif end for content page leedsu elseif (isset($_GET['page']) && $_GET['page'] == leedsuleedsmet) { include 'view/leedsuleedsmet.php'; } // elseif end for content page leedsuleedsmet elseif (isset($_GET['page']) && $_GET['page'] == trav) { include 'view/trav.php'; } // elseif end for content page leedsuleedsmet else { include 'view/home.php'; } // elseif end for content page home }}?> | Is this a good controller? | php;mvc;web services | null |
_softwareengineering.232904 | My web application calls a third party API.If I successfully call the API but invalid data is returned which cannot be processed by my system what is the most appropriate HttpCode to return to the user calling me?500, I'm not convinced. Although it was an internal error it's due to external factors503, Perhaps, although very vague400, 4XX errors are usually related to the client contacting our server and so could be misleading.Which is the most appropriate response to return to my client? | Invalid data returned, which HttpCode to return? | http;error handling | If the problem is not the user / client's fault then your server should return a 5xx error. A 4xx error should only be returned if the >>request<< is incorrect in some way.Depending on the situation, either 500 or 503 could be appropriate:As far as a client is concerned the external factors are internal to your server / service. So 500 could be appropriate.If the problem is likely to resolve itself (or be resolved) in a relatively short period of time, 503 could be appropriate.Probably either of those is OK.Another possibility is to return a non-standard 5xx code. Note that non-standard (i.e. not defined in the HTTP 1.1 spec) status codes are not wrong. There is ample precedent for doing this ... including the precedent of other RFCs defining extra codes; see the Wikipedia List of HTTP Status Codes for examples.I noticed a comment suggesting this:In that case I will go for 500, however your users might blame on you for the error.Provided that you include details of the cause of the problem in the 500 response body (in an appropriate format!), that should not be a problem.I'd output a proper page -- 200 code, informing them about the third-party call failure and will ask them to try it again later.IMO, that is incorrect. A 200 code means that the request has succeeded ... which it patently hasn't. 
Assuming that this is a RESTful service, the client is going to use the status code to decide what to do next. It should not be a lie. If you want to say try again later, you should send a 503 response, possibly with a Retry-After header. |
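To make the recommendation concrete, here is a minimal sketch (in Python, with hypothetical helper names) of how a handler might map an upstream failure onto the status codes discussed above; the Retry-After value and error payload are illustrative assumptions, not part of the original answer:

```python
# Sketch: choose a response for a failed third-party call.
# The choices mirror the reasoning above: 5xx because the fault is not
# the client's, 503 (plus Retry-After) when the upstream problem is
# expected to be temporary, 500 otherwise.
def build_error_response(upstream_temporary, retry_after_seconds=120):
    """Return (status, headers, body) for a failed upstream call."""
    if upstream_temporary:
        status = 503  # Service Unavailable: try again later
        headers = {"Retry-After": str(retry_after_seconds)}
    else:
        status = 500  # Internal Server Error: explain the cause in the body
        headers = {}
    body = {"error": "upstream service returned unprocessable data"}
    return status, headers, body

status, headers, body = build_error_response(upstream_temporary=True)
print(status, headers["Retry-After"])  # 503 120
```

Whatever framework is in use, the point is only that the body carries the explanation while the status code stays truthful.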
_webmaster.86761 | The situation looks as follows: (1) the user includes a JavaScript snippet into his page; (2) the JavaScript creates an iframe dynamically and appends it to the page; (3) the iframe has static content. Is Googlebot smart enough to crawl the iframe's static content? | Does Google bot crawl dynamically created iframes? | seo;javascript;googlebot;iframe | null
_unix.65369 | I have been performing an upgrade for a customer, using Solaris Live Upgrade to go from Solaris 10 update 2 to Solaris 10 update 10 and at the same time upgrade EMC powerpath from [an old version] to version 5.5I am by no means a powerpath expert, but I am well aware of the issues with upgrading only the one without the other. The process I followed is:Live-upgrade SolarisRemove (pkgrm) powerpath from the ABEComment out the powerpath dependent file systems in the ABELUactivate and rebootInstall PowerPath 5.5 P 01 B 2The install finds the left-over power-path config and asks whether I want to upgrade it. On some of the 5 servers, the old version is PowerPath version 5.2, on others it was still running 4.5, but the result is the same for all of them.At the end of the pkgadd it tells me the driver was successfully installed (it was) and tells me that no reboot is needed. However when I run powercf or powermt display I get an error stating Device(s) not foundRebooting did not help. cfgadm looks as expected (Sorry I did not save the output), devfsadm -Cv did not create or remove any device links. The HBAs were linking (confirmed by luxadm -e probe as well as fcinfo hba-port)format showed only the Solaris native links to the LUNs, with half of them in error state as expected due to them seen via both the avtive and passive path. mpathadm is not active.After googling around I found a suggestion to look at the output of powermt display options to confirm that clariion management is enabled, and found that is says unmanaged... All other storage classes showed as managedI then ran powermt manage class=clariion which returned an error stating incompatible initiator information received from the arrayDespite this error I then got the emcpower devices and could see everything looking normal in powermt display dev=all. 
For good measure I followed this by powercf -q; powermt config; powermt save. I then un-commented the entries in /etc/vfstab and rebooted to make sure all was ok. I then ended up with a system in single-user mode with filesystems/local in maintenance. I discovered with a lot of testing that I had to redo the powermt manage class=clariion procedure after every reboot. For now I have reverted to the old pre-upgrade ABE. Everything is still working perfectly when I moved back to the old versions of Solaris and PowerPath. | Storage becomes unmanaged after Solaris and PowerPath upgrade | solaris;upgrade | I did the following and it worked: Although the Solaris OS can distinguish between FC and iSCSI devices, PowerPath 5.5 does not make this distinction for manage and unmanage. The mpxio-disable value must be set to yes in both the fp.conf and iscsi.conf files for PowerPath to manage the following storage arrays: EMC VNX; EMC CLARiiON; Hitachi USP and HP StorageWorks EVA 3000/5000/8000; and arrays listed in scsi_vhci.conf.
_unix.86120 | I don't have the a2dissite and a2ensite commands on my openSUSE 11.3 webserver.How can I add them?I didn't find these commands with YAST. Maybe, because there are no openSUSE 11.3 repositories anymore?How can I install/ make available a2dissite and a2ensite? | How to install a2dissite and a2ensite on openSUSE 11.3? | software installation;opensuse;yast | null |
_softwareengineering.285953 | I am making a basic game with geometric figures.I am trying to design now how to calculate if the figure collides with another figure in an Array List of figures (called entitiesList).I have:Class Entity { ... public boolean collidesWith(Entity anotherEntityInMap) {}}But I can't figure out how I should make this. I guess I must know what kind of figure is anotherEntity. For example, I can have squares, triangles and circles. Each figure has diverse calculations, and it must be abstracted. | How to design a class to check if geometric figures collide? | object oriented design;class design | The most academically proper way of doing it would be to have a two-dimensional matrix of collision detection methods, one method for each possible pair of figures, so each method would have knowledge of precisely what figures are being checked, so as to do the checking in the best way possible. Unfortunately, this is too much work for very little benefit.The quickest, dirtiest, simplest, hackiest and most inaccurate way of doing it would be to treat all shapes equally, consider them all to be circles, and just check the distance between their centerpoints to determine if they collide.A practical solution which is not too difficult to implement and yields accurate results is to have each shape contain a polygon representation of itself, and simply make use of a single polygon collision detection algorithm to detect collision between the polygon representations of any two shapes. |
_softwareengineering.349127 | What is the best way to provide a web application in multiple languages?The focus of my question is not what to think about, but indeed how to do it.The text in the web application:text in the popup of spatial featurestext in the sidebar containing the feature's descriptiontext elements of the website components: menu (navbar)As the web application should be available in both English and French, I am thinking of how to best implement the multiple language support.The text is available in two separate files (English, French) and I thought of based on the user's choice to call either the one or the other file and query the text. But how to deal with the website components, such as the menu items? This is plain text in the html, how could this be solved?The web application is built using Bootstrap and Leaflet (JavaScript).Let me know if it is easier to make two web applications, one for each language, but my thoughts were that this is a huge overload of code. | Multiple language web application - how to implement? | javascript;web applications;web;languages | null |
_webapps.91432 | In a small non-profit group (about 20 people), I have recently become responsible for the calendar, events and communications management, so I am taking a look at their gmail account and looking for optimal ways to manage it. I have experience with gmail and google products but not with the management of small groups like this.I created a mailing list with all the volunteers, so I can e-mail them fasterI created a calendar with all their events and shared it to the mailing list; everyone (me included on my personal account) receives it and can add it to their service of choice;I invited everyone to Event A: since members volunteer to participate, I thought this was the best way to keep count of who will attendHowever, when I double-check on my personal account I see Event A twice: as a shared calendar event AND as a personal event to which I have been invited. Obviously this is not optimal, because of the calendar clutter and the amount of e-mails I have to send (and receive!).Is there a way to have people join/respond/attend an event in a shared calendar without being invited directly? | Check people's attendance of shared calendar events | gmail;calendar;google calendar;events | null |
_softwareengineering.206031 | I have just moved to a new company and they are using TFS 2010 (2012 in a couple of months) as their version control system and recently started to use it as a work tracking system for the developers.However, there doesn't seem to be a bug tracking system for use by people outside development & test. Production support are getting reports of issues, fixing them on the fly and reporting back to their users at the moment. This needs to be changed but I don't really want to have a sperate system for tracking bugs and tracking dev work.Is there a way that I can create a very light weight way of entering bugs into TFS similar to the way that FogBugz does? Logging into TFS to fill out a bug report seems to be a lot heavier and you have to associate it with a particular application. Support may be able to do this but I want to be able to triage the item and potentially change the association to something other than an application.I have used FogBugz in the past and when adding a bug, you can add a much/little as you want to the item so it is at least recorded and later you can bounce it back to get more information when you come to triage the ticket. | Use TFS to track bugs from Production Support | issue tracking;team foundation server | null |
_codereview.95820 | I have been through a few SQLite tutorials and wrote this code on my own to reinforce the principles. The tutorials I went through varied widely in a few areas, so this is what I came up with as a combination of everything. Other than the two classes I have listed below, all I have is one activity that adds, removes, inserts, updates and displays data from the database. I have a few specific questions: (1) When should I close the helper class, or does garbage collection deal with it automatically? (2) Should I create a Boxer POJO (Plain Old Java Object) to pass boxer data to and from the DAO? (3) Is the DAO implementation efficient? (4) Does the code deviate from Java and Android best practices in any way?
Helper Class
public class BoxScoresHelper extends SQLiteOpenHelper {
    private static final String DB_NAME = "boxing_scores.db";
    private static final int VERSION = 1;
    private static BoxScoresHelper instance = null;

    public static BoxScoresHelper getInstance(Context context) {
        if (instance == null) {
            instance = new BoxScoresHelper(context);
        }
        return instance;
    }

    private BoxScoresHelper(Context context) {
        super(context, DB_NAME, null, VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL(createBoxerSQLString());
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("Drop Table If Exists " + BoxerDAO.TABLE_NAME);
        onCreate(db);
    }

    private String createBoxerSQLString() {
        String boxerCreateString = "create table " + BoxerDAO.TABLE_NAME + " ("
                + BoxerDAO._ID + " Integer Primary Key AutoIncrement, "
                + BoxerDAO.BOXER_NAME + " Text Not Null, "
                + BoxerDAO.WEIGHT_CLASS + " Text Not Null, "
                + BoxerDAO.WINS + " Integer Not Null, "
                + BoxerDAO.LOSSES + " Integer Not Null);";
        return boxerCreateString;
    }
}
DAO Class
public class BoxerDAO {
    public static final String TABLE_NAME = "Boxer";
    public static final String _ID = "_id";
    public static final String BOXER_NAME = "boxer_name";
    public static final String WEIGHT_CLASS = "weight_class";
    public static final String WINS = "wins";
    public static final String LOSSES = "losses";

    private final BoxScoresHelper myScoresHelper;
    private SQLiteDatabase myBoxerDB;

    public BoxerDAO(Context context) {
        myScoresHelper = BoxScoresHelper.getInstance(context);
    }

    public Cursor query(String[] projection, String selection, String[] selectionArgs, String orderBy) {
        Cursor cursor;
        myBoxerDB = myScoresHelper.getReadableDatabase();
        cursor = myBoxerDB.query(TABLE_NAME, projection, selection, selectionArgs, null, null, orderBy);
        //myBoxerDB.close();
        return cursor;
    }

    public Cursor queryAll() {
        Cursor cursor;
        myBoxerDB = myScoresHelper.getReadableDatabase();
        cursor = myBoxerDB.rawQuery("Select * From " + TABLE_NAME, null);
        //myBoxerDB.close();
        return cursor;
    }

    public int delete(int id) {
        int rowsDel;
        myBoxerDB = myScoresHelper.getWritableDatabase();
        rowsDel = myBoxerDB.delete(TABLE_NAME, _ID + " = " + id, null);
        //myBoxerDB.close();
        return rowsDel;
    }

    public long insert(ContentValues values) {
        long insertId = -1;
        myBoxerDB = myScoresHelper.getWritableDatabase();
        insertId = myBoxerDB.insert(TABLE_NAME, null, values);
        //myBoxerDB.close();
        return insertId;
    }

    public int update(ContentValues values, String selection, String[] selectionArgs) {
        int updatedRows;
        myBoxerDB = myScoresHelper.getWritableDatabase();
        updatedRows = myBoxerDB.update(TABLE_NAME, values, _ID + " = " + selection, selectionArgs);
        //myBoxerDB.close();
        return updatedRows;
    }
} | Program to provide CRUD operations for a Boxers(Fighter) table in a database | java;android;sqlite | null
_vi.9905 | I (very) often write python, and I use Vim's python3 backend to test my code. To perform my tests I use:vnoremap <localleader>p y:<c-r><c-b>python3 <cr>It simply takes my visual selection and runs within Vim's python backend. This is very practical because I can re-run only pieces of code (and I'm careful to remember the state the repl is in).This works fine even with installed libraries, for example beautifulsoup4 works::python3 import bs4But my issue starts when I try loading libraries that are not fully written in python. For example trying numpy::python3 import numpyI get an error from the backend:Traceback (most recent call last): File <string>, line 1, in <module> File /home/grochmal/.local/lib/python3.5/site-packages/numpy/__init__.py, line 180, in <module> from . import add_newdocs File /home/grochmal/.local/lib/python3.5/site-packages/numpy/add_newdocs.py, line 13, in <module> from numpy.lib import add_newdoc File /home/grochmal/.local/lib/python3.5/site-packages/numpy/lib/__init__.py, line 8, in <module> from .type_check import * File /home/grochmal/.local/lib/python3.5/site-packages/numpy/lib/type_check.py, line 11, in <module> import numpy.core.numeric as _nx File /home/grochmal/.local/lib/python3.5/site-packages/numpy/core/__init__.py, line 14, in <module> from . import multiarrayImportError: /home/grochmal/.local/lib/python3.5/site-packages/numpy/core/multiarray.cpython-35m-x86_64-linux-gnu.so: undefined symbol: PyType_GenericNewAfter debugging it a good deal I got to the conclusion that the issue only happens in /home/grochmal/.local and only to libraries that have components compiled from C. In other words, if I install numpy into /usr/lib it works.The questionThe python REPL has no issue with libraries at ~/.local but Vim's python backend does. I tried with and without:export PYTHONPATH=/home/grochmal/.local/lib/python3.5/site-packagesAnd the issue persists. 
Is there a way to use locally installed shared libraries together with Vim's python backend? i.e. Can I tell Vim's python backend to search both places (/usr/lib and ~/.local) for symbols to load?Extra note: both bs4 and numpy are in ~/.local in the tests above. In other words libraries that are fully written in python do work in ~/.local | Vim python backend, how to import user installed shared libraries? | vimscript python | This is because your numpy isn't linked against the Python 3 library (no -lpython3 used). This is fine for most applications that has the library loaded into global space, but Vim uses RTLD_LOCAL so numpy's libraries don't see Python 3's symbols unless it's linked against it.LD_PRELOAD is fine as long as you don't load Python 2 in the same Vim (or symbols may clash; that's why Vim doesn't use RTLD_GLOBAL). You can also recompile your numpy with -lpython3 (from Arch's PKGBUILD it seems that export LDFLAGS=-shared is enough to do that). |
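The linking explanation can also be illustrated from the Python side: CPython exposes the dlopen flags it uses when loading C extension modules, and forcing RTLD_GLOBAL before the failing import is a known workaround for this class of "undefined symbol" errors. This is a hedged sketch; whether it helps inside Vim's embedded interpreter depends on how Vim itself loaded libpython:

```python
import os
import sys

# CPython loads C extension modules via dlopen(); by default their symbols
# are resolved with RTLD_LOCAL semantics. Forcing RTLD_GLOBAL before the
# import makes already-loaded interpreter symbols (like PyType_GenericNew)
# visible to the extension's .so files.
old_flags = sys.getdlopenflags()
sys.setdlopenflags(os.RTLD_GLOBAL | os.RTLD_NOW)
try:
    # import numpy  # the failing import would go here
    pass
finally:
    sys.setdlopenflags(old_flags)  # restore the default behaviour

print(old_flags == sys.getdlopenflags())  # True
```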
_unix.118075 | I think its a simple question, but the answer is probably a little more complicated :PEdit: Actually, its not complicated at all!^^So I have a directory with multiple svn projects and I would like to search through all recent files (in trunk folder) by content in all projects.Here is somewhat the folders look like:Projects|->Project1| || ->tags| || ->trunk|->Project2| || ->tags| || ->trunk... | Search for files by content only in trunk subdirectories | files;find;search;recursive | As suggested in comments above:grep -l some-pattern ./Projects/*/trunk/*or recursively if there are subdirs under each trunk (and your grep supports -r):grep -lr some-pattern ./Projects/*/trunk/ |
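The shell one-liners above are the direct answer; purely to make the recursive variant concrete, here is a hedged Python equivalent (substring match rather than a regex, and the function name is made up):

```python
import os

def grep_trunk(root, pattern):
    """List files under any .../trunk/... directory whose text contains
    `pattern` (plain substring match, mirroring grep -lr)."""
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        if "trunk" not in dirpath.split(os.sep):
            continue  # only look inside trunk folders, skipping tags/
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    if pattern in fh.read():
                        matches.append(path)
            except OSError:
                pass  # unreadable file: skip it, much as grep would warn
    return sorted(matches)
```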
_codereview.118843 | I'm working on this school project and was wondering if there was any way of storing this information better, faster, more professionally. I'm also restricted to only using an array; don't ask me, we are not allowed to use ArrayLists yet.
My Code:
public void loadTweets(String fileName) {
    try {
        File file = new File(fileName);
        Scanner s = new Scanner(file);
        while (s.hasNextLine()) {
            numberOfTweets++;
            s.nextLine();
        }
        tweets = new String[numberOfTweets];
        s.close();
        s = new Scanner(file);
        int counter = 0;
        while (s.hasNextLine()) {
            String[] elements = s.nextLine().split("\t");
            tweets[counter] = elements[2];
            counter++;
        }
        s.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
File Example: Each field is separated by a tab and it goes user > date posted > tweet.
USER_989b85bb 2010-03-04T15:34:46 @USER_6921e61d can I be...
USER_989b85bb 2010-03-04T15:34:47 superstar
USER_a75657c2 2010-03-03T00:02:54 @USER_13e8a102 They reached a
USER_a75657c2 2010-03-07T21:45:48 So SunChips made a bag...
USER_ee551c6c 2010-03-07T15:40:27 drthema: Do something today that
USER_6c78461b 2010-03-03T05:13:34 @USER_a3d59856 yes, i watched...
USER_92b2293c 2010-03-04T14:00:11 RT @USER_5aac9e88: Let no 1 push u
USER_75c62ed9 2010-03-07T03:35:38 @USER_cb237f7f Congrats on...
 | Loading tab-separated tweet data into an array | java;array;file;csv;homework | The prohibition on ArrayList is unfortunate. One natural solution would be to use Files.readAllLines(), but that returns a List<String>, which is probably off-limits to you. Likewise, Files.lines() produces a Stream<String>, which would be even better and thus probably even more forbidden to you. Your workaround is to open the file twice, which is definitely undesirable. (File I/O is considered expensive.)
If I had to make a recommendation based on arrays, I would suggest: (1) Files.readAllBytes() to slurp the entire file into a byte array; (2) make a String from the byte array; (3) use String.split() to form an array of lines; (4) for each line, retain only the third field. My reasoning is that you eventually have to read the entire file anyway, so you might as well read it all at once, and only once. Once you have a string, you can take advantage of String.split(). I would also like to note that catching IOException to print a stack trace is counterproductive. If you don't have a good way to handle an exception, just let it propagate by declaring public void loadTweets() throws IOException. That way, you're letting the caller know that something went wrong, which is exactly what exceptions are meant for.
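The read-once recipe translates almost line for line; here it is as a hedged Python sketch (the question is Java, so this only mirrors the idea: one read, split into lines, keep the third tab-separated field):

```python
def load_tweets(path):
    """Read the whole file once and keep only the tweet text
    (the third tab-separated field of each line)."""
    with open(path, encoding="utf-8") as fh:
        data = fh.read()              # single read of the entire file
    lines = data.splitlines()         # split into an array of lines
    return [line.split("\t")[2] for line in lines if line]

# Given a line "USER_x\t2010-03-04T15:34:46\thello", this keeps "hello".
```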
_webmaster.55078 | I have RSS feeds on my site. I've decided to follow Stack Exchange and disallow my RSS feeds in the robots.txt.I don't want search engine to display the RSS feed page to people, that's not really a good page to see for new visitors.Are there any advantage to allow search engine to crawl the RSS? Or is it a general good idea to disallow it? | RSS feeds and robots.txt | seo;search engines;robots.txt;rss | There are many reasons not to block your feed, but only you can know if they are relevant for you. For example:There may be bots that especially look for feeds, e.g., feed search engines.There may be bots that use feeds to discover new content.There may be other cases where bots would like to access your feeds, now and in the future.Some web search engines might index feeds resp. feed URLs, so that they can give it as a result if users search for example.com feed, site:example.com inurl:feed, etc.Some user agents, e.g., feed readers, might follow rules in robots.txt.I think most search engines will not be confused when they find a feed containing similar content to the front page of the website, as feeds are very common (almost every blog has them, news sites, forums, ). Make sure to link them with rel-alternate and give the corresponding MIME type in the type attribute:From the HTML5 spec:If the alternate keyword is used with the type attribute set to the value application/rss+xml or the value application/atom+xml The keyword creates a hyperlink referencing a syndication feed (though not necessarily syndicating exactly the same content as the current page).If your feeds contains the same content (i.e., the same number of posts and the same or less of the content) from a page of your site, you could use the canonical link type as HTTP header:Link: <http://example.com/>; rel=canonicalBut it should not be necessary. |
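To make the markup concrete, a small Python sketch that builds the two pieces of linking described above (the rel-alternate link element and the canonical Link HTTP header); the URLs and the helper names are placeholders:

```python
def alternate_link_tag(feed_url, mime="application/rss+xml", title="Feed"):
    """HTML <link> element advertising a feed, per the rel-alternate advice."""
    return ('<link rel="alternate" type="%s" title="%s" href="%s">'
            % (mime, title, feed_url))

def canonical_link_header(page_url):
    """HTTP Link header pointing a feed URL at its canonical page."""
    return "Link: <%s>; rel=canonical" % page_url

print(alternate_link_tag("http://example.com/feed"))
print(canonical_link_header("http://example.com/"))
```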
_unix.9189 | I'm using Vim /etc/zsh/zshrc to add key bindings for zsh because it doesn't work with inputrc. In my terminal with tmux when I type Ctrl+v then Ctrl+LeftArrow the shell will show ^[OD. However, when I'm in Vim insert mode, pressing the same sequence will result in ^[[D.I found out that ^[[D is what the shell produces when I type Ctrl+v then LeftArrow. I have also changed ^[[D to ^[OD in the file /etc/zsh/zshrc and it works as expected (pressing Ctrl+LeftArrow causes the cursor to move back a word). Here is the line I'm talking about:bindkey ^[OD backward-wordI guess something is wrong with Vim because it's consuming the Ctrl. How do I fix this? | Why is Vim eating up Ctrl when used with Ctrl+v and how to fix it? | vim;terminal;readline | This is actually your terminal doing something weird, not Vim. Terminals have two sets of control sequences associated with cursor keys, for historical reasons: one for full-screen applications, often called application cursor keys mode, and one for read-eval-print applications (e.g. shells).In the old days, read-eval-print applications didn't have any line-editing features, and it was intended that the terminal, or the OS terminal driver, would eventually become more sophisticated. So the terminal sent control sequences intended for the terminal driver. Somehow the Unix terminal drivers never gained decent line-editing features; these were added to applications instead (e.g. through the readline library).Your terminal is sending OD for Ctrl+Left in line edition cursor keys mode, and [D in application cursor keys mode. You have two options:Configure your terminal not to make a difference between the two modes. How to do this is entirely dependent on your terminal emulator.Live with it. Since any given application always sets the terminal in the same mode, just configure its key bindings according to the mode it uses. |
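A tiny Python sketch of the two sequence families involved, rewriting the normal-mode arrow form into the application-mode form that the asker's zsh binding expects. It covers only the arrow-key final bytes A-D and is an illustration, not a full terminfo treatment:

```python
ESC = "\x1b"

def to_application_mode(seq):
    """Rewrite a normal-mode arrow sequence ESC [ X into the
    application-cursor-keys form ESC O X (X one of A/B/C/D)."""
    if len(seq) == 3 and seq.startswith(ESC + "[") and seq[2] in "ABCD":
        return ESC + "O" + seq[2]
    return seq

# Left arrow: the terminal sends ESC [ D in one mode, ESC O D in the other.
print(to_application_mode("\x1b[D") == "\x1bOD")  # True
```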
_softwareengineering.201534 | A previous developer has a couple public classes that do not inherit from any other classes but are filled with static properties. Is this another way of creating a struct or enum? Is this an older or newer technique of housing static data to be referenced? I find it odd/different to see a class built in this way but am wondering what other fellow programmers thoughts or feelings are about what this coder was trying to accomplish.This is a made up example of what I am seeing... public class CashRegister { public static decimal OneDollarBill { get { return (decimal)1; } } public static decimal TenDollarBill { get { return (decimal)10; } } } | Why would a developer create a public class that has all static properties? | c#;design patterns;programming practices;code quality | null |
_webapps.90716 | If I block someone on Facebook and also block him on Messenger, does everything on Messenger disappear or still is shown as active from the person that blocked him ? | Does everything disappear when I block someone on Facebook and Messenger? | facebook;facebook messages | null |
_scicomp.530 | What is the best (scalability and efficiency) algorithms for generating unstructured quad meshes in 2D? Where can I find a good unstructured quad mesh-generator? (open-source preferred) | Unstructured quad mesh-generation? | mesh generation;computational geometry | The are essentially two approaches to free quad meshing:Direct methods generate a quad mesh directly, usually by some advancing front method. The Paving paper is a standard reference and is the method used by CUBIT, so you have seen these meshes in many publications.Indirect methods generate some intermediate decomposition of the domain (e.g. triangles) and then produce an all-quad mesh through recombination and/or further decomposition. Q-Morph is an example that is used by ANSYS.Note that smoothing is necessary for both approaches, sometimes with alternating topology fix-up and smoothing steps. Some open source tools have built-in smoothing facilities and the LGPL-licensed Mesquite package is designed as a library specifically for mesh quality improvement.I know of two open source free-quad meshers:Gmsh (GPL with linking exception) can generate quad meshes using a recombination algorithm described in this paper.The Jaal component of MeshKit (LGPL) is based on recombination similar to Q-Morph above, read the IMR-2011 paper for more details. You can download the source through the link above, but it is not ready for production use yet.LBIE generates quad and hex meshes from volumetric data. From what I can tell, it is an interactive environment rather than a library. The site says that the source is available under GPL upon request.CUBIT is not open source (and although not expensive compared to commercial software, acquiring a license takes a long time), but produces high quality meshes and can be linked into other applications. |
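The indirect family above is easy to sketch for the simplest case: splitting one triangle into three quadrilaterals through its edge midpoints and centroid. This hedged Python fragment shows only that decomposition step, not a full mesher (no smoothing, no recombination):

```python
def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def triangle_to_quads(a, b, c):
    """Split triangle abc into 3 quads using edge midpoints and the
    centroid -- the basic decomposition behind indirect quad meshing."""
    mab, mbc, mca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    g = ((a[0] + b[0] + c[0]) / 3.0, (a[1] + b[1] + c[1]) / 3.0)
    return [(a, mab, g, mca), (b, mbc, g, mab), (c, mca, g, mbc)]

quads = triangle_to_quads((0, 0), (1, 0), (0, 1))
print(len(quads))  # 3
```

Applying this to every triangle of an intermediate triangulation yields an all-quad mesh, which is why the quality-improvement (smoothing) step matters so much afterwards.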
_vi.11569 | While using Neovim, I love :term very much.Most of the time, I do the following steps:vsplit to open a vertical window.:term in the new window to enter terminal.$ cd CURRENT_PATH in the shell.I was wondering if maybe I could combine them all in one vim command? | How to combine `vsplit` `term` and `cd` current directory? | neovim | In Vim, the | character is used as a command separator, making it equivalent to the semicolon in the Unix shell. Furthermore, the full syntax to the :term command is as follows: :te[rminal][!] {cmd}Therefore, I believe the following command should do the trick::let CP=expand('%:p:h') | vsplit | exec ':term cd ' .CP .''I do not have Neovim installed, however, so am unable to test this command at the moment. |
_webmaster.16252 | We used to have a URL http://www.abc.com/index.php?itemID=144which is moved to http://www.abc.com/index.php?itemID=1556we want our users which are hitting the above url(144) to reach to 1556. How can it be achieved. If with mod_rewrite or anything else. | can mod_rewrite be used for this problem? | php;apache;joomla | Not sure if you want to redirect only 144=>1556 or it is just an example. Anyway you can try this:Options +FollowSymlinks -MultiViewsRewriteEngine OnRewriteCond %{QUERY_STRING} (^|&)id=144(&|$) [NC]RewriteRule ^(index\.php)/?$ /$1?id=1556 [NC,R,L]Explanation:RewriteCond is making sure there is id=144 in query stringRewriteRule is making sure that request URI is index.php (optionally followed by a trailing slash). If both conditions are satisfied then rule will redirect to /index.php?id=1556. (Note use of back reference $1 instead of repeating index.php here).Flags used are:NC - ignore case comparisonR - redirect (by default with status = 302)L - Mark it as last ruleIf you really want to replace multiple old IDs to new IDs then I will suggest you to take a look at: http://httpd.apache.org/docs/current/mod/mod_rewrite.html#rewritemap |
_unix.338238 | I've install haproxy, varnish and apache2 in sequence with https and http domain.Here I want to convert redirect condition from apache to haproxy.Flow :---443--> | | HAProxy ------>Varnish(8081) ----------> apache2(8080)---80---> | Apache2 :Convert to HAproxy for https request onlyRewriteCond %{HTTP_HOST} ^example.com [NC]RewriteCond %{HTTP_HOST} ^www.example.com [NC]RewriteRule ^(.*)$ https://www.example.com$1 [L,R=301]RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-fRewriteRule ^(.*)$ %{DOCUMENT_ROOT}/app_dev.php [QSA,L]Here I need to do two things, 1) Redirect domain to www with https or http2) Hide app_dev.php from URL Can we make above rule on HAProxy? or Is there any other option with replace HAProxy to make easy? | How to redirection on haproxy like apache2 | linux;apache httpd;htaccess;haproxy;varnish | null |
_unix.99450 | I recently installed Ubuntu 13.10 64 bit (dual boot with Win 7) on my desktop PC, but every once in a while the screen turns off suddenly, the fans slow down, and the system appears to crash. I can't see what actually happens, because the screen turns off (no input), but the sound crashes too. All I can do is to press the reset button on the computer itself, after which it boots up without any issue or mention of a problem.At first this happened randomly and unexpectedly simply from normal use, just by opening Firefox or moving the cursor around the desktop right after a clean install. Now it seems to happen only when I do something CPU intensive, like watching videos or importing my music library. I'm not sure what changed, because all I did was turn it off for a day and then come back.I've since cleaned the inside of the computer for dust, and all fans appear to be spinning.The system specs are:AMD Phenon II X4 965 3.4 GHzATI Radeon HD 4870, and Ubuntu says the driver is Gallium 0.4 on AMD RV770 | Sudden crash and black screen in Ubuntu, possible overheating | ubuntu | null |
_cs.80051 | It is possible to reduce EXACT COVER BY 3-SETS (X3C) problem, which is NP-Hard, to the SUBSET PRODUCT problem in polynomial time and show that the SUBSET PRODUCT problem is also NP-Hard.Kyle Jones shows this reduction in this link.I think that it is possible to reduce the SUBSET PRODUCT problem to the INTEGER FACTORIZATION problem and thus show that INTEGER FACTORIZATION problem is also NP-Hard by showing that polynomial solution to the INTEGER FACTORIZATION problem can be helpful to solve efficiently the SUBSET PRODUCT problem also in polynomial time.Remember that the SUBSET PRODUCT problem in general is:Show either$A \subset \mathbb{N} \land n \in \mathbb{N} \vdash \exists B: B \subseteq A \land \displaystyle \prod_{k \in B}k=n$OR$A \subset \mathbb{N} \land n \in \mathbb{N} \vdash \nexists B:B \subseteq A \land \displaystyle \prod_{k \in B}k=n$Of course that if an instance of the SUBSET PRODUCT problem is given, then the $A \subset \mathbb{N} \land n \in \mathbb{N}$ is replaced by $A=\{\ldots\} \land n=x$Where $\{\ldots\}\subset \mathbb{N} \land x \in \mathbb{N}$The SUBSET SUM problem in general is similar, and actually almost the same as the SUBSET PRODUCT problem, but with a little difference:Show either$A \subset \mathbb{N} \land n \in \mathbb{N} \vdash \exists B: B \subseteq A \land \displaystyle \sum_{k \in B}k=n$OR$A \subset \mathbb{N} \land n \in \mathbb{N} \vdash \nexists B: B \subseteq A \land \displaystyle \sum_{k \in B}k=n$Solution to the SUBSET PRODUCT problem by reducing it to the INTEGER FACTORIZATION problem:First of all remember that the INTEGER FACTORIZATION problem in general is:$n \in \mathbb{N} \land Factors(n) \subset PRIMES \land \displaystyle \prod_{k \in Factors(n)}k=n \vDash Factors(n)=?$If an instance of the INTEGER FACTORIZATION problem is given then $n \in \mathbb{N}$ is replaced by $n=x$ where $x \in \mathbb{N}$ and now it is needed to find out $?$, i.e. 
$Factors(x)$Let's assume that INTEGER FACTORIZATION $\in \mathbb{P}$ and let $GetFactors$ be a polynomial algorithm that returns all the factors of any natural number in polynomial time and so it solves the INTEGER FACTORIZATION problem in polynomial time.Now how the $GetFactors$ algorithm can be used to solve the SUBSET PRODUCT problem in polynomial time and show that $P=NP?$And the answer is that:$Factors(n) \subseteq A \vDash \exists B: B \subseteq A \land \displaystyle \prod_{k \in B}k=n$In that case $B=Factors(n)$$Factors(n) \nsubseteq A \land Factors(n) \subseteq \displaystyle \bigcup_{k \in A} Factors(k) \vDash \exists B: B \subseteq A \land \displaystyle \prod_{k \in B}=n$$Factors(n) \nsubseteq A \land Factors(n) \nsubseteq \displaystyle \bigcup_{k \in A} Factors(k) \vDash \nexists B: B \subseteq A \land \displaystyle \prod_{k \in B}=n$$\therefore \exists B: B \subseteq A \land \displaystyle \prod_{k \in B}k=n \equiv Factors(n) \subseteq A \lor Factors(n) \subseteq \displaystyle \bigcup_{k \in A} Factors(k)$ But I am unsure that this equivalent is true, but if it is true then:INTEGER FACTORIZATION $\in \mathbb{P} \implies$ SUBSET PRODUCT $\in \mathbb{P}$SUBSET PRODUCT $\in \mathbb{NP-HARD}$$\therefore$ INTEGER FACTORIZATION $\in \mathbb{NP-HARD}$Then is the equivalent above true? | Is integer factorization problem NP-Hard? | complexity theory | null |
_unix.98622 | I have gone through countless threads on the following error and none have helped. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself.I have tried countless things but nothing works. This pops up every time I reboot. The computer was running fine then all of a sudden it crashed and I got this error.I can enter recovery mode, mount the system read/write, go to the root shell, connect to the internet and run apt-get.As said above, I have already tried lots of solutions from other forums; what I decided now is to reinstall all packages related to this.How can I make a list of things to reinstall with apt-get? Like everything related to the screen, graphics card, and input device settings part.Or could I even completely re-install all system files (preserving my programs) from this command line? As an alternative, I'd install a different Ubuntu version (also from recovery mode). I have space in the HDD. | How to repair Ubuntu after upgrade? (from recovery mode) | ubuntu | null
_webmaster.1226 | I am working with a website that uses CodeIgniter, a content management system I hadn't heard of before. The problem: Incoming links from AdWords contain a long query string, and if you add a query string to the requested URL, CodeIgniter thinks that that's part of the page it has to fetch, so it returns a Page not found for anyone who comes in from AdWords.Another problem: Google AdWords doesn't allow you to link to a page that redirects (locally or remotely), although someone can enlighten me if this isn't always true.So, if anyone has any idea for how to solve this, either by changing AdWords settings, CodeIgniter settings, or other hacks, it would be appreciated... | Is there any way to use AdWords with CodeIgniter (or similar)? | advertising;codeigniter | You have to change the router object of CodeIgniter. The relevant documentation is found here: http://codeigniter.com/user_guide/general/routing.htmlBesides changing the CMS code you can also disable AdWords auto-tagging. This will not append any extra parameters to the target URL.
_unix.268850 | When using the keyboard on my thinkpad x201, I cannot press left, up and space at the same time. That is, when I start pressing the keys one after the other, the third one will be ignored.I verified this with pygame, xev and evtest.How can this be fixed? I don't even know where to start debugging this.Update: The same thing happens with either g, h, b, or n instead of space. But it works with other combinations, e.g. left+space+g. | left+up+space keys not working on thinkpad x201 | keyboard;thinkpad | This is a hardware issue with matrix keyboards. Vendors put them in notebooks, and most of the keyboards they sell use matrix technology because it is cheaper than most mechanical keyboards.If you try to push three buttons on your keyboard which use the same data lines, one key might be ghosted.Wikipedia explains it very well: https://en.wikipedia.org/wiki/Rollover_(key)If you really need to push these three buttons or more, it is advisable to buy a mechanical keyboard. Some of these connect to your PC as multiple keyboards, making it possible to push every button on the keyboard and have each one recognized.
_reverseengineering.6944 | How/Where can I get variants of a malware?The only thing that concerns me is that the variants must be continuous updates of the previous one | Where can I download malware variants | binary analysis;malware | null
_unix.96047 | I have a functioning CIFS mount from CentOS 6.4, 2.6.32-358.18.1.el6.x86_64, to a Windows file server. If I hit Ctrl-C while doing something IO-intensive (like fgrep -r), then the mount (and all other mounts to the same file server) becomes unusable until I either reboot or forcibly unmount and remount.I'm pretty sure that the problem is as reported here:http://www.spinics.net/lists/linux-cifs/msg07576.htmlWhat I don't know, and don't know how to figure out, is whether the fix will ever work its way into CentOS 6.4.From what I can tell, the corresponding source code on CentOS is in fs/cifs/transport.c, line 492.And indeed, building the cifs kernel module with --server->sequence_number; before that line does seem to solve the problem for me. | CIFS mount fails when read is interrupted | centos;cifs | I think I would compile this patch and confirm that it fixes my issue first before worrying about whether it will get into CentOS upstream. It should be pretty easy to take the source RPM (SRPM) version of the package providing CIFS, apply the patch, recompile, and upgrade to it.
_cs.22589 | I've always wondered why processors stopped at 32 registers. It's by far the fastest piece of the machine, why not just make bigger processors with more registers? Wouldn't that mean less going to the RAM? | Why does a processor have 32 registers? | computer architecture | First, not all processor architectures stopped at 32 registers. Almost all the RISC architectures that have 32 registers exposed in the instruction set actually have 32 integer registers and 32 more floating point registers (so 64). (Floating point add uses different registers than integer add.) The SPARC architecture has register windows. On the SPARC you can only access 32 integer registers at a time, but the registers act like a stack and you can push and pop new registers 16 at a time. The Itanium architecture from HP/Intel had 128 integer and 128 floating point registers exposed in the instruction set. Modern GPUs from NVidia, AMD, Intel, ARM and Imagination Technologies, all expose massive numbers of registers in their register files. (I know this to be true of the NVidia and Intel architectures, I am not very familiar with the AMD, ARM and Imagination instruction sets, but I think the register files are large there too.)Second, most modern microprocessors implement register renaming to eliminate unnecessary serialization caused by needing to reuse resources, so the underlying physical register files can be larger (96, 128 or 192 registers on some machines.) This (and dynamic scheduling) eliminates some of the need for the compiler to generate so many unique register names, while still providing a larger register file to the scheduler.There are two reasons why it might be difficult to further increase the number of registers exposed in the instruction set. First, you need to be able to specify the register identifiers in each instruction. 
32 registers require a 5-bit register specifier, so 3-address instructions (common on RISC architectures) spend 15 of the 32 instruction bits just to specify the registers. If you increased that to 6 or 7 bits, then you would have less space to specify opcodes and constants. GPUs and Itanium have much larger instructions. Larger instructions come at a cost: you need to use more instruction memory, so your instruction cache behavior is less ideal.The second reason is access time. The larger you make a memory the slower it is to access data from it. (Just in terms of basic physics: the data is stored in 2-dimensional space, so if you are storing $n$ bits, the average distance to a specific bit is $O(\sqrt{n})$.) A register file is just a small multi-ported memory, and one of the constraints on making it larger is that eventually you would need to start clocking your machine slower to accommodate the larger register file. Usually in terms of total performance this is a loss. |
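The bit-budget arithmetic in this answer can be sketched quickly (hypothetical helpers for illustration, not taken from any real ISA specification):

```python
import math

def specifier_bits(num_registers: int) -> int:
    # Bits needed to name one of num_registers architectural registers.
    return math.ceil(math.log2(num_registers))

def register_field_bits(num_registers: int, operands: int = 3) -> int:
    # Register-specifier bits consumed by a 3-address instruction.
    return operands * specifier_bits(num_registers)
```

With 32 registers, a 3-address instruction spends 3 × 5 = 15 of its 32 bits on register names; growing the file to 128 registers pushes that to 21 bits, squeezing the space left for opcodes and constants.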
_datascience.19051 | My question is focused around how to appropriately update an encoded feature set when a new category is introduced by the test data. I use the data in logistic regression and I know it is not a 'live' model (i.e. gradient descent is performed whenever new data is introduced) but do I have to retrain the model to account for added features or do I just add it to subsequent test set values? To exemplify the problem consider a TV Show training set where each show has a 'networks' feature set that includes one or more of the following:[abc,cbs,nbc] Then, in the testing set there is a TV Show with the feature set: [abc, hulu] Would I have to add the new feature retroactively to the training data and retrain the model even though it will never occur? Wouldn't this introduce 'look-ahead-bias'? How do I account for the added feature in the encoder going forward? | Updating One-Hot Encoding to account for new categories | machine learning;logistic regression;categorical data;recommender system | I think you have two options:Automate your train/test pipeline so that one-hot encoding is part of it. If new categorical variables are introduced, they can be featured in the training dataset even if not very prevalent. This would introduce some bias if the nature of the TV show distribution has changed over time (e.g. 20 years ago there weren't as many options) but I don't necessarily think it is a show stopper.If new possibilities are introduced over time but for whatever reason you can't retrain, then you should omit using that new value. This has its own disadvantages because in your example, it would be a TV show with no network.
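The second option can be sketched with a tiny stdlib-only encoder (illustrative; the category names come from the question's TV-network example): fit the vocabulary on the training data once, then encode test rows against that fixed vocabulary, silently dropping categories unseen at fit time such as hulu:

```python
def fit_vocab(rows):
    # Collect every category seen in training; sort for a stable column order.
    return sorted({value for row in rows for value in row})

def encode(row, vocab):
    # Multi-hot encode one row against the fitted vocabulary.
    # Categories absent from the vocabulary (new at test time) are omitted.
    present = set(row)
    return [1 if value in present else 0 for value in vocab]
```

Here `fit_vocab([["abc", "cbs"], ["nbc"]])` yields `["abc", "cbs", "nbc"]`, and `encode(["abc", "hulu"], vocab)` returns `[1, 0, 0]` — hulu contributes nothing until the model is retrained with it in the vocabulary.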
_unix.385803 | I added server ipv6 / 64. Ping6 works for google.com.But as IPV6ADDR_SECONDARIES, almost all of what I have added does not work.[root@server ~]# ping6 google.comPING google.com(fra15s10-in-x0e.1e100.net) 56 data bytes64 bytes from fra15s10-in-x0e.1e100.net: icmp_seq=1 ttl=57 time=47.9 ms64 bytes from fra15s10-in-x0e.1e100.net: icmp_seq=2 ttl=57 time=47.9 ms64 bytes from fra15s10-in-x0e.1e100.net: icmp_seq=3 ttl=57 time=47.9 ms64 bytes from fra15s10-in-x0e.1e100.net: icmp_seq=4 ttl=57 time=48.0 ms^C--- google.com ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3945msrtt min/avg/max/mdev = 47.943/47.980/48.019/0.220 msI randomly created ipv6.Ex: ipv6adress:XXXX:XXXX:XXXX:XXXXI also created 3K. Only 110 workedI created the IPv6 address: ipv6adress::X - ipv6adress::XXXXIt worked no more than 150working IPV6ADDR_SECONDARIES:[root@server ~]# ping6 -I 2a03:2100:0:12::7b google.comPING google.com(fra15s10-in-x0e.1e100.net) from 2a03:2100:0:12::7b : 56 data bytes64 bytes from fra15s10-in-x0e.1e100.net: icmp_seq=1 ttl=57 time=50.9 ms64 bytes from fra15s10-in-x0e.1e100.net: icmp_seq=2 ttl=57 time=52.2 ms64 bytes from fra15s10-in-x0e.1e100.net: icmp_seq=3 ttl=57 time=50.6 ms64 bytes from fra15s10-in-x0e.1e100.net: icmp_seq=4 ttl=57 time=58.3 ms^C--- google.com ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3399msrtt min/avg/max/mdev = 50.661/53.061/58.308/3.097 msnot working IPV6ADDR_SECONDARIES: (I expect, but I cannot get results.)[root@server ~]# ping6 -I 2a03:2100:0:12:3515:fd11:5c82:2d7e google.comPING google.com(fra15s10-in-x0e.1e100.net) from 2a03:2100:0:12:3515:fd11:5c82:2d7e : 56 data bytes^C--- google.com ping statistics ---26 packets transmitted, 0 received, 100% packet loss, time 25534msBut it is pinging like this.[root@server ~]# ping6 2a03:2100:0:12:3515:fd11:5c82:2d7ePING 2a03:2100:0:12:3515:fd11:5c82:2d7e(2a03:2100:0:12:3515:fd11:5c82:2d7e) 56 data bytes64 bytes from 
2a03:2100:0:12:3515:fd11:5c82:2d7e: icmp_seq=1 ttl=64 time=0.034 ms64 bytes from 2a03:2100:0:12:3515:fd11:5c82:2d7e: icmp_seq=2 ttl=64 time=0.083 ms64 bytes from 2a03:2100:0:12:3515:fd11:5c82:2d7e: icmp_seq=3 ttl=64 time=0.050 ms64 bytes from 2a03:2100:0:12:3515:fd11:5c82:2d7e: icmp_seq=4 ttl=64 time=0.041 ms^C--- 2a03:2100:0:12:3515:fd11:5c82:2d7e ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3493msrtt min/avg/max/mdev = 0.034/0.052/0.083/0.018 msThose who do not work are very heavy. Not connected to the sites. I do not get a result.My system Centos 6.9(final) 12 cpu 32gb ram/etc/sysconfig/network-scripts/ifcfg-eth0:DEVICE=eth0TYPE=EthernetUUID=15edbe41-c655-4bee-a309-6973bff8e6bbONBOOT=yesNM_CONTROLLED=yesBOOTPROTO=noneHWADDR=00:0C:29:F7:46:61IPADDR=89.**.**.***PREFIX=27GATEWAY=89.**.**.***DNS1=8.8.8.8DNS2=8.8.4.4DEFROUTE=yesIPV4_FAILURE_FATAL=yesIPV6INIT=yesNAME=System eth0IPV6ADDR=2a03:2100:0:12::/64IPV6_DEFAULTGW=2a03:2100:0:12::1IPV6FORWARDING=noIPV6_AUTOCONF=noIPV6ADDR_SECONDARIES=2a03:2100:0:12:XXX:XXXX:XXXX:XXXX 2a03:2100:0:12:XXX:XXXX:XXXX:XXXX 2a03:2100:0:12:XXX:XXXX:XXXX:XXXXMy settings are that wayIPV6ADDR_SECONDARIES=2a03:2100:0:12:XXX:XXXX:XXXX:XXXX/64 2a03:2100:0:12:XXX:XXXX:XXXX:XXXX/64orIPV6ADDR_SECONDARIES=2a03:2100:0:12:XXX:XXXX:XXXX:XXXX/64 \2a03:2100:0:12:XXX:XXXX:XXXX:XXXX/64I have tried them but the result is the same. | Centos ipv6 secondaries not working | centos;ipv6 | null |
_unix.93651 | How can I rotate a PDF file by less than 90 degrees under Ubuntu?Can I do that interactively? | Rotate PDF file by less than 90 degrees? | pdf | I looked hard and long and could find no tool that allowed you to do this interactively that is a native PDF viewer type of tool. I did not try this but you might be able to use Inkscape or Gimp to do this. I think the only issue you'll likely run into with using them is the ability to batch rotate a multi-page document.Even the command line tools such as PdfTk couldn't do rotation by degrees, which really surprised me.However using ImageMagick you can rotate PDF files in 1 degree increments.Examples$ convert original.pdf -rotate 45 rot45.pdfYou can put any value you want in for the rotate argument. It will also take negative numbers so this is possible:$ convert original.pdf -rotate -45 rot-45.pdfThe quality of the output will drop off dramatically using the default options so you'll likely need to include the -density switch to increase the quality of the resulting PDF file.$ convert -density 300x300 original.pdf -rotate 45 rot45.pdfResulting PDFHere's a screenshot of Evince with the resulting PDF file.
_webmaster.61708 | I am in the process of setting up a webmail server and one of the requirements is to have a signed SSL certificate, which I obtained through StartCom. During the setup it asked me to define a sub-domain name (which I did). After the review was complete I received the key and created a new file to hold it.I configured apache2 correctly and all that is working, but I am getting the following error on connection.The certificate is only valid for the following names: host.siriusdesigner.com, siriusdesigner.com. Now, the server hostname is host.siriusdesigner.com and the FQDN is siriusdesigner.com (registered). Am I missing something here? Or do I need to have some sort of wildcard in place (which means I would need to call them about upgrading my account to class 2)? | SSL not valid for domain | https | Using a wildcard won't work unless you have purchased an expensive SSL certificate which supports that.Accessing https://domain.com I can see that the SSL looks like it's working as intended; however, accessing https://www.domain.com it complains about host. Unless I'm mistaken and you do want to use www., you need to add the SSL cert to www.domain.com, not domain.com. People often make the mistake of not realizing that www is a subdomain, when it is.
_unix.107365 | Every time I start up the red5 server it works but hangs, so the only way to stop it is to close the terminal window. I've never been able to start and stop it from the command line and go on to something else without losing my shell connection. | terminal hangs after start up of red5 server | red5 server | To clarify my comment, Control+z pauses a process and returns control to the shell. From there fg will unpause it in the foreground and bg will unpause it in the background. You can initially start it in the background by adding & to the end of your command line.
_unix.55925 | My Arch (3.6.5-1) is exhibiting a rather peculiar problem: when wifi is set up, all logs indicate that the setup was successful and that the interface is up and functional. However, when attempting to access a website (or execute ping) all requests time out (despite that connection is reported as working and signal at 63% strength). This tends to happen randomly after laptop is switched on - after some time the connection usually starts working and does not break until next shutdown/suspend.Relevant dmesg entries (full dmesg output can be found here):[ 13.858528] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready[ 14.024275] r8169 0000:02:00.0: eth0: link down[ 14.024339] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready[ 34.895920] wlan0: authenticate with 00:24:6c:c8:e4:a1[ 34.900827] wlan0: send auth to 00:24:6c:c8:e4:a1 (try 1/3)[ 34.902963] wlan0: authenticated[ 34.908362] wlan0: associate with 00:24:6c:c8:e4:a1 (try 1/3)[ 34.911153] wlan0: RX AssocResp from 00:24:6c:c8:e4:a1 (capab=0x1431 status=0 aid=9)[ 34.911217] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready[ 34.911294] wlan0: associatedip -s link shows:wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT qlen 1000 link/ether 50:b7:c3:1e:f4:21 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 14970982 50472 0 0 0 0 TX: bytes packets errors dropped carrier collsns 19116 233 0 0 0 0 ip minotor outputs some failure messages:[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[LINK]3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> link/ether [LINK]3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN link/ether 50:b7:c3:1e:f4:21 brd ff:ff:ff:ff:ff:ff[LINK]3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state DORMANT link/ether 50:b7:c3:1e:f4:21 brd ff:ff:ff:ff:ff:ff[LINK]3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP,LOWER_UP> link/ether [LINK]3: wlan0: 
<NO-CARRIER,BROADCAST,MULTICAST,UP,LOWER_UP> link/ether [NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[LINK]3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP link/ether 50:b7:c3:1e:f4:21 brd ff:ff:ff:ff:ff:ff[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]ff02::2 dev wlan0 lladdr 33:33:00:00:00:02 NOARP[NEIGH]ff02::1:ff1e:f421 dev wlan0 lladdr 33:33:ff:1e:f4:21 NOARP[NEIGH]ff02::16 dev wlan0 lladdr 33:33:00:00:00:16 NOARP[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.79.218 dev lo lladdr 00:00:00:00:00:00 NOARP[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[LINK]3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> link/ether [NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILED[NEIGH]144.32.78.1 dev wlan0 FAILEDAll that should be loaded seem to be (output of lsmod). Any idea on how to solve this or what the problem is? | WiFi not working - wlan0 FAILED | arch linux;wifi;kernel modules;laptop | null |
_unix.334189 | I'm trying to understand the syntax of ifupdown a bit better, and on several sites detailing fairly straightforward static configurations, the example documentation includes a line stating `network 192.168.0.0' -- or something obviously similar. For example, # The loopback network interfaceauto lo eth0iface lo inet loopback# The primary network interfaceiface eth0 inet static address 192.168.10.33 netmask 255.255.255.0 broadcast 192.168.10.255 network 192.168.10.0 gateway 192.168.10.254 dns-nameservers 192.168.10.254What exactly does this line do? I can't imagine that it contains anything that the netmask + address doesn't convey about, for example, broadcast addresses. There is much useful documentation about the myriad array of powerful things that one can do with /etc/network/interfaces available online. Almost all of it details various aspects of networking. Therefore, googling isn't terribly helpful! | In Debian-derived flavours, what does the 'network' line in /etc/network/interfaces actually do? | networking;ifconfig | The network doesn't have to be specified as it is simply the result of address & netmask (& is a binary and):192.168.10.33 & 255.255.255.0 = 192.168.10.0It may make it easier to understand by showing it in binary: 11000000.10101000.00001010.00100001 (192.168.10.33)& 11111111.11111111.11111111.00000000 (255.255.255.0)------------------------------------- 11000000.10101000.00001010.00000000 (192.168.10.0) |
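The same address & netmask computation can be checked with Python's standard ipaddress module (a verification sketch, not part of ifupdown itself):

```python
import ipaddress

def network_address(addr: str, netmask: str) -> str:
    # The 'network' line is just the bitwise AND of address and netmask.
    a = int(ipaddress.IPv4Address(addr))
    m = int(ipaddress.IPv4Address(netmask))
    return str(ipaddress.IPv4Address(a & m))
```

`network_address("192.168.10.33", "255.255.255.0")` returns `"192.168.10.0"`, matching the binary worked example in the answer.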
_codereview.24578 | I'm implementing an is_dir() function to determine if a given item is a directory or a file for FTP/FTPS connections. Currently, I have two methods: one using PHP's FTP wrapper for the is_dir() function and another recommended in the help page for the ftp_chdir() function.Here's the first one (protocol wrapper):if ($items = ftp_nlist($this->attributes[link], $path)) { $output = null; foreach ($items as $item) { if (is_dir(ftp://{$this->attributes[user]}:{$this->attributes[password]}@{$this->attributes[host]}/{$item})) { $output[] = $item; } } $output[] = sprintf(%5.4fs, (microtime(true) - $_SERVER[REQUEST_TIME_FLOAT])); return $output;}Here's the second one (chdir implementation):if ($items = ftp_nlist($this->attributes[link], $path)) { $output = null; $current = ftp_pwd($this->attributes[link]); foreach ($items as $item) { if (@ftp_chdir($this->attributes[link], $item)) { ftp_chdir($this->attributes[link], $current); $output[] = $item; } } $output[] = sprintf(%5.4fs, (microtime(true) - $_SERVER[REQUEST_TIME_FLOAT])); return $output;}The problem I find with these two implementations is they're quite slow (or, at least, I think they should be faster to be used within AJAX calls). Running these two functions and measuring their execution time with the last element in the returned array, these are their timed values (ran them 50 times to get an accurate enough time reading): 7.2312s for method A and 2.4534s for method B.Which shows the ftp_chdir() implementation is almost 3x faster than using the FTP wrapper. Still... I have the feeling there may be room for improvement.My development machine is Windows 7 x64 with Apache 2.4.4 (x86), PHP 5.4.13 (x86, thread-safe version) and MySQL 5.6.10 x64 (irrelevant in this case), always keeping up-to-date packages. 
The FTP server I ran these tests against is another Windows machine but a full-fledged production server (dedicated) on a data center.Is there any other method to do the same but in a better (and/or faster) fashion? | is_dir function for FTP/FTPS connections | php;file system;ftp | null |
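For comparison, the ftp_chdir() trick from the faster PHP method translates to Python's ftplib along these lines (a sketch — the FTP client is passed in, so it can be exercised with a stub rather than a live server):

```python
from ftplib import error_perm

def is_dir(ftp, path):
    # A path is a directory iff we can cwd into it; restore the previous
    # directory afterwards, mirroring the PHP ftp_chdir($current) restore.
    # `ftp` is any object with pwd()/cwd() like ftplib.FTP.
    original = ftp.pwd()
    try:
        ftp.cwd(path)
    except error_perm:
        return False
    ftp.cwd(original)
    return True
```

As in the PHP version, each check costs a round trip per item, so listing large directories this way stays chatty; batching or a server that supports MLSD would be the next thing to try.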
_softwareengineering.311636 | I have seen many libraries provide a high-level API in languages like Python or Lua. For example:The linear algebra library Trilinos provides a Python API.The deep learning framework Torch provides a Lua API.However, I can hardly find C/C++/CUDA libraries which provide APIs for Java/Jython/Scala or any other JVM languages. Of course there are exceptions, e.g. OpenCV provides APIs for both Python and Java.Apart from the language itself (for example, Python is good for analysis and small scripts), are there any other reasons that stop C libraries from providing APIs for JVM languages?Personally I think using Scala for scripting is as convenient as Python. But not many C libs support Scala. Maybe it is not popular enough.Does the design of the JVM make it difficult to do that? | Are JVM languages more difficult to integrate with C than other languages? | java;extensibility | null
_webmaster.10583 | So, I am currently writing something up for a college class. Problem is, everything is hypothetical. I need some proof. I believe a first impression on a website is imperative so that people actually use it and, in my case, buy your product or services as well.Basically I'm wondering: have there been any studies that show how a better web design will increase revenue for any kind of service? I don't just mean selling products like a T-shirt, but labor services as well. If someone wanted their computer fixed and searched for companies that can do so, will a first impression on the website help them make their decision to use your company? Are there any studies like this? White papers maybe?Thanks! | Do first impressions really count? | website design;marketing | It's a matter of common sense. It's a basic human trait that people form their attitude to something/someone in the first interaction they have. There are several factors that lead to this so-called first impression. It's not always looks. It's sometimes a complex combination of the user experience in interacting with the site, or the content itself. For a person who is looking for information, what is more important is not design or looks, but the information itself. Some of the areas that can help in making a good impression are:User ExperienceAccessibilityUsabilityDesignSpeedContent WorthinessSee also:21 things that influence first impression - Vandelay Design; First Impression through design - UIE; A study published by the BBC: First Impressions Count
_unix.87928 | I'm doing rsync --files-from=find-output.txt ~/backup/ (simplified).I'm trying to add --delete, to make rsync delete files in backup that are not included in the file list, but it does not delete them.Am I trying to do something self-contradictory? Do I need to use another option? | Combining rsync --files-from with --delete | backup;rsync | null |
_softwareengineering.191207 | I'm having trouble deciding how to design this service API.public class GetCurrentValuesRequest{ public int ReferenceID { get; set; } public int[] FilterIDs { get; set; }}public class GetDefaultValuesRequest{ public int[] FilterIDs { get; set; }}public class GetValuesAsOfDateRequest{ public int ReferenceID { get; set; } public int[] FilterIDs { get; set; } public DateTime AsOf { get; set; }}public class GetValuesAsOfChangeSetRequest{ public int ReferenceID { get; set; } public int[] FilterIDs { get; set; } public long ChangeSetIDs { get; set; }}public class GetProposedValuesRequest{ public int ReferenceID { get; set; } public int[] FilterIDs { get; set; } public long ApprovalKey { get; set; }}public class GetValuesIfModifiedRequest{ public int ReferenceID { get; set; } public int[] FilterIDs { get; set; } public DateTime Since { get; set; }}public class GetValuesResponse{ public string[] Results { get; set; }}public class GetValuesIfModifiedResponse{ public string[] Results { get; set; } public bool IsModified { get; set; }}public interface IService{ GetValuesResponse GetValues(GetCurrentValuesRequest request); GetValuesResponse GetValues(GetDefaultValuesRequest request); GetValuesResponse GetValues(GetValuesAsOfDateRequest request); GetValuesResponse GetValues(GetValuesAsOfChangeSetRequest request); GetValuesResponse GetValues(GetProposedValuesRequest request); GetValuesIfModifiedResponse GetValuesIfModified(GetValuesIfModifiedRequest request);}I've thought about changing it to have make the IfModified request / response subclasses of the simple GetValues request response and only including one GetValues call. The server would return a different response depending on the input request, but that requires user to call IService this:var response = (GetValuesIfModifiedResponse)serviceClient.GetValues(new GetValuesIfModifiedRequest() { ... 
});I've also thought about placing IsModified in the simple GetValuesResponse, and only populating it if a GetValuesIfModifiedRequest is passed into it. But it seems a bit strange to include it in a result from a method which does not actually do anything with it. Also, it might throw off a user if they see it and expect to be able to use it in their code. bool? IsModified is better, but I'm not entirely sold on it just yet.Any suggestions for how to best design this API? | Designing an API for service operations with closely related parameters | c#;design;wcf;api design | In my opinion, your sole request object should look something like this:public class GetValuesRequest{ public int[] FilterIDs { get; set; } public DateTime SearchDate { get; set; } public SearchType SearchType { get; set; }}public enum SearchType{ AsOf, Since}You can tweak this to your taste, but the point is that there is only one search date submitted, and the enum provides a switch mechanism between past and future.You can then return an object thusly:public class GetValuesResponse{ public string[] Results { get; set; } public CacheState CacheResult { get; set; }}public enum CacheState{ Modified, Unmodified, NotChecked}
_codereview.63349 | This code finds the index of the character whose removal will make the string a palindrome. If the string is already a palindrome, it should return -1.This code works correctly, but I am looking for a better implementation of it. Here is the problem description.def palindrome(string): return string == string[::-1]def palindrome_index(string, start, end): while start < end: if string[start] == string[end]: start, end = start+1, end-1 continue if palindrome(string[start:end]): return end return start return -1N = input()strings = [str(raw_input()) for _ in range(N)]for string in strings: print palindrome_index(string, 0, len(string)-1)Can this be written shorter? The size of input strings can be as large as 10^5, hence a recursive approach won't work. | Index of letter removed to make string palindrome | python;programming challenge;palindrome | FlowThe continue and returns inside the while loop make me feel the loop is doing too much. I would say that the loop should be for moving start and end and then put the logic outside of it:while ( start < end and string[start] == string[end] ): start, end = start + 1, end - 1# now figure out the resultResultThe problem states that all strings that are not already palindromes are only off by one character.Our loop moved start and end such that if start == end, then the string is a palindrome. It has also moved these indices such that if the string is not a palindrome, one of them is 'pointing' to the character that needs to be removed.To figure out which index is correct, I chose to compare the character pointed to by start to the character pointed to by end - 1.if ( start == end ): return -1elif ( string[ start ] == string[ end - 1 ] ): return endelse: return startPalindrome FunctionYou don't need this function at all because we know that if the string is not a palindrome, it is only off by one character. 
We were able to figure out the index of this character just by moving start and end. Anyway, since the function returns a boolean, it should be named isPalindrome
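Assembled into one runnable function, the review's fragments look like this (one small assumption on my part: the final check uses start >= end rather than the answer's start == end, so even-length palindromes such as abba also return -1):

```python
def palindrome_index(string):
    # Walk inward while the two ends match (the loop from the review).
    start, end = 0, len(string) - 1
    while start < end and string[start] == string[end]:
        start, end = start + 1, end - 1
    # Pointers met or crossed: the string is already a palindrome.
    if start >= end:
        return -1
    # Otherwise one of the two pointers marks the character to remove.
    if string[start] == string[end - 1]:
        return end
    return start
```

This relies on the problem's guarantee that every input is at most one removal away from a palindrome.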
_softwareengineering.280362 | We have some legacy code that has a bunch of singletons all over the place (written in C#).The singleton is a fairly classic implementation of the pattern:public class SomeSingleton{ private static SomeSingleton instance; private SomeSingleton() { } public static SomeSingleton Instance { get { if (instance == null) { instance = new SomeSingleton(); } return instance; } } }Note that thread safety is not a concern, so no locks are used.In order to make the code more testable, and without making too many modifications, I'd like to modify this code to delegate the creation of the singleton instance in another class (a factory or similar pattern).This can assist in creating a test instance for testing purposes, or the real version, as it is used now.Is this a common practice? I could not find any reference to such pattern being used. | Factory for creating a singleton instance | c#;.net;dependency injection;singleton;factory method | null |
_opensource.4200 | For an open source project is it OK to modify the original LICENSE.txt file from https://www.apache.org/licenses/LICENSE-2.0.txt and also the source file headers to use https instead of http for the links to the license?I do not want my users to accidentally follow non-https links. | Is it OK to change url scheme for the links to Apache 2 license? | apache 2.0;license file;source code | null |
_vi.8833 | The situation: I want to use vader.vim to write some unit tests for a script of mine.However, to really test it, it would be best to only have my plugin and vader.vim loaded. I know that you can use vim -c 'your-commands-here' via the cmdline. However, when I try to use vim -c 'set rtp+=my-plugin' it simply does nothing. I feel like the whole bootup process is finished and then it gets added to the rtp, which is unfortunate.So the question: How do I effectively load whole plugins from the cmdline, or how do I resource my plugins in my rtp? | How to load a single plugin when starting vim from the cmdline | plugin system | Use a custom vimrc to source those plugins and use vim -u /path/to/custom_vimrcor, as suggested by @hgiesel, use process substitutionvim -u <( cat <<< 'commands to be sourced' )
_unix.301227 | I am rather new to these operating systems, and today I ran a tar -c command, specifically tar -c file.tar.gz folder/ which resulted in me panicking and ctrl-c my way out of that. But I've noticed there's a generic file named v with no extensions in the folder containing the one I was trying to put in a tar. I was wondering, what exactly did that command do and what is that file? I'm perplexed, as v is nowhere near the name of the .tar.gz file I was trying to create. For reference, the machine I was working on is a CentOS 6.7 | tar - c and generic v file | shell;tar | If you just do tar -c file.tar.gz folder/, the parameters tell tar to create a tar file (-c) from the files file.tar.gz and those files under folder/. Since you don't specify where the output has to go (with -f or by redirection on the commandline), no files will be created, but your screen might be garbled. What you probably wanted to do is write a tar file called file.tar.gz; for that you need to specify both that tar should write to a file (-f) and that it should apply compression (-z). Combined: tar -czf file.tar.gz folder/ |
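For comparison, the same distinction shows up in Python's tarfile module: the output archive is an explicit argument (like tar's -f), and compression is selected separately (like -z). A small sketch, with hypothetical names:

```python
import tarfile

def make_archive(output_path, source_path):
    """Rough equivalent of `tar -czf output_path source_path`.

    "w:gz" combines create (-c) with gzip compression (-z); the
    destination file must be named explicitly, mirroring -f.
    """
    with tarfile.open(output_path, "w:gz") as tar:
        tar.add(source_path)
    return output_path
```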
_webapps.26664 | On LinkedIn I used the profile sections feature to add some courses I've taken and organizations I participate in. The Organizations have dates associated with them but this does not seem to affect the order in which they appear. The Courses don't even have dates. How can I effectively rearrange courses and organizations once I've added them? | How can I rearrange items within Organizations and Courses on my LinkedIn profile? | linkedin | null |
_cs.65560 | Could anyone kindly point me to relevant literature in computer vision/graphics that details metrics for image/visual complexity and how to regularize it? More specifically, I am trying to design a neural algorithm for creating business logos, and one major difference from image generation problems in CV/CG is the simplicity required of logos. I have been thinking along the lines of adding a visual/image complexity term to be regularized and penalized as the algorithm learns the balance between lower-level style and higher-level meaning/content of multidimensional input images and descriptions. I have been searching around but couldn't find related resources... Could anyone shed some light on that? | Metrics for image/visual complexity in computer vision/graphics? | graphs;reference request;image processing | null |
_codereview.14322 | I have implemented an STL-like graph class. Could someone review it and tell me things that I could add to it?File graph.hpp#include <vector>#include <list>using std::vector;using std::list;#ifndef GRAPH_IMPL#define GRAPH_IMPLnamespace graph { struct node { int v,w; }; template<class T, bool digraph> class graph; template<class T> class vertex_impl { friend class graph<T,true>; friend class graph<T,false>; private: list<node> adj; int index; T masking; public: vertex_impl(int index = -1, const T& masking = T()) : index(index), masking(masking){} vertex_impl(const vertex_impl& other) : index(other.index), masking(other.masking){} typedef typename list<node>::iterator iterator; typedef typename list<node>::const_iterator const_iterator; typedef typename list<node>::reverse_iterator reverse_iterator; typedef typename list<node>::const_reverse_iterator const_reverse_iterator; typedef typename list<node>::size_type size_type; iterator begin() { return adj.begin(); } const_iterator begin() const { return adj.begin(); } iterator end() { return adj.end(); } const_iterator end() const { return adj.end(); } reverse_iterator rbegin() { return adj.rbegin(); } const_reverse_iterator rbegin() const { return adj.begin(); } reverse_iterator rend() { return adj.rend(); } const_reverse_iterator rend() const { return adj.rend(); } size_type degree() const { return adj.size(); } int name() const { return index; } const T& mask() const { return masking; } void set_mask(const T& msk) { masking = msk; } }; template<class T, bool digraph> class graph { private: vector<vertex_impl<T> > adj; typename vector<vertex_impl<T> >::size_type V,E; public: typedef vertex_impl<T> vertex; typedef typename vector<vertex>::iterator iterator; typedef typename vector<vertex>::const_iterator const_iterator; typedef typename vector<vertex>::reverse_iterator reverse_iterator; typedef typename vector<vertex>::const_reverse_iterator const_reverse_iterator; typedef typename 
vector<vertex>::size_type size_type; graph(size_type v) : adj(v), V(v), E(0) { for(size_type i = 0; i < v; ++i) { adj[i].index = i; } } iterator begin() { return adj.begin(); } const_iterator begin() const { return adj.begin(); } iterator end() { return adj.end(); } const_iterator end() const { return adj.end(); } reverse_iterator rbegin() { return adj.rbegin(); } const_reverse_iterator rbegin() const { return adj.rbegin(); } reverse_iterator rend() { return adj.rend(); } const_reverse_iterator rend() const { return adj.rend(); } void insert(int from, int to, int weight = 1) { adj[from].adj.push_back((node){to,weight}); E++; if(!digraph) adj[to].adj.push_back((node){from,weight}); } void erase(int from, int to) { for(typename vertex::iterator i = adj[from].begin(); i != adj[from].end();) { if(i->v == to) { i = adj[from].adj.erase(i); E--; } else { ++i; } } if(!digraph) { for(typename vertex::iterator i = adj[to].begin(); i != adj[to].end();) { if(i->v == from) { i = adj[to].adj.erase(i); } else { ++i; } } } } size_type vertices() const { return V; } size_type edges() const { return E; } vertex& operator[](int i) { return adj[i]; } const vertex& operator[](int i) const { return adj[i]; } };}#endifFile main.cpp#include <stdio.h>#include graph.hppint main(){ graph::graph<char*,false> gr(5); int m; scanf(%d, &m); for(int i=0; i<m; ++i) { int a,b; scanf(%d %d, &a, &b); gr.insert(a,b); } for(graph::graph<char*,false>::size_type i = 0; i < 5; ++i) { printf(%d: , gr[i].name()); for(graph::graph<char*,false>::vertex::iterator j = gr[i].begin(); j != gr[i].end(); ++j) { printf( %d, j->v); } printf(\n); } return 0;} | STL-like graph implementation | c++;classes;graph;stl | your node type doesn't seem to be a node, but an edge (or an adjacency relationship, or something). Unless this implementation is based on some literature which describes entries in the adjacency list as nodes, I'd consider renaming itI have no idea what mask and masking are supposed to mean. 
They may be meaningful names in the context where you're using this graph, but it isn't clear that they are in a generic graph. It's just where the user attaches their arbitrary data to each vertex, right? Is it even used?
- the number of vertices is fixed at creation time, and you can only add edges. Is that sensible?
- graph::insert doesn't check whether from and to are valid indices
- what does digraph mean here - directed graph? Because there is another meaning which is totally unrelated. Why not just say directed?
- it looks like graph::V is always identical to adj.size() (and is anyway never used), so you can remove it
- vertex_impl is an internal implementation detail, yet you're exposing it. STL containers hide these details (list & tree nodes for example) inside the iterator, and only expose the user data. Is there a useful way of iterating transparently over the graph without exposing this? |
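To make a couple of the review points concrete, here is a compact Python analogue (hypothetical sketch, not the reviewed C++): the per-vertex lists hold edges rather than "nodes", insert is bounds-checked, and the vertex count is derived from the container instead of being shadowed in a separate V member.

```python
class Graph:
    """Adjacency-list graph sketch illustrating the review points."""
    def __init__(self, n, directed=False):
        self.adj = [[] for _ in range(n)]  # each entry is a list of (to, weight) edges
        self.directed = directed

    def insert(self, u, v, weight=1):
        n = len(self.adj)
        if not (0 <= u < n and 0 <= v < n):
            raise IndexError("vertex out of range")  # the check the C++ version lacks
        self.adj[u].append((v, weight))
        if not self.directed:
            self.adj[v].append((u, weight))

    def vertices(self):
        return len(self.adj)  # no separate V field to keep in sync
```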
_webapps.82250 | I have two Gmail accounts. When I click on an email address in a website, it automatically starts a full screen (or wide) mode. Is there any way to change to my secondary profile without exiting this tab? | Changing Gmail profile when in full screen mode? | gmail | The New Message window is opened using the Google/Gmail account you are currently logged in as. It does not seem to be possible to switch to a delegated account at this stage. However, you can set up the other account as an email address from which you can Send mail as (in Settings > Accounts and Import) and then you can pick the other email address from the From dropdown when composing the email. Official Gmail Help: Change the From address when replying or forwarding |
_webmaster.17957 | I am streamlining my domains under one domain, so I am getting rid of some. I have some roughly five-year-old domains with stable traffic. What should I do with such domains? Where can I sell them? I found an article here but I am unsure of its quality. It suggested GreatDomains -website and Afternic -website. I would also appreciate hearing how you usually determine the valuation of a domain. | Sell Unused domains? | domains | null |
_webapps.70689 | YouTube videos typically default to playing in 1080p for me, which isn't usually a problem. However, if the highest quality is 60fps, over half of the frames get dropped (pretty consistently for all 1080p@60fps videos). Whenever I try to change the video quality settings, YouTube completely ignores the setting I've chosen and returns the video to 1080p. The quality will change to what I've selected, but after a few seconds of playback at the quality I selected (from what I can tell, it plays whatever buffers within approx. the 1st second after I change the quality) it returns to 1080p. This happens in all player sizes, and even using extensions to override the default YouTube behavior doesn't fix it. I'm using Chrome 39.0, running on a Win 8.1 machine with hardware acceleration disabled (although the same thing happens when hardware acceleration is enabled). Any insight about why this might be and how I can fix it? It's seriously hindering my ability to enjoy some videos. EDIT: I've noticed that this doesn't always happen; it seems to vary on a day-to-day basis. Some days I can change the quality settings on all videos without an issue, other days they get stuck as described above. I'm not sure what's causing the difference, but I'll be keeping an eye out to see if I can figure out the source of the problem. If I do, I'll update things over here. | YouTube Stuck to HD Video | youtube;video;video streaming | null |
_unix.318496 | I'm new to using LDAP; anyway, I had my server and client configured, and before, I was able to enter getent passwd and my users would be displayed on the server. Then I decided to have a play around and tried to modify binddn in nss_ldap.conf, and after I did that (or something like that), my users won't display anymore on the server. Can I please get some help with this? This is my nss_ldap.conf file, and this is where I made the change before the users stopped displaying. Thank you.

# @(#)$Id: ldap.conf,v 2.49 2009/04/25 01:53:15 lukeh Exp $
#
# This is the configuration file for the LDAP nameservice
# switch library and the LDAP PAM module.
#
# PADL Software
# http://www.padl.com
#
# Your LDAP server. Must be resolvable without using LDAP.
# Multiple hosts may be specified, each separated by a
# space. How long nss_ldap takes to failover depends on
# whether your LDAP client library supports configurable
# network or connect timeouts (see bind_timelimit).
host fred.com
# The distinguished name of the search base.
base dc=fred,dc=com,
# Another way to specify your LDAP server is to provide an
# uri with the server name. This allows to use
# Unix Domain Sockets to connect to a local LDAP Server.
#uri ldap://fred.ord.eu
#uri ldaps://127.0.0.1/
#uri ldapi://%2fvar%2frun%2fldapi_sock/
# Note: %2f encodes the '/' used as directory separator
# The LDAP version to use (defaults to 3
# if supported by client library)
#ldap_version 3
# The distinguished name to bind to the server with.
# Optional: default is to bind anonymously.
#binddn cn=Managers,dc=fred,dc=com
# The credentials to bind with.
# Optional: default is no credential.
bindpw secret
# The distinguished name to bind to the server with
# if the effective user ID is root. 
Password is# stored in /usr/local/etc/nss_ldap.secret (mode 600)#rootbinddn cn=manager,dc=padl,dc=com# The port.# Optional: default is 389.#port 389# The search scope.#scope sub#scope one#scope base# Search timelimit in seconds (0 for indefinite; default 0)#timelimit 0# Bind/connect timelimit (0 for indefinite; default 30)#bind_timelimit 30# Reconnect policy:# hard_open: reconnect to DSA with exponential backoff if# opening connection failed# hard_init: reconnect to DSA with exponential backoff if# initializing connection failed# hard: alias for hard_open# soft: return immediately on server failurebind_policy soft# Connection policy:# persist: DSA connections are kept open (default)# oneshot: DSA connections destroyed after request#nss_connect_policy persist# Idle timelimit; client will close connections# (nss_ldap only) if the server has not been contacted# for the number of seconds specified below.#idle_timelimit 3600# Use paged rseults#nss_paged_results yes# Pagesize: when paged results enable, used to set the# pagesize to a custom value#pagesize 1000# Filter to AND with uid=%s#pam_filter objectclass=account# The user ID attribute (defaults to uid)#pam_login_attribute uid# Search the root DSE for the password policy (works# with Netscape Directory Server)#pam_lookup_policy yes# Check the 'host' attribute for access control# Default is no; if set to yes, and user has no# value for the host attribute, and pam_ldap is# configured for account management (authorization)# then the user will not be allowed to login.#pam_check_host_attr yes# Check the 'authorizedService' attribute for access# control# Default is no; if set to yes, and the user has no# value for the authorizedService attribute, and# pam_ldap is configured for account management# (authorization) then the user will not be allowed# to login.#pam_check_service_attr yes# Group to enforce membership of#pam_groupdn cn=PAM,ou=Groups,dc=padl,dc=com# Group member attribute#pam_member_attribute uniquemember# 
Specify a minium or maximum UID number allowed#pam_min_uid 0#pam_max_uid 0# Template login attribute, default template user# (can be overriden by value of former attribute# in user's entry)#pam_login_attribute userPrincipalName#pam_template_login_attribute uid#pam_template_login nobody# HEADS UP: the pam_crypt, pam_nds_passwd,# and pam_ad_passwd options are no# longer supported.## Do not hash the password at all; presume# the directory server will do it, if# necessary. This is the default.#pam_password clear# Hash password locally; required for University of# Michigan LDAP server, and works with Netscape# Directory Server if you're using the UNIX-Crypt# hash mechanism and not using the NT Synchronization# service.#pam_password crypt# Remove old password first, then update in# cleartext. Necessary for use with Novell# Directory Services (NDS)#pam_password nds# RACF is an alias for the above. For use with# IBM RACF#pam_password racf# Update Active Directory password, by# creating Unicode password and updating# unicodePwd attribute.#pam_password ad# Use the OpenLDAP password change# extended operation to update the password.#pam_password exop# Redirect users to a URL or somesuch on password# changes.#pam_password_prohibit_message Please visit http://internal to change your password.# Use backlinks for answering initgroups()#nss_initgroups backlink# Enable support for RFC2307bis (distinguished names in group# members)#nss_schema rfc2307bis# RFC2307bis naming contexts# Syntax:# nss_base_XXX base?scope?filter# where scope is {base,one,sub}# and filter is a filter to be &'d with the# default filter.# You can omit the suffix eg:# nss_base_passwd ou=People,# to append the default base DN but this# may incur a small performance impact.#nss_base_passwd ou=People,dc=padl,dc=com?one#nss_base_shadow ou=People,dc=padl,dc=com?one#nss_base_group ou=Group,dc=padl,dc=com?one#nss_base_hosts ou=Hosts,dc=padl,dc=com?one#nss_base_services ou=Services,dc=padl,dc=com?one#nss_base_networks 
ou=Networks,dc=padl,dc=com?one#nss_base_protocols ou=Protocols,dc=padl,dc=com?one#nss_base_rpc ou=Rpc,dc=padl,dc=com?one#nss_base_ethers ou=Ethers,dc=padl,dc=com?one#nss_base_netmasks ou=Networks,dc=padl,dc=com?ne#nss_base_bootparams ou=Ethers,dc=padl,dc=com?one#nss_base_aliases ou=Aliases,dc=padl,dc=com?one#nss_base_netgroup ou=Netgroup,dc=padl,dc=com?one# attribute/objectclass mapping# Syntax:#nss_map_attribute rfc2307attribute mapped_attribute#nss_map_objectclass rfc2307objectclass mapped_objectclass# configure --enable-nds is no longer supported.# NDS mappings#nss_map_attribute uniqueMember member# Services for UNIX 3.5 mappings#nss_map_objectclass posixAccount User#nss_map_objectclass shadowAccount User#nss_map_attribute uid msSFU30Name#nss_map_attribute uniqueMember msSFU30PosixMember#nss_map_attribute userPassword msSFU30Password#nss_map_attribute homeDirectory msSFU30HomeDirectory#nss_map_attribute homeDirectory msSFUHomeDirectory#nss_map_objectclass posixGroup Group#pam_login_attribute msSFU30Name#pam_filter objectclass=User#pam_password ad# configure --enable-mssfu-schema is no longer supported.# Services for UNIX 2.0 mappings#nss_map_objectclass posixAccount User#nss_map_objectclass shadowAccount user#nss_map_attribute uid msSFUName#nss_map_attribute uniqueMember posixMember#nss_map_attribute userPassword msSFUPassword#nss_map_attribute homeDirectory msSFUHomeDirectory#nss_map_attribute shadowLastChange pwdLastSet#nss_map_objectclass posixGroup Group#nss_map_attribute cn msSFUName#pam_login_attribute msSFUName#pam_filter objectclass=User#pam_password ad# RFC 2307 (AD) mappings#nss_map_objectclass posixAccount user#nss_map_objectclass shadowAccount user#nss_map_attribute uid sAMAccountName#nss_map_attribute homeDirectory unixHomeDirectory#nss_map_attribute shadowLastChange pwdLastSet#nss_map_objectclass posixGroup group#nss_map_attribute uniqueMember member#pam_login_attribute sAMAccountName#pam_filter objectclass=User#pam_password ad# configure 
--enable-authpassword is no longer supported# AuthPassword mappings#nss_map_attribute userPassword authPassword# AIX SecureWay mappings#nss_map_objectclass posixAccount aixAccount#nss_map_attribute uid userName#nss_map_attribute gidNumber gid#nss_map_attribute uidNumber uid#nss_map_attribute userPassword passwordChar#nss_map_objectclass posixGroup aixAccessGroup#nss_base_group ou=aixgroup,?one#nss_map_attribute cn groupName#nss_map_attribute uniqueMember member#pam_login_attribute userName#pam_filter objectclass=aixAccount#pam_password clear# For pre-RFC2307bis automount schema#nss_map_objectclass automountMap nisMap#nss_map_attribute automountMapName nisMapName#nss_map_objectclass automount nisObject#nss_map_attribute automountKey cn#nss_map_attribute automountInformation nisMapEntry# Netscape SDK LDAPS#ssl on# Netscape SDK SSL options#sslpath /etc/ssl/certs# OpenLDAP SSL mechanism# start_tls mechanism uses the normal LDAP port, LDAPS typically 636#ssl start_tls#ssl on# OpenLDAP SSL options# Require and verify server certificate (yes/no)# Default is to use libldap's default behavior, which can be configured in# /usr/local/etc/openldap/ldap.conf using the TLS_REQCERT setting. The default for# OpenLDAP 2.0 and earlier is no, for 2.1 and later is yes.#tls_checkpeer yes# CA certificates for server certificate verification# At least one of these are required if tls_checkpeer is yes#tls_cacertfile /etc/ssl/ca.cert#tls_cacertdir /etc/ssl/certs# Seed the PRNG if /dev/urandom is not provided#tls_randfile /var/run/egd-pool# SSL cipher suite# See man ciphers for syntax#tls_ciphers TLSv1# Client certificate and key# Use these, if your server requires client authentication.#tls_cert#tls_key# Disable SASL security layers. This is needed for AD.#sasl_secprops maxssf=0# Override the default Kerberos ticket cache location.#krb5_ccname FILE:/etc/.ldapcache` | getent passwd not displaying users on the server | ldap | null |
_unix.182355 | I'm trying to get my ATI graphics driver installed on my fresh Slackware OS (Zenwalk). I'm coming from Ubuntu 14.04-14.10, so I know there's a whole lot I don't understand just yet. I download the .zip archive from here, then unzip it to extract the .run file.

sh amd-driver-installer-catalyst-13.1-legacy-linux-x86.x86_64.run --buildpkg

generates the following output:

Created directory fglrx-install.hu4jBU
Verifying archive integrity... All good.
Uncompressing AMD Catalyst(TM) Proprietary Driver-8.97.100.7.........
=====================================================================
 AMD Catalyst(TM) Proprietary Driver Installer/Packager 
=====================================================================
Generating package: Slackware/Slackware
ATI SlackBuild
--------------------------------------------
by: Emanuele Tomasi <[email protected]>
AMD kernel module generator version 2.1
kernel includes at /usr/src/linux/include not found or incomplete
file: /usr/src/linux/include/linux/version.h
ERROR: I didn't make module
Removing temporary directory: fglrx-install.hu4jBU

Using the command ./amd-driver-installer-catalyst-13.1-legacy-linux-x86.x86_64.run runs the GUI setup, but results in the same error. I'm not sure what to make of this error, or what steps I should be taking. Any assistance is appreciated. If it matters, uname -r = 3.14.5 | AMD Catalyst installation woes on Slackware | slackware;amd graphics | null |
_codereview.80340 | The task is to create a DataTable with 1 column + header and 7 rows with content. The content will never change. Implementation: private DataTable createTable() { var table = new DataTable(); table.Columns.Add(new DataColumn("Wochentage", typeof(string)));// Wochentage => WeekDays createRowAndAdd(table, "Montag"); createRowAndAdd(table, "Dienstag"); createRowAndAdd(table, "Mittwoch"); createRowAndAdd(table, "Donnerstag"); createRowAndAdd(table, "Freitag"); createRowAndAdd(table, "Samstag"); createRowAndAdd(table, "Sonntag"); return table; } private void createRowAndAdd(DataTable table, string text) { DataRow newRow = table.NewRow(); newRow[0] = text; table.Rows.Add(newRow); } createRowAndAdd does 2 things: create a row (and add the text), and add the row to the table. Also, my createTable does more than just creating the table. A method should only do one thing, right? How would you improve this, and do you think there is a line between clean coding and adding overhead with no need? Wochenuebersicht: public class Wochenuebersicht { public int KW { get; set; } public DateTime BeginDate { get; set; } public DateTime EndDate { get; set; } public List<WochenMenu> WochenMenuListe { get; set; } public DataTable Gesamt { get; set; } public string Wochenangebot { get; set; } public Wochenuebersicht() { DisplayTitle = "Wochenübersicht"; WochenMenuListe = new List<WochenMenu>(); WochenMenuListe.Add(createTage()); } private WochenMenu createTage() { var table = new DataTable(); table.Columns.Add(new DataColumn("Wochentage", typeof(string))); createRowAndAdd(table, "Montag"); createRowAndAdd(table, "Dienstag"); createRowAndAdd(table, "Mittwoch"); createRowAndAdd(table, "Donnerstag"); createRowAndAdd(table, "Freitag"); createRowAndAdd(table, "Samstag"); createRowAndAdd(table, "Sonntag"); var tage = new WochenMenu(); tage.MenuWochenDetails = table; return tage; } private void createRowAndAdd(DataTable table, string text) { DataRow newRow = table.NewRow(); newRow[0] = text; table.Rows.Add(newRow); } }

WochenMenu
public class WochenMenu { private Dictionary<int, Dictionary<string, int>> data; public string Title { get; set; } public DataTable MenuWochenDetails { get; set; } public WochenMenu() { } public WochenMenu(string title, Dictionary<int, Dictionary<string, int>> data) { Title = title; this.data = data; } public void CreateDataTable() { if (data == null) return; var Gesamt = "Gesamt"; var GesamtAPH = "Gesamt APH"; MenuWochenDetails = new DataTable(); // create column header foreach (string s in data[0].Keys) { if (s == KundenTyp.Essen_auf_Raedern.GetStringValue()) MenuWochenDetails.Columns.Add(new DataColumn(GesamtAPH)); MenuWochenDetails.Columns.Add(new DataColumn(s)); } MenuWochenDetails.Columns.Add(new DataColumn(Gesamt)); // Add data to DataTable foreach (var dataLine in data) { DataRow newRow = MenuWochenDetails.NewRow(); foreach (var item in dataLine.Value) { newRow[item.Key] = item.Value; } newRow[GesamtAPH] = dataLine.Value.Where(dic => dic.Key != KundenTyp.Essen_auf_Raedern.GetStringValue()).Sum(kvp => kvp.Value); newRow[Gesamt] = dataLine.Value.Sum(kvp => kvp.Value); MenuWochenDetails.Rows.Add(newRow); } } } | Separation of concern on micro lvl | c#;design patterns | null |
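As a sketch of the separation being asked about (translated to Python with hypothetical names, not the original C#): building a row and adding it to a table become separate concerns, and the seven repeated calls collapse into data plus a loop.

```python
WEEKDAYS = ["Montag", "Dienstag", "Mittwoch", "Donnerstag",
            "Freitag", "Samstag", "Sonntag"]

def make_row(text):
    """Does one thing: builds a row. Adding it is the caller's job."""
    return [text]

def make_weekday_table():
    """Assembles the table; the fixed content lives in data, not in code."""
    return {"columns": ["Wochentage"],
            "rows": [make_row(day) for day in WEEKDAYS]}
```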
_unix.269769 | Can somebody explain to me what a character driver is? | Drivers, Module - Kernel | linux;ubuntu;drivers;kernel modules | null |
_webapps.27749 | I want to make a link between two spreadsheets, so that if I click on a code in a cell, it opens the spreadsheet which the code refers/links to. | Can we do a link between two Google Docs spreadsheets? | google drive;google spreadsheets | You want to enter a link in a Google spreadsheet. The syntax of the command is: =hyperlink(url;Phrase). The key is that the Google spreadsheet you want to open has a URL. |
_codereview.52085 | I just wanted to right-align one column in WPF, and I found out that the syntax is not exactly concise... Is there a way to make it more concise? Consider that this is one column, but I have 20 columns which share the same identical structure, changing only the displayed property.

<GridViewColumn Header="trial" Width="110">
  <GridViewColumn.CellTemplate>
    <DataTemplate>
      <TextBlock HorizontalAlignment="Right">
        <TextBlock.Text>
          <MultiBinding StringFormat="{}{0:N} {1}">
            <Binding Path="Income"></Binding>
            <Binding ElementName="UserControl" Path="DataContext.Pinco"></Binding>
          </MultiBinding>
        </TextBlock.Text>
      </TextBlock>
    </DataTemplate>
  </GridViewColumn.CellTemplate>
</GridViewColumn>

 | Rewriting WPF GridViewColumn alignment in a less verbose way | wpf;xaml | null |
_unix.223206 | I got a problem using a yum repository in RHEL 6.3. First, I created a yum repository using the rpmforge repository. I can do yum list as well. Here are the last lines of the yum list result:

zssh.x86_64 1.5-0.c.2.el6.rf rpmforge
zsync.x86_64 0.6.2-1.el6.rf rpmforge
zvbi.x86_64 0.2.33-2.el6.rf rpmforge
zvbi-devel.x86_64 0.2.33-2.el6.rf rpmforge
zziplib.x86_64 0.13.45-1.el6.rf rpmforge
zziplib-devel.x86_64 0.13.45-1.el6.rf rpmforge

But if I do yum search for zzip*, the result is going to be like this:

[root@noi Downloads]# yum search zzip*
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Warning: No matches found for: zzip*
No Matches found

It will be different again when I just install it:

[root@noi Downloads]# yum install zzip*
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package zziplib.x86_64 0:0.13.45-1.el6.rf will be installed
---> Package zziplib-devel.x86_64 0:0.13.45-1.el6.rf will be installed
--> Processing Dependency: SDL-devel for package: zziplib-devel-0.13.45-1.el6.rf.x86_64
--> Processing Dependency: zlib-devel for package: zziplib-devel-0.13.45-1.el6.rf.x86_64
--> Finished Dependency Resolution
Error: Package: zziplib-devel-0.13.45-1.el6.rf.x86_64 (rpmforge) Requires: zlib-devel
Error: Package: zziplib-devel-0.13.45-1.el6.rf.x86_64 (rpmforge) Requires: SDL-devel
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Does anyone know how this can happen?
| Can't find using yum search, but can see it in yum list | yum;repository | I don't want to replace the nice answer provided by slm. You generally don't use any regular expressions (globs) when searching with yum search, since the search command is already looking for sub-strings within the package names and their summaries. How do I know this? There's a message that tells you this when you use yum search: Name and summary matches only, use search all for everything. Try searching without a regular expression: yum search zzip |
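The difference the answer describes can be sketched in a few lines of Python (illustrative only, not yum's actual code): substring matching treats the * literally, while glob matching expands it, which is why yum search zzip* finds nothing but yum list's glob handling does.

```python
import fnmatch

def substring_search(term, names):
    """yum-search-style: plain substring match, so 'zzip*' matches nothing."""
    return [n for n in names if term in n]

def glob_search(pattern, names):
    """Shell-style glob match, which is what the user expected 'zzip*' to do."""
    return [n for n in names if fnmatch.fnmatch(n, pattern)]
```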
_unix.38562 | So, after 10 years of wanting to study the book that Ramanujan relied upon for a lot of his early strides in mathematics, it's 2012, and the book is finally online. To celebrate, I want to go through each of the propositions using my command line, finding a way to interact with each one. In Octave syntax, the first one is a^2 - b^2 = (a-b) * (a+b). That's familiar from algebra, of course. For now, I just want to be able to make a picture of this difference of squares. I've looked at gnuplot, and it doesn't seem to be designed to do simple geometric shapes. NB: I don't want to plot the function f(x, y) = x^2 - y^2. I want to draw two squares of a given size in different colors, one inside the other, to illustrate the difference of squares graphically. What I'd like to be able to do is type something like

$plotsquare --center origin --colors=black,gray black=8x8 gray=3x3 -q -o plot.png

'black' being an 8x8 square, 9 being a 9x9 square; the gray square inside the black square illustrates the difference of squares. Does anything like that exist? | Illustrate the difference of two squares via the command line | command line;plotting;math | null |
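For what it's worth, gnuplot can draw plain rectangles via set object rect, so a tiny wrapper gets close to the hypothetical plotsquare. A Python sketch that only generates the gnuplot commands (the function name and layout are my own invention, not an existing tool):

```python
def square_script(sizes_and_colors, center=(0.0, 0.0), limit=5):
    """Emit gnuplot commands drawing concentric squares centred on `center`.

    Listing the larger square first means the smaller one is drawn on top,
    which is exactly the difference-of-squares picture: a^2 - b^2.
    """
    cx, cy = center
    lines = ["set size ratio -1"]
    for i, (size, color) in enumerate(sizes_and_colors, start=1):
        h = size / 2.0
        lines.append(
            "set object %d rect from %g,%g to %g,%g fc rgb '%s' fs solid"
            % (i, cx - h, cy - h, cx + h, cy + h, color)
        )
    lines.append("plot [%d:%d][%d:%d] NaN notitle" % (-limit, limit, -limit, limit))
    return "\n".join(lines)
```

Piping the output to gnuplot (with a suitable set term png line prepended) should produce the picture; I have not tried to reproduce the exact CLI the poster imagines.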
_webapps.104162 | I have a generated number based on the row (Google Form response). However, doing this limits my ability to filter. I want to create a secondary sheet that updates regularly with the values only (no formulas); then I will regain the ability to filter. The current sheet is called Ticket Creation and the new sheet created/updated is called Active Tickets+. function cloneGoogleSheet() { /* Copy current sheet and replaced it >.< */ var name = "labnol"; var ss = SpreadsheetApp.getActiveSpreadsheet(); var sheet = ss.getSheetByName('Ticket Creation'); sheet.getDataRange().copyTo(sheet.getDataRange(), {contentsOnly:true}); /* Before cloning the sheet, delete any previous copy */ var old = ss.getSheetByName(name); if (old) ss.deleteSheet(old); // or old.setName("new Name"); SpreadsheetApp.flush(); // Utilities.sleep(2000); sheet.setName('Active Tickets+'); /* Make the new sheet active */ ss.setActiveSheet(sheet);} Currently this will clone to the same sheet (Ticket Creation) and remove the formulas. I'm just starting to play with scripting. | Cloning a sheet without the formula(s) | google spreadsheets;google apps script | null |
_unix.233383 | I'm writing a script that shuts down the Apache service, performs a function, and turns it back on. I'd like to get some type of interactive confirmation for each process. I'm not familiar with printf, but I believe it would be required for what I want to do. Here's a portion of the script below. As I turn off the service, I would like it to say what it will do, then report OK or NOT OK on the same line, i.e.

Shutting down Apache service:

few seconds later...

Shutting down Apache service: OK

#!/bin/bash
# Turn off Apache
echo "Shutting down Apache service:"
service apache2 stop
if ( $(ps -ef | grep -v grep | grep apache2 | wc -l) > 0 )
then
echo "NOT OK!"
else
echo "OK!"
fi
 | Interactive bash script | shell script;output | In bash, you can use the -n option of echo not to print the end of line:

echo -n "Shutting down Apache service:"

or, use printf:

printf '%s' 'Shutting down Apache service'

To include the newline in printf, put \n into the template:

printf '%s\n' 'NOT OK!' |
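The same print-the-label-first, finish-the-line-later idea, sketched in Python for clarity (a hypothetical helper, not part of the answer above):

```python
import sys

def report_step(label, action, out=sys.stdout):
    """Write `label: `, run the action, then finish the same line with OK / NOT OK.

    The flush matters for the same reason echo -n / printf '%s' do in the
    bash answer: the label must appear before the (possibly slow) action runs.
    """
    out.write("%s: " % label)
    out.flush()
    try:
        ok = bool(action())
    except Exception:
        ok = False
    out.write("OK!\n" if ok else "NOT OK!\n")
    return ok
```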
_unix.97535 | I am testing an AWS Medium Instance with CentOS (AWS AMI) and comparing it with Hetzner's EX40-SSD (http://www.hetzner.de/hosting/produkte_rootserver/ex40ssd). The software setup on both is comparable (same OS, same version of MySQL 5.6.14). The CPU speed of AWS was 2 GHz and the CPU speed of Hetzner was 3.4 GHz. The mysql my.cnf was exactly the same on both servers. I have a large abc.sql file which has a dump from another server (2 GB size), copied locally onto both of the above servers. I ran the following command on both machines: #date > StartTime ; mysql -pxyz < abc.sql ; date >> StartTime & Strangely, the AWS instance completed the task in 12 mins, whereas the Hetzner instance took more than 40 mins to complete the task. There are no other servers/apps running on the Hetzner machine (a clean machine with only MySQL). The AWS machine has other software running like SugarCRM, Piwik, Nginx, but not loaded. What could be the reason why AWS outperformed Hetzner? Logically, Hetzner should be faster as it has an SSD (faster IO), a faster CPU (3.4 GHz), and operates as a dedicated machine and not a VM instance. How do I debug and trace the root cause of such strange behavior? | Performance Issue: AWS Medium Linux Instance V/S Hetzner Dedicated Instance | centos;performance;mysql | null |
_softwareengineering.253277 | I see two obvious approaches to the architecture for an iOS app which needs to talk to a server to do its job.Pretend to be a web browserUnder this approach, the app fetches a chunk of data (typically JSON) from the server in response to a user action (navigating to a new view controller, tapping an Update, whatever), puts up a spinner or some other progress indicator, then updates the view when the request completes. The model classes are mapped directly from the incoming JSON and possibly even immutable, and thrown away when the user moves off the screen they were fetched to populate.Under the pure version of this approach you set the appropriate headers on the server and let NSURLSession and friends handle caching.This works fine if you can assume the user has fast, low-latency network connectivity, and is mostly reading data from the server and not writing.SyncWith the sync approach, the app maintains a local Core Data store of model objects to display to the user. It refreshes this either in response to user action or periodically, and updates the UI (via KVO or similar) when the model changes. When the user modifies the model, these changes are sent to the server asynchronously; there must be a mechanism for resolving conflicts.This approach is more suitable when the app needs to function offline or in high latency/slow network contexts, or where the user is writing a lot of data and they need to see model changes without waiting for a server round-trip.Is there a third in-between way? What if I have an app which is mostly but not always reads, but where there are writes the user should see those updates reflected in the UI immediately. The app should be usable in low/no network situations (showing any previously cached data until a request to the server has time to respond.)Can I get the best of both worlds without going full Core-Data-with-sync (which feels heavyweight and is difficult to get right) but without also inadvertently implementing a buggy and incomplete clone of Core Data? | Middle ground architecture for client-server iOS apps? | architecture;ios;networking;client server | null
_softwareengineering.16105 | I am a CS undergrad but I landed a programming job last year and I like it a lot. We are currently 3 programmer in the development section of the company and we have to work with pretty much anything were asked to do. We deal with many different languages and learn them as needed for some quick jobs etc etc. We want to hire a 4th programmer and I'm asked to suggest some students in my class, a year younger, since I failed a class. I don't really know any of these guys except my teammates which I wouldn't suggest. We don't really want to interview them all so I thought we could make a little challenge to help us choose who to interview. We're in need of someone who understand the business even though they're new to it, and likes to learn new stuff and code. Any idea on a programming challenge or a kind of letter saying why we should take them?TL;DR: We need a new undergrad programmer, we want the best to come to us without interviewing them all. Any challenge or test you could suggest? | Choosing the right programmer among a class of undergrads | interview;hiring | Run them through the Programmer Competency Matrix and see where they fall.Identify problem solvers. People who get 100% on assignments are great, but might not be the most out of the box thinkers. Look for people who ask questions and work around problems without following traditional routes.We don't really want to interview them all so I thought we could make a little challenge to help us choose who to interview.This line in particular worries me. You should sit down with every applicant for at least five minutes unless the interaction you have shows such a gross lack of knowledge it would be worthless. You might end up (as mentioned above) with people who are great at finishing specific tasks but lack an overall big picture view. |
_unix.299278 | Data starts on the second line. Is there a simple script or utility to remove the first instance of ^m on each line of data? The problem can also be rephrased as: how can every second (even) instance of ^m be removed? Looking forward interesting (clever) responses. Preferably in Ubuntu or similar.Raw data for the clever to cut, paste and parse:Date,From,To,Flight_Number,Airline,Distance,Duration,Seat,Seat_Type,Class,Reason,Plane,Registration,Trip,Note,From_OID,To_OID,Airline_OID,Plane_OID^M- -,JFK,OTBD,American Airlines (AA),American Airlines,6687,13:52,,,,,777^M,,,Direct,3797,2241,24^M- -,JFK,OTBD,Qatar Airways (QR),Qatar Airways,6687,13:52,,,,,77W^M,,,Direct,3797,2241,4091^MThat being said, the reason for posing this question is the unexpected ^m is causing import problems into Libre-Office Calc (spreadsheet): it cause an expected new-line. | Removing first ^M from each line of file | text processing | null |
_webapps.5768 | I have 3 Facebook pages having different fans. I would like to combine all pages with fans. Is it possible? If no, how can I add all fans to a single page? | How to combine Facebook pages | facebook | null |
_unix.59739 | I need to set up some logging on my debian squeeze x64 mysql 5.1 Server:1) logging when users logged in and logged off2) logging only specific users commandsset up logging which will not slow down performance of mysql server or harddrives.Is it possible to set logging only last month?Is it possible to set up logging only in specific hours in a day?I don't want gigabytes of useless logs, just logs of specific users in one file. | Mysql LOG only specific users | logs;mysql | null |
_codereview.146817 | So I'm working on a patch for the python interpreter where you go through the list and look at the types of the elements and try to use optimized special-case comparison functions that are much cheaper than the default comparison function. This is the code that goes through and checks the list elements and then assigns the comparison function. I put a lot of thought into the design, refactoring it many times; I really want this to be readable. Is it? How can I improve the form of this? I'm not looking for performance fixes, this just executes once per sort so performance isn't a big deal... I'm looking more for style/refactoring feedback./* Turn off type checking if all keys are same type, * by replacing PyObject_RichCompare with lo.keys[0]->ob_type->tp_richcompare, * and possibly also use optimized comparison functions if keys are strings or ints. *//* Get information about the first element of the list */int keys_are_in_tuples = (lo.keys[0]->ob_type == &PyTuple_Type && Py_SIZE(lo.keys[0]) > 0);PyTypeObject* key_type = (keys_are_in_tuples ? PyTuple_GET_ITEM(lo.keys[0],0)->ob_type : lo.keys[0]->ob_type);int keys_are_all_same_type = 1;int strings_are_latin = 1;int ints_are_bounded = 1;/* Test that the above bools hold for the entire list */for (i=0; i< saved_ob_size; i++) { if (keys_are_in_tuples && (lo.keys[i]->ob_type != &PyTuple_Type || Py_SIZE(lo.keys[0]) == 0)){ keys_are_in_tuples = 0; keys_are_all_same_type = 0; break; } PyObject* key = (keys_are_in_tuples ? PyTuple_GET_ITEM(lo.keys[i],0) : lo.keys[i]); if (key->ob_type != key_type) { keys_are_all_same_type = 0; break; } else if (key_type == &PyLong_Type && ints_are_bounded && Py_ABS(Py_SIZE(key)) > 1) ints_are_bounded = 0; else if (key_type == &PyUnicode_Type && strings_are_latin && PyUnicode_KIND(key) != PyUnicode_1BYTE_KIND) strings_are_latin = 0;}/* Set compare_function appropriately based on values of the above bools */if (keys_are_all_same_type) { if (key_type == &PyUnicode_Type && strings_are_latin) compare_function = unsafe_unicode_compare; else if (key_type == &PyLong_Type && ints_are_bounded) compare_function = unsafe_long_compare; else if (key_type == &PyFloat_Type) compare_function = unsafe_float_compare; else if ((richcompare_function = key_type->tp_richcompare) != NULL) compare_function = unsafe_object_compare;} else { compare_function = safe_object_compare;}if (keys_are_in_tuples) { tuple_elem_compare = compare_function; compare_function = unsafe_tuple_compare;}/* End of type-checking stuff! */ | Select comparison function for sorting based on type information | c;api | null
_unix.256790 | How can I have a netcat connection terminate the sending half of a TCP connection if its input reaches EOF?I have a (non-standard) TCP service which reads all its input (i.e. until the client sends its FIN), and only then starts processing the data and sending back a reply. I would like to use nc to interact with this service. But at the moment the reply doesn't arrive at the nc console, and using Wireshark I can see that nc only terminates the sending side of the connection when it quits (e.g. because of a timeout). I found no command line option to change this behavior. | Half-close netcat connection | pipe;tcp;netcat | null |
_webmaster.30154 | I've heard that linking to a site from the same server is bad for SEO. If the link is set to rel no-follow, would that be OK? | Linking to site from same server SEO impact | seo;links | null |
_unix.320252 | when I run the atop -r /var/log/atop/...I see from the atop screen this PAG | scan 641376 | steal 635209 | stallthe PAG is colored with red can someone explain what PAG explain from the atop , and what this problem means ? | atop + what is the PAG in the atop | linux;rhel;atop | null |
_unix.255289 | I have been playing with a 9-bit Epson LX-300 dot-matrix printer under Debian. So far was able to: send plain text to the printer: echo 'Los comandos para configurar > /dev/usb/lp0change the printer's behavior through ESC/POS commands: echo -en '\x1B\x2D\x01' > /dev/usb/lp0 (add underline)Add it to cups: lpadmin -p LX-300 -v serial:/dev/usb/lp0 and print through it: lpr -P LX-300 -o raw text.txtNow, I'd like to use the printer's graphics mode capability. How to send a graphics file to the printer and get it to print as graphic and not plain text? | raw printing - graphics mode | linux;debian;printing;cups;printer | null |
_webmaster.103967 | I am searching in Google Xdebug for php7.1.1 but there is nothing that can really answer my question, though I found this https://github.com/oerdnj/deb.sury.org/issues/444, but the version is php7.1.0, so I can't fully rely on this answer.I followed the instructions given here, https://xdebug.org/wizard.php, and then I received a problem, phpize that I installed (from the instruction https://xdebug.org/docs/faq#phpize) is not compatible to php version (php7.1.1), then tried to resolve it by from the instruction https://xdebug.org/docs/faq#custom-phpize, still not working.Is there any workaround here?If my question is not suitable to this site, please do tell me.UpdateHere is the result of phpize:Configuring for:PHP Api Version: 20151012Zend Module Api No: 20151012Zend Extension Api No: 320151012Instead of:Configuring for:...Zend Module Api No: 20160303Zend Extension Api No: 320160303given in the instructions here https://xdebug.org/wizard.php. | Is Xdebug supported for php7.1.1? | web development | null |
_codereview.25941 | This is code for finding largest prime number smaller than \$N\$, where \$N\$ can be as large as \$10^{18}\$. But, it takes 1 minute for \$10^{18}\$. I need to pass it in 2-3 sec.What changes should I make to pass it? My compiler used is g++ 4.3.2The program here runs for multiple test cases.#include <iostream>#include <cmath>#define c2 341550071728321#define c1 4759123141using namespace std;unsigned long long int mul(unsigned long long int a, unsigned long long int b, unsigned long long int mod){ int i; unsigned long long int now = 0; for (i = 63; i >= 0; i--) if (((a >> i) & 1) == 1) break; for (; i >= 0; i--) { now <<= 1; while (now > mod) now -= mod; if (((a >> i) & 1) == 1) now += b; while (now > mod) now -= mod; } return now;}unsigned long long int pow(unsigned long long int a, unsigned long long int p, unsigned long long int mod){ if (p == 0) return 1; if (p % 2 == 0) return pow(mul(a, a, mod), p / 2, mod); return mul(pow(a, p - 1, mod), a, mod);}bool MillerRabin(unsigned long long int n){ int l; unsigned long long int ar[9] = { 2, 3, 5, 7, 11, 13, 17, 19, 23}; if (n < c1) { l = 3; } else if (n < c2) { l = 7; } else { l = 9; } //l = 9; unsigned long long d = n - 1; int s = 0; while ((d & 1) == 0) { d >>= 1; s++; } int i, j; for (i = 0; i < l; i++) { unsigned long long int a = min(n - 2, ar[i]); unsigned long long int now = pow(a, d, n); if (now == 1) continue; if (now == n - 1) continue; for (j = 1; j < s; j++) { now = mul(now, now, n); if (now == n - 1) break; } if (j == s) return false; } return true;}bool check_prime(unsigned long long int n) {if(!MillerRabin(n)) return false;return true;if(n == 2) return true;if(n % 2 == 0) return false;for(unsigned long long int i = 3; i <= sqrt(n); i+=2) { if(n % i == 0) return false;}return true;}int main(){int t;cin >> t;while(t--) { unsigned long long int n; cin >> n; while(1) { if(check_prime(n)) { cout << n << endl; break; } else { n--; } }}} | Greatest prime number smaller than N where N can be as large as 10^18 | c++;performance;primes | null
_unix.305447 | I teach in various CLI tools like git,docker etc.I want to have two bash terminals: one for running commands, and get output, and one that at all time just mirroring what the command history would give me.Is it possible to mirror realtime commands in bash like that?Example:T1: pwdT1: /home/meT1: lsT1: Documents Desktop DownloadsT2: pwdls | Mirror Bash history | bash;command history | null |
_unix.373614 | I fully understand why I should disable Root login on a production server. That makes sense. I can log in as username, then sudo back to root functions just fine from the terminal. What I'm unable to do is utilize a graphic type of secure file transfer protocol (SFTP) tool (e.g. filezilla) to get access to key directories. Is there a way to keep the root login disabled and use SFTP, or am I stuck using a terminal $ ssh login with wget or curl for all of my file transfer needs? The problem is that I can only use wget or curl for content loaded on a server somewhere. I can't directly upload content from my laptop/desktop development machine to the server. Yes, I am aware that git style transfers are an other possibility. I would need a private git repo somewhere to make that work. Is there a way to set a sudo access for the SFTP tool, without using a root login?Is there an alternative method to send new content to a online server via remote access, without compromising security? Same thing goes for downloading / reading systems content. I know I can do a lot of things via terminal and su - or sudo, but I have to say, reading a systems log file that is 10 megs long is just out of the question with vim or nano. | Disable Root Password Login... but can I SUDO with a GUI SFTP program? | security;sudo;root;sftp | null |
_unix.189294 | I am running jmeter by launching the .sh script, once I do that I cannot start using the prompt again unless I ctrl-z, ctrl-c, or the script ends in some other way. How can I launch that script and still be able to use the same console after the script is launched. (i used to know how to do this, it's easy =()I guess another way of putting it is how do I launch the script independently of the terminal. | Run script and not lose access to prompt / terminal | linux;shell script | You can start your script in the background by putting & at the end of the command; something likejmeter.sh &If you want to avoid seeing any output, see Silently start task in background. |
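The accepted answer in the row above launches the script with a trailing `&`. A small sketch of the same idea with the job's PID captured so it can be waited on later (`sleep` stands in here for the real workload; the `jmeter.sh` name comes from the question):

```sh
#!/bin/sh
# Start the long-running job in the background; the prompt returns at once.
sleep 1 &                  # in the real case this would be: ./jmeter.sh &
job_pid=$!                 # $! holds the PID of the most recent background job
echo "started job $job_pid; the shell is free for other commands"

# Later, optionally block until the job finishes and collect its status.
wait "$job_pid"
echo "job exited with status $?"
```

To keep the job alive after the terminal itself closes, launching it as `nohup ./jmeter.sh &` (or using the shell's `disown` builtin afterwards) detaches it from the hangup signal.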
_codereview.104212 | This script shows the number of contracts for persons. Table Contract has more than 82008 records and Candidate has about 7978.Could you please suggest improvements for this script?select ISNULL(CON.ContractCount, 0), CAN.PersoonName, ... /* some fields from table Candidate (more than 25) */from Candidate CAN left join (select PersoonID, COUNT(*) ContractCount from Contract group by PersoonID) CON on CAN.PersoonID = CON .PersoonIDwhere CAN.Type = 11 | Acquiring the number of contracts for persons | performance;sql;sql server | Sometimes, when doing left-join aggregation with a simple function (count()), it is easier to move the aggregation to be part of the select clause instead of as a subquery on the from clause.As long as you are only pulling one aggregate from the sub table, it's easy.Also, I presume there is at least one index on Contract where the PersoonID is the first (or only) column.select (select count (*) from Contract where PersoonID = CAN.PersoonID ) as ContractCount, CAN.PersoonName, ... /* some fields from table Candidate (more than 25) */from Candidate CANwhere CAN.Type = 11 |
_unix.381704 | I use byobu in tmux mode. In screen I can do this:Ctrl+a [, move to line, Y, Ctrl+a ]Y copys the whole line into the clipboard. I am looking to something similar in byobu in tmux mode. The only thing I found is:Ctrl+a/b (depends on your setting) + [, move to line, 0, space, $, enter, Ctrl+a/b + ]But I feel like that is a lot of hard to reach keystrokes, Y is much easier. | Byobu / Tmux copy whole line | tmux;gnu screen;byobu | null |
_webmaster.91624 | 95% of my website traffic is from India and 5% from USA.What happens if I host my server with higher resources in USA or with lower resources in India. Will it have any effect on the ranking of my website?I reserched and found this argument:Does server location matter in same country?Here @alfasin argues that servers with same resources would have better ranking if hosted in the same country but didn't say about the condition I mentioned above.I want to host my server in USA as its cheaper to buy Host from USA than from India.For refernce my website generally has 30K hits per day with over a million of pageviews per month. | What Ranks Better? Faster Server located in foreign country or Slower Server from same country? | seo;server;webserver;cdn;local seo | null |
_codereview.138537 | It's a simple hexdump that prints to stdout. I wanted to handle correctly input coming from the user typing at the terminal, this made the logic a little more complicated than just exiting after reading less than requested. It doesn't use the stack or memory variables, except for the tables and buffers, so I employed every register freely.I'm learning, so I would like for every possible improvement to be pointed out.; Description : Prints stdin to stdout as hex and displays ASCII next to it.; It handles correctly the cases in which a sys_read returns; less than was requested but an EOF was not received, this; allows the user to dump arbitrary text from stdin to a file ; in the following way: ./hexdump >> dump.txt;; Build using these commands:; nasm -f elf hexdump.asm; ld -o hexdump hexdump.o -m elf_i386 -s;; Usage:; ./hexdump << input_filesection .dataSTDIN: equ 0STDOUT: equ 1STDERR: equ 2EXIT_SUCCESS: equ 0SYS_READ: equ 3SYS_WRITE: equ 4SYS_EXIT: equ 1NEW_LINE: equ 0ahAsciiTable: db '................................ !',#$%&'()*+,-./0123456789:; db '<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz' db '{|}~...........................................................' db '...............................................................' 
db '.......'align 4Template: db '00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 'DivPos: db '| 'AsciiPart: db '................',NEW_LINETEMPLATELENGTH: equ $-Templatealign 2HexTable: db '000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f2' db '02122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f40' db '4142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5d5e5f606' db '162636465666768696a6b6c6d6e6f707172737475767778797a7b7c7d7e7f8081' db '82838485868788898a8b8c8d8e8f909192939495969798999a9b9c9d9e9fa0a1a' db '2a3a4a5a6a7a8a9aaabacadaeafb0b1b2b3b4b5b6b7b8b9babbbcbdbebfc0c1c2' db 'c3c4c5c6c7c8c9cacbcccdcecfd0d1d2d3d4d5d6d7d8d9dadbdcdddedfe0e1e2e' db '3e4e5e6e7e8e9eaebecedeeeff0f1f2f3f4f5f6f7f8f9fafbfcfdfeff'section .bssBUFFERSIZE: equ 160Buffer: resb BUFFERSIZEsection .textglobal _start ; make it visible_start: xor esi,esi ; position in buffer xor ebp,ebp ; buffer size mov edi,Read ; address to jmp mov esp,BUFFERSIZE ; how much to read Fill: mov eax,SYS_READ xor ebx,ebx ; STDIN lea ecx,[Buffer+ebp] ; destination mov edx,esp ; size int 80h ; call cmp eax,0 ; check return value jg Ok ; read something jl Err ; < 0 -> error jmp Last ; 0 = EOF Ok: add ebp,eax ; update size Read: lea esp,[esi+16] ; where we'll be if we print sub esp,ebp ; how much we have to read to reach 16 jg Fill ; esp > ebp jnz NotZ ; we only reset when esp = ebp mov edi,_start ; reset NotZ: mov ecx,15 ; offset, not counter xor eax,eax ; clear bits 8-31 xor edx,edx ; zero-out the upper half .l0: mov al,[Buffer+esi+ecx] ; read byte from buffer mov bx,[HexTable+eax*2] ; get corresponding symbols mov [Template+ecx*2+ecx],bx ; write to template mov dl,[AsciiTable+eax] ; get ASCII symbol mov [AsciiPart+ecx],dl ; write to template sub ecx,1 ; next iteration jns .l0 ; ecx is an offset not a counter ; Print the Template to stdout Out: mov eax,SYS_WRITE mov ebx,STDOUT mov ecx,Template mov edx,TEMPLATELENGTH int 80h cmp eax,TEMPLATELENGTH jne Err add esi,16 ; move index jmp edi ; go 
to Read or _start Last: sub ebp,esi ; make it a count jz Exit ; if 0, we are done sub ebp,1 ; make it an offset mov ecx,15 ; 16*4=64 (0 ... 15) .l0: mov dword [Template+ecx*4],' ' ; fill buffer with spaces sub ecx,1 ; next iteration jns .l0 ; ecx is an offset mov byte [DivPos],'|' ; it was replaced by a space mov word [Template+64],' ' ; finish writing spaces xor ebx,ebx ; clear .l1: mov bl,[Buffer+esi+ebp] ; get byte mov cx,[HexTable+ebx*2] ; get corresponding hex code mov [Template+ebp*2+ebp],cx ; place hex code mov bl,[AsciiTable+ebx] ; get ascii symbol mov [AsciiPart+ebp],bl ; place it sub ebp,1 ; dec doesn't affect CF jns .l1 mov edi,End ; return address jmp Out ; print last line Err: mov ebx,-1 ; EXIT_FAILURE jmp Exit ; Exit the program End: xor ebx,ebx ; EXIT_SUCCESS Exit: mov eax,SYS_EXIT int 80h | Hexdump utility in x86 NASM assembly | linux;assembly | I would refrain from using esp as general purpose register. This is limiting you in using call/ret subroutines, and you risk memory corruption caused by interrupt happening in your program context.I would also refrain of such heavy LUT tables usage, as the program will be more likely limited by the I/O operations speed, so that tiny amount of calculation will quite likely hide in the buffered I/O waits.Here is my version, avoiding those things mentioned above:; hexdump.asm; Description : Prints stdin to stdout as hex and displays ASCII next to it.; It handles correctly the cases in which a sys_read returns; less than was requested but an EOF was not received, this; allows the user to dump arbitrary text from stdin to a file ; in the following way: ./hexdump >> dump.txt;; Build using these commands:; nasm -f elf hexdump.asm; ld -o hexdump hexdump.o -m elf_i386 -s;; Usage:; ./hexdump < input_fileSTDIN equ 0STDOUT equ 1SYS_READ equ 3SYS_WRITE equ 4SYS_EXIT equ 1NEW_LINE equ 0ahBUFFERSIZE equ 16section .dataHexLUT db '0123456789ABCDEF'section .bssReadBuffer: resb BUFFERSIZEDataBuffer: resb 
BUFFERSIZEDIVIDER_OFFSET EQU 3*BUFFERSIZEASCII_OFFSET EQU DIVIDER_OFFSET+2PRINT_BUFFER_SIZE EQU ASCII_OFFSET + BUFFERSIZE + 1; = BUFFERSIZE * %02x , 2 chars for divider | , BUFFERSIZE * char + 1 for NEWLINEPrintBuffer: resb PRINT_BUFFER_SIZEsection .textglobal _start ; make it visible_start: lea esi,[DataBuffer] lea edi,[DataBuffer+BUFFERSIZE]Read: xor ebx,ebx ; STDIN lea ecx,[ReadBuffer] ; buffer to read into mov edx,BUFFERSIZE ; BUFFERSIZE bytes at most mov eax,SYS_READ int 80h cmp eax,ebx ; check return value with 0 jle Finish ; < 0 -> error, 0 = no more bytesCopyReadData: ; ecx = source of read data, eax = count ; esi = current DataBuffer, edi = end of DataBuffer cmp esi,edi ja .DataBufferIsNotFull ; display DataBuffer, when full call DisplayDataBuffer lea esi,[DataBuffer] ; from the beginning of DataBuffer.DataBufferIsNotFull: ; copy eax bytes from @ecx to @esi (clobbers bl) mov bl,[ecx] mov [esi],bl inc ecx inc esi dec eax jnz CopyReadData jmp Read ; read more dataDisplayDataBuffer: ; esi = end of DataBuffer data push eax push ebx push ecx push edx lea ebx,[PrintBuffer] ; print BUFFERSIZE values as hexa nn string followed by space ; or three spaces in case of no more data in buffer lea ecx,[DataBuffer] mov edx,BUFFERSIZEPreparePrintBufferHex: cmp ecx,esi jz .noMoreData mov al,[ecx] inc ecx call Format_AL_AsHexWithSpaceTo_EBX jmp .finishLoop.noMoreData: mov [ebx],DWORD ' ' add ebx,3.finishLoop: dec edx jnz PreparePrintBufferHex ; print divider between hexa and ASCII parts mov [ebx],WORD '| ' add ebx,2 ; print ASCII data (or spaces, when no more data) lea ecx,[DataBuffer] mov edx,BUFFERSIZEPreparePrintBufferAscii: cmp ecx,esi mov al,' ' jz .validAscii ; no more data mov al,[ecx] inc ecx cmp al,0x7F ; values 00-1F and 7F-FF are shown as '.' 
jae .invalidAscii cmp al,' ' jae .validAscii.invalidAscii: mov al,'.'.validAscii: mov [ebx],al inc ebx dec edx jnz PreparePrintBufferAscii ; Add new line character at the end of line mov [ebx],BYTE NEW_LINE ; Output PrintBuffer to screen mov eax,SYS_WRITE mov ebx,STDOUT lea ecx,[PrintBuffer] mov edx,PRINT_BUFFER_SIZE int 80h pop edx pop ecx pop ebx pop eax retFormat_AL_AsHexWithSpaceTo_EBX: push eax shr eax,4 call .EAX_nibble pop eax call .EAX_nibble mov [ebx],BYTE ' ' inc ebx ret.EAX_nibble: and eax,0x0F mov al,[HexLUT+eax] mov [ebx],al inc ebx retFinish: call DisplayDataBuffer ; display any incomplete line of data mov ebx,eax ; error code from SYS_READ is exit code mov eax,SYS_EXIT int 80h |
_unix.306975 | In the last couple of days we have come across 2 cases where we seegunzip failing but filemanager decompression utils succeeding:Yesterday, on a remote server with only command line access, we had a fairly large file (about 2GB) to download and use. After downloading gunzip failed to decompress, showing corrupted fileSince we had compressed the file using pigz we tried it but also failed, with wrong CRC error.The backup server has a web filemanager that comes with hosting software. On the backup server the file could decompress without a problem.On local machine, today, we also downloaded a file from backup server and gunzip failed with message invalid compressed data--crc error:$ gunzip file.gzgzip: file.gz: invalid compressed data--crc errorgzip: file.gz: invalid compressed data--length errorOn local machine we use Kubuntu and the same file could decompress without a problem using Dolphin filemanager.The 2 scenarios point to a possible flag used by filemanagers that makes them succeed in decompressing large files.Any idea?Edit: more detailsfile command$ file file.gzfile.gz: gzip compressed data, was file.sql, last modified: Wed Aug 31 01:50:35 2016, max speed, from UnixLocal$ uname -aLinux hppavilion 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux$ gzip --versiongzip 1.6File system: EXT4Remote$ uname -aLinux remote 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux$ gzip --versiongzip 1.5File system: EXT4Backup# uname -aLinux backup.com 3.10.0-327.3.1.el7.x86_64 #1 SMP Wed Dec 9 14:09:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux# gzip --versiongzip 1.5File system: EXT4Note:Seen only with large archivesIntermittent/inconsistent - tried an equally large archive and gunzip succeeded | Difference between command line gunzip and GUI decompression versions | linux;gzip;file manager;archive | null
_codereview.118250 | I'm using the Jquery date time picker link hereIn the four methods below I am initializing the DT pickers but they are all pretty similar except for the ID in the onChangeDateTime.Is there a way I can reduce the duplicate code?$('.jqDatetimeDateOne').datetimepicker({ defaultTime: '07:00', lang: 'en', className: 'setTimeClicker', step: 30, onChangeDateTime: function (current_time, $input) { // alert(ct.dateFormat('d/m/Y')) $('#emailGenDateOne').removeClass('empty').val(current_time.dateFormat('Y-m-d H:i')); }, inline: true});$('.jqDatetimeDateTwo').datetimepicker({ defaultTime: '07:00', lang: 'en', className: 'setTimeClicker', step: 30, onChangeDateTime: function (current_time, $input) { // alert(ct.dateFormat('d/m/Y')) $('#emailGenDateTwo').removeClass('empty').val(current_time.dateFormat('Y-m-d H:i')); }, inline: true});$('.jqDatetimeSearchDateOne').datetimepicker({ defaultTime: '07:00', lang: 'en', className: 'setTimeClicker', step: 30, onChangeDateTime: function (current_time, $input) { // alert(ct.dateFormat('d/m/Y')) $('#searchGenDateOne').removeClass('empty').val(current_time.dateFormat('Y-m-d H:i')); }, inline: true});$('.jqDatetimeSearchDateTwo').datetimepicker({ defaultTime: '07:00', lang: 'en', className: 'setTimeClicker', step: 30, onChangeDateTime: function (current_time, $input) { // alert(ct.dateFormat('d/m/Y')) $('#searchGenDateTwo').removeClass('empty').val(current_time.dateFormat('Y-m-d H:i')); }, inline: true}); | Four jQuery date time pickers | javascript;jquery;datetime | null |
_unix.203622 | I've just made a mistake and I have removed the 'Name' column from the Archive Manager program interface. Here is the actual interface:Now, I can't see the 'Name' column. I've tried to modify everything in the options but I couldn't get it back.How do I restore the 'Name' column? If the program has a config file, where can I find it?I'm using Ubuntu 14.04 | Add column into Archive Manager interface | ubuntu;gnome | null |