id | question | title | tags | accepted_answer
---|---|---|---|---
_unix.377769 | I'm trying to set up a graphical desktop on top of my RHEL server, so that when I connect to it, it won't just be a CLI - it will be a full GUI. (I.e. my setup now is: I use PuTTY on Windows to connect to my various Linux remote servers, and for our Windows remote servers we use RDP, which is a full GUI.) Essentially, I'm looking for the equivalent of RDP, but for Linux remote servers. So if I'm on my Windows client, I log in with some type of program like PuTTY, but one that can show a full GUI.

Is looking at VNC or FreeNX my best option? I've tried X11 forwarding, but it was painfully slow. I'm hoping I can do something that's like the Windows RDP I use - no latency, full desktop GUI. I'm confused about how to fully set it up. From my research I've seen guides for yum groupinstall <packagename> (using "Desktop", "KDE Desktop", "GNOME Desktop", etc.), as well as guides to install a VNC server with yum install vncserver (then configuring it) and to use VNC Viewer or TigerVNC to connect to it from the Windows side.

My confusion lies in those. Are they separate or related processes? I.e. if I install Desktop or KDE, it seems like I just have to change some settings to enable the GUI instead of the CLI - does that mean I don't need a VNC program? I feel like I still need the VNC Viewer program on my Windows side (in place of PuTTY to connect to it, but hopefully showing the GUI and not just the CLI). Am I completely wrong on the order/steps I need to take?
In the end, I'm looking to be able to open something on the Windows end (PuTTY, VNC Viewer, etc.), log into my server (by hostname or IP) just like I do with PuTTY, and have a full graphical experience - if this is possible. I'm running RHEL 6.8 on the Linux side, and my client machine is Windows 7.

EDIT: In regards to the comments, editing in to add the output of the netstat command.

EDIT 2: Switching netstat -l to netstat -nlp

```
rr83008@LAB2138:~> netstat -nlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8060 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:56765 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3838 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:801 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:9121 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:9090 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3939 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:9187 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:36196 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5989 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:44678 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6311 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:44075 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:875 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:37419 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:9100 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:40590 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:4750 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6000 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:9168 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8081 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6001 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:35218 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:49522 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8787 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:34421 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:47830 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:45207 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:4151 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:51002 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:6011 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:43451 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:46043 0.0.0.0:* LISTEN -
udp 0 0 0.0.0.0:47968 0.0.0.0:* -
udp 0 0 0.0.0.0:58978 0.0.0.0:* -
udp 0 0 0.0.0.0:875 0.0.0.0:* -
udp 0 0 0.0.0.0:111 0.0.0.0:* -
udp 9160 0 0.0.0.0:631 0.0.0.0:* -
udp 0 0 0.0.0.0:760 0.0.0.0:* -
udp 0 0 10.XXX.XX.65:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp 0 0 0.0.0.0:2049 0.0.0.0:* -
udp 0 0 10.XXX.XX.255:137 0.0.0.0:* -
udp 0 0 10.XXX.XX.65:137 0.0.0.0:* -
udp 0 0 0.0.0.0:137 0.0.0.0:* -
udp 0 0 10.XXX.XX.255:138 0.0.0.0:* -
udp 0 0 10.XXX.XX.65:138 0.0.0.0:* -
udp 0 0 0.0.0.0:138 0.0.0.0:* -
udp 0 0 127.0.0.1:659 0.0.0.0:* -
udp 0 0 0.0.0.0:43291 0.0.0.0:* -
udp 0 0 0.0.0.0:40353 0.0.0.0:* -
udp 0 0 0.0.0.0:47924 0.0.0.0:* -
udp 0 0 0.0.0.0:54209 0.0.0.0:* -
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] STREAM LISTENING 12310 - @/var/run/hald/dbus-pfcv2kTrVT
unix 2 [ ACC ] STREAM LISTENING 36108297 - @/tmp/dbus-O9QGf8R8Zc
unix 2 [ ACC ] STREAM LISTENING 6556269 - /tmp/rstudio-rserver/session-server-rpc.socket
unix 2 [ ACC ] STREAM LISTENING 6556128 - /tmp/rstudio-rserver/rserver.socket
unix 2 [ ACC ] STREAM LISTENING 6556314 - /tmp/rstudio-rserver/rserver-monitor.socket
unix 2 [ ACC ] STREAM LISTENING 6556330 - /tmp/rstudio-rserver/rserver-launcher.socket
unix 2 [ ACC ] STREAM LISTENING 6569731 - /tmp/shiny-server/rserver-monitor.socket
unix 2 [ ACC ] STREAM LISTENING 30610346 - /tmp/connect-server/rserver-monitor.socket
unix 2 [ ACC ] STREAM LISTENING 31607547 - @/tmp/.X11-unix/X0
unix 2 [ ACC ] STREAM LISTENING 31607326 - @/tmp/.X11-unix/X1
unix 2 [ ACC ] STREAM LISTENING 31607327 - /tmp/.X11-unix/X1
unix 2 [ ACC ] STREAM LISTENING 12258 - /var/run/acpid.socket
unix 2 [ ACC ] STREAM LISTENING 31607548 - /tmp/.X11-unix/X0
unix 2 [ ACC ] STREAM LISTENING 27487395 - /var/opt/gitlab/postgresql/.s.PGSQL.5432
unix 2 [ ACC ] STREAM LISTENING 128344874 28107/gconfd-2 /tmp/orbit-rr83008/linc-6dcb-0-25c293a147828
unix 2 [ ACC ] STREAM LISTENING 128344889 28108/gnome-keyring /tmp/orbit-rr83008/linc-6dc9-0-434569cf4e5ef
unix 2 [ ACC ] STREAM LISTENING 14370 - /var/run/tog-pegasus/cimxml.socket
unix 2 [ ACC ] STREAM LISTENING 128344856 28108/gnome-keyring /tmp/keyring-L2uzmu/socket
unix 2 [ ACC ] STREAM LISTENING 10459 - /var/run/vmware/guestServicePipe
unix 2 [ ACC ] STREAM LISTENING 128344893 28108/gnome-keyring /tmp/keyring-L2uzmu/socket.ssh
unix 2 [ ACC ] STREAM LISTENING 128344895 28108/gnome-keyring /tmp/keyring-L2uzmu/socket.pkcs11
unix 2 [ ACC ] STREAM LISTENING 12305 - @/var/run/hald/dbus-EnsWjU8vSp
unix 2 [ ACC ] STREAM LISTENING 7401 - @/com/ubuntu/upstart
unix 2 [ ACC ] STREAM LISTENING 27496827 - /var/opt/gitlab/gitlab-rails/sockets/gitlab.socket
unix 2 [ ACC ] STREAM LISTENING 27484394 - /var/opt/gitlab/redis/redis.socket
unix 2 [ ACC ] STREAM LISTENING 128444503 - /var/opt/quest/vas/vasd/.vasd_11406
unix 2 [ ACC ] STREAM LISTENING 27498254 - /var/opt/gitlab/gitaly/gitaly.socket
unix 2 [ ACC ] STREAM LISTENING 10834 - /var/run/rpcbind.sock
unix 2 [ ACC ] STREAM LISTENING 27498301 - /var/opt/gitlab/gitlab-workhorse/socket
unix 2 [ ACC ] STREAM LISTENING 33632870 - /var/nmbd/unexpected
unix 2 [ ACC ] STREAM LISTENING 11093 - /var/run/dbus/system_bus_socket
unix 2 [ ACC ] STREAM LISTENING 42567568 - @/tmp/dbus-XpphHBjGKs
unix 2 [ ACC ] STREAM LISTENING 11300 - /var/opt/quest/vas/vasd/.vasd40_ipc_sock
unix 2 [ ACC ] STREAM LISTENING 11303 - /var/opt/quest/vas/vasd/.vasd_2000
unix 2 [ ACC ] STREAM LISTENING 11306 - /var/opt/quest/vas/vasd/.vasd_2003
unix 2 [ ACC ] STREAM LISTENING 11313 - /var/opt/quest/vas/vasd/.vasd_2002
unix 2 [ ACC ] STREAM LISTENING 14161 - /var/run/abrt/abrt.socket
unix 2 [ ACC ] STREAM LISTENING 11315 - /var/opt/quest/vas/vasd/.vasd_2001
unix 2 [ ACC ] STREAM LISTENING 128344845 28103/dbus-daemon @/tmp/dbus-Qrg0vIDr4c
```

EDIT 3: This is what I get from vncserver -list:

```
rr83008@LAB2138:~> service vncserver start
rr83008@LAB2138:~> vncserver -list

TigerVNC server sessions:

X DISPLAY # PROCESS ID
```
| How do I enable and connect to a GUI/Desktop on a remote RHEL 6 server from Windows? | rhel;gnome;gui;vnc;desktop environment | null |
_unix.26788 | Say I have a shell variable $string that holds some text with several newlines, e.g.:

```
string="this
is a test"
```

I would like to convert this string into a new string new_string where all line breaks are converted into spaces:

```
new_string="this is a test"
```

I tried:

```
print $string | sed 's/\n/ /g'
```

but it didn't work. I'm also wondering if there is a way of doing this using perl -0777 's/\n/ /g' or maybe the command tr? | Using sed to convert newlines into spaces | shell;sed | If you only want to remove the newlines in the string, you don't need to use sed. You can use just

```
$ echo $string | tr '\n' ' '
```

as others have pointed out. But if you want to convert newlines into spaces in a file using sed, then you can use:

```
$ sed -i ':a;N;$!ba;s/\n/\t/g' file_with_line_breaks
```

or even awk:

```
$ awk '$1=$1' ORS=' ' file_with_line_breaks > new_file_with_spaces
```
|
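As a quick sanity check of the tr approach mentioned in that answer (a minimal sketch; the string contents are invented):

```shell
string='this
is a test'
new_string=$(printf '%s' "$string" | tr '\n' ' ')
printf '%s\n' "$new_string"   # prints: this is a test
```

Using printf '%s' instead of echo avoids adding a trailing newline before the translation, and the command substitution captures the result back into a variable.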
_codereview.7983 | I am pretty new to JavaScript and learning the intricacies of this language. I wanted a unique collection which will overwrite on add if an item already exists, else add it. The rest of the behavior should be just like any other collection. I have written the type as below. Instead of using an array internally, I have used JavaScript's notion of storing an object as key-value pairs to store the items.

My question is: is this implementation valid? Are there any disadvantages to this implementation over using an array internally to hold the collection items instead of an object? Why I chose objects to store items over an array:

- To avoid iterating through the array while adding an item, so that I don't have to check if the object exists (instead of adding a duplicate).
- To avoid iterating while retrieving or deleting an item.

```javascript
function isValid(obj) {
    return (obj != undefined && obj != null && typeof (obj) != 'undefined');
}

function uniqueCollection() {
    var collection = {};
    var length = 0;
    return {
        removeItem: function (item) {
            if (isValid(collection[item])) {
                delete collection[item];
                --length;
            }
        },
        addItem: function (item) {
            if (!isValid(collection[item])) {
                collection[item] = item;
                ++length;
            }
        },
        getLength: function () {
            return length;
        },
        clear: function () {
            collection = {};
            length = 0;
        },
        exists: function (item) {
            if (isValid(collection[item])) {
                return true;
            }
            return false;
        },
        getItems: function () {
            var items = [];
            var i = 0;
            for (var o in collection) {
                if (collection.hasOwnProperty(o)) {
                    items[i] = o.toString();
                    i++;
                }
            }
            if (i !== length) {
                alert("Error occurred while returning items");
            }
            return items;
        }
    }; // end of object literal
} // end of function
```
| Implementation of a unique collection | javascript;beginner;collections | This mostly seems like a layer of sugar coating that just adds overhead and doesn't add any improved functionality over what a JavaScript object already has. The only new functionality I see is keeping track of a length, but this is a lot of extra overhead just for that. The length could be calculated at any time on a plain JavaScript object.

Here are the analogs to your methods, given var collection = {};:

- addItem: collection[key] = value;
- removeItem: delete collection[key]
- clear: collection = {};
- exists: if (key in collection)
- getItems: Object.keys(collection)
- getLength: Object.keys(collection).length

So, all you're getting out of your implementation is a slightly more efficient length, and every other operation is less efficient than just using the native way of doing it. Is this collection really useful? There are some older browsers that don't offer the Object.keys() method, but there's a pretty simple shim that implements it if not available.

In addition, your implementation loses functionality that a plain JavaScript object has. For example, you can't pass your collection to any function that expects a JavaScript object with the keys and values on it, because those are hidden inside; they aren't actually properties of the collection object itself. Further, you can't do custom iteration of the keys and values without first creating an array of all the keys, because you've hidden the natural ability to iterate the keys of a JavaScript object. |
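For comparison, here is what those same operations look like on a bare object (a sketch using the standard Object.keys; the key names are made up):

```javascript
var collection = {};

collection["a"] = "a";                 // addItem
collection["b"] = "b";
delete collection["a"];                // removeItem
var exists = "b" in collection;        // exists      -> true
var items = Object.keys(collection);   // getItems    -> ["b"]
var length = items.length;             // getLength   -> 1

console.log(exists, items, length);
```

No wrapper is needed, and the object remains directly usable by any code that expects plain key-value properties.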
_unix.129498 | I want to implement fault tolerance in a CentOS environment.

EDIT

Scenario: two different systems connected via network, both running CentOS 6.0.

What I want: I want to set up fault tolerance for both systems, so that even if one machine has a problem, my running applications/servers never go down. I found the Kemari and RDMA tools to implement that in VMs. But I am a newbie at this and don't know where to start - I mean, what to download, how to configure it, and how to run the VM with it. I also didn't find any documentation for beginners. Please help if you have knowledge of either of those technologies, or any other technology, to build my fault-tolerant system. If not in VMs, please give me some idea of how to implement this task on two physical machines rather than virtually. | How to configure a Fault Tolerance System in CentOS 6.0 | migration | null |
_webmaster.101605 | I have a website that is copying the same exact content to another server that is more web 2.0. This is happening without a canonical link. Does it make any sense to do that? Won't it end up with duplicate content issues? | Does it make sense to copy your own content and post it to another site without any canonical links? | seo;duplicate content;rel canonical | null |
_webmaster.33393 | On my website, www.openlx.com, in the submenu (below the main menu item Home), the URL currently looks like: http://openlx.com/home/aboutus.html. In the same way, in all the submenus of the main menu item Home, the word "home" is part of the URL. I want to remove this word "home" from the URL, and the same for all the submenus of all the menus. | joomla 2.5.6 menu related | joomla | null |
_vi.230 | I type with a non-QWERTY keyboard layout. Many of the keys Vim uses are now on the home row, but some key bindings just don't work, the most obvious one being the hjkl keys.How and where can I change these key bindings to work better with my keyboard layout? | How can I modify Vim to work with a different keyboard layout? | vimrc;keyboard layout;key bindings | null |
_webapps.93045 | I'm having an issue with my Facebook account. A few days ago, my friend was messing around and kicked me from the group chat that we and my other friends have. He's since added me back, but I still can't seem to message in the chat, since it still thinks I'm kicked. When they look on their phones, it tells them I'm actually in the chat. I do get notifications coming through when they send something, but I can only see the message for a split second before it goes away. I have tried logging in/out, deleting Messenger on my phone, deactivating my Facebook account, and deleting the chat, adding it to archive, then reopening it, but nothing seems to be working. Is there a way I can reset my Facebook account so it stops bugging out? I don't really want to have to create a whole new chat just for me, as there are quite a few people in it and we've had it for so long. I have contacted support but have yet to hear anything on the matter. Anyone else had this issue? | Facebook thinks I am blocked from a group chat | facebook;facebook chat | null |
_webapps.105160 | I live in Silver Spring, Maryland. Facebook has the idea that I am living in California. Events keep coming up on my timeline happening in California. How can I change Facebook to indicate that I live in Silver Spring and see events happening in Silver Spring? | Facebook shows me events that are nowhere near me | facebook | null |
_codereview.22784 | I have written a piece of Python code in response to the following question and I need to know if it can be tidied up in any way. I am a beginner programmer and am starting a bachelor of computer science.

Question: Write a piece of code that asks the user to input 10 integers, and then prints the largest odd number that was entered. If no odd number was entered, it should print a message to that effect.

My solution:

```python
a = input('Enter a value: ')
b = input('Enter a value: ')
c = input('Enter a value: ')
d = input('Enter a value: ')
e = input('Enter a value: ')
f = input('Enter a value: ')
g = input('Enter a value: ')
h = input('Enter a value: ')
i = input('Enter a value: ')
j = input('Enter a value: ')

list1 = [a, b, c, d, e, f, g, h, i, j]
list2 = []  # used to sort the ODD values into
list3 = (a+b+c+d+e+f+g+h+i+j)  # used this bc all 10 values could have used value '3'
                               # and had the total value become an EVEN value

if (list3 % 2 == 0):  # does list 3 mod 2 have no remainder
    if (a % 2 == 0):  # and if so then by checking if 'a' has an EVEN value it rules out
                      # the possibility of all values having an ODD value entered
        print('All declared variables have even values')
    else:
        for odd in list1:  # my FOR loop to loop through and pick out the ODD values
            if (odd % 2 == 1):  # if each value tested has a remainder of one to mod 2
                list2.append(odd)  # then append that value into list 2
        odd = str(max(list2))  # created the variable 'odd' for the highest ODD value from
                               # list 2 so i can concatenate it with a string
        print('The largest ODD value is ' + odd)
```
| Asks the user to input 10 integers, and then prints the largest odd number | python;beginner | null |
_webapps.73254 | There is official GitHub API which returns data in JSON, but is there any way to return the data in XML format?JSON format example: https://api.github.com/repos/github/github-services/issues | How to return data from GitHub API in XML? | github | null |
_softwareengineering.312508 | I have a controller method as follows:

```java
public class RoomsController {

    @RequestMapping(method = RequestMethod.GET, path = "/v1/rooms/{name}")
    public ResponseEntity<?> getRoomInformation(@PathVariable String name,
                                                @RequestHeader("Auth-Token") String token) {
        try {
            Room room = roomService.findByLogin(name);
            User user = userService.findByBackendToken(token);
            if (room == null || InstantHelper.biggerThanSixHours(room.getUpdatedAt()))
                room = gitHubService.createOrUpdateRoom(name, user.getAccessToken());
            String roomJson = new RoomFormatter(room).toJson();
            joinRoom(room.getLogin(), token);
            return new ResponseEntity<String>(roomJson, HttpStatus.OK);
        } catch (ChathubBackendException e) {
            log.error(e);
            return new ResponseEntity<>("Error: " + e.getMessage(), HttpStatus.NOT_FOUND);
        }
    }
```

As seen, I have this line, for example: roomService.findByLogin(name); but from what I understand, a service does not return any value; it just executes some action and raises an exception if something goes wrong. This is the first thing.

The second is related to the formatter/join part:

```java
String roomJson = new RoomFormatter(room).toJson();
joinRoom(room.getLogin(), token);
return new ResponseEntity<String>(roomJson, HttpStatus.OK);
```

I'm not really sure the controller should handle this amount of responsibility, like knowing whether it's time to update the room information from the API, or explicitly calling the RoomFormatter. I'm kinda lost here, because I don't really know where to put this. I think I should have an in-between layer that is neither a service nor a repository, and this layer should know what to do with the room, format the JSON, and so on, but I don't know what it should be or if there is a pattern for this kind of thing. Maybe it's OK to have these things in the controller... anyway, any ideas are welcome! | where to put methods that manipulate objects | java;spring;service;controller | Design questions like these often tend to have no true or false answer. It depends heavily on the surrounding components and the bigger targets. Despite this, some things seem rather clear to me. So to your first question, this:

```java
Room room = something.findByLogin(name);
```

is a task that clearly belongs to the model layer. You can imagine testing this method with a unit test suite that deals only with model classes. Here, I'd like to add two things. First, as your context is Spring, it is also a question how your model is made accessible by the application container. In the Java world, there are beans. Could it be that by writing "Service", somebody (or you) is confusing the application container's object life cycle service with the service layer of the application itself? Second, in short, you have a service for the following advantages: 1) distinguishing the necessary web-type activity, 2) managing transactions (also for nested services or multiple models), 3) adding some automated post or pre action steps. Assuming this, a service is rather something bigger than a controller, so you are right about the return values. It is often a good idea to nest a controller into a service.

In my personal opinion, I'd like to see this:

```java
Room room = rooms.findByLogin(name);
User user = users.findByBackendToken(token);
```

Plus, I would expect to have rooms and users delivered as beans by AOP tags or by the application controller XML config. With users, I would be especially careful, as this is about security, authentication and authorization, so I would expect to look for a security framework that allows me access to users.

Then to your second question: you are right with the nagging feeling about the amount of work a controller has to do. Remember some basic rules for writing good functions: they should do only one clearly named thing; they should either return a value or have a side effect, but not both. Of course those rules are disputed, but they are most often a good guide. They help you stick to the principle of least astonishment (or, the smallest amount of WTF per minute). So if I were a new programmer on your team and read getRoomInformation(), then yes, I could be surprised if the room (or room list?) was occasionally reformatted. But this doesn't mean you absolutely have to redesign the whole thing. If this is a rare case, and adding other general structures is not justified, then you could create more clarity just by proper, descriptive naming. For instance, imagine this:

```java
@RequestMapping(method = RequestMethod.GET, path = "/v1/rooms/{name}")
public ResponseEntity<?> getRoomInformationAndUpdateXY(@PathVariable String name,
                                                       @RequestHeader("Auth-Token") String token) {
    try { ...
```

Well, let's turn a blind eye. Then it's OK for once. It is OK because the reader doesn't stumble, but she/he is being alerted. Even better is to move the reformatting part out into its own method, thus keeping the controller clean.

There could be more powerful concepts for actions pre and post the controller, if you need them. Maybe a wrapping service layer, as mentioned in the first part, is the best solution. Or with Spring, you can use interceptors: https://stackoverflow.com/questions/9212699/does-spring-mvc-have-the-concept-of-before-after-controller-action-events Maybe this is the best solution. Or it is overhead. Or it is not working in your situation. I cannot know, but your decision starts with considerations like this. |
_unix.154176 | I created a script with the standard getopt magic, so you can call the script with

```
batman-connect -c ffki
```

How can I add an option to call this script with only one argument, without the dash?

```
batman-connect ffki
```

So it will interpret this only argument as the argument for -c? According to this answer I tried:

```
if [ "$@" = "ffki" ]; then
    set -- -c "$@"
fi
```

But that gives an error. I think it's because this line in my script will change it back:

```
# Execute getopt on the arguments passed to this program, identified by the special character $@
PARSED_OPTIONS=$(getopt -n "$0" -o hsrvVi:c: --long help,start,restart,stop,verbose,vv,version,interface:,community: -- "$@")
```

I don't want to re-arrange my whole script, so is there a simple one-liner to just set the first argument to -c and the second to ffki? | Modify bash arguments if only one argument is set | bash;arguments | null |
_unix.236103 | I want to use cat with wildcards in bash in order to print multiple small files (every file is one sentence) to standard output. However, the separate file contents are not separated by a newline, which I'd like for ease of reading. How can I add a file delimiter of some sort to this command? | Force newlines with cat wildcard printing | bash;wildcards;cat;newlines | Define a shell function which outputs an end of line after every file, and use it instead of cat:

```
endlcat() {
    for file in "$@"; do
        cat -- "$file"
        echo
    done
}
```

Then you can use endlcat *.

The for loop loops over all provided arguments ("$@"), which are already expanded by the shell when you use wildcards like *. The -- is required so the command doesn't choke on file names starting with a dash. Finally, the echo outputs a newline. |
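A quick demonstration of the endlcat idea with two throwaway one-sentence files (the file names and contents here are made up):

```shell
endlcat() {
    for file in "$@"; do
        cat -- "$file"
        echo
    done
}

dir=$(mktemp -d)
printf 'one sentence.' > "$dir/a.txt"       # note: no trailing newline
printf 'another sentence.' > "$dir/b.txt"
endlcat "$dir"/*.txt
# prints:
# one sentence.
# another sentence.
rm -r "$dir"
```

With plain cat the two sentences would run together on one line; the extra echo after each file keeps them apart.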
_softwareengineering.257801 | I'm currently embarking on a MongoDB project (a simple user login system), and I notice that there is an option for authentication. Here is the server string, with the user info shown as optional:

```
mongodb://[username:password@]host1[:port1][,host2[:port2],...]/db
```

From the MongoDB Manual:

"Before gaining access to a system all clients should identify themselves to MongoDB. This ensures that no client can access the data stored in MongoDB without being explicitly allowed."

Why would I choose to use this feature? Shouldn't it be up to the server programmers (i.e. the ones implementing the PHP, Python, or whatever have you) to authenticate the user to determine which information to display? | When should I implement authentication in a database? | database;authentication;mongodb | You would use that feature to log into the database. Your program (whether it runs on the server or the client) is going to have to identify and authenticate itself - either to the (MongoDB) database itself, or possibly to a proxy service. In either case, it will need at least a userid and password to do so. (Cryptographic keys or trusted tokens are even stronger, safer identifying and authenticating credentials. But that's another level for later.)

So if your simple user login system wants/needs to contact the DBMS directly, it will have to construct something similar to the URL template you posted. (That's true of other databases like MySQL as well.) If you don't provide credentials such as this, the DBMS will not recognize your program as one that can legitimately access data, and it won't let your program in. (Note, this is separate from the user login process, which presumably relies on the data in MongoDB to navigate.)

As @Matthew points out, your DBMS user accounts/credentials are best kept separate from the user accounts/credentials you're managing. A final, important note: There are no simple user login systems.
Little issues like security exposures make them rather complicated and tricky to get right. When they're simple, they're usually woefully insecure--subject to spoofing, wiretapping, and other exploits. You may get away with it on a local, private network with low probability that others will want to attack your system. But if it's on the wider Internet, or protects anything of value, security hardening is essential. There's a 99.99999999% chance you're not going to get that with a simple homegrown solution; especially not the first time out of the barn. |
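As a rough illustration of that connection-string template (all names here are invented; the one subtlety worth showing is that a password containing reserved characters must be percent-encoded before it goes into the URI):

```python
from urllib.parse import quote_plus

# Hypothetical credentials and host -- substitute your own.
user = "appuser"
password = "s3cret/pass"          # contains '/', so it must be escaped
host = "db1.example.com:27017"
db = "mydb"

# Build the mongodb:// URI with the userinfo part filled in.
uri = "mongodb://%s:%s@%s/%s" % (quote_plus(user), quote_plus(password), host, db)
print(uri)  # mongodb://appuser:s3cret%2Fpass@db1.example.com:27017/mydb
```

The resulting string is what a driver would be handed in place of the bare host URI; the application-level user accounts discussed above live in the data, not in this string.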
_codereview.161956 | This code locks a range and adds a timestamp. More details below. I'd like to learn how to make this code more efficient (minimize code/variables and reduce redundancy). Any thoughts on the areas I can improve?

```javascript
function lockEdits(e) {
  // declare initial col variable
  var colCheck = e.range.getLastColumn();

  // exit function if the col edited was not 11, 17, 20
  if (colCheck != 11 && colCheck != 17 && colCheck != 20) {
    return;
  }

  // declare remaining variables
  var ss = e.source.getActiveSheet();
  var thisRow = e.range.getRow();
  var rngHeight = e.range.getHeight();
  var email = Session.getActiveUser().getEmail();
  var owners = ["[email protected]", "[email protected]"];
  var checkEmpty = ss.getRange(e.range.getRow(), colCheck).getValue();
  var rejectCheck = ss.getRange(e.range.getRow(), 17).getValue();

  // if change in col 11, enter user email and timestamp, then protect range
  if (colCheck == 11 && checkEmpty !== '') {
    var protection = ss.getRange(thisRow, 2, rngHeight, 10).protect().setDescription('Lock Range:');
    var nEmail = ss.getRange(thisRow, 21, rngHeight, 1);
    var nStamp = ss.getRange(thisRow, 22, rngHeight, 1);
    nEmail.setValue(email);       // print email
    nStamp.setValue(new Date());  // print timestamp
    SpreadsheetApp.flush();
    protection.removeEditors(protection.getEditors());  // protect range
    if (protection.canDomainEdit()) {
      protection.setDomainEdit(false);
    }
    protection.addEditors(owners);
    SpreadsheetApp.flush();

  // if change in col 20, enter email and timestamp, then protect range
  } else if (colCheck == 20 && checkEmpty !== '') {
    var protection = ss.getRange(thisRow, 17, rngHeight, 4).protect().setDescription('Lock Range:');
    var vEmail = ss.getRange(thisRow, 23, rngHeight, 1);
    var vStamp = ss.getRange(thisRow, 24, rngHeight, 1);
    vEmail.setValue(email);       // print email
    vStamp.setValue(new Date());  // print timestamp
    SpreadsheetApp.flush();
    protection.removeEditors(protection.getEditors());  // protect range
    if (protection.canDomainEdit()) {
      protection.setDomainEdit(false);
    }
    protection.addEditors(owners);
    SpreadsheetApp.flush();

  // if rejection in col 17, enter email and timestamp
  } else if (colCheck == 17 && rejectCheck == "Rejected") {
    var vEmail = ss.getRange(thisRow, 23, rngHeight, 1);
    var vStamp = ss.getRange(thisRow, 24, rngHeight, 1);
    vEmail.setValue(email);       // print email
    vStamp.setValue(new Date());  // print timestamp
    SpreadsheetApp.flush();
  }
}
```

So a few notes on what this does:

- This script is set up as an onEdit trigger, and this sheet is shared with multiple users.
- If a user edits a cell in column K (11), lock that row/range from columns B-K. Then also add the user's email in column U and a timestamp in column V.
- If a user edits a cell in column T (20), lock that row/range from columns Q-T. Then also add the user's email in column W and a timestamp in column X.
- If a user edits a cell in column Q (17) to "Rejected", just add the user's email in column W and a timestamp in column X.

This works as-is; I'm just not sure it's the most efficient way to do it, and I'm hoping those of you with more knowledge can help me fine-tune it a bit. Let me know if any other info would be helpful in sorting this out! Thanks!! | Lock-range code for Google Scripts/Sheets | javascript;google sheets | null |
_unix.277331 | When a segmentation fault occurs in Linux, the error message Segmentation fault (core dumped) will be printed to the terminal (if any), and the program will be terminated. As a C/C++ dev, this happens to me quite often, and I usually ignore it and move onto gdb, recreating my previous action in order to trigger the invalid memory reference again. Instead, I thought I might be able to perhaps use this core instead, as running gdb all the time is rather tedious, and I cannot always recreate the segmentation fault.My questions are three:Where is this elusive core dumped?What does it contain?What can I do with it? | Segmentation fault (core dumped) - to where? what is it? and why? | segmentation fault;core dump | If other people clean up ...... you usually don't find nothing. But luckily Linux has a handler for this which you can specify at runtime. In /usr/src/linux/Documentation/sysctl/kernel.txt you will find:[/proc/sys/kernel/]core_pattern is used to specify a core dumpfile pattern name.If the first character of the pattern is a '|', the kernel will treat the rest of the pattern as a command to run. The core dump will be written to the standard input of that program instead of to a file.(thanks)According to the source this is handled by the abrt program (that's Automatic Bug Reporting Tool, not abort), but on my Arch Linux it is handled by systemd. You may want to write your own handler or use the current directory.But what's in there?Now what it contains is system specific, but according to the all knowing encyclopedia:[A core dump] consists of the recorded state of the working memory of a computer program at a specific time[...]. In practice, other key pieces of program state are usually dumped at the same time, including the processor registers, which may include the program counter and stack pointer, memory management information, and other processor and operating system flags and information.... 
so it basically contains everything gdb ever wanted, and more.Yeah, but I'd like me to be happy instead of gdbYou can both be happy since gdb will load any core dump as long as you have an exact copy of your executable: gdb path/to/binary my/core.dump. You should then be able to continue business as usual and be annoyed by trying and failing to fix bugs instead of trying and failing to reproduce bugs.
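The core-dump workflow described in this answer can be sketched as a short shell session (typical Linux defaults are assumed; on systemd-based distros core_pattern often pipes dumps to a handler such as systemd-coredump, and path/to/myprog is a hypothetical crashing binary):

```shell
# Lift the per-process core size limit (often 0 by default),
# then confirm the new limit took effect.
ulimit -c unlimited
ulimit -c                          # should print: unlimited
# Where -- or to which handler program -- the kernel sends dumps:
cat /proc/sys/kernel/core_pattern
# After a crash ("Segmentation fault (core dumped)"), load the dump
# together with an exact copy of the crashing binary:
#   gdb path/to/myprog path/to/core
```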
_unix.147766 | I'm about to start a big rsync between two servers on my LAN. Is it better for me to push the files from one server to the other or pull them (backwards)?There is not anything that would make one work, and the other not work -- I am just wondering if there is a reason (maybe speed) to do one over the other.Can anyone give me a good reason, or is there no reason to do one over the other? | Big rsync -- push or pull? | rsync | The way the rsync algorithm works can be found from here.The algorithm identifies parts of the source file which are identical to some part of the destination file, and only sends those parts which cannot be matched in this way. Effectively, the algorithm computes a set of differences without having both files on the same machine. The algorithm works best when the files are similar, but will also function correctly and reasonably efficiently when the files are quite different.So it would not make a difference whether you are uploading or downloading as the algorithm works on checksums of the source and destination files. So, any file can be the source/destination. I found some more useful information here. Some of the excerpts are:RSync is a remote file (or data) synchronization protocol. It allows you to synchronize files between two computers. By synchronize, I mean make sure that both copies of the file are the same. If there are any differences, RSync detects these differences, and sends across the differences, so the client or server can update their copy of the file, to make the copies the same.RSync is capable of synchronizing files without sending the whole file across the network. In the implementation I've done, only data corresponding to about 2% of the total file size is exchanged, in addition to any new data in the file, of course.
New data has to be sent across the wire, byte for byte.Because of the way RSync works, it can also be used as an incremental download / upload protocol, allowing you to upload or download a file over many sessions. If the current upload or download fails, you can just resume it later. |
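The "only the differences are exchanged" behaviour described above rests on a weak rolling checksum that can slide along a file one byte at a time in O(1). A minimal illustrative sketch of that idea (not rsync's actual code; real rsync pairs a similar two-part weak sum with a strong per-block checksum such as MD4/MD5):

```python
MOD = 1 << 16  # each half of the weak checksum is kept to 16 bits

def weak_checksum(block: bytes):
    # a: plain byte sum; b: position-weighted sum (earlier bytes weigh more)
    a = sum(block) % MOD
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % MOD
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    # Slide the window one byte to the right without rescanning it.
    a = (a - out_byte + in_byte) % MOD
    b = (b - block_len * out_byte + a) % MOD
    return a, b
```

Matching a block by this cheap weak sum first, and only then confirming with a strong checksum, is what lets the receiver find identical blocks at any byte offset without hashing every window from scratch.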
_softwareengineering.263399 | Scenario:I'm looking to protect my software that is written in PHP. The nature of PHP is that it is delivered as plain text and therefore cannot be protected by itself. I don't want to install libs to the server like ZendGuard, IonCube or SourceGuardian. I want individuals to be able to use the software but not interfere with the protected part of the application (licensing, sensitive parts). The software is distributed to clients as trial software.Solution Concept:Encrypt the payload with a blockcipher (AES) and store the key on a remote system. The software would require the key to decrypt the payload. An obfuscated part of the software should contact the key server via cURL over SSL and request the decryption key. The decryption key should only be sent back to the Server IP that holds a trial license. A checksum of the files could also be posted with the key request and if it even fails once, the key server could refuse to provide the decryption key forever, which should hopefully prevent tampering.Notes:The client requesting the trial will be known in advance so unauthorized copies of the software should never receive a (correct) decryption key in the first place.The decryption keys are unique to every client and the payload parts are too.The code will never be stored decrypted but only decrypted at runtime and eval()'ed.Questions:Is this feasible or doomed to failure?Wouldn't this protect even against the tiniest tampering by anyone?Thank you! | PHP Source Code Encryption Concept | php;encryption | Let's see. Once your key server sends the decryption key, the hacker can monitor network activity to retrieve the key and use it to decrypt the source code.
From this moment, he can do whatever he wants with it, especially:Modify it so that it sends the checksums you are expecting rather than the actual checksums of the files,Replace the part which contacts the key server, or simply remove it to avoid the check,Post on P2P networks the decrypted source code with the key checking part removed.On the other hand, some of the legitimate users of your product will probably prefer using your competitors' products, because:They'll find that your app is slow to start. Contacting the remote key server takes time, which may or may not be accepted by the end users.Decrypting takes time too, which can waste too many resources on servers. Your competitors' products may achieve a much better CPU footprint if they don't add this sort of complexity.Banning by IP will cause additional problems. Most users have dynamic IPs, which means that a ban will affect the concerned person only for a very limited amount of time (such as one day), and then prevent other people from accessing your product. In the same way, banning a whole company or a wifi hotspot is rather unfortunate in terms of marketing.They won't accept relying on a fragile product (unless you work in a very large company which guarantees that the product will be maintained for the next 5, 10 or 15 years). Not having the source code means that:If your company stops maintaining it, your customers will be using an obsolete product which may contain known but unpatched bugs.If a customer wants to modify it, he has no other choice than to pay you (often a lot, because of the monopoly) to do the change.They won't accept relying on a product which can contain malware. If you're a known company with an excellent reputation, this is not an issue. If you're a startup or don't have a well-established reputation, some customers would not install your app because of the risk of malicious code.
This is especially important on servers, which are often more protected than desktop PCs in companies.If you actually invented something that should be protected (and very probably, you haven't), the only way you can protect it is to move the sensitive code to the servers you own, and then provide an API to access the functionality from the outside. As soon as you give out the code, no matter how well it is protected, it can be decrypted; otherwise, it wouldn't be possible to run it.If the only goal is to have a trial version, simply host it on your own servers. Potential customers will be able to try it, and if they are interested, purchase the product. Of course, in order to convince sysadmins as well, you should also provide very detailed information about the way your app should be deployed and hosted.Later, when your product becomes successful, you may consider evolving your offer, by providing:A time-limited demo hosted on your server,A pay-per-month subscription where the product is still hosted on your server. Multiple variants of subscriptions may separately target individuals, small companies and medium-sized companies. For small entities, the product may be free of charge for a year or some other long period of time (or forever).A more expensive solution where the customer deploys your app on his own servers. This solution may target large companies.With this model (used by many startups), encryption of source code becomes mostly irrelevant.
_softwareengineering.138023 | I have this setup:A windows/x86 development box and a PandaBoard ES for testing with a linux on it.I would like to ask you for recommending a linux distribution that I would run in Hyper-V on my devbox that would be used only for compiling arm/linux binaries. After compiling these binaries I will copy them to an SD Memory Card and test them on PandaBoard.Thanks! | Minimal linux distro for compiling arm binaries | c++;linux;arm | I use Zenwalk for whenever I need a real linux box (2.6 kernel + XFCE 4.8). It's not really a minimal distro (in the sense that DSL and Puppy are), but it feels fast and snappy in VirtualBox 4.1 with 1GB RAM allocated to it . The machine runs on a Windows 7 x64 host with a total of 4GB of RAM with hardware virtualization support and nested paging.If you're going to use Linux frequently and/or for extended periods of time, why not consider dual booting with Windows rather than running it inside a VM? Also, have you considered Cygwin? It provides quite a few linux tools on a Windows machine. |
_webapps.4863 | I found that I can no longer download any videos from Dailymotion. It seems to me that the download link has been disabled.Any idea how to download videos from Dailymotion? | How to download videos from Dailymotion | download;dailymotion | null |
_cstheory.7633 | I have a directed acyclic graph (DAG) such that there can only be at most one edge between any two nodes (ie, only one (i,j) can exist between i and j). I need to find the smallest set of paths from sources si to sinks ti that cover all the edges (ie, any edge is in at least one path).The problem should be equivalent to a minimum flow problem, after having introduced a virtual source and a virtual sink and having set l(i,j)=1 for i,j not in {s,t} and l(s,si) = l(ti,t)=0 and c(i,j)=infinite.There is some literature about that (eg, http://basilo.kaist.ac.kr/mathnet/kms_tex/981523.pdf). However, I'd like to know if there is an algorithm that works better in the specific case. The best I can think of is the blocking flow method (see the paper above), which should be O(|V|*|E|) in my case.Thanks in advance for any help!Marco | Minimum path edge-cover or minimum flow with unit capacities and DAGs | ds.algorithms;graph theory;max flow | null
_cs.48029 | What type of virus protections are available and their suitability to a cyber-security company. I'm talking about top notch stuff. Are there any methods or techniques that are rare or unusual? Other than anti-virus software, are there other ways to keep your systems protected from viruses? | What type of virus protections are available and their suitability to a cyber-security company. | operating systems | null
_webmaster.33238 | I would like for User-agents to index my relative pages only without accessing any directory on my server. As an initial thought, I had this version in mind: User-agent: *Disallow: */*Sitemap: http://www.mydomain.com/sitemap.xmlMy Questions: Is it correct to block all directories like that - Disallow: */*? Would search engines still be able to see and index my sitemap if I disallowed all directories? What are the best practices for securing the robots.txt file?For Reference:Here is a good tutorial for robots.txt#Add this if you want to stop Alexa from indexing your site.User-agent: ia_archiverDisallow: /#Add this to stop duggmirror User-agent: duggmirrorDisallow: /#Add this to allow specific agents User-agent: GooglebotDisallow: #Add this to allow all agents while blocking specific directoriesUser-agent: *Disallow: /cgi-bin/Disallow: /*?* | How to secure robots.txt file? | google;seo;sitemap;user agent;robots.txt | That's going to block your entire website from being crawled.NoThere is no such thing as securing your robots.txt. If you want to keep visitors out of your directory root you need to prevent that using more secure means. Putting a blank index.html file will easily do the trick. If you're running Apache you can also do it easily using htaccess.
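Since robots.txt itself cannot be "secured", the htaccess route this answer mentions might look like the following .htaccess sketch for Apache 2.4, dropped into the directory to be closed off (directive names are standard Apache; the policy itself is just an example):

```apacheconf
# No auto-generated directory listings under this directory.
Options -Indexes
# Refuse all direct HTTP access (Apache 2.4 syntax; on 2.2 the
# equivalent was "Order deny,allow" plus "Deny from all").
Require all denied
```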
_unix.240144 | According to GNU Hurd Architecture the GNU operating system was originally designed to be used with the GNU Hurd kernel which is a microkernel architecture.How is it that hobbyists were able to combine the Linux kernel with GNU software to create GNU/Linux systems if Linux is a monolithic design? Does the Linux kernel replace GNU components like application IPC, device drivers, file system, etc. or was there a major effort to bring these GNU user mode utilities into kernel mode? If the latter is true, how difficult was it to do that? | How are GNU system utilities compatible with Linux? | linux kernel;gnu | null |
_webmaster.53333 | I am using Google Analytics and I am trying to set cross domain tracking for my website. I've read Google's cross domain tracking guide, but I am confused as to how to implement it properly.The issue I am having is that the example code they give looks nothing like the tracking code I was given through my Google Analytics admin console.My tracking code looks like this:<script>(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)})(window,document,'script','//www.google-analytics.com/analytics.js','ga');ga('create', 'MyTrackingID', 'MyDomain');ga('send', 'pageview');</script>(My actual tracking ID and my domain have been censored out with MyTrackingID and MyDomain, respectively.)However, the example tracking code given in the guide looks like this:<script type=text/javascript>var _gaq = _gaq || [];_gaq.push(['_setAccount', 'UA-XXXXXXXX-1']);_gaq.push(['_setAllowLinker', true]);_gaq.push(['_trackPageview']);(function() {var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);})();</script>How do I add the _gaq.push(['_setDomainName', 'A.com']); option to my tracking code as instructed? | Cross domain tracking with new analytics.js syntax? | google analytics | null |
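No answer is recorded for this row, but for context: in analytics.js the old _gaq.push(['_setAllowLinker', true]) / _setDomainName pair is replaced by the linker plugin. A sketch of that documented pattern follows (the ga() stub exists only so the snippet runs outside a browser; the tracking ID and destination.com are placeholders):

```javascript
// Stub standing in for the analytics.js loader from the question;
// in a real page, ga() is defined by the snippet Google provides.
var calls = [];
function ga() { calls.push([].slice.call(arguments)); }

// analytics.js cross-domain setup:
ga('create', 'UA-XXXXXXXX-1', 'auto', { allowLinker: true });
ga('require', 'linker');                      // load the linker plugin
ga('linker:autoLink', ['destination.com']);   // decorate links to that domain
ga('send', 'pageview');
```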
_codereview.128479 | Here is my Graph class that implements a graph and has nice a method to generate its spanning tree using Kruskal's algorithm.I want to:Make it pythonicImprove readabilityImprove the abstraction (but not changing the use of outer and inner dicts to represent the graph)Performance is not a concern.Code:class Graph(object): def __init__(self): self.g = {} def add(self, vertex1, vertex2, weight): if vertex1 not in self.g: self.g[vertex1] = {} if vertex2 not in self.g: self.g[vertex2] = {} self.g[vertex1][vertex2] = weight self.g[vertex2][vertex1] = weight def has_link(self, v1, v2): return v2 in self[v1] or v1 in self[v2] def edges(self): data = [] for from_vertex, destinations in self.g.items(): for to_vertex, weight in destinations.items(): if (to_vertex, from_vertex, weight) not in data: data.append((from_vertex, to_vertex, weight)) return data def sorted_by_weight(self, desc=False): return sorted(self.edges(), key=lambda x: x[2], reverse=desc) def spanning_tree(self, minimum=True): mst = Graph() parent = {} rank = {} def find_parent(vertex): while parent[vertex] != vertex: vertex = parent[vertex] return vertex def union(root1, root2): if rank[root1] > rank[root2]: parent[root2] = root1 else: parent[root2] = root1 if rank[root2] == rank[root1]: rank[root2] += 1 for vertex in self.g: parent[vertex] = vertex rank[vertex] = 0 for v1, v2, weight in self.sorted_by_weight(not minimum): parent1 = find_parent(v1) parent2 = find_parent(v2) if parent1 != parent2: mst.add(v1, v2, weight) union(parent1, parent2) if len(self) == len(mst): break return mst def __len__(self): return len(self.g.keys()) def __getitem__(self, node): return self.g[node] def __iter__(self): for edge in self.edges(): yield edge def __str__(self): return \n.join('from %s to %s: %d' % edge for edge in self.edges())graph = Graph()graph.add('a', 'b', 2)graph.add('a', 'd', 3)graph.add('a', 'c', 3)graph.add('b', 'a', 2)graph.add('b', 'c', 4)graph.add('b', 'e', 3)graph.add('c', 'a', 
3)graph.add('c', 'b', 4)graph.add('c', 'd', 5)graph.add('c', 'e', 1)graph.add('d', 'a', 3)graph.add('d', 'c', 5)graph.add('d', 'f', 7)graph.add('f', 'd', 7)graph.add('f', 'e', 8)graph.add('f', 'g', 9)print(graph.spanning_tree())print()print(graph.spanning_tree(False)) | Graph and minimum spanning tree in Python | python;algorithm;graph | There are no docstrings. How do I use this class? What arguments should I pass to the methods and what do they return?There's no way to add a vertex with no edges to a graph.The test code shows that it is quite laborious to initialize a graph. It would make sense for the constructor to take an iterable of edges.The attribute g is not intended for use outside the class (callers should use the public methods). It is conventional to name internal attributes with names starting with one underscore. The code in add could be simplified by making use of collections.defaultdict:def __init__(self, edges=()): Construct a graph containing the edges from the given iterable. An edge between vertices v1 and v2 with weight w should be specified as a tuple (v1, v2, w). self._g = defaultdict(dict) for edge in edges: self.add(*edge)def add(self, v1, v2, w): Add an edge between vertices v1 and v2 with weight w. If an edge already exists between these vertices, set its weight to w. self._g[v1][v2] = self._g[v2][v1] = wThe has_link method uses link, but the rest of the code uses edge. It would be better to be consistent.The __getitem__ method uses node, but the rest of the code uses vertex. It would be better to be consistent.Since add ensures that edges are stored in both directed, it's only necessary to test one direction:def has_edge(self, v1, v2): Return True if the graph has an edge between vertices v1 and v2. return v2 in self._g[v1]The edges method accumulates a list of edges. To avoid an edge appearing twice, the code checks each edge to see if it is already in the list. 
But lists do not have an efficient membership test, so the runtime is quadratic in the number of edges. It would be more efficient to make a set of edges:def edges(self): Return the edges in the graph as a set of tuples (v1, v2, w). edges = set() for v1, destinations in self._g.items(): for v2, w in destinations.items(): if (v2, v1, w) not in edges: edges.add((v1, v2, w)) return edgesIn the sorted_by_weight method, it would be better to name the keyword argument reverse, for consistency with Python's built-in sorted.In spanning_tree, the parent and rank dictionaries can be initialized directly:parent = {v: v for v in self._g}rank = {v: 0 for v in self._g}The parent data structure is known as a disjoint-set forest. An important optimization on this data structure is path compression: that is, when you search the parent data structure for the root of the tree containing a vertex, you update the parent data structure so that future searches run quickly:def find_root(v): if parent[v] != v: parent[v] = find_root(parent[v]) return parent[v]In the union function, you always attach the tree for root2 to the tree for root1:if rank[root1] > rank[root2]: parent[root2] = root1else: parent[root2] = root1It's more efficient to always attach the smaller tree to the bigger tree, so that the length of the searches in find_root doesn't increase:if rank[root1] > rank[root2]: parent[root2] = root1else: parent[root1] = root2The __len__ and __getitem__ methods treat the graph as a collection of vertices, but __iter__ treats it as a collection of edges. This seems likely to lead to confusion. It would be better to be consistent and have all special methods treat the graph as a collection in the same way. |
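The find_root/union fragments in this review can be assembled into one self-contained disjoint-set helper with both suggested optimizations, path compression and union by rank (a sketch; make_dsu is an illustrative name, not part of the reviewed class):

```python
def make_dsu(vertices):
    # Disjoint-set forest over the given vertices.
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find_root(v):
        if parent[v] != v:
            parent[v] = find_root(parent[v])  # path compression
        return parent[v]

    def union(v1, v2):
        # Returns True if two trees were merged, False if already joined.
        r1, r2 = find_root(v1), find_root(v2)
        if r1 == r2:
            return False
        if rank[r1] < rank[r2]:
            r1, r2 = r2, r1                   # attach smaller tree to bigger
        parent[r2] = r1
        if rank[r1] == rank[r2]:
            rank[r1] += 1
        return True

    return find_root, union
```

In Kruskal's loop, union(v1, v2) returning True is exactly the "parents differ, keep this edge" test.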
_unix.135028 | I scale down the CPU frequency by indicator-cpufreq on Ubuntu 12.04.After I suspend Ubuntu and wake it up, the CPU goes back toon demand (the default). I wonder if there is a way to keep the CPU frequency after scaling it?NOTE: sometimes, my Ubuntu can't wake up completely, and my laptop can easily become overheated under CPU frequency on demand. | CPU frequency goes back after scaling down by indicator-cpufreq, suspension and waking up | ubuntu;cpu frequency | null |
_webmaster.10617 | Is there a website or software I can use to easily make web 2.0 buttons? I really like this: https://github.com/imathis/fancy-buttons, but it's only for Ruby.Every time I design a new website, I need to go into Fireworks and tediously create the buttons by hand. Surely someone out there must have come up with an easy way of making new web buttons. | How can I easily create web 2.0 buttons? | graphics | null |
_webmaster.71462 | So it's obvious in the new era that we should disavow poor quality backlinks. What about links from sites like these that are intended to provide statistical or performance information? Most of them seem pretty poor quality but are respectable and used enough by people. Allow or Disavow - how do you deal with these?Some random examples of ones that make followed backlinks:alexa.com (included for completeness of this thought)m.bizdig.dowoorank.comquantcast.comqirina.comaolstalker.comseobility.netstatsnode.comprlog.rukeywordspy.comseoprofiler.comstatscrop.comwebsitesalike.comppfinder.comsourcetool.comzibb.comThere are handfuls more but they get trashy fast. Thanks for your input! | Backlinks - Should I Disavow SEO, Keyword, Performance, Stats, Domain, and Other Similar Sites? | backlinks;directory;disavow links | Low quality is not always considered thin content or non-relevant Google's algorithm for determining low quality links is far more complex than just assuming the link is not relevant or thin on content; of course, the majority of the time a link from a site that is not relevant and thin will be considered low quality, but this isn't necessarily always the case.Google uses link profilingWhile we don't know Google's secret sauce when it comes to working out how quality a link is, what we do know - and Google has said repeatedly - is that both linking in and linking out can help rankings. It has also been said that quality sites will often have a similar link profile to that of their competition, so with this said we can tentatively believe that having a similar link profile to that of the top ranking site within the same niche will benefit our rankings.With that said... 
link profiles and natural link diversity is importantNow with that said, the likes of many top websites do not need to actively remove auto-generated links from woorank, pingdom and so on; in fact these should be expected, and it's likely that most other websites have these types of links. Removing these links can change your link diversity. For example, removing nofollow links from alexa or woorank will increase your proportion of dofollow links, anchor texts and so on... therefore changing your link profile.What disavow was designed for...It's important to know why disavow came about. Basically, as you most likely know, Google invented evil panda and penguin, which punished millions of websites for low quality pages and backlinks. Google released disavow for those webmasters actually affected by penguin, not for those who merely believed they were affected; in fact removing links could actually damage your rankings even though you consider the link low quality.Disavow is for punished websites, not for shaping your link profileSo disavow is designed for websites that at some point have felt Google's wrath; it's not designed for webmasters to gain better rankings by removing links they believe might not be helping. In fact most auto-generated links such as the ones you listed could be considered 'natural', because it's likely that 99% of websites have links from those sites; therefore removing them becomes unnatural... 
not to say you'll get punished, but a good saying is if something ain't broke, don't fix it.When to use the disavow toolIf your site has engaged in Blog Comment SpamIf your site has engaged in Forum Signature SpamIf your site has engaged in Forum Field SpamIf your site has engaged in Forum Profile SpamIf your site has engaged in Apache Access Log SpamIf your site has engaged in PDF SpamIf your site has engaged in Guest Blog SpamIf your site has engaged in Mass Directory SpamIf your site has engaged in Social Media SpamIf your site has engaged in Article SpamIf your site has engaged in Link ExchangesIf your site has engaged in Web2.0 SpamIf your site has engaged in Linking Pyramid SchemesIf your site has engaged in anything on a big scale that would make Google believe you or someone else has attempted to manipulate your search results using black hat linking schemes
_codereview.17863 | for now I'm using:int connect(const String& address, int port) { struct sockaddr_in servAddr; struct hostent* host; /* Structure containing host information */ /* open socket */ if((handle = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) return ERROR; //TODO: gethostbyname is obsolete. if((host = (struct hostent*) gethostbyname(address)) == 0) return ERROR; memset(&servAddr, 0, sizeof(servAddr)); servAddr.sin_family = AF_INET; servAddr.sin_addr.s_addr = inet_addr(inet_ntoa(*(struct in_addr*)(host -> h_addr_list[0]))); servAddr.sin_port = htons(port); if(::connect(handle, (struct sockaddr*) &servAddr, sizeof(servAddr)) < 0) return ERROR; return OK; }this procedure but every time I compile it I'm getting:socket.cpp:(.text+0x374): warning: gethostbyname is obsolescent, use getnameinfo() instead.getnameinfo is still confusing to me. Here is my attempt to implement it:struct sockaddr_in servAddr;struct hostent *host; /* Structure containing host information *//* open socket */if ((handle = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) return ERROR;memset(&servAddr, 0, sizeof(servAddr));servAddr.sin_family = AF_INET;servAddr.sin_addr.s_addr = inet_addr(address.ptr());servAddr.sin_port = htons(port);char servInfo[NI_MAXSERV];if ( ( host = (hostent*) getnameinfo( (struct sockaddr *) &servAddr ,sizeof (struct sockaddr) ,address.ptr(), address.size() ,servInfo, NI_MAXSERV ,NI_NUMERICHOST | NI_NUMERICSERV ) ) == 0) return ERROR;if (::connect(handle, (struct sockaddr *) &servAddr, sizeof(servAddr)) < 0) return ERROR;-- yes this doesn't work :(Maybe I should use getaddrinfo instead? getaddrinfo(hostname, NULL, &hints, &res) - can I use it like gethostbyname? but where is host actually here? 
hints?Working recode based on answer:int Socket::connect(const String& address, int port) { struct sockaddr_in servAddr; /* open socket */ if((handle = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) return ERROR; //gethostbyname by getaddrinfo replacement addrinfo hints = {sizeof(addrinfo)}; hints.ai_flags = AI_ALL; hints.ai_family = PF_INET; hints.ai_protocol = 4; //IPPROTO_IPV4 addrinfo* pResult = NULL; int errcode = getaddrinfo(address, NULL, &hints, &pResult); if(errcode != 0) return ERROR; memset(&servAddr, 0, sizeof(servAddr)); servAddr.sin_family = AF_INET; servAddr.sin_addr.s_addr = *((uint32_t*) & (((sockaddr_in*)pResult->ai_addr)->sin_addr)); servAddr.sin_port = htons(port); if(::connect(handle, (struct sockaddr*) &servAddr, sizeof(servAddr)) < 0) return ERROR; return OK; } | Socket connect realization: gethostbyname or getnameinfo | c++ | This works for me:int connect2(const CStringA& address, int port) { struct sockaddr_in servAddr; /* open socket */ SOCKET handle; if((handle = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) return ERROR; //gethostbyname by getaddrinfo replacement ADDRINFO hints; ZeroMemory(&hints, sizeof(hints)); hints.ai_flags = AI_ALL; hints.ai_family = PF_INET; hints.ai_protocol = IPPROTO_IPV4; ADDRINFO* pResult = NULL; int errcode = getaddrinfo((LPCSTR)address, NULL, &hints, &pResult); if(errcode != 0) return ERROR; memset(&servAddr, 0, sizeof(servAddr)); servAddr.sin_family = AF_INET; servAddr.sin_addr.S_un.S_addr = *((ULONG*)&(((sockaddr_in*)pResult->ai_addr)->sin_addr)); servAddr.sin_port = htons(port); if(::connect(handle, (struct sockaddr*) &servAddr, sizeof(servAddr)) < 0) return ERROR; return OK;} |
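For non-Windows builds, the same gethostbyname-to-getaddrinfo migration can be written portably. A POSIX sketch follows (connect_to is an illustrative name; unlike both versions above it walks every returned address and calls freeaddrinfo(), which they omit):

```c
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Resolve host:port and connect to the first address that accepts.
 * Returns a connected socket descriptor, or -1 on failure. */
int connect_to(const char *host, const char *port)
{
    struct addrinfo hints, *res, *p;
    int fd = -1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;                    /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);                /* release the resolver's list */
    return fd;
}
```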
_unix.5439 | At home I'm setting up a CentOS 5.5 server that will be running a bunch of KVM VMs. Normally pressing the CTRL-SHIFT-Fn combination on the attached keyboard switches to terminals on the host machine. What I'd like to do instead is have some number of CTRL-SHIFT-Fn combinations attach to the VMs that are running, in essence have the key combination behave like a KVM switch. So for example, pressing CTRL-SHIFT-F1 displays a text terminal for the host machine, but pressing CTRL-SHIFT-F2 displays an X session that is running on one VM and pressing CTRL-SHIFT-F3 displays yet another VM terminal.Some of the VMs will have X installed, so I'd like the solution to behave just like a 'normal' X session: Presents an X login screen if I haven't already logged in.How can this be done? | Attach terminal to X desktop running in VM | xorg;centos;virtual machine | null |
_unix.81599 | On the paragraph explaining arithmetic expansion, Bash's user guide uncovers 2 different ways of evaluating an expression, the first one uses $((EXPRESSION)) and the second one uses $[EXPRESSION]. The two ways seem pretty similar as the only difference I have found is:$[EXPRESSION] will only calculate the result of EXPRESSION, and do no tests:Yet, I am intrigued because the same document recommends using $[EXPRESSION] rather than $((EXPRESSION)).Wherever possible, Bash users should try to use the syntax with square brackets:Why would you want that if less tests are being done? | Why should I use $[ EXPR ] instead of $(( EXPR ))? | bash | Duplication Question (with answer)https://stackoverflow.com/questions/2415724/bash-arithmetic-expression-vs-arithmetic-expressionThe manpage for bash v3.2.48 says:[...] The format for arithmetic expansion is: $((expression)) The old format $[expression] is deprecated and will be removed in upcoming versions of bash.So $[...] is old syntax that should not be used anymoreIn addition to that answer:http://manual.cream.org/index.cgi/bash.1#27Info relating to bash versions:Here is some info about bash man pages (its hard to find info on what version each one is referring to):OPs link:http://www.tldp.org/guides.htmlBash Guide for Beginners version: 1.11author: Machtelt Garrels, last update: Dec 2008sth (74.6k rep) quoting bash v3.2.48from https://stackoverflow.com/questions/2415724/bash-arithmetic-expression-vs-arithmetic-expression)Note: More info about [] vs (()) here: http://lists.gnu.org/archive/html/bug-bash/2012-04/msg00033.htmla link I found:http://www.gnu.org/software/bash/manual/last updated August 22, 2012http://www.gnu.org/software/bash/manual/bash.html#Arithmetic-Expansion |
_webmaster.54986 | My web designer recently told me that I need to be careful not to Google for my business' website, click on its search result link, then quickly close the page (or click back) too many times.He says Google knows that I didn't stay on the page, and could penalize my site for having a high bounce rate if it happens too much.Apparently, it could look like the behavior of a visitor who was not interested in what they found (hence the supposed detrimental effect on the site's search ranking).This sounds hard to believe to me because I would not have thought any information is transmitted which tells Google (or anyone, for that matter) whether or not a website is still open in a browser (in my case Firefox v25.0).Could there possibly be any truth to this?If not, why might he have come to this conclusion? Is there some click-tracking or similar technology employed by search engines which does something similar?Looking forward to hearing everyone's thoughts. | Is it true that quickly closing a webpage opened from a search engine result can hurt the site's ranking? | seo;click tracking | null |
_unix.379002 | I am using a CentOS and I have enabled the USB driver for GSM and CDMA modems using make menuconfig.But, how does it work? After changing in menuconfig, is the modification performed in the moment? Or do I have to compile the whole kernel in order to get this configuration? | When are menuconfig changes performed? | linux;kernel;drivers | With make menuconfig you only change the configuration file .config which is used in the compilation process. One doesn't need to use this menuconfig tool - there are other scripts for that or one can even edit .config by hand (although this is error prone and thus not recommended).So in order to finish the task you've started you need to compile the kernel with new settings, copy that kernel to /boot (or wherever your boot loader is reading), optionally update link /usr/src/linux to point to correct source, add to grub (or other bootloader you use) a line with new kernel, and after that just reboot, select the previously set line in the grub menu, and voil. |
_unix.265530 | As the picture below shows, my VM Linux machine hung and I can't log in. How can I identify the reason for the hang from the messages on the console? I also searched for more info in the /var/log/messages file (but I get lost there and can't find anything useful), and I don't know exactly where to find core files. What other files can we find info in for this situation? | Linux + how to know why a Linux machine hung from the messages on the console | linux;kernel;logs;kernel panic;core dump | null
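Since no answer was recorded: a hedged starting point for mining the logs. The pattern list below is an assumption (common kernel hang/panic markers), not an exhaustive set:

```shell
# Hypothetical helper: pull out the log lines that most often explain a
# hang or panic from a (copy of) /var/log/messages or dmesg output.
panic_lines() {
    grep -iE 'panic|oops|bug:|soft lockup|hung task|blocked for more than|out of memory|call trace' "$1"
}
# Usage: panic_lines /var/log/messages
# Note: a hard hang often leaves nothing on disk; for that you need the
# hypervisor's console/serial log, or kdump configured to capture a core.
```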
_cs.77628 | I have never seen my account be unable to log in! | Aren't the servers that store the MySQL database at Facebook ever down for maintenance, given that Facebook is always on? | distributed systems;databases | null
_codereview.40688 | I am very new to PHP and I kind of sort of want to create the pages like I would in ASP.NET; it's just the way I am wired. I have been inserting the header/menu using a PHP Include and here is what the code looks like for just the first page.Am I doing this properly, or is there a better way?index.php<!DOCTYPE html PUBLIC -//W3C//DTD XHTML 1.0 Transitional//EN http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd><html xmlns=http://www.w3.org/1999/xhtml> <head> <meta http-equiv=Content-Type content=text/html; charset=utf-8 /> <title>New Jerusalem</title> <link rel=stylesheet type=text/css href=Main.css/> </head> <body> <div id=wrapper> <div id=header> <?php include ('Menu.php') ?> </div> <div id=content> <h1 class=PageTitle >Sturgis Hellfighters</h1> <div class=indexright width=50%> <h3 class=smallheader>Scripture Corner</h3> <p><img src=images/Romans5-8.jpg width=475 alt=Romans 5:8 /></p> <p></p> </div> <div class=indexleft width=50%> <p><img src=images/BridalFallsSpearfishCanyon.jpg width=475 height=720 alt=Bridal Falls Spearfish Canyon /> </p> </div> </div> </div> </body></html>Menu.php<img src=images/SiteHeader.jpg width=1000 height=150 alt=HellFighters New Jerusalem id=siteheader/><ul id=HomeMenu> <li class=TopLevel><a href=index.php>Home</a></li> <li class=TopLevel><a href=#>News</a> <ul> <li class=SecondLevel><a href=#>Sturgis Current Events</a></li> <li class=SecondLevel><a href=#>Rapid City Events</a></li> <li class=SecondLevel><a href=#>Pierre Events</a></li> <li class=SecondLevel><a href=#>Other Events</a></li> </ul> </li> <li class=TopLevel><a href=#>Photos</a> <ul> <li class=SecondLevel><a href=christmasatpineridge.php>Christmas On Pine Ridge</a></li> <li class=SecondLevel><a href=XmasMission.php>Christmas At the Mission</a></li> <li class=SecondLevel><a href=OpenHouse.php>Open House</a></li> <li class=SecondLevel><a href=Nationals.php>Nationals</a> </ul> </li> <li class=TopLevel><a href=#>Events</a> <ul> <!-- --> <li 
class=SecondLevel><a href=#>A Pine Ridge Christmas</a> <li class=SecondLevel><a href=#>A Sturgis HellFighter Christmas</a> <li class=SecondLevel><a href=#>Sturgis Motorcycle Rally</a></li> <li class=SecondLevel><a href=#>Random City Swap Meet</a></li> <li class=SecondLevel><a href=#>Cool Event</a></li> </ul> </li> <li class=TopLevel><a href=#>Contact Info</a> <ul> <li class=SecondLevel><a href=#>President</a></li> <li class=SecondLevel><a href=#>Vice-President</a></li> <li class=SecondLevel><a href=#>Preacher Man</a></li> </ul> </li> </ul>I would like to use HTML5 if possible, so if your review brings up HTML5 that is cool with me too.If you would like to see the front page, click here.This page will be changing as I am currently working on it. | Review structure of PHP/HTML | php;html;beginner;html5 | Going to some HTML5 stuff straight away.Replace your doctype with the HTML5 doctype. This is necessary for the next few steps<!DOCTYPE html>Use the shorter charset meta tag and the meta viewport tag inside your head area. The viewport meta tag is necessary for mobile devices and ensures a good default viewport.<meta charset=utf-8><meta name=viewport content=width=device-width, initial-scale=1.0>Side note: You may omit the type-attribute for style, script and link tag (not for RSS) and the closing / for self-closing tags. This is not a recommendation, but it's possible.Use ID's only when you definitely can say There will be only one element of this per page, ever. A wrapper is not unique to a page. It should be a class.Generally speaking, ID's don't provide you with a true benefit and kinda makes your life harder. I don't use ID's as styling hooks anymore and I'm really happy with it.You're navigation is included with PHP and is named menu.php, but it actually contains a header image as well. A better name would be header.php, wouldn't it?That being said, you may consider moving everything above your header and the header itself to a header.php. 
Even your head area and the html tag. This only works if your header is the same on all of your pages. I'm using dash-delimited class names in my HTML. No CamelCase names. This is my preference, and if you want to use CamelCase, this is fine. However, I would avoid names with no delimiting at all and all-lowercase names like indexright or smallheader. That's it for now. I'm going to add some more stuff later. Stay tuned.
_datascience.21807 | So I wrote a vanilla 1-hidden-layer neural net in Java with the goal of training it to recognize handwritten digits from the MNIST database. However, on the first run, it didn't work, performing no better than random guessing. I went back into the code to debug and find where I went wrong, and one of the things I tried was only training one layer at a time. This is where the weirdness happened. When I train only the connections into the output layer, the neural net converges to about 80% really quickly, then slowly degrades back to about random. When I train only the connections from the input layer to the hidden layer, it converges to about 80% really slowly, then degrades back to random rather rapidly. I decided to look at the actual output values produced in these cases (the ten values corresponding to the confidence that the neural net has towards each of the ten possible digits), and found that when I trained only the first layer, all values tended towards one, with the expected one converging faster. On the other hand, when I trained only the final layer, all values tended towards zero, with the expected one converging slower. This goes against all my sensibilities of how the back-propagation algorithm should work mathematically, and I can't seem to work out what's wrong despite hours of effort.
run();
int sum = 0;
for(int ii = 0; ii < outputLayer.size(); ii++)
{
    Node n = outputLayer.get(ii);
    for(Connection c : n.connections)
    {
        c.origin.adjustSum += c.destination.value * (1-c.destination.value) * (c.destination.value - expected[ii])*c.weight;
        c.weight -= learningRate * c.origin.value * c.destination.value * (1-c.destination.value) * (c.destination.value - expected[ii]);
    }
}
for(Node n : hiddenLayer)
{
    for(Connection c : n.connections)
    {
        c.weight -= learningRate * c.origin.value * c.destination.value * (1-c.destination.value) * n.adjustSum;
    }
}

Upon request, here is Node:

public class Node implements Serializable
{
    public ArrayList<Connection> connections;
    public double value;
    public double adjustSum;

    /**
     * The constructor for the node object
     * Initializes the instance variables
     */
    public Node()
    {
        connections = new ArrayList<Connection>();
    }

    /**
     * A method that finds the value of a node
     * It sums up the products of each of its inputs and the connection weights,
     * then passes this sum through the logistic function and sets that as the value of the current node
     */
    public void findValue()
    {
        double totalInput = 0;
        for(Connection c : connections)
        {
            totalInput += (c.weight*c.origin.value);
        }
        value = logisticCurve(totalInput);
    }

    /**
     * This method calculates the output of the function 1/(1+e^-x) for a given input
     * @param- the input (x) value
     * @return- the output (y) value
     */
    public double logisticCurve(double xVal)
    {
        double yVal = 1/(1+Math.exp(-1*xVal));
        return yVal;
    }
}

The neural net uses a sigmoid activation function and size 786-256-10. Connection weights are initialized in the following way:

weight = (Math.random()*2/Math.sqrt(numInputs))-(1/Math.sqrt(numInputs));

Nothing else in my unit testing has seemed odd or off, but if you have any tests you want me to try, or need to see or describe any more code, just ask. If you have any advice at all, it would be greatly appreciated; I am absolutely stuck.
| Training each layer in a neural net works individually, but fails as a whole | machine learning;neural network;classification;backpropagation | null |
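Not an answer to where the bug is, but a reference worth having while debugging this: a minimal, dependency-free sketch of the standard gradients for a 1-hidden-layer sigmoid network under squared error. All names below are invented; this is not the poster's code. The analytic deltas can then be checked against finite differences:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w1, w2, x):
    # hidden and output activations for one sample
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    o = [sigmoid(sum(wi * hi for wi, hi in zip(row, h))) for row in w2]
    return h, o

def loss(o, t):
    return 0.5 * sum((oi - ti) ** 2 for oi, ti in zip(o, t))

def grads(w1, w2, x, t):
    h, o = forward(w1, w2, x)
    # output-layer deltas: (o - t) * o * (1 - o)
    do = [(oi - ti) * oi * (1 - oi) for oi, ti in zip(o, t)]
    g2 = [[doi * hj for hj in h] for doi in do]
    # hidden deltas back-propagate through the PRE-update outgoing weights
    dh = [sum(do[k] * w2[k][j] for k in range(len(do))) * h[j] * (1 - h[j])
          for j in range(len(h))]
    g1 = [[dhj * xi for xi in x] for dhj in dh]
    return g1, g2
```

Comparing each analytic gradient entry with a central finite difference of the loss is a quick way to localize which layer's update rule disagrees with the math.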
_unix.267164 | Not long ago we found out about pkill, and we had in mind to start using it in a setuid (for root) script for global clean-up of processes. This could save us lots of tedious maintenance where some clients can't remove shared resources using their own scripts, due to unimportant permission limitations. However, after some struggling we only came up with pkill -v -u root <name> (so far we intend to keep it simple and prevent it from devolving into a long and ugly script with sed, awk, grep and so on). Of course it doesn't work: it just kills everything except the processes that match the given name. Is there any short modified version of that pkill command that gets us the results we need? P.S.: I want to avoid any discussions about the morality of giving some sort of root power to the users. The running OS is Solaris 10, if that matters. | How can I kill processes by a specific name while excluding root processes? | process;solaris;kill;ps | null
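For what it's worth, a hedged workaround sketch (the helper names are invented; `pgrep`/`ps -o` with the options used here are standard on Solaris, but worth double-checking on 10): list the PIDs matching the name, then drop the root-owned ones yourself instead of asking pkill to negate the match.

```shell
# pkill -v negates the NAME match, not the user match, which is why
# `pkill -v -u root name` kills everything except processes called "name".
# Hypothetical sketch of the intended behaviour instead:

non_root() {           # reads "pid user" pairs on stdin, prints non-root pids
    awk '$2 != "root" { print $1 }'
}

kill_by_name_except_root() {
    for pid in $(pgrep -x "$1"); do
        printf '%s %s\n' "$pid" "$(ps -o user= -p "$pid")"
    done | non_root | while read -r pid; do
        kill "$pid"
    done
}
```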
_unix.212543 | I'm quite new to the Linux world but very happy with my Linux Mint installation. One thing that confuses me is that I'm not sure how to update certain programs. For example, I installed Blender from the repository, but the version is very old (2.69). I have downloaded the newer version of Blender (2.74), but I have no idea how to install it so that when I open up Blender from the menu icon, it opens up the new version instead of the old. How should situations like this be handled in Linux? | How to run the latest version of a program? | linux;linux mint | For popular software you will usually find other users who provide ready-to-use packages for your distro. Example: https://launchpad.net/~thomas-schiex/+archive/ubuntu/blender Anything you install should go through the package manager in any case, so you can keep track of it and get rid of its files in the future. If you just unpack a binary tarball somewhere in your home, I guess you'll also have to create your own menu icon for it... installing it as a distro package (or making your own according to your distro's docs) is usually the better solution though.
_codereview.34022 | I created a class a while ago that I named a Recursion Array. It is a way to create dynamically memoized sequences such as the Fibonacci sequence or the factorial numbers sequence. The concept is to create mathematical sequences defined by recursion by only providing the first values and the recursive formula; all the computed results are internally stored in a vector. That allows one to create mathematical formulas in a recursive way, without having to care about the access time; each result is only computed once. Here is the base class:

#include <initializer_list>
#include <vector>

template<typename T>
struct types_t;

template<typename Derived>
class RecursionArray
{
    public:

        using value_type = typename types_t<Derived>::value_type;

        // A RecursionArray is not copyable
        RecursionArray(const RecursionArray&) = delete;
        RecursionArray& operator=(const RecursionArray&) = delete;

        /**
         * @brief Calls function and applies memoization
         * @see value_type self(size_t n)
         */
        inline auto operator()(std::size_t n)
            -> value_type
        {
            return self(n);
        }

    protected:

        RecursionArray() = default;

        /**
         * @brief Initializer-list constructor
         *
         * This should be the one and only way to instance a
         * RecursionArray.
         *
         * @param vals Results of function for the first values
         */
        RecursionArray(std::initializer_list<value_type> vals):
            _values(vals.begin(), vals.end())
        {}

        /**
         * @brief Calls function and applies memoization
         *
         * @param n Index of the value to [compute, memoize and] return
         * @return Value of function for n
         */
        auto self(std::size_t n)
            -> value_type
        {
            while (size() <= n)
            {
                // Compute and add the values to the vector
                _values.emplace_back(function(size()));
            }
            return _values[n];
        }

        /**
         * @brief Returns the number of computed elements
         * @return Number of computed elements in the vector
         */
        constexpr auto size() const
            -> std::size_t
        {
            return _values.size();
        }

        /**
         * @brief User-defined function whose results are stored
         *
         * This is the core of the class. A RecursionArray is just
         * meant to store the results of function and reuse them
         * instead of computing them another time. That is why a
         * RecursionArray function can only accept unsigned integers
         * as parameters.
         *
         * @param n Index of the element
         * @return See user-defined function
         */
        auto function(std::size_t n)
            -> value_type
        {
            return static_cast<Derived&>(*this).function(n);
        }

    private:

        // Member data
        std::vector<value_type> _values;    /**< Computed results of function */
};

It is a base class that uses static polymorphism. Here is an example of a user-defined derived class:

class MemoizedFibonacci;

/*
 * We need to tell the RecursionArray which
 * kind of data it has to store.
 */
template<>
struct types_t<MemoizedFibonacci>
{
    using value_type = unsigned int;
};

/**
 * @brief Fibonacci function class
 *
 * A way to implement the Fibonacci function and to force it
 * to store its results in order to gain some speed with the
 * following calls to the function.
 */
struct MemoizedFibonacci:
    RecursionArray<MemoizedFibonacci>
{
    using super = RecursionArray<MemoizedFibonacci>;
    using typename super::value_type;

    /**
     * @brief Default constructor
     *
     * To use a Fibonacci function, we need to know at least
     * its first two values (for 0 and 1) which are 0 and 1.
     * We pass those values to the RecursionArray constructor.
     */
    MemoizedFibonacci():
        super( { 0, 1 } )
    {}

    /**
     * @brief Fibonacci function
     *
     * Fibonacci function considering that the first values are
     * already known. Also, self will call function and
     * memoize its results.
     *
     * @param n Wanted Fibonacci number
     * @return nth Fibonacci number
     */
    auto function(std::size_t n)
        -> value_type
    {
        return self(n-1) + self(n-2);
    }
};

And finally, here is how we can use the user-defined class:

#include <iostream>

int main()
{
    MemoizedFibonacci fibonacci;

    // The Fibonacci numbers up to the nth are computed
    // and stored into the RecursionArray
    std::cout << fibonacci(12) << std::endl; // 144
    std::cout << fibonacci(0) << std::endl;  // 0
    std::cout << fibonacci(1) << std::endl;  // 1
    std::cout << fibonacci(25) << std::endl; // 75025
}

The problem is that I want to keep the user side simple, but also to avoid virtual (hence the static polymorphism). There are three main functions in the class:

* function: the recursive formula.
* operator(): so that the end user can use the final instances as functions.
* self: helper function (same as operator()) so that the recursive formula is easier to write.

It would be great if the user did not have to specialize types_t and could just write MemoizedFibonacci, but I can't seem to find a way to do so. Do you know whether there would be some way to ease the functor writer's work? | Optimize RecursionArray interface | c++;c++11;recursion;cache;fibonacci sequence | null
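Side note for comparison (not C++ advice): the same store-the-sequence-in-a-growable-array idea is only a few lines in a dynamic language, which can serve as an executable model when reworking the C++ interface. The names below are invented:

```python
class RecursionArray:
    """Memoized recursive sequence: base values plus a step formula."""

    def __init__(self, base, step):
        self._values = list(base)   # results computed so far
        self._step = step           # step(self, n) -> nth value

    def __call__(self, n):
        # grow the memo vector up to index n, computing each value once
        while len(self._values) <= n:
            self._values.append(self._step(self, len(self._values)))
        return self._values[n]

fibonacci = RecursionArray([0, 1], lambda self, n: self(n - 1) + self(n - 2))
```

Here the "derived class plus types_t specialization" boilerplate collapses into a constructor taking the base values and the formula, which is roughly the user-side simplicity the question is after.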
_codereview.158103 | I'm working on an exercise using Assembly 8086 which inputs a number (in string form) then output the binary form. Here is what I did (tested on emu8086 by the way):Convert the string to numeric form: I simply iterated through the string, and for each new value I simply multiply the old value by 10, then add by the new value. But there are some checks for overflow which was annoying: I have to check in both the multiplication and addition part, and since in this part I simply ignored the negative part (for simplicity) I had to check for a special case: a 16-bit signed number goes from \$-2^{15} = -32768\$ to \$2^{15} - 1 = 32767\$, so if the input is -32768 then there would be an error, so I had to check for that case specifically.From numeric to binary: I simply left-shifted the number (which is now stored in AX) consecutively and printed the most significant bit. Please give some feedback, because I'm not sure if the overflow part is ok.;Written by Dang Manh Truong.stack 100h.database10_string dw -32799$biggest_16bits_signed equ 7FFFhspecial_16bits_signed equ 8000hspecial_16bits_signed_str dw 1000000000000000$ error_overflow dw Arithmetic overflow encountered. Abort$error_not_a_number dw Not a number. 
Abort$base10 equ 10 tmp dw 0is_negative dw 0.codemain proc;Initialization mov ax,@data mov ds,ax ;;;part 1: convert string to value mov ax,0 ;number = 0; lea si,base10_string;check if positive or negative mov bl,[si] ;bl = value cmp bl,'+' jne check_if_negative add si,1 ;start from after the + sign mov ax,0 jmp while_loop check_if_negative: cmp bl,'-' jne check_if_not_a_number mov ax,1 mov is_negative,ax mov ax,0 add si,1 ;start from after the - sign jmp while_loopcheck_if_not_a_number: cmp bl,'0' jge keep_checking jmp not_a_numberkeep_checking: cmp bl,'9' jle while_loop jmp not_a_numberwhile_loop: mov bl,[si] ;bl = value ;check if end of string cmp bl,'$' je end_while ;check if not a numbercheck_: cmp bl,'0' jge keep_checking_ jmp not_a_numberkeep_checking_: cmp bl,'9' jle add_to_number jmp not_a_numberadd_to_number: and bx,000Fh ;'0'-> 0, '1' -> 1,... mov dx,base10 ;try mul dx ;number = number*10 ;catch (Exception ArithmeticOverflow) jo overflow ;try add ax,bx ;number = number*10 + value ;catch <Exception ArithmeticOverflow) ;jo overflow cmp is_negative,1 jne _perform_check cmp ax,special_16bits_signed jne _perform_check jmp _aftercheck_overflow_perform_check: ;jo overflow cmp ax,0 jl overflow_aftercheck_overflow: add si,1 ;next value jmp while_loopend_while: ;is negative? cmp is_negative,1 jne begin_part2 neg axbegin_part2: ;;;part 2: print base-2 number mov cx,16 mov bx,ax mov ah,2print_: cmp cx,0 je after_print shl bx,1 ;if bit == 1 jnc print_0 ;then print_1 mov dl,'1' int 21h jmp add_counter;else print_0print_0: mov dl,'0' int 21hadd_counter: dec cx jmp print_after_print: jmp return_to_dos overflow: lea dx,error_overflow mov ah,9 int 21h jmp return_to_dos not_a_number: lea dx,error_not_a_number mov ah,9 int 21h jmp return_to_dos return_to_dos: mov ah,4ch int 21hmain endp end main | Assembly 8086 program to input a 16-bit signed number (in string form) and output its binary equivalent | integer;assembly | null |
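A high-level reference model of the same algorithm can help check the overflow corner cases, in particular the special -32768 handling described above. This mirrors the logic as described, it is not generated from the assembly:

```python
def to_bin16(s):
    """Parse a decimal string as a signed 16-bit value and return its
    two's-complement bit string, mirroring the checks in the assembly:
    range is -32768..32767, so -32768 needs a special case."""
    s = s.strip()
    neg = s.startswith('-')
    if s[:1] in '+-':
        s = s[1:]
    if not s or any(ch not in '0123456789' for ch in s):
        raise ValueError('Not a number')
    n = 0
    for ch in s:
        n = n * 10 + (ord(ch) - ord('0'))
        # magnitude check after every multiply-and-add, like the assembly;
        # a magnitude of 32768 is only legal when the sign is negative
        if n > (32768 if neg else 32767):
            raise OverflowError('Arithmetic overflow')
    if neg:
        n = -n
    return format(n & 0xFFFF, '016b')
```

For instance, `to_bin16("-32768")` yields the special bit pattern 1000000000000000, while `to_bin16("32768")` correctly reports an overflow.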
_unix.51882 | I installed RadioTray in Lubuntu 12.04 (64bit) and it can't play radio streams.When a stream is started, it throws this notification: > Radio Error gstplaysink.c(1906): gen_audio_chain ():/> GstPlayBin2:player/GstPlaySink:playsink0I can play all those streams from audacious and the browser itself (chrome).I have been playing radio streams (shoutcast) for months and had been using streamtuner2 to find streams (but it has a sorting error that you cannot tell what stream will you end up listening to).I also can watch videos with vlc and in youtube, etc.Maybe it could be some plugins problem, but I can't find any logs. | RadioTray error: gstplaysink | streaming;plugin | Look here: http://linuxlubuntu.blogspot.co.uk/2011/02/radio-tray-installing.html |
_cstheory.17714 | In Williamson and Shmoys' textbook The Design of Approximation Algorithms they make the following assumption: We assume that there is some objective function mapping each possible solution of an optimization problem to some nonnegative value...Vazirani makes the same assumption:Each valid instance...comes with a nonempty set of feasible solutions, each of which is assigned a nonnegative rational number called its objective function value.What if my objective function can take negative values? Is there a standard way to deal with this? What if it is known that $-1$ is a lower bound? | Approximation factor when objective can be negative | approximation algorithms | null |
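One standard observation that shows why the nonnegativity assumption matters (stated here for completeness, not as a full answer): multiplicative guarantees are not preserved under shifting the objective. If a possibly-negative objective $f$ is made nonnegative by adding a constant $c$ (e.g. $c = 1$ when $-1$ is a lower bound), then, for a minimization problem, an $\alpha$-approximation for the shifted objective gives only

```latex
f(A) + c \le \alpha \bigl( f(\mathrm{OPT}) + c \bigr)
\quad\Longrightarrow\quad
f(A) \le \alpha\, f(\mathrm{OPT}) + (\alpha - 1)\, c,
```

so an additive error $(\alpha - 1)c$ appears that can dwarf $f(\mathrm{OPT})$, and when $f(\mathrm{OPT})$ can be negative the ratio $f(A)/f(\mathrm{OPT})$ stops being a meaningful quality measure at all. This is why those textbooks bake $f \ge 0$ into the definition.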
_codereview.94227 | Many numbers can be expressed as the sum of a square and a cube. Some of them in more than one way. Consider the palindromic numbers that can be expressed as the sum of a square and a cube, both greater than 1, in exactly 4 different ways. For example, 5229225 is a palindromic number and it can be expressed in exactly 4 different ways:

2285^2 + 20^3
2223^2 + 66^3
1810^2 + 125^3
1197^2 + 156^3

Find the sum of the five smallest such palindromic numbers.

I used brute force; it solves in around 11 seconds. I thought about it for some time but couldn't come up with any obvious optimizations; the problem seems built for brute force. Any suggestions? Note that I'm using PyPy, which makes math code run much faster...

"""Find the sum of the five smallest palindromic numbers
that can be expressed as the sum of a square and a cube."""
from math import log10
from collections import Counter
from timeit import default_timer as timer

def is_palindrome(n):
    length = int(log10(n))
    while length > 0:
        right = n % 10
        left = n / 10**length
        if right != left:
            return False
        n %= 10**length
        n /= 10
        length -= 2
    return True

start = timer()
palindromes = Counter()
for square in xrange(1, 30000):
    squared = square**2
    for cube in xrange(1, 3000):
        cubed = cube**3
        total = squared + cubed
        if is_palindrome(total):
            palindromes[total] += 1
ans = sum(x[0] for x in palindromes.most_common(5))
elapsed_time = timer() - start
print "Found %d in %d s." % (ans, elapsed_time)
 | Project Euler 348: Sum of a square and a cube | python;performance;programming challenge;palindrome | null
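Two brute-force-friendly tweaks worth measuring (hedged suggestions, not guaranteed wins under PyPy): a string-reversal palindrome test, and inverting the loop so that each candidate number is asked directly for its number of square-plus-cube representations. A sketch (Python 3 here, unlike the Python 2 code above):

```python
def is_palindrome_str(n):
    # often faster than repeated division/modulo in CPython
    s = str(n)
    return s == s[::-1]

def square_cube_representations(n):
    """Count ways n = a^2 + b^3 with a > 1 and b > 1 (brute force)."""
    count = 0
    b = 2
    while b ** 3 < n:
        rest = n - b ** 3
        a = int(rest ** 0.5)
        for cand in (a - 1, a, a + 1):   # guard against float rounding
            if cand > 1 and cand * cand == rest:
                count += 1
        b += 1
    return count
```

`square_cube_representations(5229225)` returns 4, matching the four decompositions listed in the problem statement, so the search can walk candidates (or generated palindromes) and keep those whose count is exactly 4.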
_softwareengineering.206989 | I have a class called Timeline. I want to allow several defaults in my code, such as a Timeline with a Start event. (The details are not needed. All that matters is that I have a class, and I want to be able to have a few different default settings.) Here are the options I thought of:

1) Following a GUI/Swing-inspired idea, I might subclass the Timeline class and, in its constructor, use the public functions to set the defaults. Then, I could later just instantiate those objects instead of the superclass.

2) A factory class could have functions like getTimeLineWithExplosion and set everything up.

Which of those two is the better idea? Is there a design pattern better than both of them?

PS: This is implemented in Java, but could easily be in another language. | Subclassing to change default settings? | java;design;design patterns;object oriented | About your options: I don't think subclassing here is a good idea. As I see it, you don't follow the is-a relationship here. If they have slightly different defaults, then they are different objects of the same class, not different classes, since they don't have different behavior. The factory option looks much better. You don't need to create a new class for each new set of defaults you want, and the factory abstracts the underlying logic nicely. I propose another idea: implement something like the Builder pattern, to achieve constructions like this:

createTimeline().withStartEvent().build()

or

createTimeline().withStartEvent().withAnotherDefault(def).andWithSomeAddition(42).build()

I don't know Java well, but I'm sure this could be implemented; it won't be significantly harder than the factory, but readability would be better (as for me). If Java allowed a way to declare implicit type conversions, you could even end up without that build() call.
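A compilable sketch of the suggested builder; every name here is invented for illustration, and the real Timeline API will differ:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Timeline holding a list of events.
class Timeline {
    private final List<String> events = new ArrayList<>();
    void addEvent(String name) { events.add(name); }
    List<String> events() { return events; }
}

// Fluent builder: each with* method configures one default and returns
// the builder, so defaults compose freely in a single expression.
class TimelineBuilder {
    private final Timeline timeline = new Timeline();

    static TimelineBuilder createTimeline() { return new TimelineBuilder(); }

    TimelineBuilder withStartEvent() { timeline.addEvent("Start"); return this; }

    TimelineBuilder withEvent(String name) { timeline.addEvent(name); return this; }

    Timeline build() { return timeline; }
}
```

Usage then reads exactly like the answer's example: `TimelineBuilder.createTimeline().withStartEvent().withEvent("Explosion").build()`. Compared with one factory method per combination, each new default becomes one new with* method rather than a combinatorial set of getTimelineWithX methods.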
_reverseengineering.8902 | I have a packed executable file. I want to detect the compression or encryption algorithm in dynamic and static ways, treated separately; of course, without signature-based methods. How can I detect the compression or encryption algorithm used for packing an exe file, both dynamically and statically (dynamic detection methods and static detection methods, each without signature-based approaches)? I do not want any tools; I want methods for a paper. | identify packer compression or encryption algorithm | static analysis;dynamic analysis;executable;encryption;packers | null
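Since no answer was recorded: one classic non-signature static heuristic is byte-level Shannon entropy over file regions. Packed or encrypted sections typically score close to 8 bits/byte, while plain machine code sits noticeably lower. A minimal sketch (the cut-off thresholds used in practice are an empirical choice, not given here):

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Bits per byte of a byte string; 8.0 means indistinguishable from random."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def section_entropies(blob, window=4096):
    """Entropy per fixed-size window -- a crude stand-in for PE sections."""
    return [shannon_entropy(blob[i:i + window]) for i in range(0, len(blob), window)]
```

This only flags that a region is compressed/encrypted, not which algorithm produced it; identifying the algorithm itself usually needs the dynamic side (observing the unpacking stub decompress the data).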
_unix.14045 | I want to search for and remove a block of text from several files. The block of text to be matched against is in a file, say /home/user/myblock.txt. I want to parse the directory /home/user/rep and remove the content of myblock.txt from all the files in the directory. | Search & remove text in files from a source file | text processing | null
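No answer was recorded; a hedged sketch of one way to do it (exact-match removal only, and the paths mirror the ones in the question):

```python
import pathlib

def remove_block(directory, block_file):
    """Delete every exact occurrence of block_file's contents from each
    regular file directly inside directory (no recursion)."""
    block = pathlib.Path(block_file).read_text()
    for path in pathlib.Path(directory).iterdir():
        if path.is_file():
            text = path.read_text()
            if block in text:
                path.write_text(text.replace(block, ""))

# Example: remove_block("/home/user/rep", "/home/user/myblock.txt")
```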
_webmaster.31064 | I am working on a plattform that will have a lot of users in the so called developing countries. So many of them will be using old computers and old browsers in tiny internet cafes.We want to make sure to give them a good user Experience and make sure the website loads as fast as possible.Problem is, that while you can save a lot of requeasts and time, using jQuery/AJAX, it also brings along a lot of Problems:- Will the Computers be powerfull enough to deal with the client side scripts?- Will the old Browsers handle jQuery?Does anyone have any experience with these sort of problems or might know of some sort of article on the topic? | jQuery/AJAX on old Computers/Browsers | performance;jquery;cross browser;user friendly;browser support | Depending on the country, the browser usage is very different. In China, for instance, there is 22%+ IE6. When developing for old browsers, keep in mind that even if they support ajax and other fancy javascript stuff, they probably do it a lot slower than more modern browsers. We did a comparison for a customer project some time ago. IE6, compared to a Firefox 4 was more than ten times slower in javascript execution speed, so we had to add a lot of tweaks (for instance do not add js click handlers on dom ready event, but on mouse over of each element seperately). It's also a good idea to show loading/waiting indicators to the user during expensive js tasks to prevent further clicking which will again cost performance. |
_unix.358799 | In Windows, when using the CLI, only one program is used (cmd.exe). You send the input to cmd.exe, and cmd.exe in turn sends you the output (that is, it displays the output on the screen). But in Linux, there are two programs that are used: the Terminal and the Shell. You send the input to the Terminal (for example: gnome-terminal), and gnome-terminal in turn sends this input to the Shell (for example: bash); then bash sends the output to gnome-terminal, and gnome-terminal in turn sends you the output. My question is: why are the Terminal and the Shell two separate programs in Linux, and not one program like in Windows? | Why are the Terminal and the Shell two separate programs in Linux? | shell;terminal | null
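A quick way to see the separation on any Linux box (a hedged sketch; the parent name depends on how the shell was started: under SSH it will be sshd rather than a terminal emulator):

```shell
# $$ is the current shell's PID; its parent is whatever program spawned it
# (gnome-terminal, xterm, sshd, ...). Two processes, two programs.
shell_name=$(ps -o comm= -p $$)
parent_pid=$(ps -o ppid= -p $$ | tr -d ' ')
parent_name=$(ps -o comm= -p "$parent_pid")
echo "shell:  $shell_name"
echo "parent: $parent_name"
```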
_softwareengineering.262831 | I am trying to package a Python module for pip, following the guide here. One area I would like feedback on is best practices or conventions for making my module configurable. The module is a library for talking to federated RESTful services / local datastores, and we use it both as a command-line library and as a Django app. So I would like it to be manually configurable as well as configurable from Django's settings.py file. Our current method of doing this is horrible -- it relies on a set of settings.py files inside of the library, which are overwritten whenever we pull from git, are not changeable at runtime, etc. My idea for a solution is to wrap the entire library in its own class, and in the __init__ method, do something like:

try:
    from django.conf import settings
    self._remote_host = settings.REMOTE_HOST
except:
    self._remote_host = DEFAULT_HOST

But that only seems to take care of Django and seems cumbersome -- what if someone wants to use our library with Flask, or something else? Is there a more universal way to make a Python module configurable by external tools plus have a default? Or is this a lost cause, and I should stick to configuration on init via arguments? | How to handle configuration of Python modules, especially when used standalone and in frameworks like Django | python;django;configuration | There are two problems that come together here:

- Your package is a singleton although there exist many different configurations.
- Where to store the data and how to migrate it - which database to use.

Singleton

I did a fun project with lots of modules which needed configuration themselves. The project was intended to run once per computer. The alternative is to put everything into classes and instantiate them with a configuration. This would allow many different configurations to exist within one program.
If you find you have this need, you should restructure your whole code.

Database

Maybe your project is for a company and shall be developed for a longer time, and the configuration shall not be thrown away. Then you may need to keep in mind many previous configurations when changing the default values and updating the configuration. Proper databases have solved that problem.

My tradeoff solution

In my case:

- The configuration can be thrown away if my model changes.
- The package runs not only once per process but also once per computer.
- Only one thread accesses the configuration.

There is a module called config.py with the following methods:

import config
config.load()
config.save()

You use it like this (Example1):

config.load()
config.my_value = 'test'
config.save()

There is also a file called constants.py, which would better be called default_config.py. It has the functions

import constants
constants.default_configuration()
constants.config_file_name() # where to store the config data

And for (Example1) the constants.py should look like this:

def default_configuration():
    return {'my_value' : 'default'} # to avoid attribute errors

The config module saves and loads the configuration using pickle. If no configuration is found for a variable, the default configuration is used. You will need a coding style that always fetches the configuration from config.py. It should not be stored in a local variable or attribute for a longer time, since it can change.

My previous version is called runningConfiguration. It has no explicit load and save and also no default attributes.
_unix.196636 | I want to configure a locally installed, unpackaged application to be started via the applications menu in Debian. It should be independent of the desktop environment and visible to all users. I don't want to install any Gnome or KDE dependencies to perform this simple task. Which command is better suited to install desktop files in my case? Possible candidates: xdg-desktop-icon, desktop-file-install. What is the difference between these? | System-wide desktop file installation for unpackaged software | debian;xdg | null
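For reference, a minimal sketch (paths and names are placeholders): a .desktop entry written by hand and installed with desktop-file-install, which comes from the desktop-file-utils package and pulls in no GNOME/KDE desktop stacks.

```shell
# Minimal, DE-independent launcher entry (Exec path is a placeholder).
cat > myapp.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=MyApp
Exec=/opt/myapp/bin/myapp
Categories=Utility;
EOF

# Validate and install system-wide (visible to all users); needs root:
#   desktop-file-validate myapp.desktop
#   desktop-file-install --dir=/usr/share/applications myapp.desktop
```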
_unix.23348 | There has been a similar question - but IMHO there has to be a simpler solution. If num-lock is on in the BIOS - why is it turned off during linux boot and/or KDE/Gnome/whatever startup? | Enable num-lock as default in Linux | linux;boot;keyboard | Linux initializes most peripherals so that they'll be in a known state. This includes the keyboard: Linux's internal data about the keyboard had better match the LEDs, so what Linux does is to turn off the LEDs (as far as I recall, the CPU can't read the state of the LEDs on a PC keyboard) and declare all *Lock to be off.I like to have NumLock on by default. For Linux text consoles, what I used to do is to runfor t in /dev/tty[0-9]*; do setleds -D +num <$t; donefrom a boot script (/etc/rc.local or /etc/init.d/50_local_setleds or wherever the distribution likes to put those).Nowadays, at least on some distributions such as Debian, you can add LEDS=+num to /etc/console-tools/config (or /etc/kbd/config depending on which one you have).The X window system has its own keyboard handling, so you need to deal with it separately. What I do is to switch caps lock permanently off (I don't have a Caps Lock key in my layout) and switch num lock permanently on (I don't have a Num Lock key in my layout, and the keypad keys send KP_1 and so on). If you want to retain the modifiers but make Num Lock default on, you can write a small program to call XKbLockModifiers to set the modifier inside X and XChangeKeyboardControl to set the physical LED. Used to, because I haven't bothered with text consoles in a while. |
_unix.305881 | I need to change the system serial number shown in the output of dmidecode -t system. Please note that I'm running RedHat in a virtual machine in VMware Fusion. Is it possible to do it? | Is it possible to change the serial number of a VM in VMware Fusion? | rhel;hardware;vmware;system information;dmidecode | null
_softwareengineering.146720 | I do not know too many things about databases, so a lot of questions concerning architecture came up lately. Two of these things are: If I have a table with a lot of entries (millions, probably), how can I make the select queries faster? I thought about sorting the table alphabetically and then splitting it in two, but that doesn't seem to make things easier for me. Do you have any suggestions? I have a table user and one message. In message I should have sender_id and receiver_id. From what I know, I can't make them both foreign keys for user, so I have to pick one of them. Doesn't this, however, lead to data inconsistency (which, as far as I know, is bad)? What is the right approach here? I do not think that it matters, but I use MySQL 5.5. | Database architecture decisions | database design | Add indexes on the columns used in the WHERE clauses. In an extreme case, use sharding, but that's if you're talking about billions of rows (at least), not millions. Additionally, for complex queries with multiple joins and subqueries, the structure of the query can make a big difference. The query optimizer of the DB should take care of that, but sometimes it messes up or needs more information (e.g. updated DB statistics). You really need an experienced DBA at that point. And of course the structure of the DB also has a big effect. Sometimes you have to denormalize, or just fix a fundamentally broken schema. You most definitely can have foreign key constraints on multiple columns of a table that all refer to the same column in a different table. So do that.
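To make the answer's two points concrete - an index on the columns used in WHERE clauses, and two foreign key columns on message that both reference user - here is a minimal runnable sketch. It uses Python's sqlite3 (not the asker's MySQL 5.5) purely for self-containment, and the column names are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs this per connection

# Two FK columns on "message" may both reference "user" -- there is no
# need to pick just one; each column gets its own constraint.
conn.executescript("""
CREATE TABLE user (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE message (
    id          INTEGER PRIMARY KEY,
    sender_id   INTEGER NOT NULL REFERENCES user(id),
    receiver_id INTEGER NOT NULL REFERENCES user(id),
    body        TEXT
);
-- Index the columns used in WHERE clauses to speed up selects.
CREATE INDEX idx_message_sender   ON message(sender_id);
CREATE INDEX idx_message_receiver ON message(receiver_id);
""")

conn.execute("INSERT INTO user (id, name) VALUES (1, 'alice'), (2, 'bob')")
conn.execute(
    "INSERT INTO message (sender_id, receiver_id, body) VALUES (1, 2, 'hi')")

# A message referencing a missing user is rejected -- exactly the
# consistency the two FK constraints buy you.
try:
    conn.execute(
        "INSERT INTO message (sender_id, receiver_id, body) VALUES (1, 99, 'x')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The same schema shape carries over to MySQL; only the index/constraint syntax details differ.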
_softwareengineering.333598 | I am developing an application based on a microservices architecture with Spring Boot. Besides my application jar I have a dashboard application jar too. I want to show some metrics in both my base application and the dashboard. I have some other services too, so I have decided to make an individual microservice for metrics. I know that I can take 2 approaches. One of them is PubSub and the other one is a Queue, as described here: simple metrics collector. My question is: I have 2 kinds of metrics. Some of them need to be real time (memory consumption, nodes are down, etc.) and some of them do not (how many API calls are made, etc.). At first I will not have 4 servers, so I don't need a complex system and won't get congestion. Which approach is suitable for my needs? | Metrics Collector at Microservices | microservices;message queue;metrics;pubsub;analytics | null
_webmaster.103278 | Hi, I have the following site: www.criticalpowersupplies.co.uk. Google is not indexing all the pages: 97,000 pages in all, but only 279 indexed, down from 3,000 in September 2016. Any ideas why? | Google Indexing pages dropped massively | google;sitemap;indexing;google index;ranking | null
_webapps.101504 | I am on Gmail and would like to know how to search for a particular email sender so I can then review all messages over time from that sender. | Search for a sender on Gmail | gmail;gmail search | null
_unix.322598 | There are two steps; the first is to check whether the host exists in the file. Given the file: # FILE: /etc/hosts: 192.168.0.10 srv-db srv-db-home 192.168.0.15 srv-db1 192.168.0.20 srv-db-2 192.168.0.20 srv-db-work srv-db- I would execute: sed -n -e '/\<srv-db\>/p' /etc/hosts which results in: 192.168.0.10 srv-db 192.168.0.20 srv-db-2 192.168.0.20 srv-db- It ignores the number at the end just fine... but it does not ignore the dash (-). I have found a bunch of regexes that work partially, such as these two: Match exact string using grep and How to match exact string using `sed`?, but none that matches only the exact token. The answer below from @steeldriver helped with that: awk '$NF == "srv-db"' /etc/hosts But when going on to update the IP address, it becomes fuzzy; here is the full code I have come up with: sed -i -e "s/\(^ *[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\)\(\s*\)\<srv-db\>/192.168.9.9\2/" /etc/hosts It should update only the first line, but instead it updates the same lines as above. | Match EXACT string in file and update IP Address | sed;scripting;ip | The \> word boundary anchor treats - as a boundary. If you know that there is no trailing whitespace, you could anchor to the end of line: /\<srv-db$/ - to allow for trailing whitespace you could modify that to /\<srv-db[[:blank:]]*$/ or /\<srv-db[[:space:]]*$/ Or use awk and do a string match instead: awk '$NF == "srv-db"' /etc/hosts If you can't necessarily anchor to the end of line (even with trailing whitespace) then you will need to construct an expression that (for example) tests for whitespace or end of line (i.e. a word boundary without hyphen), e.g. (using extended regular expression mode) sed -En -e '/\<srv-db(\s|$)/p' If you can use perl (or grep in PCRE mode) then there are possibly more elegant options such as perl -ne 'print if /\bsrv-db(?![\w-])/' or grep -P '\bsrv-db(?![\w-])'
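The negative-lookahead trick from the perl/grep commands above carries over to Python's re module too; a small sketch, with the sample hosts lines inlined as data:

```python
import re

# Sample /etc/hosts lines from the question
lines = [
    "192.168.0.10 srv-db srv-db-home",
    "192.168.0.15 srv-db1",
    "192.168.0.20 srv-db-2",
    "192.168.0.20 srv-db-work srv-db-",
]

# \b anchors the start of the token; (?![\w-]) rejects a following
# letter, digit, underscore or hyphen, so only the bare srv-db matches.
pattern = re.compile(r"\bsrv-db(?![\w-])")

matches = [line for line in lines if pattern.search(line)]
print(matches)  # ['192.168.0.10 srv-db srv-db-home']
```

The lookahead is exactly why srv-db1, srv-db-2 and srv-db- are excluded while the whole-word srv-db token still matches.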
_unix.66160 | I want Unison to show the diffs with colordiff. I've got diff = colordiff -u CURRENT2 CURRENT1 | less -R in my config file, but it seems like Unison pipes the output to plain less nonetheless, so I get raw escape sequences instead. | Using Unison with colordiff | diff;less;unison | null
_hardwarecs.2518 | I am looking for a black/white laser printer. Requirements: no requirement for proprietary / non-free drivers; no requirement for a proprietary printer plugin; Linux compatible using Wi-Fi / LAN. Bonus: black/white sufficient (color is needlessly more expensive); Android compatible; Wi-Fi; USB; scanning. | Linux Wi-Fi network compatible laser printer | linux;wifi;printer;laser | null
_webmaster.15591 | I have changed the DNS for my domain. What code (or header) should I use on my old server to tell the visitor's browser or ISP that it should check for my new DNS and that the current content is old? Would a temporary redirect to a subdomain help? Or do you know a better way? | How to force browsers/ISPs to look for my new DNS? | browsers;dns;domains;isp | null
_hardwarecs.5444 | I am looking for a laptop-sized keyboard (similar to those on Dell 15-16 laptops) that would fit in my backpack. I am looking for recommendations for a wireless backlit keyboard which also supports Bluetooth, as I wish to use it with my tablet too. A combo package with a mouse would be nice, as I need a mouse to control my laptop when connected to an external monitor. An integrated touchpad would be preferable and would eliminate the need for the mouse (tough already?). A plus, if it exists, would be USB-wired functionality which allows it to be used with a USB cable (and charge too?). Preference is for one without a keypad, but I would also consider ones with it. Asking for too much? Update: I decided to go with cjm's recommendation of the Drevo 71 mechanical keyboard (black with blue switch). Though its Bluetooth performance was a concern, as reported by some reviewers, it was the only option that fits all criteria. I also ordered the TeckNet BM306 Souris Bluetooth mouse. I will report on the recommendation once I receive it. | Laptop-sized Keyboard with Bluetooth, Wireless, & Wired | keyboards | If you are looking for a mechanical keyboard, this Drevo fits all your criteria: it has no number pad, it has backlights, and it's Bluetooth-compatible.
_codereview.33128 | I am wondering if I can make these classes a bit more efficient. Test Results: Single Run Method 1: 5 Columns - Text Query - 81178 Records = 00:00:00.6390366 secs Method 2: 5 Columns - Text Query - 81178 Records = 00:00:00.5360307 secs 10 Run Loop Method 1: 5 Columns - Text Query - 81178 Records = 00:00:05.3253045 secs Method 2: 5 Columns - Text Query - 81178 Records = 00:00:05.0912912 secs 100 Run Loop Method 1: 5 Columns - Text Query - 81178 Records = 00:00:54.1270959 secs Method 2: 5 Columns - Text Query - 81178 Records = 00:00:53.8710813 secs All 3 attempts for both methods never peak over 25% CPU usage. As you can see, there really is no significant improvement of either method over the other, and method 2 (judging by CPU usage) does not seem to multi-thread. I am thinking that if I could get rid of my usage of reflection to map the columns to strongly-typed classes, it would give a significant boost to both methods' performance, and I am sure that I can make improvements to the asynchronicity of method 2 as well... I just don't know how. WrapperTest.cs: private static IList<T> Map<T>(DbDataReader dr) where T : new() { try { // initialize our returnable list List<T> list = new List<T>(); // fire up the lambda mapping var converter = new Converter<T>(); // read in each row, and properly map it to our T object var obj = converter.CreateItemFromRow(dr); // return it return list; } catch (Exception ex) { // Catch the exception, if any, and write it out to our logging mechanism, in addition to adding it to our returnable message property _Msg += "Wrapper.Map Exception: " + ex.Message; ErrorReporting.WriteEm.WriteItem(ex, "o7th.Class.Library.Data.Wrapper.Map", _Msg); // make sure this method returns a default List return default(List<T>); } } This is a continuation of this question. The code above definitely runs more efficiently now. Is there any way to make it better?
| DAL mapping efficiency | c#;performance;sql;sql server;asynchronous | Use LINQ expression compilation to generate mapping code at runtime. The concept is to generate a method that does obj.Property1 = dataReader["Property1"]; ... dynamically. public class Converter<T> where T : new(){ private static ConcurrentDictionary<Type, object> _convertActionMap = new ConcurrentDictionary<Type, object>(); private Action<IDataReader, T> _convertAction; private static Action<IDataReader, T> GetMapFunc() { var exps = new List<Expression>(); var paramExp = Expression.Parameter(typeof(IDataReader), "dataReader"); var targetExp = Expression.Parameter(typeof(T), "target"); var getPropInfo = typeof(IDataRecord).GetProperty("Item", new[] { typeof(string) }); foreach (var property in typeof(T).GetProperties()) { var getPropExp = Expression.MakeIndex(paramExp, getPropInfo, new[] { Expression.Constant(property.Name, typeof(string)) }); var castExp = Expression.TypeAs(getPropExp, property.PropertyType); //var bindExp = Expression.Bind(property, castExp); var bindExp = Expression.Assign(Expression.Property(targetExp, property), castExp); exps.Add(bindExp); } return Expression.Lambda<Action<IDataReader, T>>(Expression.Block(exps), new[] { paramExp, targetExp }).Compile(); } public Converter() { _convertAction = (Action<IDataReader, T>)_convertActionMap.GetOrAdd(typeof(T), (t) => GetMapFunc()); } public T CreateItemFromRow(IDataReader dataReader) { T result = new T(); _convertAction(dataReader, result); return result; }} Test method with 80,000 x 100 iterations: static void Main(string[] args){ var dummyReader = new DummyDataReader(); var properties = typeof(DummyObject).GetProperties(); var startDate = DateTime.Now; var converter = new Converter<DummyObject>(); for (int i = 0; i < 80000 * 100; i++) { //var obj = CreateItemFromRow2<DummyObject>(new DummyDataReader()); var obj = CreateItemFromRow<DummyObject>(dummyReader, properties); //var obj = converter.CreateItemFromRow(dummyReader); 
dummyReader.DummyTail = i; } //var obj = CreateItemFromRow2<DummyObject>(new DummyDataReader()); Console.WriteLine("Time used : " + (DateTime.Now - startDate).ToString()); Console.ReadLine();} Result: CreateItemFromRow : 18.5 seconds Converter<T> : 7.3 seconds Map function: private static IList<T> Map<T>(DbDataReader dr) where T : new() { // initialize our returnable list List<T> list = new List<T>(); // fire up the lambda mapping var converter = new Converter<T>(); while (dr.Read()) { // read in each row, and properly map it to our T object var obj = converter.CreateItemFromRow(dr); list.Add(obj); } // return it return list; }
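The compile-the-mapping-once idea in the answer above is not C#-specific. As a loose Python analogue (class and column names are invented for illustration), you can build a per-type row mapper one time and reuse it for every row, instead of re-reflecting per row:

```python
class DummyObject:
    """Stand-in for the strongly typed row class; field names are invented."""
    __slots__ = ("Name", "Value")

def make_mapper(cls):
    # Resolve the field list once per type -- the moral equivalent of
    # compiling the LINQ expression once and caching it per T.
    fields = list(cls.__slots__)
    def map_row(row):
        # row is anything indexable by column name, e.g. a dict or sqlite3.Row
        obj = cls()
        for f in fields:
            setattr(obj, f, row[f])
        return obj
    return map_row

mapper = make_mapper(DummyObject)
obj = mapper({"Name": "a", "Value": 1})
print(obj.Name, obj.Value)  # a 1
```

The per-row cost is then only attribute assignment; all the type inspection happens once, in make_mapper.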
_softwareengineering.335840 | Currently we have 3 branches and trunk. If a developer makes a change in one branch, they alone are responsible for merging that change to the 3 other branches and eventually back to trunk. There's a published chart on the merge order that gets updated periodically. So, we have lots of little merges happening across branches. In the past, we have had one person cut branches and do the merging back, mainly from branch to trunk. If there is a conflict, the 2 or 3 developers who caused the file conflict would resolve it. The branching and merging strategy was handled by an individual and mainly involved bringing branches back to trunk. Is having individual developers merge typical? Or should it be handled by one person and coordinated at specific intervals to manage the branches? | Should developers merge code across branches? | version control;merging | What concerns me here is that you make it sound like anyone checking in code has to blindly check it in to multiple branches. That is wrong. Branches should exist for a reason. The event of checking in code should happen once and flow from there. Who gets it flowing isn't as important as that branches are kept meaningful and useful in each step.
_webmaster.44148 | I have a very selected group of users on my web application. Not everyone can just visit the website, and if they do they will need login information to see anything but the login box. The users have to contact in order to get an account.I am almost complete with my application, and it is not so friendly with ie, and has a few glitches with firefox. It works perfect with safari and chrome. Now, since I'm only going to have 100-200 users, is it OK to force users to use either safari or chrome? | Force users to use Google Chrome | users;google chrome | null |
_vi.5117 | I am using a recorded macro to manually wrap an extremely long string in a python script, like this:blah = ('blahblah ... blah blah')toblah = ('blahblah...' 'blahblah...' ... 'blahblah...')the macro is 4f,a''^[i^M^[This works fine, preserving my indentation right up to the point that the first line blah =... reaches the top of the page, then suddenly it starts dedenting to the first character of the line. How can I prevent this? i.e., how can I make vim recognize that the opening paren is above the top of the page?EDIT: OK, it's not the top of the page, but 50 lines above, and my window (at present) is 52 lines. | Parentheses off the page stop being recognized | indentation;macvim | null |
_unix.3800 | I have a specific MP4 video file that plays fine in MPlayer and VLC, but other programs on my Arch Linux box (Banshee, Gnome's Movie Player) are unable to play it. Most seem to treat it as having zero length. It also causes Gnome's Properties dialog to just display a dialog saying Creating Properties Window and never load. What is it about this file that causes it to do this? | All but MPlayer and VLC unable to play MP4 video? | gnome;video | This appears to be because of a misencoded file. Encoding with a different application than originally used did not have the same result. |
_softwareengineering.290951 | Although grammatically incorrect, when writing identifiers for functions, variables etc. does it make sense to simply append an s to plurals of words ending in Y? My reason for this would be that if you need to find-and-replace, for example, replacing company with vendor, company would match both singular and plural forms (company and companys), whereas if the plural was spelled correctly, you would have to do two separate searches. | Does it make sense to use ys instead of ies in identifiers to ease find-and-replace functionality? | grammar | null |
_softwareengineering.56940 | I have Visio 2010 and want to learn how to use it to document software. Does anyone have examples of what Visio does well?Are there any things that Visio can do, but there are better tools for the job?What are your experiences? | Any examples of using Visio to document software? | documentation;uml;database design;domain driven design;visio | null |
_webmaster.47974 | I've made a subsite which runs under mydomain.com/subsite - ideally, from an architecture point of view, it would be under a subdomain, but we wanted the links to come back to the main domain. The subsite has a completely different design and layout, and has a 404 page that is in keeping with the rest of it. How can I set the 404 for the /subsite subdirectory without affecting the root domain's 404 settings in the .htaccess file? | Separate 404 page for subdirectory | htaccess;404 | Very simple. Just create another .htaccess file in the /subsite directory, and set your ErrorDocument 404 in it to the subsite's 404 page, i.e. ErrorDocument 404 http://example.com/subfolder/alternate-404.html. The .htaccess in the parent folder will still be used for the main document root and all other subfolders.
_codereview.114642 | This feels wrong: private decimal ConvertStringToDecimal(string decimalString, decimal defaultReturnValue){ var returnDecimal = defaultReturnValue; try { returnDecimal = decimal.Parse(decimalString.Trim()); } catch (Exception) { // naughty } return returnDecimal;} but I think this is the only way to transform a string from a third-party system to a decimal whilst returning a default. Does anyone have any suggestions on how to improve this? | transform string to decimal | c# | The easiest and cleanest way would be to use decimal.TryParse(), like so: private decimal ConvertStringToDecimal(string decimalString, decimal defaultReturnValue){ decimal result; if (decimal.TryParse(decimalString.Trim(), out result)) { return result; } return defaultReturnValue;} Using exceptions to control the returned value isn't a good way, especially if there are better methods to use. Your indentation is off, but I assume that that's a posting issue.
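The answer's parse-or-default shape is easy to port to other languages. As a rough Python analogue (Python has no TryParse, so a narrow try/except over only the expected failures stands in for it; the function name is invented):

```python
from decimal import Decimal, InvalidOperation

def to_decimal(s, default):
    """Parse-or-default, mirroring the TryParse shape (name invented)."""
    try:
        return Decimal(s.strip())
    except (InvalidOperation, AttributeError):
        # InvalidOperation: not a valid number; AttributeError: s was None
        return default

print(to_decimal("  3.14 ", Decimal(0)))  # 3.14
print(to_decimal("oops", Decimal(0)))     # 0
```

The point carried over from the answer is that the except clause is narrow and intentional, not a blanket catch (Exception) that could hide unrelated bugs.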
_unix.234251 | I have approximately 400k lines of output to scroll in tmux. How can I speed up the pace of scrolling in copy mode? Alternatively, how can I transfer all the (already generated) output content to a file? | Is it possible to scroll faster than PgUp/PgDown in tmux? | tmux;scrolling | null
_webmaster.26281 | I currently force www. and also specify that in Google Webmaster Tools. I know it's recommended to just pick one or the other, and to not use both. But if I change everything to not have www, and I force that in the server configs, and I change it in Webmaster Tools, will it still have an effect on my rankings? | Will changing from www to non-www version affect my rankings? | google;seo | null
_cs.27667 | We have two computers, Comp1 and Comp2, which hold binary matrices A and B of size $n\times n$. We want to check whether the matrices on the two computers are identical except for exactly 1 entry. Comp1 has to send $O(n\log^2n)$ bits to Comp2, and Comp2 should return 0 or 1 (for no and yes respectively) as the answer. If the matrices differ in exactly 1 entry, Comp2 will always answer 1 (yes). If they don't differ in exactly 1 entry, Comp2 will answer 0 with probability at least $1/2$. Both computers can perform any manipulation of their matrix before sending a message. Comp1 of course doesn't have to send a part of its matrix; it can send the result of a computation on its matrix, and Comp2 can also make any calculation it wants before answering. I've tried to solve it, but whatever I tried failed. | Randomized Algorithm with matrices | algorithms;randomized algorithms;randomness | null
_softwareengineering.143036 | My time as a developer (~8 yrs) has been spent creating tooling/automation of one sort or another. The tools I develop usually interface with one or more APIs. These APIs could be Win32, WMI, VMware, a help-desk application, LDAP - you get the picture. The apps I develop could be just to pull back data and store/report it. It could be to provision groups of VMs to create live-like mock environments, update a trouble ticket, etc. I've been developing in .NET and I'm currently reading into design patterns and trying to think about how I can improve my skills to make better use of, and increase my understanding of, OOP. For example, I've never used an interface of my own making in anger (which is probably not a good thing), because I honestly cannot identify where using one would benefit later on when modifying my code. My classes are usually very specific and I don't create similar classes with similar properties/methods which could use a common interface (like perhaps a car dealership or shop application might). I generally use an n-tier approach to my apps, having a presentation layer and a business logic/manager layer which interfaces with layer(s) that make calls to the APIs I'm working with. My business entities are always just method-less container objects, which I populate with data and pass back and forth between my API-interfacing layer using static methods to proxy/validate between the front and the back end. My code, by the nature of my work, has few common components, at least from what I can see. So I'm struggling to see how I can better make use of OOP design and perhaps reusable patterns. Am I right to be concerned that I could be being smarter about how I work, or is what I'm doing now right for my line of work? Or am I missing something fundamental in OOP? EDIT: Here is some basic code to show how my mgr and API-facing layers work. 
I use static classes as they do not persist any data, only facilitate moving it between layers. public static class MgrClass{ public static bool PowerOnVM(string VMName) { // Perform logic to validate or apply biz logic // call APIClass to do the work return APIClass.PowerOnVM(VMName); }} public static class APIClass{ public static bool PowerOnVM(string VMName) { // Calls to 3rd party API to power on a virtual machine // returns true or false if it was successful, for example }} | As a tooling/automation developer, can I be making better use of OOP? | c#;.net;object oriented;design patterns;interfaces | Sounds like you might be missing a lot. Development patterns have been around for a long time and they really help with OOP. I would suggest reading about these patterns. For a start, have a look at SOLID - http://en.wikipedia.org/wiki/SOLID_(object-oriented_design) It covers 5 basic principles. With regards to interfaces, they are really useful. An interface defines the methods available and the concrete class provides the implementation. If you use interfaces you can easily swap classes to provide different functionality (without having to change the rest of the application). An example of that would be a data access layer. Imagine that you develop your application using LINQ to SQL. Some time later the requirement is to change the data provider to something else. Now, if that was running off an interface with a properly implemented Command Pattern - http://en.wikipedia.org/wiki/Command_pattern - this would be very easy. 
You would only have to create a new data access layer, implement the interface, and that's it. It's worth reading about the design patterns and all the features given by OOP. Hope that helps. UPDATE Based on your example, your APIClass already follows the single responsibility principle. The way you could implement the interface would be: public interface IVMManager{ bool PowerOnVM(string VMName); bool PowerOffVM(string VMName);} public class MgrClass { private IVMManager _vmManager; public MgrClass(IVMManager vmManager) { _vmManager = vmManager; } public bool PowerOnVM(string VMName) { // Perform logic to validate or apply biz logic // call the injected IVMManager to do the work return _vmManager.PowerOnVM(VMName); } } public class APIClass : IVMManager{ public bool PowerOnVM(string VMName) { //IMPLEMENTATION } } REMEMBER: static methods cannot be exposed via an interface! This example also follows the command pattern. The VM manager is easily swappable with a different implementation. You can also unit test this class. Hope that gives you an idea of how design patterns can be used and how powerful OOP is.
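The swap-the-implementation point in the answer above is language-agnostic. As a minimal Python sketch of the same idea (class names are invented for illustration), with an abstract base class standing in for the C# interface:

```python
from abc import ABC, abstractmethod

class VMManager(ABC):
    """The abstraction the manager depends on (Python's stand-in for IVMManager)."""
    @abstractmethod
    def power_on(self, vm_name: str) -> bool: ...

class ApiVMManager(VMManager):
    def power_on(self, vm_name):
        # a real third-party API call would go here
        return True

class FakeVMManager(VMManager):
    """A test double: swapping implementations needs no change to Mgr."""
    def power_on(self, vm_name):
        return vm_name == "ok"

class Mgr:
    def __init__(self, vm_manager: VMManager):
        self._vm = vm_manager  # depend on the abstraction, not a concrete class
    def power_on(self, vm_name):
        # validation / business logic would go here
        return self._vm.power_on(vm_name)

print(Mgr(ApiVMManager()).power_on("vm1"))   # True
print(Mgr(FakeVMManager()).power_on("vm1"))  # False
```

Because Mgr only sees the abstraction, replacing the real API-backed implementation with a fake for unit tests (or with a different vendor's API later) requires no change to Mgr itself.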
_unix.184276 | I have the following field names: demelog voyapro naisjdf naismc decoide decoccm travide travccm equiccm mariccm Below is a sample of my file. There are more columns before the first fck=83... fck=83;fcv=naismc;fcv=naisjdf;fck=83;fmd=1422811694,;fmd=1422811694; fck=83;fcv=demelog;fck=83;fmd=1423134370; fck=83;fcv=demelog;fck=83;fmd=1422292546; fck=83;fcv=demelog;fck=83;fmd=1421774352; fck=83;fcv=decoccm;fck=83;fmd=1422853444; fck=83;fcv=voyapro;fck=83;fmd=1422270462; fck=83;fcv=voyapro;fcv=demelog;fck=83;fmd=1422183999,;fmd=1422206234,; Starting from fck=83, as you can see, I can have for line 2 fcv=demelog and the relevant fmd 1423134370, or for line 7, fcv=voyapro and fcv=demelog and the relevant fmd's as well: fmd=1422183999, and fmd=1422206234; You do remember the first line: demelog voyapro naisjdf naismc decoide decoccm travide travccm equiccm mariccm ? My aim is to have the following format (I will use line 2 and line 7 as the example): line 2 fck=83;fcv=demelog;;;;;;;;;fck=83;fmd=1423134370;;;;;;;;; line 7 fck=83;;fcv=voyapro;;;;;;;;;fck=83;;fmd=1422270462;;;;;;;;; As you can see, I have added extra columns for fck=83, both for the fcv and the fmd, related to this: demelog voyapro naisjdf naismc decoide decoccm travide travccm equiccm mariccm I thought of doing it with awk, sed or python, even if I don't have a clue how to do it with python, awk or sed. Maybe I can add the demelog voyapro naisjdf naismc decoide decoccm travide travccm equiccm mariccm list in a separate file and then do a search with an index. If the string is here, I'm doing nothing. 
If it is not here, I'm adding an extra column. Any help is welcome, as I'm confused about where I should go, technically speaking, and I'm starting to pull my hair out. Update: this is my attempt below in python:

import re
word_list = ['fcv=demelog','fcv=voyapro','fcv=naisjdf','fcv=naismc','fcv=decoide','fcv=decoccm','fcv=travide','fcv=travccm','fcv=equiccm','fcv=mariccm']
regex_string = "(?<=\W)(%s)(?=\W)" % ";".join(word_list)
find = re.compile(regex_string)
with open("idcacf_v5.txt", "r") as myfile:
    data = myfile.read().replace('\n', '')
finder = re.compile(regex_string)
string_to_be_searched = data
results = finder.findall("%s" % string_to_be_searched)
result_set = set(results)
for word in word_list:
    print("%s in string" % word)

As you can see, I need 2 things: first, to be able to index; second, I need to be able to replicate what I have done with the fcv in terms of order, and apply the same order to the fmd=timestamp of that line. | how can I add an extra character after a word search | bash;sed;awk;python;perl | Here's something I cobbled up using the CSV module:

#!/usr/bin/env python3
import csv, sys

word_list = ['fcv=demelog','fcv=voyapro','fcv=naisjdf','fcv=naismc','fcv=decoide','fcv=decoccm','fcv=travide','fcv=travccm','fcv=equiccm','fcv=mariccm']

csvin = csv.reader(sys.stdin, delimiter=';')
csvout = csv.writer(sys.stdout, delimiter=';')

for row in csvin:
    word_list_fck = [row[0]] + word_list
    fmd_start = row[1:].index(row[0]) + 1
    row_fcv = row[:fmd_start]  # split fcv from fmd
    row_fmd = row[fmd_start:]
    out_row = [entry if entry in row_fcv else '' for entry in word_list_fck]
    out_row = out_row + [row_fmd.pop(0) if out_row[i] != '' else '' for i in range(len(word_list_fck))]
    csvout.writerow(out_row)

Example output:

$ python3 test.py < test.txt
fck=83;;;fcv=naisjdf;fcv=naismc;;;;;;;fck=83;;;fmd=1422811694,;fmd=1422811694;;;;;;
fck=83;fcv=demelog;;;;;;;;;;fck=83;fmd=1423134370;;;;;;;;;
fck=83;fcv=demelog;;;;;;;;;;fck=83;fmd=1422292546;;;;;;;;;
fck=83;fcv=demelog;;;;;;;;;;fck=83;fmd=1421774352;;;;;;;;;
fck=83;;;;;;fcv=decoccm;;;;;fck=83;;;;;;fmd=1422853444;;;;
fck=83;;fcv=voyapro;;;;;;;;;fck=83;;fmd=1422270462;;;;;;;;
fck=83;fcv=demelog;fcv=voyapro;;;;;;;;;fck=83;fmd=1422183999,;fmd=1422206234,;;;;;;;;

Notes: I rely on the first element in the row (fck=83 in the example cases) to be the entry separating the fcvs from the fmds. 
If not, this thing is going to be a whole lot more complicated. Given the repeated if bar in foo in the list comprehensions, this might be very slow depending on the length of each row. Regarding out_row = [entry if entry in row_fcv else '' for entry in word_list_fck]: consider how the desired output looks when parsed by csv.reader into a list (taking, for example, the second line): ["fck=83", "fcv=demelog", "", "", "", "", "", "", "", "", "", "fck=83", "fmd=1423134370", "", "", "", "", "", "", "", "", ""] - all the empty entries become empty strings. The output is supposed to contain empty entries for each fcv which did not appear in the input. Therefore, when building such a list for writing out using csv.writer, I use empty strings for all the fcv entries which do not appear in row_fcv (if entry in row_fcv else '').
_webapps.89254 | I like the feature of Gmail Priority Inbox that allows you to divide your priority inbox into sections. I like it so much, I want a section for all my standard labels. I have about 7. Gmail only allows you 4 sections by default. I found this trick to add more than 4 sections.. http://thenextweb.com/lifehacks/2014/07/14/3-gmail-tricks-can-save-hours-every-week/The article says: If you inspect Remove Section, and change the corresponding act=z snippet to act=n (highlighted in the image below) and then click Remove Section, this will add another new section! There doesnt seem to be a cap on the number of sections you can create through this technique, and these changes will persist for future browser sessions just like the Unread trick.I am trying to do this in 2016. I successfully added several more sections to my priority inbox using this method, but I notice that when I do more than 5 sections, the bottom sections start auto-collapsing, and there's no way to actually show the messages in that category in the priority inbox view. You have to go to the right-hand drop down arrow above the section and hit Show more messages to see them - and they open in a new screen, which I don't want.In sum, I can't get this hack to work properly for more than 5 total sections. Any advice? | Gmail Priority Inbox - Can I add more than 5 sections (using inspect hack) properly? | gmail;priority inbox | null |
_unix.158888 | So I have a python script that pulls down git/svn/p4 repositories and does some writing to a database. I'm simply trying to automate the running of this script, and based on what I see in syslog it is being run; even the file I tried piping the output to is being created, however the file is empty. Here is the cron job: 10 04 * * * user /usr/bin/python2.7 /home/user/script.py -f someFlag > ~/cronout.log 2&>1 Kinda stumped and don't know where to start. Thinking it's maybe the requirement of passwords for the keychain and whatnot. Any ideas would be helpful! | Cron job not behaving as expected | cron;python;git;subversion;ssh agent | So it turns out the problem was with environment variables that the Python script needed, and it was so early on in the script that it broke the script before it even output anything. Cron does not have a regular environment. Furthermore, ssh passwords were required for pulling git repos, which I was able to solve by using Keychain. Using the help of this blog post and some bash wrapper scripts I was able to get everything working and automated.
_webapps.102523 | I want to write an array formula to sum a row if the cells contain a certain symbol. For example, I want a sum from A1:A10 if the cell contains /. | How to sum spreadsheet cells under the condition of containing the character / | google spreadsheets | null
_softwareengineering.298448 | I'm wondering how to manage empty results returned by search queries in a REST web-service :I there was a query like my_ressource_collection/{id} and the resource didn't exists i would return 404 - Not FoundBut, if my URI is more like a search query like : my_ressource_collection/?fromDate={fromDate}&toDate={toDate}&filter1={filter1}&page={page} : Does the same logic applies ?If no results matchs to the perametters, do I have to return a 404 status ? | What HTTP status using for REST search query which returns nos results | rest;web services;api design | I think in the most cases the most useful reaction would be to return a regular answer (HTTP 200) and send empty data.For example if you are returning JSON you could send{}Another good option may be the HTTP Code '204 - No Content'.The 4xx codes describe a client failure, so the 404 code wouldn't be a good idea, because it's a valid request, even if there are no matching data. |
_unix.261103 | If I run this script, how do I pass super user permissions to it? I wrote this just to set up new machines with the basics. I don't want to run every command with elevated permissions, but the commands that do have sudo I want to run with them. How do I have some commands run with sudo and others run as the regular user? #!/bin/sh # If Linux, install NodeJS if [ "$(uname)" = 'Linux' ]; then export IS_LINUX=1 # Does it have aptitude? if [ -x "$(which apt-get)" ]; then export HAS_APT=1 # Install NodeJS sudo apt-get install --yes nodejs fi # Does it have yum? if [ -x "$(which yum)" ]; then export HAS_YUM=1 # Install NodeJS sudo yum install nodejs npm fi # Does it have pacman? if [ -x "$(which pacman)" ]; then export HAS_PACMAN=1 # Install NodeJS pacman -S nodejs npm fi fi # If OS X, install Homebrew and NodeJS if [ "$(uname)" = 'Darwin' ]; then export IS_MAC=1 if test ! "$(which brew)"; then echo "================================" echo "Installing Homebrew for you." echo "================================" ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" export HAS_BREW=1 elif [ -x "$(which brew)" ]; then export HAS_BREW=1 brew update fi # Install NodeJS brew install --quiet node fi # Does it have python? if [ -x "$(which python)" ]; then export HAS_PYTHON=1 if [ -x "$(which pip)" ]; then pip list --outdated | cut -d ' ' -f1 | xargs -n1 pip install -U export HAS_PIP=1 fi fi # Does it have node package manager? if [ -x "$(which npm)" ]; then export HAS_NPM=1 else echo "NPM install failed, please do manually" fi # Does it have ruby gems? if [ -x "$(which gem)" ]; then export HAS_GEM=1 fi The rest of the bash script (that I didn't include for length) installs packages from an array using npm, apt, yum, brew, or pacman, depending on the machine. It only installs simple things like git, wget, etc. | How do I maintain sudo in a bash script? | bash;shell script;permissions;sudo | The first time sudo is invoked, a password is prompted for.
Then, depending on configuration, if invoked again within N minutes (default 5 minutes IIRC), one does not need to enter the password again. You could do something like: sudo echo >/dev/null || exit 1 or perhaps something like: sudo -p "Become Super: " printf "" || exit 1 at the start of the script. If you want to prevent anyone from doing sudo ./your_script you should check EUID as well (bash): if [[ $EUID -eq 0 ]]; then printf "Please run as normal user.\n" >&2; exit 1; fi or something like: if [ $(id -u) = 0 ] ... In any case, also check out which shell you target. E.g. https://wiki.debian.org/DashAsBinSh https://wiki.ubuntu.com/DashAsBinSh https://lwn.net/Articles/343924/ etc. To keep it alive one could do something like: while true; do sleep 300; sudo -n true; kill -0 $$ 2>/dev/null || exit; done &
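The keep-alive loop at the end of the answer can be packaged up as a helper. This is a hedged sketch: start_keepalive and KEEPALIVE_PID are names invented here, and SUDO is parameterized purely so the pattern can be exercised without privileges.

```shell
# Hypothetical wrapper around the keep-alive pattern from the answer.
SUDO=${SUDO:-sudo}

start_keepalive() {
    $SUDO -v || return 1            # authenticate once, up front
    while true; do
        sleep 60
        $SUDO -n -v 2>/dev/null     # refresh the timestamp, never prompt
        kill -0 "$$" 2>/dev/null || exit
    done &
    KEEPALIVE_PID=$!
    trap 'kill "$KEEPALIVE_PID" 2>/dev/null' EXIT
}
```

A setup script would call start_keepalive once before its first sudo command; the EXIT trap stops the background loop when the script finishes.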
_softwareengineering.305318 | I'm creating a work diary web application in MVC using VB.NET. The application will display any diary entries that have a HiLite boolean field (HL) set to true in a table on the home page. Currently I've implemented a simple refresh of the home page using a meta tag that refreshes the page every 2 mins, and this copes with any changes to the table, e.g. a new HiLited diary entry has been added by someone, or a HiLited entry has had its flag set to false. I would like to 'move with the times' and possibly use SignalR to eliminate unnecessary requests and have a more realtime feel to the table data. I've not used SignalR before. I've looked at some simple examples but need clarification on whether this is the best way to go. Will using SignalR do the following? On first visit, get and display all current HL diary entries. If a new entry is added, refresh the table on all clients. If an entry has its HL flag set to false, refresh the table on all clients. Or is there some better method? The SignalR samples I've seen are simple 'chat' samples that basically have two params (User + Msg) and the code appends any new messages to an HTML element. I will have a list of diary entry objects... will I be able to do what I need and, instead of appending to an HTML element, use it to replace the content of the HTML element? | SignalR design concept | design | null
_codereview.59011 | I have a large Sql Server view with this schema: [archive_ID] (int, not null) [archive_date] (datetime, not null) [archdata_path_ID] (varchar(50), not null) [archdata_value] (int, not null). I need to group the records by the date, and I need to extract just the first record for each group. This is the current query: WITH cte AS ( SELECT * ,CAST(archive_date AS DATE) AS C ,ROW_NUMBER() OVER ( PARTITION BY CAST(archive_date AS DATE) ORDER BY CAST(archive_date AS DATE) ASC ) AS ad FROM ArchiveData WHERE archdata_path_ID = @PathID ) SELECT DISTINCT C ,archdata_value AS val FROM cte WHERE ad = 1 ORDER BY C ASC. The main problem is to improve the readability. It would also be great to optimize the performance, but that's not mandatory. | Efficient way to extract the first row in a Group By group | sql;sql server | I believe that DISTINCT is redundant, since the CTE should produce only one row for each date whose ROW_NUMBER() is 1. Avoid selecting * in the CTE, and list the columns you want explicitly. Your naming is poor: CTE, C, val, ad. Please find more descriptive names. If you are using SQL Server 2012 or later, then FIRST_VALUE() is the function you want. Note that the window should be ordered by the full archive_date datetime (ordering by the CAST to DATE, which is the partition key, leaves "first" undefined), and that FIRST_VALUE() does not collapse rows, so here DISTINCT is required: SELECT DISTINCT CAST(archive_date AS DATE) AS C ,FIRST_VALUE(archdata_value) OVER ( PARTITION BY CAST(archive_date AS DATE) ORDER BY archive_date ) AS val FROM ArchiveData WHERE archdata_path_ID = @PathID ORDER BY C;
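As a cross-check of the first-row-per-day logic, the same shape of query can be run against SQLite from Python. The table and column names mirror the post; the data and the SQLite backend are stand-ins, since the post targets SQL Server (SQLite 3.25+ is needed for window functions).

```python
# SQLite stand-in for the query under review: first row of each day
# for one path, via ROW_NUMBER(). Data is made up for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ArchiveData (
        archive_ID       INTEGER,
        archive_date     TEXT,
        archdata_path_ID TEXT,
        archdata_value   INTEGER);
    INSERT INTO ArchiveData VALUES
        (1, '2014-01-05 08:00:00', 'p1', 10),
        (2, '2014-01-05 17:00:00', 'p1', 99),
        (3, '2014-01-06 09:30:00', 'p1', 20);
""")

rows = con.execute("""
    WITH cte AS (
        SELECT DATE(archive_date) AS day,
               archdata_value,
               ROW_NUMBER() OVER (PARTITION BY DATE(archive_date)
                                  ORDER BY archive_date) AS rn
        FROM ArchiveData
        WHERE archdata_path_ID = 'p1')
    SELECT day, archdata_value FROM cte WHERE rn = 1 ORDER BY day
""").fetchall()
print(rows)  # [('2014-01-05', 10), ('2014-01-06', 20)]
```

Ordering the window by the full timestamp (not its DATE cast) is what makes "first record of the day" well defined.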
_softwareengineering.332522 | What is the recommendation regarding re-using DTO's as a child in another DTO?I'm using ASP.NET Web API (C#) and consuming the results with Angular 1.x (not sure if that really matters).I have the following DTOsclass SiteDto { public int SiteId {get;set;} public string Name {get; set;} public DateTime CreatedDate {get;set;} ...}class SiteEventDto { // pk public int SiteEventId {get; set; } // fk public int SiteId {get; set;} public DateTime EventStartDate {get; set;} public DateTime EventEndDate {get; set;} ...}I want to return a list of SiteEvents with a site name (for example). Is there a recommendation for the link between SiteEvent and Site to get this parent information? For example, I see the following possibilities:Re-use existing DTO: Use the existing Site DTO as a property within the SiteEvent DTO (e.g. public SiteDto ParentSite { get; set; } ) Pros: Code re-useScalableCons: Potentially a lot of extra bloat (for web-based/json-based calls especially) due to unneeded properties and possibly additional parent hierarchiesCreate new parent DTO: Create a child-DTO specific to this task (e.g. class SiteEventSiteDto { ... ) and reference that through a child property on SiteEventDtoPros:Reduce additional bloat from unused fieldsCons:Minimal code re-use Flatten class structure: Flatten the class structure to simply include a SiteName property in the SiteEventDto (e.g. public string SiteName { get; set; } )Pros:Reduce additional bloat (even more)Cons:Minimal code re-useIt probably makes sense to look at each case individually and assess the following questions:Do I foresee a need for additional properties off the parent object? How difficult is it to add a new property in a flat structure? How much bandwidth is the bloat going to really cost the user?I guess my questions to the community are: Is it common practice to re-use a DTO for more than one controller (scenario #1, above) or is that considered a bad practice for some reason? 
Or, does it make more sense to create a new child DTO or use a flat structure when possible? And lastly, do my ramblings above seem like a good approach to this problem? | Reusing a top-level DTO as a child in another DTO | class design;json;code reuse;dto | null |
_unix.330392 | Sorry if this is a repeat. I searched, but with no luck. I'm using SNMPd on an openwrt/wr host with some ppp and tun connections. These connections get IDs in the if table, and will actually get a new ID whenever the tunnels reconnect. Nagios (check_mk), when that happens, complains that an interface went down; oh, and a different one with the same name came up right afterward. In the meantime, it's iterating over so many interfaces that the reports are of 'interface 4933 down'; and an snmpwalk shows close to 4932 datapoints before it. How are the helpful folks here handling a monitoring situation like that? | Nagios/SNMP - devices alerting when ppp/tun connections cycle | linux;monitoring;ppp;nagios;snmp | null
_unix.44492 | I'd like to make a temporary change to my fstab file so that my /home is on another drive. However, I don't want the whole partition to be mounted, but just a folder (home) on that partition. I'm OK with the rest of the data being unavailable. What's the canonical way of expressing this in fstab? I can't think of a way to do it in one command (as I can't reference a folder on a filesystem I haven't mounted). I think I should do a first mount and then move the folder to /home. But I don't know if I can do a move in fstab, haven't found it in man (and I don't feel like trying blindly because I only have ssh access to the machine right now). For now I have a bind mount in fstab: /dev/sdd1 /mnt/temphome ntfs defaults,errors=remount-ro 0 2 /mnt/temphome/home /home none bind However, this leaves /dev/sdd1 mounted in both points. To summarize: can I do a move mount operation in fstab, and if yes, then how? Is that the right approach, and if not, what is? Thanks in advance. | Mount a folder on a drive in fstab. Move? | arch linux;mount;automounting;fstab | I don't think you can perform moves from /etc/fstab. If you want to do that, add a mount --move command in /etc/rc.local. That leaves a time in the boot process during which the home directories are not available at their final location. Since these are the home directories, they shouldn't be used much if at all during the boot process, so that's ok. The one thing I can think of is @reboot crontab directives. If you have any of these, the home directories need to be available, so you should add mount --move to the right place in /etc/rc.sysinit instead (just after mount -a). Using a bind mount is probably fine, though. What can go wrong is mainly processes that traverse the whole disk, such as backups and updatedb. Leaving the bind mounts in /etc/fstab is the least risky option, but you should configure disk traversal processes to skip /mnt/temphome/home. Yet another possibility is to make /home a symbolic link.
However, this may cause some programs to record the absolute path to users' home directories, which would be /mnt/temphome/home/bob. A bind mount or moving a submount doesn't have this problem.
_cs.3264 | I have two questions. Both are about finding any of the 3 largest among $n$ elements. How to show that $n-3$ comparisons suffice to find any of the $3$ largest among $n$ given numbers $(n \geq 4)$? How to show that $n-3$ comparisons are necessary to find any of the $3$ largest among $n$ given numbers $(n \geq 6)$? My idea: I have tried to solve 1. using a heap but could not derive any such bound. The place where I found this question gave one hint, which said to analyze the number of connected components in a comparison graph. I would appreciate it if anyone could shed some light on comparison graphs. | Find any of the 3 largest among $n$ elements | algorithms | null
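No accepted answer is recorded for this one. The following is a standard argument along the lines of the comparison-graph hint, reconstructed here rather than taken from the thread, so treat it as a sketch:

```latex
% Proof sketch (standard argument; not from the original thread).
\paragraph{Upper bound.} Fix any $n-2$ of the numbers. At most $2$ numbers
lie outside this set, so the maximum of the set has at most $2$ numbers
above it and is therefore among the $3$ largest overall. Computing the
maximum of $n-2$ numbers takes $(n-2)-1 = n-3$ comparisons.

\paragraph{Lower bound.} Model each comparison as an edge on the $n$
numbers. After at most $n-4$ comparisons the graph has at least $4$
connected components. Suppose the algorithm outputs $x$; choose $3$
components not containing $x$. Since no performed comparison joins
distinct components, an adversary can rescale each component into its own
disjoint interval, preserving all observed outcomes while placing those
$3$ components entirely above $x$'s component. Then at least $3$ numbers
exceed $x$, so $x$ is not among the $3$ largest; hence $n-3$ comparisons
are necessary.
```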
_cogsci.8509 | Dynamical systems theory, as it is used in the context of developmental psychology and by some psychoanalysts (as in this article, or in this video by the first speaker), can be described as an attempt to introduce mathematical concepts into the study of psychological phenomena. Unlike other applications of dynamical systems theory, the above seem to use terms like attractor in an abstract or metaphorical sense, that is, outside of a formal mathematical context. Is this kind of application useful, or just a bunch of pretty words stripped of their real meaning? Do we have a better understanding of psychological phenomena through such a framework? | Dynamical systems theory as a metaphor in psychology: is it useful or not? | reference request;developmental psychology;psychology | Like all models and modeling frameworks, the utility of the approach is often tied to how much it increases our understanding of the phenomena of interest. As summarized nicely by Smith & Thelen (2003), there are developmental questions that are better conceptualized through the framework of DST than through alternative modeling frameworks, due to the richly interactive and multi-causal structure of the phenomena. You'll also find that applications of DST in developmental psychology are not just appealing to the metaphorical use of DST. There are rigorous mathematical uses of the tools of DST to explain developmental phenomena. One of the most cited examples is Thelen et al. (2001), who modeled the A-not-B error. There is a lot of similarly inspired modeling work being done throughout the field.
_webmaster.105922 | CMS: WordPress. I'm updating a PDF file on our website, and would like to know if overwriting the old content with the new content would prevent the need to re-establish the links on the site that point to that specific file. In other words, does overwriting the file remove the need to redirect old links? I feel like I'm being unclear, so I'm happy to chat and make it more straightforward. Thanks | Overwriting a File To Avoid Link Updating | links;pdf | Without putting WordPress or any of those CMSes into the picture, the answer is clear. If you upload a file to a server with the exact same filename as one referenced by a link from any page on the web, then there is no need to update any of the links. As much as I'm taking shortcuts here, this answer is easy to verify. Simply create a simple HTML file with the following contents: <a href="something.html">A link</a> Save it as index.html or index.htm (depending on the first file your specific server looks for when a user requests the home page), then upload it to your document root folder on the server. To be on the safe side, upload the file as index.htm, then make a copy of it, save it as index.html, and upload that as well. Next, create a simple HTML file with whatever contents you want, save it as something.html, and upload that file to the exact same folder where you saved the index.htm or index.html file. Access your domain (by going to www.whatever-your-domain-is.com in your browser) and you will see the following on a blank screen: A link. Click on it and you'll see the output of something.html after the browser processes the HTML. Now on your local computer, change the contents of something.html, save it again, and upload it again to the same location; if asked to overwrite the existing file in your editor, choose yes. If asked to overwrite the file on the server, choose yes. Now go to your browser, clear the cache, and then access your domain again.
You should notice that when A link is clicked, you'll see your changes.As you have noticed, no changes were made to index.htm or index.html. |
_ai.1390 | Is there any research studying the application of AI to chemistry that can predict the output of certain chemical reactions? So, for example, you train the AI on current compounds, substances, structures and their products and chemical reactions from an existing dataset (basically what produces what). Then you give it the task of finding how to create gold or silver from a group of available substances. The algorithm would then find the chemical reactions (successfully predicting new ones which weren't in the dataset) and give the results. Maybe gold is not a good example, but a practical scenario would be the creation of drugs which are cheaper to make using much simpler processes, or synthesizing some substances for the first time for the drug industry. Has there been any successful research attempting to achieve this using deep learning algorithms? | Predicting chemical reactions using AI | deep learning;research;prediction | Yes, many people have worked on this sort of thing, due to its obvious industrial applications (most of the ones I'm familiar with are in the pharmaceutical industry). Here's a paper from 2013 that claims good results; following the trail of papers that cited it will likely give you more recent work.
_unix.329098 | For my development machine I need no data consistency in case of a crash. Is there a config for a Debian-like system that optimizes PostgreSQL for speed (even if it sacrifices reliability)? So something like: keep the last 1 GB in RAM; don't touch the disk with data until the 1 GB is used. | PostgreSQL: Speed over reliability config | performance;postgresql | null
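No accepted answer is recorded here, but PostgreSQL does expose exactly this durability-for-speed trade-off. The parameter names below are real postgresql.conf settings; the values are illustrative guesses for a dev box, and with fsync off a crash can corrupt the whole cluster, so this is for throwaway data only:

```ini
# postgresql.conf fragment: speed over crash safety (development only)
fsync = off                  # don't force WAL to disk
synchronous_commit = off     # don't wait for WAL writes at COMMIT
full_page_writes = off       # only sane because fsync is already off
checkpoint_timeout = 30min   # write dirty pages out far less often
shared_buffers = 1GB         # keep the working set in RAM
```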
_unix.260984 | I have a Linux server that authenticates users against an Active Directory. Currently, interactive logins require a username and password, and automatic logins use a keytab (Kerberos) for passwordless authentication. However, I have read in NIST IR 7966 that keytab authentication is not as secure as public/private key pair authentication, because with key pairs you can restrict access to the execution of a single command and you have more control over which systems the user can authenticate from. So, I would like to use a public/private key pair and continue authenticating against an Active Directory from a Linux server. Is this possible? | Can a Linux server that authenticates against an Active Directory accept public/private keys for passwordless login? | scripting;authentication;key authentication;active directory;kerberos | null
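No answer is recorded, but for context on the restriction the question attributes to NIST IR 7966: on the sshd side, key-level restrictions look like the entry below regardless of how account lookup is done. The options are standard sshd authorized_keys syntax; the host, command path, and truncated key material are made up:

```
# ~/.ssh/authorized_keys: pin this key to one source host and one command
from="10.0.0.5",command="/usr/local/bin/backup.sh",no-pty,no-port-forwarding ssh-ed25519 AAAAC3... backup@client
```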
_webmaster.50132 | I'm pretty new to the whole webmaster stuff and am just trying to format a slideshow on my homepage to be in a certain area with an apDiv, but as soon as I use another browser it's in a different location. Is there a way to make them appear in the same spot for all browsers? Here's a screen shot: | How to make an apDiv show up in same spot on all browsers | php;html;flash | null
_unix.77940 | I'm trying to build omniORB 4.1.6 under Arch Linux; however, when I type make, here is the message: ../../../../../src/tool/omniidl/cxx/idlpython.cc:188:26: fatal error: python3.3/Python.h: No such file or directory # include PYTHON_INCLUDE I'm sure both python3 and python2 were installed, and I remember that last time I was trying to do the same thing under Linux Mint I ran into the same problem; that time I used this command to solve it: sudo apt-get install python-dev However, it seems Arch doesn't separate python-dev from python. I checked my /usr and found Python.h under /usr/include/python3.3m, so what should I do now? | Python.h: No such file or directory | arch linux;compiling;python | Normally running ./configure before running make should set up things correctly, but in this case that seems not to have happened. Python 3.3.x puts its header files in .../include/python3.3m, whereas 2.7.x uses .../include/python2.7 (without any suffix); maybe omniORB is not aware (yet) of that suffix m. You can make a link from python3.3m to python3.3 using: cd /usr/include ln -s python3.3m python3.3 and retry the build process (this assumes python3.3 was configured using --prefix=/usr; adapt the cd as necessary).
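Beyond the symlink, one can ask the interpreter directly where its headers live. This is not from the original answer, and the make flag shown is only a guess at how a given build would consume the path:

```shell
# Ask Python where its C headers are (handles any ABI suffix like "3.3m").
inc_dir=$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["include"])')
echo "Python headers expected in: $inc_dir"
# then point the build there, e.g. (flag name depends on the project):
#   make CXXFLAGS="-I$inc_dir"
```

If the directory the command prints does not exist, the development headers simply aren't installed for that interpreter.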
_unix.182624 | I'm looking for ways to speed up the boot time on a single-core embedded Linux system. Significant lag happens during startup of some custom daemons. Once they are up, they do run in the background, but the process of getting them to run takes long. ##This takes long during startup## In file /etc/init.d/run_custom_daemon ... /opt/bin/custom_daemon -d ... I've experienced a decrease in boot time if I put the starting of the daemons in the background. ##This takes much less time, but is it a real gain?## In file /etc/init.d/run_custom_daemon ... /opt/bin/custom_daemon -d & ... From the point of view of someone just watching how long it takes to get to the login screen, this could seem like a speed-up. However, I feel that this might be just a cosmetic gain, which might lead to issues if the next process in the boot sequence expects the daemon to be running at the point it starts. Is this a correct assumption? | Running an init script which starts a daemon in the background to reduce boot time? | shell;boot;daemon;background process | null
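The worry in the question (a later boot step may race a backgrounded daemon) is usually handled by gating on an explicit readiness signal rather than sleeping blindly. The helper below is a hedged sketch with invented names; the pidfile path is only an example of what "ready" might mean for this daemon:

```shell
# wait_ready FILE TRIES: poll (10 times/second) until FILE exists.
wait_ready() {
    i=0
    while [ "$i" -lt "$2" ]; do
        [ -e "$1" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

# In the init script it might be used like (paths from the question):
#   /opt/bin/custom_daemon -d &
#   wait_ready /var/run/custom_daemon.pid 50 || echo "daemon not ready" >&2
```

With this, backgrounding is a real gain for steps that don't need the daemon, while dependent steps still block only as long as necessary.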
_unix.103302 | I recently found out about the lsusb command while troubleshooting a headset issue. My laptop runs Debian and doesn't have anything plugged into USB currently, but when I run the lsusb command, I still get quite a bit of output: root@t500:~# lsusb Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 003: ID 0a5c:2145 Broadcom Corp. Bluetooth with Enhanced Data Rate II Bus 004 Device 002: ID 08ff:2810 AuthenTec, Inc. AES2810 Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 002: ID 0781:b6d0 SanDisk Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Could someone break down what this means in more detail? The SanDisk card is connected via PCMCIA, not USB. Are both PC cards and USB devices listed in lsusb? I don't have any external bluetooth connected, why would this display? Why are some root hubs 1.1, and some 2.0? Does that mean some of my USB ports are 2.0 and others aren't? What is a root hub exactly, just an empty USB port? | How to interpret the output of 'lsusb' -- What are all these Linux Foundation root hubs? | debian;usb | A USB hub is a device that has one cord that plugs into one USB port, but provides multiple USB ports for you to plug devices into. It's essentially a USB multiplexer. A root hub, AFAIK, is a USB hub that's internal. For example, there might only be one USB slot in your motherboard, but there are multiple external ports because there's an internal root hub plugged into the motherboard. (This is simplified, of course. I'm not an expert in hardware.) The Bluetooth device is the chip inside your computer that actually broadcasts Bluetooth radio traffic.
Probably, it's wired through a USB port inside the computer's case.With regards to the display of Linux Foundation, my guess is that that's where the drivers come from. But I'm not sure. |
_codereview.43028 | The call from my index.php to the function get_data($curr_date) starts here: case "events": $curr_date = $_POST['current_date']; if ( $eid = get_data($curr_date) ) { $result = array( "success"=>true, "eid"=>$eid ); $json_output = json_encode( $result ); echo $json_output; } else echo "FAIL"; break; This is my function which returns the result: function get_data($curr_date){ $events_list = mysql_query( "SELECT eid, display_date, word_of_the_day, word_of_the_day_meaning, thought_of_the_day, crazyFact_of_the_day, joke_of_the_day FROM eventsoftheday WHERE display_date = '$curr_date'" ); // get mysql result cursor if( mysql_num_rows( $events_list ) > 0 ) // check if the query returned any records { $mysql_record = mysql_fetch_array( $events_list ); // parse mysql result cursor $events = array(); $events['e_id'] = $mysql_record['eid']; $events['edisplay_date'] = $mysql_record['display_date']; $events['eword_of_the_day'] = $mysql_record['word_of_the_day']; $events['eword_of_the_day_meaning'] = $mysql_record['word_of_the_day_meaning']; $events['ethought_of_the_day'] = $mysql_record['thought_of_the_day']; $events['ecrazyFact_of_the_day'] = $mysql_record['crazyFact_of_the_day']; $events['ejoke_of_the_day'] = $mysql_record['joke_of_the_day']; return $events; // return events } else return false;} | Return JSON array through PHP | php;json | null
_softwareengineering.123908 | When I do database table design, I often wonder whether I should try to make the table as simple as possible, or make the object as clear as possible. The requirement is to store every month of the year's salesTon and gross profit. I designed from the object. The object has fields year, month, salesTon and gross profit. In this design, when I need to save data to the database, it saves 12 rows (one row per month). This object has a relationship with another object (customer), so one save writes n(customers)*12 rows at a time. It will do a huge insert operation. I'm afraid that this will make the db slow. Another design is to use just one object: one object to save the whole year, with no need to split the object into months. But the object will have 12*2 fields like (month1_sales_ton, month1_gross_profit, month2_sales_ton, month2_gross_profit, ...); this design saves the fewest rows, n(customers) per save. But since the second design doesn't have a month field, I can't write the common method that accepts year and month parameters when I want to look up one month's gross profit in a year. Maybe I should just write a method that takes a year and returns a whole-year object? I don't know. So which principle should I follow when I design the table? | What are the important factors for database table design and object design? | design;database | You need to normalize your data properly. You are not helping anybody with these summary tables. You are NOT speeding anything up, and YOU ARE NOT SAVING ANY DISK SPACE! I feel like a broken record saying this, but: DO NOT MODEL YOUR DATA BASED ON HOW YOU WANT YOUR CODE TO CONSUME IT! You need to fully understand the rules of normalization before knowing when to break them (and you clearly don't).
I can guarantee, in the long run, that de-normalizing will slow you down, take up more space, and inundate you with data integrity issues. Oh yeah, the most important thing when modeling your data is the actual entity (and its data points) and how it relates to other entities. This is why data models are generally synonymous with entity-relationship diagrams.
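The normalized option from the question (one row per customer, year, month) can be sketched concretely. Names mirror the post where possible; the data, the sqlite backend, and the table name are illustrative stand-ins:

```python
# Normalized monthly-sales table: the "common method" (year + month lookup)
# falls out naturally, unlike the month1_.., month2_.. wide design.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE monthly_sales (
        customer_id  INTEGER NOT NULL,
        year         INTEGER NOT NULL,
        month        INTEGER NOT NULL CHECK (month BETWEEN 1 AND 12),
        sales_ton    REAL,
        gross_profit REAL,
        PRIMARY KEY (customer_id, year, month));
    INSERT INTO monthly_sales VALUES
        (1, 2011, 1, 10.0, 2.5),
        (1, 2011, 2, 12.0, 3.0),
        (2, 2011, 1,  7.5, 1.0);
""")

# one month's gross profit, parameterized by year and month
row = con.execute(
    "SELECT gross_profit FROM monthly_sales "
    "WHERE customer_id = ? AND year = ? AND month = ?",
    (1, 2011, 2)).fetchone()
print(row[0])  # 3.0
```

The 12-row insert per customer that worried the asker is a routine batch write for any relational engine, while the composite primary key keeps one row per customer-month.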