Columns: id | question | title | tags | accepted_answer
_unix.96385
What I am after is almost exactly the same as can be found here, but I want the format "line number, separator, filename, newline" in the results, thus displaying the line number at the beginning of the line, not after the filename, and without displaying the line containing the match.

The reasons why this format is preferable are: (a) the filename might be long and cryptic and might contain the separator which the tool uses to separate the filename from the line number, making it incredibly difficult to use awk to achieve this, since the pattern inside the file might also contain the same separator. Also, line numbers at the beginning of the line will align better than if they appear after the filename. And (b) the lines matching the pattern may be too long and break the one-line-per-row property of the output displayed on standard out (and viewing the output on standard out is better than having to save it to a file and use a tool like vi to view one line per row in the output file).

The related question is "How can I recursively search directories for a pattern and just print out file names and line numbers".

Now that I've set out the requirement, consider this: Ack is not installed on the Linux host I'm using, so I cannot use it.

If I do the following, the shell executes find . and substitutes `find .` with a list of paths starting at the current working directory and proceeding downwards recursively:

    grep -n PATTERN `find .`

then the -n prints the line number, but not where I want it. Also, for some reason I do not understand, if a directory name includes the PATTERN, then grep matches it in addition to the regular files that contain the pattern. This is not what I want, so I use:

    grep -n PATTERN `find . -type f`

I also wanted to change this command so that the output of find is passed on to grep dynamically. Rather than having to build the entire list of paths first and then pass the bulk of them to grep, have find pass each path to grep as it builds the list, so I tried:

    find . -exec grep -n PATTERN '{}' \;

which seems like the right syntax according to the man page, but when I issue this command the Bash shell executes it about 100 times slower, so this is not the way to go.

In view of what I described, how can I execute something similar to this command and obtain the desired format? I have already listed the problems associated with the related post.
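A sketch of one way to get the requested "line number, separator, filename" layout using only find, grep and sed (PATTERN and the paths are placeholders). Because grep sees exactly one file per invocation, its output is "NUM:content" with the line number before the first colon, so the separator problem the question describes never arises:

    find . -type f -exec sh -c '
      for f in "$@"; do
        grep -n -- "PATTERN" "$f" | sed "s/:.*//" | while IFS= read -r n; do
          printf "%s:%s\n" "$n" "$f"
        done
      done
    ' sh {} +

Using -exec ... {} + batches many files per shell invocation, avoiding the per-file process cost that made -exec ... \; about 100 times slower.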
recursive search for a pattern, then for each match print out the specific SEQUENCE: line number, file name, and no file contents
bash;find;grep;awk
null
_cs.74603
Suppose I have a task to write an algorithm that runs through an array of strings and checks whether each value in the array contains the character 'c'. The algorithm will have two nested loops; here is the pseudo code:

    for (let i = 0; i < a.length; i++)
        for (let j = 0; j < a[i].length; j++)
            if (a[i][j] === 'c')
                // do something

Now, the task is to identify the runtime complexity of the algorithm. Here is my reasoning: let the number of elements in the array be n, and the maximum length of the string values be m. So the general formula for the complexity is n x m.

Now the possible cases. If the maximum length of the string values is equal to the number of elements, I get the complexity n^2. If the maximum length is less than the number of elements by some number a, the complexity is n x (n - a) = n^2 - na. If the maximum length is more than the number of elements by some number a, the complexity is n x (n + a) = n^2 + na.

Since we discard lower-growth terms, it seems that the complexity of the algorithm is n^2. Is my reasoning correct?
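For reference, the counting argument the question is reaching for, in one line (with $n$ and $m$ as defined above): the total work is proportional to the total number of characters examined, $\sum_{i=1}^{n} |a_i| \le n \cdot m$, so the running time is $O(n \cdot m)$, which becomes $O(n^2)$ only in the special case $m = \Theta(n)$.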
How to calculate runtime complexity of the nested loops with variable length
algorithm analysis;runtime analysis;loops
null
_softwareengineering.214511
Please tell me if this is mad, but basically, I've created a custom rake task, and before it does its thing, it gives the user a warning message:

    Warning, please back up your database before continuing

Just in case something does go wrong. But I thought this would be awesome:

    Warning, please back up your database before continuing. Do you wish to create it now [Y/N]

and upon pressing Y, the existing database would be replicated. I don't want to duplicate the tables; I literally want to create a new, pristine, isolated database so that their architecture is identical, just in case something does go wrong.

I don't want any dependencies though; I only want to use libraries included with Rails. Could I have a simple example of how to create an empty database with RoR, if this is possible?

I can create a new database like this:

    require 'sqlite3'
    db = SQLite3::Database.new("test.db")

However, this will only work if the user is using sqlite3. I can detect which database they're using with

    database_type = ActiveRecord::Base.connection.adapter_name #=> sqlite3

and then use a case statement to execute slightly different commands for each database type, but that seems a bit elaborate, and not very dynamic. (Every single database would have to be thought up beforehand.) So can I do it through Active Record?
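A minimal sketch of one adapter-agnostic possibility (untested; it assumes a Rails version that ships ActiveRecord::Tasks::DatabaseTasks, and the "_backup" database name is made up):

    # Rake task sketch: create an empty database alongside the current one,
    # letting ActiveRecord pick the adapter-specific CREATE logic.
    desc "Create a pristine backup database with the same adapter settings"
    task create_backup_db: :environment do
      config = ActiveRecord::Base.connection_config.dup   # current settings
      config[:database] = "#{config[:database]}_backup"   # hypothetical name
      ActiveRecord::Tasks::DatabaseTasks.create(config)
      # To replicate the architecture too, load db/schema.rb into it next.
    end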
Creating a new database with Active Record
ruby on rails
null
_softwareengineering.147710
I am maintaining a WinForms application which talks to a SQL Server database. Sometimes I have to change the database schema (for example, to alter a SQL procedure or add a new one). For this purpose I have a SchemaChange.sql file, where I put the corresponding SQL code.

When I create an installer for my project, the msi package is created. It contains my application, referenced assemblies, COM+ assemblies, ... Alongside the msi package I provide a SchemaChange.sql file, which has to be run on the production SQL server. But sometimes I forget to add something to the SchemaChange.sql file during development, or I forget to execute it on the production server after upgrading my application on client machines. Any advice on how to automate this to avoid further problems?
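One common pattern (a sketch only, not tied to any particular tool; the table name, version number and procedure are made up) is to version the schema inside the database itself and make every change script idempotent, so the application or installer can apply whatever is missing at startup:

    -- Track which change scripts have already been applied.
    CREATE TABLE SchemaVersion (
        Version   INT      NOT NULL PRIMARY KEY,
        AppliedAt DATETIME NOT NULL DEFAULT GETDATE()
    );

    -- Each change script guards itself, so running it twice is harmless.
    IF NOT EXISTS (SELECT 1 FROM SchemaVersion WHERE Version = 42)
    BEGIN
        EXEC('ALTER PROCEDURE dbo.MyProc AS SELECT 1');  -- the actual change
        INSERT INTO SchemaVersion (Version) VALUES (42);
    END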
How to include database changes during application publish
.net;deployment;sql server
null
_unix.26623
I am creating an SFTP server with a chroot jail. The problem is that the user cannot log into the home directory. I need to keep the ChrootDirectory value one directory above the user's home directory (/home/jail/home in this case). I read that the directory needs to be owned by root. In that case the user cannot do anything except log into the server. Below is the sftp-specific part of my sshd_config file:

    Match User ftpuser
        ChrootDirectory /home/jail/home/ftpuser
        ForceCommand internal-sftp

Output of `id ftpuser` is:

    uid=1001(ftpuser) gid=1002(ftpuser) groups=1002(ftpuser),0(root)

I have intentionally added it to the root group so that ftpuser can at least log in. Output of `grep ftpuser /etc/passwd` is:

    ftpuser:x:1001:1002::/home/jail/home/ftpuser:/bin/false

Permissions of /home/jail/home/ftpuser are:

    drwx------+ 3 root root 4096 2011-12-12 12:49 /home/jail/home/ftpuser/

What should I do?
Ubuntu 11.10 SFTP chroot jail problem
chroot;sftp
Well, after some Googling I found a solution. It's not good practice, but it did help me. I changed the UID of the chrooted users to 0, i.e. the UID of the root user, without changing the login shell. As a result the chrooted user can access its home directory the way I need him to. And since the login shell is /bin/false he can't log into the system like other users (I actually tried doing `su - ftpuser` while I was logged into my machine as root, and ftpuser didn't get shell access). Though this solution might not be good or preferable, it was the only workaround I could find.
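For contrast, the commonly recommended setup (a sketch, not what the answer above did) keeps the chroot target root-owned, as sshd requires, and gives the user a writable subdirectory inside it:

    # chroot target must be owned by root and not group/other-writable
    chown root:root /home/jail/home/ftpuser
    chmod 755 /home/jail/home/ftpuser
    # give the user a directory of their own inside the jail
    mkdir -p /home/jail/home/ftpuser/upload
    chown ftpuser:ftpuser /home/jail/home/ftpuser/upload

With this layout the user can log in and write under upload/ without needing UID 0 or membership in the root group.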
_softwareengineering.224103
Application server: JBoss AS 7.1.1, JDK 6, J2EE 1.3.

My web application is more than 10 years old and is facing this session swap problem in my portal. I noticed that the swap happens mostly when many concurrent users are accessing the portal and the underlying Windows server is busy (more than 90% CPU usage).

To analyse this issue, I logged customer data (customer id, IP address, jsession id) to a table and found that a customer with a unique jsession id initially has his own data, and then all of a sudden the same jsession id and IP address receive different customer data:

    customer1 123.123.12.123 jsessionid123 11:10:02
    customer2 123.123.12.123 jsessionid123 11:10:04

The IP address (123.123.12.123) with jsession id (jsessionid123) somehow gets customer2's data. Any order placed by customer1 from IP 123.123.12.123 gets created for customer2. I confirmed this by calling customer2, and they confirmed that they didn't place the order. customer1 won't realise he placed an order for customer2: all the data gets swapped, like basket items, the customer object, products, etc.

Now I need to find a fix for this, but first I need to know which part of my code is creating this problem. Do I have to use stress test software? Or is there a better mechanism to find the problematic code?
J2EE - Session swap
java;java ee;session
I found a fix for this problem. In my case, the in-house framework contained the below code, which caused the problem:

    ...
    req.setAttribute(key, value);  /* This code gets executed for both REQUEST & SESSION */
    if (scope == Sp.SESSION) {
        req.getSession().setAttribute(key, value);
    ...

I noticed the code somehow maps session objects to an incorrect jsessionid, so I tried the below code:

    ...
    if (scope == Sp.REQUEST) {  /* Added this check */
        req.setAttribute(key, value);
    }
    if (scope == Sp.SESSION) {
        req.getSession().setAttribute(key, value);
    ...

It's nearly a year after this fix, and the swap didn't happen at all. So I'm confident this code fix solved the swap problem.
_softwareengineering.255537
I have a Unix script which is called by a scheduler (Control-M) every 3 seconds. The script queries an external database (not belonging to my application, so I can only query it) to check for new records. Currently it stores the last-run timestamp in a local table, and on the next poll queries the external database for all records > timestamp. Now I want to avoid updating and querying this local table. The only option I see is to store and read this timestamp in a file. However, the chances of the file getting corrupted or overwritten are higher than for data in a database. Google turned up a lot of Java options, but I need this in a script. Is there a better way to store the timestamp, or a better design for the functionality as a whole?
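If the worry is a partially written state file, a small sketch of the usual mitigation: write to a temporary file and rename it into place, which is atomic on the same filesystem (the file path and date format here are illustrative):

    #!/bin/sh
    state=/var/tmp/lastpoll.ts

    # Read the previous timestamp, falling back to the epoch on first run.
    last=$(cat "$state" 2>/dev/null || echo "1970-01-01 00:00:00")

    now=$(date '+%Y-%m-%d %H:%M:%S')
    # ... query the external database for records newer than "$last" here ...

    # Persist the new timestamp atomically: a crash mid-write leaves the
    # old file intact, never a truncated one.
    printf '%s\n' "$now" > "$state.tmp" && mv "$state.tmp" "$state"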
Storing last polled database timestamp unix
database;unix;polling
null
_codereview.124321
I'm a bit of a neophyte when it comes to C++, and so I'd like some feedback regarding a recent project. The code sits on a Raspberry Pi and streams camera data over TCP on a specified port.

The MJPGWriter class:

    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define PORT unsigned short
    #define SOCKET int
    #define HOSTENT struct hostent
    #define SOCKADDR struct sockaddr
    #define SOCKADDR_IN struct sockaddr_in
    #define ADDRPOINTER unsigned int*
    #define INVALID_SOCKET -1
    #define SOCKET_ERROR -1
    #define TIMEOUT_M 200000
    #define NUM_CONNECTIONS 10

    #include <pthread.h>
    #include <iostream>
    #include <stdio.h>
    #include <opencv2/opencv.hpp>

    using namespace cv;
    using namespace std;

    struct clientFrame {
        uchar* outbuf;
        int outlen;
        int client;
    };

    struct clientPayload {
        void* context;
        clientFrame cf;
    };

    class MJPGWriter
    {
        SOCKET sock;
        fd_set master;
        int timeout;
        int quality; // jpeg compression [1..100]
        std::vector<int> clients;
        pthread_mutex_t mutex_client = PTHREAD_MUTEX_INITIALIZER;
        pthread_mutex_t mutex_cout = PTHREAD_MUTEX_INITIALIZER;

        int _write(int sock, char *s, int len)
        {
            if (len < 1) { len = strlen(s); }
            {
                int retval = ::send(sock, s, len, 0);
                return retval;
            }
        }

    public:
        MJPGWriter(int port = 0)
            : sock(INVALID_SOCKET)
            , timeout(TIMEOUT_M)
            , quality(30)
        {
            FD_ZERO(&master);
            if (port)
                open(port);
        }

        ~MJPGWriter()
        {
            release();
        }

        bool release()
        {
            if (sock != INVALID_SOCKET)
                shutdown(sock, 2);
            sock = (INVALID_SOCKET);
            return false;
        }

        bool open(int port)
        {
            sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            SOCKADDR_IN address;
            address.sin_addr.s_addr = INADDR_ANY;
            address.sin_family = AF_INET;
            address.sin_port = htons(port);
            if (bind(sock, (SOCKADDR*)&address, sizeof(SOCKADDR_IN)) == SOCKET_ERROR)
            {
                cerr << "error : couldn't bind sock " << sock << " to port " << port << "!" << endl;
                return release();
            }
            if (listen(sock, NUM_CONNECTIONS) == SOCKET_ERROR)
            {
                cerr << "error : couldn't listen on sock " << sock << " on port " << port << "!" << endl;
                return release();
            }
            FD_SET(sock, &master);
            return true;
        }

        bool isOpened()
        {
            return sock != INVALID_SOCKET;
        }

        static void* listen_Helper(void* context)
        {
            ((MJPGWriter *)context)->Listener();
        }

        static void* writer_Helper(void* context)
        {
            ((MJPGWriter *)context)->Writer();
        }

        static void* clientWrite_Helper(void* payload)
        {
            void* ctx = ((clientPayload *)payload)->context;
            struct clientFrame cf = ((clientPayload *)payload)->cf;
            ((MJPGWriter *)ctx)->ClientWrite(cf);
        }

    private:
        void Listener();
        void Writer();
        void ClientWrite(clientFrame &cf);
    };

The Listener function loops indefinitely while the program runs, listening for new clients as they connect. On connection, it sends the header.

    void
    MJPGWriter::Listener()
    {
        fd_set rread;
        SOCKET maxfd;
        while (true)
        {
            rread = master;
            struct timeval to = { 0, timeout };
            maxfd = sock + 1;
            int sel = select(maxfd, &rread, NULL, NULL, &to);
            if (sel > 0)
            {
                for (int s = 0; s < maxfd; s++)
                {
                    if (FD_ISSET(s, &rread) && s == sock)
                    {
                        int addrlen = sizeof(SOCKADDR);
                        SOCKADDR_IN address = { 0 };
                        SOCKET client = accept(sock, (SOCKADDR*)&address, (socklen_t*)&addrlen);
                        if (client == SOCKET_ERROR)
                        {
                            cerr << "error : couldn't accept connection on sock " << sock << "!" << endl;
                            return;
                        }
                        maxfd = (maxfd > client ? maxfd : client);
                        pthread_mutex_lock(&mutex_cout);
                        cout << "new client " << client << endl;
                        pthread_mutex_unlock(&mutex_cout);
                        pthread_mutex_lock(&mutex_client);
                        _write(client, (char*)"HTTP/1.0 200 OK\r\n", 0);
                        _write(client, (char*)
                            "Server: Mozarella/2.2\r\n"
                            "Accept-Range: bytes\r\n"
                            "Connection: close\r\n"
                            "Max-Age: 0\r\n"
                            "Expires: 0\r\n"
                            "Cache-Control: no-cache, private\r\n"
                            "Pragma: no-cache\r\n"
                            "Content-Type: multipart/x-mixed-replace; boundary=mjpegstream\r\n"
                            "\r\n", 0);
                        clients.push_back(client);
                        pthread_mutex_unlock(&mutex_client);
                    }
                }
            }
            usleep(10);
        }
    }

The Writer function also continues to loop, getting the information from the camera, and spawns a thread for each client to write the information to.

    void
    MJPGWriter::Writer()
    {
        VideoCapture cap;
        bool ok = cap.open(0);
        if (!ok)
        {
            printf("no cam found ;(.\n");
            pthread_exit(NULL);
        }
        while (cap.isOpened() && this->isOpened())
        {
            Mat frame;
            cap >> frame;
            pthread_t threads[NUM_CONNECTIONS];
            int count = 0;
            std::vector<uchar> outbuf;
            std::vector<int> params;
            params.push_back(CV_IMWRITE_JPEG_QUALITY);
            params.push_back(quality);
            imencode(".jpg", frame, outbuf, params);
            int outlen = outbuf.size();
            pthread_mutex_lock(&mutex_client);
            std::vector<int>::iterator begin = clients.begin();
            std::vector<int>::iterator end = clients.end();
            pthread_mutex_unlock(&mutex_client);
            for (std::vector<int>::iterator it = begin; it != end; ++it, ++count)
            {
                if (count > NUM_CONNECTIONS)
                    break;
                struct clientPayload cp = { (MJPGWriter*)this, { &outbuf[0], outlen, *it } };
                pthread_create(&threads[count-1], NULL, &MJPGWriter::clientWrite_Helper, &cp);
            }
            for (; count > 0; count--)
            {
                pthread_join(threads[count-1], NULL);
            }
            usleep(10);
        }
    }

The ClientWrite function then writes this information to the connected client.

    void
    MJPGWriter::ClientWrite(clientFrame & cf)
    {
        char head[400];
        sprintf(head, "--mjpegstream\r\nContent-Type: image/jpeg\r\nContent-Length: %lu\r\n\r\n", cf.outlen);
        _write(cf.client, head, 0);
        int n = _write(cf.client, (char*)(cf.outbuf), cf.outlen);
        if (n < cf.outlen)
        {
            pthread_mutex_lock(&mutex_client);
            cerr << "kill client " << cf.client << endl;
            clients.erase(std::remove(clients.begin(), clients.end(), cf.client));
            ::shutdown(cf.client, 2);
            pthread_mutex_unlock(&mutex_client);
        }
        pthread_exit(NULL);
    }

Main: initialise with port 7777 and spawn the two controllers:

    int
    main()
    {
        MJPGWriter test(7777);
        pthread_t thread_listen, thread_write;
        pthread_create(&thread_listen, NULL, &MJPGWriter::listen_Helper, &test);
        pthread_create(&thread_write, NULL, &MJPGWriter::writer_Helper, &test);
        pthread_join(thread_listen, NULL);
        pthread_join(thread_write, NULL);
        exit(0);
    }

The programme works to some extent. I can connect to the IP address and port from another computer in the network, with either Firefox or VLC, and get a video stream from the camera. However, it isn't very stable, and multiple connections seem to slow it down, sometimes appearing to serve only one client at a time. Also, quite often when a client disconnects, the code ends without error, instead of continuing to serve the remaining clients/wait for new clients.
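One observation that may explain the instability, offered as an untested sketch: every loop iteration in Writer() reuses the same stack-local clientPayload, so a newly spawned thread can read a payload the next iteration has already overwritten, and threads[count-1] indexes threads[-1] on the first pass. Giving each thread its own slot avoids both:

    // Inside Writer(), replacing the spawn loop: one payload per client,
    // reserved up front so the pointers stay valid until the joins finish.
    std::vector<clientPayload> payloads;
    payloads.reserve(end - begin);   // no reallocation => stable addresses
    for (std::vector<int>::iterator it = begin;
         it != end && count < NUM_CONNECTIONS; ++it, ++count)
    {
        clientPayload cp = { (MJPGWriter*)this, { &outbuf[0], outlen, *it } };
        payloads.push_back(cp);
        pthread_create(&threads[count], NULL,
                       &MJPGWriter::clientWrite_Helper, &payloads[count]);
    }
    for (int i = 0; i < count; ++i)
        pthread_join(threads[i], NULL);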
Multithreaded MJPG network stream server
c++;pthreads;raspberry pi;opencv;posix
null
_unix.159979
I would like to disable the Ctrl+Shift+W shortcut, which closes the current terminal tab. I am using Vim, where Ctrl+R Ctrl+W moves between windows of the Vim instance. Sometimes the Shift key is pressed unintentionally while trying to move to the next window. However, this immediately closes the current terminal tab with the editor session. Can I disable Ctrl+Shift+W in the terminal while leaving other shortcuts untouched?
xfce4 terminal: disable individual shortcut
terminal;xfce
This part from the XFCE FAQ:

If you are running the Xfce desktop environment, enable "Editable menu accelerators" in the User Interface Preferences dialog. If you are running GNOME then you can enable "Editable menu accelerators" in the Menu and Toolbars control center dialog. Otherwise put the following in your ~/.gtkrc-2.0 file (create the file if it doesn't exist):

    gtk-can-change-accels=1

When xfsettingsd is running you must change the setting with the Xfce GUI, not through the ~/.gtkrc-2.0 file.

Once that is done:

1. Open xfce4-terminal
2. Open a new tab (Ctrl+Shift+T) so you have two tabs open
3. Move your mouse up to File and click to open the menu
4. Move your mouse down to "Close Tab Ctrl+Shift+W" and do NOT click the mouse
5. Press the Backspace key on your keyboard

Shortcut gone!

The User Interface Preferences plugin may not be installed, and thus editing the file

    ~/.config/xfce4/terminal/accels.scm

may become necessary. Adding the line

    (gtk_accel_path "<Actions>/terminal-window/close-tab" "")

will have disabled the shortcut after the next start of the terminal.
_cs.57225
Below is a quote from this article:

Regular expressions sit just beneath context-free grammars in descriptive power: you could rewrite any regular expression into a grammar that represents the strings matched by the expression. But, the reverse is not true: not every grammar can be converted into an equivalent regular expression.

Can anyone tell me if this claim is correct, and why? Per my understanding, they should have the same power, and the difference is that a context-free grammar has a recursive definition.
Why is the descriptive power of context-free grammars greater than that of regular expressions?
context free;formal grammars;regular expressions
Let the set of languages that can be represented by regular expressions be $R$, and the set of languages that can be generated by a context-free grammar be $G$. Then surely $R \subset G$. For the proof that every regular language has a context-free grammar that generates it, I encourage you to look it up in Michael Sipser's book. For the second part, there are many languages which have a context-free grammar generating them but are not regular. For example $L=\{ww^{reverse}\}$, where $w^{reverse}$ is the reverse of the string $w$. How do we know $L$ has a context-free grammar? You can easily come up with a pushdown automaton for it (look it up in Sipser's), and look up the pumping lemma in the book to prove $L$ is not regular.

To get an intuition: any regular language can be represented by a finite state automaton (FSA) and vice versa, and any context-free grammar can be represented by a pushdown automaton (PDA) and vice versa (again, read the proof in Sipser's). A PDA has, in addition to the states (like in an FSA), an infinite memory stack. Thus, intuitively, context-free grammars have power greater than that of FSAs.
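To make the example concrete, a standard grammar generating $L=\{ww^{reverse}\}$ over $\{a,b\}$ is $S \to aSa \mid bSb \mid \varepsilon$; for instance $S \Rightarrow aSa \Rightarrow abSba \Rightarrow abba$, which is $ww^{reverse}$ for $w=ab$.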
_cs.50354
I started computer science at university, and we started learning Discrete Mathematics, which is completely based on logic, which I understand. However, how are predicates, sets, and proofs related to computers? I want to know the reason for learning this material.
What is discrete mathematics and why am I learning about it?
mathematical foundations
null
_unix.281502
I found this article and seem to have a similar problem: "Why is USB not working in Linux when it works in UEFI/BIOS?"

My config: Fujitsu board, Qxx chipset, i5-2400.

I use vPro KVM and the input doesn't work in Debian jessie. Live CD and rescue mode are fine. I tried an upgrade to kernel 4.5. To get the network back to work I used `service network-manager restart` in rescue mode. Normal USB devices work again after replugging them. I changed the mainboard and CPU recently, but I had similar problems with the last system. I don't have a Gigabyte board, and I tried nearly all settings in the BIOS.

Following is the lsusb output; 001:004 must be the vPro KVM.

    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 005: ID 046d:c529 Logitech, Inc. Logitech Keyboard + Mice
    Bus 001 Device 004: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Just the `dmesg | grep '2-3'` output, for the vPro KVM device:

    [    4.284230] usb 2-3: new high-speed USB device number 3 using ehci-pci
    [    4.491046] usb 2-3: New USB device found, idVendor=8086, idProduct=002b
    [    4.491051] usb 2-3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [    4.491061] usb 2-3: Product: USBr Composite Device
    [    4.491063] usb 2-3: Manufacturer: Intel
    [    4.491065] usb 2-3: SerialNumber: 0001
    [    5.921587] usb 2-3: USB disconnect, device number 3

Why is it disconnecting? In rescue mode it reconnects automatically. I can even launch lightdm from rescue mode and then use it with keyboard and mouse. Other real USB devices have the same problem. The vPro device must be a virtual USB device.
USB devices don't work but show up in lsusb and dmesg
debian;usb;keyboard;mouse
My /etc/rc.local was weird. I deleted it, and now everything seems fine.

    #!/bin/sh -e
    #
    # rc.local
    #
    # This script is executed at the end of each multiuser runlevel.
    # Make sure that the script will exit 0 on success or any other
    # value on error.
    #
    # In order to enable or disable this script just change the execution
    # bits.
    #
    # By default this script does nothing.
    su user -c '/home/user/scripte/synergyc.sh' &
    exit 0

This is my current script, but the broken one looked exactly the same. How do I mark the question as solved?
_webapps.91732
I have a question about combining rows into a column in Google Spreadsheets. There are a lot of threads already about this topic, but I couldn't find one about this particular problem. This is the data I have at the moment:

    Name1   Name_1_belonging_to_1   Name_2_belonging_to_Name_1
    Name2   Name_1_belonging_to_2   Name_2_belonging_to_Name_2
    Name3   Name_1_belonging_to_3   Name_2_belonging_to_Name_3

What I need is:

    1  Name1
    2  Name_1_belonging_to_1
    3  Name_2_belonging_to_Name_1
    4  Name2
    5  Name_1_belonging_to_2
    6  Name_2_belonging_to_Name_2
    7  Name3
    8  Name_1_belonging_to_3
    9  Name_2_belonging_to_Name_3
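One single-formula possibility, assuming the source data sits in A1:C3 (verify FLATTEN is available in your Sheets version; it reads its input row by row, which matches the requested order, and the 1 to 9 numbering in the example is just the row number):

    =FLATTEN(A1:C3)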
Combine rows to a column in Google Spreadsheets
google spreadsheets
null
_codereview.152181
Is there a shorter way to write this code or parts of it, and where can I include some try/excepts for validation?

    People = {"Dan": 22, "Matt": 54, "Harry": 78, "Bob": 91}
    schoice = input('Enter existing name: ')
    while schoice not in People:
        schoice = input('Invalid entry name is non existant, Enter a chosen name: ')
    if schoice in People:
        getscore = People[schoice]
        if getscore < 50:
            presents = []
            for x in range(2):
                present = input('Enter present ' + str(x+1))
                while present == '':
                    present = input('Invalid. Try again: ')
                presents.append(present)
        if getscore >= 100:
            presents = []
            for x in range(8):
                present = input('Enter present ' + str(x+1))
                while present == '':
                    present = input('Invalid. Try again: ')
                presents.append(present)
        if getscore >= 50 and getscore < 100:
            presents = []
            for x in range(5):
                present = input('Enter present ' + str(x+1))
                while present == '':
                    present = input('Invalid. Try again: ')
                presents.append(present)
    File = open("presfile.txt", 'a')
    File.write(schoice)
    File.write(str(presents) + '\n')
    File.close()
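A sketch of a tighter version (assumes Python 3; names and file format are kept from the question). The three near-identical branches collapse into one lookup, and the file I/O gains the requested try/except:

    people = {"Dan": 22, "Matt": 54, "Harry": 78, "Bob": 91}

    choice = input('Enter existing name: ')
    while choice not in people:
        choice = input('Invalid entry, name is non-existent. Enter a chosen name: ')

    score = people[choice]
    # one expression instead of three duplicated loops
    count = 2 if score < 50 else 5 if score < 100 else 8

    presents = []
    for i in range(count):
        present = input('Enter present {}: '.format(i + 1))
        while not present:
            present = input('Invalid. Try again: ')
        presents.append(present)

    try:
        with open("presfile.txt", 'a') as f:   # closes the file automatically
            f.write(choice + str(presents) + '\n')
    except OSError as e:
        print('Could not write file:', e)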
Add elements to a textfile using a dictionary
python
null
_scicomp.20719
I just read Chapter 3 in A Multigrid Tutorial by Briggs/Henson/McCormick (link). The text is about multigrid cycles such as the V-cycle, mu-cycle and FMG. What caught my eye: in most iterative procedures one checks whether the iteration has converged to the desired tolerance/accuracy, and if so, the procedure stops. But Briggs/Henson/McCormick do not use any convergence checking in the presented schemes. The numbers of iterations and recursions are just hardcoded, and one has to trust that the scheme will converge.

So how is this done in multigrid normally? Is it usual that the number of iterations/recursions is just hardcoded? I fear that I will either waste a lot of computation time by being over-precise, or, on the other hand, that accuracy will be poor in many cases when I choose a lower number of iterations/recursions.
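For illustration, a sketch of one common pattern (hypothetical names; it assumes a v_cycle routine and NumPy): the cycle's interior (smoothing counts, recursion depth) stays hardcoded, but an outer loop adds a residual-based stopping test:

    import numpy as np

    def solve(A, b, x, v_cycle, tol=1e-8, max_cycles=50):
        """Repeat V-cycles until the relative residual drops below tol."""
        bnorm = np.linalg.norm(b)
        for _ in range(max_cycles):
            x = v_cycle(A, b, x)                 # one fixed, hardcoded cycle
            if np.linalg.norm(b - A @ x) <= tol * bnorm:
                break                            # converged: stop early
        return x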
Is it usual to have no convergence checking in Multigrid?
multigrid
null
_webmaster.17186
I have an HTML page, and I need a link to show the user that they would be going to 'example.html', when really the link runs 'javascript:ajaxLoad("example.html");'.

EDIT: I went over to Stack Exchange, and their solution worked. The HTML would go from this:

    <a href="example.html">Example</a>

to this:

    <a href="example.html" onclick="ajaxLoad('example.html'); return false;">Example</a>

The solution is: if JavaScript is enabled, the 'return false;' will cancel going to the link on the page and load via the JavaScript instead. If not, then it will go to the page in the href.
How to have HTML display a link, but open a JavaScript function instead?
html;javascript;links
null
_softwareengineering.308413
In UML deployment diagrams, the node element is used to represent a computational resource (in other words, something that can run software). I know that nodes may have other nodes placed inside them (to imply nesting), and that they may have artifacts placed inside them (to imply the deployment relationship, i.e. the artifact being deployed to the node). However, I've also found a couple of illustrations that had component elements placed inside a node, which I'm not sure how to interpret. Here's what I'd like to know:

Is it legal to place a component inside a node?

If so, what exactly does it imply: the component being deployed to the node (which I'm not sure is allowed), or the node consisting of the component?

If not, does the specification explicitly say so at any point?
In UML, can a component be placed inside a node?
uml;diagrams;language specifications
Strictly speaking, no, it is not legal to place a component inside a node. The specification does not explicitly say so, but it would not be feasible for the spec to explicitly forbid every mistake one could possibly make. Personally, I think it is acceptable to draw a component inside a node. I would interpret it as a deployed artifact implementing that component.

Some relevant quotes from the UML 2.5 specification:

11.6.3.1: "A Component may be manifested by one or more Artifacts, and in turn, that Artifact may be deployed to its execution environment."

19.4.3: "A Node is computational resource upon which Artifacts may be deployed. Nodes may be further sub-typed as Devices and ExecutionEnvironments."

19.2.4: "System elements deployed on a DeployedTarget, and Deployments that connect them, may be drawn inside the perspective cube."

I think DeployedTarget is a typo or a synonym for DeploymentTarget, which is a superclass of Node. "System elements" is not well defined in the spec. I found this sentence:

19.2.3: "System elements are represented as DeployedTargets"

But I think this should not be treated as a formal definition of the term 'system element' as used in 19.2.4. I think it is clear that anything drawn inside a node should be something deployed on that node, and the only deployments mentioned are the deployments of artifacts.
_unix.157442
Alternate title: Should I be worried?

I've been reading up about the remote bash exploit and was wondering how severe it is and if I should be worried, especially since a new exploit has been found after the patch release. What does this mean for me as someone who uses Debian as my main desktop OS? Is there anything I should be aware of?
What is the severity of the new bash exploit (shellshock)?
bash;security;shellshock
TL;DR (aka executive summary)

Yes, you should be worried. Yes, this is severe (giving total strangers potential complete control over your files and resources). You should definitely upgrade your desktop AS WELL AS any servers.

(https://security.stackexchange.com/questions/68156/is-connecting-to-an-open-wifi-router-with-dhcp-in-linux-susceptible-to-shellshoc)

Your DHCP client uses dhclient-script, which uses shell variables passed from the server. If there's a rogue/compromised router, it may pass modified domain-name variables with the exploit. Credits: Stéphane Chazelas, Mark, Michal Zalewski.

In addition, many desktops use OpenSSH, which is definitely vulnerable as per http://seclists.org/oss-sec/2014/q3/650 (although for logged-in users, aka rogue insiders on your network). According to the original reporter of the bug, Stéphane Chazelas, the OpenSSH vulnerability is about bypassing ssh ForcedCommand settings.

Please note, however, that a full fix for the issue is not available yet (http://seclists.org/oss-sec/2014/q3/679). See https://access.redhat.com/articles/1200223 for possible workarounds.

(NB: Seconding a suggestion by terdon: it would be very nice if Stéphane Chazelas could write down a canonical Q&A on Shellshock.)
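For reference, the widely circulated one-liner for checking whether a given bash binary is vulnerable to the original CVE-2014-6271 (it prints "vulnerable" on unpatched versions and only the test line on patched ones):

    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"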
_unix.337106
Given an .xml output file from an nmap scan, how would you suggest generating an HTML report from this file? nmap proposes the following utilities (reference):

xsltproc:

    xsltproc <nmap-output.xml> -o <nmap-output.html>

Saxon:

    java -jar saxon9.jar -s:<nmap-output.xml> -o:<nmap-output.html>

Xalan:

    Xalan -a <nmap-output.xml> -o <nmap-output.html>

Which Python code can I use instead of one of those commands?

PS: I am looking for a solution valid on Unices and Windows (running bash from Python is not a solution).
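A minimal sketch using lxml, which bundles libxslt and works the same on Unix and Windows (assumes `pip install lxml`; the stylesheet path to nmap's nmap.xsl varies by platform, and all paths here are placeholders):

    import lxml.etree as etree

    # Load the scan output and nmap's bundled XSL stylesheet.
    dom = etree.parse("nmap-output.xml")
    xslt = etree.parse("/usr/share/nmap/nmap.xsl")  # adjust on Windows

    # Apply the transform and write out the HTML report.
    html = etree.XSLT(xslt)(dom)
    with open("nmap-output.html", "wb") as f:
        f.write(etree.tostring(html, pretty_print=True))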
How to generate an nmap HTML report with Python?
python;xml;html;nmap
null
_codereview.61484
I have two CSV files that I need to compare and merge together. There is a 'big' list with many records and columns of data, and a 'small' list of the desired records and some additional data. The ultimate goal is to output the records on the small list joined with the data in the big list. The code below seems to make the correct selects, but I am unsure if I should be looking at using JOIN instead. I like the cleanness of this solution in how it handles the CSV files in so few lines of code (let ... some.Split(',')) and haven't come up with as clean a way using JOIN.

    string[] small = System.IO.File.ReadAllLines(@"I:\\" + Environment.UserName + "\\smallfile.csv");
    string[] big = System.IO.File.ReadAllLines(@"J:\\DROPFOLDERS\\bigfile.csv");

    IEnumerable<MBSGroup> queryMBSObjects =
        from smallLine in small
        let splitSmallLine = smallLine.Split(',')
        from bigLine in big
        let splitBigLine = bigLine.Split(',')
        where ((splitSmallLine[0] == splitBigLine[0] && splitSmallLine[2] == splitBigLine[1]
                && splitSmallLine[0] != "" && splitSmallLine[2] != "" && splitBigLine[1] != "")
            || (splitSmallLine[0] == splitBigLine[0] && splitSmallLine[2] == splitBigLine[5]
                && splitSmallLine[0] != "" && splitSmallLine[2] != "" && splitBigLine[5] != ""))
        select new MBSGroup()
        {
            // write data to merge object
        };

    List<MBSGroup> MBSlists = queryMBSObjects.ToList();
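For comparison, a sketch of the first match condition expressed as a LINQ join (untested; MBSGroup's members are unknown here, and the second condition against splitBigLine[5] would need a second join whose results you Concat with this one). A join builds a hash lookup internally, so it avoids the O(n*m) pairwise scan of the nested from clauses:

    var query =
        from smallLine in small
        let s = smallLine.Split(',')
        where s[0] != "" && s[2] != ""
        join b in big.Select(line => line.Split(','))
            on new { Key = s[0], Val = s[2] }
            equals new { Key = b[0], Val = b[1] }
        select new MBSGroup()
        {
            // write data to merge object
        };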
Compare / Join two object collections LINQ
c#;linq;ienumerable
null
_unix.257684
I have a bash script where I'm trying to assign a heredoc string to a variable using read, and it only works if I use read with the -d '' option, i.e. read -d '' <variable>.

Script block:

    #!/usr/bin/env bash

    function print_status() {
        echo
        echo $1
        echo
    }

    read -d '' str <<- EOF
        Setup nginx site-config
        NOTE: if an /etc/nginx/sites-available config already exists for this website,
        this routine will replace existing config with template from this script.
    EOF

    print_status $str

I found this answer on SO, which is where I copied the command from. It works, but why? I know the first invocation of read stops when it encounters the first newline character, so if I use some character that doesn't appear in the string, the whole heredoc gets read in, e.g.

    read -d '|' <variable> -- this works
    read -d'' <variable> -- this doesn't

I'm sure it's simple, but what's going on with this read -d '' command option?
How does the -d option to bash read work?
bash;shell script;quoting;options;read
I guess the question is why read -d '' works though read -d'' doesn't.

The problem doesn't have anything to do with read but is a quoting problem. A '' which is part of a string (word) simply is not recognized at all. Let the shell show you what it sees / executes:

    start cmd:> set -x

    start cmd:> echo read -d " " foo
    + echo read -d ' ' foo

    start cmd:> echo read "-d " foo
    + echo read '-d ' foo

    start cmd:> echo read -d "" foo
    + echo read -d '' foo

    start cmd:> echo read -d"" foo
    + echo read -d foo
_cs.2557
I created a simple regular expression lexer and parser to take a regular expression and generate its parse tree. Creating a non-deterministic finite state automaton from this parse tree is relatively simple for basic regular expressions. However I can't seem to wrap my head around how to simulate backreferences, lookaheads, and lookbehinds.From what I read in the purple dragon book I understood that to simulate a lookahead $r/s$ where the regular expression $r$ is matched if and only if the match is followed by a match of the regular expression $s$, you create a non-deterministic finite state automaton in which $/$ is replaced by $\varepsilon$. Is it possible to create a deterministic finite state automaton that does the same?What about simulating negative lookaheads and lookbehinds? I would really appreciate it if you would link me to a resource which describes how to do this in detail.
How to simulate backreferences, lookaheads, and lookbehinds in finite state automata?
automata;finite automata;regular expressions
First of all, backreferences can not be simulated by finite automata as they allow you to describe non-regular languages. For example, ([ab]^*)\1 matches $\{ww \mid w \in \{a,b\}^*\}$, which is not even context-free.

Look-ahead and look-behind are nothing special in the world of finite automata, as we only match whole inputs here. Therefore, the special semantics of "just check but don't consume" is meaningless; you just concatenate and/or intersect checking and consuming expressions and use the resulting automata. The idea is to check the look-ahead or look-behind expressions while you consume the input and store the result in a state.

When implementing regexps, you want to run the input through an automaton and get back start and end indices of matches. That is a very different task, so there is not really a construction for finite automata. You build your automaton as if the look-ahead or look-behind expression were consuming, and change your index storing resp. reporting accordingly.

Take, for instance, look-behinds. We can mimic the regexp semantics by executing the checking regexp concurrently with the implicitly consuming match-all regexp. Only from states where the look-behind expression's automaton is in a final state can the automaton of the guarded expression be entered. For example, the regexp /(?<=c)[ab]+/ (assuming $\{a,b,c\}$ is the full alphabet) -- note that it translates to the regular expression $\{a,b,c\}^*c\{a,b\}^+\{a,b,c\}^*$ -- could be matched by the automaton shown in the source, and you would have to

store the current index as $i$ whenever you enter $q_2$ (initially or from $q_2$), and

report a (maximum) match from $i$ to the current index ($-1$) whenever you hit (leave) $q_2$.

Note how the left part of the automaton is the parallel automaton of the canonical automata for [abc]* and c (iterated), respectively.

Look-aheads can be dealt with similarly; you have to remember the index $i$ when you enter the main automaton, the index $j$ when you leave the main automaton and enter the look-ahead automaton, and report a match from $i$ to $j$ only when you hit the look-ahead automaton's final state.
_unix.18796
Assume I'm logged in as user takpar:

    takpar@skyspace:/$

As root, I've added takpar as a member of group webdev using:

    # usermod -a -G webdev takpar

But it seems it has not been applied, because, for example, I can't get into a webdev directory that has read permission for the group:

    400169 drwxr-x--- 3 webdev webdev 4.0K 2011-08-15 22:34 public_html

    takpar@skyspace:/home/webdev/$ cd public_html/
    bash: cd: public_html/: Permission denied

But after a reboot I have access as I expect. As this kind of group changing is in my routine, is there any way to apply changes without needing a reboot?

Answer: It seems there is no way to make the current session know the new group; for example, the file manager won't work with the new changes. But a re-login will do the job. The su command is also appropriate for temporary commands in the current session.
How to apply changes of newly added user groups without needing to reboot?
permissions;users;group
Local solution: use su yourself to login again. In the new session you'll be considered as a member of the group.

Man pages for newgrp and sg might also be of interest to change your current group id (and login into a new group):

To use webdev's group id (and privileges) in your current shell use:

    newgrp webdev

To start a command with some group id (and keep current privileges in your shell) use:

    sg webdev -c command

(sg is like su but for groups, and it should work without the group password if you are listed as a member of the group in the system's data)
_webapps.107295
I have a blog on WordPress.com and I want to add a link to a search page as a menu item. However, I can't seem to find how to do this. What I have found so far is how to make a search template, but I don't think I can do that on WordPress.com. Mainly my blog has a title, a menu of links beneath it, and then the loop of posts. I do not have any side columns. I have experience with fully hosted WordPress installs and have written plugins, but in this case my blog is located at WordPress.com and the UI is somewhat different. I don't know how to do this simple task. Please be gentle.

FYI: I also tried creating a new page with a search form inside of it, but when I view the published page the form disappears (when I look at the DOM there's no form, just a span tag). Here is my code:

    <form class="search-form" role="search" action="https://myblog.wordpress.com/" method="get">
        <label>
            <span class="screen-reader-text">Search for:</span>
            <input class="search-field" name="s" type="search" value="" placeholder="Search" />
        </label>
        <input class="search-submit" type="submit" value="Search" />
    </form>

UPDATE: It looks like WordPress.com does not allow forms inside posts:

"Users of Wordpress.com-hosted blogs, please note that they do not permit placing forms on your blog. You may want to have AWeber host your form instead, and simply provide a link to the form on your blog." (source)

So if that's right, then my HTML search form is being stripped out. I can't believe that WordPress doesn't have a search page.
Creating a search menu item to a search page
search;wordpress.com
null
_datascience.10000
I have read that HMMs, Particle Filters and Kalman filters are special cases of dynamic Bayes networks. However, I only know HMMs and I don't see the difference to dynamic Bayes networks. Could somebody please explain? It would be nice if your answer could be similar to the following, but for Bayes networks:

Hidden Markov Models

A Hidden Markov Model (HMM) is a 5-tuple $\lambda = (S, O, A, B, \Pi)$:

$S \neq \emptyset$: A set of states (e.g. beginning of phoneme, middle of phoneme, end of phoneme)

$O \neq \emptyset$: A set of possible observations (audio signals)

$A \in \mathbb{R}^{|S| \times |S|}$: A stochastic matrix which gives the probabilities $(a_{ij})$ of getting from state $i$ to state $j$.

$B \in \mathbb{R}^{|S| \times |O|}$: A stochastic matrix which gives the probabilities $(b_{kl})$ of getting observation $l$ in state $k$.

$\Pi \in \mathbb{R}^{|S|}$: Initial distribution to start in one of the states.

It is usually displayed as a directed graph, where each node corresponds to one state $s \in S$ and the transition probabilities are denoted on the edges. Hidden Markov Models are called "hidden" because the current state is hidden. The algorithms have to guess it from the observations and the model itself. They are called "Markov" because for the next state only the current state matters.

For HMMs, you give a fixed topology (number of states, possible edges). Then there are 3 possible tasks:

Evaluation: given a HMM $\lambda$, how likely is it to get observations $o_1, \dots, o_t$? (Forward algorithm)

Decoding: given a HMM $\lambda$ and observations $o_1, \dots, o_t$, what is the most likely sequence of states $s_1, \dots, s_t$? (Viterbi algorithm)

Learning: learn $A, B, \Pi$: Baum-Welch algorithm, which is a special case of Expectation Maximization.

Bayes networks

Bayes networks are directed acyclic graphs (DAGs) $G = (\mathcal{X}, \mathcal{E})$. The nodes represent random variables $X \in \mathcal{X}$. For every $X$, there is a probability distribution which is conditioned on the parents of $X$:

$$P(X|\text{parents}(X))$$

There seem to be (please clarify) two tasks:

Inference: given some variables, get the most likely values of the other variables. Exact inference is NP-hard. Approximately, you can use MCMC.

Learning: how you learn those distributions depends on the exact problem (source):

known structure, fully observable: maximum likelihood estimation (MLE)
known structure, partially observable: Expectation Maximization (EM) or Markov Chain Monte Carlo (MCMC)
unknown structure, fully observable: search through model space
unknown structure, partially observable: EM + search through model space

Dynamic Bayes networks

I guess dynamic Bayes networks (DBNs) are also directed probabilistic graphical models. The variability seems to come from the network changing over time. However, it seems to me that this is equivalent to only copying the same network and connecting every node at time $t$ with the corresponding node at time $t+1$. Is that the case?
What is the difference between a (dynamic) Bayes network and a HMM?
bayesian networks;pgm
null
_datascience.14524
In Hinton's talk "What's wrong with convolutional nets?" (late 2014 or early 2015, I guess) he talks about capsules as a way to make a modular CNN. Is there any publicly available implementation of, or paper about, this idea?
Is there any public implementation / publication of Hinton's capsules idea?
convnet
null
_unix.257366
I'm running the MATE desktop environment on two monitors on Debian, using the NVIDIA proprietary driver. I'm facing a problem where, after logging in, panel widgets like the clock or the Workspace Switcher move to the left, so there's a large amount of space between the widgets and the right edge of the screen (screenshot omitted).

I believe that this problem is related to the different resolutions of the two monitors (2560x1440 vs 1680x1050). The distance from the left edge of the screen to the right edge of the panel widget is exactly 1680 pixels. To try to fix this, I added manual configuration to my xorg.conf:

    Section "Monitor"
        Identifier "DP-1"
        Option "Primary" "true"
    EndSection

    Section "Monitor"
        Identifier "DVI-I-1"
        Option "Primary" "false"
    EndSection

This seems to prevent my widgets from being moved, but after logging in I got a message stating "Could not apply the stored configuration for the monitor", so I'm guessing something's weird here, too. Is it a good idea to try to fix this in xorg.conf? Does that prevent me from configuring my displays via mate-display-properties? Is there a better way to fix this problem?
How should I configure my multi-monitor setup on MATE?
xorg;mate;multi monitor
null
_unix.306990
I've installed Linux Mint 18 on my machine, but it has a Radeon HD graphics card, which is not supported by AMD on systems based on Ubuntu 16.04 (according to some research I did). So, after the install, the system was booting into a black screen and wouldn't go anywhere. I did some research and found that I could edit the grub boot commands and add radeon.modeset=0. That worked, but I don't know what that does.

Is it OK to run permanently with this parameter? Does this lower my overall machine performance? I don't really care about graphics; I just use my machine for ordinary stuff. Thanks, guys.
Linux Mint with radeon.modeset=0
linux mint;radeon;proprietary drivers
The xxx.modeset=0 disables kernel mode setting for the hardware. In other words, this option tells Linux not to try activating and using the incompatible hardware, which is likely the source of your problems. When this is used, your computer will still be functional, but without the benefits of hardware acceleration provided by the graphics card. This will only affect you when you use graphics-intensive programs such as complex, high-end games. Otherwise it's just fine.
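If you decide to keep it, the usual way to make the parameter permanent on Mint/Ubuntu-style systems (a sketch; check your existing line before editing) is to add it to the kernel command line in /etc/default/grub and regenerate the boot configuration:

    # In /etc/default/grub, append the parameter to the default options:
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.modeset=0"
    # then regenerate the grub configuration:
    sudo update-grub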
_unix.9756
I'm writing a program for Systems Programming in Unix, and one of the requirements is to process all possible error returns from system calls.So, rather than having a function tailored to each system call, I'd like to have one function take care of that responsibility. So, are all error number returns unique? If not, what areas of overlap exist?
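Not an answer to the uniqueness question, but a sketch of the kind of central handler the question describes, relying on the fact that errno plus strerror(3) already maps every error code to a message (the function name is made up):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Central check: report any failed call through errno and bail out. */
    void check(int ret, const char *what)
    {
        if (ret == -1) {
            fprintf(stderr, "%s failed: %s (errno %d)\n",
                    what, strerror(errno), errno);
            exit(EXIT_FAILURE);
        }
    }

    /* Usage: check(close(fd), "close"); */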
Are all system call error numbers unique?
system calls;system programming;error handling
null
_unix.313374
When I first got my Ubuntu server set up, I noticed that the resolution was 640 x 480 in grub, and my screen is 1080p. I followed this tutorial ("How to set the resolution in text consoles (troubleshoot when any vga= fails)" by user jasonwryan), editing /etc/default/grub to add and change the following lines:

    GRUB_GFXMODE=1920x1080x32
    GRUB_GFXPAYLOAD_LINUX=1920x1080x32

The resolution seems to have changed, but not to 1080p :( Only to 720 x 400 (and 70 Hz, weirdly; I had 60 Hz at 640 x 480). After this I followed more tutorials on this site and others, but with no results / the same 720 x 400 resolution.

I know I have a weird video card: Matrox G200EH (MGA G200EH). When I am home, if you want, I will give you some logs or more info. I have the latest Ubuntu server version: Ubuntu 16.04.1 Server (64-bit). Thank you!
Can't change resolution in grub in ubuntu server
ubuntu;grub;resolution
null
_unix.318422
We have an XYZ batch process that kicks off at 2:00 AM every day. This XYZ is configured to start on port 59070. Once it finishes, the port is open again. But recently we had a problem where another process was using 59070, and when the XYZ process started, it failed to run. As a workaround, we updated the configuration to a different port, 59071, and the process ran OK. My query: is there any way we can block port 59070 and ensure no other process uses it? We are using Solaris 10.
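One possibility to explore on Solaris 10 (a sketch; verify against your own system and docs): mark the port as privileged with ndd, so only root can bind it and ordinary processes are kept off:

    # Add 59070 to the list of privileged TCP ports (root-only bind).
    ndd -set /dev/tcp tcp_extra_priv_ports_add 59070

    # Confirm it is listed.
    ndd /dev/tcp tcp_extra_priv_ports

Note that ndd settings do not survive a reboot on their own, so this would have to be applied from a startup script, and it only helps if the batch process runs with sufficient privilege to bind the port.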
How to make a port dedicated for a specific scheduled process?
networking;solaris
null
_softwareengineering.337718
I'm working on an architecture document based on quality attributes. I'm trying to explain our search algorithm, based on tags, historical data and some BI information, as a way of favouring a quality attribute. I read some material about this (Software Architecture in Practice, Third Edition): usability is the closest I got (in the sense of making it easy to search for something), but that is not quite right. I've read a list of quality attributes on Wikipedia: maybe correctness or relevance would apply? Are those even formal quality attributes?
What quality attribute would be favored using a rich search algorithm?
architecture;documentation;enterprise architecture;quality attributes
If you're looking for a formal attribute, you'd better stick to ISO/IEC 9126 or its successor ISO/IEC 25010. The following are functionality attributes:

correctness corresponds to accuracy. This means that it produces the correct results, with a sufficient degree of precision.

relevance seems to match suitability. This means that it provides the right functions for the user's tasks and objectives.

I understand you want to highlight how your algorithm is favoring/contributing to quality: I'm not sure that correctness is a good match here, unless your algorithm has some feature that must be used to achieve correct results. relevance could be a good match, as the algorithm will somehow make searching more efficient, helping the user to better achieve his/her goals.

Usability is a group of several attributes that are about ease of use and attractivity. It's too general for a document about quality attributes. The following would be more precise, and also seem to suit your needs:

operability is about facilitating the user's control of the software while meeting the user's expectations (e.g. better, more targeted search results, thus avoiding lots of browsing across correct but less relevant results).

attractiveness is about the emotions of the user. For example, if this algorithm will make users like your software more than comparable software using another search algorithm.
_reverseengineering.15151
A while back, on a Linux system, I found a strange binary being executed at seemingly random times with some strange arguments, briefly appearing in ps. There was nothing to be found in the usual suspects like init.d, .bashrc and crontab. I found where the binary was located, renamed it, and put a script in its place. The script displayed and passed through all the received arguments, and recorded the time and duration of the binary's execution, as well as some /proc information. I was able to figure out what called it, when, and with what arguments. What is the name of this honeypotting, MITM technique?
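A sketch of the kind of wrapper script described above (paths and the renamed binary's name are hypothetical):

    #!/bin/sh
    # Log invocation time, arguments, and the caller's command line, then
    # hand control to the real (renamed) binary so nothing appears amiss.
    log=/var/tmp/wrapper.log
    {
        printf '%s args: %s\n' "$(date '+%F %T')" "$*"
        printf 'parent (%s): ' "$PPID"
        tr '\0' ' ' < "/proc/$PPID/cmdline"
        echo
    } >> "$log"
    exec /usr/local/bin/strange-binary.real "$@"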
What is the name of this technique?
linux;malware;binary
null
_codereview.87000
When developing a user interface that allows sorting by various different fields, I noticed that Array.prototype.sort() is extremely slow for large inputs. After some investigation, I found out that the problem was that my compare function was being called upwards of 8.5 million times for an input array of length 7500. If you ask me, that looks like \$O(n^2)\$ behavior. To address this, I decided to implement a merge sort. Although I have tested my code and it appears to be working properly, it would still be helpful to have some other people look at it and see if there is more room for improvement or if it misbehaves in some subtle or pathological cases.

    var mergesort = function (array, /* optional */ cmp) {
        /*
            Merge sort.
            On average, two orders of magnitude faster than Array.prototype.sort()
            for large arrays, with potentially many equal elements.
            Note that the default comparison function does not coerce its
            arguments to strings.
        */
        if (cmp === undefined) {
            // Note: This is not the same as the default behavior for Array.prototype.sort(),
            // which coerces elements to strings before comparing them.
            cmp = function (a, b) {
                'use asm';
                return a < b ? -1 : a === b ? 0 : 1;
            };
        }

        function merge(begin, begin_right, end) {
            'use asm';
            // Create a copy of the left and right halves.
            var left_size = begin_right - begin, right_size = end - begin_right;
            var left = array.slice(begin, begin_right), right = array.slice(begin_right, end);
            // Merge left and right halves back into original array.
            var i = begin, j = 0, k = 0;
            while (j < left_size && k < right_size)
                if (cmp(left[j], right[k]) <= 0)
                    array[i++] = left[j++];
                else
                    array[i++] = right[k++];
            // At this point, at least one of the two halves is finished.
            // Copy any remaining elements from left array back to original array.
            while (j < left_size) array[i++] = left[j++];
            // Copy any remaining elements from right array back to original array.
            while (k < right_size) array[i++] = right[k++];
            return;
        }

        function msort(begin, end) {
            'use asm';
            var size = end - begin;
            if (size <= 8) {
                // By experimentation, the sort is fastest when using native sort for
                // arrays with a maximum size somewhere between 4 and 16.
                // This decreases the depth of the recursion for an array size where
                // O(n^2) sorting algorithms are acceptable.
                var sub_array = array.slice(begin, end);
                sub_array.sort(cmp);
                // Copy the sorted array back to the original array.
                for (var i = 0; i < size; ++i)
                    array[begin + i] = sub_array[i];
                return;
            }
            var begin_right = begin + (size >> 1);
            msort(begin, begin_right);
            msort(begin_right, end);
            merge(begin, begin_right, end);
        }

        msort(0, array.length);
        return array;
    };
Fast merge sort in JavaScript
javascript;mergesort
null
_cstheory.31959
Found this on graphclasses.org. Two papers give conflicting results for coloring $P_5$-free graphs, which appear to imply $P=NP$.

From "Polynomial-time algorithm for vertex k-colorability of P_5-free graphs":

Abstract. We give the first polynomial-time algorithm for coloring vertices of P5-free graphs with k colors. This settles an open problem and generalizes several previously known results.

From "Some new hereditary classes where graph coloring remains NP-hard", p. 5:

... Coloring is NP-hard in $2K_2$-free ... and $\{C_5,P_5\} \cup \ldots$-free

$2K_2$-free $\subset$ $P_5$-free, and the other class contains $P_5$. According to graphclasses, another reason for hardness is clique cover on the complement (another paper); click +Details for references.

Question: What is wrong with this seeming contradiction?
Two paper appear to imply collapse via coloring $P_5$-free graphs
cc.complexity theory;graph theory;graph colouring;graph classes
One of those papers is about k-colorability, and the other of those papers does not specify a constant number of colors. (For a similar example, consider k-variable-SAT versus SAT.)
_unix.363734
Fresh install of Fedora 25; updated everything. I installed Bumblebee following the instructions here: https://fedoraproject.org/wiki/Bumblebee (closed-source managed drivers with multilib support). optirun works. But as soon as I log in, the fans kick in at 100% and never shut off. I tried the workaround here: http://forums.fedoraforum.org/showthr... but it didn't turn off the fans. This is on an Asus ZenBook Pro UX501VW.
Fans on 100% after Bumblebee Install on Fedora 25
fedora;nvidia;acpi;bumblebee;fan
null
_codereview.1741
I need to figure out the best way to deal with memory pre-allocation. Below is pseudo-code for what I am doing now, and it seems to be working fine. I am sure there is a better way to do this; I would like to see if anyone has any good ideas.

In Thread A I need to allocate 300 MB of memory and zero it out:

    char* myMemory = new char[10*30*1024*1024];

In Thread B I incrementally get 10 sets of data, 30 MB each. As I get this data it must be written to memory:

    int index1 = 0;
    char* newData = getData(...); // get a pointer to 30 MB of data
    memcpy(&myMemory[index1*30*1024*1024], &newData[0], 30*1024*1024);
    SetEvent(...); // tell Thread C that I have written new data
    // use index1 as a circular buffer index over the 10 slots
    if (index1 < 9) index1++;
    else index1 = 0;

In Thread C, when new data is written, I need to get it and process it:

    int index2 = 0;
    char* dataToProcess = new char[30*1024*1024];
    if (event fires) { // if event in Thread B is set
        memcpy(&dataToProcess[0], &myMemory[index2*30*1024*1024], 30*1024*1024);
        processData(dataToProcess);
    }
    // use index2 as a circular buffer index over the 10 slots
    if (index2 < 9) index2++;
    else index2 = 0;
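For comparison, a sketch of an alternative handoff using standard C++11 primitives instead of SetEvent, so Thread C can never miss a notification even when several slots fill quickly (untested; the slot bookkeeping shown is minimal):

    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cv;
    int ready = 0;   // count of filled, unprocessed slots

    // Thread B, after the memcpy into slot index1:
    {
        std::lock_guard<std::mutex> lk(m);
        ++ready;
    }
    cv.notify_one();

    // Thread C, before copying slot index2:
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return ready > 0; });
        --ready;
    }
    // ... then copy slot index2 out and process it ...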
Multi-Threaded Memory Preallocation
c++;multithreading
null
_codereview.24103
Implement atoi to convert a string to an integer.

Requirements for atoi: The function first discards as many whitespace characters as necessary until the first non-whitespace character is found. Then, starting from this character, it takes an optional initial plus or minus sign followed by as many numerical digits as possible, and interprets them as a numerical value. The string can contain additional characters after those that form the integral number, which are ignored and have no effect on the behavior of this function. If the first sequence of non-whitespace characters in str is not a valid integral number, or if no such sequence exists because either str is empty or it contains only whitespace characters, no conversion is performed. If no valid conversion could be performed, a zero value is returned. If the correct value is out of the range of representable values, INT_MAX (2147483647) or INT_MIN (-2147483648) is returned.

The following is my code:

    int atoi(const char *str) {
        int sign = 1;
        long long i = 0, j = 0;
        while ((*str) != '\0' && isspace(*str)) {
            ++str;
        }
        if (((*str) != '\0') && ((*str) == '+' || (*str) == '-')) {
            if ((*str) == '-') {
                sign = -1;
            }
            ++str;
        }
        if (((*str) != '\0') && (!isdigit(*str))) {
            return 0;
        }
        while (((*str) != '\0') && (isdigit(*str))) {
            i = i * 10 + (*str - '0');
            j = i * sign;
            cout << j << endl;
            if (j > INT_MAX) {
                return INT_MAX;
            }
            if (j < INT_MIN) {
                return INT_MIN;
            }
            ++str;
        }
        return j;
    }
string to integer (implement atoi)
c;strings;interview questions
if (((*str) != '\0') && (!isdigit(*str))) {
    return 0;
}

You don't need this condition, because of the condition in the while loop afterwards. If that fails the first time, j with value 0 is returned anyway.

And a minimal (and maybe unnecessary) optimization:

if (j >= INT_MAX) {
    return INT_MAX;
}
if (j <= INT_MIN) {
    return INT_MIN;
}

Why look for the next character in the string (which would only get appended to the number and increase its magnitude) if the number already has the maximum/minimum value possible (INT_MAX/INT_MIN)?
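For what it's worth, the digit loop with both changes folded in would look something like this (keeps your long long accumulator; a sketch, untested):

while (((*str) != '\0') && (isdigit(*str))) {
    i = i * 10 + (*str - '0');
    j = i * sign;
    if (j >= INT_MAX) { return INT_MAX; }  /* early exit once the bound is hit */
    if (j <= INT_MIN) { return INT_MIN; }
    ++str;
}
return j;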
_webapps.86203
I am trying to use a form to create a list of jobs for our student employees, which exists as an editable Google sheet. I would like any staff member to be able to add jobs that need to be taken, by using the form, and students could view a sheet for any openings and put their names down for any they wanted. With help from this site, and some modifications, I have a system that works to place the info from the forms into a list. Where my problems start coming in is that I want to be able to order the sheet by status and then by date. Here is the previous thread.

Essentially, the form gathers the following data:

+---+------------+-------+-------+-----+---------+-------+----------+------+--------+--------+--------+
|   | A          | B     | C     | D   | E       | F     | G        | H    | I      | J      | K      |
+---+------------+-------+-------+-----+---------+-------+----------+------+--------+--------+--------+
| 1 | Timestamp  | Date  | Start | End | service | Event | Building | Room | #Cat 1 | #Cat 2 | #Cat 3 |
+---+------------+-------+-------+-----+---------+-------+----------+------+--------+--------+--------+

Timestamp | Date | Start Time | End Time | Service | Event | Building | Room | #Category 1 | #Category 2 | #Category 3

where the categories are different classifications of worker, with a quantity for each. Upon running the following script, the data goes into a second sheet, "Job List", with the following fields:

+---+-----------+-------+----------+-------+-----+---------+-------+----------+------+-------+--------+----------+
|   | A         | B     | C        | D     | E   | F       | G     | H        | I    | J     | K      | L        |
+---+-----------+-------+----------+-------+-----+---------+-------+----------+------+-------+--------+----------+
| 1 | Techname  | Date  | Category | Start | End | service | Event | Building | Room | Notes | Status | DateSort |
+---+-----------+-------+----------+-------+-----+---------+-------+----------+------+-------+--------+----------+

Tech name (where students manually add their name if they want a job) | Date | Type | Start | End | Service | Event | Building | Room | Notes | Status* | DATESORT**

*Status is an array formula in cell K1, which auto-populates the column as "filled", "unfilled" or "urgent" using the formula:

=ARRAYFORMULA(If(Row(A1:A)=1,"STATUS",IF(B1:B,If(ISBLANK(A1:A),If(NOT(ISBLANK(B1:B)),IF(B1:B=TODAY(),"Urgent","Unfilled"),""),"Filled"),"")))

**DateSort is in L1 and simply converts the date to a value:

=ARRAYFORMULA(If(Row(A1:A)=1,"DATESort",IF(ISBLANK(B1:B),"",DATEVALUE(B1:B))))

Here is my script:

function processJobs() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getActiveSheet();
  var values = sheet.getDataRange().getValues();
  var output = [];
  for (var i = 1; i < values.length; i++) {
    for (var j = 0; j < 11; j++) {
      output = output.concat(repeat(values[i], values[0][j+9], values[i][j+9]));
    }
  }
  outputSheet = ss.getSheetByName("Job List");
  outputSheet.getRange(2, 2, output.length, output[0].length).setValues(output);
  // runs function to order sheets by Status then date (as datevalue)
  OrderSheet();
}

// Adds as many duplicate rows as needed for each job type
function repeat(row, category, quantity) {
  var arr = [];
  for (var i = 0; i < quantity; i++) {
    arr.push([row[1], category].concat(row.slice(2,8)));
  }
  return arr;
}

// Invoked to order the Job List from earliest to latest by date of shift
function OrderSheet() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var JobList = ss.getSheetByName("Job List");
  var data = JobList.getDataRange();
  data.sort([10,11]);
}

You will notice that I have a separate line for each person that would need to be on a shift, so if category 3 needed three people, there would be 3 different identical rows. The sheet works fine up until the sort. The rows are in fact sorted (first by status and then by date), however they are all moved to the bottom of the sheet (row 1001 and up).
I also notice that if there were student names entered in the name column, these values get erased when the script runs. Any suggestions on rectifying these issues? I think that it may have something to do with the array formulae, which are causing all rows to be seen as having data. Ultimately, when this is finished, I hope to add a script that will check the sheet every morning, and if a status is "Urgent" (meaning it is unfilled as of the day of the assignment), an email will be sent out to alert us of the vacancy.
sort function results in all of my data going to the bottom of the sheet
google spreadsheets;google apps script;google forms
Cells containing the empty string "" appear empty but are not (isblank returns FALSE for them). They get sorted ahead of non-empty strings, which is not what you want. To avoid this, replace

if(isblank(B1:B), "", datevalue(B1:B))

by the simpler

iferror(datevalue(B1:B))

The command iferror returns its (optional) second argument if there is an error evaluating the first argument; otherwise it leaves the cell blank. Blank cells are sorted to the bottom.

A potential drawback of the above is that misformatted dates are simply ignored, while you may want to see and fix them. This is something you can address separately; e.g., have a column with

if(isblank(B1:B), 0, if(iserror(datevalue(B1:B)), 1, 0))

and sum over it to get the total number of non-empty but invalid date cells.

Another potential issue is that you are sorting a range containing arrayformula. The sort may move the formula to another row, creating a mess. It's safer to exclude the header row from sorting. Replacing

var data = JobList.getDataRange();

by

var data = JobList.getDataRange().offset(1,0);

would do it.
_unix.4820
What is the difference between "Copied context" output format and "Unified context" output format when taking a diff?

diff -NBur dir1/ dir2/
diff -NBcr dir1/ dir2/
What is the difference between "Copied context" output format and "Unified context" output format when taking a diff?
linux;diff
Apparently you've misread the manual. The -u flag is for unified context, not Unicode, and -c is for copied context, not 'Context format':

-c  -C NUM  --context[=NUM]  Output NUM (default 3) lines of copied context.
-u  -U NUM  --unified[=NUM]  Output NUM (default 3) lines of unified context.

The most straightforward way to find out the difference is to try it out:

$ cat >1
line
diff
more
^D
$ cat >2
line
ffid
more
^D
$ diff -c 1 2
*** 1	2010-12-14 09:08:48.019797000 +0200
--- 2	2010-12-14 09:08:56.029797001 +0200
***************
*** 1,3 ****
  line
! diff
  more
--- 1,3 ----
  line
! ffid
  more
$ diff -u 1 2
--- 1	2010-12-14 09:08:48.019797000 +0200
+++ 2	2010-12-14 09:08:56.029797001 +0200
@@ -1,3 +1,3 @@
 line
-diff
+ffid
 more

Do you see the difference?
_unix.125683
Back before I knew what I was doing, I installed my Arch system with a 10 GB root and an 85 GB /home partition. Now my 10 GB root (where my pacman installs to) is full and I'm unable to get gparted running on my computer (it's a mac) to resize because my CD-ROM drive isn't working.Is there an easy way to set my system up so that pacman installs and runs from my home partition where there is plenty of room?
Moving pacman from root to /home partition
arch linux;partition;pacman
Moving pacman is not the right approach. You do, however, have a couple of options. All of them assume that you already have a full and tested backup of your data.

First, make sure that you have cleared all available space in pacman's cache with pacman -Sc; pass a second c to remove everything. There is a pacman tool for more fine-grained control of this; see paccache --help for details.

If that still does not provide enough room, then you can resize your partitions with fdisk, which is part of util-linux and is installed in the base group. There are a number of guides online for doing this.

The optimal solution, though, would be to grasp the nettle and do a full reinstall using LVM so that you have the flexibility to work around this issue (as well as other benefits like snapshotting) in the future. You could also fully encrypt your drive(s) while you are at it with LVM on LUKS.
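For example (flag details may differ between versions; check the man pages):

pacman -Sc    # remove cached versions of packages that are no longer installed
pacman -Scc   # empty the package cache completely
paccache -rk1 # keep only the most recent cached version of each package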
_unix.387695
My goal is to set the environment variable JAVA_HOME for a user named grid, which is just the traditional user name for hadoop. My machine is a virtual machine deployed in VMware Workstation, 64-bit CentOS 7.

What I do: I edit the file that is supposed to change my user-specific environment variables, i.e. ~/.bash_profile. Below is my code:

export JAVA_HOME=/home/grid/jdk
export HADOOP_HOME=/home/grid/hadoop
PATH=$PATH:$HADOOP_HOME:$JAVA_HOME
export PATH

Very interestingly, when I log in as user grid and run echo $HADOOP_HOME, I get /home/grid/hadoop, but I get nothing as a response when I run echo $JAVA_HOME; literally just nothing, an empty string or null, something like that. I run cd $JAVA_HOME and end up in my home directory.

I tried changing the JDK folder; it didn't work. I tried the same code on another machine; it worked. I put the code in /etc/profile and logged in as root, and the same thing happened: good for echo $HADOOP_HOME but no response for echo $JAVA_HOME.
Why is my environment variable only partially set?
bash;environment variables
null
_softwareengineering.179849
One of the problems web apps have compared to native apps, especially on the mobile front, is the constant need to re-download each web page on request. Ultimately, this leads to slower performance. What if web apps only downloaded new pages when they're actually needed, not simply because they're requested?

For example: perhaps the server can store a web page version in a cookie. Every slight change to the page on the server side changes the version number. Now, instead of the browser requesting a new page each time, why not just check the version number and have the server send the page only if the versions differ? If the page is the same, the user can just use a cached page.

I'm sure browsers wouldn't necessarily have to change to accommodate this, correct?
Alternative Web model
web applications;http
You are talking about client-side caching, and it's been available and extensively used for many years. There are several ways to implement and control client-side caching, mostly involving changing the HTTP headers that the server sends along with the initial page response, similar to the cookie method you propose. Here's a good summary: http://www.librosweb.es/symfony_1_2_en/capitulo12/http_11_and_clientside_caching.html
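For illustration, a validation-based exchange looks roughly like this (header values invented):

First response:

HTTP/1.1 200 OK
Cache-Control: max-age=3600
ETag: "abc123"

A later request for the same page:

GET /page.html HTTP/1.1
If-None-Match: "abc123"

And the server's reply when nothing changed, so no page body is re-sent:

HTTP/1.1 304 Not Modified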
_codereview.170965
Related to this code golf challenge, I tried to find acronyms with Haskell without using regular expressions. The idea is to split the input string at every space or dash before finally gluing the heads of these parts together, if they are uppercase.

This is my code:

import System.Environment
import Data.Char

main :: IO ()
main = do
    [inp] <- getArgs -- get input from the command line
    putStrLn $ getAcronym inp

getAcronym :: String -> String
getAcronym [] = []
getAcronym s = foldr step [] parts
  where
    parts = split isWordSep s -- split into words
    step x acc = if isUpper . head $ x then head x : acc else acc -- glue uppercase heads together

split :: (a -> Bool) -> [a] -> [[a]]
split p [] = []
split p s@(x:xs)
    | p x       = split p xs -- discard leading separators
    | otherwise = w : split p r -- continue with the rest
  where
    (w, r) = break p s -- separate prefix

isWordSep :: Char -> Bool
isWordSep x = x == ' ' || x == '-'

As this really seems like a very simple problem, my code looks far too complex. Do you have any helpful improvements to slim down my code?
Find acronyms in Haskell
beginner;strings;haskell
With the help of Gurkenglas, I have found a good solution for this problem.

First, the getAcronym function can be dramatically reduced by using higher-order functions and function composition:

getAcronym :: String -> String
getAcronym = filter isUpper . map head . split isWordSep

Second, the split function can be replaced with Data.List.Split's wordsBy function, reducing the whole code to the following:

import System.Environment
import Data.Char
import Data.List.Split (wordsBy)

main :: IO ()
main = do
    [inp] <- getArgs -- get input from the command line
    putStrLn $ getAcronym inp

getAcronym :: String -> String
getAcronym = filter isUpper . map head . wordsBy isWordSep

isWordSep :: Char -> Bool
isWordSep x = x == ' ' || x == '-'
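A quick sanity check (assuming the code is saved as acronym.hs and the split package is installed):

$ runghc acronym.hs "Graphics Interchange Format"
GIF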
_codereview.19710
Let's say I have three files: a page that is used for searching data called search, a file that handles requests called controller, and a file that has functions which the controller calls based on what POST parameter is passed. From the search page, I post some data and specify my action (search) via AJAX to the controller. The controller then takes that and calls a function that returns the data (which is inserted into the appropriate place in the DOM), which looks something like this:

<?php
function fetchStudentData($search, $roster) {
    // If a user is not logged in or the query is empty, don't do anything
    if (!isLoggedIn() || empty($search)) {
        return;
    }

    global $db;

    // Placement roster search
    if ($roster == "placement") {
        $query = $db->prepare("SELECT t1.id, t2.placement_date, t2.placement_room, t1.student_id, t1.student_type,
                t1.last_name, t1.first_name, t1.satr, t1.ptma, t1.ptwr, t1.ptre, t1.ptlg, t1.waiver,
                t2.tests_needed, t2.blackboard_id
            FROM student_data AS t1
            JOIN additional_data AS t2 ON t1.id = t2.student_data_id
            WHERE t2.placement_date = :search
            ORDER BY t1.last_name ASC");
        $query->bindParam(':search', $search);
        $query->execute();

        if ($query->rowCount() == 0) {
            return;
        }

        echo "<div id='searchHeader'>
            <div class='searchSubHead'><em>Placement Roster For $search</em></div>
            <br />
            <span>Placement Room</span>
            <span>Student ID</span>
            <span>Student Type</span>
            <span>Last Name</span>
            <span>First Name</span>
            <span>SATR</span>
            <span>PTMA</span>
            <span>PTWR</span>
            <span>PTRE</span>
            <span>PTLG</span>
            <span>Waiver</span>
            <span>Tests Needed</span>
            <span>Blackboard ID</span>
        </div>";

        while ($row = $query->fetch()) {
            echo "<div id='{$row['id']}' class='placementRosterRecord'>
                <span>{$row['placement_room']}</span>
                <span>{$row['student_id']}</span>
                <span>{$row['student_type']}</span>
                <span>{$row['last_name']}</span>
                <span>{$row['first_name']}</span>
                <span>{$row['satr']}</span>
                <span>{$row['ptma']}</span>
                <span>{$row['ptwr']}</span>
                <span>{$row['ptre']}</span>
                <span>{$row['ptlg']}</span>
                <span>{$row['waiver']}</span>
                <span>{$row['tests_needed']}</span>
                <span>{$row['blackboard_id']}</span>
            </div>";
        }
    }
}
?>

This is obviously working very well for me, but I wanted to know if there's a better way I should be doing this. Should I be returning the raw data from the database encoded as JSON, and then have my JavaScript take care of creating the elements/structure? I see that a lot with various APIs, but this is something internal to my company and not something that other developers will be using. Is there anything wrong with my approach?
Better way to return data via AJAX?
javascript;php;mysql;ajax;json
First of all, you should read about MVC. There are a lot of tutorials out there, as well as frameworks (like CodeIgniter) that implement it. Basically your code should be divided into 3 main categories:

Controllers - where all the logic should be placed: validations, checks, etc. This is also where Views and Models are used, hence "controller".
Models - basically the source of data. Usually this is the abstraction of the data storage (databases, sessions, external requests) in the form of functions.
Views - the presentation, or in this case, the HTML.

Reasons to use MVC or MV* patterns:

Portability. The files, especially the views and models, can be plucked out of the architecture and used elsewhere in the system without modification. Models are simply function declarations one can just plug into the controller and call. Views are simply layouts that, when plugged with the right values, display your data. They can be used and reused with little or no modification, unlike with mixed code.

Purpose-written code and readability. You would not want your HTML mixed with the logic, nor with the data-gathering mechanism. This goes for all other parts of your code. For example, when you edit your layout, all you want to see on your screen is your layout code, and nothing more.

Never, and I mean NEVER, return marked-up data using AJAX (use JSON). The main reason AJAX is used is that it's asynchronous. This means no page loading, no more request-wait-respond, and you can do stuff in parallel. It's pseudo-multitasking for the human. On the same train of thought, the reason JSON was created was that XML was too heavy for data transfer and we needed a lightweight data-transport format. XML is NOT a data-transport format but a document and mark-up language (though, with the X standing for extensibility, it has served many purposes beyond documents).

Use client-side templating. JavaScript engines have become so fast these days that developers even write parsers/compilers in JavaScript. And with that comes client-side templating, like Mustache. Coupled with raw data from JSON, which is very light, you can integrate that data into a template to form your view and have JavaScript display it, like you already do.

Have a rendered version and a data version. Taking the previous point into account, if you are making a website rather than a web app, where JS is optional rather than required, you should create a website that works without JS. However, for the same data in a different format, the URL should not change (much). You should then take into account in the request what data type is requested by the client, and so you must accept a parameter which indicates the data type. Example URLs would look like this:

// for the website page format (default: HTML):
http://example.com/my/shop/available/items.php
// for the JSON version
http://example.com/my/shop/available/items.php?format=JSON
// for the XML version
http://example.com/my/shop/available/items.php?format=XML

This means that, for the same page (available items), you have 3 formats you can use. This can be seen in the Joomla Framework (the framework that powers the Joomla CMS).

With these 3 tips, your code will be cleanly separated and purposely written (MVC), fast in transport and rendering (JSON + client-side templating), and extensible (multi-format support).
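As a rough sketch of the multi-format idea in PHP ($rows, buildXml() and renderHtml() are placeholders, not functions from your code):

$format = isset($_GET['format']) ? strtoupper($_GET['format']) : 'HTML';
switch ($format) {
    case 'JSON':
        header('Content-Type: application/json');
        echo json_encode($rows);   // $rows: the records you already fetched
        break;
    case 'XML':
        header('Content-Type: application/xml');
        echo buildXml($rows);      // hypothetical XML builder
        break;
    default:
        renderHtml($rows);         // hypothetical HTML view renderer
}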
_codereview.99262
I need to design and implement a service (C#, .NET 4.5+) which receives messages, converts the messages to BLL models and passes them to the BLL.

Long story: The service will receive messages (strings containing multiple fields; no JSON, no XML, just a string) from a client. The format of the messages can change between different protocol versions, and the service needs to support multiple different versions. The service will need to deserialize/parse those messages into a (version-specific) message class. After the parsing, the service needs to convert the message class to a BLL model. The idea behind the BLL model is to unify the different protocol versions: the same message type should, independently of its version, be converted to the same BLL model (there will be only one version of the BLL model). The BLL model will then be passed to the BLL. For some messages (not all) the service needs to send a response to the client. This means the service must also be able to convert a BLL model to a version-specific message and send this message to the client.

I've written a demo application to show you what I've planned to do. The application supports two protocol versions (V1 and V2). There are two messages defined, MyMessage and MyOtherMessage. MyMessage has changed in V2; MyOtherMessage is in V2 still the same as in V1. The application contains a class simulating a client which sends both messages in both versions (totally at random). The received messages get deserialized into message classes by a version-specific serializer. After the deserialization, the message classes get converted to version-independent BLL models. The communication between the different components (receiver, serializer, converter, etc.) is done using the event aggregation pattern.

My questions:

How would you implement the version-specific message classes? (My idea was to create a namespace per version and implement the messages which have changed in this version.)
How would you select the correct serializer and converter for the version of the received message?
How would you handle the serialization and conversion of messages which haven't changed since the last version? (For example, MyOtherMessage has not changed in V2; would you call the V1 serializer from the V2 serializer? In the example I've changed the version of the message to V1 and published the event again, so it gets handled by the V1 serializer.)
Do you think it's a good idea to work with the event aggregation pattern?
Do you have any experience with that kind of problem (different versions), and how have you solved it?

I've uploaded the whole demo solution to Google Drive.

Messages V1:

public class MyMessage : MessageBase
{
    #region Properties

    public String MytSring { get; set; }
    public String MySecondString { get; set; }

    #endregion
}

public class MyOtherMessage : MessageBase
{
    #region Properties

    public Int32 MyInt { get; set; }

    #endregion
}

Messages V2:

public class MyMessage : MessageBase
{
    #region Properties

    public String MyString { get; set; }
    public Int32 MyInt { get; set; }

    #endregion
}

Converter V1:

public class MessageConverter
{
    #region Ctor

    public MessageConverter()
    {
        PseudoIoC.EventAggregator.GetEvent<MessageDeserialized>()
                 .ObserveOn( new NewThreadScheduler() )
                 .Subscribe( MessageReceived );
    }

    #endregion

    private ModelBase ConvertMyMessage( MessageDeserialized message )
    {
        var receivedMessage = message.Message as MyMessage;
        return new MyMessageBllEquivalent
        {
            MytSring = receivedMessage.MytSring,
            MySecondString = receivedMessage.MySecondString,
            MyInt = 0
        };
    }

    private ModelBase ConvertMyOtherMessage( MessageDeserialized message )
    {
        var receivedMessage = message.Message as MyOtherMessage;
        return new MyOtherMessageBllEquivalent
        {
            MyInt = receivedMessage.MyInt
        };
    }

    private void MessageReceived( MessageDeserialized message )
    {
        if ( message.Version != 1 )
            return;

        ModelBase result;
        switch ( message.Type )
        {
            case MessageType.MyMessage:
                result = ConvertMyMessage( message );
                break;
            case MessageType.MyOtherMessage:
                result = ConvertMyOtherMessage( message );
                break;
            default:
                throw new ArgumentOutOfRangeException();
        }
        Console.WriteLine( "Converted to {0}", result.GetType() );
        // publish new event with BLL object => event gets handled by the BLL
    }
}

Converter V2:

public class MessageConverter
{
    #region Ctor

    public MessageConverter()
    {
        PseudoIoC.EventAggregator.GetEvent<MessageDeserialized>()
                 .ObserveOn( new NewThreadScheduler() )
                 .Subscribe( MessageReceived );
    }

    #endregion

    private ModelBase ConvertMyMessage( MessageDeserialized message )
    {
        var receivedMessage = message.Message as MyMessage;
        return new MyMessageBllEquivalent
        {
            MytSring = receivedMessage.MyString,
            MyInt = receivedMessage.MyInt,
            MySecondString = String.Empty
        };
    }

    private void MessageReceived( MessageDeserialized message )
    {
        if ( message.Version != 2 )
            return;

        ModelBase result;
        switch ( message.Type )
        {
            case MessageType.MyMessage:
                result = ConvertMyMessage( message );
                break;
            case MessageType.MyOtherMessage:
                Console.WriteLine( "delegate V2 to V1" );
                message.Version = 1;
                PseudoIoC.EventAggregator.Publish( message );
                return;
            default:
                throw new ArgumentOutOfRangeException();
        }
        Console.WriteLine( "Converted to {0}", result.GetType() );
        // publish new event with BLL object => event gets handled by the BLL
    }
}

Serializer V1:

public class MessageSerializer
{
    #region Ctor

    public MessageSerializer()
    {
        PseudoIoC.EventAggregator.GetEvent<MessageReceivedEvent>()
                 .ObserveOn( new NewThreadScheduler() )
                 .Subscribe( MessageReceived );
    }

    #endregion

    private void MessageReceived( MessageReceivedEvent message )
    {
        if ( message.SchemaVersion != 1 )
            return;

        MessageBase result;
        switch ( message.Type )
        {
            case MessageType.MyMessage:
                result = ParsMyMessage( message );
                break;
            case MessageType.MyOtherMessage:
                result = ParsMyOtherMessage( message );
                break;
            default:
                throw new ArgumentOutOfRangeException();
        }

        PseudoIoC.EventAggregator.Publish( new MessageDeserialized
        {
            Type = message.Type,
            Message = result,
            Version = 1
        } );
    }

    private MessageBase ParsMyMessage( MessageReceivedEvent message )
    {
        var values = message.Message.Split( new[] { '{', '}' }, StringSplitOptions.RemoveEmptyEntries );
        var result = new MyMessage
        {
            MytSring = values[0],
            MySecondString = values[1]
        };
        Console.WriteLine( "Revived V1 MyMessage" );
        return result;
    }

    private MessageBase ParsMyOtherMessage( MessageReceivedEvent message )
    {
        var values = message.Message.Split( new[] { '{', '}' }, StringSplitOptions.RemoveEmptyEntries );
        var result = new MyOtherMessage
        {
            MyInt = values[0].ToInt32()
        };
        Console.WriteLine( "Revived V1 MyOtherMessage" );
        return result;
    }
}

Serializer V2:

public class MessageSerializer
{
    #region Ctor

    public MessageSerializer()
    {
        PseudoIoC.EventAggregator.GetEvent<MessageReceivedEvent>()
                 .ObserveOn( new NewThreadScheduler() )
                 .Subscribe( MessageReceived );
    }

    #endregion

    private void MessageReceived( MessageReceivedEvent message )
    {
        if ( message.SchemaVersion != 2 )
            return;

        MessageBase result;
        switch ( message.Type )
        {
            case MessageType.MyMessage:
                result = ParsMyMessage( message );
                break;
            case MessageType.MyOtherMessage:
                Console.WriteLine( "delegate V2 to V1" );
                message.SchemaVersion = 1;
                PseudoIoC.EventAggregator.Publish( message );
                return;
            default:
                throw new ArgumentOutOfRangeException();
        }

        PseudoIoC.EventAggregator.Publish( new MessageDeserialized
        {
            Type = message.Type,
            Message = result,
            Version = 2
        } );
    }

    private MessageBase ParsMyMessage( MessageReceivedEvent message )
    {
        var values = message.Message.Split( new[] { '{', '}' }, StringSplitOptions.RemoveEmptyEntries );
        var result = new MyMessage
        {
            MyString = values[0],
            MyInt = values[1].ToInt32()
        };
        Console.WriteLine( "Revived V2 MyMessage" );
        return result;
    }
}

EventAggregator:

public interface IEventAggregator
{
    IObservable<TEvent> GetEvent<TEvent>();
    void Publish<TEvent>( TEvent eventArgs );
}

public class EventAggregator : IEventAggregator
{
    #region Fields

    private readonly ConcurrentDictionary<Type, Object> _subjects = new ConcurrentDictionary<Type, Object>();

    #endregion

    public IObservable<TEvent> GetEvent<TEvent>()
    {
        var subject = (ISubject<TEvent>) _subjects.GetOrAdd( typeof (TEvent), x => new Subject<TEvent>() );
        return subject.AsObservable();
    }

    public void Publish<TEvent>( TEvent eventArgs )
    {
        Object subject;
        if ( !_subjects.TryGetValue( typeof (TEvent), out subject ) )
            return;

        var castedSubject = subject as ISubject<TEvent>;
        castedSubject?.OnNext( eventArgs );
    }
}
Service receiving and sending messages in multiple protocol versions
c#;.net
Usually I would vote to close, because this code looks more like example code, but as it has a bounty on it, closing could lead to much more overhead by refunding the bounty back to you, so I will just answer your questions.

"How would you implement the version specific message classes? (My idea was to create a namespace per version and implement the messages which have changed in this version)"

I would create a new class for each message type and version, naming them e.g. MyMessageV1, MyMessageV2, etc. By using namespaces you could get into trouble if you use the wrong usings, hence you would get lost in the namespace jungle. If you need a V2 for, say, the MyMessage class but not for the MyOtherMessage class, I would nevertheless create a MyOtherMessageV2 which just inherits from MyOtherMessageV1.

"How would you select the correct serializer and converter for the version of the received message?"

By using a SerializerFactory and a ConverterFactory which return the correct serializer and converter based on the type of the message and on a version property (to be added to the MessageBase class). In this way I would pass, e.g., the MessageDeserialized message to the ConverterFactory and get back the correct converter to use. This has the advantage that if you add a new message or change the message version, this will be the only place where you need to change anything about the converting or serializing of the messages. For sure, you need to add the new serializer and converter too. These serializers and converters should implement an interface for the intended purpose.

"How would you handle the serialization and conversion of messages which haven't changed since the last version? (for example MyOtherMessage has not changed in V2, would you call the V1 serializer from the V2 serializer?) in the example I've changed the version of the message to V1 and published the event again, so it gets handled by the V1 serializer."

See the first and previous answers.

"Do you think it's a good idea to work with the event aggregation pattern?"

Here I will answer with Martin Fowler's words from here:

"When to Use It: Event Aggregator is a good choice when you have lots of objects that are potential event sources. Rather than have the observer deal with registering with them all, you can centralize the registration logic to the Event Aggregator. As well as simplifying registration, a Event Aggregator also simplifies the memory management issues in using observers."

"Do you have any experience with that kind of problem (different versions), how have you solved it?"

No.
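To make the factory idea concrete, a rough sketch (the concrete converter class names are invented; they would wrap your existing conversion methods, and the version property is the one to be added to the messages):

public interface IMessageConverter
{
    ModelBase Convert( MessageDeserialized message );
}

public class ConverterFactory
{
    // Keyed by (message type, protocol version).
    private readonly Dictionary<Tuple<MessageType, Int32>, IMessageConverter> _converters =
        new Dictionary<Tuple<MessageType, Int32>, IMessageConverter>
        {
            { Tuple.Create( MessageType.MyMessage, 1 ), new MyMessageConverterV1() },
            { Tuple.Create( MessageType.MyMessage, 2 ), new MyMessageConverterV2() },
            { Tuple.Create( MessageType.MyOtherMessage, 1 ), new MyOtherMessageConverterV1() },
            // V2 of the unchanged message simply reuses the V1 converter:
            { Tuple.Create( MessageType.MyOtherMessage, 2 ), new MyOtherMessageConverterV1() }
        };

    public IMessageConverter GetConverter( MessageDeserialized message )
    {
        return _converters[Tuple.Create( message.Type, message.Version )];
    }
}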
_webmaster.1943
I have been working on a project to build up a website, and since the deadline for the initial launch of the website was short, I wanted to have some minimal content on the website ASAP so that the website could have a good ranking in Google when it launched. The bad part of the story is that I had some pages with a lot of Lorem Ipsum online. When I realized that it was a pretty bad idea to have pages with a lot of Lorem Ipsum, it was too late. The top keywords that Google had found for my website were nearly all Lorem Ipsum. Now it's a bit less dramatic, since the real keywords I am working toward have gone up, but nonetheless I still have a good percentage of keywords that are Lorem Ipsum. Should I be worried about the situation, and what can I do to make those Lorem Ipsum keywords disappear or be less significant?
How to clean Lorem Ipsum keywords from my site?
seo;keywords
You can use Google Webmaster Tools to request removal of the outdated pages from Google's cache, for example.
_unix.67720
Running testdisk, I can actually see all of my files in /dev/sda1!

I wanted to install Windows alongside my Debian installation in order to play some games. In order to do that, I had to boot up a Ubuntu Live 12.1 DVD and use GParted (I was resizing my main partition, which had everything). The resizing finished successfully. I then tried to reboot into my Debian to back up my data, which I had unfortunately forgotten to do before. GRUB loads nicely, but the system can't boot properly! It comes to a point where it tries to configure a ramdisk, or something like that; then it's just a prompt and that's it. Now I've booted into Ubuntu again, to run a check on my shrunk partition. Essentially, it's broken, but I can't really understand the error message.

This is what GParted reported:

GParted 0.12.1 --enable-libparted-dmraid
Libparted 2.3
Check and repair file system (ext3) on /dev/sda1  00:11:35  ( ERROR )
calibrate /dev/sda1  00:00:00  ( SUCCESS )
path: /dev/sda1
start: 2,048
end: 1,232,117,759
size: 1,232,115,712 (587.52 GiB)
check file system on /dev/sda1 for errors and (if possible) fix them  00:11:35  ( ERROR )
e2fsck -f -y -v /dev/sda1
ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap
e2fsck: Group descriptors look bad... trying backup blocks...
Block bitmap for group 4700 is not in group. (block 154014812)
Relocate? yes
Inode bitmap for group 4700 is not in group. (block 154014813)
Relocate? yes
Pass 1: Checking inodes, blocks, and sizes
Error allocating 1 contiguous block(s) in block group 4700 for block bitmap: Could not allocate block in ext2 filesystem
Error allocating 1 contiguous block(s) in block group 4700 for inode bitmap: Could not allocate block in ext2 filesystem
/dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda1: ********** WARNING: Filesystem still has errors **********
e2fsck 1.42.5 (29-Jul-2012)
e2fsck: aborted
========================================

This is the output of dmesg | tail:

[ 1702.169848] EXT3-fs error (device sda1): ext3_check_descriptors: Block bitmap for group 4700 not in group (block 154014812)!
[ 1702.170231] EXT3-fs (sda1): error: group descriptors corrupted
[ 1889.324746] CPU3: Package power limit notification (total events = 50)
[ 1889.324749] CPU1: Package power limit notification (total events = 50)
[ 1889.324750] CPU2: Package power limit notification (total events = 50)
[ 1889.324752] CPU0: Package power limit notification (total events = 50)
[ 1889.335756] CPU2: Package power limit normal
[ 1889.335757] CPU3: Package power limit normal
[ 1889.335759] CPU1: Package power limit normal
[ 1889.335760] CPU0: Package power limit normal

How can I fix my partition? Will I be able to recover my data?

Here are some additional commands I ran:

sudo fsck.ext3 -cf /dev/sda1
e2fsck 1.42.5 (29-Jul-2012)
ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap
fsck.ext3: Group descriptors look bad... trying backup blocks...
Block bitmap for group 4700 is not in group. (block 154014812)
Relocate<y>? yes
Inode bitmap for group 4700 is not in group. (block 154014813)
Relocate<y>? yes
fsck.ext3: e2fsck_read_bitmaps: illegal bitmap block(s) for /dev/sda1
/dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****

sudo e2fsck -b 32768 /dev/sda1
e2fsck 1.42.5 (29-Jul-2012)
Block bitmap for group 4700 is not in group. (block 154014812)
Relocate<y>? yes
Inode bitmap for group 4700 is not in group. (block 154014813)
Relocate<y>? yes
/dev/sda1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Error allocating 1 contiguous block(s) in block group 4700 for block bitmap: Could not allocate block in ext2 filesystem
Error allocating 1 contiguous block(s) in block group 4700 for inode bitmap: Could not allocate block in ext2 filesystem
e2fsck: aborted
/dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda1: ********** WARNING: Filesystem still has errors **********

sudo parted -l
Model: ATA Hitachi HTS54757 (scsi)
Disk /dev/sda: 750GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number  Start   End    Size    Type     File system     Flags
 1      1049kB  631GB  631GB   primary  ext3            boot
 2      742GB   750GB  7985MB  primary  linux-swap(v1)

Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
Error: Can't have a partition outside the disk!

The partition table entry states msdos - is this supposed to be like that?
GParted messed up my partitions
ubuntu;debian;partition;hard disk;dual boot
Try booting back into a live environment and without any of your system's partitions run fsck.ext3 -pcf on the drive in question. If fsck.ext3 isn't available then e2fsck -pcf will work fine.The flags used will tell fsck.ext3 to behave as follows:-p, Automatic repair (no questions)-c, Check for bad blocks and add them to the badblock list-f Force checking even if filesystem is marked cleanIf this doesn't work, run fdisk /dev/sda and use the option to verify the partition table. I believe this option is v
_unix.354653
Something weird happened while running an rsync command. I first mounted a Windows share to copy files to, like this:

mount -t cifs //myserver/share /media/s

Then I ran the rsync command below:

sudo rsync -avprP /* /media/s/Backup --exclude '/backups' '/media'

Now that I look at it, did the above rsync command not exclude /media, but actually back it up? I'm guessing I put the --exclude in the wrong order or used the wrong syntax? Thanks in advance.
Root directory duplicated in /media/media on Ubuntu Linux server
linux;rsync
Yes, --exclude only takes a single argument. Try --exclude '/backups' --exclude '/media'. (Actually, you don't need the single quotes around either of those, but they don't hurt.)
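So the full command would look something like this:

sudo rsync -avprP /* /media/s/Backup --exclude '/backups' --exclude '/media'

Excluding /media also keeps rsync from descending into its own destination under /media/s.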
_unix.151178
So I'm having quite a bit of trouble trying to get my system to set the hardware clock's time. Here is what I did just now; everything has worked, right? The first and last commands seem to indicate that I have changed the hardware clock's time.

[omed@localhost ~]$ sudo hwclock -r
Fri 19 Aug 2011 12:15:59 PM MDT  -0.407669 seconds
[omed@localhost ~]$ sudo ntpdate 0.pool.ntp.org
19 Aug 12:16:21 ntpdate[1816]: step time server 76.73.0.4 offset 94694401.172566 sec
[omed@localhost ~]$ date
Tue Aug 19 12:16:26 MDT 2014
[omed@localhost ~]$ sudo hwclock -w
[omed@localhost ~]$ sudo hwclock -r
Tue 19 Aug 2014 12:16:41 PM MDT  -0.329495 seconds

However, a few notes about the device: it's an Intel Q7 module (image here) plugged into a carrier board designed by someone else. The RTC uses a battery that isn't actually on the device, but comes from a battery on the carrier board. The RTC does store and save time set through the BIOS. However, if I power the system off and back on (without doing a standard shutdown, literally just cutting power to the board), after logging in, this happens:

[omed@localhost ~]$ sudo hwclock -r
Fri 19 Aug 2011 12:23:02 PM MDT  -0.580141 seconds

Issuing a poweroff command and then power-cycling the device does save the time to the RTC successfully, but doing a 'clean' shutdown is not an option. What does Linux do during shutdown that it doesn't do when I run hwclock? The calibration time and drift time from the RTC also show the correct date for drift/calibration (August 2014, as below), but the clock doesn't actually display the correct time (for some reason it still thinks it's 2011).

[omed@localhost ~]$ sudo hwclock -r -D
...
Last drift adjustment done at 1408472197 seconds after 1969
Last calibration done at 1408472197 seconds after 1969
...
Time read from Hardware Clock: 2011/08/19 18:26:45
Hw clock time : 2011/08/19 18:26:45 = 1313778405 seconds since 1969
Fri 19 Aug 2011 12:26:45 PM MDT  -0.877045 seconds

The system is a special distro built for embedded systems (Timesys bowler 14); however, it is based on Fedora 14 and is close enough that it can be treated as Fedora.
RTC is not saving time after hwclock
linux;shutdown;clock
null
_codereview.110386
I tried to draw a Sierpinski triangle by shading the even-numbered entries of the Pascal triangle. Could you please suggest improvements?

public class Sierpinski {
    public static void main(String[] args) {
        int no_of_row = 50;
        int[][] tri = new int[no_of_row][no_of_row];
        for (int i = 0; i < no_of_row; i++) {
            for (int j = 0; j <= i; j++) {
                if (j == 0 || j == i) {
                    tri[i][j] = 1;
                }
                if (i > 0 && j > 0) {
                    tri[i][j] = tri[i - 1][j - 1] + tri[i - 1][j];
                }
            }
        }
        for (int i = 0; i < no_of_row; i++) {
            printSpace(no_of_row, i);
            for (int j = 0; j <= i; j++) {
                System.out.print(isEven(tri[i][j]));
                System.out.print(" ");
            }
            System.out.println();
        }
    }

    private static void printSpace(int no_of_row, int current_row) {
        for (int i = 0; i < no_of_row - current_row; i++) {
            System.out.print(" ");
        }
    }

    private static String isEven(int n) {
        return n % 2 == 0 ? "x" : " ";
    }
}

Sample output: x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x
Sierpinski triangle from Pascal triangle
java;ascii art;fractals
Small rewrite

The code can be divided into two phases. It would be helpful to write a comment on each code block. The Pascal's triangle loop can be tightened by getting rid of unnecessary conditional logic.

// Generate Pascal's triangle
int[][] tri = new int[SIZE][SIZE];
for (int i = 0; i < SIZE; i++) {
    tri[i][0] = tri[i][i] = 1;
    for (int j = 1; j < i; j++) {
        tri[i][j] = tri[i - 1][j - 1] + tri[i - 1][j];
    }
}

isEven() is poorly named: it looks like it should return true or false, because it follows Java's naming convention for predicates.

As for the printing, calling System.out.print() to output one character at a time is slow and, in my opinion, makes the code more cumbersome. I suggest this loop instead, which marks the places which have even entries, then prints a row at a time.

// Print 'x' where Pascal's Triangle has even entries
char[] buf = new char[2 * SIZE];
for (int i = 0; i < SIZE; i++) {
    Arrays.fill(buf, ' ');
    int leftPad = SIZE - i;
    for (int j = 0; j < i; j++) {
        if ((tri[i][j] & 1) == 0) {
            buf[leftPad + 2 * j] = 'x';
        }
    }
    System.out.println(new String(buf, 0, leftPad + 2 * i));
}

Window dressing

Note that you don't actually need to store the entire Pascal's triangle at once; you can build it a row at a time as you print each line. I suggest making an iterator to take care of that bookkeeping and reduce main() to being just a simple, pretty loop. The Sierpinski object also makes the size and fill character parameterizable.

import java.util.Arrays;
import java.util.Iterator;

public class Sierpinski implements Iterable<String> {

    private class RowIterator implements Iterator<String> {
        private int row;
        private int[] thisPascalRow = new int[Sierpinski.this.size],
                      nextPascalRow = new int[Sierpinski.this.size];
        private char[] buf = new char[2 * Sierpinski.this.size];

        @Override
        public boolean hasNext() {
            return this.row < Sierpinski.this.size;
        }

        // For compatibility with Java < 8
        @Override
        public void remove() {
            throw new UnsupportedOperationException();
        }

        @Override
        public String next() {
            try {
                // Generate the next row of Pascal's Triangle
                nextPascalRow[0] = nextPascalRow[row] = 1;
                for (int i = 1; i < row; i++) {
                    nextPascalRow[i] = thisPascalRow[i - 1] + thisPascalRow[i];
                }
                // Format it as a line of text in Sierpinski's Triangle
                int leftPad = Sierpinski.this.size - this.row;
                int length = leftPad + 2 * this.row;
                Arrays.fill(this.buf, 0, length, ' ');
                for (int i = 0; i < this.row; i++) {
                    if ((nextPascalRow[i] & 1) == 0) {
                        this.buf[leftPad + 2 * i] = Sierpinski.this.fill;
                    }
                }
                return new String(this.buf, 0, length);
            } finally {
                // Prepare for next call. Swap to avoid reallocating arrays.
                this.row++;
                int[] swap = this.thisPascalRow;
                this.thisPascalRow = nextPascalRow;
                this.nextPascalRow = swap;
            }
        }
    }

    private final int size;
    private final char fill;

    public Sierpinski(int size, char fill) {
        this.size = size;
        this.fill = fill;
    }

    @Override
    public Iterator<String> iterator() {
        return new RowIterator();
    }

    public static void main(String[] args) {
        for (String line : new Sierpinski(50, 'x')) {
            System.out.println(line);
        }
    }
}
_unix.144313
I installed Arch Linux on VirtualBox. As a login manager, I've chosen LXDM, and XFCE as the window manager. I've set /usr/bin/startxfce4 as the session in /etc/lxdm/lxdm.conf. The problem is that when the LXDM login window appears and I try typing my password in and pressing Enter, nothing happens. I see the same screen that I was seeing before I clicked on my username.
LXDM login doesn't work
linux;arch linux;xfce;lxde
null
_unix.369137
I'm currently working with a text file that contains the following text blocks:

--------------------------------------
Beginning of block
Text
Random Text
keywordA
Text
End of block
--------------------------------------
--------------------------------------
Beginning of block
Text
Random Text
keywordA
Text
End of block
--------------------------------------
--------------------------------------
Beginning of block
Text
Random Text
keywordD
Text
End of block
--------------------------------------
--------------------------------------
Beginning of block
Text
Random Text
keywordd
Text
End of block
--------------------------------------

The objective is to have egrep detect certain keywords, and if those words exist, I would like to copy the block to another file. So I'm currently searching with:

if egrep -wi 'keywordA|KeywordB|keywordC' Report
then
    echo "Words found!"
else
    echo "No words found!"
fi

I was wondering if there is any way to add a follow-up action to use sed (for example) to copy the block of text where the words were found. Expected output, in this example, would be:

--------------------------------------
Beginning of block
Text
Random Text
keywordA
Text
End of block
--------------------------------------
--------------------------------------
Beginning of block
Text
Random Text
keywordA
Text
End of block
--------------------------------------

The file Report contains dozens of blocks like this, but not all of them have the keyword. I would like to copy just the ones that do (as demonstrated in the above example). I hope this makes sense! Thank you in advance for any help or pointers!
Extracting text blocks based on grep output
text processing;awk;sed;grep;scripting
null
_cs.41017
Why were primes which are $1$ modulo $4$ considered to be weak for use in RSA cryptography (http://en.wikipedia.org/wiki/Blum_integer)? Was there a time when it was believed there could be an efficient factoring algorithm if these primes were utilized (and if so, what exactly were the reasons)?
Why were primes congruent to 1 mod 4 considered weak for RSA?
cryptography
null
_codereview.93245
I'm just playing with Python, trying to refamiliarize myself with it since I haven't used it in a few years. I'm using Python 3, but as far as I know it could easily be Python 2 with a change to the print statements. What I'm generating is something similar to GitHub's Identicons. I know of two possible issues: one is that I don't account for an odd width, and the other is that my fill string will be reversed on the symmetric half, so I can't use something like "[ ]".

import random

def generateConnectedAvatar(fill="XX", empty="  ", height=10, width=10, fillpercent=0.4):
    halfwidth = int(width/2)
    painted = 0
    maxPainted = height*halfwidth*fillpercent
    adjacent = [(-1,-1), (-1,0), (-1,1), (0,1), (0,-1), (1,-1), (1,0), (1,1)]

    # initialize a blank avatar
    avatar = [[empty for w in range(halfwidth)] for h in range(height)]

    # 'paint' a single cell and add it to the stack
    y, x = random.randint(0,height-1), random.randint(0,halfwidth-1)
    lst = [(y,x)]

    while (len(lst) != 0):
        # Take the first element off the list
        y, x = lst.pop()
        # Determine if we should paint it
        if painted <= maxPainted:
            avatar[y][x] = fill
            painted += 1
            # Find all available neighbors and add them to the list (in a random order)
            # shuffle our adjacent positions table
            random.shuffle(adjacent)
            for posy, posx in adjacent:
                tmpy, tmpx = y + posy, x + posx
                if tmpx >= 0 and tmpx < halfwidth:
                    if tmpy >= 0 and tmpy < height:
                        # Make sure we haven't already painted it
                        if avatar[tmpy][tmpx] is not fill:
                            lst.append((tmpy,tmpx))

    # output the result (symmetrically)
    # (in our case, just printing to console)
    for h in range(height):
        half = ""
        for w in range(halfwidth):
            half += avatar[h][w]
        print(half + half[::-1])

The output can be pretty neat looking sometimes, but it's frequently very blocky, so I'll probably want to figure out some sort of spacing algorithm. Some sample output: XX XX XXXXXX XXXXXXXXXX XXXXXXXX XXXX XX XX XX XX XXXX XX XX XXXXXXXXXX XXXXXXXXXX XXXX XXXX XXXX XXXX
Identicon Generator
python;random;formatting
This is quite neat. I suggest some improvements though.

The method does two things: it generates an identicon and then prints it. It would be better to split these two operations so that you have functions that follow the single responsibility principle.

The while loop condition can be simplified to be more pythonic:

while lst:

The range conditions can be simplified too:

if 0 <= tmpx < halfwidth:

Your naming and spacing should follow PEP 8, the Python style guide.
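A sketch of that split, reusing the question's names (the generation body is elided; it is the question's code minus the printing):

def generate_connected_avatar(fill="XX", empty="  ", height=10, width=10, fillpercent=0.4):
    # ... build the avatar grid exactly as in the question ...
    return avatar

def render(avatar):
    # Mirror each half-row to produce the symmetric output.
    return "\n".join("".join(row) + "".join(row)[::-1] for row in avatar)

print(render(generate_connected_avatar()))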
_webapps.6499
I need to find, in Gmail, a list of all my unread messages (which I can get via is:unread or label:unread or l:^u) which do not have the label "ZDNet". In other words, my requirement is this:

I have 185 e-mails which are unread in my Inbox.
I have 174 unread e-mails which are labeled "ZDNet".
I need to find the 11 e-mails in my inbox that are not part of the group of 174.

How can I do this?
Searching Gmail for messages without a particular label
gmail;gmail labels;gmail search
is:unread -label:ZDNet
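If you also want to restrict the search to the inbox, add the in:inbox operator:

in:inbox is:unread -label:ZDNet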
_softwareengineering.25046
I've read several books and learned through experience that optimizing code to the point where it is inscrutable, or coming up with an extremely fast but extremely complex solution to a problem is not desirable when working in teams, or even when you're working by yourself and have to understand your clever solution some time later.My question is, should recursion be treated in the same manner? Does the average programmer understand recursion easily and thus one should use it with impunity, or does the average programmer not understand recursion very well and one should stay away from it for the sake of overall team productivity?I know there are simple answers of, Any programmer who doesn't understand recursion isn't worth a grain of salt, so don't worry about them but I was wondering if you all had some real world experience you would like to share that would illuminate the issue more than the opinion I just mentioned.
Is recursion an instance of being too clever when programming?
maintenance;teamwork;recursion
"Does the average programmer understand recursion easily and thus one should use it with impunity, or does the average programmer not understand recursion very well and one should stay away from it for the sake of overall team productivity?"

I'd say that the average programmer understands recursion perfectly. Indeed, if the programmer has done a degree in Computer Science or Software Engineering, it is pretty much guaranteed. Granted, there are some very below-average programmers out there, but you don't want them on your team.

In this case, the distinction between an average and a good programmer is knowing when to use recursion and when not to. And that depends on the problem being solved AND the language being used to solve it.

If you are using a functional programming language, recursion is a natural and efficient solution for a wide range of problems. (Tail recursion optimization rules!)

If you are using an OO or plain procedural language, recursion can be inefficient and can be problematic due to stack overflows. So in some cases you would choose an iterative solution rather than a recursive one. However, in other cases, the recursive solution is so much simpler and more elegant that the (possibly more efficient) iterative solution would be the "too clever" one. (For example, if the problem requires backtracking, or building or walking trees/graphs, recursion is often simpler.)
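To make that last point concrete, here is a small sketch (Python; a Node type with a children list is assumed) of the same pre-order tree walk written both ways; the iterative version has to manage the stack that recursion gives you for free:

# Recursive: the code mirrors the shape of the data.
def walk(node, visit):
    visit(node)
    for child in node.children:
        walk(child, visit)

# Iterative: same traversal, but the stack bookkeeping is now explicit.
def walk_iter(root, visit):
    stack = [root]
    while stack:
        node = stack.pop()
        visit(node)
        stack.extend(reversed(node.children))  # keep left-to-right visit order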
_webmaster.84465
We have a Magento-based store. Images are resized, cached, and served dynamically. When the image cache in Magento is cleared, new paths and files are generated, and I get 404 errors for them in Webmaster Tools / Search Console. After Googling and looking at examples, I settled on this as a good solution (http://inchoo.net/ecommerce/magento-robots-txt-best-practices-google-image-crawler/). I added this to robots.txt:

User-agent: Googlebot-Image
Disallow:

User-agent: *
Disallow: /media/

The problem now is: we get a whole lot of "Blocked resources" and "Mobile usability" problems instead. Google says that this could result in suboptimal rankings: https://developers.google.com/webmasters/mobile-sites/mobile-seo/common-mistakes/blocked-resources?hl=en

So how does one handle this so Google is happy? What would you recommend?
Magento Robots.txt and blocking cached images
google search console;robots.txt;cache;magento
null
_softwareengineering.91300
I've been programming for about 2 years, and so far Java is all I know. When I made the switch from BlueJ to Eclipse, I noticed my old projects ran much faster. What would be the reason for this? As far as I knew, the execution of Java bytecode is handled by the JVM, and I still had the same JVM after I switched. Other than that, does Eclipse have a different compiler? That's the only other thing I could think of.

Edit: I timed it. The average runtime for (we'll call it) A(100) in BlueJ was 1 minute, 8 seconds, 846 milliseconds. In Eclipse, A(100) only took 8 seconds, 459 milliseconds. I've also noticed the speed-up in all my other programs. I haven't measured those yet, but the difference is definitely noticeable. A() is a function that does a lot of computation and then prints a chart of information based on what it found, FYI.
Why do my java programs run faster in eclipse than in BlueJ?
java;eclipse;runtime
I think the main reason is that BlueJ runs your code in a VM with a debugger attached. BlueJ actually has two VMs running: the main one, and the one with your code inside (the user VM, aka the debug VM). The main VM has a debugger attached to the user VM, which allows it to do things like pause user code, inspect the state of objects and all sorts. I imagine this probably adds a fair bit of overhead (perhaps inhibiting the JIT compilation?); the speed that code executes is not a major concern for a learning environment like BlueJ, as long as it's reasonable.
_unix.82290
NFS mounts recently got unmounted automatically. When I checked the NFS service status, it was shown to be running.

[root@hsluasrepo]# service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 4083) is running...
nfsd (pid 4148 4147 4146 4145 4144 4143 4142 4141) is running...
rpc.rquotad (pid 4079) is running...
[root@hsluasrepo]# service rpcbind status
rpcbind (pid 4203) is running...
[root@hsluasrepo common]# rpcinfo -p 10.80.3.154
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper

But the showmount output showed an error:

[root@hsluasrepo ]# showmount -e 10.80.3.154
clnt_create: RPC: Program not registered

After restarting the NFS service, the showmount output displayed the NFS server's export list. Can anyone tell me the root cause of this issue and how to avoid this problem in the future?

/var/log/messages:

Jul  7 03:22:01 hsluasrepo rsyslogd: [origin software=rsyslogd swVersion=5.8.10 x-pid=1188 x-info=rsyslog.com] rsyslogd was HUPed
Jul  7 03:22:02 hsluasrepo rhsmd: In order for Subscription Manager to provide your system with updates, your system must be registered with RHN. Please enter your Red Hat login to ensure your system is up-to-date.
Jul  8 03:22:01 hsluasrepo rhsmd: In order for Subscription Manager to provide your system with updates, your system must be registered with RHN. Please enter your Red Hat login to ensure your system is up-to-date.
Jul  8 16:36:55 hsluasrepo kernel: nfsd: last server has exited, flushing export cache
Jul  8 16:36:55 hsluasrepo rpc.mountd[4083]: Caught signal 15, un-registering and exiting.
Jul  8 16:36:55 hsluasrepo rpc.mountd[24463]: Version 1.2.3 starting
Jul  8 16:36:55 hsluasrepo kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jul  8 16:36:55 hsluasrepo kernel: NFSD: starting 90-second grace period
Jul  8 16:37:32 hsluasrepo rpc.mountd[24463]: authenticated mount request from 10.60.5.208:1004 for /common/PROD (/common/PROD)
Jul  8 16:38:09 hsluasrepo rpc.mountd[24463]: authenticated mount request from 10.60.5.181:869 for /common/PROD (/common/PROD)
Jul  8 16:38:43 hsluasrepo rpc.mountd[24463]: authenticated mount request from 10.60.5.180:825 for /common/PROD (/common/PROD)
Jul  8 16:39:12 hsluasrepo rpc.mountd[24463]: authenticated mount request from 10.60.5.176:688 for /common/PROD (/common/PROD)
Linux: clnt_create: RPC: Program not registered
linux;mount
null
_unix.347776
I have looked but haven't been able to find anyone else with the same sort of problem I have. I have an XML file like this:

<ID>1</ID><data>asdf</data><data2>asdf</data2><dataX>asdf</dataX><dateAccessed>somedate</dateAccessed><ID>2</ID><data>asdf</data><data2>asdf</data2><dataX>asdf</dataX><dateAccessed>somedate</dateAccessed><ID>3</ID><data>asdf</data><data2>asdf</data2><dataX>asdf</dataX><dateAccessed>somedate</dateAccessed><ID>4</ID><data>asdf</data><data2>asdf</data2><dataX>asdf</dataX><dateAccessed>somedate</dateAccessed>

Basically a whole bunch of data, all on one line, no line breaks. I need to extract the info (preferably just as-is, with tags intact) between a specific <ID> tag (e.g. <ID>2) and the very next </dateAccessed> tag. I have about 50 files to check for a particular ID and the following related data. I get that this is not standard; there is no nesting. I originally tried to do this using grep and sed, but I just get the whole file returned, which seems odd to me. Can't I just treat this like a text file?

EDIT: I didn't realise the formatter removed text that was enclosed in < and >, so after re-reading my question this morning, I realised it was asking something completely different.

TL;DR: I need what is between a specific value inside ID tags and the next closing dateAccessed tag, not what is between the same opening and closing tags (i.e. not between <ID> and </ID>). So I can get something like this result:

<ID>2</ID><data>asdf</data><data2>asdf</data2><dataX>asdf</dataX><dateAccessed>somedate</dateAccessed>
How to extract data between two different xml tags
text processing;xml
null
_codereview.119223
I'm just looking for a cleaner way. Obviously I could store a variable for some of the int parts. Personally, I think the one-liner looks better and everyone's definition of what constitutes clean readable code differs.def gen_isbn(isbn): for i, val in enumerate(isbn): if len(isbn) == 9: yield (i + 1) * int(val) else: yield int(val) * 3 if (i + 1) % 2 == 0 else int(val) * 1def isbn10(isbn): mod = sum(gen_isbn(isbn[:-1])) % 11 return mod == int(isbn[-1]) if isbn[-1] != 'X' else mod == 10def isbn13(isbn): mod = sum(gen_isbn(isbn[:-1])) % 10 return 10 - mod == int(isbn[-1]) if mod != 10 else int(isbn[-1]) == 0I'm really just wondering if there's a regex or cleaner one liners etc since this is already pretty short and works. It should be for both ISBN 10 and 13.Preferably, I would prefer pure Python. These are grabbed from web-scraping and hence I have enough external libraries floating around than throwing another one in just for ISBN validation.
Validating ISBNs
python;python 3.x;validation
Unnecessary check / bugmod can never be \$10\$ in isbn13 (it will always be smaller) because it is defined as x `mod` 10 and: $$ x\ `mod` \ n< \ n$$So you can just write:return 10 - mod == int(isbn[-1])Remove repetitionI like the conditional expression, it is terse and clear, and it can reduce duplication by a lot, but in the below int(val) is repeated twice:int(val) * 3 if (i + 1) % 2 == 0 else int(val) * 1This is easy to fix, just make the ternary comprehend just the multiplier:int(val) * (3 if (i + 1) % 2 == 0 else 1)or, using a temporary var:multiplier = 3 if (i + 1) % 2 == 0 else 1int(val) * multiplierThis makes it extra-clear that int(val) remains the same and only the number it is multiplied with changes.Base elevenA special case for X is not needed if you think about it as base eleven, where X is the eleventh digit:def from_base_eleven(digit): return 10 if digit == 'X' else int(digit)And then:return mod == from_base_eleven(isbn[-1])It makes the logic you're following easier to follow.
_softwareengineering.351810
I want to build a middleware system that my company can use and expand on to handle the integration points between our various bespoke and 3rd party systems as well as our hosted websites.I am having some trouble with the design/architecture and would like some advice (please let me know if there is a better site to ask these questions).The middleware will basically be the broker between our ERP and our other systems such as our website/3rd party retailers etc. It will be mainly dealing with API calls and XML file exports.I was thinking of:Using RabbitMQ as a queuing mechanism.Using Quartz.NET as a scheduler contained within a Windows Service. It would be very lightweight and would simply load up a list of jobs from a database, dropping these into the queue when their schedule is reached.Using a windows service (I'd call this the agent) which would be a consumer of the RabbitMQ queue. It would monitor the queue and process the jobs as they would come in. Ideally, I would want to be able to run separate jobs at the same time so that one job doesn't necessarily tie up another job from running. We can always scale this portion out if required.Does anyone see any issue with this approach? I can't find a lot of example implementations on the internet about using RabbitMQ as a worker service in .NET (using multiple threads) and am not sure if using a scheduler service to simply drop items on the queue is the right approach.
Is my approach at building middleware using a queue a good approach?
design patterns;.net;architectural patterns;message queue;producer consumer
To elaborate a little bit more on our comment exchanges. Advantages of your queue based approachYour queue based approach has definitively some advantages: Producers (your db feeding jobs) and consumers (your agents) are decoupled: They don't have to know each others internals. They only have to know your queue and the data structure therein. THis will make your whole architecture more maintainable. You could scale the producer side and enrich the data sources by connecting other applications to the queue. You could cope with production peaks, when producer proce fasters than consumers can consume, by spreading the processing over time thantks to the asynchronous behavior allowed by your queue. You could scale the consumer side by multiplying the processing nodes and distribute them across servers, if there'd be any lack of capacity. depending on the queuing system, you could even scale the queue through distribution (With RabbitMQ you can set up a distributed installation, same for other systems like for example Kafka which aims specifically at big data). Here a picture to highlight all this: : Disadvantage of your queueThe queue adds a level of complexity to your architecture and has some impacts: The queuing system is an additional system to be installed, maintained and operated. Everything is dependent of the queue. New releases could require to update all the producers and consumers. And you might find yourself dependent of your queuing technology vendor. In terms of processing power, the queuing architecture adds some overhead, adding more processes and requiring processing power for the intermediary. ConclusionIn my opinion, the inconvenience are marginal compared to the benefits. The flexibility and scalability enabled by the queuing architecture makes it a future proof option in many scenarios. However you don't seem to really take advantage of the queueing in your system. If your architecture is designed to stay like that, one could ask why you do batch processing by batch-loading your queue and not directly process the records from the database without additional overhead. If everything is driven by your db, your db will anyway be the bottleneck.
_cs.42155
I'm wanting to dabble with metaheuristics and am interested to know what the hello world problem sets are.In other words, what are the common problems (e.g Traveling Salesman, Vehicle Routing Problem, Maze Solving) to test various algorithms against in order to compare algorithm design?
Standard problem sets for metaheuristics
optimization;machine learning;artificial intelligence;simulation
null
_unix.228050
This is extracted (with some rewording) from Computer terminal and virtual console, which has been closed with a hatnote linking to What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?. Original poster was interested in relations and differences between a computer terminal and a virtual console/terminal. Which is dependent on the operating system? How is it related to text vs graphical terminals?Quoted from Wikipedia:A computer terminal is an electronic or electromechanical hardware device that is used for entering data into, and displaying data from, a computer or a computing system. Early terminals were inexpensive devices but very slow compared to punched cards or paper tape for input, but as the technology improved and video displays were introduced, terminals pushed these older forms of interaction from the industry. A related development was timesharing systems, which evolved in parallel and made up for any inefficiencies of the user's typing ability with the ability to support multiple users on the same machine, each at their own terminal.Quoted from wikipedia:A virtual console (VC) also known as a virtual terminal (VT) is a conceptual combination of the keyboard and display for a computer user interface. It is a feature of some operating systems such as UnixWare, Linux, and BSD, in which the system console of the computer can be used to switch between multiple virtual consoles to access unrelated user interfaces. Virtual consoles date back at least to Xenix in the 1980s.
How are virtual consoles related to traditional computer terminals?
terminal;console;tty
null
_webmaster.15163
I'm trying to determine if a browser that my content is being pushed to supports flash/silverlight. That way, if there is no support on the browser, I can interchange the flash/silverlight components with HTML/AJAX components.
Is there a way to test if a browser supports flash/silverlight?
asp.net;flash;ajax;silverlight
null
_codereview.139024
Checking whether an item exist in a collection isn't straightforward in VBA, it requires error checking and during error handling we need to clear the error and also resume code to be able to correctly manage the next error.In my codes I generally use this structure, is it a good practice? With MyWorkBook On Error GoTo NoItem With .Sheets(SheetName) ' do stuff here if sheet exists End WithNoItem: If False Then Err.Clear ' code to do if sheet doesn't exist (e.g. add a new one) Resume NoItem End If On Error GoTo 0 ' (or specify another label for error handling) ' continue macro ... End With
Check existence of item in collection
vba;error handling
Don't make errors a part of your program flowIf an error doesn't cause you to prematurely end your Sub/Function, then it should be handled less severely, or your program needs restructuring. I also caution against using GOTOs. They are sometimes necessary, but they have the potential to cause far more problems than they solve. This is the only situation in which I regularly use GOTOs in VBA: Public Sub DoThing() On Error GoTo CleanFail ... ... ...CleanExit: '/ Clean Up Exit SubCleanFail: '/ Error Handling '/ Error handling '/ Error handling Resume CleanExit End SubHandling ConditionsIf you are going to continue your macro anyway, but want to avoid running some code if a condition fails then why not just:[Check Condition]If [condition] Then DoThingEnd IfIn the specific case of the Sheets() Collection, you have a number of options. Option #1: Write a utility function to do the checking for you Public Function GetSheet(ByRef targetBook As Workbook, ByVal sheetName As String) As Worksheet '/ Tries to return the sheet object. '/ If the sheet does not exist, return Nothing Dim targetSheet As Worksheet On Error GoTo CleanFail Set targetSheet = targetBook.Worksheets(sheetName) On Error GoTo 0CleanExit: Set GetSheet = targetSheet Exit FunctionCleanFail: On Error GoTo 0 Err.Clear Set targetSheet = Nothing Resume CleanExit End FunctionAnd now your code can be structured like so: Dim targetSheet As Worksheet Set targetSheet = GetSheet(ThisWorkbook, sheetName) If Not targetSheet Is Nothing Then ' do stuff here (don't do it if SheetName doesn't exist) End IfMuch cleaner.Option #2: Use a data structure that supports .Exists()For VBA, this would be a Dictionary. You'll need to set a reference to the Microsoft Scripting Runtime library in Tools --> References. Then do the following: Dim sheetnames As Dictionary Set sheetnames = New Dictionary Dim sheetName As String Dim sheetObject As Worksheet For Each sheetObject In ThisWorkbook.Sheets() sheetName = sheetObject.Name sheetnames.Add sheetName, sheetName Next sheetObject If sheetnames.Exists(Some sheet name) Then ' do stuff here (don't do it if SheetName doesn't exist) End IfOption #3: Restructure your program to make the check redundantI don't know if this is a good idea for your code. I'd have to know more about its' wider purpose, structure and context, but it's always worth thinking about.
_webapps.76079
Does anyone have experience of using the calendar facilities in Google Apps without using the e-mail part?We are investigating whether we can use Google Apps for calendaring while keeping our existing e-mail system in place.
Google Apps Calendar without Email
google apps
The email service in Google Apps is optional, so you could use Calendar without Gmail for Work.
_unix.19641
My question is similar to this question but with a couple of different constraints: I have a large \n delimited wordlist -- one word per line. Size offiles range from 2GB to as large as 10GB. I need to remove any duplicate lines. The process may sort the list during the course of removing the duplicates but not required.There is enough space on the partition to hold the new unique wordlist outputted.I have tried both of these methods but they both fail with out of memory errors.sort -u wordlist.lst > wordlist_unique.lstawk '!seen[$0]++' wordlist.lst > wordlist_unique.lstawk: (FILENAME=wordlist.lst FNR=43601815) fatal: assoc_lookup: bucket-ahname_str: can't allocate 10 bytes of memory (Cannot allocate memory)What other approaches can I try?
How to remove duplicate lines in a large multi-GB textfile?
linux;text processing;uniq
Try using sort with the -o/--output=FILE option instead of redirecting the output. You might also try setting the buffer-size with the -S/--buffer-size=SIZE. Also, try -s/--stable. And read the man page, it offers all of the info I gave.The full command you can use that might work for what you're doing:sort -us -o wordlist_unique.lst wordlist.lstYou might also want to read the following URL:http://www.gnu.org/s/coreutils/manual/html_node/sort-invocation.htmlThat more thoroughly explains sort than the man page.
_unix.231409
I am trying to debug an OpenGL issue in a piece of closed source software. The software supposedly works in Debian Wheezy and I would like to get it working in Arch. I tried running it in a Debian Wheezy chroot from my updated Arch system and I get the same issue. Assuming the software actually works in Wheezy, is it possible that there something about the old Wheezy kernel (with whatever Debian patches) that would allow OpenGL to work better than in the newer (essentially unpatched) Arch kernel?
How does a chroot system interact with the kernel
kernel;chroot
null
_codereview.11681
The Qt documentation recommend an iterator-based solution to iterate over an associative container like QMap and QHash, and I always wondered if there really isn't a (nice) solution using a foreach loop like in PHP:foreach($container as $key => $value)I looked into the source code of the foreach macro and extended it by a key variable. The following code is only the GCC version of the loop (it doesn't work with all compilers, see Q_FOREACH macro definition in qglobal.h for other versions).#define foreachkv(keyvar, variable, container) \for (QForeachContainer<__typeof__(container)> _container_(container); \ !_container_.brk && _container_.i != _container_.e; \ __extension__ ({ ++_container_.brk; ++_container_.i; })) \ for (keyvar = _container_.i.key();; __extension__ ({break;})) \ for (variable = *_container_.i;; __extension__ ({--_container_.brk; break;}))Now you can do the following:QVariantHash m;m.insert(test, 42);m.insert(foo, true);foreachkv(QString k, QVariant v, m) qDebug() << k << => << v;The output will be:foo => QVariant(bool, true) test => QVariant(int, 42)Tests I've done so far:examples from aboveforeachkv works as expected with break and continue statements within the loop.What did I do? I just added another for loop to introduce the key variable (called keyvar in the macro). Maybe I don't need this loop?
Qt foreach-like alternative to iterate over value AND key of an associative container
c++;macros;qt
You can probably do something simpler to avoid making this a special case macro, e.g. using std::tie or boost::tie if that's not available:#include <map>#include <functional>#include <boost/foreach.hpp>#include <iostream>int main() { std::map<int, std::string> map = {std::make_pair(1,one), std::make_pair(2,two)}; int k; std::string v; BOOST_FOREACH(std::tie(k, v), map) { std::cout << k= << k << - << v << std::endl; }}
_unix.328526
wgetis a great tool to download files or webpages. It is not the first timke that I find that the links of a webpage are not updated or wrong. For instance, a web page which has its pictures/files linked with http://websitehttp//website.file.extension.Is there a way to tell wget that, if no content is found, it has to look in the address http//website.file.extension instead of http://websitehttp//website.file.extension?EDIT: Following @Tigger's comment, I can get exit status but how to ask wget for that specific file that failed to get it on the right link/address?wget_output=$(wget limit-rate=200k no-clobber convert-links random-wait -r -p -E $URL)if [ $? -ne 0 ]; then ...fi
pass an alternative link to wget when downloading pictures/files on error
wget
null
_softwareengineering.193264
For commercial and licensing purposes, what is the correct wording for differentiating JavaScript source code (written by the programmer and including comments) from the minified version used in production?
Licensing: source code vs. production code in JavaScript
javascript;licensing;source code
I think you'll generally find it is called a debug build (example: KnockoutJS), or development build/version (example: BackboneJS), or uncompressed build (example: JQuery). I think that all three variants are clear to anyone downloading them.
_cogsci.3690
Is it just me, or are we dreaming less? Maybe it's the computer screens in our face all day. Maybe that's just what happens when we get older.I'm wondering if there's a resource of some kind to determine historical dream activity. Might it be possible to use Google Trends to determine dream activity (similar to how Google Flu tracks illness via search queries). Or is Google too novel a tool, and are dreamers searching too randomly to track any meaningful delta in dream activity? If so, is there another resource that might provide meaningful dream data?
Where to find historical dream activity data?
sleep;dreams;data
null
_unix.77129
I've been digging around in the API documentation, and I'm having limited sucessswitching tags from within awesome-client. Basically, I can switch tags with:awful.tag.viewidx(2)But that is relative not absolute. From looking in my rc.lua, looks like whatI need is:local screen = mouse.screenawful.tag.viewonly(tags[screen][2])But I get the following error instead:attempt to index field '?' (a nil value)What is the most concise way of changing tags (preferably one command) to an absolute tag number. Alternatively, how can I find the current tag?
How to switch tags via awesome API
awesome
null
_codereview.48828
This has been asked a few times here, but I was wondering what I could do to improve the efficiency and readability of my code. My programming skill has gotten rusty, so I would like to elicit all constructive criticism that can make me write code better. init:#include <iostream>using namespace std;merge()//The merge functionvoid merge(int a[], int startIndex, int endIndex){int size = (endIndex - startIndex) + 1;int *b = new int [size]();int i = startIndex;int mid = (startIndex + endIndex)/2;int k = 0;int j = mid + 1;while (k < size){ if((i<=mid) && (a[i] < a[j])) { b[k++] = a[i++]; } else { b[k++] = a[j++]; }}for(k=0; k < size; k++){ a[startIndex+k] = b[k];}delete []b;}merge_sort()//The recursive merge sort functionvoid merge_sort(int iArray[], int startIndex, int endIndex){int midIndex;//Check for base caseif (startIndex >= endIndex){ return;} //First, divide in halfmidIndex = (startIndex + endIndex)/2;//First recursive call merge_sort(iArray, startIndex, midIndex);//Second recursive call merge_sort(iArray, midIndex+1, endIndex);//The merge functionmerge(iArray, startIndex, endIndex);}main()//The main functionint main(int argc, char *argv[]){int iArray[10] = {2,5,6,4,7,2,8,3,9,10};merge_sort(iArray, 0, 9);//Print the sorted arrayfor(int i=0; i < 10; i++){ cout << iArray[i] << endl;}return 0; }
Implementing the merge sort with only arrays, out of place
c++;sorting;mergesort
Everything Jamal said (so I will not repeat)Plus:The normal idium in C++ for passing ranges is [beg, end).For those not mathematically inclined end is specified one past the end.If you follow the C++ convention you make it easier for other C++ devs to keep track without having to think to hard about what you are up to. Personally I think it also makes writting merge sort easier. merge_sort(iArray, 0, 10);Then in merge() midIndex = (startIndex + endIndex)/2; // The start of the second half // Is one past the end of the first half. merge_sort(iArray, startIndex, midIndex); merge_sort(iArray, midIndex, endIndex);Slight optimization here://Check for base caseif (startIndex >= endIndex){ return;}You can return if the size is 1. A vector of 1 is already sorted. Since end is one past the end the size is end-start.if ((endIndex - startIndex) <= 1){ return;}Don't do manually memory management in your code.int *b = new int [size](); // Also initialization costs. // So why do that if you don't need to!// PS in C++ (unlike C)// It is more common to put '*' by the type.// That's becuase the type is `pointer to int`Better alternative:std::vector<int> b(size); // Memory management handled for you. // Its exception safe (so you will never leak). // Overhead is insignificant.A conditional in the middle of a loop is expensive.while (k < size){ if((i<=mid) && (a[i] < a[j])) {...} else {...}}Once one side is used up break out of the loop and just copy the other one.while (i <= mid && j <= endIndex){ b[k++] = (a[i] < a[j]) ? a[i++] : a[j++];}// Note: Only one of these two loop will execute.while(i <= mid){ b[k++] = a[i++];}while(j <= endIndex){ b[k++] = a[j++];}Rather than calculating the mid point both in merge() and in merge_sort, it may be worth just passing the value as a parameter.Learn to do the inplace merge so you don't have to copy the values back after the merge. midIndex = (startIndex + endIndex)/2; merge_sort(iArray, startIndex, midIndex); merge_sort(iArray, midIndex, endIndex); merge(iArray, startIndex, midIndex, endIndex); // ^^^^^^^^ pass midIndex so you don't need to re-calc
_codereview.80496
This is the first coding class I've taken and I'm not getting much feedback from the instructor. The function of this program is to take two roman numeral inputs, convert them to arabic, add them together, then convert the sum to roman numerals. I wanted to see if any of you code ninjas could give me some feedback on the solution that I've come up with for this.I would like to note that in the class I'm taking we haven't learned about dictionaries and we have only briefly covered lists.This is only the functional part of the code, not including the loops to catch improper inputs.print('Enter two roman numeral number to be added together')print()#converting first roman numeral to arabicuser1 = input('Enter first roman numeral: ')print()total1 = 0for i in user1: if i == 'I': total1 += 1 elif i == 'V': total1 += 5 elif i == 'X': total1 += 10 elif i == 'L': total1 += 50 elif i == 'C': total1 += 100if 'IV' in user1: total1 -= 2 if 'IX' in user1: total1 -= 2 if 'XL' in user1: total1 -= 20print('The decimal value of', user1, 'is', total1)print() #converting second roman numeral to arabicuser2 = input('Enter second roman numeral: ')print()total2 = 0for i in user2: if i == 'I': total2 += 1 elif i == 'V': total2 += 5 elif i == 'X': total2 += 10 elif i == 'L': total2 += 50 elif i == 'C': total2 += 100if 'IV' in user2: total2 -= 2 if 'IX' in user2: total2 -= 2 if 'XL' in user2: total2 -= 20print('The decimal value of', user2, 'is', total2)print()totalSum = total1 + total2print('The decimal sum is:', totalSum)print()numeral = []# converting from arabic to roman numeral # splits the number into integers and appends roman numeral characters to a list while totalSum > 1: for i in range(len(str(totalSum))): #this loop to identify the one, tens, hundreds place if i == 0: digit = totalSum % 10 totalSum //= 10 if digit == 9: numeral.append('IX') if 5 < digit < 9: numeral.append('I' * (digit % 5)) digit -= digit % 5 if digit == 5: numeral.append('V') if digit == 4: numeral.append('IV') if digit < 4: numeral.append('I' * digit) if i == 1: digit = totalSum % 10 totalSum //= 10 if digit == 9: numeral.append('XC') if 5 < digit < 9: numeral.append('X' * (digit % 5)) digit -= digit % 5 if digit == 5: numeral.append('L') if digit == 4: numeral.append('XL') if digit < 4: numeral.append('X' * digit) if i == 2: digit = totalSum % 10 totalSum //= totalSum numeral.append('C' * digit)numeral.reverse() print('The roman sum is: ', ''.join(numeral))
Adding two roman-numeral inputs
python;python 3.x;roman numerals
null
_webmaster.33612
We have made some links nofollow in recent weeks, but they are still showing up in Google Webmaster Tools. Does GWT ignore these links? Will they vanish as the index is updated, or do even nofollow links show up here?
Does Google Webmaster Tools show nofollow links in 'Links to your site'?
seo;google search console
Simple answer: Yes, GWT shows all links, followed and nofollowed.
_webapps.84659
Following situation: Sending emails from google spreadsheets. I have a compiled text (with line breaks), which is stored in var message.I have been trying to include a picture for the signature. To display it, I used the html body in the options parameter (as suggested here: https://developers.google.com/apps-script/reference/mail/mail-app) This is the code:function sendEmails() {var sheet = SpreadsheetApp.getActiveSheet();var range = sheet.getRange(1, 2); // Fetch the range of cells B1:B1var subject = range.getValues(); // Fetch value for subject line from above rangevar range = sheet.getRange(1, 9); // Fetch the range of cells I1:I1var numRows = range.getValues(); // Fetch value for number of emails from above rangevar startRow = 4; // First row of data to processvar dataRange = sheet.getRange(startRow, 1, numRows,9 ) // Fetch the range of cells A4:I_var data = dataRange.getValues(); // Fetch values for each row in the Range.var googleLogoUrl = http://die-masterarbeit.de/media/grafik/logo.png;var googleLogoBlob = UrlFetchApp .fetch(googleLogoUrl) .getBlob() .setName(googleLogoBlob);for (i in data) {var row = data[i];var emailAddress = row[0]; // First columnvar message = row[8]; // Ninth columnMailApp.sendEmail(emailAddress, subject, message,{name: 'Peter Peschel',htmlBody: <img src='cid:googleLogo'>,inlineImages: { googleLogo: googleLogoBlob, }})};}What happens is that the message variable is not displayed at all. I have experimented in calling on the message variable in the htmlbody again, but this completely takes out the formatting (especially line breaks) of the message.MailApp.sendEmail(emailAddress, subject, message,{name: 'Peter Peschel',htmlBody: message + <img src='cid:googleLogo'>,inlineImages: { googleLogo: googleLogoBlob, },})};}Any suggestions on how to best display the image while keeping the formatting of the message? Thanks!
E-Mail from Google Spreadsheets: including image for signature takes out formatting of message
google spreadsheets;google drive;google apps script;google apps email
You are using the format sendEmail(recipient, subject, body, options) of the MailApp.sendEmail method. As the documentation says, if options include htmlBody, it will be used instead of the body parameter. This is why your first attempt ignored message.Since you need to send an HTML message, its content must be formatted as HTML; in particular, line breaks should be <br> instead of the control character \n. Solution: replace \n with <br> in the message body. Here is an example of a script sending the content of cell A1. It provides plainMessage in case the recipient's email client does not render HTML, and htmlMessage with inline image function sendA1() { var plainMessage = SpreadsheetApp.getActiveSheet().getRange(1,1).getValue(); var htmlMessage = message.replace(/\n/g, '<br>') + <img src='cid:googleLogo'>; MailApp.sendEmail('[email protected]', 'my subject', plainMessage, { htmlBody: message, inlineImages: { googleLogo: googleLogoBlob } }); }By the way, since you are using options extensively, it would make sense to switch to the format sendEmail(message) where message is an object holding all the parameters. This makes for a more readable code: MailApp.sendEmail({ to: '[email protected]', subject: 'my subject', body: plainMessage, htmlBody: message, inlineImages: { googleLogo: googleLogoBlob }});
_opensource.777
Suppose I was finishing up an open source software. I needed to license it under an open source license.My question:Why would I not want to license my software under a Creative Commons license?Why are other licenses such as MIT licenses better for this?
Why shouldn't Creative Commons licenses be used for software?
licensing;creative commons;software
Short answer: because the CC licenses have not been designed for software and source code.This is answered by Creative Commons themselves in their FAQ:Unlike software-specific licenses, CC licenses do not contain specific terms about the distribution of source code, which is often important to ensuring the free reuse and modifiability of software. Many software licenses also address patent rights, which are important to software but may not be applicable to other copyrightable works. Additionally, our licenses are currently not compatible with the major software licenses, so it would be difficult to integrate CC-licensed work with other free software. Existing software licenses were designed specifically for use with software and offer a similar set of rights to the Creative Commons licenses.
_unix.41698
Looking for an app or tool to invoke CPU usage on my virtual Linux and Unix guests.I found the app stress from a post on Stackoverflow, but curious what the Unix/Linux crowd would recommend.
Invoking CPU Stress
cpu;benchmark
The goto for Linux is cpuburn (homepage). I would expect that it should work on other UNIX systems as well.
_codereview.135636
Is this the correct way to set value of textbox to zero when it's null? Is there any way I can improve the calculation that I am doing?private void TB_PAID_CASH_TextChanged(object sender, EventArgs e){ try { if (!string.IsNullOrEmpty(TB_TOTAL_INV.Text) && !string.IsNullOrEmpty(TB_PAID_CASH.Text) && string.IsNullOrEmpty(TB_Discount.Text)) { int Discount; int.TryParse(TB_Discount.Text, out Discount); TB_REMAINDER.Text = (Convert.ToInt32(TB_PAID_CASH.Text) - Convert.ToInt32(TB_TOTAL_INV.Text) - Discount).ToString(); } } catch (Exception ex) { MessageBox.Show(ex.Message, Exception, MessageBoxButtons.OK, MessageBoxIcon.Error); }}
Setting a textbox value to zero if null
c#;winforms
You are mixing various things here. For some values you use the try/catch and for others the TryParse. I think you should stick to one of them. What I also don't understand is why you are checking whether this is true string.IsNullOrEmpty(TB_Discount.Text) and then try to parse it anyway int.TryParse(TB_Discount.Text)?Here is what you can do:Create a property for each value:public int TotalInv{ get { int totalInv; return int.TryParse(TB_TOTAL_INV.Text, out totalInv) ? totalInv : 0; }}public int PaidCash{ get { int paidCash; return int.TryParse(TB_PAID_CASH.Text, out paidCash) ? paidCash : 0; }}public int Discount{ get { int discount; return int.TryParse(TB_Discount.Text, out discount) ? discount : 0; }}For the reminder too:public int Reminder{ get { return PaidCash - TotalInv - Discount; }}Then when the text changes you can update the text box with only one line:private void TB_PAID_CASH_TextChanged(object sender, EventArgs e){ if(PaidCash < TotalInv) { // todo: show message... } TB_REMAINDER.Text = Reminder.ToString();}
_opensource.2739
I was looking to override a class in rt.jar with my own version (to guarantee that existing legacy code remains unbroken).With Oracle's JDK, it says the following for the -Xbootclasspath/p:path option:Do not deploy applications that use this option to override a class in rt.jar, because this violates the JRE binary code license.If I switch to use OpenJDK, will I be faced with this same limitation? That is, would I be violating a license?
License violation for OpenJDK and -Xbootclasspath/p:path?
java
If I switch to use OpenJDK, will I be faced with this same limitation? That is, would I be violating a license?There is no such limitation in the OpenJDK licensing which is a combo of CDDL and GPL with classpath exception. Run with it without fear.As a side note, there is no good reason to use the Oracle BCL-licensed JDK. In fact reading its ever changing license terms, you can barely use it for development and that is about it. The OpenJDK is always a better alternative IMHO.if the class I'm modifying is covered by the Classpath Exception to the GPL, then does your answer change at all?No. To the contrary: the BCL does not give any permission to modify code.The GPL with Classpath does allow that alright. You still have to obey the GPL for the code you modify of course and -- unless you enter the grey area of derivative work -- your code using this modified code would not be subject to the GPL's requirements. /IANAL /TINLA
_unix.102757
I recently did a clean install of Linux Mint 16 Petra on my laptop.Before I wiped the old system, I did a backup of all my MySQL databases by going to the Export tab in phpMyAdmin and dumping all the databases into one file. I didn't use any compression.Now, on the new system, in phpMyAdmin, when I go to the Import tab and try to upload the SQL file, I get this error:#1046 - No database selected... Of course no database is selected, I am trying to restore them.I am not at all an expert in MySQL, I use it to support my web design projects, so I don't understand why this isn't working. It seems to me phpMyAdmin should be able to read from a file that phpMyAdmin created.How do I restore my databases from the SQL dump file that I created?
How do I restore my MySQL databases that were dumped from phpMyAdmin?
mysql
The solution I used was to search the SQL file for everywhere that this text existed:-- Database: `my_database_01`And right under it, add the following lines:CREATE DATABASE IF NOT EXISTS `my_database_01`;USE `my_database_01`;I did this for each database in the SQL dump file, and then I was able to import and restore all databases using phpMyAdmin's import command.
_softwareengineering.50133
I am thinking of introducing weekly technology meeting where programmers working on the same project can discuss things like:current status of the project on technical sidetechnology backlog. Things that we may have skipped because of deadlines but now coming back to bite us.technology constraints that are limiting developers from being productivenew and emerging technologies that may apply to the projectBasically looking at the project from programmer's perspective, not the business side.-What would be some good guidelines for a meeting like this? How long should the meeting last? Is weekly too often?Should we time-limit each topic?What kinda of topics are good for a meeting like this and which ones are bad?Is 10 people too many?...
Weekly technology meeting?
development process;web applications;team;technology
null
_cogsci.5010
Has anyone read/reviewed this book - Qualitative Mathematics for the Social Sciences? I have been trying to find a credible review on the reliability of the book.
Book review for Qualitative Mathematics for the Social Sciences
cognitive psychology;reference request;mathematical psychology
null
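No accepted answer here, but the conversion half of the question is where most of the repetition lives, and the usual table-driven trick (greedily subtracting value/symbol pairs) removes the per-digit branches while staying within lists. A hedged sketch, not a drop-in replacement for the OP's program:

PAIRS = [(100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    out = []
    for value, symbol in PAIRS:
        while n >= value:        # greedily take the largest pair that still fits
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(14 + 9))   # XXIII

Because the subtractive forms (IV, IX, XL, XC) are entries in the table, no special-casing is needed in the loop at all.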
_softwareengineering.298220
When did the trend of saying VanillaJS to refer to pure JavaScript come into widespread. Is the website Vanilla-js the discoverer of the term VanillaJS or was this term used even before the launch of this website?
History of VanillaJS
javascript;history
I created the Vanilla JS site.I didn't coin the term Vanilla JS - it's like asking someone if they invented the term Blue Chair. Blueness and Chairness have been things for thousands of years, and similarly, Vanilla in the software world usually means plain - Plain JS. I remember seeing it used before I created the site.However:The Google Trends data for the term Vanilla JS, which indicates that the term came into widespread use in August 2012, coincides with the registration date of the vanilla-js.com domain name. So, while I didn't invent the term, I probably popularized it. This is funny, since I really don't like the term vanilla meaning plain - but, it's what the software community uses, so in the interest of clarity (and comedy), it's what I chose to use.
_unix.284234
I have a program that's generating audio and I can't get mplayer to play it. I'm doing./myprogram | mplayer - -cache 1024 -cache-min 30 -noconsolecontrolsAnd I get the messageCache fill: X% (Y bytes)X goes up to the specified cache-min value (but not past it) and then it keeps printing the error message:Cache not filling, consider increasing -cache and/or -cache-min!I tried some other values for cache and cache-min but none of them worked. Of course, there is a possibility that my program is somehow at fault.
Cache not filling error when using mplayer to read from stdin
linux;io redirection;mplayer
Just in case anyone else has this issue: I simply waited a bit longer instead of killing the program immediately, and it worked. Just ignore the scary messages.
_codereview.154110
I had never programmed in a functional programming language before. I think that this function has large nesting. Is there way to rewrite it better? I am most worried about the use of flatten. Is there way to get rid of it? Also, is the indentation correct and does the function name correspond to Clojure naming style?(defrecord Point [x y])(def delimiter # )(defn str->cells Converts string with cells separated with delimiter and new lines to Points hash map [^String s] {:pre [(re-matches #^((?:^|\s)(?:.|\s)(?=\s|$))+$ s)]} (apply hash-map (flatten (map-indexed (fn [y line] (map-indexed (fn [x value] [(Point. x y) value]) (str/split line delimiter))) (str/split-lines s)))));; Example of use:(str->cells 1 2\n3) ;; -> {#Point{:x 1, :y 0} 2, #Point{:x 0, :y 0} #Point{:x 0, :y 1} 3}
Parsing string into hash-map with 2D coordinates as keys
beginner;strings;functional programming;clojure;hash map
Since there is no answers, I'll post my new version of code:(defn line->cells Converts string with cells to sequence of maps of Points It is str->cells helper, so it requires y argument. [str->value ^Long y ^String line] {:pre [(re-matches #^((?:^| )(?:.+?| )(?= |$))+$ line)]} (map-indexed (fn [x value] {(Point. x y) (str->value value)}) (str/split line # )))(defn str->cells Converts string with cells separated with delimiter and new lines to Points hash map [^String s str->value] {:pre [(re-matches #^((?:^|\s)(?:.+?| )(?=\s|$))+$ s)]} (->> s (str/split-lines) (map-indexed (partial line->cells str->value)) (flatten) (into {})))
_softwareengineering.193381
In a lecture, the lecturer described the following model : E - entry (the preconditions to a task). T - task - doing the task V - verifying the tasks quality D - Delivering the tasks X - Exit. or ETVDXIf anyone is familiar with this 'generic compliance model', how does it fit into software development exactly? I presume it's equivalent to the waterfall model of negotiating requirements > defining/decompose stage > estimating effort > estimating resources > developing schedule.
How does the ETVDX model fit in with project management?
project management
null
_cogsci.17990
I observe this kind of thinking especially in politics when someone brings up that it's fine to take a few dollars from a lot of people if it helps the city a little. I think the reason we see things like this is that we can better imagine the value of one thing with awesome qualities than million things of a small quality because we are more used to compare the quality, not the amount, in our daily lives.I know it might sound crazy but it's a very specific bias when one things about it. Is there a name for a similar cognitive bias/fallacy?
Name of the bias towards not seeing small harm of many as important?
cognitive psychology;terminology;bias;economics;political psychology
null
_cs.67366
In the current version as of the Wikipedia article Potential method, the amortized cost of each operation is defined as the following$$T_{\mathrm{amortized}}(o) = T_{\mathrm{actual}}(o) + C \cdot (\Phi(S_{\mathrm{after}}) - \Phi(S_{\mathrm{before}})),$$and the total amortized cost is $$ \begin{align*}T_{\mathrm{amortized}}(O) &= \sum_i (T_{\mathrm{actual}}(o_i) + C \cdot (\Phi(S_{i+1}) - \Phi(S_{i}))) \\ &= T_{\mathrm{actual}}(O) + C \cdot (\Phi(S_{\mathrm{final}}) - \Phi(S_{\mathrm{initial}})).\end{align*} $$I understand the formulas and understand $T_{\mathrm{amortized}}$ is an upper bound on $T_{\mathrm{actual}}$ if the initial potential is zero. And I think I understand the basic examples like binary counter, two stack FIFO, etc. However, I am puzzled this: The actual cost is part of the formula.I might be wrong but it looks like the actual cost is a known quantity. If we already know the actual cost, why do we need to add the potential to get an upper bound? It looks useless to me. Can anyone help give an example in which such potential analysis is useful?
Why do we need potential for amortized analysis?
algorithms;complexity theory;amortized analysis
As is unfortunately sometimes the case, Wikipedia is doing a terrible job of explaining what amortized analysis actually is.The idea of amortized analysis is that while operations may have a bad worst case cost, their average cost could be much lower. Average cost means different things in different circumstances. In amortized analysis, here is what it means:Suppose that you have a data structure supporting operations $O_1,\ldots,O_r$. We say that these operations have amortized cost $c_1,\ldots,c_r$ if the cost of the sequence $O_{i_1},\ldots,O_{i_n}$ of operations is at most $$c_{i_1} + \cdots + c_{i_n}.$$This definition is a bit simplistic, since sometimes the amortized cost depends on other auxiliary parameters such as the overall number of operations or the number of operations of a specific type.The potential method is one way of carrying out amortized analysis, that is, of proving that the operations in a certain data structure have some amortized cost. Stated differently, if you want to prove that the amortized cost of $O_1,\ldots,O_r$ is $c_1,\ldots,c_r$, then one of the techniques is the potential method. There are other techniques as well.How does the potential method work? You define a potential function $\Phi$ on the state of the data structure. The potential function must be always non-negative, and for the initial data structure it must be zero. You then prove that if at state $S_{\text{before}}$ you execute operation $O_i$ which leads to state $S_{\text{after}}$, then the cost $T_{\text{actual}}$ of the operation satisfies$$T_{\text{actual}} \leq c_i + \Phi(S_{\text{before}}) - \Phi(S_{\text{after}}).$$Given that this holds for all $i$, consider a sequence of operations $O_{i_1},\ldots,O_{i_n}$, and denote the corresponding states of the data structure by $S_0,\ldots,S_n$. The upper bound on $T_{\text{actual}}$ implies that the total cost of these operations is at most$$\begin{align*}\sum_{t=1}^n [c_{i_t} + \Phi(S_{t-1}) - \Phi(S_t)] &= \sum_{t=1}^n c_{i_t} + \sum_{t=1}^n [\Phi(S_{t-1}) - \Phi(S_t)] \\ &=\sum_{t=1}^n c_{i_t} + \Phi(S_0) - \Phi(S_n) \\ &=\sum_{t=1}^n c_{i_t} - \Phi(S_n) \\ &\leq\sum_{t=1}^n c_{i_t},\end{align*}$$which was to be proven.
_cstheory.31851
Assume we have an undirected graph $G=(V,E)$ and vertex locations $\pi: V \rightarrow \mathbb{R}^2$. I am looking for a procedure to perturb the vertex positions to obtain new positions $\pi'$ such that the following statements hold:For every pair $s\neq t \in V$, there is a unique shortest s-t-path $P_{s,t}$ in $G$ w.r.t. the euclidean weight function $c(v,w)=|\pi'(v)-\pi'(w)|_2$.$P_{s,t}$ is also a shortest s-t-path w.r.t. the original vertex positions $\pi$.$\pi'$ can be deterministically computed in polynomial time given $\pi$. The length of the numbers occuring in $\pi'$ are polynomially bounded in the length of those in $\pi$.I know that lexicographic perturbation is a standard procedure to do this deterministically. Sadly there is no simple way to modify the distance of only a single edge in euclidean instances.Is there any known approach that can be applied to those euclidean instances? Even a randomized perturbation algorithm would be interesting for me.
Lexicographic perturbation for euclidean shortest path instances?
reference request;graph theory
I think you're unlikely to get a good answer, because this is tied up in difficult and unsolved algebraic problems. The issue is that Euclidean path lengths (for points with integer coordinates) can be expressed as sums of square roots, but we don't know how small the difference between two distinct sums of square roots can be. Because of this, we also don't know how far apart the shortest path length and second-shortest distinct path length between a given pair of vertices can be, and therefore we don't know how small we have to make a perturbation to prevent it from changing the shortest path to a path that wasn't originally shortest.For the same reason, shortest paths in Euclidean graphs are not really known to be solvable in polynomial time, in models of computation that take into account the bit complexity of the inputs, even though Dijkstra is polynomial in a model of computation allowing constant-time real-number arithmetic. So asking for a polynomial time algorithm for a more complicated variant of the problem in which the bit complexity is unavoidable seems likely to have a negative answer.
_unix.342699
At first I thought this answer was the solution, but now I think I need a temporary file as buffer.This works unreliably:#!/bin/shecho 'OK' |{ { tee /dev/fd/3 | head --bytes=1 >&4 } 3>&1 | tail --bytes=+2 >&4} 4>&1When I run this in a terminal, sometimes I get:OKand sometimes I get:K OSeems totally random. So as a workaround I'm writing the output of tail to a file and reading it back to stdout after the pipe has finished.#!/bin/shecho 'OK' |{ { tee /dev/fd/3 | head --bytes=1 >&4 } 3>&1 | tail --bytes=+2 >file} 4>&1cat fileCan this be done in dash without temporary files? Shell variables as buffer aren't an option either, as the output might contain NUL bytes.
dash: Pipe STDIN to multiple commands and their output to STDOUT in defined order
pipe;io redirection;file descriptors;tee;dash
null
_unix.187611
In order to create my own custom Linux ISO (Ubuntu), I have decompressed the Squash file system and would like to make the changes to leave it with a installed and working OpenSSH-Server before re-squashing it again to ISO. I suppose these changes are:Some binary files in some specific directories.Some config files for the SSH Daemon.Some files corresponding to the keyfiles that are supposed to be generated.Some changes to the system files in order to make the SSH Daemon start on boot.Sumarizing, I would like to change just the same that the classical apt-get installer would change. If this is possible, how could it be done?
How to add runnable OpenSSH server to an Ubuntu ISO?
software installation;iso;squashfs
null
_unix.198220
Hi everybody, today I run dmesg and at the bottom I found some error lines about the device mmcblk0p2. Everything works fine even with those errors but I'd like to understand the problem.I've googled and I've found that many people solved the problem by running e2fsck -b block_number /dev/sdb with all the block_number(s) listed running sudo mke2fs -n /dev/xxx but the answer is always that my super-block is corrupt. I don't know if it is important but I did it by inserting the sd-card in a pc running debian and I unmounted /media/fc25...(that should be the filesystem) and usb1 (that should be the boot) because otherwise it said the /dev/sdb was in use.Any ideas?
ETX4-fs error on startup
ext4;fsck;sd card
null
_cs.59329
So, I now that any multiple-tape TM can be in theory turned into one-tape TM. However, it is too easy to copy lets say binary number from one tape to another. Thats why I thought about putting a separator between the two copies and then taking symbols one after the other and writing them after the separator until the separator itself is encountered. The problem however is, that I am not sure how could it remember the place to which it has already come/copied the characters. Example:First we have:##1011##Then we put the separator '&' at the end##1011&##Read back to beginning and change the state accordingly so that it will write $1$ or $0$ after the separator. So far - so good, then we read back again and now How could we know that we have already copied the first $1$ and must now copy the $0$ without putting any restriction on the input length (in regards to the number of states)? In other words, how could we remember the last copied symbol?I have thought of putting an extra parameter - just a integer $\leq length$ (something like $\delta (z_1,1,L,1)$ where the last one would be the number of already written-out symbols). This is would be easy to understand, but would be nowhere near the definition of Turing machine. So, any useful ideas?Thanks.
Turing Machine remembering copied symbols
turing machines;automata
Once you've hit the '&' separator, move left until you see the '#', then move one cell to the right and replace the '1' with an 'x' (and change to state $p$ to remember you've seen a 1) or a '0' with a 'y' (changing to state $q$ to remember you've seen a 0). Now move right until you pass the '&' separator and see a '#'. Replace that with a '1' if you were in state $q$ or with a '0' if you were in state $p$. Now move to the left until you see a 'x' or a 'y', move one cell to the right and repeat what you've done before. Continue the process until everything on the left is a string of 'x's and 'y's, then make one more pass, restoring the values to their originals.
_codereview.160972
I was making a simple currency converter utility and had to cache the conversion factors after fetching from server. Came up with the below interface. I have basically delegated the caching responsibility to OkHttp itself. Review appreciated.public interface CurrencyConverter { Interceptor CACHE_CONTROL_INTERCEPTOR = chain -> chain.proceed(chain.request()) .newBuilder() .header(Cache-Control, String.format(max-age=%d, max-stale=%d, 3600, 0)) .build(); OkHttpClient HTTP_CLIENT = new OkHttpClient.Builder() .cache(new Cache(new File(System.getProperty(java.io.tmpdir)), 1024 * 1024 * 10)) .addNetworkInterceptor(CACHE_CONTROL_INTERCEPTOR) .build(); BigDecimal getConversionFactor(String currencyFrom, String currencyTo) throws IOException; BigDecimal convert(String currencyFrom, String currencyTo, BigDecimal value) throws IOException;}The server is 3rd party and sends a no-cache header which I think does not make sense. It should be safe to cache conversion factors for some duration.
Caching strategy
java;networking;cache
null
_unix.76638
I am a Linux newbie. I know my way around the command-line enough to where it doesn't get in the way of my day job (I develop web applications in Python). This involves some piping, grep, awk, and programmer-oriented stuff like that.I am experimenting with desktop Linux on my home computer (using Ubuntu). I am trying to set up Jack, as I've heard it's the Linux equivalent to ASIO on Windows. I play guitar for fun and thought it would be cool to play with Ardour or find a FOSS equivalent to Guitar Rig.However I do not understand... well, anything. I don't understand what Jack does. From what I can gather, the general flow is[sound hardware] β†’ [kernel] β†’ [JACK] β†’ [ALSA] β†’ [PulseAudio] β†’ [Phonon] β†’ [my headphones](Phonon comes in because I use KDE. I think.)I don't actually know what the arrows represent. The JACK website contains essentially zero beginning user oriented documentation, except for one page describing how to use JACK with PulseAudio.As a beginner who, regardless of JACK, doesn't understand how sound works in Linux, where can I go to learn? I'd like to gain an understanding of the sound stack. But for JACK all I was able to find is its barren Wiki (including two juicy links named Configuring and running a JACK server and Setting up a simple audio chain, which both turn out to be Coming Soon pages which haven't been edited in five years) and a Linux Journal article from 2005.Many things confuse me. How can I tell which sound devices Linux recognizes? I have three: an onboard chip, a USB audio interface (an M-Audio FastTrack), and a USB webcam that has a microphone. Do all of these things get recognized by Linux? Do they all register specifically as sound devices? Does each device have to have independent drivers for JACK, ALSA, PulseAudio, etc.? Is there a basic way I can test my device to make sure it has output? Is there a way I can monitor my devices to see if the software is actually using them?Right now Amarok sound is audible, but Youtube sound isn't. Amarok is also running through my USB FastTrack instead of my onboard sound chip. Hydrogen refuses to start, presumably because I have JACK or Alsa or something configured wrong. I have no idea how to figure out the rhyme or reason for these things.I realize I am asking several questions. I am not really hoping to get an answer for any specific one, but just trying to paint a picture of the general mayhem I am experiencing. I know the questions are supposed to be focused and I apologize but I feel like I'm at the end of my rope. I'm hoping there's some nice, well-written Here's how you work the sound on Linux article (or set of articles) I just can't find. I am slightly panicked only because there seems to be so little information available on JACK in general, even on the official JACK wiki. Please help!!
Linux newbie: How do I use Jack? How does Linux sound work?
audio;jack
In my endeavor with Linux sound I have ended up disabling autospawning of Pulse Audio (so it doesn't restart when shut down):Add autospawn=no to ~/.pulse/client.conf. Stop with pactl exitStart with pulseaudioDoing live sound stuff or the like I shut down PA and run JACK only. No PA bridge. I have never gotten latency satisfactorily lowered using PA or JACK+PA.This article seems to give a rather good and quick introduction to the layers, which also mentions Phonon.You have perhaps read this, and is also not up to date, but would perhaps bring you closer to an understanding: Linux Music Workflow: Switching from Mac OS X to Ubuntu with Kim Cascone. Note the diagram above heading Workflow. (Which you can also find here under JACK Schematic diagram.) Also read the links e.g. the one on top Introduction to Linux Audio, even though from 2004, it gives you a quick view of ALSA.Though I'm not to familiar with either myself I believe a good approach is to split out the learning in various parts.Get an understanding of ALSAGet an understanding of JACK (Especially since you want to do studio work.)Get an understanding of Pulse Audioin that order. It is no wonder one struggle with grasping Linux sound. That has quite a bit to do with history and how it all has evolved. That is also why, if one want to truly understand it, it is a good thing to learn history of it. Thus again - ALSA is a good place to start. Do some sniffing on OSS. And work your way up.Quick way to might get it to work is follow either of these guides.Simplistically; ALSA is part of the kernel and know how to handle various hardware. JACK as well as Pulse Audio uses API to control and interact with the hardware. ALSA can also be used alone as a sound server. Applications uses JACK/PA API to do multi thread sound work.A quick view of your system can be achieved by running the alsa-info.sh script found here.A very simplified diagram of a blurry view showing some of the connections: +------------------------------------------------+ | SOUNDCARD | |------------------------------------------------| _____ __ | ___________ | / \/ \ | | ADC | <---- analog in --[o---7 :===========|==|==|=[';] | -----|----- | \____7 \__/ | __________ AMP | | | | MIXER |----+------o | | +---|---+-- AMP_____|______ | _______ | | | DAC | ---> analog out -[o------[ o o o ] | | +----------+ | | | | | | | (o) | | -- -+---^-- --v-- -- -- --^-- --v-- --+-- | | | | CONTROLS | | ((0)) | | | |_______| | | +------------------------------||----------------+ || ADC: Analog to digital || DAC: Digital to analog |- udev trigged and mounted _______________________________||________________| || KERNEL || -|-|-|-|- || || ALSA API <--> [Device Drivers] || ^ | module-alsa-card +--------|--| | | | |+---------|--|---------------------------| Memory Buffer I/O: | v | || +----|---|--| JACK ------------ PULSE AUDIO --------------+ || sinks | |--| * hardware-access-points * hardware-sink | | Uses ALSA API for HW I/O| * virtual-devices * mediaplayer-sink | | Mixing, Control etc.| * recorder-sink | || * ... | |--| | ||-----------------|------|--------------------|---|| APPLICATIONS -----------------+ ||-------------------------------------------------|| || Software based mixing || |+-------------------------------------------------+
_softwareengineering.235296
Does it violate the GPLv3 if I were to use GPL-licensed firmware with the closed-source hardware that I am selling? Do both have to be open source, or do I just have to make the firmware source freely available? Thanks!
Using GPL software with closed-source hardware
open source;hardware;closed source
There are lots of cases of consumer devices using GPL'd firmware. TiVo was the big name one because certain people were annoyed that even though TiVo released their changes, you couldn't take that and make your own changes and put it on your TiVo because they locked their boxes. In fact that's what caused the TiVo'isation clause in the next version of the GPL (v3).However, even GPLv3 doesn't require that the hardware be open source (to the best of my knowledge, IANAL). The GPLv3 seems to say that if you pulled a TiVo then you'd have to release the software changes you made and provide instructions/tools to allow people to replace the firmware on the device with new firmware that they create with their own changes.Pretty much any CPU out there isn't open source, so how could vendors sell any hardware that had GPL software on it?That said, you should thoroughly read the license before using it. I have read it, and I don't recall anything about requiring the hardware to be open source.