id | question | title | tags | accepted_answer
---|---|---|---|---
_webapps.102879
|
As a seller, I'd like to customize eBay's End of Auction email. I couldn't find any up-to-date information about it apart from an old eBay FAQ page, which says:

> To customize your End of Auction email, go to My eBay: Click the My eBay button at the top of any page. You may be asked to sign in. In the left-hand navigation, click the eBay Preferences link under My Account. Under Seller Preferences, click the change link for the End of Auction and Transaction email.

However, when I go to Seller Preferences, it says the settings have been moved to the Manage communications with buyers section. When I follow the link, I see a page with nothing to change apart from the invoice. Then, in the invoice section, I cannot upload any logo and the Custom Message box is grayed out.

Is the Customising Invoice page the right section to change eBay's End of Auction email? If so, does that mean it cannot be changed until I purchase an eBay Shop subscription? If it's not the right place, where do I customize eBay's End of Auction emails?
|
How to customize eBay's End of Auction emails?
|
email;ebay;customization
|
According to eBay's specialised department/team, the system is designed so that the edit option for customizing these emails is simply not there. That means it is not possible to customize eBay's End of Auction or End of Transaction emails without purchasing an eBay Shop subscription; as of now this option is available to shop owners only. The Custom Message option mentioned in the Customising Invoice section relates to Logos and branding, which is likewise available to eBay Shops subscribers only.

It may be possible to customize the message on the PayPal site, as mentioned in the FAQ page:

> You can also customize your eBay End of Auction email within your Profile section on the PayPal site.

Although it's best to check with PayPal support whether such an option exists. Please also note that the mentioned FAQ refers to the US site only, and a few features may differ on other sites (such as the UK site).
|
_codereview.19790
|
I just learned about Maybe in Haskell, so I decided to try to use it with a binary search. Here's the function:

```haskell
binarySearch :: (Ord a, Int b) => [a] -> a -> b -> b -> Maybe b
binarySearch l e beg last
  | lookat < e  = (binarySearch l e i last)
  | lookat > e  = (binarySearch l e beg i)
  | lookat == e = Just i
  | otherwise   = Nothing
  where i      = floor ((beg+last)/2)
        lookat = l !! i
```

`l` is a list, `e` is the element of interest, `beg` is the start of the section of interest and `end` is the end of said section. The error I am getting is:

```
BinarySearch.hs:1:25:
    `Int' is applied to too many type arguments
    In the type signature for `binarySearch':
      binarySearch :: (Ord a, Int b) => [a] -> a -> b -> b -> Maybe b
```

I have tried a few other things, including:

```haskell
binarySearch :: Ord a => [a] -> a -> Int -> Int -> Maybe b
```

yielding the error:

```
BinarySearch.hs:6:23:
    Could not deduce (b ~ Int)
    from the context (Ord a)
      bound by the type signature for
                 binarySearch :: Ord a => [a] -> a -> Int -> Int -> Maybe b
      at BinarySearch.hs:(3,1)-(9,22)
      `b' is a rigid type variable bound by
          the type signature for
            binarySearch :: Ord a => [a] -> a -> Int -> Int -> Maybe b
          at BinarySearch.hs:3:1
    In the first argument of `Just', namely `i'
    In the expression: Just i
    In an equation for `binarySearch':
        binarySearch l e beg last
          | lookat < e = (binarySearch l e i last)
          | lookat > e = (binarySearch l e beg i)
          | lookat == e = Just i
          | otherwise = Nothing
          where
              i = floor ((beg + last) / 2)
              lookat = l !! i
```

I am not sure what I am doing wrong. Any corrections or comments on style or solution would be highly appreciated.
|
Errors with binary search in Haskell
|
haskell;beginner;binary search
|
You don't want `Int b`, you just want `Int`. `Int` is a concrete type, not a type class, so it cannot appear in the constraint context; the index type should be `Int` throughout, including the result:

```haskell
binarySearch :: Ord a => [a] -> a -> Int -> Int -> Maybe Int
```
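For completeness, here is a minimal sketch of how the whole function could look with that signature. This is an assumed fix, not the original poster's final code: with `Int` indices, integer division replaces `floor`, and an explicit empty-range guard replaces the `otherwise = Nothing` case.

```haskell
-- Sketch: binary search over a list with inclusive Int bounds.
-- (Indexing a list with !! is O(n); fine for illustration.)
binarySearch :: Ord a => [a] -> a -> Int -> Int -> Maybe Int
binarySearch l e beg end
  | beg > end  = Nothing                          -- empty range: not found
  | lookat < e = binarySearch l e (i + 1) end     -- search upper half
  | lookat > e = binarySearch l e beg (i - 1)     -- search lower half
  | otherwise  = Just i                           -- found at index i
  where
    i      = (beg + end) `div` 2                  -- integer midpoint
    lookat = l !! i

main :: IO ()
main = print (binarySearch [1, 3, 5, 7, 9 :: Int] 7 0 4)  -- Just 3
```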
|
_softwareengineering.164017
|
A few months ago, we started developing an app to control in-house developed test equipment and record a set of measurements. It should have a simple UI, and will likely require threads due to the continuous recording that must take place. This application will be used for a few years, and shall be maintained by a number of computer science students during this period.

Our boss graduated some 30 years ago (not to be taken as an offense; I have more than half that time on my back too) and has mandated that we develop this application in ANSI C. The rationale is that he is the only one who will be around the entire time, and therefore he must be able to understand what we are doing. He also ruled that we should use no abstract data types; he even gave us a list with the names of the global variables (sigh) he wants us to use.

I actually tried that approach for a while, but it was really slowing me down to make sure that all pointer operations were safe and all strings had the correct size. Additionally, the number of lines of code that actually related to the problem at hand was only a small fraction of our code base. After a few days, I scrapped the entire thing and started anew using C#. Our boss has already seen the program running and he likes the way it works, but he doesn't know that it's written in another language.

Next week the two of us will meet to go over the source code, so that he will know how to maintain it. I am sort of scared, and I would like to hear from you guys what arguments I could use to support my decision.

Cowardly yours,
|
How can I convince my boss that ANSI C is inadequate for our new project?
|
c#;programming languages;c
| null |
_cs.41184
|
Say that string $x$ is a prefix of a string $y$ if there exists a string $z$ such that $xz = y$, and say that $x$ is a proper prefix of $y$ if in addition $x \not= y$. A language is prefix-free if it doesn't contain a proper prefix of any of its members.

$$\text{Prefix-FreeREX} = \{(R) \mid R \text{ is a regular expression and $L(R)$ is prefix-free}\}$$

I was wondering how to prove that Prefix-FreeREX is decidable. Also, why does a similar approach fail to show that Prefix-FreeCFG is decidable?
|
Decidability of "Is this regular expression prefix-free?"
|
computability;regular expressions
| null |
_unix.10664
|
Help!! Have you ever tried dragging a folder into another folder by mistake in a graphical SSH client? Well, I did this morning, and since I wanted it to stop immediately (I could see it was about to move hundreds of files), I closed the client by hitting X.

Now, my problem is, I don't know which folder was accidentally moved (well, actually, the client didn't have time to move the whole folder, but my guess is 3-4 files in it were actually moved). So my question is: how do I recall the file transfer history (or whatever history I need) on a Debian/Linux system?

I have googled it to no avail. And, of course, I have tried the history command and the .bash_history file, but I guess they only cover commands issued from the command line. I don't know how my SSH client issues file commands (my guess is it has something to do with SFTP), so I have no clue where to find the history. My client, in case it matters, is SSH Secure Shell for Windows.
|
Where to find file transfer history from a SSH client (Debian/Linux)?
|
ssh;debian;file transfer
| null |
_unix.315124
|
By chance I overloaded my tiny speaker beyond simple nonlinearity, and now it sounds like a sick piglet. I have a Dell Precision-M laptop with Ubuntu 14.04 loaded on it. No adjustment of the sound panel helps. I cannot use it now for Skype, news, or background music while I work.

Most likely it is plain mechanical damage (which would be a shame; I paid top price to have a top Intel processor, and getting a low-quality speaker with it doesn't sound right...), but I still have a little hope it is a fixable software problem. If this is true, is there any way I can fix it? If, on the other hand, it is a damaged speaker, can I replace it myself (by first purchasing a new one)? Or should I seek professional help?
|
My laptop speaker produces a pig-like sound; is it a software or hardware problem? Is there any way I can fix it myself?
|
linux;ubuntu;audio;hardware
| null |
_codereview.74356
|
I implemented a search function like this:

```csharp
private static void GetResults(ref ObservableCollection<string> resultTitles, ref string[] query, ref ObservableCollection<int> weight)
{
    int position = -1;
    foreach (string[] r in SearchKeys.Keys)
    {
        position++;
        foreach (string s in query)
        {
            int m = r.Length/2;
            int min = r[m][0] < s[0] ? m : 0;
            int max = r[m][0] <= s[0] ? r.Length : m+1;
            for (int i = min; i < max; i++)
            {
                weight.Add(0);
                if (r[i] == s)
                {
                    if (weight[position] == 0)
                    {
                        resultTitles.Add(r[0]);
                    }
                    weight[position]++;
                } // end if
            } // end foreach(s)
        } // end foreach(t)
    } // end foreach(r)
}
```

In the main program, the user inputs a query, the program splits the query at the spaces, and passes the query and two empty ObservableCollections to this function. SearchKeys.Keys is implemented like this:

```csharp
public static string[][] Keys = { Array1, Array2, Array3 };
```

The arrays are implemented like this:

```csharp
private static string[] Array1 = { Title, val1, val2, val3, val4, val5, val6 };
```

As always, all comments are welcome; in particular, I am interested in which ways this code's performance could be improved. The data in SearchKeys is sorted, of course.

This is a previous version of my code, which may also be of interest. This version should run about twice as slow as the above, according to my estimates:

```csharp
private void getResults(ref ObservableCollection<string> tmp, ref string[] query, ref ObservableCollection<int> weight)
{
    int position = -1;
    foreach (string[] r in SearchKeys.Keys)
    {
        position++;
        foreach (string t in r)
        {
            weight.Add(0);
            foreach (string s in query)
            {
                if (t == s)
                {
                    if (weight[position] == 0)
                    {
                        tmp.Add(r[0]);
                    }
                    weight[position]++;
                } // end if
            } // end foreach(s)
        } // end foreach(t)
    } // end foreach(r)
}
```

This is how I call this function:

```csharp
static void Main(string[] args)
{
    ObservableCollection<int> weight = new ObservableCollection<int>();
    ObservableCollection<string> resultTitles = new ObservableCollection<string>();
    string[] query = Console.ReadLine().Split(' ');
    GetResults(ref resultTitles, ref query, ref weight);
    StableSort(ref resultTitles, ref weight);
    foreach (string s in resultTitles)
        Console.WriteLine(resultTitles.IndexOf(s) + ": " + s);
}
```

This is StableSort (see Stable Sort in C#):

```csharp
static void StableSort(ref ObservableCollection<string> values, ref ObservableCollection<int> weights)
{
    while (weights.Contains(0))
    {
        weights.Remove(0);
    }
    if (values == null)
    {
        throw new ArgumentNullException("values");
    }
    if (weights == null)
    {
        throw new ArgumentNullException("weights");
    }
    if (values.Count != weights.Count)
    {
        throw new ArgumentOutOfRangeException("collections count not equal", (Exception)null);
    }
    ObservableCollection<string> localValues = new ObservableCollection<string>();
    ObservableCollection<int> localWeights = new ObservableCollection<int>();
    int index = -1;
    var weightsWithIndex = weights.Select(p => new { Value = p, Index = ++index }).OrderByDescending(p => p.Value);
    foreach (var w in weightsWithIndex)
    {
        localWeights.Add(w.Value);
        localValues.Add(values[w.Index]);
    }
    values = localValues;
    weights = localWeights;
}
```
|
Search arrays for values
|
c#;search
|
Focusing on:

```csharp
foreach (string[] r in SearchKeys.Keys)
{
    position++;
    foreach (string s in query)
    {
        int m = r.Length/2;
        int min = r[m][0] < s[0] ? m : 0;
        int max = r[m][0] <= s[0] ? r.Length : m+1;
```

`r.Length` won't change by iterating over `query`, nor does `r[m][0]`:

```csharp
foreach (string[] r in SearchKeys.Keys)
{
    position++;
    int length = r.Length;
    int middleOfArray = length / 2;
    char firstKeyItemCharacter = r[middleOfArray][0];
    foreach (string s in query)
    {
        int min = firstKeyItemCharacter < s[0] ? middleOfArray : 0;
        int max = firstKeyItemCharacter <= s[0] ? length : middleOfArray + 1;
```

Instead of adding to `weight` for each iteration of the innermost loop, add to it at the outermost loop. Because you remove all entries of `weight` where the entry `== 0` in the `StableSort()` method, we can do this here using the `Where()` method. By the way, by first removing the 0 entries, the check for `weights == null` becomes pointless.

```csharp
private static void GetResults(ref ObservableCollection<string> resultTitles, ref string[] query, ref ObservableCollection<int> weight)
{
    int position = -1;
    IList<int> currentWeights = new List<int>();
    foreach (string[] r in SearchKeys.Keys)
    {
        currentWeights.Add(0);
        position++;
        int length = r.Length;
        int middleOfArray = length / 2;
        char firstKeyItemCharacter = r[middleOfArray][0];
        foreach (string s in query)
        {
            int min = firstKeyItemCharacter < s[0] ? middleOfArray : 0;
            int max = firstKeyItemCharacter <= s[0] ? length : middleOfArray + 1;
            for (int i = min; i < max; i++)
            {
                if (r[i] == s)
                {
                    if (currentWeights[position] == 0)
                    {
                        resultTitles.Add(r[0]);
                    }
                    currentWeights[position]++;
                }
            }
        }
    }
    weight = new ObservableCollection<int>(currentWeights.Where(x => x > 0));
}
```
|
_unix.169238
|
I recently installed Debian Wheezy 7.6 on my laptop, which has a 4GB nVidia graphics card. By default, on login it boots into GNOME classic mode and reports that GNOME 3 failed to load. Also, sometimes the mouse pointer is not consistent (it goes invisible while scrolling and clicking). This is the output of .xsession-errors:

```
/etc/gdm3/Xsession: Beginning session setup...
localuser:a12 being added to access control list
openConnection: connect: No such file or directory
cannot connect to brltty at :0
gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
GNOME_KEYRING_CONTROL=/home/a12/.cache/keyring-VJMq1w
GNOME_KEYRING_CONTROL=/home/a12/.cache/keyring-VJMq1w
SSH_AUTH_SOCK=/home/a12/.cache/keyring-VJMq1w/ssh
GNOME_KEYRING_CONTROL=/home/a12/.cache/keyring-VJMq1w
SSH_AUTH_SOCK=/home/a12/.cache/keyring-VJMq1w/ssh
GPG_AGENT_INFO=/home/a12/.cache/keyring-VJMq1w/gpg:0:1
GNOME_KEYRING_CONTROL=/home/a12/.cache/keyring-VJMq1w
SSH_AUTH_SOCK=/home/a12/.cache/keyring-VJMq1w/ssh
(gnome-settings-daemon:16895): color-plugin-WARNING **: failed to get edid: unable to get EDID for output
(gnome-settings-daemon:16895): color-plugin-WARNING **: unable to get EDID for xrandr-default: unable to get EDID for output
(gnome-settings-daemon:16895): color-plugin-WARNING **: failed to reset xrandr-default gamma tables: gamma size is zero
(gnome-panel:16921): Gtk-CRITICAL **: gtk_accelerator_parse_with_keycode: assertion `accelerator != NULL' failed
** (gnome-panel:16921): WARNING **: Unable to parse mouse modifier '(null)'
Initializing tracker-store...
vmware-user: could not open /proc/fs/vmblock/dev
Tracker-Message: Setting up monitor for changes to config file:'/home/a12/.config/tracker/tracker-store.cfg'
Tracker-Message: Setting up monitor for changes to config file:'/home/a12/.config/tracker/tracker-store.cfg'
Starting log: File:'/home/a12/.local/share/tracker/tracker-store.log'
Initializing tracker-miner-fs...
Tracker-Message: Setting up monitor for changes to config file:'/home/a12/.config/tracker/tracker-miner-fs.cfg'
Starting log: File:'/home/a12/.local/share/tracker/tracker-miner-fs.log'
** (gnome-screensaver:16959): WARNING **: Config key not handled: disable-application-handlers
** (gnome-screensaver:16959): WARNING **: Config key not handled: disable-command-line
** (gnome-screensaver:16959): WARNING **: Config key not handled: disable-log-out
** (gnome-screensaver:16959): WARNING **: Config key not handled: disable-print-setup
** (gnome-screensaver:16959): WARNING **: Config key not handled: disable-printing
** (gnome-screensaver:16959): WARNING **: Config key not handled: disable-save-to-disk
(gnome-panel:16921): Gtk-CRITICAL **: gtk_accelerator_parse_with_keycode: assertion `accelerator != NULL' failed
** (gnome-panel:16921): WARNING **: Unable to parse mouse modifier '(null)'
** Message: applet now removed from the notification area
(gnome-panel:16921): GLib-GObject-WARNING **: /tmp/buildd/glib2.0-2.33.12+really2.32.4/./gobject/gsignal.c:2459: signal `size_request' is invalid for instance `0x230b6c0'
** Message: applet now embedded in the notification area
Initializing nautilus-gdu extension
(gnome-settings-daemon:16895): updates-plugin-WARNING **: Failed to get symlink: Error when getting information for file '/run/udev/firmware-missing/ar3k/ramps_0x31010000_40.dfu': No such file or directory
Shutting down nautilus-gdu extension
Window manager warning: Log level 16: Error converting selection
Window manager warning: Log level 16: Error converting selection
Window manager warning: Log level 16: Error converting selection
Window manager warning: Log level 16: Error converting selection
Window manager warning: Log level 16: Error converting selection
Window manager warning: Log level 16: Error converting selection
Window manager warning: Log level 16: Error converting selection
Window manager warning: Log level 16: Error converting selection
(gnome-settings-daemon:16895): PackageKit-WARNING **: couldn't parse execption 'GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._pk_5ftransaction_5ferror.Code4: GetDistroUpgrades not supported by backend', please report
(gnome-settings-daemon:16895): updates-plugin-WARNING **: failed to get upgrades: GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._pk_5ftransaction_5ferror.Code4: GetDistroUpgrades not supported by backend
```

I've no idea what these are. I googled and cannot find a solution. Was it due to my graphics card? If so, how can I resolve it?

UPDATE: I really don't want GNOME 3, just the classic mode. So it suffices for the classic mode to work perfectly.
|
Problems with Debian Wheezy installation
|
debian;gnome;gnome3
| null |
_unix.171519
|
What exactly is happening here?

```
root@bob-p7-1298c:/# ls -l /tmp/report.csv && lsof | grep report.csv
-rw-r--r-- 1 mysql mysql 1430 Dec 4 12:34 /tmp/report.csv
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
```
|
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system
|
privileges;lsof;fuse
| null |
_codereview.112569
|
(See the next iteration.) My code is:

```latex
\documentclass[10pt]{article}
\usepackage{amsmath}
\usepackage[ruled,vlined,linesnumbered]{algorithm2e}

\begin{document}
  \begin{algorithm}
    \SetKw{Nil}{nil}
    \SetKw{Is}{is}
    \SetKw{Not}{not}
    \SetKw{Mapped}{mapped}
    \SetKw{In}{in}
    \SetKw{ChildNode}{child node}
    \SetKw{Of}{of}
    \SetKw{Continue}{continue}
    $\text{OPEN} = \{ s \}$ \\
    $\text{CLOSED} = \emptyset$ \\
    $\pi = \{ (s \mapsto$ \Nil $)\}$ \\
    $g = \{ (s \mapsto 0) \}$ \\
    \While{$|\text{OPEN}| > 0$}{
      $u = \textsc{ExtractMinimum}(\text{OPEN})$ \\
      \If{$u$ \Is $t$}{
        \KwRet \textsc{TracebackPath}$(u, \pi)$ \\
      }
      $\text{CLOSED} = \text{CLOSED} \cup \{ u \}$ \\
      \ForEach{\ChildNode $v$ \Of $u$}{
        \If{$v \in \textsc{CLOSED}$}{
          \Continue \\
        }
        $c = g(u) + w(u, v)$ \\
        \If{$v$ \Is \Not \Mapped \In $g$}{
          $g(v) = c$ \\
          $\pi(v) = u$ \\
          \textsc{Insert}$(\text{OPEN}, v, c + h(v))$ \\
        }
        \ElseIf{$g(v) > c$}{
          $g(v) = c$ \\
          $\pi(v) = u$ \\
          \textsc{DecreaseKey}$(\text{OPEN}, v, c + h(v))$ \\
        }
      }
    }
    \KwRet $\langle \rangle$
    \caption{\textsc{AStarPathFinder}$(s, t, w, h)$}
  \end{algorithm}

  \begin{algorithm}
    \SetKw{Is}{is}
    \SetKw{Not}{not}
    \SetKw{Nil}{nil}
    $p = \langle \rangle$ \\
    \While{$u$ \Is \Not \Nil}{
      $p = u \circ p$ \\
      $u = \pi(u)$ \\
    }
    \KwRet $p$
    \caption{\textsc{TracebackPath}$(u, \pi)$}
  \end{algorithm}
\end{document}
```

Can I make my code better? For example, is there a way to define custom keywords in a global scope so that I don't have to repeat myself?
|
Typesetting A* in LaTeX using algorithm2e
|
tex;a star
| null |
_softwareengineering.186445
|
Let me preface this by saying I am not a Business Intelligence (BI) developer. I'm a .NET developer, and I have only a vague notion of what BI is and encompasses.

I've been an evangelist for a secure development lifecycle in our company, and recently convinced the rest of the development team that we should be doing peer reviews, including:

- Requirements reviews
- Code reviews
- Architecture/design reviews
- Documentation reviews

The .NET team does code reviews, and we look for various things; a general overview:

- Maintainability (clear design, adequately documented, etc.)
- Code clarity (standard .NET coding/naming guidelines)
- Security (scanning for failures to sanitize input, prevent SQL injection, etc.)
- Functionality (does it do what it's supposed to)
- Usability
- Performance

At any rate, our Business Intelligence developers don't really do the same type of coding/development that we do for websites, Windows apps, etc. Our templates/checklists don't really apply to them. Since I've been the one trying to evangelize and get everyone to start doing peer reviews, they asked me for advice on what types of things they should be looking for.

I could probably guide them and help them develop their own checklists by asking questions about what can go wrong, what is important to them, etc., but I thought I'd ask and see if anyone is aware of established guidelines for doing code reviews for BI developers. From BI people, I'll also take: "What would you look for in evaluating another BI developer's work?"
|
What would Business Intelligence (BI) developers look for in code reviews?
|
code reviews;business intelligence
| null |
_unix.80489
|
I want to mount my Windows NTFS share C:\ into my Linux ext4 file system, so I can see it as part of my Linux file system tree and transfer my files. P.S. I am using RHEL 6.
|
How to mount a remote file system
|
rhel;mount;ext4;ntfs
|
As @Hauke Laging says, it will not become ext4, but you could mount it to /mnt/winshare or some other place using Samba. A tutorial for RHEL is here; both directions (Linux to Windows, and vice versa) are described. BTW: this seems to be a similar question.
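As a rough sketch of what the tutorial describes (the host address, share name, and credentials below are placeholders, not taken from the question), mounting a Windows share on RHEL 6 with the CIFS client typically looks like this:

```shell
# Install the CIFS client tools (RHEL 6)
yum install cifs-utils

# Create a mount point inside the local ext4 tree
mkdir -p /mnt/winshare

# Mount the Windows administrative C$ share
# (adjust the host, share, username, and password to your setup)
mount -t cifs //192.168.1.10/C\$ /mnt/winshare -o username=winuser,password=secret

# The share now appears as part of the local file system tree
ls /mnt/winshare
```

An entry in /etc/fstab can make the mount permanent, but for a one-off file transfer the manual mount above is enough.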
|
_unix.27203
|
I'm trying to run a program from the command line, and I got a `-bash: command: command not found` error. What can I do to troubleshoot/fix the problem?
|
-bash: command: command not found
|
bash
| null |
_unix.47156
|
I have openSUSE. I can install programs with yum: `yum install <progname>`. I want to move my install to a new hard disk. How can I transfer programs from one OS to another? My problem is that I have no access to the internet on the new system, so how can I install programs on it?
|
Transfer programs to another OS
|
package management;opensuse;yum
| null |
_softwareengineering.27123
|
When I hear low-level programming, such as for drivers, embedded systems, operating systems, etc., I immediately think about C and perhaps C++. But mainly C.

But what other languages are also used for these kinds of tasks? Today, I mean, not what has been used in the past.
|
Which languages are used today for low-level programming?
|
programming languages;low level;embedded systems
| null |
_unix.307412
|
I'm trying to mount my rootfs over NFS. I have these kernel configs enabled:

```
CONFIG_NFS_FS=y    (NFS support)
CONFIG_IP_PNP=y    (configure IP at boot time)
CONFIG_ROOT_NFS=y  (support for NFS as rootfs)
```

This is my kernel command line:

```
debug nfsrootdebug loglevel=8 console=ttymxc1,115200 imx-fbdev.legacyfb_depth=32 consoleblank=0 ip=10.42.102.244:10.42.102.5::255.255.255.0::eth0: root=/dev/nfs nfsroot=10.42.102.5:/srv/nfs/dc10,v3,tcp noinitrd
```

Here are the relevant parts of the boot messages:

```
libphy: 63fec000.etherne:01 - Link is Up - 100/Full
IP-Config: Complete:
     device=eth0, hwaddr=00:d0:93:2a:6c:8e, ipaddr=10.42.102.244, mask=255.255.255.0, gw=255.255.255.255
     host=10.42.102.244, domain=, nis-domain=(none)
     bootserver=10.42.102.5, rootserver=10.42.102.5, rootpath=
ALSA device list:
  #0: imx53-mba53-sgtl5000
Freeing init memory: 6332K
Welcome to Buildroot 2013.05!
```

I was expecting to see lines about RPC communication and VFS mounting over NFS, but there is none of that, and no error messages are printed either, despite providing `debug nfsrootdebug` and `loglevel=8` as kernel parameters. I've verified with tcpdump on the NFS server side that no packets are being sent. Once the board has booted, I can connect to the computer running the NFS server using ssh.

Does anyone have any suggestion what's going wrong or how I can debug this further?
|
NFS rootfs booting fails silently on ARM imx53 board
|
boot;nfs;arm;buildroot
| null |
_unix.72511
|
I was working with an old version of VirtualBox (3.2.6) in which I could edit /etc/sysconfig/vbox and add as many VMs as I wanted to start when the host boots up. On the new version 4.2.6 I can't edit that file because it's no longer supported. Checking on the forum I found something, but it does not work.

```
cat /etc/default/virtualbox
# /etc/default/virtualbox
#
# -------------------------------------------------------------------------------------------------
# In the SHUTDOWN_USERS list all users for which a check for runnings VMs should be done during
# shutdown of vboxdrv resp. the server:
# SHUTDOWN_USERS="foo bar"
#
# Set SHUTDOWN to one of "poweroff", "acpibutton" or "savestate" depending on which of the
# shutdown methods for running VMs are wanted:
# SHUTDOWN="poweroff"
# SHUTDOWN="acpibutton"
# SHUTDOWN="savestate"
# -------------------------------------------------------------------------------------------------
#
#SHUTDOWN_USERS="foo bar"
#SHUTDOWN="savestate"
VBOXAUTOSTART_DB=/etc/vbox
#VBOXAUTOSTART_CONFIG=/etc/vbox/vbox.cfg
VBOXAUTOSTART_CONFIG=/etc/vbox/vboxauto.conf
```

This is vboxauto.conf:

```
cat /etc/vbox/vboxauto.conf
# Default policy is to deny starting a VM, the other option is "allow".
default_policy = deny
root = {
    allow = true
}
```

Then I do this:

```
VBoxManage setproperty autostartdbpath /etc/vbox
```

and this:

```
VBoxManage modifyvm <the_machine> --autostart-enabled on
```

When I try to look at the status:

```
rcvboxes status
Virtualbox machines: no virtual machines running. skipped
vboxes.service - LSB: Autostart Virtual Box VMs
    Loaded: loaded (/etc/init.d/vboxes)
    Active: active (exited) since Tue, 2013-02-05 23:34:57 UYST; 32min ago
   Process: 30764 ExecStart=/etc/init.d/vboxes start (code=exited, status=0/SUCCESS)
    CGroup: name=systemd:/system/vboxes.service

Feb 05 23:34:57 my.machine vboxes[30764]: Starting Virtualbox machines: no virtual machines configured..unused
Feb 05 23:34:57 my.machine systemd[1]: Started LSB: Autostart Virtual Box VMs.
```

Why won't VirtualBox see my virtual machines to leave them headless?

Edit 30 May: I did configure it in /etc/sysconfig/vbox, but get this when I run it under init.d:

```
/etc/init.d/vboxes status
Virtualbox machines: no virtual machines running. skipped
vboxes.service - LSB: Autostart Virtual Box VMs
    Loaded: loaded (/etc/init.d/vboxes)
    Active: inactive (dead) since Sat, 2013-02-02 22:00:49 UYST; 1 day and 21h ago
   Process: 4155 ExecStop=/etc/init.d/vboxes stop (code=exited, status=0/SUCCESS)
   Process: 3955 ExecStart=/etc/init.d/vboxes start (code=exited, status=0/SUCCESS)
    CGroup: name=systemd:/system/vboxes.service
```

I did a small test and moved the script to /root/bin, and it worked just fine:

```
./vboxes_test status
test (user: root): running (since 2013-02-04 21:55:46)
running
test (user: root): running (since 2013-02-04 21:55:46)
```

Small edit: So far this is what I was talking about... anyway, I did the old configuration (adding the VM in /etc/sysconfig/vbox) and now when I reboot the host, the VMs reboot as well. The thing is, I don't want the VMs to reboot when I power off the host: the VMs must do a savestate before the host shuts down.

Small comment: If I first do a savestate and then reboot the host, the VMs work as I wanted. Is there any info on how to make this work?
|
Do savestate before powering off host (VirtualBox/openSUSE)
|
opensuse;virtualbox
|
I think the command you're looking for is this:

```
VBoxManage modifyvm <Machine Name> --autostop-type savestate
```

The following are the alternatives to savestate: [disabled|savestate|poweroff|acpishutdown].

If that doesn't work, I believe you can use the acpishutdown option to configure the VM so that when it receives the acpishutdown trigger it does a savestate as well. If the above doesn't work, there is always manually doing the savestates yourself using this command:

```
VBoxManage controlvm <vm> savestate
```

and then rebooting the host system.

References

- Installing VirtualBox on our Linux Clients
- How To Set Your VirtualBox 4.2 VM to Automatically Startup
- Virtualbox 4.2 VM autostart on Debian Squeeze
- Start VMs at boot (new in 4.2.0)
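Putting the autostart setup from the question together with the autostop type, a minimal sketch would be the following (the VM name "MyVM" is a placeholder; substitute your own machine name):

```shell
# Point VirtualBox at the autostart database and enable autostart for the VM
VBoxManage setproperty autostartdbpath /etc/vbox
VBoxManage modifyvm "MyVM" --autostart-enabled on

# Ask VirtualBox to save the VM's state when the host service stops,
# instead of powering the VM off
VBoxManage modifyvm "MyVM" --autostop-type savestate

# Verify both settings in the VM's configuration
VBoxManage showvminfo "MyVM" | grep -i auto
```

With both settings in place, the vboxautostart service should start the VM headless at boot and save its state on host shutdown.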
|
_unix.309102
|
So the thing is, I have set up a home directory on proftpd using the GADMIN-ProFTPD frontend. I tried making symbolic links into the configured directory; however, they aren't recognized as shortcuts in my Android file manager (MiXplorer). Previously I did this on Windows with FileZilla Server, having various directories listed in my home directory. I don't know whether I'm doing this wrong. I even tried adding additional directories thinking there would be an alias option, but nothing. New to Linux here; any help is appreciated.

proftpd.conf:

```
ServerType standalone
DefaultServer on
Umask 022
ServerName 0.0.0.0
ServerIdent on xv-pc
ServerAdmin [email protected] off
UseReverseDNS off
Port 980
PassivePorts 49155 65534
#MasqueradeAddress None
TimesGMT off
MaxInstances 30
MaxLoginAttempts 3
TimeoutLogin 300
TimeoutNoTransfer 120
TimeoutIdle 1200
DisplayLogin welcome.msg
DisplayChdir .message
User nobody
Group nobody
DirFakeUser off nobody
DirFakeGroup off nobody
DefaultTransferMode binary
AllowForeignAddress off
AllowRetrieveRestart on
AllowStoreRestart on
DeleteAbortedStores off
TransferRate RETR 220
TransferRate STOR 250
TransferRate STOU 250
TransferRate APPE 250
SystemLog /var/log/secure
RequireValidShell off

<IfModule mod_tls.c>
TLSEngine off
TLSRequired off
TLSVerifyClient off
TLSProtocol SSLv23
TLSLog /var/log/proftpd_tls.log
TLSRSACertificateFile /etc/gadmin-proftpd/certs/cert.pem
TLSRSACertificateKeyFile /etc/gadmin-proftpd/certs/key.pem
TLSCACertificateFile /etc/gadmin-proftpd/certs/cacert.pem
TLSRenegotiate required off
TLSOptions AllowClientRenegotiation
</IfModule>

<IfModule mod_ratio.c>
Ratios off
SaveRatios off
RatioFile /restricted/proftpd_ratios
RatioTempFile /restricted/proftpd_ratios_temp
CwdRatioMsg "Please upload first!"
FileRatioErrMsg "FileRatio limit exceeded, upload something first..."
ByteRatioErrMsg "ByteRatio limit exceeded, upload something first..."
LeechRatioMsg "Your ratio is unlimited."
</IfModule>

<Limit LOGIN>
  AllowUser xevile
  DenyALL
</Limit>

<Anonymous /home/xevile/ftp/home>
User xevile
Group Home
AnonRequirePassword on
MaxClients 10 "The server is full, hosting %m users"
DisplayLogin welcome.msg
DisplayChdir .msg
<Limit LOGIN>
  Allow from All
  Deny from all
</Limit>
AllowOverwrite on
<Limit LIST NLST STOR STOU APPE RETR RNFR RNTO DELE MKD XMKD SITE_MKDIR RMD XRMD SITE_RMDIR SITE SITE_CHMOD PWD XPWD SIZE STAT CWD XCWD CDUP XCUP>
  AllowAll
</Limit>
<Limit SITE_CHGRP MTDM>
  DenyAll
</Limit>

<Directory /media/xevile/Music/Music>
AllowOverwrite off
<Limit LIST NLST STOR STOU APPE RETR PWD XPWD SIZE STAT CWD XCWD CDUP XCUP>
  AllowAll
</Limit>
<Limit RNFR RNTO DELE MKD XMKD SITE_MKDIR RMD XRMD SITE_RMDIR SITE SITE_CHMOD SITE_CHGRP MTDM>
  DenyAll
</Limit>
</Directory>

<Directory /media/xevile/01D206B2C33F5A20/RigoR>
AllowOverwrite off
<Limit LIST NLST STOR STOU APPE RETR PWD XPWD SIZE STAT CWD XCWD CDUP XCUP>
  AllowAll
</Limit>
<Limit RNFR RNTO DELE MKD XMKD SITE_MKDIR RMD XRMD SITE_RMDIR SITE SITE_CHMOD SITE_CHGRP MTDM>
  DenyAll
</Limit>
</Directory>

<Directory /media/xevile/Software/Torrents>
AllowOverwrite on
<Limit LIST NLST STOR STOU APPE RETR RNFR RNTO DELE MKD XMKD SITE_MKDIR RMD XRMD SITE_RMDIR SITE_CHMOD PWD XPWD SIZE STAT CWD XCWD CDUP XCUP>
  AllowAll
</Limit>
<Limit SITE SITE_CHGRP MTDM>
  DenyAll
</Limit>
</Directory>

<Directory /media/xevile/Software/Software/L-pah>
AllowOverwrite on
<Limit LIST NLST STOR STOU APPE RETR RNFR RNTO DELE PWD XPWD SIZE STAT CWD XCWD CDUP XCUP>
  AllowAll
</Limit>
<Limit MKD XMKD SITE_MKDIR RMD XRMD SITE_RMDIR SITE SITE_CHMOD SITE_CHGRP MTDM>
  DenyAll
</Limit>
</Directory>

<Directory /media/xevile/Anime>
AllowOverwrite on
<Limit LIST NLST STOR STOU APPE RETR RNFR RNTO DELE MKD XMKD SITE_MKDIR RMD XRMD SITE_RMDIR SITE_CHMOD PWD XPWD SIZE STAT CWD XCWD CDUP XCUP>
  AllowAll
</Limit>
<Limit SITE SITE_CHGRP MTDM>
  DenyAll
</Limit>
</Directory>
</Anonymous>
```
|
Using proftpd, how do I implement directories to show up in the home folder?
|
ftp;proftpd
| null |
_webmaster.106908
|
In Google Search Console, for my website, when the number of impressions increased (the red trend in Search Console), the ranking in Google Search decreased (the green trend, aka position). Then, when the number of impressions decreased, the ranking in Google Search increased.

I thought that when my website's links are displayed at a better position in Google Search, there should be more impressions, because users don't click through to subsequent Google result pages; they usually click the links on the first page.

As you can see at the end of the trend, the impressions increased by at least 3000 while the average position lowered by at least 20, and vice versa.
|
How do you explain more impressions and less ranking in Google search console?
|
google search console;google search
| null |
_unix.348492
|
I installed Debian 8 a week ago, but I cannot run any graphical application from the console as the root user. In fact, when I run XAMPP (or any other program) I get the following error:

root# /opt/lampp/manager-linux-x64.run
No protocol specified
No protocol specified
Unknown Error couldn't connect to display :0

I've googled the error and tried every suggestion, but haven't solved the problem.

This is my Xauthority:

root# echo $XAUTHORITY
/root/.Xauthority

This is my DISPLAY variable:

root# echo $DISPLAY
:0

The strange fact is that with my normal user (not the root one) I don't have any problem and I can run any graphical application. The DISPLAY variable is the same as the root one's:

user# echo $DISPLAY
:0

I also tried the suggestions here --> Why can't I run GUI apps from 'root': No protocol specified?:

root# xauth + root
xauth: (argv):1: unknown command "+"

and the command

export XAUTHORITY=~/.Xauthority

doesn't have any effect.
|
No protocol specified: Unknown Error couldn't connect to display :0
|
debian;root;gui;display;xauth
| null |
_softwareengineering.311387
|
I know that one cannot securely restrict normal CPython's capabilities to properly run foreign code without allowing it to access some builtins like open() or allowing other kinds of I/O.

So I researched more and found the alternative interpreters PyPy (written in Python) and Jython (written in Java). Which of those is easier to use for sandboxing user scripts that must interact with the core application?

Here's a short description of what my program should do and what is important for me:

The program will be a multiplayer game that may be run on any PC as a server and which allows any number of clients to connect locally or over the network. The game is a round-based normal strategy game (mine resources, develop a strong base, conquer territories, eliminate enemies), but units should not be controlled directly via mouse/keyboard, only through Python scripts and commands which the players may upload to the server and which are run each round to determine the actions. The idea for this project comes from http://screeps.com, but I would not like to make an exact clone, just something similar...

Important points are:

Secure sandboxing of the user scripts. I want the users to write a class derived from a given subclass which has functions to access the game core. There should be no chance to perform any I/O or system access other than through those methods. Imports must be restricted to tools like string and math only, no possibly dangerous stuff like os or subprocess.

Good support for real multi-threading, as I should run as many user scripts in parallel as possible to keep the round durations short.

The interpreter should be fast and lightweight enough to handle quite a few clients in parallel. I also need efficient database access with intelligent caching (probably have to code that myself?) that speeds up the data access times without flooding the RAM.

I also know Java fairly well, so I could possibly also write parts of the game core in Java if that could speed things up or make them more secure, but I want the user scripts to be in Python 3.

So again, which Python interpreter would be most suitable for this task? What (dis)advantages does each one have with respect to the listed requirements?
|
Does PyPy or Jython run untrusted Python 3 code more securely while still being fast?
|
python;interpreters;python 3.x
| null |
_softwareengineering.167242
|
Suppose I have a controller that loads a file and hands it over for processing. Should I handle the exception in the file loader and return null if something is wrong, or should I throw the exception and handle it in the controller? Without the file the rest of the program can't work. Where should I handle an exception that shuts down the program properly? I want to shut down an Android application properly.
|
Where should I handle fatal exceptions
|
exceptions;exception handling
|
In layman's words:

You should not return null. The exception should be handled in the top-most layer.

The program should not terminate, but give feedback to the user, unless it is a Unix-style command-line program where it's OK to terminate after showing an error message.

By definition no exception is fatal and each one can be handled. Errors are fatal and cannot be recovered from. Exceptions and errors are not the same.
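To make the idea concrete: the pattern is language-agnostic, so here is a minimal sketch in Python rather than Android Java (the file name and functions are hypothetical, not from the question). The lower layer lets the exception propagate; only the top-most entry point decides how to report it:

```python
def load_config(path):
    # Lower layer: do NOT return None on failure and do NOT swallow
    # the error here -- let the exception propagate upward.
    with open(path) as fh:
        return fh.read()

def main(path="settings.ini"):
    # Top-most layer: the only place that decides how to react.
    try:
        config = load_config(path)
    except OSError as exc:
        # Give feedback instead of silently dying; a GUI app would
        # show a dialog here instead of printing.
        print("Could not start:", exc)
        return 1
    print("Loaded", len(config), "bytes of configuration")
    return 0

exit_code = main("/no/such/settings.ini")
print("exit code:", exit_code)
```

The key point is that `load_config` stays free of error-reporting policy, so the same loader works whether the caller wants to show a dialog, log and retry, or exit.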
|
_codereview.88629
|
I wrote a small script that finds the solution to the cracker barrel peg game / triangle game.

Gist

A small script that finds the solution to
the cracker barrel peg game / triangle game.

Instructions along with one of the game's solution can be found here:
http://www.joenord.com/puzzles/peggame/

It finds the solution by repeatedly playing games and making random
decisions until a game leaves one peg. The slot's are numbered
0 - 14 in sequential order (Top to Bottom, Left to Right)

  /0\
 /1 2\
/3 4 5\
etc etc

Another objective of this game is to leave the board with
8 pegs and no possible jumps. To find that solution change the
final if statement as follows:

if triangle.total_peg() == 8:

Language: Python 2.7

import random


class Slot:
    """Represents a single slot opening of the triangle game.
    This class holds the slot number, it's possible jump dictionary
    and helper functions around maintaining slots.
    """

    def __init__(self, num, jump_dict):
        self.num = num
        self.jump_dict = jump_dict
        self.peg = True

    def has_peg(self):
        return self.peg

    def add_peg(self):
        self.peg = True

    def remove_peg(self):
        self.peg = False

    def possible_jump(self, board):
        """Determine possible jumps for a given peg
        :param board: The board to check possible jumps against
        :return: Dictionary of possible jumps
        """
        assert self.has_peg()
        possible_jump_dict = {}
        for jump_over in self.jump_dict:
            jump_to = self.jump_dict[jump_over]
            if board[jump_over].has_peg() and not board[jump_to].has_peg():
                possible_jump_dict[jump_over] = jump_to
        return possible_jump_dict


class Triangle:
    """Represents a single board of the triangle game"""

    def __init__(self):
        """Initializes the board for a new game.
        The board consists of 15 slot objects
        """
        self.board = [Slot(0, {1: 3, 2: 5}),
                      Slot(1, {3: 6, 4: 8}),
                      Slot(2, {4: 7, 5: 9}),
                      Slot(3, {4: 5, 1: 0, 6: 10, 7: 12}),
                      Slot(4, {7: 11, 8: 13}),
                      Slot(5, {2: 0, 4: 3, 8: 12, 9: 14}),
                      Slot(6, {3: 1, 7: 8}),
                      Slot(7, {4: 2, 8: 9}),
                      Slot(8, {4: 1, 7: 6, }),
                      Slot(9, {5: 2, 8: 7}),
                      Slot(10, {6: 3, 11: 12}),
                      Slot(11, {7: 4, 12: 13}),
                      Slot(12, {11: 10, 7: 3, 8: 5, 13: 14}),
                      Slot(13, {12: 11, 8: 4}),
                      Slot(14, {13: 12, 9: 5})]
        self.jump_history = []

    def jump(self, jump_from, jump_over, jump_to):
        """Executes a jump action
        :param jump_from: Peg number to jump from
        :param jump_over: Peg number to be jumped over and removed
        :param jump_to: Empty slot number for the peg to jump into
        """
        self.board[jump_from].remove_peg()
        self.board[jump_over].remove_peg()
        self.board[jump_to].add_peg()
        self.jump_history.append([jump_from, jump_over, jump_to])

    def total_peg(self):
        return sum(slot.has_peg() for slot in self.board)

    def random_jump(self):
        """Selects a random slot that contains a peg and then
        randomly selects one of it's possible jumps.
        :return: Tuple containing a single slot and it's jump coordinates
        """
        slot_choice = [i for i in range(15)]
        random.shuffle(slot_choice)
        for slot in slot_choice:
            if self.board[slot].has_peg():
                possible_jump = self.board[slot].possible_jump(self.board)
                if possible_jump:
                    jump_over = random.choice(possible_jump.keys())
                    jump_to = possible_jump[jump_over]
                    return slot, jump_over, jump_to
        return None

    def remove_first_peg(self):
        """The first action of every game is to remove a peg from a full board
        This function randomly removes one peg from a full board
        """
        first_peg = random.choice(self.board)
        first_peg.remove_peg()
        self.jump_history.append([first_peg.num])

    def play_one_game(self):
        self.remove_first_peg()
        while True:
            next_jump = self.random_jump()
            if next_jump:
                self.jump(*next_jump)
            else:
                break


def main():
    total_games = 0
    # Keep playing games until a single peg is left on the board
    while True:
        total_games += 1
        triangle = Triangle()
        triangle.play_one_game()
        if triangle.total_peg() == 1:
            break
    print '***Game #' + str(total_games) + ' Report***'
    for slot in triangle.board:
        print str(slot.num) + ' ' + str(slot.has_peg())
    print 'Jump History:'
    print triangle.jump_history
    print 'Total Pegs Remaining: ' + str(triangle.total_peg())


if __name__ == "__main__":
    main()

I'm looking for any critiques, from style to class structure to variable names.
|
Solution to the cracker barrel peg game / triangle game
|
python;game
|
Your documentation (good work on including it, by the way) mentions that this is Python 2.7. In that case, your classes should be new-style and inherit from object (see e.g. What is the difference between old style and new style classes in Python?).

While I mention the documentation, it would be nice to cover the format of jump_dict, e.g.:

...
:param jump_dict: The possible jumps from the slot, in the form
    {jump_over: jump_to}.
...

Having the board laid out within Board.__init__ seems a little inflexible; have a go at making the number of rows an instantiation argument, defaulting to the current 5, and see if you can write some code to calculate what slots must be provided (and linked to which others) for a given size.

I think the structure might end up being a bit neater if you switch the dictionaries, too - if each Slot stored the slots it can be jumped to from, the logic gets simpler. Really, it's the empty slots that are key here, not the full ones!

There is duplication of information here:

def jump(self, jump_from, jump_over, jump_to):

you define three parameters, when any two would be sufficient to determine the third (e.g. if you know where you're jumping _from and _to, you can work out where you're jumping _over).

Finally, the random method of solving the puzzle isn't ideal. At the very least I would try to decouple the random solver from the other methods, making it easier to slot in something a bit smarter when you can. For example, imagine:

def play_one_game(self, method='random'):

What would need to change to make this run while providing for alternative values of method later?
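Building on that last point, here is one minimal sketch (my own illustration, not code from the question) of how jump selection could be pulled out behind a strategy lookup, so a smarter method can be slotted in without touching the board logic:

```python
import random

def random_strategy(possible_jumps):
    # Pick any legal jump at random.
    return random.choice(possible_jumps)

def greedy_strategy(possible_jumps):
    # Placeholder for something smarter: here, just take the
    # jump whose destination slot number is smallest.
    return min(possible_jumps, key=lambda jump: jump[2])

STRATEGIES = {"random": random_strategy, "greedy": greedy_strategy}

def choose_jump(possible_jumps, method="random"):
    # The board code only asks a strategy for a decision;
    # swapping methods no longer touches the game logic itself.
    return STRATEGIES[method](possible_jumps)

# Each jump is a (jump_from, jump_over, jump_to) tuple.
jumps = [(3, 1, 0), (5, 2, 0), (12, 7, 3)]
print(choose_jump(jumps, method="greedy"))  # → (3, 1, 0)
```

With this shape, `play_one_game(self, method='random')` only needs to forward `method` to `choose_jump`; adding a new solver is just one more entry in the dictionary.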
|
_softwareengineering.208081
|
I've used a software suite that is installed in offices and on remote vessels. The installations communicate back and forth, and they do that by using a simple proprietary file format that looks something like this:

/SHIP:16
MILES=45213

/ORDER:22943
STATUS=OPEN
TOTAL=447.84
URGENCY=HIGH

/ORDERLINES:22943
ITEM=3544
QUANTITY=1
PRICE=299.99
ITEM=11269
QUANTITY=5
PRICE=29.57

Recently, I've been writing a piece of software for a customer that saves information in the same kind of flat file format. When the file is opened, the lines are iterated over, and stuff happens to the lines (i.e. they're inserted into a database, or whatever).

But it got me to thinking: how would this kind of file scale? (I like things being able to scale.) I could of course gzip it; but how does a file format evolve from being something basic like this to being monolithic? What typical practices are employed when making a file format for a new piece of software? How are they typically built?

Related: Is there a proper way to create a file format? and Should I encrypt files saved by my program
|
How are new file formats constructed?
|
file handling;file structure;files;delimited files
|
The ability to scale will depend on the specific usage.

If I take your example of lines inserted in a database, the closest model is a log. An application, such as a web server, writes some data to a log. Daily (or once per hour, or any other period of time), the log is rotated, i.e. the application frees the current file and starts writing to another one. Once the file is freed, an ETL can process this file and load the transformed data into the database.

If I take a different example, such as a large file (and by large, I mean several gigabytes or terabytes) which should be read in a context where any information in it should be accessed quickly, then the format would be different and will probably use pages and indexes to point to the right content; additionally, fragmentation will be a concern too if the data in the file is modified. You can find more information about this sort of usage by reading about the PST file format used by Microsoft Outlook (it can often take gigabytes) or the file formats used by database files.

This means that the format you are actually using may be extremely scalable in the context in which it is used.

How are they typically built?

Like any data structure and any piece of software in general. Ideally, during the architecture and design phase, developers think about ways they can store information in a file, given the different requirements, priorities and constraints. Then the file format can evolve to take into account new requirements, priorities and constraints, while being, if needed, backwards compatible.

Examples:

If a requirement in the format you've shown in your question is that values can be multiline and contain =, this brings a specific issue for a value such as 12345=PRICE=123.

If a requirement is to follow standards, then something like EDIFACT can be used instead of the current format (maybe with some metadata if needed).

If the priority is to make the file readable, item and price are fine or may even be expanded to be more explicit.

If the priority is to shorten the size of the file, item could become i, quantity q, etc. Even better, the file can become:

22943:3544,1,299.99;11269,5,29.57

or be transformed to a binary format.

If a constraint is to keep the data secure, cryptography will be used. If another constraint says that some of the involved systems don't support Unicode, this is an additional problem to solve.
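To make the trade-offs concrete, here is a rough sketch (mine, not code from the software in question) of a parser for the simple flat format shown in the question; the record and field names come from the sample, everything else is an assumption:

```python
def parse_flat(text):
    # Parse "/NAME:ID" section headers followed by KEY=VALUE lines
    # into a list of (name, id, fields) records. Repeated keys
    # (like ITEM in ORDERLINES) are collected into lists.
    records = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("/"):
            name, _, rec_id = line[1:].partition(":")
            records.append((name, rec_id, {}))
        else:
            key, _, value = line.partition("=")
            fields = records[-1][2]
            fields.setdefault(key, []).append(value)
    return records

sample = """/ORDER:22943
STATUS=OPEN
TOTAL=447.84

/ORDERLINES:22943
ITEM=3544
ITEM=11269
"""
for name, rec_id, fields in parse_flat(sample):
    print(name, rec_id, fields)
```

Even this toy version surfaces the evolution questions from the answer above: it breaks on multiline values or values containing no `=` separator rules, and it has no versioning field, which is exactly the kind of requirement that pushes a format toward something more structured.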
|
_codereview.157056
|
I've been reading a lot about IoC and DI and I still haven't understood if it's a bad practice to make your application IoC agnostic, but it seems only logical to me that it should be IoC agnostic.

Using a 4-layer approach I've coded a test project just to try and apply these concepts, and this is what I came up with:

Data Layer with EF 6.1.3

UnitOfWork:

using TestProject.Persistence.Contracts;

namespace TestProject.Core
{
    public class UnitOfWork : IUnitOfWork
    {
        private readonly IContext _context;

        public IRepo<Customer> Customers { get; set; }
        public IRepo<MembershipType> MembershipsTypes { get; set; }
        public IRepo<Movie> Movies { get; set; }
        public IRepo<Genre> Genres { get; set; }

        public UnitOfWork(
            IContext context,
            IRepo<Customer> customers,
            IRepo<MembershipType> membershipsTypes,
            IRepo<Movie> movies,
            IRepo<Genre> genres)
        {
            _context = context;
            Customers = customers;
            MembershipsTypes = membershipsTypes;
            Movies = movies;
            Genres = genres;
        }

        public int Complete()
        {
            return _context.SaveChanges();
        }

        public void Dispose()
        {
            _context.Dispose();
            GC.SuppressFinalize(this);
        }
    }
}

Repo (generic):

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;
using TestProject.Core.Contracts;
using TestProject.Core.Contracts.Repos;
using TestProject.Persistence.Contracts;

namespace TestProject.Persistence.Repos
{
    public class Repo<TEntity> : IRepo<TEntity> where TEntity : class
    {
        protected readonly DbSet<TEntity> _entities;

        public Repo(IContext context) => _entities = context.Set<TEntity>();

        public void Add(TEntity entity) => _entities.Add(entity);

        public IEnumerable<TEntity> Find(Expression<Func<TEntity, bool>> predicate)
        {
            return _entities.Where(predicate).ToList();
        }

        public TEntity Get(int id) => _entities.Find(id);

        public IEnumerable<TEntity> GetAll() => _entities.ToList();

        public void Remove(TEntity entity) => _entities.Remove(entity);

        public TEntity SingleOrDefault(Expression<Func<TEntity, bool>> predicate)
        {
            return _entities.SingleOrDefault(predicate);
        }
    }
}

Core Layer (with interfaces to be used)

IUnitOfWork:

using System;
using TestProject.Core.Contracts.Repos;
using TestProject.Core.Models;

namespace TestProject.Core.Contracts
{
    public interface IUnitOfWork : IDisposable
    {
        IRepo<Customer> Customers { get; set; }
        IRepo<Movie> Movies { get; set; }
        IRepo<MembershipType> MembershipsTypes { get; set; }
        IRepo<Genre> Genres { get; set; }
        int Complete();
    }
}

IoC Layer (using Autofac)

Builder:

using Autofac;
using Autofac.Integration.Mvc;
using System.Reflection;
using System.Web.Mvc;

namespace TestProject.IoC
{
    public class Builder : ContainerBuilder
    {
        public static void Register(Assembly assembly)
        {
            var builder = new Builder();
            builder.RegisterControllers(assembly);
            builder.RegisterModule<WebModule>();
            DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));
        }
    }
}

WebModule:

using Autofac;
using TestProject.Core;
using TestProject.Core.Contracts;
using TestProject.Core.Contracts.Repos;
using TestProject.Persistence.Contexts;
using TestProject.Persistence.Contracts;
using TestProject.Persistence.Repos;

namespace TestProject.IoC
{
    public class WebModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            builder.RegisterType<UnitOfWork>().As<IUnitOfWork>();
            builder.RegisterGeneric(typeof(Repo<>)).As(typeof(IRepo<>));
            builder.RegisterType<TestProjectWebContext>().As<IContext>();
            base.Load(builder);
        }
    }
}

And finally the MVC WebApp

Global.asax:

using System.Web.Mvc;
using System.Web.Optimization;
using System.Web.Routing;
using TestProject.IoC;

namespace TestProject.Web
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            Builder.Register(typeof(MvcApplication).Assembly);
            AreaRegistration.RegisterAllAreas();
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
        }
    }
}

I ran it and it works. I had to do a little workaround in order to reference the MVC assembly in the IoC layer, but all the components are still registered at the composition root.

Does it matter that the actual class that handles the registration is not in the MVC project? What is the best approach?
|
IoC Agnostic Test Project
|
c#;mvc;dependency injection
| null |
_codereview.122118
|
I'm using TurboPower LockBox for the first time, and using the Twofish algorithm to first encrypt a password, and later retrieve the password by decrypting the generated ciphertext. I would like to know if the approach that I'm using is good, and if there are any pitfalls to avoid. It's a simple use case for my application to store and later retrieve a mail server password.

This is the code I'm using:

// Steve Faleiro 2016-03-06
uses
  uTPLb_CryptographicLibrary, uTPLb_Codec;

function TwoFish_ENC(const AString: String): String;
var
  CryptoLib: TCryptographicLibrary;
  Codec: TCodec;
  encryptedStr: String;
begin
  CryptoLib := TCryptographicLibrary.Create(nil);
  Codec := TCodec.Create(nil);
  try
    Codec.CryptoLibrary := CryptoLib;
    Codec.StreamCipherId := 'native.StreamToBlock';
    Codec.BlockCipherId := 'native.Twofish';
    Codec.ChainModeId := 'native.CBC';
    Codec.Reset;
    Codec.Password := '56410AD2-9DD6-4B8C-9D2F-5DE7A7F7085E';
    Codec.EncryptString(AString, encryptedStr, TEncoding.UTF8);
  finally
    CryptoLib.Free;
    Codec.Free;
  end;
  Result := encryptedStr;
end;

function TwoFish_DEC(const AString: String): String;
var
  CryptoLib: TCryptographicLibrary;
  Codec: TCodec;
  decryptedStr: String;
begin
  CryptoLib := TCryptographicLibrary.Create(nil);
  Codec := TCodec.Create(nil);
  try
    Codec.CryptoLibrary := CryptoLib;
    Codec.StreamCipherId := 'native.StreamToBlock';
    Codec.BlockCipherId := 'native.Twofish';
    Codec.ChainModeId := 'native.CBC';
    Codec.Reset;
    Codec.Password := '56410AD2-9DD6-4B8C-9D2F-5DE7A7F7085E';
    Codec.DecryptString(decryptedStr, AString, TEncoding.UTF8);
  finally
    CryptoLib.Free;
    Codec.Free;
  end;
  Result := decryptedStr;
end;

Is it good enough? If not, is there any way to improve it?
|
LockBox TwoFish string encryption and decryption
|
cryptography;delphi
| null |
_softwareengineering.296598
|
I always followed the opinion not to abuse interfaces in the context of decomposition. Usually I only implement them if I am absolutely sure there is an is-a relation, and avoid implementing them if there is a has-a relation to another kind of object.

Now, this ethic leads me to a problem: I'm writing a small library including a type containing a name property. The objects representing this type are almost exclusively created through an XML file where they are defined. The idea behind giving those objects a name is to easily access them in the code. The definition looks like this:

<root>
  <object name="foo">
    ....
  </object>
</root>

In the code it should be possible to apply any representor of this type by name or by the object itself. So the type includes a name:

public interface NamedObject {
    String getName();
    .....
}

So now the following two samples should produce the same result:

Sample 1:

NamedObject obj = Utils.getFromXML(obj_name);
Foo.doStuff(obj);

Sample 2:

Foo.doStuff(obj_name);

The reason why this should work is that there may be a context where the programmer is already working with such a named object, and so he should be able to use it directly like in Sample 1. But there may be a context where he wants to call a method depending on such an object, and the programmer might want to use a named object he defined in the XML. Then he could just look up the name and use it via its name.

The actual problem is that this is leading me to a mess in my structural design, as there are a lot of objects requiring those kinds of objects in their methods, and in most cases there is a default behavior defined as well. So if there is a default behavior I need 3 implementations of the method:

public interface Bar {
    void doStuff();
    void doStuff(String namedObjectName);
    void doStuff(NamedObject object);
}

So 6 functionalities, for example, would lead to 18 methods. I think there might be no other way to represent the default behavior than providing a method without parameters. I could run the default behavior by accepting null pointers and running the default behavior if a null pointer is given, but in my eyes a param-less method is the prettier solution.

So to get rid of one of the other two signatures, the NamedObject type could inherit from java.lang.CharSequence. The implementations of the methods of java.lang.CharSequence could be done using the name property. Since Java 8 I could even default-implement them in the interface using the getName() method. With this move I could replace the method accepting the java.lang.String parameter representing the name and the method accepting the NamedObject with a single method accepting a java.lang.CharSequence representing either the name or the object itself.

The NamedObject type itself CLEARLY DOES NOT represent an is-a relation to its name. It represents a has-a relation.

And that's the question. What's the best way to design a library including such a structure? Is my new intention OK?
|
CharSequence to represent a named object
|
java;class design;class;strings;conventions
|
Inheriting from java.lang.CharSequence will probably cause you much more maintenance headache in the future, not just because of the has-a/is-a design glitch.

I think the real problem is that you are trying to support calls like this directly:

Foo.doStuff(obj_name);

That way, your Foo class needs to be coupled to Utils.getFromXML, which makes it probably more complex than it needs to be. Let's ignore the default behaviour part of your question for a moment. So I recommend designing your interface just like this:

public interface Bar {
    void doStuff(NamedObject object);
}

and if you need this, provide a helper or utility class XmlUtil containing a function like this:

public void DoStuff(Bar bar, String objname) {
    NamedObject obj = Utils.getFromXML(objname);
    bar.DoStuff(obj);
}

So you can call it like

XmlUtil.DoStuff(Foo, "obj-name");

which is only a little bit longer than your original call, but makes the XML context explicit.

That way, you do not have to provide the same string -> object mapping in each Bar implementation over and over again; you can reuse the function above for any Bar implementation. Moreover, only your utility class now needs to be coupled to the XML directly, not your Bar implementations.
|
_softwareengineering.276964
|
I'm trying to simulate a DVD production line in Java. The thing I have a problem with is the calculation of the time in this kind of simulation and the number of DVDs per hour.

The production line consists of 4 machines, each of which spends a certain random time working on 1 DVD. The third one can work on more than 1 DVD at the same time (max. 100). Also, the third one will always accept more DVDs to work on as long as its capacity is < 50%, which I plan to label as m3Capacity. But for now, let's just ignore the capacity characteristic.

What I have so far is that, with a random number generator (since each machine has a random DVD processing time each time it starts working on a new DVD), I assigned a random processing time to each of the four machines, summed up their processing times for the total time spent in one pass of production (totalProdTime = processTime1 + processTime2 + processTime3 + processTime4), and then I keep summing up those totalProdTime values until the total time of the whole simulation reaches the desired duration specified by the user (totalProdTimeSim = totalProdTimeSim + mTotal.totalProdTime). Later I just verify whether totalProdTimeSim has reached the desired time, and that's it.

Now, the problem is that, this way, the production of a DVD waits for one pass of production to finish all the way to the very end of machine 4, and only then does work begin on the second DVD, etc. What I want is that after machine 1 finishes its work and the second one begins its work, machine 1 begins work on another DVD with its newly generated time. The same goes for all the other machines: after machine 2 finishes work on a DVD, it immediately begins working on another DVD.

Now, since all machines have random processing times each time they start working on another DVD, it will often happen that one machine 'wants to hand over a DVD' while the next one hasn't finished its work on the current one. In this case, the idle machine just 'waits' for the previous machine to finish processing and then accepts the DVD from the previous machine.

So, in my current version, the machines work one by one, but I want them all to work constantly, even after they finish working on the current DVD (except in the first iteration, where the later machines need to wait for the previous ones to forward them the DVD they previously worked on).

How do you think I should calculate the total simulation time and DVD output? I've been thinking about it since last night, but haven't gotten any ideas.

My sketch of how it should work can be found in the picture below. In the sketch all machines behave as if they constantly have something to work on, but each should wait for the input from the previous machine.

Thanks in advance!
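The waiting rule described above can be written as a simple recurrence: machine i can start DVD j only when it has finished DVD j-1 and machine i-1 has handed DVD j over. As a rough sketch (in Python for brevity, with made-up time ranges, and ignoring machine 3's batch capacity as the question suggests):

```python
import random

def simulate_line(num_dvds, num_machines=4, seed=42):
    # completion[i][j] = time at which machine i finishes DVD j.
    # A machine starts a DVD only when it has finished the previous
    # DVD AND the previous machine has handed this DVD over.
    rng = random.Random(seed)
    completion = [[0.0] * num_dvds for _ in range(num_machines)]
    for j in range(num_dvds):
        for i in range(num_machines):
            process_time = rng.uniform(1.0, 5.0)  # random per-DVD time (assumed seconds)
            ready_self = completion[i][j - 1] if j > 0 else 0.0
            ready_prev = completion[i - 1][j] if i > 0 else 0.0
            completion[i][j] = max(ready_self, ready_prev) + process_time
    total_time = completion[-1][-1]  # last machine finishing the last DVD
    dvds_per_hour = num_dvds / (total_time / 3600.0)
    return total_time, dvds_per_hour

total, per_hour = simulate_line(100)
print("total seconds:", round(total, 1), "DVDs/hour:", round(per_hour, 1))
```

The `max(ready_self, ready_prev)` term is exactly the 'idle machine waits' behaviour: whichever of the two conditions finishes later determines when the next processing time starts ticking, so the machines overlap instead of running one full pass at a time.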
|
Simulating a Production Line
|
simulation
| null |
_softwareengineering.188011
|
I am very interested in network programming and I want to build a VNC client for fun and for learning purposes. I have been googling for resources about VNC clients and servers, but most of the time the resources I found were about how to use VNC, not how to build one or how it actually works at the business layer, so that I could get an idea of how it could be implemented. Please suggest any documentation / articles / resources related to this. Thanks in advance.

It is my first post on the Programmers Stack Exchange site, so please forgive me if this question is not appropriate for the topic of this site.
|
Any resources or books to build a VNC client
|
java;html5;canvas
|
This tutorial takes you through the process of implementing the RFB protocol and whatnot. It even has diagrams and GitHub-hosted source code. noVNC is another open source project with GitHub code. A small swarm of Git tutorials is available if you've never used it.

There's also libVNCServer/LibVNCClient, which is C-based code for insertion into other things.
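To get a feel for what implementing RFB involves, here is a tiny illustrative sketch (mine, not from the tutorial above) of parsing the very first message of the handshake, the 12-byte ProtocolVersion banner defined in RFC 6143:

```python
def parse_rfb_version(banner):
    # The RFB handshake opens with a fixed 12-byte ProtocolVersion
    # message such as b"RFB 003.008\n" (RFC 6143, section 7.1.1).
    if len(banner) != 12 or not banner.startswith(b"RFB "):
        raise ValueError("not an RFB version banner")
    major = int(banner[4:7].decode("ascii"))
    minor = int(banner[8:11].decode("ascii"))
    return major, minor

print(parse_rfb_version(b"RFB 003.008\n"))  # → (3, 8)
```

After this exchange come security negotiation, the ClientInit/ServerInit messages, and finally framebuffer updates; the resources linked above walk through each stage.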
|
_codereview.97322
|
Here's my JavaScript file that queries data from Parse and then updates my Handlebars.js template to display them. The code makes 3 async calls using a Promise. It still feels like it could be improved, specifically in the Promise results loop where I check which i value it is. I'm not guaranteed that the bar we're on in the first loop is the same as in the second or third loop, which is why I have the nested loop checking if the names are the same to ensure it's the right one. Could I order them all alphabetically or something so that I can eliminate that? It seems like a lot of repeated code.

$(document).ready(function() {
    google.maps.event.addDomListener(window, 'load', initialize);

    function initialize() {
        Parse.initialize("ID", "ID");

        var latlng = new google.maps.LatLng(42.035021, -93.645);
        var mapOptions = {
            center: latlng,
            scrollWheel: false,
            zoom: 13
        };
        var map = new google.maps.Map(document.getElementById("map-canvas"), mapOptions);

        // The Handlebars template for a bar item
        var source = $("#barTemplate").html();
        var template = Handlebars.compile(source);

        // Object containing the list of bars
        var data = new Object();
        var barList = new Array();

        // Build an array of Parse queries
        var queries = [];
        var Bar = Parse.Object.extend("Bars");
        var Deals = Parse.Object.extend("Deals");
        var Photos = Parse.Object.extend("BarPhotos");
        queries.push(new Parse.Query(Bar).find())
        queries.push(new Parse.Query(Deals).find())
        queries.push(new Parse.Query(Photos).find())

        // Wait for them all to complete
        Parse.Promise.when(queries).then(function() {
            // The results of each query are returned as arguments to the callback
            for (var i = 0, l = arguments.length; i < l; i++) {
                if (i == 0) {
                    // Add the retrieved bars to the map
                    for (var j = 0; j < arguments[i].length; j++) {
                        var bar = arguments[i][j];
                        var latlng = new google.maps.LatLng(bar.get('lat'), bar.get('long'));
                        var marker = new google.maps.Marker({
                            position: latlng,
                            map: map,
                            animation: google.maps.Animation.DROP
                        });
                        barList.push({name: bar.get('name'), deals: [], img: ""});
                    }

                    // Add the retrieved deals to the bar
                    for (var j = 0; j < arguments[i+1].length; j++) {
                        var bar = arguments[i+1][j];
                        var barName = bar.get('name');
                        var dealsArr = new Array();
                        var arr = bar.get('Monday');
                        for (var k = 0; k < arr.length; k++) {
                            dealsArr.push({deal: arr[k]});
                        }
                        // Loop through bars to find match
                        for (var l = 0; l < barList.length; l++) {
                            if (barList[l].name == barName) {
                                // Update the deals array property
                                barList[l].deals = dealsArr;
                            }
                        }
                    }

                    // Add the retrieved images to the bar
                    for (var j = 0; j < arguments[i+2].length; j++) {
                        var bar = arguments[i+2][j];
                        var barName = bar.get('barName');
                        var imageFile = bar.get('imageFile');
                        var imageURL = imageFile.url();
                        // Loop through bars to find match
                        for (var l = 0; l < barList.length; l++) {
                            if (barList[l].name == barName) {
                                // Update the image property
                                barList[l].img = imageURL;
                            }
                        }
                    }
                }
            }
            // Update template with bars
            data = {bars: barList};
            $("#barList").html(template(data));
        });
    };
});
|
Looping through Parse.com queries
|
javascript;google maps;parse.com
| null |
_vi.4924
|
When I am coding in C++, I want vim to expand ( into ()<++> and place the cursor in the parentheses. I do this by putting the following line in one of the files loaded at startup:

inoremap ( ()<++><Left><Left><Left><Left><Left>

However, I would like this binding to be disabled in comments, like

// Inline comment where ( shouldn't become ()<++>

or

/* Comment block
where ( shouldn't become ()<++>
*/

How can I do it?
|
Define new command that works only outside c++ comments
|
comments;filetype c++
|
You can make use of expression mappings. The basic idea is to test the current syntax item and only expand if it is not a comment. That is basically what all the plugins are hiding from you::inoremap <silent><expr> ( synIDattr(synIDtrans(synID(line('.'), col('.')-1, 1)), 'name') =~? 'comment' ? '(' : '()<++><Left><Left><Left><Left><Left>'Which means: check that the current syntax item is not of type comment; if it is a comment, insert a literal (, otherwise expand to your ()<++> snippet. Note, it is probably easier to hide that functionality behind a function that does the necessary checks.Note: to make undo and redo work properly, instead of using <Left> you should use <C-G>U<Left>, which is a relatively recent addition to Vim. Read the help at :h i_CTRL-G_U
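A sketch of hiding the check behind a function, as suggested; the function name is made up, and the <C-G>U variant of the cursor moves keeps undo intact:

```vim
" A made-up helper name; the syntax check mirrors the mapping above.
function! s:InComment() abort
  return synIDattr(synIDtrans(synID(line('.'), col('.') - 1, 1)), 'name') =~? 'comment'
endfunction

" Expand ( only outside comments; <C-G>U keeps undo/redo intact.
inoremap <silent><expr> ( <SID>InComment()
      \ ? '('
      \ : '()<++><C-G>U<Left><C-G>U<Left><C-G>U<Left><C-G>U<Left><C-G>U<Left>'
```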
|
_codereview.59680
|
I'm stuck with the slow VBA routine below, which I think is the cause of lagging (taking to long). I was wondering if there is a faster way to do this:With Sheets(Groups) .Cells(1, 1).Value = Groups .Cells(1, 2).Value = Aantal .Cells(2, 1).Value = 9N4 .Cells(2, 2).Value = n9N4 .Cells(3, 1).Value = 1A2A .Cells(3, 2).Value = n1A2A .Cells(4, 1).Value = 1A2B .Cells(4, 2).Value = n1A2B .Cells(5, 1).Value = 2A2 .Cells(5, 2).Value = n2A2 .Cells(6, 1).Value = 2A3 .Cells(6, 2).Value = n2A3 .Cells(7, 1).Value = 2A4 .Cells(7, 2).Value = n2A4 .Cells(8, 1).Value = 3A3 .Cells(8, 2).Value = n3A3 .Cells(9, 1).Value = 3A4 .Cells(9, 2).Value = n3A4 .Cells(10, 1).Value = 4A4 .Cells(10, 2).Value = n4A4 .Cells(11, 1).Value = 1B4A .Cells(11, 2).Value = n1B4A .Cells(12, 1).Value = 1B4B .Cells(12, 2).Value = n1B4B .Cells(13, 1).Value = 1B4C .Cells(13, 2).Value = n1B4C .Cells(14, 1).Value = 1B4D .Cells(14, 2).Value = n1B4D .Cells(15, 1).Value = 2G4 .Cells(15, 2).Value = n2G4 .Cells(16, 1).Value = 3G4 .Cells(16, 2).Value = n3G4 .Cells(17, 1).Value = 4G4 .Cells(17, 2).Value = n4G4 .Cells(18, 1).Value = 1E3 .Cells(18, 2).Value = n1E3 .Cells(19, 1).Value = 1E4 .Cells(19, 2).Value = n1E4 .Cells(20, 1).Value = 2E4 .Cells(20, 2).Value = n2E4 .Cells(21, 1).Value = 3E3 .Cells(21, 2).Value = n3E3 .Cells(22, 1).Value = 3E4 .Cells(22, 2).Value = n3E4 .Cells(23, 1).Value = 4E4 .Cells(23, 2).Value = n4E4 resultst = n9N4 + n1A2A + n1A2B + n2A2 + n2A3 + n3A3 + n1B4A + n1B4B + n1B4C + n1B4D + n2G4 + n3G4 + n4G4 + n2A4 + n3A4 + n4A4 + n1E3 + n1E4 + n2E4 + n3E3 + n3E4 + n4E4 .Cells(24, 1).Value = result .Cells(24, 2).Value = resultst'check if the sum is correct .Cells(24, 3).Value = =Sum(b2:b23)End WithThe labels are written and the values come from the code below.This is a whole list here are a few:n2E4 = Application.WorksheetFunction.CountIf(Sheets(totallist).Range(G2:G & rn_totallist), 2E4)n3E3 = Application.WorksheetFunction.CountIf(Sheets(totallist).Range(G2:G & rn_totallist), 3E3)n3E4 = 
Application.WorksheetFunction.CountIf(Sheets(totallist).Range(G2:G & rn_totallist), 3E4)n4E4 = Application.WorksheetFunction.CountIf(Sheets(totallist).Range(G2:G & rn_totallist), 4E4)
|
Optimize spreadsheet-filling routine
|
performance;vba;excel
|
It's hard to say exactly how this will be implemented without seeing the entire sub/function, but what you need is a Select Case statement and a For loop. Pseudo code to get you started: Dim row as Long For row = 1 to 24 Populate row next rowEnd SubPrivate Sub Populate(ByVal row as Long) Dim text as string text = GetText(row) Dim value as Variant value = GetValue(row, text) Sheets(Groups).Cells(row,1) = text Sheets(Groups).Cells(row,2) = valueEnd SubPrivate Function GetText(ByVal row as Long) As String Select Case Row Case 1: GetText = Groups Case 2: GetText = 9N4 Case 3: GetText = 1A2A 'etc End SelectEnd FunctionPrivate Function GetValue(ByVal row as Long, ByVal matchText as String) As Variant If row = 1 Then GetValue = Aantal Else GetValue = Application.WorksheetFunction.CountIf(Sheets(totallist).Range(G2:G & rn_totallist), matchText) End IfEnd Function
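Independent of the restructuring above, the usual big VBA speed win is to stop writing Cells one at a time and instead push the whole block through a Variant array in a single Range assignment. A hedged sketch (the label list is abbreviated, and rn_totallist is assumed to be defined elsewhere, as in the question):

```vb
Sub WriteGroupCounts()
    Dim labels As Variant
    labels = Array("9N4", "1A2A", "1A2B", "2A2") ' ...extend with the full list

    Dim out() As Variant
    ReDim out(0 To UBound(labels) + 1, 0 To 1)
    out(0, 0) = "Groups": out(0, 1) = "Aantal"

    Dim i As Long
    For i = 0 To UBound(labels)
        out(i + 1, 0) = labels(i)
        out(i + 1, 1) = Application.WorksheetFunction.CountIf( _
            Sheets("totallist").Range("G2:G" & rn_totallist), labels(i))
    Next i

    ' One write to the sheet instead of ~48 individual Cells calls.
    Sheets("Groups").Range("A1").Resize(UBound(out, 1) + 1, 2).Value = out
End Sub
```

Each read or write of a cell crosses the COM boundary, so collapsing them into one array assignment typically dominates any other micro-optimization here.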
|
_cstheory.37474
|
This question is about subset problems (the solution is a subset of the instance, so trivially enumerable in $2^n \cdot n^c$ time), and the parameter is the solution size, the so-called standard parameterization.The answer to the question in the title is obviously yes: the clique problem on sparse graphs ($m = O(n)$) remains W[1]-hard but can be trivially solved in $2^{o(n)}$ time. The trick here is that the solution can never be $\omega(\sqrt n)$. So to make the question nontrivial, we add the requirement that $f_k(n)$ is an unbounded function for every $k$, where $f_k(n)$ is the number of instances with instance size $n$ and optimal solution size $k$.Informally speaking, W[1]-hardness characterizes the hardness of instances with extremely small solutions, while subexponential solvability concerns instances whose solution is almost half of the instance size. So it seems perfectly fine that such a problem should exist (?).
|
Is there a W[1]-hard problem that can be solved in $2^{o(n)}$ time?
|
reference request;parameterized complexity;exp time algorithms
| null |
_unix.205650
|
I have an 8G usb stick (I'm on linux Mint), and I'm trying to copy a 5.4G file into it, but getting No space left on deviceThe filesize of the copied file before failing is always 3.6GAn output of the mounted stick shows..df -T/dev/sdc1 ext2 7708584 622604 6694404 9% /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fedf -h/dev/sdc1 7.4G 608M 6.4G 9% /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fedu -h --max-depth=188K ./.sshls -h myfile -rw-r--r-- 1 moo moo 5.4G May 26 09:35 myfileSo a 5.4G file, won't seem to go on an 8G usb stick. I thought there wasn't issues with ext2, and it was only problems with fat32 for file sizes and usb sticks ? Would changing the formatting make any difference ?Edit: Here is an report from tunefs for the drivesudo tune2fs -l /dev/sdd1Filesystem volume name: Last mounted on: /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9feFilesystem UUID: ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9feFilesystem magic number: 0xEF53Filesystem revision #: 1 (dynamic)Filesystem features: ext_attr resize_inode dir_index filetype sparse_super large_fileFilesystem flags: signed_directory_hash Default mount options: (none)Filesystem state: not clean with errorsErrors behavior: ContinueFilesystem OS type: LinuxInode count: 489600Block count: 1957884Reserved block count: 97894Free blocks: 970072Free inodes: 489576First block: 0Block size: 4096Fragment size: 4096Reserved GDT blocks: 477Blocks per group: 32768Fragments per group: 32768Inodes per group: 8160Inode blocks per group: 510Filesystem created: Mon Mar 2 13:00:18 2009Last mount time: Tue May 26 12:12:59 2015Last write time: Tue May 26 12:12:59 2015Mount count: 102Maximum mount count: 26Last checked: Mon Mar 2 13:00:18 2009Check interval: 15552000 (6 months)Next check after: Sat Aug 29 14:00:18 2009Lifetime writes: 12 GBReserved blocks uid: 0 (user root)Reserved blocks gid: 0 (group root)First inode: 11Inode size: 256Required extra isize: 28Desired extra isize: 28Default directory hash: half_md4Directory Hash Seed: 
249823e2-d3c4-4f17-947c-3500523479fdFS Error count: 62First error time: Tue May 26 09:48:15 2015First error function: ext4_mb_generate_buddyFirst error line #: 757First error inode #: 0First error block #: 0Last error time: Tue May 26 10:35:25 2015Last error function: ext4_mb_generate_buddyLast error line #: 757Last error inode #: 0Last error block #: 0
|
Unable to copy large file onto ext2 usb stick
|
linux;disk usage;usb drive;disk
|
Your 8GB stick has approximately 7.5 GiB and even with some file system overhead should be able to store the 5.4GiB file.You can use tune2fs to check the file system status and properties:tune2fs -l /dev/<device>By default 5% of the space is reserved for the root user. Your output lists 97894 reserved blocks, which corresponds to approximately 385MiB and seems to be the default value. You might want to adjust this value using tune2fs if you don't need that much reserved space. Nevertheless, even with those 385MiB the file should fit on the file system.Your tune2fs output shows an unclean file system with errors. So please run fsck on the file system. This will fix the errors and possibly place some files in the lost+found directory. You can delete them if you're not intending to recover the data.This should fix the file system, and copying the file will succeed.
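The arithmetic behind those figures can be checked directly from the tune2fs numbers (block size 4096, 1957884 total blocks, 97894 reserved, 970072 free):

```python
block_size = 4096          # from "Block size:" in the tune2fs output
total_blocks = 1957884     # "Block count:"
reserved_blocks = 97894    # "Reserved block count:"
free_blocks = 970072       # "Free blocks:"

GiB = 1024 ** 3
MiB = 1024 ** 2

print(f"capacity: {total_blocks * block_size / GiB:.2f} GiB")    # ~7.47 GiB
print(f"reserved: {reserved_blocks * block_size / MiB:.0f} MiB") # ~382 MiB, the ~5% default
# Note: the free-block count works out to ~3.70 GiB, which disagrees with the
# 6.4G "Avail" shown by df in the question -- consistent with the unclean
# filesystem state that fsck should repair.
print(f"free:     {free_blocks * block_size / GiB:.2f} GiB")
```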
|
_webmaster.104321
|
We just noticed a real-time traffic fall on our website, from 1500 visits to 600 in real time.This is extremely rare. I checked the behavior over the past weeks and we noticed that Googlebot couldn't access the site because of a DNS error.Odd, because I ran several tests on the DNS side and everything shows up correctly.So in the last 4 weeks we did this:We migrated our site to another infrastructure (WP Engine)Added an SSL certificate from Let's EncryptAdded a test GA property in GTM, so we can migrate to GTM (so there are 2 properties running on our site)Added a ScrollDepth event (http://scrolldepth.parsnip.io/)And that was all.I really think the DNS error is affecting us too, but I can't verify or fix it, because every DNS test I run turns out perfect.What do you suggest?
|
Sudden traffic fall related with DNS Error
|
google analytics;google search console;googlebot
| null |
_unix.338628
|
I'm trying to install awesome 4.0. To install all the dependencies I ran sudo apt-get build-dep awesome. If I run make in my awesome directory there are some libs still missing:$ makeRunning cmake-- git not found.-- asciidoc -> /usr/bin/asciidoc-- xmlto -> /usr/bin/xmlto-- gzip -> /bin/gzip-- ldoc -> /usr/bin/ldoc-- convert -> /usr/bin/convert-- Checking for modules 'glib-2.0;gdk-pixbuf-2.0;cairo;x11;xcb-cursor;xcb-randr;xcb-xtest;xcb-xinerama;xcb-shape;xcb-util>=0.3.8;xcb-keysyms>=0.3.4;xcb-icccm>=0.3.8;xcb-xkb;xkbcommon;xkbcommon-x11;cairo-xcb;libstartup-notification-1.0>=0.10;xproto>=7.0.15;libxdg-basedir>=1.0.0;xcb-xrm'-- No package 'xcb-xrm' foundCMake Error at /usr/share/cmake-3.5/Modules/FindPkgConfig.cmake:367 (message): A required package was not foundCall Stack (most recent call first): /usr/share/cmake-3.5/Modules/FindPkgConfig.cmake:532 (_pkg_check_modules_internal) awesomeConfig.cmake:153 (pkg_check_modules) CMakeLists.txt:17 (include)I checked which package I have to install to close this gap apt-cache search xcb-xrm but I got no results. Then I checked the dependencies list from awesome, there is only a entry xcb-util-xrm so I was looking for apt-cache search xcb-util-xrm`. I got also no results. How to install the missing library?$ lsb_release -aNo LSB modules are available.Distributor ID: UbuntuDescription: Ubuntu 16.04.1 LTSRelease: 16.04Codename: xenial
|
No package 'xcb-xrm' found
|
apt;dpkg
|
As mentioned by steeldriver, the package is not available until 16.10.One option is to build it manually from source (github)A second option would be to get it from a 3rd-party ppasudo add-apt-repository ppa:aguignard/ppasudo apt-get updatesudo apt-get install xcb-util-xrm
|
_unix.109300
|
Use case: when a USB flash drive or SD card is plugged in, the filesystem is mounted. It is mounted by gvfsd. But if the filesystem contains files in the legacy 8.3 format (as digital cameras often save file names), then the file names are automatically transformed to upper case. So, the files on the filesystem appear as FILENAME.JPG.How can I make gvfs automatically mount FAT/VFAT drives so that the file names appear in lowercase even if they are saved in the 8.3 format?
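For context, the mount-level knob that controls this case rendering is the vfat shortname option; gvfs itself exposes little control over mount options, so the usual workaround is a manual mount or an fstab entry that takes precedence over the automount. A hedged sketch, where the device and mount point are placeholders:

```
# /etc/fstab entry -- device and mount point are examples:
/dev/sdb1  /media/camera  vfat  user,noauto,shortname=lower,utf8  0  0

# or a one-off manual mount with the same option:
sudo mount -t vfat -o shortname=lower /dev/sdb1 /mnt/camera
```

With shortname=lower the kernel's vfat driver displays 8.3 names in lowercase (file.jpg instead of FILE.JPG); long VFAT names are unaffected.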
|
How to make gvfs mount fat/vfat drives with 8.3 names set to lowercase on automatic mount?
|
mount;vfat;gvfs
| null |
_softwareengineering.191058
|
I work in Java, so I basically use the OOP paradigm while coding.I am about to start working in Perl and I was wondering what paradigm Perl developers follow.The Wikipedia article mentions that it supports many paradigms, but I am not sure I understand this, since it is a scripting language.So my question is:Are the object-oriented patterns I'm familiar with in Java idiomatic in Perl, or will I need to significantly change my design style to write effective Perl?Note: This is not a question to critique Perl. I actually have to work in Perl and would like to understand how the way I currently program will change.
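For a flavour of what classic, bless-based Perl OO looks like next to Java (a minimal, illustrative sketch; many shops layer Moose or Moo on top of this):

```perl
package Counter;
use strict;
use warnings;

# Constructor: bless a hashref into the class.
sub new {
    my ($class, %args) = @_;
    my $self = { count => $args{start} // 0 };
    return bless $self, $class;
}

sub increment { my $self = shift; return ++$self->{count}; }
sub count     { my $self = shift; return $self->{count}; }

package main;
my $c = Counter->new(start => 41);
$c->increment;
print $c->count, "\n";   # 42
```

The mechanics (explicit $self, hashref for state) are more manual than Java's, but the familiar OO patterns map over once you are past the syntax.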
|
Programming style in Perl
|
java;design patterns;object oriented;perl
| null |
_softwareengineering.303289
|
I don't remember when I last wrote a generic class. Every time I think I need one, after some thinking I conclude that I don't.The second answer to this question made me ask for clarification (since I can't comment yet, I made a new question).So let's take the given code as an example of a case where one needs generics:public class Repository<T> where T : class, IBusinessOBject{ T Get(int id) void Save(T obj); void Delete(T obj);}It has the type constraint IBusinessObject.My usual way of thought is: the class is constrained to use IBusinessObject, and so are the classes that use this Repository. The Repository stores these IBusinessObjects, and most likely its clients will want to get and use objects through the IBusinessObject interface. So why not justpublic class Repository{ IBusinessOBject Get(int id) void Save(IBusinessOBject obj); void Delete(IBusinessOBject obj);}The example is not great, though, as it is just another collection type, and generic collections are classics. In this case the type constraint looks odd too. In fact the example class Repository<T> where T : class, IBusinessObject looks pretty similar to class BusinessObjectRepository to me. Which is the thing generics are made to fix.The whole point is: are generics good for anything except collections, and doesn't a type constraint make a generic class as specialized as using the constraint's interface directly inside the class would?
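To make the trade-off concrete: with the generic version the compiler keeps the concrete type, so callers don't cast. A small sketch with toy types mirroring the question's Repository (illustrative only):

```csharp
using System.Collections.Generic;

interface IBusinessObject { int Id { get; } }

class Invoice : IBusinessObject
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Generic version: Get returns the concrete T.
class Repository<T> where T : class, IBusinessObject
{
    private readonly Dictionary<int, T> _store = new Dictionary<int, T>();
    public void Save(T obj) { _store[obj.Id] = obj; }
    public T Get(int id) { return _store[id]; }
}

class Demo
{
    static void Main()
    {
        var repo = new Repository<Invoice>();
        repo.Save(new Invoice { Id = 1, Total = 9.99m });

        // No cast: Get(1) is statically typed as Invoice.
        decimal total = repo.Get(1).Total;

        // With the non-generic IBusinessObject version this would need:
        // decimal total = ((Invoice)repo.Get(1)).Total;
    }
}
```

So the constraint narrows what T can be, but unlike the interface-typed version it does not erase the caller's knowledge of the concrete type.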
|
Generics vs common interface?
|
c#;object oriented;interfaces;generics
| null |
_unix.365494
|
I have very little knowledge of computer networking, Unix, Linux, and VMs. However, I find myself in a job requiring these skills. Please explain everything to me like a noob (sorry).I have been tasked with creating two virtual instances of Ubuntu on my Windows 7 machine and having them transfer files back and forth over a virtual internal network. I configured the two instances in VirtualBox and made an internal network in the network settings of VirtualBox.I then noticed both VMs had the same IPs. I experienced exactly what is described in:Why are my two virtual machines getting the same IP address?So, I then tried the ping command and it was successful. I want these VMs to communicate with each other. My question is: was I pinging myself? If I was, how do I ping the other VM?Thank you!
|
Will both VMs having the same IP cause me to ping myself
|
ubuntu;virtualbox;virtual machine
|
As pointed out by 0xSheepdog, the VM was clearly pinging itself. By default VirtualBox will set the IP address of the machine to 10.0.2.15. So, when I ran ping 10.0.2.15, I should have known better. I was able to find the following video tutorial explaining how to solve this problem, starting by using vboxmanage.exe on the host OS:https://www.youtube.com/watch?v=lhOY-KilEeEAs demonstrated in the video, this will create an internal virtual network and the VMs will have connectivity. That is, they will be able to ping each other.
|
_reverseengineering.4261
|
I am starting to look a bit more closely at ARM assembly, and I have been studying some dumps from objdump. I saw a lot of instructions (add is not the only one) with an extra s at the end (adds, subs, ...).I looked at the ARM documentation and it seems to mean something significant, but I can't figure out exactly what (the documentation I found about it seemed extremely obscure to me).Does somebody have some insight into the meaning of this extra s added to the end of some ARM instructions?
|
Difference between 'add' and 'adds' in ARM assembler?
|
arm;gas
|
The usual ADD doesn't update flags; ADDS does.See the documentation at the ARM Information Center.As it says there:If S is specified, these instructions update the N, Z, C and V flags according to the result.
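A small GAS-syntax illustration of the difference: only the S form lets a later conditional instruction observe the result.

```asm
    @ r0 = r1 + r2; condition flags untouched:
    add   r0, r1, r2

    @ r0 = r1 + r2; N, Z, C and V updated from the result:
    adds  r0, r1, r2
    beq   was_zero        @ taken only if ADDS set the Z flag

was_zero:
    bx    lr
```

Had the plain add been used before the beq, the branch would depend on whatever flags an earlier instruction happened to leave behind.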
|
_codereview.95696
|
It first creates a server and waits to connect. Once the client connects to the server he gets the line. If he puts the correct code he gets output -> good job. If not -> the software disconnects.My question is whether there is some sort of security hole in the software that allows you to know (as a client) the password. Is there any weakness in the code? Is it possible to exploit my code? If so, how?#include <stdio.h>#include <stdlib.h>#include <unistd.h>#include <string.h>#include <errno.h>#include <signal.h>#include <netinet/in.h>#include <sys/wait.h>#define ALARM_TIMEOUT_SEC (1200)#define PASSWORD_LENGTH (100)#define BRUTE_FORCE_TIMEOUT (1)int is_correct(char * given_password_hex){char b2h[256] = { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -1, -1, -1, -1, -1, /* 0-9 */ -1, 10, 11, 12, 13, 14, 15, -1, -1, -1, -1, -1, -1, -1, -1, -1, /* A-F */ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 10, 11, 12, 13, 14, 15, -1, -1, -1, -1, -1, -1, -1, -1, -1, /* a-f */ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,};char password[50] = hajdgufh{0000000000123456780000000000000000000000};char given_password[50];char value1; char value2; int i; char diff = 0; size_t given_password_hex_length = strlen(given_password_hex); if (PASSWORD_LENGTH != 
given_password_hex_length) { printf(bad input1: %zu\n, given_password_hex_length); return 0; } bzero(given_password, sizeof(given_password)); for (i = 0; i < sizeof(given_password); i++) { value1 = b2h[given_password_hex[i * 2]]; value2 = b2h[given_password_hex[i * 2 + 1]]; if (value1 == -1 || value2 == -1) { printf(bad input2\n); return 0; } given_password[i] = (value1 << 4) | value2;}for (i = 0; i < 50; i++) { diff |= (password[i] ^ given_password[i]);}return (diff == 0);}void right_trim(char * str){char * t = str + strlen(str) - 1;char * p;for (p = t; p >= str; p--) { if (!strchr( \r\n, *p)) { break; } *p = '\0';}}void handle(int s){char inbuf[4096];//we defined inbuf as 4096 sizedup2(s, 0);dup2(s, 1);setbuf(stdout, NULL);alarm(ALARM_TIMEOUT_SEC);printf(lets see if you able to solve me : );if (NULL == fgets(inbuf, sizeof(inbuf), stdin)) {//fgets -> don't have any vulnarble to buffer overflow return;//becoas its restrict the size of the input }right_trim(inbuf);if (is_correct(inbuf)) { printf(Good job!\n); }}void handle_sigchld(int sig) { waitpid((pid_t)(-1), 0, WNOHANG);}int main(int argc, char * argv[])//------------------------------------------------------------{ printf(we in); if (1 == argc) { printf(Usage: %s <port>\n, argv[0]); printf(section 1); exit(-0); } int port = strtol(argv[1], NULL, 10); if (0 == port) { printf(section 2); perror(Invalid port); exit(-1); }struct sigaction sa;sa.sa_handler = &handle_sigchld;sigemptyset(&sa.sa_mask);sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;if (sigaction(SIGCHLD, &sa, 0) == -1) { perror(Unable to register sigaction); exit(-2);}int s = socket(AF_INET, SOCK_STREAM, 0);if (-1 == s) { perror(Unable to create server socket); exit(-3);}int optval = 1;if (0 != setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &optval, sizeof(optval))) { perror(Unable to setsockopt); exit(-4);}struct sockaddr_in bind_addr = { .sin_family = AF_INET, .sin_port = htons(port)};if (0 != bind(s, (struct sockaddr *) &bind_addr, sizeof(bind_addr))) { 
perror(Unable to bind socket); printf(section 3); exit(-5);}if (0 != listen(s, 10)) { perror(Unable to listen); exit(-6);}while (1) { int s_ = accept(s, NULL, NULL); sleep(BRUTE_FORCE_TIMEOUT); if (-1 == s_) { perror(Unable to accept); continue; } pid_t child_pid = fork(); if (-1 == child_pid) { perror(Unable to fork); goto accept_cleanup; } if (0 == child_pid) { close(s); handle(s_); exit(0); }accept_cleanup: close(s_);}exit(0);}
|
Open server asking for a password
|
c;security;authentication;server;tcp
| null |
_cs.6694
|
I'm having difficulty with the following question:Given a bit pattern with mantissa $10110000$ and exponent $0111$, what does the bit pattern represent in denary (i.e. decimal / base 10)?I got the right answer(!) but by the wrong (or an alternative) method:Convert the exponent to denary: $0111$ is $7$Apply this exponent to the mantissa: $10110000\rightarrow1011000$ after shifting $7$ placesConvert the mantissa to denary: $1011000$ is $88$Set the sign: $-88$ (which is correct!)Using a different method, the mantissa $1.0110000$ is somehow determined to be $-11/16$ and then $-11/16 \times 2^7 = - 88$ (I understand this shift with the exponent). However, what I don't understand is:How do you convert 1.0101 (mantissa) to -11/16? Is this a standard way to do it?
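A small helper makes the two's-complement reading mechanical. Note that the question quotes two different patterns (10110000 and 1.0101): with the binary point after the first bit, it is 10101000 that evaluates to -11/16 (and hence -88 after the exponent shift), while 10110000 evaluates to -5/8, which may be the source of the confusion:

```python
def twos_complement_fraction(bits: str) -> float:
    """Value of a two's-complement mantissa with the binary point
    immediately after the first (sign) bit."""
    value = int(bits, 2)
    if bits[0] == "1":                 # negative in two's complement
        value -= 1 << len(bits)
    return value / (1 << (len(bits) - 1))

print(twos_complement_fraction("10101000"))          # -0.6875  == -11/16
print(twos_complement_fraction("10101000") * 2 ** 7) # -88.0
print(twos_complement_fraction("10110000"))          # -0.625   == -5/8
```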
|
Floating-Point Binary to Decimal
|
terminology;binary arithmetic
| null |
_unix.216156
|
I have this command that lists how many job files I have running:ls -las /var/www/data/forked/jobs/j*.job | echo Jobs: $(wc -l)However, if there aren't any, it throws an error. What's the simplest way to ignore the error and just display Jobs: 0?I've tried some things with > /dev/null and the like, but so far no luck.
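One approach is to redirect stderr and count matching lines; with no matches the count is simply 0. A sketch using a throwaway directory so it can be tried safely (substitute the real /var/www/data/forked/jobs path):

```shell
dir=$(mktemp -d)   # stand-in for /var/www/data/forked/jobs
count=$(ls "$dir"/j*.job 2>/dev/null | wc -l)
echo "Jobs: $count"   # prints "Jobs: 0" when nothing matches
rmdir "$dir"
```

When the glob matches nothing, the shell passes it to ls literally, ls complains on stderr (silenced by 2>/dev/null) and prints nothing, so wc -l yields 0.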
|
How can I ignore errors in the ls command?
|
bash;ubuntu
| null |
_unix.160196
|
apt uses two locations to store downloaded packages and other files:/var/lib/apt/lists/var/cache/apt/archivesThese folders can get quite big, even when using apt-get clean regularly.My /var is on a separate partition and is relatively small. Is it possible to configure apt so that it stores its files somewhere else (i.e. in /home/apt/)?
|
change location of the lists and archives folders
|
apt
|
You have a few options.Change the settings in /etc/apt/apt.confdir::state::lists /path/to/new/directory;dir::cache::archives /path/to/new/directory;Mount larger partitions at the current directories (if you have spare space for a partition): # mount /dev/sda5 /var/lib/apt # mount /dev/sda6 /var/cache/aptOf course, for the above to work, you'll need to create partitions and filesystems first.Symlink to another location (if you have no space for new partitions, but space within current partitions):# ln -s /home/apt/lib /var/lib/apt# ln -s /home/apt/cache /var/cache/aptOr as above, but using bind mounts:# mount --bind /home/apt/lib /var/lib/apt# mount --bind /home/apt/cache /var/cache/apt
|
_unix.85862
|
I'm trying to transfer a file between two servers and I'm getting different errors. Option 1: logged in at OLDSERVER via SSHscp file.tar.gz root@IPADDRESS:/var/www/.The error in this case is/usr/bin/ssh: no such file or directoryOption 2: logged in at the new server via SSHscp OLDUSER@OLDURL:/var/htdocs/file.tar.gz /var/www/The error in this case isssh: connect to host OLDURL port 22: Connection refusedDo you know what the problem could be?
|
errors trying to transfer via SCP
|
ubuntu;scp
| null |
_codereview.139766
|
How can I improve the speed of the script?$Excel = New-Object -Com Excel.Application$Excel.visible = $True$Excel = $Excel.Workbooks.Add()$wSheet = $Excel.Worksheets.Item(1)$wSheet.Cells.item(1, 1) = Folder Path:$wSheet.Cells.Item(1, 2) = Users/Groups:$wSheet.Cells.Item(1, 3) = Permissions:$wSheet.Cells.Item(1, 4) = Permissions Inherited:$WorkBook = $wSheet.UsedRange$WorkBook.Interior.ColorIndex = 8$WorkBook.Font.ColorIndex = 11$WorkBook.Font.Bold = $True ####Change the path to the folder or share you want NTFS perms on#### $dirToAudit = Get-ChildItem -Path c:\inetpub -recurse | Where { $_.psIsContainer -eq $true } $intRow = 1 foreach ($dir in $dirToAudit) { $colACL = Get-Acl -Path $dir.FullNameforeach ($acl in $colACL) { $intRow++ $wSheet.Cells.Item($intRow, 1) = $dir.FullName foreach ($accessRight in $acl.Access) { $wSheet.Cells.Item($intRow, 2) = $($AccessRight.IdentityReference) $wSheet.Cells.Item($intRow, 3) = $($AccessRight.FileSystemRights) $wSheet.Cells.Item($intRow, 4) = $acl.AreAccessRulesProtected $intRow++ }}} $WorkBook.EntireColumn.AutoFit()
|
Getting NTFS permissions of all shared folders on the local machine
|
performance;excel;file system;windows;powershell
|
The only way to improve the speed of the script is to drop the usage of Excel as a COM-object altogether. It is considered bad practice. And it is not possible on servers where Excel is not installed.It's better to use a library like NPOI (.NET port from Apache POI (java)).I used this (old) example as a base and adapted it to roughly match your code.[Reflection.Assembly]::LoadFrom(C:\Users\IEUser\Desktop\Release\Net40\NPOI.dll)[Reflection.Assembly]::LoadFrom(C:\Users\IEUser\Desktop\Release\Net40\NPOI.OOXML.dll)[Reflection.Assembly]::LoadFrom(C:\Users\IEUser\Desktop\Release\Net40\NPOI.OpenXml4Net.dll)[Reflection.Assembly]::LoadFrom(C:\Users\IEUser\Desktop\Release\Net40\NPOI.OpenXml4Net.dll)[Reflection.Assembly]::LoadFrom(C:\Users\IEUser\Desktop\Release\Net40\NPOI.OpenXmlFormats.dll)[Reflection.Assembly]::LoadFrom(C:\Users\IEUser\Desktop\Release\Net40\ICSharpCode.SharpZipLib.dll)$wb = New-Object NPOI.XSSF.UserModel.XSSFWorkbook;$ws = $wb.CreateSheet(output);$ws.CreateRow(0)| out-null;$dirToAudit = Get-ChildItem -Path C:\Users\IEUser -recurse | Where { $_.psIsContainer -eq $true }$intRow = 1foreach ($dir in $dirToAudit){ $colACL = Get-Acl -Path $dir.FullName foreach ($acl in $colACL) { $fileNameRow = $ws.CreateRow($intRow) $fileNameRow.CreateCell(1).SetCellValue($dir.FullName) $intRow++ foreach ($accessRight in $acl.Access) { $values = $ws.CreateRow($intRow) $values.CreateCell(2).SetCellValue($($AccessRight.IdentityReference).ToString()) $values.CreateCell(3).SetCellValue($($AccessRight.FileSystemRights).ToString()) $values.CreateCell(4).SetCellValue($($acl.AreAccessRulesProtected).ToString()) $intRow++ } }}$fs = new-object System.IO.FileStream(C:\Users\IEUser\Desktop\test.xlsx,[System.IO.FileMode]'Create',[System.IO.FileAccess]'Write')$wb.Write($fs);$fs.Close()There is also another library for this kind of stuff: EPPLusAnd someone else made some nice powershell functions for it.
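If the spreadsheet is only a delivery format, an even quicker route is to emit CSV (which Excel opens directly) with no COM and no extra DLLs. A hedged sketch following the question's loop (Get-ChildItem -Directory needs PowerShell 3+; fall back to the psIsContainer filter otherwise, and the inherited column mirrors the question's use of AreAccessRulesProtected):

```powershell
$rows = foreach ($dir in Get-ChildItem -Path C:\inetpub -Recurse -Directory) {
    $acl = Get-Acl -Path $dir.FullName
    foreach ($access in $acl.Access) {
        [pscustomobject]@{
            'Folder Path'           = $dir.FullName
            'Users/Groups'          = $access.IdentityReference
            'Permissions'           = $access.FileSystemRights
            'Permissions Inherited' = $acl.AreAccessRulesProtected
        }
    }
}
$rows | Export-Csv -Path C:\temp\ntfs-perms.csv -NoTypeInformation
```

You lose the colored header row, but the runtime drops from minutes of per-cell COM calls to however long the ACL enumeration itself takes.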
|
_softwareengineering.212180
|
Suppose I have a mesh of relationships, such as:Friends who trust some friends and not othersAn IPv6 router that needs to locate peers across the InternetA PGP Web of Trust that needs two people to locate each other's trust levelI'm interested in determining not only the shortest path, but also the cost of each (an arbitrary weight may be added), the nodes of each path, and other information typically used for this general-purpose need.What approaches are there to accommodate this need? Ideally this will be something that I can run locally on a phone or computer and that searches a large graph in O(N) time or better.
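For shortest paths with per-edge weights plus the actual node sequence, Dijkstra's algorithm with a binary heap is the standard tool: O((V+E) log V) rather than O(N), but fast enough on a phone for graphs with millions of edges. A self-contained sketch:

```python
import heapq

def dijkstra(graph, source, target):
    """graph: {node: [(neighbor, weight), ...]} adjacency list.
    Returns (total_cost, [source, ..., target]) or (inf, []) if unreachable."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:                    # first pop of target is optimal
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

trust = {  # weights as illustrative trust costs between peers
    "alice": [("bob", 1), ("carol", 4)],
    "bob": [("carol", 1), ("dave", 5)],
    "carol": [("dave", 1)],
}
print(dijkstra(trust, "alice", "dave"))  # (3, ['alice', 'bob', 'carol', 'dave'])
```

For repeated many-to-many queries (the web-of-trust case), variants such as bidirectional search or precomputed landmarks (A* with landmarks) cut the per-query cost further.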
|
How can I find all connections in a mesh network/graph?
|
relational database;graph;facebook;ipv6
| null |
_softwareengineering.104507
|
In my current development situation, we have a lot of DLLs, executables, and static libraries. How do you decide what should go into a DLL? What should go into an executable? Why have separate functionality in different executable files? I'm hoping the answer will be concise, but this is a largely opinionated topic it would seem.How do you decide where functionality resides in a large-scale project (more than one executable file)? I'm expecting answers to range from, Good Design to Modularity to Whatever management puts in a requirements doc.
|
How do you decide where functionality should belong in a large-scale project?
|
design;architecture;libraries
|
Choosing between a library and an executable file is relatively simple: does it make sense to execute the code you want to put into an executable as a standalone program? If not, it should probably be a library. In general, I would favour a thin executable layer over as many libraries as needed, since that makes it easier to reuse those backend libraries later and they are not tied to a particular program.Far as deciding on how to split your code between libraries goes, you might find Uncle Bob Martin's article on granularity useful.In it he talks about OO structure and defines several principles that can help you package your code appropriately. These are also covered in more detail in his book, Agile Principles, Patterns, and Practices in C#.I will summarize the principles below:The Reuse/Release Equivalence Principle (REP)The granule of reuse is the granule of release. Only components that are released through a tracking system can be effectively reused. This granule is the package.Uncle Bob defines reuse as being able to statically or dynamically link the reused library into his program and never having to look at its source code. When a new version of the library is released, he can just integrate it into his system.Treating libraries this way drives only keeping related things together in the same package. Otherwise, the library's consumers might have to upgrade to a new version for no reason or lag a few versions behind.The Common Reuse Principle (CRP)The classes in a package are reused together. If you reuse one of the classes in a package, you reuse them all.This principle supports the one above. If you have classes in the same package that aren't related to one another, you may be forcing the users of your library to upgrade unnecessarily.The Common Closure Principle (CCP)The classes in a package should be closed against the same kinds of changes. 
A change that affects a package affects all the classes in that package.This principle talks about maintainability. The idea here is to group classes based on how they might need to change. That way your changes can be localized to one part of the application and not spread all over.The Acyclic Dependencies Principle (ACP)The dependency structure between packages must be a Directed Acyclic Graph (DAG). That is, there must be no cycles in the dependency structure.Disallowing cyclic dependencies allows each package to be developed independently and released to the rest of the company when new changes are ready. This way you don't end up with two teams deadlocked, waiting on each other to finish some work.
|
_unix.72435
|
I like how you can drag a window with the cursor anywhere in the window by pressing Alt + left mouse. Is it possible to drag a window by pressing and holding just the right mouse button?Minimize window pressing right mouse + left mouseClose window by pressing right mouse + middle mouseN.B. This is all without holding down any keys on the keyboard.
|
Drag windows with right mouse button in Linux Mint KDE
|
window manager;window management
| null |
_unix.352510
|
I am trying to install the libgmp sources, as I am trying to build some other source that requires gmp.h.In Cygwin's setup-x86_64.exe I search for gmp and it finds the libgmp line, which says 6.1.2-1 Keep on the left, n/a under Bin?, and an empty checkbox under Src?.I check the checkbox and proceed, and it appears to install something and succeed. However, gmp.h doesn't appear anywhere under /usr/include (or anywhere else that I can find); and when I run setup-x86_64.exe again, the Src box is unchecked.What's going wrong? I have tried with a few other libraries too; in all cases the source doesn't seem to appear and the Src checkbox resets the next time the setup is run.
|
Installing source package has no effect
|
cygwin
| null |
_softwareengineering.237469
|
Disclaimer: This is a question derived from this one. What do you think about the following example of a use case? I have a table containing orders. These orders have a lot of related information needed by my current queries (think about the products; the buyer information; the region, country and state of the sale point; and so on). In order to think with a de-normalized approach, I don't put identifiers of these related items in my main orders collection. Instead, I repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Assuming the previous premise, I'm committing to maintaining all the data related to an order without a lot of updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and as MongoDB blocks at a document level on updates, I would be blocking the entire order at the update moment). This raises the following questions: Do I have to replicate all the products' related data? (i.e. category, maker and optional attributes like color, size) What if a new feature is requested and I have to make a lot of queries with the products as the entry point of the query? (i.e. reports showing the products' sales performance grouped by region, country, or whatever) Is it fair enough to apply the $unwind operation to my original orders collection? (What about the performance?) Should I create another collection with these queries in mind and replicate again all the products' information (and their orders)? Wouldn't it be better to store a product_id in the original orders collection in order to be more tolerant to requirements changes?
(What about emulating JOINs?) Would the optimal approach be a mixed solution with an RDBMS system like MySQL in order to retrieve the complete data? I mean: store products, users, and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds, in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query )
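To make the mixed-solution idea concrete, here is a rough sketch of the application-side join I have in mind — using Python with the standard sqlite3 module as a stand-in for MySQL, and a plain list of dicts as a stand-in for the Mongo orders collection; all names here are mine, just for illustration:

```python
import sqlite3

# Stand-in for the MongoDB orders collection: documents keep only user_id.
orders = [
    {"_id": 1, "total": 10.0, "user_id": 100},
    {"_id": 2, "total": 25.5, "user_id": 101},
]

# Stand-in for the relational side holding the full user data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [(100, "Alice"), (101, "Bob")])

def get_all_users_data_by_ids(ids):
    # Emulates the JOIN: one IN (...) query with the ids gathered from Mongo.
    placeholders = ",".join("?" * len(ids))
    rows = db.execute(
        "SELECT user_id, name FROM users WHERE user_id IN (%s)" % placeholders, ids)
    return {user_id: name for user_id, name in rows}

ids = [o["user_id"] for o in orders]      # step 1: ids from the documents
users = get_all_users_data_by_ids(ids)    # step 2: one SQL round trip
enriched = [dict(o, user_name=users[o["user_id"]]) for o in orders]
print(enriched[0]["user_name"])  # Alice
```

So the cost of "emulating the JOIN" is one extra round trip per related collection, instead of duplicating the data in every order.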
|
MongoDB: Replicate data in documents vs. join
|
sql;nosql;mongodb;rdbms
| null |
_unix.379696
|
Opera fails using Java. The online test is negative.

~ $ locate libnpjp2.so
/home/USER/.mozilla/plugins/libnpjp2.so
/usr/java/jre1.8.0_111/lib/i386/libnpjp2.so

The tutorial on Opera's webpage is outdated. It refers to a non-existing directory, where the Opera browser should be installed:

~ $ cd /usr/lib/opera/plugins
bash: cd: /usr/lib/opera/plugins: No such file or directory

According to Opera's manual, the only problem might be the missing link to the existing Java installation directory. But where is the folder ../plugins? I cannot find it in /usr/lib/i386-linux-gnu/opera/. My PC runs Linux Mint 18 Xfce 32-bit.
|
Opera not loading Java applet
|
java;internet;browser;plugin;opera
| null |
_softwareengineering.342974
|
Within a Java shop, we are using Spring MVC, some HTML output, and some JSON/REST output as a service. Our client side is more familiar with jQuery for Ajax messaging to the server. We have been able to build Ajax applications, but the approach is cumbersome with just jQuery and Java code. What are some approaches for building Ajax apps that don't use the single-page-application approach? For example, are there libraries that would support Ajax-style widgets on the client side? In AngularJS you can achieve this with directives. Mainly, it would be nice to have a better JavaScript client-side library, like AngularJS, that uses a client-side MVC approach. I have seen Ajax-oriented architectures that use AngularJS with $http REST calls. Our current approaches have been Spring MVC HTML output and some Ajax request calls using jQuery. Without using AngularJS and single-page-oriented applications, what are approaches for building Ajax-based applications?
|
What are approaches for building more robust Ajax applications without using AngularJS
|
java;javascript;mvc;ajax
|
Before SPAs came around, people did what you are suggesting all the time. They would just add JavaScript to a page, hook up jQuery event handlers, and update the DOM with Ajax requests. It's a pretty manual approach, but it works. Personally I like everything Angular and similar frameworks offer. If your goal is not to avoid Angular, but instead to be able to serve pages from your server while having widgets that are easy to create and that add value where appropriate, I would suggest instead looking at using Angular apps as widgets on your page. Then you can still serve your pages, but also serve widgets where appropriate for highly interactive areas. You would need to manually bootstrap each widget to make this work. You can follow the following link for manual bootstrapping: https://docs.angularjs.org/guide/bootstrap
|
_webapps.84727
|
I currently have a zap on Zapier which takes Formidable forms entries on my WordPress website and creates a card on one of my Trello boards with the entered details.There are quite a number of fields and it seems to split the entry over 2 cards on Trello.Is there a solution/setting that can force Trello and/or Zapier to create a single card only?
|
Zapier/Trello Link Causing Duplicate Cards
|
trello;wordpress;zapier
| null |
_cs.43103
|
I'm creating an application where a client connects to a server, retrieves some data, and after that the connection is closed. The client spends some time changing that data and then connects again to the server, sends it back, and the connection is closed. At the server, the data is compared with the original and saved to the database if it validates. I see only one solution to this - creating two ServerSockets, one for input and one for output. But that does not seem like a good solution. Is this the way to do it, or is there something else? How can I determine the purpose of a connection? PS: I'm using Java 8.
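To make the question concrete, here is roughly what I mean with a single ServerSocket, where the first line of each connection states its purpose (the GET/PUT command names and the tiny protocol are my own invention, just for illustration):

```java
import java.io.*;
import java.net.*;

public class CommandServer {

    // One ServerSocket; the first line of each connection states its purpose.
    static String roundTrip() throws Exception {
        ServerSocket server = new ServerSocket(0); // any free port
        int port = server.getLocalPort();

        Thread serverThread = new Thread(() -> {
            try {
                for (int i = 0; i < 2; i++) { // handle exactly two connections
                    try (Socket s = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()));
                         PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                        String command = in.readLine(); // purpose of the connection
                        if ("GET".equals(command)) {
                            out.println("data-v1"); // hand out the editable data
                        } else if ("PUT".equals(command)) {
                            String payload = in.readLine(); // modified data comes back
                            out.println(payload.equals("data-v1") ? "UNCHANGED" : "SAVED");
                        }
                    }
                }
            } catch (IOException ignored) { }
        });
        serverThread.start();

        // First connection: retrieve the data, then disconnect.
        String data = talk(port, "GET", null);
        // ...client edits the data offline, then reconnects to send it back.
        String reply = talk(port, "PUT", data + "-edited");

        serverThread.join();
        server.close();
        return reply;
    }

    private static String talk(int port, String command, String payload) throws IOException {
        try (Socket s = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(command);
            if (payload != null) {
                out.println(payload);
            }
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // SAVED
    }
}
```

So the purpose of the connection is not determined by the socket it arrives on, but by the first message the client sends — which is why I wonder whether two ServerSockets are needed at all.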
|
Implementing TCP request and answer
|
java
| null |
_datascience.2416
|
I would like to ask your opinion on how to choose a similarity measure. I have a set of vectors of length N, each element of which can contain either 0 or 1. The vectors are actually ordered sequences, so the position of each element is important. Suppose I have three vectors of length 10: x1, x2, x3. x1 has three 1s, at positions 6, 7, 8 (indexes start from 1). Both x2 and x3 have an additional 1, but x2 has it at position 9 while x3 has it at position 1. I am looking for a metric according to which x1 is more similar to x2 than to x3, in that the additional 1 is closer to the bulk of ones. I guess this is a relatively common problem, but I am confused about the best way to approach it. Many thanks in advance!
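To illustrate the behaviour I want, here is my own quick sketch comparing the mean position of the ones in each vector — note that plain Hamming distance cannot tell x2 and x3 apart, which is exactly my problem:

```python
def one_positions(v):
    # 1-based positions of the ones, matching the indexing in the question
    return [i for i, bit in enumerate(v, start=1) if bit]

def centroid_distance(a, b):
    # distance between the mean positions of the ones in each vector
    ca = sum(one_positions(a)) / float(len(one_positions(a)))
    cb = sum(one_positions(b)) / float(len(one_positions(b)))
    return abs(ca - cb)

def hamming(a, b):
    return sum(p != q for p, q in zip(a, b))

x1 = [0, 0, 0, 0, 0, 1, 1, 1, 0, 0]  # ones at 6, 7, 8
x2 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 0]  # extra one at 9
x3 = [1, 0, 0, 0, 0, 1, 1, 1, 0, 0]  # extra one at 1

print(hamming(x1, x2), hamming(x1, x3))                       # 1 1 -- a tie
print(centroid_distance(x1, x2) < centroid_distance(x1, x3))  # True
```

This centroid idea is just one candidate; I would be happy to hear about more principled metrics with the same property.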
|
Similarity measure for ordered binary vectors
|
similarity
| null |
_codereview.86961
|
I am making an app that shows some pictures and you can like each picture. I am sending a POST request from the Android device. In my backend I have a Python script that takes parameters category and URL of the picture, opens a JSON file, and modifies the likes value.

This is the Python script:

#!/usr/bin/python

# Import modules for CGI handling
import cgi, cgitb, json

# Create instance of FieldStorage
form = cgi.FieldStorage()

# Get data from fields
category = form.getvalue('category')
url = form.getvalue('url')

# Gets the file path, depending on the category
filepath = '/home2/public_html/app/%s/%s.json' % (category, category)

# Open a file
with open(filepath) as data_file:
    data = json.load(data_file)

# Finds the picture with the specific url and updates the likes value.
for picture in data:
    if picture["url"] == url:
        likes = int(picture["likes"])
        picture["likes"] = str(likes + 1)
        break

# Writes the json file with the new value.
with open(filepath, 'w') as data_file:
    data_file.write(json.dumps(data))

My JSON looks like this:

{"url": "http://pictures.com/app/Funny/picture.jpg", "category": "Funny", "likes": 0}

What can go wrong with this script? Are there any potential drawbacks? I am new to backend programming, so I don't know what can go wrong here.
|
App for displaying pictures to like
|
python;json;server
| null |
_unix.1448
|
I have recently had to have our dedicated server rebuilt as core system files were deleted. But that's another story...

Before the rebuild, we were merging, branching, and committing changes to a number of different projects with Bazaar. Now it seems that the hosting company has rebuilt the server at CentOS 4.8 rather than 5.0, which I guess it was before. According to the support staff, Bazaar is not compatible with the new OS.

The support staff have said: "I have installed all of the Bazaar dependencies, however the missing dependencies, for example

Missing Dependency: libc.so.6(GLIBC_2.4) is needed by package bzr

are for a higher version of CentOS."

My question is: is this entirely true, and is there any way to get Bazaar working on CentOS 4.8? If not, are there any other version control systems similar to Bazaar?

Below is the log for the attempted installation of Bazaar:

Error: Missing Dependency: libc.so.6(GLIBC_2.4) is needed by package bzr
Error: Missing Dependency: python(abi) = 2.4 is needed by package bzr
Error: Missing Dependency: rtld(GNU_HASH) is needed by package bzr
Error: Missing Dependency: libcrypto.so.6 is needed by package python-pycurl
Error: Missing Dependency: python(abi) = 2.4 is needed by package python-paramiko
Error: Missing Dependency: rtld(GNU_HASH) is needed by package python-pycurl
Error: Missing Dependency: python(abi) = 2.4 is needed by package python-pycurl
Error: Missing Dependency: libssl.so.6 is needed by package python-pycurl
Error: Missing Dependency: rtld(GNU_HASH) is needed by package python-crypto
Error: Missing Dependency: libc.so.6(GLIBC_2.4) is needed by package python-crypto
Error: Missing Dependency: python(abi) = 2.4 is needed by package python-crypto
Error: Missing Dependency: python-abi = 2.4 is needed by package python-crypto
|
Version Control System similar to Bazaar needed for CentOS 4.8 dedicated hosting server
|
version;version control
| null |
_unix.37686
|
I want to insert some code into multiple tex files foo*.tex in a directory, one line after \documentclass{.*}. Note that the files have different document classes, so the .* is just a symbolic placeholder here. The code has multiple lines, for example:

\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{ngerman}

What's the best way to do this automatically?
|
Insert multiline text into several files via commandline
|
bash;shell script;latex
|
One way using GNU sed:

Content of script.sed:

/^\\documentclass[^{]*{[^}]*}/ {
a\
\\usepackage[utf8]{inputenc}\
\\usepackage[T1]{fontenc}\
\\usepackage{ngerman}
}

Run it like:

sed -s -i.bak -f script.sed foo*.tex

I use ^ to match the documentclass at the beginning of the line. Remove it if you can have spaces before it. The -s switch considers each input as a different file, and -i creates backups with the bak extension.
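You can reproduce the recipe end to end on a throwaway file (the sample file contents below are made up for the demonstration; -s is left out since only one file is passed):

```shell
# create a sample tex file to operate on
cat > foo1.tex <<'EOF'
\documentclass{article}
\begin{document}
Hallo Welt
\end{document}
EOF

# the sed script: append the three lines after the \documentclass line
cat > script.sed <<'EOF'
/^\\documentclass[^{]*{[^}]*}/ {
a\
\\usepackage[utf8]{inputenc}\
\\usepackage[T1]{fontenc}\
\\usepackage{ngerman}
}
EOF

sed -i.bak -f script.sed foo1.tex   # add -s when passing several files
cat foo1.tex
```

Afterwards foo1.tex starts with \documentclass{article} followed immediately by the three \usepackage lines, and the original is preserved in foo1.tex.bak.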
|
_unix.331882
|
I have an i7 6700 processor, and I have no idea how to find out what architecture it is (amd64, arm64, armel, armhf, i386, mips, mipsel, powerpc, ppc64el or s390x) in order to choose a matching Debian build. How do I find out? It wasn't listed on Intel's site nor could I find it in a Google search.
|
What debian build am I supposed to download?
|
debian
| null |
_webmaster.83608
|
When setting up a new A Record / Host Record there's an option to enter the host, and then one to enter the IP. What are the options available for host?I've seen www, @, *... are there others? I have read many tutorials about A Records, and yet every single one of them fail to explain the host part of the record. I understand what www and * does, but what about @?
|
What are the host record options available?
|
dns
| null |
_codereview.145953
|
I was working on adjusting some variable naming on an Excel project and ran into the issue that the MSForm control names needed to be updated. When you change the properties of a control on the form, the underlying code for the form doesn't seem to update accordingly.

Examples:

You rename btn_button1 to btn_populateList. Now when you click the button it still wants to call btn_button1_Click() rather than btn_populateList_Click().

You rename inputbox1 to listToPopulate, but when you run pre-existing code, it still refers to inputbox1 instead of the new name of the control.

Anyhow, I've always wanted to mess around to get VBA to do stuff in the VBE, so this was my chance!

The macro finds the target project and target form, and then updates the control names, appending the string "NEW" to oldName such that it's oldNameNEW. This was done so I could easily revert the names back using the same procedure with left(len()-3). Ideally, the array of new names would be populated by the user with descriptive names, and would be populated outside of the For loop, obviously.

I'm sure there's some clever refactoring that can be done, but I wasn't sure if I'd break it. Also, I'm worried I misunderstand how forms call their macros, given it's not designed the same way as controls on sheets.

Public Sub FindReplaceInEntireModule()
    Const TARGET_ADDIN As String = "bUTLAddIn"
    Const TARGET_FORM As String = "form_chtGrid"

    Dim targetProject As VBIDE.VBProject
    Dim targetModule As VBIDE.VBComponent
    Dim targetCode As VBIDE.CodeModule
    Dim controlIndex As Long

    Set targetProject = Application.VBE.VBProjects(TARGET_ADDIN)
    Set targetModule = targetProject.VBComponents(TARGET_FORM)

    Dim targetForm As Object
    Set targetForm = targetModule.Designer

    Dim numberOfControls As Long
    numberOfControls = targetForm.Controls.count

    Dim newControlNames() As String
    ReDim newControlNames(1 To numberOfControls)
    Dim oldControlNames() As String
    ReDim oldControlNames(1 To numberOfControls)

    For controlIndex = 1 To numberOfControls
        oldControlNames(controlIndex) = targetForm.Controls(controlIndex).name
        newControlNames(controlIndex) = oldControlNames(controlIndex) & "NEW"
        targetForm.Controls(controlIndex).name = newControlNames(controlIndex)
    Next

    Set targetCode = targetModule.CodeModule

    Dim findWhat As String
    Dim replaceWith As String
    Dim numberOfLines As Long
    numberOfLines = targetCode.CountOfLines

    Dim lineNumber As Long
    Dim lineText As String

    For controlIndex = 1 To numberOfControls
        findWhat = oldControlNames(controlIndex)
        replaceWith = newControlNames(controlIndex)
        For lineNumber = 1 To numberOfLines
            lineText = targetCode.Lines(lineNumber, 1)
            If InStr(1, lineText, findWhat, vbTextCompare) > 0 Then
                targetCode.ReplaceLine lineNumber, Replace(lineText, findWhat, replaceWith, , , vbTextCompare)
            End If
        Next
    Next
End Sub
|
Renaming form controls and underlying code
|
vba;excel
|
You've basically implemented (parts of) a rename refactoring.

The problem is that the VBIDE API only gives you strings to work with, so you essentially have a smart find & replace, but it's still a find & replace. A LOT of things can go wrong when processing code without a symbol table.

Imagine a control named MyButton that you want to rename to SomeButton. Then you have this code in the module:

Dim MyButtonForeColor As Long
MyButtonForeColor = vbBlack

Your code will rename MyButtonForeColor too, because it contains MyButton - but did you mean to? In this case, probably.

UserForm controls are Public, so you can have code outside the form that does this:

With New UserForm1
    .Show vbModal
    ActiveSheet.Cells(1, 1) = .SomeTextBox.Text
End With

And that code will break when you rename SomeTextBox to anything else.

Or, you could have conflicting but unambiguous names in different scopes, that a simple find & replace will break:

UserForm1.SomeButton.Caption = UserForm2.SomeButton.Caption

Naming is hard. Renaming is even harder.

Having worked on Rubberduck for over two years, I can assure you that there is absolutely no way to implement any kind of rename refactoring without a symbol table - and that means lexing, parsing and resolving the VBA code you want to refactor.

Ctrl+H isn't a refactoring tool. Rubberduck is. I don't mean to sound like an ad, but your code needs to know exactly which code across the entire project is referencing the control you're renaming - and that simply isn't possible to implement reliably in VBA. Heck, Rubberduck has a proper parser and resolver, and still struggles with a number of edge cases.

I give you this code to play with - it's EVIL beyond words, I know... but it's 100% legal VBA code, and Rubberduck sorts it all out:

'Project Name = "MyProject"
'Module Name = "MyModule"
Option Explicit

Private MyVar As MyProject
Private MyVar1 As MyModule

Private Type MyProject
    MyModule As String
    MySub As String
End Type

Private Type MyModule
    MyVar As String
    String As MyProject
End Type

Private Type MySub
    MyVar As MyModule
    MyVar1 As MyProject
End Type

Private Type MyVar
    MyProject As MyModule
End Type

Sub MySub()
    Dim MyProject As MyProject
    Dim MySub As MySub
    Dim MyVar As MyVar
    MyVar.MyProject.MyVar = "Smith"
    MySub.MyVar1.MySub = MyProject.MySub 'My brain hurts....
End Sub

Your strategy (I know you're only looking at controls, but the principle is the same) will fail to correctly rename MySub here (whichever you pick).
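The first pitfall is easy to reproduce outside VBA. A quick sketch in any language (Python here, with a made-up code snippet) shows why blind substring replacement is too greedy, and why even a word-boundary regex only fixes this one case — not the scoping cases, which genuinely need a symbol table:

```python
import re

code = ("Dim MyButtonForeColor As Long\n"
        "MyButtonForeColor = vbBlack\n"
        "MyButton.Caption = \"Click\"")

# Naive find & replace, like Replace over each CodeModule line: too greedy.
naive = code.replace("MyButton", "SomeButton")
print("SomeButtonForeColor" in naive)     # True -- the unrelated variable was renamed too

# Whole-word replacement fixes this particular case...
word_aware = re.sub(r"\bMyButton\b", "SomeButton", code)
print("MyButtonForeColor" in word_aware)  # True -- left alone
print("SomeButton.Caption" in word_aware) # True -- the control reference was renamed

# ...but no regex can tell UserForm1.SomeButton from UserForm2.SomeButton:
other = "UserForm1.SomeButton.Caption = UserForm2.SomeButton.Caption"
# a word-boundary replace of SomeButton would hit both -- only a resolver helps
```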
|
_codereview.126938
|
What do you think of this way to generate a random number from the total rows in a table? I would like to create a page, random.php, that generates a random ID and shows it.

require 'includes/config.php';

$pdo->query('SELECT id FROM xxxx');
$pdo->resultset();
$total = $pdo->rowCount();

//echo random id
$id = rand(1, $total);

$pdo->query('SELECT * FROM xxxxx WHERE id = :id AND status = :status');
$pdo->bind(':id', $id);
$pdo->bind(':status', 1);
$rand = $pdo->single();
echo $rand['id'];
|
Select random row using PDO
|
php;sql;random;pdo
| null |
_codereview.13133
|
Any suggestions to improve the performance of the following code? This code is exceeding the time limit for the last four cases. This is a solution to the problem https://www.interviewstreet.com/challenges/dashboard/#problem/4fcf919f11817 def main(): N = int(raw_input()) if(0<=N<=100000): x = [] ans=[] l=[] m=[] for i in xrange(0, N): tmp = raw_input() a, b = [xx for xx in tmp.split(' ')] l.append(a) m.append(int(b)) for i in xrange(0, N): if l[i]=='r': if len(x)<1 or m[i] not in x: ans.append('Wrong!') elif m[i] in x: x.remove(m[i]) q=len(x) if q%2==1: ans.append(x[(q/2)]) elif q%2==0: if q==0: ans.append('Wrong!') else: k=x[q/2]+x[(q/2)-1] if k%2==0: ans.append(int(k)/2) else: ans.append(k/2.0) elif l[i]=='a': x.append(m[i]) x=sorted(x) p=len(x) if p%2==1: ans.append(x[(p/2)]) else: k=x[p/2]+x[(p/2)-1] if k%2==0: ans.append(int(k)/2) else: ans.append(k/2.0) for i in ans: print iif __name__ == __main__: main()from sys import stdin,stdoutfrom bisect import bisect,insortdef main(): N = int(raw_input()) x = [] ans = [] append = x.append ans_append = ans.append remove = x.remove read = stdin.readline for i in xrange(0, N): tmp = read() a, b = tmp.split(' ') b = int(b) if a == 'r': search_result = search(x,b) if len(x) < 1 or not(search_result): ans_append('Wrong!') else: remove(b) ans_append(median(x)) else: insort(x, b) ans_append(median(x)) for i in ans: print idef median(arr): q = len(arr) if q==0: return 'Wrong!' if q%2 == 1: return arr[(q/2)] else: k = arr[q/2] + arr[(q/2)-1] if k%2 == 0: return int(k)/2 else: return k/2.0def search(arr,element): position = bisect(arr, element) if len(arr) >= position and len(arr)!=0: if arr[position-1] == element: return True else: return False else: return Falseif __name__ == __main__: main()
|
Median solving from Interview street in Python
|
python;performance;contest problem
|
    def main():
        N = int(raw_input())
        if(0<=N<=100000):

There really isn't much point in checking this. Your code works regardless of the value, and you don't need to worry about whether the contest will feed you larger values.

            x = []
            ans=[]
            l=[]
            m=[]

I suggest using longer names even in a contest scenario.

            for i in xrange(0, N):
                tmp = raw_input()
                a, b = [xx for xx in tmp.split(' ')]

Actually this line is the same as a, b = tmp.split(' '). The list comprehension just wastes processing power. Since there isn't going to be any unusual whitespace, you can skip the parameter to split as well.

                l.append(a)
                m.append(int(b))

It's not clear why you store this in a list. Just perform the operation as soon as you read it. No need to store things into the lists just to read them in another loop.

            for i in xrange(0, N):

Here you should have used something like for action, number in zip(l, m):

                if l[i]=='r':
                    if len(x)<1 or m[i] not in x:

There isn't really a point to checking for len < 1, since the second condition will fail in that case anyway.

                        ans.append('Wrong!')

Just print out the output, don't store it up.

                    elif m[i] in x:

You should be able to just make this an else.

                        x.remove(m[i])

This is going to be slow. The checks for m[i] in x and the call to remove have to scan over the whole list to find the elements. But the list is sorted, which means you should be able to find elements faster. Check out the bisect module, which provides binary search to make finding elements faster in sorted lists.

                        q=len(x)
                        if q%2==1:
                            ans.append(x[(q/2)])
                        elif q%2==0:
                            if q==0:
                                ans.append('Wrong!')

This case is somewhat odd here; it'd make more sense to check for this one level out, before checking for odd and even.

                            else:
                                k=x[q/2]+x[(q/2)-1]
                                if k%2==0:
                                    ans.append(int(k)/2)
                                else:
                                    ans.append(k/2.0)
                elif l[i]=='a':
                    x.append(m[i])
                    x=sorted(x)

This is better written as x.sort() so as to sort in place. However, it's going to be slow constantly resorting the list. Before adding the element, the list is almost sorted, so you can use that to your advantage. The bisect module has operations to make that easy.

                    p=len(x)
                    if p%2==1:
                        ans.append(x[(p/2)])
                    else:
                        k=x[p/2]+x[(p/2)-1]
                        if k%2==0:
                            ans.append(int(k)/2)
                        else:
                            ans.append(k/2.0)

OK, you're doing this again. You should have put this in a function or something.

            for i in ans:
                print i

    if __name__ == "__main__":
        main()

Maintaining a sorted list is too expensive. Every time you insert or remove elements, the computer has to shift the whole array in memory. Knowing this, the contest site will devise questions to give you the worst case scenario. You are going to have to do something else.

One possibility to explore would be to use a binary tree. Binary trees have faster insertion/removal times, and a reasonably fast way to calculate the median. Their disadvantage here is the complexity required to implement it. A binary tree itself isn't that complicated, but the problem is in a balancing strategy.

A second possibility is to try and use heaps implemented in the heapq module. The idea would be that at any point in time, you maintain two heaps. One has the elements less than the median and the other has the elements greater than the median. There is some trickiness there in how you would handle deletions, but I think it could be done.
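The two-heap idea can be sketched like this (insertions only — deletions, as noted, need extra bookkeeping; the code and names are my own illustration, not a full contest solution):

```python
import heapq

def running_medians(values):
    lo, hi = [], []   # max-heap (stored negated) for the lower half, min-heap for the upper
    out = []
    for v in values:
        if lo and v > -lo[0]:
            heapq.heappush(hi, v)
        else:
            heapq.heappush(lo, -v)
        # rebalance so that len(lo) == len(hi) or len(lo) == len(hi) + 1
        if len(lo) > len(hi) + 1:
            heapq.heappush(hi, -heapq.heappop(lo))
        elif len(hi) > len(lo):
            heapq.heappush(lo, -heapq.heappop(hi))
        # the median is the top of lo, or the average of both tops
        if len(lo) == len(hi):
            out.append((-lo[0] + hi[0]) / 2.0)
        else:
            out.append(-lo[0])
    return out

print(running_medians([5, 15, 1, 3]))  # [5, 10.0, 5, 4.0]
```

Each insertion costs O(log n) instead of the O(n) shift a sorted list needs, and the median is always available in O(1) from the heap tops.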
|
_cs.50400
|
I'm trying to figure out why this is true: the clauses {a,b}, {b,~c}, {c,~a} constitute a 2SAT problem with an implication graph without bad loops. Can someone show me how to illustrate this and indicate how I can find whether any 2SAT problem has a bad loop or not? Thank you so much :)
|
How can you check if a 2SAT problem has a bad loop
|
complexity theory;logic;satisfiability
|
The implication graph is constructed using the following ingredients:

Vertices: one vertex for each variable, and one for each negation of a variable.
Edges: for each clause {x,y}, create an edge from -x to y and one edge from -y to x.

The following should be the implication graph corresponding to your set of clauses:

A bad loop (I'm guessing this is what you want, haven't encountered that wording before) is a loop in the implication graph that contains both a variable and its negation (i.e. both x and -x). Clearly there is no such loop in your case.

Why "bad" loop? It is because each edge represents an implication that necessarily must hold for your 2-SAT instance. For example, the first clause reads a or b. If this clause is to be satisfied, then we must have one of the two variables true. So:

If a isn't true (i.e. -a is true), then b must be true, which is the implication -a$\to$b.
If b isn't true (i.e. -b is true), then a must be true, which is the implication -b$\to$a.

In a bad cycle, you have a cycle of such implications, so that x$\to$...$\to$-x$\to$...$\to$x. Such a cycle of implications can never be satisfied, otherwise x would be both true and false at the same time. So if you have any bad cycles, the 2-SAT formula is unsatisfiable!

In fact, a 2-SAT formula is unsatisfiable when, and only when, there is a bad cycle in the implication graph. So your formula is, by the absence of bad loops, satisfiable. The proof of satisfiability in the case of no bad loops even yields a poly-time algorithm to find a satisfying assignment (which is great : ) )

So how to assign values to the variables and satisfy your (or any) 2-SAT formula? I'll refer to https://en.wikipedia.org/wiki/2-satisfiability#Strongly_connected_components for the algorithm, but be warned that you need to know of the concept of strongly connected component to fully understand the algorithm.
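For completeness, the whole check can be scripted. The sketch below (my own naming) encodes a as 1, b as 2, c as 3, builds the implication edges exactly as described, and declares the instance unsatisfiable precisely when some v and -v can reach each other (i.e. lie on a common "bad" cycle):

```python
def satisfiable_2sat(n_vars, clauses):
    """2-SAT via the implication graph: unsatisfiable iff some variable v
    and its negation -v reach each other (a bad loop)."""
    # Literal encoding: variable v in 1..n -> node v, its negation -> -v.
    edges = {}
    def add_edge(u, v):
        edges.setdefault(u, set()).add(v)
    for x, y in clauses:
        add_edge(-x, y)   # clause (x or y) forces -x -> y
        add_edge(-y, x)   # ...and -y -> x
    def reaches(src, dst):
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for w in edges.get(u, ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return False
    # A bad loop exists iff v reaches -v and -v reaches v for some v.
    return not any(reaches(v, -v) and reaches(-v, v)
                   for v in range(1, n_vars + 1))

# a=1, b=2, c=3; clauses {a,b}, {b,~c}, {c,~a}
print(satisfiable_2sat(3, [(1, 2), (2, -3), (3, -1)]))  # True
```

This uses plain reachability for clarity; the strongly-connected-components formulation linked above does the same check in linear time.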
|
_webmaster.42588
|
I have a WP 3.5 multisite and sometimes the page fails to load. I got this message on Chrome:

Oops! Google Chrome could not connect to domain.x
Error 7 (net::ERR_TIMED_OUT): The operation timed out.

On Firefox I got this:

Server not found
The connection has timed out

Update infos 2:

Is this a WordPress, hosting, or ISP issue?

I scanned my multisite WP site a little while ago with P3 (Plugin Performance Profiler):

WordPress Plugin Profile Report
Report date: 24 January 2013
Theme name:
Pages browsed: 18
Avg. load time: 1.7010 sec
Number of plugins: 25
Plugin impact: 45.81% of load time
Avg. plugin time: 0.7792 sec
Avg. core time: 0.4346 sec
Avg. theme time: 0.4586 sec
Avg. mem usage: 25.86 MB
Avg. ticks: 17,257
Avg. db queries: 139.33
Margin of error: 0.0287 sec

Plugin list:
P3 (Plugin Performance Profiler) - 0.0088 sec - 1.12%
Bwp Google Xml Sitemaps - 0.0605 sec - 7.76%
Google Analytics For Wordpress - 0.0200 sec - 2.57%
Jw Player Plugin For Wordpress - 0.0241 sec - 3.09%
Google Analytics Social Engagement Tracking Code - 0.0005 sec - 0.06%
Multisite Toolbar Additions - 0.0039 sec - 0.50%
Multisite User Management - 0.0044 sec - 0.57%
My Gallery - 0.0188 sec - 2.41%
Nextgen Gallery - 0.0109 sec - 1.39%
NextScripts: SNAP Pro Upgrade Helper - 0.0060 sec - 0.77%
SEO Ultimate - 0.2213 sec - 28.40%
Social Networks Auto Poster Facebook Twitter G - 0.0767 sec - 9.84%
Use Google Libraries - 0.0140 sec - 1.80%
Watermark Hotlinked Images - 0.0007 sec - 0.09%
Wordpress Mu Domain Mapping - 0.2089 sec - 26.82%
WP Smush.it NextGEN Gallery Integration - 0.0004 sec - 0.05%
WP Smush.it - 0.0014 sec - 0.17%
Admin Locale - 0.0028 sec - 0.36%
Contact Form 7 - 0.0166 sec - 2.13%
Really Simple CAPTCHA - 0.0004 sec - 0.06%
Simple Ads Manager - 0.0068 sec - 0.88%
Top 10 - 0.0154 sec - 1.98%
Welcome Popup - 0.0278 sec - 3.57%
Wordpress Social Share Buttons - 0.0023 sec - 0.30%
WP-Filebase - 0.0258 sec - 3.32%
|
Random fail to load wordpress multisite on Chrome & Firefox browser
|
wordpress;shared hosting;firefox;google chrome;isp
|
So based on the above discussion the issue is with your ISP/server environment. Either it is under too heavy of an overall load (the reseller is overselling), is underpowered, or one of your plugins is rogue and causing the CPU to strain.I would lean more towards the former as the cause of your problems. When purchasing shared server space, you really have no idea how many other sites are on that server nor do you really know how robust that server is. My guess is that moving to your own hosting on a VPS or to WordPress-specific hosting will resolve all of the issues.
|
_vi.11631
|
vim 7.4 console mode.When in insert mode, I would like to turn off the ugly yellow when I am in insert mode.My background is set to light, so vim should not use a light yellow for anything.
|
turn off -- INSERT -- color
|
insert mode;colorscheme;statusline
|
Put

hi clear ModeMsg

in your vimrc or color file.
|
_webmaster.6844
|
My topic seems to be solved by the Stack Exchange engine, but I (a human) couldn't find a solution here.

As you may understand from the subject, I have many repeated words on my site like "comment", "like", "read more", and also some system messages like "mysql_result", "error", "php", etc. I saw in Webmaster Tools that these are my most valuable words. Some people suggest loading those texts from a JS file and restricting access to those files, or using iframes for those text areas. Both have negative sides, too.

I am thinking about another solution, which is using images instead of the repeated texts, but it is neither practical nor looks good.

Thank you for suggestions.
|
How do you prevent common words from becoming your top keywords in Google?
|
seo;google;keywords
|
If the occurrences of "mysql_result", "error" or "php" are the result of PHP errors on the site, then it should be obvious you need to fix those errors! Try to find which pages they appear on (search for example "mysql_result site:example.com" in Google) and fix any problems.

Otherwise, don't worry about it. The keywords listed in GWT aren't necessarily the most valuable, they are just the most common. I have plenty of common words listed on various sites I've developed, such as 'pts', 'ago', 'new' and 'its'. In nearly all cases only the top 1 or 2 keywords were directly relevant to the site, with 1-2 more relevant words mixed in lower down.

Definitely do not use iframes or JavaScript to load individual words or sentences, unless the site would make complete sense without said iframes or JS. Even then it's probably not worth the hassle and extra page load time.
|
_codereview.171136
|
The following script is supposed to got through each server in the server list and check if there are users logged in. If there aren't, it supposed to disable logon using PSExec (essentially place it in maintenance mode). If there are users logged in, it will go through each of there sessions and log out those that have an idle time of greater than an hour. Please let me know if this code can be considered production ready. I've excluded the primary functions for log off and only included the script I was not sure about.$content = H:\demo\computernames.txt #please input where the list of servers is found (in a txt doc please)$input = H:\demo\session\run11.csv #please input where you want the csv to be exported (this includes all the active and disconnected servers)$export ='H:\demo\session\openservers.txt' #please input where you want to 'open servers' (no users) to be exported.#and that does it. cheers!if (Test-Path $input) { Clear-Content $input}$Servers = Get-Content $content$openservers = @()#THE LOOP THAT FILTERS THE SERVERS MENTIONED IN THE LIST ABOVEforeach ($Server in $Servers) { #tests the connection to the server (general test for the entire server) if (-not (Test-Connection $Server -Count 1 -Quiet)) { continue } #if their are no users logged in (hence the count of zero), push that server name onto a txt doc if (-not (Convert-QueryToObjects $Server -ErrorAction SilentlyContinue)) { $openservers += $server $openservers | Out-File $export #**TXT EXPORT**# } #if there are users logged onto the server, make sure they are either in disc or active state and push all the session details onto a seperate csv else { Convert-QueryToObjects -Name $Server | Where-Object { @('Disconnected','Active') -contains $_.SessionState }|Export-Csv $input -NoTypeInformation -Append # does the append so each time the content is kept (however, the issue of duplicate content arose (hence the fix above). 
} } #concludes the first foreach loop#<--- the folllowing function works with PsExec(external program) $openservers = Get-Content $export #**TXT IMPORT**# #for each server in the server list, it will check to make sure its online (Again). foreach($openserver in $openservers){ if( Test-Connection -Cn $openserver -Quiet) { H:\PsTools\PsExec.exe \\$openserver change logon /enable # calls an external tool and passes it the } else { Write-Host(the server>> $openserver is NOT online) -ForegroundColor DarkYellow }}#THIS IMPORTS FROM THE EXPORTED FILE PATH# Import-Csv $input | Where-Object { ($_.SessionState -eq 'Disconnected') -or #if its disconnected or (($_.IdleTime -like *:*) -and ($_.IdleTime -gt 00:59)) #idle time is over an hour, kindly do this: } | ForEach-Object { #disconnect the session by using the server name and session ID Disconnect-LoggedOnUser -ComputerName $_.ComputerName -Id $_.ID -Verbose }
|
production ready powershell script
|
powershell
| null |
_scicomp.24596
|
I am trying to find a package implementing the Discontinuous Galerkin (DG) method for solving hyperbolic and parabolic equations. In my research, I focus on designing new schemes for some very simple equations and boundary conditions, and on doing numerical analysis. Thus, I need a package that makes it easy for me to modify the algorithm. Moreover, Python would be better, as I am not familiar with C++. Could anyone tell me what would be a suitable package for me?
|
Package for Discontinuous Galerkin method
|
libraries;discontinuous galerkin
| null |
_codereview.95232
|
I want to replace an object in a List with another one. To find the item to replace, I need to check the id field of the objects to find which item is the same.

public void changeItemInformation(@NonNull Suggestion suggestion) {
    if (mSuggestions != null) {
        int itemChanged = -1;
        int index = -1;
        for (Suggestion suggestionItem : mSuggestions) {
            index++;
            if (suggestionItem.getId() == suggestion.getId()) {
                itemChanged = index;
                break;
            }
        }
        if (itemChanged != -1) {
            mSuggestions.set(itemChanged, suggestion);
            notifyItemChanged(itemChanged);
        }
    }
}

mSuggestions is the list of Suggestions. suggestion is the new item to replace the old one. notifyItemChanged() is an Android method to notify a RecyclerView that an element changed. Is there a better way to do this? There aren't any performance problems; the list always has a maximum of 100 elements.
|
Replace an element of a List with a concrete Id
|
java;array;android
|
I don't like your itemChanged and index variables.

If you iterate over a list and keep track of the index anyway, why don't you use an ordinary for-loop? In addition to that, you also just copy the value of your index variable and break out of the loop; you could easily merge all that into one loop with one condition.

This:

    int itemChanged = -1;
    int index = -1;
    for (Suggestion suggestionItem : mSuggestions) {
        index++;
        if (suggestionItem.getId() == suggestion.getId()) {
            itemChanged = index;
            break;
        }
    }
    if (itemChanged != -1) {
        mSuggestions.set(itemChanged, suggestion);
        notifyItemChanged(itemChanged);
    }

would become this:

    for (int index = 0; index < mSuggestions.size(); index++) {
        if (mSuggestions.get(index).getId() == suggestion.getId()) {
            mSuggestions.set(index, suggestion);
            notifyItemChanged(index);
            break;
        }
    }

You don't even need to check if there is an item to replace, because if the condition is never met, the code inside of it is never going to execute.
|
_unix.371722
|
Some commands are provided as both builtins and external utilities. Take echo for example. On my machine (macOS) running Bash 3.2:

    $ type echo
    echo is a shell builtin

Running man bash | less --pattern='^ *echo +\[' shows:

    echo [-neE] [arg ...]

But running man 1 echo shows a man page for a different implementation of echo, with a different signature:

    echo [-n] [string ...]

I'm able to use -e successfully, so I must be running the builtin, and presumably that's /bin/echo:

    $ which echo
    /bin/echo

Where does the other implementation live, and how can I distinguish between builtins and external utils in general (e.g. printf)?

Update/correction: thanks @Gilles for clarifying. And the proof is in the pudding!

    $ /bin/echo -e "\tabc"
    -e \tabc
    $ echo -e "\tabc"
        abc
|
How to distinguish between builtin and external util? (e.g. echo)
|
shell builtin
|
To find out whether a command is built in, run type.

    $ type echo
    echo is a shell builtin

type is itself a builtin and knows what commands are built in. (In bash, builtins can be disabled, and type will correctly report that a command is not built in if the builtin has been disabled.) type reports whatever will be executed if you use the command name: alias, function, builtin or external command.

which is an external command that reports the location of external commands. It doesn't know anything about aliases, functions or builtins. And it might not even report the correct external commands, depending on your setup. Just forget about which and use type instead.

"I must be running the builtin, and presumably that's /bin/echo"

No! By definition, a builtin is not an external command. The code that implements the echo builtin, like all other builtins, is in /bin/bash. /bin/echo is an external command that has the same name as the echo builtin.

When a command exists both as a builtin and as an external command, using its name calls the builtin. The precedence order for command names is: alias, then function, then builtin, then external command in the directories listed in $PATH, in order. If, for some reason, you want to force an external command, use its full path.
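A quick interactive sketch of the distinction described above (POSIX-compatible shells; the exact path printed by which may differ on your system):

```shell
# `type` reports what the shell will actually run; `which` only searches $PATH.
type echo          # -> "echo is a shell builtin"
command -v echo    # -> "echo" (resolved as the builtin; no path is printed)
which echo         # -> a path such as /bin/echo (the external command)

# To force the external implementation, call it by its full path:
/bin/echo hello
echo hello
```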
|
_unix.43948
|
From the LFS doc:

    The exec env -i.../bin/bash command in the .bash_profile replaces the running shell with a new one with a completely empty environment, except for the HOME, TERM, and PS1 variables. This ensures that no unwanted and potentially hazardous environment variables from the host system leak into the build environment. The technique used here achieves the goal of ensuring a clean environment.

What case will cause that problem? Is there any simple example?
|
Why lfs couldn't use original host environment variables?
|
environment variables;lfs
|
There are plenty of variables which will change how the shell behaves, which programs are executed, or which can hook into new programs. Examples of some of the more problematic environment variables are CDPATH, LD_LIBRARY_PATH, LD_PRELOAD, and PATH. By resetting the environment you can ensure a clean and sane build environment without needing to take care of (or reset) every such environment variable individually.
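A minimal demonstration of both the leak and the reset (any POSIX shell; env -i is the same mechanism the LFS .bash_profile uses):

```shell
# An exported variable leaks into every child process by default...
export SNEAKY=from-the-host
sh -c 'echo "child sees: ${SNEAKY:-unset}"'
# -> child sees: from-the-host

# ...but `env -i` starts the child with an empty environment, so nothing
# from the host session can influence the tools run inside it:
env -i sh -c 'echo "child sees: ${SNEAKY:-unset}"'
# -> child sees: unset
```

With variables like LD_PRELOAD or PATH, "influence" can mean running entirely different code, which is why LFS clears everything and re-adds only HOME, TERM, and PS1.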
|
_webmaster.24310
|
I have a website that serves static assets (CSS, JavaScript, images) directly from Amazon's S3. As far as I know S3 does not support gzip even when browsers accept it.What problems might arise if the HTML page links to pre-compressed style_v1.gz.css and script_v1.gz.js (instead of without gz) when the application sees gzip in the Accept-Encoding request headers?And do I need to set any special headers on these files in S3? (other than the obvious Content-Encoding: gzip)
|
How should I reference precompressed versions of static files hosted with Amazon's S3?
|
static content;gzip;amazon s3
|
Why do you want to serve those static assets with a gz extension at all?

Despite being the common indicator for ages, file extensions are actually an inferior and inaccurate mechanism for communicating a MIME type in the first place: ideally, web resources should be entirely agnostic to file extensions and communicate their content only by means of the appropriate HTTP headers like Content-Type and Content-Encoding (which, by the way, eases maintaining a URI space as well; Cool URIs don't change ;)

Granted, this did not work in the early days of the internet, and might still not work with a few inferior clients (annoyingly, some proxies and weaker mobile devices still seem to suffer from this), but unless you can clearly identify a relevant portion of your audience being affected by this limitation (and nudging them to upgrade their browser eventually is not an option), you should probably just start trusting the HTTP headers instead.

Consequently, I'd simply serve the static assets pre-compressed with the Content-Encoding and (to address your last question) the equally important Content-Type headers (otherwise clients would indeed need to infer the media type from the extension and/or the content itself), but without the gz extension, as discussed in Skyler Johnson's answer to Serving gzipped CSS and JavaScript from Amazon CloudFront via S3.

Obviously, once you rely on Content-Type and Content-Encoding, this should work just as well if you served it as gz.css (or with no extension at all, for that matter); however, as outlined above, this might actually turn out to be misleading to downstream clients, insofar as they do not necessarily have to deal with a gzip-encoded file anymore (a proxy might have decoded it already, for example).
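As a sketch of the pre-compression workflow described above: compress once at deploy time, then upload with explicit headers so clients never need a .gz extension. The bucket name and the upload step itself are illustrative only (commented out so the snippet runs without AWS credentials); the aws CLI's --content-type and --content-encoding options are the usual way to set the two headers discussed here.

```shell
# Pre-compress the asset; -c writes to stdout and leaves the original untouched.
printf 'body { color: #333; }\n' > style.css
gzip -c -9 style.css > style.css.gz
gzip -t style.css.gz && echo "gzip stream OK"

# Illustrative upload with explicit headers (requires the aws CLI and a real bucket):
# aws s3 cp style.css.gz s3://example-bucket/css/style.css \
#     --content-type "text/css" --content-encoding "gzip"
```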
|
_softwareengineering.305696
|
TL;DR: What are good practices for iteratively searching for a better solution?

Well, if I knew everything in advance and could immediately suggest a 146% correct solution for a given context, I'd probably be the richest man in the world, but I am definitely not. Thus some space and time for experimentation is needed.

When approaching a new technology or framework, it's quite reasonable (if you have time for it) to create a small prototype project specifically for testing its capabilities and exploring its caveats. Changing the architecture, on the other hand, is more about how code is written and refactored. It's more about the system's evolution, ease of development, and augmenting the product with new functionality. So I believe these kinds of changes are difficult to verify in isolation from real-world development use cases. I also doubt that the business would allow parallel development of several versions of the product just so developers could find a better solution. I see no other way but to introduce changes while developing new features.

After several iterations, stabilization is achieved and the solution gets polished. But then the system becomes a good example of the Lava Flow antipattern: you have several slightly different approaches coexisting (refactoring to the rescue). Not surprisingly, some teammates are going mad without a clear understanding of HOW they should do things.

"I do not want to constantly think about how I should do this, I just need to complete the task. 
We have conventions, and I am used to them."

That's what I hear quite often from a teammate. Actually, even setting the access modifier of a class to internal instead of public, or deciding to use constructor injection instead of the (incorrect but widespread) property injection, or using Trace.WriteLine can cause the same reaction.

So my relationship with some of my teammates is worsening, and I don't want that to happen, but at the same time I hate doing things a certain way just because everyone is used to it, without asking myself whether a better, more correct solution exists. Yet I understand I am not perfect and inevitably make mistakes sometimes.

The PM's claim is that I do not initiate discussion. But should I really ask for permission not to use copy-paste development and to extract the setup logic of a test?! Should I discuss whether I may use a test-data generator like AutoFixture?

A more recent example: I finally managed to express what I strongly dislike about our code. Our Get is developed both for the needs of the UI and for subsequent validation when updating; thus we reuse the entity (a completely anemic model) returned by the Get when performing an Update. So whenever we need to change what we show in the UI, we also touch the Update functionality. At the very least this requires changing unit tests.

I decided to check my guess: I extracted the query logic into another class and made this class inaccessible to the business rules, where only command-processing logic lives. I also put validation logic inside the entity.

The reaction of a reviewer? "What the heck? Are we moving to CQRS? Have we discussed it? We do not use logic inside entities. Why is the query merged with DB access logic?" (I simply return projections from an EF query.) Could I have done better?

A face-to-face discussion showed that the majority of the team considered this approach worth trying, though without being completely sure it would pay dividends later. 
And a few members were strongly against it, stating that reuse suffers and that the get-logic is going to be somewhat duplicated if we need data from other entities (aggregates).

The best way to test a hypothesis is practice, isn't it? The feature is isolated and still under development, and the requirements for UI and functionality will change over several iterations. All of this would allow us to safely test the assumption that this approach yields a more stable codebase. Nonetheless, those who were against it demand that it be refactored back to the conventional solution.
|
How to refine the architecture, look for better solutions and not to spoil relationship with the team?
|
design;architecture;refactoring;teamwork
| null |
_webapps.17667
|
How do you search for google+ on Twitter without results also showing the form without the plus sign (+)?I want to search for the literal google+ on Twitter. Unfortunately, the plus gets encoded and the Twitter search API does not find any results. Results appear the same as if there was no plus sign.
|
Search for google+ on Twitter without regular google in results
|
twitter;search;google plus;twitter api
|
This seems to be your best bet for now within Google Search as Google may have added it as a special reserved word in a similar format to say c++site:twitter.com inurl:status google+The following seems to search Google and Google+ but if you look at the results in realtime, there are also results that do not make any sense.twitter.com/?q=google%2B#!/search
|
_codereview.95934
|
Problems:

- Wanting to check all checkboxes when the checkbox with id="checkall" is checked.
- Wanting to uncheck the checkall box when any other checkbox is unchecked.

Solution:

    $(document).ready(function () {
        $('#checkall').on('click', function () {
            if ($(this).is(':checked')) {
                $('input[type=checkbox]').each(function () {
                    console.log(this);
                    this.checked = true;
                });
            } else {
                $('input[type=checkbox]').each(function () {
                    this.checked = false;
                });
            }
            // Uncheck Checkall if any other checkbox is unchecked
            if ($('#checkall').is(':checked')) {
                $('input[type=checkbox]:not(#checkall)').click(function () {
                    console.log(this);
                    $('#checkall').attr('checked', false);
                });
            }
        });
    });

I'm new with jQuery and I'm not sure if what I've written is really and truly production-value code, but it is operational. It's not a matter of life and death; it's really just a personal project I'm working on that will likely be used by myself and some friends only. I am in the game to learn, though, and if the code can be improved upon, please point me in the right direction to read about it more in depth. I've only been able to find bits and pieces on this topic.
|
Checking all checkboxes
|
javascript;beginner;jquery;html;form
|
I've written my changes in the code with comments, it explains it easier :) Most of the changes are good-to-knows, and you don't need the each() if you want them all to do the same.

    $(document).ready(function () {
        // We use the checkboxes multiple times on a page, save the reference.
        // This will save DOM lookups, thus smoother performance (though very little in this case)
        // Prefix with $ to indicate a jQuery object (this is a preference, not a must)
        var $allCheckBoxes = $('input[type=checkbox]').not('#checkall');

        // We use it multiple times, save the reference
        var $checkAllTrigger = $('#checkall');

        // We only need to calc this once, not on every click, so save it in a var
        var allCheckBoxesLength = $allCheckBoxes.length; // this gets set once, and that's exactly enough

        $checkAllTrigger.on('click', function () {
            // Use this.checked; this is the clicked element, .checked is true/false if it's checked or not
            $allCheckBoxes.prop('checked', this.checked);
        })
        // as per Paul's comment, blur the element to remove focus.
        // We chain the blur() to the on(); on() returns the object, you can think $(this)
        // gets returned, which you then blur.
        .blur();

        // Uncheck Checkall if any other checkbox is unchecked.
        // We do this by comparing the total amount of checkboxes vs the checked ones.
        // If e.g. 10 out of 10 are checked, they're all checked; check this on change
        $allCheckBoxes.on('change', function () {
            $checkAllTrigger.prop('checked', allCheckBoxesLength == $allCheckBoxes.filter(":checked").length);
        });
    });
|
_cs.11585
|
Is the following language context-free: $L = \{ uxvy \mid u,v,x,y \in \{ 0,1 \}^+, |u| = |v|, u \neq v, |x| = |y|, x \neq y\}$?

I think that it is not context-free, but I'm having a hard time proving it. I tried intersecting this language with a regular language (like $0^*1^*0^*1^*$, for example) and then using the pumping lemma and/or homomorphisms, but I always get a language that is too complicated to characterize and write down.
|
Is this strange language context free?
|
formal languages;context free;pumping lemma;pushdown automata
| null |
_softwareengineering.328013
|
I'm working as part of a team of 10 on an agile project; this is our first time using GitHub. We write our code, create a pull request, and then start on our next task. 95% of these pull requests are being reviewed by our scrummaster/manager, who tests whether everything is working but generally doesn't look at the code. Another issue is that he's a bit overloaded, so PRs can take several days to be seen, and if the code is important I'll sometimes just merge immediately after testing it myself.

I'm definitely guilty of not bothering to check other people's pull requests, as it's quite disruptive to immediately stop what I'm working on to look at code that has nothing to do with it. I'd also get considerably less done than everyone else if I started to accept PRs from all corners.

Who should accept responsibility for reviewing someone's pull requests, and how can we ensure this responsibility is not laid upon the same people all or most of the time?
|
How should my team distribute checking pull requests?
|
github;pull requests
|
A team of 10 is approaching the large side for an agile team, but you can still keep the pull requests manageable if you make a rule that people can merge after two thumbs ups. That means for every pull request you create, on average you are reviewing two, assuming you don't get many from outside your team.That's not an unreasonable rate, and usually once your team gets out of the habit of sticking it to one person, it becomes a lot easier. To my knowledge, everyone on my team has a bookmark of recently updated pull requests that they clear twice a day or so when they need a break from their own work. When someone needs a review more urgently, they make a special request. It's not that hard.
|
_unix.362932
|
I have two drives in my laptop:

- SSD (128GB): Windows 10 installed
- HDD (1TB): Windows 10 data; Linux Mint installed on ext4

Now, once you go SSD you never look back, so loading times on the Linux Mint installation are driving me nuts.

From what I've read, I could possibly clone the ext4 partition with Clonezilla and move it to a new ext4 partition on the SSD. Is that feasible? I wonder how GRUB will behave after the move. I am not very savvy with it and worried that any automated tool I could use after moving the Mint partition to the SSD is not going to help me out much.

Any help/comments would be greatly appreciated.
|
Moving Linux installation from HDD to SSD that already has Windows 10 installed
|
partition;grub2;dual boot;hard disk;ssd
|
Moving your Linux install partition with Clonezilla is definitely feasible, and it's probably the easiest way. Here's the general process:

1. Back up any important data. (Although I've successfully done this before, I haven't tested these instructions, so move forward without a backup at your own risk.)
2. Shrink your Windows partition to provide enough space for your Linux partition, plus a bit more (maybe ~1GB) to account for the fact that Clonezilla can't clone to a smaller drive than the original. Use a partitioning program (GParted Live would be suitable).
3. Create a new partition in the space that you made in step 2.
4. Boot into Clonezilla and do a disk-to-disk clone, from your original partition to your new one. Instead of choosing disk_to_local_disk in those instructions, choose part_to_local_part, and select the partitions to clone rather than disks to clone.
5. Install and configure GRUB on the new drive.
|
_vi.6350
|
When using u or CTL-R to undo or re-do an edit in vim, I seem to alter chunks of text, not just the most recent keystroke. What determines the size of the chunk that is considered to be a single edit?
|
How does vim determine the size of a single edit when using u and CTL-R?
|
command history
|
You are looking for the definition of undo-blocks.From :h undo-blocks:One undo command normally undoes a typed command, no matter how many changes that command makes. This sequence of undo-able changes forms an undo block. Thus if the typed key(s) call a function, all the commands in the function are undone together.The same block is used for redo. From :h redo:The last changes are remembered. You can use the undo and redo commands above to revert the text to how it was before each change. You can also apply the changes again, getting back the text before the undo.
|
_cs.30523
|
This problem is based on an order picking problem with a forward area. The problem description is as follows. We have a warehouse with a set of items $I$ and a forward area $F$ of size $k$. Each day, there is a set $D$ of deliveries. Each delivery $d_i$ is a set of items such that $d_i \subset I$ and $|d_i| < k$. The items for each delivery have to be picked from $F$ and each delivery must be completed before the items for the next delivery can be picked. The items in $F$ are not consumed ($F$ contains pallets of each item, and we assume that a pallet does not run out). If a delivery requires an item that is not in $F$, the required item can be swapped with an existing item.We now need to find an order $d_1,...,d_n$, such that we minimize the amount of swaps needed.My question is: is there any available literature on this kind of problem? Specifically regarding genetic algorithms, since that is the goal of my research.Thus far, the closest thing I have found is the Traveling Salesman Problem, using the minimum number of swaps between deliveries as distance measure. However, since the items in $F$ are not only influenced by the predecessor of a delivery, the distance between two consecutive deliveries also depends on (all) previous deliveries.
|
Given a set of sets and a storage area, find an order that minimizes the sum of the differences between each set and the storage area
|
reference request;optimization;sets;traveling salesman
| null |
_softwareengineering.87396
|
WebServices are one possible implementation of a SOA, which is where the "Services" part of the name comes from. But why are they called *Web*Services? Is it because they use web technologies and protocols (HTTP, XML, ...) for the implementation?
|
name origin of WebService
|
web services;terminology
|
Straight from wikipedia:The W3C defines a Web service as a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically Web Services Description Language WSDL). Other systems interact with the Web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards.[1]Source: http://www.w3.org/TR/2004/NOTE-ws-gloss-20040211/As to the origin of the word:Web - (Referring to the World Wide Web) N. a vast computer network linking smaller computer networks worldwide (usually preceded by the ). the Internet includes commercial, educational, governmental, and other networks, all of which use the same set of communications protocols.Service - N. an act of helpful activity; help; aid: to do someone a service.So a Web Service is some sort of helpful utility / function that is accessible over the global computer to computer network we call the Web. Typically these web services interact with users via traditional web protocols like HTTP and XML (or JSON).
|
_webmaster.60133
|
What is the difference between a CNAME and a subdomain?

I understand that a CNAME (the left side of a domain) can point to the domain, so you can have two different URLs point to the same address, i.e. ex1.mydomain.com, if set up as a CNAME, can return the IP of mydomain.com.

If ex1.mydomain.com is set up as a subdomain, does it have a different IP?

Another question is what the ideal setup would be in this situation:

- I have IP1:80 for a web app
- I have IP2:80 for another app

Can I point both of these IPs to the same A record, with perhaps a different CNAME or subdomain?

Thanks for any help!
|
Difference between CNAME and SUBDOMAIN
|
dns;subdomain;cname
|
A CNAME is a type of DNS record, where a hostname points at another hostname. An A record is another type of DNS record, where a hostname points at an IP address.A subdomain is what you described as 'the left side of the domain', e.g. webmasters.stackexchange.com is a subdomain of stackexchange.com. The DNS setup for a subdomain could use either an A record or a CNAME.Your question:Can I point both of these IPs to the same A record, with perhaps a different cname or subdomain?doesn't really make sense. You don't point IPs at A records, you point hostnames at IPs using A records. If you're asking if you could point a domain and a subdomain at the same IP, the answer is yes.This might be clearer with a real world example:webmasters.stackexchange.com has an A record that points to the IP 198.252.206.140.stackexchange.com also has an A record that points to the IP 198.252.206.140.It would therefore be possible to change webmasters.stackexchange.com to CNAME to stackexchange.com, and everything would continue to work as it does now.(In practice, CNAMES are slightly slower than A records as they might result in an additional DNS lookup, so that's one reason why A records are more commonly used.)
|
_unix.45433
|
I have a sheevaplug with Debian 5.0. Internet was always working fine until today. I have a bunch of PCs connected via wireless to my router and internet works fine on them. From today, though, my Debian box won't connect to the internet and I have no idea why... when I run ping google.com I get this:

    ping google.com
    PING google.com (173.194.35.8) 56(84) bytes of data.
    ^C
    --- google.com ping statistics ---
    443 packets transmitted, 0 received, 100% packet loss, time 442006ms

It stays like this forever and I am unable to wget anything. Here is some info that might be useful.

Content of /etc/network/interfaces:

    # Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
    # /usr/share/doc/ifupdown/examples for more information.
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet dhcp

Output of ifconfig:

    eth0      Link encap:Ethernet  HWaddr f0:ad:4e:01:40:78
              inet addr:192.168.2.12  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: fe80::f2ad:4eff:fe01:4078/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:2200 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2926 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:229615 (224.2 KiB)  TX bytes:466378 (455.4 KiB)
              Interrupt:11

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Content of /etc/resolv.conf:

    nameserver 192.168.2.1

Output of netstat -rn:

    debian:~# netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    192.168.2.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
    0.0.0.0         192.168.2.1     0.0.0.0         UG        0 0          0 eth0

Output of arp -na:

    debian:~# arp -na
    ? (192.168.2.1) at 00:1a:2a:27:c0:68 [ether] on eth0

Output of route:

    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    192.168.2.0     *               255.255.255.0   U     0      0        0 eth0
    default         .               0.0.0.0         UG    0      0        0 eth0

I tried everything: static IP, restarting the network connection, restarting the router, no luck... Any ideas what might be wrong?

Edit: here's the output of route:

    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    192.168.2.0     *               255.255.255.0   U     0      0        0 eth0
    default         .               0.0.0.0         UG    0      0        0 eth0

Edit: here's the output of netstat -rn:

    debian:~# netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    192.168.2.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
    0.0.0.0         192.168.2.1     0.0.0.0         UG        0 0          0 eth0

and arp -na:

    debian:~# arp -na
    ? (192.168.2.1) at 00:1a:2a:27:c0:68 [ether] on eth0
|
Debian sheevaplug internet not working
|
linux;networking
| null |
_webmaster.4472
|
I'd like to make my CSS3 background gradient fixed. Any suggestions?
|
CSS3 Background Gradient Fixed
|
css3
|
A way I have done it in the past: within your CSS file, set your background color to a solid color (for example, #FFF). Create a gradient image that fades from red (top) to white (bottom), with a width of 8px and a height of 800px. Go back to your CSS file, add a background-image attribute pointing to the gradient image you just created, and set it to repeat horizontally.

Now if someone has a screen resolution greater than 800, it will not matter, because the white bottom of the image will blend in perfectly with the white background color you set.
|
_unix.280146
|
I want to run some long calculation jobs in the background, and I chose screen. However, I found that screen doesn't accept redirection. For example,

    screen -dmS name ls > ls.dat

won't generate ls.dat.

Fortunately, screen -L will output screen's log to a file. However, what it does is append to the previous log file, even if I pkill screen and start a fresh new screen. Is there a way to force it to overwrite the previous log file when starting a new screen?
|
how to force `screen -L` to overwrite log?
|
logs;gnu screen
|
I don't believe you can force screen to overwrite the log. It logs to screenlog.%n by default, where %n is the screen window number (so each window has its own log). If that file exists, it appends to it.

However, you can tell screen to use a different filename, including a timestamp, so you'll get a new log file each time, but you'll then need to manage the old logs. In .screenrc you can put the following line,

    logfile /path/to/log/screenlog-%n-%Y%m%d-%c:%s

to create log files that include the window number (%n) and the year, month, date, and time.

Alternatively, you could create a bash alias that deletes the log file before running screen, for example,

    alias screen='rm /path/to/log; screen'

If you want to affect screen log files in the current directory, just remove /path/to/log/ from the commands above.

Lastly, depending on what you're trying to achieve, the Linux tool script might be more useful than just logging in screen. Run man script for more information.
|
_codereview.9296
|
I am working on a fluent interface for exception handling. Here's how it looks:

    On<NullReferenceException>()
        .First.LogTo(errors)
        .Then.TranslateTo("null object");

    On<FileNotFoundException>()
        .First.Translate(exception => exception.Message)
        .Then.LogTo(errors);

    On<InvalidOperationException>()
        .If(exception => exception.Message.Contains("This is a test"))
        .First.DoSomethingElse(exception => Console.WriteLine(exception.Message));

    On<ArgumentException>().Ignore();

Do you think it's as fluent as it should be? Can I make it better? Any suggestions?
|
How to improve this fluent interface?
|
c#
| null |
_unix.88550
|
How do I make something similar to Excel's VLOOKUP function in Unix?

Excerpt from the Office website on VLOOKUP:

    The V in VLOOKUP stands for vertical. Use VLOOKUP instead of HLOOKUP when your comparison values are located in a column to the left of the data that you want to find.

    Syntax: VLOOKUP(lookup_value, table_array, col_index_num, range_lookup)

    Lookup_value: The value to search in the first column of the table array. Lookup_value can be a value or a reference. If lookup_value is smaller than the smallest value in the first column of table_array, VLOOKUP returns the #N/A error value.

    Table_array: Two or more columns of data. Use a reference to a range or a range name. The values in the first column of table_array are the values searched by lookup_value. These values can be text, numbers, or logical values. Uppercase and lowercase text are equivalent.

    Col_index_num: The column number in table_array from which the matching value must be returned. A col_index_num of 1 returns the value in the first column in table_array; a col_index_num of 2 returns the value in the second column in table_array, and so on. If col_index_num is less than 1, VLOOKUP returns the #VALUE! error value; if it is greater than the number of columns in table_array, VLOOKUP returns the #REF! error value.

    Range_lookup: A logical value that specifies whether you want VLOOKUP to find an exact match or an approximate match:

File1:

    1GR_P1:001PI
    :040VG_L1
    :001PO_L3
    1JPI_P1:001PO_L1
    1JPI_P1:001PO_L2

File2:

    1JPI_P1:001PO_L1 1401UC
    1JPI_P1:001PO_L2 1401UC
    1HIK_P2:001ER 1402UC
    1GR_P1:001PI 1402UC

Output-File3:

    1GR_P1:001PI 1402UC
    :040VG_L1 NA
    :001PO_L3 NA
    1JPI_P1:001PO_L1 1401UC
    1JPI_P1:001PO_L2 1401UC
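One common way to emulate VLOOKUP's exact-match mode on Unix is a two-pass awk script: the first pass loads the lookup table into an array keyed on its first column, and the second pass maps each key from the main file, printing NA when there is no match. A sketch using the sample data above (file names are illustrative):

```shell
# Build the two input files from the question's sample data.
printf '%s\n' '1GR_P1:001PI' ':040VG_L1' ':001PO_L3' \
              '1JPI_P1:001PO_L1' '1JPI_P1:001PO_L2' > file1
printf '%s\n' '1JPI_P1:001PO_L1 1401UC' '1JPI_P1:001PO_L2 1401UC' \
              '1HIK_P2:001ER 1402UC' '1GR_P1:001PI 1402UC' > file2

# NR==FNR is true only while reading the first file (file2, the lookup table);
# for each line of file1 we then print the key plus its looked-up value or NA.
awk 'NR==FNR { val[$1] = $2; next }
     { v = ($1 in val) ? val[$1] : "NA"; print $1, v }' file2 file1 > file3
cat file3
```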
|
vlookup function in unix
|
text processing
| null |
_reverseengineering.1839
|
I'm trying to analyse a piece of malware and after finding the OEP I'm having trouble dumping the process.Using LordPE (and IDA) I get an error saying Couldn't grab process memory, I searched for this and resolved it by using the intellidump engine in LordPE. Although this produces an exe, the file doesn't have an icon. I also tried using OllyDump and get the error: Unable to read memory of debugged process.... I tried to fix this by modifying memory segments in Olly, setting their access to 'full access' however I'm still getting the same error.When I try to use ImpREC on the dump from LordPE, I get sometimes get another error with similar gist.I'm guessing there's some kind of memory protection going on here but really have no idea what to do next. Any help would be much appreciated.
|
Can't access process memory when dumping
|
unpacking;dumping
| null |
_unix.259361
|
Excel files can be converted to CSV using:

    $ libreoffice --convert-to csv --headless --outdir dir file.xlsx

Everything appears to work just fine. The encoding, though, is set to something wonky. Instead of the UTF-8 mdash (—) that I get if I do a "save as" manually from LibreOffice Calc, it gives me a \227. Using file on the CSV gives me "Non-ISO extended-ASCII text, with very long lines". So, two questions:

1. What on earth is happening here?
2. How do I tell libreoffice to convert to UTF-8?

The specific file that I'm trying to convert is here.
|
Specify encoding with libreoffice --convert-to csv
|
character encoding;unicode;conversion;libreoffice
|
Apparently LibreOffice tries to use ISO-8859-1 by default, which is causing the problem.

In response to this bug report, a new parameter --infilter has been added. The following command produces a U+2014 em dash:

libreoffice --convert-to csv --infilter=CSV:44,34,76,1 --headless --outdir dir file.xlsx

I tested this with LO 5.0.3.2. From the bug report, it looks like the earliest version containing this option is LO 4.4.

See also: https://ask.libreoffice.org/en/question/13008/how-do-i-specify-an-input-character-coding-for-a-convert-to-command-line-usage/
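On builds older than LO 4.4, where --infilter is unavailable, one workaround (a suggestion of mine, not part of the answer) is to re-encode the bad output afterwards: the \227 byte is the em dash in Windows-1252, which iconv can map to UTF-8:

```shell
#!/bin/sh
# Repair a CSV that was written in Windows-1252 instead of UTF-8.
# Byte 0x97 (\227 octal) is the CP1252 em dash; iconv turns it into U+2014.
printf 'a \227 b\n' > wonky.csv          # stand-in for the mis-encoded export
iconv -f WINDOWS-1252 -t UTF-8 wonky.csv > fixed.csv
```

This only helps if the whole file is consistently CP1252; any byte that is invalid in the source encoding will make iconv abort with an error.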
|
_vi.6743
|
I currently have the following map set up to allow me to quickly create a pair of parens by tapping ( twice, leaving the cursor between them for more typing:

imap (( ()<left>

Separately, I have another map for skipping over the closing paren when it already exists. That is, if you type ) when there is already a ) to the right of the cursor, instead of inserting another ) the cursor just jumps over the existing one:

inoremap <expr> ) strpart(getline('.'), col('.')-1, 1) == ")" ? "\<Right>" : ")"

The problem occurs in the interaction of these two maps. That is, say my code is:

func(foo)

I realize that foo is not a value, but a function, and what I really want is to pass the return value from calling foo() to func, so I go ahead and move my cursor (signified by |) between the o and ):

func(foo|)

Now what I want to do is simply tap ( twice, which will trigger my first map and leave me with:

func(foo())

But in fact, because of my second map, what I get is this:

func(foo()

How can I create an exception to my second map so that it won't interfere with this specific case? That is, when the typed ) is produced by another map rather than by me physically hitting a key, I don't want the second map to be triggered.
|
Skip a map in a specific circumstance
|
vimscript;key bindings
|
Unless I misunderstood you, all you need to do is change your first map to:

inoremap (( ()<left>

This will ignore mappings on the right-hand side and therefore will not trigger your second mapping. I tried it out and it seemed to work for me.
|
_cs.28039
|
I would like to minimize the linear pseudo-Boolean function$$\mathrm{obj} = \sum_i c_i \mathrm{sel}_i$$subject to $$\sum_i c_i \mathrm{sel}_i \geq \mathrm{Value} \qquad\qquad(1)$$ where$c_1,\dots,c_5, \mathrm{Value}$ are some constants;$\mathrm{sel}_1,\dots,\mathrm{sel}_5$ are sort of selector variables, $0 \leq \mathrm{sel}_i \leq 1$. Everything is pretty straightforward, but I need to check constraint (1) if and only if some $\mathrm{sel}_i = 1$. In other words, if $\mathrm{obj} = 0$, that is OK with me, but if some or all of the $\mathrm{sel}_i$ are picked, I need to check $\mathrm{obj} \geq \mathrm{Value}$. Currently I'm using SAT to solve a similar problem, but I'd prefer to study something else. Maybe I need to consider a different framework for this task? Any directions will be very useful. Thanks in advance.
|
Check constraint under some condition in linear programming
|
optimization;satisfiability;linear programming
|
This can be solved by case-splitting, depending upon whether any of the $sel_i$ are 1 or not, and then solving each case separately. Each case can be handled using integer linear programming, so you might like to read about ILP solvers.

There are only two cases. We'll optimize for each, separately. The two cases are:

Case 1: No selector is $=1$. In this case, $sel_i < 1$ for all $i$, and the linear inequality can be skipped. So the problem is to minimize the objective function subject to the requirement that $sel_i < 1$ for all $i$.

You could solve this using an integer linear programming solver, but actually that's shooting a fly with a cannon. Since each $sel_i$ is required to be an integer in the range $0 \le sel_i \le 1$, this means you know that $sel_i = 0$ for all $i$. So this case really covers only one possible assignment to the selectors: the all-zeros assignment. You can simply evaluate the value of the objective function at $sel_i=0$; in particular, this means that the objective function takes on the value $0$. So the value $0$ is always attainable.

Case 2: At least one selector is $=1$. In this case, there exists at least one index $i$ such that $sel_i = 1$, and the linear inequality cannot be skipped. So now we have some kind of knapsack-like problem: we are looking for a sum of a subset of the $c_i$'s that is as small as possible, subject to the requirement that it be at least $Value$.

This could be handled with integer linear programming. Each $sel_i$ is an integer unknown, and you have linear inequalities and want to minimize a linear objective function. Adding the extra constraint $sel_1 + sel_2 + \dots + sel_5 \ge 1$ will ensure that at least one of the selectors is $=1$, as needed for this case. This means that you could just feed the problem to an ILP solver and see what solution it gives you.

You might be able to find better algorithms by borrowing techniques used for the subset sum problem or the knapsack problem.

In fact, it looks like this is a kind of reverse knapsack. If we just negate the constants, then we're looking for a subset of the negated constants whose sum is as large as possible, subject to the constraint that it be $\le -Value$. That sounds like a knapsack problem; each negated constant is the weight and value of some object, and $-Value$ represents the capacity of the knapsack. The one twist is that now your objects might have negative weight/value.

Since we're in the special case where weight = value, this is in fact an instance of a subset-sum kind of problem. The decision version of your problem is: given $\alpha$, we want to know whether there is any subset of the constants whose value is in the range $[\alpha, -Value]$. If we can solve the decision version, then we can solve your optimization problem by using binary search over $\alpha$.
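With only five selectors, Case 2 is also easy to sanity-check by brute force over the $2^5-1=31$ nonempty assignments; a throwaway sketch (the constants c = 3 5 8 2 7 and Value = 10 are made-up illustration data, not from the question):

```shell
#!/bin/sh
# Enumerate every nonempty 0/1 assignment to sel_1..sel_5 and keep the
# smallest subset sum that still satisfies sum >= Value (Case 2).
best=$(awk 'BEGIN {
  split("3 5 8 2 7", c, " ")          # hypothetical c_1..c_5
  value = 10                          # hypothetical Value
  best = -1
  for (mask = 1; mask < 32; mask++) { # the 31 nonempty selector assignments
    s = 0
    for (i = 1; i <= 5; i++)
      if (int(mask / 2^(i-1)) % 2) s += c[i]
    if (s >= value && (best < 0 || s < best)) best = s
  }
  print best
}')
echo "minimum objective with at least one sel_i = 1: $best"
```

For the illustration data this prints 10 (attained by 3+7, and also by 8+2); a printed -1 would mean Case 2 is infeasible and only the all-zeros assignment remains.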
|
_unix.309980
|
I have my VMs on a dedicated computer; over SSH I use vboxheadless to start them, and then I use remote desktop to use them.Now, while a VM is running, it is trivial to insert the GuestAdditions image into the guest's optical drive and install them. To do that with an attached GUI, it's at Devices > Insert Guest Additions CD Image.However, I'm not using the GUI, because I'm using the guest OS via remote desktop, so I obviously don't have the menus either.I'd like to know how to perform this function from the command line. I'd imagine it's done with vboxmanage, inserting and removing that CD image from the virtual guest machine's drive.Also, is there a way to insert any other CD images and/or floppy images into the virtual drives of a guest system, and remove them, while the guest OS is running?
|
How to insert guest additions image in VirtualBox from command line, while VM is running?
|
command line;virtualbox
| null |
_webapps.52314
|
Is there a spreadsheet function that can watch a row and update a cell with the current date when any cell in that row changes? If not for the entire row then perhaps just watch a single cell?
|
Spreadsheet: function to watch row for changes
|
google spreadsheets
| null |
_unix.110415
|
I have installed Postfix. I can send mail between local users as expected, but I would like to limit the access of a particular user to the server. I edited the /etc/postfix/access file in this way:

[email protected]   REJECT

where example.com is $mydomain. I also ran postmap access to generate the database file. Strangely, I can still send mail from the diego account using mutt. Here is the maillog:

Jan 22 15:46:36 server postfix/pickup[6637]: 62117BF647: uid=500 from=<diego>
Jan 22 15:46:36 server postfix/cleanup[6737]: 62117BF647: message-id=<[email protected]>
Jan 22 15:46:36 server postfix/qmgr[6638]: 62117BF647: from=<[email protected]>, size=422, nrcpt=1 (queue active)
Jan 22 15:46:36 server postfix/local[6739]: 62117BF647: to=<[email protected]>, relay=local, delay=0.07, delays=0.06/0.02/0/0, dsn=2.0.0, status=sent (delivered to mailbox)
Jan 22 15:46:36 server postfix/qmgr[6638]: 62117BF647: removed
|
Why access file is being ignored by Postfix?
|
email;postfix
|
I finally found the correct syntax. If you want to just block a user, you have to edit the main.cf file in this way:

smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/access

and in the access file:

user@                         REJECT
[email protected]       REJECT    # this will REJECT only if the sender is from the server.example.com domain
|
_unix.138275
|
I am trying to attach multiple files in Unix, which are the result of a find command. When I try to send the mail, the attachments are missing.

dir=$path
echo "Entered into $spr/sum_master"
for fil in `find $dir -ctime -2 -type f -name "Sum*pdf*"`
do
  uFiles=`echo $uFiles ; uuencode $fil $fil`
done
($uFiles) | mailx -s "subject" [email protected]

What is wrong with this code?
|
Attachment missing in Unix mail when multiple files were attached
|
shell script;mailx
|
If uFiles ends up containing the string foo bar qux, then the last line runs the command foo with the arguments bar and qux (in a subshell). This results in the error message "foo: command not found" (or similar), and mail gets an empty input.

That's not the only problem with the script. The command that builds up the uFiles variable doesn't do at all what you think it does. Run bash -x /path/to/script to see a trace of the script; it'll give you an idea of what's happening. You're echoing the uuencode command instead of running it. You don't need echo there:

uFiles="$uFiles $(uuencode $fil $fil)"

That will make the loop work, but it's brittle; in particular, it will break on file names containing spaces and other special characters (see "Why does my shell script choke on whitespace or other special characters?" for more explanations). Parsing the output of find is rarely the easiest way to do something. Instead, tell find to execute the command you want to execute.

find "$dir" -ctime -2 -type f -name "Sum*pdf*" -exec uuencode {} {} \;

The output of that is the concatenation of the uuencoded files that you were trying to build. You can pass it as input to mail directly:

find "$dir" -ctime -2 -type f -name "Sum*pdf*" -exec uuencode {} {} \; |
mailx -s "subject" [email protected]

If you want to detect potential failures of the uuencode step, you can stuff it into a variable (but beware that it might be very big):

attachments=$(find "$dir" -ctime -2 -type f -name "Sum*pdf*" -exec uuencode {} {} \;)
if [ $? -ne 0 ]; then
  echo 1>&2 "Error while encoding attachments, aborting."
  exit 2
fi
if [ -z "$attachments" ]; then
  echo 1>&2 "Notice: no files to attach, so mail not sent."
  exit 0
fi
echo "$attachments" | mailx -s "subject" [email protected]

Alternatively, write a temporary file.

attachments=
trap 'rm -f "$attachments"' EXIT HUP INT TERM
attachments=$(mktemp)
find "$dir" -ctime -2 -type f -name "Sum*pdf*" -exec uuencode {} {} \; >"$attachments"
if [ $? -ne 0 ]; then
  echo 1>&2 "Error while encoding attachments, aborting."
  exit 2
fi
if ! [ -s "$attachments" ]; then
  echo 1>&2 "Notice: no files to attach, so mail not sent."
  exit 0
fi
mailx -s "subject" [email protected] <"$attachments"
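uuencode comes from sharutils and isn't always installed; the find -exec pattern from the answer can be sanity-checked with any encoder as a stand-in. A sketch with base64 and throwaway file names:

```shell
#!/bin/sh
# Demonstrate the find -exec idiom from the answer, with base64 standing
# in for uuencode so the sketch runs without sharutils installed.
dir=$(mktemp -d)
printf 'hello\n' > "$dir/Sum_a.pdf"
printf 'world\n' > "$dir/Sum_b.pdf"
attachments=$(find "$dir" -type f -name 'Sum*pdf*' -exec base64 {} \;)
if [ -z "$attachments" ]; then
  echo 1>&2 "Notice: no files to attach, so mail not sent."
else
  printf '%s\n' "$attachments"        # for real mail, pipe this into mailx
fi
rm -rf "$dir"
```

The -ctime -2 test is omitted here because the sample files are freshly created; in the real script it stays in the find expression.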
|
_webmaster.36504
|
I have a server at www.example.com, and I want to register another domain like www.examplehelp.com whose DNS points to www.example.com/help. Can I do something like this with 1&1 domain registration?
|
Can I register DNS under an IP with a subdomain on 1&1?
|
dns;domain registration;1and1
| null |
_unix.302514
|
I prepared a bootable Kali 2016 USB using Win32DiskImager, following the tutorial on Kali's official site. I restarted the computer and booted from my Kali USB, but the setup doesn't start; after a while my PC boots from the hard disk normally. I tried an Ubuntu image: it booted and started the Ubuntu installation. So I thought the problem was with the Kali image, and I compared the Ubuntu and Kali images. I saw that while Ubuntu includes an EFI file, there is no EFI file on Kali. Is the problem that this file is missing, and how do I solve it? Thanks
|
Win10 kali dual boot installing problem
|
kali linux;uefi
|
The problem was that Secure Boot was enabled. I disabled it and tried again; it worked and I got the install screen. [SOLVED]
|