Dataset columns: id (string, 5-27 chars), question (string, 19-69.9k chars), title (string, 1-150 chars), tags (string, 1-118 chars), accepted_answer (string, 4-29.9k chars).
_unix.314968
When upgrading from Jessie to Stretch, at the end of dist-upgrade, it ends with an error:

Errors were encountered while processing:
 nagios-nrpe-server
E: Sub-process /usr/bin/dpkg returned an error code (1)

I have tried running apt upgrade, install, and reinstall without correcting this. What to do?
Strange nagios-nrpe-server error upgrading from Jessie to Stretch
debian;apt;monitoring;nagios;dist upgrade
To finish installing nagios-nrpe-server, I ended up checking the post-install scripts. In nagios-nrpe-server.postinst:

#!/bin/sh
set -e
# Automatically added by dh_installinit
if [ -x /etc/init.d/nagios-nrpe-server ]; then
    update-rc.d nagios-nrpe-server defaults >/dev/null
    invoke-rc.d nagios-nrpe-server start || exit $?
fi
# End automatically added section

As I have nagios-nrpe invoked by (x)inetd rather than running as a daemon, the startup failed, and hence the apt dist-upgrade error. For the moment I have commented out the start line, and I am considering whether to file a bug and/or to change from xinetd to a daemon. I use xinetd because I also use it to invoke the backup daemon.
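For illustration, the edited script would look roughly like this (a sketch based only on the snippet quoted above, not the whole maintainer script):

#!/bin/sh
set -e
# Automatically added by dh_installinit
if [ -x /etc/init.d/nagios-nrpe-server ]; then
    update-rc.d nagios-nrpe-server defaults >/dev/null
    # Commented out: nrpe is started from (x)inetd on this host, so the
    # daemon start fails and aborts the package configuration.
    # invoke-rc.d nagios-nrpe-server start || exit $?
fi
# End automatically added section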
_unix.49798
I'm using Mint 13 if that matters. And I'm new. Anyway, I tried installing Adobe Reader 9 using their installer from:

http://www.adobe.com/support/downloads/product.jsp?product=10&platform=Unix

It got stuck on a line that said something about setting up icons. There was no CPU usage for about an hour, so I killed the process. Of course, running apt-get again (to install another program) gives me a lock error. So after rebooting to unlock, I use this command:

sudo dpkg -a --configure

and then...

sudo apt-get -f install

... as instructed by similar questions on the AskUbuntu forum. However, it once again gets stuck at an installation step and gives me this line:

Setting up acroread (9.4.7-1oneiric1) ...

... which I then kill. I tried using Synaptic to mark the incomplete acroread install for deletion, but it gets hung up on another line. It says something about a plugin being invalid, if I recall correctly. I want to tell apt-get to just stop trying to install acroread every time I use the command. Any ideas?
Apt-get refuses to stop trying to install Acroreader
apt;dpkg;synaptic
null
_unix.295388
I want to disable 'app folders' in the GNOME menu, because I want to have all applications sorted alphabetically. gsettings set org.gnome.desktop.app-folders folder-children [] makes the folders disappear, but after a reboot the app folders are set back to default. How can I make the setting persistent? What do I have to do? Or is this just a bug in Fedora 24 or in GNOME 3.20?
How can I disable app folders in the GNOME menu permanently?
fedora;gnome3;dconf
It has to be [''] instead of []. Thanks and reference to the user zdenek from the Ask Fedora platform, who helped me figure it out and find the solution: How to make app folders settings permanent? So, the command is:

gsettings set org.gnome.desktop.app-folders folder-children ['']
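A quick way to confirm the setting stuck (note the extra shell quoting: without it, most shells strip the inner quotes and pass plain [] to gsettings):

gsettings set org.gnome.desktop.app-folders folder-children "['']"
gsettings get org.gnome.desktop.app-folders folder-children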
_codereview.92791
I made a program that asks for a specified amount of numbers and checks if the input number is in the primitive type range. If it is, it fits in the primitive type. For each input number, it checks if the number is within each primitive type's MIN and MAX range. If it is between the range, it prints the type(s) it fits in.

import java.math.BigInteger;
import java.util.Scanner;

public class Main {

    private Scanner in = new Scanner(System.in);

    public Main() {
        // Get input on how many numbers to ask for
        int t = in.nextInt();
        String input;
        for (int times = 0; times < t; times++) {
            // Ask for the number
            input = in.next();
            try {
                BigInteger number = new BigInteger(input);
                System.out.println(input + " can be fitted in:");
                if (number.longValue() < Byte.MAX_VALUE && number.longValue() > Byte.MIN_VALUE) {
                    System.out.println("* byte");
                }
                if (number.longValue() < Short.MAX_VALUE && number.longValue() > Short.MIN_VALUE) {
                    System.out.println("* short");
                }
                if (number.longValue() < Integer.MAX_VALUE && number.longValue() > Integer.MIN_VALUE) {
                    System.out.println("* int");
                }
                // If the longValueExact() returns a greater number than the long max value, then it throws an ArithmeticException
                if (number.longValueExact() < Long.MAX_VALUE && number.longValue() > Long.MIN_VALUE) {
                    System.out.println("* long");
                }
            } catch (NumberFormatException | ArithmeticException e) {
                System.out.println(input + " * nothing");
            }
        }
    }

    public static void main(String[] args) {
        new Main();
    }
}

I got this challenge from this challenge website. It may explain the challenge better.
Checking if a number fits in a primitive type
java;beginner
    String input;
    for(int times = 0; times < t; times++) {
        // Ask for the number
        input = in.next();

There's no need to declare input outside of the loop. It has no advantage at all, as you don't need to keep it between iterations. Always minimize the scope.

    // If the longValueExact() returns a greater number than the long max value, then it throws an ArithmeticException

As pointed out in a comment, I misread longValueExact as longValue. I updated the answer so that both are covered. For longValue, this isn't true, and the Javadoc says it pretty clearly:

    if this BigInteger is too big to fit in a long, only the low-order 64 bits are returned

For longValueExact, the following is said:

    If the value of this BigInteger is out of the range of the long type, then an ArithmeticException is thrown.

So checking for the exception is correct. If it gets thrown, the number doesn't fit in long, and obviously in no smaller type either.

    if(number.longValueExact() < Long.MAX_VALUE && number.longValue() > Long.MIN_VALUE) {
        System.out.println("* long");
    }

By definition, there's no long bigger than the biggest long, is there? So this test is a tautology. Using longValueExact makes the program correct. With longValue, it wouldn't work, and instead of testing if a number fits in a type, you would test if its lowest 64 bits do.

Still, letting an exception be thrown for a normal program flow is extremely inefficient, as the JIT assumes that exceptions are, well... exceptional. Moreover, filling their stack trace is pretty complicated and slow, as they actually don't exist in optimized code. In order to get faster code, you need to do it the other way round: convert to BigInteger and then compare. As a bonus to speed, you get compatibility with Java 7. You should also fix your spacing and braces. Let your IDE do it, it's free.
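A minimal sketch of that compare-first approach (class and method names here are illustrative, not from the original post):

import java.math.BigInteger;

public class RangeCheck {
    private static final BigInteger LONG_MIN = BigInteger.valueOf(Long.MIN_VALUE);
    private static final BigInteger LONG_MAX = BigInteger.valueOf(Long.MAX_VALUE);

    // True if number lies in [Long.MIN_VALUE, Long.MAX_VALUE]; no exception is
    // ever thrown, so the JIT-unfriendly catch-based control flow goes away.
    static boolean fitsInLong(BigInteger number) {
        return number.compareTo(LONG_MIN) >= 0 && number.compareTo(LONG_MAX) <= 0;
    }
}

The same pattern works for byte, short and int by comparing against their MIN_VALUE/MAX_VALUE constants lifted into BigInteger.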
_vi.10777
Due to performance, I disable cursorline and cursorcolumn in Vim. However, when jumping through a Quickfix window, it is hard to locate the cursor in the file. So I would like to enable cursorline and cursorcolumn automatically after a Quickfix window is opened (by whatever action). Now I have a solution using an autocmd like this:

autocmd BufferEnter quickfix :bufdo set cursorline cursorcolumn

Is this the proper solution? Any better idea? Thanks very much!
Enable cursorline and cursorcolumn after Quickfix window is opened
vimrc;autocmd;quickfix;cursorline
Create the file ~/.vim/ftplugin/qf.vim (or $HOME/vimfiles/ftplugin/qf.vim if you are on Windows) with the following contents:

" Only do this when not done yet for this buffer
if exists("b:did_ftplugin")
    finish
endif
wincmd p        " go to original window
set cursorline
set cursorcolumn
wincmd p        " back to quickfix window

The ftplugin files are executed whenever the filetype is set for a given file buffer (the quickfix and location windows have the filetype qf). Using filetype plugin files instead of autocmds avoids cluttering your vimrc. You can find more details at :help ftplugin-name, :help 43 and Vim FAQ 26.8.

Edit: The check on b:did_ftplugin is necessary to allow disabling it if necessary and to avoid loading it twice (see :help ftplugin). The let b:did_ftplugin = 1 is omitted from this file because the intent of this file is to augment the default filetype plugin for quickfix, not to override it. If this line were added, the settings at $VIMRUNTIME/ftplugin/qf.vim would be skipped, as explained at :help ftplugin-overrule.
_unix.39383
I want to set up a launcher for my terminal in XFCE that automatically prompts for the su password. In GNOME my shortcut was:

gnome-terminal --command="su -" --geometry=180x40+400+80

How do I configure the XFCE shortcut for that? The current properties are:

exo-open --launch TerminalEmulator
Launch terminal in su mode XFCE
terminal;xfce
null
_reverseengineering.6267
I have the following lines:

push    [ebp+dwBytes]
push    8               ; dwFlags
call    ds:GetProcessHeap
push    eax             ; hHeap
call    ds:HeapAlloc
push    [ebp+lpCmdLine] ; char *
mov     edi, eax
call    _atol
mov     esi, eax
xor     esi, 28Ch
mov     eax, esi
pop     ecx
mov     edx, ebx
xor     eax, 1104h
xor     ecx, ecx
shr     edx, 1
jz      short loc_60114A

The first part is not difficult. They get a handle to the default heap, and after that they allocate some memory on that heap using the handle hHeap. The part with lpCmdLine takes lpCmdLine and converts it into a long value. Now, I do not understand the part which comes after call _atol, especially the lines with the XOR: xor esi, 28Ch.

Questions:
a) Is that a way of encryption? I mean, is it that they try to encrypt the string pointed to by lpCmdLine?
b) Normally I would have a cmp before jz, but here, as you can see, there is only a jz instruction. Why is the cmp missing?

Best regards,
XORing command-line for encryption?
assembly;xor
null
_webmaster.108771
I used this Google tool to test my web site. When I got the report, it recommended the following:

Compress resources with GZIP
See how to enable GZIP compression

When I look at the response of my HTML, CSS, or JS files in Chrome dev tools, I see the following encoding:

content-encoding: br

Looking up br (for example here), br seems to be another compression, an alternative to gzip. Also, when I use a tool such as this, it suggests my site is compressed. I do notice my images don't have this encoding, but they are all either .png or .jpg, so I imagine you would not compress them much anyway. Does anyone know why the Google tool would be telling me to compress my resources when my site seems to already be compressed?
Why does Google mobile site test say my site is not compressed when it uses content-encoding:br?
google analytics;mobile;compression
Compress resources with GZIP

This is another one of those things where new technology is being shoved into our faces and some companies and/or tools aren't set up to handle it (such as Google PageSpeed Insights). After looking at the new compression info, it seems only newer web browsers support it. A large number of tools and web servers still support GZIP compression, but some servers (including nginx, as per your link on the br compression) don't ship it where one can enable it in the server.

Because the goal of a website is to present information to users from around the world, we have to try to make much of the world happy by creating two versions of a webpage: one version compressed with GZIP, and the other version not compressed at all, for browsers that don't support compression. When the user loads the page, the browser tells the server what compression methods it can handle (example: GZIP), and if the server also supports it, then the content is downloaded compressed, extracted on the user's computer, and then the HTML is processed in the browser. If, however, the user's browser can't handle the compression, then the server should deliver the uncompressed version. This is better than users seeing errors.
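You can watch this negotiation directly; a quick sketch with curl (example.com is a placeholder, and this assumes the server answers HEAD requests the same way as GET):

# Advertise both gzip and brotli, and see which one the server picks:
curl -sI -H 'Accept-Encoding: gzip, br' https://example.com/ | grep -i content-encoding

# Advertise gzip only, to check whether the server can fall back:
curl -sI -H 'Accept-Encoding: gzip' https://example.com/ | grep -i content-encoding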
_unix.116415
I'm using Debian 6. I've been trying to set up the Netgear N300 WiFi (model number WNA3100) adapter but haven't been successful. Ndiswrapper was installed from .deb packages which I downloaded on one computer and transferred to the computer I'm having issues with:

USERNAME@COMPUTERNAME:~$ sudo ndiswrapper
install/manage Windows drivers for ndiswrapper
usage: ndiswrapper OPTION
-i inffile       install driver described by 'inffile'
-a devid driver  use installed 'driver' for 'devid' (dangerous)
-r driver        remove 'driver'
-l               list installed drivers
-m               write configuration for modprobe
-ma              write module alias configuration for all devices
-mi              write module install configuration for all devices
-v               report version information
where 'devid' is either PCIID or USBID of the form XXXX:XXXX,
as reported by 'lspci -n' or 'lsusb' for the card

I have added it to /etc/modules:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

loop
ndiswrapper

But modprobe doesn't recognize it as present and I can't get online.

USERNAME@COMPUTERNAME:~$ sudo modprobe ndiswrapper
FATAL: Module ndiswrapper not found.
Debian 6 NDISwrapper doesn't install correctly
linux;debian;wifi;kernel modules;ndiswrappers
null
_codereview.64504
I have a webpage with a table. The table, with 10 columns, can be sorted by each column, ascending or descending. At this moment I control the order with a big switch:

Func<IQueryable<DeviceUsage>, IOrderedQueryable<DeviceUsage>> orderBy;
IOrderedEnumerable<DeviceUsage> info;
switch (sortOrder)
{
    case "user":
        ViewBag.sortableBy = sortOrder;
        info = unitOfWork.deviceUsageRepository.Get(where, null, null, d => d.DeviceInstance, d => d.Storage, d => d.User).OrderBy(s => s.User.FullName);
        ViewBag.Desc = true;
        return PartialView(info.ToPagedList(pageNumber, 15));
    case "userDesc":
        ViewBag.sortableBy = sortOrder;
        info = unitOfWork.deviceUsageRepository.Get(where, null, null, d => d.DeviceInstance, d => d.Storage, d => d.User).OrderByDescending(s => s.User.FullName);
        ViewBag.Desc = true;
        return PartialView(info.ToPagedList(pageNumber, 15));
    case "manufacturer":
        ViewBag.sortableBy = sortOrder;
        orderBy = q => q.OrderBy(o => o.DeviceInstance.Device.Manufacturer1.Name);
        break;
    case "manufacturerDesc":
        ViewBag.sortableBy = sortOrder;
        orderBy = q => q.OrderByDescending(o => o.DeviceInstance.Device.Manufacturer1.Name);
        break;
    . . .
    default:
        ViewBag.sortableBy = "";
        orderBy = q => q.OrderBy(o => o.DeviceInstance.Device.CatalogNo);
        break;
}

This code works but does not look nice. I think it's the longest switch I ever wrote. Can it be improved somehow, or should I leave it as it is?

Explanation of why there is a difference in logic: as you can see, in "user" I'm sorting by FullName, which isn't in the db (it's created locally in the DAL from FirstName & LastName). @Nick's approach is OK, BUT adding a new Desc variable is a bit problematic because there is only the possibility to sort by ONE row at a time. So I would need to store it somewhere in the last sorting order.
Big switch statement for sorting a table
c#
null
_codereview.60928
I am porting C# code to F# that makes use of LINQ's Join() extension method. Just as I use that in method call chains, I would of course like to have an F# function to pipe into. However, there is no equivalent in the Seq module, and while the same result could also be achieved with a nested lambda using Seq.where, I'd also want something that tells me by name that it is in fact a join. I came up with this function using a simple query expression:

let seqJoin innerKeySelector outerKeySelector resultSelector (innerSequence : 'b seq) (outerSequence : 'a seq) =
    query {
        for outer in outerSequence do
        join inner in innerSequence on (outerKeySelector outer = innerKeySelector inner)
        select (resultSelector outer inner)
    }

While this works, I'm not sure about the idiomaticity of the API. LINQ's Join() method has this signature:

public static IEnumerable<TResult> Join<TOuter, TInner, TKey, TResult>(
    this IEnumerable<TOuter> outer,
    IEnumerable<TInner> inner,
    Func<TOuter, TKey> outerKeySelector,
    Func<TInner, TKey> innerKeySelector,
    Func<TOuter, TInner, TResult> resultSelector)

Parameters are 'logically' ordered by importance, the first being the sequence on which the method is (syntactically) called. In F#, in order to be able to pipe the 'original' sequence into the function, that argument needs to be last instead of first, which requires shuffling the whole signature around a bit.

In order to keep the two sequences next to each other, I moved innerSequence to the second-to-last position, and to stay in line with their order, I also switched outerKeySelector and innerKeySelector. I wonder, though, whether once I've changed it that far, it might be more idiomatic to go all the way and make it the full reverse of the C# version, in part because now the resultSelector parameter separates the sequences from their key selector functions and looks a bit out of place in that position. What would be the most 'natural'/idiomatic way to arrange those parameters?

As for the query itself, is that the proper way to do it with regards to laziness and expression trees that will be used under the cover? Will the seq type annotations break anything in that respect? Those were necessary to actually allow for using seq and list values as inputs.

(This is of course supposed to become an extension for the Seq module; I just haven't bothered to do that yet.)
Join() equivalent function for F# sequences
f#
null
_webapps.27766
What's this newly-appearing-never-seen-before thing in Facebook Chat:
What's the new Seen in Facebook Chat?
facebook chat
"Seen" means that your message was loaded in the browser AND was visible to the user. If the tab was minimized or inactive at the time that the message was sent, the message will not be marked as seen. Messages are marked as seen through AJAX in a JavaScript function. I traced the function call, and it is being made somewhere on line 221 of http://static.ak.fbcdn.net/rsrc.php/v2/yz/r/7wj8FiAwzjS.js. Of course, it is minified JavaScript, so it is relatively hard to read. If you are looking to disable the feature, a Chrome extension has been created called Facebook Unseen.
_cstheory.11755
What is the relationship between the expressiveness of LTL, Büchi/QPTL, CTL and CTL*? Can you give some references that cover as many of these temporal logics as possible (especially between linear- and branching-time)? A Venn diagram with those temporal logics and some practical properties as examples would be perfect. For instance:

Is it true that there are properties specifiable in Büchi but not in CTL*? Do you have a good example?
How about in Büchi and CTL but not in LTL?

Details: The expressiveness of the logics is more relevant for me than the examples. The latter is just helpful for understanding and motivation. I already know of the expressibility theorem between CTL* and LTL from [Clarke and Draghicescu, 1988], but do not like the usual example of fairness being in CTL and not in LTL, since there are a plethora of fairness variants, some of which are expressible in LTL.

I also do not like the usual example of the evenness Büchi-property, given, e.g., in [Wolper83], about the restrictions of LTL, since adding another propositional variable would solve the problem ($even(p) \equiv q \wedge \Box ( q \implies X \neg q ) \wedge \Box ( \neg q \implies X q ) \wedge \Box ( q \implies p )$). I do like the example of the evenness Büchi-property, given, e.g., in [Wolper83], about the restrictions of LTL, since it is simple and shows the necessity of QPTL for evenness (thanks for the note below).

Update: I think the expressibility theorem between CTL* and LTL from [Clarke and Draghicescu, 1988] can be lifted to Büchi automata, resulting in: Let $\phi$ be a CTL* state formula. Then $\phi$ is expressible via a Büchi automaton iff $\phi$ is equivalent to $A\phi^d$. With this, Büchi $\cap$ CTL* = LTL, answering my questions above:

Is it true that there are properties specifiable in Büchi but not in CTL*? Yes, e.g. evenness.
How about in Büchi and CTL but not in LTL? No.

Has anyone lifted Clarke and Draghicescu's theorem already to Büchi automata, or stated a similar theorem? Or is this too trivial to be mentioned in a paper, since CTL*'s path quantifiers are obviously orthogonal to the criteria on accepted paths stated by Büchi automata?
Expressiveness of Büchi vs CTL(*)
lo.logic;automata theory;model checking;temporal logic
One thing we have to be clear on is the kind of property we are talking about: CTL and CTL* are branching-time logics, used to talk about tree languages, whereas LTL is a linear-time logic, which per se talks about words, but can be applied to trees by requiring all branches to satisfy the formula. This already gives you a hint for some CTL properties which LTL cannot express, namely ones which mix universal and existential path quantifiers, like AGEFp ("It will always be possible to get to a p-state"). The usual example in the other direction is FGa; see for example http://blob.inf.ed.ac.uk/mlcsb/files/2010/02/mlcsb7.pdf for details (and a Venn diagram).

Regarding automata, things get more complicated. You could be talking about word or tree automata; if the latter, note that Büchi automata are less expressive than the other acceptance conditions (Rabin/parity/...) in this case. See for example http://www.cs.rice.edu/~vardi/papers/lics96r1.ps.gz for comparisons (including the case of derived languages, which are the tree languages recognizable by word automata).
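In symbols, the two separating properties just mentioned are the standard ones:

$$\mathbf{AG\,EF}\,p \;\in\; \mathrm{CTL} \setminus \mathrm{LTL} \qquad\qquad \mathbf{FG}\,a \;\in\; \mathrm{LTL} \setminus \mathrm{CTL}$$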
_webapps.70629
I don't even know if it's possible to do this, but what I'm trying to do is to change how my name on YouTube shows up to others (when I post videos or comments or the like). I just want to change the name while keeping my channel, the videos I've posted, and suchlike. I read (I think) that you can do this by unlinking your YouTube account from the Google+ account it's currently linked to, then linking it to a different Google+ account. Is this accurate? Currently it is forcing me to use my real name as my YouTube channel name. This is downright obnoxious; I want anonymity. I don't want every video I post to be emblazoned with my real name for every person who googles me to see! How can I change it to a username that is not my real name?
How to change my YouTube channel name
youtube
It is possible to change the YouTube channel account name. Simply follow these steps.

Note: Quotes were taken from "Change your channel details - YouTube Help".

Change the channel NAME (if your channel is connected to Google+):
1. Sign in to your channel on YouTube.
2. Open the Guide and click My Channel.
3. Point your cursor at your channel name and click the pencil icon > Channel settings.
4. Under Account Information, click Change next to your channel name.

Change the channel ICON:
1. Sign in to your channel on YouTube.
2. Open the Guide and click My Channel.
3. Point your cursor at your channel icon and click the pencil icon.
_webmaster.45566
If a user visits demo.domain.com, does that count as having visited the site domain.com? Does the Alexa ranking apply to the sub-domain or the primary domain?
Does the Alexa ranking apply to the sub-domain or the primary domain?
seo;alexa
null
_unix.354442
I'm looking to create a baseline of file extensions and then search for the inverse of them (essentially scanning for new extensions and then reporting on them). I have:

base_file=`find /volume1/ -type f | grep -E '.*\.[a-zA-Z0-9]*$' | sed -e 's/.*\(\.[a-zA-Z0-9]*\)$/\1/' | sort | uniq -u`

to create my baseline - the initial search of the files on the volume. For a small amount of files, using find $dir -type f \( -not -name 'foo*' -and -not -name '*bar' \) worked wonderfully. Alas, I have tons of files, though. If I pipe every unique extension into the find command, it does not work (understandably). Example of output:

.acx .adb .aex .agt .ahs .alt .amsorm .ANI .ARTX .ASAX .ASDefs .asmdot .ASMDOT .ASPX .atb .atm .aus .auth .authd .awk .ben .Bin .BIO .bkp .bms .boo .bootstrap .bplist .bridgesupport .bto .btt .CBK .ccp .cd .cdm .cdrom .CFGOLD .cfm .cfp .CFS .cg .cidb .cilk .clk .cmptag .CMValidateMovieDataReferenceService .ColorSyncXPCAgent .common .con .CONFIG .COR .cpi .cpu .crc .crdownload .crmlog .cryptodev .csh .ctd .ctl .cue .cws .d .daeexportpreset .daeimportpreset .DATA .dbg .DBG .dbl .DCD .DCX .debug .defaults .defltools .defmtools .der .desktop .dfont .DGDLL .DGN .DictionaryServiceHelper .dig .django .dla .dlb .dlh .dlk .dLL .dlmp .DLO .DMP .DNP .dps .DriverHelper .DRWDOT .dsd .dtc .DTL .dwd .dwfx .dwG .e .eai .eapol .EDB .edc .edited .ENC .eng .ENV .epub .erl .esi .esm .EVM .EVP .ews .example .exv .fac .fatal .fbk .FBK .fbT .FCL .fe .file .fin .fl .FLL .font .FontDownloadHelper .for .fpk .fre .frT .FW .FXP .gadget .Gadget .gdb .generic .ger .gi .glo .gm .gpx .groovy .group .gsl .gss .gws .GZ .ham .hbs .hd .hidden .hkf .hpdata .hs .htb .HTT .hun .hx .hxd .hxx .HXX .IBM .ICNS .igb .IGS .iHB .imaging .IME .IMG .in .INP .install .Installsettings .int .IPConfiguration .IPMonitor .ITK .ITS .iuf .java .jnilib .job .JPEG .jqx .kd .keychainproxy .keys .kondo .krn .kscript .ksh .lfs .libraries .LID .lisp .liveReg .local .LOCAL .lok .lppi .lsl .lt .ltools .mak .mako .mapping .mappings .mas .masm .matlab .mbr .mch .MDE .mdmp .mdw .me .med .MediaLibraryService .mem .mholders .MIF .MIG .min .mk .mm .mno .mobileconfig .mom .mp .MPE .mpq .MPV .mpx .MPX .msdb .MSDefs .msilog .MSM .MSP .mtools .mup .nasm .netsa .new .nfm .nlog .nor .nsi .ntd .numbers .nut .nv .nvv .NWD .O .oai .oct .Ocx .oft .ogv .older .omo .ooc .openAndSavePanelService .ori .orignal .osf .override .pad .page .partial .pas .patch .pbb .pch .Pdf .PDFFileRefsValidator .pdn .PDR .pexe .pfw .phar .pif .pike .pix .PJT .PJX .PLS .plsql .po .pokki .pot .ppf .ppk .pptm .preferences .PRG .prm .PRN .pro .propdesc .prtdot .PRTDOT .prx .PSDefs .PST .psw .pta .ptb .ptg .python .r .rayhosts .rc .rcd .RCF .rd .RecentPictureService .regcccc .registerassistantservice .RLA .rnd .rpk .RPW .RSC .rst .rupldb .rus .salog .sap .SAP .sbt .sbx .sbxx .SCH .schemas .scm .SCR .sct .SDP .sds .sdu .Search .securityd .SEP .set .setup .Setuplog .SFV .sfx .sgi .sgn .sidb .sidd .sigs .sites .skin .slddrt .smc .SMC .smf .smilebox .SOL .spdc .speechsynthesisd .spn .sqfs .squashfs .srt .srx .ssi .st .ste .stg .styx .swb .swtag .TAR .TDC .tdf .tex .th .tib .time .tips .tmx .tpg .tpm .trace .transformed .trm .TSK .tst .Txt .txz .type .udf .ufm .ult .uninstall .upd .upstart .urf .user .User .UserDictionary .UserProfile .UserScriptService .usr .ux .v .vala .values .var .VAR .vbe .VBR .vcs .vcxproj .vdb .vdf .VERSION .VersionsUIHelper .vhdl .vms .vmsn .vmss .VOL .voucher .vps .vsb .vst .vvv .wax .wbt .Wdf .webp .WIZ .wnt .WPT .ws .wsc .wsdl .WSF .wsp .xap .xht .XLL .xlS .XLT .xmp .xpfwext .xtext .yaml .zipx .zz

How can I search for all of these, or the inverse of them, without running into issues? Or, more importantly, is there a better solution for this type of task?
Searching for a large number of extensions with find
files;find
null
_softwareengineering.292087
I was reading this answer about testing private methods, and it mentioned several possibilities:

- extract methods as public to another class
- make them public
- separate the env of test and production, so that methods are only public in the test env

I am currently coding in Java and I was wondering what the community thinks about a kind of intermediary possibility provided by Java and maybe some other languages. With Java you can give your methods protected or package access, so it is neither private nor public but something between the two that makes it testable by JUnit. The Google Guava library even provides an annotation just for that, called @VisibleForTesting, which doesn't do anything apart from explicitly saying that it is protected or public for testing purposes. Since there are even annotations dedicated to this usage, surely it is an accepted strategy among part of the community at least? Is it an acceptable strategy, or should I stick to the 3 options mentioned above and consider it the same as the "make public" strategy?
Testing private methods as protected
java;unit testing;tdd;junit
Generally, one would better be judicious about using protected access at all. The reasons for that are laid out in answers to prior questions over here: Why is Clean Code suggesting avoiding protected variables?

As for using it the way you think of here - weakening an access limitation because it feels more comfortable to test - it looks like a terribly bad idea. The right option when you test functionality covered by a private method is to leave its modifier alone and figure out a testing approach based on the assumption that the method is and will stay private. There could be many cases and reasons when one needs to redesign code for testability. Various difficulties in writing unit tests often serve as a good indication of such a need. But "oh, that method is private" is not one of those indicative difficulties.

I wrote an explanation for this in an answer to a prior question as a side note, because that other question and answer were focused on different matters. But for the question you asked, it seems to apply fully and directly, so I will simply copy that part over here.

<rant> I think I heard enough whining. Guess it's about time to say loud and clear...

Private methods are beneficial to unit testing.

The note below assumes that you are familiar with code coverage. If not, take the time to learn, since it's quite useful to those interested in unit testing and in testing at all.

All right, so I've got that private method and unit tests, and coverage analysis telling me that there's a gap: my private method isn't covered by tests. Now...

What do I gain from keeping it private?

Since the method is private, the only way to proceed is to study the code to learn how it is used through the non-private API. Typically, such a study reveals that the reason for the gap is that a particular usage scenario is missing in tests:

    void nonPrivateMethod(boolean condition) {
        if (condition) {
            privateMethod();
        }
        // other code...
    }
    // unit tests don't invoke nonPrivateMethod(true)
    // => privateMethod isn't covered.

For the sake of completeness, other (less frequent) reasons for such coverage gaps could be bugs in specification / design. I won't dive deep into these here, to keep things simple; suffice to say that if you weaken an access limitation just to make the method testable, you'll miss a chance to learn that these bugs exist at all.

Fine, to fix the gap, I add a unit test for the missing scenario, repeat the coverage analysis, and verify that the gap is gone. What do I have now? I've got a new unit test for a specific usage of the non-private API.

- The new test ensures that expected behavior for this usage won't change without notice, since if it changes, the test will fail.
- An outside reader may look into this test and learn how it is supposed to be used and behave (here, "outside reader" includes my future self, since I tend to forget the code a month or two after I'm done with it).
- The new test is tolerant to refactoring (do I refactor private methods? you bet!). Whatever I do to privateMethod, I'll always want to test nonPrivateMethod(true). No matter what I do to privateMethod, there will be no need to modify the test, because the method isn't directly invoked.

Not bad? You bet.

What do I lose from weakening the access limitation?

Now imagine that instead of the above, I simply weaken the access limitation. I skip the study of the code that uses the method and proceed straight to a test that invokes my exPrivateMethod. Great? Not!

- Do I gain a test for the specific usage of the non-private API mentioned above? No: there was no test for nonPrivateMethod(true) before, and there is no such test now.
- Do outside readers get a chance to better understand usage of the class? No. - "Hey, what's the purpose of the method tested here?" - "Forget it, it's strictly for internal use." - "Oops."
- Is it tolerant to refactoring? No way: whatever I change in exPrivateMethod will likely break the test. Rename, merge into some other method, change arguments, and the test will just stop compiling. Headache? You bet!

Summing up, sticking with the private method brings me a useful, reliable enhancement in unit tests. In contrast, weakening access limitations for testability only gives me an obscure, hard-to-understand piece of test code, which is additionally at permanent risk of being broken by any minor refactoring; frankly, what I get looks suspiciously like technical debt. </rant>
_webapps.79113
A few years ago, if I remember correctly, Facebook used to send emails regarding new friend requests. But now, in the notification settings, I can not find the setting to enable such email notifications. So, my question is: has the friend requests email notification feature gone from FB? Or was it never present?
Email notification for new friend requests on Facebook
facebook;facebook notifications;facebook friend request
null
_vi.6305
Many sections of stuff we write are such that they could be automatically formatted by user-defined rules. Here are a couple of examples:

- Lists of include files in languages such as C. It is customary to sort these alphabetically, and this sorting is also part of style guides such as those of Google.
- Ditto for usepackage in LaTeX.
- Comments in languages such as BASH.
- Tables, which could be processed by column -t.
- The entire file, from which I would like to automatically remove spaces at eoln, and repeated empty lines.

Doing these in vi is not tough. Just do something like !}sort -u. What's annoying is the need to do this repeatedly. Ideally, I would write:

// AUTO: `!}sort -u | column -t`
#include <z.h>
#include <a.h>
#include <b.h>

and each time I save my file, these specially marked commands will execute. Is this possible to do? Are there any plugins that do something similar? Inspiration may be drawn from this question: Apply formating with a script, or have ftplugin format particular text based on syntax.

EDIT: Here is another concrete example:

# To sort: +2,/)/-1!column -t | sort
JUNKFILES = $(wildcard \
    *~ \
    *.aux \
    *.out \
    *.bbl \
    *.bcf \
    *.blg \
    *blx.bib \
    [dD]elme* \
    DELME* \
    *.dvi \
    *.fdb_latexmk \
    *.fls \
    *.listing \
    *.log \
    $(MAIN).pdf \
    *.o \
    *.out \
    *.run.xml \
    *.synctex.gz \
)
Auto sorting of lists of include files
vimscript
You want to automatically run a command when you write your buffer, and then save your buffer after the command. This means you're looking for an autocmd that runs during a BufWrite. Place something like this into your .vimrc:

autocmd BufWrite *.c call Libsort()
function Libsort()
    normal mfgg}
    let lineNumber = line('.') - 1
    execute '1,' . lineNumber . '!sort -u'
    normal 'f
endfunction

That should do something similar to what you want. When you write your current buffer, if it is a .c file, it marks your current position, goes to the start of the document and sorts the first paragraph, and then returns to the mark. I wrote that function quite quickly, so it's not error checked or anything, but it should give you an idea for a starting point.
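A variant of the same idea (not from the original answer) using Vim's built-in :sort instead of the external sort command, so it also works where no Unix sort is on the path:

autocmd BufWrite *.c call LibsortBuiltin()
function LibsortBuiltin()
    " mark the spot, jump past the first paragraph, remember its last line
    normal mfgg}
    execute '1,' . (line('.') - 1) . 'sort u'
    normal 'f
endfunction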
_webapps.27029
The drop down list has recent activity but it shows only the most recent five
How can I view repositories by activity in bitbucket?
bitbucket
null
_datascience.19504
I am trying to stitch together multiple packages and tools from multiple languages (R, Python, C, etc.) in a single analysis workflow. Is there any standard way to do it? Preferably (but not necessarily) in Python.
Standard method to integrate tools coded in multiple languages in an analysis workflow
methods
Luigi is an open source Python package by Spotify that does exactly what you described:

"Luigi is a Python (2.7, 3.3, 3.4, 3.5) package that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, handling failures, command line integration, and much more."

Its philosophy is similar to GNU Make, letting you define tasks and their dependencies. There is also Apache Airflow (originally from Airbnb), another Python solution:

"Airflow is a platform to programmatically author, schedule and monitor workflows. Use airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The airflow scheduler executes your tasks on an array of workers while following the specified dependencies."

You may find a complete comparative table here.
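As a minimal sketch of the Luigi style (the task names, file names and the Rscript command are made up for illustration; non-Python tools are driven via subprocess):

import subprocess

import luigi

class RunRScript(luigi.Task):
    """Wraps a non-Python step (here an R script) as a pipeline task."""

    def output(self):
        return luigi.LocalTarget("results.csv")

    def run(self):
        # Any language can participate through a subprocess call.
        subprocess.check_call(["Rscript", "analysis.R", self.output().path])

class Summarize(luigi.Task):
    """A downstream Python step that depends on the R output."""

    def requires(self):
        return RunRScript()

    def output(self):
        return luigi.LocalTarget("summary.txt")

    def run(self):
        with self.input().open() as f, self.output().open("w") as out:
            out.write("%d lines\n" % sum(1 for _ in f))

if __name__ == "__main__":
    # Luigi resolves the dependency graph and runs RunRScript first.
    luigi.build([Summarize()], local_scheduler=True)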
_scicomp.16002
I use B-spline curve fitting to obtain one smooth curve. If I obtain two smooth B-splines, how can I connect them smoothly? For example, I have 59 points ((x0,y0,z0), ..., (x58,y58,z58)) and I have two fitted B-splines. One B-spline is for the first 30 points, another is for the next 30 points, and the two point sets share one common point ((x29,y29,z29)). The point (x29,y29,z29) will be modified twice due to curve fitting and will have two new positions. If I just connect the two new positions, the final curve will not be smooth at the point (x29,y29,z29). Currently I perform curve fitting for all data together, but that modifies the smooth curve for the first 30 points entirely. I hope to only modify the connecting part of the first smooth curve. I know I need to impose that the derivatives be equal at the joint. I don't know how to do that.
How to connect two fitted B-spline curve?
b spline;smoothing;curve fitting
null
_unix.168507
I need to remove multiple Chrome extensions from several hundred devices. I have a script that will see if a certain extension exists and, if it does, then deletes the Default folder. How can I add multiple extensions to my script so that if any of these extensions exist, the Default folder is deleted? The script currently looks like this:

#!/bin/bash
currentUser=`ls -l /dev/console | awk '{print $3}'`
if [ -d "/Users/$currentUser/Library/Application Support/Google/Chrome/Default/Extensions/omghfjlpggmjjaagoclmmobgdodcjboh" ]; then
    rm -rf "/Users/$currentUser/Library/Application Support/Google/Chrome/Default" && killall "Google Chrome" && echo "<result>Delete Browsec</result>"
else
    echo "<result>No</result>"
fi
Removing multiple extensions in Chrome
bash
There are various minor issues and things I don't understand in your script, which I asked about in my comment. In the meantime, without touching anything else in your script, you could just provide a list of extension names as an array:

#!/bin/bash
currentUser=`ls -l /dev/console | awk '{print $3}'`
exts=(omghfjlpggmjjaagoclmmobgdodcjboh foofoobarbar barbarfofo)

for extension in "${exts[@]}"; do
    if [ -d "/Users/$currentUser/Library/Application Support/Google/Chrome/Default/Extensions/$extension" ]; then
        rm -rf "/Users/$currentUser/Library/Application Support/Google/Chrome/Default" && killall "Google Chrome" && echo "<result>Delete Browsec</result>"
    else
        echo "<result>No</result>"
    fi
done

I don't know what the target OSes are, but I must point out that this is not portable at all and will fail on any Linux, for example.
_webmaster.44754
I'm trying to find out if there's a way to form a Google query to quickly find out what position you are in for a search term. I know that Google Webmaster Tools and Google Analytics have something similar, but I don't find it accurate.
How to find out what # you are for google keyword
seo;google search;keywords
null
_softwareengineering.164283
I've been programming professionally in C, and only C, for around 10 years in a variety of roles. As would be normal to expect, I understand the idioms of the language fairly well and, beyond that, also some of the design nuances: which APIs to make public, who calls what, who does what, what is supposed to be reentrant, and so on. I grew up reading 'Writing Solid Code' - its early C edition, not the one based on C++.

However, I've never ever programmed in an OO language. Now I want to migrate to writing applications for iPhone (maybe Android), so I want to learn to use Objective-C and use it with a degree of competence fitting a professional programmer. How do I wrap my head around the OO stuff? What would be your smallest reading list suggestion for me? Is there a book that carries some sort of relatively real-world example of OO design in Objective-C? Besides the reading, what source code would you recommend I go through? How do I learn the OO paradigm using Objective-C?
Learning OO for a C Programmer
object oriented;c;objective c
I also learnt object-oriented programming coming from procedural programming (Pascal and C), so I can understand the difficulty of the switch: you have to start thinking differently, and you already have a lot of experience in another paradigm (BTW, I still enjoy programming in the procedural way from time to time). Besides reading language-specific books, I think it would be useful to read a general introduction to object orientation and object-oriented design.

For example, I found Object-Oriented Modelling and Design very useful. The book is not very recent, but the basic concepts of object orientation have not changed that much. Also, I like this book because it concentrates on design concepts, and only in the later chapters does it explain how to map these concepts to programming language constructs. Furthermore, it shows that an object-oriented design can (also) be implemented in non-object-oriented languages (e.g. C) if you need to. So, in addition to books specific to Objective-C, I would recommend such a language-agnostic reading to get general ideas that are not biased by a particular object-oriented language.
_unix.220510
When I use any variation of English, US international (with dead keys, altGr dead keys, or alternative) on my Linux Mint machine, I always encounter this behaviour: when I press one of these keys: ' and then follow it with a 'non-accentable' character like a [ or a b, no output comes out at all, whereas in Windows US-International it would print '[ or 'b. If I wanted to type this, I would have to escape each dead key with a space instead of with any 'non-accentable' character. This is annoying when programming (not really, but I trained with the Windows 'Qwerty International' on typing.io, and switching back and forth between the systems is irritating). Is there any way to change that so it works like in Windows?
Insert both characters if a dead key combination is not recognized (e.g. 'a → á, 'b → 'b)
x11;keyboard layout;dead keys
On Ubuntu 14.04 I did the following:

1) Installed uim using the Software Manager; other packages like uim-xim, uim-gtk2, uim-gtk3 and uim-qt are auto-installed. See https://launchpad.net/ubuntu/+source/uim.

2) Defined environment variables by adding the next lines to ~/.profile; this way the custom compose key sequences only apply to the current user:

# Restart the X-server after making alterations using:
# $ sudo restart lightdm
# It seems only GTK_IM_MODULE or QT_IM_MODULE needs to be defined.
export GTK_IM_MODULE=uim
export QT_IM_MODULE=uim

3) To mimic Windows US International keyboards I saved one of the following files at ~/.XCompose:

https://gist.githubusercontent.com/guiambros/b773ee85746e06454596/raw/0ea6d7f7cf9a6ff38b4cafde24dd43852e46d5e3/.XCompose or
http://pastebin.com/vJg6G0th

This worked for me after 1) restarting Ubuntu or 2) just the X-server, by entering the following command in a terminal:

$ sudo restart lightdm

NB: Restarting only seems necessary after altering the ~/.profile file; alterations to ~/.XCompose will take effect the next time an application (Terminal, Gedit, etc.) starts. To check whether the environment variables are set right, enter the following command in your terminal:

$ printenv | grep IM_MODULE

Many thanks to: https://wrgms.com/using-xcompose-with-chrome-and-sublime-text
About custom compose key sequences:
http://manpages.ubuntu.com/manpages/trusty/man5/XCompose.5.html
https://help.ubuntu.com/community/ComposeKey
About custom keyboard mapping:
https://help.ubuntu.com/community/Custom%20keyboard%20layout%20definitions
_unix.210206
cat file_outunique.out | awk '{ print $1, $2}' | sort $2 -u > ledernier.out

I want to output $1 and $2 for the 1st occurrence of each $2. Do you have any suggestions about this? I've tried to change the syntax, but I don't know if I can do it. Can I do this? How?

Inputs:
jffszh dgfeg7754
zezrlgk 544ad4z5g
qjiofzo 544ad4z5g
zlfkpif 546787438
zfkozfk 446787466
lfjzfoj dgfeg7754
kfkjzfj dgfeg7754

Outputs:
jffszh dgfeg7754
zezrlgk 544ad4z5g
zlfkpif 546787438
zfkozfk 446787466
Can I do this with awk + sort, and how?
shell;awk
null
_cs.70270
Let's assume that we have a directed acyclic graph $G = (V, E)$, non-negative vertex weight functions $w_a(v)$ and $w_b(v)$, and a non-negative edge weight function $t(u,v)$. We want to divide the vertices into two subsets $V_a$ and $V_b$, such that $V_a \cap V_b = \emptyset$ and $V_a \cup V_b = V$. The set of edges $E_{ab}$ between the two sets is defined as
$$E_{ab} = \{ (u,v) : (u \in V_a, v \in V_b) \lor (u \in V_b, v \in V_a)\}.$$

A cost function $C$ is defined as follows:
$$C = \sum_{v \in V_a}{w_a(v)} + \sum_{v \in V_b}{w_b(v)} + \sum_{(u,v) \in E_{ab}}{t(u,v)}$$

Define the cost $C_P$ of a path $P$ traversing vertices $v_1, v_2, \dots, v_n$ to be:
$$C_P = \sum_{v \in V_a \land v \in P}{w_a(v)} + \sum_{v \in V_b \land v \in P}{w_b(v)} + \sum_{(v_i, v_{i+1}) \in E_{ab} \land v_i \in P \land v_{i+1} \in P}{t(v_i,v_{i+1})}$$

I know that the problem of finding the subsets $V_a$ and $V_b$ such that the $C$ function is minimized can be reduced to minimum cut, so it's in P.

I've also managed to come up with a solution to find the subsets $V_a, V_b$ with the path with highest possible cost: we can transform the graph $G$ to a graph $G'$ in such a way that for every vertex $v$ we create two vertices $v_a$ and $v_b$ with appropriate weights (from the $w_a$ and $w_b$ functions respectively), and for every edge $(u,v)$ we create two edges with weight $0$: $(u_a,v_a)$, $(u_b,v_b)$; and two edges with weight $t(u,v)$: $(u_a,v_b)$, $(u_b,v_a)$. In other words, we're creating a graph $G'$ that covers every possible path for every division of $V$. The problem can be reduced to finding the longest path in a graph. The graph $G'$ is still a DAG, so the longest path can be found in polynomial time.

Now, the multi-objective problem that I'm trying to solve is to find subsets $V_a$ and $V_b$ that minimize the cost $C$, but with a constraint that the cost $C_P$ of every possible path is not greater than a maximal acceptable cost $c_{max}$. The graph $G = (V, E)$ and all weight functions are given as an input and fixed. Or the other way around: minimize the maximum cost $C_P$ over all possible paths, but constrain the overall cost $C$ to be not greater than $c_{max}$.

How should I approach this problem? Are there any generic ways to prove the complexity of such multi-objective problems? The best answer that I've found in the literature is that multi-objective optimization problems are generally hard, which is not very helpful.

My real-life application of this problem is to allocate software components on two heterogeneous machines (hence two weight functions) connected via a network (hence edge weights) and minimize both energy use (the $C$ function is basically the CPU time + transfer time) and the computation time (the path with the highest cost). Thanks in advance for any help.
Graph optimization problem with multiple objectives/constraints
graph theory;optimization;np hard;partitions
null
_softwareengineering.182271
I'm confused between aggregation and containment. I'm wondering if the following represents an aggregation or containment?

class Auto
{
    private string model;
    private int speed;

    class AutoCustomer
    {
        public string LastName;
        public string Address;
        public DateTime DateOfPurchase;
    }
}
How can I understand aggregation and containment?
c#;object oriented
null
_softwareengineering.214981
I have the following classes:

- Teacher
- Student
- Class (like a school class)

They all extend from KObject, which has the following code:

- initWithKey
- send
- processKey

Teacher, Student and Class all use the functions processKey and initWithKey from the KObject parent class. They implement their own version of send. The problem I have is that KObject should not be instantiated, ever. It is more like an abstract class, but there is no abstract class concept in Objective-C. It is only useful for allowing subclasses to have access to one property and two functions. What can I do so that KObject cannot be instantiated but still allow subclasses to have access to the functions and properties of KObject?
Objective-C Lesson in Class Design
ios;objective c
null
_webmaster.2944
Given a (slightly theoretical) physical disk layout of:

\products\cameras\50d.jpg
\products\cameras\20d.jpg
\products\lenses\18-55.jpg
\products\lenses\28-135.jpg
\products.php

At the moment, I've got URLs of the form:

/products.php/
/products.php/cameras/
/products.php/cameras/50d

with products.php using PATH_INFO to make a decision on what to display. How can I rewrite the URL to remove the .php, while still allowing static resources to be retrievable?
Removing the script extension with URL rewriting on Apache
apache;url rewriting
Turns out that this works, and still allows the resources within to be served:

RewriteRule ^products([^\.]+)$ products.php$1

Since [^\.]+ refuses to match anything containing a dot, requests for static files like /products/cameras/50d.jpg fall through the rule and are served directly.
_opensource.2436
My most recent encounter with WebYog's SqlYog revealed that the vendor has made their Community edition open source, and available on GitHub, however the product still has obtrusive ads in it.It is clearly licensed under GPL, and although I haven't checked whether the ads are easily removable or contained in one of the included compiled libraries, I'm curious as to their motivations.Although not something I would do; is there anything inherently wrong with publicly forking and possibly even promoting a project like this given that it is released under GPL, for the sole purpose of removing ads and bloat?My question of course is not specific to SQLyog, and could extend to other projects, like Ubuntu's (possibly not GPL?) Desktop variant for example, which includes links to Amazon; or FileZilla, which is distributed via SourceForge, and consequently carries all the bonuses of their much-loved downloader.
Removing Ads from Open Source projects
gpl;forking
Yes, you can do that. The GPL gives you the right to make modifications and distribute these. But considering that this is conflicting with the business interests of the original creators, you can expect that they will see what they can do to stop you. One thing they might try is sue for violation of the SQLyog trademark. To protect you from trademark claims, you need to give your software a different name. This makes it harder to promote it as SQLyog without ads.
_unix.121098
I am trying to install tlp as explained on the website, by adding the PPA as follows in Linux Mint 15:

sudo add-apt-repository ppa:linrunner/tlp
sudo apt-get update

and then sudo apt-get update produces:

Hit http://panthema.net precise Release.gpg
Hit http://archive.ubuntu.com raring Release.gpg
Hit http://panthema.net precise Release
Get:1 http://packages.linuxmint.com olivia Release.gpg [198 B]
Hit http://security.ubuntu.com raring-security Release.gpg
Hit http://panthema.net precise/main i386 Packages
Hit http://archive.ubuntu.com raring-updates Release.gpg
Get:2 http://packages.linuxmint.com olivia Release [18.5 kB]
Hit http://security.ubuntu.com raring-security Release
Hit http://archive.ubuntu.com raring Release
Hit http://security.ubuntu.com raring-security/main i386 Packages
Hit http://archive.ubuntu.com raring-updates Release
Get:3 http://packages.linuxmint.com olivia/main i386 Packages [23.5 kB]
Hit http://security.ubuntu.com raring-security/restricted i386 Packages
Hit http://archive.ubuntu.com raring/main i386 Packages
Hit http://archive.ubuntu.com raring/restricted i386 Packages
Hit http://security.ubuntu.com raring-security/universe i386 Packages
Get:4 http://packages.linuxmint.com olivia/upstream i386 Packages [9,237 B]
Hit http://archive.ubuntu.com raring/universe i386 Packages
Hit http://security.ubuntu.com raring-security/multiverse i386 Packages
Get:5 http://packages.linuxmint.com olivia/import i386 Packages [40.1 kB]
Hit http://archive.ubuntu.com raring/multiverse i386 Packages
Ign http://panthema.net precise/main Translation-en_US
Hit http://security.ubuntu.com raring-security/main Translation-en
Ign http://panthema.net precise/main Translation-en
Hit http://archive.ubuntu.com raring/main Translation-en
Hit http://security.ubuntu.com raring-security/multiverse Translation-en
Hit http://security.ubuntu.com raring-security/restricted Translation-en
Hit http://archive.ubuntu.com raring/multiverse Translation-en
Hit http://security.ubuntu.com raring-security/universe Translation-en
Hit http://archive.ubuntu.com raring/restricted Translation-en
Hit http://ppa.launchpad.net raring Release.gpg
Hit http://ppa.launchpad.net raring Release.gpg
Hit http://ppa.launchpad.net raring Release
Hit http://archive.ubuntu.com raring/universe Translation-en
Hit http://ppa.launchpad.net raring Release
Hit http://ppa.launchpad.net raring/main Sources
Hit http://archive.ubuntu.com raring-updates/main i386 Packages
Hit http://ppa.launchpad.net raring/main i386 Packages
Hit http://archive.ubuntu.com raring-updates/restricted i386 Packages
Hit http://ppa.launchpad.net raring/main Sources
Hit http://archive.ubuntu.com raring-updates/universe i386 Packages
Hit http://ppa.launchpad.net raring/main i386 Packages
Hit http://archive.ubuntu.com raring-updates/multiverse i386 Packages
Hit http://archive.ubuntu.com raring-updates/main Translation-en
Hit http://archive.ubuntu.com raring-updates/multiverse Translation-en
Hit http://archive.canonical.com raring Release.gpg
Hit http://archive.canonical.com raring Release
Hit http://archive.canonical.com raring/partner i386 Packages
Hit http://archive.ubuntu.com raring-updates/restricted Translation-en
Hit http://archive.ubuntu.com raring-updates/universe Translation-en
Ign http://security.ubuntu.com raring-security/main Translation-en_US
Ign http://ppa.launchpad.net raring/main Translation-en_US
Ign http://ppa.launchpad.net raring/main Translation-en
Ign http://security.ubuntu.com raring-security/multiverse Translation-en_US
Ign http://ppa.launchpad.net raring/main Translation-en_US
Ign http://archive.canonical.com raring/partner Translation-en_US
Ign http://security.ubuntu.com raring-security/restricted Translation-en_US
Ign http://ppa.launchpad.net raring/main Translation-en
Ign http://archive.canonical.com raring/partner Translation-en
Ign http://security.ubuntu.com raring-security/universe Translation-en_US
Ign http://packages.linuxmint.com olivia/import Translation-en_US
Ign http://packages.linuxmint.com olivia/import Translation-en
Ign http://packages.linuxmint.com olivia/main Translation-en_US
Ign http://packages.linuxmint.com olivia/main Translation-en
Ign http://packages.linuxmint.com olivia/upstream Translation-en_US
Ign http://packages.linuxmint.com olivia/upstream Translation-en
Ign http://archive.ubuntu.com raring/main Translation-en_US
Ign http://archive.ubuntu.com raring/multiverse Translation-en_US
Ign http://archive.ubuntu.com raring/restricted Translation-en_US
Ign http://archive.ubuntu.com raring/universe Translation-en_US
Ign http://archive.ubuntu.com raring-updates/main Translation-en_US
Ign http://archive.ubuntu.com raring-updates/multiverse Translation-en_US
Ign http://archive.ubuntu.com raring-updates/restricted Translation-en_US
Ign http://archive.ubuntu.com raring-updates/universe Translation-en_US
Fetched 91.6 kB in 20s (4,571 B/s)
Reading package lists... Done

But when I try sudo apt-get install tlp tlp-rdw it gives:

Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package tlp
E: Unable to locate package tlp-rdw

In /etc/apt/sources.list.d/linrunner-tlp-raring.list I have:

deb http://ppa.launchpad.net/linrunner/tlp/ubuntu raring main
deb-src http://ppa.launchpad.net/linrunner/tlp/ubuntu raring main
E: Unable to locate package tlp
software installation;apt
Try this, it works for me. Add a new repo in /etc/apt/sources.list:

deb http://repo.linrunner.de/debian wheezy main

sudo apt-get update
sudo apt-get install tlp tlp-rdw

Hope this will work for you.
_unix.278551
I use an /etc/ssh/wrapper.sh script, as in the tutorial, that filters which commands are allowed to be run via ssh and logs them. Currently, when internal-sftp is required, I use /usr/lib/sftp-server. Is there a way I could run the internal-sftp of the sshd binary, maybe even with chroot, instead of /usr/lib/sftp-server (e.g. via some ssh command line flags)?
Can I use `internal-sftp` when using wrapper script with `ForceCommand` in sshd?
ssh;chroot;sftp
No. internal-sftp is evaluated inside of the sshd server. If you already use a wrapper script as ForceCommand, you can't go back. Even if you could, in the chroot you don't have the sshd binary either. Unfortunately, a ForceCommand different from internal-sftp blocks even the sftp subsystem (a subsystem is internally handled as a command). The only way to do that is to copy sftp-server into the chroot and run it from there (again, with all dependent shared objects and so on).
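A hedged sketch of that copy step (paths vary by distribution; /var/chroot is a placeholder):

CHROOT=/var/chroot
mkdir -p "$CHROOT/usr/lib"
cp /usr/lib/sftp-server "$CHROOT/usr/lib/"
# Copy every shared object the binary links against into the chroot;
# ldd prints lines like "libc.so.6 => /lib/libc.so.6 (0x...)".
for lib in $(ldd /usr/lib/sftp-server | awk '/\//{print $(NF-1)}'); do
    mkdir -p "$CHROOT$(dirname "$lib")"
    cp "$lib" "$CHROOT$lib"
done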
_unix.317996
I have a program which takes urls as commandline arguments and output a pdf file. For example, I use command substitution to provide the input urls from a file urlsfilewkhtmltopdf $(cat urlsfile) my.pdfIn urlsfile, each line is a url.Now I would like to group every 15 lines in urlsfile, and feed a group of urls to the program at a time.How can I do that in bash?Note that it is acceptable to create a pdf file per 15 urls, and then I will merge the pdf files into one. If the merge can be done by the program, that is better.Thanks.
Group lines in a file and feed a group to a program at a time
bash;text processing
null
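One straightforward approach, as a sketch (assumes GNU split; the urlgroup_ and part_NN.pdf names are illustrative): split the url list into 15-line chunks and run the program once per chunk.

#!/bin/bash
# split urlsfile into chunks of 15 lines: urlgroup_aa, urlgroup_ab, ...
split -l 15 urlsfile urlgroup_

n=1
for f in urlgroup_*; do
    # the chunk's lines become separate arguments via word splitting
    wkhtmltopdf $(cat "$f") "part_$(printf '%02d' "$n").pdf"
    n=$((n+1))
done

# merging is a separate step; pdfunite from poppler-utils can do it:
# pdfunite part_*.pdf my.pdf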
_unix.102038
I would like to write a small script copying from a directory A to a directory B all the files with the .log extension. So in my directory A, I have:

ls:
a.log
b.log
c.log

Here is the pseudo-code I would like to implement:

foreach *.log x do :
    if [ stat -c %s pk_copylogs < 10485760 ]; then
        cp A/x B/x
    else
        read vANSWER? >> "File x is bigger than 10 MB, would you like to copy it anyway? Type YES or NO:"
        if [ $vANSWER = YES ]; then
            cp A/x B/x
        fi
    fi

My main problem here is to find a way to implement my foreach *.log. How can I do that?
KSH : cp only based on file size
ksh;cp;size
null
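A runnable sketch of the pseudo-code above (GNU stat is assumed; on BSD it would be stat -f %z, and the directory names A and B are taken from the question):

#!/bin/ksh
for x in A/*.log; do
    [ -e "$x" ] || continue               # nothing matched the glob
    size=$(stat -c %s "$x")               # file size in bytes (GNU stat)
    if [ "$size" -lt 10485760 ]; then
        cp "$x" B/
    else
        printf '%s is bigger than 10 MB, copy it anyway? Type YES or NO: ' "$x"
        read vANSWER
        [ "$vANSWER" = YES ] && cp "$x" B/
    fi
done

The "foreach" is just a plain for loop over the glob; the [ -e ] guard handles the case where no .log files exist at all.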
_datascience.9373
Since PyLearn2 is built upon Theano, is it possible to do anything I can do in Theano in PyLearn2? For example, if I have some snippets of Theano code, can I run them as-is in PyLearn2, or would this not work? If it wouldn't, why not?
Can PyLearn do everything that Theano can?
machine learning;deep learning;theano
null
_softwareengineering.299552
I have recently started reading the Head First Design Patterns book, as well as coding on my final year project. In my project I have Tanks which extend Entity. An Entity can be anything in the game which is positionable on the game map. Entity is an abstract class.

I have a move() method in my Entity class which will be used to change the position of a tank. I will have a lot of different kinds of tank in my game, and they will move faster or slower according to the velocity they have. Now my question is: I read about the strategy pattern, and according to it I should use an interface for movement, as I don't want to keep overriding or changing the move method's behaviour in every tank. Maybe if a tank doesn't move I need to keep the move method empty. So, rather than hard-coding a move() method in every Tank, should I introduce a Movable interface in the Tank class? This would be beneficial, as I could change a tank's movement behaviour at runtime, but I am also not sure how to do it, since I may then have to introduce the x and y location of the tank inside the interface implementations somehow and change them accordingly. That would defeat the purpose of the Entity class, as the Entity class is used to hold the x and y locations.

Please suggest the correct way to do it. Let me know if the question is not clear; it is a bit hard to express because it's a bit complex.
Should I use strategy design pattern or something else?
design patterns
What I recommend is keeping all of your behavior in the tank, but keeping the parameters such as velocity in another object. It would look something like this, in pseudocode:

interface TankParameters {
    int getVelocity();
    int getArmor();
    int getAttackPower();
}

class Tank : Entity {
    private TankParameters parameters;
    private Point location;

    public Tank(TankParameters p) {
        parameters = p;
    }

    public void move(Direction d) {
        // Use the direction, velocity on the parameters object,
        // something else like time and calculate the new location.
    }
}

You can have a single Tank class but alter the behavior of each Tank object by passing in different parameters at construction time.

This is a form of Dependency Injection: while the TankParameters class is not technically a dependency, because it is contrived and exists solely to break up the responsibilities of the Tank class, the idea of passing in arbitrary objects to alter behavior is a cornerstone of DI.

This is not quite a strategy, simply because TankParameters does not actually contain any algorithms. It exists to externalize data, not behavior.

While not strictly part of your question, I would also change Entity to be an interface and use composition instead of inheritance. The links below go into more detail about the benefits of using this approach.

See Also
Designing a library that is easy to use: composition or inheritance
Composition over inheritance
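To make the construction-time idea concrete, a small Java sketch of how a caller might use this (HeavyTankParameters and the Direction value are illustrative, not from the answer above):

// One concrete parameter set; different sets yield differently-behaving tanks
class HeavyTankParameters implements TankParameters {
    public int getVelocity()    { return 5;  }
    public int getArmor()       { return 90; }
    public int getAttackPower() { return 40; }
}

// Usage: the same Tank class, configured rather than subclassed
Tank heavy = new Tank(new HeavyTankParameters());
heavy.move(Direction.NORTH);

Swapping in a different TankParameters implementation (even at runtime, via a setter) changes how fast the tank moves without any new Tank subclass.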
_codereview.14738
I have many <span>s which I need to group to change opacity after hover. All of them need to have unique IDs. Is there a way to combine all these hover functions into one function?

jQuery

//09
$('#c_09_241a, #c_09_241b, #c_09_241c, #c_09_241d').hover(
    function() {
        $('#c_09_241a, #c_09_241b, #c_09_241c, #c_09_241d').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_09_241a, #c_09_241b, #c_09_241c, #c_09_241d').stop(true, true).css('opacity','0');
    });

$('#c_09_242a, #c_09_242b').hover(
    function() {
        $('#c_09_242a, #c_09_242b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_09_242a, #c_09_242b').stop(true, true).css('opacity','0');
    });

$('#c_09_245a, #c_09_245b').hover(
    function() {
        $('#c_09_245a, #c_09_245b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_09_245a, #c_09_245b').stop(true, true).css('opacity','0');
    });

$('#c_09_246a, #c_09_246b').hover(
    function() {
        $('#c_09_246a, #c_09_246b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_09_246a, #c_09_246b').stop(true, true).css('opacity','0');
    });

//08
$('#c_08_235a, #c_08_235b, #c_08_235c, #c_08_235d').hover(
    function() {
        $('#c_08_235a, #c_08_235b, #c_08_235c, #c_08_235d').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_08_235a, #c_08_235b, #c_08_235c, #c_08_235d').stop(true, true).css('opacity','0');
    });

$('#c_08_236a, #c_08_236b').hover(
    function() {
        $('#c_08_236a, #c_08_236b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_08_236a, #c_08_236b').stop(true, true).css('opacity','0');
    });

$('#c_08_239a, #c_08_239b').hover(
    function() {
        $('#c_08_239a, #c_08_239b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_08_239a, #c_08_239b').stop(true, true).css('opacity','0');
    });

$('#c_08_240a, #c_08_240b').hover(
    function() {
        $('#c_08_240a, #c_08_240b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_08_240a, #c_08_240b').stop(true, true).css('opacity','0');
    });

//07
$('#c_07_227a, #c_07_227b').hover(
    function() {
        $('#c_07_227a, #c_07_227b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_07_227a, #c_07_227b').stop(true, true).css('opacity','0');
    });

$('#c_07_228a, #c_07_228b, #c_07_228c').hover(
    function() {
        $('#c_07_228a, #c_07_228b, #c_07_228c').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_07_228a, #c_07_228b, #c_07_228c').stop(true, true).css('opacity','0');
    });

$('#c_07_007a, #c_07_007b').hover(
    function() {
        $('#c_07_007a, #c_07_007b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_07_007a, #c_07_007b').stop(true, true).css('opacity','0');
    });

$('#c_07_008a, #c_07_008b').hover(
    function() {
        $('#c_07_008a, #c_07_008b').stop(true, true).css('opacity','1');
    },
    function() {
        $('#c_07_008a, #c_07_008b').stop(true, true).css('opacity','0');
    });

HTML

<div id="rzuty09p" class="rzuty">
    <span id="c_09_241a" class="mieszkanie" title=""></span>
    <span id="c_09_241b" class="mieszkanie" title=""></span>
    <span id="c_09_241c" class="mieszkanie" title=""></span>
    <span id="c_09_241d" class="mieszkanie" title=""></span>
    <span id="c_09_242a" class="mieszkanie" title=""></span>
    <span id="c_09_242b" class="mieszkanie" title=""></span>
    <span id="c_09_243" class="mieszkanie" title=""></span>
    <span id="c_09_244" class="mieszkanie" title=""></span>
    <span id="c_09_245a" class="mieszkanie" title=""></span>
    <span id="c_09_245b" class="mieszkanie" title=""></span>
    <span id="c_09_246a" class="mieszkanie nie" title=""></span>
    <span id="c_09_246b" class="mieszkanie nie" title=""></span>
</div>

<div id="rzuty08p" class="rzuty">
    <span id="c_08_235a" class="mieszkanie" title=""></span>
    <span id="c_08_235b" class="mieszkanie" title=""></span>
    <span id="c_08_235c" class="mieszkanie" title=""></span>
    <span id="c_08_235d" class="mieszkanie" title=""></span>
    <span id="c_08_236a" class="mieszkanie nie" title=""></span>
    <span id="c_08_236b" class="mieszkanie nie" title=""></span>
    <span id="c_08_237" class="mieszkanie" title=""></span>
    <span id="c_08_238" class="mieszkanie" title=""></span>
    <span id="c_08_239a" class="mieszkanie" title=""></span>
    <span id="c_08_239b" class="mieszkanie" title=""></span>
    <span id="c_08_240a" class="mieszkanie" title=""></span>
    <span id="c_08_240b" class="mieszkanie" title=""></span>
</div>

<div id="rzuty07p" class="rzuty">
    <span id="c_07_226" class="mieszkanie" title=""></span>
    <span id="c_07_227a" class="mieszkanie" title=""></span>
    <span id="c_07_227b" class="mieszkanie" title=""></span>
    <span id="c_07_228a" class="mieszkanie" title=""></span>
    <span id="c_07_228b" class="mieszkanie" title=""></span>
    <span id="c_07_228c" class="mieszkanie" title=""></span>
    <span id="c_07_229" class="mieszkanie nie" title=""></span>
    <span id="c_07_005" class="mieszkanie" title=""></span>
    <span id="c_07_006" class="mieszkanie" title=""></span>
    <span id="c_07_007a" class="mieszkanie nie" title=""></span>
    <span id="c_07_007b" class="mieszkanie nie" title=""></span>
    <span id="c_07_008a" class="mieszkanie" title=""></span>
    <span id="c_07_008b" class="mieszkanie" title=""></span>
</div>
Combining hover functions
javascript;jquery
var groups = [
    '09_241', '09_242', '09_245', '09_246',
    '08_235', '08_236', '08_239', '08_240',
    '07_227', '07_228', '07_007', '07_008'
];

$.each(groups, function(i, group) {
    var $group = $('[id^=c_' + group + ']');
    $group.hover(function() {
        $group.stop(true, true).css('opacity', '1');
    }, function() {
        $group.stop(true, true).css('opacity', '0');
    });
});

Here's the fiddle: http://jsfiddle.net/W8GA3/

This is similar to @Matt's solution, with the following key differences:

Sane Selectors - There's no need to add all those selectors by hand. Simply use an attribute selector to select all elements that start with a given string.
Selector caching - If there's one thing you can do to your code to keep it fast, it's selector caching.
$.each - We're using jQuery's each helper to loop through the array, since it also supports older browsers (forEach is not supported in IE < 9).

If you want to, you could even build that original groups array dynamically, which will make this much easier to maintain:

var groups = [];
$('[id^=c_]').each(function() {
    // Extract group number from ID
    var group = this.id.substring(2, 8);
    // If the ID is not in the groups array, add it
    ~$.inArray( group, groups ) || groups.push( group );
});

Here's the fiddle: http://jsfiddle.net/W8GA3/1/
_codereview.144417
I have the following functions in JavaScript:

function SelectFirstVisiblePublicationAndHideIt ( PageObj )
{
    var hideButtonXpath = "//img[@title='To hide this item in your profile, mark it as invisible']";
    var hideButtonArray;
    var numberOfHideButton;
    var publicationTitle;

    aqUtils.Delay(500);
    hideButtonArray = PageObj.EvaluateXpath(hideButtonXpath);
    numberOfHideButton = hideButtonArray.length;
    hideButtonArray[0].Click();
    publicationTitle = hideButtonArray[0].parent.parent.FindChildByXPath("//a").innerText;
    return publicationTitle;
}

function SelectFirstVisiblePublicationAndReclaimIt ( PageObj )
{
    var reclaimButtonXpath = "//button[@title='Claim this publication and stay on this page']";
    var reclaimButtonArray;
    var numberOfReclaimButton;
    var publicationTitle;

    aqUtils.Delay(500);
    reclaimButtonArray = PageObj.EvaluateXpath(reclaimButtonXpath);
    numberOfReclaimButton = reclaimButtonArray.length;
    reclaimButtonArray[0].Click();
    publicationTitle = reclaimButtonArray[0].parent.parent.FindChildByXPath("//a").innerText;
    return publicationTitle;
}

SelectFirstVisiblePublicationAndHideIt and SelectFirstVisiblePublicationAndReclaimIt are very similar except for the Xpath expression. I am wondering if I should merge them into one function by:

re-naming variables and function names in a more generic way
introducing an additional input argument that serves as a flag to indicate which Xpath expression is to be used

Doing so has pros and cons:

Pros: reduces the number of functions by one
Cons: the purpose of a function can no longer be read from its name; one more input argument; it moves further away from the principle that one function should do one thing and one thing only

Any suggestions?
Functions to hide and reclaim first visible publication on a page using Selenium
javascript;beginner;comparative review;selenium;xpath
As the code is similar, they can be merged together.

I suggest you break down the functions into smaller reusable chunks which will follow the Single Responsibility Principle (SRP). These functions will be testable and reusable.

Both functions are:

1. Adding a delay of 500 milliseconds
2. Selecting elements from the provided page
3. Clicking the first element from the collection
4. Finding an element from its ancestor and returning the innerText of it.

As these are the steps, we can similarly divide those functions into smaller functions, each doing the task of one step above.

For the first step (adding a delay) I've not created a new function, as I guess that is already provided by the utils module.

The first function (Step #2) will select the elements from the page using the provided xpath and return the first of them.

function getFirstVisiblePublication(page, xpath) {
    aqUtils.Delay(500);
    var elements = page.EvaluateXpath(xpath);
    return elements[0];
}

The second function combines Steps #3 & #4; as #3 only requires clicking the element, I haven't created a new function for it.

The below function will accept the page in which the target element is to be searched and the xpath of the element. It will call the above function to get the first element and click it. This function will return the text/label of the anchor element, which is obtained by using the first element's ancestors.

function getFirstVisiblePublicationAnchorLabel(page, xpath) {
    // Get first element
    var firstElement = getFirstVisiblePublication(page, xpath);
    firstElement.Click();
    return firstElement.parent.parent.FindChildByXPath('//a').textContent;
}

Usage:

The function can be called as follows:

var hideButtonXpath = "//img[@title='To hide this item in your profile, mark it as invisible']";
var publicationTitle = getFirstVisiblePublicationAnchorLabel(PageObj, hideButtonXpath);
_cs.64368
Since parity game solving is in TFNP (Total Function Nondeterministic Polynomial) (and the decision version is in NP ∩ coNP), I wonder whether it is contained in PLS (Polynomial Local Search) or PPA (Polynomial Parity Argument)? Add PPP (Polynomial Pigeonhole Principle) if you want, even though this would probably mean that it is already contained in PPAD (Polynomial Parity Arguments on Directed graphs), and hence in PPA.

Or is it rather the other way round, and parity game solving can be shown to be hard for PLS or PPAD? But that would be surprising, since a recursive algorithm that solves parity games is known (even if it is not efficient in the worst case).

Edit 12 March 2017: I recently learned that parity game solving has been shown to be possible in quasipolynomial time. Here are the quoted references:

Deciding Parity Games in Quasipolynomial Time (PDF), by Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan.
A short proof of correctness of the quasi-polynomial time algorithm for parity games, by Hugo Gimbert and Rasmus Ibsen-Jensen.
Succinct progress measures for solving parity games, by Marcin Jurdziński and Ranko Lazić.

An implementation and comparison with previous approaches is available (classic strategy improvement wins on random instances, but gets slow on Friedmann's trap examples):

An Ordered Approach to Solving Parity Games in Quasi Polynomial Time and Quasi Linear Space, by John Fearnley, Sanjay Jain, Sven Schewe, Frank Stephan, Dominik Wojtczak
Complexity of parity game solving compared to PLS, PPA, and PPAD
complexity theory;game theory;game semantics
Yes, solving parity games is known to be in PPAD (and thus PPA and PPP too) and PLS, and is thus unlikely to be hard for either (since this would imply containment of one of these classes in the other).

See, e.g., Daskalakis, Constantinos, and Christos Papadimitriou. Continuous local search. Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms. SIAM, 2011, and combine the membership of Simple Stochastic Games (SSGs) in CLS (which is in PPAD and PLS) with the well-known observation that solving parity games can be reduced to solving SSGs in polynomial time.

The reason that these problems are in PPAD is that they admit optimality equations, rather like Bellman equations, that characterize solutions as fixed points. The reason these problems are in PLS is that they can be solved with local improvement algorithms like strategy improvement (a two-player generalization of policy iteration for MDPs).
_cs.30073
You have $n$ sticks of arbitrary lengths, not necessarily integral. By cutting some sticks (one cut cuts one stick, but we can cut as often as we want), you want to get $k<n$ sticks such that:

1. All these $k$ sticks have the same length;
2. All $k$ sticks are at least as long as all other sticks.

Note that we obtain $n + C$ sticks after performing $C$ cuts.

What algorithm would you use such that the number of necessary cuts is minimal? And what is that number?

As an example, take $k=2$ and any $n\geq 2$. The following algorithm can be used:

1. Order the sticks by descending order of length such that $L_1\geq L_2 \geq \ldots \geq L_n$.
2. If $L_1\geq 2 L_2$ then cut stick #1 into two equal pieces. There are now two sticks of length $L_1 / 2$, which are at least as long as the remaining sticks $2 \ldots n$.
3. Otherwise ($L_1 < 2 L_2$), cut stick #1 into two unequal pieces of sizes $L_2$ and $L_1-L_2$. There are now two sticks of length $L_2$, which is longer than $L_1-L_2$ and the other sticks $3 \ldots n$.

In both cases, a single cut is sufficient.

I tried to generalize this to larger $k$, but there seem to be a lot of cases to consider. Can you find an elegant solution?
Cutting equal sticks from different sticks
algorithms;optimization
The first core observation to solving this problem is that the feasibility of a cutting length $l$,

$\qquad\displaystyle \operatorname{Feasible}(l) = \biggl[\, \sum_{i=1}^n \Bigl\lfloor\frac{L_i}{l} \Bigr\rfloor \geq k \,\biggr]$,

is piecewise-constant, left-continuous and non-increasing in $l$. Since the number of necessary cuts behaves similarly, finding the optimal length is just

$\qquad\displaystyle l^{\star} = \max \{ l \mid \operatorname{Feasible}(l) \}$.

Furthermore, as the other answers have proposed, all jump discontinuities have the form $L_i/j$. This leaves us with a discrete, one-dimensional search problem amenable to binary search (after sorting a finite set of candidates). Note furthermore that we only need to consider the $L_i$ that are shorter than the $k$-largest one, since that one is always feasible.

Then, different bounds on $j$ lead to algorithms of different efficiency:

$1 \leq j \leq k$ results in a search space of quadratic size (in $k$),
$1 \leq j \leq \lceil k/i \rceil$ in a linearithmic one (assuming the $L_i$ are sorted by decreasing size), and
slightly more involved bounds in a linear one.

Using this, we can solve the proposed problem in time $\Theta(n + k \log k)$ and space $\Theta(n + k)$.

One further observation is that the sum in $\mathrm{Feasible}$ grows in $l$ by $1$ for each candidate $L_i/j$ passed, counting duplicates. Using this, we can use rank selection instead of binary search and obtain an algorithm that runs in time and space $\Theta(n)$, which is optimal.

Find the details in our article Efficient Algorithms for Envy-Free Stick Division With Fewest Cuts (by Reitzig and Wild, 2015).
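A deliberately naive Python sketch of the candidate idea (brute force over the quadratic candidate set; the binary-search and rank-selection variants described above are the fast ones):

def best_length(lengths, k):
    # Feasible(l): can we cut at least k pieces of length l in total?
    def feasible(l):
        return sum(int(L // l) for L in lengths) >= k

    # every jump of Feasible, hence the optimum, has the form L_i / j with j <= k
    candidates = {L / j for L in lengths for j in range(1, k + 1)}
    return max(l for l in candidates if feasible(l))

# e.g. best_length([5.0, 3.0], 2) == 3.0: one cut of the 5-stick into 3 and 2,
# matching the k = 2 algorithm from the question.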
_webapps.73829
I'm logged in to my Gmail account all the time on my cell phone. Tonight I lost my cell phone and would like to log off on all devices. Can I do this from my home computer after I log in to my Gmail account?
How to log out of Gmail of all devices from desktop
gmail
null
_cs.3172
S --> Ta | b | Sc
T --> Tc |

This isn't an LL grammar, but I need it to be so I can build a parse table. The problem is that no matter how much I try, I never manage to make it an LL grammar. Can someone please help by making it an LL grammar? It isn't that big and I'm very confused.
What's wrong with this LL grammar? (very short)
formal languages;formal grammars;parsing
null
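A sketch of the standard fix (assuming the empty alternative in T --> Tc | stands for ε): both S and T are left-recursive (S --> Sc, T --> Tc), which no LL grammar allows. Eliminating the left recursion in the usual way gives

S  --> TaS' | bS'
S' --> cS' | ε
T  --> cT | ε

This generates the same language, (c*a | b)c*, reading T as c*, and it is LL(1): the non-ε alternatives of each nonterminal start with distinct terminals, and the ε-rules are disambiguated by FOLLOW(S') = {$} and FOLLOW(T) = {a}, neither of which contains c.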
_unix.10081
Some of my friends and I are interested in starting a new Linux distro. How do we do that? What do we need to plan?

Backstory

I represent a community of Linux sysadmins/implementors whose special needs include, among others:

A specific 'lean' kernel config
Package management that fits our 'field needs'
Binary packages optimized for our 'use cases'
An X-less system

To the point: we have need of a specially-configured production-quality Linux distribution to be run exclusively as Para-Virtualized Production Servers. Rather than jumping through all the hoops and loops every time we need a VM-ized Server, we would very much like a semi-prepared system, optimized for its environment.

Since these VMs would be Production Servers, stability is a must, and honestly the available package management systems we're currently aware of just do not provide assurance. Zypp and Conary are the closest ones to our needs, but again still miss on some points.
How to start a new Linux distro?
linux;distros
null
_scicomp.5220
I am trying to solve a particular system of non-linear equations written as $F(x) = 0$ in an efficient way. More specifically,

$$F(x) = (I - \gamma A)x - g(x) + C$$

where $\gamma$ is a scalar constant, $C$ is a constant vector, $A$ is a constant matrix, and $g$ is a non-linear function from $R^n$ to $R^n$.

I am using Newton's method and I am relatively satisfied with it. It involves computing the Jacobian $F'(x) = I - \gamma A - g'(x)$ and solving a linear system at each Newton iteration of the form $F'(x^k) \Delta x^k = -F(x^k)$, and then setting $x^{k+1} = x^k + \Delta x^k$. In this specific case it converges quickly (2-3 iterations at most) and everything works well.

However, as the problem becomes more complex and the size of my system of equations increases, computing the Jacobian and solving that linear system become a bottleneck. To overcome this, I use a simplified Newton's method which keeps $F'$ constant over the whole Newton iteration process, and some parallel computing, since the Jacobian can be computed in parallel efficiently in my case. Unfortunately, solving the linear system still remains a bottleneck (despite use of diverse parallel linear algebra techniques).

My question is then: is there a way to take advantage of the specific form of $F$ (i.e. the fact that it is the sum of a linear operator and a non-linear operator)?

In particular, I am especially interested in methods which would require me to solve linear systems with either $A$ (or $I - \gamma A$) or $g'$, but not their sum. Indeed, I have specific algorithms to solve linear systems with these matrices, but solving a linear system with the sum of $I - \gamma A$ and $g'$ is very inefficient ($g'$ turns out to be block diagonal, so adding $I - \gamma A$ messes it up, while solving $(I - \gamma A)X = B$ by itself is not an issue thanks to the specific structure of $A$).

Thanks
Solving a nonlinear algebraic system that includes a linear term
nonlinear equations;newton method
Please state up-front in your question that you are solving a reaction-diffusion problem. You should also be aware that time splitting methods, such as Strang Splitting, are very popular for these problems, though they can have serious accuracy and stability problems (e.g., Ropp, Shadid, and Ober, 2004). That said, you can use a multiplicative iterative method, preferably as a preconditioner to a linear or nonlinear Krylov method. To solve $(A + B) x = f$, compute$$ \tilde x \gets x_0 + \omega_1 A^{-1} \big(f - (A+B) x_0\big) $$$$ x_1 \gets \tilde x + \omega_2 B^{-1} \big(f - (A+B) \tilde x \big) $$where $\omega_1,\omega_2$ are weighting parameters. The iteration matrix of this method is$$ T = \big(1 - \omega_1 A^{-1}(A+B)\big) \big(1 - \omega_2 B^{-1}(A+B)\big) = (1-\omega_1 - \omega_1 A^{-1} B)(1 - \omega_2 - \omega_2 B^{-1} A). $$The map is contractive if you choose $\omega_1,\omega_2$ such that $\lvert T \rvert < 1$, which may not be possible in case of indefinite reaction. One would typically use a Krylov method (and perhaps $\omega_1=\omega_2=1$) instead of tuning $\omega_1$ and $\omega_2$.
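A tiny Python sketch of that two-step iteration with $\omega_1 = \omega_2 = 1$, where A_solve and B_solve stand for the fast per-operator solvers the question says are already available (everything here is illustrative structure, not part of the answer itself):

import numpy as np

def multiplicative_iteration(x, f, apply_AplusB, A_solve, B_solve,
                             tol=1e-10, max_it=100):
    # Alternate corrections through A^{-1} and B^{-1}; each half-step
    # only needs a solve with ONE of the two operators, never with A + B.
    for _ in range(max_it):
        r = f - apply_AplusB(x)
        if np.linalg.norm(r) < tol:
            break
        x = x + A_solve(r)                      # half-step via the (I - gamma*A) solver
        x = x + B_solve(f - apply_AplusB(x))    # half-step via the block-diagonal g' solver
    return x

Used as a preconditioner for a Krylov method, one such double sweep per Krylov iteration is usually enough, so the weights never need hand-tuning.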
_softwareengineering.333959
We have a database backing an iOS and Android app whose primary data is a simple text field. We have a build tool which builds the database - taking a raw txt file which is a custom data format, which looks like:

---
.id SOME_ID
.author AUTHOR_NAME
.field MORE_DATA
.data_title TITLE
Data data data [#tag, #tag, ...]: some more data
Field: [id] description of data; [id] description
Examples:
 - e.x. some example describing the entry
 - e.x. another example
Closing field: more data; more data
---

or something along those lines.

To build the database, a parser was implemented to take the raw txt file, apply some transformations/optimizations to save storage space (e.g. compressing similar entries), and then convert it to a binary format. The resulting entry is stored in a similar but compressed format in the database, also as a raw text field that we'll call entry_data in table entries.

All consumers of the database, because the data is stored essentially in the same format, also need to implement a very similar parser, but for the compressed format in entry_data. That means Android and iOS both have about 20-30 files handling the parsing, as well as the web app that employees use to update the database.

I believe that the initial reason for this custom format was due to ease-of-adding for the people working on the database, as well as for storage space reasons (I'm not sure, but I think 6-8 years ago mobile devices had low limits on available storage). In any case, the database is only about 8 MB compressed, but is split into several chunks, also probably to avoid file size limit restrictions on very old devices, which we no longer even support.

The biggest issue with this custom format is that even small changes, such as adding fields, cause a cascade of changes throughout the web tool the people working on the database use, the database build tool, as well as all clients. So if we want to add a category field, we need to make changes to parsers in 4 different code bases, all of which are in different languages (JavaScript, Java and Objective-C).

All of our data relationships can be modeled with a relational database. There's no reason to have examples embedded in the entry - examples could easily be linked via an entry_examples join table. There are several other fields which have duplicate data across entries, which could also be solved by simple foreign keys.

My other thought is that data storage space is much less of an issue in modern devices - if losing the complex compression inflates our database from 8 MB to 32 MB, which is what many other apps in the same category as ours have (or even a lot more), I don't think users will mind, especially if changing to a standard data format allows us to add features in the apps more quickly.

Does it make sense to have custom data formats like ours, when there are widely-adopted standard formats for data (e.g. relational structure in relational databases, or even NoSQL databases that have the entries as structured JSON)?
Does it make sense to have a custom textual data format?
database design;data
Does it make sense to have custom data formats like ours, when there are widely-adopted standard formats for data (e.g. relational structure in relational databases, or even NoSQL databases that have the entries as structured JSON)?

No.

The thing is, not only do things like JSON and relational databases exist, they're long-time standards, which means strong, robust ecosystems of tooling have been built around them. All the common things you might need to do with your data, someone has already published an open-source library to take care of, which you can get access to for free. If you start from scratch with a proprietary format, you have to build all that tooling yourself. You're going to make mistakes along the way - mistakes that the people who wrote the libraries for standard formats have already made and fixed.

This is exactly the sort of scenario that the phrase "don't reinvent the wheel" was created for. If the data is small and simple, use JSON. (Not JSON-backed NoSQL databases; just flat JSON text files.) If it's a large dataset and you need to run non-trivial queries against it, use a relational database.

Also, a bit of free advice: don't use NoSQL databases unless you're actually running something web-scale (i.e. on the same order of magnitude as Amazon, Google or Facebook), as the hassles they bring tend to outweigh the benefits until you're big enough that those benefits are actually necessary rather than nice to have. The ACID guarantees the relational model offers can be the best thing that ever happened to your data, if you let them.
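To illustrate the flat-JSON suggestion with the fields from the question's sample format (the field names come from that sample; the exact nesting is an assumption):

{
  "id": "SOME_ID",
  "author": "AUTHOR_NAME",
  "data_title": "TITLE",
  "tags": ["tag1", "tag2"],
  "examples": [
    "some example describing the entry",
    "another example"
  ]
}

Adding a "category" field to an entry like this requires no parser changes at all on any of the four code bases; every platform already ships a JSON parser.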
_unix.198184
I have here a bash script:

#!/bin/bash

TARGET_FILE="ping_result.txt"

# declare the target ip addresses
declare -a ips=("XXX.XXX.XXX.XXX" "YYY.YYY.YYY.YYY")

function print_and_log() {
    echo "$1"
    echo "$1" >> "$TARGET_FILE" 2>&1
}

# loop through all ip addresses
for i in "${ips[@]}"
do
    print_and_log "----- Begin Pinging for $i ---"
    print_and_log "command: ping -c10 $i"
    print_and_log "$(ping -c10 $i)"
    print_and_log "----- Pinging for $i end -----"
done

My aim is to print the same output to a file and to the console. But when I disconnect my operating system from the network, then in the console I see (for example for the first IP address):

----- Begin Pinging for XXX.XXX.XXX.XXX ---
command: ping -c10 XXX.XXX.XXX.XXX
connect: Network is not reachable
----- Pinging for XXX.XXX.XXX.XXX end -----

But in the file, I do not see any message that the Network is not reachable. Why? How can I change my bash script so that I can also see this message in my log file?
Redirection to file does not work properly
bash;logs;io redirection;ping
The Network is not reachable message is printed to stderr, not stdout, so it isn't captured by your substitution ($(ping ...)). You need to redirect stderr to stdout when running ping, not when you log:

print_and_log "$(ping -c10 $i 2>&1)"
_softwareengineering.211959
I have some difficulty understanding the concept of fixture. I know what a test suite is, a test case, a test run, but what exactly is a fixture? A parameterized test case?

It seems to me that the meaning or semantics of the term fixture can vary slightly by programming language or by testing framework. I think a phpunit fixture ("the code to set the world up in a known state and then return it to its original state when the test is complete. This known state is called the fixture of the test.") is slightly different from a fitnesse fixture, where "Fixtures are a bridge between the Wiki pages and the System Under Test (SUT)", which is the actual system to test.

Is there an expert in software testing around here who can answer this question? References to other programming languages are welcome.
What are the different meanings of 'fixture'?
java;php;testing;terminology;definition
In the context of testing tools you mentioned, such as PHPUnit and Fitnesse, this term definitely refers to the notion of test fixture:

something used to consistently test some item, device, or piece of software...

Software

Test fixture refers to the fixed state used as a baseline for running tests in software testing. The purpose of a test fixture is to ensure that there is a well known and fixed environment in which tests are run so that results are repeatable. Some people call this the test context.

Examples of fixtures:

Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects

...

Use of fixtures

Some advantages of fixtures include separation of the test initialization (and destruction) from the testing, reusing a known state for more than one test, and special assumption by the testing framework that the fixture set up works...
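A minimal sketch of what such a fixture looks like in JUnit-style Java (the Account class and the amounts are made up for illustration; only the @Before/@Test structure is the point):

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountTest {
    private Account account;

    // The fixture: every test starts from this well-known baseline state.
    @Before
    public void setUp() {
        account = new Account();
        account.deposit(100);
    }

    @Test
    public void withdrawalReducesBalance() {
        account.withdraw(40);
        assertEquals(60, account.getBalance());
    }
}

PHPUnit's setUp()/tearDown() methods play exactly the same role, which is why its documentation calls the known state itself "the fixture".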
_unix.192981
I'm running a pfSense 2.2 router. While not required most of the time, sometimes using the console is the way to go. Unfortunately pfSense doesn't come with /usr/bin/man. I was able to run

pkg
pkg update

though. Is there some way to install man using pkg? If so, where would I download the man pages for my system?

I've already read https://serverfault.com/questions/239533/how-can-i-install-man-pages-from-freebsd-server-via-console-command/239550#239550 and I will try that approach if using pkg isn't an option. The latter approach is just harder to automate.
Install man on FreeBSD 10.1 based pfsense 2.2
freebsd;pfsense;man
null
_unix.126937
I'm trying to determine if an application's connection to a database is internal to the application or due to a networking event. This seems like something netfilter would have system-wide visibility on. To that end, I wanted to use -j LOG with iptables to log all TCP resets and timeouts that occur on the system. I don't know what to use for the matching criteria, though.

I feel like the answer would involve the conntrack module at some point, but I've been able to locate next to nothing on it. The database server is MS-SQL and the J2EE application is running on a RHEL 5.10 VM. The latter is the machine I'm trying to perform the logging on.

EDIT: I found this blog post which shows how to log TCP resets (amongst other things) with the --tcp-flags option to iptables. So the outstanding issue is figuring out how to log connections with no explicit RST but which are closed due to the connection being seen as stale/timing out.
Can iptables be used to monitor TCP timeouts/resets system-wide?
networking;rhel;iptables;tcp
After asking on IRC a bit, it seems the general expectation is that if one node feels a connection has terminated abnormally for any reason (including reaching an internally derived timeout), it's expected to send the remote node an RST packet before closing the connection on its side. So it appears both questions get answered with the same solution: log TCP resets via --tcp-flags. The basic command to do that on my RHEL 5.10 system (should work on Debian based distros as well) is:

root@xxxxxxvlt01 ~ $ iptables -A OUTPUT -m tcp -p tcp --tcp-flags RST RST -j LOG
root@xxxxxxvlt01 ~ $ iptables -A OUTPUT -m tcp -p tcp --tcp-flags FIN FIN -j LOG

Without matching criteria, I would probably end up matching quite a few packets, so I created a new rule specifically targeting the system I'm going after:

root@xxxxxxvlt01 ~ $ iptables -A OUTPUT -d xxx.xxx.64.248/32 -m tcp -p tcp --tcp-flags RST RST -j LOG
root@xxxxxxvlt01 ~ $ iptables -A OUTPUT -d xxx.xxx.64.248/32 -m tcp -p tcp --tcp-flags FIN FIN -j LOG
root@xxxxxxvlt01 ~ $ iptables -nvL
Chain INPUT (policy ACCEPT 767K packets, 108M bytes)
 pkts bytes target  prot opt in  out  source     destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out  source     destination

Chain OUTPUT (policy ACCEPT 448K packets, 68M bytes)
 pkts bytes target  prot opt in  out  source     destination
    0     0 LOG     tcp  --  *   *    0.0.0.0/0  xxx.xxx.64.248  tcp flags:0x04/0x04 LOG flags 0 level 4
    0     0 LOG     tcp  --  *   *    0.0.0.0/0  xxx.xxx.64.248  tcp flags:0x01/0x01 LOG flags 0 level 4
root@xxxxxxvlt01 ~ $

Which is a lot better. Since my confirmation for the RST-upon-timeout stuff is from someone on IRC, I'll leave this open/unanswered in case someone can prove me wrong. After a week I'll accept my own answer on this one unless I'm contradicted.
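One hedged addition that is not part of the original answer: on a busy box, the LOG target can flood the kernel log, and iptables' standard limit match can cap it, e.g.:

iptables -A OUTPUT -d xxx.xxx.64.248/32 -p tcp --tcp-flags RST RST \
         -m limit --limit 5/min -j LOG --log-prefix "TCP-RST: "

The --log-prefix also makes the entries easy to grep out of /var/log/messages afterwards.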
_unix.60575
I'm intending to replace a NAS's Custom Linux with Arch Linux (details at the bottom), of course wishing to preserve all user data and (due to the SSH-only headless access) attempting to be fool-proof, since a mistake may require firmware reinstallation (or even brick the device). So instead of running the installation (downloading the appropriate Arch Linux ARM release and probably using the chroot into LiveCD approach) without appropriate preparations, what must I keep in mind? (Please feel free to answer without restriction to Arch Linux.) More precisely:

Do I somehow have to bother with figuring out how a specific device's boot process works (e.g. which parts of the bootloader reside on the flash memory and which ones are on the harddisk), or can I rely on the distribution's installer to handle this correctly?
How can I determine whether some (possibly proprietary) drivers are used, and how can I migrate them into the new setup?
Is the RAID configuration safe from accidental deletion?
Is there a way to fake the booting process so I can check for correct installation while the original system remains accessible by simply rebooting? E.g. using chroot and kexec somehow?
What else should I be aware of?

The specific case is that I want to replace the custom Linux from a Buffalo LinkStation Pro Duo (armv5tel architecture; the nas-central description is a bit more helpful here and also provides instructions on how to gain SSH root access) with Arch Linux ARM. But a more general answer may be more helpful for others as well.
How to safely replace one Linux distribution with another one via SSH?
linux;ssh;system installation;reinstall
With the required skill and especially knowledge about the installed Linux, it is not worthwhile anymore to replace it. And whatever you do, you probably never want to replace the already installed kernel. However, you can have your Arch Linux relatively easily and fool-proof!

The concept: you install Arch Linux into some directory on your NAS and chroot (man chroot) into it. That way you don't need to replace the NAS Linux. You install and configure your Arch Linux and replace the native Linux's services by Arch Linux services step by step. As your Arch Linux installation gets more complete and powerful, you automate the chrooting procedure, and turn off the services provided by the native Linux one by one while automating the starting of services within the chrooted Arch Linux. When you're done, the boot procedure of your NAS works like this: load the kernel and mount the hdds, chroot into Arch Linux, exec /sbin/init in your chrooted environment.

You need to work out the precise details yourself, because I know neither Arch Linux nor your NAS and its OS. You need to create the target directory into which you want to install Arch Linux; it needs to be on a device with sufficient available writable space (mkdir /suitable/path/archlinux). Then you need to bootstrap your Arch Linux:

cd /suitable/path/archlinux
wget http://tokland.googlecode.com/svn/trunk/archlinux/arch-bootstrap.sh
bash arch-bootstrap.sh yournassarchitecture

Now you have a basic Arch Linux in that path. You can chroot into it along the lines of

cp /etc/resolv.conf etc/resolv.conf
cp -a /lib/modules/$(uname -r) lib/modules
mount -t proc archproc proc
mount -t sysfs archsys sys
mount -o bind /dev dev
mount -t devpts archdevpts dev/pts
chroot . bin/bash

Then you should source /etc/profile. Now your current shell is in your Arch Linux and you can use it as if you had replaced your native Linux ... which you have, for the scope of your current process. Obviously you want to install stuff and configure your Arch Linux. When you use your current shell to execute /etc/init.d/ssh start, you are actually starting the ssh daemon of your Arch Linux installation.

When you're done and you really want to entirely replace your native Linux (services) with Arch Linux, your NAS's native Linux doesn't start any services anymore but executes the chroot procedure above, with the difference that the last line is exec chroot . sbin/init.

This is not as complete as a real replacement, but it is as fool-proof as it gets. And as stated initially, with the knowledge and skill required for this, IMHO (!), a complete replacement is not necessary or worthwhile.
_unix.125940
I have an mdadm RAID and want to get notified about events, for example a failing HDD. I can use MAILADDR and PROGRAM in the mdadm config file to achieve this; I decided to use the latter. So I wrote a simple notification bash script and set the PROGRAM option to the script's path.

To prevent each and every user from using this script to send notifications, only the root user has the execute right on that script. So when mdadm wants to send a notification, it must run the script as root.

However, I cannot find any option to set the user. Is it always root by default?
mdadm: Is PROGRAM always run by the user root?
mdadm
null
_unix.327630
I have a CentOS machine with 1 hard drive, partitioned as follows:

[root@localhost var]# fdisk -l

Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          19      152586   83  Linux
/dev/sda2              20         280     2096482+  82  Linux swap / Solaris
/dev/sda3             281        2610    18715725   8e  Linux LVM

[root@localhost var]# vgs
  VG      #PV #LV #SN Attr   VSize  VFree
  vg_root   1   3   0 wz--n- 17.84G    0

[root@localhost var]# pvs
  PV         VG      Fmt  Attr PSize  PFree
  /dev/sda3  vg_root lvm2 a--  17.84G    0

[root@localhost var]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg_root/lv_root
  VG Name                vg_root
  LV UUID                YFfzh2-03mH-zrHa-INJn-iqxZ-vYpo-XDyimP
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                8.91 GB
  Current LE             285
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg_root/lv_var_lib_mysql
  VG Name                vg_root
  LV UUID                xdoJuc-21WP-99ZW-5aOE-bINc-fgYF-qP50vw
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                5.94 GB
  Current LE             190
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/vg_root/lv_zenoss_perf
  VG Name                vg_root
  LV UUID                ByUzF7-wYab-R9q5-3h4o-S3AI-L0HH-2Kn07p
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.00 GB
  Current LE             96
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

/dev/sda is showing a total size of 214.7 GB.

How can I extend the size of each logical volume by 20 GB?

Thank you,
Extend partition
partition
null
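A sketch of the usual LVM route (assumptions: the unallocated disk space becomes a new LVM partition /dev/sda4, and the filesystems are ext3/ext4; verify both before running anything):

# 1. create a new partition of type 8e (Linux LVM) in the free space
fdisk /dev/sda          # add primary partition /dev/sda4, type 8e
partprobe /dev/sda      # re-read the partition table

# 2. add it to the existing volume group
pvcreate /dev/sda4
vgextend vg_root /dev/sda4

# 3. grow each LV and its filesystem by 20G
lvextend -L +20G /dev/vg_root/lv_root
resize2fs /dev/vg_root/lv_root
lvextend -L +20G /dev/vg_root/lv_var_lib_mysql
resize2fs /dev/vg_root/lv_var_lib_mysql
lvextend -L +20G /dev/vg_root/lv_zenoss_perf
resize2fs /dev/vg_root/lv_zenoss_perf

The VG currently has 0 free extents, so step 1 and 2 (getting new physical space into vg_root) are required before any lvextend can succeed.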
_unix.352256
I have started the tor service with no issue on whonix-ws and anon-whonix (although both won't connect to Tor because of this issue I'm having). I can also confirm that I have a connection on sys-net to clearnet just fine.

When I try to start the tor service on whonix-gw I keep getting the following error. I have used sudo service tor start, sudo service tor@default start and one or two other commands, but no luck.

user@host:~$ sudo service tor start
user@host:~$ sudo service tor status
  tor.service - Anonymizing overlay network for TCP (multi-instance-master)
   Loaded: loaded (/lib/systemd/system/tor.service; enabled)
  Drop-In: /lib/systemd/system/tor.service.d
           30_qubes.conf, 40_qubes.conf
   Active: inactive (dead)
           start condition failed at Sat 2017-03-18 15:27:04 AEDT; 2s ago
           ConditionPathExists=/var/run/qubes-service/whonix-gateway was not met

Mar 18 15:25:08 host systemd[1]: Started Anonymizing overlay network for TCP (multi-instance-master).
Mar 18 15:27:04 host systemd[1]: Started Anonymizing overlay network for TCP (multi-instance-master).

I am out of ideas and hoping someone else has come across this before.
Whonix - whonix-gw keeps saying start condition has not been met
tor;whonix;qubes
null
_cs.28845
I'm trying to solve a constraint programming problem using a SAT solver. I have set of constraints in the form of propositional logic statements, which are converted to CNF using Tseitin transformation. From the nature of my problem I know that there are redundant constraints. For example, I have a rule:$$AND\:(OR\: (x1,x2),\: OR\: (x3,x4)) \,\,\,\,\,\,(1)$$and another:$$AND\:(OR\: (x1,x2),\: OR\: (x3,x4), \:OR\:(x5,x6))\,\,\,\,\,\, (2)$$where $x_i$ are inputs. Both constraints have in common subnodes, for example $OR (x1,x2)$. Due to this I know that there is no point in having both constraints - if the first one becomes FALSE, then the second equals FALSE too. So the second one is redundant. This example is the easiest. One more, closer to what might be found in my formula: $$AND\:(x1,\:x2,\: OR \:(AND \:(x3,x4), \:AND\:(x5,x6)))\,\,\,\,\,\,(3)$$ and the second one:$$AND\:(x3,x4)\,\,\,\,\,\,(4)$$ In this case, I can simplify the first one to the form of: $$AND\:(x1,\:x2,\:AND\:(x5,x6))\,\,\,\,\,\,(5)$$ due to (4).I prefer to have a lot of small rules (by a lot I mean $10^3$ or $10^4$, which in CNF become $10^5-10^6$ of clauses). The rules are not complicated - 10 ANDs and ORs and 20-30 inputs per rule at most, but they are usually much simpler.Is there an efficient way to remove such redundancy from a formula? Is it possible with CNF formulation or I should use some other representation of the formula (BDD, for example)? Any help and directions will be useful.
Detection of redundant boolean constraints
satisfiability;sat solvers
null
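A pointer that may help here: what the question describes is essentially clause subsumption (a clause whose literals are a superset of another clause's is redundant) together with something close to self-subsuming resolution for the (3)/(4) case. CNF preprocessors implement exactly these simplifications; the classic reference is SatELite (Eén and Biere, "Effective Preprocessing in SAT through Variable and Clause Elimination", SAT 2005), and its techniques are built into the preprocessing of many modern SAT solvers. So rather than deduplicating at the constraint level, it may be enough to run the solver's own preprocessor (or a standalone one) on the generated CNF, which also removes exact duplicate clauses such as the shared OR(x1, x2).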
_unix.61876
In my new Gentoo installation, su doesn't work as my non-root user: after entering the correct password I get the message su: Permission denied. What could be causing this? I have already tried reinstalling the package containing /bin/su.

EDIT: sudo works.
su: Permission denied despite correct password
permissions;su
I solved the same problem by editing /etc/pam.d/su and commenting out this line:

auth required pam_wheel.so use_uid

It requires users to be in the wheel group to be able to switch user. User switching as non-root works again when this pam module is disabled for su.
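A hedged alternative, not from the original answer: instead of disabling the module, keep the wheel restriction and add your user to the group, e.g.:

gpasswd -a youruser wheel    # or: usermod -aG wheel youruser
# log out and back in for the new group membership to take effect

This keeps the intended safety property (only wheel members may su) while fixing the Permission denied for the account that should legitimately be allowed to switch.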
_codereview.111101
I'm implementing a graphical representation of an analog clock with moving hands (seconds, minutes and hours), using mainly[2] FLTK facilities and the function Sleep(milliseconds):

main.cpp:

#include <iostream>
#include "GUI.h"
#include "Window.h"

using namespace Graph_lib;

#include "AnalogClock.h"

int main ()
try {
    Analog_clock ac (Point (700, 30), "Analog clock");
    return gui_main ();
}
catch (exception &e) {
    cerr << e.what () << endl;
    getchar();
}

AnalogClock.h:

#pragma once

#define PI 3.14159265

/*
class Analog_clock
It creates a GUI representing an analog clock
with three indicators: hour, minute, second and a fancy dial.
*/
class Analog_clock: public Window {
public:
    Analog_clock (Point xy, const string &label);
private:
    // background image
    Image clock_dial;
    // data representing second, minute and hour
    Line* second_indicator;
    Line* minute_indicator;
    Line* hour_indicator;
    // helper functions
    Point rotate (Point initial, Point pivot, double angle);
    void set_clock ();
    void run_clock ();
    // action functions
    void increment_second ();
    void increment_minute ();
    void increment_hour ();
    // callback functions
    /*
    typedef void* Address;
    template<class W> W& reference_to (Address pw)
    {
        return *static_cast<W*>(pw);
    }
    */
    static void cb_seconds (Address, Address pw)
    {
        reference_to<Analog_clock>(pw).increment_second ();
    }
    static void cb_minutes (Address, Address pw)
    {
        reference_to<Analog_clock>(pw).increment_minute ();
    }
    static void cb_hours (Address, Address pw)
    {
        reference_to<Analog_clock>(pw).increment_hour ();
    }
};

//------------------------------------------------------------------------------------------------------------------------
// class member implementations

/*
class constructor: Analog_clock()
It initializes a window containing
an image (dial) and three lines (indicators),
together with a function that runs the clock utilizing
the machine clock.
*/
Analog_clock::Analog_clock (Point xy, const string &label)
    : Window (xy, 480, 460, label),
      clock_dial (Point (0, 0), "Chapter16Exercise6.gif"),
      second_indicator (nullptr),
      minute_indicator (nullptr),
      hour_indicator (nullptr)
{
    attach (clock_dial);
    set_clock ();
    run_clock ();
}

// helper function
/*
Member function: rotate ()
Use: -It is used to rotate the clock indicators
around the center point at an angle corresponding to second, minute and hour.
*/
Point Analog_clock::rotate (Point initial, Point pivot, double angle)
{
    return Point((cos(angle) * (initial.x - pivot.x)) - (sin(angle) * (initial.y - pivot.y)) + pivot.x,
                 (sin(angle) * (initial.x - pivot.x)) + (cos(angle) * (initial.y - pivot.y)) + pivot.y);
}

/*
Member function: set_clock ()
Use: -It initializes the data members
representing clock indicators to
initial value: pointing at 12 o'clock.
*/
void Analog_clock::set_clock ()
{
    Point clock_center (x_max () / 2. - 2, y_max () / 2.);

    // set seconds
    const int second_indicator_length = 150;
    Point twelve_o_clock_s (x_max () / 2. - 2, y_max () / 2. - second_indicator_length);
    second_indicator = new Line (clock_center, twelve_o_clock_s);
    second_indicator->set_style (Line_style (Line_style::solid, 2));
    second_indicator->set_color (Color::red);

    // set minutes
    const int minute_indicator_length = 150;
    Point twelve_o_clock_m (x_max () / 2. - 2, y_max () / 2. - minute_indicator_length);
    minute_indicator = new Line (clock_center, twelve_o_clock_m);
    minute_indicator->set_style (Line_style (Line_style::solid, 8));

    // set hours
    const int hour_indicator_length = 50;
    Point twelve_o_clock (x_max () / 2. - 2, y_max () / 2. - hour_indicator_length);
    hour_indicator = new Line (clock_center, twelve_o_clock);
    hour_indicator->set_style (Line_style (Line_style::solid, 8));

    // attach in the right order
    attach (*minute_indicator);
    attach (*hour_indicator);
    attach (*second_indicator);
}

/*
Member function: run_clock ()
Use: -It updates the clock time by
invoking the functions responsible
for the rotation of the indicators
at a specific interval defined with
the help of the functions: clock ()
and sleep ().
*/
void Analog_clock::run_clock ()
{
    // get real time and set the clock
    // ...

    // run the clock
    while (true) {
        for (auto i = 0; i < 60; i++) {
            for (auto i = 0; i < 60; i++) {
                cb_seconds (0, this);
                Sleep (1000);
            }
            cb_minutes (0, this);
        }
        cb_hours (0, this);
    }
}

// action functions
/*
Member function: increment_second ()
Use: -It increments the second indicator by
rotating the line that represents it
by an angle of 6 degrees.
*/
void Analog_clock::increment_second ()
{
    Point center = second_indicator->point (0);
    Point old_time = second_indicator->point (1);

    // rotate 6 degrees (6 degrees x 60 seconds = 360, one rotation)
    double angle_radians = ((6.) * PI) / 180.;
    Point new_time = rotate (old_time, center, angle_radians);

    // delete old indicator, create new one and attach it
    detach (*second_indicator);
    second_indicator = new Line (center, new_time);
    second_indicator->set_style (Line_style (Line_style::solid, 2));
    second_indicator->set_color (Color::red);
    attach (*second_indicator);

    // redraw ();
    draw();
}

/*
Member function: increment_minute ()
Use: -It increments the minute indicator by
rotating the line that represents it
by an angle of 6 degrees.
*/
void Analog_clock::increment_minute ()
{
    Point center = minute_indicator->point (0);
    Point old_time = minute_indicator->point (1);

    // rotate 6 degrees (6 degrees x 60 seconds = 360, one rotation)
    double angle_radians = ((6.) * PI) / 180.;
    Point new_time = rotate (old_time, center, angle_radians);

    // delete old indicator, create new one and attach it
    detach (*minute_indicator);
    minute_indicator = new Line (center, new_time);
    minute_indicator->set_style (Line_style (Line_style::solid, 8));
    attach (*minute_indicator);

    // redraw ();
    draw();
}

/*
Member function: increment_hour ()
Use: -It increments the hour indicator by
rotating the line that represents it
by an angle of 30 degrees.
*/
void Analog_clock::increment_hour ()
{
    Point center = hour_indicator->point (0);
    Point old_time = hour_indicator->point (1);

    // rotate 6 degrees (6 degrees x 60 seconds = 360, one rotation)
    double angle_radians = ((30.) * PI) / 180.;
    Point new_time = rotate (old_time, center, angle_radians);

    // delete old indicator, create new one and attach it
    detach (*hour_indicator);
    hour_indicator = new Line (center, new_time);
    hour_indicator->set_style (Line_style (Line_style::solid, 8));
    attach (*hour_indicator);

    // redraw ();
    draw ();
}

Result:

after 33 seconds: [screenshot]

after 1 min 24 seconds: [screenshot]

Questions:

1. Is the above code looking good enough; are there any mistakes that could be eliminated?
2. How can I make the clock update more smoothly, without blinking on every second?
3. If I use the commented function redraw () within increment_second () / minute () / hour (), the clock shows, within the specified window, only once after the loops within run_clock () are over. On the other hand, in the current implementation, i.e. using draw (), the clock updates every second but not in the specified window, and it interferes with the rest of the open windows.

1. Exercise 6 Chapter 16 from B. Stroustrup's: C++ Programming Language: Principles and Practice.
2. All the needed files for compilation are here. The FLTK can be found here.
A simple Analog Clock
c++;gui
You shouldn't be using an infinite loop and sleep in a GUI; instead, use the timer functionality of FLTK via Fl::add_timeout and Fl::repeat_timeout:

void timer_callback(void* window)
{
    reinterpret_cast<Analog_clock*>(window)->increment_second();
    Fl::repeat_timeout(1, timer_callback, window);
}

void Analog_clock::run_clock ()
{
    Fl::add_timeout(1, timer_callback, this);
}

Analog_clock::increment_second()
{
    // will also handle seconds overflow so minute and hour hand can be updated
}

Many GUI libraries use a single thread and callbacks to manage events and draws; if you block in one of the callbacks, then it can't handle any other event while it is blocking. This was the reason redraw didn't work.

repeat_timeout is like add_timeout, except that it adds t (the time until the callback should happen) to the time the current callback (should have) happened, instead of to now(). This way the time between timer_callback getting called and it calling repeat_timeout is not a factor.
_cs.3414
A partition of a set S is a separation of the set into an arbitrary number of non-empty, pairwise disjoint subsets whose union is exactly S. What manner of data structure should be used to represent a partition of a set if the following methods need to be optimized:

1. moving an element from one part to another, possibly an entirely new one, and
2. iterating over the parts of the partition.

A naive way of prioritizing 1 would be a hash/tree/whatever mapping from set elements to part labels, but iterating over the parts would require O(N) for first constructing the actual parts from the labels. 2 is naively prioritized as a hash/tree/whatever set of hash/tree/whatever sets, but then moving elements around, especially to new subsets, incurs that memory management overhead.

Is there a way to get the best of both worlds? The implementation I need is Python, but I imagine this is a cross-language question.
Data structure for partition of a set
data structures;partitions;sets
null
_unix.169385
I was thinking of performing a Clonezilla backup and was wondering what backup mode to choose. Generally speaking, Clonezilla offers the following backup options:

savedisk: save a full disk image
saveparts: save images of specific partitions

Correspondingly, there are two restore modes:

restoredisk: restores a full disk image
restoreparts: restores partition images

What I am looking for is a hybrid of these two options. I would like to be able to both restore specific partitions and restore my full hard drive in case of a total failure. Does Clonezilla support this restoration pathway out of the box?

So far I haven't been able to find any official documentation regarding this. The only reference I did find was a mailing list discussion from 2010, which pointed to imgconvert, a custom script which can supposedly convert disk images to partition images. Unfortunately, I have no idea if this script still works. After all, it's 5 years old.

That's why I wanted to ask here if anyone has had any experience with this use case of Clonezilla and could vouch for this solution (or a different one, for that matter).
Can I restore a single partition from a Clonezilla disk image?
backup;cloning;restore;clonezilla
Yes. To restore a single partition from a Clonezilla disk image, run partclone against that partition's files inside the image, for example:

cat sda5.ext3-ptcl-img.gz.a* | gunzip -c | partclone.restore -d -s - -o /dev/sda5
_unix.40610
I wanted to use, and have tried,

sudo usermod durrantm_test -mdl durrantm_test2

but I get

Usage: usermod [options] LOGIN...

However,

sudo usermod durrantm_test -l durrantm_test2 -md durrantm_test2

doesn't give an error but seems repetitive. Can I shorten it?
Is there a shorter way to change username, home directory and move files at the same time
users;sudo
usrmodx() { sudo usermod $1 -l $2 -md $2; }
usrmodx durrantm_test durrantm_test2

But shouldn't it be

sudo usermod -l new_name -md new_dir old_name

so

sudo usermod -l durrantm_test2 -md durrantm_test2 durrantm_test

and as a function,

moveuser() { sudo usermod -l $2 -md $2 $1; }
moveuser durrantm_test durrantm_test2

or am I missing something?

This assumes you're using a shell which supports functions (e.g. bash), and avoids aliases because you can't use positional variables.
_cs.69802
Assume $u(n)$ calculates $n^2$ in $n$ steps, thus is in $O(n)$, and $v(n)$ calculates the square root of $n$ in $\sqrt n$ steps, thus is in $O(\sqrt n)$.

What is the runtime of the program u(v(n))?

The argumentation in my solution is: we calculate the square root first, then we take its square. $\sqrt n + \sqrt n = 2\sqrt n \rightarrow O(\sqrt n)$.

I don't understand that. Wouldn't it actually be $n + \sqrt n$?
Runtime of a programm that takes the square of a root
runtime analysis
null
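A hedged note on the step that seems to cause the confusion (standard reasoning, not an accepted answer): the $n$ in $u$'s running time $O(n)$ refers to $u$'s own argument. Here that argument is $m = v(n) = \sqrt n$, so the call $u(\sqrt n)$ costs $\sqrt n$ steps, not $n$:

$$\underbrace{\sqrt n}_{\text{cost of } v(n)} + \underbrace{m}_{\text{cost of } u(m),\ m = \sqrt n} = \sqrt n + \sqrt n = 2\sqrt n \in O(\sqrt n).$$

The figure $n + \sqrt n$ would apply only if $u$ were run on an input of magnitude $n$, which never happens in u(v(n)).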
_unix.123849
I'm trying to update the timestamp to the current time on all of the xml files in my directory (recursively). I'm using Mac OS X 10.8.5.

On about 300,000 files, the following echo command takes 10 seconds:

for file in `find . -name *.xml`; do echo >> $file; done

However, the following touch command takes 10 minutes!

for file in `find . -name *.xml`; do touch $file; done

Why is echo so much faster than touch here?
Why is echo so much faster than touch?
shell;command;echo
null
_unix.66110
As we all know, the top command displays various real-time information about the running system. What I can't figure out is this: top claims that gzip is using 95% CPU. (Not unreasonable, considering I just asked it to compress a 20GB file!) But top also claims that the system is 75% idle. What the hell...?

Basically I wanted to figure out whether things are taking so long due to insufficient CPU or insufficient disk bandwidth, but the above statistics just leave me baffled.
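For the CPU-versus-disk question specifically, a sketch of checks that can disambiguate (a suggestion of this edit, not from the post; iostat and mpstat ship with the sysstat package on Linux):

    # Per-CPU view: one saturated core still leaves a multi-core box mostly
    # "idle" in top's summary line (pressing 1 inside top shows the same thing).
    mpstat -P ALL 1

    # Disk view: sustained high %util and large await values point at an
    # I/O bottleneck rather than a CPU one.
    iostat -x 1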
Contradictory information from top
monitoring;top
null
_codereview.69154
My code in my main background async task is a mess. Everything is everywhere. I'm afraid of moving things around without something breaking, but I definitely think it needs work. What can I do to make it more condensed, cleaner, and better refactored? Here's the bulk of what I want to have judged as code review/refactoring:

    public class RetrieveInfoTask extends AsyncTask<Void, Void, Void> {

        //Called on the UI thread to execute progress bar
        @Override
        protected void onPreExecute() {
            super.onPreExecute();
            progressBar = new ProgressDialog(context);
            progressBar.setIndeterminate(true);
            progressBar.setCancelable(false);
            progressBar.setMessage(MainActivity.this.getString(R.string.retrieve_info));
            progressBar.show();
        }

        //Method that retrieves information. This is performed in the background thread
        private void retrieveInfoFromDevice() {
            StringBuilder text = new StringBuilder();
            SQLiteDatabase sqLiteDatabase;
            sqLiteDatabase = dbHandler.getWritableDatabase();
            sqLiteDatabase.beginTransaction();
            String PATIENT_TABLE = "patient";
            try {
                //Take in the encrypted string and perform checksum
                Uri uri = Uri.parse("file:///storage/emulated/0/Download/information.csv");
                File file = new File(uri.getPath());
                //Read the file
                BufferedReader br = new BufferedReader(new FileReader(file));
                String line;
                Long orig_checksum = null;
                Long new_checksum = null;
                boolean firstLine = true;
                while ((line = br.readLine()) != null) {
                    if (firstLine) {
                        orig_checksum = Long.parseLong(line);
                        System.out.println("first line: " + line);
                        firstLine = false;
                        continue;
                    } else {
                        System.out.println("Print the next line : " + line);
                        //Get the checksum
                        new_checksum = Checksum.getChecksum(line);
                        System.out.println(new_checksum);
                        if (new_checksum.equals(orig_checksum)) {
                            System.out.println("Checksum works!");
                            //If values are correct, decrypt the encrypted string
                            Encryption.decrypt(line);
                            //deserialize the string toObject
                            TransferData tdBack = TransferData.fromString(Encryption.decrypt(line));
                            //Create a new map of values, where column names are the keys
                            ContentValues values = new ContentValues();
                            //Save object in db
                            values.put("primary_id", tdBack.getPrimaryID());
                            values.put("region", tdBack.getRegion());
                            values.put("date", tdBack.getKeyDate());
                            values.put("clinic", tdBack.getClinic());
                            //insert the new row
                            sqLiteDatabase.insert(PATIENT_TABLE, null, values);
                            Cursor cursor = sqLiteDatabase.rawQuery("Select * FROM patient", null);
                            Log.d("MainActivity", DatabaseUtils.dumpCursorToString(cursor));
                        }
                    }
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            }
            sqLiteDatabase.setTransactionSuccessful();
            sqLiteDatabase.endTransaction();
        }

        //Method that retrieves collection information. This is performed in the background thread
        private void retrieveCollectionInfo() {
            StringBuilder text = new StringBuilder();
            SQLiteDatabase sqLiteDatabase;
            sqLiteDatabase = dbHandler.getWritableDatabase();
            sqLiteDatabase.beginTransaction();
            String COLLECTION_TABLE = "collection";
            try {
                Uri uri = Uri.parse("file:///storage/emulated/0/Download/information.csv");
                File file = new File(uri.getPath());
                BufferedReader br = new BufferedReader(new FileReader(file));
                String line;
                Long orig_checksum = null;
                Long new_checksum = null;
                boolean firstLine = true;
                String timeOfCollection = null;
                int collectionInterval = 0;
                int numberOfReminders = 0;
                while ((line = br.readLine()) != null) {
                    if (firstLine) {
                        orig_checksum = Long.parseLong(line);
                        System.out.println("first line: " + line);
                        firstLine = false;
                        continue;
                    } else {
                        //Get the checksum
                        new_checksum = Checksum.getChecksum(line);
                        System.out.println(new_checksum);
                        if (new_checksum.equals(orig_checksum)) {
                            System.out.println("Print the next line : " + line);
                            System.out.println("Checksum works!");
                            //If values are correct, decrypt the encrypted string
                            Encryption.decrypt(line);
                            System.out.println("decrypted: " + line);
                            //deserialize the string to Object
                            TransferData tdBack = TransferData.fromString(Encryption.decrypt(line));
                            //Create a new map of values, where column names are the keys
                            ContentValues values = new ContentValues();
                            for (Collection collection : tdBack.getCollections()) {
                                values.put("reminders", collection.getReminders());
                                values.put("region", collection.getRegion().getId());
                                values.put("time_of_collection", collection.getTimeOfCollection());
                                values.put("collection_interval", collection.getCollectionInterval().getId());
                                numberOfReminders = collection.getNumberOfReminders();
                                timeOfCollection = collection.getTimeOfFirstCollection();
                                collectionInterval = collection.getCollectionInterval().getId();
                                //insert the new row
                                sqLiteDatabase.insert(COLLECTION_TABLE, null, values);
                                Cursor cursor = sqLiteDatabase.rawQuery("Select * FROM collection", null);
                                Log.d("MainActivity", DatabaseUtils.dumpCursorToString(cursor));
                            }
                            /* Retrieve a PendingIntent that will perform a broadcast */
                            Intent alarmIntent = new Intent(MainActivity.this, AlarmReceiver.class);
                            pendingIntent = PendingIntent.getBroadcast(MainActivity.this, 0, alarmIntent, 0);
                            AlarmManager manager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
                            //determine interval from device string
                            if (collectionInterval == 0) {
                                interval = 3600000;
                            } else if (collectionInterval == 1) {
                                interval = 86400000;
                            } else if (collectionInterval == 2) {
                                interval = 604800000;
                            } else if (collectionInterval == 6) {
                                interval = 43200000;
                            } else if (collectionInterval == 4) {
                                interval = 900000;
                            }
                            //If the date is set to HHmm, then add current date time
                            SimpleDateFormat format = new SimpleDateFormat("HH:mm");
                            String time = timeOfCollection;
                            long timeOfCollectionInMillis = format.parse(time).getTime();
                            System.out.println("Time in Milis: " + timeOfCollectionInMillis);
                            Calendar now = Calendar.getInstance();
                            now.setTime(new Date());
                            Calendar cal = Calendar.getInstance();
                            Date timedate = format.parse(time);
                            cal.setTime(timedate); // thinks 1970
                            cal.set(Calendar.YEAR, now.get(Calendar.YEAR));
                            cal.set(Calendar.MONTH, now.get(Calendar.MONTH));
                            cal.set(Calendar.DAY_OF_MONTH, now.get(Calendar.DAY_OF_MONTH));
                            //If the time from the db is before now (that is, no date set but time set), then set it for tomorrow
                            if (cal.before(now)) {
                                Date tomorrow = cal.getTime();
                                cal.setTime(tomorrow);
                                cal.add(Calendar.DATE, 1);
                                tomorrow = cal.getTime();
                                System.out.println("TimeDate for Tomorrow: " + tomorrow);
                                //convert date to millis
                                long timeInMilis = (tomorrow.getTime());
                                //Set Alarm to Repeat
                                manager.setRepeating(AlarmManager.RTC_WAKEUP, timeInMilis, interval, pendingIntent);
                            //else, set the alarm for today
                            } else {
                                timedate = cal.getTime();
                                System.out.println("TimeDate: " + timedate);
                                //convert date to millis
                                long timeInMilis = (timedate.getTime());
                                //Set Alarm to Repeat
                                manager.setRepeating(AlarmManager.RTC_WAKEUP, timeInMilis, interval, pendingIntent);
                            }
                            Intent intent = new Intent("com.example.myapp.MainActivity");
                            intent.putExtra("Interval", interval);
                            sendBroadcast(intent);
                        }
                    }
                }
                //Create a new map of values, where column names are the keys
                ContentValues values = new ContentValues();
                values.put("reminders_remaining", remindersRemaining);
                values.put("date", new SimpleDateFormat("yyyy-MM-dd").format(new java.util.Date()));
                values.put("time", new SimpleDateFormat("HH:mm:ss").format(new java.util.Date()));
                values.put("collection", 1);
                //insert the new row
                sqLiteDatabase.insert("alerts", null, values);
                Cursor cursor = sqLiteDatabase.rawQuery("Select * FROM alerts", null);
                Log.d("MainActivity", DatabaseUtils.dumpCursorToString(cursor));
            } catch (IOException e) {
                e.printStackTrace();
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            } catch (ParseException e) {
                e.printStackTrace();
            }
            sqLiteDatabase.setTransactionSuccessful();
            sqLiteDatabase.endTransaction();
        }

I have a setup button that the user clicks from the main page. When they click on the button, a background task is performed. Here's what is supposed to be going on in the background: it grabs an encrypted file from the SD card and performs a checksum on it first. If the checksum passes, it starts to decrypt the file; if not, it does not proceed (the checksum fails). Once it has checksummed and decrypted the file, it reads in a serializable object, converts it to a string, and saves the values into the database. One more thing I noticed is that I am redoing all the encryption/checksum work again in another method, because I want separate information from that same file. Maybe that needs to be refactored for DRY? After all that, I want to set up an alarm based on the information in the file. So I take the information from the file and set up an alarm for a specific region grabbed from the file.
Long methods in background task
java;android
retrieveInfoFromDevice method

You should get rid of unused objects like StringBuilder text = new StringBuilder();

You should get rid of the firstLine stuff by refactoring it out of the while loop. If you use a guard condition you can remove one more level of indentation:

    if ((line = br.readLine()) == null) {
        return;
    }
    orig_checksum = Long.parseLong(line);
    System.out.println("first line: " + line);

    while ((line = br.readLine()) != null) {
        System.out.println("Print the next line : " + line);
        //Get the checksum
        new_checksum = Checksum.getChecksum(line);
        System.out.println(new_checksum);
        if (!new_checksum.equals(orig_checksum)) {
            continue;
        }
    }

What happens if there is no valid data, i.e. the checksums don't match for any line? You are still getting a database, starting a transaction, and ending the transaction. It would be better to first read all the data of the file and build a List<ContentValues>. So let us refactor this into a separate method:

    private List<ContentValues> readPatientInformation(File file) {
        List<ContentValues> contentValues = new ArrayList<ContentValues>();
        BufferedReader br = new BufferedReader(new FileReader(file));
        String line;
        if ((line = br.readLine()) == null) {
            return contentValues;
        }
        Long originalChecksum = Long.parseLong(line);
        System.out.println("first line: " + line);
        while ((line = br.readLine()) != null) {
            if (!originalChecksum.equals(Checksum.getChecksum(line))) {
                continue;
            }
            TransferData tdBack = TransferData.fromString(Encryption.decrypt(line));
            ContentValues values = new ContentValues();
            values.put("primary_id", tdBack.getPrimaryID());
            values.put("region", tdBack.getRegion());
            values.put("date", tdBack.getKeyDate());
            values.put("clinic", tdBack.getClinic());
            contentValues.add(values);
        }
        return contentValues;
    }

Now that we have the list of ContentValues, let us refactor the former method:

    private final String fileName = "file:///storage/emulated/0/Download/information.csv";

    private void retrieveInfoFromDevice() {
        Uri uri = Uri.parse(fileName);
        File file = new File(uri.getPath());
        List<ContentValues> patientInformation = readPatientInformation(file);
        if (patientInformation.isEmpty()) {
            return;
        }
        SQLiteDatabase sqLiteDatabase;
        sqLiteDatabase = dbHandler.getWritableDatabase();
        sqLiteDatabase.beginTransaction();
        final String PATIENT_TABLE = "patient";
        for (ContentValues currentInformation : patientInformation) {
            sqLiteDatabase.insert(PATIENT_TABLE, null, currentInformation);
            Cursor cursor = sqLiteDatabase.rawQuery("Select * FROM patient", null);
            Log.d("MainActivity", DatabaseUtils.dumpCursorToString(cursor));
        }
        sqLiteDatabase.setTransactionSuccessful();
        sqLiteDatabase.endTransaction();
    }

You need to add the try..catch yourself, as I don't know which exceptions could be thrown.

Removing boilerplate comments will make your code more readable. Comments should:

- be used to explain why something is done
- not be used to explain what is done
- be true

Code should explain by itself what is done. Some examples:

    //If values are correct, decrypt the encrypted string
    Encryption.decrypt(line);

Looking at the line of code, it is obvious that the content of line should be decrypted, and for sure this can only happen if the content is encrypted. "If values are correct" implies that it is possible at this position that the values (which values, by the way?) could be incorrect.

    //deserialize the string toObject
    TransferData tdBack = TransferData.fromString(Encryption.decrypt(line));

Oops, you are decrypting the same line again. It would have been better to store the first result in a string variable.
But then "the string" is obvious again, as the method's name is fromString. And it isn't deserialized to "toObject" or "to Object", but to TransferData.
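One point the review leaves open, sketched here as a suggestion of this edit rather than part of the original answer: the reader is never closed. On Android API 19+ you can let try-with-resources handle both the closing and the checked exceptions:

    private List<ContentValues> readPatientInformation(File file) {
        List<ContentValues> contentValues = new ArrayList<>();
        // try-with-resources closes the reader even if parsing throws
        try (BufferedReader br = new BufferedReader(new FileReader(file))) {
            // ... same reading loop as in the refactored method above ...
        } catch (IOException e) {
            Log.e("RetrieveInfoTask", "could not read " + file, e);
        }
        return contentValues;
    }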
_webapps.26155
I have noticed that some of my friends have this musical symbol to the right of their names in the Facebook chat panel. What does it mean?
Why do some of my Facebook friends on chat have a musical symbol by their names?
facebook
It means they are listening to music from Open Graph Facebook applications. Hover to see what they are playing, and listen as well.

For more info: https://blog.facebook.com/blog.php?post=10150457932027131
_webmaster.24356
I'm working on a site locally within a folder with WAMP. When I add the following to my .htaccess file, the folder is no longer viewable in WAMP:

ExpiresActive On
ExpiresDefault "access"

I'm working on a cache manifest demo, and the code is meant to prevent files being cached; I've no idea why it would make the folder disappear.
'ExpiresActive On' and 'ExpiresDefault access' in .htaccess make site disappear in WAMP
htaccess;wampserver
I suspect a different approach will work better (I haven't actually tested this though :-). Try something like this in your .htaccess:

ExpiresActive Off
Header set Cache-Control "no-cache, proxy-revalidate, no-transform"
Header set Pragma "no-cache"

(No blank lines are needed in .htaccess; the directives are shown on separate lines here just so they don't run together.)

What you've got now specifies that the pages ARE cacheable, but the cache time is 0, so all pages are always out of date. This arguably doesn't make a whole lot of sense, and who knows which piece of software is doing some error recovery that results in the directory not being visible. I suspect something is stuck in a loop requesting the same page over and over, trying to get a copy that isn't already expired, and the failsafe for too many loops goes off and the browser just throws up its hands and makes the whole folder invisible.

It's sometimes true that "xxx but 0" is the same as "no-xxx" ...but not in this case. Saying "cacheable but 0" to an Apache server is NOT quite the same as saying "not cacheable" ...close, but not the same, and possibly handled weirdly by some software.
_codereview.139834
Scenario

A portable controls/widgets library for Xamarin.Forms (PCL Profile 111) contains a combobox/dropdown/picker control where the ItemsSource property is an IEnumerable. The ItemsSource can be populated with any IEnumerable of objects.

The combobox displays values by:

DisplayMemberPath: string

The combobox syncs selection via:

SelectedItem: object
SelectedIndex: int

To do all this, the IEnumerable must be parsed so we can:

- select the index
- select the item from the source collection

The tricky part

Parse the IEnumerable to a Dictionary<object, object> in a Portable Class Library. The sourceDictionary field is used to sync the selected properties as described above.

    private Dictionary<object, object> sourceDictionary;

    /// <summary>
    /// Parse type of ItemsSource as Dictionary and populate <see cref="Picker.Items" /> and
    /// <see cref="sourceDictionary" />.
    /// </summary>
    private void PopulateItemsAndSelectionValuesFromDictionary()
    {
        // TODO: Better way to get Dictionary from IEnumerable
        if (this.ItemsSource is Dictionary<object, object>)
        {
            foreach (var newItem in (Dictionary<object, object>)this.ItemsSource)
            {
                this.AddToItemsAndSelectionValues(newItem.Key, newItem.Value);
            }
        }
        else if (this.ItemsSource is Dictionary<object, string>)
        {
            foreach (var newItem in (Dictionary<object, string>)this.ItemsSource)
            {
                this.AddToItemsAndSelectionValues(newItem.Key, newItem.Value);
            }
        }
        else if (this.ItemsSource is Dictionary<object, int>)
        {
            foreach (var newItem in (Dictionary<object, int>)this.ItemsSource)
            {
                this.AddToItemsAndSelectionValues(newItem.Key, newItem.Value);
            }
        }
        else if (this.ItemsSource is Dictionary<int, object>)
        {
            foreach (var newItem in (Dictionary<int, object>)this.ItemsSource)
            {
                this.AddToItemsAndSelectionValues(newItem.Key, newItem.Value);
            }
        }
        else if (this.ItemsSource is Dictionary<string, object>)
        {
            foreach (var newItem in (Dictionary<string, object>)this.ItemsSource)
            {
                this.AddToItemsAndSelectionValues(newItem.Key, newItem.Value);
            }
        }
        else if (this.ItemsSource is Dictionary<int, int>)
        {
            foreach (var newItem in (Dictionary<int, int>)this.ItemsSource)
            {
                this.AddToItemsAndSelectionValues(newItem.Key, newItem.Value);
            }
        }
        else if (this.ItemsSource is Dictionary<string, string>)
        {
            foreach (var newItem in (Dictionary<string, string>)this.ItemsSource)
            {
                this.AddToItemsAndSelectionValues(newItem.Key, newItem.Value);
            }
        }
    }

    /// <summary>
    /// Add the provided KeyValuePair to <see cref="Picker.Items" /> and <see cref="sourceDictionary" />
    /// </summary>
    private void AddToItemsAndSelectionValues(object key, object value)
    {
        this.Items.Add(this.ValueIsDisplayMember() ? value.ToString() : key.ToString());
        this.sourceDictionary.Add(this.ValueIsDisplayMember() ? key : value,
            new KeyValuePair<object, object>(key, value));
    }
IEnumerable to typed Dictionary
c#;iterator;dictionary;xamarin
If you know that ItemsSource is a dictionary, then you can simply cast it to the non-generic IDictionary and iterate over that, because you just need objects and it gives you exactly that:

    var items = (IDictionary)ItemsSource;
    foreach (var key in items.Keys)
    {
        AddToItemsAndSelectionValues(key, items[key]);
    }

You cast it into a concrete dictionary, but you never use the types anyway, so go directly with objects instead.

That's all. You don't need any of those ifs.
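As an aside (an addition of this edit, not part of the accepted answer): enumerating the non-generic IDictionary directly yields DictionaryEntry values, which avoids the extra indexer lookup per key:

    using System.Collections;

    var items = (IDictionary)this.ItemsSource;
    foreach (DictionaryEntry entry in items)
    {
        // entry.Key and entry.Value are already plain objects
        this.AddToItemsAndSelectionValues(entry.Key, entry.Value);
    }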
_codereview.173592
I completely understand and know that migrations shouldn't be used to seed data. That being said, I have a migration that does just that. My question is regarding the beginTransaction aspects: are there any pitfalls or edge cases that I should watch out for when it comes to wrapping the up and down methods of a migration with beginTransaction?

    public function up()
    {
        DB::beginTransaction();
        DB::table('SomeTable')->where([
            'companyId' => 111,
            'indicatorName' => 'level1'
        ])->update([
            'fieldUnit' => 'PERCENTAGE'
        ]);
        DB::commit();
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        DB::beginTransaction();
        DB::table('SomeTable')->where([
            'companyId' => 111,
            'indicatorName' => 'level1'
        ])->update([
            'fieldUnit' => 'CURRENCY'
        ]);
        DB::commit();
    }
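One hedged observation on the transaction question itself (an addition of this edit, not from the post): the main pitfall of a bare beginTransaction/commit pair is that an exception between the two leaves the transaction open, because nothing ever calls rollBack. Laravel's closure form covers both paths:

    public function up()
    {
        // Commits when the closure returns; rolls back automatically if it throws.
        DB::transaction(function () {
            DB::table('SomeTable')->where([
                'companyId' => 111,
                'indicatorName' => 'level1',
            ])->update(['fieldUnit' => 'PERCENTAGE']);
        });
    }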
Laravel Migrations & Transactions
php;laravel
null
_scicomp.4718
Solutions to PDEs over irregular domains can be computed using the finite difference method by the introduction of so-called body-fitted coordinate systems, where the coordinate lines are aligned to the domain boundary. The benefit of the idea is that the finite difference method can be applied directly to the problem in the new coordinate system (with certain modifications regarding the computation of the derivatives).

As these modifications seem ridiculously complicated for practical consideration (http://hdl.handle.net/2060/19800017591 p. 18), I am looking for ways to make the problem easier by having a less generic coordinate representation.

My question is whether using orthogonal coordinate systems has any advantages over a generic 2D curvilinear coordinate system (a one-to-one mapping). Orthogonal coordinate systems can be generated for a specific domain based on the Schwarz-Christoffel mapping (conformal mapping); what I would be interested in is whether this approach has any practical benefits, i.e. whether orthogonality can somehow be exploited for this problem.
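To make the potential benefit concrete, here is the standard grid-generation form of the transformed equation (an illustration added by this edit, following the usual body-fitted-coordinate derivation). For a mapping $(x,y) \mapsto (\xi,\eta)$, Laplace's equation $u_{xx} + u_{yy} = 0$ becomes

    \alpha\, u_{\xi\xi} - 2\beta\, u_{\xi\eta} + \gamma\, u_{\eta\eta}
      + (\text{lower-order terms}) = 0,
    \qquad \beta = x_\xi x_\eta + y_\xi y_\eta .

Orthogonal coordinate lines mean exactly $\beta = 0$, so the mixed derivative $u_{\xi\eta}$ drops out and a compact 5-point stencil suffices; in the general curvilinear case the cross term forces a 9-point stencil and noticeably messier difference formulas.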
Orthogonal vs general curvilinear coordinates
finite difference
null
_webmaster.19072
This question has been bugging me for the last few days. I have seen many sites that have 'TAGS' or 'INCOMING SEARCH ITEMS' sections with a big list of keywords under them. Check out this example:

One of my blogs was recently de-indexed, maybe because I had used long titles. Can someone tell me if it's okay to use such keywords at the beginning of a post? Is it allowed by Google's search engine guidelines? Is there any chance that the blog will get de-indexed or penalized?
Is it Okay to add Keyword TAGS at the beginning or ending of the post?
seo;google search;keywords
"One of my blogs was recently de-indexed, maybe because I had used long titles."

That's probably not the reason you were de-indexed. Long titles by themselves are not something to remove pages from the index for. I am sure there were other, more heavily weighted issues that caused your problems. Low-quality content is the first thing that comes to mind.

"Can someone tell me if it's okay to use such keywords at the beginning of a post? Is it allowed by Google's search engine guidelines? Is there any chance that the blog will get de-indexed or penalized?"

Tags are fine to use when used properly. Generally speaking, an article or blog post will have a handful of tags that apply to it. Those tags will link to a main tag page that lists other articles that also use that tag.

If you have a dozen or more tags for each article, then you might be using tags incorrectly or need to break your content down into smaller pieces. If your tags do not link to a main tag page, or do not link anywhere at all, then they are being used incorrectly and may be problematic, especially if they aren't links at all. That just becomes keyword stuffing, which is a no-no.

Common practice says that tags belong at the end of an article or blog post. Unless you have a really good reason to put them somewhere else, I would stick to that convention.
_unix.47302
I have already allowed outbound connections to TCP/UDP port 1723, but the PPTP connection still fails to establish. If I allow all outbound connections, the tunnel comes up just fine.
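For reference, a hedged sketch of what is typically missing (PPTP uses TCP 1723 only for the control channel; the tunneled data travels over GRE, IP protocol 47, which a port-based rule can never match). On Linux, loading the nf_conntrack_pptp helper can also be needed so replies are tracked as RELATED:

    # PPTP control channel
    iptables -A OUTPUT -p tcp --dport 1723 -j ACCEPT
    # GRE-encapsulated tunnel traffic (IP protocol 47); ports do not apply here
    iptables -A OUTPUT -p gre -j ACCEPT
    # allow the return traffic of established/related connections
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT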
What other rules should I use to allow PPTP outbound connection?
iptables;firewall;pptp
null
_codereview.79055
I am trying to make my class a bit cleaner, especially some of its methods like Count, __add__, remove and __iter__. I feel like I could use cleaner code on the above methods and get the same results.Bag class represents and defines methods, operators, and an iterator for the Bag class. Bags are similar to sets, and have similar operations but unlike sets they can store multiple copies of items. Store the information in bags as dictionaries whose elements are keys whose associated values are the number of times the key occurs in the bag.from collections import defaultdictfrom goody import type_as_strimport copyfrom collections import Counterclass Bag: def __init__(self, items = []): self.bag_value = defaultdict(int) for item in items: self.bag_value[item] += 1 def __repr__(self): bag_list = [] for item, count in self.bag_value.items(): bag_list.extend(list(item*count)) return 'Bag(' + str(bag_list) + ')' def __str__(self): return 'Bag(' + ','.join(str(item) + '[' + str(count) + ']' for item, count in self.bag_value.items()) + ')' def __len__(self): bag_len = 0 for value in self.bag_value: bag_len += self.bag_value[value] return bag_len def unique(self): return len(self.bag_value) def __contains__(self, item): return item in self.bag_value def count(self, item): if item in self.bag_value: return self.bag_value[item] else: return 0 def add(self, new): self.bag_value[new] += 1 def __add__(self,right): mergedbag = Bag() mergedbag.bag_value = copy.copy(self.bag_value) for item in right.bag_value.keys(): mergedbag.bag_value[item] += right.bag_value[item] return mergedbag def remove(self, item): if item in self.bag_value: if self.bag_value[item] == 1: del self.bag_value[item] else: self.bag_value[item] -= 1 else: raise ValueError(str(item) + ' not in bag') def __eq__(self, right): if type(right) is not Bag: raise TypeError('Cannot compare Bag with' + type_as_str(right) + '. Can only compare Bag with Bag') else: return (Counter(self.bag_value)==Counter(right)) def __ne__(self, right): return not self.__eq__(right) def __generate__(self, bag_value): for item in self.bag_value: for value in range(self.bag_value[item]): yield item def __iter__(self): return self.__generate__(self.bag_value)if __name__ == '__main__':# bag = Bag(['d','a','b','d','c','b','d'])# bag2 = Bag(['d','a','b','d','c','b','d'])# bag3 = Bag(['d','a','b','d','c','b'])# print(bag == bag2)# print(bag == bag3)# print(bag != bag2)# print(bag != bag3) import driver driver.driver()
Bag data structure (a set that can store multiple copies of items)
python;classes;python 3.x;iterator
As you're trying to implement a multiset (or bag), I would recommend you to use collections.Counter, which already does the same thing:

    The Counter class is similar to bags or multisets in other languages.

So, simply inherit from the Counter class and add your additional methods, or override the already present methods from the Counter class as per your requirements. The Counter class also supports the &, | and - operators to perform various multiset operations.

Now, coming back to your code:

    def __init__(self, items = []):
        self.bag_value = defaultdict(int)
        for item in items:
            self.bag_value[item] += 1

Never use a mutable object as the default value of an argument, as its value can persist between calls; the recommended way is to use None as the default value and then check for it inside the __init__ method. As we are now inheriting from Counter, we can directly operate on self instead of using a defaultdict to keep the count.

    class Bag(Counter):

        def __init__(self, _items=None, **kwargs):
            _items = [] if _items is None else _items
            super().__init__(_items, **kwargs)

Why both _items and kwargs? It depends on how you want to use it ultimately, but with this signature you can use it as:

    Bag('aaabbccd')
    Bag(a=3, b=5, c=7)
    Bag('aabbcc', a=5, c=7, r=9)

I used _items to handle the case when the user actually passes a key named items:

    Bag(a=5, c=7, r=9, items=9)

    def __repr__(self):
        bag_list = []
        for item, count in self.bag_value.items():
            bag_list.extend(list(item*count))
        return 'Bag(' + str(bag_list) + ')'

The Counter class already has a method to handle this, called elements(), which returns an itertools.chain object, which is an iterator; we can call list on it and we are done:

    def __repr__(self):
        return 'Bag({})'.format(list(self.elements()))

    def __str__(self):
        return 'Bag(' + ','.join(str(item) + '[' + str(count) + ']' for item, count in self.bag_value.items()) + ')'

We can take advantage of string formatting here to make the above code a little more concise and easier to read:

    def __str__(self):
        format_str = '{}[{}]'.format
        return 'Bag({})'.format(', '.join(format_str(k, v) for k, v in self.items()))

    def __len__(self):
        bag_len = 0
        for value in self.bag_value:
            bag_len += self.bag_value[value]
        return bag_len

In our class we can obtain the length by simply summing the .values(); sum() returns 0 when the Bag is empty, so this saves a lot of lines:

    def __len__(self):
        return sum(self.values())

We can simply drop __contains__, as Counter can already handle that part, and for unique() we can use len(self.keys()).

    def __add__(self,right):
        mergedbag = Bag()
        mergedbag.bag_value = copy.copy(self.bag_value)
        for item in right.bag_value.keys():
            mergedbag.bag_value[item] += right.bag_value[item]
        return mergedbag

Here, as we're only dealing with integer values, copy.copy is not actually required, and instead of a manual loop use the .update() method:

    def __add__(self, right):
        merged_bag = Bag()
        merged_bag.update(self)
        merged_bag.update(right)
        return merged_bag

In .remove() the only change is that I used string formatting with the __repr__ version of item. __repr__ is more useful, as with str() you won't be able to differentiate between 1 and '1'.

    raise ValueError('{!r} not in bag'.format(item))

    def __eq__(self, right):
        if type(right) is not Bag:
            raise TypeError('Cannot compare Bag with' + type_as_str(right) + '. Can only compare Bag with Bag')
        else:
            return (Counter(self.bag_value)==Counter(right))

The recommended way to check the type of an object is to use isinstance rather than calling type.
And in the else block I simply called the super class's __eq__ to handle the result instead of creating Counters:

    def __eq__(self, right):
        if not isinstance(right, Bag):
            raise TypeError(('Cannot compare Bag with object of type {!r}. Can only '
                             'compare Bag with Bag').format(type(right).__name__))
        else:
            return super().__eq__(right)

    def __iter__(self):
        return self.__generate__(self.bag_value)

Never define your own dunder method (__generate__); these methods are considered special by Python, and if any future version of Python defines a method by the same name then your code can run into unknown errors. In our __iter__ method we can simply yield from self.elements():

    def __iter__(self):
        yield from self.elements()

Note that there should only be a single blank line between the methods of a class, and your indentation is also messed up; always use 4 spaces, no tabs (change your IDE's settings to insert 4 spaces for a tab). Read PEP 8 for more information on the style guide.

Complete code:

    from collections import Counter

    class Bag(Counter):

        def __init__(self, _items=None, **kwargs):
            _items = [] if _items is None else _items
            super().__init__(_items, **kwargs)

        def __repr__(self):
            return 'Bag({})'.format(list(self.elements()))

        def __str__(self):
            format_str = '{}[{}]'.format
            return 'Bag({})'.format(', '.join(format_str(k, v) for k, v in self.items()))

        def __len__(self):
            return sum(self.values())

        def unique(self):
            return len(self.keys())

        def count(self, item):
            return self[item]

        def add(self, new):
            self[new] += 1

        def __add__(self, right):
            merged_bag = Bag()
            merged_bag.update(self)
            merged_bag.update(right)
            return merged_bag

        def remove(self, item):
            if item in self:
                if self[item] == 1:
                    del self[item]
                else:
                    self[item] -= 1
            else:
                raise ValueError('{!r} not in bag'.format(item))

        def __eq__(self, right):
            if not isinstance(right, Bag):
                raise TypeError(('Cannot compare Bag with object of type {!r}. Can only '
                                 'compare Bag with Bag').format(type(right).__name__))
            else:
                return super().__eq__(right)

        def __ne__(self, right):
            return not self.__eq__(right)

        def __iter__(self):
            yield from self.elements()
_cs.11243
I have a problem which I suspect is NP-complete. It is easy to prove that it is in NP. My current train of thought revolves around using a reduction from knapsack, but it would result in instances of 0-1 knapsack where the value of every item equals its weight.

Is this still NP-complete? Or am I missing something?
Is the 0-1 Knapsack problem where value equals weight NP-complete?
complexity theory;np complete;decision problem;packing
Yes, this is called the subset-sum problem, and it is NP-complete (hence also NP-hard).
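For completeness (an addition of this edit): with value equal to weight, the decision version is exactly

    \textsc{SubsetSum}:\ \text{given } w_1,\dots,w_n \in \mathbb{N}
    \text{ and a target } t,\ \text{decide whether there is }
    S \subseteq \{1,\dots,n\} \text{ with } \sum_{i\in S} w_i = t .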
_codereview.123018
I have a simple Flask app that implements a REST API with SQLAlchemy. I have written this create_app function:

views.py

    def create_app(config):
        app.config.from_object(config)
        url = config.DATABASE_URI
        engine = create_engine(url)
        Base.metadata.create_all(engine)
        Base.metadata.bind = engine
        DBSession = sessionmaker(bind=engine)
        session = DBSession()
        app.db_session = session
        return app

and a typical view looks like:

views.py

    @app.route('/restaurants', methods=['GET', 'POST'])
    def all_restaurants_handler():
        if request.method == 'GET':
            # RETURN ALL RESTAURANTS IN DATABASE
            restaurants = app.db_session.query(Restaurant).all()
            return jsonify(restaurants=[i.serialize for i in restaurants])

So, when I write tests, I can pass a test configuration to the create_app function. Although this works, I have some doubts regarding the design of this solution. First of all, I don't like that I need to use app.db_session in order to get the session. Second, I am not sure if it's a good decision to initialize the db in the create_app function.
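A hedged sketch of one common alternative (names are illustrative, not from the original post): keep a module-level scoped_session, bind it inside the factory, and let Flask remove it after each request, so views import the session instead of reaching through app:

    from flask import Flask
    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    # one session registry for the whole app; bound when the factory runs
    db_session = scoped_session(sessionmaker())

    def create_app(config):
        app = Flask(__name__)
        app.config.from_object(config)
        engine = create_engine(config.DATABASE_URI)
        db_session.configure(bind=engine)
        Base.metadata.create_all(engine)  # assumes the declarative Base from the post

        @app.teardown_appcontext
        def shutdown_session(exception=None):
            # give each request a fresh session
            db_session.remove()

        return app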
Implementing Flask create_app function with SqlAlchemy
python;flask;sqlalchemy
null
_cs.40400
Suppose a program was written in two distinct languages, call them language X and language Y. If their compilers generate the same byte code, why should I use language X instead of language Y? What defines one language as faster than another?

I ask this because you often see people say things like "C is the fastest language" or "ATS is a language as fast as C". I was seeking to understand the definition of "fast" for programming languages.
What determines the speed of a programming language?
programming languages;compilers
There are many reasons that may be considered for choosing a language X over a language Y. Program readability, ease of programming, portability to many platforms, and the existence of good programming environments can be such reasons. However, I shall consider only the speed of execution, as requested in the question. The question does not seem to consider, for example, the speed of development.

Two languages can compile to the same bytecode, but it does not mean that the same code will be produced. Actually, bytecode is only code for a specific virtual machine. It does have engineering advantages, but does not introduce fundamental differences compared with compiling directly for specific hardware. So you might as well consider comparing two languages compiled for direct execution on the same machine.

This said, the issue of the relative speed of languages is an old one, dating back to the first compilers.

For many years, in those early times, professionals considered that hand-written code was faster than compiled code. In other words, machine language was considered faster than high-level languages such as Cobol or Fortran. And it was: both faster and usually smaller. High-level languages still developed because they were much easier to use for many people who were not computer scientists. The cost of using high-level languages even had a name: the expansion ratio, which could concern the size of the generated code (a very important issue in those times) or the number of instructions actually executed. The concept was mainly experimental, but the ratio was greater than 1 at first, as compilers did a fairly simple-minded job by today's standards. Thus machine language was faster than, say, Fortran.

Of course, that changed over the years, as compilers became more sophisticated, to the point that programming in assembly language is now very rare. For most applications, assembly language programs compete poorly with code generated by optimizing compilers.

This shows that one major issue is the quality of the compilers available for the language considered: their ability to analyse the source code, and to optimize it accordingly.

This ability may depend to some extent on the features of the language that emphasize the structural and mathematical properties of the source, in order to make the work easier for the compiler. For example, a language could allow the inclusion of statements about the algebraic properties of user-defined functions, so as to allow the compiler to use these properties for optimization purposes.

The compiling process may be easier, hence producing better code, when the programming paradigm of the language is closer to the features of the machines that will interpret the code, whether real or virtual machines.

Another point is whether the paradigms implemented in the language are close to the type of problem being programmed. It is to be expected that a programming language specialized for specific programming paradigms will compile features related to that paradigm very efficiently.
Hence the choice of a programming language may depend, for clarity and for speed, on how well the language is adapted to the kind of problem being programmed.

The popularity of C for system programming is probably due to the fact that C is close to the machine architecture, and that system programming is directly related to that architecture too.

Some other problems will be more easily programmed, with faster execution, using logic programming and constraint resolution languages.

Complex reactive systems can be very efficiently programmed with specialized synchronous programming languages like Esterel, which embody very specialized knowledge about such systems and generate very fast code.

Or, to take an extreme example, some languages are highly specialized, such as syntax description languages used to program parsers. A parser generator is nothing but a compiler for such a language. Of course, it is not Turing complete, but these compilers are extremely good at their specialty: producing efficient parsing programs. The domain of knowledge being restricted, the optimization techniques can be very specialized and tuned very finely. These parser generators are usually much better than what could be obtained by writing code in another language. There are many highly specialized languages with compilers that produce excellent and fast code for a restricted class of problems.

Hence, when writing a large system, it may be advisable not to rely on a single language, but to choose the best language for different components of the system. This, of course, raises problems of compatibility.

Another point that often matters is simply the existence of efficient libraries for the topics being programmed.

Finally, speed is not the only criterion, and it may be in conflict with other criteria such as code safety (for example with respect to bad input, or resilience to system errors), memory use, ease of programming (though paradigm compatibility may actually help there), object code size, and program maintainability.

Speed is not always the most important parameter. It may also take different guises, like complexity, which may be average-case or worst-case complexity. Also, in a large system as in a smaller program, there are parts where speed is critical and others where it matters little. And it's not always easy to determine that in advance.
_unix.309580
Since XKB is part of the X Window System, is XKB used in Wayland as well? If so, is any utility planned to replace XKB at some point?

In Weston, setxkbmap obviously does not work. What is the currently recommended way to change the keyboard layout?
Does Wayland use XKB for keyboard layouts?
keyboard layout;xkb;wayland
Yes, Wayland uses XKB for keyboard layouts. But it's not quite the right question, because things work differently than in X. Remember that Wayland is only a protocol (plus a wrapper library).

At the protocol level, Wayland has a wl_keyboard.keymap event. This event contains a file descriptor to the keymap and a format classifier. Right now, only one format is defined: xkb. So a Wayland client will receive an XKB-compatible keymap and can use libxkbcommon to interpret it, to get the right glyph on the screen, etc.

But Wayland does not define how this keymap is decided on. This decision is up to the compositor. In Weston, it is read from the config file on startup; in GNOME it comes from gsettings; etc. And this decision thus also defines how you can change keymaps at runtime (if at all possible). In GNOME you'd either use the config panel or set the gsettings keys directly.

The X protocol has requests to set the keymap at the protocol level, and these are what make tools like setxkbmap possible. Wayland does not have these requests; it's not possible to set the keymap using the Wayland protocol alone.
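For a concrete illustration (an example of this edit, not part of the original answer): under GNOME the layouts live in the org.gnome.desktop.input-sources schema, so a US/German pair can be configured with:

    # the first entry in the list becomes the default layout
    gsettings set org.gnome.desktop.input-sources sources "[('xkb', 'us'), ('xkb', 'de')]"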
_cs.21937
Why are instruction decoding and register read combined in a single stage of the 5-stage MIPS pipeline, even though they serve two different operations?
Why Instruction Decode and Register Read are in the same stage of MIPS pipeline
computer architecture;cpu pipelines
Instruction Decode and Register Read are done in parallel in that stage in order to avoid waiting on either to complete. There's no reason not to do this, since both are known at that point in time in the MIPS pipeline, and since they will most likely be needed for the next step (the Execute stage).

By "no reason", I mean that you would effectively be using more power by separating them. You would also be adding a whole extra stage in that case, as the stage before (Instruction Fetch) is what actually gets the information for ID and RR, and adding it to the stage after would not work because that stage requires both. Thus the only way to do it would be to add the stage in between the ID stage and the EX stage, which would add a whole extra cycle to every operation, and that would obviously result in slowdowns overall.

If you're interested in reading more about this, I'd recommend the textbook I'm using for a class (Computer Architecture: A Quantitative Approach by John Hennessy and David Patterson). It's got a pretty handy reference for MIPS especially, from what I've read so far, and it's where I pulled this information from.
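A detail worth adding (a note of this edit, consistent with Hennessy and Patterson): the parallelism is cheap precisely because MIPS puts the source-register specifiers at fixed bit positions, so the register file can be read before the opcode is even fully decoded:

    #include <stdint.h>

    /* MIPS R/I-type layout: op[31:26] rs[25:21] rt[20:16] ... */
    uint32_t rs_index(uint32_t insn) { return (insn >> 21) & 0x1F; }
    uint32_t rt_index(uint32_t insn) { return (insn >> 16) & 0x1F; }
    /* Hardware reads regfile[rs] and regfile[rt] speculatively in ID;
       if the instruction turns out not to need them, the values are ignored. */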
_webapps.105628
A number has started to appear in the YouTube tab title recently, like "(686)" in the image. What does it mean? It's also increasing day by day.
What does (number) in the YouTube tab mean?
youtube
A number is starting to appear in the YouTube tab title - what does it mean?

It's apparently yet another unannounced change made by Google/YouTube: the number in brackets in the browser's tab apparently indicates how many comments/posts you currently have on YouTube which you have not yet read/checked...

Source: There is a number next to my YouTube tab that keeps appearing

So how can I remove it?

1. Sign in to your YouTube account
2. Click on the bell icon (top right side in YouTube)
3. Click on the gear icon
4. Deactivate desktop notifications

Source: There is a number next to my YouTube tab that keeps appearing
_webmaster.105425
I've been trying to set up DNS at Freenom for my home server, but every time I change the nameservers it says:

Invalid nameserver given

The nameservers on my Ubuntu server (/etc/resolv.conf) are:

212.115.192.193
212.115.192.100
62.238.255.69

Am I using the wrong nameservers? Am I supposed to enter Freenom's nameservers in resolv.conf? What am I supposed to do now?
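A hedged note on the likely mismatch (a reading added by this edit, not from the post): /etc/resolv.conf lists the recursive resolvers your machine sends queries to, while a registrar's nameserver form expects the hostnames of authoritative servers that will answer for your domain, i.e. the kind of names that show up in an NS lookup:

    # Authoritative nameservers for a domain are hostnames, not resolver IPs.
    dig NS example.com +short
    # Illustrative output:
    #   ns1.some-dns-provider.com.
    #   ns2.some-dns-provider.com.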
How to setup nameservers at Freenom
nameserver
null
_codereview.127005
What the following code does is:The key field is column0 (where sometimes there could be single key, and sometimes keys separated by comma).The rest of the columns are also either single or comma separated. But based on the single key value from column0, the goal is to collect all the values as a set in the rest of the columns.import sysimport csvdict={}def getVal(k): try: v = dict[k] except: v= None return v# First read as a line and first transformation separates first column# and stores into a table with key,value (where value is the remaining)for line in sys.stdin: line = line.strip() row = line.split('\t') l = len(row) keyvalues = row[0] items = keyvalues.split(,) for item in items: key = item value=[] for i in range(1, l): value.append(row[i].split(,)) if getVal(key) == None: dict[key] = value else: prevval = dict[key] cols = len(prevval) newvalue=[] newlist = [] for j in range(0,cols): newlist = prevval[j]+value[j] newset = set(newlist) newvalue.append(list(newset)) dict[key] = newvaluefor k,v in dict.items(): rowstr = k+\t ncols = len(v) for items in v: cols= for item in items: cols +=item+, cols = cols[0:len(cols)-1] rowstr += cols+\t print rowstr Sample input3,15,75 1635,1762 878 3425 121,122,12315 1762 871 3475 121,125,1263 1585,1590,5192 882,832,841 3200,3211 120,121,122,123,124I'm getting the results correctly as expected, but I would like to know any improvements to make on the code.
Combining values corresponding to the same key
python;csv
Indentation matters a lot in Python. Please follow the PEP 8 standard, which is 4 spaces.This code feels very procedural. Python can be much more expressive than that, if you take advantage of features like list comprehensions.You've used a lot of variables: line, row, l, keyvalues, items, item, key, value, i, prevval, cols, newvalue, newlist, j, newset too many for me to keep track of what each represents. You could reduce the mental load by eliminating variables for ephemeral expressions. For example, you could eliminate items by writing for item in keyvalues.split(,).I see that you considered using the csv module, but didn't. If you did, you could have simplifiedfor line in sys.stdin: line = line.strip() row = line.split('\t') to for row in csv.reader(fileinput.input(), delimiter='\t'). I recommend fileinput instead of sys.stdin so that the user has the option of passing a filename on the command line.The getVal(k) function can just be replaced by dict.get(k, None). It would be better to avoid choosing dict as the name of a variable, since it shadows the dict() constructor.Suggested solutionimport csvimport fileinputdata = {}for row in csv.reader(fileinput.input(), delimiter='\t'): values = [set(col.split(',')) for col in row] for key in values.pop(0): # Numbers in column 0 data[key] = [ new_col.union(old_col) for new_col, old_col in zip(values, data.get(key, values)) ]for key, values in data.items(): print '\t'.join([key] + [','.join(col) for col in values])
_unix.352212
A kickstart file for a CentOS 7 GUEST virtual machine was created by modifying the anaconda-ks.cfg file from the CentOS 7 HOST physical machine. The host is a virtualization install, which contains packages for a HOST. How should the list of required packages change for a GUEST? Specifically, which of the below packages can safely be removed from the GUEST? Here is the %packages ... %end fragment of the kickstart file: %packages@base@compat-libraries@core@debugging@development@network-file-system-client@remote-system-management@security-tools@smart-card@virtualization-hypervisor@virtualization-platform@virtualization-tools@virtualization-clientkexec-tools%end
Which packages are necessary for a GUEST kickstart file?
centos;virtual machine;kvm;kickstart;libvirtd
null
_unix.339762
Well, I had the problem that Bluetooth was always off. I would go to Configuration -> Bluetooth, and every time I tried to turn it on it just returned to off a while later. I tried with this command:

aptitude install bluetooth

and then ran

/etc/init.d/bluetooth start

After that I ran /etc/init.d/bluetooth status and this is what it shows:

 bluetooth.service - Bluetooth service
   Loaded: loaded (/lib/systemd/system/bluetooth.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2017-01-17 21:13:47 UTC; 29s ago
     Docs: man:bluetoothd(8)
 Main PID: 7939 (bluetoothd)
   Status: "Running"
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/bluetooth.service
           7939 /usr/lib/bluetooth/bluetoothd

Jan 17 21:13:47 kali bluetoothd[7939]: Error adding Link Loss service
Jan 17 21:13:47 kali bluetoothd[7939]: Not enough free handles to register ...ce
Jan 17 21:13:47 kali bluetoothd[7939]: Not enough free handles to register ...ce
Jan 17 21:13:47 kali bluetoothd[7939]: Not enough free handles to register ...ce
Jan 17 21:13:47 kali bluetoothd[7939]: Current Time Service could not be re...ed
Jan 17 21:13:47 kali bluetoothd[7939]: gatt-time-server: Input/output error (5)
Jan 17 21:13:47 kali bluetoothd[7939]: Not enough free handles to register ...ce
Jan 17 21:13:47 kali bluetoothd[7939]: Not enough free handles to register ...ce
Jan 17 21:13:47 kali bluetoothd[7939]: Sap driver initialization failed.
Jan 17 21:13:47 kali bluetoothd[7939]: sap-server: Operation not permitted (1)
Hint: Some lines were ellipsized, use -l to show in full.

My computer is a Sony VAIO SVF15A17CLV.

Output from rfkill list all:

0: sony-wifi: Wireless LAN
    Soft blocked: no
    Hard blocked: no
1: sony-bluetooth: Bluetooth
    Soft blocked: no
    Hard blocked: no
2: phy0: Wireless LAN
    Soft blocked: no
    Hard blocked: no
3: hci0: Bluetooth
    Soft blocked: no
    Hard blocked: no

Thank you!!
Bluetooth not Working on Kali Linux
kali linux;bluetooth
null
_hardwarecs.193
I've looked quite a bit through my local electronics stores, but nobody could help me there and I couldn't find what I was looking for, so I'm asking this here.

I'm looking for a router supporting gigabit Ethernet with at least four ports. This router should also be able to open its own WLAN network as well as act as a repeater for other networks. Support for (multi-stream) 802.11ac WLAN is mandatory. Only if there's really nothing supporting 802.11ac would 802.11n be acceptable, but it would have to have good transmit power and range and support both 2.4 and 5 GHz.

You'll scratch your head now and ask: "Why can't a plain router be used for this?" I'm looking for a solution that is not a standard home router and doesn't support connecting to the internet by itself, as I consider this an unnecessary feature for this device. Its use case would be to open a new WLAN network at a physically different location from the internet access point.

Bonus points include:

- Low price (< $100)
- A web interface for configuration purposes
- Regular updates or really good firmware (really unlikely, usually)
Switch with WLAN?
networking;router;switch;wlan
null
_unix.37778
I have a cPanel web hosting account. On this account I have a PHP-based support/trouble ticket system for my customers to use. I'd like to be able to send an email to both my customer and to the ticket system, but have the email that goes to the ticketing system appear to come from my customer, so that the support ticket which gets created appears under their account.So what I want to do is create some email address [email protected] which gets piped to a unix command or shell script. This means that when I send an email from [email protected] to [email protected] and CC [email protected], the shell script should be able to resend the email to [email protected] but resend the email from the To: address of the email.Is there any way to make a shell script which will accept an email on stdin, rewrite the From: address, and resend the mail to a new address? Can Procmail do anything like that? Or will I have to hand code this myself?
How can I rewrite the From: address of an email and resend it?
scripting;email
Procmail comes with the formail command to manipulate mail headers. The procmailex man page contains examples of its use in .procmailrc. This should do what you want (untested):

formail -R To: From: -U From: -I Cc: -I 'To: [email protected]'
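A hedged sketch of how that formail call might sit inside a complete .procmailrc recipe (the recipe structure is an assumption of this edit; the redacted address placeholder is kept from the answer, the local part "ticket-rewrite" is hypothetical, and $SENDMAIL is procmail's built-in variable):

    :0
    * ^TO_ticket-rewrite
    | formail -R To: From: -U From: -I Cc: -I 'To: [email protected]' | $SENDMAIL -t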
_unix.285379
In my /etc/samba/smb.conf, I use the option "printer admin". Then I got:

# testparm -sv
WARNING: The "enable priviledges" option is deprecated
Unknown option encountered "printer admin"
Ignoring unknown parameter "printer admin"
...

I suppose I should replace this option with something else? But what exactly?
Samba printer admin option is deprecated on 3.6.X and removed in 4.0
samba;printer
null
_codereview.22864
I'm creating a CRUD page. My first approach was use the same class editCategory.php for doing these actions:If this file is called via GET and categoryId parameter doesn't exist -> shall show a empty formIf this file is called via GET and categoryId parameter is provided -> shall show a form with the data of this category.If this file is called via POST and there is no categoryId parameter -> shall create a new category.If this file is called via POST and categoryId parameter is provided -> shall update this category.The code works ok, but I have the sense that the code is cluttered and that it could be organized better. <?php require_once ../include_dao.php; $action = isset($_POST[action]); $category = new Category(); $categoryDao = new CategoryMySqlDAO(); $categoryName = ; if ($action == save) { // DO_POST $categoryName = $_POST[name]; $category->name=$categoryName; if (isset($_POST[categoryId])) { // update $categoryId = (int) $_POST[categoryId]; $category->categoryId=$categoryId; $categoryDao->update($category); } else { // insert $categoryDao->insert($category); } $messageSuccess = Category saved; } else { // DO_GET if (isset($_GET[categoryId])) { $categoryId = (int) $_GET[categoryId]; $category = $categoryDao->load($categoryId); $categoryName = $category->name; } ?><html> HTML code here showing the form</html>
Insert, update and get in the same php file?
php
I am not a php developer, but I think that your objects should encapsulate a more logic, I made a small simple example:$category = new Category();$categoryDao = new CategoryMySqlDAO();if ($_SERVER['REQUEST_METHOD'] === 'POST') { if (isset($_POST[action]) { //$category has logic with get varialbles in method fill $category->fill($_POST); //$categoryDao has logic insert or update $category $result = $categoryDao->process($category); //.. do something with $result }} else if (isset($_GET[categoryId])){ $categoryId = (int) $_GET[categoryId]; $category = $categoryDao->load($categoryId);}
_unix.243706
I am working on a platform with an ARM CPU running linux that has a single MAC in the CPU directly connected to a switch IC. I am trying to set up a VLAN mode using systemd-networkd which I have done successfully. However, the MAC addresses for the created VLAN ports are random and that is not ideal. Using MACAddressPolicy=persist in the relevant .link file I do have a persistent MAC address though each boot however the MAC address it grabbed was random. However, the CPU is assigned two MAC addresses. What I am looking to do is find the first MAC address it is assigned (which is set to the eth0 device), assign that MAC to eth0.1, and then assign that MAC +1 to eth0.2 Is there an easy way to do this through systemd-networkd or udev? I also need a setup that can be put on to thousands of the completed devices and have systemd-netorkd automatically handle everything, rather than modifying the .network files on each unit.
systemd-networkd: Set VLAN device MAC based on host MAC
networking;systemd;udev;systemd networkd
null
_webapps.70498
When zoomed in close enough, Google Maps shows highways that are planned or under construction.An example from Cluj, Romania the grey section of the highway is not built yet and the overlay goes away when you zoom out.One more example from Bremmen, Germany Is there anyway to make this map layer visible at every zoom level?
Show planned highways at all zoom levels in Google Maps
google maps
null
_unix.164863
I've been using CentOS 6.5 as my OS for quite a while. Usually I don't have many problems with Linux installations, but as they new CentOS 6.6 was released I found myself in a bit of a problem. There were hundred of updates to be done, but only one turned out to be problematic. If I input the commandyum update I getResolving Dependencies--> Running transaction check---> Package scl-utils.i686 0:20120927-8.el6 will be updated---> Package scl-utils.i686 0:20120927-23.el6_6 will be an update---> Package xcb-util.i686 0:0.3.6-1.el6 will be updated--> Processing Dependency: libxcb-icccm.so.1 for package: qt5-qtbase-gui-5.3.2-1.el6.i686--> Processing Dependency: libxcb-image.so.0 for package: qt5-qtbase-gui-5.3.2-1.el6.i686--> Processing Dependency: libxcb-keysyms.so.1 for package: qt5-qtbase-gui-5.3.2-1.el6.i686---> Package xcb-util.i686 0:0.3.6-5.el6 will be an update--> Running transaction check---> Package xcb-util.i686 0:0.3.6-1.el6 will be updated--> Processing Dependency: libxcb-icccm.so.1 for package: qt5-qtbase-gui-5.3.2-1.el6.i686---> Package xcb-util-image.i686 0:0.3.9-4.el6 will be installed---> Package xcb-util-keysyms.i686 0:0.3.9-5.el6 will be installed--> Finished Dependency ResolutionError: Package: qt5-qtbase-gui-5.3.2-1.el6.i686 (@epel) Requires: libxcb-icccm.so.1 Removing: xcb-util-0.3.6-1.el6.i686 (@anaconda-CentOS-201311271240.i386/6.5) libxcb-icccm.so.1 Updated By: xcb-util-0.3.6-5.el6.i686 (base) Not found You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigestNow, I know that often these sort of conflicts come from updates from different repos, but this is my current repo listadobe-linux-i386 Adobe Systems Incorporated base CentOS-6 - Base epel Extra Packages for Enterprise Linux 6 - i386extras CentOS-6 - Extras google-chrome google-chromespideroak-stable SpiderOak Stable Distribution updates CentOS-6 - UpdatesAnd the weird thing is that it seems that the conflicting packages not only come from the same repository, but are actually dependencies!xcb-util in fact asks me to install xcb-util-image and xcb-util-keysyms as dependencies..and then conflicts with them! The current version of xcb-util does not have these other two dependencies installed, and says that Source RPM: xcb-util-0.3.6-1.el6.sr.This is a bit weird. Has anyone met this problem?Many thanks, and I hope I provided enough information.
Interdependent conflicting packages in CentOS 6.5
centos;package management;upgrade
Problem is qt5 from epel, I removed it and upgrade without problem. There is already fixed qt5 in epel-testing: https://admin.fedoraproject.org/updates/FEDORA-EPEL-2014-3484/qt5-qtbase-5.3.2-3.el6
_unix.343363
I'm trying to diagnose a problem with certain email addresses possibly being blocked on my server. I'm running PHP 5.3 on CentOS 5.7. The php.ini file lists a sendmail_path of /usr/sbin/sendmail -t -i, which when run in CLI hangs there. I've noticed that qmail is installed on my server, too, but I don't know if PHP is using it or not.How do I find out which MTAs (i.e. sendmail, qmail, etc) PHP is using?
What Mail Transfer Agent is PHP using?
centos;email;php;mail transport agent
TL;DR: PHP doesn't care about what MTA you're using.

Longer explanation: this goes back almost as far as the POSIX standards themselves, but every properly written MTA provides a binary named sendmail that behaves exactly as the official sendmail program would be expected to behave. As a result, every Unix program or daemon that, for one reason or another, finds itself needing to email someone knows it can just call /usr/sbin/sendmail with known options and be confident that whatever MTA is installed will know what to do with the message from there on in.

As such, unless you use a specific SMTP PHP module and explicitly configure it to use different mail settings (generally, a remote server/port with or without TLS and/or authentication), PHP will just call /usr/sbin/sendmail and let the underlying distribution worry about what happens next. If your mail isn't arriving, I recommend you check the error logs of the MTA (usually in /var/log/mail.*, but this depends on your distribution and MTA) for answers.
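A hedged way to see which MTA actually owns that sendmail binary on an RPM-based system like CentOS (the package query only applies where rpm is the package manager):

ls -l /usr/sbin/sendmail                       # often a symlink, e.g. via the alternatives system
readlink -f /usr/sbin/sendmail                 # resolve to the real binary
rpm -qf "$(readlink -f /usr/sbin/sendmail)"    # which package (and thus which MTA) owns it
php -i | grep sendmail_path                    # confirm what PHP will actually invoke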
_softwareengineering.64775
I am writing my first research paper, on distributed storage systems. We also have a prototype working (partially). The project was a complete implementation-based project, and we envisage taking it a step further with some future development. So, my questions are:

1. How should I go about creating the skeleton or outline of the paper? Any pointers?
2. I understand that benchmarks and some working stats of the system are important, but what is the breakup? How much content should be dedicated to the theoretical algorithms (which we implemented, but did not create) and how much to the benchmark data?
3. What format (IEEE, ACM, etc.) should I select, and why?

PS: The project does not really concentrate on creating something new and unique; its aim is to create an app which students and researchers can use to help understand the field of study better. Performance is not an objective for us, since there are a lot of open-source implementations (production quality) in the same domain. Think of Kaffe for the JVM; that's what we are building for distributed storage systems.
Research paper on distributed computing - Advice?
research;distributed computing;journal
Usually a research paper means you want to publish it on an accepted platform; usually these are conferences. If so, you should select such a conference, ideally one with a submission deadline coming up soon. Usually the conference dictates the layout of submissions (so this answers your question 3). From the other two questions I would conclude that you have almost no paper-writing experience. If you want to submit such things you should have a mentor (e.g. for PhD students this is their professor) who can answer such questions, especially because certain communities have certain demands which are known to experienced researchers. A general outline of such a paper is: Abstract, Introduction, Prior/Related Work, Idea, Implementation, (Experimental) Results, Conclusion & Future Work (this should answer question 1). For question 2 I can only say: find someone experienced in your field, as different research communities have different expectations. In general, a non-research paper (as yours is, since, as you said, you did nothing new) is unlikely to get accepted. In my field of research (compilers and hardware) the acceptance rate of conferences and journals is 20-30% of submissions. And again: an advisor would be really helpful here. If you are writing your paper just to put it on an internal company web server, use the structure above as a guideline and weight the individual sections as you want.
_unix.226219
I am under the following restrictions:

- I have a 1.0 GB .zip file on my computer which contains one file, a disk image of Raspbian. When uncompressed, this file is 3.2 GB in size and named 2015-11-21-raspbian-jessie.img.
- After having downloaded the zip file, I have just under 1.0 GB of storage space on my computer, not enough space to extract the image to my computer.
- This file needs to be uncompressed and written to an SD card using plain old dd.

Is it possible for me to write the image to the SD card under these restrictions? I know it's possible to pipe data through tar and then pipe that data elsewhere; however, will this still work for the zip file format, or does the entire archive need to be uncompressed before any files are accessible?
Can I uncompress a zip file containing a disk image and then save that to an SD card all in one step?
dd;zip
Use unzip -p:

unzip -p 2015-11-21-raspbian-jessie.zip 2015-11-21-raspbian-jessie.img | dd of=/dev/sdb bs=1M
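A few hedged additions around the same pipeline; /dev/sdb is only an example device name, so confirm it first:

lsblk                                      # identify which device is actually the SD card
unzip -l 2015-11-21-raspbian-jessie.zip    # confirm the entry name inside the archive
unzip -p 2015-11-21-raspbian-jessie.zip 2015-11-21-raspbian-jessie.img | sudo dd of=/dev/sdb bs=1M
sync                                       # flush write buffers before removing the card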
_cs.57296
What is an example of a context-free language that runs in NP-time? I've done searches but can't find one. Frankly, I do not know how to determine when a CFL is in P or NP. Can someone tell me, please?
CFL that runs in NP-time
formal languages;runtime analysis;np
null
_datascience.2564
A short while ago, I came across this ML framework, which has implemented several different algorithms ready for use. The site also provides a handy API that you can access with an API key. I need the framework to solve a website classification problem: I basically have to categorize several thousand websites based on their HTML content. As I don't want to be bound to their existing API, I wanted to use the framework to implement my own. However, besides some introductory-level data mining courses and associated reading, I know very little about what exactly I would need to use. Specifically, I'm at a loss as to what exactly I need to do to train the classifier and then model the data. The framework already includes some classification algorithms like Naive Bayes, which I know is well suited to the task of text classification, but I'm not exactly sure how to apply it to the problem. Can anyone give me rough guidelines as to what exactly I would need to do to accomplish this task?
Using the Datumbox Machine Learning Framework for website classification - guidelines?
machine learning;classification;java
null
_codereview.96218
I have been working on a project and I'm struggling to find the answer. Right now I'm coding with Angular, and there is some code that I have to write twice. There are only two entries in the array now, but if I have lots of them the code will get really big. What always stops me is that I have to write the IDs one by one, and I could not find the answer. Please help me simplify the code even more.

$scope.allTheCont = [{
    styleClass: 'lftMn',
    theContId: 'theIdOne',
    pageNum: 'views/1.html',
    theOpn: function() {
        if ($scope.theIdOne) { return true; }
    },
    imgClass: 'mainLeftImg',
    imgSrc: 'img/1.jpg',
    imgName: 'Traveling with sport',
    contClass: 'lftCnt',
    openCont: function() {
        $scope.all = false;
        $scope.theIdOne = true;
        $('.lftMn').addClass('lftMnOpnd');
        $('.lftCnt').addClass('mnCntOpnd');
        $('.mainLeftImg').addClass('mnImgOpnd');
        $location.hash('main');
        $anchorScroll();
    },
    closeCont: function() {
        $scope.all = true;
        $scope.theIdOne = false;
        $('.lftMn').removeClass('lftMnOpnd');
        $('.lftCnt').removeClass('mnCntOpnd');
        $('.mainLeftImg').removeClass('mnImgOpnd');
        $location.hash('theIdOne');
        $anchorScroll();
    }
}, {
    styleClass: 'mnRgtImg',
    theContId: 'theIdTwo',
    pageNum: 'views/2.html',
    theOpn: function() {
        if ($scope.theIdTwo) { return true; }
    },
    imgClass: 'mnRgtImgs',
    imgSrc: 'img/2.jpg',
    imgName: 'Traveling with sport',
    contClass: 'mnRgtCnt',
    openCont: function() {
        $scope.all = false;
        $scope.theIdTwo = true;
        $('.mnRgtImg').addClass('mnRtOpnd');
        $('.mnRgtCnt').addClass('mnRtCntOpnd');
        $('.mnRgtImgs').addClass('mnRgtImgOpnd');
        $location.hash('main');
        $anchorScroll();
    },
    closeCont: function() {
        $scope.all = true;
        $scope.theIdTwo = false;
        $('.mnRgtImg').removeClass('mnRtOpnd');
        $('.mnRgtCnt').removeClass('mnRtCntOpnd');
        $('.mnRgtImgs').removeClass('mnRgtImgOpnd');
        $location.hash('theIdTwo');
        $anchorScroll();
    }
}];
Scroll-related inline image resizing
javascript;array;angular.js
You have a couple of methods that repeat themselves for the most part that could be factored out. For example;the openCont and closeCont are nearly identical, so you could factor them out to be their own function, for example,var _openCont = function(theId, classesToSet) { $scope.all = false; $scope[theId] = true; for (var key in classesToSet) { $(key).addClass(classesToSet[key]); } $location.hash('main'); $anchorScroll();};Which would be called like:_openCont('theIdOne', { '.lftMn': 'lftMnOpnd', '.lftCnt': 'mnCntOpnd', '.mainLeftImg': 'mnImgOpnd'});Along with this, I'd recommend naming your variables more clearly. Ideally code should be able to be read very easily even to a new set of eyes. This will make maintainability far easier down the road.
_unix.179869
I'm using openssl dgst -sha1 -binary to get hash values of my strings in binary format. (I'm using the -binary flag because my version of openssl adds stdout before each hash value in its default output, and -binary helps to avoid it; it is therefore easier to store hash results in binary format for further processing, so I can just use xxd -p when I want hex values instead of manually cutting that stdout out of each string.)

So, the binary output of openssl dgst -sha1 -binary for a Hello! string in the Cygwin console will look like this: _q%a.&C0NQvH&8i

Now I create a new variable with this result and concatenate it with another variable whose value is not in binary format (i.e. World). So my new variable now looks like:

_q%a.&C0NQvH&8iWorld

Then I generate another hash for this concatenated string and compare it to the one I got using the default Java hashing libraries (MessageDigest). On this step, however, the hashes obtained via the shell and Java don't match (and I need to get exactly the same value as the one generated on the Java side). So I suppose that my World string should be in binary format as well to match my Java hash output (because as long as I generate hashes for concatenated binary values, all hashes match). However, I don't know how to convert my World string to binary format in the shell. Any ideas?
How to get binary representations of strings in the shell?
shell;shell script;java;openssl;binary
You can't store binary data in bash variables (binary data here meaning data with arbitrary byte values, not only byte values that form valid characters), as bash doesn't support storing the 0 byte value in its variables (and remember you can't pass such strings as arguments to commands, as those are NUL-delimited strings). You can in zsh, though. Also remember that command substitution strips trailing newline characters (0xa bytes; maybe different on Cygwin), so it's probably better to use read here:

$ echo 323 | openssl dgst -sha1 -binary | hd
00000000  3a 8b 03 4a 5d 00 e9 07  b2 9e 0a 61 b3 54 db 45  |:..J]......a.T.E|
00000010  63 4b 37 b0                                       |cK7.|
00000014

See how that contains both a 0 byte and a newline character (0xa).

$ echo 323 | openssl dgst -sha1 -binary | IFS= LC_ALL=C read -ru0 -k20 var && var=${var}World
$ printf %s $var | hd
00000000  3a 8b 03 4a 5d 00 e9 07  b2 9e 0a 61 b3 54 db 45  |:..J]......a.T.E|
00000010  63 4b 37 b0 57 6f 72 6c  64                       |cK7.World|
00000019

Note again that you can only pass that variable to builtin commands (printf...). Now, if all you want is to hash it again, then it's just:

(echo 323 | openssl dgst -sha1 -binary; printf %s World) | openssl dgst -sha1 -binary

No need for a variable.
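And if the goal is to compare against the Java MessageDigest output mentioned in the question, a hedged follow-up is to render the final digest as hex with xxd -p (the tool the asker already uses) so both sides can be compared as plain text:

(echo 323 | openssl dgst -sha1 -binary; printf %s World) | openssl dgst -sha1 -binary | xxd -p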
_unix.134558
I was wondering if it's possible to detect a shutdown due to power loss on a Linux system (power loss defined as: pressing the reset button, pressing the power button, or pulling the power cord). If so, how? (I.e., if this is already something that people can do, what commands do I run?) The way I would imagine something like this working:

1. When the system comes back up, it checks for an issued shutdown or reboot command, and finds none.
2. The system checks for any sort of logged errors that would require a reboot, such as a kernel panic (OOM maybe?), and finds none.
3. If no smoking gun is found as described above, the system logs something like "No cause for shutdown found, potential power loss detected."
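To make the idea concrete, here is a minimal shell sketch of the logic I imagine; the marker file, its path, and the log file location are all illustrative assumptions (a clean-shutdown hook would have to be added separately), not an existing facility:

#!/bin/sh
# run once at boot; a shutdown/reboot hook is assumed to have done:
#   touch /var/lib/misc/clean-shutdown
MARKER=/var/lib/misc/clean-shutdown
if [ -f "$MARKER" ]; then
    rm -f "$MARKER"                       # a shutdown or reboot command was issued
elif grep -qiE 'panic|out of memory' /var/log/kern.log 2>/dev/null; then
    logger "reboot likely caused by a logged kernel error"
else
    logger "No cause for shutdown found, potential power loss detected"
fi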
Detecting a pulled-plug scenario with Linux
logs;shutdown
null
_unix.34412
I'm using the following configuration:

- Ubuntu 11.10 x64
- Awesome Window Manager v3.4.10
- Gnome 3.2

...and I have the following problem: every time I click on the empty space in a menu bar or toolbar in a Gtk application (like Nautilus or GEdit), mouse clicks don't work any more. Sometimes it also happens after clicking a resize grip (see this bug report). I'm still able to move the mouse cursor, but clicking and scrolling don't have any effect. All keyboard actions work as before, e.g. I can switch desktops with Mod4-Left / Mod4-Right, type in text boxes, or switch the active window with Mod4-j/k. When I close the application where I did the killing click, everything works perfectly as before. This is especially annoying when it happens in Nautilus, as I have to killall nautilus to make it work again, which kills the desktop as well. Does anybody have the same problem, and is there any bugfix around?

EDIT: I found this item in the Ubuntu bug tracker which describes exactly this misbehavior. As far as I understand, they want to release the bugfix in the next Ubuntu version, 12.04. Is there a chance to get the update earlier? Do I have to compile Gtk myself, and if yes, how do I do this?

Note: I asked this, less specifically, here.
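For reference, my understanding of the usual Ubuntu route for rebuilding a library with a patch applied is sketched below; the package name libgtk-3-0 is my assumption for 11.10 and should be verified with apt-cache first:

apt-get source libgtk-3-0           # fetch the Ubuntu source package
sudo apt-get build-dep libgtk-3-0   # install its build dependencies
cd gtk+3.0-*                        # directory name depends on the source version
# apply the upstream patch here, then:
dpkg-buildpackage -us -uc           # build unsigned binary packages
sudo dpkg -i ../libgtk-3-0_*.deb    # install the rebuilt library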
Clicking on Gtk apps menu bar kills mouse
gnome;mouse;gtk;awesome
null
_codereview.11677
One day I realised that my Android project uses AlertDialogs here and there in a very ineffective way. The heavily duplicated portion of code looked like this (actually, copied and pasted from some documentation):

AlertDialog.Builder builder = new AlertDialog.Builder(context);
builder.setMessage("Are you sure you want to close application?")
       .setPositiveButton("Yes", new DialogInterface.OnClickListener() {
           public void onClick(DialogInterface dialog, int id) {
               dialog.dismiss();
               System.runFinalizersOnExit(true);
               System.exit(0);
           }
       })
       .setNegativeButton("No", new DialogInterface.OnClickListener() {
           public void onClick(DialogInterface dialog, int id) {
               dialog.dismiss();
           }
       });
builder.create().show();

Apparently, this code fragment should be transformed into a function accepting two parameters: a question to ask, and an action to perform. After several iterations of code optimization (each of which seemed to be the last and most minimal one), I came to the following class:

public abstract class DialogCallback implements DialogInterface.OnClickListener, Runnable {

    DialogCallback(Context c, String q) {
        AlertDialog.Builder builder = new AlertDialog.Builder(c);
        builder.setMessage(q)
               .setPositiveButton("Yes", this)
               .setNegativeButton("No", this);
        builder.create().show();
    }

    public void onClick(DialogInterface dialog, int which) {
        dialog.dismiss();
        if (which == DialogInterface.BUTTON_POSITIVE)
            run();
    }
}

with the following usage:

new DialogCallback(context, "are you sure?") {
    public void run() {
        // some action
    }
};

Is there a way of optimizing this further? What other methods for effective dialog callbacks can be used?
Efficient implementation of AlertDialog callback in Android
java;android;callback
null