Columns (all string-valued): id (length 5 to 27), question (length 19 to 69.9k), title (length 1 to 150), tags (length 1 to 118), accepted_answer (length 4 to 29.9k).
_unix.213011
I know that a file may already have the C header #include <stdint.h> somewhere in its first 1-30 lines. Using GNU tools, I would like to add it to files where it is not already present, but only if the file contains the word LARGE_INTEGER.

Substitute [0,1] matches

I thought first about always replacing [0,1] matches as follows, but now I think the extra substitution is unnecessary so it should be avoided:

    gsed -i '1-30s/^(#include <stdint.h>\n)?/#include <stdint.h>\n/'

I propose to reject this.

Extra ggrep approach to say no match

I think this may be a good solution because it first checks the conditions and then substitutes only if necessary. This thread suggests adding an extra grep, so the code (the while-loop structure is from here) is:

    while read -d '' -r filepath; do \
        [ $(ggrep -l LARGE_INTEGER $filepath) ] && \
        [ $(ggrep -L #include <stdint.h> $filepath) ] \
        | gsed -i '1s/^/#include <stdint.h>\n/' $filepath
    done

which however gives

    test.sh: line 5: stdint.h: No such file or directory

and it does add the header to files without #include <stdint.h> too, which is wrong. How can you combine two conditional statements for sed efficiently?
To sed if with and without conditions correct
bash;sed;grep;gnu
I'm not sure I decrypted your prose correctly. The script below adds #include <stdint.h> to files in the current directory that (1) contain the word LARGE_INTEGER, and (2) don't already include <stdint.h>:

    sed -i -e '1i\' -e '#include <stdint.h>' $(
        egrep -c '#include +<stdint\.h>' $( fgrep -lw LARGE_INTEGER * ) | \
        sed -n '/:0$/ { s/:0$//; p; }'
    )

The script assumes GNU sed (for -i). In detail:

fgrep -lw LARGE_INTEGER * lists the files in the current directory that contain the word LARGE_INTEGER.
egrep -c '#include +<stdint\.h>' counts the occurrences of #include <stdint.h> through the files found by fgrep above.
sed -n '/:0$/ { s/:0$//; p; }' selects the files with 0 matches and keeps only the filenames.
sed -i -e '1i\' -e '#include <stdint.h>' adds a first line #include <stdint.h> to the files found above.
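Not part of the original exchange: a minimal sketch of how the two conditions from the question are often combined with grep -q in a single loop. The gsed/ggrep tool names and the *.c file pattern are assumptions carried over from the question, not a tested solution.

```bash
# Prepend the include only to files that mention LARGE_INTEGER
# and do not already contain the include line anywhere.
find . -type f -name '*.c' -print0 | while IFS= read -r -d '' filepath; do
    if ggrep -q 'LARGE_INTEGER' "$filepath" && \
       ! ggrep -q '#include <stdint.h>' "$filepath"; then
        gsed -i '1s/^/#include <stdint.h>\n/' "$filepath"
    fi
done
```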
_unix.270707
So I updated Ubuntu to 14.04 a few days ago and I just noticed Windows 10 went missing from the grub menu options. I tried multiple variations of update-grub and tried using boot-repair, too, but nothing fixed it. Here's the pastebin from boot-repair.I'm at a loss as to what to try next. Any help?EDIT: After reading a few suggestions elsewhere, I tried editing /etc/grub.b/40_common, and here are its current contents:#!/bin/shexec tail -n +3 $0# This file provides an easy way to add custom menu entries. Simply type the# menu entries you want to add after this comment. Be careful not to change# the 'exec tail' line above.menuentry Windows 10 { set root='(hd0,msdos1)' chainloader +1}menuentry Windows 102 { set root='(hd0,msdos2)' chainloader +1}But booting from either Windows 10x option doesn't work.Option 1 (set root='(hd0,msdos1)') displays this error (imgur .com/AbymY1r.jpg), which stays onscreen for about half a minute or until I ctrl+alt+del out of it (which restarts the computer and goes back to grub).Option 2, on the other hand, gives off this error:BOOTMGR is missingPress Ctrl+Alt+Del do restartI tried using the repair options through the Windows 10 installation disk, and assorted commands within it (e.g. bootrec /RebuildBcd, bootrec /FixMbr and bootrec /FixBoot), but all that did was screw up grub again, and I ended up not being able to boot to neither Ubuntu nor Windows. I made grub come back by using the Ubuntu Live CD, now I'm back to the same problem, except for these new Windows 10 entries I manually added to grub.This is the output for fsbkl -f:NAME FSTYPE LABEL MOUNTPOINTsda sda1 ntfs System Reserved sda2 ntfs sda3 sda5 swap [SWAP]sda6 ext4 /sr0EDIT 2: SOLVED!So, I managed to solve it by following Christian_Sosa's answer at MS support, basically run chkdsk on the windows drives and then try startup repair. In my case, chkdsk did the trick.
Windows 10 missing from grub after Ubuntu update
ubuntu;windows;dual boot;grub
So, I managed to solve it by following Christian_Sosa's answer at MS support: basically run chkdsk on the Windows drives and then try Startup Repair. In my case, chkdsk did the trick.

Boot into Repair mode from a Windows 10 install disk.
Launch the command prompt.
Type the following commands:

    diskpart

This will launch the disk partition utility; we will want to know the volume drive letter for where our OS is located.

    list volume

It should list your HDDs as well as their drive letters. Remember the drive letter that is on the HDD that most resembles the storage capacity. It may or may not say boot for the file description. In my case, I had to repeat this process for both the C: and D: drives, though both had very different sizes.

    exit

In order to run the next command we will need to exit the disk partition utility.

    chkdsk /f X:

Replace X with your boot OS drive letter that we confirmed earlier.

Reboot the system back into the recovery disc.
Choose Startup Repair and let it run.

In my case, Startup Repair never actually ran, but I tried anyway. It seems chkdsk alone did the trick. And for the record, the correct grub menuentry in my case was

    menuentry "Windows 10" {
        set root='(hd0,msdos1)'
        chainloader +1
    }

Thanks for the answers and comments.
_unix.21156
I downloaded this package for ffmpeg. When I try to install it with the command

    sudo dpkg -i ffmpeg_0.7.1-5_i386.deb

it writes this error message:

    Unpacking ffmpeg (from ffmpeg_0.7.1-5_i386.deb) ...
    dpkg: error processing ffmpeg_0.7.1-5_i386.deb (--install):
    trying to overwrite '/usr/share/ffmpeg/libx264-ipod640.ffpreset', which is also in package libavcodec-extra-52 4:0.5.1-1ubuntu1.2
    dpkg-deb: subprocess paste killed by signal (Broken pipe)
    Errors were encountered while processing: ffmpeg_0.7.1-5_i386.deb

Could you help me with the installation of this particular version (0.7.1-5) for Ubuntu 10.04?

EDIT: after the command

    sudo apt-get remove libavcodec52 libavcodec-extra-52

new output

Should I go manually now step by step and install the dependencies (and possibly their dependencies) or is there some trick?
Install ffmpeg 0.7.1-5 from debian package
software installation;ffmpeg;dpkg
null
_codereview.136384
I have some filenames, and they each have data contained in them at specific locations in the string. I want a function that returns the value of the data specified by field. The doctest illustrates. I'd like a nicer, easier-to-understand way to do this. If possible, but secondarily, I'd like to be able to give the data in different types, according to which field they are, rather than just the string value.

    def get_field_posn_list(field):
        """List containing field position indices in filename"""
        return({'place': [-11],
                'year': range(-10, -7),
                'session': [-6]}[field])

    def get_field_str(fn, field):
        """Get the string value of field stored in string fn

        >>> get_field_str('/path/to/file/Z998-5.asdf', 'session')
        '5'
        """
        val = ''.join([fn[i] for i in get_field_posn_list(field)])
        return val
Extract specified field value from filename using positional indexing
python
This task is nearly equivalent to another recent question. My advice is that string analysis is usually best done using regular expressions.

    import os
    import re

    def get_field_str(file_path, field):
        """Get the string value of field stored in file_path

        >>> get_field_str('/path/to/file/Z998-5.asdf', 'session')
        '5'
        """
        pattern = re.compile('(?P<place>.)(?P<year>.{3})-(?P<session>.)\.')
        return pattern.search(os.path.basename(file_path)).group(field)

The regex can validate the filename as it works. Feel free to adjust it to match your expectations.
_unix.355653
I was curious to see how *nix based operating systems generate routing tables and found that netstat -rn parses its output from /proc/net/route. Viewing my /proc/net/route shows something akin to:

    Iface  Destination  Gateway   Flags  RefCnt  Use  Metric  Mask      MTU  Window  IRTT
    eth0   00000000     0101A8C0  0003   0       0    0       00000000  0    0       0

Running a normal netstat -rn, I get the following:

    Destination  Gateway      Genmask  Flags  MSS  Window  irtt  Iface
    0.0.0.0      192.168.1.1  0.0.0.0  UG     0    0       0     eth0

For the most part, I understand what I'm seeing. I just want to understand how the Flags are determined. For example, how does the value 0003 map to UG (interface is up and is a gateway)?

Could someone point me in the right direction? Thanks.
How is *Nix parsing Netstat -rn (routing table) Flags?
netstat
null
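The record above has no accepted answer. As background, the Flags column in /proc/net/route is a bitmask built from the RTF_* constants in <linux/route.h>: RTF_UP = 0x0001 prints as U, RTF_GATEWAY = 0x0002 as G, RTF_HOST = 0x0004 as H, so 0003 = RTF_UP | RTF_GATEWAY = UG. A small sketch that decodes the column (assumes GNU awk for strtonum/and; my own illustration, not from the thread):

```bash
# Decode the hex Flags field of /proc/net/route into netstat-style letters
awk 'NR > 1 {
    flags = strtonum("0x" $4)
    letters = ""
    if (and(flags, 0x0001)) letters = letters "U"   # RTF_UP
    if (and(flags, 0x0002)) letters = letters "G"   # RTF_GATEWAY
    if (and(flags, 0x0004)) letters = letters "H"   # RTF_HOST
    printf "%-10s %-6s %s\n", $1, $4, letters
}' /proc/net/route
```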
_softwareengineering.190961
I'm planning to write a single API that is syntactically valid in most major programming languages (to the greatest extent possible, so that only minimal amounts of code will need to be re-written when moving from one language to another). I think the simplest way to do this would be to write a group of functions that consist entirely of function calls and variable assignments, since the syntax for function calls is almost exactly the same in all major programming languages.

Of course, I'd need to write wrapper functions for if-statements and while-loops, since the syntax of each of these is different in each programming language.

Here's a proof-of-concept for one polyglot function that should work in most programming languages:

    //JavaScript syntax
    function proofOfConcept(){
        printSomething("This is just a proof of concept."); //the function printSomething will
        // need to be implemented separately in each programming language.
    }

    //Java syntax
    public static void proofOfConcept(){
        printSomething("This is just a proof of concept."); //the function printSomething will
        // need to be implemented separately in each programming language.
    }

    //Python syntax
    def proofOfConcept():
        printSomething("This is just a proof of concept."); //the function printSomething will
        // need to be implemented separately in each programming language.

    //C syntax
    void proofOfConcept(){
        printSomething("This is just a proof of concept."); //the function printSomething will
        // need to be implemented separately in each programming language.
    }

Would this be a useful design strategy, or is there a better way to do this? I think this would be a useful strategy for developing libraries in multiple programming languages, although it would require a small number of language-specific functions to be written for each target language.
Writing an API that is syntactically valid in multiple programming languages
language agnostic
The Katahdin programming language was designed to solve this problem. Its documentation includes this example, which combines Fortran and Python functions in one file:

    import fortran.kat;
    import python.kat;

    fortran {
        SUBROUTINE RANDOM(SEED, RANDX)
        INTEGER SEED
        REAL RANDX
        SEED = 2045*SEED + 1
        SEED = SEED - (SEED/1048576)*1048576
        RANDX = REAL(SEED + 1)/1048577.0
        RETURN
        END
    }

    python {
        seed = 128
        randx = 0
        for n in range(5):
            RANDOM(seed, randx)
            print randx
    }
_unix.227312
I am trying to get a bash script working, and in order to do so I need to transform a local time in +%Y%m%d%H%M%S format (example: 20150903170731) into UTC time in the same format.

I know date -u can give me the current UTC time:

    $ date +%Y%m%d%H%M%S -u
    20150903161322

Or, without the -u, the local time (here, BST):

    $ date +%Y%m%d%H%M%S
    20150903171322

Now if I add an argument to date, I get an error:

    echo 20150903154607 | xargs date +%Y%m%d%H%M%S -u
    date: extra operand `20150903154607'

Is there any way I can perform this conversion without such errors?
How can I convert a local date-time into UTC date-time?
date
null
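The record above has no accepted answer. A minimal sketch of the conversion being asked about, assuming GNU date (which accepts a "YYYYMMDD HH:MM:SS" string with -d and prints UTC with -u); the sample value comes from the question:

```bash
local_ts=20150903170731
# Re-punctuate the timestamp so GNU date can parse it, then print it in UTC.
utc_ts=$(date -u -d "${local_ts:0:8} ${local_ts:8:2}:${local_ts:10:2}:${local_ts:12:2}" +%Y%m%d%H%M%S)
echo "$utc_ts"   # e.g. 20150903160731 when the local timezone is BST
```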
_codereview.97103
I have a Swing vector-program similar to Dia targeting RAD. There are some plug-ins that let you create structures like Rectangle, Text and Circle but also complex types like Table-Columns.generate files like Spring-Roo-Script, grails-script, SQL-Files and svg-Files.In the Center you have the Drawing-Panel (DrawPanel), on the right side you have the component-tree.In the DrawPanel class, this is the function select that I need a review for. This function changes the selected nodes in the Tree on the right./** The main tab where paint areas are. Never <code>null</code>. */@InjectTab tab;public void select(Set<? extends TreeNode> newSelection, boolean layersModified) { boolean inAssertMode = false; assert inAssertMode = true; if (inAssertMode && getSelection().equals(newSelection)) { if (!layersModified) { throw new UnnecessaryOperationException(Select tree Nodes, getSelection(), newSelection); } LOG.severe(I did not noticed about changed selection, + you only notify about modified layers!); } Set<TreePath> c = new HashSet<>(); for (TreeNode newSelected : newSelection) { c.add(new TreePath(newSelected)); } TreePath[] array = c.toArray(new TreePath[0]); setSelectionPaths(array); if (layersModified) { tab.getCurrentDrawPanel().notifyLayersModified(); }}As you will notice, the code above is only executed if asserations are enabled.
Setting selected elements in a UI tree widget
java
null
_codereview.86963
The class below will be used to track the execution time of various operations. There is no need to dip into C libraries to get formatted output then.Output would be something like this:10 seconds 345 milliseconds 324 microseconds23 hours 32 minutes 15 seconds 324 millisecondsIt depends on the class initialization units and time passed. My query is whether any improvement is possible on this design or this good enough to be used widely. There are dummy parameters in the various diff_str private functions to enable overloading since seconds::rep and microseconds::rep essentially boils down to same basic types and so one for other pairs. I have done this on VS 2013.#include <chrono>#include <iostream>template<typename clock_type, typename dur>class ExecutionTimer{private: typename clock_type::time_point m_start; typename clock_type::time_point m_end; const std::wstring SS = std::wstring(L );public: void set_start(){ m_start = clock_type::now(); } void set_start(typename clock_type::time_point start){ m_start = start; } void set_end(){ m_end = clock_type::now(); } void set_end(typename clock_type::time_point end){ m_end = end; } typename dur::rep diff() { return std::chrono::duration_cast<dur>(m_end - m_start).count(); }; std::wstring diff_str() { auto ret = diff_str(diff(), std::chrono::duration_cast<dur>(std::chrono::seconds(1))); return ret; }private:std::wstring diff_str(std::chrono::hours::rep value, std::chrono::hours dummy) { using namespace std::chrono; auto ret = std::to_wstring(value) + SS + Lhours; return ret; }std::wstring diff_str(std::chrono::minutes::rep value, std::chrono::minutes dummy){ using namespace std::chrono; std::wstring ret{ L }; auto cmp_unit = minutes(60).count(); auto dummy_to_pass = duration_cast<hours>(dummy); if (value > cmp_unit){ ret = diff_str(hours::rep(value / cmp_unit), dummy_to_pass); ret += SS + std::to_wstring(value % cmp_unit) + SS + Lminutes; } else if (value == cmp_unit){ ret = diff_str(hours::rep(1), dummy_to_pass); } else{ ret = std::to_wstring(value) + SS + Lminutes; } return ret; }std::wstring diff_str(std::chrono::seconds::rep value, std::chrono::seconds dummy){ using namespace std::chrono; std::wstring ret{ L }; auto cmp_unit = seconds(60).count(); auto dummy_to_pass = duration_cast<minutes>(dummy); if (value > cmp_unit){ ret = diff_str(minutes::rep(value / cmp_unit), dummy_to_pass); ret += SS + std::to_wstring(value % cmp_unit) + SS + Lseconds; } else if (value == cmp_unit){ ret = diff_str(minutes::rep(1), dummy_to_pass); } else{ ret = std::to_wstring(value) + SS + Lseconds; } return ret;}std::wstring diff_str(std::chrono::milliseconds::rep value, std::chrono::milliseconds dummy){ using namespace std::chrono; std::wstring ret{ L }; auto cmp_unit = milliseconds(1000).count(); auto dummy_to_pass = duration_cast<seconds>(dummy); if (value > cmp_unit){ ret = diff_str(seconds::rep(value / cmp_unit), dummy_to_pass); ret += SS + std::to_wstring(value % cmp_unit) + SS + Lmilliseconds; }else if (value == cmp_unit){ ret = diff_str(seconds::rep(1), dummy_to_pass); }else{ ret = std::to_wstring(value) + SS + Lmilliseconds; } return ret; }std::wstring diff_str(std::chrono::microseconds::rep value, std::chrono::microseconds dummy){ using namespace std::chrono; std::wstring ret{ L }; auto cmp_unit = microseconds(1000).count(); auto dummy_to_pass = duration_cast<milliseconds>(dummy); if (value > cmp_unit){ ret = diff_str(milliseconds::rep(value / cmp_unit), dummy_to_pass); ret += SS + std::to_wstring(value % cmp_unit) + SS + Lmicroseconds; } else if (value == 
cmp_unit){ ret = diff_str(milliseconds::rep(1), dummy_to_pass); } else{ ret = std::to_wstring(value) + SS + Lmicroseconds; } return ret;}};///////////////////////////////////////////////////////////////////////////////int main(){ using namespace std::chrono; using namespace std; ExecutionTimer<high_resolution_clock, microseconds> exe_timer_micro; ExecutionTimer<high_resolution_clock, milliseconds> exe_timer_milli; exe_timer_micro.set_start(); exe_timer_milli.set_start(); for (size_t i = 0; i < (std::numeric_limits<unsigned int>::max()); i++) { if (0 == i % (32 * std::numeric_limits<unsigned short>::max())){ wcout << L.; exe_timer_micro.set_end(high_resolution_clock::now()); exe_timer_milli.set_end(high_resolution_clock::now()); } } wcout << endl; wcout << to_wstring(exe_timer_micro.diff()) << L microseconds << endl; wcout << exe_timer_micro.diff_str() << endl; wcout << to_wstring(exe_timer_milli.diff()) << L milliseconds << endl; wcout << exe_timer_milli.diff_str() << endl;}
Track execution time using std::chrono facilities and print the execution time in a comprehensible way
c++;c++11;datetime;stl
null
_webmaster.59624
I have a web page on a Linux server I administer, running Apache 2.2. This server is visible to the outside world for some other services. I would like to configure Apache so that a given virtual host is only visible from inside the local network, so I can deploy a web application to get feedback from other people in my organization. I reckon this has to do with the Allow directive, but my experiments are not going well.How can I alter my config file to achieve that? Should I change the firewall configuration as well?
Allowing access to an Apache virtual host from the local network only
apache;apache2;linux
Easy. Just set something like this within your main configuration or your virtual host configuration:

    <Directory /var/www/path/to/your/web/documents>
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1 ::1
        Allow from localhost
        Allow from 192.168
        Allow from 10
        Satisfy Any
    </Directory>

The <Directory></Directory> statement basically says: use these rules for anything in this directory. And "this directory" refers to /var/www/path/to/your/web/documents, which I have set in this example but should be changed to match your site's local directory path.

Next, within the <Directory></Directory> area you are changing the default Apache behavior, which allows all by default, to Order Deny,Allow. Then you set Deny from all, which denies access from everyone. Following that are the Allow from statements, which allow access from 127.0.0.1 ::1 (the localhost IP addresses) and localhost (the localhost itself). That's all the standard stuff, since access from localhost is needed for many internal system processes.

What follows is the stuff that matters to you. The Allow from lines for 192.168 as well as 10 will allow access from any/all network addresses within the network range that is prefixed by those numbers. So by indicating 192.168, that basically means that if a user has an address like 192.168.59.27 or 192.168.1.123 they will be able to see the website. Similarly, using Allow from for the 10 prefix assures that if someone has an IP address of 10.0.1.2 or even 10.90.2.3 they will be able to see the content.

Pretty much all internal networks in the world use either the 192.168 range or something in the 10 range. Nothing external. So using this combo will achieve your goal of blocking access from the outside world while only allowing access from within your local network.
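A possible follow-up step after editing the vhost, not part of the original answer (command names vary by distribution; these are the common Debian/Ubuntu ones):

```bash
sudo apachectl configtest      # check the configuration syntax
sudo service apache2 reload    # apply the change without a full restart
```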
_webmaster.88185
I am experiencing high traffic on my website at certain times of the data, at regular intervals. I am trying to determine whether GoogleBot or BingBot access of my website correlates with the high load we experience on our server.Is there a log analyzer or any other way by which I can trace googlebot's access of my website - and the time it occurred and visually show that to me, so I can connect the two metrics (googlebot access vs high load on server)?Details of what happens during the load is described here: https://dba.stackexchange.com/questions/123946/high-data-traffic-from-mysql-server-to-web-serverThis is a Magento website with a large number of products.
How to visually see frequency of GoogleBot / BingBot access of my website from apache access logs (per 15 mins / etc)?
googlebot;logging;apache log files
Both of these free services for real-time log file visualization will help you:

http://logstalgia.io/
http://www.fudgie.org/
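Besides those visualization tools, a rough sketch of counting crawler hits per 15-minute bucket directly from the raw log is shown below; it assumes a combined/common Apache log format, and the log path is a placeholder that will differ per system:

```bash
grep -Ei 'googlebot|bingbot' /var/log/apache2/access.log | awk '{
    split($4, t, ":")                                  # $4 looks like [25/Jan/2016:14:37:12
    bucket = sprintf("%s %s:%02d", substr(t[1], 2), t[2], int(t[3] / 15) * 15)
    hits[bucket]++
}
END { for (b in hits) print b, hits[b] }' | sort
```

Each output line is a date, the start of a 15-minute window, and the number of bot requests in that window, which can then be lined up against the times of the load spikes.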
_unix.231816
I bought a new machine, New Intel i7 Haswell-E 6 Cores processor, 16Gb 3000MHz DDR4 Memory on a GA-X99 Mobo.I put in it 3 new SDD Disks: One for a clean Windows 10 (Works Fine), One for a clean Hackintosh (Works fine, but have same problems with the bootloader) and one for Ubuntu Gnome x64 15.04.In my previous machine (an old DDR2 800Mhz, Intel Core2Quad q9550 machine) Grub2 solved all my boot problems, but all the systems runs under MBR scheme and the Operating Systems shared disks.Now all the systems boots under UEFI scheme and Grub2 do not see the other systems.I installed Grub2 on the Ubuntu Disk with the two other OS partitions mounted on linux but grub2 can't recognize the other 2 OSs.
LINUX Ubuntu Gnome 15.04 Grub2 doesn't detect other SO
boot;grub2
null
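The record above has no accepted answer. A commonly suggested first check (an assumption on my part, not from the thread) is to make sure os-prober is present on the Ubuntu install that owns GRUB and then regenerate the configuration:

```bash
sudo apt-get install os-prober   # the package GRUB uses to detect other operating systems
sudo os-prober                   # see whether the Windows/macOS installs are found at all
sudo update-grub                 # regenerate grub.cfg with any detected entries
```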
_unix.348937
There are two lines in regular user .bashrc(debian8).cat /home/debian8/.bashrcexport HISTTIMEFORMAT=%F %T `tty` export PROMPT_COMMAND=history -wAt the first login with user debian8 and to input tty pwd,not to close it .At the second login with user debian8 and to input tty ls.Now to reboot pc and get log info with history command.What i get is as below:debian8@hwy:~$ history 1 2017-03-02 22:48:25 /dev/pts/0 tty 2 2017-03-02 22:48:28 /dev/pts/0 pwd 3 2017-03-02 22:48:38 /dev/pts/0 tty 4 2017-03-02 22:48:40 /dev/pts/0 ls 5 2017-03-02 22:48:38 /dev/pts/0 tty 6 2017-03-02 22:48:40 /dev/pts/0 ls 7 2017-03-02 22:48:25 /dev/pts/0 tty 8 2017-03-02 22:48:28 /dev/pts/0 pwd 9 2017-03-02 22:48:55 /dev/pts/0 historyWhy can't get the info such as following?How to get the following log info? debian8@hwy:~$ history 1 2017-03-02 22:48:25 /dev/pts/0 tty 2 2017-03-02 22:48:28 /dev/pts/0 pwd 3 2017-03-02 22:48:38 /dev/pts/1 tty 4 2017-03-02 22:48:40 /dev/pts/1 ls 5 2017-03-02 22:48:55 /dev/pts/0 historyTo change from export HISTTIMEFORMAT=%F %T `tty` export PROMPT_COMMAND=history -winto export HISTTIMEFORMAT=%F %T `tty` export PROMPT_COMMAND=history -anot '%F %T tty '.What i get is debian8@hwy:~$ history 1 2017-03-02 22:48:25 /dev/pts/0 tty 2 2017-03-02 22:48:28 /dev/pts/0 pwd 3 2017-03-02 22:48:38 /dev/pts/0 tty 4 2017-03-02 22:48:40 /dev/pts/0 ls 5 2017-03-02 22:48:55 /dev/pts/0 historyIs there no way to get such info as following?debian8@hwy:~$ history 1 2017-03-02 22:48:25 /dev/pts/0 tty 2 2017-03-02 22:48:28 /dev/pts/0 pwd 3 2017-03-02 22:48:38 /dev/pts/1 tty 4 2017-03-02 22:48:40 /dev/pts/1 ls 5 2017-03-02 22:48:55 /dev/pts/0 history
How to set the history log properly?
bash;history
null
_unix.28588
So I want to learn to use Solaris [Diskgroups, etc.]. I recently found out that there is an OS named OpenSolaris: https://en.wikipedia.org/wiki/OpenSolaris

Is it a good starting point for learning Solaris? Or is it different from Solaris? How much does it differ?
Difference between Solaris and OpenSolaris?
solaris;opensolaris
You can read the history of OpenSolaris from that article you link, and a bit on the main Solaris page on Wikipedia. That tells you that indeed, OpenSolaris is/was very close to plain Solaris.Given that some modules - the core in particular - are not published as open source code anymore by Oracle, OpenSolaris (or the other open source derivatives) is likely to drift a bit from Solaris but if it's just to get started, it's still fine.Also note that you can download Solaris 11 from OTN, including a live CD version and pre-built VM images for x86. If you check the FAQ for these downloads, you'll see (near the end) that they can be used without a support contract for non-production uses – read that section and the licensing requirements carefully to make sure it applies to you, as you should do with any licensing agreement.
_cs.66079
I've tried researching, and my best strategy I've got is to attempt to pull out data from the table, and then push the data against itself in order to remove all duplicates, ending up with a table of non-duplicates, which I can then 'minus set' over the original table, leaving me with duplicates.OPERATION (ID, PID, DID, Date, Time, Type)Is there anybody who is able to help me obtain a list of entries where there is more than 1 entry of a matching PID? I'm having issues working out exactly how to do what I've thought of, as well as, with relational algebra, is that the correct way?
Relational Algebra obtaining a list of duplicates
databases;relational algebra
null
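The record above has no accepted answer. As a sketch of the standard textbook construction (my own illustration; O2 is an assumed renaming of OPERATION, not notation from the question), the rows whose PID also occurs in a different row can be obtained with a self-join through renaming:

$$\pi_{O.ID,\,O.PID,\,O.DID,\,O.Date,\,O.Time,\,O.Type}\Big(\sigma_{\,O.PID = O2.PID \;\wedge\; O.ID \neq O2.ID}\big(\rho_{O}(OPERATION) \times \rho_{O2}(OPERATION)\big)\Big)$$

Subtracting that result from OPERATION gives the rows with a unique PID, which matches the "minus set" strategy described in the question, only without needing the intermediate de-duplication step.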
_unix.275414
I'm trying to get a better sense of how the Print to File option works on Linux Mint. Basically, when I open a PDF with Evince and print it to PDF, the output is different from the original, and typically quirks in the original file are cleaned up in the output. If I pipe the original file through Ghostscript instead, the output is closer to the original. How do I figure out how Print to File works differently from Ghostscript?
How do I trace why printing pdf from Ghostscript vs Evince is different?
pdf;printing;ghostscript
null
_unix.327490
I have a lenovo thinkpad 13 laptop running linux mint 18. After a recent update it stopped logging in to my university network (still worked at home). Someone suggested that I remove network manager from my system and try wicd. But when I did that and rebooted, there is a problem with xsession and it keeps saying your session lasted less than 10 seconds...etcThis has happened before on my old laptop when doing something else and the solution was to just type:sudo apt-get install cinnamonHowever, my problem now is that I have no internet connection so I cannot reinstall cinnamon. My laptop has no dedicated ethernet port so i cant plug directly to the router, however i have been able to usb tether from my phone in the past.Is there a way of connecting via my USB phone tethering so that i can reinstall cinnamon and log back in to mint? Or failing that, is there a way I can install the cinnamon package from a USB drive?Any help would be greatly appreciated!EDIT:If it helps at all, I think my wifi might still be available too. When I type iwconfig, the result is:enp0s31f6 no wireless extensionslo no wireless extensionswlp3s0 IEEE 802.11abgn ESSID:off/any Mode: Managed Access Point: Not-Associated Tx-Power=0 dBm Retry short limit: 7 RTS thr:off Fragment thr:off Power Management:onim not sure if that will help or not, but i thought it might be relevant..
How can I connect to the internet from the console? (Removed network manager and now can't log in or connect to internet)
networking;linux mint;cinnamon
I managed to work it out, so thought I would share what I did in case anyone else had a similar problem. I found a cached version of network manager in /var/cache/apt/archives/ and reinstalled that using dpkg and then apt-get install -f, and that let me connect to the internet to reinstall cinnamon.Phew!
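A sketch of the commands the answer above describes in prose (the exact .deb filename in the apt cache will differ per system):

```bash
sudo dpkg -i /var/cache/apt/archives/network-manager_*.deb
sudo apt-get install -f          # resolve any missing dependencies
sudo apt-get install cinnamon    # once the network is back, reinstall the desktop
```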
_unix.240356
I'm trying to do a simple script that unzips a zip file which has an exclamation mark in its name:

    #!/bin/bash
    UNZIP='/usr/bin/unzip'
    CUT=/usr/bin/cut
    GREP=/usr/bin/grep
    FILENAME=test
    FILE=/usr/local/var/www/htdocs/$FILENAME\!3.zip
    UNZIPPEDFOLDER=$($UNZIP ${FILE} | $GREP -m1 'creating:' | $CUT -d' ' -f5-)
    echo $UNZIPPEDFOLDER

but when the script is being executed, unzip returns:

    unzip: cannot find or open /usr/local/var/www/htdocs/test\!3.zip, /usr/local/var/www/htdocs/test\!3.zip.zip or /usr/local/var/www/htdocs/test\!3.zip.ZIP.

It works fine when there is no ! sign in the filename, but to make some of my work automated, I need to stay with the original file names.
Unzipping file with exclamation mark from command line in bash script
bash;shell script;osx
In general you don't have to escape and quote anything. The secure way to write this would be:

    FILE=/usr/local/var/www/htdocs/${FILENAME}!3.zip
    UNZIPPEDFOLDER=$($UNZIP $FILE | $GREP -m1 'creating:' | $CUT -d' ' -f5-)
    echo $UNZIPPEDFOLDER
_softwareengineering.259941
This question has been asked here, but received poor answers and didn't clarify the issue. I believe it justifies asking it again.

I understand that you can have duck typing with either dynamically typed languages or with statically typed ones (but examples for these are rare, such as C++'s templates). However, I'm not sure if there is such a thing as a dynamically typed language without duck typing.

Duck typing means that the type of an object is based on the operations and attributes it has at a given point in time. Is there a way to do dynamic typing without inevitably supporting duck typing?

Let's look at this Python code for example:

    def func(some_object):
        some_object.doSomething()

    something = Toaster()
    func(something)

In dynamically typed languages, the type of an object is only known at run time. So when you try to do an operation on it (e.g. some_object.doSomething()), the runtime has only one choice, which is to check whether or not the type of some_object supports doSomething(), which is exactly what duck typing is.

So is it possible to have a dynamically typed language without duck typing? Please explain.
Is it possible to have a dynamically typed language without duck typing?
programming languages;dynamic typing;duck typing
null
_softwareengineering.200320
I work as a solo developer in a small company. There's more than enough work, but the same does not apply for money. Thus, I won't be seeing any new colleagues in the near future.I am responsible for absolutely everything that has do to with IT operations. This involves development and maintenance of software used in-house, development and maintenance of various websites which our clients use, website infrastructure, local network infrastructure including maintenance of several servers and in-house support to mention the most immediate things. I really enjoy 95% of what I do, and I've got a high degree of flexibility in my work. I get to decide what do when, and no one really tells me what do to except that I now and then sit down with my colleagues to create a roadmap for what I need to get done. I do consider myself to have a high work ethic and being above average focused on what I do, so things get done.However, I've come to the point where I really miss having other people around me who work with the same. Even though I need to get familiar with a wide range of technologies as I am a solo developer, I have the feeling that I am missing out one the knowledge sharing which other like minded people who work in bigger companies are taking part in. I don't really have anyone to discuss programming obstacles and design decisions with - and I am starting to miss that. Also, I'm worried about what future employers might think of this hermit who has been working on his own for too long to ever be able to take part in a team. However, on the other side, I'm thinking that I won't get my current degree of flexibility in a larger company. I'll be seeing a lot more strict deadlines, late hours and specialized areas of work. Also; I'm not sure if this idea of knowledge sharing will ever take take place?Has anyone else been in this situation? Is it a good idea seen from a career perspective and a personal development perspective? Should I consider moving on to a bigger place to (maybe) become a part of a larger group of developers and like minded people? In other words, will the grass be greener on the other side?
Solo developer vs. team developer : should I move on?
skills;solo development
null
_unix.146259
I know I can increase the size of a file system and LV in one go using lvresize -r, but is it safe to use the same approach to reduce an LV file system?

Here is the man page entry for -r:

    -r, --resizefs
        Resize underlying file system together with the logical volume using fsadm(8).

I would have thought it should be safe if the file system gets reduced first. Thanks
Is it safe to use lvresize -r to reduce LVM
filesystems;lvm
null
_unix.35432
I have read many other threads here and elsewhere but found no solution...I'm trying to install Fedora 16 x86-64. At random points during installation I get black screens that do not go away. I have tried adding nomodeset to the kernel options (by far the most common answer to this kind of problems) but it didn't help. I also tried it with combination of xdriver=vesa, still with no success. I also tried acpi=off, noapic, nolapic and noapictimer combined with and without nomodeset with no success.Any suggestions?Btw, my machine has an Intel i7-950 CPU, 12 GB RAM, Gigabyte X58A-UD3R motherboard, an ATI 5870 graphics card (Gigabyte GV-R587UD-1GD Bios F8) and a couple of SATA hard drives.
Black screen when installing Fedora 16
boot;system installation;crash;fedora
null
_unix.359673
I'm trying to figure out how to write this as a standard bash script (running on a QNAP), but not sure where to start - I can do it in NodeJS (since I use it all the time for my job), but would really rather have something.I'm looking for pointers to how to do this rather than an actual script (though don't let that stop you) ;-)Using the built-in Plex converted files (for compatibility etc) - input to that is a folder tree with various movie files (extensions are most commonly .avi, .mp4 or .mkv), which it then makes into an .mp4 file in a slightly different path.I want to copy (not move - I'll let Plex clean up its own mess) the Plex output files back over the input files.Example structure - from...Plex Versions/Optimized for Mobile/My Tv Show/S01E01.mp4...to...My TV Show (1990-1995)/Season 1/s01e01 My Episode Name.mkvNotes:The root path for both is the same (for my setup, others might have the Plex Versions folder within the My TV Show (1990-1995) folder - this script will go on GitHub when done etc).The TV show name may change. I put the start and end date (or a trailing - if ongoing) in brackets. Plex will use the official name, which might have a start year if there is more than one show with the same name.Bearing the above in mind, I may have multiple shows with the same base name, but different years.The target basename will be in the Season x folder (season number without leading 0) and have the same prefix.If it's season 0, then it might instead be in the Specials/ folder (I'll get things consistent at some point).If the original filename had a .mp4 filename, then the copied target file must have a .m4v filename.After copying the new file over, the original file can be removed (leaving the copy source and target files).If I was doing this in NodeJS I'd do something like the following:Get a sorted list of TV Show folders (earlier shows don't generally have years while later ones do).For each folder regex replace end years (/-\d*\)/ to to include the start year only, and case insensitive match against the Plex folders. If no match then try with no year.If no match found go back to 2 and try again.Get a list of Plex version files that match the pattern /S\d+E\d+\.mp4/ (there can be subtitles included from when I just ripped everything and didn't filter the streams).For each file get the season number, then search the Season x folder for the same prefix. If it's season 0 then also search the Specials folder.Replace the extension on the matched name with .mp4, if no replacement done then replace it with .m4v.Copy the Plex file to the new path+name.Remove the matched file, then go back to 5.Now variables for names I can do, same for the obvious unlink - what I'd like help with is -Best way to make the initial folder match (is 1-3 the best way, or is there a better shortcut? Looking at this one: Copy files to a destination folder only if the files already exist. but the source files have a different file extension )Checking if a file exists and try different extension combinations if not.edit: There's over 30tb of files, and Plex does not replace original files (there's been an open request for it since 2015).
Move partially matching files that already exist (Plex transcoder)
scripting
null
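The record above has no accepted answer. For the narrow sub-question of checking whether a file already exists and trying different extension combinations if not, a minimal bash sketch (the path is the example name from the question; the extension order is an assumption):

```bash
target_base="My TV Show (1990-1995)/Season 1/s01e01 My Episode Name"
existing=""
# Try each candidate extension until a matching file is found.
for ext in mkv mp4 m4v avi; do
    if [ -e "${target_base}.${ext}" ]; then
        existing="${target_base}.${ext}"
        break
    fi
done
[ -n "$existing" ] && echo "found match: $existing"
```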
_softwareengineering.1785
Please, stay on technical issues, avoid behavior, cultural, career or political issues.
What should every programmer know about programming?
experience;self improvement;programming practices
null
_codereview.114409
I wanted a convenient class to easy access the parameters inside. I am using a lot of math in my game; that's why a wanted to access them through x, y and z (for readability). And this is my outcome.It is a class which lies on top of numpy array. It basically means that I can control a numpy array trough my class. And using x, y and z for accessing the parameters instead of using indices ([0], [1] or [2]).For that I overwrite the built in functions.Example:a = numpy.array([1,1,1], dtype = float32)b = a[0]toa = Vector3(1,1,1)b = a.xSo what I want to know is if this is a good way (I wanted a fast written implementation). Are there any performance issues and is it well written?import numpy as npfrom numbers import Numberclass Vector3(object): def __init__(self, x = 0, y = 0, z = 0, dtype = float32): self._data = np.array([x,y,z], dtype = dtype) self.x = self._data[0] self.y = self._data[1] self.z = self._data[2] def __str__(self): return str(Vector3({0.x},{0.y},{0.z}).format(self)) def __mul__(self, value): if isinstance(value, type(self)): result = self._data * value._data elif isinstance(value, Number): result = self._data * value return type(self)(x = result[0], y = result[1], z = result[2]) def __rmul__(self, value): return self.__mul__(value) # Kommutativgesetz/commutative def __add__(self, value): if isinstance(value, type(self)): result = self._data + value._data elif isinstance(value, Number): result = self._data + value return type(self)(x = result[0], y = result[1], z = result[2]) def __radd__(self, value): return self.__add__(value) # Kommutativgesetz/commutative def __sub__(self, value): if isinstance(value, type(self)): result = self._data - value._data elif isinstance(value, Number): result = self._data - value return type(self)(x = result[0], y = result[1], z = result[2]) def __rsub__(self, value): if isinstance(value, type(self)): result = value._data - self._data elif isinstance(value, Number): result = value - self._data return type(self)(x = result[0], y = result[1], z = result[2])#Testif __name__ == __main__: a = Vector3(5,5,5) b = Vector3(2,4,3) c = a * b d = 2 * a e = a * 2 f = a + b g = a + 2 h = 2 + a i = a - b j = 2 - a k = a - 2 print(c,d,e,f,g,h,i,j,k, sep = \n)
A 3-D vector class built on top of numpy.array
python;performance;object oriented;numpy;coordinate system
Your code and text mismatch. You state that you want to use x, y and z, but you don't actually do it. You only use them in the initializer, and then you use them as named parameters when you recreate your object when returning the value. So your code kind of obfuscates that you are working on a numpy array.Simplify overridden functionsYou could also simplify your code some as almost all of the functions does the same:OPERATIONS = { add : numpy.add, mul : numpy.multiply, sub : numpy.subtract, ...}def _apply_operation(self, value, operation): if isinstance(value, type(self)): result = OPERATIONS[operation](value._data, self._data) elif isinstance(value, Number): result = OPERATIONS[operation](value, self._data) return type(self)(*result)And this could be called like:def __add__(self, value): return self._apply_operation(value, add)Another question is why do you use if ... elif, without any else. This does leave for potentially cases where neither of them hit, and this could lead to the result being undefined when returning.If however you actually mean else instead of elif you simplify the calculation to:result = OPERATIONS[operation](value._data if isinstance(value, type(self)) else value, self._data)Or possibly even loose the result and do:return type(self)(*OPERATIONS[operation](value._data if isinstance(value, type(self)) else value, self._data)Consider adding/changing __repr__ and __str__In Python it's normal to add spaces after commas, something your __str__ doesn't do. I would change the __str__ and add a __repr__ method like the following:def __str__(self): return str('!r'.format(self))def __repr__(self): return 'Vector3({0}, {1}, {2})'.format(*self._data)Implement a better test schemeYour tests are executed but you only have a visual test of them. A much better option would be to use doctest. This would allow you to write tests in the header of each function, and you could be sure they all works as expected.Something like the following:def __add__(self, value): Adds the value into self, and returns result as Vector3. >>> Vector3(1, 2, 3) + Vector3(2, 4, 6) Vector3(3.0, 6.0, 9.0) >>> Vector3(1, 2, 3) + 5 Vector3(6.0, 7.0, 8.0) return self._apply_operation(value, add)And then in your main code you could simply do:doctest.testmod()If all tests pass, you'll see nothing, and it they fail, you'll see why the fail and both the expected and actual results.PropertiesIf you want to use x, y and z as aliases into self._data you could use properties which changes self._data like the following:@propertydef x(self): Property x is first element of vector. >>> Vector3(10, 20, 30).x 10.0 return self._data[0]@x.setterdef x(self, value): self._data[0] = value...if __name__ == '__main__': doctest.testmod() a = Vector3(1, 2, 3) a.x = 10.0 print('a = {}, a.z = {}'.format(a, a.x))Which would output:a = Vector3(10.0, 2.0, 3.0), a.z = 10.0Do however remove the setting of self.x, self.y and self.z from __init__ so you don't have conflicting data variables.
_cseducators.2808
Why do some instructors delay teaching mutation due to considering it to be a more difficult concept? (than functional or recursive concepts, etc.)It is very likely that, back in the 8-bit PC days, many thousands of 8 to 12 year olds learned to code in Tiny Basic or other street BASIC implementations (where there weren't even local variables or other easy recursion or functional semantic support). How could these kids do so if variable mutation was a difficult concept?Or is the spaghetti code they often produce a result of not really understanding the concept itself? (rather than just the software engineering downsides)
Why would mutation be considered by some as a difficult concept to grasp?
curriculum design;variables
null
_webapps.37917
I was looking on a person's resume and he has put his email like this: [email protected]. I am talking about that additional +resume.I want to know:How these type of email works?How do I create email like that with my present Gmail address?Additionally I tried creating an email address with + in between, but gmail only allows letters, numbers and periods.
How to create email address in format of [email protected]?
gmail;email
Anything after the + sign in the email prefix is generally ignored by mail servers and resolves and sends to the address without the [+word]. It's a way to easily track who sends you email.In your example, [email protected] would receive the email, and +resume would be ignored, but the recipient ([email protected]) will receive an email with [email protected] in the recipient (TO) list.See Gmail reference: Using an address alias
_unix.117676
In the process of building one library (Webdriver) I got the following error:Package ibus-1.0 was not found in the pkg-config search path.Perhaps you should add the directory containing `ibus-1.0.pc'to the PKG_CONFIG_PATH environment variableNo package 'ibus-1.0' foundIt seems to be because of the following line in source code of Webdriver:pkg-config ibus-1.0 --libsthat produces the same output when I run it.So I installed ibus using installation instructions from its website:sudo apt-get install ibus ibus-clutter ibus-gtk ibus-gtk3 ibus-qt4But I still get the same output after invoking pkg-config ibus-1.0 --libs. Should I install ibus 1.0 to build that library? If yes, where can I find it? It doesn't seem to be present in ibus's downloads list?My OS is Ubuntu 13.04
Should I install ibus-1.0 to build Webdriver?
compiling;libraries;pkg config
If you need it for a build, then you need the #include headers as well. These, and the pkgconfig files, are not in the normal packages because they don't serve any purpose outside of compiling. Instead, they are included in seperate -dev packages which you can install if you want to build something which must be compiled against whatever library.It looks to me (on Debian) like the package you want is libibus-1.0-dev.
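A short usage sketch following the answer above (the package name libibus-1.0-dev is taken from the answer and applies to Debian/Ubuntu):

```bash
sudo apt-get install libibus-1.0-dev
pkg-config ibus-1.0 --libs    # should now print linker flags instead of "No package 'ibus-1.0' found"
```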
_scicomp.8602
What architecture are current computers based on?The Princeton Architecture or the Harvard Architecture? Some notes I found online state the Princeton architecture, but this creates the Princeton bottleneck. The Harvard architecture was suppose to alleviate this. Have all systems transferred to a more Harvard approach?
What architecture are current computers based on?
architecture
null
_softwareengineering.293388
In my web application that uses an MVC framework which has different modules for models, views and controllers I talk to several databases and APIs. Those are implemented as individual models. A lot of data gets entered by the user accross several different screens. That data goes into the session. Some of that data is meta information that validates the success of the process, and some of it is stuff that I want to keep. That data is supposed to be written to a file in the file system, and the path of that file with some meta-information is stored in a database. After that, a confirmation page will be displayed to the user.I am now struggling with where to put the writing of the subset of the data that has accumulated in the session to the file in my specific format.There are several thoughts I have on the matter. I am not sure which one is the most right.It should be a View because it takes data that is already there in the user's session and presents it in a specific way – the file format (which is XML, but that is not relevant). Formatting and writing the file is implemented there.Reasoning: The default HTML view that renders data as a website to the user's browser has the webserver's interface set as its STDOUT channel. Likewise would a JSON view that presents stuff in case an API call is made. If we write to a file, the STDOUT of the view that formats the file is set to a local file handle and we write there.There should be a View to bring the data into my XML format, but a Model to write the data into the file.Reasoning: Because after writing the file, another website will be displayed. Only one view can be at the end of the chain of things that do stuff in the lifetime of one request handling. But because data is being formatted, a view is appropriate. It just returns the formatted data rather than writing it to a sink (STDOUT).It should be a Model, because it deals with data.Reasoning: Models are data sources and data sinks. Although the source-part is absent here there is still a data sink. The fact that it needs to be formatted also is neglegible because if we'd talk to e.g. a RESTful API of some sort we would also first format the data to be either part of a GET request (which is very simple) or maybe a JSON representation as the body of a POST/PUT request.The code that formats the data into XML is already built as a stand-alone class that is not tied to the webapp yet.My question is this: Where in the application should I use that class and write the file to disk so I do not break the MVC pattern?Example cases where this specific process would happen include:a user feedback/survey form with multiple pages, like the Stack Overflow anual user survey,an e-commerce order funnel,back-office data entry that talks to a third party through a file-based API where the sending happens asynchronously at a later, unrelated time
Where in an MVC web application should writing files locally go?
web applications;mvc;file handling
null
_unix.206765
I am trying to copy a directory from one system to another with rsync, but it erred. Can you please advise me on what to check?

Command:

    [root@aMachine ~]# rsync -e ssh -c blowfish -v -a /home/aDir/ [email protected]:/home/aDir

Output:

    ssh_dispatch_run_fatal: no matching cipher found
    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
    rsync error: error in rsync protocol data stream (code 12) at io.c(605) [sender=3.0.9]

I tried a couple of attempts in /etc/ssh_config, but to no avail. This is what I have in my /etc/ssh_config now:

    # Host *
    # ...
    # ...
    # Port 22
    # Protocol 2,1
    # Cipher 3des
    # Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
    # MACs hmac-md5,hmac-sha1,[email protected],hmac-ripemd160
    IdentityFile /root/.ssh/identity
    IdentityFile /root/.ssh/id_rsa
    IdentityFile /root/.ssh/id_dsa
    IdentityFile /etc/static_keys/CURRENT
no matching cipher found error in `rsync`
ssh;rsync;encryption
You are trying to force the use of the blowfish cipher (ssh -c blowfish). According to your configuration, this cipher is not available (the SSH configuration, by default, shows the default configuration settings as comments).Unless you have any compelling reason to do so (none was mentioned in your post), do not force the use of a particular cipher.Notice also that you usually do not have to fiddle with /etc/ssh/ssh_config. As a user, it's easier to modify $HOME/.ssh/config. You are working on this system as a non-root user, aren't you?
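A usage sketch of the answer's suggestion, i.e. the same transfer without forcing a cipher (user@host is a placeholder, since the address in the question is redacted):

```bash
rsync -v -a -e ssh /home/aDir/ user@host:/home/aDir
```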
_softwareengineering.341920
Imagine the following git repository://ProductA/product_a_main.py/ProductB/product_b_main.py/SharedLibrary/shared_library.pySharedLibrary is used by both ProductA and ProductB.ProductA and ProductB have different release schedules.Any advice on how to handle branching in this scenario? I imagine one model would be to have two master branches: master_a and master_b.
One huge git repository: How to handle releases and branching?
git;branching
Keeping the three projects in separate repositories will help release management (as you have noticed), but will be nice for development. A developer working on the shared library doesn't need to know a thing about ProductA or ProductB, which will help keep it at a good level of abstraction (especially if you end up creating a ProjectC). Developers working on the main projects don't need to know about the other project, and will be encouraged to avoid changes to the shared library as a quick hack to get something working. I've seen some really bad API design when a new method is added every time a client developer wants a change. Project developers can also more easily keep the library up to date without pulling changes from their project, or keep the library on an old version when debugging changes.If you have a good test environment and use continuous integration, keeping the repos separate will also make it easier to run your tests. When you make a change to ProjectA, you don't want to run all of the tests from ProjectB and SharedLibrary.The biggest potential issue I can see is code breaking when the project and library versions are mismatched. That would be avoided by keeping them in one repository (so changes to both can be made in the same commit). If the projects and library are changing rapidly together, it might be worth it to keep them in one repo. As you mentioned, Google and Facebook use this strategy since they have lots of developers working across projects and libraries. However, they also invest in lots of powerful build and test servers, and need changes to propagate as quickly as possible. If a developer improves the efficiency of MapReduce, the costs of waiting for every other team to pull that change are pretty big.If you want to keep everything in sync at all times, and are ok with managing build/test times, then one repository is all you need. If changes to the library will generally not break project code, changes to projects will not usually require changes to library code, and you want to keep things simple, then go for separate repositories. It's easier to merge repositories than split them apart, so as a rule of thumb I would recommend starting with separate repositories until you run into problems. One thing to note is that git's submodule system can be problematic. I've never worked with it, but this guy has some complaints. This guy has some advice on deciding whether or not to use git submodules that's a little more balanced, but still cautious.
_unix.199214
With the following command, I can list all the installed kernel packages:

    $ dpkg -l | grep linux-image

With the following command, I get the version of the kernel currently in use:

    $ uname -r

However, what I need is just to display in a terminal the Debian package name corresponding to the currently loaded kernel. As multiple package names can have the same version, it is difficult to uniquely identify a particular kernel with the previous commands. So... do you have an idea how to get the package name of the current kernel?
Debian: How to get the current loaded kernel package name?
shell;debian;linux kernel
Use this:

    $ dpkg --get-selections | grep -o "^linux-image-$(uname -r)"
    linux-image-3.13.0-32-generic

or

    $ dpkg -l | grep -o "linux-image-$(uname -r)"
    linux-image-3.13.0-32-generic

EDIT: If you have multiple versions of the same kernel release, run the following bash script:

    #!/bin/bash
    rel=$(uname -r)
    ver=$(uname -v)
    current="${rel%-*}.${ver:1:2}"
    echo $(dpkg -l | grep -Po "linux-image-${rel}(?=\s+${current})")
_unix.110716
I just setup the OpenVPN and it is working as expected. However, the routing table of the client is confusing me to no end. Here is the route table:# route -nKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface10.8.0.5 0.0.0.0 255.255.255.255 UH 0 0 0 tun010.8.0.1 10.8.0.5 255.255.255.255 UGH 0 0 0 tun054.202.18.143 10.0.2.2 255.255.255.255 UGH 0 0 0 eth010.0.2.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth00.0.0.0 10.8.0.5 128.0.0.0 UG 0 0 0 tun0128.0.0.0 10.8.0.5 128.0.0.0 UG 0 0 0 tun00.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0So lets dissect it line-by-lineAny packet destined for 10.8.0.5 has no gateway and will use tun0any packet destined for 10.8.0.1 will use 10.8.0.5 as gateway via tun0any packet destined for 54.202.18.143 will use 10.0.2.2 as gateway via eth0any packet destined for 10.0.2.0/24 has no gateway and will use eth0Lets ignore the 169.254.0.0 partAll other packets (destined for 0.0.0.0) will us 10.8.0.5 as default gateway via tun0. So this is default gateway, isn't it?Any packet destined for 128.0.0.0/7 will use 10.8.0.5 as default gateway via tun0All other packets (0.0.0.0) will use 10.0.2.2 as default gateway via eth0Questions:Do I have 2 default gateways if we consider point 6 and 8? (there can be only 1 Default Gateway though, so I know I am wrong but can't justify) (probably answered, see below)Considering point 1 and 2, anything going for 10.8.0.1 is not really using any gateway via tun0. Is this correct?Considering point 3 and 4, anything going for 54.202.18.143 is not really using any gateway via eth0. Is this correct?UPDATE...After reading this, I found some more information. The below lines makes a lot of sense to me now:0.0.0.0 10.8.0.5 128.0.0.0 UG 0 0 0 tun0128.0.0.0 10.8.0.5 128.0.0.0 UG 0 0 0 tun0So, the 1st line is defining 0.0.0.0/128.0.0.0 and second one is defining 128.0.0.0/128.0.0.0. Essentially:0.0.0.0/128.0.0.0 = 0.0.0.0/1 = 0.0.0.0 TO 127.255.255.255128.0.0.0/128.0.0.0 = 128.0.0.0/1 = 128.0.0.0 TO 255.255.255.255So, above 2 routes are covering the entire IPv4 Address range [0.0.0.0 TO 255.255.255.255]. It is a clever way of OpenVPN to add a default route without replacing the original default route and this default route will be routed via tun0.So I think I have an answer for my first question:Do I have 2 default gateways if we consider point 6 and 8?NO, there is only one default gateway and that is :0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
how to understand the routing table on an OpenVPN client
linux;networking;routing;openvpn
null
_softwareengineering.144481
I have been looking around on the internet and have not found a good answer yet. If anyone could be so kind and supply a code sample I would greatly appreciate it. I apologize for the simplicity of the question.
What is sequential code execution?
java
This is the simplest one of all: it's when your instructions are executed in the same order that they appear in your program, without repeating or skipping any instructions from the sequence.

For example, this is sequential execution:

    int a = 5;
    int b = 12;
    int c = a*a + b + 7;

On the other hand, this is not sequential execution, because one instruction is going to be skipped.

    int a = 5;
    int b = 12;
    int c;
    if (a > b) {
        c = a;
    } else {
        c = b;
    }
_unix.139802
I've installed Linux Mint and Manjaro Linux on my computer. I installed only the Linux mint on the MBR. For Manjaro, I created a /boot/efi partition, but I have not checked to install to MBR.So, I am controlling grub from mint. Now, when I try to boot Manjaro, it shows :ERROR: resume: no device specified for hibernation: performing fsck ondev/sda11 /dev/sda11: clean 1727/915712 files, .... blocksWARNING: The root device is not configured to be mounted read-write!Itmay be fsck'd again later:mounting /dev/sda11 on real boot running cleanup hook [udev]ERROR: Root device mounted successfully, but /sbin/init does not exist.sh:can't access tty; job control turned off[rootfs /]#After the shell prompt, I can't write anything. It hangs, or sometimes it shows me messages continuously like :usb 3-3: device not accepting address 2, error -62and so on...I tried to add init=/usr/lib/systemd/systemd to grub, as I saw in google, but still the same.I must note that for the Manjaro installation I am using a separate partition for / and for /usr and for /var.This maybe have an influence? As I saw here .But the problem is that I can't write anything, it hangs.I also found a comment on a blog post here that states:If you keep /usr as a separate partition, you must adhere to the following requirements: - Add the shutdown hook. The shutdown process will pivot to a saved copy of the initramfs and allow for /usr (and root) to be properly unmounted from the VFS. - Add the fsck hook, mark /usr with a passno of 0 in /etc/fstab. While recommended for everyone, it is mandatory if you want your /usr partition to be fscked at boot-up. Without this hook, /usr will never be fsckd. - Add the usr hook. This will mount the /usr partition after root is mounted. Prior to 0.9.0, mounting of /usr would be automatic if it was found in the real roots /etc/fstab.And never forget to run mkinitcpio -p linux every time after you make changes to mkinitcpio.conf to actually create the new images and get them the right place. That sounds promising since my /usr is indeed on a separate partition. 
What are these hooks and how do I add them?parted -l:Model: ATA TOSHIBA MQ01ABD0 (scsi)Disk /dev/sda: 750GBSector size (logical/physical): 512B/512BPartition Table: gptNumber Start End Size File system Name Flags 1 1049kB 1075MB 1074MB ntfs Basic data partition hidden, diag 2 1075MB 1347MB 273MB fat32 Basic data partition boot 3 1347MB 1482MB 134MB ntfs Basic data partition msftres 4 1482MB 80,1GB 78,6GB ntfs Basic data partition msftdata 5 80,1GB 80,4GB 262MB ext4 6 80,4GB 90,4GB 10,0GB ext4 msftdata 7 93,0GB 102GB 9000MB ext4 msftdata 9 102GB 106GB 3999MB linux-swap(v1)10 106GB 106GB 250MB fat32 boot11 106GB 121GB 15,0GB ext4 msftdata12 121GB 151GB 30,0GB ext4 msftdata13 151GB 165GB 14,0GB ext4 msftdata14 165GB 206GB 40,9GB ext4 msftdata 8 206GB 743GB 537GB ext4 msftdata15 743GB 747GB 4000MB linux-swap(v1) msftdatagrub:menuentry 'Linux Mint 17 Cinnamon 64-bit, 3.13.0-24-generic (/dev/sda5)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt5' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt5 --hint-efi=hd0,gpt5 --hint-baremetal=ahci0,gpt5 19af2e09-8946-4ca2-9655-75921f3609a5 else search --no-floppy --fs-uuid --set=root 19af2e09-8946-4ca2-9655-75921f3609a5 fi linux /vmlinuz-3.13.0-24-generic root=UUID=9356f543-f391-4ba5-9dcc-e8484d6935e0 ro quiet splash $vt_handoff initrd /initrd.img-3.13.0-24-generic}menuentry 'Linux Mint 17 Cinnamon 64-bit, 3.13.0-24-generic (/dev/sda5) -- recovery mode' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt5' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt5 --hint-efi=hd0,gpt5 --hint-baremetal=ahci0,gpt5 19af2e09-8946-4ca2-9655-75921f3609a5 else search --no-floppy --fs-uuid --set=root 19af2e09-8946-4ca2-9655-75921f3609a5 fi echo 'Loading Linux 3.13.0-24-generic ...' linux /vmlinuz-3.13.0-24-generic root=UUID=9356f543-f391-4ba5-9dcc-e8484d6935e0 ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /initrd.img-3.13.0-24-generic}menuentry 'Manjaro Linux (0.8.10) (on /dev/sda11)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-simple-95ed019d-9269-4869-9f99-a03f002a53c6' { insmod part_gpt insmod ext2 set root='hd0,gpt11' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt11 --hint-efi=hd0,gpt11 --hint-baremetal=ahci0,gpt11 95ed019d-9269-4869-9f99-a03f002a53c6 else search --no-floppy --fs-uuid --set=root 95ed019d-9269-4869-9f99-a03f002a53c6 fi linux /boot/vmlinuz-312-x86_64 root=/dev/sda11 initrd /boot/initramfs-312-x86_64.img}submenu 'Advanced options for Manjaro Linux (0.8.10) (on /dev/sda11)' $menuentry_id_option 'osprober-gnulinux-advanced-95ed019d-9269-4869-9f99-a03f002a53c6' { menuentry 'Manjaro Linux (0.8.10) (on /dev/sda11)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-312-x86_64--95ed019d-9269-4869-9f99-a03f002a53c6' { insmod part_gpt insmod ext2 set root='hd0,gpt11' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt11 --hint-efi=hd0,gpt11 --hint-baremetal=ahci0,gpt11 95ed019d-9269-4869-9f99-a03f002a53c6 else search --no-floppy --fs-uuid --set=root 95ed019d-9269-4869-9f99-a03f002a53c6 fi linux /boot/vmlinuz-312-x86_64 root=/dev/sda11 initrd /boot/initramfs-312-x86_64.img }}
ERROR: Root device mounted successfully, but /sbin/init does not exist
boot;systemd;manjaro
As @Leiaz very correctly pointed out in the comments, /sbin in Arch (and by extension, Manjaro) is now a symlink to /usr/bin. This means that unless /usr is mounted, /usr/sbin/init will not exist. You therefore need to make sure that /usr is mounted by the initial ramdisk. That's what the Arch wiki quote in your OP means:If you keep /usr as a separate partition, you must adhere to the following requirements:Enable mkinitcpio-generate-shutdown-ramfs.service or add the shutdown hook.Add the fsck hook, mark /usr with a passno of 0 in /etc/fstab. While recommended for everyone, it is mandatory if you want your /usr partition to be fsck'ed at boot-up. Without this hook, /usr will never be fsck'd.Add the usr hook. This will mount the /usr partition after root is mounted. Prior to 0.9.0, mounting of /usr would be automatic if it was found in the real root's /etc/fstab.So, you need to generate a new init file with the right hooks1. These are added by changing the HOOKS= line in /etc/mkinitcpio.conf. SoBoot into Mint and mount the Manjaro / directory:mkdir manjaro_root && sudo mount /dev/sda11 manjaro_rootNow, Manjaro's root will be mounted at ~/manjaro_root.Edit the mkinitcpio.conf file using your favorite editor (I'm using nano as an example, no more):sudo nano ~/manjaro_root/etc/mkinitcpio.confFind the HOOKS line and make sure it contains the relevant hooksHOOKS=shutdown usr fsckImportant : do not remove any of the hooks already present. Just add the above to those there. For example, the final result might look likeHOOKS=base udev autodetect sata filesystems shutdown usr fsckMark /usr with a passno of 0 in /etc/fstab. To do this, open manjaro_root/etc/fstab and find the /usr line. For this example, I will assume it is /dev/sda12 but use whichever one it is on your system. The pass number is the last field of an /etc/fstab entry. So, you need to make sure the line looks like/dev/sda12 /usr ext4 rw,errors=remount-ro 0 0 ^ This is the important one -----|Create the new init image. To do this, you will have to mount Manjaro's /usr directory as well. sudo mount /dev/sda12 ~/manjaro_root/usrI don't have much experience with Arch so this might not bee needed (you might be able to run mkinitcpio without a chroot) but to be on the safe side, set up a chroot environment:sudo mount --bind /dev ~/manjaro_root/dev && sudo mount --bind /dev/pts ~/manjaro_root/dev/pts && sudo mount --bind /proc ~/manjaro_root/proc && sudo mount --bind /sys ~/manjaro_root/sys &&sudo chroot ~/manjaro_rootYou will now be in a chroot environment that thinks that ~/manjaro_root/ is actually /. You can now go ahead and generate your new init imagemkinitcpio -p linuxExit the chrootexitUpdate your grub.cfg (again, this might not actually be needed):sudo update-grubNow reboot and try booting into Manjaro again.1 Hooks are small scripts that tell mkinitcpio what should be added to the init image it generates.
_unix.73262
Is it good practice to create a directory in /run/shm (formerly /dev/shm) and use that like a temp directory for an application?Background: I am writing black box tests for a program which does a lot of stuff with files and directories. For every test I create a lot of files and directories and then run the program and then create the expected set of files and directories and then run diff to compare. I now have about 40 tests and they are already taking over 2 seconds to run. Hoping to speed things up I want to run the tests in a directory on some sort of ramdisk.Researching about ram disk I stumbled upon a question with an answer stating that it is okay to create a directory in /dev/shm and use that like a temp directory. Researching some more however I stumbled upon a wiki page from debian stating that it is an error to use /dev/shm directly. I should use the shm_* functions. Unfortunately the shm_* functions seem to be not available for use in a shell script.Now I am confused. Is it okay or not to use /run/shm (formerly /dev/shm) like a temp directory?
use `/run/shm` (formerly `/dev/shm`) as a temp directory
directory structure;tmpfs;shared memory
It's perfectly okay to use some directory in /run as long as you have the appropriate rights on it. In some modern distros, /tmp is already a virtual file system in memory or a symlink to a directory inside /run. If this is your case (you can check that in /etc/fstab, or typing mtab), you could use /tmp as your temporary directory.Also, don't get confused with the article from Debian. shm_* functions are used to create shared memory segments for Inter-Process Communication. With those functions, you can share a fragment of memory between two or more processes to have them communicate or collaborate using the same data. The processes have the segment of memory attached in their own adressing space and can read and write there as usual. The kernel deals with the complexity. Those functions are not available as shell functions (and wouldn't be very useful in a shell context). For further information, have a look at man 7 shm_overview. The point of the article is that no program should manage directly the pseudo-files representing shared segments, but instead use the appropriate functions to create, attach and delete shared memory segments.
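For instance, a quick way to check this and use the tmpfs from a test script might look like the following (a rough sketch; run-tests.sh stands in for your own test runner and the mount points are whatever your distribution uses):

df -T /tmp /dev/shm                     # shows the filesystem type; look for tmpfs
tmpdir=$(mktemp -d /dev/shm/tests.XXXXXX)
TMPDIR=$tmpdir ./run-tests.sh           # point the tests at the scratch directory
rm -rf "$tmpdir"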
_unix.274304
I use RPZ to blacklist some domains and my configuration looks like:*.com A 127.0.0.1 mydomain.net A 127.0.0.1if i query a whatever domain .com it works correctly giving me 127.0.0.1let's dig fun.com @localhost, my reply will be:;; ANSWER SECTION:fun.com. 5 IN A 127.0.0.1now let's edit the previous config and make my zone now look like:*.com A 127.0.0.1 mydomain.net A 127.0.0.1this.fun.com 127.0.0.1It's unnecessary because the master *.com should cover all the cases however I have my domains loaded by multiple sources so the list is compiled automatically and things like this can happen.While this seems to be an harmless change and if I do dig this.fun.com @localhost it will reply again stuff like:;; ANSWER SECTION:this.fun.com. 5 IN A 127.0.0.1If I now query the root domain dig fun.com @localhost I will get:;; ANSWER SECTION:fun.com. 86400 IN A 209.61.131.188Like.. WHAAT? What happened here? adding this.fun.com masked out fun.com main domain from the upper omni-inclusive *.com?Is this a wanted behaviour of bind? Did I found some kind of weird bug?How can avoid this? Should I write a script that recurse all the domains removing the ones contained into the bigger ones? (annoying but doable - in search of better alternatives)TL;DR: Add of a 3rd level domain in bind rpz in order to BLACKLIST IT make the 2nd level domain not follow the main FILTER resulting WHITELISTED.
Bind RPZ config with domains of various levels
linux;dns;bind;bind9;rpz
null
_softwareengineering.61980
I am sure that this question will piss off some people, but that is not my intent. We are all in the same boat - I will be subject to it one day as well.According to Milton Friedman's view, who was not a theoretician, discrimination in the work place can only go so far, for there will be employers out there willing to pick up the overlooked talent based on their productivity alone, and those who base their hiring/salary decisions on wrong perceptions will be taken care of by smarter competitors. Starting own business is a form of competition.Age is obviously a huge factor in sports or a job that requires very hard manual labor. What about in software industry? Ageism does exist (or does it not?), but why?Some straight questions:Are corporations inherently evil and like to mistreat people just because?Are employers stupid / unorganized because they still liken software to construction industry?Are older folks less productive?Are they not willing to work crazy hours?Do they demand wages that are too high?Does it come down to hormones and primal instincts? In monkey societies testosterone is everything. What about in code monkey societies?Is ageism a myth after all?Do only the lazy ones (those who do not keep up) get lower salaries?Is it not about one's age but about having family and kids or not having it, thus influencing how much time on can spend keeping up with stuff?Do employers want to pay young people more because they like the way they look?Other?Are my questions not very relevant? If so, then why?I myself am not married yet, but I do not like to work extra hours. I do find some time to read up on things, but I have other interests as well. At the same time it is hard for me to compare my abilities to that of others of the same age; I have met both geniuses and dummies. I also do not really know how much other programmers, other than a couple of my friends are making. Even if I had a lot of data, how do I prove strictly the presence of ageism and the extent of it?Finally, what are some good ways for an individual contributor to maintain good salary level through older years?Thank you for your feedback.
Is ageism in software development based on anything other than bias?
career development
null
_unix.367845
My working system is in a remote location and its OS is Unix, and I need to get files from that remote Unix system to my local Windows machine using bash commands. I tried this command after connecting to my remote system with PuTTY:

scp ls .txt* D:\BACKUP

but it is not working. Can you please help me out?
How to copy the files from remote unix server to local windows?
linux;bash;shell;ssh;scp
assuming that you have scp installed on your Windows (e.g. using Windows 10 bash)

The proper command is:

scp remote_username@remote_hostname.com:/full/path/to/file local_file_name

If you don't have scp installed on your Windows you can install winscp. You can use winscp to download using sftp (see instructions).

Connecting

Start WinSCP. The Login Dialog will appear. On the dialog:

Select your File protocol. When you are about to use the FTPS protocol, select FTP and then choose one of the FTPS invocation methods.
Enter your host name in the Host name field, username in User name and password in Password.
Press Login to connect.
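As a concrete sketch: from the Windows 10 bash shell (WSL), drive D: is normally visible as /mnt/d, so copying a remote file into D:\BACKUP could look roughly like this (remote_user, remote_host and the file path are placeholders for your own values):

scp remote_user@remote_host:/home/remote_user/report.txt /mnt/d/BACKUP/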
_softwareengineering.153318
I am working on a project that has around a hundred different files (.cpp & .h), and it requires around an hour to build the entire project on MSVC 2008, suppose that I now make a change to any one file, do I need to build the entire project once again, right from the beginning, or is there a way out ?I know it is very very noob like, but I am wasting a lot of time, hence I was wondering
Is a new build required everytime I make a change to the code?
development process;visual studio;builds
The short answer: It depends.The long answer: It depends on your source code dependencies.For example, if every class knows about every other class in the project, then every time you make a change to one, you probably have to recompile the other.It also depends on linkage. If you link everything into one binary file, it's also likely to compile everything again. A better way would be to separate to different shared libraries (DLLs), or at the very least into separate object (.o) files.But then (in the latter case) you might need to tinker with it to see that it doesn't recompile all object files when you just need to recompile one and link them all together again.But then again, maybe the linking process takes a while too. You basically need to experiment with that.On a final note... An hour for building ~100 files? That's way, way too much, unless you're running on a really old machine. Or unless each file contains tens of thousands of lines.
_cogsci.4445
Examples:Imagine coming to SE tomorrow and instead of seeing 1 2 3 4 5 ... 77 next at the bottom of the questions page, you see a giant circle with a triangle inside pointing to the right and you no longer have the option of going from page 1 to page 5 in one click, but you must now click the big circle incrementally instead.Or imagine your favorite weather website having a 10-day forecast horizontally across the screen like a calendar would have, then one day it changes to a vertical alignment because someone decided it was time for a change.Questions:Why do people do this? And is it normal to feel extremely frustrated by this? I mean, when something works well and makes sense, leave it alone?. Why change stuff just to change stuff? And especially when the change is less functional or intuitive.
Why are websites constantly updated even when the change seemingly reduces functionality?
cognitive psychology;human factors
null
_unix.163516
Environment$ cat /etc/*-releaseCentOS release 6.5 (Final)$ openssl versionOpenSSL 1.0.1e-fips 11 Feb 2013This works$ tr -dc A-Za-z0-9 </dev/random | head -c 24 > ${f_host_passphrase}$ echo -e \n >> ${f_host_passphrase}$ openssl genpkey ... -pass file:${f_host_passphrase} -out ${f_host_key}$ openssl req ... -key ${f_host_key} -passin file:${f_host_passphrase} \ -out ${f_host_req}$ openssl ca ... -in ${f_host_req} -out ${f_host_cert}$ openssl pkcs12 \ -export \ -inkey ${f_host_key} \ -passin pass:$(cat ${f_host_passphrase}) \ -in ${f_host_cert} \ -name ${l_ds_cert_name} \ -password file:${f_host_passphrase} \ -out ${f_host_p12}...$ pk12util -i ${f_host_p12} \ -w ${f_host_passphrase} \ -d ${l_sql_prefix}${d_nssdb} \ -k ${f_host_passphrase}pk12util: PKCS12 IMPORT SUCCESSFULThe full, functional script is here. I wrote the script for rapid testing because, it turns out, this next variation fails.This fails$ tr -dc A-Za-z0-9 </dev/random | head -c 24 > ${f_host_passphrase}$ echo -e \n >> ${f_host_passphrase}$ openssl genpkey ... -pass file:${f_host_passphrase} -out ${f_host_key}$ openssl req ... -key ${f_host_key} -passin file:${f_host_passphrase} \ -out ${f_host_req}$ openssl ca ... -in ${f_host_req} -out ${f_host_cert}$ openssl pkcs12 \ -export \ -inkey ${f_host_key} \ -passin file:${f_host_passphrase} \ -in ${f_host_cert} \ -name ${l_ds_cert_name} \ -password file:${f_host_passphrase} \ -out ${f_host_p12}...$ pk12util -i ${f_host_p12} \ -w ${f_host_passphrase} \ -d ${l_sql_prefix}${d_nssdb} \ -k ${f_host_passphrase}pk12util: PKCS12 decode not verified: SEC_ERROR_BAD_PASSWORD: The security password entered is incorrect.Why?The only difference between the two variations (and the failed openssl pkcs12 command is a comment block in the script to which I linked) is how I pass the passphrase file to the openssl pkcs12 command.If I send the passphrase as -passin pass:$(cat ${f_host_passphrase}), the following pk12util command succeeds.If I send the passphrase as -passin file:${f_host_passphrase}, the openssl pkcs12 command still succeeds, but the pk12util command fails.My guess is that the openssl pkcs12 command is parsing something as the password from the -passin file:${f_host_passphrase} argument. Just not what the rest of world is expecting to use...
OpenSSL 1.0.1e usage bug?
openssl
null
_unix.269528
This is my current users expression:

users.users = {
  john = {
    name = "john";
    group = "users";
    extraGroups = [ "wheel" "disk" "audio" "video" "networkmanager" "systemd-journal" ];
    isNormalUser = true;
    uid = 1000;
    home = "/home/john";
    createHome = true;
  };
};

My problem is that group = "users"; allows all users to see my files. How can I make the group = "john"; and clean up permissions on all of my files in the home directory? Is it possible to do this in my configuration.nix file? Also, would restarting in one of these bad configurations mess up permissions again? How do I remove these old configurations so they cannot be accessed?
NixOS: How do I change my group and clean up the bad configurations?
nixos
null
_unix.387947
I am trying to figure out how effective my laptop battery is.Therefore, I woud like to see for how long the battery has been running without AC, since reboot or since AC was plugged in last.Is this possible? If not, I'll just have to time it manually myself of course.I'm running Ubuntu 14.
How do I see how long the battery has been running without AC?
linux;battery
null
_unix.353044
I want to be able to log in to a (publicly-accessible) SSH server from the local network (192.168.1.*) using some SSH key, but I don't want that key to be usable from outside the local network.I want some other key to be used for external access instead (same user in both cases).Is such a thing possible to achieve in SSH?
How to make restrict an SSH key to certain IP addresses?
ssh
Yes.

In the file ~/.ssh/authorized_keys on the server, each entry now probably looks like

ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment

(or similar)

There is an optional first column that may contain options. These are described in the sshd manual. One of the options is

from="pattern-list"

Specifies that in addition to public key authentication, either the canonical name of the remote host or its IP address must be present in the comma-separated list of patterns. See PATTERNS in ssh_config(5) for more information on patterns. In addition to the wildcard matching that may be applied to hostnames or addresses, a from stanza may match IP addresses using CIDR address/masklen notation.

The purpose of this option is to optionally increase security: public key authentication by itself does not trust the network or name servers or anything (but the key); however, if somebody somehow steals the key, the key permits an intruder to log in from anywhere in the world. This additional option makes using a stolen key more difficult (name servers and/or routers would have to be compromised in addition to just the key).

This means that you should be able to modify ~/.ssh/authorized_keys from

ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment

to

from="pattern" ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment

Where pattern is a pattern matching the client host that you're connecting from.
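For the setup described in the question — one key only usable from the local 192.168.1.* network and a second, unrestricted key for external access — the file could look roughly like this (the key material and comments are placeholders, not real keys):

from="192.168.1.*" ssh-ed25519 AAAA...keyusedontheLAN user@lan-machine
ssh-ed25519 AAAA...keyusedfromoutside user@other-machine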
_softwareengineering.254777
How is the for loop implemented so that it can accept ';' as the separator between its parts, rather than ',', which is what ordinary function parameters use?
Uniqueness of for loop
c++;c
The for loop is not a function, the for_each is http://en.cppreference.com/w/cpp/algorithm/for_each which takes , as parameter separator.for is a statement, according to the C++ standard 6.5.3.You can look at a for as a set of actions being performed, like this: for(initialization; condition; expression) at which point they really aren't different parameters, but different statements.
_cstheory.20733
Given a graph G = (V,E) with edge and vertex weights, the minimum distance r-dominating set problem asks for a set S $\subseteq$ V of smallest vertex weight such that for every vertex v not in S there is a vertex u in S with d(u,v) $\leq$ r. Distance is defined as the length of a shortest path in the graph. Does anyone know papers that solve this problem on a $\textbf{tree}$?
minimum distance r-dominating set on tree
graph algorithms;tree
null
_unix.95992
I know that my machine hardware name is i686 and processor type is i686 due to my Linux output. But I have no idea why they are the same. I want to know what's the difference between machine hardware name and processor type.
what's the difference between machine hardware name and processor type
linux kernel
This is explained more clearly in info uname:

`-m'
`--machine'
     Print the machine hardware name (sometimes called the hardware class or hardware type).

`-p'
`--processor'
     Print the processor type (sometimes called the instruction set architecture or ISA). Print `unknown' if the kernel does not make this information easily available, as is the case with Linux kernels.

So, the hardware name is the CPU architecture, while the processor type is the name of the instruction set used. To quote from wikipedia:

Instruction set architecture is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Computers with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs.
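A quick way to compare the related fields on your own machine (the exact output depends on the kernel and the coreutils build, and -p/-i frequently print "unknown" on Linux):

$ uname -m    # machine hardware name, e.g. i686 or x86_64
$ uname -p    # processor type / ISA
$ uname -i    # hardware platform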
_cstheory.17788
I came up with a problem below, which looks like a linear programming problem:Given $n$ sets $S_{1}, S_{2},..., S_{n}$, with constraints of : $$\forall i=1, 2, 3,...,n\space\space \left | S_{i} \right |=k;\\\left | S_{j_{11}}\bigcup S_{j_{12}} \right |\leqslant k_{1};\\\left | S_{j_{21}}\bigcup S_{j_{22}} \right |\leqslant k_{2};\\...\\\left | S_{j_{m1}}\bigcup S_{j_{m2}} \right |\leqslant k_{m}.\\$$ $j_{11},j_{21},...,j_{m1},j_{12},j_{22},...,j_{m2}$ are integers between $1$ and $n$. $k,k_{1},k_{2},...,k_{m}$ are constant positive integers. The questions is: For a positive integer $q$,Do such $S_{1}, S_{2},..., S_{n}$ exsit, such that $\left| \bigcup_{i}^{n}S_{i} \right |=q$?I came up with this problem in order to judge complexity of another graph problem. But I failed to find any relative research. So has complexity of this kind of problem been studied? Is this problem NP-Complete or just P? How to judge this problem? Thanks.By the way, what if $k=2,k_{1}=k_{2}=...=k_{m}=3$?
What's complexity of this set problem which looks like Linear Programming?
set theory;cc.complexity theory;np complete
null
_reverseengineering.14118
I am trying to get to the root file system of a Yi Dome Camera and I haven't done this before. Can anyone point me in the right direction? I ran binwalk on the firmware which showed:

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
1904862       0x1D10DE        MySQL MISAM compressed data file Version 9

Then:

binwalk -D '.sql:myd.myisamchk' 1.9.1.0E_201608251601home_v200m

drwxr-xr-x 2 root root 4096 Dec 5 21:43 _1.9.1.0E_201608251601home_v200m.extracted

The extracted directory has:

-rw-r--r-- 1 root root 38712 Dec 5 21:43 1D10DE.myd.myisamchk

root@kali:~/Downloads/_1.9.1.0E_201608251601home_v200m.extracted# `myisamchk 1D10DE.myd.myisamchk`
myisamchk: error: '1D10DE.myd.myisamchk' is not a MyISAM-table
Binwalk and myisamchk Yi Dome Cam firmware
binary analysis
null
_cogsci.5864
Can you briefly explain in a non-technical way which are the main differences between the MyersBriggs Type Inventory and the MMPI test?
What is the difference between the MMPI and the MyersBriggs Type Inventory?
personality;test;clinical psychology
The MMPI calculates scale scores from dichotomous responses; hence those scale scores the constructs of actual interpretive interest are roughly continuous, or at least polytomous.Yes, there are very many other tests that can assess personality traits that can aid in diagnosis. I'm not sure what you mean by mature, and I hesitate to judge their sophistication.I'm guessing by INFJ you're referring to the MyersBriggs Type Indicator, of which INFJ is one available type. Typology is one primary difference from the MMPI, which measures traits in terms of continuous dimensions. I'll leave the rest of the differences to Wikipedia for now.
_webmaster.86907
What is the meaning of following statement?A shared server should not have more than 25 simultaneous connections for an extended or consistent period of time.
What is the meaning of 25 simultaneous connections for an extended or consistent period of time
web hosting
null
_unix.54952
My router has Linux as its OS. The system log has a lot of rows about iptables and klogd that I don't understand, could someone explain them to me?

The iptables setup:

iptables -t nat -A PREROUTING -i ppp33 -p tcp --dport 44447 -j DNAT --to 192.168.1.101
iptables -I FORWARD 1 -i ppp33 -p tcp -d 192.168.1.101 --dport 44447 -j ACCEPT
iptables -A INPUT -i ppp33 -p tcp --syn -m limit --limit 6/h -j LOG --log-level 1 --log-prefix="Intrusion -> "
iptables -A FORWARD -i ppp33 -p tcp --syn -m limit --limit 6/h -j LOG --log-level 1 --log-prefix="Intrusion -> "
iptables -A INPUT -i ppp33 -j DROP
iptables -A FORWARD -i ppp33 -j DROP

Sample log lines:

klogd: Intrusion -> IN=ppp33 OUT= MAC= SRC=188.11.48.248 DST=2.40.146.60 LEN=48 TOS=0x00 PREC=0x00 TTL=116 ID=12802 DF PROTO=TCP SPT=60584 DPT=64137 WINDOW=8192 RES=0x00 SYN URGP=0
klogd: Intrusion -> IN=ppp33 OUT= MAC= SRC=188.11.48.248 DST=2.40.146.60 LEN=48 TOS=0x00 PREC=0x00 TTL=116 ID=12889 DF PROTO=TCP SPT=60584 DPT=64137 WINDOW=8192 RES=0x00 SYN URGP=0
Meaning of log entries from an iptables configuration
linux;networking;logs;iptables;tcp
iptables -t nat -A PREROUTING -i ppp33 -p tcp --dport 44447 -j DNAT --to 192.168.1.101

This means that your interface ppp33 has Network Address Translation (NAT) set up for all requests to the destination of 192.168.1.101:44447.

iptables -I FORWARD 1 -i ppp33 -p tcp -d 192.168.1.101 --dport 44447 -j ACCEPT

This rule complements the previous rule by ensuring that the request is forwarded to the 192.168.1.101 host.

iptables -A INPUT -i ppp33 -p tcp --syn -m limit --limit 6/h -j LOG --log-level 1 --log-prefix="Intrusion -> "

This rule states that when it sees SYN flags only in a TCP packet, it will log "Intrusion" up to 6 times per hour (thanks Gilles for the call out). This is commonly done to help an administrator discover stealth network scans. This is for all TCP inbound to the host.

iptables -A FORWARD -i ppp33 -p tcp --syn -m limit --limit 6/h -j LOG --log-level 1 --log-prefix="Intrusion -> "

This is the same as the above, but for all TCP packets intended for other hosts that sit behind this host's NAT that it may be doing some translation for.

iptables -A INPUT -i ppp33 -j DROP

This is a rule that is all encompassing. Should you see any other traffic that is intended for this host AND doesn't meet the above rules, DROP the connection.

iptables -A FORWARD -i ppp33 -j DROP

Same as the previous rule, but DROP connections for anything that may be forwarded onto another machine that this machine can forward to.

I hope this helps.
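If you want to see which of these rules are actually being hit, and follow the logged packets, something along these lines should work (the log file location differs between distributions, so the last command is only an example):

iptables -L INPUT -v -n --line-numbers      # per-rule packet/byte counters
iptables -L FORWARD -v -n --line-numbers
dmesg | grep 'Intrusion ->'                 # the LOG target writes to the kernel log
grep 'Intrusion ->' /var/log/messages       # or wherever klogd/syslog puts it on your router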
_unix.146710
After having installed the MySQL database on openSUSE I realized that for all files in /usr/bin the owner was changed to the mysql user of the mysql group. Maybe there was some mistake of mine. The worst problem was with the /usr/bin/sudo command, which obviously did not work, but I've taken back the ownership to root (having logged to root) and it is OK now.Should I change owner of all files in /usr/bin to root or may this cause some malfunctioning of other programs? Should they also have the Set UID option marked in the Privileges tab as sudo does?
Should /usr owner be root?
files;permissions;opensuse;chown
Yes, all files under /usr should be owned by root, except that files under /usr/local may or may not be owned by root depending on site policies. It's normal for root to own files that only a system administrator is supposed to modify.

There are a few files that absolutely need to be owned by root or else your system won't work properly. These are setuid root executables, which run as root no matter who invoked them. Common setuid root binaries include su and sudo (programs to run another program as a different user, after authentication), sudoedit (a companion to sudo to edit files rather than run arbitrary programs), and programs to modify user accounts (passwd, chsh, chfn). In addition, a number of programs need to run with additional group privileges, and need to be owned by the appropriate group (and by the root user) and have the setgid bit set.

You can, and should, restore proper permissions from the package database. If you attempt to repair manually, you're bound to miss something and leave some hard-to-diagnose bugs lying around. Run the following commands:

rpm -qa | xargs rpm --setugids --setperms
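If you want to double-check the result of the rpm repair afterwards, something like the following (GNU find syntax) can help; ideally the first command prints nothing:

find /usr/bin ! -user root -ls     # files under /usr/bin not owned by root
find /usr/bin -perm /6000 -ls      # setuid/setgid binaries, worth a quick sanity check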
_unix.272400
Docker Toolbox for Windows uses boot2docker.iso for creating Linux VM and puts Docker Quick Start Terminal shortcut which when started opens bash shell to home directory on Windows.When executing cd ~ it actually opens c:\Users\%USERNAME%.And whoami reports %USERNAME%.Could I configure another guest Linux VM to start like this?
Configure VM on Windows 10 with VirtualBox similar to Docker Quick Start Terminal
bash;virtualbox
null
_vi.6975
Is there any way to store the number of matches in a variable in a VIMScript function?For instance I'm using:%s/,//gn
Store the number of matches in VimScript function?
vimscript;regular expression
I know that @romainl debugged the question as a XY problem in the comments but I guess it still could be useful to some people, so here is a solution. (It is deeply inspired from this answer)You can use this function (to add to your .vimrc or to your script):function! Count( word ) redir => cnt silent exe '%s/' . a:word . '//gn' redir END let res = strpart(cnt, 0, stridx(cnt, )) return resendfunctionYou can call the function like this: :call Count(pattern).For example in this file:a,bcakj,dhjlkdfa,oiua ,lkjoiua, lkjoiua , lkji,With :echo Count(,) you'll get 7.
_unix.299346
I have a text file containing the following commands:

command1 file1_input; command2 file1_output
command1 file2_input; command2 file2_output
command1 file3_input; command2 file3_output
command1 file4_input; command2 file4_output
command1 file5_input; command2 file5_output
command1 file6_input; command2 file6_output
command1 file7_input; command2 file7_output
...

I named this file commands and then gave it execute permission using chmod a+x. I want command 1 to be run, then command 2. Also I want this to be applied to all the files (file1, file2, .... etc) at once. How can I modify the content of this file to do that?

I tried the following but it didn't work:

(command1 file1_input; command2 file1_output
command1 file2_input; command2 file2_output
command1 file3_input; command2 file3_output
command1 file4_input; command2 file4_output
command1 file5_input; command2 file5_output
command1 file6_input; command2 file6_output
command1 file7_input; command2 file7_output
...
)&
Running commands at once
shell script;scripting;parallelism
Make the lines as such:

(command1 file1_input; command2 file1_output) &
(command1 file2_input; command2 file2_output) &
...

And each line will execute its two commands in sequence, but each line will be forked off as parallel background jobs.

If you want the second command to execute only if the first command completed successfully, then change the semicolon to &&:

(command1 file1_input && command2 file1_output) &
(command1 file2_input && command2 file2_output) &
...
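If the script should not exit until every pair has finished, a wait at the end of the file takes care of that:

(command1 file1_input && command2 file1_output) &
(command1 file2_input && command2 file2_output) &
...
wait    # blocks here until all background jobs have completed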
_unix.267457
I just learned something that shocked me, because I did not have a clue it was a fact.If I have a directory with the following permissions:user@host:~$ ls -la testdirtotal 8drwxrwxrwx 2 user user 4096 Mar 3 20:36 .drwx------ 34 user user 4096 Mar 3 20:36 ..-rw-r--r-- 1 user user 0 Mar 3 20:36 testfile 1-rw-r--r-- 1 user user 0 Mar 3 20:36 testfile 2Even though the files testfile 1 and testfile 2 have write permissions only for the owner everyone can write on them.Until now, I thought that the directories' permissions only affected the directory itself.So now for my question - what good are file permissions on files, if everything seems to be set by the directories' permissions that the files reside in?==== EDIT 1 ====On the other hand look at these permissions:[user@geruetzel2 default]$ ls -latotal 24drwxr-xr-x. 2 root root 41 Dec 19 23:07 .drwxr-xr-x. 96 root root 8192 Mar 3 20:28 ..-rw-r--r--. 1 root root 354 Dec 19 23:07 grub-rw-r--r--. 1 root root 1756 Nov 30 19:57 nss-rw-------. 1 root root 119 Mar 6 2015 useraddIf I do a cat useradd as non-root here, I get a permission denied error. Why is that? The direcory has read permissions for other so it should be readable? There seems to be a difference between the two examples I gave but I don't see the reason for the different behavior.
How do permissions on a directory affect files in it?
files;permissions;directory
The directory permissions only affect the content of the directory. So anybody with write permissions on the directory can e.g. delete files or folders in that directory, even if the permissions of the files or folders are set to have no write access.

It maybe makes it easier to understand if you once open the folder with vi or any other text editor. In Unix and Linux, everything is a file.

If you for example edit a file with vi, it will not edit the file in place but make a copy and delete the original when saved. On the other hand, a user not owning the file couldn't echo directly into that file.
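A quick demonstration of the difference (the user names are only examples): with a world-writable directory, a second user cannot write into a read-only file, but can still delete it, because deletion is controlled by the directory:

# as user alice
mkdir -m 777 /tmp/testdir
touch /tmp/testdir/testfile && chmod 644 /tmp/testdir/testfile

# as user bob
echo hi >> /tmp/testdir/testfile   # fails: permission denied (file permissions)
rm -f /tmp/testdir/testfile        # succeeds: directory permissions allow unlinking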
_unix.67459
When you open the main menu in say MS Windows (7/Vista) you can quickly search for any item (apps/docs etc.) the same in Ubuntu (Dash), Kubuntu (Kickoff menu).Is there any easy way to add such a search input which can filter apps (at least just apps no need for docs) in LXDE's main menu?I Google'd a lot with no success, that's why I'm here.I am a technical person so if you can give even more low level guide's would help (may be I should create some widget custom panel etc.)
LXDE customize main menu add SEARCH BOX like in KDE Kickoff or Windows 7/Vista
search;lxde;menu
null
_unix.226974
In the excellent article Why Information Security is Hard - An Economic Perspective, Ross Anderson posits that:

... you can make the security critical part of the system small enough that the bugs can be found. This was understood, in an empirical way, by the early 1970s. However, the discussion in the above section should have made clear that a minimal TCB is unlikely to be available anytime soon, as it would make applications harder to develop and thus impair the platform vendors appeal to developers.

The above section mentions:

The huge first-mover advantages that can arise in economic systems with strong positive feedback are the origin of the so-called Microsoft philosophy of 'we'll ship it on Tuesday and get it right by version 3'. Although sometimes attributed by cynics to a personal moral failing on the part of Bill Gates, this is perfectly rational behaviour in many markets where network economics apply.

I would like to understand if this also applies to open-source kernels such as Linux, BSD, Hurd. Intuitively I would think that it only applies to a lesser extent, but that this pressure is not altogether non-existent. My main question is therefore:

Is the TCB in an open-source system quantifiably smaller?

If so, is this the consequence of:

UNIX modular design
continuous scrutiny of open-source code
lesser pressure to get to market fast
lesser pressure to court developers
other ...

Apologies if I got the terminology or something else wrong. Feel free to edit, or comment so that I can edit.
Is information security less hard in an open source system?
kernel;security;source code
null
_unix.3149
I have been using Fedora as my primary distribution since very long. One thing that bothers me is its relatively short life cycle. I install its latest release, restore my backup, customize the applications, take a sigh of relief but by then the new release is just around the corner.Fedora has a comparatively short life cycle: version X is maintained until one month after version X+2 is released. With 6 months between releases, the maintenance period is a very short 13 months for each version. WikipediaOnce I used pre-upgrade when moving from Fedora 9 to 10. It didn't work smoothly. The new upgraded Fedora was using the old kernel images of Fedora 9. Took me long to figure it out and I had to use live usb to fix it. Since then I decided not to use pre-upgrade or Upgrade an existing installation option. I had some hiccups with applications too.Using fresh install seems safer. But now I have to backup all data, along with my scripts and rc files and restore it again. This takes time along with installing apps that are not installed by default and removing not-required apps.Main problem is customization settings of each application. From Firefox only, I would have to export saved passwords, bookmarks, saved sessions, preferences of different extensions, etc. Some other applications do not provide option of save/export settings at all. So I have to configure each one manually.All in all, upgrading to latest release takes time, even longer if my net connection goes down for some reason. Each time I upgrade, I cannot take it out my mind that within few months a new release will be knocking my door, and I will have to repeat the whole exercise again.What could be a painless and easy procedure to take backups of all data and to restore it? I would prefer a command line solution.How can I preserve settings of applications, if they do not provide an option to export settings?If you are Fedora user, what do you do to keep up with its frequent releases?How can I make this whole procedure faster and less painful? Its the amount of time and efforts that an upgrade takes altogether which made me post this question. How can I make my life easier?
How should I deal with Fedora's short life cycle?
fedora;package management;upgrade
I use an external drive, where I back up some of my folders and dotfiles with rsync -avz. Once the first snapshot is taken, it only needs very little data to move onto the external drive for backups.

Pretty much all of that information is stored in a dotfile or in some dotdirectory. All you need to do is back up those directories. That's what I do anyway, and it's been working for years. Keep in mind however, that sometimes a newer version won't play nice with the older config files, so it's not guaranteed to work every single time.

It all depends on how big of a change the next release is. For instance, when the file systems don't change, I don't see a reason to re-install anymore. It was all different back when FC6 was around. Upgrading was a pain, and I usually made fresh installs back then.

From Fedora 8 onward, preupgrade was working fine, I didn't have any issues with it. I did however do a fresh install for Fedora 13, since I wanted all my hard drives to be formatted in ext4. Other than that, upgrading to the latest version of Fedora usually works well. As of this edit, I've recently upgraded to Fedora 24. By now, the upgrade process with dnf goes smoothly, although I think there is room for improvement. Generally, problems only arise when the upgrade process is aborted mid-install, or mid-cleanup.

Usually, keep some track of what changes you make to the system: what files in /etc/ you changed, what programs you compiled yourself, or what libs you put into /usr/lib/ yourself, etc. This makes life much easier, as well as a backup that is constantly kept up to date. Preupgrade works fine by now, but when you want to change the file system or so, there's no way around reinstalling. The upgrade guide of Fedora will advise you when you should indeed reinstall instead of doing an upgrade. The PreUpgrade manual says it's possible to upgrade from F11 directly to F13, for example. I would advise against it. Since older Fedoras aren't upgraded, the PreUpgrade package is most likely outdated. With the latest versions and the newer upgrade processes with dnf and the system-upgrade plugin, this isn't as much of an issue, really. Conflicting file versions are stored with the extension .rpmsave, and you can then later resolve these issues with rpmconf -a.

This won't help, but when you're an OpenBSD user, you need to make most changes manually and you can't upgrade to the latest release from any other than the previous one.

A word of warning: what might somewhat ruin your day are 3rd party repositories that aren't ready for the new release yet. This happens almost routinely with Dropbox, and even with RPMfusion. It takes RPMfusion about two weeks to get in sync with the new release, but sometimes four months for Dropbox to fix their repos. So, when you're using something like RPMfusion, etc., I advise checking if they're ready. Otherwise waiting another week or two won't really hurt, and it'll save you a lot of headaches. Especially when dealing with nVidia drivers and such.
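As a concrete sketch of the rsync backup mentioned above (the source folders and the /mnt/backup mount point are only examples — adjust them to your own dotfiles and wherever your external drive is mounted):

rsync -avz ~/.mozilla ~/.config ~/Documents /mnt/backup/fedora-home/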
_codereview.10182
Ok I have a method://_seen is defined as a HashSet earlierpublic boolean isTerminal(Node node){ if (_seen.contains(node.getState())) { return true; } else { _seen.add(node.getState()); return false; }}What is the nicest way of doing this? It could use a variable to hold _seen.contains, and return just that, and maybe always add. Or, perhaps I could do:public boolean isTerminal(Node node){ return !_seen.add(node.getState());}but is that clear?
Sets, Check and add
java
First of all I don't think its a good idea to have a function that does all that tasks Check, Sets and Add.The code should look like this:public boolean isTerminal(Node node){ return _seen.contains(node.getState());}and if you want to add the state/node, you should have a separate methodpublic void addNode(Node node){ if (!isTerminal(node)) { _seen.add(node.getState()); }}Hope I understood well what you need.Cheers!
_unix.231180
I am currently attempting to automagically reboot my modem that for some reason or another will only allow certain types of small Linux OS's to ssh into it - and not the one I am using. So, rebooting that modem currently (manually) requires me to ssh into a third-party on my network that is allowed to ssh into the modem, ssh-ing into the modem from there, and then rebooting it with the rebootcommand.This would work, but I would also like the modem to auto-reboot every two hours, which means I need to automate this whole ssh-ing into the modem process. I would like to write a script that will ssh into the third-party, and then immediately ssh into the modem from the third-party to reboot it.However, after telling the script to ssh into the third-party, I'm lost as to how I also make it ssh into the modem from the third party.So far I'm using sshpass to automatically input the password and this is what is looks likesshpass -p third_party_password ssh [email protected] -p modem_password ssh [email protected] obviously the second line never runs and that's why I'm asking this.Please let me know if there is any other info I can provide. Also, if what I am asking is not possible, are there any other suggestions as to how I could accomplish the automatic modem reboot from the linux command line I'm trying to use?
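For reference, one shape such a one-liner might take — untested, and it assumes sshpass is also installed on the third-party machine, with host names and passwords as placeholders:

sshpass -p third_party_password ssh -t user@third_party_host \
    "sshpass -p modem_password ssh admin@modem_ip reboot"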
Bash script - sshing into one computer and then immediately into that computer's network's modem
bash;ssh;networking;scripting
null
_softwareengineering.219655
I have a xlsx file, that has some tabs with different data. I want to be able to save each row of a tab in a list. The first thing that comes to mind is a list of lists, but I was wondering if there is another way. I'd like to save that information in a object, with all its benefits, but can't think of a way to generate/create such diverse objects on the fly. The data in the xlsx is diverse and ideally the program is agnostic of any content.So instead of e.g. create a list for each row, than put that list in another list for each tab and each tab in another list, I'd like to store the information that each row represents in a single object and just have a list of different objects.The question can be summarized as : What is an alternative to list of lists ?A small graphic to visualize the problem :+--------------------------------------------------------------------+|LIST || || +------------------+ +------------------+ +-----------------+ || | Class1 | | Class2 | | Class3 | || |------------------| |------------------| |-----------------| || | var1 | | var1 | | var5 | || | var2 | | var2 | | var6 | ||... | var3 | | | | var7 |...|| | | | | | | || | | | | | | || | | | | | | || | | | | | | || +------------------+ +------------------+ +-----------------+ || |+--------------------------------------------------------------------+
Flexible / Dynamic object creation or Alternative to list of lists
java;design patterns;object oriented design
I assume that your worksheets contain tabular data, i.e., the first row contains field names and the following rows contain values. In such a case it would make sense to define a Table class whose instances would each contain the contents of one worksheet. This Table object could hold general information regarding one worksheet, if necessary, like the name of the worksheet tab.It could also have a list holding the column names or a dictionary holding the column indexes with the column names as a key. If you define a Column class for this purpose (instead of just a string for the column name), it allows you to store additional information regarding a column, like its type (numeric, text, date) and display format.Even if you don't need this kind of infrastructure right now, having such classes allows you to add this kind of functionality much easier later.The Table class would also contain a list of rows. Here again, having a Row class can have advantages. However, it would be ok to represent one row as an array of strings, for instance. Since each row of the table has the same length, a list is not necessary here.Instead of using strings to store the values, having a Value class or struct can also have advantages. Such a type could automatically convert strings to an appropriate type, could perform comparisons between values of the same type etc.Without knowing more about your data and the kind of logic you want to apply to it, it is difficult to give you an answer tailored to fit your needs.
_cstheory.21456
In packing problems, we need to select a set of sets of items, such that no item is chosen twice (in $Set-Packing$, the actual items must not be packed twice, in $Graph-Packing$ the copies of the graph has to be vertex disjoint, in multidimensional matching every item has to appear once, etc.).Are there any studied problems that ask for packing such that every item is packed at most $p$ times?For example, is there any known reference for (perhaps under a different name?)-$repetition-Set-Packing$:given a universe $\mathcal{U}=\{e_1,..,e_n\}$, two numbers $k,r\in \mathbb{N}$ and a set of subsets $\mathcal{S}=\{s_1,s_2,...,s_m\}\subseteq 2^\mathcal{U}$, is there a set $\mathcal{S'} \subseteq \mathcal{S}$, $|\mathcal{S'}|=k$ such that every item in $\mathcal{U}$ appears in at most $r$ sets in $S'$?Are there any other works on packing with limitied repetitions?
Packing problems with repetitions
reference request;packing;covering problems
Yes, people do consider these problems but there is no standard name. A useful way to think about these problems is via packing integer programs. Consider the problem $\max wx$ such that $Ax \le b, x \in {0,1}^n$ where $A$ is a $m \times n$ non-negative matrix. The width of the program is $\min_{i,j} b_i/A_{i,j}$ (which we can assume is at least $1$). If $A$ is a $0,1$ matrix and $b$ is an integer vector then this captures packing problems where one allows repetitions up to $\min_i b_i$. Packing problems become easier as width increases. For Set Packing one can get an approximation of the form $d^{1/W}$ where $W$ is the width and $d$ is the maximum set size and this is more or less tight from hardness of maximum independent set. In particular when $W = \Omega(\log d)$ one gets a constant factor approximation.
_scicomp.21295
I have some data: the number of nodes $N$ and the error in the energy norm corresponding to it. I have seen in some references that the rate of convergence is reported by $$\| u-u_h\| _E=CN^{\alpha} $$ How can I find $C$ and $\alpha$ in MATLAB? I tried polyfit $(N,\| u-u_h\| _E,1)$ but the result was not correct. How can I find them?
Finding rate of convergence by curve fitting in Matlab
matlab;finite element;convergence;regression
First of all, rates of convergence are usually given in the form$$ \|u-u_h\| \leq C N^\alpha,$$rather than equality. Furthermore, rates are asymptotic, i.e., only have to hold for $N\to \infty$. This means that you're unlikely to find a single $C$ and $\alpha$ such that your equation holds.Another reason why your approach doesn't work is that what you're trying to fit is not a polynomial, since $\alpha$ is a) unknown and b) not an integer (for one thing, it must be negative since the error goes down as $N$ goes up).What people usually do is look at a double-logarithmic plot: If you take the logarithm of your equation (or my inequality), you get$$ log(\|u-u_h\|) \leq \log(CN^\alpha) = \log(C) + \alpha\log(N).$$This is a linear polynomial $ax+b$ in $\log (N)$ with coefficients $a=\alpha$ and $b=\log(C)$ (from which you can find $C=\exp(b)$).So if you apply polyfit to $\log(N)$, $\log(\|u-u_h\|)$, you should get an array $[a,b]$ with $\alpha=a$ and $C=\exp(b)$.
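As a quick sanity check on the slope that polyfit returns, the rate between two consecutive data points $(N_1,\|u-u_{h_1}\|_E)$ and $(N_2,\|u-u_{h_2}\|_E)$ can also be estimated directly:
$$\alpha \approx \frac{\log\left(\|u-u_{h_2}\|_E\,/\,\|u-u_{h_1}\|_E\right)}{\log\left(N_2/N_1\right)}$$
If this number roughly agrees with the fitted coefficient $a$, the fit is trustworthy.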
_softwareengineering.43867
From time to time, I experience different bugs with proprietary software that I need to interact with. In order to get through these bugs, I need to develop various workarounds. Is there a good book for debugging/disassembling proprietary software to write better workarounds?
What are some good resources for debugging/disassembling proprietary software?
debugging;third party libraries;reverse engineering
Advanced Windows Debugging
http://www.amazon.com/gp/product/0321374460/ref=oss_product

Advanced .NET Debugging
http://www.amazon.com/Advanced-NET-Debugging-Mario-Hewardt/dp/0321578899/ref=pd_sim_b_2

Reversing: Secrets of Reverse Engineering
http://www.amazon.com/Reversing-Secrets-Engineering-Eldad-Eilam/dp/0764574817/ref=sr_1_1?s=books&ie=UTF8&qid=1296931123&sr=1-1

The IDA Pro Book: The Unofficial Guide to the World's Most Popular Disassembler
http://www.amazon.com/IDA-Pro-Book-Unofficial-Disassembler/dp/1593271786/ref=pd_sim_b_3

Introduction to 80x86 Assembly Language and Computer Architecture
http://www.amazon.com/Introduction-Assembly-Language-Computer-Architecture/dp/0763772232/ref=pd_sim_b_12
_codereview.111178
The idea here is to be able to find a USB serial port device connected during runtime, thus not knowing its port number, and use it in the application to retrieve information from the device.string comportInfo = string.Empty;using (var entitySearcher = new ManagementObjectSearcher(root\\CIMV2, SELECT * FROM Win32_PnPEntity WHERE Caption LIKE '% + SerialPortToFind + %')){ using (var serialPortSearcher = new ManagementObjectSearcher(root\\WMI, SELECT * FROM MSSerial_PortName)) { //testBase.SamplerWithCancel is like a for loop with exception control, a time between interations and amount of iterations to be made. It expects a true or false value to determine whether a desired condition is met. Failing to return a true value in the specified iterations throws an exception. testBase.SamplerWithCancel(() => { var portList = serialPortSearcher.Get().Cast<ManagementBaseObject>().ToList(); var matchingEntities = entitySearcher.Get().Cast<ManagementBaseObject>().First(); if (portList.Count != 0 && matchingEntities != null) { foreach (ManagementBaseObject port in portList) { if (port[InstanceName].ToString().Contains(matchingEntities[DeviceID].ToString())) { comportInfo = port[PortName].ToString(); } } return true; } else return false; }, Serial port not found, 3, 1500, 500, false, false); }}The code works well, but I want to know where I can improve it to make it more resilient and less error prone.I feel like using LINQ would be way more appropiate than what I did.
Find a serial port device through WMI (windows management instrumentation)
c#;linq;windows;serial port
The code works well, but I want to know where I can improve it to make it more resilient and less error prone. A very good start would be to always use braces {} although they might be optional. By stacking the using blocks you can save some horizontal spacing. By reverting the condition of if (portList.Count != 0 && matchingEntities != null) you could return early and the else would be (like now) redundant and could be removed which will save some spacing too. The naming of the variables could need some imporvements too, e.g the plural matchingEntities doesn't match the result of the call to First(). Storing the result of matchingEntities[DeviceID].ToString() into a variable will speed up things. Implementing the mentioned points will lead to string comportInfo = string.Empty;using (var entitySearcher = new ManagementObjectSearcher(root\\CIMV2, SELECT * FROM Win32_PnPEntity WHERE Caption LIKE '% + SerialPortToFind + %')) using (var serialPortSearcher = new ManagementObjectSearcher(root\\WMI, SELECT * FROM MSSerial_PortName)){ //testBase.SamplerWithCancel is like a for loop with exception control, a time between interations and amount of iterations to be made. It expects a true or false value to determine whether a desired condition is met. Failing to return a true value in the specified iterations throws an exception. testBase.SamplerWithCancel(() => { var portList = serialPortSearcher.Get().Cast<ManagementBaseObject>().ToList(); var matchingEntity = entitySearcher.Get().Cast<ManagementBaseObject>().First(); if (portList.Count == 0 || matchingEntity == null) { return false; } string entity = matchingEntity[DeviceID].ToString(); foreach (ManagementBaseObject port in portList) { if (port[InstanceName].ToString().Contains(entity)) { comportInfo = port[PortName].ToString(); } } return true; }, Serial port not found, 3, 1500, 500, false, false);}
_unix.74911
I have a directory that contains many subdirectories. The subdirectories contain lots of types of files with different file extensions. I want to move (not copy) all the files of one type into a new directory. I need all these files to be in the same directory, i.e. it needs to be flat.(My use case is that I want to move ebooks called *.epub from many directories into a single folder that an EPUB reader can find.)
How to move all files with a certain file extension from subdirectories to a single directory
shell;files;rename
In zsh, you can use a recursive glob:
mkdir ~/epubs
mv -- **/*.epub ~/epubs/
In bash 4, run shopt -s globstar (you can put this in your ~/.bashrc), then run the commands above. In ksh, run set -o globstar first.
With only POSIX tools, run find:
find . -name '*.epub' -exec mv {} ~/epubs \;
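A side note that is not part of the original answer: if your mv is from GNU coreutils, its -t option combines with find's + terminator so files are moved in batches; treat the exact flags as assumptions to check on your platform.
mkdir ~/epubs
find . -name '*.epub' -exec mv -t ~/epubs {} +
Either way the hierarchy is flattened, so two books with the same file name in different directories will collide; adding -i (prompt) or GNU's -n (never overwrite) to mv makes that visible instead of silently overwriting.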
_reverseengineering.16097
I am trying to get all the library function calls that a binary performs in a preorder-DFT traversal of the CFG. I'm able to get the CFG like this:
import sys, angr
import networkx as nx
proj = angr.Project(sys.argv[1], auto_load_libs=False)
cfg = proj.analyses.CFG().graph
I can even traverse it like this (suppose I'm getting the correct main function's node):
ns = nx.dfs_preorder_nodes(cfg, mainFuncNode)
nodes = []
try:
    while True:
        nodes.append(ns.next())
except:
    pass
However, I don't know how to get the function calls from the nodes (if they are actually making any). I read some documentation and all I could come up with was:
for n in nodes:
    if n.is_simprocedure:
        print n.to_codenode().function
The output is all None, and I'm sure that's wrong because the binary is doing some I/O operations. So I expect to see something like:
libc_puts
libc_gets
...
I would appreciate it if you could give me some better pointers.
How can I get the shared libraries' function calls using angr
static analysis;python;binary;control flow graph;angr
"However I don't know how to get the function calls from the nodes" - are you asking which function a node belongs to, or which function a node is calling?
For the former: each block has a corresponding CFGNode object in the graph, and each CFGNode has a .function_address member, which tells you the address of the function that the node belongs to.
For the latter: every edge in the graph is labeled with properties, and we use 'jumpkind' to mark the type of an edge. An Ijk_Call jumpkind means that edge is a call from a block (or a node) to a function.
By the way, angr's CFG class is more than just the .graph member (which is a networkx.DiGraph instance). You might sometimes find it easier to work directly on the CFG instead of manually traversing the graph. In addition, once a CFG is generated, you can access all functions through CFG.functions. Each Function instance has two intra-function graphs associated with it: a .graph and a .transition_graph. You may find those easier to work with than traversing the CFG of the whole binary.
In the end, if you like GUIs and have a lot of patience, you might want to give angr Management a try.
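To make the 'jumpkind' idea concrete, here is a rough, untested sketch; it only leans on the pieces named above (the networkx graph and the 'jumpkind' edge label), and attribute names such as CFGNode.addr and CFGNode.name are assumptions to verify against your angr version:
import sys, angr

proj = angr.Project(sys.argv[1], auto_load_libs=False)
cfg = proj.analyses.CFG()

# every edge carries a 'jumpkind'; Ijk_Call edges point at the entry node of the callee
for src, dst, data in cfg.graph.edges(data=True):
    if data.get('jumpkind') == 'Ijk_Call':
        print("0x%x -> %s" % (src.addr, dst.name))   # dst.name is e.g. 'puts' for a libc SimProcedure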
_webmaster.68265
I have a site with a search form. When someone searches, the url for the results page has PHP variables in it and looks like:mysite.com/used-construction-equipment-search&searchterm=Sany&searchlocation=Brisbane&searchtype=AnyWould Google be able to index these sorts of pages? Can crawler bots somehow trigger 'searches'?
SEO: does Google index search result pages?
seo;google search;googlebot
null
_webmaster.67756
I want to track user sign-up using Google Analytics (not Universal). Below is the step-by-step process:
1) User enters email
1a. If an email record is found, they get redirected to a prepopulated form
1b. If the email is not found, they get redirected to a blank form
2) User gets redirected to a confirmation page and is prompted to check their email
3) User clicks the email link and lands on a page to fill in passwords
4) User is redirected to their profile page.
I would like to put this into a goal funnel, so I guess using Events is not going to work? Does that mean I have to use virtual pageviews on each step? If so, what would be the best way to set this up?
Tracking a sign up form with Google Analytics that uses Ajax and multiple steps
google analytics;ajax
You'll have to create virtual pages for each step, and set up a goal funnel accordingly.
So for each step you've described in your question you'll have to send a pageview. In case of an actual redirect you really don't have to send it manually.
My advice would be to streamline the steps into one JavaScript function; that way you can control the full process and send a custom pageview at the key points. With Universal Analytics you can even do this on the server side of the site. By sending the pageviews you'll still see the drop-offs from the start to the goal achievement, just as you would in a more classic environment.
For sending a virtual pageview you can use this snippet:
_gaq.push(['_trackPageview', '/registration/email-entered']);
In case of Universal:
ga('send', 'pageview', '/registration/email-entered');
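A minimal sketch of the "one JavaScript function" idea, using only the two calls shown above; the step names and the assumption that one of the two trackers is loaded are illustrative only:
// call e.g. trackStep('email-entered') at each funnel step
function trackStep(step) {
  var path = '/registration/' + step;
  if (window.ga) {                        // Universal Analytics (analytics.js)
    ga('send', 'pageview', path);
  } else if (window._gaq) {               // classic ga.js
    _gaq.push(['_trackPageview', path]);
  }
}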
_softwareengineering.16117
One thing working in Haskell and F# has taught me is that someone in a university smarter than me has probably already found an abstraction for what I'm doing. Likewise in C# and object-oriented programming, there's probably a library for it, whatever it is that I'm doing.
There's such an emphasis on reusing abstractions in programming that I often feel a dilemma between: 1) just coding something short and dirty myself, or 2) spending the same time to find someone else's more robust library/solution and just using that.
Recently one of the coders here wrote up a (de)serializer for CSV files, and I couldn't help but think that something like that is probably very easy to find online, if it doesn't already come with the .NET standard APIs. I don't blame him though; several times working in .NET I've patched together a solution based on what I know, only to realize that there was some method call or object, often in the same library, that did what I wanted and I just didn't know about it.
Is this just a sign of inexperience, or is there always an element of trade-off between writing new and reusing old? What I hate the most is when I run across a solution that I already knew about and forgot. I feel like one person just isn't capable of digesting the sheer quantity of code that comes prepackaged with most languages these days.
Does don't reinvent the wheel ignore the limits of human memory?
libraries;code reuse
null
_webmaster.13010
I'm working on moving a client's hosting to a new provider as part of a site redesign. They're currently using Rackspace to host their site. Their domain is registered with ENOM. My one concern is their email accounts. Currently, they login to webmail.theirdomain.com to read email. There is a control panel (very minimal) at websitesettings.com for their account, but I can't find any information about email servers there. I'd like to transition them to Google Apps, but I need to first make sure all old emails are backed up, as well as minimize any downtime.Has anyone dealt with a setup like this before? Know where I should start?
Email settings for a Rackspace Cloud Sites hosted site?
web hosting;dns;email;rackspace
The place to start is to find out where the email gets delivered to. I would go to http://www.webmaster-toolkit.com/dns-query.shtml, put in the domain, and look for lines with MX in them.
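If you'd rather check from a terminal, the standard DNS tools give the same answer; dig and host ship with the BIND utilities on most systems (package names vary), and example.com below stands in for the client's actual domain:
dig +short MX example.com     # list the mail exchangers (lowest preference number wins)
host -t MX example.com        # equivalent query using host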
_webmaster.57021
I have a site with thousands of pages. The pages are intended to be found through search engines, as they are very specific. It is very unlikely that users will find the necessary pages using any navigation on the site. So, for my part, I'd rather list such a page only in the sitemap, as that is easier.
But how would Google, Bing and others treat pages with no internal links pointing to them? Will they be ranked lower due to the lack of those links?
Do search engines rank URLs in sitemaps lower if there are no internal links pointing to them?
seo;search engines;sitemap;links;ranking
"But how would Google, Bing and others consider pages with no internal links pointing to them? Will they be ranked lower due to the lack of those links?"
URLs that appear in your sitemap without any internal or external links will have less authority than those with internal and external links pointing to them. The more authority the pages linking to a page have, the more likely that page is to rank higher as a result. Whether they will rank lower as a result of not having any links to them depends on the specific algorithm of each search engine, and the quality of the links you're comparing them against.
As this source covers (along with a good background on internal linking):
"You should add internal links to all of those pages that point to the pages that you want to rank well in search engines."
_unix.172397
If I join existing sessions with 'screen', I can get a list of the sessions with the C-a " keystroke and switch between them; however, I can't find a way to safely leave/detach and keep all the processes running. If I press C-a \ it seems to kill the processes.
screen manager with VT100/ANSI terminal emulation
fedora;process;gnu screen
Why use C-a \ at all? That command kills all windows and terminates the session. The detach command is C-a d: it leaves the session, and all its processes, running in the background so you can reattach later.
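In practice (standard screen key bindings and flags, nothing specific to this setup):
screen -ls              # list sessions and whether they are attached or detached
# inside a session: press C-a d to detach; everything keeps running
screen -r <session>     # reattach to a detached session by name or pid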
_unix.338571
When I change the IP address with ifconfig, it resets to the old IP address after a reboot. How do I make the change permanent?
How to change linux machine IP from command line?
linux;ip
null
_codereview.160667
This was a straightforward test for a junior level position. The README.md estimated completion should take 45 minutes. Provided were class and function stubs to complete and several unit tests which StackExchange won't let me link to due to my reputation (they are reproduced below).I'd love to offer belabored rationalizations of why I did what I did, but I'm sure no one's interested. I stripped most comments for brevity, but they can be found in the above links. Please tear it apart:import functoolsclass Campaign: # Prefer not to refactor this because I think comparison logic belongs elsewhere def __init__(self, name, bid_price, target_keywords): self.name = name self.bid_price = bid_price self.target_keywords = target_keywords def __repr__(self): return <Campaign> {}.format(self.name)@functools.total_orderingclass Score: def __init__(self, user_search_terms, campaign): self.user_search_terms = user_search_terms self.campaign = campaign def __repr__(self): return <Score> Campaign: {}, Relevance: {}.format(self.campaign.name, self.value) def __eq__(self, other): self._except_if_terms_ne(other) if self.campaign.bid_price == other.campaign.bid_price: return self.value == other.value return False def __lt__(self, other): self._except_if_terms_ne(other) if self.value < other.value: return True if self.value == other.value: return self.campaign.bid_price < other.campaign.bid_price return False def _except_if_terms_ne(self, other): if self.user_search_terms != other.user_search_terms: raise ValueError(Cannot compare unequal user search terms.) @property def value(self): ust_set = set(self.user_search_terms) ctk_set = set(self.campaign.target_keywords) return len(ust_set.intersection(ctk_set))def choose_best_campaign_for_user(user_search_terms, campaigns): The best campaign to serve is the one with the most search term matches. If two or more campaigns have the same number of matches then the one with the highest bid_price should be served. If two or more campaigns have the same number of matches and the same bid_price, then either one may be returned. 
best_score = max([ Score(user_search_terms, camp) for camp in campaigns ]) return best_score.campaign if best_score.value > 0 else NoneAnd to the extent that they're relevant, the tests:import unittestfrom bidder import Campaign, Score, choose_best_campaign_for_user# Sample campaigns for testing - the apple_mac and apple_ios campaigns have one keyword# in common 'apple'apple_mac_campaign = Campaign( name=Apple Mac Products, bid_price=1.00, target_keywords=[apple, macbook, macbook pro, imac, power mac, mavericks],)apple_ios_campaign = Campaign( name=Apple IOS Products, bid_price=1.50, target_keywords=[apple, ipad, iphone, ios, appstore, siri],)google_android_campaign = Campaign( name=Google Android Products, bid_price=1.50, target_keywords=[google, android, phone])marvel_comics_campaign = Campaign( name=Marvel Comics, bid_price=0.50, target_keywords=[spiderman, hulk, wolverine, ironman, captain america],)all_campaigns = [ apple_mac_campaign, apple_ios_campaign, marvel_comics_campaign]class ScoreTest(unittest.TestCase): def setUp(self): self.eq_val_dif_bids = ( Score([], apple_mac_campaign), Score([], apple_ios_campaign) ) self.eq_val_eq_bid = ( Score([], apple_ios_campaign), Score([], google_android_campaign) ) def test_eq_val_eq_bid_compares_eq(self): self.assertEqual(self.eq_val_eq_bid[0], self.eq_val_eq_bid[1]) def test_eq_val_dif_bids_compares_lt(self): self.assertTrue(self.eq_val_dif_bids[0] < self.eq_val_dif_bids[1]) def test_nonidentical_search_terms_raise_value_error(self): with self.assertRaises(ValueError): Score([], apple_ios_campaign) == Score([test], google_android_campaign)class BidderTest(unittest.TestCase): def test_it_picks_no_campaign_if_no_keywords_match(self): chosen_campaign = choose_best_campaign_for_user( user_search_terms=[batman], campaigns=[marvel_comics_campaign], ) self.assertIsNone(chosen_campaign) def test_it_chooses_a_campaign_if_search_terms_match_keywords(self): chosen_campaign = choose_best_campaign_for_user( user_search_terms=[spiderman], campaigns=[marvel_comics_campaign], ) self.assertEqual(marvel_comics_campaign, chosen_campaign) def test_it_chooses_campaign_with_most_overlapping_keywords(self): chosen_campaign = choose_best_campaign_for_user( [apple, macbook], all_campaigns ) self.assertEqual(chosen_campaign, apple_mac_campaign) def test_it_chooses_campaign_with_higher_bid_price(self): chosen_campaign = choose_best_campaign_for_user( [apple], all_campaigns ) self.assertEqual(chosen_campaign, apple_ios_campaign)if __name__ == __main__: unittest.main()
Ad-exchange bidder exercise for a job interview
python;object oriented;interview questions
null
_codereview.170895
For a while now I have been working to construct a program to calculate a lot of mathematical constants. Before I explain, here's the code:Code#include <boost/math/constants/constants.hpp>#include <boost/multiprecision/cpp_dec_float.hpp>#include <boost/math/special_functions.hpp>#include <complex>#include <iostream>/* Only change PRECISION*/const unsigned long long PRECISION = 100, INACC = 4;typedef boost::multiprecision::number< boost::multiprecision::cpp_dec_float<PRECISION + INACC> > arb;arb alladi_grinstead(){ arb c; for(unsigned long long n = 1;; ++n) { const arb last = c; c += (boost::math::zeta<arb>(n + 1) - 1) / n; if(c == last) break; } return exp(c - 1);}arb aperys(){ return boost::math::zeta<arb>(3);}arb buffons(){ return 2 / boost::math::constants::pi<arb>();}arb catalans(){ const arb PI = boost::math::constants::pi<arb>(); return 0.125 * (boost::math::trigamma<arb>(0.25) - PI * PI);}arb champernowne(const unsigned long long b = 10){ arb c; for(unsigned long long n = 1;; ++n) { arb sub; for(unsigned long long k = 1; k <= n; ++k) { sub += floor(log(k) / log(b)); } const arb last = c; c += n / pow(b, n + sub); if(c == last) break; } return c;}arb delian(){ return boost::math::cbrt<arb>(2);}arb dottie(){ arb x; std::string precomp, postcomp; for(x = 1; x == 1 || precomp != postcomp;) { precomp = static_cast<std::string>(x); precomp.resize(PRECISION); x -= (cos(x) - x) / (-sin(x) - 1); postcomp = static_cast<std::string>(x); postcomp.resize(PRECISION); } return x;}arb e(){ return boost::math::constants::e<arb>();}arb erdos_borwein(){ arb e; for(unsigned long long n = 1;; ++n) { const arb last = e; e += 1 / (pow(static_cast<arb>(2), n) - 1); if(e == last) break; } return e;}arb euler_mascheroni(){ return boost::math::constants::euler<arb>();}arb favard(const unsigned long long r = 2){ if(r % 2 == 0) { return (-4 / boost::math::constants::pi<arb>()) * (pow(-2, static_cast<arb>(-2) * (r + 1)) / boost::math::tgamma<arb>(r + 1)) * (boost::math::polygamma<arb>(r, static_cast<arb>(0.25)) - boost::math::polygamma<arb>(r, static_cast<arb>(0.75))); } else { return (4 / boost::math::constants::pi<arb>()) * ((1 - pow(2, -(static_cast<arb>(r) + 1))) * boost::math::zeta<arb>(r + 1)); }}arb gauss(){ const arb ROOT_TWO = boost::math::constants::root_two<arb>(), PI = boost::math::constants::pi<arb>(); return pow(boost::math::tgamma<arb>(0.25), 2) / (2 * ROOT_TWO * pow(PI, 3 / static_cast<arb>(2)));}arb gelfond_schneider(){ return pow(2, boost::math::constants::root_two<arb>());}arb gelfonds(){ return pow(boost::math::constants::e<arb>(), boost::math::constants::pi<arb>());}arb giesekings(){ return (9 - boost::math::trigamma<arb>(2 / static_cast<arb>(3)) + boost::math::trigamma<arb>(4 / static_cast<arb>(3))) / (4 * boost::math::constants::root_three<arb>());}arb glaisher_kinkelin(){ return boost::math::constants::glaisher<arb>();}arb golden_ratio(){ return boost::math::constants::phi<arb>();}std::complex<arb> i(){ return std::complex<arb>(0,1);}arb inverse_golden_ratio(){ return boost::math::constants::phi<arb>() - 1;}arb khinchin(){ return boost::math::constants::khinchin<arb>();}arb khinchin_levy(){ return pow(boost::math::constants::pi<arb>(), 2) / (12 * log(static_cast<arb>(2)));}arb kinkelin(){ return 1 / static_cast<arb>(12) - log(boost::math::constants::glaisher<arb>());}arb knuth(){ return (1 - (1 / boost::math::constants::root_three<arb>())) / 2;}arb levys(){ return exp(pow(boost::math::constants::pi<arb>(), 2) / (12 * log(static_cast<arb>(2))));}arb liebs(){ return (8 * 
boost::math::constants::root_three<arb>()) / 9;}arb lochs(){ return (6 * log(static_cast<arb>(2)) * log(static_cast<arb>(10))) / pow(boost::math::constants::pi<arb>(), 2);}arb niven(){ arb c; for(unsigned long long j = 2;; ++j) { const arb last = c; c+= 1 - 1/boost::math::zeta<arb>(j); if(c == last) break; } return c + 1;}arb nortons(){ const arb PI = boost::math::constants::pi<arb>(), EULER = boost::math::constants::euler<arb>(), GLAISHER = boost::math::constants::glaisher<arb>(), PI_SQR = pow(PI, 2), LOG_TWO = log(static_cast<arb>(2)); return -((PI_SQR - 6 * LOG_TWO * (-3 + 2 * EULER + LOG_TWO + 24 * log(GLAISHER) - 2 * log(PI))) / PI_SQR);}arb omega(){ arb omega; std::string precomp, postcomp; for(omega = 0; omega == 0 || precomp != postcomp;) { precomp = static_cast<std::string>(omega); precomp.resize(PRECISION); omega -= ((omega * exp(omega)) - 1) / (exp(omega) * (omega + 1) - ((omega + 2) * (omega * exp(omega) - 1) / ((2 * omega) + 2))); postcomp = static_cast<std::string>(omega); postcomp.resize(PRECISION); } return omega;}arb one(){ return 1;}arb pi(){ return boost::math::constants::pi<arb>();}arb plastic_number(){ return (boost::math::cbrt<arb>(108 + 12 * sqrt(static_cast<arb>(69))) + boost::math::cbrt<arb>(108 - 12 * sqrt(static_cast<arb>(69)))) / static_cast<arb>(6);}arb pogsons(){ return pow(100, 1 / static_cast<arb>(5));}arb polyas_random_walk(){ const arb PI = boost::math::constants::pi<arb>(), PI_CBD = pow(PI, 3), ROOT_SIX = sqrt(static_cast<arb>(6)); return 1 - 1/((ROOT_SIX / (32 * PI_CBD)) * boost::math::tgamma<arb>(1 / static_cast<arb>(24)) * boost::math::tgamma<arb>(5 / static_cast<arb>(24)) * boost::math::tgamma<arb>(7 / static_cast<arb>(24)) * boost::math::tgamma<arb>(11 / static_cast<arb>(24)));}arb porters(){ const arb PI = boost::math::constants::pi<arb>(), GLAISHER = boost::math::constants::glaisher<arb>(); return ((6 * log(static_cast<arb>(2)) * (48 * log(GLAISHER) - log(static_cast<arb>(2)) - 4 * log(PI) - 2)) / pow(PI, 2)) - (1 / static_cast<arb>(2));}arb prince_ruperts_cube(){ return (3 * boost::math::constants::root_two<arb>()) / 4;}arb pythagoras(){ return boost::math::constants::root_two<arb>();}arb robbins(){ const arb PI = boost::math::constants::pi<arb>(), ROOT_TWO = boost::math::constants::root_two<arb>(), ROOT_THREE = boost::math::constants::root_three<arb>(); return ((4 + 17 * ROOT_TWO - 6 * ROOT_THREE - 7 * PI) / 105) + (log(1 + ROOT_TWO) / 5) + ((2 * log(2 + ROOT_THREE)) / 5);}arb sierpinski_k(){ const arb PI = boost::math::constants::pi<arb>(), E = boost::math::constants::e<arb>(), EULER = boost::math::constants::euler<arb>(); return PI * log((4 * pow(PI, 3) * pow(E, 2 * EULER)) / pow(boost::math::tgamma<arb>(0.25), 4));}arb sierpinski_s(){ const arb PI = boost::math::constants::pi<arb>(), E = boost::math::constants::e<arb>(), EULER = boost::math::constants::euler<arb>(); return log((4 * pow(PI, 3) * pow(E, 2 * EULER)) / pow(boost::math::tgamma<arb>(0.25), 4));}arb silver_ratio(){ return boost::math::constants::root_two<arb>() + 1;}arb theodorus(){ return boost::math::constants::root_three<arb>();}arb twenty_vertex_entropy(){ return (3 * boost::math::constants::root_three<arb>()) / 2;}arb weierstrass(){ const arb PI = boost::math::constants::pi<arb>(), E = boost::math::constants::e<arb>(); return (pow(2, static_cast<arb>(1.25)) * sqrt(PI) * pow(E, PI / 8)) / pow(boost::math::tgamma<arb>(0.25), 2);}arb wylers() { const arb PI = boost::math::constants::pi<arb>(); return (9 / (8 * pow(PI, 4))) * pow(pow(PI, 5) / 1920, 0.25);}arb zero(){ return 0;}int 
main(){ std::cout << std::fixed << std::setprecision(PRECISION) << wylers() << '\n'; return 0;}Code breakdownFrom top to bottom (-ish):I use Boost for the following purposes:To allow for arbitrary precision, via cpp_dec_float.To take advantage of Boost's pre-built constants - such as Pi, the Khinchin constant, etc.To take advantage of Boost's special functions.To use Boost's cpp_dec_float, I define a type arb. arb's precision is determined at compile-time, so I use const PRECISION. PRECISION is meant to be manually changed depending on what's needed. I also have the variable INACC, which adds precision to arb. By having INACC, I prevent larger inaccuracy (last 4 digits), but the constants may still be inaccurate in the last two digits because of rounding. So it goes.Each constant's function is independent of one another - and this is how I want it to be. I want it to be so that someone can copy the function code, the declaration of the arbitrary type (arb), and the required headers - that's it.Often, some constants require summation or successive approximation (such as Newton's method or Halley's method). To achieve this, I use a for-loop. The problem: it will go forever (& exceeding the accuracy of PRECISION) if unchecked. This results in two situations:I use const arb last to compare the variable before and after calculations. You can see this in action in alladi_grinstead().I use std::strings, by statically casting the variable, and then resizing it to PRECISION. You can see this in action in dottie(). The reason I have to use this method is because without resizing the variable, it will work with extended precision (beyond PRECISION, because of INACC) and continue forever (or, at least way longer). If I attempt to use the first method described, it doesn't work.Constants can be printed in conjunction with std::setprecision(PRECISION) - std::fixed isn't required, but it helps.Conclusion and Misc.For further detail on the mathematics I use, my web page expands on such things.I tested this on Linux, using g++ with the -O3 flag. I have checked that all functions work properly to calculate the constants to any precision - well, except the Niven constant, which I couldn't find a way to verify its accuracy past 100 digits.How can I optimize my code, by improving the code's logic, structure, readability, or by improving performance. My code goes beyond the 80-char width, which is annoying because it makes me scroll horizontally a lot, however I am not sure where to break it so that it still is readable.Honestly, all improvements are useful.
Calculating a ton of mathematical constants
c++;performance;mathematics;boost;fixed point
I see some things that may help you improve your program. Use static const for performanceThe way it works right now, each function recalculates the value every time. However, these are constants, right? Each value needs only to be calculated once. Doing so will use some memory, but as I'll show later, there are also ways to mitigate that. Here's a simple modification that shows the concept:arb wylers() { static const arb PI = boost::math::constants::pi<arb>(); static const arb result = (9 / (8 * pow(PI, 4))) * pow(pow(PI, 5) / 1920, 0.25); return result;}int main(){ auto wy = wylers(); for (std::size_t trials{1'000'000}; trials; --trials) { wy = wylers(); } std::cout << std::fixed << std::setprecision(PRECISION) << wy << '\n';}Note that I'm using C++14 in the test code (only) hence, the quote character as a digit separator. Other than that, it should compile and run in any C++11 compliant environment. This test code calculating Wyler's constant one million times took over 3 minutes on my machine with the original version but just 140 ms with this version because, of course, the value was only actually calculated once and the only overhead was involved in copying result one million times.Use a templateThere is a potential problem with the code above in that if the user wants, say, both a 50 digit and 104 digit version, there's not really a way to do that because it's a single function. I'd fix that by using a template. The rewritten version would look like this:template<typename T>T wylers() { static const T PI = boost::math::constants::pi<arb>(); static const T result = (9 / (8 * pow(PI, 4))) * pow(pow(PI, 5) / 1920, 0.25); return result;}In addition to allowing the creation of multiple version, each with its own precision, the fact that its a template means that it generates no code at all unless it's actually invoked somewhere in the code. This makes the resulting binary smaller and faster.Eat your own dogfoodOne way to help assure that your software is usable is to use it yourself. The phrase Eat your own dogfood refers to a company using its own products. For example, why store an additional private copy of PI? So starting with the proposal above:static const T PI = boost::math::constants::pi<arb>();static const T result = (9 / (8 * pow(PI, 4))) * pow(pow(PI, 5) / 1920, 0.25);we can instead write it like this:template<typename T>const T &wylers() { static const T result = (9 / (8 * pow(pi<T>(), 4))) * pow(pow(pi<T>(), 5) / 1920, 0.25); return result;}Using this version, our million calculations take 9 ms on my machine. Naturally, we also need the templated version of pi():template<typename T>T pi(){ return boost::math::constants::pi<T>();}In this case, I decided not to use the static const trick because I would expect that the boost version of a templated pi is probably already very efficient.Note that this also requires that pi must be defined before any use which means your neat alphabetic order would be disturbed.Return a const reference where practicalThe current routines always make a copy even when none is needed. It would probably make more sense to return a const reference instead which would then only force a copy when one is really needed. For instance the boost code for number includes this operator:cpp_dec_float& operator*=(const cpp_dec_float& v);Because the other argument is const &, we might guess that the multiplication can be done without making a copy. Indeed, examining the source code for that function verifies this hunch. 
The effect is then that we can write things like this:arb pi_5 = 1;for (int i=5; i; --i) pi_5 *= pi<arb>(); // no need to make a copy of pi hereand they won't be forced to make multiple copies of pi.Break templates into pieces where neededFor functions like omega(), the static const trick won't work because the value is calculated in a loop and can't be const. One simple way of dealing with this is by splitting the function into two pieces:template<typename T>T omega_helper(){ T omega; std::string precomp, postcomp; for(omega = 0; omega == 0 || precomp != postcomp;) { precomp = static_cast<std::string>(omega); precomp.resize(PRECISION); omega -= ((omega * exp(omega)) - 1) / (exp(omega) * (omega + 1) - ((omega + 2) * (omega * exp(omega) - 1) / ((2 * omega) + 2))); postcomp = static_cast<std::string>(omega); postcomp.resize(PRECISION); } return omega;}template<typename T>const T& omega(){ static const T omega = omega_helper<T>(); return omega;}Add your constants to boostSince you've invested some time and effort into this already, you might want to actually add the constants to boost (either your local copy or submit it for inclusion). That mechanism is detailed in the boost docs.
_webapps.90702
Right now I'm uploading a somewhat large MP4 video to YouTube (3 GB). It's been over an hour and it's nearly halfway there.
In any case, I got disconnected (approx. 5-10 times) while doing so, due to a faulty network adapter driver. It's now been fixed.
While the upload process seems to resume from the previous point, I'm a bit distrustful about whether the transferred file will contain any errors, because I've yet to see a web upload form which gets the resume functionality right.
Question: Is the uploaded video going to be a perfect, bit-by-bit copy of the file I have on disk, even if the internet connection went down several times during the transfer? If some sort of corruption is likely to happen, I'd rather manually restart it right now.
Will my video contain errors if connection is lost while uploading to YouTube
youtube;video;upload
The YouTube upload engine is very powerful. Since I asked this question, I have witnessed a number of occasions in which the upload process was interrupted and then resumed without any sort of error.
I cannot testify beyond doubt whether the uploaded file is bit-identical, but it most likely is.
_webapps.86508
We are interested in using the free version of Cognito Forms. In what situations will the 1% fee apply while on this plan? Also, when someone fills out the form and has attachments, can those attachments accompany the reply email on the free plan?
Cognito Forms Pricing and Attachment Questions
cognito forms
null
_softwareengineering.148458
I am trying to find out the best practice for organising PreparedStatement strings in an application which uses JDBC to connect to the database.
Usually this is the way I retrieve data from the database:
PreparedStatement select = null;
String prepQuery = "SELECT * FROM user JOIN details ON user.id = details.user_id WHERE details.email = ? AND details.password = ?";
select = con.prepareStatement(prepQuery);
select.setString(1, email);
select.setString(2, pass);
ResultSet result = select.executeQuery();
// Handle the result.
This is a method which retrieves a user entity using the email and a password. For all my methods this is how I keep the query strings.
The first drawback which comes to mind is that this is quite messy and difficult to manage if I'm changing the SQL provider (maybe placing all the queries in a Constants file would be a better solution). Furthermore, JPA offers tools like @NamedQuery, which is said to improve the performance of the application.
Is there a best practice when it comes to organising the query strings in an application using JDBC? Thank you!
Manage query strings in JDBC methods
java;programming practices;jdbc
Consider JDBI
If you want to work closely with JDBC, so that you create your own SQL and have your application manage the binding of the query parameters, then JDBI might be a good approach for you. Essentially, you create an annotated interface that describes your DAO, and JDBI will provide an implementation at runtime. A typical DAO could look like this:
public interface MyDAO
{
    @SqlUpdate("create table something (id int primary key, name varchar(100))")
    void createSomethingTable();

    @SqlUpdate("insert into something (id, name) values (:id, :name)")
    void insert(@Bind("id") int id, @Bind("name") String name);

    @SqlQuery("select name from something where id = :id")
    String findNameById(@Bind("id") int id);

    /**
     * close with no args is used to close the connection
     */
    void close();
}
This approach lends itself to a very lightweight handling of SQL. If your needs are more geared towards supporting multiple databases, or abstracting those databases away from the SQL and into a pure object model, then JPA is probably a better approach.
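For completeness, a hedged sketch of how such a DAO is obtained with the classic JDBI v2 API (org.skife.jdbi); the JDBC URL is a placeholder, and the newer Jdbi 3 uses different package and method names:
import org.skife.jdbi.v2.DBI;

public class Example {
    public static void main(String[] args) {
        DBI dbi = new DBI("jdbc:h2:mem:test");        // any JDBC URL or DataSource works here
        MyDAO dao = dbi.open(MyDAO.class);            // JDBI generates the implementation at runtime
        try {
            dao.createSomethingTable();
            dao.insert(1, "Brian");
            System.out.println(dao.findNameById(1));  // prints "Brian"
        } finally {
            dao.close();                              // releases the underlying connection
        }
    }
}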
_unix.280892
In bash, given these files:
$ cat noquotes.txt
s/a/b/g
$ cat quotes.txt
"s/a/b/g"
Why does
$ echo aaa | sed -e $(cat noquotes.txt)
bbb
succeed, but
$ echo aaa | sed -e $(cat quotes.txt)
sed: 1: ""s/a/b/g"": invalid command code "
fail?
If I run set -x first, then I see that
$ echo aaa | sed -e $(cat noquotes.txt)
+ echo aaa
++ cat noquotes.txt
+ sed -e s/a/b/g
bbb
and
$ echo aaa | sed -e $(cat quotes.txt)
+ echo aaa
++ cat quotes.txt
+ sed -e '"s/a/b/g"'
sed: 1: ""s/a/b/g"": invalid command code "
So it seems like there are extra quotes being inserted around the contents of noquotes.txt, but not around quotes.txt.
command substitution - cat file inserts quotes
bash;quoting;command substitution
null
_unix.340285
CentOS 6 (and probably most Linux distros) includes a man page section 1p for the POSIX specifications. man 1p sh, man 1p sed, et al. are all very useful to be able to refer to for portable shell scripting.
However, I've just noticed that these man pages on my system are from the 2003 Open Group Base Specifications! Since then there have been the 2008 edition, the 2013 edition and the 2016 edition.
How can I make the latest POSIX specifications available offline as man pages on my Linux system?
I already attempted the following:
[vagrant@localhost ~]$ set -x
++ printf '\033]0;%s@%s:%s\007' vagrant localhost '~'
[vagrant@localhost ~]$ sudo yum upgrade $(rpm -qf $(man -w 1p sh))
+++ man -w 1p sh
++ rpm -qf /usr/share/man/man1p/sh.1p.gz
+ sudo yum upgrade man-pages-3.22-20.el6.noarch
Loaded plugins: fastestmirror, security
Setting up Upgrade Process
Loading mirror speeds from cached hostfile
 * base: mirrors.evowise.com
 * extras: centos.sonn.com
 * updates: mirror.scalabledns.com
No Packages marked for Update
++ printf '\033]0;%s@%s:%s\007' vagrant localhost '~'
[vagrant@localhost ~]$
(On a related note: is there somewhere I can look up what exactly has changed between the 2003 edition and the 2016 edition?)
Install the latest POSIX man pages?
man;posix
null
_unix.255775
I am trying to make a bash script that greps for a value and outputs the count and the value together.
Something like this:
grep -c 'Thenis' example.txt > example.lst
But in that list file I want to show not only the count but also the value I searched for, like this:
Thenis: 123
Take grep value and output the count and the value together
shell script;grep
null
_reverseengineering.13115
Are there any disassemblers for Mac that are freeware? I have installed radare2 but find it difficult to use.
Are there Disassembler for Mac which supports scripting
disassembly;disassemblers
null
_unix.87234
I use PuTTY and followed this tutorial:
http://www.howtoforge.com/ssh_key_based_logins_putty
Every time I log in, I'm asked to input the passphrase. How do I avoid having to do this?
Why SSH key login asking me the Passphrase every time?
ssh;key authentication
null
_webmaster.9309
I have been searching for an image hosting website that displays a user's images in a nice, managed way. I want to upload files to my account on that image hosting website from a page on my own website. I.e., if I have a website abc.com, a user browses abc.com and uploads a file to my website; I then want to transfer the uploaded file to the image hosting website so that it can be viewed by other users of that hosting site and get better visibility to the world.
Which image sharing websites supports file uploading dynamically via api
api;image hosting
Imgur does: http://api.imgur.com/
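As a rough illustration only - the endpoint, the auth header and the field name follow the current v3 Imgur API and may well change, so verify against http://api.imgur.com/ before relying on them; YOUR_CLIENT_ID is a placeholder you get by registering an application:
curl -H "Authorization: Client-ID YOUR_CLIENT_ID" \
     -F "image=@/path/to/picture.jpg" \
     https://api.imgur.com/3/image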
_unix.222505
I have a floating profile on a server. There's no local account; all machines load the same profile. I tried to use a public key for automatic login without any luck. Is there any way to log in automatically without a password? I'm a standard user. Currently I use a loop to open all machines, each in a separate shell:
#!/bin/bash
for ((i=62; i<=71; i++))
do
    gnome-terminal -e "ssh mf-f3-$i"   # machine names start with mf-f3-*
done
Then I have to enter the password in all the shells. I don't want that; I want automatic login.
SSH Automatic login With Floating Profile
ssh
null
_softwareengineering.221149
I have recently been moving away from ASP.NET Websites in favor of Web Applications. More specifically I have recently been picking up MVC as an alternative to developing ASP.NET Forms websites.Something that I find highly frustrating is the constant build->change->re-build process when testing small changes to compiled code. I find myself 'bundling' several changes between builds just to avoid waiting 20-25 seconds for Visual Studio to crunch it's way through the compilation process. I created a clean MVC 4 project from the VS template and timed the build after making a single line code change, it took 19 seconds.When making a change in the code-behind in an ASP.NET website I've always found that the page is seemingly re-compiled on the fly when you next hit the page. This would seem to me to be a better environment for web development rather than VS forcing an entire rebuild after every small code change.In the past I have enjoyed the luxury of making a change, hitting refresh in the browser, and waiting 1-2 seconds for my result. Is this achievable with web applications built using Visual Studio?
Build times for small incremental changes to C# Web Applications
web applications;asp.net;performance;asp.net mvc;visual studio 2010
It seems I had overlooked the usefulness of the Edit and Continue option (Tools -> Options -> Debugging -> Edit and Continue) after struggling to get past the infernal dialog message:
Changes are not allowed in the following cases:
When the debugger has been attached to an already running process.
The code being debugged was optimized at build or run time.
The assembly being debugged is loaded as domain-neutral.
The assembly being debugged was loaded through reflection.
Despite having the option checked in the VS options, you have to enable it for each separate web application. In the project's Properties page it's on the Web tab - scroll right to the bottom to find "Enable Edit and Continue".
Making small changes using Edit and Continue isn't exactly the same as making code-behind changes to a Website project; however, it's far more palatable than restarting the app for every code change.
_unix.49319
When I wake up my computer, the internet connection automatically reconnects; however, the VPN does not. Is it possible to make it reconnect automatically (using the network manager on KDE, or at least in a way that the network manager is aware of it being connected via VPN)?
auto reconnect VPN on waking up
linux mint;sleep
null
_webmaster.18928
I want to do an A/B test of an entire site for a new design and UX with only slight changes in content (a big brand site that has good Google rankings for many generic keywords). My idea of implementation is doing a 302 redirect to the new version (placing it on a www1 subdomain) and allowing only user agents of known browsers to pass. The test version will have a "disallow all" rule in its robots.txt.
Will Google treat this favorably or do I have to use Google Website Optimizer (which will give me tracking headaches)?
Will multivariate (A/B) testing applied with 302 redirects to a subdomain affect my Google ranking?
google;a b testing;google website optimizer
null
_unix.276712
We have a device (actually two of them) running Debian 8.4, largely preconfigured by the vendor. There is a slot for an SD card, which (if present) is automatically mounted at boot.
Question is: after I manually unmount the card in order to fsck it, how can I mount it again? I can manually mount it again, but since it was mounted automatically at boot, it seems to me there should be a way to make the system mount it the same way again. I can simply reboot the system, but that doesn't seem an optimal solution.
Since systemctl | grep mmc includes this:
media-sd\x2dmmcblk0p1.mount loaded active mounted /media/sd-mmcblk0p1
it seems to me it was systemd that mounts the card at boot. But after umount that entry disappears. Systemd still is largely a mystery to me, so that knowledge doesn't help me much.
Edit: I forgot to say: there's nothing about the SD card in /etc/fstab
Edit: After boot, systemctl status 'media-sd\x2dmmcblk0p1.mount' says:
● media-sd\x2dmmcblk0p1.mount - /media/sd-mmcblk0p1
   Loaded: loaded (/proc/self/mountinfo)
   Active: active (mounted) since Fri 2016-04-15 11:47:52 UTC; 3h 2min ago
    Where: /media/sd-mmcblk0p1
     What: /dev/mmcblk0p1
After umount, it says:
● media-sd\x2dmmcblk0p1.mount
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
systemctl cat 'media-sd\x2dmmcblk0p1.mount' says nothing in both cases.
How to mount/unmount SD card that was automatically mounted at boot?
debian;mount;systemd
I think I found it.
Contrary to what I first thought, the SD card is mounted at boot by udev, not by systemd. It turns out there's a rule /etc/udev/rules.d/11-media-by-label-auto-mount.rules containing:
KERNEL!="mmcblk[0-9]p[0-9]", GOTO="media_by_label_auto_mount_end"
# Import FS infos
IMPORT{program}="/sbin/blkid -o udev -p %N"
# Get a label if present, otherwise specify one
ENV{ID_FS_LABEL}!="", ENV{dir_name}="%E{ID_FS_LABEL}"
ENV{ID_FS_LABEL}=="", ENV{dir_name}="sd-%k"
# Global mount options
ACTION=="add", ENV{mount_options}="relatime"
# Filesystem-specific mount options
ACTION=="add", ENV{ID_FS_TYPE}=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"
# Mount the device
ACTION=="add", RUN+="/bin/mkdir -p /media/%E{dir_name}", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/%E{dir_name}"
# Clean up after removal
ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/umount -l /media/%E{dir_name}", RUN+="/bin/rmdir /media/%E{dir_name}"
# Exit
LABEL="media_by_label_auto_mount_end"
So I can mount the SD card with something like:
sudo udevadm trigger -c add -y mmcblk*
Still a bit cryptic for something simple (I think), but it works.
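A couple of follow-up commands that may help while experimenting; these are standard udev/util-linux tools, but adjust the device and mount point names to your hardware:
# the same trigger written with long options, limited to the card's first partition
sudo udevadm trigger --action=add --sysname-match='mmcblk0p1'
# confirm that the mount came back
findmnt /media/sd-mmcblk0p1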
_cogsci.1017
People may have very little hesitation in spending $3 on a coffee once a week, but when it comes to buying things online, such as virtual goods or services, they are often much more reluctant. Is there a block when it comes to buying things online?
If so, why?
Is there a name for this phenomenon?
If so, how can someone selling something online (e.g. an online service provider or a software company) get around this and make customers less hesitant when shopping online?
Bias towards purchasing tangible vs virtual goods
terminology;decision making;economics;consumer psychology
null
_softwareengineering.93245
What kinds of software testing are there? I've heard about Test-Driven Development, unit tests, etc., but I can't understand their importance and differences. For example, why do we use regression tests or acceptance tests? What advantage do they provide?
Software Testing Techniques or Categories
unit testing;testing;tdd;integration tests;acceptance testing
null
_unix.39445
When I want to redirect the output of an a.out program I use
./a.out > output.txt
This doesn't work when the program reads something from stdin. How would you redirect output in this case? I can do it only with
./a.out < inputs.txt > output.txt
Can I do the same but reading inputs from stdin?
EDIT: I realized that it works, but I can't see the prompts because everything goes to the file output.txt. So the only problem is to see the prompts on the terminal and preserve the redirection at the same time.
Redirecting output of program reading from stdin
shell;command line;c
One option would be to write your prompts to stderr rather than stdout. They'll be visible on the terminal but not in output.txt.
Another option is not to use redirection for your output but take an output filename as a parameter and open that file yourself. You can then use stdout for your prompts. (This is more flexible. You can decide what goes only to the file, what goes only to the screen, and potentially what goes to both.)
If you can't change the code, the only option is to use tee or some other such utility. Buffering can be a problem; stdbuf might help with that.
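A tiny illustration of the first option in C; this is a made-up example program, not the asker's actual a.out:
#include <stdio.h>

int main(void)
{
    char name[64];

    fprintf(stderr, "Enter a name: ");   /* prompt on stderr: stays on the terminal when stdout is redirected */
    if (scanf("%63s", name) == 1)
        printf("Hello, %s\n", name);     /* real output on stdout: ends up in output.txt */
    return 0;
}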
_cs.7276
I read this question somewhere, and could not come up with an efficient answer.
A string of some length has beads fixed on it at given, arbitrary distances from each other. There are $k$ different types of beads and $n$ beads in total on the string, and each type is present at least once. We need to find one consecutive section of the string such that:
that section contains all of the $k$ different types of beads at least once;
the length of this section is as small as possible, provided the first condition is met.
We are given the positions of each bead on the string, or alternatively, the distances between each pair of consecutive beads.
Of course, a simple brute-force method would be to start from every bead (assume that the section starts at this bead), and go on until at least one instance of every bead type is found, while keeping track of the length. Repeat for every starting position, and find the minimum among them. This gives an $O(n^2)$ solution, where $n$ is the number of beads on the string. I think a dynamic programming approach would also probably be $O(n^2)$, but I may be wrong. Is there a faster algorithm? Space complexity has to be sub-quadratic. Thanks!
Edit: $k$ can be $O(n)$.
Smallest string length to contain all types of beads
algorithms;dynamic programming;search algorithms
With a little care your own suggestion can be implemented in $O(kn)$, if my idea is correct. Keep $k$ pointers, one for each colour, and a general pointer, the possible start of the segment. At each moment each of these colour pointers keeps the next position of its colour that follows the segment pointer. One colour pointer points to the segment pointer; that colour pointer is updated when the segment pointer moves to the next position. Each colour pointer in total moves only $n$ positions. For each position of the segment pointer one computes the maximal distance to the colour pointers, and one takes the overall minimum of that.
Or, intuitively perhaps simpler, let the pointers look into the past, not the future. Let the colour pointers denote the distance to the respective colours last seen. In each step add the distance to the last bead to each pointer, except the one of the current colour, which is set to zero.
(edit: answer to question) If $k$ is large, in the order of $n$ as suggested, then one may keep the $k$ pointers in a max-heap. An update of a pointer costs $\log k$ in each of the $n$ steps. We may find the max (the farthest colour, hence the interval length) in constant time, in each of the $n$ steps. So $n \log k$ total, plus initialization.
Now we also have to find the element/colour in the heap that we have to update. This is done by keeping an index of elements. Each time we swap two elements in the heap (a usual heap operation) we also swap the positions stored in the index. This is usually done when computing Dijkstra's algorithm with a heap: when a new edge is found, some distances to vertices have to be decreased, and one needs to find them.
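A small sketch of the "look into the past" formulation, assuming the beads are given as (position, colour) pairs sorted by position; this is only an illustration and uses the plain $O(nk)$ scan over the last-seen table rather than the heap-based $O(n \log k)$ refinement:
def smallest_window(beads, k):
    """beads: list of (position, colour) sorted by position; colours are 0..k-1."""
    last_seen = [None] * k              # last position at which each colour occurred
    best = None
    for pos, colour in beads:
        last_seen[colour] = pos
        if all(p is not None for p in last_seen):
            # the window ending here must start at the oldest "last seen" bead
            length = pos - min(last_seen)
            if best is None or length < best:
                best = length
    return best                         # None if some colour never occurs

# example with k = 3 colours
print(smallest_window([(0, 0), (2, 1), (5, 0), (7, 2), (8, 1)], 3))   # -> 3 (the beads at positions 5, 7, 8)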
_webmaster.13799
I thought my image host was always up and had enough bandwidth to support my images; most of my images are screenshots I took with a tool. Now apparently I've got an email saying that I'm reaching the bandwidth limit for free users, which means that if I continue to use the tool like this I will simply run out of bandwidth and a lot of my answers will then just show broken images.
Other than that, I have occasionally used a link that points to an external source.
It would be handy if I could port my images to imgur in order to prevent this problem.
How do I go about doing this in an automated manner? Perhaps this can be interesting as a solution?
If no automation is possible, or if it is prohibited, how do I form a Data Explorer query to find images?
How can I port all my images to imgur to prevent bad images across the network?
images
Don't know much about doing this, but I made you a query to look for images in your posts through Data Explorer. Let me know if something's wrong with it!
http://data.stackexchange.com/superuser/s/1376/find-posts-by-you-that-contain-images
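In case the saved query link ever goes stale, the gist of such a query is simple; this is a hedged reconstruction against the public SEDE schema (Posts.Body, Posts.OwnerUserId), not the exact saved query behind that link:
-- Data Explorer: find your posts that embed images
SELECT Id AS [Post Link], CreationDate
FROM Posts
WHERE OwnerUserId = ##UserId##
  AND Body LIKE '%<img%'
ORDER BY CreationDate DESC;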
_scicomp.5108
I'm working on helping my friend create code to apply the overlap Dirac operator, and I have come across one part where I'm not sure what to do.
I need to compute the eigenvalues and corresponding eigenvectors of a Hermitian matrix (my test is for $n=4$) so that I can implement a matrix sign function $\epsilon(M) = U \epsilon(A) U^*$, where $U$ is an eigenvector matrix, $U^*$ is the conjugate transpose of $U$, and $\epsilon(A)$ is a matrix with the signs of the eigenvalues of $M$ on its diagonal.
Any resources or help on existing algorithms/libraries to solve this (or at least make close approximations) would be appreciated. My target programming language is Scala.
Eigenvalue Decomposition of Hermitian Matrix in Scala
linear algebra;algorithms;matrices;eigenvalues
null