id | question | title | tags | accepted_answer |
---|---|---|---|---|
_unix.108568 | I'm following my own article: "Making a Wireless Hotspot". When I put in sudo add-apt-repository ppa:nilarimogard/webupd I get the message "Cannot access PPA (https://launchpad.net/api/1.0/~nilarimogard/+archive/webupd) to get PPA information, please check your internet connection." I use Cylon Linux, which basically uses the updates from Ubuntu 12.04. I'm using Ethernet. After doing it and responding to an answer, I get "Address already in use [fail] invoke-rc.d: initscript dnsmasq, action start failed." | Installing Certain PPAs | terminal | null |
_webmaster.14821 | I have done some basic web programming in the past and wish to develop my skills further and begin doing large-scale projects as a contractor. I have no experience with version control to this point, but know enough about it to know I want to use it. What version control systems are available for a web platform and which are the most popular (I'm assuming they'd be most popular for ease of use and functionality...)? Thanks! | What version control options are available for PHP web development? | php;web development | Really any VCS will do. Your major options (sorted by what I would prefer):
Mercurial - I actually use this to keep my entire website in version control on the host. It's quite easy to set up on any webserver that has Python. You can also copy and paste your own build.
Git - I tried and failed to get this working on GoDaddy. However, if you're just going to be doing it locally, it's still a respectable option. In the Mercurial vs. Git debate, it just depends on what you prefer.
Subversion - Completely different from the above. I've never tried to keep a website in svn, but it is an option. Really though I would recommend this. |
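A minimal sketch of the Mercurial workflow the answer describes (the path is a placeholder, and this assumes hg is already installed on the host):

cd /var/www/mysite                 # hypothetical document root
hg init                            # turn the live site into a Mercurial repository
hg add .                           # start tracking the existing site files
hg commit -m "initial import of the site"
hg status                          # later: review changes before the next commit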
_webapps.60438 | Terminology - I use "spreadsheet" for a document and "sheet" for a sheet within the document. TL;DR: If several users must work on data stemming from the same source, which is fastest: 1. one Google Account for all users, one spreadsheet and different sheets for the users; 2. one GA, different spreadsheets for the different users; 3. different GAs for the different users. Long version: Let's say I have a document with sheet "A", containing a table with several columns - ID, Name, Shares etc. I want to copy this data several times so that this data can be accessed by different people. On their view (be that in a sheet within the same document or in another document) the data will be combined with other things. So if I have columns A, B, and C in the original source, they will be copied to columns A, B, and C in the users' view and the users will enter data / modify columns D, E, F, etc. I have several options: 1. Create sheets B, C, D in the same spreadsheet. This way persons 1, 2 and 3 will use the same Google Account, the same document and only use different sheets. 2. Create new spreadsheets in the same Google Account and use Importrange to import the data. Each user works in a separate document, but from Google's point of view it is the same user/account, logged in from different devices and working on different documents. 3. Create separate Google Accounts, within them create spreadsheets and then import using Importrange. As this is an in-house thing, authorisations/permissions are not an issue, so it's not a problem if the users use the same Google Account. My purpose is to use the most lightweight option, because the users will be using relatively cheap Android tablets, which do not possess much processing power or fast wifi cards. I have two things to consider: as I have tried to make things as automatic as possible, I have a lot of index/match formulas, importrange formulas and validations, plus a source table of more than 4500 rows. This may not sound a lot, but it definitely makes a difference on a tablet. So I guess putting too much logic in a single spreadsheet is not a good idea. Having the same spreadsheet opened by many people is also not very good, as updating changes made by one user on the screens of other users takes (relatively) long. At the beginning I went with option 1 (one account, one document, many users with many sheets), but the Google Drive app would constantly crash on the tablets. Currently I use option 2 - one Google Account, different spreadsheets for the different users - which is much better. The main logic happens in one spreadsheet and each user opens a separate spreadsheet, where the data is copied. However, I don't know if it's a problem that one Google Account is used from several devices at the same time. Does it make sense to create separate accounts for the different users from a performance point of view? This will certainly introduce a lot of overhead for me (keeping records of usernames/passwords, logging users in etc.), but if it will be faster I would go with it. | What is the fastest way to use Google Spreadsheets by several users - from the same account or from different accounts? | google spreadsheets;google drive | In an arrangement where each of N users' changes propagate to all the other users' screens, that incurs N² propagation costs (network messages, recalculations, and display updates). That grows rapidly as N grows. So if N > 3 you want to avoid it for sure. A separate sheet per user might avoid most of the updates, but the best bet is to use a separate spreadsheet document for each user. Given that, using a separate Google Account per user might improve performance a bit by avoiding synchronized updates to some shared state (e.g. the user's document list). Also this is the normal case, unlike one account editing docs from many tablets at once, and any optimizations will favor the normal case. You'd have to measure it to be sure, but my semi-informed bet is that you won't notice the performance difference. Giving each user a separate login account would help with tracking changes, but that might not be worth your setup work. Idea: If all the columns are uniform within each sheet, that is, if the data is like a simple sequence of records, look into using Google Fusion Tables in place of spreadsheets. Fusion Tables scale up to very large data sets since the rows are independent of each other. You can publish columns from one table to other tables. Idea: If this is not a temporary application, consider replacing the spreadsheets with a custom implementation as a web app or native Android apps. Even then, it's good to prototype your application with spreadsheets as a way to discover what really matters to your use. |
_unix.122593 | How would I start an application (by launcher) the exact same way as GNOME would, in a command-line interface? I want to set some environment variables. I know I can check the launcher file for the 'Exec' command, but for some reason that command makes my application crash, while when it's launched through GNOME it works fine. | How to start an application as GNOME would by command line? | gnome3;command | null |
_unix.3171 | What is the debian_chroot variable in my bashrc file? And what is it doing here? PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ ' | What is $debian_chroot in .bashrc? | bash;ubuntu;debian | null |
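No accepted answer is recorded here, but the ${var:+word} form in that prompt is standard bash parameter expansion, which a quick experiment shows (the chroot name below is made up):

debian_chroot=squeeze-build                          # hypothetical value; Debian sets it from /etc/debian_chroot
echo "${debian_chroot:+($debian_chroot)}user@host"   # -> (squeeze-build)user@host
unset debian_chroot
echo "${debian_chroot:+($debian_chroot)}user@host"   # -> user@host  (expands to nothing when unset or empty)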
_unix.13487 | According to http://16s.us/Linux/dm-crypt.txt, it's better to use only dm-crypt, because LUKS leaves a trace. That's OK. But what are the most secure cryptsetup parameters to use? E.g. aes-cbc-essiv:sha512 is better than the default. Also, the default cryptsetup uses 256-bit AES, but it could be (max?) 512-bit. Which parameters create an encrypted partition with the most security? | What are the most secure cryptsetup parameters to create an encrypted filesystem? | security | Hiding the fact that you are using encryption is very hard. Consider that some minimal decryption software (like in initrd) must be stored in plain text somewhere. Seeing that and a disk full of random data, people might find out. If you can't prevent that, you might as well take advantage of LUKS. For example, if you have multiple users, they can have their own passwords. About the cipher modes and algorithms: AES is the most widely used algorithm and it supports 128, 192 and 256 bit keys. None of them are even close to being broken. CBC means cipher block chaining. It should not be used for disk encryption because it is vulnerable to watermarking. Use the XTS mode instead. Essiv means that the IV is secret too. This also prevents watermarking. SHA512 is a hashing algorithm used to generate the encryption key from the password. It is considered secure. You may want to look at loop-aes. It has no header, uses multiple 64x256 bit keys and you can use an external disk (like a pendrive) for storing the keys encrypted with your password. Unfortunately it requires kernel patching. BTW I agree with the comments. There is no "most secure" way. Think about what are you trying to protect and what kind of attacks do you expect. |
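Following that advice (XTS instead of CBC), a hedged sketch of what such a format command can look like; the device name is a placeholder and the parameters are illustrative, not an endorsement:

# for XTS a 512-bit key is two 256-bit halves; compare throughput with: cryptsetup benchmark
cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 --hash sha512 /dev/sdXn
cryptsetup luksOpen /dev/sdXn secret       # then make a filesystem on /dev/mapper/secret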
_reverseengineering.8247 | I'm trying to RE an application which shows a specific number after hitting a button. The number comes from a file after parsing it. Input = 5, number displayed = 5 and so on. I was able to track down the disassembled code where the number is being loaded: v3 = *(_DWORD *)(*((_DWORD *)AfxGetModuleState() + 1) + 164); ATL::CStringT<char,StrTraitMFC_DLL<char,ATL::ChTraitsCRT<char>>>::Format(v2 + 168, "%u", *(_DWORD *)(v13 + 1032)); The last part v13 + 1032 told me where to look on the stack and indeed I found the number. Stack overview:
00188F7C  0000000
00188F80  0000004
00188F84  0000001
00188F88  000001D
Now my question is: Is there any way in IDA to show me the code which is putting those 4 values on the stack? The surrounding lines (lines 1, 3 and 4) always have the same values. | Finding responsible part of code writing specific values on stack | ida | Set a code breakpoint at the point where the stack frame is established and another one at the point where your target value gets used (and hence must have been written already). In the condition script for the first breakpoint you can add a hardware write breakpoint for your target location; its absolute address can be computed at that point because you have the actual values of ESP and EBP. The second breakpoint is for deleting that hardware breakpoint. In the condition script for the hardware breakpoint you can do whatever you want: check for a specific value being written, check surrounding values etc. Note: CheatEngine was written expressly for tasks like that. You might get results quicker and more easily if you get CheatEngine instead of trying to make sense of IDA's meagerly documented - and often quite bizarre - interfaces. Here's a rudimentary script that you can adapt by modifying the test in target_check_() and adapting the stack offset in set_target_breakpoint_(). The values there are from a quick test that I did to ensure that the code works. This is for IDA 6.7; it will definitely not work with the free IDA (v5.0).
#include <idc.idc>
static main ()
{
    set_helper_breakpoints_(LocByName("test_ufuncs_t"), 0);
}
static target_check_ (ea)
{
    auto e;
    try
    {
        Message("DbgDword(%x): %x @ EIP %x\n", ea, DbgDword(ea), EIP);
        return DbgDword(ea) == 0x410A10;
    }
    catch (e)
    {
        Message("error: %s\n", e.description);
    }
    return 0;
}
static set_target_breakpoint_ (term_bpt)
{
    auto target_ea = ESP - 0x70;
    Message("target_ea %a\n", target_ea);
    SetBptCndEx(term_bpt, form("DelBpt(0x%x) & 0", target_ea), 0);
    AddBptEx(target_ea, 4, BPT_WRITE);
    SetBptCndEx(target_ea, form("target_check_(0x%x)", target_ea), 0);
    return 0;
}
static set_helper_breakpoints_ (init_bpt, term_bpt)
{
    if (term_bpt <= 0)
        term_bpt = FindCode(GetFunctionAttr(init_bpt, FUNCATTR_END), SEARCH_UP);
    Message("set_helper_breakpoints(): %s %s\n", GetFuncOffset(init_bpt), GetFuncOffset(term_bpt));
    AddBptEx(init_bpt, 0, BPT_DEFAULT);
    AddBptEx(term_bpt, 0, BPT_DEFAULT);
    SetBptCndEx(init_bpt, form("set_target_breakpoint_(0x%x)", term_bpt), 0);
    SetBptCndEx(term_bpt, 0, 0);
} |
_cs.45083 | Assume the CPU has 64 data lines. Then Z reading cycles will be needed to load an array of 12 double-precision floating-point numbers, each number coded in eight bytes, from the main memory into the CPU. Z = ? So I'm looking at some of the past exam questions to prepare for my exams next year and this question (above) has stumped me. I genuinely don't know how to work this out. I know that a 64-bit machine is a lot faster than a 32-bit one, but that's about it. Can anyone explain this? (Assume I am an idiot, please.) | CPU reading cycles | computer architecture;memory access | null |
_vi.5649 | I'm using MacVim on OSX 10.11 El Capitan, which still has some issues with El Capitan's new Split Screen feature; namely, MacVim automatically resizes to 191 columns, taking up most of the screen real estate. To fix this, I have to manually set columns=95, which works perfectly, except that then the entire vim window goes black. The easiest way to fix that is <C-l> to redraw the screen. So, I have the following mapping set up in my .vimrc: nnoremap <Leader>ss :set columns=95<CR><C-l> Problem is, the <C-l> command seems to come too soon after the screen is resized, so it remains black and I still have to redraw the screen manually. Is there any way to delay the execution of the final <C-l> command by a few ms or so (or, even fancier, to wait until the screen has been fully resized) so that I can do this all in one fell swoop? Thanks in advance. | Can I add a delay/wait to a key mapping? | key bindings | You could use the built-in sleep command (see :h :sleep). :sleep 2<CR> lets Vim sleep for 2 seconds, :sleep 200m<CR> for 200 milliseconds. There is also the "go sleep" command gs in normal mode, e.g. 2gs. |
_unix.205697 | I'm writing an audit script to find all files with the SUID & SGID bit set on the system, using the command below: find / -perm /u=s,g=s The script will run as a non-root user. Will this user be able (have permission) to search for all files with the SUID/SGID bit set? If not, which specific permission would need to be granted to the user to accomplish this? The script would be run mainly on an RHEL system. | Non-root privileged user to find all files with SUID & SGID bit... permission required? | permissions;rhel;not root user;audit | null |
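There is no accepted answer recorded here; for what it's worth, a non-root find simply skips directories the user cannot enter, so a sketch of running the (corrected) command while discarding those permission errors:

find / -perm /u=s,g=s -type f 2>/dev/null    # stderr holds the "Permission denied" noise
# note: the resulting list can be incomplete -- anything under directories
# lacking +x for this user is silently missed, so such audits are usually run as root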
_unix.261864 | I am on Arch Linux and I'm trying to make a cron job that fires every minute. So I use: $ crontab -e and add the script in:
* * * * * Rscript /srv/shiny-system/cron/CPU.R
~
~
"/tmp/crontab.8VZ7vq" 1 line, 47 characters
(I have no idea what that /tmp/crontab.8VZ7vq is!) But it is not working - CPU.R is not running every minute. What should I do then in Arch Linux to run the cron job? I have looked into these wiki guides below but I am still lost: https://wiki.archlinux.org/index.php/Cron https://wiki.archlinux.org/index.php/Systemd/Timers Edit: I found some hints from here regarding crond.
[xxx@localhost ~]$ systemctl status crond
crond.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
[xxx@localhost ~]$ sudo systemctl start crond
[sudo] password for xxx:
Failed to start crond.service: Unit crond.service failed to load: No such file or directory.
What does this mean? Where should I put this crond.service and what script should I put in it? | Arch Linux - How to run a cron job? | arch linux;cron | There is no crond.service on Arch Linux. As the Arch Wiki makes perfectly clear: "There are many cron implementations, but none of them are installed by default as the base system uses systemd/Timers instead." Consequently, if you want to use cron, you have to choose which of the many implementations you will install, and then start that specific service. You don't just randomly type systemctl enable nonexistent.service and then wonder why it isn't running... If you want cronie, then you install cronie and start it with:
pacman -Syu cronie
systemctl enable --now cronie.service
The Arch documentation is generally very clear; if you read the pages you linked to more carefully, you should find out what you need. |
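The systemd/Timers page the asker linked offers a cron-free alternative; a minimal sketch (the unit file names and contents below are made up for illustration):

# /etc/systemd/system/cpu-report.service -- runs the script once
#   [Service]
#   ExecStart=/usr/bin/Rscript /srv/shiny-system/cron/CPU.R
# /etc/systemd/system/cpu-report.timer -- fires it every minute
#   [Timer]
#   OnCalendar=*-*-* *:*:00
#   [Install]
#   WantedBy=timers.target
systemctl enable --now cpu-report.timer    # then check with: systemctl list-timers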
_scicomp.26647 | I have encountered the terms "cell-averaged derivatives" and "Taylor series expansion using cell-averaged derivatives about the centroid" in the context of polynomial representation in any cell (a computational element with some higher-order interpolation polynomial nodes). Here is the link to the paper: http://dept.ku.edu/~cfdku/papers/AIAA-2009-605.pdf I have also read the reference of this reference paper, http://people.math.gatech.edu/~yingjie/publications/centr_DG_ovlpcell_new_iii.pdf, Section 4.1. I tried to find alternate references and easier sources of explanation, but everywhere it has been mentioned that the coefficients of the above Taylor series expansion can be found easily. However, I am still unable to grasp the procedure properly. It would be nice if someone could help me understand what seems to be trivial for everyone else. | How to perform Taylor series expansion consisting of cell averaged derivatives in a computational element? | fluid dynamics;interpolation;discontinuous galerkin | null |
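No answer is recorded; as rough context only, the expansion that line of work builds on has the following shape in my paraphrase of the Taylor-basis DG literature (to be checked against Section 4.1 of the second reference): over a cell \Omega_i with centroid (x_c, y_c),

u_h(x,y) \approx \bar{u} + \overline{u_x}\,(x - x_c) + \overline{u_y}\,(y - y_c) + \overline{u_{xx}} \left( \frac{(x-x_c)^2}{2} - \frac{1}{|\Omega_i|} \int_{\Omega_i} \frac{(x-x_c)^2}{2}\, d\Omega \right) + \dots

where \bar{u} is the cell average and the barred derivatives are derivatives averaged over the cell; subtracting each quadratic basis function's own cell mean keeps \bar{u} equal to the true cell average, which is why the expansion coefficients "can be found easily" from the averaged derivatives.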
_vi.12602 | Is there a command to paste (p) text with a space inserted before that text, like in join? I don't want to achieve that by mapping. Normally p pastes text right at the cursor's position in Normal mode. Switching modes and inserting a space manually every time can be frustrating. | Paste with space inserted before text | cut copy paste;normal mode | null |
_unix.32728 | I started downloading a big file and accidentally deleted it a while ago. I know how to get its current contents by cp-ing /proc/<pid>/fd/<fd>, but since the download is still in progress it'll be incomplete at the time I copy it someplace else. Can I somehow salvage the file right at the moment the download finishes but before the downloader closes the file and I lose it for good? | Recover deleted file that is currently being written to | linux;filesystems;data recovery;open files | Using tail in follow mode should allow you to do what you want: tail -n +0 -f /proc/<pid>/fd/<fd> > abc.deleted I just did a quick test and it seems to work here. You did not mention whether your file was a binary file or not. My main concern is that it may not copy from the start of the file, but the -n +0 argument should do that even for binary files. The tail command may not terminate at the end of the download, so you will need to terminate it yourself. |
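To locate the right <pid> and <fd> in the first place, the deleted-but-open file is still visible under /proc; a sketch (the process name is a placeholder, and the fd number comes from the listing):

pid=$(pgrep -f my-downloader)              # hypothetical downloader process
ls -l /proc/$pid/fd | grep '(deleted)'     # the right fd's link target ends in "(deleted)"
tail -n +0 -f /proc/$pid/fd/42 > salvaged  # 42 = the fd number found above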
_unix.83795 | How do I reduce these two commands to one? I am connecting from my client to an x11vnc server and the two commands below already work. I just wish to do it in one step. First command: ssh -fNL 5901:localhost:5678 -i ~/.ssh/some_id_rsa [email protected] The second: vncviewer localhost:5901 From reading the man page, it seems like the -via option might do it. Unfortunately, the man page leaves me very confused. For reference (not that I understand it) here is what my man page says:
-via gateway    Automatically create encrypted TCP tunnel to the gateway machine before connection, connect to the host through that tunnel (TightVNC-specific). By default, this option invokes SSH local port forwarding, assuming that the SSH client binary can be accessed as /usr/bin/ssh. Note that when using the -via option, the host machine name should be specified as known to the gateway machine, e.g. "localhost" denotes the gateway, not the machine where vncviewer was launched. The environment variable VNC_VIA_CMD can override the default tunnel command of "/usr/bin/ssh -f -L $L:$H:$R $G sleep 20". The tunnel command is executed with the environment variables L, H, R, and G taken the values of the local port number, the remote host, the port number on the remote host, and the gateway machine respectively.
| Establishing client VNC connection over SSH in one step (e.g., with the -via option) | ssh;vnc;ssh tunneling | This is what the man page is trying to say. I have the following setup:
[ASCII-art diagram, drawn with asciio: vncviewer (laptop) -> internet -> gateway -> vncserver]
The vncviewer is running from my laptop. From my laptop I can run the following command and connect to the vncserver which is behind my router: $ vncviewer vncserver_host:0 -via mygateway.mydom.com This will instantly connect me to the vncserver. This command is shown on my laptop, which helps to show what the man page is trying to explain: /usr/bin/ssh -f -L 5599:vncserver_host:5900 mygateway.mydom.com "sleep 20" This is the command that vncviewer is automatically constructing when you use the -via gateway switch.
Including ssh configurations: You can make use of the ~/.ssh/config file and put entries in this file like this:
Host *
    IdentityFile ~/.ssh/id_dsa
Or you can target a specific host like this:
Host mygateway
    User sam
    HostName mygateway.mydom.com
    IdentityFile ~/.ssh/someother_id_rsa
This will allow you to leverage the Host entries in this file like this: $ vncviewer vncserver_host:0 -via mygateway |
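Applied back to the question's own setup (its key file and port 5678; "user" and "gateway" below are placeholders), one way to fold the -i flag into a single command is the VNC_VIA_CMD override the man page describes:

# $L, $H, $R and $G are filled in by vncviewer, as documented above (note the single quotes)
export VNC_VIA_CMD='/usr/bin/ssh -i ~/.ssh/some_id_rsa -f -L $L:$H:$R $G sleep 20'
vncviewer -via user@gateway localhost::5678   # host::port form; x11vnc listens on 5678 on the gateway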
_vi.11349 | I'm brushing up on my vim skills and fiddling with completion using the Ctrl-n command. I'm programming in Perl. When I am in insert mode and hit Ctrl-n, I get a message that it is "Scanning included file" down at the bottom of the terminal, and it looks like it's scanning files in the Perl distribution directories. This can take several seconds before it finishes. It looks like it does this with packages I include that are commented out: #use Moose; So I have to delete these lines for this feature to be of any value. Am I missing something? I am using the perl-support plugin. | Ctrl-n completion takes a long time for scanning included file | autocompletion | By default, when you hit C-n or C-p, Vim looks inside various sources to find candidates which will populate the completion menu. These sources can be configured with the buffer-local 'complete' option. The value of this option is a comma-separated list of flags. Each flag has its own meaning, described in :h 'cpt':
.        scan the current buffer ('wrapscan' is ignored)
w        scan buffers from other windows
b        scan other loaded buffers that are in the buffer list
u        scan the unloaded buffers that are in the buffer list
U        scan the buffers that are not in the buffer list
k        scan the files given with the 'dictionary' option
kspell   use the currently active spell checking |spell|
k{dict}  scan the file {dict}. Several k flags can be given, patterns are valid too. For example: :set cpt=k/usr/dict/*,k~/spanish
s        scan the files given with the 'thesaurus' option
s{tsr}   scan the file {tsr}. Several s flags can be given, patterns are valid too.
i        scan current and included files
d        scan current and included files for defined name or macro |i_CTRL-X_CTRL-D|
]        tag completion
t        same as ]
By default, its value is .,w,b,u,t,i, which means: 1. the current buffer, 2. buffers in other windows, 3. other loaded buffers, 4. unloaded buffers, 5. tags, 6. included files. If you find that scanning the included files takes too much time, you could try to remove the i flag from the 'cpt' option. If you wanted to remove it from the global value, to affect all the buffers by default, you would write in your vimrc:
setglobal complete-=i
If you wanted to do the same thing, but only for Perl files, you could install an autocmd inside your vimrc:
augroup PerlSettings
    autocmd!
    autocmd FileType perl setlocal complete-=i
augroup END
Or better, you could create a filetype plugin, for example in ~/.vim/after/ftplugin/perl.vim, in which you would simply write:
setlocal complete-=i
To check out what the current global and local values of your 'complete' option are, and where they were last set, you could use these commands:
verbose setglobal complete?
verbose setlocal complete?
Or shorter:
verb setg cpt?
verb setl cpt? |
_softwareengineering.81106 | I'm one of those developers that has the mindset that the code written should be self-explanatory and read like a book. HOWEVER, when developing library code for other people to use I try to put as much documentation in the header files as possible; which brings up the question: Is documenting methods that are non-public even worth the time? They won't be using them directly; in fact, they can't. At the same time, if I distribute the raw code (albeit reluctantly), those non-public methods will be visible and may need explaining. Just looking for some standards and practices that you all see or use in your travels. | Code documentation: Public vs. Non-Public? | c++;documentation | I would never consider omitting documentation for internals just because an end user won't be using them; code maintenance is more than enough reason to include documentation comments for all components, in fact especially for internals, which tend to be the most complex (and oft-changing) part. That said, there may be a valid case to be made for keeping them restricted to the non-header source code (rather than publicly documented), in order to maintain the abstraction. This is all rather subjective, mind you. |
_webapps.101062 | I'd like to be able to set up 2-factor authentication to better protect my account. But I'd like to be able to access it with any of my several Android devices. Is this possible? And is it possible to perhaps set this up without the Facebook app, for example just with Google Authenticator? | How to set up Facebook 2-factor authentication with multiple devices | facebook;authentication;google authenticator;two factor authentication | null |
_cs.7759 | It is said that computability theory is also called recursion theory. Why is it called that? Why does recursion have this much importance? | Importance of recursion in computability theory | computability;terminology;history | In the 1920's and 1930's people were trying to figure out what it means to "effectively compute" a function (remember, there were no general-purpose computing machines around, and computing was something done by people). Several definitions of "computable" were proposed, of which three are best known: 1. the $\lambda$-calculus, 2. recursive functions, 3. Turing machines. These turned out to define the same class of number-theoretic functions. Because recursive functions are older than Turing machines, and the even older $\lambda$-calculus was not immediately accepted as an adequate notion of computability, the adjective "recursive" was used widely (recursive functions, recursive sets, recursively enumerable sets, etc.). Later on, there was an effort, popularized by Robert Soare, to change "recursive" to "computable". Thus we nowadays speak of computable functions and computably enumerable sets. But many older textbooks, and many people, still prefer the "recursive" terminology. So much for the history. We can also ask whether recursion is important for computation from a purely mathematical point of view. The answer is a very definite "yes!". Recursion lies at the basis of general-purpose programming languages (even while loops are just a form of recursion because "while p do c" is the same as "if p then (c; while p do c)"), and many fundamental data structures, such as lists and trees, are recursive. Recursion is simply unavoidable in computer science, and in computability theory specifically. |
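The while-as-recursion identity in that answer is easy to make concrete; a small shell sketch (p and c stand for arbitrary test and body commands, instantiated here with a countdown):

i=3
loop() { if [ "$i" -gt 0 ]; then echo "$i"; i=$((i-1)); loop; fi; }   # = while [ $i -gt 0 ]; do ...; done
loop    # prints 3 2 1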
_cstheory.22730 | In the first edition of Introduction to Algorithms (Cormen et al., MIT Press, 1990), the discussion of parallel algorithms is based on the PRAM model. In the second edition, parallelism has been eliminated, but in the third edition (Cormen et al., MIT Press, 2009), the topic is reintroduced, but with a dynamic threading model (based on Cilk). The chapters are very different, for sure, and the models seem to be as well, at least superficially. But I'm wondering: what are the differences in the underlying computational model or abstract machine here? Their underlying model is still a shared-memory RAM machine with multiple processors. How is this different from the PRAM? Is it the case, perhaps, that they are in fact using the same underlying model, but approaching it differently? The threading is certainly handled differently in the classic PRAM algorithms - more in line with static threading, where you manually schedule which threads/processes are to run on which processors, rather than simply express concurrency/potential parallelism and have some automatic scheduler use the processors available. But still: are there more fundamental differences? In their chapter notes (3rd ed., Chapter 27), Cormen et al. write, "Prior editions of this book included material on [] the PRAM (Parallel Random Access Machine) model." This seems to indicate that they do not view their dynamic multithreading as being built on this model. Is this so? If so, what differences am I missing? | Difference between PRAM and machine model in dynamic multithreading | ds.algorithms | null |
_unix.124160 | Is it possible to mount a flash drive without read permission? | Mounting a device without read permissions | permissions;mount | You can choose the permissions of the files and directories on a vfat filesystem in the mount options. Pass fmask to indicate the permission bits that are not set on files, and dmask for directories; the values are the same as in umask. For example, to allow non-root users to only traverse directories but not list their content, and to create files and directories and overwrite existing files but not read back from any file, you can use fmask=055,dmask=044 (4 = block read permission, 5 = block read and execute permissions). You can assign a group with more or fewer permissions; for example, if you want only the creator group to be allowed to create directories, you can use the options gid=creator,fmask=055,dmask=046. This is a handy way of preventing the creator of a file from reading back the data written to the file; however, this is a rare requirement, and not being able to read back what you have written usually counts as a considerable downside rather than a feature. |
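Putting the answer's options into one complete command, as a sketch (the device, mount point and "creator" group name are placeholders carried over from the answer):

mount -t vfat -o gid=creator,fmask=055,dmask=046 /dev/sdb1 /mnt/usb
# result: anyone may traverse directories; group "creator" may also create them;
# nobody except root can read file contents back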
_unix.121855 | I am totally stuck even after reading both this and this (if it isn't already obvious I don't have much experience with Linux.)I am trying to install WebSphere 8.5 on RHEL6. Everything was going fine with the Installation Manager GUI, then after IBMHTTPServer, Web Server Plugin, and WebSphere Customization Toolbox, I was unable to start the installation manager.Searching the internet let me to believe that testing if even xterm worked would be a good idea and unfortunately it does not.# whoamiroot# xtermWarning: This program is an suid-root program or is being run by the root user.The full text of the error or warning message cannot be safely formattedin this environment. You may get a more descriptive message by running theprogram as a non-root user or by removing the suid bit on the executable.xterm Xt error: Can't open display: %sI am connected to the Linux box using mobaXterm and the xterm command works fine on a cloned VM via the same connection.# echo $DISPLAYlocalhost:10.0# xauth listap01/unix:10 MIT-MAGIC-COOKIE-1 7f20dc9e52baff302a442c46bbd4869bAgain, I am new to this so may not be understanding what is relevant, please let me know if I can post further information.The IBMIM error (removed due to character limit):./IBMIM(IBMIM:22290): GLib-GObject-WARNING **: invalid (NULL) pointer instance(IBMIM:22290): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion `G_TYPE_CHECK_INSTANCE (instance)' failed...(IBMIM:22289): Pango-CRITICAL **: pango_layout_get_line_count: assertion `layout != NULL' failed Floating point exception (core dumped)~Adding xlock, xmag, and ssh -v output:ssh -v root@<ip_address>OpenSSH_6.2p2, OpenSSL 1.0.1c 10 May 2012debug1: Reading configuration data /etc/ssh_configdebug1: Connecting to <ip_address> [<ip_address>] port 22.debug1: Connection established.debug1: identity file /home/mobaxterm/.ssh/id_rsa type -1debug1: identity file /home/mobaxterm/.ssh/id_rsa-cert type -1debug1: identity file /home/mobaxterm/.ssh/id_dsa type -1debug1: identity file /home/mobaxterm/.ssh/id_dsa-cert type -1debug1: identity file /home/mobaxterm/.ssh/id_ecdsa type -1debug1: identity file /home/mobaxterm/.ssh/id_ecdsa-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.2debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3debug1: match: OpenSSH_5.3 pat OpenSSH_5*debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: server->client aes128-ctr hmac-md5 [email protected]: kex: client->server aes128-ctr hmac-md5 [email protected]: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_GROUPdebug1: SSH2_MSG_KEX_DH_GEX_INIT sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_REPLYdebug1: Server host key: RSA f6:69:0b:e8:73:a6:8e:5a:e5:de:95:96:cb:61:2e:4adebug1: Host '<ip_address>' is known and matches the RSA host key.debug1: Found key in /home/mobaxterm/.ssh/known_hosts:2debug1: ssh_rsa_verify: signature correctdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: SSH2_MSG_NEWKEYS receiveddebug1: Roaming not allowed by serverdebug1: SSH2_MSG_SERVICE_REQUEST sentdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug1: Next authentication method: publickeydebug1: Trying private key: /home/mobaxterm/.ssh/id_rsadebug1: Trying private key: /home/mobaxterm/.ssh/id_dsadebug1: Trying private key: /home/mobaxterm/.ssh/id_ecdsadebug1: Next authentication method: 
passwordroot@<ip_address>'s password:debug1: Enabling compression at level 6.debug1: Authentication succeeded (password).Authenticated to <ip_address> ([<ip_address>]:22).debug1: channel 0: new [client-session]debug1: Requesting [email protected]: Entering interactive session.debug1: No xauth program.debug1: Requesting X11 forwarding with authentication spoofing.Last login: Thu Mar 27 15:43:13 2014 from <build_machine_ip_address># xlock-bash: xlock: command not found# xmagError: Can't open display: localhost:12.0Verification of xauth install and PATH (honestly I wasn't 100% that the below is sufficient, what is the difference between xauth and xorg-x11-xauth?, just need parent directory included in PATH?, etc):[root@vm-ap01 apps]# yum whatprovides xauthLoaded plugins: product-id, refresh-packagekit, security, subscription-managerThis system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64 : X.Org X11 X authority utilitiesRepo : myrepoMatched from:Other : xauth1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64 : X.Org X11 X authority utilitiesRepo : installedMatched from:Other : Provides-match: xauth[root@vm-ap01 apps]# rpm -ql xauthpackage xauth is not installed[root@vm-ap01 apps]# rpm -ql xorg-x11-xauth/usr/bin/mkxauth/usr/bin/xauth/usr/share/doc/xorg-x11-xauth-1.0.2/usr/share/doc/xorg-x11-xauth-1.0.2/AUTHORS/usr/share/doc/xorg-x11-xauth-1.0.2/COPYING/usr/share/doc/xorg-x11-xauth-1.0.2/ChangeLog/usr/share/doc/xorg-x11-xauth-1.0.2/NEWS/usr/share/doc/xorg-x11-xauth-1.0.2/README/usr/share/man/man1/mkxauth.1x.gz/usr/share/man/man1/xauth.1.gz[root@vm-ap01 apps]# echo $PATH/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/binI'm not exactly sure about from the remote side, but below is the output when connected via mobaxterm from my workstation to RHEL box:[root@vm-ap01 apps]# echo $DISPLAYlocalhost:13.0[root@vm-ap01 apps]# xauth list | awk '{print $1, $2}'vm-ap01/unix:10 MIT-MAGIC-COOKIE-1vm-ap01/unix:11 MIT-MAGIC-COOKIE-1vm-ap01/unix:12 MIT-MAGIC-COOKIE-1vm-ap01/unix:13 MIT-MAGIC-COOKIE-1vm-ap01/unix:14 MIT-MAGIC-COOKIE-1[root@vm-ap01 apps]# strace xclockexecve(/usr/bin/xclock, [xclock], [/* 34 vars */]) = 0brk(0) = 0xc2b000mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611d5000access(/etc/ld.so.preload, R_OK) = -1 ENOENT (No such file or directory)open(/etc/ld.so.cache, O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=99025, ...}) = 0mmap(NULL, 99025, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f08611bc000close(3) = 0open(/usr/lib64/libXaw.so.7, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@|!\3601\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=412648, ...}) = 0mmap(0x31f0200000, 2506392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x31f0200000mprotect(0x31f025a000, 2093056, PROT_NONE) = 0mmap(0x31f0459000, 45056, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x59000) = 0x31f0459000close(3) = 0open(/usr/lib64/libXmu.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0pf`/:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=107600, ...}) = 0mmap(0x3a2f600000, 2201280, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a2f600000mprotect(0x3a2f618000, 2097152, PROT_NONE) = 0mmap(0x3a2f818000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x18000) = 0x3a2f818000close(3) = 0open(/usr/lib64/libXt.so.6, O_RDONLY) = 3read(3, 
\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p1\2418:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=412832, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611bb000mmap(0x3a38a00000, 2508640, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a38a00000mprotect(0x3a38a5f000, 2093056, PROT_NONE) = 0mmap(0x3a38c5e000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5e000) = 0x3a38c5e000mmap(0x3a38c64000, 1888, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3a38c64000close(3) = 0open(/usr/lib64/libX11.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360\334\3412:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=1300376, ...}) = 0mmap(0x3a32e00000, 3394936, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a32e00000mprotect(0x3a32f37000, 2097152, PROT_NONE) = 0mmap(0x3a33137000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x137000) = 0x3a33137000close(3) = 0open(/usr/lib64/libXrender.so.1, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300\30\3404:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=42472, ...}) = 0mmap(0x3a34e00000, 2135176, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a34e00000mprotect(0x3a34e09000, 2097152, PROT_NONE) = 0mmap(0x3a35009000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x9000) = 0x3a35009000close(3) = 0open(/usr/lib64/libXft.so.2, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320? ::\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=88632, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611ba000mmap(0x3a3a200000, 2181216, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a3a200000mprotect(0x3a3a214000, 2097152, PROT_NONE) = 0mmap(0x3a3a414000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x14000) = 0x3a3a414000close(3) = 0open(/usr/lib64/libxkbfile.so.1, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320J`\3601\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=151792, ...}) = 0mmap(0x31f0600000, 2245768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x31f0600000mprotect(0x31f0623000, 2097152, PROT_NONE) = 0mmap(0x31f0823000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x23000) = 0x31f0823000mmap(0x31f0824000, 1160, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x31f0824000close(3) = 0open(/lib64/libm.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p> 0:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=599384, ...}) = 0mmap(0x3a30200000, 2633912, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a30200000mprotect(0x3a30283000, 2093056, PROT_NONE) = 0mmap(0x3a30482000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x82000) = 0x3a30482000close(3) = 0open(/lib64/libc.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0000\356!/:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=1926800, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611b9000mmap(0x3a2f200000, 3750152, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a2f200000mprotect(0x3a2f38b000, 2093056, PROT_NONE) = 0mmap(0x3a2f58a000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x18a000) = 
0x3a2f58a000mmap(0x3a2f58f000, 18696, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3a2f58f000close(3) = 0open(/usr/lib64/libXext.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3006 3:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=80944, ...}) = 0mmap(0x3a33200000, 2174216, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a33200000mprotect(0x3a33212000, 2097152, PROT_NONE) = 0mmap(0x3a33412000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x12000) = 0x3a33412000close(3) = 0open(/usr/lib64/libXpm.so.4, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`/\340\3571\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=72832, ...}) = 0mmap(0x31efe00000, 2165496, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x31efe00000mprotect(0x31efe11000, 2093056, PROT_NONE) = 0mmap(0x31f0010000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x10000) = 0x31f0010000close(3) = 0open(/usr/lib64/libSM.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@\32`?:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=34024, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611b8000mmap(0x3a3f600000, 2126776, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a3f600000mprotect(0x3a3f607000, 2097152, PROT_NONE) = 0mmap(0x3a3f807000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x3a3f807000close(3) = 0open(/usr/lib64/libICE.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0pM`>:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=101608, ...}) = 0mmap(0x3a3e600000, 2208960, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a3e600000mprotect(0x3a3e617000, 2097152, PROT_NONE) = 0mmap(0x3a3e817000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17000) = 0x3a3e817000mmap(0x3a3e818000, 13504, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3a3e818000close(3) = 0open(/usr/lib64/libxcb.so.1, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\230`2:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=124728, ...}) = 0mmap(0x3a32600000, 2217576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a32600000mprotect(0x3a3261d000, 2097152, PROT_NONE) = 0mmap(0x3a3281d000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1d000) = 0x3a3281d000close(3) = 0open(/lib64/libdl.so.2, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\r\340.:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=22536, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611b7000mmap(0x3a2ee00000, 2109696, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a2ee00000mprotect(0x3a2ee02000, 2097152, PROT_NONE) = 0mmap(0x3a2f002000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x3a2f002000close(3) = 0open(/usr/lib64/libfontconfig.so.1, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\\`4:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=223040, ...}) = 0mmap(0x3a34600000, 2316776, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a34600000mprotect(0x3a34634000, 2097152, PROT_NONE) = 0mmap(0x3a34834000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x34000) = 0x3a34834000close(3) = 
0open(/usr/lib64/libfreetype.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\310\2404:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=644912, ...}) = 0mmap(0x3a34a00000, 2737840, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a34a00000mprotect(0x3a34a98000, 2093056, PROT_NONE) = 0mmap(0x3a34c97000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x97000) = 0x3a34c97000close(3) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611b6000open(/lib64/libuuid.so.1, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\25 ;:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=18936, ...}) = 0mmap(0x3a3b200000, 2111272, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a3b200000mprotect(0x3a3b204000, 2093056, PROT_NONE) = 0mmap(0x3a3b403000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x3a3b403000close(3) = 0open(/usr/lib64/libXau.so.6, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\r\2402:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=13168, ...}) = 0mmap(0x3a32a00000, 2106112, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a32a00000mprotect(0x3a32a02000, 2097152, PROT_NONE) = 0mmap(0x3a32c02000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x3a32c02000close(3) = 0open(/lib64/libexpat.so.1, O_RDONLY) = 3read(3, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320<`3:\0\0\0..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=167648, ...}) = 0mmap(0x3a33600000, 2260432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3a33600000mprotect(0x3a33626000, 2093056, PROT_NONE) = 0mmap(0x3a33825000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x25000) = 0x3a33825000close(3) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611b5000mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611b4000mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611b2000arch_prctl(ARCH_SET_FS, 0x7f08611b2740) = 0mprotect(0x3a30482000, 4096, PROT_READ) = 0mprotect(0x3a2f58a000, 16384, PROT_READ) = 0mprotect(0x3a2f002000, 4096, PROT_READ) = 0mprotect(0x3a2ec1f000, 4096, PROT_READ) = 0munmap(0x7f08611bc000, 99025) = 0open(/usr/lib/locale/locale-archive, O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=99158576, ...}) = 0mmap(NULL, 99158576, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f085b321000close(3) = 0brk(0) = 0xc2b000brk(0xc4c000) = 0xc4c000open(/proc/meminfo, O_RDONLY|O_CLOEXEC) = 3fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611d4000read(3, MemTotal: 1923456 kB\nMemF..., 1024) = 1024close(3) = 0munmap(0x7f08611d4000, 4096) = 0socket(PF_NETLINK, SOCK_RAW, 0) = 3bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0getsockname(3, {sa_family=AF_NETLINK, pid=30849, groups=00000000}, [12]) = 0sendto(3, \24\0\0\0\26\0\1\3\320\3545S\0\0\0\0\0\0\0\0, 20, 0, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 20recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{0\0\0\0\24\0\2\0\320\3545S\201x\0\0\2\10\200\376\1\0\0\0\10\0\1\0\177\0\0\1..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 108recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, 
msg_iov(1)=[{@\0\0\0\24\0\2\0\320\3545S\201x\0\0\n\200\200\376\1\0\0\0\24\0\1\0\0\0\0\0..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 128recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{\24\0\0\0\3\0\2\0\320\3545S\201x\0\0\0\0\0\0\1\0\0\0\24\0\1\0\0\0\0\0..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20close(3) = 0socket(PF_FILE, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3connect(3, {sa_family=AF_FILE, path=/var/run/nscd/socket}, 110) = -1 ENOENT (No such file or directory)close(3) = 0socket(PF_FILE, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3connect(3, {sa_family=AF_FILE, path=/var/run/nscd/socket}, 110) = -1 ENOENT (No such file or directory)close(3) = 0open(/etc/nsswitch.conf, O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=1688, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611d4000read(3, #\n# /etc/nsswitch.conf\n#\n# An ex..., 4096) = 1688read(3, , 4096) = 0close(3) = 0munmap(0x7f08611d4000, 4096) = 0open(/etc/host.conf, O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=9, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611d4000read(3, multi on\n, 4096) = 9read(3, , 4096) = 0close(3) = 0munmap(0x7f08611d4000, 4096) = 0getpid() = 30849open(/etc/resolv.conf, O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=355, ...}) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f08611d4000read(3, # Generated by NetworkManager\n\n\n..., 4096) = 355read(3, , 4096) = 0close(3) = 0munmap(0x7f08611d4000, 4096) = 0open(/etc/ld.so.cache, O_RDONLY) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=99025, ...}) = 0mmap(NULL, 99025, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f08611bc000close(3) = 0open(/lib64/tls/x86_64/libnss_files,dns.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/lib64/tls/x86_64, 0x7fff58448b00) = -1 ENOENT (No such file or directory)open(/lib64/tls/libnss_files,dns.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/lib64/tls, {st_mode=S_IFDIR|0555, st_size=4096, ...}) = 0open(/lib64/x86_64/libnss_files,dns.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/lib64/x86_64, 0x7fff58448b00) = -1 ENOENT (No such file or directory)open(/lib64/libnss_files,dns.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/lib64, {st_mode=S_IFDIR|0555, st_size=12288, ...}) = 0open(/usr/lib64/tls/x86_64/libnss_files,dns.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/usr/lib64/tls/x86_64, 0x7fff58448b00) = -1 ENOENT (No such file or directory)open(/usr/lib64/tls/libnss_files,dns.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/usr/lib64/tls, {st_mode=S_IFDIR|0555, st_size=4096, ...}) = 0open(/usr/lib64/x86_64/libnss_files,dns.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/usr/lib64/x86_64, 0x7fff58448b00) = -1 ENOENT (No such file or directory)open(/usr/lib64/libnss_files,dns.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/usr/lib64, {st_mode=S_IFDIR|0555, st_size=69632, ...}) = 0munmap(0x7f08611bc000, 99025) = 0open(/usr/lib64/X11/XtErrorDB, O_RDONLY) = -1 ENOENT (No such file or directory)getuid() = 0geteuid() = 0getuid() = 0write(2, Error: , 7Error: ) = 7write(2, Can't open display: localhost:13..., 34Can't open display: localhost:13.0) = 34write(2, \n, 1) = 1exit_group(1) = ?[root@vm-ap01 apps]#Looks like SE Linux is enabled as well:[root@vm-ap01 apps]# sestatusSELinux status: enabledSELinuxfs mount: /selinuxCurrent mode: enforcingMode from config 
file: enforcing
Policy version: 24
Policy from config file: targeted
| xterm no longer starting as root user on RHEL6 | ssh;x11 | "debug1: No xauth program." That looks like a problem. Make sure that you have the xauth program installed on both the server and the client. You're logging in as root, so make sure that xauth is in root's PATH, too. I'm not completely sure that this is the problem, because in my tests I found other error messages ("X11 forwarding request failed on channel 0", appearing even without -v) if X11 forwarding wasn't working due to a lack of xauth, but this may be due to a version or configuration difference. In the strace xclock output, you identified a line that doesn't match between a working machine and a non-working machine. On the non-working machine: open("/lib64/tls/x86_64/libnss_files,dns.so.2", O_RDONLY) = -1 ENOENT (No such file or directory) That file name is wrong: libnss_files,dns.so.2 isn't supposed to exist; what exist are libnss_files.so.2 and libnss_dns.so.2. These files are used by NSS, the component of the standard C library that manages sources for names of hosts, users, etc. Typical sources include files (/etc/hosts, /etc/passwd, ...), dns, ldap, etc. You have a source called "files,dns" instead of a source called "files" and a source called "dns". Edit the file /etc/nsswitch.conf and change the line(s) "SOMETHING: files,dns" to "SOMETHING: files dns", i.e. the words must be whitespace-separated, not comma-separated. I don't fully understand the trace but this is definitely wrong and could be causing your problem. In particular, I think that a wrong hosts: line in /etc/nsswitch.conf might be causing your system not to find localhost. |
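The suggested /etc/nsswitch.conf repair as a one-liner, assuming (as the answer does) that the broken entries look like "hosts: files,dns"; back the file up first:

cp /etc/nsswitch.conf /etc/nsswitch.conf.bak
sed -i 's/files,dns/files dns/' /etc/nsswitch.conf
grep '^hosts:' /etc/nsswitch.conf   # verify it now reads "hosts: files dns"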
_codereview.85188 | Have I followed best practices for PHP development, or can this class be improved? GitHub
<?php namespace azi;
/** * Class Validator * * @package azi * @author Azi Baloch <http://www.azibaloch.com> * @version 1.0 * @license The MIT License (MIT) * */
class Validator {
/** * RegExp patterns * * @var array */ private $expressions = [ ];
/** * Custom RegExp error messages * * @var array */ private $error_messages = [ ];
/** * holds validation errors * * @var array */ private $validation_errors = [ ];
/** * @var array */ private static $errors = [ ];
/** * @var null */ private static $instance = null;
/** * @var string */ private static $session_data_key = "form_validation_errors";
/** * @var array */ private $builtin_rules = [];
/** * Class Constructor */ public function __construct() { // load built-in expressions
$this->expressions = array( 'alpha' => '#^([a-zA-Z\s])+$#', 'num' => '#^([0-9])+$#', 'alpha-num' => '#^([a-zA-Z0-9\s])$#', ); $this->builtin_rules = array( array( 'id' => 'email', 'exp' => '#^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$#', 'message' => 'Please enter a valid email address' ) ); foreach($this->builtin_rules as $rule) { $rule = (object) $rule; $this->registerExpression($rule->id, $rule->exp, $rule->message); } static::$instance = $this; }
/** * @param $key * @param $expression * @param null $message * * @return bool * @throws \Exception */ public function registerExpression( $key, $expression, $message = null ) { if ( ! isset( $this->expressions[ $key ] ) ) { $this->expressions[ $key ] = $expression; if ( $message ) { $this->error_messages[ $key ] = $message; } return true; } throw new \Exception( "Expression key already exists" ); }
/** * @param $key * @param $newExpression * @param null $message * * @return bool * @throws \Exception */ public function updateExpression( $key, $newExpression, $message = null ) { if ( $this->expressions[ $key ] ) { $this->expressions[ $key ] = $newExpression; if ( $message ) { $this->error_messages[ $key ] = $message; } return true; } throw new \Exception( "Expression dose not exists" ); }
/** * @return bool */ public function passed() { if ( count( $this->validation_errors ) < 1 ) { return true; } return false; }
/** * retrieve error messages after validation * * @return array */ public function getErrors() { return $this->validation_errors; }
/** * Get error message of a field * * @param $fieldKey * * @return mixed */ public static function error( $fieldKey ) { if(!session_id()) { session_start(); } if(isset($_SESSION[static::$session_data_key])) { if(count($_SESSION[static::$session_data_key]) > 0) { self::$errors = $_SESSION[ static::$session_data_key ]; unset($_SESSION[static::$session_data_key]); } } else { if(!is_null(static::$instance)) { self::$errors = static::$instance->validation_errors; } } if ( isset( self::$errors[ $fieldKey ][ 'message' ] ) ) { return self::$errors[ $fieldKey ][ 'message' ]; } return false; }
/** * @param array $fields the array of form fields - ( $_POST , $_GET, $_REQUEST ) * @param $rules * * @return Validator $this */ public function validate( $fields, $rules ) { $return = [ ]; foreach ( $fields as $key => $field ) { if ( ! array_key_exists( $key, $rules ) ) { continue; } $r = $rules[ $key ]; $matches = [ ]; $not = false; if ( preg_match( '#if:#', $r ) ) { preg_match( "#if:(.*)\\[\\!(.*)\\]:(.*)\\[!(.*)\\]\\((.*)\\)#", $r, $matches ); if ( count( $matches ) < 1 ) { preg_match( "#if:(.*)\\[(.*)\\]\\((.*)\\)#", $r, $matches ); } else { $not = true; } $r = end( $matches ); } if ( ! strpos( $r, "|" ) ) { $r .= "|IGNORE_ME5"; } $theRules = explode( "|", $r ); foreach ( $theRules as $theRule ) { if ( $theRule == "IGNORE_ME5" ) { continue; } $customMessage = [ ]; if ( strpos( $theRule, "--" ) ) { $rcm = explode( "--", $theRule ); // custom message for current rule
$theRule = $rcm[ 0 ]; if ( strpos( $rcm[ 0 ], ":" ) ) { $rcm[ 0 ] = explode( ":", $rcm[ 0 ] )[ 0 ]; } $customMessage[ $rcm[ 0 ] ] = $rcm[ 1 ]; } if ( count( $matches ) > 0 ) { if ( $not ) { if ( $fields[ $matches[ 1 ] ] == $matches[ 2 ] && $fields[ $matches[ 3 ] ] == $matches[ 4 ] ) { continue; } } else { if ( $fields[ $matches[ 1 ] ] != $matches[ 2 ] ) { continue; } } } if ( strtolower( $theRule ) == "required" ) { if ( ! empty( $customMessage[ 'required' ] ) ) { $theMessage = $customMessage[ 'required' ]; } else { $theMessage = $this->keyToLabel( $key ) . ' is required'; } if ( $field == "" ) { $return[ $key ] = [ 'error' => 'required', 'message' => $theMessage ]; continue; } } if ( strtolower( $theRule ) == "alpha" ) { if ( ! preg_match( $this->expressions[ 'alpha' ], $field ) ) { if ( ! empty( $customMessage[ 'alpha' ] ) ) { $theMessage = $customMessage[ 'alpha' ]; } else { $theMessage = $this->keyToLabel( $key ) . ' must not contain numbers and special characters'; } $return[ $key ] = [ 'error' => 'alpha', 'message' => $theMessage ]; continue; } } if ( strtolower( $theRule ) == "num" ) { if ( ! empty( $customMessage[ 'num' ] ) ) { $theMessage = $customMessage[ 'num' ]; } else { $theMessage = $this->keyToLabel( $key ) . ' may only contain numbers'; } if ( ! preg_match( $this->expressions[ 'num' ], $field ) ) { $return[ $key ] = [ 'error' => 'num', 'message' => $theMessage ]; continue; } } if ( strtolower( $theRule ) == "alpha-num" ) { if ( ! empty( $customMessage[ 'alpha-num' ] ) ) { $theMessage = $customMessage[ 'alpha-num' ]; } else { $theMessage = $this->keyToLabel( $key ) . ' may only contain alpha numeric characters'; } if ( ! preg_match( $this->expressions[ 'alpha-num' ], $field ) ) { $return[ $key ] = [ 'error' => 'alpha-num', 'message' => $theMessage ]; continue; } } if ( strpos( $theRule, ':' ) ) { $theRule = explode( ":", $theRule ); $length = $theRule[1]; $theRule = $theRule[0]; if ( strtolower( $theRule ) == "min" ) { if ( ! empty( $customMessage[ 'min' ] ) ) { $theMessage = $customMessage[ 'min' ]; } else { $theMessage = $this->keyToLabel( $key ) . ' must be at least ' . $length . " characters long"; } if ( strlen( $field ) < $length ) { $return[ $key ] = [ 'error' => 'min', 'message' => $theMessage ]; continue; } } if ( strtolower( $theRule ) == "max" ) { if ( ! empty( $customMessage[ 'max' ] ) ) { $theMessage = $customMessage[ 'max' ]; } else { $theMessage = $this->keyToLabel( $key ) . ' must be less than ' . $length . " characters"; } if ( strlen( $field ) > $length ) { $return[ $key ] = [ 'error' => 'max', 'message' => $theMessage ]; continue; } } } /* Custom Expressions */ if ( array_key_exists( $theRule, $this->expressions ) ) { if ( ! preg_match( $this->expressions[ $theRule ], $field ) ) { if(isset($customMessage[$theRule])) { $error_message = $customMessage[$theRule]; } else if ( array_key_exists( $theRule, $this->error_messages ) ) { $error_message = $this->error_messages[ $theRule ]; } else { $error_message = $this->keyToLabel( $key ) . ' dose\'t match the required pattern'; } $return[ $key ] = [ 'error' => $theRule, 'message' => $error_message ]; continue; } } } } $this->validation_errors = $return; return $this; }
/** * Convert an array key to Label, e.g.
full_name to Full Name * * @param $key * * @return string */ private function keyToLabel( $key ) { return ucwords( str_replace( [ '-', '_', '+' ], ' ', $key ) ); } public function goBackWithErrors(){ if(!session_id()) { session_start(); } $_SESSION[static::$session_data_key] = $this->validation_errors; header('Location: ' . $_SERVER['HTTP_REFERER']); exit; }} | Server side form validation library | php;beginner;validation;library | Look at Symfony 2 Validation and you will find many useful things. About the code: split the validate method into small pieces; it should not take more than one screen. Use protected instead of private if you are not 100% sure it must be private. In your case, $session_data_key must be a constant. If you are providing a library, it must be unit tested, and when you start doing that you will find many other architectural problems.
_unix.38702 | I remember seeing an article on how to do this at one time, but now I can't find it. I have seen other installation instructions, but I want to make sure I can easily update it via apt. | How do I install mod_pagespeed using apt-get? | apt;apache httpd | null
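Since no answer was accepted, here is a hedged sketch of the usual approach on .deb-based systems: Google historically shipped mod_pagespeed as a .deb package that registers its own apt repository, so later updates arrive through normal apt upgrades. The URL and package name below are from memory and should be verified against Google's current download page.

# Download and install the stable .deb (64-bit); the package sets up
# /etc/apt/sources.list.d/mod-pagespeed.list so apt can update it later.
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
sudo dpkg -i mod-pagespeed-stable_current_amd64.deb
sudo apt-get -f install        # pull in any missing dependencies

# From then on, regular upgrades cover it:
sudo apt-get update && sudo apt-get upgrade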
_reverseengineering.2603 | What tools produce C code that does not produce errors when you try to recompile it again? Can Hex-Rays decompiler convert everything to project files in a single folder and just compile it? | Decompile and recompile C? | c;decompile | null |
_webapps.15321 | It seems like recently, when I paste into a new Gmail message, there's a box with a gray border and rounded corners that surrounds the contents of the paste. This is incredibly annoying. If I do a Ctrl + Shift + V it'll paste plain text, and I'll lose my hyperlinks and such. How do I paste without ending up with the quote box?

Edit: I'm using Chrome 12 on Windows 7.

Steps to reproduce:
1. Copy text from another Gmail message.
2. Go to Compose Mail.
3. Paste into the mail window in Gmail.
Happens in new mail, replies, etc.

Edit 2: This seems to ONLY happen if you copy and paste between Gmail messages; the above steps have been corrected. Sorry about that. Here is a screenshot | Paste text in Gmail without the grey box with rounded corners? | gmail | I can reproduce the quote box with the latest Chrome version (Canary build v.13), but not with the current stable version (Chrome v.11), so the quote box appears to be a new feature introduced in newer versions of Chrome. I am afraid that, at least for now, if you do not like the quote box, you can only either use plain text to compose your mail, or change back to the stable channel of Chrome (v.11).
_webmaster.15137 | Google recently announced that they will no longer support older browsers as of Aug 1st:

"For this reason, soon Google Apps will only support modern browsers. Beginning August 1st, we'll support the current and prior major release of Chrome, Firefox, Internet Explorer and Safari on a rolling basis. Each time a new version is released, we'll begin supporting the update and stop supporting the third-oldest version."

There is nothing worse than looking at the patching of code that takes place to support older browsers. If we could all move towards a standards-only web (I'm looking at you, IE9) then surely we could spend more time programming good web apps and less time trying to make them run equally on terrible, non-standards-compliant older browsers. So when can the rest of us expect to be able to tell our clients that we no longer support older browsers? It seems that large corporates will continue to run older browsers, and even if Google Chrome Frame can be installed without admin privileges (it's coming soon, currently in beta) we can't expect all users to be motivated to do this. I appreciate any thoughts. | So now Google has said no to old browsers, when can the rest of us follow suit? | web development;html5;browsers;standards;browser support | If ( your site does not make money ) {
    do what makes you happy
} else if ( the cost of supporting IE6 > the money you make from IE6 users ) {
    stop supporting IE6
} else {
    keep making money from IE6 users
}
_unix.367366 | I'm trying to run multiple shell commands in ansible on a remote host, but failing miserably: - name: Mkdir on server and copy packages locally shell: | mkdir /root/vdbench cd /root cp vdbench50403.zip /root/vdbench cd /root/vdbench unzip vdbench50403.zipError:TASK [Mkdir on server and copy packages locally] *******************************fatal: [153.254.108.166]: FAILED! => {changed: true, cmd: mkdir /root/vdbench\n cd /root\n cp vdbench50403.zip /root/vdbench\n cd /root/vdbench\n unzip vdbench50403.zip, delta: 0:00:00.006011, end: 2017-05-26 07:24:30.518445, failed: true, rc: 127, start: 2017-05-26 07:24:30.512434, stderr: mkdir: cannot create directory /root/vdbench: File exists\n/bin/sh: line 4: unzip: command not found, stdout: , stdout_lines: [], warnings: [Consider using file module with state=directory rather than running mkdir]}fatal: [153.254.108.165]: FAILED! => {changed: true, cmd: mkdir /root/vdbench\n cd /root\n cp vdbench50403.zip /root/vdbench\n cd /root/vdbench\n unzip vdbench50403.zip, delta: 0:00:00.005799, end: 2017-05-26 07:24:30.740551, failed: true, rc: 127, start: 2017-05-26 07:24:30.734752, stderr: mkdir: cannot create directory /root/vdbench: File exists\n/bin/sh: line 4: unzip: command not found, stdout: , stdout_lines: [], warnings: [Consider using file module with state=directory rather than running mkdir]}fatal: [153.254.108.164]: FAILED! => {changed: true, cmd: mkdir /root/vdbench\n cd /root\n cp vdbench50403.zip /root/vdbench\n cd /root/vdbench\n unzip vdbench50403.zip, delta: 0:00:00.006032, end: 2017-05-26 07:24:30.745565, failed: true, rc: 127, start: 2017-05-26 07:24:30.739533, stderr: mkdir: cannot create directory /root/vdbench: File exists\n/bin/sh: line 4: unzip: command not found, stdout: , stdout_lines: [], warnings: [Consider using file module with state=directory rather than running mkdir]}fatal: [153.254.108.163]: FAILED! => {changed: true, cmd: mkdir /root/vdbench\n cd /root\n cp vdbench50403.zip /root/vdbench\n cd /root/vdbench\n unzip vdbench50403.zip, delta: 0:00:00.006703, end: 2017-05-26 07:24:30.832733, failed: true, rc: 127, start: 2017-05-26 07:24:30.826030, stderr: mkdir: cannot create directory /root/vdbench: File exists\n/bin/sh: line 4: unzip: command not found, stdout: , stdout_lines: [], warnings: [Consider using file module with state=directory rather than running mkdir]} | unable to execute ansible play on remote host | ansible | null |
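No answer was accepted here, but the error output itself points at two likely fixes: mkdir fails because the directory already exists (the warning even suggests the idempotent file module), and unzip is not installed on the remote Amazon Linux hosts (either install it, or let the unarchive module unpack the zip). A sketch, assuming Ansible 2.x and that vdbench50403.zip is already present on each remote host:

- name: Ensure /root/vdbench exists
  file:
    path: /root/vdbench
    state: directory

- name: Copy the package into place on the remote host
  command: cp /root/vdbench50403.zip /root/vdbench/
  args:
    creates: /root/vdbench/vdbench50403.zip   # skip if already copied

- name: Ensure unzip is present (needed if you shell out to unzip)
  package:
    name: unzip
    state: present

- name: Unpack vdbench on the remote host
  unarchive:
    src: /root/vdbench/vdbench50403.zip
    dest: /root/vdbench
    remote_src: yes    # archive lives on the target, not the control node

Each task is idempotent, so re-running the play no longer fails on "File exists".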
_codereview.31674 | All of the class here share these same propertyCountry, Total and Date.Total property are integer in some classes and decimal in some classed.The extension method below has a lot of duplication. i have tried to refactor it using interface but i can't find a way to refactor it at the last step on selection since it need different class public static class TotalApprovedPOAmountStatisticExtension{ public static void MergeNullAndEmplyCountry(this IEnumerable<TotalApprovedPOAmountStatistic> totalUserStatistic) { foreach (var item in totalUserStatistic) { if (string.IsNullOrWhiteSpace(item.Country)) { item.Country = ; } } totalUserStatistic = totalUserStatistic .GroupBy(item => new { item.Country, item.Date }) .Select(groupedUser => new TotalApprovedPOAmountStatistic() { Country = groupedUser.Key.Country, Total = groupedUser.Sum(item => item.Total), Date = groupedUser.Key.Date }).ToList(); }} public static class TotalPOStatisticListExtension{ public static void MergeNullAndEmplyCountry(this IEnumerable<TotalPOStatistic> totalUserStatistic) { foreach (var item in totalUserStatistic) { if (string.IsNullOrWhiteSpace(item.Country)) { item.Country = ; } } totalUserStatistic = totalUserStatistic .GroupBy(item => new { item.Country, item.Date }) .Select(groupedUser => new TotalPOStatistic() { Country = groupedUser.Key.Country, Total = groupedUser.Sum(item => item.Total), Date = groupedUser.Key.Date }).ToList(); }} public static class TotalProjectAmountStatisticListExtension{ public static void MergeNullAndEmplyCountry(this IEnumerable<TotalProjectAmountStatistic> totalUserStatistic) { foreach (var item in totalUserStatistic) { if (string.IsNullOrWhiteSpace(item.Country)) { item.Country = ; } } totalUserStatistic = totalUserStatistic .GroupBy(item => new { item.Country, item.Date }) .Select(groupedUser => new TotalProjectAmountStatistic() { Country = groupedUser.Key.Country, Total = groupedUser.Sum(item => item.Total), Date = groupedUser.Key.Date }).ToList(); }} public static class TotalProjectStatisticListExtension{ public static void MergeNullAndEmplyCountry(this IEnumerable<TotalProjectStatistic> totalUserStatistic) { foreach (var item in totalUserStatistic) { if (string.IsNullOrWhiteSpace(item.Country)) { item.Country = ; } } totalUserStatistic = totalUserStatistic .GroupBy(item => new { item.Country, item.Date }) .Select(groupedUser => new TotalProjectStatistic() { Country = groupedUser.Key.Country, Total = groupedUser.Sum(item => item.Total), Date = groupedUser.Key.Date }).ToList(); }} public static class TotalUserStatisticListExtension{ public static void MergeNullAndEmplyCountry(this IEnumerable<TotalUserStatistic> totalUserStatistic) { foreach (var item in totalUserStatistic) { if (string.IsNullOrWhiteSpace(item.Country)) { item.Country = ; } } totalUserStatistic = totalUserStatistic .GroupBy(item => new { item.Country, item.Date }) .Select(groupedUser => new TotalUserStatistic() { Country = groupedUser.Key.Country, Total = groupedUser.Sum(item => item.Total), Date = groupedUser.Key.Date }).ToList(); }} | Refactor Linq grouping | c#;linq | You can use an interface and a generic method to remove the code duplication:public interface IStatistic{ string Country {get; set;} DateTime Date {get; set;} Decimal Total {get; set;}}public static class StatisticExtension{ public static void MergeNullAndEmplyCountry<T>(this IEnumerable<T> totalUserStatistic) where T : IStatistic, new() { foreach (var item in totalUserStatistic) if (string.IsNullOrWhiteSpace(item.Country)) item.Country = 
; totalUserStatistic = totalUserStatistic.GroupBy(item => new { item.Country, item.Date }) .Select(groupedUser => new T() { Country = groupedUser.Key.Country, Total = groupedUser.Sum(item => item.Total), Date = groupedUser.Key.Date }).ToList(); }}Important here is the new() constraint, which allows us to create an instance of T.If the Total property of a class is int instead of decimal, we can make use of an explicit interface implementation:public class BlaStatistic : IStatistic{ public string Country {get; set;} public DateTime Date {get; set;} public Int32 Total {get; set;} Decimal IStatistic.Total { get { return Convert.ToDecimal(Total); } set { Total = Convert.ToInt32(value); } }}Also, your extension method should actually return something, because it won't change the IEnumerable<...> you pass in:public static IEnumerable<T> MergeNullAndEmplyCountry<T>(this IEnumerable<T> totalUserStatistic) where T : IStatistic, new(){ ... return totalUserStatistic.GroupBy...} |
_codereview.131537 | I created a web service which can be called from a form to add new users to the system. The final service will add users to Active Directory and Exchange in the required format (please note that FirstName and LastName are required fields in the form, so they will never be null): using System.Collections.Generic;using System.Web.Services;namespace NewStarterWebService{[WebService(Namespace = http://tempuri.org/)] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] public class NewStarter : WebService { [WebMethod] public void CreateNewUser( string firstName, string middleName, string lastName, string jobTitle, string department, string office, string role, string manager) { var newUser = new NewUser(firstName, middleName, lastName, jobTitle, department, office, role, manager); newUser.Create(); } }}The NewUser Class only has 1 method - Create() and a constructor: using System;using System.Collections.Generic;namespace NewStarterWebService{ public class NewUser { public readonly string FullName; public readonly string FirstName; public readonly string MiddleName; public readonly string LastName; public readonly string Department; public readonly string Office; public readonly string JobTitle; public readonly string Role; public readonly string Manager; public readonly string Email; public readonly string DotName; public readonly string HomeDirectory; public readonly string Initials; public readonly Dictionary<string, string> DefaultProperties; private const string Password = Password99; public NewUser( string firstName, string middleName, string lastName, string jobTitle, string department, string office, string role, string manager) { Office = office; Department = department; FirstName = firstName.Trim(); MiddleName = middleName.Trim(); LastName = lastName.Trim(); JobTitle = jobTitle.Trim(); Role = role.Trim(); Manager = Ad.GetManagerDistinguishedName(manager); FullName = ${FirstName} {LastName}; Email = ${FirstName}{LastName}@domain.com; DotName = ${FirstName}.{LastName}; HomeDirectory = $@\\File\Home$\{DotName}; try { Initials = ${FirstName[0]}{MiddleName[0]}{LastName[0]}; } catch (Exception ex) when ( ex is ArgumentOutOfRangeException || ex is NullReferenceException || ex is ArgumentNullException) { Initials = ${FirstName[0]}{LastName[0]}; } // Set the default properties for AD DefaultProperties = new Dictionary<string, string> { {userprincipalname, Email}, {samaccountname, DotName }, {sn, LastName}, {givenname, FirstName }, {displayname, FullName }, {description, JobTitle }, {mail, Email }, {homedirectory, HomeDirectory }, {homedrive, H: }, {physicalDeliveryOfficeName, Office }, {Manager, Manager }, {Initials, Initials } }; } public void Create() { Ad.CreateUser(this); Ps.AddExchangeUser(this); } }}In the constructor, Manager is initialised with a call to the AD class, because the manager's DisplayName is sent through, but for my purposes I need this to be the DsitinguishedName. The Create() method makes a call to Ad (Active Directory) and Ps (PowerShell) to do the AD and Exchange creation. I didn't post this code because I thought that it would maybe be a bit much but if people think it would help to put the rest of it in context I will add it! | Automating New User Creation With C# Web Service | c#;web services;active directory | I would not put dependency on singleton in this class it means two responsibility for NewUser - format properties and orchestrate AD API. 
Lets define this helper class to be more explicit on your validation scenarios:static class Trimming{ public static string TrimRequired(this string value, [CallerMemberName] string property = ) { if(string.IsNullOrWhiteSpace(value)) throw new InvalidOperationException($Mailformed property {property}.); return value.Trim(); } public static string TrimOptional(this string value) { if (value == null) return null; return value.Trim(); }}And define UserBuilder (It makes sense to prefer properties over ctor parameters when we have so many):public class UserBuilder{ public string FirstName { get; set; } public string MiddleName { get; set; } public string LastName { get; set; } public string JobTitle { get; set; } public string Department { get; set; } public string Office { get; set; } public string Role { get; set; } public string Manager { get; set; } public IDictionary<string, string> ToProperties() => new Dictionary<string, string> { {userprincipalname, RequiredEmail}, {samaccountname, RequiredDotName }, {sn, RequiredLastName}, {givenname, RequiredFirstName }, {displayname, RequiredFullName }, {description, OptionalJobTitle }, {mail, RequiredEmail }, {homedirectory, RequiredHomeDirectory }, {homedrive, H: }, {physicalDeliveryOfficeName, OptionalOffice }, {Manager, RequiredManager }, {Initials, RequiredInitials } }; string RequiredFullName => ${RequiredFirstName} {RequiredLastName}; string RequiredEmail => ${RequiredFirstName}{RequiredLastName}@domain.com; string RequiredDotName => ${RequiredFirstName}.{RequiredLastName}; string RequiredHomeDirectory => $@\\File\Home$\{RequiredDotName}; string RequiredInitials { get { try { return ${RequiredFirstName[0]}{OptionalMiddleName[0]}{RequiredLastName[0]}; } catch (Exception ex) when ( ex is ArgumentOutOfRangeException || ex is NullReferenceException || ex is ArgumentNullException) { return ${RequiredFirstName[0]}{RequiredLastName[0]}; } } } string OptionalOffice => Office.TrimOptional(); string OptionalDepartment => Department.TrimOptional(); string RequiredFirstName => FirstName.TrimRequired(); string OptionalMiddleName => MiddleName.TrimOptional(); string RequiredLastName => LastName.TrimRequired(); string OptionalJobTitle => JobTitle.TrimOptional(); string RequiredRole => Role.TrimRequired(); string RequiredManager => Manager.TrimRequired();}P.S. Generally speaking, it makes sense to do not hold an intermediate state of you calculations (like trimmed string values) just an original data + define functions/properties to process them. It makes your design more flexible.P.P.S. It might be useful to have UserName class: class UserName { public string FirstName { get; set; } public string MiddleName { get; set; } public string LastName { get; set; } public string FullName => ... public string Initials => ... }... and use in UserBuilder. |
_unix.253370 | I'm working with some apps (the best example I can give is Firefox Developer Edition) that I put in the /opt folder. These apps work perfectly until they need to apply an update, and that's because /opt is owned by root; to write those apps into the /opt folder I use the sudo command when I unpack them. To fix this, my best solution was to chmod, chown and chgrp the apps (each app folder, not the /opt folder itself) to my user, and give users permission to write inside the app folders:

[user@localhost /opt] $ ls -lah
total 76K
drwxr-xr-x. 19 root   root    4,0K dic 28 22:32 .
dr-xr-xr-x. 20 root   root    4,0K nov 13 02:00 ..
drwxrwxr-x   7 MyUser MyGroup 4,0K nov 24 21:46 android-studio
drwxr-xr-x  10 MyUser MyGroup 4,0K dic 17 21:45 firefox-dev
drwxrwxr-x   2 MyUser MyGroup 4,0K dic 28 22:34 linux-wbfs-manager
drwxr-xr-x.  5 MyUser MyGroup 4,0K mar 25  2015 sublime_text_2
drwx------.  3 MyUser MyGroup 4,0K dic  2 14:53 tor-browser_en-US
drwxrwxr-x   4 MyUser MyGroup 4,0K dic 17 19:28 WordPress.com

Now I'm wondering whether this is good practice or insecure. Until now I haven't had any problems, but I'm using browsers and other apps with network access. | Is it insecure to give users permissions to apps in /opt? | linux;permissions;security | null
_codereview.173477 | My objective is to run some 'N' number of PHP Scripts in cron. Since these are web scrapers coded in PHP and hits database on my server, each script runs for like 1 hour and they take a lot of CPU and memory. That's why I want to run one script at a time.My first approach was to run each script at specific hour of the day but then there are only 24 hours in a day, so I can't really go with that way.I have prepared a Unix script which runs in an infinite loop and runs the entire script. Assuming the shell will execute one script, wait for some time and then execute the next script. This is my script 1, say.Then there is another script, Script 2 which runs every 3 hours and checks if Script 1 is running. If Script 1 is not running it will execute Script 1.My concerns:Is it feasible to run like 100 PHP scripts from one Unix script? If one fails, will it fail the whole script?Is there any better approach?Script 1 (MasterCronCarlos.sh):#!/bin/bash# Description = Script to check cpu usage and if above threshold issue a linux based command.# This Script only checks user cpu time (us - userspace (doing userspace stuff))#SUBJECT=CRON Status | Jai [email protected] truedoecho -en Starting fresh loop of cron | mail -s $SUBJECT $TO php /home/site1/public_html/scrapperM1.php sleep 300 php /home/site1/public_html/scrapperM2.php sleep 300 php /home/site1/public_html/scrapperM3.php sleep 300 echo -en Script completed for site1.com | mail -s $SUBJECT $TO php /home/site2/public_html/scrapperM1.php sleep 300 php /home/site2/public_html/scrapperM2.php sleep 300 php /home/site2/public_html/scrapperM3.php sleep 300 echo -en Script completed for site2.tv | mail -s $SUBJECT $TO php /home/site3/public_html/scrapperM1.php sleep 300 php /home/site3/public_html/scrapperM2.php sleep 300 php /home/site3/public_html/scrapperM3.php sleep 300 echo -en Script completed for site3 | mail -s $SUBJECT $TO php /home/site4/public_html/scrapperM1.php sleep 300 php /home/site4/public_html/scrapperM2.php sleep 300 php /home/site4/public_html/scrapperM3.php sleep 300 echo -en Script completed for site4 | mail -s $SUBJECT $TO php /home/site5/public_html/scrapperM1.php sleep 300 php /home/site5/public_html/scrapperM2.php sleep 300 php /home/site5/public_html/scrapperM3.php sleep 300 echo -en Script completed for site5 | mail -s $SUBJECT $TO php /home/site6/public_html/scrapperM1.php sleep 300 php /home/site6/public_html/scrapperM2.php sleep 300 php /home/site6/public_html/scrapperM3.php sleep 300 echo -en Script completed for site6 | mail -s $SUBJECT $TO php /home/site7/public_html/scrapperM1.php sleep 300 php /home/site7/public_html/scrapperM2.php sleep 300 php /home/site7/public_html/scrapperM3.php sleep 300 echo -en Script completed for site7 | mail -s $SUBJECT $TOdoneScript 2 (mastercron.sh):#!/bin/shRESULT=`ps axf | grep MasterCronCarlos.sh | grep -v grep | awk '{print $1}'`if [ $RESULT -ge 0 ]; thenecho Runningelseecho Not Runningsh /home/backups/MasterCronCarlos.shfi; | Run 'N' Number of PHP Scripts on shell forever in a loop | php;shell;unix | null |
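Since no answer was accepted, here is a rough sketch of how the repetition could be collapsed: loop over the site directories, and use a lock file instead of a separate watchdog script so only one copy ever runs. The paths mirror the script above; flock is assumed to be available on the host.

#!/bin/bash
TO="[email protected]"           # hypothetical address
SUBJECT="CRON Status | Jai Server"
LOCK=/var/run/mastercron.lock

exec 200>"$LOCK"
flock -n 200 || exit 0          # another instance is already running

while true; do
    echo "Starting fresh loop of cron" | mail -s "$SUBJECT" "$TO"
    for site in site1 site2 site3 site4 site5 site6 site7; do
        for n in 1 2 3; do
            php "/home/$site/public_html/scrapperM$n.php"
            sleep 300
        done
        echo "Script completed for $site" | mail -s "$SUBJECT" "$TO"
    done
done

With the flock guard in place, the second watchdog script (mastercron.sh) becomes unnecessary: cron can simply try to start this script every few hours, and the lock makes the extra launches exit immediately.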
_cstheory.2780 | The traditional definition of PCPs have perfect completeness -- If $x\in L$, then the prover can give a proof on which the verifier (on reading constantly many bits) always accepts. Suppose we modify the definition as follows:A language $L$ is in $iPCP(r,q)$ such that there exists a verifier who tosses $r$ coins and queries $q$ locations of a proof $\pi$ such that:If $x\in L$, then there exists a $\pi$ that makes the verifier accept with probability at least $3/4$. If $x\notin L$, then for every proof $\pi$, the verifier accepts with probability at most $1/4$.There is an easy reduction from $3SAT$ to $\text{Max}2SAT$ (problem 5 in this) that gives a simple $iPCP$ for $3SAT$. And by definition, the class is closed under complement so it includes $coNP$ as well. Therefore, it is unlikely that there is a way to convert an $iPCP$ to a $PCP$ with the same parameters (well, that would at least mean $coNP = NP$). (thanks to Peter Shor for pointing this out)Have such incomplete PCPs been studied earlier? | PCPs with imperfect completeness | cc.complexity theory;reductions;pcp | Yes, PCPs with imperfect completeness have been studied before. The main motivation is that for some natural and interesting problems, finding whether there is a perfect solution is actually easy (polynomial-time), while approximating the best solution, if this best solution is not perfect, is (or believed to be) hard. Here are some examples:The Unique Games Conjecture. Unique games with perfect completeness are easy, so the Unique Games Conjecture focuses on unique games with imperfect completeness.Feige and Reichman showed NP-hardness for unique games for some constant completeness and soundness values (not c=1-delta s=epsilon as conjectured).Linear equations. Linear equations with perfect completeness are easy (using Gaussian elimination), so hardness results for linear equations (e.g., Hastad's) focus on imperfect completeness. |
_unix.786 | Please give a brief description for each tool. | What are some common tools for intrusion detection? | linux;networking;security;monitoring | null |
_unix.145028 | Somehow my .vdi (Linux guest OS) file got corrupted. Now I have some files in it (inside the vdi file) and I want to recover those files. How can I do that? | Linux: Recover files from .vdi file | virtualbox;virtual machine;data recovery;ext4 | Assuming you're on a Linux host as well (you didn't mention that), you can always try the Network Block Device (NBD) option:

sudo modprobe nbd max_part=16
sudo qemu-nbd -c /dev/nbd0 <path to your vdi file>
ls -lh /dev/nbd0*    # lists all the partitions on the vdi

Choose which of the partitions you want to mount (e.g. the 1st partition), then:

sudo mount /dev/nbd0p1 /mnt

That may work, depending on how corrupt your vdi file is. You can use normal filesystem tools on this mount and/or dev node. When done, unmount it and:

sudo qemu-nbd -d /dev/nbd0

Note: You may have to install qemu-nbd depending on your distro (package qemu-utils on Ubuntu, qemu-img on Fedora). If you're on Windows you may have some success by following this post. An alternative Windows way would be to quickly install another Linux VM and then add your vdi file as an additional disk to that VM. You can then use the NBD procedure above on it.
_unix.61459 | I run OpenBSD/amd64 5.2 on a system with 12GB of RAM, and I want to use about 6GB to 8GB for filesystem caching.By default, 5.2 amd64 comes with sysctl kern.bufcachepercent set to 20 (20%); I've increased it to 50% and then to 60%, and then went through lots of files that definitely total above 10GB, yet, when I go into top, I am shown the following line:Memory: Real: 25M/1978M act/tot Free: 9961M Cache: 1670M Swap: 0K/48GThat's 1.7GB out of 12GB, less than 15%! I've even tried increasing kern.maxvnodes from 117091 to 400000 (and kern.numvnodes did indicate that all 400k of vnodes got utilised pretty quickly), but I'm still having under 2GB of RAM used for caching.Is it not possible to use 6GB of RAM for disc cache on OpenBSD 5.2 amd64? Is it limited to something around 1.7GB? | Does sysctl kern.bufcachepercent not work in OpenBSD 5.2 above 1.7GB? | openbsd;64bit;cache;buffer | I did some testing, and it seems like on my system, an equivalent of 100% buffercache would have been about 2.8GB (I tried 75%, and I'm getting about 2.1GB used for cache), so, the percentage is taken out of a value similar to about 2.7 or 2.8GB (it might depend on a system / bios etc).It would seem like this is related to the buffer cache being restricted to 32bit DMA memory, and most likely even at 100% of the setting, said memory is taken out of the pool that is shared with other kernel resources, so, the percentage would always be out of a number quite significantly below 4GB on any system, it seems.http://www.openbsd.org/cgi-bin/cvsweb/src/sys/kern/vfs_bio.chttp://marc.info/?l=openbsd-tech&m=130174663714841&w=2 |
_reverseengineering.14967 | I wanted to know which permissions each of the following PE sections has (Windows):

.idata
.rsrc
.data
.text
.bss
.rdata
.edata

Thanks in advance, I couldn't find an answer using Google. :) | PE Sections Permissions | windows;pe | You can find the official PE-format documentation here. The section permissions are completely up to the programmer / compiler. You can parse the permissions out of the section table. Please know that the permissions may be changed at runtime and may also be manipulated for smaller units of memory (pages!). That being said, there is always some 'default' permission set by the compiler if the programmer doesn't care about it explicitly: for example, the .text section is usually READ + EXECUTE, while data sections are usually READ + WRITE (+ PROTECTED). There is no magic here. There is plenty of software available to look at the startup section permissions (basically everything that parses PE files).
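A small sketch of reading the per-section flags yourself, using the third-party Python pefile library (assumed to be installed via pip). The R/W/X bits live in each section header's Characteristics field, as described in the PE documentation referenced above; the input file name is a placeholder.

import pefile

pe = pefile.PE("target.exe")  # hypothetical input file
for section in pe.sections:
    flags = section.Characteristics
    perms = "".join([
        "R" if flags & 0x40000000 else "-",  # IMAGE_SCN_MEM_READ
        "W" if flags & 0x80000000 else "-",  # IMAGE_SCN_MEM_WRITE
        "X" if flags & 0x20000000 else "-",  # IMAGE_SCN_MEM_EXECUTE
    ])
    print(section.Name.rstrip(b"\x00").decode(), perms)

Running this against a typical executable should print something like ".text RX" and ".data RW-", matching the defaults described in the answer.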
_unix.49376 | For representing the logical email folder structure, a Maildir++ format exists that was first implemented by the Courier mail server, thus sometimes also being referred to as the Courier format.Unfortunately, some ambiguity exists in documentation and various implementations, regarding how the logical email folders are to be stored inside the main Maildir directory, ie. inside the main email directory containing the three subdirectories cur, new and tmp.Assuming there are three email folders, INBOX, Sent and Trash, I have understood the canonical spec to mean the following filesystem folder structure:mail \ - cur - new - tmp - .INBOX \ - cur - new - tmp(and likewise for .Sent and .Trash)Furthermore, assuming we have a deeper folder structure, say, a top-level email folder Chronological/2012/09, it would be stored as follows:mail \ - cur - new - tmp - .Chronological.2012.09 \ - cur - new - tmpIs this correct? Some docs are a bit ambiguous about this, and I've seen implementations vary: some store the top-level folders without the leading dot, and some create a new filesystem subdirectory for each subfolder level (rather than using the dots to denote a new logical level). | what is the correct Courier / Maildir++ email (sub)folder format? | email;imap;maildir | null |
_unix.144742 | Can I combine multiple .out files with the cat command in bash or csh? Additionally, I want only the header of the first file to be kept when concatenating, and I was thinking of using the tail function for that. Does anyone know if this is possible, or of another way to do this? | concatenate .out files | bash;shell;cat;output | null
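Since the question has no accepted answer, here is a plain-shell sketch, assuming every .out file carries a one-line header: keep the header from the first file, then append everything after line 1 from each file (tail -n +2 prints from the second line onward).

out=combined.txt            # write outside the *.out glob so we don't re-read it
set -- *.out                # the files, in glob order
head -n 1 "$1" > "$out"     # header from the first file only
for f in "$@"; do
    tail -n +2 "$f" >> "$out"
done

If the header spans more than one line, adjust both numbers accordingly (head -n N and tail -n +$((N+1))).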
_codereview.127341 | I am going through some of my companies EDI reports to see if I can make them faster. The query below runs in less than one second in our AWS RDS environment, but it takes ~55 seconds on a blade in the office, so there is room for improvement. I do not see any common performance culprits based upon Google searches, but I am not an expert.When I say, EDI, I am expressing that the purpose of this query is to transfer data from one computer system to another. For instance, that is why I am converting all of the dates to strings, because that is what the target system wants (in mm/dd/yyyy format).Some background info:There are about 11,000 rows in each of the two tablesTwo of tables are de-normalized Snapshot tablesThe query returns 3500 recordsThe two Snapshot tables have non-unique, non-clustered indexesThe third table (CaseOpenEnrollmentPeriod) has a unique clustered index, although it is only four records long.DECLARE @MostRecentSnapshotTime datetimeSET @MostRecentSnapshotTime = (SELECT MAX(SnapshotLoadStartTime) AS Expr1 FROM Snapshot.SnapshotLog)SELECT SPE.SSN ,SPE.SSN ,CONVERT(NVARCHAR(10), SPE.BirthDate, 101) ,SPE.LastName ,SPE.FirstName ,LEFT(SPE.MiddleName, 1) ,SPE.Address1 ,SPE.Address2 ,SPE.City ,SPE.StateCode ,LEFT(SPE.ZipCode, 5) ,'SampleText' ,'SampleText' ,CASE WHEN SPE.MaritalStatusCode = 'SampleText' THEN 'SampleText' END ,SPE.Gender ,SPE.[Status] ,CASE WHEN SEE.PlanCode = 'MED' THEN 'MED 16' WHEN SEE.PlanCode = 'MEDP' THEN 'MEDPlus 16' WHEN SEE.PlanCode = 'MEDH' THEN 'MEDHeavy 16' WHEN SEE.PlanCode = 'MEDHP' THEN 'MEDHeavyPlus 16' WHEN SEE.PlanCode = 'MVP' THEN 'MVP 16' ELSE NULL END ,'4' ,CASE WHEN SEE.TierCode = 'EO' THEN 'E' WHEN SEE.TierCode = 'ESP' THEN 'ES' WHEN SEE.TierCode = 'ECH' THEN 'EC' WHEN SEE.TierCode = 'EFam' THEN 'F' ELSE NULL END ,'4' ,SPE.DepartmentCode ,CONVERT(NVARCHAR(10), SPE.HireDate, 101) ,CONVERT(NVARCHAR(10), COEP.BenefitsEffectiveDate, 101) ,CONVERT(NVARCHAR(10), SEE.EffectiveDate, 101) ,CONVERT(NVARCHAR(10), SEE.StopDate, 101) ,CASE WHEN NULLIF(LTRIM(SPE.TerminationDate), '') IS NOT NULL AND SEE.LifeEventActionID IS NOT NULL THEN CONVERT(VARCHAR(4), SEE.LifeEventActionID) WHEN NULLIF(LTRIM(SPE.TerminationDate), '') IS NOT NULL AND SEE.LifeEventActionID IS NULL THEN 'AI' ELSE '' END ,SEE.EnrollerID ,'1' ,(SEE.IssCost * 12 / SPE.PayCycle) ,CASE WHEN SPE.PayCycle = 52 THEN 'Weekly' WHEN SPE.PayCycle = 12 THEN 'Monthly' WHEN SPE.PayCycle = 24 THEN 'Semi-Monthly' ELSE 'Other' END ,SPE.WorkPhone ,SPE.HomePhone ,SPE.Email ,ISNULL(SPE.HeightInInches, '') ,ISNULL(SPE.WeightInPounds, '')FROM Snapshot.EmployeeElection SEE JOIN Snapshot.PersonEmployee SPE ON SEE.EmployeeID = SPE.AssignedID AND SEE.ConfirmationID = SPE.ConfirmationID AND SEE.CaseOpenEnrollmentPeriodID = SPE.CaseOpenEnrollmentPeriodID AND SEE.LoadDateTime = SPE.LoadDateTime JOIN CaseOpenEnrollmentPeriod COEP ON COEP.CaseOpenEnrollmentPeriod_ID = SEE.CaseOpenEnrollmentPeriodIDWHERE SEE.LoadDateTime = @MostRecentSnapshotTime AND PlanID <> 8 AND SPE.ConfirmationID > 0;Here is the XML execution plan. I am new to execution plans, so I am not sure if this is the info you need or not. | Demographics information for EDI report | performance;sql;sql server;t sql | AliasesThe alias names SEE, SPE and COEP are not descriptive. It's often tempting to make acronyms for tables and use those as aliases. 
It's all well and good for people who are already familiar with the structure of the database, but when you show this code to an outsider those aliases become a bit of an obstacle to understanding the code. I think something like this would read a lot better:

FROM Snapshot.EmployeeElection AS EmpElecs --was SEE
    JOIN Snapshot.PersonEmployee AS Emps --was SPE
    ...
    JOIN CaseOpenEnrollmentPeriod AS Periods --was COEP

Execution plan

An update after you posted the execution plan. See that RID Lookup (Heap) operation on Snapshot.PersonEmployee? That to me indicates that the SQL engine could not use the index for that query; see Identifying Key and RID Lookup Issues and How to Resolve by Aaron Bertrand on the DBA.SE site. Quoting him:

"These lookups occur when an index does not satisfy the query (non-covered query) and therefore additional data needs to be retrieved from the clustered index or the heap. Non-covered queries can be a problem because, for every row in the index, the additional column(s) must then be fetched; this can have a significant impact on large data sets and impact overall performance."

If it is hard to pinpoint the exact problem, I would suggest starting by looking at the columns in this join and seeing whether they are indexed; if not, that might be your problem right there. If not all of those columns are indexed, then the query likely has to go look up rows in the table itself, as hinted at by your execution plan.

JOIN Snapshot.PersonEmployee SPE
    ON SEE.EmployeeID = SPE.AssignedID
    AND SEE.ConfirmationID = SPE.ConfirmationID
    AND SEE.CaseOpenEnrollmentPeriodID = SPE.CaseOpenEnrollmentPeriodID
    AND SEE.LoadDateTime = SPE.LoadDateTime

You could either look into adding indexes, which will speed up performance for all queries that use them, or perhaps move the non-indexed columns to a HAVING clause at the end, which might move those matches to the result set instead of the initial lookup (maybe).

Date conversions

You've mentioned that the code is meant for an EDI which accepts dates as strings (in mm/dd/yyyy format). I have to deal with things like that pretty frequently, and chances are you probably cannot change the EDI itself (or not without significant effort/expense, at least), so that's understandable. I was going to suggest casting the dates instead of converting, but then I realized that casting does not let you pick the output format; it would come out as yyyy-mm-dd, which is the default SQL Server/T-SQL date format. Unfortunately, that's probably one of the most expensive sets of operations in your query. There may not actually be a way to improve these, given the limitations. Date-time handling can be annoying, and more so when you have to convert between actual date-time values and string representations of them.
_unix.244954 | I'm launching a X server with Xquartz (OSX 10.10.5) using Terminal.app, and I'm trying to connect to a EC2 instance with:##local$ echo $DISPLAY/private/tmp/com.apple.launchd.MO57wSPup1/org.macosforge.xquartz:0##EC2$ ssh -X user@servername$ echo $DISPLAYlocalhost:10.0$ cat /proc/versionLinux version 4.1.10-17.31.amzn1.x86_64 (mockbuild@gobi-build-60008) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) )When I try to launch xclock, I have an X window opening up, but when I try emacs, no window is showing up. For example, if I type emacs, emacs opens inside the terminal. If I try emacs &, the job is stopped.Anyone has an idea why ? I have installed it using :$ sudo yum install emacs$ which emacs/usr/bin/emacs$ emacs --versionGNU Emacs 24.3.1 | Emacs X11 Forwarding does not work but xclock does | linux;x11;emacs | Most likely the Emacs you're running was compiled without X support. As an example, the default Emacs binary in /usr/bin on Mac OS X was definitely built without X support. To check your binary, start Emacs and then in the *scratch* buffer type:(featurep 'x)and then hit Ctrl+J. A response of t means you have X support, nil means you don't. |
_unix.102225 | Do we have any option to transfer folder from one server to another, that is faster and compressible and bandwidth limit settable method? The servers run on Linux.I tried tar + ssh but I couldn't set bandwidth limit to 3 or 4 MBPS likewise.Any other service does it? | How to compress and transfer folders from one server to another over network | scripting;scp | null |
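No answer was accepted, but rsync covers all three requirements (speed, compression, bandwidth cap) in one tool, so a hedged example: -z compresses data on the wire, --bwlimit caps the transfer rate (in KBytes/s in classic rsync versions), and it runs over ssh by default. Host and path names below are placeholders.

# Copy /data/folder to the remote server, compressed, capped near 4 MB/s
rsync -avz --bwlimit=4096 /data/folder/ user@remote:/backup/folder/

# Alternative: tar over ssh, rate-limited through the third-party pv tool
tar czf - /data/folder | pv -L 4m | ssh user@remote 'tar xzf - -C /backup/'

The rsync variant has the extra benefit of being restartable: if the transfer is interrupted, re-running the same command only sends what is missing.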
_codereview.127543 | I'd like to know how I can refactor my JS code for usability.Currently my JavaScript uses some HTML to build out a simple calendar with the following functionality:a. opens into the current month with today's date highlightedb. allows you to view next and previous monthMy goal was to build out a calendar with the previously mentioned functionality while also having the ability to accept an array of event objects to populate the calendar. Currently, my calendar does NOT accept an array of event objects but that feature will be for another time.My goal with your help is to refactor the calendar to be less repetitive and more like functional programming.You can view my Github with the HTML and CSS.var createMonth = { createMonthYearLabel: function () { var me = this; // create the label that appears on top of the calendar var currentMonth = me.currentMonth; // the index of the current month ex. 0 var currentMonthName = me.months[ currentMonth ]; // get the name of current month ex. January var monthYear = me.currentMonthYear = currentMonthName + + currentYear; // create ex. January 2016 me.$monthYearSpan.innerHTML = monthYear; }, createDaysInMonth: function () { var me = this; var currentYear = me.currentYear, currentMonth = me.currentMonth, currentDateFull = new Date(currentYear, currentMonth); var totalDaysInMonth = new Date(currentYear, currentMonth + 1, 0).getDate(); // all days in a month var firstDay = new Date(currentYear, currentMonth, 1); firstDay = /[A-Za-z]{3}/.exec( firstDay )[0]; // day of the week when the month first begins ex. Tue var totalNumOfRows = 6; // using 6 rows to show an entire calendar month var dayCounter = 0; var daysOfWeek = [ 'Sun', 'Mon', 'Tue', 'Wed','Thu','Fri', 'Sat' ]; var setFirstDayOfMonth; // to be set once to true for (var i = 0; i < totalNumOfRows; i++) { var $row = me.$monthTable.insertRow( i ); // creating a row daysOfWeek.forEach( function( day, index ){ // iterating through the days of the week var $cell = $row.insertCell( index ); // adding a cell to a row if ( day === firstDay && !setFirstDayOfMonth ) { // if first day of month not set & day equals first day // happens once for setting first day of month dayCounter++; setFirstDayOfMonth = true; $cell.innerHTML = dayCounter; if ( me.currentMonthYear === me.todayMonthYear ) { if ( dayCounter === me.todayDate ) { $cell.className = 'today'; // in case the first day of the month is today } } return; } if ( dayCounter === 0 || dayCounter === totalDaysInMonth ) { // creating empty squares with no dates / placeholders $cell.innerHTML = ; $cell.className = 'nil'; return; // dayCounter will not be triggered on empty days } if ( dayCounter > 0 ) { dayCounter++; $cell.innerHTML = dayCounter; if ( me.currentMonthYear === me.todayMonthYear ) { if ( dayCounter === me.todayDate ) { $cell.className = 'today'; // in case the first day of the month is today } } } }); }; }};function createCalendar () { var me = this; me.prevBtn = document.getElementsByClassName('left button')[0]; // left arrow on calendar me.nextBtn = document.getElementsByClassName('right button')[0]; // right arrow on calendar me.$monthYearSpan = document.getElementsByClassName('month-year')[0]; // month-year title me.$monthTable = document.getElementsByClassName('current')[0]; // table me.months = [ 'January', 'Feburary', 'March', 'April', 'May','June','July','August','September','October','November','December' ]; me.todayDateFull = new Date(); me.currentYear = me.todayDateFull.getFullYear(); me.currentMonth = 
me.todayDateFull.getMonth(); me.todayDate = me.todayDateFull.getDate(); me.currentMonthName = me.months[ me.currentMonth ]; // get the name of current month ex. January me.todayMonthYear = me.currentMonthName + + me.currentYear; var createMonthYearLabel = createMonth.createMonthYearLabel, createDaysInMonth = createMonth.createDaysInMonth; createMonthYearLabel.call( me ); createDaysInMonth.call( me ); prevBtn.onclick = function () { // goes back in time var me = this; me.$monthYearSpan.innerHTML = ; me.$monthTable.innerHTML = ; if ( me.currentMonth === 0 ) { // decrease the year me.currentMonth = 11; me.currentYear--; } else { me.currentMonth--; } createMonthYearLabel.call( me ); createDaysInMonth.call( me ); }.bind( me ); nextBtn.onclick = function () { // goes forward in time var me = this; me.$monthYearSpan.innerHTML = ; me.$monthTable.innerHTML = ; if ( me.currentMonth === 11 ) { // increase the year me.currentMonth = 0; me.currentYear++; } else { me.currentMonth++; } createMonthYearLabel.call( me ); createDaysInMonth.call( me ); }.bind( me );}createCalendar();body { background: #e0e0e0e;}#cal { -webkit-box-shadow: 0px 3px 3px rgba(0, 0, 0, 0.25); margin: 50px auto; font: 13px/1.5 Helvetica Neue, Helvatica, Arial, san-serif; display:table; /* centers the entire calendar */}#cal .header { cursor: default; background: #cd310d; background: -webit-gradient(linear, left top, left bottom, from(#b32b0c) to(#cd310d)); height: 55px; position: relative; color:#fff; -webkit-border-top-left-radius: 5px; -webkit-border-top-right-radius: 5px; border-top-left-radius: 5px; border-top-right-radius: 5px; font-weight: bold; text-shadow: 0px -1px 0 #87260C; text-transform: uppercase;}#cal .header span { display: inline-block; line-height: 55px; /* centers item horizontal */}#cal .header .hook { width: 9px; height: 28px; position: absolute; bottom: 60%; border-radius: 10px; -webkit-border-radius: 10px; background: #ececec; background: -moz-linear-gradient(right top, #fff, #827e7d); background: -webkit-gradient(linear, right top, right bottom, from(#fff), to(#827e7d)); box-shadow:0px -1px 2px rgba(0, 0, 0, 0.65 ); -moz-box-shadow:0px -1px 2px rgba(0, 0, 0, 0.65 ); -webkit-box-shadow:0px -1px 2px rgba(0, 0, 0, 0.65 );}.right.hook { right: 15%;}.left.hook { left: 15%;}#cal .header .button { width: 24px; text-align: center; position: absolute;}#cal .header .left.button { left: 0; -webkit-border-top-left-radius: 5px; border-top-left-radius: 5px; border-right: 1px solid #ae2a0c;}#cal .header .right.button { right: 0; top: 0; border-left: 1px solid #ae2a0c; -webkit-border-top-right-radius: 5px; -moz-border-radius-topright: 5px; border-top-right-radius: 5px;}#cal .header .button:hover { background: -moz-linear-gradient(top, #d94215, #bb330f); background: -webkit-gradient(linear, left top, left bottom, from(#d94215), to(#bb330f));}#cal .header .month-year { letter-spacing: 1px; width: 100%; text-align: center;}#cal table { background: #fff; border-collapse: collapse; width: 100%;}#cal td { color: #2b2b2b; width: 80px; height: 65px; line-height: 30px; text-align: center; border: 1px solid #e6e6e6; cursor: default;}#cal #days td { height: 26px; line-height: 26px; text-transform: uppercase; font-size: 90%; color: #9e9e9e;}#cal #days td:not(:last-child) { border-right: 1px solid #fff;}#cal #cal-frame td.today { background: #ededed; color: black; border: 0;}#cal #cal-frame td:not(.nil):hover { color: #fff; text-shadow: #6C1A07 0px -1px; background: #cd310d; background: -webkit-linear-gradient(top, #b32b0c, #cd310d); 
background: -moz-linear-gradient(linear, left top, left bottom, from(b32b0c), to(#cd310d)); -moz-box-shadow: 0px 0px 0px; -webkit-box-shadow: 0px 0px 0px;}#cal #cal-frame td span { font-size: 80%; position: relative;}#cal #cal-frame td span:first-child { bottom: 5px;}#cal #cal-frame td span:last-child { top: 5px;}#cal #cal-frame .curr { float: left;}#cal #cal-frame table.temp { position: absolute;}<!DOCTYPE html> <head> <meta charset=utf-8> <title>Calendar Widget</title> </head> <body> <div id=cal> <div class=header> <span class=left button id=prev>⟨</span> <span class=left hook></span> <span class=month-year id=label></span> <span class=right hook></span> <span class=right button id=prev>⟩</span> </div> <table id=days> <tr> <td>Sunday</td> <td>Monday</td> <td>Tuesday</td> <td>Wednesday</td> <td>Thursday</td> <td>Friday</td> <td>Saturday</td> </tr> </table> <div id=cal-frame> <table class=current></table> </div> </div> </body></html> | JavaScript calendar widget | javascript;performance;object oriented;datetime | null |
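No answer was accepted here, but as a starting point for the de-duplication the question asks about: the two click handlers in createCalendar differ only in the direction of the month change, so they can share one helper. A sketch against the code above:

function changeMonth(me, delta) {
    me.$monthYearSpan.innerHTML = '';
    me.$monthTable.innerHTML = '';
    me.currentMonth += delta;
    if (me.currentMonth < 0)  { me.currentMonth = 11; me.currentYear--; }
    if (me.currentMonth > 11) { me.currentMonth = 0;  me.currentYear++; }
    createMonth.createMonthYearLabel.call(me);
    createMonth.createDaysInMonth.call(me);
}

me.prevBtn.onclick = function () { changeMonth(me, -1); };
me.nextBtn.onclick = function () { changeMonth(me, +1); };

The same collapse-into-a-parameter idea applies to the repeated "clear, recompute, redraw" sequence, which this helper also centralizes.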
_unix.311613 | I should note, I'm kinda new to this so I am not sure what to do. Ifconfig returns the following:eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.1.39 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::a00:27ff:fefa:258e prefixlen 64 scopeid 0x20<link> ether 08:00:27:fa:25:8e txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 20 bytes 1368 (1.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0My etc/network/interfaces looks like this:# This file describes the network interfaces available on your system# and how to activate them. For more information, see interfaces(5).source /etc/network/interfaces.d/*auto eth0iface eth0 inet staticaddress 192.168.1.39netmask 255.255.255.0gateway 192.168.1.1Route -n returns the following:Kernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0I am using the following dns:domain **search **nameserver 172.139.62.5nameserver 8.8.8.8(Stars hiding my local DNS which I rather not have public)Using a non-static IP does work. I am running Linux in a VM.I have tried everything that is easy enough to Google and I'm running out of solutions. What could it be? | Internet not working when using Static IP. No current solutions working | networking;configuration | It appears that the gateway address you are using, 192.168.1.1 does not match the address of your router. If you are using static IP addressing the details much match your network or it cannot work.Start with the IP address of your router. It might be 192.168.1.254. It might be 10.11.12.13. It could be something else entirely.You then need the netmask, again from your router configuration, which will either be 24 or something like 255.255.255.0.For every 255 in the netmask you need to copy through the corresponding numbers from your router's IP address. So three lots of 255 would mean you copy through the first three groups of numbers. (If you have only a single number netmask such as 24, divide it by 8, and the result gives you the count of the numbers you must copy through.) Finally, you need to assign unused values so that you end up with four groups of numbers.Assume your router is 10.1.1.254 and your netmask is 255.255.0.0. Then you'd copy through 10.1 and come up with the remaining two numbers in the range 1-254. The result is your IP address, eg 10.1.44.66, but do not use a number group that's already in use! |
_cs.74792 | R-Trees can take several paths to retrieve an item because of overlaps in data being indexed or index structure. However, if we run the same search many times on the same unmodified structure, can we remember the correct path and avoid taking the wrong path? Maybe caching the correct path? But I'm not so sure that R-Tree implementations do this or that R-Trees fully loaded in memory do this either. | R-Tree multipath search - remember the path? | trees;database theory;databases | null |
_webapps.51424 | I came across this post on the GitHub blog: GitHub in your language, but I can't seem to find the option to use the GitHub website in another language. The links in the blog post that append a GET parameter to the URL seem to do nothing apart from that, and the post refers to links in the footer which I can't find.Does anyone know how I can get a localized version of GitHub? It seems strange that such a widely used application isn't localized. | GitHub in other languages | github;localization;translation | Thanks to the Internet Archive Wayback Machine, this is what the footer used to look like when they had the translation options circa late 2010:A redesign or two since then has removed the footer, and that the URL parameters have no effect, it's looking like that feature has been quietly deprecated from the GitHub web interface.Also, you can't load SSL pages via Google Translate or Bing Translate, so using those to filter the page in a non-English language will unfortunately not work.Unless there is another service that will translate SSL delivered pages (all of github.com is served over SSL), you can't live render a local translation of github.com to read. |
_unix.89575 | I have a configuration where UDP packets containing encapsulated DNS data are sent to a KVM instance for processing. The KVM instance sits behind an IPtables firewall which is also doing NAT. The stream is coming in on average at about 25Mb per second.The stream comes in and works as expected with one exception. If I terminate the KVM instance and remove the IPtables rules, the stream no longer gets forwarded through the IPtables box. I expect this behavior. If I bring up another KVM instance with the same IP address, and add the same rules back into iptables, the stream does not get forwarded until I stop the stream on the sending side and leave it off for at least 30 seconds. If I start the stream after waiting 30 seconds, the traffic is again passed to the KVM instance.I'm thinking there is some kernel parameter related to UDP timeouts that I'm unaware of. Has anyone seen behavior like this? The IPtables box is running CentOS 6.4. The KVM instance is running Ubuntu 10.04. | IPTables dropping UDP packet stream | iptables;timeout;udp | null |
_unix.156249 | I am writing a process that needs to periodically authenticate users (i.e. call pam_authenticate()). For security purposes I would prefer not to run this process with root privilege. I can use capabilities(7) for my other security requirements (i.e. it needs to bind to HTTPS port, so can use CAP_NET_BIND_SERVICE).I could start with root privs and drop them temporarily, raising them only when needed to call pam_authenticate and then dropping them again. But as long as the ability remains to raise privs this is really only security-by-obscurity, since an attacker who could gain control of the process could raise the privileges.Is the security restriction due to/driven by the specific PAM modules being utilized? Could it be only pam_unix.so's need to read the shadow password file that is causing issues? | Is there a Linux capabilities(7) to allow a process to execute pam_authenticate? | security;pam;capabilities | null |
_webmaster.26810 | I want to write a flexget config file to download some subtitle files form http://subs.sab.bz/ The site provides rss feeds for its new releases. Unfortunately, the link provided will open a download page, but will not get the file.I browsed through the code of the download page and the download link in the code will open the download page again when pasted in the browser will re-open the download page again... Clicking on the download button, though, will start the download of a rar file. I want to get to the link for downloading this rar file(containing the subtitle files)Is there a way to bypass this behavior. I read that usually a script on the server will provide the direct link of the download, I want to get that direct link.Alternative solution will be a linux (Ubuntu 11.10 preferred) download manager, which is able to read RSS, trigger download based on filter( specific shows) download the file to specific location, and unpack it | How to find the hidden download link of a file? | php;download | From the look of things they have configured their site to prevent people from doing exactly what you're trying to do. Any attempt to hit the download link with a download manager or request it directly won't work. |
_codereview.135608 | I made a website where is possible to search for the best transportation from a place to another. For instance, if I need to go from Berlin to Bremen, the website will search in its internal database for all the trains and buses that will cover that distance.To give the results, the websites look for all the trains which have Berlin or Bremen in their stops and will also match trains which have only one of the stations as stop.For example, I look for Berlin-Bremen, the website will also find a solution which includes a train that cover the track Milan-Dresden and stops also in Berlin, after 20 minutes a bus leave from Berlin and goes to Paris but stops in Bremen.As you can imagine the query is quite complicated and it might take a lot of time to find the results.This is the main query I've done:SELECT DISTINCTline.id AS line_id,line.type AS line_type,TIME_TO_SEC(stop.timeDeparture) AS time_departure,stop.id_station AS departure_idFROM `stops` AS stopINNER JOIN `days` AS day ON ( day.id_line = stop.id_line )INNER JOIN `lines` AS line ON ( line.id = stop.id_line )INNER JOIN `stations` AS station ON ( station.id = stop.id_station )WHERE ( line.status = 1 AND day.day = ' . $this->day_of_the_week . ' AND line.type IN (' . $this->type_of_transport . ') AND stop.id_station IN (' . $stations_in_departure_location . ') AND time_departure > ' . $min_time_plus_walk_time . ') OR ( line.status = 1 AND day.day = ' . $day_of_the_week_tomorrow . ' AND line.type IN (' . $this->type_of_transport . ') AND stop.id_station IN (' . $stations_in_departure_location . ') AND time_departure > ' . $night_min_time_plus_walk_time . ' AND time_departure < ' . $night_max_time . ') ORDER BY stop.timeDeparture, stop.timeArrivalThis is the database structure:I made a recursive search function that uses this same query to find every train/bus that you could take from any of the cities where any of the trains/buses found in the iteration before stops.Every time it finds a combination of lines that reach the chosen destination, it adds it to an array which will be then sorted and filtered depending on the users preferences.I had to limit the iteration count to 2, otherwise it gets too slow.Can you please suggest best practices, a better solution or a way to improve it? | Query for searching for the best transportation | php;mysql;database;search | null |
_softwareengineering.302269 | My site allows users to create custom HTML templates for their profiles (very much like Tumblr and the theme system), and I picked the Twig template engine for the site. However, I'm not sure if it's a good idea to give users the control of being able to access a template engine. Is this a bad thing? How should I restrict access, or should I just totally rethink the strategy? | Is opening a templating engine to users a bad idea? | security;templates;code security;server security | From the Twig homepage:Secure: Twig has a sandbox mode to evaluate untrusted template code. This allows Twig to be used as a template language for applications where users may modify the template design. and farther down:Sandboxing: Twig can evaluate any template in a sandbox environment where the user has access to a limited set of tags, filters, and object methods defined by the developer. Sandboxing can be enabled globally or locally for just some templates: {{ include('page.html', sandboxed = true) }}So, all you have to do is sandbox the user templates and you should be fine |
_unix.194175 | For instance, I have a service named mysshd.service under /usr/lib/systemd/system/ directory. Can I create a symbolic link such as:ln -s /usr/lib/systemd/system/mysshd.service /usr/lib/systemd/system/fool.serviceso that whatever operation I do with fool.service will be reflected to mysshd.service (systemctl enable/disable start/stop fool.servce) ?My purpose is that overwrite the native sshd service by a symbolic link of my own sshd service. | Can I use a symbolic link as a service of systemd? | systemd;services | null |
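A symlink in /usr/lib/systemd/system does act as an alias for most operations, but the supported way to give a unit a second name is an Alias= line in its [Install] section; systemctl enable then creates the symlink for you. A sketch (hedged: behavior differs slightly across systemd versions, and enable/disable should target the real unit name, not the alias):

# /usr/lib/systemd/system/mysshd.service
[Unit]
Description=My patched sshd

[Service]
ExecStart=/usr/local/sbin/mysshd -D   # hypothetical path

[Install]
WantedBy=multi-user.target
Alias=fool.service

After systemctl enable mysshd.service, commands like systemctl start fool.service or systemctl status fool.service resolve to the same unit.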
_softwareengineering.304648 | The Apollo missions had technology no more complicated than a pocket calculator. From link here, there's an information about Apollo Guidance Computer (AGC)The on-board Apollo Guidance Computer (AGC) was about 1 cubic foot with 2K of 16-bit RAM and 36K of hard-wired core-rope memory with copper wires threaded or not threaded through tiny magnetic cores. The 16-bit words were generally 14 bits of data (or two op-codes), 1 sign bit, and 1 parity bit. The cycle time was 11.7 micro-seconds. Programming was done in assembly language and in an interpretive language, in reverse Polish.So, I've stumbled upon some source code when I researched what was up there, and I've noticed great comments (eg. TEMPORARY, I HOPE HOPE HOPE)VRTSTART TS WCHVERT# Page 801 CAF TWO # WCHPHASE = 2 ---> VERTICAL: P65,P66,P67 TS WCHPHOLD TS WCHPHASE TC BANKCALL # TEMPORARY, I HOPE HOPE HOPE CADR STOPRATE # TEMPORARY, I HOPE HOPE HOPE TC DOWNFLAG # PERMIT X-AXIS OVERRIDE ADRES XOVINFLG TC DOWNFLAG ADRES REDFLAG TCF VERTGUIDThe actual programs in the spacecraft were stored in core rope memory, an ancient memory technology made by (literally) weaving a fabric/rope, where the bits were physical rings of ferrite material. Core memory is resistant to cosmic rays. The state of a core bit will not change when bombarded by radiation in Outer Space. Virtual Apollo Guidance Computer (AGC) software is also on GITHUB!Some part of documentation is here.Another sample of source code with great comments.033911,000064: 32,3017 06037 FLAGORGY TC INTPRET # DIONYSIAN FLAG WAVING 034090,000243: 32,3241 13247 BZF P63SPOT4 # BRANCH IF ANTENNA ALREADY IN POSITION 1 034091,000244: 034092,000245: 32,3242 33254 CAF CODE500 # ASTRONAUT: PLEASE CRANK THE 034093,000246: 32,3243 04616 TC BANKCALL # SILLY THING AROUND 034094,000247: 32,3244 20623 CADR GOPERF1 034095,000248: 32,3245 16001 TCF GOTOP00H # TERMINATE 034096,000249: 32,3246 13235 TCF P63SPOT3 # PROCEED SEE IF HE'S LYING 034101,000254: 32,3251 04635 TC POSTJUMP # OFF TO SEE THE WIZARD ... 034102,000255: 32,3252 74126 CADR BURNBABYMy question here is this:How were the teams writing this much code able to make it functional given the tools at the time?Because if you compile so much code that was used on Apollo 11... it'd take days, even weeks. I seriously doubt that programmers back then left everything to happen by chance. | Development process used for the code on Apollo 11 missions? | history;assembly;low level;imperative programming;space technology | If I understand correctly, the development process was peer review and experimentation. The team consisted of people like Math Doctors - extremely dedicated, intelligent, passionate, detail-oriented folks whose lives were dedicated to their work. So when I say peer review, I mean many peer reviews over the course of many months (more than a year).These developers ran the simulations in their heads, debugged the software on paper and worked in groups with many developers looking at the same code over and over until they were convinced it was correct. There were multiple teams - each working on a part of the whole. My Numerical Methods professor at The Ohio State University (Spring '96) wrote the code that decided when to kick off a stage of the booster rocket. He described the printout being the size of the phone book (So, maybe 2.5 to 3.5 inches thick of 8.5 x 11 inch paper - he didn't describe the font size) of Fortran code. 
When convinced, they launched an unmanned missile (rockets technically don't have gyroscopes) with a radio on board that emitted a beep at regular intervals. They listened to the beeps up to the point they expected the radio to impact the moon (crash into it and destroy itself) and stop beeping. They knew that if they missed, the radio would keep beeping long past the calculated time of impact. Impact occurred 15 seconds after the calculated time. This admittedly anecdotal story is based on my recollections from an office visit with the professor. He was very old, and it was a long time ago. This is my best recollection. |
_unix.366746 | When I select multiple files within ranger (using <Space> or V), how do I move these selected files to another directory?I've tried to use dd and pp, but this only moves the file that's currently highlighted. | How can I move all `marked` files to another directory in ranger? | file manager;ranger | null |
_codereview.143646 | This is more of an exercise for me to get use to Pandas and its dataframes. For those who didn't hear of it: Pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with structured (tabular, multidimensional, potentially heterogeneous) and time series data both easy and intuitiveI'll make this sound like an exercise:Given some link http://ABCD.abc/some_date.html, take the necessary information from the table on the page.Say the information looks like this:Team | Another Team | Col2 | Current | Col4 | Halftime | ScoresTeam1 | TeamX | info | Current1 | Col4 | Halftime1 | Scores1Team2 | TeamY | info | Current2 | Col4 | Halftime2 | Scores2Team3 | TeamW | info | Current3 | Col4 | Halftime3 | Scores3Team4 | TeamZ | info | Current4 | Col4 | Halftime4 | Scores4From fileA (data from the file is pickled - yeah, I know pickling isn't the best option, but let's stick with it for the sake of the exercise), add the info at the end of the dataframe in another 3 new columns: Current, Halftime and Scores.Let's suppose the data in the dataframe looks like this: | Team | Opponent | Col2 | Col3 Col4 | Col5 | Col6 | Date0 | Team1 | TeamX | info | info | info | info | info | some_date1 <-- see the link. date goes there in the link 1 | TeamX | Team1 | info | info | info | info | info | some_date2 <-- see the link. date goes there in the link 2 | Team3 | TeamW | info | info | info | info | info | some_date3 <-- see the link. date goes there in the link3 | TeamW | Team3 | info | info | info | info | info | some_date4 <-- see the link. date goes there in the link...and so onNow, the task:Parse each row from the dataframe (access the link using the date from the Date column of that row), and check if the team from this row can be found in the HTML table. If you find it, take Current, Halftime and Scores from the table and add the info into the newly created dataframe columns. 
Do this for each row from the dataframe.Now, I did solve this pretty easy, but it takes up to 1 minute to resolve 137 rows in the dataframe.I'd like some ideas on how can I optimise it, make better use of pandas modules and if there's something wrong with the logic.import pickleimport requestsimport pandas as pdfrom bs4 import BeautifulSoupdef get_df_from_file(pickle_filename): objects = [] with open(pickle_filename, rb) as openfile: objects.append(pickle.load(openfile)) return objectsdef add_new_df_columns(): return get_df_from_file('CFB_15_living-2.p')[0].join(pd.DataFrame(columns=['Currents', 'Halftimes', 'Scores']))def get_html_data_from_url(custom_date): url = 'http://www.scoresandodds.com/grid_{}.html'.format(custom_date) html = requests.get(url) soup = BeautifulSoup(html.text, 'lxml') rows = soup.find(table, {'class': 'data'}).find_all(tr, {'class': ['team odd', 'team even']}) teams, currents, halftimes, scores = [], [], [], [] for row in rows: cells = row.find_all(td) teams.append(cells[0].get_text().encode('utf-8')) currents.append(cells[3].get_text().encode('utf-8')) halftimes.append(cells[5].get_text().encode('utf-8')) scores.append(cells[6].get_text().encode('utf-8')) data = { 'teams': teams, 'currents': currents, 'halftimes': halftimes, 'scores': scores } return datadef process_data(): df_objects = add_new_df_columns() # data from file for index, row in df_objects.iterrows(): html_data = get_html_data_from_url(row['Date']) # dict from html for index_1, item in enumerate(html_data['teams']): if row['Team'] in item: # print('True: {} -> {}; Index: {}'.format(row['Team'], item, index)) df_objects.set_value(index, 'Currents', html_data['currents'][index_1]) df_objects.set_value(index, 'Halftimes', html_data['halftimes'][index_1]) df_objects.set_value(index, 'Scores', html_data['scores'][index_1]) print(df_objects)if __name__ == '__main__': process_data()After some tests, it looks like add_new_df_columns() is the function that takes the most time to execute, and that's because I always take the date from the row I'm at that point, and make a request using it. | Matching values from html table for updating values in pandas dataframe | python;python 2.7;time limit exceeded;pandas | Consider avoiding row iteration and simply use pandas.DataFrame.merge() on Team and Date columns. Usually, in Python pandas or numpy, vectorized processes are always the recommended course where you pass in a serialized object (vector, list, array, dataframe) to run a bulk operation in one call instead of on individual elements. To follow this approach, first you will need to compile the html data for all unique dates found in your file dataframe (pulled from pickle). Also, no need to create empty columns --Currents , Halftimes, Scores-- as the merge will bring them over. Below first two defined methods should return a dataframe object of which the final function simply merges together. Possibly, the html dataframe build may take some time as you have to parse all unique dated web pages. 
For this, try implementing pandas.read_html.def get_df_from_file(pickle_filename): with open(pickle_filename, "rb") as openfile: return pickle.load(openfile)def get_html_data_from_url(df): # LIST OF DATAFRAMES dfList = [] # ITERATE ON UNIQUE DATES for dt in set(df['Date'].tolist()): url = 'http://www.scoresandodds.com/grid_{}.html'.format(dt) html = requests.get(url) soup = BeautifulSoup(html.text, 'lxml') rows = soup.find("table", {'class': 'data'}).find_all("tr", {'class': ['team odd', 'team even']}) dates, teams, currents, halftimes, scores = [], [], [], [], [] for row in rows: cells = row.find_all("td") dates.append(dt) teams.append(cells[0].get_text().encode('utf-8')) currents.append(cells[3].get_text().encode('utf-8')) halftimes.append(cells[5].get_text().encode('utf-8')) scores.append(cells[6].get_text().encode('utf-8')) data = { 'Date': dates, 'Team': teams, 'Currents': currents, 'Halftimes': halftimes, 'Scores': scores } # APPEND DATAFRAME CREATED FROM EACH DICTIONARY dfList.append(pd.DataFrame(data)) # CONCATENATE DATAFRAME LIST finaldf = pd.concat(dfList) return finaldfdef process_data(): filedf = get_df_from_file('CFB_15_living-2.p') filedf['Team'] = filedf['Team'].str.lower() htmldf = get_html_data_from_url(filedf) htmldf['Team'] = htmldf['Team'].str.replace('[0-9]', '').str.strip().str.lower() # LEFT JOIN MERGE mergedf = pd.merge(filedf, htmldf, on=['Date', 'Team'], how='left') mergedf.to_csv('results.csv', sep='\t')
_unix.236056 | I am trying to get hold of TrueCrypt for an armhf Linux or something similar.I was wondering if there are any sources for this, or can anyone make a suggestion? I need it so that when it's ejected, nothing is at risk of being left decrypted. | Is there a TrueCrypt for armv7 hard float? | truecrypt | null
_unix.256811 | I am using a script that emails me a particular status report. The script sends me the contents of several files. I am using echo to provide a heading in the email that is used to differentiate between parts of the files. I need to make the heading bold and underlined. However, when I do this using tput bold and tput sgr0, or \033[1m the email sends an attachment instead of just populating the body of the email with the data I want. For instance, in the script I have tried placing these variables at the top and wrapping the variables around the text I want bolded:bold=${tput bold}normal=${tput sgr0}echo -e ${bold}Bolded text${normal}I have also tried:echo -e \033[1mBolded text:\033[0mBut like I mentioned when I use these methods I get a .bin attachment instead of the data in the body of the email. If I stop trying to bold the headings it works as expected, except my heading does not stand out. I am wondering if this is not compatible when combined with mailx. Is there a way to make this work when emailing? Here is my complete echo command with mailx:{ echo -e \n \033[1mBolded Text:\033[0m \n \n More text:; cat /script; } | mailx -r user -s email subject [email protected] | Bold and Underline using echo with email | bash;shell script;scripting;email;fonts | null |
_unix.381576 | The task given to me is to write a script on Linux that automatically loops over the input files. I have been given 400 files in a folder, and each file contains contigs with headers. Now I need to rename each file and the headers of the contigs in each file with numbering in ascending order. My script works when dealing with contigs, but my looping over the input is not working. Could anyone help me solve this? Thank you.(Below is my script)for f in home/intern/folder/strainsdo awk '/^>/{print ">"$f"_contigs" ++i; next}{print}' <$f.fa> $f_contigs.fadone | Missing Pathway | linux;awk;rename | null
_cstheory.38493 | Consider multiple students taking a multiple-choice tests. The test has $q$ questions, and each question has $m$ choices. ($q$ can be large, but $m$ is rather small.) You are given $n$ tests, complete with all answer choices and their grades (number of questions correct). Find the set of answers keys which produce these scores.I and a few friends had a go at it, but I was wondering if there was an efficient algorithm to solve this. | Finding an answer key given tests and scores | ds.algorithms;optimization | Solving this problem is NP-hard. In particular, even deciding whether there exists an answer key consistent with the given answer choices and scores is NP-complete. We can prove this by reduction from 1-in-3 SAT.Given a 1-in-3 SAT formula with $c$ clauses and $n$ variables, we construct an instance of your problem with $n$ questions, each with 3 choices, and $c+1$ sample answer/grade pairs.For each variable $x$, one of the questions is what is the value of boolean variable $x$? and the three possible answers are True, False, and 17.One student answers every question with the answer $17$ and gets a score of zero. The remaining students are each assigned a clause. For every positive literal $x$ occurring in the clause, the student answers the question about $x$ with the answer True. For every negative literal $\neg x$ occurring in the clause, the student answers the question about $x$ with the answer False. For every other question, the student answers $17$. Every such student gets a score of one.An answer key must correspond with an assignment of binary values to the variables (since no question has $17$ as the correct answer as shown by the fact that the student who always answered $17$ got zero points). Furthermore, there must be exactly one true literal in each clause under this assignment in order for the answer key to match the scores given to the students. In other words, the satisfying assignments to the input 1-in-3 SAT formula are in a bijection with the answer keys consistent with the student scores.Clearly then, deciding whether an answer key consistent with the student scores exists is NP-hard. Similar arguments can be made to show that for example counting answer keys is #P-complete. |
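To make the reduction above concrete, here is a minimal Python sketch that builds the answer-key instance from a 1-in-3 SAT formula. The clause and student encoding is exactly the one described in the answer; the input convention (a list of 3-tuples of signed integers for clauses) is an assumed format for illustration only.

def reduction(num_vars, clauses):
    # clauses: list of 3-tuples of nonzero ints; +i means x_i, -i means NOT x_i
    # (this literal encoding is an assumed input convention, not from the answer).
    students = []
    # One student answers "17" everywhere and scores 0, which forces the
    # answer key to avoid the answer "17" on every question.
    students.append((["17"] * num_vars, 0))
    # One student per clause: answer the sign of each literal, "17" elsewhere.
    # A score of exactly 1 forces exactly one true literal per clause.
    for clause in clauses:
        answers = ["17"] * num_vars
        for lit in clause:
            answers[abs(lit) - 1] = "True" if lit > 0 else "False"
        students.append((answers, 1))
    return students

# (x1 OR x2 OR x3) AND (NOT x1 OR x2 OR x4), read as 1-in-3 clauses:
print(reduction(4, [(1, 2, 3), (-1, 2, 4)]))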
_cs.14135 | Let's say we have an inequality, $p \le {a \choose b}$ where $p$ is a fixed constant and $a, b$ are variables. The problem is that, we are trying to find the minimum $a$ with respect to the inequality $p \le {a \choose b}$. Is there a closed form solution (can be approximate as well/doesn't have to be exact) for that combinatorial optimization problem? | Solution for a combinatorial minimization problem | optimization;combinatorics | G. Bach noticed that the best choice for $b$ is $\lfloor a/2 \rfloor$, and furthermore this choice gives the maximal binomial coefficient. The easiest way to see this is to consider the ratio of two adjacent binomial coefficients:$$ \frac{\binom{a}{b}}{\binom{a}{b+1}} = \frac{b+1}{a-b}.$$Therefore $\binom{a}{b} \leq \binom{a}{b+1}$ iff $b+1 \leq a-b$ iff $2b+1 \leq a$, and we get the desired result.The binomial theorem states that$$ \sum_{b=0}^a \binom{a}{b} = 2^a, $$hence$$ \frac{2^a}{a+1} \leq \binom{a}{\lfloor a/2 \rfloor} \leq 2^a. $$(The correct order of magnitude, $\Theta(2^a/\sqrt{a})$, can be found using Stirling's approximation.) Therefore the optimal $a$ satisfies the following inequalities:$$ \frac{2^{a-1}}{a} < p \leq 2^a. $$Therefore $a \geq \log_2 p$. On the other hand, when $p \geq 4$, $a \geq 2$, and so $a - \log_2 a \geq a/2$ (since $a-\log_2 a$ is monotone and $2 - \log_2 2 = 2/2$). Therefore for $p \geq 4$,$$ 2p\log_2 (2p) > 2\frac{2^{a-1}}{a} (a - \log_2 a) > 2^{a-1}.$$We conclude that$$ \log_2 p \leq a < \log_2 p + 1 + \log_2 \log_2 (2p). $$Therefore the binary search that G. Bach mentions takes only $O(\log\log\log p)$ steps.If all we want is an asymptotic expression, then since $\binom{a}{\lfloor a/2 \rfloor} = \Theta(2^a/\sqrt{a})$, approximately we have $2^a/\sqrt{a} = Cp$, and so $a = \log_2 p + \Theta(\log_2\log_2 p)$. More terms can be obtained by taking more terms in Stirling's approximation. For example, we can find a constant $A$ such that $a = \log_2 p + A\log_2\log_2 p + o(\log_2\log_2 p)$. |
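These bounds also make the binary search concrete: since the central binomial coefficient is nondecreasing in $a$, the minimal $a$ can be found directly. A small Python sketch (math.comb needs Python 3.8+; the initial exponential search for an upper bound is just a convenience):

from math import comb

def min_a(p):
    # Smallest a with comb(a, a // 2) >= p; valid because
    # comb(a, a // 2) is nondecreasing in a.
    lo, hi = 0, 1
    while comb(hi, hi // 2) < p:
        hi *= 2  # exponential search for an upper bound
    while lo < hi:
        mid = (lo + hi) // 2
        if comb(mid, mid // 2) >= p:
            hi = mid
        else:
            lo = mid + 1
    return lo

for p in (10, 1000, 10**9):
    a = min_a(p)
    print(p, a, comb(a, a // 2))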
_codereview.119107 | I am trying to learn Meteor, so I tried to write a basic location autocomplete.Currently, given a string, it will request suggestions from https://photon.komoot.de/ (amazing free location autocomplete btw) and display them in an ul-li. What is not done yet is keyboard navigation.autocomplete.html<template name=autocomplete> <input id={{name}} class=autocomplete-input {{className}} placeholder={{placeholder}} autocomplete=off/> <ul class=suggestions></ul></template>autocomplete.js(function() { var inputEl, suggestionsEl, timeout, displayedSuggestions, selectedSuggestion; var search = function(inputString) { if (timeout) { clearTimeout(timeout); } timeout = setTimeout(function() { HTTP.get(https://photon.komoot.de/api/?q= + inputString + &lang= + globalConfig.language, {}, function(error, response) { if (error) { } else { displayedSuggestions = response.data.features; display(displayedSuggestions); } }); }, 300); }; var suggestionString = function (suggestion) { return suggestion.properties.name + ' (' + suggestion.properties.country + ')'; }; var selectSuggestion = function(elt) { inputEl.value = elt.innerHTML; selectedSuggestion = displayedSuggestions[elt['data-index']]; suggestionsEl.style.display = 'none'; }; var display = function(data) { suggestionsEl.innerHTML = ''; suggestionsEl.style.display = 'block'; for (var i = 0; i < data.length; i++) { var suggestion = data[i]; var li = document.createElement('li'); li.className = suggestion; li.onclick = function() {selectSuggestion(this);}; li['data-index'] = i; li.innerHTML = suggestionString(suggestion); suggestionsEl.appendChild(li); } }; Template.autocomplete.onRendered( function(){ inputEl = this.find('.autocomplete-input'); suggestionsEl = this.find('.suggestions'); }); Template.autocomplete.events({ keyup .autocomplete-input: function (event) { search(event.target.value); } });})();Two things does not seem very meteor-like. First, I have a lot of local functions, and local variables. Are they supposed to be put somewhere else?Also, I am worried about performance. The keyup event is based on the class of the input. If it is looking through the DOM at each keystroke, it might get slow. I tried using inputEl instead, but this does not seem possible (or is it?). | My first Meteor autocomplete | javascript;meteor;autocomplete | null |
_unix.97244 | My git client claimserror: Peer's Certificate issuer is not recognized.That means it can not find the corresponding ssl server key in the global system keyring. I want to check this by looking at the list of all system wide available ssl keys on a gentoo linux system. How can I get this list? | List all available ssl ca certificates | linux;openssl | It's not SSL keys you want, it's certificate authorities, and more precisely their certificates.You could try:awk -v cmd='openssl x509 -noout -subject' ' /BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crtTo get the subject of every CA certificate in /etc/ssl/certs/ca-certificates.crtBeware that sometimes, you get that error when SSL servers forget to provided the intermediate certificates.Use openssl s_client -showcerts -connect the-git-server:443 to get the list of certificates being sent. |
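If the awk pipeline feels opaque, the same per-certificate subject listing can be produced from Python by splitting the bundle on the PEM markers and shelling out to openssl for each block. A sketch, assuming the same bundle path as above and an openssl binary on the PATH (requires Python 3.7+ for capture_output):

import subprocess

BUNDLE = "/etc/ssl/certs/ca-certificates.crt"  # same path as above

with open(BUNDLE) as f:
    text = f.read()

# Each CA certificate is delimited by BEGIN/END CERTIFICATE markers.
for block in text.split("-----BEGIN CERTIFICATE-----")[1:]:
    body = block.split("-----END CERTIFICATE-----")[0]
    pem = "-----BEGIN CERTIFICATE-----" + body + "-----END CERTIFICATE-----\n"
    out = subprocess.run(
        ["openssl", "x509", "-noout", "-subject"],
        input=pem.encode(), capture_output=True,
    )
    print(out.stdout.decode().strip())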
_cs.47757 | How should I understand the definition of computational problem?A computational problem is a mathematical object representing a collection of questions that computers might be able to solve. For example, the problem of factoringGiven a positive integer n, find a nontrivial prime factor of n.What is the mathematical object in the example above? Quoting Wikipedia again:Commonly encountered mathematical objects include numbers, permutations, partitions, matrices, sets, functions, and relations.So how can you 'represent' a collection of questions with numbers or permutations, matrices etc.? What is meant here is probably the following collection of sentences:'find a nontrivial prime factor of 1', 'find a nontrivial prime factor of 2' and so on...But the thing is - these sentences are not mathematical objects.A little further in the article it reads:A computational problem can be viewed as an infinite collection of instances together with a solution for every instance.which makes perfect sense, but I don't quite see the relationship with the first definition. | Computational problem - definition | terminology;computation models | null |
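One standard formalization, which seems to be what the quoted definition is gesturing at (the notation below is a common textbook convention, not taken from the article itself): a search problem is a binary relation over strings,

$$R \subseteq \Sigma^* \times \Sigma^*, \qquad (x, y) \in R \iff y \text{ is a valid solution for instance } x,$$

and for factoring specifically

$$R_{\mathrm{fact}} = \{(n, p) : p \text{ is prime},\ p \mid n,\ p < n\}.$$

Each informal "question" corresponds to one instance $x$, and the relation — being a set of pairs, hence an ordinary mathematical object — packages the whole collection of questions and their admissible answers at once.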
_softwareengineering.218801 | One of the advantages of using a DVCS is the edit-commit-merge workflow (over edit-merge-commit often enforced by a CVCS). Allowing each unique change to be recorded in the repository independent of merges ensures the DAG accurately reflects the true pedigree of the project.Why do so many websites talk about wanting to avoid merge commits? Doesn't merging pre-commit or rebasing post-merge make it more difficult to isolate regressions, revert past changes, etc.?EDIT: Point of clarification: The default behavior for a DVCS is to create merge commits. Why do so many places talk about a desire to see a linear development history that hides these merge commits? | Why do so many projects prefer git rebase over git merge? | version control;git;dvcs;merging | People want to avoid merge commits because it makes the log prettier. Seriously. It looks like the centralized logs they grew up with, and locally they can do all their development in a single branch. There are no benefits aside from those aesthetics, and several drawbacks in addition to those you mentioned, like making it conflict-prone to pull directly from a colleague without going through the central server. |
_cs.41850 | Given a word list of $N$ words formed from a language of $M$ characters, where each word is composed of $n \geq 1$ not necessarily distinct characters, how can I find the best set of $k<M$ characters to learn, where best is defined as the set of $k$ with which the most complete words can be spelled. Ie, if I know these $k$ characters I can spell $N_k$ words. What is the maximum $N_k$ for every $k$ and which characters should I choose?Checking a given set of $k$ words is equivalent to a Scrabble problem, but searching the space becomes very hard very fast. This problem is of limited interest in English where $M=26$ but is more important in Chinese or Japanese where $M \sim 10^3$.I thought of considering the bipartite graph of characters and the words that they make but I'm not sure what the best search strategy is. I'm somewhat pessimistic that this problem is strictly solvable and therefore I am willing to try heuristic or stochastic methods. | What are the k characters which make the most complete words? | algorithms;optimization;strings;counting | As Lopsy states, your problem is NP-hard, so you shouldn't expect an efficient algorithm for your problem.Your problem is an instance of the maximum coverage problem (which is also NP-hard). This may help you find algorithms that will be helpful in your setting.There are standard algorithms for the maximum coverage algorithm. There's a natural greedy algorithm; it achieves an approximation ratio of approximately 0.63.Perhaps a better approach is to formulate this as an instance of integer linear programming (ILP), and then throw an off-the-shelf ILP solver at it (e.g., lp_solve, CPLEX). See Wikipedia for a description of how to do that. For your English example (alphabet size 26) it's possible this might give you an optimal solution, but for the Chinese language problem, you shouldn't expect this to terminate and give you the optimal solution within your lifetime. Instead, I suggest you let it run for a bounded time and do the best it can within that time bound. Standard ILP solvers will let you provide a time bound: e.g., run for 5 minutes and then give me the best solution you've found so for.You will probably want to apply the following simple optimization. Replace each word with the set of letters it contains (so order doesn't matter and repetitions are ignored). Then, de-dup these sets, and include a weight for each set that counts how many times it appears. Thus, GRIN and RING map to the same set, {G,I,N,R}. This will reduce the problem size and might make the ILP solver run faster.Depending upon your application, you might also want to weight the words by their frequency of occurrence in natural language and then solve the weighted maximum coverage problem, so that you place an extra emphasis on covering common words and a lower priority on covering rare words.You can also search the literature for other algorithms for the maximum coverage problem, but I suspect the pragmatic answer is going to be to use ILP with a reasonable time bound. |
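The greedy idea mentioned above is easy to prototype. Below is one natural variant for this particular problem, as a Python sketch: each step adds the character that makes the most additional words fully spellable, breaking ties by how many remaining words contain the character. Note that because a word here needs all of its letters (not just one), this is not the textbook max-coverage greedy, and the 0.63 guarantee does not automatically carry over; treat it as a heuristic.

def greedy_chars(words, k):
    # words: iterable of strings; returns a set of up to k chosen characters.
    remaining = [set(w) for w in words]
    chosen = set()
    for _ in range(k):
        candidates = set().union(*remaining) - chosen if remaining else set()
        if not candidates:
            break
        best = max(
            candidates,
            key=lambda c: (
                sum(1 for s in remaining if s <= chosen | {c}),  # words completed
                sum(1 for s in remaining if c in s),             # tiebreak: frequency
            ),
        )
        chosen.add(best)
        # Drop words that are now fully spellable.
        remaining = [s for s in remaining if not s <= chosen]
    return chosen

print(greedy_chars(["cab", "dab", "bad", "dad", "cad"], 3))  # {'a', 'b', 'd'}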
_cstheory.8019 | The traditional art gallery problem sets up a region and guards with some notion of visibility, and asks for the minimum number of guards that need to be placed to see the entire region.Has anyone ever looked at art gallery variants where the visibility region is instead defined by a pair of guards? For example, one formulation might be that a point is covered if there's a pair of guards whose minimum bounding disk covers it? | Art gallery variants with pairwise visibility? | reference request;cg.comp geom | null
_unix.330121 | I would like to see my application's call graph with KCachegrind, the KDE- and valgrind-based tool: valgrind -v --tool=callgrind --dump-instr=yes --collect-jumps=yes ./AvoidObjectsOne thing I noticed is that KCachegrind shows a hierarchical call graph, i.e. the main function is shown together with the functions called inside it, rather than in the order they run. I need to see the chronological call graph in order to model every runnable for graph-theoretic analysis of multi-core partitions.Additionally, I need to find information such as read/write accesses and the instruction costs of both runnables and communication (i.e. node costs and communication costs).Is there a way to achieve this with KCachegrind/valgrind? Or is there another tool for this?Thanks in advance for the help. Cheers, | Software Profiling with kcachegrind for node analysis | kernel;process;kde;proc;process management | null
_unix.271508 | I'm investigating the relationship between bash and emacs shorcuts. Someone told me that the reason why they're similar is that bash uses emacs as its command line interpreter. However, I haven't found any evidence that supports this thesis.I know there are edits modes in bash and one of them is emacs. But, is it true that the command line interpreter is implemented on emacs?Please note I'm referring to the actual implementation and not to the similarities between them. | Is bash command line interpreter implemented on emacs? | bash;command line;emacs | The short answer is no. bash's command-line processing is implemented mostly in bashline.c and its copy of readline, which supports vi-like and Emacs-like behaviours. Emacs itself is written mostly in Emacs Lisp; using it to implement bash would be quite involved since Emacs Lisp isn't designed to be used without Emacs. |
_webmaster.56469 | I'm using Email tracking services of some Email sender company. It is achieved through requesting img resource when opening email. Here is the code inserted into HTML message:<img src=https://ci4.googleusercontent.com/proxy/TObr7aARe70s=s0-d-e1-ft#http://www.mywebsite.com/TrackEmail?j=eyJ1IjoiMQ4In0%3D&r=0.614693022798747.gif width=2 height=1>I'm interested in how eventually the request will come to my website. Can anyone please explain me? I think the magic is somewhere in this part ...d-e1-ft#htt.... | Track Email with Google user contents img resource and redirecting to my website | email;links;tracking | If you are looking at this email in Gmail, I believe that the googleusercontent.com was added to your image. Google recently announced that they are now caching all images shown in Gmail on their own servers as a user privacy protection.This may break some email tracking. It will certainly limit the amount of information you can gather about Gmail users that open emails. You will no longer be able to see what browser they use, for example. The announcement article gives some more information about how the Google image proxy actually works. Google serves the images from their site, but has their server request it from your server the first time it is used. |
_unix.344595 | I am working with Ubuntu 14.04 LTS. This already ships a packaged gcc 4.8.4 but I wanted to have a later version of gcc.Using the existing version 4.8.4, I built version 4.9.4 from the source code through the recommended chain compile-build-test-install. I have used no exotic settings for ./configure except for a prefix directory and a program-suffix. The rest is default.All seems to have gone well. The test summary for g++ shows no FAIL and two XPASS calls onlyXPASS: g++.dg/tls/thread_local-order2.C -std=c++11 execution testXPASS: g++.dg/tls/thread_local-order2.C -std=c++1y execution testI then wanted to build version 5.4.0 using the newly installed 4.9.4, to put the latter to the test with a minimum learning curve.I edited the same installation script used for 4.9.4 only by specifying the make flags CC and CXX.These flags now point to the binaries in the 4.9.4 installation directory. The configure stage for 5.4.0 proceeds well. I see from the files config.log and Makefile that the new binaries' paths are read in correctly.The build stage fails though and stderr reports some 100 errors.The earliest and most frequently recurring ones are of type `error: template with C linkage``error: template specialization with C linkage`They show up in patterns of blocks like this one:In file included from ${installation directory for 4.9.4}/include/c++/4.9.4/bits/stringfwd.h:40:0, from ${installation directory for 4.9.4}/include/c++/4.9.4/iosfwd:39, from /usr/include/x86_64-linux-gnu/gmp.h:25, from /usr/local/include/isl/val_gmp.h:4, from ${source directory for 5-4-0}/gcc/graphite-isl-ast-to-gimple.c:35:${installation directory for 4.9.4}/include/c++/4.9.4/bits/memoryfwd.h:63:3: error: template with C linkage template<typename> ^${installation directory for 4.9.4}/include/c++/4.9.4/bits/memoryfwd.h:66:3: error: template specialization with C linkage template<> ^${installation directory for 4.9.4}/include/c++/4.9.4/bits/memoryfwd.h:70:3: error: template with C linkage template<typename, typename> ^I would think that the 5.4.0 source is fine, since all information has been gathered and processed in the same way as it was for 4.9.4. There could be an error/omission in the configure stage of either 4.9.4, or 5.4.0 or both.Which flaws do these errors indicate? How do you fix them? Thanks for thinking along.A search query on StackExchange on error: template with C linkage gives about 171,000 results. The posts that I skimmed do not seem to address this point:https://stackoverflow.com/questions/29872494/compiling-a-program-under-windows-gives-a-bunch-of-error-template-with-c-linkahttps://stackoverflow.com/questions/4115930/using-cygwin-to-build-template-with-c-linkageNote that I am not interested in triaging the codes, but in setting up a (robustly) working compiler. | gcc compiles gcc with errors 'template with C linkage' and 'template specialization with C linkage' | ubuntu;compiling;gcc;gnu make | null |
_codereview.29154 | I'm doing this in Hadoop Java where I'm reading a String. The string is huge and has been tokenized and put in an array. It has key-value pairs, but they are not in any order. I want this order to be rigid so I can load it as a table. So in SQL, if I select a column (after loading this into a table), all the keys of one type should be in colA.I'm checking each word of the String array and copying them into a new string at a fixed position. The way I thought of doing this is using an if-else ladder like this://row is the tokenized unordered StringString[] newRow = new String[150];for (int i = 0; i < row.length; ++i) { if(row[i].equals("token1")){ newRow[0] = row[i]; //key newRow[1] = row[i+1];//value } else if(row[i].equals("token2")){ newRow[2] = row[i]; newRow[3] = row[i+1]; }//...and so on. The else-if ladder is at least 100 branches long.I wanted to know if there is a more efficient way to do this?PS: I'm not sorting the string. Example: row1 String is {apple,good,banana,bad}, row2 String is {banana,good,apple,bad} where apple and banana are keys. Now in my output I will have two records with say apple as the first key and then banana. So output will be : newRow1: {apple,good,banana,bad}, newRow2: {apple,bad,banana,good}. Essentially I'm rearranging all input to a fixed output. | Efficient way to copy unordered String into ordered String | java;algorithm;strings;hadoop | I'd put token names and positions in a Map like this:Map<String, Integer> tokenIndexes = new HashMap<String, Integer>();tokenIndexes.put("token1", 0);tokenIndexes.put("token2", 2);tokenIndexes.put("token3", 4);// ...and then in the sorting part:String[] newRow = new String[150];for (int i = 0; i < row.length; i+=2) { // go two by two as you have keys in even indexes if(tokenIndexes.containsKey(row[i])) { int index = tokenIndexes.get(row[i]); newRow[index] = row[i]; newRow[index + 1] = row[i + 1]; } else { // handle missing token }}This way I would get rid of all if-else statements, although now I have to maintain a Map (which I think is easier than maintaining a list of if-else).UPDATEI'm supposing you just receive that String array and you're unable to change the way the information is retrieved. If you can change the initial array with a Map that would simplify your code even further.If that's the case then try this to capture your tokens instead of putting everything in an array (I'm supposing a key is the first token to appear):Map<String, String> info = new Hashtable<String, String>();boolean isKey = true;String lastKey = null;String token;while(tokensAvailable() /* or (token = readToken()) != null */) { token = readToken(); if(isKey) { lastKey = token; } else { info.put(lastKey, token); } isKey = !isKey;}And then when you have to print your table, you can do something like this:printOut("VAL_1 -- VAL_2 -- VAL_3");printOut(String.format("%08d -- %10.2f -- %s", Integer.parseInt(info.get("numericVal1")), Double.parseDouble(info.get("monetaryVal2")), info.get("val3")));String.format() is useful in these cases; you can control the format (like the width) of how every field is printed.
_cs.79563 | I'm pretty new to CFG, and I've tried out some samples as well. As of now I'm trying this for-loop to write it in CFG using BNF:Start{ int z = 20; for(int b = 0; b < 5; b++) { //some statement }}Attempted BNF:<program> ::= Start <blockstatements><blockstatements> ::= <statement> | <statement>; | <statement_list> | <empty><statement> ::= <typespecification> <varDeclaration> <assignment-operator> <expression> <typespecification> ::= char | short | int | long | float | double | string<varDeclaration> ::= a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q | r | s | t | u | v | x | y | z<assignment-operator> ::= =<expression> ::= <value> <value> ::= <digit> | <letter> | <number> | <word> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9<letter> ::= A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q | r | s | t | u | v | w | x | y | z<number> ::= <digit> + <digit> | <digit> - <digit> | <digit> * <digit> | <digit> / <digit>The above CFG isn't complete and I wanted to make sure that I'm doing it correctly. Am I heading towards the right path? Should it be written as above or have I missed anything?Any help could be appreciated. | Writing a Context-Free-Grammar using Backus Naur Form for a for-loop? | formal languages;context free;formal grammars;attribute grammars | I've usually seen it expressed as something like:forExpression ::= for ( <declExpression> <expression>; <expression> ) <blockStatement>blockStatement ::= <statement> | { <statementlist> }statement ::= <declExpression> | <forExpression> | <ifExpression> | ...<declExpression> includes the semi colon You want to be very careful of the details like where you define where things like the semi colon are required, In your <number> you don't have any priorities set up, there is also no option for parenthesis, ... Creating a good CFG is hard which is why so many languages start out as a dialect of C and just take that grammar and adapt it. |
_unix.247672 | I'm quite sure it didn't happen before I started using LUKS and LVM on my disk (2 or 3 weeks ago), but now root user is constantly (every 2-3 seconds) writing (not reading, just writing) on disk and I can't figure out why.The disk has one LUKS partition with one LVM group, which contains both root (ext4) and home (ext4) logical partitions (besides the swap one).I've used iotop command to check what processes are accessing the disk and I've seen it's jbd2/dm-X-8, executed by root, the process which is writing constantly to the disk.It only happens with the disk containing the LVM; I have two more ext4 disks mounted (just for storage purposes; they also use LUKS encryption but not LVM) and they stay quite while no file operations are made on them.I've checked log files to see if this writing activity could be due to some kind of logging, but it doesn't seem to be the case.I've also read questions like this one:LVM keeping harddisk awake?But I can't understand why would the system keep on writing the disk even when no changes are being made to it.On the other hand, I've had some issues with the disk which I though that could be related to other programs crashes, but now I don't know whether it could be related to this constant writing to disk, so I'm quite worried about it.Is it normal? Can I do something to avoid it or is it something that just comes with LUKS/LVM? Or maybe it has nothing to do with LUKS/LVM and I should check some other thing? | Why is OS constantly writing to disk (ext4) on a Ubuntu 14.04 machine? Is it normal? | hard disk;lvm;ext4;io;journaling | I've finally managed to reduce dramatically this constant writing activity on the disk, though still I don't understand totally how all this is linked. I guess this is not a perfect solution to the problem, but fixed it to me some way, so for if it helps someone:I've edited the /etc/fstab file adding to the options column for the root partition defaults, since it was set by default to error... (I was just experimenting, I don't understand why it makes any difference, but apparently it does).I've set swappiness to 0, but again, even when this seems to affect to the writing rate, I don't understand why it does in my case, since the computer has 16GB of RAM, so IMHO this (as with the fstab issue) shouldn't make any difference.With this new low-frecuency-writing-rate at least I've achieved to be less worried about the disk health, but I'm still concerned about the logic behind this behaviour, so any explanation and/or better solutions will be really welcomed. |
_datascience.20011 | I have a bunch of time series data I'm doing classification on. I used TPOT with a custom CV (walk-forward time series split) to find the best performing model. Logistic regression was selected. I'm a bit surprised by this, given that most Kaggle competitions are won by ensembles of newer models like XGB, etc. In fact, not only was logistic regression the best model, but it was actually the only one that learned something meaningful (it outperformed the others like RF, XGB, etc. by > 5%, where 60% is the best accuracy it achieved).My expertise is mostly in deep networks, not as much the stuff dealt with by sklearn, but in this case the training data is very limited and quite noisy. What should I take away from logistic regression being the best classifier? Can I use this knowledge to inform my design of other classifiers that could perform better? Again, I have a lot less experience with this side of ML, so I'm really looking for an intuition as to how I can take this knowledge and apply it to build better models or ensembles. | Intuition for Logistic Regression Performance | time series;logistic regression | null
_codereview.4890 | In the main class, loops generate numbers (0~100), and when its generated number is > 20, its value is passed to the thread where it simulates some work with this number. Meanwhile, while this number is being processed, other generated numbers > 20 must be skipped.Is this code OK? I am not sure if I'm doing something bad or not. In fact it seems to be working fine, but I don't know if it can be written by that way or even better..class Program{ delegate void SetNumberDelegate(int number); delegate bool IsBusyDelegate(); static void Main(string[] args) { Random rnd = new Random(); ProcessingClass processClass = new ProcessingClass(); SetNumberDelegate setNum = new SetNumberDelegate(processClass.setNumber); IsBusyDelegate isBusy = new IsBusyDelegate(processClass.isBusy); Thread processThread = new Thread(new ThreadStart(processClass.processNumbers)); processThread.Start(); int num; int count = 0; while (count++ < 100) { num = rnd.Next(0, 100); Console.WriteLine(Generated number {0}, num); if (num > 20) { if (!isBusy()) { setNum(num); } else { Console.WriteLine(Thread BUSY! Skipping number:{0}, num); } } Thread.Sleep(1); } processThread.Abort(); processThread.Join(); Console.ReadKey(); }}class ProcessingClass{ private volatile bool busy; private volatile int number; public ProcessingClass() { busy = false; number = -1; } public bool isBusy() { return busy; } public void setNumber(int num) { number = num; } public void processNumbers() { while (true) { if (number > -1 && !busy) { busy = true; Console.WriteLine(Processing number:{0}, number); // simulate some work with number e.g. computing and storing to db Thread.Sleep(500); Console.WriteLine(Done); number = -1; busy = false; } } }} | Thread-safety and delegates with generated numbers | c#;multithreading;delegates;thread safety | null |
_unix.126688 | I have a TrueRNG USB-based hardware RNG, and I am trying to read its output on Mac OS X 10.9.2. I need just one-way communication.The device says that it appears as a CDC Virtual Serial Port, and indeed I see it as /dev/tty.usbmodem1411.I've never been able to cat /dev/tty.usbmodem1411; when I do, I get no output at all. I can, however, use minicom or picocom to read from it. With picocom, it works even with --noinit --noreset options, suggesting that I should be able to cat that device.So, my issues/questions:Why can I not cat that device?Though it works with minicom and picocom, it works just on the first session: if I close either of these programs, then reopen them, then they read about 300 bytes and block forever. If I close and reopen again, they can't read anything. When I unplug and replug the USB device, though, it's again readable forever on the first attempt. Why would this be, and does it have to do with this quote from TrueRNG documentation? By clearing the DTR flag on the virtual serial port, the stream of data will stop. The stream of data will resume when DTR is set.I want to view the random data in hex. So I try this command: picocom /dev/tty.usbmodem1411 | xxd -p. The output, however, doesn't seem to respect the newline character; just the carriage return. It moves to the next line without rewinding to the beginning of the line. I'd prefer it to be continuous.Here's some diagnostics:% stty -a -f /dev/tty.usbmodem1411speed 9600 baud; 0 rows; 0 columns;lflags: -icanon -isig -iexten -echo -echoe -echok -echoke -echonl -echoctl -echoprt -altwerase -noflsh -tostop -flusho -pendin -nokerninfo -extprociflags: -istrip -icrnl -inlcr -igncr -ixon -ixoff -ixany -imaxbel -iutf8 -ignbrk -brkint -inpck -ignpar -parmrkoflags: -opost -onlcr -oxtabs -onocr -onlretcflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb crtscts -dsrflow -dtrflow -mdmbufcchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>; eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V; min = 1; quit = ^\; reprint = ^R; start = ^Q; status = ^T; stop = ^S; susp = ^Z; time = 0; werase = ^W; | Reading from serial port in the simplest way? | serial port | null |
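One thing worth trying for issues 2 and 3 together, offered as an untested sketch: open the port from Python and assert DTR explicitly before reading, since the TrueRNG documentation quoted above says the stream stops when DTR is cleared. This assumes the third-party pyserial package is installed; the dtr attribute and Serial constructor are pyserial's, and the device path is the one from the question.

import serial  # third-party pyserial package (assumed installed)

ser = serial.Serial("/dev/tty.usbmodem1411", 9600, timeout=5)
ser.dtr = True       # assert DTR so the TrueRNG resumes its stream
data = ser.read(64)  # read 64 random bytes (blocks up to the timeout)
print(data.hex())    # continuous hex view, sidestepping the xxd issue
ser.close()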
_cs.11489 | First off, I am not sure if this is the correct stackexchange site to ask this question on, so moderators can feel free to move it.I am working on an application that identifies an object in an image. For example, let's say that the object is an apple, and the apple can be green, red, or both.My plan is to create an array the same size as the image, then scan through the image, pixel by pixel, and if the current pixel is within my range of colors (green-ish or red-ish), add a 1 to the corresponding location in the array, else add a 0.After this scan through the array, I will have an array of 1s and 0s, which will hopefully contain a concentration of 1s, representing the apple.I am a computer science student, but have yet to take an AI class. So my question is: is this a good way to go about doing this? Or are there more established AI methods (or algorithms) for identifying objects in an image based on color?EDIT: I should clarify that the problem isn't identifying if the image contains an apple, the problem is identifying where the apple is in the image. | Identifying an object in an image based on color (AI ?) | algorithms;artificial intelligence;image processing | The general class for this problem is Template matching. The most common approach is performing a cross correlation of the template, in your example an apple, and an image which may contain the template.A Google search for template matching fast normalized cross correlation will bring up some literature that should be helpful. |
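To make the answer's suggestion concrete, here is a minimal brute-force template-matching sketch in Python with NumPy. It uses a plain sliding-window sum of squared differences rather than the normalized cross correlation the suggested literature search covers, but the structure is the same; the best match is where the score is smallest.

import numpy as np

def match_template(image, template):
    # Brute-force sum-of-squared-differences template matching.
    # image, template: 2-D float arrays (e.g. grayscale); returns the
    # top-left (row, col) of the best-matching window.
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r+th, c:c+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos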
_unix.263758 | I'm trying to open a website with cURL like this:$ curl -vH "Accept: application/json" https://www.rocketleaguereplays.com/api/replays/-1/The output is:* Trying 104.24.114.83...* Connected to www.rocketleaguereplays.com (104.24.114.83) port 443 (#0)* ALPN, offering h2* ALPN, offering http/1.1* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH* successfully set certificate verify locations:* CAfile: /etc/ssl/certs/ca-certificates.crt CApath: /etc/ssl/certs* TLSv1.2 (OUT), TLS header, Certificate Status (22):* TLSv1.2 (OUT), TLS handshake, Client hello (1):* TLSv1.2 (IN), TLS header, Unknown (21):* TLSv1.2 (IN), TLS alert, Server hello (2):* error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure* Closing connection 0* TLSv1.2 (OUT), TLS alert, Client hello (1):curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failureI have Linux kernel 4.4.0 and the newest cURL version installed:$ curl -Vcurl 7.47.1 (x86_64-pc-linux-gnu) libcurl/7.47.1 OpenSSL/1.0.2f zlib/1.2.8 c-ares/1.10.0 nghttp2/1.6.0Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smtp smtps telnet tftpFeatures: AsynchDNS IPv6 Largefile NTLM SSL libz TLS-SRP HTTP2 UnixSocketsHow can I fix this? On Ubuntu it works fine with cURL and the same URL. | How to fix curl sslv3 alert handshake failure on Gentoo? | curl;ssl;https;failure;handshake | Basically, https://www.rocketleaguereplays.com uses outdated encryption (SSL3); you can force curl to connect to "insecure" sites like this using the -k (--insecure) switch.Try this:curl -kvH "Accept: application/json" https://www.rocketleaguereplays.com/api/replays/-1/You could also try using the -3 aka --sslv3 switch; however, if curl was built without SSL3 support, then you need to compile your own version of curl, enabling SSL3.EDIT: The OP has found the problem.I got confused by the error message.This is a bug in Gentoo:https://bugs.gentoo.org/show_bug.cgi?id=531540Basically, when you build openssl with the bindist flag, the elliptic curve crypto is disabled. This site requires elliptic curve cryptography.When I run this, I get the following:$ curl -vH "Accept: application/json" https://www.rocketleaguereplays.com/api/replays/-1/* STATE: INIT => CONNECT handle 0x6000572d0; line 1090 (connection #-5000)* Added connection 0. 
The cache now contains 1 members* Trying 2400:cb00:2048:1::6818:7353...* STATE: CONNECT => WAITCONNECT handle 0x6000572d0; line 1143 (connection #0)* Connected to www.rocketleaguereplays.com (2400:cb00:2048:1::6818:7353) port 443 (#0)* STATE: WAITCONNECT => SENDPROTOCONNECT handle 0x6000572d0; line 1240 (connection #0)* ALPN, offering http/1.1* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH* successfully set certificate verify locations:* CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none* TLSv1.2 (OUT), TLS header, Certificate Status (22):* TLSv1.2 (OUT), TLS handshake, Client hello (1):* STATE: SENDPROTOCONNECT => PROTOCONNECT handle 0x6000572d0; line 1254 (connection #0)* TLSv1.2 (IN), TLS handshake, Server hello (2):* TLSv1.2 (IN), TLS handshake, Certificate (11):* TLSv1.2 (IN), TLS handshake, Server key exchange (12):* TLSv1.2 (IN), TLS handshake, Server finished (14):* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):* TLSv1.2 (OUT), TLS change cipher, Client hello (1):* TLSv1.2 (OUT), TLS handshake, Finished (20):* TLSv1.2 (IN), TLS change cipher, Client hello (1):* TLSv1.2 (IN), TLS handshake, Finished (20):* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES128-GCM-SHA256 <----[...]So my curl uses elliptic curve cryptography with this site.
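For completeness, the Python equivalent of -k in the third-party requests library is verify=False. A hedged sketch (assuming requests is installed; and note that, as the EDIT above shows, the real failure here was a missing cipher suite, which skipping certificate verification does not fix):

import requests

r = requests.get(
    "https://www.rocketleaguereplays.com/api/replays/-1/",
    headers={"Accept": "application/json"},
    verify=False,  # equivalent of curl -k / --insecure: skips cert checks
)
print(r.status_code)
print(r.text[:200])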
_vi.13 | Actually, I hit this problem when I used ssh from my Android phone to log in to a Linux server and tried to use vim to edit some files.It was a little bit... displeasing, although I could do everything that I could do from a normal desktop/keyboard.Extension: There was a question about why it was displeasing if I could do anything I wanted. For example, typing 5j45| on an emulated keyboard on a touchscreen has a much lower information rate (between your brain and your device) than touching the intended character position on a touchscreen. | Does any solution exist to use vim from a touch screen? | input devices | Does enabling the mouse help? That way you should at least be able to make selections via touch, and maybe navigate files and buffers/tabs. It depends on plugins, I guess.set mouse=a
_cogsci.13315 | I have begun taking a class on the sociology of gender, which teaches that gender is more malleable than previously thought and that femininity and masculinity exist on a continuum. What does this actually imply for the observable behavior of people? For example, are women born with more "male" brains more likely to be lesbian, to prefer male friends, or to prefer a more feminine sexual partner? And can you tell from physical appearance that an individual has a more masculine or feminine brain (e.g. a man with a more feminine voice or a woman with more masculine features)? | What does it even mean to have a male/female brain? | cognitive psychology;social psychology;gender;sociology | null
_unix.293568 | From the coreutils manual2.10 Traversing symlinksThe following options modify how chown and chgrp traverse a hierarchy when the --recursive (-R) option is also specified. If more than one of the following options is specified, only the final one takes effect. These options specify whether processing a symbolic link to a directory entails operating on just the symbolic link or on all files in the hierarchy rooted at that directory. These options are independent of --dereference and --no-dereference (-h), which control whether to modify a symlink or its referent.-H If --recursive (-R) is specified and a command line argument is a symbolic link to a directory, traverse it.-L In a recursive traversal, traverse every symbolic link to a directory that is encountered.-P Do not traverse any symbolic links. This is the default if none of -H, -L, or -P is specified.In "These options are independent of --dereference and --no-dereference (-h), which control whether to modify a symlink or its referent", what do these options and --dereference and --no-dereference do respectively, and how are they different?The descriptions for -H and for -L seem to say the same thing to me. How are these two options different?Thanks. | Usage of options for symlinks in coreutils | symlink;coreutils | The -h flag (aka --no-dereference) is a good flag to use. Let's say we have this setup:$ ln -s /etc/passwd /tmp/foobar$ sudo chown fred /tmp/foobarBecause --dereference is the default, this will actually change /etc/passwd... which is probably not what you want :-) The -h flag would make it change the symlink ownership instead. So you should get into the habit of using -h, especially if recursively changing ownership, i.e. do chown -hR rather than chown -R.The -H flag only applies to directory symlinks you list on the command line. The -L option applies to all directory symlinks found, including those in subdirectories during a recursive chown.
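The same symlink-versus-referent distinction shows up when scripting: Python's os module exposes it directly. A small illustration with made-up uid/gid values (follow_symlinks support for os.chown is platform-dependent, so treat this as a sketch):

import os

# Change the ownership of the symlink itself (like chown -h):
os.chown("/tmp/foobar", 1000, 1000, follow_symlinks=False)
# os.lchown("/tmp/foobar", 1000, 1000) is an older spelling of the same thing.

# The default follows the link and would change /etc/passwd instead:
# os.chown("/tmp/foobar", 1000, 1000)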
_cstheory.31242 | Imagine two lists $L$, $M$ of the same length $n$. How to find $j$ such that $\sum_{i=1}^n(L[i]-M[i+j])^2$ is minimal, where the index $i+j$ is taken modulo $n$?Of course one can take all the indices $j\in\{1, \dots, n\}$ and compute all the sums, but the goal is to find something faster.The same question hold for matrices: If $A$, $B$ are two square matrices, how to minimize $f(a,b)=\sum_{i=1}^n\sum_{j=1}^n (A[i][j]-B[i+a][j+b])^2$ for $a,b\in\{1,\dots,n\}$, where indices are again taken modulo $n$?Thanks a lot. I would like to apologize, if my question is a classic one, but I searched and I didn't find an answer. | Minimize L2 norm by circular permutation | permutations;minimization;norms | null |
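For the list version, one standard trick, stated here as a sketch rather than a full answer: expanding the square gives $f(j) = \sum_i L[i]^2 + \sum_i M[i]^2 - 2\sum_i L[i]\,M[(i+j) \bmod n]$, where the first two sums do not depend on $j$, so minimizing $f$ is the same as maximizing the circular cross-correlation term — and all $n$ correlation values can be computed at once with the FFT in $O(n \log n)$:

import numpy as np

def best_shift(L, M):
    # argmin_j of sum_i (L[i] - M[(i+j) % n])**2, via FFT cross-correlation.
    L = np.asarray(L, float)
    M = np.asarray(M, float)
    # c[j] = sum_i L[i] * M[(i+j) % n]
    c = np.fft.ifft(np.fft.fft(M) * np.conj(np.fft.fft(L))).real
    return int(np.argmax(c))

rng = np.random.default_rng(0)
L = rng.standard_normal(8)
M = np.roll(L, 3)       # M[i] = L[(i-3) % 8]
print(best_shift(L, M)) # prints 3

The matrix version works the same way with 2-D transforms (np.fft.fft2/ifft2) over both shift indices, giving $O(n^2 \log n)$ for all $(a, b)$ at once.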
_codereview.76904 | I'm trying to break up some procedural JavaScript code used for running an automation test. The code is a series of steps that execute selenium webdriver commands within the neustar load testing framework. The framework is not too important for the purposes of this review, except to know this code is not run in a browser. Its more the structure of the javascript I'm after feedback on, in terms of encapsulating the steps.The controller initializes the test, retrieves parameters for the virtual user and orchestrates running the steps. The steps object is a container for the individual steps in the test.Note the 'test' object is a global object provided by the framework, which I have mocked for the purpose of the review. See it in action here.I want the steps object to contain some reusable functions. I've added one called utilityFunction and called it from a step.The code is a wrapper around a selenium automation test that is run in the cloud using neustar. I'm looking for a way to break up multiple steps in the test into a clean, maintainable structure to allow me to better re-use and encapsulate distinct steps in the test automation. The 'test' object can be ignored, it's simply there to mock the neustar global object.This is a simple framework to run step 1, then step 2 through to step n, e.g. navigate to site, logon, create new order, submit order etc...Please evaluate the quality of the JavaScript, as well as the pattern for running a series of steps while being able to control what happens before and after each step. function testController(steps) { this.steps = steps; // simplified params this will be read from a service this.params = { Param1: Value1, Param2: Value2 }; this.runAll = function () { test.beginTransaction(); for (var s in this.steps) { test.beginStep(this.steps[s].description, this.steps[s].timeout); this.steps[s].execute(this.params); test.endStep(); } test.endTransaction(); }}function testSteps() { var utilityFunction = function (v) { console.log(** calling util with value: + v + ***); } this.step1 = { description: first step, timeout: 1000, execute: function (params) { console.log(executing + this.description); } } this.step2 = { description: second step, timeout: 30000, execute: function (params) { console.log(executing + this.description); utilityFunction(hello); } }}// mock test objectvar test = { beginTransaction: function () { console.log(beginning transaction); }, endTransaction: function () { console.log(ending transaction); }, beginStep: function (name, timeout) { console.log(begin step + name); }, endStep: function () { console.log(end step); }}var steps = new testSteps();//steps.utilityFunction(world);var tc = new testController(steps);tc.runAll(); | Modelling an automation test controller and test steps | javascript | null |
_softwareengineering.241097 | The Javadoc below is a snippet of the HashMap documentation. Why would the authors emphasize putting a lock on the object that encapsulates a HashMap? A lock on the actual HashMap object would make more sense to me. "Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be wrapped using the Collections.synchronizedMap method..." | java.util.HashMap: lock on the actual HashMap object compared to a lock on the object that encapsulates the HashMap | java;collections | null
_unix.222911 | I'm trying to run a Python snippet inside a Make target, but I'm having trouble trying to figure out how these things work in Make.Here is my attempt so far:
define BROWSER_PYSCRIPT
import os, webbrowser, sys
try:
    from urllib import pathname2url
except:
    from urllib.request import pathname2url
webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1])))
endef
BROWSER := $(shell python -c '$(BROWSER_PYSCRIPT)')
I wanted to use $(BROWSER) in a target like:
docs:
    #.. compile docs
    $(BROWSER) docs/index.html
Is this really a bad idea? | Using embedded python script in Makefile | python;make | Related: https://stackoverflow.com/q/649246/4937930You cannot recall a multi-line variable as-is in a single recipe; rather, it gets expanded to multiple recipe lines and causes a syntax error.A possible workaround would be:
export BROWSER_PYSCRIPT
BROWSER := python -c "$$BROWSER_PYSCRIPT"
docs:
    #.. compile docs
    $(BROWSER) docs/index.html
_codereview.33280 | This works just fine, however, the assignment was to write a recursive function. I'm curious to see if this should count. Any comments / suggestions/ stuff I should watch out for are appreciated. #include <iostream>#include <functional>#include <cctype>#include <cstdlib>int main() { char go_again = 'Y'; do { int lhs, rhs; std::cout << Enter 2 integer values: ; std::cin >> lhs >> rhs; if (lhs < 0 || rhs < 0) std::cout << Implicit converstion to positive integers.\n; std::function<int(int, int)> gcd = [&](int lhs, int rhs) -> int { return rhs == 0 ? std::abs(lhs) : gcd(rhs, lhs % rhs); }; std::cout << gcd == << gcd(lhs, rhs) << '\n'; std::cout << Go again? <Y/N> ; std::cin >> go_again; } while (std::toupper(go_again) == 'Y');} | Recursive GCD using a lambda | c++;recursion;lambda | null |
_unix.277345 | This is the last part of a project. I've created 3 scripts already that do the following, but I now need to run them within a 4th script as if it were one program. Below are the instructions as well as the code I'm using to run the first 3 scripts.1. Remove all content from the Content directory 2. Categorize all files in the file category; if this is successful, then validate the files in the categories 3. If this is successful, provide a listing of the categorized files. 4. Provide an option to process all categories or a specific category (This one is giving me trouble) #!/bin/bash function script_One { sh ./script1.sh echo "Running Script One" } function script_Two { sh ./script2.sh echo "Running Script Two" } function script_Three { sh ./script3.sh echo "Running Script Three" } cd content /documents find -type f -delete /media find -type f -delete /pictures find -type f -delete /other find -type f -delete cd - script_One script_Two script_Three | How do I run multiple scripts within another script in the same directory? | shell script;function;parallelism | null
_cstheory.31767 | Let $R$ be a commutative ring. Let $f(x_1, \dots, x_n), g(x_1, \dots, x_n)$ be two multivariate polynomials with the same $x_i$-terms with maximal total degree $\delta$, but with different coefficients in $R$.How fast can we compute the product of $f$ and $g$, i.e. the resulting coefficients of each term?For univariate multiplication in a general setting like this, one can use variants of schonhage-strassen to achieve $O(\delta \log \delta \log\log \delta)$ (e.g. Theorem 8.23 in von zur Gathen & Gerhard, Modern Computer Algebra).I have only been able to find a non-trivial algorithm for when $R$ is a field of characteristic 0, but not for $R$ being a general commutative ring. | Algorithm for multiplying multivariate polynomials in a commutative ring | co.combinatorics;polynomials;convolution | null |
_codereview.30295 | I have a function which gets called very often, for this test 30,720,000 times. It has to run in real time, so performance is very important. Are there possible improvements?

    // This function looks up in the vertice table whether a vertice at that position already exists
    // - If yes, return the vertice index
    // - If no, create a new vertice and return the created vertice index
    //
    // @param cfg: Pointer to a chunkConfig struct
    // @param x: x position of the vertice
    // @param y: y position of the vertice
    inline VERTICE_INDEX GetOrCreateVertice(float x, float y, ChunkGenerator::chunkConfig *cfg) {
        int x_pos = int((x * POSARR_ADJ_F) + 0.5f);
        int y_pos = int((y * POSARR_ADJ_F) + 0.5f);
        int dict_index = x_pos + (y_pos * cfg->subdivisions_adj);

        VERTICE_INDEX dict_entry = cfg->vertice_pool.vertice_dict[dict_index];
        VERTICE_INDEX current_index = cfg->vertice_pool.current_vertice_index;

        if (dict_entry >= 0 && dict_entry < 65535 && current_index > 0)
            return dict_entry;

        LVecBase3f* offset = cfg->base_position;
        LVecBase2f* dim = cfg->dimensions;
        LVecBase2f* tex_offset = cfg->texture_offset;
        LVecBase2f* tex_scale = cfg->texture_scale;

        int pool_index = ((int)current_index) * 5;
        float base_scale = 1.0 / (cfg->subdivisions_f - 1.0);
        float x_scaled = x * base_scale;
        float y_scaled = y * base_scale;

        cfg->vertice_pool.vertice_array[pool_index+0] = offset->get_x() + (x_scaled * dim->get_x());
        cfg->vertice_pool.vertice_array[pool_index+1] = offset->get_y() + (y_scaled * dim->get_y());
        cfg->vertice_pool.vertice_array[pool_index+2] = offset->get_z();
        cfg->vertice_pool.vertice_array[pool_index+3] = tex_offset->get_x() + (x_scaled * tex_scale->get_x());
        cfg->vertice_pool.vertice_array[pool_index+4] = tex_offset->get_y() + (y_scaled * tex_scale->get_y());

        cfg->vertice_pool.vertice_dict[dict_index] = current_index;
        cfg->vertice_pool.current_vertice_index++;
        return current_index;
    }

LVecBase3f and LVecBase2f are vector types provided by the graphics engine I use. VERTICE_INDEX is an unsigned short; POSARR_ADJ and POSARR_ADJ_F are the constant 2. This is the chunkConfig struct:

    struct chunkConfig {
        int subdivisions;
        int subdivisions_adj;
        float subdivisions_f;
        LVecBase3f *base_position;
        LVecBase2f *dimensions;
        LVecBase2f *texture_offset;
        LVecBase2f *texture_scale;
        verticePool vertice_pool;
    };

    struct verticePool {
        VERTICE_INDEX current_vertice_index;
        VERTICE_INDEX current_primitive_index;
        VERTICE_INDEX *vertice_dict;
        float *vertice_array;
        VERTICE_INDEX *primitive_array;
    };

Performance result measured by very-sleepy: link. A revised version based on the suggestions made in the comments: very-sleepy, AMD CodeAnalyst, and the assembler for the slow line: generated assembler. I also made some arrays global, and renamed vertice to vertex. | Conditionally creating a new vertex | c++;performance;coordinate system | null
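One reviewer-style observation to add (mine, not from the original post): VERTICE_INDEX is an unsigned short, so the test dict_entry >= 0 is always true, and the guard effectively relies on the 65535 bound alone. If empty slots are initialised to 0xFFFF (an assumption about the pool setup), an explicit sentinel states the intent in a single comparison:

    // Hypothetical tightening; assumes vertice_dict slots start out as 0xFFFF.
    constexpr VERTICE_INDEX EMPTY_SLOT = 0xFFFF;
    if (dict_entry != EMPTY_SLOT && current_index > 0)
        return dict_entry;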
_webapps.17897 | Often I am only interested in certain categories of posts from a blog. Is there any feed reader that allows this? Also, is this information even included in RSS/ATOM? | Feed reader that allows subscription to only certain categories of posts from a blog | rss;feeds | null |
_softwareengineering.274892 | I'll preface this by saying that I'm not particularly familiar with writing http services.

To keep things simple, I'll use the metaphor of creating collages from images selected by the user. The collage creation takes some time because a fair amount of image processing is done. Let's say that it takes 10 minutes to produce the output. Again, this isn't the actual application, but it works, hopefully, for the point of this discussion.

My goal is that I don't want to require the user to worry about being done with creating the collage, or to have to click some OK button to queue up generation of the collage. Rather, the collage generator runs against an input set of images and produces a collage based on whatever has recently been selected by the user. S/he can add/remove images from the input set whenever s/he wants to, and at some point a new collage will be produced.

The user can make multiple collages from images anytime s/he wants. Imagine that there's a "Bobby's baseball games" collage and a "Family Vacations" collage. Hopefully you get the idea here. Anytime I want to add to a particular collage, I just specify which images to add and I'm done.

The application being used for the selection of the images neither knows nor cares how the collage generator works. To decouple these two applications, I have an http service that receives the additions/deletions from the user. It takes those inputs and persists them. To keep things simple, the service sets a 'dirty' flag and time-stamps any new entries to the collage. This allows the CollageGenerator to know if a collage needs to be redone.

Now... to my actual question.

I could write CollageGenerator as a Windows service that, on some periodic basis, looks for work to do. This is pretty straightforward, and it could also be a simple application that is called by the Windows Task Scheduler. Either of those solutions is essentially the same solution.

But could the http service also accomplish the same thing by firing off a thread with a timer and doing the checking later? I recognize that generally what happens with these http services is that they are called, do something, and then return some result. But can a service also spawn off a separate thread to do timer-related background work? Again, I'm still learning about http services, so my question might be very much out of left field. | Can an http service queue up work via a timer? | web api | null
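A hedged sketch of the in-process option being asked about (all names are placeholders; the post names no framework). A web-hosted service can start a background scheduler at startup, but typical web hosts recycle idle worker processes, which is the usual argument for putting a 10-minute regeneration loop in a Windows service or scheduled task instead:

    import java.util.concurrent.*;

    // Hypothetical worker; start() would be called once from the service's init hook.
    class CollageWorker {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        void start() {
            // Poll every minute for collages whose 'dirty' flag is set.
            // Only fires while the host process stays alive -- the catch with web hosts.
            scheduler.scheduleAtFixedRate(this::regenerateDirty, 1, 1, TimeUnit.MINUTES);
        }

        private void regenerateDirty() {
            // ... query for dirty collages older than some threshold and rebuild them ...
        }
    }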
_unix.123139 | I have the following expired X.509 certificate:

    $ openssl x509 -in openvpn.net -text -noout
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number:
                03:fa:55:a7:80:b5:b5
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., OU=http://certs.godaddy.com/repository/, CN=Go Daddy Secure Certificate Authority - G2
            Validity
                Not Before: Dec 10 20:42:04 2013 GMT
                Not After : Mar 5 17:46:58 2014 GMT
            Subject: OU=Domain Control Validated, CN=*.openvpn.net
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                RSA Public Key: (2048 bit)
                    Modulus (2048 bit):
                        00:d8:1c:cd:03:64:34:52:e3:6a:fd:96:10:4d:76:
                        c6:33:f8:70:fb:6c:0d:93:ac:3c:49:1b:bf:c4:9a:
                        c3:b5:08:87:c8:1c:fc:81:64:91:41:45:81:e0:70:
                        63:69:e0:86:ec:e1:48:84:26:2f:3f:4b:7d:6d:6c:
                        88:bc:44:11:ff:72:b1:32:d9:30:24:e4:78:78:0c:
                        fb:73:5d:43:05:4e:5c:5a:05:f7:85:e0:69:c9:b8:
                        ca:7d:0a:33:b9:12:ee:ff:ed:20:7b:8d:04:89:05:
                        74:80:7a:5c:4a:39:07:70:14:56:31:59:ae:4f:6f:
                        3d:5d:c6:36:00:b6:aa:7e:45:6b:bc:cb:4a:8f:cc:
                        20:69:f6:39:ec:29:e9:6a:14:6e:42:ca:99:d1:d7:
                        08:23:31:5c:5b:b3:48:13:01:fe:bc:44:34:62:c7:
                        81:2e:4e:74:1e:73:42:b3:5f:ee:23:55:9f:62:d0:
                        46:5e:c2:00:14:7b:b5:e5:26:40:12:a6:32:50:22:
                        b3:a6:df:b6:a3:90:d4:39:ae:ea:3e:53:f5:58:89:
                        7a:b7:6a:d8:6f:d3:ae:1b:e0:7c:90:86:04:39:c3:
                        a3:c8:8a:52:5a:d5:83:e7:07:80:5b:b2:e2:7a:5a:
                        24:b2:d8:53:34:ad:2a:e2:d4:3a:57:5c:6e:3c:46:
                        58:b5
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Basic Constraints: critical
                    CA:FALSE
                X509v3 Extended Key Usage:
                    TLS Web Server Authentication, TLS Web Client Authentication
                X509v3 Key Usage: critical
                    Digital Signature, Key Encipherment
                X509v3 CRL Distribution Points:
                    URI:http://crl.godaddy.com/gdig2s1-6.crl
                X509v3 Certificate Policies:
                    Policy: 2.16.840.1.114413.1.7.23.1
                      CPS: http://certificates.godaddy.com/repository/
                Authority Information Access:
                    OCSP - URI:http://ocsp.godaddy.com/
                    CA Issuers - URI:http://certificates.godaddy.com/repository/gdig2.crt
                X509v3 Authority Key Identifier:
                    keyid:40:C2:BD:27:8E:CC:34:83:30:A2:33:D7:FB:6C:B3:F0:B4:2C:80:CE
                X509v3 Subject Alternative Name:
                    DNS:*.openvpn.net, DNS:openvpn.net
                X509v3 Subject Key Identifier:
                    DA:4D:97:2B:F8:A2:C5:E9:9D:A2:E4:CB:56:01:0B:9B:74:24:01:01
        Signature Algorithm: sha256WithRSAEncryption
            9b:b7:07:59:02:0c:67:f3:c1:49:45:fe:30:9a:1a:39:19:cb:
            42:33:fc:62:02:29:fc:f5:ef:5d:61:36:4a:e2:c5:f6:52:04:
            57:81:28:18:77:60:c0:99:1a:4a:45:e5:f7:eb:03:36:d2:bf:
            9d:b6:93:38:98:06:b4:81:fb:5c:ff:e6:ef:7c:8d:ff:cd:5f:
            53:b1:10:23:03:38:12:12:a8:99:c8:35:a1:6a:60:ba:4a:f4:
            61:7f:96:cb:81:70:f3:c6:d8:2a:b5:69:b8:d9:56:0a:46:73:
            9b:d0:d7:c1:2f:9a:d8:94:ac:37:0b:57:80:f9:a1:ec:e1:bf:
            43:76:c6:ea:01:c6:97:c8:55:29:a8:b6:b9:19:bd:81:92:9a:
            a9:ec:be:b0:4c:3e:11:f5:8b:8c:8f:af:fa:f5:d4:4d:d7:77:
            c0:1f:aa:cd:f7:01:80:ad:62:d4:db:1d:e3:a0:23:77:2f:4b:
            ea:65:5c:9e:9c:46:bc:be:ce:f3:71:79:cd:19:c3:44:f5:49:
            de:4b:24:a5:8b:48:3e:60:4d:9d:dd:1d:50:35:66:6a:d6:96:
            77:7c:19:9b:66:e1:46:de:4e:c2:ce:c5:96:88:2c:d7:7d:cc:
            94:ac:1f:23:d4:a8:e9:6d:c0:f3:9f:a8:21:a7:fd:dc:25:95:
            6f:eb:e3:a0
    $

As I understand it, this certificate is issued by the Go Daddy Secure Certificate Authority to *.openvpn.net. I think that in order to verify that this certificate is indeed issued by GoDaddy, I should download one of the GoDaddy root certificates from here. However, which one? And how can I verify that the certificate above is indeed issued by GoDaddy using the openssl utility? | How to validate X.509 certificate? | ssl | null
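A hedged sketch of one way to do the check (file names are hypothetical; the intermediate URL is the CA Issuers URI printed in the certificate itself). The chain for a leaf like this is leaf, then the Go Daddy G2 intermediate, then a Go Daddy root, so verification needs the intermediate as well as a trusted root:

    # Fetch the issuing intermediate named in the certificate's "CA Issuers" field.
    curl -sO http://certificates.godaddy.com/repository/gdig2.crt

    # The repository may serve DER; convert to PEM if needed (assumption).
    openssl x509 -inform DER -in gdig2.crt -out gdig2.pem 2>/dev/null \
      || cp gdig2.crt gdig2.pem

    # Verify: trust the root (placeholder file name), treat the intermediate
    # as untrusted chain material.
    openssl verify -CAfile gd-root-g2.pem -untrusted gdig2.pem openvpn.pem

Note that since this particular certificate is expired, openssl verify will report "certificate has expired" even when the chain is otherwise valid.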
_unix.237069 | I've seen a lot of tutorials on how to crack a WPA/WPA2 network using Reaver or Aircrack-ng on Kali Linux. However, neither of these tools works for me. With Reaver, after it switches to the correct channel (after a decent amount of time switching between channels), it just freezes and doesn't output anything, even with -vv as an option. With the Aircrack-ng method, I never see a WPA handshake after sending a deauth packet. If this isn't the right place to ask about this, where should I ask for help? If this is the right place, what can I do about it?

Here is a picture from my iPhone that shows what pops up in my virtual console 1 after booting up. Everything above the part for entering the login shows up during boot; everything after that shows up after boot. I don't know if this has anything to do with it, but it may. Regardless, does this pose any problem for my Kali Linux? I am running Kali Linux 2.0 on an ext4 partition. If the installation process of Kali Linux makes a difference, it is described in my post here: Warning while Booting into Ubuntu. That post contains the warnings I get while booting into Ubuntu, which are similar to those when booting into Kali Linux.

So I guess I have two questions:

1. Where can I get help using tools like Reaver or Aircrack-ng?
2. Are the warnings in the image anything I need to worry about?

EDIT: Here is the information requested about the errors:

For the ext4 one, my installation completed successfully. It ended saying that I could reboot into Kali now. However, if you look in the link I provided, it shows my installation procedure, and there were some parts I believe were not that significant for which I could not see any options, as part of the installer was cut off on my display, but I'm not sure if that was the problem.

My WiFi connection works fine, as I am posting from Kali Linux. However, I can just try sudo apt-get <app> if I must. I don't know what the name is for the iwlwifi firmware.

I just tried running the command regarding the openvas-scanner, then again with the -l option, which it said shows extra information, and this is the output:

    sudo systemctl status openvas-scanner.service -l
    openvas-scanner.service - Open Vulnerability Assessment System Scanner Daemon
       Loaded: loaded (/lib/systemd/system/openvas-scanner.service; enabled)
       Active: failed (Result: exit-code) since Sun 2015-10-18 20:08:25 PDT; 21h ago
         Docs: man:openvassd(8)
               http://www.openvas.org/
      Process: 742 ExecStart=/usr/sbin/openvassd --listen=127.0.0.1 --port=9391 (code=exited, status=1/FAILURE)

    Oct 18 20:08:25 kali systemd[1]: openvas-scanner.service: control process exited, code=exited status=1
    Oct 18 20:08:25 kali systemd[1]: Failed to start Open Vulnerability Assessment System Scanner Daemon.
    Oct 18 20:08:25 kali systemd[1]: Unit openvas-scanner.service entered failed state.

EDIT: If it makes a difference, my Kali Linux is on a logical ext4 partition. I also installed Kali's GRUB the second time through, which is what runs now instead of my Ubuntu GRUB.

This is the output from lsusb:

    Bus 001 Device 004: ID 0a5c:5801 Broadcom Corp. BCM5880 Secure Applications Processor with fingerprint swipe sensor
    Bus 001 Device 003: ID 8087:07dc Intel Corp.
    Bus 001 Device 002: ID 8087:8000 Intel Corp.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 002 Device 002: ID 1bcf:2985 Sunplus Innovation Technology Inc. Laptop Integrated Webcam HD
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And the output from lspci:

    00:00.0 Host bridge: Intel Corporation Haswell-ULT DRAM Controller (rev 0b)
    00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b)
    00:03.0 Audio device: Intel Corporation Haswell-ULT HD Audio Controller (rev 0b)
    00:14.0 USB controller: Intel Corporation 8 Series USB xHCI HC (rev 04)
    00:16.0 Communication controller: Intel Corporation 8 Series HECI #0 (rev 04)
    00:16.3 Serial controller: Intel Corporation 8 Series HECI KT (rev 04)
    00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I218-LM (rev 04)
    00:1b.0 Audio device: Intel Corporation 8 Series HD Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation 8 Series PCI Express Root Port 1 (rev e4)
    00:1c.3 PCI bridge: Intel Corporation 8 Series PCI Express Root Port 4 (rev e4)
    00:1c.4 PCI bridge: Intel Corporation 8 Series PCI Express Root Port 5 (rev e4)
    00:1d.0 USB controller: Intel Corporation 8 Series USB EHCI #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation 8 Series LPC Controller (rev 04)
    00:1f.2 RAID bus controller: Intel Corporation 82801 Mobile SATA Controller [RAID mode] (rev 04)
    00:1f.3 SMBus: Intel Corporation 8 Series SMBus Controller (rev 04)
    02:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73)
    03:00.0 SD Host controller: O2 Micro, Inc. SD/MMC Card Reader Controller (rev 01)

I apologize if this is in a difficult-to-understand format. | Using Kali Linux | kali linux;airmon ng | null
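For the first question, a hedged sketch of the usual handshake-capture sequence may help isolate where things stall (interface name, channel, and MAC addresses below are placeholders):

    # Put the card in monitor mode (wlan0 is a placeholder; on Kali 2.0 this
    # typically creates wlan0mon).
    airmon-ng start wlan0

    # Watch the target network on its channel; -w writes capture files.
    airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w capture wlan0mon

    # From a second terminal, deauth one associated client (not broadcast),
    # and keep airodump running until "WPA handshake:" appears in its banner.
    aireplay-ng --deauth 5 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 wlan0mon

If no handshake ever appears, common culprits are being out of range of the client, a card/driver that cannot inject, or a channel mismatch between airodump and the access point.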
_softwareengineering.144502 | I am currently publishing a paper on skin detection. However, I need to find the appropriate histogram bin size for each colorspace. I recently came upon a paper that published what it found to be the ideal bin size. The paper can be found at: http://www.inf.pucrs.br/~pinho/CG/Trabalhos/DetectaPele/Artigos/OPTIMUM%20COLOR%20SPACES%20FOR%20SKIN%20DETECTION.pdf. I am specifically talking about Table 1. If I cite the source, would it be okay for me to use data from the table? Note that I cannot contact the author of the paper. | Would this violate any copyright issues? | copyright;research;computer vision | Yes, using the data would be perfectly normal. But I'm not a lawyer; I'm only an academic talking about the etiquette of quoting results from another academic paper. I can't say anything about the copyright law in Azerbaijan when applied to web apps by a Liberian shell company registered in the Cayman Islands.
_unix.48788 | I want to configure a remote Linux machine (Ubuntu Precise) to use a PPTP VPN. I can only access it over ssh, so I don't have a graphical interface (nor do I want one). Some hours of googling turned up:

- guides on where to click in the graphical interface (useless);
- pptp-linux:
      getaddrinfo(): Name or service not known
- nmcli:
      ** (process:5244): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: The name org.freedesktop.NetworkManager was not provided by any .service files
      ** (process:5244): WARNING **: Error enabling/disabling networking: The name org.freedesktop.NetworkManager was not provided by any .service files

Can anyone point me to a tutorial, a command that actually works, or any other way to set up all network traffic to go through the PPTP VPN on that machine?

Update: My VPN service has expired and I will not renew it. I no longer have access to a PPTP VPN, so I cannot replicate the problem, nor do I need to verify whether any given answer solves the problem I described. | Setup PPTP VPN connection with no GUI | linux;ubuntu;vpn;pptp | null
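Two hedged notes for anyone landing here. First, the getaddrinfo() failure usually means the VPN server hostname did not resolve, which is worth checking before anything else. Second, a sketch of the classic non-GUI route via pppd peer files (server name, account, and password below are placeholders):

    # Requires the pptp-linux package.
    sudo apt-get install pptp-linux

    # /etc/ppp/peers/myvpn (hypothetical peer file):
    #   pty "pptp vpn.example.com --nolaunchpppd"
    #   name myuser
    #   remotename PPTP
    #   require-mppe-128
    #   file /etc/ppp/options.pptp
    #
    # /etc/ppp/chap-secrets:
    #   myuser PPTP mypassword *

    sudo pon myvpn                       # bring the tunnel up
    ip route                             # confirm ppp0 exists
    sudo ip route add default dev ppp0   # send all traffic via the VPN (assumption)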
_unix.241297 | I need to run Lubuntu on several workstations, but can't replace the existing distro. I also need Octave running, but it's not included in Lubuntu. I would need to install Octave and the scripts that are to be run with it on a single computer with the same architecture, then generate a live USB containing all of this (Lubuntu + Octave + scripts), and be able to copy it to a number of sticks, so every student can run it on a different computer (even at home). I've heard of the Yocto Project; can it solve this problem? | Customized distro to live-USB | live usb;lubuntu;distributions | null
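Independent of Yocto (which builds custom, often embedded, Linux images and is probably heavier than needed here), a hedged outline of the manual remastering route for an Ubuntu-family live ISO; every path below is illustrative and details vary between releases:

    # Rough outline only (chroot bind-mounts of /proc, /sys and /dev are
    # omitted for brevity). Unpack the live filesystem, install Octave plus
    # the scripts, and repack.
    mkdir iso && sudo mount -o loop lubuntu.iso iso
    sudo unsquashfs -d edit iso/casper/filesystem.squashfs
    sudo chroot edit apt-get update
    sudo chroot edit apt-get install -y octave
    sudo cp -r myscripts/ edit/opt/
    sudo mksquashfs edit filesystem.squashfs -comp xz   # then rebuild the ISO

The resulting image can be written identically to any number of sticks with dd or a tool like Startup Disk Creator.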
_unix.43320 | How can I block IP addresses which access more than n pages in an m period of time? I want to block all auto-traffic and I am not sure how to approach this issue. | How to block IP addresses which access more than n pages in an m period of time? | centos;webserver | null
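One hedged approach on CentOS: the iptables "recent" module tracks new connections per source IP and can drop sources that exceed a threshold. Note this counts TCP connections rather than pages; counting actual page requests needs something log-driven such as fail2ban reading the web server's access log. The 20-hits-per-60-seconds threshold below is only an example:

    iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
             -m recent --name web --set
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
             -m recent --name web --update --seconds 60 --hitcount 20 -j DROP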
_softwareengineering.134802 | I'm lost in a mass of licences - GPL, LGPL, Creative Commons, BSD, Apache, etc. - and looking for a beginner's guide. When using a component under a popular licence like these, what are the restrictions for using it without paying in a commercial product? E.g.:

- do I have to redistribute the source?
- do I have to credit them?
- what if I modify the component? etc.

From what I can tell from http://www.codeproject.com/info/Licenses.aspx, only GPL and Creative Commons force you to redistribute source? | Which licenses are free for commercial use? | licensing | A great resource for understanding open source licenses is the very comprehensive, interactive license differentiator from Oxford University's OSS Watch. This asks questions based on the assumption that you want to find a license for your own software, but turn it around and you could also use it to determine which licenses are suitable depending on the way you want to use other people's software.
_webapps.5798 | I have a family domain set up with Google Apps Standard. Everybody has their own calendar, and then a shared family calendar. I configured my daughter's iPod Touch to talk to GMail using the Google Sync (= Exchange) mechanism. The shared calendar doesn't show up. Google's help page claims that, if I go to m.google.com, select the domain, and log in, I can select Sync and configure the calendars. I don't see any such option on the touch. When I select Sync from the green region, all I get is a link to a help page, and, on that help page, I get the string 'null' where I should get the string 'ipod touch'. Ironically, if I just use the Safari access to the calendar, I do see the entries from all of the calendars. Is there something Google isn't telling me, or have I just hit a bug? | Google Sync on iPod touch: instructions lead to dead end on calendar control | google apps;google calendar;ipod touch | You need to first make sure that the Google mobile service is enabled in the Google Apps dashboard. If it isn't, add the Mobile service, and make sure that "Enable Google Sync" is checked. Next, navigate to m.google.com on your mobile device. Enter your domain (like you said), and tap the green "Sync". This should take you to a page which has the "How to set up Google Sync on X" link you mention. Do not tap that link. Instead, tap below it where it says "Sign in with your Google Apps Account" (it's easy to miss since it looks like a footer). Once you sign in, you should see your device listed there (for me it says iPhone). Tap it and you should be able to select which calendars to sync. I tested this on both an iPhone and iPad, not an iPod touch; however, it's very unlikely that it wouldn't work. In the rare case that it still doesn't work, I would recommend using a desktop browser and spoofing your user agent as the iPhone. Safari has this feature (along with the iPhone's user agent) built in. Just select Develop > User Agent > Mobile Safari X.X -- iPhone. You should then be able to navigate to m.google.com and follow the same steps above.
_codereview.171369 | I wrote a text-based Hangman game in Java that needs to include the features listed in the comments of my code.

In short, the game asks the user to type a word that they (or a second person) will then guess. The word is censored by the program. The program tells the user if their guessed letter is in the word or not, and shows the progress of the censored word after each guess. If the user already guessed the letter before, the program tells the user and shows their previous guesses without repeating any letters. The program shows the number of attempts at the end.

The code below works and has all the features I listed. But it seems not optimal and probably has very poor etiquette, as I'm self-taught thus far. Therefore, I'm looking for any advice that will make this code better and ensure I don't pick up any bad habits (I probably already have, haha) as I continue to learn Java by myself.

    //Simple Hangman game where user types a word, program stores it in all CAPS for easier user readability and censors the word (i.e. *****)
    //User then guesses one letter at a time until the entire word is guessed. Program will inform the user if the guess is in the word, and show the progress of the word after each guess.
    //If the guessed letter is in the word, program will print out the # of times the letter is in the word.
    //Program will store and print out # of guesses (attempts) needed to guess the word at the end of the program.
    //If user tries to duplicate a previous guess, program will inform user of that and show previous guesses by user. Attempt count will not go up for duplicate guesses.
    //When the program shows previous guesses by the user (using a string), it cannot contain duplicate letters. (i.e.: if user guesses 's' twice, 's' will still only show up once in the string)
    //StackOverflow readers: This program works as intended, but as a self-taught beginner coder, I need assistance on optimal coding style (the fewer lines the better) and good coding principles/etiquette
    //I definitely think there are much better ways to code this, but I cannot think of any more (as you probably noticed, this is v3, which has more features and yet a similar amount of lines as version 1, haha)
    //All and any help is appreciated! Thank you :D

    import java.util.*;

    public class HangmanGameV3 {

        public static void main(String[] args) {
            //Initialize all the variables used here
            String storedword;
            char[] charstring;
            int length;
            char[] censor;
            int attempts = 0;
            StringBuilder pastguesses = new StringBuilder(); //StringBuilder to add and print out previous guesses

            Scanner typedword = new Scanner(System.in);
            System.out.println("Enter your word to guess: ");
            storedword = typedword.nextLine();
            storedword = storedword.toUpperCase(); //stores the word and changes it to all caps
            length = storedword.length();
            charstring = storedword.toCharArray(); //creates char array of string

            //creates and prints an array of chars with the same length as string
            censor = storedword.toCharArray();
            System.out.println("Your secret word is: ");
            for (int index = 0; index < length; index++) {
                censor[index] = '*';
            }

            //Main loop to take guesses (is this while loop the ideal loop here?)
            while (String.valueOf(censor).equals(storedword) == false) {
                //Initialize all variables in loop
                char charguess;
                String tempword;
                String tempstring;
                boolean correct = false; //required for if loops below / lets the user know if the letter is in the word or not
                int times = 0; //number of times a letter is in the word
                boolean repeated = false; //check if user guessed the same letter twice

                //prints the censored secret word
                for (int a = 0; a < length; a++) {
                    System.out.print(censor[a]);
                }
                System.out.println();

                //asks user for guess, then stores guess in char charguess and String tempstring
                Scanner guess = new Scanner(System.in);
                System.out.println("Type your guess: ");
                tempword = guess.next();
                charguess = tempword.charAt(0); //gets char data from scanner
                pastguesses.append(charguess); //adds guess to previous guess string
                tempstring = pastguesses.toString();

                //checks if user already guessed the letter previously
                if (tempstring.lastIndexOf(charguess, tempstring.length() - 2) != -1) {
                    System.out.println("You already guessed this letter! Guess again. Your previous guesses were: ");
                    pastguesses.deleteCharAt(tempstring.length() - 1);
                    System.out.println(tempstring.substring(0, tempstring.length() - 1));
                    repeated = true;
                }

                //if the guess is not a duplicated guess, checks if the guessed letter is in the word
                if (repeated == false) {
                    for (int index = 0; index < length; index++) {
                        if (charstring[index] == Character.toUpperCase(charguess)) {
                            censor[index] = Character.toUpperCase(charguess); //replaces * with guessed letter in caps
                            correct = true;
                            times++;
                        }
                    }
                    if (correct == true) {
                        System.out.println("The letter " + charguess + " is in the secret word! There are " + times + " " + charguess + "'s in the word. Revealing the letter(s): ");
                    } else if (correct == false) {
                        System.out.println("Sorry, the letter is not in the word. Your secret word: ");
                    }
                    System.out.println();
                }
                attempts++;
            }
            System.out.println("You guessed the entire word " + storedword.toUpperCase() + " correctly! It took you " + attempts + " attempts!");
            //typedword.close(); //StackOverflow readers: is this necessary? Not sure how to use .close()
        }
    }

Sample output of my code for reference if needed: | Text-based Hangman game in Java | java;hangman | null
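Two reviewer-style notes to append (mine, not from the post). First, on close(): closing a Scanner built on System.in also closes System.in itself, so if you close it at all, do it once at the very end of main, and don't create a second Scanner on System.in inside the loop. Second, a Set answers the duplicate-guess requirement directly and removes the StringBuilder bookkeeping; a hedged sketch:

    import java.util.*;

    // Sketch: a Set makes "already guessed?" a one-line check and cannot
    // contain duplicates by construction; LinkedHashSet keeps guess order.
    class GuessTracker {
        private final Set<Character> guesses = new LinkedHashSet<>();

        /** @return false if the letter was guessed before */
        boolean record(char c) {
            return guesses.add(Character.toUpperCase(c));
        }

        @Override
        public String toString() {
            return guesses.toString(); // e.g. [S, A, T]
        }
    }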