Dataset schema (five string columns; observed length ranges):

  id               5 to 27 characters
  question         19 to 69.9k characters
  title            1 to 150 characters
  tags             1 to 118 characters
  accepted_answer  4 to 29.9k characters
_softwareengineering.279833
In a major application REST API that covers several related domains, does it make more sense to split resources into 'areas' based on the business domain they belong to, or is it better to maintain a single model?

For example, there are 'Sales' and 'Inventory' sub-domains. Users of the system typically only care about one domain at a time, but exceptions are possible. There is an 'Item' concept that exists in both domains, so we could implement the 'item' resource in two different ways:

1. Have different resources to represent the concept in each domain, each resource holding only the relevant data:

       /sales/items/:id
       /inventory/items/:id

2. Have a single resource with all the data, to be used in all contexts:

       /items/:id

There are also plenty of resources that only belong to one of the domains.

Pros of 'areas':

- easier to understand the API for users who only care about a single domain
- easier to implement resources (less stuff to read/update at a time)
- resources can be more specialized/optimized for each particular domain
- ability to control access to resources at a more granular level

Pros of a single unified model:

- no duplicated resources for concepts that belong to more than one domain
- a user who needs to work with multiple domains only has to use a single API that covers all his needs

Is API partitioning as described above a valid way to reduce the complexity of both the API contract and the implementation? I haven't seen it mentioned much anywhere. Are there any other things that need to be considered to make a decision in favor of either approach?
Partitioning REST API resources into areas based on business domains
architecture;rest;api design
I think that this is the perfect fit for the Bounded Context pattern from Domain Driven Design. Large models don't scale well; it's better to split them into smaller models (bounded contexts) and explicitly declare their relationships (common entities, interactions between bounded contexts, etc.).

In this case you would have two bounded contexts (sales and inventory) with one shared resource (item). The interpretation of item in the sales context may be a bit different than in the inventory context, and that's completely fine.

To use this pattern, you should:

- identify the boundaries of the different contexts (which you partly did by creating the subroots /sales/ and /inventory/)
- identify shared resources (item) and explicitly describe the resource's interpretation in the different bounded contexts
- identify interactions between bounded contexts (which is probably out of scope for this question)

A note on the pros of a single unified model:

> no duplicated resources for concepts that belong to more than one domain

From a REST point of view, the resource is not duplicated. One resource can be identified by multiple URIs, and each URI can have a different representation (fitting the given bounded context).
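To illustrate that last point with a hypothetical payload (all field names invented for the example), the same item can live under both sub-roots with context-fitting representations:

```python
# GET /sales/items/42 -- representation in the sales bounded context
sales_item = {"id": 42, "name": "Widget", "unit_price": 9.99, "discount_group": "A"}

# GET /inventory/items/42 -- representation in the inventory bounded context
inventory_item = {"id": 42, "name": "Widget", "on_hand": 130, "reorder_level": 25}
```

Both URIs identify the same underlying item; only the representation differs per context.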
_codereview.110696
This is the first AJAX website I constructed. I am trying to have information generated according to the name that the user types in. It works well, but I have some problems with slow performance. Is there anything I can do to improve the performance of this code, please?

HTML:

```html
<body background="http://starecat.com/content/wp-content/uploads/my-code-doesnt-work-i-have-no-idea-why-my-code-works.jpg">
    <section id="fetch">
        <input type="text" placeholder="Please enter your name here" id="term" />
        <button type="button" id="search">ENTER</button>
    </section>
    <section id="content">
        <div id="figures"></div>
        <div id="name">
            <div id="nameBox"></div>
            <div id="nameText"></div>
        </div>
        <div id="clickBox">CLICK ME!</div>
        <div id="message">
            <div id="messageBox"></div>
            <div id="greetingText"></div>
            <div id="messageText"></div>
        </div>
    </section>
    <section id="email">
        <input type="text" placeholder="Please enter your email here" id="emailBox" />
        <button type="button" id="emailEnter">ENTER</button>
    </section>
</body>
```

jQuery:

```javascript
$(document).ready(function () {
    $('#content').hide();
    $('#email').hide();

    $('#term').focus(function () {
        var full = $("#content section").length ? true : false;
        if (full == false) {
            $('#content').hide();
        }
    });

    var getFigures = function (type) {
        if (type == "mf") {
            $('#figures').prepend('<img src="images/mf.png" height="500px" width="500px" />');
            console.log("mf found");
        } else if (type == "mm") {
            $('#figures').prepend('<img src="images/mm.png" height="500px" width="500px" />');
            console.log("mm found");
        } else if (type == "ff") {
            $('#figures').prepend('<img src="images/ff.png" height="500px" width="500px" />');
            console.log("ff found");
        } else if (type == "male") {
            $('#figures').prepend('<img src="images/male.png" height="500px" width="500px" />');
            console.log("male found");
        } else if (type == "female") {
            $('#figures').prepend('<img src="images/female.png" height="500px" width="500px" />');
            console.log("female found");
        }
    };

    var displayMessage = function (message) {
        console.log("box clicked");
        $('#clickBox').hide();
        $('#message').show();
    };

    var displayContent = function () {
        $('#fetch').hide();
        $('#message').hide();
        $('#content').show();

        var content = $('#term').val();
        content = content.toLowerCase().replace(/\b[a-z]/g, function (letter) {
            return letter.toUpperCase();
        });
        console.log("Name entered: " + content);

        var googleApi = "https://api.com/";
        var googleKey = "googleKeyHere";
        var googleSecret = "googleSecretHere";
        var data = [];

        $.ajax({
            url: googleApi,
            headers: {
                Authorization: "Basic " + btoa(googleKey + ":" + googleSecret)
            },
            data: data,
            dataType: 'json',
            type: 'GET',
            success: function (data) {
                var data = data || [];
                var results = data.filter(function (element, index, array) {
                    if (element.name === content) {
                        return (true);
                    }
                    return (false);
                });

                if (results[0]) {
                    // user name is identified
                    console.log(results[0]);
                    var name = results[0].name;
                    $('#nameText').text(name);
                    console.log("Name: " + name);
                    var greeting = results[0].greeting;
                    $('#greetingText').text(greeting);
                    console.log("Greeting: " + greeting);
                    var type = results[0].type;
                    console.log("Type: " + type);
                    getFigures(type);
                    console.log("FIGURES PRINTED");
                    $('#clickBox').show();
                    $('#clickBox').click(displayMessage);
                    var message = results[0].message;
                    $('#messageText').text(message);
                    console.log("Message: " + message);
                    return;
                } else {
                    // user name is unidentified: new user
                    console.log("new user");
                    var name = content;
                    $('#nameText').text(name);
                    console.log("Name: " + name);
                    var greeting = data[0].greeting;
                    $('#greetingText').text(greeting);
                    console.log("Greeting: " + greeting);
                    console.log('input#term');
                    $('input#term').genderApi({ key: "genderAPIKeyHere" }).on('gender-found', function (e, result) {
                        if (result.accuracy >= 60) {
                            var gender = result.gender;
                            console.log('Gender found: ' + gender);
                            var type = gender;
                            console.log("Type: " + type);
                            getFigures(type);
                            console.log("FIGURES PRINTED");
                            if (type == gender) {
                                $('#email').show();
                            } else if (type == item.type) {
                                $('#email').hide();
                            }
                        }
                    });
                    console.log("should have printed figures yo");
                    $('#clickBox').show();
                    $('#clickBox').click(displayMessage);
                    var message = data[0].message;
                    $('#messageText').text(message);
                    console.log("Message: " + message);
                }
                return false;
            },
            error: function (xhr, status, error) {
                console.log("ERROR: " + xhr.responseText);
                console.log("ERROR: " + status);
                console.log("ERROR: " + xhr.status);
                console.log("ERROR: " + error);
            }
        });
    };

    $('#search').click(displayContent);

    $('#term').keypress(function (event) {
        if (event.which == 13) {
            displayContent();
        }
        console.log("SEARCH");
    });

    console.log("waiting for name to be entered");
});
```
A first AJAX search page
javascript;performance;jquery;html;ajax
null
_unix.158469
I'm trying to set up a VPN connection. My corporate IT team went and hid under a table when I said Linux, so you guys are my last hope. I'm trying to port all the settings from my Windows 7 machine, which uses Cisco AnyConnect for a VPN. I have the .xml configuration files (ProgramData/Cisco/Cisco AnyConnect Secure Mobility Client), my network certificate (which I extracted into a user cert and private key using OpenSSL) and the CA certificate.

So, using the GUI for OpenConnect, I put in the gateway, CA, user cert and private key. I get the response "XML response has no auth node", and then in the top right of my screen after a few minutes I get "the vpn connection 'xxx' failed because there were no valid vpn secrets". After Googling this issue, most people say to restart network-manager, which I have done many times. I have also updated to the latest version of OpenConnect, v6.00, which is much newer than the repository version 5.02; both versions had the same issue. I have also tried to make the connection from the terminal, but I get the same issues. I don't know what else to try.

On the Windows side there are some .xml profile files (same path as in the first paragraph); I'm not sure if I have to have those linked somehow, like the .../Profile/corporate-profile.xml file? Any ideas on what to try would be most welcome. I'm not 100% sure of my setup, but I have good certs, I pulled the gateway from preferences_global.xml (from Windows), and there is no proxy listed, so they seem right as far as I can tell.
Using OpenConnect for a VPN, Linux Mint 17
networking;linux mint;vpn;networkmanager
null
_codereview.171433
Beginner web dev here. I created a custom list component (with an edit feature) in plain JS/HTML. I have a few questions I would like answered (besides the normal feedback that you folks do here on the site):

- Is this a standard/right way to create and use components when using plain JS?
- If I can create and use objects like this, what is the benefit of creating such a component using, say, react.js?

ListComponent.js

```javascript
///////////
// ListComponent is a class which lets you create a list component
// dynamically using JS and DOM. The list component also has some features
// out of the box - e.g. editing items when clicked.
//
function ListComponent(type) {
    // Model data of the list.
    this.data = [{
        name: "mona",
        id: 0
    }, {
        name: "dona",
        id: 1
    }, {
        name: "jona",
        id: 2
    }],

    // Create a list component.
    this.create = function () {
        let list = document.createElement(!type ? "ul" : type);
        list.id = "customList";
        document.body.appendChild(list);
        this.draw();
    },

    this.remove = function () {
        // Remove our list component from DOM
        var elem = document.getElementById("customList");
        return elem.parentNode.removeChild(elem);
    }

    ///
    // draw
    // Appends items to the list component.
    // Deletes any child items first if there are any.
    //
    this.draw = function () {
        let that = this;

        // First delete all items of the list.
        let list = document.getElementById("customList");
        while (list.firstChild) {
            list.removeChild(list.firstChild);
        }

        // Now, append new items.
        that.data.forEach(function (item) {
            let listItem = document.createElement("li");

            // Listen to click event on this list item.
            listItem.addEventListener("click", function () {
                let newName = prompt("Enter item name", item.name);
                if (newName == null || newName.length == 0)
                    return;

                // Make change in the list model data.
                for (let i = 0; i < that.data.length; i++) {
                    if (that.data[i].id == item.id) {
                        that.data[i].name = newName;
                        break;
                    }
                }

                // Redraw the list.
                that.draw();
            }, false);

            listItem.innerHTML = item.name;
            listItem.id = item.id;
            list.appendChild(listItem);
        });
    }
}
```

index.html

```html
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Demo site</title>
    <!-- Import our list component -->
    <script src="ListComponent.js"></script>
</head>
<script>
    document.addEventListener("DOMContentLoaded", function (event) {
        // Run this code when DOM is loaded.
        let list = new ListComponent("ol");
        list.create();
    });
</script>
</body>
</html>
```
List component using plain JS
javascript;object oriented;html;dom
null
_scicomp.21186
For a multi-objective optimization task I want to use the DTLZ5, DTLZ6 and DTLZ7 problems defined by Deb et al. in their paper Scalable Multi-Objective Optimization Test Problems. There are multiple libraries which implement these problems, like Shark (C++), jMetal (Java), deap (Python) and pagmo (C++ and Python). They all share their code on GitHub, but unfortunately I may only post two links here, for which I chose jMetal to illustrate the problems that occur in all of these implementations.

DTLZ5

In the paper, $$g(\text{x}_M)=\sum_{x_i \in \text{x}_M} x_i^{0.1}$$ while in jMetal it's $$g(\text{x}_M)=\sum_{x_i \in \text{x}_M} (x_i - 0.5)^{2}$$ There is no equivalent to this in the paper, although for DTLZ2 the $g$-function looks this way.

DTLZ6

In short: DTLZ6 in jMetal is DTLZ5 in the paper.

DTLZ7

I could not find any implementation of the DTLZ7 problem as it is stated in the paper. In contrast, I found at least one paper and other scientific work that referred to the $h$-value of DTLZ7, like Dynamic Multiobjective Optimization Problems: Test Cases, Approximations, and Applications by Farina et al., where Deb is even a co-author. Additionally, there is a definition of the DTLZ7 problem on the website of ETH Zürich, where the three co-authors worked. These two sources, as well as the libraries listed above, describe/implement DTLZ6 instead of DTLZ7.

TL;DR

- the problem implemented as DTLZ5 is not part of the paper
- the implemented DTLZ6 equals DTLZ5 from the paper
- same for DTLZ7 and DTLZ6
- there is no implementation of DTLZ7 out there

The Question

Is there any paper I could not find that makes my observations obsolete, or why do all these sources differ from the paper?
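To make the DTLZ5 mismatch concrete, here is a minimal sketch of the two g-functions exactly as the question states them (NumPy; the function names are mine, not the libraries'):

```python
import numpy as np

def g_paper_dtlz5(x_m):
    # DTLZ5 g as quoted from Deb et al.: sum of x_i^0.1
    return np.sum(x_m ** 0.1)

def g_jmetal_dtlz5(x_m):
    # what jMetal (and the other libraries) implement: the DTLZ2-style g
    return np.sum((x_m - 0.5) ** 2)

x_m = np.full(10, 0.5)
print(g_paper_dtlz5(x_m))   # ~9.33 -- nonzero even at x_i = 0.5
print(g_jmetal_dtlz5(x_m))  # 0.0   -- minimized exactly at x_i = 0.5
```

The two functions have different minimizers and different ranges, so they really do define different test problems rather than equivalent formulations.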
Definition of the DTLZ 5 - 7 Problems
optimization;constrained optimization
null
_webapps.23109
I've noticed that for some locations, such as Tbilisi, Georgia, where I am now, both Google Maps and OpenStreetMap have incomplete data, but with different omissions. It would be really nice to be able to see them overlaid on one another, so I can do things like zooming in on something in the Google satellite image to know I have the right spot on OSM. I was sure I stumbled across a site that did just this once, but now that I need it I can't seem to find it or anything like it via Googling.
Is there a map service which overlays Google Maps and Openstreetmaps?
google maps
null
_unix.192373
I have 3 Linux servers. The first one is a dev machine (where I compile my binary); the second one is a jumpbox which helps me connect to a testbed, which is the place where I need to copy my binary. Right now, I am doing something like this:

1. Copy my binary from the dev machine to the jumpbox with `scp -r binary abc@jumpbox:/temp/`
2. Log in to the jumpbox and copy the binary from the jumpbox to the testbed with `scp -r binary abc@testbed:/bin/`

Is there some way I can do the above 2 steps through a single script? I realize I will be forced to save my passwords in a file for that to work, but I am fine with it; I can always use encryption.
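One way to script the two hops from the dev machine is sketched below, assuming key-based (or otherwise non-interactive) authentication to both hosts; the usernames and paths are the question's:

```python
import subprocess

# Hop 1: dev machine -> jumpbox
subprocess.run(["scp", "-r", "binary", "abc@jumpbox:/temp/"], check=True)

# Hop 2: run scp *on* the jumpbox to push the binary on to the testbed
subprocess.run(["ssh", "abc@jumpbox", "scp -r /temp/binary abc@testbed:/bin/"],
               check=True)
```

With password authentication this would still prompt twice, which is where the question's note about storing credentials comes in.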
How can I transfer a file across 2 servers (source->serverA->serverB) with a single script?
scp;autologin
null
_unix.322906
My machine has 4GB of RAM and 4GB of swap; the amount of usable memory is about 3GB. The machine is running an iSCSI target, squid, munin, unbound, and other services.

However, the memory usage indicates that most of the 3GB being used is for buffers/cache. If that is the case, why does the system still throw an OOM instead of reducing the amount used by buffers/cache? Furthermore, can't swap be utilized if needed (it is barely touched)? If I echo 3 into vm.drop_caches, the amount of buffers/cache drops significantly and I don't get any OOM errors for a while. I perceive the problem to be a lack of freeing up buffers/cache, as well as the available swap space not being used.

```
              total        used        free      shared  buff/cache   available
Mem:           3502         169          29           9        3303        3274
Swap:          4095          11        4084
```

```
[Sat Nov 12 22:20:03 2016] Out of memory: Kill process 19741 (squid) score 3 or sacrifice child
[Sat Nov 12 22:20:03 2016] Killed process 19742 (log_file_daemon) total-vm:12500kB, anon-rss:136kB, file-rss:1204kB, shmem-rss:0kB
[Sat Nov 12 22:20:03 2016] tcp invoked oom-killer: gfp_mask=0x27080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK), order=2, oom_score_adj=0
[Sat Nov 12 22:20:03 2016] tcp cpuset=/ mems_allowed=0
[Sat Nov 12 22:20:03 2016] CPU: 0 PID: 20616 Comm: tcp Not tainted 4.7.4-hardened #1
[Sat Nov 12 22:20:03 2016] Hardware name: Dell Inc. OptiPlex GX620 /0HH807, BIOS A07 03/31/2006
[Sat Nov 12 22:20:03 2016] 0000000000000286 0000000000000286 ffff8800642a3b10 ffffffff812d1bbe
[Sat Nov 12 22:20:03 2016] 0000000000000000 ffff8800642a3cb0 ffff8800d4799e00 ffff8800642a3b78
[Sat Nov 12 22:20:03 2016] ffffffff811a57ca ffff8800ab41b200 0000000000000015 0000000000000206
[Sat Nov 12 22:20:03 2016] Call Trace:
[Sat Nov 12 22:20:03 2016] [<ffffffff812d1bbe>] dump_stack+0x4e/0x79
[Sat Nov 12 22:20:03 2016] [<ffffffff811a57ca>] dump_header+0x5e/0x1e7
[Sat Nov 12 22:20:03 2016] [<ffffffff8114981d>] oom_kill_process+0x95/0x34d
[Sat Nov 12 22:20:03 2016] [<ffffffff81149eeb>] out_of_memory+0x3a1/0x3ca
[Sat Nov 12 22:20:03 2016] [<ffffffff8114e163>] __alloc_pages_nodemask+0x93a/0xaaa
[Sat Nov 12 22:20:03 2016] [<ffffffff8114e312>] __alloc_pages_node+0x3f/0x54
[Sat Nov 12 22:20:03 2016] [<ffffffff8114e517>] alloc_kmem_pages_node+0x2d/0x7f
[Sat Nov 12 22:20:03 2016] [<ffffffff8107a097>] copy_process.part.48+0x191/0x1f9e
[Sat Nov 12 22:20:03 2016] [<ffffffff8107c07c>] _do_fork+0xd7/0x2db
[Sat Nov 12 22:20:03 2016] [<ffffffff8107c351>] sys_clone+0x39/0x50
[Sat Nov 12 22:20:03 2016] [<ffffffff81001a6f>] do_syscall_64+0x55/0x74
[Sat Nov 12 22:20:03 2016] [<ffffffff81733913>] entry_SYSCALL64_slow_path+0x25/0x25
[Sat Nov 12 22:20:03 2016] Mem-Info:
[Sat Nov 12 22:20:03 2016] active_anon:13586 inactive_anon:19647 isolated_anon:24 active_file:666416 inactive_file:138007 isolated_file:0 unevictable:0 dirty:1007 writeback:1 unstable:0 slab_reclaimable:36881 slab_unreclaimable:4204 mapped:6840 shmem:2218 pagetables:1185 bounce:0 free:8281 free_pcp:0 free_cma:0
[Sat Nov 12 22:20:03 2016] Node 0 DMA free:13984kB min:32kB low:44kB high:56kB active_anon:24kB inactive_anon:260kB active_file:472kB inactive_file:528kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15996kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:40kB slab_reclaimable:368kB slab_unreclaimable:52kB kernel_stack:16kB pagetables:12kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:4 all_unreclaimable? no
[Sat Nov 12 22:20:03 2016] lowmem_reserve[]: 0 3486 3486 3486
[Sat Nov 12 22:20:03 2016] Node 0 DMA32 free:19140kB min:7528kB low:11088kB high:14648kB active_anon:54320kB inactive_anon:78328kB active_file:2665192kB inactive_file:551500kB unevictable:0kB isolated(anon):96kB isolated(file):0kB present:3643928kB managed:3570484kB mlocked:0kB dirty:4028kB writeback:4kB mapped:27360kB shmem:8832kB slab_reclaimable:147156kB slab_unreclaimable:16764kB kernel_stack:3712kB pagetables:4728kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:4 all_unreclaimable? no
[Sat Nov 12 22:20:03 2016] lowmem_reserve[]: 0 0 0 0
[Sat Nov 12 22:20:03 2016] Node 0 DMA: 26*4kB (UMEH) 17*8kB (UMEH) 23*16kB (UMEH) 14*32kB (MEH) 10*64kB (UMH) 4*128kB (UMH) 2*256kB (EH) 2*512kB (ME) 2*1024kB (EH) 0*2048kB 2*4096kB (M) = 13984kB
[Sat Nov 12 22:20:03 2016] Node 0 DMA32: 2313*4kB (UME) 1248*8kB (UME) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 19236kB
[Sat Nov 12 22:20:03 2016] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[Sat Nov 12 22:20:03 2016] 809484 total pagecache pages
[Sat Nov 12 22:20:03 2016] 2843 pages in swap cache
[Sat Nov 12 22:20:03 2016] Swap cache stats: add 3997, delete 1154, find 3819274/3819343
[Sat Nov 12 22:20:03 2016] Free swap = 4182696kB
[Sat Nov 12 22:20:03 2016] Total swap = 4194300kB
[Sat Nov 12 22:20:03 2016] 914981 pages RAM
[Sat Nov 12 22:20:03 2016] 0 pages HighMem/MovableOnly
[Sat Nov 12 22:20:03 2016] 18383 pages reserved
[Sat Nov 12 22:20:03 2016] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
[Sat Nov 12 22:20:03 2016] [ 2521]     0  2521     5671      400      13       3       38         -1000 udevd
[Sat Nov 12 22:20:03 2016] [ 2789]     0  2789    23964       64      14       3       11             0 lvmetad
[Sat Nov 12 22:20:03 2016] [ 3259]     0  3259     8535       21      19       3      127             0 syslog-ng
[Sat Nov 12 22:20:03 2016] [ 3260]     0  3260    49135      818      31       3      135             0 syslog-ng
[Sat Nov 12 22:20:03 2016] [ 3328]     0  3328     3240      430      11       3      103             0 crond
[Sat Nov 12 22:20:03 2016] [ 3644]   103  3644     3315      501      10       3       14             0 dbus-daemon
[Sat Nov 12 22:20:03 2016] [ 3676]     0  3676   140206     1988      55       3      162             0 NetworkManager
[Sat Nov 12 22:20:03 2016] [ 3730]     0  3730     2610        0      10       3       33             0 atd
[Sat Nov 12 22:20:03 2016] [ 3746]   104  3746    90640     1097      37       3      561             0 polkitd
[Sat Nov 12 22:20:03 2016] [ 3748]     0  3748     7529     1035      21       3      107             0 wpa_supplicant
[Sat Nov 12 22:20:03 2016] [ 3888]     0  3888     2997      464      11       3      606             0 haveged
[Sat Nov 12 22:20:03 2016] [ 3922]     0  3922    12795     2711      29       3      936             0 munin-node
[Sat Nov 12 22:20:03 2016] [ 4006]     0  4006     4196     2125      12       3       10             0 dhclient
[Sat Nov 12 22:20:03 2016] [ 4145]     0  4145     2369       31       9       3        0             0 rngd
[Sat Nov 12 22:20:03 2016] [ 5773]     0  5773     3027      457      10       3        0             0 agetty
[Sat Nov 12 22:20:03 2016] [ 5774]     0  5774     3026      437      11       3        0             0 agetty
[Sat Nov 12 22:20:03 2016] [ 5775]     0  5775     3026      424      11       3        2             0 agetty
[Sat Nov 12 22:20:03 2016] [ 5776]     0  5776     3026      435      11       3        0             0 agetty
[Sat Nov 12 22:20:03 2016] [10460]  1000 10460     3805       86      12       3        0             0 ssh-agent
[Sat Nov 12 22:20:03 2016] [14256]   113 14256   282464     1904      47       5        0             0 privoxy
[Sat Nov 12 22:20:03 2016] [14290]   113 14290   259927     1876      39       4        0             0 privoxy
[Sat Nov 12 22:20:03 2016] [14761]     0 14761    69836     1005      46       4        4         -1000 tgtd
[Sat Nov 12 22:20:03 2016] [14763]     0 14763     2691       42      11       3        0             0 tgtd
[Sat Nov 12 22:20:03 2016] [15840]   106 15840     4471     2188      13       3       10             0 dhcpd
[Sat Nov 12 22:20:03 2016] [16782]   501 16782    24486      220      18       3        0             0 stunnel
[Sat Nov 12 22:20:03 2016] [16788]     0 16788     6045      562      16       3        0         -1000 sshd
[Sat Nov 12 22:20:03 2016] [18585]   118 18585    29712     4475      29       3       23             0 unbound
[Sat Nov 12 22:20:03 2016] [20456]     0 20456     3026      451      11       3        0             0 agetty
[Sat Nov 12 22:20:03 2016] [22476]     0 22476     3805       87      10       3        0             0 ssh-agent
[Sat Nov 12 22:20:03 2016] [24989]     0 24989    14459      987      33       3        0             0 sshd
[Sat Nov 12 22:20:03 2016] [24991]  1000 24991    14497      683      32       3        0             0 sshd
[Sat Nov 12 22:20:03 2016] [24992]  1000 24992    14459      582      32       3        0             0 sshd
[Sat Nov 12 22:20:03 2016] [26146]     0 26146    14459      967      33       3        0             0 sshd
[Sat Nov 12 22:20:03 2016] [26152]     0 26152    14459      643      30       3        0             0 sshd
[Sat Nov 12 22:20:03 2016] [ 7433]     0  7433    12775     1601      30       3        0             0 cupsd
[Sat Nov 12 22:20:03 2016] [ 9717]     0  9717    14459     1201      34       3        0             0 sshd
[Sat Nov 12 22:20:03 2016] [ 9721]  1000  9721    14459      736      32       3        0             0 sshd
[Sat Nov 12 22:20:03 2016] [ 9722]  1000  9722     7638     1372      19       3        0             0 zsh
[Sat Nov 12 22:20:03 2016] [ 9899]   107  9899    13293     1473      28       3        0             0 squid
[Sat Nov 12 22:20:03 2016] [17193]   112 17193    29139     1167      37       3        0             0 minidlnad
[Sat Nov 12 22:20:03 2016] [19741]   107 19741    19303     6751      40       3        0             0 squid
[Sat Nov 12 22:20:03 2016] [20415]     0 20415     7439      571      19       3       87             0 crond
[Sat Nov 12 22:20:03 2016] [20420]   177 20420     2815      671      11       3        0             0 sh
[Sat Nov 12 22:20:03 2016] [20423]   177 20423     2814      656      10       3        0             0 munin-cron
[Sat Nov 12 22:20:03 2016] [20424]   177 20424     1447      427       8       3        0             0 logger
[Sat Nov 12 22:20:03 2016] [20425]   177 20425    37465     6280      61       3        0             0 munin-update
[Sat Nov 12 22:20:03 2016] [20440]   177 20440    38644     5840      63       3        0             0 /usr/libexec/mu
[Sat Nov 12 22:20:03 2016] [20441]     0 20441    12795     2639      28       3      858             0 /usr/sbin/munin
[Sat Nov 12 22:20:03 2016] [20616] 65534 20616     2408      600      10       3        0             0 tcp
[Sat Nov 12 22:20:03 2016] [20617] 65534 20617     2408       62       7       3        0             0 tcp
[Sat Nov 12 22:20:03 2016] Out of memory: Kill process 19741 (squid) score 3 or sacrifice child
[Sat Nov 12 22:20:03 2016] Killed process 19741 (squid) total-vm:77212kB, anon-rss:17284kB, file-rss:9640kB, shmem-rss:80kB
```
How to prevent OOM and use swap?
linux;out of memory
null
_cs.21977
For many machine learning projects that we do, we start with the k nearest neighbour classifier. This is an ideal starting classifier, as we usually have sufficient time to calculate all distances and the number of parameters is limited (k, the distance metric and the weighting).

However, this often has the effect that we stick with the kNN classifier, as later in the project there is no room for switching to another classifier. What would be a good reason to try a new classifier? Obvious ones are memory and time restraints, but are there cases when another classifier can actually improve the accuracy?
When should I move beyond k nearest neighbour
neural networks;classification
k-NN generalizes in a very restrictive sense. It simply uses smoothness priors (or the continuity assumption). This assumption implies that patterns that are close in feature space most likely belong to the same class. No functional regularity in the pattern distribution can be recovered by k-NN.

Thus, it requires representative training samples, which can be extremely large, especially in cases of highly dimensional feature spaces. Worse, these samples can be unavailable. Consequently, it cannot learn invariants. If patterns can be subjected to some transformations without changing their labels, and the training sample doesn't contain patterns transformed in all admissible ways, k-NN will never recognize transformed patterns that were not presented during training. This is true, e.g., for shifted or rotated images, if they are not represented in some invariant form before running k-NN. k-NN cannot even abstract from irrelevant features.

Another somewhat artificial example is the following. Imagine that patterns belonging to different classes are distributed periodically (e.g. in accordance with a sine: if it is less than 0, the patterns belong to one class; if it is greater, the patterns belong to the other class). The training set is finite, so it will be located in a finite region. Outside this region the recognition error will be 50%. One can imagine a logistic regression with periodic basis functions that will perform much better in this case. Other methods will be able to learn other regularities in pattern distributions and extrapolate well.

So, if one suspects that the available data set is not representative, and invariance to some transformations of the patterns should be achieved, then this is the case in which one should move beyond k-NN.
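The periodic example is easy to reproduce; here is a minimal sketch with scikit-learn (the data, sample sizes and basis are invented purely for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, (500, 1))     # training region
X_test = rng.uniform(10, 20, (500, 1))     # outside the training region
y_train = (np.sin(X_train[:, 0]) > 0).astype(int)
y_test = (np.sin(X_test[:, 0]) > 0).astype(int)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("kNN outside training region:", knn.score(X_test, y_test))     # ~0.5

def phi(X):
    # periodic basis: the class structure is linear in sin(x)
    return np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 0])])

lr = LogisticRegression().fit(phi(X_train), y_train)
print("logistic + periodic basis:", lr.score(phi(X_test), y_test))   # ~1.0
```

kNN can only ever vote with its nearest stored samples, all of which sit at the edge of the training region, while the periodic basis lets the linear model extrapolate the regularity.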
_unix.307746
I installed Lubuntu with QWERTY, and now want to switch it to DVORAK. The documented way to do this is to use the Keyboard Layout Handler on the panel. This works, but only during and within a desktop session. For login, the system still reverts to QWERTY, both at the initial login and for any screen lock. The TTY sessions also default to QWERTY. Is there any way to change everything to DVORAK?
Change Default Keyboard Layout on Lubuntu 16.04
ubuntu;console;keyboard layout;display manager
null
_codereview.28656
I've got the imperative-styled solution working perfectly. What I'm wondering is how to make a recursive branch and bound. My code is below; the evaluate function returns the optimistic estimate, the largest possible value for the set of items which fit into the knapsack (linear relaxation).

For some inputs this outputs an optimal solution, for some it comes really close, and for every input it is extremely fast, so it doesn't seem that it hangs anywhere in the search space. So maybe I'm not branching where I have to, or I did something wrong. I'm sure optVal is optimal.

```scala
def branch(items: List[Item], taken: List[Item], capacity: Int, eval: Long): (List[Item], Boolean) =
  items match {
    case x :: xs if x.weight <= capacity => {
      val (bestLeft, opt) = branch(xs, x :: taken, capacity - x.weight, eval)
      if (opt) (bestLeft, opt) // if solution is optimal get out of the tree
      else {
        val evalWithout = evaluate(taken.reverse_:::(xs))
        if (evalWithout < optVal) (bestLeft, opt)
        else branch(xs, taken, capacity, evalWithout)
      }
    }
    case x :: xs => branch(xs, taken, capacity, evaluate(taken.reverse_:::(xs)))
    case Nil =>
      if ((taken map (_.value) sum) == optVal) (taken, true) else (taken, false)
  }
```
Recursive branch and bound for knapsack problem
scala;recursion
null
_unix.344420
After the loss of one disk, and the data on it, I decided it is time to take action to prevent this happening again. First I thought about RAID, but then I heard about ZFS and decided to investigate. I have a few questions for those who use it; I will try to answer them myself too.

In case a disk breaks:

- Will the system halt?
- Is the redundancy of the files resolved automatically?
- Can I order a disk to be emptied, so I can disconnect it?

Let's say the system stopped working:

- Can I move the disks to a new system?
- Even if it is encrypted?
- What kind of hardware do I need to run this with encryption?
- Can I share folders with Samba?

PS: This server will be used for ownCloud or similar, and as a NAS.
General questions about zfs
zfs
null
_datascience.12032
In R I have data where head(data) gives

```
  day count promotion
1   1    33      20.8
2   2    23      17.1
3   3    19       1.6
4   4    37      20.8
```

Now day is simply the day (and is in order). promotion is the promotion value for the day; it is simply the number of times an advertisement has been on television. count is the number of new users we got that day.

I want to investigate the impact the promotion value has on new users (count). Since we have a count process, I thought it would be best to make a Poisson regression model:

```r
model = glm(formula = data$count ~ data$promotion, data = data)
```

When we type summary(model) we get

```
Coefficients:
         (Intercept)  good_users$promotion
            13.40216               0.24342

Degrees of Freedom: 793 Total (i.e. Null);  792 Residual
Null Deviance:      9484
Residual Deviance:  9325    AIC: 12680
```

Here is a plot of the data. [plot omitted]

But when I plot the fitted values for the model

```r
points(model$promotion, model$fitted, col="blue")
```

we get this. [plot omitted]

Here is another plot that shows the same, but where days with 0 promotion are removed. [plot omitted]

How should I choose my regression model (should I use lm instead of glm), or is there another, better approach to solve this, given that the data is not pretty but rather random-looking?

Updated: finding the sweet spot

I have done the following to find a sweet spot. I divide the data into 10 groups. group1 is simply a subset where the promotion value is within 1:10, group2 is the data where the promotion value is between 11:20, and so on for the other groups. So in R we have

```r
group1 <- subset(data, data$promotion %in% 1:10)
group2 <- subset(data, data$promotion %in% 11:20)
group3 <- subset(data, data$promotion %in% 21:30)
...
group10 <- subset(data, data$promotion %in% 91:100)
```

Now I can use wilcox.test to test if there is a significant difference between the groups by typing

```r
wilcox.test(group2, group1, alternative="greater")
```

which gives a low p-value, i.e. group2 has significantly higher new_good_users than group1. The same goes for

```r
wilcox.test(group3, group2, alternative="greater")
```

but for

```r
wilcox.test(group4, group3, alternative="greater")
```

I get a p-value of 0.20, i.e. there is no significant difference in new_good_users between group4 and group3. And the same goes for the rest of the group pairs up to 10.

So this must mean that if we increase promotion in the first groups we have an increase in new_good_users, but in the last groups we do not have that increase. This means that we have a sweet spot at group3, where the promotion value is 21:30. Is this not correct?
Regression model for a count process
regression
I have to quote Tukey, perhaps the grandfather of data science:

> The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.

I see nothing wrong with your Poisson model. In fact it's a pretty good fit to the data. The data is noisy; there is nothing you can do about it. Perhaps the noise is due to whatever else is on TV at the time, or the weather, or the phase of the moon. Whatever it is, it's not in your data.

If you reasonably think the weather might be affecting your data, get the weather data and add it. If it improves the log-likelihood enough for each degree of freedom spent, then it's doing a good job and you leave it in. This is regression modelling 101.

Of course there's a zillion other things you can do. Scale the data by any old transformation you want. Fit a quadratic. A quartic. A quintic. A spline. You could include the date and possible temporal correlation effects. But always bear in mind what Tukey was saying: if your data is noisy, you won't get anything much out of it. So it goes.
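For readers working outside R, the same fit is a few lines in Python's statsmodels; a minimal sketch, assuming a data frame df with the question's count and promotion columns (the file name is hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("promotion.csv")  # hypothetical file with day, count, promotion

# Poisson GLM with an explicit family (R's glm defaults to gaussian otherwise)
base = smf.glm("count ~ promotion", data=df, family=sm.families.Poisson()).fit()
print(base.summary())

# The answer's point, operationalized: a candidate term earns its place only
# if it improves the fit enough for the degree of freedom it spends, e.g. AIC.
quad = smf.glm("count ~ promotion + I(promotion ** 2)", data=df,
               family=sm.families.Poisson()).fit()
print(base.aic, quad.aic)  # lower AIC wins
```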
_unix.230102
I'd like to write a bash script to parse data from a configuration file. I searched for this but didn't find something I could modify to suit my needs.

Joomla! config file:

```
public $access = '1';
public $debug = '0';
public $debug_lang = '0';
public $dbtype = 'mysqli';
public $host = 'localhost';
public $user = 'template';
public $password = 'template';
public $db = 'template_druha';
public $dbprefix = 'dsf1i_';
public $live_site = '';
public $secret = '2w9gHzPb4HfAs2Y9';
public $gzip = '0';
public $error_reporting = 'default';
```

I'd like to parse the database credentials on the lines with $user and $password and store them in variables. What is the best practice?
parse credentials from PHP configuration file
bash;shell script;text processing;regular expression;php
With GNU grep, you could do:

```bash
user=$(grep -oP "\\\$user.+?'\K[^']+" file)
pass=$(grep -oP "\\\$password.+?'\K[^']+" file)
```

The -P enables Perl Compatible Regular Expressions, which give us \K (ignore anything matched so far). The -o means only print the matching portion of the line. Then, we search for $var (we need three \: to avoid expanding the variable and to avoid the $ being taken as part of the regex), a single quote, and one or more non-' characters until the next '.

Alternatively, you could use awk:

```bash
user=$(awk -F"'" '/\$user/{print $2}' file)
pass=$(awk -F"'" '/\$password/{print $2}' file)
```

Here, we are setting the field delimiter to ', so the value of the variable will be the second field. The awk command prints the second field of matching lines.
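If the shell quoting ever gets unwieldy, the same extraction is straightforward in Python; a small sketch (the file name is assumed):

```python
import re

text = open("configuration.php").read()
# Capture whatever sits between the single quotes after $user / $password
user = re.search(r"\$user\s*=\s*'([^']*)'", text).group(1)
password = re.search(r"\$password\s*=\s*'([^']*)'", text).group(1)
print(user, password)
```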
_webmaster.16843
I've just now deleted many pages from my website (with a 410 status). They were product pages copied from the respective companies. Will my site's search engine rankings improve now? I expect this to take 4-6 weeks.
Deleted duplicate pages: will search engine rankings improve?
seo;search engines;duplicate content
Search engines don't rank websites; they rank individual pages. What made you decide to delete them? It's okay to have the same products as other websites. If you couldn't, then only one website would be found when doing product searches, and we know that isn't the case, even with them all using the same product description. (It's very common to use the manufacturer's product description and specs for a product's listing.)

Assuming the pages were duplicate content, they either were already filtered out by Google or devalued as duplicates, so they weren't helping you to begin with (i.e. links from those pages didn't count, PageRank wasn't passed, etc.). So removing them only makes Google's life easier but doesn't directly affect your other pages. The only benefit you may see is that by not linking to those pages anymore, you have fewer links to bad pages and thus more links to good pages, which makes those links a little bit stronger (they pass more PageRank, stronger link value). But in all reality that gain will barely be noticeable, if it is noticeable at all, unless it was a very large number of pages.

Another thing to consider: if those pages weren't being penalized by Google in any way, this may actually hurt you. If those pages weren't filtered out, reduced in value, etc., then you had more pages linking to your important pages (i.e. home page, main product pages, etc.), and now you have fewer, which hurts those pages. It would be only a small hit, but a hit nonetheless. Not to mention you have fewer pages in their index, and thus fewer opportunities to be found.
_webmaster.59371
I asked this question on https://tex.stackexchange.com/, but then I got a reply that it is off topic there and that the Webmasters Stack Exchange site would be the right place. This is a continuation of my previous question: https://tex.stackexchange.com/questions/164670/convert-tex-file-to-html-in-miktex?noredirect=1#comment378543_164670

As I have realized that LaTeX cannot be put directly in WordPress pages, I tried to convert it into HTML and use that in WordPress, and I am facing the following problem. I would like to write some mathematical notes on a WordPress site. I am expecting the output to be something like http://crazyproject.wordpress.com/2010/04/24/stabilizer-commutes-with-conjugation-2/, but instead, for the same content, I am getting output like this [screenshot omitted], and I do not really understand what the problem is. I think all the mathematical symbols in the Crazy Project link are actually images, whereas mine are just LaTeX code converted into HTML. I am waiting for some better idea to change my ugly-looking statements into something like that in the Crazy Project notes.

Another problem I am facing: I read that to write something in WordPress with LaTeX, I should just write $latex {code}$, but then it shows the error "formula does not parse" in red. What am I supposed to do with this kind of problem? Am I not allowed to just copy the LaTeX code I have written before and paste it into WordPress? If I type everything by hand, I do not get any problem, but if I copy the code, then I get the "formula does not parse" error.
Writing mathematical notes in WordPress
html;wordpress
null
_unix.6596
How can I create and extract zip archives from the command line?
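Where zip/unzip themselves aren't installed but Python is, the standard-library zipfile module ships a small command-line interface that can stand in (Python 3; archive and directory names are placeholders):

```python
# Create an archive:   python -m zipfile -c archive.zip file1 dir/
# Extract it:          python -m zipfile -e archive.zip target_dir/
# List its contents:   python -m zipfile -l archive.zip

# The same operations from a script:
import zipfile

with zipfile.ZipFile("archive.zip", "w") as zf:   # create
    zf.write("file1")

with zipfile.ZipFile("archive.zip") as zf:        # extract
    zf.extractall("target_dir")
```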
How do I zip/unzip on the unix command line?
command line;zip
null
_unix.147417
I am setting up a lot of 64-bit RedHat VMs (VMware vSphere) for our development department in my company. Most of them start with 4096MB of RAM, but Linux only seems to recognize 3832MB. Can anyone tell me why that is?
Why does Linux only see 3832MB of my 4096MB of RAM?
linux;memory;ram
null
_webmaster.13124
Of late recaptcha images have become excessively difficult to decipher. Is there any way to get analytics on unsuccessful attempts before a successful one for your website? I'm pretty certain the numbers are going to be bad and something needs to be done about it.
Recaptcha statistics
java;ruby on rails;captcha
null
_unix.82171
I am using an AM1808 ARM9-based microprocessor for my project, on Ubuntu 10.04 with the G++ compiler. I am using the sqlite3 database for data management, and my application needs multiple simultaneous accesses to the database. I found that I need to implement a connection pooling method to work efficiently. I googled a bit and found that the libzdb library is available for connection pooling, and it is open source. However, I don't know how to cross-compile this library for the ARM9 architecture. How can I do this?
Cross compile libzdb library for ARM9 architecture
debian;arm;cross compilation
null
_codereview.86028
This is the second part of a multi-part review. The first part, the server for this client, can be found here.

I've been building a simple C# server-client chat-style app as a test of my C#. I've picked up code from a few tutorials, and extended what's there to come up with my own spec.

In this second part, I'd like to get some feedback on my client. It feels leaner and more efficient than the server, but I don't doubt that there are plenty of problems in here.

Program.cs

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Runtime.InteropServices;
using System.Windows.Forms;

namespace MessengerClient
{
    class Program
    {
        private static Thread receiverThread;
        private static bool FirstRun = true;

        static void Main(string[] args)
        {
            if (FirstRun)
            {
                Console.BackgroundColor = ConsoleColor.White;
                Console.ForegroundColor = ConsoleColor.Black;
                Console.Clear();
                Application.ApplicationExit += new EventHandler(QuitClient);
                FirstRun = false;
            }
            if (args.Length == 1 && args[0] == "--debug")
            {
                Console.WriteLine("<DEBUG> Setting debug mode ON...");
                Output.DebugMode = true;
            }
            Console.WriteLine("Enter the IP to connect to, including the port:");
            string address = Console.ReadLine();
            try
            {
                string[] parts = address.Split(':');
                receiverThread = new Thread(new ParameterizedThreadStart(Receiver.Start));
                receiverThread.Start(address);
                Client.Start(parts[0], Int32.Parse(parts[1]));
            }
            catch (Exception e)
            {
                Console.Clear();
                Output.Message(ConsoleColor.DarkRed, "Could not connect: " + e.Message);
                Main(new string[1]);
            }
        }

        private static void QuitClient(object sender, EventArgs e)
        {
            Client.Disconnect();
            while (!Commands.ExitHandlingFinished)
            {
                Thread.Sleep(100);
            }
        }
    }
}
```

Client.cs

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Net;
using System.Net.Sockets;
using System.Threading;

namespace MessengerClient
{
    class Client
    {
        private static ASCIIEncoding encoder = new ASCIIEncoding();
        private static int clientId = 0;

        public static int GetClientId()
        {
            return clientId;
        }

        public static TcpClient client = new TcpClient();
        private static IPEndPoint serverEndPoint;

        public static void Start(string ip, int port)
        {
            serverEndPoint = new IPEndPoint(IPAddress.Parse(ip), port);
            try
            {
                client.Connect(serverEndPoint);
            }
            catch (Exception e)
            {
                throw new Exception("No connection was made: " + e.Message);
            }
            while (true)
            {
                Output.Write(ConsoleColor.DarkBlue, "Me: ");
                Console.ForegroundColor = ConsoleColor.DarkBlue;
                string message = Console.ReadLine();
                Console.ForegroundColor = ConsoleColor.Black;
                if (Commands.IsCommand(message))
                {
                    Commands.HandleCommand(client, message);
                    continue;
                }
                SendMessage(message);
            }
        }

        public static void SendMessage(string message)
        {
            NetworkStream clientStream = client.GetStream();
            byte[] buffer;
            if (message.StartsWith("[Disconnect]") || message.StartsWith("[Command]"))
            {
                buffer = encoder.GetBytes(message);
            }
            else
            {
                buffer = encoder.GetBytes("[Send]" + message);
            }
            clientStream.Write(buffer, 0, buffer.Length);
            clientStream.Flush();
        }

        public static void HandleResponse(ResponseCode code)
        {
            switch (code)
            {
                case ResponseCode.Success:
                    return;
                case ResponseCode.ServerError:
                    Output.Message(ConsoleColor.DarkRed, "The server could not process your message. (100)");
                    break;
                case ResponseCode.NoDateFound:
                    Output.Message(ConsoleColor.DarkRed, "Could not retrieve messages from the server. (200)");
                    break;
                case ResponseCode.BadDateFormat:
                    Output.Message(ConsoleColor.DarkRed, "Could not retrieve messages from the server. (201)");
                    break;
                case ResponseCode.NoMessageFound:
                    Output.Message(ConsoleColor.DarkRed, "The server could not process your message. (300)");
                    break;
                case ResponseCode.NoHandlingProtocol:
                    Output.Message(ConsoleColor.DarkRed, "The server could not process your message. (400)");
                    break;
                case ResponseCode.NoCode:
                    Output.Message(ConsoleColor.DarkRed, "Could not process the server's response. (NoCode)");
                    break;
                default:
                    return;
            }
        }

        public static void ParseClientId(string id)
        {
            clientId = Int32.Parse(id);
        }

        public static void Disconnect()
        {
            SendMessage("[Disconnect]");
            Commands.EndRcvThread = true;
            Output.Debug("Requested receive thread termination.");
            Output.Message(ConsoleColor.DarkGreen, "Shutting down...");
        }
    }
}
```

Receiver.cs

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Net;
using System.Net.Sockets;
using System.Threading;

namespace MessengerClient
{
    class Receiver
    {
        private static TcpClient client = new TcpClient();
        private static IPEndPoint serverEndPoint;

        public static void Start(object address)
        {
            string[] parts = ((string)address).Split(':');
            try
            {
                serverEndPoint = new IPEndPoint(IPAddress.Parse(parts[0]), Int32.Parse(parts[1]));
            }
            catch (Exception e)
            {
                Output.Message(ConsoleColor.DarkRed, "Could not connect: " + e.Message);
                return;
            }
            try
            {
                client.Connect(serverEndPoint);
                client.ReceiveTimeout = 500;
            }
            catch (Exception e)
            {
                Output.Message(ConsoleColor.DarkRed, "Could not connect: " + e.Message);
                return;
            }
            NetworkStream stream = client.GetStream();
            string data = "";
            byte[] received = new byte[4096];
            while (true)
            {
                if (Commands.EndRcvThread)
                {
                    Output.Debug("Ending receiver thread");
                    client.Close();
                    Output.Debug("Cleaned up receive client");
                    Commands.RcvThreadEnded = true;
                    Commands.HandleResponse("[DisconnectAcknowledge]");
                    Output.Debug("Notified Commands handler of thread abortion");
                    Thread.CurrentThread.Abort();
                    return;
                }
                data = "";
                received = new byte[4096];
                int bytesRead = 0;
                try
                {
                    bytesRead = stream.Read(received, 0, 4096);
                }
                catch (Exception e)
                {
                    continue;
                }
                if (bytesRead == 0)
                {
                    break;
                }
                int endIndex = received.Length - 1;
                while (endIndex >= 0 && received[endIndex] == 0)
                {
                    endIndex--;
                }
                byte[] finalMessage = new byte[endIndex + 1];
                Array.Copy(received, 0, finalMessage, 0, endIndex + 1);
                data = Encoding.ASCII.GetString(finalMessage);
                Output.Debug("Server message: " + data);
                try
                {
                    ProcessMessage(data);
                }
                catch (Exception e)
                {
                    Output.Message(ConsoleColor.DarkRed, "Could not process the server's response (" + data + "): " + e.Message);
                }
            }
        }

        public static void ProcessMessage(string response)
        {
            Output.Debug("Processing message: " + response);
            response = response.Trim();
            if (response.StartsWith("[Message]"))
            {
                Output.Debug("Starts with [Message], trying to find ID");
                response = response.Substring(9);
                int openIndex = response.IndexOf("<");
                int closeIndex = response.IndexOf(">");
                if (openIndex < 0 || closeIndex < 0 || closeIndex < openIndex)
                {
                    Output.Debug("No ID tag? ( <ID-#-HERE> )");
                    throw new FormatException("Could not find ID tag in message");
                }
                int diff = closeIndex - openIndex;
                int id = Int32.Parse(response.Substring(openIndex + 1, diff - 1));
                if (id != Client.GetClientId())
                {
                    string message = response.Substring(closeIndex + 1);
                    Console.WriteLine();
                    Output.Message(ConsoleColor.DarkYellow, "<Stranger> " + message);
                    Output.Write(ConsoleColor.DarkBlue, "Me: ");
                }
                else
                {
                    Output.Debug("ID is client ID, not displaying.");
                }
            }
            else if (response == "[DisconnectAcknowledge]" || response == "[CommandInvalid]")
            {
                Output.Debug("Sending response to Commands handler: " + response);
                Commands.HandleResponse(response);
            }
            else if (response.Length == 5 && response.StartsWith("[") && response.EndsWith("]"))
            {
                Client.HandleResponse(ResponseCodes.GetResponse(response));
            }
            else
            {
                Output.Debug("Figuring out what to do with server message: " + response);
                try
                {
                    Int32.Parse(response);
                    Output.Debug("Int32.Parse has not failed, assume client ID sent.");
                    Client.ParseClientId(response);
                    return;
                }
                catch (Exception e)
                {
                    Output.Debug("Could not process client ID: " + e.Message);
                }
                Output.Debug("Could not identify what to do with message.");
                Output.Message(ConsoleColor.DarkCyan, "<Server> " + response);
            }
        }
    }
}
```

ResponseCodes.cs

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MessengerClient
{
    public enum ResponseCode
    {
        Success,
        ServerError,
        NoDateFound,
        BadDateFormat,
        NoMessageFound,
        NoHandlingProtocol,
        NoCode,
        NoResponse
    }

    class ResponseCodes
    {
        public static Dictionary<string, ResponseCode> CodeStrings = new Dictionary<string, ResponseCode>
        {
            {"[600]", ResponseCode.Success},
            {"[100]", ResponseCode.ServerError},
            {"[200]", ResponseCode.NoDateFound},
            {"[201]", ResponseCode.BadDateFormat},
            {"[300]", ResponseCode.NoMessageFound},
            {"[400]", ResponseCode.NoHandlingProtocol},
        };

        public static ResponseCode GetResponse(string code)
        {
            if (CodeStrings.ContainsKey(code))
            {
                return CodeStrings[code];
            }
            else
            {
                return ResponseCode.NoCode;
            }
        }
    }
}
```

Commands.cs

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Net.Sockets;

namespace MessengerClient
{
    class Commands
    {
        public static volatile bool EndRcvThread = false;
        public static volatile bool RcvThreadEnded = false;
        public static bool ExitHandlingFinished = false;

        public static bool IsCommand(string command)
        {
            if (command.StartsWith("/"))
            {
                return true;
            }
            else
            {
                return false;
            }
        }

        public static void HandleCommand(TcpClient client, string command)
        {
            string[] args = command.Split(' ');
            switch (args[0].ToLower())
            {
                case "/server":
                    if (args.Length >= 2)
                    {
                        int startIndex = args[0].Length;
                        string commandArgs = command.Substring(startIndex + 1);
                        Client.SendMessage("[Command]" + commandArgs);
                    }
                    else
                    {
                        Output.Message(ConsoleColor.DarkRed, "Not enough arguments");
                        return;
                    }
                    break;
                case "/exit":
                    Client.Disconnect();
                    break;
                default:
                    Output.Message(ConsoleColor.DarkRed, "Unknown command.");
                    return;
            }
        }

        public static void HandleResponse(string response)
        {
            // Command was sent; server did not recognise
            if (response == "[CommandInvalid]")
            {
                Output.Message(ConsoleColor.DarkRed, "The command was not recognised by the server.");
                return;
            }
            // Disconnect was sent; server acknowledges
            if (response == "[DisconnectAcknowledge]")
            {
                EndRcvThread = true;
                Output.Debug("Waiting for thread termination");
                while (!RcvThreadEnded)
                {
                    Thread.Sleep(100);
                }
                Output.Debug("Thread terminated, cleaning send client");
                Client.SendMessage("");
                Client.client.Close();
                Output.Debug("Cleaned up send client");
                if (Output.DebugMode)
                {
                    Console.WriteLine();
                    Output.Debug("Press any key to exit");
                    Console.ReadKey();
                }
                Environment.Exit(0);
            }
            // Fallback for neither case: pass it off to the client
            ResponseCode code = ResponseCodes.GetResponse(response);
            Client.HandleResponse(code);
        }
    }
}
```

The final class, Output.cs, is the same class as in the last post, and I'm still happy with it, so am not putting it up for review. Please also note, I do have XML documentation comments in the code, but to save characters I have excluded them here.
C# Chat - Part 2: Client
c#;client
```csharp
private static bool FirstRun = true;
```

private fields are lowerCamelCase or _lowerCamelCase, the latter receiving much preference.

```csharp
if (args.Length == 1 && args[0] == "--debug")
```

This is fine if you only have one argument, but as soon as you want multiple you'll have issues: people expect arguments to be swappable, so you might want to look into making this more generic if you go in that direction. I would also use args.Any() to make it more expressive.

```csharp
Output.DebugMode = true;
```

I don't like the supposedly singleton instance of Output. You might just as well create a normal instance and pass it along to your client, no? Perhaps some dependency injection?

I would parse the Uri before you pass it to the client. A trick to do it with IP + port could be this:

```csharp
Uri uri = new Uri("http://" + "192.168.11.11:8080");
Console.WriteLine(uri.Host);
Console.WriteLine(uri.Port);
```

Keep it in a try-catch though, because it will throw an exception if it's badly formatted. This also solves the problem that you might have in case no port is specified (ArrayIndexOutOfRangeException) or that the IP can't be parsed into an IPEndPoint (FormatException). On top of that, it also keeps the responsibility of validation inside your main() block instead of passing exceptions through threads and all that stuff.

```csharp
catch (Exception e)
    Console.Clear();
```

Don't clear my console! I use that to retrace my steps and perhaps contact support.

```csharp
Main(new string[1]);
```

What's the point of setting its length to 1? I do like the approach you used here to re-call the Main method.

```csharp
Client.Disconnect();
while (!Commands.ExitHandlingFinished)
{
    Thread.Sleep(100);
}
```

This sort of polling should have a timeout in case something isn't going as expected. An indication to the user that the program is quitting is advised as well.

I'd advise you to group your members by their type so you know exactly where you can find something: group private fields, private static fields, public static fields, methods, etc.

```csharp
public static TcpClient client = new TcpClient();
```

We don't do public fields in C#. This should be a property (why does the outside world even need to know about this inner detail?).

Too much static. This is very hard to test and limits scalability.

```csharp
catch (Exception e)
{
    throw new Exception("No connection was made: " + e.Message);
}
```

Pass in the original exception as the new inner exception.

The only way to interrupt your chat program is by exiting the application. That isn't very nice -- I might want to keep the program open! Perhaps provide interruptability?

```csharp
if (message.StartsWith("[Disconnect]") || message.StartsWith("[Command]"))
```

This is a simple approach and its purposes are clear, but I would consider a custom object that holds a property Message and something like MessageKind, which could be an enum of Message and ConnectionStatus, that sort of stuff. It allows you to add other variations more easily and doesn't restrict you to an exact string to work with.

```csharp
public static void ParseClientId(string id)
{
    clientId = Int32.Parse(id);
}
```

Seems a little pointless -- you're even adding characters. I would also just use int instead of Int32 to retain conformity with the rest of the code.

```csharp
Thread.CurrentThread.Abort();
```

You shouldn't have to abort the thread, since that is considered unreliable. Just using return; should do the trick.

```csharp
int endIndex = received.Length - 1;
while (endIndex >= 0 && received[endIndex] == 0)
{
    endIndex--;
}
```

This is a curious piece of code to me. Maybe I'm misinterpreting it, so perhaps you can clarify: are you expecting 0 values being sent? Why would it do this? You should probably add a comment to specify what you're doing (e.g.: // trimming useless data).

```csharp
response = response.Substring(9);
```

Make clear why you're using 9. The above debug statement isn't adequate documentation (it's not explicitly linked to the line of code, so people might remove it). It also strikes me more as a comment than debug output, really.

```csharp
int openIndex = response.IndexOf("<");
```

This is the first I see of these fishbrackets. What are they used for? Comment it!

```csharp
Output.Debug("ID is client ID, not displaying.");
```

Should a client be able to talk to himself? If yes: you're not doing that. If no: this indicates something is fundamentally wrong! Throw an exception and let the user know -- don't just hide it in the logs.

```csharp
Int32.Parse(response);
Output.Debug("Int32.Parse has not failed, assume client ID sent.");
Client.ParseClientId(response);
```

Double work for no reason. Consider using int.TryParse() instead.

```csharp
EndRcvThread
RcvThreadEnded
```

Only a very select few abbreviations are recommended (db, app, etc). These aren't amongst them. Again note that these are fields and not properties!

```csharp
if (command.StartsWith("/"))
{
    return true;
}
else
{
    return false;
}
```

Also known as

```csharp
return command.StartsWith("/");
```

```csharp
args[0].ToLower()
```

String comparison should never be done like this, for two main reasons:

- Performance impact -- you create a new string. What if that first string barely fitted in your memory?
- It's not a correct comparison. This comment makes it clear, but I suggest reading the entire post as well.

Overall the code can be followed pretty well. Two things I would definitely look into if I were you: threading and static-ness. I'm not versed enough in threading to give a meaningful review, but certain static fields and thread handling raised some eyebrows. The static-ness of your code is something you really should address, though: it's very hard to test and your classes are very tightly coupled. I'd rather see instances being passed around where needed.

While on the note of testing: all your external dependencies are hardcoded in it -- look into dependency injection if you want to start unit-testing some things!
_unix.251090
An existing directory is needed as a mount point.

```
$ ls
$ sudo mount /dev/sdb2 ./datadisk
mount: mount point ./datadisk does not exist
$ mkdir datadisk
$ sudo mount /dev/sdb2 ./datadisk
$
```

I find it confusing, since it overlays the existing contents of the directory. There are two possible contents of the mount point directory, which may get switched unexpectedly (for a user who is not performing the mount).

Why doesn't the mount happen into a newly created directory? That is how graphical operating systems display removable media: it would be clear whether the directory is mounted (exists) or not mounted (does not exist). I am pretty sure there is a good reason, but I haven't been able to discover it yet.
Why does mount happen over an existing directory?
mount
This is a case of an implementation detail that has leaked.

In a UNIX system, every directory consists of a list of names mapped to inode numbers. An inode holds metadata which tells the system whether it is a file, directory, special device, named pipe, etc. If it is a file or directory, it also tells the system where to find the file or directory contents on disk. Most inodes are files or directories. The -i option to ls will list inode numbers.

Mounting a filesystem takes a directory inode and sets a flag on the kernel's in-memory copy to say "actually, when looking for the contents of this directory, look at this other filesystem instead" (see slide 10 of this presentation). This is relatively easy, as it's changing a single data item.

Why doesn't it create a directory entry for you pointing at the new inode instead? There are two ways you could implement that, both of which have disadvantages. One is to physically write a new directory into the filesystem; but that fails if the filesystem is read-only! The other is to add to every directory listing process a list of extra things that aren't really there. This is fiddly and potentially incurs a small performance hit on every file operation.

If you want dynamically created mount points, the automount system can do this. Special non-disk filesystems can also create directories at will, e.g. proc, sys, devfs and so on.

Edit: see also the answer to What happens when you 'mount over' an existing folder with contents?
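The inode switch is easy to observe from user space; a minimal sketch (the mount point path is hypothetical, and the mount command is run between the two calls):

```python
import os

st = os.stat("/mnt/datadisk")
print(st.st_dev, st.st_ino)   # device and inode of the plain directory

# ... sudo mount /dev/sdb2 /mnt/datadisk ...

st = os.stat("/mnt/datadisk")
print(st.st_dev, st.st_ino)   # now the mounted filesystem's root inode,
                              # on a different device
```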
_webapps.45256
GitHub's free repositories are for open source software. Is there a canonical way in the GitHub user interface to identify what OSS license a project is under?
Determine repo LICENSE canonically
github
null
_cogsci.8613
Despite being a night owl, I find that when I'm up and about at sunrise¹ I feel mentally much better than at any other time of day, regardless of how much sleep I've had or whether I've been awake for many hours prior to sunrise. Indeed, I'd almost imagine that getting up for dawn and then going back to bed would be beneficial in and of itself.

Why is sunrise therapeutic? And are there reasons above and beyond the circadian rhythm²?

1. Strictly speaking, nautical twilight.
2. Since the psychological effect of sunrise can apply even when jet-lagged, or when one is so frequently a night owl that prior reinforcement of the experience is unlikely.
Why is sunrise therapeutic?
perception;mood;arousal
null
_codereview.37221
I received this question at a C++ programming interview:

void memcpy(char* dst, const char* src, int numBytes)
{
    int* wordDist = (int*)dst;
    int* wordSrc = (int*)src;
    int numWords = numBytes >> 2;
    for (int i = 0; i < numWords; i++)
    {
        *wordDist++ = *wordSrc++;
    }
    int whatisleft = numBytes >> 2 - (numWords );
    dst = (char*)wordDist;
    src = (char*)wordSrc;
    for (int i = 0 ; i <= whatisleft; i++);
    {
        *dst++ = *src++;
    }
}

What are the portability issues? At least one syntax error? Algorithm considerations?

Now I found that this line:

int whatisleft = numBytes >> 2 - (numWords );

is wrong, as it doesn't calculate the correct number of bytes left to copy. The correct one should be:

int numBytesInWord = (numWords << 2 );
int whatisleft = numBytes - numBytesInWord;

At the end, the copied char array is several characters shorter than the source.
Are there any problems with this memcpy() implementation?
c++;algorithm;interview questions
There are a number of issues with this code:

int* wordDist = (int*)dst;
int* wordSrc = (int*)src;

Even if this works on a given platform, it is definitely not portable. int* can often have stricter alignment requirements than char* - this is technically undefined behaviour. Further, (int*)src is casting away constness - always dangerous if you then modify the value somewhere in the rest of the function.

int numWords = numBytes >> 2;

This assumes that an int is 32 bit, which is again non-portable, and will of course be wrong on any platform where this isn't the case. This could be fixed by using:

int numWords = numBytes / sizeof(int);

However, the alignment issues mentioned previously still remain.

This code:

int whatisleft = numBytes >> 2 - (numWords);

is completely wrong, as thanks to operator precedence it is equivalent to numBytes >> (2 - numWords), which is likely to be negative, causing undefined behaviour. Also, one has to be careful regardless, and keep in mind the differences between an arithmetic right shift and a logical right shift. Again, technically, right shifts for signed values are implementation defined, and could potentially be either.
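Putting those fixes together, a more portable version might look something like this (only a sketch: it falls back to byte copies when the buffers aren't int-aligned, and it still glosses over strict-aliasing subtleties that a production memcpy would handle):

#include <cstddef>
#include <cstdint>

void my_memcpy(char* dst, const char* src, std::size_t numBytes)
{
    // Word-sized copies are only attempted when both pointers are aligned for int.
    if (reinterpret_cast<std::uintptr_t>(dst) % alignof(int) == 0 &&
        reinterpret_cast<std::uintptr_t>(src) % alignof(int) == 0)
    {
        int* wordDst = reinterpret_cast<int*>(dst);
        const int* wordSrc = reinterpret_cast<const int*>(src); // constness preserved
        const std::size_t numWords = numBytes / sizeof(int);    // no magic '>> 2'
        for (std::size_t i = 0; i < numWords; ++i)
            *wordDst++ = *wordSrc++;
        dst = reinterpret_cast<char*>(wordDst);
        src = reinterpret_cast<const char*>(wordSrc);
        numBytes -= numWords * sizeof(int); // the real "what is left"
    }
    // Byte-by-byte tail, or the whole copy if the buffers were misaligned.
    while (numBytes-- > 0)
        *dst++ = *src++;
}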
_webapps.85335
Imagine I would like to join a channel that is about food. I can't find a way to search for it and find one. Of course the channel should be public, but where/how should I search to find channels and bots on Telegram?

I tried Googling with site:telegram.me to limit the results to that site, but with no result.
How to browse/find telegram channels and bots?
telegram
You can try to find Telegram channels and bots using this catalog: http://botfamily.com/
_codereview.5977
This is a function which produces the sum of primes beneath (not including) a certain number (using a variant on the sieve of Eratosthenes):

erastosum = lambda x: sum([i for i in xrange(2, x) if i == 2 or i == 3 or reduce(lambda y,z: y*z, [i%j for j in xrange(2,int(i ** 0.5) + 1)])])

Excessive use of lambda? Perhaps. Beautification would be nice, but I'm looking for performance optimizations. Sadly, I'm not sure if there is any further way to optimize the setup I've got right now, so any suggestions (on how to, or what else to do) would be nice.
Excessive use of lambda with variant of Sieve of Eratosthenes?
python;optimization;lambda
As much as I love anonymous functions, they can be a nightmare to debug. Splitting this code up piecewise (into an actual function or otherwise) shouldn't and wouldn't decrease its performance, while improving maintenance and portability for you later on.

This class is quite efficient for determining primes. Despite being quite lengthy, it is more efficient than the more usual approach.
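As a sketch of that split (the same trial-division logic as the original one-liner, just named so each piece can be tested and profiled on its own; xrange kept to match the original's Python 2):

def is_prime(n):
    # Trial division up to sqrt(n); 2 and 3 short-circuit like the original.
    if n in (2, 3):
        return True
    return all(n % d for d in xrange(2, int(n ** 0.5) + 1))

def erastosum(x):
    # Sum of all primes strictly below x.
    return sum(i for i in xrange(2, x) if is_prime(i))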
_cs.7039
The following exercise is difficult for me:

Show that for each $k \in \mathbb{N}$ the question of the existence of a $k$-clique within a graph lies in $\text{L}$.

Hint: A $k$-clique denotes $k$ vertices within a graph that are all connected with each other.

Annotation: if the question also considers $k$ as a parameter, then the problem is $\text{NP}$-complete.

So $\text{L}$ is the complexity class containing decision problems which can be solved by a deterministic Turing machine using a logarithmic amount of memory space. Now I'm wondering if this exercise isn't a trick question, since I've found this paper and it says "We give an algorithm for $k$-clique that runs in (...) $O(n^{\varepsilon})$ space, for all $\varepsilon > 0$, on graphs with $n$ nodes"?

Furthermore, I also don't understand the annotation, because how should it be possible to not consider $k$ for the $k$-clique problem?
Show that k-clique lies in L
complexity theory;space complexity
null
_reverseengineering.11468
Can anyone explain why these garbage values are present on the stack? Below is the disassembly view of the binary. Let me know in case you need the corresponding C source code.

Basically, I'm not able to understand why the OS allocates a stack frame whose size is bigger than what the function needs (the total size of the variables inside the function).
Garbage value after Stack frame allocation?
ida;assembly;binary analysis;callstack;stack variables
null
_softwareengineering.296388
I've been reading up on and trying out various workflows with Wordpress/git. There are a number of options available and I've got a few workflows roughly modelled, but no matter which one I choose there's always someone or something that'll lose out.

Workflow adopters
- Developers (competent, used to git)
- Designers (not used to git, but competent)
- CMS users (less competent)

Setup
This part isn't really subject to change unless a workflow is really, really worth it.
- Production server pulls from prod/master branch
- In-house development server pulls latest from develop branch
- Unit tests are available that should be run before local push to server
- When develop receives a push and there's a new tag, automated tests run on the Dev server. If they pass, merge with production

Option A
- All files stored in git
- Database schema & content stored in git
- use of local-config.php files for easy local deployment
- disallow access to admin panel for live site on production and development server

Pros
- all work done locally
- easy to redeploy
- everything is controlled by git for traceability
- known working version (no auto updates to mess things up potentially)

Cons
- minor edits to posts/pages have to be done via source control each time
- non-devs may end up using this system more
- merging a raw SQL file is problematic

Option B
- All files stored in git
- Database schema & content stored in git
- use of local-config.php files for easy local deployment
- disallow access to admin panel for live site on production only

Pros
- mostly the same as Option A, but changes to posts/content can now be done via the Dev server
- easier for less technical folks to understand
- changes to Dev site db checked in by the server automatically every so often

Cons
- losing traceability with commits, as it'll just be a server user that committed a page content change
- opening up potential to have people working on the Dev server and skipping local
- pain to manage

Option C
- only templates/custom plugins stored in git
- db backed up by other method (probably manually)
- posts, pages, plugin installs, updates can be done live on the prod site (yuck)

Pros
- most user friendly

Cons
- least Dev friendly; not quick and easy to redeploy
- no idea if an update to core breaks a test
- no idea if an update to a plugin breaks stuff
- users editing pages on the fly, live

Is there any good happy medium or better way of working that I'm totally missing, or is it a suck-it-and-see job where we have to implement it, find the issues folks are having and fix those?
Suitable workflow for multiple people with multiple skill levels
workflows
null
_cstheory.31702
I am interested in understanding the structure of the class of graphs $G$ such that there is no vertex-induced subgraph on four vertices that is a perfect matching. Stated differently: for any four vertices $a,b,c,d$ in $G$, if $ab$ and $cd$ are edges, then the graph should have at least one more edge on the four vertices. Has this class been studied previously? Any references or insights would be appreciated. We understand this class when restricted to bipartite graphs, but the general case seems more tricky.
Structure of graphs that exclude a perfect matching on four vertices as an induced subgraph
graph theory
Yes; they are known as $2K_2$-free graphs. Some additional references:

- On Toughness and Hamiltonicity of $2K_2$-Free Graphs
- The maximum number of edges in $2K_2$-free graphs of bounded degree
- Two characterisations of minimal triangulations of $2K_2$-free graphs
_webapps.19967
Some highly ranked YouTube channels seem to do nothing but reupload others' videos and slap advertisements on them. I am really, really tired of giving those channels views, but it's sometimes hard not to. I'm looking for a browser plugin of sorts to help me with this.

For example, say you want to view the TF2 Meet the Spy video; the official video isn't even on the search results page, while the top hit comes from the Machinima channel. One has ads, the other doesn't; guess which?

For bonus points, in addition to removing videos from known bad channels, it would be nice if I got a warning before loading a video of theirs through a direct link. For example, today I was linked to Gamespot's reupload (full of the lamest advertisements ever seen in the history of the world) of the official video announcing Toki Tori 2. A warning would've been nice.

Is there anything like this out there? I'm happy to view ads to support content creators, and I refrain from using ad blockers whenever possible, but I don't want to give sharks my impressions.
How can I hide channels from Youtube?
youtube;ads
null
_cogsci.13558
What are the commonly observed biases in surveys, and are there any statistical or non-statistical methods to avoid them?

I am conducting a survey on automobile seat comfort. I think my current methodology may have the following issues:

- Subjects may give an average rating when they are unsure
- Subjective ratings may be anchored on the first vehicle a subject sits in
Biases in subjective survey
bias;survey
null
_unix.263920
I'm wondering if you can change the default groups for new users without having to specify all of them manually. I'm trying to harden my Arch Linux system, and there's a group (provided by Grsecurity) that denies its members the use of any socket and therefore prevents them from using any backdoor.

I've seen some threads saying to use EXTRA_GROUPS and ADD_EXTRA_GROUP in /etc/default/useradd, and some others saying it's not possible because the defaults are hard-coded in the useradd binary. (I think it's really OS-dependent.)

(The goal is that if somehow someone used arbitrary remote code execution to create a new user, he'd not be able to do anything with it, because by default the user will be in the said group. I know that if he has permission to create a user he has permission to change its groups, but I guess it still adds one extra layer of security.)

Thanks in advance.
Change default group for any new user
users;useradd
null
_unix.168083
I'm trying to use xmllint to parse the names of Solr cores from the configuration file. This works:

xmllint --xpath "/solr/cores/core/@name" solr.xml | grep -Po '(?<=name=")[a-z-]+(?=")'

And it returns something like:

core-a
core-b
...

Which is exactly what I want, but I would like to eliminate grep. It seems like one should be able to do this with just XPath. Without grep, the output looks like:

name="core-a" name="core-b" ...

Wrapping the expression in string() reduces the multiple output to just the first (i.e. core-a and that's it), which is not useful. How can I apply the string function to each result?
How do I get a list of the values of matching attributes using xmllint and xpath?
shell;xml;xmllint
AFAIK, xmllint is rather limited. But you can use xmlstarlet with its sel command for what you want to do. See xmlstarlet sel --help for usage and an example. With your example, it would be:

xmlstarlet sel -T -t -m /solr/cores/core/@name -v . -n solr.xml
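If you have to stick with xmllint, one workaround is to apply string() to one match at a time via an indexed XPath expression (a sketch; it assumes an xmllint new enough to support --xpath, and it stops at the first core that has no name attribute):

i=1
while name=$(xmllint --xpath "string(/solr/cores/core[$i]/@name)" solr.xml) \
      && [ -n "$name" ]; do
    echo "$name"
    i=$((i + 1))
done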
_cs.35371
I'm having trouble intuitively understanding why PSPACE is generally believed to be different from EXPTIME. If PSPACE is the set of problems solvable in space polynomial in the input size $f(n)$, then how can there be a class of problems that experience a greater, exponential time blowup while not making use of exponential space?

Yuval Filmus' answer is already extremely helpful. However, could anyone sketch me a loose argument for why it might be the case that PSPACE = EXPTIME (i.e. that PSPACE is not a proper subset of EXPTIME)? Won't we need exponential space in order to beat the upper bound on the total number of system configurations achievable with space that scales polynomially with the input size? Just to say, I can understand why EXPTIME vs. EXPSPACE is an open matter, but I lack understanding regarding the relationship between PSPACE and EXPTIME.
Why do we believe that PSPACE ≠ EXPTIME?
complexity theory;complexity classes;intuition
Let's refresh the definitions.

PSPACE is the class of problems that can be solved on a deterministic Turing machine with polynomial space bounds: that is, for each such problem, there is a machine that decides the problem using at most $p(n)$ tape cells when its input has length $n$, for some polynomial $p$.

EXP is the class of problems that can be solved on a deterministic Turing machine with exponential time bounds: for each such problem, there is a machine that decides the problem using at most $2^{p(n)}$ steps when its input has length $n$, for some polynomial $p$.

First, we should say that these two classes might be equal. They seem more likely to be different but classes sometimes turn out to be the same: for example, in 2004, Reingold proved that symmetric logspace is the same as ordinary logspace; in 1987, Immerman and Szelepcsényi independently proved that NL$\;=\;$co-NL (and, in fact, that NSPACE[$f(n)$]$\;=\;$co-NSPACE[$f(n)$] for any $f(n)\geq \log n$).

But, at the moment, most people believe that PSPACE and EXP are different. Why?

Let's look at what we can do in the two complexity classes. Consider a problem in PSPACE. We're allowed to use $p(n)$ tape cells to solve an input of length $n$, but it's hard to compare that against EXP, which is specified by a time bound.

How much time can we use for a PSPACE problem? If we only write to $p(n)$ tape cells, there are $2^{p(n)}$ different strings that could appear on the tape, assuming a binary alphabet. The tape head could be in any of $p(n)$ different places and the Turing machine could be in one of $k$ different states. So the total number of configurations is $T(n) = k\,p(n)\,2^{p(n)}\!$. By the pigeonhole principle, if we run for $T(n)+1$ steps, we must visit a configuration twice but, since the machine is deterministic, that means it will loop around and visit that same configuration infinitely often, i.e., it won't halt. Since part of the definition of being in PSPACE is that you have to decide the problem, any machine that doesn't terminate doesn't solve a PSPACE problem. In other words, PSPACE is the class of problems that are decidable using at most $p(n)$ space and at most $k\,p(n)\,2^{p(n)}$ time, which is at most $2^{q(n)}$ for some polynomial $q$. So we've shown that PSPACE$\;\subseteq\;$EXP.

And how much space can we use for an EXP problem? Well, we're allowed $2^{p(n)}$ steps and the head of a Turing machine can only move one position at each step. Since the head can't move more than $2^{p(n)}$ positions, we can only use that many tape cells.

That's what the difference is: although both PSPACE and EXP are classes of problems that can be solved in exponential time, PSPACE is restricted to polynomial space use, whereas EXP can use exponential space. That already suggests that EXP ought to be more powerful. For example, suppose you're trying to solve a problem about graphs. In PSPACE, you can look at every subset of the vertices (it only takes $n$ bits to write down a subset). You can use some working space to compute on each subset but, once you've finished working on a subset, you must erase that working space and re-use it for the next subset. In EXP, on the other hand, you can not only look at every subset but you don't need to reuse your working space, so you can remember what you learnt about each one individually. That seems like it should be more powerful.

Another intuition for why they should be different is that the time and space hierarchy theorems tell us that allowing even a tiny bit more space or time strictly increases what you can compute. The hierarchy theorems only let you compare like with like (e.g., they show that PSPACE$\;\subsetneq\;$EXPSPACE and P$\;\subsetneq\;$EXP) so they don't directly apply to PSPACE vs EXP, but they do give us a strong intuition that more resource means that more problems become solvable.
_unix.292710
In SquidGuard within pfSense 2.3.1, in the Groups ACL screen, there are two columns in the Target Rules List: "Target Categories" and "Target Categories for off-time". Each entry in a row can take the values "allow", "deny", "whitelist" and "---".

Why are there two columns, and what do they mean?
In pfSense what is the meaning of the Target Categories and Target Categories for off-time columns in the Groups ACL Screen?
acl;squid;pfsense
Figured it out. After looking at the code that is generated and referencing the squidGuard website for some examples, it became clear to me that the "Target Categories" column contains the blacklist/whitelist rules that are applied when the ACL is within the specified time period, and the "Target Categories for off-time" column contains the blacklist/whitelist rules that are applied when the ACL is outside the time period specified.

Target Rules Syntax

Copying the Target Rules text tells all (provided you've already saved it; it isn't updated automatically when changing the values...). It usually looks like this:

<black-lists applied inside time frame> all|deny [ <black-lists applied outside time frame> all|deny ]

The syntax works like this. Anything outside of the brackets is what is applied inside the time frame:

<black-lists applied inside time frame>

Anything inside the brackets is what is applied outside the time frame:

<black-lists applied outside time frame>

The all or deny at the end states that, after the rest of the lists have been run through without a hit (left to right), do you want to allow all the other sites to be accessed, or do you want all the other sites to be denied?

Prefixes (applied to all specified blacklists):

! = deny
(no prefix) = allow
^ = whitelist

Example

Now I imagine that I'm overcomplicating this a bit (there must be a less verbose syntax), and that if I learned more about the allow as opposed to whitelist syntax there would be some way to use the defaults, but I haven't looked into that yet, so here is what I understand.

Suppose that outside of the time range you want the following blacklists to be in effect and any other sites to be free game:

blk_BL_adv blk_BL_aggressive blk_BL_dating blk_BL_drugs blk_BL_gamble blk_BL_hacking blk_BL_movies blk_BL_news blk_BL_politics blk_BL_porn blk_BL_radiotv blk_BL_socialnet blk_BL_spyware blk_BL_warez

...and you want anything else to be accessible... then you'd put all at the end. To see this in action, you would have everything between the brackets:

[ !blk_BL_adv !blk_BL_aggressive !blk_BL_dating !blk_BL_drugs !blk_BL_gamble !blk_BL_hacking !blk_BL_movies !blk_BL_news !blk_BL_politics !blk_BL_porn !blk_BL_radiotv !blk_BL_socialnet !blk_BL_spyware !blk_BL_warez all ]

Note that there are only ! (deny) entries, and no unprefixed (allow) and no ^ (whitelist) entries.

Now suppose that during the time period we would like to allow access to the following, but still keep our off-time blacklist rules in play:

blk_BL_movies
blk_BL_news
blk_BL_politics
blk_BL_socialnet

Then we copy the values from our off-time list and replace the ! (deny) with ^ (whitelist) on only the entries listed above. The rest of them remain ! (deny).

The list outside the brackets then becomes:

!blk_BL_adv !blk_BL_aggressive !blk_BL_dating !blk_BL_drugs !blk_BL_gamble !blk_BL_hacking ^blk_BL_movies ^blk_BL_news ^blk_BL_politics !blk_BL_porn !blk_BL_radiotv ^blk_BL_socialnet !blk_BL_spyware !blk_BL_warez all

...and there is also an all at the end of the list to allow the rest of the sites.

So when we throw it all together we have:

!blk_BL_adv !blk_BL_aggressive !blk_BL_dating !blk_BL_drugs !blk_BL_gamble !blk_BL_hacking ^blk_BL_movies ^blk_BL_news ^blk_BL_politics !blk_BL_porn !blk_BL_radiotv ^blk_BL_socialnet !blk_BL_spyware !blk_BL_warez all [ !blk_BL_adv !blk_BL_aggressive !blk_BL_dating !blk_BL_drugs !blk_BL_gamble !blk_BL_hacking !blk_BL_movies !blk_BL_news !blk_BL_politics !blk_BL_porn !blk_BL_radiotv !blk_BL_socialnet !blk_BL_spyware !blk_BL_warez all ]

and that's what gets stored as the value of the Target Rules box.

When I was trying to figure this out, I unknowingly found myself in vim replicating the same two lists that make up the GUI, by taking the value of Target Rules, splitting it into the lists inside and outside the brackets, and placing each of the flat lists vertically beside one another; then I realized what was going on.
_scicomp.7729
How to make Elemental Gemm run quickly? I have the following code:

#include "elemental.hpp"
using namespace std;
using namespace elem;

extern "C" {
    void openblas_set_num_threads(int num_threads);
}

int main( int argc, char *argv[] ) {
    Initialize( argc, argv );
    const mpi::Comm comm = mpi::COMM_WORLD;
    const int commRank = mpi::CommRank( comm );
    const int commSize = mpi::CommSize( comm );
    try {
        const int n = Input("--n","size of matrices",1000);
        const int nb = Input("--nb","algorithmic blocksize",128);
        int r = Input("--r","process grid height",0);
        ProcessInput();
        SetBlocksize( nb );
        // If no process grid height was specified, try for a square
        if( r == 0 )
            r = Grid::FindFactor( commSize );
        Grid g( comm, r );

        Matrix<double> A, B, C;
        Uniform( A, n, n );
        Uniform( B, n, n );
        Uniform( C, n, n );
        mpi::Barrier( comm );
        Timer timer;
        if( commRank == 0 ) {
            timer.Start();
            Gemm( NORMAL, NORMAL, 1., A, B, 0., C );
            std::cout << "Multithreaded time: " << timer.Stop() << " secs" << std::endl;
            openblas_set_num_threads(1);
            timer.Start();
            Gemm( NORMAL, NORMAL, 1., A, B, 0., C );
            std::cout << "Sequential time: " << timer.Stop() << " secs" << std::endl;
        }
        openblas_set_num_threads(1);

        if( commRank == 0 )
            timer.Start();
        DistMatrix<double,CIRC,CIRC> ARoot(n,n,g), BRoot(n,n,g);
        if( commRank == 0 ) {
            ARoot.Matrix() = A;
            BRoot.Matrix() = B;
            std::cout << "Population time: " << timer.Stop() << " secs" << std::endl;
        }

        if( commRank == 0 )
            timer.Start();
        DistMatrix<double> ADist( ARoot ), BDist( BRoot ), CDist(g);
        Zeros( CDist, n, n );
        mpi::Barrier( comm );
        if( commRank == 0 )
            std::cout << "Scatter from root: " << timer.Stop() << " secs" << std::endl;

        if( commRank == 0 )
            timer.Start();
        Gemm( NORMAL, NORMAL, 1., ADist, BDist, 0., CDist );
        mpi::Barrier( comm );
        if( commRank == 0 )
            std::cout << "Distributed: " << timer.Stop() << " secs" << std::endl;

        DistMatrix<double,CIRC,CIRC> CRoot( CDist );
        mpi::Barrier( comm );
        if( commRank == 0 )
            std::cout << "Gather to root: " << timer.Stop() << " secs" << std::endl;
    } catch( std::exception& e ) { ReportException(e); }
    Finalize();
    return 0;
}

I'm running on 4 nodes, each node with 12 cores. The BLAS is OpenBLAS, compiled from source, so it supports 12 threads, and affinity is turned off. I get the following results for gemm, with n=4000:

multithreaded blas, via Elemental Matrix: 1.6s
singlethreaded blas,
via Elemental Matrix: 11.9sDistMatrix, 1 mpi processes, singlethreaded blas: 13.3 secondsDistMatrix, 4 mpi processes, singlethreaded blas: 6.6 secondsDistMatrix, 48 mpi processes, singlethreaded blas: 30.2 secondsI also tried using multithreaded blas with DistMatrix, by commenting out the lines openblas_set_num_threads(1) in the above code, and got the following results:DistMatrix, 4 mpi processes, multithreaded blas: 4.5 secondsWhy am I getting times for the DistMatrix on 4 nodes which are not competitive with the multithreaded blas time on a single compute node?Edit: I also tried setting A to MC,STAR; and B to STAR,MR, following page 27 of Rethinking distributed dense linear algebra, by Jack Poulson 2012, but the results were no better for me:DistMatrix, mpi 4 processes, multithreaded blas: 6.2 secondsDistMatrix, mpi 48 processes, singlethreaded blas: 35.3 secondsEdit 2: added code to prevent any code being optimized out, ie the functions readMatrix, and the cout << sum << endl at the end, but no change in timings:blas single-threaded: 12.2 secondsblas multi-threaded: 1.6 secondsDistMatrix 4 nodes, 4 processes, multithreaded blas: 4.5 secondsDistMatrix 4 nodes, 48 processes, singlethreaded blas: 29.8 secondsEdit 3: note that scalapack sdgemm also seems to run more slowly than multithreaded blas for me, but faster than Elemental, for me:scalapack sdgemm, 4 nodes, 4 processes, multithreaded blas: 2.11 secondsscalapack sdgemm, 4 nodes, 48 processes, singlethreaded blas: 29.1 secondsNote that for both scalapack and elemental, it seems that it is faster to use one mpi process one node, and turn on OpenBLAS multithreading, than to use one mpi process per core, and turn off OpenBLAS multithreading. Which makes sense, since one can then take advantage of shared memory, reducing the amount of communications required?code for scalapack tests:#pragma onceextern C { struct DESC{ int DTYPE_; int CTXT_; int M_; int N_; int MB_; int NB_; int RSRC_; int CSRC_; int LLD_; } ; void blacs_pinfo_( int *iam, int *nprocs ); void blacs_get_( int *icontxt, int *what, int *val ); void blacs_gridinit_( int *icontxt, char *order, int *nprow, int *npcol ); void blacs_gridinfo_( int *context, int *nprow, int *npcol, int *myrow, int *mycol ); void blacs_gridexit_( int *context ); void blacs_exit_( int *code ); int numroc_( int *n, int *nb, int *iproc, int *isrcproc, int *nprocs ); void descinit_( struct DESC *desc, int *m, int *n, int *mb, int *nb, int *irsrc, int *icsrc, int *ictxt, int *lld, int *info ); void pdlaprnt_( int *m, int *n, double *a, int *ia, int *ja, struct DESC *desca, int *irprnt, int *icprnt, const char *cmatnm, int *nout, double *work, int cmtnmlen ); void pdgemm_( char *transa, char *transb, int *m, int *n, int *k, double *alpha, double *a, int *ia, int *ja, struct DESC *desca, double *b, int *ib, int *jb, struct DESC *descb, double *beta, double *c, int *ic, int *jc, struct DESC *descc );}void blacs_pinfo( int *p, int *P ) { blacs_pinfo_( p, P );}int blacs_get( int icontxt, int what ) { int val; blacs_get_( &icontxt, &what, &val ); return val;}int blacs_gridinit( int icontxt, bool isColumnMajor, int nprow, int npcol ) { int newcontext = icontxt; char order = isColumnMajor ? 
'C' : 'R'; blacs_gridinit_( &newcontext, &order, &nprow, &npcol ); return newcontext;}void blacs_gridinfo( int context, int nprow, int npcol, int *myrow, int *mycol ) { blacs_gridinfo_( &context, &nprow, &npcol, myrow, mycol );}void blacs_gridexit( int context ) { blacs_gridexit_( &context );}void blacs_exit( int code ) { blacs_exit_( &code );}int numroc( int n, int nb, int iproc, int isrcproc, int nprocs ) { return numroc_( &n, &nb, &iproc, &isrcproc, &nprocs );}void descinit( struct DESC *desc, int m, int n, int mb, int nb, int irsrc, int icsrc, int ictxt, int lld ) { int info; descinit_( desc, &m, &n, &mb, &nb, &irsrc, &icsrc, &ictxt, &lld, &info ); if( info != 0 ) { throw runtime_error( non zero info: + toString( info ) ); }// return info;}void pdlaprnt( int m, int n, double *A, int ia, int ja, struct DESC *desc, int irprnt, int icprnt, const char *cmatnm, int nout, double *work ) { int cmatnmlen = strlen(cmatnm); pdlaprnt_( &m, &n, A, &ia, &ja, desc, &irprnt, &icprnt, cmatnm, &nout, work, cmatnmlen );}void pdgemm( bool isTransA, bool isTransB, int m, int n, int k, double alpha, double *a, int ia, int ja, struct DESC *desca, double *b, int ib, int jb, struct DESC *descb, double beta, double *c, int ic, int jc, struct DESC *descc ) { char transa = isTransA ? 'T' : 'N'; char transb = isTransB ? 'T' : 'N'; pdgemm_( &transa, &transb, &m, &n, &k, &alpha, a, &ia, &ja, desca, b, &ib, &jb, descb, &beta, c, &ic, &jc, descc );}#include <iostream>#include <cmath>using namespace std;#include cpputils.h#include args.h#include scalapack.hextern C { void openblas_set_num_threads(int num_threads);}int getRootFactor( int n ) { for( int t = sqrt(n); t > 0; t-- ) { if( n % t == 0 ) { return t; } } return 1;}// conventions:// M_ by N_ matrix block-partitioned into MB_ by NB_ blocks, then// distributed according to 2d block-cyclic scheme// based on http://acts.nersc.gov/scalapack/hands-on/exercise3/pspblasdriver.f.htmlint main( int argc, char *argv[] ) { int p, P; blacs_pinfo( &p, &P ); mpi_print( toString(p) + / + toString(P) ); int n; int numthreads; Args( argc, argv ).arg(N, &n ).arg(numthreads, &numthreads ).go(); openblas_set_num_threads( numthreads ); int nprows = getRootFactor(P); int npcols = P / nprows; if( p == 0 ) cout << grid: << nprows << x << npcols << endl; int system = blacs_get( -1, 0 ); int grid = blacs_gridinit( system, true, nprows, npcols ); if( p == 0 ) cout << system context << system << grid context: << grid << endl; int myrow, mycol; blacs_gridinfo( grid, nprows, npcols, &myrow, &mycol ); mpi_print(grid, me: + toString(myrow) + , + toString(mycol) ); if( myrow >= nprows || mycol >= npcols ) { mpi_print(not needed, exiting); blacs_gridexit( grid ); blacs_exit(0); exit(0); } // A B C // m x k k x n = m x n // nb: blocksize // nprows: process grid, number rows // npcols: process grid, number cols // myrow: process grid, our row // mycol: process grid, our col int m = n; int k = n;// int nb = min(n,128); // nb is column block size for A, and row blocks size for B int nb=min(n/P,128); int mp = numroc( m, nb, myrow, 0, nprows ); // mp number rows A owned by this process int kp = numroc( k, nb, myrow, 0, nprows ); // kp number rows B owned by this process int kq = numroc( k, nb, mycol, 0, npcols ); // kq number cols A owned by this process int nq = numroc( n, nb, mycol, 0, npcols ); // nq number cols B owned by this process mpi_print( mp + toString(mp) + kp + toString(kp) + kq + toString(kq) + nq + toString(nq) ); struct DESC desca, descb, descc; descinit( (&desca), m, k, nb, nb, 0, 0, 
grid, max(1, mp) ); descinit( (&descb), k, n, nb, nb, 0, 0, grid, max(1, kp) ); descinit( (&descc), m, n, nb, nb, 0, 0, grid, max(1, mp) ); mpi_print( desca.LLD_ + toString(desca.LLD_) + kq + toString(kq) ); double *ipa = new double[desca.LLD_ * kq]; double *ipb = new double[descb.LLD_ * nq]; double *ipc = new double[descc.LLD_ * nq]; for( int i = 0; i < desca.LLD_ * kq; i++ ) { ipa[i] = p; } for( int i = 0; i < descb.LLD_ * nq; i++ ) { ipb[i] = p; } if( p == 0 ) cout << created matrices << endl; double *work = new double[nb]; if( n <=5 ) { pdlaprnt( n, n, ipa, 1, 1, &desca, 0, 0, A, 6, work ); pdlaprnt( n, n, ipb, 1, 1, &descb, 0, 0, B, 6, work ); } NanoTimer timer; pdgemm( false, false, m, n, k, 1, ipa, 1, 1, &desca, ipb, 1, 1, &descb, 1, ipc, 1, 1, &descc ); MPI_Barrier( MPI_COMM_WORLD ); if( p == 0 ) timer.toc(pdgemm); blacs_gridexit( grid ); blacs_exit(0); return 0;}Here is the output from Jack Poulson's elemental/examples/blas-like/Gemm.cpp program, run with OPENBLAS_NUM_THREADS=1 OMP_NUM_THREADS=1 mpirun.mpich2 -hosts host3,host1,host2,host4 48 ./ElementalExampleGemm --m 2000 --n 2000 --k 2000 --nb 128, ie 48 processes, 1 thread per process:g: 6 x 8Sequential: 1.74586 secs and 9.16452 GFLopsPopulate root node: 0.0340741 secsSpread from root: 0.443471 secs[MC,* ] AllGather: 0.0068028 secs, 50.2758 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.224466 secs, 1.14048 MB/s for 128 x 250 local matrixLocal gemm: 0.00301719 secs and 7.08474 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00582409 secs, 58.7244 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.221785 secs, 1.15427 MB/s for 128 x 250 local matrixLocal gemm: 0.00290704 secs and 7.35319 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00559711 secs, 61.1058 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.258585 secs, 0.990002 MB/s for 128 x 250 local matrixLocal gemm: 0.00293088 secs and 7.29337 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00562692 secs, 60.7821 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.266162 secs, 0.961821 MB/s for 128 x 250 local matrixLocal gemm: 0.00652504 secs and 3.276 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00574803 secs, 59.5014 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.253986 secs, 1.00793 MB/s for 128 x 250 local matrixLocal gemm: 0.00300407 secs and 7.11567 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00567889 secs, 60.2258 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.263011 secs, 0.973343 MB/s for 128 x 250 local matrixLocal gemm: 0.00289297 secs and 7.38894 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00561213 secs, 60.9422 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.0310259 secs, 8.25117 MB/s for 128 x 250 local matrixLocal gemm: 0.00288296 secs and 7.41461 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00552988 secs, 61.8487 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.229407 secs, 1.11592 MB/s for 128 x 250 local matrixLocal gemm: 0.00290298 secs and 7.36346 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00556993 secs, 61.4039 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.259156 secs, 0.987822 MB/s for 128 x 250 local matrixLocal gemm: 0.00277686 secs and 7.6979 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00564504 secs, 60.587 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.260839 secs, 0.981448 MB/s for 128 x 250 local matrixLocal gemm: 0.00277185 secs and 7.7118 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00549412 secs, 62.2513 MB/s for 
334 x 128 local matrix[* ,MR] AllGather: 0.224814 secs, 1.13872 MB/s for 128 x 250 local matrixLocal gemm: 0.00276208 secs and 7.7391 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00556684 secs, 61.4381 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.216236 secs, 1.18389 MB/s for 128 x 250 local matrixLocal gemm: 0.00276899 secs and 7.71977 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00551414 secs, 62.0252 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.22506 secs, 1.13747 MB/s for 128 x 250 local matrixLocal gemm: 0.00276208 secs and 7.7391 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.005409 secs, 63.2309 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.255941 secs, 1.00023 MB/s for 128 x 250 local matrixLocal gemm: 0.00276995 secs and 7.71712 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00536704 secs, 63.7252 MB/s for 334 x 128 local matrix[* ,MR] AllGather: 0.225583 secs, 1.13484 MB/s for 128 x 250 local matrixLocal gemm: 0.00295305 secs and 7.23861 GFlops for 334 x 250 x 128 product[MC,* ] AllGather: 0.00358391 secs, 59.6444 MB/s for 334 x 80 local matrix[* ,MR] AllGather: 0.251425 secs, 0.636373 MB/s for 80 x 250 local matrixLocal gemm: 0.00167489 secs and 7.97664 GFlops for 334 x 250 x 80 productDistributed Gemm: 3.80641 secsGathered to root: 0.399101 secsEdit: and results for 4 mpi processes (on 4 12-core nodes), run with mpirun.mpich2 -hosts host3,host1,host2,host4 -np 4 ./ElementalExampleGemm --m 2000 --n 2000 --k 2000 --nb 128, ie with OpenBLAS multithreading activated:g: 2 x 2Sequential: 0.219173 secs and 73.0017 GFLopsPopulate root node: 0.035708 secsSpread from root: 0.482837 secs[MC,* ] AllGather: 0.0147331 secs, 69.5035 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.010592 secs, 96.6769 MB/s for 128 x 1000 local matrixLocal gemm: 0.00869703 secs and 29.4353 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00796413 secs, 128.576 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00883698 secs, 115.877 MB/s for 128 x 1000 local matrixLocal gemm: 0.00752282 secs and 34.0298 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00717402 secs, 142.737 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.0083642 secs, 122.427 MB/s for 128 x 1000 local matrixLocal gemm: 0.00796413 secs and 32.1441 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00718212 secs, 142.576 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.0080409 secs, 127.349 MB/s for 128 x 1000 local matrixLocal gemm: 0.00650787 secs and 39.337 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00641584 secs, 159.605 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00688195 secs, 148.795 MB/s for 128 x 1000 local matrixLocal gemm: 0.00576997 secs and 44.3677 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00597095 secs, 171.497 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00652695 secs, 156.888 MB/s for 128 x 1000 local matrixLocal gemm: 0.00652885 secs and 39.2106 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00575399 secs, 177.963 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00634193 secs, 161.465 MB/s for 128 x 1000 local matrixLocal gemm: 0.00574183 secs and 44.5851 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00563598 secs, 181.69 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00627708 secs, 163.133 MB/s for 128 x 1000 local matrixLocal gemm: 0.00576282 secs and 44.4227 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.005867 secs, 174.535 MB/s for 1000 x 128 
local matrix[* ,MR] AllGather: 0.00619698 secs, 165.242 MB/s for 128 x 1000 local matrixLocal gemm: 0.01108 secs and 23.1046 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00589108 secs, 173.822 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00623584 secs, 164.212 MB/s for 128 x 1000 local matrixLocal gemm: 0.00559402 secs and 45.7632 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00580001 secs, 176.551 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00648689 secs, 157.857 MB/s for 128 x 1000 local matrixLocal gemm: 0.00570703 secs and 44.857 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00566101 secs, 180.886 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00638199 secs, 160.452 MB/s for 128 x 1000 local matrixLocal gemm: 0.00575209 secs and 44.5056 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00572801 secs, 178.771 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00630784 secs, 162.338 MB/s for 128 x 1000 local matrixLocal gemm: 0.005795 secs and 44.176 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00581098 secs, 176.218 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00644612 secs, 158.855 MB/s for 128 x 1000 local matrixLocal gemm: 0.00571203 secs and 44.8177 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00570321 secs, 179.548 MB/s for 1000 x 128 local matrix[* ,MR] AllGather: 0.00653315 secs, 156.739 MB/s for 128 x 1000 local matrixLocal gemm: 0.00570703 secs and 44.857 GFlops for 1000 x 1000 x 128 product[MC,* ] AllGather: 0.00371313 secs, 172.361 MB/s for 1000 x 80 local matrix[* ,MR] AllGather: 0.00396085 secs, 161.582 MB/s for 80 x 1000 local matrixLocal gemm: 0.00396299 secs and 40.3735 GFlops for 1000 x 1000 x 80 productDistributed Gemm: 0.327859 secsGathered to root: 0.237197 secs
How to make Elemental Gemm run quickly?
linear algebra;matrices;parallel computing;mpi;blas
null
_unix.139772
As far as I know, an HTTP request and its response constitute one TCP connection. To debug a web application on a GUI-less server, I'd like to be able to capture these TCP streams in a single distinguishable entity (same color, file, db record, anything).

tcpdump can only save IP packets as they arrive or leave, with no ordering or reassembly. tcpflow goes one step further and reassembles TCP connections into separate files, but puts the send and receive streams in separate files, which makes quick debugging annoying. I'm sure I can write a script or even a one-liner to merge related files, but I'm guessing a wrapper around tcpflow for this job could introduce complexities which wouldn't exist inside tcpflow. Also, I'm lazy and looking for a cleaner solution.

Any suggestions would be appreciated.
Capture web traffic grouped by individual TCP streams
networking;http;tcpdump
You could copy (or stream) the pcap file to your desktop and use the Wireshark GUI for packet analysis. Besides the GUI, there is also the tshark command (included with Wireshark). Given a stream number, you can get its requests and responses combined in a single output with:$ tshark -q -r http.pcapng -z follow,tcp,ascii,1===================================================================Follow: tcp,asciiFilter: tcp.stream eq 1Node 0: 10.44.1.8:47833Node 1: 178.21.112.251:8077GET / HTTP/1.1User-Agent: curl/7.37.0Host: lekensteyn.nlAccept: */* 356HTTP/1.1 302 Moved TemporarilyServer: nginx/1.4.7Date: Sun, 29 Jun 2014 10:24:34 GMTContent-Type: text/htmlContent-Length: 160Connection: keep-aliveLocation: https://lekensteyn.nl/<html><head><title>302 Found</title></head><body bgcolor=white><center><h1>302 Found</h1></center><hr><center>nginx/1.4.7</center></body></html>===================================================================Refer to the manual page of tshark for more details. Basically, -q suppresses the normal packet display, -r http.pcapng selects the capture file and -z follow,... is the equivalent of the Follow TCP Stream in the GUI. Unfortunately, you must repeat this command for every stream, not really ideal.As for streaming a connection to the Wireshark GUI, you could use this command:ssh you@server 'tcpdump -w - tcp port 80' | wireshark -i - -kIf this is still not what you are looking for, then you could consider setting up a proxy and then logging everything through that proxy.
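If you'd rather not repeat that for every stream by hand, a small shell loop can enumerate the stream numbers first and then follow each one (a sketch; the -T fields/-e tcp.stream options are available in recent tshark versions):

for s in $(tshark -r http.pcapng -T fields -e tcp.stream | sort -nu); do
    tshark -q -r http.pcapng -z "follow,tcp,ascii,$s"
done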
_webapps.11266
I have imported old emails from Outlook backups into my Gmail account and ended up with 1000+ labels in my system. Is there an easy way to remove them in bulk and keep only the ~10 I usually use?
How to remove multiple Gmail labels
gmail;gmail labels
null
_softwareengineering.178117
I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space.I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck.I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step?The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
Is it conceivable to have millions of lists of data in memory in Python?
python;data structures;mysql;optimization;speed
null
_unix.55308
I am trying to build uImage for linux-sunxi on a Debian box, which is prepared like this (How To Build Debian From Source Code for Mele):

apt-get install emdebian-archive-keyring
apt-get install gcc-4.4-arm-linux-gnueabi
apt-get install build-essential git
apt-get install uboot-mkimage
apt-get install libusb-1.0-0-dev

I am following the guide at FirstSteps and have done all instructions without errors.

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- defconfig works, but make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j5 uImage fails:

LD .tmp_vmlinux1
arch/arm/common/built-in.o: In function `sp804_get_clock_rate':
timer-sp.c:(.init.text+0x31c): undefined reference to `clk_get_sys'
timer-sp.c:(.init.text+0x364): undefined reference to `clk_put'
timer-sp.c:(.init.text+0x398): undefined reference to `clk_put'
arch/arm/mach-versatile/built-in.o: In function `versatile_init_early':
versatile_ab.c:(.init.text+0x134): undefined reference to `clkdev_add_table'
drivers/built-in.o: In function `clcdfb_remove':
hid-input.c:(.text+0xd790): undefined reference to `clk_put'
drivers/built-in.o: In function `clcdfb_probe':
hid-input.c:(.text+0xdc70): undefined reference to `clk_get'
hid-input.c:(.text+0xde04): undefined reference to `clk_put'
drivers/built-in.o: In function `amba_get_enable_pclk':
hid-input.c:(.text+0xe448): undefined reference to `clk_get'
hid-input.c:(.text+0xe470): undefined reference to `clk_put'
drivers/built-in.o: In function `amba_put_disable_pclk':
hid-input.c:(.text+0xe49c): undefined reference to `clk_put'
drivers/built-in.o: In function `pl011_remove':
hid-input.c:(.text+0x2b6c4): undefined reference to `clk_put'
drivers/built-in.o: In function `pl011_probe':
hid-input.c:(.text+0x2c234): undefined reference to `clk_get'
hid-input.c:(.text+0x2c318): undefined reference to `clk_put'
drivers/built-in.o: In function `enable_clock':
hid-input.c:(.text+0x3a004): undefined reference to `clk_get'
hid-input.c:(.text+0x3a01c): undefined reference to `clk_put'
drivers/built-in.o: In function `disable_clock':
hid-input.c:(.text+0x3a044): undefined reference to `clk_get'
hid-input.c:(.text+0x3a05c): undefined reference to `clk_put'
drivers/built-in.o: In function `__pm_clk_remove':
hid-input.c:(.text+0x3a1b8): undefined reference to `clk_put'
drivers/built-in.o: In function `pm_clk_add':
hid-input.c:(.text+0x3a424): undefined reference to `clk_get'
drivers/built-in.o: In function `mmc_io_rw_extended':
hid-input.c:(.text+0x6d9ac): undefined reference to `sunximmc_check_r1_ready'
drivers/built-in.o: In function `amba_kmi_probe':
hid-input.c:(.devinit.text+0x8bc): undefined reference to `clk_get'
drivers/built-in.o: In function `amba_kmi_remove':
hid-input.c:(.devexit.text+0xd4): undefined reference to `clk_put'
make: *** [.tmp_vmlinux1] Fel 1

Here is a complete dump: http://pastie.org/5351582

I have tried googling the error codes, but I cannot find anything about the several references to clk_*. What are these functions, and how do I install them on Debian?
linux-sunxi - make failed, undefined references clk_*
linux;debian;make;arm
null
_unix.128833
I'm running the webserver lighttpd on Raspbian (Debian-based) on a Raspberry Pi. The server runs as user www-data (checked with ps aux). I added the following line to /etc/sudoers:

www-data ALL=NOPASSWD:/opt/vc/bin/vcgencmd

to be able to run the vcgencmd tool from the Raspberry Pi, which gives status information, from within a PHP file with

<? echo shell_exec('vcgencmd version'); ?>

All it prints is "VCHI initialization failed" (instead of the expected version information that appears when I run it as my own user, even without sudo), which appears when vcgencmd is run with the wrong permissions.

Running for example

<? echo shell_exec('cat /sys/class/thermal/thermal_zone*/temp'); ?>

works fine without any /etc/sudoers change, so there's no problem with PHP (like a forbidden shell_exec or something).

What else needs to be set in order to execute the command?
Adding www-data to /etc/sudoers does not work for PHP shell_exec() to run a command
sudo
If you want to use something you added to /etc/sudoers, you have to call sudo.

sudo is just a program with the setuid bit set. There's absolutely nothing more special about it, meaning it doesn't interpose itself every time a program is launched.

The reason you can call cat /sys/class/thermal/thermal_zone*/temp is because you have read access to those files. Depending on how your filesystem permissions are set, you may have read access, but not necessarily write.

The reason vcgencmd version might work when launched as your own user has two possible explanations:

1. You have alias vcgencmd='sudo vcgencmd' in your profile, thus you automatically run sudo.
2. You have sufficient permissions to the files that vcgencmd needs to operate. If you need write access, and the files are owned by a group you're a member of, and have write access for that group, then you won't need sudo.

In summary, either change your command to sudo vcgencmd version, or find what file permissions you need to modify and modify them.
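Applied to the question, that means invoking sudo from PHP with the full path, so it matches the sudoers entry (the 2>&1 is only there to surface any sudo error in the page output):

<? echo shell_exec('sudo /opt/vc/bin/vcgencmd version 2>&1'); ?>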
_unix.165974
I am working on applications on an AIX 6.1 two-node cluster, with an active and passive node. Is there a command that will show me which mount points / file systems are shared and 'swing' from one node to the other when the active node changes, and which mount points are truly local to each node and not shared?I can run commands with superuser access, but obviously I do not want to change/break anything, this is a production environment.
AIX cluster show swing filesystems
aix
null
_unix.74995
I want to run a script as root when the computer starts. This was earlier done in rc.local, but not anymore.

What I've tried:

- putting the script in /etc/profile.d
- adding /pathto/script.sh to /etc/profile
- adding /pathto/script.sh & to /etc/xdg/openbox/autostart

The script is for setting things powertop recommends; the script works just fine when I run sudo /pathto/script.sh.
Openbox root autostart?
openbox;powertop
You could put it in root's crontab with

@reboot /pathto/script.sh
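Concretely (the path is the placeholder from the question):

$ sudo crontab -e
# then add this line:
@reboot /pathto/script.sh

Note that @reboot is a Vixie cron extension; it's supported by the common Linux cron implementations (e.g. cronie), but not guaranteed everywhere.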
_unix.344468
I am new to using regex and need to find a specific word in my text file. I need to find the word beach; how would I specify that it starts with b and contains ch?

I have tried using grep '^s.*ch', but that prints out the lines that contain a match, and I just want the word.
Need a regex to find all words that begin with a letter and contain two letters
linux;grep;regular expression
The problem you have is that .* matches any sequence of characters (possibly excluding newline), including spaces.

So you want to change this to something which matches just the characters that make up a word. How exactly you do this depends on which implementation of regular expressions you are using, and whether you want to consider characters from different alphabets. One reasonably portable way is to use [[:alpha:]]*

The syntax to match at the start of a word also depends on the implementation. For grep you can use \<.

To just get the word, there are two options to grep that can help: -o and --color. The former outputs just what is matched, and the latter prints the entire line with the match highlighted.

So you probably want

grep -o '\<b[[:alpha:]]*ch' filename
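For example, with some illustrative file contents:

$ cat words.txt
the beach is near the birch bench, but broad church roads bend

$ grep -o '\<b[[:alpha:]]*ch' words.txt
beach
birch
bench

(church is skipped because it doesn't start with b, and broad and bend because they don't contain ch.)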
_cs.64856
When transforming terms from one language to another, the intuitively desired property is the preservation of semantics (as used e.g. here for a CPS transformation):

$$ s \Downarrow v \implies c(s) \Downarrow c(v) $$

I am a little troubled, however, by reconciling this with the classical terms "correctness" (or "soundness") and "completeness" of logic systems. Usually, I would consider the above statement the completeness property of $c$ (and the converse the definition of correctness). Intuitively, however, a compiler should be correct rather than complete (as e.g. type-checking often rules out correct programs). The converse of the statement above is only true if $c$ is injective: if the source language contains, for instance, booleans and operations on them, and the compilation replaces them via Church encoding, the target language can evaluate boolean operations on terms compiled from boolean literals and lambda-abstractions, which the source language cannot evaluate.

Am I right to assume that the above statement is the completeness property of $c$ (so the intuitive requirement actually has a counter-intuitive name)?

Am I also right in my conclusion that a non-injective compiler then is usually not correct?
Is Semantic Preservation Soundness (or Correctness) or Completeness
terminology;logic;compilers;semantics
null
_softwareengineering.290390
The example below is totally artificial and its only purpose is to get my point across.

Suppose I have an SQL table:

CREATE TABLE rectangles (
    width int,
    height int
);

Domain class:

public class Rectangle {
    private int width;
    private int height;

    /* My business logic */
    public int area() {
        return width * height;
    }
}

Now suppose that I have a requirement to show the user the total area of all rectangles in the database.

I can do that by fetching all the rows of the table, turning them into objects and iterating over them. But this looks just stupid, because I have lots and lots of rectangles in my table.

So I do this:

SELECT sum(r.width * r.height)
FROM rectangles r

This is easy, fast and uses the strengths of the database. However, it introduces duplicated logic, because I have the same calculation in my domain class as well.

Of course, for this example the duplication of logic is not fatal at all. However, I face the same problem with my other domain classes, which are more complex.
What are the ways to avoid duplication of logic between domain classes and SQL queries?
architecture;maintainability
As lxrec pointed out, it'll vary from codebase to codebase. Some applications will allow you to put that kind of business logic into SQL functions and/or queries, and let you run those any time you need to show the values to the user.

Sometimes it may seem stupid, but it's better to code for correctness than for performance as a primary objective.

In your sample, if you're showing the value of the area to a user in a web form, you'd have to:

1) Do a POST/GET to the server with the values of x and y;
2) The server would have to create a query to the DB server to run the calculations;
3) The DB server would make the calculations and return;
4) The web server would return the POST or GET to the user;
5) Final result shown.

It's stupid for simple things like the one in the sample, but it may be necessary for more complex stuff, like calculating the IRR of a client's investment in a banking system.

Code for correctness. If your software is correct but slow, you'll have chances to optimize where you need (after profiling). If that means keeping some of the business logic in the database, so be it. That's why we have refactoring techniques.

If it becomes slow or unresponsive, then you may have some optimizations to do, like violating the DRY principle, which is not a sin if you surround yourself with the proper unit testing and consistency testing.
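One way to keep a single definition on the database side is a SQL function (a sketch in PostgreSQL syntax; the function name is illustrative, and other RDBMSs spell this differently):

-- One place that knows how a rectangle's area is computed.
CREATE FUNCTION rectangle_area(w int, h int) RETURNS int AS $$
    SELECT w * h;
$$ LANGUAGE SQL IMMUTABLE;

-- The report query no longer repeats the formula.
SELECT sum(rectangle_area(r.width, r.height)) FROM rectangles r;

The domain class can then either duplicate this knowingly (with a test pinning the two implementations together) or delegate to the database.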
_unix.303297
I have placed a systemd service file at /usr/lib/systemd/system/testfile.service.

Here is the service file:

[Unit]
Description=Test service

[Service]
Type=notify
ExecStart=/bin/dd.sh
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=30s

[Install]
WantedBy=multi-user.target

I tried to start the service at boot time in these two ways:

1. Created a symlink for the file from /usr/lib/systemd/system to /etc/systemd/system/multi-user.target.wants (manually and by using the systemctl enable command) and rebooted the system; the testfile service started successfully at boot time.
2. Created a dependency in an existing up-and-running service file, like After=testfile.service and Wants=testfile.service, then rebooted the system; the testfile service started successfully.

But when I just place the file in /usr/lib/systemd/system, without approach 1 or 2, the service is not started. I had assumed that placing the service file in /usr/lib/systemd/system/ is enough for any service to start automatically, without creating the symlinks in the wants directory or creating a dependency in other services.

Please let me know how to start a service at boot time that is present in the /usr/lib/systemd/system directory, without approaches 1 or 2.

I have also created preset files in /usr/lib/systemd/system-preset/ to disable and enable a few services. It seems those preset files were not executed: services which I have disabled in the preset file are still enabled after boot-up. Please let me know how to debug this issue.
Start a service at boot time in systemd
systemd;systemd boot
You should store your custom unit files in /etc/systemd/system/. After you create them, you have to enable them with systemctl enable name, which creates necessary symlinks.
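A minimal sequence, using the unit name from the question (assuming the file is in your current directory):

sudo cp testfile.service /etc/systemd/system/
sudo systemctl daemon-reload            # make systemd re-read unit files
sudo systemctl enable testfile.service  # creates the multi-user.target.wants symlink
sudo systemctl start testfile.service   # start it now, without rebooting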
_cs.10873
I was looking at the pumping lemma for CFGs. I came across the first problem, $a^nb^nc^n$, and understood the answer. Then I thought of the problem $a^nb^n$. I know that this is context-free and thought of applying the lemma to it. I came across a weird situation. Could someone please tell me where I went wrong?

Our language is $a^nb^n$. Let $m$ be the pumping length. The pumping lemma says that any sufficiently long string can be divided into $uvxyz$, where $v$ and $y$ can be pumped.

We take our string to be $a^mb^m$ and we can split it into $uvxyz$. Also, we know that $|vxy|\le m$. Also, $u$ can be $\epsilon$. In that case $vxy$ consists only of $a$'s, since $|vxy|\le m$ and there are $m$ $a$'s. So when we pump $v$ and $y$, the resulting string won't be in the language!

So where did I go wrong? Is it wrong to take $u$ to be $\epsilon$ and proceed from there?
Pumping lemma for CFG doubt
context free;automata;pumping lemma
I'm not sure why the answer of Karolis doesn't satisfy you. Let me chew it a bit more for you.

First, let's recall what the pumping lemma says (taken from the credible source of Wikipedia):

If a language $L$ is context-free, then there exists some integer $p \ge 1$ such that any string $s$ in $L$ with $|s| \ge p$ (where $p$ is a pumping length) can be written as $s = uvxyz$ with substrings $u$, $v$, $x$, $y$ and $z$, such that
1. $|vxy| \le p$,
2. $|vy| \ge 1$, and
3. $uv^nxy^nz$ is in $L$ for all $n \ge 0$.

Ok, so you say $L=\{a^nb^n \mid n \ge 0\}$ is CF, and thus we can run the lemma on it. Great. Here I'll show you that the lemma works for it. Say the pumping length is $p\ge2$.$^3$ The lemma says that for any long enough string $s$ in $L$ (let's take $s=a^mb^m$ with $m\ge p$, as you suggest) we can write it as $uvxyz$ in a way that satisfies some conditions.$^1$

Ok, let's give it a try. Let's set $u=a^{m-1}$, $vxy=ab$, $z=b^{m-1}$.

Sanity check: indeed, $uvxyz$ gives $s$. I don't know yet how to split the middle part into $vxy$, but see that condition (1) is satisfied: $|vxy|= 2 \le p$. Now for condition (2), I'll have to pick the $vxy$ split. Let's take $x=\epsilon$, thus $v=a$ and $y=b$. Do we satisfy condition (2)? Yes! $|vy|=2\ge1$. Yay.

Now, it says that no matter what $n$ I take, $uv^nxy^nz$ needs to be a word in the language $L$. Let's see. By the way we picked the substrings,
$$ uv^nxy^nz = a^{m-1}a^n\epsilon b^{n}b^{m-1}$$
Indeed! For any $n\ge 0$ we pick, the word we get is $a^{m+n-1}b^{m+n-1}\in L$.

To conclude, we were able to take any word in $L$ and write it as $uvxyz$ so that the conditions of the lemma hold. This can be done for any language which is context-free (and for some that are not, as well), and we just saw how to do it for $L$.

More questions?

$^{1)}$ This should be emphasized: any word can be written in some way $uvxyz$, etc. It doesn't mean that ALL the possible $uvxyz$ splits will work. Only one of them needs to work in order to satisfy the lemma.

$^{2)}$ When you prove that a language is not CF, then all the ways of writing $uvxyz$ must be checked. This is because when you negate the lemma, the statement "there exists ... that satisfies (1), (2), (3)" negates to "there does not exist ... that satisfies" == "for all ... the condition is not satisfied". So to prove $L$ is not CF you need to check all possible ways. To show the lemma is satisfied for a CF $L$, you only need to find one.

$^{3)}$ What if $p=1$? Well, it is not. The lemma says that there exists some $p$. It doesn't say what this $p$ is. For our $L$, any $p \ge 2$ works, but $p=1$ doesn't. We need just one such $p$ (i.e., "there exists"), so taking $p=2$ is enough.
_unix.64188
I have two files that have the same data, but on different lines.

File 1:

<Identities>
    <Identity>
        <Id>048206031415072010Comcast.USR8JR</Id>
        <UID>ccp_test_79</UID>
        <DisplayName>JOSH CCP</DisplayName>
        <FirstName>JOSH</FirstName>
        <LastName>CCP</LastName>
        <Role>P</Role>
        <LoginStatus>C</LoginStatus>
    </Identity>
    <Identity>
        <Id>089612381523032011Comcast.USR1JR</Id>
        <UID>94701_account1</UID>
        <DisplayName>account1</DisplayName>
        <FirstName>account1</FirstName>
        <LastName>94701</LastName>
        <Role>S</Role>
        <LoginStatus>C</LoginStatus>
    </Identity>
</Identities>

File 2:

<Identities>
    <Identity>
        <Id>089612381523032011Comcast.USR1JR</Id>
        <UID>94701_account1</UID>
        <DisplayName>account1</DisplayName>
        <FirstName>account1</FirstName>
        <LastName>94701</LastName>
        <Role>S</Role>
        <LoginStatus>C</LoginStatus>
    </Identity>
    <Identity>
        <Id>048206031415072010Comcast.USR8JR</Id>
        <UID>ccp_test_79</UID>
        <DisplayName>JOSH CCP</DisplayName>
        <FirstName>JOSH</FirstName>
        <LastName>CCP</LastName>
        <Role>P</Role>
        <LoginStatus>C</LoginStatus>
    </Identity>
</Identities>

If I use the diff file1 file2 command I get the response below:

1,10d0
< <Identities>
<     <Identity>
<         <Id>048206031415072010Comcast.USR8JR</Id>
<         <UID>ccp_test_79</UID>
<         <DisplayName>JOSH CCP</DisplayName>
<         <FirstName>JOSH</FirstName>
<         <LastName>CCP</LastName>
<         <Role>P</Role>
<         <LoginStatus>C</LoginStatus>
<     </Identity>
20a11,20
> <Identities>
>     <Identity>
>         <Id>048206031415072010Comcast.USR8JR</Id>
>         <UID>ccp_test_79</UID>
>         <DisplayName>JOSH CCP</DisplayName>
>         <FirstName>JOSH</FirstName>
>         <LastName>CCP</LastName>
>         <Role>P</Role>
>         <LoginStatus>C</LoginStatus>
>     </Identity>

But I need to get no difference, because these files have the same data on different lines. Can anyone help me with how to achieve a "no difference" output? Thanks in advance.
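A minimal sketch of an order-insensitive comparison, using only Python's standard library (illustrative only; it assumes the files are well-formed as shown above):

    import xml.etree.ElementTree as ET

    def identity_records(path):
        root = ET.parse(path).getroot()
        records = []
        for identity in root.findall('Identity'):
            # Reduce each <Identity> to sorted (tag, text) pairs so that
            # both whitespace and document order are ignored.
            records.append(tuple(sorted((c.tag, (c.text or '').strip())
                                        for c in identity)))
        return sorted(records)

    if identity_records('file1.xml') == identity_records('file2.xml'):
        print('no difference')
    else:
        print('files differ')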
How to compare two XML files having the same data on different lines?
bash;shell;xml;file comparison
null
_webmaster.16371
I have a .net domain. I plan to use it for selling goods/commercials. Is that ok? Are there any rules about domain usage?
Can I use a .net domain for selling goods?
domains;legal;business
This answer to a different question (not near enough for me to mark this as a duplicate) covers it. In short, there are no rules, just conventions that mostly tend to be followed.
_unix.30580
I just bought an HP Pavilion g6 laptop, with the hope of installing Linux on it. I have now tried both Linux Mint (my first choice) and Ubuntu, and both simply give me a black screen from the moment the Live CD begins loading. I think it reaches the login screen, since I can hear the start-up jingle, but all is just black.

Mint gives an "Automatic boot in 10...9..." screen, then goes black. I can stop the countdown and pick from a few options; I tried the compatibility mode but that didn't help. The other options are integrity and memory checks, or to boot from the hard disk.

Ubuntu also shows a brief purple screen, where I can escape and either try it or install it. Given the problem I'm having I don't want to install just yet, so I haven't tried that. Picking "Try Ubuntu" I get a black screen immediately after.

Google turned up a suggestion of pressing CTRL+ALT+F2 after it has finished loading, to get a shell, but that doesn't seem to do anything.

I also searched through the BIOS options and set "Switchable Graphics Mode" to "Fixed" instead of "Dynamic", but that didn't help either (so I've switched it back again).

I'm out of ideas. I'd prefer to get Mint to work, since I'm tired of Ubuntu and want to try out Mint instead.

Update: I am able to get it to work by setting the nomodeset boot option, but without that I still get a black screen (I can just barely make out some elements on the screen, but it's very, very dark). I tried installing the proprietary ATI drivers in the Additional Drivers window, but that didn't seem to help, or they weren't installed properly; I can't seem to tell.
Black screen at boot with Mint and Ubuntu live CDs
ubuntu;boot;linux mint;laptop;livecd
There's a launchpad bug about Ubuntu booting with the laptop backlight off; that might be the problem you're seeing.
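Since the update says nomodeset works, a common way to make it permanent once the system is installed (a sketch; paths and the update-grub command are the Debian/Ubuntu/Mint defaults) is:

    # /etc/default/grub -- add nomodeset to the kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

    # then regenerate the GRUB configuration:
    sudo update-grub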
_webmaster.71453
How often are the "Links to Your Site" data updated in the Google Webmaster tool?

I found this thread with various replies on the matter, ranging from "every few days" and "around 20 to 30 days" to reports of backlinks still being listed nine months after they were removed. That doesn't really help much.

Are there some more precise sources? A blog post of some SEO company that did some research on this? Even some information put out there by Google?
How often are Links to Your Site updated in the Google Webmaster tool?
google search console;backlinks
null
_unix.358017
I did man maldet, trying to get information about (what I can call) maldet wildcards, which are represented by question marks.

I pressed / and searched for ? and also for \?, but nothing was found, even though the man pages clearly contain question marks.

Given that I used an escaping metacharacter and the search still failed, I ask here why.

I should note that other patterns, like batch for example, were found without problems. The reason I searched for a question mark is to find a command containing it.
Searching question marks in man
man;search
You may have problems with your pager. Try MANPAGER=less man maldet or man --pager=less maldet and see if that helps. To make it permanent, in your .bashrc put the following line: export MANPAGER=less
_unix.156473
Having a look at the default users & groups management on some usual Linux distributions (respectively ArchLinux and Debian), I'm wondering two things about it and about the consequences of modifying the default setup and configuration.The default value for USERGROUPS_ENAB in /etc/login.defs seems to be yes, which is reflected by the By default, a group will also be created for the new user that can be found in the useradd man, so each time a new user is created, a group is created with the same name and only this new user in. Is there any use to that or is this just a placeholder? I'm feeling like we are losing a part of the rights management as user/group/others by doing this. Would it be bad to have a group users or regulars or whatever you want to call it that is the default group for every user instead of having their own?Second part of my question, which is still based on what I've seen on Arch and Debian: there are a lot of users created by default (FTP, HTTP, etc.). Is there any use to them or do they only exist for historical reasons? I'm thinking about removing them but don't want to break anything that could use it, but I have never seen anything doing so, and have no idea what could. Same goes for the default groups (tty, mem, etc.) that I've never seen any user belong to.
Reasons behind the default groups and users on Linux
users;group;accounts
Per-user groups

I too don't see a lot of utility in per-user groups. The main use case is that if a user wanted to allow friends access to their files, they could have the friend's user added to their group. Few systems I've encountered actually use it this way.

When USERGROUPS_ENAB in /etc/login.defs is set to no, useradd adds all created users to the group defined in /etc/default/useradd by the GROUP field. On most distributions, this is set to the GID 100, which usually corresponds to the users group.

This does allow you to have more generic management of users. Then, if you need finer control, you can manually add the groups that make sense and add users to them.

Default created groups

Most of them came about for historic reasons, but many still have valid uses today:

- disk is the group that owns most disk drive devices
- lp owns parallel ports (and sometimes is configured for admin rights on CUPS)
- uucp often owns serial ports (including USB serial ports)
- cdrom is required for mounting privileges on a CD drive
- some systems use wheel for sudo rights; some do not
- etc.

Other groups are used by background scripts. For example, man generates temp files and such when it's run; its process uses the man group for some of those files and generally cleans up after itself.

According to the Linux Standard Base Core Specification, though, only three users (root, bin and daemon) are absolutely mandatory. The rationale behind the other groups is:

"The purpose of specifying optional users and groups is to reduce the potential for name conflicts between applications and distributions."

So it looks as if it is better to keep these groups in place. It's theoretically possible to remove them without breakage, although for some of them, mysterious things may start to not work right (e.g., some man pages not rendering if you kill that group). It doesn't do any harm to leave them there, and it's generally assumed that all Linux systems will have them.
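To make the shared-group setup concrete, here is a sketch of the two settings mentioned above (the values shown are the common defaults; the user name is illustrative):

    # /etc/login.defs -- turn off per-user groups
    USERGROUPS_ENAB no

    # /etc/default/useradd -- fallback primary group for new users
    GROUP=100        # GID 100 is conventionally the "users" group

    # a user created now lands in "users" instead of a group of their own:
    sudo useradd -m alice
    id -gn alice     # -> users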
_cs.56659
I am an engineering student and I have doubts regarding this topic. Can anyone help me find a solution?
Is there a bounded-time algorithm for Envy-free cake-cutting?
algorithms
null
_unix.347960
The Red Hat Enterprise Linux 7 Networking Guide (see link) says that the following syntax should be used to assign a subnet to a gateway:

nmcli connection modify eth0 +ipv4.routes "192.168.122.0/24 10.0.0.1"

How do I adjust the above to accommodate the 255.255.255.248 subnet mask and the aa.aa.aaa.aa6 default gateway given by an internet service provider? Something like:

nmcli connection modify eth0 +ipv4.routes "255.255.255.248 aa.aa.aaa.aa6"

Note that the subnet is given in two different formats:
a. The RHEL 7 Networking Guide gives the subnet in the format 192.168.122.0/24, while
b. the internet service provider gives the subnet in the format 255.255.255.248.

What is more:
a. the RHEL 7 Networking Guide gives the gateway as a private IP, while
b. the internet service provider gives the gateway as a public IP.

For reference, a Windows laptop is able to connect to the internet and be recognized as one of the 5 valid IP addresses by entering the above information using the following steps:

1. Control Panel > Network and Internet > Network and Sharing Center
2. Click "Change Adapter Settings"
3. Right click on the "Ethernet 2" connection and click on "Properties"
4. Select "Internet Protocol Version 4 (TCP/IPv4)"
5. Then click on the "Properties" button to open the target dialog box:
   a. In the default state, the "Obtain IP address automatically" option is checked
   b. To claim a specific IP instead, click "Use The Following IP Address" and enter the following information:
      i. IP Address: aa.aa.aaa.aa1
      ii. Subnet Mask: 255.255.255.248
      iii. Default Gateway: aa.aa.aaa.aa6
      iv. Preferred DNS Server: bb.bb.bb.bb
      v. Alternate DNS Server: bb.bb.cc.cc
      vi. Check the "Validate Settings on Exit" option.
      vii. Click OK
6. Click OK on any other open dialog boxes to return the computer to its normal state
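For reference, 255.255.255.248 corresponds to a /29 prefix, so a sketch of what I believe would be the equivalent nmcli invocation (unverified, reusing the provider's placeholder values from above) is:

    nmcli connection modify eth0 ipv4.method manual \
        ipv4.addresses aa.aa.aaa.aa1/29 \
        ipv4.gateway aa.aa.aaa.aa6 \
        ipv4.dns "bb.bb.bb.bb bb.bb.cc.cc"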
Using nmcli to set subnet mask and gateway IP
centos;networking;rhel;ip;webserver
null
_webapps.98007
I have this issue where Google Sheets does not print a spreadsheet's rows in the same order as it displays them on screen. This specifically applies to dates in a sheet which has a query() including an Order By B (where B = date). NOTE: queries in this setup reference other combined queries, so it gets a little confusing, but here is my best attempt at an explanation.

Issue example: Google Sheets Example

In summary:
- Sheets A, B, and C are input sheets for entering tasks
- Sheet D combines those lists via =query({'Sheet A'!A3:N20;'Sheet B'!A3:N20;'Sheet C'!A3:N20})
- Sheet E takes the content of Sheet D and sorts it by date via =query('Sheet D'!A3:E999,"Select A,B,C Where A > -1 and A < 54 Order by B")
- the problem is that when tasks originating on different sheets have the same date, Google Sheets orders them differently on screen than it does when I print.
- then, if notes have been added on Sheet E, they fall out of sync with their respective tasks when printed, as they DO remain in the same order!

I have tried adding a timestamp to the date (which worked) to differentiate the items, but that is really not convenient from an input POV.
Google Sheets query() with Order By doesn't print in same order as displayed
google spreadsheets;google spreadsheets query
If you would like Google to change this behaviour, send them your feedback. To do that, click Help > Report a problem.

An alternative workaround is to add a second column to Order by, so that the sort keys become unique pairs, i.e.

=query(Handler!A3:E999,"Select A,B,C Where A > -1 and A < 54 Order by B,C")
_codereview.52522
I know that stringstreams are the C++ recommended way to create formatted text. However, they can often become quite verbose, especially when compared to the succinct format strings of printf and family. However, the printf family can lead to issues of its own, such as buffer overflows, and so I would rather not use these functions directly.

My goal was to make a function to behave similarly to snprintf, but return a std::string, avoiding any chance of buffer overflow. Knowing that there are many pitfalls to string manipulation in C++, have I exposed myself to any errors in using this function?

#include <cstdio>
#include <cstdarg>
#include <string>

std::string string_sprintf(const std::string& format, ...){
    static const int initial_buf_size = 100;
    va_list arglist;
    va_start(arglist, format);
    char buf1[initial_buf_size];
    const int len = vsnprintf(buf1, initial_buf_size, format.c_str(), arglist) + 1;
    va_end(arglist);

    if(len < initial_buf_size){
        return buf1;
    } else {
        char buf2[len];
        va_start(arglist, format);
        vsnprintf(buf2, len, format.c_str(), arglist);
        va_end(arglist);
        return buf2;
    }
}
Mimic sprintf with std::string output
c++;strings
Since you're using cstdarg and va_list is a complete type, it should be std::va_list.

va_start with a std::string is undefined behavior. This means that you actually cannot use std::string as your format (at least not if you plan on using only the stdarg facilities). It's literally impossible, since there's no way to forward the ... idea; instead you can only forward a va_list, which of course presents a problem since you can't obtain a va_list with a std::string parameter.

Using vsnprintf implies that you're using C++11. Given this, I might be tempted to go the variadic template route rather than leveraging the old C-style functions that don't have type safety. Unfortunately though, much of the parsing and transforming logic would have to be recreated. If you do want to consider going this route, Andrei Alexandrescu gave a talk in which he examined this option. Note that using a std::string format with variadic templates is of course possible.

Oh, and of course you could do a hybrid where you use a variadic template, but only to forward the arguments to vsnprintf. That would allow you to use a std::string format, but you don't get the true type safety of the variadic approach.

vnp already covered this, but let's take a second look and consider an excerpt from the standard:

"The vsnprintf function returns the number of characters that would have been written had n been sufficiently large, not counting the terminating null character, or a negative value if an encoding error occurred. Thus, the null-terminated output has been completely written if and only if the returned value is nonnegative and less than n."

Unfortunately, you'll have to do some pretty gross handling for this, since on MS systems a negative number currently means the buffer was not large enough, whereas on standard-compliant systems it means some encoding error happened.

If you include the null character in the calculation, you need to use <=. If you don't include it, you need to use <. This means that your current logic of len < initial_buf_size is wrong. It should actually be len <= initial_buf_size.

Think of it this way: len represents the length of the entire, null-terminated output string (since you added 1). initial_buf_size represents the size of the entire buffer, including the required null terminator. Since both numbers include the null terminator, the case where they are equal means that the entire buffer was utilized, but that it does contain the full, null-terminated output string.

I wouldn't bother using initial_buf_size. Instead, I would just hard-code it into the buf1 declaration and then use sizeof(buf1). I'm all for avoiding magic numbers and pulling them into constants, but it's really only used once (and you should be using sizeof anyway; the declaration of buf1 can change).

2 spaces is rather unusual in C++. I would expect either 4 spaces or 1 tab.

I'm not a fan of the lack of white space. Like the previous item, this is just opinion, but I (and I think most people, though obviously I'm prone to confirmation bias on this) prefer spacing around clauses and operators. In other words:

func(param1, param2, param3)

and

if (a < b) {

Your function has way too much duplicated code. Let the va_list function be handled at the top level, and then have a second function that actually does the work.
This is actually exactly what every implementation of the standard library I've ever used does. Basically what I'm saying is that just as the standard library has printf and vprintf, you would have string_sprintf and string_vsprintf, and string_sprintf would just call string_vsprintf under the hood.

Instead of having two cases, I would be tempted to just do an empty run of vsnprintf to figure out exactly what size you'll need.

You're using a non-standard compiler extension to use an automatic storage duration array that is actually dynamically sized (buf2). Instead, I would use a container like a std::vector to handle the buffer, or since you're already using C++11, you can use std::string since it's guaranteed to be contiguous.

All in all, I might do something like this:

std::string string_vsprintf(const char* format, std::va_list args) {
    va_list tmp_args; //unfortunately you cannot consume a va_list twice
    va_copy(tmp_args, args); //so we have to copy it
    const int required_len = vsnprintf(nullptr, 0, format, tmp_args) + 1;
    va_end(tmp_args);

    std::string buf(required_len, '\0');
    if (std::vsnprintf(&buf[0], buf.size(), format, args) < 0) {
        throw std::runtime_error{"string_vsprintf encoding error"};
    }
    return buf;
}

std::string string_sprintf(const char* format, ...) __attribute__ ((format (printf, 1, 2)));

std::string string_sprintf(const char* format, ...) {
    std::va_list args;
    va_start(args, format);
    std::string str{string_vsprintf(format, args)};
    va_end(args);
    return str;
}

Note that this does not handle Windows and MS's non-compliant implementation of vsnprintf.

It would also be a good idea to wrap the __attribute__ stuff in preprocessor checks, since __attribute__ is only used in GCC and clang.

Oh, and the string is unnecessarily set to all null characters and only then overwritten with the data we actually care about. If you super care about performance, that might be an issue, but realistically, it shouldn't matter. Any way around it would require an extra copy anyway (since you'd have to take whatever buffer you used and copy it into a std::string).
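A short illustrative call of the rewritten helper (my own example, not part of the original answer):

    std::string s = string_sprintf("%s #%d", "answer", 42);
    std::puts(s.c_str());   // prints: answer #42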
_datascience.10927
I am using Stanford NER to recognize each entity in a search text. Once I identify entities, I need to pass those entities to an algorithm which calculates a score for each entity type (e.g. country, customer) using weights for each word. Currently my training data has word and answer like below.

country_training.tsv
brazil    country
japan    country

customer_training.tsv
hyundai    customer
apple    customer

How can I associate a weight with each piece of training data, like below, so that I can get the weight of each word too?

country_training.tsv
brazil    country    1.5
japan    country    4.0

customer_training.tsv
hyundai    customer    4.0
apple    customer    2.3

Please advise.

Weights are input to the NER so that it can annotate each entity with its weight.
Stanford NER Training - Assign weight to each word
nlp;language model;stanford nlp
null
_webapps.8204
It seems like this option exists, but the entire geolocation editing page doesn't seem to be functional.

The address is http://twitpic.com/media/editLocation/[your_image_code]

It's possible to remove the location data prior to the upload, but I'd like to try and remove it after it's been uploaded.

Thanks.
Removing/changing an image's geolocation data on Twitpic (after it has been uploaded)
twitter;geolocation;twitpic
Okay, I've talked with their tech support and it was broken and they fixed it.
_webapps.91433
Meetup.com provides an iCal calendar for individual meetups. It doesn't provide a shared iCal calendar that contains all meetup events, only a shared meetup list for the meetups where one clicked "I attend". I would like to have one calendar in Google Calendar that mixes together multiple meetups. Is it possible to do this in Google Calendar? Can I combine calendars together?
Is it possible to merge multiple `ical` calendars that are imported via URL into one calendar in Google Calendar?
google calendar;meetup
null
_codereview.37740
What should the code do: process client HTML requests, query the database, and return the answer in XML, working under a high load.

I need to know how it can be optimized. Is something terribly wrong with this code?

Input data: HTML session, MAC address (in the form of a GET argument).
Output data: XML (session_id, session_sign).

Servlet:

package com.packageexample.servlet;

import java.io.IOException;

import javax.servlet.*;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

/**
 * Servlet implementation class TestServlet
 */
@WebServlet("/Servlet")
public class Servlet extends HttpServlet {
    private static final long serialVersionUID = -3954448641206344959L;
    private static final long SESSION_TIMEOUT = 360000;
    private static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
    private static final String JDBC_MYSQL_SERVER = "jdbc:mysql://192.168.0.100:3306/test?useUnicode=true&characterEncoding=UTF-8";
    private static final String JDBC_MYSQL_USER = "test";
    private static final String JDBC_MYSQL_PASSWORD = "test";
    private static DataSource datasource;
    private static enum RESPONSE_STATUS {SESSION_NEW, SESSION_RESTORED, SESSION_OK, MAC_UNDEFINED, UNAUTHORIZED_ACCESS, HACK_ATTEMPT};

    /**
     * @see HttpServlet#HttpServlet()
     */
    public Servlet() {
        super();
        PoolProperties p = new PoolProperties();
        p.setUrl(JDBC_MYSQL_SERVER);
        p.setDriverClassName(JDBC_DRIVER);
        p.setUsername(JDBC_MYSQL_USER);
        p.setPassword(JDBC_MYSQL_PASSWORD);
        p.setJmxEnabled(true);
        p.setTestWhileIdle(false);
        p.setTestOnBorrow(true);
        p.setValidationQuery("SELECT 1");
        p.setTestOnReturn(false);
        p.setValidationInterval(30000);
        p.setTimeBetweenEvictionRunsMillis(30000);
        p.setMaxActive(100);
        p.setInitialSize(10);
        p.setMaxWait(10000);
        p.setRemoveAbandonedTimeout(60);
        p.setMinEvictableIdleTimeMillis(30000);
        p.setMinIdle(10);
        p.setLogAbandoned(true);
        p.setRemoveAbandoned(true);
        p.setJdbcInterceptors(
            "org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"
            + "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer");
        /* Using the Apache Tomcat 7 datasource for connection pooling.
           We chose it because we have a high load on our project. */
        datasource = new DataSource();
        datasource.setPoolProperties(p);
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // Need to check if we've got a session already
        HttpSession session = request.getSession(false);
        String session_id = null;
        // Parse GET parameter "mac" from the client request
        String mac = request.getParameter("mac");
        if (session != null) { session_id = session.getId(); }
        // Firstly, we need to check whether we got some MAC address provided at all
        if (mac == null) {
            makeXml(request, response, session_id, RESPONSE_STATUS.MAC_UNDEFINED);
            return;
        }
        // Now we need to deal with the db
        ManageDatabase dbDo = new ManageDatabase();
        /* Check that there's a record for this MAC in the database.
           If yes, return this device. */
        Device device = dbDo.getDeviceByMAC(mac, datasource);
        if (device == null) {
            makeXml(request, response, session_id, RESPONSE_STATUS.UNAUTHORIZED_ACCESS);
            return;
        }
        /* Now that we know for sure that we've got some valid MAC address provided,
           we need to check if the client has got some SESSION_ID */
        if (session_id == null) {
            session = request.getSession();
            device.setSessionId(session.getId());
            dbDo.setSessionId(device, datasource);
            makeXml(request, response, device.getSessionId(), RESPONSE_STATUS.SESSION_NEW);
            return;
        } else {
            // If session_id is not null, we need to check if it is valid for the mac provided
            if (!(device.getSessionId().equals(session.getId()))) {
                makeXml(request, response, session_id, RESPONSE_STATUS.HACK_ATTEMPT);
                return;
            }
            // If session_id is not null and it's a valid session, then check it for timeout. Restore if needed
            long sessionActiveTime = System.currentTimeMillis() - session.getLastAccessedTime();
            if (sessionActiveTime > SESSION_TIMEOUT) {
                makeXml(request, response, session_id, RESPONSE_STATUS.SESSION_RESTORED);
            } else {
                makeXml(request, response, session_id, RESPONSE_STATUS.SESSION_OK);
            }
        }
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    }

    protected void makeXml(HttpServletRequest request, HttpServletResponse response, String sid, Enum<?> res) throws ServletException, IOException {
        request.setAttribute("session_id", sid);
        request.setAttribute("session_sign", res);
        RequestDispatcher view = request.getRequestDispatcher("Result.jsp");
        view.forward(request, response);
    }
}

ManageDatabase:

package com.packageexample.servlet;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.tomcat.jdbc.pool.DataSource;

class ManageDatabase {
    private static final String MYSQL_QUERY_CHECK_MAC = "SELECT * FROM device WHERE mac=?;";
    private static final String MYSQL_QUERY_UPDATE_SESSION_ID = "UPDATE device set session_id=? WHERE mac=?;";

    Device getDeviceByMAC(String mac, DataSource datasource) {
        ResultSet rs = null;
        Device device = null;
        try (Connection connection = datasource.getConnection();
             PreparedStatement statement = connection.prepareStatement(MYSQL_QUERY_CHECK_MAC,
                 ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);) {
            statement.setString(1, mac);
            rs = statement.executeQuery();
            while (rs.next()) {
                device = new Device();
                device.setMac(rs.getString("mac"));
                device.setSessionId(rs.getString("session_id"));
            }
        } catch (SQLException e) { System.out.println(e); }
        finally {
            if (rs != null) try { rs.close(); } catch (SQLException e) { System.out.println(e); }
        }
        return device;
    }

    void setSessionId(Device device, DataSource datasource) {
        try (Connection connection = datasource.getConnection();
             PreparedStatement statement = connection.prepareStatement(MYSQL_QUERY_UPDATE_SESSION_ID,
                 ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);) {
            statement.setString(1, device.getSessionId());
            statement.setString(2, device.getMac());
            statement.executeUpdate();
        } catch (SQLException e) { System.out.println(e); }
    }
}

Device:

package com.packageexample.servlet;

class Device {
    private String mac;
    private String sessionId;

    public String getMac() {
        return mac;
    }

    public void setMac(String mac_address) {
        this.mac = mac_address;
    }

    public String getSessionId() {
        return sessionId;
    }

    public void setSessionId(String session_id) {
        this.sessionId = session_id;
    }

    public String toString() {
        return "Device mac: " + this.mac + "; session_id: " + this.sessionId;
    }
}

Result.jsp:

<%@ page language="java" contentType="text/xml; charset=UTF-8" pageEncoding="UTF-8"%>
<result>
<session_id><% out.println(request.getAttribute("session_id")); %></session_id>
<session_sign><% out.println(request.getAttribute("session_sign")); %></session_sign>
</result>
Servlet for querying a database on a high-load system
java;mysql;servlets
null
_codereview.55323
I created a classic snake game in the canvas element. I have not considered best practices when doing this, I just wanted to finish it first. Now it's time to improve the coding practice. You can help me out by mentioning bad practices, giving code improvements and suggesting anything else.

<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Feed the Snake v 1.1 beta</title>
<style>
body{
    background:#000;
    color:#FFF;
}
canvas{
    background:#FFF;
}
#controls{
    position:absolute;
    top:0;
    right:0;
    margin:10px;
}
</style>
<script type="text/javascript">
var snake = window.snake || {};

function launchFullscreen(element) {
    if(element.requestFullscreen) {
        element.requestFullscreen();
    } else if(element.mozRequestFullScreen) {
        element.mozRequestFullScreen();
    } else if(element.webkitRequestFullscreen) {
        element.webkitRequestFullscreen();
    } else if(element.msRequestFullscreen) {
        element.msRequestFullscreen();
    }
}

window.onload = function(){
    document.addEventListener("fullscreenchange", function(){snake.game.adjust();});
    document.addEventListener("webkitfullscreenchange", function(){snake.game.adjust();});
    document.addEventListener("mozfullscreenchange", function(){snake.game.adjust();});
    document.addEventListener("MSFullscreenChange", function(){snake.game.adjust();});

    snake.game = (function() {
        var canvas = document.getElementById('canvas');
        var ctx = canvas.getContext('2d');
        var status = false;
        var score = 0;
        var old_direction = 'right';
        var direction = 'right';
        var block = 10;
        var score = 0;
        var refresh_rate = 250;
        var pos = [[5,1],[4,1],[3,1],[2,1],[1,1]];
        var scoreboard = document.getElementById('scoreboard');
        var control = document.getElementById('controls');
        var keys = {
            37 : 'left',
            38 : 'up',
            39 : 'right',
            40 : 'down'
        };

        function adjust() {
            if (document.fullscreenElement ||
                document.webkitFullscreenElement ||
                document.mozFullScreenElement ||
                document.msFullscreenElement ) {
                canvas.width = window.innerWidth;
                canvas.height = window.innerHeight;
                control.style.display = 'none';
            } else {
                canvas.width = 850;
                canvas.height = 600;
                control.style.display = 'inline';
            }
        }

        var food = [Math.round(Math.random(4)*(canvas.width - 10)),
                    Math.round(Math.random(4)*(canvas.height - 10)),];

        function todraw() {
            for(var i = 0; i < pos.length; i++) {
                draw(pos[i]);
            }
        }

        function giveLife() {
            var nextPosition = pos[0].slice();
            switch(old_direction) {
                case 'right':
                    nextPosition[0] += 1;
                    break;
                case 'left':
                    nextPosition[0] -= 1;
                    break;
                case 'up':
                    nextPosition[1] -= 1;
                    break;
                case 'down':
                    nextPosition[1] += 1;
                    break;
            }
            pos.unshift(nextPosition);
            pos.pop();
        }

        function grow() {
            var nextPosition = pos[0].slice();
            switch(old_direction) {
                case 'right':
                    nextPosition[0] += 1;
                    break;
                case 'left':
                    nextPosition[0] -= 1;
                    break;
                case 'up':
                    nextPosition[1] -= 1;
                    break;
                case 'down':
                    nextPosition[1] += 1;
                    break;
            }
            pos.unshift(nextPosition);
        }

        function loop() {
            ctx.clearRect(0,0,canvas.width,canvas.height);
            todraw();
            giveLife();
            feed();
            if(is_catched(pos[0][0]*block,pos[0][1]*block,block,block,food[0],food[1],10,10)) {
                score += 10;
                createfood();
                scoreboard.innerHTML = score;
                grow();
                if(refresh_rate > 100) {
                    refresh_rate -= 5;
                }
            }
            snake.game.status = setTimeout(function() {
                loop();
            }, refresh_rate);
        }

        window.onkeydown = function(event){
            direction = keys[event.keyCode];
            if(direction) {
                setWay(direction);
                event.preventDefault();
            }
        };

        function setWay(direction) {
            switch(direction) {
                case 'left':
                    if(old_direction != 'right') {
                        old_direction = direction;
                    }
                    break;
                case 'right':
                    if(old_direction != 'left') {
                        old_direction = direction;
                    }
                    break;
                case 'up':
                    if(old_direction != 'down') {
                        old_direction = direction;
                    }
                    break;
                case 'down':
                    if(old_direction != 'up') {
                        old_direction = direction;
                    }
                    break;
            }
        }

        function feed() {
            ctx.beginPath();
            ctx.fillStyle = "#ff0000";
            ctx.fillRect(food[0],food[1],10,10);
            ctx.fill();
            ctx.closePath();
        }

        function createfood() {
            food = [Math.round(Math.random(4)*850),
                    Math.round(Math.random(4)*600)];
        }

        function is_catched(ax,ay,awidth,aheight,bx,by,bwidth,bheight) {
            return !(
                ((ay + aheight) < (by)) ||
                (ay > (by + bheight)) ||
                ((ax + awidth) < bx) ||
                (ax > (bx + bwidth))
            );
        }

        function draw(pos) {
            var x = pos[0] * block;
            var y = pos[1] * block;
            if(x >= canvas.width || x <= 0 || y >= canvas.height || y <= 0) {
                document.getElementById('pause').disabled = 'true';
                snake.game.status = false;
                ctx.clearRect(0,0,canvas.width,canvas.height);
                ctx.font = '40px san-serif';
                ctx.fillText('Game Over',300,250);
                ctx.font = '20px san-serif';
                ctx.fillStyle = '#000000';
                ctx.fillText('To Play again Refresh the page or click the Restarts button',200,300);
                throw ('Game Over');
            } else {
                ctx.beginPath();
                ctx.fillStyle = '#000000';
                ctx.fillRect(x,y,block,block);
                ctx.closePath();
            }
        }

        function pause(elem) {
            if(snake.game.status) {
                clearTimeout(snake.game.status);
                snake.game.status = false;
                elem.value = 'Play'
            } else {
                loop();
                elem.value = 'Pause';
            }
        }

        function begin() {
            loop();
        }

        function restart() {
            location.reload();
        }

        function start() {
            ctx.fillStyle = '#000000';
            ctx.fillRect(0,0,canvas.width,canvas.height);
            ctx.fillStyle = '#ffffff';
            ctx.font = '40px helvatica';
            ctx.fillText('Vignesh',370,140);
            ctx.font = '20px san-serif';
            ctx.fillText('presents',395,190);
            ctx.font = 'italic 60px san-serif';
            ctx.fillText('Feed The Snake',240,280);
            var img = new Image();
            img.onload = function() {
                ctx.drawImage(img,300,300,200,200);
                ctx.fillRect(410,330,10,10);
            }
            img.src = 'snake.png';
        }

        function fullscreen() {
            launchFullscreen(canvas);
        }

        return {
            pause: pause,
            restart : restart,
            start : start,
            begin: begin,
            fullscreen : fullscreen,
            adjust : adjust,
        };
    })();

    snake.game.start();
}
</script>
</head>
<body>
<canvas width="850" height="600" id="canvas" style="border:1px solid #333;" onclick="snake.game.begin();"></canvas>
<div id="controls" style="float:right; text-align:center;">
    <input type="button" id="pause" value="Play" onClick="snake.game.pause(this);" accesskey="p">
    <input type="button" id="restart" value="Restart" onClick="snake.game.restart();">
    <br/><br/>
    <input type="button" id="fullscreen" value="Play Fullscreen" onClick="snake.game.fullscreen();">
    <br/><br/>
    <div style="font-size:24px;">
        Score : <span id="scoreboard">0</span>
    </div>
</div>
</body>
</html>

You can see a live version of the game here.
Snake game with canvas element code
javascript;html5;canvas;snake game
From a once over:

Good

- I like how you use an IIFE
- I really like how you use direction = keys[event.keyCode];

Not so good

You are not consistently applying the 2nd good technique, for example this:

function setWay(direction){
    switch(direction)
    {
        case 'left':
            if(old_direction != 'right')
            {
                old_direction = direction;
            }
            break;
        case 'right':
            if(old_direction != 'left')
            {
                old_direction = direction;
            }
            break;
        case 'up':
            if(old_direction != 'down')
            {
                old_direction = direction;
            }
            break;
        case 'down':
            if(old_direction != 'up')
            {
                old_direction = direction;
            }
            break;
    }
}

could simply have been

function setWay(direction){
    var oppositeDirection = {
        left : 'right',
        right: 'left',
        up: 'down',
        down: 'up'
    }
    if( direction != oppositeDirection[old_direction] ){
        old_direction = direction;
    }
}

I will leave the deep thoughts to you on whether:

- You want to specify that 'left' is the opposite of 'right', since you already specified 'right' is the opposite of 'left'
- You would want to merge oppositeDirection and keys

You copy pasted some code in giveLife and grow that could also benefit from the above approach. I would have written this:

switch(old_direction)
{
    case 'right':
        nextPosition[0] += 1;
        break;
    case 'left':
        nextPosition[0] -= 1;
        break;
    case 'up':
        nextPosition[1] -= 1;
        break;
    case 'down':
        nextPosition[1] += 1;
        break;
}

as

//2 properly named array indexes for x and y
var X = 0;
var Y = 1;
//vectors for each direction
var vectors = {
    right : { x : 1 , y : 0 },
    left : { x : -1 , y : 0 },
    up : { x : 0 , y : -1 },
    down : { x : 0 , y : 1 }
}

function updatePosition( direction ){
    var vector = vectors[ direction ];
    if( vector ){
        nextPosition[X] += vector.x;
        nextPosition[Y] += vector.y;
    }
    else{
        throw "Invalid direction: " + direction
    }
}

The advantages here are:

- If you wanted to play snake with 8 directions, you could
- No silent failure if an invalid direction is passed along

The following code gives me the creeps:

function launchFullscreen(element) {
    if(element.requestFullscreen) {
        element.requestFullscreen();
    } else if(element.mozRequestFullScreen) {
        element.mozRequestFullScreen();
    } else if(element.webkitRequestFullscreen) {
        element.webkitRequestFullscreen();
    } else if(element.msRequestFullscreen) {
        element.msRequestFullscreen();
    }
}

have you considered using something like

function launchFullscreen(e) {
    var request = e.requestFullscreen ||
                  e.mozRequestFullScreen ||
                  e.webkitRequestFullscreen ||
                  e.msRequestFullscreen;
    request.call(e);
}

This also is not a pretty sight:

document.addEventListener("fullscreenchange", function(){snake.game.adjust();});
document.addEventListener("webkitfullscreenchange", function(){snake.game.adjust();});
document.addEventListener("mozfullscreenchange", function(){snake.game.adjust();});
document.addEventListener("MSFullscreenChange", function(){snake.game.adjust();});

That should at least be

document.addEventListener("fullscreenchange", snake.game.adjust );
document.addEventListener("webkitfullscreenchange", snake.game.adjust );
document.addEventListener("mozfullscreenchange", snake.game.adjust );
document.addEventListener("MSFullscreenChange", snake.game.adjust );

and really there has to be a better way than to subscribe to every browser event ;) I am assuming you are not simply providing snake.game.adjust because it is not yet initialized at that point. I would rather solve that problem than create functions to deal with it.
_codereview.62713
The following question was taken from Absolute Java 5th ed. by Walter Savitch:

"Write a program that outputs the number of hours, minutes, and seconds that corresponds to 50,391 total seconds. The output should be 13 hours, 59 minutes, and 51 seconds. Test your program with a different number of total seconds to ensure that it works for other cases."

This is the code that I have written:

public class Question7 {

    private static final int MINUTES_IN_AN_HOUR = 60;
    private static final int SECONDS_IN_A_MINUTE = 60;

    public static void main(String[] args) {
        int seconds = 50391;
        System.out.println(timeConversion(seconds));
    }

    private static String timeConversion(int totalSeconds) {
        int hours = totalSeconds / MINUTES_IN_AN_HOUR / SECONDS_IN_A_MINUTE;
        int minutes = (totalSeconds - (hoursToSeconds(hours))) / SECONDS_IN_A_MINUTE;
        int seconds = totalSeconds - ((hoursToSeconds(hours)) + (minutesToSeconds(minutes)));
        return hours + " hours " + minutes + " minutes " + seconds + " seconds";
    }

    private static int hoursToSeconds(int hours) {
        return hours * MINUTES_IN_AN_HOUR * SECONDS_IN_A_MINUTE;
    }

    private static int minutesToSeconds(int minutes) {
        return minutes * SECONDS_IN_A_MINUTE;
    }
}
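As a quick sanity check of the expected output: 13 × 3600 + 59 × 60 + 51 = 46800 + 3540 + 51 = 50391, so the target values are consistent.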
Converting seconds to hours, minutes and seconds
java;beginner;datetime
null
_cstheory.38118
Is there a general statement about what kinds of problems can be solved more efficiently using quantum computers (quantum gate model only)? Do the problems for which an algorithm is known today have a common property?

As far as I understand, quantum computing helps with the hidden subgroup problem (Shor); Grover helps speed up search problems. I have read that quantum algorithms can provide speed-up if you look for a 'global property' of a function (Grover/Deutsch).

Is there a more concise and correct statement about where quantum computing can help? Is it possible to give an explanation why quantum physics can help there (preferably something deeper than 'interference can be exploited')? And why it possibly will not help for other problems (e.g. for NP-complete problems)?

Are there relevant papers that discuss just that?

(Sorry, new here. Hope the question is appropriate and understandable, and there is not too much wrong in my statements...)
General statement about what kinds of problems can be solved more efficiently by quantum computers
quantum computing
null
_codereview.11894
For practice I tried implementing a bubble sort algorithm in Ruby. I'm especially interested in creating clean DRY and KISS code and keeping things readable, but efficient. Any hints to improve things would be very much appreciated. The code looks like this:

class Sorter

  def initialize
  end

  def sort(stack)
    if stack.length == 0
      return stack
    else
      checkArguments(stack)
      if stack.length == 1
        return stack
      else
        newstack = []
        stackIsSorted = false
        while !stackIsSorted
          newstack = iterateThroughStack(stack)
          if newstack == stack
            stackIsSorted = true
          else
            stack = newstack
          end
        end
        return newstack
      end
    end
  end

  def checkArguments(stack)
    stack.each do |element|
      if element.class != Float && element.class != Fixnum
        raise ArgumentError, "element detected in stack that is not an integer or double"
      end
    end
  end

  def iterateThroughStack(stack)
    newstack = []
    currentElement = stack[0]
    for i in (1...stack.length)
      if currentElement < stack[i]
        newstack.push(currentElement)
        currentElement = stack[i]
      else
        newstack.push(stack[i])
      end
    end
    newstack.push(currentElement)
  end

end

Then, after reading about Test Driven Development, I started using this practice, and since then I think code makes more sense with unit tests. So below are the unit tests I wrote:

require 'test/unit'
require 'lib/Sorter'

class Test_Sorter < Test::Unit::TestCase

  def setup
    @sorter = Sorter.new
  end

  def test_emptyStack
    stack = []
    assert_equal(stack, @sorter.sort(stack), "sorting empty stack failed")
  end

  def test_StackWithOneElement
    stack = [1]
    assert_equal(stack, @sorter.sort(stack), "sorting stack with one element: 1 failed")
    stack = ["a"]
    assert_raise (ArgumentError) { @sorter.sort(stack) }
  end

  def test_StackWithTwoElements
    stack = [2, 1]
    sorted_stack = [1, 2]
    assert_equal(sorted_stack, @sorter.sort(stack), "sorting stack with two elements: 1, 2 failed")
    stack = [2, "a"]
    assert_raise (ArgumentError) { @sorter.sort(stack) }
  end

  def test_StackWithThreeElements
    stack = [2, 3, 1]
    sorted_stack = [1, 2, 3]
    assert_equal(sorted_stack, @sorter.sort(stack), "sorting stack with three elements: 1, 2, 3 failed")
  end

  def test_StackWithFourElements
    stack = [4, 2, 3, 1]
    sorted_stack = [1, 2, 3, 4]
    assert_equal(sorted_stack, @sorter.sort(stack), "sorting stack with four elements: 1, 2, 3, 4 failed")
  end

end
Bubble sort implementation with tests
ruby;unit testing;sorting
There are a few things I would do differently:

- Using a class without instance variables doesn't make much sense; you can use a module for that if the only thing you want is to limit scope. Of course it would have been useful if you had initialized the class with some parameters like Sorter.new(Float, Fixnum) and saved them for later use.
- When you iterate through the stack you know for sure whether you swapped items or not, so I would say it's a good idea to save this information inside iterateThroughStack and pass it back, so you don't need to compare arrays in sort.
- Duck typing in Ruby means you normally don't check types; who cares what the type of an element is if it allows comparison. So I would remove checkArguments unless you have some special requirements.
- I would use each instead of for.
- Don't use return unless you want to return from the middle of a function (which is presumably a bad thing to do).

So it all comes down to this:

class Sorter
  def initialize
  end

  def sort(stack)
    if stack.length > 1
      stackIsSorted = false
      while !stackIsSorted
        stackIsSorted, stack = iterateThroughStack(stack)
      end
    end
    stack
  end

  def iterateThroughStack(stack)
    stackIsSorted = true
    newstack = []
    currentElement, *tail = stack
    tail.each do |element|
      if currentElement < element
        newstack.push(currentElement)
        currentElement = element
      else
        newstack.push(element)
        stackIsSorted = false
      end
    end
    [stackIsSorted, newstack.push(currentElement)]
  end
end
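Illustrative usage of the rewritten class (my own addition, not part of the original answer):

    sorter = Sorter.new
    p sorter.sort([4, 2, 3, 1])   # => [1, 2, 3, 4]
    p sorter.sort([])             # => []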
_webmaster.104348
I'm running a VPS server, and for many years I've been using http:// for my main website. In fact, to avoid the www vs. non-www issue, duplicate content issues, etc. in search engines, I even have the code below in my .htaccess currently:

RewriteEngine On
Options +FollowSymlinks
RewriteBase /
RewriteCond %{HTTP_HOST} !^www.example.com$ [NC]
RewriteCond %{REQUEST_URI} !^/[0-9]+\..+\.cpaneldcv$
RewriteCond %{REQUEST_URI} !^/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]

Aside from the above, I use absolute paths showing http:// for internal links to deter content scrapers.

Now, recently, due to an update in WHM/cPanel, and because Comodo/cPanel is now issuing free SSL certificates and AutoSSL is enabled by default, I noticed that I can now access the https:// version of my site. That is, if I type https:// manually, the browser will state it is secure. For one, that is a good thing, since I'm planning to migrate to https:// in the future and I believe my site is ready for that. But I don't plan to do the migration now, since that would be a very time-consuming process.

Given my conditions above, if I just leave AutoSSL enabled by default and the free cPanel certificate installed on my domain name, will this cause duplicate content issues for many search engines, since I believe they see http and https as different sites? I'm not very sure if major search engines are smart enough to realize that although the https:// version of my site is apparently accessible, my links are still all http://. What is your suggestion?
Will installing SSL certificate cause duplicate content penalty?
https
You will run into duplicate content issues sooner or later. It is just a matter of time.

You have two choices (three actually, but really just two).

1] Redirect HTTP to HTTPS, or HTTPS to HTTP. I recommend HTTP to HTTPS since you seem to have what you need; however, I understand not taking the leap. HTTPS adds an additional trust factor. Not wanting to take the leap today, which is perfectly understandable, you can just make sure that there is a redirect from HTTPS to HTTP.

2] Add a canonical tag in each page that points to the same page using either HTTP or HTTPS. Since you seem to prefer HTTP for now, I suggest pointing to HTTP.

3] Do nothing. I do not recommend option 3. [insert cheesy grin]

Of the options, option 1 is the easiest. Redirecting HTTPS to HTTP for now will solve the problem quickly.

Here is a Q&A on just your scenario: https://stackoverflow.com/questions/12999910/https-to-http-redirect-using-htaccess
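A sketch of option 1 in .htaccess terms, reusing the question's own mod_rewrite setup (adapt the placement to your existing rule order):

    # send any HTTPS request back to HTTP for now
    RewriteCond %{HTTPS} on
    RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]

And option 2 is a single line in each page's <head>:

    <link rel="canonical" href="http://www.example.com/current-page.html">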
_codereview.166030
Today I implemented a C++11 template class which allows for Nullable types. The reason for this is that std::optional is not yet available (I use C++11/14) and I wanted to practice a bit, so I decided to make one myself. Also for portability reasons. (The code has to compile on multiple platforms, namely Linux and Windows, GCC/MSVC.)

Can you guys take a look at it and point me to some improvements/changes that might be needed? Here is the code:

Class Definition:

#include <algorithm>

template<typename T>
class Nullable final
{
private:
    union Data
    {
        Data(){};
        ~Data(){};
        Data(const Data&) = delete;
        Data(Data&&) = delete;
        Data& operator=(const Data&) = delete;
        Data& operator=(Data&&) = delete;
        T m_Data;
    } m_Data;
    bool m_IsUsed = false;

public:
    Nullable() = default;
    ~Nullable();
    Nullable(T object);
    Nullable(const Nullable& object);
    Nullable(Nullable&& object);
    Nullable& operator=(const Nullable& object);
    Nullable& operator=(Nullable&& object);
    Nullable& operator=(const T& object);
    Nullable& operator=(T&& object);

    bool isInitialized();
    void initialize(T&& object);
    void initialize(const T& object);
    void reset();
    void reset(const T& object);
    void reset(T&& object);
};

Class Implementation: (in the same header file)

template<typename T>
void Nullable<T>::initialize(T&& object)
{
    m_IsUsed = true;
    m_Data.m_Data = std::move(object);
}

template<typename T>
void Nullable<T>::initialize(const T& object)
{
    m_IsUsed = true;
    m_Data.m_Data = object;
}

template<typename T>
Nullable<T>::~Nullable()
{
    if(m_IsUsed)
        m_Data.m_Data.~T();
}

template<typename T>
Nullable<T>& Nullable<T>::operator=(const Nullable<T>& rhs)
{
    if(&rhs == this)
        return *this;
    if(isInitialized())
    {
        m_Data.m_Data.~T();
    }
    m_Data.m_Data = rhs.m_Data.m_Data;
    m_IsUsed = true;
    return *this;
}

template<typename T>
Nullable<T>& Nullable<T>::operator=(Nullable<T> && rhs)
{
    if(&rhs == this)
        return *this;
    if(isInitialized())
    {
        m_Data.m_Data.~T();
    }
    m_Data.m_Data = std::move(rhs.m_Data.m_Data);
    m_IsUsed = true;
    rhs.m_IsUsed = false;
    return *this;
}

template<typename T>
Nullable<T>::Nullable(const Nullable<T> & rhs)
{
    if(isInitialized())
    {
        m_Data.m_Data.~T();
    }
    m_Data.m_Data = rhs.m_Data.m_Data;
    m_IsUsed = true;
}

template<typename T>
Nullable<T>::Nullable(Nullable<T> && rhs)
{
    if(isInitialized())
    {
        m_Data.m_Data.~T();
    }
    m_Data.m_Data = std::move(rhs.m_Data.m_Data);
    rhs.m_IsUsed = false;
    m_IsUsed = true;
}

template<typename T>
bool Nullable<T>::isInitialized()
{
    return m_IsUsed;
}

template<typename T>
void Nullable<T>::reset()
{
    m_Data.m_Data.~T();
}

template<typename T>
void Nullable<T>::reset(const T& object)
{
    if(&object == this)
        return;
    if(isInitialized())
    {
        m_Data.m_Data.~T();
    }
    m_Data.m_Data = object;
    m_IsUsed = true;
}

template<typename T>
void Nullable<T>::reset(T&& object)
{
    if(&object == this)
        return;
    if(isInitialized())
    {
        m_Data.m_Data.~T();
    }
    m_Data.m_Data = std::move(object);
    m_IsUsed = true;
}

template<typename T>
Nullable<T>& Nullable<T>::operator=(const T& object)
{
    if(&object == &this->m_Data.m_Data)
        return *this;
    if(isInitialized())
    {
        m_Data.m_Data.~T();
    }
    m_Data.m_Data = object;
    m_IsUsed = true;
    return *this;
}

template<typename T>
Nullable<T>& Nullable<T>::operator=(T&& object)
{
    if(&object == &this->m_Data.m_Data)
        return *this;
    if(isInitialized())
    {
        m_Data.m_Data.~T();
    }
    m_Data.m_Data = std::move(object);
    m_IsUsed = true;
    return *this;
}

template<typename T>
Nullable<T>::Nullable(T object)
{
    m_Data.m_Data = object;
    m_IsUsed = true;
}
C++ Nullable template class
c++;template;optional
Naming

I would rename initialize because it's more of a setter. You can usually initialize an object only once (typically in the constructor), but there is no problem with calling initialize twice.

Accessing the object

There is no method to access the object. Maybe that's wanted because you are using friendship of some kind that you didn't paste? If that is the case, I highly recommend that you use the Attorney-Client idiom if you are not already, so that your friend class/method can only access the m_Data member and not the boolean.

Additional features

There are some features you may (or may not) want to add:

- a .get() method
- a T&& constructor
- a bool conversion operator
- an indirection operator
- a structure dereference operator
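A sketch of what a few of those could look like inside the class (illustrative only; precondition checking is left to the caller):

    // inside class Nullable
    explicit operator bool() const { return m_IsUsed; }    // bool conversion
    T&       get()       { return m_Data.m_Data; }         // caller checks first
    const T& get() const { return m_Data.m_Data; }
    T*       operator->() { return &m_Data.m_Data; }       // structure dereference
    T&       operator*()  { return m_Data.m_Data; }        // indirection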
_cs.1998
Let $G = (V,E)$ be a graph having $n$ vertices, none of which are isolated, and $n-1$ edges, where $n \geq 2$. Show that $G$ contains at least two vertices of degree one.

I have tried to solve this problem by using the property $\sum_{v \in V} \operatorname{deg}(v) = 2|E|$. Can this problem be solved by using the pigeonhole principle?
Low-degree nodes in sparse graphs
graph theory;proof techniques
Yes, it can.

You have $n-1$ edges, which means $2n-2$ holes for node-pigeons. If every node is supposed to have degree two (or more), we have to place (at least) two pigeons for each node; that makes a total of $2n$ pigeons.

By said principle, (at least) two pigeons do not find a solitary hole, which means (at least) one node is isolated or (at least) two nodes have only one edge. As no node is isolated by assumption, you have (at least) two nodes with degree one.
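The same conclusion also follows from the degree-sum identity mentioned in the question: if at most one vertex had degree one, then at least $n-1$ vertices would have degree $\ge 2$ and the remaining one degree $\ge 1$ (none are isolated), giving $\sum_{v \in V} \operatorname{deg}(v) \ge 2(n-1)+1 = 2n-1 > 2n-2 = 2|E|$, a contradiction.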
_unix.36511
I use a VPN to connect my development machine to my school's CS department. The development machine runs Ubuntu, as we do C programming in Unix. I used vpnc to do that. The school uses some DNS entries that only resolve on their DNS servers, e.g., internalserver.csdept.school.edu.

I am normally attached to the VPN whenever booted, for convenience. However, I noticed the other day that when I disconnect the VPN, all my DNS queries fail. This obviously means that vpnc set up the school's DNS to be used. However, I'd rather not use their DNS all the time (tracking and privacy and whatnot). Is there a way I can restore my ISP's DNS and then, if a lookup fails, have it use my school's DNS?
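A true try-ISP-then-school fallback is hard to express with plain resolv.conf, but per-domain forwarding with a local dnsmasq achieves the same goal; a sketch, where the IP is a placeholder for the school's DNS server:

    # /etc/dnsmasq.conf
    server=/csdept.school.edu/10.0.0.53   # school names go to the school DNS
    # all other queries follow the normal upstream servers (your ISP's)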
Using a secondary DNS when lookup fails in primary?
dns
null
_softwareengineering.257732
I am creating a closed-source application whose task is to launch other open-source applications (similar to the Start menu of Windows). So I have included various open-source applications under licences like:

- GNU GPL
- LGPL
- MIT License
- Apple Public Source License Version 2.0
- Apache
- Creative Commons
- CeCILL

I have not modified the open-source applications and I am ready to publish the source of the open-source applications, but I do not want to disclose the source code of my launcher application.

I intend to offer the whole bundle for free with commercial hardware, i.e. I am charging only for the hardware and not for the software. Can I do so?

Further, in the future I intend to sell the whole software bundle by including only the cost incurred for developing the closed-source module. Can I do so?

I have gone through the GPL documentation, but I am not able to relate the scenario to the information in the docs.

Please explain the answer w.r.t. the individual licences, so that I can exclude the applications in case their license does not permit me to do so!
Shipping unmodified open source applications with closed source apps
gpl;mit license;lgpl;apache license;closed source
null
_codereview.144441
I want to pass errors in errors in order to know exactly the chain of causes, but without augmenting or even reading any stack until this becomes really necessary. And this is how I would achieve this:

class RError extends Error {
    constructor(options = {}) {
        super();
        this.name = options.name;
        this.message = options.message;
        this.cause = options.cause;
    }
}

Example usage:

try {
    throw new RError({
        name: 'FOO',
        message: 'Something went wrong.',
        cause: new RError({
            name: 'BAR',
            message: 'I messed up.'
        })
    });
} catch (err) {
    console.error(err);
}

Here is the result:

{ FOO: Something went wrong.
    at RError (/Users/boris/Workspace/playground/index.js:5:9)
    at Object.<anonymous> (/Users/boris/Workspace/playground/index.js:13:11)
    at Module._compile (module.js:541:32)
    at Object.Module._extensions..js (module.js:550:10)
    at Module.load (module.js:458:32)
    at tryModuleLoad (module.js:417:12)
    at Function.Module._load (module.js:409:3)
    at Function.Module.runMain (module.js:575:10)
    at startup (node.js:160:18)
    at node.js:456:3
  name: 'FOO',
  message: 'Something went wrong.',
  cause:
   { BAR: I messed up.
       at RError (/Users/boris/Workspace/playground/index.js:5:9)
       at Object.<anonymous> (/Users/boris/Workspace/playground/index.js:16:16)
       at Module._compile (module.js:541:32)
       at Object.Module._extensions..js (module.js:550:10)
       at Module.load (module.js:458:32)
       at tryModuleLoad (module.js:417:12)
       at Function.Module._load (module.js:409:3)
       at Function.Module.runMain (module.js:575:10)
       at startup (node.js:160:18)
       at node.js:456:3
     name: 'BAR',
     message: 'I messed up.',
     cause: undefined } }

Since I couldn't find anyone on the web who uses this technique, I'm wondering if there is any drawback to this approach.

Update: I've created two fiddles, an ES5 and an ES6 version (run them and see the results in the console).

es5 version: https://jsfiddle.net/borisdiakur/vkj9e9fx/3/
es6 version: https://jsfiddle.net/borisdiakur/3mtfwqv1/1/

Update: I've created a module: https://github.com/borisdiakur/rerror
An object for passing a chain of errors in JavaScript
javascript;error handling
Cosmetically speaking, you can use the ES6 destructuring operator to make your arguments a bit more readable:class RError extends Error { constructor({name, message, cause}) { super(name, message); this.cause = cause; }}As for drawbacks, the only one I can think of is having to do it yourself instead of having the language do it for you (Like, in example, in Java, which is where I guess this is coming from). So you'll need to remember to do it every time you have a rethrown exception. Personally, I wouldn't find this acceptable, as I would surely forget.
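For illustration, a hedged sketch of the destructured version in use; note that Error's own constructor only uses a message argument, so this variant passes just that to super (a small deviation from the snippet above):

class RError extends Error {
  constructor({ name, message, cause } = {}) {
    super(message);
    this.name = name;
    this.cause = cause;
  }
}

try {
  throw new RError({
    name: 'FOO',
    message: 'Something went wrong.',
    cause: new RError({ name: 'BAR', message: 'I messed up.' })
  });
} catch (err) {
  console.error(err.name, '<-', err.cause && err.cause.name); // FOO <- BAR
}

The = {} default keeps new RError() with no arguments working, matching the original options = {}.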
_softwareengineering.263205
While the concept of IoC isn't foreign to me, I'm new to Unity and I'm having trouble connecting the metaphorical dots, so to speak.

In our project we have a class library for logic, then several class libraries with repositories implementing common interfaces (they provide access and allow CRUD operations on different data sources). Finally, this is all used together in either an MVC web site or a SharePoint site. We wanted to implement Unity to keep a single configuration and make the whole thing better.

But... there are some design issues I'm facing. For example, the MVC web site, as far as communicating with the underlying SQL database is concerned, can have everything it needs specified in the configuration section for the Unity container. However, said web site also needs to occasionally connect with SharePoint... and in order to create a SharePoint repository, I need to create an SPWeb object and pass it to the repository constructor.

I CAN do that using ParameterOverride when resolving the type, but this doesn't... feel right. There are cases where a larger number of parameters need to be overridden this way; to resolve a logic class I need to pass repositories, which in turn are resolved by passing a manually created SPWeb object... When this happens it seems I'm no longer following an IoC pattern. The kicker here is that I cannot have the SPWeb object defined in the Unity container!

Is there a pattern (a practical example would be superb) where Unity is used but certain values are being passed TO the container while resolving types, without the need to use ParameterOverride?

EDIT

I feel I need to provide a more specific example of why ParameterOverride is a problem. Below is a fragment of code used by a SharePoint timer job. The job in question synchronizes some data between SharePoint and SQL, so it requires two different repositories implementing the same interface. Because the code is run from SharePoint, it cannot access a DataContext (Entity Framework, code-first) defined in global configuration, because there's no web.config file to read the connection string from. Also, due to the weirdness that is SharePoint, I cannot get the current SPWeb object, and instead need to create it on the fly.

The end result is a ton of using statements, and a lot of parameters need to be overridden when using Resolve. Also, the need to pass the parameter NAME as a string would make this really difficult to code for anyone else; this information isn't known from the interface alone.
public override void Execute(Guid targetInstanceId) { var siteUrl = Properties[Site] as string; var sqlConnection = Properties[SQL] as string; if (String.IsNullOrEmpty(siteUrl)) throw new ArgumentNullException(Site URL is missing.); if (String.IsNullOrEmpty(sqlConnection)) throw new ArgumentNullException(SQL connection string is missing.); using (var site = new SPSite(siteUrl, SPUserToken.SystemAccount)) using (var web = site.OpenWeb()) using (var spPollRepository = Configuration.ServiceLocator.SharePointContainer.Container.Resolve<IPollRepository>(new ParameterOverride(web, web))) using (var ctx = Configuration.ServiceLocator.SharePointContainer.Container.Resolve<Repositories.PollsDataContext>(Configuration.ServiceLocator.SqlNamedRegistration, new ParameterOverride(nameOrConnectionString, sqlConnection), new ParameterOverride(initialize, false))) using (var sqlPollRepository = Configuration.ServiceLocator.SharePointContainer.Container.Resolve<IPollRepository>(Configuration.ServiceLocator.SqlNamedRegistration, new ParameterOverride(ctx, ctx))) { IPollsSyncLogic logic = Configuration.ServiceLocator.SharePointContainer.Container.Resolve<Logic.IPollsSyncLogic>(new ParameterOverride(sqlRepository, sqlPollRepository), new ParameterOverride(spRepository, spPollRepository)); logic.SyncPolls(); } }There's a big chance I'm using Unity... wrong. Or that some other fundamental definition has been mixed up. I hope this clears up any confusion regarding the question.
IoC, Unity and passing parameters (or a way to avoid doing so)
ioc
null
_unix.252406
I have a server running OpenSSH. I've setup an SFTP server that users can log into. I have created user userName with a folder at /sftp/userName with permissions of 555 and owner root. This folder has two subfolders, upload and download with permissions of 755 and 555 respectively. Both subfolders are owned by userName. I have setup my server to prevent userName from accessing anything outside of /sftp/userName and this folder acts as the root directory when userName is logged in.My goals:allow userName to login and see the download and upload folders within /sftp/userName.prevent userName from uploading or downloading anything in the folder /sftp/userName.allow userName to enter into download and download files but not upload files.allow userName to enter into upload and upload/download/delete files.All of this behavior works correctly and seems like a good setup for my situation.My problem is that userName can change permissions on the upload and download folders using chmod xxx.Questions:How can I prevent userName from making changes to folder permissions on upload and download?Are there other gotchas with my setup that I should be aware of?
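For reference, a sketch of the layout described above, using the names and modes from the question (run as root):

mkdir -p /sftp/userName/upload /sftp/userName/download
chown root:root /sftp/userName
chmod 555 /sftp/userName
chown userName:userName /sftp/userName/upload /sftp/userName/download
chmod 755 /sftp/userName/upload
chmod 555 /sftp/userName/download

On the chmod question: only a directory's owner (or root) may change its mode, so as long as userName owns upload and download, chmod cannot be blocked. A commonly suggested tweak (an assumption, not from the question) is to make root own download too (mode 755 still lets the user enter and read it), and to make upload a root-owned directory with a permissive mode such as 0777; inside the chroot no other users can reach it anyway, so the user can still create and delete files there but can no longer chmod either directory.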
Proper permissions on SFTP upload and download subdirectories in a user's home directory?
permissions;users;sftp;openssh
null
_cs.28531
I'm trying to prove that

$\exists L_1, L_2$ such that $L_1$ and $L_2$ are context-free languages $\land\; L_3 = L_1 \cap L_2$ is an undecidable language.

I know that context-free languages are not closed under intersection. This means that I can produce an $L_3$ which is undecidable. An example would be $L_1 = \{a^n \mid n \in \mathbb{N}\}$ and $L_2 = \{0\}$, so that $L_1 \cap L_2 = \emptyset$.

Is this a correct proof? If not, how can I prove this theorem? Is the empty language decidable?
Can an intersection of two context-free languages be an undecidable language?
computability;context free;undecidability
Context-free languages are decidable, and decidable languages are closed under intersection. So, though the intersection of two CF languages may not be CF, it is decidable.

Remarks on your example:

$\emptyset = \{\} \neq \{0\}$
$L_1 \cap \emptyset = \emptyset$, which is context-free.
You cannot prove your claim, because it is wrong.
The empty language is decidable: the answer is always "no, this string is not in the empty set".
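A classic illustration (not from the original answer) of an intersection that leaves the context-free languages but stays decidable:

$L_1 = \{a^n b^n c^m \mid n, m \ge 0\}$ and $L_2 = \{a^m b^n c^n \mid n, m \ge 0\}$ are both context-free, yet
$L_1 \cap L_2 = \{a^n b^n c^n \mid n \ge 0\}$,

which is not context-free (by the pumping lemma), but is obviously decidable: just count the symbols.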
_codereview.59594
If the linked list is 1->2->3->4 then the output should be 1->3. If the linked list is 1->2->3->4->5, then the output should be 1->3->5.The question is attributed to GeeksForGeeks. I'm looking for code-review, best practices and optimizations.public class DeleteAlternate<T> { private Node<T> first; private Node<T> last; private int size; public DeleteAlternate(List<T> items) { for (T item : items) { create(item); } } private void create (T item) { Node<T> n = new Node<>(item); if (first == null) { first = last = n; } else { last.next = n; last = n; } size++; } private final class Node<T> { private Node<T> next; private T item; Node (T item) { this.item = item; } } public void deleteAlternate ( ) { if (first == null) { throw new IllegalStateException(The first node is null.); } Node<T> node = first; // node == null, if even nodes are present in LL // node.next == null, if odd nodes are present in LL while (node != null && node.next != null) { node.next = node.next.next; node = node.next; } } // size of new linkedlist is unknown to us, in such a case simply return the list rather than an array. public List<T> toList() { final List<T> list = new ArrayList<>(); if (first == null) return list; for (Node<T> x = first; x != null; x = x.next) { list.add(x.item); } return list; } @Override public int hashCode() { int hashCode = 1; for (Node<T> x = first; x != null; x = x.next) hashCode = 31*hashCode + x.hashCode(); return hashCode; } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; DeleteAlternate<T> other = (DeleteAlternate<T>) obj; Node<T> currentListNode = first; Node<T> otherListNode = other.first; while (currentListNode != null && otherListNode != null) { if (currentListNode.item != otherListNode.item) return false; currentListNode = currentListNode.next; otherListNode = otherListNode.next; } return currentListNode == null && otherListNode == null; }}public class DeleteAlternateTest { @Test public void test1() { DeleteAlternate<Integer> dAlternate1 = new DeleteAlternate<>(Arrays.asList(1)); dAlternate1.deleteAlternate(); assertEquals(new DeleteAlternate<>(Arrays.asList(1)), dAlternate1); } @Test public void test2() { DeleteAlternate<Integer> dAlternate2 = new DeleteAlternate<>(Arrays.asList(1, 2)); dAlternate2.deleteAlternate(); assertEquals(new DeleteAlternate<>(Arrays.asList(1)), dAlternate2); } @Test public void test3() { DeleteAlternate<Integer> dAlternate3 = new DeleteAlternate<>(Arrays.asList(1, 2, 3)); dAlternate3.deleteAlternate(); assertEquals(new DeleteAlternate<>(Arrays.asList(1, 3)), dAlternate3); } @Test public void test4() { DeleteAlternate<Integer> dAlternate4 = new DeleteAlternate<>(Arrays.asList(1, 2, 3, 4)); dAlternate4.deleteAlternate(); assertEquals(new DeleteAlternate<>(Arrays.asList(1, 3)), dAlternate4); } @Test public void test5() { DeleteAlternate<Integer> dAlternate5 = new DeleteAlternate<>(Arrays.asList(1, 2, 3, 4, 5)); dAlternate5.deleteAlternate(); assertEquals(new DeleteAlternate<>(Arrays.asList(1, 3, 5)), dAlternate5); }}
Delete alternate nodes of a linked list
java;linked list;tree
null
_unix.141130
We have a SAMBA share on a centos 6 machine. Question: Can we mount this SAMBA 3 share over the internet? (without VPN/SSH tunnel, so directly!)
Is SAMBA routable?
samba;smb
It depends on some aspects of the protocols and implementations. NetBIOS/NetBEUI is not routable at all, and it works by sending broadcasts. Workgroups, domain joining, browsing, hostname updates and other features of the SMB suite will be restricted to your local network due to those limitations. It will work in a local network environment but not over TCP/IP. However, to overcome this issue, NBT (NetBIOS over TCP/IP) and WINS servers were implemented, so things like hostname updates could be done on larger networks where routing is needed.

SMB itself is just an upper-layer protocol (presentation & application), and it will consume lower-layer protocol (network, transport, session) services. It will work across networks, but it heavily depends on the implementation/version of SMB you are using, and on the operating system.

The Good: It should work (in theory). It's just a matter of reaching the IP address where this share is published. Port redirection is probably needed on your modem or your firewall if you are directly attached to the Internet.

The Bad: SMB is not safe at all. The first purpose of VPNs (IPsec, OpenVPN, PPTP ...) in this kind of setup is to solve the encryption and security issues of the SMB protocol, not routing ones. Edit: Maybe another layer of security can be added with Server Signing with samba 3.3.x+.

The Ugly: Your ISP could be blocking this kind of traffic (445/tcp). SMB does not have any kind of check-summing/verification, and it could have performance issues on high-latency networks.

tl;dr: It's better to use other protocols like WebDAV, sftp, scp or ftp.
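If one did want to try it directly despite the advice above, a hedged sketch of mounting the share across the Internet from a Linux client (the address and credentials are placeholders; 445/tcp must be port-forwarded on the server side, and whether the vers= option is needed depends on the client kernel):

sudo mount -t cifs //203.0.113.10/share /mnt/share \
     -o username=someuser,port=445,vers=1.0

The traffic is still unencrypted SMB, so this only demonstrates reachability; it does not address the security problems listed under The Bad.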
_cs.824
Assume I have a list of functions, for example $\qquad n^{\log \log(n)}, 2^n, n!, n^3, n \ln n, \dots$How do I sort them asymptotically, i.e. after the relation defined by$\qquad f \leq_O g \iff f \in O(g)$,assuming they are indeed pairwise comparable (see also here)? Using the definition of $O$ seems awkward, and it is often hard to prove the existence of suitable constants $c$ and $n_0$.This is about measures of complexity, so we're interested in asymptotic behavior as $n \to +\infty$, and we assume that all the functions take only non-negative values ($\forall n, f(n) \ge 0$).
Sorting functions by asymptotic growth
asymptotics;landau notation;reference question
If you want rigorous proof, the following lemma is often useful resp. more handy than the definitions.If $c = \lim_{n\to\infty} \frac{f(n)}{g(n)}$ exists, then$c=0 \qquad \ \,\iff f \in o(g)$,$c \in (0,\infty) \iff f \in \Theta(g)$ and$c=\infty \quad \ \ \ \iff f \in \omega(g)$.With this, you should be able to order most of the functions coming up in algorithm analysis. As an exercise, prove it!Of course you have to be able to calculate the limits accordingly. Some useful tricks to break complicated functions down to basic ones are:Express both functions as $e^{\dots}$ and compare the exponents; if their ratio tends to $0$ or $\infty$, so does the original quotient. More generally: if you have a convex, continuously differentiable and strictly increasing function $h$ so that you can re-write your quotient as$\qquad \displaystyle \frac{f(n)}{g(n)} = \frac{h(f^*(n))}{h(g^*(n))}$, with $g^* \in \Omega(1)$ and$\qquad \displaystyle \lim_{n \to \infty} \frac{f^*(n)}{g^*(n)} = \infty$,then $\qquad \displaystyle \lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty$.See here for a rigorous proof of this rule (in German).Consider continuations of your functions over the reals. You can now use L'Hpital's rule; be mindful of its conditions!Have a look at the discrete equivalent, StolzCesro.When factorials pop up, use Stirling's formula:$\qquad \displaystyle n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n$It is also useful to keep a pool of basic relations you prove once and use often, such as:logarithms grow slower than polynomials, i.e. $\qquad\displaystyle (\log n)^\alpha \in o(n^\beta)$ for all $\alpha, \beta > 0$.order of polynomials: $\qquad\displaystyle n^\alpha \in o(n^\beta)$ for all $\alpha < \beta$.polynomials grow slower than exponentials: $\qquad\displaystyle n^\alpha \in o(c^n)$ for all $\alpha$ and $c > 1$.It can happen that above lemma is not applicable because the limit does not exist (e.g. when functions oscillate). In this case, consider the following characterisation of Landau classes using limes superior/inferior:With $c_s := \limsup_{n \to \infty} \frac{f(n)}{g(n)}$ we have$0 \leq c_s < \infty \iff f \in O(g)$ and$c_s = 0 \iff f \in o(g)$.With $c_i := \liminf_{n \to \infty} \frac{f(n)}{g(n)}$ we have$0 < c_i \leq \infty \iff f \in \Omega(g)$ and$c_i = \infty \iff f \in \omega(g)$.Furthermore,$0 < c_i,c_s < \infty \iff f \in \Theta(g) \iff g \in \Theta(f)$ and$ c_i = c_s = 1 \iff f \sim g$.Check here and here if you are confused by my notation. Nota bene: My colleague wrote a Mathematica function that does this successfully for many functions, so the lemma really reduces the task to mechanical computation. See also here.
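As a small worked illustration of the first trick (not part of the original answer), compare $n^{\log \log n}$ with $n^3$ from the question's list by writing both as powers of $e$:

$n^{\log\log n} = e^{(\log\log n)(\log n)}$ and $n^3 = e^{3 \log n}$, and the ratio of the exponents is
$\frac{(\log\log n)(\log n)}{3\log n} = \frac{\log\log n}{3} \to \infty$,

so $n^3 \in o\left(n^{\log\log n}\right)$: the quasi-polynomial eventually beats every fixed-degree polynomial.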
_cs.45482
I'm looking for the name of a binary tree which is almost degenerate: at least one child of every interior node in the tree is a leaf.(Image from Penn State course STAT 557, Data Mining, lesson 10.)
Term for most degenerate tree with two children on every inner node
graph theory;terminology;trees
null
_unix.367393
I'm generating a JSON file with a shell script, but I can't find a solution to automatically get rid of the last comma just before the ending }. Here is my code:

echo { >> out_file
for i in 3 4 5 6 7 #ends at 298
do
y=$i
sommeMag=`awk -F '=' '/'$y'/{sommeMag+=$2}END{print sommeMag}'` /myfolder/...

store=mag$y

if [ -z $sommeMag ] #checks if variable is empty
 then
 echo \ $store\:0,>> out_file
 else
 echo \$store\:$sommeMag, >> out_file
fi
done
echo } >> out_file

The file ends this way:

{
 mag297:0,
 mag298:0, <-- syntax error
}

The file should end this way:

{
...
 mag297:0,
 mag298:0 <-- no more comma
}

How can I manage that?
The code has been edited to be more readable here.
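One common pattern for this (a sketch, not the poster's script) is to print the separator before every element except the first, so a trailing comma never appears:

#!/bin/sh
printf '{' > out_file
sep=''
for i in 3 4 5 6 7   # ends at 298
do
    printf '%s\n "mag%s":0' "$sep" "$i" >> out_file
    sep=','          # from the second item on, a comma is printed first
done
printf '\n}\n' >> out_file

The same idea drops straight into the existing loop: emit $sep before each mag entry instead of a comma after it.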
JSON correct construction
shell script;json
null
_codereview.163167
I am trying to perform integrity checks on input data before the core of my algorithm gets to work by making sure inputs are neither NAN or out of a reasonable set of bounds.I have defined the following class for a flag to indicate the status of a particular input:class IM_flag{private: // Flag Status //------------ // Convention (regular): // 0 = no latched faults and currently OK (or unchecked) // -1 = input is currently NAN and has no latched faults // -2 = input has latched a NAN condition since last clear but is currently OK // -3 = input has latched a NAN and is currently NAN // +1 = input is currently OOB and has no latched faults // +2 = input has latched an OOB condition since last clear but is currently OK // +3 = input has latched an OOB and is currently OOB // // +7 = input has latched a NAN and is currently OOB or has latched an OOB and is currently NAN // +8 = input has latched both NAN and OOB conditions but is currently OK // +9 = input has latched both NAN and OOB conditions and is currently NAN or OOB // // Convention (special cases): // +/- 1x = mismatched vs. a value expected to be the same (offsets regular status code by 10) int status; // Flag Internals //--------------- int NANcount; // number of NAN since last clear int OOBhighCount; // number of high limit exceedances since last clear int OOBlowCount; // number of low limit exceedances since last clear int NANisLatched; // 0=NAN latch limit not exceeded; 1=NAN latch limit exceeded int OOBisLatched; // 0=OOB latch limit not exceeded; 1=OOB latch limit exceeded int NANlatchLimit; // latch NAN error once limit exceeded int OOBlatchLimit; // latch OOB error once limit exceeded int OOBclampMode; // 0=report OOB only; 1=report OOB and coerce to upper/lower limitpublic: // Constructors //------------- IM_flag() : status(0), NANcount(0), OOBhighCount(0), OOBlowCount(0), NANisLatched(0), OOBisLatched(0), NANlatchLimit(2), OOBlatchLimit(2), OOBclampMode(0) {} IM_flag(int clamp) : status(0), NANcount(0), OOBhighCount(0), OOBlowCount(0), NANisLatched(0), OOBisLatched(0), NANlatchLimit(2), OOBlatchLimit(2), OOBclampMode(clamp) {} IM_flag(int clamp, int NANlim, int OOBlim) : status(0), NANcount(0), OOBhighCount(0), OOBlowCount(0), NANisLatched(0), OOBisLatched(0), NANlatchLimit(NANlim), OOBlatchLimit(OOBlim), OOBclampMode(clamp) {} // Operators //---------- operator int& () { return status; } operator int const& () { return status; } // Utility Functions //------------------ int getStatus() { return status; } void clear() { status = NANcount = OOBhighCount = OOBlowCount = NANisLatched = OOBisLatched = 0; } int getNANcount() { return NANcount; } int getOOBhighCount() { return OOBhighCount; } int getOOBlowCount() { return OOBlowCount; } int getNANlatchLimit() { return NANlatchLimit; } int getOOBlatchLimit() { return OOBlatchLimit; } int getOOBclampMode() { return OOBclampMode; } int setNANlatchLimit(int newLim) { int prev = NANlatchLimit; NANlatchLimit = newLim; return prev; } int setOOBlatchLimit(int newLim) { int prev = OOBlatchLimit; OOBlatchLimit = newLim; return prev; } int setOOBclampMode(int newMode) { int prev = OOBclampMode; OOBclampMode = newMode; return prev; } // Checking Functions //------------------- int check(double *input, double lowLim = DOUBLE_NEG_INF, double highLim = DOUBLE_POS_INF) { int badNow = 0; // local variable for tracking whether the input is currently bad // Check the input first if (ISNAN(*input)) // check for NAN first { NANcount++; badNow = -1; } else if (*input>highLim) // check high limit 
exccedance next { OOBhighCount++; badNow = 1; if (1==OOBclampMode) *input = highLim; } else if (*input<lowLim) // check low limit exccedance last { OOBlowCount++; badNow = 1; if (1==OOBclampMode) *input = lowLim; } // Check latch limits if ((!NANisLatched) && (NANcount>=NANlatchLimit)) NANisLatched = 1; if ((!OOBisLatched) && ((OOBhighCount+OOBlowCount)>=OOBlatchLimit)) OOBisLatched = 1; // Set status flag if ((1==NANisLatched) && (1==OOBisLatched) && (0!=badNow)) status = 9; // +9 = input has latched both NAN and OOB conditions and is currently NAN or OOB else if ((1==NANisLatched) && (1==OOBisLatched) && (0==badNow)) status = 8; // +8 = input has latched both NAN and OOB conditions but is currently OK else if (((1==NANisLatched) && (0==OOBisLatched) && (1==badNow)) || ((0==NANisLatched) && (1==OOBisLatched) && (-1==badNow))) status = 7; // +7 = input has latched a NAN and is currently OOB or has latched an OOB and is currently NAN else if ((0==NANisLatched) && (1==OOBisLatched) && (1==badNow)) status = 3; // +3 = input has latched an OOB and is currently OOB else if ((0==NANisLatched) && (1==OOBisLatched) && (0==badNow)) status = 2; // +2 = input has latched an OOB condition since last clear but is currently OK else if ((0==NANisLatched) && (0==OOBisLatched) && (1==badNow)) status = 1; // +1 = input is currently OOB and has no latched faults else if ((1==NANisLatched) && (0==OOBisLatched) && (-1==badNow)) status = -3; // -3 = input has latched a NAN and is currently NAN else if ((1==NANisLatched) && (0==OOBisLatched) && (0==badNow)) status = -2; // -2 = input has latched a NAN condition since last clear but is currently OK else if ((0==NANisLatched) && (0==OOBisLatched) && (-1==badNow)) status = -1; // -1 = input is currently NAN and has no latched faults else if ((0==NANisLatched) && (0==OOBisLatched) && (0==badNow)) status = 0; // 0 = no latched faults and currently OK else // shouldn't be possible to get to this else, but include it to identify problems status = -99999; return status; } int checkMismatch(double *input, double *expectedMatch, double tol = DOUBLE_TOL) { // Test equality within tolerance and offset status if unequal if (abs(*input-*expectedMatch)>tol) { // +/- 1x = mismatched vs. 
a value expected to be the same (offsets regular status code by 10) if (status<0) status -= 10; // make a negative error code more negative by 10 else status += 10; // make a zero or positive error code more positive by 10 } return status; }};Then I defined another class to contain all the flags and provide a couple outputs which summarize the flags:class IM_InputsCheck{public: // Summary status variables //------------------------- int errorLatchedCount; // counter for inputs currently latch for NAN and/or OOB int errorActiveCount; // counter for inputs currently NAN or OOB int errorMismatchCount; // counter for inputs currently mismatched // Array of flags //--------------- IM_flag flags[NUMEL_InputList]; // Constructors //------------- IMstruct_InputsCheck() : errorLatchedCount(0), errorActiveCount(0), errorMismatchCount(0) {} IMstruct_InputsCheck(int clamp) : errorLatchedCount(0), errorActiveCount(0), errorMismatchCount(0) { for (int ii=0; ii<NUMEL_InputList; ii++) flags[ii] = IM_flag(clamp); } IMstruct_InputsCheck(int clamp, int NANlim, int OOBlim) : errorLatchedCount(0), errorActiveCount(0), errorMismatchCount(0) { for (int ii=0; ii<NUMEL_InputList; ii++) flags[ii] = IM_flag(clamp,NANlim,OOBlim); } // Utility Functions //------------------ void summarize() { // Reset count of active, latched, and mismatch errors in inputs errorActiveCount = errorLatchedCount = errorMismatchCount = 0; // Loop through all inputs for (int ii=0, status=0; ii<NUMEL_InputList; ii++) { // get current status status = flags[ii].getStatus(); // increment mismatched error counter if appropriate if (abs(status)>9) errorMismatchCount++; // get just the core status now, dropping off the 10s place if it was there, and not differentiating between NAN and OOB status = (abs(status)%10); // increment active error counter if appropriate if ((1==status) || (3==status) || (7==status) || (9==status)) errorActiveCount++; // increment latched error counter if appropriate if (status>=2) errorLatchedCount++; } } void setAllNANlatchLimit(int newLim) { for (int ii=0; ii<NUMEL_InputList; ii++) flags[ii].setNANlatchLimit(newLim); } void setAllOOBlatchLimit(int newLim) { for (int ii=0; ii<NUMEL_InputList; ii++) flags[ii].setOOBlatchLimit(newLim); } void setAllOOBclampMode(int newMode) { for (int ii=0; ii<NUMEL_InputList; ii++) flags[ii].setOOBclampMode(newMode); } void clearAllFlags() { for (int ii=0; ii<NUMEL_InputList; ii++) flags[ii].clear(); }};I was anticipating using an enumeration as follows to size and index my array of flags:enum InputList{ Lat, Lon, Alt, VelN, VelE, VelD, /* ... about another 70 items ... */ SysTime, NUMEL_InputList};Then I do checks on individual inputs like this:inputCheckStruct->flags[Alt].check(Input_Altitude,-2000.,100000.);In asking a question on Stack Overflow a concern was raised over the use of enum at all. So...is this a good approach or is there a better way to accomplish what I'm trying to do? My main question is in regards to defining an enum with list of meaningful names, using the NUMEL_InputList dummy entry to get the size of array for the IM_InputsCheck class, indexing with the enum items, etc. but I'll also welcome other relevant suggestions. Please forgive me...I'm an engineer who needs to write code, not a real programmer.
Input data integrity checking
c++;enum
null
_softwareengineering.197356
In the Go Language Tutorial, they explain how interfaces work:Go does not have classes. However, you can define methods on struct types. The method receiver appears in its own argument list between the func keyword and the method name.type Vertex struct { X, Y float64}func (v *Vertex) Abs() float64 { return math.Sqrt(v.X*v.X + v.Y*v.Y)}An interface type is defined by a set of methods. A value of interface type can hold any value that implements those methods.This is the only way to create an interface in Go. Google further explains that:A type implements an interface by implementing the methods. There is no explicit declaration of intent [i.e. interface declarations].Implicit interfaces decouple implementation packages from the packages that define the interfaces: neither depends on the other.It also encourages the definition of precise interfaces, because you don't have to find every implementation and tag it with the new interface name.This all sounds suspiciously like Extension Methods in C#, except that methods in Go are ruthlessly polymorphic; they will operate on any type that implements them.Google claims that this encourages rapid development, but why? Do you give up something by moving away from explicit interfaces in C#? Could Extension Methods in C# allow one to derive some of the benefits that Go interfaces have in C#?
How does Go improve productivity with implicit interfaces, and how does that compare with C#'s notion of Extension Methods?
c#;language design;go
I don't see extension methods and implicit interfaces as the same at all.

First, let's speak to purpose. Extension methods exist as syntactic sugar specifically to give you the ability to use a method as if it's a member of an object, without having access to the internals of that object. Without extension methods you can do exactly the same thing; you just don't get the pleasant syntax of someObjectYouCantChange.YourMethod() and rather have to call YourMethod(someObjectYouCantChange).

The purpose of implicit interfaces, however, is so that you can implement an interface on an object you don't have access to change. This gives you the ability to create a polymorphic relationship between any object you write yourself and any object you don't have access to the internals of.

Now let's speak to consequences. Extension methods really have none; this is perfectly in line with the ruthless security constraints .NET tries to use to aid in distinct perspectives on a model (the perspective from inside, outside, inheritor, and neighbor). The consequence is just some syntactic pleasantness.

The consequences of implicit interfaces are a few things:

Accidental interface implementation. This can be a happy accident, or an accidental LSP violation by meeting someone else's interface which you didn't intend to, while not honoring the intent of its contract.
The ability to easily make any method accept a mock of any given object at all by simply mirroring that object's interface (or even just creating an interface that meets that method's requirements and no more).
The ability to create adapters or other similar patterns with more ease, with regard to objects you can't meddle with the innards of.
The ability to delay interface implementation, and implement it later without having to touch the actual implementation, only implementing it when you actually want to create another implementor.
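A minimal Go sketch of that decoupling, reusing the tutorial's Vertex: the interface's author and Vertex's author never reference each other, yet the type satisfies the interface:

package main

import (
	"fmt"
	"math"
)

// Declared by the consumer; Vertex's package never mentions it.
type Abser interface {
	Abs() float64
}

type Vertex struct{ X, Y float64 }

func (v *Vertex) Abs() float64 { return math.Sqrt(v.X*v.X + v.Y*v.Y) }

func main() {
	var a Abser = &Vertex{3, 4} // satisfied implicitly, no declaration of intent
	fmt.Println(a.Abs())        // 5
}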
_unix.222201
What script would allow me to grep a keyword and print the filename containing the keyword inside the file content, for example 'Carhart' inside all .sas files in all subdirectories? I tried something like the following but it doesn't work:(find . -name '*.sas' -prune -type f -exec grep 'Carhart' > /dev/tty) >& /dev/nullThe script would satisfy two conditionsIt runs on tcsh on Solaris on SPARC-Enterprise, which is certified POSIXIt does not generate 'Permission denied' lines on directories which I have no permission to search and/or read. ( find / -name '*.sas' -prune > /dev/tty ) > & /dev/nullSince ( find / -name '*.sas' -prune > /dev/tty ) > & /dev/null works without reporting permission denied error, how can I modify this simple line to incorporate grep?
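For reference, a sketch in the same tcsh style as the attempts above. Two things change: grep is given the file list ({} + supplies both the arguments and the terminator, which the original -exec lacked) and -l makes it print only the matching file names:

( find . -name '*.sas' -type f -exec grep -l 'Carhart' {} + > /dev/tty ) >& /dev/null

The error output of find (and of the grep it spawns) is discarded by the outer >& /dev/null, so unreadable directories stay silent; both grep -l and -exec ... {} + are POSIX.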
POSIX-compliant recursive grep with no errors for inaccessible directories
grep;find
null
_unix.181482
I deleted a big extended partition containing an ntfs logical partition with high percentage of occupied space and from that extended partition I made a new, smaller extended part. In it I created an ext4 logical partition .The newly created ext4 logical partition however comes with 1.75 GB already occupied. I have tried deleting and recreating the partition but the occupied space just keeps coming back. I did the following to search for clues but no joy.sudo du -h -s /media/hrmount/20K /media/hrmount/andsudo du -h -a /media/hrmount16K /media/hrmount/lost+found20K /media/hrmount/and sudo du -h -a /media/hrmount/lost+found/16K /media/hrmount/lost+found/the commands might seem redundant but I'm just blindly trying to figure this out.I also ran :fsck -V /dev/sdb5fsck from util-linux 2.20.1[/sbin/fsck.ext4 (1) -- /media/hrmount] fsck.ext4 /dev/sdb5 e2fsck 1.42 (29-Nov-2011)/dev/sdb5: clean, 11/6553600 files, 459349/26214400 blocksand the relevant output from df -h/dev/sdb5 100G 1.7G 94G 2% /media/hrmountI am quite sure that I will get rid of that occupied space by formatting the partition but what i want to know is what is causing that occupied space in also what it actually contains. Please help me find more clues to solve this puzzle. Thank you.
What is this data that keeps reappearing after partition delete + new partition creation?
filesystems;disk usage;ext4;gparted
The used space reported by df is reserved space. This reserved space is used by ext filesystems to prevent data fragmentation as well as to allow critical applications such as syslog to continue functioning when the disk is full. You can view information about the reserved space using the tune2fs command:# tune2fs -l /dev/mapper/newvg-root tune2fs 1.42.5 (29-Jul-2012)Filesystem volume name: <none>Last mounted on: /mnt/oldrootFilesystem UUID: d41eefc5-60d6-4e18-98e8-d08d9111fbe0Filesystem magic number: 0xEF53Filesystem revision #: 1 (dynamic)Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isizeFilesystem flags: signed_directory_hash Default mount options: (none)Filesystem state: cleanErrors behavior: ContinueFilesystem OS type: LinuxInode count: 3932160Block count: 15728640Reserved block count: 786304Free blocks: 11086596Free inodes: 3312928First block: 0Block size: 4096Fragment size: 4096Reserved GDT blocks: 1020Blocks per group: 32768Fragments per group: 32768Inodes per group: 8192Inode blocks per group: 512Flex block group size: 16Filesystem created: Tue Feb 8 16:28:29 2011Last mount time: Mon Dec 9 23:28:11 2013Last write time: Mon Dec 9 23:48:24 2013Mount count: 19Maximum mount count: 20Last checked: Tue Sep 3 23:00:06 2013Check interval: 15552000 (6 months)Next check after: Sun Mar 2 22:00:06 2014Lifetime writes: 375 GBReserved blocks uid: 0 (user root)Reserved blocks gid: 0 (group root)First inode: 11Inode size: 256Required extra isize: 28Desired extra isize: 28Journal inode: 8Default directory hash: half_md4Directory Hash Seed: 80cf2748-584a-4fe8-ab8c-6abff528c2c2Journal backup: inode blocksHere you can see that 786304 blocks are reserved and the block size is 4096. This means that 3220701184 bytes or 3GB is reserved. You can adjust the percentage of reserved blocks using the tune2fs commands (but is not recommended):tune2fs -m 1 /dev/sdb5
_webmaster.82886
I made a collaboration where some blogs wrote about our website (franciacortacircuit.com) and linked to us, so I opened Google Analytics (GA) to see how many people visited our website by clicking on those links. In GA I clicked on Acquisition -> All traffic -> Referrals and searched for the domains that linked to us and... boom! From the biggest one (which gets 15K visitors per day) we had 55 visitors in 2 weeks, and from the other ones only 2.

That really looked like too few to me, so I made another test to see if everything was working fine. I checked the visits coming from another website I manage (www.racebooking.net), where I put a banner linking to franciacortacircuit.com and from where I am sure some visits came (because for debugging and testing I clicked on that banner from different devices and locations, so the counter cannot be 0).

Result? 0. Zero visits from www.racebooking.net, even though I am sure some visits came because I made those visits myself for testing. This makes me think that either I am not using GA properly or I am missing something... What am I doing wrong? Any suggestions?

Additional info:

On GA I am monitoring http://franciacortacircuit.com and all visits to www.franciacortacircuit.com are 301-redirected to franciacortacircuit.com.

On the banner I made on racebooking.net I am not linking to the root of http://franciacortacircuit.com but to a sub-page.
The referral count under acquisition tab not showing expected results on Google Analytics
google analytics;referrer
null
_webmaster.15517
My wife has a DotNetNuke website for her non-profit, and that site has a Contact Us page. The Contact Us module only allows for the entry of a single email address as the recipient of the contact email.

She would like to set up something like a [email protected] type of email address, and then have any emails sent to that address automatically forwarded to a list/group of people in the organization, so that any one of those people can respond.

My first thought was to set up something like a Yahoo Groups type of thing, but this organization deals with medical/patient issues and she is concerned about HIPAA and privacy issues caused by using a public group like that. It is possible that I could try to re-code the module for DNN, but that's a lot of work.

Is there a technology or service to handle this type of thing?

Thanks - Todd
How to forward email to a group from a single email address?
email;bulk email;email address
Most control panels have an option to set up mail forwarding to lists. Check there first, or if in doubt, contact your web host.
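If the site's host runs its own mail server rather than a panel, the classic equivalent (a sketch with hypothetical addresses) is a sendmail/postfix alias that expands one address into several recipients:

# /etc/aliases
contact: alice@example.org, bob@example.org, carol@example.org

# rebuild the alias database after editing:
newaliases

Since the expansion happens on the organization's own server, nothing is exposed to a public group service, which also helps with the HIPAA concern in the question.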
_softwareengineering.349882
I am currently working on an Android project that I have concluded needs refactoring done for a core part of the app experience. Let's call this part of the experience Search.From looking at bug reports, static analysis tools, and the code itself, Search should be prioritized in the refactoring.Since Search is a core component, I want to propose to my business that we refactor using TDD principles. This would require rewriting the Search part of the app to take advantage of TDD.A few team members have asked for a comparison between the existing code architecture, and the one I'd like to propose. They essentially want me to prove that my new approach will benefit the project.I haven't had to go into deep detail on a comparison like this before, and I'm not sure what the best way to go about it is.Beyond static analysis and just UML modeling, are there other ways of showing the advantages or disadvantages of software architecture?
What are some ways to compare and contrast software architecture strategies?
architecture;android development;coupling;dependency analysis
Proof is way too far. You aren't a researcher. You're a code monkey. What you need is their confidence. You can gain it through politics. A strategy that works sometimes despite my personal hatred of it. You can gain it through a minimal demonstration of what this can achieve. You can gain it by sneaking in some weekend and writing heroic code that converts it all over before they know what's happening. Of the three I recommend the minimal demonstration. This is the one that gets the team to buy in. Most of your effort here won't be about the code. It will be about changing a culture. Show that you're ready to defend these ideas against brutal skeptics who hate learning anything that may come between them and going home on time. Much as I'd personally love to do option three I know it doesn't work in the long run. Even if you fired everyone and brought in a new crew who'd only ever seen the code base in TDD mode you'd still run into trouble with people just not bothering and not understanding. TDD doesn't really work when it's done half hearted. You need to give people a reason to care and believe in it. Because it can fail horribly when they don't. Since this isn't a religion that means a good enthusiastic demo. You need to honestly show them the problems TDD can cause. The pitfalls and wrong headed ways of pretending to follow it. Do that and they'll be less likely to think of you as a snake oil salesman. One of the myths is that TDD frees you of needing to design. No. TDD forces architectural changes that can only improve a design but those changes shouldn't be the end of design. TDD gives you tests that let you paint yourself into corner and paint your way back out without sawing a hole in the wall. A flexible design attitude means the code base will reflect the best wisdom of today. Not yesterday. The sad truth is not much is going to line up personally with you business goals. What you should show is how much easier it is to react to changing goals when you have good tests. This isn't something you should let them see you struggling with. Practice the transformations and refactoring that you're going to perform. Treat this like a job interview. You really need to be on form for this. Or you can do option four. Add tests whenever you can sneak them in and hope that someday someone will care about maintaining them besides you. I've seen this tried many times. It doesn't work out and the doubters just end up more convinced that they were right all along.
_webapps.100340
I am building a script in Google Spreadsheet that can find multiple results (like a range of given dates) and then display the first result the script finds. I need it to show how many results there are (like 'result 1/4'), and I want another button for 'next result'. I have a list in one sheet with numbers:

3545377

I want to search another sheet, and when the script finds a matching row, copy that row (and only that row) to another sheet. So on the first run it will copy row 3, on the second run row 5, etc. I have a script that finds and copies a row based on one value, but I want to modify it to search for multiple results and display them one by one (like a "next result" button):

function Get(){ //gets and formats data ranges, to and from sheets
var ss = SpreadsheetApp.openById("xxxxxxxxxxxxxxxxxxxxxxxxxx");
var target = SpreadsheetApp.getActiveSpreadsheet();
var source_sheet = ss.getSheetByName("Base_NG");
var target_sheet = target.getSheetByName("Radni_sheet");
var ID = target_sheet.getRange("E2");
var searchString = ID.getValue();
var column = 1; //column Index
var columnValues = source_sheet.getRange(2, column, source_sheet.getLastRow()).getValues(); //1st is header row
var searchResult = columnValues.findIndex(searchString); //Row Index - 2

if (target_sheet.getRange('E2').getValue() == "") {
  SpreadsheetApp.getActiveSpreadsheet().toast('ID not entered!')
} else { //search by ID
  if(searchResult != -1) {
    //searchResult + 2 is row index.
    var values_search = source_sheet.getRange(searchResult + 2, 3).getValues();
    //if the function finds a result it will copy here
    CODE for copy
  }
}
}

//loop
Array.prototype.findIndex = function(search){
  if(search == "") return false;
  for (var i=0; i<this.length; i++)
    if (this[i].toString().indexOf(search) > -1 ) return i;
  SpreadsheetApp.getActiveSpreadsheet().toast('ID does not exist');
  var target = SpreadsheetApp.getActiveSpreadsheet();
  var target_sheet = target.getSheetByName("Radni_sheet");
  target_sheet.getRange('E3:E18').clearContent();
  return -1;
}
Google scripts search with manual next result option
google spreadsheets;google apps script
null
_codereview.102644
I've writen a C# JSON parser, but its performance is not as good as JSON.NET.Running the same test, my parser takes 278 ms and JSON.NET takes 24 ms. What should I do to optimize it? It seems that the dynamic type slows down the lib.I know this parser doesn't support floats and negative numbers, but it doesn't matter.JsonObject.cspublic class JsonObject{ private readonly Dictionary<string, dynamic> dict = new Dictionary<string, dynamic>(); public void AddKeyValue(string key, dynamic value) { dict.Add(key, value); } public void AddKeyValue(KeyValuePair<string, dynamic>? pair) { if (pair.HasValue) dict.Add(pair.Value.Key, pair.Value.Value); } public dynamic this[string key] { get { return dict[key]; } set { dict[key] = value; } } public static dynamic FromFile(string filename) { var lexer = Lexer.FromFile(filename); var parser = new Parser(lexer); return parser.Parse(); } public static dynamic FromString(string content) { var lexer = Lexer.FromString(content); var parser = new Parser(lexer); return parser.Parse(); }}Parser.cspublic class Parser{ private readonly ParseSupporter _; public Parser(Lexer lexer) { _ = new ParseSupporter(lexer); } public dynamic Parse() { if (_.MatchToken(TokenType.SyntaxType, {)) { return ParseJsonObject(); } if (_.MatchToken(TokenType.SyntaxType, [)) { return ParseJsonArray(); } throw new FormatException(); } private List<dynamic> ParseJsonArray() { var result = new List<dynamic>(); _.UsingToken(TokenType.SyntaxType, [); var value = ParseValue(); while (value != null) { result.Add(value); _.UsingToken(TokenType.SyntaxType, ,); value = ParseValue(); } _.UsingToken(TokenType.SyntaxType, ]); return result; } private JsonObject ParseJsonObject() { var j = new JsonObject(); _.UsingToken(TokenType.SyntaxType, {); var pair = ParsePair(); while (pair != null) { j.AddKeyValue(pair); _.UsingToken(TokenType.SyntaxType, ,); pair = ParsePair(); } _.UsingToken(TokenType.SyntaxType, }); return j; } private KeyValuePair<string, dynamic>? ParsePair() { var key = string.Empty; { var token = _.UsingToken(TokenType.StringType); if (token == null) { return null; } key = token.Value.Value; } _.UsingToken(TokenType.SyntaxType, :); var value = ParseValue(); if (value == null) { return null; } return new KeyValuePair<string, dynamic>(key, value); } private dynamic ParseValue() { if (_.MatchToken(TokenType.SyntaxType, {)) { return ParseJsonObject(); } if (_.MatchToken(TokenType.SyntaxType, [)) { return ParseJsonArray(); } { var token = _.UsingTokenExpect(TokenType.SyntaxType); return token != null ? token.Value.RealValue : null; } }}
JSON parser in C#
c#;performance;.net;parsing;json
null
_unix.290712
I would like to power down/up my display from the terminal.Right now I have:setterm --blank force#wakesetterm --blank pokeWhich gives a blank screen; but active screen (screen still receives output from computer, just blank output, it won't enter a no signal state)How would I turn off the display completely like you can do from X with this?xset dpms force off#wakexset dpms force on
How to power down display on terminal?
linux;console;screen lock
null
_webapps.79460
On a couple blog posts, talking about software projects I've seen a small thing that when clicked causes you to star the project on GitHub. It's about the same size as one of the things to like something on Facebook. I want one of those for this project. How do I make one of those?
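A commonly used option is the unofficial github-buttons service at buttons.github.io (an assumption; the post doesn't name what those blogs used). Roughly, with user/repo as a placeholder:

<a class="github-button" href="https://github.com/user/repo"
   data-icon="octicon-star" aria-label="Star user/repo on GitHub">Star</a>
<script async defer src="https://buttons.github.io/buttons.js"></script>

The script turns the anchor into the small Star widget, about the size of a Facebook Like button.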
How do I make a Star this project on GitHub link?
github
null
_unix.334543
I have rbenv (the Ruby version manager) installed on my machine, and it works like this:

$ rbenv local
2.3.1

It writes the local version of my Ruby to stdout. I want to capture this version and store it in a variable to reuse on another occasion.

$ declare -r RUBY_DEFINED_VERSION=$(rbenv local)
$ echo "Using ruby version $RUBY_DEFINED_VERSION"
Using ruby version 2.3.1

It works! But I don't want to use a subshell to do the work (using $() or ``). I want to use the same shell, and I don't want to create a tmp file to do the work. Is there a way to do this?

Note: declare -r is not mandatory; it can be a simple var=FOOBAR.
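In bash specifically, one hedged workaround is read with process substitution: the variable assignment happens in the current shell, although rbenv itself still runs as a child process (that part is unavoidable for an external command):

IFS= read -r RUBY_DEFINED_VERSION < <(rbenv local)
declare -r RUBY_DEFINED_VERSION
echo "Using ruby version $RUBY_DEFINED_VERSION"

declare -r on an already-assigned name just marks it readonly, matching the original declaration.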
Capture the output of a shell function without a subshell
shell;variable;command substitution;subshell
null
_cs.29134
I was reading page 147 of Goodrich and Tamassia, Data Structures and Algorithms in Java, 3rd Ed. (Google Books). It gives an example of the linear sum algorithm, which uses linear recursion to calculate the sum of all elements of an array:

Algorithm linearSum (arr, n)
  if (n == 1)
    return arr[0]
  else
    return linearSum (arr, n-1) + arr[n-1]
end linearSum

And the binary sum algorithm, which uses binary recursion to calculate the sum of all elements of the array:

Algorithm binarySum (arr, i, n)
  if (n == 1)
    return arr[i]
  return binarySum (arr, i, n/2) + binarySum (arr, i+n/2, n/2)
end binarySum

It further says:

The value of parameter $n$ is halved at each recursive call to binarySum(). Thus, the depth of the recursion, that is, the maximum number of method instances that are active at the same time, is $1 + \log_2 n$. Thus the algorithm binarySum() uses $O(\log n)$ additional space. This is a big improvement over the $O(n)$ needed by the linearSum() algorithm.

I did not understand how the maximum number of method instances that are active at the same time is $1 + \log_2 n$. For example, consider the calls to the method with the parameters shown in rounded boxes (call-tree figure not reproduced here): in the two recursive calls in the second row from the top, $n = 8$. So $1 + \log_2 8 = 4$. I don't get what maximum this 4 represents.
Space complexity analysis of binary recursive sum algorithm
algorithm analysis;space complexity;recursion
In a given tree, all the vertices of the tree correspond to binarySum() calls. The value of the parameter n to binarySum() is halved at each recursive call. Also, each recursive call finishes only after all its children finish. Thus at each recursive call, the set of active calls includes all the ancestor calls in the call sequence.

Thus when any binarySum() call corresponding to a leaf of the call tree is active, its parent, grandparent and so on are still active as well. Thus, the depth of the recursion, that is, the maximum number of method instances that are active at the same time (which is always equal to the height of the tree of recursive calls), is $1 + \log_2 n$. For example, in the binarySum() call tree with n = 8, when any of the calls corresponding to a leaf is active, the whole chain n = 8 → 4 → 2 → 1 is active, i.e. $1 + \log_2 8 = 4$ calls at once.

Why does the depth of recursion affect the space required by an algorithm? Each recursive function call usually needs to allocate some additional memory (for temporary data) to process its arguments. At the very least, each such call has to store some information about its parent call, just to know where to return after finishing. Let's imagine you are performing a task, and you need to perform a sub-task inside this first task: you need to remember (or write down on paper) where you stopped in the first task, to be able to continue it after you finish the sub-task. And so on, a sub-sub-task inside a sub-task... So, a recursive algorithm will require space O(depth of recursion).
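To make the active-call chain visible, a small instrumented sketch (hypothetical code mirroring the book's pseudocode) that prints the depth on entry:

public class BinarySum {
    // depth = number of binarySum frames currently on the call stack
    static int binarySum(int[] arr, int i, int n, int depth) {
        System.out.println("depth " + depth + ", n = " + n);
        if (n == 1) return arr[i];
        return binarySum(arr, i, n / 2, depth + 1)
             + binarySum(arr, i + n / 2, n / 2, depth + 1);
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4, 5, 6, 7, 8};
        binarySum(arr, 0, arr.length, 1); // deepest line printed is depth 4 = 1 + log2(8)
    }
}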
_unix.65822
Is there a way to update pwd in the tmux status line, instead of showing it in PS1?

set -g status-left "#(pwd)"
set -g status-interval 1

doesn't work.
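For what it's worth, tmux 1.9 and later (the version threshold is an assumption here) expose the pane's directory directly as a format variable, which avoids the shell-command approach entirely:

# ~/.tmux.conf
set -g status-left "#{pane_current_path} "
set -g status-interval 1

Unlike #(pwd), which runs in the tmux server's working directory, the format follows the active pane.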
Display current pane's current directory in status
tmux
null
_datascience.14690
I have a dataframe consisting of some continuous data features. I did a KDE plot of the features using seaborn's kdeplot functionality, which gave me the plot shown below. How do I interpret this visualization in order to check for things like skew in the data points, etc.? Thanks in advance.
Check for skewness in data
machine learning;python;visualization;data cleaning
IIUC you can use [DataFrame.hist()] method:import matplotlibimport matplotlib.pyplot as pltmatplotlib.style.use('ggplot')df = pd.DataFrame(np.random.randint(0,10,(20,4)),columns=list('abcd'))df.hist(alpha=0.5, figsize=(16, 10))Result:Data:In [44]: dfOut[44]: a b c d0 3 0 2 51 8 7 6 62 6 4 5 73 4 4 0 64 5 6 0 25 0 0 4 86 7 6 7 47 7 6 6 28 6 5 9 49 6 3 6 910 7 9 7 611 9 3 5 612 9 4 7 013 2 8 8 814 0 8 4 715 1 5 2 416 2 6 6 417 0 3 8 118 4 1 0 419 4 4 6 8In [45]: df.skew()Out[45]:a -0.154849b -0.239881c -0.660912d -0.376480dtype: float64In [46]: df.describe()Out[46]: a b c dcount 20.000000 20.000000 20.000000 20.000000mean 4.500000 4.600000 4.900000 5.050000std 2.964705 2.521487 2.770142 2.502105min 0.000000 0.000000 0.000000 0.00000025% 2.000000 3.000000 3.500000 4.00000050% 4.500000 4.500000 6.000000 5.50000075% 7.000000 6.000000 7.000000 7.000000max 9.000000 9.000000 9.000000 9.000000
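As a follow-up note (not in the original answer), the sign of skew() is what maps back to the KDE shape; a tiny sketch:

import pandas as pd
import numpy as np

s = pd.Series(np.random.exponential(size=1000))
print(s.skew())  # > 0: right (positive) skew, long tail to the right
                 # < 0: left (negative) skew, long tail to the left
                 # ~ 0: roughly symmetric

So in the output above, all four columns have mildly negative skew: their left tails are slightly longer.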
_codereview.78583
Building on my ANTLR tree listener, I'm now starting to see how the whole thing is coming together.As I proceed to implement the numerous Node classes I'm going to need to make sense out of the ANTLR parse tree, I feel like I'm doing more and more weird things.. oh, the tests pass. But the implementations... here, have a look:ProcedureNodeUsed for Sub, Function, Property Get, Property Let and Property Set code blocks, this node can have children and defines a scope.public class ProcedureNode : Node{ public enum VBProcedureKind { Sub, Function, PropertyGet, PropertyLet, PropertySet } public ProcedureNode(VisualBasic6Parser.PropertySetStmtContext context, string scope, string localScope) : this(context, scope, localScope, VBProcedureKind.PropertySet, context.visibility(), context.ambiguousIdentifier(), null) { } public ProcedureNode(VisualBasic6Parser.PropertyLetStmtContext context, string scope, string localScope) : this(context, scope, localScope, VBProcedureKind.PropertyLet, context.visibility(), context.ambiguousIdentifier(), null) { } public ProcedureNode(VisualBasic6Parser.PropertyGetStmtContext context, string scope, string localScope) : this(context, scope, localScope, VBProcedureKind.PropertyGet, context.visibility(), context.ambiguousIdentifier(), context.asTypeClause) { } public ProcedureNode(VisualBasic6Parser.FunctionStmtContext context, string scope, string localScope) : this(context, scope, localScope, VBProcedureKind.Function, context.visibility(), context.ambiguousIdentifier(), context.asTypeClause) { } public ProcedureNode(VisualBasic6Parser.SubStmtContext context, string scope, string localScope) : this(context, scope, localScope, VBProcedureKind.Sub, context.visibility(), context.ambiguousIdentifier(), null) { } private ProcedureNode(ParserRuleContext context, string scope, string localScope, VBProcedureKind kind, VisualBasic6Parser.VisibilityContext visibility, VisualBasic6Parser.AmbiguousIdentifierContext name, Func<VisualBasic6Parser.AsTypeClauseContext> asType) : base(context, scope, localScope) { _kind = kind; _name = name.GetText(); _accessibility = visibility.GetAccessibility(); if (asType != null) { var returnTypeClause = asType(); _isImplicitReturnType = returnTypeClause == null; _returnType = returnTypeClause == null ? ReservedKeywords.Variant : returnTypeClause.type().GetText(); } } private readonly string _name; public string Name { get { return _name; } } private readonly string _returnType; public string ReturnType { get { return _returnType; } } private readonly bool _isImplicitReturnType; public bool IsImplicitReturnType { get { return _isImplicitReturnType; } } private readonly VBProcedureKind _kind; public VBProcedureKind Kind { get { return _kind; } } private readonly VBAccessibility _accessibility; public VBAccessibility Accessibility { get { return _accessibility; } }}I'm making use of a few extension methods here, extending some ParserRuleContext (and auto-generated derived types):public static class ParserRuleContextExtensions{ public static Selection GetSelection(this ParserRuleContext context) { if (context == null) return Selection.Empty; // adding +1 because ANTLR indexes are 0-based, but VBE's are 1-based. 
return new Selection( context.Start.Line + 1, context.Start.StartIndex + 1, // todo: figure out why this is off and how to fix it context.Stop.Line + 1, context.Stop.StopIndex + 1); // todo: figure out why this is off and how to fix it } public static VBAccessibility GetAccessibility(this VisualBasic6Parser.VisibilityContext context) { if (context == null) return VBAccessibility.Implicit; return (VBAccessibility) Enum.Parse(typeof (VBAccessibility), context.GetText()); }}VariableDeclarationNodeVBA variables can be declared in various ways; a declaration statement can actually declare more than a single variable. Hence, this node has child nodes too (VariableNode instances), but doesn't define a scope:public class VariableDeclarationNode : Node{ public VariableDeclarationNode(VisualBasic6Parser.VariableStmtContext context, string scope) :base(context, scope, null, new List<Node>()) { foreach (var variable in context.variableListStmt().variableSubStmt()) { AddChild(new VariableNode(variable, scope, context.visibility(), context.DIM() != null || context.STATIC() != null)); } }}public class VariableNode : Node{ private static readonly IDictionary<string, string> TypeSpecifiers = new Dictionary<string, string> { { %, ReservedKeywords.Integer }, { &, ReservedKeywords.Long }, { @, ReservedKeywords.Decimal }, { !, ReservedKeywords.Single }, { #, ReservedKeywords.Double }, { $, ReservedKeywords.String } }; public VariableNode(VisualBasic6Parser.VariableSubStmtContext context, string scope, VisualBasic6Parser.VisibilityContext visibility, bool isLocal = true) : base(context, scope) { _name = context.ambiguousIdentifier().GetText(); if (context.asTypeClause() == null) { if (context.typeHint() == null) { _isImplicitlyTyped = true; _typeName = ReservedKeywords.Variant; } else { var hint = context.typeHint().GetText(); _isUsingTypeHint = true; _typeName = TypeSpecifiers[hint]; } } else { _typeName = context.asTypeClause().type().GetText(); } _accessibility = isLocal ? 
VBAccessibility.Private : visibility.GetAccessibility(); } private readonly string _name; public string Name { get { return _name; } } private readonly string _typeName; public string TypeName { get { return _typeName; } } private readonly bool _isImplicitlyTyped; public bool IsImplicitlyTyped { get { return _isImplicitlyTyped; } } private bool _isUsingTypeHint; public bool IsUsingTypeHint { get { return _isUsingTypeHint; } } private readonly VBAccessibility _accessibility; public VBAccessibility Accessibility { get { return _accessibility; } }}Here are a few tests showing how the parser is ultimately used and how nodes are ultimately accessed: [TestMethod] public void UnspecifiedProcedureVisibilityIsImplicit() { IRubberduckParser parser = new VBParser(); var code = Sub Foo()\n Dim bar As Integer\nEnd Sub; var module = parser.Parse(project, component, code); var procedure = (ProcedureNode)module.Children.First(); Assert.AreEqual(procedure.Accessibility, VBAccessibility.Implicit); } [TestMethod] public void UnspecifiedReturnTypeGetsFlagged() { IRubberduckParser parser = new VBParser(); var code = Function Foo()\n Dim bar As Integer\nEnd Function; var module = parser.Parse(project, component, code); var procedure = (ProcedureNode)module.Children.First(); Assert.AreEqual(procedure.ReturnType, Variant); Assert.IsTrue(procedure.IsImplicitReturnType); } [TestMethod] public void LocalDimMakesPrivateVariable() { IRubberduckParser parser = new VBParser(); var code = Sub Foo()\n Dim bar As Integer\nEnd Sub; var module = parser.Parse(project, component, code); var procedure = module.Children.First(); var declaration = procedure.Children.First(); var variable = (VariableNode)declaration.Children.First(); Assert.AreEqual(variable.Accessibility, VBAccessibility.Private); } [TestMethod] public void TypeHintsGetFlagged() { IRubberduckParser parser = new VBParser(); var code = Sub Foo()\n Dim bar$\nEnd Sub; var module = parser.Parse(project, component, code); var procedure = module.Children.First(); var declaration = procedure.Children.First(); var variable = (VariableNode)declaration.Children.First(); Assert.IsTrue(variable.IsUsingTypeHint); } [TestMethod] public void ImplicitTypeGetsFlagged() { IRubberduckParser parser = new VBParser(); var code = Sub Foo()\n Dim bar\nEnd Sub; var module = parser.Parse(project, component, code); var procedure = module.Children.First(); var declaration = procedure.Children.First(); var variable = (VariableNode)declaration.Children.First(); Assert.IsTrue(variable.IsImplicitlyTyped); }
Of Procedures and Variables: never enough nodes
c#;parsing;antlr
Avoid early and perhaps useless enums:

public enum VBProcedureKind
{
    Sub,
    Function,
    PropertyGet,
    PropertyLet,
    PropertySet
}

Enums are perhaps one of the most annoying, over-used, anti-pattern, anti-OOP, "no language should ever have supported this" kind of things, and they ruin every programmer's career. This was in fact such a bad idea that you had a moment of dissatisfaction and started writing bad code:

public ProcedureNode(VisualBasic6Parser.PropertySetStmtContext context, string scope, string localScope)
    : this(context, scope, localScope, VBProcedureKind.PropertySet, context.visibility(), context.ambiguousIdentifier(), null)
{
}

public ProcedureNode(VisualBasic6Parser.PropertyLetStmtContext context, string scope, string localScope)
    : this(context, scope, localScope, VBProcedureKind.PropertyLet, context.visibility(), context.ambiguousIdentifier(), null)
{
}

public ProcedureNode(VisualBasic6Parser.PropertyGetStmtContext context, string scope, string localScope)
    : this(context, scope, localScope, VBProcedureKind.PropertyGet, context.visibility(), context.ambiguousIdentifier(), context.asTypeClause)
{
}

public ProcedureNode(VisualBasic6Parser.FunctionStmtContext context, string scope, string localScope)
    : this(context, scope, localScope, VBProcedureKind.Function, context.visibility(), context.ambiguousIdentifier(), context.asTypeClause)
{
}

public ProcedureNode(VisualBasic6Parser.SubStmtContext context, string scope, string localScope)
    : this(context, scope, localScope, VBProcedureKind.Sub, context.visibility(), context.ambiguousIdentifier(), null)
{
}

You can only be unhappy after seeing/writing this. Another thing that may be wrong about your ctors is that you're passing values derived from an object you already have, e.g. context.visibility(). Everything I mentioned implies that your ProcedureNode could have the following signature:

private ProcedureNode(ParserRuleContext context, string scope, string localScope)

In fact, this would be your only ctor. This is obviously very cute and everything, but now we have a problem to solve, don't we? We need to identify which type of node we have, and we also have to make some calls that are only available on some of the classes, etc.

Let the classes SubStmtContext, FunctionStmtContext, PropertyGetStmtContext, PropertyLetStmtContext and PropertySetStmtContext implement an interface with the following properties:

interface IMemberContext
{
    // maybe you can find a more suitable name for this interface
    VisibilityContext Visibility { get; }
    AmbiguousIdentifierContext Name { get; }
    // maybe you can return the result here, instead of returning the function
    Func<VisualBasic6Parser.AsTypeClauseContext> AsType { get; }
}

And just use that in your ProcedureNode class:

public class ProcedureNode : Node
{
    public ProcedureNode(IMemberContext memberContext, string scope, string localScope)
        : base((ParserRuleContext)memberContext, scope, localScope)
    {
        if (memberContext.AsType != null)
        {
            var returnTypeClause = memberContext.AsType();
            IsImplicitReturnType = returnTypeClause == null;
            ReturnType = returnTypeClause == null
                ? ReservedKeywords.Variant
                : returnTypeClause.type().GetText();
        }
    }

    public IMemberContext MemberContext { get { return (IMemberContext)base.Context; } }
    public string ReturnType { get; private set; }
    public bool IsImplicitReturnType { get; private set; }
}

And the magic is done: all the mess is gone.
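How do the generated context classes come to implement IMemberContext? If memory serves, ANTLR's C# target emits the parser and its nested context classes as partial, so the interface can be bolted on without touching generated code. A sketch, assuming the generated members keep their usual names (and a using System; for Func<>):

public partial class VisualBasic6Parser
{
    public partial class SubStmtContext : IMemberContext
    {
        VisibilityContext IMemberContext.Visibility { get { return visibility(); } }
        AmbiguousIdentifierContext IMemberContext.Name { get { return ambiguousIdentifier(); } }
        // a Sub has no return type, so there is no asTypeClause to expose:
        Func<AsTypeClauseContext> IMemberContext.AsType { get { return null; } }
    }

    public partial class FunctionStmtContext : IMemberContext
    {
        VisibilityContext IMemberContext.Visibility { get { return visibility(); } }
        AmbiguousIdentifierContext IMemberContext.Name { get { return ambiguousIdentifier(); } }
        Func<AsTypeClauseContext> IMemberContext.AsType { get { return asTypeClause; } }
    }

    // ...and likewise for the three property contexts.
}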
Let the Context object be the majesty of your class; let it proudly be the object you can easily inspect to get those nice property values while debugging; let it rule your class ;)!

I don't know the details of your project, but let's imagine you objected: "It doesn't make sense for my ParserRuleContext to implement that interface, because not all rules have accessibility, etc." You could then pass the same object twice to the same ctor: once as the rule, once as the member context:

public class ProcedureNode : Node
{
    public ProcedureNode(ParserRuleContext ruleContext, IMemberContext memberContext, string scope, string localScope)
        : base(ruleContext, scope, localScope)
    {
        MemberContext = memberContext;
        if (memberContext.AsType != null)
        {
            var returnTypeClause = memberContext.AsType();
            IsImplicitReturnType = returnTypeClause == null;
            ReturnType = returnTypeClause == null
                ? ReservedKeywords.Variant
                : returnTypeClause.type().GetText();
        }
    }

    public IMemberContext MemberContext { get; private set; }
    public string ReturnType { get; private set; }
    public bool IsImplicitReturnType { get; private set; }
}

Not the best, but it's still way better than your previous attempt, IMHO.
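For what it's worth, here is roughly what a call site could look like once the five ctors collapse into one, from the listener that walks the ANTLR parse tree. _currentNode, _currentScope and _localScope are hypothetical names for whatever state your listener keeps:

public override void EnterSubStmt(VisualBasic6Parser.SubStmtContext context)
{
    // SubStmtContext implements IMemberContext, so it slots straight in:
    _currentNode.AddChild(new ProcedureNode(context, _currentScope, _localScope));
    // ...or, with the two-parameter variant above:
    // _currentNode.AddChild(new ProcedureNode(context, context, _currentScope, _localScope));
}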
_unix.209248
Whenever I use custom QML components in my apps, it makes them use the GTK+ theme. How can I prevent this? Why am I asking here? Because I believe it has something to do with a file like ~/.config/qt.conf, where I would have to put something like:

[Qt]
GTK-theme: false

Or something in that style, but I don't know exactly what to write or how to do it. Can you help me with this?

I'm using Arch Linux with GNOME as my DE, in case that helps.
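The only other mechanism I've heard of is an environment variable along these lines, though again this is just a guess on my part:

export QT_STYLE_OVERRIDE=Fusion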
How to prevent Qt Applications from using GTK+ theme
gtk;theme;qt;qtcreator
null
_webapps.108319
Whenever I sign in to YouTube with my primary Google account, YouTube starts lagging, no matter which browser I use, even though I have a fully loaded 5K iMac. But as soon as I sign out, YouTube functions as smoothly as before: the animations across the page are once again smooth as butter. I even signed in with a brand new Google account and surprisingly.
YouTube Google account and lag with youtube webpage
youtube;google account
null