Columns:
id: string (length 5 to 27)
question: string (length 19 to 69.9k)
title: string (length 1 to 150)
tags: string (length 1 to 118)
accepted_answer: string (length 4 to 29.9k)
_webmaster.27452
On a page of my website I have a list of ALL the products on my site. This list is growing rapidly and I am wondering how to manage it from an SEO point of view. I am shortly adding a title to this section and giving it an H1 tag. Currently the name of each product in this list is not an h1/h2/h3/h4; it is just styled text. However, I was looking to make these h2/h3/h4. Questions:

1. Is the use of h2/h3/h4 on these list items bad form, since they should be used for content rather than all links?
2. I am thinking of limiting this main list to only 8 items and using h2 tags for each name. Do you think this will have a negative or positive effect overall? I may create a piece of script which counts the first 8 items on the list; these 8 will get the h2, and any after that will get h3 (all styled the same).
3. If I do add h tags, should I put them just on the name of the product or outside the a tag, therefore enclosing all the info?
4. Has anyone been in a similar situation to this, and if so, did they really see any significant difference?
SEO strategy for h1, h2, h3 tags for list of items
seo
DisgruntledGoat's answer is good, but I'd like to add an answer to a question you should've asked: how do I let Google recognize things like product names on my pages?

Actually, there are several ways to do that, but the one I'd recommend is microdata. For example, for your site you could use the Product and Offer schemas from schema.org, something like this:

<div class="listItem" itemscope itemtype="http://schema.org/Product">
  <a itemprop="url" title="Tannoy V4" href="/speakers/1/tannoy-v4">
    <div class="left">
      <img itemprop="image" width="173" height="130" alt="" src="images/small/TANNOY-V4.jpg">
    </div>
    <div class="mid">
      <div class="title" itemprop="name">
        <span itemprop="brand">Tannoy</span> <span itemprop="model">V4</span>
      </div>
      <div class="starRating" itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
        <img width="183" height="32" alt="Five Star Rating" src="images/interface/5.gif">
        <meta itemprop="ratingValue" content="5" />
      </div>
    </div>
    <div class="right">
      <div class="now" itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        <span class="listNow">Now <span itemprop="price">280</span></span>
      </div>
      <div class="save">
        <span class="listSave">Save 70</span>
      </div>
    </div>
    <div class="clear"></div>
  </a>
</div>

(If that example HTML looks familiar, it's because I grabbed it off your site and just added the bits in boldface.)

That said, while marking up items in lists shouldn't do any harm, what you really should do is add this kind of microdata markup to the individual product pages. That way, Google can use it to generate rich snippets for your pages when they appear in search results. For more information, see Getting started with schema.org.
_softwareengineering.324914
A nice side effect of using reviews is that they ensure other programmers have some knowledge of the code being merged into the project. The more people review changes to the project, the more widely the knowledge is spread, which ought to increase the bus factor.

Now the problem is that people quit (and hopefully are not hit by a bus), so after a while both the author of the code and those who reviewed the changes are no longer present. The solution to this problem is to do post-merge reviews of the code to restore the common knowledge of it.

Such a process would probably benefit from a tool that can produce statistics on the common knowledge of the code: that is, files containing commits where neither the author nor the reviewers are present any longer, files where only one involved person is still present, and so on.

The first question is, of course, whether someone has done something similar, and whether it works well in practice. The second question is whether this is possible to do in Gerrit: can I do a review, add comments and so on even after the changeset is closed?
Using a Gerrit(-like) tool to ensure a high bus factor
development process;code reviews;gerrit
null
_webapps.3856
I've recently signed up for Google Apps because of the email support. I only use the web-based Gmail client for all my mail. I'd like to have a professional email signature on each email that I send, including a small image that is my company's logo. How can I include such a signature with Gmail?
How do I create a decent email signature in Gmail?
gmail;google apps;email signature
Google recently announced support for rich text in signatures. That means you can now configure the font family, size, and color, as well as insert images into the signature. Just go to your Gmail settings and you'll find it right there; no need to enable it as a Labs feature or anything! It's possible it hasn't been expanded to Google Apps for your domain yet, in which case it should be coming pretty soon.
_cs.9512
I modeled my problem as the following game (it is a congestion game with varying prices):

$N$ players share resources $E$. $S_i$ is the strategy space of player $i$, which is in $2^E$ (where $2^E$ is the power set of the resources). $P_e^i$ is the price of resource $e \in E$ for player $i$; the price of resource $e$ is different for different users. The goal of each player is to select a strategy $S_i$ which minimizes its price $\sum_{e\in S_i}P_e^i$.

My questions are:

1. Does this game have any Nash equilibrium (NE)? If so, under which conditions?
2. If it has a NE, what is a sample algorithm for reaching it?

I searched the literature but could not find any appropriate information. Any solution is appreciated!
Congestion Game with Varying Price
algorithms;optimization;game theory
null
_unix.47178
When creating directories, mkdir -m <mode> <dir> provides for creating one or more directories with the given mode/permissions set (atomically). Is there an equivalent for creating files on the command line? Something akin to:

open("file", O_WRONLY | O_APPEND | O_CREAT, 0777);

Is using touch followed by a chmod my only option here?

Edit: After trying out teppic's suggestion to use install, I ran it through strace to see how close to atomic it was. The answer is: not very:

$ strace install -m 777 /dev/null newfile
...
open("newfile", O_WRONLY|O_CREAT|O_EXCL, 0666) = 4
fstat(4, {st_mode=S_IFREG|0666, st_size=0, ...}) = 0
...
fchmod(4, 0600) = 0
close(4) = 0
...
chmod("newfile", 0777) = 0
...

Still, it's a single shell command, and one I didn't know before.
Can files be created with permissions set on the command line?
shell;permissions;files
You could use the install command (part of GNU coreutils) with a dummy file, e.g.

$ install -b -m 755 /dev/null newfile

The -b option backs up "newfile" if it already exists. You can use this command to set the owner as well.
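If the concern is the window between creation and chmod, another approach (a sketch assuming a POSIX shell, and that the file does not already exist) is to set the umask in a subshell so the file is born with the desired permissions directly:

```shell
# Restrict the creation mask in a subshell, then create the file with a
# redirection: the shell's open(2) uses mode 0666 & ~umask, so with umask 077
# the file has mode 0600 (-rw-------) from the moment of creation.
(umask 077 && : > secret.txt)
ls -l secret.txt
```

Note the limitation: shells create files from a base mode of 0666, so this can only remove permission bits, never add execute bits; it cannot produce a 0777 file the way the open(..., 0777) call in the question would.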
_unix.219641
I set up an Arch USB stick, following the instructions given at https://wiki.archlinux.org/index.php/USB_flash_installation_media#In_Windows_2, section 1.2.2. Specifically, I extracted the Arch ISO onto the drive and then installed Syslinux on it, within Windows. When I try to boot from the drive, I get to a menu with two options:

- EFI Default Loader
- Reboot into Firmware Interface

The first option just reloads the same menu, while the second goes into the Firmware Interface, as it says, and then boots Windows. From looking around on the internet, it looks as if I've arrived at the systemd-boot menu (not Syslinux?) and that I should be seeing an option to boot into Arch above those two. Clearly, I've done something wrong along the way. At the moment, though, I have no idea what could be causing those menu items not to appear. What do I need to fix?
Why am I only getting two options in systemd-boot?
arch linux;system installation;dual boot
Probably too late (or already solved) for the OP, but if anyone else trips over this like I just did: besides installing systemd-boot, you also need to manually add the boot entry for Arch by creating a file at /boot/loader/entries/arch.conf. See this ArchWiki page for more information. It's easy to miss that part, thanks to the large notice that "bootctl will automatically check for..." making you think you're already done.
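For reference, a minimal arch.conf looks something like this (the kernel/initramfs paths and the root= value are illustrative and must match your installation):

```
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=UUID=<your-root-partition-uuid> rw
```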
_webapps.43974
Is there support for internal office locations on Google Calendar? We have a bunch of meeting rooms, and that's what appears in the Location field of an event. However, Google can't resolve that to a location. Is it possible to establish some sort of URL that is opened when users click on our meeting room links, rather than Google passing it off to Google Maps?
Is there support for internal locations on Google Calendar?
google apps;google calendar
It sounds like you are already using the calendar Resources feature; if not, you can set those up in the admin panel. When you create resources, you can add a URL to each resource. Then, when a user adds the resource to a meeting, that URL goes into the Where: field. We use this for GoToMeeting accounts and it works great (you set up a recurring meeting and use that as the URL). For physical locations it wouldn't be as pretty, but it should work. Definitely a bit of a hack, but it gets the job done for my company (only 25 employees). Here is a good guide on creating resources: http://www.googlegooru.com/how-to-create-a-resource-in-google-calendar/
_webmaster.16348
I'm currently building a website that is not targeted at an English-speaking audience. So my thought is: if I were to make the URLs in the language of choice (in this case Spanish), would this help or hinder my SEO for the website? Is it always better to have English URL paths for getting picked up by search engines?
Language Specific URLs for SEO
url;search;internationalization;seo;search engines
null
_bioinformatics.73
I am trying to understand the benefits of joint genotyping and would be grateful if someone could provide an argument (ideally mathematical) that clearly demonstrates the benefit of joint vs. single-sample genotyping. This is what I've gathered from other resources (Biostars, GATK forums, etc.):

1. Joint genotyping helps control the FDR, because errors from individually genotyped samples are added up and amplified when merging call sets (per Heng Li on https://www.biostars.org/p/10926/). If someone understands this, can you please clarify what the difference in the overall FDR rate is between the two scenarios (again, ideally with an example)?

2. Greater sensitivity for low-frequency variants: "By sharing information across all samples, joint calling makes it possible to rescue genotype calls at sites where a carrier has low coverage but other samples within the call set have a confident variant at that location." (from https://software.broadinstitute.org/gatk/documentation/article.php?id=4150) I don't understand how the presence of a confidently called variant at the same locus in another individual can affect the genotyping of an individual with low coverage. Is there some valid argument that allows one to consider reads from another person as evidence of a particular variant in a third person? What are the assumptions for such an argument? What if that person is from a different population with entirely different allele frequencies for that variant?

Having read several of the papers (or method descriptions) that describe the latest haplotype-aware SNP calling methods (HaplotypeCaller, freebayes, Platypus), the overall framework seems to be:

1. Establish a prior on the allele frequency distribution at a site of interest using one (or a combination) of: a non-informative prior, a population-genetics model-based prior like Wright-Fisher, or a prior based on established variation patterns like dbSNP, ExAC, or gnomAD.
2. Build a list of plausible haplotypes in a region around the locus of interest using local assembly.
3. Select the haplotype with the highest likelihood based on the prior and the read data, and infer the locus genotype accordingly.

At which point(s) in the above procedure can information between samples be shared or pooled? Should one not trust the AFS from a large-scale resource like gnomAD much more than the distribution obtained from other samples that are nominally part of the same cohort but may have little to do with each other because of different ancestry, for example?

I really want to understand the justifications and benefits offered by multi-sample genotyping and would appreciate your insights.
Single-sample vs. joint genotyping
genotyping;gatk;freebayes;platypus;multi sample
Say you are sequencing to 2X coverage. Suppose at a site, sample S has one reference base and one alternate base. It is hard to tell if this is a sequencing error or a heterozygote. Now suppose you have 1000 other samples, all at 2X read depth. One of them has two ALT bases; 10 of them have one REF and one ALT. It is usually improbable that all these samples have the same sequencing error, so you can assert that sample S has a het. Multi-sample calling helps to increase the sensitivity for not-so-rare SNPs. Note that what matters here is the assumption of error independence. Ancestry only has a tiny indirect effect.

Multi-sample calling penalizes very rare SNPs, in particular singletons. When you care about variants only, this is for the good: naively combining single-sample calls yields a higher error rate.

Multi-sample calling also helps variant filtering at a later stage. For example, for a sample sequenced to 30X coverage, you would not know if a site at 45X depth is caused by a potential CNV/mismapping or by statistical fluctuation. When you see 1000 30X samples all at 45X depth, you can easily tell you are looking at a CNV or systematic mismapping. Multiple samples enhance most statistical signals.

Older methods pool all BAMs when calling variants. This is necessary because a single low-coverage sample does not have enough data to recover hidden INDELs. However, this strategy is not easy to massively parallelize, and adding a new sample triggers re-calling, which is very expensive as well. As we are mostly doing high-coverage sequencing these days, the old problem with INDEL calling matters less now. GATK has a newer single-sample calling pipeline where you combine per-sample gVCFs at a later stage. Such a sample-combining strategy is perhaps the only sensible solution when you are dealing with 100k samples.

The so-called haplotype-based variant calling is a separate question. This type of approach helps to call INDELs, but is not of much relevance to multi-sample calling. Also, of the three variant callers in your question, only GATK (and Scalpel, which you have not mentioned) use assembly at large. Freebayes does not. Platypus does, but only to a limited extent, and it does not work well in practice.

I guess what you really want to talk about is imputation-based calling. This approach further improves sensitivity with LD. With enough samples, you can measure the LD between two positions. Suppose at position 1000 you see one REF read and no ALT reads; at position 1500, you see one REF read and two ALT reads. You would not call any SNPs at position 1000, even given multiple samples. However, when you know the two positions are strongly linked and the dominant haplotypes are REF-REF and ALT-ALT, you know the sample under investigation is likely to have a missing ALT allele. LD transfers signals across sites and enhances the power to make correct genotyping calls. Nonetheless, as we are mostly doing high-coverage sequencing nowadays, imputation-based methods only have a minor effect and are rarely applied.
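The error-independence argument can be made concrete with a back-of-the-envelope binomial calculation (a sketch with assumed numbers: a per-base error rate of about 1e-3, roughly Q30 after quality filtering, with errors split evenly across the three possible alternate bases):

```python
from math import comb

# Probability that a 2X sample shows at least one spurious read of one
# SPECIFIC alternate base purely through sequencing error.
e = 1e-3                       # assumed per-base error rate (~Q30)
p = 1 - (1 - e / 3) ** 2       # two reads, error hits this ALT base 1/3 of the time

# Tail probability that >= k of n independent samples all show that same
# spurious ALT base by chance alone.
n, k = 1000, 11
tail = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
print(f"P(>= {k} of {n} samples share the same error) = {tail:.2e}")
```

Under these assumptions the tail probability is vanishingly small, so seeing the same ALT base in 11 of 1000 samples is far more consistent with a real polymorphism than with independent sequencing errors.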
_codereview.43607
I've been suggested to put the following snippet of code up for review, hence I will do so. Review of everything is appreciated; I'm also posting as much relevant code as possible:

abstract public class Primitive {
    protected List<VertexData> vertexData;

    public List<VertexData> getVertexData() {
        return vertexData;
    }

    public static void calculateNormals(final List<Primitive> primitives) {
        primitives.stream()
                .flatMap(primitive -> primitive.getVertexData().stream())
                .collect(Collectors.groupingBy(VertexData::getVertex)) // Map<Vector3f, List<VertexData>>
                .entrySet().stream()
                .map(Map.Entry::getValue) // List<VertexData>
                .forEach(Primitive::calculateNormalsOfVertexData);
    }

    private static void calculateNormalsOfVertexData(final List<VertexData> vertexData) {
        Vector3f averageNormal = vertexData.stream()
                .map(VertexData::getNormal)
                .reduce(new Vector3f().zero(), (n1, n2) -> n1.add(n2))
                .scale(1f / vertexData.size());
        vertexData.forEach(vd -> vd.setNormal(averageNormal));
    }
}

public class VertexData {
    private Vector3f vertex;
    private Vector3f normal;

    public VertexData(final Vector3f vertex, final Vector3f normal) {
        this.vertex = vertex;
        this.normal = normal;
    }

    public Vector3f getVertex() {
        return vertex;
    }

    public void setVertex(final Vector3f vertex) {
        this.vertex = vertex;
    }

    public Vector3f getNormal() {
        return normal;
    }

    public void setNormal(final Vector3f normal) {
        this.normal = normal;
    }

    @Override
    public String toString() {
        return "VertexData(" + vertex + ", " + normal + ")";
    }

    @Override
    public int hashCode() {
        int hash = 7;
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final VertexData other = (VertexData) obj;
        if (!Objects.equals(this.vertex, other.vertex)) {
            return false;
        }
        if (!Objects.equals(this.normal, other.normal)) {
            return false;
        }
        return true;
    }
}

public class CustomCollectors {
    /**
     * Returns a mapping of an index to elements of the stream following the
     * natural order in which the elements are to be encountered.
     *
     * @param <T> The type of the elements in the stream
     * @return A Map<Long, T> that is indexed
     */
    public static <T> Collector<T, ?, Map<Long, T>> indexing() {
        return Collector.of(
                HashMap::new,
                (map, t) -> map.put(Long.valueOf(map.size() + 1), t),
                (left, right) -> {
                    final long size = left.size();
                    right.forEach((k, v) -> left.put(k + size, v));
                    return left;
                },
                Collector.Characteristics.CONCURRENT
        );
    }
}

Now onto the real code snippet that I would like to have reviewed. What the code does is the following: given a List<Primitive>, I want to obtain an indexed Map<Long, VertexData> of the distinct VertexData of the Primitives.

My latest piece of code:

Map<Long, VertexData> vdMapping = primitives.stream()
        .flatMap(primitive -> primitive.getVertexData().stream())
        .distinct()
        .collect(CustomCollectors.indexing());
vdMapping.forEach((k, v) -> System.out.println("k = " + k + " / v = " + v));

Before the last refactoring:

AtomicLong index = new AtomicLong();
Map<Long, VertexData> vdMapping = primitives.stream()
        .flatMap(primitive -> primitive.getVertexData().stream())
        .distinct()
        .collect(Collectors.toMap(k -> index.getAndIncrement(), v -> v));
vdMapping.forEach((k, v) -> System.out.println("k = " + k + " / v = " + v));

And in comparison, how a Java 7 piece of code would look:

Map<Long, VertexData> vdMapping = new HashMap<>();
Set<VertexData> vdSet = new HashSet<>();
long index = 0;
for (Primitive primitive : primitives) {
    for (VertexData vd : primitive.getVertexData()) {
        vdSet.add(vd);
    }
}
for (VertexData vd : vdSet) {
    vdMapping.put(index++, vd);
}
for (Map.Entry<Long, VertexData> entry : vdMapping.entrySet()) {
    System.out.println("k = " + entry.getKey() + " / v = " + entry.getValue());
}

A better name for CustomCollectors.indexing() is also welcome.
Java 8 stream collector for numbering vertices
java;stream
long vs. int

In your map you are keyed off the long value of the index. The long is simply related to the size of the map as you accumulate things. A Map can never have more than Integer.MAX_VALUE members, thus you can never accumulate more than that number of key values... so why are you using a Long when an Integer will suffice?

Simplification

Consider this simplification, where the distinct phase is done in an intermediate Collector:

Map<Long, VertexData> vdMapping = primitives
        .stream()
        .collect(Collector.of(
                LinkedHashSet<VertexData>::new,
                (acc, t) -> acc.add(t),
                (left, right) -> { left.addAll(right); return left; }))
        .stream().collect(CustomCollectors.indexing());
vdMapping.forEach((k, v) -> System.out.println("k = " + k + " / v = " + v));

Conclusion

In this particular case, I am not certain that the streaming API from Java 8 is the right solution. You are creating much more uncertainty than needs to happen. The right solution to this is a LinkedHashSet data structure. Your Java 7 equivalent is a better solution, but change the HashSet to a LinkedHashSet, and then your results are going to be deterministic. The code is simpler and more maintainable, and the parallel benefits of your stream are not possible anyway, so the added overheads of creating intermediate maps, running complicated distinct() filters, and other processes are just painful... Just because it can be done with a stream does not mean that is the better solution.

Collector associativeness

One of the properties of a Collector is that it is supposed to be associative: "The associativity constraint says that splitting the computation must produce an equivalent result." Your collector is not associative. Consider your implementation:

return Collector.of(
        HashMap::new,
        (map, t) -> map.put(Long.valueOf(map.size() + 1), t),
        (left, right) -> {
            final long size = left.size();
            right.forEach((k, v) -> left.put(k + size, v));
            return left;
        },
        Collector.Characteristics.CONCURRENT
);

If your collector is used on a concurrent stream, there is no way for you to determine the order of the combination function calls:

(left, right) -> {
    final long size = left.size();
    right.forEach((k, v) -> left.put(k + size, v));
    return left;
},

For example, suppose your stream is split in two, and the two parts are accumulated in two maps mapA and mapB. Your collector should produce the same results regardless of whether left == mapA && right == mapB or whether left == mapB && right == mapA. Your streams will produce non-deterministic values for your inputs because your Collector is non-deterministic. I am not certain how I would solve this problem... there has to be a way to 'tag' the data at the beginning of the stream such that it is labelled with a 'key' sooner, instead of arbitrarily and non-deterministically assigning one later.

Conclusion

I believe the code produces results that are 'correct' in the context of the way you use it, but the results are non-deterministic, and thus are going to be a problem in the future when things go wrong. You need to do something to fix the non-determinism earlier in the stream:

Map<Long, VertexData> vdMapping = primitives.stream()
        .flatMap(primitive -> primitive.getVertexData().stream())
        .distinct()
        .collect(CustomCollectors.indexing());
vdMapping.forEach((k, v) -> System.out.println("k = " + k + " / v = " + v));

I believe the only decent solution would be to do something like guarantee sequential processing until the data leaves the distinct() phase, and then add a map phase at that point which numbers the VertexData that exits.
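A minimal sketch of that closing suggestion (illustrative only: VertexData is stubbed as String, the names are invented, and it relies on the pipeline staying sequential, since a stateful key mapper is only deterministic for a sequential, encounter-ordered stream):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class IndexingSketch {

    public static Map<Long, String> index(List<List<String>> primitives) {
        long[] counter = {0};  // effectively-final holder for the running index
        return primitives.stream()             // NOTE: must stay sequential
                .flatMap(List::stream)
                .distinct()                    // de-duplicate in encounter order
                .collect(Collectors.toMap(
                        v -> ++counter[0],     // 1-based index, assigned in order
                        v -> v,
                        (a, b) -> a,           // merge never fires after distinct()
                        LinkedHashMap::new));  // preserves insertion order
    }

    public static void main(String[] args) {
        Map<Long, String> m = index(List.of(List.of("a", "b"), List.of("b", "c")));
        m.forEach((k, v) -> System.out.println("k = " + k + " / v = " + v));
    }
}
```

Because the numbering happens after distinct() and the stream is never parallel, the indices are deterministic for a given input order, which addresses the non-determinism complaint without a custom Collector.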
_softwareengineering.301101
I am currently studying operating systems. In the "Communication in Client-Server Systems" chapter, I've heard that using the process ID instead of the port number has problems, but I don't understand why. I think using IP + PID is possible, because each process has its own process ID (PID). Is my thinking wrong?
Why can't socket-communication use process_id instead of port_number?
sockets
null
_unix.125090
While browsing processes on a shared server in top, I accidentally hit the r key, which prompted me with renice. I had no clue what this was going to do with my input and found no way to back out. I tried ^C, ^D, and <ESC>, among other things, and ended up just typing some garbage like "asdf;", which got me out. Is there a sane way to cancel a command that you type interactively in top?
top: how to cancel current command?
top
When prompted for the PID to renice, entering any value that isn't a positive integer will exit the renice mode with an error message. Once you enter a PID, however, you are stuck entering a priority; any invalid entry will cause the get_int function to return -1, which will set the priority to -1. The only way to avoid entering a priority is to kill top. Ctrl-C should work. Ctrl-D or enter will cause the niceness to be set to -1.Source: Procps source code
_softwareengineering.40183
I'm an undergraduate in my 3rd year of a Software Engineering degree. Starting this year, my university has introduced a new course called 'Compiler Construction', which teaches the basics of the theory of building a compiler. What would be the real-world advantage for a software engineer of learning about compiler construction?
What is the advantage of learning about and understanding compiler construction?
tools;compiler
There is a practical side to learning compiler construction. There have been several occasions where I've needed to build parsers to implement some app-specific command language. It allowed me to create flexible, scriptable interfaces to the app. It also gives you greater insight into why some languages are designed the way they are, and why no language is perfect. It's a tough course, one of the tougher ones in the curriculum. I made the mistake of taking it during a summer session; never take a compilers course in a summer session, your brain will explode.
_unix.93089
I have some sample code from OpenGL tutorials, developed in Visual Studio 2008 or 2010. Now I want to execute the resulting .exe through Wine. How do I do that?
How to execute an OpenGL .exe through Wine in Ubuntu?
wine;opengl
null
_softwareengineering.195128
As a DBA, most SQL is submitted to my team for review. We do not have a SQL developer, so the code is frequently very, very inefficient. Our current process is:

1. SQL is submitted for review right before it goes to beta.
2. Issues are pointed out but frequently not resolved, because the application has already been written around it and is scheduled to be deployed to beta the same week.
3. If the DBA team identifies an issue, the developer response is "show me how to fix it, then".

It should be noted that the DBA team has no requirements around any code, no context, and no idea what thousands of lines of a stored procedure are supposed to be doing.

Is this a typical scenario? Who is responsible for fixing the code? Does your DBA typically rewrite the stored procedure for you?
Who is responsible for fixing failed SQL code reviews?
code reviews
null
_codereview.20523
I'm very new both to programming and to R, so please bear with me. :-)

I've written the following bit of code, which works perfectly well and runs through a data file with 17446 rows in about 35 seconds. I don't really have a problem with this, but I am sure it could be a lot more elegant, probably with the use of the tapply function. I would love to see how you experts would rewrite this more efficiently, and am sure the rewrite would teach me a lot. I've included the first few rows of the output file, and hopefully the code should be fairly obvious (it must be if I managed to write it!): it's a simple filter based on the standard deviation of a subset in one column dictating the output to another column, which is set up at the start of the code. If a value in the NEE column is more than 2 stdevs from the preceding three values, then the value is taken from the mean of a subset in the NEE2 column; otherwise the value is copied from the first (if that makes sense). Please also note the count variable, as I need to retrieve the number of values replaced. Hope this piques someone's interest, and thanks in advance for your time.

JonP

Dspke <- read.csv(file.choose())
nr = nrow(Dspke)
count = 0
Dspke$NEE2 <- (1:nr) # creates a new column ready for input of values

for (i in 4:nr) {
  # standard deviation of the previous three values in NEE
  stdev <- sd(Dspke$NEE[(i-2):i])
  # if stdev >= 2 then NEE2 value is mean of the previous 3 values in NEE2, else copy value from NEE
  if (stdev >= 2) {
    Dspke$NEE2[i] <- mean(Dspke$NEE2[(i-2):i-1])
    count = count + 1
  } else {
    Dspke$NEE2[i] <- Dspke$NEE[i]
  }
}

write.csv(Dspke, "Dspke.csv")

  Date_Time            NEE       NEE2
1 03/01/2012 13:00 -2.300000  1.00000
2 03/01/2012 13:30 -2.385610  2.00000
3 03/01/2012 14:00 -2.081935  3.00000
4 03/01/2012 14:30 -1.778260 -1.77826
5 03/01/2012 15:00 -2.409490 -2.40949
6 03/01/2012 15:30 -0.741030 -0.74103
More elegant filter script in R
r
null
_softwareengineering.70554
All over the Internet I see people advocating the use of enum or const rather than #define for defining constants. Are there times when #define is more appropriate, or equally appropriate? Please explain. Are there times when #define is just awful? Please elaborate. (You can say 'always' if you really wish, but I'm looking for more specific answers.) (Yes, I am trying to justify my use of #define, which to me looks so much neater.)
Is there an appropriate use for #define for constants?
c++
To keep it simple: they're right and you're wrong. There's rarely a good reason to use #define for a constant in C++. Yes, they're pretty awful. In particular, they completely ignore the scopes of normal identifiers, and instead always have file scope. You can work around that (e.g., using prefixes on the names you define to avoid collisions), but even at best they don't fit very well with the rest of the language.

I suspect your feeling that it "looks neater" is more a matter of habituation than any intrinsic superiority. When I first started moving from C to C++ I tended to feel the same way. Purely intellectually I could understand the points being raised, but emotionally they just seemed wrong. For a while I considered it kind of silly to change something that had worked fine forever. I had to really force myself to use const instead, and at first didn't really believe in it, but fairly quickly I not only got used to it, but ended up admitting that it really was better.
_unix.85026
Our Centreon web interface is so slow! It takes about 3-4 minutes to get the results of a click, while Nagios is working perfectly. Thanks to Google, I guess it comes from MySQL, but I can't understand anything about it. If someone has already solved this, that would be perfect!

P.S.: also posted on the Centreon forum and Server Fault.
Centreon Web Interface Slow
interface;nagios
null
_softwareengineering.279403
I've written my own C# TCP communications module (using SocketAsyncEventArgs, although that's presumably irrelevant). My module runs at both ends of the connection, client and server. As part of its design it is supposed to detect when the connection fails and then automatically try to reestablish it. I'd like to hear ideas about how to test this. One solution is to run it on two physical machines, unplug the network cable for a while, and then reconnect it, but this isn't very automated. I'm wondering if there is some kind of WinSock hook program that can be used to simulate connection failures? Or any other suggestions?
How to test Windows .NET TCP program handling of connection failure
.net;testing;windows;tcp
null
_unix.10531
I've been trying to learn *nix, and I think I'm doing pretty well as far as basic commands go, and I think I understand a lot of the monitoring-type commands, etc. In short, I think I'm doing okay with syntax-type stuff, and doing the setup of xyz is more or less straightforward. But what I really want to start learning is how to troubleshoot and diagnose problems and be able to fix them. For example, if I go to my website and it's not loading, what would be the first thing I should check? That sort of thing.

So I figured there are probably some good books out there about what to do when things go wrong: what to start looking for, how to identify what is going wrong, how to fix it, and so on. I was looking for some recommendations on where I should turn for that. Ideally I would like book recommendations, because I'm old-fashioned and like being able to have something in my hands, and also for bathroom-reading situations. :)

Any good books out there? I did do a little bit of research before posting here, but after a while of trying to look at various books, it became clear that I am currently too much of a noob to figure out whether the book I'm getting is really going to give me what I'm after. It seems like most books I've looked at so far focus on installation, backup, and general syntax, but that stuff is easy and straightforward to digest. I'm looking for the stuff that will help me become a better detective and *nix problem solver.

P.S. I'm currently using CentOS 5.3, but from what I can tell, a lot of things are generic and carry over from one *nix system to another, so I don't think it necessarily needs to be CentOS-specific.

Edit: I ended up getting 3 books:

Linux Troubleshooting Bible
Linux Server Hacks
Linux Server Hacks, Volume 2 (can't post the link due to posting restrictions, but you can find it easily enough from the first link)
good unix troubleshooting book
centos;books;troubleshooting
null
_unix.284017
I am trying to find the physical addresses of heap variables, stack variables and memory mapped peripheral addresses using the /proc/{pid}/pagemap file, using the steps detailed in the file: http://lxr.free-electrons.com/source/Documentation/vm/pagemap.txt. The procedure detailed works well for stack and heap variables. However, for memory mapped peripherals no page is found in the /proc/{pid}/pagemap file. The output of 'cat /proc/{pid}/maps' is:

00008000-0000a000 r-xp 00000000 b3:02 289852     /home/linaro/ocm_test/write-memory
00011000-00012000 r--p 00001000 b3:02 289852     /home/linaro/ocm_test/write-memory
00012000-00013000 rw-p 00002000 b3:02 289852     /home/linaro/ocm_test/write-memory
00013000-00034000 rw-p 00000000 00:00 0          [heap]
b2efe000-b6dfe000 rw-s 00001000 b3:02 284849     /dev/uio0
b6dfe000-b6ed2000 r-xp 00000000 b3:02 282416     /lib/arm-linux-gnueabihf/libc-2.15.so
b6ed2000-b6eda000 ---p 000d4000 b3:02 282416     /lib/arm-linux-gnueabihf/libc-2.15.so
b6eda000-b6edc000 r--p 000d4000 b3:02 282416     /lib/arm-linux-gnueabihf/libc-2.15.so
b6edc000-b6edd000 rw-p 000d6000 b3:02 282416     /lib/arm-linux-gnueabihf/libc-2.15.so
b6edd000-b6ee0000 rw-p 00000000 00:00 0
b6ee0000-b6ee2000 r-xp 00000000 b3:02 27519      /usr/lib/libinterface.so
b6ee2000-b6ee9000 ---p 00002000 b3:02 27519      /usr/lib/libinterface.so
b6ee9000-b6eea000 r--p 00001000 b3:02 27519      /usr/lib/libinterface.so
b6eea000-b6eeb000 rw-p 00002000 b3:02 27519      /usr/lib/libinterface.so
b6efb000-b6f12000 r-xp 00000000 b3:02 282407     /lib/arm-linux-gnueabihf/ld-2.15.so
b6f13000-b6f14000 rw-p 00000000 00:00 0
b6f14000-b6f15000 rw-s 00000000 b3:02 284849     /dev/uio0
b6f15000-b6f19000 rw-p 00000000 00:00 0
b6f19000-b6f1a000 r--p 00016000 b3:02 282407     /lib/arm-linux-gnueabihf/ld-2.15.so
b6f1a000-b6f1b000 rw-p 00017000 b3:02 282407     /lib/arm-linux-gnueabihf/ld-2.15.so
bee36000-bee57000 rw-p 00000000 00:00 0          [stack]
bef1f000-bef20000 r-xp 00000000 00:00 0          [sigpage]
ffff0000-ffff1000 r-xp 00000000 00:00 0          [vectors]

When I try to find the physical addresses of
0x00013000 or 0xbee36000 it works fine. However the page map file returns no page found when I try to find the physical address corresponding to 0xb2efe000 which belongs to /dev/uio0. I am trying to do this for verification purposes. I know a physical address exists because I have used mmap on 0x1b90000 ignored to find 0xb2efe000. Could someone please explain why the /proc/{pid}/pagemap file doesn't contain the physical address?
Pagemap on memory mapped devices not working
ubuntu;linux kernel;virtual memory;mmap
null
_unix.230906
When I try to find a specific byte with grep not using a pipe I get some output:

$ grep -aboP \\x55 bigfile
510:U
1049086:U
1049598:U

But when a pattern is supplied via pipe then there is a memory exhausted error:

$ echo \\\\x55 | grep -aboPf - bigfile
grep: memory exhausted

Why does it happen and how to make it work?
grep: memory exhausted on big file when using pipe
grep
null
_softwareengineering.343014
Here's my issue. I have different types of methods which make HTTP requests to a REST API. To keep things clean, I have methods that take different types of request objects as parameters. Example below.

Task<IEventResponse> FindEventAsync(IGetEventsRequest data, CancellationToken token);

So every time my controller calls this method, it has to create an object of type IGetEventsRequest. Creating a new() instance of its implementation feels dumb, and I started thinking about creating a generic factory which could create these objects for me and also work for all kinds of object types. Any help with how I could achieve this?

I am looking for this kind of syntax:

await FindEventsAsync(_requestFactory.Create(/* somehow specify the type of this and generate a new object based on the type */), CancellationToken.None);
Help trying to implement a request object factory
c#;design;design patterns;object oriented design;implementations
null
_unix.355367
I use Arch Linux, and I'm using pptp to connect to a VPN. Following the Arch wiki steps, it worked before I updated Arch. I updated Arch following the Arch homepage commands:

pacman -Syuw                           # download packages
rm /etc/ssl/certs/ca-certificates.crt  # remove conflicting file
pacman -Su                             # perform upgrade

Then when I run pptpclient I either get

Error: either "to" is duplicate, or "uid" is a garbage.

or, after routing and pinging for a long time, I get the following error:

Modem hangup
Sent 1002073833 bytes, received 0 bytes
MPPE disabled
Connection terminated
Waiting for 1 child processes...
script /etc/ppp/ip-down, pid 2726
Script /etc/ppp/ip-down finished (pid 2726), status = 0x0

I do not know what is wrong, but I feel like pptpclient cannot receive any data after routing. Does anyone have any ideas how to proceed?

UPDATED: when I debug pptpclient using the command:

sudo pon myVpn debug dump logfd 2 nodetach

the debug info is:

pppd options in effect:
debug           # (from command line)
nodetach        # (from command line)
logfd 2         # (from command line)
dump            # (from command line)
noauth          # (from /etc/ppp/options)
name dxcqcv     # (from /etc/ppp/peers/myVpn)
remotename PPTP # (from /etc/ppp/peers/myVpn)
                # (from /etc/ppp/options)
pty pptp p1.hk3.flydidu.com --nolaunchpppd # (from /etc/ppp/peers/myVpn)
crtscts         # (from /etc/ppp/options)
                # (from /etc/ppp/options)
asyncmap 0      # (from /etc/ppp/options)
mru 1400        # (from /etc/ppp/options)
mtu 1400        # (from /etc/ppp/options)
silent          # (from /etc/ppp/options)
lcp-echo-failure 4  # (from /etc/ppp/options)
lcp-echo-interval 30 # (from /etc/ppp/options)
hide-password   # (from /etc/ppp/options)
ipparam myVpn   # (from /etc/ppp/peers/myVpn)
proxyarp        # (from /etc/ppp/options)
usepeerdns      # (from /etc/ppp/peers/myVpn)
nobsdcomp       # (from /etc/ppp/options)
nodeflate       # (from /etc/ppp/options)
require-mppe-128 # (from /etc/ppp/peers/myVpn)
noipx           # (from /etc/ppp/options)

using channel 8
Using interface ppp0
Connect: ppp0 <--> /dev/pts/4
***Error: either "to" is duplicate, or "uid" is a garbage.***
rcvd [LCP ConfReq id=0x1 <asyncmap 0x0> <auth chap MS-v2> <magic 0x80de6b20> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <mru 1400> <asyncmap 0x0> <magic 0xd2626b7a> <pcomp> <accomp>]
sent [LCP ConfAck id=0x1 <asyncmap 0x0> <auth chap MS-v2> <magic 0x80de6b20> <pcomp> <accomp>]
rcvd [LCP ConfAck id=0x1 <mru 1400> <asyncmap 0x0> <magic 0xd2626b7a> <pcomp> <accomp>]
sent [LCP EchoReq id=0x0 magic=0xd2626b7a]
rcvd [LCP EchoReq id=0x0 magic=0x80de6b20]
sent [LCP EchoRep id=0x0 magic=0xd2626b7a]
rcvd [CHAP Challenge id=0x7d <94f97c87ea84a9097b3910d403d0ab14>, name = pptpd]
Warning - secret file /etc/ppp/chap-secrets has world and/or group access
added response cache entry 0
sent [CHAP Response id=0x7d <289521c7e2bf1d1a2d3bd8addb8b494e0000000000000000d8ae891c6114fbe4097ef6b89fef69e516540b92f2198a3e00>, name = dxcqcv]
rcvd [LCP EchoRep id=0x0 magic=0x80de6b20]
rcvd [CHAP Success id=0x7d S=A37F545150DBF921600B0B681B1E1ADDD7238617]
response found in cache (entry 0)
CHAP authentication succeeded
sent [CCP ConfReq id=0x1 <mppe +H -M +S -L -D -C>]
rcvd [CCP ConfReq id=0x1 <mppe +H -M +S -L -D -C>]
sent [CCP ConfAck id=0x1 <mppe +H -M +S -L -D -C>]
rcvd [CCP ConfAck id=0x1 <mppe +H -M +S -L -D -C>]
MPPE 128-bit stateless compression enabled
sent [IPCP ConfReq id=0x1 <compress VJ 0f 01> <addr 0.0.0.0> <ms-dns1 0.0.0.0> <ms-dns2 0.0.0.0>]
rcvd [IPCP ConfReq id=0x1 <addr 10.10.0.1>]
sent [IPCP ConfAck id=0x1 <addr 10.10.0.1>]
rcvd [IPCP ConfRej id=0x1 <compress VJ 0f 01>]
sent [IPCP ConfReq id=0x2 <addr 0.0.0.0> <ms-dns1 0.0.0.0> <ms-dns2 0.0.0.0>]
rcvd [IPCP ConfNak id=0x2 <addr 10.10.0.21> <ms-dns1 8.8.8.8> <ms-dns2 8.8.4.4>]
sent [IPCP ConfReq id=0x3 <addr 10.10.0.21> <ms-dns1 8.8.8.8> <ms-dns2 8.8.4.4>]
rcvd [IPCP ConfAck id=0x3 <addr 10.10.0.21> <ms-dns1 8.8.8.8> <ms-dns2 8.8.4.4>]
Cannot determine ethernet address for proxy ARP
local IP address 10.10.0.21
remote IP address 10.10.0.1
primary DNS address 8.8.8.8
secondary DNS address 8.8.4.4
Script /etc/ppp/ip-up started (pid 8055)
Script /etc/ppp/ip-up finished (pid 8055), status = 0x0

At this point ppp0 was created, so I routed with:

sudo ip route replace default dev ppp0

It should have worked like before, but after updating Arch it didn't, and it shows an error message like:

Script pptp p1.hk3.flydidu.com --nolaunchpppd finished (pid 8047), status = 0x0
Modem hangup
Connect time 2.0 minutes.
Sent 851273846 bytes, received 0 bytes.
Script /etc/ppp/ip-down started (pid 8097)
MPPE disabled
sent [LCP TermReq id=0x2 "MPPE disabled"]
Connection terminated.
Waiting for 1 child processes...
script /etc/ppp/ip-down, pid 8097
Script /etc/ppp/ip-down finished (pid 8097), status = 0x0
pptpclient Error: either "to" is duplicate, or "uid" is a garbage
pptp
null
_unix.171176
A friend gave me an old Dell to fix for her daughter. So I put Lubuntu on it. I had a disk with Lubuntu on it that came with one of my magazines, so I installed it. This machine won't boot from USB and has no DVD drive.

So I installed. It seemed to install automatically without any guidance. After about an hour or so a login box appeared. I can only assume it's installed.

What is the default username and password? I didn't have to set one, yet it's asking me for one. I've looked online and couldn't find anything.
What is Lubuntu default password when it's distributed by magazines?
password;lubuntu
I believe these are the defaults:

username: lubuntu
password: blank (no password)

That's literally nothing, for the password.

Linux Format magazine

If you're using the compilation CD/DVD that comes with this magazine, the username should be ubuntu, again with a blank password.

Ubuntu 14.04 compilation disc

References

- What is the default user/password?
- How to disable autologin in Lubuntu?
_unix.375354
This is the content of my text file named fnames.txt:

SAMPLE_NIKE_856_20170703*
SAMPLE_ADIDAS_856_20170702*
SAMPLE_ANTA_856_20170630*
SAMPLE_JORDAN_856_20170627*
SAMPLE_CONVERSE_856_20170229*

This is my script named fn.sh:

#!/bin/sh
#
#
while read LINE
do
    find -name $LINE
done < fnames.txt

It returns nothing.

What I want to happen is that for each line the script will execute the find command and the output will be stored in another text file called files.txt, e.g.:

LINE 1:

find -name SAMPLE_NIKE_856_20170703*

then returns the filename it is looking for:

./SAMPLE_NIKE_856_20170703_80_20_304_234_897.dat

LINE 2:

find -name SAMPLE_ADIDAS_856_20170702*

then returns the filename it is looking for:

./SAMPLE_ADIDAS_856_20170702_56_98_123_245_609.dat

The script will continue until all lines have been executed by the find command.
How to execute a find command using a set of strings from a line in a text file
linux;bash;shell script
I finally found the answer. I just needed to use the full path to fnames.txt, /path/to/my/fnames.txt, because I'm executing the script from another directory. (Quoting "$LINE" also keeps the shell from expanding the glob before find sees it.)

#!/bin/sh
#
#
while read -r LINE
do
    find . -name "$LINE"
done < /path/to/my/fnames.txt

Output:

./SAMPLE_NIKE_856_20170703_80_20_304_234_897.dat
./SAMPLE_ADIDAS_856_20170702_56_98_123_245_609.dat
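A compact alternative, sketched under the same assumptions (GNU xargs available, one pattern per line, and the files.txt output file the question asks for; the demo file and pattern names below are taken from the question):

```shell
# Throwaway demo directory with one matching file and a pattern list.
tmp=$(mktemp -d) && cd "$tmp"
touch SAMPLE_NIKE_856_20170703_80_20_304_234_897.dat
printf 'SAMPLE_NIKE_856_20170703*\n' > fnames.txt

# Run one find per pattern line; the single quotes stop the shell from
# expanding the glob, so find itself does the matching. Matches land in files.txt.
xargs -I{} find . -name '{}' < fnames.txt > files.txt
cat files.txt   # -> ./SAMPLE_NIKE_856_20170703_80_20_304_234_897.dat
```

Like the loop, this runs find once per pattern; for a long pattern list, a single find invocation with several -o -name clauses would be faster.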
_webmaster.86606
So I have a theme that has my logo as an H1. The logo itself is an SVG file. I am curious if this is bad practice for SEO purposes. My audit shows 2 H1 tags on my pages due to this, and I am curious if I should take the H1 tag off my logo in the header now. I feel like it wouldn't make much sense to leave it there since my logo is not text-based. Here is how the code sits:

<h1 class="logo-collapse">
  <a href="https://example.com/" title="My Brand Name" class="logo">
    <img src="https://example.com/path/to/logo/logo.svg" alt="My Brand Name" width="150" height="55">
  </a>
</h1>
H1 Tag on Logo in Header
seo;heading
null
_codereview.23501
The objective of the following PHP function is simple: given an ASCII string, replace all ASCII chars with their Cyrillic equivalents. As can be seen, certain ASCII chars have two possible equivalents. For example: if the function is fed the string dvigatel it should get you двигател.

The problem: if I pass dvigat, the function will finish its execution in a fairly reasonable amount of time. But if I pass dvigatel, which is only a couple letters longer, the execution time exceeds my 30 second PHP execution time limit. Could anybody give me a few pointers here: what's wrong with this function, and why does it run so slow with an 8-char string?

function transliterate($query) {
    $map = array(
        a => array(, ), b => , c => array(, ), d => , e => ,
        f => , g => array(, ), h => , i => array(, ), j => array(, ),
        k => , l => , m => , n => , o => , p => , q => , r => ,
        s => , t => , u => array(, , ), v => array(, ), w => ,
        x => array(, ), y => array(, ), z => 
    );
    $query_array = preg_split('/(?<!^)(?!$)/u', $query);
    $en_letters = array_keys($map);
    $this->processed[] = $query;
    foreach ($query_array as $letter) {
        if (in_array($letter, $en_letters)) {
            if (!is_array($map[$letter])) {
                $query_translit = str_replace($letter, $map[$letter], $query);
            } else {
                foreach ($map[$letter] as $bg_letter) {
                    $query_translit = str_replace($letter, $bg_letter, $query);
                    if (!in_array($query_translit, $this->transliterations)) $this->transliterations[] = $query_translit;
                }
            }
        } else {
            $query_translit = $query;
        }
        if (!in_array($query_translit, $this->transliterations)) $this->transliterations[] = $query_translit;
    }
    foreach ($this->transliterations as $transliteration) {
        if (!in_array($transliteration, $this->processed)) {
            if (!preg_match(/[a-zA-Z]/, $transliteration)) {
                return;
            } else {
                $this->transliterate($transliteration);
            }
        }
    }
}
Given an ASCII string, replace all ASCII chars with their Cyrillic equivalents
php;optimization;recursion
Why a recursive function for such a simple task?

$in = 'This is your input';
$map = array( /* your char translation array here */ );
$out = '';
for ($i = 0; $i < strlen($in); $i++) {
    $char = $in[$i];
    if (isset($map[$char])) {
        if (is_array($map[$char])) {
            $newchar = $map[$char][0]; // whatever your multi-char selection logic is...
        } else {
            $newchar = $map[$char];
        }
    } else {
        $newchar = $char; // pass unmapped characters through unchanged
    }
    $out .= $newchar;
}
_unix.124535
I want to install gcc on my Solaris 11 system running in VirtualBox. I tried:

pkg install gcc-45

but I am getting:

pkg: 0/1 catalogs successfully updated:
Unable to contact valid package repository
Encountered the following error(s):
Unable to contact any configured publishers.
This is likely a network configuration problem.
http protocol error: code: 503 reason: Service Unavailable
URL: 'http://pkg.oracle.com/solaris/release'

I tried to set a proxy by entering:

export http_proxy=http://184.168.55.226:80

but it did not work. How can I manage this?
Solaris 11 pkg repository update fails
software installation;package management;solaris;gcc
There can be transient network errors between you and the pkg repo server which end up with you being unable to contact it.

A better place to ask this particular question would be the Oracle forums related to Solaris 11. (Try https://community.oracle.com/community/developer/english/server_%26_storage_systems/solaris/solaris_11 to start with.)

Does the problem occur if you try again right now? I can get to the search page for the repo server and found http://pkg.oracle.com/solaris/release/p5i/0/developer%2Fgcc-45.p5i very quickly.
_unix.327641
I am trying to make remote Ubuntu boot possible. I have done one little thing using this article, but there is one roughness. I would like to boot an Ubuntu Live CD on my client and work in this half-OS (using the "Try Ubuntu" option), while the article tells how to install Ubuntu Server. Definitely, we need to bring up DHCP and TFTP. I have done that, but there is a problem with /var/lib/tftpboot. In my case it looks like:

tree /var/lib/tftpboot/
/var/lib/tftpboot/
    boot.txt
    debian
        etch
            i386
    initrd.gz
    linux
    pxelinux.0
    pxelinux.cfg
        default

but that is enough to install Ubuntu Server, while I need to run an Ubuntu Live CD. Which files should I download, and where should I put them, to make that possible?
PXE boot Live CD
ubuntu;pxe;tftp
null
_codereview.136286
Please review the following implementation of heapsort and pay attention to the following:

1. Have I correctly chosen the names InputIt and BidirIt for my iterators?
2. Is there a way to make the "initialise iterator and then advance it" pattern occupy one line instead of two?
3. Is the it != ix - 1 comparison of iterators ok?

Here is the code:

#include <iterator>
#include <functional>

template<
    class InputIt,
    class Key = std::less_equal<
        std::iterator_traits<InputIt>::value_type
    >
>
void plunge(
    const InputIt ix,  // first element of heap
    const InputIt iy,  // one-past last element of heap
    const InputIt iz,  // element to be plunged
    Key key = Key()    // comparison key to use (up or down)
) {
    auto ii = iz;
    auto il = ix;
    std::advance(il, 2 * std::distance(ix, ii) + 1);
    while (il < iy) {
        auto ir = il + 1;
        auto it = ir < iy && key(*ir, *il) ? ir : il;
        if (key(*ii, *it)) { return; }
        std::iter_swap(ii, it);
        std::swap(ii, it);
        il = ix;
        std::advance(il, 2 * std::distance(ix, ii) + 1);
    }
}

template<
    typename BidirIt,
    typename Key = std::greater_equal<
        std::iterator_traits<BidirIt>::value_type
    >
>
void heapsort(const BidirIt ix, const BidirIt iy, Key key = Key()) {
    auto it = ix;
    std::advance(it, std::distance(ix, iy) / 2);
    for (; it != ix - 1; --it) {
        plunge(ix, iy, it, key);
    }
    for (it = iy - 1; it != ix - 1; --it) {
        std::iter_swap(ix, it);
        plunge(ix, it, ix, key);
    }
}
Heapsort implementation in C++14
c++;algorithm;c++14;heap sort
Answers:

1. An input iterator is not suitable for your plunge function. The reason is in std::advance(): it increments the iterators, which invalidates all previous ones. Bidirectional iterators are ok.
2. Use std::next() as mentioned in the comment by user2296177.
3. No. Only random access iterators are mandated to support the operation.

Here is the piece of the C++14 standard that proves answer #1: paragraph 24.2.3 of N4296 (Input iterators), note from Table 107 - Input iterator requirements (in addition to Iterator). Expression:

++r;

pre: r is dereferenceable.
post: r is dereferenceable or r is past-the-end.
post: any copies of the previous value of r are no longer required either to be dereferenceable or to be in the domain of ==.

where r is an input iterator. Additionally, it is stated that post-increment has the same effect. Either pre- or post-increment has to be used to implement std::distance(), so it invalidates all copies of input iterators according to the standard.

Comments about the code:

template<
    class InputIt,
    class Key = std::less_equal<
        std::iterator_traits<InputIt>::value_type
    >
>

Very arguable gain in readability. The combo of "first" and "last" is preferred when referring to a range; "value" is used to refer to some instance of T. ii and il make things even worse. Don't be lazy to type! Bad idea to make the iterators const; maybe you wanted iterators pointing to const? Nice usage of the standard library (though you will need to be sure what it does, which you're trying to do). The code does not work for input iterators, as mentioned in the answers. Nice one for the first try overall.
_webmaster.36080
I have content that appears within a corporate website inside an iframe. Several departments contribute their own CSS files to manage the overall UI and design.My problem is that they may use selectors for elements like td (for instance), without notice. Of course that will affect my own content in the frame unless I add a class to every td. I'm just using td as an example: the generic style for any element could change without notice.Is there any method/convention/practice I can use to protect my own styling?
protecting css selectors on large website
css
Include reset.css after loading all your CSS files:

html {
    color: #000;
    background: #FFF;
}
body, div, dl, dt, dd, ul, ol, li, h1, h2, h3, h4, h5, h6, pre, code,
form, fieldset, legend, input, button, textarea, p, blockquote, th, td {
    margin: 0;
    padding: 0;
}
table {
    border-collapse: collapse;
    border-spacing: 0;
}
fieldset, img {
    border: 0;
}
address, caption, cite, code, dfn, em, strong, th, var, optgroup {
    font-style: inherit;
    font-weight: inherit;
}
del, ins {
    text-decoration: none;
}
li {
    list-style: none;
}
caption, th {
    text-align: left;
}
h1, h2, h3, h4, h5, h6 {
    font-size: 100%;
    font-weight: normal;
}
q:before, q:after {
    content: '';
}
abbr, acronym {
    border: 0;
    font-variant: normal;
}
sup {
    vertical-align: baseline;
}
sub {
    vertical-align: baseline;
}
legend {
    color: #000;
}
input, button, textarea, select, optgroup, option {
    font-family: inherit;
    font-size: inherit;
    font-style: inherit;
    font-weight: inherit;
}
input, button, textarea, select {
    *font-size: 100%;
}

This will reset all main selectors to their defaults, removing any obtrusive styling.
_softwareengineering.347665
The problem: the game Dota 2 is a competitive multiplayer game with various heroes. Biweekly the developers publish a game patch that balances some hero values with respect to community feedback. Here is a link to an example updates page: http://www.dota2.com/news/updates/

Example update data:

* Chen: Level 10 Talent increased from +25 Movement Speed to +30
* Chen: Level 20 Talent increased from -40s Respawn Time to -45s
* Clockwerk: Base strength increased by 2

If a new update is available, the game always downloads the update files first, then installs them, and then opens the game.

The question: as can be seen from the update log, most of these updates are balancing updates. They do not change game dynamics and, as far as I can think, they only change one value in a mathematical expression (except for bigger updates where bugs are fixed). Is it plausible to store these values (such as Base Strength, Respawn Time) in a database and only change that value when a new power balancing is done?

Extra info: I have found this Q&A which is similar to the problem above, but there are some differences that can affect the answer.

1) In my case the database is only used by the developer.
2) The game is massively multiplayer, so a user will have an internet connection. Another option is storing the values in a config-like file and only checking the database to see if a new version is available. If yes, download the new values and update the config file; if not, just open the game with the current values. That way a user can still play an outdated game client offline.
3) The game is internationally competitive, and manipulating that hero info is simply stealing the business logic of the company. Database calls can be intercepted and manipulated more easily than compiled source code.

Why not Game Design SE: my question and example are based on a game, but what I ask is a more general software engineering approach; that's why I find it more suitable to ask here than on the Game Design SE. There can be other examples of that approach for programs with similar features.

Pros:
- There is no pre-game installation for power balancing patches. (User perspective)
- A new value update does not require a new deploy. (Dev perspective)

Cons:
- Manipulation of precious data. (It can be encrypted.)
- A player notices the update and checks the update log, so they get informed; with a background value update that may not be the case. (For every new update a notice can be created automatically.)
Getting value updates from database rather than updating the client side?
architecture;database;data integrity
null
_unix.214556
Could you help me decode this assignment:

jvm_xmx=${jvm_xmx:-1024}

I can't understand what it does.
What does colon (':') in bash variable resolution syntax mean?
bash;shell script
From the man page for bash:

${parameter:-word}
    Use Default Values. If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.

So if jvm_xmx is already set to something, it is left unchanged. If it is not already set to something, it is set to 1024.

Example:

$ echo $jvm_xmx

$ jvm_xmx=${jvm_xmx:-1024}
$ echo $jvm_xmx
1024
$ jvm_xmx=2048
$ jvm_xmx=${jvm_xmx:-1024}
$ echo $jvm_xmx
2048
$
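Since the man page says "unset or null", the default also kicks in when the variable is set but empty, which is easy to miss; a quick sketch of the three cases:

```shell
unset jvm_xmx
echo "${jvm_xmx:-1024}"    # unset        -> 1024
jvm_xmx=""
echo "${jvm_xmx:-1024}"    # null (empty) -> 1024
jvm_xmx=2048
echo "${jvm_xmx:-1024}"    # already set  -> 2048
```

The related form without the colon, ${jvm_xmx-1024}, substitutes the default only when the variable is unset, not when it is merely empty.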
_webmaster.102142
This article describes how to direct incoming traffic coming from "fake" URLs of this type:

http://www.example.com/seo-sample/article-install-apache-on-linux.php

to a PHP file that really uses URLs of this type:

http://www.example.com/seo-sample/article_show_friendly.php?url=install-apache-on-linux

Here is the example code used in the .htaccess file:

RewriteEngine on
RewriteRule ^article-(.*).php$ ./article_show_friendly.php?url=$1

However, my pages have multiple query parameters in their URLs. How do I make this work for them?

http://example.com/keyboard/keyboard-chart.php?gam=141&sty=16&lay=1&tit=sid-meiers-alpha-centauri-smac-1
Rewrite URL with multiple queries
htaccess;mod rewrite;url rewriting
null
_cs.41558
In the proof of the existence of length-preserving one-way functions assuming the existence of one-way functions (see "Length-preserving one-way functions"), we need $p(n)$ to be a function which can not only be computed in polynomial time, but whose inverse is also polynomial-time computable. How do we ensure that? $p(n)$ can be chosen to be monotonically increasing, but is that sufficient?
Length Preserving One way function
complexity theory;one way functions
null
_webmaster.76681
My client has a Google Voice "Call me" button on their website, and I'm trying to figure out how to set up tracking for clicks on it, as it's not a standard link but Flash.

I tried Googling for a way to achieve this but couldn't find proper instructions. Can somebody shed some light on this?
Track clicks on Flash button in Google Analytics
google analytics;tracking;flash;goal tracking;universal analytics
null
_webapps.71223
I have a conversation group that shows up in my Facebook chat. But there never was a group chat, and as far as I know it cannot be erased. How do you erase it? I already checked instant messaging and there is nothing. Can other people see this group?
Group conversation in Facebook chat
facebook groups;facebook chat;instant messaging
null
_softwareengineering.261899
I am allocating memory on the stack, and distributing it manually to shared pointers. I end up doing something like this at startup (I've simplified it by ignoring alignment issues):

char pool[100];
std::shared_ptr<Obj> p = new(reinterpret_cast<void*>(pool)) Obj;
pool += sizeof(pool);

Of course, it is possible that the 100 is not enough, so in that case I'd like to throw an exception:

char pool[100];
char* poolLimit = pool + 100;

if (pool + sizeof(Obj) >= poolLimit)
    throw std::bad_alloc(); // is this a good idea?

std::shared_ptr<Obj> p = new(reinterpret_cast<void*>(pool)) Obj;
pool += sizeof(pool);
Should I throw std::bad_alloc?
c++;exceptions
null
_cogsci.4231
With the advent and proliferation of social media, forums and Q&A sites, there has been a matching increase in incidents of online bullying, or "cyber-bullying", that goes beyond trolling. All too often, the consequences of this kind of bullying (like any type of bullying) can be tragic.

What is the psychological motivation behind the cyber-bully? Is it a consequence of them feeling empowered by their apparent anonymity?

I have read the post "Why do teenagers take the internet and cyberbullying so seriously?", but my question is focused on the psychological motivations of the perpetrator of cyberbullying, not on why it is so damaging to the victims.

I am after authoritative (refereed) articles about this.
What motivates individuals to engage in cyberbullying?
social psychology;motivation;internet;social networks
From my understanding of the problem and my years of experience with the internet since the early days, when IRC was popular and web forums were just starting to emerge, I believe I can shed some light on this subject; probably not enough for a full answer, but more than just a comment.

I feel that a large part of the problem is the anonymity (or perceived anonymity), as well as the sense of detachment, that communication over the internet provides us with. I think that psychologically it is much easier to be critical, mean, cruel and otherwise lack empathy when not dealing with another person face-to-face or verbally. Just as text-based communication does not convey tone of voice to let us know when others online are being sarcastic, one can lose the sense of how much one's actions are hurting others when one is cruel online.

With respect to perceived anonymity, it becomes greatly easier to rationalize, or entirely forget, that one's actions have consequences. It's easy to attack someone online and never have them know who is attacking them. This is demonstrated by a 2005 study done by Qing Li of the University of Calgary (I am unable to find the actual link to the study but have found many places which reference it) that states that 41% of students surveyed did not know the identity of the perpetrators. To me this indicates that the anonymity provides a means for humans to rationalize behaviors we otherwise would be ashamed to be associated with.

The paper "Cyber bullying: Clarifying Legal Boundaries for School Supervision in Cyberspace" by Shaheen Shariff (McGill University, Canada) and Dianne L. Hoff (University of Maine, Orono, USA) also draws some very interesting parallels between cyberbullying and the 1954 novel Lord of the Flies by William Golding, speaking to the roots of cruelty in human nature as well as lack of supervision/perceived anonymity:

    Young people in cyber-space lose their inhibitions in the absence of no central power, clear institutional or familial boundaries, or hierarchical structures (Milson & Chu, 2002). As Bandura (1991) explained over a decade ago, physical distance provides a context in which students can ignore or trivialize their misbehavior, as easily as Golding's boys did on their distant island. In cyber-space this form of disengagement is amplified.

    Lack of institutional and parental rules in cyber-space have the effect of creating virtual islands similar to the physical islands in Lord of the Flies. The absence of adult supervision allows perpetrators free reign to pick on students who may not fit their definition of "cool" because of their weight, appearance, accent, abilities or disabilities (Shariff and Strong-Wilson, 2005). Cyber-space provides a borderless playground that empowers some students to harass, isolate, insult, exclude and threaten classmates. [...] Without limits and clear codes of conduct, communication in cyber-space (even among adults) can rapidly deteriorate into abuse because of the knowledge and sense of security that comes with the limited possibility of being detected and disciplined.

It also talks about teenage hormones being a driving factor:

    Moreover, adolescent hormones rage and influence social relationships as children negotiate social and romantic relationships and become more physically self-conscious, independent, and insecure (Boyd, 2000). Research on dating and harassment practices at the middle school level (Tolman, 2001) shows that peer pressure causes males to engage in increased homophobic bullying of male peers and increased sexual harassment of female peers to establish their "manhood". During this confusing stage of adolescent life, the conditions are ripe for bullying to take place. The Internet provides a perfect medium for adolescent anxieties to play themselves out.

I associate it somewhat with the notion by Thomas Hobbes that humans are warlike, that by our very nature we are completely self-centered, and that we form societies and governments by giving up some of our selfishness and collectively adhering to social contracts for a greater good. By the nature of online communities, we often lose this sense of responsibility to the community and to each other, and we are prone to regress to our more basic, warlike state. This is my personal belief; I am still searching for actual references with which I can back it up.
_unix.46104
I have an always-on machine running Debian/Squeeze with a couple of ethernet ports (it's doing various serverish jobs, including DHCP) and a LAN on each.

eth0 is on 192.168.7.* and has access to the internet (and the rest of the office LAN) via a gateway (NAT-translating DSL router) on that LAN at 192.168.7.1.

eth1 is on 192.168.1.*. I'm looking for a quick, easy and Debian-friendly way of configuring the machine so that anyone on the 192.168.1.* LAN only has access to the external internet and DHCP, and can't access any of the other stuff on 192.168.7.*.

I'm sure this is possible using an appropriate set of iptables rules. I suspect I'm looking for one of:

1. A simple guide to how to set up iptables to achieve the above (I can't be the first person to want to do this).
2. A pointer to some user-friendly firewall-configuring software which can do this (although note that I don't want to lock down the eth0 side and the 192.168.7.* network at all).
3. A suggestion of some software which is designed to do exactly this; I have an idea there are some packages intended to achieve something similar to what I want for internet cafes etc., but I'm not sure where to start looking or whether these might be overkill.

Any tools being in Debian Squeeze (or Wheezy) already is a big plus.
How to provide a guest LAN on one ethernet device
debian;networking;security;iptables;routing
This is fairly easy to do with iptables. In the below, 'wan-iface' is the interface that your WAN connection is on. Depending on how it's connected, it could be eth2, ppp0, etc. Also, note that you can rename Ethernet interfaces by editing /etc/udev/rules.d/70-persistent-net.rules (highly recommended when you have several: -i lan is much clearer than -i eth0).

You can write an init.d script to apply these rules at boot or use the iptables-persistent package. Or there are various firewall rule generators packaged (personally, I write iptables rules directly, as I often want to do weird things).

You'll need a NAT rule, if your existing one doesn't already cover it:

iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o wan-iface -j SNAT --to-source external-ip

external-ip is your actual IP address. If you have a dynamic one, change that line to:

iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o wan-iface -j MASQUERADE

Then you'll need a firewall rule to allow the traffic. I'm giving two here, depending on whether your default for FORWARD is DROP or not. It should be drop, but...

iptables -A FORWARD -i eth1 -o wan-iface -j ACCEPT # default is drop
iptables -A FORWARD -i eth1 ! -o wan-iface -j DROP # default is accept

Now, you just need to allow DHCP. Assuming your firewall is running DHCP, and that DNS is on the WAN (else, you'll need to allow them to talk to the DNS server):

iptables -A INPUT -i eth1 -p udp --dport bootps -j ACCEPT
iptables -A INPUT -i eth1 -j DROP # only if your default isn't drop

That's, I believe, the minimal config for this. You can additionally limit what traffic goes out to the Internet. For example, if you wanted web browsing only, instead of the FORWARD rules above, you'd do this (again assuming DNS on the WAN):

iptables -A FORWARD -i eth1 -o wan-iface -p tcp --dport domain -j ACCEPT
iptables -A FORWARD -i eth1 -o wan-iface -p udp --dport domain -j ACCEPT
iptables -A FORWARD -i eth1 -o wan-iface -p tcp --dport http -j ACCEPT
iptables -A FORWARD -i eth1 -o wan-iface -p tcp --dport https -j ACCEPT
iptables -A FORWARD -i eth1 -j DROP # only if default is accept

Note that the above allows three ports: domain (both TCP and UDP, for DNS), http (TCP), and https (TCP).

edit: In response to your clarification:

It sounds like no NAT is currently taking place on this box. Also, there is no WAN interface; traffic goes out over the LAN. Not the best setup, but doable.

I'll use lan-ip to mean the IP address of the Debian box on your LAN (eth0). I'll use guest-ip to mean the IP address of the same box on your guest network (eth1).

I'm getting confused by your interface naming while writing this, so I'm going to assume you take my advice and rename the interfaces to lan (eth0) and guest (eth1). If not, you can do a find & replace.

It doesn't sound like you currently have routing or firewalling set up on this box, so I'll give full rules, not just the ones to add. You may need to add some more, of course.

You'll need to turn on IP forwarding (edit /etc/sysctl.conf to do so). And turn on reverse path filtering in the same file. You will need to configure DHCP to offer service on your eth1 network. Please note the default gateway it serves out (for the eth1 guest network only) will need to be guest-ip, not 192.168.7.1.

Your NAT rule will look a little different. It'd be preferable not to have this, and instead perform this on 192.168.7.1, but I'm going to guess that's not possible. If it is possible, skip this NAT rule, add it on 7.1 instead, and add a route to 192.168.1.0/24 via lan-ip.

iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o lan -j SNAT --to-source lan-ip

Now, since you don't have a firewall set up currently, default things to deny. This is the most secure way to do things, generally.

iptables -P INPUT DROP # default for traffic to firewall (this box)
iptables -P FORWARD DROP # default for forwarded traffic
iptables -F # clear rules
iptables -X # delete custom chains
iptables -t nat -F # same, but for nat table
iptables -t nat -X
iptables -A INPUT -i lo -j ACCEPT # let the box talk to itself. Important.

At this point, your box will be completely inaccessible. Not what you want. The next few rules fix that. The first two set up connection tracking, allowing packets that are part of an existing connection (or very closely related to it).

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

Then we'll assume for now that you trust the machines on the office LAN, and allow all traffic from them. You could change this to more restricted rules if you'd like. Note the FORWARD rule will allow you to access machines on the guest network from the office LAN. If that isn't desired, omit it.

iptables -A INPUT -i lan -j ACCEPT
iptables -A FORWARD -i lan -o guest -j ACCEPT

Now, to allow some traffic from the guest network. First, you'll need to allow DHCP.

iptables -A INPUT -i guest -p udp --dport bootps -j ACCEPT # dhcp

Next, I assume you don't want to allow the guest access to any of your private networks. So we'll just drop all guest traffic to RFC1918 (private) space.

iptables -A FORWARD -i guest -d 10.0.0.0/8 -j DROP
iptables -A FORWARD -i guest -d 172.16.0.0/12 -j DROP
iptables -A FORWARD -i guest -d 192.168.0.0/16 -j DROP

Since we've dropped all private address space, the rest is public. So allow it. This line is somewhat scary: if one of the previous lines were to go missing, it'd be trouble.

iptables -A FORWARD -i guest -o lan -j ACCEPT

You could of course limit that to specific protocols and ports (as in the web-browsing-only example). You can also add rules for logging dropped packets, etc.
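For convenience, the minimal rules from the first part of the answer can be collected into one script. This is just a restatement of those rules for the simple eth0/eth1 layout, assuming MASQUERADE for a dynamic WAN address and default ACCEPT policies; it needs root, and 'wan-iface' is still a placeholder for your real WAN interface:

```shell
#!/bin/sh
# NAT guest traffic out; MASQUERADE also covers dynamic WAN addresses.
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o wan-iface -j MASQUERADE

# Let the guest LAN reach the WAN and nothing else.
iptables -A FORWARD -i eth1 -o wan-iface -j ACCEPT
iptables -A FORWARD -i eth1 ! -o wan-iface -j DROP

# Guests may talk to this box only for DHCP.
iptables -A INPUT -i eth1 -p udp --dport bootps -j ACCEPT
iptables -A INPUT -i eth1 -j DROP
```

Run it once at boot (e.g. from an init.d script, or feed the equivalent saved rules to iptables-persistent as mentioned above).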
_unix.106858
What is the canonical way for accessing the local documentation on any available Shell Options builtin with shopt?I'm using Ubuntu 12.04 and can run help shopt to get a description of what shopt does:shopt: shopt [-pqsu] [-o] [optname ...] Set and unset shell options. ...I can list the various Shell Options and their values (shopt or shopt -p). But how do I find out what each one actually does without leaving the comfort of my Linux box? I'm not looking for the descriptions online. Is there a man page or something I'm missing?
How do you get descriptions of the available `shopt` options?
bash;shopt
See the shell builtin commands section of man bash; it has an entry for shopt that describes all of the available shell options. Here is an excerpt:

shopt [-pqsu] [-o] [optname ...]
[...]
autocd
        If set, a command name that is the name of a directory is executed as if it were the argument to the cd command. This option is only used by interactive shells.
cdable_vars
        If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.
cdspell
        If set, minor errors in the spelling of a directory component in a cd command will be corrected. The errors checked for are transposed characters, a missing character, and one character too many. If a correction is found, the corrected file name is printed, and the command proceeds. This option is only used by interactive shells.
[...]
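To complement the man-page descriptions, the current state of any option can be checked from a script via shopt itself. A small sketch (the option names are just examples; bash is invoked explicitly because shopt is a bash builtin):

```shell
# Show the on/off state of some options in re-usable `shopt` form.
bash -c 'shopt -p autocd cdspell'
# prints:
# shopt -u autocd
# shopt -u cdspell

# The exit status of `shopt -q` says whether an option is set,
# which is handy in scripts:
if bash -c 'shopt -q autocd'; then
    echo "autocd is enabled"
else
    echo "autocd is disabled"
fi
# prints: autocd is disabled
```

Both autocd and cdspell default to off in a non-interactive shell, which is what the output above reflects.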
_unix.359880
I am attempting to set things up so I can mail from Raspbian.

When I attempt to send, I get an error: ssmtp: Cannot open smtp.gmail.com:587 (I also tried port 465).

I have set "Access for less secure apps" on Google and can send/receive from the account in Thunderbird.

I have installed ssmtp and configured /etc/ssmtp/ssmtp.conf to contain:

# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable
[email protected]
# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
mailhub=smtp.gmail.com:587
[email protected]
AuthPass=xxxxxxxxxxxxxx
UseTLS=YES
UseSTARTTLS=YES
# Where will the mail seem to come from?
rewriteDomain=gmail.com
# The full hostname
[email protected]
# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES

I have also configured /etc/ssmtp/revaliases to contain:

# sSMTP aliases
#
# Format: local_account:outgoing_address:mailhub
#
# Example: root:[email protected]:mailhub.your.domain[:port]
# where [:port] is an optional port number that defaults to 25.
root:[email protected]:smtp.gmail.com:587

Any suggestions?

Edit

The settings above are based on https://wiki.archlinux.org/index.php/SSMTP

I have done some further testing.

I selected a different SMTP server, which worked. (I do not want to use this, as it is only available when directly connected to my ISP.)

I tried setting an "Application Specific" password, and got the response "The setting you are looking for is not available for your account." (Presumably because this account does not have 2-factor authentication.)

The gmail account I am trying to use was specifically created to send messages from the Raspberry Pi.
Problem sending to gmail using ssmtp
ssmtp
The hostname assignment looks wrong. You probably want hostname=raspberry.pi or something like that. (Ideally, your host has a public DNS name, and you should use that.) It should not be an email address.
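Applying that fix, the relevant lines of /etc/ssmtp/ssmtp.conf would look roughly like this (a sketch; "raspberrypi" is a placeholder for the machine's real name, and the Auth lines keep whatever credentials you already have):

```ini
mailhub=smtp.gmail.com:587
UseTLS=YES
UseSTARTTLS=YES
rewriteDomain=gmail.com
# A machine name, not an email address:
hostname=raspberrypi
FromLineOverride=YES
```

If the machine has a public DNS name, use that as the hostname value instead.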
_unix.96365
I need to set up a shell script to block DDoS attacks, but that script reads some blocked IPs from /etc/sysconfig/iptables. The problem is that I don't have this directory, so I'm wondering where I should read the blocked IPs from. I followed this script by Mr Takefuji. I'm using Debian 7 and have never set up iptables until now.
Where does Iptables store blocked IPs
iptables
That directory is a Red Hat/Fedora/RHEL directory so I wouldn't expect it to be on any other distributions of Linux. Here's what's in that file on my Fedora 14 system:

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 631 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 631 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 631 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Most distros keep this information in different locations; I would determine where Debian keeps it and put this info there. The upside to using a file like this is that it's configured to be used when the iptables service is stopped & started. Again, on Fedora the service is this:

$ service iptables start

When that is invoked, the file I referenced above is used to establish firewall rules as described in that file.

Going it alone

If you just want to set up your own rules you don't have to use that service; it's a convenience at the end of the day. You can run iptables rules directly. In fact, if you look at the format of the file referenced above, you'll notice that they are just the contents of a call to iptables:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT

If you're new to iptables you might find a tool such as firestarter easier to get going with for its setup.
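On Debian specifically, the iptables-persistent package gives you roughly the same stop/start convenience as Red Hat's service file. A sketch (run as root; /etc/iptables/rules.v4 is that package's default location):

```shell
# Install the persistence helper:
apt-get install iptables-persistent

# Save the currently loaded rules so they survive a reboot:
iptables-save > /etc/iptables/rules.v4
ip6tables-save > /etc/iptables/rules.v6

# Reload them by hand if needed:
iptables-restore < /etc/iptables/rules.v4
```

The saved file uses the same format as the Fedora file shown above, so a script that parses blocked IPs out of /etc/sysconfig/iptables can read /etc/iptables/rules.v4 instead.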
_cstheory.31627
So the halting problem basically states that there cannot exist any finite-length algorithm for automatically verifying whether other finite-length algorithms terminate. But suppose I start listing out all the possible programs. I am allowed to list them as they form a countable set, involving finite-length permutations of a countable number of symbols. We can label each program $X_0$, $X_1$, ... where $X_n$ denotes the $n$-th program listed by some scheme of selecting and permuting symbols. Obviously an algorithm cannot exist that is successfully able to verify whether ALL of the programs halt or not. But does there exist any program $X_i$ for which it is simply not possible to verify if it will halt or not?

My analysis of it so far:

Assume we discover a program X that isn't verifiable. In other words, there is a proof that there does not exist any algorithm A to determine if X terminates or does not. Then simply running X cannot result in us finding out if it terminates. Thus X does not terminate in any finite amount of time. Thus X doesn't halt. Therefore we have concluded that X is verifiable. Meaning there doesn't exist an undecidable program X for which there exists a proof that X cannot be verified by an algorithm A in a finite amount of time.

This however DOES NOT mean every program X either can be proven to halt or not. Rather, it states that the programs X that are undecidable themselves do not have a proof of this fact. Here is the catch: if there is a proof that a program X cannot be proven to be undecidable, then it must be the case that X is decidable, so if X is undecidable then even this type of a proof cannot exist. We conclude that if X is undecidable, then there doesn't exist a proof that X is undecidable, nor a proof that there doesn't exist a proof of X being undecidable.

Here is another form of the question: if I arbitrarily continue this chain of analysis, I believe I will always conclude that X doesn't exist.
That is, if $$\exists\,\text{Proof}\left(\nexists\,\text{Proof of}\left(\nexists\,\text{Proof of}\left(\nexists\,\text{Proof of}\left(\cdots\left(\text{Program } X \text{ is undecidable}\right)\cdots\right)\right)\right)\right)$$ then X is decidable. What's a hint for beginning to prove this result? And if such a proof exists, what philosophical implications does that have?
Undecidable Single Programs
computability;proofs;proof theory;halting problem;undecidability
One way to look at your question is the Busy Beaver Numbers.What we will do is restrict a Turing Machine so that:The blank symbol is a $0$The tape alphabet is $\{0, 1\}$The input to our turing machine is always nothing (the tape is always initialized to only containing $0$'s)There are only $n$ internal states, for some $n \in \mathbb{N}$.From here on out whenever I refer to Turing Machines I will be referring to Turing Machines restricted in this way, because this class of Turing Machines is just as powerful as the class of all Turing Machines, and is easier in this context to think about.Note that given any $n=k$, the number of possible transition functions are finite. Thus if we really wanted to, we could manually iterate through every $k$-state turing machine, and figure out whether or not that Turing Machine halted. We could then run all of those halting ones and see which one runs the longest, and call this number $c_k$. Then we can solve the halting problem for any $k$-state Turing machine by simply running it and seeing if it runs for more than $c_k$ steps. If it does, then we know it doesn't halt.We will call $\text{Busy Beaver} (n) = BB(n) = c_n$, where $c_n$ is a constant like ours computed above.Note that if we could compute $BB(n)$ (or any function that grew faster than $BB(n)$), we could solve the halting problem. Thus $BB(n)$ is incomputable, in other words, it grows faster than any computable function.However, for a fixed $k$, $BB(k)$ is computable, assuming you have a Turing Machine with much more than $k$ states, and a very long time to compute this.So then if someone told you that they had a turing machine with $k$ states that you couldn't prove halted or not, you could simply compute $BB(k)$, then run their turing machine for $BB(k)+1$ steps, and then if it was still going you would know that it doesn't halt, otherwise you would know that it does halt. So then they are wrong and you can prove such a thing. 
This means that the Undecidable Single Program you are searching for doesn't exist, sorry.
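The "run it for $BB(k)+1$ steps" procedure in the last paragraphs can be sketched in a few lines. This is a toy illustration, not part of the original answer: the S table holds the known maximal step counts for halting 2-symbol machines with 1-4 states, and the dictionary encoding of the transition function is my own convention.

```python
# Known Busy Beaver step counts S(n) for two-symbol machines.
# For n >= 5 no such table exists -- exactly where incomputability bites.
S = {1: 1, 2: 6, 3: 21, 4: 107}

def halts(delta, n_states):
    """Decide halting for an n_states-state, 2-symbol machine on a blank tape.

    delta maps (state, symbol) -> (write, move, next_state);
    a missing entry means the machine halts in that configuration.
    """
    tape, pos, state = {}, 0, 0
    for _ in range(S[n_states] + 1):
        action = delta.get((state, tape.get(pos, 0)))
        if action is None:        # no rule applies: the machine halts
            return True
        write, move, state = action
        tape[pos] = write
        pos += move
    return False                  # outran S(n): provably never halts

# A 2-state machine that just bounces between its states forever:
spinner = {(0, 0): (0, 1, 1), (0, 1): (0, 1, 1),
           (1, 0): (0, 1, 0), (1, 1): (0, 1, 0)}
# A 1-state machine with no rule for (0, 0): halts immediately.
stopper = {}

print(halts(spinner, 2))  # False
print(halts(stopper, 1))  # True
```

The catch, as the answer explains, is that extending the S table to larger n requires solving halting for every machine of that size by hand, so this only "solves" the halting problem one fixed k at a time.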
_codereview.6230
I came across this code in our project today. Where possible I'm trying to leave the code base in a better shape than I found it as I go along, and this method jumped out at me for a number of reasons, mainly the SQL string and the try/catch block. I feel there's a less expensive way to do it.

Original code:

public bool CheckSomething(string paramA, int paramB)
{
    using (var conn = new SqlConnection(Connection))
    {
        conn.Open();
        string sqlCommand = "SELECT ColumnA FROM OurTable WHERE ColumnB = '" + paramA + "' AND ColumnC = " + paramB;
        using (var dbCommand = new SqlCommand(sqlCommand, conn))
        {
            int noOfRecords = -1;
            try
            {
                noOfRecords = (int)dbCommand.ExecuteScalar();
            }
            catch (Exception ex)
            {
            }
            finally
            {
                dbCommand.Dispose();
                if (conn.State == ConnectionState.Open)
                {
                    conn.Close();
                }
                return noOfRecords > 0;
            }
        }
    }
}

I was thinking of re-writing it as below, but I still think it could be improved further; one option would be to create a stored procedure for the SQL, but that's unlikely. I was aiming to improve it purely from the code point of view. I'd appreciate thoughts.

Rewritten version:

public bool CheckSomething(string paramA, int paramB)
{
    using (var conn = new SqlConnection(Connection))
    {
        conn.Open();
        string sqlCommand = string.Format("SELECT ColumnA FROM OurTable WHERE ColumnB = '{0}' and ColumnB = {1}", paramA, paramB);
        using (var dbCommand = new SqlCommand(sqlCommand, conn))
        {
            object noOfRecords = dbCommand.ExecuteScalar();
            dbCommand.Dispose();
            if (conn.State == ConnectionState.Open)
            {
                conn.Close();
            }
            return noOfRecords != null;
        }
    }
}
Could this ExecuteScalar call be written better?
c#;exception handling
public bool CheckSomething(string paramA, int paramB)
{
    using (var conn = new SqlConnection(..))
    using (var comm = new SqlCommand(.., conn))
    {
        conn.Open();
        object noOfRecords = comm.ExecuteScalar();
        return noOfRecords != null;
    }
}

There is no need to close or dispose; the using handles that part. This removes the need for a manual try/catch or closing logic, leaving a much compressed chunk of code that is functionally equivalent and just as safe.

As for the select statement itself, either use parameterized SQL or a stored procedure as opposed to string concatenation. Parameterized SQL:

string sql = "SELECT ColumnA FROM OurTable WHERE ColumnB = @param1 AND ColumnC = @param2";
using (var comm = new SqlCommand(sql, conn))
{
    comm.Parameters.AddWithValue("@param1", param1);
    comm.Parameters.AddWithValue("@param2", param2);
    conn.Open();
    // etc...
}
_scicomp.11151
I am measuring the electric field in an enclosure box with an aperture, using a probe to do so. But I have a problem: when I use the probe for measurement, the results are not accurate in the transient solver. So I changed the settings in Transient Solver > Specials and run the program with the "high number of pulses" option (for example, 100000 and more pulses). But the simulation time is really bad: it takes nearly four days to complete on my desktop PC (I know that my PC is not fast enough, but this is my situation). How can I reduce the simulation time when measuring the E-field with a probe?

Thank you.
How can simulation be fast when Electric Field is being measured with probe in CST Microwave Studio?
electromagnetism;electromagnetics
null
_cs.76375
I'm trying to build a complexity analysis tool and I need an algorithm for constructing the control flow graph (to get cyclomatic and eventually essential complexity). I couldn't find any pseudocode online so I just got down to writing it. Using recursion seems to have been a bad way to go; my code handles most simple cases but with deeper nesting of control structures errors arise. Is there somewhere I can find pseudocode for constructing the CFG? (For the curious I'm doing this for VHDL.)
Pseudocode for constructing control flow graph
programming languages;software testing;logic programming;static analysis
null
_unix.369717
Ciao! How are you?

Problem: I need 2 interfaces, because the fast ISP blocks port 25; on the slower one it is open. I can telnet with the required interface:

telnet -b 192.168.81.20 alt2.gmail-smtp-in.l.google.com 25
Trying 74.125.68.27...
Connected to alt2.gmail-smtp-in.l.google.com.
Escape character is '^]'.
220 mx.google.com ESMTP q14si1562820plk.485 - gsmtp

The wrong one is not working:

telnet -b 192.168.78.20 alt2.gmail-smtp-in.l.google.com 25
Trying 74.125.68.27...
telnet: Unable to connect to remote host: Connection refused
root@server:/etc/postfix#

I got the right settings (smtp_bind_address, the one I need and that works with telnet):

smtp inet n - y - - smtpd
  -o content_filter=spamassassin
  -o smtp_bind_address=192.168.81.1
submission inet n - y - - smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o smtp_bind_address=192.168.81.1
smtps inet n - y - - smtpd
  -o syslog_name=postfix/smtps
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o smtp_bind_address=192.168.81.1

Still, I get this error:

Jun 07 13:19:04 server postfix/smtp[10823]: connect to alt1.gmail-smtp-in.l.google.com[108.177.14.27]:25: Connection refused
Jun 07 13:19:04 server postfix/smtp[10823]: connect to alt1.gmail-smtp-in.l.google.com[2a00:1450:4010:c0f::1b]:25: Network is unreachable
Jun 07 13:19:05 server postfix/smtp[10823]: connect to alt2.gmail-smtp-in.l.google.com[74.125.68.27]:25: Connection refused
Jun 07 13:19:05 server postfix/smtp[10823]: C625334017A: to=<[email protected]>, relay=none, delay=12983, delays=12976/0.01/7.4/0, dsn=4.4.1, status=deferred (connect to alt2.gmail-smtp-in.l.google.com[74.125.68.27]:25: Connection refused)

The settings are correct, so why do I get "connection refused"? If you know, thanks so much! Ciao!

More:

Routing:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.78.1    0.0.0.0         UG    10     0        0 enp2s0
0.0.0.0         192.168.81.1    0.0.0.0         UG    30     0        0 enp1s0
192.168.78.0    0.0.0.0         255.255.255.0   U     0      0        0 enp2s0
192.168.81.0    0.0.0.0         255.255.255.0   U     0      0        0 enp1s0

Ifconfig:

enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.81.20 netmask 255.255.255.0 broadcast 192.168.81.255
        inet6 fe80::9ade:d0ff:fe04:23c3 prefixlen 64 scopeid 0x20<link>
        ether 98:de:d0:04:23:c3 txqueuelen 1000 (Ethernet)
        RX packets 265 bytes 49826 (48.6 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 104 bytes 25251 (24.6 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.78.20 netmask 255.255.255.0 broadcast 192.168.78.255
        inet6 fe80::eeaa:a0ff:fe1b:4d84 prefixlen 64 scopeid 0x20<link>
        ether ec:aa:a0:1b:4d:84 txqueuelen 1000 (Ethernet)
        RX packets 4733 bytes 850839 (830.8 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 4911 bytes 934827 (912.9 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 1 (Local Loopback)
        RX packets 8049 bytes 3219724 (3.0 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 8049 bytes 3219724 (3.0 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

main.cf:

# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = ESMTP mail.patrikx3.tk
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on
# fresh installs.
compatibility_level = 2

# TLS parameters
#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
#smtpd_use_tls=yes
#smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
#smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_tls_cert_file= /etc/ssl/acme/patrikx3.tk/fullchain.cer
smtpd_tls_key_file=/etc/ssl/acme/patrikx3.tk/patrikx3.tk.key
smtpd_use_tls=yes
smtpd_tls_auth_only = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, check_policy_service unix:private/policyd-spf

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.
#smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = mail.patrikx3.tk
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = localhost
relayhost =
mynetworks = 127.0.0.0/8
# 5 gigabyte
mailbox_size_limit = 5368709120
# 50 megabyte
message_size_limit = 52428800
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all
virtual_transport = lmtp:unix:private/dovecot-lmtp
virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf

# spf
policyd-spf_time_limit = 3600

# opendkim
smtpd_milters = inet:127.0.0.1:8891
non_smtpd_milters = $smtpd_milters
milter_default_action = accept
milter_protocol = 2

The required interface has a route as well:

iface enp1s0 inet dhcp
    metric 30
    post-up ip route add 192.168.81.0/24 dev enp1s0 src 192.168.81.20 table rt2
    post-up ip route add default via 192.168.81.1 dev enp1s0 table rt2
    post-up ip rule add from 192.168.81.20/32 table rt2
    post-up ip rule add to 192.168.81.20/32 table rt2
postfix dual interfaces, isp1 port 25 blocked, fast, isp2 port open, but slow
linux;postfix;smtp
null
_webmaster.86890
Many people say that setting up your own backlinks is not good. Is this true?

Nobody can expect backlinks to be built by other sites naturally unless they are social networking sites or cover some popular topic. Even the top news sources won't get backlinks naturally.

Google doesn't know this, so can we build backlinks ourselves?
Is backlinking to yourself in violation of Google's guidelines?
google;backlinks
null
_unix.299628
My setup:

Ubuntu Server 16.04.1
MaaS 2.0 Beta 3
VMware vCenter 6
pyvmomi-5.5.0.2014.1.1, which I installed manually for Python 3 because I read it would fix my problem.

This is the error I'm getting:

Aug 1 13:21:26 maas sh[5319]: 2016-08-01 13:21:25 [-] /usr/lib/python3/dist-packages/urllib3/connectionpool.py:794: requests.packages.urllib3.exceptions.InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
Aug 1 13:21:26 maas maas.rpc.cluster: [ERROR] Failed to probe and enlist VMware nodes: (vim.fault.HostConnectFault) {#012 dynamicType = ,#012 dynamicProperty = (vmodl.DynamicProperty) [],#012 msg = '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)',#012 faultCause = ,#012 faultMessage = (vmodl.LocalizableMessage) []#012}

I read several old posts from last year on how to fix that, but none of them work or even point me to the files I have to edit. Somebody please point me in the right direction so that I can deploy my VMware nodes.

I tried MaaS 1.8/1.9 and now 2.0, and each version has problems that cannot be solved, or are solved in a future version that brings new bugs with it.
Ubuntu MaaS ssl error vmware vcenter 6 connection
ubuntu;python;vmware
null
_vi.2776
One of the open questions I have about Vim is if there is a way to perform a search/replace in the current project (bear with me if I use this notion inherited from other editors).For instance, let's assume I want to search my project files for a method name, and rename all the instances of that method call.This is how I can proceed with Sublime.How would you do this in Vim (possibly MacVim) without relying on other programs such as ack or sed?
Vim search replace all files in current (project) folder
search;macvim
There are several ways to do this.

Inside Current Directory

If you want to perform the search/replace in a project tree, you can use Vim's argument list. Simply open Vim and then use the :args command to populate the argument list. You can pass in multiple filenames or even globs.

For example, :args **/*.rb will recursively search the current directory for ruby files. Notice that this is also like opening Vim with vim **/*.rb. You can even use the shell's find command to get a list of all files in the current directory by running:

:args `find . -type f`

You can view the current args list by running :args by itself. If you want to add or delete files from the list, you can use the :argadd or the :argdelete commands respectively.

Once you're happy with the list, you can use Vim's powerful :argdo command, which runs a command for every file in the argument list:

:argdo %s/search/replace/g

Here are some tips for searching (based on some of the comments):

- Use a word boundary if you wanted to search for foo but not foo_bar. Use the \< and \> constructs around the search pattern like so: :argdo %s/\<search\>/foobar/g
- Use a /c search flag if you want Vim to ask for confirmation before replacing a search term.
- Use an /e search flag if you want to skip the "pattern not found" errors.
- You can also choose to save the file after performing the search: :argdo %s/search/replace/g | update. Here, :update is used because it will only save the file if it has changed.

Open buffers

If you already have buffers open you want to do the search/replace on, you can use :bufdo, which runs a command for every file in your buffer list (:ls). The command is very similar to :argdo:

:bufdo %s/search/replace/g

Similar to :argdo and :bufdo, there are :windo and :tabdo that act on windows and tabs respectively. They are less often used but still useful to know.
_softwareengineering.278710
My manager keeps complaining that the estimates we come up with are too much for the customer. Every time, he asks us to think like a customer and see whether the estimate is valid. My point is that even if we somehow finish the project on time, lots of issues might creep up in the project. Apart from this, I am the only developer on this project, and he puts most of the blame on me, be it design, bugs or any other discrepancy in the project. I am not sure if I am doing anything wrong. Is software engineering a field with no bugs, where everything should be perfect? Is my manager wrong? Can anyone explain?
What to tell a manager who tells your estimates are too much for a genuine task
management;estimation
Your manager is probably under pressure to justify your time estimates, thus the friction. This kind of friction is part of the work environment; there are lots of books, classes and coaching sessions available that will teach you how to interact with different people and personality types so you can do your job.

More importantly for the customer (and for your own peace of mind), have you established a track record for the effort it takes to deliver a project, or talked with others in the industry about your time estimates? A good way to approach this kind of problem is to find a senior programmer who can give you some guidance on how long things should take, so you get a sense of how you are doing relative to industry expectations. Once you have a sense for how long things 'should' take, you can apply your soft skills to the task of negotiating with your manager for more time as appropriate.

The customer may expect things better, faster, and cheaper, but in general they only get two of those three things. I've had potential customers say things like, "We have X budget and we want to become an online leader in our space." Just because they want it does not mean it will happen. Things take as long as they take.

Of course, if you really want to excel, find a way to do the same work better, faster, and cheaper. Like I said, mostly that is not what the customer gets, but if you can do it, you will get props from your manager and your customers. Automation, improved tools, better processes - these are your friends.
_webmaster.10858
Does anyone have any experience using these? What are the pros and cons? Can you expect a high level of false positives? Are they 100% effective?
DDoS Mitigation Services
botattack
Disclosure: A family member works for Adversor.

There is no such thing as 100% DDoS protection at the minute (unless someone has more computing power than the rest of the world combined).

Some offer a range of reverse proxies, some use adaptive black/white listing (sometimes teaming up with other services for more accurate lists - distributed DDoS protection, if you like), and finally some use application firewalls.

Generally, as long as you have protected against Slowloris, your next port of call would be to look at an application firewall and bot-proof forms (but this requires a development style that works in tandem, otherwise you'll create customer-facing DoS situations, not solve them).

Reverse proxies will help in the case of a DDoS (especially if they are geo-targeted, so only one region would be affected). Be warned that you'll have to invest some time checking that the HTTP requests/responses work properly. A lot of DDoS protection services will only point FastFlux DNS records towards their servers after a DDoS attack has been detected (to keep running costs down), so expect a bit of downtime during the switchover.

Many webmasters with high(ish) profile sites and a low budget (generally on a single server) will just protect against Slowloris and then use code-level protection instead of application firewalls. If they need to scale, reverse proxies from a non-DDoS-protection provider (i.e. a CDN) can be more cost-effective, flexible and able to withstand more traffic.

However, if you have the budget/server-sprawl/awkward-development and want more peace of mind, you can outsource it to a specialist DDoS protection service.
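To make the Slowloris hardening mentioned above concrete: at the web server it usually comes down to aggressive timeouts plus per-client connection caps. The following is only an illustrative sketch for nginx; the directive values are assumptions, not recommendations from this answer:

```nginx
# Drop clients that trickle headers or bodies in slowly (Slowloris-style)
client_header_timeout 10s;
client_body_timeout   10s;
send_timeout          10s;
keepalive_timeout     15s;

# Cap concurrent connections per client address (zone name is arbitrary)
limit_conn_zone $binary_remote_addr zone=peraddr:10m;
limit_conn peraddr 20;
```

Note that limit_conn_zone must live in the http block, while limit_conn can go in a server block; the right limits depend entirely on your legitimate traffic profile.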
_codereview.90391
The next iteration is here.

I have this small method for printing long integers neatly. For example:

neatify(123L) = 123
neatify(1234L) = 1 234
neatify(12345L) = 12 345
...

The code:

import java.util.Scanner;

public class Main {

    public static String neatify(final long number, final int groupLength) {
        final String str = Long.toString(number);
        if (groupLength < 1) {
            return str;
        }
        final char[] charArray = str.toCharArray();
        final StringBuilder sb = new StringBuilder();
        for (int i = 0; i < charArray.length; ++i) {
            if (i != 0 && (charArray.length - i) % groupLength == 0) {
                sb.append(' ');
            }
            sb.append(charArray[i]);
        }
        return sb.toString();
    }

    public static String neatify(final long number) {
        return neatify(number, 3);
    }

    public static void main(final String... args) {
        final Scanner scanner = new Scanner(System.in);
        while (scanner.hasNextLong()) {
            System.out.println(neatify(scanner.nextLong()));
        }
    }
}
Neatly printing integers interspersed with spaces
java;strings;reinventing the wheel
Usability

- do you really need the groupLength? I'm not aware of any number writing scheme where it is fixed, but not 3[*].
- on the other hand, the separator is different in a lot of countries. You can see an example for thousands separators here (or on wikipedia). For example, Canada uses a space, Italy uses ., the US uses ,, and Switzerland uses '. It might be nice to add that as a parameter instead of hardcoding ' '. You can still use ' ' as a default.
- your code doesn't deal that well with negative numbers. -123 becomes - 123, and -123456 becomes - 123 456. I would not expect a space after the -.

[*] In India it seems to be written e.g. as 15,00,000, which doesn't use 3, but which also isn't a fixed length.

Misc

- if you start your loop at 1 and prepend sb.append(charArray[0]);, you can save the i != 0 check you perform each time.
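To address the separator and negative-number points together, one option is to lean on java.text.DecimalFormat instead of walking the char array by hand. This is only a sketch of an alternative, not the reviewed code; the neatify name and the space separator are carried over from the question for illustration:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class Neatify {

    // Group digits in threes with a caller-chosen separator.
    // DecimalFormat handles the minus sign for us, so negative numbers
    // come out as -123 456 rather than - 123 456.
    public static String neatify(long number, char separator) {
        DecimalFormatSymbols symbols = DecimalFormatSymbols.getInstance(Locale.ROOT);
        symbols.setGroupingSeparator(separator);
        // "#,###" means: group every 3 digits using the symbols' separator.
        DecimalFormat format = new DecimalFormat("#,###", symbols);
        return format.format(number);
    }

    public static void main(String[] args) {
        System.out.println(neatify(12345L, ' '));   // 12 345
        System.out.println(neatify(-123456L, ' ')); // -123 456
    }
}
```

This also makes a locale-aware variant trivial: pass the separator from DecimalFormatSymbols.getInstance(userLocale) instead of a hardcoded space.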
_softwareengineering.303119
I have heard that when testing stories developed in the current sprint, no issues should be raised because the stories are not completed yet (i.e. the definition of done has not been fulfilled), and that you should only raise issues when you find another, unrelated issue, etc. While it does make sense not to raise issues on something that developers can still work on till the end of the sprint, I fail to find any best practices on this. Is that a correct approach?
Should defects be raised during sprints or just noted in the stories and assigned to developers?
testing;agile;scrum
It depends on your process. In my world, a defect is something that escapes your quality practices. But it's up to you to define exactly what those are.

Some processes have a clear separation between the development team and the test/quality team. This is the world that I live in now. The development team is responsible for unit and integration tests, and the quality team is responsible for deliverable-level (application and system level) regression and acceptance tests. In this environment, any issue that was not found by the development team in their testing would be recorded as a defect, as it escaped the development team's quality activities - code reviews and tests.

The agile methods promote a highly integrated cross-functional team. In this environment, the quality activities are also integrated. Different people may be leading different activities, but everyone's involved every step of the way. Because of that, there's not always a clear hand-off from a development team to a quality team. As such, I would consider a defect to be something that makes it through to the end of the iteration and into the release.

However, something to consider is communication. In my first example, I may note an issue that I find in testing that is minor, but not have the necessary path to fix it in the development cycle. I may log a defect to communicate that I found an issue and allow it to be dispositioned. The project lead or quality team may say that it's actually a big deal (to the customer or from a product quality perspective) and demand it get fixed in the development cycle, or it may be planned for a later release.

In the end, you do need to do the right thing to enable you to understand your product quality and communicate the current state of the development effort to the appropriate stakeholders.
_codereview.62649
I am developing a mysqli database wrapper. I've made an effort to make this as fast as possible and easy to use. However, now I want to start adding callbacks (for fetch), and make this mysqli-only dependent (still using get_result).

trait dbExtras // connection-independent tools
{
    public function echoQuery()
    {
        $args = func_get_args();
        if (count($args) == 1) {
            echo (string) $args[0];
        } else {
            $query = $args[0];
            for ($i = 1; $i < count($args); $i++) {
                $query = preg_replace('/[?]/', "'" . $args[$i] . "'", $query, 1);
            }
            echo $query;
        }
    }

    public function toJSON($stmt)
    {
        $json = array();
        if ($stmt->num_rows > 0) {
            while ($row = $stmt->fetch_array(MYSQL_ASSOC)) {
                $json[] = $row;
            }
        }
        return json_encode($json);
    }

    // fetch an array from executed SQL query. mode can also be numeric (MYSQLI_NUM) or both (MYSQLI_BOTH)
    public function fetch($stmt, $mode = MYSQLI_ASSOC)
    {
        return $stmt->fetch_array($mode);
    }

    public function fetchAll($stmt, $mode = MYSQLI_ASSOC)
    {
        return $stmt->fetch_all($mode);
    }
}

class database
{
    // database vars
    public $connection = null;

    use dbExtras; // include methods

    // Methods Declaration
    public function __construct($host, $database, $username, $password)
    {
        $db = mysqli_connect($host, $username, $password, $database);
        // Check connection
        if ($db->connect_errno > 0) {
            return false;
        }
        $this->connection = $db;
        // changed from utf8 to utf8mb4, reason: full int8n support
        $this->connection->set_charset("utf8mb4");
    }

    public function close()
    {
        if ($this->connection != null) {
            return $this->connection->close();
        }
    }

    function __destruct()
    {
        $this->close();
    }

    public function runQuery()
    {
        $this->ping(); // reconnects if connection is closed
        $args = func_get_args();
        $cnt = count($args);
        if ($cnt === 1) {
            return $this->query($args[0]);
        } else {
            $types = '';
            $bind = mysqli_prepare($this->connection, $args[0]);
            for ($i = 1; $i < $cnt; $i++) {
                switch (gettype($args[$i])) {
                    case 'NULL':
                    case 'string':
                        $types .= 's';
                        break;
                    case 'boolean':
                    case 'integer':
                        $types .= 'i';
                        break;
                    case 'blob':
                        $types .= 'b';
                        break;
                    case 'double':
                        $types .= 'd';
                        break;
                    default:
                        $types .= 's';
                        break;
                }
                $values[] = &$args[$i];
            }
            $params = array_merge(array($types), $values);
            unset($types);
            unset($values);
            // new method, saves 30 - 40 % of exec time. old was call_user_func_array(array($bind, "bind_param"), $params);
            $ref = new ReflectionClass('mysqli_stmt');
            $method = $ref->getMethod("bind_param");
            $method->invokeArgs($bind, $params);
            unset($params);
            if ($bind->execute()) {
                return $bind->get_result();
            } else {
                return false;
            }
        }
    }

    public function getLastInserted()
    {
        return $this->connection->insert_id;
    }

    public function setAutoCommit($bool = TRUE) // required before complex queries --use with addQuery() & commit()
    {
        $this->connection->autocommit($bool);
    }

    public function query($sql)
    {
        return $this->connection->query($sql);
    }

    public function commit()
    {
        return $this->connection->commit();
    }

    public function escape($string)
    {
        return $this->connection->real_escape_string($string);
    }

    public function getAffectedRows()
    {
        return mysqli_affected_rows($this->connection);
    }

    public function ping()
    {
        return mysqli_ping($this->connection);
    }

    public function getError()
    {
        return mysqli_error($this->connection);
    }
}

I also use a trait for ease of use in other classes. The main function is runQuery(), and it's the one I've spent most time developing.

I want to know how to make it perform better / more secure.
PHP MySQLi database wrapper
php;mysql;mysqli
About code style: all classes/traits/interfaces must be Capitalized.

Some notes:

- What do you mean by writing database? Connection? Query? Don't mix them.
- Models can't contain any methods with side effects (like output), remember it!
- Don't create a superClassThatDoesAllTheWorkForTheApplication; it's awful, and it's an antipattern. Split tasks as shown in the first paragraph.
- Don't use a getError method! Just throw an exception!
- Don't use Reflection if you don't really need it! It is very slow.
- Don't use a for loop where you can use foreach. foreach is faster!
- Never write code like this in a constructor:

  if ($db->connect_errno > 0) { return false; }

  You have exceptions for this!
- Never write mere wrappers around ready-made functionality! If you write a database abstraction, do it in an abstract way and write the interfaces first. Code is always secondary.
_unix.227796
I'm trying to figure out a good way to disable xtrace before leaving a script. These are all being executed by Wercker, a continuous integration and deployment SaaS.

A previous script of mine has run, enabling xtrace:

+ echo 7ad27e6b-75d9-4e72-a9a7-8b0d6796bd75 0
source /pipeline/maven-9ea06b71-4392-4fec-ab5a-db7389b49cf2/run.sh < /dev/null
+ source /pipeline/maven-9ea06b71-4392-4fec-ab5a-db7389b49cf2/run.sh
++ set +o xtrace ## disabling here to keep other areas quiet
...
++ '[' -e settings.xml ']'
++ SETTINGS=--settings=settings.xml
++ mvn --update-snapshots --batch-mode -Dmaven.repo.local=/pipeline/cache --settings=settings.xml deploy
...
+ echo f5b142ac-a369-4166-967e-688d46c642c8 0

Here's my actual code:

if [ -n $WERCKER_MAVEN_DEBUG ]; then
  set -o xtrace
  case $WERCKER_MAVEN_DEBUG in
    [1-2]) env;;
    [1-3]) DEBUG=--debug;;
  esac
fi

if [ -e $WERCKER_MAVEN_SETTINGS ]; then
  SETTINGS=--settings=${WERCKER_MAVEN_SETTINGS}
fi

mvn --update-snapshots \
    --batch-mode \
    -Dmaven.repo.local=${WERCKER_CACHE_DIR} \
    ${SETTINGS} ${DEBUG} \
    ${WERCKER_MAVEN_GOALS}

When I tried to disable xtrace with set +o xtrace at the bottom of the file, that changed the return code to always be 0, so even if Maven was failing, CI wasn't. I then tried to capture Maven's return code and call exit ${STATUS}, but this caused it to fail even when Maven was succeeding with status 0. I think that had something to do with calling exit, not with how I was capturing the code.

How can I disable xtrace after mvn has been executed while preserving Maven's return status for the caller script?
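For reference, the usual trick (an assumption about the setup, not something from Wercker's docs) is to grab $? into a variable immediately after the mvn invocation, disable xtrace, then return or exit with the saved status. Redirecting the stderr of the `set +o xtrace` line also keeps that final trace line quiet:

```shell
#!/bin/bash
# Sketch: preserve the status of a traced command across `set +o xtrace`.
run_build() {
    set -o xtrace
    false                          # stand-in for the mvn invocation
    local status=$?                # capture BEFORE running anything else
    { set +o xtrace; } 2>/dev/null # braces keep this line itself out of the trace
    return "$status"               # caller sees the real exit code (1 here)
}

run_build
echo "build exited with $?"        # prints: build exited with 1
```

In a standalone script you would write `exit "$status"` instead of `return`; the key point is that $? must be saved before any other command runs, because every command (including `set`) overwrites it.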
How can I disable xtrace and preserve my exit code
shell;debugging;return status
null
_unix.227456
For example, I want to execute the following within a shell script:

tar cvpzf /destination/backup.tgz /directory\ one /directory\ two

I wish to assign the list of paths (with whitespace in them) to a variable at the top portion of the script, for easy maintenance.

How would one assign /directory\ one /directory\ two to a variable and then pass it on later, for example, to tar, i.e.:

#!/bin/sh
backup_dirs=?????????

tar cvpzf /destination/backup.tgz $backup_dirs

without causing tar to interpret /directory and one as separate entities, just as it would not when one executes the command directly on the command line?
Pass list of directories (that contain whitespaces) to a command in a script
bash;shell;shell script;quoting;whitespace
Just quote it:

dir1="directory 1"
dir2="directory 2"

tar cvpzf /destination/backup.tgz "$dir1" "$dir2"

Or, if your shell supports it (bash, which you've tagged your question with, does, but sh, which you are using in your script, doesn't), use arrays:

targets=( "directory 1" "directory 2" )

tar cvpzf backup.tgz "${targets[@]}"
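A quick way to see why the quotes matter is to count how many arguments actually reach a command. This sketch uses a throwaway count_args helper (not part of the answer above) that just reports its number of positional parameters:

```shell
#!/bin/bash
# Each positional parameter is one argument exactly as the command received it.
count_args() { echo "$#"; }

dir1="directory 1"
dir2="directory 2"

count_args $dir1 $dir2       # unquoted: word splitting yields 4 arguments
count_args "$dir1" "$dir2"   # quoted: the intended 2 arguments

targets=( "directory 1" "directory 2" )
count_args "${targets[@]}"   # quoted array expansion also yields 2
```

With the unquoted form, tar would see /directory and one as two separate paths, which is exactly the failure mode the question describes.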
_reverseengineering.3145
I'm trying to understand the inner workings of StartServiceCtrlDispatcher. Specifically, I'm trying to figure out how it determines if it's being called from the SCM. So I wrote a simple service in C# and ran it as a console app. I started it from WinDbg and set a breakpoint at ADVAPI32!StartServiceCtrlDispatcherWStub. I then stepped into a few calls until I got the following stack trace:

0:000> kP
Child-SP          RetAddr           Call Site
00000000`0037e578 00007ffa`101a52be RPCRT4!RpcStringBindingComposeW+0xff
00000000`0037e580 00007ffa`101abedf sechost!ScClientBindToServer+0x76
00000000`0037e690 00007ffa`101a8751 sechost!ScOpenServiceChannelHandle+0x1f
00000000`0037e6d0 00007ffa`009796f0 sechost!StartServiceCtrlDispatcherW+0x3c
00000000`0037e710 00007ffa`0097c0af System_ServiceProcess_ni+0x296f0
00000000`0037e7e0 00007ff9`a0730104 System_ServiceProcess_ni+0x2c0af
00000000`0037e880 00007ff9`ffe84113 0x00007ff9`a0730104
00000000`0037e8d0 00007ff9`ffe83fde clr!CallDescrWorkerInternal+0x83
00000000`0037e910 00007ff9`ffe889a3 clr!CallDescrWorkerWithHandler+0x4a
00000000`0037e950 00007ff9`fff591aa clr!MethodDescCallSite::CallTargetWorker+0x251
00000000`0037eb00 00007ff9`fff5999a clr!RunMain+0x1e7
00000000`0037ece0 00007ff9`fff59893 clr!Assembly::ExecuteMainMethod+0xb6
00000000`0037efd0 00007ff9`fff59372 clr!SystemDomain::ExecuteMainMethod+0x506
00000000`0037f5e0 00007ff9`fff592c6 clr!ExecuteEXE+0x3f
00000000`0037f650 00007ff9`fff59d84 clr!_CorExeMainInternal+0xae
00000000`0037f6e0 00007ffa`011e7ced clr!CorExeMain+0x14
00000000`0037f720 00007ffa`0128ea5b mscoreei!CorExeMain+0xe0
00000000`0037f770 00007ffa`107115cd MSCOREE!CorExeMain_Exported+0xcb
00000000`0037f7a0 00007ffa`10a343d1 KERNEL32!BaseThreadInitThunk+0xd
00000000`0037f7d0 00000000`00000000 ntdll!RtlUserThreadStart+0x1d

0:000> !clrstack
OS Thread Id: 0x2ae4 (0)
        Child SP               IP Call Site
000000000037e738 00007ffa100795b3 [InlinedCallFrame: 000000000037e738] System.ServiceProcess.NativeMethods.StartServiceCtrlDispatcher(IntPtr)
000000000037e738 00007ffa009796f0 [InlinedCallFrame: 000000000037e738] System.ServiceProcess.NativeMethods.StartServiceCtrlDispatcher(IntPtr)
000000000037e710 00007ffa009796f0 DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr)
000000000037e7e0 00007ffa0097c0af System.ServiceProcess.ServiceBase.Run(System.ServiceProcess.ServiceBase[])
000000000037e880 00007ff9a0730104 ConsoleAndSCMPatternDemo.Program.Main(System.String[]) [c:\Users\Justin\Documents\Visual Studio 2013\Projects\ConsoleAndSCMPatternDemo\ConsoleAndSCMPatternDemo\Program.cs @ 21]
000000000037ebb0 00007ff9ffe84113 [GCFrame: 000000000037ebb0]

I have two questions:

1. What is this sechost.dll that seems to have AdvApi32 functionality?
2. Where can I find documentation about the following API calls:
   ScClientBindToServer
   ScOpenServiceChannelHandle

Google doesn't give good info on either.
Getting info for undocumented APIs called by StartServiceCtrlDispatcher
windows;windbg;callstack
null
_cogsci.12683
I have speculated for a long time that brain waves could be the locus of human consciousness, or indeed the soul. My argument for this is the following:

- They vary with different states of consciousness, like dreaming.
- They cover the brain as a whole and could provide coherence, such as the perception of time.
- I heard about an experiment with a man with severe seizures who had his brain cut in two, with no remaining neural connections. He didn't get a split personality and could perform functions that involved both halves of the brain. That told me that there couldn't be a completely neural explanation of consciousness. So which other mechanism do we know that could bridge that gap and provide a coherent self-awareness combining the two brain halves: brain waves.
- They may appear simple on the outside of the skull, but on the inside they could display very rich characteristics.
- Neural nets do not have brain waves and don't exhibit any signs of consciousness, even though they have some kind of intelligence.

What kind of experiment could validate this hypothesis? What does research say on this matter? What do you think about this idea in general?
Could brainwaves be the phenomenon that gives rise to consciousness
consciousness;awareness
In a famous experiment Luigi Galvani conducted in 1780, he observed that a dead frog leg twitched when electricity was applied to its nerves. Following your line of argument, electricity would be the metaphysical phenomenon that gives rise to life.

Brainwaves are the signature of neural activity; we capture them through electrical measurements. We couldn't have observed them had it not been for neurons communicating.

The most established and empirically supported theory we have on consciousness demonstrates that consciousness is the result of cortex-wide neural discharges on a highly connected 'master' network that has access to, and can 'extract' control of, all lower-level networks.

If this sounds overly complicated, think of an octopus that sees all the lights and can press all the buttons. If you want to learn (a lot) more on consciousness and brainwaves, I recommend you read Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts by Stanislas Dehaene, which is a highly accessible text.
_unix.256870
It seems that man -K does not search the formatted output, but the markup source. For example

man -K warranties

gives many manpages which do not contain the string warranties, like xcalc(1). Also, searching for strings containing special characters is very difficult:

man -K 7 '\f'

gives a lot of pages not containing \f.

man -K 7 '\\f'

does not seem to display false positives, but it also does not display ascii(7), which contains the string \f.

How do I get around this?
Dealing with manpage search
man
null
_webmaster.35673
We run a high-traffic website at http://www.onedirection.net and we've been using Disqus throughout this year, initially to great effect. We accepted the upgrade to Disqus 2012 back in June, loving the increased user experience and the better community feel - albeit back to an iframe again. However, the fact we were specifically told that the comments are now being indexed by Google was great, and the dynamic nature of the iframe suited our site (all our pages are cached, so by using Disqus the comments are updated straight away).

However, it seems that the Disqus 2012 comments are not being indexed, and we've noticed an obvious fall in traffic over the last few months. Initially we didn't put this down to Disqus and focused on other issues (Google algorithm updates etc.). But we're quickly coming around to the conclusion that our pages now contain less indexable text, and we are getting less traffic because of this.

We've tried emailing Disqus directly, but they're very slow and don't seem keen to help.

Any thoughts on this?
Disqus 2012 comments NOT being indexed by Google
wordpress;seo
null
_unix.304188
I'm trying to boot the MPC8309TWR from an SD card. I found a document, AN3659_bf.pdf (it describes the SD card boot header format for the MPC8536E), which uses a program called boot_format. When formatting the SD card with boot_format, we need to pass the configuration file and the bootloader image as arguments.

Here I have some doubts:

a) How do I build a RAM-based U-Boot image? What modifications need to be done in the source code to build a RAM-based U-Boot?

b) How do I load the QE code? While the bootloader is executing from NOR flash, it loads the QE code from a predefined address in flash, set in U-Boot (e.g. fe7e0000). The QE code is necessary while the bootloader is executing. So how can I load this QE code when booting from the SD card, and where on the SD card do I need to copy this QE code?

with regards,
satish kumar.

original post
How to create boot header for booting powerpc MPC8309twr with SD card
linux;boot loader;sd card;u boot
null
_codereview.1998
Exercise 2.57. Extend the differentiation program to handle sums and products of arbitrary numbers of (two or more) terms. Then the last example above could be expressed as

(deriv '(* x y (+ x 3)) 'x)

Try to do this by changing only the representation for sums and products, without changing the deriv procedure at all. For example, the addend of a sum would be the first term, and the augend would be the sum of the rest of the terms.

I wrote the following solution.

(define (deriv exp var)
  (cond ((number? exp) 0)
        ((variable? exp)
         (if (same-variable? exp var) 1 0))
        ((sum? exp)
         (make-sum (deriv (addend exp) var)
                   (deriv (augend exp) var)))
        ((product? exp)
         (make-sum
          (make-product (multiplier exp)
                        (deriv (multiplicand exp) var))
          (make-product (deriv (multiplier exp) var)
                        (multiplicand exp))))
        ((exponentiation? exp)
         (make-product (exponent exp)
                       (make-exponentiation (base exp)
                                            (- (exponent exp) 1))))
        (else
         (error "unknown expression type -- DERIV" exp))))

(define (variable? x) (symbol? x))

(define (same-variable? v1 v2)
  (and (variable? v1) (variable? v2) (eq? v1 v2)))

(define (make-sum . a)
  (define (iter sum rest)
    (cond ((null? rest)
           (if (zero? sum) '() (list sum)))
          ((=number? (car rest) 0)
           (iter sum (cdr rest)))
          ((number? (car rest))
           (iter (+ sum (car rest)) (cdr rest)))
          (else
           (cons (car rest) (iter sum (cdr rest))))))
  (let ((result (iter 0 a)))
    (if (= (length result) 1)
        (car result)
        (cons '+ result))))

(define (=number? exp num)
  (and (number? exp) (= exp num)))

(define (make-product . a)
  (define (iter product rest)
    (cond ((null? rest)
           (if (= 1 product) '() (list product)))
          ((=number? (car rest) 1)
           (iter product (cdr rest)))
          ((number? (car rest))
           (iter (* product (car rest)) (cdr rest)))
          (else
           (let ((result (iter product (cdr rest))))
             (if (and (pair? result) (eq? (car result) 0))
                 (list 0)
                 (cons (car rest) result))))))
  (let ((result (iter 1 a)))
    (if (= (length result) 1)
        (car result)
        (cons '* result))))

(define (sum? x) (and (pair? x) (eq? (car x) '+)))

(define (addend s) (cadr s))

(define (augend s)
  (if (> (length s) 3)
      (cons '+ (cddr s))
      (caddr s)))

(define (product? x) (and (pair? x) (eq? (car x) '*)))

(define (multiplier p) (cadr p))

(define (multiplicand p)
  (if (> (length p) 3)
      (cons '* (cddr p))
      (caddr p)))

(define (exponentiation? x) (and (pair? x) (eq? (car x) '**)))

(define (base s) (cadr s))

(define (exponent s) (caddr s))

(define (make-exponentiation m1 m2)
  (cond ((=number? m2 0) 1)
        ((= m2 1) m1)
        (else (list '** m1 m2))))

Can it be improved? In particular, look closely at make-product and make-sum.
Extend sums and products functions
lisp;scheme;sicp
You have the right ideas for implementing make-sum and make-product. You may improve upon them in the following ways, in my opinion.

Within the two functions, you employ an inner function iter to partition numeric and non-numeric terms. Traditionally, one uses the name iter to represent an iterative function; your definition, however, is most certainly recursive. To make it truly iterative, one may use two accumulators - one for the sum of all numeric terms and the other for the rest of the terms. Then one may return the cons of the two accumulators when the iteration terminates.

One may allow make-sum (and similarly make-product) to handle other sum-type objects in its terms list as well - simply test with sum?, then append its terms.

Finally, there are several cases that must be handled when iteration terminates: rest may be empty or it may contain one or more items; sum-numbers may be zero or non-zero.

Here's an implementation (the naming scheme is slightly different from yours - in the iter function, terms is the list of remaining terms to be iterated upon, sum-numbers is the sum of all numeric terms seen so far, rest is a list of all non-numeric terms seen so far):

(define (make-sum . terms)
  (define (iter terms sum-numbers rest)
    (cond ((null? terms)
           (cons sum-numbers rest))
          ((=number? (car terms) 0)
           (iter (cdr terms) sum-numbers rest))
          ((number? (car terms))
           (iter (cdr terms) (+ sum-numbers (car terms)) rest))
          ((sum? (car terms))
           (iter (append (cdar terms) (cdr terms)) sum-numbers rest))
          (else
           (iter (cdr terms) sum-numbers (cons (car terms) rest)))))
  (let* ((result (iter terms 0 null))
         (sum-numbers (car result))
         (rest (cdr result)))
    (if (null? rest)
        sum-numbers
        (if (zero? sum-numbers)
            (if (null? (cdr rest))
                (car rest)
                (cons '+ rest))
            (cons '+ (cons sum-numbers rest))))))

(define (make-product . terms)
  (define (iter terms sum-numbers rest)
    (cond ((null? terms)
           (cons sum-numbers rest))
          ((=number? (car terms) 0)
           (cons 0 null))
          ((=number? (car terms) 1)
           (iter (cdr terms) sum-numbers rest))
          ((number? (car terms))
           (iter (cdr terms) (* sum-numbers (car terms)) rest))
          ((product? (car terms))
           (iter (append (cdar terms) (cdr terms)) sum-numbers rest))
          (else
           (iter (cdr terms) sum-numbers (cons (car terms) rest)))))
  (let* ((result (iter terms 1 null))
         (sum-numbers (car result))
         (rest (cdr result)))
    (if (null? rest)
        sum-numbers
        (if (zero? sum-numbers)
            (if (null? (cdr rest))
                (car rest)
                (cons '* rest))
            (cons '* (cons sum-numbers rest))))))

One may also simplify the definitions of augend and multiplicand by using make-sum and make-product respectively:

(define (augend s) (apply make-sum (cddr s)))
(define (multiplicand p) (apply make-product (cddr p)))

Note the similarities between make-sum and make-product. It may be possible to factor out some code. One may also use library functions such as partition (if available) to simplify the code.
_unix.312579
I was under the impression from the POSIX specs for sed that it is necessary to left-align the text on the line following the i\ command, unless you want leading whitespace in the output.A quick test on my Mac (using BSD sed) shows that perhaps this is not necessary:$ cat test.sed #!/bin/sed -fi\ This line starts with spaces.$ echo some text | sed -f test.sedThis line starts with spaces.some text$ However, I can't seem to find this documented anywhere. It's not in the POSIX specs, and it's not even in sed's man page on my system.Can I rely on this behavior in sed scripts which I want to be portable? How portable is it?(Is it documented anywhere?)(Bonus question: Is it even possible to force sed to insert whitespace at the beginning of a fixed line passed to i\?)
Is it portable to indent the argument to sed's 'i\' command?
sed;posix;portability
Yes, it should be portable as long as you escape any leading blank. Why? Because some seds strip blank characters from text lines, and the only way to avoid that is to escape the leading blank, as these manual pages dating back to the last century explain: 1, 2, 3

The same goes for BSD sed (OSX just copied the code, it's not their extension), and if you check the archives and read the man page from BSD 2.11 it's pretty clear:

(1)i\
text ....... An argument denoted text consists of one or more lines, all but the last of which end with '\' to hide the newline. Backslashes in text are treated like backslashes in the replacement string of an 's' command, and may be used to protect initial blanks and tabs against the stripping that is done on every script line.

Now, where is this documented in the POSIX spec? It only says

The argument text shall consist of one or more lines. Each embedded <newline> in the text shall be preceded by a <backslash>. Other <backslash> characters in text shall be removed, and the following character shall be treated literally.

and if you scroll down under RATIONALE it says

The requirements for acceptance of <blank> and <space> characters in command lines has been made more explicit than in early proposals to describe clearly the historical practice and to remove confusion about the phrase "protect initial blanks [sic] and tabs from the stripping that is done on every script line" that appears in much of the historical documentation of the sed utility description of text. (Not all implementations are known to have stripped <blank> characters from text lines, although they all have allowed leading <blank> characters preceding the address on a command line.)

Since the part "backslashes may be used to" was not included in that quote, the remaining phrase "protect initial blanks..." doesn't make any sense...1

Anyway, to sum up: some implementations did (and some still do) strip blanks from text lines. However, since the POSIX spec to which all implementations should comply says

Other <backslash> characters in text shall be removed, and the following character shall be treated literally.

we can conclude that the portable way to indent the lines in the text-to-be-inserted is by escaping the leading blank on each of those lines.

1: I also don't understand why the OSX/BSD people have changed the entire paragraph in the man page without altering the source code - you get the same behavior as before, but the man section that documents this stuff is no longer there.
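Putting that conclusion into concrete form: escape the first leading blank of each inserted line, and any POSIX-conforming sed must keep the indentation. A small sketch (run here with GNU sed; the four-space indent is arbitrary):

```shell
#!/bin/sh
# The backslash protects the first leading space of the inserted line;
# the backslash itself is removed and the following character kept literally.
printf 'some text\n' | sed 'i\
\    This line starts with spaces.'
```

This prints the inserted line with its four leading spaces intact, followed by "some text". On seds that never strip leading blanks (like GNU sed), the backslash is harmless; on those that do strip, it is what preserves the indent.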
_unix.191821
How can I get the process ID of the driver of a FUSE filesystem?For example, I currently have two SSHFS filesystems mounted on a Linux machine:$ grep sshfs /proc/mountshost:dir1 /home/gilles/net/dir1 fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0host:dir2 /home/gilles/net/dir2 fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0$ pidof sshfs15031 15007How can I know which of 15007 and 15031 is dir1 and which is dir2? Ideally I'd like to automate that, i.e. run somecommand /home/gilles/net/dir1 and have it display 15007 (or 15031, or not a FUSE mount point, as appropriate). Note that I'm looking for a generic answer, not an answer that's specific to SSHFS, like tracking which host and port the sshfs processes are connected to, and what files the server-side process has open which might not even be possible at all due to connection sharing. I'm primarily interested in a Linux answer, but a generic answer that works on all systems that support FUSE would be ideal.Why I want to know: to trace its operation, to kill it in case of problems, etc.
Find what process implements a FUSE filesystem
process;fuse
I don't think it's possible. Here's why. I took the naive approach, which was to add the pid of the process opening /dev/fuse to the meta data that fuse creates at mount time, struct fuse_conn. I then used that information to display a pid= field in the mount command. The patch is really simple:diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.hindex 7354dc1..32b05ca 100644--- a/fs/fuse/fuse_i.h+++ b/fs/fuse/fuse_i.h@@ -402,6 +402,9 @@ struct fuse_conn { /** The group id for this mount */ kgid_t group_id;+ /** The pid mounting process */+ pid_t pid;+ /** The fuse mount flags for this mount */ unsigned flags;diff --git a/fs/fuse/inode.c b/fs/fuse/inode.cindex e8799c1..23a27be 100644--- a/fs/fuse/inode.c+++ b/fs/fuse/inode.c@@ -554,6 +554,7 @@ static int fuse_show_options(struct seq_file *m, struct dentry *root) struct super_block *sb = root->d_sb; struct fuse_conn *fc = get_fuse_conn_super(sb);+ seq_printf(m, ,pid=%u, fc->pid); seq_printf(m, ,user_id=%u, from_kuid_munged(&init_user_ns, fc->user_id)); seq_printf(m, ,group_id=%u, from_kgid_munged(&init_user_ns, fc->group_id)); if (fc->flags & FUSE_DEFAULT_PERMISSIONS)@@ -1042,6 +1043,7 @@ static int fuse_fill_super(struct super_block *sb, void *data, int silent) fc->release = fuse_free_conn; fc->flags = d.flags;+ fc->pid = current->pid; fc->user_id = d.user_id; fc->group_id = d.group_id; fc->max_read = max_t(unsigned, 4096, d.max_read);def@fractal [6:37] ~/p/linux -- masterI booted the kernel, mounted sshfs, ran mount:[email protected]:/tmp on /root/tmp type fuse.sshfs (rw,nosuid,nodev,relatime,pid=1549,user_id=0,group_id=0)Success? Unfortunately, not:root 1552 0.0 0.0 45152 332 ? Ssl 13:39 0:00 sshfs [email protected]:/tmp tmp Then I realized: the remaining sshfs process is a child of the one that created the mount. It inherited from the fd. As fuse is implemented, we could have a multitude of processes inheriting from the fd. 
We could have the fd passed around in UNIX sockets, completely out of the original process tree. We can obtain the "who owns this TCP port" information because sockets carry that metadata, and simply parsing /proc tells us that information. Unfortunately, the fuse fd is a regular fd on /dev/fuse. Unless that fd somehow becomes special, I don't see how this can be implemented.
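The best you can do from user space is enumerate the candidate processes, i.e. everything that holds /dev/fuse open. A rough sketch (assuming a Linux /proc layout; as explained above, this cannot tie a pid to one particular mount):

```shell
# List processes holding /dev/fuse open -- the candidate FUSE drivers.
# Run as root to see other users' processes; unreadable fds are skipped.
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "/dev/fuse" ]; then
        pid=${fd#/proc/}       # strip the /proc/ prefix ...
        pid=${pid%%/*}         # ... and everything after the pid
        printf '%s %s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)"
    fi
done | sort -un
```

On the system from the question this would print both sshfs pids, but nothing distinguishes dir1's driver from dir2's.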
_unix.256081
I need to replace the following characters: ă with a, â with a, î with i, ş with s, ţ with t. The replacement must be case sensitive: if a letter is upper-case then it must be replaced with an upper-case letter. I've read that I could use sed for this, but I don't know how to use it. Can anyone help me by writing the exact command that I must run in Terminal?
How can I replace a character in all the .php files inside a folder on OS X?
text processing;sed;replace
Even though sed is commonly used for these types of tasks, it isn't actually designed for them; its very name says "stream editor". The tool that is designed for non-interactive file editing is ex. (vi is the visual editor form of ex.) Particularly when you want to edit files in place, ex is a far superior tool to sed.

The ex commands here are almost identical to what you would write for sed, but they don't have to be. The following is POSIX compliant (unlike sed -i):

for file in *.php; do ex -sc '%s/[ăâ]/a/ge | %s/ş/s/ge | %s/ţ/t/ge | %s/î/i/ge | x' "$file"; done

Explanation:

-s starts silent mode in preparation for batch processing.
-c specifies the command(s) to be executed.
% means to apply the following command to every line in the file.
The s/// commands are fairly self-explanatory; the e flag at the end means that any errors (due to the pattern not being found) are suppressed and file processing will continue.
| is a command separator (not a pipe).
x tells ex to write any changes to the file (but only if there were changes) and exit.

If you want in-place file editing, ex is the tool of choice. If you want to preview the changes before you make them, I'd recommend using tr as @gardenhead suggests.

(Of course, if you're using a proper version control system such as git, you could make the changes in place using ex and compare the files to the old version by running git diff.)
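If you do want sed after all, here is a sketch. It deliberately avoids sed -i, whose syntax differs between GNU and BSD/macOS sed, and it assumes the characters being replaced are the Romanian diacritics ă, â, î, ş and ţ (they appear to have been stripped from the question in transit):

```shell
# Strip Romanian diacritics from every .php file in the current directory.
# Portable: write to a temp file and move it back instead of using sed -i.
for file in *.php; do
    [ -e "$file" ] || continue   # no matches: skip the literal glob pattern
    sed -e 's/ă/a/g' -e 's/â/a/g' \
        -e 's/î/i/g' -e 's/ş/s/g' -e 's/ţ/t/g' \
        "$file" > "$file.tmp" && mv "$file.tmp" "$file"
done
```

Separate s/ă/a/g commands are used instead of a bracket expression like [ăâ], because bracket expressions can misbehave on multibyte characters under a C locale, while a plain multibyte pattern matches its byte sequence in any locale.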
_cogsci.1449
I remember reading about a study. I forgot the actual details, but the gist of it was: people were asked in which situation they would prefer to live, one where they make $100,000 and the neighbours all make $200,000, or one where they make $10,000 and the neighbours all make $5,000. Most participants chose the $10,000 vs $5,000 situation, even though they would have ended up with much less money (note: I am pretty sure I got the actual numbers wrong). Sadly, I don't remember where I read it (I thought it must have been in Predictably Irrational, but can't find it there). Questions: What is the reference for the study? What theories explain this behavior? Where can I read background information on the topic?
A study about preference for making relatively vs. absolute more money?
reference request;decision making;economics
null
_cstheory.6448
Often, when we take part in TCS conferences, we notice some little details that we wish the conference organisers had taken care of. And when we are organising conferences ourselves, we have already forgotten them. Hence the question: which small steps could we easily take to improve TCS conferences? Hopefully, this question could become a resource that we could double-check whenever we are organising conferences, to make sure that we do not repeat the same mistakes again and again... I am interested here in relatively small and inexpensive details, something that conference organisers could easily have done if only they had thought about it in time. For example, it might be a useful piece of information that could be put on the conference web page well in advance; a five-dollar gadget that may save the day; something to consider when choosing the restaurant for the banquet; the best timing of the coffee breaks; or your ideal design of the conference badges. We can cover here all aspects of conference arrangements (including paper submissions, program committees, reviews, local arrangements, etc.). This is a community wiki question. Please post one idea per answer, and please vote other answers up or down depending on how important they are in your opinion.
Small steps for better TCS conferences?
soft question;big list;research practice;conferences
null
_webmaster.2175
I was just wondering how the community backs up their websites? I take a nightly dump of the database and ship it across to another server, which then gets backed up onto removable media. It is fairly automated and seems to work, but I would be interested to hear what others have to say.
How do you backup your websites?
database;backups;code
The site code is in Subversion, which is in turn backed up nightly. Any development of the code is done on dedicated dev servers. Production is only updated once the new version has been tested. The content in relational databases is backed up nightly. Some of our sites have a huge volume of static content (images, PDFs etc.) and those are stored on RAID5 SANs, mirrored to an offsite facility, with tape backups just in case. We do not back up log files as we do not rely on them much.
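For comparison, a minimal sketch of the nightly dump-and-ship routine the question describes, as a shell function. The database name (mydb), the remote target (backup@backuphost:/srv/backups/) and the retention count are placeholders, not details from any real setup:

```shell
# Sketch of a nightly database backup: dump, compress, ship, prune.
# mydb, backup@backuphost and the paths are hypothetical placeholders.
nightly_db_backup() {
    stamp=$(date +%Y%m%d)
    dumpdir=${1:-/var/backups/db}
    mkdir -p "$dumpdir" || return 1

    # 1. Dump and compress (--single-transaction avoids locking InnoDB tables).
    mysqldump --single-transaction mydb | gzip > "$dumpdir/mydb-$stamp.sql.gz"

    # 2. Ship the dump to the second server, which handles removable media.
    rsync -a "$dumpdir/mydb-$stamp.sql.gz" backup@backuphost:/srv/backups/

    # 3. Keep only the newest 14 local dumps.
    ls -1t "$dumpdir"/mydb-*.sql.gz | tail -n +15 | xargs rm -f
}
```

Run something like this from cron and, just as importantly, test the restore path now and then; a backup that has never been restored is only a hope.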
_webapps.83044
Here is my problem: I want to use Mailchimp to send a newsletter to a list of people's email addresses (I do have all the permissions). I set up the list correctly and sent out a test mail, but it only arrives at mail accounts running on web.de, Hotmail, etc., and not at the private email addresses hosted at strato.de. I contacted Mailchimp's customer care, but they said it is probably strato's spam filters. Now, I have turned those off for my account but the emails still don't arrive. Any idea what may solve this issue?
Mailchimp emails don't arrive at strato.de addresses
email;mailchimp
null
_unix.149533
I did a usermod to add the current user to a group, but when I run id -Gn it only shows the user's main group:

[user@computer ~]$ id -Gn
user

But when I specify the user, it works normally:

[user@computer ~]$ id -Gn user
user newgroup

Do you have an idea why it works like this? Am I missing something concerning group management in UNIX?
id command doesn't show all user's groups
group
That's because your active set of groups is only determined at login. You'll need to log out and log in again to pick up the change and see it reflected by id. You can see this another way by issuing cat /proc/$$/status, which lists most of your current (session) process state.
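You can see the two views side by side, and pick up the new group in a subshell without a full re-login, with something like:

```shell
id -Gn                           # the session's groups, fixed at login time
id -Gn "$(id -un)"               # groups as currently stored in the group database
grep '^Groups:' /proc/$$/status  # the kernel's view for this shell (Linux only)
# newgrp newgroup                # would start a subshell with the new group active
```

Note that newgrp only affects the subshell it starts; for the change to apply everywhere you still need to log out and back in.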
_codereview.127714
I wrote a download function and I wanted to know if my code clean, readable and maintainable, or if I could make things easier. Please tell me if this function could be used in the real world.public static void Download(Uri _downloadUri, string _path, string _passedSong, Song _currentObj) { if (_downloadUri != null) { if (!string.IsNullOrEmpty(_path) && !string.IsNullOrEmpty(_passedSong)) { try { var downloadedData = new byte[0]; var webReq = WebRequest.Create(_downloadUri); using (var webResponse = webReq.GetResponse()) { using (var dataStream = webResponse.GetResponseStream()) { //Download chuncks byte[] dataBuffer = new byte[1024]; //Get size int dataLength = (int) webResponse.ContentLength; ByteArgs byteArgs = new ByteArgs(); byteArgs.downloaded = 0; byteArgs.total = dataLength; if (_bytesDownloaded != null) { _bytesDownloaded(byteArgs, _currentObj); } //Download using (var memoryStream = new MemoryStream()) { while (true) { int bytesFromStream = dataStream.Read(dataBuffer, 0, dataBuffer.Length); if (bytesFromStream == 0) { byteArgs.downloaded = dataLength; byteArgs.total = dataLength; if (_bytesDownloaded != null) { _bytesDownloaded(byteArgs, _currentObj); } //Download complete break; } else { //Write the downloaded data memoryStream.Write(dataBuffer, 0, bytesFromStream); byteArgs.downloaded = bytesFromStream; byteArgs.total = dataLength;// for download progressbar if (_bytesDownloaded != null) { _bytesDownloaded(byteArgs, _currentObj); } } } //Convert to byte array downloadedData = memoryStream.ToArray(); //Write bytes to the specified file if (Directory.Exists(Settings.Default.downloadDirectory)) { using (var newFile = new FileStream(_path, FileMode.Create)) { newFile.Write(downloadedData, 0, downloadedData.Length); } } } } } } catch(Exception ex) { ExceptionLog.LogException(ex, Download(), in GeneralSettings.cs); } } } }
Download function
c#
null
_softwareengineering.210788
I have been working on a system alone for about two years. I inherited the system from a contractor who spent about two years working on it before me (also alone). The system is not particularly well designed, because business logic, presentation logic and data logic are mingled together. It is a very complex system, which I am trying to refactor as I go along, and this takes time. A new developer was recruited, but he seems to be taking more of a project management role. He is very direct, asking exactly how long things will take and questioning all my assumptions and predictions, which is difficult because of the complexity and because I am the only expert at the moment. I am very organised in my personal life. For example, I was asked what time I would arrive in Manchester last week, so I provided an exact time, allowing for accidents and road works on the way. I find it difficult to apply the same principles to software development at work at the moment. Don't get me wrong: I have worked with project managers on less complex projects in the past and have had a good relationship, but those projects were less complex and I always delivered ahead of schedule. I am struggling with this particular project manager and it is causing stress. How do other developers deal with project managers who want answers and accurate deadline dates?
Setting Deadlines in software development
design patterns;teamwork;estimation;time estimation
null
_unix.218338
I have an sda disk on my openSUSE system and want to add a new disk to get more space. After adding the disk I created a partition on it; let's call it sdb. My sda has 6 partitions and I want to add sdb to the 6th partition of sda. Can anyone help me with this?
Adding a new disk to an existing partition in Linux
mount;partition;opensuse;hard disk;yast
null
_codereview.134633
Although my code works as expected, there are a few gotchas.A single row is not filled at once; instead, I can see partially filled rows during the rendering process(Fixed in the updated code below.).The application still feels slow; can it be optimized further?And the biggest thing, how can I make the code more OOPsy?index.html <!DOCTYPE html> <html> <head> <script type=text/javascript src=js/client.js></script> <script type=text/javascript src=js/mosaic.js></script> <title>Mosaic</title> <style> .container { margin: 0 auto; width: 50%; } .container ul { list-style: none; margin-left: 0; } </style> </head> <body> <script id=worker1 type=javascript/worker> var SVG_URL = 'http://localhost:8765/color/'; function httpGet(url) { return new Promise(function(resolve, reject) { var req = new XMLHttpRequest(); req.open('GET', url); req.onload = function() { if (req.status == 200) { resolve(req.response); } else { reject(Error(req.statusText)); } }; req.onerror = function() { reject(Error('Network Error')); }; req.send(); }); }; function getSvg(data) { return httpGet(SVG_URL + data.hex) .then(function(svg) { return {svg: svg, x: data.x, y: data.y}; }) .catch(function(error) { console.log(error); }); }; function messageHandler(e) { var chunks = e.data; Promise.all(chunks.map(function(data) { return getSvg(data); })) .then(function(response) { self.postMessage(response); // Close the woker to be garbage collected. //self.close(); }); }; self.addEventListener('message', messageHandler, false); </script> <div class=container> <ul id=image-list> </ul> <input id=input type=file accept=image/*> </div> <script> (function(app) { document.addEventListener('DOMContentLoaded', function() { var blob = new Blob([document.querySelector('#worker1').textContent]); app.run(window.URL.createObjectURL(blob)); }); })(window.app || (window.app = {})); </script> </body></html>index.js /** * @fileoverview Creates the PhotoMosaic of the given image. 
* @author Vivek Poddar */ (function(window, document, app) { 'use strict'; var DOMURL = window.URL || window.webkitURL; // classy, since V8 prefers predictable objects. function Tile(rgb, x, y) { this.hex = Tile.rgbToHex(rgb); this.x = x * TILE_WIDTH; this.y = y * TILE_HEIGHT; }; // classy, since V8 prefers predictable objects. function SVGTile(svg, x, y) { this.svgURL = SVGTile.createSVGUrl(svg); this.x = x; this.y = y; this.width = TILE_WIDTH; this.height = TILE_HEIGHT; }; SVGTile.createSVGUrl = function(svg) { var svgBlob = new Blob([svg], {type: 'image/svg+xml;charset=utf-8'}); return DOMURL.createObjectURL(svgBlob); }; Tile.componentToHex = function(c) { var hex = c.toString(16); return hex.length == 1 ? '0' + hex : hex; }; Tile.rgbToHex = function(rgb) { return Tile.componentToHex(rgb[0]) + Tile.componentToHex(rgb[1]) + Tile.componentToHex(rgb[2]); }; /** * Draws a offscreen canvas to get averaged rgb per tile. * see https://stackoverflow.com/a/17862644/4260745 */ function getOffScreenContext(width, height) { var canvas = document.createElement('canvas'); canvas.width = width; canvas.height = height; return canvas.getContext('2d'); }; /** * Gets tiles data from the source image. 
* @param {HTMLElement} sourceImage * @return {!Array<!Tile>} */ function getTiles(sourceImage, tilesX, tilesY) { var res = []; var context = getOffScreenContext(tilesX, tilesY); // see https://stackoverflow.com/a/17862644/4260745 context.drawImage(sourceImage, 0, 0, tilesX, tilesY); var data = context.getImageData(0, 0, tilesX, tilesY).data; var i = 0; for (var row = 0; row < tilesY; row++) { for (var col = 0; col < tilesX; col++) { res.push(new Tile(data.subarray(i * 4, i * 4 + 3), col, row)); i++; } } return res; }; /** * @param {CanvasRenderingContext2D} ctx * @param {!Array<!SVGTile>} tiles */ function drawTiles(ctx, tiles) { var context = getOffScreenContext(tiles.length * TILE_WIDTH, TILE_HEIGHT); tiles.forEach(function(tile, index) { renderTile(context, tile, function() { if (tiles.length === index + 1) { ctx.drawImage(context.canvas, 0, tiles[0].y); } }); }); }; /** * Draws PhotoMosaic on screen. * @param {HTMLElement} image The source image from file input. */ function drawMosiac(image, ctx, url) { var rowData = {}; function renderRow(i) { if (!rowData[i]) return i; var tiles = []; rowData[i].forEach(function(data) { var tile = new SVGTile(data.svg, data.x, data.y); tiles.push(tile); }); drawTiles(ctx, tiles); return renderRow(++i); }; var tilesX = Math.floor(image.width / TILE_WIDTH); var tilesY = Math.floor(image.height / TILE_HEIGHT); var tiles = getTiles(image, tilesX, tilesY); var i = 0; var maxWorkers = navigator.hardwareConcurrency || 4; function runWorker(worker) { worker.onmessage = function(e) { var row = e.data[0].y / TILE_HEIGHT; rowData[row] = e.data; if (row === i) { i = renderRow(i); } if (tiles.length) { runWorker(worker) } else { worker.terminate(); }; } worker.postMessage(tiles.splice(0, tilesX)); } if (tiles.length) { for(var x = maxWorkers; x--; ) runWorker(new Worker(url)); } }; /** * Renders svg tile on the given context. * @param {CanvasRenderingContext2D} ctx * @param {!Tile} tile The tile to render. 
* @param {function()} callback To be called after image is loaded. * @throws Error */ function renderTile(ctx, tile, callback) { var img = new Image(); img.onload = function() { try { ctx.drawImage(this, tile.x, 0, tile.width, tile.height); ctx.imageSmoothingEnabled = false; ctx.mozImageSmoothingEnabled = false; DOMURL.revokeObjectURL(tile.svgURL); callback(); } catch (e) { throw new Error('Could not render image' + e); } }; img.src = tile.svgURL; }; /** * Handles image upload. * @param {function(HTMLElement)} callback To be called after image loads. */ function handleFileUpload(callback) { var img = new Image(); img.src = window.URL.createObjectURL(this); img.onload = function() { callback(this); } }; /** * Main function which starts the rendering process. * @throws RangeError */ app.run = function run(url) { var inputElement = document.getElementById('input'); var outputElement = document.getElementById('output'); inputElement.addEventListener('change', function() { handleFileUpload.call(this.files[0], function(image) { if (image.width < TILE_WIDTH || image.height < TILE_HEIGHT) { console.log(TILE_WIDTH, TILE_HEIGHT); throw new RangeError( 'Tile dimension cannot be greater than source image.'); } var canvas = document.createElement('canvas'); var context = canvas.getContext('2d'); canvas.width = image.width; canvas.height = image.height; drawMosiac(image, context, url); outputElement.appendChild(canvas); }); }, false); };})(window, document, window.app || (window.app = {}));
SVG mosaic creator
javascript;performance;object oriented;multithreading;canvas
null
_softwareengineering.257218
I'm a PHP developer coding with the Yii 1.x framework. I was looking for a way to encode unescaped JSON in Yii 1.x, and found the framework's CJSON class for this purpose (so, OOP). Since it does not support unescaped JSON, though, I had to revert to the plain PHP, procedural, non-OOP approach of using json_encode($results, JSON_UNESCAPED_SLASHES);. However, I asked how to achieve the same with the Yii framework. The answer I received shows that, while it does what I want, it is not possible with the base framework alone and requires that I extend a base class. That proposed solution requires 12 lines of code and involves creating a separate file, while my solution requires just 1 new line of code. Just to feed my curiosity: what is more important in situations like this? Should I follow KISS and make my code as simple as possible, even reverting to procedural code, or should I stick to OOP solutions and always extend classes if I can't do what I need with existing framework code?
Is the KISS principle more important than utilizing OOP to solve a problem?
object oriented
Comparing procedural code and OOP is like comparing apples and oranges. Sometimes one leads to a better design, sometimes the other, and sometimes neither. In languages that support a mixture of OO and procedural code (which is the large majority of OO languages), it can make sense to sub-class an existing class if:

- the base class is open for extension (not sealed, final, or whatever it is called in your language of choice), and
- your extension must be used by another class that takes (a reference to) the base class as a dependency, or
- your extension is applicable only in some situations, but it must also seamlessly handle the situations that the base class caters for, or
- your extension needs access to parts of the base class, or
- the code using the extension will mostly use it in conjunction with the base class.

If none of that holds, then you should go for whatever leads to the simplest code, be it a class, extension or procedure (or just a procedure call in this case).
_unix.314987
Okay, so the Linux server I use at work doesn't have Zsh installed on it and I don't have root access, so I manually built and installed zsh to $HOME/usr (I've done this with other programs such as colordiff, rlwrap, and dateutils, and they all function perfectly well) and it runs just fine except that when I specify autoload -Uz compinit && compinit; autoload -U colors && colors it gives me the following errors: /home/foobar/.zshrc:16: compinit: function definition file not found/home/foobar/.zshrc:17: colors: function definition file not foundNow I've checked and rechecked and everything is installed as it should be:(Output from ls -R usr)usr/share/zsh:5.2 site-functionsusr/share/zsh/5.2:functions help scriptsusr/share/zsh/5.2/functions:Calendar Chpwd Completion Exceptions MIME Misc Newuser Prompts TCP VCS_Info Zftp ZleHere's the script that I wrote to configure and build ZSH:_prefix=$HOME/precd zsh-5.2prepare() { # set correct keymap path # FIXME this may not work at RRD since their Linux servers aren't Arch Linux based sed -i 's#/usr/share/keymaps#/lib/kbd/keymaps#g' Completion/Unix/Command/_loadkeys # Fix usb.ids path # FIXME I don't know if this is necessary/will even work at RRD sed -i 's#/usr/share/misc/usb.ids#/usr/share/hwdata/usb.ids#g' Completion/Linux/Command/_lsusb # Remove unneeded and conflicting completion scripts # FIXME this definitely probably won't work at RRD #for _fpath in AIX BSD Cygwin Darwin Debian Mandriva openSUSE Redhat Solaris; do # rm -rf Completion/$_fpath # sed s#\s*Completion/$_fpath/\*/\*##g -i Src/Zle/complete.mdd #done #rm Completion/Linux/Command/_{pkgtool,rpmbuild}}build() { ./configure --prefix=$_prefix/usr \ --docdir=$_prefix/usr/share/doc/zsh \ --htmldir=$_prefix/usr/share/doc/zsh/html \ --enable-etcdir=$_prefix/etc/zsh \ --enable-zshenv=$_prefix/etc/zsh/zshenv \ --enable-zlogin=$_prefix/etc/zsh/zlogin \ --enable-zlogout=$_prefix/etc/zsh/zlogout \ --enable-zprofile=$_prefix/etc/zsh/zprofile \ 
--enable-zshrc=$_prefix/etc/zsh/zshrc \ --disable-maildir-support \ --with-term-lib='ncursesw' \ --enable-multibyte \ --enable-function-subdirs \ --enable-fndir=$_prefix/usr/share/zsh/functions \ --enable-scriptdir=$_prefix/usr/share/zsh/scripts \ --with-tcsetpgrp \ --enable-pcre \ --enable-zsh-secure-free \ --enable-multibyte make}\package() { make install make install.info install.html}make distcleanpreparebuildcheckpackageIs there anything that I'm missing or forgetting, like is there a specific option that configure sets that will instruct Zsh specifically where to find its site-functions and other functions. Mostly I'm super confused because it's not like Zsh doesn't know where its site-functions are because it installed them in the expected place, albeit it's not in /usr. Any ideas?EDIT: I ran strace on zsh and there's a huge list of files that Zsh can't find:access(/etc/ld.so.preload, R_OK) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/lcpv650/lcp/rts/lib/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/lcpv650/lcp/rts/lib/tls, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/lcpv650/lcp/rts/lib/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/code1_v370/c1p/lib/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/code1_v370/c1p/lib/tls, 0x7fffa0839490) = -1 ENOENT (No such file or 
directory)open(/apps/code1_v370/c1p/lib/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/code1_v370/c1p/lib/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/lcpv650/lcp/lib/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/lcpv650/lcp/lib/tls, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/lcpv650/lcp/lib/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVcbl64/lib/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVcbl64/lib/tls, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVcbl64/lib/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVXbsrt/lib/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVXbsrt/lib/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVXbsrt/lib/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVXbsrt/lib/tls, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVXbsrt/lib/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file 
or directory)stat(/opt/FJSVXbsrt/lib/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVXbsrt/lib/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVXbsrt/lib, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVXmeft/lib/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVXmeft/lib/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVXmeft/lib/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVXmeft/lib/tls, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVXmeft/lib/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVXmeft/lib/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/opt/FJSVXmeft/lib/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/FJSVXmeft/lib, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/syncsort/lib/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/syncsort/lib/tls, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/syncsort/lib/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/rrdcom/load/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/rrdcom/load/tls, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/x86_64/libdl.so.2, 
O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/rrdcom/load/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/cobol_utils/tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/cobol_utils/tls/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/cobol_utils/tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/cobol_utils/tls, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/cobol_utils/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/cobol_utils/x86_64, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(/apps/cobol_utils/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/apps/cobol_utils, 0x7fffa0839490) = -1 ENOENT (No such file or directory)open(tls/x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(libdl.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or 
directory)open(libncursesw.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(librt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(libm.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libc.so.6, 
O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(libc.so.6, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(libtinfo.so.5, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or 
directory)open(tls/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)open(libpthread.so.0, O_RDONLY) = -1 ENOENT (No such file or directory)connect(11, {sa_family=AF_FILE, path=/var/run/nscd/socket}, 110) = -1 ENOENT (No such file or directory)connect(11, {sa_family=AF_FILE, path=/var/run/nscd/socket}, 110) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(libnss_files.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libnss_vas4.so.2, 
O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(libnss_vas4.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/quest/lib64/tls/x86_64/libvtsmartcache.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/quest/lib64/tls/x86_64, 0x7fffa0838e60) = -1 ENOENT (No such file or directory)open(/opt/quest/lib64/tls/libvtsmartcache.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/quest/lib64/tls, 0x7fffa0838e60) = -1 ENOENT (No such file or directory)open(/opt/quest/lib64/x86_64/libvtsmartcache.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/opt/quest/lib64/x86_64, 0x7fffa0838e60) = -1 ENOENT (No such file or directory)open(/opt/quest/lib64/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(libresolv.so.2, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/quest/lib64/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/quest/lib64/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or 
directory)open(/apps/code1_v370/c1p/lib/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(libcrypt.so.1, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/quest/lib64/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/quest/lib64/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/rts/lib/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/code1_v370/c1p/lib/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/lcpv650/lcp/lib/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(/opt/FJSVcbl64/lib/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/syncsort/lib/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(/apps/rrdcom/load/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/x86_64/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(tls/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(x86_64/libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)open(libfreebl3.so, O_RDONLY) = -1 ENOENT (No such file or directory)stat(/var/opt/quest/vas/.qas_id_dbg, 0x7fffa08397d0) = -1 ENOENT (No such file or directory)stat(/tmp/.vasipc_timeout, 0x7fffa0839640) = -1 ENOENT (No such file or directory)stat(/tmp/.vasipc_timeout, 0x7fffa08396d0) = -1 ENOENT 
(No such file or directory)
stat(/var/opt/quest/vas/.qas_id_call, 0x7fffa08397d0) = -1 ENOENT (No such file or directory)
stat(/etc/zshenv.zwc, 0x7fffa0839a80) = -1 ENOENT (No such file or directory)
stat(/etc/zshenv, 0x7fffa08399f0) = -1 ENOENT (No such file or directory)
open(/etc/zshenv, O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)

Some of these failures are due to the configure options that I'm passing to Zsh, some of which I need to change; of particular interest, however, are the following:

stat(/home/foobar/.zshenv.zwc, 0x7fffa0839a80) = -1 ENOENT (No such file or directory)
stat(/home/foobar/.zshrc.zwc, 0x7fffa0837f00) = -1 ENOENT (No such file or directory)
stat(/sfun.zwc, 0x7fffa08332d0) = -1 ENOENT (No such file or directory)
stat(/sfun/compinit.zwc, 0x7fffa0833240) = -1 ENOENT (No such file or directory)
stat(/sfun/compinit, 0x7fffa08331b0) = -1 ENOENT (No such file or directory)
access(/sfun/compinit, R_OK) = -1 ENOENT (No such file or directory)
stat(/sfun.zwc, 0x7fffa08332d0) = -1 ENOENT (No such file or directory)
stat(/sfun/colors.zwc, 0x7fffa0833240) = -1 ENOENT (No such file or directory)
stat(/sfun/colors, 0x7fffa08331b0) = -1 ENOENT (No such file or directory)
access(/sfun/colors, R_OK) = -1 ENOENT (No such file or directory)
Locally built zsh can't find its own function files
software installation;zsh;compiling;autoconf;automake
null
_codereview.26080
What are your thoughts on the following immutable stack implementation? It is implemented using a C# immutable stack as a basis (where garbage collector assistance does not impose using a reference-counted implementation like here).

namespace immutable
{
    template<typename T>
    class stack: public std::enable_shared_from_this<stack<T>>
    {
    public:
        typedef std::shared_ptr<T> headPtr;
        typedef std::shared_ptr<stack<T>> StackPtr;

        template <typename T>
        friend struct stackBuilder;

        static StackPtr empty()
        {
            return std::make_shared<stackBuilder<T>>(nullptr, nullptr);
        }

        static StackPtr Create()
        {
            return empty();
        }

        StackPtr push(const T& head)
        {
            return std::make_shared<stackBuilder<T>>(std::make_shared<T>(head), shared_from_this());
        }

        StackPtr pop()
        {
            return this->tail;
        }

        headPtr peek()
        {
            return this->head;
        }

        bool isEmpty()
        {
            return (this->head == nullptr);
        }

    private:
        stack(headPtr head, StackPtr tail): head(head), tail(tail)
        {
        }

        stack<T>& operator= (const stack<T>& other);

    private:
        headPtr head;
        StackPtr tail;
    };

    template <typename T>
    struct stackBuilder: public stack<T>
    {
        stackBuilder(headPtr head, StackPtr tail): stack(head, tail){}
    };
}

Usage:

auto empty = stack<int>::empty();
auto newStack = empty->push(1);
auto stack = newStack;

while(!stack->isEmpty())
{
    std::cout << *stack->peek() << "\n";
    stack = stack->pop();
}
Immutable C++ stack - thoughts and performance
c++;performance;functional programming;stack;immutability
null
_codereview.32874
I have a PHP script on my server that validates a form and sends the form to a CRM and an email address that I specify. In order to send the form data to my specified email, the script must include a valid email account and the account password. Basically, this script with an email address and password is sitting on my server and I am wondering if this is a security issue.Here's a version of the script for your reference: // Hidden fields$hidden1 = $_POST['LEADCF7'];$hidden2 = $_POST['LEADCF8'];$hidden3 = $_POST['LEADCF9'];$hidden4 = $_POST['LEADCF10'];$hidden5 = $_POST['LEADCF11'];// Form fields$_POST['First_Name'];$_POST['Last_Name'];$Company = $_POST['Company'];$_POST['Email'];$Phone = $_POST['Phone'];$LeadMessage = $_POST['LEADCF1'];// CRM form specific fields$data = array();$data['fieldname']='fieldvalue';$data['fieldname']='';$data['fieldname']='fieldvalue';$data['fieldname']='fieldvalue';$data['fieldname']='fieldvalue';$data['fieldname']='fieldvalue';$data['fieldname']='fieldvalue';$post_str = '';foreach($data as $key=>$value){$post_str .= $key.'='.urlencode($value).'&';}$post_str = substr($post_str, 0, -1);$errors = '';if ($_POST['First_Name'] != ){ $FirstName = filter_var($_POST['First_Name'], FILTER_SANITIZE_STRING); if ($_POST['First_Name'] == ) { $errors .= 'Please enter a valid name.'; }} else { $errors .= 'Please enter your name.';}if ($_POST['Last_Name'] != ) { $LastName = filter_var($_POST['Last_Name'], FILTER_SANITIZE_STRING); if ($_POST['Last_Name'] == ) { $errors .= 'Please enter a valid name.'; } } else { $errors .= 'Please enter your name.'; }if ($_POST['Email'] != ) { $Email = filter_var($_POST['Email'], FILTER_SANITIZE_EMAIL); if (!filter_var($_POST['Email'], FILTER_VALIDATE_EMAIL)) { $errors .= $Email is <strong>NOT</strong> a valid email address.<br/><br/>; } } else { $errors .= 'Please enter your email address.<br/>'; }if ($_POST['Phone'] != ) { $Phone = filter_var($_POST['Phone'], FILTER_SANITIZE_NUMBER_FLOAT); if 
(!filter_var($_POST['Phone'], FILTER_SANITIZE_NUMBER_FLOAT)) { $errors .= $Phone is <strong>NOT</strong> a valid phone number.<br/><br/>; } } else { $errors .= 'Please enter your phone number.<br/>'; } if (!$errors) { // then send the data to Zoho $ch = curl_init(); curl_setopt($ch,CURLOPT_SSL_VERIFYPEER, FALSE); curl_setopt($ch, CURLOPT_HEADER, true); curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13'); curl_setopt($ch, CURLOPT_URL, 'CRM-specific-url-goes-here'); curl_setopt($ch, CURLOPT_POST, TRUE); curl_setopt($ch, CURLOPT_POSTFIELDS, $post_str.&First Name=$FirstName&Last Name=$LastName&Company=$Company&Email=$Email&Phone=$Phone&LEADCF1=$LeadMessage&LEADCF7=$hidden1&LEADCF8=$hidden2&LEADCF9=$hidden3&LEADCF10=$hidden4&LEADCF11=$hidden5); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); $response = curl_exec($ch); // print_r(curl_getinfo($ch)); header(Location:url-to-site-thank-you-page); curl_close($ch);require_once Mail.php;$from_add = [email protected]; // This email will be used by script to send the form data to email address below.$to_add = [email protected]; // This email address will receive the form data.$subject = New Lead from our Site;$body = <<<EMAILBelow is the information for a new lead:>First Name: $FirstName.>Last Name: $LastName.>Email: $Email.>Phone: $Phone.>Company: $Company.>Additional Info: $LeadMessage.EMAIL;$host = mail.emailsrvr.com; $username = [email protected]; $password = account-password; // This is the part I think might be a security issue.$headers = array ('From' => $from_add, 'To' => $to_add, 'Subject' => $subject);$smtp = Mail::factory('smtp', array ('host' => $host, 'auth' => true, 'username' => $username, 'password' => $password));$mail = $smtp->send($to_add, $headers, $body);if (PEAR::isError($mail)) { return false; } else { return true; }} else { echo The following errors were found. 
Please go back to correct them: <br> <div style='color:red;'>.$errors.</div>;}
Email script security
php;security;email
If you are asking about the password being inside the server-side code, that shouldn't be a security issue, because this information doesn't leave the server. I don't see a security issue there. PHP code is never available to the client side, so anything that you write in the code is not available to the client. You might want to set up something like a configuration file and store the password there; then, if someone can see your PHP code, they still won't be able to see the password, and you can use it in different locations in your code and only have to change it once, for better maintainability.
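The keep-credentials-in-a-separate-file advice is language-agnostic; here is a minimal sketch of the same idea in Python (the file contents, section name and keys are invented for illustration, not taken from the question's script):

```python
import configparser
import os
import tempfile

# Hypothetical credentials file, kept outside the web root and out of
# version control; only this demo writes a throwaway copy of it.
config_text = """
[smtp]
host = mail.example.com
username = [email protected]
password = account-password
"""

fd, path = tempfile.mkstemp(suffix=".ini")
with os.fdopen(fd, "w") as f:
    f.write(config_text)

# The application code only ever references the parsed settings, so
# rotating the password means editing one file, not every script.
config = configparser.ConfigParser()
config.read(path)
smtp = config["smtp"]
print(smtp["host"])  # mail.example.com

os.remove(path)
```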
_softwareengineering.302279
Over the years I have seen multiple cases where data is accessed and/or manipulated using a representative value which is internally resolved to the right object/data-field/algorithm. Some examples:

// 1. Provide the same logger globally using its name
auto myLogger = LoggingService.getLogger("myLogger")

// 2. Get person object using a unique id
auto person = PersonCatalog.getPerson("12345")

// 3. Get the left child of a B-Tree node
auto lnode = BTreeNode.getNode("left")

// 4. Make player character do something based off a command
playerCharacter.performAction("move_north")

I understand why one would use this approach for the logger, as you don't have to pass concrete Logger objects around throughout the whole program. I can also see the merit in the second example, as it could provide an interface to a dictionary or a database lookup. But the third and fourth examples, in contrast, smell of design flaws. Does this kind of design have a name, and is it a viable strategy for more than just a couple of special scenarios?
Is accessing data using a representative value a viable strategy?
design
I agree, the latter two appear to be code smells, and the first one also. But I believe the contrast posed in this question has more to do with choosing the appropriate data structures than with specific patterns. I'll go over each one, starting with the one that seems fine.The #2 above looks okay (assuming the magic string is just for brevity) and uses the concept of an identifier. It's less a pattern and more of a basic concept used long before computers existed. In this case, the chosen identifier as a number string.The #1 is a smell because it's unclear what should be passed in to get the right logger. Is myLogger an internal name, or a class to instantiate, or a connection string? I can't tell from this call, and I've seen all of those options used before. If I again assume the magic string is for brevity, it could be alright if the variable name and/or a comment clarifies it.The #3 is a smell because it's using a very fluid and easy-to-get-wrong structure (a string identifier) to represent a very concrete concept. In fact, there are 2 or so correct strings you can pass in and virtually infinite incorrect strings. This is concrete enough that it should ideally be represented by a design-time validated structure like a class. Then when you get a member name wrong, the code won't even compile or will give a syntax error. The structure itself is important enough that the compiler/interpreter shouldn't even run it if incorrect.The #4 is a smell in a similar way to #3 (no validated structure) and #1 (lack of clarity), but less specific than B-tree code in #3. (Assuming you have reasons for not representing actions using methods/functions, f.ex. client/server.) You could represent this with a pattern or two. You could use messaging or even the GOF command pattern, depending on the specific application. 
Having messages as concrete classes would both give your inputs a validated structure and give your callers a well-defined means to communicate with you.In summary, most of these examples highlight the need to choose the appropriate data structure. Using strings to activate code branches is not nearly specific enough for well-known algorithms like b-trees. But it might be okay for a database connection. Choose wisely.
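To make the contrast for #4 concrete, here is a minimal sketch (all names hypothetical, not from the question): raw strings are still the input, but they are resolved through one validated command table, so an unknown action fails loudly at a single choke point instead of silently falling through a chain of string comparisons.

```python
# Hypothetical game-action dispatch: each command is a well-defined
# function, and the table is the single place strings are validated.

def move_north(state):
    return {**state, "y": state["y"] + 1}

def move_south(state):
    return {**state, "y": state["y"] - 1}

COMMANDS = {
    "move_north": move_north,
    "move_south": move_south,
}

def perform_action(state, name):
    try:
        command = COMMANDS[name]
    except KeyError:
        # One choke point: every misspelled action is caught here.
        raise ValueError(f"unknown action: {name!r}")
    return command(state)

state = {"x": 0, "y": 0}
state = perform_action(state, "move_north")
print(state["y"])  # 1
```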
_softwareengineering.253240
I have some design problems with my project. To illustrate my problem, I'll use the following two classes from my project.

public class RAM_UserManagement{
    private Map<int,User> userList;

    public User addUser(User user){//do stuff}
    public User deleteUser(User user){//do stuff}
    public User updateUser(User user){//do stuff}
    public List<User> getAllUser(){//do stuff}
    public User getUserById(int userId){//do stuff}
}

public class RAM_ServiceManagment{
    private Map<int,Serivce> serviceList;

    public Service addService(Service ser){//do stuff}
    public Service deleteService(Service ser){//do stuff}
    public Service updateService(Service ser){//do stuff}
    public List<Service> getAllSerivces(){//do stuff}
    public Service getServiceById(int id){//do stuff}
    public Service getServiceByStatus(ENUM_STATUS status){//do stuff}
    public Service getServiceByUserName(String Name){//do stuff}
}

As you can see from the nature of these classes, they both do the exact same thing, with some extra functionality. I am trying to generalize it by creating an interface. This is what I have in mind:

public interface IStorage<T>{
    public T add(T item);
    public T delete(T item);
    public T update(T item);
    public List<T> getAll();//This is where I am struggling..
}

So the CUD operations in both classes are OK to implement, but the R (read) methods vary between the two classes. In RAM_ServiceManagement I have the extra getAll, getById, getStatus and getByName methods that the other class lacks. How can I generalize this? Or can generalization not be applied here at all? I would really appreciate any suggestions. Thanks
How to generalize classes that have identical functions plus some additional functions
java;design;design patterns;object oriented design;interfaces
null
_unix.59635
In my .emacs, this has appeared, for unknown reasons:

(custom-set-variables
 ; ...
 '(canlock-password "fdd7041be5b..."))

And so on, totalling 40 characters and digits. C-h v offers this:

canlock-password is a variable defined in `canlock.el'.
Its value is "fdd7041be5b..."

Documentation:
Password to use when signing a Cancel-Lock or a Cancel-Key header.

I use rmail and GNUS, and those use headers, but other than that it doesn't remind me of anything, really; it can of course be something unrelated that I've overlooked. I tried to delete it, but it came back.
canlock-password - hashed password (?) mysteriously in .emacs
emacs;usenet
The Cancel-Lock and Cancel-Key headers are a mechanism to protect Usenet messages against cancellation by unauthorized parties. If the news server supports it, and you send a cancel message for a message that contains Cancel-Lock: foo bar, then the server only honors the cancel if the cancel message contains Cancel-key: wibble such that SHA1(wibble) = foo or SHA1(wibble) = bar.

The canlock-password is not the hash of anything; it is generated automatically by GNUS. If you don't want GNUS to change your .emacs, you need to set canlock-password yourself. canlock-password should be a randomly generated string, so you might as well let GNUS pick one. If you post from multiple places, you should use the same password everywhere. Also, don't post this value publicly; you may want to define it in a separate file.

;; in .emacs
(load-library "~/.emacs.d/passwords.el")

;; in passwords.el
(setq canlock-password canlock-password-for-verify canlock-password)
_unix.98558
The title tells everything I guess:I have a MS Intelli Mouse Optical with 5 Buttons and the side buttons are not working.I have searched for a xorg.conf file but I couldn't find it.What can I do to make the mouse work like in Windows 7? (You know the plug & play stuff)
MS Intelli Mouse Optical not working on Mint Linux 15 64-bit
xorg;mouse
null
_codereview.1716
Because I don't like using \ to break long lines (to comply with PEP 8), I tend to do something like this:

message = "There are {} seconds in {} hours."
message = message.format(nhours*3600, nhours)
print(message)

It also makes the code cleaner. Is this an okay way of doing things?
Is it okay to 'abuse' re-assignment?
python
It's more than okay, it is nicely readable, has no problems and involves no abuse whatsoever. I sometimes do the same.
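For comparison, a small runnable sketch of both styles (variable names mine): the backslash continuation the question wants to avoid, and the build-then-rebind pattern from the question. They produce identical results.

```python
nhours = 2

# Style A: backslash continuation (the thing the asker dislikes).
message_a = "There are {} seconds in {} hours." \
    .format(nhours * 3600, nhours)

# Style B: assign the template, then rebind the name to the
# formatted result -- the pattern from the question.
message_b = "There are {} seconds in {} hours."
message_b = message_b.format(nhours * 3600, nhours)

print(message_a == message_b)  # True
print(message_b)               # There are 7200 seconds in 2 hours.
```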
_unix.72483
I want to write an application in Python that you can use with your default keyboard and a specially designed one for the application. I will design it simply by using a small numerical keypad with stickers giving actions to the different keys. Both keyboards will be attached via USB.

However, when these keys are pressed, just their regular signals (numbers, operators and enters) will be sent to Python, and it will not be able to distinguish between the signals from the main keyboard and the special keyboard. Because Python has (as far as I could find) no method for making this distinction, I want to do it in the OS itself. I will be programming it for the Raspberry Pi, so it will be Linux.

So, the main question: how can I remap the keys of a specific keyboard to other keycodes? I thought about using the F-keys, which I won't use for other purposes, or just some characters that are not present on any keyboard (supposing that there are such). Is this possible in Linux/Unix? And if so, how can I do it?
How to distinguish input from different keyboards?
linux;usb;keyboard
If you're using Linux, the best way to distinguish between input devices is to use the Linux Event Interface. After a device's hardware-specific input is decoded, it's converted to an intermediate Linux-specific event structure and made available by reading one or more of the character devices under /dev/input/. This is completely independent of the programming language you use, by the way.Each hardware device gets its own /dev/input/eventX device, and there are also aggregates (e.g. /dev/input/mice which represents the motion of all mice in the system). Your system may also have /dev/input/by-path and /dev/input/by-id.There's an ioctl called EVIOCGNAME which returns the name of the device as a humanly-readable string, or you can use something like /dev/input/by-id/usb-Logitech_USB_Gaming_Mouse-mouse.You open the device, and every time an event arrives from the input hardware, you'll get a packet of data. If you can read C, you can study the file /usr/include/linux/input.h which shows exactly how this stuff works. If you don't, you could read this question which provides all the information you need.The good thing about the event interface is that you just find out what device you need, and you can read input from that input device only, ignoring all others. You'll also get notifications about keys, buttons and controls you normally wouldn't by just reading the cooked character stream from a terminal: even dead keys like Shift, etc.The bad thing is that the event interface doesn't return cooked characters, it just uses numeric codes for keys (the codes corresponding to each key are found in the aforementioned header file but also in the Python source of event.py. If your input device has unusual keys/buttons, you may need to experiment a bit till you get the right numbers.
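As a hedged illustration of the event packets the answer describes, here is a Python sketch that decodes one struct input_event. The struct layout shown is the one for 64-bit Linux (on 32-bit systems the timeval fields are narrower, so the format string differs), the constants are from linux/input.h, and the device path mentioned below is hypothetical; the packet here is simulated rather than read from hardware.

```python
import struct

# struct input_event on 64-bit Linux:
#   struct timeval time; __u16 type; __u16 code; __s32 value;
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_KEY = 0x01   # event type for key/button presses (linux/input.h)
KEY_A = 30      # key code for the 'A' key (linux/input.h)

def decode_event(data):
    """Unpack one raw event as read from a /dev/input/eventX device."""
    sec, usec, ev_type, code, value = struct.unpack(EVENT_FORMAT, data)
    return {"time": sec + usec / 1e6, "type": ev_type,
            "code": code, "value": value}

# Simulated packet: key 'A' pressed (value 1) at t = 100.5 s.
packet = struct.pack(EVENT_FORMAT, 100, 500000, EV_KEY, KEY_A, 1)
event = decode_event(packet)
print(event["code"], event["value"])  # 30 1
```

In real use you would open the device for the special keyboard only, e.g. open("/dev/input/event3", "rb") (path hypothetical, and reading it usually needs appropriate permissions), and read EVENT_SIZE bytes per event; events from the main keyboard never appear on that file, which is exactly the per-device separation the asker wants.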
_unix.258938
I'm new here. I have a question about changing the permissions of a file. We have user1, user2 and user3, and we have a file xx. The xx file is owned by root and group1. I have to give read and write privileges on it to user1 (who is not a member of group1). How can I do that?
Change Permissions for a Specific User
linux;chmod
null
_codereview.93441
This code basically makes the dot follow a certain point on the screen. It consumes a significant amount of computing time in my game, so I'd like it to be well optimized. I wrote this code after quickly reading some pages in a high school mathematics book, so it's unlikely to be a computationally efficient solution.

The mj_ prefix is just what I use instead of a namespace in my C code.

Suggest some good possible optimizations.

struct mj_dot {
    double x;
    double y;
    double th;
};

void mj_dot_rotate(struct mj_dot *, double, double);
void mj_dot_move(struct mj_dot *, double, double, double, double);

static double mj_distance(double x, double y, double x2, double y2)
{
    return sqrt(pow(x - x2, 2) + pow(y - y2, 2));
}

void mj_dot_rotate(struct mj_dot *this, double x, double y)
{
    double dx = x - this->x;
    double dy = y - this->y;

    this->th = atan(dy / dx);

    if (isnan(this->th)) {
        this->th = 0;
        return;
    }

    if (dx < 0) {
        this->th += MJ_MATH_PI;
    }
}

void mj_dot_move(struct mj_dot *this, double x, double y, double speed, double seconds)
{
    double d = speed * seconds;

    if (d >= mj_distance(this->x, this->y, x, y)) {
        this->x = x;
        this->y = y;
    } else {
        mj_dot_rotate(this, x, y);
        this->x += d * cos(this->th);
        this->y += d * sin(this->th);
    }
}
Algorithm to follow a point
performance;c;game;mathematics
Simplification

There's actually no need for the function mj_dot_rotate() with its complicated trigonometry. That function does a lot of work to find an angle which isn't ever needed. In fact, it would be most efficient to inline the code for mj_distance() as well, since you can reuse parts of it in mj_dot_move(). The key thing to realize is that you can determine the amounts to move in each direction purely as a function of dx, dy, and distance. Like this:

void mj_dot_move(struct mj_dot *this, double x, double y, double speed, double seconds)
{
    double d = speed * seconds;
    double dx = x - this->x;
    double dy = y - this->y;
    double distance = sqrt(dx*dx + dy*dy);

    if (d >= distance) {
        this->x = x;
        this->y = y;
    } else {
        double fraction = d / distance;
        this->x += dx * fraction;
        this->y += dy * fraction;
    }
}

Variable naming

I agree with the other reviewer who said not to use this as a variable name. I thought for a second that you were using C++ and I started to look for a class definition.
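The same scale-the-direction-vector idea can be checked numerically with a short Python transcription (function and variable names mine, mirroring the C version):

```python
import math

def dot_move(pos, target, speed, seconds):
    """Move pos toward target by speed*seconds, without any trig."""
    x, y = pos
    tx, ty = target
    d = speed * seconds
    dx, dy = tx - x, ty - y
    distance = math.hypot(dx, dy)
    if d >= distance:
        return target              # close enough: snap to the target
    fraction = d / distance        # scale the direction vector
    return (x + dx * fraction, y + dy * fraction)

# One step of length 1 along a 3-4-5 triangle moves (0.6, 0.8).
step = dot_move((0.0, 0.0), (3.0, 4.0), 1.0, 1.0)
print(tuple(round(c, 6) for c in step))  # (0.6, 0.8)
```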
_softwareengineering.205382
If I include an open source library in my project that is licensed under the MIT license, but contains BSD-licensed code that requires attribution (correctly attributed inside the project), is it my responsibility to attribute it again if I decide to use that library? Normally this is not a problem (I would just credit everyone regardless of the license) but on a mobile platform there is not a lot of real estate or efficient ways to show / bundle these licenses.
What are the requirements for an open-source license inside an open-source license?
licensing;mit license;bsd license
BSD does not always require one to credit the author in the GUI of their application. As a matter of fact, only the 4-clause license (the original BSD License) requires attribution outside the source code and binary code of the application. It states as follows:

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must display the following acknowledgement: This product includes software developed by the <organization>.
4. Neither the name of the <organization> nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ''AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

The revised 3-clause licence has removed clause #3. The later (FreeBSD) 2-clause licence has also removed clause #4. That means that if the library uses any BSD licence, you do need to attribute it somewhere in your GUI. In this case, perhaps adding a colophon page to your application might be a solution.
_unix.197546
On my campus, there are 10+ access points available. Some of them are WPA2-PSK enabled, a few are WEP, and the rest are 802.1x-secured access points. I want to connect only to those access points that use 802.1x. Is there any way to differentiate such access points, when iwlist scan gives you a combination of all available access points of all types?
Search for 802.1x-enabled access points
wifi;802.1x
null
_softwareengineering.290258
We are using some open source code licensed under the Apache License 2.0. From what I understand, we cannot endorse the company that made the code. We are using it to develop a free app with micro-purchases, but the code isn't actually located in the app; it's located in the cloud on our processing servers. Our app doesn't actually use any of the code itself; it uses our AWS servers for processing, and they use the code. It sends images to the AWS API, which then returns them after we have used the open source code. Now, I understand we have to retain the license used, so we plan on including it in our terms and conditions page and making users accept it upon opening the app for the first time. My question is: can we mention in our app description that we are using the code? Here is an example; can we do this, or is this endorsement?

Our app uses a modified version of [original OS company, e.g. Twitter's] [open source package name, e.g. Mask API] to digitally transform the code into [whatever the code + modifications do].
We are publishing an app using open-source Apache-licensed software and need some help understanding the provisions
licensing;open source;apache license
The Apache website's license FAQ has a very good non-lawyer summary of what the license actually means. The part that's relevant to your question is this:

It forbids you to:

- redistribute any piece of Apache-originated software without proper attribution;
- use any marks owned by The Apache Software Foundation in any way that might state or imply that the Foundation endorses your distribution;
- use any marks owned by The Apache Software Foundation in any way that might state or imply that you created the Apache software in question.

Your example statement does not appear to be making any such implication, so I wouldn't worry about it. In fact, you are required to have proper attribution, in addition to reproducing the license, so you not only can, but must include a statement like that with your application.

There's also an item on their FAQ which clarifies what does and doesn't imply endorsement:

For example, it would be acceptable to use a name like 'SuperWonderServer powered by Apache', but never a name like 'Apache SuperWonderServer'. This is similar to the distinction between a product named 'Microsoft Burp' and 'Burp for Microsoft Windows'.

Just to be thorough, I believe the relevant part of the actual license is section 6:

Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

Implying that the entity known as the Apache Software Foundation endorses your derivative work is clearly not required when describing the origin of the work.
_unix.21725
I understand that on Oracle Solaris the zoneadm list command will easily show all the available zones. But if I am logged into a non-global zone, there is no easy way to get information about the global zone. I see the arp command can be of some help because it will return the NIC MAC address; then with that MAC address I can arp again to get the machine names associated with it. This process sounds rather intricate to me. Is there a better way to get that info?
Find out the global zone name once logged into a non-global zone
solaris;opensolaris;solaris zones
There is no supported way and this is by design. Non global zones are isolated. The arp trick isn't always reliable and won't work anyway with exclusive IP zones. Should you want to have this information available, you can implement your own method, for example writing a file like /etc/globalzone as of course the global zone can access every zone file system.
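As a sketch of that do-it-yourself method (my own illustration, not part of the original answer; it assumes a Solaris global zone where zoneadm is available), the global-zone administrator could push the global zone's nodename into each running zone:

```shell
# Run in the GLOBAL zone: write its nodename into /etc/globalzone
# inside every running non-global zone's filesystem.
if command -v zoneadm >/dev/null 2>&1; then
  gzname=$(uname -n)
  for z in $(zoneadm list | grep -v '^global$'); do
    # field 4 of `zoneadm -z <zone> list -p` is the zonepath
    zpath=$(zoneadm -z "$z" list -p | cut -d: -f4)
    printf '%s\n' "$gzname" > "$zpath/root/etc/globalzone"
  done
else
  echo "zoneadm not found; this sketch only applies on a Solaris global zone"
fi
```

A zone user can then simply cat /etc/globalzone. Remember to re-run this (e.g. from cron) if the global zone is ever renamed.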
_unix.276139
EDIT: the bug disappears by version 4.3.8.

I am using GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu). I believe I have found a bug. I would like to know if perhaps I'm missing something, or if my bug is version/platform specific.

Bash's history functions will utilize the HISTTIMEFORMAT variable, if defined. So if

HISTTIMEFORMAT=%s 

then history produces:

60  1460542926 history

Additionally, history -w results in the history file containing:

#1460543065
cat $HISTFILE
#1460543082
HISTTIMEFORMAT=%s 
#1460543084
history -w

However, if the variable is defined in this way:

: ${HISTTIMEFORMAT:=%s }

then the output from history is correct, but history -w fails to write the timestamp headers to $HISTFILE:

unset HISTTIMEFORMAT
: ${HISTTIMEFORMAT:=%s }
history -w

If I then simply do export HISTTIMEFORMAT or declare HISTTIMEFORMAT, the problem goes away. However, if the variable is instead auto-exported via set -a, it doesn't work. I could not reproduce this kind of result with a different variable, PS2.

From version 4.3.8 running on a Mint 17 / Ubuntu system:

Method 1
$ bash --version
GNU bash, version 4.3.8(1)-release (x86_64-pc-linux-gnu)
$ bash --norc
bash-4.3$ HISTFILE=/tmp/histfile.$$
bash-4.3$ history -c
bash-4.3$ HISTTIMEFORMAT=%s 
bash-4.3$ history
 1  1460642608 HISTTIMEFORMAT=%s 
 2  1460642610 history
bash-4.3$ history -w
bash-4.3$ cat $HISTFILE
#1460642608
HISTTIMEFORMAT=%s 
#1460642610
history
#1460642612
history -w
bash-4.3$

Method 2
$ bash --norc
bash-4.3$ HISTFILE=/tmp/histfile.$$
bash-4.3$ history -c
bash-4.3$ : ${HISTTIMEFORMAT:=%s }
bash-4.3$ history
 1  1460642758 : ${HISTTIMEFORMAT:=%s }
 2  1460642763 history
bash-4.3$ history -w
bash-4.3$ cat $HISTFILE
#1460642758
: ${HISTTIMEFORMAT:=%s }
#1460642763
history
#1460642769
history -w
bash-4.3$

From RHEL6 and RHEL7 systems, including GNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu) and version 4.1.2(1)-release (x86_64-redhat-linux-gnu):

Method 1
~$ bash --version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
~$ bash --norc
bash-4.1$ HISTFILE=/tmp/histfile.$$
bash-4.1$ history -c
bash-4.1$ HISTTIMEFORMAT=%s 
bash-4.1$ history -w
bash-4.1$ cat $HISTFILE
#1460643571
HISTTIMEFORMAT=%s 
#1460643573
history -w
bash-4.1$ exit

Method 2
~$ bash --norc
bash-4.1$ HISTFILE=/tmp/histfile.$$
bash-4.1$ history -c
bash-4.1$ : ${HISTTIMEFORMAT:=%s }
bash-4.1$ history -w
bash-4.1$ cat $HISTFILE
: ${HISTTIMEFORMAT:=%s }
history -w
bash-4.1$ history
 3  1460643602 : ${HISTTIMEFORMAT:=%s }
 4  1460643606 history -w
 5  1460643608 cat $HISTFILE
 6  1460643719 history
Have I found a bug in Bash?
bash;bugs
Further research indicates it was indeed a bug, fixed sometime between the 4.2.x and 4.3.6 releases.
_unix.39530
I'm using Fedora 16, and a while back I successfully compiled Chromium from source (the first time I compiled anything from source) following these instructions: http://code.google.com/p/chromium/wiki/LinuxBuildInstructions

At the end of the process, everything worked. However, after multiple attempts, I have had no luck compiling a newer version of the program; I am stuck with Version 20.0.1100.0 custom (132047). Before rebuilding, I follow the steps for syncing my sources, but after all of those steps, build 132047 is still what I have. Can someone please help me build and use newer builds? I can't seem to find anything on the internet. Thanks!
Build Chromium from source
linux;fedora;compiling;source;chrome
_unix.14457
I am looking for the best way to create a dual boot on my Red Hat system.This is a Red Hat 5.3 System for scientific use, and we need to run some Windows software on it for visualization purposes.What is the easiest way to implement dual boot if Linux is already installed; formatting the hard drive is not an option.My thoughts were:Partition the HD and give Windows a 200GB logical driveInstall Windows 7I am probably missing the part where I configure the boot loader.What is the best way to do this?As a side note, is there a way to virtualize (VMWARE like) Windows 7 on the Red Hat machine and if so, will that cause a BIG loss in performance? And if so, how much do you think it will be?
Red Hat and Windows 7 dual boot system
rhel;dual boot;windows;boot loader;vmware
After installing Windows, you will need to boot off a Linux recovery disk to re-install and configure GRUB for booting. Otherwise, you could look into EasyBCD, which edits the Windows boot system to let you boot Linux from Windows's boot loader instead.

As far as virtualization is concerned, there are several options, depending on the CPU in the machine and how serious you are about it. You could look into VMware, VirtualBox, Xen or KVM, which can all run Windows under Linux. The last two require VT-x/AMD-V CPU extension support for Windows VMs, and all four will run more smoothly if the CPU has the virtualization extensions.

The only real loss under a virtual machine is graphics performance, because none of them are capable of properly emulating a graphics card in real time (though Xen has made some progress on GPU passthrough). So, it depends... :)
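For the boot loader step, note that a system this old (RHEL 5.x) uses GRUB Legacy, so the usual approach is a chainloader entry in /boot/grub/menu.lst. This is only a sketch; it assumes Windows ended up on the second partition of the first disk, so adjust (hd0,1) to match your actual layout:

```
title Windows 7
    rootnoverify (hd0,1)
    makeactive
    chainloader +1
```

With that entry in place, GRUB hands control to the Windows boot sector rather than trying to load Windows itself.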
_unix.88642
I'm following through a tutorial and it mentions to run this command:sudo chmod 700 !$I'm not familiar with !$. What does it mean?
What does !$ mean?
bash;command history
Basically, it's the last argument to the previous command.

!$ is the end of the previous command. Consider the following example: we start by looking for a word in a file:

grep -i joe /some/long/directory/structure/user-lists/list-15

If joe is in that userlist, we want to remove him from it. We can either fire up vi with that long directory tree as the argument, or simply use vi !$, which bash expands to:

vi /some/long/directory/structure/user-lists/list-15

(source; a handy guide, by the way)

It's worth noting the distinction between this !$ token and the special shell variable $_. Indeed, both expand to the last argument of the previous command. However, !$ is expanded during history expansion, while $_ is expanded during parameter expansion. One important consequence of this is that, when you use !$, the expanded command is saved in your history.

For example, consider the keystrokes

echo Foo Enter echo !$ Jar Enter Up Enter; and
echo Foo Enter echo $_ Jar Enter Up Enter.

(The only characters changed are the !$ and $_ in the middle.)

In the former, when you press Up, the command line reads echo Foo Jar, so the last line written to stdout is Foo Jar. In the latter, when you press Up, the command line reads echo $_ Jar, but now $_ has a different value than it did previously; indeed, $_ is now Jar, so the last line written to stdout is Jar Jar.

Another consequence is that _ can be used in other parameter expansions; for example, the sequence of commands

printf '%s ' isomorphism
printf '%s\n' ${_%morphism}sceles

prints isomorphism isosceles. But there's no analogous ${!$%morphism} expansion.

For more information about the phases of expansion in Bash, see the EXPANSION section of man 1 bash (this is called Shell Expansions in the online edition). The HISTORY EXPANSION section is separate.
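The $_ half of that distinction (though not !$, which requires interactive history expansion) can even be demonstrated non-interactively; this is my own sketch, not part of the original answer:

```shell
# $_ holds the last argument of the previous simple command,
# even in a non-interactive bash.
bash -c '
echo Foo
echo "$_" Jar    # $_ expanded to Foo here
'
# prints "Foo" then "Foo Jar"
```

Running the same two commands interactively and then pressing Up shows the difference: the history entry still contains the literal $_, whereas a !$ would have been recorded already expanded.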
_codereview.154748
Consider the class AB below that is to be used as a simple customized list of A objects for lookup operations. Can this code be improved to avoid instantiating AB with an empty list [] (i.e., perhaps modify __add__ in some way)?

class A():
    def __init__(self, arg):
        self.arg = arg

class AB():
    def __init__(self, list):
        self.list = list

    def __add__(self, other):
        return AB(self.list + [other])

ab = AB([])
ab += A(1)
ab += A(2)
Python custom list class initialization
python
Sure, you can play with the default argument value:

class AB:
    def __init__(self, data=None):
        self.data = data if data is not None else []

    def __add__(self, other):
        return AB(self.data + [other])

Other notes:
- list is a bad variable name, as it shadows the built-in list type
- remove the redundant parentheses after the class name

Demo:

In [1]: ab = AB()
In [2]: ab += A(1)
In [3]: ab += A(2)
In [4]: print(ab.data)
[<__main__.A instance at 0x10afb14d0>, <__main__.A instance at 0x10afa0998>]
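Another option worth considering (my own suggestion, beyond the original answer) is to subclass collections.UserList, which provides +, +=, indexing and iteration for free, so the empty-list plumbing disappears entirely:

```python
from collections import UserList

class A:
    def __init__(self, arg):
        self.arg = arg

class AB(UserList):
    """A customized list of A objects; UserList supplies the
    sequence machinery, so no __init__/__add__ boilerplate."""
    pass

ab = AB()
ab += [A(1)]       # list-style concatenation
ab.append(A(2))    # or a plain append
print([item.arg for item in ab])  # -> [1, 2]
```

This also keeps ab an AB instance after +=, so any lookup helpers you later add to AB remain available.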
_unix.156102
I have 3 HDDs and 1 SSD, and I have successfully attached all drives to bcache.

pavs@VAS:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       132G   35G   90G  28% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            3.9G  8.0K  3.9G   1% /dev
tmpfs           786M  2.3M  784M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.9G  152K  3.9G   1% /run/shm
none            100M   52K  100M   1% /run/user
/dev/bcache1    2.7T  2.1T  508G  81% /var/www/html/directlink/FTP1
/dev/bcache2    1.8T  614G  1.2T  36% /var/www/html/directlink/FTP2
/dev/bcache0    1.8T  188G  1.6T  11% /var/www/html/directlink/FTP3
/dev/sdf1       367G  284G   65G  82% /media/pavs/e93284df-e52e-4a5d-a9e1-323a388b332f

The drives being cached are not OS drives. The three HDDs hold lots of big files: sizes average 600 MB to 2 GB, the smallest being 500 MB and the largest 10 GB. The files are downloaded constantly through the Apache web server, but I am seeing only marginal or no IO speedup, even on frequently accessed files. I don't know what kind of caching policy bcache uses, or whether it can be tweaked for maximum cache performance. Ideally I would like frequently accessed files to be cached for at least a day, until there are no more requests for that file. I don't know if that level of granular cache tweaking is possible. I care about read performance only and would like to see maximum utilization of the SSD.

EDIT: According to this, bcache discourages sequential caching, which, if I understand correctly, is a problem for me, as most of my files are large sequential files.

The default sequential cutoff was 4.0M, which might have prevented the files from being cached (I don't know), so I disabled the cutoff by doing this for each backing drive:

echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

Now I'll wait and see if it actually improves performance. According to the bcache stats, all three drives are being cached:

bcache0
pavs@VAS:~$ tail /sys/block/bcache0/bcache/stats_total/*
==> /sys/block/bcache0/bcache/stats_total/bypassed <==
461G
==> /sys/block/bcache0/bcache/stats_total/cache_bypass_hits <==
9565207
==> /sys/block/bcache0/bcache/stats_total/cache_bypass_misses <==
0
==> /sys/block/bcache0/bcache/stats_total/cache_hit_ratio <==
63
==> /sys/block/bcache0/bcache/stats_total/cache_hits <==
3003399
==> /sys/block/bcache0/bcache/stats_total/cache_miss_collisions <==
659
==> /sys/block/bcache0/bcache/stats_total/cache_misses <==
1698297
==> /sys/block/bcache0/bcache/stats_total/cache_readaheads <==
0

bcache1
pavs@VAS:~$ tail /sys/block/bcache1/bcache/stats_total/*
==> /sys/block/bcache1/bcache/stats_total/bypassed <==
396G
==> /sys/block/bcache1/bcache/stats_total/cache_bypass_hits <==
9466833
==> /sys/block/bcache1/bcache/stats_total/cache_bypass_misses <==
0
==> /sys/block/bcache1/bcache/stats_total/cache_hit_ratio <==
24
==> /sys/block/bcache1/bcache/stats_total/cache_hits <==
749032
==> /sys/block/bcache1/bcache/stats_total/cache_miss_collisions <==
624
==> /sys/block/bcache1/bcache/stats_total/cache_misses <==
2358913
==> /sys/block/bcache1/bcache/stats_total/cache_readaheads <==
0

bcache2
pavs@VAS:~$ tail /sys/block/bcache2/bcache/stats_total/*
==> /sys/block/bcache2/bcache/stats_total/bypassed <==
480G
==> /sys/block/bcache2/bcache/stats_total/cache_bypass_hits <==
9202709
==> /sys/block/bcache2/bcache/stats_total/cache_bypass_misses <==
0
==> /sys/block/bcache2/bcache/stats_total/cache_hit_ratio <==
58
==> /sys/block/bcache2/bcache/stats_total/cache_hits <==
4821439
==> /sys/block/bcache2/bcache/stats_total/cache_miss_collisions <==
1098
==> /sys/block/bcache2/bcache/stats_total/cache_misses <==
3392411
==> /sys/block/bcache2/bcache/stats_total/cache_readaheads <==
0
Optimizing bcache
ssd;bcache
I had the same problem: my disk IO was still bypassing bcache. After setting congested_read_threshold_us and congested_write_threshold_us as described in the bcache documentation, my problem was solved.

Traffic's still going to the spindle/still getting cache misses

In the real world, SSDs don't always keep up with disks - particularly with slower SSDs, many disks being cached by one SSD, or mostly sequential IO. So you want to avoid being bottlenecked by the SSD and having it slow everything down. To avoid that bcache tracks latency to the cache device, and gradually throttles traffic if the latency exceeds a threshold (it does this by cranking down the sequential bypass). You can disable this if you need to by setting the thresholds to 0:

# echo 0 > /sys/fs/bcache/<cache set>/congested_read_threshold_us
# echo 0 > /sys/fs/bcache/<cache set>/congested_write_threshold_us

The default is 2000 us (2 milliseconds) for reads, and 20000 for writes.

All disk IO is sent to my SSD (sde) now:

Device:   rrqm/s  wrqm/s    r/s     w/s    rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sdb         0.00    0.00   0.00    0.30     0.00      0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
sdd         0.00    0.00   0.10    0.30     0.80      0.00     4.00     0.00   3.00   12.00    0.00   3.00   0.12
sdc         0.00    0.00   2.20    0.30    26.00      0.00    20.80     0.00   1.76    2.00    0.00   1.76   0.44
sda         0.00    0.00   0.20    0.20     0.80      0.00     4.00     0.01   8.00   16.00    0.00  13.00   0.52
sde         0.00  293.20  81.70  232.70  1129.20  58220.00   377.54     6.62  21.05   27.69   18.71   3.18 100.00
md1         0.00    0.00   2.50    0.30    27.60      0.00    19.71     0.00   0.00    0.00    0.00   0.00   0.00
md0         0.00    0.00   0.00    0.00     0.00      0.00     0.00     0.00   0.00    0.00    0.00   0.00   0.00
bcache0     0.00    0.00  83.00  402.40  1156.80  28994.80   124.23    15.36  31.70   27.02   32.67   2.06  99.92
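Since the path contains each cache set's UUID, a small loop can apply both settings to every registered cache set; this is my own sketch (run as root on the bcache host), not part of the original answer:

```shell
# Disable bcache's congestion throttling on every cache set under $1
# (defaults to /sys/fs/bcache; cache sets are UUID-named directories).
disable_bcache_throttle() {
  for cs in "${1:-/sys/fs/bcache}"/*-*; do
    [ -d "$cs" ] || continue   # glob didn't match: no bcache here
    echo 0 > "$cs/congested_read_threshold_us"
    echo 0 > "$cs/congested_write_threshold_us"
  done
}

disable_bcache_throttle
```

Note that these sysfs values do not persist across reboots, so you would re-apply them from an init script or rc.local.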
_cs.10152
Suppose I have an admissible and consistent heuristic. Is it true that when I expand a node, I have a guarantee that the path I found to this node is optimal?

Look at this pseudocode from Wikipedia:

function A*(start, goal)
    closedset := the empty set    // The set of nodes already evaluated.
    openset := {start}            // The set of tentative nodes to be evaluated, initially containing the start node
    came_from := the empty map    // The map of navigated nodes.
    g_score[start] := 0           // Cost from start along best known path.
    // Estimated total cost from start to goal through y.
    f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal)

    while openset is not empty
        current := the node in openset having the lowest f_score[] value
        if current = goal
            return reconstruct_path(came_from, goal)
        remove current from openset
        add current to closedset
        for each neighbor in neighbor_nodes(current)
            tentative_g_score := g_score[current] + dist_between(current, neighbor)
            if neighbor in closedset
                if tentative_g_score >= g_score[neighbor]
                    continue
            if neighbor not in openset or tentative_g_score < g_score[neighbor]
                came_from[neighbor] := current
                g_score[neighbor] := tentative_g_score
                f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal)
                if neighbor not in openset
                    add neighbor to openset

    return failure

I suppose it should be true, because of this:

if current = goal
    return reconstruct_path(came_from, goal)

If it weren't true, then this test would not guarantee that the solution is optimal, right? What I don't get, and the reason I am asking this question, is this:

if neighbor in closedset
    if tentative_g_score >= g_score[neighbor]
        continue

If the neighbor is in the closed list, it means it has already been expanded. Why are they testing the scores, then? Why would the next condition not work?

if neighbor in closedset
    continue
A* optimality of the expanded node
algorithms;algorithm analysis;search algorithms;correctness proof
Please, let me contribute to this question with some observations. Most of them refer to the reply by Shaull.

First and foremost, I found the answer provided to the question pointed to by Shaull a little bit strange. The property of monotonicity is defined there in a strange way, and indeed I posted a question about it. Note I am not saying it is wrong, just that it is strange to me and not very common; frankly speaking, I am a little bit suspicious that it might be wrong.

In the original question I see various questions, so let's go by steps.

First, consistency is usually defined as follows [Pearl, 1984]: a heuristic $h(n)$ is consistent if and only if $h(n) \leq c(n,n') + h(n')$ for every pair of nodes $n$ and $n'$ in the state space, where $c(n,n')$ stands for the cost of the shortest path joining $n$ and $n'$. I think it is clear that admissibility (i.e., $h(n) \leq h^*(n)$, where $h^*(n)$ is the true optimal cost of node $n$) is immediately implied by consistency if $h(t)=0$, where $t$ is the goal node. For this and other definitions see Common Misconceptions Concerning Heuristic Search.

So far, there is no need to say "Suppose I have an admissible and consistent heuristic". It suffices to say "Suppose I have a consistent heuristic".

Now, regarding your first question: if you have a consistent heuristic, there is no need to reopen nodes or, equivalently, every time a node $n$ is expanded by A$^*$, the path discovered to it is necessarily optimal.

Proof: Let us assume this is not true, and that after expanding node $n$ through a path $\pi=\langle s, n_1, n_2, \ldots, n_k, n\rangle$ there was another path $\pi'=\langle s, n'_1, n'_2, \ldots, n'_l, n\rangle$ such that $g_{\pi}(n) > g_{\pi'}(n)$. Note that $f(n)=g(n)+h(n)$, so from our assumption $f_{\pi}(n) > f_{\pi'}(n)$, since $h(n)$ takes the same value regardless of the path followed to node $n$. But this is impossible, since $n$ was expanded as a successor of the path $\pi$, so that $f_{\pi}(n) \leq f_{\pi'}(n)$.
Of course, you might say that the error was committed somewhere along the path $\pi'$ because a node $n'_i$ had such a large value of $h(n'_i)$ that it was delayed in the OPEN list, but this is impossible by the definition of consistency.

From here it should be obvious that the condition:

if current = goal
    return reconstruct_path(came_from, goal)

works smoothly, since once the goal is about to be expanded, the path discovered to it is guaranteed to be optimal.

Regarding your second question, the reason for your post, Shaull already answered it in the footnotes of his reply: if the heuristic function is consistent, then nodes are never reopened, and your condition:

if neighbor in closedset
    continue

is enough. However, in practice it is simply replaced by a different check: when expanding a node, its children are only added to OPEN if they were never expanded (not in closedset) and are not already in OPEN (not in openset). With the appropriate structures for managing the open and closed lists this can be done very quickly, and this trick makes that explicit statement unnecessary. But again, bear in mind what Shaull says: the pseudocode you show in your post is intended to work also with inconsistent heuristic functions. What you are asking about are, indeed, simplifications that result from the assumption of consistency.

Hope this helps,

[Pearl, 1984] Judea Pearl. Heuristics. Addison-Wesley. 1984
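To make the simplification concrete, here is a minimal A* sketch (my own illustration, not from the original answer) that relies on consistency: closed nodes are skipped with a plain continue and are never reopened. It is demonstrated on a 4-connected grid with unit step costs and the Manhattan-distance heuristic, which is consistent for that cost model.

```python
import heapq

def a_star(start, goal, neighbors, cost, h):
    """A* assuming a consistent heuristic h: nodes in `closed`
    are never reopened, so a plain skip suffices."""
    open_heap = [(h(start), start)]
    g = {start: 0}
    came_from = {}
    closed = set()
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return list(reversed(path)), g[goal]
        if current in closed:      # stale heap entry
            continue
        closed.add(current)
        for nb in neighbors(current):
            if nb in closed:       # safe: consistency => g[nb] already optimal
                continue
            tentative = g[current] + cost(current, nb)
            if tentative < g.get(nb, float('inf')):
                g[nb] = tentative
                came_from[nb] = current
                heapq.heappush(open_heap, (tentative + h(nb), nb))
    return None, float('inf')

# 4-connected grid, unit step costs, Manhattan heuristic (consistent here).
def neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def cost(a, b):
    return 1

goal = (3, 2)
def h(p):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

path, c = a_star((0, 0), goal, neighbors, cost, h)
print(c)  # -> 5
```

With an inconsistent (merely admissible) heuristic, this version could return a suboptimal path; that is exactly the case the extra score test in the Wikipedia pseudocode is there to handle.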