Columns: id, question, title, tags, accepted_answer
_webapps.18189
With the new FB sidebar chat application, I am getting a subset of my friends always visible on the sidebar, regardless of their online status. Others appear in the "More friends" section when they are online. Any idea why this is happening? Shouldn't the online friends be displayed at the top of the list, followed by the offline friends? Or display only the online friends? (Old FB chat had this feature IIRC.) ^^ This is the main part of the sidebar. Note that very few friends are online. To view the others I have to scroll to the "More Online Friends" section. All the online friends are visible in this list.
Facebook chat -- offline friends shown above online friends
facebook;facebook chat
null
_softwareengineering.117758
I've agreed to build an iPhone app for a client that was referred to me. I'm a full-time software engineer by day, but this is only my 2nd iPhone app and I agreed to do it without upfront funding, as it would give me a project to further learn the API as well as increase my portfolio of iPhone apps. The client suggested as payment that we split the profits 50/50. Normally I would not think this was fair, but in a situation like this I was OK with it since my family's life does not depend on it. As we are nearing the end of development on the app, I asked the client if they would prefer their personal name be put on the project, or if they minded that it was my name (since I have the developer license). Naturally, they requested their name. I've not told them all that that entails, because I would relinquish management of the app and profit divisions to them. My question to this community, and pardon if this is not the proper place, is: am I going about this the right way? Should I request a partnership be set up? Should I insist my developer name goes on it? What would you do? Additionally, I'm going off of his word that this is a niche product with high demand (it's in the medical field), so we will be able to charge a bit more for it.
How should I handle the publishing of an iPhone app under someone else's name?
iphone;business
I had exactly the same situation. You are the developer and, more important, a 50% partner. There is no reason in the world to register the app under their name. You can open a new user in iTunes Connect with permissions to view financial reports on the app and a different bank account. Request a partnership be set up before you upload the app. Always trust people, and if your partnership is important to you, set up a contract that will prevent conflicts when the money starts (and I wish for you that it will) to fly in. Good luck.
_unix.58161
I can't figure it out. As I read in the documentation, {} doesn't create a subshell. However, it looks like sometimes it does:

$ unset T; echo T_bfr=$T; echo $$; { echo $$; export T=1; }; echo T_afr=$T
T_bfr=
4874
4874
T_afr=1

$ unset T; echo T_bfr=$T; echo $$; { echo $$; export T=1 ; }|cat; echo T_afr=$T
T_bfr=
4874
4874
T_afr=

What is the difference? Why is T missing in the second case?
curly braces and subshell
bash;subshell
The second case is different because each part of a pipeline runs in a subshell: the { echo $$; export T=1; } group on the left of the pipe is executed in a subshell, so export T=1 only affects that subshell and T is still unset in the parent shell when echo T_afr=$T runs.
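A minimal shell sketch of the same difference (the variable name just mirrors the question; the behaviour is standard bash):

$ unset T
$ { export T=1; }          # brace group runs in the current shell
$ echo T=$T
T=1

$ unset T
$ { export T=1; } | cat    # as part of a pipeline, the group runs in a subshell
$ echo T=$T
T=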
_unix.260803
I have file1.zip, file2.zip, etc. in a folder called folder1. I have similar folders folder2, folder3, etc. I need to add the folder name to each .zip file inside these folders. So, files inside folder1 will be folder1_file1.zip, folder1_file2.zip, folder1_file3.zip, etc. Similarly, folder2 files will be folder2_file1.zip, folder2_file2.zip, folder2_file3.zip, etc. Thanks a lot!
Adding folder name to .zip files inside it
linux;shell script;rename
null
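No accepted answer was recorded above; a minimal shell sketch of one way to do the rename (the folder*/ glob and all paths are assumptions based on the examples in the question):

for dir in folder*/; do
    dir=${dir%/}                                   # strip the trailing slash
    for f in "$dir"/*.zip; do
        [ -e "$f" ] || continue                    # skip folders with no .zip files
        mv -n "$f" "$dir/${dir}_$(basename "$f")"  # folder1/file1.zip -> folder1/folder1_file1.zip
    done
done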
_computerscience.5506
I have a very basic grasp of computer graphics, however I threw it all away when I wrote this simple application that I am using for scientific purposes. I didn't have a lot of time to write it correctly because my bosses didn't see the promise of it, but now that it is done they have found it quite useful. The problem is I built it stupidly. Basically it takes a TON of points stored in a big array and draws them to the screen. The screen can be moved and zoomed. Instead of using a matrix to accomplish this I am simply passing in the width, height and center of the screen, making my average vertex shader look like so:

#version 330 core
layout (location = 0) in float ex;
layout (location = 1) in float ey;
uniform float sizeX;
uniform float sizeY;
uniform float centerX;
uniform float centerY;
uniform float size;
out vec4 color;
out float passsize;
void main()
{
    gl_PointSize = size;
    passsize = size;
    float x = (ex - centerX) / (sizeX / 2.0);
    float y = (ey - centerY) / (sizeY / 2.0);
    gl_Position = vec4(x, y, 0.0, 1.0);
    color = vec4(1.0f, 0.5f, 0.2f, 1.0f);
}

and an average draw call looks about like this:

glUseProgram(program);
glGenVertexArrays(1, &VAO1);
glBindVertexArray(VAO1);
glGenBuffers(1, &VBO1);
glBindBuffer(GL_ARRAY_BUFFER, VBO1);
glBufferData(GL_ARRAY_BUFFER, sizeof(data1), data1, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, VBO1);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenBuffers(1, &VBO2);
glBindBuffer(GL_ARRAY_BUFFER, VBO2);
glBufferData(GL_ARRAY_BUFFER, sizeof(data2), data2, GL_STATIC_DRAW);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glUniform1f(glGetUniformLocation(program, "centerX"), centerX);
glUniform1f(glGetUniformLocation(program, "centerY"), centerY);
glUniform1f(glGetUniformLocation(program, "sizeX"), scrwidth);
glUniform1f(glGetUniformLocation(program, "sizeY"), scrheight);
glUniform1f(glGetUniformLocation(program, "size"), dotSize);
glDrawArrays(GL_POINTS, 0, dotCount);
glfwSwapBuffers(window);
glfwPollEvents();

This works very well, but I would love to get rid of the low frame rate this causes when I am drawing a ton of points (generally over 10,000 on my small laptop that doesn't actually have a GPU). Is there any way I can get the GPU to more intelligently clip points that are going to be off screen? I don't have a lot of time to change things around, so it would be great if it were something simple to achieve.
Is there a way I can make this easier for openGL to draw?
opengl
null
_softwareengineering.297983
I've built a reasonably sized CLI app with everything split across different classes and namespaces in a logical way. However, many components need to render output from within themselves, which seems messy. To expand... The program copies content files and databases between servers. The main file kicks in, then loads other files that handle configuration, CLI arguments, environment loading etc., then dispatches commands to other systems that do some pre-flight tests, database copying, database integrity checks, database replacements, file copying, etc. Most of these systems produce output during their operations; some use ASCII progress bars that need to be updated or otherwise output lines that need to be overwritten. Up until now I've had each system use a dependency-injected CLI Output class that wraps a bunch of output functions. But it still seems messy to trigger output from so many different systems that should only be focusing on their work task, not output. I've experimented with creating another system that takes callbacks from all the others, then calculates and renders the output. So each subsystem is only doing the job it was assigned, but triggering callbacks with various useful information at key steps. This is nice, but also leads to more bloat. Is there any precedent on the best way to do this? Maybe a third way I haven't thought of?
Where to output from large CLI app
php;separation of concerns;cli
null
_unix.197614
I've had Mint installed as a dual boot on my laptop for some time. I use it as my dev environment, for desktop and web related coding. I recently started getting errors which have rendered my terminal unusable. As soon as I start the program up I get the error:

/usr/bin/env: bash: No such file or directory

Whenever I try to run a command, I then get the following output:

brae@G62-Linux ~ $ ifconfig
Traceback (most recent call last):
  File "/usr/lib/command-not-found", line 21, in <module>
    os.execvp("python3", [sys.argv[0]] + sys.argv)
  File "/usr/lib/python2.7/os.py", line 344, in execvp
    _execvpe(file, args)
  File "/usr/lib/python2.7/os.py", line 380, in _execvpe
    func(fullname, *argrest)
OSError: [Errno 2] No such file or directory

I don't want to start screwing around with files I don't fully understand, and most of the standard things (apt-get update etc.) aren't possible because of the errors. Can anyone give me any advice? I would just blank the partition and reinstall, but I really can't be bothered going through all of that if I don't need to. Thanks very much.

EDIT - SOLUTION

Thanks to the best answer below, I tracked down the problem to my .bashrc file. It turns out something had modified the file to alter the PATH variable with a Ruby environment, which was causing the error. I simply deleted this section from the file (in my case this left the .bashrc file empty) and this solved the error. I believe that if this leaves the file blank, you can also change the .bash_profile (or .profile) file to no longer call the .bashrc file, as it is not a requirement for the process. Thanks for your help everyone who answered, particularly apaul.
Getting Python errors whenever I try to use terminal in Linux Mint
bash;linux mint;terminal;python
Many possibilities. At login time, usually, 3 steps are done:

1) At login time, the shell specified in /etc/passwd is launched. So I'd first have a look at /etc/passwd (using a GUI editor such as gedit, since you can't use the terminal...) and check the shell (it's the last field). You may have a line like this for your user:

user:x:500:500::/home/user:/bin/bash

(You may have /bin/sh, /bin/csh, /bin/zsh, ... but /bin/bash is the most common.)

2) Then the shell will read the content of your /home/user/.bash_profile (if you use bash). So I'd look at this file (i.e. open it with gedit, but watch out: filenames starting with a . are hidden by default) and see if it launches any python command.

3) Finally, /home/user/.bashrc is also read when launching your terminal. So I'd have a look at this file too.

These are the first steps I'd do, looking whether any of these files launches a python, ipython, or xonsh command, or any .py script.
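A minimal sketch of how to run those checks from a console that still works (the username and grep pattern are only illustrative):

# 1) which shell does /etc/passwd assign to the user?
grep '^brae:' /etc/passwd

# 2) and 3) anything odd prepended to PATH in the startup files?
grep -nE 'PATH|python|ruby' ~/.bash_profile ~/.profile ~/.bashrc 2>/dev/null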
_cstheory.31373
Chaitin's incompleteness theorem says no sufficiently strong theory of arithmetic can prove $K(n) > L$, where $K(n)$ is the Kolmogorov complexity of the number $n$ and $L$ is a sufficiently large constant. The size of $L$ depends on the theory. Some people have suggested that since the minimum value of $L$ depends on the theory $T$, the minimum value of $L(T)$ can be used as a measure of the complexity of theory $T$. Other people have argued this is a very bad idea. At least one person has speculated the complexity of Peano arithmetic is less than $10^9$. Assume we define $K(x)$ using n-state, 2-symbol busy beavers (BB): $K(x)=n$ where $n$ is the number of states of the smallest BB that writes exactly $x$ 1's and halts. We don't really need a strong theory to write a computer program, so assume our theory $T$ is primitive recursive arithmetic. One way to define $L(PRA)$ is to define it as the size of a computer program that can determine the truth of any statement of the form $K(n) > m$ for any positive integers $n,m$. Assume the programming language is x86 assembly. The size of the program is the size in bits of a compiled executable running on a 64-bit processor. What is a reasonable value for $L(PRA)$? Are there any papers with upper bounds on $L(T)$ for some $T$?
What is the Kolmogorov complexity of arithmetic?
lo.logic;turing machines;kolmogorov complexity
null
_unix.354425
I've installed RHEL 6 on a physical HP server. It is configured with a bond between eth0 and eth1. Configs:

DEVICE=bond0
BOOTPROTO=static
IPADDR=10.12.25.52
PREFIX=20
GATEWAY=10.12.16.1
DNS1=10.12.16.33
DNS2=10.12.16.59
DOMAIN=hph.local
DEFROUTE=yes
ONBOOT=yes
HOTPLUG=no
USERCTL=no
NM_CONTROLLED=no
IPV6INIT=no
BONDING_OPTS="mode=balance-rr num_grat_arp=3 miimon=100"

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=no
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=no
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no

Bond state:

[user@server ~]$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 14:02:ec:8b:df:20
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 14:02:ec:8b:df:21
Slave queue ID: 0

I'm frequently seeing the network freeze. If I attempt to connect via SSH nothing happens. If I attempt to ping the server's IP address nothing happens. But, once I log into the server's console and ping the gateway, everything breaks free and I can both ping the server and connect remotely. Initially I thought maybe it was due to not tuning the bond options. I have since added those, hoping that the num_grat_arp option in particular would fix it. This doesn't seem to be the case, as the freeze continues arbitrarily. Should I be using different options? Is there something else that I'm not aware of, such as a bug?
Network freezes on RHEL 6 bond
networking;rhel;bonding
null
_unix.194922
I am trying to add to my script an awk statement or an if/else/then: if eth0:4 is a match, then put in the eth0:4 IP. How would you get the IP in your script as a variable if you have multiple IPs assigned to one NIC?

inet 133.16.8.9/16 brd 133.8.255.255 scope global eth0
inet 133.8.5.8/16 brd 133.8.255.255 scope global secondary eth0:1
inet 133.8.5.7/16 brd 133.8.255.255 scope global secondary eth0:2
inet 133.8.5.6/16 brd 133.8.255.255 scope global secondary eth0:3
inet 133.8.5/16 brd 133.8.255.255 scope global secondary eth0:4
inet 133.8.5.4/16 brd 133.8.255.255 scope global secondary eth0:5
inet 133.8.5.3/16 brd 133.8.255.255 scope global secondary eth0:6
inet 133.8.5.2/16 brd 133.8.255.255 scope global secondary eth0:7
How to scrape multiple IP's and add in a script as a variable?
bash;text processing;sed;awk;scripting
null
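No accepted answer was recorded above; a minimal shell sketch of one way to capture the eth0:4 address in a variable (the interface label comes from the question, everything else is illustrative):

# take the line labelled eth0:4 from the `ip addr` output and strip the /prefix
ip_eth0_4=$(ip -4 addr show | awk '/ eth0:4$/ {sub(/\/.*/, "", $2); print $2}')
echo "$ip_eth0_4"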
_reverseengineering.2491
How does FastLogHook work in the Immunity Debugger PyCommand section? I can't figure it out. Everything I try does not work, and I know this code is a mess. I seriously don't understand FastLogHook, which makes me crazy when BpHook worked so nicely. I need an explanation :/

#!/usr/bin/env python
import immlib
import struct
from immlib import FastLogHook

def main(args):
    """Will hook and run its own assembly code then return control"""
    imm = immlib.Debugger()

    # Set name
    Name = "hippie"
    fast = imm.getKnowledge(Name)
    if fast:
        hook_list = fast.getAllLog()
        imm.log(str(hook_list))
        imm.log("%s" item[1[0]])

    # Instantiate fastloghook
    fast = immlib.FastLogHook(imm)

    # Primary address to hook on to
    fast.logFunction(imm.getAddress("msvcrt.strcpy"))

    # Takes register and offset. Dereference parameters from the stack
    # or capture data at a known offset from a register
    fast.logBaseDisplacement('ESP', 0x4)
    fast.logBaseDisplacement('ESP', 0x8)

    # Tracks the value of a specific register when the hook is hit
    fast.logRegister("ESP")

    # Logs known memory offset at hook time
    # fast.logDirectMemory()

    # Set the hook
    fast.Hook()

    # Save data so we can retrieve results later
    imm.addKnowledge(Name, fast, force_add=1)
    return "LogBPHook installed"
How to use fastloghook in immunity debugger
python;immunity debugger
null
_softwareengineering.106104
I'm working in a start-up. I have a background with teamwork and management, but I'm currently the only developer. We have a project that will involve a few developers in the form of 2 freelancers. I will function as a developer as well as the ScrumMaster. The project is scheduled to last three to four months. The 3 developers (myself and the two freelancers) have never worked together before. Would it be possible or advisable to use Scrum to manage this type of project? Would there be any problems organizing or running the team using Scrum?
How can I use Scrum with a freelance team?
project management;agile;scrum
null
_cs.26289
Input: A binary heap of size $n$. $n$ is even.
Output: 2 binary heaps of size $n/2$ each.

I found this question in a solved algorithms test and the solution said: "There is no better solution than to build 2 completely new heaps using BuildHeap() - $O(n)$ time." I have thought about taking the root out of the original heap; then we have 2 sub-heaps, one with $\lfloor n/2 \rfloor$ values, and one with $\lfloor n/2 \rfloor +1$ values. Now we just add the element we took out to the bigger heap of the two sub-heaps. How do we know which heap is bigger? We traverse the heap with 2 pointers, Left and Right: Left goes only left, Right goes only right, and each pointer has a counter that counts the number of elements it has passed. The bigger heap will be denoted by the max counter of the 2 pointers. This works because a binary heap by definition is full in all of its levels, aside from the last level, which is full from left to right. That sums up to $O(\log n)$. My question is: am I right, or is the solution right? Edit: My solution is wrong. I have thought about adding a fix, but it makes it $O(n)$.
Binary heap of size $n$ splitting to 2 heaps of size $n/2$
algorithms;data structures;heaps
null
_codereview.149043
I'm writing classes to decode and persist Messages sent via TCP. Each message contains many DataFields. I have a simple DataField class that holds some metadata, including a private byte[] _content;

public byte[] Content
{
    get { return _content; }
    set { _content = value; }
}

I then inherit DataField to do the customised fields as required:

public GPSData(byte[] content)
{
    this.FieldId = 0;
    this.FieldName = "GPS Data";
    this.FixedFieldLength = 21;
    this.RecordLength = 1;
    this.LengthType = DataFieldLengthType.Fixed;
    this.ActualFieldLength = content.Length;
    this.Content = content;

    if (ActualFieldLength != FixedFieldLength)
    {
        string message = this.FieldName + " is of Data Type " + LengthType.ToString() + ". The ActualFieldLength passed in must match the FixedFieldLength";
        throw new ArgumentOutOfRangeException(message);
    }

    int i = 0;
    GPSUTCDateTime = BitConverter.ToUInt32(content, i);
    i += sizeof(UInt32);
    Latitude = BitConverter.ToInt32(content, i);
    i += sizeof(Int32);
    Longitude = BitConverter.ToInt32(content, i);
    i += sizeof(Int32);
    Altitude = BitConverter.ToInt16(content, i);
    i += sizeof(Int16);
    TwoDGroundSpeed = BitConverter.ToUInt16(content, i);
    i += sizeof(UInt16);
    SpeedAccuracyEstimate = content[i];
    i += sizeof(byte);
    TwoDHeading = content[i];
    i += sizeof(byte);
    PDOP = content[i];
    i += sizeof(byte);
    PositionAccuracyEstimate = content[i];
    i += sizeof(byte);
    GPSStatusFlags = content[i];
}

Is this.Content = content; correct to ensure the array is assigned properly? Should I perhaps use content.CopyTo(Content, 0); or is there another, more correct option I should use? Mostly worried because I don't see a new declaration in the class for the array to be allocated.
Assignment of Array to Property in class constructor
c#;object oriented;array
null
_softwareengineering.284758
I'm on a quest to write a LISP-ish language that compiles to Brainfuck. Well, it's a stack of intermediate compilers actually. Currently I'm trying to write the one that transforms this code:

a+
+
b+
a

into:

++>+<

It's standard Brainfuck, but with variables added. The compiler is responsible for two things:

1. Assigning a and b some address in the memory, say, 0 and 1
2. Translating each occurrence of a and b into a number of <'s or >'s so the memory pointer ends up at the right location

Now, here's the thing: in order for the compiler to know how many <'s to put, it needs to know where the memory pointer currently points to. At first it points to 0, but as the code gets executed, the pointer moves around, and to get to a you need to move relatively from where you're at. So the only way to know the value of the memory pointer would be to execute the code and replace variable names as you go. "Oh, mempoint is now 4, I see an a and its location is 1, so I need to move three places left: <<<" If I close my eyes I can hear a voice saying: "you're trying to solve the Halting Problem". So my question is: am I really? Is there any way around?
Is it possible to write a "Brainfuck with variables" compiler?
brainfuck
null
_codereview.96825
public struct DateRange : IEquatable<DateRange>, IComparable<DateRange>{ private readonly DateTime _min; private readonly DateTime _max; public DateRange(DateTime min, DateTime max) { if (min.CompareTo(max) > 0) throw new ArgumentOutOfRangeException(max, Cannot be less than min.); _min = min; _max = max; } public DateTime Min { get { return _min; } } public DateTime Max { get { return _max; } } public bool Intersects(DateRange other) { return other.Max >= this.Min && other.Min <= this.Max; } public bool Contains(DateRange other) { return other.Min >= this.Min && other.Max <= this.Max; } public bool Precedes(DateRange other) { return this.Max < other.Min; } public bool Follows(DateRange other) { return this.Min > other.Max; } public int CompareTo(DateRange other) { if (Contains(other) || Follows(other)) return 1; if (Precedes(other)) return -1; return 0; } public bool Equals(DateRange other) { return this.Min.Equals(other.Min) && this.Max.Equals(other.Max); } public override bool Equals(object obj) { return obj is DateRange && Equals((DateRange)obj); } public override int GetHashCode() { unchecked { int hash = 17; hash = hash * 23 + Min.GetHashCode(); hash = hash * 23 + Max.GetHashCode(); return hash; } } public override string ToString() { return string.Format({0:yyyy-MM-dd} to {1:yyyy-MM-dd}, Min, Max); }}public sealed class DateRangeTree : IEnumerable<DateRange>{ private readonly IImmutableList<DateRange> _items; private readonly DateRangeTreeNode _root; public DateRangeTree(IEnumerable<DateRange> items) { if (items == null) throw new ArgumentNullException(items); _items = items.OrderBy(x => x.Min).ThenByDescending(x => x.Max).ToImmutableList(); _root = new DateRangeTreeNode(_items); } public IEnumerator<DateRange> GetEnumerator() { foreach (var item in _root) { yield return item; } } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } private sealed class DateRangeTreeNode : IEnumerable<DateRange> { private readonly DateRange _center; private readonly DateRangeTreeNode _left; private readonly DateRangeTreeNode _right; private readonly IImmutableList<DateRange> _items; public DateRangeTreeNode(IEnumerable<DateRange> items) { if (items == null) throw new ArgumentNullException(items); var points = new System.Collections.Generic.SortedSet<DateRange>(); foreach (var item in items) { points.Add(item);; } _center = points.Skip(points.Count / 2).First(); var ns = new List<DateRange>(); var ls = new List<DateRange>(); var rs = new List<DateRange>(); foreach (var item in items) { if (item.Precedes(_center)) { ls.Add(item); } else if (item.Follows(_center)) { rs.Add(item); } else { ns.Add(item); } } if (ns.Count > 0) { _items = ns.OrderBy(x => x.Min).ThenByDescending(x => x.Max).ToImmutableList(); } if (ls.Count > 0) { _left = new DateRangeTreeNode(ls.OrderBy(x => x.Min).ThenByDescending(x => x.Max)); } if (rs.Count > 0) { _right = new DateRangeTreeNode(rs.OrderBy(x => x.Min).ThenByDescending(x => x.Max)); } } public IEnumerator<DateRange> GetEnumerator() { if (_left != null) { foreach (var item in _left) { yield return item; } } if (_items.Count > 0) { yield return _items[0]; } if (_right != null) { foreach (var item in _right) { yield return item; } } } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } }}
Date Range Tree
c#;datetime
null
_unix.180653
What I think happens

From my understanding this is what happens when I connect a USB device to my computer:

1. The kernel recognizes that I plugged in a USB device.
2. The kernel sets up the very low-level things for the new device, like drivers etc.
3. The kernel sends a uevent to the udev daemon.
4. The udev daemon uses the information sent to populate the appropriate files in /dev.

What I want to do

I was thinking that maybe I could jump directly to step 3 by manually sending a uevent to udev. Since the uevent is sent via netlink and netlink is based on sockets, theoretically this should be possible, since I just need to know which socket to write to. Does anyone have any idea if this can work and how?
Can I masquerade a kernel uevent?
linux kernel;udev;usb device
null
_softwareengineering.218220
When and how did this process of prefixing CSS with vendor-specific prefixes begin? Which browser/organization started this, and why was it started? I searched the web but found no details on this.
History of vendor css prefix
web development;html;history;css
Vendor prefixes have been around since pre-CSS2. They are the standard method for implementing new features in a rendering engine; affixing the prefix implies that only this particular engine will render these particular properties/features/etc. Vendor prefixes are a rendering engine's extensions to the standard. Different implementations of the box model by IE and Netscape are the root cause for all of this, as well as doctype switching. The browsers implemented standards differently and were forced to either roll back active features or deal with what was already live. Microsoft has been doing it for a while, as have WAP CSS user agents, and -khtml-; the explosion with CSS3 came about with the release of the iPhone and all of its shiny. Apple really set the tone with their aggressive -webkit CSS development. I'm not sure who exactly, but somewhere shortly after, the idea was proposed to make them a standard per rendering engine... I remember Eric Meyer blogging about it and reading it on the W3C site. Upon implementation, it has turned out to be not optimal, if in the very least because a certain percentage of developers are lazy and cannot be expected to prefix for any vendor besides -webkit. I'm being sarcastic, but that's pretty much dead on, minus the -webkit comment... The largest casualty from all of this was Opera... Opera Mini/Mobile on iOS were pretty much ignored by the -webkit fan club, causing entirely too many sites to be broken or ugly when viewed using Opera Mini/Mobile... so Opera quit -o- and threw away Presto for WebKit.
_unix.313296
I want to use Arch with sysvinit. As stated in the Arch wiki, I downloaded and extracted sysvinit and initscripts-fork. I makepkg'd sysvinit with success, but afterwards the initscripts-fork makepkg fails with the claim that the dependency sysvinit is missing. How does makepkg check dependency existence? How can I make it see the install (or better still, do you know other practical ways to use Arch with sysvinit)?
Arch does not see sysvinit package
arch linux;systemd;sysvinit;makepkg
By default, makepkg only builds, but doesn't install a package. You either have to use makepkg -i or install the .pkg.tar.xz file afterwards with pacman -U.
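A minimal sketch of the two routes the answer describes (the package file name is only illustrative; the real name depends on version and architecture):

# build and install in one step
makepkg -i

# or build first, then install the built package explicitly
makepkg
sudo pacman -U sysvinit-2.88-1-x86_64.pkg.tar.xz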
_cstheory.18690
Take an arbitrary infinite binary sequence $\omega$. The interesting case is when $\omega$ is not computable. Is there a computable (semi-)measure $\mu$ such that the sequence $\omega$ is $\mu$-random in the sense of Martin-Löf? Or $\mu$-random in some weaker sense? Clearly, $\mu$-almost every sequence $\omega$ is $\mu$-random. So the question is what happens with the sequences from that set of measure zero.
Is an infinite incomputable sequence random wrt a computable measure?
computability;randomness
Muchnik, Semenov and Uspensky showed that there are sequences which are not Martin-Löf random for any computable measure. They call all other sequences (which are Martin-Löf random for some computable measure) "natural sequences".

Andrei A. Muchnik, Alexei Semenov, and Vladimir Uspensky. Mathematical metaphysics of randomness. Theoretical Computer Science, 207(2):263-317, 1998.

This topic is also discussed in:

Laurent Bienvenu and Christopher Porter. Effective randomness, strong reductions and Demuth's theorem.
Jan Reimann and Theodore A. Slaman. Probability measures and effective randomness.
Jan Reimann and Theodore A. Slaman. Measures and their random reals.

Here is a quick proof (which may be different from the above references): Let $\{U_i\}$ be a (non-effective) enumeration of all $\Sigma^0_1$ sets making up some level of the universal Martin-Löf test for some computable measure $\mu$. Each $U_i$ is dense and open, so by the Baire category theorem (a countable intersection of dense open sets is dense), the intersection $\bigcap_i U_i$ is nonempty. Any sequence in this intersection cannot be Martin-Löf random for any computable measure $\mu$.

The same can be done for Schnorr randomness (which is weaker than Martin-Löf randomness). Let $\{U_i\}$ be the collection of all dense $\Sigma^0_1$ sets of computable $\mu$-measure for some computable measure $\mu$.

UPDATE Oct 23, 2013: I realize there are three things to add (which I am sure are all well known and are probably in the above mentioned papers).

1. My above proof actually shows that 1-generics cannot be Martin-Löf random for a computable measure, since 1-generics are in all dense $\Sigma^0_1$ sets.

2. The K-trivials are not Martin-Löf random on any computable measure. A sequence $X \in \{0,1\}^\mathbb{N}$ is low-for-random (a.k.a. $K$-trivial) if $\text{MLR}^X = \text{MLR}$. It can be shown that this property also extends to all computable $\mu$. That is, if $X$ is K-trivial, then $\text{MLR}_{\mu}^X = \text{MLR}_{\mu}$ for all computable $\mu$. Hence, the non-computable $K$-trivials cannot be random for any computable measure. For if a $K$-trivial $X$ is random on a computable measure $\mu$, then $X$ is random relative to $X$. This means $X$ is an atom. However, only computable points can be atoms of computable measures.

3. The set of all natural sequences is precisely the set of sequences truth-table reducible to a Martin-Löf random (on the fair-coin measure). (The main idea is that every computable measure is the push-forward of an a.e. computable map, and the Martin-Löf randoms on the measure $\mu$ are exactly the sequences pushed over to the new measure. The reason this computation is truth-table involves using layerwise computability.)
_unix.53439
I'm using a Guardium appliance, which has one of its ports used to analyze SPAN traffic. The base OS is Linux. The SPAN traffic stopped coming during some time of operations. I asked the network team, and was told that the port was shut down on the Guardium device end. I'm convinced that this activity has taken place at the switch level, as it was unable to handle the traffic being passed through it. Is there a command that I can run on the Linux OS that would tell exactly if the interface was shut down, and if yes, for what reason? Thanks.
Troubleshooting span traffic shutdown interface?
linux;logs
null
_codereview.146033
I've been learning Python for a few weeks and I created a simple game where you convert binary numbers to decimal. It shows you n binary numbers and you are supposed to enter the numbers in decimal system in sequence as a password. After failing three times you lose. I'd like to implement it into a bigger game later.Here's the code:import random# Conver decimal to binary.def toBin(i): bin = {0:b}.format(i) return bin# Generate random password and it's binary version.def generatePassword(n): password = [] passwordBin = [] for i in range(n): a = random.randint(0, 9) password.append(a) passwordBin.append(toBin(a)) return password, passwordBin# Prints the binary password and a frame.def printPassword(password): print(12 * '#') for i in range(len(password[1]) + 4): if i <= 1: print(2 * '#' + 8 * ' ' + 2 * '#') elif i <= 5: print(2 * '#' + password[1][i - 2].rjust(4, '0').center(8) + 2 * '#') else: print(2 * '#' + 8 * ' ' + 2 * '#') print(12 * '#')# The game loop.def puzzle(n): password = generatePassword(n) win = False endLoop = False attempts = 3 while endLoop is False: # Check remaining attempts. if attempts > 0: printPassword(password) print('Attempts remaining: ' + str(attempts) + '\n') # Looping through password numbers. for i in range(n): print('Number ' + str(i + 1), '\t\t', end='') # Input and check for ValueError. try: inp = int(input()) except ValueError: print('You can only enter numbers.') break # Is input correct? if inp == password[0][i]: print('CORRECT') else: attempts -= 1 print('WRONG\n') break # End loop and win. else: endLoop = True win = True # End loop. else: endLoop = True # Check win condition. if win is True: print('ACCESS GRANTED') else: print('ACCESS DENIED')# Run the game.puzzle(4)Are there any ways to improve my code, expecially the puzzle(n) function? The conditions and loops took me some time to get working and still kinda confuse me. I hope there aren't any bugs.
Puzzle game - converting binary numbers to decimal
python;beginner;game;python 3.x;quiz
null
_codereview.109534
I have a module called db, which looks like this (shortened):# A dictionary that maps type names (as returned by Model._get_type()) to classes.TYPE_REGISTRY = {}class Model(object): A superclass to be used for database models. To specify which attributes should be savied add an 'attributes' class property containing a list of property names. The first property name should be the primary key. def __init__(self, **kwargs): for name in kwargs: setattr(self, name, kwargs[name]) @classmethod def get_connection(cls): returns a database connection return get_connection() @classmethod def get_attributes(cls): Get a list of all attributes that an instance of this model can have. attrs = [] for sup in cls.__mro__: if hasattr(sup, attributes): attrs = attrs + sup.attributes return list(set(attrs)) @classmethod def register(cls): Registers a model's type. TYPE_REGISTRY[cls._get_type()] = cls def to_dict(self): Turn this object into a dicionary that can be saved to the database. d = {__type: self._get_type()} for attr in self.get_attributes(): if not hasattr(self, attr): continue d[attr] = getattr(self, attr) d = self.prepare_value(d) return d @classmethod def prepare_value(cls, value): Ensures that a value can be written to the database. if isinstance(value, datetime): return value.replace(tzinfo=r.make_timezone(+00:00)) if hasattr(value, to_dict): value = value.to_dict() if isinstance(value, dict): return cls._prepare_dict(value) if isinstance(value, (set, tuple, list)): return [cls.prepare_value(item) for item in value] elif not isinstance(value, (basestring, int, float, long, bool)) and value is not None: raise ValueError(Values must either implement to_dict, or be of type dict, set, tuple, list, basestring, int, float, long, datetime or None, got %s: %s % (type(value), value)) return value @classmethod def _prepare_dict(cls, d): Given a dictionary, this method ensures that it is ready to be written to the database. result = {} for key in d: result[key] = cls.prepare_value(d[key]) return result @classmethod def from_dict(cls, d): Converts a dictionary back to the class within the application if not d: return None type_ = d.get(__type) del d[__type] if type_: clss = TYPE_REGISTRY[type_] else: clss = cls for key in d: if isinstance(d[key], datetime): d[key] = d[key].replace(tzinfo=None) if isinstance(d[key], dict) and __type in d[key]: d[key] = cls.from_dict(d[key]) if isinstance(d[key], list): for i in xrange(len(d[key])): if isinstance(d[key][i], dict) and __type in d[key][i]: d[key][i] = cls.from_dict(d[key][i]) return clss(**d) @classmethod def _get_type(cls): return cls.__name__.lower() + s @classmethod def get_table(cls): Get the table object for this class. default_name = cls._get_type() table_name = getattr(cls, table_name, default_name) return r.table(table_name) @classmethod def run(cls, query): Runs the given query return query.run(cls.get_connection()) @classmethod def get(cls, id): Get an object by its id. 
return cls.from_dict(cls.run(cls.get_table().get(id))) @property def table(self): return self.get_table()This works nicely in most kinds of queries, but when I use .pluck(), I find myself having to include __type in the list of attributes I want to pluck, so that I can use .from_dict(d) with the result, and I'd somehow like that key to always be included.Would there be a good way to abstract this out?Here's an example query I'd like to be able to make:class Post(db.Model): attributes = [title, author, published, content, image] @classmethod def get_by_author(cls, author, fields): return cls.from_dict( cls.run(cls.get_table().get_all(author, index=author)).pluck(*fields) )Currently, I'd have to make sure that fields includes __type. That's easy enough to do, but it's a massive abstraction leak.
Abstracting away implementation details of my rethinkdb Model class
python
null
_unix.225185
What I would like to do is search for lines where the first column does not begin with 'rs' or 'chr'; THEN, if those lines begin with a number, prepend 'chr' to the first column value, otherwise leave it as it was - no prepending. I have the following code:

awk '((!($1 ~ /rs/ || $1 ~ /chr/)) && $1 ~ /^[[:0-9:]]|$/) {$1 = "chr"$1}1' filename > newfilename

This is good, but it prepends 'chr' to all first column values that do not begin with 'rs' or 'chr'. There are some values in this column that I do not want to change, and these all begin with letters (a-z). I only want to change the values which start with numbers (0-9). Thanks!
awk edit column value if numeric
shell script;awk
null
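No accepted answer was recorded above; a minimal awk sketch of one way to express the condition (assuming whitespace-separated columns, as in the question):

awk '$1 !~ /^(rs|chr)/ && $1 ~ /^[0-9]/ {$1 = "chr" $1} 1' filename > newfilename
# note: reassigning $1 makes awk rebuild the line with the default output separator (single spaces)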
_webmaster.29938
I'm working on an "invite a friend" feature and would like to send emails in which the "from" field is the email address of the user sending the invites. I saw that Evernote's emails are sent like that, and Gmail shows some text saying "via evernote.com". I'm wondering what's the correct way to do something like that and not get hit by spam filters. We're currently using Postmark as the service through which we send our emails.
How to send email on behalf of a user?
email
I would check Postmark's terms of service to see if you're allowed to do that. Our application uses a custom invite feature built on Zend Framework, so we used Zend_Mail to build the emails, including headers, and then send them through our SMTP server using the "from" field as our user's email address which they registered with. Your best bet as far as spam filters and Outlook's junk filters is to test various emails, especially if you are using multipart emails with text and HTML versions.
_unix.218082
#!/bin/bash
unset result
result=$(find /home -path "$HOME/TestDir/[0-9][0-9][0-9][0-9]/test*" -mtime -7 -print -delete 2>/dev/null)
[ "$result" ] || echo "There are no recovery files older than 7 days"

In the /test part I actually need to find 2 file name patterns: those that begin with E* and those that begin with P_*. Can this be done?
BASH script to search for files in folders of 4 digits and older than 7 days
bash;shell script
find $(find -name "[0-9][0-9][0-9][0-9]") -name "test*" -mtime -7 -exec rm -i {} \; || echo "There are no recovery files older than 7 days"

The || (or) operator looks at the result of the command on its left, and runs the command on the right only if the command on the left failed (based on the return code). If find does not find any matching files, it will return 1, which will cause || to run the echo command.
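The question actually asks for two name patterns (files beginning with E and with P_); a minimal sketch combining them with -o, reusing the paths from the original script (everything else is illustrative):

find "$HOME/TestDir"/[0-9][0-9][0-9][0-9] \( -name 'E*' -o -name 'P_*' \) -mtime -7 -print -delete 2>/dev/null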
_softwareengineering.229303
I have a class that will have a number of external methods that will all call the same smaller set of internal methods. So, something like obj.method_one(a, c) and obj.method_two(a, c), where obj.method_one calls obj._internal_method(a, c, y) and obj.method_two calls obj._internal_method(a, c, z). They're nearly identical, but they have one argument at the end that differs on the internal call (which is a constant). I could just copy and paste a few times, but I was wondering if there's any way to dynamically create these external methods.
Dynamic method creation in python
object oriented;python;class;methods;dynamic
There are ways, but unless you have at least half a dozen, I'd suggest:

class myclass(object):
    def _internal_method(self, a, c, y):
        pass  # add code here
    def method_one(self, a, c):
        self._internal_method(a, c, "method one")
    def method_two(self, a, c):
        self._internal_method(a, c, "another method")

The other option is to actually really create it dynamically. This can be confusing, because if the object is inspected, the methods aren't there. It looks like this:

class test(object):
    def __getattr__(self, attrname):
        def _internal_method(a, c):
            print "You called %s with %s and %s" % (attrname, a, c)
        if attrname in ["method_one", "method_two"]:
            return _internal_method
        else:
            raise NameError

a = test()
a.method_one("first", "second")

This makes use of the fact that when getting a property (like a method) that does not exist, the object's __getattr__ is called, and the result is passed back. Note that we are passing back the FUNCTION _internal_method as a.method_one, not calling it at this stage; that is done after the call to getattr has returned. Think of the last line as being:

tmp = a.method
tmp("first", "second")

The first of these calls __getattr__, which returns _internal_method (but does not call it); the latter calls _internal_method. Since _internal_method is declared within __getattr__, it has the same scope as the call that created it, so self and attrname are accessible. Also, _internal_method is TRULY private, not just private by convention. I will reiterate that this is confusing for the next person to maintain your code. For example, if you ask for help(a.method_one), you will get the help for _internal_method instead! The length of this explanation, and your probable confusion, is a very good reason not to use this unless you have to.
_cs.72043
I just read an article, "Statistical approach for figurative sentiment analysis on Social Networking Services: a case study on Twitter", which provides an algorithm to analyze tweets, and this article includes 2 formulas which I don't really understand. Link to the article. I hope maybe someone here can help me. The first formula is formula (4) (page 5); the second formula is formula (6) (page 8). I will be very thankful if someone helps me with this. Examples will be most welcome! :)
Understanding a formula on article
algorithms;algorithm analysis
Each tweet and each cluster is represented as an $m_k$-dimensional vector. The distance between a tweet $t_k$ and a cluster $\delta$ is then$$dis(t_k,\delta) = 1 - \frac{\langle t_k, \delta \rangle}{\|t_k\| \|\delta\|},$$where $\langle a,b \rangle = \sum_{i=1}^{m_k} a_i b_i$ is the inner product and $\|a\| = \sqrt{\langle a,a \rangle}$ is the norm. This explains equation (4).The definition of $P(S_t|w)$ is given below (6):$P(S_t|w)$ is the probability that a term has a score with the given tweet score.This is the number of tweets containing $w$ which have score $S_t$ divided by the total number of tweets containing $w$.The rest of the formula is hopefully self-explanatory ($\times$ is just ordinary multiplication).If you have any more questions, I suggest contacting the authors.
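A small worked instance of equation (4), with made-up 3-dimensional vectors, just to make the cosine-distance step concrete: for $t_k = (1, 0, 1)$ and $\delta = (1, 1, 0)$,
$$dis(t_k,\delta) = 1 - \frac{1\cdot 1 + 0\cdot 1 + 1\cdot 0}{\sqrt{2}\cdot\sqrt{2}} = 1 - \frac{1}{2} = \frac{1}{2}.$$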
_computergraphics.4247
I have begun learning how to create a ray tracer, and one thing I am confused about is how the pixel color from a ray tracer is stored into an image. Do we use SetPixel for a Bitmap? Do we use a third-party library like libpng? Most tutorials don't really explain this well online, so if anyone can explain what the most common method for this is, that would be great.
How are the colors for each pixel from a Ray Tracer stored in an Image?
raytracing;c++;image
It doesn't matter. All that matters is that you can store information per pixel. In most cases this means allocating an x*y array of ints (for 3 byte-sized channels of RGB) to store the pixel values and then feeding it into your image compression library of choice to save it out to disk.
_cs.43428
How can I prove that if P=NP then for every pair of non-trivial languages $L,L'\in NP$ there exists a polynomial reduction $L\leq L'$?
if P=NP then $L\leq L'$ for all languages
complexity theory;computability
null
_softwareengineering.351364
Here's the thing: imagine there is a database containing just ids from 1 to 1000 and a flag is_used for each of those. There is a service which has only two methods:

- getId(), which finds an available id in the DB and returns it after setting is_used to true
- release(int id), for setting is_used to false on the id passed

How would you suggest organizing everything to be scalable and performant if there are two rules:

- Each id should be used by only one client at any given point in time
- If all ids are used, then clients calling getId should be queued and provided with an id as soon as one becomes available
Scalable and performant way to organize access to a shared resource?
design;performance;scalability
null
_unix.333397
Admin note: This question is different than "why is sudo path different than su" because the environment variables in a bash script run from cron do not appear to carry over from environment variables set for either user as sudo or as su. (See everything after the BUT.) When running sudo su and showing paths, I have /usr/local/bin in my path. I have several custom apps I put in that folder with the intent of making them available system wide. In /etc/sudoers, /usr/local/bin is in the secure_path. BUT when running a bash script executed as root via a cron job, /usr/local/bin is apparently not preserved in the path, as I get "command not found" when attempting to run apps that are installed there, despite the fact they are in /etc/sudoers. How do I get these apps to be available to root? Ubuntu 16.10
Why are sudo su and bash root script paths different?
ubuntu;cron;path
The environment in a cron job is, as you are seeing, different from that in a shell invoked by su - or sudo -s or sudo /path/to/executable. You can, however, set variables within the cron table:

PATH=$PATH:/usr/local/bin
0 0 * * * /path/to/run-me-at-midnight-with-path-changes.sh
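One caveat worth flagging (a property of Vixie cron/cronie, not something stated in the answer): variable assignments in a crontab are not subject to variable substitution, so $PATH in the line above would be taken literally. Spelling the path out avoids that; a minimal sketch:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * /path/to/run-me-at-midnight-with-path-changes.sh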
_cogsci.9724
Xuan Choo recently created a model of serial working memory for Spaun. Does this model of working memory exhibit the effects of proactive interference (as well as changing data types yielding freedom from proactive interference)? Additionally, does this model show signs of exhaustive serial search (the time for answering questions increases as the list gets longer)? If not, what future areas of investigation might yield this result, or is it impossible?

References

Choo, F. X. (2010). The ordinal serial encoding model: serial memory in spiking neurons. http://compneuro.uwaterloo.ca/files/publications/choo.2010.pdf

Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202-1205. http://compneuro.uwaterloo.ca/files/publications/eliasmith.2012.pdf
Does Spaun's serial memory exhibit proactive interference and serial searching?
theoretical neuroscience;spa
null
_webapps.4300
Most of the videos and articles I share go to my Facebook wall. How can I post them on the news feed?
Articles and videos I share go on my Facebook wall instead of the news feed
facebook
null
_unix.219368
I'll make this short and hopefully I make sense. I'm trying to give a user SSH access, but he/she must only be able to access /opt and the few subdirectories that live under it. However, here's where it gets tricky: I understand that if they are chrooted in /opt, the other directories in there that aren't owned by root won't be accessible. That's a problem for me, because living inside there is /opt/Pentaho which, you guessed it, is owned by the pentaho user, and when you chown root:root that directory it brings our Pentaho app to a grinding halt, obviously. SO, is there a way for me to restrict an SSH user to /opt and still be able to read and write files within the /opt/Pentaho directory, or have I just lost my mind? Any insight would be great, thanks!
Chroot Users to /opt and subdirectories
linux;ssh;chroot;jails
null
_unix.325424
I have created a file called histcopy.txt with the command history (history > histcopy.txt). It looks somewhat like this:

1. l
2. ls
3. cat necopy.txt
4. netstat
5. cd | ls -l ; grep -i 3 histcopy.txt | echo

And I want to print out each unique command. This is the command I have run so far:

awk '{print $2}' histcopy.txt | sort | uniq

And the outcome of that is:

l
ls
cat
netstat
cd

But in the last line (5. cd | ls -l ; grep -i 3 histcopy.txt | echo) there are many commands that it ignores; it only takes the first cd. How can I rewrite my current command so that it extracts them as well? So that from the 5th line it would also extract:

ls -l
grep -i 3 histcopy.txt
echo

making them separate items in the output list.
Extract unique single commands from history
linux;shell;command line;sort
$ cat histcopy.txt
 1  l
 2  ls
 3  cat necopy.txt
 4  netstat
 5  cd | ls -l ; grep -i 3 histcopy.txt | echo

$ sed 's/^\s*\S*\s*//' histcopy.txt | tr ';|' '\n' | awk '!seen[$1]++{print $1}'
l
ls
cat
netstat
cd
grep
echo

- sed 's/^\s*\S*\s*//' removes the initial space and number associated with each command in the history output
- tr ';|' '\n' replaces ; and | with newline characters. This will work for the current problem statement, but won't help if there are commands inside substitutions, etc.
- awk '!seen[$1]++{print $1}' keeps only unique commands

Similar logic implemented with perl alone:

$ perl -lne 's/^\s*\S+//; (@a)= split/[;|]/; foreach (@a){($k) = /^\s*\K(\S+)/; print $k if !$seen{$k}++}' histcopy.txt
l
ls
cat
netstat
cd
grep
echo
_datascience.13920
I am using Keras to classify images. I am following the Keras blog. The accuracy from predict_generator is not matching with the accuracy obtained from the confusion matrix, which I am computing using the scikit-learn package. I have included the relevant snippet of the code belowfrom keras.preprocessing.image import ImageDataGeneratorfrom keras.models import Sequentialfrom keras.layers import Convolution2D, MaxPooling2Dfrom keras.layers import Activation, Dropout, Flatten, Denseimport numpy as npimport theanofrom sklearn.metrics import classification_report, confusion_matrixy_actual = np.ones((nb_test_samples),dtype = int)y_actual[0:2817] = 0train_datagen = ImageDataGenerator( featurewise_std_normalization=False, samplewise_std_normalization=False, rescale = 1./255)test_datagen = ImageDataGenerator(rescale=1./255)train_generator = train_datagen.flow_from_directory( train_data_dir, target_size = (img_width,img_height), batch_size = 32, class_mode = 'binary')test_generator = test_datagen.flow_from_directory( test_data_dir, target_size = (img_width,img_height), batch_size = 32, class_mode = 'binary', shuffle = False )model.fit_generator( train_generator, samples_per_epoch = nb_train_samples, nb_epoch = nb_epoch, validation_data = test_generator, nb_val_samples = nb_test_samples)score = model.evaluate_generator( test_generator, 4938)print Test fraction correct (Accuracy) = {:.2f}.format(score[1])prediction = model.predict_generator(test_generator,nb_test_samples)for i in xrange(0,len(prediction)): if prediction[i]<0.5: prediction[i] = 0 else: prediction[i] = 1#y_predicted = test_generator.classesprint np.sum(prediction)CM = confusion_matrix(y_actual,prediction)print CM If I use y_predicted, I get a perfectly diagonal confusion matrix, when the console output shows an accuracy of 70% which doesn't make any sense at all. What is that I am doing wrong?
Accuracy doesn't match in Keras
deep learning;keras
null
_unix.327743
I have some AAC files which I've extracted with MP4Box from MP4 videos. I've been trying to find a (preferably GUI) application to tag them with a title, year etc. - and failing:

Amarok - pretends to be able to edit AAC tags, but doesn't actually write them.
EasyTag - makes me rename the files to .mpa to notice them; then, also, supposedly tags them, but when you try to save the changes it says it fails to write the changes.
puddletag - doesn't see AAC/M4A files.
Kid3 - supposedly edits tags (inconvenient interface, by the way), but when you commit the changes and reload the file - they're gone.
ExFalso - doesn't see AAC/M4A files.

Is AAC tagging support really that bad? Am I doing something wrong? Should I use other apps/tools?
Tagging AAC problems (Kubuntu 16.04 + various apps)
audio;unicode;media;tagging
null
_webapps.4164
How do I view the IMDb top lists sorted by demographics in order to get more personalized ratings? For example, I would find it more useful if I could view the Top 250 films list while counting votes only for males, and even more so if I could look at the top votes for males in a specific age bracket.
View IMDb top lists sorted by demographics to give more personalized ratings
sorting;imdb
null
_webmaster.45517
I want to redirect mysite.com or http://mysite.com or www.mysite.com or any other format given by my user to http://www.mysite.com. I'm able to achieve this by writing the following lines in my .htaccess file:

Rewritecond %{http_host} ^mysite.com
RewriteRule ^(.*) http://www.mysite.com/$1 [R=301,L]

But I want to do this from Apache, so I've added the following line in the virtual host conf file of the site and removed the above two lines from .htaccess:

Redirect 301 / http://mysite.com/

But whenever I try to access the site the following error is displayed:

Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects.

Where am I doing it wrong?
Issue with Apache redirection
htaccess;mod rewrite;apache2
What you want is this:

RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
RewriteCond %{HTTP_HOST} !^$
RewriteRule ^/(.*) http://www.example.com/$1 [L,R]

which was copied directly from here. You really ought to read through that page and understand it. Your current rule, as it stands, redirects every request back to the same host, which your Apache config then tries to serve again ('/' is the root of your server, not a different site), so each response gets redirected to '/' once more - hence an infinite redirect. The above code only redirects requests NOT bound for www.example.com and preserves the rest of the URL when it redirects.
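A quick way to check what each hostname actually returns is to inspect the Location header with curl (the domains mirror the placeholders in the answer):

curl -sI http://example.com/ | grep -i '^location'
curl -sI http://www.example.com/ | grep -i '^location'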
_unix.165033
I have Ubuntu 14.04 and I want to install FreeBSD, SUSE, or whichever distro is possible, but I need to do this without any USB install or DVD. Actually, I'm looking for something like Wubi, but as I said before I'm on Ubuntu and I do not have any other kind of OS. Is this possible? Is there a Wubi for Linux?
how to install any linux distribution from existing ubuntu
system installation
Vivian, you can easily do this by using VirtualBox or any kind of virtualization software. You will need to download the iso files to do this of course. I have included a link that shows (with images) how to install VirtualBox and use an ISO to create a virtual OS. Please look here. Hope this helps you.
_unix.176460
Aim

The aim is to convert the following string:

hello_hello,123-world567-helloworld123456,world1234-hello09876

using sed into a specific format.

Attempts

sed -e 's|^\(hello_[a-z0-9]\{3\}\)\(.*\)|\1,\1\2|g;s|..|&/|g' /tmp/file

Expected outcome

he/ll/o_/he/ll/o,123-world567-helloworld123456,/wo/rl/d1/23/4-/he/ll/o0/98/76/

Current outcome

The problem is that every 2 characters a / is inserted. The insertion of / should be avoided in the part that resides between the two commas.

he/ll/o_/he/ll/o,/12/3-/wo/rl/d5/67/-h/el/lo/wo/rl/d1/23/45/6,/wo/rl/d1/23/4-/he/ll/o0/98/76/
How to avoid that an insertion pattern using sed is applied to the middle of a string?
sed
I can do it like:

sed 's|\(,[^,]*,\)\{0,1\}\([^,]\{1,2\}\)|\1/\2|g' <<\IN
hello_hello,123-world567-helloworld123456,world1234-hello09876
IN

...which prints...

/he/ll/o_/he/ll/o,123-world567-helloworld123456,/wo/rl/d1/23/4-/he/ll/o0/98/76

So most of the changes made are done to the second s///ubstitution - but that's because I removed all of the first. So the biggest part of your problem was that you were simply telling sed to substitute in a / after every two characters - the . dots mean any char and the g means global - or all. The second biggest part was that the first substitution was not helping you - and was completely unnecessary. More than that though, you were also inserting an extra comma in the first substitution - so after I got the first bit straightened out, I was still running into extra fields. Look:

\(,[^,]*,\)\{0,1\}\([^,]\{1,2\}\)|\1/\2

That's the substitution statement that worked for me, and here's why:

\(,[^,]*,\)\{0,1\} - in a global context you have to be careful to get only as much as you need. You were substituting for every two chars and so that's what you got - sed is greedy. This is referenced first - which is important - because as sed reads left to right it will usually just insert a slash between every two sequential not-comma chars, but if it encounters a comma it will read in up to the next comma it finds and save the whole block to \1 without inserting any slashes at all.

\([^,]\{1,2\}\) - You can't use the . dots here - they will match a comma and so you'll just wind up writing in the slashes after you skip a delimiter. You need to explicitly exclude commas. And so that is what this does - every sequence of 1 or 2 of them - though sed will always pull the largest one of those numbers that it might.

One difference I can see between this and that in your example is that the first slash here is at the head of the string and there is no trailing slash, whereas yours does the opposite. To remedy that, as needed:

...;s|^/\(.*/.\)/*$|\1/|...
_unix.16148
If I install OpenBSD 4.9, when will I have to upgrade to 5.0? When will the 4.9 release no longer be supported by the OBSD teams?
Security support time for OpenBSD?
openbsd;support cycle
The security support for a given version in OpenBSD is 1 year. From the FAQ:
You will also note that in the above example, the 4.6-stable branch came to an end with 4.8-release, and the 4.7-stable branch came to an end with 4.9-release -- old releases are typically supported up to two releases back.
_codereview.135399
I know basic Java, but I struggle sometimes with object orientation design.There is a vendor api I use, and I wanted to wrap it to be reusable as a lib in other projects.All the services from the vendor are different classes and have no hierarchy and so on, but I have no option to change it.So I want to use composition and ensure I don't repeat myself.I thought initially to create a service that would receive the parameters that are common to all services, and this service would implement the api.I tried refactoring this code here and there, and I'm pretty sure this design I'm trying has some great problems as I noticed when trying to create unit tests :)How could I achieve a better design?Code as of now:1) Vendor code (I can't change this) VendorServiceCake.javapackage example;public class VendorServiceCake { public VendorApiCake getCakeApi(int i) { //vendor code return new VendorApiCake(); }}VendorApiCake.javapackage example;public class VendorApiCake { public void authenticate(String user, String password, int parameterNeeded) {/*vendor code*/} public void cookDeliciousCake(CakeIngredients ingredients) {/*vendor code*/}}VendorServiceSellPie.javapackage example;public class VendorServiceSellPie { public VendorApiSellPie getPieApi(int i) { //vendor code return new VendorApiSellPie(); }}VendorApiSellPie.javapackage example;public class VendorApiSellPie { public void authenticate(String user, String password, int parameterNeeded) {/*vendor code*/} public void sellDeliciousPie(Object customer) {/*vendor code*/}}2) This is how I currently invoke the vendor apiExampleCurrent.javapackage example;public class ExampleCurrent { void exampleCake() { VendorServiceCake vendorServiceCake = new VendorServiceCake(); VendorApiCake cakeApi = vendorServiceCake.getCakeApi(1234); cakeApi.authenticate(user, password, 1234); CakeIngredients ingredients = null; cakeApi.cookDeliciousCake(ingredients); } void exampleSellPie() { VendorServiceSellPie vendorServiceSellPie = new VendorServiceSellPie(); VendorApiSellPie apiSellPie = vendorServiceSellPie.getPieApi(1234); //same parameters as above apiSellPie.authenticate(user, password, 1234); //same parameters as above Object customer= null; apiSellPie.sellDeliciousPie(customer); }}3) This is what I want it too look like when users use my .jarUsageOfMyNewApi.javapackage example;import java.util.ArrayList;import java.util.List;public class UsageOfMyNewApi { void usage() { BakeryServiceCake service = new BakeryServiceCake(user, password, 1234); CakeIngredients ingredients = null; service.cookDeliciousCake(ingredients); } void usage2() { BakeryServiceSellPie service = new BakeryServiceSellPie(user, password, 1234); List<Customer> customers = new ArrayList<>(); customers.add(new Customer(john)); service.sellDeliciousPie(customers); }}4) Code needs refactoring for better designBakeryService.javapackage example;import java.util.List;public abstract class BakeryService { //is this class useless? 
public BakeryService(String user, String password, int parameterNeeded) {} private void checkParameters() {/*do some checkings of the parameters*/}}BakeryServiceCake.javapackage example;public class BakeryServiceCake extends BakeryService implements KitchenCakeApi { private KitchenCakeApi api; public BakeryServiceCake(String user, String password, int parameterNeeded) { super(user, password, parameterNeeded); this.api = new KitchenCakeApiImpl(user,password, parameterNeeded); } @Override public void authenticate() { api.authenticate(); } public void cookDeliciousCake(CakeIngredients ingredients) { api.cookDeliciousCake(ingredients); }}BakeryServiceSellPie.javapackage example;import java.util.List;public class BakeryServiceSellPie extends BakeryService /* will implement SellingCakeApi */ { public BakeryServiceSellPie(String user, String password, int i) { super(user, password, i); } public void sellDeliciousPie(List<Customer> customers) {/*to be implemented yet as the BakeryServiceCake, but not the main point as of now*/}}KitchenCakeApi.javapackage example;public interface KitchenCakeApi { void authenticate(); void cookDeliciousCake(CakeIngredients cakeIngredients);}KitchenCakeApiImpl.javapackage example;public class KitchenCakeApiImpl implements KitchenCakeApi { private final String user; private final String password; private final int parameterNeeded; private VendorServiceCake vendorServiceCake; private VendorApiCake cakeApi; public KitchenCakeApiImpl(String user, String password, int parameterNeeded) { this.user = user; this.password = password; this.parameterNeeded = parameterNeeded; vendorServiceCake = new VendorServiceCake(); cakeApi = vendorServiceCake.getCakeApi(parameterNeeded); } @Override public void authenticate() { cakeApi.authenticate(user, password, parameterNeeded); } @Override public void cookDeliciousCake(CakeIngredients cakeIngredients) { cakeApi.cookDeliciousCake(cakeIngredients); }}BeansCakeIngredients.javapackage example;public class CakeIngredients {}Customer.javapackage example;public class Customer { public Customer(String john) {}}
Wrapper for a vendor API that lacks common interfaces
java;object oriented;api;wrapper
null
_unix.50470
How can I search for files that were modified or changed within 5 minutes before and 5 minutes after a certain file? I have tried
mint@mint ~/Desktop $ touch -t 201210101315 /tmp/timestamp
mint@mint ~/Desktop $ sudo find . ~ -cmin -5 | xargs ls -l
to create a temp file with that timestamp and search for files changed within 5 minutes, but I'm only getting files changed within 5 minutes of the current time. What is the simplest way?
How to find files compared to the time of a specific file
linux;find
Use the -a (and) condition, e.g.
find . -cmin -5 -cnewer /tmp/timestamp
will find all files changed in the last 5 minutes and newer (by change time) than /tmp/timestamp.
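To get the full ±5-minute window around a specific file, one hypothetical approach (GNU find and stat assumed; the variable names are only placeholders) is to take the reference file's change time as epoch seconds and bracket it with -newerct:
ref=/tmp/timestamp                      # the file to compare against
t=$(stat -c %Z "$ref")                  # its ctime, as seconds since the epoch
find . -newerct "@$(( t - 300 ))" ! -newerct "@$(( t + 300 ))"
The @-prefixed value is a date string that GNU find understands, so no extra marker files are needed.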
_webapps.8351
Is it possible to move Google bookmarks from one account to another? My GApps domain now allows me to use Google Bookmarks. Previously I've had to use a 'real' Google account to store these, and I want to consolidate under my apps account (and I have to, basically, since I can only have a single sign-on in a browser).
However, I can't seem to find a way of moving the bookmarks. I can export them from www.google.com/bookmarks, but the 'import' link just sends me to some help which instructs me to use the Google Chrome sync.
I don't want to do that, since there are plugins that allow me to access my bookmarks from Firefox etc.
(Note that this is a copy of my unanswered question on Super User; I figured this was a better forum for it.)
How to: migrate real (not Chrome) Google bookmarks from one account into another
google apps;google bookmarks;migration
You would have to export and then import it into Firefox. Then, use the Google Firefox toolbar to sync it to another Google account. It's a roundabout way, but that's what's recommended. If you don't want to meddle with your existing bookmarks in Firefox, you might want to import them into a new Firefox profile created for this purpose.
_webmaster.42472
Following is the response I got from the server space provider upon requesting a 1GB mailbox size for each e-mail id for the website hosted in a shared environment.
Please note that in the normal shared hosting environment, you will be facing the following issues if you are allocated with more email space:
Generally domains that have more number of Email id's and email space have lot of mails stacked in the server which creates spike in inode creation in the server. This in turn creates file system errors affecting the disk integrity.
Note that since you are in a shared hosting environment, the CPU usage per domain is 12 to 15 % only. If lot of mails is stacked in the server, the mail server CPU usage for your domain will go high and it will affect the performance of the entire server.
Hence in order to maintain the highest level of performance on our shared servers, we are limiting the mail box maximum size to 100MB only.
Questions:
How appropriate are these technical reasons?
Is there any way they can provide a 1GB mailbox and simultaneously keep constraints or checks on other factors to avoid the above mentioned problems?
Are these technical reasons correct for not providing 1GB mailbox for each e-mail ID on shared hosting environment?
email;shared hosting
Sounds like you've exceeded your shared hosting capabilities. Your hosting provider uses hosted services to keep the costs down for everyone. Therefore they have to put resource limits on each account so that the resources are shared out equitably among all the accounts on the shared server. It's their server, and their ballcourt, so it doesn't matter what we think they can do, but how they plan to run their system for best response for the rest of their users on the server.You have two options:See if they have a higher service level that lets you do what you want.The usual way of handling this is to move your email services to another system. Getting an email service provider that will provide you with 50GB of space per user also can get you on a system that has far better delivery of email, is not so likely to get blacklisted when one bad customer on the hosted server decides to spam everyone. $2.00 a month for each email account was a really good move for us because it allowed our company to send more email, receive larger attachments, and know that PSF, DKIM, DomainKeys records were processed to allow for a massive reduction in the bounce rate we were experiencing from having the email hosted through the shared server account we were using at the time.To put it another way, the advantages of getting services that do what you need will pay off better than attempting to argue a service provider into doing something they haven't engineered the system to do.
_unix.42856
There are a lot of questions out there about how to convert a PDF file to a PNG image, but I'm looking to take a nice sharp PNG file and just basically wrap it or embed it in a PDF file without having it look blurry or fuzzy.
I realize that with ImageMagick installed I can do a simple conversion like:
convert sample.png sample.pdf
I've also tried a lot of the switches to set the depth, and also the quality setting:
convert -quality 100 sample.png sample.pdf
However the PDF still comes out looking blurry / fuzzy.
Here's a sample image: http://img406.imageshack.us/img406/6461/picture3mu.png
As a PNG it's crisp and clean. When I convert it to a PDF, even at the same size, it looks blurry:
Picture 4.png http://img803.imageshack.us/img803/9969/picture4at.png
How can I convert a PNG to a PDF in high quality?
How can I convert a PNG to a PDF in high quality so it's not blurry or fuzzy?
imagemagick;image manipulation
null
_softwareengineering.60545
This is probably going to sound messed up, but here it goes.
I've been working on a project for a client for a while now. I wasn't given any details except for It has to be an XYZ plugin and interface with ABC product. Which was fine, but now we're towards the end (I think) and it's just dragging out. I don't have any time to spend on it and I'm already over schedule by 3 months. Trying to get the client to describe to me how he would like to be able to navigate the data (a UI issue) is just difficult. I've submitted mock-ups of what I think he wants, but his latest response is you should look at XXX product, it has similar functionality.
Of course, I looked at it and it looks similar to what I submitted, but I don't think that the way I've built the framework is going to support what he is now describing to me. We've had good communication throughout the process but he doesn't know what he wants. I explained how I was going to build the framework and he agreed, so it isn't a bad choice on my part about design.
When I go over what I think are finalized modules, he says, You should have done it this way, which requires me to go back and rework code and UI. Some smaller items could have been better thought out by me, but the big things are how I interpreted his requirements, and I've gone over this module several times during development.
I already received the final funds last month, so I'm working for free at this point. I no longer want to deal with this project. I've already received payment. I've done other successful projects with this client before and he has a lot of other projects he wants to do.
What the heck should I do? I don't want to work on this project anymore. I don't want to ask for any more money (money isn't really the issue). I don't want to make him mad either. I know it looks like I want to have my cake and eat it too. If you think I should call it quits, how should I do it given the circumstances?
How to tell client I no longer want to work on his project
project;client relations
First, you need to get out of the mindset that you are now working for free, just because you've gotten what you believe is the final payment. You agreed to a price and were paid. If you had received all of the funds up front before even starting, would you have been doing the entire project for free?(BTW this is why I never work on fixed-price projects; I always insist on working by the hour.)If you can show that what the client has requested goes way beyond what you originally signed up for, then you could ask for more money, but as you indicated that doesn't seem to be the issue. It sounds like you are just tired of the project. Unfortunately that's not a good reason to quit.If you had a defined specification at the beginning, and have met that spec, then you could ethically walk away from the project but you most certainly will never get any more work from this client again. It would be better to finish up what the client wants, spending as little of your time as possible, and hope to do better next time.
_softwareengineering.193416
I think I've understood more or less what a parsed Scheme program looks like (a binary tree with atomic values on the leaves, if I have understood correctly). Can anybody please define to me, or give a reference for, what a state (or a computation) of a Scheme program is? Is it just the current binding plus a position, or a stack of positions, on the syntax tree? (In such a case, I would appreciate a reference for a formal definition of Scheme binding as well :).)
In Scheme, what is formally a program's state?
binding;scheme;state
Both of the other answers look excellent to me.
I would add the following: the state of a Scheme program is... a Scheme program! That is, you can define a meaning function for Scheme programs by showing how programs reduce to other programs. This is called a small-step semantics. To show you what I mean: what are the steps--in a seventh grade sense--in evaluating 3 * (4 + 5)? According to the small-step rules provided by most teachers, the first step in evaluating this program is to reduce it to
3 * 20
In exactly the same way, you can define a small-step semantics for Scheme that reduces a program to an answer by taking a series of reduction steps.
_unix.325136
I have a bash script:
#!/bin/bash
VAR1=var1
VAR2=var2
VAR3=var3
cat ${VAR1} \
 <(echo -e '<something>') \ # <--------- here's the error
 ${VAR2}/file123.txt \
 <(echo -e '</something>\n<something2>') \
 ${VAR3}/file456.txt \
 <(echo -e '</something2>')
When I run it with sh my_script.sh, I get the error:
my_script.sh: 9: Syntax error: ( unexpected (expecting word)
Update: bash isn't found; /bin/bash doesn't exist, and neither does a plain bash command.
Syntax error: ( unexpected (expecting word) --- in my bash script
bash;shell script;shell;freebsd
Try:
chmod 755 my_script.sh
Then run it simply like this:
./my_script.sh
The #!/bin/bash line at the beginning is used to tell your system which shell the script should be run with. I think you are overriding this by executing it with sh my_script.sh. You could also explicitly write /bin/bash my_script.sh. Also, if you have some bash-specific syntax in your script, you should consider changing the extension to .bash to be more explicit.
EDIT
You don't seem to have bash on your FreeBSD distro (the default shell on FreeBSD appears to be tcsh). You can find here a tutorial to install bash on FreeBSD. The solution I provided should then work properly. Best of luck.
_cs.32215
$L=\{a^{2k}b^nb^k\mid k\geq0, n\geq0\}$ over alphabet $\{a,b\}$How do I prove that $L$ is not regular using Pumping Lemma? All the examples I've come across had same exponents all around, and I'm a bit confused how should I write this down.Should I start with $w=\{a^pb^pb^p\mid p\leq k, p\leq n\}$ where $w$ is a subset of $L$, or how should this be tackled?
Pumping Lemma for $L=\{a^{2k} b^n b^k \mid k\ge0, n\ge0\}$
formal languages;pumping lemma
Choose for example the word $x=a^{2k}b^{k}\in L$ (with $|x|=3k\ge k$).
Possible partitions $x=uvw$ are $u=a^l,\;v=a^s,\;w=a^mb^{k}$ with $s\ge 1,\; l+s\le k$ and $l+s+m=2k$.
Now let's look at $uv^2w=a^{2k+s}b^k$, which is clearly not in $L$. If $s$ is odd, then the number of $a$'s is odd and therefore $uv^2w\notin L$. If otherwise $s$ is even, then the number of $b$'s would have to be at least $(2k+s)/2$, which it is not ($k<(2k+s)/2$). Therefore $uv^2w\notin L$ holds for each $s\ge 1$.
$\rightarrow$ $L$ is not regular.
_unix.62181
I would like to know what scripts will run at startup, without needing to restart and see what happens. Is there any way to know what 'services' are ready to run?
Should I check every single file in
ls /etc/rc0.d
...and so on? Or just
reboot
Missing DOS autoexec.bat.......
EDIT
I am using Debian
How to know what will execute at startup/bootup?
boot;directory structure;services;reboot
null
_softwareengineering.10462
I've been programming for a while now, and I've covered a lot of languages. One trend I've noticed is that all HDL languages have such painful IDEs!
In general, any development environment aimed at hardware-related development has a very crappy UI.
I'm talking about uVision, ModelSim, VHDL Simili, Xilinx etc., compared with NetBeans, Eclipse, Visual Studio etc.
Why do hardware guys hate their developers?
NOTE: There are exceptions (LabVIEW is awesome!). Can you think of any more?
Why are HDL IDEs so user-unfriendly when compared to generic-purpose language (like Java/C) IDEs?
ide
It's not that hardware guys hate their developers. It's that they're hardware guys, so they're not really very good at designing or writing software. Most of them simply don't think enough like normal people to produce software that most people will find attractive or easy to use.
The other part of it is that most of these tools assume that anybody using them uses them constantly; the emphasis is primarily on making them easy for an expert to use, as opposed to easy for a beginner to learn. Of course, it's possible to combine the two, but it takes even more of the UI design skills that (as I just pointed out above) they mostly lack. Worse, along with lacking the skills, many think in terms like: only a [insert pejorative term here] would care about changing colors.
_webapps.108617
Google Hangouts doesn't display background pictures. Google Authenticator has problems. Captcha images don't display on some sites (like Craigslist).
I have tried installing JavaScript and Adobe Flash Player, and still no changes. I am running Windows 10, and the problem occurs in all browsers: Chrome, Mozilla Firefox, Opera, UC Browser, Internet Explorer.
How to fix browser issues on Windows 10?
google chrome;google hangouts;browser
null
_unix.291049
I'm trying to download data from the following link
export ICTP_DATASITE='http://clima-dods.ictp.it/data/Data/RegCM_Data/EIN15/1990/'
These are the codes:
for type in air hgt rhum uwnd vwnd
do
  for hh in 00 06 12 18
  do
    curl -o ${type}.1990.${hh}.nc \
      ${ICTP_DATASITE}/EIN15/1990/${type}.1990.${hh}.nc
  done
done
But it's not downloading and I'm getting the following error message:
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (6) Could not resolve host: hgt
curl: (6) Could not resolve host: rhum
curl: (6) Could not resolve host: uwnd
curl: (6) Could not resolve host: vwnd.1990.00
curl: (7) Could not resolve host: vwnd.1990.00
curl: (7) Could not resolve host: vwnd.1990.00
curl: (6) Could not resolve host: 18.nc
curl: (3) <url> malformed
curl: (6) Could not resolve host: hgt
curl: (6) Could not resolve host: rhum
curl: (6) Could not resolve host: uwnd
curl: (6) Could not resolve host: vwnd.1990.00
curl: (7) Could not resolve host: vwnd.1990.00
curl: (7) Could not resolve host: vwnd.1990.00
curl: (6) Could not resolve host: 18.nc
Can you please help me.
Unable to download data using curl for loop
ssh;curl
null
_softwareengineering.201728
I was studying the Mediator pattern and I noticed that to use this pattern you should register the Colleagues into the Mediator from the Colleague concrete classes. For that we have to make an instance of the Mediator inside the Colleague concrete classes, which violates IoC, and you cannot inject the Colleagues into the Mediator (as far as I know! whether that is right or wrong).
Questions:
1- Am I right about what I said?
2- Shall we always use IoC at all, or are there times when you can forget about it?
3- If we always have to use IoC, can we say Mediator is an anti-pattern?
Shall we always use IoC in our designs?
design;design patterns;ioc
null
_unix.274791
I'm using DHCP and have both IPv4 and IPv6. I want to run a script that updates an IPv6 DDNS service when my network is configured.I created a script at /etc/network/if-up.d/update_dns however this script fails with a DNS resolution error (curl: (6) Could not resolve host: dynv6.com). The logs seem to show it's running before the IPv6 DHCP is finished. I think maybe this is because IPv4 is ready and the scripts fire.Is there somewhere else I should put scripts that require IPv6? There are many answers that suggest if-up.d is the correct place?I'm using Raspbian Jessie Lite, which already has Slow Boot (a script at /etc/systemd/system/dhcpcd.service.d/wait.conf that waits for DHCP) which fixed similar issues previously with things running before the network was ready.I've included logs of anything including network/dhcp/eth0 below.Apr 6 20:49:58 raspberrypi systemd[1]: Starting LSB: Raise network interfaces....Apr 6 20:49:58 raspberrypi networking[223]: Configuring network interfaces...* Hostname was NOT found in DNS cacheApr 6 20:49:58 raspberrypi networking[223]: % Total % Received % Xferd Average Speed Time Time Time CurrentApr 6 20:49:58 raspberrypi networking[223]: Dload Upload Total Spent Left SpeedApr 6 20:49:58 raspberrypi networking[223]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Could not resolve host: dynv6.comApr 6 20:49:58 raspberrypi networking[223]: * Closing connection 0Apr 6 20:49:58 raspberrypi networking[223]: curl: (6) Could not resolve host: dynv6.comApr 6 20:49:58 raspberrypi networking[223]: done.Apr 6 20:49:58 raspberrypi systemd[1]: Started LSB: Raise network interfaces..Apr 6 20:49:58 raspberrypi systemd[1]: Starting dhcpcd on all interfaces...Apr 6 20:49:58 raspberrypi dhcpcd[385]: version 6.7.1 startingApr 6 20:49:58 raspberrypi dhcpcd[385]: dev: loaded udevApr 6 20:49:58 raspberrypi dhcpcd[385]: eth0: adding address fe80::1073:c87:ef15:c4a3Apr 6 20:49:58 raspberrypi dhcpcd[385]: eth0: waiting for carrierApr 6 20:49:58 raspberrypi dhcpcd[385]: wlan0: waiting for carrierApr 6 20:50:00 raspberrypi dhcpcd[385]: eth0: carrier acquiredApr 6 20:50:00 raspberrypi dhcpcd[385]: DUID 00:01:00:01:1e:7e:75:f4:b8:27:eb:8c:48:b0Apr 6 20:50:00 raspberrypi dhcpcd[385]: eth0: IAID eb:8c:48:b0Apr 6 20:50:01 raspberrypi dhcpcd[385]: eth0: rebinding lease of 192.168.0.100Apr 6 20:50:01 raspberrypi dhcpcd[385]: eth0: soliciting an IPv6 routerApr 6 20:50:02 raspberrypi dhcpcd[385]: eth0: Router Advertisement from fe80::c23e:fff:fe63:5170Apr 6 20:50:02 raspberrypi dhcpcd[385]: eth0: adding address fd41:6d80:6364:0:bcdf:ae43:354b:1e46/64Apr 6 20:50:02 raspberrypi dhcpcd[385]: eth0: adding address 2a02:c7d:2bbb:9f00:76b3:47f9:2c11:fea4/64Apr 6 20:50:02 raspberrypi dhcpcd[385]: eth0: adding route to fd41:6d80:6364::/64Apr 6 20:50:02 raspberrypi dhcpcd[385]: eth0: adding route to 2a02:c7d:2bbb:9f00::/64Apr 6 20:50:02 raspberrypi dhcpcd[385]: eth0: adding default route via fe80::c23e:fff:fe63:5170Apr 6 20:50:02 raspberrypi dhcpcd[385]: eth0: requesting DHCPv6 informationApr 6 20:50:06 raspberrypi dhcpcd[385]: eth0: leased 192.168.0.100 for 86400 secondsApr 6 20:50:06 raspberrypi dhcpcd[385]: eth0: adding route to 192.168.0.0/24Apr 6 20:50:06 raspberrypi dhcpcd[385]: eth0: adding default route via 192.168.0.1Apr 6 20:50:06 raspberrypi dhcpcd[385]: forked to background, child pid 716Apr 6 20:50:06 raspberrypi systemd[1]: Started dhcpcd on all interfaces.Apr 6 20:50:06 raspberrypi ntpd[757]: Listen normally on 3 eth0 192.168.0.100 UDP 123Apr 6 20:50:06 raspberrypi ntpd[757]: Listen 
normally on 5 eth0 2a02:c7d:2bbb:9f00:76b3:47f9:2c11:fea4 UDP 123Apr 6 20:50:06 raspberrypi ntpd[757]: Listen normally on 6 eth0 fe80::1073:c87:ef15:c4a3 UDP 123Apr 6 20:50:06 raspberrypi ntpd[757]: Listen normally on 8 eth0 fd41:6d80:6364:0:bcdf:ae43:354b:1e46 UDP 123
Why are my if-up.d scripts running before DHCP(v6) has completed?
debian;raspbian;dhcp;network interface;ipv6
null
_webmaster.60617
I ran a speed test on my site via an online tool. The tool recommended enabling compression to speed up the site. As far as I understand, this requires some changes on the server (already done by hosting support) and making some entries in the .htaccess file. I searched the internet and there are many different recommendations about what to enter into the .htaccess file.
I am just a bit scared of screwing things up with wrong entries. Does anybody know how to implement this method?
Thanks
How to combine GZip + CDN for faster page loads?
php;cdn;page speed;lamp;gzip
null
_codereview.8381
I know there is probably a better way to have coded this. Can anyone be of help? I am not an expert JavaScript coder.
$(function() {
    $('#clientLoginText').hover(function() {
        $('#clientLoginBox').stop().animate({ height : '80px'}, 700);
    }, function() {
        //$('#clientLoginBox').stop().delay(500).animate({ height : '0px'}, 700);
    });
});
$(function() {
    $('#clientLoginBox').hover(function() {
    }, function() {
        $('#clientLoginBox').stop().delay(500).animate({ height : '0px'}, 700);
    });
});
Animating login screens
javascript;jquery
Well here is a tidied up version that shouldn't change the functionality at all:
$(function() {
    $('#clientLoginText').mouseenter(function() {
        $('#clientLoginBox').stop().animate({ height : '80px'}, 700);
    });
    $('#clientLoginBox').mouseleave(function() {
        $(this).stop().delay(500).animate({ height : '0px'}, 700);
    });
});
About the changes:
You only need a single document.ready function: the only good reason I can think of for having two is if you need to put them in separate JS files.
You had used the .hover() function, which is shorthand for .mouseenter() and .mouseleave(), but in the first use you passed an empty function for the mouseleave part and in the second use you passed an empty function for the mouseenter part, so why not just code the first directly as .mouseenter() and the second as .mouseleave()?
Within an event handler you can refer to the element that triggered the event as this, or $(this) if you need it as a jQuery object. That is more efficient than getting jQuery to find it again from its id.
_cstheory.1214
Examples of bounded $NP$-complete variants of undecidable sets:Bounded Halting problem={ $(M, x, 1^t)$| NTM machine $M$ halts and accepts $x$ within $t$ steps}Bounded Tiling={ $(T, 1^t)$| there is a tiling of a square of area $t^2$ by tiles from $T$}Bounded Post Correspondence Problem={ $(T, 1^t)$| there is a matching set of dominoes that uses at most $k$ dominoes from a set of dominoes $T$ (including repeated dominoes) }Is it always possible to get $NP$-complete variant of every Undecidable problem by imposing some bounds on the computation? Are there other natural examples of this kind?
NP-complete variants of undecidable problems?
cc.complexity theory;np hardness
As Jukka pointed out, the answer is trivially no for all undecidable problems. A more reasonable question would be: Can every problem that is complete for the class of recursively enumerable languages be made NP-complete in a straightforward way? I am not sure this is true in general, but in the special cases you mention in your question (Bounded-Halting and Tiling) these problems are complete for RE even under special polynomial time reductions. (I leave special mostly undefined in this answer, but the properties needed can be worked out from it.)So if we ask the even more reasonable question: Can every problem that is complete (under special polytime reductions) for the class of recursively enumerable languages be made NP-complete in a straightforward way?, here the answer is yes. Take any RE-complete problem $A$, defined with respect to a Turing machine $M_A$ that takes a pair of inputs $(x,y)$, such that $x \in A \iff (\exists y)[M_A(x,y)~\text{halts}]$. We are assuming that there is a polynomial time reduction from the Halting Problem to $A$. Define Bounded-A to be the set of pairs $(x,1^t)$ such that there is a $y$ of length at most $t$ such that $M_A(x,y)$ halts within $t$ steps. Clearly Bounded-A is in $NP$. It's also $NP$-complete because we can reduce the $NP$-complete Bounded Halting Problem to Bounded-A in polynomial time (Note that here you need special properties on the polynomial time reduction $R$ to ensure that it carries over to Bounded-Halting as well: i.e., you need to be able to efficiently compute an upper bound $t'$ on how long $M_A(R(M,x),y)$ needs to run, assuming that $M(x)$ halts within $t$ steps.)Now, is there a language which is RE-complete under (say) doubly-exponential-time reductions but not under exponential-time reductions? For such a problem, it is unlikely that you can trivially modify it to get an $NP$-complete version. I would guess that such a problem can be artificially constructed.
_softwareengineering.276084
How can one explain the difference between a programming language and a protocol? Can a protocol have extensions?
We know that machines communicate with protocols, but they can also do so with an expressive language. So I think the difference between these two is unclear, and the references for defining these two are not clear either.
Maybe a protocol is a superset of a language, e.g.: protocol = communication language? assembly = machine language? CSS = styling language? Can anybody explain what a protocol is compared to a language?
comparison of computer processing language vs protocol
http;programming languages;machine code
null
_softwareengineering.252281
I am trying to understand the difference between different rounding methods. Our application offers two different types of rounding:
IEEE
USP (GMP) rounding. http://www.usp.org/sites/default/files/usp_pdf/EN/USPNF/USP34-NF29General%20Notices.pdf
In our unit tests, they seem to do the same thing for the cases being tested. However, I haven't found anyone in the office who can explain what is meant when selecting a rounding mode of IEEE.
Difference between USP and IEEE rounding
numeric precision
According to wikipedia, IEEE 754 specifies round to even when the next digit is a 5, while the paper you cite says that a digit of 5 means add one to the preceding digit.
_webmaster.33220
A client of mine had an acrimonious split with their business partner. My client owns the legal business entities but unfortunately the other partner has the server/domain details. The other partner has since started a new company but has kept the old site up with the old company name and is taking business enquiries through that site but ultimately funnelling it through to their new company.
My client is still trading under their original name and bought a .net domain to compete against the .co.uk. I've since created a new website for them with a CMS that they update regularly, and I've got them set up using all the social media outlets (Facebook, Twitter, G+, Pinterest, YouTube), and they are also running Google AdWords for their company name. They're getting around 2k visits a month, the site is coded well, they've got good in/outbound links, meta data is all up to date, the social media is doing well and Google Analytics is showing positive results. Google Webmaster Tools seems to show everything running as best it could. The old site has been up around 3 years, it never gets updated and is coded really badly (tables and images etc), but it has obviously been indexed by Google, sits at the top spot and has its sub-pages listed within its result (not sure of the correct term).
The new site, in comparison, has been up around 4 months, sits at number 2 on Google with a few of its internal pages at 3 and 4, and then their Facebook and Twitter accounts below that. The Google Ad for the business name sits right at the top of the page.
My client is frustrated that the old site is still taking business enquiries from their legitimate customers who don't know the difference between the two domains. Every advertising drive they do for their site will still, unfortunately, also drive some business to their competitors. I've suggested to my client that it might be easier to take legal action rather than fight an SEO battle, but they don't want to take this route.
1) What other routes are open to overtake the old domain in search ranking?
2) How much emphasis does the age of a site have in the results even if it's not updated regularly? Is it just a waiting game?
3) Can two sites on Google have their internal pages listed within the one result? Or does Google only reserve that feature for one site per search term?
4) Can Google take any action on a domain that is trading in this way? If we submitted a claim could they drop the old site from the results?
Competing against the same business name in search results
seo;google;legal
I know you said the client doesn't want to get lawyers involved, but um, your client needs to get a lawyer involved. An SEO slap fight just isn't going to fix this, and if it's the route they want to go then they don't get to complain about promotions sending business to the competitor since that's kind of the core of the entire problem. Everything from here down is not any form of legal advice but rather some possible things to consider when talking to a lawyer. My client owns the legal business entities but unfortunately the other partner has the server/domain details.I read the above as that your client has the rights to CompanyName. Presumably that also means any trademarks associated with CompanyName and so on. Is the domain companyname.com? If so, your client should be trying to seize it (talk to a lawyer) because this:The other partner has since started a new company but has kept the old site up with the old company name and is taking business enquiries through that site but ultimately funnelling it through to their new company.possibly sets up a pretty good argument that the former partner is infringing on trademark and possibly even taking advantage of brand confusion if s/he is still operating in the same business space. (Talk to a lawyer.)Even if the domain isn't exactly $companyname, the fact it's established in attachment to the previous incarnation of the business might still provide some leverage. I'm actually unclear how/why your client even agreed to leave these assets with the partner. Did they somehow not consider the web site or at least domain name a business entity?Domain age is noted but not hugely important. Yes, multiple sites can show sitelinks for a given set of results, but you have little say over it happening. For example, if I search for movie reviews both Rotten Tomatoes and Roger Ebert have them, and they're not even the top two results; they sit at 1 and 3.Can Google take any action on a domain that is trading in this way? If we submitted a claim could they drop the old site from the results? Google cares about things like spam and duplicate/stolen content. This is an argument between former business owners who seem to have made some poor decisions in splitting up. I would say it's highly unlikely they're going to get involved. Alternate option: As far as the whole thing with sending some business to the other person during promotions, is it completely out of the question for your client to just re-establish under a new name? It seems to me like this split is rather botched anyway and it might be easier for everyone involved.
_webmaster.71532
I am trying to set up a domain for my running instance on an Amazon EC2 server. When I go to bracketfanatic.com I get a redirect loop and the site stops. I believe that everything is set up correctly except for my Record Sets. Here is a picture of my records. Any help would be nice. Thank you.
Amazon ec2 domain redirect loop
domains;domain forwarding;amazon ec2
Your CNAME should be www.bracketfanatic.com pointing to bracketfanatic.com. But that may not be the only thing you need to set. DNS does not normally control redirects, however, there may be something we are not aware of that the host is doing. Make this change and see if that does not solve your problem. If not, then we will have to look further.
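To double-check things after changing the record, a hypothetical verification from any machine with dig and curl might look like this (it only inspects DNS answers and the redirect chain, so it is safe to run repeatedly):
dig +short bracketfanatic.com A
dig +short www.bracketfanatic.com CNAME
curl -sIL http://bracketfanatic.com | grep -iE '^(HTTP|Location)'
If the Location headers keep bouncing between the same URLs, the loop is coming from the web server or application configuration rather than from DNS.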
_unix.199444
I have installed the vsftpd FTP server on CentOS 6.6 with yum, but I can't start the vsftpd service.
When I run /etc/init.d/vsftpd start, I get
Starting vsftpd for vsftpd: [FAILED]
with no error, and no relevant error is listed in my /var/log/messages, but after installing vsftpd I have this line in it:
ip-10-252-65-122 pure-ftpd[28837]: ([email protected]) [INFO] Timeout - try typing a little faster next time
I tried to google the error, but I found nothing. How can I solve the problem?
Unable to start vsftpd service on CentOS 6.6
ftp;vsftpd;init.d
null
_webapps.11367
I have a particular page that I want to refer users who have DNS problems:How do I diagnose not being able to reach a specific website as an end user?However, there is a catch-22 -- users who are having DNS problems may not be able to reach our sites and read that page!Thus, I need a reliable long-term mirror of this web page on another domain, either for free or as a paid service. Ideally, one that would periodically ping the source and keep it up to date with any changes as well.I found some community built mirroring services to make sites Digg-proof or Reddit-proof but these are ad-hoc and not guaranteed to work for the long term.Are there any webapps that offer reliable long term mirroring of individual web pages? Or any other webapp I can use to achieve this functionality?
Mirroring a particular web page?
webapp rec
The only semi-reliable thing I could think of was linking to the page in the Google cache:http://webcache.googleusercontent.com/search?q=cache%3Ahttp%3A%2F%2Fsuperuser.com%2Fquestions%2F231977%2Fhow-do-i-diagnose-not-being-able-to-reach-a-specific-website-as-an-end-userBut you can't control how frequently it's updated, etc. Also, not the prettiest of URIs (although there's always the shortened link). And the Google cached URI scheme has changed over the years. Maybe using the stock google.com query with the cache operator would be more reliable.
_unix.46872
Pardon the horrendous title, I'm not exactly sure how to summarize the issue I'm having. Basically, I have synaptics, and I right-click by tapping two fingers on the touchpad because my touchpad doesn't have any buttons. This is great, but I get all sorts of extra right-clicks because unlike left-clicking (tapping with a single finger), the right-click doesn't require the contact to be brief.Left-click (single-finger) requires the tap to be a tap, i.e. if I put a finger on the touchpad, leave it there for a moment, and then raise it, it is not a click. That's great.Right-click (double-finger) does not require a tap, i.e. if I put two fingers on the touchpad, leave them there for a moment, and then raise them, it is a right click, thrown when my fingers leave the touchpad. This is a problem.It's especially annoying because I use two-finger scroll, so I get random right-click events if I didn't scroll enough, or was going to scroll then decided not to, etc. How could I go about fixing this?
Synaptics two-finger-tap right-click happens on non-tap
scrolling;synaptic
null
_cstheory.16121
Consider two sequences $u_1 \geq u_2 \geq ... \geq u_n$ and $l_1 \geq l_2 \geq ... \geq l_n$ with $u_i \geq l_i$ for every $i$. Let $\mathcal{G}(l_{1:n},u_{1:n})$ be all undirected unweighted simple graphs on $n$ vertices that have their spectrum lower-bounded by $l_{1:n}$ and upper-bounded by $u_{1:n}$. In other words, let $G$ be a graph on $n$ vertices, and $\lambda_1 \geq \lambda_2 \geq ... \geq \lambda_n$ the eigenvalues of $G$'s adjacency matrix; if $l_i \leq \lambda_i \leq u_i$ for all $i$ then $G \in \mathcal{G}(l_{1:n},u_{1:n})$.Is there an efficient procedure to sample a graph uniformly at random from $\mathcal{G}(l_{1:n},u_{1:n})$? If not, what restrictions do we need to place on $l_{1:n}$, $u_{1:n}$, or elsewhere to have such a procedure? What if we just need the distribution over $\mathcal{G}(l_{1:n},u_{1:n})$ to be approximately uniform?In the application I am considering, I really only care about the first few eigenvalues (biggest 2 or 3). If we constain only $l_i \leq \lambda_i \leq u_i$ for all $i \in \{1,2\}$ or $i \in \{1,2,3\}$ (instead of all $i$ as before) then is there a good sampling algorithm?If no good sampling algorithm is known even in the restricted case then what do people usually do in practice?Related questionReverse Graph Spectra Problem?
Generating a random graph with constraints on spectrum
ds.algorithms;graph theory;randomness;spectral graph theory
null
_codereview.139701
I'm mainly wondering about whether or not I have used the (TAP) async/await pattern correctly. I get a bunch of these warnings, Because this call is not awaited, execution of the current method continues before the call is completed., however that sounds like exactly what I want. Which is for the server to be able to handle clients independently and asynchronously.As a side note, I know it's bad for performance to re-initialize the streams in for example HandleConnection, I just kept it that way for brevity's sake.Client.cs:using System.IO;using System.Net;using System.Net.Sockets;using System.Threading.Tasks;class Client { TcpClient tcpClient; public void Connect() { ConnectAsync(); } public async void Ping() { await SendMessage(Ping!); } private async Task ConnectAsync() { tcpClient = new TcpClient(); await tcpClient.ConnectAsync(IPAddress.Parse(127.0.0.1), 3344); MainWindow.ClientLog(Connected to server!); HandleConnection(tcpClient); } private async Task HandleConnection(TcpClient tcpClient) { NetworkStream ns = tcpClient.GetStream(); StreamReader sr = new StreamReader(ns); string message = await sr.ReadLineAsync(); MainWindow.ClientLog(Received message from server: + message); HandleConnection(tcpClient); } private async Task SendMessage(string message) { NetworkStream ns = tcpClient.GetStream(); StreamWriter sw = new StreamWriter(ns); await sw.WriteLineAsync(message); await sw.FlushAsync(); MainWindow.ClientLog(Sent Ping! to server.); }}Server.cs:using System.Collections.Generic;using System.IO;using System.Net;using System.Net.Sockets;using System.Threading.Tasks;class Server { List<TcpClient> connectedClients; public void Host() { connectedClients = new List<TcpClient>(); HostAsync(); } private async Task HostAsync() { IPEndPoint local = new IPEndPoint(IPAddress.Parse(127.0.0.1), 3344); TcpListener mainListener = new TcpListener(local); mainListener.Start(); MainWindow.ServerLog(Server started.); while(true) { TcpClient client = await mainListener.AcceptTcpClientAsync(); MainWindow.ServerLog(Client connected!); connectedClients.Add(client); HandleClient(client); } } private async Task HandleClient(TcpClient client) { NetworkStream ns = client.GetStream(); StreamReader sr = new StreamReader(ns); string message = await sr.ReadLineAsync(); MainWindow.ServerLog(Message from client nr. + connectedClients.IndexOf(client) + : + message); if (message == Ping!) { await SendMessage(client, Pong!); } HandleClient(client); } private async Task SendMessage(TcpClient client, string message) { NetworkStream ns = client.GetStream(); StreamWriter sw = new StreamWriter(ns); await sw.WriteLineAsync(message); await sw.FlushAsync(); MainWindow.ServerLog(Sent message to client: + message); } private async Task SendMessages(IEnumerable<TcpClient> clients, string message) { foreach (TcpClient client in clients) { SendMessage(client, message); } }}
Client/Server, Asynchronous ping-pong exchange
c#;networking;async await
null
_cs.45655
Arden's lemma states that there exists a solution to the equation between regular expressions r = sr + t, with r unknown, and it is s*t.
I went through some other topics on the forum and I always saw it being applied, for example, to grammars that have productions like
S->AS|b
so that L(S) = L(A)L(S) + b and the solution is L(A)*b.
However, in some student notes from a classmate I found this example:
S->ASA|A
A->aAa|Ab|e
and Arden's rule is applied in this way without any further manipulation of the grammar: L(S) = L(A)L(S)L(A) + L(A), and it concludes by saying that L(S) = L(A)*.
This seems wrong to me, but I want to check first whether there is some further hypothesis that can be applied here to make the statement valid.
For example, I wonder if it is decidable (and of course valid) whether the productions S->ASA|A generate the same language as the productions S->AAS|A.
Can somebody please help me?
Arden's lemma applicability on context free grammars
formal languages;context free;formal grammars;regular expressions
First of all, let me mention that Arden's lemma applies not only to regular expressions but to any equation among languages of the form $r = sr+t$, where $r,s,t$ are arbitrary languages. The smallest solution to this equation (also known as the least fixed point) is always $r = s^*t$. If $\epsilon \notin s$, then this is in fact the only solution. (Otherwise, $\Sigma^*$ is also a solution.)
The language generated by the rule $S\to aSa|a$ is not $a^*$ but rather $a(aa)^*$: only odd powers are allowed. So it seems that the student notes contain a mistake, though in this particular case the calculation yields the correct conclusion since $\epsilon \in L(A)$.
On the other hand, $S\to aSa|a$ and $S\to aaS|a$ do generate the same language.
More generally, if the only rules involving $S$ are $S \to ASA|A$, then replacing these rules by $S \to AAS|A$ or $S \to A|SAA$ results in the same language, $L(A)(L(A)^2)^*$; and if $\epsilon \in L(A)$, we can further simplify this to $L(S) = L(A)^*$.
_unix.5656
In order to save disk space, I want to have two OS installations share a single swap partition (a dual-boot). Is this a good idea?
Are there any side effects when two distros share a swap partition?
partition;swap;dual boot
It's possible. In fact, you can share the swap space between completely different operating systems, as long as you initialize the swap space when you boot. It used to be relatively common to share swap space between Linux and Windows, back when it represented a significant portion of your hard disk.Two restrictions come to mind:The OSes cannot be running concurrently (which you might want to do with virtual machines).You can't hibernate one of the OSes while you run another.
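A minimal sketch of what that looks like in practice, assuming the shared swap partition is /dev/sda3 (adjust the device to your own layout):
sudo mkswap /dev/sda3      # initialise the swap area (once, or whenever another OS clobbers it)
sudo swapon /dev/sda3
# and in each distro's /etc/fstab:
# /dev/sda3   none   swap   sw   0   0
One caveat worth noting: if either fstab refers to the swap area by UUID, re-running mkswap generates a new UUID, so either reference the device path (or a stable /dev/disk/by-id link) or keep the UUIDs in sync between the two installs.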
_unix.36894
I have a:
Bus 006 Device 002: ID 03f0:0605 Hewlett-Packard ScanJet 2200c
scanner plugged in via USB to my notebook. When I launch this command (it behaves the same whether I run it as a normal user or as root):
scanimage --format=tiff --resolution=150 --mode=color > a.tiff
the scanner just waits for ~30 sec, then starts actually scanning, but it stops after going about ~8 cm (so it's scanning just ~1/3 of an A4 page), though the scanned part looks good. So the problem is: why doesn't this scanner scan a full A4 paper size? I can't find relevant logs..
UPDATE: I wanted to try out the scanner on a Win7 Pro 64-bit machine.. I didn't find any drivers for it :D
Hewlett-Packard ScanJet 2200c on Scientific Linux 6.1 64bit
scanner
scanimage -l 0 -t 0 -x 215 -y 297 --format=tiff --resolution=150 --mode=color > output.tiff
It works! (The default scan size wasn't A4..) But it's really slow.. it took 1:56 to scan an A4 page..
_cs.28868
In a bipartite graph, how can we find the total number of ways of getting a maximal matching?
The cardinality of the two sets in the bipartite graph may not be the same. Two matchings are said to be different if they differ in at least one edge.
Finding the number of distinct maximal matching in a bipartite graph
graphs;bipartite matching;matching
null
_unix.15855
How can I 'cat' a man page like I would 'cat' a file to get just a dump of the contents?
How to dump a man page?
man
First of all, the man files are usually just gzipped text files somewhere in your file system. Since your mileage will vary in finding them, and you probably want the processed and formatted version that man gives you instead of the source, you can just dump them with the man tool. By looking at man man, I see that you can change the program used to view man pages with the -P flag like this:
man -P cat command_name
It's also worth noting that man automatically detects when you pipe its output instead of viewing it on the screen, so if you are going to process it with something else you can skip straight to that step like so:
man command_name | grep search_string
or to dump TO a file:
man command_name > formatted_man_page.txt
_softwareengineering.109969
I program in a style where I treat everything as expensive, and I really do hate repeating anything, mostly because I develop for embedded systems. So I get very annoyed when I have to do something that causes repetition.
An example: in my current project, I am creating a layout manager. I had this huge function that did all the work of placing everything, which I don't mind as long as I comment it well. At some point down the line I realised that something I do in that long function needs to be done again somewhere else, like adjusting for something that is bigger than the current item. But what got me really annoyed (and why I'm posting) is that the needed function requires three variables. To me those three variables are expensive and should only be computed once per call. But to refactor the code the way I need to, I have to put the variables in the needed function. Now the variables get evaluated three times.
This is just an example, and things like this crop up a lot elsewhere. So, besides telling me those evaluations aren't expensive (at which point I'll say, I'm still repeating it though), what can be done about this sort of thing, or, to my huge annoyance, is it unavoidable?
Update: One of the main suggestions given is to store the value. Using the example above again, storing the value does not seem viable, mostly because the needed function takes two values which are used to compute the three variables. It would mean I would have to create an object (feel free to correct me if I'm wrong), store the values and variables in the object, and then test whether they had changed.
In short, unless there are other ideas out there, it looks to be unavoidable. The only real thing you can do is minimise up to a limit. How deeply do I create objects at the cost of property look-ups? How many times do I repeatedly have to create those three variables? I hope you get the idea.
There is also the fact that maybe I could refactor my code better. I tend to write huge functions and only pull out what I need, which is probably KISS's fault. I think I need another acronym to balance things out :P .
Hate repetition to the extreme
self improvement;embedded systems;dry
null
_unix.341595
When SELinux is installed on a system are its rules enforced before or after the standard linux permissions? For example if a non-root linux user tries to write to a file with linux permission -rw------- root root will SELinux rules be checked first or will standard filesystem permissions apply and SELinux never invoked?
Are SELinux rules enforced before or after standard linux permissions?
files;permissions;selinux;access control
I see the terms to search for now are MAC and DAC. DAC is the standard permission system; MAC is the system used by SELinux.
The answer, to quote one source, is:
It is important to remember that SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first.
(The linked documentation also includes a diagram of this decision flow; the image is not reproduced here.)
References:
https://selinuxproject.org/page/NB_MAC
https://www.centos.org/docs/5/html/Deployment_Guide-en-US/selg-overview.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/chap-Security-Enhanced_Linux-Introduction.html
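As a quick, hypothetical way to observe this ordering on an enforcing system with auditd running (the file path and user name below are placeholders):
sudo sh -c 'echo secret > /root/dac-test.txt && chmod 600 /root/dac-test.txt'
su - someuser -c 'cat /root/dac-test.txt'    # fails with "Permission denied" from plain DAC
sudo ausearch -m avc -ts recent              # normally shows no AVC denial for that access,
                                             # because the DAC check already refused it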
_unix.17304
I'm specifically looking to change the formatting in two places:On the desktop, the date/time reads Sat 21:05. I would like it to read 23-Jul-11 21:05 or 23 Jul 11 21:05. In other words, I want to change the display of the day of week to display the day, short month name, and short year.On the login screen, the date/time reads Sat 9:06 PM. I would like to change it to either the same as the desktop (23-Jul-11 21:05 or 23 Jul 11 21:05) or the preferred display of the full date in the format (Saturday, 23 July 2011 21:05) where the full day, date, full month, full year, and time (in 24-hour format) is displayed.I don't see any options to adjust this. I would suspect it's possible, but I'm not sure what config file it would be in or what options I would adjust.
Is there a way to adjust the formatting of the date/time in the GNOME 3 login screen and desktop?
gnome;gnome3;date;time;gnome shell
null
_webapps.4469
Related to this question, I want to remove the On Behalf Of when I send an email from another Gmail account via the web ui. So, as of now, my email is being shown as From [email protected] on behalf of [email protected]. When inputting the email into the settings, Gmail appears to detect that the email is another Gmail account and does not ask you whether or not to use another SMTP server. Anyone has any ideas on how I can get around this?
How to use Gmail's Send Mail As with another Gmail account through SMTP without On Behalf Of?
gmail;privacy;smtp
null
_unix.83351
Somehow I have a tar file called 'secret\r-.tar.gz'. Note that it has \r in the name.
I tried the following commands over SSH for moving it, but none of them are working:
mv secret\r-.tar.gz ../
mv secret\\r-.tar.gz ../
mv secret\\r-.tar.gz ../
mv secret\r-.tar.gz ../
All resulted in the error:
mv: cannot stat `secret\r-.tar.gz': No such file or directory
Can you guys point me in the right direction?
Unable to move or delete file with \r in the name
bash;rename
If the file is literally called secret\r-.tar.gz, then mv secret\r-.tar.gz ../ should have worked.
If the \r is really a carriage return, you need to pass a literal carriage return (and not an escape):
mv $'secret\r-.tar.gz' ..
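If it helps, here are a couple of hypothetical ways to inspect what the name really contains and to move the file without typing the carriage return by hand (the glob assumes nothing else matches secret*-.tar.gz):
ls | cat -A                      # a literal carriage return shows up as ^M
printf '%s\n' secret* | od -c    # dumps the raw bytes of the matching names
mv secret$'\r'-.tar.gz ../       # the same $'...' quoting, spelled inline
mv secret*-.tar.gz ../           # or just let the shell match it with a glob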
_unix.211660
I've got a new question, this time about my new laptop. I wanted to install openSUSE Linux, but it doesn't work. I followed guides from the Internet and did everything as described there: I disabled Fast Boot and Secure Boot, enabled the Launch CSM option, and so on, all in the firmware setup. But if I try to boot from my USB stick, a message appears: No system detected, please insert a bootable system and press any key. (Or something like that.) Can you help me?
Thanks, Me :)
Installing Linux on my ASUS ROG G751JT-T7005H
linux;system installation;opensuse
null
_codereview.10715
I originally posted this in stackoverflow.com but the question may be too broad. I'm trying to figure out the best way to download and use an SQL database from my server. I have included the code i whipped up but I'm not sure if it's a viable way to accomplish this so peer review would be extremely helpful :)As it stands now, the database will be downloaded in a separate thread but when UI components are initialized they fail (obviously, as the database doesnt exist while its still being downloaded).package com.sandbox.databaseone;import java.io.BufferedReader;import java.io.FileOutputStream;import java.io.InputStream;import java.io.InputStreamReader;import org.apache.http.HttpEntity;import org.apache.http.HttpResponse;import org.apache.http.client.HttpClient;import org.apache.http.client.methods.HttpGet;import org.apache.http.impl.client.DefaultHttpClient;import android.database.sqlite.SQLiteDatabase;import android.os.AsyncTask;import android.util.Log;public class DatabaseManager {private SQLiteDatabase database;private int currentVersion = 0;private int nextVersion = 0;private static String databasePath = /data/data/com.sandbox.databaseone/databases/;private static String databaseFile = dbone.sqlite;private static String databaseBaseURL = http://www.redstalker.com/dbone/;private static String databaseVersionURL = version.txt;public DatabaseManager(){ database = null;}public void initialize(){ DatabaseVersionCheck check = new DatabaseVersionCheck(); String url = databaseBaseURL + databaseVersionURL; check.execute(url);}private void init_database(String path){ database = SQLiteDatabase.openDatabase(path, null, SQLiteDatabase.OPEN_READONLY); if(database != null) { currentVersion = nextVersion; } else { nextVersion = 0; }}private class DatabaseVersionCheck extends AsyncTask<String, Void, String>{ @Override protected String doInBackground(String... params) { StringBuilder builder = new StringBuilder(); HttpClient client = new DefaultHttpClient(); HttpGet get = new HttpGet(params[0]); try { HttpResponse response = client.execute(get); int statusCode = response.getStatusLine().getStatusCode(); if(statusCode == 200) { HttpEntity entity = response.getEntity(); InputStream in = entity.getContent(); BufferedReader reader = new BufferedReader(new InputStreamReader(in)); String line; while((line = reader.readLine()) != null) { builder.append(line); } in.close(); reader.close(); entity.consumeContent(); } } catch(Exception e) { e.printStackTrace(); } return builder.toString(); } @Override protected void onPostExecute(String result) { if(result != null) { int version = Integer.parseInt(result); if(version > currentVersion) { nextVersion = version; DownloadDatabase d = new DownloadDatabase(); d.execute(); } } }}private class DownloadDatabase extends AsyncTask<Void, Void, Boolean>{ @Override protected Boolean doInBackground(Void... 
params) { boolean result = false; String url = databaseBaseURL + databaseFile; String path = databasePath + databaseFile; HttpClient client = new DefaultHttpClient(); HttpGet get = new HttpGet(url); try { HttpResponse response = client.execute(get); int statusCode = response.getStatusLine().getStatusCode(); if(statusCode == 200) { FileOutputStream fos = new FileOutputStream(path); HttpEntity entity = response.getEntity(); InputStream in = entity.getContent(); byte [] buffer = new byte[1024]; int count = 0; while((count = in.read(buffer)) != -1) { fos.write(buffer, 0, count); } fos.close(); in.close(); entity.consumeContent(); result = true; } } catch(Exception e) { e.printStackTrace(); } return result; } @Override protected void onPostExecute(Boolean result) { String path = databasePath + databaseFile; if(result) { init_database(path); } }}}
AsyncTask, Android, and SQL
java;android
Some generic Java notes, since I'm not too familiar with Android. databasePath, databaseFile, databaseBaseURL and databaseVersionURL should be constants (all uppercase, with words separated by underscores): private static final String DATABASE_PATH = /data/data/com.sandbox.databaseone/databases/; private static final String DATABASE_FILE = dbone.sqlite; private static final String DATABASE_BASE_URL = http://www.redstalker.com/dbone/; private static final String DATABASE_VERSION_URL = version.txt; Reference: Code Conventions for the Java Programming Language, 9 - Naming Conventions. According to the same Code Conventions, init_database should be initDatabase. If databaseBaseURL and databaseVersionURL hadn't been constants, I'd have named them databaseBaseUrl and databaseBaseVersionUrl. From Effective Java, 2nd edition, Item 56: Adhere to generally accepted naming conventions: While uppercase may be more common, a strong argument can be made in favor of capitalizing only the first letter: even if multiple acronyms occur back-to-back, you can still tell where one word starts and the next word ends. Which class name would you rather see, HTTPURL or HttpUrl? public DatabaseManager(){ database = null;} Initializing fields with null is unnecessary, since null is the default value of references. I'd name the StringBuilder in the doInBackground method (StringBuilder builder = new StringBuilder();) result instead; that would state the purpose of the object. For the same reason I'd rename the boolean result to boolean success, in to responseStream, and d to downloadDatabase. Close your streams in a finally block; otherwise, in case of an earlier error, they won't be closed. The code does databasePath + databaseFile more than once. Create a method for that. I don't know if it is applicable to Android or not, but in Java it's good practice to pass the character set to the constructor of InputStreamReader. Without it, InputStreamReader uses the default charset, which could vary from system to system: BufferedReader reader = new BufferedReader(new InputStreamReader(in, UTF-8)); I'd use Commons IO's IOUtils for the copying. It has a copy(InputStream input, OutputStream output) which you could use in the DownloadDatabase task, and you could use copy(InputStream input, Writer output) in the DatabaseVersionCheck task if you replace the StringBuilder with a StringWriter. In DatabaseVersionCheck.onPostExecute I'd use guard clauses: @Override protected void onPostExecute(String result) { if(result == null) { return; } int version = Integer.parseInt(result); if(version <= currentVersion) { return; } nextVersion = version; DownloadDatabase d = new DownloadDatabase(); d.execute();} It makes the code flatter and more readable. References: Replace Nested Conditional with Guard Clauses in Refactoring: Improving the Design of Existing Code; Flattening Arrow Code. e.printStackTrace() is not best practice; maybe you should inform the user about the error and/or log it. See: Why is exception.printStackTrace() considered bad practice? Is it a bad idea to use printStackTrace() in Android Exceptions? Integer.parseInt could throw NumberFormatException. Maybe the code should handle it.
_softwareengineering.345564
One of the last projects I worked on had a lot of deprecated methods. Are there any good strategies for making sure that the use of these methods actually decreases over time? The example below is Python-specific, but this question really applies to all languages.I've read a little bit in the past about adopting informal policies that apply to every commit such as number of lines not covered by the test suite is not allowed to grow or coverage percentage must not decrease as a means of improving test coverage that doesn't involve dropping everything. That's also sort of the angle that I'm approaching this from. What's a measurable criterion that someone can look at to determine whether their commit is polite or not.Here's what an example might look like. Suppose I have a RequestTask that used to have a group_id, but doesn't anymore. The validate_group_id method is now deprecated and just calls validate_request_id. The group_id instance variable has been replaced with a property. Here's a skeleton file.class RequestTask: ... def validate_request_id(self): ... # deprecated method def validate_group_id(self): return self.validate_request_id() @property def group_id(self): return NoneLet's further suppose that we can't just change all the code that calls the deprecated methods right now, due to time constraints.I want to make sure that code calling these deprecated methods eventually gets changed.What are some good strategies for tracking whether use of deprecated methods and APIs is actually decreasing over time?If necessary, what's a good way to enforce a policy that WILL lead to non-use of deprecated methods?Is it even a good idea to attempt something like this?
Mechanically measuring/enforcing diminishing use of a deprecated API over time?
deprecation
The principle would be the same here as in the test-coverage case. To implement it, you would compile during builds, either statically or dynamically, a list of all functions/methods calling the deprecated method and save it. If that list ever gets a new item, fail the build. To track the calls you can either analyse source code / binaries (e.g. in .NET this is quite straightforward with NUnit) or track the calls live and record them during tests (which relies on a good test suite and the ability to inspect the call stack). For Python in particular, this SO answer shows how to trace these calls dynamically; I doubt source/byte-code analysis would work for such a dynamic language.
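A rough Python sketch of the dynamic-tracking idea described above: record every call site of a deprecated method at test time and compare it against a frozen allowlist, so that any new caller fails the build. The decorator, the module-level registry and the helper names here are hypothetical, introduced only for illustration; they are not part of the original post.

import functools
import inspect
import warnings

_DEPRECATED_CALLERS = set()   # hypothetical registry of call sites seen during a test run

def deprecated(func):
    # Wraps a deprecated function so each call records its caller and emits a warning.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        caller = inspect.stack()[1]
        _DEPRECATED_CALLERS.add(f"{caller.filename}:{caller.function}")
        warnings.warn(f"{func.__name__} is deprecated", DeprecationWarning, stacklevel=2)
        return func(*args, **kwargs)
    return wrapper

def assert_no_new_callers(allowlist):
    # Run at the end of the test suite (or as a build step): fail if a caller
    # outside the previously recorded allowlist has appeared.
    new_callers = _DEPRECATED_CALLERS - set(allowlist)
    if new_callers:
        raise AssertionError(f"new callers of deprecated APIs: {sorted(new_callers)}")

Shrinking the allowlist over time then gives a measurable, enforceable record of diminishing use.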
_softwareengineering.95655
I am creating an internal application for the company I am contracted to. We wish to use a GPLv2-licensed library in this application. Some points: The application is to be used within the confines of the company and will never be available for public use; it is for internal company use only. It will never be sold, ever, so no money will be made directly from selling the code; it's not a product. There are two forms of usage of the application: its native form, which is a console-based exe (which uses the GPL library), and usage via a web interface which calls the executable. The source code will remain closed source (company use only) and be proprietary. I have gone through numerous questions on SO about this (one closed as off-topic and another unmarked from Programmers), but I have had a hard time understanding whether my interpretation of the licence is correct. Based on my understanding thus far, I am permitted to use this library without any concern. I am not modifying the source code, nor am I distributing the application or making it publicly available. The application will not be sold, nor will it be distributed to anyone outside the company (it will, however, be available at our company's offsite DR facility). I am very likely to use the released versions' binaries and not re-compile from source. The following question from the GNU FAQ seems to support my thoughts: Does the GPL require that source code of modified versions be posted to the public? The GPL does not require you to release your modified version, or any part of it. You are free to make modifications and use them privately, without ever releasing them. This applies to organizations (including companies), too; an organization can make a modified version and use it internally without ever releasing it outside the organization. But if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL. Thus, the GPL gives permission to release the modified program in certain ways, and not in other ways; but the decision of whether to release it is up to you. Can any GPLv2-licensed library be used in a company's internal intranet application?
Can any GPLv2 licensed library be used in a company's internal intranet application?
licensing;gpl
null
_unix.367716
NetworkManager lists a Bluetooth network connection I made with my smartphone long ago. I've tried nmcli con delete and rm from /etc/NetworkManager/system-connections/, but it always comes back on NetworkManager restart. How do I get rid of it? It's no longer in /etc/NetworkManager/system-connections/, yet it's still listed by nmcli con show. Deleting it and restarting NM makes it reappear with a new UUID.
Can't get rid of old bluetooth network connection in NetworkManager
networkmanager;bluetooth
null
_codereview.68679
I am testing various sorting algorithms. Right now I am testing shell sort, insertion sort and selection sort. I ran all three algorithms on a randomly-generated list of 1000 integers. The selection sort took 41 seconds, insertion sort took 34 seconds and shell sort sort took over 3 minutes. What can I do to improve my implementation? public class SortAlgorithm{ public void InsertionSort<T>(T[] a) where T : IComparable { for (int i = 0; i < a.Length; i++) { // Exchange a[i] with smallest entry in a[i+1...N). int min = i; // index of minimal entr. for (int j = i + 1; j < a.Length; j++) { if (Less(a[j], a[min])) { min = j; } else if (a.Length < j + 1) { a[j + 1] = a[j]; a[j] = a[min]; } } Show(a); Exch(a, i, min); } } public void ShellSort<T>(T[] a) where T : IComparable { // Sort a[] into increasing order. int N = a.Length; int h = 1; while (h < N / 3) { h = 3 * h + 1; // 1, 4, 13, 40, 121, 364, 1093, .. } while (h >= 1) { // h-sort the array. for (int i = h; i < N; i++) { // Insert a[i] among a[i-h], a[i-2*h], a[i-3*h]... . for (int j = i; j >= h && Less(a[j], a[j - h]); j -= h) Exch(a, j, j - h); Show(a); } h = h / 3; } } public void SelectionSort<T>(T[] a) where T : IComparable { // Sort a[] into increasing order. int n = a.Length; // array length for (int i = 0; i < n; i++) { // Exchange a[i] with smallest entry in a[i+1...N). int min = i; // index of minimal entr. for (int j = i + 1; j < n; j++) if (Less(a[j], a[min])) min = j; Exch(a, i, min); Show(a); } } private static void Exch<T>(T[] a, int i, int j) where T : IComparable { T t = a[i]; a[i] = a[j]; a[j] = t; } public void Show<T>(T[] a) where T : IComparable { // Print the array, on a single line. foreach (T t in a) { Console.Write(t + ); } Console.WriteLine(); } private static bool Less(IComparable v, IComparable w) { return v.CompareTo(w) < 0; } public bool IsSorted(IComparable[] a) { // Test whether the array entries are in order. for (int i = 1; i < a.Length; i++) if (Less(a[i], a[i - 1])) return false; return true; }}
Shell sort seems inefficient
c#;performance;algorithm;sorting
Show() is probably slowing it down. It turns your implementation into \$O(n^3)\$ and, even worse, adds (n * [time taken to access and write to the console stream]) in the last loop. Preparing your string and printing it to the console once would increase the speed; better still, print only when the list is sorted. Requesting and using resources (e.g. streams) during an expensive operation will increase execution time. You are using Console.Write; accessing the raw stream directly would probably be even faster, if need be.
_softwareengineering.185489
First off, I'll admit that I'm a newbie to DDD and need to read the blue book.I'm building a system that has an AggregateRoot of type Match. Each Match can have a collection of Votes and also has a readonly VoteCount property which gets incremented when a user up-votes or down-votes a Match. Since many users could be voting on a Match at the same time, Votes have to be added/removed from the Match and the VoteCount has to be incremented/decremented as one atomic operation involving write locks (with locks handled by the DB). (I need VoteCount as a static value in the database to be queried on efficiently by other processes/components.)It seems to me that if I were adhering to strict DDD, I would be coding this operation as such:An application service would receive a vote request objectThe service would then retrieve the Match object from a Match RepositoryThe service would then call some sort of method on the Match object to add the Vote to the collection and update VoteCount.The Repository would then persist that Match instance back to the DBHowever, this approach is not feasible for my application for 2 main reasons, as I see:I'm using MongoDB on the backend and cannot wrap this read-write operation into a transaction to prevent dirty reads of the Match data and its associated Votes and VoteCount.It's highly inefficient. I'm pulling back the entire object graph just to add a Vote and increment VoteCount. Although this is more efficient in a document db than in a relational one, I'm still doing an unnecessary read operation. Issues 1 & 2 are not a problem when sending a single Vote object to the repository and performing one atomic update statement against Mongo.Could Vote, in this case be considered an aggregate and be deserving of its own repository and aggregate status?
DDD - Aggregate Roots - Dealing with Efficiency and Concurrency
domain driven design;mongodb;aggregate
Could Vote, in this case, be considered an aggregate and be deserving of its own repository and aggregate status? I think this might be the right answer. An aggregate should be a transactional consistency boundary. Is there a consistency requirement between votes on a match? The presence of a Vote collection on a Match aggregate would suggest that there is. However, it seems like one vote has nothing to do with the next. Instead, I would store each vote individually. This way you can use the aggregation functionality of MongoDB to get the count, though I'm not sure whether that is still slow. If it is, then you can aggregate using the Map/Reduce functionality. More generally, this may not be the best fit for DDD. If the domain doesn't consist of complex behavior, there is hardly a reason to try to adapt the DDD tactical patterns (entity, aggregate) to this domain.
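A rough pymongo sketch of the store-each-vote-individually idea from the answer above. The database, collection and field names (voting, votes, match_id) are assumptions made here for illustration; they are not from the original question.

from pymongo import MongoClient

client = MongoClient()                  # assumes a reachable MongoDB instance
db = client["voting"]                   # hypothetical database name

def cast_vote(match_id, user_id, value):
    # One document per vote; there is no shared counter, so no write lock on the Match.
    db.votes.insert_one({"match_id": match_id, "user_id": user_id, "value": value})

def vote_count(match_id):
    # Count on demand instead of keeping a VoteCount field inside the Match aggregate.
    return db.votes.count_documents({"match_id": match_id})

def counts_for_all_matches():
    # Aggregation pipeline: group the vote documents per match.
    return list(db.votes.aggregate([
        {"$group": {"_id": "$match_id", "count": {"$sum": 1}}},
    ]))

If the on-demand count turns out to be too slow for other processes to query, a separate read model (or a periodically refreshed counter) can be built from the same vote documents.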
_codereview.59652
I am new to android testing and would like to try start off in the correct direction, so I am trying to understand if this is the correct way to test a particular method or if there is some best practice that I should be following.Here are some snippets of the class that contains the method I would like to test. public class MapMarker implements Target {private Bus mBus;private RailsMarker mMarker;private Bitmap mBitmap;public String getGravatarUrl() { return mMarker.getGravatarUrl();}public String getUserId() { return mMarker.getUserId();}@Overridepublic void onBitmapLoaded(Bitmap bitmap, LoadedFrom loadedFrom) { System.out.println(String.format(loaded bit map from gravatar url = %s, for userid = %s,getGravatarUrl(), getUserId())); mBitmap = bitmap; mBus.post(new MarkerReadyEvent(this)); }}I am trying to test that when the onBitMapLoaded method is called, it posts a MarkerReadyEvent onto my eventbus.Here is the test method I coded.@Testpublic void testOnBitMapLoaded() { Bus mockBus = mock(Bus.class); RailsMarker railsMarker = new RailsMarkerBuilder().withGravatarUrl(a_gravatar_url) .withUserId(a_user_id) .build(); MapMarker androidMapMarker = new MapMarkerBuilder().withBus(mockBus) .withMarker(railsMarker) .build(); androidMapMarker.onBitmapLoaded(null, null); verify(mockBus).post(isA(MarkerReadyEvent.class));}I am using Mockito to mock the eventbus.I am creating a real instance of the MapMarker class (via https://github.com/mkarneim/pojobuilder)I am also creating a RailsMarker instance to be included in the MapMarker instance. (I realized this in only needed for the println statement but I did not want to remove it just to make the test easier)Is this a good approach or is there some other pattern I should be following?The part that seems a bit strange is that I end up building the instance I am going to test with both a mock object (the Bus) and a real object (the RailsMarker), but I don't see anyway around this.
Is building a test instance from a mix of both mock and real objects OK?
java;unit testing;mocks
You would be testing the MapMarker more in isolation if you injected only mocks. Isolation is important for two things: stability of test results, and more explicit and direct feedback. If a bug were introduced in the RailsMarker, this test would also fail, making it less stable. In case of such a bug there would be multiple test failures (also for RailsMarker and possibly other tests that use this class), making it harder to find the bug. In general I would use mocks for any dependent object that has more logic than just getters and setters. That being said, it looks like the RailsMarker might qualify.
_softwareengineering.271381
To what extent should we create variables in our methods or functions? Do we only create one when we're using the result more than one time, like this? function someFunction(SomeClass $someClass) { $thisVar = $someClass->thisVar(); doSomethingElse($thisVar); echo $thisVar;} Or do we always create one, no matter how many times we're using it (for better readability)? Or do we just call the methods on the dependency directly, like this? function someFunction(SomeClass $someClass) { doSomethingElse($someClass->thisVar()); echo $someClass->thisVar();}
Creating variables in methods/functions
object oriented;php;dependency injection;variables
One reason for introducing a separate variable is to improve readability, assuming the real name of $someClass->thisVar() is not expressive enough. This can make sense even if the function is only called once: function someFunction(SomeClass $someClass) { $explainingName = $someClass->thisVar(); doSomethingElse($explainingName);} Of course, if you think the function name is clear enough, this variant function someFunction(SomeClass $someClass) { doSomethingElse($someClass->selfExplanatoryName());} might be fine. But even in such a case, when both function names are very long, function someFunction(SomeClass $someClass) { doSomethingElseWithAVeryLongFunctionName($someClass->selfExplanatoryButVeryLongName());} might be considered less readable than function someFunction(SomeClass $someClass) { $thisVar = $someClass->selfExplanatoryButVeryLongName(); doSomethingElseWithAVeryLongFunctionName($thisVar);} When you need to reuse the value at least twice, as in your examples, a reason might be that $someClass->thisVar() must be called only once because of side effects, or should be called only once because of a noticeable performance impact. A third reason might be to avoid code duplication. Think of this: function someFunction(SomeClass $someClass) { doSomethingElse($someClass->thisVar($parameter1,$parameter2,$parameter3)); echo $someClass->thisVar($parameter1,$parameter2,$parameter3);} Here $someClass->thisVar($parameter1,$parameter2,$parameter3) appears twice, which violates the DRY principle. Your second example is an edge case: it is also a violation of the DRY principle, but not a severe one, and to avoid it you have to introduce more code than it actually saves you. So it is a trade-off with no clear best solution.
_scicomp.24981
I am doing a little project on solving the heat equation with the finite-volume method on a solid cube, and I converted the polyhedral mesh of the cube to an OpenFOAM mesh. I have Python code where I parse the points, faces, owner, neighbour and boundary files of the OF case mesh, and I've managed to create a hash map (let's call it CellFacesMap) that maps each cell number to the list of its owned faces and the faces that neighbour it. Now I am trying to generate a hash map (let's call it CellAdjacentsMap) that maps each cell to all its adjacent cells, in order to apply the discretized heat equation to each cell (and also to generate the volume weighting factors). The only way I could think of so far is to visit each cell in CellFacesMap; for its owned faces I get the cells that neighbour those faces (adjacent cells), and for the faces that neighbour the cell I get the cells that own those faces (also adjacent cells). That means I visit CellFacesMap over and over for each cell to check its faces. But this method is costly (time-wise it took 3.5 minutes to generate the map for a 4000+ cell mesh). So is there any faster way to generate CellAdjacentsMap?
Getting adjacent cells map for an unstructured polyhedral mesh
finite volume;mesh;openfoam;unstructured mesh;mesh traversal
I'm assuming you are starting with a list of cell definitions, say as a list of vertices defining each cell plus a type defining the topology of each cell. As part of the topology definition for each cell, you can easily get the faces for that cell defined, say, as a list of vertices. The first step in creating the CellAdjacentsMap that you need is to create a map that I'll call FaceCellMap, which lists the cells (either one or two) that each face belongs to. You do this by iterating over all cells, adding each cell's faces to the map, one by one. There are some implementation details in forming this FaceCellMap. The first is: what data structure do you use? If you have a general HashMap capability, the key can be the face definition and the value can be a length-two array of cell IDs (one of the entries may be zero). The second issue is: how do you compare faces? You can use a list of vertices, but you have to implement the comparison function in such a way that it is insensitive to the starting vertex and to whether you traverse the vertex list in clockwise or counterclockwise order. But after you have created FaceCellMap, creating CellAdjacentsMap is straightforward. You iterate through all the cells. For each cell, you iterate through all its faces. For each face, you check the FaceCellMap for that face to see if it has an adjacent cell. If so, you add it to the list of adjacencies for the current cell.
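A compact Python sketch of the two-pass approach described above. Faces are canonicalised here by sorting their vertex IDs, which is one way to make the face key insensitive to the starting vertex and to winding order; the cell_faces input layout is an assumption about the questioner's parsed data, not something from the original post.

from collections import defaultdict

def build_cell_adjacency(cell_faces):
    # cell_faces: dict mapping cell id -> list of faces, each face an iterable of vertex ids.
    # Pass 1: FaceCellMap - which cell(s) each face belongs to.
    face_cells = defaultdict(list)
    for cell, faces in cell_faces.items():
        for face in faces:
            key = tuple(sorted(face))          # canonical form of the face
            face_cells[key].append(cell)

    # Pass 2: CellAdjacentsMap - internal faces are shared by exactly two cells.
    adjacency = defaultdict(set)
    for cells in face_cells.values():
        if len(cells) == 2:                    # boundary faces have a single owner
            a, b = cells
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

Both passes are linear in the total number of faces, so the whole adjacency map is built in one sweep instead of re-scanning CellFacesMap for every cell.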
_cstheory.25219
Can I use Gaussian elimination to compute matrix inverse over the ring $\mathbb{Z}_{p^k}$ (ring of residues modulo $p^k$) where $p$ is prime and $k$ is an integer greater than $1$?Such matrix is invertible if and only if its determinant is not congruent to zero modulo $p$. So we can check if it is invertible by constructing the second matrix — matrix of residues modulo $p$ of elements of the original matrix. The second matrix is over $\mathbb{Z}_p$ so we can use Gaussian elimination to compute its determinant. The determinant can be expressed as a polynomial formula (Laplace expansion) so the determinant of the second matrix is the determinant of the original one modulo $p$ and so the original matrix is invertible if and only if the determinant of the second one is not zero.I think that using Gaussian elimination to compute the inverse in this case is correct and it is quite easy to prove. It uses exactly the same steps as inverting the aforementioned matrix over $\mathbb{Z}_p$. Gaussian elimination over the finite field $\mathbb{Z}_p$ fails if and only if the matrix is not invertible and it fails because all the elements under the pivot position are zero. It means that in the original matrix all these elements are congruent to zero modulo p so they are not invertible and Gaussian elimination fails over $\mathbb{Z}_{p^k}$. If the elimination succeeds over $\mathbb{Z}_p$ then it also succeeds over $\mathbb{Z}_{p^k}$ — if you can find nonzero pivot element in the matrix over $\mathbb{Z}_p$ then the equivalent element in the original matrix is invertible.Moreover I have implemented it and it works. For invertible matrices it gives their inverses modulo $\mathbb{Z}_{p^k}$. For non-invertible it fails. I have conducted many tests for different matrix sizes, different $p$ and different $k$. It is quite simple so there are two possibilities: either there is something wrong in my reasoning, or someone must have described it somewhere. However I could't find any reference for that fact. I've found some articles and books that discuss the computation of matrix inverses and determinants modulo integers (e.g. Pan, Stewart — Algebraic and numerical techniques for the computation of matrix determinants, von zur Gathen, Gerhard — Modern computer algebra) but I haven't found it there.I would like to find some reference that clearly states that you can do Gaussian elimination over $\mathbb{Z}_{p^k}$ or some argument against it.Thanks in advance.
Gaussian elimination for inverting matrices modulo prime power
reference request;linear algebra
null
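A small Python illustration of the procedure described in the question above: Gauss-Jordan elimination on the augmented matrix over Z_{p^k}, always choosing a pivot that is a unit, i.e. not divisible by p. It is offered only as an experimental sketch, not as the reference the question asks for; pow(x, -1, m) (Python 3.8+) supplies the modular inverse of a unit pivot.

def inverse_mod_prime_power(A, p, k):
    # Invert an n x n integer matrix A modulo p**k by Gauss-Jordan elimination.
    m = p ** k
    n = len(A)
    aug = [[A[i][j] % m for j in range(n)] + [int(i == j) for j in range(n)]
           for i in range(n)]                       # augmented matrix [A | I]
    for col in range(n):
        # A usable pivot is any entry in this column (from row col down)
        # that is a unit mod p**k, i.e. not divisible by p.
        piv = next((r for r in range(col, n) if aug[r][col] % p != 0), None)
        if piv is None:
            raise ValueError("matrix is not invertible modulo p**k")
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], -1, m)             # modular inverse of the pivot
        aug[col] = [(x * inv) % m for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(aug[r][j] - f * aug[col][j]) % m for j in range(2 * n)]
    return [row[n:] for row in aug]                 # right half is A^{-1} mod p**k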
_codereview.139931
I need to perform SQL query which will get products from database based on few dimensions which I will pass in URL.import itertoolssql_where = ''dimensions = ['a','b', 'c', 'd', 'e', 'f', 'f2', 'g']for dimension in dimensions: for onetwo, direction in zip( range(1, 3), itertools.cycle('><') ): globals()[dimension+str(onetwo)] = str( request.GET.get( dimension+str(onetwo) ) ) sql_where += AND ( `{dimension}` {direction}= '{get_dimension}' OR '{get_dimension}' = ('None' OR '') ).format(dimension=dimension, direction=direction, get_dimension=globals()[dimension+str(onetwo)])sql = SELECT * FROM t_wishbone_dimensions WHERE 1=1 + sql_whereprint sqlExample output for /search?a1=34&a2=37&c1=50&c2=75SELECT * FROM t_wishbone_dimensions WHERE 1=1 AND ( `a` >= '34' OR '34' = ('None' OR '') ) AND ( `a` <= '37' OR '37' = ('None' OR '') ) AND ( `b` >= 'None' OR 'None' = ('None' OR '') ) AND ( `b` <= 'None' OR 'None' = ('None' OR '') ) AND ( `c` >= '50' OR '50' = ('None' OR '') ) AND ( `c` <= '75' OR '75' = ('None' OR '') ) AND ( `d` >= 'None' OR 'None' = ('None' OR '') ) AND ( `d` <= 'None' OR 'None' = ('None' OR '') ) AND ( `e` >= 'None' OR 'None' = ('None' OR '') ) AND ( `e` <= 'None' OR 'None' = ('None' OR '') ) AND ( `f` >= 'None' OR 'None' = ('None' OR '') ) AND ( `f` <= 'None' OR 'None' = ('None' OR '') ) AND ( `f2` >= 'None' OR 'None' = ('None' OR '') ) AND ( `f2` <= 'None' OR 'None' = ('None' OR '') ) AND ( `g` >= 'None' OR 'None' = ('None' OR '') ) AND ( `g` <= 'None' OR 'None' = ('None' OR '') )What do you think about my solution?
Generate SQL query by loop
python;mysql
null
_unix.386465
Recently we have been trying to tune ZFS on a machine with 256 GB of RAM. Our current ZFS memory variables for the ARC are max 255Gb and min 64Mb. A main issue we face is that during peak times workflows get aborted with not enough memory (there are several flows that need up to 55G of memory). When we tried to limit the max value to 4G we saw degraded, slow performance. Output of uname -a: SunOS xxxxx 5.11 11.1 sun4v sparc sun4v; Publisher: solaris; Version: 0.5.11 (Oracle Solaris 11.1 SRU 1.4); Build Release: 5.11; Branch: 0.175.1.1.0.4.0. Output of psrinfo -pv: The physical processor has 2 cores and 16 virtual processors (0-15); The core has 8 virtual processors (0-7); The core has 8 virtual processors (8-15); SPARC-T4 (chipid 0, clock 2848 MHz). I am looking for a rule of thumb for configuring the min/max ARC values. Should the ARC get a fixed amount of memory (min and max the same), or should we check the maximum memory used per timeslot (1h or 1/2h) and go with that value plus a cap of ~ +10%? Edit 1: It's an application server with Informatica PowerCenter 9.6.1 installed. Our current hit rate is above 96%.
ZFS ARC memory tuning
solaris;memory;performance;zfs;zfs arc
null
_computerscience.2037
I am attempting to model a simple graphics pipeline (i.e. Local->Word->View->Screen->2D spaces).I've been looking at the algorithm required to transform from world to view-space and using the following transformAlthough I have been using U,V,N,C notation rather than Xc,Yc,Zc,e.And my matrix transformation at first glance, appears correct: When I plot the vectors onto world space, I get a vector from the camera position to the point we're looking at, and two correctly placed new X and Y vectors (see the left hand image below).By looking along the line of sight vector (such that it disappears,and the focus & camera points overlap), as in the middle figure, should (I believe?) give an accurate example of what the resultant viewspace transformation should look like, with the green vertical and right-pointing vectors showing the new X & Y vectors.However - when I actually do the view space transform I don't get the middle figure, instead, the resultant transform seems to be looking at the object from a different position possibly aligned with one of the axes? This is shown in the right hand figure - which is actually looking at the object from a slightly lower perspective My question is essentially what have I done wrong, or am I misunderstanding both the transform and thus my results?Thanks very much,DavidUPDATESAfter stepping through the code - I discovered that my source book was having me convert the line-of-sight vector to spherical coords and then they were being converted straight back again to create R (viewspace coordination matrix). I've eliminated this redundancy (and potential source of errors) but it hasn't solved the problem...clear; clc; close all;%======Create World Space (hard-coded values for demo) ===========ws_vtx = [0,2,0,2,1,0.5,1.5,0.5,1.5,0.5,1.5,0.5,1.5; 0,0,0,0,-2,0,0,2,2,0,0,2,2; 0,0,2,2,1,0.5,0.5,0.5,0.5,1.5,1.5,1.5,1.5];ws_fcs = [1,2,4,3,3,1,6,6,6,6,7,7,9,8,8,8,10,10; 2,4,3,1,4,4,9,8,7,11,9,13,8,12,10,6,11,13; 5,5,5,5,1,2,7,9,11,10,13,11,13,13,12,10,13,12];%==================Create view matrix===================focus = [1.5,0,1.5]; %The point we're looking atCx = 3; Cy = -3; Cz = 3; %Position of cameraVspec = [0;0;1]; %Specified up direction for view space (i.e. 
camera orientation)vector = focus - [Cx,Cy,Cz]; %Vector camera to focus point p = sqrt(vector(1)^2 + vector(2)^2 + vector(3)^2); %Total magnitude of vectorN = vector'/p;V = Vspec - cross(cross(Vspec,N),N); %Create new up directionU = cross(N,V); %Create new x-axis%Create rotational matrix to view spaceR= [U(1),U(2),U(3),0; % U is direction of camera space X axis V(1),V(2),V(3),0; % V is direction of camera space Y axis N(1),N(2),N(3),0; 0 , 0, 0,1];Tr = [1,0,0,-Cx; 0,1,0,-Cy; 0,0,1,-Cz; 0,0,0, 1];T = R*Tr; %Total view transform = rotation * translation%============Plot the camera vectors & World Space=================grid on; hold on; xlabel('x'); ylabel('y'); zlabel('z');scatter3(Cx,Cy,Cz,'s'); %Plot the camera positionplot3([Cx, Cx+p*N(1)],[Cy, Cy+p*N(2)],[Cz, Cz+p*N(3)],'g'); %Plot camera vectorsplot3([Cx, Cx+V(1)],[Cy, Cy+V(2)],[Cz, Cz+V(3)],'g');plot3([Cx, Cx+U(1)],[Cy, Cy+U(2)],[Cz, Cz+U(3)],'g');scatter3(ws_vtx(1,:),ws_vtx(2,:),ws_vtx(3,:)) %Plot all the pointspatch('Faces',ws_fcs','Vertices',ws_vtx','Facecolor', 'none');for i = 1:length(ws_vtx) str = sprintf('%d',i); text(ws_vtx(1,i),ws_vtx(2,i),ws_vtx(3,i), str,'FontSize',14, 'Color','r');end%====================Viewspace Transform===========================ws_vtx = T*vertcat(ws_vtx,(ones(1,length(ws_vtx)))); %Transform worldspacews_vtx = ws_vtx(1:3,:); %Remove homogenous columnposition = T*[Cx;Cy;Cz;1]; focus = T*horzcat(focus,1)'; %Transform set pointsposition = position(1:3,:); focus = focus(1:3,:); %Remove homogenous columns%================Plot new viewspace===============================figure(); grid on; hold on; xlabel('x'); ylabel('y'); zlabel('z');scatter3(position(1),position(2),position(3),'s'); %Plot new camera vectorsscatter3([position(1); position(1)+focus(1)],... [position(2); position(2)+focus(2)],[position(3); position(3)+focus(3)],'s');plot3([position(1), focus(1)],[position(2), focus(2)],[position(3), focus(3)],'g');patch('Faces',ws_fcs','Vertices',ws_vtx','Facecolor', 'none');
Correct view-space transform
transformations;matrices;matlab;vectors
Your transform looks correct. To transform from world to eye coordinates, I always use a lookat transform, defined by 3 vectors: $\bf{e}$, $\bf{a}$ and $\bf{u}$; in English, the eye position, the point it's looking at, and an up vector, which must not be in the same direction as $\bf{a} - \bf{e}$ (more specifically, not a multiple of it). The space is defined using $\bf{z} = {{(\bf{e} - \bf{a})}\over{|\bf{e} - \bf{a}|}}$, which means the negative z axis points in the direction of what I'm looking at (this works well for OpenGL); $\bf{x} = {{\bf{u} \times \bf{z}}\over{|\bf{u} \times \bf{z}|}}$ and $\bf{y} = \bf{z} \times \bf{x}$, which, again, for OpenGL, makes $\bf{x}$ rightward on the screen and $\bf{y}$ upward. Transforming into eye coordinates is then a matter of subtracting the eye's position and projecting into the space defined by the vectors above: $$\begin{bmatrix} \bf{x}_x & \bf{x}_y & \bf{x}_z & 0 \\ \bf{y}_x & \bf{y}_y & \bf{y}_z & 0 \\ \bf{z}_x & \bf{z}_y & \bf{z}_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 & -\bf{e}_x \\ 0 & 1 & 0 & -\bf{e}_y \\ 0 & 0 & 1 & -\bf{e}_z \\ 0 & 0 & 0 & 1 \end{bmatrix} $$ which is exactly what you have to begin with.
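A small NumPy sketch of that construction, given as an illustrative helper rather than anything from either post; it follows the answer's OpenGL-style convention with the camera looking down -z.

import numpy as np

def look_at(eye, at, up):
    eye, at, up = (np.asarray(v, dtype=float) for v in (eye, at, up))
    z = eye - at
    z /= np.linalg.norm(z)              # camera looks down -z
    x = np.cross(up, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                  # unit length by construction
    rot = np.identity(4)
    rot[0, :3], rot[1, :3], rot[2, :3] = x, y, z
    trans = np.identity(4)
    trans[:3, 3] = -eye
    return rot @ trans                  # world -> eye transform

# e.g. look_at([3, -3, 3], [1.5, 0, 1.5], [0, 0, 1]) uses the question's camera,
# focus and up values, though with this answer's axis convention rather than U,V,N.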
_unix.128691
We have an RHEL 6.5 system which boots to the point of loading APF firewall and then just hangs indefinitely. The console shows the following:...Loading autofs4: [OK]Starting automount: [OK]Starting SWsoft control panels server... [OK]Starting APF: ip_tables: (C) 2000-2006 Netfilter Core Teamnf_conntrack version 0.5.0 (16384 buckets, 65536 max)It then waits indefinitely on the screen above. The only way we have been able to work around this was to disable the APF service from starting at boot./var/log/messages shows the following:May 9 04:15:29 sapphire acpid: waiting for events: event logging is offMay 9 04:15:29 sapphire acpid: client connected from 6777[68:68]May 9 04:15:29 sapphire acpid: 1 client rule loadedMay 9 04:15:30 sapphire automount[6797]: lookup_read_master: lookup(nisplus): couldn't locate nis+ table auto.masterMay 9 04:15:31 sapphire kernel: ip_tables: (C) 2000-2006 Netfilter Core TeamMay 9 04:15:31 sapphire kernel: nf_conntrack version 0.5.0 (16384 buckets, 65536 max)/var/log/apf_log consistently shows the following as the last lines before boot hangs:May 09 04:15:33 server apf(6915): {dshield} downloading http://feeds.dshield.org/top10-2.txtMay 09 04:15:33 server apf(6915): {dshield} parsing top10-2.txt into /etc/apf/ds_hosts.rulesMay 09 04:15:33 server apf(6915): {dshield} loading ds_hosts.rulesMay 09 04:30:50 server apf(2681): {dshield} downloading http://feeds.dshield.org/top10-2.txtMay 09 04:30:51 server apf(2681): {dshield} parsing top10-2.txt into /etc/apf/ds_hosts.rulesMay 09 04:30:51 server apf(2681): {dshield} loading ds_hosts.rulesMay 09 04:48:45 server apf(2653): {dshield} downloading http://feeds.dshield.org/top10-2.txtMay 09 04:48:46 server apf(2653): {dshield} parsing top10-2.txt into /etc/apf/ds_hosts.rulesMay 09 04:48:46 server apf(2653): {dshield} loading ds_hosts.rulesSo it would seem that loading of the DShield rules is causing boot to hang.After successfully booting, both iptables and APF are running fine.Can anyone suggest how to troubleshoot this?
RHEL 6.5 won't boot after loading APF firewall
rhel;iptables
This issue appears to have fixed itself, as rebooting with the same configuration now works fine. Since the system boot was hanging when loading the dynamically downloaded ds_hosts.rules file from DShield, and nothing else has changed on the system, we can only assume there was a problem with that file which has since been corrected. The temporary workaround, while the server would not boot, was to disable APF at startup by booting into single-user mode and then running: chkconfig apf off
_softwareengineering.64592
There is an overhead associated with continuous integration, e.g., set up, re-training, awareness activities, stoppage to fix bugs that turn out to be data issues, enforced separation of concerns programming styles, etc.At what point does continuous integration pay for itself?EDIT: These were my findingsThe set-up was CruiseControl.Net with Nant, reading from VSS or TFS. Here are a few reasons for failure, which have nothing to do with the setup:Cost of investigation: The time spent investigating whether a red light is due a genuine logical inconsistency in the code, data quality, or another source such as an infrastructure problem (e.g., a network issue, a timeout reading from source control, third party server is down, etc., etc.)Political costs over infrastructure: I considered performing an infrastructure check for each method in the test run. I had no solution to the timeout except to replace the build server. Red tape got in the way and there was no server replacement. Cost of fixing unit tests: A red light due to a data quality issue could be an indicator of a badly written unit test. So, data dependent unit tests were re-written to reduce the likelihood of a red light due to bad data. In many cases, necessary data was inserted into the test environment to be able to accurately run its unit tests. It makes sense to say that by making the data more robust then the test becomes more robust if it is dependent on this data. Of course, this worked well!Cost of coverage, i.e., writing unit tests for already existing code: There was the problem of unit test coverage. There were thousands of methods that had no unit tests. So, a sizeable amount of man days would be needed to create those. As this would be too difficult to provide a business case, it was decided that unit tests would be used for any new public method going forward. Those that did not have a unit test were termed 'potentially infra red'. An intestesting point here is that static methods were a moot point in how it would be possible to uniquely determine how a specific static method had failed.Cost of bespoke releases: Nant scripts only go so far. They are not that useful for, say, CMS dependent builds for EPiServer, CMS, or any UI oriented database deployment.These are the types of issues that occured on the build server for hourly test runs and overnight QA builds. I entertain that these to be unnecessary as a build master can perform these tasks manually at the time of release, esp., with a one man band and a small build. So, single step builds have not justified use of CI in my experience. What about the more complex, multistep builds? These can be a pain to build, especially without a Nant script. So, even having created one, these were no more successful. The costs of fixing the red light issues outweighed the benefits. Eventually, developers lost interest and questioned the validity of the red light.Having given it a fair try, I believe that CI is expensive and there is a lot of working around the edges instead of just getting the job done. It's more cost effective to employ experienced developers who do not make a mess of large projects than introduce and maintain an alarm system. This is the case even if those developers leave. It doesn't matter if a good developer leaves because processes that he follows would ensure that he writes requirement specs, design specs, sticks to the coding guidelines, and comments his code so that it is readable. All this is reviewed. 
If this is not happening then his team leader is not doing his job, which should be picked up by his manager, and so on. For CI to work, it is not enough to just write unit tests, attempt to maintain full coverage, and ensure a working infrastructure for sizable systems. The bottom line: one might question whether fixing as many bugs as possible before release is even desirable from a business perspective. CI involves a lot of work to capture a handful of bugs that the customer could identify in UAT, or that the company could get paid for fixing as part of a client service agreement once the warranty period expires anyway.
How many developers before continuous integration becomes effective for us?
teamwork;continuous integration
null
_codereview.157763
I'm using slim php and i have this simple paginated end point to get all clients, i feel like i could do some things here better, so i would appreciate all suggestions, if possible please give me code example with suggested improvements. Thank you in advance.$app->get('/api/clients', function ($request, $response, $args) { require_once 'dbConnect.php'; require_once 'shared/securityService.php'; $queryParams = $request->getQueryParams(); $total = getTotalCount($mysqli, $queryParams[group]); $pageSize = $queryParams[pageSize]; $totalPages = ceil($total / $pageSize); $offset = ($queryParams[page] - 1) * $pageSize; if(is_numeric($queryParams[group])) { $stmt = $mysqli->prepare(SELECT id, firstName, lastName, birthDate, phoneNumber, address, contactPerson, contactPersonPhoneNumber FROM client WHERE groupId = ? ORDER BY id LIMIT $pageSize OFFSET $offset); $stmt->bind_param(i, $queryParams[group]); } else { $stmt = $mysqli->prepare(SELECT id, firstName, lastName, birthDate, phoneNumber, address, contactPerson, contactPersonPhoneNumber FROM client ORDER BY id LIMIT $pageSize OFFSET $offset); } $stmt->execute(); $stmt->bind_result($id, $firstName, $lastName, $birthDate, $phoneNumber, $address, $contactPerson, $contactPersonPhoneNumber); $data = null; while ($stmt->fetch()) { $data['items'][] = array( 'id' => $id, 'firstName' => $firstName, 'lastName' => $lastName, 'birthDate' => $birthDate, 'phoneNumber' => $phoneNumber, 'address' => $address, 'contactPerson' => $contactPerson, 'contactPersonPhoneNumber' => $contactPersonPhoneNumber ); } $data['totalItems'] = $total; $stmt->close(); $mysqli->close(); echo json_encode($data);});function getTotalCount($mysqli, $groupId) { if(is_numeric($groupId)) { $totalStmt = $mysqli->prepare(SELECT COUNT(id) FROM client WHERE groupId = ?); $totalStmt->bind_param(i, $groupId); } else { $totalStmt = $mysqli->prepare(SELECT COUNT(id) FROM client); } $totalStmt->execute(); $totalStmt->bind_result($total); $totalStmt->fetch(); $totalStmt->close(); return $total;}
Paginated method to get all items
php;mysql;mysqli;slim
null
_webapps.90668
I'm using Gmail. I'd like to receive a desktop notification for every new email. How can I enable these notifications?
How to enable desktop notifications for Gmail
gmail;notifications
You have to turn desktop notifications for new emails on in your Gmail settings. From https://support.google.com/mail/answer/1075549?hl=en : Turn desktop notifications on or off: Open Gmail. In the top-right corner, click the gear icon (Settings). Select Settings. Scroll down to the Desktop Notifications section (stay in the General tab). Choose one of the options: New mail notifications on (if you use inbox categories, you'll only be notified about messages in Primary); Important mail notifications on (you'll be notified about every incoming message marked as important; learn more about importance ranking); Mail notifications off. Click Save Changes at the bottom of the page.
_unix.116745
I'm following this tutorial. What do the IPs mean after the ifconfig directive here?
dev tun0
ifconfig 10.9.8.1 10.9.8.2
secret /etc/openvpn/static.key
Can I use external IPs (assuming I'm configuring this on a VPS/dedicated server)?
OpenVPN static key ip meaning/order?
networking;openvpn
Those are the IP addresses of the local and remote tunnel endpoints (in that order). They're used for routing (and of course the local one is a local IP address, just like on any other interface). You could use public IPs, but it's a waste of IP addresses in most cases: you can use internal (RFC1918) addresses even if you're routing a public subnet over the tunnel. They're both /32's, so they don't need to be on the same subnet. E.g., if you're trying to give a public address to the client, you can have only that one be a public IP. As an example of how they're used in routing, let's say that you have a network like this:
10.0.0.0/24 ------ fw1 ------ INET ----- fw2 --- 10.1.0.0/24
10.0.1.0/24 -------|
200.200.200.0/24 -/
You build an OpenVPN tunnel between fw1 and fw2. Let's say you have fw1 as 10.255.255.1 and fw2 as 10.255.255.2. On fw1, you'd have ifconfig 10.255.255.1 10.255.255.2 (on fw2, it'd be in the other order). On fw1, you'd have routes like:
10.1.0.0/24 via 10.255.255.2 dev tun0
and on fw2:
10.0.0.0/24 via 10.255.255.1 dev tun0
10.0.1.0/24 via 10.255.255.1 dev tun0
200.200.200.0/24 via 10.255.255.1 dev tun0
You can add those routes however you like: by the route option in the OpenVPN config, by hand using ip route add, by using a routing daemon like Quagga, etc.
_unix.286757
I'm searching for an Ubuntu (or other Linux distribution) VM with Maven, a JDK, a code editor, etc. pre-installed. It's pretty annoying to install all these tools every time you set up a new development environment, so I was wondering: are there pre-built VM images for VirtualBox?
Pre-built developer ubuntu VM
linux;ubuntu;virtualbox;virtual machine
null