Dataset columns: id (string, 5-27 chars), question (string, 19-69.9k chars), title (string, 1-150 chars), tags (string, 1-118 chars), accepted_answer (string, 4-29.9k chars, or null).
_unix.165521
I am fairly new to networking and penetration testing; however, I'm really interested in the field and would like to know which distribution of Linux would be best to work with for security testing, and why. I've heard that Backtrack is probably the best, but I'm not sure. Any help would be really appreciated.
Which Linux should I choose for penetration testing and networking
linux;security
I would strongly suggest Kali (the successor to BackTrack): it already ships with a lot of tools for that purpose, it is still quite easy to handle, and there are plenty of tutorials available for it.
_webapps.80545
I archived a Trello list that had on it about 20 cards. I want to see all the cards on it. I don't mind unarchiving it for this. But that doesn't seem to be working. When I go to the archive, the list item is there, but there are no cards on it. There's no description or comments, either. I'm new to Trello. I'm guessing maybe that bit is normal. Is it? If I try to move it back to the board, it doesn't appear on the board. But the move does appear in the Activity list. I can click on it from there, and still there are no cards showing or description or comments. I tried copying it to another board. It also didn't appear there. There's no Activity item saying that the cards have been deleted. How do I get the cards back / view the cards? And also the list with description and comments? Here are screenshots:
http://screencast.com/t/OmNb2LJTj
http://screencast.com/t/BdABS9YR
Archived Trello list. Cards disappeared. List doesn't appear back on board
trello
null
_scicomp.21498
I've been given a second-order non-linear ODE: $$\frac{d^{2}\theta(s)}{ds^{2}} = sf_{g}\cos{\theta} + sf_{x}\cos{\phi}\sin{\theta}$$ where $f_{g}, f_{x}$ and $\phi$ are constants. The boundary conditions for $[0,L]$ are: $\frac{d\theta}{ds}\Big|_{s=0} = 0$ and $\theta(L) = \theta_L$. I will be repeating this numerous times for different values of $L$, implementing the solution method as a function. The $\theta_L$ is just a constant assumed known each time. I am unsure how to proceed with these boundary conditions (are they Neumann, Dirichlet etc.? I don't think so). I struggle when I try to convert the BVP into an IVP for the shooting method, and I've read the documentation for bvp4c and ode45 over and over with no progress on this problem. I just can't seem to get started here. Could anyone give me some pointers or help? Thank you very much in advance.
Numerical method for a BVP with mixed boundary conditions (MATLAB)
matlab;boundary conditions;numerical modelling
To start, you could change it into a system of first-order ODEs: $$Y(s) := \begin{pmatrix} \theta'(s)\\ \theta(s) \end{pmatrix} = \begin{pmatrix} Y_1(s)\\ Y_2(s) \end{pmatrix}\,,$$ $$Y'(s) = \begin{pmatrix} s f_g \cos(Y_2(s)) + s f_x \cos \phi \sin(Y_2(s))\\ Y_1(s) \end{pmatrix}\,,$$ $$Y(0) = \begin{pmatrix} 0\\ t \end{pmatrix}\,.$$ Then apply the shooting method, parametrised by $t$: use different values of $t$ as the initial condition $\theta(0)$ and integrate the resulting IVP. An approximate value of $t$ can be found in the style taught in this lecture. The easiest root-finding method is the bisection method. For that you should know a range $[t_a, t_b]$ in which the value of $t$ lies. Substitute $t_a$, $t_b$ and $t_c$ (the midpoint of $t_a$ and $t_b$) as three different initial conditions, solve each IVP using ode45, check whether you are getting closer to $Y_2(L)=\theta_L$, and then reduce the search range in the bisection algorithm until you get the desired accuracy.
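As a minimal MATLAB sketch of that recipe (the function name, the argument list and the bracketing interval [ta, tb] are my own choices, not part of the original answer), one can integrate the IVP with ode45 for a guessed t and drive the mismatch at s = L to zero with a one-dimensional root finder; fzero is used below, but a hand-written bisection loop works just as well:
function theta0 = solve_by_shooting(fg, fx, phi, L, thetaL, ta, tb)
% Shooting method for theta'' = s*fg*cos(theta) + s*fx*cos(phi)*sin(theta)
% with theta'(0) = 0 and theta(L) = thetaL.
% [ta, tb] must bracket the unknown initial value t = theta(0),
% i.e. residual(ta) and residual(tb) must have opposite signs.
    rhs = @(s, Y) [s*fg*cos(Y(2)) + s*fx*cos(phi)*sin(Y(2)); Y(1)];  % Y = [theta'; theta]
    residual = @(t) shoot(t) - thetaL;        % mismatch at s = L for a given guess t
    theta0 = fzero(residual, [ta tb]);        % root of the mismatch = correct theta(0)

    function thetaEnd = shoot(t)
        [~, Y] = ode45(rhs, [0 L], [0; t]);   % IVP with theta'(0) = 0, theta(0) = t
        thetaEnd = Y(end, 2);                 % theta(L) for this guess
    end
end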
_codereview.53875
Here is the solution to generating possible moves and keeping the king safe.If someone is willing to look it over and come with some suggestion to perhaps improve it, I would appreciate it.Full source code is at GitHubKing:@Overridepublic Collection<Square> generatePossibleMoves() { possibleMoves.clear(); List<Square> moves = new ArrayList<>(); int[][] offsets = { {1, 0}, {0, 1}, {-1, 0}, {0, -1}, {1, 1}, {-1, 1}, {-1, -1}, {1, -1} }; for (int[] o : offsets) { Square square = super.getSquare().neighbour(o[0], o[1]); if (square != null && (square.getPiece() == null || isOpponent(square.getPiece()))) { moves.add(square); } } possibleMoves.addAll(moves); if (getSquare().isSelected()) { Piece[] pieces = { PieceType.PAWN.create(getPieceColor()), PieceType.ROOK.create(getPieceColor()), PieceType.BISHOP.create(getPieceColor()), PieceType.KNIGHT.create(getPieceColor()), PieceType.QUEEN.create(getPieceColor()), PieceType.KING.create(getPieceColor())}; Piece oldKing = this; getSquare().removePiece(); for (Square kingMove : moves) { if (kingMove.isEmpty()) { for (Piece piece : pieces) { piece.putPieceOnSquareFirstTime(kingMove); piece.generatePossibleMoves(); for (Square enemy : piece.getPossibleMoves()) { if (!enemy.isEmpty() && enemy.getPiece().isOpponent(piece) && enemy.getPiece().getTypeNumber() == piece.getTypeNumber()) { enemy.setBackground(Color.BLUE); possibleMoves.remove(kingMove); break; } } } kingMove.removePiece(); } else if (isOpponent(kingMove.getPiece())) { Piece oldPiece = kingMove.getPiece(); for (Piece piece : pieces) { kingMove.removePiece(); piece.putPieceOnSquareFirstTime(kingMove); piece.generatePossibleMoves(); for (Square square1 : piece.getPossibleMoves()) { if (!square1.isEmpty() && square1.getPiece().isOpponent(piece) && square1.getPiece().getTypeNumber() == piece.getTypeNumber()) { possibleMoves.remove(kingMove); break; } } } kingMove.removePiece(); oldPiece.putPieceOnSquareFirstTime(kingMove); } } oldKing.putPieceOnSquareFirstTime(getSquare()); } return possibleMoves;}Bishop@Overridepublic Collection<Square> generatePossibleMoves() { int row = super.getSquare().ROW; int column = super.getSquare().COLUMN; possibleMoves.clear(); //all possible moves in the down positive diagonal for (int j = column + 1, i = row + 1; j < Board.SIZE && i < Board.SIZE; j++, i++) { Square square = super.getSquare().getBoardSquare(i, j); if (square.getPiece() == null) { possibleMoves.add(square); } else if (isOpponent(square.getPiece())) { possibleMoves.add(square); break; } else { break; } } //all possible moves in the up positive diagonal for (int j = column - 1, i = row + 1; j > -1 && i < Board.SIZE; j--, i++) { Square square = super.getSquare().getBoardSquare(i, j); if (square.getPiece() == null) { possibleMoves.add(square); } else if (isOpponent(square.getPiece())) { possibleMoves.add(square); break; } else { break; } } //all possible moves in the up negative diagonal for (int j = column - 1, i = row - 1; j > -1 && i > -1; j--, i--) { Square square = super.getSquare().getBoardSquare(i, j); if (square.getPiece() == null) { possibleMoves.add(square); } else if (isOpponent(square.getPiece())) { possibleMoves.add(square); break; } else { break; } } //all possible moves in the down negative diagonal for (int j = column + 1, i = row - 1; j < Board.SIZE && i > -1; j++, i--) { Square square = super.getSquare().getBoardSquare(i, j); if (square.getPiece() == null) { possibleMoves.add(square); } else if (isOpponent(square.getPiece())) { possibleMoves.add(square); break; } else { break; } } return 
possibleMoves;}Knight@Overridepublic Collection<Square> generatePossibleMoves() { possibleMoves.clear(); int[][] offsets = { {-2, 1}, {-1, 2}, {1, 2}, {2, 1}, {2, -1}, {1, -2}, {-1, -2}, {-2, -1} }; for (int[] o : offsets) { Square square = super.getSquare().neighbour(o[0], o[1]); if (square != null && (square.getPiece() == null || isOpponent(square.getPiece()))) { possibleMoves.add(square); } } return possibleMoves;}Rook@Overridepublic Collection<Square> generatePossibleMoves() { int row = super.getSquare().ROW; int column = super.getSquare().COLUMN; possibleMoves.clear(); //all possible moves in the up for (int i = row + 1; i < Board.SIZE; i++) { Square square = super.getSquare().getBoardSquare(i, column); if (square.getPiece() == null) { possibleMoves.add(square); } else if (isOpponent(square.getPiece())) { possibleMoves.add(square); break; } else { break; } } //all possible moves in the down for (int i = row - 1; i > -1; i--) { Square square = super.getSquare().getBoardSquare(i, column); if (square.getPiece() == null) { possibleMoves.add(square); } else if (isOpponent(square.getPiece())) { possibleMoves.add(square); break; } else { break; } } //all possible moves to the right for (int i = column + 1; i < Board.SIZE; i++) { Square square = super.getSquare().getBoardSquare(row, i); if (square.getPiece() == null) { possibleMoves.add(square); } else if (isOpponent(square.getPiece())) { possibleMoves.add(square); break; } else { break; } } //all possible moves to the left for (int i = column - 1; i > -1; i--) { Square square = super.getSquare().getBoardSquare(row, i); if (square.getPiece() == null) { possibleMoves.add(square); } else if (isOpponent(square.getPiece())) { possibleMoves.add(square); break; } else { break; } } return possibleMoves;}QueenMoves exactly like the Rook and Bishop so why not reuse?@Overridepublic Collection<Square> generatePossibleMoves() { possibleMoves.clear(); Piece[] pieces = { PieceType.ROOK.create(getPieceColor()), PieceType.BISHOP.create(getPieceColor()) }; for (Piece piece : pieces) { piece.setSquare(getSquare()); possibleMoves.addAll(piece.generatePossibleMoves()); } return possibleMoves;}Pawn@Overridepublic Collection<Square> generatePossibleMoves() { possibleMoves.clear(); boolean color = super.isWhite(); int dx = color ? -1 : 1; Square ahead = super.getSquare().neighbour(dx, 0); if (ahead.getPiece() == null) { possibleMoves.add(ahead); if (super.getSquare().ROW == 6 && color) { Square aheadsecond = super.getSquare().neighbour(dx - 1, 0); if (aheadsecond.getPiece() == null) { possibleMoves.add(aheadsecond); } } else if (super.getSquare().ROW == 1 && !color) { Square aheadsecond = super.getSquare().neighbour(dx + 1, 0); if (aheadsecond.getPiece() == null) { possibleMoves.add(aheadsecond); } } } Square aheadLeft = super.getSquare().neighbour(dx, -1); if (aheadLeft != null && aheadLeft.getPiece() != null && isOpponent(aheadLeft.getPiece())) { possibleMoves.add(aheadLeft); } Square aheadRight = super.getSquare().neighbour(dx, 1); if (aheadRight != null && aheadRight.getPiece() != null && isOpponent(aheadRight.getPiece())) { possibleMoves.add(aheadRight); } return possibleMoves;}
Generating possible Chess moves
java;chess
null
_softwareengineering.263541
Within the context of JavaScript/Node.JS; Will using Callback functions improve the maintainability of source code, when there is no need for async programming?For example does the plain code sound semantically more correct and will be easier to maintain/extend, rather than the second one that unnecessarily uses callback functions?Plainvar validateId = function ( id ) { if ( undefined !== id ) { return true; } else { return false; }}var setId = function ( id ) { if ( true === validateId(id) ) { userId = id; console.log( success ); } else { console.log( failed: , invalid id ); };}Callback-edvar validateId = function ( id, success, fail ) { if ( undefined !== id ) { success( id ); } else { fail( invalid id ); }}var setId = function ( id ) { validateId( id, function success (validatedId) { userId = validatedId; console.log( success ); }, function fail ( error ) { console.log( failed: , error ); });}Update #1I'm not looking for a general advise on how to write readable and maintainable code. As written in the first line, I'm specifically looking to see if using Callbacks (not even in any programming languages, but specifically in JavaScript) will improve the maintainability of the code? And if that makes the source code semantically more correct? (invoke validateId and if the result was true set the userId comparing to invoke validateId and assign the userId on success)
Callback functions: Semantics and maintainability, when they aren't necessary
javascript;node.js;maintainability
Both solutions can make sense. Using functions as parameters is useful in many cases, and this generally makes it easier to write correct code because you're forced to provide callbacks for all circumstances. However, an API that requires callbacks tend to create unnecessarily deep indentation. This becomes more obvious when we have more than one validation, and all must pass:// simple solutionfunction validateA(a) { return a !== undefined }function validateB(b) { return b !== undefined }function validateC(c) { return c !== undefined }function frobnicate(a, b, c) { if (! validateA(a)) { console.log(failed: , invalid a); return; } if (! validateB(b)) { console.log(failed: , invalid b); return; } if (! validateC(c)) { console.log(failed: , invalid c); return; } console.log(success);}// naive callback solutionfunction validateA(a, onSuccess, onError) { return (a !== undefined) ? onSuccess(a) : onError(invalid a);}function validateB(b, onSuccess, onError) { return (b !== undefined) ? onSuccess(b) : onError(invalid b);}function validateC(c, onSuccess, onError) { return (c !== undefined) ? onSuccess(c) : onError(invalid c);}function frobnicate(a, b, c) { return validateA(a, function successA(a) { return validateB(b, function successB(b) { return validateC(c, function successC(c) { console.log(success); }, function failC(msg) { console.log(failed: , msg) }); }, function failB(msg) { console.log(failed: , msg) }); }, function failA(msg) { console.log(failed: , msg) });}With this callback-based validation, control flow is incomprehensible, and because the success handler comes before the error handler, error handling is visually separated from the problem it's handling. This is a hindrance to maintenance. Simply slapping callbacks on functions doesn't scale.Patterns do exist to solve this. For example, we could write a combinator that takes care of correctly composing the callback-based validation functions:function multipleValidations(validations, onSuccess) { function compose(i) { if (i >= validations.length) { return onSuccess; } var cont = compose(i + 1); var what = validations[i][0]; var validation = validations[i][1]; var onError = validations[i][2] || function(msg) {}; return function(x) { return validation(what, cont, onError) }; } return compose(0)();}function frobnicate(a, b, c) { return multipleValidations([ [a, validateA, function failA(msg) { console.log(fail: , msg) }], [b, validateB, function failB(msg) { console.log(fail: , msg) }], [c, validateC, function failC(msg) { console.log(fail: , msg) }] ], function success() { console.log(success); });}Well, it's now much nicer to use validation, but the multipleValidations helper is ugly (and non-obvious, and difficult to maintain). That same interface would have been much easier to implement if each validation was just a predicate which returns a boolean (except that now the validation can't define the error message itself, and must depend on the error callback instead):function multipleValidations(validations, onSuccess) { for (var i = 0; i < validations.length; i++) { var what = validations[i][0]; var validate = validations[i][1]; var onError = validations[i][2] || function fail() {}; if (! validate(what)) { return onError(); } } return onSuccess();}Using callbacks where they are not required is not only a KISS violation (keep it simple and stupid). They sometimes actively get in the way and obfuscate code. 
Some patterns do require callbacks, and using them there is OK.Another reason why we should prefer the simpler solution is that we can easily transform both representations of validation into each other:function predicateToCallback(predicate, errorMessage) { return function validate(x, onSuccess, onError) { return (predicate(x)) ? onSuccess(x) : onError(errorMessage); };}function callbackToPredicate(validate) { return function predicate(x) { return validate(x, function onSuccess() { return true; }, function onError() { return false; }); };}In your original code, the callback-based solution does not return a value, so transforming a callback into a predicate becomes marginally more difficult:function callbackToPredicate(validate) { return function predicate(x) { val result; validate(x, function onSuccess() { result = true; }, function onError() { result = false; }); return result; };}Because it is so simple to transform one representation of the same concept into the other one, we ought to start out with the more simple one. If for some reason needed, the callback based solution is just a function call away. But quite likely, you ain't gonna need it (YAGNI).
_softwareengineering.355029
I have had this interesting problem come up at work and it seems like the sort of problem that might have a well known solution; apologies in advance if some of my terminology is wrong or confusing. I have a set of inputs and a set of outputs. Each input connects to every output, except when specified in a dictionary which associates an output with a list of inputs which cannot be connected to it. Since it is expected that most inputs will connect to most outputs, it saves memory to only store when an input can't connect to a particular output. I want to build the minimum number of pairs of lists like pair<list<Output>, list<Input>>, where all the outputs can be connected to the inputs. So as an example, inputs are [a, b, c, d] and outputs are [A, B, C, D], where we have a map of outputs which cannot be connected to certain inputs:
A -> [a, b]
B -> []
C -> []
D -> [c, d]
So in this case we'd have pairs:
([c, d], [A, B, C])
([a, b, c, d], [B])
([a, b, c, d], [C])
([a, b], [B, C, D])
Which can be reduced to the minimum 'spanning' set:
([c, d], [A, B, C])
([a, b, c, d], [B, C])
([a, b], [B, C, D])
For practical considerations there are typically expected to be on the order of 10 or so outputs and several thousand inputs.
Is there a general solution to this mapping problem?
algorithms
null
_codereview.173749
This program moves files to a backup directory, archiving them by year of last modification.From a directory like :. file1.ext file2.ext file3.ext fileN.extTo destination:. _Backup| 2002| _200X| file3.ext| fileN.ext file1.ext file2.extConfiguration.xml:<?xml version=1.0 encoding=utf-8 ?><data> <PathItem PathFrom=C:\Test\FjournaTest PathTo =C:\Test\FjournaTest\bak Delay=20 Enable=1 /> <PathItem PathFrom=\\AAAAAAA\BBB\Commun\Email\XYZ\ PathTo =\\AAAAAAA\BBB\Commun\Email\XYZ\Backup Delay=30 Enable=0 /></data>Delay is the number of days old the file needs to be eligible to the backup.FileMover.Csclass FileMove{ private IEnumerable<PathItem> _pathList; private bool _Talk = false; private DateTime currentDate = DateTime.Now; private int tryMAX = 7; public FileMove(string configPath) { if (!PathValid(configPath)) { throw new Exception($Invalid Configuration Path: [{configPath}].); } XDocument doc = XDocument.Load(configPath); var tempP = from p in doc.Descendants(PathItem) where int.Parse(p.Attributes(Enable).First().Value) == 1 select new PathItem( p.Attributes(PathFrom).First().Value , p.Attributes(PathTo).First().Value , int.Parse(p.Attributes(Delay).First().Value) , true ); //Distinct on From. _pathList = tempP.GroupBy( i => i.PathFrom, (key, group) => group.First() ).ToArray(); } public FileMove(string configPath, bool talk) : this(configPath) { this._Talk = talk; } public void Run() { if (!_pathList.Any()) { W(No Enabled Path.); return; } I(Process Start.); foreach (var x in _pathList) { I($Processing:\n{x});//TODO Try > Catch > Log. Process(x.PathFrom, x.PathTo, x.Delay); } I(Process End.); } private void Process(string pathFrom, string pathTo, int delay) { bool ValidPathFrom = PathValid(pathFrom) , ValidPathTo = PathValid(pathTo); if (!(ValidPathFrom && ValidPathTo)) { W(Path Not Valid!); W($\tPathFrom:{ValidPathFrom}\tPathTo:{ValidPathTo}); return; } I($\tPath Valid >PathFrom:{pathFrom}\t>PathTo:{pathTo}); string[] fileEntries = Directory.GetFiles(pathFrom); Parallel.ForEach(fileEntries, (fileName) => { ProcessFile(fileName, pathTo, delay); }); } private void ProcessFile(string fileName, string pathTo, int delay) { var fileInfo = new FileInfo(fileName); DateTime lastModified = fileInfo.LastWriteTime; int DuplicateCounter = 0; var yearDirectory = lastModified.Year.ToString(); string shortName = Path.GetFileNameWithoutExtension(fileInfo.Name); pathTo += $\\{yearDirectory}; string savePath = ${pathTo}\\{fileInfo.Name}; if (delay == 0 || (currentDate - lastModified).Days >= delay) { //Year Dir Exist ? if (!PathValid(pathTo)) { Directory.CreateDirectory(pathTo); } // Make sure that the target does not exist. while (File.Exists(savePath) && DuplicateCounter < 99) { DuplicateCounter++; savePath = ${pathTo}\\{shortName}({DuplicateCounter}){fileInfo.Extension}; if (DuplicateCounter == 99) { W($\t\t[{shortName}] Have to many duplicate.); } } // MoveIt. 
int tryCount = 0; while (tryCount < tryMAX + 1) { try { fileInfo.MoveTo(savePath); } catch (Exception e) { if (tryCount == tryMAX) {//thinks 7 time before it speaks throw new Exception($-File move : {savePath} #Failed, e); } tryCount++; continue; } break; } I({0} was moved to {1}., fileName, savePath); } } private bool PathValid(string path) { if (File.Exists(path)) { return true; } else if (Directory.Exists(path)) { return true; } else { W({0} is not a valid directory or file., path); return false; } } private bool PathValid(string path, out bool validPathFrom) { validPathFrom = PathValid(path); return validPathFrom; } public void W(string format, params object[] args) //Warning always printed { #if DEBUG System.Diagnostics.Debug.WriteLine(format, args); #endif Console.WriteLine(format, args); } public void I(string format, params object[] args) //InfoCut off if not talkative. { if (_Talk) W(format, args); }}
Archiving files into directories
c#;file
null
_cogsci.9520
In the blue eyes/brown eyes exercise created by Jane Elliot in the 1960s, she divided a group of schoolchildren by eye color and characterized one of them (e.g., the blue eyed group) as an inferior group. The aim of the exercise is to show how racism works and how it feels to be discriminated against. Is the exercise effective? What are the psychological consequences of it? Does the exercise still work today? What are the ethics when this exercise is done with adults or children?
What are the effects of the blue eyes/brown eyes racism exercise?
social psychology;attitudes;racism
null
_cstheory.19956
Given a class of hypothesis $\mathcal{H}$ representing the set of all consistent hypotheses with the examples seen so far, how to compute the region of uncertainty? The region of uncertainty is defined as all points where there are two or more hypotheses disagree on its labelling. I am representing $\mathcal{H}$ as two hypotheses: most specific (MS) and most general (MG) (as described here ). $\mathcal{H}$ consists of these two and anything in between. Clearly, enumerating all possible hypotheses is infeasible due to the size of $\mathcal{H}$. To put it in one word, how to compute disagreements in practice? EDIT: sorry for not being specific. I am trying to learn a graphical model similar to bayesian networks but instead of probabilities I have order relation. I have a set of variables $V=\{v_1,v_2,...,v_n\}$ where each variable has a set of possible values (its domain). $V$ defines an outcome space $\mathcal{O}$ (the set of all possible assignments over $V$ from their domain values). For example, if $A=\{1,2\}$ and $B=\{3,4\}$ we have $\mathcal{O}=\{(1,3),(1,4),(2,3),(2,4)\}$. The input space $X$ is the set of all pairs of outcomes from $\mathcal{O}$. Each example $x$ consist of pairs of outcomes $a$,$b\in \mathcal{O}$ (denoted as $x[a]$,$x[b]$ respectively). The target function is a strict order relation $\succ$ over $\mathcal{O}$. I am adopting the active learning paradigm. I first choose $\mathcal{U}$ unlabelled examples, then ask the oracle to label an example $x\in \mathcal{U}$. $h(x)=+$ if $x[a]$ is better than $x[b]$ otherwise $-$. The hypothesis class $\mathcal{H}$ is the set of all possible strict (partial/total) orders consistent with the examples seen so far. My main concern is how to select representative hypotheses from $\mathcal{H}$
How to compute the disagreement between hypotheses
machine learning
One major task of computational learning theory is to try to handle questions like this for specific classes. In general, if you assume no structure on the hypothesis space, you basically have no choice but to look at each hypothesis individually. On the other hand, if there is a lot of structure, e.g. if you assume the hypotheses are linear separators, say in $\mathbb{R}^2$, you can see that it's possible to be quite efficient even though there are infinitely many linear separators! The same thing happens when trying to find the ERM (empirical risk minimizing) hypothesis in a class. This is important for PAC learning. Sometimes it's easy, sometimes it's not.
_unix.55153
I use Mint 13 Maya with mainline kernel 3.6.3 for Ubuntu Quantal on my Asus P53E notebook. I use zram swap, which comes with the package zram-config. I've noticed that using it causes the system to hang at random moments, especially (but not always) under high memory load. Even when 90% of memory is free (and exactly no swap is in use), the notebook cannot survive a night (8 hours) in a powered state. This hanging behaviour is quite literal: the computer responds to exactly nothing. No mouse movement; the capslock/numlock LEDs don't respond to the corresponding keys. Even the sysrq combinations don't work (like BUSIER). (I don't remember if dimming the display works.) After the hard restart, there is no trace of anything odd in the system logs. This behaviour doesn't happen after I shut down the zram-config service. Does it mean that zram swap is not ready for production? Or is it misconfigured? How can I debug this problem?
zram swap instability
linux mint;swap;zram
null
_scicomp.2150
From Wikipedia, assume that we have a function $M(x)$, and we want to solve the equation $M(x) = 0$. But we cannot directly observe the function $M(x)$; we can instead obtain measurements of the random variable $N(x)$ where $E[N(x)] = M(x)$. The Robbins-Monro algorithm solves this problem by generating iterates of the form $$ x_{n+1}=x_n-a_n N(x_n) $$ where $a_1, a_2, \dots$ is a sequence of positive step sizes. If we consider solving the deterministic version of the equation instead, i.e. solving $M(x)=0$ when $M(x)$ can be observed directly, I wonder: (1) Is there already a type of algorithm similar to the Robbins-Monro algorithm? Is it $$ x_{n+1}=x_n-a_n M(x_n) $$ where $a_1, a_2, \dots$ is a sequence of positive step sizes? I couldn't find such an algorithm in the resources that I can access. (2) If such an algorithm as in (1) can work, what is the rationale/intuition/motivation behind $x_{n+1}=x_n-a_n M(x_n)$? By rationale/intuition/motivation I mean, for example, the tangent line approximation in Newton's method for solving equations, or steepest descent in the gradient descent method for optimization. The reason for asking this question is that I think most, if not all, stochastic approximation algorithms are inspired by some algorithm for the similar deterministic case. Thanks and regards!
What is the deterministic counterpart of Robbins-Monro algorithm?
optimization;stochastic
In the deterministic case, you can of course run the same algorithm, but it is very inefficient compared to quasi-Newton (Broyden-type) methods. There is little point in investigating the properties of so poor an algorithm. On the other hand, Broyden's method is quite sensitive to noise, hence cannot be easily adapted to the stochastic case. Moreover, if the amount of noise is large then convergence is dictated by the law of large numbers anyway, and there is little to be gained from trying to adapt algorithms with superlinear convergence.
_unix.89644
My aim is to allow read access to folder /var/www/mysite/ only for users in group www-data using a default ACL.This works for a regular ACL, but not for a default ACL. Why?This is how I did it:I am logged on as user www-data who is in group www-data. I am in directory /var/www.I created a directory mysite and gave it the permission 0. Then I added ACL permissions so that anyone in group www-data has read-access to directory mysite/.$ mkdir mysite$ chmod 0 mysite$ setfacl -m g:www-data:r-x mysite$ ls -lad---------+ 2 root root 4096 Sep 6 11:16 mysite$ getfacl mysite/# file: mysite/# owner: root# group: rootuser::---group::---group:www-data:r-xmask::r-xother::---At this point user www-data has access to the folder. However, if I instead add a default ACL, access is denied!$ setfacl -m d:g:www-data:r-x mysite # <---- NOTE the default acl rule.$ ls -lad---------+ 2 root root 4096 Sep 6 11:16 mysite$ getfacl mysite/# file: mysite/# owner: root# group: rootuser::---group::---other::---default:user::---default:group::---default:group:www-data:r-xdefault:mask::r-xdefault:other::---
Why don't I have read access to files with modified ACL?
acl
The default ACL is the ACL that is applied to newly created files in that directory. It is also copied as the default ACL for subdirectories created under that directory, so unless you do something to override it, it applies recursively. The default ACL has no effect on the directory itself, or on any files that already exist when you change the default ACL. So in your situation you need to both set the ACL on the directory (for the directory itself) and set the default ACL (for files that you will create in the directory).
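Concretely, using the same group and directory names as in the question, that means running both forms of the command - the first grants the group access to the existing directory, the second makes entries created inside it inherit the ACL:
setfacl -m g:www-data:r-x mysite      # ACL on the directory itself
setfacl -m d:g:www-data:r-x mysite    # default ACL, inherited by new files and subdirectories
Both rules can also be combined into a single setfacl invocation by repeating the -m option.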
_webmaster.85055
Following on from my previous question - NOINDEX large number of pages to fix Google thin content manual action? Firstly, if I noindex all pages causing the manual action, is that enough for me to request a review? Or do I need to make sure these pages are no longer indexed first? There are 100s of these pages, so manually removing every URL will take a long time. I understand you can remove directories using Google's URL removal tool. The site is built with WordPress, so the pages aren't literally in the same directory, but the URL structure is domain.com/category/postname, so if I put domain.com/category/ will that remove all posts as if they were in the 'category' directory? Secondly, I have set my preferred domain in Google Webmaster Tools to the non-www domain (this is also set at my hosting, so www URLs are 301 redirected to non-www URLs). In GWT crawl stats the www domain is showing a lot more indexed pages than the non-www domain. If I 'fetch as google' the status shows as redirected, so how is there such a high number of indexed pages for the www domain? Should I remove URLs for the entire www domain and remove only my noindex pages for (the preferred) non-www domain?
Should I remove Google indexed pages before requesting a review after thin content manual action
google;google search console;wordpress;googlebot;noindex
null
_codereview.32386
I have this class containing two constructors with different signatures but they do the same thing:public Person(Dictionary dictionary, string someString) : base(dictionary, someString){ base.GetProperty(FirstName); base.GetProperty(LastName);}public Person(Dictionary dictionary, string[] someStringArray) : base(dictionary, someStringArray){ base.GetProperty(FirstName); base.GetProperty(LastName);}The type of the second parameter in each of these constructors determines the behavior of the 'GetProperty' method.I've read about casting and the bad behaviors of it, but would it be appropriate to do something like this:public Person(Dictionary dictionary, Object something){}and then cast something to either a string or string[] depending on whichever is appropriate?
Different Constructors, Same Implementation
c#;casting
Two things your could do:Refactor the common code into a method (like Initialize) and call that:private void Initialize(){ base.GetProperty(FirstName); base.GetProperty(LastName);}public Person(Dictionary dictionary, string someString) : base(dictionary, someString){ Initialize();}public Person(Dictionary dictionary, string[] someStringArray) : base(dictionary, someStringArray){ Initialize();}Not sure whether that's an option as I don't know how the behaviour changes but you could reduce one case to the other:public Person(Dictionary dictionary, string someString) : this(dictionary, new [] { someString }){}public Person(Dictionary dictionary, string[] someStringArray) : base(dictionary, someStringArray){ base.GetProperty(FirstName); base.GetProperty(LastName);}Apart from that:I'd avoid the casting. Apparently only strings or arrays of strings make sense to be passed in. If you change the parameter to object then it is no longer clear to the caller what he can and cannot pass in and would have to write a test to make sure that what he is passing in will be accepted. It reduces the clarity of the interface.GetProperty seems to be a strange thing to call in a constructor. I'd expect it to return a property value yet you do nothing with the return value.Consider making FirstName and LastName (and any other property string you use) string constants rather than literals - especially if you use them in more than one place. Lambda expressions might be an option if they are actual properties on the object.
_softwareengineering.289840
I have multiple forms and use AJAX to submit them. I asked my boss if he needed any specific format for the form ID and he told me to generate a unique hash and keep it in session; check it whenever the form comes back to make sure it's a valid form submission and not someone just hitting an endpoint.What does that mean and why is it useful? How does it help me security wise?I have read a couple of articles like The 3 things you should know about hashCode() but I'm still not sure...
Why use a unique hashkey for form submissions?
algorithms;rest;ajax;validation;hashing
null
_unix.101893
I was using ls -l on a directory, and was surprised that spaces and underscores were ignored for the sort order. For example,$ echo $LANGen_AU.UTF-8$ ls -ltotal 0-rw-r--r-- 1 sparhawk sparhawk 0 Nov 20 21:12 a_a-rw-r--r-- 1 sparhawk sparhawk 0 Nov 20 21:13 a b-rw-r--r-- 1 sparhawk sparhawk 0 Nov 20 21:13 a_c-rw-r--r-- 1 sparhawk sparhawk 0 Nov 20 21:13 a d$ LANG=en_AU ls -ltotal 0-rw-r--r-- 1 sparhawk sparhawk 0 Nov 20 21:13 a b-rw-r--r-- 1 sparhawk sparhawk 0 Nov 20 21:13 a d-rw-r--r-- 1 sparhawk sparhawk 0 Nov 20 21:12 a_a-rw-r--r-- 1 sparhawk sparhawk 0 Nov 20 21:13 a_cIn my default locale, spaces and underscores are interchangeable, and without UTF-8, spaces come before underscores. I see similar results for en_US and en_US.UTF-8.I have two questions:Am I interpreting this correctly? Are they interchangeable?Is there a list of my locale's sort order? I want to find a character that precedes underscore.
How can I find the sort order for ls with my locale?
sort;locale
null
_unix.278639
I am ambitiously trying to translate a c++ code into bash for a myriad of reasons.This code reads and manipulates a file type specific to my sub-field that is written and structured completely in binary. My first binary-related task is to copy the first 988 bytes of the header, exactly as-is, and put them into an output file that I can continue writing to as I generate the rest of the information.I am pretty sure that my current solution isn't working, and realistically I haven't figured out a good way to determine this. So even if it is actually written correctly, I need to know how I would test this to be sure!This is what I'm doing right now:hdr_988=`head -c 988 ${inputFile}`echo -n ${hdr_988} > ${output_hdr}headInput=`head -c 988 ${inputTrack} | hexdump`headOutput=`head -c 988 ${output_hdr} | hexdump`if [ ${headInput} != ${headOutput} ]; then echo output header was not written properly. exiting. please troubleshoot.; exit 1; fiIf I use hexdump/xxd to check out this part of the file, although I can't exactly read most of it, something seems wrong. And the code I have written in for comparison only tells me if two strings are identical, not if they are copied the way I want them to be.Is there a better way to do this in bash? Can I simply copy/read binary bytes in native-binary, to copy to a file verbatim? (and ideally to store as variables as well).
How can I work with binary in bash, to copy bytes verbatim without any conversion?
bash;binary;head
null
_unix.41654
I am using DM368 TI board. I want to send keystrokes to gstplayer in ubuntu 10.10 For that purpose i used uinput function in C code. My code works properly on console but not on the hardware. I even generated uinput.o file on the board. Please give suitable suggestionHere's my code :#include <string.h>#include <stdio.h>#include <sys/types.h>#include <sys/stat.h>#include <fcntl.h>#include <linux/input.h>#include <linux/uinput.h>#include <stdio.h>#include <sys/time.h>#include <sys/types.h>#include <unistd.h>#include <errno.h>/* Globals */int uinp_fd;struct uinput_user_dev uinp; // uInput device structurestruct input_event event; // Input device structure/* Setup the uinput device */int setup_uinput_device(){ // Temporary variable int i=0; // Open the input device uinp_fd = open(/dev/uinput, O_WRONLY | O_NONBLOCK); if (uinp_fd<0) { printf(Unable to open /dev/uinput\n); //die(error: open); return -1; } // Setup the uinput device if(ioctl(uinp_fd, UI_SET_EVBIT, EV_KEY)<0) printf(unable to write); if(ioctl(uinp_fd, UI_SET_EVBIT, EV_REL)<0) printf(unable to write); if(ioctl(uinp_fd, UI_SET_RELBIT, REL_X)<0) printf(unable to write); if(ioctl(uinp_fd, UI_SET_RELBIT, REL_Y)<0) printf(unable to write); for (i=0; i < 256; i++) { ioctl(uinp_fd, UI_SET_KEYBIT, i); } //ioctl(uinp_fd, UI_SET_KEYBIT, BTN_MOUSE); //ioctl(uinp_fd, UI_SET_KEYBIT, BTN_TOUCH); //ioctl(uinp_fd, UI_SET_KEYBIT, BTN_MOUSE); //ioctl(uinp_fd, UI_SET_KEYBIT, BTN_LEFT); //ioctl(uinp_fd, UI_SET_KEYBIT, BTN_MIDDLE); //ioctl(uinp_fd, UI_SET_KEYBIT, BTN_RIGHT); //ioctl(uinp_fd, UI_SET_KEYBIT, BTN_FORWARD); //ioctl(uinp_fd, UI_SET_KEYBIT, BTN_BACK); memset(&uinp,0,sizeof(uinp)); // Intialize the uInput device to NULL snprintf(uinp.name, UINPUT_MAX_NAME_SIZE, uinput-sample); uinp.id.bustype = BUS_USB; uinp.id.vendor = 0x1; uinp.id.product = 0x1; uinp.id.version = 1; /* Create input device into input sub-system */ if(write(uinp_fd, &uinp, sizeof(uinp))< 0) printf(Unable to write UINPUT device.); if (ioctl(uinp_fd, UI_DEV_CREATE)< 0) { printf(Unable to create UINPUT device.); //die(error: ioctl); return -1; } return 1;}void send_click_events( ){ // Move pointer to (0,0) location memset(&event, 0, sizeof(event)); gettimeofday(&event.time, NULL); event.type = EV_REL; event.code = REL_X; event.value = 100; write(uinp_fd, &event, sizeof(event)); event.type = EV_REL; event.code = REL_Y; event.value = 100; write(uinp_fd, &event, sizeof(event)); event.type = EV_SYN; event.code = SYN_REPORT; event.value = 0; write(uinp_fd, &event, sizeof(event)); // Report BUTTON CLICK - PRESS event memset(&event, 0, sizeof(event)); gettimeofday(&event.time, NULL); event.type = EV_KEY; event.code = BTN_LEFT; event.value = 1; write(uinp_fd, &event, sizeof(event)); event.type = EV_SYN; event.code = SYN_REPORT; event.value = 0; write(uinp_fd, &event, sizeof(event)); // Report BUTTON CLICK - RELEASE event memset(&event, 0, sizeof(event)); gettimeofday(&event.time, NULL); event.type = EV_KEY; event.code = BTN_LEFT; event.value = 0; write(uinp_fd, &event, sizeof(event)); event.type = EV_SYN; event.code = SYN_REPORT; event.value = 0; write(uinp_fd, &event, sizeof(event));}void send_a_button(){ // Report BUTTON CLICK - PRESS event memset(&event, 0, sizeof(event)); gettimeofday(&event.time, NULL); event.type = EV_KEY; event.code = KEY_ENTER; event.value = 1; write(uinp_fd, &event, sizeof(event)); event.type = EV_SYN; event.code = SYN_REPORT; event.value = 0; write(uinp_fd, &event, sizeof(event)); // Report BUTTON CLICK - RELEASE event memset(&event, 0, sizeof(event)); 
gettimeofday(&event.time, NULL); event.type = EV_KEY; event.code = KEY_ENTER; event.value = 0; write(uinp_fd, &event, sizeof(event)); event.type = EV_SYN; event.code = SYN_REPORT; event.value = 0; write(uinp_fd, &event, sizeof(event));}/* This function will open the uInput device. Please make sure that you have inserted the uinput.ko into kernel. */int main(){ // Return an error if device not found. if (setup_uinput_device() < 0) { printf(Unable to find uinput device\n); return -1; } sleep(1); send_a_button(); // Send a A key // send_click_events(); // Send mouse event /* Destroy the input device */ ioctl(uinp_fd, UI_DEV_DESTROY); /* Close the UINPUT device */ close(uinp_fd);}
Running uinput code on DM368 board
ubuntu;embedded
null
_softwareengineering.235591
I understand that a Singleton ensures that only one instance of a class exists at a time. I am trying to learn how to design a Singleton in Java; I want to know it better in order to understand the Kernel. So I tried the following, but I would like to know if a private constructor is the only way to do it.
public class Singleton {
    private static Singleton instance = null;

    private Singleton() { }

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}
Simple Design for Singleton function in java for Kernel
java;singleton
Yes, to create a Singleton class you have to use the private constructor, as it is the only way to prevent another class from creating an instance of your class.
public class Singleton {
    private static Singleton instance = null;

    private Singleton() { }

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}
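If the synchronized keyword on every getInstance() call ever becomes a concern, one common variant (shown only as a sketch, not part of the accepted answer) is the initialization-on-demand holder idiom, which gets lazy initialization and thread safety from the class loader instead of explicit locking:
public class Singleton {
    private Singleton() { }

    // The holder class is not loaded (and INSTANCE not created) until
    // getInstance() is called for the first time.
    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}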
_codereview.1287
I have a heading, which is an integer, and from it I would like to figure out the compass direction (North, North-East, ...) and return the appropriate icon. I feel like I'm probably already at the cleanest solution but I would love to be proven wrong.
public string GetHeadingImage(string iconName, int heading)
{
    if (IsNorthHeading(heading)) return iconName + "n" + ICON_FILE_EXTENTION;
    if (IsNorthEastHeading(heading)) return iconName + "ne" + ICON_FILE_EXTENTION;
    if (IsEastHeading(heading)) return iconName + "e" + ICON_FILE_EXTENTION;
    /* A bunch more lines */
}

private bool IsNorthHeading(int heading)
{
    return heading < 23 || heading > 337;
}

/* A bunch more extracted test methods */
Cleaner way to calculate compass direction?
c#
null
_unix.63266
Anytime I want to move thousands of files to a new folder, I always encounter the same problem.
> mkdir my_folder
> mv * my_folder
mv: cannot move 'my_folder' to a subdirectory of itself 'my_folder'
While I think that the error above is harmless (is it?), I am wondering if there is a way of avoiding it. In case it matters, I am interested in a solution in zsh or one that works well across various shells.
mv * folder (avoiding 'cannot move' error)
zsh;rename
In zsh, with the extended_glob option enabled, you can use ~ to exclude patterns from globs, so you could use:
setopt extended_glob
mv -- *~my_folder my_folder
Or use the negation operator (still with extended_glob):
mv -- ^my_folder my_folder
Use braces to avoid typing the directory name twice:
mv -- {^,}my_folder
In bash (for other answer-seekers using it), you can use Ksh-style extended globs:
# If it's not already enabled
shopt -s extglob
mv -- !(my_folder) my_folder
You can also use that syntax in zsh if you enable the ksh_glob option.
_computerscience.5215
Since I am using batch rendering, I should pack every render unit that can be batched into one big VBO. But what if the render units change dynamically: some new units are added in, and some units are removed? Then I would have to reconstruct my VBO again and again, every frame. So here is the question: should I allocate a big enough storage the first time and update the data in it, only reallocating when I run out of storage, or should I allocate storage according to the size on demand in each frame?
How can I deal with batch rendering when the elements in the batch change every frame?
opengl;glsl
Allocate a VBO with a reasonable amount of storage during startup, and update the data in it each frame. This will help the driver manage the memory efficiently. Don't destroy and recreate the buffer unless absolutely necessary; too much resource creation/destruction churn will put more pressure on the driver and can potentially lead to stalls.To be more specific: for data that will be updated every frame, you probably want to initialize it using glBufferStorage with GL_DYNAMIC_STORAGE_BIT (or, if you're on an older OpenGL version, glBufferData with GL_DYNAMIC_DRAW). Then, to update it each frame, use glMapBufferRange with GL_MAP_INVALIDATE_BUFFER_BIT. This should be an efficient, fast path in the graphics driver.
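A rough sketch of that update path (a fragment rather than a complete program; it assumes an OpenGL 4.4+ context with functions already loaded, and MAX_BATCH_BYTES, frameBytes and batchVertices are placeholder names):
// One-time setup: immutable storage, large enough for the worst-case batch,
// flagged so it can be updated and mapped for writing later.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferStorage(GL_ARRAY_BUFFER, MAX_BATCH_BYTES, nullptr,
                GL_DYNAMIC_STORAGE_BIT | GL_MAP_WRITE_BIT);

// Every frame: map with GL_MAP_INVALIDATE_BUFFER_BIT so the driver may discard
// the previous contents instead of waiting for the GPU to finish with them,
// then copy in this frame's batch.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void* dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, frameBytes,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(dst, batchVertices.data(), frameBytes);
glUnmapBuffer(GL_ARRAY_BUFFER);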
_cs.50026
Assume there is a set $\{1,...,n\}$ of persons. For each person $i$, a set $A_i$ of items is available for exchange; $A$ is the set of all items. The value of item $j$ to person $i$ is $v_{ij}$, and assume all values are positive integers. The question is: does there exist a pair of persons who can both make a profit by exchanging some of their items in $A_i$? Formally: $\exists$ $i$ and $i'$ and $S_i\subseteq A_i$ and $S_{i'}\subseteq A_{i'}$ such that $\sum_{j\in S_i} v_{i'j}>\sum_{j\in S_{i'}} v_{i'j}$ and $\sum_{j\in S_{i'}} v_{ij}>\sum_{j\in S_{i}} v_{ij}$. I'm not sure whether this problem is NP-complete or whether there exists a polynomial-time algorithm.
weighted subset exchange question
complexity theory;np complete
It looks like your question can be written formally as $\exists i,j\in\{1,..,n\}\:\exists S\subseteq A_i\:\exists T\subseteq A_j :(\sum_{y\in T}v_{i,y}>\sum_{x\in S}v_{i,x})\land (\sum_{x\in S}v_{j,x}>\sum_{y\in T}v_{j,y})$. If the above is true, then persons $i$ and $j$ benefit from swapping the items in $S$ and $T$. This is hard to decide. Here is an idea for a reduction. We simplify things by assuming there are only two persons $P_1$ and $P_2$, that $P_1$ has just coins and banknotes in $A_1$, that $P_2$ has just $A_2=\{x\}$, and that they would both profit if $P_1$ could buy item $x$ at price $V$ from $P_2$. This is because $v_{1,x}-1=V=v_{2,x}+1$, i.e., $P_1$ thinks that by paying $V$ for $x$ he wins $1$, while $P_2$ thinks he wins $1$ by selling $x$. They will trade iff $P_1$ can find some coins and notes in his $A_1$ that add up to $V$ exactly (we assumed that $P_2$ only has $A_2=\{x\}$, so he cannot make any change). NB: the coins and notes are items $c\in A$ with $v_{1,c}=v_{2,c}$; their face value is the same for both persons.
_scicomp.27164
I need a fast and accurate method to calculate 3d spherical volume integrals. I have pre-calculated data of high precision that just needs a few trivial manipulations before each integration step - in other words, function calls are mostly just array look ups and multiplication and so are relatively cheap. The data is evenly spaced in spherical polar coordinates: 4 samples per degree for theta and phi, 500 radial samples. I need good precision, but I also need to perform several million such integrals, so speed is an important concern. What algorithm is most suitable for this problem? Bonus points for a link to an implementation in C++!
Spherical volume integral from pre-calculated points - which algorithm is best?
c++;quadrature;integration
So you want to evaluate $$F=\int_0^{R} \int_0^{2\pi} \int_{0}^{\pi} f(r,\phi,\theta) \, r^2 \, \sin(\theta) \, d\theta \, d\phi \, dr$$ (with $R$ the outer radius of your data), and you have your function $f$ available on equidistant grid points $r_i$, $\phi_j$ and $\theta_k$. Call $$f_{ijk} = f(r_i, \phi_j, \theta_k).$$ The plain simple approach is then just to use the trapezoidal rule (I'll omit the one-halves at the endpoints): $$F \approx \Delta r \, \Delta\phi \, \Delta\theta \sum_{ijk} f_{ijk} \, r_i^2 \, \sin(\theta_k),$$ where $\Delta r$, $\Delta\phi$ and $\Delta\theta$ are the grid spacings... or, if you want higher order, use another Newton-Cotes formula. I just wanted to note that once; for other approaches, see the comment section. EDIT: I've just noticed you were asking for the best method, so this will probably be quite boring for you.
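A minimal C++ sketch of that sum, assuming the samples sit in a flat array indexed as (i*nPhi + j)*nTheta + k with uniform spacings dr, dphi, dtheta (all names are placeholders, and for the full trapezoidal rule the boundary samples in each direction would get half weight):
#include <cmath>
#include <vector>

double sphericalIntegral(const std::vector<double>& f,
                         int nR, int nPhi, int nTheta,
                         double dr, double dphi, double dtheta)
{
    double sum = 0.0;
    for (int i = 0; i < nR; ++i) {
        const double r = i * dr;                       // radial position of sample i
        for (int j = 0; j < nPhi; ++j) {
            for (int k = 0; k < nTheta; ++k) {
                const double theta = k * dtheta;       // polar angle of sample k
                sum += f[(i * nPhi + j) * nTheta + k] * r * r * std::sin(theta);
            }
        }
    }
    return sum * dr * dphi * dtheta;                   // volume element weights
}
Since the r^2 sin(theta) weights do not depend on the data, they can be precomputed once and reused across the millions of integrals mentioned in the question.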
_codereview.128134
This code works, but I want to refine it:
# =======================
# Dir Walk
#
# This will recursively walk a directory
# ========================
function dirwalk(path::AbstractString, fn::Function)
    content = readdir(path)
    for c in content
        p = joinpath(path, c)
        if isdir(p)
            dirwalk(p, fn)
        elseif isfile(p)
            fn(p)
        end
    end
end
Recursively walk directories with a callback function
recursion;callback;julia;file system
null
_scicomp.5003
I sincerely apologise if this question is a duplicate. Though it is clearly a question that must have been asked and answered a thousand times, I can't find any reasonable solution. How do I take a simple 2D CAD drawing (.dwg file) of a 2D boundary and generate a 2D computational mesh suitable for FEM analysis in any of the following file formats: .xml (DOLFIN XML), .ele/.node (Triangle file format), .mesh (Medit, generated by TetGen), .msh/.gmsh (Gmsh), .grid (Diffpack tetrahedral), .inp (Abaqus tetrahedral), .e/.exo/.ncdf (Sandia Exodus II), .vrt/.cell (Star-CD)? I'm doing this on my own time so it must be free software. Any assistance would be greatly appreciated! Derek
Mesh a simple 2D CAD boundary drawing
finite element;mesh generation
For your setup the most common strategy is to transfer data from the CAD program to the FEM preprocessor via a neutral file format, e.g. IGES or STEP, which are both supported by Gmsh. Unfortunately I have no direct experience of DraftSight, so please check if it is capable of exporting (saving) models in IGES or STEP format. (Most CAD programs have native support for IGES or STEP, so I would expect that also DraftSight has this capability.)
_reverseengineering.2475
I'm trying to reverse engineer a driver that consists of 2 components, a Windows service and a control panel application. My goal of reverse engineering is to replace the control panel with my own program. Now as far as I can see I have a few possible approaches:
1. I try to reverse engineer the control panel, and discover the calls sent. But this panel consists of a lot of bloatware.
2. I try to reverse engineer the service, and discover the input needed. But this service also handles other (unknown) functions.
3. I try to catch the communication between the panel and the service.
Now 3 would be the easiest approach, but I have no idea if this is technically possible. Then I tried option 2, but I can only statically analyze the exe, since dynamic analysis causes it to crash prematurely. Option 1 seems to be the most logical one, and this was the first I tried, but I can't really find an interesting starting point. Is there anyone who can point me in the right direction? I have some reverse engineering experience from crackmes and applications, but this is my first attempt at reversing a driver.
Reverse engineering windows service
windows;ida;ollydbg
If your goal is ultimately to control the service, it makes more sense to reverse the service rather than the control panel. Who knows, you might find functionality you were not aware of. The key part of reversing a Windows service is to realize that it runs within the context of the Service Control Manager (SCM), and simply running the executable will not work. There are several major components of any Windows service that you need to be aware of. I have already given an answer to the question How does services.exe trigger the start of a service?; it describes the inner workings of a Windows service. It is very much possible to reverse a Windows service both dynamically and statically as long as you understand the underlying concepts. If you run your service in the context of a command prompt, it will fail. Every Windows service by design has to call into the Service Control Manager; if that call fails, it means the service is not being executed within the SCM, and it is expected to fail when executed outside of the SCM. If the service expects input, you will have to figure out what it needs. First, you will need to locate the service worker thread. After that, locating the part where it communicates with the control panel should be easy.
_webapps.7358
I want to share a specific Twitter post with my friend. For that I need to make a link for that post. How can I do that?
Is it possible to make a link for a specific Twitter post?
twitter;links
In the latest incarnation of Twitter the link is now called Expand and has moved under the Tweet. If you actually click on the Expand link you get three more places to pick up the link - the time, the Collapse link and the Details link. See the revision history for previous locations of this link.
_unix.12637
Say I open ~/vim.txt, push that to background, then cd to another path. When I bring that job to foreground, is there an option to switch back to old path? I noticed it says pwd, so I assume it is possible.
Switch back to path when resume job from background
bash;zsh;directory;background process;cd command
This is possible in zsh, and in fact it's easy thanks to the direct access to the job parameters provided by the zsh/parameter module. You can use a job number or any job specification (%+, %-, %foo, etc.) as a subscript in the array.
zmodload zsh/parameter
fgcd () {
  local dir=$jobdirs[${1:-%+}]
  # If the jobspec matched, then call cd. Otherwise it's probably a bad
  # job spec, but call fg anyway to get the usual error message.
  if [[ -n $dir ]]; then cd $dir; fi
  fg $1
}
Bash also keeps track of the information, but I don't think it's exposed. On some systems, you can obtain the current working directory of the job's process, and switch to it. For example, on Linux, /proc/$pid/cwd is a symbolic link to that process's working directory.
fgcd () { # Linux only
  local pid=$(jobs -p $1)
  if [[ -n $pid ]]; then cd /proc/$pid/cwd; fi
  fg $1
}
Since it can also be useful, here's a zsh version. Unlike the function above, which switches to the job's original directory, this one switches to the current working directory of the job's process leader.
fgcd () { # Linux only
  local pid=${${${jobstates[${1:-%+}]}#*:*:}%\=*}
  if [[ -n $pid ]]; then cd /proc/$pid/cwd; fi
  fg $1
}
_codereview.148627
I have a String containing a double number. Unfortunately, the number is created on backends with different locales, so it could be either 101.02 or 101,02 (different delimiters). I need to get the position of this delimiter if it exists, and 0 if it does not. I've come up with two options:
int pos = amount.indexOf(',') == -1
        ? (amount.indexOf('.') == -1 ? 0 : amount.indexOf('.'))
        : amount.indexOf(',');
Second option with the same logic but a different style:
int pos = amount.indexOf(',');
if (pos == -1) pos = amount.indexOf('.');
if (pos == -1) pos = 0;
I do not need to get a double number from the String, I just need the position of the delimiter to color the String (using the Android class Spannable). Is there a cleaner way to achieve this goal? And which of these styles is better, in your opinion? Is there some way to use the DecimalFormat class to achieve the goal?
Find index of double number delimiter
java;android;comparative review
The problems with the first approachint pos = amount.indexOf(',') == -1 ? (amount.indexOf('.') == -1 ? 0 : amount.indexOf('.')) : amount.indexOf(',');are that:It's not that easily readable; the ternary operator has value when what is tested is simple enough. But when you start nesting them, it often degrades clarity.The index is calculated two times, one to test whether it is -1 or not, and the second time to return the value.As such, the second approachint pos = amount.indexOf(',');if (pos == -1) pos = amount.indexOf('.');if (pos == -1) pos = 0;is the most preferable between the two, mainly for clarity. You should consider putting that into a utility method.There would be other approaches like using a regular expression, but another good one would to not traverse the string potentially 2 times, and instead of looking whether the string has a certain character, loop through each character and see if it is one of the potential delimiters. With Java 8, you could haveint pos = amount.chars().filter(c -> c == '.' || c == ',').findFirst().orElse(0);And you could write explicitly the for loop for Java ≤ 7.Final point: having a position of 0 when neither , nor . are present in the String can be confusing; 0 is a valid index value for a string, and it can imply that the delimiter was the first character of the string. If you consider .25 (that could be a valid representation of a double number, the 0 before being implied), the code would consider this as if having no delimiter.
_unix.126064
I'm new to awesome, and I'm having trouble configuring a theme from the following repository: hereI just moved the themes to my awesome directory, so the tree looks like this:~/.config/awesome/-- rc.lua-- themes -- anon -- multicolor # these are the themes from github# etcThe rc.lua is copied from one of the themes here are its contents:--[[ Steamburn Awesome WM config 2.0 github.com/copycat-killer --]]local awful = require(awful)awful.util = require(awful.util)--{{{ Maintheme = {}home = os.getenv(HOME)config = awful.util.getdir(config)shared = /usr/share/awesomeif not awful.util.file_readable(shared .. /icons/awesome16.png) then shared = /usr/share/local/awesomeendsharedicons = shared .. /iconssharedthemes = shared .. /themesthemes = config .. /themesthemename = /steamburnif not awful.util.file_readable(themes .. themename .. /theme.lua) then themes = sharedthemesendthemedir = themes .. themenamewallpaper1 = themedir .. /wall.pngwallpaper2 = themedir .. /background.pngwallpaper3 = sharedthemes .. /zenburn/zenburn-background.pngwallpaper4 = sharedthemes .. /default/background.pngwpscript = home .. /.wallpaperif awful.util.file_readable(wallpaper1) then theme.wallpaper = wallpaper1elseif awful.util.file_readable(wallpaper2) then theme.wallpaper = wallpaper2elseif awful.util.file_readable(wpscript) then theme.wallpaper_cmd = { sh .. wpscript }elseif awful.util.file_readable(wallpaper3) then theme.wallpaper = wallpaper3else theme.wallpaper = wallpaper4end--}}}theme.font = Tamsyn 10.5theme.fg_normal = #cdcdcdtheme.fg_focus = #d79d38theme.fg_urgent = #CC9393theme.bg_normal = #140c0btheme.bg_focus = #140c0btheme.bg_urgent = #2a1f1etheme.border_width = 1theme.border_normal = #140c0ftheme.border_focus = #915543theme.border_marked = #CC9393theme.taglist_fg_focus = #d47456theme.tasklist_bg_focus = #140c0btheme.tasklist_fg_focus = #d47456theme.menu_height = 16theme.menu_width = 140theme.layout_txt_tile = [t]theme.layout_txt_tileleft = [l]theme.layout_txt_tilebottom = [b]theme.layout_txt_tiletop = [tt]theme.layout_txt_fairv = [fv]theme.layout_txt_fairh = [fh]theme.layout_txt_spiral = [s]theme.layout_txt_dwindle = [d]theme.layout_txt_max = [m]theme.layout_txt_fullscreen = [F]theme.layout_txt_magnifier = [M]theme.layout_txt_floating = [|]theme.menu_submenu_icon = themedir .. /icons/submenu.pngtheme.taglist_squares_sel = themedir .. /icons/square_sel.pngtheme.taglist_squares_unsel = themedir .. /icons/square_unsel.pngtheme.tasklist_disable_icon = truetheme.tasklist_floating = theme.tasklist_maximized_horizontal = theme.tasklist_maximized_vertical = return themeIt is not working, it just displays the cursor on a black screen whit no background not taskbar and you can't open terminals or anything else.Also I copied the themes to /usr/share/awesome/themes but I don't think it is the problem.I'm on arch-linux my version of awesome is 3.5.2.If some one has done this, please help.
Awesome WM buggy rc.lua
awesome
null
_datascience.18202
I'm currently studying from Andrew Ng's Stanford handouts here (I'm at part 8). Now from what I gathered from before, all the time our goal was to minimize ||w||^2 so that we can maximize the margin. However, now as he's writing about the regularization, he's saying:The parameter C controls the relative weighting between the twin goals of making the ||w||^2 large (which we saw earlier makes the margin small) and of ensuring that most examples have functional margin at least 1.Why would making the margin small be currently a goal?
SVM regularization - minimizing margin?
svm;regularization
null
_unix.180531
I have a LUKS filesystem on a USB hard drive and I'd like to have the system generate a GUI prompt whenever the device is connected to the computer to provide the decryption password, then mount the enclosed filesystem according to instructions in /etc/fstab. Is there a way to do this? Does there exist a program which will, upon detecting the LUKS filesystem's UUID, prompt for a password, decrypt, then mount the volume in a special place, or do I have to rely on typing out cryptsetup luksOpen /dev/sdXY && mount -a every time I insert my device?
Automatically prompt to decrypt external LUKS filesystem in GNOME session?
gnome;luks
null
_webmaster.35355
I am trying to get Google to index an AJAX site (davidelifestyle.com). It's crawlable with JavaScript turned off and I have also recently implemented _escaped_content_ snapshot mechanism but all that's indexed is a home page and PDF files that are not directly available from the home page. Also when I use Fetch as Google in Webmaster Tools, it downloads the dynamic page but does not index it (Submit to Index just reloads the page).Any ideas what might be wrong?Edit: Today Index Status in Webmaster Tools showed: Total indexed: 0, Not selected: 178. According to documentation, pages are not selected because they are regarded duplicates.
Google crawling the site but refusing to index dynamic content
seo;google search console;ajax;google index
null
_unix.85776
Suppose I have the following line in my command prompt and the cursor is at the and of the line or the beginning of the line by using (CTRL+A)[subhrcho@slc04lyo pcbpel]$ abc def ghi jklHow can I navigate to a particular word say, for example, to def. I am using tcsh shell in linux with default binding which I suppose is emacs mode.P.S:I do not have a Meta key in my keyboard. I can move forward or backward between words by using CTRL+f and respectively. So I think my Meta key is the Esc key. Please correct me if that presumption is wrong. Alt isn't working either as the meta key.I had a look at the emacs documentation but invoking CTRL-s w and then pressing Enter is not working for me. It would just try to execute whatever is there in the prompt by first appending a w character to it and the prompt would say :[subhrcho@slc04lyo pcbpel]$ abc def ghi jklwabc: Command not found.
In tcsh shell how I find a particular word in the command prompt?
keyboard shortcuts;tcsh;line editor
By Meta-x, tcsh means that it expects the ESC ASCII character (aka ^[ or \e) followed by x. You can always do it by pressing Escape and x quickly in sequence, or some terminals do it by pressing Alt-x.

Some other terminals send the character x with the 8th bit set when pressing Alt-x. With xterm, you can change that by adding:

    XTerm*metaSendsEscape: true

to an X11 resource file.

Now, for searching in tcsh, if you want to emulate emacs/zsh Ctrl-R or Ctrl-S in emacs mode, you'll have to bind the i-search-back and i-search-fwd widgets:

    bindkey '^R' i-search-back
    bindkey '^S' i-search-fwd

However, note that generally, for the terminal driver, ^S is the stop character that pauses the terminal input and output (resumed with ^Q). So, if you want to bind ^S, you'll have to disable that, either by disabling flow control:

    stty -ixon

or by binding stop to some other character:

    stty stop '^T'
_webmaster.86988
I have connected my AdWords and Analytics accounts and everything seems to be working, except transactions don't show up as conversions in AdWords.In Analytics I can go to Conversions -> Ecommerce -> Overview and view my transactions.I can also go to Aquisition -> AdWords -> Campaigns and see clicks and transactions (just 1 test transaction that I made) for my AdWords campaigns.So the only problem is that I don't see the conversions in the AdWords dashboard. I went to Tools -> Conversions -> Google Analytics and setup the importing of transactions from the correct view in Analytics. Still not working!EDIT: I also don't see Real-Time Conversions in Analytics.
Google Analytics Ecommerce Transactions not importing to AdWords
google analytics;google adwords
null
_cstheory.3024
Hamiltonian cycle problem is $NP$-complete on cubic planar bipartite graphs. I'm interested in upper bounds on the length of the longest simple path in non-Hamiltonian cubic planar bipartite graphs.What is the best known upper bound on the length of longest simple path in non-Hamiltonian cubic planar bipartite graphs?Edit: Also, I am interested in non-trivial lower bounds on the length of the longest simple path in this class of graphs.
Upper bounds on the length of longest simple path in non-Hamiltonian graph?
cc.complexity theory;graph theory
There exist cubic bipartite planar graphs in which the longest path has length only $O(\log^2 n)$:
_scicomp.365
Which approaches are used in practice for estimating the condition number of large sparse matrices?
Estimation of condition numbers for very large matrices
linear algebra;matrices;conditioning
It is very common to project the matrix into the Krylov space (generated by repeated application on a vector) and then to get the condition number of the projected matrix. In PETSc, this can be done automatically using -ksp_monitor_singular_value.
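The same idea — estimating the extreme singular values with an iterative Krylov-type method rather than forming anything dense — can be sketched outside PETSc as well. The following SciPy example is my own illustration, not part of the PETSc workflow above:

    import scipy.sparse as sp
    from scipy.sparse.linalg import svds

    # A sparse test matrix; replace with the matrix whose conditioning you care about.
    n = 2000
    A = sp.random(n, n, density=1e-3, format="csr") + sp.identity(n, format="csr")

    # Largest and smallest singular values, each estimated by an iterative solver.
    # Note: which="SM" can converge slowly; solver-side estimates (as with the
    # PETSc option above) are often cheaper because they reuse the Krylov basis.
    sigma_max = svds(A, k=1, which="LM", return_singular_vectors=False)[0]
    sigma_min = svds(A, k=1, which="SM", return_singular_vectors=False)[0]

    print("estimated 2-norm condition number:", sigma_max / sigma_min)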
_unix.323163
I have a csv file that has many lines of timestamps in the following format HH:MM:SS:MS. For example:

    00.00.07.38
    00.00.08.13
    00.00.08.88

The hour is not relevant to me, so I would like to cut it out. How do I remove HH from every line in the file with bash?

I can read line by line from the file:

    while IFS=, read col1
    do
        #remove HH from every line
        #awk -F '[.]' '{print $1}' <<< $col1    #only prints one portion of time
        #echo $col1 | cut -d. -f2 | cut -d. -f3 | cut -d. -f4
    done < $file

I have been playing around with awk and cut but was only able to print a specific position (e.g. HH). But how do I remove just the HH from each line without creating a new file?
Remove a specific portion of line
bash;sed;awk;cut
null
_cs.60098
I was reading about the Multi-Layer Perceptron (MLP) and how we can learn patterns using it. The algorithm was stated as:

1. Initialize all weights to small values.
2. Compute the activation of each neuron using the sigmoid function.
3. Compute the error at the output layer using $\delta_{ok} = (t_{k} - y_{k})y_{k}(1-y_{k})$.
4. Compute the error in the hidden layer(s) using $\delta_{hj} = a_{j}(1 - a_{j})\sum_{k}w_{jk}\delta_{ok}$.
5. Update the output layer weights using $w_{jk} := w_{jk} + \eta\delta_{ok}a_{j}^{hidden}$ and the hidden layer weights using $v_{ij} := v_{ij} + \eta\delta_{hj}x_{i}$.

Here $a_{j}$ is the activation, $t_{k}$ is the target, $y_{k}$ is the output, and $w_{jk}$ is the weight of the connection between neurons $j$ and $k$.

My question is: how do we get that $\delta_{ok}$, and where does $\delta_{hj}$ come from? How do we know this is the error? Where does the chain rule from differential calculus play a role here?
error computation in multi layered perceptron
machine learning;neural networks
How do we get that $\delta_{ok}$?

You calculate the gradient of the network. Have a look at Tom Mitchell: Machine Learning if you want to see it in detail. In short, your weight update rule is
$$w \gets w + \Delta w$$
with the $j$-th component of the weight vector update being
$$\Delta w^{(j)} = - \eta \frac{\partial E}{\partial w^{(j)}}$$
where $\eta \in \mathbb{R}_+$ is the learning rate and $E$ is the error of your network. $\delta_{ok}$ is just $\frac{\partial E}{\partial w^{(o,k)}}$, so I guess $o$ is an output neuron and $k$ a neuron of the last hidden layer.

Where does the chain rule play a role?

The chain rule is applied to compute the gradient. This is where backpropagation comes from: you first calculate the gradient with respect to the last layer's weights, then the weights before those, and so on. This is done by applying the chain rule to the error of the network.

Note

The weight initialization should not only be small; it also has to differ between the weights of one layer. Typically one chooses (pseudo)random weights.

See: Glorot, Bengio: Understanding the difficulty of training deep feedforward neural networks
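To make the chain rule step concrete, here is a short sketch assuming the usual squared error $E = \frac{1}{2}\sum_k (t_k - y_k)^2$, a sigmoid output $y_k = \sigma(net_k)$ with $net_k = \sum_j w_{jk} a_j$, and the identity $\sigma'(net_k) = y_k(1-y_k)$ (these assumptions are mine, not stated in the answer above):
$$\frac{\partial E}{\partial w_{jk}} = \frac{\partial E}{\partial y_k}\,\frac{\partial y_k}{\partial net_k}\,\frac{\partial net_k}{\partial w_{jk}} = -(t_k - y_k)\; y_k(1-y_k)\; a_j$$
so with $\Delta w_{jk} = -\eta\,\frac{\partial E}{\partial w_{jk}}$ you recover exactly $\Delta w_{jk} = \eta\,\delta_{ok}\,a_j$ with $\delta_{ok} = (t_k - y_k)y_k(1-y_k)$, which is the update rule quoted in the question. The hidden-layer term $\delta_{hj}$ comes from applying the same chain rule one layer further back, summing over all output neurons that $a_j$ feeds into.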
_cstheory.23792
A recent paper out of Microsoft Research describes a new, faster implementation of the patience sort algorithm. A key part of the implementation is an improved merging strategy dubbed the ping-pong merge. I am confused as to why this merge strategy uses two arrays to perform the merging, instead of just using a single array and always performing a blind merge as described in the paper. It seems that always performing blind merges, and thus only using a single array to perform the merge, would cut down memory usage with no change in runtime.
Patience Sort+ ping pong merge implementation
ds.algorithms;ds.data structures;sorting
null
_softwareengineering.340969
I'm building a Spring Boot API that will serve a system that requires synchronous behavior despite the API doing async operations. I'm not too familiar with threads and I'm thinking this might be the right place to use them. I'm eager to hear some input.

Here are the constraints:

1. The API will write to a queue when a request comes in.
2. The consumer of the queue will write to a database upon successfully completing the request.
3. The API will continually poll the DB to see when the request has been satisfied.
4. The API responds to the client.

The expectation is that the API will field a few hundred requests per minute, but should scale to handle maybe 10,000/minute one day. This will live in Elastic Beanstalk, so we do get some flexibility with scaling. One specific question is: is there a way to take advantage of Java/Spring features to significantly decrease the load on the system, or is Spring capable of managing compute resources itself?
Design synchronous wrapper around async process
architecture;performance;multithreading
null
_softwareengineering.324735
In a JavaScript app, suppose I have a nested object like this:var myObject = { someProp: { someOtherProp: { anotherOne: { yetAnother: { myValue: hello! } } } }}In another part of the code, I'm calling myValue multiple times (I just call it three times in this example but imagine there's a lot more):someFunc() { doSomething(myObject.someProp.someOtherProp.anotherOne.yetAnother.myValue) doSomethingElse({ arg1: something, arg2: myObject.someProp.someOtherProp.anotherOne.yetAnother.myValue }) if (myObject.someProp.someOtherProp.anotherOne.yetAnother.myValue == hola) { doStuff() }}Apart from the obvious readability and maintainability gains, is it actually faster to do it like this:someFunc() { let val = myObject.someProp.someOtherProp.anotherOne.yetAnother.myValue doSomething(val) doSomethingElse({ arg1: something, arg2: val }) if (val == hola) { doStuff() }}Or is it pretty much the same, behind the scenes?In other words, does the interpreter have to walk the whole myObject tree and search for each nested property each time, in case 1, or does it somehow cache the value, or is there another mechanism that makes it as fast as case 2?Context: any modern browser (Chrome, Safari, Firefox, Edge).
Is it faster to make a dedicated variable instead of calling deeply nested object several times?
javascript;computer science
The optimization you are performing by hand is called common subexpression elimination. According to this article, Chrome's V8 has performed this operation since at least 2011; Webkit's JIT also does it; SpiderMonkey, the JS engine in Firefox also does it. I haven't found a good description of the optimization performed by Chakra, Edge's JIT, but the source code is online and the function located here appears to perform the relevant optimization.It therefore seems that there is little point in doing this for optimization purposes (at least for desktop browsers -- note that mobile browsers in many cases do not have JIT compilers, or if they do they are much more basic). But it's probably worth doing anyway in order to make your code more readable.
_codereview.101280
I previously asked about my implementation of the Sieve of Eratosthenes algorithm here. After looking at all of the feedback, I have reworked the code to make it significantly more efficient. However, I'd like to know whether it can be made more efficient still.

I have tried to follow pseudocode for my implementation, which I have provided below:

    Input: an integer n > 1
    Let A be an array of Boolean values, indexed by integers 2 to n, initially all set to true.
    for i = 2, 3, 4, ..., not exceeding √n:
        if A[i] is true:
            for j = i^2, i^2+i, i^2+2i, i^2+3i, ..., not exceeding n:
                A[j] := false
    Output: all i such that A[i] is true.

Pseudocode sourced from here.

My implementation:

    import java.util.Arrays;
    import java.util.Scanner;

    public class sieveOfEratosthenes {
        public static void main (String [] args) {
            int maxPrime;
            try (Scanner sc = new Scanner(System.in);) {
                System.out.print("Enter an integer greater than 1: ");
                maxPrime = sc.nextInt();
                sc.close();
            }
            long start = System.nanoTime();
            boolean [] primeNumbers = new boolean [maxPrime];
            Arrays.fill(primeNumbers, true);
            int maxNumToTest = (int) (Math.floor(Math.sqrt(maxPrime)));
            for(int i = 2; i <= maxNumToTest; i++) {
                if (primeNumbers[i] == true) {
                    for (int j = i * i; j < maxPrime; j += i) {
                        primeNumbers[j] = false;
                    }
                }
            }
            long stop = System.nanoTime();
            for(int i = 2; i < primeNumbers.length; i++) {
                if(primeNumbers[i] == true) {
                    System.out.print((i) + ", ");
                }
            }
            System.out.println("\nExecution time: " + ((stop - start) / 1e+6) + "ms.");
        }
    }

I have tested my implementation and found that it is capable of calculating all primes below 10,000,000 in ~110ms on an i5 processor.

My question: Is this as fast as is physically possible in Java, or can I make further improvements?
Java Implementation of Sieve of Eratosthenes algorithm
java;performance;algorithm;sieve of eratosthenes
Using vertical spaces (new lines) to group related code will help make your code easier to read.

- In if (primeNumbers[i] == true) the check against == true is superfluous, because the condition of the if is already a boolean, so you can simplify this to if (primeNumbers[i]).

- Instead of using maxPrime here

      boolean [] primeNumbers = new boolean [maxPrime];
      Arrays.fill(primeNumbers, true);

  you should take advantage of the computed int maxNumToTest like you did in the first loop. You should also use this computed value as the end of the inner loop and the result loop, like so:

      int maxNumToTest = (int) (Math.floor(Math.sqrt(maxPrime)));
      boolean [] primeNumbers = new boolean [maxNumToTest];
      Arrays.fill(primeNumbers, true);
      for (int i = 2; i <= maxNumToTest; i++) {
          if (primeNumbers[i]) {
              for (int j = i * i; j <= maxNumToTest; j += i) {
                  primeNumbers[j] = false;
              }
          }
      }
      long stop = System.nanoTime();
      for (int i = 2; i <= maxNumToTest; i++) {
          if (primeNumbers[i]) {
              System.out.print((i) + ", ");
          }
      }

- Putting everything in main() should be avoided. The current code could be divided into 3 methods:
  - reading the input
  - computing the primes
  - writing the output

  In this way each method has a single responsibility and is easier to read and to maintain.
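A possible skeleton for that three-method split might look like the following. Class and method names are only illustrative, and the sieve bounds are kept as in the question's original code; only the organization changes:

    import java.util.Arrays;
    import java.util.Scanner;

    public class SieveOfEratosthenes {

        public static void main(String[] args) {
            int maxPrime = readLimit();                        // 1. read the input
            boolean[] primeNumbers = computePrimes(maxPrime);  // 2. compute the primes
            printPrimes(primeNumbers);                         // 3. write the output
        }

        private static int readLimit() {
            try (Scanner sc = new Scanner(System.in)) {
                System.out.print("Enter an integer greater than 1: ");
                return sc.nextInt();
            }
        }

        private static boolean[] computePrimes(int maxPrime) {
            boolean[] primeNumbers = new boolean[maxPrime];
            Arrays.fill(primeNumbers, true);
            int maxNumToTest = (int) Math.floor(Math.sqrt(maxPrime));
            for (int i = 2; i <= maxNumToTest; i++) {
                if (primeNumbers[i]) {
                    for (int j = i * i; j < maxPrime; j += i) {
                        primeNumbers[j] = false;
                    }
                }
            }
            return primeNumbers;
        }

        private static void printPrimes(boolean[] primeNumbers) {
            for (int i = 2; i < primeNumbers.length; i++) {
                if (primeNumbers[i]) {
                    System.out.print(i + ", ");
                }
            }
            System.out.println();
        }
    }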
_unix.337582
I am trying to take a screenshot in Lubuntu, but when i try to paste it on an image editor like GIMP it says There is no image data in the clipboard to paste.
Can't take a screenshot on Lubuntu 16.04
keyboard shortcuts;images;screenshot
null
_unix.117664
PGP Whole disk encryption (WDE) is an encryption software that encrypts all your data, and includes a bootguard. Because of this, I have been unable to successfully dual-boot, as there is no place to install GRUB. (I did attempt an install, the result of which was a complete loss of my Windows partition, and a completely shot weekend.) I attempted the Ubuntu dual-boot (wubi version, does not install bootloader), but since the sequence is PGP decrytption-bootloader-OS, the data necessary for the install were inaccessible. The computer is a work computer, and I have not been granted permission to decrypt it. I'm booting off an external drive for now, but are there any good means of doing this?
How to dual-boot on a machine with PGP WDE?
ubuntu;windows;grub2;dual boot;pgp
You can install GRUB (instead of the whole OS) on a USB stick or CD and use that for booting. Having such a boot medium is generally nice (at least on non-UEFI systems, if a system gets so screwed up that it doesn't boot any more).
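For example, with a mounted USB stick, installing GRUB 2 to it might look roughly like this. The device name and mount point are placeholders; double-check them before running, since grub-install writes to the device's boot sector:

    # USB stick is /dev/sdX, its first partition mounted at /mnt/usb
    sudo grub-install --boot-directory=/mnt/usb/boot /dev/sdX
    sudo grub-mkconfig -o /mnt/usb/boot/grub/grub.cfg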
_webapps.40288
Is there a way to set the shape menus I want in draw.io? When starting a new drawing I always have to go through the process of closing the ones I don't need and opening the ones I do. I would like to cut this step from my process.Thank you for your time.
Setting default shape menus in draw.io
draw.io
For now there's a URL parameter libs that takes a semicolon separated list of the library sets you want, e.g.:https://www.draw.io/?libs=general;flowchart;basic;arrows;clipart;signs;mockups;electrical;aws;pid;leanMapping;ciscoThat's also the list of all possible values currently for that parameter. You've made me realise there's a bug with ER, BPMN and iOS sets regarding this, we'll look at that.We have discussed this and intend to persist the library settings between uses. A question for you, would you prefer this to be stored per browser (i.e. cookies) or per Google account (as a meta file in your Drive)?
_unix.158853
I can parse /etc/passwd with augtool:

    myuser=bob
    usershome=`augtool -L -A --transform "Passwd incl /etc/passwd" print /files/etc/passwd/$myuser/home | sed -En 's/\/.* = "(.*)"/\1/p'`

...but it seems a little too convoluted.

Is there any simple, dedicated tool for displaying a user's home directory (like usermod can be used to change it)?
What is the reliable way of getting each user's .ssh directory from bash?
bash;command line;scripting;hosts
You should never parse /etc/passwd directly. You might be on a system with remote users, in which case they won't be in /etc/passwd. The /etc/passwd file might be somewhere else. Etc.

If you need direct access to the user database, use getent.

    $ getent passwd phemmer
    phemmer:*:1000:4:phemmer:/home/phemmer:/bin/zsh

    $ getent passwd phemmer | awk -F: '{ print $6 }'
    /home/phemmer

However, there is also another way that doesn't involve parsing:

    $ user=phemmer
    $ eval echo "~$user"
    /home/phemmer

The ~ operator in the shell expands to the specified user's home directory. However, we have to use the eval because expansion of the variable $user happens after expansion of ~. So by using the eval and double quotes, you're effectively expanding $user first, then calling eval echo ~phemmer.

Once you have the home directory, simply tack /.ssh onto the end.

    $ sshdir=$(eval echo "~$user"/.ssh)
    $ echo $sshdir
    /home/phemmer/.ssh
_codereview.88333
My question is about the equals() method. Most people implement equals() to start like:@Override public boolean equals(Object o) { if (this == o) { return true; } if ( (o == null) || !(o instanceof MyClass) ) { return false; } ...The !(o instanceof MyClass) line here can be very limiting. Once a class with such an equals method is wrapped in another class, it is no-longer equal to itself! Consider the following function and wrapper class. It works like Collections.unmodifiableList(x), only I made an UnList interface that extends List to deprecate each mutator method in addition to making the method throw an UnsupportedOperationException./** Returns an unmodifiable version of the given list. */public static <T> UnList<T> un(List<T> list) { if (list == null) { return UnList.empty(); } if (list instanceof UnList) { return (UnList<T>) list; } if (list.size() < 1) { return UnList.empty(); } // Declare a real class instead of an anonymous one so that we can // check the class of the object passed to the equals() method and // compare the inner objects if that's appropriate. class UnListWrapper<E> implements UnList<E> { private final List<E> inner; private UnListWrapper(List<E> i) { inner = i; } @Override public UnListIterator<E> listIterator(int index) { return un(inner.listIterator(index)); } @Override public int size() { return inner.size(); } @Override public E get(int index) { return inner.get(index); } @Override public int hashCode() { return inner.hashCode(); } @Override public boolean equals(Object o) { // Typical implementations of equals() check the exact // class of the passed object so actually give up the // inner, wrapped object to allow the class check to pass // in those cases. For best results, use Sorted // collections that don't call this method. return inner.equals((o instanceof UnListWrapper) ? ((UnListWrapper) o).inner : o); } }; return new UnListWrapper<>(list);}When I wrote the above equals() method, I thought that I was doing a good thing by broadening the scope where the equals method could still work. But sometimes failing fast can be kinder than working half the time.Imagine a bunch of lists with the above implementation being thrown into a HashSet. Some are wrapped in the UnListWrapper, some are not. If you put in two equivalent unwrapped lists, or put the unwrapped list in first, then you add the wrapped one, the equals method in the wrapped list will compare its inner list and see that they are the same. But if you add the wrapped list followed by an unwrapped one, the equals method of the unwrapped List could reject the wrapped one as being the wrong class, and you'd end up with two of the same List in your set.This order-dependent problem can be hard to test for and is an awful kind of problem to debug. So now I'm thinking that it's kinder to implement equals to fail fast: @Override public boolean equals(Object o) { return inner.equals(o); }Then if someone writes a List with an instanceof MyClass equals() implementation, they will have to fix their equals to use instanceof List which is probably better all the way around.On the other hand, the end user might not have control over the List implementation and might not put it in a HashSet and would appreciate things working half the time.I'm at the edge of my experience here and could use some better informed opinions. If this has no right answer, is there a best wrong answer?P.S. 
I'm using the TreeSet and TreeMap implementations from Clojure so that people using this utility don't have to worry about equals() and hashCode(). But just writing JUnit tests, having a working equals() is extremely useful, 'cause Java is designed to be used that way, and a lot of Java expects it, even if using a separate Comparator is generally superior.
Is one-sided equality more helpful or more confusing than quick failure?
java;inheritance;collections
This statement here:

    The !(o instanceof MyClass) line here can be very limiting. Once a class with such an equals method is wrapped in another class, it is no longer equal to itself!

is not true. Consider this code taken from AbstractList:

    public boolean equals(Object o) {
        if (o == this)
            return true;
        if (!(o instanceof List))
            return false;

        ListIterator<E> e1 = listIterator();
        ListIterator<?> e2 = ((List<?>) o).listIterator();
        while (e1.hasNext() && e2.hasNext()) {
            E o1 = e1.next();
            Object o2 = e2.next();
            if (!(o1==null ? o2==null : o1.equals(o2)))
                return false;
        }
        return !(e1.hasNext() || e2.hasNext());
    }

That code works for almost all List implementations out there, including LinkedList, ArrayList, etc. Note the !(o instanceof List) check: since AbstractList, ArrayList, LinkedList, etc. are all instances of List, they will all 'pass' that instanceof test, and will be compared using matching ListIterator sequences.

Also, note that there is no need to test for o == null either. In your question you say "Most people implement equals() to start like...":

    if ( (o == null) || !(o instanceof MyClass) ) { return false; }

Well, that o == null check is redundant, because null is not an instance of any class, so the line can just be:

    if ( !(o instanceof MyClass) ) { return false; }

The bottom line is that, since your UnList implements List, the list it wraps should have a meaningful implementation of both equals(..) and listIterator() already, and, as a consequence, there should be no need to re-implement or override the equals(...) method at all.

If your equals(...) method is not working using its base/underlying implementation, then you have a bug somewhere else.
_cogsci.6110
There seem to be at least two kinds of confusion regarding novel concepts.In one, the brain simply can't form an abstract model from whatever information is being presented. It's where you can't wrap your head around something.In another, the brain perceives a cognitive dissonance, where something doesn't add up, but further information provides a unifying model that can explain everything being presented.They seem to be related.The brain is somehow able to evaluate the fidelity of an abstract model that it's trying to construct. How does this happen?
How does the brain know whether or not it comprehends a novel concept?
perception;cognitive neuroscience;cognitive modeling;reading;artificial intelligence
Interesting question! A related phenomenon called the illusion of explanatory depth (IOED) suggests that the human cognitive system has a systematic weakness in this kind of evaluation--I believe the classic example is asking people if they know how a helicopter works (most people say yes), and then asking them to explain how a helicopter works (very few people even get as far as mentioning airfoils or lift).Although the mechanism behind the illusion of explanatory depth (and thus the mechanism behind evaluation of incomprehension in general) is understudied, a 2010 review examined six studies and drew a few conclusions: IOEDs have been found in natural, mechanical, and social-psychological domains.People who adopt a concrete construal style (as opposed to an abstract style), either naturally or from prompting, diminish the scope of their IOED. Missing the trees for the forest: A construal level account of the illusion of explanatory depth. Alter, Adam L.; Oppenheimer, Daniel M.; Zemla, Jeffrey C. Journal of Personality and Social Psychology, Vol 99(3), Sep 2010, 436-451. http://dx.doi.org/10.1037/a0020218
_softwareengineering.128003
I'm implementing a bounding volume hierarchy in F#. Since it would be for a game, I want the garbage collector to be as quick and infrequent as possible.It seems though that I may have to pull some whacky tricks, probably pre-allocating everything. That means that I can't have many things immutable, and that I have to know up front how large my tree will be -- a major annoyance.I'll probably end up biting the bullet and doing just that (or maybe just go back to C++), but for the record, are trees inherently bad for GC performance? They would seem to be, considering the mark stage would have to traverse a lot of nodes.
Are tree structures inherently bad for mark-and-sweep garbage collector performance?
game development;garbage collection;trees
I understand that you are concerned about this issue not because you suffer from the premature optimization syndrome, but because you are worried that this potential performance issue might need to be taken into consideration in order to make the right choices of technologies in the very beginning of the project. If that is your concern, there is no answer oh, yes, that's really going to kill the GC or no, don't worry, everything will be fine that you should really depend on. Just implement a test scenario, and see for yourself how it performs.(My guess is that there will be no problem whatsoever. Even in conditions 10x worse than your projected usage.)
_softwareengineering.121798
Possible Duplicate:Why programming open source? I'm having trouble understanding why people dedicate their time to an open source project that's free as in beer instead of focusing on a closed source, paid project.Closed source projects seem to be more commercially viable, so why do programmers open source their code and make it free when there are commercial opportunities for it?
What is the economic rationale for focusing on free, open source projects?
open source;commercial
Even as a purely economic decision (which it isn't, as several others pointed out), the rational function to optimize is your cumulative future value, not your immediate future value. By working on a non-paid project, you may increase:Your competence Your reputationYour devotion to the profession (i.e, to a hiring manager)Your mastery of a potentially profitable domain (e.g., a new language, hardware platform, or domain)Your association with a product for which people will pay support All of which may maximize long-term economic benefits.
_unix.189726
How can I install all packages from given yum repo? Yum list contains a header and you cannot tell it to print only the first line.
Install all packages from given yum repo
yum
null
_cs.67841
Consider the following problem:Is $L_1 * L_2$ is of $LType$?where we know that both $L_1$ and $L_2$ are of type $LType$ and $LType$ is closed under $*$ operation.Above, by $LType$, I mean any type of language that is known to belong to any type of Chomsky hierarchy. So it may be regular (Chomsky Type 3), CFL or DCFL (Chomsky Type 2) etc.Also by operation $*$, I mean any binary operation that results in another language, say intersection $\cap$, union $\cup$, difference $-$ etc.Given all these pre-information of the type of languages and their closure under the specified operation, is the above problem decidable?For exampleDoes the problem:Given that $R_1$ and $R_2$ are regular, is $R_1 \cap R_2$ too a regular language? is decidable, especially when we know that regular languages are closed under intersection? Also can we comment in the same manner about undecidability of operation when we know that the type of operand languages is not closed under that operation? That is Does the problemIs $L_1*L_2$ is of $LType$?is undecidable?, especially when we know that $LType$ is not closed under the operation $*$ and that $L_1$ and $L_2$ are of type $LType$?For exampleDoes the problemIs $CFL_1\cap CFL_2$ is CFL?is undecidable?, especially when we know that CFLs are not closed under intersection.I feel both of above facts are correct/obvious/intuitive. But I am confused because no book on the theory of automata states it clearly.
How the closure properties of the formal languages dictate decidability of their problem
formal languages;automata;undecidability;decision problem;chomsky hierarchy
If a family $\mathcal F$ of languages is closed under some operation, say intersection, then you are right that it is decidable whether the intersection of any two particular languages from $\mathcal F$ also belongs to $\mathcal F$. The answer is always Yes.The converse, however, isn't true. Consider the family $\mathcal F_m$ of all regular languages which are accepted by some DFA having at most $m$ states, for some parameter $m$. When $m > 1$, this language family isn't closed under intersection, but given any two languages from $\mathcal F_m$ (in any reasonable encoding), it is decidable (depending on the encoding, efficiently) whether their intersection belongs to $\mathcal F_m$.
_unix.164518
I am having a hard time figuring out exactly how to run rsync to get it to do what I need it to do. Basically what I need is as follows given a single source folder with multiple sub-directories:-If files for a given subdirectory are changed in the source folder, sync those changes to the destination (update files and delete files not found in the source folder any longer).-If a folder is found in the source but not the destination, sync the folder and all of its contents to the destination.-If a folder is found in the destination but not in the source, do nothing (e.g. don't delete it).This is what the directory structure would look like:Source Folder Folder 1 File 1 unchanged.txt Folder 2 File 2 newer.txt Folder 3 File 3.txtDestination Folder Folder 1 File 1 unchanged.txt Folder 2 File 2 old.txt (to be replaced with File 2 newer.txt) (Folder 3 not yet in destination, to be added from source) Folder X (not in source, to be left untouched)
Can rsync be configured to avoid modifying subdirectories not found in the source folder?
rsync
null
_codereview.163891
I have written Command Class & CommandCollection class.Every command will have instance of Command object and it get assigned in the CommandCollection object.Usage:$commandCollection = new CommandCollection();$data = ['domain' => 'www.domain.com'];// version 4 and 5 is belong to script_one.sh file under add-site name$commandCollection->addCommand('add-site', 'path/script_one.sh', $data) ->forVersion([4, 5]);$commandCollection->addCommand('add-site', 'path/script_two.sh', $data) ->forVersion([6]);View the shell script of version 5 (path/script_one.sh)echo $commandCollection->viewCommand('add-site', 5);I will be using CommandCollection object in some of Manager classes such as SitesManager class and CronJobManager classWhat do you think of my code and is there anything could be improved or refactored?CommandCollection Classclass CommandCollection{ private $commandFiles; public function __construct() { $this->commandFiles = collect([]); } public function addCommand($name, $filename, $data) { $nameFilter = $this->commandFiles->where('name', $name)->where('filename', $filename); if (!$nameFilter->isEmpty()) { throw new \Exception('Already exists!'); } $command = new Command($name, $filename, $data); $this->commandFiles->push($command); return $command; } public function viewCommand($name, $version) { foreach ($this->commandFiles as $commandFile) { if ($commandFile->name == $name && $commandFile->hasVersion($version)) { return $commandFile->render(); } } }}Command Classclass Command{ protected $name; protected $filename; protected $version; protected $data; public function __construct($name, $filename, $data) { $this->name = $name; $this->filename = $filename; $this->data = $data; $this->version = collect(); } public function forVersion($versions) { foreach ($versions as $version) { $this->version->push($version); } } public function render() { return view($this->filename, $this->data)->render(); } public function hasVersion($number) { return $this->version->contains($number); } public function __get($property) { if (property_exists($this, $property)) { return $this->$property; } }}
Command Class & CommandCollection class
php;object oriented;laravel
I would suggest that you instantiate your Command object and inject into CommandCollection via your addCommand() method. $command = new Command('add-site', 'path/script_one.sh', $data);$command->forVersion([4, 5]);$commandCollection->addCommand($command);This gives you clear separation of logic such that your collection no longer needs to understand how to instantiate the Command object. It also removes the obfuscated logic of calling forVersion() on a Command object returned from addCommand(). Why would addCommand() be expected to return a Command object instead of simply adding a Command object to the collection?Passing the Command dependency like this would also allow you to type hint that a valid Command object is passed. Right now, you are doing absolutely nothing to validate that that the parameters being passed to this method (or really all other public methods across these two classes) contain good values to work with.Should data passed to forVersion() be information that is actually passed to the Command constructor, as opposed to being passed in a separate method? Does the version data really need to be mutable on the object such that there should be a setter method like this? Are you allowing your Command objects to be instantiated into a bad or incomplete state?Should you really be using Laravel Collection object as the basis for your collection class? In other words, do you really want to you an array for your data structure here? Note how in your methods you will have to repeatedly iterate over the collection/array to determine if certain entries are present, or to get their values. Your addCommand() method performs an \$O(n^2)\$ search against your underlying Collection just to be able to add an entry. Your viewCommand() requires an \$O(n)\$ iteration over the collection in order to return a Command file rendering.This might not be a problem if you are only ever expecting to be working with a handful of entries in the collection, but will become a problem if you expect your collection to grow to significant sizes (even 100 entries would cause 10,000 individual iterations on the collection to be performed to add a new command, 1000 entries would require 1,000,000 iterations!).If this concerns you, you might consider a key-value lookup scheme to allow for quick \$O(1)\$ lookup of results.For example a structure like the following would enable fast lookup against the collection.[ '{filename}' => [ {version} => {Command Object}, {anotherversion} => {Command Object}, ... ], '{anotherFilename}' => [ ... ], ...]With a viewCommand() method like:public function viewCommand($name, $version){ if(isset($commandFiles[$name][$version])) { return $commandFiles[$name][$version]->render(); } return false;}Why should the collection hold a method for rendering the command? You already have the render() command living properly on the object, so isn't this basically a find() method?For example:$command = $collection->find($name, $version);$command->render();The collection should just be a collection. 
With methods only for managing/accessing the collection not for calling methods on the objects contained in the collection.Note that generalizing to just a find() method would also allow you to call this method in the addCommand() method to see if command already exists.Note that if you decided you still want to stick with Laravel Collection as your base data structure, then you might also get to the realization that your collection class really adds little value above and beyond what the Laravel class adds. So why not:$collection = collect([]);$collection->push(new Command(...));// and to render$command = $collection->where('name', $name);$command->render();Otherwise you might find yourself eventually adding filter, sort, etc. methods to your class because you have hidden away the underlying Collection capabilities.Is it conceivable that a command in the collection could be replaced with a new Command object? If so, then you should not be throwing exception in this case. I would not think a collection would necessarily need to hold the logic as to whether an item in the collection could be replaced or not, as a caller could easily check for existence of an item in the collection and make decision to overwrite or not.public function __get($property){ if (property_exists($this, $property)) { return $this->$property; }}What if property does not exist? Should you throw exception? This condition would likely mean caller is accessing this class in unexpected fashion and should probably fail loudly.$commandFile->name == $nameConsider using exact comparisons as default behavior, only using loose comparisons where there truly is a use case for it.
_softwareengineering.313907
I'm relatively new to C++, so I'm not sure how I should best handle small dependencies (e.g., a scripting language, or a JSON/YAML/XML Parser).Should I create separate projects and link them as static library, or are there downsides of just putting the .h/.cpp files into my main project?The latter seems a lot easier because I've spent several hours dealing with incompatible libraries (different compiler setting when building the library), but I don't want to start off learning C++ the wrong way.If it's preferable to keep them as separate libraries, how would I best keep compilation flags in sync so that the .lib/.a files successfully link to my application?(I'm currently working with MSVC 2015, but the goal is to compile on Mac OS X and iOS using XCode/clang, so that I have to deal with at least 3 different types of libraries (Win x86, Mac x64, ARM))
Should I add the source of libraries instead of linking to them?
c++;linking
null
_codereview.105845
I'm a beginner programmer and decided to try some coding challenges. I found CodeEval and attempted the first challenge, FizzBuzz. However upon submitting my code I found that my submission only partially solved the problem.Link to the challengeChallenge Description:Players generally sit in a circle. The first player says the number 1, and each player says next number in turn. However, any number divisible by X (for example, three) is replaced by the word fizz, and any divisible by Y (for example, five) by the word buzz. Numbers divisible by both become fizz buzz. A player who hesitates, or makes a mistake is eliminated from the game.Write a program that prints out the final series of numbers where those divisible by X, Y and both are replaced by F for fizz, B for buzz and FB for fizz buzz.Input Sample: Your program should accept a file as its first argument. The file contains multiple separated lines; each line contains 3 numbers that are space delimited. The first number is the first divider (X), the second number is the second divider (Y), and the third number is how far you should count (N). You may assume that the input file is formatted correctly and the numbers are valid positive integers.For Example:1) 3 5 102) 2 7 15Output Sample:Print out the series 1 through N replacing numbers divisible by X with F, numbers divisible by Y with B and numbers divisible by both with FB. Since the input file contains multiple sets of values, your output should print out one line per set. Ensure that there are no trailing empty spaces in each line you print.1) 1 2 F 4 B F 7 8 F B2) 1 F 3 F 5 F B F 9 F 11 F 13 FB 15How can I improve my code speed-wise and memory-wise so that I can fully complete this challenge?The code is working for my test cases.import sysdef main(name_file): _test_cases = open(name_file, 'r') for test in _test_cases: result = if len(test) == 0: break else: test = test.split() first_div = int(test[0]) second_div = int(test[1]) lim = int(test[2]) for i in range(1, lim+1): if i % (first_div*second_div) == 0: result += FB elif i % first_div == 0: result += F elif i % second_div == 0: result += B else: result += %d % i print(result[:-1]) _test_cases.close()if __name__ == '__main__': main(sys.argv[1])
FizzBuzz for CodeEval
python;performance;beginner;python 3.x;fizzbuzz
There is a bug in your code for those times when the two input values are not prime.Consider an instance when the input values are 4 and 6. In this case, your code will output F for multiples of 4, and B for multiples of 6, and FB for multiples of 24.... great... but, is it? No, 12 is a multiple of both, but will only print F .If you want to use the optimized cascading-if-else version of FizzBuzz, then you need to ensure the inputs are both prime, or have no common factors.It would often still be faster to compute the common factors and then after that do the system you have.
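One way to sidestep the x*y pitfall entirely is to test each divisor separately and concatenate the letters, which handles non-coprime inputs such as 4 and 6 correctly. This sketch is my own illustration of that idea, not code from the question:

    def fizzbuzz_line(x, y, n):
        parts = []
        for i in range(1, n + 1):
            word = ""
            if i % x == 0:
                word += "F"
            if i % y == 0:
                word += "B"
            parts.append(word if word else str(i))
        return " ".join(parts)

    # 12 is divisible by both 4 and 6, so it becomes "FB" even though 12 % (4 * 6) != 0
    print(fizzbuzz_line(4, 6, 13))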
_unix.181061
From the zsh documentation:

    ${name-word}
    ${name:-word}
        If name is set, or in the second form is non-null, then substitute its value; otherwise substitute word. In the second form name may be omitted, in which case word is always substituted.

So I can use something like:

    $ printf '%s\n' ${:-123}
    123

I wonder why zsh allows this and in which case it's useful?
Why does zsh allow name to be null in ${name:-word}?
zsh;variable substitution
I don't have an explanation for why they did it this way, but ${:-foo...} does have an application: it counts as a parameter substitution in places that syntactically require one, but always just expands to a literal you give. You can write things like this to use expansion flags or other expansion specifiers on a literal string in-place:

    $ echo ${(#)${=${:-65 66 67}}}
    A B C
    $ echo ${(q)${:-hello world!}}
    hello\ world\!

You can also use other expansions inside your word string:

    $ restofarglist=arg3,arg4
    $ echo ${${:-arg1,arg2,$restofarglist}//,/ }

All of this behaviour is documented right at the end of the section on parameter expansion:

    If a ${...} type parameter expression or a $(...) type command substitution is used in place of name above, it is expanded first and the result is used as if it were the value of name.

You can nest these expansions arbitrarily deeply, although they become incomprehensible fairly quickly. There are strange corner cases around using the ${name=word} family of expansions inside either side.

It's a pretty obtuse way of doing things in general, but it's both the only application I know of for ${:-} and the only way to make these things happen without creating another variable. The documentation could definitely be clearer, even though all the bits are technically there.
_webmaster.32009
I run an online web app which at the moment I charge for in GBP. I'm expanding to the US and Europe and am looking for the best way to change the displayed price based on where the visitor is located. Because I use PayPal, ideally it would be great if it took into account the fee PayPal charges for currency conversion, but if not it wouldn't be the end of the world.What would be the best way to do this do you think?
Changing displayed price based on visitor location?
paypal;web applications;geolocation;pricing
null
_softwareengineering.303380
What is the docstring convention in Python for the following magic class methods:

- __str__
- __unicode__
- __repr__

Should I add docstrings for these? If yes, what should they say (for each)?
Docstring convention for Python __str__, __unicode__, and __repr__ class methods
python;documentation
I wouldn't add docstrings; they'll never meaningfully differ from the python stdlib docs relating to them.
_cogsci.7855
Supposing that neurons function similarly to transistors: A neuron able to fire $200$ times per second and transistors can be switched on and off more than $100,000,000,000$ ($10^{11}$) times per second. Let's say it fires 1 out of 2 times in average.We have $86,000,000,000$ ($8.6 \cdot 10^{10}$) neurons in a brain, and $4,000,000,000$ ($4 \cdot 10^9$) transistors in medium CPU.$\text{Units count} \cdot \text{firing probability} \cdot \text{firing rate} = \text{total fires per second}$A brain's total fires per second: $(8.6 \cdot 10^{10}) \cdot 1 \cdot 200 = 1.72 \cdot 10^{13}$.A CPU's total fires per second: $(4 \cdot 10^9) \cdot \frac{1}{2} \cdot 10^{11} = 2 \cdot 10^{20}$.A CPU is faster than a brain by $\frac{2 \cdot 10^{20}}{1.72 \cdot 10^{13}} = 1.16 \cdot 10^7$, or about 12 million times faster. I gave the brain an advantage that every neuron is firing non-stop instead of just 1%, and that they fire at the rate of the fastest neurons.Why isn't this argument valid?
Why do scientists say brains are faster than computers?
neurobiology;theoretical neuroscience
There is a basic epistemological problem here that was only touched upon by Chuck Sherrington - everyone is making the assumption that the brain processes the same kind of information as a digital computer. There is no real evidence to suggest that it does, in fact. A digital computer is an instantiation of a Turing machine, which is equivalent to certain kinds of automata. In order for the processing power of the brain to be compared to that of a digital computer in the first place, one needs to show that the brain employs representations (discrete entities/states like bits) and rules (well-defined transitions between states/bits). Nobody has even come close to doing this, even for a subsection of the brain. This would be done by showing that the brain implements some digital computation - David Chalmers famously explains how this needs to be done [1]. According to the current state of research, the brain seems to be a complex biological system, operating at multiple levels of measurement, and it does not process information in discrete terms! And yes, as Chuck Sherrington says, neurons are not simply on/off!

[1] Chalmers, David J. "A computational foundation for the study of cognition." (1993).
_webmaster.48201
I am doing a web site for a client. The descriptions for all his services are rather short and precise so I decided to put them all (about 10 of them) on one page and have a sticky menu in the sidebar for quick access to each of the services. I find that a very usable experience. And for mobile (through the responsive approach), swiping through the services is more convenient than navigating to each service on a dedicated page.But now an SEO guy comes in and tells me to put all services on dedicated pages, and the client then also has to transform all the about 60 words descriptions into 400 word descriptions, because Google demands 400 words to be on a page to be a page.I my opinion this is a major step back in regard of accessibility/usability. And stuffing the descriptions with 'nonsense' just for the sake of the presumed SEO advantages? I really don't think so.I am about to tell the client, that the SEO guy might not be right about that situation, but I am not sure, so who is right?
Content blocks on dedicated pages rather than listed on one page
seo;web development;mobile;accessibility
I don't completely agree with your SEO guy that you should create a separate page with 400 words of content for each service, but I do agree with him that you shouldn't create just one page that contains all the services.

What I suggest is: create one page, as you planned, that lists every service with its title and a short description (2-3 lines), and put a "read more..." link there. When someone clicks on the "read more..." link, they go to the detailed page for that service.

As for the service pages, I recommend creating an attractive landing page with call-to-action elements rather than padding it with 400 words of content that has no meaning. I am sure this will solve your problem :)
_unix.195089
I am trying to create an alias that builds upon my previous command.

Say I run

    ag foo

After looking at the list I want to be able to use those results in vim, so I do

    vim -q<(!! --vimgrep)

The alias I want is

    alias edit-last='vim -q<(!! --vimgrep)'

But I can't seem to use !! in my alias. I'm also having a hard time finding info about what !! actually is - a built-in, an alias?
How can I use !! in zsh alias
zsh;alias;command history
!! is history expansion. The first ! starts a history expansion; !! has the event designator meaning the previous command.

You can access the command history via the fc and history builtins and via the history variable.

Since --vimgrep only makes sense with ag, your alias would be more useful if it applied to the last ag command. You can locate the previous ag command like this:

    ${${(M)history:#ag *}[1]}

Furthermore, you'll need to inject the --vimgrep into the command:

    alias edit-last='vim -q<(eval ${${(M)history:#ag *}[1]} --vimgrep)'

The last ag command won't make sense anymore if you've changed the current directory. This is difficult to detect. You may want to whitelist acceptable commands instead. This isn't a perfect test of course.

    edit-last () {
      local cmd
      setopt local_options extended_glob
      for cmd in $history; do
        case $cmd in
          ((ls|(cvs|git|hg|svn) status)(| *)) :;;
          (ag *) vim -q<(eval $cmd --vimgrep); return;;
          (edit-last) :;;
          (*) echo >&2 "The previous ag command is too old."; return 125;;
        esac
      done
    }
_codereview.54014
How do I make these lines of code more Scala-ish (shorter)? I still get a Java feeling from it (which I want to stay away from).

    import scala.collection.mutable

    val outstandingUserIds: mutable.LinkedHashSet[String] = mutable.LinkedHashSet[String]()

    val tweetJson = JacksMapper.readValue[Map[String, AnyRef]](body)
    val userObj = tweetJson.get("user")
    tweetJson.get("user").foreach(userObj => {
      userObj.asInstanceOf[Map[String, AnyRef]].get("id_str").foreach(idStrObj => {
        if (outstandingUserIds.exists(outstandingIdStr => outstandingIdStr.equals(idStrObj))) {
          outstandingUserIds.remove(idStrObj.asInstanceOf[String])
        }
      })
    })
Parsing JSON to a Map and Set structure
scala;json
If you really want to inflict this JSON library on yourself, your code could be simplified to:

    import scala.collection.mutable

    val outstandingUserIds = mutable.LinkedHashSet[String]()
    val tweetJson = JacksMapper.readValue[Map[String, AnyRef]](body)
    for {
      userObj <- tweetJson.get("user")
      idStrObj <- userObj.asInstanceOf[Map[String, AnyRef]].get("id_str")
    } outstandingUserIds -= idStrObj.asInstanceOf[String]

But I would recommend you use a better JSON library. For example, using the library that comes with the Play! framework:

    (body \ "user" \ "id_str").asOpt[String].foreach { id => outstandingUserIds -= id }
_cs.79804
http://codeforces.com/problemset/problem/535/B

The problem is:

    You are given a lucky number n. Lucky numbers are the positive integers whose decimal representations contain only the lucky digits 4 and 7. For example, numbers 47, 744, 4 are lucky and 5, 17, 467 are not.

    If we sort all lucky numbers in increasing order, what's the 1-based index of n?

    Input: a lucky number n (1 ≤ n ≤ 10^9).
    Output: index of n among all lucky numbers.
    Examples: input: 4, output: 1; input: 7, output: 2; input: 77, output: 6.

The editorial solution says:

1: Suppose n has x digits, let f(i) be the decimal value of the binary string i, and let m be a binary string of size x whose i-th digit is 0 if and only if the i-th digit of n is 4. Then the answer equals 2^1 + 2^2 + ... + 2^(x-1) + f(m) + 1.

2: Count the number of lucky numbers less than or equal to n using a bitmask (assign a binary string to each lucky number by replacing 4s with 0 and 7s with 1).

My question is: how are these binary representations used to calculate the position of the string? I am just not understanding this.
How to use Bitmasking to solve this problem?
algorithms;counting
null
_unix.74115
I'm trying to compile SmartSim for OSX Lion, and at the moment I'm at the ./configure stage.Here is a dump of what I've managed to get so far:$ ./configurechecking for a BSD-compatible install... /usr/bin/install -cchecking whether build environment is sane... yeschecking for a thread-safe mkdir -p... ./install-sh -c -dchecking for gawk... nochecking for mawk... nochecking for nawk... nochecking for awk... awkchecking whether make sets $(MAKE)... yeschecking for gcc... gccchecking whether the C compiler works... yeschecking for C compiler default output file name... a.outchecking for suffix of executables... checking whether we are cross compiling... nochecking for suffix of object files... ochecking whether we are using the GNU C compiler... yeschecking whether gcc accepts -g... yeschecking for gcc option to accept ISO C89... none neededchecking for style of include used by make... GNUchecking dependency style of gcc... gcc3checking for ... nochecking for pkg-config... /usr/local/bin/pkg-configchecking pkg-config is at least version 0.9.0... yeschecking for GLIB... yeschecking for GTK... noconfigure: error: Package requirements (gtk+-2.0) were not met:Package 'cairo', required by 'pangocairo', not foundConsider adjusting the PKG_CONFIG_PATH environment variable if youinstalled software in a non-standard prefix.Alternatively, you may set the environment variables GTK_CFLAGSand GTK_LIBS to avoid the need to call pkg-config.See the pkg-config man page for more details.However, I have installed Homebrew, and it says I have the above installed:$ brew install gtk+Warning: gtk+-2.24.17 already installedHere's a dump of my /etc/paths file:/usr/local/Cellar//usr/local/bin/usr/local/sbin/usr/bin/bin/usr/sbin/sbin/Library/Frameworks/GTK+.framework/Resources/binOh, and I have XCode installed for the compiler collection, build tools, and all the other dev tools.So, how do I get the configure script to recognize that I have the required pkgs/libs/frmwrks installed?
Installing required libs/frameworks/packages for compilation on OSX
osx;compiling;path;libraries;configure
null
_codereview.83798
I've implemented both as follows. Could someone be kind enough to just play around with various inputs and let me know if there are any bugs?#include <iostream>using namespace std;/***************************************************************************/class AStack { public: AStack(int); ~AStack(); void push(int); int pop(); int top(); bool isEmpty(); void Flush(); private: int capacity ; int* a; int index = -1; // Index of the top most element};AStack::AStack(int size) { a = new int[size]; capacity = size;}AStack::~AStack() { delete[] a;}void AStack::push(int x) { if (index == capacity - 1) { cout << \n\nThe stack is full. Couldn't insert << x << \n\n; return; } a[++index] = x;}int AStack::pop() { if (index == -1) { cout << \n\nNo elements to pop\n\n; return -1; } return a[index--];}int AStack::top() { if (index == -1) { cout << \n\nNo elements in the Stack\n\n; return -1; } return a[index];}bool AStack::isEmpty() { return (index == -1);}void AStack::Flush() { if (index == -1) { cout << \n\nNo elements in the Stack to flush\n\n; return; } cout << \n\nFlushing the Stack: ; while (index != -1) { cout << a[index--] << ; } cout << endl << endl;}/***************************************************************************/class LLStack { public: struct Node { int data; Node* next; Node(int n) { data = n; next = 0; } Node(int n, Node* node) { data = n; next = node; } }; LLStack(); ~LLStack(); void push(int); int pop(); int top(); bool isEmpty(); void Flush(); private: Node* head;};LLStack::LLStack() { head = 0;}LLStack::~LLStack() { this->Flush();}void LLStack::push(int x) { if (head == 0) head = new Node(x); else { Node* temp = new Node(x, head); head = temp; }}int LLStack::pop() { if (head == 0) { cout << \n\nNo elements to pop\n\n; return -1; } else { Node* temp = head; int n = temp->data; head = temp->next; delete temp; return n; }}int LLStack::top() { if (head == 0) { cout << \n\nNo elements in the stack\n\n; return -1; } else { return head->data; }}bool LLStack::isEmpty() { return (head == 0);}void LLStack::Flush() { while (head != 0) { Node* temp = head; head = head->next; delete temp; }}/***************************************************************************/int main() { // Insert code here to play around return 0;}
Array and Linked List Implementation of Stack
c++
null
_hardwarecs.4364
I'm looking for a decent laptop on which I can also play...I'm not really interested in super high graphics...but I would like to be able to play the latest games at 720p resolution with medium/high-ish settings. My budget is of 700 .So, I found this Medion Erazer P6661 which has pretty good specs, and is just at my budget limit, but I have been reading online about people having issues on Medion laptops and about them being build with cheap materials, and I myself have had a bad time on a Medion laptop, even though it was a really cheap one.So, here's my question: ...is this laptop worth my money? And if not, what other laptops should I consider for that price?The most important features to me are: - Skylake processor (i5 or i7) - At least 8 GB of RAM - 1 TB HDD - nVidia card (940m or better) - 17.3 screen would be preferred over 15.6, but I can't seem to find many
Good casual gaming laptop
laptop;gaming
null
_reverseengineering.311
Java and .NET decompilers can (usually) produce almost perfect source code, often very close to the original. Why can't the same be done for native code? I tried a few decompilers, but they either don't work or produce a mess of gotos and casts with pointers.
Why are machine code decompilers less capable than for example those for the CLR and JVM?
decompilation;x64;x86;arm
TL;DR: machine code decompilers are very useful, but do not expect the same miracles that they provide for managed languages. To name several limitations: the result generally can't be recompiled, lacks names, types, and other crucial information from the original source code, is likely to be much more difficult to read than the original source code minus comments, and might leave weird processor-specific artifacts in the decompilation listing.Why are decompilers so popular?Decompilers are very attractive reverse engineering tools because they have the potential to save a lot of work. In fact, they are so unreasonably effective for managed languages such as Java and .NET that Java and .NET reverse engineering is virtually non-existent as a topic. This situation causes many beginners to wonder whether the same is true for machine code. Unfortunately, this is not the case. Machine code decompilers do exist, and are useful at saving the analyst time. However, they are merely an aid to a very manual process. The reason this is true is that bytecode language and machine code decompilers are faced with a different set of challenges.Will I see the original variable names in the decompiled source code?Some challenges arise from the loss of semantic information throughout the compilation process. Managed languages often preserve the names of variables, such as the names of fields within an object. Therefore, it is easy to present the human analyst with names that the programmer created which hopefully are meaningful. This improves the speed of comprehension of decompiled machine code.On the other hand, compilers for machine-code programs usually destroy most of all of this information while compiling the program (perhaps leaving some of it behind in the form of debug information). Therefore, even if a machine code decompiler was perfect in every other way, it would still render non-informative variable names (such as v11, a0, esi0, etc.) that would slow the speed of human comprehension.Can I recompile the decompiled program?Some challenges relate to disassembling the program. In bytecode languages such as Java and .NET, the metadata associated with the compiled object will generally describe the locations of all code bytes within the object. I.e., all functions will have an entry in some table in a header of the object.In machine language on the other hand, to take x86 Windows disassembly for example, without the help of heavy debug information such as a PDB the disassembler does not know where the code within the binary is located. It is given some hints such as the entrypoint of the program. As a result, machine code disassemblers are forced to implement their own algorithms to discover the code locations within the binary. They generally use two algorithms: linear sweep (scan through the text section looking for known byte sequences that usually denote the beginning of a function), and recursive traversal (when a call instruction to a fixed location is encountered, consider that location as containing code).However, these algorithms generally will not discover all of the code within the binary, due to compiler optimizations such as interprocedural register allocation that modify function prologues causing the linear sweep component to fail, and due to naturally-occurring indirect control flow (i.e. call via function pointer) causing the recursive traversal to fail. 
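To make the recursive traversal just described concrete, here is a rough sketch (Python, used here only as readable pseudocode; decode() is a hypothetical helper standing in for an instruction decoder, not any real disassembler's API):

def recursive_traversal(entry_points, decode):
    # decode(addr) is assumed to return (kind, target, next_addr), where kind is
    # one of 'call', 'jump', 'branch', 'ret', 'other' and target is None for
    # indirect transfers (call/jmp through a register or memory location).
    visited = set()
    worklist = list(entry_points)
    while worklist:
        addr = worklist.pop()
        if addr is None or addr in visited:
            continue
        visited.add(addr)
        kind, target, next_addr = decode(addr)
        if kind in ('call', 'jump', 'branch') and target is not None:
            worklist.append(target)      # direct transfer: the target must be code
        if kind not in ('jump', 'ret'):
            worklist.append(next_addr)   # fall through to the next instruction
    return visited                       # addresses believed to contain code

The indirect transfers (target is None above) are exactly where this scheme loses track of code, which is why real disassemblers combine it with linear sweep and heuristics.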
Therefore, even if a machine code decompiler encountered no problems other than that one, it could not generally produce a decompilation for an entire program, and hence the result would not be able to be recompiled.The code/data separation problem described above falls into a special category of theoretical problems, called the undecidable problems, which it shares with other impossible problems such as the Halting Problem. Therefore, abandon hope of finding an automated machine code decompiler that will produce output that can be recompiled to obtain a clone of the original binary.Will I have information about the objects used by the decompiled program?There are also challenges relating to the nature of how languages such as C and C++ are compiled versus the managed languages; I'll discuss type information here. In Java bytecode, there is a dedicated instruction called 'new' to allocate objects. It takes an integer argument which is interpreted as a reference into the .class file metadata which describes the object to be allocated. This metadata in turn describes the layout of the class, the names and types of the members, and so on. This makes it very easy to decompile references to the class in a way that is pleasing to the human inspector.When a C++ program is compiled, on the other hand, in the absence of debug information such as RTTI, object creation is not conducted in a neat and tidy way. It calls a user-specifiable memory allocator, and then passes the resulting pointer as an argument to the constructor function (which may also be inlined, and therefore not a function). The instructions that access class members are syntactically indistinguishable from local variable references, array references, etc. Furthermore, the layout of the class is not stored anywhere in the binary. In effect, the only way to discover the data structures in a stripped binary is through data flow analysis. Therefore, a decompiler has to implement its own type reconstruction in order to cope with the situation. In fact, the popular decompiler Hex-Rays mostly leaves this task up to the human analyst (though it also offers the human useful assistance).Will the decompilation basically resemble the original source code in terms of its control flow structure?Some challenges stem from compiler optimizations having been applied to the compiled binary. The popular optimization known as tail merging causes the control flow of the program to be mutilated compared to less-aggressive compilers, which usually manifests itself as a lot of goto statements within the decompilation. The compilation of sparse switch statements can cause similar problems. On the other hand, managed languages often have switch statement instructions.Will the decompiler give meaningful output when obscure facets of the processor are involved?Some challenges stem from architectural features of the processor in question. For example, the built-in floating point unit on x86 is a nightmare of an ordeal. There are no floating point registers, there is a floating point stack, and it must be tracked precisely in order for the program to be properly decompiled. In contrast, managed languages often have specialized instructions for dealing with floating-point values, which are themselves variables. (Hex-Rays handles floating point arithmetic just fine.) 
Or consider the fact that there are many hundreds of legal instruction types on x86, most of which are never produced by a regular compiler without the user explicitly specifying that it should do so via an intrinsic. A decompiler must include special processing for those instructions which it supports natively, and so most decompilers simply include support for the ones most commonly generated by compilers, using inline assembly or (at best) intrinsics for those which it does not support.These are merely a few of the accessible examples of challenges that plague machine code decompilers. We can expect that limitations will remain for the foreseeable future. Therefore, do not seek a magic bullet that is as effective as managed language decompilers.
_webapps.99606
Once again I need your expertise on a matter. I have a sheet and on this sheet, there are students' birthdates. What I would like to do is, I will put a number on top of a column, and if any birthdate is between the range of today() and today+the number, I want them to show up under this column. Something like this:I want to also say, in the picture as you see if a name shows up under Tomorrow section, it is not showing up in another category, from what I understand is to get this data, we basically need to compare those birthdates to today(), and for example for tomorrow x = today()+1 AND x != today() // if this request is going to cause too much workload, you can dismiss it, showing the same name under every category is fine, as long as I get the closest ones, so that I can keep track of them all the time. Thank you for all the help you do, I really can't appreciate enough for such a huge help, God Bless You All.Here is the excel file of the picture above https://docs.google.com/spreadsheets/d/1rq0Q5Ij6dBei1q-vkPICBcEZ-8k56ccAk5kQYlHssM0/edit#gid=0(file is open to public - no need a google account to see)
How to get data from a column by sorting the Birthdates column according to a given date? (Google Spreadsheet)
google spreadsheets
Does this work?
C2: =FILTER(A2:A7, B2:B7 = TODAY())
D2: =FILTER(A2:A7, B2:B7 = TODAY() + 1)
E2: =FILTER(A2:A7, B2:B7 >= TODAY() + 2, B2:B7 <= TODAY() + 7)
F2: =FILTER(A2:A7, B2:B7 >= TODAY() + 8, B2:B7 <= TODAY() + 14)
G2: =FILTER(A2:A7, B2:B7 >= TODAY() + 15, B2:B7 <= TODAY() + 30)
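If you would rather drive the cut-offs from numbers typed at the top of each column (as the question describes), the same FILTER pattern can reference those cells. A hedged variant, assuming the day offsets 1, 7, 14 and 30 are typed into D1:G1:

D2: =FILTER($A$2:$A$7, $B$2:$B$7 = TODAY() + D$1)
E2: =FILTER($A$2:$A$7, $B$2:$B$7 > TODAY() + D$1, $B$2:$B$7 <= TODAY() + E$1)
F2: =FILTER($A$2:$A$7, $B$2:$B$7 > TODAY() + E$1, $B$2:$B$7 <= TODAY() + F$1)
G2: =FILTER($A$2:$A$7, $B$2:$B$7 > TODAY() + F$1, $B$2:$B$7 <= TODAY() + G$1)

Each column then picks up birthdates that fall after the previous column's cut-off and on or before its own, so a name only appears once.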
_codereview.96613
Base on the previous question:AL N*N Tic Tac Toe GameHere is a summary of improvements:Deleting the Match class for its expensive calling constructorDeleting enum struct Type and enum struct Diagonals for no longer being neededConverting Random class to a template to allow either int or unsigned values as parametersHow can I improve this game?#include <iostream>#include <cctype>#include <array>#include <random>enum struct Player : char{ none = '-', first = 'X', second = 'O'};std::ostream& operator<<(std::ostream& os, Player const& p){ return os << std::underlying_type<Player>::type(p);}// TicTacToe Class takes care for the logic and the drawing of the game.template<std::size_t DIM> // main reason for template is avoiding global variablesclass TicTacToe {public: TicTacToe(); bool isFull() const; void draw() const; bool isWinner(Player player) const; bool applyMove(Player player, std::size_t row, std::size_t column);private: std::size_t mRemain = DIM * DIM; std::array<Player, DIM * DIM> mGrid;};template<std::size_t DIM>TicTacToe<DIM>::TicTacToe(){ mGrid.fill(Player::none);}template<std::size_t DIM>bool TicTacToe<DIM>::applyMove(Player player, std::size_t row, std::size_t column){ std::size_t position = row + DIM * column; if ((position > mGrid.size()) || (mGrid[position] != Player::none)) { return true; } --mRemain; mGrid[position] = player; return false;}template<std::size_t DIM>bool TicTacToe<DIM>::isFull() const{ return (mRemain == 0);}template<std::size_t DIM>bool TicTacToe<DIM>::isWinner(Player player) const{ std::array<bool, 2 * (DIM + 1)> win; win.fill(true); int j = 0; for (auto i : mGrid) { int x = j++; for (std::size_t k = 0; k < DIM; ++k) { if (x % DIM == k) { win[k] &= i == player; } if (x / DIM == k) { win[DIM + k] &= i == player; } if ((k == 0 && (x / DIM - x % DIM == k)) // Diagonals -> LeftTop RightBottom || (k == 1 && (x / DIM + x % DIM == DIM - k))) // Diagonals -> RightTop leftBottom { win[2 * DIM + k] &= i == player; } } } for (auto i : win) { if (i) { return true; } } return false;}template<std::size_t DIM>void TicTacToe<DIM>::draw() const{ std::cout << ' '; for (std::size_t i = 1; i <= DIM; ++i) { std::cout << << i; } int j = 0; char A = 'A'; for (auto i : mGrid) { if (j == 0) { std::cout << \n << A++; j = DIM; } --j; std::cout << ' ' << i << ' '; } std::cout << \n\n;}template<typename T>class Random{public: Random(const T& min, const T& max) : mUnifomDistribution(min, max) {} T operator()() { return mUnifomDistribution(mEngine); }private: std::default_random_engine mEngine{ std::random_device()() }; template <typename U> static auto dist() -> typename std::enable_if<std::is_integral<U>::value, std::uniform_int_distribution<U>>::type; template <typename U> static auto dist() -> typename std::enable_if<std::is_floating_point<U>::value, std::uniform_real_distribution<U>>::type; decltype(dist<T>()) mUnifomDistribution;};// Game class represent the game loop for the tic tac toe game // it simply takes inputs by switching users to check for whom is the winner. 
class Game {public: void run();private: void showResult() const; void turn(); static const std::size_t mDim = 4; static const std::size_t mNumberOfPlayers = 2; TicTacToe<mDim> mGame; std::array<Player, mNumberOfPlayers> mPlayers{ { Player::first, Player::second } }; int mPlayer = 1; Random<int> getRandom{ 0, mDim - 1 };};void Game::run(){ while (!mGame.isWinner(mPlayers[mPlayer]) && !mGame.isFull()) { mPlayer ^= 1; mGame.draw(); turn(); } showResult();}void Game::showResult() const{ mGame.draw(); if (mGame.isWinner(mPlayers[mPlayer])) { std::cout << \n << mPlayers[mPlayer] << is the Winner!\n; } else { std::cout << \nTie game!\n; }}void Game::turn(){ char row = 0; char column = 0; for (bool pending = true; pending;) { switch (mPlayers[mPlayer]) { case Player::first: std::cout << \n << mPlayers[mPlayer] << : Please play. \n; std::cout << Row(1,2,3,...): ; std::cin >> row; std::cout << mPlayers[mPlayer] << : Column(a,b,c,...): ; std::cin >> column; column = std::toupper(column) - 'A'; row -= '1'; pending = column < 0 || row < 0 || mGame.applyMove(mPlayers[mPlayer], row, column); if (pending) { std::cout << Invalid position. Try again.\n; } break; case Player::second: row = getRandom(); column = getRandom(); pending = mGame.applyMove(mPlayers[mPlayer], row, column); break; } } std::cout << \n\n;}int main(){ Game game; game.run();}
AL N*N Tic Tac Toe Game - 2
c++;c++11;tic tac toe
This is a cool idea! I like that you can change the size of the board. Sounds like a fun game. Here are my suggestions.

Use the Right Tool for the Job

I'm not sure I understand the point of TicTacToe being a template and not a regular class. If it were just a class, you could pass the dimensions of the board into a constructor. Having a template where the only thing that varies is the size of some internal storage doesn't seem like the best use of templates to me.

Readability

The applyMove() method returns true if it fails and false if it succeeds. That's counterintuitive. I'd expect the opposite.

I had a hard time understanding your isWinner() method. It's a very strange way to test for the winning condition. I think a straightforward implementation where you manually iterate over each row and column would be easier to understand and maintain. At the very least, some comments in the one you have would be nice. (Also, are you sure you've allocated enough space in the win array? It looks to me like it needs to be 3 * DIM, not 2 * (DIM + 1).) It seems like you really need 3 separate arrays and you've just made them all a single array and you're using different sections of the array to represent rows, columns, and diagonals.

The Game

In the Game class, it looks like mPlayers could be a static const std::array since it never changes and is the same for every instance.

In Game::run(), this:

mPlayer ^= 1;

is too clever. It's the type of thing that's not obvious so it's best avoided. Just use the simpler:

mPlayer = (mPlayer + 1) % 2;

In Game::turn(), you have this tortured for loop:

for (bool pending = true; pending;)

It's better written as a while loop:

bool pending = true;
while (pending)
// ... etc.
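To illustrate the row-and-column iteration suggested under Readability, here is a rough, untested sketch that keeps the question's mGrid layout and template parameter (the at() lambda is just a helper introduced for this example):

template<std::size_t DIM>
bool TicTacToe<DIM>::isWinner(Player player) const
{
    // true if the cell at (row, column) belongs to this player,
    // using the same row + DIM * column layout as applyMove()
    auto at = [&](std::size_t row, std::size_t column) {
        return mGrid[row + DIM * column] == player;
    };
    bool diag1 = true;   // top-left to bottom-right
    bool diag2 = true;   // top-right to bottom-left
    for (std::size_t i = 0; i < DIM; ++i) {
        bool wholeRow = true;
        bool wholeColumn = true;
        for (std::size_t j = 0; j < DIM; ++j) {
            wholeRow &= at(i, j);
            wholeColumn &= at(j, i);
        }
        if (wholeRow || wholeColumn) {
            return true;
        }
        diag1 &= at(i, i);
        diag2 &= at(i, DIM - 1 - i);
    }
    return diag1 || diag2;
}

Each row, each column and the two diagonals are checked exactly once, and the win array disappears entirely.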
_datascience.11688
I have a bunch of test measurement data and a semi-empirical model that has 18 parameters which I have to find so that the model fits my data well. So far I've managed to find and optimise the parameters using Optimisation and Global Optimisation algorithms in MATLAB. Now I would like to explore different approaches for the parameter estimation. I have read some papers where an approach with NNs is described. I am new to NNs and have no idea if this is even possible. I would create a two-layer network with 18 input and output neurons. I am not sure what kind of transfer function would be appropriate for the problem. The formula whose parameters I have to fit looks like this:$ y = D \sin(C \arctan(Bx - E(Bx - \arctan(Bx))))$where $B, C, D, E$ are macro-coefficients and the micro-coefficients are used to express the variation of each of the primary coefficients with respect to some other data. This is what my data looks like. How would you create a network in MATLAB for this problem? Can you give me some hints and a push in the right direction to tackle this problem? Thanks in advance.
Artificial Neural Networks and Efficient Parameter Optimization
neural network;supervised learning;parameter estimation
null
_cs.65349
The tag sampling tells me that,Creating samples from a well-specified population using a probabilistic method and/or producing random numbers from a specified distribution.Now, the question becomes: what is population then?Suppose, I am implementing a classifier. I have a set of training data in the file train.txt, and a set of test data in the file test.txt.What element can be termed as 'Sample', and what activity can be termed as 'Sampling' in this case?
What do the terms 'Sample' and 'Sampling' mean in the discussion of Pattern Recognition and Machine Learning?
sampling
Suppose that you're a pollster. You're trying to understand who will win the elections in Mars. What do you do? You sample a few eligible voters from the voter population, ask them for their preferences, and tally their votes.Sampling is not trivial, however, since your sample might not be representative, and so you might need to add in some weights when you tally the votes.Here is another example. You're trying to understand the connection between lifestyle and longevity. You sample a few random people from the general population, and give them a questionnaire. Your goal is to create a formula that predicts life expectancy given some features of the person's lifestyle, such as whether they smoke or not.The second example demonstrates the sort of problem that machine learning algorithms solve. Hopefully these examples give you enough hints to understand what sample and population mean in your context.
_webapps.11904
I'd like to add MathJax to Tumblr site, although I'm a bit confused as to where to begin.MathJax doesn't offer documentation for Tumblr, just sites like WordPress and some others.
Adding MathJax to Tumblr
tumblr
null
_cs.56587
Assume we have a set of $n$ objects $X=\{x_1,x_2,\ldots,x_n\}$, where each object $x_i$ has a penalty $p_i$. Additionally, we have a set of incompatibility constraints $C=\{(x_i,x_j),\ldots\}$, where a tuple $(x_i,x_j)$ says that object $x_i$ is incompatible with object $x_j$. The problem is to find a subset $Y$ of $k<n$ compatible objects that minimize the sum of penalties, i.e. $\min_{Y} \sum_{x_i \in Y} p_i$. The objects in $Y$ need to be compatible, i.e. $\forall x_i,x_j \in Y: (x_i,x_j) \not\in C$.Let me make an example. Assume we have 4 objects $X=\{x_1,\ldots,x_4\}$ with penalties $p_1 = 2,\ p_2 = 0.1,\ p_3=3,\ p_4=100$. Furthermore the following incompatibilities are given: $C=\{(x_1,x_2),\ (x_2,x_3),\ (x_3,x_4)\}$. The $k=2$ compatible objects that minimize the function are $Y = \{x_1,x_3\}$ with a total penalty of $5$. Object $x_2$ with the least penalty is not part of the solution, because the only compatible object is $x_4$ with a penalty $p_4=100$.I have two questions:Is this problem already known under some name or a variation of a known problem?Is there an efficient (polynomial time) algorithm to solve it?
Find k compatible objects with minimum total penalty
optimization;combinatorics
First of all you have to find whether there exist independent sets of size $k$ and then select the one with the minimum penalty. There is the maximum weight independent set problem (where the size of the independent set is unconstrained), but I am not aware of any named optimization problem which selects an independent set of exactly size $k$ and minimizes/maximizes the total weight. The decision version of the optimization problem that you describe is: does there exist an independent set of size $k$ with penalty less than or equal to $P$? We can reduce the standard independent set problem to this problem by setting the penalty of each vertex to 1 and $P = k$. Thus the decision version of the described optimization problem is NP-complete, so there won't be a polynomial-time algorithm for the problem unless $P = NP$.
_vi.11556
I'm labeling my Tmux tabs with the current file in vim like so:autocmd BufEnter * let &titlestring = ' ' . expand(%:t)set titleset t_ts=kThen I have a VimLeave autocmd to have tmux rename the tab when I exit:autocmd VimLeave * call system(tmux setw automatic-rename)However, when I exit vim, the tab is renamed to Thanks for flying VimAny ideas how to fix this?
Vim + Tmux: Resetting tmux tab name when exiting vim
vimrc;autocmd;tmux
'titleold' string (default "Thanks for flying Vim")
        global
    This option will be used for the window title when exiting Vim if the original title cannot be restored. Only happens if 'title' is on or 'titlestring' is not empty.

So

autocmd VimLeave * set notitle

should fix it.
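For reference, the combined ~/.vimrc fragment would then look roughly like this (note that t_ts needs a literal escape character before the k, typed in insert mode with Ctrl-V then Esc, shown here as ^[ — it is easy to lose when pasting; depending on the terminal you may also need set t_fs=^[\ to terminate the title string):

set title
set t_ts=^[k
autocmd BufEnter * let &titlestring = ' ' . expand('%:t')
autocmd VimLeave * set notitle
autocmd VimLeave * call system('tmux setw automatic-rename')

With that, the tmux window shows the current file while Vim runs and falls back to automatic renaming once you quit.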
_unix.30840
I have set up software RAID1 arrays using two 250GB hard drives. There are two arrays - one, named md0, holds the system, and the other, md1, works as swap:
# cat /proc/mdstat
md0 : active raid1 sda1[1] sdb1[0] 239256512 blocks [2/2] [UU]
md1 : active raid1 sda2[1] sdb2[0] 4940736 blocks [2/2] [UU]
To keep things a bit more organized, I would like to use separate partitions for /tmp, /home, /var, /opt and so on in the future. Do I need to create separate arrays for each partition, or can I somehow let my current md0 contain all these partitions without creating a dozen additional arrays?
Thanks
Software raid + separate partitions?
debian;partition;raid;software raid
Mat already said it. I will give you a quick example of a standard layout for software RAID and LVM:
sd[ab]1: /boot, 256MB - can be run as RAID1 (md0), install grub on both partitions
sd[ab]2: /, 3GB - run as RAID1 (md1)
sd[ab]3: md2 - use for VG system
After you created md2:
pvcreate /dev/md2
vgcreate system /dev/md2
lvcreate -n vartmp -L 2G system
mkfs -t ext3 -L vartmp /dev/system/vartmp
mount /dev/system/vartmp /var/tmp
I hope that is enough to get the idea. You can use LVs just like you would use a partition. If / is big enough, you can start by installing everything there, then set up your LVs and move the contents there after you booted from a rescue ISO/DVD/CD.
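To extend that to the separate /tmp, /home, /var and /opt filesystems the question asks about, repeat the same pattern, one LV per mount point (the sizes below are only placeholders, adjust to your needs):

lvcreate -n home -L 20G system
lvcreate -n var  -L 10G system
lvcreate -n opt  -L 5G  system
lvcreate -n tmp  -L 2G  system
mkfs -t ext3 -L home /dev/system/home
mkfs -t ext3 -L var  /dev/system/var
mkfs -t ext3 -L opt  /dev/system/opt
mkfs -t ext3 -L tmp  /dev/system/tmp
mount /dev/system/home /home    # and add matching entries to /etc/fstab

All of these live on the single md2 array, so no additional RAID arrays are needed.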
_codereview.57256
I'm doing some practice problems on Khan Academy. The current one is Prime factorization. I came up with this:require 'prime'def prime_factorization(n, primes = nil) return [n] if Prime.prime? n primes ||= Prime.take_while { |p| p < n/2 } factorization = [] prime = primes.detect { |p| n % p == 0 } factor = n / prime factorization << prime if Prime.prime? factor factorization << factor else factorization += prime_factorization factor, primes end factorizationendThe benchmarks for this are: user system total realprime factorization of 75: 0.000000 0.000000 0.000000 ( 0.000129)prime factorization of 750: 0.000000 0.000000 0.000000 ( 0.000241)prime factorization of 4202: 0.000000 0.000000 0.000000 ( 0.000498)prime factorization of 39450: 0.010000 0.000000 0.010000 ( 0.003061)prime factorization of 460522: 0.050000 0.000000 0.050000 ( 0.057704)What is the big O notation for this method?How does this algorithm compare to optimal prime factor algorithms in terms of complexity/performance?Is recursion a good way to solve this problem?How could it be improved?Is this a stupid question?
Is this a good Ruby implementation of Prime Factorization?
ruby;primes;complexity
Reinventing the wheelThere is an easier solution, considering that you are using the Prime class, which already has a Prime.prime_division() method that does almost the same thing. The only difference is the output format: your prime_factorization(360) would output [2, 2, 2, 3, 3, 5], whereas Prime.prime_division(360) would output [[2, 3], [3, 2], [5, 1]]. It's just a matter of converting the output by repeating each prime factor the specified number of times.require 'prime'def prime_factorization(n) Prime.prime_division(n).flat_map { |factor, power| [factor] * power }end(Thanks to @Flambino and @tokland for simplifying the transformation.)CritiqueYou've asked some very interesting questions.The big-O complexity of this method is not at all simple to analyze. You call methods such as Prime.prime?() that in turn make use of a generator. Then you also call Prime.take_while(), Prime.prime?(factor). On top of that, there's recursion. All I will say, though, is that the method can be very inefficient.The initial call to prime?(n) already does a trial division that resembles the kind of work your method will perform. Calling prime? is altogether redundant see Simplification 1 below.The call to primes ||= Prime.take_while { |p| p < n / 2 } can also be very inefficient. For example, to calculate prime_factorization(360), you would be asking it to generate primes up to 179, even though no prime factor above 5 exists.Ruby provides an Integer.divmod() function that lets you avoid having to recalculate n / p immediately after having ascertained n % p == 0.When you recurse, you test every prime number all over again, starting with 2, 3, 5, . Rather, you want to be able to continue by testing the same prime factor that succeeded.Simplification 1Assuming that you are deliberately reinventing-the-wheel, you can still improve your method in several ways:Eliminate calls to prime?Make early termination work by removing take_while in favour of a breakUse divmodReplace recursion with loopingJust yield factors as you detect them. Let to_enum and to_a take care of appending results to the array.def prime_factorization(n) def factor_generator(n) Prime.instance.each do |p| break if p > n begin div, mod = n.divmod(p) if mod == 0 yield p n = div end end while mod == 0 end end to_enum(:factor_generator, n).to_aendSimplification 2The implementation above is still inefficient, since there is already primality testing hidden inside Prime.instance.each. To avoid duplication of effort between your code and the Prime class, you should be able to test any increasing sequence of numbers that is not obviously composite.def prime_factorization(n) def factor_generator(n) for prime_candidate in Prime::Generator23.new break if prime_candidate > n begin div, mod = n.divmod(prime_candidate) if mod == 0 yield prime_candidate n = div end end while mod == 0 end end to_enum(:factor_generator, n).to_aend
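As a quick sanity check (a hedged example, run in irb with one of the definitions above loaded), either version produces the same flat list that the question's benchmarks were measuring:

require 'prime'
p prime_factorization(75)    # => [3, 5, 5]
p prime_factorization(360)   # => [2, 2, 2, 3, 3, 5]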
_unix.166686
I understand that "Everything is a file" is one of the major concepts of Unix, but sockets use different APIs that are provided by the kernel (like socket, sendto, recv, etc.), not like normal file system interfaces. How does this "Everything is a file" apply here?
Are Unix Internet sockets files?
files;socket;unix philosophy
sockets use different APIs

That's not entirely true. There are some additional functions for use with sockets, but you can use, e.g., normal read() and write() on a socket fd.

how does this "Everything is a file" apply here?

In the sense that a file descriptor is involved. If your definition of "file" is a discrete sequence of bytes stored in a filesystem, then not everything is a file. However, if your definition of "file" is more handle-like -- a conduit for information, i.e., an I/O connection -- then "everything is a file" starts to make more sense. These things inevitably do involve sequences of bytes, but where they come from or go to may differ contextually.

It's not really intended literally, however. A daemon is not a file, a daemon is a process; but if you are doing IPC your method of relating to another process might well be mediated by file-style entities.
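As a small illustration of the read()/write() point, the very same calls that operate on a descriptor returned by open() operate on a socket descriptor. A hedged, minimal sketch in C (error handling omitted; the address and port are just placeholders):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>

int main(void)
{
    /* socket() hands back an ordinary file descriptor */
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(7);                       /* placeholder port    */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* placeholder address */
    connect(fd, (struct sockaddr *)&addr, sizeof addr);

    const char msg[] = "hello\n";
    write(fd, msg, strlen(msg));   /* plain write(2), no send() required   */

    char buf[128];
    read(fd, buf, sizeof buf);     /* plain read(2), no recv() required    */

    close(fd);                     /* plain close(2), same as for any file */
    return 0;
}

The socket-specific calls such as sendto() and recvmsg() exist for the features plain files don't have (datagrams, ancillary data, flags), not because a socket fd is something other than a file descriptor.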
_softwareengineering.340187
Title says it all. Should I increment my API version if I add, say, an image property to each instance of my JSON-represented 'Restaurant' resource? or should API versioning change only when implementation changes?
Should API version change when data is added?
api design
null
_webapps.51403
I'm using the Lab feature preview-pane in Gmail. How can I quickly get an email URL, considering the URL bar doesn't change as I browse? My best solution currently is to use Show Original Message and use that URL, but while fast, the message is a bit verbose.
How to quickly get email URL in Gmail with preview-pane?
gmail
Quickly is overstating it but if you press the Print button and remove everything after the first ? to the last = and add #inbox/ in between you have yourself a link that actually works. For example https://mail.google.com/mail/u/0/?#inbox/111x11x5b93c0905.Sounds a bit complicated but takes about 10 seconds to do.
_unix.311931
I had been using Windows for a while but made the switch over to Linux. I thought I'd go with Elementary OS. I got a problem soon enough: when I tried to connect to a wireless network, it would process the information given and then prompt for the network's password once again. This only happens with networks that require a password.I then tried Linux Mint. Same problem.Then Kali Linux. Same problem. But with Kali, during installation it asked me for home network and password which I gave and which it connected. However, it fails to connect to any other network that asks for a password. I've tried using WIXD, making Wi-Fi available to all users, etc, but no luck. I know this sounds vague but any help would be really appreciated. I need this working asap.
Wireless networks keeps prompting password on every distro
linux;wifi
null
_softwareengineering.233655
I created a general class that accepts a string when it is constructed, and it spits out that string when a user calls what(). This is all it does; on throw, it returns the initializing string.

class Exception_As {
private:
    std::string message;
public:
    Exception_As(const std::string& message) { // constructor
        this->message = "EXCEPTION: " + message;
    }
    virtual const char* what() const throw() {
        return message.c_str();
    }
};

How I use my custom exception class:

bool check_range(
    const std::vector<std::string>& list,
    const unsigned& idx_start,  // where to start checking
    const unsigned& idx_end,    // where to stop checking
    const std::string& key)
{
    if ( idx_start > idx_end ) {
        throw Exception_As("check_range: start index > end index: now fix it! :D");
    }
    ... rest of code
} // end of check_range

How a user would use check_range:

// some other user code using my check_range
void main() {
    try {
        ... set up variables
        my_hero_qualified = check_range(semi_finalists, third_place, first_place, my_hero);
        ...rest of code
    } catch (const Exception_As& e) {
        std::cout << e.what() << std::endl;
    }
}

If the try/catch was not there, it would abort the program because of the thrown Exception_As. If it was there, the user would be notified that "check_range: start index > end index: now fix it! :D". For me, sure beats writing tons of BadEvent1Exception, BadEvent2Exception, etc. classes. I can see I'm being lazy with this method, and yeah, I can't do anything fancy like modify values based on exceptions or morph an object's state; but if you just entered unexpected territory, who knows if you're handling it correctly? May as well just end it so you can fix what's up and develop properly.

So, question: if all exceptions do is break the program on unexpected circumstances, won't a single exception class that returns the problem be enough? ...Or should exceptions be doing other things anyway? If not, why create so many classes of Exceptions that have different names, sure, but from what I've seen, only report that something went wrong.
Should exceptions do things other than tell the user something went wrong?
exceptions;exception handling
Short answer: NOLong answer:You are getting the concept of exceptions wrong.Exceptions do not tell the user there's something wrong. It's the calling program that should catch the exception and tell (or not tell) the user something is wrong or recover from it graciously.If an exception has code inside to fix something, then it's not an exception by definition.How can the exception get access to the context of the calling program in order to fix it? How does the exception knows the innards of the calling program in order to fix the error?Also, how can a user call what() after the program aborts? Does the exception also persist the message to a file or database?Your approach is completely new and unwritten about in books or the web.Please show us some code.
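To make the distinction concrete, here is a hedged sketch of the common alternative (one possible design, not the only one): derive the specific types from std::runtime_error so each still only carries a message, but the calling code can decide which situations it is able to recover from and which it can only report:

#include <stdexcept>
#include <iostream>

// two distinct types, but neither does anything beyond carrying a message
struct RangeError  : std::runtime_error { using std::runtime_error::runtime_error; };
struct MissingHero : std::runtime_error { using std::runtime_error::runtime_error; };

void caller()
{
    try {
        throw RangeError("check_range: start index > end index");
    }
    catch (const RangeError& e) {
        // the caller knows how to recover from this particular situation
        std::cerr << "recovering after: " << e.what() << '\n';
    }
    catch (const std::exception& e) {
        // anything else: report it and give up
        std::cerr << "fatal: " << e.what() << '\n';
    }
}

int main() { caller(); }

The exceptions themselves still do nothing but report; the different names exist so that the catch sites, not the exceptions, can choose between recovering and aborting.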
_cstheory.27698
In the Multiterminal Cut problem the input is a graph G=(V,E) and a set of k terminals T, which is a subset of the vertex set V. There is a weight w(e) associated with each edge in the graph. The question is to find the minimum weight edge set whose removal disconnects every terminal from every other terminal in T. The following things are known about the problem:
- k = O(1), degree of any vertex unbounded, edge weights = O(1): APX-hard
- k = O(1), degree of any vertex = O(1), edge weights unbounded: APX-hard
- k unbounded, degree of any vertex = O(1), edge weights = O(1): NP-hard
- k = O(1), degree of any vertex = O(1), edge weights equal: polynomial time.
Is there a result which says that the problem with k unbounded, degree of any vertex = O(1), edge weights = O(1) is also APX-hard? Any ideas on proving this will be useful.
APX-hardness of the Multiterminal Cut Problem
cc.complexity theory;graph theory;approximation hardness
null
_softwareengineering.74656
I have seen some source code containing Copyright notice in the beginning of the file. Most of them are either GNU General Public License or Apache License. If I want to develop any open source software what are the steps should I follow? Do I need licensing my software by registering somewhere, or I just add the text like described in Apache License 2.0.If I modify source code of another open source software then what should I do? Should I seek permission from them?
Software licensing and Copyright of Open source
java;open source;licensing;copyright
If I modify source code of another open source software then what should I do? Should I seek permission from them?For the licenses you named (GNU GPL, Apache 2.0) you do not need to explicitly ask for permissions to modify the code. Those are free software licenses which give you the right to modify the source code. So you already have a written statement that you're allowed to do so.This is why it is important that those files have a licensing header on top which normally contains a copyright notice (basically who wrote this file, when was it written and to which work belongs it) and the packages contain the license text in full, so you can actually read the license text. Some authors only place a link to some third party website, so you actually would not have anything written but only something linked which could have been changed or could go offline.So if you find a copyright statement and the licensing terms bundled with the code, that's a safe spot.The right to modify/adaptBecause the right to actually modify source-code is an important one, all free software licenses grant that to you. What's a free software license? In short, check if it is compatible with the GNU GPL. If it is, you can be pretty sure that it is one. Additionally learn about it here: The Free Software Definition.You can differ between two types/classes of free software licenses: One of the sort like the GNU GPL. Those do not only grant that right to you personally to modify the software, they also grant that right to everybody else whom you pass your (modified) software to as well. For example the users of your software.That's done to preserve the freedom of software. The idea behind it is basically: What is software w/o it's users? Whom is software written for? What is free software that is not free for its users?The other type are free software licenses that grant that right to modify to you personally but don't need you to grant the same right(s) on your modifications to the users of your software. IIRC that's like the Apache license, but the type is best known for the BSD-like licenses which probably are a better wording for that type.This might sound like a slight difference only but it is actually a real(tm) difference. For example if you want your modifications to be let's say GNU GPL'ed so they remain free for all of your code's (potential) users, you need to put it under something like the GNU GPL. But if you don't want that you can not make use of GNU GPL'ed code at all (most code under a free software license is available under GPL).Some developers consider the GNU GPL as not free because they think they're limited by it. In fact they just do not want to conform with a software license that is reciprocal (some name this viral but the adjective is not precise). Instead these developers prefer code under a license that does not requires that (their) modifications are made available upon distribution. In short: they want the benefit of a software only for themselves but not for their users (that's really short and it has some taste. but it's also valid to write so).I make this bold a bit to better show the difference. In fact every developer always - regardless of free or proprietary software - needs to decide on her/his work which licensing terms to choose at some point. Today as developers we most often do not write software from scratch so we actually create a so called derivate: we modify something existing. So we are already bound to the licenses of some existing software. 
If we don't like that licensing terms we need to write it from scratch (and we can even do so, because it's totally valid to learn from everything publicly available - imagine if not!).Distribution is a key point here. Basically it only means that you pass on software to somebody else. And vice-versa: You get software from somebody else.Without distribution you can actually do everything with source-code you were able to get incl. illegal stuff because nobody would ever notice. That's like you've written it on your own but you didn't need to type it into the computer. I love to copy stuff even I can type quite fast.You do this on your own. This even includes the modification you ask for: As long you do modifications for your own only, you can even ignore any licensing at all (disclaimer: not a legal advice ;) ). Nobody would ever hinder you. Practically nor legally. Because you do that in private and nobody will ever notice. Okay, there is some risk in case somebody will notice for whatever reason but most legal systems do not allow to even use that information against you then. That aspect is probably out of the boundary of your question but it shows that distribution is what matters.If you modify some software because you play around with that I don't think that practically there is much in the existing law that would hinder you. You might actually need to label your playing as science, but science always was playing so no need to censor yourself.So just to make the point: For the licenses you named, you can do the modifications on your own w/o asking for an additional consent by the copyright owner (you can always do however in case you are uncertain. but take care: some software developers do not answer if you ask a licensing question about their code they released about five years ago or so, so asking questions to the original author might not actually help you.).To review: The copyright owner preferred to give you the licensing terms and marked the code on top of the files to be under certain terms and you're free to do the modifications by those terms already (it's always more safe to have the copyright statement and the license with the package).However read the terms carefully. Some terms say that if you don't conform with something, you will have your rights to modify the software terminated. That might sound dangerous and harsh, but on the other hand keep in mind that most free software licenses contain such termination clauses to ensure that the software stays free. The GNU GPL does this but other licenses well. Just look out for something like termination. The termination clauses always show the expressed wish of a copyright owner how to deal with his software explicitly. It means that a grant is given for something. And that something most often is more than money. Or to say it differently: Money can't buy it.How to deal with your own modifications?You asked as well:Do I need licensing my software by registering somewhere, or I just add the text like described in Apache License 2.0.Next to registering (which is not necessary nowadays, just publish so you have set a proof-able fact in case you need it) there are as well terms you should consider when you do modifications. Most software licenses - regardless whether begin reciprocal or not - require you to mark your changes next to the existing code. That sort of mark / make visible basically has the meaning that if someone get's your code that it's visible which part of the code has been written by you and which part not. 
As written above with the different types of the license, there can be a difference which license applies to your modifications.For that to work you need to make visible where you changed something. So what does this mean in practice? For files which contain a copyright notice on top and you do not make any modifications to it, you do not need to change the copyright notice at all as well because you have not changed anything.But for files where you actually made some changes (let's not count if you changed a single byte, let's say you've added two more functions with some lines of code and modified some lines of code in an existing function), you should add your copyright on top. I mean really on top, above the original copyright statement. But next to that, you should write after your statement that this file as well is under copyright by someone else followed by their original copyright statement and their licensing description or even terms if the terms were in the original file.A description how that can be done can be found here: Maintaining Permissive-Licensed Files in a GPL-Licensed Project: Guidelines for Developers. That article might look like being from a GNU GPL viewpoint, but it's written with the best intentions to prevent infringement of copyrights which actually does apply if you license your modifications under your own (place in the name) license as well. So to say: prevent infringement of copyright has the purpose to not trigger any termination in the various licenses available so you can license your modifications (the modification you're allowed to if you don't terminate them) under any of your (compatible to the original) terms as well.Phew what a sentence. Anyway that document explains in detail the what and why and you can easily adopt it to your own licensing as well, I think it's very valuable.So to resume: If you make modifications to an existing work under copyright, make visible that you changed the file(s) but adding your own(tm) copyright statement and licensing terms and make visible that the files still contain some code under their own copyright and terms.So next to the code changes you've done you've explained the copyright changes. I think then you're pretty fine to publish things.This is all personal opinion. I might be wrong with some points, so no warranty of whatsoever. Probably it helps you that you can go on with your question, but it has not been written in any intention of whats-o-ever nor fitting for a particular purpose.
_cstheory.33737
Is there a graph class for which the chromatic number can be computed in polynomial time, but finding an actual $k$-coloring with $k=\chi(G)$ is NP-hard?Without any further restriction the answer would be yes. For example, it is known that in the class of 3-chromatic graphs it is still NP-hard to find a 3-coloring, while the chromatic number is trivial: it is 3, by definition. The above example, however, could be called cheating in a sense, because it makes the chromatic number easy by shifting the hardness to the definition of the graph class. Therefore, I think, the right question is this:Is there a graph class that can be recognized in polynomial time, and the chromatic number of any graph $G$ in this class can also be computed in polynomial time, yet finding an actual $k=\chi(G)$-coloring for $G$ is NP-hard?
Graph class with easy chromatic number, but NP-hard coloring
cc.complexity theory;graph theory;graph algorithms;np hardness
null
_unix.166642
An external storage device can be connected to different Linux or Unix machines, and I guess its mounting point may be different?If yes, will symlinks that are stored on the device and link to files on the device become invalid when connecting the device to different *nix machines, because the mounting directory for the device changes? If yes, how shall we create symlinks on the device to avoid the problem?
Create symlinks on an external storage device that can work on any Linux machines?
symlink;external hdd
null
_webmaster.102210
Does anyone have a proven solution for the Blogger duplicate-content indexing problem? My posts are indexed with m=0 and m=1 parameters. What I have done till now:
- Blocked m=0 and m=1 in robots.txt (added Disallow: /*/*/*.html?m=0 and Disallow: /*/*/*.html?m=1).
- In Google Webmaster Central > Crawl > URL Parameters I have added the m parameter with effect Paginates and Crawl (Which URLs with this parameter should Googlebot crawl?) set to No URLs.
- In the Blogger template, I have added a nofollow robots meta tag when the data:blog.isMobile condition matches.
Edited:
I'm using the canonical tag: expr:href='data:blog.canonicalUrl' rel='canonical'
I have a custom domain for my blog.
how to fix blogger duplicate content m=1 m=0 problem
duplicate content;blogger
I'm using this query inurl:m= site:mydomain.com to detect those posts with m=0 and m=1.

It would seem that what you are seeing is simply the results of a site: search. Using the site: operator is not a normal Google search and has been shown to return non-canonical (including redirected) URLs in the SERPs. These are URLs that don't ordinarily get returned in a normal organic search (when no search operators are used). Even URLs that are the source of 301 redirects have been shown to be returned for a site: search, when they are not returned normally. These non-canonical URLs are still crawled (and processed) by Google and they are often acknowledged in a site: search.

Reference: How to submit shortened URLs to Google so that they are included in the index
Related question: Google indexing duplicate content despite a canonical tag pointing to an external URL. Am I risking a penalty from Google?

Normally, a rel=canonical (which you have already done) is sufficient to resolve such conflicts with query parameters and duplicate content. But note that it doesn't necessarily prevent the non-canonical pages from being indexed (which you see when doing a site: search), only from being returned in a normal Google search.

blocked m=0 and m=1 on robots.txt ....

You probably don't want to block these URLs from being crawled as it could damage your ranking on mobile search.

BTW what about Disallow: /*.html, Allow: /*.html$

Aside: This looks dangerous. Google doesn't process the robots.txt directives in top-down order. They are processed in order of specificity (length of URL), but when it comes to the use of wildcards, the order is officially undefined (which also means it could even change). The Allow: directive is also an extension to the standard and might not be supported by all search engines. It would be better to be more explicit, e.g. Disallow: /*?m=. But, as mentioned, you probably should not be blocking these URLs in robots.txt anyway.

See also my answer to this question for more info about robots.txt and how it is processed: Robots.txt with only Disallow and Allow directives is not preventing crawling of disallowed resources
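In other words, the safer setup is to leave the ?m=0/?m=1 variants crawlable and make sure every variant carries a single canonical URL. The goal is that the rendered head of each post — the desktop page and the ?m=1 page alike — contains something along these lines (the URL here is only illustrative):

<link rel='canonical' href='http://www.example.com/2017/01/some-post.html'/>

You can verify this by viewing the source of both variants of a post and checking that they point at the same canonical URL.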
_unix.230559
I'm facing a strange issue, I will try to give as much details as I can. I already tried lot of things from stackexchange and ubuntuforum and nothing seem to work.My HDD looked like this : /dev/sda5 : empty partition (~90GB)/dev/sda3 : Linux Mint 17.2 (~60GB)/dev/sda4 : Linux Swap (4GB)I needed to install Win7 for my work so I did on /dev/sda5, I knew that the MBR will be erased but I already did it in the past and I kept my Live USB to reinstall GRUB.First StepI installed Win7, reinstalled GRUB then I reboot and here start all my problems. Mint is not showing up as intended. So I boot again from my Live USB, I tried to mount Mint partition and it's empty, nothing is shown. Second stepI removed Win7 and try to install a fresh Mint on /dev/sda5 to check if it can fix things by itself but no luck. Third StepI wandered from post to post on stackexchange & ubuntuforum. I tried lot of things to check superblock, partition integrity : no problem ever found.Is the partition really empty ?So I was left with an empty partition with all my data. I was sure that all was still here laying on the disk. I found the foremost package and used it on an image of /dev/sda3. I was able to recover some files (so i am sure all is still here) but it is very messy and there is important stuff I did not found back.If someone know what could happened or how to get my data back I will anwser all your questions.Thanks ! Some random facts :Most of the files I am willing to get back are : txt / pdf / tex /php /python files. Foremost does not seem to find themI never touch the partition /dev/sda3 except to create an image with gddrescueWhen I tried to mount (even before messing up) Mint partition from Live USB (Kali & Mint) the disk was always empty (even if I can see that around 3GB are used in GParted)I am posting from the fresh Mint install and I can run whatever command is needed and install any package requiredEdit :Result of parted -l :Model: ATA ST9160821AS (scsi)Disk /dev/sda: 160GBSector size (logical/physical): 512B/512BPartition Table: msdosNumber Start End Size Type File system Flags 1 1048kB 90,0GB 90,0GB extended boot 5 1049kB 90,0GB 90,0GB logical ext4 3 90,0GB 156GB 65,7GB primary ext4 4 156GB 160GB 4294MB primary linux-swap(v1)
Data recovery from corrupted Mint 17 partition
linux;linux mint;partition;data recovery
null
_codereview.164066
Everything is working fine; it just works too slowly. Specially, when there are 3 workbooks that a single Userform needs to open and update 3 ListBoxes.Here I have a class that I use to:Open a Workbook (as read-only), copy its contents into an array.I pass this array to a ListBox, so the user can see what is the content of that Workbook.The user can now choose what record/s he/she wants to update.With the help of a Column named Trans_no, where there are unique numbers. I update the the that entirerow (depending on the number of Controls associated to each Column.)Given the Trans_no, I can locate the cell/row that needs updating (using sub LOOK_FOR), or the cell below the last non-blank cell in Trans_no Column.I loop through the collection of controls with sub PASS_THIS.Delete the record, depending on the selected Trans_No.Here is a sample userform:Here is the code for class cls_Connection:Private sCon As String '// Connection stringPrivate eApp As Excel.Application '// New instance of Excel ApplicationPrivate eWB As Excel.Workbook '// The workbook in Excel ApplicationPrivate eWS As Worksheet '// The worksheet in Excel WorkbookPrivate bRonly As Boolean '// Is the workbook ReadOnly?Private bOpen As Boolean '// Is the connection open?Private vDa() As Variant '// The data from the worksheetPrivate LastMod As Date '// The time when the last change took placeProperty Get timeLastModified() As Date '// this property doesnt have timeLastModified = LastMod '// a let proerty. so the userEnd Property '// wont be able to change its valueProperty Get isReadOnly() As Boolean '// This property doesn't have isReadOnly = bRonly '// a let property. so the userEnd Property '// wont be able to change its valueProperty Let ConnectionString(ByVal FilePath As String) sCon = FilePath '// This property sets the connectionEnd Property '// string.Property Get ConnectionString() As String ConnectionString = sCon '// This property shows the connectionEnd Property '// string.Property Get Data() As Variant '// There is only get data property. Data = vDa() '// So the user won't be able toEnd Property '// set/change its value.Private Sub OpenConnection(ByRef sPass As String, Optional ByRef bRead As Boolean = False) Set eApp = New Excel.Application '// Creating new instance of excel On Error GoTo ErrHandler '// basic error handler Set eWB = eApp.Workbooks.Open(sCon, , bRead, , sPass, , True) Set eWS = eWB.Sheets(1) '// sets new worksheet bOpen = True '// is it open? bRonly = eWB.ReadOnly '// is it opened as readonly? LastMod = eWB.BuiltinDocumentProperties(Last Save Time) Exit Sub '// exits the sub after updating last modErrHandler: MsgBox Err.Description, vbCritical, Err.Number & - Call a programmer! EndEnd SubPrivate Sub CloseConnection(ByRef bChanges As Boolean) On Error GoTo ErrHandler '// basic error handling If Not bRonly Then eWB.Save LastMod = eWB.BuiltinDocumentProperties(Last Save Time) End If eWB.Close bChanges '// Closes the workbook and save it as needed. eApp.Quit '// Quits the new instance of Excel. bOpen = False '// changes the global boolean Exit Sub '// exits the subErrHandler: MsgBox Err.Description, vbCritical, Err.Number & - Call a programmer!End SubPublic Sub UpdateMe(ByRef Password As String) OpenConnection Password, True '// Opens the workbook.(readonly) If eWS Is Nothing Then Exit Sub '// Exit if there is no worksheet. 
Update '\\ calls the update routine CloseConnection False '// Closes the workbook.End SubPrivate Sub Update() If Not bOpen Then Exit Sub '// checks if there is an open wb Erase vDa() '// clears the database With eWS '// updates it by getting the last row+cols vDa() = .Range(.Cells(1, 1), .Cells(GET_LAST(Row, .Cells), .Cells.End(xlToRight).Column)) End WithEnd SubPublic Sub UpdateRecords _ (ByVal Password As String, ByVal whatToDo As xlAddNewEditDelete, _ Optional ByVal transNo As String, Optional ByRef cControls As Collection) Dim strMsg As String Dim rActive As Range If CanWeProceed(sCon) Then '\\ calls the canweproceed FUNCTION If Not whatToDo = AddNew Then '// basic checking if arguements If Len(Trim(transNo)) = 0 Then Exit Sub ' for addnew records are End If '// present If Not whatToDo = Delete Then '// basic checking if arguements If cControls Is Nothing Then Exit Sub '// for delete records are End If '// present OpenConnection Password, False '\\ opens the workbook that will be updated If bRonly Then GoTo FileOpen '// do not proceed if opened as readonly Select Case whatToDo '// select case depending on what the Case AddNew '// in case the user want to add new records Set rActive = eWS.Cells(GET_LAST(Row, eWS.Range(A:A)) + 1, 1) PASS_THIS cControls, rActive '// after locating the lastrow, pass the data Case Edit '// in case the user want to edit Set rActive = LOOK_FOR(transNo) '// locate the trans# then update If Not rActive Is Nothing Then PASS_THIS cControls, rActive Case Delete '// in case the user want to delete Set rActive = LOOK_FOR(transNo) '// locate the trans# then delete If Not rActive Is Nothing Then rActive.EntireRow.Delete shift:=xlUp End Select Update '\\ calls the update routine CloseConnection True '\\ closes the workbook and save the changes End If Exit SubFileOpen: MsgBox Request denied! Encountered a critical error! & vbCrLf & _ Do not close this error message., vbCritical, Call a programmer!End SubPrivate Sub PASS_THIS(ByRef cControls As Collection, ByVal rWhere As Range) Dim int1 As Integer '// this sub takes a range object for update With cControls '// of controls and passes it to the database For int1 = 1 To .Count '// loops through the control. rWhere.Offset(, int1 - 1).value = .Item(int1).value Next '// pass each value to the worksheet End WithEnd SubPrivate Function LOOK_FOR(ByRef strTrans As String) As Range Dim bFound As Boolean '// this sub returns a range object Dim loop1 As Long '// if there is a valid transaction Dim rEach As Range '// number present in the database Set LOOK_FOR = eWS.Cells(GET_LAST(Row, eWS.Range(A:A)) + 1, 1) With eWS '// the default range is the last row For loop1 = 2 To .UsedRange.rows.Count + 1 Set rEach = .Cells(loop1, 1) '// loops through the used range If rEach.value = strTrans Then '// and check each transaction # Set LOOK_FOR = rEach '// if there is an equivalent, Exit Function '// return that range and exit function. End If '// if the trans# to be updated is not Next '// found, this will give the last row End With '// and put the data in that row.End FunctionPrivate Function CanWeProceed(FilePath As String) As Boolean Dim FileNo As Integer, ErrNo As Integer On Error Resume Next '// Skips one error. FileNo = FreeFile() '// Gets an available file number. Open FilePath For Input Lock Read As #FileNo Close FileNo '// Closes the file. ErrNo = Err '// Resumes error handling. On Error GoTo 0 '// Resumes error handling. 
CanWeProceed = ErrNo = 0End FunctionHere is the code for class cls_NewRecords:This class represents the entirety of the Userform.Public WithEvents ContentBox As MSForms.ListBox '// Listbox containing the dataPublic WithEvents FilterButton As MSForms.CommandButton '// Start to look for.Public WithEvents FilterColumn As MSForms.ComboBox '// Where to look for.Public FilterBox As MSForms.TextBox '// What to look for.Public WithEvents buttonSave As MSForms.CommandButton '// Save button.Public WithEvents buttonDelete As MSForms.CommandButton '// Delete button.Public WithEvents buttonClear As MSForms.CommandButton '// Edit button.Public WithEvents buttonRefresh As MSForms.CommandButton '// Edit button.Private ControlCollection As CollectionPrivate vDatabase() As Variant'Private vDetails() As Variant '// what is this for?Private vHeaders() As VariantPrivate ColumnOfEmpNumber As IntegerPrivate ColumnToFilter As IntegerPrivate DisableEvents As BooleanPrivate DatabaseConnection As cls_ConnectionPrivate ConnectionString As StringPrivate ExcelPassword As StringPrivate ColumnWidths As StringPrivate DatabaseLastMod As DatePrivate Const MsgBoxHeader As String = MasterlistProperty Set Controls(ByVal cols As Collection) Set ControlCollection = colsEnd PropertyPublic Sub InitializeConnection(ByVal strCon As String, ByVal strPass As String) ConnectionString = strCon ExcelPassword = strPass Set DatabaseConnection = New cls_Connection With DatabaseConnection .ConnectionString = ThisWorkbook.Path & \ & ConnectionString .UpdateMe ExcelPassword vDatabase() = .Data End WithEnd SubPublic Sub InitializeListBox(Optional ByVal strWidths As Variant) ColumnWidths = strWidths With ContentBox RefreshList If Not IsMissing(strWidths) Then .ColumnWidths = strWidths .ColumnCount = UBound(vDatabase(), 2) + 1 End With vHeaders() = TRANSPOSEARR(vDatabase()) ReDim Preserve vHeaders(LBound(vHeaders(), 1) To UBound(vHeaders(), 1), 1 To 1) FilterColumn.List() = vHeaders() TrackingDetails AddNewEnd SubPrivate Sub RefreshList() With DatabaseConnection vDatabase() = .Data ContentBox.List() = vDatabase() DatabaseLastMod = .timeLastModified End WithEnd SubPrivate Sub ClearList() Dim int1 As Integer With ControlCollection For int1 = 1 To .Count If TypeName(.Item(int1)) = ComboBox Then .Item(int1).ListIndex = 0 Else .Item(int1) = End If Next End With ContentBox.Locked = FalseEnd SubPrivate Sub ButtonClear_Click() RefreshList ClearList TrackingDetails AddNewEnd SubPrivate Sub ButtonRefresh_Click() With DatabaseConnection .UpdateMe ExcelPassword RefreshList End WithEnd SubPrivate Sub ButtonDelete_Click() Dim strMsg As String strMsg = The database is not updated. & vbCrLf & _ Would you like to refresh your database? ManageRecords Delete, ControlCollection.Item(1), ControlCollection, strMsgEnd SubPrivate Sub ButtonSave_Click() Dim strMsg As String strMsg = You are about to add/update a record. & vbCrLf & _ Are you sure you want to proceed? With ControlCollection On Error GoTo EarlyExit If CDbl(.Item(1).value) > vDatabase(UBound(vDatabase(), 1), 1) Then ManageRecords AddNew, .Item(1), ControlCollection, strMsg Else TrackingDetails Edit ManageRecords Edit, .Item(1), ControlCollection, strMsg End If End With Exit SubEarlyExit: If Err.Number = 13 Then MsgBox You are trying to save an invalid transaction number, vbInformation, Err.Number & - Select a valid record. Else MsgBox Err.Description, vbCritical, Err.Number & - Call a programmer! 
End IfEnd SubPrivate Sub ContentBox_Click() Dim i1 As Integer, a() As Variant, strTrans As String With ContentBox If .ListIndex < 1 Then Exit Sub strTrans = .List(.ListIndex, LBound(.List(), 2)) a() = CLEANARR(vDatabase(), strTrans, 1, False, True, True) End With With ControlCollection For i1 = 1 To .Count .Item(i1).value = a(2, i1) Next End WithEnd SubPrivate Sub FilterColumn_Change() Dim sTemp As String, i As Integer, a() As Variant sTemp = FilterColumn.value If Len(Trim(FilterColumn.value)) = 0 Then Exit Sub For i = LBound(vHeaders(), 1) To UBound(vHeaders(), 1) If sTemp = vHeaders(i, 1) Then ColumnToFilter = i NextEnd SubPrivate Sub FilterButton_Click() If ContentBox.Locked Then Exit Sub Dim a() As Variant, sTemp As String sTemp = CStr(FilterBox.value) If Len(Trim(sTemp)) = 0 Then ContentBox.List() = vDatabase() Exit Sub Else OPTIMIZE_VBA True a() = CLEANARR(vDatabase, sTemp, ColumnToFilter, False, False, True) ContentBox.List = a() OPTIMIZE_VBA False End IfEnd SubPrivate Sub ManageRecords(ByVal whatToDo As xlAddNewEditDelete, _ByRef transNo As String, ByRef colsControl As Collection, strMsg As String) Dim iRefresh As Byte, iProceed As Byte If Not isDatabaseLatest Then iRefresh = MsgBox(The database is not updated. & vbCrLf & _ Would you like to refresh your database?, _ vbInformation + vbOKCancel, MsgBoxHeader) If iRefresh = 1 Then ButtonRefresh_Click End If iProceed = MsgBox(strMsg, vbInformation + vbOKCancel, MsgBoxHeader) If iProceed = 1 Then OPTIMIZE_VBA True DatabaseConnection.UpdateRecords ExcelPassword, whatToDo, ControlCollection.Item(1), ControlCollection ButtonClear_Click OPTIMIZE_VBA False End IfEnd SubPrivate Sub TrackingDetails(ByRef whatToDo As xlAddNewEditDelete) With ControlCollection If whatToDo = AddNew Then .Item(1).value = GiveMax(vDatabase()) + 1 .Item(2).value = Now() End WithEnd SubPrivate Function isDatabaseLatest() As Boolean isDatabaseLatest = Not (CDate(FileDateTime(ThisWorkbook.Path & \ & ConnectionString)) < DatabaseLastMod)End FunctionPrivate Function GiveMax(v() As Variant) As LongDim i As Long, H As LongOn Error Resume Next For i = LBound(v(), 1) To UBound(v(), 1) If v(i, 1) > H Then H = v(i, 1) NextGiveMax = HEnd FunctionHere is the code for the Userform:On initilize of the userform I create a variable as cls_NewRecords, set its properties and controls, then add them to a global collection.Private CollectionOfClasses As CollectionPrivate Sub UserForm_Initialize()Dim colControl As CollectionDim int1 As IntegerDim ThisUserform As cls_NewRecordsDim ThisHelper As cls_RecordHelperDim limitFormat As cls_FormattedControlsSet ThisUserform = New cls_NewRecords '<~ set this variable a new classSet CollectionOfClasses = New Collection '<~ define the public collection as new collectionSet colControl = New Collection 'collection of controls. their index refers to what column they will be placed.For int1 = 1 To 20 colControl.Add Me.Controls(Col & int1), TextBox & int1NextWith ThisUserform Set .ContentBox = listFilter '<~ the listbox that represents the workbook Set .FilterBox = textFilter '<~ 'text' we use to filter the workbook Set .FilterColumn = selectFilter '<~ ComboBox that the user chooses what column should the 'text' looked for Set .FilterButton = buttonFilter '<~ start looking for 'text' in the chosen column Set .buttonSave = buttonSave '<~ save changes ( new record/edit record) Set .buttonClear = buttonClear '<~ clear the userform. Set .buttonDelete = buttonDelete '<~ delete the record. Set .buttonRefresh = buttonRefresh '<~ refresh the list. 
(if there are changes done by other user) Set .Controls = colControl .InitializeConnection data\att.xlsx, G.Cells(1, 1).Value '<~ sheet 'G' range 'A1' is where the password for the workbook is stored. .InitializeListBox 0;0;0;0;30;110;50;30;65;100;0;0;0;0;0;0;0;0;0;0;0;0 '<~ to hide unnecessary columns.End WithCollectionOfClasses.Add ThisUserform '<~ adds this class to the collectionSet ThisUserform = Nothing '<~ minor cleanupSet colControl = Nothing '<~ minor cleanupWith Col9 .AddItem Whole Day .AddItem Half Day .AddItem Under Time .AddItem Late .AddItem SuspensionEnd WithselectFilter.ListIndex = 5With Col4.AddItem Direct.AddItem NonDirect.ListIndex = 0End WithEnd SubThe following function/sub are located in a regular module.This is the OPTIMIZE_VBA Sub:Public Sub OPTIMIZE_VBA(ByVal isOn As Boolean)Dim bHolder As BooleanbHolder = Not isOnWith Application .DisplayAlerts = bHolder .ScreenUpdating = bHolder .EnableEvents = bHolder .Calculation = IIf(isOn, xlCalculationManual, xlCalculationAutomatic) .Calculate If .Version > 12 Then .PrintCommunication = bHolderEnd WithEnd SubThis is the GET_LAST Function:Public Function GET_LAST(c As Choice, rng As Range)Dim o As XlSearchOrderDim r As Range o = xlByRows '<~~ default value If c = 2 Then o = xlByColumns '<~~ change it if looking for column Set r = rng.Find(What:=*, after:=rng.Cells(1), LookIn:=xlFormulas, _ LookAt:=xlPart, SearchOrder:=o, SearchDirection:=xlPrevious, _ MatchCase:=False) If r Is Nothing Then Set r = rng.Cells(1, 1) '<~~ if we found nothing give A1 If c = Row Then GET_LAST = r.Row If c = Column Then GET_LAST = r.Column If c = Cell Then GET_LAST = rng.Parent.Cells(GET_LAST(Row, rng), GET_LAST(Column, rng)).Address(0, 0)End FunctionThis is the CLEANARR Function:That receives a 2D array and loops from lbound upto ubound of 1stD.Filters the array with the given column number and criteria ('s' as string).Public Function CLEANARR _ (ByRef v() As Variant, ByVal s As String, ByVal c As Integer, _ Optional ByVal RemoveMatch As Boolean = False, _ Optional ByVal ExactMatch As Boolean = False, _ Optional ByVal KeepHeader As Boolean = True) _As VariantDim a(), r As Long, i1 As Long, i2 As LongDim StartofLoop As Integer, deleteRecord As BooleanReDim a(LBound(v(), 1) To UBound(v(), 1), LBound(v(), 2) To UBound(v(), 2))StartofLoop = LBound(v(), 1)If KeepHeader Then Call GIVE_HEADER(a(), r, StartofLoop, v())For i1 = StartofLoop To UBound(v(), 1) If ExactMatch Then If Not (UCase(Format(v(i1, c), 0)) = UCase(Format(s, 0))) = RemoveMatch Then deleteRecord = True Else If Not InStr(1, v(i1, c), s, vbTextCompare) = RemoveMatch Then deleteRecord = True End If If deleteRecord Then r = r + 1 For i2 = LBound(v(), 2) To UBound(v(), 2) a(r, i2) = v(i1, i2) Next deleteRecord = False End IfNextCLEANARR = REDUCEARR(a())End FunctionThis is the TRANSPOSEARR Function:Public Function TRANSPOSEARR(ByRef v() As Variant) As VariantDim rows, cols As LongDim s() As VariantReDim s(LBound(v(), 2) To UBound(v(), 2), LBound(v(), 1) To UBound(v(), 1))For rows = LBound(v(), 1) To UBound(v(), 1) For cols = LBound(v(), 2) To UBound(v(), 2) s(cols, rows) = v(rows, cols) NextNextTRANSPOSEARR = s()End Function
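One thing I have been wondering about, but have not tried yet: whether reading the closed workbook through ADO instead of spinning up a second Excel instance would speed things up. Below is a rough, untested sketch of what I mean (it assumes the ACE OLEDB provider is installed, that the data sits in Sheet1, and that I could drop the workbook password, since ADO cannot open password-protected files):
Private Function ReadViaADO(ByVal sPath As String) As Variant
    '// Untested sketch: pull Sheet1 into an array without opening Excel.
    Dim cn As Object, rs As Object
    Set cn = CreateObject("ADODB.Connection")
    cn.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & sPath & _
            ";Extended Properties=""Excel 12.0 Xml;HDR=No"";"
    Set rs = CreateObject("ADODB.Recordset")
    rs.Open "SELECT * FROM [Sheet1$]", cn
    ReadViaADO = rs.GetRows()   '// fields x rows - would need transposing before feeding the ListBox
    rs.Close
    cn.Close
End Function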
Class to update a userform with an external workbook and vice versa
performance;vba;excel
null
_unix.121032
I want to install VLC in my Linux box. When I execute yum install vlc, it displays following message:-Loaded plugins: refresh-packagekit, securitySetting up Install ProcessResolving Dependencies--> Running transaction check---> Package vlc.i686 0:2.0.10-1.el6 will be installed--> Processing Dependency: vlc-core(x86-32) = 2.0.10-1.el6 for package: vlc-2.0.10-1.el6.i686--> Processing Dependency: libvlccore.so.5 for package: vlc-2.0.10-1.el6.i686--> Processing Dependency: libcaca.so.0 for package: vlc-2.0.10-1.el6.i686--> Processing Dependency: kde-filesystem for package: vlc-2.0.10-1.el6.i686--> Processing Dependency: libaa.so.1 for package: vlc-2.0.10-1.el6.i686--> Running transaction check---> Package aalib-libs.i686 0:1.4.0-0.18.rc5.el6.1 will be installed---> Package kde-filesystem.noarch 0:4-30.1.el6 will be installed---> Package libcaca.i686 0:0.99-0.9.beta16.el6 will be installed--> Processing Dependency: libglut.so.3 for package: libcaca-0.99-0.9.beta16.el6.i686---> Package vlc-core.i686 0:2.0.10-1.el6 will be installed--> Processing Dependency: live555date(x86-32) = 2012.04.27 for package: vlc-core-2.0.10-1.el6.i686--> Processing Dependency: libx264.so.120 for package: vlc-core-2.0.10-1.el6.i686Package x264-libs is obsoleted by x264, but obsoleting package does not provide for requirements--> Processing Dependency: libavformat.so.53(LIBAVFORMAT_53) for package: vlc-core-2.0.10-1.el6.i686Package ffmpeg-libs is obsoleted by ffmpeg, but obsoleting package does not provide for requirements--> Processing Dependency: libtiger.so.5 for package: vlc-core-2.0.10-1.el6.i686--> Processing Dependency: libzvbi.so.0 for package: vlc-core-2.0.10-1.el6.i686--> Processing Dependency: libavcodec.so.53 for package: vlc-core-2.0.10-1.el6.i686Package ffmpeg-libs is obsoleted by ffmpeg, but obsoleting package does not provide for requirements--> Processing Dependency: libavutil.so.51 for package: vlc-core-2.0.10-1.el6.i686--> Processing Dependency: libgme.so.0 for package: vlc-core-2.0.10-1.el6.i686--> Processing Dependency: libavformat.so.53 for package: vlc-core-2.0.10-1.el6.i686--> Processing Dependency: libavutil.so.51(LIBAVUTIL_51) for package: vlc-core-2.0.10-1.el6.i686--> Processing Dependency: libavcodec.so.53(LIBAVCODEC_53) for package: vlc-core-2.0.10-1.el6.i686--> Running transaction check---> Package freeglut.i686 0:2.6.0-1.el6 will be installed---> Package game-music-emu.i686 0:0.5.5-1.el6 will be installed---> Package libavcodec53.i686 0:0.10.9-58.el6 will be installed--> Processing Dependency: libxavs.so.1 for package: libavcodec53-0.10.9-58.el6.i686--> Processing Dependency: libx264.so.136 for package: libavcodec53-0.10.9-58.el6.i686---> Package libavformat53.i686 0:0.10.9-58.el6 will be installed---> Package libavutil51.i686 0:1.0.8-58.el6 will be installed---> Package libtiger.i686 0:0.3.4-1.el6 will be installed---> Package live555.i686 0:0-0.34.2012.01.25.el6 will be updated---> Package live555.i686 0:0-0.37.2012.04.27.el6 will be an update---> Package vlc-core.i686 0:2.0.10-1.el6 will be installed--> Processing Dependency: libx264.so.120 for package: vlc-core-2.0.10-1.el6.i686Package x264-libs is obsoleted by x264, but obsoleting package does not provide for requirements---> Package zvbi.i686 0:0.2.33-6.el6 will be installed--> Running transaction check---> Package libx264_136.i686 0:0.136-19_20130917.2245.el6 will be installed---> Package libxavs1.i686 0:0.1.51-2.el6 will be installed---> Package vlc-core.i686 0:2.0.10-1.el6 will be installed--> Processing Dependency: libx264.so.120 
for package: vlc-core-2.0.10-1.el6.i686Package x264-libs is obsoleted by x264, but obsoleting package does not provide for requirements--> Finished Dependency ResolutionError: Package: vlc-core-2.0.10-1.el6.i686 (rpmfusion-free-updates) Requires: libavformat.so.53(LIBAVFORMAT_53) Available: ffmpeg-libs-0.10.9-1.el6.i686 (rpmfusion-free-updates) libavformat.so.53(LIBAVFORMAT_53) Available: ffmpeg-libs-0.10.11-1.el6.i686 (rpmfusion-free-updates) libavformat.so.53(LIBAVFORMAT_53) Available: libavformat53-0.8.15-55.el6.i686 (atrpms) libavformat.so.53(LIBAVFORMAT_53) Available: libavformat53-0.9.3-56.el6.i686 (atrpms) libavformat.so.53(LIBAVFORMAT_53) Available: libavformat53-0.10.9-58.el6.i686 (atrpms) libavformat.so.53(LIBAVFORMAT_53) Available: ffmpeg-libs-0.6.5-2.el6.i686 (linuxtech-release) Not foundError: Package: vlc-core-2.0.10-1.el6.i686 (rpmfusion-free-updates) Requires: libavcodec.so.53 Available: ffmpeg-libs-0.10.9-1.el6.i686 (rpmfusion-free-updates) libavcodec.so.53 Available: ffmpeg-libs-0.10.11-1.el6.i686 (rpmfusion-free-updates) libavcodec.so.53 Available: libavcodec53-0.8.15-55.el6.i686 (atrpms) libavcodec.so.53 Available: libavcodec53-0.9.3-56.el6.i686 (atrpms) libavcodec.so.53 Available: libavcodec53-0.10.9-58.el6.i686 (atrpms) libavcodec.so.53 Available: ffmpeg-libs-0.6.5-2.el6.i686 (linuxtech-release) Not foundError: Package: vlc-core-2.0.10-1.el6.i686 (rpmfusion-free-updates) Requires: libx264.so.120 Available: libx264_120-0.120-0.20120424.1.el6.i686 (linuxtech-release) libx264.so.120 Available: x264-libs-0.120-4.20120303.el6_bootstrap.i686 (rpmfusion-free-updates) libx264.so.120 Available: x264-libs-0.120-5.20120303.el6.i686 (rpmfusion-free-updates) libx264.so.120 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigestI don't know why I am getting this error. In my knowledge, if there are any dependencies, then package manager should find and install them too. Can anybody tell me what's wrong with yum?
Oracle Linux 6.5: Unable to install VLC 2.0.10 from rpmfusion-free-updates
vlc;oracle linux
The most immediate dependency not being found looks to be the 0.6.5 version of ffmpeg-libs, which is usually something you would get from rpmfusion (which you appear to be using as well). rpmfusion, though, only goes up to v0.5 on RHEL/OEL 5, and jumped to v0.10 on RHEL/OEL 6. So it's not able to locate that specific package version.
I'm seeing a lot of different repos popping up in that yum install command, so it's possible that yum is pulling a version of whatever package depends on ffmpeg-libs that was built against a version of ffmpeg-libs with a lower version number than any copy in your repos has. So it's basically saying "I'm trying to install Package1, which needs version 0.6 of ffmpeg-libs, but out of all your repos the only thing I can find is version 0.10."
So you have two ways of solving these types of yum issues:
Eliminate as many additional repos as you can. VLC is available in the RPM Fusion repository, which has worked well for me in the past. I don't believe they depend on any other repository existing besides the base repository for the core OS packages. I would try disabling all repos except whatever Oracle calls their base repo, EPEL, and rpmfusion itself, and see if that causes the version numbers to sync up.
Try to identify the repo causing that specific version of ffmpeg-libs to be required, and check whether the repo maintainers expect you to also have any other yum repos configured.
Of the two, the first one seems the easiest. You can do a yum repolist to see what repos you have installed, and you can either disable them by editing their /etc/yum.repos.d configuration files or add enough --disablerepo= options to your yum install command.
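For example, to keep only the base OS repo, EPEL and RPM Fusion in play for this one transaction (the repo IDs below are taken from your error output; adjust them to whatever yum repolist actually shows on your box):
yum repolist enabled
yum --disablerepo=atrpms --disablerepo=linuxtech-release install vlc
If you have yum-utils installed, yum-config-manager --disable atrpms disables a repo persistently, so you don't have to edit the file under /etc/yum.repos.d by hand.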
_softwareengineering.15681
What tools are necessary to develop efficiently with Silverlight 4? VS 2010 is a gimme, but which edition? Is Pro enough? Premium? What about Expression Blend, Expression Web, or both? When considering VS 2010, Premium comes with an MSDN subscription, but it's at the high end of the budget. It makes sense if Expression comes with it, though.
It's for a one-man development shop, making an LOB app that will integrate video/audio and mic/webcam equipment.
.NET silverlight development tools
tools;web tools;silverlight
null
_webapps.57333
Is there a keyboard shortcut to check for new mail in Gmail?
(The question was inspired because I frequently get a notification from my phone and recognize the sound as a new email in the Gmail app; since I'm often at work, I then switch to my open Gmail tab. Sometimes I even wait a little bit before going to the Gmail tab, and the message still isn't there. I usually keep my hands on the keyboard, and checking for new mail is one of the few times during the day that I have to move my hand to the mouse for a one-time action just to go right back to the keyboard. I got fed up because I couldn't find such a shortcut on Google's list of Gmail shortcuts and started searching around. Maybe no one has ever asked the question, or no one cares: I can't seem to find any info.)
check mail keyboard shortcut for Gmail
gmail;email;keyboard shortcuts
The shortcut that does exactly what is asked for here is 'u', which Google labels "return to thread list"; that is what it does if you're in a conversation view (i.e. a single email thread). However, if you're already looking at a list of threads (e.g. your inbox), 'u' will refresh that list.
Unlike 'g, i', 'u' works to refresh any thread list you're looking at, and it also does not move the cursor back to the top.
Like 'g, i', the 'u' shortcut is part of the set of keyboard shortcuts that might be disabled. If that's the case, you can enable them by setting "Keyboard shortcuts on" in Gmail settings, under the General tab. You can also do it by pressing the '?' key in the thread or conversation view, which brings up a list of keyboard shortcuts and their meanings (although, as demonstrated here, those meanings aren't always clear). There's a link in the middle of that box that lets you toggle whether the extended keyboard shortcuts are enabled or disabled.
_unix.64105
Can XBMC play Blu-Ray disks without other steps (e.g. manually decrypting it - if I buy a Blu-ray movie, can it be played out of the box)?
Can XBMC play Blu-Ray disks?
xbmc;blu ray
I don't know a lot about XBMC, but it looks like Blu-ray support comes as a plugin:
http://lifehacker.com/5621471/how-to-enable-blu+ray-playback-in-xbmc
_unix.160037
Does such a thing exist? I'd be interested in something like:
You right-click a file in a file manager and click "gmail yourself this file". One click, and you're done.
You open up the command line and type something like gmail ~/file.txt, and file.txt is instantly sent to your own account.
One-Click Script to Gmail Yourself a File
shell;scripting;email
Well, this one is not exactly what you want, but it could still be useful for the second option in your question. Install the required package:
sudo apt-get install msmtp-mta
Edit the following file to add the details. If the file doesn't exist, you can create it:
vi ~/.msmtprc
#Gmail account
defaults
logfile ~/msmtp.log
account gmail
auth on
host smtp.gmail.com
from [email protected]
auth on
tls on
tls_trust_file /usr/share/ca-certificates/mozilla/Equifax_Secure_CA.crt
user [email protected]
password your_gmail_password
port 587
account default : gmail
Change the permissions of the above file so that others can't read your account details:
chmod 600 ~/.msmtprc
Now install a command-line email program to write your email:
sudo apt-get install heirloom-mailx
Now edit/create the file below:
vi ~/.mailrc
Add the following entries to it:
set sendmail=/usr/bin/msmtp
set message-sendmail-extra-arguments="-a gmail"
We are now set up to send email from the command line.
Testing
mail -a hello.txt -s "CHECKING" recipient-mail-id
ENTER THE MAIL CONTENTS HERE. Ctrl-D to finish the mail contents.
References
http://tuxtweaks.com/2012/10/send-gmail-from-the-linux-command-line/
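If you want the literal gmail ~/file.txt one-liner from your question, you could wrap the above in a tiny script. This is only a sketch (the address is a placeholder, and it assumes the msmtp/mailx setup above is already working):
#!/bin/sh
# ~/bin/gmail - mail a file to yourself using the msmtp + heirloom-mailx setup above
# usage: gmail /path/to/file
FILE="$1"
TO="[email protected]"   # your own address (placeholder)
[ -r "$FILE" ] || { echo "usage: gmail FILE" >&2; exit 1; }
echo "Sent by the gmail script" | mail -a "$FILE" -s "File: $(basename "$FILE")" "$TO"
Make it executable with chmod +x ~/bin/gmail and keep ~/bin on your PATH. Most file managers also let you add a custom action that calls the same script, which would cover the right-click case.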
_codereview.11528
I wrote a little function to compact long text. I am relatively new to JavaScript, so I am not sure I wrote it as elegantly as possible. I wrote it to be runnable on the server side (Node.js) as well, so I can't use jQuery or such. Can the code be improved?
// change a long text into "bla bla bla... (more)" where (more) is a link to open the rest of the text
// compact text if over maxWords words (split by ' ', default 25)
// or if over maxRows lines (split by '<br>', default 2)
// but avoid compacting if no more than almostFinished (default 5) words to the end of the text
compactText = function(text, prefix, maxWords, maxRows, almostFinished) {
    if (!maxWords) { maxWords = 25; }
    if (!maxRows) { maxRows = 2; }
    if (!almostFinished) { almostFinished = 5; }
    var result = [];
    var compacted = false;
    // escape the html code in the text
    text = (text || '').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/\r/g, ' ').replace(/\n/g, ' <br> ');
    var pn = text.split(' ');
    var row = 1;
    for (var word = 0; word < pn.length; word++) {
        if (pn[word] === '<br>') { row++; }
        if (!compacted && (row > maxRows || word == maxWords) && word < pn.length - almostFinished) {
            compacted = true;
            /* too long - add a (more) link and open hidden span for rest of text */
            result.push('<span class="' + prefix + '-more-text">...<a href="#" onclick="javascript:$(' +
                '\'.' + prefix + '-more-text\').toggle(); return false;' +
                '">(show more)</a></span>');
            result.push('<span class="' + prefix + '-more-text" style="display:none">');
        }
        result.push(pn[word]);
    }
    if (compacted) {
        // compacted: add link to toggle back (less) and close the hidden span.
        result.push('<a href="#" onclick="javascript:$(' +
            '\'.' + prefix + '-more-text\').toggle(); return false;' +
            '">(show less)</a></span>');
    }
    return result.join(' ');
}
Collapse long text (with more/less links)
javascript;node.js
Instead of pn use a longer variable name. A variable like this with such a big scope really deserves a longer name. It would make the code easier to read.
var pn = text.split(' ');
Instead of a comment, use the comment's text as a function name:
// escape the html code in the text
text = (text || '').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/\r/g, ' ').replace(/\n/g, ' <br> ')
I'd write this:
function escapeHtml(text) {
    if (!text) { return ''; }
    return text.replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/\r/g, ' ').replace(/\n/g, ' <br> ');
}
I'd create a createToggleLink function:
function createToggleLink(prefix, linkText) {
    return '<a href="#" onclick="javascript:$(' +
        '\'.' + prefix + '-more-text\').toggle(); return false;' +
        '">(' + linkText + ')</a>';
}
It removes some code duplication and furthermore makes it more explicit that you close the span tag:
if (compacted) {
    var toggleLink = createToggleLink(prefix, 'show less');
    result.push(toggleLink);
    result.push('</span>');
}
So, the comment is unnecessary; the code says the same.
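With those helpers in place, the start of the function becomes text = escapeHtml(text); and the two push calls inside the "too long" branch reduce to something like this (just a sketch, keeping your class names):
// sketch: the "too long" branch using the helpers above
result.push('<span class="' + prefix + '-more-text">...' +
            createToggleLink(prefix, 'show more') + '</span>');
result.push('<span class="' + prefix + '-more-text" style="display:none">');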
_webapps.50129
Is it possible to upload a photo to one of my albums on Facebook so that the news about it never appears anywhere: neither on my profile's timeline nor in my friends' news feeds? It would just be in my albums, and nobody would learn about it except me, since I know I put it in one of my albums.
Facebook makes people share everything, but what if I don't want to share a particular thing with anybody?
I only want to have the photo in my profile, so that friends who come to see my profile will see the photo there.
Uploading photo to my profile on Facebook, so that nobody notices
facebook;facebook timeline;facebook privacy;facebook profile
When you upload the photo, simply change the privacy setting to Only Me.
_softwareengineering.243259
I have been asked to clone some existing software for a company. Basically it's an old 16-bit DOS console app, which was supplied free of charge in, I believe, the late 80's. Having replaced the machine that needs to run it with a box running Win7 x64, they can't get it to work: it crashes every couple of minutes under DOSBox.
The company that supplied it appears to no longer exist; if it did, the company asking me to do this would almost certainly know about it. It's undetermined whether they have gone entirely or are just trading under a different name. If the latter, they seem to have withdrawn from the market related to this product (because, again, it's a niche area and we should know about everyone in it).
What is the status of this with regard to copyright, etc.? The main concern for the company involved is that they want an interface identical to what they already have, so I would have to clone it entirely. Having no source code or indication of the underlying mechanisms, it would be written from scratch.
Is an interface covered by copyright? Does that still hold 30 years later? What is the assumed license when none at all is provided?
Under UK law, would I be under any serious risk were I to take on the project? How would this pan out if I then decided to sell the software on to other companies?
Cloning existing software for commercial purposes - legal implications
legal;copyright;porting
null
_unix.109349
A friend of mine who likes programming in the Linux environment, but doesn't know much about the administration of Linux recently ran into a problem where his OS (Ubuntu) was reporting out of disk space on XXX volume. But when he went to check the volume, there was still 700 GB left. After much time wasted, he was eventually able to figure out that he was out of inodes. (He was storing lots of little incremental updates from a backup system on this volume and burned thru all his inodes.)He asked me why the Linux kernel reported the error message (out of disk space) instead of properly reporting (out of inodes). I didn't know, so I figured I would ask StackExchange.Anyone know why this happens? and why it hasn't been fixed after all these years? (I remember a different friend telling me about this problem in 1995.)
Why does the Linux kernel report out of disk space when in reality it is out of i-nodes
filesystems;linux kernel;inode
A single error number, ENOSPC, is used to report both situations, hence the same error message.
To keep compliance with the ISO C and POSIX standards, the kernel developers have no choice but to use a single error number for both events. Adding a new error number would break existing programs.
However, as sticking to the traditional error messages is, as far as I know, not mandatory, nothing forbids a developer from making the single message clearer, for example "out of disk/inode space".
Technically, being out of inode space and being out of data space amount to the same thing, i.e. there is not enough free disk space for the system call to succeed.
I guess you weren't going to complain if your disk were reported as full while there were still free inode slots.
Note that file systems like JFS, XFS, ZFS and Btrfs allocate inodes dynamically, so they no longer exhibit this issue.
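As an aside, you can tell the two conditions apart from user space. A quick check (the path and numbers are only illustrative; use the mount point that reports full):
df -h /backup    # shows plenty of free bytes
df -i /backup    # shows IUse% at 100%, so the ENOSPC really means "out of inodes"
find /backup -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head   # which directories hold the most files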