Dataset columns (string length ranges):

| column          | type   | length      |
| --------------- | ------ | ----------- |
| id              | string | 5 to 27     |
| question        | string | 19 to 69.9k |
| title           | string | 1 to 150    |
| tags            | string | 1 to 118    |
| accepted_answer | string | 4 to 29.9k  |
_softwareengineering.333615
I am currently planning a monitoring server for a distributed system. In a file (or maybe a database, someday) I save all the servers and parameters that I want to monitor. When the monitoring system starts, I parse that configuration file and store the content in a Configuration class (see source code below). In order to be independent of the storage type (file, database, ...) I use the IConfigurationHandler interface (OK, I admit that the name doesn't really fit :)), whose implementations are responsible for reading/writing the file content. I considered the following design:

```csharp
public class Configuration
{
    private IConfigurationHandler _configHandler;

    public Configuration(IConfigurationHandler configHandler)
    {
        this._configHandler = configHandler;
        this.Load();
    }

    public void Load()
    {
        this._configHandler.LoadConfigurationInto(this);
    }
}

public class XmlDocumentConfigurationHandler : IConfigurationHandler
{
    public void LoadConfigurationInto(Configuration configuration)
    {
        configuration.X = ...;
        configuration.Y = ...;
        configuration.Z = ...;
    }
}
```

If I wanted to use the Configuration class, the only thing I would have to do is add a dependency to the constructor (assuming that I am using a DI framework).

```csharp
public class ConfigurationConsumer
{
    public ConfigurationConsumer(Configuration configuration)
    {
        // Do something with the configuration
    }
}
```

But somehow this code gives me a bad feeling. What do you guys think about it? Is it good if an object lets itself be configured by another class? Even though it feels wrong, I cannot think of a particular case where this design has any negative impact.

Another approach would be to call the IConfigurationHandler directly from the class that needs the Configuration, as follows:

```csharp
public class Configuration
{
    public Configuration() { }
}

public class XmlDocumentConfigurationHandler : IConfigurationHandler
{
    public Configuration GetConfiguration()
    {
        Configuration configuration = new Configuration();
        configuration.X = ...;
        configuration.Y = ...;
        configuration.Z = ...;
        return configuration;
    }
}

public class ConfigurationConsumer
{
    public ConfigurationConsumer(IConfigurationHandler configurationHandler)
    {
        var configuration = configurationHandler.GetConfiguration();
        // Do something with the configuration
    }
}
```

The problem with this approach is that I would need to accept an IConfigurationHandler, even though the only thing I need is a Configuration object.

Here is a (very) rough outline of my architecture. And here is an example of the content of the configuration file:

```xml
<servers>
  <server id="1">
    <monitored_values>
      <monitored_value name="CpuUsage"/>
      <monitored_value name="..."/>
    </monitored_values>
    <!-- some more configuration -->
  </server>
  <server id="...">
    <!-- some more configuration -->
  </server>
  <server id="n">
    <!-- some more configuration -->
  </server>
</servers>
```

The actual configuration file then would, among other things, contain a list of all servers, which themselves contain a list of their monitored parameters.
Let an object be configured by another class
c#;design;object oriented design
null
_cs.51799
I am a little bit confused about the concept of using the Weibull distribution (or another distribution) as a fault model. As I understand it, in simulation this distribution is often used for modelling faults in components. On the other hand, when designing systems we often assume some components might become faulty, and we use a fault detection method to know whether a particular component is faulty or not.

I want to categorize sensor faults into 5 groups: transmitter circuit, battery condition, microcontroller, receiver circuit, and sensor circuit faults; by correlating these kinds of faults we can have different attributes in the network.

The question is: how can I write the Weibull distribution function to produce these kinds of faults in a wireless sensor network?
How does the Weibull distribution work as fault model in wireless sensor networks?
computer networks;randomness;modelling;fault tolerance;wireless
Step 1: Fit a Weibull distribution to all transmitter circuit faults (with one set of parameters). Fit a Weibull to battery condition faults (with another set of parameters). You get five different Weibull distributions.

Step 2: During simulation, randomly draw one number from each of the Weibull distributions. This gives you the time until the first transmitter circuit fault, the time until the first battery condition fault, and so on. Pick the smallest number; that's the fault that occurs first. Simulate a fault of that type, at that time. (You can continue the simulation to simulate multiple faults if you want.)
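A minimal sketch of step 2 in Python/NumPy; the shape/scale parameters here are made-up placeholders, since the real values would come from the step-1 fits:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical (shape, scale) pairs -- in practice these come from
# fitting a Weibull to the observed failure times of each fault class.
fault_params = {
    "transmitter":     (1.5, 900.0),
    "battery":         (2.0, 600.0),
    "microcontroller": (1.2, 1500.0),
    "receiver":        (1.5, 1100.0),
    "sensor":          (0.9, 2000.0),
}

# One candidate failure time per fault class:
# scale * Weibull(shape) is a Weibull(shape, scale) variate.
draws = {name: scale * rng.weibull(shape)
         for name, (shape, scale) in fault_params.items()}

# The earliest draw is the fault that occurs first.
first_fault = min(draws, key=draws.get)
print(first_fault, draws[first_fault])
```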
_codereview.59526
I'm writing my own game framework and would like to get feedback on the API while I'm writing it. At the moment it's very simple, but I would like some guidance about where to take it.

This is a sample main function to create a window:

```cpp
#include <string>
#include <iostream>

#include "pneu/graphics/Window.hpp"
#include "pneu/core/MethodResult.hpp"

int main(int argc, const char** argv) {
  // declare window
  pneu::graphics::Window window("hello-world", 800, 600, 80, 60);

  // initialise it, handling any errors
  window.init().onError([](const std::string& error) {
    std::cout << error << std::endl;
    exit(1);
  });

  // main event loop
  while (window.isRunning()) {
    window.pollEvents();
    window.update();
    window.renderFrame();
  }

  return 0;
}
```

Here is the declaration of MethodResult (it's a header-only class):

```cpp
#pragma once

#include <exception>
#include <functional>
#include <string>

namespace pneu {
namespace core {

class MethodResult final {
public:
  static auto ok(void) -> MethodResult { return MethodResult(true, ""); }
  static auto error(const std::string& desc) -> MethodResult { return MethodResult(false, desc); }

  MethodResult(const MethodResult&) = default;
  ~MethodResult(void) = default;

  inline auto isOk(void) const -> bool { return fOk; }
  inline auto getError(void) const -> std::string { return fDescription; }

  inline auto onError(const std::function<void (const std::string&)>& f) -> void {
    if (!isOk()) { f(getError()); }
  }

  inline auto throwOnError(const std::exception& e) -> void {
    if (!isOk()) { throw e; }
  }

private:
  MethodResult(bool ok, const std::string& desc)
    : fOk(ok), fDescription(desc) { }

  const bool fOk;
  const std::string fDescription;
};

} // namespace core
} // namespace pneu

#define PNEU_EXCEPT_TO_METHODRES(func) \
  do { \
    try { \
      func; \
    } catch(const std::exception& e) { \
      return pneu::Graphics::MethodResult::error(e.what()); \
    } \
  } while(0)

#define PNEU_TRY_METHOD(func) \
  do { \
    auto err = func; \
    if (!err.isOk()) { \
      return err; \
    } \
  } while(0)
```

Window class declaration:

```cpp
#pragma once

#include <memory>
#include <string>

struct GLFWwindow;

namespace pneu {
namespace core { class MethodResult; } // namespace core

namespace graphics {

class RenderObject;

class Window final {
public:
  Window(const std::string& title,
         int width, int height,
         int min_width = 0, int min_height = 0);

  Window(const Window&) = delete;
  Window(Window&&) = delete;
  auto operator=(const Window&) -> Window& = delete;

  ~Window(void);

  auto init(void) -> pneu::core::MethodResult;
  auto update(void) -> void;
  auto pollEvents(void) -> void;
  auto renderFrame(void) -> void;

  auto isRunning(void) -> bool;

  auto addRenderObject(std::weak_ptr<RenderObject> object) -> void;

private:
  auto _initGlfw(const std::string&) -> pneu::core::MethodResult;

  auto _handleKeypress(int, int, int, int) -> void;
  auto _handleRefresh(void) -> void;
  auto _handleQuitRequested(void) -> void;
  auto _handleWindowResize(int, int) -> void;
  auto _handleWindowMove(int, int) -> void;
  auto _handleViewportResize(int, int) -> void;
  auto _handleFocusLost(void) -> void;
  auto _handleFocusGained(void) -> void;

  static auto _windowResizeCallback(GLFWwindow*, int, int) -> void;
  static auto _viewportResizeCallback(GLFWwindow*, int, int) -> void;
  static auto _windowMoveCallback(GLFWwindow*, int, int) -> void;
  static auto _refreshCallback(GLFWwindow*) -> void;
  static auto _keypressCallback(GLFWwindow*, int, int, int, int) -> void;
  static auto _quitRequestedCallback(GLFWwindow*) -> void;
  static auto _windowFocusChangeCallback(GLFWwindow*, int) -> void;

  struct WindowImpl;
  std::unique_ptr<WindowImpl> fWinImpl;
  std::string fWinTitle;
};

} // namespace graphics
} // namespace pneu
```
Game framework using C++
c++;c++11;library;error handling;gui
null
_webapps.56863
It's been all right until a few days ago, when I made a search from google.com and it ended up at this address:

https://ipv6.google.com/sorry/IndexRedirect?continue=https://www.google.com/search%3Fq%3Dkeyword%2Bplanner%26ie%3Dutf-8%26oe%3Dutf-8%26aq%3Dt%26rls%3Dorg.mozilla:en-US:official%26client%3Dfirefox-a

with a "Server not found" error in Firefox. I kept trying and it persisted. However, it's all right with google.com.hk, which doesn't go to ipv6.google.com. I tried to ping ipv6.google.com but ended up with this:

Ping request could not find host ipv6.google.com. Please check the name and try again.

Any idea how I can make this error go away with google.com? Or can I set something in my Google search settings to stop it from going to ipv6.google.com?
Server not found when searching through Google
google search
null
_codereview.27965
I found an interview question which requires us to print the elements of a matrix in spiral order starting from the top left. I need some pointers on how to improve my code:

```c
#include <stdio.h>

/* prints a row from startx,starty to endy */
void printRow(int arr[][4], int startx, int starty, int endy)
{
    int yCtr;
    for (yCtr = starty; yCtr <= endy; yCtr++)
        printf("%d ", arr[startx][yCtr]);
}

/* prints a row from startx,starty to endy (decreasing columns) */
void printRowBackward(int arr[][4], int startx, int starty, int endy)
{
    int yCtr;
    for (yCtr = starty; yCtr >= endy; yCtr--)
        printf("%d ", arr[startx][yCtr]);
}

/* prints a column from startx,starty to endx (decreasing rows) */
void printColumnBackward(int arr[][4], int startx, int starty, int endx)
{
    int xCtr;
    for (xCtr = startx; xCtr >= endx; xCtr--)
        printf("%d ", arr[xCtr][starty]);
}

/* prints a column from startx,starty to endx */
void printColumn(int arr[][4], int startx, int starty, int endx)
{
    int xCtr;
    for (xCtr = startx; xCtr <= endx; xCtr++)
        printf(" %d ", arr[xCtr][starty]);
}

/* prints a section of the spiral */
void printSpiralSection(int arr[][4], int startx, int starty, int size)
{
    printRow(arr, startx, starty, size - 1);
    printColumn(arr, startx + 1, size - 1, size - 1);
    printRowBackward(arr, size - 1, size - 2, starty);
    printColumnBackward(arr, size - 2, starty, startx + 1);
}

int main()
{
    int array[4][4] = { 22,  323, 2342, 222,
                        2,   234, 243,  333,
                        21,  13,  23,   444,
                        223, 234, 231,  234 };
    int startx = 0, starty = 0, size = 4;

    /* prints each section of the spiral ... */
    while (size >= 1) {
        printSpiralSection(array, startx, starty, size);
        size--;
        startx++;
        starty++;
    }
    getchar();
    return 0;
}
```
Program to display a matrix in a spiral form
c;interview questions;matrix
null
_codereview.8283
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.

```coffeescript
fib = (x) ->
  return 0 if x == 0
  return 1 if x == 1
  return fib(x-1) + fib(x-2)

i = 2
fib_sum = 0
while (fib i) < 4000000
  n = fib i
  if n % 2 is 0 then fib_sum += fib i
  i++
alert fib_sum
```

I am looking for a more functional solution but I am stuck.

Edit

Optimized the solution:

```coffeescript
limit = 4000000
sum = 0
a = 1
b = 1
while b < limit
  if b % 2 is 0 then sum += b
  h = a + b
  a = b
  b = h
alert sum # 4613732
```
Project Euler question 2 in CoffeeScript
javascript;functional programming;project euler;coffeescript;fibonacci sequence
null
_softwareengineering.269281
I'm developing a browser-based game with node.js on the back end and HTML5 canvas on the front end. It uses WebSockets for communication. My plan is to generate business events on the client side, e.g. finishJob:

- The client will store all relevant information, kept up to date.
- The client doesn't have to call the server every time it needs some data, e.g. the player's money.
- To achieve this, the client will subscribe to the player's channel.
- Every online player has his own channel, like a chat room.
- Every time something happens to a player, the channel fires an event with the player's new data.

In the MVC pattern, here the model is Player and the view is the HTML5 canvas, but I need two types of controllers:

- a controller to handle business events
- a controller to handle channels and subscribers

My questions: Is this a viable option? If yes, are there any design patterns similar to this, or any articles about this kind of architecture? Are there any naming conventions (controllers, handlers, channels...)?
Are there any design pattern to data binding in event driven architecture?
design;architecture;event programming;websockets
Yes... see the link below for this pattern. If you're writing an application which uses Peers -- or any complex app which requires robust object networks -- I would use an Event-Driven Architecture.

Using a Mediator or EventHub (Event-Aggregator)

The simplest approach would be to implement the Mediator pattern described by Addy Osmani. This allows you to write something like:

```javascript
// core.js
mediator.subscribe('newMemberAdded', function newMemberAddedHandler(id) {
  this.membersModule.add(id);
});

...

// membersUI.js
$('#addMember').click(function() {
  ...
  mediator.publish('newMemberAdded', 998);
  ...
});
```

With this, the only coupling your modules require is a reference to mediator in order to communicate with other modules. Using a Mediator is very powerful and will make your modules more liftable (loose coupling); however, there are some conventions you should consider while developing an EDA:

- Modules only publish interests -- not Query+Command events. E.g.: `eventHub.fire('buttonClicked')`, NOT `eventHub.fire('get:membersList', function(){ ... })`.
- Query+Command channels are reserved for Core/Facade interaction (see Osmani's post).
- Work around Noun-Verb-Adjective channel names: e.g. 'log', 'start', 'change', 'notice' can all be read either as a command or as something that happened. You can add the -ing conjugate to disambiguate ('starting').
- Listen before you speak! -- otherwise you may miss events.
- Visit the link above for more.

Additionally, you can bind your Mediator to a WebWorker or SharedWorker to share state between browser tabs (etc.) and bind your worker to an EventHub on your server for an even cleaner coupling. I know this post is somewhat ad hoc, but I hope it's enough to get you started!
_webmaster.50068
I have a small site that I want to merge with a bigger one. Both on Wordpress. How can I merge the second one with the first?I know that one solution would be to make the smaller one a subdomain of the bigger one, but I would like the following thing to happen: when I click on a category or a tag, posts from both sites/databases would appear. Something like Smashing Magazine did when it assimilated designinformer.com. The other solution and the one that I would prefer would be to merge the two databases, but I don't know if this is possible.
I want to combine the databases from two different sites under one URL. How is this possible?
database
null
_softwareengineering.10998
In a message to comp.emacs.xemacs, Jamie Zawinski once said:

"Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems."

I've always had trouble understanding what he was getting at. What does he mean by this?

Update

The answer that I'm looking for is one which explains what the 2nd problem is. Most answers below say that regexes are hard, which doesn't seem to fit the question.
What does the Jamie Zawinski's quotation about regular expressions mean?
quotations;regular expressions
null
_unix.247090
Is it really necessary to run UFW behind my gateway router, which already has firewall/port rules? In this example, I am discussing running UFW on an Internet-accessible (Linux) server that is behind a NATted firewall gateway router.

Please be specific; don't just say "yes, because there are hackers out there!" (though that is a valid answer). Is it just for internal protection? If so, given that I know I can trust my LAN, can I get away with not running UFW?
Is UFW behind a firewall(ed) gateway router necessary
security;firewall;ufw
null
_webmaster.103378
My site has over 5 million pages. Before changing all links from http to https, I had around 14,000 visits per day from Google results; after that, the site is receiving barely 600 visits per day.

Should I switch back all links in the sitemap to http, or should I wait a little longer for them to be re-crawled?

Thanks for your help.
Switching from http to https in my sitemap will affect my SEO?
https;sitemap
It looks like you're pulling that graph from Google Search Console; you should have all versions of your site added there. Going forward, the https property is the one you'd view the metrics for the https version in:

"If your site supports multiple protocols (http:// and https://), you must add each as a separate site."

https://support.google.com/webmasters/answer/34592?hl=en
_datascience.13923
I am trying to do sentiment analysis. In order to convert the words to word vectors I am using the word2vec model. Suppose I have all the sentences in a list named sentences and I am passing these sentences to word2vec as follows:

```python
model = word2vec.Word2Vec(sentences, workers=4, min_count=40,
                          size=300, window=5, sample=1e-3)
```

Since I am a noob at word vectors I have two doubts:

1. Setting the number of features to 300 defines the features of a word vector. But what do these features signify? If each word in this model is represented by a 1x300 numpy array, then what do these 300 features signify for that word?

2. What does downsampling, as represented by the sample parameter in the above model, actually do?

Thanks in advance.
Features of word vectors in word2vec
machine learning;deep learning;word embeddings;word2vec;sentiment analysis
1. The number of features: In terms of the neural network model it represents the number of neurons in the projection (hidden) layer. As the projection layer is built upon the distributional hypothesis, the numerical vector for each word signifies its relation with its context words. These features are learnt by the neural network, as this is an unsupervised method. Each vector has several sets of semantic characteristics.

For instance, let's take the classical example, V(King) - V(Man) + V(Woman) ~ V(Queen), with each word represented by a 300-d vector. V(King) will have semantic characteristics of royalty, kingdom, masculinity, human in the vector in a certain order. V(Man) will have masculinity, human, work in a certain order. Thus when V(King) - V(Man) is done, the masculinity and human characteristics get nullified, and when V(Woman), which has femininity and human characteristics, is added, the result is a vector much similar to V(Queen). The interesting thing is that these characteristics are encoded in the vector in a certain order so that numerical computations such as addition and subtraction work perfectly. This is due to the nature of the unsupervised learning method in the neural network.

2. There are two approximation algorithms: hierarchical softmax and negative sampling. When the sample parameter is given, it takes negative sampling. In the case of hierarchical softmax, for each word vector its context words are given positive outputs and all other words in the vocabulary are given negative outputs. The issue of time complexity is resolved by negative sampling: rather than the whole vocabulary, only a sampled part of the vocabulary is given negative outputs and the vectors are trained, which is much faster than the former method.
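For context, a minimal runnable sketch of the setup the question describes, using the gensim API of that era (newer gensim versions rename size to vector_size and index vectors via model.wv); the toy corpus and the lowered min_count are illustrative stand-ins:

```python
from gensim.models import word2vec

# Toy corpus: each sentence is a pre-tokenized list of words.
sentences = [
    ["the", "movie", "was", "great"],
    ["the", "movie", "was", "terrible"],
]

# Same hyperparameters as the question, except min_count lowered so the
# toy vocabulary is not filtered away. size=300 means every word is
# embedded as a 300-dimensional vector; sample=1e-3 is the threshold
# for downsampling very frequent words.
model = word2vec.Word2Vec(sentences, workers=4, min_count=1,
                          size=300, window=5, sample=1e-3)

vec = model["movie"]   # the 1x300 numpy array for "movie"
print(vec.shape)       # (300,)
```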
_unix.68059
I am using daemontools to monitor a process and its output log. I am using multilog to write the logs to disk. The run script for the log is:

```bash
#!/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
cd /usr/local/script_dir
exec multilog t s16777215 n50 '!tai64nlocal' '!/bin/gzip' /var/log/script_log
```

The process being monitored also writes output to stderr, so the run script for the process has the following lines to redirect stderr to stdout:

```bash
exec 2>&1
exec ./my_process
```

However, while tailing the log file, I see hundreds of lines of output coming in bursts (the monitored process writes output every few seconds), and the timestamps on the log lines differ at sub-microsecond levels. I know from the nature of the process that the time difference between the log lines is not that small. Clearly multilog is buffering output and then adding the timestamp when it is ready to write to file. I would like the timestamps to more closely reflect the time at which each line was output. How can this be fixed?
Daemontools multilog loses log line time information. How to fix it?
logs;buffer;daemontools
The script being monitored was a Python script. To make all standard streams unbuffered, I found that one can just pass the -u option to the interpreter. This solved the problem in my case.
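For illustration, a hedged sketch of the same idea from inside the script itself (the -u flag above is the interpreter-level switch; flush=True and reconfigure are standard-library alternatives):

```python
import sys
import time

# Python 3.7+: switch stdout to line buffering once, up front --
# roughly what running the interpreter with -u achieves for stdout.
sys.stdout.reconfigure(line_buffering=True)

for i in range(3):
    # flush=True also forces each line out immediately, so multilog
    # timestamps it on arrival instead of when the buffer fills.
    print(f"log line {i}", flush=True)
    time.sleep(1)
```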
_codereview.42185
I have a class Employee that contains an ArrayList of Projects. I'm storing the Employees in one table, and the Projects in another. I'm trying to find the best way to create an ArrayList of Employee objects based on a result set. Simply creating an ArrayList of Employees based on a result set is pretty straightforward, but I'm finding that filling an ArrayList of Employees that each contains an ArrayList of Projects isn't so simple. Right now, my getEmployees() function is using two nested SQL queries to accomplish this, something like this:

```java
public ArrayList<Employee> getEmployees() {
    PreparedStatement ps1 = null;
    ResultSet rs1 = null;
    PreparedStatement ps2 = null;
    ResultSet rs2 = null;
    ArrayList<Employee> employees = new ArrayList<Employee>();

    String query1 = "SELECT * FROM employees "
                  + "ORDER BY employee_id ASC";
    try {
        ps1 = conn.prepareStatement(query1);
        rs1 = ps1.executeQuery();
        while (rs1.next()) {
            Employee employee = new Employee();
            int employeeID = rs1.getInt("employee_id");
            employee.setEmployeeID(employeeID);
            employee.setName(rs1.getString("employee_name"));

            // Get projects for this employee
            ArrayList<Project> projects = new ArrayList<Project>();
            String query2 = "SELECT * FROM projects "
                          + "WHERE employee_id = ?";
            ps2 = conn.prepareStatement(query2);
            ps2.setInt(1, employeeID);
            rs2 = ps2.executeQuery();
            while (rs2.next()) {
                Project project = new Project();
                project.setProjectID(rs2.getInt("project_id"));
                project.setName(rs2.getString("project_name"));
                projects.add(project);
            }
            employee.setProjects(projects);
            employees.add(employee);
        }
        return employees;
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        try {
            // close result sets and prepared statements
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    return null;
}
```

The above code works, but it seems messy to me. Is there a way to do this without having to nest two separate SQL queries? I've tried using a LEFT JOIN to get a single result set back, but I was unable to figure out a way to use that single result set to fill the ArrayList of Employees, each containing an ArrayList of Projects.

EDIT: Assume the tables have the following structures:

```sql
-- employees table:
employee_id   int(11)     NOT NULL PRIMARY KEY AUTO_INCREMENT
employee_name varchar(60) NOT NULL

-- projects table:
project_id   int(11)     NOT NULL PRIMARY KEY AUTO_INCREMENT
employee_id  int(11)     NOT NULL
project_name varchar(60) NULL
```
How to fill an ArrayList of ArrayLists with a Left Join?
java;sql
A standard way to do this is through break processing, where you track one value and, when it changes, you do something special... but you may find it easier to use a more unstructured system:

```java
Map<Integer, List<Project>> employeeProjects = new HashMap<>();
Map<Integer, Employee> employees = new TreeMap<>(); // TreeMap ... sorted by employeeID

// join the tables.
String select = "select e.employee_id, e.employee_name, p.project_id, p.project_name "
              + "from employees e left outer join projects p on e.employee_id = p.employee_id "
              + "order by e.employee_id, p.project_name";

// do the select.....

while (rs.next()) {
    Integer employeeID = rs.getInt("employee_id");
    Employee emp = employees.get(employeeID);
    if (emp == null) {
        emp = new Employee();
        emp.setName(rs.getString("employee_name"));
        emp.setEmployeeID(employeeID);
        employees.put(employeeID, emp);
        // create a new list for this employee
        employeeProjects.put(employeeID, new ArrayList<Project>());
    }
    String projectName = rs.getString("project_name");
    if (!rs.wasNull()) {
        List<Project> projects = employeeProjects.get(employeeID);
        Project proj = new Project();
        proj.setID(rs.getInt("project_id"));
        proj.setName(projectName);
        projects.add(proj);
    }
}
rs.close();
```

Then, once you have the data structured the way you want, you can:

```java
List<Employee> result = new ArrayList<>();
for (Employee emp : employees.values()) {
    emp.setProjects(employeeProjects.get(emp.getEmployeeID()));
    result.add(emp);
}
return result;
```
_unix.56084
I have a symbolic link to a file in one directory. I would like to have that same link in another directory. How do I copy a symbolic link?I tried to cp the symbolic link but this copies the file it points to instead of the symbolic link itself.
How do I copy a symbolic link?
centos;symlink;cp
Use cp -P (capital P) to never traverse any symbolic link and copy the symbolic link instead.

This can be combined with other options such as -R to copy a directory hierarchy: cp -RL traverses all symbolic links to directories, while cp -RP copies all symbolic links as such. cp -R might do one or the other depending on the unix variant; GNU cp (as found on CentOS) defaults to -P.

Even with -P, you can copy the target of a symbolic link to a directory on the command line by adding a / at the end: cp -RP foo/ bar copies the directory tree that foo points to.

GNU cp has a convenient -a option that combines -R, -P, -p and a little more. It makes an exact copy of the source (as far as possible), preserving the directory hierarchy, symbolic links, permissions, modification times and other metadata.
_codereview.156369
I have this function that returns a list of users with their roles and groups. As you can see, this is how I fetch and create the list of objects. I'm wondering whether this is a good approach and what parts should be improved. I'm not that experienced with PHP, so I would appreciate code samples. I'm also wondering how good an approach it is to first get all users and then make another prepared statement to get user roles and groups. That would mean a big number of database calls, so I think it's a bad idea.

```php
$stmt = $mysqli->prepare("SELECT u.id, u.firstName, u.lastName, u.email, u.phoneNumber,
                                 u.address, u.birthDate, ur.roleName, cg.id, cg.name
                          FROM users as u
                          LEFT OUTER JOIN user_role as ur ON u.id = ur.userId
                          LEFT OUTER JOIN user_group as ug on ug.userId = u.id
                          LEFT OUTER JOIN control_group as cg on cg.id = ug.groupId
                          WHERE u.id != ?");
$stmt->bind_param("i", $_SESSION["id"]);
$stmt->execute();
$stmt->bind_result($id, $firstName, $lastName, $email, $phoneNumber, $address,
                   $birthDate, $roleName, $groupId, $groupName);

$users = array();
while ($stmt->fetch()) {
    if (empty($users[$id])) {
        $users[$id] = array(
            'id' => $id,
            'firstName' => $firstName,
            'lastName' => $lastName,
            'email' => $email,
            'phoneNumber' => $phoneNumber,
            'address' => $address,
            'birthDate' => $birthDate,
            'roles' => array(),
            'groups' => array()
        );
    }
    if ($roleName) {
        $found = false;
        foreach ($users[$id]['roles'] as $role) {
            if ($role['roleName'] == $roleName) {
                $found = true;
                break;
            }
        }
        if ($found == false)
            $users[$id]['roles'][] = array('roleName' => $roleName);
    }
    if ($groupId) {
        $found = false;
        foreach ($users[$id]['groups'] as $group) {
            if ($group['groupName'] == $groupName) {
                $found = true;
                break;
            }
        }
        if ($found == false)
            $users[$id]['groups'][] = array('groupName' => $groupName);
    }
}
$stmt->close();
$mysqli->close();

echo json_encode($users);
```

This is the response I get. The only thing I want to improve is the item index: as you can see, in my example the index is the item id, and I would like to get a correct index starting from 0.
Loading objects for users, roles, and groups from a query with LEFT OUTER JOINs
php;mysql;json;mysqli;join
When you say you want a correct index starting from 0, does this mean that you want the first dimension in your data structure to be 0 to n-1 (where n is the number of user objects returned)? If that is the case, you really should be building a numerically-indexed array of user objects, not an object/associative array with the user id as the first dimensional key.

Some thoughts on your code:

- I would strongly consider moving away from camelCasing the names of your database entities (tables, columns, etc.). This can save you from problems around the fact that, in most cases with MySQL, these database entities are treated without consideration for case (like in files on your system and such). Using snake_case is generally the preferred approach for most relational database systems to avoid unexpected problems arising from such naming.
- Be consistent on upper-casing all query parts that are not database entities - AS, ON, etc. in addition to the obvious ones like SELECT, FROM, etc. Right now you are mixing usage for "on".
- Take as much vertical space as you need when writing your queries. You get no bonus points for trying to keep it on as few lines as possible. Err on the side of making everything in your code more readable.
- I would consider aliasing the returned field names in your query so you can move away from binding query results to variables that live in the main scope of this script. To me, you begin to lose the concept of working with a row of data in your current approach.
- You might consider using fetch_object() in combination with the aliasing noted above, to give you a nice, readable way to work with each row in the result set. This will also map null values in the result set to true null values on the resulting row object.
- You should use ORDER BY clauses whenever you are going to need to map flat rows from a result set into a hierarchical data structure. This allows you to simply look for changes in column values when iterating the structure to trigger the need to create new child data structures.
- If you truly work with objects when reading data into your final structure and use an ORDER BY clause, you should be able to simplify your result-row-to-data-structure mapping logic.
- Why be redundant in your response data with labels roleName and groupName when they are already nested in roles and groups arrays?
- Why retrieve the group id at all if you are not going to use it in the resulting response structure?

Putting this all together you might have code more like this:

```php
$query = "
SELECT
    u.id AS id,
    u.first_name AS first_name,
    u.last_name AS last_name,
    u.email AS email,
    u.phone_number AS phone_number,
    u.address AS address,
    u.birth_date AS birth_date,
    ur.roleName AS role_name,
    cg.name AS group_name
FROM users AS u
LEFT OUTER JOIN user_role AS ur ON u.id = ur.user_id
LEFT OUTER JOIN user_group AS ug ON ug.user_id = u.id
LEFT OUTER JOIN control_group AS cg ON cg.id = ug.group_id
WHERE u.id != ?
ORDER BY id ASC, role_name ASC, group_name ASC";

$stmt = $mysqli->prepare($query);
$stmt->bind_param("i", $_SESSION["id"]);
$stmt->execute();

// and then fetch rows
$users = array();
$user = new stdClass();
$user->id = 0;

while ($row = $stmt->fetch_object()) {
    // build new user if needed
    if ((int)$row->id !== $user->id) {
        // Break existing reference between previous $user and $users.
        // Data in $users will remain after this dereferencing.
        unset($user);

        $user = new stdClass();
        $user->id = (int)$row->id;
        $user->firstName = $row->first_name;
        $user->lastName = $row->last_name;
        $user->email = $row->email;
        $user->phoneNumber = $row->phone_number;
        $user->address = $row->address;
        $user->birthDate = $row->birth_date;
        $user->roles = array();
        $user->groups = array();

        // Assign this new user object by reference to $users.
        // This allows you to simply work with $user here in the loop
        // as opposed to $users[$index], which requires you to manually
        // track index values.
        $users[] =& $user;

        // Since this is a new user object, we need to reset the
        // role and group objects so we can detect changes in these columns.
        unset($role);
        $role = new stdClass();
        $role->name = null;

        unset($group);
        $group = new stdClass();
        $group->name = null;
    }

    // build new role if needed
    if ($row->role_name !== $role->name) {
        // dereference role
        unset($role);
        // create new role for this user
        $role = new stdClass();
        $role->name = $row->role_name;
        $user->roles[] =& $role;
    }

    // build new group if needed
    if ($row->group_name !== $group->name) {
        // dereference group
        unset($group);
        // create new group for this user
        $group = new stdClass();
        $group->name = $row->group_name;
        $user->groups[] =& $group;
    }
}
unset($user, $role, $group);

$stmt->close();
$mysqli->close();

echo json_encode($users);
```

Note that this is a simple example and doesn't have proper error handling around statement preparation and execution. You should make sure that you just don't assume these things work. Make sure you understand all possible results and/or exceptions that can occur from a function/method call and handle those outcomes accordingly.
_softwareengineering.345247
"There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors." - Tweeted by Jeff Atwood

Agreed. All software engineers can understand this. Choosing suitable names for variables can make an enormous difference in the readability of code. Very good programmers have an arsenal of both technical and descriptive words that they use to articulate complex tasks. There have been times when I've thought for more than an hour about an appropriate variable name to give my code clarity.

It seems that to be truly effective at programming you also need to develop your vocabulary. However, I suspect there is a particular realm of words that are sufficiently descriptive, yet common enough that any reader can follow the flow of code. So my question is: how do I do this efficiently? I.e. not just by reading lots of books. Perhaps there are courses oriented specifically toward teaching useful coding vocabulary?
How can I expand my vocabulary to improve my variable names?
naming
null
_softwareengineering.143134
I am reading PHP Objects, Patterns, and Practice. The author is trying to model a lesson in a college. The goal is to output the lesson type (lecture or seminar) and the charge for the lesson, depending on whether it is an hourly or fixed-price lesson. So the output should be:

Lesson charge 20. Charge type: hourly rate. Lesson type: seminar.
Lesson charge 30. Charge type: fixed rate. Lesson type: lecture.

when the input is as follows:

```php
$lessons[] = new Lesson('hourly rate', 4, 'seminar');
$lessons[] = new Lesson('fixed rate', null, 'lecture');
```

I wrote this:

```php
class Lesson {
    private $chargeType;
    private $duration;
    private $lessonType;

    public function __construct($chargeType, $duration, $lessonType) {
        $this->chargeType = $chargeType;
        $this->duration = $duration;
        $this->lessonType = $lessonType;
    }

    public function getChargeType() {
        return $this->chargeType;
    }

    public function getLessonType() {
        return $this->lessonType;
    }

    public function cost() {
        if ($this->chargeType == 'fixed rate') {
            return 30;
        } else {
            return $this->duration * 5;
        }
    }
}

$lessons[] = new Lesson('hourly rate', 4, 'seminar');
$lessons[] = new Lesson('fixed rate', null, 'lecture');

foreach ($lessons as $lesson) {
    print "Lesson charge {$lesson->cost()}. ";
    print "Charge type: {$lesson->getChargeType()}. ";
    print "Lesson type: {$lesson->getLessonType()}. ";
    print "<br />";
}
```

But according to the book, I am wrong (I am pretty sure I am, too). Instead, the author gave a large hierarchy of classes as the solution. In a previous chapter, the author stated the following 'four signposts' as the times when I should consider changing my class structure:

- Code duplication
- The class who knew too much about its context
- The jack of all trades - classes that try to do many things
- Conditional statements

The only problem I can see is conditional statements, and that too in a vague manner - so why refactor this? What problems do you think might arise in the future that I have not foreseen?

Update: I forgot to mention - this is the class structure the author has provided as a solution - the strategy pattern:
Why is my class worse than the hierarchy of classes in the book (beginner OOP)?
php;object oriented;refactoring
null
_unix.91075
Without unplugging my keyboard I'd like to disable it from the terminal; I was hoping that this could be done using rmmod but based on my currently loaded modules it doesn't look like it is possible. Does anyone have any ideas?
How to disable keyboard?
linux;keyboard;kernel modules
There are pretty good directions on doing it here, titled "Disable / enable keyboard and mouse in Linux".

Example

You can list the devices with this command:

```
$ xinput --list
Virtual core pointer    id=0    [XPointer]
Virtual core keyboard   id=1    [XKeyboard]
Keyboard2               id=2    [XExtensionKeyboard]
Mouse2                  id=3    [XExtensionKeyboard]
```

And disable the keyboard with this:

```
$ xinput set-int-prop 2 "Device Enabled" 8 0
```

And enable it with this one:

```
$ xinput set-int-prop 2 "Device Enabled" 8 1
```

This only works for disabling the keyboard through X, so if you're on a system that isn't running X this won't work.

List of properties

You can use this command to get a list of all the properties for a given device:

```
$ xinput --list-props 2
Device 'Virtual core keyboard':
        Device Enabled (124):   1
        Coordinate Transformation Matrix (126): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
```
_softwareengineering.194111
It seems most companies are practicing Agile methodologies these days for software development. I'm curious to know if there are any downsides to using Agile. Does it have shortcomings? Is it always the right methodology to use? Do any of you have experience of using Agile where it didn't really work, maybe due to the type of project, or the team?
Are there any disadvantages to using the Agile methodology?
agile
Agile is successful if you have an organization that is committed to doing it right. No methodology will work if you don't have commitment and training in the methodology.I would also say that agile works best at a certain scale. You have to be able to divide your project into reasonable size deliverables. You wouldn't be able to build an office building or a nuclear submarine using agile. Similarly, in software, some projects are large and complex enough that you really can't start without doing a lot of the up-front design work using a more waterfall-like approach. Think about something like an operating system, for example. Another example would be a system that crosses many organizational boundaries, like a national health electronic record system.Once you've done your overall architecture and design, you can use agile to build-out the features, but if you started with agile you probably wouldn't get off the ground.
_unix.185145
I'm trying to print a simple US Letter document, but for some reason I just cannot manage to fit it properly onto A4 when printing multiple pages per sheet. I have tried converting the pdf using:

```
gs -o print.pdf -sDEVICE=pdfwrite -sPAPERSIZE=a4 -dFIXEDMEDIA -pPDFFitPage -dCompatibilityLevel=1.4 input.pdf
```

But this does not seem to have any effect on the document; it still shows as US Letter. Is there any way to convert a pdf to A4 format?
Convert pdf to a different page size (US Letter -> A4)
printing;pdf;ghostscript
One solution that usually works is to use pdfjam from the texlive distribution:

```
> pdfinfo in.pdf
...
Producer:       Acrobat Distiller 6.0.1 (Windows)
...
Page size:      612 x 792 pts (letter)
Page rot:       0
File size:      66676 bytes
Optimized:      yes
PDF version:    1.4

> pdfjam --outfile out.pdf --paper a4paper in.pdf

> pdfinfo out.pdf
Creator:        TeX
Producer:       pdfTeX-1.40.15
...
Page size:      595.276 x 841.89 pts (A4)
Page rot:       0
File size:      53963 bytes
Optimized:      no
PDF version:    1.5
```

Setting the paper size to something unconventional works with another switch, such as --papersize '{6.125in,9.250in}'. As you can see here, it also changed the PDF version and dropped/modified other properties of the PDF, so you have to check whether it's suitable for your task.
_unix.74764
Is it possible to retrieve info regarding locked unix accounts? I am interested in seeing information about what date and time the lockout happened and from what hostname (PC name). I would like to see something similar to the who command.
How to retrieve information about locked accounts
command line;security;users
I don't believe this information is kept anywhere. The only place you could get some of this type of information would be from the sudo command logs, assuming you're using sudo and that your sudo setup gives out permissions such that you're logging individual commands such as passwd.

I've used this command before to show which accounts are locked, i.e. LK:

```
$ cat /etc/passwd | cut -d : -f 1 | awk '{ system("passwd -S " $0) }'
root PS 2010-12-18 0 99999 7 -1 (Password set, SHA512 crypt.)
ftp LK 2010-11-11 0 99999 7 -1 (Alternate authentication scheme in use.)
nobody LK 2010-11-11 0 99999 7 -1 (Alternate authentication scheme in use.)
usbmuxd LK 2010-12-18 0 99999 7 -1 (Password locked.)
avahi-autoipd LK 2010-12-18 0 99999 7 -1 (Password locked.)
dbus LK 2010-12-18 0 99999 7 -1 (Password locked.)
ntop LK 2011-05-22 0 99999 7 -1 (Password locked.)
nginx LK 2011-08-19 0 99999 7 -1 (Password locked.)
postgres LK 2012-06-26 0 99999 7 -1 (Password locked.)
fsniper LK 2012-06-26 0 99999 7 -1 (Password locked.)
clamupdate LK 2012-08-31 0 99999 7 -1 (Password locked.)
```

Alternative method

Thanks to @RahulPatil in the comments, here's a more concise method:

```
$ awk -F: '{ system("passwd -S " $1) }' /etc/passwd
root PS 2007-06-20 0 99999 7 -1 (Password set, MD5 crypt.)
bin LK 2007-06-20 0 99999 7 -1 (Alternate authentication scheme in use.)
daemon LK 2007-06-20 0 99999 7 -1 (Alternate authentication scheme in use.)
adm LK 2007-06-20 0 99999 7 -1 (Alternate authentication scheme in use.)
```
_unix.129359
I recently installed Puppy Linux on a USB stick using the Universal USB Installer, but I have not been able to boot from it. My notebook came with Windows 8 and had Secure Boot activated in UEFI. I disabled the Secure Boot option, but I am still not able to boot from the USB stick (it is not shown in the list of bootable devices). Interestingly, live USBs with certain Linux distros are recognized on my computer, such as Linux Mint Cinnamon and Ubuntu, but others, such as Puppy, Zorin, Elementary OS and Mint MATE, are not.

Basically all the advice I found online is to disable the Secure Boot option, which I already did. Is there anything else I can do in order to boot from my USB stick?
UEFI and Puppy Linux
boot;live usb;uefi;puppy linux
You need a full shutdown, because you use Windows 8. Maybe you must disable the hybrid boot: http://www.howtogeek.com/129021/how-to-do-a-full-shutdown-in-windows-8-without-disabling-hybrid-boot/

Type in cmd:

```
shutdown /s /t 0
```

Also look at this: http://puppylinux.org/wikka/UEFI
_webapps.103956
I have a personal Office account, and using Excel Online I've noticed this issue: the language of OneDrive and Excel is English, and I see everything in English, except formulas. When I type formulas I get the suggestions in Hungarian, and I also see entered formulas in Hungarian. When I select functions, I see the list in English; however, the help is in Hungarian (see SZÖVEGÖSSZEFŰZÉS at the bottom of the picture), and inserting the function inserts it in Hungarian.

The issue is the same with both Google Chrome 56.0.2924.87 and Internet Explorer 11 (both from the same PC with Windows 7, if that matters). How can I fix this?
strange language settings in Excel online
excel online
null
_cs.68760
I'm studying transactions in databases, and I tried to do an exercise that relates to 2PL schedules.

Is the schedule $S$ 2PL?

$S: R1(x) R2(x) W1(x) W2(y) W1(y)$

I know the definition of a 2PL schedule and I know that there are different types. First I checked whether $S$ is serializable, and it is not. Now how do I check if it is 2PL? Is there a mechanical method? I tried to create a table like this:

+-----------+------------------+-----+-----+
| OPERATION | LOCK             | x   | y   |
+-----------+------------------+-----+-----+
| R1(x)     | lock(T1, x) = OK | T1  |     |
+-----------+------------------+-----+-----+
| R2(x)     | lock(T2, x) = ?  |     |     |
+-----------+------------------+-----+-----+
| W1(x)     |                  |     |     |
+-----------+------------------+-----+-----+
| W2(y)     |                  |     |     |
+-----------+------------------+-----+-----+
| W1(y)     |                  |     |     |
+-----------+------------------+-----+-----+

But I am stuck. The transaction $T1$ has a lock on the object $x$; now $T2$ wants to read $x$. If the lock of $T1$ on $x$ is shared (S) then $T2$ can read $x$; if the lock is exclusive (X), $T2$ cannot read $x$. I think it's more likely that the lock is shared, thus:

+-----------+------------------+------+-----+
| OPERATION | LOCK             | x    | y   |
+-----------+------------------+------+-----+
| R1(x)     | lock(T1, x) = OK | T1S  |     |
+-----------+------------------+------+-----+
| R2(x)     | lock(T2, x) = OK | T2S  |     |
+-----------+------------------+------+-----+
| W1(x)     | lock(T1, x) = OK |      |     |
+-----------+------------------+------+-----+
| W2(y)     | lock(T2, y) = OK |      | T2X |
+-----------+------------------+------+-----+
| W1(y)     | lock(T1, y) = NO |      |     |
+-----------+------------------+------+-----+

Assuming I'm right, I tried to fill in the table, but I'm not at all sure of what I did. Could someone help me? Thank you very much.
Check if a schedule is 2PL
concurrency;database theory;databases
null
_cstheory.14367
For positive semi-definite matrices $A,B$, how can I find an $X$ that minimizes $\text{Trace}(AX^TBX)$ under the constraint that $X$ is orthogonal? All the matrices have real entries, and $A,B$ are square while $X$ is rectangular. Thanks.

This is what I have: Define $B=F^{T}F$ and $Y=FX$. Then the above problem becomes
\begin{align}
\min_{Y}~ \text{trace}(AY^{T}Y)
\end{align}
But now I want the $X^*$ that minimizes the original problem. This is what is confusing me!
Trace minimization with an orthogonality constraint
ds.algorithms;optimization;linear algebra
null
_unix.185801
I am searching for a method to find only JPEG files. With my limited knowledge of Linux I came to this point:

- list all paths that exist from root below with find /
- pipe the result into the next command and perform the file command on each found path with xargs file

The output of the file command contains a "JPEG" string, so I thought maybe it would be possible to use an if statement to filter only the JPEGs:

If (JPEG contained in output of file command) { show argument of file }

once more:

find / | xargs file | if statement

Could you please correct me, give me a hint how to perform the task, or give a solution?
How to list only JPEG files from root below using the command line?
shell script;find;file command
null
_reverseengineering.12011
I am debugging an old Sony PlayStation 1 MIPS binary that swaps a bunch of submodules using same base address for loading. I am trying to find a workflow that would allow me to have all the submodules loaded at the same time.From my understanding it's impossible to create overlapping segments in IDA, so I tried loading the binary at a higher address(outside of the 2MB of RAM IDA expects), but IDA doesn't like this - code blocks are not automatically detected and when forced - offsets are all wrong(not a big surprise). So do I have any options here? Looks like other way around is to try to sync data between multiple projects, but based on other answers - that's far from perfect too.
Multiple overlays using same addresses in IDA
ida
null
_vi.7665
I want to open another file in the same directory, or any file with its path relative to the current directory, from the command line. My path is /home/sibich:

/home/sibich> vim a.pl

In vim, I want to open b.pl in the same directory, so I use:

:vim b.pl

But I receive this message: "Invalid pattern or filename". So I had to run it via the shell:

:!vim b.pl

I want to execute this directly in vim.

Example 2: sub is a folder under /home/sibich:

:vim sub/c.pl

Is there a way to set options such that the command line accepts paths relative to the current directory and allows opening files through the split, tabnew and vim commands?
opening another file with path relative to current directory
working directory
Have you tried the edit command?:edit b.plEdit: Not sure if you edited in the last question, or I just missed it the first time. But the only reason you wouldn't be able to use relative paths on :split or :tabnew is if your current working directory isn't the same as the file you're currently editing. So I think what you're looking for is :set autochdirThis option basically makes your current working directory follow you whenever you change buffers. With that options set, you should be able to use relative paths. See :h autochdir for more info.
_webmaster.4557
I am using the following htaccess code to stop hotlinking of my image files:

```
Options +FollowSymlinks
Options +SymlinksIfOwnerMatch
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http://(.+\.)?article-stack.com/ [NC]
RewriteCond %{HTTP_REFERER} !^$
ReWriteRule .*.(png|gif|jpg)$ - [N,F,L]
```

Still the images on my site are accessible from other sites. How can I stop it?
Not able to stop hotlink
htaccess;hotlinking
null
_webmaster.21388
I'm looking for recommendations for a credit card processor to handle payments for purchases from an online store. The requirements are:

- hosted payments page with state-of-the-art UX, able to be configured to match the parent website's look and feel as closely as possible
- settlement into a UK merchant account, or includes a merchant facility and can settle to a UK company with a UK bank account, in GBP
- looking to minimise PCI requirements for the parent web site as far as possible
- reliable and good support
- payments page doesn't require Verified by Visa or SecureCode
- ideally the payments page can do multi-currency
- a shopping cart is not required

The UK-based processors I've found so far all have hosted payments pages that look like they are from the mid-1990s. Nice UX is a key requirement. Suggestions based on personal experience preferred.
Credit Card processor for UK company recommendations?
payments;payment gateway
null
_codereview.161142
Solved the classic Tower of Hanoi problem in Ruby, using recursion. Would love your feedback on this.

```ruby
# Excellent explanation of the solution at
# http://www.mathcs.emory.edu/~cheung/Courses/170/Syllabus/13/hanoi.html

Move = Struct.new :disk, :from, :to do
  def to_s
    "Disk #{disk}: #{from} -> #{to}"
  end
end

def spare_peg(from, to)
  # returns the peg that is not 'from' nor 'to'
  # e.g. if from="A", to="C" ... then spare="B"
  [*"A".."C"].each { |e| return e unless [from, to].include? e }
end

def hanoi(num, from, to)
  if num == 1 # base case
    return [Move.new(num, from, to)]
  end

  spare = spare_peg(from, to)

  moves = hanoi(num - 1, from, spare)  # move everything to the spare peg
  moves << Move.new(num, from, to)     # move the sole remaining disk to the 'to' peg
  moves += hanoi(num - 1, spare, to)   # move all the disks on top of the 'to' peg
end
```

Sample output:

```ruby
puts hanoi(3, "A", "B").each { |move| move.to_s }
```

```
Disk 1: A -> B
Disk 2: A -> C
Disk 1: B -> C
Disk 3: A -> B
Disk 1: C -> A
Disk 2: C -> B
Disk 1: A -> B
```
Tower of Hanoi solver in Ruby
algorithm;ruby;recursion
Looks good!

For the spare_peg you could use detect (which can be called on a range):

```ruby
("A".."C").detect { |peg| ![from, to].include?(peg) }
```

or some array arithmetic:

```ruby
([*"A".."C"] - [from, to]).first
```

(I'd just use detect.)

And a minor thing: I'd use parentheses for declaring the Struct.new call, just for consistency.
_reverseengineering.15695
I am trying to determine how to read physical memory. I understand that when the OS is in protected mode a process can only access the virtual memory assigned to it without resulting in a segmentation fault. Is it possible to read physical memory using assembly language running in real mode? I'm assuming that using assembly language would be the most effective method rather than using a higher-level language such as C. My goal is to create a program which can scan physical memory to be used for memory forensic applications. I am running a Windows OS using MASM.
Reading Physical Memory
assembly;memory
null
_scicomp.18830
I am trying to create an animation of a 3D vector field in ParaView. All my data is in three matrices (201 x 201 x 201) corresponding to the vector components in the x, y, z directions, calculated in MATLAB. If I do:

```matlab
B = cat(4, Bx, By, Bz);
B = permute(B, [4 1 2 3]);
fid = fopen('Bvec.raw', 'w');
fwrite(fid, B, 'float');
fclose(fid);
```

I can load Bvec.raw into ParaView and visualise the vector field at one instant of time. Now, how would I go about making an animation if I had, say, 100 (Bx, By, Bz) triples at different instants of time, without creating 100 different raw files and loading them into ParaView by hand?

Also, I'm not set on using ParaView; it's just the first software I tried, so if you think there's an easier/better/more competent way of going about it, I'd appreciate suggestions.
3D animation of a vector field (ParaView)
matlab;data visualization;visualization;vector;paraview
null
_webapps.7704
Possible Duplicate:Good webapp for checking availability of domain names? My wife wants to register several new business domains. She is non-technical so would appreciate a solution that doesn't involve nslooukp and the command line. I used to have a desktop tool but oddly must have cleaned it off my machine by accident. Is there a website or tool that you trust?I've already had one recommendation for betterwhois - are they trustworthy?
Looking for a safe/tool site for checking domain names
domain
I've been using domaintools.com's various services for years. Its http://domain-search.domaintools.com/ gives easy keyword search for domain names. It's safe enough for me. However, a desktop tool querying DNS servers directly may be safer for you. Did the desktop tool you deleted offer domain search by keyword?
_unix.310849
I need to test some code on a big-endian system, and it feels like severe overkill to use an entire Linux distro to run one executable on a big-endian platform. I have been working on the Eudyptula Challenge, so I am familiar with the idea of just running the raw Linux kernel, loading bash, quickly compiling my project (although I think "quickly compiling" is an oxymoron), then running it.

My issue is that I can't seem to successfully cross-compile Linux 4.7 on my Arch Linux machine to run it on a big-endian emulator (using qemu). I installed the arm-linux-gnueabihf-* packages and tried:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- defconfig

which seemed to successfully build the kernel, but when I tried running it with:

qemu-system-arm -machine raspi2 -kernel ./arch/arm/boot/zImage

it just showed a black screen. Any idea how to successfully run a basic Linux installation without any distribution, to quickly test big-endian code?
Cross-compile base Linux for Big-Endian system
linux;linux kernel;cross compilation
null
_codereview.38626
Is the below code efficient for parsing the file, or will I face performance issues? If the latter, please offer suggestions.

```cpp
#include <iostream>
#include <fstream>
#include <map>

using namespace std;

ofstream outfile;

void writedata(string data, string file)
{
    outfile.open(file.c_str(), ios::app);
    outfile << data;
    outfile.close();
}

int main()
{
    ifstream input("data.txt", ios::in);
    ofstream out;
    string line;
    string tmp;
    string data;
    string filename;
    string name;
    size_t found;
    map<string, string> MyFileMap;
    map<string, string>::iterator it;

    while (input >> line) {
        found = line.find_last_of("pmf");
        if (found != string::npos) {
            name = line.substr(found + 1, line.length());
            filename = line.substr(found + 1, line.length());
            cout << "filename without txt " << filename << endl;
            for (it = MyFileMap.begin(); it != MyFileMap.end(); ++it) {
                if (name.compare(it->first) == 0) {
                    while (input >> tmp && tmp.compare("pme")) {
                        data += tmp;
                    }
                    cout << "Data push " << data << " to " << it->second << endl;
                    writedata(data, it->second);
                    data.clear();
                    break;
                }
            }
            if (it == MyFileMap.end() || MyFileMap.empty()) {
                cout << "No match in map" << endl;
                filename = filename + ".txt";
                cout << "Filename " << filename << endl;
                while (input >> tmp && tmp.compare("pme")) {
                    data += tmp;
                }
                cout << "Data push " << data << " to " << it->second << endl;
                out.open(filename.c_str());
                MyFileMap[name] = filename;
                for (it = MyFileMap.begin(); it != MyFileMap.end(); ++it) {
                    if (name.compare(it->first) == 0) {
                        writedata(data, it->second);
                        data.clear();
                        break;
                    }
                }
            }
        }
    }
    system("pause");
}
```

Sample file, data.txt:

```
pmf
work1
data1
data1
pme
pmf
work2
data2
data2
pme
pmf
work3
data3
data3
pme
pmf
work4
data4
data4
pme
```

In the above code, I am trying to parse the sample file with pmf and pme as begin and end flags, the work lines as file names, and the data lines as file contents. I would like to reduce the number of open calls. It would be of great help if you could provide any suggestions. The above code generates four different text files: work1, work2 and so on.
Is this code efficient for file parsing?
c++;parsing;c++11
null
_unix.80447
In this answer on a previous question, I found out how to modify files in a squashfs filesystem:

```
# unsquash the filesystem to a local directory
sudo cp /media/clonezilla/live/filesystem.squashfs ./
sudo unsquashfs filesystem.squashfs

# now, insert my own script which I want as part of the distribution
sudo cp ~/autobackup squashfs-root/usr/sbin/

# now, resquash the filesystem to be able to use it
sudo mksquashfs squashfs-root filesystem.squashfs -b 1024k -comp xz -Xbcj x86 -e boot
```

However, on that last line, I run into some problems making the filesystem:

```
Source directory entry bin already used! - trying bin_1
Source directory entry dev already used! - trying dev_1
Source directory entry etc already used! - trying etc_1
Source directory entry home already used! - trying home_1
Source directory entry initrd.img already used! - trying initrd.img_1
Source directory entry lib already used! - trying lib_1
Source directory entry lib64 already used! - trying lib64_1
Source directory entry media already used! - trying media_1
Source directory entry mnt already used! - trying mnt_1
Source directory entry opt already used! - trying opt_1
Source directory entry proc already used! - trying proc_1
Source directory entry root already used! - trying root_1
Source directory entry run already used! - trying run_1
Source directory entry sbin already used! - trying sbin_1
Source directory entry selinux already used! - trying selinux_1
Source directory entry srv already used! - trying srv_1
Source directory entry sys already used! - trying sys_1
Source directory entry tmp already used! - trying tmp_1
Source directory entry usr already used! - trying usr_1
Source directory entry var already used! - trying var_1
Source directory entry vmlinuz already used! - trying vmlinuz_1
```

Essentially, since it's overwriting an existing squashfs filesystem, instead of merging duplicate files it creates new folders and files in the root of the filesystem named bin_1, etc_1, var_1, tmp_1, etc. Obviously, this is not desired. Is there a way that I can force it to merge the directories? I have attempted to run it with -noappend, but that breaks the Clonezilla install and I can't get into the Clonezilla wizard. Any ideas?
Merging preexisting source folders in mksquashfs
clonezilla;squashfs
As I said in my other answer, you have to move the old filesystem.squashfs to another location (or rename it) before repacking your modified squashfs-root into a new filesystem.squashfs:

    mv filesystem.squashfs /path/to/backup/

or

    mv filesystem.squashfs filesystem.squashfs.old

then:

    mksquashfs squashfs-root filesystem.squashfs -b 1024k -comp xz -Xbcj x86 -e boot
_codereview.120069
This is my first PHP project - just something to write to learn from. I made a page with a form to send a quote to the server and display some quotes from the db. I'm just starting with PHP, so point out anything that could help me improve.

Live version.

quote-a-day.php

    <?php
        require_once $_SERVER['DOCUMENT_ROOT'] . '/logger.php';
        require_once $_SERVER['DOCUMENT_ROOT'] . '/db.php';

        $quote = $aName = "";
        $quoteErr = $nameErr = $sendErr = "";
        $success = false;

        function fail()
        {
            global $success;
            $success = false;
        }

        function test_input($data)
        {
            $data = trim($data);
            $data = stripslashes($data);
            $data = htmlspecialchars($data);
            return $data;
        }

        if ($_SERVER["REQUEST_METHOD"] === "POST") {
            $success = true;

            if (empty($_POST["quote"])) {
                $quoteErr = "Quote is required.";
                fail();
            } elseif (strlen($_POST["quote"]) > 40) {
                $quoteErr = 'Too long. Keep it under 40 characters.';
                fail();
            } else {
                $quote = test_input($_POST["quote"]);
            }

            if (empty($_POST["aName"])) {
                $nameErr = "Author's name is required.";
                fail();
            } elseif (strlen($_POST["aName"]) > 40) {
                $nameErr = 'Too long. Keep it under 40 characters.';
                fail();
            } else {
                $aName = test_input($_POST["aName"]);
            }
        }

        if ($success) {
            try {
                $db = DB::getConn();

                // send data
                $stmt = $db->prepare("SELECT * FROM qad_author WHERE name = :name");
                $stmt->execute(array(':name' => $aName));
                if ($stmt->rowCount() === 0) {
                    $stmt = $db->prepare("INSERT INTO qad_author (name) value (:name)");
                    $stmt->execute(array(':name' => $aName));
                    $author_id = $db->lastInsertId();
                } else {
                    $author_id = $stmt->fetch(PDO::FETCH_ASSOC)['author_id'];
                }
                $stmt = $db->prepare("INSERT INTO qad_quote (quote, author) values (?, ?)");
                $stmt->execute(array($quote, $author_id));

                // Redirect
                header("Location: " . $_SERVER['REQUEST_URI']);
                exit();
            } catch (PDOException $ex) {
                Logger::alert($ex->getMessage());
                $sendErr = 'Quote submit failed. Error logged.';
            }
        }
    ?>
    <!DOCTYPE html>
    <html>
    <head>
        <title>Semicoded</title>
        <link rel="stylesheet" type="text/css" href="quote.css">
    </head>
    <body>
        <?php include("menu_top.php") ?>
        <div id="main">
            <div class="project">
                <h3>Submit a quote</h3>
                <br>
                <form action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]); ?>" method="post">
                    <table>
                        <tr>
                            <td>Quote:</td>
                            <td><input type="text" name="quote" maxlength="40" size="30" value="<?php echo $quote; ?>"></td>
                            <td><span class="error"><?php echo $quoteErr; ?></span></td>
                        </tr>
                        <tr>
                            <td>Author:</td>
                            <td><input type="text" name="aName" maxlength="40" size="20" value="<?php echo $aName; ?>"></td>
                            <td><span class="error"><?php echo $nameErr; ?></span></td>
                        </tr>
                        <tr>
                            <td colspan="2"><input type="submit"></td>
                            <td><span class="error"><?php echo $sendErr; ?></span></td>
                        </tr>
                    </table>
                </form>
            </div>
            <div id="qad-out">
                <?php // display quotes
                    include("qad-load.php"); ?>
            </div>
            <div class="quote">
                <a href="#" id="qad-load-more">Load more.</a>
            </div>
        </div>
    <script type="text/javascript">
    (function () {
        'use strict';
        var out = document.getElementById("qad-out");

        function loadMore() {
            var xmlhttp = new XMLHttpRequest();
            xmlhttp.onreadystatechange = function () {
                if (xmlhttp.readyState === XMLHttpRequest.DONE) {
                    out.innerHTML = (xmlhttp.status === 200) ?
                        xmlhttp.responseText :
                        '<div class="quote"><span class="error">There was an error.</span></div>';
                }
            };
            xmlhttp.open("GET", "qad-load.php", true);
            xmlhttp.send();
        }

        document.getElementById("qad-load-more").onClick = loadMore;
    }());
    </script>
    </body>
    </html>

qad-load.php

    <?php
        require_once $_SERVER['DOCUMENT_ROOT'] . '/logger.php';
        require_once $_SERVER['DOCUMENT_ROOT'] . '/db.php';

        try {
            $db = DB::getConn();
            // $stmt = $db->query('SELECT * FROM qad_quote ORDER BY quote_id DESC LIMIT 10;');
            $stmt = $db->query('SELECT * FROM qad_quote ORDER BY rand() LIMIT 10;');
            $nameStmt = $db->prepare("SELECT * FROM qad_author WHERE author_id = :author_id;");
            while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
                echo '<div class="quote">';
                $nameStmt->execute(array(":author_id" => $row["author"]));
                $rowName = $nameStmt->fetch(PDO::FETCH_ASSOC);
                echo "<q>", $row["quote"], "</q><br>";
                echo "<em>", $rowName["name"], "</em><br>";
                echo '</div>';
            }
        } catch (PDOException $ex) {
            Logger::alert($ex->getMessage());
            echo '<br><b>Error while loading quotes.</b><br>';
        }
    ?>

db.php

    <?php
    class DB
    {
        private static $conn = null;

        public static function getConn()
        {
            if (is_null(self::$conn)) {
                self::$conn = new PDO('mysql:host=localhost;dbname=semicode_db;charset=utf8',
                    'non-admin-user', // user
                    '123456',         // pwd
                    array(PDO::ATTR_EMULATE_PREPARES => false,
                          PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
                );
            }
            return self::$conn;
        }
    }
    ?>
Form submit and data display
javascript;php;html;form
For a first project, your code is pretty good. The structure and formatting are fine, and your code is generally quite readable. You also didn't make any really bad beginners mistakes like having SQL injection vulnerabilities, etc.

- test_input isn't a great function. I wrote about it already a couple of times, so I'll just summarize here: It just applies some semi-random functions which mangle your data. It doesn't apply proper input filtering and is not a recommended approach to security. Right now, it does protect you from XSS, but the recommended approach is to encode variables when echoing them, not when retrieving them (it keeps your data clean, and it is secure no matter where the data comes from). If you want additional input filtering, consider a different approach.
- don't shorten variable names, it makes code hard to read. What's aName for example? I have no idea. If you write authorName, it's immediately clear what is meant. Same for qad, xErr, etc.
- either use snake_case or camelCase. Mixing both makes code harder to read.
- fail doesn't do anything except set success to false. It would be clearer to just do that directly.
- some more functions could help the structure of your code and make it more readable and reusable. Examples may be insertQuote($db, $quote, $author) or insertAuthor($db, $author).
- your HTML could use some improvements. For example, you might want to add labels for your input. You might also want to use fieldset instead of table.
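To make the first and fifth points concrete, a sketch of escaping at output time and of the suggested insertAuthor helper (the helper body is illustrative, not a drop-in rewrite of your code):

    <?php
    // Encode when echoing, not when reading input:
    echo "<q>", htmlspecialchars($row["quote"], ENT_QUOTES, 'UTF-8'), "</q>";

    // Returns the id of an existing author, inserting a new row if needed.
    function insertAuthor(PDO $db, $authorName)
    {
        $stmt = $db->prepare("SELECT author_id FROM qad_author WHERE name = :name");
        $stmt->execute(array(':name' => $authorName));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row !== false) {
            return $row['author_id'];
        }
        $stmt = $db->prepare("INSERT INTO qad_author (name) VALUES (:name)");
        $stmt->execute(array(':name' => $authorName));
        return $db->lastInsertId();
    }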
_unix.183141
I'm running a long copy of .gz files from one server to another. At the same time, I'd like to unzip the files already copied: for example, all filenames that start with a through filenames starting with c. How can I do this?
Gunzip in range of files
gzip
The easiest way to do this is to use your shell. Assuming a relatively modern shell (such as bash), you can do

    gunzip [a-c]*gz
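If the copied files are spread over subdirectories, a find-based variant should also work (standard POSIX find; gunzip accepts multiple files):

    find . -name '[a-c]*.gz' -exec gunzip {} +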
_unix.108409
My computer crashes while trying to load the kernel from a LiveCD, with the message "unable to handle kernel paging request". I'd like to reproduce the error output on a forum to get some ideas on where to dig, but I can't figure out how to save that trace except by taking a picture with a camera. Is there some option to save the output on a hard drive, or at least to allow scrolling back to the beginning of the error? I currently have Windows and Ubuntu 9.10 installed, and I'm trying to install Ubuntu 12.04. The same error happens with other Linux distributions (openSUSE 13.1, Linux Mint 16).
How to write a log before installation
logs;system installation
null
_cs.64643
The classical 0,1 knapsack problem with weights $w$ and unit value for all items $x$:

$$\max \sum_{i} x_i, \qquad x_i \in \{0,1\}$$

subject to

$$\sum_{i} w_i x_i \leq W$$

for a maximum weight $W$.

Recall that the weights in this version can be negative: we will simply select all items with zero or negative weight and then solve for the remaining items with positive weight.[1]

Now suppose we also want to limit the maximum weight on the negative side:

$$\left|\sum_{i} w_i x_i\right| \leq W$$

We can quickly see that this problem is still hard when we set $W=0$, as this would answer subset-sum. What is the name of the problem where we have constraints on the absolute weight? As I encounter instances of this problem often, I would like to read up on it and possible heuristics. I suspect that it has long been listed as a variant of subset-sum or knapsack, but I fail to find this exact problem.

[1] $\sum_{i} \{ w_i x_i \mid w_i \in w^+ \} \leq W - \sum_{i} \{ w_i \mid w_i \in w^- \}$
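Note that the absolute-value constraint above is just shorthand for a pair of linear constraints, so the model itself remains an integer linear program:

$$-W \;\leq\; \sum_{i} w_i x_i \;\leq\; W$$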
Subset sum with wider constraint
optimization;reference request;knapsack problems;integer programming
null
_codereview.162627
I struggled to figure out how to clear a textbox after submitting its contents via a button control. I would like to know if my solution is an accepted practice for initializing a control after an action has been made by a user. Is embedding event data (i.e. sendRequested) within a model an accepted practice for updating UI within Elm?

    module Main exposing (..)

    import Html exposing (..)
    import Html.Attributes exposing (..)
    import Html.Events exposing (..)
    import WebSocket

    main =
        program
            { init = init
            , update = update
            , view = view
            , subscriptions = subscriptions
            }

    -- MODEL

    type alias Model =
        { input : String
        , messages : List String
        , sendRequested : Bool
        }

    init : ( Model, Cmd Msg )
    init =
        ( Model "" [] False, Cmd.none )

    -- UPDATE

    type Msg
        = Input String
        | Send
        | NewMessage String

    update : Msg -> Model -> ( Model, Cmd Msg )
    update msg { input, messages, sendRequested } =
        case msg of
            Input newInput ->
                ( Model newInput messages False, Cmd.none )

            Send ->
                ( Model "" messages True, WebSocket.send "ws://echo.websocket.org" input )

            NewMessage message ->
                ( Model input (message :: messages) False, Cmd.none )

    -- SUBSCRIPTIONS

    subscriptions : Model -> Sub Msg
    subscriptions model =
        WebSocket.listen "ws://echo.websocket.org" NewMessage

    -- VIEW

    view : Model -> Html Msg
    view model =
        let
            inputElement =
                if not model.sendRequested then
                    input [ onInput Input ] []
                else
                    input [ onInput Input, value "" ] []
        in
            div []
                [ div [] (List.map viewMessage model.messages)
                , inputElement
                , button [ onClick Send ] [ text "Send" ]
                ]

    viewMessage : String -> Html Msg
    viewMessage message =
        div [] [ text message ]
Clear out a textbox after submitting its content
functional programming;elm
Since you're already keeping track of the <input>'s value in your model, it would be cleaner to use that as the value in your view, too.

    view : Model -> Html Msg
    view model =
        div []
            [ div [] (List.map viewMessage model.messages)
            , input [ onInput Input, value model.value ] []
            , button [ onClick Send ] [ text "Send" ]
            ]

This way, your view is a pure representation of your model, and your model is kept in sync with user input.

I took the liberty of doing a quick pass over your code, additionally introducing record update syntax in your update function, as opposed to constructing a fresh record from scratch. Adding or removing a field from a record needn't result in refactoring every single place you happen to modify the values of that record.
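A sketch of what that update might look like with record update syntax (it assumes the field was renamed to value, as in the view above, and that sendRequested was dropped):

    update : Msg -> Model -> ( Model, Cmd Msg )
    update msg model =
        case msg of
            Input newValue ->
                ( { model | value = newValue }, Cmd.none )

            Send ->
                -- clear the field and send the old value in one step
                ( { model | value = "" }
                , WebSocket.send "ws://echo.websocket.org" model.value
                )

            NewMessage message ->
                ( { model | messages = message :: model.messages }, Cmd.none )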
_webapps.14140
From time to time I see strange Twitter handles following me on Twitter. I generally ignore them and don't follow them back (mostly I don't even look them up). What could the intention of these be, other than expecting me to follow them or the links they post? Is it safe to ignore them (and let them continue following me)?
Bots/strange followers on twitter: any problems if I ignore them
twitter;followers
null
_vi.6995
I'm using vim-plug for managing plugins, primarily for its lazy-loading capabilities. This lets me do:

    Plug 'Valloric/YouCompleteMe', { 'do': 'python2 ./install.py --clang-completer --gocode-completer --tern-completer', }

I don't have YCM on all systems where I do have my .vim present and updated, and sometimes I'm not interested in having it present or installed: Supertab suffices for my needs in these cases, installing YCM takes too much time, and YCM itself might be too heavyweight.

Now, I can skip the build step, since it is a shell command and I can probably do something like 'do': '[[ -z $INSTALL_YCM ]] && python2 '. Can I get vim-plug to skip cloning the YCM plugin altogether? It has a bunch of submodules (and submodules within submodules; it's turtles all the way down), and downloading all that would just be a waste of time and bandwidth.

YCM is just an example. As I add more plugins (I have 17 now, and as I understand, some users have twice as many), I would be interested in skipping plugins on systems where they have no use at all. So, don't focus on YCM.
Can I get vim-plug to skip installing a plugin, based on some Vimscript condition?
plugin vim plug
As noted in the comments, you can use Vimscript within the vim-plug block, so I ended up checking for particular commands to control installation of plugins. For example, I rarely have cmake installed; I usually install it for YCM, so an executable('cmake') check is good enough for that. Now, my vim-plug block has one section for common plugins, and a set of checks for those which are useful in specific cases:

    call plug#begin()

    " Common plugins
    Plug 'vim-scripts/diffchar.vim'
    Plug 'scrooloose/nerdtree'
    Plug 'ervandew/supertab'
    Plug 'scrooloose/syntastic'
    Plug 'tpope/vim-surround'
    Plug 'bling/vim-airline'
    Plug 'tpope/vim-fugitive'
    Plug 'tomasr/molokai'
    Plug 'ctrlpvim/ctrlp.vim'
    Plug 'gabrielelana/vim-markdown', {'for': 'markdown'}
    Plug 'majutsushi/tagbar', {'for': ['cpp', 'c', 'go', 'sh', 'js']}
    Plug 'godlygeek/tabular'

    if executable('cmake')
        " YCM command lifted from vim-plug readme
        Plug 'Valloric/YouCompleteMe', { 'do': YCMInstallCmd(), 'for': ['cpp', 'c', 'go', 'sh', 'js', 'vim'] }
        autocmd! User YouCompleteMe if !has('vim_starting') | call youcompleteme#Enable() | endif
    endif

    if executable('go')
        Plug 'fatih/vim-go', {'for': 'go'}
    endif

    if executable('latex')
        Plug 'lervag/vimtex', {'for': 'tex'}
    endif

    if executable('ghc')
        Plug 'dag/vim2hs', {'for': 'hs'}
    endif

    if executable('man')
        Plug 'murukeshm/vim-manpager'
    endif

    if executable('dpkg')
        Plug 'vim-scripts/deb.vim'
    endif

    if executable('logrotate')
        Plug 'moon-musick/vim-logrotate'
    endif

    call plug#end()
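The vim-plug wiki also suggests a small Cond() helper for cases like this. Note the trade-off: unlike the plain if blocks above, it keeps the plugin registered (so it is still cloned, and :PlugClean won't delete it) but never loads it:

    " Helper based on the vim-plug FAQ: pass the options through only when the
    " condition holds, otherwise register the plugin but never trigger loading.
    function! Cond(cond, ...)
      let opts = get(a:000, 0, {})
      return a:cond ? opts : extend(opts, { 'on': [], 'for': [] })
    endfunction

    Plug 'fatih/vim-go', Cond(executable('go'), { 'for': 'go' })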
_webmaster.41166
How do I integrate a premium responsive HTML template into Umbraco? I have been wondering whether there is any set of standards, or any additional code, needed to integrate my premium HTML web template into the Umbraco CMS.
How to integrate premium responsive html template to umbraco?
html;cms;template;integration
Umbraco uses standard HTML markup, so you should be able to use pretty much any HTML template; responsive design is just clever styling, and with correct markup the template you have will work without problems.

You should take a look at:

http://umbracohosting.com/help-and-support/video-tutorials/introduction-to-umbraco/sitebuilder-introduction/templates
http://www.blogfodder.co.uk/2010/5/13/building-your-first-umbraco-site-from-scratch

There are a few videos on Blogfodder which are really easy to follow, and these should help you adapt the HTML template you have to work with Umbraco.
_unix.333757
I am currently trying to make an autonomous drone using the Robot Operating System (ROS). To do this, I have installed Raspbian Lite (Jessie) on a Raspberry Pi 3 and am currently using ROS Kinetic on it. Because it is Raspbian Lite, no window managers or desktop environments came with the installation. I decided to go with the Openbox window manager and installed a terminal onto it for convenience. I can just call sudo startx, and the window manager opens up, which can be accessed by Ctrl + Alt + F2. Now my question lies in the fact that I do not understand the process of creating new sessions within the system-wide terminal. Is it called the system-wide terminal to start with? What are these sessions that I am invoking with the use of Ctrl + Shift + F? Some of them accommodate display managers and some of them accommodate terminals, while I imagine that a whole desktop environment could be accommodated too. Is there a man page that I can look into?
What is switching environments on the system wide terminal called?
terminal;xorg;window manager;openbox
They are kernel virtual terminal devices, multiplexed onto the physical framebuffer and human-input devices by a terminal emulator program that is built into the kernel itself. To applications programs running on top of the kernel, they look like any other terminal devices, such as a serial terminal device. (They have a line discipline, but no modem control.)

The system implements terminal login by dint of running a getty program (or equivalent) and a login program that accept user credentials and invoke login sessions.

The X server program also needs to use the physical framebuffer and human-input devices. It needs to negotiate sharing them with the kernel terminal emulator. It does so by allocating one virtual terminal and telling the kernel to disconnect that from the kernel terminal emulator. Hence why it appears that the X server runs on a particular terminal. When the kernel terminal emulator sees the hotkey chord for switching to the allocated virtual terminal, it cedes control of the framebuffer and human-input devices to the X server. When the X server sees the hotkey chord for switching to another virtual terminal, the X server cedes control back. These hotkey chords are not necessarily symmetrical. On one of my systems the hotkey chord implemented by the kernel terminal emulation program for switching to virtual terminal #2 is Alt+F2, whereas the hotkey chord implemented by the X server for the same action is Ctrl+Alt+F2.

When it comes to graphical login, a display manager handles starting up X servers with greeter programs. You're just starting an X server directly and not using a display manager, of course. Once the user credentials have been authenticated, a desktop manager displays a desktop environment, which comprises a set of X client applications of varying degrees of complexity. For complex desktop environments, there is a whole bunch of server programs interconnected via a desktop bus. (On one of my systems, the so-called small and lightweight GNOME Editor requires a D-BUS broker and nine other server programs to be running.)

Some of those X client programs can be other terminal emulators, userspace ones, such as LXTerminal, Unicode RXVT, GNOME Terminal, Terminate, roxterm, evilvte, xterm, and so forth. These do not directly use the physical framebuffer and human-input devices, and they make use of pseudo-terminal devices.

Further reading

- https://superuser.com/a/723442/38062
- https://unix.stackexchange.com/a/316279/5132
- https://unix.stackexchange.com/a/194218/5132
- https://unix.stackexchange.com/a/178807/5132
- https://stackoverflow.com/a/39302351/340790
_unix.218703
Edit: using Amazon Linux:

    Linux ip-xx-xx-xx-xxx 3.14.44-32.39.amzn1.x86_64 #1 SMP Thu Jun 11 20:33:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Output of this command:

    date +"%Y-%m-%d %H:%M:%S" --date="2015-07-27 00:11:22"
    2015-07-27 00:11:22

Output of this command:

    date +"%Y-%m-%d %H:%M:%S" --date="2015-07-27 00:11:22 + 00:05"
    2015-07-27 00:06:22

Output of this command:

    date +"%Y-%m-%d %H:%M:%S" --date="2015-07-27 00:11:22 - 00:05"
    2015-07-27 00:16:22

OMFG is the time flowing backwards? Or an 8th layer issue?

Edit: My Amazon Linux is at UTC+00:00 while my Ubuntu local machine is at UTC-05:00. The same commands in Ubuntu show:

    2015-07-26 19:11:22
    2015-07-26 19:06:22
    2015-07-26 19:16:22

respectively, so "+ 00:05" does not seem to be the time zone.
The arithmetic operations in bash date... are reversed?
date
According to the source of parse-datetime.y in gnulib, this seems to be a feature related to the function time_zone_hhmm.

First, we cannot use your format to add seconds: with

    date +"%Y-%m-%d %H:%M:%S" --date="2015-07-27 00:11:22 - 00:05:01"

I'm getting a parse error:

    date: invalid date '2015-07-27 00:11:22 - 00:05:01'

Then, according to the header of the time_zone_hhmm function:

    /* Convert a time zone expressed as HH:MM into an integer count of
       minutes.  If MM is negative, then S is of the form HHMM and needs
       to be picked apart; otherwise, S is of the form HH.  As specified in
       http://www.opengroup.org/susv3xbd/xbd_chap08.html#tag_08_03, allow
       only valid TZ range, and consider first two digits as hours, if no
       minutes specified.  */

So it looks like you're modifying the timezone, which matches the behavior of inverted add/remove time.

Should you want to perform arithmetic on time, you'd better use Epoch timestamps (date +'%s'), which are easier to perform calculations on. Moreover, they are based on UTC time, which does not take the local timezone of the server you're working on into account. A timestamp will thus return the same result in the USA, Asia or Europe. You then use this timestamp to display human-readable time, which takes the timezone into account.

Anyway, nice catch. I would never have dug into the gnulib source without this nice trick!
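For example, with GNU date (reusing the timestamp from the question, subtracting 5 minutes):

    ts=$(date --date="2015-07-27 00:11:22" +%s)        # seconds since the Epoch, UTC-based
    date --date="@$((ts - 5 * 60))" "+%Y-%m-%d %H:%M:%S"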
_unix.249101
My start menu shortcut for the JetBrains IntelliJ IDEA 15 IDE is no longer working on my Linux Mint 17.3 x64. When I try to use it, nothing happens. I tried to delete the shortcut and use the Create Desktop Entry function of the program, but that didn't work, and I also tried to recreate the shortcut manually. I can only start the application by running sudo sh install_path/bin/idea.sh in the terminal. My shortcut code is:

    [Desktop Entry]
    Version=1.0
    Type=Application
    Name=IntelliJ IDEA
    Icon=/usr/local/InteliJ_IDEA/idea-IU-143.381.42/bin/idea.png
    Exec=/usr/local/JetBrains/idea-IU-143.381.42/bin/idea.sh %f
    Comment=Develop with pleasure!
    Categories=Development;IDE;
    Terminal=false
    StartupWMClass=jetbrains-idea

How can I make my shortcut work again?
IntelliJ IDEA desktop entry on Linux Mint
linux mint
null
_unix.42888
I'm looking for a command-line Linux password-manager. It appears that pwsafe is the program most commonly used. However, its latest update was in 2005. Is it a good idea to use this? Has it been so stable that no updates are necessary, or am I risking using outdated software for an important task like password management?
Is it a good idea to use pwsafe - a password manager not updated since 2005?
password
IMO it would be better to use a plain text file with a structured format and then GPG or OpenSSL encrypt it. Or even just plain vim encryption.

The advantage of OpenSSL/GPG over vim is that you can decrypt it on the fly and grep the output.

Here's a nice write-up for using vim as a password safe.
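For example, with GnuPG something along these lines works (symmetric encryption; filenames are just placeholders):

    # encrypt the plain text file (prompts for a passphrase)
    gpg --symmetric --cipher-algo AES256 -o passwords.gpg passwords.txt

    # decrypt on the fly and grep, without writing the plain text to disk
    gpg --decrypt passwords.gpg 2>/dev/null | grep -i github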
_cs.41249
I'm new to the subject of automata theory. I get most of the stuff, but I cannot figure out when to change the state of the machine, and when to keep the state unchanged, for a particular transition operation. An explanation would be appreciated. Thanks!

Also, my understanding of states is that they're basically the indicators in the machine which let it know which operation is going on. Is this the correct analogy? If not, can you give a better one?

Edit: Regarding finite automata, I can understand when to change the state. But I'm confused about pushdown automata. Do you change the state only during push and pop, and during the introduction of a new character? Or are there more reasons to change the state?
When do we need to change the state of a pushdown automaton?
automata;pushdown automata
null
_unix.266517
According to the Filesystem Hierarchy Standard the /bin directory should contain utilities needed in single user mode. In practice, many Linux distributions make the directory a symbolic link to /usr/bin. Similarly, /sbin is nowadays often a symbolic link to /usr/bin as well.What's the rationale behind the symlinks?
Why is /bin a symbolic link to /usr/bin?
linux;symlink;directory structure;fhs
Short summary of the page suggested by don_crissti:

Scattering utilities over different directories is no longer necessary, and storing them all in /usr/bin simplifies the file system hierarchy. Also, the change makes Unix and Linux scripts / programmes more compatible.

Historically, the utilities in the /bin and /sbin directories were used to mount the usr partition. This job is nowadays done by initramfs, and splitting the directories therefore no longer serves any purpose. The simplified file system hierarchy means, for instance, that distributions no longer need to fix paths to binaries (as they're now all in /usr/bin).
_datascience.15080
I'm new on this website, so hello everybody :)

My question: I have a huge amount of viewing data from STBs (set-top boxes). For example:

    [1024532, 11/6/2016, 12:03, 2, 65]

means that the STB with ID 1024532 watched channel 65 for 2 minutes on 11/6/2016, starting at 12:03. I have about 1e9 records like that. Let's talk, for the sake of the question, about the data of just one STB, say STB 1024532: I have about 30000 viewing events from this STB over one year (2015).

I also have demographic data; for example, I know that in the household of this STB there are 4 members. What I'm trying to do is to cluster the viewings, based on the click events, into separate viewers. For example, one viewer has a tuning event every 2 minutes on average while another viewer makes a tuning event every 20 minutes on average, so in that case I can tell that this is viewer A and that is viewer B. Also, I can use the channels they are watching to give a better prediction. So what I have is many, many viewing events, which I divide into groups separated by 4 or more hours of no events on the TV, and I need to associate every group of viewing events with a viewer in the household. I have a few million STBs, so of course I can separate my data into train & test. So my question is what features should I extract from every group of viewing events, and what algorithm should I apply to create the classification model. (I hope I was clear and that my English was not too broken; English is not my first language, Hebrew is.)
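To make the inter-event idea concrete, here is roughly how I picture computing such a feature per viewing group (just a sketch with pandas; all column names are made up):

    import pandas as pd

    # one row per tuning event, with made-up column names
    events = pd.DataFrame({
        "stb_id":    [1024532, 1024532, 1024532],
        "timestamp": pd.to_datetime(["2015-06-11 12:03",
                                     "2015-06-11 12:05",
                                     "2015-06-11 12:25"]),
        "channel":   [65, 12, 65],
    })

    events = events.sort_values(["stb_id", "timestamp"])
    # gap to the previous event of the same STB, in minutes
    events["gap_min"] = (events.groupby("stb_id")["timestamp"]
                               .diff().dt.total_seconds() / 60)
    # a new viewing group starts after >= 4 hours of silence
    events["session"] = (events["gap_min"] >= 240).cumsum()

    # candidate features per group: mean time between tunings, channel variety
    features = events.groupby(["stb_id", "session"]).agg(
        mean_gap_min=("gap_min", "mean"),
        n_channels=("channel", "nunique"),
    )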
Detecting viewing patterns on a large dataset
classification;clustering;predictive modeling;bigdata
null
_datascience.16292
I was recently thinking about the memory cost of (a) training a CNN and (b) inference with a CNN. Please note that I am not talking about the storage (which is simply the number of parameters).

How much memory does a given CNN (e.g. VGG-16 D) need for (a) training (with ADAM) and (b) inference on a single image?

My thoughts

Basically, I want to make sure that I didn't forget anything with this question. If you have other sources which explain this kind of thought, please share them with me.

(a) Training

For training with ADAM, I will assume that I have a mini-batch size of $B \in \mathbb{N}$ and that $w \in \mathbb{N}$ is the number of parameters of the CNN. Then the memory footprint (the maximum amount of memory I need at any point while training) for a single training pass is:

- $2w$: keep the weights and the weight updates in memory
- $B \cdot$ (size of all generated feature maps): forward pass
- $w$: gradients for each weight (backpropagation)
- $w$: learning rates for each weight (ADAM)

(b) Inference

In inference, it is not necessary to store a feature map of layer $i-1$ if the feature maps of layer $i$ are already calculated. So the memory footprint during inference is:

- $w$: the model
- the two most expensive successive layers (the one which is already calculated, and the next one which gets calculated)
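To make my accounting concrete, a small sketch in plain Python (it follows my own bookkeeping above, which counts one extra per-weight buffer for ADAM rather than ADAM's usual two moment estimates):

    def training_footprint_bytes(num_params, activation_floats_per_image,
                                 batch_size, bytes_per_float=4):
        weights_and_updates = 2 * num_params                     # weights + updates
        activations = batch_size * activation_floats_per_image  # all feature maps, forward pass
        gradients = num_params                                   # backpropagation
        adam_state = num_params                                  # per-weight learning rates
        return bytes_per_float * (weights_and_updates + activations
                                  + gradients + adam_state)

    def inference_footprint_bytes(num_params, largest_two_layers_floats,
                                  bytes_per_float=4):
        # the model plus the two most expensive successive feature maps
        return bytes_per_float * (num_params + largest_two_layers_floats)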
What is the memory cost of a CNN?
convnet
null
_webapps.73312
If you search for a term on DDG: https://duckduckgo.com/?q=asdfand then click on a search result, you will notice that it goes through a redirect: https://duckduckgo.com/l/?kh=-1&uddg=http%3A%2F%2Fwww.asdf.com%2FWhy does DDG do this? And is this a privacy concern?
Are DuckDuckGo redirects a privacy issue?
privacy;duckduckgo
null
_unix.385986
I have a problem cleaning up my boot partition. I have tried the usual commands (sudo apt autoremove --purge, sudo purge-old-kernels etc.) but with no success. It seems that the files in there are not being recognised as old kernels.

    ls /boot
    abi-4.10.0-26-lowlatency         memtest86+.bin
    config-4.10.0-26-lowlatency      memtest86+.elf
    grub                             memtest86+_multiboot.bin
    initrd.img-4.10.0-26-lowlatency  System.map-4.10.0-26-lowlatency
    lost+found                       vmlinuz-4.10.0-26-lowlatency

    dpkg --list 'linux-image*'
    Desired=Unknown/Install/Remove/Purge/Hold
    | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
    |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
    ||/ Name           Version      Architecture Description
    +++-==============-============-============-=================================
    un  linux-image    <none>       <none>       (no description available)
    ii  linux-image-4. 4.10.0-26.30 amd64        Linux kernel image for version 4.
    in  linux-image-4. <none>       amd64        (no description available)

Can anyone tell me how to remove these 'no description' packages?

Output from df:

    mick@mick-OptiPlex-745:~$ df
    Filesystem                          1K-blocks     Used Available Use% Mounted on
    udev                                   994400        0    994400   0% /dev
    tmpfs                                  203516     6456    197060   4% /run
    /dev/mapper/ubuntu--studio--vg-root 151130308 87935772  55494468  62% /
    tmpfs                                 1017564     5136   1012428   1% /dev/shm
    tmpfs                                    5120        4      5116   1% /run/lock
    tmpfs                                 1017564        0   1017564   0% /sys/fs/cgroup
    /dev/sda1                              482922   423090     34898  93% /boot
    tmpfs                                  203516       16    203500   1% /run/user/121
    tmpfs                                  203516       68    203448   1% /run/user/1000
    mick@mick-OptiPlex-745:~$
Boot partition 93% full with only one kernel unpacked
ubuntu;boot;apt;package management;dpkg
null
_vi.8621
I have a list of files:

    ./a.temp.txt ./a.temp.txt
    ./a/b.temp.txt ./a/b.temp.txt
    ./a/b/c.temp.txt ./a/b/c.temp.txt

And I want to remove the temp. on each line, but only the second occurrence; thus, the file should look like:

    ./a.temp.txt ./a.txt
    ./a/b.temp.txt ./a/b.txt
    ./a/b/c.temp.txt ./a/b/c.txt

How should I do this?
Substitute second occurrence on line
regular expression;substitute
In general you can match the Nth occurrence of something using \zs and \{N}. There's an example given at :help \zs.

In your case the command would be:

    :%s/\(.\{-}\zstemp\.\)\{2}//
_unix.273968
I am encountering an error when I run apt-get upgrade. It appears I have a package installed that has unmet dependencies.

uname -a output:

    Linux kbu 3.19.0-56-generic #62~14.04.1-Ubuntu SMP Fri Mar 11 11:03:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Here is the output from apt-get upgrade:

    Reading package lists...
    Building dependency tree...
    Reading state information...
    You might want to run 'apt-get -f install' to correct these.
    The following packages have unmet dependencies:
     libefl-bin : Depends: libefl (= 201604022131-32022~ubuntu14.04.1) but 201603242131-31876~ubuntu14.04.1 is installed
    E: Unmet dependencies. Try using -f.

Here is the output from running apt-get -f install:

    Reading package lists...
    Building dependency tree...
    Reading state information...
    Correcting dependencies... Done
    The following extra packages will be installed:
      libefl
    The following packages will be upgraded:
      libefl
    1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
    1 not fully installed or removed.
    Need to get 3,032 kB of archives.
    After this operation, 3,278 kB of additional disk space will be used.
    Get:1 http://ppa.launchpad.net/enlightenment-git/ppa/ubuntu/ trusty/main libefl amd64 201604022131-32022~ubuntu14.04.1 [3,032 kB]
    Fetched 2,873 kB in 14s (193 kB/s)
    (Reading database ... 217178 files and directories currently installed.)
    Preparing to unpack .../libefl_201604022131-32022~ubuntu14.04.1_amd64.deb ...
    Unpacking libefl (201604022131-32022~ubuntu14.04.1) over (201603242131-31876~ubuntu14.04.1) ...
    dpkg: error processing archive /var/cache/apt/archives/libefl_201604022131-32022~ubuntu14.04.1_amd64.deb (--unpack):
     trying to overwrite '/usr/lib/libelementary.so.1.17.99', which is also in package libelementary 201603242216-12490~ubuntu14.04.1
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
     /var/cache/apt/archives/libefl_201604022131-32022~ubuntu14.04.1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

...and the output from apt-cache showpkg libefl:

    Package: libefl
    Versions:
    201604022131-32022~ubuntu14.04.1 (/var/lib/apt/lists/ppa.launchpad.net_enlightenment-git_ppa_ubuntu_dists_trusty_main_binary-amd64_Packages)
     Description Language:
                     File: /var/lib/apt/lists/ppa.launchpad.net_enlightenment-git_ppa_ubuntu_dists_trusty_main_binary-amd64_Packages
                      MD5: c3762b13d2835617f77263b388ba31ad
     Description Language: en
                     File: /var/lib/apt/lists/ppa.launchpad.net_enlightenment-git_ppa_ubuntu_dists_trusty_main_i18n_Translation-en
                      MD5: c3762b13d2835617f77263b388ba31ad

    201603242131-31876~ubuntu14.04.1 (/var/lib/dpkg/status)
     Description Language:
                     File: /var/lib/dpkg/status
                      MD5: cc1a6fd5b0ea2294658de3cd7b486a53

    Reverse Depends:
      libefl:i386,libefl
      e20,libefl
      libefl-dbg,libefl 201604022131-32022~ubuntu14.04.1
      libefl-dev,libefl 201604022131-32022~ubuntu14.04.1
      libefl-bin,libefl 201604022131-32022~ubuntu14.04.1
      terminology,libefl
      evas-loaders,libefl
      emodule-comp-scale,libefl
      ecomorph,libefl
      emodule-exebuf,libefl
      emodule-everything-websearch,libefl
      emodule-everything-wallpaper,libefl
      emodule-everything-tracker,libefl
      emodule-everything-places,libefl
      emodule-everything-pidgin,libefl
      emodule-everything-mpris,libefl
      emodule-everything-aspell,libefl
      emodule-engage,libefl
      emodule-empris,libefl
      emodule-elfe,libefl
      libelementary,libefl
      libedbus1,libefl
      emodule-wlan,libefl
      emodule-winselector,libefl
      emodule-winlist-ng,libefl
      emodule-weather,libefl
      emodule-uptime,libefl
      emodule-taskbar,libefl
      emodule-snow,libefl
      emodule-slideshow,libefl
      emodule-rain,libefl
      emodule-moon,libefl
      emodule-mem,libefl
      emodule-mail,libefl
      emodule-itask,libefl
      emodule-flame,libefl
      emodule-execwatch,libefl
      emodule-diskio,libefl
      emodule-deskshow,libefl
      emodule-cpu,libefl
      emodule-alarm,libefl
      ecomorph-core,libefl
      libelementary-bin,libefl
      e17,libefl
    Dependencies:
    201604022131-32022~ubuntu14.04.1 - libbulletcollision2.81 (0 (null)) libbulletdynamics2.81 (0 (null)) libbulletsoftbody2.81 (0 (null)) libc6 (2 2.17) libdbus-1-3 (2 1.5.12) libfontconfig1 (2 2.9.0) libfreetype6 (2 2.2.1) libfribidi0 (2 0.19.2) libgcc1 (2 1:4.1.1) libgif4 (2 4.1.4) libgl1-mesa-glx (16 (null)) libgl1 (0 (null)) libglib2.0-0 (2 2.37.3) libgstreamer-plugins-base1.0-0 (2 1.0.0) libgstreamer1.0-0 (2 1.0.0) libharfbuzz0b (2 0.9.4) libjpeg8 (2 8c) liblinearmath2.81 (0 (null)) libluajit-5.1-2 (0 (null)) libmount1 (2 2.20.1) libpng12-0 (2 1.2.13-4) libpulse0 (2 1:0.99.1) libsndfile1 (2 1.0.20) libssl1.0.0 (2 1.0.0) libstdc++6 (2 4.1.1) libtiff5 (2 4.0.3) libudev1 (2 199) libwebp5 (0 (null)) libx11-6 (2 2:1.2.99.901) libxcomposite1 (2 1:0.3-1) libxcursor1 (4 1.1.2) libxdamage1 (2 1:1.1) libxext6 (0 (null)) libxfixes3 (0 (null)) libxi6 (2 2:1.2.99.4) libxinerama1 (0 (null)) libxrandr2 (2 2:1.2.99.3) libxrender1 (0 (null)) libxss1 (0 (null)) libxtst6 (0 (null)) zlib1g (2 1:1.1.4) libecore-con1 (0 (null)) libecore-con1:i386 (0 (null)) libecore-evas1 (0 (null)) libecore-evas1:i386 (0 (null)) libecore-fb1 (0 (null)) libecore-fb1:i386 (0 (null)) libecore-file1 (0 (null)) libecore-file1:i386 (0 (null)) libecore-imf1 (0 (null)) libecore-imf1:i386 (0 (null)) libecore-ipc1 (0 (null)) libecore-ipc1:i386 (0 (null)) libecore-x1 (0 (null)) libecore-x1:i386 (0 (null)) libecore0 (0 (null)) libecore0:i386 (0 (null)) libecore1 (0 (null)) libecore1:i386 (0 (null)) libedbus2 (0 (null)) libedbus2:i386 (0 (null)) libedje1 (0 (null)) libedje1:i386 (0 (null)) libeet0 (0 (null)) libeet0:i386 (0 (null)) libeet1 (0 (null)) libeet1:i386 (0 (null)) libeeze1 (0 (null)) libeeze1:i386 (0 (null)) libefreet1 (0 (null)) libefreet1:i386 (0 (null)) libeina0 (0 (null)) libeina0:i386 (0 (null)) libeina1 (0 (null)) libeina1:i386 (0 (null)) libeio0 (0 (null)) libeio0:i386 (0 (null)) libembryo0 (0 (null)) libembryo0:i386 (0 (null)) libemotion1 (0 (null)) libemotion1:i386 (0 (null)) libevas0 (0 (null)) libevas0:i386 (0 (null)) libevas1 (0 (null)) libevas1:i386 (0 (null)) libefl:i386 (0 (null))
    201603242131-31876~ubuntu14.04.1 - libbulletcollision2.81 (0 (null)) libbulletdynamics2.81 (0 (null)) libbulletsoftbody2.81 (0 (null)) libc6 (2 2.17) libdbus-1-3 (2 1.5.12) libfontconfig1 (2 2.9.0) libfreetype6 (2 2.2.1) libfribidi0 (2 0.19.2) libgcc1 (2 1:4.1.1) libgif4 (2 4.1.4) libgl1-mesa-glx (16 (null)) libgl1 (0 (null)) libglib2.0-0 (2 2.37.3) libgstreamer-plugins-base1.0-0 (2 1.0.0) libgstreamer1.0-0 (2 1.0.0) libharfbuzz0b (2 0.9.4) libjpeg8 (2 8c) liblinearmath2.81 (0 (null)) libluajit-5.1-2 (0 (null)) libmount1 (2 2.20.1) libpng12-0 (2 1.2.13-4) libpulse0 (2 1:0.99.1) libsndfile1 (2 1.0.20) libssl1.0.0 (2 1.0.0) libstdc++6 (2 4.1.1) libtiff5 (2 4.0.3) libudev1 (2 199) libwebp5 (0 (null)) libx11-6 (2 2:1.2.99.901) libxcomposite1 (2 1:0.3-1) libxcursor1 (4 1.1.2) libxdamage1 (2 1:1.1) libxext6 (0 (null)) libxfixes3 (0 (null)) libxi6 (2 2:1.2.99.4) libxinerama1 (0 (null)) libxrandr2 (2 2:1.2.99.3) libxrender1 (0 (null)) libxss1 (0 (null)) libxtst6 (0 (null)) zlib1g (2 1:1.1.4) libecore-con1 (0 (null)) libecore-con1:i386 (0 (null)) libecore-evas1 (0 (null)) libecore-evas1:i386 (0 (null)) libecore-fb1 (0 (null)) libecore-fb1:i386 (0 (null)) libecore-file1 (0 (null)) libecore-file1:i386 (0 (null)) libecore-imf1 (0 (null)) libecore-imf1:i386 (0 (null)) libecore-ipc1 (0 (null)) libecore-ipc1:i386 (0 (null)) libecore-x1 (0 (null)) libecore-x1:i386 (0 (null)) libecore0 (0 (null)) libecore0:i386 (0 (null)) libecore1 (0 (null)) libecore1:i386 (0 (null)) libedbus2 (0 (null)) libedbus2:i386 (0 (null)) libedje1 (0 (null)) libedje1:i386 (0 (null)) libeet0 (0 (null)) libeet0:i386 (0 (null)) libeet1 (0 (null)) libeet1:i386 (0 (null)) libeeze1 (0 (null)) libeeze1:i386 (0 (null)) libefreet1 (0 (null)) libefreet1:i386 (0 (null)) libeina0 (0 (null)) libeina0:i386 (0 (null)) libeina1 (0 (null)) libeina1:i386 (0 (null)) libeio0 (0 (null)) libeio0:i386 (0 (null)) libembryo0 (0 (null)) libembryo0:i386 (0 (null)) libemotion1 (0 (null)) libemotion1:i386 (0 (null)) libevas0 (0 (null)) libevas0:i386 (0 (null)) libevas1 (0 (null)) libevas1:i386 (0 (null)) libefl:i386 (0 (null))
    Provides:
    201604022131-32022~ubuntu14.04.1 -
    201603242131-31876~ubuntu14.04.1 -
    Reverse Provides:

My knowledge of the apt package manager is pretty limited, and my searches for more info have not turned up anything useful for dealing with this issue. I have also tried some of the common solutions for fixing unmet dependencies (such as apt-get clean and attempting to upgrade again). Any more insight into what the output of some of these commands means is very much appreciated, if a specific solution is not obvious.

Edit

Output of apt-cache policy libefl-bin libefl:

    libefl-bin:
      Installed: 201604022131-32022~ubuntu14.04.1
      Candidate: 201604022131-32022~ubuntu14.04.1
      Version table:
     *** 201604022131-32022~ubuntu14.04.1 0
            500 http://ppa.launchpad.net/enlightenment-git/ppa/ubuntu/ trusty/main amd64 Packages
            100 /var/lib/dpkg/status
    libefl:
      Installed: 201603242131-31876~ubuntu14.04.1
      Candidate: 201604022131-32022~ubuntu14.04.1
      Version table:
        201604022131-32022~ubuntu14.04.1 0
            500 http://ppa.launchpad.net/enlightenment-git/ppa/ubuntu/ trusty/main amd64 Packages
     *** 201603242131-31876~ubuntu14.04.1 0
            100 /var/lib/dpkg/status

The output of sudo apt-get purge libefl:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     libefl-bin : Depends: libefl (= 201604022131-32022~ubuntu14.04.1) but it is not going to be installed
     libelementary : Depends: libefl but it is not going to be installed
     terminology : Depends: libefl but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
Apt unmet dependencies: libefl-bin
ubuntu;apt;dependencies
I found this thread within minutes of you posting it. It seems this problem has only come up in the last few days, and is related to the Terminology package.

I more or less followed these instructions to get apt working again: http://ubuntuforums.org/showthread.php?t=2319182

I couldn't follow them exactly due to purging libefl, so I first used dpkg to remove terminology, then libelementary, and reinstalled libefl. After that, apt was happy again.
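In shell terms, the sequence described was roughly (a sketch, not a verbatim transcript):

    sudo dpkg --remove terminology
    sudo dpkg --remove libelementary
    sudo apt-get install libefl    # pulls the new libefl back in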
_unix.202884
How can I configure kmscon to use dead keys? I already have xkb-layout=br in /etc/kmscon/kmscon.conf, but the dead keys don't work (' + e gives me e instead of é). The layout seems to have been loaded, since the ç key works. Is it a bug in kmscon, or am I missing something?
kmscon and deadkeys
keyboard;xkb;dead keys
null
_softwareengineering.159140
I am looking at the String class in ActionScript 3.0. It has a property called String.length. Internally it's a getter function (or method?) that returns the length of the string. Why can't it be String.getLength()? Methods can take in 1, 2 or more values, so their significance can be understood. But what significance does a property have? It is, after all, a function only. So, why the categorization into properties? Is it just for adding an overhead botheration to remember that something has been categorized into a property?

In other words, as a programmer, how am I helped when I'm told that String.length is a property of the String class when I can't find any method for the same? While writing a program, how would I know what is a property and what is a method?

I appreciate having someone shed some light on this.

V.
Why categorization into Property and Methods
programming practices;actionscript
String.Length is a read-only property, so it's possibly not the best place to start.

Imagine you have a Person class that needs to store a person's full name.

In Java, you might create a pair of methods, setName() and getName(), to handle this value. That's two methods, required to handle a single value. It's only convention that links those two methods together. Someone who didn't know the convention - say, someone self taught - might instead write AssignName() and Name() methods. It is, after all, just a convention.

By formalising the relationship into something called a property, C# (and other languages) remove the need for developers to know about, and to adhere to, the same convention as everyone else.

Once you have the unified property, there are other benefits. For example, you can pass a property as an out or ref parameter if you want; you can't do that with a pair of methods.

As you look across other programming languages, there are other examples of this approach to be found.

- An Interface is just a pure abstract class.
- A Delegate is just a function as a value.
- MethodMissing() and dynamic are just ways for a class to allow you to call methods that don't exist.
- An event is just the way the language supports the Observer pattern.
- The async and await keywords in C# 5.0 are just another way to write asynchronous code.
- Extension Methods are just static methods.

Except that all of these things are more than what they are "just". They increase the expressiveness of the language involved, allowing better abstractions and more effective code.
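In ActionScript 3 - the language the question asks about - the Person example would look something like this (a minimal sketch):

    package {
        public class Person {
            private var _name:String;

            // read access: person.name
            public function get name():String {
                return _name;
            }

            // write access: person.name = "Alice";
            public function set name(value:String):void {
                _name = value;
            }
        }
    }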
_unix.388340
I researched some examples and came up with the two below, which seem like they should work, but only the first one executes:

    */5 * * * * /data/db/test1.py > /data/db/text.txt && hadoop fs -put -f /data/db/text.txt /tmp/ >/dev/null 2>&1

I have also tried

    */5 * * * * bash -c '/data/db/test1.py > /data/db/text.txt && hadoop fs -put -f /data/db/text.txt /tmp/' >/dev/null 2>&1

If I run both commands separately in a shell, they work just fine.
Multiple cron jobs on one line (running consecutively)
linux;rhel;cron
null
_cogsci.9272
I heard recently that a scientist has developed a retina prosthesis. So I think this is a question that has an answer.Does the retina encode information like a Bitmap image or an SVG image?(A Bitmap is a grid of pixels. An SVG image is vector based, so that it defines a shape and says 'in this region, it's green, in that it's red', so to speak.)
Does the retina encode visual information like a Bitmap or an SVG?
vision;neurophysiology
Short answer

The retinal image corresponds more to a bitmap than to a vector-based image.

Background

The retina contains a layer of about 100 million photoreceptors that are topographically organized. In other words, each photoreceptor codes one specific pixel in the field of view. In turn, the nerve fibers running from the eye to the brain are also topographically organized in the same way the retina is (retinotopy is maintained up until the higher brain centers and is lost only in the inferotemporal cortex). Hence, yes, basically, but grossly oversimplified, I agree with @Krysta that the retinal image more closely corresponds to a bitmap than a vector-based image. Nonetheless, colors, contrasts and edges are coded in the retina. Here is a picture on retinotopy. Basically a 1:1 topographic representation of the retina can be found in the primary visual cortex:

source: University of Leeds and Sydney

Retinal implants are devices that replace the lost photoreceptors in the retina in patients with retinitis pigmentosa or other retinal degenerative diseases. Retinal implants typically consist of a grid of electrodes (60 to about 1500 in commercially available systems) that are placed on (Argus II) or below the retina (alpha-IMS). Here's a fundus photo of an Argus II implant:

The electrodes directly stimulate the surviving neuronal cells in the retina. Retinal implants make use of the retinotopy. An electrode in the center of the retina will elicit a spot of light (a phosphene) in the central field of view, whereas a more peripherally situated electrode will generate phosphenes more eccentrically. Hence, camera images (in the Argus II device) are simply downsampled and projected onto the electrode grid as intensity-coded stimuli. In the alpha-IMS system the intra-ocular part consists of a grid of photosensitive diodes coupled to small amplifiers. Hence, in this device it is, quite literally, a photosensitive chip thrown on the retina. In both approaches, brighter areas in the image receive higher electric currents and result in brighter and/or larger percepts.

Reference
Stronks et al. Exp Rev Med Dev; 11: 23-30
_webmaster.13001
My logs recently showed the following user-agent:samsung-gt-s5620 UNTRUSTED/1.0Why would a user-agent have untrusted in caps in its name?
Untrusted user-agent?
user agent
Possibly because the phone considers the browser an untrusted application.
_codereview.118191
I have a stack of numbers ranging from 1 to 1 million, representing a pile of numbered DVDs. Given the stack in random order, I am to sort the stack by repeatedly taking an element from the stack and pushing it to the top, and then to calculate the minimum number of such pushes necessary. My code is way too inefficient when dealing with large amounts of elements in the stack, and I have a requirement that it must run in less than 4 seconds for any input size < 1 million. Right now it takes 28 seconds to run with an input of 10 000 DVDs, so it's nowhere close to the goal!

How can I make the code more efficient?

    import heapq


    class Node:  # Creates a DVD node
        # right and left correspond to up and down in the stack
        def __init__(self, value):
            self.value = value
            self.right = None
            self.left = None

        def __lt__(self, other):  # necessary for comparing node size in the heap
            return self.value < other.value


    class LinkedQ:  # the skeleton of the stack
        def __init__(self):
            self._first = None
            self._last = None
            self._length = 0

        def length(self):
            return self._length

        def enqueue(self, dvd):
            # Puts the DVD at the end of the queue, i.e. on top of the stack
            ny = Node(dvd)
            if self._first == None:
                self._first = ny
            else:
                if self._last == None:
                    self._first.right = ny
                    self._last = ny
                    self._last.left = self._first
                else:
                    previous = self._last
                    self._last.right = ny
                    self._last = ny
                    self._last.left = previous
            self._length += 1
            return ny

        def push(self, node):  # Push the element to top
            if node.left != None:
                # If node not first. The node can not be at the last spot
                # because of criteria given in investigate_values
                left_node = node.left
                right_node = node.right
                left_node.right = right_node
                right_node.left = left_node
            else:
                right_node = node.right
                right_node.left = None
                self._first = right_node
            node.right = None
            node.left = self._last
            self._last.right = node
            self._last = node

        def isEmpty(self):
            if self._length == 0:
                return True
            else:
                return False


    def investigate_values(elements_to_be_moved, stack, count_movements, remember_used):
        # If a DVD's number is higher than another DVD to the right (above in stack)
        # it must be moved sooner or later to be in correct order.
        # This is the fundamental idea of my entire algorithm.
        last_element = stack._last
        lowest_number = last_element.value
        while last_element:  # O(N)
            if last_element.value > lowest_number:  # Then last_element must be moved
                if not remember_used[last_element.value - 1]:  # A new DVD not already in the heap, O(1)
                    heapq.heappush(elements_to_be_moved, last_element)  # Heapq sorts the heap upon appending, O(logN)
                    count_movements += 1
                    # Every DVD's number is unique so no crashes will occur
                    remember_used[last_element.value - 1] = True
            if last_element.value < lowest_number:
                lowest_number = last_element.value
            last_element = last_element.left
        return elements_to_be_moved, count_movements, remember_used


    def move_elements(stack, elements_to_be_moved, nodes):
        # pushes the DVDs in the right order that gives us a sorted stack
        # in the least amount of movements
        next_item = elements_to_be_moved[0]
        item_value = next_item.value
        heapq.heappop(elements_to_be_moved)  # O(logN)
        first_element = stack._first
        keep_going = True
        stack.push(nodes[item_value])  # O(1)
        return elements_to_be_moved


    def read_data():
        # Reads input
        # Note: Must be done in a specific way; I don't use some variables but they
        # have to be there because of given conditions in the task.
        number_of_input = int(input())  # How many different stacks, each runs independently
        save_data = []
        save_numbers = []
        dict_list = []  # For keeping the nodes
        for instance in range(0, number_of_input):
            how_long = input()  # Ghost variable, not used
            save_data.append((input()).split())
        for element in save_data:
            stack = LinkedQ()
            hashnode = {}
            for number in element:  # O(N)
                number = int(number)
                node = stack.enqueue(number)
                hashnode.update({number: node})  # Unique numbers, no duplicates
            save_numbers.append(stack)
            dict_list.append(hashnode)
        return save_numbers, dict_list


    def main():
        get_data, saved_nodes = read_data()
        for (stack, nodes) in zip(get_data, saved_nodes):
            count_movements = 0  # Counts the pushes executed; this gives the final answer
            remember_used = [False] * (stack._length)
            elements_to_be_moved = []  # Becomes a heap
            keep_sorting = True
            while keep_sorting:
                elements_to_be_moved, count_movements, remember_used = investigate_values(
                    elements_to_be_moved, stack, count_movements, remember_used)
                try:
                    crash_variable = elements_to_be_moved[0]  # If an error occurs the heap is empty and we're done
                except IndexError:
                    print(count_movements)
                    break
                else:
                    elements_to_be_moved = move_elements(stack, elements_to_be_moved, nodes)

    main()
Sort a stack of numbers in ascending order only using push
python;sorting;time limit exceeded;stack;complexity
As always, you can make your code much more efficient by analysing the problem algorithmically before you start coding. Please note that the problem - Kattis DVDs - does not ask you to sort the given input; it asks for the minimum number of steps needed for sorting in the manner described.

The problem is special in two regards. First, the input is a permutation of the numbers 1 through n. No duplicates, no gaps. Every number between 1 and n occurs exactly once. Second, the sorting procedure can only move DVDs to the back of the sequence (top of the stack); this means that the only DVDs which need not be moved are those which are part of the perfectly increasing subsequence that starts with the DVD labelled '1'. Everything else needs to be moved, and in the optimal case (minimal number of sorting steps) each out-of-order DVD will be moved exactly once.

Hence it is trivial to compute the minimum number of moves as asked in the programming challenge, like in this pseudo code:

    int next = 1;
    foreach (int x in input_vector)
        if (x == next)
            ++next;
    result = input_vector.Length - (next - 1);

That takes care of the Kattis challenge, but the real challenge that presents itself here - should one be so inclined - is to actually sort the DVD stack in the described manner, because that turns out to be surprisingly tricky.

One could imagine processing the input in order to generate a list of numbers for a hypothetical robot, where each number k signifies 'pull the kth DVD from the stack and put it on top'. Or one could imagine writing a control program for said robot. A simple strategy that sorts the stack using the minimal number of moves (and a lot of time) would be this:

    find the last DVD which is part of the perfectly increasing subsequence that starts with 1
    set k to the name of this DVD
    while ++k < n:
        find the DVD labelled k, pull it and put it on top

This algorithm is obviously quadratic in n and hence awfully slow.

Building a list of DVDs to be moved based on their initial positions is easy enough and efficiently done. A single pass through the input yields the initial slot indices for all DVDs that are not part of a perfect increasing subsequence that starts at #1, and sorting the pairs of slot index and DVD label into label order isn't rocket science. However, when outputting the list of slot indices for the robot it is necessary to adjust the indices, because pulling a DVD causes all DVDs on top of it to move down by one. In other words, it is necessary to subtract from each index the number of lower slot indices which have already been output at that point. This boils down to a linear scan again, unless a data structure is employed that reduces the effort to log(n) or better.

The hypothetical robot control program could use the same strategy directly, or something analogous; in any case the same subproblem would be encountered. Hence the really interesting element in this challenge is the support structure for counting elements that are less in both label order and slot order.

The ideal structure for this purpose would be an ordered set with efficient rank computation, since the label element of each pair can be implied via query/insert order during the processing. Van Emde Boas trees come to mind, but I've yet to meet anyone who'd program such a beast voluntarily, without a lot of green-backed inducement. A sorted vector would certainly do the job up to the point where insert cost becomes prohibitive, which is somewhere between 10^4 and 10^5. It would essentially replace the effort of the scan with a more efficient memory move, but bigger inputs require something smarter.

In an imperative language the best bang for buck would be a simple insert-only form of a B-tree, or a hybrid like a summarising index on top of a simple bitset. The difficulty with the usual builtin tree-based maps/sets - apart from their poor performance - is the fact that they normally don't expose the underlying tree structure, which makes it impossible to implement the log(n) rank order computation with a few simple tweaks.

A structure that I've used occasionally is a brutally simple ersatz B-tree with just two levels, which is comparatively easy to implement because it is insert-only and elements move only left to right overall. The frequency of inserts into the root vector - and hence the cost of moving stuff around in it - is effectively divided by the size of the leaves, and hence the 'reach' of the simple sorted vector solution gets effectively multiplied by the leaf capacity (modulo average utilisation). The case under consideration is a bit more demanding than what I've used this for (e.g. questions like 'what are the ten million best results out of a few billion') because the top level needs to be scanned linearly towards the nearer end in order to determine the base rank of a given page. Again, the cost of the original linear scan gets effectively divided by the page size. In a competitive situation - in order to achieve unrivalled speed - it might be worth it to increase the number of levels to three, or to switch completely to a full in-memory B-tree with page size equal to the L1 cache line size (64 bytes). The implementation would be much simpler than that for an augmented Red-Black tree and performance better by orders of magnitude. However, the latter variant is probably only viable in imperative languages with precise memory control, like C, C++ or Pascal.
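A direct Python translation of the counting pseudo code above (input handling left out; the example value is my own):

    def min_moves(stack):
        """stack holds a permutation of 1..n, bottom first."""
        next_wanted = 1
        for label in stack:
            if label == next_wanted:
                next_wanted += 1
        return len(stack) - (next_wanted - 1)

    # e.g. min_moves([3, 1, 4, 2]) == 2: only the chain 1, 2 stays put,
    # so DVDs 3 and 4 each get pulled to the top once.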
_unix.236801
I am using CentOS 7, and I had Java 1.7 installed with OpenJDK on my system, but I wanted Java 8, so I removed Java 1.7 with the command:

    yum remove java-1.*

Then I installed Java normally from the tar.gz package. Everything worked normally until I restarted the system. The problem is that whenever I enter the password on my login screen, the screen comes back to the same login screen; it's like a loop: I type the password, enter it, then a blank screen shows for a few seconds and the login screen appears again.

I can't seem to figure out what to do or where to start looking for the problem.
Unable to login to CentOS 7 after reinstalling Java
centos;boot;login;java
null
_unix.335785
I'm trying to figure out a way to copy the current text in a command line to the clipboard WITHOUT touching the mouse. In other words, I need to select the text with the keyboard only.

I found a half-way solution that may lead to the full solution:

- Ctrl+a - move to the beginning of the line.
- Ctrl+k - cuts the entire line.
- Ctrl+y - yanks the cut text back.

Alternatively I can also use Ctrl+u to perform the first 2 steps.

This of course works, but I'm trying to figure out where exactly the cut text is saved. Is there a way to access it without using Ctrl+y?

I'm aware of xclip and I even use it to pipe text straight to the clipboard, so I was thinking about piping the data saved by Ctrl+k to xclip, but not sure how to do it.

The method I got so far is writing a script which uses xdotool to add "echo" to the beginning of the line and "| zxc" to the end of the line, and then hits enter (zxc being a custom alias which basically pipes to xclip). This also works, but it's not a really clean solution.

I'm using Cshell if that makes any difference.

Thanks!
How to copy text from command line to clipboard without using the mouse?
command line;clipboard;paste;xdotool;xclip
If using xterm or a derivative you can set up key bindings to start and end a text selection, and save it as the X11 primary selection or a cutbuffer. See man xterm. For example, add to your ~/.Xdefaults:

XTerm*VT100.Translations: #override\n\
    <Key>KP_1: select-cursor-start() \
                select-cursor-end(PRIMARY, CUT_BUFFER0)\n\
    <Key>KP_2: start-cursor-extend() \
                select-cursor-end(PRIMARY, CUT_BUFFER0)\n

You can only have one XTerm*VT100.Translations entry. Update the X11 server with the new file contents with xrdb -merge ~/.Xdefaults. Start a new xterm.

Now when you have some input at the command prompt, typing 1 on the numeric keypad will start selecting text at the current text cursor position, much like button 1 down on the mouse does. Move the cursor with the arrow keys, then hit 2 on the numeric keypad and the intervening text is highlighted and copied to the primary selection and cutbuffer0. Obviously other more suitable keys and actions can be chosen. You can similarly paste the selection with bindings like insert-selection(PRIMARY).
_datascience.16904
I am trying to understand the key difference between GBM and XGBOOST. I tried to google it but could not find any good answer explaining the difference between the two algos, or why xgboost almost always performs better than gbm. What makes XGBOOST so fast?
GBM vs XGBOOST? Key differences?
machine learning;algorithms;xgboost;ensemble modeling;gbm
Quote from the author of xgboost:

"Both xgboost and gbm follows the principle of gradient boosting. There are however, the difference in modeling details. Specifically, xgboost used a more regularized model formalization to control over-fitting, which gives it better performance.

We have updated a comprehensive tutorial on introduction to the model, which you might want to take a look at: Introduction to Boosted Trees.

The name xgboost, though, actually refers to the engineering goal to push the limit of computation resources for boosted tree algorithms. Which is the reason why many people use xgboost. For the model, it might be more suitable to be called as regularized gradient boosting."

Edit: There's a detailed guide to xgboost which shows more differences.

References:
https://www.quora.com/What-is-the-difference-between-the-R-gbm-gradient-boosting-machine-and-xgboost-extreme-gradient-boosting
http://xgboost.readthedocs.io/en/latest/model.html
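To make the "more regularized model" point concrete, here is a hedged Python sketch - an illustration, not from the quoted author - showing the regularization knobs xgboost exposes that classic GBM implementations typically lack. The dataset is a made-up placeholder:

# Minimal sketch (assumes xgboost and scikit-learn are installed).
# reg_lambda (L2), reg_alpha (L1) and gamma (minimum split loss) are the
# penalty terms added to the boosting objective; plain GBM usually only
# offers shrinkage and subsampling for regularization.
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(
    n_estimators=200,
    learning_rate=0.1,   # shrinkage, as in GBM
    max_depth=4,
    reg_lambda=1.0,      # L2 penalty on leaf weights
    reg_alpha=0.0,       # L1 penalty on leaf weights
    gamma=0.0,           # minimum loss reduction required to make a split
)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))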
_unix.310501
Running postfix 2.11.3 on Raspbian 8 (jessie) and seeing the same problem described here: Postfix does not send mails until the service is restarted

I want to try postponing the startup of postfix until after the network is up. But the command suggested in the comments, /sbin/chkconfig --level 345 postfix on, doesn't work on Raspbian.

Can someone tell me the best way to postpone the startup of postfix on Raspbian?
How to delay postfix start on raspbian?
postfix;raspbian
null
_unix.104807
I'd like to run some minimum-expectation benchmarks on EC2, but I'm having an issue with high variability in my output data. Specifically, the instances keep giving me more CPU than I'm minimally entitled to, which kills my ability to evaluate minimum performance, since I can never be sure I'm only receiving my minimal CPU allocation. So the question is... does anyone know a way to limit the CPU cycles available to the entire OS? I'm looking at Debian-based systems, so primarily Ubuntu. I've tried modifying the governor (no luck), and there's no access to the Xen config (not that it would help) or the BIOS. All other solutions are either percentage (cpulimit, cgroups) or time (ulimit) based and therefore won't work.
Limit CPU cycles inside of Xen
linux;cpu;xen;limit;amazon ec2
null
_codereview.1180
I wrote the following Ruby script several years ago and have been using it often ever since on a Bioinformatics computer cluster.

It pulls out a list of hosts from the Torque queuing system qnodes. It ssh'es and runs a command on all the nodes. Then it prints the output and/or errors in a defined order (alphabetical sort of the hostnames).

A nice feature: results are printed immediately for the host that is next in the order.

I would like to use it as an example for a Ruby workshop. Could you please suggest best-practice and design pattern improvements?

#!/usr/bin/ruby
EXCLUDE = [/girkelab/, /biocluster/, /parrot/, /owl/]

require "open3"

# Non-interactive, no password asking, and reasonable timeouts
SSH_OPTIONS = ["-o PreferredAuthentications=publickey,hostbased,gssapi,gssapi-with-mic",
  "-o ForwardX11=no",
  "-o BatchMode=yes",
  "-o SetupTimeOut=5",
  "-o ServerAliveInterval=5",
  "-o ServerAliveCountMax=2"
].join(" ")
SSH = "/usr/bin/ssh #{SSH_OPTIONS}"
MKDIR = "/bin/mkdir"

raise "Please give this command at least one argument" if ARGV.size < 1
COMMAND = ARGV[0..-1].join(' ')

output_o = {}
output_e = {}
IO_CONNECTIONS_TO_REMOTE_PROCESSES = {}

def on_all_nodes(&block)
  nodes = []
  Kernel.open('|qnodes | grep -v "^ " | grep -v "^$"') do |f|
    while line = f.gets
      i = line.split(' ').first
      nodes.push(i) if EXCLUDE.select{|x| i =~ x}.empty?
    end
  end
  nodes.sort.each {|n| block.call(n)}
end

# Create processes
on_all_nodes do |node|
  stdin, stdout, stderr = Open3.popen3("#{SSH} #{node} \"#{COMMAND}\"")
  IO_CONNECTIONS_TO_REMOTE_PROCESSES[node] = [stdin, stdout, stderr]
end

has_remote_errors = false

# Collect results
on_all_nodes do |node|
  stdin, stdout, stderr = IO_CONNECTIONS_TO_REMOTE_PROCESSES[node]
  stdin.close

  e_thread = Thread.new do
    while line = stderr.gets
      line.chomp!
      STDERR.puts "#{node} ERROR: #{line}"
      has_remote_errors = true
    end
  end

  o_thread = Thread.new do
    while line = stdout.gets
      line.chomp!
      puts "#{node} : #{line}"
    end
  end

  # Let the threads finish
  t1 = nil
  t2 = nil
  while [t1, t2].include? nil
    if t1.nil?
      t1 = e_thread.join(0.1) # Gives 1/10 of a second to STDERR
    end
    if t2.nil?
      t2 = o_thread.join(0.1) # Give 1/10 of a second to STDOUT
    end
  end
end

exit(1) if has_remote_errors
Ruby script on-all-nodes-run not only for teaching
ruby;multithreading;networking;child process
First of all, take a look at the Net::SSH library. I don't have much experience with it, so I don't know whether it supports all the options you need. But if it does, using it might be more robust than using the command line utility (you wouldn't have to worry about whether ssh is installed in the place you expect (or at all) and you wouldn't have to worry about escaping the arguments).

Assuming you can't (or won't) use Net::SSH, you should at least replace "/usr/bin/ssh" with just "ssh", so at least it still works if ssh is installed in another location in the PATH.

nodes = []
Kernel.open('|qnodes | grep -v "^ " | grep -v "^$"') do |f|
  while line = f.gets
    i = line.split(' ').first
    nodes.push(i) if EXCLUDE.select{|x| i =~ x}.empty?
  end
end

I'd start with a few observations:

- When you initialize an empty array and then append to it in a loop, that is often a good sign you want to use map and/or select instead.
- line = f.gets is a bit of an anti-pattern in ruby. The IO class already has methods to iterate a file line-wise.
- To find out whether none of the elements in an array meet a condition, negating any? seems more idiomatic than building an array with select and checking whether it's empty.

nodes = Kernel.open('|qnodes | grep -v "^ " | grep -v "^$"') do |f|
  f.lines.map do |line|
    line.split(' ').first
  end.reject do |i|
    EXCLUDE.any? {|x| i =~ x}
  end
end

nodes.sort.each {|n| block.call(n)}

I would recommend that instead of taking a block and yielding each element, you just return nodes.sort and rename the function to all_nodes. This way you can use all_nodes.each to execute code on all nodes, but you could also use all_nodes.map or all_nodes.select when it makes sense.

Open3.popen3("#{SSH} #{node} \"#{COMMAND}\"")

Note that this will break if COMMAND contains double quotes itself. Generally trying to escape command line arguments by surrounding them with quotes is a bad idea. system and open3 accept multiple arguments exactly to avoid this.

If you make SSH an array (with one element per argument) instead of a string, you can use the multiple-arguments version of popen3 and can thus avoid the fickle solution of adding quotes around COMMAND to escape spaces, i.e.:

Open3.popen3(*(SSH + [node, COMMAND]))

IO_CONNECTIONS_TO_REMOTE_PROCESSES = {}
# ...
on_all_nodes do |node|
  stdin, stdout, stderr = Open3.popen3("#{SSH} #{node} \"#{COMMAND}\"")
  IO_CONNECTIONS_TO_REMOTE_PROCESSES[node] = [stdin, stdout, stderr]
end

If you heeded my above advice about all_nodes, you can simplify this using map. I also would suggest not using a Hash here. If you use an array instead, the nodes will stay in the order in which you inserted them, which will mean that you can iterate over that array instead of invoking all_nodes again.

has_remote_errors = false

all_nodes.map do |node|
  [node, Open3.popen3(*(SSH + [node, COMMAND]))]
end.each do |node, (stdin, stdout, stderr)|
  stdin.close
  ethread = # ...
  # ...
end

This way you removed the complexity of first putting everything into a hash and then getting it out again.

while line = stderr.gets

Again, this can be written more idiomatically as stderr.each_line do |line|. Same with stdout.

first = true

This is never used. I can only assume that it's a leftover of previous iterations of the code, which is no longer necessary. Obviously it should be removed.

# Let the threads finish
t1 = nil
t2 = nil
while [t1, t2].include? nil
  if t1.nil?
    t1 = e_thread.join(0.1) # Gives 1/10 of a second to STDERR
  end
  if t2.nil?
    t2 = o_thread.join(0.1) # Give 1/10 of a second to STDOUT
  end
end

I don't see any benefit of doing it this way. Just do:

e_thread.join
o_thread.join

Note that joining on one thread does not mean that the other threads stop running - only the main thread does, but that's perfectly okay as you want that anyway.
_opensource.284
I want to create a dynamically loadable module (.dll or .so) for a closed source program, but I would like to make the source code of this module available, and I'd like it to be (A)GPL licensed, so others could use my module, but I also want to make sure their code would also be open sourced. My questions are:

1. Since the whole project would be one executable at the end, would the closed source program count as derivative work, and would it thereby need to be GPL as well?
2. If yes, can I add some kind of additional permission to the GPL licence of my code to explicitly allow this linkage to occur, so it could actually be distributed with the app?
3. If yes, where and how should I put these additional permissions in the licence file?
4. Would this modified licence still be GPL (compatible)? What kind of open source code could I use within my module?
5. If the resulting licence would not be GPL compatible, are there any other licences that are better suited for this situation, but are still similar in essence to GPL (e.g. reusing the code is only possible in case its source is also released)?
GPL licenced code for a module of a closed source program
licensing;gpl
"Since the whole project would be one executable at the end, would the closed source program count as derivative work, thereby needing to be GPL as well?"

The basic understanding of the GPL and its intention says: yes, the program must be GPL too, as through the linking you create a derivative work. That is the position of the FSF. But it is not shared by everyone. Some interpretations say only static linking makes it a derived work, not dynamic linking (which it would be, you saying it is a DLL). There are even angles that differ even more. Lawrence Rosen (from the OSI) says:

    The primary indication of whether a new program is a derivative work is whether the source code of the original program was used [in a copy-paste sense], modified, translated or otherwise changed in any way to create the new program. If not, then I would argue that it is not a derivative work

So, there are different interpretations. As far as I know, this dispute hasn't been put to the test in the courts yet. The GPL was enforced multiple times in courts, but as far as I know it was always about violations in using the GPL program, not because of linking.

"If yes, can I add some kind of additional permission to the GPL licence of my code to explicitly allow this linkage to occur, so it could actually be distributed with the app?"

Yes. Many projects use a linking exception to the GPL.

"If yes, where and how should I put these additional permissions in the licence file?"

I found no clear guidance on this, but in my understanding it must be clear that an exception is applied and which part is the original GPL. So I would have the copyright statement say it is GPL with your exception, and put both in the license file, but with a clear distinction (a new header or so).

"Would this modified licence still be GPL (compatible)? What kind of open source code could I use within my module?"

As far as I understand, the code of your module can be used in other GPL projects, but different GPL code cannot be used in your module without the author consenting to your exception. Other than that, more permissive licenses should still work, as long as they conform with the terms of the GPL and your exception.

"If the resulting licence would not be GPL compatible, are there any other licences that are better suited for this situation, but are still similar in essence to GPL (e.g. reusing the code is only possible in case its source is also released)?"

The LGPL doesn't count linking as creating a derived work, but still applies copyleft to the code of your module. That would imply your module can be linked to different programs without making them Open Source, but modifying your module still demands making those changes LGPL.
_unix.219963
First of all, yes, locked into csh on a Solaris box, can't do anything about it, sorry.

I have a report batch I was running using a foreach loop. Right now it runs as a single thread and I would like to speed it up with GNU parallel. I have been trying two different approaches but hitting roadblocks on each.

Here is my current version:

if( $#argv <= 1) then
    #Get today's date
    set LAST = `gdate +%Y-%m-%d`
else
    #use date passed in parameter
    set LAST=`echo $2 | tr -d _/`;
endif

if( $#argv == 0 ) then
    #default to 15 day lookback
    set COUNT = 15
else
    #otherwise use parameter
    set COUNT = $1
endif

@ LCOUNT = $COUNT + 1    #increment by one to exclude $LAST date

#get starting date by subtracting COUNT (now incremented by 1)
set START = `gdate --date='$LAST -$LCOUNT day' +%Y/%m/%d`;

#loop through dates, generate report string, and pipe to reportcli
foreach i (`seq $COUNT`)
    set REPDATE = `gdate --date='$START +$i day' +%Y/%m/%d`;
    set FILEDATE = `gdate --date='$START +$i day' +%Y%m%d`;
    echo "runf reportname.rep -ps $REPDATE -pe $REPDATE -o report_$FILEDATE.csv" \
        | reportcli <cli params here>
end

So I would like to get this working with parallel, but as you can see I have a boatload of command expansion/substitution going on. I tried a few different approaches, including making an array of the string passed to the reportcli, but I can't figure out how to get it to play nice.

As I see it, I have two choices:

A) one big line (have to iron out all the quoting problems to get the gdate command substitution to work):

`seq $COUNT` | parallel reportcli <cli params> < runf reportname.rep -ps \
    `gdate --date='$START +{} day' +%Y/%m/%d` -pe `gdate --date='$START +{} day' +%Y/%m/%d` \
    -o report_`gdate --date='$START +${} day' +%Y%m%d`.csv

B) Assemble a csh array beforehand, then try to expand the array (expand with echo?), pipe to parallel:

set CMDLIST
foreach i (`seq $COUNT`)
    set REPDATE = `gdate --date='$START +$i day' +%Y/%m/%d`;
    set FILEDATE = `gdate --date='$START +$i day' +%Y%m%d`;
    set CMDLIST = ($CMDLIST:q "runf reportname.rep -ps $REPDATE -pe $REPDATE \
        -o report_$FILEDATE.csv")
end

I know my array is good because I can do this and get back each element:

foreach j ($CMDLIST:q)
    echo $j
end

but, I'm not sure how to get this to work in csh:

echo $CMDLIST | parallel --pipe reportcli <cli params here>

Thanks in advance!!
csh array/command substitution with gnu parallel
csh;gnu parallel
Write a script. Call that from GNU Parallel:

[... set $START and $COUNT ...]
seq $COUNT | parallel my_script.csh $START {}

my_script.csh:

#!/bin/csh
set START = $1
set i = $2
set REPDATE = `gdate --date="$START +$i day" +%Y/%m/%d`;
set FILEDATE = `gdate --date="$START +$i day" +%Y%m%d`;
echo "runf reportname.rep -ps $REPDATE -pe $REPDATE -o report_$FILEDATE.csv" \
    | reportcli <cli params here>

(Note the double quotes around the gdate arguments: csh does not expand variables inside single quotes, so double quotes are needed for $START and $i to be substituted.)
_unix.329655
I need to create some files for an old Windows API, for video playback. The native Windows files which work with this old API are in the MSRLE format. They are used as small animations during copy, move, etc.

My beloved FFMPEG program has a decoder for those files, but not an encoder.

I would like to convert small AVI files to the MSRLE format, preferably using ffmpeg under Linux. Did I overlook some function/option in FFMPEG? How would I build/compile FFMPEG to get the functionality which allows me to encode video files in the MSRLE format?
Encoder to msrle format with FFMPEG
ffmpeg;conversion;video encoding;file format
null
_softwareengineering.39032
Possible Duplicate: Are certifications worth it?

I'm curious what experience others have had, both from the perspective of an employer and an employee, with Microsoft certifications.

I'm kind of sitting on the fence myself on this issue. I tend to have a cynical view of folks with massive amounts of letters following their name, and sometimes think that these certificates are so specialized that they prove a candidate knows almost everything about almost nothing. Deeply specialized knowledge isn't always needed in the real world, especially if it means total ignorance of related subjects. Nobody needs to hear "That's not my specialty - we need to hire another guy on the team" very often.

On the other hand, everybody knows how easy it is to BS through an interview, and nobody wants the experience of hiring an employee and finding out two weeks later that they don't really have a good grasp on what you need them to do.
Do Microsoft Certifications matter?
asp.net;sql server;windows;active directory
null
_unix.206505
I'm using VirtualBox on my Windows 8.1 host machine. I have installed CentOS & Ubuntu in 'Graphical Mode' and had sufficient practice. But now I want to switch to 'Command Line Mode' completely. I therefore created a machine for that purpose and installed 'CentOS 6.6 Basic Server'/'CentOS 7 Minimal'.

On graphical machines, I could easily install VBox Guest Additions with these few commands:

yum update
yum install gcc
yum install kernel-devel
sh VBoxLinuxAdditions.run    (from the mounted CD-ROM location)

But since I installed the CLI machine, the same commands do not install it properly; it gives this error:

Could not find X.Org or XFree86 Window System, skipping.

Can anyone guide me about it and tell me what needs to be done to get it working?
Installing VBox Guest Additions when there is no X-server
centos;command line;virtualbox;virtual machine
null
_webapps.35682
Do canned responses only work in certain versions of GMail? I do not seem to have the canned responses icon even after I enable it in Labs.
Do canned responses only work in certain versions of GMail?
gmail;gmail canned response
null
_unix.264416
My understanding is that a process always runs in user mode and uses user space only, and a kernel always runs in kernel mode and uses kernel space only. But I feel that I might not be correct, after reading the following two books. Could you correct me if I am wrong?

In "Linux Kernel Architecture" by Maurer, the terms "system process" and "user process" are used without definitions, for example, when introducing the division of virtual address space into kernel space and user space:

    Every user process in the system has its own virtual address range that extends from 0 to TASK_SIZE. The area above (from TASK_SIZE to 2^32 or 2^64) is reserved exclusively for the kernel and may not be accessed by user processes. TASK_SIZE is an architecture-specific constant that divides the address space in a given ratio; in IA-32 systems, for instance, the address space is divided at 3 GiB so that the virtual address space for each process is 3 GiB; 1 GiB is available to the kernel because the total size of the virtual address space is 4 GiB. Although actual figures differ according to architecture, the general concepts do not. I therefore use these sample values in our further discussions.

    This division does not depend on how much RAM is available. As a result of address space virtualization, each user process thinks it has 3 GiB of memory. The userspaces of the individual system processes are totally separate from each other. The kernel space at the top end of the virtual address space is always the same, regardless of the process currently executing.

    ... The kernel divides the virtual address space into two parts so that it is able to protect the individual system processes from each other.

You can read more examples by searching for either "user process" or "system process" in the book.

1. Are both user processes and system processes "processes", as opposed to the kernel?
2. What are their definitions? Do they differ by their owners (regular user or root?), by the user who started them, or by something else?
3. Why does the book explicitly write "system process" or "user process", rather than just "process" to cover both kinds of processes, for example, in the above quote? I guess what it says about "user process" also applies to "system process", and what it says about "system process" also applies to "user process".

In "Understanding the Linux Kernel" by Bovet, there are the concepts "kernel control path" and "kernel thread":

    A kernel control path denotes the sequence of instructions executed by the kernel to handle a system call, an exception, or an interrupt.

    ... Traditional Unix systems delegate some critical tasks to intermittently running processes, including flushing disk caches, swapping out unused pages, servicing network connections, and so on. Indeed, it is not efficient to perform these tasks in strict linear fashion; both their functions and the end user processes get better response if they are scheduled in the background. Because some of the system processes run only in Kernel Mode, modern operating systems delegate their functions to kernel threads, which are not encumbered with the unnecessary User Mode context. In Linux, kernel threads differ from regular processes in the following ways:

    - Kernel threads run only in Kernel Mode, while regular processes run alternatively in Kernel Mode and in User Mode.
    - Because kernel threads run only in Kernel Mode, they use only linear addresses greater than PAGE_OFFSET. Regular processes, on the other hand, use all four gigabytes of linear addresses, in either User Mode or Kernel Mode.

You can read more by searching them at Google Books.

1. Are "system process" in Maurer's book and in Bovet's book the same concept? Can "system processes" mentioned in the two books run in user space, kernel space, or both?
2. Is "system process" different from "kernel control path" and "kernel thread"?
Differences between system processes, user processes, kernel control paths and kernel threads
linux;kernel;process
null
_webmaster.18049
Is there a reason why a favicon.ico is not showing in Safari 5.1 on Mac? It shows on every other browser I have tested except the above mentioned one. In Safari on Windows it shows.

I've declared it like this:

<link rel="shortcut icon" href="http://www.mysite.com/favicon.ico" />
Favicon not showing in Safari 5.1
favicon;safari
Assuming that Safari on Mac is showing favicons for other websites (try any Stack Exchange website), you probably need to clear the browser cache.

Also, try adding a random number (using JS or a server-side script like PHP) to your favicon reference, like below.

EDIT - Also, add a type attribute and try again? (see below)

<!-- FAVICON -->
<link rel = "shortcut icon" type = "image/x-icon" href = "http://www.dixitsolutions.com/favicon.ico?23189123" />
_unix.84026
I need to efficiently count every character of an arbitrary file by its character CLASS (as defined by the BASH man page); i.e.

[[:alnum:]], [[:alpha:]], [[:ascii:]], [[:blank:]], [[:cntrl:]], [[:digit:]], [[:graph:]], [[:lower:]], [[:print:]], [[:punct:]], [[:space:]], [[:upper:]], [[:word:]] and [[:xdigit:]]

Once the file is processed, display on a single line the resulting counts for each, even when zero.

Web searches have not been fruitful in finding something along these lines.

The arbitrary file (/tmp/f1.txt) will contain a variety of diverse text/data. I am not looking to process ELF binaries nor Unicode (or any form of multi-byte) content. I am not concerned about line count (CR and/or LF), only fixated on accumulating a count of each 'character' in the target file by the above classes. I intend for this to end up as a standard function() in a larger bash script. Bash/sed/awk and the like are desired; perl/python/ruby not so much.

Sample data files could be:

- Zero bytes, i.e. no content at all.
- A single character
- A single word
- Multiple words separated by whitespace
- Multiple lines interspersed with whitespace and/or carriage returns and/or line feeds.

For multiline files there might not be a CR or LF to signify the end of the last line (yet all characters should still be counted).
Counting all characters by BASH character class
bash;shell script;text processing
null
_unix.327699
For some reason my Fedora 25 FRESH install is not using Wayland by default. I know this because of:

$: loginctl show-session 3 -p Type
Type=x11

If I was using Wayland by default, that should say wayland or weston. I'm very confused why this fresh install of Fedora 25 is not sporting Wayland by default. I looked over the Arch wiki briefly, and tried to test run Wayland by issuing:

$: weston

Also, I have rebooted Fedora to multi-user.target to get just a command line and manually launch a dbus-run-session for Wayland, and this is the output:

$: dbus-run-session -- gnome-shell --display-server --wayland
(gnome-shell:1372): mutter-WARNING **: Can't initialize KMS backend: could not find drm kms device

Then I tried:

$: startx

And my standard GNOME desktop popped up, no problem. I'm seriously wondering if the Fedora 25 live installer ever installed Wayland to begin with?

After looking for the Wayland config file weston.ini, I cannot find it in ~/.config/ where it's supposed to be.

System info:

$: uname -a
Linux sark 4.8.10-300.fc25.x86_64 #1 SMP Mon Nov 21 18:49:16 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

I have done a full system update on first login with:

$: sudo dnf update

I also went through the process of using the nvidia drivers for my graphics card, a GTX 950. Not using the default pre-my-move-to-nvidia-driver driver :P

EDIT: After investigating on my laptop, the laptop reports that it is using Wayland:

$: loginctl show-session 2 -p Type
Type=wayland

This laptop was a Fedora 24 upgrade to Fedora 25, not a fresh install of Fedora 25.

Laptop info:

$: uname -a
Linux mcp 4.8.10-300.fc25.x86_64 #1 SMP Mon Nov 21 18:59:16 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Fedora 25 is NOT using wayland by default!
linux;fedora;x11;wayland
Nvidia does not yet support Wayland, so Fedora 25 falls back to X11. From the Nvidia forum I see someone has used packages from the in-development Fedora 26 plus some patches to get it working, but notes I have tested it with local builds and it runs like crap, personally I wouldn't bother trying it in F25.Hopefully this will be resolved for F26. In the meantime, I'm at least glad that the X11 fallback worked nicely and transparently.
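In the meantime, if you want to double-check which session type each active login actually got, a convenience loop I'd suggest (hedged: it assumes systemd-logind is running and that your loginctl supports --no-legend, which Fedora 25's does):

# print every session id followed by its Type (x11 or wayland)
for s in $(loginctl list-sessions --no-legend | awk '{print $1}'); do
    printf '%s: ' "$s"
    loginctl show-session "$s" -p Type
done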
_unix.208803
I have a 16G pendrive that has some bad blocks:

# f3read /media/morfik/224e0447-1b26-4c3e-a691-5bf1db650d21
                  SECTORS      ok/corrupted/changed/overwritten
Validating file 1.h2w ...  2097112/       40/      0/      0
Validating file 2.h2w ...  2097120/       32/      0/      0
Validating file 3.h2w ...  2097098/       54/      0/      0
Validating file 4.h2w ...  2097148/        4/      0/      0
Validating file 5.h2w ...  2097114/       38/      0/      0
Validating file 6.h2w ...  2097152/        0/      0/      0
Validating file 7.h2w ...  2097152/        0/      0/      0
Validating file 8.h2w ...  2097152/        0/      0/      0
Validating file 9.h2w ...  2097152/        0/      0/      0
Validating file 10.h2w ... 2097152/        0/      0/      0
Validating file 11.h2w ... 2097152/        0/      0/      0
Validating file 12.h2w ... 2097152/        0/      0/      0
Validating file 13.h2w ... 2097152/        0/      0/      0
Validating file 14.h2w ... 2097152/        0/      0/      0
Validating file 15.h2w ...   90664/        0/      0/      0

  Data OK: 14.05 GB (29450624 sectors)
Data LOST: 84.00 KB (168 sectors)
          Corrupted: 84.00 KB (168 sectors)
   Slightly changed: 0.00 Byte (0 sectors)
        Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 18.77 MB/s

As you can see, only the first five gigs have damaged sectors. The rest is fine. The problem is when I try to burn a live image to this pendrive, the action stops after transferring 50MiB.

Is there a way to skip the 5G from the beginning and place the image after the damaged space, so it could boot without a problem?
Is it possible to burn a live image to a damaged pendrive?
usb drive;live usb;badblocks
I've managed to solve this problem, but I'm still wondering if there's a better and easier solution.

Anyway, if you have bad blocks at the beginning of the device and you are unable to burn a live image, you should make two partitions. Then you download an image and check its first partition's offset:

# parted /home/morfik/Desktop/debian-live-8.1.0-amd64-mate-desktop.iso
(parted) unit s
(parted) print
Model:  (file)
Disk /home/morfik/Desktop/debian-live-8.1.0-amd64-mate-desktop.iso: 2015232s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start  End       Size      Type     File system  Flags
 1      64s    2015231s  2015168s  primary               boot, hidden

So it's 64 sectors, which means 64 * 512 = 32768 bytes. Now we are able to mount this image:

# mount -o loop,offset=32768 /home/morfik/Desktop/debian-live-8.1.0-amd64-mate-desktop.iso /mnt
mount: /dev/loop0 is write-protected, mounting read-only

# ls -al /mnt
total 593K
dr-xr-xr-x  1 root root 2.0K 2015-06-06 16:09:57 ./
drwxr-xr-x 24 root root 4.0K 2015-06-08 20:54:43 ../
dr-xr-xr-x  1 root root 2.0K 2015-06-06 16:08:34 .disk/
dr-xr-xr-x  1 root root 2.0K 2015-06-06 15:59:10 dists/
dr-xr-xr-x  1 root root 2.0K 2015-06-06 16:09:41 install/
dr-xr-xr-x  1 root root 2.0K 2015-06-06 16:08:29 isolinux/
dr-xr-xr-x  1 root root 2.0K 2015-06-06 16:08:29 live/
dr-xr-xr-x  1 root root 2.0K 2015-06-06 15:59:00 pool/
dr-xr-xr-x  1 root root 2.0K 2015-06-06 16:09:37 tools/
-r--r--r--  1 root root  133 2015-06-06 16:09:44 autorun.inf
lr-xr-xr-x  1 root root    1 2015-06-06 15:59:10 debian -> ./
-r--r--r--  1 root root 177K 2015-06-06 16:09:44 g2ldr
-r--r--r--  1 root root 8.0K 2015-06-06 16:09:44 g2ldr.mbr
-r--r--r--  1 root root  28K 2015-06-06 16:09:57 md5sum.txt
-r--r--r--  1 root root 360K 2015-06-06 16:09:44 setup.exe
-r--r--r--  1 root root  228 2015-06-06 16:09:44 win32-loader.ini

We have access to the files, so we can copy them to the pendrive's second partition:

# cp -a /mnt/* /media/morfik/good

The following command will hardcode the second partition into the MBR in order to boot from it:

printf '\x2' | cat /usr/lib/SYSLINUX/altmbr.bin - | dd bs=440 count=1 iflag=fullblock conv=notrunc of=/dev/sdb

I'm using the ext4 filesystem on the second partition, so I have to use extlinux, but the image has isolinux. I don't have to remove this folder; I can change its name instead:

# mv isolinux extlinux

The same thing has to be done with the config file inside of that folder:

# mv isolinux.cfg extlinux.conf

I'm not sure whether this step is necessary, but I always copy all the files anyway:

# cp /usr/lib/syslinux/modules/bios/* /media/morfik/good/extlinux/

The last thing is to install extlinux's VBR on the second partition:

# extlinux -i /media/morfik/good/extlinux/
/media/morfik/good/extlinux/ is device /dev/sdb2

And that's pretty much it. I tested the image; it boots and the live system works well. This solution should work for all kinds of live images.
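If you do this often, the offset arithmetic can be scripted. A hedged sketch (assumptions: 512-byte logical sectors, the bootable data sits in partition 1 of the ISO, and parted's --machine output format is what gets parsed here):

#!/bin/sh
# Print the byte offset of partition 1 inside an ISO and loop-mount it.
iso="$1"; mnt="$2"
start_s=$(parted -sm "$iso" unit s print | awk -F: '/^1:/{gsub(/s/,"",$2); print $2}')
offset=$(( start_s * 512 ))
echo "partition 1 starts at sector $start_s -> offset $offset bytes"
mount -o loop,offset="$offset" "$iso" "$mnt"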
_computerscience.1912
I am trying to implement a position-based cloth simulation using hardware tessellation. This means I want to upload just a control quad to the graphics card and then use tessellation and geometry shading to create the nodes in the cloth.

This idea follows the paper: Huynh, David, "Cloth simulation using hardware tessellation" (2011). Thesis. Rochester Institute of Technology. http://scholarworks.rit.edu/theses/265/

I know how to use tessellation to create the simulated points. What I don't know is how to store the computed information into a framebuffer.

The geometry and also the tessellation evaluation shaders have the information needed for the per-vertex computations. But can they directly write into the framebuffer? The fragment shader, I know, can write to the framebuffer, but my information would be interpolated and I would no longer know what to write at which position.
Per-Vertex Computation in OpenGL Tessellation
opengl;shader;gpgpu;simulation
Based on the comment of ratchet freak, I researched transform feedback buffers and solved my problem that way.

I now generate the simulated points on the CPU and put them into a vertex buffer object (VBO). I generate a second VBO for the points (along with some others for velocity). The connectivity of the cloth is given as a vertex attribute in an ivec4.

Using the transform feedback buffers and a double-buffering trick with the two VBOs, I can always read from the last step (using the connectivity info) and write to the other buffer. This is done to avoid problems with concurrency. The calculations are done in the vertex shader as GL_POINTS.

Binding the output of the first shader into another regular shader, with an additional index buffer to generate triangles, I can without trouble render the cloth any way I want.

This idea follows the transform feedback buffer example made in the book OpenGL Superbible: http://www.openglsuperbible.com/
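The per-frame skeleton of that ping-pong scheme looks roughly like this (a hedged C++ fragment, not the actual code from above; it assumes a valid GL context, that the VAO/VBO pairs were set up beforehand, and that the output varyings were registered with glTransformFeedbackVaryings before linking the simulation program):

// src/dst index into two position VBOs; swap them every frame
glUseProgram(simulationProgram);
glEnable(GL_RASTERIZER_DISCARD);                  // vertex stage only, no fragments
glBindVertexArray(vao[src]);                      // reads last step's positions
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, vbo[dst]); // captures new positions
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, nodeCount);            // one invocation per cloth node
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
std::swap(src, dst);                              // double-buffer swap

// then render vbo[src] as triangles using the separate index buffer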
_softwareengineering.150122
I think one of the best ways to learn and become a better programmer is to share knowledge gained from working on personal or work projects with others. One way to do this at the workplace is to set aside an hour or so a week and let one or several team members present their progress on the part of the code base they have been working on. This is especially helpful when each team member has a specialized skill set. What are some other useful practices that managers can institute to increase team cohesion and improve productivity?
Productivity enhancing development strategies for small teams
productivity;management;teamwork;team
I've used "Training Tuesdays", though you could move it to a Thursday and keep the TT title, as an opportunity for the team to share what they know. I borrowed a few katas from Code Kata and worked through them as a team; they provide nice little programming scenarios that allow everyone a chance to think about a problem and then propose a solution. One person might suggest LINQ, another a for loop... then you've got good ground for a discussion about the various benefits of each approach. It's not mission critical, so the pressure is off and it's OK whatever you suggest. So if you've got good database skills you might see the problem in a particular light; explaining that to someone who's a good OO programmer means you both need to understand the other's language to some extent.

I also used the excellent Summer of NHibernate / Autumn of Agile screencasts by Steve Bohlen in the sessions to bring in an understanding of ORMs and agile processes. The Autumn of Agile shows the advantages of agile over a waterfall approach, and it was good having that visualised for us.

We also gave mini-presentations on subjects we each were strong on. There was a guy strong on UX and accessibility in the team, and he showed what that involved and worked through some forms we had, highlighting good and bad aspects. And if someone has an interest in learning something new, it can be a good opportunity there too - for example, finding out about javascript patterns or unit testing javascript so you can present back to the team on how they can be adopted. I'm currently spending a bit of free time learning Red Gate's SQL Test, and in a week I will be having an hour session with the team in which I hope to show them why we should start writing unit tests in the database.

What I've found is that when you have to teach something, it lifts your game. I teach martial arts, and when I learned a technique for a grading I thought I understood it, but when I had to teach it to someone I had to see it from their perspective and think of ways they could understand it. The same has been true when I've mentored developers: they ask different questions than I did when learning that feature, so I had to really understand it.

Pair programming could also help here; over time everyone works with everyone else on something, whether it's your core business, a fun Friday afternoon project, or even just "hey, let's have a look at that crazy Rails thing everyone's on about". When we've done this, not only is the code we write better (you don't tend to cheat when you know someone is watching), but you have discussions about how to do x, y or z - which is what happened with the code kata in the team, only here it's just the two of you. So if you try to pair seniors with juniors, the juniors get to work with someone who can help guide them and prevent them making mistakes. The seniors get someone asking "why, why, why", and that makes them have to explain and teach, and therefore question their understanding, or pick up new ideas from people who aren't already set in their ways (I'm guilty of this: I thought I knew all I needed to and stopped learning; new approaches came in and I was behind the times).
_unix.378433
I have many video files on my Ubuntu server and I wish to stream them one by one without interruption. I just run ffmpeg with the segment function.

The problem is that some files have different DAR/SAR/resolution values, so when one video finishes, the next one does not play or gives a green screen. In case I classify the video files according to the same resolution, fps, DAR and SAR, no problem occurs. I encoded some files to change the DAR and SAR to 16:9 and 1:1, but ffmpeg is not producing the encoded files with the correct SAR/DAR values. Maybe I lack some understanding of SAR and DAR.

I have read many answers in different forums and am a bit tired of desperately searching the internet and reading the same threads again and again. I'd appreciate it if you could point me in the right direction to get rid of this problem.
How to FFmpeg videos with different SAR DAR values
ffmpeg
null
_softwareengineering.307704
I am working on a rather big Node.JS project with several thousand lines of code. It's not a homepage, but acts more like a configurable general purpose application server. As such it brings some parts which are useful in most projects I do. The problem is that I easily lose overview in the core modules. So I did a bit of research and came up with an interesting structure based on C++ header/code file structures. I want to know if this structure is good in the long run (maintainability, testability, extensibility), how it can be improved, and if there is already a (better) standard way of doing this that I did not find.

The structure has three kinds of files, where xxx is the module name and yyy is the method name:

- xxx.h.js: The header file, which contains the class and method declarations
- xxx.yyy.c.js: The code files, which contain one method each (and possibly local helper functions)
- index-xxx.js: The glue and main file for the module

I would like to structure all my sub-modules like this and then use a loading mechanism to load all modules, namespace them and finally use them globally.

Here's an example:

package.json

{
    "name": "Foo",
    "version": "1.0.0",
    "description": "Does something in the core system",
    "author": "Marco Alka",
    "main": "index-foo.js"
}

// index-foo.js
'use strict';

// return class
module.exports = require('./foo.h.js');

// overwrite definitions with declarations
// this part can and most probably will be done generically by my module loader. I write it down for easier overview.
require('./src/foo.bar.c.js');
require('./src/foo.baz.c.js');

// foo.h.js
'use strict';

/**
 * Easy-to-read interface/class
 * No obstructive code in my way
 */
module.exports = class Foo {
    constructor() {}

    /**
     * Put comments here
     */
    bar() { throw 'Not Implemented'; }

    /**
     * More comments for methods
     * See how the method declarations+documentations nicely dominate the file?
     */
    baz() { throw 'Not Implemented'; }
}

// src/foo.bar.c.js
'use strict';

// include header
var h = require('../foo.h.js');

// implement method Foo::bar
h.prototype.bar = function () {
    console.log('Foo.bar');
}

// src/foo.baz.c.js
'use strict';

// include header
var h = require('../foo.h.js');

// implement method Foo::baz
h.prototype.baz = function () {
    console.log('Foo.baz');
}

Example of how to use the whole thing from the root folder:

'use strict';

// later on a module loader will be used which loads the folder as module instead of the index file directly
var F = require('./index-foo.js');

// make object
var foo = new F();

// call method
foo.bar();

Output in console: "Foo.bar\n"
How to structure big Node.JS modules
node.js;project structure;modules
I'm a newbie and I'm pretty sure this is an opinion-based answer which is really outside the scope of the board. With that, here's my practical experience:

There are no inherently bad ways to structure code, so long as the structure can be described simply. The only thing bad about poor code structure is lack of discipline: if you consistently follow the structure when developing, then you or anyone else editing your code should be able to discern the context quickly.

Even if you are the only person on your development team (which it sounds like you are), use Git or some other kind of repository with good, robust explanations for what you changed and why.

It's been my experience that code structures are not usually the culprits for making things difficult to manage. It's logical architecture and documentation. If you have a good logical architecture that clearly defines modularity of functionality, and you document what you've done, it will be an investment that pays off.
_webapps.17114
I'm using Google Apps for example.com. Before doing so, my example.com email address was associated with a Google account, probably because of Picasa. Upon attempting to reset the password for my address (google.com/accounts/recovery), Google notes that I've got two accounts associated with my address: one personal from Picasa and one organisational from the administrator at example.com. This causes all sorts of trouble, mainly that I can't associate the email address with a (Gmail) Google Plus account because an account with that email address already exists.How can I delete the personal Google account?
How to remove a personal Google account with the same address as an Apps account?
google apps;google account
Have you tried following the account deletion procedure?

To delete your Google Account, follow these steps:

1. Sign in on the Google Accounts homepage. (If you forgot your password, you can reset it).
2. Click "Edit" next to the "My products" list. If you don't see the "Edit" link, your account was likely created through an organization or company. To delete your account, talk to your domain administrator.
3. On the following page, click "Close account and delete all services and info associated with it" to delete your account.
4. Confirm your account deletion. To do so, you'll need to select these two options: "Yes, I want to delete my account", and "Yes, I acknowledge that I am still responsible for any charges incurred due to any pending financial transactions". (You can safely select the latter option if you haven't used any of Google's paid products, such as AdWords and Google Checkout, or if you have no pending financial transactions related to these products.)
_codereview.85948
I'm trying to optimize the following method. Any suggestions on how this can be done better? The shipping cost is based on weight and the total cost of products. The following are the conditions for calculating the cost:

- When the order is under $100, and all items are under 10 lb, then shipping is $5 flat.
- When the order is $100 or more, and each individual item is under 10 lb, then shipping is free.
- Items over 10 lb always cost $20 each to ship.

public function addShippingCost($subtotal) {
    $shipCost = 0;
    $totalWeight = 0;
    $itemUnder10lb = true;

    foreach ($this->products as $product) {
        if($product->getWeight() > 10){
            $shipCost += $this::RATE_HEAVY;
            $itemUnder10lb = false;
        }
        $totalWeight += $product->getWeight();
    }

    //When order is under $100, and all items under 10 lb, then shipping is $5 flat
    //When order is $100 or more, and each individual item is under 10 lb, then shipping is free
    //Items over 10 lb always cost $20 each to ship
    if($subtotal > 100 && $itemUnder10lb)
        return $this::RATE_FREE;
    elseif($subtotal < 100 && $totalWeight < 10)
        return $this::RATE_FLAT;
    elseif($itemUnder10lb == false)
        return shipCost;
}
Method for calculating shipping cost
php;object oriented
What if you have an order that is

- over 10 lb in total weight,
- under $100 in subtotal, and
- has no product over 10 lb?

Your method would not return anything in that case.

Here's my patched solution:

public function addShippingCost($subtotal) {
    $shipCost = 0;
    $weightSum = 0;
    $allItemsUnder10lb = true;

    foreach ($this->products as $product) {
        if($product->getWeight() > 10){
            $shipCost += $this::RATE_HEAVY;
            $allItemsUnder10lb = false;
        }
        $weightSum += $product->getWeight();
    }

    if ($subtotal < 100 && $weightSum < 10) {
        $shipCost = $this::RATE_FLAT;
    }
    if ($subtotal >= 100 && $allItemsUnder10lb) {
        $shipCost = $this::RATE_FREE;
    }

    return $shipCost;
}
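A quick way to see the gap concretely (a hypothetical harness - the Product stub, the Order wrapper and the rate constants are assumptions made up for this example, not part of the original class):

<?php
// Minimal stub so the method can be exercised in isolation.
class Product {
    private $weight;
    public function __construct($weight) { $this->weight = $weight; }
    public function getWeight() { return $this->weight; }
}

class Order {
    const RATE_FLAT  = 5;
    const RATE_FREE  = 0;
    const RATE_HEAVY = 20;

    private $products;
    public function __construct(array $products) { $this->products = $products; }

    // the patched method from above
    public function addShippingCost($subtotal) {
        $shipCost = 0;
        $weightSum = 0;
        $allItemsUnder10lb = true;

        foreach ($this->products as $product) {
            if ($product->getWeight() > 10) {
                $shipCost += $this::RATE_HEAVY;
                $allItemsUnder10lb = false;
            }
            $weightSum += $product->getWeight();
        }

        if ($subtotal < 100 && $weightSum < 10) {
            $shipCost = $this::RATE_FLAT;
        }
        if ($subtotal >= 100 && $allItemsUnder10lb) {
            $shipCost = $this::RATE_FREE;
        }

        return $shipCost;
    }
}

// Two 6 lb items: 12 lb total, $50 subtotal, no single item over 10 lb.
// The original method matches none of its three branches and returns null;
// the patched version returns int(0) instead of falling through.
$order = new Order([new Product(6), new Product(6)]);
var_dump($order->addShippingCost(50));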
_webmaster.13632
I have inherited an old website whose security certificate causes this error when viewed with IE:

"The name on the security certificate is invalid or does not match the name of the site"

Is there any easy way to fix this without having to get a new certificate?

Edit: Some more info. This is an internal website, though a couple of other companies have secure access to our network's DMZ to access it.
Issue with security certificate name
https;security certificate
null
_unix.192348
Version control software seems to be used for backing up projects consisting mostly of plain text files. For backing up a file system with a large number of files, or files of large size, regular file copying/transferring software such as rsync seems more appropriate to me.

If not, what are the advantages and disadvantages of version control for backing up a file system? How is version control worth more than regular file copying or transferring software?
What is the advantage of backing up a filesystem with version control software?
backup;version control
null
_cs.31895
A minimum heavyweight spanning tree is a spanning tree in which the heaviest edge is as light as possible.

Formally:

input: a connected, undirected, weighted graph $G$.
output: a spanning tree $T$ for $G$ with the property that every spanning tree $T'$ for $G$ has some edge $e'$ such that $w(e') \ge w(e)$ for every edge $e$ in $T$.

1. Provide a greedy algorithm to solve the problem.
2. Is every minimum heavyweight spanning tree a minimum spanning tree?

The first thing that came to my mind was using Kruskal's algorithm, with $O(|E| \log |V|)$ running time. If I use Kruskal's algorithm for 1, the output is guaranteed to be an MST; thus 2 will be true? Can someone verify this for me; thanks in advance.
minimum spanning tree and minimum heavyweight spanning tree
algorithms;graphs;greedy algorithms;spanning trees
null
_unix.317661
I've only just realized that KVM snapshots default to being listed in alphabetical order, rather than chronological. This can be confusing if you sometimes use the command virsh snapshot-create-as $ID $NAME in order to create snapshots. This can result in output similar to below, which is hard to find the latest snapshot from (especially if it's a long list):

 Name                       Creation Time             State
------------------------------------------------------------
 1474043443                 2016-09-16 17:30:43 +0100 running
 1476197777                 2016-10-11 15:56:17 +0100 running
 1476721835                 2016-10-17 17:30:35 +0100 running
 1476953503                 2016-10-20 09:51:43 +0100 running
 consolidated               2016-09-25 08:06:27 +0100 running
 just installed mysql 5.6   2016-09-16 10:19:46 +0100 running
 updated vars               2016-09-24 04:02:24 +0100 running

Is there a way to list the snapshots in chronological order? That way I can (or a script can be programmed to) just read the name from either the top or bottom of the list to grab the latest one. If not, is there a place I can raise this to put in a request for a --chronological parameter?

After reading the Redhat documentation on managing snapshots, I have found the --tree and --leaves options, which are helpful to me in this specific case, as --leaves gives me the latest snapshot, which is what I'm after - but it would probably list several if there were several paths. I'm not sure if the last line in the --tree output would always give me the latest snapshot; it may depend on the alphabetical names of the branches when the splits were made. --current does not work for me, as I get the following error message:

error: Domain snapshot not found: the domain does not have a current snapshot

I am guessing this is because the virtual machine had its snapshot taken whilst it was running, and it is also currently still running.

I guess one could easily write a script to parse the Creation Time in the output to automatically fetch the name of the latest snapshot, but it would be nice if this was somehow built-in.
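The parsing workaround I have in mind would look roughly like this (a rough sketch; it assumes the snapshot XML printed by virsh snapshot-dumpxml contains a <creationTime> element in epoch seconds, and that snapshot names contain no newlines):

#!/bin/sh
# Print the name of the most recently created snapshot of a domain.
dom="$1"
virsh snapshot-list "$dom" --name | while IFS= read -r name; do
    [ -z "$name" ] && continue
    ts=$(virsh snapshot-dumpxml "$dom" "$name" \
         | sed -n 's:.*<creationTime>\(.*\)</creationTime>.*:\1:p')
    printf '%s\t%s\n' "$ts" "$name"
done | sort -n | tail -n 1 | cut -f2-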
KVM - List snapshots in order of creation
kvm;snapshot;virsh
null
_unix.371683
Running a rescue system, I can manually start anaconda like this:

# anaconda --kickstart ks.cfg --repo file:///mnt/cdrom

But then anaconda always wants to reboot/shutdown the system at the end. I don't manage to exit it without a system reboot/shutdown.

How can I tell anaconda to exit without rebooting/shutting down?

Use cases: directly inspect the installed system, restart anaconda with a fixed kickstart file, etc.
Exit anaconda without reboot
kickstart;anaconda
null
_codereview.116411
We are given two lists of integer arrays, and the objective is to retrieve another list which contains the duplicates in both lists.

The code developed is the following:

public List<int[]> GetDuplicates(List<int[]> pInputList1, List<int[]> pInputList2)
{
    var outputList = new List<int[]>();

    pInputList2.ForEach(x =>
    {
        outputList.AddRange(pInputList1.Select(j => j).Where(y =>
        {
            for (int i = 0; i < y.Length; i++)
                if (y[i] != x[i])
                    return false;
            return true;
        }));
    });

    return outputList;
}

The code works as intended; what I am concerned about is optimization - how to improve this code in both execution time and stability.

What I am doing in this code is comparing each element from one list with ALL of the elements of the other list (I see a problem here, but don't know how to approach it well; the foreach seems not to be the best option here).
Searching a list of int arrays in another list of int arrays
c#;performance;array
The original code:

public List<int[]> GetDuplicates(List<int[]> pInputList1, List<int[]> pInputList2)
{
    var outputList = new List<int[]>();

    pInputList2.ForEach(x =>
    {
        outputList.AddRange(pInputList1.Select(j => j).Where(y =>
        {
            for (int i = 0; i < y.Length; i++)
                if (y[i] != x[i])
                    return false;
            return true;
        }));
    });

    return outputList;
}

I'd start with a few readability simplifications first:

- Select(j => j) is a no-op and can be removed
- The usage of ForEach is hurting readability (and performance, due to passing delegates around) a little bit
- Moving the pInputList1.Where(...) outside of the AddRange also makes it clearer what's going on.

public List<int[]> GetDuplicates(List<int[]> pInputList1, List<int[]> pInputList2)
{
    var outputList = new List<int[]>();

    foreach (var x in pInputList2)
    {
        var duplicates = pInputList1.Where(y =>
        {
            for (int i = 0; i < y.Length; i++)
                if (y[i] != x[i])
                    return false;
            return true;
        });
        outputList.AddRange(duplicates);
    }

    return outputList;
}

Next, as mentioned in the comments, the intention is to filter the second list according to whether an identical array appears in the first list. The method and its arguments (and derived local variables) should be renamed to reflect that. I'd also swap the parameters over, because pInputList2 is the important part of the method, while pInputList1 is the set against which pInputList2 is checked.

public List<int[]> FilterByWhitelist(List<int[]> testArrays, List<int[]> whiteListedArrays)
{
    var outputList = new List<int[]>();

    foreach (var testArray in testArrays)
    {
        var duplicates = whiteListedArrays.Where(whiteListedArray =>
        {
            for (int i = 0; i < whiteListedArray.Length; i++)
                if (whiteListedArray[i] != testArray[i])
                    return false;
            return true;
        });
        outputList.AddRange(duplicates);
    }

    return outputList;
}

Using the same point in the comments: we don't need to find every matching array in whiteListedArrays - we only need to know whether there is one. As such, we can use Any instead of Where:

public List<int[]> FilterByWhitelist(List<int[]> testArrays, List<int[]> whiteListedArrays)
{
    var outputList = new List<int[]>();

    foreach (var testArray in testArrays)
    {
        var isWhiteListed = whiteListedArrays.Any(whiteListedArray =>
        {
            for (int i = 0; i < whiteListedArray.Length; i++)
                if (whiteListedArray[i] != testArray[i])
                    return false;
            return true;
        });

        if (isWhiteListed)
        {
            outputList.Add(testArray);
        }
    }

    return outputList;
}

Another bug in this implementation is: what if one of your testArrays is longer than one of the whiteListedArrays? You're accessing x[i] while only ensuring that i is less than the length of y. You'll probably get an ArgumentOutOfRangeException from this. Because two arrays can't be equal if their lengths don't match, this should be checked first. On the other hand, Linq has a nice built-in method for checking whether two sequences are equal:

public List<int[]> FilterByWhitelist(List<int[]> testArrays, List<int[]> whiteListedArrays)
{
    var outputList = new List<int[]>();

    foreach (var testArray in testArrays)
    {
        var isWhiteListed = whiteListedArrays.Any(
            whiteListedArray => testArray.SequenceEqual(whiteListedArray));

        if (isWhiteListed)
        {
            outputList.Add(testArray);
        }
    }

    return outputList;
}

This method now iterates once over its main argument, and returns a subsequence of that argument. This sounds like an ideal scenario for an extension method on IEnumerable.

public static IEnumerable<int[]> FilterByWhitelist(this IEnumerable<int[]> testArrays, List<int[]> whiteListedArrays)
{
    foreach (var testArray in testArrays)
    {
        var isWhiteListed = whiteListedArrays.Any(
            whiteListedArray => testArray.SequenceEqual(whiteListedArray));

        if (isWhiteListed)
        {
            yield return testArray;
        }
    }
}

This would be called like:

var originalList = new List<int[]> { ... };
var whiteList = new List<int[]> { ... };
var filteredList = originalList.FilterByWhitelist(whiteList).ToList();

One final performance improvement I can see is pre-grouping the whiteListedArrays by length, and only testing for sequence equality where the lengths are already known to be equal:

public static IEnumerable<int[]> FilterByWhitelist(this IEnumerable<int[]> testArrays, List<int[]> whiteListedArrays)
{
    var lengthGroupedWhiteListedArrays = whiteListedArrays
        .GroupBy(arr => arr.Length)
        .ToDictionary(group => group.Key, group => group.ToList());

    foreach (var testArray in testArrays)
    {
        List<int[]> lengthMatchedWhiteListedArrays;
        if (!lengthGroupedWhiteListedArrays.TryGetValue(testArray.Length, out lengthMatchedWhiteListedArrays))
        {
            continue;
        }

        var isWhiteListed = lengthMatchedWhiteListedArrays.Any(
            whiteListedArray => testArray.SequenceEqual(whiteListedArray));

        if (isWhiteListed)
        {
            yield return testArray;
        }
    }
}

At this point, we're only iterating over whiteListedArrays once, so that too can become an IEnumerable:

public static IEnumerable<int[]> FilterByWhitelist(this IEnumerable<int[]> testArrays, IEnumerable<int[]> whiteListedArrays)
{
    var lengthGroupedWhiteListedArrays = whiteListedArrays
        .GroupBy(arr => arr.Length)
        .ToDictionary(group => group.Key, group => group.ToList());

    foreach (var testArray in testArrays)
    {
        List<int[]> lengthMatchedWhiteListedArrays;
        if (!lengthGroupedWhiteListedArrays.TryGetValue(testArray.Length, out lengthMatchedWhiteListedArrays))
        {
            continue;
        }

        var isWhiteListed = lengthMatchedWhiteListedArrays.Any(
            whiteListedArray => testArray.SequenceEqual(whiteListedArray));

        if (isWhiteListed)
        {
            yield return testArray;
        }
    }
}

We could, at this point, extract the array-specific code into a class that encapsulates the whitelist:

public class Whitelist
{
    private readonly IReadOnlyDictionary<int, List<int[]>> _lengthGroupedWhiteListedArrays;

    public Whitelist(IEnumerable<int[]> whitelistedArrays)
    {
        _lengthGroupedWhiteListedArrays = whitelistedArrays
            .GroupBy(arr => arr.Length)
            .ToDictionary(group => group.Key, group => group.ToList());
    }

    public bool Contains(int[] item)
    {
        List<int[]> correctLengthFilterArrays;
        if (!_lengthGroupedWhiteListedArrays.TryGetValue(item.Length, out correctLengthFilterArrays))
        {
            return false;
        }

        return correctLengthFilterArrays.Any(filterArray => item.SequenceEqual(filterArray));
    }
}

...

public static IEnumerable<int[]> FilterByWhitelist(this IEnumerable<int[]> testArrays, Whitelist whitelist)
{
    foreach (var testArray in testArrays)
    {
        if (whitelist.Contains(testArray))
        {
            yield return testArray;
        }
    }
}

Although by doing this, FilterByWhitelist has more or less become Linq's Where:

var originalList = new List<int[]> { ... };
var whiteList = new Whitelist(new List<int[]> { ... });
var filteredList = originalList
    .Where(arr => whiteList.Contains(arr))
    .ToList();

This way, the pre-grouping of array lengths can be used in many places.
_cs.19913
I'm following the algorithm for left recursion elimination from a grammar. It says to remove the epsilon production if there is any. I have the grammar $\qquad S \to Aa \mid b$ $\qquad A \to Ac \mid Sd \mid \varepsilon$ I can see that after removing the epsilon productions the grammar becomes $\qquad S \to Aa \mid a \mid b$ $\qquad A \to Ac \mid Sd \mid c \mid d$ I'm confused where the $a \mid b$ for $S$ and $c \mid d$ for $A$ come from. Can someone explain this?
Eliminating $\varepsilon$-productions during elimination of left recursion
context free;formal grammars;compilers;left recursion
null
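A worked sketch of the ε-removal step, since this question has no accepted answer here (my own derivation, not from the original thread). Because $A$ is nullable ($A \to \varepsilon$), every production with $A$ on its right-hand side gets an extra copy with that $A$ erased before the ε-production is dropped: $S \to Aa$ yields $S \to a$, and $A \to Ac$ yields $A \to c$ (the $b$ alternative was already a production of $S$). So the grammar after ε-removal is $\qquad S \to Aa \mid a \mid b$ $\qquad A \to Ac \mid Sd \mid c$ Note that an $A \to d$ production would only arise from $A \to Sd$ if $S$ were nullable, and $S$ is not nullable in this grammar, so that production must come from some later step of whatever source the quoted result was taken from.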
_unix.359424
I get this error during the installation of a Magento instance. In my php.ini settings I have changed the value from 0 to -1, restarted apache2, and done everything else that is needed. It still shows this error. How can I resolve this? Thank you.
PHP changes don't take effect: always_populate_raw_post_data = 0
linux;ubuntu;php
null
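No accepted answer here, so a small diagnostic sketch of my own (check.php is a made-up name): the CLI and Apache frequently read different php.ini files, which is the usual reason an edit "doesn't reflect". A throwaway script served by the same Apache that runs Magento shows which ini file was loaded and which value actually took effect:

<?php
// check.php - open this through the browser, not via `php check.php`,
// so that it runs under the same PHP that serves Magento.
echo 'Loaded php.ini: ', php_ini_loaded_file(), PHP_EOL;
echo 'always_populate_raw_post_data: ',
     var_export(ini_get('always_populate_raw_post_data'), true), PHP_EOL;

If the loaded file is not the one you edited, change that one instead, and restart Apache (or php-fpm, if PHP runs through FPM) afterwards.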
_cstheory.3348
Is there any current, ongoing research that looks specifically at implementing prediction models on a distributed system? I'm particularly interested in work where the underlying system can be modeled as a network of devices with significant churn, both in the connections between nodes and in nodes joining/leaving the network (for example, a peer-to-peer network).
Is anyone actively researching distributed prediction models?
reference request;dc.distributed comp
There is no information about any particular prediction model in your request; therefore, I think you should look through the paper PLDA: Parallel Latent Dirichlet Allocation for Large-scale Applications (2009) by Google researchers. See also their project on Google Code. Besides, Google's Prediction API project might be a valuable resource related to the topic, too.
_softwareengineering.157345
In many languages, the substring function works like this: substring(startIndex, endIndex) returns the substring from startIndex until endIndex-1 (if you view startIndex and endIndex as 0-based) / from startIndex+1 to endIndex (1-based). This is confusing. I understand that the two parameters can be interpreted as startIndex and the length of the substring, but in my opinion that is still confusing, and even in this case startIndex is 0-based while length is 1-based. Why not stick to one convention for both of the function's arguments? And why do newer languages like Ruby and Python continue to stick to this standard?
Why are the arguments for substring functions mismatched?
language agnostic;language design;strings
The second argument is not the length of the substring; that only works if you start at the very beginning of the string. The point is that specifying from..to is linguistically ambiguous: you name two limit values, but do you want those two values themselves included in the extracted range or not? In normal parlance, there isn't a strong conventional preference: I knew her from first to fourth grade means for four years, but training is from one to three means two hours, not three. Therefore, it would be a major source of confusion if the indices referred to characters in a string. What they really refer to is the positions between the characters: 0 means before the string, 1 means after the first character, 2 means after the second character etc. Therefore, s.substring(0,2) means The first two characters of s, unambiguously. (The fact that endIndex - startIndex == length(extract) is admittedly a nice bonus.)
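A quick demonstration of the between-the-characters model described above; Python is used here only because the question is language agnostic, and Python's slicing follows exactly this convention:

s = "hello"
# Positions sit between the characters:  0 h 1 e 2 l 3 l 4 o 5
assert s[0:2] == "he"          # "the first two characters", unambiguously
assert s[2:4] == "ll"          # from position 2 to position 4
assert len(s[1:4]) == 4 - 1    # the promised bonus: length == endIndex - startIndex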
_vi.8866
When I'm trying to increase numbers with Ctrl-A it removes the leading and the trailing zeros. For example, take 1.009500, 1.09500, and 1.0009500. Results after pressing Ctrl-A: 1.009500 -> 1.9501, 1.09500 -> 1.9501, 1.0009500 -> 1.9501; or, if the cursor is in front of the ., 1.009500 -> 2.009500, 1.09500 -> 2.09500, 1.0009500 -> 2.0009500. Is there a way to make it work like this: 1.009500 -> 1.009501, 1.09500 -> 1.09501, 1.0009500 -> 1.0009501? I have tried this on vim --version: VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Oct 12 2015 04:35:19)
How can I increment a floating-point number with Ctrl-A without deleting zeros?
key bindings;vimrc
Thanks to @statox's suggestion, you can solve your issue with :set nrformats-=octal. First, Vim does not increment the number as a decimal fraction; it will try to increment 1 and 009500 separately. So the question is why incrementing 009500 removes the leading 0. As suggested by @sp asic, I think Vim is treating this number as an octal number. Looking at the source code of Vim, a conversion between the string and the actual number is done using the str2nr() function. You can see the definition of numbers in :h expr-number: hex-number octal-number Decimal, Hexadecimal (starting with 0x or 0X), or Octal (starting with 0). 009500 is a number starting with a 0 but containing a 9, so it's not a valid octal number and should not have a leading 0. You can see the same result by calling the str2nr() function yourself: :echo str2nr(000100) returns 64, and :echo str2nr(000900) returns 900. 000100 is a valid octal number, so incrementing it will just add 1 to the result. 000900 is an invalid octal number, so incrementing it will fix it (removing the leading zeros) and then increment it. Note: this is what I've understood from the little dig I've done into the Vim code; as I didn't understand everything I've read, I may be wrong. I believe the code responsible for the decimal conversion is the following (source): while (VIM_ISDIGIT(*ptr)){ un = 10 * un + (uvarnumber_T)(*ptr - '0'); ++ptr; if (n++ == maxlen) break;} Given that the result is un, you can see that a leading 0 won't add any value to the decimal number.
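To make the fix from this answer permanent, the option can live in your vimrc; a minimal sketch (and, if I remember correctly, Vim 8 and Neovim dropped octal from the default 'nrformats', so recent versions should behave this way out of the box):

" ~/.vimrc
" Don't let Ctrl-A / Ctrl-X interpret numbers with leading zeros as octal,
" so pressing Ctrl-A on 1.009500 gives 1.009501 instead of 1.9501.
set nrformats-=octal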
_unix.315767
I am having an issue with my nginx.conf file where my proxy_pass for nas.domain.com IS forwarded to the correct address, but cloud.domain.com is not. Can anyone please help me figure out what is wrong with my file? Also, I am a newbie to nginx and my domain name has been changed to protect the innocent:load_module /usr/local/libexec/nginx/ngx_mail_module.so; load_module /usr/local/libexec/nginx/ngx_stream_module.so; #user nobody; worker_processes 1; # This default error log path is compiled-in to make sure configuration parsing # errors are logged somewhere, especially during unattended boot when stderr # isn't normally logged anywhere. This path will be touched on every nginx # start regardless of error log location configured here. See # https://trac.nginx.org/nginx/ticket/147 for more info. # #error_log /var/log/nginx/error.log; # #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] $request ' # '$status $body_bytes_sent $http_referer ' # '$http_user_agent $http_x_forwarded_for'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name domain.com www.domain.com; #charset koi8-r; #access_log logs/host.access.log main; location / { #root /usr/local/www/nginx; #index index.html index.htm; proxy_pass http://192.168.1.5; } } # #error_page 404 /404.html; # # redirect server error pages to the static page /50x.html # # error_page 500 502 503 504 /50x.html; # location = /50x.html { # root /usr/local/www/nginx-dist; # } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} #} server { server_name nas.domain.com; location / { # app1 reverse proxy follow proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://192.168.1.121:8080; } } server { listen cloud.domain.com:80; server_name cloud.domain.com; location / { # app2 reverse proxy settings follow proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://192.168.1.2:2080; } } # HTTPS server # #server { # listen 443 ssl; # server_name localhost; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} }
Nginx Reverse Proxy Problem
configuration;nginx;reverse proxy
null
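The question has no accepted answer; my own reading of the posted config (an educated guess, not confirmed in the thread): listen cloud.domain.com:80; tells nginx to resolve that hostname at startup and bind to the resulting address, which is not how virtual hosts are selected, so requests for cloud.domain.com can fall through to the default server instead. Matching on the Host header is server_name's job, as the working nas.domain.com block already does. A sketch of the corrected block, using only directives that already appear in the question:

server {
    listen 80;                       # bind by port only
    server_name cloud.domain.com;    # select this vhost via the Host header

    location / {
        # app2 reverse proxy settings follow
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.1.2:2080;
    }
}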
_codereview.4009
Unlike most(?) smart pointers, boost::intrusive_ptr has a non-explicit constructor. Given that, one could write typedef boost::intrusive_ptr<Foo> FooPtr; FooPtr MyFactory(){ return new Foo(a, b, c);} Or one could write FooPtr MyFactory(){ return FooPtr(new Foo(a, b, c));} The latter is more consistent with how the code would have to be written if FooPtr were another kind of smart pointer, like shared_ptr or unique_ptr. The former is more concise. I personally prefer concise as long as it doesn't lead to trouble for readers and maintainers later. I'm wondering: is the former too concise?
Which is the better way to return a new instance of a boost intrusive_ptr?
c++
null
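For contrast with the intrusive_ptr snippets in the question, a sketch of my own: with a smart pointer whose converting constructor is explicit, the concise form is rejected at compile time, which is why the second style is the one that transfers to other pointer types.

#include <memory>

struct Foo
{
    Foo(int, int, int) {}
};

using FooUnique = std::unique_ptr<Foo>;

FooUnique MyFactory()
{
    // return new Foo(1, 2, 3);          // does not compile: unique_ptr's
                                          // pointer constructor is explicit
    return FooUnique(new Foo(1, 2, 3));   // fine: the conversion is spelled out
}

The implicit conversion compiles only for intrusive_ptr, which is exactly the trade-off between conciseness and consistency that the question describes.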