id | question | title | tags | accepted_answer |
---|---|---|---|---|
_scicomp.19177 | I have found that the method of lines is a very natural way to think about the discretization of PDEs. Therefore I always default to that mindset when presented with a new set of equations. I have never seen a PDE where this would not work. What I am wondering is if there are discretization methods (or types of PDEs) which cannot be formulated through the method of lines. I expect that any PDE where the time derivative is implicit in the equation and can't be solved for would be one such case (although I know of no actual example of this). I am looking for reasoning as to why the method of lines is always applicable, or a counterexample. | Can the method of lines be used to discretize all PDEs? | pde;method of lines | One situation in which the usual method-of-lines approach cannot be used in a straightforward way is with equations that have mixed space-time derivatives. By the usual method-of-lines approach, I mean discretization of spatial derivatives followed by application of a Runge-Kutta or linear multistep method. This usually applies only to systems of first-order (in time) evolution PDEs. An example of equations with such mixed derivatives is Eq. (2.1) of http://epubs.siam.org/doi/pdf/10.1137/060676064. In at least some cases, it is possible to rewrite such equations as first-order systems of evolution PDEs, but I don't immediately see a way to do it here. There may be other tricks to apply the method of lines to such equations, but I don't know of them. |
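For contrast with the mixed-derivative counterexample, the standard case the asker describes can be made concrete. This is my own illustration, not taken from the answer: the heat equation u_t = u_xx is semi-discretized in space with central differences, and the resulting ODE system is stepped with explicit Euler (any Runge-Kutta or multistep method would do in its place).

```python
import math

# Method of lines for u_t = u_xx on [0, 1] with u = 0 at both ends.
n = 20                      # interior grid points
h = 1.0 / (n + 1)           # grid spacing
x = [(i + 1) * h for i in range(n)]
u = [math.sin(math.pi * xi) for xi in x]   # initial condition

def rhs(u):
    """Second-order central difference for u_xx (Dirichlet boundaries)."""
    out = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        out.append((left - 2.0 * u[i] + right) / h**2)
    return out

dt = 0.4 * h * h            # small enough for explicit Euler stability
t = 0.0
while t < 0.1:
    du = rhs(u)
    u = [ui + dt * dui for ui, dui in zip(u, du)]
    t += dt

# The exact solution decays like exp(-pi^2 t); check the midpoint agrees.
mid = u[n // 2]
exact = math.exp(-math.pi**2 * t) * math.sin(math.pi * x[n // 2])
assert abs(mid - exact) < 0.01
```

The same recipe fails when a mixed derivative like u_{xxt} appears, because after spatial discretization the time derivative is no longer isolated on the left-hand side.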
_codereview.173797 | I am fairly new to redux and I am trying to re-architect an old application -- so the first page will present the user with a possible splash screen and then a login form -- and on successful login, present the user with the main website. I would like some help with react antd form and redux handling/processing for login authentication handling. Enhancing these demos. Links to similar issues - something broken down etc., common problems that may be faced etc., maybe even logging back out functions. I have these old components, so this is how the application used to look. I would most likely use antd for the form components. Here is a demo I've got going which uses redux -- but this just fetches data from an api -- then renders it -- I am keen to pick parts of this code to replace parts of form handling etc. This redux demo is on this jsfiddle for reference: http://jsfiddle.net/0ht35rpb/104/ //redux demo import React, { Component } from 'react' import { render } from 'react-dom' import {Provider, connect} from 'react-redux' import {createStore, applyMiddleware} from 'redux' import thunk from 'redux-thunk'; import MapChart from './modules/mapChart/MapChart' import './index.css' function fetchPostsRequest(){ return { type: "FETCH_REQUEST" }} function fetchPostsSuccess(payload) { return { type: "FETCH_SUCCESS", payload }} function fetchPostsError() { return { type: "FETCH_ERROR" }} const reducer = (state = {}, action) => { switch (action.type) { case "FETCH_REQUEST": return state; case "FETCH_SUCCESS": return {...state, posts: action.payload}; default: return state; }} function fetchPostsWithRedux() { return (dispatch) => { dispatch(fetchPostsRequest()); return fetchPosts().then(([response, json]) =>{ if(response.status === 200){ dispatch(fetchPostsSuccess(json)) } else{ dispatch(fetchPostsError()) } }) }} function fetchPosts() { const URL = 'https://data.police.uk/api/crimes-street/all-crime?poly=52.268,0.543:52.794,0.238:52.130,0.478&date=2017-01'; return fetch(URL, { 
method: 'GET'}) .then( response => Promise.all([response, response.json()]));} // this is how you'll get your icon links // instead of a switch with loads of repetitive bytes const iconMap = { 'anti-social-behaviour': 'green-dot', 'burglary': 'red-dot', 'other-theft': 'pink-dot', 'public-order': 'purple', 'robbery': 'yellow', 'vehicle-crime': 'orange', 'violent-crime': 'orange-dot', 'other-crime': 'ltblue-dot', 'criminal-damage-arson': 'yellow-dot', 'drugs': 'purple-dot', 'shoplifting': 'blue-dot'} // this is a class because it needs state class CrimeMap extends Component { state = { markers: [] } componentDidMount() { this.props.fetchPostsWithRedux() } componentWillReceiveProps(nextProps){ this.setState({markers: this.mapLayerData(nextProps.posts)}); } // store only the data you want to pass as props to each Marker // instead of mapping it directly in MapChart every time it renders mapLayerData(markers) { // use a standard map function instead of $.each // destructuring saves time and repetition return markers.map(({ category, location }) => ({ // use a template string and a simple map of icon names to get your icon uri icon: 'http://maps.google.com/mapfiles/ms/icons/'+ iconMap[category] +'.png', label: category, name: category, position: { lat: location.latitude, lng: location.longitude } })) } render() { // there's only one layer, so render it directly return ( <div className="app"> <MapChart markers={this.state.markers} /> </div> ) }} function mapStateToProps(state){ return { posts: state.posts }} let Container = connect(mapStateToProps, {fetchPostsWithRedux})(CrimeMap); const store = createStore( reducer, applyMiddleware(thunk)); render( <Provider store={store}> <Container/> </Provider>, document.getElementById('root')); Here is the jsfiddle with the old code on, if it's beneficial to demonstrate changes/improvements there: http://jsfiddle.net/0ht35rpb/103/ //Old code import React from 'react'; import ReactDOM from 'react-dom'; //splash page var SplashPage = 
React.createClass({ componentDidMount: function() { function animateSplash(){ $('.splash-page').animate({ opacity: 0, }, 2500, linear, function() { // Animation complete. $('.splash-page').hide(); }); } setTimeout(function(){ animateSplash(); }, 4000); }, render: function() { return ( <div className=splash-page> <div className=content> <div className=splash> SPLASH PAGE </div> </div> </div> ); }});//form pagevar LandingForm = React.createClass({ componentDidMount: function(){ }, getInitialState: function() { return {postcode: '', propertytype: '', propertysize: '', propertytypename: '', isValid: false}; }, checkPostCodeValid: function(target, postcode){ var host = this.props.source[0][local]; if(window.location.hostname != localhost){ host = this.props.source[0][live]; } var isPostcodeValidApiCall = host + 'codeigniter/index.php/api/isPostcodeValid/'+postcode; //quick hack to bypass email validator on computer with no db if(window.location.hostname == localhost){ isPostcodeValidApiCall = host + _test_emailvalidate_json1_nw1.php; } var that = this; this.serverRequest = $.get(isPostcodeValidApiCall, function (response) { var response = JSON.parse(response); var errorMsg = response.isPostcodeValid.errorMessage; var isValid = response.isPostcodeValid.isValid; if(!isValid){ //show error message setLabelError(target, errorMsg, true);//target, error message, show error } else{ //do not show error setLabelError(target, errorMsg, false);//target, error message, show error } that.setState({ isValid: isValid }); }.bind(this)); }, handlePostcodeChange: function(e) { this.setState({postcode: e.target.value}); }, handleValidation: function(e){ //do a check on char lengths greater than 1 if((e.target.value).length > 1){ this.checkPostCodeValid(e.target, e.target.value); } }, handlePropertyTypeChange: function(val) { this.setState({propertytypename: val.label}); this.setState({propertytype: val.value}); setReactSelectorLabel(val); }, handlePropertySizeChange: function(val) { 
this.setState({propertysize: val.value}); setReactSelectorLabel(val); }, handleSubmit: function(e) { e.preventDefault(); var postcode = this.state.postcode.trim().toUpperCase(); var propertytype = this.state.propertytype.trim(); var propertysize = this.state.propertysize.trim(); var isValid = this.state.isValid; //console.log(isValid, isValid); var propertytypename = this.state.propertytypename.trim(); if (!postcode || !propertytype || !propertysize || !isValid) { return; } // TODO: send request to the server this.setState({postcode: '', propertytype: '', propertysize: '', propertytypename: '', isValid: false}); var shortPostcode = ((postcode.substring(0, postcode.length-3)).trim()); var postCodeSector = ((postcode.substring(0, postcode.length-2)).trim()); //build queries var masterApiCall = '/api/master/'+shortPostcode+'/'+propertytype+'/'+propertysize; var directApiCall = '/api/direct/'+postcode; var dataEntry = [ { directApiCall : directApiCall, masterApiCall: masterApiCall, postCodeSector: postCodeSector, postCode: shortPostcode, fullPostCode: postcode, propertyType: propertytype, propertySize: propertysize, propertyTypeName: propertytypename } ]; ReactDOM.render( <DataPage root={initConfig} source={dataEntry} />, document.getElementById('root'), function() { //xx } ); }, componentWillUnmount: function() { this.serverRequest.abort(); }, render: function() { var optionsPropertyType = [ { value: 'val1', label: 'key1' }, { value: 'val2', label:'key2'} ]; var optionsPropertySize = [ { value: 'val1', label: 'key1' }, { value: 'val2', label:'key2'} ]; return ( <div className=container data-role=screenbackground> <div className=row> <div className=col-md-12 col> <div className=form-components> <form id=mainform data-role=form-aesethetics className=commentForm onSubmit={this.handleSubmit}> <fieldset> <label>Postcode</label> <input placeholder=Postcode type=text name=postcode value={this.state.postcode} onChange={this.handlePostcodeChange} 
onKeyPress={this.handleValidation} /> </fieldset> <fieldset> <label>Property Type</label> <Select name=propertytype value={this.state.propertytype} options= {optionsPropertyType} onChange={this.handlePropertyTypeChange} placeholder=Property type /> </fieldset> <fieldset> <label>Property Size</label> <Select name=propertysize value={this.state.propertysize} options= {optionsPropertySize} onChange={this.handlePropertySizeChange} placeholder=Property size /> </fieldset> <fieldset className=aligninmobile> <input type=submit disabled={!this.state.isValid} value=Let's Go/> </fieldset> </form> </div> </div> </div> </div> ); }});//Data Pagevar DataPage = React.createClass({ getInitialState: function() { return { rawDirectData: '' }; }, componentDidMount: function () { //check host to see which source to use var host = this.props.root[0][local]; if(window.location.hostname != localhost){ host = this.props.root[0][live]; } var directApi = host + this.props.source[0][directApiCall]; if(window.location.hostname == localhost){ directApi = host + _test_direct_json1_nw1.php; } var that = this; function requestData(source, callback){ that.serverRequest = $.get(source, function (response) { callback(response); }.bind(that)); } requestData(directApi, function(rawDirectData){ that.setState({ rawDirectData: JSON.parse(rawDirectData) }); }); }, componentWillUnmount: function() { this.serverRequest.abort(); }, render: function() { var props = this.props; var directApiData = this.state.rawDirectData; return ( <div className=container> <div className=row> <div className=col-md-12 col> <MultipleComponents root={props.root} source={props.source} /> </div> <div className=col-md-12 col> <ContactForm root={props.root} source={props.source} data={directApiData}/> </div> </div> </div> ); }});//LandingFormReactDOM.render( <div> <SplashPage /> <LandingForm source={initConfig} /> </div>, document.getElementById('root'), function(){ }); | Redux for form handling | comparative 
review;form;react.js;jsx;redux | null |
_unix.83511 | I'm trying to get a Qt application to launch immediately after booting up. When booted, the Linux image does nothing more than launch an X server and a terminal. It also has the cron daemon running in the background. Obviously, my Qt application needs the X server to be running to do anything.I've seen a similar question for Red Hat and SUSE Linux.However, I don't see this working for my image.I'm wondering if there is a standard way in Linux/UNIX to make a GUI application start immediately after the X Server.[sj755@localhost X11]$ tree /etc/X11//etc/X11/|-- functions|-- Xdefaults|-- Xinit|-- Xinit.d| |-- 01xrandr| |-- 11zaurus| |-- 12keymap| |-- 40xmodmap| |-- 50setdpi| |-- 55xScreenSaver| |-- 60xXDefaults| |-- 89xTs_Calibrate| `-- 98keymap-fixup|-- xmodmap| |-- a716.xmodmap| |-- collie.xmodmap| |-- default.xmodmap| |-- h1910.xmodmap| |-- h2200.xmodmap| |-- h6300.xmodmap| |-- hx4700.xmodmap| |-- keyboardless.xmodmap| |-- omap5912osk.xmodmap| |-- poodle.xmodmap| |-- shepherd.xmodmap| |-- simpad.xmodmap| |-- slcXXXX.xmodmap| |-- xmodmap-invert| |-- xmodmap-left| |-- xmodmap-portrait| `-- xmodmap-right|-- xorg.conf|-- Xserver|-- xserver-common|-- Xsession`-- Xsession.d |-- 60xXDefaults |-- 89xdgautostart `-- 90xXWindowManager3 directories, 36 filesroot@devboard:~# cat /etc/X11/Xsession.d/90xXWindowManagerif [ -x $HOME/.Xsession ]; then exec $HOME/.Xsessionelif [ -x /usr/bin/x-session-manager ]; then exec /usr/bin/x-session-managerelse exec /usr/bin/x-window-managerfi#!/bin/sh## Very simple session manager for Mini X## Uncomment below to enable parsing of debian menu entrys# export MB_USE_DEB_MENUS=1 if [ -e $HOME/.mini_x/session ]thenexec $HOME/.mini_x/sessionfiif [ -e /etc/mini_x/session ]thenexec /etc/mini_x/sessionfiMINI_X_SESSION_DIR=/etc/mini_x/session.dif [ -d $MINI_X_SESSION_DIR ]; then # Execute session file on behalf of file owner find $MINI_X_SESSION_DIR -type f | while read SESSIONFILE; do set +e USERNAME=`stat -c %U $SESSIONFILE` # Using su 
rather than sudo as latest 1.8.1 cause failure [YOCTO #1211]# su -l -c '$SESSIONFILE&' $USERNAME sudo -b -i -u $USERNAME $SESSIONFILE& set -e donefi# This resolution is big enough for hob2's max window size.xrandr -s 1024x768# Default files to run if $HOME/.mini_x/session or /etc/mini_x/session# don't exist. matchbox-terminal&exec matchbox-window-manager | Starting Qt Application On Startup for Embedded Linux | linux;startup;init;x server;qt | Have a look at /etc/X11/xinit/xinitrc (this may be in different places on different systems) to see what files it sources. Generally, this will have an if..elif..else structure, so that only one initialization file is read, with $HOME/.Xclients prioritized, then /etc/X11/xinit/Xclients. That's almost certainly where the terminal that appears comes from (I am presuming you do not have a desktop environment installed or anything). Anyway, if you just want to run a single GUI app, create (or modify) an Xclients file like this: #!/bin/sh myGUIapp. This should be executable. It's pretty much a normal shell script, I believe, so you can have more stuff in there, although obviously not backgrounding a GUI app will block execution at that point. [later addition] Your installation doesn't have exactly those files, but it does have an /etc/X11/Xinit.d, and if you look, I am sure those are short shell scripts and they are sourced from somewhere, probably one of the files in /etc/X11 -- Xsession, Xserver, or xserver-common. You might want to check if $XINITRC is defined in your environment; that will be a clue. Your best bet is probably to just create a $HOME/.Xclients file (or as jofel mentions, $HOME/.xinitrc, which is probably more universal) and try it -- exactly that spelling and case, with a leading dot, and it should be set chmod 755 (the group and other permissions may not matter). 
Almost certainly this will be sourced from somewhere properly. You can put files in /etc/X11/Xinit.d yourself, but doing it for this purpose is not a good idea, because yours should run last and block further execution. So have a look at the scripts in /etc/X11 (again: Xsession, etc., they don't have a .sh suffix) and try to figure out in what order they all chain together. It is also likely that somewhere one of them checks for an Xclients file, e.g. via something like if [ -x /some/path/Xclients ]; then. $HOME may also be used, and .xinitrc. Which is why creating at least one of these variations should work (write the file and move around/rename it if at first you don't succeed). To summarize: prime candidates for the name: .xinitrc and .Xclients, in either $HOME or /etc/X11/, but if in the latter, ditch the leading dot. |
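Put together, the minimal session file the answer describes would look like the sketch below. myGUIapp is a placeholder for the real binary, and the xrandr line is optional setup of the kind the question's session script already does:

```shell
#!/bin/sh
# Optional setup before the application starts (resolution, keymap, ...).
xrandr -s 1024x768

# Start the Qt application in the foreground; when it exits, the
# X session ends with it. Replace myGUIapp with the real binary name.
exec myGUIapp
```

Because exec replaces the shell with the application, nothing after that line runs, which is exactly the "runs last and blocks" behavior the answer recommends.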
_codereview.365 | I've had to make several functions to turn some structures into strings. I am still green when it comes to C, so I am unsure if I am doing this in a very awkward way. The system I am coding for does not have snprintf; I know that would be far more elegant, however I cannot use it. Any advice? int device_to_string(char* const asString, pDevice dev, size_t maxLength){ char* ipAsString; size_t actualLength; struct in_addr addr; if (dev == NULL) { return NULL_ERROR; } addr.s_addr = dev->ip; ipAsString = inet_ntoa(addr); actualLength = strlen("name=") + strlen(dev->name) + strlen("&ip=") + strlen(ipAsString) + strlen("&mac=") + strlen(dev->mac) + strlen("&type=") + strlen(dev->type) + 1; if (actualLength > maxLength) { return SIZE_ERROR; } strncat(asString, "name=", strlen("name=")); strncat(asString, dev->name, strlen(dev->name)); strncat(asString, "&ip=", strlen("&ip=")); strncat(asString, ipAsString, strlen(ipAsString)); strncat(asString, "&mac=", strlen("&mac=")); strncat(asString, dev->mac, strlen(dev->mac)); strncat(asString, "&type=", strlen("&type=")); strncat(asString, dev->type, strlen(dev->type)); asString[actualLength] = '\0'; return NO_ERROR;} | Struct to web style string | c;strings | Yeah, without snprintf and sprintf it gets a bit tedious, but I think this code is actually quite clear. You use your horizontal and vertical whitespace very well, and it's clear what you're doing with each block of code. You have also controlled for any possible issues that might come up (null pointer, insufficient buffer length, etc). Maybe there's a more concise way to do it, but in terms of clarity and maintainability I think this code will suffice. |
_scicomp.15883 | I'd like to calculate the PV of an integral with the form $$ \tilde{G}_l(\omega) = -\frac{2\omega}{\pi} PV\int_0^\infty \frac{\tilde{G}_d(\omega^\prime)}{\omega^2 - {\omega^\prime}^2}d\omega^\prime$$ in MATLAB. It is not obvious to me how to do this. I've tried using factorization to change the power in the denominator to allow for use with the Hilbert function in MATLAB, but that hasn't worked. Any insight would be appreciated. $\tilde{G}_d(\omega)$ is an even function, so I feel like I could do a contour integration and pick up the residue in the right-half plane, but I would rather do it numerically, as $\tilde{G}_d(\omega)$ exists more completely as a numerical function than an analytic one. | Numerical Principal Value Integration - Hilbert like | quadrature | null |
_unix.11620 | I wondered about some missing space on my ext3 partition and, after some googling, found that debian based ubuntu reserves 5% of the size for root.I also found posts describing how to change that size via tune2fs utility.Now I've got 2 questions, that I didn't find clear answers for: should I unmount the partition before changing the reserved space. what could happen if I don't?how much space should I reserve for the filesystem, so that it can operate efficiently?Thank you! | how much space to reserve on ext3 filesystem to prevent fragmentation issues? | ext3;ext2 | You don't need to unmount the partition prior to doing this. Regarding question two, it depends. As HDDs have grown in size, so has the total amount of disk space that's reserved for root. If you have a 2 TB HDD and it's totally used for /, then I would say you could quite safely tune it down to 1% by doing this:$ sudo tune2fs -m 1 /dev/sda*X*A smaller drive in the region of 320 GB I'd probably leave as is.Keep in mind that drives that are for data storage purposes don't really need all this space reserved for root. In this case you can change the number of reserved blocks like this:$ sudo tune2fs -r 20000 /dev/sdb*X*Hope that helps.EDIT: Regarding fragmentation issues, ext file systems are inherently immune to fragmentation issues. To quote Theodore Ts'o:If you set the reserved block count to zero, it won't affect performance much except if you run for long periods of time (with lots of file creates and deletes) while the filesystem is almost full (i.e., say above 95%), at which point you'll be subject to fragmentation problems. Ext4's multi-block allocator is much more fragmentation resistant, because it tries much harder to find contiguous blocks, so even if you don't enable the other ext4 features, you'll see better results simply mounting an ext3 filesystem using ext4 before the filesystem gets completely full. |
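To see why the default matters on large drives, the arithmetic behind the answer's advice can be checked directly (sizes in GiB, plain integer shell arithmetic; the 2 TiB figure is the answer's example, not a requirement):

```shell
# Reserved space at the default 5% versus 1% on a 2 TiB filesystem.
total_gib=$((2 * 1024))              # 2 TiB expressed in GiB
reserved_5=$((total_gib * 5 / 100))  # default: ~102 GiB held back for root
reserved_1=$((total_gib * 1 / 100))  # after `tune2fs -m 1`: ~20 GiB
echo "5% -> ${reserved_5} GiB, 1% -> ${reserved_1} GiB"
```

On a 320 GB drive the same 5% is only about 16 GiB, which is why the answer suggests leaving smaller drives alone.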
_codereview.53872 | The following is a series of functions to ensure that various elements on a page line up no matter what the window size is, or when the window is resized. However, I'm not sure my code is very concise, as I am using the same function on three different event handlers. I read that function expressions aren't hoisted, so I can't get the proper values on $(document).ready, but I'm not sure how I could declare them without making the code really messy and repetitive. I've made a jsfiddle with the complete HTML and CSS. function captHeight(){ var figCaption = $('.cap-bot').find($('figcaption')); var captionHeight = Math.max.apply(null, $('figcaption').map(function () { return $(this).height(); }).get()); $('.cap-bot').css({ 'overflow': 'visible' }); figCaption.css({ 'position': 'relative', 'opacity': '1', 'bottom': '0', 'min-height': captionHeight + 20 + "px" //to account for padding });} function sameHeight(){ if($('.find-height').length > 0){ var foundHeight = $('.find-height').height(); var gaveHeight = $('.give-height').height(); if (gaveHeight > foundHeight){ captHeight(); } $('.give-height').css({ 'min-height': foundHeight }); }} $(document).ready(function(){ sameHeight();}); $(window).on('resize', function(){ sameHeight();}); $(window).on('scroll', function(){ sameHeight();}); | Aligning page elements with window size | javascript;beginner;jquery | "I read that function expressions aren't hoisted" Yes. See var functionName = function() {} vs function functionName() {}. "so I can't get the proper values on $(document).ready" Why that? Unless the assignment to sameHeight did happen after the $(document).ready call, you can easily get the function. "but I'm not sure how I could declare them without making the code really messy and repetitive." You already have declared them, they are hoisted and everything is fine. 
However, even if you did use function expressions: var captHeight = function() { }; var sameHeight = function() { }; $(document).ready ... then everything would have been fine. Notice that, as @Flambino mentioned already in the comments, you can shorten the part where you install the event listener to $(document).ready(sameHeight); $(window).on('resize', sameHeight).on('scroll', sameHeight); |
_unix.274618 | All top links when Googling for OOM score seem to indicate that the values must be between -1000 and 1000.I tried to verify that with a simple cat /proc/*/oom_score | sort -n | less and I encountered all sorts of values ranging from 0 to 30132501, with the vast majority being between 1000 and 10000.How should I interpret this?To my knowledge nothing is manually adjusting the oom_score files.System information:%uname -aLinux hhgw16 2.6.18-371.el5 #1 SMP Tue Oct 1 08:35:08 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux% lsb_release -aLSB Version: :core-4.0-amd64:core-4.0-ia32:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-ia32:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-ia32:printing-4.0-noarchDistributor ID: CentOSDescription: CentOS release 5.10 (Final)Release: 5.10Codename: Final | OOM Killer: Processes have a score of over a 1000 | linux;memory;proc;out of memory | null |
_webapps.106048 | Whenever I try to make a sayat.me account it comes up with the error 'sorry, we cannot process your account.' I have tried signing up through social media such as Facebook but it still doesn't work. What is the issue here? | Can't sign up to sayat.me, it says "sorry, we cannot process your account" | sayat.me | null |
_unix.375928 | Following this guide:https://wiki.archlinux.org/index.php/Orange_PiI encounter this problem:input in flex scanner failedmake[1]: *** [orangepi_zero_defconfig] Error 2make: *** [orangepi_zero_defconfig] Error 2After I enter this command:make -j4 ARCH=arm CROSS_COMPILE=arm-none-eabi- orangepi_zero_defconfigI have attempted using a capital and lower-case z as the first letter in zero as well as removing the space between eabi- and orangepi | Compiling Orange Pi on macOS | osx;compiling | null |
_scicomp.396 | Is it possible to apply Richardson extrapolation with the Euler-Maruyama scheme to improve the strong rate of convergence for stochastic differential equations? | Richardson extrapolation for the strong rate of convergence of SDEs | numerics;ode;convergence;stochastic | null |
_softwareengineering.181171 | I've read a couple of articles (example) that consider the classic for (int ...... size() .... i++) loop bad practice when iterating through, for example, vectors in the STL. Instead, the recommended way is to use vector iterators, i.e. begin, end, etc. Why is that? | Why is using classic for loops as iterators in the STL considered bad? | stl | If you know you're operating on a std::vector, and that's never going to change, there's nothing fundamentally wrong with that form of for loop - it's well understood, easy to read, and the compiler will usually do a good job of eliminating the potential cost of calling size() on every loop. But using a for loop with iterators is more generic. It doesn't require that your container have a size() operator at all, and doesn't require that the container have index-based accessors at all (let alone efficient ones). So your code will be more generic if you go that route. Other things to consider these days: The range-based for loop: compact syntax, very few requirements on the container type. Algorithms like for_each/find/fill/... combined with lambdas, if the body of the loop lends itself to it, for more functional-style C++. |
_webmaster.93492 | Can I use: <link rel="alternate" hreflang="en" href="?l=en" /> to link pages with different languages for SEO (recommended for example by Google)? Can that URL be relative, as in the example above, or does it have to be full (start with http://example.com)? | Can a <link> element's href attribute be relative? | seo;html;language;hyperlink;relative urls | null |
_unix.359207 | Recently I reduced the size of the title bar in Fedora 24, Gnome 3.20 version. Basically, I want to reduce the size of every widget so that the space taken by applications on screen will be reduced. Any help appreciated. | Is there any way we can reduce the size of all widgets, including buttons, menus/menu items, etc., in Gnome 3.20 or above? | fedora;gnome3;desktop environment | null |
_unix.217920 | Emacs, as far as I remember, must load the .Xresources file on startup and read the font rendering settings from there. But mine does this only if I run xrdb -merge first and then start emacs. I think I have something misconfigured here. As my emacs starts as a systemd service, I've just added ExecStartPre=xrdb -merge ~/.Xresources in the emacs.service. It practically solves the issue. But I still want to know why it's not working as it should be. Also, my .Xdefaults is a symlink to .Xresources, and I use KDE on openSUSE. | Loading X resources from .Xresources and .Xdefaults for Emacs | emacs;x resources | It is working exactly the way it should. ~/.Xresources is conventionally loaded when you log in. Many distributions do this as part of the X session startup scripts. If you don't use a full desktop environment, you may need to add it to your login scripts, which would be systemd in your case, just like you did. The resources from ~/.Xresources are loaded into the X server and apply to all programs that display on that X server. You may be confusing .Xresources with ~/.Xdefaults, which is loaded by each X client application when it starts, and then applies to this application. When using X remotely, the ~/.Xresources file is on the X server side, whereas the ~/.Xdefaults file is on the client side. Note that Emacs only loads ~/.Xdefaults if no resources are loaded in the server (more precisely, if XResourceManagerString returns NULL). It also loads ~/.Xdefaults-$HOSTNAME unconditionally. I don't know why. |
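For reference, font-rendering resources of the kind the question describes typically look like this in ~/.Xresources (the values below are illustrative, not taken from the question):

```
Xft.dpi:       96
Xft.antialias: true
Xft.hinting:   true
Xft.hintstyle: hintslight
Xft.rgba:      rgb
Emacs.font:    DejaVu Sans Mono-11
```

Once this file has been loaded into the server with xrdb -merge, every client started afterwards, including Emacs, sees the settings, which matches the behavior the asker observed.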
_datascience.19097 | I am currently working with classifying patterns, and thought a good place to start would be using already established models such as VGG16, Inception models, and so on. The problem is that my images are pretty small: they are of shape (8,15,3). Are there any well-established models able to handle this? | Deep learning models for classification, able to classify small image patches? | classification;deep learning;keras;convnet;model selection | null |
_webmaster.101961 | I am trying to track clicks on my site using the analytics.js script for tracking events. I implement the snippet and test using the Real Time view, and nothing shows up. Here is the snippet I am using: <a href="my-file-link.pdf" onClick="ga('send', 'event', {eventCategory: 'download', eventAction: 'click', eventLabel: 'downloadoverview'});" target="_blank">Link</a> I have also tried: <a href="my-file-link.pdf" onClick="ga('send', 'event', 'download', 'click', 'download overview');" target="_blank">Link</a> Anyone know what I am doing wrong? | Google Analytics events not registering | google;google analytics;universal analytics;analytics events | null |
_codereview.98060 | I made this Renderer class recently, to simplify the user interface of my library's API. I would like to ask advice about the move semantics (copy constructor, std::unique_ptr, ...), or any other things you can point out to improve my code.As you can see, I take in parameters in RenderObject in the form of a pointer. This is simply because Material uses a Texture class, whose assignment operators are disabled. It only has a move constructor (C++11), but Material cannot use it.RenderObject.h#pragma once#include alpha/Mesh.h#include alpha/Texture.h#include alpha/Light.h#include alpha/Transform.hclass RenderObject{public: RenderObject( Mesh* mesh, Material* material, Transform transform = Transform() ) : m_pmesh(mesh), m_pmaterial(material), m_transform(transform) {} ~RenderObject(){} void SetTransform(Transform transform){ m_transform = transform; } Transform GetTransform(){ return m_transform; } Mesh* GetMesh(){ return m_pmesh; } void SetMesh(Mesh* mesh){ m_pmesh = mesh; } Material* GetMaterial(){ return m_pmaterial; } void SetMaterial(Material* material){ m_pmaterial = material; } void SetPosition(glm::vec3 position){ m_transform.SetPos(position); } void SetRotation(glm::vec3 rotation){ m_transform.SetRot(rotation); } void SetScale(glm::vec3 scale){ m_transform.SetScale(scale); } glm::vec3 GetPosition(){ return m_transform.GetPos(); } glm::vec3 GetRotation(){ return m_transform.GetRot(); } glm::vec3 GetScale(){ return m_transform.GetScale(); }private: Transform m_transform; Mesh* m_pmesh = nullptr; Material* m_pmaterial = nullptr;};Material.hclass Material{public: template<typename T> Material(T&& texture, float shininess, const glm::vec3& specularColor = glm::vec3(0.5, 1.0, 1.5), const glm::vec3& emissiveColor = glm::vec3(0.0, 0.0, 0.0) ) : m_texture(std::forward<T>(texture)), m_shininess(shininess), m_specularColor(specularColor), m_emissiveColor(emissiveColor) {} ~Material(){ m_texture.Dispose(); ///not necessary... 
ressources are automatically disposed in texture ! } void Bind(unsigned unit = 0){ this->m_texture.Bind(unit); } void SetUniforms(Shader& shader) const{ shader.UpdateUniform3fv(materialSpecularColor ,glm::value_ptr(this->m_specularColor)); shader.UpdateUniform1f(materialShininess, this->m_shininess); shader.UpdateUniform3fv(materialEmissiveColor, glm::value_ptr(this->m_emissiveColor)); } void SetShininess(float shininess){ m_shininess = shininess; } void SetSpecularColor(glm::vec3 specularColor){ m_specularColor = specularColor; } void SetEmissiveColor(glm::vec3 emissiveColor){ m_emissiveColor = emissiveColor; } float GetShininess(){ return m_shininess; } glm::vec3 GetSpecularColor(){ return m_specularColor; } glm::vec3 GetEmissiveColor(){ return m_emissiveColor; } Texture& GetTexture(){ return m_texture; } Material& operator=(const Material& other) = delete; Material(const Material& other) = delete;protected: float m_shininess; glm::vec3 m_specularColor; glm::vec3 m_emissiveColor; Texture m_texture; ///more lighting... 
normal maps...};Renderer.h#pragma once#include <vector>#include alpha/Light.h#include alpha/Mesh.h#include alpha/Texture.h#include alpha/Light.h#include alpha/Transform.h#include alpha/RenderObject.hclass ForwardRenderer{public: ForwardRenderer(){}///default ctor ForwardRenderer(std::vector<RenderObject>* objects) : m_pobjects(objects), m_shader(res/basicShader.glslv, res/phongShader.glslf) {} ForwardRenderer( std::vector<RenderObject>* objects, PhongLight* light ) : m_pobjects(objects), m_plight(light), m_shader(res/basicShader.glslv, res/phongShader.glslf) {} ~ForwardRenderer(){}///default dtor void RenderAll(Camera& camera){ m_shader.Bind(); for(auto it = m_pobjects->begin(); it != m_pobjects->end(); ++it){ ///Set Uniforms camera.SetUniforms(m_shader, it->GetTransform()); it->GetMaterial()->SetUniforms(m_shader); m_plight->SetUniforms(m_shader); ///Render it->GetMaterial()->Bind(0);///sampler slot = 0; it->GetMesh()->Draw(); camera.SetUniforms(m_shader, Transform()); } } ForwardRenderer& operator=(const ForwardRenderer& other) = delete; ForwardRenderer(const ForwardRenderer& other) = delete;private: std::vector<RenderObject>* m_pobjects = nullptr; PhongLight* m_plight = nullptr; Shader m_shader;};typedef ForwardRenderer BasicRenderer; | Basic OpenGL Renderer class | c++;c++11;opengl | Resource management:In general, the code seems alright to me, but one thing you continue to do, if I recall from your previous questions, is to use raw pointers and manual resource/memory management. So I'd like to urge you to look into the standard smart pointers and start using them.The basic usage is:Resource is shared between instances of different objects: Frequent case for things like textures, materials and shaders. Use a shared_ptr for them.Resource is explicitly owned by an instance of an object: In a renderer, this could apply to many types of things, from light sources to object transforms.
Use a unique_ptr for those.A few other things:RenderObject is just a data container with a bunch of accessors. When you get to this point, it might be worth considering just making it a plain struct with all fields public, since there is no difference regarding encapsulation when you can freely get/set all fields. In such cases, the accessors only add a level of unnecessary indirection. However, you might also consider a redesign altogether, since the heavy use of accessors indicates that all the algorithms involving a RenderObject are outside the class, which in turn indicates that it is probably strongly coupled with other areas of the code and systems.I'm wondering what this is about? ~Material(){ m_texture.Dispose(); ///not necessary... ressources are automatically disposed in texture ! }So, if it is not necessary, as you say, why is it there? It would be safer indeed if the destructor of Texture did the cleanup, so you should probably avoid exposing a public Dispose() method anyways, to prevent making the mistake of having a disposed/invalid object in your hands. This is especially true for library code: make it as hard as possible for the user to shoot himself in the foot (dedicated users will still manage, but don't facilitate ;)).The fact that the constructor of Material is templated for a texture type puzzles me:template<typename T>Material(T&& texture, float shininess, ...The member m_texture is of type Texture, so unless there's some polymorphic conversion going on here, the template parameter should just be a Texture &&.Don't this-> qualify member variables, ever! We have a very amusing example right here on CR of how using it can lead to some pretty hilarious bugs.Avoid writing the empty constructors/destructors and let the compiler do its job providing the empty defaults. But in particular, don't do this: ~ForwardRenderer(){}///default dtorYep, anyone that has been programming in C++ for more than two days knows what a destructor looks like.
Unless you're using the comment for trolling purposes[*], trim it down.std::vector<RenderObject>* m_pobjects in ForwardRenderer seems like it could be declared by value. Then change the constructor to move the parameter or take it by && move-reference. The less pointer chasing the better when it comes to performance critical applications like real-time rendering. Make good use of your data cache.[*]: I recall once reading a comment just above a class destructor that said something like If you don't know what this is, then you're probably going to get fired... -- Obviously the author was being funny, and this kind of internal joke can help boost the morale of a team, but otherwise, never comment explaining obvious aspects of the language. |
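The smart-pointer advice can be sketched in a few lines. These are hypothetical stand-ins for the reviewed Texture/Material/RenderObject types (not the library's real classes); the point is only the ownership scheme: shared_ptr for shared resources, unique_ptr for exclusively owned parts, a plain aggregate instead of accessor boilerplate, and the Rule of Zero for cleanup.

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in types, not the reviewed library's real classes.
struct Texture { int id = 0; };            // ~Texture would free the GL handle

struct Material {
    std::shared_ptr<Texture> texture;      // textures are shared resources
    float shininess = 0.0f;
};

struct RenderObject {                      // plain aggregate, no get/set pairs
    std::shared_ptr<Material> material;    // materials are shared too
    std::unique_ptr<int> transform;        // stand-in for an exclusively owned part
};

// Two objects sharing one material: three owners in total.
inline long shared_owner_count() {
    auto mat = std::make_shared<Material>();
    RenderObject a{mat, std::make_unique<int>(1)};
    RenderObject b{mat, std::make_unique<int>(2)};
    return mat.use_count();
}
```

With this layout no destructor needs to be written at all: the last owner going out of scope releases the texture.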
_unix.208682 | When I press the right super key + P, gnome-shell switches between different display settings (how my monitors are arranged and what their resolution is). Sometimes I press the combination accidentally, which is really annoying.I looked through the gnome keyboard settings, and there are all kinds of shortcuts. But this one I cannot find.Where is it located and how can I disable it? | Disabling keyboard shortcut for changing display setup in gnome-shell | keyboard shortcuts;gnome shell | I don't usually use gnome (I installed Ubuntu and just use unity), but when I used gnome I set custom shortcuts, and I've done some reading around to check. Some OSs may need dconf instead of gconf.The answer here shows how to change keyboard shortcuts using gconf-editor. You need to select the unwanted shortcut in gnome tweak-tools or gconf-editor and press backspace to clear it.Alternative methods are given here and here, the questions are on askubuntu, but they're gnome specific questions. |
_unix.259274 | my goal is to store each listed line (actually each rule) into an array. My output:Chain INPUT (policy ACCEPT)num target prot opt source destination 1 ACCEPT udp -- 109.224.241.0/24 0.0.0.0/0 udp dpt:50602 ACCEPT udp -- 109.224.241.0/24 0.0.0.0/0 udp dpt:45693 ACCEPT udp -- 217.14.138.0/24 0.0.0.0/0 udp dpt:50604 ACCEPT udp -- 217.14.138.0/24 0.0.0.0/0 udp dpt:45695 ACCEPT udp -- 172.30.33.0/24 0.0.0.0/0 udp dpt:50606 ACCEPT udp -- 172.30.33.0/24 0.0.0.0/0 udp dpt:45697 ACCEPT udp -- 212.11.91.0/24 0.0.0.0/0 udp dpt:50608 ACCEPT udp -- 212.11.91.0/24 0.0.0.0/0 udp dpt:45699 ACCEPT udp -- 212.11.64.0/19 0.0.0.0/0 udp dpt:506010 ACCEPT udp -- 212.11.64.0/19 0.0.0.0/0 udp dpt:456911 ACCEPT udp -- 77.240.48.0/20 0.0.0.0/0 udp dpt:506012 ACCEPT udp -- 77.240.48.0/20 0.0.0.0/0 udp dpt:456913 LOG udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:4569 LOG flags 0 level 4 prefix AsteriskHack:14 DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:456915 LOG udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5060 LOG flags 0 level 4 prefix AsteriskHack:16 DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5060Chain FORWARD (policy ACCEPT)num target prot opt source destination Chain OUTPUT (policy ACCEPT)num target prot opt source destinationI do not want to limit the rule number. Each chain has several rules.After I executed my code:while IFS='' read line -r || [[ -n $line ]]; do array+=($line)done < <(iptables -L --line-numbers)When I echo ${array[@]} my output is horrible text in one line, so I would like to separate each line. Actually the content of this array consists of an unreal number of iptables rules (after echo ${!array[@]})I do not know how to ensure that the rows are split on the intended separator (newline). I am not sure if my separator is right.Thank you for your reply, M | Read each line - BASH | bash;text processing | Here's how I'd do it.
First, for simplicity, let's list each of the chains individually, since I'm assuming you want to know which chain a rule belongs to:$ iptables -L INPUT --line-numbersChain INPUT (policy ACCEPT)num target prot opt source destination1 ACCEPT udp -- 109.224.241.0/24 0.0.0.0/0 udp dpt:50602 ACCEPT udp -- 109.224.241.0/24 0.0.0.0/0 udp dpt:45693 ACCEPT udp -- 217.14.138.0/24 0.0.0.0/0 udp dpt:50604 ACCEPT udp -- 217.14.138.0/24 0.0.0.0/0 udp dpt:45695 ACCEPT udp -- 172.30.33.0/24 0.0.0.0/0 udp dpt:50606 ACCEPT udp -- 172.30.33.0/24 0.0.0.0/0 udp dpt:45697 ACCEPT udp -- 212.11.91.0/24 0.0.0.0/0 udp dpt:50608 ACCEPT udp -- 212.11.91.0/24 0.0.0.0/0 udp dpt:45699 ACCEPT udp -- 212.11.64.0/19 0.0.0.0/0 udp dpt:506010 ACCEPT udp -- 212.11.64.0/19 0.0.0.0/0 udp dpt:456911 ACCEPT udp -- 77.240.48.0/20 0.0.0.0/0 udp dpt:506012 ACCEPT udp -- 77.240.48.0/20 0.0.0.0/0 udp dpt:456913 LOG udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:4569 LOG flags 0 level 4 prefix AsteriskHack:14 DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:456915 LOG udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5060 LOG flags 0 level 4 prefix AsteriskHack:16 DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5060You can get an list of chain names like this:CHAINS=$(iptables -L | awk '/Chain /{print $2}')Now, let's use a few tricks to simply get these into an array:# We can just define the array from the contents of our command# output, using \r or \n as a field separator.# We use grep to ignore lines that don't start with a number.IFS=$'\r\n' GLOBIGNORE='*' command eval 'INPUT_RULES=($(iptables -L INPUT --line-numbers | grep '^[0-9]'))'With Bash 4, you can also use the mapfile builtin:IFS=$'\r\n' mapfile INPUT_RULES < <(iptables -L INPUT --line-numbers | grep '^[0-9]')Now, I don't know your specific use case, but if you query each chain one at a time, you should also be able to remove the line numbers or use them as keys in an associative array, but maybe they're fine being included.If you don't want to use grep, but still want to exclude the first 
two lines from the array, you can just unset the first two elements after the fact, like this:array=(${array[@]:2})Also note that in your original example:echo ${array[@]}will put everything on one line whether it's in separate array keys or not. A better way to accurately view the array with one element per line, would be this:for rule in ${array[@]}; do echo LINE: $rule; done |
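The mapfile variant from the answer can be demonstrated without root by standing in for `iptables -L INPUT --line-numbers` with printf (the rule text here is made up for the demo):

```shell
#!/bin/bash
# printf stands in for `iptables -L INPUT --line-numbers`, which needs root;
# grep keeps only the numbered rule lines, and mapfile -t stores one rule
# per array element with the trailing newlines stripped.
list_rules() {
    printf '%s\n' \
        'Chain INPUT (policy ACCEPT)' \
        'num  target  prot opt source       destination' \
        '1    ACCEPT  udp  --  10.0.0.0/24  0.0.0.0/0' \
        '2    DROP    udp  --  0.0.0.0/0    0.0.0.0/0'
}

mapfile -t INPUT_RULES < <(list_rules | grep '^[0-9]')

printf 'captured %d rules\n' "${#INPUT_RULES[@]}"
printf '%s\n' "${INPUT_RULES[@]}"
```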
_scicomp.2369 | I'm diving into the fascinating world of finite element analysis and would like to solve a large thermo-mechanical problem (only thermal $\rightarrow$ mechanical, no feedback). For the mechanical problem, I already grasped from Geoff's answer that I'll need to use an iterative solver due to the size of my mesh. I further read in Matt's reply that the choice of the correct iterative algorithm is a daunting task.I'm asking here whether there is any experience with large 3-d linear-elastic problems that would help me narrow down my search for the best performance. In my case, it's a structure with thin, patterned films, and irregularly placed materials (both high-CTE and low-CTE). There are no large deformations in this thermo-mechanical analysis. I can use my university's HPC [1.314 nodes, with 2 AMD Opteron processors (each 2.2 GHz/ 8 cores)].I think PETSc could contain something interesting, especially the algorithms which do some sort of domain decomposition (FETI, multigrid), but I'm a bit overwhelmed by the options and have no experience. I also like the phrase geometrically informed preconditioners, but am unsure if this helps me. I have not yet found something focussing on linear continuum mechanics.Strong scaling (Amdahl) is very important in my application because my industrial partners can't wait a long time for simulation results. I would definitely appreciate not only answers, but also recommendations for further reading in the comments. | What is a robust, iterative solver for large 3-d linear-elastic problems? | pde;parallel computing;petsc;hpc;iterative method | Assuming that your structures are actually 3D (rather than only thin features, perhaps discretized with shell elements) and that the model is larger than a few hundred thousand dofs, direct solvers become impractical, especially if you only need to solve each problem once.
Additionally, unless the structure is always close to a Dirichlet boundary, you will need a multilevel method to be efficient. The community is divided between multigrid and multilevel domain decomposition. For a comparison of the mathematics, see my reply to: What is the advantage of multigrid over domain decomposition preconditioners, and vice versa?The multigrid community has generally been more successful at producing general purpose software. For elasticity, I suggest using smoothed aggregation which requires an approximate near null space. In PETSc, this is done by choosing PCGAMG or PCML (configure with --download-ml) and calling MatSetNearNullSpace() to provide the rigid body modes.Domain decomposition methods offer an opportunity to coarsen faster than smoothed aggregation, thus possibly being more latency tolerant, but the sweet spot in terms of performance tends to be narrower than smoothed aggregation. Unless you want to do research on domain decomposition methods, I suggest just using smoothed aggregation, and perhaps try a domain decomposition method when the software becomes better. |
_cs.3462 | I'm reading through Computers and Intractability: A guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson, p. 20 and I came across this concept of a function that is polynomially related to input lengths obtained using some encoding scheme. Let $$Len:D_{\Pi}\rightarrow \mathbb Z^+$$ be a function that maps instances $\in D_{\Pi}$ (the set of instances of decision problem $\Pi$) to positive integers (lengths). Let $x$ be the string obtained from $I\in D_{\Pi}$ under some encoding $e$. If there exist polynomials $p$ and $p'$ such that $$Len(I) \le p(|x|)$$ and $$|x| \le p'(Len(I)),$$ We say that $Len$ is polynomially related to the input lengths obtained by the encoding $e$. I cannot digest that; my understanding is that two encodings are polynomially related if converting from one another requires a polynomial amount of time. Can anybody clarify things a bit? | Polynomially related lengths under two different encodings | complexity theory;polynomial time;encoding scheme | Garey and Johnson are referring to the fact that any encoding scheme for some instance $I$ of a problem $\Pi$ will only differ in length (i.e. number of bits) by a polynomial amount. For example, consider two possible ways to encode a graph: adjacency matrix, and adjacency list.It is not possible to obtain a super-polynomial speedup by using one encoding over another for various instances of problems. Again, this notion relies on the fact that encodings are reasonable. That is to say, we are not unnecessarily padding our encoding with some junk information. |
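A toy numeric illustration of the two inequalities (my own example, not from the book): take $Len(I)=n$, the number of vertices of a graph instance, and let $|x|$ be the length of an adjacency-matrix encoding, about $n^2$ bits. Then $p(k)=k$ and $p'(k)=k^2$ witness that $Len$ is polynomially related to this encoding's input lengths:

```python
# Len(I) = n (vertex count); |x| = n*n bits for an adjacency matrix.
# Len(I) <= p(|x|) with p(k) = k, and |x| <= p'(Len(I)) with p'(k) = k**2.

def Len(n):                  # the "natural size" of the instance
    return n

def encoding_length(n):      # bits in an n x n adjacency matrix
    return n * n

for n in range(1, 200):
    assert Len(n) <= encoding_length(n)        # Len(I) <= p(|x|)
    assert encoding_length(n) <= Len(n) ** 2   # |x| <= p'(Len(I))
```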
_unix.193634 | I'm modifying raw PCL/PS files (mixed) and for some reason I can't get my Sed syntax right for the true beginning that I want it to grab. Here's a sample output from the strings command:*c50B*c0P&f1X&f7y4X%-12345X%!PS-Adobe-3.0 EPSF-3.0 <------Sed doesn't work for this pattern%%Creator: tiff2ps <----Sed works for this pattern[data...]%%EOFHere's my sed command that works:sed -n '/%%Creator/,/%%EOF/p'But I want it to start with %-12345X%:sed -n '/%-12345X%/,/%%EOF/p'When I do the last command, it just outputs the entire file. No combination from that line works. Now, I'm viewing the raw print file with 'strings', could it be that that line is encoded in a way that sed can't understand? Any idea on working around this?Edited to add:I'm pretty sure this has to do with the encoding of PCL and line escapes. The file goes from PCL to PS, and doesn't create the first message of PS on its own line. Output of cat looks like this:*c50BESC*c0PESC&f1XESC&f7y4XESC%-12345X%!PS-Adobe-3.0 EPSF-3.0%%Creator: tiff2ps | Trouble with Sed Syntax | text processing;sed | My guess is that sed is doing exactly what you're telling it to do: Print out the first line that contains %-12345X%. But since this is not an ASCII file, but a PCL or PS file with all sorts of binary bytes in there- and no proper newline to speak of, until just in front of %%Creator:- it prints out the entire thing. Remember, sed prints the matching line. What you're asking it to do, I think, is print starting at this string.If you want to take a file that's not guaranteed to be line-oriented (such as this), you're going to have to use a technique that does not depend on line-oriented tools. This may help: how to dump part of a binary file. It's a little more complicated but your strings are pretty distinctive so it should do the trick.Hmm... just had an idea- maybe this would work. It deletes everything on the same line that's in front of the %-12345X% except for that string itself.
Then it prints everything from that line to the end of file. I haven't tried this, but that's how I'd approach it:sed -n -e '/%-12345X%/s/.*%-12345X%/%-12345X%/' -e '/%-12345X%/,/%%EOF/p'Or even better:STR=%-12345X%sed -n -e /${STR}/s/.*${STR}/${STR}/ -e /${STR}/,/%%EOF/p |
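The prefix-stripping trick can be checked on a reproducible stand-in file built with printf (the file name and the JUNK prefix are invented for the demo; here the marker does end up on a line sed can see, so this only exercises the s command, not the binary no-newline case):

```shell
#!/bin/sh
# Build a small stand-in for the print file: binary-ish junk and the PCL
# escape share one line with the %-12345X% marker.
printf 'JUNK\033%%-12345X%%!PS-Adobe-3.0 EPSF-3.0\n%%%%Creator: tiff2ps\n[data...]\n%%%%EOF\n' > sample.ps

STR='%-12345X%'
# First expression strips everything before the marker on its line,
# second prints from that line through %%EOF.
sed -n -e "/$STR/s/.*$STR/$STR/" -e "/$STR/,/%%EOF/p" sample.ps
```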
_unix.26457 | I have several questions about nmap. First, nmap can detect servers via:link-layernetwork-layertransport-layerWhat are the differences (not in the layer, but in the way that nmap does it) and how does nmap do this?Second, when I port scan with nmap, the UDP scan takes much longer than the TCP scan. Why?Third: are there any different methods to explore the OS than to use the -O --osscan-guess command (I mean totally different, not e.)? | Questions about nmap | networking;nmap | null |
_webmaster.78434 | I have implemented user IP detection on several of my eCommerce sites, so that they show the currency and delivery charges based upon the user's location. All is good, and this system has been working for a while, but I have begun to notice a worrying side effect with the search engines.Google seems to only crawl websites using US based IPs; country searches such as on google.co.uk now show prices by default in $'s (within the listings), which reduces the number of click-throughs as prices are no longer being shown in the user's local currency.The impact is even greater as these are all UK based sites. Even though they are set up to sell to the global market, we still want to keep our local market strong.One way could be to exclude the currency/delivery detection for the spider IP range, but the same still applies; we have a big following in Europe, US, Australia and New Zealand who convert better if they see their local currency within the search listings.There are product feeds set up with Google, but these don't filter into the main search listings.The alternative route is to implement country targeting through subfolders/subdomains (/uk/, /us/, /fr/, /it/), however this seems very clunky in a modern internet. For example I would have to list every country in Europe, even though they all pay in euros, and all have the same delivery price; in effect I'd be creating x50 extra identical pages for each product (as there are around 50 countries in Europe)Any suggestions? | Side effects of customising a user's currency based upon their IP | seo | Google's International section of their Webmaster Tools talks about Locale-aware crawling by Googlebot.
This appears to be relatively new (it seems to have been announced in January 2015), and is fully automated by Google (emphasis added):Today we're introducing new locale-aware crawl configurations for Googlebot for pages that we detect may adapt the content they serve based on the request's language and perceived location.They are using both geo-distributed crawlers as well as the accept-lang headers, which should help, but as they haven't said where these new IPs are coming from, they might still miss the UK.One thing that might also help is the use of micro-data on your site.This would allow you to serve all currencies in your markup, hide those that weren't valid for the current user/IP through CSS/JS and still have Google understand what you're doing:<div class=curr-gbp itemprop=offers itemscope itemtype=http://schema.org/Offer> <!--price is 1000, a number, with locale-specific thousands separator and decimal mark, and the $ character is marked up with the machine-readable code USD --> <span class=usd itemprop=priceCurrency content=USD>$</span> <span class=usd itemprop=price content=1000.00>1,000.00</span> <span class=gbp itemprop=priceCurrency content=GBP>£</span> <span class=gbp itemprop=price content=750.00>750.00</span> <span class=aud itemprop=priceCurrency content=AUD>$</span> <span class=aud itemprop=price content=1500.00>1,500.00</span></div>You set the class on the containing div to the currency you've selected for the user and then hide the other options through CSS:.curr-gbp .usd, .curr-gbp .aud { display: none; }.curr-usd .gbp, .curr-usd .aud { display: none; }.curr-aud .usd, .curr-aud .gbp { display: none; }Google should then recognise the mark-up and display it as appropriate in its listings. |
_webmaster.100355 | I am trying to create a widget to count number of users who have visited the site more 3 or more times within the time range.I can easily do this when setting up a segment:Note that the tooltip that appears for the question mark next to Sessions label defines it as Total number of Sessions within the date range. A session is the period time a user is actively engaged with your website, app, etc. All usage data (Screen Views, Events, Ecommerce, etc.) is associated with a session.However, when creating a widget, I cannot find the option to define such a filter. The closest I can find is Count of sessions, but there is not an arithmetic copmarison operator (it is because Count of sessions is a string dimension),so I have to match it with this regex: (^[3-9]$)|(^[0-9]{2,3}$):And the results are very different from the segment:Note 78.32% in widget vs 17.12% in segment.Question: Is there a way to filter the users by the number of sessions within the time range in a dashboard widget? | Number of sessions filter in Google Analytics custom dashboard widget | google analytics | Count of Sessions as you mentioned is a dimension not a metrics as we may expect it to be, it is incremented in the cookie with each session and is passed in GA as such. 
To elaborate: Suppose we have 3 users; the following is their visit log:User A - visits 1st time | Count Of Session =1 User A - visits 2nd time | Count Of Session =2 User A - visits 3rd time | Count Of Session =3User B - visits 1st time | Count Of Session =1User B - visits 2nd time | Count Of Session =2User C - visits 1st time | Count Of Session =1Now if we make a report of Users grouped by Count Of Sessions, we may have something like this:Count Of Sessions | Users 1 | 3 2 | 2 3 | 1 For your problem it would be difficult to present this in a single value; the data would make more sense in a tabular view. I've tried to address your case with the following widget:It's important to note that the number of users you see against each Count Of Sessions is not exclusive; to get the exclusive users for each Count Of Sessions we can simply compute:Users having only n sessions = Users with n Count Of Sessions - Users with (n+1) Count Of SessionsSo, Users having only 3 sessions would be 1,062,191 (3,962,517 - 2,900,326) |
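The subtraction rule can be checked against the example visit log (users A, B and C with 3, 2 and 1 sessions):

```python
# "Users with Count Of Sessions = n" really means "users who reached at
# least n sessions", so exact counts come from adjacent differences, as
# in the answer's formula.
cumulative = {1: 3, 2: 2, 3: 1}   # users appearing at Count Of Sessions n

exactly_n = {n: cumulative[n] - cumulative.get(n + 1, 0) for n in cumulative}

assert exactly_n == {1: 1, 2: 1, 3: 1}           # one user each: C, B, A
assert sum(exactly_n.values()) == cumulative[1]  # totals match the 3 users
```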
_cs.13383 | Is $L = \{A^n B^n C^n \mid n \in \mathbb{N}\}$ a context-free language, e.g. $AAAABBBBCCCC \in L$? If so, what is the context-free grammar that produces it? | Context Free Grammar for $\{A^nB^nC^n | n \in \mathbb{N}\}$ | formal languages;context free | null |
_webapps.100809 | I would like to convert all text in a range of cells to Title Case automatically. How can I do that? | Convert all text to Title Case in a Google Spreadsheet? | google spreadsheets | null |
_cogsci.8914 | People seem to believe that one's consciousness is a different consciousness than that of others, but the same consciousness as the one that has been in one's body in the past and the one that will be in one's body in the future. How likely is this to be true? I have almost no knowledge about neuroscience, so please refrain from using much neuroscience jargon if feasible, even if it's at the cost of not giving a thorough explanation. A simple yes, no, or we have no idea answer would suffice if need be. | Is the consciousness currently in oneself the same as the consciousness that used to be or in the future will be in oneself? | consciousness;philosophy of mind | null |
_unix.185156 | I've got a command in Linux which returns a list of numbers. Now I need these numbers to form a directory path and cat the file located at that path.For example:myCommand returns:1 1030 40And I want to cat all the files that look like this: /folder/1/folder2 /folder/10/folder2 /folder/30/folder2 /folder/40/folder2I hope it is clear what I want; if not, feel free to ask.This is my code right now: myCommand | xargs catBut it obviously doesn't work since directories 1, 10, 30 and 40 don't exist.I want the output to be: the catted value - number generated by myCommand | Reuse output from command 1 in command 2 using a pipe | linux;pipe;cat | There is no problem doing this with xargs:myCommand | xargs -I{} sh -c 'echo -n {}\ ; cat /folder/{}/folder2/file'But it is more syntactically correct and much more flexible to do it with a for loop:for genpath in $(myCommand)do echo -n $genpath\ cat /folder/$genpath/folder2/filedoneOr even with a while loop:while read genpathdo echo -n $genpath\ cat /folder/$genpath/folder2/filedone < <(myCommand) |
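The while-read variant runs as-is once myCommand and /folder are stood in for; here printf plays myCommand and a temporary directory replaces /folder (both are demo substitutions, not part of the original setup):

```shell
#!/bin/bash
# Stand-in for the real myCommand and /folder tree.
myCommand() { printf '%s\n' 1 10 30 40; }

root=$(mktemp -d)
for n in $(myCommand); do
    mkdir -p "$root/$n/folder2"
    echo "value-$n" > "$root/$n/folder2/file"
done

# The loop from the answer, with variables quoted.
while read -r genpath; do
    echo -n "$genpath "
    cat "$root/$genpath/folder2/file"
done < <(myCommand)
```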
_softwareengineering.179791 | I'm looking for some input and theory on how to approach a lexical topic.Let's say I have a collection of strings, which may just be one sentence or potentially multiple sentences. I'd like to parse these strings and rip out the most important words, perhaps with a score that denotes how likely the word is to be important.Let's look at a few examples of what I mean.Example #1:I really want a Keurig, but I can't afford one!This is a very basic example, just one sentence. As a human, I can easily see that Keurig is the most important word here. Also, afford is relatively important, though it's clearly not the primary point of the sentence. The word I appears twice, but it is not important at all since it doesn't really tell us any information. I might expect to see a hash of word/scores something like this:Keurig => 0.9afford => 0.4want => 0.2really => 0.1etc...Example #2:Just had one of the best swimming practices of my life. Hopefully I can maintain my times come the competition. If only I had remembered to take off my non-waterproof watch.This example has multiple sentences, so there will be more important words throughout. Without repeating the point exercise from example #1, I would probably expect to see two or three really important words come out of this: swimming (or swimming practice), competition, & watch (or waterproof watch or non-waterproof watch depending on how the hyphen is handled).Given a couple examples like this, how would you go about doing something similar? Are there any existing (open source) libraries or algorithms in programming that already do this? | Language parsing to find important words | parsing;languages | null |
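Even a crude term-frequency pass produces the kind of word/score hash sketched above. This is not a serious extractor (real systems use TF-IDF, RAKE or TextRank); the stop-word list is invented for the demo:

```python
import re
from collections import Counter

# Invented stop-word list for the demo.
STOP = {"i", "a", "the", "my", "of", "to", "but", "can", "can't", "one",
        "had", "if", "only", "come", "just"}

def keyword_scores(text):
    # lowercase words, drop stop words, score by frequency relative to the peak
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP)
    if not counts:
        return {}
    peak = counts.most_common(1)[0][1]
    return {w: c / peak for w, c in counts.items()}

scores = keyword_scores("I really want a Keurig, but I can't afford one!")
assert scores["keurig"] == 1.0   # content words survive
assert "i" not in scores         # stop words are dropped
```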
_unix.105690 | UPD: Changed question title from manage devices to discover information about devicesOne of the frequent things I do is look up info about devices on my systems. And I constantly find myself confused about the various different commands on Linux to query available disks, network adapters, graphics cards etc.For example, if I need to query all disk drives available, I do:ls -la /dev/disk/by-idIf I need to query all network cards available, I do:ls -la /sys/class/netIs there any single point to query all device ids by type?Maybe there was an initiative to unify the handling of device info and make it discoverable/accessible, but it failed? History of the question in order of appearance:/dev/disk/ lists disks, why /dev/net/ doesn't list network interfaces?Why are network interfaces not in /dev like other devices? | Is there a uniform way to discover information about devices? | devices;udev | There is no single standard or tool to query hardware devices on Linux systems in general. Depending on your host's architecture, and which of its components you must query, and how much detail you need about it, you may need one or more tools specific to that component. However some commands/tools are in wider use and have greater mindshare than others. Following are some--that may or may not be available for your particular host--but are nonetheless Generally Regarded As Useful and widely available from major package managers (though I only link to Debian below):all-purpose query tools:hwinfo.
hwinfo --short - gives a useful overview of The Whole Enchilada, and info hwinfo shows many other options for querying specific subsystems in detail.inxi is part of a larger collection of system administration tools with similarly general capabilities.tools for specific subsystems:dmidecode - processor, memory and motherboard details from BIOSlscpu - processor details from /proc/cpuinfolspci - PCI devices, typically graphics cards, audio cards, network cardslsusb - USB devices in generalls -l /dev/disk/by-{id,label}/ - block devices and their block device filesls -l /sys/class/net/ - network devices and their network interfacesudevinfo - block devices, if using udevI encourage people to expand this list if some essential tool is missing. |
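The per-subsystem tools above can be combined into a small survey script; which sections print depends entirely on what is installed (the tool list is just a sample):

```shell
#!/bin/sh
# Print the first few lines of output from each tool that is installed;
# `command -v` silently skips the missing ones.
for tool in "lscpu" "lspci" "lsusb" "hwinfo --short"; do
    cmd=${tool%% *}
    if command -v "$cmd" >/dev/null 2>&1; then
        printf '=== %s ===\n' "$tool"
        $tool 2>/dev/null | head -n 5
    fi
done
```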
_cs.57516 | Recently the following problem was posed in a private coding competition at my workplace:An array $A$ is given that has only positive integers in it. The objective is to equalize the array in the minimum possible number of steps. In every step all but one element are incremented by a fixed step size. The possible step sizes are given in another array $D$.For example, if $A=\{2,2,3,7\}$ and $D=\{1,2,5\}$ we can equalize $A$ in two steps, first adding $\{5,5,5,0\}$ to get $A=\{7,7,8,7\}$ and then adding $\{1,1,0,1\}$ to get $A=\{8,8,8,8\}$. In discussion forums it was suggested to repeatedly find the difference of the smallest and largest elements and then greedily increment all but the largest element by this amount. An implementation using this strategy passed all test cases. However I'm unable to see how this strategy will indeed result in the smallest number of steps. | Proof for This Greedy Strategy for Equalizing An Array | greedy algorithms | Actually I am not sure the greedy solution will always give the best solution.Recall that the greedy approach to the change making problem does not work for certain denominations. For example with denominations $1,3,4$ and a goal of $6$ the greedy solution will give $4,1,1$ whereas the optimal solution is $3,3$.This can be translated into your problem. Start with array $A=[0,6]$ and steps $D=\{1,3,4\}$. Now your greedy solution seems to behave like change making. I am not certain about this, however, as your description does not precisely say by which amount the numbers are incremented. Perhaps your algorithm chooses $[0,6] \to [4,6] \to [7,6] \to [7,7]$ (which is also wrong, anyway).
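The change-making counterexample in the answer checks out by brute force: with steps {1, 3, 4}, closing a gap of 6 greedily costs 3 steps while the optimum is 2.

```python
from functools import lru_cache

D = (1, 3, 4)

def greedy_steps(gap):
    # always take the largest step that fits
    steps = 0
    while gap > 0:
        gap -= max(d for d in D if d <= gap)
        steps += 1
    return steps

@lru_cache(maxsize=None)
def optimal_steps(gap):
    # exact minimum by dynamic programming
    if gap == 0:
        return 0
    return 1 + min(optimal_steps(gap - d) for d in D if d <= gap)

assert greedy_steps(6) == 3   # 6 -> 2 -> 1 -> 0, i.e. 4 + 1 + 1
assert optimal_steps(6) == 2  # 3 + 3
```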
_unix.308591 | I have a file with the text in the following format:line 1,line 2,< Blank line >line 3,line 4,< Blank line >line 5,line 4,< Blank line >I need to put it in the format:line 1,line 2,< Blank line >line 3,line 4,< Blank line >line 5,line 4,< Blank line >So I'm trying:tr -d '\n' < myfile.txt > myfile_res.txtBut then I get everything concatenated:line 1,line 2,line 3,line 4,line 5,line 4What I need is: to remove '\n' only from lines containing characters and leave the blank lines, and I believe it will work.Any idea how to do this? | trouble in formatting text with tr | text processing | null |
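One possible approach (not from the original thread, since the question has no accepted answer): awk's paragraph mode treats each blank-line-separated block as one record, so the newlines inside a block can be deleted while the blank separators are re-emitted.

```shell
#!/bin/sh
# Recreate the sample input, then join the lines of each block.
printf 'line 1,\nline 2,\n\nline 3,\nline 4,\n\nline 5,\nline 4,\n' > myfile.txt

awk 'BEGIN { RS = ""; ORS = "\n\n" } { gsub("\n", ""); print }' myfile.txt
```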
_cs.125 | In quantum computation, what is the equivalent model of a Turing machine? It is quite clear to me how quantum circuits can be constructed out of quantum gates, but how can we define a quantum Turing machine (QTM) that can actually benefit from quantum effects, namely, perform on high-dimensional systems? | How to define quantum Turing machines? | quantum computing;turing machines;computation models | (note: the full description is a bit complex, and has several subtleties which I preferred to ignore. The following gives merely the high-level ideas of the QTM model)When defining a Quantum Turing machine (QTM), one would like to have a simple model, similar to the classical TM (that is, a finite state machine plus an infinite tape), but allow the new model the advantage of quantum mechanics.Similarly to the classical model, a QTM has:$Q=\{q_0,q_1,..\}$ - a finite set of states. Let $q_0$ be an initial state.$\Sigma=\{\sigma_0,\sigma_1,...\}$, $\Gamma=\{\gamma_0,..\}$ - sets of input/working alphabets;an infinite tape and a single head.However, when defining the transition function, one should recall that any quantum computation must be reversible. Recall that a configuration of a TM is the tuple $C=(q,T,i)$ denoting that the TM is at state $q\in Q$, the tape contains $T\in \Gamma^*$ and the head points to the $i$th cell of the tape. Since, at any given time, the tape contains only a finite number of non-blank cells, we define the (quantum) state of the QTM as a unit vector in the Hilbert space $\mathcal{H}$ generated by the configuration space $Q\times\Sigma^*\times \mathbb{Z}$.
The specific configuration $C=(q,T,i)$ is represented as the state $$|C\rangle = |q\rangle |T\rangle |i\rangle.$$ (remark: Therefore, every cell in the tape is a $\Gamma$-dimensional Hilbert space.)The QTM is initialized to the state $|\psi(0)\rangle = |q_0\rangle |T_0\rangle |1\rangle$, where $T_0\in \Gamma^*$ is the concatenation of the input $x\in\Sigma^*$ with as many blanks as needed (there is a subtlety here in determining the maximal length, but I ignore it).At each time step, the state of the QTM evolves according to some unitary $U$: $$|\psi(i+1)\rangle = U|\psi(i)\rangle$$ Note that the state at any time $n$ is given by $|\psi(n)\rangle = U^n|\psi(0)\rangle$. $U$ can be any unitary that changes the tape only where the head is located and moves the head one step to the right or left. That is, $\langle q',T',i'|U|q,T,i\rangle$ is zero unless $i'= i \pm 1$ and $T'$ differs from $T$ only at position $i$.At the end of the computation (when the QTM reaches the state $q_f$) the tape is measured (using, say, the computational basis).The interesting thing to notice is that at each step the QTM's state is a superposition of possible configurations, which gives the QTM its quantum advantage. The answer is based on Masanao Ozawa, On the Halting Problem for Quantum Turing Machines.See also David Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer.
_unix.182783 | Consider dates in the ISO 8601 format i.e. YYYY-MM-DD, e.g. 2015-01-28, and say we have a folder with files of the form:AAAA_<date>_BBBBI am looking for a glob pattern in zsh (or Bash) that I could use to specify files between two dates. For example, say I want to copy files between 2014-12-15 and 2015-02-03. Is there an easy way to build a glob expression to refer to files between those two dates (including them)?The following may help (from lexicographical order):An important exploitation of lexicographical ordering is expressed in the ISO 8601 date formatting scheme, which expresses a date as YYYY-MM-DD. This date ordering lends itself to straightforward computerized sorting of dates such that the sorting algorithm does not need to treat the numeric parts of the date string any differently from a string of non-numeric characters, and the dates will be sorted into chronological order. | Filename expansion from date ranges | bash;files;zsh;wildcards | It is good you tagged the question with zsh, since in that shell you can use extensive glob qualifiers including test done on file names:$ ls -1 AAAA*AAAA_2012-10-03_BBBBAAAA_2014-12-28_BBBBAAAA_2015-01-03_BBBBAAAA_2015-02-03_BBBBAAAA_2015-10-03_BBBB$ d1='2014-12-15'$ d2='2015-02-03'$ print -l *(e:'[[ ${${REPLY#*_}%_*} > $d1 && ${${REPLY#*_}%_*} < $d2 ]]':)AAAA_2014-12-28_BBBBAAAA_2015-01-03_BBBBNote that for the given date format you can perform simple string comparison.I assumed here, that your files are named AAAA_date_BBBB as in the question and extracted the date part with parameter expansion. You would probably need to modify this code to get date in suitable way for your real case scenario. |
_cstheory.18582 | In the 1998 technical note Computing on data streams by Monika Rauch Henzinger , Prabhakar Raghavan , Sridar Rajagopalan (found here: http://www.eecs.harvard.edu/~michaelm/E210/datastreams.pdf)They define a directed multigraph with node set V1 union V2 union union Vk, all of whose edges are directed from a node in Vi to a nodein Vi+1.I cannot see how this allows for disconnected components? Can anyone clarify this? | Computing on data streams clarification | data streams;streaming | The definition says that every edge that exists has to go from some $V_i$ to $V_{i+1}$. It doesn't say that every possible edge from $V_i$ to $V_{i+1}$ has to be there. For example, $V_1=\{a,b\}$, $V_2=\{c,d\}$ with edges $ac$ and $bd$ gives a disconnected graph. |
_webmaster.89066 | I am having some issues getting the below to work the way I expect. What I am trying to do is take a url like https://example.com/TOPIC/courses/details/143911 but have it sent to php with the URI like /index.php?/product/details/143911&original_path=/TOPIC/courses/details/143911. It just seems to keep giving me 404's.location ~ ^(?P<one>.*)/(?P<two>details|scheduler)/(?P<three>.*)$ { try_files $uri $uri/ /index.php?/product/$two/$three&original_path=$one;}location ~* \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_URL $my_script_url; fastcgi_param SCRIPT_URI $scheme://$http_host$my_script_url; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param PHP_SELF $uri; fastcgi_param HTTPS $https if_not_empty; fastcgi_param HTTP_FRONT_END_HTTPS HTTPS; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $uri?$args; include fastcgi_params; fastcgi_ignore_headers Cache-Control; fastcgi_ignore_headers Expires; fastcgi_ignore_headers Set-Cookie;}What is strange is that I can put the URL in like https://example.com/index.php?/product/details/143911&orignial_path=/TOPIC/courses and it works fine. Something else that seems strange to me is that if I implement the php-fpm status page each request, via the try_files above or entering the URL manually shows the same script and request_uri, but only the manually entry is working how I would expect. | Using Nginx to rewrite a friendly URL to index.php handler gives 404 errors | url rewriting;nginx;error;clean urls | null |
_unix.24087 | I would like to have a wrapper program that runs a given command and sets a signal handler so that it gets run when the command receives a specified signal.The question is this:Is there an utility program to do this?If not, is it possible to do this by using bash's commands trap and exec?If not, how can I do this? (e.g. by writing a program myself in C which does a few system calls)EDIT: The target platform is GNU/Linux.EDIT 2: Following Ignacio's answer, I managed to write a preload SO which looks like this. | Wrapper program that sets signal handler | bash;signals;system calls;trap | Can't be done. From the exec(3p) man page:Signals set to be caught by the calling process image shall be set to the default action in the new process image.You would have to write a preload SO which would hook up the signal handlers before the program started. |
_unix.134778 | If we add a device that does not support PNP (Plug-an Play), the manufacturer will hopefully provide explicit directions on how to assign IRQ values for it. However, if we don't know what IRQ value to specify, what command line should be used to check if a IRQ value is free or not?lsdev displays info about devices: $lsdev Device DMA IRQ I/O Ports------------------------------------------------0000:00:02.0 7000-703f0000:00:1f.2 7060-707f 7080-7087 7088-708f 7090-7093 7094-70970000:00:1f.3 efa0-efbf0000:01:00.0 6000-607f0000:04:00.0 4000-40ff0000:05:00.0 3000-30ffacpi 9 ACPI 1800-1803 1804-1805 1808-180b 1810-1815 1820-182f 1850-1850ahci 43 7060-707f 7080-7087 7088-708f 7090-7093 7094-7097cascade 4 What about this cmd lsdev, is it enough for this task? For example, if we want to know if 1233 is free, we would run this command:lsdev | awk '{print $3}'|grep 1233 NOTE: $3 above is used because IRQ value printed in the 3rd column of lsdev output.Then if no output, it means that it is free for us to use? | How to know if an IRQ value is free to use | interrupt;upnp;autocmd | Looking at the man page for lsdev there is this comment:This program only shows the kernel's idea of what hardware is present, not what's actually physically available.The output of lsdev is actually just the contents of the /proc/interrupts file:excerpt from man proc /proc/interrupts This is used to record the number of interrupts per CPU per IO device. Since Linux 2.6.24, for the i386 and x86_64 architectures, at least, this also includes interrupts internal to the system (that is, not associated with a device as such), such as NMI (non maskable interrupt), LOC (local timer interrupt), and for SMP systems, TLB (TLB flush interrupt), RES (rescheduling interrupt), CAL (remote function call interrupt), and possibly others. 
Very easy to read formatting, done in ASCII.So I'd likely go off of the contents of /proc/interrupts instead:$ cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 0: 157 0 0 0 IO-APIC-edge timer 1: 114046 13823 22163 22418 IO-APIC-edge i8042 8: 0 0 0 1 IO-APIC-edge rtc0 9: 863103 151734 155913 156348 IO-APIC-fasteoi acpi 12: 2401994 396391 512623 477252 IO-APIC-edge i8042 16: 555 593 598 626 IO-APIC-fasteoi mmc0 19: 127 31 83 71 IO-APIC-fasteoi ehci_hcd:usb2, firewire_ohci, ips 23: 32 8 21 16 IO-APIC-fasteoi ehci_hcd:usb1, i801_smbus 40: 5467 4735 1518263 1230227 PCI-MSI-edge ahci 41: 1206772 1363618 2193180 1477903 PCI-MSI-edge i915 42: 267 5142231 817 590 PCI-MSI-edge iwlwifi 43: 5 8 6 4 PCI-MSI-edge mei_me 44: 0 2 2 23405 PCI-MSI-edge em1 45: 19 66 39 23 PCI-MSI-edge snd_hda_intelNMI: 12126 25353 28874 26600 Non-maskable interruptsLOC: 29927091 27300830 30247245 26674337 Local timer interruptsSPU: 0 0 0 0 Spurious interruptsPMI: 12126 25353 28874 26600 Performance monitoring interruptsIWI: 634179 806528 600811 632305 IRQ work interruptsRTR: 5 1 1 0 APIC ICR read retriesRES: 4083290 3763061 3806592 3539082 Rescheduling interruptsCAL: 16375 624 25561 737 Function call interruptsTLB: 806653 778539 828520 806776 TLB shootdownsTRM: 0 0 0 0 Thermal event interruptsTHR: 0 0 0 0 Threshold APIC interruptsMCE: 0 0 0 0 Machine check exceptionsMCP: 416 416 416 416 Machine check pollsERR: 0MIS: 0ReferencesLinux list all IROs currently in useKernel Korner - Dynamic Interrupt Request Allocation for Device Drivers |
_unix.34162 | Sometimes, tailing an output log which is constantly being updated doesn't give the whole lines. Why is that?grep pattern input_file > output.log &tail output.logWhy doesn't it print the last line in full? | Why does tailing an output log sometimes give partial lines? | pipe;tail | Reading and writing a file is not line-atomic, and tail -f is not line-buffered. So, when tail -f reads the file while it was still being written, it may reach the end of the file (and therefore stop printing) before the process writing to the file writes the end of the line. When that happens, it prints an incomplete line because that's all it sees. |
_webmaster.72754 | How can I use the Piwik API or other available Piwik tools or features to find the average number of visitors per day during a date range?I can find the number of user for a day and the total number of users for a date range but I can't find the Average number of user per day for a date range. I can find the average number on my own my dividing (Total Visits/Number of days) but I need to have this stat available for non technical users. | Finding the average number of visitors per day using Piwik | analytics;visitors;usage data | null |
_softwareengineering.347997 | I came across an architecture for a .net application where there are 3 layersRepository layer (edmx and their classes) ^ | VDomain layer (Model -> Interfaces and their implementation) ^ | VWeb layer (Model -> View models)Communication between the web and the data layer takes place via Interfaces defined in the domain layer (anemic models)The naming convention used over here is all dependent on table namesRepository layerTableNameRepository.csTableName.cs (autogenerated files)DomainLayerITableName.csTableName.csTableNameService.csWeb layerTableNameListModelTableNameAddEditModelThe problem with this is that if a column is modified in the table, the classes in repository and domain layers all need re-work.Also, communication between these layers take place with managers which have been resolved as below, where Container is staticContainer.Resolve<RepositoryManager>();Container.Resolve<ServiceManager>();The DBContext is stored in session variables during the first call and removed at the end of the request. The second call will initialize the DBContext with the new operator.Any suggestions on what can be done (with minimal changes) so as to imporve this architecture so that it aligns with DDD. | Domain Driven Design - Improving the developers work | design patterns;object oriented design;domain driven design;asp.net mvc;solid | null |
_unix.342715 | I want to start keepassX in floating mode in i3wm. my .config/i3/config contains the line for_window [class=keepassx] floating enableand the xprop xprop _NET_WM_USER_TIME(CARDINAL) = 7578932WM_STATE(WM_STATE): window state: Normal icon window: 0x0_NET_WM_SYNC_REQUEST_COUNTER(CARDINAL) = 29360143_NET_WM_ICON(CARDINAL) = Icon (64 x 64):XdndAware(ATOM) = BITMAP_MOTIF_DRAG_RECEIVER_INFO(_MOTIF_DRAG_RECEIVER_INFO) = 0x6c, 0x0, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, 0x0, 0x0_NET_WM_NAME(UTF8_STRING) = myKeys.kdbx - KeePassXWM_CLIENT_LEADER(WINDOW): window id # 0x1c00005_NET_WM_PID(CARDINAL) = 26787_NET_WM_WINDOW_TYPE(ATOM) = _NET_WM_WINDOW_TYPE_NORMAL_MOTIF_WM_HINTS(_MOTIF_WM_HINTS) = 0x3, 0x3e, 0x7e, 0x0, 0x0WM_PROTOCOLS(ATOM): protocols WM_DELETE_WINDOW, WM_TAKE_FOCUS, _NET_WM_PING, _NET_WM_SYNC_REQUESTWM_NAME(STRING) = Keys.kdbx - KeePassXWM_LOCALE_NAME(STRING) = en_US.UTF-8WM_CLASS(STRING) = keepassx, KeepassxWM_HINTS(WM_HINTS): Client accepts input or input focus: True Initial state is Normal State. bitmap id # to use for icon: 0x1c0000b window id # of group leader: 0x1c00005WM_NORMAL_HINTS(WM_SIZE_HINTS): user specified location: 960, 22 program specified location: 960, 22 user specified size: 956 by 1033 program specified size: 956 by 1033 program specified minimum size: 640 by 517 window gravity: NorthWestWM_CLIENT_MACHINE(STRING) = nautilusWM_COMMAND(STRING) = { keepassx }I also tried the command for_window [instance=keepassx] floating enablehow can I make the keepassX always stars in floating mode? | Make KeepassX float in i3wm | configuration;i3;tiling wm | As Adaephon said, you just looked at the wrong string. 
Everything else should be fine.You want to distinct by class, so let's look at your xprop:WM_CLASS(STRING) = keepassx, KeepassxThis line is defined like:WM_CLASS(STRING) = instance, classAs you see, you wanted to float keepassx, but the class is KeepassxThere are two solutions for you:Use for_window [class=Keepassx] floating enable, as this refers to the right class name.Use for_window [class=(?i)keepassx] floating enable, which means that the searched string will be case-unsensitive.Bear in mind, that you can also use for_window with other attributes, like name, instance, etc.EDIT: I've read his comment again and yes, he should be right: Look again at your config to strike out that after that line, another one comes that may disable floating mode for specific or every windows. |
_softwareengineering.243165 | Have a question on recursively populating JsTree using the .NET wrapper available via NuGet. Any help would be greatly appreciated.the .NET class JsTree3Node has a property named Children which holds a list of JsTree3Nodes, and the pre-existing table which contains the node structure looks like thisNodeId ParentNodeId Data AbsolutePath 1 NULL News /News 2 1 Financial /News/Financial 3 2 StockMarket /News/Financial/StockMarketI have a EF data context from the the database, so my code currently looks like this.var parentNode = new JsTree3Node(Guid.NewGuid().ToString());foreach(var nodeItem in context.Nodes){ parentNode.Children.Add(nodeItem.Data); // What is the most efficient logic to do this recursively?}as the inline comment says in the above code, what would be the most efficient way to load the JStree data on to the parentNode object.I can change the existing node table to suite the logic so feel free to suggest any changes to improve performance. | Most efficient way to rebuild a tree structure from data | trees | null |
_codereview.79303 | I just finished working through Exercise 16 of Learn Python The Hard Way.To further study, I wanted to create a script that will read and display a file's contents on screen, and then have the user re-write the file with three lines.Can my code be written in a much cleaner, pythonic way?from sys import argvscript, filename = argvprint The name of the script is: %r % script# open txt file and display on screentext1 = open(filename, 'r')print text1.read()text1.close()# ask user to overwrite the fileprint To overwrite this file, hit <enter>.print Otherwise, CTRL-C to escape.raw_input('> ')# open the txt file again; overwrite it with different data# have user input new data for each linetext2 = open(filename, 'w')print Write three lines to the file.line1 = raw_input('line 1: ')line2 = raw_input('line 2: ')line3 = raw_input('line 3: ')text2.write('\n'.join([line1, line2, line3, '']))text2.close()# read and display reconstructed filetext3 = open(filename, 'r')print text3.read()text3.close() | Reading and Writing in same script | python;python 2.7;file;console;io | null |
_unix.82016 | I have a few questions about moving from apt-get to zypper in bash scripts.What is the equivalent of this?sudo apt-get install curl --assume-yes(where curl could be any package)I found the Zypper Cheat Sheet - openSUSE. Very nice! But I would appreciate the voice of experience here -- what's the right way to use zypper in a script where I want to auto-agree to all prompts and not skip things that need a response?With my inexperience I would be tempted to use:sudo zypper --non-interactive --no-gpg-checks --quiet install --auto-agree-with-licenses curlBut is that really the equivalent of --assume-yes?What about the equivalent for these?sudo apt-get autoremove -ysudo apt-get autoclean -yThis suggests there isn't one...Is there a replacement for gdebi-core? Or is gdebi not ever needed with zypper's powerful satisfiability solver? I use gdebi for situations where I need to install a package on an older version and I have a .deb file already (but not all the dependencies). | How to use zypper in bash scripts for someone coming from apt-get? | bash;package management;apt;opensuse;zypper | zypper is not very consistent with naming flags for subcommands. For install you should use --non-interactive mode, with the shortcut -n:zypper -n install curlThat might be quite confusing for someone coming from apt-get install -y curl, although zypper's legacy option is -y/--no-confirm (sometimes the only option that actually works).According to the documentation there's no way to accept a GPG key without interactive mode:a new key can be trusted or imported in the interactive mode onlyEven with --no-gpg-checks the GPG key will be rejected. A workaround for scripts is to use a pipe and echo:echo 'a' | zypper addrepo http://repo.example.org my_name
_unix.13904 | Accidentally I managed to copy-paste a paragraph in vim a zillion times.How do I select and then delete the text from my current position to the end of file?In Windows, this would be Ctrl-Shift-End, followed by Delete. | How to select/delete until end of file in vim/gvim? | vim | VGxEnter Visual Mode, go to End of File, delete.Alternatively, you can do:VggxTo delete from the current position to the beginning of the file. |
_cs.29386 | I searched this over Google few times but ended up with articles on transparency rather than an answer.With relation to distributed database study(or distributed OS), what is the concept of transparency?I mean in replication, fragmentation and everywhere I guess the idea is to implement these concepts but not to bother user about them. Correct me if I'm wrong but I think the idea is User shouldn't know?. Then HOW is this transparency? Isn't it abstraction? Hiding the implementation details? | What actually is the concept of transparency? Why not call it Abstraction? | distributed systems;databases;database theory | I agree with you. The distinction is pretty weak.I think the rationale is that transparent is supposed to mean invisible, or you don't even know it's there. Abstraction means that you don't see the implementation of something, but you know that it's there.Example: Consider Linux. There's one command, cp <srcfile> <destfile> for copying files on the mounted file system. There's a completely different command, usually scp <user>@<machine-name>:<srcfile> <destfile> for copying a file from a remote filesystem that is mounted on a remote machine. scp is an abstraction in that it hides all the implementation details about contacting the remote machine using the ip protocol, the encryption of the channel, and the mechanics of accessing the disks on the local and remote sides of the transfer. But in no way is the distinction between a local file and a remote file transparent. You have to use a different command depending on where the file is.Worse, you can't edit the remote file in-place. With a local file you just open the file in your text editor. With a remote file you have to scp it to your local file system, edit the copy, and then scp the modified file back to the original location. |
_unix.5964 | I just installed suse 11.3, but cannot update the system. Update Applet says: PackageKit Error repo-not-available: Failed to download /media.1/media from http://download.opensuse.org/distribution/11.3/repo/oss/I filled in my details in yast->proxy, and Firefox works fine with the same details. My laptop is a Packard Bell with ATI graphics card which I am yet to install. Could it be the network card? | suse cannot install software | networking;opensuse | If your system time/date has reverted to January 1, 1970 or something ancient like that then your computers entire SSL infrastructure will temporarily be broken. Connecting to software repositories usually is done over an SSL connection these days to prevent someone from doing a man-in-the-middle intercept attack.Simply set your system date to be correct and downloading and updating software will magically resume working. |
_webmaster.24015 | I have a new site (just over 3 months old) and after a month or so, it started ranking for some searches in Google.Then suddenly after 2 months, boom, no traffic coming from search other than when people search for the name of the site.My question is whether this is something common? I was doing some link-building, but nothing too out of whack. I did 2 guest posts on some blogs. | Site got taken out of the search engines almost entirely | seo | null |
_unix.381249 | Is there a way to get a list of all commands that match a specific (case-insensitive) pattern? So for example, I know the command (which might be an alias) I'm looking for contains diag or Diag but I'm not sure of the actual command.I'm currently on Ubuntu with Bash but am asking specifically on this site because I'd love to learn of a way that's usable across various kinds of distros (e.g. I'll need this skill on CentOS and Manjaro later on too).I've tried man iag hoping it would work the same as Powershell's help iag but that doesn't work.I've tried my Google-fu but that only seems to lead to explanations on how to find files by partial name or text inside files.I've tried searching this SE site in various ways (e.g. 1, 2) but didn't find a duplicate of my question. | Find commands by partial name | shell;command line;command | Use compgen -c to get a list of all commands; you can also use it like:compgen -c difto get a list of all commands starting with dif.Combine it with grep to get exactly what you are looking for:compgen -c | grep -i diagwhich looks for any commands containing diag. Use a regex for more flexible searches:compgen -c | grep -i ^diag # starting with diagcompgen -c | grep -i diag$ # ending with diagYou can also use apropos to find commands; it searches the manual page names and descriptions.
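compgen is Bash-specific; since the question also asks for something usable across distros, here is a portable sketch that scans the directories in $PATH directly. Unlike compgen -c it misses aliases, functions, and shell builtins, and the glob match is case-sensitive (pipe the output through grep -i, as in the answer, for a case-insensitive filter).

```shell
# list executables on $PATH whose names contain the given pattern
findcmd() {
  pat=$1
  ( IFS=:
    for d in $PATH; do
      for f in "$d"/*"$pat"*; do
        [ -x "$f" ] && printf '%s\n' "${f##*/}"
      done
    done ) | sort -u
}

# e.g.: findcmd diag
```

The subshell confines the IFS change, and sort -u collapses commands that appear in several $PATH directories.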
_codereview.101317 | The task of this project was to create a method that takes in an array of stock prices, one for each hypothetical day. From there, the method should return the best day to buy and the best day to sell the stock to maximize return. A few ground rules: Stocks need to be bought before they can be soldIt is possible that the lowest priced day can be the last day, and the highest price day could be the first day. With that, and a relatively new working knowledge of Ruby, here's what I came up with. It works, but I'm sure there could be improvements. Be gentle! def stock_prices (array) $largest_difference = 0 array.each_with_index {|value, index| array.each {|i| $difference = value - i if ($difference <= $largest_difference) && (index < array.rindex(i)) $negative_array = [] << $difference $negatives = [index, array.rindex(i)] $largest_difference = $difference end } } if $negative_array.nil? puts The stock should be bought and sold at [0, 1], respectively else puts The stock should be bought and sold at #{$negatives}, respectively endendstock_prices([17,3,6,9,15,8,6,1,10])#stock_prices([25,2,10,9])#stock_prices([10,12,5,3,20,1,9,20])#stock_prices([10,9,8,7,6])#stock_prices([18,17,17,16,15]) | Stock picker that tells when to buy and when to sell | ruby | The code is a pretty sensible solution for this problem.Never put $ before a variable in Ruby, it becomes global and may be modified from anywhere.I think that function programming is more apt to algorithmic problems than imperative. I solved it like this:def stock(prices) ((0...prices.length).to_a) .repeated_permutation(2) .select {|start, finish| finish > start} .max_by {|start, finish| prices[finish] - prices[start]}endIt is more close to how you would describe the problem, anyway you may be intersted in PBRTM, the main programming style supported by Ruby. 
(Programming By Reading The Manual) Ruby gives you many, many library functions and you may enjoy scrolling through them to find the ones that help you shorten your code.Getting to the real meat, both your approach and mine run in O(N^2) time complexity. That means that for a million (10^6) stock prices, the code must perform about 10^12 operations, which is not feasible.I suggest something like this:find the min and max of the array. Is the max after the min? If so, I am done. Otherwise I delete whichever of the two is less far from the average and start again.This is almost linear, but may not be optimal.OR:You may sort the data in O(N log N) and try the first and last; if in the array the index of the last is after the index of the first, done. Otherwise delete whichever of the two is less far from the average and repeat. This too may fail to provide the optimal result in edge cases.All in all, a quadratic solution is easy but slow, while a quasilinear one is hard but fast.
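For completeness, a sketch of my own (not from the original review): the quasilinear compromise above is not actually needed, because the classic single-pass scan is O(N) and exact. Walk the prices once, remembering the index of the cheapest day seen so far and the best (buy, sell) pair found so far; since the minimum always lies at or before the current day, the buy-before-sell rule holds by construction.

```ruby
def best_trade(prices)
  min_i = 0                 # index of the cheapest price seen so far
  best = [0, 1]             # best [buy, sell] index pair found so far
  (1...prices.length).each do |i|
    if prices[i] - prices[min_i] > prices[best[1]] - prices[best[0]]
      best = [min_i, i]
    end
    min_i = i if prices[i] < prices[min_i]
  end
  best
end

p best_trade([17, 3, 6, 9, 15, 8, 6, 1, 10])  # => [1, 4]  (buy at 3, sell at 15)
```

On an all-decreasing input it returns the adjacent pair with the smallest loss, matching the rule that a purchase must still precede the sale.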
_unix.370922 | I just installed the SFML library using the following command: apt-get install sfml-devand after installing it I went to my /usr/lib folder to find out what the actual shared object's name is (so that I know what to tell my compiler to use).But there was nothing with SFML in the name there.There was, however, an SFML folder in my /usr/include directory.So where are the shared object files, if they aren't in my /usr/lib directory?How can I find them, so that I can figure out what to tell the compiler to use?g++ *.cpp -o exe -l? | Where do shared objects go after installing, and how to find out their names | ubuntu;c;c++ | gcc -print-search-dirsshows all the search paths the compiler uses.dpkg -L libsfml-devshows all the files installed by the package; to find the actual library though you'll need to run dpkg -L on whichever package actually contains the library (libsfml-dev has a number of library dependencies).In any case, libraries typically end up in /usr/lib/$(dpkg-architecture -qDEB_HOST_MULTIARCH) nowadays on Debian derivatives, i.e. /usr/lib/x86_64-linux-gnu on common 64-bit PCs. You don't actually need to worry about that with your compiler though; it already knows where to find the libraries, you just need to figure out what -l parameters to provide it with.
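To round this out with something scriptable: ldconfig -p dumps the dynamic linker's cache and is the quickest check when it is on your PATH; failing that, a small sketch like the one below simply searches candidate library directories for a name pattern (the directory list is an assumption based on typical Debian/Ubuntu layouts; adjust to taste).

```shell
# search likely library directories for a shared-object name pattern
findlib() {
  pat=$1; shift
  for d in "$@"; do
    [ -d "$d" ] && find "$d" -maxdepth 2 -name "$pat" 2>/dev/null
  done
}

# e.g.: findlib 'libsfml*' /usr/lib /usr/lib/x86_64-linux-gnu /usr/local/lib
```

Once the file name is known, the -l name is the part between the lib prefix and the .so suffix (libsfml-graphics.so becomes -lsfml-graphics).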
_softwareengineering.324670 | Can someone explain this code :<?php $a = 5; $a = $a + 1; echo $a; ?>The output is 6,How?what did save in RAM?$a = 5 or $a = $a + 1 ? | How does $a = 5; $a = $a + 1 ; equal 6? | php | $a is a variable. It should be obvious from the name alone that its value can vary.The PHP interpreter treats variables as symbols, so the logic goes like this:$a = 5; // set the value of the variable symbol $a to 5$a = $a + 1; // set the value of the variable symbol $a to the value of $a, plus 1echo $a; // write out the value of $aThe thing that's happening in the 2nd line is that it's using the value of $a in its calculation, and assigning the result of the calculation back to $a, replacing the value that was in there before.As Robert Harvey alluded to with the link he posted, research indicates that this is a fundamentally confusing concept to some people; their brains are just not wired the right way for the sort of symbolic thought that programming uses all the time. If this idea of reassigning values just doesn't make any sense, you'd probably save yourself a lot of frustration and wasted time by looking elsewhere for a career. On the other hand, if my explanation managed to clear it up, then by all means, keep learning. :) |
_softwareengineering.288000 | Suppose there is a small team with 1 senior developer and 3 junior developers working on the same repo in a team on a github organization. The senior developer has admin access. What is the most common approach in the industry to setup the access for the junior developers?Is it giving them read access only and ask them to submit a pull request every time they change something and it will be reviewed by the senior developer?Or is it giving them full write access?The goal here is to maximize productivity. | git workflow: read access + pull request or write access? | git;productivity;github | In my organization, all developers, both junior and senior, submit pull requests that are reviewed by other developers. Just because you are senior doesn't mean you can't make a mistake.But we also don't limit write access to anyone. We trust our people to not force push to master. (And worst case, the git server is backed up.) Just because you are junior does not mean you are not responsible.Rules are best enforced with culture, not technology. The last thing you want in an emergency situation is someone trying to find the person with admin privileges. If you don't trust them with your repo, you should not have hired them. |
_webmaster.52409 | I have many images in my server and I want transfer my data to another server.I want combine those files to one file and then FTP. I am using 7-zip with store option to make one file, but it is very slow.Is there any fast solution? | FTP many images from one server to another server in fastest way | images;ftp;transfer | null |
_unix.150789 | I have Linux system in which we force /dev/devname for running the system.proc /proc proc defaults 0 0/dev/sda1 / ext3 barrier=1,errors=remount-ro 0 1/dev/sda5 /opt ext3 barrier=1,defaults 0 22 /dev/sda2 /opt/vortex/dvss ext3 barrier=1,defaults 0 3/dev/sda6 none swap sw 0 0/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0We have this system running without issues till date. But, often in some installed machine we see that the system is not able to boot properly and sudden goes into Grub rescueWhen i mount the device as secondary and run E2Fsck i see that the system can be restored.Now, we are trying to address this failure. [Fixing System boot failure due to GRUB ErrorIn order, I noticed in some forums they say to SET UUID based boot up in FSTABwhat are all the advantages that we would have if it is set through UUID.Is there a possibility that it would reduce my GRUB ERROR | UUID vs /dev/sdX in FSTAB | boot;partition;grub2;fstab | null |
_unix.294365 | My OpenVZ VPS has two IPv4 and two IPv6 addresses:23.54.xx.10223.54.xx.1032604:xxxx:1::xxxx:6x0b2604:xxxx:1::xxxx:5x7cNow I want to rotate my outbound IP so that when I run any PHP, ruby or curl commands, they rotate through my IP's. I am doing a test with curl 'https://api.ipify.org?format=json', which shows me the same IP each time. root@local:~#curl 'https://api.ipify.org?format=json' {ip:23.54.xx.102}root@local:~#curl 'https://api.ipify.org?format=json' {ip:23.54.xx.102}root@local:~#curl 'https://api.ipify.org?format=json' {ip:23.54.xx.102}I used some StackExchange tables rules but the result is the same -- not rotating IP.I want the result:root@local:~#curl 'https://api.ipify.org?format=json' {ip:23.54.xx.102}root@local:~#curl 'https://api.ipify.org?format=json' {ip:23.54.xx.103}root@local:~#curl 'https://api.ipify.org?format=json' {ip:2604:xxxx:1::xxxx:6x0b}root@local:~#curl 'https://api.ipify.org?format=json' {ip:2604:xxxx:1::xxxx:5x7c}is it possible to rotate ip through IPtables and i want to use php,ruby and python. | Rotate IP outbound IP address in Ubuntu or CentOS | ubuntu;centos;networking;network interface | null |
_webmaster.49746 | Point 1: I have a front page which is dynamic because it shows all the latest posts that are rated. The question is: do I have to make these posts an excerpt because that's better for SEO?

Point 2: I have two pages where I have links on the left and right sides (widgets) that point to posts. In the middle of the page, I have one post whose content is shown with its own comments and tags beneath the post itself. The question: do I have to set a nofollow on these tags? And what about the links? If I click on a post link, it will bring me to a single.php, which actually has the same layout and elements as the pages I described at point 2. Will this be seen as duplicate content?

Point 3: I have an alphabetic list of all my posts which link to the single page. Is this seen as duplicate content? Do I have to set a nofollow on the links?

Point 4: I have author pages which have the last commented and posted posts and reactions of a person... Is this seen as duplicate content, and do I have to add a nofollow link?

There is so much information that I don't know which approach is best or how to look at it. I would really appreciate it if someone could help me out with it.

PS: if I generate an XML Sitemap, which of all these has the highest priority? | Wordpress - Too much information about SEO tags - archives - posts. Which is best? | seo;google;wordpress;duplicate content;nofollow | Understanding how nofollow works and what it's meant for

Your biggest mistake is your misunderstanding of how nofollow works - a lot of people assume that nofollow means Google will not follow the links; this is not true.
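For concreteness, nofollow is declared per link as a `rel` value in the anchor markup (the URL below is a placeholder):

```html
<!-- hints to search engines not to pass ranking credit through this link -->
<a href="https://example.com/untrusted-page" rel="nofollow">user-submitted link</a>
```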
Nofollow is an attribute that is best used for off-site links. Quote from John's recent answer on another related question:

SOURCE: Generally speaking, nofollow should be used on the following types of links:
- Untrusted content
- Paid links

Tackling the duplicate content problem

The birth of blog engines such as WordPress created a headache for webmasters alike, with duplicate pages such as Tag, Author, Recent, Date, and Popular pages. Google confronted this issue many years back with the release of rel=canonical: by using canonical pages you are telling search engines that this is the master page, and all the other pages the content is found on are ignored without punishment for duplicate content.

SOURCE: If Google knows that these pages have the same content, we may index only one version for our search results. Our algorithms select the page we think best answers the user's query. Now, however, users can specify a canonical page to search engines by adding a <link> element with the attribute rel="canonical" to the <head> section of the non-canonical version of the page. Adding this link and attribute lets site owners identify sets of identical content and suggest to Google: "Of all these pages with identical content, this page is the most useful. Please prioritize it in search results."

Yoast SEO and many alike

Adding canonical pages to WordPress is a walk in the park thanks to many plugin authors such as Yoast SEO; using such plugins you can easily add rel=canonical to your master pages and make the duplicate content problem go away for good. I've used Yoast as my example as I've used this plugin for many sites and it works well, but there are many more plugins that are equally good, which you can explore in the WP plugin library.

SOURCE: When Google introduced the canonical link element, to distinguish the original page from derivative pages within your site carrying the same content, they reached out to me to develop a WordPress plugin for it, and I did.
Later on, canonical link elements were added to core. They work fine, with one caveat: they only work for single posts and pages, not for categories and tags, not for the homepage. The WordPress SEO plugin fixes that, and sets the correct canonical on each of those pages.

XML Sitemap

The priority setting should not be considered a reliable option for conquering duplicate content issues; you should opt to use rel=canonical instead and leave the sitemap untouched.

SOURCE: Q: If I have multiple URLs that point to the same content, can I use my Sitemap to indicate my preferred URL for that content? A: Yes. While we can't guarantee that our algorithms will display that particular URL in search results, it's still helpful for you to indicate your preference by including that URL in your Sitemap. We take this into consideration, along with other signals, when deciding which URL to display in search results.
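Putting the canonical advice into markup form: the hint is a single element in the head of every duplicate view of the content, pointing at the master URL (the URL below is a placeholder):

```html
<!-- placed in the <head> of each duplicate page -->
<link rel="canonical" href="https://example.com/original-post/" />
```

Plugins such as Yoast SEO emit exactly this kind of element for you, which is why the manual step is rarely needed on WordPress.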
_unix.70668 | I have got a directory called Test with a few directories inside it. Both Test and the directories inside it contain executable files. I'd like to print them with ls. I'd use this command:

ls -l `find Test/ -perm /u=x,g=x,o=x -type f`

Is this a good/right/quick command or not?

My solution is:

find Test/ -executable -type f -exec ls -l {} \;

and it gives the same result as what warl0ck and pradeepchhetri offered. | Find executable files recursively | find;executable;command;recursive | Not really; you can integrate the ls command with find:

find Test/ -type f -perm /u=x,g=x,o=x -exec ls -l {} \;

UPDATE

Actually, -executable is not an equivalent of -perm /u=x,g=x,o=x. You might have files that are executable only by the group or others, which will not be displayed.
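As a quick sanity check of the answer's point, the demo below (file names invented) builds a throwaway tree in the current directory, gives one file only a group execute bit, and shows that the -perm /u=x,g=x,o=x form matches it even when the calling user could not run it:

```shell
# Build a small tree: a non-executable file, a group-executable-only file,
# and a file executable by everyone.
mkdir -p Test/sub
touch Test/plain.txt Test/sub/group-only Test/run.sh
chmod 640 Test/plain.txt
chmod 010 Test/sub/group-only   # execute bit for the group only
chmod 755 Test/run.sh
# Matches files with ANY execute bit set, no matter whose bit it is:
find Test -type f -perm /u=x,g=x,o=x | sort
# lists Test/run.sh and Test/sub/group-only, but not Test/plain.txt
rm -r Test
```

Running the same tree through find Test -type f -executable as a non-root user would typically list only Test/run.sh, which is the gap the UPDATE describes.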
_unix.72896 | I noticed that the MySQL daemon running on my CentOS 6.4 machine was suddenly not running anymore. I checked the MySQL log, but didn't see anything relevant:121229 22:17:45 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended121229 22:17:50 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql121229 22:17:50 InnoDB: Initializing buffer pool, size = 8.0M121229 22:17:50 InnoDB: Completed initialization of buffer pool121229 22:17:50 InnoDB: Started; log sequence number 0 206087326121229 22:17:50 [Note] Event Scheduler: Loaded 0 events121229 22:17:50 [Note] /usr/libexec/mysqld: ready for connections.Version: '5.1.66-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution130205 11:09:32 [Note] /usr/libexec/mysqld: Normal shutdown130205 11:09:32 [Note] Event Scheduler: Purging the queue. 0 events130205 11:09:34 InnoDB: Starting shutdown...130205 11:09:36 InnoDB: Shutdown completed; log sequence number 0 529664030130205 11:09:36 [Note] /usr/libexec/mysqld: Shutdown complete130205 11:09:36 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended130205 11:09:37 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql130205 11:09:37 InnoDB: Initializing buffer pool, size = 8.0M130205 11:09:37 InnoDB: Completed initialization of buffer pool130205 11:09:37 InnoDB: Started; log sequence number 0 529664030130205 11:09:37 [Note] Event Scheduler: Loaded 0 events130205 11:09:37 [Note] /usr/libexec/mysqld: ready for connections.Version: '5.1.67-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution130310 11:33:12 [Note] /usr/libexec/mysqld: Normal shutdown130310 11:33:12 [Note] Event Scheduler: Purging the queue. 
0 events130310 11:33:14 InnoDB: Starting shutdown...130310 11:33:16 InnoDB: Shutdown completed; log sequence number 0 788753738130310 11:33:16 [Note] /usr/libexec/mysqld: Shutdown complete130310 11:33:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended130310 11:36:03 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql130310 11:36:03 InnoDB: Initializing buffer pool, size = 8.0M130310 11:36:03 InnoDB: Completed initialization of buffer pool130310 11:36:04 InnoDB: Started; log sequence number 0 788753738130310 11:36:04 [Note] Event Scheduler: Loaded 0 events130310 11:36:04 [Note] /usr/libexec/mysqld: ready for connections.Version: '5.1.67-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution130413 20:56:55 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql130413 20:56:56 InnoDB: Initializing buffer pool, size = 8.0M130413 20:56:56 InnoDB: Completed initialization of buffer poolInnoDB: Log scan progressed past the checkpoint lsn 0 1139894636130413 20:56:56 InnoDB: Database was not shut down normally!InnoDB: Starting crash recovery.InnoDB: Reading tablespace information from the .ibd files...InnoDB: Restoring possible half-written data pages from the doublewriteInnoDB: buffer...InnoDB: Doing recovery: scanned up to log sequence number 0 1139895853130413 20:56:56 InnoDB: Starting an apply batch of log records to the database...InnoDB: Progress in percents: 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 InnoDB: Apply batch completedInnoDB: Last MySQL binlog file position 0 335782050, file name ./mysql-bin.000003130413 20:56:57 InnoDB: Started; log sequence number 0 1139895853130413 20:56:57 [Note] Recovering after a crash using mysql-bin130413 20:56:59 [ERROR] Error in 
Log_event::read_log_event(): 'read error', data_len: 809, event_type: 2130413 20:56:59 [Note] Starting crash recovery...130413 20:56:59 [Note] Crash recovery finished.130413 20:56:59 [Note] Event Scheduler: Loaded 0 events130413 20:56:59 [Note] /usr/libexec/mysqld: ready for connections.Version: '5.1.67-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distributionThen I checked /var/log/messages, and realized the system was rebooted for some reason:Apr 7 03:48:03 localhost rsyslogd: [origin software=rsyslogd swVersion=5.8.10 x-pid=1335 x-info=http://www.rsyslog.com] rsyslogd was HUPedApr 13 17:19:07 localhost kernel: imklog 5.8.10, log source = /proc/kmsg started.Apr 13 17:19:07 localhost rsyslogd: [origin software=rsyslogd swVersion=5.8.10 x-pid=1370 x-info=http://www.rsyslog.com] startApr 13 17:19:07 localhost kernel: Initializing cgroup subsys cpusetApr 13 17:19:07 localhost kernel: Initializing cgroup subsys cpuApr 13 17:19:07 localhost kernel: Linux version 2.6.32-358.2.1.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Wed Mar 13 00:26:49 UTC 2013Apr 13 17:19:07 localhost kernel: Command line: ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quietApr 13 17:19:07 localhost kernel: KERNEL supported cpus:Apr 13 17:19:07 localhost kernel: Intel GenuineIntelApr 13 17:19:07 localhost kernel: AMD AuthenticAMDApr 13 17:19:07 localhost kernel: Centaur CentaurHaulsApr 13 17:19:07 localhost kernel: BIOS-provided physical RAM map:Apr 13 17:19:07 localhost kernel: BIOS-e820: 0000000000000000 - 000000000009b000 (usable)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000000009b000 - 00000000000a0000 (reserved)Apr 13 17:19:07 localhost kernel: BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)Apr 13 17:19:07 localhost kernel: BIOS-e820: 0000000000100000 - 
000000008bf64000 (usable)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008bf64000 - 000000008c051000 (ACPI NVS)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008c051000 - 000000008c13d000 (ACPI data)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008c13d000 - 000000008d53d000 (ACPI NVS)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008d53d000 - 000000008f602000 (ACPI data)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008f602000 - 000000008f64f000 (reserved)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008f64f000 - 000000008f6e4000 (ACPI data)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008f6e4000 - 000000008f6ef000 (ACPI NVS)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008f6ef000 - 000000008f6f1000 (ACPI data)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008f6f1000 - 000000008f7cf000 (ACPI NVS)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008f7cf000 - 000000008f800000 (ACPI data)Apr 13 17:19:07 localhost kernel: BIOS-e820: 000000008f800000 - 0000000090000000 (reserved)Apr 13 17:19:07 localhost kernel: BIOS-e820: 00000000a0000000 - 00000000b0000000 (reserved)Apr 13 17:19:07 localhost kernel: BIOS-e820: 00000000fc000000 - 00000000fd000000 (reserved)Apr 13 17:19:07 localhost kernel: BIOS-e820: 00000000fed1c000 - 00000000fed20000 (reserved)Apr 13 17:19:07 localhost kernel: BIOS-e820: 00000000ff800000 - 0000000100000000 (reserved)Apr 13 17:19:07 localhost kernel: BIOS-e820: 0000000100000000 - 0000000270000000 (usable)Apr 13 17:19:07 localhost kernel: DMI 2.5 present.Apr 13 17:19:07 localhost kernel: SMBIOS version 2.5 @ 0xF0440Apr 13 17:19:07 localhost kernel: last_pfn = 0x270000 max_arch_pfn = 0x400000000Apr 13 17:19:07 localhost kernel: x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106Apr 13 17:19:07 localhost kernel: last_pfn = 0x8bf64 max_arch_pfn = 0x400000000Apr 13 17:19:07 localhost kernel: Using GB pages for direct mappingApr 13 17:19:07 localhost kernel: init_memory_mapping: 
0000000000000000-000000008bf64000Apr 13 17:19:07 localhost kernel: init_memory_mapping: 0000000100000000-0000000270000000Apr 13 17:19:07 localhost kernel: RAMDISK: 3717b000 - 37fef73aApr 13 17:19:07 localhost kernel: ACPI: RSDP 00000000000f0410 00024 (v02 Cisco0)Apr 13 17:19:07 localhost kernel: ACPI: XSDT 000000008f7fe120 0009C (v01 Cisco0 CiscoUCS 00000000 01000013)Apr 13 17:19:07 localhost kernel: ACPI: FACP 000000008f7fc000 000F4 (v04 Cisco0 CiscoUCS 00000000 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: ACPI: DSDT 000000008f7f6000 05DBE (v02 Cisco0 CiscoUCS 00000003 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: ACPI: FACS 000000008f6f1000 00040Apr 13 17:19:07 localhost kernel: ACPI: APIC 000000008f7f5000 001A8 (v02 Cisco0 CiscoUCS 00000000 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: ACPI: MCFG 000000008f7f4000 0003C (v01 Cisco0 CiscoUCS 00000001 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: ACPI: HPET 000000008f7f3000 00038 (v01 Cisco0 CiscoUCS 00000001 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: ACPI: SLIT 000000008f7f2000 00030 (v01 Cisco0 CiscoUCS 00000001 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: ACPI: SPCR 000000008f7f1000 00050 (v01 Cisco0 CiscoUCS 00000000 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: ACPI: WDDT 000000008f7f0000 00040 (v01 Cisco0 CiscoUCS 00000000 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: ACPI: SSDT 000000008f7d5000 1AFC4 (v02 Cisco SSDT PM 00004000 INTL 20090730)Apr 13 17:19:07 localhost kernel: ACPI: SSDT 000000008f7d4000 001D8 (v02 Cisco IPMI 00004000 INTL 20090730)Apr 13 17:19:07 localhost kernel: ACPI: SSDT 000000008f7d3000 00962 (v02 CISCO PMETER 00004000 INTL 20090730)Apr 13 17:19:07 localhost kernel: ACPI: HEST 000000008f7d1000 000A8 (v01 Cisco CiscoTbl 00000001 CISC 00000001)Apr 13 17:19:07 localhost kernel: ACPI: BERT 000000008f7d0000 00030 (v01 Cisco CiscoTbl 00000001 CISC 00000001)Apr 13 17:19:07 localhost kernel: ACPI: ERST 000000008f7cf000 00230 (v01 Cisco CiscoTbl 00000001 CISC 00000001)Apr 13 
17:19:07 localhost kernel: ACPI: EINJ 000000008f6f0000 00130 (v01 Cisco CiscoTbl 00000001 CISC 00000001)Apr 13 17:19:07 localhost kernel: ACPI: DMAR 000000008f6ef000 001A8 (v01 Cisco0 CiscoUCS 00000001 MSFT 0100000D)Apr 13 17:19:07 localhost kernel: Setting APIC routing to flat.Apr 13 17:19:07 localhost kernel: No NUMA configuration foundApr 13 17:19:07 localhost kernel: Faking a node at 0000000000000000-0000000270000000Apr 13 17:19:07 localhost kernel: Bootmem setup node 0 0000000000000000-0000000270000000Apr 13 17:19:07 localhost kernel: NODE_DATA [000000000000b000 - 000000000003efff]Apr 13 17:19:07 localhost kernel: bootmap [000000000003f000 - 000000000008cfff] pages 4eApr 13 17:19:07 localhost kernel: (9 early reservations) ==> bootmem [0000000000 - 0270000000]Apr 13 17:19:07 localhost kernel: #0 [0000000000 - 0000001000] BIOS data page ==> [0000000000 - 0000001000]Apr 13 17:19:07 localhost kernel: #1 [0000006000 - 0000008000] TRAMPOLINE ==> [0000006000 - 0000008000]Apr 13 17:19:07 localhost kernel: #2 [0001000000 - 000201b0a4] TEXT DATA BSS ==> [0001000000 - 000201b0a4]Apr 13 17:19:07 localhost kernel: #3 [003717b000 - 0037fef73a] RAMDISK ==> [003717b000 - 0037fef73a]Apr 13 17:19:07 localhost kernel: #4 [000009b000 - 0000100000] BIOS reserved ==> [000009b000 - 0000100000]Apr 13 17:19:07 localhost kernel: #5 [000201c000 - 000201c2f8] BRK ==> [000201c000 - 000201c2f8]Apr 13 17:19:07 localhost kernel: #6 [0000008000 - 000000a000] PGTABLE ==> [0000008000 - 000000a000]Apr 13 17:19:07 localhost kernel: #7 [000000a000 - 000000b000] PGTABLE ==> [000000a000 - 000000b000]Apr 13 17:19:07 localhost kernel: #8 [0000001000 - 0000001030] ACPI SLIT ==> [0000001000 - 0000001030]Apr 13 17:19:07 localhost kernel: found SMP MP-table at [ffff8800000fc640] fc640Apr 13 17:19:07 localhost kernel: Reserving 129MB of memory at 48MB for crashkernel (System RAM: 9984MB)Apr 13 17:19:07 localhost kernel: Zone PFN ranges:Apr 13 17:19:07 localhost kernel: DMA 0x00000001 -> 0x00001000Apr 13 
17:19:07 localhost kernel: DMA32 0x00001000 -> 0x00100000Apr 13 17:19:07 localhost kernel: Normal 0x00100000 -> 0x00270000Apr 13 17:19:07 localhost kernel: Movable zone start PFN for each nodeApr 13 17:19:07 localhost kernel: early_node_map[3] active PFN rangesApr 13 17:19:07 localhost kernel: 0: 0x00000001 -> 0x0000009bApr 13 17:19:07 localhost kernel: 0: 0x00000100 -> 0x0008bf64Apr 13 17:19:07 localhost kernel: 0: 0x00100000 -> 0x00270000Apr 13 17:19:07 localhost kernel: ACPI: PM-Timer IO Port: 0x408Apr 13 17:19:07 localhost kernel: Setting APIC routing to flat.Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x01] lapic_id[0x02] enabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x02] lapic_id[0x12] enabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x03] lapic_id[0x14] enabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x04] lapic_id[0x01] enabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x05] lapic_id[0x03] enabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x06] lapic_id[0x13] enabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x07] lapic_id[0x15] enabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x08] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x09] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x0a] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x0b] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x0c] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x0d] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x0e] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x0f] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x10] lapic_id[0xff] 
disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x11] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x12] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x13] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x14] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x15] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x16] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC (acpi_id[0x17] lapic_id[0xff] disabled)Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high level lint[0x1])Apr 13 17:19:07 
localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high level lint[0x1])Apr 13 17:19:07 localhost kernel: ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])Apr 13 17:19:07 localhost kernel: IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23Apr 13 17:19:07 localhost kernel: ACPI: IOAPIC (id[0x09] address[0xfec90000] gsi_base[24])Apr 13 17:19:07 localhost kernel: IOAPIC[1]: apic_id 9, version 32, address 0xfec90000, GSI 24-47Apr 13 17:19:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)Apr 13 17:19:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)Apr 13 17:19:07 localhost kernel: Using ACPI (MADT) for SMP configuration informationApr 13 17:19:07 localhost kernel: ACPI: HPET id: 0x8086a401 base: 0xfed00000Apr 13 17:19:07 localhost kernel: SMP: Allowing 24 CPUs, 16 hotplug CPUsApr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000000009b000 - 00000000000a0000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000000a0000 - 00000000000e0000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000000e0000 - 0000000000100000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008bf64000 - 000000008c051000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008c051000 - 000000008c13d000Apr 13 17:19:07 localhost kernel: PM: 
Registered nosave memory: 000000008c13d000 - 000000008d53d000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008d53d000 - 000000008f602000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008f602000 - 000000008f64f000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008f64f000 - 000000008f6e4000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008f6e4000 - 000000008f6ef000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008f6ef000 - 000000008f6f1000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008f6f1000 - 000000008f7cf000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008f7cf000 - 000000008f800000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 000000008f800000 - 0000000090000000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 0000000090000000 - 00000000a0000000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000a0000000 - 00000000b0000000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000b0000000 - 00000000fc000000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000fc000000 - 00000000fd000000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000fd000000 - 00000000fed1c000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000fed1c000 - 00000000fed20000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000fed20000 - 00000000ff800000Apr 13 17:19:07 localhost kernel: PM: Registered nosave memory: 00000000ff800000 - 0000000100000000Apr 13 17:19:07 localhost kernel: Allocating PCI resources starting at b0000000 (gap: b0000000:4c000000)Apr 13 17:19:07 localhost kernel: Booting paravirtualized kernel on bare hardwareApr 13 17:19:07 localhost kernel: NR_CPUS:4096 nr_cpumask_bits:24 nr_cpu_ids:24 nr_node_ids:1Apr 13 17:19:07 localhost kernel: PERCPU: Embedded 31 pages/cpu 
@ffff88002f800000 s94552 r8192 d24232 u131072Apr 13 17:19:07 localhost kernel: pcpu-alloc: s94552 r8192 d24232 u131072 alloc=1*2097152Apr 13 17:19:07 localhost kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 Apr 13 17:19:07 localhost kernel: pcpu-alloc: [0] 16 17 18 19 20 21 22 23 -- -- -- -- -- -- -- -- Apr 13 17:19:07 localhost kernel: Built 1 zonelists in Zone order, mobility grouping on. Total pages: 2045459Apr 13 17:19:07 localhost kernel: Policy zone: NormalApr 13 17:19:07 localhost kernel: Kernel command line: ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=129M@0M rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quietApr 13 17:19:07 localhost kernel: PID hash table entries: 4096 (order: 3, 32768 bytes)Apr 13 17:19:07 localhost kernel: Checking aperture...Apr 13 17:19:07 localhost kernel: No AGP bridge foundApr 13 17:19:07 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)Apr 13 17:19:07 localhost kernel: Placing 64MB software IO TLB between ffff880020000000 - ffff880024000000Apr 13 17:19:07 localhost kernel: software IO TLB at phys 0x20000000 - 0x24000000Apr 13 17:19:07 localhost kernel: Memory: 7972340k/10223616k available (5221k kernel code, 1901576k absent, 349700k reserved, 7121k data, 1264k init)Apr 13 17:19:07 localhost kernel: Hierarchical RCU implementation.Apr 13 17:19:07 localhost kernel: NR_IRQS:33024 nr_irqs:1008Apr 13 17:19:07 localhost kernel: Extended CMOS year: 2000Apr 13 17:19:07 localhost kernel: Console: colour VGA+ 80x25Apr 13 17:19:07 localhost kernel: console [tty0] enabledApr 13 17:19:07 localhost kernel: allocated 33554432 bytes of page_cgroupApr 13 17:19:07 localhost kernel: please try 'cgroup_disable=memory' option if you don't want memory cgroupsApr 13 17:19:07 localhost kernel: Fast TSC calibration using PITApr 13 17:19:07 localhost kernel: Detected 2666.901 MHz 
processor.Apr 13 17:19:07 localhost kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5333.80 BogoMIPS (lpj=2666901)Apr 13 17:19:07 localhost kernel: pid_max: default: 32768 minimum: 301Apr 13 17:19:07 localhost kernel: Security Framework initializedApr 13 17:19:07 localhost kernel: SELinux: Initializing.Apr 13 17:19:07 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)Apr 13 17:19:07 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)Apr 13 17:19:07 localhost kernel: Mount-cache hash table entries: 256Apr 13 17:19:07 localhost kernel: Initializing cgroup subsys nsApr 13 17:19:07 localhost kernel: Initializing cgroup subsys cpuacctApr 13 17:19:07 localhost kernel: Initializing cgroup subsys memoryApr 13 17:19:07 localhost kernel: Initializing cgroup subsys devicesApr 13 17:19:07 localhost kernel: Initializing cgroup subsys freezerApr 13 17:19:07 localhost kernel: Initializing cgroup subsys net_clsApr 13 17:19:07 localhost kernel: Initializing cgroup subsys blkioApr 13 17:19:07 localhost kernel: Initializing cgroup subsys perf_eventApr 13 17:19:07 localhost kernel: Initializing cgroup subsys net_prioApr 13 17:19:07 localhost kernel: CPU: Physical Processor ID: 0Apr 13 17:19:07 localhost kernel: CPU: Processor Core ID: 0Apr 13 17:19:07 localhost kernel: mce: CPU supports 9 MCE banksApr 13 17:19:07 localhost kernel: CPU0: Thermal monitoring enabled (TM1)Apr 13 17:19:07 localhost kernel: using mwait in idle threads.Apr 13 17:19:07 localhost kernel: ACPI: Core revision 20090903Apr 13 17:19:07 localhost kernel: ftrace: converting mcount calls to 0f 1f 44 00 00Apr 13 17:19:07 localhost kernel: ftrace: allocating 21430 entries in 85 pagesApr 13 17:19:07 localhost kernel: dmar: Host address width 40Apr 13 17:19:07 localhost kernel: dmar: DRHD base: 0x000000fe710000 flags: 0x1Apr 13 17:19:07 localhost kernel: dmar: IOMMU 0: reg_base_addr fe710000 ver 1:0 cap c90780106f0462 
ecap f020feApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f62f000 end: 0x0000008f631fffApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f61a000 end: 0x0000008f61afffApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f617000 end: 0x0000008f617fffApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f614000 end: 0x0000008f614fffApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f611000 end: 0x0000008f611fffApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f60e000 end: 0x0000008f60efffApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f60b000 end: 0x0000008f60bfffApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f608000 end: 0x0000008f608fffApr 13 17:19:07 localhost kernel: dmar: RMRR base: 0x0000008f605000 end: 0x0000008f605fffApr 13 17:19:07 localhost kernel: dmar: No ATSR foundApr 13 17:19:07 localhost kernel: IOAPIC id 8 under DRHD base 0xfe710000Apr 13 17:19:07 localhost kernel: IOAPIC id 9 under DRHD base 0xfe710000Apr 13 17:19:07 localhost kernel: Enabled IRQ remapping in xapic modeApr 13 17:19:07 localhost kernel: Setting APIC routing to physical flatApr 13 17:19:07 localhost kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1Apr 13 17:19:07 localhost kernel: CPU0: Intel(R) Xeon(R) CPU E5640 @ 2.67GHz stepping 02Apr 13 17:19:07 localhost kernel: Performance Events: PEBS fmt1+, Westmere events, Intel PMU driver.Apr 13 17:19:07 localhost kernel: CPUID marked event: 'bus cycles' unavailableApr 13 17:19:07 localhost kernel: ... version: 3Apr 13 17:19:07 localhost kernel: ... bit width: 48Apr 13 17:19:07 localhost kernel: ... generic registers: 4Apr 13 17:19:07 localhost kernel: ... value mask: 0000ffffffffffffApr 13 17:19:07 localhost kernel: ... max period: 000000007fffffffApr 13 17:19:07 localhost kernel: ... fixed-purpose events: 3Apr 13 17:19:07 localhost kernel: ... 
event mask: 000000070000000fApr 13 17:19:07 localhost kernel: NMI watchdog enabled, takes one hw-pmu counter.Apr 13 17:19:07 localhost kernel: Booting Node 0, Processors #1 #2 #3 #4 #5 #6 #7Apr 13 17:19:07 localhost kernel: Brought up 8 CPUsApr 13 17:19:07 localhost kernel: Total of 8 processors activated (42670.41 BogoMIPS).Apr 13 17:19:07 localhost kernel: devtmpfs: initializedApr 13 17:19:07 localhost kernel: PM: Registering ACPI NVS region at 8bf64000 (970752 bytes)Apr 13 17:19:07 localhost kernel: PM: Registering ACPI NVS region at 8c13d000 (20971520 bytes)Apr 13 17:19:07 localhost kernel: PM: Registering ACPI NVS region at 8f6e4000 (45056 bytes)Apr 13 17:19:07 localhost kernel: PM: Registering ACPI NVS region at 8f6f1000 (909312 bytes)Apr 13 17:19:07 localhost kernel: regulator: core version 0.5Apr 13 17:19:07 localhost kernel: NET: Registered protocol family 16Apr 13 17:19:07 localhost kernel: ACPI FADT declares the system doesn't support PCIe ASPM, so disable itApr 13 17:19:07 localhost kernel: ACPI: bus type pci registeredApr 13 17:19:07 localhost kernel: PCI: MCFG configuration 0: base a0000000 segment 0 buses 0 - 255Apr 13 17:19:07 localhost kernel: PCI: MCFG area at a0000000 reserved in E820Apr 13 17:19:07 localhost kernel: PCI: Using MMCONFIG at a0000000 - afffffffApr 13 17:19:07 localhost kernel: PCI: Using configuration type 1 for base accessApr 13 17:19:07 localhost kernel: bio: create slab <bio-0> at 0Apr 13 17:19:07 localhost kernel: ACPI: Interpreter enabledApr 13 17:19:07 localhost kernel: ACPI: (supports S0 S5)How can I determine what caused the system to be rebooted? | What caused my CentOS 6.4 box to reboot itself? | centos;mysql;reboot | null |
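One hedged pointer from the logs themselves: every kernel "Linux version ..." line in /var/log/messages marks the start of a boot, so grepping for it yields the reboot times, and whatever the log shows (or fails to show) immediately before each boot indicates whether the restart was clean. Here, no shutdown messages precede the Apr 13 boot, which together with InnoDB's "Database was not shut down normally!" suggests an unclean restart (power loss, panic, or a forced reset). The sketch below runs that grep against a stand-in file built from the question's own excerpt; on the real box you would point it at /var/log/messages and also check `last -x reboot shutdown`:

```shell
# Stand-in for /var/log/messages, taken from the excerpt in the question.
cat > messages.sample <<'EOF'
Apr  7 03:48:03 localhost rsyslogd: rsyslogd was HUPed
Apr 13 17:19:07 localhost kernel: Linux version 2.6.32-358.2.1.el6.x86_64
EOF
# Each "Linux version" line is one boot; its timestamp is when the box came up.
grep 'Linux version' messages.sample
# Count the boots covered by this log.
grep -c 'Linux version' messages.sample
rm messages.sample
```

Here the grep shows a boot at Apr 13 17:19:07 with nothing logged between it and the Apr 7 entry, which matches the InnoDB crash-recovery messages in the MySQL log.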
_codereview.114092 | Below is the assignment I was given, and the code is what I came up with. I would appreciate criticism of how I did and of ways I could have gone about it differently.

You want to number your random list numbers, so use an iterator to put the list numbers into a map (but keep the list). The key will be the number the random number was generated in, starting at 1 (ex. 1, 2, 3, etc., since these are not just index numbers). The value will be the random number. Use an iterator to print the two fields lined up on the right with field widths in two columns, with the numbers labeled with centered column headings of Key and Value.

Since the list is easier to work with, let's work with the list to perform some calculations. You will sort the numbers in your list. Then use iterators, algorithms, and list functions (not the user's input variable) to find the sum, mean (average), median (middle value - find the size and move up half way), range (difference between highest and lowest), and a random number from a random spot in your list of random numbers (get a random number (in the proper range) with rand() and move up that many spots in your list). Then print the labeled values (with the mean to three decimal places).

Now, use your iterator to put the now sorted list numbers back into the map (which will change the original map's order). The key will still be numbers starting at 1. The value will now be the random number in their sorted order.

Again, use an iterator to print the two fields lined up on the right with field widths in two columns, with the numbers labeled with column headings of Key and Value.

#include <bits/stdc++.h>
using namespace std;

int main() {
    cout << "Enter the length of list: ";
    int n;
    cin >> n;
    std::vector<int> v(n);
    cout << "The random generated numbers are: \n";
    for (int i = 0; i < n; i++) {
        int x = rand() % 151;
        cout << std::setw(10);
        cout << i << " ";
        cout << std::setw(10);
        cout << x << endl;
        v[i] = x;
    }
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += v[i];
    cout << "The sum: " << sum << endl;
    cout << "The mean: " << sum / n * 1.0 << endl;
    sort(v.begin(), v.end());
    cout << "The median: " << v[n / 2] << endl;
    return 0;
} | Mapping & Sorting Randomly Generated Numbers | c++;algorithm;random;iterator | Include the right things

You're starting with:

#include <bits/stdc++.h>
using namespace std;

The second line is bad practice; avoid it. Typing std:: is not a burden. The former is definitely unacceptable - that's a compiler-specific file that you don't want to be including. Just include the actual headers you need:

#include <iostream>
#include <algorithm>
#include <numeric>
#include <vector>

Use algorithms and iterators

Rather than writing your own loop to perform a sum, you could use the standard algorithm for this, std::accumulate (from <numeric>):

int sum = std::accumulate(v.begin(), v.end(), 0);

Also they wanted you to use iterators to find the median, which would be:

auto median_it = std::next(v.begin(), std::distance(v.begin(), v.end()) / 2);

Where's the rest?

There's a lot of text in that question that I don't think you addressed in your code. It looks like we need a std::map or something.
_webmaster.33976 | I'm looking for options to get back a domain name that just expired and was re-registered by an unknown link farmer. Between extended holidays, bank and hosting provider's opening hours, and me, the domain wasn't renewed and was quickly scooped up elsewhere.

1. How can I get hold of the new registrant? (My contact information still appears in the whois databases.)
2. Is it possible to appeal to any authority to claim ownership?
3. Is it possible to contact Google about removing the page from the search engines? | Reclaim snatched domain name | dns | Firstly, all of these questions should really be addressed to your domain registrar.

That said:

1. How can I get hold of the new registrant? (My contact information still appears in the whois databases.)

If that's true, then it would appear that your domain name was not actually fully expired. When a domain is expired, the old whois information should be purged, and then when it's re-registered by someone else the new details should then be present.

Without knowing at least which TLD it's in (and maybe the whole name) it's hard to guess more.

It's possible that the domain registry has themselves pointed your NS records at their own link-farming pages.

2. Is it possible to appeal to any authority to claim ownership?

Talk to your registrar, and if necessary the TLD domain registry. What they do will depend on the TLD's policies, though.

3. Is it possible to contact Google about removing the page from the search engines?

No, as far as I know Google will not arbitrate in cases like this - as far as they're concerned the new content is legitimate and there's no reason not to index it.
_datascience.16553 | For the last couple of months I have worked on a regression problem: turning a framed audio file into a set of MFCC features, for a speech recognition application. I tried a lot of different network structures (CNNs), different normalisation techniques, different optimizers, adding more layers and so on, and I finally got some decent results - but I don't understand why.

What I did was add a linear layer as output, and somehow that minimised the error tremendously; I'm a bit puzzled why a linear layer would have that much effect. I mean, I am still trying to fit the actual output to the desired output, so why would the activation function matter here? The weights are being adjusted based on the error, so why is the neural network better at adjusting for the error when the output is linear rather than non-linear (such as tanh or ReLU)? | The effect of a linear layer? | neural network;regression;audio recognition | If you are performing regression, you would usually have the final layer be linear.

Most likely in your case - although you do not say - your target variable has a range outside of (-1.0, +1.0). Many standard activation functions have restricted output values. For example, a sigmoid activation can only output values in the range (0.0, 1.0) and a ReLU activation can only output positive values. If the target value is outside of their range, they will never be able to match it and loss values will be high.

The lesson to learn here is that it is important which activation function you use in the output layer. The function must be able to output the full range of values of the target variable.
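A tiny NumPy illustration of the answer's point (the target values here are made up): a bounded activation puts a hard floor under the loss for any targets outside its range, while a linear output has no such floor.

```python
import numpy as np

# hypothetical regression targets, e.g. MFCC-like values outside (-1, 1)
targets = np.array([-3.0, 0.5, 7.2, 40.0])

# a tanh output unit produces values strictly inside (-1, 1); even in the
# saturated limit it can do no better than clipping the targets to that range
tanh_best = np.clip(targets, -1.0, 1.0)
tanh_floor = np.mean((tanh_best - targets) ** 2)  # irreducible squared error

# a linear output unit can emit any real value, so it has no such floor
linear_floor = 0.0

print(tanh_floor > linear_floor)  # True: bounded outputs cannot fit these targets
```

No amount of weight adjustment can push the tanh unit past its asymptotes, which is why swapping in a linear output layer can drop the error so dramatically.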
_webmaster.42959 | I am using the AWS free tier. While launching my EC2 and RDS instances, I didn't choose any specific region, so I was randomly assigned two different regions for EC2 and RDS. Later, after launching the application, I came to realize that these two instances being in two different regions causes a performance loss in the application.

So, I want to relaunch my RDS instance in the same region where my EC2 instance resides. However, I went to the management console, took a snapshot of the RDS instance, and went to launch a new instance, but it didn't give me the option of launching in other regions. Can you please help me with how I can do this and have my RDS instance in the region I want? Thanks. | Change Region For An Amazon RDS Instance | amazon ec2;amazon aws | null
_webmaster.14341 | I'm trying to set up a mobile redirect for a site with 2 subfolders, and I cannot get both to work at the same time.

This is the structure of the site:

www.example.com/EN/
www.example.com/ES/

This is a bilingual site, so each subfolder contains the files corresponding to each language version. I was using a 301 redirect and setting up the index in /EN/ as the main index. Everything was getting redirected to it. I was using:

DirectoryIndex index.html
Redirect /index.html http://www.example.com/EN/index.html

and several RewriteCond rules to redirect example.com and old URLs to the new URL. It worked fine before I decided to add a mobile version at m.example.com.

I used the solution provided in https://stackoverflow.com/questions/3680463/mobile-redirect-using-htaccess, and it redirects my mobile version properly, but now the desktop version is not working. Besides, my mobile version must be bilingual as well.

I'll do my best to clarify. My site is bilingual. I've created two subfolders (EN and ES) to hold the contents for each language. My desktop site requires some kind of redirect to find the right index (one of them at least, so I decided to set the English language version as the main index). The script I used is as stated above. Now, when I added the mobile detection redirect (and it does take me to the mobile version on a mobile device), it seems to go to the usual default index on the desktop (www.example.com/index.html), which is not my desktop index.

Any suggestions? | .htaccess mobile redirect issues | htaccess;redirects;mobile | null
_softwareengineering.18421 | I've got substantial J2EE experience, have worked with Grails, and am comfortable with several languages (Perl, PHP, Smalltalk). I've decided to try a new project using Ruby on Rails. For those who've decided to migrate to a new technology stack, how do you make the transition? Do you learn the details of Ruby first (I've done a few tutorials, and feel like I've got a decent enough grasp of the language to start doing some basic work), or do you do a full Ruby on Rails tutorial and expand your knowledge from there? I'm leaning towards the latter approach because I feel like I can delve into details of the language as I encounter them. For those who work in Ruby on Rails, how did you learn, and which resources do you recommend? | Learning a new language and framework, should I spike a project, or first ground myself in the language fundamentals? | programming languages;learning | The best way to learn a new language is simply to use it (reading tutorials won't make it 'click'), and one way that I've been assured is fantastic is through writing tests. I'm yet to try this myself (and will do so this weekend) but give Ruby Koans a go and let me know how you get on :)
_softwareengineering.198843 | I have a large application in Java filled with independent classes which are unified in a PlayerCharacter class. The class is intended to hold a character's data for a game called the Burning Wheel and, as a result, is unusually complex. The data in the class needs to have computations controlled by the UI, but needs to do so in particular ways for each of the objects in the class.

There are 19 objects in the class PlayerCharacter. I've thought about condensing it down, but I'm pretty sure this wouldn't work anywhere in the application as it stands so far.

private String name = "";
private String concept = "";
private String race = "";
private int age = -1;
private ArrayList<CharacterLifepath> lifepaths = new ArrayList<>();
private StatSet stats = new StatSet();
private ArrayList<Affiliation> affiliations = new ArrayList<>();
private ArrayList<Contact> contacts = new ArrayList<>();
private ArrayList<Reputation> reputations = new ArrayList<>();
// 10 more lines of declarations

I've been considering this problem for some time now, and have considered multiple approaches. The problem arises primarily when data is deleted - for instance, since pretty much everything else (but not some parts!) depends upon Lifepaths, when a lifepath is deleted, nearly everything else must be recalculated. However, if a Skill is deleted, only a few things must be recalculated.

Additionally, the application must somewhere track certain values (skill points, trait points, etc.) to ensure that the user is not unintentionally exceeding those values. So my question is generally as follows: Where should everything go? What makes this easiest?
There are a couple of options:

1. Place point total calculations in the PlayerCharacter class (but how does this generate a warning?).
2. Handle all calculation outside of the PlayerCharacter class, and just use PlayerCharacter as a container for all the character's information.
3. Place all calculations in the PlayerCharacter class; after each item is changed, recompute the entire character. Then, if there are issues arising from deletion, throw warnings back at the UI.

I'm slightly overwhelmed by the scope of this particular class - whereas everything I've done so far has been easily broken down into small manageable chunks, this beast seems to resist being tamed. If there's a better approach to this, I'm all ears! But as of right now, I'm slightly confused, and progressing aimlessly probably won't get me anywhere. Any advice is appreciated.

I apologize if this is unclear - I'm certain I lack the software vocabulary to properly communicate my ideas. I would love to improve this question - any help here is appreciated! | Should a complex unifying class be doing computation? | java;design;object oriented design | null
_unix.158377 | We host a Magento webshop on Amazon EC2, and we have speed issues. We're currently looking at moving to another provider. We found one hosting provider where we have a test account now. Speed is better. They use memcached, and this made me wonder how that would improve things for us if we used it on our servers. I've read that memcached caches database tables, but I understand that it is installed on the webserver, not the database server. Are there any downsides or risks when using memcached? Will this have unforeseen effects on other applications, besides improved speed? How about RAM - will we need more memory for it to work properly? Now we have 1715MB RAM (a strange number, but this is what it reports). | Memcached on a webserver | centos;cache;amazon ec2 | One installs memcached on the compute node or webserver - that way it can cache queries before they'd have to hit the wire. Your Apache instance will be faster if (e.g.) PHP can hit a rich cache before going out on the network. It's better to devote spare memory on the db box to the db instance anyway, so that it can use as much memory as possible, buffer what it can, and that way keep it in memory.

Memcached will use memory on the webserver, of course, and you don't want to over-allocate so that your compute/web node starts eating into swap: you kill all your space tradeoffs when you do that.

In my limited experience, it's not quite a magic pill, and will need tuning and watching like any other cache or any other performance-tuning effort. It gets much better when there is more than one web host per db, but it should still show you some nice improvements with one web box and one db box. I haven't installed it in a case where the db and web are on the same box, I should say, and if you're thinking of doing so I'd maybe reconsider.
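The benefit described above - the web tier hitting a rich cache before going out on the network - is the classic cache-aside pattern. A stand-in sketch (a plain dict instead of a real memcached client; all names invented for illustration):

```python
cache = {}      # stand-in for a memcached client running on the web node
db_hits = 0     # counts trips over the wire to the database box

def query_db(sql):
    """Pretend round-trip to the remote database server."""
    global db_hits
    db_hits += 1
    return "rows for " + sql

def cached_query(sql):
    # cache-aside: only a miss pays the network cost
    if sql not in cache:
        cache[sql] = query_db(sql)
    return cache[sql]

for _ in range(100):
    cached_query("SELECT * FROM products")

print(db_hits)  # 1 -- the other 99 requests never left the web server
```

The memory trade-off the answer warns about is visible here too: every cached entry lives in the web node's RAM, which is why over-allocating the cache can push the box into swap.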
_webapps.74808 | I posted an old video on my Facebook, but I don't want others to know I posted it, so I hid it from my timeline. However, I found out that the post is still showing in the newsfeed. I just don't want people to know I have posted that video, but I want anyone to be able to watch it if they somehow click on my album.

Is this possible? | How to prevent my post showing on the Newsfeed? | facebook;facebook timeline;news feed | null
_cs.50504 | Suppose we have the public key $$n = 1015, \quad e = 3$$ and private key $$d = 635, \quad p = 35, \quad q = 29, \quad \phi(n) = 952.$$ For $m = 100$, we have $$c = m^e \bmod n = 100^3 \bmod 1015 = 225.$$ To decipher this, let us take $$c^d \bmod n,$$ which is $$225^{635} \bmod 1015,$$ which equals $$680.$$ But $680 \neq 100$, so this means that RSA incorrectly decrypted it, right? Why does this happen? | Issues in RSA setup | cryptography;encryption;modular arithmetic | Your public key is not a legal RSA public key. In RSA, $n$ must be a product of two primes, but 35 is not a prime. Therefore, things don't work right: for instance, you got the wrong value of $\phi(n)$.
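The numbers in the question can be checked directly; the sketch below (plain Python, no crypto library) confirms the answer's diagnosis. Since $p = 35 = 5 \cdot 7$ is composite, $n = 1015 = 5 \cdot 7 \cdot 29$ is not a product of two primes, so the totient computed as $(35-1)(29-1) = 952$ is wrong - the true value is $672$, and $\gcd(3, 672) = 3$ means $e = 3$ has no valid private exponent at all.

```python
from math import gcd

n, e, d, m = 1015, 3, 635, 100

c = pow(m, e, n)
print(c)             # 225, as in the question
print(pow(c, d, n))  # 680, not 100: decryption really does fail

# root cause: n is not a product of two primes (p = 35 = 5 * 7 is composite)
prime_factors = [p for p in range(2, n)
                 if n % p == 0 and all(p % q for q in range(2, p))]
print(prime_factors)  # [5, 7, 29]

# so the real Euler totient is (5-1)*(7-1)*(29-1) = 672, not 952,
# and e = 3 shares a factor with it: no inverse d can exist
phi = (5 - 1) * (7 - 1) * (29 - 1)
print(phi, gcd(e, phi))  # 672 3
```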
_unix.296275 | I installed OpenSSH on my Windows 7 system so I could tunnel my VNC into it from my Arch machine. However, when I run /usr/sbin/sshd -D on the W7 machine, I get the error: /var/empty must be owned by root and not group or world-writable.

This is the output of ls -All /var:

$ ls -All /var
total 0
drwxr-xr-x+ 1 {my_usrnm} None 0 Jul 15 21:39 cache
drw-------+ 1 cyg_server Administrators 0 Jul 15 21:43 empty
drwxr-xr-x+ 1 {my_usrnm} None 0 Jul 15 21:39 lib
drwxrwxrwt+ 1 {my_usrnm} None 0 Jul 15 21:45 log
drwxrwxrwt+ 1 {my_usrnm} None 0 Jul 15 23:36 run
drwxrwxrwt+ 1 {my_usrnm} None 0 Jul 15 21:39 tmp

I've tried a few of the permissions fixes and rebooted and reinstalled OpenSSH (by running ssh-host-config) at least 10 times, but nothing has fixed it.

How do I fix this error? Thanks! | Running sshd in cygwin: /var/empty must be owned by root... | ssh;sshd;cygwin | null
_softwareengineering.337532 | I am rather new to the concept of MVVM in C#/.NET WPF projects. The way I understand it, the view-model is supposed to lessen the amount of code-behind required to display data on a form. I try to do as much of the interaction logic as possible in the view-model, so any objects I will use get passed into the view-model via property injection. The view-model will then utilize those objects to do whatever the application is supposed to do.

The only logic I have in the view is to instantiate something based on a user selection and pass it immediately into the view-model. Additionally, I only really need to do that with a few things that don't make sense to bind (everything else uses data-binding).

What I am asking is: is it generally okay to pass objects to the view-model? Or does that create some kind of coupling or other problems I am not aware of? | When using MVVM, is it okay to use property injection on a view model in WPF? | c#;wpf;mvvm | What kind of objects do you want to pass to your ViewModel? My view of MVVM is that my ViewModels are my application, and the View is just a pretty user-friendly interface for interacting with the ViewModels. In an ideal world, you should be able to easily hook up a command-line interface to your ApplicationViewModel and perform the same actions.

It sounds like you have the View creating an object based on the View's SelectedItem, and trying to pass the created object to a ViewModel. That's typically not what you want.

What you should have is a ViewModel containing a SelectedItem (which is bound to the View to give the user an easy way to change it), and that ViewModel should create your object using the ViewModel.SelectedItem property. From there, it can pass it to another ViewModel if needed, or take some other action on it.

I have a very simple MVVM example on my blog if you're interested in understanding the pattern a bit more. It's what I typically give to WPF beginners on StackOverflow who are looking for a basic example. :)
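The structure the answer describes can be sketched language-agnostically (Python here, with hypothetical class and property names): the ViewModel owns SelectedItem - the View merely binds to it - and the ViewModel itself creates objects from the current selection.

```python
class CharacterViewModel:
    """Sketch of the answer's idea: selection state lives in the
    ViewModel, so object creation never has to happen in the View."""

    def __init__(self, items):
        self.items = items         # the View binds a list control to this
        self.selected_item = None  # ...and its SelectedItem to this
        self.created_objects = []

    def create_command(self):
        # bound to a button in the View; uses the ViewModel's own selection,
        # so no code-behind is needed for the creation logic
        if self.selected_item is not None:
            self.created_objects.append({"built_from": self.selected_item})

vm = CharacterViewModel(["warrior", "mage"])
vm.selected_item = "mage"   # what two-way data binding would do
vm.create_command()
print(vm.created_objects)   # [{'built_from': 'mage'}]
```

Because everything interesting happens on the ViewModel, the same object could be driven by a command-line front end, as the answer suggests.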
_unix.266035 | I want to grep file A for every phrase in file B, where a phrase is a string of words of length X. Ideally, it would be an approximate grep, like agrep. Is there a way to do that using command-line tools? | Can I grep two files against each other? | arch linux;grep | With zsh, you could try something like:

x=3
B_words=($(<B))
A_words=($(<A))
A=$A_words
setopt extendedglob
for ((i = 1; i <= $#B_words - x + 1; i++)) {
  phrase=$B_words[i,i+x-1]
  [[ $A = (#a2)*" $phrase "* ]] && printf '%s\n' $phrase
}

Which should give you the sequences of 3 words of file B that are also found in file A (allowing 2 errors with (#a2)).

For instance, if A is your question and B is the sentence above, I get:

of 3 words
3 words of
in file A

Or if you want to see what was matched in file A:

for ((i = 1; i <= $#B_words - x + 1; i++)) {
  phrase=$B_words[i,i+x-1]
  [[ $A = (#a2)(#b)*" "($phrase)" "* ]] && printf '%s\n' "$phrase ($match[1])"
}

which gives:

of 3 words (of words)
3 words of (words of)
in file A (in file B,)

words here are defined as sequences of non-IFS characters, which with the default value of $IFS means any character other than space, tab, newline and nul.
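The same sliding-window idea, sketched in Python for comparison (exact whole-word matches only - the approximate matching that (#a2) gives you is zsh-specific and omitted here; the function takes text rather than file names):

```python
def phrases_in(a_text, b_text, x=3):
    """Return the x-word phrases of b_text that occur, as consecutive
    whole words, anywhere in a_text."""
    a_words = a_text.split()
    b_words = b_text.split()
    found = []
    for i in range(len(b_words) - x + 1):
        phrase = b_words[i:i + x]
        # scan a_words for the same consecutive word sequence
        if any(a_words[j:j + x] == phrase
               for j in range(len(a_words) - x + 1)):
            found.append(" ".join(phrase))
    return found

print(phrases_in("one two three four", "zero two three four five"))
# ['two three four']
```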
_softwareengineering.132205 | What is the proper way to suggest features to be added to the C# language?

For example, I would like to have an operator similar to ?? but for selecting the min or max value. I'd like a <? b to expand to a < b ? a : b, and likewise for >?. | What is the proper way to suggest features to be added to the C# language? | c# | Microsoft Connect is the central hub for all suggestions about Microsoft products.

Concerning Visual Studio and the .NET ecosystem, you will have to go through the Visual Studio and .NET Framework product, and you will end up on the Visual Studio User Voice website where ideas can be submitted. You can also find discussions and issues on GitHub, for the open-sourced version of .NET.

There is also a new Q/A site for both Visual Studio and TFS.

Concerning C#, now that Roslyn is open-source, the evolution of C# is discussed in the open, and is still designed by the C# Language Design Team (LDT). You can request and discuss new features of C# on GitHub.
_unix.273539 | I would like to use the pass password manager. I don't seem to be able to get pass to recognize my public key.

$ gpg2 --list-keys
/home/johndoe/.gnupg/pubring.gpg
--------------------------------
pub   rsa4096/3AD31D0B 2011-02-08 [SCE]
uid          [ unknown] Fedora-SPARC (15) <[email protected]>
sub   elg4096/A9DAE699 2011-02-08 [E]
... lots of other keys like the one above and then ...
pub   rsa2048/27FA9292 2016-03-31 [SC]
uid          [ultimate] John Doe <[email protected]>
sub   rsa2048/7C8FD1D9 2016-03-31 [E]

$ pass git init 27FA9292
Reinitialized existing Git repository in /home/johndoe/.password-store/.git/

pass insert pubs/checkbook
Enter password for pubs/checkbook:
Retype password for pubs/checkbook:
gpg: captain Password Storage Key: skipped: No public key
gpg: [stdin]: encryption failed: No public key
fatal: pathspec '/home/johndoe/.password-store/pubs/checkbook.gpg' did not match any files

captain is the hostname. Why can pass not find my public key?

Thank you. | gpg problem in using pass password manager | gpg;password store | You can't compress the two commands into one. You need to first initialise the pass store with your key and then, separately, initialise the git repository, because, as the manual states, pass git only takes git-command-args.

So, the correct approach requires two steps:

pass init YOUR_KEY
pass git init
_unix.28330 | Emacs encrypts/decrypts .gpg files automatically. But recently I have lost the ability to decrypt files encrypted by the Linux gpg tool, and vice versa.

I use:

- passphrase symmetric encryption
- gnupg 1.4.11
- emacs 24.0.92.1
- Debian sid

Decrypting using gpg (encrypted by emacs) gives: gpg: decryption failed: bad key

Decrypting using emacs (encrypted by gpg) gives: epa-file--find-file-not-found-function: Opening input file: Decryption failed,

Any idea how to avoid this? | Emacs auto encryption and gpg | emacs;encryption;gpg | The issue was with this solution (in Russian), which manipulates the input method. At present it affects the passphrase during encryption/decryption.
_unix.340914 | OS and HW: Debian Jessie x64 Cinnamon on a Lenovo G50-45

I have just installed a new Debian Jessie (Debian 8.7.1 x86_64 Cinnamon) on a friend's laptop, and I needed to upgrade the kernel and some stuff. I was directed to these steps by a person in the #debian IRC chat, with these 3 packages, in order to get the WiFi working:

linux-image-4.8.0-0.bpo.2-amd64_4.8.15-2~bpo8+2_amd64.deb
linux-base_4.3~bpo8+1_all.deb
firmware-atheros_20161130-2~bpo8+1_all.deb

The problem I am having now is that I am unable to install anything. Every program I try to install, even simple things like DosBox, gives me errors about dependencies that are unsatisfiable.

What is the reason for this and how can I remedy it?

Log of terminal output:

$ sudo apt-get install wine
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package wine
$ sudo add-apt-repository ppa:ubuntu-wine/ppa
sudo: add-apt-repository: command not found
$ sudo apt-get update
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie InRelease
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie Release.gpg
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie Release
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/contrib amd64 Packages/DiffIndex
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/main amd64 Packages/DiffIndex
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/non-free amd64 Packages/DiffIndex
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/contrib Translation-en_US
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/contrib Translation-en
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/main Translation-en_US
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/main Translation-en
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/non-free Translation-en_US
Ign cdrom://[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170117-02:05] jessie/non-free Translation-en
Reading package lists... Done
$ sudo apt-get install software-properties-common
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package software-properties-common
$ sudo apt-get install dosbox
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package dosbox
$ sudo apt-get install python-software-properties
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package python-software-properties is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source | Default sources.list on Debian Jessie (Unsatisfiable dependencies) | debian;software installation;dependencies | The reason for this is probably bad contents of your software sources.
To remedy it:

1. Edit your sources with your favorite editor (use nano if unsure):

   sudo nano /etc/apt/sources.list

2. Comment out (with #) all of the CD lines.

3. Make sure there is something apart from the CDs; if there is nothing else, you may copy-paste the following complete list:

   deb http://httpredir.debian.org/debian jessie main contrib non-free
   deb-src http://httpredir.debian.org/debian jessie main contrib non-free
   deb http://httpredir.debian.org/debian jessie-updates main contrib non-free
   deb-src http://httpredir.debian.org/debian jessie-updates main contrib non-free
   deb http://security.debian.org/ jessie/updates main contrib non-free
   deb-src http://security.debian.org/ jessie/updates main contrib non-free

4. Update the cache:

   sudo apt-get update

5. Install whatever you need ;)

It may be wise to consider some things, namely in this answer, these two:

- If you don't need source packages, you may omit the deb-src lines. That means, if you currently don't need to compile any software by yourself, you don't need these lines, but since they don't hurt...
- If you intend to use only pure GNU free software, you may omit non-free from all lines. If unsure, or new to Linux, you will probably want to have some non-free software, though...
_unix.370718 | tl;dr: Suppose I have a list of LAN clients (ip/macaddress/name); how would I best go about graphing the traffic going in/out of my OpenBSD 6.1 gateway?

In my network, everything going in/out passes through my OpenBSD gateway via pf. I would like to be able to graph, for all LAN clients (PCs, iPads, phones, etc.), the amount of traffic coming from/going to each client.

I have looked at pfstat, but that only seems to graph values for interfaces. I would like a bit more detailed view so I can make out which client does how much traffic.

I can script a little bash/python/perl, so I could periodically (cron) generate a list of clients in my network with ip/macaddress/name in a file and generate (for example) pfstat.conf files based on that, to keep my graphs up to date whenever new clients are added to my network, etc. That shouldn't be a problem. My question is specifically on how to go from there. pfstat seemed like a good choice but doesn't seem to support my scenario; I'm not even sure pf supports what I am looking for, for that matter. Maybe it does but I missed it, or maybe there are better tools I don't know about. I am aiming for a simple setup; I don't want to go the Nagios/Centreon/that-kinda-stuff route. I prefer a simple tool with a ditto config, a cronjob, and be done with it. | Graph gateway in/out traffic per client | networking;openbsd;pf;graph;rrdtool | null
_softwareengineering.339512 | I'm designing a configurable API and I've seen a few ways to accept options from the caller. One is using an options object, like so:

var options = new MyApiConfigurationOptions {
    Option1 = true,
    Option2 = false
};
var api1 = MyApiFactory.Create(options);

Another is using a configuration function:

var api2 = MyApiFactory.Create(o => {
    o.Option1 = true;
    o.Option2 = false;
});

Is one approach any better/worse/different than the other? Is there any real difference, or would it be nice to support both so the caller can use whatever syntax they prefer? | Configuration object vs function | c#;configuration | The function approach, whilst appearing less intuitive at first glance, has a couple of significant advantages:

1. The function is supplied with the config object, i.e. the API has total control over the creation of the object. It could have internal-only constructors, for example, ensuring only the API itself gets to create the object.

2. The API can support full-blown delayed initialisation. Not only can the API perform lazy initialisation of itself, it gets to then call back to the main code in a delayed fashion for the configuration data. Such an approach can prove really useful when, for example, plumbing together a system via IoC, allowing references to the API object to be injected into various objects and then the configuration supplied to the API afterwards.

It's all too easy to dismiss this approach as just being fancy for fanciness' sake, or obscure and pointless. But take the time to understand the benefits and it's easy to see why it can be a powerful (and simple to understand once the concept is understood) mechanism.

Of course, whether you need the function approach completely depends on your API. If neither of the above points offers any obvious benefits to your code, then more devs will find your API easier to use if you adopt the simpler approach.
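The second advantage - delayed initialisation - can be sketched language-agnostically (Python here, all names hypothetical): the factory stores the callback, and both the options object's creation and its configuration happen lazily, on first use.

```python
class Options:
    def __init__(self):
        self.option1 = False
        self.option2 = False

class Api:
    """Hypothetical API object; only the factory below builds it."""

    def __init__(self, configure):
        self._configure = configure  # caller's callback, held for later
        self._options = None         # nothing created or configured yet

    def _ensure_configured(self):
        if self._options is None:    # delayed initialisation:
            opts = Options()         # the API creates the object itself...
            self._configure(opts)    # ...then calls back for the values
            self._options = opts
        return self._options

    def do_work(self):
        return "fast" if self._ensure_configured().option1 else "slow"

def create(configure):
    # factory in the spirit of MyApiFactory.Create(o => { ... })
    return Api(configure)

api = create(lambda o: setattr(o, "option1", True))
# the callback has not run yet; it fires lazily, on first use:
print(api.do_work())  # fast
```

With the options-object style, by contrast, the caller must build and fill the object up front, so the API cannot defer that work or control the object's construction.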
_unix.213115 | I've been looking on the web and through the few Knoppix books I have, and I cannot see where to locate the hard drives for a CentOS server I've booted with Knoppix. I am booting the CentOS server with Knoppix so I can run fsck on the disk. The server hangs during boot. | Where does Knoppix store the location of hard drives? | centos;fsck;knoppix | null
_unix.282829 | I need to know how to set up a control user that a user can sudo into only if the user is in a certain group. For example, stupid is a control user that interacts with a service and is in the group stupiduser. bill is a user that is in the group stupiduser. anne is not in the group stupiduser. Only bill is able to sudo into stupid; anne cannot interact with stupid at all. How can I get a situation like this set up? | su to a specific user only if requesting user is in a specific group | sudo | From the sudoers man page:

User_List ::= User | User ',' User_List

User ::= '!'* user name |
         '!'* #uid |
         '!'* %group |
         '!'* %#gid |
         '!'* +netgroup |
         '!'* %:nonunix_group |
         '!'* %:#nonunix_gid |
         '!'* User_Alias

simply use the (confusingly named) group form:

%stupiduser HOSTS_HERE = /usr/bin/su - stupid
_datascience.9325 | I'm looking for a Python library that can compute the confusion matrix for multi-label classification.

FYI:

- scikit-learn doesn't support multi-label for the confusion matrix
- What is the difference between Multiclass and Multilabel Problem | Python library that can compute the confusion matrix for multi-label classification | python;software recommendation;multilabel classification | null
_unix.382539 | I have two drives on my system:

- /dev/sda has a GPT and an EFI partition. It has Debian 9 and Windows 8.1 installed on it. Debian controls the MBR using grub (grub2?).
- /dev/sdc has an msdos partition table. It has a CentOS 7 system on it which, due to bugs in anaconda, I was forced to install in legacy (i.e. non-UEFI) mode. CentOS controls the MBR on that disk using grub2.

To boot Debian or Windows I have to be in UEFI mode. The debian boot menu comes up and I can select either OS from there.

To boot CentOS 7 I have to switch to legacy mode and flag /dev/sdc as the boot drive. The CentOS boot menu shows me the Debian and Windows systems but cannot successfully boot them.

I would like to be able to boot all of my systems from a single boot menu, preferably while in UEFI mode, but don't have sufficient grub-fu to make it work.

I tried simply copying the relevant entry from CentOS's grub.cfg file to Debian's. It showed up on Debian's boot menu but when I selected it the system did a full reboot and put me back in the boot menu.

Since I plan on removing Debian I would like CentOS's boot menu, from its /boot partition on /dev/sdc, to be used, but if I have to create a separate boot partition on /dev/sda, I can live with that.

From my reading it looks like it might be as simple as running the grub-install command on CentOS and giving it /dev/sda1 as the location of the EFI partition, but none of the examples I've seen involve this mix of GPT and msdos drives, so I'm afraid of hosing my system.

Any help from the grub experts out there would be greatly appreciated. | grub: boot system on non-GPT disk while in UEFI mode | boot;grub2;uefi | null
_cs.58029 | In a filter bubble, what are some of the techniques and best ways to determine the relationship between a video that is watched or liked, and the list of related videos to be presented to the browsing user? Thanks. | Filter bubble algorithms for presenting related video lists | social networks;video;filtering problem | null |
_unix.387564 | I'm trying to boot KNOPPIX to RAM from the grub prompt.

Booting using

    grub> set root=(cd1)
    grub> linux /boot/isolinux/linux lang=en
    grub> initrd /boot/isolinux/minirt.gz
    grub> boot

works just fine, but adding the toram parameter causes it to get stuck after pressing Enter on boot.

I've seen this question where the poster uses loopback loop /boot/iso/knoppix.iso and bootfrom=.../boot/iso/knoppix.iso. I've tried adding both using /KNOPPIX/bootonly.iso, which is the only .iso file I was able to find on the CD, neither of which helped.

I know I should be able to boot the OS to ram from grub as I've done this with other distros. Is there something I'm missing? | Booting KNOPPIX DVD to ram from grub prompt | boot;grub;knoppix | null
_unix.344607 | I've (mostly) successfully split my Debian 8 root filesystem into a /boot partition (still on my original disk) and a / partition (using LVM on a second disk).

The system boots fine, but the initramfs (at least I assume it is that) is complaining that it can't check the root filesystem:

    fsck error 2 (No such file or directory) while executing fsck.ext3 for /dev/mapper/SSDVG-RootVol
    fsck exited with status code 8

There is then a successful fsck of the same filesystem from (I think) systemd.

As far as I can tell, all files in /etc/initramfs-tools and /usr/share/initramfs-tools are identical to a system originally built as LVM, which does not have this issue.

Is the problem that my initramfs is not loading the LVM support early enough, or that my new root filesystem is somehow not being identified correctly? (I see it is looking for fsck.ext3, where other logs I've seen online suggest it may usually use fsck.ext4.)

Any advice on where to look to diagnose what is going on appreciated - the Debian grub2 / initramfs behaviour with LVM appears to be rather lightly documented! | fsck error after converting Debian 8 root to LVM | debian;boot;lvm;initramfs;fsck | null
_unix.237531 | If you issue the ls -all command, some files or directories show the year and some show the time. Why do some show the year while others show the time? Is the time representative of the time the file was created? | Why does ls -all show time for some files but only year for others? | ls | By default, file timestamps are listed in abbreviated form, using a date like "Mar 30 2002" for non-recent timestamps, and a date-without-year and time like "Mar 30 23:45" for recent timestamps. This format can change depending on the current locale as detailed below. A timestamp is considered to be recent if it is less than six months old, and is not dated in the future. If a timestamp dated today is not listed in recent form, the timestamp is in the future, which means you probably have clock skew problems which may break programs like make that rely on file timestamps.

Source: http://www.gnu.org/software/coreutils/manual/coreutils.html#Formatting-file-timestamps

To illustrate:

    $ for i in {1..7}; do touch -d "$i months ago" file$i; done
    $ ls -l
    total 0
    -rw-r--r-- 1 terdon terdon 0 Sep 21 02:38 file1
    -rw-r--r-- 1 terdon terdon 0 Aug 21 02:38 file2
    -rw-r--r-- 1 terdon terdon 0 Jul 21 02:38 file3
    -rw-r--r-- 1 terdon terdon 0 Jun 21 02:38 file4
    -rw-r--r-- 1 terdon terdon 0 May 21 02:38 file5
    -rw-r--r-- 1 terdon terdon 0 Apr 21  2015 file6
    -rw-r--r-- 1 terdon terdon 0 Mar 21  2015 file7
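The six-month rule quoted above is easy to approximate in code; here is a hedged Python sketch (GNU ls actually uses half of a Gregorian year measured in seconds and locale-dependent formats, so this is an approximation, not the exact coreutils logic):

```python
import datetime

# Half of a Gregorian year, roughly what coreutils treats as "six months".
SIX_MONTHS = datetime.timedelta(seconds=365.2425 * 24 * 3600 / 2)

def ls_style(mtime, now):
    """Format a timestamp the way `ls -l` abbreviates it."""
    recent = (now - SIX_MONTHS) < mtime <= now  # future dates are not recent
    if recent:
        return mtime.strftime("%b %d %H:%M")
    return mtime.strftime("%b %d  %Y")

now = datetime.datetime(2015, 9, 21, 2, 38)
print(ls_style(datetime.datetime(2015, 8, 21, 2, 38), now))  # Aug 21 02:38
print(ls_style(datetime.datetime(2015, 3, 21, 2, 38), now))  # Mar 21  2015
```

Note that a timestamp in the future falls through to the year form, which is exactly the clock-skew symptom the manual warns about.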
_webmaster.26 | What's a good way to list a Contact Us email address on a web site, while reducing the likelihood it will get spammed?

Is putting the email address in an image the best technique, or are there others? | Way to list Contact Us email address on web site, yet reduce likelihood of spam? | email;email address;spam;contact page | null
_webapps.102427 | I have deleted some important emails from my Inbox and Trash folder. How can I recover them again? | Recovering deleted emails from Trash folder | gmail;data recovery | null
_vi.7681 | If I type:

    :echo system('grep -IRn foobar ~/.vim | grep -v backup')

Vim displays the output of the shell command:

    $ grep -IRn foobar ~/.vim | grep -v backup

Which is the list of files containing the pattern foobar inside the folder ~/.vim, after removing the matches containing the pattern backup.

    :echo expand('`grep -IRn foobar ~/.vim | grep -v backup`')

Vim does the same thing (without the newline at the end).

    :e `=system('grep -IRn foobar ~/.vim | grep -v backup | tail -1 | cut -d: -f1')`

Vim edits the last file from the output of the previous shell command.

The last 3 commands work without escaping the pipe. The latter is never interpreted as a command termination, probably because it's protected by the string. Maybe for the same reason this command works:

    :echo 'hello | world'

But if I want to populate the quickfix list with the same shell command and I type:

    :cexpr system('grep -IRn foobar ~/.vim | grep -v backup')

I have the following errors:

    E115: Missing quote: 'grep -IRn foobar ~/.vim
    E116: Invalid arguments for function system('grep -IRn foobar ~/.vim
    E15: Invalid expression: system('grep -IRn foobar ~/.vim

It seems that the pipe was interpreted as a command termination, and that it must be escaped:

    :cexpr system('grep -IRn foobar ~/.vim \| grep -v backup')

The pipe is inside a string, and the command is almost identical to the first one with :echo where the pipe is not escaped.

Why is it suddenly interpreted as a command termination with :cexpr system('shell cmd')? | Why is a pipe interpreted as a command termination in :cexpr system('shell cmd')? | command line;external command | null
_unix.132101 | I installed a little python software (pycarddav) before noticing there is a packaged version for Debian sid...

Now, I would like to uninstall this software properly and then install the packaged version with apt.

Here is what I did to install pycarddav (following its doc):

- Download pycarddav and extract it
- Go to the folder and launch python setup.py install, where setup.py contains:

    #!/usr/bin/env python2
    import os
    import string
    import subprocess
    import sys
    import warnings

    #from distutils.core import setup
    from setuptools import setup

    MAJOR = 0
    MINOR = 7
    PATCH = 0
    RELEASE = True

    VERSION = "{0}.{1}.{2}".format(MAJOR, MINOR, PATCH)

    if not RELEASE:
        try:
            try:
                pipe = subprocess.Popen(["git", "describe", "--dirty", "--tags"],
                                        stdout=subprocess.PIPE)
            except EnvironmentError:
                warnings.warn("WARNING: git not installed or failed to run")
            revision = pipe.communicate()[0].strip().lstrip('v')
            if pipe.returncode != 0:
                warnings.warn("WARNING: couldn't get git revision")
            if revision != VERSION:
                revision = revision.lstrip(string.digits + '.')
                VERSION += '.dev' + revision
        except:
            VERSION += '.dev'
            warnings.warn("WARNING: git not installed or failed to run")


    def write_version():
        """writes the pycarddav/version.py file"""
        template = """\
    __version__ = '{0}'
    """
        filename = os.path.join(
            os.path.dirname(__file__), 'pycarddav', 'version.py')
        with open(filename, 'w') as versionfile:
            versionfile.write(template.format(VERSION))
        print("wrote pycarddav/version.py with version={0}".format(VERSION))

    write_version()

    requirements = [
        'lxml',
        'vobject',
        'requests',
        'urwid',
        'pyxdg'
    ]

    if sys.version_info[:2] in ((2, 6),):
        # there is no argparse in python2.6
        requirements.append('argparse')

    setup(
        name='pyCardDAV',
        version=VERSION,
        description='A CardDAV based address book tool',
        long_description=open('README.rst').read(),
        author='Christian Geier',
        author_email='[email protected]',
        url='http://lostpackets.de/pycarddav/',
        license='Expat/MIT',
        packages=['pycarddav', 'pycarddav.controllers'],
        scripts=['bin/pycardsyncer', 'bin/pc_query', 'bin/pycard-import'],
        requires=requirements,
        install_requires=requirements,
        classifiers=[
            "Development Status :: 4 - Beta",
            "License :: OSI Approved :: MIT License",
            "Environment :: Console :: Curses",
            "Intended Audience :: End Users/Desktop",
            "Operating System :: POSIX",
            "Programming Language :: Python :: 2 :: Only",
            "Topic :: Utilities",
            "Topic :: Communications :: Email :: Address Book"
        ],
    )

How can I remove it properly?

Can I use pip uninstall pycarddav even if I didn't use pip for the installation? | Uninstall a python software properly | python | null
_unix.271452 | I need to parse a text file and replace a certain portion of it. I believe sed is the way to do it but I am not sure how to handle multiple lines with it.

The pattern I am looking for is

    set cells { \
     cell1 \
     cell2 \
     cell3 \
     }

and I want to replace it with the contents of a variable.

Something like the following should do the trick, but it doesn't work on multiple lines. Any ideas?

    sed "s/set cells {.*}/set cells {$cell_variable}/" file

| sed pattern and multiline substitution | text processing;sed | null
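No accepted answer is recorded for this row, but the underlying problem is that sed processes its input line by line, so a plain s/// command cannot match across the newlines. For comparison, a hedged sketch of the same multi-line substitution in Python, where re.DOTALL lets the pattern span lines:

```python
import re

# Literal backslash-continued block like the one in the question.
text = """set cells { \\
 cell1 \\
 cell2 \\
 cell3 \\
 }
set other { x }"""

cell_variable = "cellA cellB"
# re.DOTALL lets '.' match newlines; '.*?' is non-greedy so the match
# stops at the first closing brace instead of swallowing later blocks.
result = re.sub(r"set cells \{.*?\}",
                "set cells {%s}" % cell_variable,
                text, flags=re.DOTALL)
print(result)
# set cells {cellA cellB}
# set other { x }
```

In sed itself the usual workaround is to slurp the relevant lines into the pattern space (e.g. with N) before substituting.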
_unix.373819 | Since moving to the new Linux Mint 18.1, the KDE version that comes with it is KDE5.

In KDE4, when using kate on a remote file over fish://, asking to open another file would open the location of the currently open file, which is super helpful because you don't have to go all the way again just to open the file adjacent to your currently open file.

In KDE5 kate I couldn't find any setting to make it happen again. Every time I work on a file and wish to open a different file in the same location, kate opens at my home folder and then I have to go to the remote folder all the way.

I know I could mount the remote location using sshfs but I'd rather not. Anyone has any idea if I could force-downgrade to KDE4 or change a setting in kate to get that feature back?

All help will be appreciated.

Edit: Found out that while trying to save a new file it will open the remote directory, hope it helps | KDE5 kate won't open remote ssh location (fish) of currently opened file | ssh;linux mint;kde5;kate | null
_unix.347813 | I am using Ubuntu 14.10. Sometimes my file-system goes into read-only mode. At that time, if I restart, it asks for some options:

- F - for fix - if I press it, it says it is creating the /tmp directory; once the process completes, the problem is solved.
- S - for skip
- M - manual recovery.

Even after pressing fix, after some time it goes into read-only mode again. If I restart and press F again, it is fine for some time.

What can I do to fix it permanently?

Here, I attached the result of smartctl /dev/sda5 -a | Sometimes filesystem goes to 'Read-only file system' | linux;debian;boot;reboot;readonly | null
_softwareengineering.65822 | In which respects is Just In Time compilation better than Ahead Of Time compilation? And vice versa.

Is AOT the same as direct native compilation? | Pros and cons of JIT and AOT | compiler;jit | Just-in-time compilation usually uses profiling information (like "how many times has this method been executed up to now") to determine how aggressively a given method should be compiled, and is usually included in the execution platform. JIT was brought into focus with the early versions of Java, as they could only do simple interpretation, but with hooks allowing for external JIT modules to be plugged in, allowing for vast improvements of execution speed.

Ahead-of-time compilation is - to my knowledge - a term coined to complement the JIT concept, and usually reflects a compilation step done before execution which rarely includes profiling information (so all methods are compiled equally hard). The optimization step is finished before runtime, so the optimizer does not need to be present at execution time. This means that the execution environment does not need to be as big as in the JIT-environment.

One of the big advantages of JIT-compilation is that it allows directly targeting the exact hardware at each run, utilizing things like SSE3 or AltiVec, instead of having to compile to a common subset available on all platforms.

Note, however, that it is usually hard to predict what the JIT-process will do. You generally cannot figure out how to trick it into doing something special with gnarly code, as it may change in the next release. This means that it is very important to write clean, simple code, giving the JIT the best possible conditions to do its work.
_softwareengineering.277081 | I have seen in many programs, almost only on Linux, that when you run the program from a graphical manager (clicking the executable) the program runs in a graphical window, and when you run it from the terminal it runs in text mode.

I want to know how to do that. Does anyone know how? | How can I make a program written in C++ with Qt run in text and graphic mode? | c++;qt | It is operating system specific.

On Linux with X11, you could simply use the fact that DISPLAY is an unset environment variable outside of graphical desktop interfaces (or test for the success of XOpenDisplay); you might also use isatty(3) on STDIN_FILENO (which is 0) to test if stdin is a terminal (but you could also open /dev/tty and see if it fails, cf. tty(4)).

So simply code:

    if (getenv("DISPLAY"))
        startmyGUIapplication();
    else if (isatty(STDIN_FILENO))
        startmyterminalapplication();
    else
        error(); // application started outside, e.g. from crontab

Regarding Qt, I guess that QApplication would fail to be constructed if not used in a GUI, or its exec function would fail.

PS. On Linux with a Wayland desktop (or on MacOSX with Quartz), I don't know how to do that, but I am sure there is a simple solution.
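A hedged Python translation of the answer's dispatch logic, with the environment checks made injectable so the decision itself can be exercised in isolation (the mode names are placeholders, not part of any real API):

```python
import os
import sys

def choose_mode(display, stdin_is_tty):
    """Mirror the C snippet: GUI if DISPLAY is set, text UI if stdin is
    a terminal, otherwise refuse (e.g. when started from crontab)."""
    if display:
        return "gui"
    if stdin_is_tty:
        return "tui"
    return "error"

if __name__ == "__main__":
    # In real use the inputs come from the process environment.
    mode = choose_mode(os.environ.get("DISPLAY"), sys.stdin.isatty())
    print(mode)
```

Unlike C's getenv, this sketch treats an empty DISPLAY the same as an unset one, which is usually what you want anyway.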
_softwareengineering.313714 | The software product I work on has several services and user interfaces. There are separate setup files for each unit of the product for installation. We make customer-specific changes and bug fixes every day. We need to store a separate version for each customer.

Recently, we made the mistake of installing the wrong version of the product for several customers. We have a development team which uses github for version control and a technical service team, which installs the product and provides customer service. I am a part of the developer team and we want our customer service team to have easy access to recent setups and customer-specific setups. Currently, they ask the responsible developer for the setup files.

How can we organize and store our setup packages properly to avoid making mistakes? | Easily accessible setup files for installation | version control;maintenance;install | null
_codereview.51038 | I created a Time class. Now I want to modify the code to perform input validation. Hour should be between 0-23, minutes and seconds between 0-59. If the class needs improvements, please offer suggestions.

Time.h Header File

    #ifndef TIME_H
    #define TIME_H

    class Time
    {
    private:
        int hour;
        int minute;
        int second;

    public:
        Time(const int h = 0, const int m = 0, const int s = 0); // with default values
        void setTime(const int h, const int m, const int s);     // setter function
        void print() const;      // print a description of the object in hh:mm:ss
        bool equals(Time) const; // compare two Time objects
    };

    #endif

Time.cpp implementation file

    #include <iostream>
    #include "Time.h"

    Time::Time(const int h, const int m, const int s) : hour(h), minute(m), second(s)
    {
    }

    void Time::setTime(const int h, const int m, const int s)
    {
        hour = h;
        minute = m;
        second = s;
    }

    void Time::print() const
    {
        (hour < 10) ? std::cout << "0" << hour : std::cout << hour;
        std::cout << ":";
        (minute < 10) ? std::cout << "0" << minute : std::cout << minute;
        std::cout << ":";
        (second < 10) ? std::cout << "0" << second : std::cout << second;
        std::cout << "\n";
    }

    bool Time::equals(Time otherTime) const
    {
        if (hour == otherTime.hour && minute == otherTime.minute && second == otherTime.second)
            return true;
        else
            return false;
    }

main.cpp

    #include <iostream>
    #include "Time.h"

    int main()
    {
        Time t1(10, 50, 59);
        t1.print(); // 10:50:59

        Time t2;
        t2.print(); // 00:00:00

        t2.setTime(6, 39, 9);
        t2.print(); // 06:39:09

        if (t1.equals(t2))
            std::cout << "Two objects are equal\n";
        else
            std::cout << "Two objects are not equal\n";

        return 0;
    }

modified code of setTime for input validation (need suggestions)

    void Time::setTime(const int h, const int m, const int s)
    {
        if (h >= 0 && h <= 23)
            hour = h;
        else
            hour = 0;

        if (m >= 0 && m <= 59)
            minute = m;
        else
            minute = 0;

        if (s >= 0 && s <= 59)
            second = s;
        else
            second = 0;
    }

| Validating C++ Time class objects | c++;classes;datetime | null
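One common critique of the modified setTime above is that silently resetting out-of-range values to 0 hides caller bugs. An alternative, sketched here in Python rather than C++ (so it illustrates the approach, not a drop-in fix), is to reject invalid input outright:

```python
class Time:
    def __init__(self, h=0, m=0, s=0):
        self.set_time(h, m, s)

    def set_time(self, h, m, s):
        # Raise instead of clamping, so invalid input surfaces immediately.
        if not 0 <= h <= 23:
            raise ValueError("hour must be in 0-23, got %r" % (h,))
        if not 0 <= m <= 59:
            raise ValueError("minute must be in 0-59, got %r" % (m,))
        if not 0 <= s <= 59:
            raise ValueError("second must be in 0-59, got %r" % (s,))
        self.hour, self.minute, self.second = h, m, s

    def __str__(self):
        # Zero-padded hh:mm:ss, replacing the ternary-per-field printing.
        return "%02d:%02d:%02d" % (self.hour, self.minute, self.second)

print(Time(10, 50, 59))  # 10:50:59
try:
    Time(24, 0, 0)
except ValueError as exc:
    print("rejected:", exc)  # rejected: hour must be in 0-23, got 24
```

The same idea in C++ would be to throw std::invalid_argument (or std::out_of_range) from setTime instead of assigning 0.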