Columns: id, question, title, tags, accepted_answer
_codereview.133942
I recently wrote the livejson module as a way to make working with JSON in Python easier. The idea of the module is that you initialize a livejson.File, and then you can interact with your JSON file with the same interface as a Python dict, except your changes are reflected in the file in realtime.I'd like to know how I can improve this.Is my design good, in terms of things like naming, OOP design, etc.?Are there ways I could improve performance without decreasing reliability? I implemented the context manager with a concept of grouped writes specifically so that repetitive operations could achieve the performance of the native objects.Should I be concerned about thread safety?The codeHere's the code. I've tried to document it well with comments and docstrings, and also tried to follow PEP8 so I hope it's readable:Imports:A module implementing a pseudo-dict class which is bound to a JSON file.As you change the contents of the dict, the JSON file will be updated inreal-time. Magic.import collectionsimport osimport jsonThe code begins with some helper functions and generic base classes:# MISC HELPERSdef _initfile(path, data=dict): Initialize an empty JSON file. data = {} if data.lower() == dict else [] # The file will need to be created if it doesn't exist if not os.path.exists(path): # The file doesn't exist # Raise exception if the directory that should contain the file doesn't # exist dirname = os.path.dirname(path) if dirname and not os.path.exists(dirname): raise IOError( (Could not initialize empty JSON file in non-existant directory '{}').format(os.path.dirname(path)) ) # Write an empty file there with open(path, w) as f: json.dump(data, f) return True elif len(open(path, r).read()) == 0: # The file is empty with open(path, w) as f: json.dump(data, f) else: # The file exists and contains content return Falseclass _ObjectBase(object): Class inherited by most things. Implements the lowest common denominator for all emulating classes. def __getitem__(self, key): out = self.data[key] # Nesting if isinstance(out, (list, dict)): # If it's the top level, we can use [] for the path pathInData = self.pathInData if hasattr(self, pathInData) else [] newPathInData = pathInData + [key] # The top level, i.e. the File class, not a nested class. If we're # already the top level, just use self. toplevel = self.base if hasattr(self, base) else self nestClass = _NestedList if isinstance(out, list) else _NestedDict return nestClass(toplevel, newPathInData) # Not a list or a dict, don't worry about it else: return out def __len__(self): return len(self.data) # Methods not-required by the ABC def __str__(self): return str(self.data) def __repr__(self): return repr(self.data) # MISC def _checkType(self, key): Make sure the type of a key is appropriate. passOne of the major challenges I faced was easily supporting nested JSON structures, with the easy syntax of my_file[a][b] = c. It was difficult to make these cases behave properly because __setitem__ is not called in this case. To overcome this, I set up simple nesting classes to replace Python lists and dicts in nested situations. These mainly just listen for __setitem__ and tell the top-level class to update the file's contents.# NESTING CLASSESclass _NestedBase(_ObjectBase): Inherited by _NestedDict and _NestedList, implements methods common between them. Takes arguments 'fileobj' which specifies the parent File object, and 'pathToThis' which specifies where in the JSON file this object exists (as a list). 
def __init__(self, fileobj, pathToThis): self.pathInData = pathToThis self.base = fileobj @property def data(self): # Start with the top-level data d = self.base.data # Navigate through the object to find where self.pathInData points for i in self.pathInData: d = d[i] # And return the result return d def __setitem__(self, key, value): self._checkType(key) # Store the whole data data = self.base.data # Iterate through and find the right part of the data d = data for i in self.pathInData: d = d[i] # It is passed by reference, so modifying the found object modifies # the whole thing d[key] = value # Update the whole file with the modification self.base.set_data(data) def __delitem__(self, key): # See __setitem__ for details on how this works data = self.base.data d = data for i in self.pathInData: d = d[i] del d[key] self.base.set_data(data)class _NestedDict(_NestedBase, collections.MutableMapping): A pseudo-dict class to replace vanilla dicts inside a livejson.File. This watches for changes made to its content, then tells the base livejson.File instance to update itself so that the file always reflects the changes you've made. This class is what allows for nested calls like this >>> f = livejson.File(myfile.json) >>> f[a][b][c] = d to update the file. def __iter__(self): return iter(self.data) def _checkType(self, key): if not isinstance(key, str): raise TypeError(JSON only supports strings for keys, not '{}'. {} .format(type(key).__name__, Try using a list for storing numeric keys if isinstance(key, int) else ))class _NestedList(_NestedBase, collections.MutableSequence): A pseudo-list class to replace vanilla lists inside a livejson.File. This watches for changes made to its content, then tells the base livejson.File instance to update itself so that the file always reflects the changes you've made. This class is what allows for nested calls involving lists like this: >>> f = livejson.File(myfile.json) >>> f[a].append(foo) to update the file. def insert(self, index, value): # See _NestedBase.__setitem__ for details on how this works data = self.base.data d = data for i in self.pathInData: d = d[i] d.insert(index, value) self.base.set_data(data)Here we have the main element of the module, the classes which envelop everything else:# THE MAIN CLASSESclass _BaseFile(_ObjectBase): Class inherited by DictFile and ListFile. This implements all the required methods common between collections.MutableMapping and collections.MutableSequence. def __init__(self, path, pretty=False, sort_keys=False): self.path = path self.path = path self.pretty = pretty self.sort_keys = sort_keys self.indent = 2 # Default indentation level _initfile(self.path, list if isinstance(self, ListFile) else dict) def _data(self): A simpler version of data to avoid infinite recursion in some cases. Don't use this. if self.is_caching: return self.cache with open(self.path, r) as f: return json.load(f) @property def data(self): Get a vanilla dict object to represent the file. # Update type in case it's changed self._updateType() # And return return self._data() def __setitem__(self, key, value): self._checkType(key) data = self.data data[key] = value self.set_data(data) def __delitem__(self, key): data = self.data del data[key] self.set_data(data) def _updateType(self): Make sure that the class behaves like the data structure that it is, so that we don't get a ListFile trying to represent a dict. 
data = self._data() # Change type if needed if isinstance(data, dict) and isinstance(self, ListFile): self.__class__ = DictFile elif isinstance(data, list) and isinstance(self, DictFile): self.__class__ = ListFile # Bonus features! def set_data(self, data): Overwrite the file with new data. You probably shouldn't do this yourself, it's easy to screw up your whole file with this. if self.is_caching: self.cache = data else: fcontents = self.file_contents with open(self.path, w) as f: try: # Write the file. Keep user settings about indentation, etc indent = self.indent if self.pretty else None json.dump(data, f, sort_keys=self.sort_keys, indent=indent) except Exception as e: # Rollback to prevent data loss f.seek(0) f.truncate() f.write(fcontents) # And re-raise the exception raise e self._updateType() def remove(self): Delete the file from the disk completely. os.remove(self.path) @property def file_contents(self): Get the raw file contents of the file. with open(self.path, r) as f: return f.read() # Grouped writes @property def is_caching(self): Returns a boolean value describing whether a grouped write is underway. return hasattr(self, cache) def __enter__(self): self.cache = self.data return self # This enables using as def __exit__(self, *args): # We have to write manually here because __setitem__ is set up to write # to cache, not to file with open(self.path, w) as f: json.dump(self.cache, f) del self.cacheclass DictFile(_BaseFile, collections.MutableMapping): A class emulating Python's dict that will update a JSON file as it is modified. def __iter__(self): return iter(self.data) def _checkType(self, key): if not isinstance(key, str): raise TypeError(JSON only supports strings for keys, not '{}'. {} .format(type(key).__name__, Try using a list for storing numeric keys if isinstance(key, int) else ))class ListFile(_BaseFile, collections.MutableSequence): A class emulating a Python list that will update a JSON file as it is modified. Use this class directly when creating a new file if you want the base object to be an array. def insert(self, index, value): data = self.data data.insert(index, value) self.set_data(data) def clear(self): # Under Python 3, this method is already in place. I've implemented it # myself to maximize compatibility with Python 2. Note that the # docstring here is stolen from Python 3. L.clear() -> None -- remove all items from L. self.set_data([])And finally, this is what a user will directly initialize (most of the time). File is a class so that I can have staticmethods on it, but it really behaves like a factory function since it immediately changes self.__class__. Initializing a File() will give you either a ListFile or a DictFile object.class File(object): The main interface of livejson. Emulates a list or a dict, updating a JSON file in real-time as it is modified. This will be automatically replaced with either a ListFile or as DictFile based on the contents of your file (a DictFile is the default when creating a new file). def __init__(self, path, pretty=False, sort_keys=True, indent=2): # When creating a blank JSON file, it's better to make the top-level an # Object (dict in Python), rather than an Array (list in python), # because that's the case for most JSON files. 
self.path = path self.pretty = pretty self.sort_keys = sort_keys self.indent = indent _initfile(self.path) with open(self.path, r) as f: data = json.load(f) if isinstance(data, dict): self.__class__ = DictFile elif isinstance(data, list): self.__class__ = ListFile @staticmethod def with_data(path, data): Initialize a new file that starts out with some data. Pass data as a list, dict, or JSON string. # De-jsonize data if necessary if isinstance(data, str): data = json.loads(data) # Make sure this is really a new file if os.path.exists(path): raise ValueError(File exists, not overwriting data. Use 'set_data' if you really want to do this.) else: f = File(path) f.set_data(data) return f# Aliases for backwards-compatibilityDatabase = FileListDatabase = ListFileDictDatabase = DictFileUsageHere's example usage in case that's helpful to critique my API design:Basic usage:import livejsonf = livejson.File(test.json)f[a] = b# That's it, the file has been written to!Usage as a context manager:import livejsonwith livejson.File(test.json) as f: f[a] = bIf I could get some feedback on this that would be great.
A module to make JSON in Python easier
python;performance;object oriented;library
At first look this is pretty clean code, and you also presented it in a nice fashion, so off to a good start. I'll go through your questions first:

Naming looks mostly good, though I'd wager that at some point generic names with "Base" in them will make the code harder to understand than necessary. Unfortunately I don't have a good suggestion for names that would be more helpful.

Any unspecific question about performance should get the answer "take a profiler and look", IMO. Really, there's more to worry about than raw numbers, including the fact that this is Python; for starters, the regular json module isn't the fastest either. However, correctness comes first, so not only is thread safety a concern, access from multiple processes is an even bigger one. The one pattern that should really be used here is to first create a temporary file with the to-be-written content, then atomically move that to the destination. I'm not sure what the canonical source for this is, but this blog post is a nice start. The set_data method is particularly bad in that respect.

Okay, so the code is almost PEP 8 compatible, but there's still a lot of camel case in there. IMO that's fine as long as it's consistent.

One note: replacing self.__class__ ... that works? Even if it does, it looks extremely fishy, c.f. this SO post for example. I share the sentiment that a different mechanism would be better: if I see ...File() as a constructor call, I definitely don't expect a subclass of that to be the object I get back. Not doing clever things will save people minutes to hours of debugging time, and this is definitely too clever. Replacing it with a regular function, e.g. with livejson.mapped("test.json") as f:, and making it clear that you get a File subclass as a result would be enough for me.
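A minimal sketch of the temp-file-then-atomic-rename pattern mentioned above, assuming Python 3 (the helper name atomic_json_dump and its keyword passthrough are invented for this illustration, not part of livejson):

    import json
    import os
    import tempfile

    def atomic_json_dump(data, path, **dump_kwargs):
        """Write JSON to `path` so readers never see a half-written file."""
        dirname = os.path.dirname(path) or "."
        # Write to a temporary file in the same directory (same filesystem),
        # then atomically replace the destination.
        fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
        try:
            with os.fdopen(fd, "w") as f:
                json.dump(data, f, **dump_kwargs)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp_path, path)  # atomic rename, Python 3.3+
        except BaseException:
            os.remove(tmp_path)
            raise

set_data could call such a helper instead of truncating and rewriting the destination in place, so a failed dump never leaves a half-written file behind.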
_unix.244818
I am now extending my own answers to MCQs, as a follow-up to How to SED these paragraphs to MCQ format?.

Data in file.txt

    1 c
    2 a

Data in exam.tex

    \item
    Which of the following lorem ipsun are the best playstation games you have played?
    A Pattman
    B Pokemon
    C Lorem
    D Ipsun
    E Heillui

    \item
    Which of the following lorem ipsun are the best playstation games you have played?
    A Pattman
    B Pokemon
    C Lorem
    D Ipsun
    E Heillui

Wanted output, where the bolding is driven by the answers in file.txt

    \item
    Which of the following lorem ipsun are the best playstation games you have played?
    A Pattman
    B Pokemon
    \textbf{C Lorem}
    D Ipsun
    E Heillui

    \item
    Which of the following lorem ipsun are the best playstation games you have played?
    \textbf{A Pattman}
    B Pokemon
    C Lorem
    D Ipsun
    E Heillui

Pseudocode

    for loop over \item's in Perl paragraph mode
        for loop over lines in each paragraph by the count in the list (a=1, b=2, c=3, d=4, e=5)
            apply \textbf to the beginning of the line; and } to the end of the line
        step out of the current paragraph
    end

Pseudocode with Python-Perl

    for item in items
        perl -00pe 's/\\item\n.*\n{$item}^/\textbf{/;' file
        perl -00pe 's/\\item\n.*\n{$item}$/}/;' file
    end

where I do not like that I handle the beginning and the end in separate commands. How can you apply the answers to the MCQs with Perl/sed/Python?
To Apply Answers to MCQs
text processing;sed;perl
{ printf '[13*%d-n[bs]pc]s%c\n' 9 a 7 b 5 c 3 d 1 e tr -s ' \n' lx <file.txt; } | dc |sed -f- -eb -e:s -e's/.*/\\textbf{&}/' exam.txtThat uses the dc reverse-polish-notation calculator to generate a sed script that looks like:8bs17bs...which is then concatenated with the sed scripts entered on the command-line and so results in every line number which is not included in the generated script to be branched away, and for the those that are included to have the string \textbf{ inserted before whatever exists on the line and } appended to its tail.It will work for more than just the two sections, of course. Basically it multiplies the leading number on each line in file.txt by 13 and then subtracts from the product 1, 3, 5, 7, 9 for either of e, d, c, b, a respectively to arrive at the list of targeted line numbers.In any case, all sed has to do at the end is the default print it does for every line which it doesn't edit before branching away, or else to do the one substitution for your string. tr and dc are both very fast utilities, and they handle nearly all of the pre-processing.Possible advantages to this approach:No comparisons need be made between the two files.No regular expression matching is required at all.While true, sed's processing might benefit slightly from an initial additional filter to prevent it attempting to match every line number against those in the list, like:... | sed -e'/^[ABCDE] /!b' -f- ...The answers in file.txt need not be in any particular order or even completely represent every question in exam.txt.Possible disadvantages:Each answer letter in file.txt must be lowercase.use tr -s '[:upper:] \n' \[:lower:]lx to handle either/or.At least one intermediate space and intervening newline is required between the answer number and the answer letter and between each answer respectively in file.txt, though any larger number is permitted with the exception that no leading blank-lines can be found before the first answer, and no line can begin or end with spaces. Every question/answer block in exam.txt must be comprised of exactly 13 lines (excepting the last, which need not include a trailing blank-line).This depends on one or two GNU dc extensions as written. Here is a more portable version:{ printf '[13*%d-p[s]pc]s%c\n' 9 a 7 b 5 c 3 d 1 e tr -s ' \n' lx <file.txt;}| dc | paste -db - -|sed -f- -eb -e:s -e's/.*/\\textbf{&}/' exam.txt...either way you write it, though:\itemWhich of the following lorem ipsun are the best playstation games you have played?A PattmanB Pokemon\textbf{C Lorem}D IpsunE Heillui\itemWhich of the following lorem ipsun are the best playstation games you have played?\textbf{A Pattman}B PokemonC LoremD IpsunE Heillui
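The question also asks about a Python option; here is a rough sketch of the same bolding idea in pure Python (an illustration only, not equivalent to the dc/sed pipeline above: it assumes file.txt holds "question-number answer-letter" pairs, that every question in exam.tex starts with \item, and the output file name is made up):

    import re

    # Read the answer key: "1 c" -> question 1, answer "C"
    answers = {}
    with open("file.txt") as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                answers[int(parts[0])] = parts[1].upper()

    with open("exam.tex") as f:
        text = f.read()

    # One block per \item; blocks[0] is whatever precedes the first \item
    blocks = text.split('\\item')
    out = [blocks[0]]
    for question_no, block in enumerate(blocks[1:], start=1):
        letter = answers.get(question_no)
        if letter:
            # Wrap the whole answer line, e.g. "C Lorem" -> "\textbf{C Lorem}"
            block = re.sub(r'(?m)^({} .*)$'.format(letter),
                           r'\\textbf{\1}', block)
        out.append('\\item' + block)

    with open("exam_bold.tex", "w") as f:
        f.write("".join(out))

Unlike the line-number approach above, this keys off the answer letter within each \item block, so the blocks do not need to be exactly 13 lines long.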
_codereview.155972
The following is a Python 3 script I wrote to take and delete elasticsearch snapshots. Could anyone please point out any issues? I am concerned that I have structured the code poorly (particularly the order of the functions), making it hard to read, but I am unsure how to fix that.I also made no real attempt to write pythonic code. Any ways I could improve the code to be more pythonic would also be appreciated.GitHub#!/usr/bin/env python3def main(): import argparse, logging # Parse arguments parser = argparse.ArgumentParser(description='Take subcommands and \ parameters for elasticsearch backup script.') parser.add_argument('function', type=str, choices=['backup','delete'], help='Triggers a new elasticsearch action.') parser.add_argument('--config', type=str, help='Specify path to a config file.') parser.add_argument('--name', type=str, help='Specify snapshot name') parser.add_argument('--age', type=int, help='Specify age to delete backups after') parser.add_argument('--logfile', type=str, help='Specify where to put a logfile') parser.add_argument('--loglevel', type=str, help='Specify log level', choices=['INFO', 'DEBUG', 'WARNING', 'ERROR', 'CRITICAL']) args = parser.parse_args() # Find config file, and read in config config = find_config(args.config) # Check if logfile was specified if args.logfile == None: logfile = '/var/log/elasticsearch/snapshot_backup.log' else: logfile = args.logfile # Check if logging level was specified try: if args.loglevel != None: logging_level = logging.getLevelName(args.loglevel) print('I am args: {0}'.format(logging_level)) elif config['logging_level'] != None: logging_level = logging.getLevelName(config['logging_level']) print('I am config: {0}'.format(logging_level)) except KeyError as e: logging_level = logging.getLevelName('ERROR') # Set up logging logging.basicConfig(filename=logfile, level=logging_level, format='%(asctime)s - %(levelname)s - %(message)s') # Map argument string to function name FUNCTION_MAP = {'backup': backup, 'delete': delete} # Initialise callable variable using the FUNCTION_MAP func = FUNCTION_MAP[args.function] # Call funky voodoo variable function if args.function == 'backup': func(elastic_node=config['elasticsearch_host'], backup_repository=config['backup_repository'], snapshot_name=args.name) elif args.function == 'delete': func(elastic_node=config['elasticsearch_host'], backup_repository=config['backup_repository'], snapshot_name=args.name, age=args.age) else: # Should never get here. 
argparse should pick this up logging.error('Invalid option {}'.format(args.function)) exit()def find_config(config_arg): import yaml # Check if custom config defined if config_arg == '' or config_arg == None: configfile = '/etc/elasticsearch/backup.yaml' else: configfile = config_arg # Read in config from config file try: with open(configfile, 'r') as ymlfile: config = yaml.load(ymlfile) return(config) except FileNotFoundError as e: print(e) exit()def backup(elastic_node, backup_repository, port=9200, snapshot_name=None): import json, logging, requests # Get new snapshot name if none provided if snapshot_name == None: snapshot_name = generate_snapshot_name() # Assemble url snapshot_url = 'http://{0}:{1}/_snapshot/{2}/{3}?wait_for_completion=true \ '.format(elastic_node, port, backup_repository, snapshot_name) # Send REST API call try: request = requests.put(snapshot_url) request.raise_for_status() logging.info('backup: {0} completed'.format(snapshot_name)) except ConnectionError as e: logging.error(backup: Failed to create {0}.format(e)) logging.error(json.loads(request.text)) except requests.exceptions.HTTPError as e: logging.error(backup: Failed to create {0}.format(e)) logging.error(json.loads(request.text)['error']['reason']) except requests.exceptions.ConnectionError as e: logging.error(backup: Failed to create {0}.format(e))def delete(age, elastic_node, backup_repository, port=9200, snapshot_name=None): import logging, json, requests # If age not provided, use default if age == None: age = 40 # If snapshot_name provided delete it if snapshot_name is not None: url = 'http://{0}:{1}/_snapshot/{2}/{3}'.format(elastic_node, port, backup_repository, snapshot_name) # Send REST API call try: request = requests.delete(url) request.raise_for_status() logging.info('delete: {0} completed'.format(snapshot_name)) except ConnectionError as e: logging.error(delete: Failed to create {0}.format(e)) logging.error(json.loads(request.text)) except requests.exceptions.HTTPError as e: logging.error(delete: Failed to create {0}.format(e)) logging.error(json.loads(request.text)['error']['reason']) except requests.exceptions.ConnectionError as e: logging.error(delete: Failed to create {0}.format(e)) # Get today's snapshot name if none provided else: snapshot_name = generate_snapshot_name() logging.info('delete: Generated snapshot name \ {0}'.format(snapshot_name)) snapshot_time_delta = calculate_delta(snapshot_name, age=age) bulk_delete(snapshot_time_delta, elastic_node, backup_repository, port)def generate_snapshot_name(prefix='snapshot-'): from datetime import datetime # generate date string in UTC time date = datetime.utcnow().strftime(%Y%m%d) snapshot_name = prefix + date return snapshot_namedef calculate_delta(snapshot_name=None, age=40): import logging, re from datetime import timedelta try: if snapshot_name == None: raise ValueError('snapshot_name cannot be None') else: date = parse_snapshot_name(snapshot_name) # Calculate time delta date_delta = date - timedelta(days=age) logging.info('calculate_delta: date {0} days ago is {1} \ '.format(age, date_delta)) return date_delta except ValueError as e: logging.error('calculate_delta: {0}}'.format(e))def bulk_delete(snapshot_time_delta, elastic_node, backup_repository, port): import logging, requests # Fetch all snapshot metadata all_snapshots = fetch_all_snapshots(elastic_node, backup_repository, port) # Find snapshots older than snapshot_time_delta old_snapshots = find_old_snapshots(all_snapshots, snapshot_time_delta) for snapshot in old_snapshots: # 
Assemble url url = 'http://{0}:{1}/_snapshot/{2}/{3}'.format(elastic_node, port, backup_repository, snapshot) logging.debug('bulk_delete: URL is: {0}'.format(url)) # Send REST API call try: request = requests.delete(url) request.raise_for_status() logging.info('bulk_delete: {0} completed'.format(snapshot)) except ConnectionError as e: logging.error(bulk_delete: Failed to create {0}.format(e)) logging.error(json.loads(request.text)) except requests.exceptions.HTTPError as e: logging.error(bulk_delete: Failed to create {0}.format(e)) logging.error(json.loads(request.text)['error']['reason']) except requests.exceptions.ConnectionError as e: logging.error(bulk_delete: Failed to create {0}.format(e))def fetch_all_snapshots(elastic_node, backup_repository, port): import json, logging, requests # Assemble URL url = 'http://{0}:{1}/_snapshot/{2}/_all'.format(elastic_node, port, backup_repository) try: #Send REST API call request = requests.get(url) request.raise_for_status() logging.info('fetch_all_snapshots: retrieved all snapshots metadata') return json.loads(request.text) except ConnectionError as e: logging.error(fetch_all_snapshots: Failed to create {0}.format(e)) logging.error(json.loads(request.text)) except requests.exceptions.HTTPError as e: logging.error(fetch_all_snapshots: Failed to create {0}.format(e)) logging.error(json.loads(request.text)['error']['reason']) except requests.exceptions.ConnectionError as e: logging.error(fetch_all_snapshots: Failed to create {0}.format(e))def find_old_snapshots(all_snapshots, snapshot_time_delta): import datetime, logging old_snapshots = [] for snapshot in all_snapshots['snapshots']: snapshot_date = parse_snapshot_name(snapshot['snapshot']) logging.info('find_old_snapshots: snapshot name: {0}: snapshot date: {1}\ '.format(snapshot['snapshot'], snapshot_date)) if snapshot_date <= snapshot_time_delta: old_snapshots.append(snapshot['snapshot']) logging.info('find_old_snapshots: {0}'.format(old_snapshots)) return old_snapshotsdef parse_snapshot_name(snapshot_name=None): import logging, re from datetime import datetime try: if snapshot_name is None: raise ValueError('snapshot_name cannot be None') else: # Pull datetime out of snapshot_name search_result = re.search(r'\d{8}', snapshot_name) logging.info('parse_snapshot_name: Parsed date is: \ {0}'.format(search_result.group())) #Convert regex result into datetime date = datetime.strptime(search_result.group(), '%Y%m%d') logging.info('parse_snapshot_name: Confirm parsed date converted \ to date: {0}'.format(type(date))) return date except ValueError as e: logging.error('parse_snapshot_name: {0}'.format(e)) except AttributeError as e: logging.info('Snapshot {0} is a manual snapshot. It cannot be deleted \ automatically.'.format(snapshot_name))if __name__ == '__main__': main()
Python script to manage elasticsearch backups
python;python 3.x;elasticsearch
null
_unix.315322
In Windows 7 we can disable automatic updates completely. But in Linux distributions like Fedora and Ubuntu, even after disabling automatic updates, they still consume my bandwidth. I have limited bandwidth each month and I want to disable automatic updates completely. Which Linux distro has this capability?
Which Linux distribution can completely disable automatic updates?
linux;distribution choice;distributions
null
_scicomp.4952
I have a simple (and small) linear homogeneous system $Ax=0$, where the entries of the $N\times M$ matrix $A$ are small integers. I do not need fancy methods which efficiently solve almost singular matrices and treat roundoff errors etc. But I need to know if a system is singular and also the general solution for underdetermined systems. It must be rock-solid. Any reference to such an algorithm is welcome. I am coding in C. Is SVD the way to go?
Numeric solution of simple but possibly singular linear system
linear solver
The Integer Matrix Library is a C library that claims to be able to compute the null space of an integer matrix. See also this answer on MathOverflow to the exact same question, which gives a list of libraries (including PARI, which also can be called from C and is still being updated). You could also take a look at LinBox, even though it's written in C++.
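To illustrate the SVD route the question asks about, here is a small numpy sketch (illustration only; the asker is working in C, where LAPACK's dgesvd provides the same decomposition, and the tolerance here is an arbitrary choice):

    import numpy as np

    def null_space(A, tol=1e-12):
        """Return a basis for the (right) null space of A using the SVD."""
        A = np.asarray(A, dtype=float)
        U, s, Vt = np.linalg.svd(A)
        # Singular values below the threshold are treated as zero; the
        # corresponding rows of Vt span the null space.
        rank = int(np.sum(s > tol * s.max())) if s.size else 0
        return Vt[rank:].T  # columns are basis vectors of the null space

    A = np.array([[1, 2, 3],
                  [2, 4, 6]])     # rank 1, so the null space is 2-dimensional
    N = null_space(A)
    print(N.shape)                # (3, 2)
    print(np.allclose(A @ N, 0))  # True

The SVD also answers the singularity question directly: the system is singular (has nontrivial solutions) exactly when some singular values are zero, and the general solution is any linear combination of the returned basis vectors.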
_unix.101326
Most guides seem to revolve around modifying ifcfg-eth0, which doesn't exist on my system..[root@Azaz07]# vi /etc/sysconfig/network-scripts/ifcfg-eth0[root@Azaz07]# ls /etc/sysconfig/network-scripts/ifcfg-lo ifdown-ppp ifup-ippp ifup-sitifdown ifdown-routes ifup-ipv6 ifup-tunnelifdown-bnep ifdown-sit ifup-isdn ifup-wirelessifdown-eth ifdown-tunnel ifup-plip init.ipv6-globalifdown-ippp ifup ifup-plusb net.hotplugifdown-ipv6 ifup-aliases ifup-post network-functionsifdown-isdn ifup-bnep ifup-ppp network-functions-ipv6ifdown-post ifup-eth ifup-routes[root@Azaz07]# /sbin/ifconfiglo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:480 (480.0 b) TX bytes:480 (480.0 b)p5p1 Link encap:Ethernet HWaddr C8:1F:66:03:00:7D inet addr:192.168.0.14 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::ca1f:66ff:fe03:7d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:9818 errors:0 dropped:0 overruns:0 frame:0 TX packets:9634 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:7020054 (6.6 MiB) TX bytes:1665345 (1.5 MiB)wlan0 Link encap:Ethernet HWaddr 70:18:8B:03:3C:59 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)How would I change my p5p1 adapter to static with the IP 192.169.0.55?Also, why is it called a p5p1 adapter, instead of the seemingly more common eth0?Running setup gives me a blank screen upon Device configuration
Set Static IP in CentOS 6.4?
networking;centos;ip
You can read about the new naming convention for devices here, titled: Features/ConsistentNetworkDeviceNaming. The change in naming conventions is also discussed in the official Redhat docs titled: Appendix A. Consistent Network Device Naming. The convention now follows one based on location rather than arbitrarily eth0, etc.Change the network device naming scheme from ethX to a physical location-based name for easy identification and useTo change your IP address you can follow a couple of paths to do this. Use NetworkManager (see official RH docs)You can find out what devices are under NetworkManager's juristiction.Example$ nmcli -p nm====================================================================================== NetworkManager status======================================================================================RUNNING STATE WIFI-HARDWARE WIFI WWAN-HARDWARE WWAN --------------------------------------------------------------------------------------running connected enabled enabled enabled enabled Run the command setup, and change it through the GUI            Change the configuration fileThe file should be called /etc/sysconfig/network-scripts/ifcfg-p5p1. Simply change the line in this file to:IPADDR=192.169.0.55If this file doesn't exist at all, I'd simply create it and add content similar to the following:# /etc/sysconfig/network-scripts/ifcfg-p5p1DEVICE=p5p1BOOTPROTO=staticNM_CONTROLLED=noONBOOT=yesTYPE=EthernetDEFROUTE=yesPEERDNS=yesPEERROUTES=yesIPV4_FAILURE_FATAL=yesIPV6INIT=noNAME=System p5p1IPADDR=192.169.0.55Use the network serviceIt's unclear to me if there are any downsides to using /etc/init.d/network service in addition to NetworkManager at the same time. In all my setups of CentOS 6 I have both running. But I generally make the ifcfg-eth0 device files myself by hand.You could try making sure that this service is enabled and then try the steps above in #2 (using setup) to create the device afterwards. $ /etc/init.d/network startThe setup GUI (actually it can be launched directly with this command: system-config-network) should be able to add devices and manage devices that have their configuration information kept in the ifcfg-* files under the directory /etc/sysconfig/network-scripts/.I highly suggest you at least familiarize yourself with the Deployment Guide. It has a lot of useful information on networking using Redhat base products, which in turn will help with CentOS and Fedora.So why's my network devices list blank?I found this Fedora 16 issue that I believe is the issue you're experiencing (or is at least related). The issue is titled: Bug 802580 - Device p21p1 does not show up in system-config-network. From the sound of this bug, the default behavior is to not create any of the ifcfg-* files unless you change some aspect of the settings via a networking GUI, such as NetworkManager - I believe. So you might need to try that, or follow the details I mentioned in options #3, above, and manually create the file yourself.ReferencesRedhat Deployment Guide for RHEL 6 - Part III. NetworkingCentOS no network interface after installation in VirtualBox
_webmaster.71897
Background: I am implementing a custom cache handler in PHP. The cache script is triggered on demand by website traffic. I want to intelligently handle an influx of simultaneous requests. To do this, I'm using the first request to trigger the cache writer; concurrent requests are deferred until after the cache is built. I welcome debates regarding this approach, but I still want an answer to my actual question below.

Question: I'd like to know the best method to defer loading of webpage content. Here are two options I'm considering.

Option 1: Delay the response server-side. I don't like this option because I could end up having hundreds of sleeping connections, and I don't think hosts would like that.

    <?php
    while (!file_exists($cache)) sleep(1);

Option 2: Reload the page client-side. I don't like this option because the requests could be doubled or tripled, depending on how long it takes to build the cache. With this I'm also worried about SEO. Will bots retry the page? Will they try more than once if the page still isn't ready?

    <?php
    header('HTTP/1.1 503 Service Temporarily Unavailable');
    header('Status: 503 Service Temporarily Unavailable');
    // Maybe code 408 instead
    header('Retry-After: 1');
    header('Refresh: 1');
    // Maybe some JavaScript to reload the page instead

Maybe there's another, better approach?
Delay Loading of Webpage Content / Debounce Cache
seo;php
null
_cstheory.1354
I need to implement one of the logical clock algorithms (described here), to allow me to coordinate an election protocol for a distributed system. I'm struggling to work out how I might go about using the clock to enforce a total ordering on a sequence of events in practice. i.e. I don't see a problem in implementing the data structures in practice, but I can't work out how to ensure that my election is not being duplicated elsewhere in the network.For example, if my 'master' machine goes down, I expect several machines to notice at round about the same time. There will be contention for the 'master' position as several machines try to take it over. My plan is for each machine to broadcast a 'grab the master position' message. Using a logical clock, I should be able to decide what was the globally first machine that sent the message. It will then be acclaimed the master. What I can't work out is how (and where) I will be able to create that ordering over the network messages.Has anyone out there implemented this sort of algorithm and can offer me some guidance on how I should go about solving the problem?Thanks in Advance.
Distributed Elections using Logical Clocks (hints and tips)
dc.distributed comp
null
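A minimal sketch of the Lamport-clock bookkeeping described in the question, written in Python (illustrative only; the tie-break on the pair (timestamp, node id) is one common convention for turning the partial order into a total order, and the actual message transport is omitted):

    class LamportClock:
        """Logical clock: ticks on local events, merges on message receipt."""
        def __init__(self, node_id):
            self.node_id = node_id
            self.time = 0

        def tick(self):
            self.time += 1
            return self.time

        def send(self):
            # Attach (time, node_id) to every outgoing message.
            return (self.tick(), self.node_id)

        def receive(self, msg_time):
            # Advance past the sender's timestamp before recording the event.
            self.time = max(self.time, msg_time) + 1
            return self.time

    # Total order on "grab the master position" messages:
    # compare (timestamp, node_id); the smallest tuple wins the election.
    claims = [(3, "node-b"), (3, "node-a"), (5, "node-c")]
    winner = min(claims)
    print(winner)  # (3, 'node-a') wins the tie at timestamp 3

Provided every node eventually sees every claim, each node computes the same minimum (timestamp, node id) pair, so the same machine is acclaimed master everywhere and the election is not duplicated.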
_unix.339106
The laptop I want to buy comes with Windows 10 preinstalled. I want to install Linux on this laptop, but I want to keep the option to recover Windows (reset to factory settings). Because of that I want to make a bit-by-bit image of the whole disk (all partitions, MBR, etc.). Of course I must have a way to flash this image back to the disk so the laptop is reverted to factory settings with Windows preinstalled. What is the best way to create such an image, and how do I later restore it? I will boot Linux from USB, attach a much bigger hard drive through USB, and store the backup on that USB hard drive. I was thinking about the dd command, but it won't take empty space into account, so it will produce a huge image.
Creating a recoverable disk image
backup;administration;disk image;system recovery
null
_ai.2762
In classical set theory there are two options for an element: it is either a member of a set or not. But in fuzzy set theory there are membership functions that define the degree to which an element belongs to a set. In other words, classical logic says everything is black or white, while fuzzy logic allows for grey, with shades between white and black. The Matlab Simulink library is easy to design with and helpful in practice, and it ships with good examples, like deciding on a tip for a dinner based on service and food quality. In the figure below, some membership functions from Matlab's library are shown:

My question: how do we decide which membership functions to choose when designing a fuzzy controller system? I mean in general, not only in Matlab Simulink. I have seen that triangular and Gaussian functions are used most often in practice, but how can we decide which function will give a better result for decision making? Do we need to train a neural network to decide which function is better depending on the problem and its rules? What other solutions are there?
Fuzzy Logic Controller: Choosing Membership Function
neural networks;fuzzy logic
null
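As an illustration of the two shapes being compared (a hedged sketch only, not a recommendation; the parameter values are arbitrary), triangular and Gaussian membership functions can be written as:

    import numpy as np

    def triangular(x, a, b, c):
        """Triangular membership: 0 at a and c, 1 at the peak b."""
        x = np.asarray(x, dtype=float)
        left = (x - a) / (b - a)
        right = (c - x) / (c - b)
        return np.clip(np.minimum(left, right), 0.0, 1.0)

    def gaussian(x, mean, sigma):
        """Gaussian membership: 1 at `mean`, falling off with width `sigma`."""
        x = np.asarray(x, dtype=float)
        return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

    x = np.linspace(0, 10, 11)
    print(triangular(x, a=2, b=5, c=8))
    print(gaussian(x, mean=5, sigma=1.5))

The practical difference is that the triangular function reaches exactly 0 outside [a, c] and has a sharp peak, while the Gaussian is smooth everywhere and never quite reaches 0, which affects how rules overlap and how smoothly the controller output changes.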
_unix.89913
I want to change the default listening port of httpd to 9090. I can edit the line in the httpd.conf file using the command below:

    sed -i '/^Listen/c\Listen 9090' /etc/httpd/conf/httpd.conf

But the line Listen 80 may have whitespace before it. How do I ignore this whitespace so the line still matches?
sed : Ignore line starting whitespace for match
linux;sed;regular expression
Change your matching pattern to catch whitespace before Listen in the following way:

    /^\s*Listen/

That will include all of

    Listen ..
     Listen ...

and so on.
_softwareengineering.51873
I have always had this question in my mind and I would really be happy to get an explanation. Is it only me, or do you also feel that it's hard to find anything on the MS site? For example, every time I need to download the .NET Framework I have to Google it. You never know what you can download; there is no category for downloads, you are simply left with a search field, and you never know if you downloaded the latest version of the file. The tragic truth is that you have to rely on their competitor Google to find anything on their site. I know that they are a big company, but is it really that hard to have an organized way to publish information?
Why is it so hard to find anything on MS site?
websites;information;search
I think you have Microsoft confused with a corporation that cares about the free downloads. Don't get me wrong, I have met and talked to several people from Microsoft and they are smart, intelligent, and nice people. However, Microsoft is more like Oracle in the sense that they know how to make money. If you make it difficult to find free stuff, people end up buying the (now) reasonably cost alternative. In short, it's simple business. Remember, they don't have to compete with and differentiate themselves from all the other businesses out there, so it really isn't in their best interest to make it easy for you to find the free downloads.Bottom line, the MS site has to promote a lot of different products. If you want helpful information, you need to go to MSDN. MSDN is a mass of information of varying levels of helpfulness. In my experience it tends to be 100% right, and 100% useless. But I'm trying to do things that doesn't match their API exactly--things which they've done but haven't released to the public. MSDN is very difficult to organize, as is any knowledge base. Marketing is easy to organize, and you emphasize what you want people to do.Your free downloads are somewhere in between MSDN and marketing, and as a result just don't get emphasized.Note about Apple's website: There's plenty of marketing (just like MS), and the downloads are pretty easy to manage. However, if you have any kind of technical issue, they do not have a knowledge base that is nearly as useful as MSDN. Unfortunately, that's not saying much. You get more help from third party forums and Q&A sites than you do from Apple itself. I'm hoping that Apple figures out how to change this.
_unix.78270
While playing with my Android phone (CyanogenMod 10.1 with kernel 3.0.78), I found this: a script in /etc/init.d/99system.swap.sh remounts /system as rw and creates an image file in /system/swap/swap.img. After the phone has started, I can see this swap.img file, and by checking /proc/swaps I can see it is in use. But /system is mounted as ro.

I tried to do the same thing manually on another phone, but when I try to remount /system as ro, I get an error message saying there are resources in use. If I swapoff the swap.img file, I can immediately remount /system as ro.

I was wrong: I cannot remount /system as ro after I remount it as rw. lsof shows tons of processes using /system.
Can I remount as readonly if there's a swap image in use?
linux;mount;swap;readonly
It should be possible in theory, as the swapon manpage states:[...] the swap file implementation in the kernel expecting to be able to write to the file directly, without the assistance of the file system [...]So when you have a swap file, the kernel likes to treat it more like a partition, i.e. it determines where the swap file is physically located on disk and then uses that region directly as it would with a partition region. This is a trick to make swapping to a file just as fast as to a regular partition.At that point the filesystem is out of the loop and it should be okay to make the filesystem itself readonly. Whether you're actually allowed to do this is another matter; I haven't tested it myself and Android in particular may behave differently from regular Linux environment.
_webapps.104825
As you can see in the image below (this is what Google shows for a games search), if I look for games that have a rating from 3.5 to 4.0, you can find them with the following search line:

    site:https://itunes.apple.com/us/app game rating: 3.5..4.2

as it will look for apps that have game and rating: in them. And according to this article, when you look for a range you can use the .. operator. But look what I see when I search with the range: of course, it is pretty clear that there are tons of games matching these criteria. What is the right way to search for them?
How to search in AppStore for games that have rating from 3.5 to 4.0
google search
null
_scicomp.27368
I'm playing around with dynamic programming and need to calculate a multidimensional integral $E[V(W)]$ where we assume $W$ has a log normal distribution. I was looking at the following example in this pdf, section 9.4.3 on page 83.To give some background (I summarize from the section): The example is from economics and is about asset allocation. Assume $R$ is a return vector of dimension $n$ and is log-normally distrbuted, i.e. $\log(R) = (\log(R_1),\dots, \log(R_n))$ has a multivariate normal distribution with given mean and covariance matrix, i.e. $\log(R)\sim\mathcal{N}((\mu-\frac{\sigma^2}{2})\Delta t,(\Lambda \Sigma\Lambda)\Delta t)$. The exact structure is not that important. $\Sigma$ is the correlation matrix and $\Lambda$ is simple a diagonal matrix with the standard deviation $\sigma_1,\dots,\sigma_n$ on its diagonal. Using Choleski decomposition we can write $\Sigma = LL^T$. Then one has$$\log(R_1) = (\mu_1-\frac{\sigma^2_1}{2})\Delta t + (L_{11}z_1)\sigma_1\sqrt{\Delta t}$$$$\log(R_2) = (\mu_2-\frac{\sigma^2_2}{2})\Delta t + (L_{21}z_1+ L_{22}z_2)\sigma_2\sqrt{\Delta t}$$and so on, where $z_i$ are independent normal distribution. So that we have$$R_i=\exp{((\mu_i-\frac{\sigma^2_i}{2})\Delta t+\sigma_i\sqrt{\Delta t}\sum_{j=1}^iL_{ij}z_j)}$$Let for simplicity no $\Delta t=1$ then we are interested in the quantity$$ W_{t+1}= W_t(R_f(1-e^Tx_t) + \sum_{i=1}^n\exp{((\mu_i-\frac{\sigma_i^2}{2})+\sigma^2\sum_{j=1}^iL_{ij}z_j)}x_{ti}) $$where $e$ is a $n$ dimensional vector of $1$ and $x_t$ is some $n$ dimensional vector. For a given function $V$ the conditional expectatino of $V(W_{t+1})$ given $W_t, x_t$ can be calculated using Gauss Hermite quadrature $$\sum_{k_1,\dots,k_n=1}^m w_{k_1}\cdot\cdot\cdot w_{k_n} V\left(W_t(R_f(1-e^Tx_t) + \sum_{i=1}^n\exp{((\mu_i-\frac{\sigma_i^2}{2})+\sigma^2\sum_{j=1}^iL_{ij}q_{k_j})}x_{ti})\right)$$ where $w_{k_i}$ are the Gauss-Hermite weights and $q_i$ the corresponding nodes. My question is how can the above sum of sums be efficiently implemented in python? The exponential part can be precalculated using a cummulative sum if I'm not wrong.
How can this multidimensional integral be efficiently implemented in python using Gauss-Hermite quadrature
python;numerical;integration
The summation over $q_{k_i}$ in $V$ is independent of the sums over the products $w_{k_1}w_{k_2}\cdots w_{k_n}$. So we can calculate $V$ first. To avoid double evaluations of same operations we can further decompose the equation:Step 1: precalculate $\tilde L_{ij}$\begin{equation} \tilde L_{ij} = \sigma^2 L_{ij}q_{k_j}\end{equation}Step 2: precalculate $E_i$\begin{equation} E_i = \exp\left(\left(\mu_i-\frac{\sigma_i^2}{2}\right) +\sum_{j=1}^i\tilde L_{ij} \right)\end{equation}Step 3: calculate $X_n$\begin{equation} X_n = W_t\left(R_f(1-e^Tx_t)+\sum_i^n x_{ti}E_i\right)\end{equation}For the sums over the products $w_{k_1}w_{k_2}\cdots w_{k_n}$ we can use numpy.linalg.multi_dot(*w), where w=[w1,w2,...] is the list of all vectors $w_{k_i}$. In a final step we multiply this result by $V(X_n)$.
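A direct numpy sketch of the tensor-product rule above (an illustration only: it uses sigma_i where the question's last formula writes sigma squared, it rescales the hermgauss nodes and weights for standard normal z since numpy's rule integrates against exp(-z^2), and all parameter values in the usage example are made up):

    import itertools
    import numpy as np

    def expected_V(V, W_t, x, mu, sigma, L, R_f, m=5):
        """E[V(W_{t+1})] by tensor-product Gauss-Hermite quadrature."""
        n = len(x)
        q, w = np.polynomial.hermite.hermgauss(m)
        q = q * np.sqrt(2.0)        # rescale nodes for standard normal z
        w = w / np.sqrt(np.pi)      # and normalise the weights per dimension
        base = W_t * R_f * (1.0 - x.sum())
        drift = mu - 0.5 * sigma ** 2
        total = 0.0
        for idx in itertools.product(range(m), repeat=n):
            z = q[list(idx)]                        # nodes for (z_1, ..., z_n)
            weight = np.prod(w[list(idx)])          # w_{k_1} * ... * w_{k_n}
            # Lower-triangular L gives sum_{j<=i} L_ij z_j in one product
            returns = np.exp(drift + sigma * (L @ z))
            W_next = base + W_t * np.dot(x, returns)
            total += weight * V(W_next)
        return total

    # Tiny usage example with made-up parameters
    mu = np.array([0.05, 0.07])
    sigma = np.array([0.15, 0.25])
    corr = np.array([[1.0, 0.3], [0.3, 1.0]])
    L = np.linalg.cholesky(corr)
    x = np.array([0.4, 0.3])
    print(expected_V(np.log, W_t=100.0, x=x, mu=mu, sigma=sigma, L=L, R_f=1.01))

The exponential term depends only on the node combination, not on W_t or x, so for repeated evaluations inside a dynamic program the returns array for every node tuple can be precomputed once and reused, as the answer's step decomposition suggests.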
_unix.298700
My level has 6 to 10 GB files as input. These files contain many lines of data. The next level's maximum input capacity is 2 GB, so I have to split these 6-10 GB files into several sub-2 GB files without breaking lines! Basically I have to split a file based on size, but without breaking lines.
How to split a 6 or 7 GB file into several sub-2 GB files without splitting entry?
command line;files;split
null
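One way to split on line boundaries under a size cap can be sketched in Python like this (an illustration only; GNU coreutils' split --line-bytes=2G does the same job from the shell, and the file names and chunk size here are placeholders):

    def split_by_lines(path, max_bytes=2 * 1024**3, prefix="part_"):
        """Split `path` into files of at most `max_bytes`, never cutting a line."""
        part, written = 0, 0
        out = open("{}{:04d}".format(prefix, part), "wb")
        with open(path, "rb") as src:
            for line in src:                      # reads one line at a time
                if written and written + len(line) > max_bytes:
                    out.close()
                    part += 1
                    written = 0
                    out = open("{}{:04d}".format(prefix, part), "wb")
                out.write(line)
                written += len(line)
        out.close()

    split_by_lines("big_input.txt")

Because the file is streamed line by line, memory use stays small regardless of the input size, and each output file ends on a complete line.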
_softwareengineering.352007
I am currently trying to figure out how to run an asynchronous task in the background when someone hits a REST endpoint. The scenario looks like this:

The user reaches the endpoint /resource. The server immediately gives him a response with status 202 and msg={status: started_computation, resource: [actual resource from db], link: /resource/latest}. At the same time, the server starts an async method to compute the new resource state. After the async method finishes computing, the updated resource will be available at the link /resource/latest that was handed to the user in the first request.

I know how to achieve this, but I don't know how to prevent multiple computations. If, for example, user1 hits the endpoint and right after user2 hits the endpoint, I will have two concurrent methods computing the same resource, which is pointless. I would like to have some kind of lock so the computation only starts when it is not already running. I thought about storing the computation state in the DB, for example computation=started, computation=finished, and checking the state before starting the async task, but is there another way to do such a lock inside the backend? I will do it in Spring.
Background task in REST service only once at a time
concurrency;spring;asynchronous programming;async;asynchronous
null
_cstheory.20608
A deterministic automaton $\mathcal A = (X, Q, q_0, F, \delta)$ is called $k$-local for $k > 0$ if for every $w \in X^k$ the set $\{ \delta(q,w) : q \in Q \}$ contains at most one element. Intuitively that means that if a word $w$ of length $k$ leads to a state, then this state is unique; said differently, for an arbitrary word of length $> k$ the last $k$ symbols determine the state it leads to.

Now if an automaton is $k$-local, it need not be $k'$-local for some $k' < k$, but it has to be $k'$-local for $k' > k$, because the last $k$ symbols of any word with $|w| > k$ determine the state, if any, uniquely.

Now I am trying to connect the number of states to the $k$-localness of an automaton. I conjecture:

Lemma: Let $\mathcal A = (X,Q,q_0,F,\delta)$ be $k$-local. If $|Q| < k$ then the automaton is also $|Q|$-local.

But I failed to prove it; any suggestions or ideas? I hope to use this lemma to derive something about the number of states of an automaton which is not $k$-local for any $k \le N$, for a fixed $N > 0$, but is $k$-local for some $k > N$.
The number of states of local automata
fl.formal languages;automata theory;synchronization
Since you say that $T_w:=\{\delta(q,w):q\in Q\}$ should have at most one element, I'll assume that you use the version of DFA where $\delta$ can be partial. Then this is a counterexample: $X=\{a,b\}, Q=\{0,1,2,3,4\},\delta(q,a)=q+1$ for $q<4$, and $\delta(1,b)=2,\delta(2,b)=3,\delta(4,b)=0$. $F$ and $q_0$ obviously don't matter for this question. The automaton is $6$-local, but not $5$-local, since $T_{abaab} = \{0,3\}$.Edit: this counterexample does not work, I'll keep it so that the comments make sense. The following does, though.Take $X=\{a,b\}, Q=\{0,1,2,3\}$, with transitions $0\to 1(a),1\to 2(a), 2\to 3(a), 2\to 0(b), 3\to 2(b)$. This automaton is $5$-local, but not $4$-local: for $aaba$, we get the paths $0\to 1\to 2\to 0\to 1$ and $1\to 2\to 3\to 2\to 3$, i.e. $T_{aaba}=\{1,3\}$.
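The second counterexample is easy to check mechanically; here is a small Python sketch (hypothetical helper names, written only to verify the stated claims):

    from itertools import product

    # Transition function of the second counterexample (partial DFA).
    delta = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 3, (2, "b"): 0, (3, "b"): 2}
    states, alphabet = {0, 1, 2, 3}, "ab"

    def targets(word):
        """T_w = set of states reachable by reading `word` from some state."""
        result = set()
        for q in states:
            for symbol in word:
                q = delta.get((q, symbol))
                if q is None:
                    break
            else:
                result.add(q)
        return result

    def is_k_local(k):
        return all(len(targets(w)) <= 1 for w in product(alphabet, repeat=k))

    print(targets("aaba"))   # {1, 3} -> not 4-local
    print(is_k_local(4))     # False
    print(is_k_local(5))     # True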
_cs.70612
An integer addition can be represented as follows: $A + B = A \oplus B \oplus carry_{vector}$. My question is how to determine the carry vector from $A$ and $B$ (not from the result)?
How to determine the carry vector of an integer addition
arithmetic;integers
The generation of the carry vector is a recursive process where the results of lower bits factor into the current carry output.In general:$$Carry_{n} = (A_{n} \bullet B_{n}) + ((A_{n} \oplus B_{n}) \bullet Carry_{n-1})$$Note the $$ Carry_{n-1} $$ term. The presence of this term means that carry data ripples through the adder chain.This is the reason that so much effort was devoted in older computer implementation technologies to speeding up this process. This was done by brute force computing the carry for groups of bits in a process called carry look-ahead. Let's see this for two bits:$$Carry_{n} = (A_{n} \bullet B_{n}) + ((A_{n} \oplus B_{n}) \bullet ((A_{n-1} \bullet B_{n-1}) + ((A_{n-1} \oplus B_{n-1}) \bullet Carry_{n-2})))$$As bits get added, this gets messy in a hurry.The classic 74181 ALU used this method for the 4 bits handled by the chip and was often paired with the 74182 Carry Look Ahead chip to speed carry propagation by performing the same between up to four chips. If more that 16 bits needed to be processed, another layer of 74182 was employed to speed up carry propagation in the lower layer. Thus an ALU with 16 74181s, 4 74182s, and one top level 74182 could process 64 bits with a delay of roughly 3 times the four bit ALU delay, instead of 16 times.It is noteworthy though, that there is no easy solution to the problem. Each added bit of look ahead, increases the complexity of the carry generator.
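The recursion above translates directly into code; here is a small Python sketch (an illustration only) that builds the carry-in vector bit by bit and cross-checks it against the closed form carry = (A + B) XOR A XOR B, which follows from the identity in the question:

    def carry_vector(a, b):
        """Carry-in vector c with a + b == a ^ b ^ c (non-negative ints)."""
        carry_out_prev = 0   # Carry_{n-1} in the recursion above
        carries_in = 0
        n = 0
        while a >> n or b >> n or carry_out_prev:
            an = (a >> n) & 1
            bn = (b >> n) & 1
            carries_in |= carry_out_prev << n          # carry entering bit n
            carry_out_prev = (an & bn) | ((an ^ bn) & carry_out_prev)
            n += 1
        return carries_in

    a, b = 0b1011, 0b0110
    c = carry_vector(a, b)
    print(bin(c))                      # 0b11100, the carries generated
    print(a + b == a ^ b ^ c)          # True
    print(c == (a + b) ^ a ^ b)        # True, the closed-form shortcut

The bit-serial loop mirrors the ripple behaviour described above: each carry output depends on the one before it, which is exactly why hardware designs resort to carry look-ahead to break the chain.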
_cstheory.19449
I have built a max network flow graph that carries a certain amount of people from a source to a destination. Now I'd like to attach a lower bound constraint $l(e)$ to each edge $e$, but I don't know what algorithm to use or how to analyze its complexity. Here's the graph:
Algorithm for Max Network Flow with lower bounds and its complexity
graph theory;graph algorithms
null
_unix.383921
I've been trying to mount a USB drive that's formatted as FAT32, and getting an error. The drive works fine on Windows machines.When I try to mount it with sudo mount -t vfat /dev/sdb1 /media/usbdev, I get mount: /dev/sdb1 is not a block device.When I try to mount /dev/sdb to the same place (sudo mount -t vfat /dev/sdb /media/usbdev), I get mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.I've tried Googling around and searching this site. 1 and 2 seem like the most relevant questions, but the solutions proposed there haven't worked. I've tried adding a line to /etc/fstab (/dev/sdb1 /media/usbdev vfat defaults 0 0), also to no avail. I'm pretty confused - what's going on, and what can I do to mount this USB drive? I'd rather not reformat it since I have some important data on there. Here's what lsblk returns: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sdb 8:16 1 29.9G 0 disk sdb1 8:17 1 29.9G 0 part sda 8:0 0 119.2G 0 disk sda2 8:2 0 488M 0 part /boot sda3 8:3 0 118.3G 0 part sda3_crypt 253:0 0 118.3G 0 crypt mint--vg-root 253:1 0 110.4G 0 lvm / mint--vg-swap_1 253:2 0 7.9G 0 lvm cryptswap1 253:3 0 7.9G 0 crypt [SWAP] sda1 8:1 0 512M 0 part /boot/efiAnd here's the relevant portion of sudo fdisk -l: Disk /dev/sdb: 29.9 GiB, 32078036992 bytes, 62652416 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0xc3072e18 Device Boot Start End Sectors Size Id Type /dev/sdb1 96 62652415 62652320 29.9G c W95 FAT32 (LBA)dmesg |tail shows the following: [152334.491944] sdb: sdb1 [152334.493759] sd 3:0:0:0: [sdb] Attached SCSI removable disk [153063.602803] sdb: sdb1So, it seems like the device is being recognized - it just won't mount. I'm new to Linux, so please let me know if I should provide more information. Thanks in advance.
Unable to mount FAT32 USB drive in Mint (is not a block device)
linux mint;mount;usb drive
"Is not a block device" is pretty specific. It suggests you've accidentally overwritten the block device with a regular file at some point. In this case, ls -l /dev/sdb1 will show something other than b in the first column. Here's an example from my system:

    $ ls -l /dev/sda1
    brw-rw----. 1 root disk 8, 1 Aug 3 08:32 /dev/sda1

A - in the first column means a regular file, d is a directory, b is a block device node, c is a character device node, p is a named pipe, and s should be a named Unix socket, I think.

This particular problem should go away if you just reboot: /dev/ is a tmpfs, so it is recreated from scratch on each boot.
_unix.50475
Ghostscript wipes the PDF metadata like author, title, subject, etc. How can I tell Ghostscript not to touch the metadata? I invoke it as follows:

    gs \
      -dBATCH \
      -dNOPAUSE \
      -sOutputFile=<output_file> \
      -sDEVICE=pdfwrite \
      -dPDFSETTINGS=/ebook \
      <input_file>
How to make ghostscript not wipe PDF metadata
pdf;ghostscript
Apparently it's not possible to keep the PDF metadata when usingghostscript. Here is a workaround which first saves the metadata toa file using pdftk, then compresses the file with ghostscriptand finally writes back the metadata also using pdftk.INPUTPDF=<input_file>OUTPUTPDF=<output_file>TMPPDF=$(mktemp)METADATA=$(mktemp)# save metadatapdftk $INPUTPDF dump_data_utf8 > $METADATA# compressgs \ -q \ -sOutputFile=$TMPPDF \ -sDEVICE=pdfwrite \ -dNOPAUSE \ -dBATCH \ -dPDFSETTINGS=/ebook \ $INPUTPDF# restore metadatapdftk $TMPPDF update_info_utf8 $METADATA output $OUTPUTPDF# clean uprm -f $TMPPDF $METADATAEdit: This is a bug in ghostscript, see Bug report and the confirmation that this is not supposed to happen.
_softwareengineering.106702
I'm creating a pretty simple database driven application. Whenever I create a db app, I create classes that mimic the data in the db. Is this good practice?Am I better off making one big call to the database and populating my objects and working with these objects, or should I retrieve data from the db only when needed?
Constant database calls or store in objects?
database development;object oriented design
Is this good practice?

It's called Object-Relational Mapping, ORM. It's done all the time.

Am I better off making one big call to the database and populating my objects and working with these objects, or should I retrieve data from the db only when needed?

That's imponderable. First, you haven't defined "better". Second, it depends on the volatility of the data in the database, the nature of your application, and the performance of your ORM layer.
_vi.11512
I'm familiar with ngT, which will move n tabs backward. Is there an equivalent way to move forward? I know that if I have 5 tabs open and I'm on tab 3, I can use 3gT to move backwards to tab 5. However, this quickly becomes unwieldy as the number of tabs increases and I lose track of the exact number I'm on at a given time.
Is there a way to move n tabs forward?
tabbed user interface
As far as I'm aware, there is no default way to do this. I looked into the ex command tabnext, which accepts a count but functions exactly the same as gt (that is, it moves to tab page {count}, not {count} pages forward):

    :tabn[ext] {count}
    {count}<C-PageDown>
    {count}gt        Go to tab page {count}. The first tab page has number one.

However, this isn't very hard to configure! This vimscript mapping does what you're looking for:

    nnoremap <leader>gt :<C-u>exec 'normal '.repeat("gt", v:count1)<cr>

This literally just simulates pressing gt over and over as many times as you need. With no count given, it will default to 1.

I picked <leader>gt somewhat arbitrarily. Of course, you can remap this to whatever is easiest or most convenient for you, whether it's a separate binding or you choose to override the default gt.
_softwareengineering.83596
I'm starting to think that a lot of my tables could be replaced by just a graph db.

For example: I have 4 tables: accounts, votes, posts, relationships. But I can represent all of these in a graph with different edges:

NODE1 -> type of relation -> NODE2
account -> vote_+1 -> post
account -> wrote -> post
account -> friend -> account2

Is there a difference in performance, or any other difference, between them?
graph or relational database?
database;database design
null
_softwareengineering.46821
I have tried to search for this answer for quite some time and I have gone through the various FAQs and documentation regarding the three licenses, but none of them have been able to answer a question that I have.

I've been working on an idea for a website for some time now, and recently I found open source software that has many components that are similar. It is licensed under the MPL/GPL/LGPL licenses. From my searching and reading, I think I mostly understand the ramifications of what is required if I modify/use the software and want to distribute it.

But what if I want to modify it and not distribute it, but use it on a public website that I generate ad revenue from? Is this illegal? It doesn't seem like it, judging from other open source systems, say Drupal, where they allow you to use the software and it's not considered distribution if people just go to the website.

I know this site may not be the best resource and I've tried some other sites, but I haven't received any clear replies back. If you know some other resource that I could contact, please let me know.

Links for those who don't know:
MPL - Wikipedia, Legalese
GPL - Wikipedia, Legalese
LGPL - Wikipedia, Legalese
How does trilicense (mpl,gpl,lgpl) work when you want to use it on public website
licensing;gpl;websites;lgpl
null
_unix.31872
Posting in the Linux area since I'm using Linux apps and utilities.

My phone crashed (several times) and managed to corrupt my microSD card. It no longer appears to have partitions and shows as 32MB rather than 2GB (that's according to testdisk). dd and ddrescue only pulled 30.6MB of nulls off of it.

It isn't a fake: it's SanDisk branded, purchased from a reputable retailer, and the full capacity has been working perfectly for a year.

I expect it's had it, but I didn't see the harm in asking. Even if I forget about the few files I'd like off of it, formatting will probably leave me with a fairly useless 32MB card. If anyone has any method of at least repairing the card, it would be much appreciated.
SD card corrupt and stuck at 32MB, any way to fix it?
block device;corruption;sd card
null
_unix.229925
I have two drives, /dev/sda and /dev/sdb, that are in a Volume Group with three volumes mounted at /boot, /root, and /home respectively. If I remove /dev/sda after doing

# pvmove /dev/sda
# vgreduce $myVolGroup /dev/sda

will that affect my boot loader, assuming all of the /boot volume was on /dev/sda and it was the initial install disk? I would really like to avoid surprises.
Removing drive from LVM that has the /boot partition
boot;lvm
There are two components to the boot loader: the first stage, which is written directly to the MBR-and-later-blocks and isn't on a filesystem anywhere, and the second stage/config files/kernel image/initrd, which is in /boot. grub at least (I'm assuming that's your bootloader, if not this may not apply) knows about lvm in its own right, and so can find a /boot which uses it and load configuration from there. Thus, on that level it shouldn't make a difference. However, if you plan to remove /dev/sda from the system, you need to ensure that the first stage is in at least one of the remaining block devices; you would probably use some invocation of grub-install for this.
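As a sketch of what that invocation might look like (assuming a BIOS/MBR setup with GRUB 2; the device name is an example, and update-grub is the Debian/Ubuntu wrapper around grub-mkconfig):

# reinstall the GRUB first stage on a disk that will remain in the system
grub-install /dev/sdb

# regenerate the GRUB configuration if your distribution expects it
update-grub    # or: grub-mkconfig -o /boot/grub/grub.cfg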
_webmaster.106922
Would it cause any significant temporary problems to change all canonical URLs from relative to absolute?
Would changing of all canonical URLs from relative to absolute temporarily hurt SEO?
seo;canonical url
If search engines indexed exactly the same URLs which you now want to provide as absolute references, there is no reason to assume that the change affects anything. It's just a different way to specify URL references in the href attribute, but the end result is the same.

However, if your current relative references resolve to absolute URLs different from those which you intend to specify now, indexed URLs might change. Examples of when this could be the case:

- if you have specified a base element (it only affects relative references)
- if your site is accessible from different hostnames (with www and without www, via HTTPS and via HTTP, multiple domains, ...)
_opensource.1261
I often hear about GPL variants, MIT, BSD and many other license options. A general Internet search reveals sites like "Top 20 licenses" or pages showing some licenses going out of fashion and others coming in. With so many licenses to choose from, a major stumbling block for beginners in open sourcing is deciding which license to go for.

So, what are the stats on the most frequently adopted open source license options, and what are their primary differences?
What are the most common open source license options and how do they differ?
open source definition
Determining the relative popularity of various licenses is actually quite difficult. Below are two graphs.

The first is taken from BlackDuck: Top 20 Open Source Licenses, and lists the following top 5 (making up 76 % of the total):

GPLv2 (24 %)
MIT/Expat (20 %)
Apache 2.0 (16 %)
GPLv3 (10 %)
modified BSD (6 %)

The second one is taken from a presentation at the 2013 Linux Foundation Collaboration Summit, Licensing of Software on Github: A Quantitative Analysis, and lists the following top 5 (making up 82 % of the total):

MIT/Expat (36 %)
modified BSD (13 %)
GPLv2 (13 %)
GPLv3 (12 %)
Apache 2.0 (8 %)

There is a lot of interesting data in the Linux Foundation Collaboration Summit presentation (e.g. the distribution between permissive, copyleft, weak copyleft and dual licensing, and the shift over time towards more permissive licenses). I recommend clicking through all the slides.

While the source and methodology of these studies differ (and only the Linux Foundation Collaboration Summit presentation explains the methodology used), both list the same licenses as the top five, although their relative positions differ. While these stats are not definitive (and I hope other answers can show us more studies of this type), I think it is quite likely that these licenses are among the most used ones.

As for how they differ in their most important terms and conditions, that would make a very long answer. However, the FSF has a very good page about various licenses and comments about them; here are direct links to each of the top 5, with a note stating whether the license is copyleft or permissive, as this is by far the main difference between FLOSS licenses:

MIT/Expat - permissive
modified BSD - permissive
GPLv2 - copyleft
GPLv3 - copyleft
Apache 2.0 - permissive (and compatible with downstream relicensing to GPLv3)
_softwareengineering.323252
I have a Python script that continuously makes GET and POST requests to a third-party API at about 1 request per second. It uses a while loop, so the requests all come from the same process. Currently, I just run and kill the script on my local machine with python run.py, but I'd like to be able to start and stop this script from anywhere (e.g. my phone), so I was considering hosting it as a Django app on Heroku. Is this possible?

I've considered a few ways to accomplish this, but I'm not sure any of them would work. One thought would be to have a run button on a web page that starts a background process to run the script, but:

- I wouldn't know how to kill this process from a browser once it was created (heroku ps:stop is less than ideal since I would need a console).
- I'm not sure that Heroku was intended to have a script running for potentially several days.

If all else fails, I'll probably just end up running the script on a Raspberry Pi and exposing localhost, but it would be convenient to use Heroku to do this.
Is there a way to control a looping Python script with Heroku?
python;django;heroku
UPDATED ANSWER

The best way I've found to control a looping script on Heroku is by defining a process type for the script in the Procfile. This way, you simply have to scale the number of dynos for the process type to 1 and 0 to start and stop the script, respectively. So for a process type named run:

Start the script
heroku ps:scale run=1

Stop the script
heroku ps:scale run=0

-- Demo --

Here's a demo of a way to easily scale the dynos using a Django application and the Heroku API endpoint for scaling. I made a GitHub repository in case it would help anyone, and I've included the important pieces below:

Procfile
web: gunicorn {project}.wsgi
run: python run.py {args}

run.py
def run():
    # looping code here

if __name__ == '__main__':
    run()

index.html
<label for="run-loop">Loop running: </label>
<input id="run-loop" type="checkbox">

main.js
// set the checkbox to checked if the dyno is up, unchecked otherwise
var getLoopStatus = function() {
    $.ajax({
        url: "/get-loop-status/",
        success: function(data) {
            if (data.quantity > 0) {
                $('#run-loop').prop('checked', 'checked');
            }
        }
    });
};

// make a request to the server to scale the run process type
var runLoop = function(quantity) {
    $.ajax({
        url: "/run-loop/",
        method: "POST",
        data: { quantity: quantity },
        headers: {
            "X-CSRFToken": Cookies.get('csrftoken'),
        }
    });
};

$(function() {
    // bind the checkbox to the runLoop function
    $('#run-loop').change(function() {
        runLoop(+$(this).is(':checked'));
    });
    getLoopStatus();
});

urls.py
urlpatterns = [
    url(r'^$', TemplateView.as_view(template_name='loopapp/index.html'), name='home'),
    url(r'^get-loop-status/$', LoopStatusView.as_view()),
    url(r'^run-loop/$', RunLoopView.as_view()),
    url(r'^admin/', include(admin.site.urls)),
]

views.py
class LoopStatusView(View):
    def get(self, request, *args, **kwargs):
        response = requests.get(HEROKU_RUN_URL, headers=HEROKU_HEADERS)
        return JsonResponse({'quantity': response.json()['quantity']})

class RunLoopView(View):
    def post(self, request, *args, **kwargs):
        data = request.POST.dict()
        payload = RUN_POST_PAYLOAD % data['quantity']
        response = requests.patch(HEROKU_RUN_URL, data=payload, headers=HEROKU_HEADERS)
        return JsonResponse({'new_quantity': response.json()['quantity']})

-- How to find the Heroku API URL --

You should have Chrome DevTools open on the Network tab for this. Go to your app in your Heroku dashboard and click on the Resources tab. Then click the pencil icon next to the looping process. Then, activate a dyno for that process type by clicking the switch and then clicking Confirm. The Heroku API endpoint will come up in DevTools in the form of:

https://api.heroku.com/apps/{app ID}/formation/{process-ID}

In order to make requests to this endpoint, you'll need an access token. Expand the request in DevTools and find the Authorization: Bearer {access token} header under the Request Headers of the Headers tab.

Note: this token expires frequently. You'll probably either want to refresh this token when it expires or use Basic Authentication, which doesn't expire. I had Postman generate a Basic Authentication token for me.

OLD ANSWER

I've found an answer I'm somewhat happy with. I don't think it's possible to have a front-end interface that controls the execution of continuous loops on Heroku, but I did find an app called Nezumi which allows you to run functions and control Heroku processes from your phone.

What I ended up doing was making a file run.py in the root directory of a Django app and pushing this to Heroku. Then, in Nezumi, I can click on the run icon and type python run.py {args}, which creates a run process and executes the loop. I can stop the process and view the logs by clicking on the ps icon in Nezumi. So now I can start and stop this script from anywhere I have my phone.

Nezumi app view: http://nezumiapp.com/images/homepage2.jpg
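If you would rather drive the same formation endpoint from a shell instead of a Django view, a minimal sketch with curl could look like the following (assuming the Heroku Platform API v3 headers; $APP, $TOKEN and the process type name run are placeholders matching the Procfile above):

# scale the "run" process type to 1 dyno (start) or 0 (stop)
curl -X PATCH "https://api.heroku.com/apps/$APP/formation/run" \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"quantity": 1}'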
_codereview.20674
While I'm aware that code with callbacks tends to be complex, I'm wondering if there are any patterns or something to improve my situation. All I do here is check if a file exists and then print its name if it's not a directory:

var fs = require('fs'),
    filename = process.argv[2];

fs.exists(filename, function(exists) {
    if (exists) {
        fs.stat(filename, function(err, stats) {
            if (stats.isDirectory()) {
                console.log(filename + ": is a directory");
            } else {
                // do something with file
                console.log(filename);
            }
        });
    } else {
        console.log(filename + ": no such file");
    }
});
Checking if a file exists then do something if it's not a directory
javascript;node.js;file system
You can actually omit fs.exists(). fs.stat() will return an error when the item you are testing is not there. You can scavenge through the err object that fs.stat() returns to see what error caused it. As I remember, when fs.stat() stats a non-existing entry, it returns an ENOENT ("no such file or directory") error.

And so:

var fs = require('fs'),
    filename = process.argv[2];

fs.stat(filename, function(err, stats) {
    if (err) {
        // doing what I call the "early return" pattern, or basically blacklisting:
        // we stop errors at this block and prevent further execution of code.
        // In here, do something like check what error was returned
        switch (err.code) {
            case 'ENOENT':
                console.log(filename + ' does not exist');
                break;
            ...
        }
        // of course you should not proceed, so you should return
        return;
    }

    // back there, we handled the error and blocked execution;
    // beyond this line, we assume there's no error and proceed
    if (stats.isDirectory()) {
        console.log(filename + ": is a directory");
    } else {
        console.log(filename);
    }
});

So essentially, we reduced the nesting caused by callbacks and simplified the code by restructuring the if-else statements.
_unix.266037
$ ls sess.vim -lh
-rw-r--r-- 1 root root 11K Feb 26 18:52 sess.vim

I want this file to be readable for everyone and writable by no one (except by root). Thus I set its permissions to 644 and ownership to root:root.

$ echo text >> sess.vim
zsh: permission denied: sess.vim

Seems fine. After some changes in vim I do :w! (force write) and the file is saved successfully. Now:

$ ls sess.vim -lh
-rw-r--r-- 1 MY_USERNAME users 11K Feb 26 19:06 sess.vim

Wt.. Why? How?
Vim writes to file without having permissions
files;permissions;vim
Using :w! in vim is similar to the following:

echo 'test' > sess.vim.temp
mv sess.vim.temp sess.vim

The mv command only cares about the directory permissions; the permissions of the file are not relevant. This is because you are modifying the directory, not writing to the file. To accomplish your goal, you will also need to adjust the permissions of the directory the file resides in.
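One way to see this for yourself is to watch the inode number: if vim replaces the file rather than writing into it, the inode changes (a small sketch, assuming GNU ls):

ls -i sess.vim      # note the inode number
# ... edit in vim and save with :w! ...
ls -i sess.vim      # a different inode means the old file was replaced, not modified in place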
_datascience.13756
I have read in the literature that in some cases the training set is not representative of the real-world dataset. However, I cannot seem to find a proper term describing this phenomenon; what is the proper term for this problem?

Edit: So far I have settled on the term domain adaptation, briefly described as a field in machine learning which aims to learn from a certain data distribution in order to predict data coming from a different (but related) target distribution.
Discrepancy between training set and real-world data set: domain adaptation?
machine learning;predictive modeling;dataset;domain adaptation
null
_webapps.52720
I have several "from" addresses added to my Gmail account. In the new compose view, the "from" address is visible and changeable as long as the "to" field is focused, but is collapsed when the subject or the message body is focused. I want the "from" address to always be visible so I can glance at it to quickly make sure that I will send the e-mail using the right address. Is there a way to achieve this?
Always show from address in GMail's compose view
gmail
null
_ai.1544
While thinking about AI, this question came to my mind: could curiosity help in developing a true AI? According to this website (for testing creativity), curiosity refers to a persistent desire to learn and discover new things and ideas; a curious person:

- always looks for new and original ways of thinking,
- likes to learn,
- searches for alternative solutions even when traditional solutions are present and available,
- enjoys reading books and watching documentaries,
- wants to know how things work inside out.

Let's take Clarifai, an image/video classification startup which can classify images and video with the best accuracy (according to them). If I understand correctly, they trained their deep learning system on millions of images with supervised learning. In the same algorithm, what would happen if we somehow added a curiosity factor for when the AI has difficulty classifying an image or its objects? It would ask a human for help, just like a curious child. Curiosity makes a human being learn new things and also helps to generate new original ideas. Could the addition of curiosity change Clarifai into a true AI?
Could curiosity improve artificial intelligence?
human inspired
null
_unix.364825
I have a Banana Pi running some custom Arch distribution by Ryad. I plugged in a cheap wifi dongle with the Ralink RT5370N chipset. Under /etc/systemd/network:

10-eth0.network
[Match]
Name=eth0
[Network]
DHCP=yes
RouteMetric=10

20-wlan0.network
[Match]
Name=wlan0
[Network]
DHCP=yes
RouteMetric=20

Whenever I plug in the wireless usb card, I have trouble SSHing into the thing. Existing secure shell sessions continue to work normally. What would cause this?

networkctl (before plugging in)
IDX LINK  TYPE     OPERATIONAL SETUP
  1 lo    loopback carrier     unmanaged
  2 eth0  ether    routable    pending
  3 tunl0 tunnel   off         unmanaged
3 links listed.

networkctl (after plugging in; takes a long time to respond)
IDX LINK  TYPE     OPERATIONAL SETUP
  1 lo    loopback carrier     pending
  2 eth0  ether    routable    pending
  3 tunl0 tunnel   off         pending
  4 wlan0 wlan     off         configuring
4 links listed.

wireless_tools is installed. lsmod yields the following possibly relevant results:

rt5370sta 673082 0
rt2800usb 13632 0
rt2800lib 48613 1 rt2800usb
rt2x00usb 11127 1 rt2800usb
rt2x00lib 42474 3 rt2x00usb,rt2800lib,rt2800usb
mac80211 247524 3 rt2x00lib,rt2x00usb,rt2800lib
What is causing SSH to stop working over wired connection when I plug in a wireless usb card?
wifi;network interface;arch arm;bananapi
null
_unix.366351
I'm researching the possible problems when cloning an existing hard drive onto a new hard drive using the dd utility. I'm interested to know if anyone is aware of unexpected behavior (or has experienced problems trying to execute a similar task) with alignment to the ATA 4 KiB sectors. I will be cloning the existing hard drive onto the new (bigger) drive using dd and then will use gparted to expand the target drive.

Am I correct in thinking that gparted should align the drive automatically?

Any insights will be much appreciated, Tomek
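For what it's worth, I plan to verify the alignment after the clone with parted's align-check (a sketch; /dev/sdb and partition number 1 are placeholders for my target drive):

# check whether partition 1 on the target disk is optimally aligned
sudo parted /dev/sdb align-check optimal 1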
Cloning hard drive using the dd utility
linux;ssd;dd
null
_webapps.73709
I want to link 2 spreadsheets with importrange let's say A with B. B is the source. I want A to import numbers from B every 6 hours, because the numbers in B are constantly moving and since the sheet is really big, it would slow down sheet A. So therefore, I want it to only import every 6 hours. Is there a way to do that?
Google Sheets importrange every x hours
google spreadsheets
The IMPORTRANGE function does not seem to cache its results, so it will refresh whenever the source spreadsheet changes. But we can circumvent this by writing our own function:

function customImportRange(spreadsheetUrl, rangeStr, timestamp) {
  var sheet = SpreadsheetApp.openByUrl(spreadsheetUrl);
  var range = sheet.getRange(rangeStr);
  return range.getValues();
}

Install this function by clicking Tools → Script editor, paste the code, and click the Run button. This will pop up a dialog asking you to give the script permission to run.

The script does exactly the same as IMPORTRANGE, but, being a Google Apps Script function, Google Spreadsheets will cache its results and not fetch any new data as long as the parameters to the function are unchanged. That means that we can change the timestamp parameter every 6 hours to trigger a refetch, which again means that we need a formula that generates a new value every 6 hours.

Given that we put =NOW() in cell B1, put the following formula in cell C1:

=DATEVALUE(B1) * 10 + ROUNDUP(HOUR(B1) / 6)

Cell C1 now displays a number that changes every 6 hours, meaning we can use it as input to our function. Put the following in cell A2:

=customImportRange("https://docs.google.com/spreadsheets/d/1qFryUlGfGT8dp_lx9sSE_7suCV0t8cAg9btM1AOPnSI/edit#gid=0", "'Custom function'!A4:F6", C1)

The parameters are:
- The full URL of the source spreadsheet
- The range notation for the data you want to fetch
- The cache key

I have put up an example spreadsheet to demonstrate this, feel free to copy it for your own experimentation. To have the spreadsheet update also when it is not opened in a browser, you should go to File → Spreadsheet settings and set Recalculation to "On change and every hour".
_codereview.162411
I tested this program on Smooth-CoffeeScriptI just wanted to have as many eyes on this as I could to learn about coffeescript;Is there anything I should do to improve it?Am I using the tools available to me adeptly? (i.e. good variable naming, layout, commenting, etc.)userQuestion = ' what is 2 + 2 ?'correctLength = 1correctAnswer = '4'correctAnswerMsg = That's correct!closeMsg = Almost there!digitsOnlyMsg = Digits only!singleDigitMsg = Single digit!tryAgainMessage = Try again!prompt userQuestion, '', (answer) -> # the answer can not be set as a variable here # (answer) grabs the answer from the console userEntry = answer # -> casts that answer down to the rest of the indent to be used plainly # at least that's how I think it works :/ show userEntry # the user's answer always shows on the first line if userEntry.length isnt correctLength # .length only works when comparing the length of a string to an integer show singleDigitMsg userEntry = (Number) userEntry correctAnswer = (Number) correctAnswer # now we convert our user entry and our correct answer into integers so the math functions work if userEntry is correctAnswer show correctAnswerMsg exit program else if userEntry is correctAnswer - 1 or userEntry is correctAnswer + 1 show closeMsg else if isNaN(userEntry) show digitsOnlyMsg else # this only shows when the user types in 1, 2, or 6-9. Or multiple characters with a digit, like '-4' show tryAgainMessage
CoffeeScript 2 + 2 program
beginner;functional programming;coffeescript
null
_unix.356203
I have a custom systemd unit file that uses Type=forking. The initial process's stdout is logged to the journal, but the forked process's stdout is not. How can the forked process's stdout be logged to the system journal?
How do you send stdout to the journal when using a process type of forking
systemd;fork
null
_cs.11732
I want to show that non-emptiness of a context-free language is P-complete, so I am trying to reduce CVP to this problem by generating a grammar from a circuit. I consider every type of gate in the circuit and create the corresponding productions in the grammar. I hope this is a sound way to prove P-completeness, but I have a problem showing that the reduction is log-space. I think that given the circuit, I need only one pointer into the structure, because I can write each production directly. However, I also need to know how the non-terminal symbols are named. Of course, I can use binary numbers as variable names (which fits in log space), but it seems to me that I need to know all the names to produce the grammar's productions. I have no idea how to walk this graph (circuit) and assign names to the non-terminals using only log space.

Could anyone explain to me how to prove that this is a log-space reduction?
Reduction CVP to CFG problem
context free;reductions
null
_datascience.13611
I was much impressed by IBM's News Explorer view. When you search for a keyword, it shows a network view and explores the relational edges as a graph. Here it is. I think it uses k-means clustering in the background.

I want to do the same for Twitter data. I have tweets, users and mentions, and I want to show the same kind of network relation between them. Can anyone suggest a graphical data tool which shows such a network, just like IBM's, from my data, using clustering algorithms like k-means or any other? Or do I need to go for a different approach?
How to create a social network like IBM's Watson News Explorer?
machine learning;k means;unsupervised learning;orange
null
_webapps.17870
I have 1000 unsorted, untagged bookmarks that I want to upload to a service and have it automatically and intelligently tag all of the bookmarks by the category they best fit; I could then adjust them later. But it seems the bookmarking services just suggest tags. Does any bookmarking service fully automate everything from one upload? I looked at Xmarks.com, Delicious.com, Diigo.com and Springpad.com, but didn't notice this ability.

Update: Delicious can tag bookmarks, but only for sites that have already been tagged by its users. Is there any service that uses more extensive data to get more accurate tagging?
Is there any bookmarking service that can automatically sort all my bookmarks?
delicious;bookmarks
null
_unix.251494
Is there a way to list out all the files within a directory tree in a single list, sorted by modification time on Linux? ls -Rlt lists out files recursively, but they are grouped under different folders in the output and as a result, the output isn't sorted as a whole. Only the contents of each directory are sorted by time.
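For reference, the closest I've gotten is with GNU find and sort, but I'm not sure it's the idiomatic way (oldest first; swap sort -n for sort -rn for newest first):

find . -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2-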
Flatten the output of a recursive directory listing
files;ls;recursive
null
_unix.94811
I have a secondary internal HDD to store auxiliary data. I want to give it a better name. How do I do it?
Renaming a Hard Drive
terminal;hard disk
For FAT16 and FAT32 partitions, use mlabel from the mtools package, or dosfslabel.

Method 1# using dosfslabel (as @Macro suggested)

Unmount the partition:
sudo umount <device>

Set the label using:
sudo dosfslabel <device> label

Method 2# using mtools

Install the package using:
sudo apt-get install mtools

Unmount the external drive. Partitions generally need to be unmounted before you can fiddle with them, so unmount the partition of the device you want to change the label for:
sudo umount <device>

where the device name can be something like /dev/sdbx, which you can find with sudo fdisk -l.

Check the current label:
sudo mlabel -i <device> -s ::

Note that we're using the special :: drive, which allows us to specify the device descriptor on the command line; otherwise we'd have to edit ~/.mtoolsrc to assign a drive letter.

Change the label:
sudo mlabel -i <device> ::<label>

Reference Link
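To confirm the new label took effect, you can query the filesystem metadata (a quick check, assuming util-linux's blkid is available; the device path is an example):

sudo blkid /dev/sdb1
# look for LABEL="..." in the output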
_webmaster.79724
Within the context of a market study, I would like to determine if a website has responsiveness qualities. Given an website URL, I would give a notation regarding these points:good points: use of media-queriespresence of a viewport tagbad points:elements, fonts with static sizesI do not want to test the design quality, nor the usability, just know if there has been a work done in that direction. Do you see other things that makes obvious that a website has been build with responsivity in mind?I want to make automated tests, so criterias should be simple.
Test if a website has responsive qualities
responsive webdesign;testing
I'm not sure what your question is aiming at, and it is also rather broad: Responsive Design is a wide field. You may also consider asking over at http://ux.stackexchange.com; I have the feeling you might get more suitable answers there than here. That said, I'll try to answer: if you just want to check for 'responsiveness' (yes/no), then media queries are probably the criterion I would recommend looking at.

Basically, Responsive Design just means that the website design should adapt to different screen sizes. That can mean that the layout changes slightly according to screen size by rearranging layout elements or the like. It can however also mean that image sizes and/or font sizes change relative to the screen size / viewport size. So, following your question, I would probably just change the browser window size to see if anything changes. If in doubt, I would look at the CSS and check the media queries.

What I would personally be interested in:
- Does the layout change according to screen size?
- Does the navigation adapt to the layout? How?
- Is the website layout still zoomable, or fixed?
- Are elements hidden to achieve a well-fitting layout? Which elements?
- Are media elements (e.g. video players and slide shows) integrated in a mobile-friendly way? For example, are HTML5 fallbacks used for Adobe Flash players?
- How are pop-up windows / modal windows handled? Do they also adapt to screen size?
- Can the page be viewed in Reader mode? (iOS)
- Are form elements (text fields, buttons, ...) 'mobile friendly'?
- Are HTML5 tags used that may (or may not) be supported by various mobile devices?

to be continued
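Since you want automated tests, here is a minimal sketch of the yes/no check in shell, assuming curl and grep are available. It only looks for a viewport meta tag and @media rules in the HTML itself; stylesheets referenced via link tags would need to be fetched and grepped the same way:

URL="https://example.com/"
HTML=$(curl -sL "$URL")

# viewport meta tag present?
echo "$HTML" | grep -Eiq '<meta[^>]*viewport' && echo "viewport tag: yes" || echo "viewport tag: no"

# any inline @media rules?
echo "$HTML" | grep -iq '@media' && echo "media queries (inline): yes" || echo "media queries (inline): no"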
_unix.80643
How can I find -R in files with grep? I tried grep -R *.GNU to get the lines that contain -R, but it returns nothing.
How to get the lines that contain -R with grep
grep
Try:

grep -- -R *.GNU

The -- indicates the end of options, so that -R is seen as an argument instead of an option to grep.
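An equivalent spelling, in case it is useful in scripts, is to pass the pattern with -e, which also prevents it from being parsed as an option (same files as in the question):

grep -e -R *.GNU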
_unix.350280
We have a peculiar problem with SSH connections. We have a local DNS server to resolve the local subnet, 192.100.X.X. A few days ago we started having a problem when connecting to local servers by hostname. This is what happens: when I SSH from one local Unix system (systemA) to another Unix system (systemB), it waits for a few seconds (10-20 sec) and then gets through. If I immediately try the same connection again, it gets through right away. It used to get through immediately like that every time, but now every new SSH connection to a local server shows this delay. Surprisingly, when I connect using the IP address, it gets through immediately. I say surprisingly because nslookup gives the correct IP address for the systemB hostname, and does so quickly.

So if I assume DNS is working correctly, why does the SSH connection take time? I searched on the Internet and there are some suggestions, e.g. setting UseDNS to No, but that didn't help me.

ssh root@systemB -vvv
OpenSSH_4.2p1, OpenSSL 0.9.8a 11 Oct 2005
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0

At this stage it waits for 15-20 seconds. And then it gets through....

debug1: Connecting to bachost [192.100.X.X] port 22.
debug1: Connection established.
:::
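Two checks I plan to run next, based on suggestions from my searching (neither confirmed yet; GSSAPI negotiation and slow reverse DNS are commonly mentioned causes of exactly this kind of delay):

# does disabling GSSAPI authentication remove the delay?
ssh -o GSSAPIAuthentication=no root@systemB

# does the reverse lookup of the server address hang on our DNS server?
dig -x 192.100.X.X @<local_dns_server>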
Slow SSH connection
ssh;dns;ip
null
_unix.200014
After upgrading to version 21 of Fedora, I was issued sendmail.cf.rpmnew and submit.cf.rpmnew files. Since sendmail uses .mc files to generate new configuration, I was expecting to receive sendmail.mc.rpmnew instead. Judging from this thread on Red Hat, it seems like a long-standing issue.

My question is: what can be done after receiving a .cf.rpmnew file that is different from my existing .cf file?
No sendmail.mc.rpmnew file after upgrade despite having sendmail.cf.rpmnew file
fedora;upgrade;sendmail
null
_softwareengineering.79146
Is it feasible to have a layer of API simply so that people in the team don't have to understand the actual implementation of the individual components each of us is working on? I had this thought because I started to find that sometimes, when working on a project with a team of people (I am new to working in a group), it is kind of difficult to know where to start.

We want to split a huge application into parts so that each of us can concentrate on our own part, while at the end of the day we still want these parts to all work together, without every member having to know the actual implementation of every part thoroughly. But to me, an API usually serves as an interface for external developers to extend or build new things from our work without having to go directly into the core. Since we are all working in one team, having an API for each component just for us to work among ourselves seems a little overkill and funny.

So what is the usual practice in situations like this? Although I am using Java in my case, I think a correct practice for such situations works for projects in all languages, right?
In my API, how can I make knowing implementation details unnecessary?
design;api
Think of it this way: When a brand-spanking-new developer joins the team, you're going to assign him to a given area of the project. Wouldn't you like him to be able to get up to speed on that section of the project so he can complete his assignment, or do you want him to spend three months learning the whole massive edifice?Or from your side - suppose you spend a lot of time on another project and need to fix a bug, wouldn't you like to be able to limit the scope of your debugging efforts?(There is a method here, just hang on.)API stands for Application Programming Interface. Nothing in that name defines at what scope it is defined for. In my opinion, every class has an API. By always thinking about how the class would be used from elsewhere, you can make each class, package, web service, and so on easy to use and maintain.As you plot your design at first, you will see several large sections. For example, in a shopping application, you can break it down into Inventory, Billing, and Shipping. Each of those will have specific things they need to communicate with each other. Each of those in turn can be broken down further. You end up with a tree. The easiest to maintain applications do not communicate across branches in this tree in undefined ways, like using public fields, public statics and singletons. They only have to worry about the fields and objects they own, through those object's public interface: its API.Everything has an API. What you should be worrying about is whether or not that API is easy to use, maintainable, and testable, not whether you need one. A well-defined interface helps you limit the scope of feature additions, bug squashing, and everyone's learning curve.
_codereview.24552
I am learning ORM with Hibernate by creating a movie rental application. Currently, I have the following three entities for persistence:

- User
- Movie
- Rental (composite element for the many-to-many mapping between the above two)

Now, to handle the cart updates, I have the following code in a session-scoped bean:

@ManagedBean
@SessionScoped
public class UserManager extends BaseBean implements Serializable {
    ....
    private Map<Integer, Movie> cart;
    ....
    public void updateCart(Movie selectedMovie) {
        if (!isLoggedIn()) {
            return;
        }
        int id = selectedMovie.getMovieID();
        if (cart.containsKey(id)) {
            cart.remove(id);
        } else {
            cart.put(id, selectedMovie);
        }
    }
}

Now, for the user's purchase scenario, I need to keep track of the number of rentals per movie. So I thought of adding that property to the Rental entity and using Rental as the value type for the map. But this class itself contains Movie as a property for the mapping, as below:

<set name="rentals" cascade="all" lazy="true" table="USER_MOVIES">
    <key column="userRegistrationID" not-null="true"/>
    <composite-element class="Rental">
        <property name="bookingDate" column="bookingDate" type="date"/>
        <many-to-one name="movie" column="bookedMovieID" class="Movie" not-null="true"/>
    </composite-element>
</set>

So, while adding to / removing from the cart, I would have to create / destroy an instance of the Rental class, which seems like a bad idea. So I am thinking that I should restructure my entities.

Am I right in assuming the above? Is there a better approach?

I am using the following:
- JSF 2.1 + PrimeFaces
- Hibernate 4.3
- MySQL
How do I structure my entities for easier persistence?
java
null
_unix.235691
I am attempting to dual boot Linux Mint 17.2 (Cinnamon, 64 bit) and Windows 8.1 on my Acer E 15 (model # E5-573-55W1). I have successfully installed Mint, but I can't get GRUB to start in a reasonable way. Here is what I have done:

1. From Windows, created a new partition on my machine to install Mint onto.
2. Created a bootable USB drive.
3. Restarted my computer and pressed F2 to enter the BIOS settings:
   Boot mode: UEFI
   Secure Boot: Disabled
   Boot Order: USB HDD first
4. Mint boots from the USB drive and I install:
   The installer didn't recognize that I had Windows installed, so I selected the "something else" option (i.e. not the "remove everything" option).
   I created 3 new partitions from the new partition that I made in Windows: one for /, one for swap and one for /home.

Now I am confused as to what my BIOS settings should be. I have the following set:
Boot mode: UEFI
Secure Boot: Disabled
Boot order: (see image)

When I boot, I don't get GRUB; I go directly to Windows. I know that my install worked because I was able to launch it through the GRUB on the USB. I can provide more detailed instructions on this if that would be helpful.

Full computer information:

The things that I am suspicious of right now are:
- Bad BIOS settings
- Wrong boot loader location

Edit: Reinstall with different boot loader location
I re-installed and set the "Device for boot loader installation" to sda2, but that didn't make any difference. Same behavior.

Edit: Steps to boot into Linux Mint (in an unacceptably poor way):
1. In the BIOS, set the boot order to use the USB HDD.
2. GRUB will be launched. Press ESC.
3. Now you should be in GRUB command mode. Exit it by typing exit.
4. Now you should be on the Boot Manager page. From here you can launch the right GRUB (i.e. the first option).
5. This is what the right GRUB looks like. The first option will launch the installed version of Linux Mint (complete with the username I created during install). The third option will start Windows.

The only question remaining is: why can't I open this GRUB on startup? What BIOS settings do I need?
Why is windows launching instead of grub. Bad bios settings? Wrong boot loader location?
linux mint;system installation;bios
null
_unix.329298
I wrote two custom scripts to brighten/dim my screen and want to bind them to my F9 and F10 keys. I put them into /opt/bin, used sudo chown root:root script and sudo chmod 755 script on both, and they work when called from the terminal.

When I now try to run xbindkeys -v, with these lines added to ~/.xbindkeysrc:

"/opt/bin/dim_screen.sh"
  Control + c:75
"/opt/bin/brighten_screen.sh"
  Control + c:76

it gives me this error message:

displayName = :0.0
rc file = /home/pi/.xbindkeysrc
rc guile file = /home/pi/.xbindkeysrc.scm
getting rc guile file /home/pi/.xbindkeysrc.scm.
WARNING : /home/pi/.xbindkeysrc.scm not found or reading not allowed.
2 keys in /home/pi/.xbindkeysrc
min_keycode=8 max_keycode=255 (ie: know keycodes)
"/opt/bin/dim_screen.sh"
    m:0x4 + c:75
    Control + F9
"/opt/bin/brighten_screen.sh"
    m:0x4 + c:76
X Error of failed request: BadAccess (attempt to access private resource denied)
  Major opcode of failed request: 33 (X_GrabKey)
  Serial number of failed request: 17
  Current serial number in output stream: 21

At first I thought it was about the file permissions of the scripts, so I added

ALL ALL= NOPASSWD: /opt/bin/brighten_screen.sh
ALL ALL= NOPASSWD: /opt/bin/dim_screen.sh

to my /etc/sudoers. But the error persisted, so I read it again, and now, after reading about XGrabKey, I think the key signals are already in use by some other program when xbindkeys wants to read them, so that it can't access them. It also won't work when I don't use function keys.

Because it may well be that F9 and F10 are reserved for internal purposes, I changed the xmodmap mapping of F9 to F13 and of F10 to F14. I could temporarily make it work by following the instructions by Vincent Yu on the question "Using xbindkeys to bind the meta key (a.k.a. super key / Windows key) to left click and allow drag and drop" (I noticed what he said about the changes not being persistent across sessions), but now after a reboot it gives me the same error again (with F9 substituted by F13), even though I ran xmodmap -e 'keycode 75 = F13' and xmodmap -e 'keycode 76 = F14' respectively and changed my ~/.Xmodmap.

I don't know how to handle that or find out what blocks xbindkeys from using the keys. Google didn't give me helpful results and a glance over the posts on unix.SE (with the exception of the aforementioned one) didn't help either.

I use the Raspbian Jessie core with LXDE, if that's relevant.
BadAccess on X_GrabKeys when using xbindkeys
keyboard shortcuts;xbindkeys
I had a similar problem; Google led me to https://askubuntu.com/questions/499926/why-do-these-xte-commands-work-in-terminal-but-not-when-bound-with-xbindkeys, which basically says to add the xbindkeys-specific modifier "release" to the key binding so that the script fires on key release ("keyup" in JavaScript parlance). Doing that fixed the issue for me.

So for your case the following should work:

"/opt/bin/dim_screen.sh"
  Control + c:75 + release
"/opt/bin/brighten_screen.sh"
  Control + c:76 + release
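If you need to double-check which keycode and modifier state xbindkeys sees for a given key, it ships a helper mode that prints the binding syntax for the next key you press (run it from a terminal inside the X session):

xbindkeys -k
# press the key combination; it prints the "command" / key lines you can paste into ~/.xbindkeysrc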
_unix.140169
I have a code that takes values from a log file and populates them into an excel spreadsheet:function ltrim(s) { sub(/^[ \t]+/, , s); return s }function rtrim(s) { sub(/[ \t]+$/, , s); return s }function trim(s) { return rtrim(ltrim(s)); }function getfield( xml, fieldname) { i = index ( xml, fieldName=\fieldname\); if(i < 1){ ts = ; } else{ ts = substr( xml, i + length(fieldName=\fieldname\) ); ts = substr( ts, index( ts, value=\ ) + length( value=\ ) ); ts = substr( ts, 1, index( ts, \)-1); gsub(/&amp;/, &, ts ); if ( index( ts, , ) > 0 ) ts =\ts\; } return ts; }BEGIN { FS = --------;OFS=,;} {sub(/..,[A-Z][A-Z]/, CT-PT)} { orig = $0; # $7 = getfield(orig, Modalities In Study); $9 = getfield(orig,Subject Patients Sex); $10 = getfield(orig,Modality Body Region); $11 = getfield(orig, Patients Birth Date); $12 = getfield(orig, Protocol Name); $13 = getfield(orig, Timepoint ID); print; }Everything is working great except for one thing. If I place this print statement anywhere in the code, it loops in the whole Excel spreadsheet. So basically the print statement appears in every other row. I only want it to appear in the first row:{print Study Instance UID,Number of Series,Number of Instances, Exam Transfer Date,Exam Date,Subject Number,Modalities In Study,Upload Status,Subject Patients Sex,Modality Body Region,Patients Birth Date,Protocol Name,Timepoint}I am not sure why this occurs. I even tried placing this line right after function trim(s) { return rtrim(ltrim(s)); } but still keeps looping in the spreadsheet. Does anyone have any suggestions or might know why this occurs?
Awk print problem
awk
Aside from functions, awk code looks like

condition {actions}

If the {actions} block is missing, the implicit action if the condition returns true is {print}. If the condition is missing, and this directly relates to your question, it is implicitly true for every line of the input. That's why you get that line printed so many times.

You need to specify a condition. Either put the print statement inside the BEGIN block as Stéphane suggests, or you can do this:

NR == 1 {print "this is the header"}

That action only occurs when the condition is true, which is for the first line of input.
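As a small self-contained illustration of both approaches (hypothetical input file and header text, run from a shell):

# header printed once via BEGIN, then every input line
awk 'BEGIN { print "Header" } { print }' input.txt

# or trigger on the first input line only
awk 'NR == 1 { print "Header" } { print }' input.txt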
_reverseengineering.13870
I'm trying to understand how this game builds the map and terrain. Specially if it uses a heightmap. So I can write an app that can view the map and move around (camera) freely.First, I'm not entirely sure if the game actually uses height maps. But it was made in 2000/2001, so it makes sense if it did.The following files are loaded each time a map is loaded. I'll use map 4 as an example from here on:/map/map4.spt/map/map4.inf/map/map4.obj/terrain/chunk4.dat/terrain/addterrain4.ctr/terrain/addterrain4.obj/terrain/addterrain4.inf/terrain/addterrain4.t16/terrain/terrain4.obj/terrain/terrain4.t16*.T16I know these are textures.Here is terrain4.t16 unpacked (terrain):And here is addterrain4.t16 unpacked (building):*.SPTI know this is an encrypted script. It contains things like ambient light color, sky color and other configurations for the map. *.CTRI'm not sure about this yet but I don't think this is the heightmap.*.INFI'm not sure about this yet but I don't think this is the heightmap. But it looks like positions for buildings/objects on the map.*.OBJAlthough this sounds like a 3D object file, it isn't. So far I was able to parse this file with this script:with open(map4.obj, rb) as bin: bin.seek(0, 2) file_size = bin.tell() bin.seek(0) unk1 = struct.unpack(<h, bin.read(2))[0] width = struct.unpack(<h, bin.read(2))[0] height = struct.unpack(<h, bin.read(2))[0] c_width = struct.unpack(<h, bin.read(2))[0] c_heigth = struct.unpack(<h, bin.read(2))[0] size1 = struct.unpack(<I, bin.read(4))[0] size2 = struct.unpack(<I, bin.read(4))[0] bin.seek(50) # (1, 76, 76, 304, 304, 5776, 92416) print(unk1, width, height, c_width, c_heigth, size1, size2) for x in range(size1): unk2 = struct.unpack(<BBBB, bin.read(4)) for y in range(size2): unk3 = struct.unpack(<BB, bin.read(2))The script reads the file till the end so I think I got the parsing right. But I still don't understand what those bytes (unk2, unk3) mean.Here are the first few bytes: (1, 76, 76, 304, 304, 5776, 92416).76x76 seems to me like an image widthxheight. This is the part where I start to suspect that this must be the heightmaps.304x304 is the size of the map coordinates, if I go to the bottom right part of the map, the coords show me x304 y304. (0,0) (304,304)5776 is 76*76.92416 is 304*304.I'm guessing the cellspacing is 4, because 304/76 = 4.Hmmm... 
Maybe I should be reading terrain4.obj instead of map4.obj ???Checking terrain4.obj (filesize: 2,194,888)with open(terrain4.obj, rb) as bin: bin.seek(0, 2) file_size = bin.tell() bin.seek(0) width = struct.unpack(<I, bin.read(4))[0] height = struct.unpack(<I, bin.read(4))[0] print(width, height) for x in range(width*height): unk1 = struct.unpack(<fffHH, bin.read(16)) print(unk1) for y in range(10): unk2 = struct.unpack(<ffffffBBBBff, bin.read(36)) print(unk2) unk3 = struct.unpack(<HH, bin.read(4)) print(unk3) print((bin.tell()-file_size) == 0)results of prints (last few lines):(64656, 900)(9536.0, 0.0, -9664.0, 8, 60)(9536.0, 367.3583679199219, -9664.0, 0.6106882691383362, 0.7202576398849487, -0.3290725350379944, 125, 125, 125, 255, 0.5, 0.5)(9472.0, 439.9534912109375, -9664.0, 0.6727219820022583, 0.5930730104446411, -0.4423908591270447, 72, 72, 72, 255, 0.0, 0.5)(9472.0, 529.98291015625, -9600.0, 0.5103294849395752, 0.5574429631233215, -0.6548444032669067, 91, 91, 91, 255, 0.0, 0.0)(9536.0, 428.7572021484375, -9600.0, 0.5774530172348022, 0.5274922251701355, -0.6231372356414795, 83, 83, 83, 255, 0.5, 0.0)(9600.0, 382.6859436035156, -9600.0, 0.32311421632766724, 0.6967470049858093, -0.6404222846031189, 110, 110, 110, 255, 1.0, 0.0)(9600.0, 331.24566650390625, -9664.0, 0.4635116457939148, 0.8214490413665771, -0.3322325348854065, 149, 149, 149, 255, 1.0, 0.5)(9600.0, 326.34356689453125, -9728.0, 0.5303630828857422, 0.847672700881958, -0.012873286381363869, 122, 122, 122, 255, 1.0, 1.0)(9536.0, 370.5403747558594, -9728.0, 0.603411078453064, 0.7964465618133545, 0.039598409086465836, 107, 107, 107, 255, 0.5, 1.0)(9472.0, 423.61572265625, -9728.0, 0.6945193409919739, 0.7165639996528625, -0.06464438885450363, 97, 97, 97, 255, 0.0, 1.0)(9472.0, 439.9534912109375, -9664.0, 0.6727219820022583, 0.5930730104446411, -0.4423908591270447, 72, 72, 72, 255, 0.0, 0.5)(64656, 900)(9664.0, 0.0, -9664.0, 8, 60)(9664.0, 323.531005859375, -9664.0, -0.16421209275722504, 0.9194830656051636, -0.3571907877922058, 202, 202, 202, 255, 0.5, 0.5)(9600.0, 331.24566650390625, -9664.0, 0.11137175559997559, 0.9239282011985779, -0.3659959137439728, 199, 199, 199, 255, 0.0, 0.5)(9600.0, 382.6859436035156, -9600.0, 0.32311421632766724, 0.6967470049858093, -0.6404222846031189, 110, 110, 110, 255, 0.0, 0.0)(9664.0, 370.8617248535156, -9600.0, -0.030711062252521515, 0.8036384582519531, -0.5943247675895691, 202, 202, 202, 255, 0.5, 0.0)(9728.0, 387.73626708984375, -9600.0, -0.3357184827327728, 0.8114649057388306, -0.478348970413208, 202, 202, 202, 255, 1.0, 0.0)(9728.0, 359.2901306152344, -9664.0, -0.4858676493167877, 0.869583010673523, -0.0880785584449768, 202, 202, 202, 255, 1.0, 0.5)(9728.0, 373.6744079589844, -9728.0, -0.6241185665130615, 0.7812370657920837, -0.012026631273329258, 202, 202, 202, 255, 1.0, 1.0)(9664.0, 302.3572998046875, -9728.0, -0.23366673290729523, 0.9231089949607849, -0.3054006099700928, 202, 202, 202, 255, 0.5, 1.0)(9600.0, 326.34356689453125, -9728.0, 0.23029862344264984, 0.9547197222709656, -0.18834225833415985, 177, 177, 177, 255, 0.0, 1.0)(9600.0, 331.24566650390625, -9664.0, 0.11137175559997559, 0.9239282011985779, -0.3659959137439728, 199, 199, 199, 255, 0.0, 0.5)(64656, 900)True*.DATFirst few bytes of 2,127,308 total:00000000 19 00 00 00 00 01 00 00 23 00 00 00 00 00 80 42 ........#......B00000016 00 00 FF 43 00 00 80 C2 00 00 00 00 FF FF 7F 3F ...C...........?00000032 00 00 00 00 7A 7A 7A FF 00 00 00 3F 00 00 00 3F ....zzz....?...?00000048 00 00 00 00 00 00 FF 43 00 00 80 C2 00 00 00 00 
.......C........00000064 FF FF 7F 3F 00 00 00 00 7A 7A 7A FF 00 00 00 00 ...?....zzz.....00000080 00 00 00 3F 00 00 00 00 00 00 FF 43 00 00 00 80 ...?.......C....00000096 00 00 00 00 FF FF 7F 3F 00 00 00 00 7A 7A 7A FF .......?....zzz.00000112 00 00 00 00 00 00 00 00 00 00 80 42 00 00 FF 43 ...........B...CI'm not sure about this one though.My Questions:Could the .obj file possibly be the heightmap, if not, what is it?Any idea what the .CTR and .INF files are? (guesses?)Additional Information:The map 4 is called Lava Canyon. It features paths surrounded by canyons of lava.Minimap Texture:Related files: Download From Mediafire
Analyzing Game Map/Terrain Format
binary analysis;hex
null
_webapps.78537
Is there any way you can make it an option to have the auto numbering turned off on repeating sections? It's causing a lot of errors with the staff that are reviewing the forms. For instance, by default I think it says Item 1, Item 2, etc. In my form I have to insert a Zone 1, Zone 2, etc. However I might jump right from Zone 2 to Zone 95, but your form shows up as Item 3. So our staff might input it as Zone 3 instead of Zone 95. It's been a big headache to try to get them to understand this. Just wondering if the auto numbering can be removed.
Cognito Forms: Repeating Sections Auto Numbering
cognito forms
null
_webapps.56766
A person with low computer skills was writing a message in Gmail (web interface). You know how, as you type, Gmail automatically saves the message and the word "Saved" appears at the bottom? She clicked "Saved", thinking it was the command to save. Now she can't find her message. Is there any way of getting it back?
Recovering a saved email in Gmail
gmail;email;backup
null
_codereview.125891
As stated in the tags, I'm a beginner to C. This code implements a dynamic array (List in .NET, vector in C++, etc) for any pointer type. I would like that it be reviewed on memory usage and code structure/style.list.h#ifndef _game_structures_list_h#define _game_structures_list_h#include <stdbool.h>typedef enum { LIST_ERR_NONE = 0x00, LIST_ERR_POP_EMPTY = 0x01, LIST_ERR_OUT_OF_BOUNDS = 0x02, LIST_ERR_MEMORY = 0x03, LIST_ERR_UNITIALIZED = 0xff} ListError;typedef struct { int capacity; int length; ListError error; void **data;} List;/** * Creates a new List with the given starting capacity. * @param capacity The initial allocation size for this list, not its length! * @return A new empty List */List list_new(int capacity);/** * Frees memory used by the list and makes it unusable */void list_destroy(List *list);/** * Gets the error of the last operation on this list * @return The last error, or LIST_ERR_NONE if successful */ListError list_error(List *list);/** * Gets the amount of items in the list * @return Length of list */int list_length(List *list);/** * Removes all elements from the list * @param list The list to be cleared */void list_clear(List *list);/** * Access the item at the given index. * @return Pointer to the item at the given index */void *list_get(List *list, int index);/** * Sets the value at the given index to the item given */void list_set(List *list, void *item, int index);/** * Adds an item to the end of the list * @param list Where to add the item to * @param item The value to be added */void list_push(List *list, void *item);/** * Removes the last item from the list and returns it * @param list The list to be popped * @return The value removed */void *list_pop(List *list);/** * Adds a new item to a list at the given position, moving all remaining items to the right * @param list The list to which the item will be added * @param item The value to be added * @param index The position in the list where the item will be placed */void list_insert(List *list, void *item, int index);/** * Removes the item at the given index from the list, returning it and shifting all remaining items to fill the gap * @param list The list to remove * @param index Index of the item to be removed * @return The removed value */void *list_remove(List *list, int index);#endiflist.c#include <stdlib.h>#include <string.h>#include list.h#define L_ERR(list, err) list->error = err; return#define L_OK(list) L_ERR(list, LIST_ERR_NONE)#define L_BOUNDS(list, index, ...) 
if (index < 0 || index >= list->length) {L_ERR(list, LIST_ERR_OUT_OF_BOUNDS) __VA_ARGS__;}#define CAPACITY_FACTOR 2static inline bool change_capacity(List *list, int new_capacity) { void **new_data = realloc(list->data, new_capacity * sizeof(*list->data)); if (new_data) { list->data = new_data; list->capacity = new_capacity; L_OK(list) true; } L_ERR(list, LIST_ERR_MEMORY) false;}static inline bool adjust_capacity(List *list) { int length = list->length; int capacity = list->capacity; int half_cap = capacity / CAPACITY_FACTOR; if (length < half_cap) { return change_capacity(list, half_cap); } else if (length == capacity) { return change_capacity(list, capacity * CAPACITY_FACTOR); } L_OK(list) true;}List list_new(int capacity) { List list = { .capacity = 0, .length = 0, .data = NULL }; if (capacity) { change_capacity(&list, capacity); } return list;}void list_destroy(List *list) { list_clear(list); free(list->data); list->data = NULL; list->capacity = 0;}ListError list_error(List *list) { if (!list->capacity) { list->error = LIST_ERR_UNITIALIZED; } return list->error;}int list_length(List *list) { return list->length;}void list_clear(List *list) { list->length = 0; L_OK(list);}void *list_get(List *list, int index) { L_BOUNDS(list, index, NULL); L_OK(list) list->data[index];}void list_set(List *list, void *item, int index) { L_BOUNDS(list, index, ); list->data[index] = item; L_OK(list);}void list_push(List *list, void *item) { if (!adjust_capacity(list)) return; list->data[list->length] = item; list->length++; L_OK(list);}void *list_pop(List *list) { if (!list->length) { L_ERR(list, LIST_ERR_OUT_OF_BOUNDS) NULL; } void *item = list->data[list->length - 1]; if (!adjust_capacity(list)) return NULL; list->length--; L_OK(list) item;}void list_insert(List *list, void *item, int index) { L_BOUNDS(list, index, ); if (!adjust_capacity(list)) return; void **base = list->data + index; int size = (list->length - index) * sizeof(*list->data); if (!memmove(base + 1, base, size)) { L_ERR(list, LIST_ERR_MEMORY); } list->data[index] = item; list->length++; L_OK(list);}void *list_remove(List *list, int index) { L_BOUNDS(list, index, NULL); if (!adjust_capacity(list)) return NULL; void **base = list->data + index; void *item = *base; int size = (list->length - index) * sizeof(*list->data); if (!memmove(base, base + 1, size)) { L_ERR(list, LIST_ERR_MEMORY) NULL; } list->length--; return item;}
Generic variable-size array
beginner;c;array;c11
Better Naming ConventionsOne of the things that immediately jumps out at me is the name you used for the data structure. As you mentioned, a dynamic array is called a List in C# and in Java. However, in the C/C++ world, we generally like to refer to dynamic arrays as either (ahem) dynamic arrays or vectors. When I saw List, my brain automatically started thinking I was going to be looking at code for a singly/doubly linked list of some sort (since I come from a C/C++ world). I'm sure others would find the use of the name List to describe a vector a little misleading as well. Obviously, you'd need to rename all your functions and your error enumeration to reflect this name change and keep the API consistent. For the code I'll be presenting, I'll be calling the structure a cvector.Use Opaque TypesOne issue with the way you have your structure defined in the header file is that it exposes it's internal representation to the user. That is, if the user includes your header file, they are free to change the contents of the members without ever using your functions. As you can probably imagine, without proper documentation on how to properly manipulate the fields (or even with proper documentation), this is very error-prone. Ideally, you'd like your users to only be able to manipulate your structure using your API. To do this, we need to forward declare the structure in the header file and supply its definition in the implementation file. One disadvantage with this approach though is that you'll be forced to use heap allocation to initialize opaque structures.cvector.h// forward declare heretypedef struct cvector_ cvector;// no need to assign values to enumerations past the first one since// the compiler automatically provides incremented values for them.typedef enum { CVECTOR_ERR_NONE = 0x00, CVECTOR_ERR_POP_EMPTY, CVECTOR_ERR_OUT_OF_BOUNDS, CVECTOR_ERR_MEMORY, CVECTOR_INVALID_OPERATION, CVECTOR_ERR_UNITIALIZED} CVectorError;cvector.c// define structure in herestruct cvector_ { // these values can never be negative so use unsigned numbers to reflect that size_t capacity, size; void **data;};CVector APIHaving is_empty() convenience function, a function to return the current capacity of the vector, a reserve() function to reserve memory storage, and maybe a shrink_to_fit() function would be great to have. You will also want to have init() and free() functions to both initialize and destroy the vector you created. Also, notice that I removed the error code member from the vector structure. It isn't needed because if you design your API to return error codes where appropriate, the user could just query the return result of whatever API function they're calling. Moreover, it serves to just bloat the data structure unnecessarily. You should look to have your set(), get(), push(), pop(), insert(), and remove() functions return error codes that the user could query if they so choose. 
Below is a simplified implementation of some of the functions I talked about to give you an idea of what I mean.

cvector *cvector_init(size_t capacity)
{
    cvector *cv = calloc(1, sizeof(*cv));
    if (cv && internal_cvector_reserve(cv, capacity) != CVECTOR_ERR_NONE) {
        free(cv);
        cv = NULL;
    }
    return cv;
}

void cvector_free(cvector *cv)
{
    free(cv->data);
    free(cv);
}

static CVectorError internal_cvector_reserve(cvector *cv, size_t new_capacity)
{
    void **new_data = realloc(cv->data, new_capacity * sizeof(*(cv->data)));
    if (!new_data) {
        return CVECTOR_ERR_MEMORY;
    }
    cv->data = new_data;
    cv->capacity = new_capacity;
    return CVECTOR_ERR_NONE;
}

CVectorError cvector_reserve(cvector *cv, size_t new_capacity)
{
    return new_capacity > cv->capacity
        ? internal_cvector_reserve(cv, new_capacity)
        : CVECTOR_INVALID_OPERATION;
}

bool cvector_empty(const cvector *cv)
{
    return cv->size == 0;
}

CVectorError cvector_shrink_to_fit(cvector *cv)
{
    return internal_cvector_reserve(cv, cv->size);
}

Avoid Macros When They Are Unnecessary

In your case, all the macros are completely unnecessary and could just be implemented as either small static functions or as typed constants. This one is especially egregious: L_ERR. That's because it hides a return statement and makes reading the code using it very awkward. Instead of the CAPACITY_FACTOR macro, you could just as easily use:

static const size_t CAPACITY_FACTOR = 2;

You gain the benefits of type safety and self-documenting code with this.

EDIT:

Revise allocation strategy

Currently, your allocation strategy is forcing you to reallocate every single time a modifying operation is done on a vector. As an example, assume I create a vector with 10 as the initial capacity:

Capacity: 10
Size: 0

Then I push an element into the vector. Your code would check and see that my size is less than the capacity and halve the capacity; this incurs a reallocation which is very expensive relative to the actual push operation:

Capacity: 5
Size: 1

Another push:

Capacity: 2 (this is truncated down from 2.5)
Size: 2

Another push... Now the size is equal to the capacity so I reallocate to double the length and push:

Capacity: 4
Size: 3

Another push... Now the size is less than the capacity and we reallocate to half the storage:

Capacity: 2
Size: 4

Now we have an interesting situation where the size is greater than the capacity... and your adjust_capacity doesn't take that into account and will happily return true. Now when I do another push, I have invoked undefined behavior. Not to mention that I've managed to reallocate on every single push operation! Notice too that this behavior will exist for any initial capacity you provide. In fact, given an initial capacity of N, your allocation strategy will invoke undefined behavior in exactly \$\lceil \log_2 N \rceil\$ modifying operations!

The fix for this would be to just ditch halving the capacity and focus on making the vector grow by some certain factor. Read this article here for an interesting vector allocation scheme.
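To make the grow-only idea concrete, here is one possible shape for such a policy. This is a rough sketch rather than drop-in code for your API: the ensure_capacity name and the starting capacity of 8 are my own choices, and it assumes the struct cvector_ definition shown above.

#include <stdbool.h>
#include <stdlib.h>

/* Grow-only policy: the capacity only ever increases, and only when the next
 * element would not fit. Shrinking is left to an explicit shrink_to_fit()
 * call, so ordinary pushes stay amortized O(1). */
static bool ensure_capacity(cvector *cv, size_t needed)
{
    if (needed <= cv->capacity) {
        return true;                       /* nothing to do, no realloc */
    }
    size_t new_capacity = cv->capacity ? cv->capacity : 8;
    while (new_capacity < needed) {
        new_capacity *= 2;                 /* growth factor of 2; 1.5 also works */
    }
    void **new_data = realloc(cv->data, new_capacity * sizeof(*(cv->data)));
    if (!new_data) {
        return false;                      /* old buffer is still valid */
    }
    cv->data = new_data;
    cv->capacity = new_capacity;
    return true;
}

/* A push would then start with: if (!ensure_capacity(cv, cv->size + 1)) ... */

With this shape, pushing into a vector of capacity 10 never reallocates until the eleventh element, instead of reallocating on every single modifying operation.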
_cs.8941
I took a course on compilers in my undergraduate studies in which we wrote a compiler that compiles source programs in a toy Java-like language to a toy assembly language (for which we had an interpreter). In the project we made some assumptions about the target machine closely related to real native executables, including:

- a run-time stack, tracked by a dedicated stack pointer (SP) register
- a heap for dynamic object allocation, tracked by a dedicated heap pointer (HP) register
- a dedicated program counter register (PC)
- the target machine has 16 registers
- operations on data (as opposed to, e.g., jumps) are register-to-register operations

When we got to the unit on using register allocation as an optimization, it made me wonder: What is the theoretical minimum number of registers for such a machine? You can see by our assumptions that we made use of five registers (SP, HP, PC, plus two for use as storage for binary operations) in our compiler. While optimizations like register allocation certainly can make use of more registers, is there a way to get by with fewer while still retaining structures like the stack and heap? I suppose with register addressing (register-to-register operations) we need at least two registers, but do we need more than two?
Theoretical minimum number of registers for a modern computer?
compilers;computer architecture
null
_cstheory.7100
There is theoretical evidence that the naive cartesian-product construction for the intersection of DFAs is the best we can do. What about the concatenation of two DFAs? The trivial construction involves converting each DFA into an NFA, adding an epsilon-transition and determinizing the resulting NFA. Can we do better? Is there a known bound on the size of the minimal concatenation DFA (in terms of the sizes of the prefix and suffix DFAs)?
Efficient concatenation of DFAs?
cc.complexity theory;fl.formal languages;automata theory;dfa
Yes - it is known that there exist DFAs of $m$ and $n$ states, respectively, such that the minimal DFA for the concatenation of the corresponding languages has $m \cdot 2^n - 2^{n-1}$ states, which matches the known upper bound.

This was first observed by Maslov in 1970, and rediscovered by Yu, Zhuang, and K. Salomaa in their paper "The state complexities of some basic operations on regular languages", TCS 125 (1994), 315-328.
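For a concrete sense of the blow-up (my own arithmetic, not a figure from those papers): with $m = 4$ and $n = 3$ the bound evaluates to $4 \cdot 2^3 - 2^2 = 28$ states, even though the NFA produced by the trivial construction has only $m + n = 7$ states before determinization.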
_unix.314659
/usr/bin/rsync -avh -r /Parent/Folder1 /Destination/
if [ $? == 0 ]
then
    cp /FolderCopyStatus/Success /Result/Success$(basename !:3)
else
    cp /FolderCopyStatus/Failure /Result/Failure$(basename !:3)
fi

Question 1

I am using the rsync command to sync two folders in CentOS. If the rsync command is successful, then I copy a folder to the Result directory from the Success directory and append the current date. This works fine. What I want is that when I copy from Success, instead of appending the date, Folder1 from the above command should be appended. How can I do that?

Question 2

I have automated this shell script in crontab. I want to pass Folder1 as a parameter to the command in the automated script. How can I do that?

Update:

if [ $? == 0 ]
then
    cp /FolderCopyStatus/Success /Result/Success$(basename !:3)
else
    cp /FolderCopyStatus/Failure /Result/Failure$(basename !:3)
fi

Problem: $(basename !:3) does not work in a script, but works in a normal command like echo $(basename !:3)
Using previous command parameter in shell script
shell script;cron;rsync
null
_webmaster.32947
The Yandex spider is a frequent visitor to one of the sites I manage. On occasion it replaces the page name with two ampersands and a space. So if the page is: /mypage.aspx?param=value then it will try and crawl it as: /&& ?param=value Any idea why it is doing this?EDIT:If I remember correctly the IP that this mistake is coming from is based in California and not Russia. I believe that they crawl US sites from a US based IP address. Not sure if that helps.More information about the request: IP: 199.21.99.82City: Palo AltoState: CaliforniaCountry: United StatesISP: Yandex Inc.User-Agent: Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)
Yandex frequently replaces page names with ampersands
web crawlers;crawl errors;yandex
null
_webapps.74421
I'm messing around with Google Sheets today, and I'm trying to put together a balance sheet similar to the one my bank uses. Its got both a credit and debit field in each row and a running balance to the far right. I'm kinda new to this, so I'm learning as I go.I've got it set up like so - cell E2 has the following formula:=sum(c2:d2)While cells E3 and beyond have:=(C4*-1)+D4+E3It works for the most part, but if you'll notice cells E6 onwards each have the same value as a result of the formula being applied to every cell in that column. Is there any way to hide the balance value or change it to 0 if the debit/credit field is empty, just for layout purposes?edit:It's supposed to be a running balance of an account. So the end result is like thisDEBIT CREDIT BALANCE DATE 5 5 05 Feb 20151 4 06 Feb 2015 10 14 07 Feb 20158 6 09 Feb 2015But just to re-iterate, my issue isn't so much with the formula but rather with the way it displays the total.I'm looking for something like this (I zeroed out E5-E11 by hand):The balance should display P0.00, or be blank, when the line is empty.
How to get a running balance?
google spreadsheets
I'd use this formula in E3 and copy down:

=IF(AND(E2<>"",OR(C3<>"",D3<>"")),E2-C3+D3,"")

EXPLANATION:

The IF statement contains 3 comma separated parameters:

1. logical_expression
2. value_if_true
3. value_if_false

In this case the logical expression is AND(E2<>"",OR(C3<>"",D3<>"")).

AND takes multiple logical expressions as comma separated parameters, and returns TRUE only if all expressions are true. In this case these two:

1. E2<>""
2. OR(C3<>"",D3<>"")

Number one checks if there is anything in the cell above (if it's not equal to nothing). Number two is an OR function, which works like AND but returns TRUE if at least one of the expressions is true. In this case:

1. C3 (the Debit value on this row) is not empty
2. D3 (the Credit value on this row) is not empty

So to summarize: if the cell above has a value, and Debit or Credit has a value, the value_if_true is used. In the example: E2-C3+D3 (the cell above minus Debit plus Credit). If the expression evaluates to false, the value_if_false is used, in this case "" (an empty string).
_softwareengineering.330586
I've got Java logging user activities to Fluentd, and Fluentd is then writing these activities into an Elasticsearch index. Every user activity is recorded, some include:

- User1 follows user2
- User1 likes article1
- User1 creates article
- User1 searches for tag
- user2 signs up
- More...

Now for each of these activities I'm storing a user object. For example, a CREATED activity would look like this:

{
  activity: CREATED,
  user: {
    userId: X,
    userName: xxxx,
  },
  article: {
    title: XOXO,
    description: LOTS OF MARKUP,
    date: DATE_CREATED,
    ...more data on article include locations, coords, etc...
    tags: [
      {tagName: relatedTagToArticle, tagId: 1},
      {tagName: relatedTag2ToArticle, tagId: 2}
    ],
  },
}

The document could become larger for different activities. But by storing this sort of information I'd be able to

select * activities where activities.user = [list of the users followers]

and process the results in some sort of algorithm.

Is it fine to keep storing activities like this? Should I avoid this, and if so why?

I'm also wondering how I should figure out and store popular tags and searches. Should I have a program that runs every X minutes and calculates the number of unique SEARCH activities and stores that information in a Redis list?

Edit

One of the main reasons I'd add all this extra information (storing whole objects, such as an article) is so I can query the ES index and get activities from a user's followers! So say I wanted to build a news feed in my app, I could query ES with something like this (pseudo code):

{
  select all from user_activities where {
    activity_type: [LIKE, COMMENT, CREATED, FOLLOW]
    userId: [LIST_OF_FOLLOWING_IDS]
    date > XX AND date < YY
    <SOMETHING HERE TO DETERMINE POPULAR ACTIVITIES>
  }
}

I'm not sure if this is a good approach or whether it will scale or not! Doing it this way, the data stored could become stale; for instance, if I displayed a CREATED article from the above query, the article could've been updated! Although I'm not really worried about that just yet.
How should I store user activities in ElasticSearch and figure out popular searches?
java;design;logging;redis;elasticsearch
null
_unix.338578
When I download the kernel directly as a tar.xz archive and untar it, the size is around 1 GB. But when I download it via git clone from here, the size is around 7 GB, and it shows only the master branch. Why is there such a huge difference?
Linux kernel source code, size difference
linux kernel
The tarball only contains the source code for the specific release of the kernel in the tarball, whereas the git repository (cloned using git clone) contains the history of the kernel going back quite a long time. Even if you only see the master branch when you initially clone it, using the default clone parameters you actually have the full repository locally: git log will show you the full history, git branch --remote will show all the available branches.

If you only want the latest commit, you can use a shallow clone which will be much smaller:

git clone --depth 1 ...

or if you want a specific date,

git clone --shallow-since=...

You can combine that with a specific branch or tag to only download that branch's tip or that tag:

git clone --depth 1 --branch v4.10-rc4 git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux-4.10-rc4

This produces a tree using 947MiB (and a 159MiB download).
_cs.33595
Say I have an undirected finite sparse graph, and need to be able to run the following queries efficiently:

- $IsConnected(N_1, N_2)$ - returns $T$ if there is a path between $N_1$ and $N_2$, otherwise $F$
- $ConnectedNodes(N)$ - returns the set of nodes which are reachable from $N$

This is easily done by pre-computing the connected components of the graph. Both queries can run in $O(1)$ time.

If I also need to be able to add edges arbitrarily - $AddEdge(N_1, N_2)$ - then I can store the components in a disjoint-set data structure. Whenever an edge is added, if it connects two nodes in different components, I would merge those components. This adds $O(1)$ cost to $AddEdge$ and $O(InverseAckermann(|Nodes|))$ cost to $IsConnected$ and $ConnectedNodes$ (which might as well be $O(1)$).

If I also need to be able to remove edges arbitrarily, what is the best data structure to handle this situation? Is one known? To summarize, it should support the following operations efficiently:

- $IsConnected(N_1, N_2)$ - returns $T$ if there is a path between $N_1$ and $N_2$, otherwise $F$.
- $ConnectedNodes(N)$ - returns the set of nodes which are reachable from $N$.
- $AddEdge(N_1, N_2)$ - adds an edge between two nodes. Note that $N_1$, $N_2$ or both might not have existed before.
- $RemoveEdge(N_1, N_2)$ - removes an existing edge between two nodes.

(I am interested in this from the perspective of game development - this problem seems to occur in quite a few situations. Maybe the player can build power lines and we need to know whether a generator is connected to a building. Maybe the player can lock and unlock doors, and we need to know whether an enemy can reach the player. But it's a very general problem, so I've phrased it as such)
What is the most efficient algorithm and data structure for maintaining connected component information on a dynamic graph?
algorithms;graph theory;graphs;data structures
This problem is known as dynamic connectivity and it is an active area of research in the theoretical computer science community. Some important problems are still open here.

To get the terminology clear, you ask for fully-dynamic connectivity since you want to add and delete edges. There is a result of Holm, de Lichtenberg and Thorup (J.ACM 2001) that achieves $O(\log^2 n)$ update time and $O(\log n / \log\log n)$ query time. From my understanding it seems to be implementable. Simply speaking, the data structure maintains a hierarchy of spanning trees - and dynamic connectivity in trees is easier to cover. I can recommend Erik D. Demaine's notes for a good explanation; see here for a video. Erik's notes also contain pointers to other relevant results. As a note: all these results are theoretical results.

These data structures might not provide ConnectedNodes queries per se, but it's easy to achieve this. Just maintain the graph as an additional data structure (say as a doubly connected edge list) and then do a depth-first search to get the nodes that can be reached from a certain node.
_unix.379865
I downloaded the sagemath package for ubuntu 16.04 to my kubuntu-16.04 laptop and extracted the SageMath folder. When I tried to run the command ./sage it showed the following errors:==============================================================ops, Sage crashed. We do our best to make it stable, but...A crash report was automatically generated with the following information: - A verbatim copy of the crash traceback. - A copy of your input history during this session. - Data on your current Sage configuration.It was left in the file named: '/home/santosh/.sage/ipython-5.0.0/Sage_crash_report.txt'If you can email this file to the developers, the information in it will helpthem in understanding and correcting the problem.You can mail it to: sage-support at [email protected] the subject 'Sage Crash Report'.If you want to do it now, the following command will work (under Unix):mail -s 'Sage Crash Report' [email protected] < /home/santosh/.sage/ipython-5.0.0/Sage_crash_report.txtTo ensure accurate tracking of this issue, please file a report about it at:http://trac.sagemath.orgHit <Enter> to quit (your terminal may close):==========================================================What could be the problem?
Sage Installation problem (kubuntu)
linux;kubuntu;sage math
null
_codereview.62673
This is the code for my first CS1 project, I hope to be posting my code here to track my progress of learning the language of C++. I shouldn't be too shabby, since I know C considerably well and have reviewed C++ code here before.

Write a program that allows the user to enter a time in seconds and then outputs how far an object would drop if it is in freefall for that length of time. Assume that the object starts at rest, there is no friction or resistance from air, and there is a constant acceleration of 32 feet per second due to gravity. Use the equation:

$$\text{distance} = \dfrac{\text{acceleration} \cdot \text{time}^2}{2}$$

rock.cpp:

/**
 * @file rock.cpp
 * @brief Determines the height that a rock falls (in feet) after a given time
 * @author syb0rg
 * @date 9/8/14
 */

#include <iostream>
#include <cmath>

int main(void)
{
    // initialize variables on declaration
    double time = 0;
    double acceleration = 32.174;

    std::cout << "Enter the time in seconds that a rock will fall: ";
    std::cin >> time;

    double feetFallen = (acceleration * pow(time, 2)) / 2;
    std::cout << "The rock fell " << feetFallen << " feet." << std::endl;
}
Calculating freefalling object's height
c++;beginner
Your code is both short and good, so there's not much to say.

Header organization

It's a good idea to keep your headers organized. Keeping them alphabetically ordered allows you to spot accidental duplicates.

main() signature

You don't need to put void in the parentheses. It's very much a C-ism, and is not necessary in C++. Empty parentheses also means the function takes no arguments.

const variables and constants

acceleration is a constant, and should be marked constexpr to indicate it is a compile-time constant. Please note that constexpr is only available in C++11 or later. If you have to use an earlier version, you can replace it with static const for largely the same effect. feetFallen should be marked const, since you only read from it after it is initialized.

Marking variables const and constants constexpr limits the number of moving parts in your application, making it easier to reason about and keep correct.

Unnecessary comment

Your comment about initialization is not necessary. The fact that you initialize variables the same time they are declared is obvious from reading the code.

Invalid user input

time will contain the value 0 if your user inputs non-numeric input. If you want to constrain the user to numeric-only input, you can loop until the user enters only a valid number. Inspired by this StackOverflow answer, your solution could look like this:

while (!(std::cin >> time))
{
    std::cin.clear();
    std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
    std::cout << "Invalid input. Please enter a number: ";
}
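Pulling the suggestions above together, a revised rock.cpp might look roughly like this. It is a sketch rather than a definitive rewrite, and it assumes a C++11 compiler for constexpr:

#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    // Compile-time constant: gravitational acceleration in ft/s^2.
    constexpr double acceleration = 32.174;

    double time = 0;
    std::cout << "Enter the time in seconds that a rock will fall: ";

    // Re-prompt until the user enters something numeric.
    while (!(std::cin >> time))
    {
        std::cin.clear();
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
        std::cout << "Invalid input. Please enter a number: ";
    }

    // distance = acceleration * time^2 / 2
    const double feetFallen = (acceleration * std::pow(time, 2)) / 2;
    std::cout << "The rock fell " << feetFallen << " feet.\n";
}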
_unix.40063
I want to find file types that are executable from the kernel's point of view. As far as I know all the executable files on Linux are ELF files. Thus I tried the following:

find * | file | grep ELF

However that doesn't work; does anybody have other ideas?
How to find executable filetypes?
bash;files;find;executable;elf
Later edit: only this one does what jan needs (thank you huygens):

find . -exec file {} \; | grep -i elf
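If your find is GNU find (as on most Linux systems - an assumption on my part), you can also pre-filter to regular files that have an execute bit set, which avoids running file on every single entry:

find . -type f -executable -exec file {} + | grep -i elf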
_unix.184770
My shell script:

#!/bin/bash
echo $1;
startd=$(date -d $1 +%Y%m%d);
echo $startd;

My command:

sudo ./test.sh 20151010

The output:

20151010
20150213

It printed today's date instead of printing the input date. Any idea?
Convert input string to date in shell script
bash;shell script;osx;macintosh;parameter
The OS X version of date uses the -f option to parse a formatted date/time:

date -j -f '%Y%m%d' $1 +'%Y%m%d'

The -j option causes it to just print the time, not try to set the system clock. So the full script would be:

#!/bin/bash
echo $1;
startd=$(date -j -f '%Y%m%d' $1 +'%Y%m%d');
echo $startd;

Here's a transcript:

$ ./testdate.sh 20151010
20151010
20151010
_webmaster.69439
I'm using the trackEvent function of Google Analytics to track how many people are clicking on my external links / banners.Recently I have noticed some events that show up as such in the Real time Events feature of Google Analytics but they give no detail of which event they are. Normally it states the name of each event triggered, eg. I call my Banner events banners and when someone clicks on those, it says banners on the Real Time Event window.Now interestingly enough, those unidentified events I see do not score as banner clicks or any other type of events (neither in real time nor in any other analytics report) I have and I cannot find out what is being triggered by the user. Normally I wouldn't care but if there is the possibility of some malformed code that prevents my banner ads from scored as valid clicks / conversions, I should find out and fix it.Otherwise, where do those events come from? Any idea that would help me to find out?(I already looked into the source code of the most visited pages to see if the trackevent code was put in correctly, and everything seems fine, I also clicked on almost every banner link myself to see if they work or not, just to leave out that option, so it must be something else).Anyone experience this kind of thing before?
Unidentified events in google analytics
google analytics;click tracking;event tracking
null
_unix.61820
I have an hourly hour-long crontab job running with some mtr (traceroute) output every 10 minutes (that is going to go for over an hour prior to it being emailed back to me), and I want to see the current progress thus far.

On Linux, I have used lsof -n | fgrep cron (lsof is similar to BSD's fstat), and it seems like I might have found the file, but it is annotated as having been deleted (a standard practice for temporary files is to be deleted right after opening):

COMMAND  PID    USER  FD  TYPE  DEVICE  SIZE/OFF  NODE   NAME
...
cron     21742  root  5u  REG   202,0   7255      66310  /tmp/tmpfSuELzy (deleted)

And it cannot be accessed by its prior name anymore:

# stat /tmp/tmpfSuELzy
stat: cannot stat `/tmp/tmpfSuELzy': No such file or directory

How do I access such a deleted file that is still open?
How can I access a deleted open file on Linux (output of a running crontab task)?
linux;data recovery;open files
The file can be accessed through the /proc filesystem: you already know the PID and the FD from the lsof output.

cat /proc/21742/fd/5
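If you also want to keep a copy of the output gathered so far rather than just view it, copying from the same /proc path works too (the destination filename here is just an example):

cp /proc/21742/fd/5 /tmp/mtr-output-so-far.txt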
_codereview.46302
I have an XML document that contains ads and includes information about the department name, image name, link to URL, and alt text. I have a PHP function that accepts two arguments, department name and the number of ads to display. It reads through the entire XML document, and stores the information in arrays if the department name matches what is passed to the argument or matches all. What I have works, but it seems rather long and cumbersome. Is there a way to shorten it and improve performance?PHPfunction rightAds($deptName, $displayNumber) { //load xml /* --- Original loading, left in for completeness $completeurl = http://example.com/example.xml; $xml = simplexml_load_file($completeurl); --- */ //Loading without an HTTP call, as per Corbin's point $xml = simplexml_load_file(dirname(__FILE__)./smallAdsByDepartment.xml); $smallAds = $xml->smallAds; //number of ads that can be used for department $numberForDepartment = 0; //loop through xml document, save the ads that are all or the department name foreach($smallAds as $ad) { //get department name $departmentName = $ad->departmentName; //if it is a valid department, add info to corresponding arrays if((strpos($departmentName, $deptName) !== false) || (strpos($departmentName, all) !== false)) { //PT exception. If department name includes noPT, and provided name is pt, don't include the image if(($deptName == pt && strpos($departmentName, noPT) !== false) || (strpos(dirname($_SERVER['PHP_SELF']), pt) !== false && basename($_SERVER['REQUEST_URI']) == admissions.php && strpos($departmentName, noPT) !== false )) { continue; } //CLS exception. if(($deptName == cls && strpos($departmentName, noCLS) !== false)) { continue; } $name[$numberForDepartment] = $ad->name; $alt[$numberForDepartment] = $ad->alt; $linkTo[$numberForDepartment] = $ad->linkTo; $numberForDepartment++; } } //if $displayNumber is 1, or an invalid negative number, display random ad. if($displayNumber <= 1) { $randKeys = mt_rand(0, $numberForDepartment); echo <div class='adRight'><a href='$linkTo[$randKeys]'><img src='../../includes/images/adImages/$name[$randKeys]' alt='$alt[$randKeys]'></a></div>; } else { //get a list of randomly selected keys from the array $randKeys = range(0, (count($name) - 1)); shuffle($randKeys); $numberOfAds = 0; //loop through and display image/link for each foreach($randKeys as $adNumber) { echo <div class='adRight'><a href=' . $linkTo[$adNumber] . ' tabindex='0'><img src='../../includes/images/adImages/ . $name[$adNumber] . ' alt=' . $alt[$adNumber] . '></a></div>; $numberOfAds++; if($numberOfAds == $displayNumber) { break; } } }}Example XML<smallAds> <departmentName>all</departmentName> <name>smallAd3.jpg</name> <alt>Faculty Research</alt> <linkTo>http://example.com/research/index.php</linkTo></smallAds><smallAds> <departmentName>all, noPT</departmentName> <name>smallAd6.jpg</name> <alt>Advising</alt> <linkTo>http://example.com/advising.php</linkTo></smallAds><smallAds> <departmentName>all, noCLS</departmentName> <name>checkYouTube.png</name> <alt>Check us out on YouTube</alt> <linkTo>http://www.youtube.com/user/example</linkTo></smallAds>
Displaying random ads from an XML document
php;performance;random;xml
If you can use any format, use serialized PHP, it's by far the fastest.<?phpclass Ad { public $alt; public $departmentName; public $linkTo; public $name;}function formatAd(Ad $ad) { return <div class='adRight'><a href='{$ad->linkTo}'><img src='../../includes/images/adImages/{$ad->name}' alt='{$ad->alt}'></a></div>;}function rightAds($departmentName, $displayNumber) { static $ads = null; if (!$ads) { $ads = unserialize(file_get_contents(__DIR__ . /smallAdsByDepartment.ser)); } $pt = $departmentName === pt; /* @var $ad Ad */ foreach ($ads as $delta => $ad) { if (strpos($ad->departmentName, $departmentName) !== false || strpos($departmentName, all) !== false) { $noPT = strpos($ad->departmentName, noPT) !== false; if (($pt && $noPT) || (strpos(__DIR__, pt) !== false && basename($_SERVER[REQUEST_URI]) === admissions.php && $noPT) || ($departmentName === cls && strpos($ad->departmentName, noCLS) !== false)) { unset($ads[$delta]); } } } if ($displayNumber <= 1) { echo formatAd($ads[mt_rand(0, count($ads) - 1)]); } else { $keys = range(0, count($ads) - 1); shuffle($keys); $c = count($keys); for ($i = 0; $i < $c; ++$i) { echo formatAd($ads[$keys[$i]]); if ($i === $displayNumber) { break; } } }}
_codereview.132726
I have an authetication api for an intranet site but I'm a little worried that my design of the authentication is bad and unsafe.Below is the basic part of the authetication process and I hope I can get some overall feedback on my flaws.if($_POST['authType'] == 'login'){ login($_POST['authUser'],$_POST['authPsw']);}if($_POST['authType'] == 'logout'){ logout($_POST['hash']);}if($_POST['authType'] == 'cookie'){ login('','',$_POST['hash']);}function login($user,$psw, $hash){ //Cookie hash login if($hash){ //Check if hash and IP resides in authed db table if(dbHash('CHECK', $hash)){ //IF OK set session variables with OK and DB data, return $cookie //Expire hashes older than 30 days and return Session expired $cookie['login'] = 1; $cookie['hash'] = $hash; dbCart('LOAD'); } else{ //IF NOT $cookie['login'] = 0; $cookie['error'] = 'Session expired'; } header('Content-type: application/json'); echo json_encode($cookie, JSON_PRETTY_PRINT); } //User/password login else{ $ldapObj = authUser($user,$psw,domain); if($ldapObj['error']){ header('Content-type: application/json'); echo json_encode($ldapObj['error'], JSON_PRETTY_PRINT); exit(); } if($ldapObj['user']['samaccountname'] and !$ldapObj['error']){ $_SESSION['auth']['login'] = 1; $_SESSION['auth']['hash'] = bin2hex(openssl_random_pseudo_bytes(16)); $_SESSION['auth']['ip'] = $_SERVER['REMOTE_ADDR']; $_SESSION['auth']['user'] = $ldapObj['user']; $cookie['hash'] = $_SESSION['auth']['hash']; $cookie['user'] = $_SESSION['auth']['user']; //Save session in DB with hash, IP and session_id. dbHash('ADD'); dbCart('LOAD'); header('Content-type: application/json'); echo json_encode($cookie, JSON_PRETTY_PRINT); } else{ unset($_SESSION['auth']); } }}function logout($hash){ unset($_SESSION['auth']); unset($_SESSION['cart']); dbHash('DELETE', $hash);}Below is the dbHash function. All of the backend authentication should now be posted except my common functions such as database connection etc and the ldap library.include common.php;include ldap.php;session_start();/****************************************************************************Authentication is divided in thress parts, 1=Login, 2=Logout and 3=DB managementLogin can be done by HASH or user/password. For user it connects to LDAP annd forHASH the local DB.HASH login requires HASH code and same IP as when created. If not then its deleted.SESSION variables decide on logged in or not in site.DB management is set to 3 subfunctions, 1= ADD new hash, 2= CHECK if has is valid3= DELETE hash3= Checks for expired hashes and deletes them every time its accessed.User LDAP information is stored in json format in DB in case site need the information.*****************************************************************************/function dbHash($task, $input){ //Add hash to DB //Fix data checks if($task == 'ADD'){ dbHash('DELETE'); $con = local_pdo(); $stmt = $con->prepare(INSERT INTO `auth`(`hash`, `ip`, `create`, `expire`, `data`, `session_id`) VALUES (?,?,?,?,?,?)); $stmt->bind_param(ssssss, $hash,$ip,$create,$expire,$data,$sid); $date = new DateTime(); $date->modify('+30 days'); $hash = $_SESSION['auth']['hash']; $ip = $_SERVER['REMOTE_ADDR']; $expire = $date->format('Y-m-d H:i:s'); $data = json_encode($_SESSION['auth']['user']); $sid = session_id(); $stmt->execute(); $stmt->close(); } //Delete hash in DB if($task == 'DELETE'){ $date = new DateTime(); $date->modify('-30 days'); $con = local_pdo(); $stmt = $con->prepare(delete from auth where (hash = ? 
or expire < ?)); $stmt->bind_param(ss, $hash, $expire); $hash = $input; $expire = $date->format('Y-m-d H:i:s'); $stmt->execute(); $stmt->close(); } //Check hash in DB if($task == 'CHECK' && $input){ $date = new DateTime(); $date->modify('-30 days'); $con = local_pdo(); $stmt = $con->prepare(select hash,ip,data from auth where hash = ? and ip = ? and expire > ?); $stmt->bind_param(sss,$value,$ip,$expire); $value = $input; $ip = $_SERVER['REMOTE_ADDR']; $expire = $date->format('Y-m-d H:i:s'); $stmt->execute(); $stmt->bind_result($hash, $ip, $data); $stmt->fetch(); if($hash){ $_SESSION['auth']['login'] = 1; $_SESSION['auth']['hash'] = $hash; $_SESSION['auth']['ip'] = $_SERVER['REMOTE_ADDR']; $_SESSION['auth']['user'] = json_decode($data,true); $output = true; } else{ $output = false; dbHash('DELETE', $input); } dbHash('DELETE'); $stmt->close(); return $output; }}There is no proper CSRF at the moment but site is only avalible on our internal network untill I cant trust the authentication and arrange an ssl certificate.It is one reason why I added the requirement to match both cookie hash and client ip.I'm probably in over my head with this but you got to try :)
PHP Login/cookie authentication
php;beginner;authentication
Security

You have an auth type that is called cookie, but which doesn't work on cookies, which is odd. Apart from the added complexity (you have to add it to every form yourself), this is also bad as you can't make POST data httpOnly. It also doesn't make any sense to me (it doesn't work as a remember me token or anything). It's also very unclear to me how you match a cookie to a specific user, as it's just random data.

If you deviate from the standard approach of sessions and remember me cookies, you should really properly document why and how exactly your mechanism works.

"There is no proper CSRF at the moment but site is only available on our internal network until I cant trust the authentication and arrange an ssl certificate. It is one reason why I added the requirement to match both cookie hash and client ip."

As long as you have access to it, it doesn't matter that it is internal, it can still be exploited. The IP binding also does not prevent CSRF exploitation (as the request is sent by you, not by the attacker). But seeing that the cookie hash isn't actually a cookie (or a hash), it seems to serve as a CSRF token (if it needs to be sent for each request, which I'm not quite sure it is).

Structure

I like that you have designated functions for login and logout, it makes the code nicer to read. But I wouldn't stop there. At the very least I would split login into loginWithCookie and loginWithCredentials. You might also want to think about using some generic function/class to send JSON, to avoid the duplication of header / json_encode / die.

dbHash

Your dbHash method isn't very well named: the name doesn't really tell me what it is doing (also, it isn't a hash; token might be more fitting).

It is also not all too well designed: I have to pass a string like ADD to it to tell it what to do, which means I cannot use it without reading the documentation or looking at the implementation. Instead, it would be better to have some HashDAO class, which then has methods like add(), delete(), etc. It is also very confusing that I don't have to pass additional data when ADDing. dbHash('ADD') doesn't tell me anything. $hashDAO->add($hash, $ip, $session_id) would tell me exactly what is happening.

Misc

I already mentioned it above, but your naming really hurts readability (which makes it more difficult to look for security and other issues). The main problem is with hash (which isn't a hash, but a token) and cookie (which isn't a cookie, but also a token). Some other names are also not ideal, such as data (too generic, might be username) or output (better: isAuthenticated).

You always assign your variables after binding them. I think your code would be more readable the other way around.

Your auth table contains values which you do not seem to use, such as sid, which adds unnecessary complexity.
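To illustrate the DAO suggestion from the dbHash section, here is a rough sketch of what such a class might look like. The class and method names are my own, and the SQL assumes the auth table from the question (minus the columns you don't use):

<?php

class AuthTokenDAO
{
    private $con;

    public function __construct(mysqli $con)
    {
        $this->con = $con;
    }

    // Store a new token for a user session.
    public function add($token, $ip, $userData, $sessionId, $expire)
    {
        $stmt = $this->con->prepare(
            "INSERT INTO `auth` (`hash`, `ip`, `expire`, `data`, `session_id`) VALUES (?, ?, ?, ?, ?)"
        );
        $stmt->bind_param("sssss", $token, $ip, $expire, $userData, $sessionId);
        $stmt->execute();
        $stmt->close();
    }

    // Remove a single token.
    public function delete($token)
    {
        $stmt = $this->con->prepare("DELETE FROM `auth` WHERE `hash` = ?");
        $stmt->bind_param("s", $token);
        $stmt->execute();
        $stmt->close();
    }

    // find(), deleteExpired(), ... would follow the same pattern.
}

A call site then reads as $tokens->add($hash, $ip, $data, session_id(), $expire);, which documents itself far better than dbHash('ADD').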
_cs.44926
I was wondering what the specific conditions are that make the A* algorithm optimal, in terms of node expansion, over the other unidirectional search algorithms, when the same heuristic information is given to all the algorithms.
What are the conditions that make the A* algorithm optimal over the other unidirectional search algorithms
algorithms;algorithm analysis;search algorithms;shortest path
null
_bioinformatics.673
I have a list of transcription factors and I am interested in finding out which genes might be transcribed as a result of the formation of transcription factor complexes with each transcription factor.

Any ideas on good databases? I've seen that ENCODE hosts data from ChIP-seq experiments, but I'm not sure if I can find conclusions regarding interactions from this site. I'm looking for a database with known or putative transcription factor/gene interactions.

Thanks in advance.
Given a transcription factor, what genes does it regulate?
chip seq;encode;multi omics
null
_softwareengineering.70574
I've been building a web-app with a fairly complex GUI - many small elements alongside each other and within other elements that need various behaviours (dragging, clicking, but context-sensitive). My code tends to work like this: controls start out as html tags

<li>

then I add classes to the tag

<li class = 'static passive included'>

some classes might be for css styling to affect appearance

.static {background: blue; float: left;}

while others are acted upon by jQuery to give behaviour for the associated DOM events

$("#container").delegate(".passive", function {onclick : //do-stuff})

and others might be picked up by normal javascript when scanning the document, eg. collect the text values of all nodes with the 'included' class. So certain DOM elements are GUI objects which implement classes defined through a combination of css and jQuery.

My question is, is this a good way to work? And also are these really 'decorators' - they certainly 'add additional responsibilities to an object dynamically' but I'm not sure about that term. Finally, do you use this pattern in your own sites?
Using CSS classes as decorators - a good pattern?
javascript;design patterns;web applications;jquery;css
null
_cs.22769
If $P = NP$ would this imply that polynomial time reduction from an $NP$- to a $P$-problem would be possible? And if $P\neq NP$ does it imply that a polynomial time reduction from an $NP$- to a $P$-problem would be impossible?
P, NP and polynomial time reduction?
complexity theory;np complete;reductions
Since $P \subseteq NP$, every problem in $P$ is also in $NP$. So in both cases there is an $NP$-problem that can be reduced to a $P$-problem. Simply choose a problem in $P$ as the $NP$-problem and the same problem as the $P$-problem.If you replace $NP$-problem by $NP$-complete problem, both of your statements hold.
_codereview.32076
This is my very first Bash script that does something real. I'm here to learn, so please tell me if and how this can be done better.

#!/bin/sh
# Daniele Brugnara
# October - 2013

if [ -k $1 ]; then
    echo Error! You must pass a repository name!
    exit 1
fi

SVN_PATH=/var/lib/svn
currPath=$(pwd)

cd $SVN_PATH
svnadmin create $1
chown -R www-data:www-data $1
cd $currPath

echo Done! Remember to set valid user permission in authz file.
New svn repository script
bash
If the first argument to your script contains a space, then you get an error message:

$ ./cr32076.sh "my repository"
./cr32076.sh: line 6: [: my: binary operator expected

To avoid this, put quotes around the variable $1, like this:

if [ -k "$1" ]; then

The man page for [ says:

-k file       True if file exists and its sticky bit is set.

Is this really what you mean? It seems more likely that you want the -z option:

-z string     True if the length of string is zero.

You remember the current directory in a variable:

currPath=$(pwd)

and then later change to it:

cd $currPath

but this will go wrong if the current directory has a space in its name:

$ pwd
/Users/gdr/some directory
$ currPath=$(pwd)
$ cd $currPath
bash: cd: /Users/gdr/some: No such file or directory

To avoid this, put quotes around the variable, like this:

cd "$currPath"

If you want to change directory temporarily and change back again, then instead of writing

currPath=$(pwd)    # remember old directory
cd $SVN_PATH       # change to new directory
# do some stuff
cd $currPath       # go back to old directory

you can use a subshell:

(
    cd "$SVN_PATH"  # change to new directory
    # do some stuff
)                   # old directory is automatically restored when subshell exits

But in this script, there's no need to change directory back again, because after you go back all you do is echo Done! ... and it doesn't matter what directory you do that in.

In fact, there's no need to change directory at all, because instead of

SVN_PATH=/var/lib/svn
currPath=$(pwd)
cd $SVN_PATH
svnadmin create $1
chown -R www-data:www-data $1
cd $currPath

you could write

SVN_PATH="/var/lib/svn/$1"
svnadmin create "$SVN_PATH"
chown -R www-data:www-data "$SVN_PATH"
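Pulling all of those suggestions together in one place - and assuming the empty-argument check (-z) is what you actually want rather than the sticky-bit test - the whole script might end up looking something like this:

#!/bin/sh
# Create a new svn repository under /var/lib/svn

if [ -z "$1" ]; then
    echo "Error! You must pass a repository name!"
    exit 1
fi

SVN_PATH="/var/lib/svn/$1"

svnadmin create "$SVN_PATH"
chown -R www-data:www-data "$SVN_PATH"

echo "Done! Remember to set valid user permission in authz file."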
_unix.374083
I am using Ubuntu 16.04. The problem is I am able to SSH other machines from my system but I am unable to SSH my system from another machine. However, I am able to ping my system from other machines.For example, my system's IP address is 192.168.103.32 and another machine which runs CentOS has IP address 192.168.170.52. So I am able to SSH on 192.168.170.52 from 192.168.103.32 but vice-versa fails. Also, I am able to ping 192.168.103.32 from 192.168.170.52.Output of ssh from 192.168.170.52:ssh -v [email protected]_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013debug1: Reading configuration data /etc/ssh/ssh_configdebug1: Applying options for *debug1: Connecting to 192.168.103.32 [192.168.103.32] port 22.Output for:sudo service ssh statusis:ssh.service - OpenBSD Secure Shell server Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: ena Active: active (running) since Thu 2017-06-29 11:39:00 IST; 39min ago Process: 3064 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCE Main PID: 1142 (sshd) CGroup: /system.slice/ssh.service 1142 /usr/sbin/sshd -DJun 29 11:39:37 bhavya systemd[1]: Reloading OpenBSD Secure Shell server.Jun 29 11:39:37 bhavya sshd[1142]: Received SIGHUP; restarting.Jun 29 11:39:37 bhavya systemd[1]: Reloaded OpenBSD Secure Shell server.Jun 29 11:39:37 bhavya sshd[1142]: Server listening on 0.0.0.0 port 22.Jun 29 11:39:37 bhavya sshd[1142]: Server listening on :: port 22.Jun 29 11:39:37 bhavya systemd[1]: Reloading OpenBSD Secure Shell server.Jun 29 11:39:37 bhavya sshd[1142]: Received SIGHUP; restarting.Jun 29 11:39:37 bhavya sshd[1142]: Server listening on 0.0.0.0 port 22.Jun 29 11:39:37 bhavya sshd[1142]: Server listening on :: port 22.Jun 29 11:39:37 bhavya systemd[1]: Reloaded OpenBSD Secure Shell server.
Unable to SSH but able to ping it
ubuntu;centos;ssh;ping
null
_webapps.65427
If you have no way of being able to connect to Facebook but there is someone you need to contact, is there any other way of finding out what their e-mail address is?
How to find out what somebody's email address is without being connected to Facebook
facebook;email
null
_webapps.40307
Suppose I wanted to find out about the Pythagorean Theorem by searching via WolframAlpha. This is trivial -- we can search directly for Pythagorean Theorem and get a wealth of information.But suppose instead that I wanted to find that same information without knowledge of the theorem's name. Suppose I only knew that a^2 + b^2 = c^2 for a right triangle with legs a and b and hypotenuse c. Searching purely by that formula doesn't yield the desired result -- and indeed it shouldn't, as we left out some important details!I am curious if other cases exist where you can do this on WolframAlpha. Are there any such queries where we can input colloquial formulas, and return an association with some related theorem?
WolframAlpha - Is it possible to associate equations with theorems?
search;wolfram alpha
null
_computergraphics.245
I've played with real time raytracing (and raymarching etc) quite a bit, but haven't spent that much time on non real time raytracing - for higher quality images or for pre-rendering videos and the like.I know one common technique to improve image quality in the non real time case is to just cast A LOT more rays per pixel and average the results.Are there any other techniques which stand out as good ways to improve image quality in the non real time case, over what you would normally do in the real time case?
Non Real Time Raytracing
raytracing;photo realistic
Path tracing is the standard technique in non-realtime photorealistic rendering, and you should look specifically into bidirectional path tracing to get effects like caustics, which you can't really get with basic path tracing. Bidirectional path tracing also converges faster to the ground truth as shown in the below image:Also Metropolis light transport (MLT) is a more advanced path tracing technique that converges even faster to the ground truth by mutating existing good paths:You can also use importance sampling for faster convergence by focusing more rays to the directions that matter more. I.e. by focusing rays based on BRDF (more towards the BRDF spike using probability density function) or to the light source, or get best of the two worlds and using multiple importance sampling. This is all about reducing noise in unbiased manner. There are also denoising techniques to further reduce noise in the rendered images.I think it's best to first implement basic brute force Monte Carlo path tracer to serve as unbiased reference before looking into the more advanced techniques. It's quite easy to do mistakes and introduce biasing that goes unnoticed, so having simple implementation is good to have around for reference.You can also get some really nice results by applying path tracing to participating media but that gets slow really fast :D
_softwareengineering.48481
I guess no one would argue for decomposing use cases, that is just wrong. However, sometimes it is necessary to specify use cases which are on a lower, more technical level, like for example authentication and authorization, which give the actor value, but are further from his business needs. Cockburn argues for levels when needed and explains how to move use cases from/to different levels and how to determine the right level. On the other hand, e.g. Bittner argues against use case levels, although he uses subflows, requires the vision documents which contain information about the purpose of the system in the business, much like use cases on the higher level, and at the end of his book mentions that at least two levels are needed most of the time.

My question is: do you find use case levels necessary, helpful or unwanted? What are the reasons? Am I missing some important arguments?
Do we need use case levels or not?
requirements
null
_cstheory.19591
Can anyone come up with a nice way of computing a solution to the linear diophantine equation $ax + by = c$, where $a,b,c \in \mathbb{Z}$ and $\gcd(a,b) \mid c$, such that all the calculations are carried out without any of the intermediate results exceeding $\max\{|a|, |b|, |c|\}$ in absolute value?

I.e. if $a,b,c$ are 64-bit integers, all the calculations should be done using 64-bit integers only.

The standard method is of course to use the Euclidean algorithm to first find a solution to $ax + by = \gcd(a,b)$ and then multiply $x$ and $y$ by $\frac{c}{\gcd(a,b)}$. The Euclidean algorithm is fine, but this last multiplication might go out of range.
Avoiding overflow in finding a solution to$ax + by = c$
ds.algorithms
Here is a modified version of the Euclidean algorithm that is stable.When considering the equation$$ax + by = c \quad (1)$$we may w.l.o.g. assume that $a, b, c \ge 0$, $b \ge 1$ and $b \ge a$. From now on we will only consider such equations. Moreover, we will set $d = \gcd(a,b)$.We call a solution $(x,y)$ good if $0 \le x < \frac{b}{d}$. Such a solution always exists, since all solutions are parametrized by $(x + t \cdot \frac{b}{d}, y - t \cdot \frac{a}{d})$.If $(x,y)$ is a good solution, then$$\frac{c}{b} - \frac{a}{d} < y \le \frac{c}{b} \quad (2)$$This follows simply from the bounds on $x$ and the fact that $y = \frac{c}{b} - \frac{ax}{b}$. Notice that in particular $|y| \le \max(a,c)$.We will now solve the problem by induction. Assume first that $a=0$. Then $x=0$, $y = \frac{c}{b}$ is a good solution. Let us then suppose by induction that we can obtain good solutions for all coefficients smaller than $a,b,c$. We let $(x',y')$ be a good solution to the equation$$(b - \left\lfloor \frac{b}{a} \right\rfloor a) x' + a y' = c \quad (3)$$Then $(\tilde{x}, \tilde{y}) = (y' - \left\lfloor \frac{b}{a} \right\rfloor x', x')$ is a solution to (1). First of all we can calculate $\left\lfloor \frac{b}{a} \right\rfloor x'$ without overflow since $0 \le x' < \frac{a}{d}$. We note that (2) applied to the equation (3) gives us$$\frac{c}{a} - \frac{b - \left\lfloor \frac{b}{a}\right\rfloor a}{d} < y' \le \frac{c}{a}.$$Hence it follows that$$\frac{c}{a} - \frac{b}{d} < y' - \left\lfloor \frac{b}{a} \right\rfloor x' \le \frac{c}{a} \quad (4)$$Thus $|\tilde{x}| \le \max(b,c)$ and $\tilde{x}$ can be computed without overflow.Finally it remains to normalize $(\tilde{x}, \tilde{y})$ to a good solution. This involves computing $x = \tilde{x} + t \cdot \frac{b}{d}$ and $y = \tilde{y} - t \cdot \frac{a}{d}$ where $t = \left\lceil \frac{-\tilde{x}}{b/d} \right\rceil$. Assume first that $\tilde{x} < 0$. From (4) we see that then $t=1$ and everything can be done without overflow. Assume then that $\tilde{x}\ge 0$. Then $|t| \le \frac{\tilde{x}}{b/d}$. Because $b \ge a$, both products $t \cdot \frac{b}{d}$ and $t \cdot \frac{a}{d}$ can be calculated without overflow. We are done.
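To see the recursion in action on a small instance (my own worked example, not part of the argument above): take $a = 6$, $b = 10$, $c = 8$, so $d = 2$ and $\max\{|a|,|b|,|c|\} = 10$. The recursive calls reduce $(6,10,8) \to (4,6,8) \to (2,4,8) \to (0,2,8)$, and the base case gives $(x,y) = (0,4)$. Transforming back and normalising at each level gives the good solutions $(0,2)$ for $2x' + 4y' = 8$, then $(2,0)$ for $4x' + 6y' = 8$, and finally $(3,-1)$ for $6x + 10y = 8$. Indeed $6 \cdot 3 + 10 \cdot (-1) = 8$, $0 \le 3 < \frac{b}{d} = 5$, and no intermediate value ever exceeds $10$ in absolute value.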
_unix.289272
I just acquired a macbook air. I dual-booted mac os with Ubuntu. It is my first time multiple-booting on a mac. I triple-booted with another Ubuntu. Ater removing the third distro, I experience some issues with grub.First of all, I made my partitions as follow:$ lsblksda sda1 200M /boot/efi sda2 47.3GB # Mac os sda3 620MB # Mac rescue sda4 2GB # Swap partition sda5 50GB # Ubuntu 1 sda6 50GB # Ubuntu 2 sda10 100GB # ext4 file systemI did an install of Ubuntu on sda5. Once finished, it directly boots with Ubuntu which is great. I later installed another ubuntu on sda6 as I would like to test using other ditros and I want to check if I could do that easily. Note that the Ubuntu version is the exact same I installed on another partition. Once the second Ubuntu installed, I reboot and I have the grub screen asking me to chose between the two Ubuntus. Neat. Then, having no use for the second ubuntu, I simply erased sda6 using gdisk:$ gdisk /dev/sda> d # delete partition> 6 # delete partition 6> w # write changes> Y # confirmation of writing changes.> q # quit gdisk$ lsblksda sda1 200M /boot/efi sda2 47.3GB # Mac os sda3 620MB # Mac rescue sda4 2GB # Swap partition sda5 50GB # Ubuntu 1 sda10 100GB # ext4 file systemNow on reboot, I get the grub command line on black screen. I have to specify the disk where my ubuntu is located. I followed some indications in this thread to boot on Ubuntu: https://askubuntu.com/questions/159846/tried-to-boot-ubuntu-but-the-grub-rescue-shows-up-instead and tried doing the following:grub> ls (hd2,gpt5) # That's my Ubuntu partitiongrub> root=(hd2,gpt5)grub> configfile /boot/grub/grub.cfgNow I succesfully booted in Ubuntu. After logging in, I followed the instrutions on updating grub:$ sudo update-grubAlas when rebooting, the grub screen pops up again. So updating grub did not do the trick. I also tried to do with grub 2 in case:$ sudo update-grub2It does not change anything.I also tried to reinstall the second version of Ubuntu on sda6. Same scheme, when I reboot, I am asked to chose between the two Ubuntu versions in the grub menu, so it kind of solves the issue. But as soon as I remove the second Ubuntu partition, the first one won't boot directly unless I specify it in the grub command line screen.I don't know if this might be useful, but here is my fstab:$ cat /etc/fstab# /etc/fstab: static file system information.## Use 'blkid' to print the universally unique identifier for a# device; this may be used with UUID= as a more robust way to name devices# that works even if disks are added and removed. See fstab(5).## <file system> <mount point> <type> <options> <dump> <pass># / was on /dev/sda2 during installationUUID=85ab4560-729a-4f7d-91d9-69af89ea1219 / ext4 errors=remount-ro 0 1# /boot/efi was on /dev/sda1 during installationUUID=DAC6-DEC2 /boot/efi vfat defaults 0 1# swap was on /dev/sda4 during installationUUID=9c76739a-5996-43d8-a14e-fe690c06870f none swap sw 0 0What can I do to solve this issue? Is it a matter of EFI? Why removing the second Ubuntu partition makes the first one unrecognized to grub? Note that I would like to find a clean solution, so I would like to avoid reinstalling Ubuntu to solve it.
Grub menu on reboot after removing another distro partition
ubuntu;osx;dual boot;grub;uefi
I solved my issue by getting hints from this thread: https://superuser.com/questions/376470/how-to-reinstall-grub2-efi

I did not need to use a live installation, I just booted into my ubuntu session through the grub window. I then reinstalled grub:

$ apt-get install --reinstall grub-efi-amd64

This also ran update-grub automatically. It works and updated the grub.cfg file in /boot/efi/EFI/ubuntu/grub.cfg, updating the right partition to boot ubuntu from:

$ cat /boot/efi/EFI/ubuntu/grub.cfg
search.fs_uuid 17441147-6b9d-45fe-bccd-bed2451f43f8 root hd0,gpt5
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg

Previously, running

$ update-grub

would update neither the uuid nor the partition; the old partition was the latest ubuntu's, named hd0,gpt6. So it seems reinstalling grub was necessary.
_unix.293450
On my laptop, I have an onboard sound card, and also a connected bluetooth headset. I have configured the bluetooth device in /etc/asound.conf:

# cat /etc/asound.conf
pcm.bluetooth {
    type bluetooth
    device "12:34:56:78:9a:bc"
    profile "auto"
}
ctl.bluetooth {
    type bluetooth
}

Now I can play audio to my headset by specifying the new audio device, such as:

mplayer -ao alsa:device=bluetooth file.mp3

If I want to play to my default device, I simply omit the device:

mplayer file.mp3

However, I need to configure ALSA so that all sound is sent to both devices by default, without having to explicitly set this per application. I.e.:

mplayer file.mp3

should play both on the laptop soundcard, as well as in the bluetooth headset. How can I do that?
ALSA: send audio to two audio devices
audio;alsa;bluetooth
null
_unix.37493
Is Oracle Linux feasible for a desktop environment or is it strictly server oriented?
Is Oracle Linux feasible for a desktop?
distribution choice;distros
Oracle Linux is based upon RHEL (Red Hat Enterprise Linux). It can be used either as a server or as a desktop, as a compatible alternative to RHEL.As for a desktop, if you're looking for bleeding-edge packages (GNOME 3, recent versions of KDE, etc...), you will not find them in in Oracle Linux or any RHEL clone (CentOS, Scientific Linux etc...).
_unix.273337
On my Ubuntu machine, I run a Java application in the background. I use a bash script to run it, and right now it looks like:

nohup java -jar app.jar &
exit 0

The problem is that I want to be able to write an input string to my application, without bringing it to the foreground, from different terminals/sessions. Something like

echo mytext > /appdir/in

How should I change my script?
How to redirect STDIN of background process?
bash;io redirection;io;stdin;nohup
main.sh

#!/bin/bash
set -e

if [ ! -p in ]; then
    mkfifo in
fi

tail -f in | java -jar app.jar

Send a command to the application with the following syntax:

echo command > /home/user/in
_unix.53428
In emacs there are the functions forward-word and backward-word. Are there also functions which move the point to the next/last whitespace?
Emacs move to next whitespace
emacs
You can modify the syntactical properties of characters using the modify-syntax-entry function (C-h f modify-syntax-entry in emacs for more info).

For instance, if you are writing .tex documents, you might add the following to your .emacs:

(add-hook 'TeX-mode-hook
          '(lambda ()
             (modify-syntax-entry ?_ "w")
             (modify-syntax-entry ?- "w")))

This tells emacs to treat _ and - as word characters when you are in TeX mode, thus forward-word and backward-word will do what you want.
_webmaster.103126
I have a website which has events. Events have description, dates and locations. Single event may be held in multiple locations on different dates, but the description is still the same.How can I prevent duplicate content on such events or this case is OK for search engine optimization?
Duplicate content in event profile pages
google search;duplicate content
According to your current setup:

- You have a dedicated page for every event with the details of that event.
- Every event may have multiple locations / dates, and for those locations / dates you have event listing pages.
- The listing pages for locations / dates contain a short description of the original events. Other than that they don't provide any additional unique information.

For this sort of structure, abide by the following two rules for better Search Engine Optimization (SEO):

1. Make sure each dedicated event page has rel=canonical properly set to its own URL. You'll find details on canonical here.
2. Since your event listing pages for locations & dates do not contain any additional unique information, there is no reason to index them for search engines. You may use the noindex meta tag to prevent Search Engines from indexing them for search results. Search Engines will still find your events, but they'll simply show the dedicated event pages in the Search Result. You'll find more information on noindex here and here.

This way Search Engines will index all your unique event information without any duplicates.
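For reference, the two tags involved would look roughly like this in the <head> of the dedicated event page and of a listing page respectively (the URL is just a placeholder):

<!-- on the dedicated event page, pointing at itself -->
<link rel="canonical" href="https://example.com/events/my-event">

<!-- on a location/date listing page -->
<meta name="robots" content="noindex, follow">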
_codereview.78133
I'm writing my own HTTP client, kinda like cURL. (I already know I'm reinventing the wheel; this is more or less about getting an inside look at HTTP 1.x before 2 becomes a thing.)

So far pages download perfectly fine using the OS-provided socket libs (Linux/Mac).

So the problem/question is: how can I improve my client? One of the main problems with it is that it's slow. Like 1.00 - 1.30 seconds slow. I know this may not sound like much, but the 1 second is mainly because of the timeout of the recv. Or maybe I can't do much more except do what other HTTP clients do and cache data.

char buf[CHUNK_SIZE+1];
// now it is time to receive the page
memset(buf, 0, sizeof(buf));
std::string htmlcontent;

struct timeval tv;
tv.tv_sec = 1;
setsockopt(socket, SOL_SOCKET, SO_RCVTIMEO, (struct timeval *)&tv, sizeof(struct timeval));

while(true) {
    tmpres = recv(socket, buf, CHUNK_SIZE, 0);
    if(tmpres < 1) {
        break;
    }
    htmlcontent += buf;
    memset(buf, 0, tmpres);
}

As you can see, the recv timeout is 1 second. I'm hoping to manipulate this in some way so that the timeout is 500 milliseconds instead, or perhaps look at another method in general for dealing with chunked data from an HTTP server. And of course I googled some topics, but a lot of them had pretty basic examples, mostly what I have already written.
HTTP client similar to cURL
c++;socket;unix
null
_codereview.52481
I recently wrote a concurrent, mutex-less (but not lockfree) queue and wanted to know if it is actually correct, and if there are any particular improvements I could make:

template <typename T>
class concurrent_queue
{
protected:
    T *storage;
    std::size_t s;
    std::atomic<T*> consumer_head, producer_head;

    union alignas(16) dpointer
    {
        struct
        {
            T *ptr;
            std::size_t cnt;
        };
        __int128 val;
    };

    dpointer consumer_pending, producer_pending;

public:
    concurrent_queue(std::size_t s): storage(nullptr)
    {
        storage = static_cast<T*>(::operator new((s+1)*sizeof(T)));

        consumer_head = storage;
        __atomic_store_n(&(consumer_pending.val), (dpointer{storage, 0}).val, __ATOMIC_SEQ_CST);

        producer_head = storage;
        __atomic_store_n(&(producer_pending.val), (dpointer{storage, 0}).val, __ATOMIC_SEQ_CST);

        this->s = s + 1;
    }

    ~concurrent_queue()
    {
        while(consumer_head != producer_head)
        {
            ((T*)consumer_head)->~T();
            ++consumer_head;
            if(consumer_head == storage + s)
                consumer_head = storage;
        }
        ::operator delete(storage);
    }

    template <typename U>
    bool push(U&& e)
    {
        while(true)
        {
            dpointer a;
            a.val = __atomic_load_n(&(producer_pending.val), __ATOMIC_ACQUIRE);
            auto b = consumer_head.load(std::memory_order_relaxed);

            auto next = a.ptr + 1;
            if(next == storage + s) next = storage;
            if(next == b) return false;

            dpointer newval{next, a.cnt+1};
            if(!__atomic_compare_exchange_n(&(producer_pending.val), &(a.val), (newval.val), true, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
                continue;

            new (a.ptr) T(std::forward<U>(e));

            while(!producer_head.compare_exchange_weak(a.ptr, next, std::memory_order_release, std::memory_order_relaxed));
            return true;
        }
    }

    template <typename U>
    bool pop(U& result)
    {
        while(true)
        {
            dpointer a;
            a.val = __atomic_load_n(&(consumer_pending.val), __ATOMIC_ACQUIRE);
            auto b = producer_head.load(std::memory_order_relaxed);

            auto next = a.ptr + 1;
            if(next == storage + s) next = storage;
            if(a.ptr == b) return false;

            dpointer newval{next, a.cnt+1};
            if(!__atomic_compare_exchange_n(&(consumer_pending.val), &(a.val), (newval.val), true, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
                continue;

            result = std::move(*(a.ptr));
            (a.ptr)->~T();

            while(!consumer_head.compare_exchange_weak(a.ptr, next, std::memory_order_release, std::memory_order_relaxed));
            return true;
        }
    }
};

It's platform specific to GCC on x86_64 and a CPU that supports double width CAS right now, but I assume it wouldn't be that hard to adjust it for other platforms.

I've stress tested it with multiple threads pushing and popping, with both POD types and with memory-managing types like std::vector, and haven't had any issues so far... before I put in the double width CAS I encountered the ABA problem and had segmentation faults, but with it it seems to be fine.

However, I'm new to all this multithreaded stuff, and wanted someone more experienced than me to tell me if this would work, say, on a system with a weak memory model.
Concurrent queue
c++;multithreading;queue;concurrency;atomic
Regarding the conditional returns in push() and pop(), it may be clearer to use a different type of infinite loop, such as for(;;). This will better focus attention on the loop contents.

You could also use curly braces for the single-line statements. Some of these lines are quite long, so the executed code is at the very end. This should also help distinguish them from the busy-waits.
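A minimal self-contained sketch of both suggestions (the helper names are hypothetical stand-ins so the fragment compiles; the real checks live inside pop()):

#include <atomic>

// Illustrative stand-ins for the real queue state checks.
static std::atomic<bool> queue_empty{false};
static bool is_empty()        { return queue_empty.load(); }
static bool try_claim_slot()  { return true; }

bool pop_sketch()
{
    for (;;)                   // reads as "loop until an explicit return"
    {
        if (is_empty())
        {
            return false;      // braces even for one-liners, as suggested
        }
        if (!try_claim_slot())
        {
            continue;          // the retry path stands out visually
        }
        return true;
    }
}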
_codereview.117105
I have a slider with 3 images and left/right arrows. If you click on them, the image changes. You also have 3 bullets below the images, and the image changes depending on which bullet you click.

I don't want to use button arrows; I want to use the a tag. Any ideas on making it better? You'll need to add 3 images (because the slider uses 3 images) plus another 2 images for the left/right arrows (which change the images); those arrows are what I don't want to use, as I said.

HTML:

<!DOCTYPE html>
<html>
<head>
    <title>Slider</title>
    <meta charset="utf-8" />
    <link rel="stylesheet" href="style.css">
    <script src="js/jquery-2.1.4.min.js"></script>
    <script src="js/slider.js"></script>
</head>
<body>
<div class="sliders">
    <ul>
        <li class="activa"><img src="fotos-de-Hamburguesa-americana.jpg"></li>
        <li><img src="images (3).jpg"></li>
        <li><img src="images21312.jpg"></li>
    </ul>
<!--
    <ul class="controles">
        <li class="activa">&nbsp</li>
        <li>&nbsp</li>
        <li>&nbsp</li>
    </ul>
-->
</div>
<div class="sliders">
    <ul>
        <li class="activa"><img src="fotos-de-Hamburguesa-americana.jpg"></li>
        <li><img src="images (3).jpg"></li>
<!--    <li><img src="images21312.jpg"></li> -->
    </ul>
</div>
</body>
</html>

JS:

$.fn.slider = function(config) {
    var nodos = this;
    var delay = (typeof config.delay == "number") ? parseInt(config.delay) : 4000;

    for (var i = 0; i < nodos.length; i++) {
        Slider(nodos[i]);
    }

    function Slider(nodo) {
        var galeria = $(nodo).find('ul');
        var btn1 = "<button class='before'></button>";

        if (!$(nodo).hasClass('slider')) $(nodo).addClass('slider');
        if (!galeria.hasClass('galeria')) galeria.addClass("galeria");

        // Encontrar cuantas imagenes hay en la galeria
        var imagenes = $(galeria).find('li');

        // Controles
        var html_bullets = "<ul class='controles'>";
        for (var it = 0; it < imagenes.length; it++) {
            if (it == 0) html_bullets += "<li data-index='" + it + "' class='activa'>&nbsp;</li>";
            else html_bullets += "<li data-index='" + it + "'>&nbsp;</li>";
        }
        html_bullets += "</ul><button class='next'></button>";

        $(nodo).append(html_bullets);
        $(nodo).prepend(btn1);

        var bullets = $(nodo).find("ul.controles li");
        bullets.click(function() {
            var index = $(this).data("index");
            bullets.removeClass('activa');
            imagenes.removeClass('activa');
            $(imagenes[index]).addClass("activa");
            $(bullets[index]).addClass("activa");
        });
    }

    $(".slider").on("click", "button.before", function() {
        var div = this;
        div = $(div).parent();
        console.log(div);
        flechas({div: div});
    });

    $(".slider").on("click", "button.next", function() {
        var div = this;
        div = $(div).parent();
        flechas({div: div, direccion: 1});
    });

    function flechas(tipo) {
        var div = tipo.div;
        var imagen = $(div).find("ul.galeria li.activa");
        var imagenes = $(div).find("ul.galeria li");
        var bullet = $(div).find("ul.controles li.activa");
        var bullets = $(div).find("ul.controles li");
        var index = bullet.data("index");
        var max = bullets.length - 1;

        bullets.removeClass('activa');
        imagenes.removeClass('activa');

        if (tipo.direccion) {
            if (index == max) {
                $(imagenes[0]).addClass("activa");
                $(bullets[0]).addClass("activa");
            } else {
                $(imagenes[index+1]).addClass("activa");
                $(bullets[index+1]).addClass("activa");
            }
        } else {
            if (index == 0) {
                $(imagenes[max]).addClass("activa");
                $(bullets[max]).addClass("activa");
            } else {
                $(imagenes[index-1]).addClass("activa");
                $(bullets[index-1]).addClass("activa");
            }
        }
    }
};

$(document).ready(function() {
    $(".sliders").slider({delay: 5000});
});

CSS:

.slider {
    width: 420px;
    overflow: hidden;
}
.slider ul {
    list-style: none;
    padding: 0;
}
.slider ul.galeria {
    height: 200px;
    position: relative;
    margin-left: 30px;
}
.slider ul.galeria li {
    position: absolute;
    top: 0;
    left: 0;
    opacity: 0;
    transition: all 2s;
}
.slider ul.galeria li.activa {
    opacity: 1;
}
.slider ul.galeria img {
    max-height: 200px;
    margin-left: 5px;
}

/* Controles */
.slider ul.controles {
    text-align: center;
}
.slider ul.controles li {
    background-color: black;
    display: inline-block;
    width: 20px;
    height: 20px;
    border-radius: 10px;
}
.slider ul.controles li.activa {
    background-color: blue;
}

/* Botones-flechas */
.slider button.before {
    background-image: url(Flecha_002.png);
    position: relative;
    left: 0;
    top: 128px;
    background-repeat: no-repeat;
    background-size: 28px;
    height: 48px;
    width: 33px;
    background-color: white;
    border: none;
}
.slider button.next {
    background-image: url(Flecha_001.png);
    position: relative;
    left: 86%;
    bottom: 190px;
    background-repeat: no-repeat;
    background-size: 28px;
    height: 48px;
    width: 33px;
    background-color: white;
    border: none;
}
.slider button.before:focus {
    outline: none;
}
.slider button.next:focus {
    outline: none;
}
.slider button.before:hover,
.slider button.next:hover,
.slider ul.controles li:hover {
    cursor: pointer;
}
Slider with jQuery using bullets
javascript;jquery;html;css
Your code is good, but there are some points you can improve upon:

Assumptions:

$.fn.slider = function(config){
    var nodos = this;
    var delay = (typeof config.delay == "number") ? parseInt(config.delay) : 4000;

You're assuming config won't be empty or left out. You should have a default config, and simply extend the parameter config to include the items the parameter config left off.

Abusing jQuery:

You may not need to use jQuery for everything: see youmightnotneedjquery.com for a full list. For example:

- $(nodo).find('ul') can be written as nodo.getElementsByTagName('ul')
- $(this).data("index") can be written as this.getAttribute('index')
- $(div).parent() can be written as div.parentElement
- .addClass can be written as .classList.add

Better call the redundancy department department:

Your code has quite a bit of redundancy. For example:

var div = this;
div = $(div).parent();
console.log(div);
flechas({div: div});

could be expressed as:

flechas({div: this.parentElement});

And the following is also redundant:

var btn1 = "<button class='before'></button>";
var html_bullets = "<ul class='controles'>";
html_bullets += ...;
$(nodo).append(html_bullets);
$(nodo).prepend(btn1);

Simply remove btn1 and attach its contents to html_bullets.

Indentation and spacing:

Your indentation and spacing are wrong:

- You should have spaces around your operators and after commas.
- You should have each layer of indentation indented by either two, four or eight spaces. Additionally, keep it consistent.

Miscellaneous:

typeof vs. instanceof: while both are similar, I would prefer to use instanceof, as it lets you state the type as the actual type, meaning the compiler will pick it up if you mess up the spelling of the name:

if (config.delay instanceof Number)

parseInt(config.delay): there are many ways to do this, as described by this example; however, I would prefer Number(config.delay), as it makes clearer what you are trying to create. In addition to the point above, leaving off the second parameter of parseInt can cause strange issues. Unless needed, you can pass 10 as the second parameter to parse as decimal.
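A minimal sketch of the default-config suggestion above (the property name and default value come from the original code; the rest is illustrative):

$.fn.slider = function (config) {
    // $.extend skips null/undefined arguments, so this also works
    // when the plugin is called with no config at all.
    var settings = $.extend({ delay: 4000 }, config);

    // ... use settings.delay instead of reading config directly ...
};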
_unix.119680
What is the difference between the device representation in /dev and the one in /sys/class? Is one preferred over the other? Is there something one offers and the other doesn't?
Difference between /dev and /sys/class?
linux;devices;udev;sysfs
The files in /dev are actual device files which udev creates at run time. The directory /sys/class is exported by the kernel at run time, exposing the hierarchy of the hardware through sysfs.

From the libudev and Sysfs Tutorial:

    On Unix and Unix-like systems, hardware devices are accessed through special files (also called device files or nodes) located in the /dev directory. These files are read from and written to just like normal files, but instead of writing and reading data on a disk, they communicate directly with a kernel driver which then communicates with the hardware. There are many online resources describing /dev files in more detail. Traditionally, these special files were created at install time by the distribution, using the mknod command. In recent years, Linux systems began using udev to manage these /dev files at runtime. For example, udev will create nodes when devices are detected and delete them when devices are removed (including hotplug devices at runtime). This way, the /dev directory contains (for the most part) only entries for devices which actually exist on the system at the current time, as opposed to devices which could exist.

Another excerpt:

    The directories in sysfs contain the hierarchy of devices, as they are attached to the computer. For example, on my computer, the hidraw0 device is located under:

    /sys/devices/pci0000:00/0000:00:12.2/usb1/1-5/1-5.4/1-5.4:1.0/0003:04D8:003F.0001/hidraw/hidraw0

    Based on the path, the device is attached to (roughly, starting from the end) configuration 1 (:1.0) of the device attached to port number 4 of device 1-5, connected to USB controller 1 (usb1), connected to the PCI bus. While interesting, this directory path doesn't do us very much good, since it's dependent on how the hardware is physically connected to the computer.

    Fortunately, sysfs also provides a large number of symlinks, for easy access to devices without having to know which PCI and USB ports they are connected to. In /sys/class there is a directory for each different class of device.

Usage?

In general you use rules in /etc/udev/rules.d to augment your system. Rules can be constructed to run scripts when various hardware is present.

Once a system is up you can write scripts to work against either /dev or /sys, and it really comes down to personal preference, but I would usually try to work against /sys and make use of tools such as udevadm to query udev for the locations of various system resources.

$ udevadm info -a -p $(udevadm info -q path -n /dev/sda) | head -15

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

  looking at device '/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda':
    KERNEL=="sda"
    SUBSYSTEM=="block"
    DRIVER==""
    ATTR{ro}=="0"
    ATTR{size}=="976773168"
    ATTR{stat}=="6951659 2950164 183733008 41904530 16928577 18806302 597365181 580435555 0 138442293 622621324"
    ATTR{range}=="16"
    ...
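A quick illustration of those /sys/class symlinks (device names will differ per machine, and the output below is abbreviated and approximate; the target path reuses the sda device path shown above):

$ ls /sys/class
block  input  net  tty  ...

$ readlink /sys/class/block/sda
../../devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda

So /sys/class gives you a stable, class-oriented entry point, while the symlink target reveals the physical attachment path.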
_softwareengineering.245393
In Java there are no virtual, new, or override keywords for method definition, so the way a method works is easy to understand: if DerivedClass extends BaseClass and has a method with the same name and signature as BaseClass, the override takes effect through run-time polymorphism (provided the method is not static).

BaseClass bcdc = new DerivedClass();
bcdc.doSomething(); // will invoke DerivedClass's doSomething method

Now come to C#: there can be a lot of confusion, and it is hard to understand how the new, or virtual + derive, or new + virtual override combinations work. I'm not able to understand why in the world I would add a method to my DerivedClass with the same name and signature as in BaseClass and define a new behaviour, yet under run-time polymorphism the BaseClass method will be invoked! (which is not overriding, but logically it should be). In the case of virtual + override the logical behaviour is correct, but the programmer has to decide, at the time of coding, which methods to give users permission to override. That has some pros and cons (let's not go there now).

So why does C# leave so much room for illogical reasoning and confusion? May I reframe my question as: in which real-world context should I think of using virtual + override instead of new, and also new instead of virtual + override?

After some very good answers, especially from Omar, I get that C#'s designers put more emphasis on programmers thinking before they create a method, which is good and handles some rookie mistakes from Java. Now I have a question in mind. In Java, if I had code like

Vehicle vehicle = new Car();
vehicle.accelerate();

and later I made a new class SpaceShip derived from Vehicle, then to change every Car to a SpaceShip object I would only have to change a single line of code:

Vehicle vehicle = new SpaceShip();
vehicle.accelerate();

This would not break my logic at any point in the code.

But in C#, if SpaceShip does not override the Vehicle class's accelerate and uses new instead, the logic of my code will be broken. Isn't that a disadvantage?
Why was C# made with new and virtual+override keywords unlike Java?
java;c#;language design;implementations
Since you asked why C# did it this way, it's best to ask the C# creators. Anders Hejlsberg, the lead architect for C#, answered why they chose not to go with virtual by default (as in Java) in an interview; pertinent snippets are below.

Keep in mind that Java has virtual by default, with the final keyword to mark a method as non-virtual. Still two concepts to learn, but many folks do not know about the final keyword or don't use it proactively. C# forces one to use virtual and new/override to make those decisions consciously.

    There are several reasons. One is performance. We can observe that as people write code in Java, they forget to mark their methods final. Therefore, those methods are virtual. Because they're virtual, they don't perform as well. There's just performance overhead associated with being a virtual method. That's one issue.

    A more important issue is versioning. There are two schools of thought about virtual methods. The academic school of thought says, "Everything should be virtual, because I might want to override it someday." The pragmatic school of thought, which comes from building real applications that run in the real world, says, "We've got to be real careful about what we make virtual."

    When we make something virtual in a platform, we're making an awful lot of promises about how it evolves in the future. For a non-virtual method, we promise that when you call this method, x and y will happen. When we publish a virtual method in an API, we not only promise that when you call this method, x and y will happen. We also promise that when you override this method, we will call it in this particular sequence with regard to these other ones and the state will be in this and that invariant.

    Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you've got to be real careful about that. You don't want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises. And people may not fully understand the promises they are making when they make something virtual.

The interview has more discussion about how developers think about class inheritance design, and how that led to their decision.

Now to the following question:

    I'm not able to understand why in the world I'm going to add a method in my DerivedClass with same name and same signature as BaseClass and define a new behaviour but at the run-time polymorphism, the BaseClass method will be invoked! (which is not overriding but logically it should be).

This would be when a derived class wants to declare that it does not abide by the contract of the base class, but has a method with the same name. (For anyone who doesn't know the difference between new and override in C#, see this MSDN page.)

A very practical scenario is this:

1. You created an API, which has a class called Vehicle.
2. I started using your API and derived Vehicle.
3. Your Vehicle class did not have any method PerformEngineCheck().
4. In my Car class, I add a method PerformEngineCheck().
5. You released a new version of your API and added a PerformEngineCheck().
6. I cannot rename my method, because my clients are dependent on my API and it would break them.
7. So when I recompile against your new API, C# warns me of this issue.

If the base PerformEngineCheck() was not virtual:

app2.cs(15,17): warning CS0108: 'Car.PerformEngineCheck()' hides inherited member 'Vehicle.PerformEngineCheck()'. Use the new keyword if hiding was intended.

And if the base PerformEngineCheck() was virtual:

app2.cs(15,17): warning CS0114: 'Car.PerformEngineCheck()' hides inherited member 'Vehicle.PerformEngineCheck()'. To make the current member override that implementation, add the override keyword. Otherwise add the new keyword.

Now, I must explicitly decide whether my class is actually extending the base class's contract, or whether it is a different contract that just happens to have the same name.

By making it new, I do not break my clients if the functionality of the base method was different from the derived method. Any code that referenced Vehicle will not see Car.PerformEngineCheck() called, but code that had a reference to Car will continue to see the same functionality that I had offered in PerformEngineCheck().

A similar example is when another method in the base class might be calling PerformEngineCheck() (especially in the newer version): how does one prevent it from calling the PerformEngineCheck() of the derived class? In Java, that decision would rest with the base class, but it does not know anything about the derived class. In C#, that decision rests both with the base class (via the virtual keyword) and with the derived class (via the new and override keywords).

Of course, the warnings that the compiler emits also provide a useful tool for programmers to avoid unexpected mistakes (i.e. overriding or providing new functionality without realizing it).

Like Anders said, the real world forces us into such issues which, if we were to start from scratch, we would never want to get into.

EDIT: Added an example of where new would have to be used for ensuring interface compatibility.

EDIT: While going through the comments, I also came across a write-up by Eric Lippert (one of the original members of the C# design committee) on other example scenarios (mentioned by Brian).

PART 2: Based on the updated question

    But in case of C# if SpaceShip does not override the Vehicle class' accelerate and use new then the logic of my code will be broken. Isn't that a disadvantage?

Who decides whether SpaceShip is actually overriding Vehicle.accelerate() or whether it's different? It has to be the SpaceShip developer. If the SpaceShip developer decides that they are not keeping the contract of the base class, then your call to Vehicle.accelerate() should not go to SpaceShip.accelerate(), or should it? That is when they will mark it as new. However, if they decide that it does indeed keep the contract, then they will in fact mark it override. In either case, your code will behave correctly by calling the correct method based on the contract. How can your code decide whether SpaceShip.accelerate() is actually overriding Vehicle.accelerate() or whether it is a name collision? (See my example above.)

However, in the case of implicit inheritance, even if SpaceShip.accelerate() did not keep the contract of Vehicle.accelerate(), the method call would still go to SpaceShip.accelerate().
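To make the dispatch difference concrete, here is a minimal sketch (my own illustration, not from the original answer; the Boat class is a hypothetical extra added just to show the new behaviour, while Vehicle/Car follow the example above):

using System;

class Vehicle
{
    public virtual string PerformEngineCheck() { return "vehicle check"; }
}

class Car : Vehicle
{
    // 'override' keeps the base contract: calls through a Vehicle reference
    // dispatch to this implementation.
    public override string PerformEngineCheck() { return "car check"; }
}

class Boat : Vehicle
{
    // 'new' declares an unrelated method that merely shares the name:
    // calls through a Vehicle reference still run the base implementation.
    public new string PerformEngineCheck() { return "boat check"; }
}

static class Demo
{
    static void Main()
    {
        Vehicle v1 = new Car();
        Vehicle v2 = new Boat();
        Console.WriteLine(v1.PerformEngineCheck()); // "car check"
        Console.WriteLine(v2.PerformEngineCheck()); // "vehicle check"
    }
}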
_softwareengineering.201359
Is it completely unheard of to have a .c file dedicated to just data? In my case, I'd be using it for global variables that are shared across two other .c files. Here's specifically how I'm using it.

// serverth.h
struct serverth_parameters {
    struct { // right now, this is the only struct needed
        char * root, * user, * public, * site;
    } paths;
    // I anticipate needing another struct here
};

#ifndef SERVERTH_SOURCE
extern struct serverth_parameters parameters;
#endif

// serverth.c
#define SERVERTH_SOURCE
#include "serverth.h"

struct serverth_parameters parameters = {
    .paths = { // macros are actually used here
        "/srv", "/user",
        "/public", "/site"
    }
};

parameters is a struct that's used for a websockets server, in two files:

- One for HTTP (uses parameters.paths.site)
- Two for proprietary protocols (both use .paths.user, one uses .paths.public)

Is this bad practice? Do people do this? Or is it more conventional to just keep the data in the source file where it is most relevant?
.c File Dedicated to Data
programming practices;c
In general I'd be wary of "dumping ground" files, where one would start listing global variables or long lists of #defines.

However, if by globals you actually meant that these globals would be constants, then that would seem acceptable, as long as they are structured and grouped in meaningful patterns, and the extent of their reuse justifies extracting them into a separate file.

If these conditions aren't satisfied, then I'd usually restrict them to the file where they're expected to be used, if possible.

Include Guards

On a different note, I don't know how your files are compiled and what build system you use, but you might want to use compilation guards of the form:

#ifndef MY_MODULE
# define MY_MODULE

/* stuff here */

#endif
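Applied to the header from the question, that might look roughly like this (a sketch; SERVERTH_H is just a conventional guard name I picked, not something from the original code):

// serverth.h
#ifndef SERVERTH_H
#define SERVERTH_H

struct serverth_parameters {
    struct {
        char * root, * user, * public, * site;
    } paths;
};

#ifndef SERVERTH_SOURCE
extern struct serverth_parameters parameters;
#endif

#endif /* SERVERTH_H */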
_webmaster.68147
For every relevant action users perform on a site (posting comments, voting, reporting/flagging, etc.), I store their IP and host name (retrieved from the PHP function gethostname() - doc). I copied this idea of storing the host name from a WordPress rating plugin.

I thought that if a well-known plugin is doing this, it is because that info is somehow useful when moderating comments or preventing poll manipulation. But now that I'm redesigning the site and the database structure, I'm starting to question whether this info is really useful.

Is it normal to keep this info? Should I keep it or just drop the column?
Should I keep host name info for moderation purposes?
moderation
You should not do this. It is bad practice for a website to do host name lookups for each visitor, as it uselessly taxes DNS servers.

Keep the IP, not only because it is useful if you need to report something to an ISP (such as a DoS attack or the like), but also because it is useful when you need to cross-reference your DB logs with your web server access logs. The only information you're really going to get from having a hostname is the ISP, and that is only by convention. An IP lookup service will help you figure out the rest if you need to. Even though IPs may change for specific users fairly often, the information provided by these services doesn't really change that much, because those addresses are still reserved for the same ISP.

Just keep in mind that at any time multiple (unrelated) users may share the same IP from your server's perspective, and a single user may use a different IP at any time, so these are only hints as to what is going on (but knowing the host name doesn't clear things up at all, because this is information you could easily look up at any time it is needed).
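A minimal sketch of the "store only the IP, look the rest up on demand" idea (in Python rather than the site's PHP, purely as an illustration; the address below is a documentation example, not real data):

import socket

def hostname_for(ip):
    """Reverse-DNS lookup done lazily, only when a moderator asks for it."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None  # lookup failed; the stored IP is still usable

# At request time: persist just the address you already have.
log_row = {"ip": "203.0.113.7", "action": "comment"}

# Later, during moderation, resolve only the rows you are actually inspecting.
print(hostname_for(log_row["ip"]))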
_codereview.138960
I have solved problem 12 on the Project Euler website, which reads:

    The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be:

    1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...

    Let us list the factors of the first seven triangle numbers:

     1: 1
     3: 1,3
     6: 1,2,3,6
    10: 1,2,5,10
    15: 1,3,5,15
    21: 1,3,7,21
    28: 1,2,4,7,14,28

    We can see that 28 is the first triangle number to have over five divisors.

    What is the value of the first triangle number to have over five hundred divisors?

And I've solved this in two ways:

1. functools.reduce()

import math
from functools import reduce
import time

def primeFactors(n):
    i = 2
    factors = {}
    num = 0
    while i ** 2 <= n:
        if n % i:
            i += 1
        else:
            n //= i
            num = factors.get(i, None)
            if num is None:
                factors[i] = 1
            else:
                factors[i] = num+1
    if n > 1:
        num = factors.get(n, None)
        if num is None:
            factors[n] = 1
        else:
            factors[n] = num+1
    return factors

start_time = time.time()

numOfDivisors = 500
n = 2
while True:
    t = int(n*(n+1)/2)
    factors = primeFactors(t).values()
    if reduce(lambda x, y: x*y, [t+1 for t in factors]) >= numOfDivisors:
        print(t)
        break
    else:
        n += 1

print("----%s seconds ----" % (time.time() - start_time))

2. for-loop

import math
import time

def primeFactors(n):
    i = 2
    factors = {}
    num = 0
    while i ** 2 <= n:
        if n % i:
            i += 1
        else:
            n //= i
            num = factors.get(i, None)
            if num is None:
                factors[i] = 1
            else:
                factors[i] = num+1
    if n > 1:
        num = factors.get(n, None)
        if num is None:
            factors[n] = 1
        else:
            factors[n] = num+1
    return factors

start_time = time.time()

numOfDivisors = 500
n = 2
while True:
    t = int(n*(n+1)/2)
    factors = primeFactors(t).values()
    k = 1
    for i in factors:
        k *= (i+1)
    if k >= numOfDivisors:
        print(t)
        break
    else:
        n += 1

print("----%s seconds ----" % (time.time() - start_time))

I tested which one was faster and found only a marginal difference: 1.670 seconds versus 1.678 seconds.

I then attempted to compare average computing times by raising the target from 500 to 1000 divisors, repeating this process 10 times. But both solutions finished in almost the same time:

functools.reduce
[13.616845607757568, 13.67394757270813, 13.623621225357056, 13.596383094787598, 13.657264471054077, 13.694176197052002, 13.669324398040771, 13.65349006652832, 13.66620421409607, 13.57088851928711, 13.686619901657104]

for-loop
[13.56181812286377, 13.686477422714233, 13.548126935958862, 13.565587759017944, 13.562162637710571, 13.556873798370361, 13.562631845474243, 13.572312593460083, 13.57419729232788, 13.567578315734863, 13.611796617507935]

Question: Are there any significant (or insignificant) practical differences between the two in the normal case, or is this just a matter of which of the two constructs one prefers to use?
Project Euler 12 (triangle numbers), solved using functools.reduce() and looping
python;programming challenge;primes;comparative review
Here's an in-depth view from the BDFL.

reduce should be faster, especially if you supply it with a function that is written in C. Try replacing your multiplication lambda with operator.mul and see if anything changes.

There is also a matter of style in which of the two to prefer, because more complex tasks would require more complex lambdas, and the whole thing would be less comprehensible than a for-loop. Thus, the use of reduce was discouraged and it was moved into functools.

On the other hand, for less complicated tasks there are such things as sum, all and any. The latter two also short-circuit, whereas reduce does not.
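A minimal sketch of the operator.mul suggestion, applied to the divisor-count step from the question (the exponent list is just an illustrative input):

from functools import reduce
from operator import mul

# prime-factor exponents of some triangle number, e.g. 28 = 2^2 * 7^1 -> [2, 1]
exponents = [2, 1]

# number of divisors = product of (exponent + 1); operator.mul is implemented
# in C, so reduce() skips the per-call overhead of a Python-level lambda
divisor_count = reduce(mul, (e + 1 for e in exponents), 1)
print(divisor_count)  # 6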