id | question | title | tags | accepted_answer
---|---|---|---|---|
_webapps.35332 | I share a folder with 4 Dropbox users, and I just want them to automatically see the files I'm adding; I don't want them to be able to modify or delete anything inside it. So, how can I revoke their permission to modify it? P.S.: I already know about the share-link option, but that way the users don't automatically see the added files. | Share a folder with Dropbox users but prevent them from modifying? | dropbox | null |
_cs.72336 | I'm looking for a data structure that is like a quadtree, where each level is a subdivision of the previous. However, unlike a quadtree, I need the subdivision to occur a different number of times in the horizontal direction than in the vertical direction. In a quadtree the space is subdivided once in each dimension (resulting in four children per node). In the tree I'm looking for, the space may be divided a given number of times in one dimension and a different number of times in the other. Say, for example, twice in X and once in Y (resulting in six children per node). Has such a space-partitioning tree been given a name? Can anyone point me to an existing data structure that fulfills this requirement? Thanks! | Tree structure that is like a quadtree/octree but splits a different number of times in each dimension? | data structures;search trees;space partitioning | null |
_unix.20170 | I have a somewhat-kludgey script in which I want to copy valid, complete JPEG files to a photo album directory. I thought I could use ImageMagick's identify command to do this. The web documentation for that command says "The identify program describes the format and characteristics of one or more image files. It also reports if an image is incomplete or corrupt", which sounds great. And if I run identify jpeg:testfile.jpg, I get an exit code of 0 if the test file is a JPEG, and 1 if it isn't. Good start, but the command also returns 0 if the file appears to be a JPEG but is incomplete. I looked through the extensive list of options, but nothing seems relevant. Can I use this command to do what I want? How? | How can I use imagemagick's identify command in a script to tell if a JPEG file is invalid or corrupt? | scripting;imagemagick | If an alternate program is an option, I think I used jpeginfo -c the last time I needed to check the validity of a bunch of JPEG files. |
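If installing an extra tool is not an option, a rough approximation of the same truncation check can be done in pure Python (my sketch, not part of the answer above): a complete JPEG starts with the SOI marker (FF D8) and ends with the EOI marker (FF D9), so a truncated download usually fails the tail check. Unlike jpeginfo -c, this will not catch corruption in the middle of the file:

```python
def jpeg_looks_complete(path):
    """Heuristic completeness check: a JPEG file starts with the
    SOI marker (FF D8) and ends with the EOI marker (FF D9).
    Some writers pad the tail with NUL bytes, so strip those first.
    This detects truncation, not corruption mid-file."""
    with open(path, "rb") as f:
        data = f.read()
    return (
        len(data) >= 4
        and data.startswith(b"\xff\xd8")
        and data.rstrip(b"\x00").endswith(b"\xff\xd9")
    )
```

A truncated file (one whose tail was cut off before the EOI marker was written) fails the check even though its header still identifies it as a JPEG.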
_codereview.164230 | I have a chunk of code that I know is way too long for the rather simple output I get. It is simply creating a list of items found within a list of websites. It then assigns the text from each of those items in the list to a Tkinter label and positions each of those labels in a distinct row and column inside a grid. So if I had four websites, and seven items found on each of those four websites, I would have 28 labels to create, and 28 grid locations. I have posted the snippet of the code, along with a link to the entire file code below if you would like to open and run it. Would someone please be able to show me a better way of writing this? (I am just a beginner.) Snippet: loop_count = 1 watch_model_description_list = [] watch_model_description_grid_list = [] for num, url in enumerate(watch_link_urls): dennisov_model_html = urlopen(url).read() dennisov_model_html = dennisov_model_html.replace('"', "'") watch_model_description = findall("dib_02'\>([A-Z \/0-9]+)\<", dennisov_model_html) for number, category in enumerate(watch_model_description): watch_model_description = 'category_' + str(loop_count) + ' = Label(window, font = ("Helvetica", 5), text = "' + category + '")' watch_model_description_grid = 'category_' + str(loop_count) + '.grid(padx = 0, pady = 0, row = ' + str(number+3) + ', column = ' + str(num) + ')' watch_model_description_list.append(watch_model_description) watch_model_description_grid_list.append(watch_model_description_grid) loop_count += 1 model_option = {'model_{}'.format(i):e for i, e in enumerate(watch_model_description_list)} model_grid_option = {'model_grid{}'.format(i):e for i, e in enumerate(watch_model_description_grid_list)} model_grid_option_string = '\n'.join(model_grid_option.values()) model_option_string = '\n'.join(model_option.values()) exec(model_option_string) exec(model_grid_option_string) File code: from Tkinter import Tk, Button, Canvas, END, Spinbox, PhotoImage, Label from ttk import Progressbar, Combobox from urllib import 
urlopenfrom re import findallimport urllib## Create a windowwindow = Tk()## Give the window a titlewindow.title('Watch finder')## Show Dennisov image logodennisov_logo_url = http://i.imgur.com/KD6AK08.gifhandle = urlopen(dennisov_logo_url)data = handle.read()raw_image = PhotoImage(master = window, data = data)Label(window, image = raw_image, width = 600).grid(row = 0, column = 0, columnspan = 10)handle.close## Types of Dennisov watches - INTRODUCE THE WATCH NAMES FUNCTION HERE AS THIS IS## HARDCODEDdennisov_type_list = ['Barracuda Limited','Barracuda Chronograph', 'Barracuda Mechanical','Speedster','Free Rider', 'Nau Automatic','Lady Flower','Enigma','Number One']## Open a new HTML file enable writing to itdennisov_file = open('dennisov_url.html', 'w')## Begin writing to HTML filedennisov_file.write('''<!DOCTYPE html><html> <head> <title>Watches</title> </head> <body>''') dennisov_file.write('''</body></html>''')## Define function for selection button pushdef show_models():## Make function read from user dropdown box selection dennisov_type_selection = (dennisov_dropdown_box.get())## Convert to lower-case and replaces spaces with _ to be used in URL dennisov_type = dennisov_type_selection.lower().replace(' ','_')## Complete the Dennisov URL using the converted user name selection dennisov_url = 'https://denissov.ru/en/'+ dennisov_type + '/'## Make a varibale for the Dennisov subpage from the URL domain onwards dennisov_url_subpage = dennisov_url[19:]## Read the Dennisov URL HTML dennisov_html = urlopen(dennisov_url).read()## Replace instances of double quotation marks in the text with singles ## so that the findall regex code does not get confused dennisov_html = dennisov_html.replace('', ')## Find all of the images of the watches. Each watch image starts with the text## img src=. 
Do not match those with any symbols in the URL watch_image_urls = findall(<img src='(/files/collections/o[^']*)', dennisov_html)## Add the URL domain to each watch image subpage to create full addresses watch_image_urls = ['https://denissov.ru' + remainder for remainder in watch_image_urls]## dennisov_file.write(' <img src=' + image + '>\n')## Download and save the PNG images to the directory as GIFs so that## Tkinter can use them## Return the watch 'type'. The watch type is in a title tag called titlusref## and can be any combination of letters and spaces, followed by a space and## < symbol. watch_type = findall(titlusref'\>([a-zA-Z]+ *[a-zA-Z]*) *\<, dennisov_html)[0]## Find all of the links when each watch is clicked. Each watch link starts## with the text a href= followed by the subpage, followed by any## letter, number and _ symbol combination, followed by a backslash watch_link_urls = findall(a href=' + (dennisov_url_subpage) + ([A-Za-z0-9_]+/), dennisov_html) ## Add the main URL to each watch subpage watch_link_urls = [str(dennisov_url) + remainder for remainder in watch_link_urls]## Find all of the model numbers of each watch. Each model starts with the text## covername then any combination of letters, dots and spaces. watch_names = findall(covername'>([A-Z a-z0-9\.]+), dennisov_html)## Get current USD to AUD exchange rate using a known currency website currency_converter_url = 'http://www.xe.com/currencyconverter/convert/?From=USD&To=AUD' currency_html = urlopen(currency_converter_url).read()## Replace instances of double quotation marks in the text with singles ## so that the findall regex code does not get confused currency_html = currency_html.replace('', ')## Find the exchange rate. The exchange rate starts with uccResultAmount'>## and is then followed by any combination of numbers with a decimal place exchange_rate = float(findall(uccResultAmount'\>([0-9]+\.[0-9]*), currency_html)[0])## Find the price of the models and make into floats. 
Each model price contains## numbers followed by the text USD USD_watch_prices = [float(price) for price in (findall(([0-9]*) usd, dennisov_html))]## Convert the USD watch prices to current AUD prices and round to 2 decimals watch_prices = [round(exchange_rate*price, 2) for price in USD_watch_prices]## Add the currency to the prices watch_prices = [AU $ + str(price) for price in watch_prices]## Match each watch name to its image and URL inside a tuple and place each## tuple inside a list watch_list = zip(watch_names, watch_image_urls, watch_prices, watch_link_urls)## For each watch tuple (matching image, name and URL), assign a watch number watch_option = {'watch_{}'.format(i): e for i, e in enumerate(watch_list)}## CREATE A LIST OF MODEL DESCRIPTIONS ACCORDING TO THE TYPE OF WATCH SELECTED:## > Read each model HTML link in watch_link_urls.## > Search for the terms inside each URL that start with dib_02 followed by any ## combination of capital letters, numbers and backslashes.## > Insert Label and Label grid attributes to each result using a counter to keep track ## of overall loops in order to create unique names and locations. 
## > Append the results of each loop to a list called watch_model_description_list ## > Place the results into a dictionary so that the values can be extracted from the ## lists easily, converted to strings and then executed loop_count = 1 watch_model_description_list = [] watch_model_description_grid_list = [] for num, url in enumerate(watch_link_urls): dennisov_model_html = urlopen(url).read() dennisov_model_html = dennisov_model_html.replace('', ') watch_model_description = findall(dib_02'\>([A-Z \/0-9]+)\<, dennisov_model_html) for number, category in enumerate(watch_model_description): watch_model_description = 'category_' + str(loop_count) + ' = Label(window, font = (Helvetica, 5), text = '+ category +')' watch_model_description_grid = 'category_' + str(loop_count) + '.grid(padx = 0, pady = 0, row = ' + str(number+3) + ', column = '+ str(num) +')' watch_model_description_list.append(watch_model_description) watch_model_description_grid_list.append(watch_model_description_grid) loop_count += 1 model_option = {'model_{}'.format(i):e for i, e in enumerate(watch_model_description_list)} model_grid_option = {'model_grid{}'.format(i):e for i, e in enumerate(watch_model_description_grid_list)} model_grid_option_string = '\n'.join(model_grid_option.values()) model_option_string = '\n'.join(model_option.values()) exec(model_option_string) exec(model_grid_option_string)## Create the Spinboxes for each watch model global spinboxes spinboxes = [] spinbox_grid_list = [] for number, name in enumerate(watch_names): spinboxes.append(Spinbox(window, width = 3, from_= 0, to = 10)) spinboxes[number].grid(padx = 0, pady = 0, row = 13, column = number)## Create the prices for each watch model prices = [] prices_grid_list = [] for number, price in enumerate(watch_prices): prices.append(Label(window, font = (Helvetica, 10), text = price)) prices[number].grid(padx = 0, pady = 0, row = 12, column = number) ## Create the names for each watch model names = [] names_grid_list = [] for 
number, name in enumerate(watch_names): names.append(Label(window, font = (Helvetica, 10), text = name)) names[number].grid(padx = 0, pady = 0, row = 2, column = number) ## Create a Create Invoice button invoice_button = Button(window, text = 'Create Invoice', command = create_invoice, width = 20) invoice_button.grid(pady = 10, padx = 2, row = 16, columnspan = 9, sticky = S) ## Define an action for what the Create Invoice button is pushed def create_invoice(): for i, spinbox in enumerate(spinboxes): print 'spinbox_{} = '.format(i) + (spinbox.get())## Create dropdown box textdennisov_dropdown_text = Label(window, text = Watch type:, bg = 'White', width = 20)## Create the dropdown box for the Dennisov watch typesdennisov_dropdown_box = Combobox(window, width = 25, values = dennisov_type_list)## Create Dennisov type selection buttondennisov_select_button = Button(window, text = 'Select', command = show_models, width = 20)## Locate elements on griddennisov_dropdown_text.grid(pady = 2, padx = 2, row = 1, columnspan = 9, sticky = 'W')dennisov_dropdown_box.grid(pady = 2, padx = 2, row = 1, columnspan = 9, sticky = 'N')dennisov_select_button.grid(pady = 2, padx = 2, row = 1, columnspan = 9, sticky = 'E')dennisov_file.close()window.mainloop() | Code that extracts items from a list of websites and creates Tkinter text labels using their values | python;beginner;python 2.7;tkinter | null |
_softwareengineering.198378 | The Apache License states: "You must give any other recipients of the Work or Derivative Works a copy of this License." So if you've copied a bunch of functions from some Apache-licensed code (so it's not just linking; you've borrowed code) into a product that is not itself Apache-licensed, what's a good way to indicate that the license you included only applies to some bits of code in your product, not to the product as a whole? (To avoid the suggestion that the product as a whole is freely copyable.) Should a list of projects you've borrowed from and their various licenses be included in the About screen for the product, insisting that the licenses only apply to those bits? I'm asking because that seems to make sense, but I don't recall seeing that anywhere. | Apache license in non-Apache licensed software; how to distinguish? | licensing;apache license | null |
_softwareengineering.124649 | Let's pretend we have a service that calls a business process. This process will call on the data layer to create an object of type A in the database. Afterwards we need to call again on another class of the data layer to create an instance of type B in the database. We need to pass some information about A for a foreign key. In the first method we create an object (modify state) and return its ID (query) in a single method. In the second method we have two methods, one (createA) for the save and the other (getId) for the query. public void FirstMethod(Info info) { var id = firstRepository.createA(info); secondRepository.createB(id); } public void SecondMethod(Info info) { firstRepository.createA(info); var key = firstRepository.getID(info); secondRepository.createB(key); } From my understanding the second method follows command-query separation more fully, but I find it wasteful and counter-intuitive to query the database to get the object we have just created. How do you reconcile CQS with such a scenario? Does only the second method follow CQS, and if so, is it preferable to use it in this case? | Does command/query separation apply to a method that creates an object and returns its ID? | object oriented | CQS is a guideline rather than an absolute rule. See the wiki article for examples of activities that are impossible under strict CQS. However, in this case, if you did want to maintain CQS, you could either create the ID on the client side (such as a GUID), or the client could request an ID from the system before creating any objects, which feels cleaner to me than creating the object then querying for it (but is harder than just using an identity column). Personally I'd just return the ID and call it one of those situations where CQS isn't a good fit. Another good article with examples: Martin Fowler |
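The client-generated-ID option from the answer can be sketched as follows. This is a minimal in-memory Python sketch; the repository classes and method names are hypothetical stand-ins for the data layer, not the asker's actual C# code. Because the ID is generated up front, both repository calls remain pure commands and no query is needed between them:

```python
import uuid

class FirstRepository:
    """In-memory stand-in for the data layer that stores A records."""
    def __init__(self):
        self.rows = {}

    def create_a(self, record_id, info):
        # Pure command: mutates state, returns nothing.
        self.rows[record_id] = info

class SecondRepository:
    """In-memory stand-in for the data layer that stores B records."""
    def __init__(self):
        self.rows = {}

    def create_b(self, foreign_key):
        # Pure command: records the foreign key to an A record.
        self.rows[foreign_key] = {"a_id": foreign_key}

def create_both(first, second, info):
    # The client generates the ID (here a UUID, the GUID the answer
    # mentions), so the second command can be issued without
    # querying the database for the ID of the object just created.
    record_id = str(uuid.uuid4())
    first.create_a(record_id, info)
    second.create_b(record_id)
    return record_id
```

The trade-off the answer notes still applies: this is easy with UUIDs but harder if the schema relies on database-generated identity columns.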
_unix.382948 | I have a shell script ts.sh: perl -i.BAK -p -e s/^.* - // *.log*. When I run it as ./ts.sh on Windows using Strawberry Perl under Cygwin bash, I get the error: -i used with no filenames on the command line, reading from STDIN. However, if I run the command perl -i.BAK -p -e s/^.* - // *.log* directly in the very same Cygwin bash, it does its job without error. What should I do to get the ts.sh script working without borking? Thanks | perl inplace regex works in shell; however, inside shell script gives error. why? | regular expression;backup;perl;cygwin | null |
_webapps.31685 | Before you guys start tormenting me by saying that it's a repeated question: my question deals with the redesigned Facebook Events page. Even on Facebook Help, if I check how to export birthdays, the help text given there is for the old page, wherein it says that you need to click the magnifying glass icon, from there select Birthdays, and so on. But the redesigned Events page doesn't have this option. I'm really getting annoyed, since even Facebook didn't update its help text. There is no option to select Birthdays or export birthdays. | Export birthdays from facebook (Redesigned Events Page) | facebook | null |
_cs.39695 | There is a hint for part B, but I do not get which cases are the right ones to get rid of both goblins. Thanks! How can I tell who is a truth-teller and who is a liar? | Help on previous question (Truthteller and Liar problem) | algorithms | null |
_codereview.169212 | I've created a subclass of Python's built-in dictionary dict. The subclass allows dictionaries to have multiple keys with the same value. Hence the name homogeneous. Behind the scenes, I'm using a normal dictionary. If a key has more than one value, then its values are implemented with a list. Otherwise, the key, value pair is normal. Here are some example usages:>>> d = HomogeneousDict([('a', 1), ('b', 2), ('c', 3), ('b', 4)])>>> dHomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1})>>> d['a']1>>> d['b'][2, 4]>>> d['a'] = 5>>> dHomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1, 'a': 5})>>> d['a'][1, 5]>>> len(d)5>>> d['subdict'] = HomogeneousDict([('d', 6), ('e', 7)])>>> dHomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1, 'a': 5, 'subdict': HomogeneousDict({'e': 7, 'd': 6})})>>> d['subdict'] = 8>>> dHomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1, 'a': 5, 'subdict': HomogeneousDict({'e': 7, 'd': 6}), 'subdict': 8})>>> d['subdict'][HomogeneousDict({'e': 7, 'd': 6}), 8]>>> del d['subdict']>>> dHomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1, 'a': 5})>>> d.pop('b', 1) # Pop the second value of the 'b' key4>>> dHomogeneousDict({'c': 3, 'b': 2, 'a': 1, 'a': 5})>>> for k, v in d.items(): print('{} => {}'.format(k, v))c => 3b => 2a => 1a => 5>>> I haven't given much thought to whether there is a practical usage for this type of dictionary. I really made this as more of a hobby project, rather than out of necessity. Here's the source code. I documented the important parts of the code, so there shouldn't be much to explain or figure out. It's actually only about 120 LOCs: class HomogeneousDict(dict): About ----- A dictionary that allows multiple keys to have like values. The dictionary supports all of the same methods of a normal dictionary. The chief difference between a normal dictionary and a HomogeneousDict is when you get items. If a key has been given multiple values, a list of all of the values is returned for that key. 
Otherwise, the key's value is returned normally as is. Examples --------- >>> d = HomogeneousDict([('a', 1), ('b', 2), ('c', 3), ('b', 4)]) >>> d HomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1}) >>> d['a'] 1 >>> d['b'] [2, 4] >>> d['a'] = 5 >>> d HomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1, 'a': 5}) >>> d['a'] [1, 5] >>> len(d) 5 >>> d['subdict'] = HomogeneousDict([('d', 6), ('e', 7)]) >>> d HomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1, 'a': 5, 'subdict': HomogeneousDict({'e': 7, 'd': 6})}) >>> d['subdict'] = 8 >>> d HomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1, 'a': 5, 'subdict': HomogeneousDict({'e': 7, 'd': 6}), 'subdict': 8}) >>> d['subdict'] [HomogeneousDict({'e': 7, 'd': 6}), 8] >>> del d['subdict'] >>> d HomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1, 'a': 5}) >>> d.pop('b', 1) # Pop the second value of the 'b' key 4 >>> d HomogeneousDict({'c': 3, 'b': 2, 'a': 1, 'a': 5}) >>> for k, v in d.items(): print('{} => {}'.format(k, v)) c => 3 b => 2 a => 1 a => 5 >>> def __init__(self, values): Parameters ---------- values : list List of tuples that represent key, value pairs for key, value in values: self.__setitem__(key, value) def __setitem__(self, key, value): if key in self: if isinstance(self[key], list): self[key].append(value) else: super().__setitem__(key, [self.pop(key), value]) else: super().__setitem__(key, value) def __getitem__(self, key): return super().__getitem__(key) def __len__(self): count = 0 for key, value in super().items(): if isinstance(value, list): count += len(value) else: count += 1 return count def __repr__(self): key_value_reprs = [] for key, value in super().items(): if isinstance(value, list): for element in value: repr_ = '{}: {}'.format(key.__repr__(), element.__repr__()) key_value_reprs.append(repr_) else: repr_ = '{}: {}'.format(key.__repr__(), value.__repr__()) key_value_reprs.append(repr_) return 'HomogeneousDict({%s})' % ', '.join(key_value_reprs) def pop(self, key, index=0): if isinstance(self[key], list): value = self[key].pop(index) if len(self[key]) == 1: super().__setitem__(key, self[key].pop()) return value else: return super().pop(key) def items(self): for key, value in super().items(): if isinstance(value, list): for element in value: yield (key, element) else: yield key, value The biggest question I have is: how well did I emulate a Python container type? Is there anything unintuitive or non-normal for a dictionary-like container type? Also, is there a better name for a dictionary that behaves like this? | HomogeneousDict - The Python dictionary that can have homogeneous keys | python;dictionary | __repr__ should: look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). Yours cannot possibly do that, as creating the vanilla dictionary loses all but one of the values for each key. Also note you can write: repr_ = '{!r}: {!r}'.format(key, element) In terms of emulating container types, you can look at the ABCs to see what needs to be implemented. I would probably have approached this with composition rather than inheritance, leveraging the ABCs for automatic implementation of some methods: class HomogeneousDict(MutableMapping): This helps avoid missing important methods; for example, you don't provide a matching implementation of .values. In terms of naming, this is generally called a MultiDict. Given the examples, I'd add the following to the end of the file: if __name__ == '__main__': import doctest doctest.testmod() This won't impact anything importing the class, but it means that the behaviour is validated against your expectations if you run the file directly, using doctest. 
This may mean tweaking things slightly to avoid issues with order, etc.; for example: File "so.py", line 20, in __main__.HomogeneousDict Failed example: d Expected: HomogeneousDict({'c': 3, 'b': 2, 'b': 4, 'a': 1}) Got: HomogeneousDict({'a': 1, 'c': 3, 'b': 2, 'b': 4}) You will also need to update multi-line examples with a leading ellipsis: >>> for k, v in d.items(): ... print('{} => {}'.format(k, v)) ... c => 3 |
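A rough sketch of the composition-plus-ABC suggestion from the review above (the class name follows the reviewer's MultiDict remark; the internals are mine, not the reviewer's): wrap a plain dict that maps each key to a list of values, implement the five abstract methods, and MutableMapping derives the rest for you:

```python
from collections.abc import MutableMapping

class MultiDict(MutableMapping):
    """Composition instead of inheritance: wrap a plain dict mapping
    each key to a list of values. MutableMapping then derives get,
    setdefault, update, items, keys, values, __contains__, etc.
    from the five methods implemented below."""

    def __init__(self, pairs=()):
        self._data = {}                      # key -> list of values
        for key, value in pairs:
            self[key] = value

    def __setitem__(self, key, value):
        # Setting appends instead of overwriting, matching the
        # original HomogeneousDict behaviour.
        self._data.setdefault(key, []).append(value)

    def __getitem__(self, key):
        values = self._data[key]
        return values[0] if len(values) == 1 else list(values)

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return sum(len(values) for values in self._data.values())
```

This avoids the isinstance(value, list) checks scattered through the subclass version, and any method MutableMapping derives (such as keys or __contains__) stays consistent automatically.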
_unix.11550 | If I follow a process I mentioned here, I am supposed to log out and then in again for the changes to take place. What about Nautilus? I tried to restart it and was still unsuccessful. The only way I have found that works so far is logging out of the desktop and then in again. That's not always convenient, of course. | How to make Nautilus notice changes regarding group permissions | permissions;nautilus;group | You can't grant a new group to a running process. You need to log in again to get a process with the changed group memberships. What you can do is launch Nautilus from a different session but have it display on your existing display, something like: ssh localhost DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY nautilus & |
_softwareengineering.120328 | During the development process, things are constantly changing (especially in the early phases). Requirements change, the UI changes, everything changes. Pages that clearly belonged to a specific controller mutate into something completely different later on. For example, let's say that we have a website for managing Projects. One page of the website was dedicated to managing existing members of a specific project and inviting new ones. Naturally, I created a members controller nested under projects, which had the proper responsibility. Later in development, it turned out that this was the only page configuring the project in any way, so additional functionality was added to it: editing the project description, setting the project as the default, and so on. In other words, this page changed its primary responsibility from managing project members to managing the project itself. Ideally, this page should be moved to the edit action of the projects controller. That would mean that all request and controller specs need to be refactored too. Is it worth the effort? Should it be done? Personally, I am really starting to dislike the 1-1 relationship between views and controllers. It's a common situation that we have one page (view) that handles two or more different resources. I think we should have views completely decoupled from controllers, but Rails is giving us a hard time achieving this. I know that AJAX can be used to solve this issue, but I consider it an improvisation. Is there some other kind of architecture (other than MVC) that decouples views from controllers? | how to deal with controller mutations | ruby on rails | Applying composition over inheritance to controllers is one of those approaches with which you just can't go wrong.
The controller should just define the request/response processes, but anything substantial happening in between is going to be defined outside of the class.For instance, if you had a controller that filtered a collection of Product instances by a certain criteria, apply the VAT to the base price and then produce a tabular representation, CSV or JSON response, you'd end up with the following classes:A class that takes the request object and returns an appropriate collection of Product instances; this class is concerned with knowing how to construct an appropriate query based on the incoming request (like querying products based on a mix of attribute values).A class that takes a Product collection, processes each entry and returns the resulting collection; this class is concerned with taking a product's specified VAT and applying it on the base price.A class that takes the incoming request object and a Product collection and produces an appropriate response; this class is concerned with figuring out whether to produce an HTML, CSV or JSON response of the collection.Your actual controller that simply creates a network of communication between the three classes; this class finally specifies the notion of collecting, processing and displaying the Product model appropriately.The thing is that controllers often end up exhibiting a lot of complex behavior, but this way that behavior is factored into several different components that are defined and tested independently. By simply mixing in different classes and changing a few lines of code, your controller's behavior changes dramatically. I don't really have any experience with Ruby, but from what I know this can be achieved by including and mixing different modules and perhaps even a macro or two for seasoning.One good example of this approach are Django's generic controllers. 
They're just types composed of different mixins, where each mixin defines certain behavior and exposes class attributes and instance methods that can be specified/overridden to configure it. The generic controllers provided are just particular combinations of existing mixins suited to particular problems. Another good example is Android with its controllers. An Activity is called by the instrumentation framework whenever the activity needs to respond to a certain kind of request (create yourself, start yourself, destroy yourself) with certain input, but it's all happening on a UI thread that expects these methods to be snappy. Anything substantial, such as number crunching, retrieving data over HTTP, collecting data from the DB, and so forth, is actually managed by objects of some external types that communicate data back to the calling controller. Sometimes this communication is generic enough that it can be structured by implementing interfaces, and the behavior pertaining to each is specified in an anonymous class. Again, you end up with a simple controller that only wires a network of communication between different objects to define some complex behavior, but the behavior is defined in a compact form from which you can readily figure out what's going on. This is the code reuse promised by the composition principle: if your controller's behavior needs to be migrated to a different controller, or if a certain kind of behavior needs to be shared between multiple controllers, composition over inheritance provides you with the ability to do so. And what's best, by applying the Liskov substitution principle, the possibilities become endless! |
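To make the four-collaborator example above concrete, here is an illustrative Python sketch (all names are invented for this answer; they are not tied to Rails, Django, or any real framework). Each collaborator owns one concern, and the controller only wires them together:

```python
import json

class ProductQuery:
    """Knows how to turn request parameters into a filtered collection."""
    def __init__(self, products):
        self.products = products

    def filter(self, request):
        color = request.get("color")
        return [p for p in self.products
                if color is None or p["color"] == color]

class VatProcessor:
    """Knows how to apply VAT to each product's base price."""
    def __init__(self, rate):
        self.rate = rate

    def process(self, products):
        return [dict(p, price=round(p["price"] * (1 + self.rate), 2))
                for p in products]

class JsonRenderer:
    """Knows how to turn the processed collection into a response body."""
    def render(self, products):
        return json.dumps(products)

class ProductController:
    """Thin controller: it only wires the collaborators together."""
    def __init__(self, query, processor, renderer):
        self.query = query
        self.processor = processor
        self.renderer = renderer

    def handle(self, request):
        products = self.query.filter(request)
        products = self.processor.process(products)
        return self.renderer.render(products)
```

Swapping JsonRenderer for a CSV renderer, or VatProcessor for a no-op, changes the controller's behavior without touching the controller class itself, which is the point of the composition approach.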
_scicomp.21798 | I'm doing a computational biology project in which I simulate evolution under different inheritance rulesets, and I am generating phylogenetic trees (beautifully visualised in Python with ete3, which I recommend; it can be found here: http://etetoolkit.org/download/). My question is: can someone point me in the right direction to find and test out some simple metrics that can describe these trees in terms of 'branchy-ness' (you can tell I'm not a bioinformatician or phylogeneticist!)? I'm looking for mean-field descriptors of the trees, kind of like the degree distribution for networks... | Help with some starter metrics on phylogenetic trees? | computational biology | In Natural Language Processing the terms bushy and straggly are used to describe the tree structure of sentence grammar parses. Bushy trees are flatter rather than deeper. Straggly trees branch deeply to the right or left. As far as metrics, you could use tree depth as well as straggliness, which has been quantified by Belay et al.: We calculate the straggliness of a sentence to be the maximum depth of the tree (calculated by counting the maximum stack depth of the parse trees) divided by the number of phrases in the sentence (as determined by the number of lines in the parse tree). In addition, we took the average and standard deviation of the bushiness and straggliness of each sentence and used those as features as well. |
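Both metrics are cheap to compute on any tree you can walk. Below is a hedged sketch on a plain (label, children) tuple representation; with ete3 you would traverse TreeNode objects instead, and the exact definitions (e.g. whether straggliness divides by node count or phrase count, as in Belay et al.) are choices to adapt to your own trees:

```python
def max_depth(tree):
    """Tree nodes are (label, children) tuples; a leaf has depth 1."""
    _label, children = tree
    return 1 + max((max_depth(c) for c in children), default=0)

def count_nodes(tree):
    _label, children = tree
    return 1 + sum(count_nodes(c) for c in children)

def avg_branching(tree):
    """Mean child count over internal (non-leaf) nodes: a crude
    'bushiness' score, where higher means flatter, bushier trees."""
    def walk(node):
        _label, children = node
        if not children:
            return 0, 0          # (total degree, internal-node count)
        degree, internal = len(children), 1
        for child in children:
            d, i = walk(child)
            degree, internal = degree + d, internal + i
        return degree, internal
    degree, internal = walk(tree)
    return degree / internal if internal else 0.0

def straggliness(tree):
    """Belay et al.-style: maximum depth divided by node count."""
    return max_depth(tree) / count_nodes(tree)
```

A pure chain scores straggliness 1.0 and average branching 1.0, while a flat star of the same size scores much lower straggliness and higher branching, which matches the bushy/straggly intuition from the answer.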
_unix.260331 | currently i have a setup with two hardrives, one with xfs, the other one with btrfs on it.The root fs is mounted on the xfs, and the btrfs under /data.For verious reasons, the directory /var/www on the xfs was replaced by a bind mount to /data/var/www. (mount -o bind /data/var/www /var/www) So if you look in both directories (/var/www & /data/var/www) its exactly the same content.To my surprise some btrfs tools cannot handle bind mounted pathes, so whatever i i do i need for certain situations that the given path /var/ww is canonicalized to /data/var/wwwHow can i do that with a shell tool? something like resolve /var/www which would then return /data/var/www (or if there is more than one bind mount, it would return the proper path. | resolve bind mounts with a shell tool/script | mount;btrfs;bind | null |
_softwareengineering.71250 | In the last two years I have worked with a poorly written codebase of nearly 40K lines of code. Over that time I have made many small refactorings to improve it, and some bigger ones as time permitted. Unfortunately, I still view the codebase as poorly written. What's your experience? Do you think many small fixes can take me somewhere, or do I have to stop and allocate a big chunk of time to fix things (of course, management won't be happy if I do this while we're working on the current very important new feature)? | Small refactorings on a poor codebase? | refactoring;legacy | My take is that refactoring is included in delivering new features. I assume that leaving good code behind me is an expectation of my job and seniority. Therefore, in a case like yours, I would refactor as heavily as needed all the parts that are affected by any change, and include refactoring time in the estimates I give. If your management does not understand the concept of technical debt (and the costs of repaying it) or does not support a reasonable amount of refactoring, you might be very limited in your current position. You should never be asked to deliver crap code, within reasonable limits. If you are, then the standard retort is: "Why don't you hire a junior programmer to write it? They will surely cost less than me." |
_webapps.100333 | My friend connects with me via the Facebook Messenger application. A few days ago, after it was turned off, she checked whether she had messages from me and found out that I was shown as online ("Active now") many times. There is no way that I was online, because I close all of my applications when I work, and I had been in meetings (and my phones were, in fact, offline). Can anyone explain why the app lists me as being online and active? | My Facebook Messenger says "Online - Active Now" despite my phones being offline | facebook chat | null
_unix.129424 | I was wondering about the semantics of ipset(8). Is it possible to add a rule matching a set to iptables and then manipulate the set, or can I only create a set and swap it for an older set in order to apply it to an iptables rule matching the name? I.e. can I add/remove to/from an IP set ad hoc, or can I only exchange whole sets while the sets are in active use? The reason I ask is as follows. Say I create a set:

ipset create PCs hash:ip
ipset add PCs 1.1.1.1
ipset add PCs 2.2.2.2

...et cetera. And a rule that allows access to HTTP:

iptables -A INPUT -p tcp --dport 80 -m set --set PCs src -j ACCEPT

What happens when I run:

ipset add PCs 3.3.3.3

Will the iptables rule now take immediate effect for IP 3.3.3.3 as well? I saw it's possible to use -j SET --add-set ... to manipulate IP sets ad hoc from within iptables rules. This makes me think it should work to manipulate a set at any given point. However, the ipset project site seems to suggest that swapping a new (adjusted) set for another is the better alternative. Be it via ipset swap or via ipset restore -!. Can anyone shed light on this? | Do I have to swap IP sets, or can I add/remove on the fly? | linux;iptables;firewall;ipset | You can add IPs to and remove them from your already defined sets on the fly. This is one of the ideas behind IP sets: if this wasn't possible, the whole set extension of iptables wouldn't make much sense. The primary goal of ipset was to enable you to define (also dynamically) classes of matches (e.g. for dynamically blacklisting malicious hosts without the need to magically add one rule for every single host).

Excerpt from the ipset homepage:

- store multiple IP addresses or port numbers and match against the collection by iptables at one swoop
- dynamically update iptables rules against IP addresses or ports without performance penalty
- express complex IP address and ports based rulesets with one single iptables rule and benefit from the speed of IP sets |
_unix.68229 | This is the content of /etc/bashrc; I would like to modify it to show root as red, but I don't know where to add the color code.

# /etc/bashrc

# System wide functions and aliases
# Environment stuff goes in /etc/profile

# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.

# are we an interactive shell?
if [ "$PS1" ]; then
  if [ -z "$PROMPT_COMMAND" ]; then
    case $TERM in
    xterm*)
        if [ -e /etc/sysconfig/bash-prompt-xterm ]; then
            PROMPT_COMMAND=/etc/sysconfig/bash-prompt-xterm
        else
            PROMPT_COMMAND='printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
        fi
        ;;
    screen)
        if [ -e /etc/sysconfig/bash-prompt-screen ]; then
            PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
        else
            PROMPT_COMMAND='printf "\033]0;%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
        fi
        ;;
    *)
        [ -e /etc/sysconfig/bash-prompt-default ] && PROMPT_COMMAND=/etc/sysconfig/bash-prompt-default
        ;;
    esac
  fi
  # Turn on checkwinsize
  shopt -s checkwinsize
  [ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "
  # You might want to have e.g. tty in prompt (e.g. more virtual machines)
  # and console windows
  # If you want to do so, just add e.g.
  # if [ "$PS1" ]; then
  #   PS1="[\u@\h:\l \W]\\$ "
  # fi
  # to your custom modification shell script in /etc/profile.d/ directory
fi

if ! shopt -q login_shell ; then # We're not a login shell
    # Need to redefine pathmunge, it get's undefined at the end of /etc/profile
    pathmunge () {
        case ":${PATH}:" in
            *:"$1":*)
                ;;
            *)
                if [ "$2" = "after" ] ; then
                    PATH=$PATH:$1
                else
                    PATH=$1:$PATH
                fi
        esac
    }

    # By default, we want umask to get set. This sets it for non-login shell.
    # Current threshold for system reserved uid/gids is 200
    # You could check uidgid reservation validity in
    # /usr/share/doc/setup-*/uidgid file
    if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
       umask 002
    else
       umask 022
    fi

    # Only display echos from profile.d scripts if we are no login shell
    # and interactive - otherwise just process them to set envvars
    for i in /etc/profile.d/*.sh; do
        if [ -r "$i" ]; then
            if [ "$PS1" ]; then
                . "$i"
            else
                . "$i" >/dev/null 2>&1
            fi
        fi
    done

    unset i
    unset pathmunge
fi
# vim:ts=4:sw=4

In Debian I do it in this line:

PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u\[\033[00;37m\]@\[\033[01;34m\]\h\[\033[01;37m\]:\[\033[01;36m\]\w\[\033[01;35m\]\$\[\033[00m\] '

but in CentOS I have no idea.

edit: Also I am thinking that maybe editing that file is going to take effect on both the normal user and the root user. But looking in the users' specific .bashrc files, they only point to /etc/bashrc. | How to colorize root in red in CentOS? | centos;colors;bashrc | You will need to play with the colours, but something like this should do the trick. Create the file /etc/profile.d/colours.sh with content similar to this:

#!/bin/bash
if [ "$(id -u)" -eq 0 ]; then
    export PS1="\[\033[01;32m\]\u\[\033[00;37m\]@\h:\w\$ "
fi |
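For a red root prompt specifically: 31 is the ANSI code for red (the snippet above uses 01;32, which is bright green). A variant of the same /etc/profile.d approach, with the other colours chosen arbitrarily and the \[ \] guards keeping bash's line editing sane:

```sh
# /etc/profile.d/colours.sh (sketch): red user@host for root, left alone otherwise
if [ "$(id -u)" -eq 0 ]; then
    PS1='\[\033[01;31m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
    export PS1
fi
```

Since scripts in /etc/profile.d/ are sourced for every login shell (and, per the bashrc above, for interactive non-login shells too), this takes effect for all users without editing /etc/bashrc itself.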
_webmaster.90187 | How do I display multiple book authors using JSON-LD? This question is similar to my original question, but I decided to create a different one seeing that 2 different formats are used. Here is my post using microdata: "schema.org/Book with multiple authors". My question is similar. I have a book with multiple authors. Am I doing it correctly by displaying the author property more than once? Below is the code using JSON-LD:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebPage",
  "mainEntity": {
    "@type": "Book",
    "author": {
      "@type": "Person",
      "familyName": "van der Westhuizon",
      "givenName": "Jason",
      "name": "Jason van der Westhuizon"
    },
    "author": {
      "@type": "Person",
      "familyName": "du Toit",
      "givenName": "Jene",
      "name": "Jene du Toit"
    },
    "author": {
      "@type": "Person",
      "familyName": "September",
      "givenName": "Koos",
      "name": "Koos September"
    },
    "bookFormat": "http://schema.org/Paperback",
    "datePublished": "2014-11",
    "inLanguage": "en",
    "isbn": "1234567890123",
    "name": "My Book Name",
    "numberOfPages": 381,
    "publisher": {
      "@type": "Organization",
      "name": "My Publisher"
    },
  }
}
</script>

The reason why I ask is that when I go to Google's structured data testing tool, the results show only one author. Why doesn't it display all 3? Is my code wrong? | schema.org/Book with multiple authors using JSON-LD | html;html5;microdata;schema.org;json ld | Your code contains an error; that's why two of the authors aren't recognized. If you have more than one author, you should add them as a list, without duplicating the author property.
Here is the corrected code:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebPage",
  "mainEntity": {
    "@type": "Book",
    "author": [{
      "@type": "Person",
      "familyName": "van der Westhuizon",
      "givenName": "Jason",
      "name": "Jason van der Westhuizon"
    }, {
      "@type": "Person",
      "familyName": "du Toit",
      "givenName": "Jene",
      "name": "Jene du Toit"
    }, {
      "@type": "Person",
      "familyName": "September",
      "givenName": "Koos",
      "name": "Koos September"
    }],
    "bookFormat": "http://schema.org/Paperback",
    "datePublished": "2014-11",
    "inLanguage": "en",
    "isbn": "1234567890123",
    "name": "My Book Name",
    "numberOfPages": 381,
    "publisher": {
      "@type": "Organization",
      "name": "My Publisher"
    }
  }
}
</script> |
_softwareengineering.283760 | Making types immutable is often desirable, especially for multi-threaded applications. There's no need to worry about concurrent access and no need for any synchronization. The common STL containers require the contained objects to be assignable, however, so they don't work with immutable types. How should I strike a balance between wanting immutable types and wanting STL-compatible types? What I could do:

- provide an assignment operator and stop worrying about immutability. Either add synchronization or document that object assignment isn't thread-safe.
- live with the fact that the types can't go directly into containers and wrap them in std::unique_ptr/std::shared_ptr when required.

What are the tradeoffs involved, and which solution should I prefer for which types of objects? Should I strive to always make my objects STL-compatible, or should I only do that when I anticipate a need for it? Should I strive to always make my objects immutable, or only when I see a concrete benefit from it? | C++: Make classes immutable or compatible with STL containers | object oriented;c++;immutability | Typical C++ is more about values that are copied than about making immutable objects that are shared. Separation instead of immutability. Try to keep the different threads' data separate. If you do want immutable shared data, you can always make a shared_ptr<T>, fill the object there with data, then hand out shared_ptr<const T> references to it. Or use something like Adobe's copy_on_write. So make your classes mutable and have them behave as proper values. |
_cstheory.6234 | How does the Kerberos key distribution protocol prevent replay attacks? | How does the Kerberos key distribution protocol prevent replay attacks? | cr.crypto security;cryptographic attack | null
_cs.44855 | I learned that recursive Fibonacci is $O(2^N)$. However, when I implement it and print out the recursive calls that were made, I only get 15 calls for N=5. What am I missing? Should it not be 32 or nearabouts? It's like less than half!

$COUNT = 0

def fibonacci( n )
  $COUNT += 1
  return n if ( 0..1 ).include? n         # Base case: returns 0 or 1
  fibonacci( n - 1 ) + fibonacci( n - 2 ) # Recursive case
end

| If recursive Fibonacci is $O(2^N)$ then why do I get 15 calls for N=5? | algorithm analysis;asymptotics;runtime analysis;recursion | You ask, "I have an $O(2^n)$ runtime, why do I not observe $2^n$ recursive calls for $n=5$?" There are many things wrong in the implied conclusion.

- $O(\_)$ only gives you an upper bound. The true behaviour may be of much smaller growth.
- Asymptotics (like $O(\_)$) only give you something in the limit; that is, you can only expect the bound to hold for large $n$.
- Since you use $O$ as though it were $\Omega$, note that a lower bound on runtime does not necessarily translate into a lower bound on recursive calls, each of which may take $\omega(1)$ time. Similarly, a tight upper runtime bound (which you don't have) may not be a tight upper bound on the number of recursive calls, for the same reasons.

You can analyse exactly how many recursive calls you need. Just solve the recurrence

$\qquad\displaystyle\begin{align*} f(0) &= 1,\\ f(1) &= 1,\\ f(n) &= f(n-1) + f(n-2) + 1, \quad n \geq 2,\end{align*}$

which adds up $1$ for every recursive call your program makes. See here for more detail on how you get to such recurrences. See here how to do that, or use computer algebra. The result is

$\qquad f(n) = 2F_{n+1} - 1$;

insert $n=5$ and you will get $f(n) = 15$. It's not that surprising that the Fibonacci numbers $F_n$ should show up. Inserting their known closed form, we get

$\qquad\displaystyle f(n) = \frac{2}{\sqrt{5}} \cdot \Bigl( \varphi^{n+1} - (1-\varphi)^{n+1} \Bigr) - 1$

with $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.62$ the golden ratio.
This closed form of $f(n)$ implies

$\qquad\displaystyle f(n) \sim \frac{2}{\sqrt{5}} \cdot \varphi^{n+1} \approx 1.45 \cdot 1.62^n$

since $0 < |1-\varphi| < 1 < \varphi$. We see that the $O(2^n)$ bound is rather loose -- it has exponential absolute and relative error. |
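The closed form $f(n) = 2F_{n+1} - 1$ for the call count is easy to sanity-check numerically; a small sketch (Python for convenience, mirroring the counter idea from the Ruby in the question):

```python
def fib_calls(n, counter):
    """Recursive Fibonacci that counts every call, recursive or not."""
    counter[0] += 1
    if n in (0, 1):
        return n
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

def fib(n):
    """Iterative Fibonacci with F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Check the closed form f(n) = 2*F(n+1) - 1 for small n.
for n in range(15):
    counter = [0]
    fib_calls(n, counter)
    assert counter[0] == 2 * fib(n + 1) - 1

counter = [0]
fib_calls(5, counter)
print(counter[0])  # 15, matching the observation in the question
```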
_cogsci.12635 | Can anyone give a list of the books one should read in order to have a fairly general understanding of psychology, with somewhat of an emphasis on cognitive and evolutionary psychology? I realize that the discipline is diverse and that this is a difficult thing to do, but perhaps it could be something like the books a typical psychology major would have to read in undergrad? And I welcome technical works, so please feel free to mention them. What I am interested in is the difference between the attempts of existential philosophy and of psychology, and their respective results for human understanding, and how the two complement and relate to each other. If anyone has any recommendations on this question specifically, other than the works of Maslow, that will really help. Thanks in advance. | A list of books on the big ideas of psychology | cognitive psychology;social psychology;reference request;philosophy of mind | null
_unix.302858 | I'm using Arch Linux and trying to create an AUR package that would download the source of a driver, compile it and install it. What I wonder is: how can I check which packages are required for the compilation to succeed (just so I could declare those dependencies in my PKGBUILD)? Do I really have to analyze the very long configure file, or is there a better way? Edit: or maybe I'm not supposed to add build dependencies to the runtime dependencies in the AUR package? Edit #2: I'm trying to create an AUR package for this: http://www.acs.com.hk/download-driver-unified/6258/ACS-Unified-Driver-Lnx-Mac-113-P.zip | Arch Linux: Finding out build-dependencies so they can be turned into AUR dependencies | arch linux;compiling;aur | Most packages want to be helpful when you install them, so they will have some dependency information in the README or the INSTALL file. Otherwise, yes, checking the configure script is the only real option (note: running an autoconf-generated configure produces a config.log that records what was checked for, and several other configure scripts print a dependency summary at the end to make this easier for packagers).

For the problem at hand the README is most useful; it says:

Linux
- pcsclite 1.8.3 or above
- libusb 1.0.9 or above
- flex
- perl
- pkg-config

Now, let us have a look at that. An AUR package can assume that the base and base-devel groups of packages are installed, which makes things easier to reason about. Enumerating the dependencies we see:

- pkg-config: is in base-devel, we do not need to care about it.
- perl: is part of base, fine.
- flex: base-devel again.
- libusb 1.0.9 or above and pcsclite 1.8.3 or above:

pacman -Ss libusb
core/libusb 1.0.20-1
    Library that provides generic access to USB devices

pacman -Ss pcsclite
community/pcsclite 1.8.16-1
    PC/SC Architecture smartcard middleware library

The beauty of a rolling-release distribution is that you can assume that the user has the latest packages (if he is a sane user he will do pacman -Syu before installing your AUR package).
Therefore we can simply do (inside the PKGBUILD):

depends=(libusb pcsclite)

Extra note: sometimes it is not as easy to find the package a README or INSTALL talks about. In this case google-fu is needed. Yet Arch also has pkgfile, which queries a database of which files are in which packages. For finding dependencies to be added to AUR packages I strongly recommend installing pkgfile, i.e.

pacman -S pkgfile

And then you can query suspicious packages for libraries, e.g.

pkgfile -l <package> | grep lib |
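Putting the pieces together, a PKGBUILD skeleton for this driver might start out as below. This is a sketch: the pkgname, pkgver and the build()/package() bodies are assumptions to be checked against what the zip actually contains; only the depends line reflects the analysis above.

```sh
# PKGBUILD sketch -- names, version and build steps are illustrative guesses
pkgname=acs-unified-driver
pkgver=1.1.3
pkgrel=1
pkgdesc="ACS unified PC/SC smart card driver"
arch=('i686' 'x86_64')
url="http://www.acs.com.hk/"
license=('custom')
# flex, perl and pkg-config come from base/base-devel, which AUR packages
# may assume; only the two real runtime dependencies are declared.
depends=('libusb' 'pcsclite')
source=("http://www.acs.com.hk/download-driver-unified/6258/ACS-Unified-Driver-Lnx-Mac-113-P.zip")
sha256sums=('SKIP')

build() {
    cd "$srcdir"            # adjust if the zip unpacks into a subdirectory
    ./configure --prefix=/usr
    make
}

package() {
    cd "$srcdir"
    make DESTDIR="$pkgdir" install
}
```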
_softwareengineering.139478 | Suppose you have this code in a class:

private DataContext _context;

public Customer[] GetCustomers() {
    GetContext();
    return _context.Customers.ToArray();
}

public Order[] GetOrders() {
    GetContext();
    return _context.Orders.ToArray();
}

// For the sake of this example, a new DataContext is *required*
// for every public method call
private void GetContext() {
    if (_context != null) {
        _context.Dispose();
    }
    _context = new DataContext();
}

This code isn't thread-safe: if two calls to GetOrders/GetCustomers are made at the same time from different threads, they may end up using the same context, or the context could be disposed while being used. Even if this bug didn't exist, however, it still smells like bad code. A much better design would be for GetContext to always return a new instance of DataContext, to get rid of the private field, and to dispose of the instance when done. Changing from an inappropriate private field to a local variable feels like a better solution. I've looked over the code smell lists and can't find one that describes this. In the past I've thought of it as temporal coupling, but the Wikipedia description suggests that's not the term: "Temporal coupling: when two actions are bundled together into one module just because they happen to occur at the same time." This page discusses temporal coupling, but the example is the public API of a class, while my question is about the internal design. Does this smell have a name? Or is it simply buggy code? | What code smell best describes this code? | code quality;code smell;coupling | null
_softwareengineering.108540 | So I'm reading up on the differences between the old and the new APIs, and I can't find out whether the new WinRT API will provide for desktop apps; so far it seems it's only available for writing Metro apps, i.e. the full-screen, 'phone style' apps. Does this mean that I can write a WinRT-based app and it will be invoked from a desktop but display like other Metro apps (e.g. IE), and that I won't be able to write an old-style desktop app that runs in the old-style desktop? What does this mean for server-side apps? I imagine WinRT would be the API of choice for servers, now that Windows Server comes with an optional GUI, so I imagine that if I can write a window-less app using WinRT for the server, I should be able to write the same for the client too? (And if that's the case, surely I could connect a non-Metro UI to it.) | WinRT for 'desktop' apps | windows 8;winrt |

> So I'm reading up about the differences between the old and the new APIs, and I can't find whether the new WinRT API will provide for desktop apps, so far it seems it's only available to write Metro apps - i.e. the full screen, 'phone style' apps.

WinRT is, more or less, a modified library similar to Win32. WinRT is the Metro version of Win32; they took a great deal of complexity away from it and made it easy to work with.

> Does this mean that I can write a WinRT-based app and it will be invoked from a desktop but display like other Metro apps (e.g. IE) and I won't be able to write an old-style desktop app that runs in the old-style desktop?

In theory you would write a desktop application and add WinRT code in order to display a Metro-style UI when the user requested it.

> What does this mean for server side apps? I imagine WinRT would be the API of choice for servers, now that Windows Server comes with an optional GUI, so I imagine that if I can write a windows-less app using WinRT for the server, I should be able to write the same for the client too? (and if that's the case, surely I could connect a non-Metro UI to it)

Don't guess... PowerShell (the name escapes me if this is wrong) is still king in Windows Server.
_unix.292073 | As I have come to understand, SSH always goes through the process of reverse DNS lookups whether you are connecting to [email protected] or a local server [email protected]. I do not know, however, why the SSH command performs a reverse dns lookup for IP addresses regardless. Like, I had to change my DNS server configuration on my Pi to Google's open DNS so that my client machine wouldn't take so long to actually handshake with the server through DNS lookup with my ISPs DNS servers. Even when my Pi is on my local network.tl;dr - Why does SSH do reverse DNS lookup for local IP addresses? | SSH DNS lookup on local IP addresses | ssh;networking;dns;sshd | null |
_softwareengineering.238926 | I'm facing the following situation: There are 2 different teams working on the same project using Scrum, and every now and then, bugs related to user stories developed by team A are being assigned to team B. We're used to fixing bugs made by other people when they belong to the same team, but things get a bit more complicated when different teams are involved. Some of the developers don't like working this way and say that the bugs must be fixed by the ones who made them, and this is causing some conflicts between the teams.While reading some of the similar questions I've found some interesting ones (Is fixing bugs made by other people a good approach? and also Parallel teams and scrum/agile), but they are not about the same situation, or at least I don't see them that way.During our discussions here we got 2 possible approaches: Leave the situation as is, or determine that the bugs must be assigned to the team that developed the feature. Do you guys have any suggestions? | Fixing bugs generated by another team | agile;scrum;team | What are you trying to optimise for?Code quality: This is your chance to figure out how this bug escaped from the lab in the first place. Depending on how much time you want to spend on this, you could look at performing a code review with the whole team, or root cause analysis.Not just looking at the code but everything around your development and test processWhy didn't this get found in testing? You are using test first, right? Do you consider all the quadrants?Why didn't you train your team members in this area? Do you have broken processes that caused this defect or allowed it to be created and not found?Team knowledge / learning: Pair an expert from the team that created the bug (expert in this area, not just anyone from that team) with someone else who does not know that area, their knowledge will spread and you will have more experts in this area. 
Happiness of team: Try to find the solution everyone, or most people, are happy with, discuss the issue in an open and non-judgemental way with the teams and come to a consensus. Speed: Assign the bug to whoever has the bandwidth to work on it right now. This can be a tough one to optimise for, who will get it done the fastest, who can start working on it now, is it better to get started or to wait for someone else. Beware this one, even when optimised efficiently, aside from possibly having a negative effect on all the other parameters mentioned here, it can lead to technical debt piling up even more. Remember: more haste, less speed. |
_unix.124954 | I have successfully installed Office 2007 using Wine under one account on my parents' computer (which now runs Linux Mint 13 LTS XFCE). I installed it under one account and normally, when you do this in Windows, it is installed for all accounts. But because I have installed it under Linux using Wine, this does not apply to this situation. Therefore, my question is: (how) can I make Office 2007 also available for the other user? I guess activating it a second time (activation is needed after install) will not work. Can I install it on another drive besides the Wine C: drive and have it shared this way? Can I also create a shortcut in the Wine 'start' menu under 'programs'?

edit: I have successfully followed the tutorial provided by @slm. Each user now has MS Word available, although it has been installed only once using Wine. I have created a starter (for MS Word) which uses this starting command:

sudo -u windows -H wine "C:\\Program Files\\Microsoft Office\\Office12\\winword.exe"

And I have placed this starter in the 'office' section of the Mint menu by adding the starter in the applications directory. I have edited the starter in my default editor (either gedit or leafpad):

[Desktop Entry]
Version=1.0
Type=Application
Name=Microsoft Word
Comment=
Exec=sudo -u windows -H wine "C:\\\\Program Files\\\\Microsoft Office\\\\Office12\\\\winword.exe"
Icon=/media/Schijf-2/MS-Word-2-icon.png
Path=
Terminal=false
StartupNotify=false
Categories=Office

It is now perfectly listed under 'Kantoor' (which is Dutch for Office). The only thing I have not succeeded in was having all Word documents open with MS Word. Perhaps I will try to do that in the future. Only setting .doc and .docx files to open with MS Word was enough for me at this moment. | office 2007 under wine: available for all user accounts? | linux mint;users;windows;wine | I think what you're really looking for is this Q&A from the Wine HQ forums, titled "[FAQ] [RFC] How can multiple users share an installed Wine application":

=== How can multiple users share an installed Wine application? ===

Wine is a per-user app; every user has their own Wine Registry, with the list of installed apps for that user. So one user installing App A is not going to tell the Wine Registry of any other user that App A is installed. The other user will have to install the application as well. [1] So this scenario is not officially supported. There are some solutions written down on the WWW, for example for Ubuntu [2].

The gist of the Ubuntu Forums method mentioned in the 2nd link above basically makes use of sudo to have the users run Wine as a common user, vs. each having their own installation. This would seem to be the most direct route; you'd need to follow the steps in the link, moving the previously set-up Wine install + Office 2007 to a 3rd user account, windows. Any user wanting to run Wine would then do this:

$ sudo -u windows -H wine notepad

Commands such as the one above could be aliased for your parents' accounts and even added to their Linux Mint menus for easier access. |
_unix.102943 | I'm on a system with MATE 1.6.1. I didn't find the Keyboard Indicator applet in the "Add to Panel" list. How can I add this applet to a panel? | Keyboard layout panel applet in MATE 1.6 | keyboard layout;mate;gnome panel | The applet is supposed to be added automatically to the Notification Area once the user has set up multiple keyboard layouts. If this doesn't happen, it is advised to remove the 2nd keyboard layout and add it back. |
_softwareengineering.253077 | Eventually, I would like to generalize this solution to work with a tuple of any length. I think recursion is required for that, but I haven't been able to do it.

def combineRanges(maxValues) :
    for x in range(0, maxValues[0]) :
        for y in range(0, maxValues[1]) :
            for z in range(0, maxValues[2]) :
                print (str(x) + '-' + str(y) + '-' + str(z));

m = (6,9,20);
combineRanges(m);

http://repl.it/WiZ | Is this looping solution possible with recursion? | recursion;python 3.x | There is no need for recursion here; you can use itertools.product, map and argument unpacking:

from itertools import product

def combine_ranges(max_values):
    for t in product(*map(range, max_values)):
        print("-".join(map(str, t)))

A shorter example:

>>> combine_ranges((2, 2, 2, 2))
0-0-0-0
0-0-0-1
0-0-1-0
0-0-1-1
0-1-0-0
0-1-0-1
0-1-1-0
0-1-1-1
1-0-0-0
1-0-0-1
1-0-1-0
1-0-1-1
1-1-0-0
1-1-0-1
1-1-1-0
1-1-1-1

Note the use of PEP-8-compliant names. |
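Since the question explicitly asks whether a recursive generalization is possible: yes. One way to sketch it (generator-based; the function name is mine, not from the post) peels one range off max_values per call:

```python
def combine_ranges_rec(max_values, prefix=()):
    """Recursive equivalent of the nested loops: one range per call level."""
    if not max_values:
        # No ranges left: the prefix is one complete combination.
        yield "-".join(map(str, prefix))
        return
    head, *rest = max_values
    for v in range(head):
        yield from combine_ranges_rec(rest, prefix + (v,))

for line in combine_ranges_rec((2, 2)):
    print(line)  # 0-0, 0-1, 1-0, 1-1
```

This produces the same strings as the itertools.product version, in the same order, but works for a tuple of any length by construction.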
_unix.149607 | I've been wrestling with bash variable substitution for a while now and can't figure this out... I have a variable with a command template:

CMD_TMPL='sudo -u ${USER_NAME} ${USER_HOME}/script.sh'

The variables USER_NAME and USER_HOME are figured out later in the script, not yet known at the time CMD_TMPL is defined. Therefore the command is in single quotes and they are not yet substituted. Then the script figures out USER_NAME=test and USER_HOME=/home/test, and I want to do something that will lead to ${CMD} containing:

sudo -u test /home/test/script.sh

Further down in the script I will use that command in a pipe like:

${CMD} | output-processing.sh

How do I achieve the expansion from variable names in ${CMD_TMPL} to variable values in ${CMD}? I tried all sorts of echo's and eval's but can't figure it out. Thanks! | Replacing shell variable names in another variable | bash;variable substitution | There's a special type of variable that is used to store code: that's functions.

cmd_tmpl() { sudo -u "$USER_NAME" "$USER_HOME/script.sh" "$@"; }
cmd() { cmd_tmpl other args "$@"; }
cmd | output_processing...

The other approach is to consider the variables as command lines (shell code) and interpret that code in the end using eval:

cmd_tmpl='sudo -u "$USER_NAME" "$USER_HOME/script.sh"'
cmd=$cmd_tmpl' other args as shell code'
eval "$cmd" | output_processing...

Using ${CMD} | output_processing is wrong. $CMD is a scalar variable, and you're using the split+glob operator to obtain the list of arguments to a simple command; you're not evaluating the command line stored in it. So that only works in a constrained set of conditions. You can store arguments to a simple command in an array:

cmd=(sudo -u "$USER_NAME" "$USER_HOME/script.sh" ...)
"${cmd[@]}" | post_processing

but that won't help with the deferred expansion of your variables. You could defer the expansion though with things like:

cmd_tmpl='sudo -u "$USER_NAME" "$USER_HOME/script.sh"'
...
eval "cmd=($cmd_tmpl)" # now expanded
cmd+=(other args)
"${cmd[@]}" | post-processing

(note that it's good practice to reserve all-uppercase variable names for variables exported to the environment). |
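A runnable illustration of the deferred expansion, with printf standing in for sudo so the sketch can be tried anywhere (the user name and path are placeholders):

```sh
# The values become known only later in the script:
USER_NAME=test
USER_HOME=/home/test

# 1) function: the variables are only read when the function is called
cmd() { printf '%s\n' "would run: -u $USER_NAME $USER_HOME/script.sh"; }
cmd

# 2) eval on a single-quoted template: expansion happens at eval time
cmd_tmpl='printf "%s\n" "would eval: -u $USER_NAME $USER_HOME/script.sh"'
eval "$cmd_tmpl"
```

Both lines print the fully expanded command text, even though neither the function body nor the template expanded anything when it was defined.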
_softwareengineering.53963 | I spend a lot of time working in JavaScript of late, and I have not found a way that seems to work well for testing JavaScript. This hasn't been a problem for me in the past, since most of the websites I worked on had very little JavaScript in them. I now have a new website that makes extensive use of jQuery, and I would like to build unit tests for most of the system. My problems are these:

- Most of the functions make changes to the DOM in some way.
- Most of the functions request data from the web server as well, and require a session on the service to get results back.

I would like to run the tests from either a command line or a test-runner harness, rather than in a browser. Any help or articles I should be reading would be helpful. | How do you unit test your javascript | javascript;unit testing | Maybe QUnit could help you. It is the official test suite of the jQuery project. As described in the documentation, tests can also be run outside of a browser (with Rhino for instance). |
_unix.205175 | I successfully used:

grep -wFf inputqueries.txt searchedfile.txt > results.txt

to search searchedfile.txt for each query in inputqueries.txt.

inputqueries.txt looks like:

213.183.56.186
216.176.100.240
216.215.112.149
217.23.49.178
222.29.197.23
223.235.201.3
223.253.150.120
202.112.166.5

searchedfile.txt looks like:

168.68.129.127 184.73.191.34
199.133.78.171 202.112.166.5
64.180.139.190 199.141.121.11
199.133.186.162 128.118.250.5
54.145.167.92 168.68.129.73
199.154.229.66 23.75.15.164
162.79.16.103 199.134.135.69

and results.txt was, correctly:

199.133.78.171 202.112.166.5

Unfortunately, that is where my success stopped. When I put it to work in the real world, it didn't work. Every time it returned zero results. I used the same inputqueries.txt as well as one with a query list of words (as opposed to IPs). Further, it's important to note that I do not have write privileges to the actual log file directories and most of the logs are zipped as .gz. Additionally, I'm trying to search multiple similar files at the same time (zcat http, zcat conn.*, etc.)

zcat filestosearch.* | grep -wFf /home/username/inputqueries.txt > /home/username/results.txt

didn't work (nor did it work if I took off -wF and left it just grep -f), and

zgrep -wFf /home/username/inputqueries.txt filestosearch.* > /home/username/results.txt

didn't work either. The logs I'm searching in real life vary, but the http one looks like this (they are all bro logs):

1432343999.435553 CuCcn04H20cc2ZHyEh 202.170.48.4 50501 197.138.26.55 80 4 GET ndb.nal.usda.gov /ndb/search/autosuggest?manu=&fgcd=&term=Coconut+milk http://ndb.nal.usda.gov/ndb/foods?fgcd=&manu=&lfacet=&count=&max=35&sort=&qlookup=Oil%2C+palm&offset=&format=Abridged&new=&measureby= Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.65 Safari/537.36 0 12994 200 OK - - - (empty) - - - - - FGGh0g4a24L8Q6CZUb text/plain
1432343999.382108 CKPWGW2cubkRjFpTKf 197.166.19.125 63803 54.191.210.216 80 1 GET client.ql2.com /cc/diff/http.www.ars.usda.gov/_22Fpandp_22Flocations_22FcityPeopleList.cfm_23Fmodecode_23D60-64-05-10/20150409123538diff.html - WebTrends/3.0 (WinNT) 0 0 302 Moved Temporarily - - - (empty) - - - - - - -
1432343999.595036 Cz4XJl3uaq2Fxc0M9a 63.248.145.199 63004 197.155.76.112 80 1 GET start2farm.gov /sites/all/themes/contrib/twitter_bootstrap/images/arrow-green.png http://start2farm.gov/ Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko 0 1498 200 OK - - - (empty) - - - - - Fo69Ao3w36RxKcoH9f image/png
1432343999.732470 CTPQZyQ7tX7BUjU5j 197.123.240.10 56863 216.58.217.132 80 36 GET toolbarqueries.google.com /tbr?client=navclient-auto&ch=63738508926&features=Rank&q=info:/url?q=http://www.ncbi.nlm.nih.gov/books/NBK8125/&sa=U&ei=FjjmVJriAceagwSM1oOIDg&ved=0CBsQFjAB&usg=AFQjCNHgMKW6EIWKxclKB9o-o21bQu7IOw - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30618) 0 5928 403 Forbidden - - - (empty) - - - - - F2UdRnxrFUEHJFdW4 text/html
| grep to compare files not working | grep | The problem seems to be that the file(s) were edited on Windows at some point, which added some extra \r characters to the ends, which aren't normally visible. If you have the dos2unix command you can use that to convert the file. If you don't have that and there isn't any important whitespace at the ends of lines, you can do it with GNU sed as follows:

sed -i -e 's/\s*$//' inputqueries.txt

to modify the file in place (the -i flag) and replace any amount of whitespace at the end of each line with nothing, effectively deleting it. -i is not part of POSIX though, so if you need a portable solution you can use the rest of the sed command and redirect to a temporary file. When you're sure that file is right, rename it to the file you actually want. |
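The failure mode described in this answer is easy to reproduce; a small sketch with made-up file names (using the POSIX [[:space:]] class instead of GNU sed's \s):

```shell
# A queries file saved with Windows (\r\n) line endings, and a clean data file.
printf '202.112.166.5\r\n' > queries.txt
printf '199.133.78.171 202.112.166.5\n' > data.txt

# The invisible trailing \r is part of the pattern, so nothing matches.
grep -wFf queries.txt data.txt || echo 'no match with CRLF queries'

# Strip trailing whitespace (including \r), then the match works.
sed 's/[[:space:]]*$//' queries.txt > queries.clean.txt
grep -wFf queries.clean.txt data.txt   # prints the matching line
```

The same check works on the zcat pipeline: clean the pattern file once and reuse it.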
_webmaster.99166 | What methods can I use to prove I own a particular website? | How to prove that a particular website is mine? | ownership | Write it in an HTML comment, or add a meta author tag in the head of the page:

<meta name="author" content="Your name"> |
_reverseengineering.5989 | The dialog box is a password challenge, and I'd like to catch/trace/observe the code (hopefully the password-checking code) that gets executed right after clicking the OK button of the dialog box.I can't seem to find a way to do this in IDA. What's the correct sequence of actions in IDA to accomplish this? | How to find a subroutine (or next instruction) called after returning from a (Windows) dialog box? | ida;windows | null |
_codereview.59979 | I want to check if there is a page variable (in the URL) and, if there is, whether it's correct - it cannot be number 1 because this is the default page, and it should be a valid int from 2 to the max number of pages (not octal and not hexadecimal).

<?php
$_GET['page'] = '01'; // this line is only for testing
$selPage = 1;
$totalPages = 10;
if (isset($_GET['page'])) {
    $selPage = filter_var($_GET['page'], FILTER_VALIDATE_INT, ['options' => ['min_range' => 2, 'max_range' => $totalPages]]);
    if ($selPage === false) {
        exit('redirection'); // this line for test only - in fact make 301 redirection here to correct url
    }
}
echo $selPage;

Could it be improved in any way, or is it the best and shortest it can be? | Pagination filtering | php;php5;validation;url;pagination | null |
_softwareengineering.51163 | My team just tried to contact some guys from an old open source project hosted on code.google.com. We told them that we'd like to join their project and commit to it — at least to some branch of it — but no one responded to us. We tried everyone, owners and committers; no one was in any way active, and no one replied.But we have some code to commit and we really would love to continue work on that project. So we need to create a new project. We came up with a name for it which is close to but not a duplicate of the name of the project we want to inherit from. How should we do our first commit, and what should the commit message be? Should we just copy their code to our repository with a comment like we inherited this code, we found it here under such and such a license ... now we're upgrading it to this more/less strict license ...? Or should we just use their code as our first commit, with updates saying we inherited from ... we made such and such changes ...? | What is a correct/polite way to inherit from an abandoned open-source project for a new open-source project? | project management;open source | null |
_unix.140779 | Sometimes it is necessary to emulate and verify the above variables in small examples which can then be copied immediately into some script, etc. I tried to solve it by using a simple example in the following ways:

(find $1) /tmp
sh -c '(find $1) /tmp'
sh -c find $1 /tmp
echo '(find $1) /tmp' | sh

and with other combinations. I also experimented by adding the shebang interpreter directive #!/bin/sh -x, but did not get the desired result. Can I do this simply? | Is it possible to use shell parameter variables ($1, ..., $@) directly in CLI? | shell;parameter | The first argument after sh -c inline-script goes to $0 (which is also used for error messages), and the rest go in $1, $2...

$ sh -c 'blah; echo $0; echo $1' my-inline-script arg
my-inline-script: blah: command not found
my-inline-script
arg

So you want:

sh -c 'find $1' sh /tmp

(in the olden days, you could find sh implementations where the first arg went into $1 instead, so you would do:

sh -c 'find $1' /tmp /tmp

Or:

sh -c 'shift $2; find "$@"' sh 3 2 /tmp1 /tmp2

to account for both behaviours, but those shells are gone now that POSIX is prevalent and publicly available).

If you want to set $1, $2 in a local scope within the current shell, that's where you'd use functions. In Bourne-like shells:

my_func() {
  find $1
}
my_func /tmp

Some shells support anonymous functions. That's the case of zsh:

(){find $1} /tmp

Or es:

@{find $1} /tmp

To change the current positional parameters permanently, the syntax is shell dependent. dchirikov has already covered the Bourne-like shells (Bourne, Korn, bash, zsh, POSIX, ash, yash...). The syntax is:

set arg1 arg2 ... argn

However, you need:

set --

to empty that list (or shift $#), and

set -- -foo

to set $1 to something starting with - or +, so it's a good habit to always use set -- especially when using arbitrary data such as set -- "$@" other-arg to add arguments to the end of the positional parameter list.

In shells of the csh family (csh, tcsh), you assign to the argv array:

set argv=(arg1 arg2)

In shells of the rc family (rc, es, akanga), to the * array:

*=(arg1 arg2)

Though you can also assign elements individually:

2=arg2

In fish, the positional parameters are in the argv array only (no $1, $@ there):

set argv arg1 arg2

In zsh, for compatibility with csh, you can also assign to the argv array:

argv=(arg1 arg2)
argv[4]=arg4

And you can also do:

5=arg5

That means you can also do things like:

argv+=(another-arg)

to add an argument to the end, and:

argv[-1]=()
argv[2]=()

to remove an argument from the end or the middle, which you can't easily do with other shells. |
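The $0-swallows-the-first-operand behaviour described above is quick to verify in any POSIX sh:

```shell
# The word right after the inline script lands in $0; the rest become $1, $2, ...
sh -c 'echo "0=$0 1=$1 2=$2"' myname first second   # prints: 0=myname 1=first 2=second

# And inside a script, `set --` rewrites the positional parameters:
set -- alpha beta
echo "$1-$2"        # prints: alpha-beta
set --              # empties the list
echo "count=$#"     # prints: count=0
```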
_softwareengineering.330662 | I want to count how many times an image is loaded on a webpage. We are looking at maybe 100 times per second.One option is to store every impression as a row in a database, but this would get massive quite quickly. Another option is to have a single variable and just increment it each time.So if I went with a single variable and incremented it each time, is it even possible to lock a database variable? | How to protect quickly-incrementing variables in Django? | python;django;synchronization | null |
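Neither storing every impression nor locking a variable is needed if the increment happens inside the database: in Django that is an F() expression (Model.objects.filter(...).update(count=F('count') + 1)), which compiles to a single UPDATE. A minimal sketch of the same idea with plain sqlite3 (table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE impressions (image_id TEXT PRIMARY KEY, hits INTEGER NOT NULL)")
conn.execute("INSERT INTO impressions VALUES ('hero.png', 0)")

def record_impression(conn, image_id):
    # One UPDATE statement: the read-modify-write happens inside the database,
    # so concurrent requests cannot lose increments and no app-level lock is needed.
    conn.execute("UPDATE impressions SET hits = hits + 1 WHERE image_id = ?", (image_id,))
    conn.commit()

for _ in range(100):
    record_impression(conn, "hero.png")

count = conn.execute("SELECT hits FROM impressions WHERE image_id = 'hero.png'").fetchone()[0]
print(count)  # 100
```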
_codereview.99197 | This code embeds a hyperlink into a cell of your choice of an Excel file. /** * @param pathName the path for the Excel file which you will update * @param string the string you want to put in the cell, and if you put an empty string, it will default to * Empty String * @param hyperlink the hyperlink which will be embedded into the cell * @param row the row number of the cell * @param column the column number of the cell * @param firstTime this is used in case you want the file to be wiped and start from the beginning * @throws RuntimeException if the path can't be found or if an {@link IOException} is thrown for whatever reason */public static void writeToCell(String pathName, String string, String hyperlink, int row, int column, boolean firstTime) { if (string.equals()) string = Empty String; if (!firstTime) { try { fileInputStream = new FileInputStream(new File(pathName)); } catch (FileNotFoundException e) { throw new RuntimeException(File not found!, e); } try { workbook = new XSSFWorkbook(fileInputStream); sheet = workbook.getSheetAt(0); } catch (IOException e) { throw new RuntimeException(IOException thrown!, e); } } else { workbook = new XSSFWorkbook(); sheet = workbook.createSheet(); } CreationHelper createHelper = workbook.getCreationHelper(); XSSFCell cell; try { cell = sheet.getRow(row).getCell(column, XSSFRow.CREATE_NULL_AS_BLANK); } catch (NullPointerException e) { try { cell = sheet.getRow(row).createCell(column); } catch (NullPointerException npe) { cell = sheet.createRow(row).createCell(column); } } XSSFHyperlink url = (XSSFHyperlink) createHelper.createHyperlink(XSSFHyperlink.LINK_URL); XSSFCellStyle hyperlinkStyle = workbook.createCellStyle(); XSSFFont hlink_font = workbook.createFont(); hlink_font.setColor(IndexedColors.BLUE.getIndex()); hyperlinkStyle.setFont(hlink_font); url.setAddress(hyperlink); try { cell.setHyperlink(url); } catch (NullPointerException e) { cell.setCellType(XSSFCell.CELL_TYPE_STRING); 
cell.setHyperlink(url); } cell.setCellValue(string); cell.setCellStyle(hyperlinkStyle); try { if (fileInputStream != null) fileInputStream.close(); } catch (IOException e) { throw new RuntimeException(IOException thrown!, e); } FileOutputStream fileOutputStream; try { fileOutputStream = new FileOutputStream(new File(pathName)); } catch (FileNotFoundException e) { throw new RuntimeException(File not found!, e); } try { workbook.write(fileOutputStream); fileOutputStream.close(); } catch (IOException e) { throw new RuntimeException(IOException thrown!, e); }}public static HyperlinkWriter newInstance(String pathName) { HyperlinkWriter writer = new HyperlinkWriter(); try { fileInputStream = new FileInputStream(new File(pathName)); } catch (FileNotFoundException e) { throw new RuntimeException(File not found!, e); } try { workbook = new XSSFWorkbook(fileInputStream); sheet = workbook.getSheetAt(0); } catch (IOException e) { throw new RuntimeException(IOException thrown!, e); } return writer;}I was wondering if there was a way to optimize this method in a way because I use this with a different class and I call upon this method multiple times but since I'm creating a FileInputStream and an XSSFWorkbook almost each time, I was wondering if there was a way to get around that and have the FileInputStream just add the bytes that it's missing, instead of reading the file from top to bottom almost each time the method is called. I should also mention that fileInputStream is static, so it won't be released from memory when the method ends (I think...) | Writing a hyperlink to a cell with Apache POI | java;excel | Some thoughts:Use methods! Break out complexities where you can to make it easier on readers of your code.Avoid statics. There's no reason for it. Make a reusable class instead.It also helps to rearrange the member variables so they're declared right before they're used.You really shouldn't have your three static instance variables as static instance variables. 
To my knowledge, no, POI does not support anything other than reading in the whole file, making your change, and then writing the whole file.If you're going to make lots of these calls, create an object that holds the four relevant pieces of information (label, link, row, column), then create them and stick them in a queue. When you've got all of them, open the workbook once and add them all. Might not be worth the complexity - are you sure you have a real performance bottleneck, or are you guessing?The exception handling needs work, and it only probably compiles, but this might get you pointed in a good direction. Note that it is not thread-safe! An interesting addition might be to make it implement AutoCloseable so that clients could use it in a try-with-resources block, and have it write automatically when they were done with it. Just add a close() method that calls flush, and a check in all public methods to make sure close() hasn't been called yet.public final class ExcelFileEditor { private final File file; private final List<HyperlinkAddition> hyperlinkAdditions = new LinkedList<>(); public ExcelFileEditor(final File file) { this.file = file; } public void setHyperlink( final String label, final String hyperlink, final int row, final int column) throws IOException { this.hyperlinkAdditions.add(new HyperlinkAddition(label, hyperlink, row, column)); } public void flush() throws IOException { final XSSFWorkbook workbook = this.loadWorkbook(); for (final HyperlinkAddition hyperlinkAddition : this.hyperlinkAdditions) { this.setHyperlink( workbook, hyperlinkAddition.label, hyperlinkAddition.hyperlink, hyperlinkAddition.rowIndex, hyperlinkAddition.columnIndex); } this.hyperlinkAdditions.clear(); this.writeWorkbook(workbook); } private void setHyperlink( final XSSFWorkbook workbook, final String label, final String hyperlink, final int rowIndex, final int columnIndex) { final CreationHelper createHelper = workbook.getCreationHelper(); final XSSFHyperlink url = 
createHelper.createHyperlink(XSSFHyperlink.LINK_URL); url.setAddress(hyperlink); final XSSFFont hyperlinkFont = workbook.createFont(); hyperlinkFont.setColor(IndexedColors.BLUE.getIndex()); final XSSFCellStyle hyperlinkStyle = workbook.createCellStyle(); hyperlinkStyle.setFont(hyperlinkFont); final XSSFSheet sheet = this.getSheet(workbook); final XSSFCell cell = this.getCell(sheet, rowIndex, columnIndex); cell.setCellType(XSSFCell.CELL_TYPE_STRING); cell.setHyperlink(url); cell.setCellValue(this.buildCellLabel(label)); cell.setCellStyle(hyperlinkStyle); } private XSSFWorkbook loadWorkbook() throws IOException { if (!this.file.exists()) { return new XSSFWorkbook(); } try (FileInputStream fis = new FileInputStream(this.file)) { return new XSSFWorkbook(fis); } } private XSSFSheet getSheet(final XSSFWorkbook workbook) { final XSSFSheet sheet = workbook.getSheetAt(0); if (sheet == null) { return workbook.createSheet(); } return sheet; } private XSSFCell getCell(final XSSFSheet sheet, final int rowIndex, final int columnIndex) { final XSSFRow row = this.getRow(sheet, rowIndex); final XSSFCell cell = row.getCell(columnIndex, XSSFRow.CREATE_NULL_AS_BLANK); if (cell == null) { return row.createCell(columnIndex); } return cell; } private XSSFRow getRow(final XSSFSheet sheet, final int rowIndex) { final XSSFRow row = sheet.getRow(rowIndex); if (row == null) { return sheet.createRow(rowIndex); } return row; } private String buildCellLabel(final String label) { if (label.isEmpty()) { return Empty String; } return label; } private void writeWorkbook(final XSSFWorkbook workbook) throws IOException { final FileOutputStream fileOutputStream = null; try (FileOutputStream fos = new FileOutputStream(this.file)) { workbook.write(fos); } } private static final class HyperlinkAddition { public final String label; public final String hyperlink; public final int rowIndex; public final int columnIndex; public HyperlinkAddition( final String label, final String hyperlink, final int rowIndex, 
final int columnIndex) { this.label = label; this.hyperlink = hyperlink; this.rowIndex = rowIndex; this.columnIndex = columnIndex; } }} |
_webapps.98835 | Totally new to formulas that aren't a simple Count or Sum. I have a sheet containing a list of names and the number of goals they have scored this season. I would like a cell that has the leading goalscorer's name in it.

Name      Match 1  Match 2  Match 3
Player 1  0        1        0
Player 2  1        1        3
Player 3  0        0        0
Player 4  0        1        0

Leading goalscorer = Player 2

Can anybody please point me in the right direction? | Count up the sum of a row, compare with others and return a particular cell value | google spreadsheets | null |
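In Sheets this is usually done with a helper column of per-row sums (=SUM(B2:D2)) plus an INDEX/MATCH on the maximum; the underlying computation is just "sum each row, take the name with the largest total", sketched here in Python:

```python
# Rows from the question: (name, goals per match)
rows = [
    ("Player 1", [0, 1, 0]),
    ("Player 2", [1, 1, 3]),
    ("Player 3", [0, 0, 0]),
    ("Player 4", [0, 1, 0]),
]

# Sum each row, then pick the name whose total is largest.
totals = {name: sum(goals) for name, goals in rows}
leader = max(totals, key=totals.get)
print(leader, totals[leader])  # Player 2 5
```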
_unix.367795 | I am trying to set up a couple of Linux workstations (RedHat 7), and I am trying to figure out how to set up authentication against an LDAP server with some unusual requirements. I basically know how to set up LDAP authentication using sssd, but I don't know how to restrict authentication to only certain users to meet my requirement. To enable LDAP configuration, I would use this command line:

authconfig --enableldap --enableldapauth --ldapserver=<redacted> --ldapbasedn=<redacted> --update --enablemkhomedir

This will allow all LDAP users to log on, and as far as I know works just fine. However, my requirement is that only some users from LDAP can log in, and the list of users will be supplied in a separate text file (by user login name). More information: We have an LDAP server (Active Directory, actually) with a couple thousand users. Only about 20 of them, who have a need to work on these workstations, should be allowed to log on to these workstations. Unfortunately, LDAP does not include any information related to this, and I do not have control of the LDAP server. Instead, every couple of weeks, I get a text file with a list of the user names who should be allowed to log on. How can I set up authentication to use LDAP for user name/password/user ID etc. while also restricting it to only users on this list? | Using openldap authentication only for some users | pam;openldap;sssd | null |
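One common way to meet the record's requirement (an assumption on my part, not from the post) is sssd's simple access provider: authentication still goes to LDAP/AD, but logins are filtered against an explicit allow-list. A sketch of the relevant /etc/sssd/sssd.conf fragment, with placeholder domain and user names:

```ini
[domain/example.com]
access_provider = simple
simple_allow_users = alice, bob, carol
```

Since the allow-list arrives as a text file every few weeks, a small script could rewrite the simple_allow_users line from that file and then restart sssd.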
_reverseengineering.12397 | Stack layout is well documented in many ways, especially for x86 systems, as there were numerous tutorials on how to exploit stack overflows on old 32-bit systems many years ago. So far we know that on a 32-bit system, the user stack starts from the 0xc0000000 address (which is the limit between usermode and kernelmode). This address is not the same if we take an Elf32 running on a x86-64 linux system. I cannot find this address, but I can figure out it is 0xffffe000 thanks to gdb:

(gdb) x $esp
0xffffd4bc: 0xf7e16a83
(gdb) x/w 0xffffdffc
0xffffdffc: 0x00000000
(gdb) x/w 0xffffe000
0xffffe000: Cannot access memory at address 0xffffe000

We can actually see that the 0xffffe000 address points to an invalid location (or at least the process doesn't have proper permission to access this memory page). Yet I cannot especially find a relevant source that tells us that the gnu stack of a x86 program on a x64 linux starts from 0xffffe000. Am I doing things wrong? I can find sources telling us about linux-gate.so.1 but I do not think this is the point here. Any ideas, reversers? | 32-bit binary stack layout on a x64 Linux OS | linux;x64 | The valid stack access range of a (32 or 64 bit) process can be viewed by looking into its memory map. We can observe the memory map of a process with id pid by cat /proc/pid/maps, an example of a 32-bit process on my 64-bit box:

...
09b58000-09b79000 rw-p 00000000 00:00 0 [heap]
...
f7785000-f7787000 rw-p 001c7000 00:23 592106 /usr/lib/libc-2.20.so
...
f77c2000-f77c4000 r--p 00000000 00:00 0 [vvar]
f77c4000-f77c5000 r-xp 00000000 00:00 0 [vdso]
...
f77e8000-f77e9000 rw-p 00022000 00:23 592099 /usr/lib/ld-2.20.so
fff9b000-fffbc000 rw-p 00000000 00:00 0 [stack]

Then the valid range for the stack is [0xfff9b000, 0xfffbc000); memory access to an address higher than or equal to 0xfffbc000 will trigger a memory access violation exception. 
According to this article, this range is calculated when the kernel maps the binary into memory, by several functions; the interesting input for them is STACK_TOP, which is defined by TASK_SIZE for 32-bit processes. |
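The same check the answer performs with cat /proc/pid/maps can be done from inside a process; a minimal sketch (Linux-only):

```python
# Linux-only: inspect this process's own memory map, as the answer does with cat.
with open("/proc/self/maps") as maps:
    stack_line = next(line for line in maps if "[stack]" in line)

start, end = (int(addr, 16) for addr in stack_line.split()[0].split("-"))
print(f"stack: [{hex(start)}, {hex(end)})")  # accesses at or past `end` would fault
```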
_unix.136374 | For automated testing of some applications, I would like to emulate the users behaviour. I know I can user the xdotool type --delay option, but I would like a more natural delay. The easiest thing would be to add a random delay, but more advanced methods are also appreciated.Is there such a option / utility? | xdotool random/more natural delay? | scripting;x11;testing;random | null |
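As far as I know xdotool has no built-in jitter, so one workaround (an assumption, not a documented feature) is to drive it key-by-key from a script, sampling each delay from a clamped normal distribution:

```python
import random

def keystroke_delays(n, mean_ms=120, sd_ms=40, floor_ms=30):
    # One delay per keystroke, clamped so none are implausibly fast.
    return [max(floor_ms, random.gauss(mean_ms, sd_ms)) for _ in range(n)]

delays = keystroke_delays(20)
# Each value would feed one `xdotool type --delay <d>` call for a single
# character, or a sleep between `xdotool key` invocations.
print(round(min(delays)), round(max(delays)))
```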
_unix.335255 | I'd like to find all directories containing .txt files. But I need to print the output one level up. For example, I'm using find to search recursively for .txt files in the current directory. Then only the directories containing such files are shown.

find . -name *.txt -printf '%h\n'

outputs (for example)

./out/data
./out/data
./output/data

So, I'd like to improve the use of find to:

not show repeated dirs (I think this happens because there are 2 .txt files therein)
print only the first level, that is, only ./out/ and ./output/ | Change depth for find when searching directories | find | There might be a more find-centric way, but you could do it with a couple of other tools helping:

find . -name '*.txt' -printf '%h\n' | cut -f1,2 -d/ | sort -u

Or I guess we could save a process and use awk like

find . -name '*.txt' -printf '%h\n' | awk -F/ '{matched[$1"/"$2]=1} END {for(dir in matched) {print dir}}' |
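The accepted pipeline can be sanity-checked on a throwaway tree mirroring the question (GNU find assumed for -printf):

```shell
# Recreate the question's layout in a scratch directory.
mkdir -p demo/out/data demo/output/data
touch demo/out/data/a.txt demo/out/data/b.txt demo/output/data/c.txt

(cd demo && find . -name '*.txt' -printf '%h\n' | cut -f1,2 -d/ | sort -u)
# prints:
# ./out
# ./output
```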
_unix.55 | What does the size of a directory mean in the output of the ls -l command? | What does size of a directory mean in output of 'ls -l' command? | directory;ls;size | This is the size of the space on the disk that is used to store the meta information for the directory (i.e. the table of files that belong to the directory). If it is e.g. 1024, this means that 1024 bytes on the disk are used (it always allocates full blocks) for this purpose. |
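You can watch that table on disk; a sketch (the exact numbers are filesystem-dependent, 4096 bytes being typical for a fresh ext4 directory):

```shell
mkdir size_demo
stat -c '%s' size_demo    # one allocated block, e.g. 4096 on ext4

# Adding many entries eventually forces the directory's entry table to grow.
for i in $(seq 1 500); do touch "size_demo/a_rather_long_file_name_$i"; done
stat -c '%s' size_demo
```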
_unix.137742 | When I try to upgrade packages using aptitude -t wheezy-backports, they often require newer libc than is installed. Proposed solutions remove hundreds of packages. Is there a way to request only the versions which wouldn't require upgrading libc (preferably from the command line)? | Finding the latest version of package which doesn't require upgrading libc (on Debian) | apt;aptitude | null |
_unix.55800 | I have Ubuntu 12.04 with BIND9, working just as a caching server (forwarding to 8.8.8.8). When I use, for example, dig +norecurse @l.root-servers.net www.uniroma1.it, I obtain the following output:

; <<>> DiG 9.8.1-P1 <<>> +norecurse @l.root-servers.net www.uniroma1.it
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

Using Wireshark I discovered that the outgoing queries are correct, but there aren't any incoming answers. Why?

P.S. Using simply dig www.uniroma1.it I obtain the correct answers. | dig @nameserver doesn't work | dns;bind | Your command works fine here. My guess is that a firewall, either at your location or at your ISP, is blocking the DNS requests or responses. The normal dig www.uniroma1.it likely works because said firewall is allowing requests to certain servers, like the ones provided by your ISP and maybe 8.8.8.8. |
_unix.49115 | Possible Duplicate: Batch renaming files

I have some files that I wish to rename in a single command. The files are named thus. I want the E S Posthumus bit removed from the names and also the 01, 02 ... etc at the start of each file. How do I remove that? | batch rename a few files | shell script;rename | null |
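The question's actual listing is elided, so assume hypothetical names like 01 E S Posthumus - Nara.mp3; a sketch that strips the fixed prefix with sed:

```shell
# Hypothetical files matching the question's description.
mkdir -p rename_demo
touch 'rename_demo/01 E S Posthumus - Nara.mp3' 'rename_demo/02 E S Posthumus - Oraanu.mp3'

for f in rename_demo/*.mp3; do
    base=${f##*/}                      # drop the directory part
    new=$(printf '%s\n' "$base" | sed 's/^[0-9][0-9] E S Posthumus - //')
    mv "$f" "rename_demo/$new"
done
ls rename_demo                         # Nara.mp3  Oraanu.mp3
```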
_softwareengineering.238476 | What is the best way to track any change made to a SQL Server database before moving to production, in the case of a large team? We have used Excel sheets, but quite often things get missed, which only come out with SQL Compare while moving changes to production. | What is the best way to track database changes across a large team | programming practices;project management;agile;sql server | SQL Server Data Tools provides a Visual Studio project template which allows database development. This way, you can have the source of your database within the same solution that contains the code for your application. You can track all the changes made to the database through source control, given that all changes are made through the project. It also has a database compare tool, in which you can see the structural differences between the database and what's in your project. The tools also make it easy to deploy the changes made in the source code to the database. |
_cstheory.38734 | I have a stochastic dynamic programming problem; the deterministic version can be solved in polynomial time. I'd like to show this stochastic problem is NP-hard and even approximation-hard: e.g., with a maximization objective, there is a constant $\alpha_0$ such that, for any $\alpha > \alpha_0$, if there is an $\alpha$-approximation algorithm for this stochastic problem (i.e., one that recovers at least an $\alpha$ fraction of the optimal expected objective), then P = NP. I was wondering if there is some related literature that deals with similar problems for me to refer to? Most NP-hard and approximation-hard problems I've learned from textbooks are static problems. I'm still thinking of how to reduce a static problem to a stochastic dynamic problem. | Stochastic dynamic programming problem that is Approximation-hard | cc.complexity theory;co.combinatorics;approximation algorithms;approximation hardness | null |
_unix.20400 | How could I change the order of fields from a given command's output? As an example, take the output of ls -l. By default the file name is displayed as the last field of the output. What could I pipe that output to in order to make the file name the first field shown? My initial thought was to use the cut command, but regardless of the order in which fields are passed to the -f option, they're always output in the original order. Note: I'm only using ls -l as an easy example, but I'd like to find a general-purpose idiom that could be used for any command that outputs data in columns. If it could also be applied to delimited data, that would be great too. | Change the order of displayed fields of arbitrary command output | text processing | For a fixed number of fields you can use awk, for example to exchange the second and fourth fields:

command | awk '{ t = $2; $2 = $4; $4 = t; print }' | column -t

But unfortunately ls -l does not have a fixed number of fields (separated by spaces), because filenames can also contain spaces: this is the reason why parsing ls -l output is discouraged. |
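The swap idiom from the answer works on any whitespace-separated input, not just ls -l:

```shell
printf 'a b c d\ne f g h\n' | awk '{ t = $2; $2 = $4; $4 = t; print }'
# prints:
# a d c b
# e h g f
```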
_unix.101980 | I have three possibilities to installing and running Ubuntu as my main operating system.1) Install onto an external SSD: Obviously, this would mean installing onto the External SSD and connecting through USB. I would have my main drive at start up select this driver, if not, it will select the Mac boot up (therefore, I don't have to completely remove OSX). I will also make nightly back-ups of my SSD to another external HD plugged into my main computer. 2) I completely remove OSX and install Ubuntu onto my main machine, use the external SSD to store all the files and libraries needed to carry out my programming duties. I prefer option one since I can take the external HD to work with me and just boot straight into there.I'm just asking for your opinions guys. P.S. Also, I'm running VMWARE at the moment. Can I therefore just use the .iso that I already have? I don't really want to completely re-install Ubuntu as on the VMWARE I have already installed a large number of libraries. | Ubuntu - Install Options | ubuntu;system installation;dual boot | null |
_unix.223455 | Why is it that when I run programs for the first time it creates a copy of that file with a ~ at the end? For example, a file like HelloWorld.lua when run would create 2 files: HelloWorld.lua and HelloWorld.lua~. What is the purpose of this happening? | Why do compiled programs create a copy with ~? | ubuntu;files;filenames | If file foo has a sibling foo~, the file with a tilde is likely a by-product, backup, or intermediate file for either your compiler or editor. They're usually cleaned up automatically, ignored by your version control, and hidden in GUIs. Think of it as one of those things that most people aren't familiar with and you probably don't want to deal with unless you're certain you need it, kinda like toupees and colostomy bags. |
_codereview.31779 | I have written a memory pool (pool_allocator) that was described in a book I am reading (Game Engine Architecture). The interface is pretty basic:The constructor: pool_allocator(std::size_t count, std::size_t block_size, std::size_t alignment);.void* allocate();, which returns a pointer to a memory location with (at least) the requested alignment.void free(void* block); which returns the block to the pool.The general design is not complex, either. The constructor uses operator new() to allocate a chunk of memory, and that memory is treated as a linked list - any calls to allocate and free operate on the head of the list.However, this is the first time I have ever given alignment any real consideration, and apparently if I use enough {static,reinterpret}_casts, I start to question that the Earth is round. I would be grateful for comments. My main concerns are:Is alignment handled properly? I think align_head() is okay, but what about minimum_alignment() and padding()?Are the reinterpret_cast and static_cast calls used properly, or are there better alternatives? I understand that they are necessary when handling raw data, but I am out of my comfort zone here.If you want to go the extra mile and point out other deficiencies (which is more than welcome), keep in mind that in the current design it is not the responsibility of this object to handle error conditions or construct objects, just manage a chunk of memory.class pool_allocator {public: typedef void* pointer_type;public: pool_allocator(std::size_t count, std::size_t block_size, std::size_t alignment) : m_data(nullptr) , m_head(nullptr) , m_blocks(count) , m_block_size(block_size) , m_padding(0) { // each block must be big enough to hold a pointer to the next // so that the linked list works if (m_block_size < sizeof(pointer_type)) { m_block_size = sizeof(pointer_type); } // each block must meet the alignment requirement given as well // as the alignment of pointer_type. 
alignment = minimum_alignment(alignof(pointer_type), alignment); // find the padding required for sequential blocks: m_padding = padding(m_block_size, alignment); // allocate a chunk of memory m_data = operator new((m_block_size + m_padding) * m_blocks + alignment); // align the head pointer align_head(alignment); // initialize the list initialize_list(); } pool_allocator(pool_allocator const&) = delete; pool_allocator& operator=(pool_allocator const&) = delete; virtual ~pool_allocator() { operator delete(m_data); } // grab one free block from the pool pointer_type allocate() { if (m_head == nullptr) { return nullptr; } else { pointer_type result = m_head; m_head = *(static_cast<pointer_type*>(result)); return result; } } // return the block to the pool void free(pointer_type block) { *(static_cast<pointer_type*>(block)) = m_head; m_head = block; }private: // align_head: calculates the offset from the beginning of the // allocated data chunk (m_data) to the first aligned location // and stores it as the void* m_head void align_head(std::size_t alignment) { std::uintptr_t raw_address = reinterpret_cast<std::uintptr_t>(m_data); std::uintptr_t c_alignment = static_cast<std::uintptr_t>(alignment); std::uintptr_t offset = c_alignment - (raw_address & (c_alignment - 1)); m_head = reinterpret_cast<pointer_type>(raw_address + offset); } // initialize_list: fills the first sizeof(void*) bytes of // each block in the data chunk with a pointer to the next void initialize_list() { std::uintptr_t current = reinterpret_cast<std::uintptr_t>(m_head); std::uintptr_t block_size = static_cast<std::uintptr_t>(m_block_size + m_padding); for (std::size_t block = 0; block < m_blocks; ++block) { std::uintptr_t next = current + block_size; *(reinterpret_cast<pointer_type*>(current)) = reinterpret_cast<pointer_type>(next); current = next; } // make sure the last block points nowhere current -= block_size; *(reinterpret_cast<pointer_type*>(current)) = nullptr; } // minimum_alignment: 
calculate the least commmon multiple of // a and b and return that value std::size_t minimum_alignment(std::size_t a, std::size_t b) { std::size_t ta = a; std::size_t tb = b; while (tb != 0) { std::size_t tr = tb; tb = ta % tr; ta = tr; } return (a / ta) * b; } // padding: calculate the smallest multiple of padding that can // contain block_size std::size_t padding(std::size_t block_size, std::size_t alignment) { std::size_t multiplier = 0; while (multiplier * alignment < block_size) { ++multiplier; } return multiplier * alignment - block_size; }private: pointer_type m_data; pointer_type m_head; std::size_t m_blocks; std::size_t m_block_size; std::size_t m_padding;}; | Memory Pool and Block Alignment | c++;c++11;memory management | I think you have some fundamental misunderstandings about alignment (and you can simplify your code a lot):3.11 Alignment3: Alignments are represented as values of the type std::size_t. Valid alignments include only those values returned by an alignof expression for the fundamental types plus an additional implementation-defined set of values, which may be empty. Every alignment value shall be a non-negative integral power of two.So you can the alignment requirements of a type from alignof(). This value is a power of 2.5: Alignments have an order from weaker to stronger or stricter alignments. Stricter alignments have larger alignment values. An address that satisfies an alignment requirement also satisfies any weaker valid alignment requirement.If something satisfies the alignments for a larger alignment. Then it automatically satisfies alignment requirements for smaller values.Thus if memory is aligned for an object of size N. 
Then it is also correctly aligned for objects that are smaller than 'N'.18.6.1.1 Single-object forms [new.delete.single]void* operator new(std::size_t size); 1: Effects: The allocation function (3.7.4.1) called by a new-expression (5.3.4) to allocate size bytes of storage suitably aligned to represent any object of that size.Thus using new to allocate memory for a space of size 'M' means the memory returned will also be aligned for objects of size 'M'. This also means that it aligned for all objects that are smaller than 'M'. Thus for your block of memory that will hold multiple objects of type 'N' such that `M = Count*alignof(N)' it is automatically aligned for all objects of type 'N'. |
_unix.371516 | I always get an Allocation failed error in ARI when a folder has > 0 recordings. The folder and files have 777 permissions. There are no logs in CLI*.
http://192.168.0.123:8088/ari/recordings/stored?api_key=myuser:mypass
{ error: Allocation failed}
Trying to debug with strace:
[pid 5035] openat(AT_FDCWD, /var/spool/asterisk/recording, O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 29
[pid 5035] getdents(29, /* 3 entries */, 32768) = 80
[pid 5035] stat(/var/spool/asterisk/recording/test.wav, {st_mode=S_IFREG|0664, st_size=62124, ...}) = 0
[pid 5035] close(29) = 0
[pid 5035] write(18, HTTP/1.1 500 Internal Server Err..., 281) = 281
[pid 5035] write(18, {\error\:\Allocation failed\}, 29) = 29
[pid 5035] read(18, 0x1ca5d63, 1) = -1 EAGAIN (Resource temporarily unavailable)
[pid 5035] poll([{fd=18, events=POLLIN|POLLPRI}], 1, 15000 <unfinished ...> | Asterisk Allocation failed in ARI (list recordings) | strace;asterisk;freepbx | null
_unix.83109 | I don't understand it. Almost every 10th or 20th question is about fans, or overheating, or the fan being too loud and spinning too fast, etc. And these questions are not only about laptops but often about desktop Linux systems. Why are there so many problems with fans in the Linux world? Why is it so hard to set the right fan speed in Linux distributions or during the install process? | Why are there so many problems with overheating and fan speed when installing Linux? | kernel;system installation;hardware;fan | null
_cs.75031 | I had this question in an interview recently. I was unable to answer this question and would really like to start a discussion on how to approach this problem and get the most efficient solution.You have this matrix. You want to print out how many islands are in the matrix. Assume water is surrounding everything outside of the matrix. An island is any land completely surrounded by water. Land pieces can be connected to form an island as well. As long there is another piece of land in any of the surrounding blocks, that piece of land is considered a part of that island, and yes this includes diagonals. M = 'land'o = 'water'world = [ [o,M,o,o,o,M,o,o,o,o,o], [o,o,M,o,M,M,o,o,o,o,o], [o,o,o,o,o,M,o,o,M,M,o], [o,o,o,M,o,M,o,o,o,M,o], [o,o,o,o,o,M,M,o,o,o,o], [o,o,o,o,M,M,M,M,o,o,o], [M,M,M,M,M,M,M,M,M,M,M], [o,o,o,M,M,o,M,M,M,o,o], [o,o,o,o,o,o,M,M,o,o,o], [o,M,o,o,o,M,M,o,o,o,o], [o,o,o,o,o,M,o,o,o,o,o]]In this example, there are 5 islands.I attempted to answer this with for loops but was unable to do so as this seems more of a recursive problem. Unfortunately, I am not the most proficient with recursion and would really like to see the thought process you take to tackle this and the running time of your algorithm :). Feel free to use your favorite programming language. Also If I didn't describe the problem clearly enough just leave a comment and I'll do my best to clarify. | Finding islands recursively in a matrix | algorithms;computability;recursion | null |
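Since the question invites solutions in any language, here is one hedged sketch of the standard approach: treat the grid as a graph and count 8-connected components with a flood fill. The fill below is iterative (an explicit stack) rather than recursive, which sidesteps recursion-depth limits; the overall running time is O(rows * cols), since each cell is visited a constant number of times.

```python
def count_islands(world, land='M'):
    """Count islands: 8-connected components of land cells (iterative flood fill)."""
    rows, cols = len(world), len(world[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if world[r][c] != land or (r, c) in seen:
                continue
            count += 1                      # found a new island; flood it
            stack = [(r, c)]
            seen.add((r, c))
            while stack:
                cr, cc = stack.pop()
                for dr in (-1, 0, 1):       # all 8 neighbours (and self, harmlessly)
                    for dc in (-1, 0, 1):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and world[nr][nc] == land
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
    return count

# The example grid from the question, one string per row.
world = [
    "oMoooMooooo",
    "ooMoMMooooo",
    "oooooMooMMo",
    "oooMoMoooMo",
    "oooooMMoooo",
    "ooooMMMMooo",
    "MMMMMMMMMMM",
    "oooMMoMMMoo",
    "ooooooMMooo",
    "oMoooMMoooo",
    "oooooMooooo",
]
assert count_islands(world) == 5
```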
_unix.257953 | With the 7.0 release, OpenSSH disabled ssh-dss keys. The not-so-recommended workaround is to explicitly re-add DSA key support to .ssh/config, which will eventually be dropped by a later OpenSSH version:PubkeyAcceptedKeyTypes=+ssh-dssAs I have deployed my DSA key to countless machines (and I do not have a full list of them, as known_hosts is hashed), I have to replace the old public user key whenever I encounter it on login.Is it possible to make the OpenSSH client display a warning when I log in with an ssh-dss key? | SSH: display warning when using (deprecated) ssh-dss key | ssh;authentication;openssh | There is no special mechanism in ssh to notify you that you use some key. It works or fails.Differentiate the keys using passphraseOnly other idea that comes to my mind is differentiate between the keys using passphrase.For example, have the RSA keys in ssh-agent or without passphrase and the DSA one with passphrase (and not in ssh-agent). It might warn you with passphrase prompt.Not to use themThe easiest thing is not to add the above line into ssh_config and then you will see when you can't connect. Then you can use the options on command line for individual connections:ssh -oPubkeyAcceptedKeyTypes=+ssh-dss host |
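As a middle ground between enabling ssh-dss globally and typing the option on every connection, the deprecated algorithm can be scoped to just the hosts that still need it in `~/.ssh/config`. This is a sketch; the `legacy-*` host pattern and the key path are placeholders for your own hosts:

```
# ~/.ssh/config -- "legacy-*" is a hypothetical pattern matching the old machines
Host legacy-*
    PubkeyAcceptedKeyTypes +ssh-dss
    IdentityFile ~/.ssh/id_dsa
```

This does not print a warning, but it limits the DSA key to named hosts, so every other connection fails fast and reminds you which machines still carry the old public key.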
_unix.263703 | I have a file with a lot of lines like this/item/pubDate=Sun, 23 Feb 2014 00:55:04 +010If I execute this echo /item/pubDate=Sun, 23 Feb 2014 00:55:04 +010 | grep -Po (?<=\=).*Sun, 23 Feb 2014 00:55:04 +010I get the correct date (all in one line). Now I want to try this with a lot of dates in a xml file. I use this and it's ok. xml2 < date_list | egrep pubDate | grep -Po (?<=\=).*Fri, 22 Jan 2016 17:56:29 +0100Sun, 13 Dec 2015 18:33:02 +0100Wed, 18 Nov 2015 15:27:43 +0100...But now I want to use the date in a bash program and I get this outputfor fecha in $(xml2 < podcast | egrep pubDate | grep -Po (?<=\=).*); do echo $fecha; done Fri, 22 Jan 2016 17:56:29 +0100 Sun, 13 Dec 2015 18:33:02 +0100 Wed, 18 Nov 2015 15:27:43 +0100I want the date output in one line (in variable fecha) how the first and second examples but I don't know how to do it. | for loop over input lines | bash;text processing;for | Do it this way instead:while IFS= read -r fecha; do echo $fechadone < <(xml2 < podcast | egrep pubDate | grep -Po (?<=\=).*)Bash will separate words to loop through by characters in the Internal Field Separator ($IFS). You can temporarily disable this behavior by setting IFS to nothing for the duration of the read command. The pattern above will always loop line-by-line.<(command) makes the output of a command look like a real file, which we then redirect into our read loop.$ while IFS= read -r line; do echo $line; done < <(cat ./test.input)Fri, 22 Jan 2016 17:56:29 +0100Sun, 13 Dec 2015 18:33:02 +0100Wed, 18 Nov 2015 15:27:43 +0100 |
_codereview.96838 | I have a superclass with several subclasses. Each subclass overrides the solve method with some a different algorithm for estimating the solution to a particular problem. I'd like to efficiently create an instance of each subclass, store those instances in a list, and then take the sum (or average) of the estimates that the subclasses give:class Technique(): def __init__(self): self.weight = 1 def solve(self, problem): passclass Technique1(Technique): def __init__(self): super() def solve(self,problem): return 1class Technique2(Technique): def __init__(self): super() def solve(self,problem): return 6 class Technique3(Technique): def __init__(self): super() def solve(self,problem): return -13testCase = 3myTechniques = []t1 = Technique1()myTechniques.append(t1)t2 = Technique1()myTechniques.append(t2)t3 = Technique1()myTechniques.append(t3)print(sum([t.solve(testCase) for t in myTechniques])The code at the end feels very clunky, particularly as the number of subclasses increases. Is there a way to more efficiently create all of the instances? | Creating instances of all subclasses in Python | python;inheritance | You might try this:myTechniques = [cls() for cls in Technique.__subclasses__()]Note that it does not return indirect subclasses, only immediate ones. You might also consider what happens if some subclass of Technique is itself abstract. Speaking of which, you should be using ABCMeta and abstractmethod. |
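If indirect subclasses also need to be instantiated, a small recursive helper (a sketch, not a standard-library function) covers them, since `__subclasses__()` only returns one level:

```python
def all_subclasses(cls):
    """Collect direct and indirect subclasses, depth-first in definition order."""
    result = []
    for sub in cls.__subclasses__():
        result.append(sub)
        result.extend(all_subclasses(sub))
    return result

class Technique:
    pass

class Technique1(Technique):
    pass

class Technique1a(Technique1):  # indirect subclass of Technique
    pass

techniques = [cls() for cls in all_subclasses(Technique)]
assert [type(t).__name__ for t in techniques] == ['Technique1', 'Technique1a']
```

If some subclasses are abstract, filter them out before instantiating, e.g. with `inspect.isabstract(sub)`.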
_unix.59137 | I start and stop my VM headless style.StartVBoxHeadless -s Windows -v on -e TCP/Address=0.0.0.0 -e TCP/Ports=3389 StopVBoxManage controlvm Windows poweroffAnd I connect to it using the rdesktop protocol that VirtualBox natively supports. rdesktop-vrdp localhost:3389However after my kernel was forcefully upgraded for me today, I had to also upgrade my Virtualbox installation from VirtualBox-4.1-4.1.8_75467_fedora16-1.i686.rpm to VirtualBox-4.2-4.2.6_82870_fedora16-1.i686.rpm and after doing so, found myself unable to use arrow keys, control keys, page-up, page-down, etc, in my rdesktop-vrdp VM window. So the problem is: control/alt/arrow/etc keys not working in rdesktop-vrdp. | Certain keys (arrows, page-up, page-down) not working in rdesktop-vrdp | keyboard;virtualbox;remote desktop;remote control | null |
_codereview.1636 | Given the following exercise:Exercise 2.6In case representing pairs as procedures wasn't mind-boggling enough, consider that, in a language that can manipulate procedures, we can get by without numbers (at least insofar as nonnegative integers are concerned) by implementing 0 and the operation of adding 1 as(define zero (lambda (f) (lambda (x)x)))(define (add-1 n) (lambda (f)(lambda (x) (f ((n f) x)))))This representation is known as Church numerals, after its inventor, Alonzo Church, the logician who invented the calculus.Define one and two directly (not in terms of zero and add-1). (Hint: Use substitution to evaluate (add-1 zero)). Give a direct definition of the addition procedure + (not in terms of repeated application of add-1).I wrote this solution. I'm not 100% certain what the correct answer is, and I'm not sure the best way to check my solution so it's possible that I have made a mistake in my answer. Please let me know what you think.;given definitions(define zero (lambda (f) (lambda (x) x)))(define (add-1 n) (lambda (f) (lambda (x) (f ((n f) x))))); exercise 2.6: define one and two directly -; not in terms of zero or add-1(define one (lambda (f) (lambda (x) (f (f x)))))(define two (lambda (f) (lambda (x) (f ((f (f x)) x)))))(define (church-plus a b) (define (church-num n) (cond ((= 0 a) (lambda (x) x)) ((= 1 a) (lambda (f) (lambda (x) (f ((n f) x))))) (else (church-num (- n 1))))) (church-num (+ a b))) | Church Numerals - implement one, two, and addition | lisp;scheme;sicp | Church defined numerals as the repeated application of a function. The first few numerals are defined thus:(define zero (lambda (f) (lambda (x) x)))(define one (lambda (f) (lambda (x) (f x))))(define two (lambda (f) (lambda (x) (f (f x)))))... 
and so on.To see how one is derived, begin by defining one as (add-1 zero) and then perform the applications in order as shown in the steps below (I have written each function to be applied and its single argument on separate lines for clarity):;; Givens(define add-1 (lambda (n) (lambda (f) (lambda (x) (f ((n f) x))))))(define zero (lambda (f) (lambda (x) x)))(define one (add-1 ;; substitute with definition of add-1 zero))(define one ((lambda (n) ;; apply this function to argument zero (lambda (f) (lambda (x) (f ((n f) x))))) zero))(define one (lambda (f) (lambda (x) (f ((zero ;; substitute with definition of zero f) x)))))(define one (lambda (f) (lambda (x) (f (((lambda (f) ;; apply this function to argument f (lambda (x) x)) f) x)))))(define one (lambda (f) (lambda (x) (f ((lambda (x) ;; apply this function to argument x x) x)))))(define one (lambda (f) (lambda (x) (f x)))) ;; we're done!You may try your hand at two in a similar fashion. =)By definition, Church numerals are the repeated application of a function. If this function were the integer increment function and the initial argument were 0, then we could generate all natural numbers 0, 1, 2, 3, .... This is an important insight.Thus, a Church numeral is a function that takes one argument, the increment function, and returns a function that also takes one argument, the additive identity. Thus, if we were to apply a Church numeral to the integer increment function (lambda (n) (+ 1 n)) (or simply add1 in Scheme), which we then apply to the integer additive identity 0, we would get an integer equivalent to the Church numeral as a result. 
In code:(define (church-numeral->int cn) ((cn add1) 0))A few tests:> (church-numeral->int zero)0> (church-numeral->int one)1> (church-numeral->int two)2> (church-numeral->int (add-1 two))3> (church-numeral->int (add-1 (add-1 two)))4A Church-numeral addition should take two Church numerals as input, and not integers, as in your code.We use the insight above about increment functions and additive identities to notice that integer addition is simply repeated incrementing. If we wish to add integers a and b, we begin with the number b and increment it a times to get a + b.The same applies to Church numerals. Instead of using integer increment function and integer additive identity, we use the Church increment function add-1 and we begin with a Church numeral as the additive identity. Thus, we can implement Church-numeral addition as:(define (plus cn-a cn-b) ((cn-a add-1) cn-b))A few examples:(define three (plus one two))(define four (plus two two))(define five (plus two three))(define eight (plus three five))... and associated tests:> (church-numeral->int three)3> (church-numeral->int four)4> (church-numeral->int five)5> (church-numeral->int eight)8 |
_unix.361743 | I use a Samsung 256 GB SSD, Model MZ-7TD2560/0L7, built October 2013. A WinMagic bootloader with Windows 7 is installed on it. hdparm -I doesn't report any security features for that drive. hdparm --dco-identify tells me that the security feature set might have been disabled by DCO, so I guess this is what happened here. I tried to restore the factory setting via --dco-restore but only got an I/O error. How can I restore the security feature set? | Security features disabled by DCO - how to unfreeze and restore? | security;hdparm | null
_unix.91954 | Let's edit this theme's gtkrc file:
vi /usr/share/themes/industrial/gtk-2.0/gtkrc
gtk-color-scheme = bg_color: #000000000000\nfg_color: #ffffff\nbase_color: #000000000000\ntext_color: #ffffff\nselected_bg_color: #ffffff\nselected_fg_color: #000000000000\ntooltip_bg_color: #000000000000\ntooltip_fg_color: #ffffff
Let's change the bg_color to red and apply the theme to the desktop. But the problem is this: the background of every scroll bar and the foreground of the scroll bar both become red, which makes it impossible to see the scroll bar, since the background is the same color as the scroll bar. By scroll bar I am referring, for example, to the scroll bar in the Firefox web browser that you'd use to scroll up and down. I am currently using my mouse's wheel button because the scroll bar is not visible. Why are colors being applied this way? Why isn't there a distinction between the foreground and the background of the scroll bars? | gtk-color-scheme = bg_color: red\ results in red bg + red scroll bar bg + red scroll bar ? where is the logic? | configuration;theme;gtk2;gnome2 | Every element is a combination of RGB values. If you want to find the RGB values of red, blue, or any other color, you can open GIMP, choose the color, and read off the RGB values it reports.
_softwareengineering.93322 | Is it possible to do the following in C# (or in any other language)?I am fetching data from a database. At run time I can compute the number of columns and data types of the columns fetched.Next I want to generate a class with these data types as fields. I also want to store all the records that I fetched in a collection. The problem is that I want to do both step 1 and 2 at runtimeIs this possible? I am using C# currently but I can shift to something else if i need to. | Generating a class dynamically from types that are fetched at runtime | c#;dynamic typing | Use CodeDom. Here's something to get startedusing System;using System.Collections.Generic;using System.Linq;using Microsoft.CSharp;using System.CodeDom.Compiler;using System.CodeDom;namespace Test{ class Program { static void Main(string[] args) { string className = BlogPost; var props = new Dictionary<string, Type>() { { Title, typeof(string) }, { Text, typeof(string) }, { Tags, typeof(string[]) } }; createType(className, props); } static void createType(string name, IDictionary<string, Type> props) { var csc = new CSharpCodeProvider(new Dictionary<string, string>() { { CompilerVersion, v4.0 } }); var parameters = new CompilerParameters(new[] { mscorlib.dll, System.Core.dll}, Test.Dynamic.dll, false); parameters.GenerateExecutable = false; var compileUnit = new CodeCompileUnit(); var ns = new CodeNamespace(Test.Dynamic); compileUnit.Namespaces.Add(ns); ns.Imports.Add(new CodeNamespaceImport(System)); var classType = new CodeTypeDeclaration(name); classType.Attributes = MemberAttributes.Public; ns.Types.Add(classType); foreach (var prop in props) { var fieldName = _ + prop.Key; var field = new CodeMemberField(prop.Value, fieldName); classType.Members.Add(field); var property = new CodeMemberProperty(); property.Attributes = MemberAttributes.Public | MemberAttributes.Final; property.Type = new CodeTypeReference(prop.Value); property.Name = prop.Key; property.GetStatements.Add(new 
CodeMethodReturnStatement(new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), fieldName))); property.SetStatements.Add(new CodeAssignStatement(new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), fieldName), new CodePropertySetValueReferenceExpression())); classType.Members.Add(property); } var results = csc.CompileAssemblyFromDom(parameters,compileUnit); results.Errors.Cast<CompilerError>().ToList().ForEach(error => Console.WriteLine(error.ErrorText)); } }}It creates an assembly 'Test.Dynamic.dll' with this class in itnamespace Test.Dynamic{ public class BlogPost { private string _Title; private string _Text; private string[] _Tags; public string Title { get { return this._Title; } set { this._Title = value; } } public string Text { get { return this._Text; } set { this._Text = value; } } public string[] Tags { get { return this._Tags; } set { this._Tags = value; } } }}You could also use dynamic features of C#DynamicEntity class, no need to create anything at runtimepublic class DynamicEntity : DynamicObject{ private IDictionary<string, object> _values; public DynamicEntity(IDictionary<string, object> values) { _values = values; } public override bool TryGetMember(GetMemberBinder binder, out object result) { if (_values.ContainsKey(binder.Name)) { result = _values[binder.Name]; return true; } result = null; return false; }}And use it like thisvar values = new Dictionary<string, object>();values.Add(Title, Hello World!);values.Add(Text, My first post);values.Add(Tags, new[] { hello, world });var post = new DynamicEntity(values);dynamic dynPost = post;var text = dynPost.Text; |
_unix.210394 | The debian wiki page for the apt sources list gives this example file:deb http://httpredir.debian.org/debian jessie maindeb-src http://httpredir.debian.org/debian jessie maindeb http://httpredir.debian.org/debian jessie-updates maindeb-src http://httpredir.debian.org/debian jessie-updates maindeb http://security.debian.org/ jessie/updates maindeb-src http://security.debian.org/ jessie/updates mainWhat is the difference between the jessie and jessie-updates distribution entries? | What is the difference between the jessie and jessie-updates distributions in /etc/apt/sources.list? | debian;apt | null |
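For reference, the roles of the three kinds of entries can be annotated directly in the file; this is how the suites are conventionally described, annotations mine:

```
# Base release: the package set as of the jessie release (and its point releases).
deb http://httpredir.debian.org/debian jessie main

# stable-updates: selected non-security fixes pushed between point releases
# (e.g. timezone data), the successor of the old "volatile" archive.
deb http://httpredir.debian.org/debian jessie-updates main

# Security fixes announced as Debian Security Advisories.
deb http://security.debian.org/ jessie/updates main
```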
_webapps.79174 | When checking the settings of custom labels in Google Inbox, one can see the filters that are used to put emails there. However, no such setting is available for the special labels, e.g. Purchases, Finance, Updates, etc. Now, I do realize that there is a classifier underneath (unlike custom labels), but when an email is moved to one of these labels, you are still asked if all emails from this address should be moved there. It would be good to see which addresses were marked this way. Is that possible? | See filters for Google Inbox Finance/Purchases labels | gmail labels;inbox by gmail | null
_unix.206828 | Question: Does anyone know of any methods to perform a second login without using su, or a means to use su that requires the user to enter their username after the su command is issued?Situation: We have multiple types of devices that login to a Linux server. The users can then choose from options on a menu to execute. The devices auto login to the system with a different account for each device. The reason we do this is to limit the number of setup accounts to maintain on the devices to one per device. The ssh clients we have that we can use all tie an automatic login name to the device account in use (see Juice ssh on android for one example). Maintaining 40 plus accounts per device on each app is too much overhead for us.Goal: After the device logs in (primary login) provide a secondary login for the user that will allow them to enter their user id and password. My first inclination would have been 'su', but 'su' needs a username provided with the command or it defaults to root account. To keep things simple, I am trying avoid writing more shell code to prompt for the username before issuing a command such as 'su - $username' to accomplish this.Naturally, any application they login to already asks for username and password, but dropping to the shell and other home built functions rely on the login to provide authentication. | Auto login by device followed by login by user? | linux;shell;login | What does it matter just getting users to type su - username in the shell, if they must type their username and password in any way? Only 5 extra characters typing :).If it's not acceptable by users and you are so lazy as not to want to write single script anymore, just install telnet and telnetd on the server and put the following alias in /etc/bash.bashrc.alias x='telnet localhost'Updated: Don't forget to restrict remote accesses to telnetd port (tcp/23). 
You can set up a firewall (iptables) or the traditional TCP wrapper (/etc/hosts.allow, /etc/hosts.deny; consult the hosts_access(5) manual page).
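Following the hosts_access(5) suggestion above, a minimal TCP wrapper sketch that restricts telnetd to local connections (the daemon name may differ by distribution, so check yours):

```
# /etc/hosts.deny -- deny telnetd to everyone by default
in.telnetd: ALL
```

```
# /etc/hosts.allow -- then allow only local connections
in.telnetd: 127.0.0.1
```

Deny rules are only consulted when no allow rule matches, so local logins keep working while remote tcp/23 connections are refused.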
_unix.329926 | I installed Linux Mint on my laptop along with a pre-installed Windows 10. When I turn on the computer, the normal GRUB menu appears most of the time: But after booting either Linux or Windows and then rebooting, GRUB starts in command line mode, as seen in the following screenshot: There is probably a command that I can type to boot from that prompt, but I don't know it. What works is to reboot using Ctrl+Alt+Del, then press F12 repeatedly until the normal GRUB menu appears. Using this technique, it always loads the menu. Rebooting without pressing F12 always reboots in command line mode. I think that the BIOS has EFI enabled, and I installed the GRUB bootloader in /dev/sda. Why is this happening and how can I ensure that GRUB always loads the menu? Edit: As suggested in the comments, I tried purging the grub-efi package and reinstalling it. This did not fix the problem, but now when it starts in command prompt mode, GRUB shows the following message: error: no such device: 6fxxxxx-xxxx-xxxx-xxxx-xxxxxee. Entering rescue mode... grub rescue> I checked with the blkid command and that is the identifier of my Linux partition. Maybe this additional bit of information can help figure out what is going on? | GRUB starts in command line after reboot | grub2;dual boot;boot loader;uefi | null
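For reference, from the grub rescue> prompt the menu can usually be reached by pointing GRUB at the partition that holds /boot/grub and reloading the normal module. The partition name below is an assumption; run ls first and probe the listed partitions to find the one containing /boot/grub:

```
grub rescue> ls
grub rescue> set root=(hd0,msdos1)
grub rescue> set prefix=(hd0,msdos1)/boot/grub
grub rescue> insmod normal
grub rescue> normal
```

This only boots the current session; the underlying problem (GRUB's embedded prefix pointing at a device it cannot find) still needs fixing, typically by reinstalling GRUB from the booted system.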
_codereview.82510 | I'm trying to write clean code, but I'm not so good with exception handling.How can I improve this method? Or is it good enough already?This method gets a hash from a file.public String getLatestHash() { String hash = none; URL packURL = null; try { packURL = new URL(ConfigManager.get(play_url) + / + HTMLParser.getPackName()); } catch (MalformedURLException e) { logger.severe(invalid URL!); logger.severe(e.getMessage()); } try (InputStream inputStream = packURL.openConnection().getInputStream()) { hash = DigestUtils.md5Hex(inputStream); } catch (IOException e) { e.printStackTrace(); } return hash;} | Getting the latest hash from a file | java | null |
_codereview.74357 | Previous question:Text-based Tetris gameI don't have a Linux terminal to be sure whether or not I have implemented the Linux version correctly, but hopefully it's OK. Summary of improvements:More OOP Improved the names of the classes and their functions and I hope they're OK nowMore portable, isolated platform-specific codeEliminated magic numbersI would like to know if there is anything I should take care of before processing the code.#include <iomanip>#include <iostream>#include <vector>#include <random>#ifdef __linux__ /*****************************************************************************kbhit() and getch() for Linux/UNIXChris Giese <[email protected]> http://my.execpc.com/~geezerRelease date: ?This code is public domain (no copyright).You can do whatever you want with it.****************************************************************************/#include <sys/time.h> /* struct timeval, select() *//* ICANON, ECHO, TCSANOW, struct termios */#include <termios.h> /* tcgetattr(), tcsetattr() */#include <stdlib.h> /* atexit(), exit() */#include <unistd.h> /* read() */#include <stdio.h> /* printf() */static struct termios g_old_kbd_mode;/**********************************************************************************************************************************************************/static void cooked(void){ tcsetattr(0, TCSANOW, &g_old_kbd_mode);}/**********************************************************************************************************************************************************/static void raw(void){ static char init; /**/ struct termios new_kbd_mode; if (init) return; /* put keyboard (stdin, actually) in raw, unbuffered mode */ tcgetattr(0, &g_old_kbd_mode); memcpy(&new_kbd_mode, &g_old_kbd_mode, sizeof(struct termios)); new_kbd_mode.c_lflag &= ~(ICANON | ECHO); new_kbd_mode.c_cc[VTIME] = 0; new_kbd_mode.c_cc[VMIN] = 1; tcsetattr(0, TCSANOW, &new_kbd_mode); /* when we exit, go back to normal, cooked mode */ 
atexit(cooked); init = 1;}/**********************************************************************************************************************************************************/static int _kbhit(void){ struct timeval timeout; fd_set read_handles; int status; raw(); /* check stdin (fd 0) for activity */ FD_ZERO(&read_handles); FD_SET(0, &read_handles); timeout.tv_sec = timeout.tv_usec = 0; status = select(0 + 1, &read_handles, NULL, NULL, &timeout); if (status < 0) { printf(select() failed in kbhit()\n); exit(1); } return status;}/**********************************************************************************************************************************************************/static int _getch(void){ unsigned char temp; raw(); /* stdin = fd 0 */ if (read(0, &temp, 1) != 1) return 0; return temp;}struct COORD { short X; short Y; };bool gotoxy(unsigned short x = 1, unsigned short y = 1) { if ((x == 0) || (y == 0)) return false; std::cout << \x1B[ << y << ; << x << H;}void clearScreen(bool moveToStart = true) { std::cout << \x1B[2J; if (moveToStart) gotoxy(1, 1);}inlinevoid print(std::string& str, COORD& coord){ gotoxy(coord.X, coord.Y); std::cout << str << std::flush;;}inlinevoid print(TCHAR* str, COORD& coord){ gotoxy(coord.X, coord.Y); std::cout << str << std::flush;;}inlinevoid print(TCHAR& c, COORD& coord){ gotoxy(coord.X, coord.Y); std::cout << c << std::flush;}#elif _WIN32#include <conio.h> /* kbhit(), getch() */#include <Windows.h>#include <tchar.h>void clearScreen(){ HANDLE hStdOut; CONSOLE_SCREEN_BUFFER_INFO csbi; DWORD count; DWORD cellCount; COORD homeCoords = { 0, 0 }; hStdOut = GetStdHandle(STD_OUTPUT_HANDLE); if (hStdOut == INVALID_HANDLE_VALUE) return; if (!GetConsoleScreenBufferInfo(hStdOut, &csbi)) return; cellCount = csbi.dwSize.X *csbi.dwSize.Y; if (!FillConsoleOutputCharacter( hStdOut, (TCHAR) ' ', cellCount, homeCoords, &count )) return; if (!FillConsoleOutputAttribute( hStdOut, csbi.wAttributes, cellCount, homeCoords, &count )) 
return; /* Move the cursor home */ SetConsoleCursorPosition(hStdOut, homeCoords);}void gotoxy(int x, int y){ COORD coord; coord.X = x; coord.Y = y; SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), coord);}static CONSOLE_SCREEN_BUFFER_INFO csbi;COORD getXY(){ GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &csbi); COORD position; position.X = csbi.dwCursorPosition.X; position.Y = csbi.dwCursorPosition.Y; return position;}static DWORD cCharsWritten = 0;inlinevoid print(std::string& str, COORD& coord){ WriteConsoleOutputCharacter(GetStdHandle(STD_OUTPUT_HANDLE), str.c_str(), str.length(), coord, &cCharsWritten);}inlinevoid print(TCHAR* str, COORD& coord){ WriteConsoleOutputCharacter(GetStdHandle(STD_OUTPUT_HANDLE), str, _tcslen(str), coord, &cCharsWritten);}inlinevoid print(TCHAR& c, COORD& coord){ FillConsoleOutputCharacter(GetStdHandle(STD_OUTPUT_HANDLE), c, 1, coord, &cCharsWritten);}#else#error OS not supported!#endifstatic std::vector<std::vector<std::vector<int>>> block_list ={ { { 0, 1, 0, 0 }, { 0, 1, 0, 0 }, { 0, 1, 0, 0 }, { 0, 1, 0, 0 } }, { { 0, 0, 0, 0 }, { 0, 1, 1, 0 }, { 0, 1, 0, 0 }, { 0, 1, 0, 0 } }, { { 0, 0, 1, 0 }, { 0, 1, 1, 0 }, { 0, 1, 0, 0 }, { 0, 0, 0, 0 } }, { { 0, 1, 0, 0 }, { 0, 1, 1, 0 }, { 0, 0, 1, 0 }, { 0, 0, 0, 0 } }, { { 0, 0, 0, 0 }, { 0, 1, 0, 0 }, { 1, 1, 1, 0 }, { 0, 0, 0, 0 } }, { { 0, 0, 0, 0 }, { 0, 1, 1, 0 }, { 0, 1, 1, 0 }, { 0, 0, 0, 0 } }, { { 0, 0, 0, 0 }, { 0, 1, 1, 0 }, { 0, 0, 1, 0 }, { 0, 0, 1, 0 } }};struct NonCopyable{ NonCopyable() = default; NonCopyable(const NonCopyable &) = delete; NonCopyable(const NonCopyable &&) = delete; NonCopyable& operator = (const NonCopyable&) = delete;};struct Random : public NonCopyable{ Random(int min, int max) : mUniformDistribution(min, max) {} int operator()() { return mUniformDistribution(mEngine); } std::default_random_engine mEngine{ std::random_device()() }; std::uniform_int_distribution<int> mUniformDistribution;};struct Block : public NonCopyable{ static 
const int ROTATIONS_IN_CIRCLE = 4; int rotation_count = 0; COORD coord; std::vector<std::vector<int>> block; Block() { block.resize(4, std::vector<int>(4, 0)); coord.X = 0; coord.Y = 0; } void rotate() { ++rotation_count; while (rotation_count > ROTATIONS_IN_CIRCLE) { rotation_count -= ROTATIONS_IN_CIRCLE; } } int& getDim(int row, int column) { switch (rotation_count % ROTATIONS_IN_CIRCLE) { default: return block[row][column]; case 1: return block[block.size() - column - 1][row]; case 2: return block[block.size() - row - 1][block.size() - column - 1]; case 3: return block[column][block.size() - row - 1]; } } size_t size() { return block.size(); }};struct Board : public NonCopyable{ Board() { field.resize(22, std::vector<int>(13, 0)); coord.X = 0; coord.Y = 0; } COORD coord; std::vector<std::vector<int>> field; int& getDim(int row, int column) { return field[row][column]; } size_t size() const { return field.size(); } size_t rowSize() const { return field[0].size(); }};struct Collidable : public NonCopyable{ Collidable() { stage.resize(22, std::vector<int>(13, 0)); coord.X = 0; coord.Y = 0; } COORD coord; int& getDim(int row, int column) { return stage[row][column]; } size_t size() const { return stage.size(); } size_t rowSize() const { return stage[0].size(); } std::vector<std::vector<int>> stage;};class Tetris : public NonCopyable{public: Tetris() : board(), block(), stage() {}; bool makeBlocks(); void initField(); void moveBlock(int, int); void collidable(); bool isCollide(int, int); void userInput(); bool rotateBlock(); void spawnBlock(); virtual void display(){}; virtual void GameOverScreen() {};protected: int y = 0; int x = 4; bool gameOver = false; Board board; Block block; Collidable stage; Random getRandom{ 0, 6 };};void Tetris::initField(){ for (size_t i = 0; i <= board.size() - 2; ++i) { for (size_t j = 0; j <= board.rowSize() - 2; ++j) { if ((j == 0) || (j == 11) || (i == 20)) { board.getDim(i, j) = stage.getDim(i, j) = 9; } else { board.getDim(i, j) = 
stage.getDim(i, j) = 0; } } } makeBlocks(); display();}bool Tetris::makeBlocks(){ x = 4; y = 0; int blockType = getRandom(); for (size_t i = 0; i < block.size(); ++i) { for (size_t j = 0; j < block.size(); ++j) { block.getDim(i, j) = 0; block.getDim(i, j) = block_list[blockType][i][j]; } } for (size_t i = 0; i < block.size(); i++) { for (size_t j = 0; j < block.size(); j++) { board.getDim(i, j + block.size()) = stage.getDim(i, j + block.size()) + block.getDim(i, j); if (board.getDim(i, j + block.size()) > 1) { gameOver = true; return true; } } } return false;}void Tetris::moveBlock(int x2, int y2){ //Remove block for (size_t i = 0; i < block.size(); ++i) { for (size_t j = 0; j < block.size(); ++j) { board.getDim(y + i, x + j) -= block.getDim(i, j); } } //Update coordinates x = x2; y = y2; // assign a block with the updated value for (size_t i = 0; i < block.size(); ++i) { for (size_t j = 0; j < block.size(); ++j) { board.getDim(y + i, x + j) += block.getDim(i, j); } } display();}void Tetris::collidable(){ for (size_t i = 0; i < stage.size(); ++i) { for (size_t j = 0; j < stage.rowSize(); ++j) { stage.getDim(i, j) = board.getDim(i, j); } }}bool Tetris::isCollide(int x2, int y2){ for (size_t i = 0; i < block.size(); ++i) { for (size_t j = 0; j < block.size(); ++j) { if (block.getDim(i, j) && stage.getDim(y2 + i, x2 + j) != 0) { return true; } } } return false;}void Tetris::userInput(){ char key; key = _getch(); switch (key) { case 'd': if (!isCollide(x + 1, y)) { moveBlock(x + 1, y); } break; case 'a': if (!isCollide(x - 1, y)) { moveBlock(x - 1, y); } break; case 's': if (!isCollide(x, y + 1)) { moveBlock(x, y + 1); } break; case ' ': rotateBlock(); }}bool Tetris::rotateBlock(){ std::vector<std::vector<int>> tmp(block.size(), std::vector<int>(block.size(), 0)); for (size_t i = 0; i < tmp.size(); ++i) { for (size_t j = 0; j < tmp.size(); ++j) { tmp[i][j] = block.getDim(i, j); } } for (size_t i = 0; i < block.size(); ++i) { //Rotate for (size_t j = 0; j < 
block.size(); ++j) { block.getDim(i, j) = tmp[block.size() - 1 - j][i]; } } if (isCollide(x, y)) { // And stop if it overlaps not be rotated for (size_t i = 0; i < block.size(); ++i) { for (size_t j = 0; j < block.size(); ++j) { block.getDim(i, j) = tmp[i][j]; } } return true; } for (size_t i = 0; i < block.size(); ++i) { for (size_t j = 0; j < block.size(); ++j) { board.getDim(y + i, x + j) -= tmp[i][j]; board.getDim(y + i, x + j) += block.getDim(i, j); } } display(); return false;}void Tetris::spawnBlock(){ if (!isCollide(x, y + 1)) { moveBlock(x, y + 1); } else { collidable(); makeBlocks(); display(); }}class Game : public Tetris{public: Game() = default; int menu(); virtual void gameOverScreen(); void gameLoop(); virtual void display(); void introScreen();private: size_t GAMESPEED = 20000;};void Game::gameOverScreen(){ COORD coord = { 0, 0 }; coord.Y++; print( ##### # # # ####### ####### # # ####### ######, coord); coord.Y++; print(# # # # ## ## # # # # # # # #, coord); coord.Y++; print(# # # # # # # # # # # # # # #, coord); coord.Y++; print(# #### # # # # # ##### # # # # ##### ######, coord); coord.Y++; print(# # ####### # # # # # # # # # #, coord); coord.Y++; print(# # # # # # # # # # # # # #, coord); coord.Y++; print( ##### # # # # ####### ####### # ####### # #, coord); coord.Y += 2; print(Press any key and enter, coord); char a; std::cin >> a;}void Game::gameLoop(){ size_t time = 0; initField(); while (!gameOver) { if (_kbhit()) { userInput(); } if (time < GAMESPEED) { time++; } else { spawnBlock(); time = 0; } }}int Game::menu(){ introScreen(); int select_num = 0; std::cin >> select_num; switch (select_num) { case 1: case 2: case 3: break; default: select_num = 0; break; } return select_num;}void Game::introScreen(){ clearScreen(); COORD coord = { 0, 0 }; print(#==============================================================================#, coord); coord.Y++; print(####### ####### ####### ###### ### #####, coord); coord.Y++; print( # # # # # # # #, 
coord); coord.Y++; print( # # # # # # #, coord); coord.Y++; print( # ##### # ###### # #####, coord); coord.Y++; print( # # # # # # #, coord); coord.Y++; print( # # # # # # # #, coord); coord.Y++; print( # ####### # # # ### ##### made for fun , coord); coord.Y += 4; coord.Y++; print( <Menu>, coord); coord.Y++; print( 1: Start Game, coord); coord.Y++; print( 2: Quit, coord); coord.Y += 2; print(#==============================================================================#, coord); coord.Y++; print(Choose >> , coord); coord.X = strlen(Choose >> ); gotoxy(coord.X, coord.Y);}void Game::display(){ clearScreen(); for (size_t i = 0; i < board.size(); ++i) { for (size_t j = 0; j < board.rowSize(); ++j) { switch (board.getDim(i, j)) { case 0: std::cout << << std::flush; break; case 9: std::cout << @ << std::flush; break; default: std::cout << # << std::flush; break; } } std::cout << std::endl; } COORD coord = { 0, board.size() }; print( A: left S: down D: right Rotation[Space], coord); if (gameOver) { clearScreen(); gameOverScreen(); }}int main(){ Game game; switch (game.menu()) { case 1: game.gameLoop(); break; case 2: return 0; case 0: COORD coord = { 20, 20 }; print(Choose 1~2, coord); return -1; } return 0;} | Text-based Tetris game - follow-up | c++;c++11;game;tetris | Just a few quick comments on things I saw when scanning through:struct Block : public NonCopyableBy defining this as a struct, you make members public (unless overridden). As a general rule, it is preferred for members to be private as a default. If you are marking a member as public then you should have a specific reason for that member. That pushes you to write code that does not rely on the members being public. You do not seem to be publicly using your public members, so you might as well make this a class. Note: this is not to say that there is never a reason to use a struct or public member, just that I see no sign of such a reason here. 
block.resize(4, std::vector<int>(4, 0)); field.resize(22, std::vector<int>(13, 0)); stage.resize(22, std::vector<int>(13, 0));int x = 4;Random getRandom{ 0, 6 };for (size_t i = 0; i <= board.size() - 2; ++i) for (size_t j = 0; j <= board.rowSize() - 2; ++j) if ((j == 0) || (j == 11) || (i == 20)) board.getDim(i, j) = stage.getDim(i, j) = 9;x = 4; COORD coord = { 20, 20 };There still seem to be a lot of magic numbers in the code. Some of them are repeated. Random getRandom{ 0, 6 };could be something like Random getRandom{ 0, blocklist.size() };I don't understand when the following triggers: if (board.getDim(i, j + block.size()) > 1) { gameOver = true;It's probably in the code somewhere, but I don't feel like tracking it down now. A comment of why we are comparing to 1 would be helpful. |
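A follow-up sketch of that magic-number point: the repeated literals (22, 13, 4, 9, 11, 20) could be gathered into named constants and the bordered field built from them. The constant names below are my own invention, not from the original code:

```cpp
#include <cstddef>
#include <vector>

// Named stand-ins for the literals repeated through the Tetris code.
const std::size_t BOARD_ROWS = 22;   // was the bare 22
const std::size_t BOARD_COLS = 13;   // was the bare 13
const int         WALL       = 9;    // sentinel for border cells

// Rebuild the bordered playfield the way initField() does, but without
// scattering raw numbers through the loop bounds and comparisons.
std::vector<std::vector<int>> makeField()
{
    std::vector<std::vector<int>> field(BOARD_ROWS, std::vector<int>(BOARD_COLS, 0));
    for (std::size_t i = 0; i + 1 < BOARD_ROWS; ++i)
        for (std::size_t j = 0; j + 1 < BOARD_COLS; ++j)
            if (j == 0 || j == BOARD_COLS - 2 || i == BOARD_ROWS - 2)
                field[i][j] = WALL;   // left wall, right wall, floor
    return field;
}
```

makeField() reproduces initField()'s walls (column 0, column 11, row 20) with the relationships between the numbers spelled out instead of restated at each use.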
_unix.77506 | Is there a way to get the disk usage per user under a given path? du doesn't seem to have an option to aggregate disk usage per user, and df only seems to report how much disk is left on the drive. Can this be done with one command or in a few lines on the shell? Any pointers would be greatly appreciated. In case it helps, I use zsh.Note: Quotas are not enabled on our filesystem. | Disk usage summary per user | files;zsh;users;disk usage | The following will work with GNU find and awk:find /path -type f -printf '%u %k\n' | awk '{ arr[$1] += $2 } END { for ( i in arr ) { print i: arr[i]K } }' |
_webmaster.21688 | Mozilla gets money from Google when somebody searches in Google through Firefox and then clicks an ad.If people search on your own website with Google, you also get a share if they click an ad in search results.But is it possible to search the web with Google from your website and still get a share just like Mozilla does? | How to get a share of Google's search revenue? | google;search;revenue | Yes. The Google Custom Search Engine that you can create for your own site can be made to search the web in general (in fact, I think it's the default option).However I believe Mozilla has a special deal with Google to use their search (which actually sends the user direct to Google, not to a proxy site). It's unlikely you would be able to do any similar deal unless you were getting a huge number of searches with your site. |
_codereview.4095 | Wondering if this is fully in C++, and if not can someone help me tell the differences. This was submitted by me last semester, and received a good grade. I'm currently trying to ensure I can tell C++ and C apart.#include <fstream>#include <stack>#include <stdlib.h> //Not neccessary in C++#define swap(type, a, b) {type t = a; a = b; b = t;}using namespace std;typedef struct { int degree, *adjList, //arraySize = degree nextToVisit, //0 <=nextToVisit < degree parent, dfLabel, nodeName;} GraphNode;int numNodes, startNode = 1;GraphNode *nodes;void readGraph();void depthFirstTraverse(int startNode);int main() { readGraph(); depthFirstTraverse(startNode);}void depthFirstTraverse(int startNode) { int nextNode, dfLabels[numNodes], nextToVisit[numNodes], parents[numNodes], lastDFLabel = 0; stack<int> stackArray; for (int i = 0; i < numNodes; i++) dfLabels[i] = nextToVisit[i] = 0; stackArray.push(startNode); printf(startNode = %d, dfLabel(%d) = %d\n,startNode,startNode,1); while (!stackArray.empty()) { if (0 == dfLabels[stackArray.top()]) { lastDFLabel++; dfLabels[stackArray.top()] = lastDFLabel; } if (nextToVisit[stackArray.top()] == nodes[stackArray.top()].degree) { if (lastDFLabel < numNodes) { printf(backtracking %d, stackArray.top()); stackArray.pop(); printf( -> %d\n, stackArray.top()); } else { printf(backtracking %d -> %d\n, stackArray.top(), startNode); stackArray.pop(); } } else { nextNode = nodes[stackArray.top()].adjList[nextToVisit[stackArray.top()]]; nextToVisit[stackArray.top()]++; if (0 == dfLabels[nextNode]) { parents[nextNode] = stackArray.top(); printf(processing (%d, %d): tree-edge, dfLabel(%d) = %d\n, stackArray.top(), nextNode, nextNode, lastDFLabel+1); stackArray.push(nextNode); } else if (dfLabels[nextNode] < dfLabels[stackArray.top()]) { if (nextNode != parents[stackArray.top()]) printf(processing (%d, %d): back-edge\n, stackArray.top(), nextNode); else printf(processing (%d, %d): tree-edge 2nd-visit\n, stackArray.top(), nextNode); 
} else printf("processing (%d, %d): back-edge 2nd-visit\n", stackArray.top(), nextNode); } } for (int i = 0; i < 30; i++) printf("-");}void readGraph() { char ignore; ifstream digraph("digraph.data"); digraph >> numNodes; nodes = new GraphNode[numNodes]; for (int i = 0; i < numNodes; i++) { digraph >> nodes[i].nodeName >> ignore >> nodes[i].degree >> ignore; nodes[i].adjList = new int[nodes[i].degree]; for (int j = 0; j < nodes[i].degree; j++) digraph >> nodes[i].adjList[j]; } digraph.close(); printf ("Input Graph:\nnumNodes = %d\n", numNodes); for (int i = 0; i< numNodes; i++) { printf("%d (%d): ", nodes[i].nodeName, nodes[i].degree); for (int j = 0; j < nodes[i].degree; j++) for (int k = 0; k < j; k++) if (nodes[i].adjList[j] < nodes[i].adjList[k]) swap(int, nodes[i].adjList[j], nodes[i].adjList[k]); for (int j = 0; j < nodes[i].degree; j++) printf("%d ", nodes[i].adjList[j]); printf("\n"); } for (int i = 0; i < 30; i++) printf("-"); printf ("\n");} | depthFirstTraverse fully in C++? | c++;c;algorithm | It's still closer to C than C++ IMO. First, the two non-C++ issues: You don't encapsulate your classes. Your whole interface is public. Thus you are now doomed to forever maintain this interface for eternity. Additionally, it allows other people to accidentally modify your structure. Use the C++ class system to encapsulate your objects and at least protect your objects from accidental misuse. Your tree-traversal actually depends on the node object to track traversal. That's a bad design decision. You are basically binding yourself to an implementation and limiting the usability of your code. You should most definitely look up the Visitor Pattern. The rest are C++ issues: You are doing resource management manually: nodes = new GraphNode[numNodes]; // STUFF
nodes[i].adjList = new int[nodes[i].degree]; It would be better to use a std::vector; at least that would be exception-safe in controlling the resource: std::vector<GraphNode> nodes(numNodes); You are manually loading nodes. 
Should a node not know how to stream itself onto and off a stream? digraph >> nodes[i].nodeName >> ignore >> nodes[i].degree >> ignore; I would expect it to look like this: digraph >> nodes[i]; Or even better, create items and push them into the vector. Since the only thing that seems to be in the file digraph.data is GraphNodes, you could do something like this: std::copy(std::istream_iterator<GraphNode>(digraph), std::istream_iterator<GraphNode>(), std::back_inserter(nodes) // Assuming nodes is a vector described above. ); This is not C++: printf("%d (%d): ", nodes[i].nodeName, nodes[i].degree); The problem with the printf style is that it is not type safe. The main advantage C++ has over C is the exceptional type safety of the language. Use this to your advantage. This is so bad I nearly swallowed my tongue: swap(int, nodes[i].adjList[j], nodes[i].adjList[k]); You have #defined swap. Use all caps for macros. The reason is to make sure macros do not clash with any other identifiers. Macros are not type safe. If there was a conversion from this object to int the compiler will now happily do it. There is already a type-safe std::swap as part of the standard library. If you want to override the default swap behavior you just need to write your type-specific version and Koenig lookup will automatically find it. Not a big deal, but prefer pre-increment. The reason is it does not matter for POD types, but for user-defined types it does (or potentially does) matter. If a maintainer at a later stage changes the type of the loop variable then you don't have to go and change any of the code; the pre-increment is already the correct type. for (int i = 0; i < 30; ++i) // ^^^^ Prefer pre-increment here Don't declare multiple variables on the same line: int nextNode, dfLabels[numNodes], nextToVisit[numNodes], parents[numNodes], lastDFLabel = 0; Every coding standard is going to tell you not to do it. So get used to it. 
Technically there are a few corner cases that will come and hit you, and putting each variable on its own line makes it easier to read. Also note it looks like you are initializing all values to zero. That's not happening, so all of these (apart from the last one) are uninitialized. Also, defining arrays with a non-const size is not technically allowed: dfLabels[numNodes], Some compilers allow it because it is a C99 extension, but it's not really portable. Use std::vector (or std::array) instead. Don't put code on the same line as the close braces. I doubt anybody is going to like that style, even the people that want to save vertical line space. } for (int i = 0; i < 30; i++) printf("-"); And I prefer to put the statement the for loop is going to run on the next line so it is obvious what is happening (but I am not going to care that much about it). typedefing a structure is not needed in C++: typedef struct { // BLAH } GraphNode; This is just struct GraphNode { // BLAH }; When you loop over your containers you are using a C style (i.e. you are using an index). It is usually more conventional to use an iterator style to index over a container (even a simple array). Thus if you ever choose to move to another style of container the code itself does not need to be changed. The iterator style makes the code container agnostic. Personally I don't think it is a big deal in this limited context. for (int j = 0; j < nodes[i].degree; j++) You implement (badly) a sort function: for (int j = 0; j < nodes[i].degree; j++) for (int k = 0; k < j; k++) if (nodes[i].adjList[j] < nodes[i].adjList[k]) swap(int, nodes[i].adjList[j], nodes[i].adjList[k]); Let's just say in both C and C++ you should probably use the built-in sorting algorithms: std::sort(&nodes[i].adjList[0], &nodes[i].adjList[nodes[i].degree]); Or am I reading your intention here incorrectly? Is it some sort of randomizing feature? 
Either way you should have probably documented this to explain what you are trying to achieve!Here is a more C++ like version#include <vector>#include <map>#include <iterator>#include <stdexcept>#include <fstream>#include <iostream>class GraphVisitor; // We will implement the `Visitor Pattern`class Graph{ public: class GraphNode { public: // Allows us to visit each node. // Don't need to track progress via modifying the node void accept(GraphVisitor& visitor); private: // The streaming operators are defined so we can freeze and thaw data to a stream friend std::istream& operator>>(std::istream& stream, Graph::GraphNode& node); friend std::ostream& operator<<(std::ostream& stream, Graph::GraphNode const& node); // The data is private nobody actually needs to see this int nodeName; std::vector<int> link; }; // Graph knows how to implement depth first traversal. // External entities can't do this because that requires knowledge of the implementation void DepthFirstTraversal(size_t startNode); private: // The streaming operators are defined so we can freeze and thaw data to a stream friend std::istream& operator>>(std::istream& stream, Graph& node); friend std::ostream& operator<<(std::ostream& stream, Graph const & node); // The data is just the nodes std::vector<GraphNode> nodes;};// A visitor is easy// When you only have one node type.// So we just have one visit method.class GraphVisitor{ public: virtual ~GraphVisitor() {} virtual void visit(Graph::GraphNode& node, int name, std::vector<int> const& links) = 0;};// All the node does with the visitor is tell it that it can visit.void Graph::GraphNode::accept(GraphVisitor& visitor){ visitor.visit(*this, nodeName, link);}// Stream Nodesstd::ostream& operator<<(std::ostream& stream, Graph::GraphNode const& node){ stream << node.nodeName << : << node.link.size() << :; std::copy(node.link.begin(), node.link.end(), std::ostream_iterator<int>(stream, )); return stream;}std::istream& operator>>(std::istream& stream, 
Graph::GraphNode& node){ char ignore; int count; stream >> node.nodeName >> ignore >> count >> ignore; for(std::istream_iterator<int> loop(stream); count != 0; --count, ++loop) { node.link.push_back(*loop); } // For some reason you sort your nodes so I am sorting them here. // Or I am reading your original code incorrectly. std::sort(node.link.begin(), node.link.end()); return stream;}// Stream the whole graph.// This is simply the number of nodes followed by a list of nodes.std::ostream& operator<<(std::ostream& stream, Graph const & graph){ stream << graph.nodes.size() << " "; std::copy(graph.nodes.begin(), graph.nodes.end(), std::ostream_iterator<Graph::GraphNode>(stream, " ")); return stream;}std::istream& operator>>(std::istream& stream, Graph& graph){ int nodeCount; stream >> nodeCount; // We could implement it like this, // as this would match your current file format nicely, and if there is anything after the nodes it still works. /* for(int loop=0;loop < nodeCount; ++loop) // I implemented it like this because { Graph::GraphNode node; stream >> node; graph.nodes.push_back(node); } */ // But if there is nothing after the nodes it could be implemented like this: std::copy(std::istream_iterator<Graph::GraphNode>(stream), std::istream_iterator<Graph::GraphNode>(), std::back_inserter(graph.nodes) ); return stream;}// A visitor object used by Graph::DepthFirstTraversal// It tracks which nodes it has already seen,// so you do not need to visit them again.// When a node says it can visit then just visit all the linked nodes.// If you get a call from a node that has already visited then just exit.class DepthFirstVisitor: public GraphVisitor{ public: DepthFirstVisitor(std::vector<Graph::GraphNode>& n) : nodes(n) {} private: virtual void visit(Graph::GraphNode& node, int name, std::vector<int> const& links) { if (visited[&node]) { return; } visited[&node] = true; std::cout << "Visiting Node: " << name << "\n"; for(std::vector<int>::const_iterator loop = links.begin();loop != 
links.end();++loop) { nodes[*loop].accept(*this); } } std::map<Graph::GraphNode const*, bool> visited; std::vector<Graph::GraphNode>& nodes;};// Implementing the Depth First traversal is as easy as calling the first node with// the correct visitor objectvoid Graph::DepthFirstTraversal(size_t startNode){ if (startNode >= nodes.size()) { throw std::runtime_error("Failed"); } DepthFirstVisitor visitor(nodes); nodes[startNode].accept(visitor);}int main(){ Graph graph; std::ifstream digraph("digraph.data"); digraph >> graph; // Very easy to load a graph. std::cout << graph << "\n\n\n"; // Printing it out is also easy graph.DepthFirstTraversal(1); // start at node one instead of zero for some reason!!!!!.}
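To make the review's two library suggestions concrete — std::sort in place of the hand-rolled j/k comparison loop, and istream_iterator in place of the manual extraction loop — here is an isolated sketch (the function names are mine, not from the post):

```cpp
#include <algorithm>
#include <istream>
#include <iterator>
#include <sstream>
#include <vector>

// Replaces the nested j/k loop with the standard sort.
std::vector<int> sortedAdjList(std::vector<int> adj)
{
    std::sort(adj.begin(), adj.end());
    return adj;
}

// Replaces the manual "digraph >> ..." loop: drain whitespace-separated
// ints from any stream into a vector.
std::vector<int> readInts(std::istream& in)
{
    return std::vector<int>(std::istream_iterator<int>(in),
                            std::istream_iterator<int>());
}
```

Both behave like the suggestions in the review: sortedAdjList orders an adjacency list in one call, and readInts drains any stream, for instance an istringstream, without an explicit loop.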
_unix.347260 | I am new to unix (just a week into it) and I have a problem. There are two files:
2|1019|0|12
2|1019|3|0
2|1021|0|2
2|1021|2|0
2|1022|4|5
2|1030|0|1
2|1030|5|0
2|1031|4|4
and
2|1019|0|12
2|1019|3|10
2|1021|0|22
2|1021|2|0
2|1022|4|15
One is an output file and the other an input file. If the values in column 2 match, I want to sum the values in columns 3 and 4 of both lines, else just the sum of the values in columns 3 and 4 in the unique line, for both input and output, then compare the totals (of columns 3 and 4) and, if different, raise a message with the value of column 2 for which the sums do not match.
Total of
1019 15
1021 4
1022 9
1030 6
1031 8
in input.
Total of
1019 25
1021 24
1022 19
in output.
Expected output: Unequal total for 1019,1021,1022
Note: The values in input and output are pipe (|) separated.
Ran this script
awk -F '|' '{Arr[$2]=Arr[$2]+$3+$4}END{ for(i in Arr)print "amount for planId " i" is :"Arr[i]}'
on the first file and got this output:
amount for planId is :0
amount for planId 1019 is :12
amount for planId 1021 is :4
amount for planId 1022 is :9
amount for planId 1030 is :6
amount for planId 1031 is :8
Don't know why the first line comes out as "amount for planId is :0". | Matching the data in input vs output file | shell;awk | Another awk approach:
$ awk -F'|' 'NR==FNR{a[$2]+=$3+$4; next} {b[$2]+=$3+$4;} END{ for(i in a){ if(i in b && a[i]!=b[i]){ print "Unequal total for",i } } }' input output
Unequal total for 1019
Unequal total for 1021
Unequal total for 1022
Or, if you really need the exact output you show in your question:
$ awk -F'|' 'NR==FNR{a[$2]+=$3+$4; next} {b[$2]+=$3+$4;} END{ for(i in a){ if(i in b && a[i]!=b[i]){ c[i] } } printf "Unequal total for "; for(i in c){printf "%s, ", i} }' input output | perl -pe 's/,\s*$/\n/'
Unequal total for 1019, 1021, 1022
FNR is the current file's line number and NR is the overall line number of all input. The two are equal only while the first file is being read. 
So, NR==FNR{a[$2]+=$3+$4; next} will add the sum of the 3rd and 4th column, to the value associated with the 2nd column in the array a, and do so only for the first file. The next ensures that we move to the next line and don't execute the rest of the script for the current line. The {b[$2]+=$3+$4;} will only be run if the previous one wasn't, if we're reading the second file. It does the same thing for the values in the second file, except it saves them in the array b. Once we reach the end of all input, the END{} block is executed. This will iterate over all keys in a and, if they are also keys in b, and their value is not the same, it prints out the key. |
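The two-file NR==FNR comparison from the shell;awk answer earlier can be reproduced end-to-end with that question's data. Note the explicit -F'|' so that $2 refers to the pipe-separated plan id, and that the comparison reads b[i] — the same key in both arrays:

```shell
# Recreate the question's two pipe-separated files in a scratch directory
# (the file names `input`/`output` are the same placeholders the answer uses).
cd "$(mktemp -d)"
printf '%s\n' '2|1019|0|12' '2|1019|3|0' '2|1021|0|2' '2|1021|2|0' \
              '2|1022|4|5' '2|1030|0|1' '2|1030|5|0' '2|1031|4|4' > input
printf '%s\n' '2|1019|0|12' '2|1019|3|10' '2|1021|0|22' \
              '2|1021|2|0' '2|1022|4|15' > output

# sort gives a stable order, since awk's `for (i in a)` order is unspecified.
diffs=$(awk -F'|' 'NR==FNR{a[$2]+=$3+$4; next} {b[$2]+=$3+$4}
                   END{for(i in a) if(i in b && a[i]!=b[i]) print "Unequal total for", i}' \
        input output | sort)
echo "$diffs"
```

The keys 1030 and 1031 are skipped because they never appear in the second file, exactly as the `i in b` guard intends.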
_softwareengineering.144546 | Just like the question asks, is there a need to add Debug.Assert into your code if you are writing unit tests (which have their own assertions)? I could see that this might make the code more obvious without having to go into the tests; however, it just seems that you might end up with duplicated asserts. It seems to me that Debug.Assert was helpful before unit-testing became more prevalent, but is now unnecessary. Or, am I not thinking of some use case? | Is Debug.Assert obsolete if you write unit tests? | unit testing;assertions | Assertions make assumptions explicit. Well-written unit tests do the same, but in an entirely different place. Therefore, assertions, interspersed into the logic, help document the code and make it more understandable. This is how I use assertions. This is helpful in particular when implementing algorithms from theoretical descriptions, which usually offer the assertions as part of a correctness proof. In a way, unit tests and assertions are two sides of the same coin: unit tests attempt to ensure correctness by trial, while assertions attempt to ensure it by proof. Furthermore, assertions may help debug such algorithms because they offer sub-unit resolution, if you will. Unit tests, on the other hand, are inherently constrained to a coarser resolution: they can't tell you anything about the internal state of an algorithm. Having small units helps, but isn't always feasible.
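A minimal illustration of that "sub-unit resolution" point (the example code is mine, not from the answer): the assert below states a loop invariant taken from the usual correctness proof of binary search — something a unit test, which only sees inputs and outputs, cannot observe:

```cpp
#include <cassert>
#include <vector>

// Integer binary search. The assertion documents the mid-algorithm
// invariant: the search window always stays inside [0, size].
int lowerBound(const std::vector<int>& sorted, int key)
{
    int lo = 0, hi = static_cast<int>(sorted.size());
    while (lo < hi) {
        assert(0 <= lo && lo <= hi && hi <= static_cast<int>(sorted.size()));
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] < key) lo = mid + 1;
        else                   hi = mid;
    }
    return lo;  // first index whose element is >= key
}
```

A unit test plays the external role, asserting for instance that lowerBound({1,3,5,7}, 5) == 2; the in-code assertion would fire on the exact iteration where the invariant breaks, which is precisely the finer resolution the answer describes.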
_cs.76189 | I have two numbers, which are each the product of a large number of smaller numbers that I know. I want to find the GCD (Greatest common divisor) of these two numbers. Is there any way I can make use of the partial factorization I have to speed up the process?In particular, each larger number is the product of $2^{15}$ smaller numbers, each of which is on the order of $2^{4000}$. I don't know anything about the factorization of the smaller numbers.Edit:While the input numbers are about 120,000,000 bits, the GCD is about 500,000 bits. The factors of the numbers are in particular in sequence. They are all integers in a consecutive range.All of the GCD algorithms I've seen make use of the numbers directly, not in a partially factored form or anything. Are there any algorithms which could incorporate this information to speed things up? | GCD of a pair of products | algorithms;number theory;performance | null |
_cstheory.34181 | Problem:Given a finite set of strings $\{x_1, x_2, ..., x_n\}$ of length $\ell$ or less from some finite alphabet $\Sigma=\{a_1, a_2, ..., a_k\}$, find the minimal context free grammar that recognizes all of these strings.If $k$ is a constant, and $n = poly(\ell)$, what can one say about this problem's complexity as $\ell$ grows?This seems similar to the smallest grammar problem, which is NP-Hard for the optimization problem. However it is a little different because of having multiple strings, since now one needs to recognize each string independently instead of recognizing all of them at once. The smallest grammar problem is clearly a subset of this, but for small $\ell$ this might be much easier to solve, I'm not sure.Is there a way to approximately solve this problem efficiently? | Finding a minimal context free grammar that recognizes a finite set of strings of bounded length | np hardness;approximation hardness;context free | null |
_webapps.29319 | How can I make some guests of my event able to change the event? I would expect an answer in G+ Events help, but there's nothing about this topic. | More admins in Google+ Event | google plus;google plus events | Google+ does not have the ability to allow guests to edit an Event as of Feb 2014. One alternative is to Create a Google+ PageInvite Event managers/administrators to become Manager of your PageAll Google+ Page managers will automatically have the permissions to edit an Event |
_codereview.40379 | Write getfloat, the floating-point analog of getint. What types does getfloat return as its function value? getfloat would also return an integer value. Here is my solution:int getfloat(double *pf) { int c, sign; double power; while(isspace(c = getch())) ; if(!isdigit(c) && c != EOF && c != '-' && c != '+' && c != '.') { ungetch(c); /* push the number back in buffer */ return 0; /* not a valid number */ } sign = (c == '-') ? -1 : 1; if(c == '-' || c == '+') c = getch(); if(!isdigit(c) && c != '.') { ungetch(c); return 0; /* not a number */ } for(*pf = 0.0; isdigit(c); c = getch()) *pf = *pf * 10 + (c - '0'); if(c == '.') c = getch(); for(power = 1.0; isdigit(c); c = getch()) { *pf = *pf * 10 + (c - '0'); power *= 10; } *pf *= sign; *pf /= power; /* moving the decimal point */ if(c != EOF) ungetch(c); return c; /* it actually returns the value of the character that caused the break of the second for */}I have changed the type of the parameter *pf to double, thus it will handle the floating-point representation of a number. The condition of the first if-statement was extended, such that it will not push a decimal point back into the buffer. The second for computes the decimal part of the number, storing the number of places that the decimal point should be moved in the variable power. If there is no decimal part, power will have the value 1 - thus, nothing will change if I divide *pf by power when power is 1. | getfloat, the floating point analog of getint | c;parsing;floating point | It obviously does not handle NAN and INF. Maybe outside code's goals. Surprisingly, it does not handle exponential notation such as 1.23e+56. if(!isdigit(c) && c != '.') { returns a 0 implying "not a number" and ungets the offending c. But a potential + or - was already consumed and not available for alternative parsings. 
Not sure of the best way to handle this, but maybe consider that once a non-white-space char is irrevocably consumed, this function should not return an "error" signal and should set *pf to something like 0 or NAN. Minor: The *pf /= power; approach fails with input such as 123.00000...(300+ zeros) as power or *pf already became INF. This is a pedantic point: once you get past about 17 (depends on double) decimal digits, the addition of (c - '0') to *pf * 10 becomes moot. To get a more accurate significand, forming the number right-to-left has precision advantages. The return c; could return 0, indicating "not a number". Recommend an alternative return value scheme. ungetch is not in the C spec. Consider ungetc().
_unix.162586 | I need to iterate through every file inside a directory. One common way I saw was using the for loop that begins with for file in *; do. However, I realized that it does not include hidden files (files that begin with a .). The other obvious way is then do something like for file in `ls -a`; doHowever, iterating over ls is a bad idea because spaces in file names mess everything up. What would be the proper way to iterate through a directory and also get all the hidden files? | proper way to iterate through contents in a directory | shell script;ls;wildcards;for;dot files | You just need to create a list of glob matching files, separated by space:for file in .* *; do echo $file; doneEditThe above one can rewrite in different form using brace expansion for file in {.*,*}; do echo $file; doneor even shorter for file in {.,}*; do echo $file; doneAdding path for selected files: for file in /path/{.,}*; do echo $file; doneFor completeness it is worth to mention that one can also set dotglob to match dot-files with pure *.shopt -s dotglob |
_unix.127745 | This is a dumb question, but I recently moved from Windows to Linux, and I see one strange thing:
./configure
or
nano /something/something2
Is this something2 a file? It looks like a directory; I don't understand how it can be edited. Same for ./configure. What does it mean? | About the .file and ./directory confusion | files;filenames | null
_codereview.31130 | There is an e-commerce site that I'm working on that has thousands of pages in it (mostly product pages). I've got some jQuery that I'm running on about 1300 of these products. Only specific products need the code to run so what I've done is set up the product codes (which display in the URL of the product) in a JSON data sheet and I'm looping through the JSON data like this:$(document).ready(function(){$.getJSON('js/round.js', function(data) { var location_var = location.pathname.indexOf('/product-p/main-product.htm') for(i=0; i<data.records.length-1; i++){ var location_var = location_var + || location.pathname.indexOf(' + data.records[i].productcode + '); } if (location_var != -1){ //DO SOMETHING TO THESE SPECIFIC PAGES }}); });Now this functions but obviously it has to handle the 1300 hundred rows, check and then run my code. This causes a slight lag and unfortunately that lag will run on all of the pages because it has to determine if the page is correct for every page that it goes through.This is the only way I could think of doing this (my Javascript is a bit limited at this point) so I'm curious if anybody knows of a way to do this more efficiently and to make the query faster.Thanks for your help! | More Efficient Javascript indexOf query | javascript;jquery;json | null |
_softwareengineering.278782 | We currently have a web app (.net MVC 5), user's login using their username and password and then we store an encrypted value in a cookie to authenticate them on future requests.We are now in the process of designing an API for that will be mainly used by our mobile app (will be built once the API is ready). What I/we are trying to understand is the best way to secure requests made to the API. Our current plan is to have a simple API method that accepts a username and password, which will then validate these details and on success return a token that is unique to that user. This token will then be used on each subsequent request to authenticate. The issue here is that if that token is stolen then someone could impersonate that user quite easily. So should we be sending back some kind of token secret that we can use to sign each request (using something like HMAC) then this can be checked on the server before allowing the request to continue? | Securing a Web Api for individual Users | security;web api;asp.net mvc web api | null |
_unix.301864 | Is there any way to tell if a file system (regardless of its type) has been resized? Specifically, shrunk? | How to tell if a file system has been shrunk? | filesystems;resize2fs | As far as I know, there is no direct way for this purpose.The only idea which springs to mind is to examine the metadata of the partition's contents, i.e. the filesystem metadata. If the size recorded in the metadata does not match the size of the partition, it may have been resized.Even if the contents have been resized too, some metadata may not have been updated to reflect the new size. For instance, the number of inodes in an ext2/3/4 filesystem is fixed at creation time and does not change when the filesystem gets resized, so being aware of such concepts and rules is needed. Assuming the filesystem was created with the default values, you can compare the output of mke2fs in simulation mode with the output of tune2fs -l. |
_unix.35483 | I have been using ZSH to do host name completion, and want to change the default behaviour. When I have multiple hosts with similar names, the completion does things that I don't care for. An example is best. Let's say I have these hosts:host01.stage.example.comhost02.stage.example.comhost01.prod.example.comhost02.prod.example.comNow, in my prompt, I will type:$ ssh hos<tab>zsh will show me:$ ssh host..example.com with the cursor right after host and shows me a menu with the host names in it. I like the menu showing me hostnames; I just don't want it to complete everything, because weird things happen. Most of the time I tab through and either have to delete host names or have extra stuff on the line I have to delete. A preferred way would be to not complete the rest of the hostname, something like:$ ssh hos<tab>zsh would hopefully show me:$ ssh hosthost01.stage.example.com host01.prod.example.comhost02.stage.example.com host02.prod.example.comAny thoughts? | ZSH host name completion behaviour change? | zsh | null |
_codereview.36707 | I have some code below that takes the users detected by the Kinect and assigns each one a specific random id range based on which player they are. I use this to grab the x,y coordinates of their left or right hand. I am trying to have the Kinect get coordinates for at least 6 people with this code. Is my approach right, or should I do it differently? I am using the Kinect SDK v1.8. Using skeletonFrameData As SkeletonFrame = e.OpenSkeletonFrame If skeletonFrameData Is Nothing Then Exit Sub End If ' sensor.SkeletonStream.AppChoosesSkeletons = True Dim allSkeletons(skeletonFrameData.SkeletonArrayLength - 1) As Skeleton skeletonFrameData.CopySkeletonDataTo(allSkeletons)
Player Identification: 'identify specific players by ID 'ResetValues() For j = 0 To playerid.Length - 1 'all players playerid(i) = CInt(Rnd() * 4 + (i * 5)) allSkeletons(i).TrackingId = playerid(0) Next j 'force each player to be moved to first blank spot Dim tempList As New List(Of Skeleton)(allSkeletons) tempList.RemoveAll(Function(sk) IsNothing(sk)) allSkeletons = tempList.ToArray Log("Tracking id for player#1: " + allSkeletons(0).TrackingId.ToString) Log("Tracking id for player#2: " + allSkeletons(1).TrackingId.ToString) Log("Tracking id for player#3: " + allSkeletons(2).TrackingId.ToString)
Make sure current player is getting tracked or not: If IsNothing(s) Then Exit Sub End If If s.TrackingState = SkeletonTrackingState.Tracked Then activeCount = activeCount + 1 End If If s.TrackingState = SkeletonTrackingState.PositionOnly Then passiveCount = passiveCount + 1 End If If s.TrackingState = SkeletonTrackingState.NotTracked Then nottracked = nottracked + 1 End If totalplayers = activeCount + passiveCount + nottracked 'Log("passive count: " + passiveCount.ToString + " date: " + Now.ToString)
Player x,y display: ' the first found/tracked skeleton moves the mouse cursor If s.TrackingState = SkeletonTrackingState.Tracked Then ' make sure both hands are tracked 'If 
Skeleton.Joints(JointType.HandLeft).TrackingState = JointTrackingState.Tracked AndAlso Skeleton.Joints(JointType.HandRight).TrackingState = JointTrackingState.Tracked Then Dim cursorX, cursorY As Integer ' get the left and right hand Joints Dim jointRight As Joint = s.Joints(JointType.HandRight) Dim jointLeft As Joint = s.Joints(JointType.HandLeft) ' scale those Joints to the primary screen width and height Dim scaledRight As Joint = jointRight.ScaleTo(CInt(Fix(SystemParameters.PrimaryScreenWidth)), CInt(Fix(SystemParameters.PrimaryScreenHeight)), SkeletonMaxX, SkeletonMaxY) Dim scaledLeft As Joint = jointLeft.ScaleTo(CInt(Fix(SystemParameters.PrimaryScreenWidth)), CInt(Fix(SystemParameters.PrimaryScreenHeight)), SkeletonMaxX, SkeletonMaxY) ' relativemouselocation.Content = jointRight.Position ' figure out the cursor position based on left/right handedness If LeftHand.IsChecked.GetValueOrDefault() Then cursorX = CInt(Fix(scaledLeft.Position.X)) cursorY = CInt(Fix(scaledLeft.Position.Y)) Else cursorX = CInt(Fix(scaledRight.Position.X)) cursorY = CInt(Fix(scaledRight.Position.Y)) End If Dim leftClick As Boolean ' figure out whether the mouse button is down based on where the opposite hand is If (LeftHand.IsChecked.GetValueOrDefault() AndAlso jointRight.Position.Y > ClickThreshold) OrElse ((Not LeftHand.IsChecked.GetValueOrDefault()) AndAlso jointLeft.Position.Y > ClickThreshold) Then leftClick = True ' MsgBox(clicked) Else leftClick = False End If 'if i is less then the total amount of players then send players coordinates for person that is active. 
If i <= 5 Then Select Case True Case Is = s.TrackingId >= 1 And s.TrackingId <= 4 i = 0 playeractive(i) = True player1xy.Content = cursorX & ", " & cursorY & ", " & leftClick Status.Text = "player1 identified " & cursorX.ToString & ", " & cursorY.ToString & ", " & leftClick.ToString ' sensor.SkeletonStream.ChooseSkeletons(1, 2) Case Is = s.TrackingId >= 5 And s.TrackingId <= 9 i = 1 playeractive(i) = True player2xy.Content = cursorX & ", " & cursorY & ", " & leftClick Status.Text = "player2 identified " & cursorX.ToString & ", " & cursorY.ToString & ", " & leftClick.ToString Case Is = s.TrackingId >= 10 And s.TrackingId <= 14 i = 2 playeractive(i) = True player3xy.Content = cursorX & ", " & cursorY & ", " & leftClick Status.Text = "player3 identified " & cursorX.ToString & ", " & cursorY.ToString & ", " & leftClick.ToString Case Is = s.TrackingId >= 15 And s.TrackingId <= 19 i = 3 playeractive(i) = True player4xy.Content = cursorX & ", " & cursorY & ", " & leftClick Status.Text = "player4 identified " & cursorX.ToString & ", " & cursorY.ToString & ", " & leftClick.ToString Case Is = s.TrackingId >= 20 And s.TrackingId <= 24 i = 4 playeractive(i) = True player5xy.Content = cursorX & ", " & cursorY & ", " & leftClick Status.Text = "player5 identified " & cursorX.ToString & ", " & cursorY.ToString & ", " & leftClick.ToString Case Is = s.TrackingId >= 25 And s.TrackingId <= 29 i = 5 playeractive(i) = True player6xy.Content = cursorX & ", " & cursorY & ", " & leftClick Status.Text = "player6 identified " & cursorX.ToString & ", " & cursorY.ToString & ", " & leftClick.ToString End Select Log("person #: " + i.ToString + " Tracking ID: " + s.TrackingId.ToString) currentperson.Content = i.ToString + " Tracking ID: " + s.TrackingId.ToString End If 'If playeractive(i) = True And i >= 0 And frame.SkeletonArrayLength > 0 Then 'NativeMethods.SendMouseInput(cursorX, cursorY, CInt(Fix(SystemParameters.PrimaryScreenWidth)), CInt(Fix(SystemParameters.PrimaryScreenHeight)), leftClick, totalplayers, i) 'End If 'if total players is 1 or greater send coordinate data. 
If totalplayers >= 1 Then DefineMouseData(cursorX, cursorY, leftClick) End If 'make the below code active when I get multiple player tracking working If playeractive(i) = True Then ' Environment.SetEnvironmentVariable("Player" + i.ToString + "xcoords", player(i).bytex.ToString, EnvironmentVariableTarget.Machine) ' Environment.SetEnvironmentVariable("Player" + i.ToString + "ycoords", player(i).bytey.ToString, EnvironmentVariableTarget.Machine) 'Environment.SetEnvironmentVariable("Player" + i.ToString + "leftclick", player(i).leftclick.ToString, EnvironmentVariableTarget.Machine) 'System.Threading.Thread.Sleep(300) End If If i <= 5 Then If playeractive(5) = True Then 'if 6th player exit for loop and open next frame. playeractive(5) = False Exit For Else 'd = d + 1 End If End If End If NumPassive.Content = "passive count: " + passiveCount.ToString Numactive.Content = "active count: " + activeCount.ToString nottrackedplayers.Content = "not tracked players: " + nottracked.ToString playeractive(i) = False
Choose specific player: For h = 0 To activeCount - 1 If h >= 2 Then If activeCount - 1 > 0 Then sensor.SkeletonStream.ChooseSkeletons(playerid(h), playerid(h + 1)) End If End If Next Next ResetValues() End Using
The important part is the sensor.SkeletonStream.ChooseSkeletons(s.TrackingId) line here. For more explanation: http://msdn.microsoft.com/en-us/library/microsoft.kinect.skeletonstream.chooseskeletons.aspx I want the players to be looped through in order, so that player 1's coordinates are retrieved first, then player 2's, and so on, so it does not seem slow. But I need the best method to do so. The problem is that ChooseSkeletons only handles 2 people. I can loop through each person, passing their id to ChooseSkeletons, and this gives me 4-6 people, but it seems inefficient because ChooseSkeletons takes too long (maybe improper usage?) | Identifying specific players with the kinect? | vb.net | null |
_computerscience.3678 | I'm trying to figure this out for my Computer Graphics course, but I find it very hard to understand. I believe the minimum number of angles obtained from clipping a convex polygon with n angles is 0, for example if the polygon is outside of the clipping rectangle, but I'm not sure if this makes sense.As for the maximum number of angles, I believe it is 2n, as in if every angle tip gets outside of the rectangle. However, I'm not sure about this either. | Maximal and minimal no. of angles obtained from clipping a convex polygon with n angles | polygon;clipping | Since this is a homework question, I'll give hints rather than a numerical answer.Clipping a convex polygonThink about how many times the polygon can cross each edge of the rectangle. In general, a polygon can cross one of the edges of the rectangle an arbitrarily large number of times. How is this number reduced if the polygon is convex?This will give you the maximal number of new angles that can be added per rectangle edge, which should lead you to the answer.Clipping a non-convex polygonAs for your guess of 2n, even for a non-convex polygon it may not be able to reach this high. If one vertex is outside the rectangle, then in order to create 2 new vertices both of its neighbouring vertices must be inside the rectangle. This means you cannot create 2n new vertices this way.The exception is when an edge of the polygon crosses 2 edges of the rectangle, for example at a corner of the rectangle. This allows creating two new vertices from a single edge. So for small enough n, it is possible to clip to a polygon with 2n vertices, but for larger n this is not possible. |
_webmaster.9376 | Possible Duplicate: Which Ecommerce Script Should I Use? Are there ready-made PHP scripts which would allow my USERS to create their own stores? I don't want an eBay clone, just a script which is easy to configure and customize. | Looking for PHP E-Commerce Script which allows USERS to create their own stores | php;ecommerce;looking for a script | null |
_cogsci.5215 | According to this thread, certain regions of the brain, and even some distributed activation patterns can be up/down regulated via bio-feedback.Is it possible in theory and is there any research where subjects were taught to consciously control neurotransmitter (e.g. serotonin, dopamine, etc...) levels?It seems that doing so would present with alternatives to pharmacological methods of altering neurotransmitter levels in treatment of various mood and cognitive disorders. | Neurotransmitter control via biofeedback? | neurobiology;neuroimaging;health psychology | null |
_cs.78048 | In CLRS, the substitution method is worked through on an example recurrence proof. We have the recurrence $$T(n) = 2T(\lfloor n/2 \rfloor) + n \tag{4.19}$$It claims:We guess that our solution is $T(n) = O(n \lg n)$.We assume that $T(m) \leq cm\lg m$ for all $m < n$ (in particular, for $m = \lfloor n/2 \rfloor$), and need to show that this bound on $T(m)$ implies the corresponding bound on $T(n)$.I'm fine with this; it seems like a standard case of strong induction, but what I don't understand is the explanation of the base case (which we also need to prove):Could someone explain this paragraph to me?More specifically:What are boundary conditions?How do we know the base case? We're only given $T(n)$ recursively; there's no initial case when $n$ is, for example, $1$. How, then, do we know that $T(1) = 1$?I don't understand what the paragraph is getting at. How do we determine the base case in the general case?Also, I've noticed that, in practice examples, the author fails to mention the base case completely when doing these types of proofs; that is, they assume the inductive hypothesis, and prove that (for all $m<n$) $T(m) \leq \text{something}$ implies $T(n) \leq \text{something}$, but they leave it at that, as though that were sufficient for the proof.In fact, these proofs don't even explicitly state the inductive hypothesis. For example, in the case of (4.19), they would write something like:$$\begin{aligned} T(n) &\leq 2c\lfloor n/2 \rfloor \lg(\lfloor n/2 \rfloor) + n \\ &\leq \dots \\ &\leq cn\lg n. \end{aligned}$$Is this acceptable, or would that constitute a sloppy proof? | What exactly is going on in a proof by induction of a recurrence relation? | proof techniques;recurrence relation | What are boundary conditions?The boundary condition here refers to the base case(s) of the induction. 
This makes more sense intuitively if we consider, for instance, a 2D case: suppose $T(m,n)$ is the running time of an algorithm whose input parameters $m$ and $n$ are non-negative integers. Now, assume we have a recurrence relation of the form, say, $T(m,n) = T(m-1,n)+T(m,n-1)$. Arguably, the boundary conditions that need to be supplied for an inductive proof of $T(m,n)$ would be $T(m,0)\ \forall\ m \geq 0$ and $T(0,n)\ \forall\ n \geq 0$. Thus, the set of $(m,0),\ m \geq 0$, together with $(0,n),\ n \geq 0$, can be thought of as a boundary (identifiable with the first-quadrant axes). Thus, coming back to the question at hand, the boundary conditions are nothing but the specification of the values of the recurrence at the edge of its applicable domain. In 1D this turns out to be the point(s) at the end of the interval.How do we know the base case? We're only given T(n) recursively; there's no initial case when n is, for example, 1. How, then, do we know that T(1)=1?When setting up a recurrence relation, it is always good practice to specify the domain of applicability of the relation. This, as we just saw above, can help identify the boundary of validity of the relation. Now, how we come to know the exact values taken by the function at the boundary is often a matter of understanding the algorithm for which we have set up the recurrence relation. In this particular case of $T(1) = 1$, we obtain $T(1)$ by manually computing (using the algorithm) the (worst-case) time it will take to work on an input of size 1.How do we determine the base case in the general case?This is where understanding the algorithm (and its correctness) becomes crucial. Knowing the boundary of the domain of the recurrence relation, together with the approximations we want to use to bound its range on a given input size, determines the cases for which we will need to supply the starting terms of the recurrence relation. 
For instance, even though the natural boundary point for the example in your question is the input of size 1, due to the approximation being used, namely $cn\lg(n)$, we need to perturb the boundary in order to get a new domain such that the approximation will work on all possible inputs in the new domain of the recurrence. This is why the input $1$ is discarded and the boundary is shifted (in 1D it is usually a single point if the interval is finite in one direction) to a new boundary point, $n_0 = 2$.Also, I've noticed that, in practice examples, the author has failed to mention the base case completely;...In fact, these proofs don't even explicitly state the inductive hypothesis...Is this acceptable or sloppy proof?You are correct in stating that a base case, as well as explicit verification using the inductive hypothesis, is needed for a proof by induction to be complete and valid. However, in most treatments of this nature, when the base case or inductive hypothesis is not explicitly invoked, it is usually an implicit invitation to the reader, who has been supplied with enough information to figure it out for themselves. |
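To make the boundary-shifting argument concrete, here is a sketch of the base-case check for recurrence (4.19) at the shifted boundary $n_0 = 2$. It assumes $T(1) = 1$ (as in CLRS), so $T(2) = 2T(1) + 2 = 4$ and $T(3) = 2T(1) + 3 = 5$:

```latex
% Verify T(n) <= c n lg n directly at the shifted boundary n_0 = 2, 3:
\begin{align*}
T(2) &= 4 \;\le\; c \cdot 2 \lg 2 = 2c
        && \text{holds whenever } c \ge 2,\\
T(3) &= 5 \;\le\; c \cdot 3 \lg 3
        && \text{holds whenever } c \ge 5/(3 \lg 3) \approx 1.05.
\end{align*}
% n = 1 cannot serve as a base case because c \cdot 1 \lg 1 = 0 < T(1),
% which is exactly why the boundary is perturbed away from 1.
```

Every $n > 3$ then reaches only these base cases through the inductive step, since $\lfloor n/2 \rfloor \ge 2$ for all $n \ge 4$.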
_unix.136164 | I have a project with a custom build system, so the Debian package is just one of the targets: I build it with dpkg-deb.But now I want to upload my package to an enterprise repo, and I need some files like .dsc and .changes; they should also be signed with PGP.What is the easiest way to sign an already-built package and generate those files? | How to sign already built Debian package? | debian;repository;packaging;signature | null |
_webmaster.7864 | I am working on a brand new SaaS web application and need to estimate the initial bandwidth usage. Since the site doesn't exist yet, and since this is my first endeavor of this sort, I'm not really sure how much bandwidth to estimate to begin with.We will be using Linux, Apache, PHP and Mysql.The content will be generated dynamically. There will be images as part of the site design but user's will also upload images that will be displayed and documents that will be stored for download at later times.We'd like to be able to support 500,000 page loads per month with estimated image loads being about two to three times that.Edit:Since the site isn't built yet, we're looking for suggestions on how to create a good faith estimate for what we'll need at a minimum to launch the web app. | How to Estimate Needed Bandwidth for New Web Application? | bandwidth;web applications;saas | null |
_webapps.54722 | Pinterest lets users post images from other websites as a sort of image bookmark. Often when I come across a Pinterest post, I am interested in the original full-resolution source.Pinterest posts have a Website link pointing to the web page the user pinned the image from. For this picture of a puppy, this links back to the gallery page for that specific image on Imgur (http://imgur.com/gallery/PSP7XRR), which is good.However, frequently Pinterest posts just link uselessly to a website's home page or other time-dependent pages (commonly Tumblr blogs), which doesn't help much for finding the original image. For instance, this puppy picture goes to the source website's home page (http://www.cutestpaw.com/).Can you get the source URL of the image itself from Pinterest?(I'm aware of external reverse-image-search tools like Google Images and TinEye; these can't always find the source of an image. I'm asking whether Pinterest lets you see where it got the image from.)Somewhat related (also involving image source URLs): Exclude Pinterest pins of the same image from the same domain | Obtain Pinterest image's source URL | images;pinterest | null |
_unix.48171 | Is there any way to update packages installed with yaourt? yaourt -Syu seems to do the same as pacman -Syu, which only cares about packages in the official repositories. | Check for updates of packages installed through yaourt | arch linux;package management;pacman;yaourt | You can update your system including AUR packages with: yaourt -Syua https://wiki.archlinux.org/index.php/Yaourt |
_unix.166434 | I have a tab-delimited file like this:
chr1 53736473 54175786
chr1 56861276 56876438
chr1 57512145 57512200
I want to concatenate the three fields so the result looks like this:
chr1:53736473-54175786
chr1:56861276-56876438
chr1:57512145-57512200
I tried paste -d ':-' file, which apparently didn't work. Could anyone help? Ideally with a simple unix command; I know it is rather easy in a higher-level language. | Concatenate different fields with different separator | text processing;paste | With sed:
$ sed 's/\(.*\)\t\(.*\)\t/\1:\2-/' file
chr1:53736473-54175786
chr1:56861276-56876438
chr1:57512145-57512200
With printf:
printf "%s:%s-%s\n" $(< file)
chr1:53736473-54175786
chr1:56861276-56876438
chr1:57512145-57512200 |
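Since the task is purely per-field, awk is also a natural fit here. This sketch is an addition, not from the original answer; the file name regions.txt is made up for the demo:

```shell
#!/usr/bin/env bash
# Recreate the question's tab-delimited input and join the three
# fields as field1:field2-field3 with awk.
printf 'chr1\t53736473\t54175786\nchr1\t56861276\t56876438\n' > regions.txt

result=$(awk -F'\t' '{ print $1 ":" $2 "-" $3 }' regions.txt)
printf '%s\n' "$result"
rm -f regions.txt
```

Unlike the printf $(< file) trick, the awk version does not rely on word splitting of the whole file, so it keeps working if a field ever contains glob characters.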
_softwareengineering.327933 | I'm curious about how the flight/train search engines combine the results from multiple sources. For example, let's say I'm asking to go from London to Paris, and let's assume there are no direct flights for whatever reason. However, there is a flight from London to Lille (North of France), and then a train from there to Paris. An extreme example would be where there is no direct connection either, but you can get to your destination by combining plane, train, bus and finally taxi or a ride-sharing service.How would a search engine effectively find the best option? It has access to a basic API from each provider that allows it to ask for rides from point A to B at a specific time. It does not have a database of all available rides and flights though, it can only query each provider's API but those queries are quite slow (if you have to do hundreds of them) and costly, so the objective is to minimize the amount of queries.I'm thinking about making a small ride-sharing comparator as a side project (and maybe include bus/train/taxi as well) and I'm not sure where to start or if this is even doable considering the constraints.I believe my problem is not simply about connecting the dots - the suggested questions assume that you know all the dots and you just need to find the best path. In my case it's a bit different because not only do I not know the best path, but I do not even know which dots do I have. I can make queries such as is there a ride from A to B but I can't ask give me all the rides you offer, which means I'm looking for advice on how can I efficiently query for potential dots without using the bruteforce approach of asking for all possible combinations (which would DoS the provider's site). | How do travel search engines combine flights | comparison;search engine;graph traversal;geospatial | null |