_webmaster.100674
I'm trying to update a page for a MediaWiki database running MW version 1.26.4. The MediaWiki is currently suffering unexplained Internal Server Errors, so I am trying to perform an end-around by updating the database directly.

I logged into the database with the proper credentials. I dumped the table of interest and I see the row I want to update:

```
MariaDB [my_wiki]> select * from wikicryptopp_page;
+---------+----------------+---------------------------------------------------------------------------+-------------------+------------------+-------------+--------------------+----------------+-------------+----------+--------------------+--------------------+-----------+
| page_id | page_namespace | page_title                                                                | page_restrictions | page_is_redirect | page_is_new | page_random        | page_touched   | page_latest | page_len | page_content_model | page_links_updated | page_lang |
+---------+----------------+---------------------------------------------------------------------------+-------------------+------------------+-------------+--------------------+----------------+-------------+----------+--------------------+--------------------+-----------+
|       1 |              0 | Main_Page                                                                 |                   |                0 |           0 |     0.161024148737 | 20161011215919 |       13853 |     3571 | wikitext           | 20161011215919     | NULL      |
...
|    3720 |              0 | GNUmakefile                                                               |                   |                0 |           0 |     0.792691625226 | 20161030095525 |       13941 |    36528 | wikitext           | 20161030095525     | NULL      |
...
```

I know exactly where the insertion should occur, and I have the text I want to insert. The Page Title is GNUmakefile, and the Page ID is 3720.

The text is large at 36+ KB, and it's sitting on the filesystem in a text file. How do I manually insert the text into the existing table row?
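For what it's worth, a hedged sketch of what such a direct update could look like. This assumes the stock pre-1.31 MediaWiki schema (where the wikitext lives in the `text` table, reached via `revision`, not in `page` itself) and the `wikicryptopp_` prefix visible in the dump; the file path is invented. Back up the database first, since editing it behind MediaWiki's back leaves `page_len`, `rev_len`, and the caches stale:

```sql
-- Hypothetical path; LOAD_FILE() requires the file to be readable by mysqld
-- and the FILE privilege for the connected user.
SET @txt = LOAD_FILE('/tmp/gnumakefile.txt');

-- page_latest for GNUmakefile is 13941 in the dump above; follow it through
-- revision to the text row that actually stores the wikitext.
UPDATE wikicryptopp_text
   SET old_text = @txt
 WHERE old_id = (SELECT rev_text_id
                   FROM wikicryptopp_revision
                  WHERE rev_id = 13941);
```

Afterwards a run of the maintenance scripts (e.g. `maintenance/rebuildall.php`) would be needed to bring lengths and caches back in sync.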
Manually insert text into existing MediaWiki table row?
mediawiki;mysql;database
null
_hardwarecs.2159
I sometimes need to tweak (Python) software on a PC that doesn't have a parallel port, yet the live system does and the software uses it. I need some way to convince my system that it has a parallel port - ideally a USB to parallel port converter that would function at some level - but if not, then just something that fakes it, so long as I can run the software without having to comment out all the code that tries to access the parallel port.

All the parallel port to USB converters I've seen appear to be for printers only and don't fake a port.

I realize that even if it did work, the timing would not be as precise as with a dedicated parallel port - I don't care about this though, as it's not being used to test that aspect of the software.
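If no hardware answer turns up, one way to fake it purely in software is to shadow the real port driver with a stub object. The class and method names below (`Parallel`, `setData`, `getData`) follow pyparallel's interface, which is an assumption on my part; substitute whatever your software actually imports.

```python
class FakeParallel(object):
    """Stand-in for pyparallel's Parallel: remembers the last byte written."""

    def __init__(self, port=0):
        self.port = port
        self._data = 0

    def setData(self, value):
        # Real hardware would drive the 8 data lines; we just store the byte.
        self._data = value & 0xFF

    def getData(self):
        return self._data


try:
    # Use the real driver when present (pyparallel); otherwise fall back.
    from parallel import Parallel
except ImportError:
    Parallel = FakeParallel

port = Parallel()
port.setData(0xA5)
```

Since the fake only needs to satisfy the calls your code makes, you can keep it to a handful of no-op methods.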
USB to parallel port converter for more than printers
usb
null
_codereview.151423
I wrote a program to show graphically the solution of the problem of the Tower of Hanoi. Here is a video of an example run. The logic of the solving is taken from StackOverflow (see links in docstring); I wrote the part dealing with the graphics, so review is most welcome on the graphical part that I wrote. The code is pretty simple, tested and documented, so no further introduction is needed.

```python
# -*- coding: utf-8 -*-
import doctest
import time

import pygame

SPACE_PER_PEG = 200


def hanoi(pegs, start, target, n):
    """
    From stackoverflow:
    http://stackoverflow.com/questions/23107610/towers-of-hanoi-python-understanding-recursion
    http://stackoverflow.com/questions/41418275/python-translating-a-printing-recursive-function-into-a-generator

    This function, given a starting position of an hanoi tower, yields
    the sequence of optimal steps that leads to the solution.

    >>> for position in hanoi([ [3, 2, 1], [], [] ], 0, 2, 3): print position
    [[3, 2], [], [1]]
    [[3], [2], [1]]
    [[3], [2, 1], []]
    [[], [2, 1], [3]]
    [[1], [2], [3]]
    [[1], [], [3, 2]]
    [[], [], [3, 2, 1]]
    """
    assert len(pegs[start]) >= n, 'not enough disks on peg'
    if n == 1:
        pegs[target].append(pegs[start].pop())
        yield pegs
    else:
        aux = 3 - start - target  # start + target + aux = 3
        for i in hanoi(pegs, start, aux, n-1):
            yield i
        for i in hanoi(pegs, start, target, 1):
            yield i
        for i in hanoi(pegs, aux, target, n-1):
            yield i


def display_pile_of_pegs(pegs, start_x, start_y, peg_height, screen):
    """
    Given a pile of pegs, displays them on the screen, nicely inpilated
    like in a piramid, the smaller in lighter color.
    """
    for i, pegwidth in enumerate(pegs):
        pygame.draw.rect(
            screen,
            # Smaller pegs are ligher in color
            (255-pegwidth, 255-pegwidth, 255-pegwidth),
            (
                start_x + (SPACE_PER_PEG - pegwidth)/2,  # Handles alignment putting pegs in the middle, like a piramid
                start_y - peg_height * i,  # Pegs are one on top of the other, height depends on iteration
                pegwidth,
                peg_height
            )
        )


def visual_hanoi_simulation(number_of_pegs, base_width, peg_height, sleeping_interval):
    """
    Visually shows the process of optimal solution of an hanoi tower problem.
    """
    pegs = [[i * base_width for i in reversed(range(1, number_of_pegs+1))], [], []]
    positions = hanoi(pegs, 0, 2, number_of_pegs)
    pygame.init()
    screen = pygame.display.set_mode((650, 650))
    pygame.display.set_caption('Towers of Hanoi')
    for position in positions:
        screen.fill((255, 255, 255))
        for i, pile in enumerate(position):
            display_pile_of_pegs(pile, 50 + SPACE_PER_PEG*i, 500, peg_height, screen)
        pygame.display.update()
        time.sleep(sleeping_interval)
    pygame.quit()


if __name__ == '__main__':
    doctest.testmod()
    visual_hanoi_simulation(
        number_of_pegs = 4,
        base_width = 30,
        peg_height = 40,
        sleeping_interval = 0.5
    )
    visual_hanoi_simulation(
        number_of_pegs = 8,
        base_width = 20,
        peg_height = 30,
        sleeping_interval = 0.2
    )
```
Tower of Hanoi: graphical representation of optimal solution
python;python 2.7;recursion;pygame
null
_reverseengineering.2730
When I try to debug Windows Media Player and play a movie, this error message is shown: "Running this process under a debugger while using DRM content is not allowed." But with drmdbg it is possible to debug wmplayer and play a movie. Is there a solution to overcome this?
How to debug Windows Media Player without attach mode?
debuggers;debugging
null
_unix.134415
Whenever I sleep my laptop (I mean when I close the lid), the sound stops working, and every time I have to restart my laptop to get sound working again. For instance, right now sound is not working (since I closed the lid a few minutes ago), so I checked the sound app, and in the output tab it shows "Speakers Built-in Audio", but I still can't hear any output sound. I've even tried sudo alsa force-reload, but the problem remains.
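Not an answer from the thread, just a sketch of one common workaround: hook the force-reload into resume so it doesn't require a reboot. On systemd-based Ubuntu releases, executable scripts in /lib/systemd/system-sleep/ run around suspend (older releases used pm-utils hooks instead); the file name below is made up.

```sh
#!/bin/sh
# Hypothetical /lib/systemd/system-sleep/99-alsa-resume
# systemd calls this with $1 = "pre" before suspend and "post" after resume.
case "$1" in
    post)
        alsa force-reload    # same command as in the question, now automated
        ;;
esac
```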
Ubuntu sound not working after sleep/suspend
ubuntu;audio;alsa
null
_webmaster.34074
I need a booking system for a theoretical project website. It would be an in-house job (not outsourcing to a web service), but all Google searches on the subject yield results for said web services.

I'd want to be able to use the system as such: for each day, availability is recorded, and if a day is available a user can book it using the website, which sets that date to unavailable. There are other complexities, but this is the basic system I am trying to achieve - what would allow me to implement something like this?
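The day-level logic described above is tiny in whatever stack you pick; here is a minimal sketch in Python (the class and names are invented for illustration). The real work in an in-house build is wrapping this in a database table with a uniqueness constraint on the date, so two simultaneous requests cannot both succeed.

```python
import datetime

class BookingCalendar:
    """Toy model of per-day availability: a date is free until someone books it."""

    def __init__(self):
        self._bookings = {}          # date -> user who booked it

    def is_available(self, day):
        return day not in self._bookings

    def book(self, day, user):
        """Book the day for user; returns False if it was already taken."""
        if day in self._bookings:
            return False
        self._bookings[day] = user
        return True

cal = BookingCalendar()
day = datetime.date(2013, 1, 15)
print(cal.book(day, "alice"))    # True  - the day was free
print(cal.book(day, "bob"))      # False - already taken
```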
What is appropriate for creating a booking system?
looking for a script;web applications
null
_codereview.71611
How to improve the functionality, exception safeness and other aspects of the following container adaptors?

Allows to write more generic code (not working for, say, <set>):

```cpp
#include <utility>
#include <iterator>
#include <type_traits>

template< typename container,
          bool = std::is_const< std::remove_reference_t< container > >::value >
struct consumable_container;

template< typename container >
struct consumable_container< container, false >
{
    consumable_container(container && _container) noexcept
        : container_(std::forward< container >(_container))
    { ; }

    auto begin() noexcept { return std::make_move_iterator(std::begin(container_)); }
    auto end() noexcept { return std::make_move_iterator(std::end(container_)); }

private :
    container container_;
};

template< typename container >
struct consumable_container< container, true >
{
    static_assert(!std::is_rvalue_reference< container >::value);

    consumable_container(container && _container) noexcept
        : container_(std::forward< container >(_container))
    { ; }

    auto begin() const noexcept { return std::cbegin(container_); }
    auto end() const noexcept { return std::cend(container_); }

private :
    container container_;
};

template< typename container >
consumable_container< container >
move_if_not_const(container && _container) noexcept
{
    return std::forward< container >(_container);
}

// some generic code
#include <list>

template< typename container >
auto
transform_to_list(container && _container) noexcept
{
    static_assert(std::is_reference< container >::value
                  || !std::is_const< container >::value);
    std::list< typename std::remove_reference_t< container >::value_type > list_;
    for (auto && value_ : move_if_not_const(std::forward< container >(_container))) {
        list_.push_back(std::forward< decltype(value_) >(value_));
    }
    return list_;
}

// testing data type
#include <iostream>

struct A
{
    A() { std::cout << __PRETTY_FUNCTION__ << std::endl; }
    ~A() { std::cout << __PRETTY_FUNCTION__ << std::endl; }
    A(A const &) { std::cout << __PRETTY_FUNCTION__ << std::endl; }
    A(A &) { std::cout << __PRETTY_FUNCTION__ << std::endl; }
    A(A &&) { std::cout << __PRETTY_FUNCTION__ << std::endl; }
    A & operator = (A const &) { std::cout << __PRETTY_FUNCTION__ << std::endl; return *this; }
    A & operator = (A &) { std::cout << __PRETTY_FUNCTION__ << std::endl; return *this; }
    A & operator = (A &&) { std::cout << __PRETTY_FUNCTION__ << std::endl; return *this; }
};

// testing
#include <deque>
#include <forward_list>
#include <vector>
#include <cstdlib>

int
main()
{
    {
        std::deque< A > const deque_(1);
        std::list< A > ld_ = transform_to_list(deque_);
    }
    std::cout << std::endl;
    {
        std::forward_list< A > forward_list_;
        forward_list_.push_front(A{});
        std::list< A > ls_ = transform_to_list(forward_list_);
    }
    std::cout << std::endl;
    {
        std::vector< A > vector_;
        vector_.push_back(A{});
        std::list< A > lv_ = transform_to_list(std::move(vector_));
    }
    return EXIT_SUCCESS;
}
```

Allows to enumerate elements in reverse order:

```cpp
#include <utility>
#include <iterator>

template< typename container >
struct reversed_container
{
    reversed_container(container && _container) noexcept
        : container_(std::forward< container >(_container))
    { ; }

    auto begin() noexcept { return std::rbegin(container_); }
    auto end() noexcept { return std::rend(container_); }

private :
    container container_;
};

template< typename container >
reversed_container< container >
reverse(container && _container) noexcept
{
    return std::forward< container >(_container);
}

// testing
#include <iostream>
#include <vector>
#include <cstdlib>

int
main()
{
    std::vector< int > vector_({1, 2, 3, 4, 5});
    // using
    for (int i : reverse(vector_)) {
        std::cout << i << std::endl;
    }
    return EXIT_SUCCESS;
}
```

Allows to print list of elements with delimiter (without trailing delimiter):

```cpp
#include <utility>
#include <iterator>

template< typename container >
struct head_container
{
    head_container(container && _container) noexcept
        : container_(std::forward< container >(_container))
    { ; }

    auto begin() noexcept { return std::begin(container_); }
    auto end() noexcept
    {
        auto last = std::end(container_);
        if (last == std::begin(container_)) {
            return last;
        } else {
            return std::prev(last);
        }
    }

private :
    container container_;
};

template< typename container >
head_container< container >
head(container && _container) noexcept
{
    return std::forward< container >(_container);
}

// testing
#include <iostream>
#include <vector>
#include <cstdlib>

int
main()
{
    std::vector< int > vector_({1, 2, 3, 4, 5});
    // using
    if (!vector_.empty()) {
        for (int i : head(vector_)) {
            std::cout << i << ", ";
        }
        std::cout << vector_.back() << std::endl;
    }
    return EXIT_SUCCESS;
}
```

Allows to treat first element in special way:

```cpp
#include <utility>
#include <iterator>

template< typename container >
struct tail_container
{
    tail_container(container && _container) noexcept
        : container_(std::forward< container >(_container))
    { ; }

    auto begin() noexcept
    {
        auto first = std::begin(container_);
        if (first == std::end(container_)) {
            return first;
        } else {
            return std::next(first);
        }
    }
    auto end() noexcept { return std::end(container_); }

private :
    container container_;
};

template< typename container >
tail_container< container >
tail(container && _container) noexcept
{
    return std::forward< container >(_container);
}

// testing
#include <iostream>
#include <vector>
#include <cstdlib>

int
main()
{
    std::vector< int > vector_({1, 2, 3, 4, 5});
    // using
    if (!vector_.empty()) {
        for (int i : tail(vector_)) {
            vector_.front() += i;
        }
        std::cout << vector_.front() << std::endl;
    }
    return EXIT_SUCCESS;
}
```
Container adaptors
c++;c++11;collections;c++14
null
_cs.61180
I was wondering if, given an arbitrary cycle basis of some graph $G$ (complete in the sense that every cycle in the graph can be expressed as the $\mathbb{Z}/2\mathbb{Z}$ sum of elements from the basis), there is some way of sampling all cycles uniformly? I'm not able to find papers on it, and I was wondering if anyone had any results on this.
Uniformly sampling from cycles of a graph
graph theory;sampling
Exponential-time algorithm

One (slow) approach is rejection sampling. Let $c_1,\dots,c_k$ be the cycle basis. Then we obtain an isomorphism $\varphi : (\mathbb{Z}/2\mathbb{Z})^k \to \mathcal{E}$, where $\mathcal{E}$ is the set of Eulerian subgraphs of $G$, given by

$$\varphi(\alpha_1,\dots,\alpha_k) = \alpha_1 \cdot c_1 + \cdots + \alpha_k \cdot c_k.$$

This gives us a way to sample uniformly at random from the set of Eulerian subgraphs.

Let $\mathcal{C}$ denote the set of simple cycles of $G$. Every simple cycle is an Eulerian subgraph, i.e., $\mathcal{C} \subseteq \mathcal{E}$. Thus, a simple procedure to sample uniformly at random from $\mathcal{C}$ is:

1. Sample uniformly at random from $\mathcal{E}$, to get a subgraph $e$. (Namely, pick $\alpha_1,\dots,\alpha_k$ uniformly at random and set $e = \varphi(\alpha_1,\dots,\alpha_k)$.)
2. If $e$ is a simple cycle, output it. Otherwise, go back to step 1.

This samples a simple cycle uniformly at random from the set of all simple cycles of $G$.

The (expected) running time is proportional to $|\mathcal{E}|/|\mathcal{C}|$. It is known that $|\mathcal{E}|=2^{|E|-|V|+1}$ if $G$ is connected (the dimension of the cycle space). Thus, if $G$ has $C$ different cycles, this gives an algorithm whose running time is proportional to $2^{|E|-|V|+1}/C$. Depending on the number of cycles in $G$, this might be extremely slow.

I don't know if there is a polynomial-time algorithm or not.

Counting the number of simple cycles

Counting the exact number of simple cycles in a graph is NP-hard, by reduction from the Hamiltonian cycle problem. Jonathan Katz has some lecture notes that provide a proof.

I don't know whether it is hard to approximate the number of simple cycles in $G$. If it is, then this might imply hardness results for uniform sampling from among the set of simple cycles.

See also https://cstheory.stackexchange.com/q/20246/5038.
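A sketch of the rejection sampler in Python (the graph and all names are invented for illustration): cycles are edge sets, a random Eulerian subgraph is a random $\mathbb{Z}/2\mathbb{Z}$ combination of basis elements, and we resample until the result is a simple cycle.

```python
import random

def is_simple_cycle(edges):
    """True iff the edge set forms one connected cycle (every degree is 2)."""
    if not edges:
        return False
    deg, adj = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    if any(d != 2 for d in deg.values()):
        return False
    # connectivity: walk from any vertex and see if we reach all of them
    start = next(iter(deg))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(deg)

def sample_simple_cycle(basis, rng=random):
    """Rejection-sample a uniformly random simple cycle from a cycle basis."""
    while True:
        e = frozenset()
        for c in basis:
            if rng.random() < 0.5:
                e = e ^ frozenset(c)   # Z/2Z sum = symmetric difference
        if is_simple_cycle(e):
            return e

# square with one diagonal; cycle basis = the two triangles
basis = [[(0, 1), (1, 2), (0, 2)], [(0, 2), (2, 3), (0, 3)]]
print(sample_simple_cycle(basis))
```

On this example the Eulerian subgraphs are the empty set, the two triangles, and the outer square, so roughly one draw in four is rejected.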
_unix.348648
I have a project that needs a simple workstation running only a web browser. It should boot, autologin, and run the browser; when somebody closes the browser it should start again.

I have set up autologin on the console, and I run the browser with startx from .bashrc, but I am stuck on reopening the browser: right now, when somebody closes the browser, the workstation goes back to bash on the tty and does not start X11 with the browser again.

Is there any nice trick to keep the browser alive in a loop?

regards
Peval
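Not from the thread, just a sketch of the usual trick: put the restart loop inside the X session rather than around it, so closing the browser restarts only the browser while X keeps running. The file name and browser binary below are assumptions; adapt them to your setup.

```sh
# ~/.xinitrc -- startx from .bashrc keeps working as before
while true; do
    chromium --kiosk 'http://example.com/'   # blocks until the browser exits
done
```

If you also want X itself restarted when it dies, wrap the startx call in .bashrc in a similar loop.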
linux - autologin and run X11 app in loop
linux;x11;browser;autologin
null
_webmaster.75909
For the last few months I have had lots of referrer spammers in my GA statistics. Their count is ~10x higher than the count of legitimate visitors (my site is not very popular yet). I've turned on the option to hide known spammers in GA settings, but it didn't help at all. It seems these spammers are using scripts to spam directly to GA (i.e. they are not logged in my IIS).

Is there anything I can do to stop these spammers?

UPD 10 months later, and they started spamming using fake target page names... and Google is still doing nothing about it.
How to fight off Google Analytics referrer spammers?
google analytics;spam;referrer;google analytics spam
The spam is getting out of control. The list keeps growing, and it is time-consuming and not even efficient to add a filter for each of the spammers, since most of them show up for a few days, then disappear, and a new one comes.

There is a lot of misinformation; the most common mistake is recommending the use of .htaccess. This file blocks access to the website, and although there are a few crawlers (5 or 6) that can be blocked this way, the vast majority of the spam never accesses your site: it is ghost spam.

The best way to stop this type of spam (ghosts) is by creating a valid-hostname filter. Ghost spam uses either a fake or a not-set hostname, so with this filter you don't have to add endless filters; one filter will take care of the old and the new spam. I have been using this solution successfully for 3 months.

More information about this method here: https://stackoverflow.com/a/28354319/3197362
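For illustration, such a filter is a custom include filter on the Hostname field; the pattern below is a made-up example, listing only hostnames that can legitimately serve your tracking code (your own domain plus any proxies you accept, such as Google's translate cache):

```
^(www\.)?example\.com$|^translate\.googleusercontent\.com$
```

Anything that arrives with a different (or empty) hostname, which is exactly what ghost spam sends, is dropped by the view.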
_softwareengineering.341474
I am working on a local copy of a large project on GitHub. Most of the changes I make are modifications to tailor the application for a specific use case; as such they are not going to be accepted by the maintainer.

I am trying to figure out the best way to regularly update my local branch with changes made in the master branch, but without overwriting the changes I have made. For example I could use:

```
git checkout my_branch
git merge master
```

Then push my_branch to a separate project master on GitHub.

Is this the recommended way to have your own copy of a git project with regular updates from a master project?
How to work on git local branch, using pull to update but without push
git;github
There is no easy way to get updates from upstream while maintaining local changes to the code base. At some point, there will likely be conflicts, and you will need to resolve them manually.

Rebasing your changes onto the current state of upstream is not a viable solution in the long term, especially if your changes are more than a handful of commits. Since your adaptations are never merged back into the upstream repository, each time you rebase you will have to rebase everything you've ever done. That means resolving the same conflicts again and again. Rebasing also rewrites history, which means that merging your branch becomes absolutely impossible.

If you need to keep your current workflow, it might be better to stop your main branch from tracking any upstream branch and handle all merges manually, which is effectively what the commands you are suggesting do. A git pull upstream master is almost the same as git fetch upstream; git merge upstream/master. Doing the merge manually will give you more control. Merging instead of rebasing also reduces your workload in the long run.

However, Git is not a suitable tool to manage different modifications of a code base. The term "version control" is slightly misleading here. This is not because Git is a bad tool, but because this is an inherently difficult problem. The problem becomes a lot easier if the original code base is sufficiently configurable that you don't have to edit the code in order to implement your changes (in OOP speak, it should obey the open/closed principle).

While your problem-specific adaptations do not have any value for the original project, making it more configurable certainly would. The best strategy in the long term is therefore to make small changes to the original code to make it more configurable. This could mean setting some options in a configuration file, adding a plugin system, hooks and callbacks for certain events, or introducing dependency injection.

Once those changes are in the upstream project, you can implement your changes without having to touch the upstream code, which significantly reduces your maintenance burden. Your problem-specific plugins would live in a completely separate project.

If the upstream project does not accept your modifications, maintaining a small set of patches as a compatibility layer is still better than making all your changes directly in the upstream code base.
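To make the fetch-then-merge suggestion concrete, here is a runnable sketch using two throwaway local repositories ("upstream" stands in for the GitHub project; every path and name is invented for the demo):

```shell
set -e
tmp=$(mktemp -d)

# a stand-in for the upstream GitHub project
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "upstream work"

# your local copy, with its own adaptation branch
git clone -q "$tmp/upstream" "$tmp/local"
cd "$tmp/local"
git checkout -q -b my_branch
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "local adaptation"

# meanwhile, upstream moves on
git -C "$tmp/upstream" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "more upstream work"

# the manual update: fetch explicitly, then merge under your control
git fetch -q origin HEAD
git -c user.email=demo@example.com -c user.name=demo \
    merge -q --no-edit FETCH_HEAD    # resolve any conflicts here
git log --oneline | wc -l            # all four commits, incl. the merge
```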
_unix.332656
I have an HDD bought a few years ago that I have not touched in at least a year. When connecting it to my laptop, the drive does not appear in the drive list in the file explorer. I used fdisk to be sure it was connected and detected as /dev/sdb:

```
berhthun@debian:~$ sudo fdisk -l
[sudo] password for berhthun:
Disque /dev/sda : 465,8 GiB, 500107862016 octets, 976773168 secteurs
Unités : secteur de 1 × 512 = 512 octets
Taille de secteur (logique / physique) : 512 octets / 4096 octets
taille d'E/S (minimale / optimale) : 4096 octets / 4096 octets
Type d'étiquette de disque : dos
Identifiant de disque : 0x0b1495d8

Device     Boot    Start       End   Sectors   Size Id Type
/dev/sda1  *        2048  40011775  40009728  19,1G 83 Linux
/dev/sda2       40011776 976773167 936761392 446,7G  5 Extended
/dev/sda5       40013824  56307711  16293888   7,8G 82 Linux swap / Solaris
/dev/sda6       56309760 976773119 920463360 438,9G 83 Linux

Disque /dev/sdb : 465,7 GiB, 500074283008 octets, 976707584 secteurs
Unités : secteur de 1 × 512 = 512 octets
Taille de secteur (logique / physique) : 512 octets / 512 octets
taille d'E/S (minimale / optimale) : 512 octets / 512 octets
Type d'étiquette de disque : dos
Identifiant de disque : 0x00038e76

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1        2048 976707583 976705536 465,7G 83 Linux
```

After looking on the Internet, I tried to use the mkfs command to format it:

```
berhthun@debian:~$ sudo mkfs.ntfs /dev/sdb1
[sudo] password for berhthun:
Cluster size has been automatically set to 4096 bytes.
Initializing device with zeroes: 100% - Done.
Creating NTFS volume structures.
Error writing to /dev/sdb1: Erreur d'entrée/sortie
Error writing non-resident attribute value.
add_attr_sd failed: Erreur d'entrée/sortie
Couldn't create root directory: Erreur d'entrée/sortie
Failed to fsync device /dev/sdb1: Erreur d'entrée/sortie
Warning: Could not close /dev/sdb1: Erreur d'entrée/sortie
berhthun@debian:~$
```

I went to gnome-disks; the HDD is also detected there. The partition format is unknown. It tells me the disk has 500 GB of space, and that's it. If I try to format the drive, it also gives me an error message:

```
Erreur lors du formatage du disque
Error creating file system: Command-line `parted --script /dev/sdb mktable msdos'
exited with non-zero exit status 1: Error: Erreur d'entrée/sortie during read on /dev/sdb
Error: Erreur d'entrée/sortie during write on /dev/sdb (udisks-error-quark, 0)
```

What can I do to make it work again? Thank you for your help.

Update after Stephen Kitt's comment:

When using dmesg I find the following errors about the HDD:

```
[  748.613769] end_request: critical medium error, dev sdb, sector 976707456
[  748.613777] Buffer I/O error on device sdb, logical block 122088432
[  756.133563] end_request: critical medium error, dev sdb, sector 976707456
[  756.133571] Buffer I/O error on device sdb, logical block 122088432
[  868.845815] end_request: critical medium error, dev sdb, sector 976707456
[  868.845821] Buffer I/O error on device sdb1, logical block 122088176
[  945.172666] end_request: critical medium error, dev sdb, sector 976707456
[  945.172674] Buffer I/O error on device sdb1, logical block 122088176
[  975.727890] end_request: critical medium error, dev sdb, sector 976707456
[  975.727898] Buffer I/O error on device sdb, logical block 122088432
```

And so on.
Debian gnome-disks and mkfs can not format external hdd
debian;filesystems;hard disk
If the failing sectors are all at the end of the disk (976707456 and so on), you could use your disk with a shorter partition: delete sdb1 and re-create a partition that's one or two megabytes shorter.But I wouldn't trust such a disk: unless you use it for throwaway data, it's not worth the hassle and risk...
_unix.91046
Is it possible to search for specific mail content using the built-in functionality of Mutt? Or, as a last resort, how can I configure grep to be used in Mutt? The documentation only mentions the search and limit functions, which only search headers.
Search for mail content with mutt
email;search;mutt
search and limit can actually search inside messages too, depending on the search patterns you give. From the Patterns subsection of the Mutt reference:

```
~b EXPR    messages which contain EXPR in the message body
~B EXPR    messages which contain EXPR in the whole message
```

That is, ~b only searches in the body, whereas ~B also searches in the headers.

Note that this can be quite slow, since it may have to download each message one by one if they are not already cached. If you have a Mutt version greater than or equal to 1.5.12, you can cache the ones you are downloading for later use by setting message_cachedir to a directory where you want to store message bodies, which can significantly speed up searching them (and the same for headers with header_cache).
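For example, the caching mentioned above is a two-line muttrc change (the directory paths here are just examples):

```
# in ~/.muttrc
set message_cachedir = "~/.cache/mutt/bodies"
set header_cache     = "~/.cache/mutt/headers"
```

With these set, the first ~B search over a remote mailbox is still slow, but repeat searches run against the local cache.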
_webapps.57751
I have a folder of about 20 Lesson Plans in Google Documents. Although I used an initial template with a bunch of custom formatting for Heading 1, Heading 2, etc. I now want to tweak the styles. Is there a reasonable way to sync the formatting between documents? In other words, I want designate one document as a Master, and propagate the styles to other documents.Any suggestions?
How can I share common formatting between Google Documents?
google documents;formatting
Sadly, this seems impossible. I tried to see if I could create a script that would copy paragraph heading styles from one document to another, but there seems to be no API for getting style attributes for a heading style.There might be a slight chance you could get something working by using only custom styles, that is, defining your very own Heading styles, for example, and have a script copy those styles from a master document to slave documents. However, once you depend on the pre-defined Heading styles in any of your custom styles, you're out of luck. In any case, using such a system will take a lot of discipline when using styles, so I don't think any such solution will be very practical.If anyone wants to work further on this, I have shared a folder on my Google Drive.
_codereview.19796
I have three methods that are really similar but do slightly different things. This doesn't feel very DRY to me, but I can't think of a way to reduce the duplication. Here are the methods; I think they're fairly self-explanatory:

```ruby
# Present a date as a string, optionally without a year
# Example output: December 1, 2011
def date_string(year=true)
  if date
    str = "%B %e"
    str += ", %Y" if year
    date.strftime(str).gsub('  ', ' ')
  else
    ''
  end
end

# Present a time as a string, optionally with a meridian (am/pm)
# Example output: 1:30 PM
def start_time_string(meridian=true)
  if start_time
    str = "%l:%M"
    str += " %p" if meridian
    start_time.strftime(str).lstrip
  else
    ''
  end
end

# Present a time as a string, optionally with a meridian (am/pm)
# Example Output: 2:00
def end_time_string(meridian=true)
  if end_time
    str = "%l:%M"
    str += " %p" if meridian
    end_time.strftime(str).lstrip
  else
    ''
  end
end
```

Each method just presents a time as a string with certain options, and if the time object they're trying to present is nil, they return an empty string.

Any ideas how to DRY this up?
Presenting a time as a string
strings;ruby;datetime
To complement @mjgpy3's answer, some notes:

def date_string(year=true). That's idiomatic Python, but in Ruby the idiomatic form is def date_string(year = true). Personally I don't like positional optional values, especially booleans; when you call them you have no idea what the argument stands for. You can use an options hash instead. Do you remember the old Rails obj.save(false)? It was wisely changed in Rails 3 to obj.save(:validate => false).

How to build a string with conditional values: this is the pattern I use:

```ruby
str = ["%B %e", *(", %Y" if year)].join
```

The else returns an empty string; why? It's better if it returns nil.

Check String#squish.

To sum it up, I'd write the first method like this:

```ruby
def date_string(options = {})
  format = ["%B %e", *(", %Y" if options[:year])].join
  date.strftime(format).squish
end
```

Just as a side note: does it make sense to have all these methods in a model? (Or a presenter, it doesn't matter.) Think of an orthogonal approach: create helper methods available for everybody to use (and send them the date object); this way you achieve real modularity.
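Putting the answer's pieces together, here is one possible DRY rewrite of all three methods. The Event class and attribute writers are invented scaffolding so the sketch runs stand-alone, and squish is approximated with a plain gsub since it is a Rails extension:

```ruby
class Event
  attr_accessor :date, :start_time, :end_time

  def date_string(options = {})
    format = ['%B %e', *(', %Y' if options.fetch(:year, true))].join
    fmt(date, format)
  end

  def start_time_string(options = {})
    fmt(start_time, time_format(options))
  end

  def end_time_string(options = {})
    fmt(end_time, time_format(options))
  end

  private

  # The one shared guard-and-format helper replacing the three copies.
  def fmt(time, format)
    return '' if time.nil?
    time.strftime(format).gsub(/\s+/, ' ').strip  # poor man's String#squish
  end

  def time_format(options)
    ['%l:%M', *(' %p' if options.fetch(:meridian, true))].join
  end
end

e = Event.new
e.date = Time.new(2011, 12, 1)
e.start_time = Time.new(2011, 12, 1, 13, 30)
puts e.date_string        # December 1, 2011
puts e.start_time_string  # 1:30 PM
puts e.end_time_string    # empty string: end_time is nil
```

Note that options.fetch(:year, true) keeps the original default-on behaviour, which a bare options[:year] would silently lose.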
_webmaster.59356
Our home page features links to our 4 main sub pages (it's really more, but I'll use 4 as an example). It's currently just a list of links in the content body:

- Custom websites designed for desktop, tablets, and mobile phones
- Domain Name Registration
- Website Hosting
- Ecommerce Solutions

Disregarding user experience for better or worse and focusing on SEO only, I considered working the links into a paragraph in a more organic way:

"We create custom websites designed for desktop, tablets, and mobile phones. We take care of everything from domain name registration, website hosting, ecommerce solutions and more!"

My assumption is that the paragraph form is preferred because it is more natural and reads better. From an SEO perspective, is there much of a difference?
Links in natural text VS plain list format
seo
The text that surrounds links is a ranking factor for the page receiving the link. How much is unknown but it does matter. In this case there isn't a whole lot of text surrounding the links in the paragraph so I don't think it is going to make much of a difference either way.Just go with whatever is better for the user (and/or makes for a better presentation).
_codereview.54887
Given an array, print the Next Greater Element (NGE) for every element. The Next Greater Element for an element x is the first greater element on the right side of x in the array. Elements for which no greater element exists, consider the next greater element as -1.

Examples:

For any array, the rightmost element always has next greater element as -1.

For an array which is sorted in decreasing order, all elements have next greater element as -1.

For the input array [4, 5, 2, 25], the next greater elements for each element are as follows:

```
Element    NGE
   4   -->   5
   5   -->   25
   2   -->   25
   25  -->   -1
```

For the input array [13, 7, 6, 12], the next greater elements for each element are as follows:

```
Element    NGE
   13  -->   -1
   7   -->   12
   6   -->   12
   12  -->   -1
```

```java
final class DataSet {
    private final int x;
    private final int nextX;

    public DataSet(int x, int nextX) {
        this.x = x;
        this.nextX = nextX;
    }

    public int getX() {
        return x;
    }

    public int getNextX() {
        return nextX;
    }

    @Override
    public String toString() {
        return "x: " + x + " nextX: " + nextX;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + nextX;
        result = prime * result + x;
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        DataSet other = (DataSet) obj;
        if (nextX != other.nextX) return false;
        if (x != other.x) return false;
        return true;
    }
}

public final class NextGreatestElement {

    private NextGreatestElement() {}

    /**
     * Computes list of dataset, where dataset is a tuple of
     * (currentInt, nextGreatestelement)
     *
     * @param a the array
     * @return list of dataset
     */
    public static List<DataSet> getNextGreatestElement(int[] a) {
        final List<DataSet> dataSetList = new ArrayList<DataSet>();
        final Stack<Integer> stack = new Stack<Integer>();

        for (int i = 0; i < a.length; i++) {
            while (!stack.isEmpty() && stack.peek() < a[i]) {
                dataSetList.add(new DataSet(stack.pop(), a[i]));
            }
            stack.push(a[i]);
        }
        while (!stack.isEmpty()) {
            dataSetList.add(new DataSet(stack.pop(), -1));
        }
        return dataSetList;
    }
}

public class NextGreatestElementTest {

    @Test
    public void test1() {
        int[] a1 = {2, 3, 4, 5};
        List<DataSet> ds1 = new ArrayList<DataSet>();
        ds1.add(new DataSet(2, 3));
        ds1.add(new DataSet(3, 4));
        ds1.add(new DataSet(4, 5));
        ds1.add(new DataSet(5, -1));
        assertEquals(ds1, NextGreatestElement.getNextGreatestElement(a1));
    }

    @Test
    public void test2() {
        int[] a2 = {3, 2, 1};
        List<DataSet> ds2 = new ArrayList<DataSet>();
        ds2.add(new DataSet(1, -1));
        ds2.add(new DataSet(2, -1));
        ds2.add(new DataSet(3, -1));
        assertEquals(ds2, NextGreatestElement.getNextGreatestElement(a2));
    }
}
```
Find the Next Greatest Element
java;algorithm;stack
null
_reverseengineering.10679
I'm in the process of trying to reverse engineer a firmware from unknown vendor. Here's what I got so farI have the firmware image (.mg file). AFAIK it's no common image, I couldn't find any information about it from googlingHere's the binwalk output:DECIMAL HEXADECIMAL DESCRIPTION--------------------------------------------------------------------------------2149 0x865 HTML document header2685 0xA7D HTML document footer2685 0xA7D HTML document header3052 0xBEC HTML document footer3052 0xBEC HTML document header3483 0xD9B HTML document footer3483 0xD9B HTML document header3747 0xEA3 HTML document footer3747 0xEA3 HTML document header5542 0x15A6 HTML document footer8071 0x1F87 HTML document header10462 0x28DE HTML document footer10463 0x28DF HTML document header10962 0x2AD2 HTML document footer43377 0xA971 HTML document header43752 0xAAE8 HTML document footer43753 0xAAE9 HTML document header47269 0xB8A5 HTML document footer47269 0xB8A5 HTML document header50689 0xC601 HTML document footer50690 0xC602 HTML document header52285 0xCC3D HTML document footer52289 0xCC41 HTML document header54387 0xD473 HTML document footer54389 0xD475 HTML document header66620 0x1043C HTML document footer66621 0x1043D HTML document header70158 0x1120E HTML document footer70159 0x1120F HTML document header89096 0x15C08 HTML document footer89329 0x15CF1 Copyright string: © 2008.  
All rights reserved.96973 0x17ACD GIF image data, version 89a, 9 x 997819 0x17E1B GIF image data, version 89a, 1600 x 398790 0x181E6 GIF image data, version 89a, 780 x 1099797 0x185D5 GIF image data, version 89a, 780 x 10100806 0x189C6 HTML document header126955 0x1EFEB HTML document footer126957 0x1EFED HTML document header133790 0x20A9E HTML document footer133792 0x20AA0 HTML document header149863 0x24967 HTML document footer149865 0x24969 HTML document header157824 0x26880 HTML document footer157826 0x26882 HTML document header173039 0x2A3EF HTML document footer173041 0x2A3F1 HTML document header201137 0x311B1 HTML document footer201139 0x311B3 HTML document header207772 0x32B9C HTML document footer207773 0x32B9D HTML document header219240 0x35868 HTML document footer219241 0x35869 HTML document header220859 0x35EBB HTML document footer220860 0x35EBC HTML document header229141 0x37F15 HTML document footer229142 0x37F16 HTML document header237943 0x3A177 HTML document footer237943 0x3A177 HTML document header243109 0x3B5A5 HTML document footer243110 0x3B5A6 HTML document header248986 0x3CC9A HTML document footer248987 0x3CC9B HTML document header256674 0x3EAA2 HTML document footer256675 0x3EAA3 HTML document header264418 0x408E2 HTML document footer264419 0x408E3 HTML document header267737 0x415D9 HTML document footer267791 0x4160F HTML document header278611 0x44053 HTML document footer278612 0x44054 HTML document header302510 0x49DAE HTML document footer302511 0x49DAF HTML document header340467 0x531F3 HTML document footer340468 0x531F4 HTML document header350590 0x5597E HTML document footer350591 0x5597F HTML document header353983 0x566BF HTML document footer353984 0x566C0 HTML document header400086 0x61AD6 HTML document footer400087 0x61AD7 HTML document header402883 0x625C3 HTML document footer936805 0xE4B65 Zlib compressed data, default compression949237 0xE7BF5 Zlib compressed data, compressed959131 0xEA29B Zlib compressed data, compressed992718 0xF25CE 
Zlib compressed data, best compression999966 0xF421E Zlib compressed data, default compression1014782 0xF7BFE Zlib compressed data, default compression1018489 0xF8A79 Zlib compressed data, default compression1032857 0xFC299 Zlib compressed data, default compression1042869 0xFE9B5 Zlib compressed data, compressed1044292 0xFEF44 Zlib compressed data, best compression1052372 0x100ED4 Zlib compressed data, compressed1055182 0x1019CE Zlib compressed data, default compression1071400 0x105928 Zlib compressed data, default compression1076050 0x106B52 Zlib compressed data, best compression1096297 0x10BA69 Zlib compressed data, compressed1137920 0x115D00 Zlib compressed data, compressed1170902 0x11DDD6 Zlib compressed data, compressed1186555 0x121AFB Zlib compressed data, compressed1188821 0x1223D5 Zlib compressed data, default compression1195821 0x123F2D Zlib compressed data, default compression1201862 0x1256C6 Zlib compressed data, compressed1236300 0x12DD4C Zlib compressed data, compressed1279273 0x138529 Zlib compressed data, compressed1280971 0x138BCB Zlib compressed data, compressed1302926 0x13E18E Zlib compressed data, best compression1303266 0x13E2E2 Zlib compressed data, compressed1312923 0x14089B Zlib compressed data, default compression1332979 0x1456F3 Zlib compressed data, default compression1340608 0x1474C0 Zlib compressed data, best compression1378862 0x150A2E Zlib compressed data, compressed1398072 0x155538 Zlib compressed data, default compression1415562 0x15998A Zlib compressed data, compressed1464907 0x165A4B Zlib compressed data, default compression1500357 0x16E4C5 Zlib compressed data, best compression1500644 0x16E5E4 Zlib compressed data, default compression1527997 0x1750BD compress'd data, 16 bits1544522 0x17914A Zlib compressed data, best compression1552354 0x17AFE2 Zlib compressed data, best compression1571408 0x17FA50 Zlib compressed data, compressed1581827 0x182303 Zlib compressed data, best compression1591785 0x1849E9 Zlib compressed data, 
compressed1605018 0x187D9A Zlib compressed data, default compression1620883 0x18BB93 Zlib compressed data, compressed1657385 0x194A29 Zlib compressed data, compressed1660472 0x195638 Zlib compressed data, compressed1676765 0x1995DD Zlib compressed data, default compression1677967 0x199A8F Zlib compressed data, best compression1716067 0x1A2F63 Zlib compressed data, best compression1805822 0x1B8DFE Zlib compressed data, compressed1816311 0x1BB6F7 Zlib compressed data, compressed1828850 0x1BE7F2 Zlib compressed data, compressed1847543 0x1C30F7 Zlib compressed data, compressed1884695 0x1CC217 Zlib compressed data, compressed1897488 0x1CF410 Zlib compressed data, best compression1897690 0x1CF4DA Zlib compressed data, compressed1942524 0x1DA3FC Zlib compressed data, best compression1963605 0x1DF655 Zlib compressed data, compressed1975034 0x1E22FA Zlib compressed data, compressed1977041 0x1E2AD1 Zlib compressed data, compressed1997374 0x1E7A3E Zlib compressed data, compressed2036964 0x1F14E4 Zlib compressed data, default compression2037783 0x1F1817 Zlib compressed data, compressed2041756 0x1F279C Zlib compressed data, compressed2051259 0x1F4CBB Zlib compressed data, compressed2063154 0x1F7B32 Zlib compressed data, best compression2072756 0x1FA0B4 Zlib compressed data, default compression2076724 0x1FB034 Zlib compressed data, compressed2082453 0x1FC695 Zlib compressed data, default compression2093131 0x1FF04B Zlib compressed data, compressed2107066 0x2026BA Zlib compressed data, compressed2107321 0x2027B9 Zlib compressed data, default compression2121073 0x205D71 Zlib compressed data, compressed2125853 0x20701D Zlib compressed data, best compression2126493 0x20729D Zlib compressed data, best compression2154155 0x20DEAB Zlib compressed data, compressed2177138 0x213872 Zlib compressed data, compressed2195782 0x218146 Zlib compressed data, best compression2199697 0x219091 Zlib compressed data, compressed2211810 0x21BFE2 Zlib compressed data, default compression2237221 0x222325 
Zlib compressed data, default compression2245919 0x22451F Zlib compressed data, best compression2257118 0x2270DE Zlib compressed data, compressed2265100 0x22900C Zlib compressed data, compressed2276225 0x22BB81 Zlib compressed data, compressed2285428 0x22DF74 Zlib compressed data, best compression2286328 0x22E2F8 Zlib compressed data, best compression2300068 0x2318A4 Zlib compressed data, compressed2306513 0x2331D1 Zlib compressed data, default compression2324445 0x2377DD Zlib compressed data, default compression2339864 0x23B418 Zlib compressed data, best compression2349252 0x23D8C4 Zlib compressed data, compressed2358483 0x23FCD3 Zlib compressed data, compressed2375872 0x2440C0 Zlib compressed data, compressed2383190 0x245D56 Zlib compressed data, default compression2474107 0x25C07B Zlib compressed data, default compression2505436 0x263ADC Zlib compressed data, compressed2505514 0x263B2A Zlib compressed data, compressed2507416 0x264298 Zlib compressed data, best compression2515542 0x266256 Zlib compressed data, compressed2517896 0x266B88 Zlib compressed data, best compression2520742 0x2676A6 Zlib compressed data, compressed2526861 0x268E8D Zlib compressed data, best compression2534916 0x26AE04 Zlib compressed data, default compression2561503 0x2715DF Zlib compressed data, default compression2569730 0x273602 Zlib compressed data, compressed2583767 0x276CD7 Zlib compressed data, default compression2602237 0x27B4FD Zlib compressed data, default compression2611477 0x27D915 Zlib compressed data, default compression2626957 0x28158D Zlib compressed data, compressed2639089 0x2844F1 Zlib compressed data, compressed2647278 0x2864EE Zlib compressed data, default compression2652206 0x28782E Zlib compressed data, best compression2664654 0x28A8CE Zlib compressed data, best compression2721674 0x29878A Zlib compressed data, best compression2722212 0x2989A4 Zlib compressed data, compressed2729622 0x29A696 Zlib compressed data, default compression2746239 0x29E77F Zlib compressed 
data, compressed2747460 0x29EC44 Zlib compressed data, best compression2773475 0x2A51E3 Zlib compressed data, compressed2912430 0x2C70AE Zlib compressed data, compressed2915559 0x2C7CE7 Zlib compressed data, best compression2919137 0x2C8AE1 Zlib compressed data, compressed2919398 0x2C8BE6 Zlib compressed data, default compression2925208 0x2CA298 Zlib compressed data, default compression2970697 0x2D5449 Zlib compressed data, compressed2975629 0x2D678D Zlib compressed data, compressed3007894 0x2DE596 Zlib compressed data, compressed3085804 0x2F15EC Zlib compressed data, compressed3092200 0x2F2EE8 Zlib compressed data, compressed3109095 0x2F70E7 Zlib compressed data, default compression3137073 0x2FDE31 Zlib compressed data, default compression3159460 0x3035A4 Zlib compressed data, best compression3169969 0x305EB1 Zlib compressed data, compressed3174676 0x307114 Zlib compressed data, best compression3190315 0x30AE2B Zlib compressed data, default compression3190481 0x30AED1 Zlib compressed data, compressed3199853 0x30D36D Zlib compressed data, best compression3209970 0x30FAF2 Zlib compressed data, default compression3233371 0x31565B Zlib compressed data, compressed3262017 0x31C641 Zlib compressed data, compressed3294698 0x3245EA Zlib compressed data, compressed3294814 0x32465E Zlib compressed data, best compression3300772 0x325DA4 Zlib compressed data, compressed3314325 0x329295 Zlib compressed data, default compression3325430 0x32BDF6 Zlib compressed data, best compression3347539 0x331453 Zlib compressed data, default compression3381973 0x339AD5 Zlib compressed data, compressed3384096 0x33A320 Zlib compressed data, compressed3426731 0x3449AB Zlib compressed data, best compression3436707 0x3470A3 Zlib compressed data, best compression3439327 0x347ADF Zlib compressed data, best compression3483067 0x3525BB Zlib compressed data, default compression3488551 0x353B27 Zlib compressed data, default compression3493009 0x354C91 Zlib compressed data, default compression3494362 
0x3551DA Zlib compressed data, default compression3499219 0x3564D3 Zlib compressed data, best compression3538245 0x35FD45 Zlib compressed data, compressed3579428 0x369E24 Zlib compressed data, compressed3588400 0x36C130 Zlib compressed data, compressed3597734 0x36E5A6 Zlib compressed data, default compression3609086 0x3711FE Zlib compressed data, compressed3617545 0x373309 Zlib compressed data, default compression3622662 0x374706 Zlib compressed data, compressed3626922 0x3757AA Zlib compressed data, best compression3634992 0x377730 Zlib compressed data, best compression3638148 0x378384 Zlib compressed data, default compression3638417 0x378491 Zlib compressed data, best compression3688902 0x3849C6 Zlib compressed data, compressed3698631 0x386FC7 Zlib compressed data, best compression3713175 0x38A897 Zlib compressed data, best compression3781687 0x39B437 Zlib compressed data, default compression3786285 0x39C62D Zlib compressed data, default compression3794941 0x39E7FD Zlib compressed data, default compression3816840 0x3A3D88 Zlib compressed data, best compression3817656 0x3A40B8 Zlib compressed data, default compression3829859 0x3A7063 Zlib compressed data, compressed3859678 0x3AE4DE Zlib compressed data, best compression3887499 0x3B518B Zlib compressed data, best compression3902283 0x3B8B4B Zlib compressed data, default compression3923673 0x3BDED9 Zlib compressed data, best compression3927277 0x3BECED Zlib compressed data, best compression3974662 0x3CA606 Zlib compressed data, compressed3995965 0x3CF93D Zlib compressed data, default compression4004880 0x3D1C10 Zlib compressed data, compressed4024426 0x3D686A Zlib compressed data, default compression4076111 0x3E324F Zlib compressed data, compressed4097672 0x3E8688 Zlib compressed data, best compression4115526 0x3ECC46 Zlib compressed data, best compression4115649 0x3ECCC1 Zlib compressed data, best compression4122032 0x3EE5B0 Zlib compressed data, best compression4124189 0x3EEE1D Zlib compressed data, best 
compression4129024 0x3F0100 Zlib compressed data, compressed4181033 0x3FCC29 Zlib compressed data, default compression4218129 0x405D11 Zlib compressed data, compressed4219339 0x4061CB Zlib compressed data, best compression4221408 0x4069E0 Zlib compressed data, best compression4239114 0x40AF0A Zlib compressed data, compressed4245809 0x40C931 Zlib compressed data, best compression4277644 0x41458C Zlib compressed data, compressed4289972 0x4175B4 Zlib compressed data, compressed4300292 0x419E04 Zlib compressed data, best compression4319935 0x41EABF Zlib compressed data, compressed4319988 0x41EAF4 Zlib compressed data, compressed4320041 0x41EB29 Zlib compressed data, compressed4320089 0x41EB59 Zlib compressed data, compressed4350321 0x426171 Zlib compressed data, compressed4375063 0x42C217 Zlib compressed data, compressed4414926 0x435DCE Zlib compressed data, compressed4417582 0x43682E Zlib compressed data, compressed4430754 0x439BA2 Zlib compressed data, best compression4430955 0x439C6B Zlib compressed data, best compression4433422 0x43A60E Zlib compressed data, compressed4440964 0x43C384 Zlib compressed data, default compression4459836 0x440D3C Zlib compressed data, best compression4465037 0x44218D Zlib compressed data, compressed4491474 0x4488D2 Zlib compressed data, compressed4525321 0x450D09 Zlib compressed data, compressed4536020 0x4536D4 Zlib compressed data, default compression4539019 0x45428B Zlib compressed data, best compression4547686 0x456466 Zlib compressed data, compressed4594046 0x46197E Zlib compressed data, compressed4595742 0x46201E Zlib compressed data, compressed4611527 0x465DC7 Zlib compressed data, compressed4621480 0x4684A8 Zlib compressed data, best compression4622546 0x4688D2 Zlib compressed data, best compression4658238 0x47143E Zlib compressed data, compressed4666730 0x47356A Zlib compressed data, best compression4682920 0x4774A8 Zlib compressed data, best compression4687391 0x47861F Zlib compressed data, compressed4725536 0x481B20 Zlib 
compressed data, compressed4729111 0x482917 Zlib compressed data, default compression4739376 0x485130 Zlib compressed data, best compression4747013 0x486F05 Zlib compressed data, compressed4764107 0x48B1CB Zlib compressed data, compressed4774983 0x48DC47 Zlib compressed data, default compression4775153 0x48DCF1 Zlib compressed data, best compression4828104 0x49ABC8 Zlib compressed data, best compression4866122 0x4A404A Zlib compressed data, compressed4895722 0x4AB3EA Zlib compressed data, compressed4934732 0x4B4C4C Zlib compressed data, default compression4940722 0x4B63B2 Zlib compressed data, default compression4949644 0x4B868C Zlib compressed data, compressed4983247 0x4C09CF Zlib compressed data, compressed4992046 0x4C2C2E Zlib compressed data, best compression5023817 0x4CA849 Zlib compressed data, compressed5037667 0x4CDE63 Zlib compressed data, compressed5046276 0x4D0004 Zlib compressed data, best compression5048862 0x4D0A1E Zlib compressed data, compressed5050674 0x4D1132 Zlib compressed data, compressed5073060 0x4D68A4 Zlib compressed data, default compression5092725 0x4DB575 Zlib compressed data, default compression5122481 0x4E29B1 Zlib compressed data, compressed5122600 0x4E2A28 Zlib compressed data, compressed5140098 0x4E6E82 Zlib compressed data, compressed5140130 0x4E6EA2 Zlib compressed data, compressedIt all seems like a false positive because when I run binwalk -e I get these files as output:100ED4.zlib 15998A.zlib 1CF410.zlib 20AA0.html 263ADC.zlib 29E77F.zlib 311B3.html 369E24.zlib 3B5A6.html 40C931.zlib 45428B.zlib 4B63B2.zlib B8A5.html1019CE.zlib 165A4B.zlib 1CF4DA.zlib 20DEAB.zlib 263B2A.zlib 29EC44.zlib 31565B.zlib 36C130.zlib 3B8B4B.zlib 41458C.zlib 456466.zlib 4B868C.zlib BEC.html1043D.html 16E4C5.zlib 1DA3FC.zlib 213872.zlib 264298.zlib 2A3F1.html 31C641.zlib 36E5A6.zlib 3BDED9.zlib 4160F.html 46197E.zlib 4C09CF.zlib C602.html105928.zlib 16E5E4.zlib 1DF655.zlib 218146.zlib 266256.zlib 2A51E3.zlib 3245EA.zlib 3711FE.zlib 3BECED.zlib 
4175B4.zlib 46201E.zlib 4C2C2E.zlib CC41.html106B52.zlib 1750BD.Z 1E22FA.zlib 219091.zlib 266B88.zlib 2C70AE.zlib 32465E.zlib 373309.zlib 3CA606.zlib 419E04.zlib 465DC7.zlib 4CA849.zlib D475.html10BA69.zlib 17914A.zlib 1E2AD1.zlib 21BFE2.zlib 2676A6.zlib 2C7CE7.zlib 325DA4.zlib 374706.zlib 3CC9B.html 41EABF.zlib 4684A8.zlib 4CDE63.zlib D9B.html1120F.html 17AFE2.zlib 1E7A3E.zlib 222325.zlib 26882.html 2C8AE1.zlib 329295.zlib 3757AA.zlib 3CF93D.zlib 41EAF4.zlib 4688D2.zlib 4D0004.zlib E4B65.zlib115D00.zlib 17FA50.zlib 1EFED.html 22451F.zlib 268E8D.zlib 2C8BE6.zlib 32B9D.html 377730.zlib 3D1C10.zlib 41EB29.zlib 47143E.zlib 4D0A1E.zlib E7BF5.zlib11DDD6.zlib 182303.zlib 1F14E4.zlib 2270DE.zlib 26AE04.zlib 2CA298.zlib 32BDF6.zlib 378384.zlib 3D686A.zlib 41EB59.zlib 47356A.zlib 4D1132.zlib EA29B.zlib121AFB.zlib 1849E9.zlib 1F1817.zlib 22900C.zlib 2715DF.zlib 2D5449.zlib 331453.zlib 378491.zlib 3E324F.zlib 426171.zlib 4774A8.zlib 4D68A4.zlib EA3.html1223D5.zlib 187D9A.zlib 1F279C.zlib 22BB81.zlib 273602.zlib 2D678D.zlib 339AD5.zlib 37F16.html 3E8688.zlib 42C217.zlib 47861F.zlib 4DB575.zlib F25CE.zlib123F2D.zlib 189C6.html 1F4CBB.zlib 22DF74.zlib 276CD7.zlib 2DE596.zlib 33A320.zlib 3849C6.zlib 3EAA3.html 435DCE.zlib 481B20.zlib 4E29B1.zlib F421E.zlib1256C6.zlib 18BB93.zlib 1F7B32.zlib 22E2F8.zlib 27B4FD.zlib 2F15EC.zlib 3449AB.zlib 386FC7.zlib 3ECC46.zlib 43682E.zlib 482917.zlib 4E2A28.zlib F7BFE.zlib12DD4C.zlib 194A29.zlib 1F87.html 2318A4.zlib 27D915.zlib 2F2EE8.zlib 3470A3.zlib 38A897.zlib 3ECCC1.zlib 439BA2.zlib 485130.zlib 4E6E82.zlib F8A79.zlib138529.zlib 195638.zlib 1FA0B4.zlib 2331D1.zlib 28158D.zlib 2F70E7.zlib 347ADF.zlib 39B437.zlib 3EE5B0.zlib 439C6B.zlib 486F05.zlib 4E6EA2.zlib FC299.zlib138BCB.zlib 1995DD.zlib 1FB034.zlib 2377DD.zlib 2844F1.zlib 2FDE31.zlib 3525BB.zlib 39C62D.zlib 3EEE1D.zlib 43A60E.zlib 48B1CB.zlib 531F4.html FE9B5.zlib13E18E.zlib 199A8F.zlib 1FC695.zlib 23B418.zlib 2864EE.zlib 3035A4.zlib 353B27.zlib 39E7FD.zlib 3F0100.zlib 43C384.zlib 
48DC47.zlib 5597F.html FEF44.zlib13E2E2.zlib 1A2F63.zlib 1FF04B.zlib 23D8C4.zlib 28782E.zlib 305EB1.zlib 354C91.zlib 3A177.html 3FCC29.zlib 44054.html 48DCF1.zlib 566C0.html14089B.zlib 1B8DFE.zlib 2026BA.zlib 23FCD3.zlib 28A8CE.zlib 307114.zlib 3551DA.zlib 3A3D88.zlib 405D11.zlib 440D3C.zlib 49ABC8.zlib 61AD7.html1456F3.zlib 1BB6F7.zlib 2027B9.zlib 2440C0.zlib 28DF.html 30AE2B.zlib 3564D3.zlib 3A40B8.zlib 4061CB.zlib 44218D.zlib 49DAF.html 865.html1474C0.zlib 1BE7F2.zlib 205D71.zlib 245D56.zlib 29878A.zlib 30AED1.zlib 35869.html 3A7063.zlib 4069E0.zlib 4488D2.zlib 4A404A.zlib A7D.html150A2E.zlib 1C30F7.zlib 20701D.zlib 24969.html 2989A4.zlib 30D36D.zlib 35EBC.html 3AE4DE.zlib 408E3.html 450D09.zlib 4AB3EA.zlib A971.html155538.zlib 1CC217.zlib 20729D.zlib 25C07B.zlib 29A696.zlib 30FAF2.zlib 35FD45.zlib 3B518B.zlib 40AF0A.zlib 4536D4.zlib 4B4C4C.zlib AAE9.htmlThe zlib files give me error during extraction. ( I can't unzip the zlib files)From hexdump output and strings command I see quite a lot of ascii which I guess indicates it's not encrypted. When I open the HTML files I see some binaries which apparently got combined with HTML files.My question is: Where do I go from here?
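When binwalk's extracted .zlib files fail to inflate, one option is to try inflating the raw bytes at each reported offset yourself: zlib.decompressobj tolerates trailing garbage after the stream end (which the one-shot zlib.decompress does not), and a zlib.error at an offset usually means a binwalk false positive. A minimal sketch (self-contained demo data here; with the real image you would read the .mg file and loop over binwalk's offsets):

```python
import zlib

def carve_zlib(data, offset):
    """Try to inflate a zlib stream starting at `offset` inside `data`.

    Returns (decompressed_bytes, bytes_consumed), or None if the bytes
    at that offset are not a valid zlib stream (a false positive).
    """
    d = zlib.decompressobj()
    try:
        out = d.decompress(data[offset:])
    except zlib.error:
        return None
    # Bytes past the end of the stream land in d.unused_data.
    consumed = len(data) - offset - len(d.unused_data)
    return out, consumed

# Demo: a fake "firmware" blob with a zlib stream buried at offset 16.
payload = zlib.compress(b"hello firmware")
blob = b"\x00" * 16 + payload + b"\xde\xad\xbe\xef"  # trailing garbage
result = carve_zlib(blob, 16)
print(result[0])  # -> b'hello firmware'
```

On a real image you would run this for every offset binwalk reported and keep only the offsets where it returns a result.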
Extracting Firmware problem with Binwalk
firmware;embedded
null
_codereview.47823
I have a website where people can write articles and publish/unpublish them. This is done via a single button on the page:

<a id="publish-button" href="#" class="btn btn-default <%= 'btn-success' if @article.published? %>">
  <span class="glyphicon glyphicon-globe"></span>
</a>

As you can see, if an article is already published I'm adding an additional btn-success class.

Now, this is the JavaScript code that handles Ajax clicks on that button:

var $editor = $('#editor'); // Article container, it contains update action url as data-url
var $publishButton = $('#publish-button');

var togglePublished = function () {
    var url = $editor.data('url');
    var postData = {
        article: {
            stage: $publishButton.hasClass('btn-success') ? 'unpublished' : 'published'
        },
        _method: 'PUT'
    };
    $.ajax({
        url: url,
        data: postData,
        method: 'POST',
        success: function (article) {
            $publishButton['published' == article.stage ? 'addClass' : 'removeClass']('btn-success');
        }
    });
};

$publishButton.click(function (e) {
    e.preventDefault();
    togglePublished();
});

My controller action just returns a JSON-encoded article in this case:

def update
  if @article.update(article_params)
    render json: @article
  else
    render json: { errors: @article.errors.full_messages }, status: :unprocessable_entity
  end
end

There are a few things I see wrong with this:

My post data depends on a class, which can be changed, and changing it would require me to change it in two places: in the template and in the JS.
If I add a data-published attribute to the button I would still have to add/remove the class after Ajax has finished, resulting in duplicated view logic (in the HTML file and the JS file).

I can see two ways to avoid this and be DRY:

1. Add the conditional class only in JavaScript and run a check upon page load

This would be fairly easy to do but would result in a brief delay, because the conditional class would be added in JavaScript, which runs after all HTML has already loaded.

2. Extract the button to a partial view and return it from the controller in Ajax

This seems like a more solid approach, but since my update action is used in other places too it has to return the article as JSON. That's why I would have to create another controller action:

def toggle_published
  @article.toggle_published # Something like this
  render partial: 'publish_button'
end

This reduces the RESTfulness of my controller, and it seems kind of silly to extract only one HTML element into a partial view.

Is there any better way to do this? It seems like minor and insignificant code duplication, but it's not an isolated example, and as website complexity grows these things can multiply a lot and cause a lot of headaches.
Rails preventing ajax view duplication
ruby;ruby on rails
null
_unix.253457
yum provides <command> tells you what package provides, for instance, /usr/bin/python. But how do I find out what commands are provided by a certain package?
What is the opposite of yum provides?
centos;rhel;yum
But how do I find out what commands are provided by a certain package?

There is no separate notion of "commands" in Linux packaging; packages provide files. If you are interested in what files a package provides, there is rpm:

rpm -ql package_name

You will usually find the executables (aka commands) under the /usr/bin/ path.
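To narrow the full rpm -ql file list down to just the commands, you can filter for the usual executable directories. A sketch of that filtering step (the sample file list below is made up for illustration; in practice you would feed it the real rpm -ql output):

```python
# Directories where packaged executables ("commands") normally live.
BIN_DIRS = ("/usr/bin/", "/usr/sbin/", "/bin/", "/sbin/")

def commands_in(file_list):
    """Filter a package's file list down to likely commands."""
    return [f for f in file_list if f.startswith(BIN_DIRS)]

# Made-up sample of what `rpm -ql python` might print.
sample = [
    "/usr/bin/python",
    "/usr/bin/python2",
    "/usr/share/doc/python-2.7.5/README",
    "/usr/share/man/man1/python.1.gz",
]
print(commands_in(sample))  # -> ['/usr/bin/python', '/usr/bin/python2']
```

On the system itself, the equivalent one-liner would be something like `rpm -ql package_name | grep -E '^(/usr)?/s?bin/'`.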
_webmaster.98944
I've been doing a lot of research on whether or not GA would be usable inside a private network, such as 192.X.X.X or 10.X.X.X, instead of reporting the publicly held IP for each node. On the surface, it looks like this is a no-go. However, I did find this question on SF that seems to imply that it could be possible. Is this possible? If so, how?
Is there a way for GA to track internal network traffic for a private network?
google analytics
null
_codereview.67564
Here's a part of my code that is used to draw objects on the screen:

Class members:

private double cursorX = 0;
private double cursorY = 0;
private LineObj cursorOldY = new LineObj();
private LineObj cursorOldX = new LineObj();
private int index1 = 0;

private double cursor2X = 0;
private double cursor2Y = 0;
private LineObj cursor2OldY = new LineObj();
private LineObj cursor2OldX = new LineObj();
private int index2 = -1;

ZedGraph.CurveItem curve = null;

Drawing function:

private void drawCursors(bool first, bool second, bool fromKeyboard = false, Point mousePoint = new Point())
{
    GraphPane p = lineGraphControl1.GraphPane;
    if (p.CurveList.Count == 0 || curve == null)
        return;

    ZedGraph.Axis axis = curve.IsY2Axis ? (Axis)p.Y2AxisList[curve.YAxisIndex] : (Axis)p.YAxisList[curve.YAxisIndex];
    ZedGraph.PointPair point = new PointPair(0.0, 0.0);

    if (first)
    {
        p.GraphObjList.Remove(cursorOldY);
        p.GraphObjList.Remove(cursorOldX);

        if (fromKeyboard)
            point = curve.Points[index1];
        else if (lineGraphControl1.Graph.XAxis.Mode == XAxisMode.Parameter)
        {
            ZedGraph.CurveItem outputCurve;
            p.FindNearestPoint(mousePoint, curve, out outputCurve, out index1);
            if (index1 >= 0)
                point = curve.Points[index1];
        }
        else
        {
            if (!mousePoint.IsEmpty)
                p.ReverseTransform(mousePoint, out cursorX, out cursorY);
            point = findPoint(curve, cursorX);
        }

        cursorX = point.X;
        cursorY = transform(point.Y, axis);

        cursorOldY = new LineObj(cursorX, p.YAxis.Scale.Min, cursorX, p.YAxis.Scale.Max);
        cursorOldX = new LineObj(p.XAxis.Scale.Max, cursorY, p.XAxis.Scale.Min, cursorY);

        if (cursorX > p.XAxis.Scale.Min && cursorX < p.XAxis.Scale.Max)
            p.GraphObjList.Add(cursorOldY);
        if (cursorY > p.YAxis.Scale.Min && cursorY < p.YAxis.Scale.Max)
            p.GraphObjList.Add(cursorOldX);
    }

    if (second)
    {
        p.GraphObjList.Remove(cursor2OldY);
        p.GraphObjList.Remove(cursor2OldX);

        if (fromKeyboard)
            point = curve.Points[index2];
        else if (lineGraphControl1.Graph.XAxis.Mode == XAxisMode.Parameter)
        {
            ZedGraph.CurveItem outputCurve;
            p.FindNearestPoint(mousePoint, curve, out outputCurve, out index2);
            if (index2 >= 0)
                point = curve.Points[index2];
        }
        else
        {
            if (!mousePoint.IsEmpty)
                p.ReverseTransform(mousePoint, out cursor2X, out cursor2Y);
            point = findPoint(curve, cursor2X);
            if (index2 == -1)
            {
                index2 = (int) Math.Round(curve.NPts * 0.75, 0);
                point = curve.Points[index2];
            }
        }

        cursor2X = point.X;
        cursor2Y = transform(point.Y, axis);

        cursor2OldY = new LineObj(cursor2X, p.YAxis.Scale.Min, cursor2X, p.YAxis.Scale.Max);
        cursor2OldX = new LineObj(p.XAxis.Scale.Max, cursor2Y, p.XAxis.Scale.Min, cursor2Y);

        if (cursor2X > p.XAxis.Scale.Min && cursor2X < p.XAxis.Scale.Max)
            p.GraphObjList.Add(cursor2OldY);
        if (cursor2Y > p.YAxis.Scale.Min && cursor2Y < p.YAxis.Scale.Max)
            p.GraphObjList.Add(cursor2OldX);
    }
}

Are there (structurally speaking) any optimizations I missed?
Draw crosshairs on the screen - ZedGraph
c#;optimization
DRY: Don't repeat yourself

Looking at the method, it is clear that you do the same thing for first == true and for second == true, but with different sets of variables. As the set of variables

private double cursor2X = 0;
private double cursor2Y = 0;
private LineObj cursor2OldY = new LineObj();
private LineObj cursor2OldX = new LineObj();
private int index2 = -1;

defines what you need to draw a crosshair, you should place them inside a class. An instance of this class can then be passed to your method instead of the bool first, second parameters. This way you have less code to maintain: if something in the drawing part needs to change, you only need to change it once and not twice.

Naming

GraphPane p

Single-letter variable names should be avoided. They are OK as loop iteration variables, or for positions like x, y. Your code would be easier to understand if you named it pane instead.

Based on the naming guidelines, you should use PascalCasing for method names.
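The refactoring suggested above (bundling each cursor's state into one object so the drawing logic is written once) can be outlined as follows. This is a language-neutral sketch in Python with made-up Cursor/draw names standing in for the ZedGraph types, not actual ZedGraph code:

```python
class Cursor:
    """State that was previously duplicated as cursorX / cursor2X etc."""
    def __init__(self, index=-1):
        self.x = 0.0
        self.y = 0.0
        self.index = index
        self.old_x_line = None
        self.old_y_line = None

def draw_cursor(pane, cursor, point):
    """One drawing routine, called per cursor instead of two copied branches."""
    cursor.x, cursor.y = point
    cursor.old_y_line = ("vline", cursor.x)   # stand-ins for LineObj
    cursor.old_x_line = ("hline", cursor.y)
    pane.append(cursor.old_y_line)
    pane.append(cursor.old_x_line)

pane = []
first, second = Cursor(index=0), Cursor()
draw_cursor(pane, first, (1.0, 2.0))
draw_cursor(pane, second, (3.0, 4.0))   # same code path, no copy-paste
print(len(pane))  # -> 4
```

Adding a third crosshair then means constructing one more Cursor, not duplicating a hundred-line branch.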
_unix.34725
I have a situation where my ISP has blocked .rpm and .deb files from being downloaded. I am already using a proxy to access the Internet, so any chance of me updating via the Internet is bleak.

One peculiar thing though: I am able to download the .rpm of Google Chrome, but any other source, site or software I try is blocked. Also, I have to work on an unregistered RHEL 6; I've already had a hard time setting it up with a 3rd-party repository, and now I am faced with this restriction.

I want to ask for suggestions as to what I can do. I will list a few things I thought of; tell me what you think, and also make suggestions if you have any.

1. I can download the software repo through a different ISP and use it offline. Repos that point to a local location are allowed by yum.
2. I can download .tar (and derivative) files from this ISP. So do you know of any software repository which gives me only .tar (or similar) files for the software? I searched but couldn't find any.
3. Try other package management, i.e. a different flavor of Linux. I am trying different Linux distributions, like Arch Linux, Puppy Linux (though it won't serve my purpose), etc.
rpm and deb fileformat are blocked by my ISP. How to install software?
linux;rhel;software installation;tar;rpm
null
_cstheory.34294
Assume that classical one-way functions secure against quantum adversaries exist. Is it possible, given a quantum state $Q$ and a classical secret key $k$, to produce a quantum state $AuthQ$ such that:

One who knows $k$ can recover $Q$, provided that $AuthQ$ has not been altered.
If $AuthQ$ has been altered, this can be detected with high probability.

It may or may not be possible to recover the original state without the secret key. What matters is that, given the secret key, one can be confident that:

The creator of the quantum state had the secret key.
The message has not been tampered with since.
Is it possible to MAC a quantum state with a classical key under reasonable assumption?
quantum computing;cr.crypto security;quantum information
Howard Barnum, Claude Crepeau, Daniel Gottesman, Adam Smith, Alain Tapp. Authentication of Quantum Messages, FOCS 2002.http://www.cse.psu.edu/~ads22/pubs/PS-CSAIL/BCGST02-focs-final.pdfAs with encryption, there is a protocol that requires no computational assumptions. It uses a key of length about 2n+2k to authenticate n qubits with security level 2^{-k}.
_softwareengineering.180186
Possible Duplicate: I've inherited 200K lines of spaghetti code - what now?

I'm dealing with legacy code. It contains some BIG classes (line count 8000+) and some BIG methods (line count 3000+). For one of these methods I wrote a unit test that covers at least some code. This function has no specific structure; it is a mess of nested loops and conditions. I'd like to refactor, but still I have no idea how to start.

I actually started by extracting one single function. In the end it had to have 21 parameters :-/

Should I ...

1. continue extracting awful functions, knowing that in the long run I will succeed?
2. start extracting even bigger functions? So I expose the structure of this monster and then I can start the real refactoring.
3. only extract small good functions (and/or classes) just because I am a helluva clean coder?
4. do something completely different?
Refactoring several huge C++ classes / methods. How to start?
c++;class;methods;refactoring
Refactoring is something whole books have been written about; it's hard to retell all the methods and pitfalls here.

Before starting to refactor, write tests for the piece you want to refactor.
Start refactoring with the small and clean pieces of those big functions.
Find a piece that can be rewritten as a function and make it a function.
Run the tests to be sure you've done things right and no side effects appear.

After this you should have an idea of what has to be done with all the other (big) pieces. When the whole thing becomes clearer, you can make a higher-level redesign.

P.S. This is just a brief sketch. About refactoring itself, it's better to read some books (like M. Fowler - Refactoring, etc.)
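The test-first extraction loop described above can be illustrated with a toy example; the function names and the 50% bulk discount are invented purely for illustration:

```python
# Step 1: a characterization test pins down current behaviour
# before any code is moved.
def legacy_total(order):
    # Imagine this logic buried inside a 3000-line method.
    total = 0.0
    for item in order:
        price = item["price"] * item["qty"]
        if item["qty"] >= 10:        # bulk discount (50% here, for exact arithmetic)
            price *= 0.5
        total += price
    return total

assert legacy_total([{"price": 2.0, "qty": 10}]) == 10.0  # pinned behaviour

# Step 2: extract a small, clean piece as its own function...
def line_price(item):
    price = item["price"] * item["qty"]
    return price * 0.5 if item["qty"] >= 10 else price

def refactored_total(order):
    return sum(line_price(item) for item in order)

# Step 3: rerun the test against the refactored version:
# same inputs, same result, so no side effects crept in.
assert refactored_total([{"price": 2.0, "qty": 10}]) == legacy_total(
    [{"price": 2.0, "qty": 10}]
)
```

Each extraction is tiny and guarded by the test, which is what makes it safe to repeat the loop hundreds of times on an 8000-line class.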
_unix.330185
I have a Windows 10 / Linux Ubuntu PC laid out over 2 HDDs as below:

HDD 0 = 30GB / 7.5GB / 154GB Linux partitions, plus 740GB Vol K: Windows 10 backup/data.
HDD 1 = 20GB Recovery Z: / 249GB (135 used) Windows 10 C: / 100MB BDEDrive / 660GB Windows data files.

I am not too sure what the Z: and BDEDrive actually do in this set-up (I have another very similar Win 7 Pro to Win 10 Pro upgraded PC and these drives are not present, but 100MB System Reserved and 450MB Recovery are!).

The Windows 10 C: drive is a wholly owned Win 7 Pro, free-upgraded to Windows 10 Pro, and contains a number of applications, including Adobe CS Master Suite, which I would rather not reinstall. The data files are all in volume K: and backups etc. on volume J:, so C: is purely OS/applications, equalling 135GB. The Linux install currently has no valuable data or apps installed.

I wish to move Windows, and probably Linux, to a Crucial 275GB SSD. I have already secured the Windows 10 install using Macrium.

Plan overview (gleaned in my ignorance by checking a mix of web sources). My questions are at steps 5, 6 and 8, but any other advice is welcome:

1. Create a bootable USB with the media creation tool.
2. Reboot from USB.
3. Run bootrec.exe /fixmbr.
4. Restart without USB to go straight into Windows.
5. Delete Linux?? Do I simply unallocate the Linux partitions? I do want to reinstall Linux later.
6. Clone the Windows C: drive to 200GB of the SSD, leaving 500MB of the remainder for a System Reserved need (is that sensible? If so, how do I apply it?). Do I also need to clone the Z: and BDEDrive?
7. Any recovery files can go onto an HDD.
8. Once all is running OK, I then hope to reinstall Ubuntu Linux with the Linux data partition on an HDD. Do I do this using about 35GB of the SSD, leaving a small reserve on the SSD for Win 10 OS extension/usage? Or simply put it all on an HDD?
How to transfer Windows 10 / Linux Ubuntu to SSD
ssd
null
_unix.163381
Is there an easy way to get a count of the total number of patches applied to a server over a time frame, or even in summary?
Get summary count of all patches applied to a server?
centos;yum;statistics
null
_unix.134143
What's the best way to confirm that your /etc/hosts file is mapping a hostname to the correct IP address?Using a tool like dig queries an external DNS directly, bypassing the hosts file.
How to test /etc/hosts
dns;hosts
I tried this out and it seems to work as expected:

echo 1.2.3.4 facebook.com >> /etc/hosts

Then I ran:

$ getent ahosts facebook.com
1.2.3.4         STREAM facebook.com
1.2.3.4         DGRAM
1.2.3.4         RAW

I hope this helps you!
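As a side note on why getent is the right probe here (a general observation about NSS, not something specific to the answer above): it resolves names through the Name Service Switch, the same path ordinary applications use, so /etc/hosts entries are honoured, whereas dig talks to DNS only.

```shell
# getent goes through NSS (the "hosts:" line in /etc/nsswitch.conf,
# typically "files dns"), so it consults /etc/hosts exactly the way
# ordinary applications do. dig, by contrast, queries DNS directly.
getent hosts localhost
```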
_webmaster.39012
I have installed and configure Awstats on my server to see its traffic.How can I see the traffic of a given hour of a given day in Awstats? I need to know who visited what pages in 23 PM of the 13/Dec .
How to see a given hour traffic in Awstats?
awstats
null
_cstheory.22719
There exist sentences of first-order logic that are satisfiable, but satisfiable only by models of infinite size. However, all such sentences I can think of are satisfied by infinite models that can be generated by a non-terminating algorithm of finite size.

For example, there exists a sentence of first-order logic that is satisfied only by models of infinite size:

$$ \exists x \lnot \exists y S ( y, x ) \land $$
$$ \forall x \exists y S ( x, y ) \land $$
$$ \forall x \forall y \forall z ( ( S ( x, z ) \land S ( y, z) ) \to x = y ) $$

A model for this sentence can be generated (listed, enumerated) by a finite algorithm that never terminates: just list the elements of the infinite model in a sequence, one after another, and make $S$ hold only for successive pairs of such elements.

Does there exist a sentence of first-order logic that is satisfiable only in some infinite models, for which there does not exist a finite algorithm that generates them?
Does there exist a sentence of first-order logic that is satisfiable only in infinite models that do not have a finite algorithmic representation?
lo.logic
One possible answer to your question is Tennenbaum's Theorem. It states that no non-standard model of arithmetic can have only computable operations, i.e. either addition or multiplication will be non-computable given any recursive description of the elements of the model.

Now the remaining difficulty is having a single formula which has as models only non-standard models of arithmetic. Note that while arithmetic isn't finitely axiomatizable, there are conservative extensions of PA which are finite.

It's easy to build an infinite axiomatization of (an extension of) PA which has only non-standard models: add to PA a constant $c$ and the axioms
$$ c\neq 0,\ c\neq S(0),\ c\neq S(S(0)),\ldots$$
There's a trick to turn this into a finite axiom. Add the binary predicate $\mathrm{NS}$ to your language, along with the axiom
$$ \forall xy,\ \mathrm{NS}(x, y)\leftrightarrow x\neq y\wedge \mathrm{NS}(x, S(y))$$
Then the above schema is expressed by $\mathrm{NS}(c, 0)$.

Important edit: It is crucial that the induction schema (as axiomatized by your finite theory) does not apply to formulas containing $\mathrm{NS}$! Otherwise it is quite easy to show $\forall y,\ c\neq y$ and conclude $c\neq c$, a contradiction. I think that if you exclude those formulas from the induction principle, the above reasoning should go through without trouble.
_codereview.145625
I wrote the following class so that a single object can be shared across many threads. Is using the ReaderWriterLockSlim redundant in this case? Can the Interlocked functions alone do the job and ensure thread safety?

using System.Threading;

public class SharedData<T> where T : class
{
    T _data;
    ReaderWriterLockSlim _lock;

    public SharedData(T arg)
    {
        Interlocked.Exchange<T>(ref _data, arg);
        _lock = new ReaderWriterLockSlim();
    }

    public T Value
    {
        get
        {
            _lock.EnterReadLock();
            var ret = Interlocked.CompareExchange<T>(ref _data, default(T), default(T));
            _lock.ExitReadLock();
            return ret;
        }
        set
        {
            _lock.EnterWriteLock();
            Interlocked.Exchange<T>(ref _data, value);
            _lock.ExitWriteLock();
        }
    }
}
Generic thread-safe data structure
c#;multithreading
null
_unix.237355
For some reason, my backup is not working when it is started by cron.

Crontab entry:
0 10 * * * /home/yzT/BackupDaily.sh

BackupDaily.sh:
#!/bin/bash
/home/yzT/Tools/FreeFileSync/FreeFileSync /home/yzT/Tools/FreeFileSync/BackupDaily.ffs_batch

I can see cron starting my backup script in syslog:
Oct 20 10:00:01 debian CRON[2589]: (yzT) CMD (/home/yzT/BackupDaily.sh)

When I run it manually, the backup program (FreeFileSync) creates a log file on my Desktop and I can see updated files in the backup directory. But via cron I get no log file and see no updates. How can I find/fix the problem?

Edit: I found the root of the problem. I switched to a TTY and ran the script, and I get the following message: "Error: Unable to initialize GTK+, is DISPLAY set properly?". So, although the script itself uses no GUI, it seems the script wants access to the GUI application. How can I fix this?
My backup is not working when started by Cron
cron
If you are the only user on the system, you can just edit your crontab (with crontab -e) and add DISPLAY=:0.0 at the top of the file.Alternatively, you can try running your backup job like so:/home/yzT/Tools/FreeFileSync/FreeFileSync /home/yzT/Tools/FreeFileSync/BackupDaily.ffs_batch --display=:0.0
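For reference, the first suggestion as a complete crontab (the schedule and script path are the ones from the question):

```
DISPLAY=:0.0
0 10 * * * /home/yzT/BackupDaily.sh
```

Variables set at the top of a crontab apply to every job below them, so this also fixes any other GUI-touching jobs in the same file.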
_codereview.163538
I have got an assignment for an OS course that consists in the use of mutex and condition variables to synchronize N threads, each involved in the search of a character in a row of a NxN matrix. The first thread to find the character should notify the others that it did, so that the other threads stop looking for it.I wrote a solution, then tried to improve it further, but I'm not so sure about the improved version.First version#include <pthread.h>#include <printf.h>#include <unistd.h>#include <memory.h>#define N 3char chars[N][N] = { {'d', 'b', 'c'}, {'a', 'd', 'f'}, {'d', 'h', 'i'}};char to_find = 'd';struct { pthread_mutex_t mutex; pthread_cond_t cond; ssize_t row; ssize_t col;} pos = {PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, -1, -1};void *find_char(void *params) ;pthread_t threads[N];int main () { int rows[N]; int i; for (i = 0; i < N; ++i) rows[i] = i; for (i = 0; i < N; ++i) pthread_create(&threads[i], NULL, find_char, &rows[i]); pthread_cond_wait(&pos.cond, &pos.mutex); printf(character found at position [%ld][%ld], pos.row, pos.col); fflush(stdout); return NULL;}void *find_char(void *params) { const int row = *((int *) params); char buff[BUFSIZ]; sprintf(buff, starting search on row %d\n, row); write(STDOUT_FILENO, buff, strlen(buff)); int j; for (j = 0; j < N; ++j) { if (chars[row][j] == to_find) { // if two threads find char at the same time // only first one must get to signal main thread // and cancel the other threads pthread_mutex_lock(&pos.mutex); if (pos.row >= 0) { pthread_mutex_unlock(&pos.mutex); return NULL; } sprintf(buff, thread %d found the char\n, row); write(STDOUT_FILENO, buff, strlen(buff)); for (int i = 0; i < N; ++i) if (i != row) { sprintf(buff, cancelling thread %d\n, row); write(STDOUT_FILENO, buff, strlen(buff)); pthread_cancel(threads[i]); } pos.row = row; pos.col = j; pthread_mutex_unlock(&pos.mutex); pthread_cond_signal(&pos.cond); } } return NULL;}I used the global threads because I found it easier to worry about one 
this less while writing my first mutex code. The second iteration consists of trying to remove that.Second version#include <pthread.h>#include <printf.h>#include <unistd.h>#include <memory.h>#define N 3char chars[N][N] = { {'d', 'b', 'c'}, {'a', 'd', 'f'}, {'d', 'h', 'i'}};char to_find = 'd';struct { pthread_mutex_t mutex; pthread_cond_t cond; ssize_t row; ssize_t col;} pos = {PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, -1, -1};void *find_char(void *params) ;int main () { pthread_t threads[N]; int rows[N]; for (int i = 0; i < N; ++i) rows[i] = i; for (int i = 0; i < N; ++i) pthread_create(&threads[i], NULL, find_char, &rows[i]); pthread_cond_wait(&pos.cond, &pos.mutex); for (int i = 0; i < N; ++i) pthread_cancel(threads[i]); printf(character found at position [%ld][%ld], pos.row, pos.col); fflush(stdout); return NULL;}void *find_char(void *params) { const int row = *((int *) params); char buff[BUFSIZ]; sprintf(buff, starting search on row %d\n, row); write(STDOUT_FILENO, buff, strlen(buff)); int col; for (col = 0; col < N; ++col) { if (chars[row][col] == to_find) { pthread_mutex_lock(&pos.mutex); if (pos.row >= 0) { pthread_mutex_unlock(&pos.mutex); return NULL; } sprintf(buff, thread %d found the char\n, row); write(STDOUT_FILENO, buff, strlen(buff)); pos.row = row; pos.col = col; pthread_cond_signal(&pos.cond); pthread_mutex_unlock(&pos.mutex); } } return NULL;}The main doubt I have got about this is the following:What happens if the main thread cancels a thread that has entered the critical section and has locked the mutex? If a cancel request is issued while a thread has locked the mutex, will it be canceled? Will the mutex stay locked? If yes, what do we usually do to prevent this condition?P.S. Bonus pointsI just re-read the code and whilst I'm learning more on condition variables I noticed I am using a condition variable in a way that is not exactly what it was intended for. 
In fact, now that I think about it, I could just use the pos.mutex, swap the pthread_cond_wait with two pthread_mutex_lock calls, swap pthread_cond_signal with a pthread_mutex_unlock call and I'd have the same result, without even declaring the conditional variable. Why do we even need a condition variable in C? Can't we always use a mutex to replace them?In particular, what are the differences between the lock/unlock operations on a mutex that's been initialized with PTHREAD_MUTEX_INITIALIZER and locked once, and the wait/signal operations for a condition variable that's been initialized with PTHREAD_COND_INITIALIZER?
Exercise Synchronization between threads using `pthread_mutex_t` and `pthread_cond_t`
c;thread safety;pthreads;sync
null
_opensource.1065
Copyright for old works expires at some point. In most countries that happens 70 years after the author's death. Can I take a book with expired copyright (say Dickens' Oliver Twist), change it substantially (modern language, some different characters and so on), and release the result under an open license (let's say CC BY-SA)? What are the restrictions in doing so? And if it is allowed, who do I attribute as author?
Can I create a derivative of an old book?
licensing;attribution;public domain
Copyright for old works expires at some point. In most countries that happens 70 years after the author's death. Can I take a book with expired copyright (say Dickens' Oliver Twist), [...] and release the result under an open license (let's say CC BY-SA)?

According to the Berne convention, works may pass into the public domain after the life of the author + 70 years (i.e. Berne signatories have to provide this minimum protection for literary and artistic works). However, the Berne convention does not impose a maximum term of protection, and it also deals with two different sets of rights: economic rights and moral rights.

Both economic rights and moral rights may be inherited, and it is the heir who will exercise these rights after the death of the author. Any rights that are inherited in this way may pass down to future generations as long as the rights persist. When the economic rights have expired, you're allowed to make verbatim reprints without asking for permission, and you can distribute those as you like (for money or for free). I can't see any reason why you should not be able to attach a license, but since the original is without a license, anybody can ignore the license just by saying they copied the original (and not your reprint).

change it substantially (modern language, some different characters and so on)

This is what is legally called an adaptation, and permission to create adaptations may be blocked by what are called moral rights (droit moral).

In most jurisdictions, and in particular those outside of northern Europe, moral rights are treated as equal to economic rights: i.e. they expire after the life of the author + 70 years. Certainly, in the USA, rather heavy-handed adaptations of some classical works have been made, for instance of Shakespeare's Romeo and Juliet (West Side Story) and The Taming of the Shrew (10 Things I Hate About You). However, in some jurisdictions, such as Denmark, Sweden, Norway, Poland, Estonia, Latvia and Lithuania, moral rights never expire.

What are the restrictions in doing so?

First, if you're allowed to make an adaptation, you hold the copyright of the adaptation. So you can impose any terms you like (including CC BY-SA) on the adaptation.

However, if moral rights are intact, you may not be allowed to make an adaptation. You're only allowed to adapt if the adaptation respects the integrity of the work and the author. For instance, a be-bop adaptation by Arne Domnérus of the Norwegian national anthem (Ja, vi elsker, composed by Rikard Nordraak, 1842-1866) was in 1977 (111 years after the death of Nordraak) declared by a Norwegian court to violate Nordraak's moral rights and had to be pulled from the Norwegian market. It should be said that this is an extremely rare example.

And if it is allowed, who do I attribute as author?

Yes, if it is an adaptation, then moral rights require you to credit the original author (e.g. "adapted from 'Oliver Twist' by Charles Dickens"). However, if it is an adaptation, you, not Dickens, are the author of the adaptation, so you list yourself as the author. Moral rights also mean that you're not allowed to state that Mr. Dickens is in any way responsible for the adaptation.

However, the line between what is considered an adaptation and what is a new work that merely has some plot lines in common with an earlier work is blurred. West Side Story is obviously an adaptation of Romeo and Juliet (and Shakespeare adapted it from the 1562 poem The Tragical History of Romeus and Juliet by Arthur Brooke). However, Wuthering Heights and countless other tragic love stories where societal norms prevent a happy ending are not.
_cs.52097
I have been taking a language theory class, and we learned about Brzozowski derivatives recently. In class it occurred to me that they could be used to implement a simple regular expression matcher. If you take the derivative of a regular expression with respect to a given string and the resulting expression matches the empty string ($\varepsilon$), then the original expression matches the string used in the derivative:

$$w \text{ matches } e \Leftrightarrow \varepsilon \text{ matches } D_w(e)$$

I did a web search and, as expected, other people had the same idea before. There is an implementation here and another one here. They are short and simple.

Is this used in practice anywhere? I'm not sure, but I believe that most regular expression matchers are implemented using either finite automata or backtracking algorithms (like Perl regexps). Why is this the case? Is the technique I mentioned too slow? Is it missing functionality? Does anybody know what the complexity of regular expression matching using derivatives is?
Implementing regular expression matching using Brzozowski derivatives
formal languages;regular expressions
Yes. This approach has been used before. I've seen it used in a research project, where the goal was to build a regexp matcher that could be formally verified to be correct (with as little effort as possible). This algorithm was implemented in Coq and then proven correct. The nice thing is that it is relatively easy to prove correct. In contrast, a table-driven lexer would be harder to prove correct, because one would have to prove that the table was constructed correctly.I think that table-driven lexers are often faster in practice. Your scheme essentially computes the transition table on the fly. Table-driven lexers involve pre-computing the transition table, and now you only need to do a few instructions per character read from the input (read one character, do a table lookup based on the current state and just-read character, update the current state). Therefore, their constant factor can be lower.To be clear, by table-driven lexer, I mean something that converts the regexp to a DFA and then precomputes a table that represents the transition table. This is useful in cases where the regexp is fixed and known at compile time. For many regexps seen in practice, the size of the DFA is not too large and so this leads to a fairly efficient matcher algorithm.
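To make the construction from the question concrete, here is a minimal, deliberately naive derivative-based matcher in Python (a sketch, not anyone's production code). Regexes are encoded as nested tuples, and no simplification of intermediate expressions is done, which is exactly why the naive version can blow up on long inputs.

```python
# Brzozowski-derivative regex matcher. Regexes are tuples:
#   ('empty',)      matches nothing
#   ('eps',)        matches only ""
#   ('chr', c)      matches the single character c
#   ('cat', r, s)   concatenation, ('alt', r, s) union, ('star', r) Kleene star

def nullable(r):
    """Does r match the empty string?"""
    tag = r[0]
    if tag in ('empty', 'chr'):
        return False
    if tag in ('eps', 'star'):
        return True
    if tag == 'cat':
        return nullable(r[1]) and nullable(r[2])
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])

def deriv(r, c):
    """Brzozowski derivative of r with respect to character c."""
    tag = r[0]
    if tag in ('empty', 'eps'):
        return ('empty',)
    if tag == 'chr':
        return ('eps',) if r[1] == c else ('empty',)
    if tag == 'cat':
        left = ('cat', deriv(r[1], c), r[2])
        # d(rs) = d(r)s + [r nullable] d(s)
        return ('alt', left, deriv(r[2], c)) if nullable(r[1]) else left
    if tag == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'star':
        # d(r*) = d(r) r*
        return ('cat', deriv(r[1], c), r)

def matches(r, w):
    for c in w:
        r = deriv(r, c)
    return nullable(r)

# Pattern (ab)*a
pattern = ('cat', ('star', ('cat', ('chr', 'a'), ('chr', 'b'))), ('chr', 'a'))
print(matches(pattern, 'aba'))  # True
print(matches(pattern, 'ab'))   # False
```

Practical implementations add "smart constructors" that simplify as they build (e.g. alt(r, empty) = r) and memoize derivatives by expression, which amounts to computing the DFA lazily; that is what keeps the expression from growing without bound.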
_unix.343798
I have copied the rm executable from my machines /bin/rm to another linux machine that happens to be so minimal that it does not include the rm command.When I tried to execute the rm command I got this error:/bin/rm: /bin/rm: 1: Syntax error: ( unexpectedWhy won't it work? How could I add the rm functionality to this box?(This box does not have a package manager install either.)
Why would a copied rm executable not work on another linux machine?
linux;command line
rm is a binary file and therefore architecture dependent. It would only work if you copy from the same architecture and with the same required libraries installed.Alternatively, you can compile it from source code or install the binary package. In Debian systems, it's a package.In case you already have a binary and want to know its architecture, use file or objdump commands.
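The checks the answer mentions (file, objdump) can be supplemented with ldd and uname -m, which are usually present even on fairly small systems; run them on the target machine:

```shell
# Does the binary's architecture match this machine, and can the binary
# find its shared libraries here?
uname -m        # this machine's architecture, e.g. x86_64
ldd /bin/rm     # shared libraries a dynamically linked binary needs
```

An architecture mismatch (or a missing dynamic loader/library) is a likely cause of the "Syntax error: ( unexpected" symptom: when the kernel refuses to exec the foreign binary, the shell falls back to interpreting the ELF bytes as a script. A statically linked rm (e.g. from busybox) sidesteps the library problem, though not the architecture one.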
_codereview.93213
I'm working in MVC5/WebAPI using ASP.NET Identity, with a custom storage provider for my UserStore, custom ApplicationUser inheriting from IdentityUser, and a pretty expansive configuration in the project (both MVC5 and WebAPI endpoints in the same project).I've been working on the user management/settings controller (ManageController), following the same principles the template projects use; however, with the additional fields I have in my ApplicationUser, it is getting to the point that my view model is starting to be a duplicate of the ApplicationUser class. I wanted someone to take a look at some of my code and let me know if they think the way I'm setting things up currently is appropriate, or if I am cluttering a view model with fields.ManageViewModels.csusing System.Collections.Generic;using System.ComponentModel.DataAnnotations;using Microsoft.AspNet.Identity;using Microsoft.Owin.Security;using System.Security.Claims;namespace Disco.Models{ public class IndexViewModel { public string Email { get; set; } public bool HasPassword { get; set; } public IList<UserLoginInfo> Logins { get; set; } public string PhoneNumber { get; set; } public bool TwoFactor { get; set; } public bool BrowserRemembered { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string Picture { get; set; } public string Handle { get; set;} } public class ManageLoginsViewModel { public IList<UserLoginInfo> CurrentLogins { get; set; } public IList<AuthenticationDescription> OtherLogins { get; set; } } public class FactorViewModel { public string Purpose { get; set; } } public class SetPasswordViewModel { [Required] [StringLength(100, ErrorMessage = The {0} must be at least {2} characters long., MinimumLength = 6)] [DataType(DataType.Password)] [Display(Name = New password)] public string NewPassword { get; set; } [DataType(DataType.Password)] [Display(Name = Confirm new password)] [Compare(NewPassword, ErrorMessage = The new password and confirmation 
password do not match.)] public string ConfirmPassword { get; set; } } public class ChangePasswordViewModel { [Required] [DataType(DataType.Password)] [Display(Name = Current password)] public string OldPassword { get; set; } [Required] [StringLength(100, ErrorMessage = The {0} must be at least {2} characters long., MinimumLength = 6)] [DataType(DataType.Password)] [Display(Name = New password)] public string NewPassword { get; set; } [DataType(DataType.Password)] [Display(Name = Confirm new password)] [Compare(NewPassword, ErrorMessage = The new password and confirmation password do not match.)] public string ConfirmPassword { get; set; } } public class AddPhoneNumberViewModel { [Required] [Phone] [Display(Name = Phone Number)] public string Number { get; set; } } public class VerifyPhoneNumberViewModel { [Required] [Display(Name = Code)] public string Code { get; set; } [Required] [Phone] [Display(Name = Phone Number)] public string PhoneNumber { get; set; } } public class ConfigureTwoFactorViewModel { public string SelectedProvider { get; set; } public ICollection<System.Web.Mvc.SelectListItem> Providers { get; set; } }}ApplicationUser.csusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks;using Microsoft.AspNet.Identity;using System.Security.Claims;using Newtonsoft.Json;namespace Schloss.AspNet.Identity.Neo4j{ public class ApplicationUser : IdentityUser { public string FirstName { get; set; } public string LastName { get; set; } public DateTimeOffset DateOfBirth { get; set; } // @ handle / username public string Handle { get; set; } public string Picture { get; set; } public char Gender { get; set; } public string DeviceId { get; set; } // Most recently used mobile phone ID (iPhone, iPad, Android, etc) // Address public string City { get; set; } public string State { get; set; } public string ZIPCode { get; set; } public string County { get; set; } public string Country { get; set; } public string 
Address1 { get; set; } public string Address2 { get; set; } public string Address3 { get; set; } // SignalR public HashSet<string> ConnectionIds { get; set; } // SignalR connections set (for handling multiple live sessions, propagation of push notifications) //public static string Labels { get { return User; } } // Helper Properties [JsonIgnore] public String FullName { get { return (FirstName + + LastName).Trim(); } // helper property to format user's full name } public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager) { // Note the authenticationType must match the one defined in CookieAuthenticationOptions.AuthenticationType var userIdentity = await manager.CreateIdentityAsync(this, DefaultAuthenticationTypes.ApplicationCookie); // Add custom user claims here return userIdentity; } public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager, string authenticationType) { // Note the authenticationType must match the one defined in CookieAuthenticationOptions.AuthenticationType var userIdentity = await manager.CreateIdentityAsync(this, authenticationType); // Add custom user claims here return userIdentity; } }}ManageController.cs (just the Index view)// // GET: /Manage/Index public async Task<ActionResult> Index(ManageMessageId? message) { ViewBag.StatusMessage = message == ManageMessageId.ChangePasswordSuccess ? Your password has been changed. : message == ManageMessageId.SetPasswordSuccess ? Your password has been set. : message == ManageMessageId.SetTwoFactorSuccess ? Your two-factor authentication provider has been set. : message == ManageMessageId.Error ? An error has occurred. : message == ManageMessageId.AddPhoneSuccess ? Your phone number was added. : message == ManageMessageId.RemovePhoneSuccess ? Your phone number was removed. 
: ; var userId = User.Identity.GetUserId(); var user = await UserManager.FindByIdAsync(userId); var model = new IndexViewModel { Email = await UserManager.GetEmailAsync(userId), HasPassword = await UserManager.HasPasswordAsync(userId), PhoneNumber = await UserManager.GetPhoneNumberAsync(userId), TwoFactor = await UserManager.GetTwoFactorEnabledAsync(userId), Logins = await UserManager.GetLoginsAsync(userId), BrowserRemembered = await AuthenticationManager.TwoFactorBrowserRememberedAsync(userId), FirstName = user.FirstName, LastName = user.LastName, Picture = user.Picture, Handle = user.Handle }; return View(model); }
ASP.NET View Model / Identity Model
c#;asp.net
null
_unix.326892
I just transferred files and folders from my local machine to my web server using SFTP. Doing ls -l on both machines indicates that the file and folder permissions seem to have changed. Why would this happen?
File permissions changed after SFTP
permissions;sftp
New files copied over are generally filtered through the umask when written to a new location. To preserve the permissions as at the source, use scp -p (see also cp -p; rsync -p).
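A small local illustration of the umask effect described above, using cp rather than SFTP and throwaway files under /tmp: the plain copy gets source mode filtered by the umask, while cp -p preserves the source permissions.

```shell
# With umask 077, a newly created copy gets 644 & ~077 = 600,
# while cp -p keeps the original 644.
umask 077
rm -f /tmp/demo_src /tmp/demo_plain /tmp/demo_kept
touch /tmp/demo_src
chmod 644 /tmp/demo_src
cp /tmp/demo_src /tmp/demo_plain     # new file: permissions filtered to 600
cp -p /tmp/demo_src /tmp/demo_kept   # permissions preserved: 644
stat -c '%a %n' /tmp/demo_plain /tmp/demo_kept
```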
_webapps.101681
I've created several small sheets, all with the same number of columns, and I'd like to produce one large sheet consisting of all the rows from the smaller sheets. I don't want to perform any deduplication, since each row contains important data. Is there any way I can do this? The closest thing I've been able to find is:

={'Sheet1'!A:D, 'Sheet2'!A:D}

The trouble here is that the columns are appended to one another, not the rows. Meanwhile, if I do this (note the semicolon instead of the comma):

={'Sheet1'!A:D; 'Sheet2'!A:D}

I get the data from the first sheet and no data from the second.
How to append rows from multiple sheets into one master sheet?
google spreadsheets
Give row numbers instead of column letters, and separate the ranges by semicolon ; (vertical stacking instead of horizontal).={Sheet1!1:3; Sheet2!2:7}This assumes both sheets 1 and 2 have the same number of columns. If this is not the case, you'll need to specify which columns you want:={Sheet1!A1:Z3; Sheet2!A2:Z7}To get all rows of another sheet, use a construction with indirect:=arrayformula({Sheet1!1:4; indirect(Sheet2!1: & max(row(Sheet2!A:A)))})Here, Sheet2!1: & max(row(Sheet2!A:A)) constructs a string such as Sheet2!1:1000, with the latter number obtained as the maximum of row numbers.
_cs.44261
In the context of communication complexity I see a definition of differential privacy which isn't totally clear to me as to why it makes sense. So the two parties $A$ and $B$ draw two strings $X$ and $Y$ from the set $S^n$ where $S$ is some finite set. Let $P$ be the protocol. Now if $z_1 = (X_1,Y_1)$ and $z_2 = (X_2,Y_2)$ are two instances drawn with a probability distribution $\mu$ over the set $S^n \times S^n$ then the protocol $P$ is called $\epsilon$-differentially private if the following holds: $$e^{-2 \epsilon n} \leq \Pr[P(z_1) = p] / \Pr[P(z_2) = p] \leq e^{2 \epsilon n}$$Now why does this make sense? What's the intuition? How is this related to the bounded derivative definition?
About the definition of differential privacy in communication complexity
complexity theory;terminology;cryptography;communication complexity
null
_webmaster.3435
I have a somewhat complicated situation. We have a blog, let's say at myblog.com, and it is running fine. However, we now want to cut the budget and use a free hosting service, isgreat.org. The service allows us to point a domain at it so that we can host our blog.

The problem is that I'll now have to change the domain's host record in order to start testing it. I obviously want a way to test it before actually changing anything. Using an alias like test.myblog.com requires the domain to be managed in the service's control panel, which I don't want either.

So, the question is: can I type the IP address of the host in the browser and somehow tell it that I am referring to it under the domain name myblog.com, so that it will show me my blog instead of a default page?
How to test the migration of a domain without actually changing the domain?
web hosting;domains;dns;free;migration
You can edit your hosts file so that the domain name points to a different IP. For example, on Windows the hosts file is in:

\Windows\System32\drivers\etc\

On Ubuntu it is in:

/etc/

So you find the file, open it with a simple text editor, and write the new IP and the domain name. The entry must look something like:

92.152.175.86 mydomain.com
_webapps.79173
I recently created a Cognito Form and tried to embed it on a Weebly website. However, the code did not produce a form in the editor or when the website was published. I have used Weebly a long time and have never had any problem with inserting any code, be it forms or another widget.I also tried embedding the form on a blog and this worked. I am wondering if anyone knows why Cognito forms is not compatible with Weebly or if there is something I can change that will fix this issue.Not working page (in the white space): http://nayaktech1.weebly.com/quote.htmlWorking blog (all the way at the bottom above follow by email): http://blog.earthyes.org/Thanks!
Cognito Forms: Form Not Appearing in Weebly
cognito forms;weebly
null
_unix.167731
After finishing a customized CentOS-7 Kickstart installation from a CD-ROM, the system booted as expected and according to the ks.cfg file. At the GRUB menu the options I have are:

CentOS Linux, with Linux 3.10.0-123.el7.x86_64.debug

in addition to the rescue option. Since I have another system (regular graphical installation) without the debug suffix in the menu entry, I was wondering if that suffix indicates any failures/problems/issues in the installed system, and if not, how I can customize this name. Any ideas?
CentOS-7 Kernel Image has a debug suffix after a Kickstart installation
centos;grub2;kickstart
You have the kernel-debug package installed:Description : The kernel package contains the Linux kernel (vmlinuz), the core of any : Linux operating system. The kernel handles the basic functions : of the operating system: memory allocation, process allocation, device : input and output, etc. : : This variant of the kernel has numerous debugging options enabled. : It should only be installed when trying to gather additional information : on kernel bugs, as some of these options impact performance noticably.Check your Kickstart script or remove the package with yum.
_unix.34323
I bought a Godex barcode printer and have to install its drivers. I managed it on Ubuntu; however, I have some problems doing it here.

For example, the README file says:

$ sudo aptitude install libcupsimage2-dev

But the shell tells me:

sudo: aptitude: command not found

Moreover, when I write sudo apt-get install aptitude it says:

sudo: apt-get: command not found

And then, when I run ./configure the shell says the following:

./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... no
checking for gcc... no
checking for cc... no
checking for cl.exe... no
configure: error: no acceptable C compiler found in $PATH
See `config.log' for more details.

How can I fix it? How can I install this package? Thanks for your help already now.
How to install rastertoezpl on Pardus?
compiling;software installation
Each Linux distribution (or family of distributions) has its own package management system. Ubuntu uses .deb packages and has the APT tool family including Aptitude. Pardus has its own package manager called PiSi.Your immediate problem is no acceptable C compiler. You need to install a C compiler, and probably other development packages. See Compiling Software and Packaging in the Pardus Wiki; start by installing the basic development tools (the equivalent of Ubuntu's build-essentials):sudo pisi install -c system.devel
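If configure still reports missing CUPS headers after the development tools are in place, the remaining steps might look like the sketch below. The CUPS package name is a guess (check pisi search cups on your system), so treat this as an outline rather than exact commands:

```shell
# Guarded sketch for Pardus: install build tools via PiSi, then build the driver.
if command -v pisi >/dev/null 2>&1; then
    status="pardus"
    sudo pisi install -c system.devel     # compiler, make, headers
    sudo pisi install cups-devel          # HYPOTHETICAL name; verify with "pisi search cups"
    ./configure && make && sudo make install
else
    status="not-pardus"
    echo "pisi not found: this sketch only applies to Pardus"
fi
```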
_cstheory.22241
Suppose $X$ is a message which takes values in the set $\{x_1, \dots, x_m\}$ with probability distribution $P_X$. We transmit the message $X$ over the channel $P_{Y|X}$, which outputs $Y$ taking values in a finite alphabet $\mathcal{Y}$. Suppose we take $n$ independent samples from the channel output; namely, $Y^n=(Y_1, Y_2, \dots, Y_n)$ with distribution $\prod_{k=1}^n P_{Y|X}(y_k|x)$, where $x$ is the message sent over the channel.

Hence, we have $m$ distributions $\{P_{Y^n|X}(\cdot|x_1), \dots, P_{Y^n|X}(\cdot|x_m)\}$. I am looking for a deterministic randomness extractor which works for all of these $m$ distributions; that is, a function $Ext:\mathcal{Y}^n \to \{0,1\}^r$ such that, for $Y^n$ distributed as any one of these $m$ distributions, $Ext(Y^n)$ is $\epsilon$-close to the uniform distribution over $\{0,1\}^r$ (in the total variation distance sense). What is the largest number of random bits that we can extract, i.e., the largest $r$? Is it $\min_{x}H_{\infty}(P_{Y^n|X}(\cdot|x))$?

The motivation for this comes from privacy.
deterministic randomness extractor and privacy
cr.crypto security;randomness;it.information theory;extractors;privacy
null
_unix.81115
A similar problem has been reported here with no solution: https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0

I've added the devtools repository and upgraded gcc on CentOS:

cd /etc/yum.repos.d
wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo
yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++
scl enable devtoolset-1.1 bash

The result for my gcc is this:

[root@hhvm-build-centos cmake-2.8.11.1]# gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC)

I then tried to install cmake from http://www.cmake.org/cmake/resources/software.html#latest but I keep running into this error:

Linking CXX executable ../bin/ccmake
/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad'
/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line
/lib64/libtinfo.so.5: could not read symbols: Invalid operation
collect2: error: ld returned 1 exit status
gmake[2]: *** [bin/ccmake] Error 1
gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2
gmake: *** [all] Error 2

The problem seems to come from the newly installed gcc, because the build works with the default install. Is there a solution to this problem?
Centos CMake Does Not Install Using gcc 4.7.2
linux;centos;gcc
null
_unix.22851
Last Friday I upgraded my Ubuntu server to 11.10, which now runs with a 3.0.0-12-server kernel. Since then the overall performance has dropped dramatically. Before the upgrade the system load was about 0.3, but currently it is at 22-30 on an 8-core CPU system with 16GB of RAM (10GB free, no swap used).

I was going to blame the BTRFS file system driver and the underlying MD array, because [md1_raid1] and [btrfs-transacti] consumed a lot of resources. But all the [kworker/*:*] threads consume a lot more.

sar has output something similar to this constantly since Friday:

11:25:01 CPU %user %nice %system %iowait %steal %idle
11:35:01 all 1,55 0,00 70,98  8,99 0,00 18,48
11:45:01 all 1,51 0,00 68,29 10,67 0,00 19,53
11:55:01 all 1,40 0,00 65,52 13,53 0,00 19,55
12:05:01 all 0,95 0,00 66,23 10,73 0,00 22,10

And iostat confirms a very poor write rate:

sda 129,26 3059,12  614,31 258226022  51855269
sdb  98,78   24,28 3495,05   2049471 295023077
md1 191,96  202,63  611,95  17104003  51656068
md0   0,01    0,02    0,00      1980       109

The question is: how can I track down why the kworker threads consume so many resources (and which one)? Or better: is this a known issue with the 3.0 kernel, and can I tweak it with kernel parameters?

Edit: I updated the kernel to the brand new version 3.1 as recommended by the BTRFS developers, but unfortunately this didn't change anything.
Why is kworker consuming so many resources on Linux 3.0.0-12-server?
kernel;performance;cpu;load
I found this thread on lkml that answers your question a little. (It seems even Linus himself was puzzled as to how to find out the origin of those threads.)

Basically, there are two ways of doing this:

1. Trace the workqueue events:

$ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
$ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
(wait a few secs)

For this you will need ftrace to be compiled in your kernel, and to enable it with:

mount -t debugfs nodev /sys/kernel/debug

More information on the function tracer facilities of Linux is available in the ftrace.txt documentation. This will output what the threads are all doing, and is useful for tracing multiple small jobs.

2. Dump the stack of a busy thread:

cat /proc/THE_OFFENDING_KWORKER/stack

This will output the stack of a single thread doing a lot of work. It may allow you to find out what caused this specific thread to hog the CPU (for example). THE_OFFENDING_KWORKER is the pid of the kworker in the process list.
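The workqueue-tracing steps can be wrapped in one small guarded script; the paths assume the standard debugfs mount point and ftrace compiled in, and it degrades gracefully when run without root:

```shell
TRACING=/sys/kernel/debug/tracing
if [ -w "$TRACING/set_event" ]; then
    # Enable the event that fires whenever work is queued to a kworker,
    # then collect a few seconds of trace output.
    echo workqueue:workqueue_queue_work > "$TRACING/set_event"
    timeout 5 cat "$TRACING/trace_pipe" > /tmp/kworker-trace.txt
    note="trace written to /tmp/kworker-trace.txt"
else
    note="ftrace not available here (need root, debugfs mounted, and CONFIG_FTRACE)"
fi
echo "$note"
```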
_codereview.157781
I have written the following JavaScript code and want to ask if it is written as per best practices. Please recommend any changes to be made in the code. The code takes a few variables as input and then validates that all of them have some value and are not blank/null.

var var1 = false;
var missingVAL = false;
var validationMsg = "";

host = scriptletContext.get("host");
user = scriptletContext.get("user");
serverID = scriptletContext.get("serverId");
runID = scriptletContext.get("runId");

if (host == false)
{
    validationMsg = "serverHostName (serverHostName) is missing.";
    var1 = true;
}
if (user == false)
{
    if (var1 == true)
    {
        validationMsg = validationMsg + " Requestor user name (requestorUserName) is missing.";
    }
    else
    {
        validationMsg = "Requestor user name (requestorUserName) is missing.";
    }
    var1 = true;
}
if (serverID == false)
{
    if (var1 == true)
    {
        validationMsg = validationMsg + " (vascoServerId) is missing.";
    }
    else
    {
        validationMsg = "(vascoServerId) is missing.";
    }
    var1 = true;
}
if (runID == false)
{
    if (var1 == true)
    {
        validationMsg = validationMsg + " OAC run type (runTypeId) is missing.";
    }
    else
    {
        validationMsg = "OAC run type (runTypeId) is missing.";
    }
    var1 = true;
}
if (var1 == true)
{
    scriptletContext.putGlobal("validationMsg", validationMsg);
}
scriptletContext.putGlobal("missingVAL", var1);
Validating input variables
javascript;validation
A quick note on style: in JavaScript code it's much more common to have the opening curly bracket on the same line as the expression that precedes it, rather than on a new line.

var1, based on the code, indicates whether there has been a validation problem; to keep the same logical flow, we could name it isInvalid for more clarity. Along the same lines, missingVAL is never used; perhaps you meant to use that one instead of var1?

It's a good practice to use strict equality/inequality (=== and !==) instead of == and != in many cases, to avoid some unpredictable behavior that is otherwise caused by the JavaScript engine. For instance:

"1" == 1   // true
"1" === 1  // false
null == undefined   // true
null === undefined  // false
"" == 0   // true
"" === 0  // false

Also, when comparing values to true or false, it's simpler to just let the truthiness or falseness of the value be the condition, such as:

// instead of this:
if (someValue == true && someOtherValue == false) { ... }
// simply do this: (note the use of the not ! operator)
if (someValue && !someOtherValue) { ... }

You are using the same validation routine 4 separate times; instead of copying the code, it's simpler to make one function that does the work and then just call it 4 times. Your validations do this:

1. Check that the input value is not false (empty, undefined or null)
2. If it fails, add a message to a string noting that the validation failed
3. Set the validation flag variable to true to indicate there is a validation problem.

Step (3) could reasonably be removed by using validationMsg, since if there have been no validation failures by the end, it will still be an empty string. We could write a function like this (code comments added for explanation):

function validate(value, messageIfInvalid) {
    // null, undefined, 0 and "" evaluate to false
    if (!value) {
        // empty string evaluates to false and non-empty to true
        if (validationMsg) {
            // if not empty, add a space at the end
            validationMsg += " ";
        }
        // add the messageIfInvalid
        validationMsg += messageIfInvalid;
    }
}

This function introduces a side effect, which is to modify the globally-scoped variable validationMsg; that makes it less predictable. It would generally be better to pass validationMsg as an argument and then return it with any applicable modifications made by the function, like so:

// also pass validationMsg as argument
function validate(value, validationMsg, messageIfInvalid) {
    if (!value) {
        if (validationMsg) {
            validationMsg += " ";
        }
        validationMsg += messageIfInvalid;
    }
    // return the validationMsg, whether or not it was modified
    return validationMsg;
}

Then at the very end of the script, we can check whether validationMsg is an empty string or not to find out if there have been any invalid inputs:

// this condition will be true unless validationMsg is an empty string
if (validationMsg) {
    scriptletContext.putGlobal("validationMsg", validationMsg);
    scriptletContext.putGlobal("missingVAL", true);
} else {
    scriptletContext.putGlobal("missingVAL", false);
}

There is one potential issue with this logic: if a value checked as invalid/false but the messageIfInvalid argument was passed an empty string, then no message would be added to validationMsg. We can add another condition to address this special case and just add a generic validation message, like so:

if (!value) {
    if (!messageIfInvalid) {
        messageIfInvalid = "a value is missing";
    }
    if (validationMsg) {
        validationMsg += " ";
    }
    validationMsg += messageIfInvalid;
}

This is what it looks like with all of the above applied:

host = scriptletContext.get("host");
user = scriptletContext.get("user");
serverId = scriptletContext.get("serverId");
runId = scriptletContext.get("runId");

function validate(value, validationMsg, messageIfInvalid) {
    if (!value) {
        if (!messageIfInvalid) {
            messageIfInvalid = "a value is missing";
        }
        if (validationMsg) {
            validationMsg += " ";
        }
        validationMsg += messageIfInvalid;
    }
    return validationMsg;
}

var validationMsg = "";
validationMsg = validate(host, validationMsg, "serverHostName (serverHostName) is missing.");
validationMsg = validate(user, validationMsg, "Requestor user name (requestorUserName) is missing.");
validationMsg = validate(serverId, validationMsg, "(vascoServerId) is missing.");
validationMsg = validate(runId, validationMsg, "OAC run type (runTypeId) is missing.");

if (validationMsg) {
    scriptletContext.putGlobal("validationMsg", validationMsg);
    scriptletContext.putGlobal("missingVAL", true);
} else {
    scriptletContext.putGlobal("missingVAL", false);
}
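A quick way to sanity-check the refactored validate function outside the scriptlet environment (plain Node, no scriptletContext needed):

```javascript
// Same validate() shape as above, exercised with sample inputs.
function validate(value, validationMsg, messageIfInvalid) {
    if (!value) {
        if (!messageIfInvalid) {
            messageIfInvalid = "a value is missing";
        }
        if (validationMsg) {
            validationMsg += " ";
        }
        validationMsg += messageIfInvalid;
    }
    return validationMsg;
}

var msg = "";
msg = validate("", msg, "serverHostName (serverHostName) is missing.");      // fails: empty
msg = validate("alice", msg, "Requestor user name (requestorUserName) is missing."); // passes
msg = validate(null, msg, "(vascoServerId) is missing.");                    // fails: null

console.log(msg);
// -> "serverHostName (serverHostName) is missing. (vascoServerId) is missing."
```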
_cstheory.18863
Are there any known exact algorithms for Euclidean TSP that take advantage of the inherent structure of the problem? Do any of these algorithms have better asymptotics than $O(2^n n^2)$ of a DP solution via Held-Karp? I am also interested in simplicity of implementation (I've seen a 10-line implementation of Held-Karp in Python).Is this still the current state of the art?
Euclidean TSP algorithms
tsp;dynamic programming
null
_unix.203935
When I do ls | grep png, the output of grep is:

2015-05-15-200203_1920x1080_scrot.png
2015-05-16-025536_1920x1080_scrot.png

(filename, newline, filename, newline)

Then, echo $(ls | grep png) outputs:

2015-05-15-200203_1920x1080_scrot.png 2015-05-16-025536_1920x1080_scrot.png

(filename, space from word splitting, filename, newline !!from echo!!)

That is all OK, but when I do this to prevent the word splitting: echo "$(ls | grep png)", the output is:

2015-05-15-200203_1920x1080_scrot.png
2015-05-16-025536_1920x1080_scrot.png

And my question is: where is the second newline (one should be from grep and one from echo)?
Why there isn't a new line at the end of quoting a subshell and passing the results to echo?
bash;shell;echo;command substitution
That's the newline from echo; you can verify by using echo -n to suppress the trailing newline:

echo -n "$(ls | grep png)"

Command substitution removes all trailing newlines; the last newline was added by echo. grep has nothing to do with it here.
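The two effects can be separated with a byte count; a small demonstration in any POSIX shell:

```shell
s=$(printf 'foo\n\n\n')          # command substitution strips ALL trailing newlines
printf '%s' "$s" | wc -c         # 3 bytes: just "foo", no newline left at all
echo "$s" | wc -c                # 4 bytes: echo adds exactly one newline back
```

So however many newlines the inner command emits at the end, the substitution discards them all, and echo contributes the single newline you see in the final output.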
_webapps.91209
I'm trying to average a range, but only if one condition is true.

   A  B  C
1: A  1  2
2: B  3  4
3: C  5  6
4: D  7  8
5: A  1  6
6: E  8  9
7: E  5  8

What I want is: if the char I'm looking for is A, then I want my average to produce something of the form:

(1/2 + 1/6) / 2

=AVERAGEIF(A:A, "="&SomeOtherVariable, B:B/C:C)

I want to avoid creating an extra column, because the sheet that this data is on is fed by a Google Form and I am not sure how the form will handle having a new column added to it.
Conditional Average with a function
google spreadsheets
Short answer

Instead of AVERAGEIF, use an array formula, as AVERAGEIF requires a range as its third parameter.

Explanation

The formula below calculates the average for each category without an auxiliary column, and since it uses open-ended references, it will not need to be modified when new form responses are submitted.

=ArrayFormula( QUERY( FILTER( {'Sheet1'!A:A,'Sheet1'!B:B/'Sheet1'!C:C},LEN('Sheet1'!A:A) ), "select Col1,AVG(Col2) group by Col1") )

Using the example source data provided by the OP, the result is the following:

+---+---+--------------+
|   | A | B            |
+---+---+--------------+
| 1 |   | avg          |
| 2 | A | 0.3333333333 |
| 3 | B | 0.75         |
| 4 | C | 0.8333333333 |
| 5 | D | 0.875        |
| 6 | E | 0.7569444444 |
+---+---+--------------+

{'Sheet1'!A:A,'Sheet1'!B:B/'Sheet1'!C:C} creates an array with two columns: the first one is the category column, the second the quotient of column B divided by column C. The FILTER function is used to remove blank rows. The QUERY function is used to calculate the average for each category in the first column.

References
Using arrays in Google Sheets
FILTER
QUERY
_codereview.143995
I'm getting the right results for the sample outputs, but the OJ is giving a TLE (3 seconds max). The question is here.

int main()
{
    double h, u, d, f;
    while (scanf("%lf%lf%lf%lf", &h, &u, &d, &f) != EOF)
    {
        //cin>>h;
        // if(h==0)
        //     break;
        //cin>>u>>d>>f;
        double fu = .01 * f * u;
        double used_fu = 0;
        int day = 1;
        double crossed = 0;
        int success = 0;
        while (1)
        {
            crossed += (u - used_fu);
            if (crossed > h)
            {
                success++;
                break;
            }
            crossed -= d;
            if (crossed < 0)
                break;
            used_fu += fu;
            day++;
        }
        if (success == 0)
            cout << "failure";
        else
            cout << "success";
        cout << " on day " << day << "\n";
    }
    return 0;
}
Optimizing code for UVAJ 573 The Snail
c++;programming challenge
null
_unix.269057
I'm new to Linux and trying to understand how redirections work. I have been testing various syntaxes for redirecting stdout and stderr to the same file, which don't all produce the same results.

For example, if I try to list 2 files that don't exist (file1 and file2) and 2 that do (foo and fz):

Syntax #1 (without redirection):

$ ls file1 foo fz file2

Here's the output I get in the terminal:

ls: cannot access file1: No such file or directory
ls: cannot access file2: No such file or directory
foo  fz

Syntax #2: now, with redirection:

$ ls file1 foo fz file2 > redirect 2>&1

The redirect file contains the same as the result for Syntax #1:

ls: cannot access file1: No such file or directory
ls: cannot access file2: No such file or directory
foo
fz

So with both of the above syntaxes, it seems that the shell prints stderr first, then stdout.

Syntax #3: now, if I try with either of the following syntaxes:

$ ls file1 foo fz file2 > redirect 2> redirect

or

$ ls file1 foo fz file2 2> redirect > redirect

Then the redirect file will contain this:

foo
fz
nnot access file1: No such file or directory
ls: cannot access file2: No such file or directory

Here it looks like stdout is printed before stderr, but then we see that the beginning of stderr is cropped by the same number of characters as stdout. The stdout is 6 chars long (foo fz, carriage return included), so the first 6 chars of the stderr (ls: ca) have been overwritten by stdout. So it actually seems like stderr was printed first, and stdout was then printed over stderr instead of being appended to it. However, it would have made more sense to me if stderr had been completely erased and replaced with stdout, rather than just partially overwritten.

Syntax #4: the only way I have found to correct Syntax #3 is by adding the append operator to the stdout:

$ ls file1 foo fz file2 >> redirect 2> redirect

or

$ ls file1 foo fz file2 2> redirect >> redirect

Which produces the same as Syntax #2:

ls: cannot access file1: No such file or directory
ls: cannot access file2: No such file or directory
foo
fz

This article here explains that Syntax #3 is wrong (presumably, so is Syntax #4). But for argument's sake: why is Syntax #3 wrong? What exactly is it telling (or not telling) the shell to do as opposed to Syntax #2? Also, is there a reason why the output always displays stderr before stdout?

Thank you!
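The partial overwrite in Syntax #3 can be reproduced with two printf calls. The key point the sketch illustrates: the two redirections open the file separately, so fd 1 and fd 2 each keep an independent offset starting at 0 (this assumes each printf flushes immediately, which holds for shell builtins):

```shell
tmp=$(mktemp)
# fd 2 writes 10 X's at offset 0 first; fd 1 then writes "ab" at ITS offset 0,
# overwriting the first 3 bytes rather than appending.
{ printf 'XXXXXXXXXX\n' >&2; printf 'ab\n'; } > "$tmp" 2> "$tmp"
result=$(cat "$tmp")
printf '%s\n' "$result"
# ab
# XXXXXXX
rm -f "$tmp"
```

With >> instead, the file is opened with O_APPEND, every write goes to the current end of file, and nothing gets clobbered, which is why Syntax #4 behaves.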
Trying to make sense of bash redirection syntaxes and their outputs
bash;io redirection;stdout;stderr
null
_codereview.19659
Is this an acceptable way to make sure a java.sql.ResultSet is always closed, and also make sure that a caught Exception is propagated to the caller? Please don't hesitate to review other aspects of this sample as well.

public void updateReplicationStatus(BeanReplicationTask taskBean, String strStatus) throws Exception {
    String strSelect = this.createReplicationSelectStatement(taskBean);
    ResultSet resultSet = null;
    try {
        resultSet = DerbyDog.getResultSet(strSelect);
        if (resultSet.next()) {
            resultSet.updateString(COLUMN_STATUS, strStatus);
            resultSet.updateRow();
        }
    } catch (Exception e) {
        throw e;
    } finally {
        if (resultSet != null) {
            resultSet.close();
        }
    }
}
Closing a ResultSet and rethrowing an Exception
java;exception
null
_unix.47895
What version of Red Hat is used for the Red Hat Certified System Administrator exam? Because I don't have over $3000 to invest in the Red Hat course that prepares you for the exam, I am going to install Red Hat and practice all of the listed objectives on my own. (Exam Objectives) I'll probably do some other practice as well, just for fun. As of right now I am downloading Red Hat Linux 7.1 i386.

As a secondary question, would you advise for or against taking the course? Are there other avenues that you would suggest to help me prepare for the exam? Also, if I were to fail the exam, would I have to pay again to retake it?
What version of Red Hat is used for the Red Hat Certified System Administrator Exam?
rhel
First: Red Hat 7.1 is 11 years old and hopelessly obsolete, thus being unsuitable for any recent exam.From the link you posted:This guide provides information candidates may use in preparing to take the Red Hat Certified System Administrator (RHCSA) exam on Red Hat Enterprise Linux 6. So, get CentOS 6 and learn using it: most of what you need will be covered there, except for the RHN (Red Hat Network) stuff. For those you could grab a trial version of RHEL.
_codereview.46693
I wrote the following split function. It takes a list, an element, and an Ordering (LT, GT, EQ). For LT, the list will filter out all items that are >= the element argument.

split' :: (Ord a) => [a] -> a -> Ordering -> [a]
split' [] _ _ = []
split' (x:xs) a LT
    | x < a = x : split' xs a LT
    | otherwise = split' xs a LT
split' (x:xs) a GT
    | x > a = x : split' xs a GT
    | otherwise = split' xs a GT
-- non-exhaustive, unacceptable

Example:

*Main> split' [1,2,3] 2 LT
[1]

Note that this function can result in an exception if the Ordering argument is EQ. Of course this is terrible. However, please review the above code, offering suggestions on how to improve it.
Splitting a List
haskell
null
_unix.237332
I have a folder with about 1200 files that I need to rename. Their file extensions are .jpg, .gif, and .png for the most part. Here are some examples:

Awesome\ FB\ \ (462).jpg
Awesome\ FB\ \ (463).jpg
Awesome\ FB\ \ (464).jpg
Awesome\ FB\ \ (465).jpg
Bonus\ FB\ Cover\ Pics\ (80).jpg
Bonus\ FB\ Cover\ Pics\ (81).jpg
Bonus\ FB\ Cover\ Pics\ (82).jpg
Bonus\ FB\ Cover\ Pics\ (83).jpg

I tried:

rename 's/\.{???} $/-img4sm.{???}/' ./*.{???}

I also tried this with .jpg or .gif in place of {???}, and with [] in place of {}. I also tried:

find . -type f -iname '*.???' -exec rename 's/\.{???} $/-img4sm.{???}/' ./*.{} +

I've managed to run these commands without error messages on CloudLinux (CentOS), but no file names changed. Please help me. Thank you!
Rename all files in directory while preserving any file extension
find;rename
Here is a solution using PHP, based on this answer. Create a file, e.g. myrename.php, with the following contents:

<?php
$test = 1;
if(!chdir('/tmp/mydir')) echo "failed chdir\n";
else if ($handle = opendir('.')) {
    while (false !== ($fileName = readdir($handle))) {
        $newName = preg_replace("/\.(jpg|gif|png)$/i", "-img4sm.\\1", $fileName);
        if($newName != $fileName){
            if($test){
                echo "$fileName -> $newName\n";
            }else{
                if(!rename($fileName, $newName)) echo "failed $fileName -> $newName\n";
            }
        }
    }
    closedir($handle);
}
?>

Replace /tmp/mydir with the path of your directory. Run the script with:

php myrename.php

and it will print out the old filename and new filename for each matched file. If this looks right, change $test = 1; to $test = 0; and run it for real. Remember to back up the files somewhere first in case of problems.
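For comparison, the same rename can be done directly in the shell without PHP; a sketch using parameter expansion (the quoting handles the spaces and parentheses in the names):

```shell
# Rename every *.jpg/*.gif/*.png in a directory to *-img4sm.<ext>.
rename_imgs() {
    dir=$1
    for f in "$dir"/*.jpg "$dir"/*.gif "$dir"/*.png; do
        [ -e "$f" ] || continue                  # skip unmatched globs
        mv -- "$f" "${f%.*}-img4sm.${f##*.}"     # foo.jpg -> foo-img4sm.jpg
    done
}

# usage: rename_imgs /path/to/folder
```

As with the PHP version, test on a copy of the folder first.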
_datascience.19481
I am learning deep learning, and as a first exercise I am trying to build a system that learns a very simple task: capitalizing the first letter of each word. As a first step, I tried to create character embeddings, i.e. a vector for each character. I am using the following code:

import gensim
model = gensim.models.Word2Vec(sentences)

where sentences is a list of lists of chars which I took from this long Wikipedia page. For example, sentences[101] is:

[' ', ' ', ' ', ' ', 'S', 'p', 'e', 'a', 'k', 'i', 'n', 'g', ' ', 'a', 't', ' ', 't', 'h', 'e', ' ', 'c', 'o', 'n', 'c', 'l', 'u', 's', 'i', 'o', 'n', ' ', 'o', 'f', ' ', 'a', ' ', 'm', 'i', 's', 's', 'i', 'l', 'e', ' ', 'e', 'x', 'e', 'r', 'c', 'i', 's', 'e', ... ]

To test the model, I did:

model.most_similar(positive=['A', 'b'], negative=['a'], topn=3)

I hoped to get 'B' at the top, since 'A'-'a'+'b'='B', but I got:

[('D', 0.5388374328613281), ('N', 0.5219535827636719), ('V', 0.5081528425216675)]

(Also, my capitalization application did not work so well, but that is probably because of the embeddings.) What should I do to get embeddings that identify capitalization?
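As a baseline before relying on learned embeddings, note that capitalization can be encoded deterministically; this is an alternative I'm suggesting, not part of gensim. A sketch of a hand-built character vector with an explicit case bit, for which the 'A' - 'a' + 'b' arithmetic holds exactly:

```python
import string

def char_vector(ch):
    """One-hot over the lowercase alphabet, plus one capitalization bit."""
    alphabet = string.ascii_lowercase
    vec = [0.0] * (len(alphabet) + 1)
    low = ch.lower()
    if low in alphabet:
        vec[alphabet.index(low)] = 1.0
    vec[-1] = 1.0 if ch.isupper() else 0.0   # explicit case feature
    return vec

# 'A' and 'a' differ in exactly one coordinate (the case bit), so
# vector('A') - vector('a') + vector('b') lands exactly on vector('B').
```

Learned char2vec vectors have no reason to place that relationship on a clean linear axis unless the training objective rewards it, which plain skip-gram on character windows does not.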
Learning character embeddings with GenSim
word2vec;word embeddings;gensim
null
_cstheory.12122
I call Fill3 the following simple game: the input is a $n \times n$ grid; every cell of the grid has a type: OR, AND, CHOICE, FANOUT and FIXED, and can be rotated 0, 90, 180 or 270 degrees; every cell (except the FIXED ones) contains three green squares that, according to the cell's type, can be placed in several different valid configurations (see figure).

The aim of the game is to choose a valid configuration for each cell (the type and the rotation cannot be changed) in such a way that no two green squares are adjacent (vertically or horizontally). The game is NP-complete (a quick reduction is from planar 3SAT).

Valid configurations for each type of cell.

But the cell types are obviously a direct transposition of the four gadgets used in bounded planar Nondeterministic Constraint Logic (NCL), and the valid configurations match the constraints of the corresponding vertices in a constraint graph (blue/red arrows in the figure). I'm wondering if a direct quick reduction from NCL is possible. It is immediate to reduce a planar constraint graph to a Fill3 game (CHOICE cells with a free endpoint can be used to build straight links and 90-degree turns). But in a bounded planar NCL problem $P$ the edges are reversed dynamically (though an edge can be reversed at most once) and the aim is to reverse a single edge; on the contrary, the game Fill3 is static. A possible approach is: build the corresponding Fill3 game from $P$; break the edge that must be reversed (eventually expanding the board) and force it in the final direction using FIXED cells. Then one direction is easy: if $P$ is solvable, the corresponding Fill3 game has a solution (represented by the final configuration of the constraint graph); but the other direction is more problematic:

If Fill3 has a (static) solution $s$, can we say that a (dynamic) solution for the corresponding planar bounded NCL problem $P$ always exists (i.e. a sequence of valid edge reversals)? And how do we prove it formally?
Reduction from planar bounded NCL to a static puzzle game
cc.complexity theory;np hardness;puzzles
null
_scicomp.14635
I have an RKF45 numerical integrator that simulates polymerization of proteins using CUDA. It does so by tracking the populations of discrete-length polymers, e.g. monomers, dimers, trimers, etc., all the way up to 8192-mers, using equations of the type that follows:

$$\frac{dc_r}{dt}= k_nc_r^{n_c}+2k_a(c_{r-1}-c_r) + k_m(2\sum^{max}_{s=r+2}c_s-(r-1)c_r)$$

where $r$ represents the length of the polymer and $c$ its concentration at that given time. The $k$'s are the parameters with respect to which I'd like to minimize an objective function. These equations are integrated over about 20,000-30,000 timesteps. My goal is to start fitting the model to experimental data. The first type of fit I'm trying is fitting a value derived from these polymer populations at arbitrary timesteps. For instance, I want to calculate the average length of polymers at five different times in the simulation/integration and compare them to externally supplied data. The average length is derived for each timestep from the concentrations like this:

$$L(t) = \frac{M(t)}{N(t)}$$
$$M(t) = \sum^{max}_{s=n_c}s\,c_s(t)$$
$$N(t) = \sum^{max}_{s=n_c}c_s(t)$$

The objective function I'm trying to minimize thus looks like this:

$$y=\sqrt{\sum_{\text{fit data}}(L_{calculated}-L_{experimental})^2}$$

I've already implemented a solution that computes this norm and minimizes it using SciPy's Nelder-Mead optimize method. It seems to work, but if it's possible to make it perform better, I'd like to make that happen. I want to try more robust and faster-converging methods like L-BFGS-B, especially because they also allow me to impose bounds on the parameters, which need to remain positive.

Now, the Runge-Kutta algorithm, for the unfamiliar, is here: http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods

Due to the recursive nature of the Runge-Kutta calculations, such as

$$k_1=f(y_n)$$
$$k_2=f(y_n+a_{21}k_1)$$
$$k_3=f(y_n+a_{31}k_1+a_{32}k_2)$$

I'm having issues figuring out how to calculate the Jacobian, which I believe would be this:

$$J = \left[\frac{dy}{dk_a}\ \ \frac{dy}{dk_m}\ \ \frac{dy}{dk_n}\right]$$

So, if I framed this question properly: is it possible, at some point during the RKF calculation, to construct a Jacobian to pass out to SciPy along with the objective function evaluation? I'd like to get an optimized optimization script up and running ASAP.
Is there a relatively simple way to extract the Jacobian from a Runge-Kutta 4/5 integrator?
optimization;runge kutta;jacobian
So, if I framed this question properly, is it possible at some point during the RKF calculation to construct a Jacobian to pass out to Scipy along with the objective function evaluation?

The Runge-Kutta-Fehlberg 4(5) method you describe is an explicit numerical method for solving an ODE. As such, it does not require the solution of a nonlinear algebraic equation; Jacobian matrices are not required for that algorithm. Is it possible to construct a Jacobian? Sure; the RKF4(5) method has nothing to do with it.

What you want are parameter sensitivities, namely $\partial{c_{r}}/\partial{k_{a}}$, $\partial{c_{r}}/\partial{k_{m}}$, etc., so that you can calculate the derivatives you need. These can be calculated numerically in a few ways, to my knowledge; all of them involve solving a set of auxiliary differential equations:

- the adjoint method; see Steven G. Johnson's notes, and this paper by Petzold, et al., among others by Petzold. Adjoint methods work best when you have fewer state variables than parameters. (That is, the dimension of your $c_{r}$ is smaller than the number of parameters you have.)
- the simultaneous direct method, where you solve the sensitivity ODE system and your original state ODE system at the same time as a larger coupled ODE system
- the staggered direct method (see M. Caracotsios, W.E. Stewart, Sensitivity analysis of initial value problems with mixed ODEs and algebraic equations, Comput. Chem. Engrg. 9 (4) (1985) 359-365)
- the simultaneous corrector method, where you solve for your state variables and sensitivities at the same time (see T. Maly, L.R. Petzold, Numerical methods and software for sensitivity analysis of differential-algebraic systems, Appl. Numer. Math. 20 (1997) 57-79)
- the staggered corrector method (by Barton and Feehery; disclaimer: Barton was one of my advisors); see Feehery's thesis, Chapter 4; also see W.F. Feehery, J.E. Tolsma, P.I. Barton, Efficient sensitivity analysis of large-scale differential-algebraic systems, Appl. Numer. Math. 25 (1997) 41-54

In chemistry and chemical engineering, there are a few other types of sensitivity analysis available, but I don't think any of them will give you the Jacobian matrix information you need for parameter fitting.

One final warning: ODE-constrained optimization problems are notoriously tricky, and generally nonconvex. It is possible to apply a standard nonlinear programming solver to these problems and obtain local minima that are globally suboptimal. See the thesis of Joseph K. Scott for examples. (Disclaimer: Dr. Scott was a former colleague.)
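The simultaneous direct method from the list above can be illustrated on a toy problem. For $\dot c = f(c,k)$, the sensitivity $s = \partial c/\partial k$ obeys $\dot s = (\partial f/\partial c)\,s + \partial f/\partial k$, and the two ODEs are integrated together with the same Runge-Kutta stepper. A minimal sketch with $f(c,k) = -kc$, for which the exact sensitivity at time $t$ is $-t\,c_0 e^{-kt}$:

```python
import math

def rk4_step(f, y, t, h):
    """One classical RK4 step for dy/dt = f(t, y), with y a list of floats."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def augmented_rhs(k):
    # Augmented state y = [c, s], with s = dc/dk.
    # dc/dt = -k*c ;  ds/dt = (df/dc)*s + df/dk = -k*s - c
    def f(t, y):
        c, s = y
        return [-k*c, -k*s - c]
    return f

k, c0, h, T = 0.7, 2.0, 1e-3, 1.0
y, t = [c0, 0.0], 0.0          # sensitivity starts at 0 (c0 doesn't depend on k)
f = augmented_rhs(k)
while t < T - 1e-12:
    y = rk4_step(f, y, t, h)
    t += h

exact_s = -T * c0 * math.exp(-k*T)   # analytic dc/dk at t = T
print(y[1], exact_s)
```

For the polymer model this generalizes directly: the augmented state is the concentration vector plus one sensitivity vector per fitted parameter, and the chain rule through $L(t)=M(t)/N(t)$ then gives the $dy/dk$ entries the optimizer needs.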
_softwareengineering.274393
I am using ViewModel classes in order to structure data being populated inside of a controller. My question now is: where exactly is the data of a @model stored after being populated by an ASP.NET MVC controller?
Where is asp mvc model data stored?
mvc;asp.net mvc;asp.net mvc 4
In MVC, the model (that is an instance of a model class) is nothing more than an ordinary object. It is initialized by the controller which passes it to MVC's engine which, in turn, uses it when generating the final result from a view.If you're asking whether it is stored on the stack or on the heap, the response is: on the heap.Instance variables for a reference type are always on the heap.(Source; see also: What and where are the stack and heap?)If you're asking whether it is stored in memory or on a hard disk, the response is: it depends. In general, it will be in memory, unless the operating system runs out of memory and decides to move it to the pagefile (chances for this to happen are slim).If you're asking how should you access the model once initialized (i.e. where to find the instance of the model class), just keep a reference to it within the controller itself.
_unix.212248
I have CentOS 7 installed as a guest OS inside VirtualBox running on a Windows 8.1 machine. There is a SATA Wire connecting the machine to an external hard drive that used to contain a dual-boot CentOS 7/Windows 7 machine. How can I mount the CentOS 7 partitions of the old external hard drive using the CentOS 7 terminal in the new machine? So far, in the CentOS 7 terminal of the new machine, I have typed in:

cd /dev/disk/by-id
ls -al

The following screenshot shows the results (zoom in to read): I also typed in fdisk -l and got the following screenshot results: But which results from the above screenshots represent partitions from the external hard drive, and what do I need to type to mount each of those drives? For reference, the same external drive viewed from the Windows disk manager is the row Disk 1 in the screenshot below. The CentOS 7 partitions are the right-most few partitions of disk 1:
how do I mount an external hard drive connected via SATA Wire to CentOS 7
centos;mount;usb;virtualbox;virtual machine
null
_unix.174099
I have gone through 10s of answers on this and other sites trying to debug my udev rule, but to no avail. The rule is very simple: I want to lock my screen when my Yubikey is unplugged. My rule is in the file /etc/udev/rules.d/98-yubikey.rules. I have tried both # udevadm control --reload-rules && udevadm trigger and simply rebooting my computer to update the rules. Here are the rules I have tried so far, none of which lock the screen (I have tested that the script does, in fact, lock the screen when run).

ACTION=="remove", SUBSYSTEM=="input", ATTRS{idVendor}=="XXXX", ATTRS{idProduct}=="YYYY", RUN+="/home/user/bin/lock_screen", OWNER="user"
ACTION=="add", SUBSYSTEM=="input", ENV{ID_VENDOR_ID}=="XXXX", ENV{ID_MODEL_ID}=="YYYY", RUN+="/home/user/bin/lock_screen", OWNER="user"

Various combinations of these items with or without subsystem/owner (and with subsystem as usb instead of input).
UDEV Rule Not Triggering
udev
I have a system configured to do the same and it looks like this:

SUBSYSTEM=="input", ACTION=="remove", RUN+="/usr/local/sbin/yubikey_gone"

Then the script /usr/local/sbin/yubikey_gone contains:

#!/bin/sh
if [ "x$ID_MODEL" != "xYubico_Yubikey_II" ]; then
    exit 0
fi
exec su vandry -c "DISPLAY=:0.0 gnome-screensaver-command --lock"

This invokes the script when any input device is unplugged, and then the script tests whether or not it indeed was a Yubikey before proceeding. It's not the correct solution, but I must have had trouble getting it to work with the device model test directly in the udev configuration file (I don't remember why; the script hasn't been touched in a long time). It's not the best way, but it does at least work.
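For completeness, the in-rule variant alluded to above would look something like the following untested sketch. It assumes the ID_MODEL property (the same one the script checks) is still available for matching at remove time, which is worth verifying with udevadm monitor on your system; the model string and script path are carried over from the script above.

```
SUBSYSTEM=="input", ACTION=="remove", ENV{ID_MODEL}=="Yubico_Yubikey_II", RUN+="/usr/local/sbin/yubikey_gone"
```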
_softwareengineering.210248
I have just landed a role as a C#/ASP.NET developer at a large software house. I previously worked at a much smaller software house for about two years, but it was a varied/mixed role there, and here the ASP.NET applications we have are a factor of 10 or so larger. As seems to be the norm, I have been given the task of fixing bugs. At the moment I am just trying to understand the system. How long, in your experience, does it roughly take (and is generally acceptable) for a new developer to get up to speed? It of course varies from company to company, but as a general rule, when you have hired someone or worked with someone new, how many days/weeks would it have been normal for them to take to get to grips with the system?
How many days is it normal for a new hire programmer to take to get up to speed?
java;c#;asp.net;knowledge transfer
It depends. But IMHO, it should take about a month to know your way around, and up to six months to be normally productive. An interesting exercise, if you have time, etc: Take a part of the application, and reprogram it in another language, if you know any. Or, try reading all of the source code that you can, and write down what it does. That should help you get up to speed!
_cs.39686
I would like to rate a set of $n$ elements, with each element assigned a rating from $\{0, \dots, 10\}$. The way in which I want to rate is by repeatedly selecting subsets of $k$ elements and querying a user to rank them relative to each other.

I would like a means of minimizing the number of necessary queries to assign ratings (i.e. how do I pick which elements I should ask about?), and a way of aggregating my partial orders into the appropriate buckets (when do I stop, and how do I map the partial order into my range?). Are there any decent references I should be looking at?
Rating elements via partial orders
ordering
What you're essentially asking is to find a total ordering of a graph of users whose partial orderings are results of querying a user for their relative rank. Your asymptotic complexity of the minimum necessary number of partial orders will relate to the Stirling number of the Second Kind mathworld.wolfram.com/StirlingNumberoftheSecondKind.html. To find this total ordering (re: how do I map the partial order into my range?) you would topologically sort the graph you're describing. Note that for every vertex, $u\leq_p v\Rightarrow (u,v) \in E$, and the graph would be directed.
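A minimal sketch of the topological-sort step in Python (the element names and preference edges are made up for illustration; graphlib is in the standard library from Python 3.9):

```python
from graphlib import TopologicalSorter

# Each key maps to the elements it was ranked above by user queries, i.e.
# its predecessors in the order; predecessors come out first.
# Hypothetical query results encoding a < b < c:
ranked_below = {"b": {"a"}, "c": {"b"}}
total_order = list(TopologicalSorter(ranked_below).static_order())
print(total_order)
```

Once a total (or near-total) order is in hand, one simple way to map it into the 0-10 range is to bucket elements by rank percentile.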
_unix.151763
I was running a script that iterated over all the files on my Linux system and created some metadata about them, and it threw an error when it hit a broken symbolic link.I am newish to *nix, but I get the main idea behind linking files and how broken links come to exist. As far as I know, they are like the equivalent of litter in the street. Things that a program I'm removing wasn't smart enough to tell the package manager existed, and belonged to it, or something that got left behind in an upgrade. At first, I started to tweak the script I'm running to skip them, then I thought, 'well we could always delete them while we're down here...'I'm running Ubuntu 14.04 (Trusty Tahr). I can't see any reason not to, but before I go ahead and run this over my development system, is there any reason this might actually be a terrible idea? Do broken symlinks serve some purpose I am not aware of?
Is there a downside to deleting all of the broken symbolic links in a system?
symlink
There are many reasons for broken symbolic links:

A link was created to a target which no longer exists.
Resolution: remove the broken symlink.

A link was created for a target which has been moved. Or it's a relative link that's been moved relative to its target. (Not to imply that relative symlinks are a bad idea; quite the opposite: absolute symlinks are more prone to going stale because their target moved.)
Resolution: find the intended target and fix the link.

There was a mistake when creating the link.
Resolution: find the intended target and fix the link.

The link is to a file which is on a removable disk, network filesystem or other storage area which is not currently mounted.
Resolution: none, the link isn't broken all the time. The link will work when the storage area is mounted.

The link is to a file which exists only some of the time, by design. For example, the file is the cached output of a process, which is deleted when the information goes stale but only re-created upon explicit request. Or the link is to an inbox which is deleted when empty. Or the link is to a device file which is only present when the corresponding peripheral is attached.
Resolution: none, the link isn't broken all the time.

The link is only valid in a different storage hierarchy. For example, it is valid only in a chroot jail, or it's exported by an NFS server and only valid on the server or on some of its clients.
Resolution: none, the link isn't broken everywhere.

The link is broken for you, because you lack the permission to traverse a directory to reach the target, but it isn't broken for users with appropriate privilege.
Resolution: none, the link isn't broken for everybody.

The link is used to store information, as in the Firefox lock example cited by vinc17. One reason to do it this way is that it's easier to populate a symlink atomically (there's no other way), whereas populating a file atomically is more complex: you need to create the file content under a temporary name, then move it into place, and handle stale temporary files left behind by a crash. Another reason is that symlinks are typically stored directly inside their inode on some filesystems, which makes reading them faster than reading the content of a file.
Resolution: none. In this case, removing the link would be detrimental.

If you can determine that a symlink falls into the first category, then sure, go ahead and delete it. Otherwise, abstain.
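If you do decide to hunt for candidates in the first category, here is a sketch using GNU find, whose -xtype l test matches symlinks whose target does not currently resolve; review the list by hand before deleting anything.

```shell
# Demo in a throwaway directory: one broken link, one working link.
d=$(mktemp -d)
ln -s /nonexistent-target "$d/broken"
ln -s /etc "$d/works"
found=$(find "$d" -xtype l)   # lists only the broken link
printf '%s\n' "$found"
rm -rf "$d"
```

Remember from the list above that a link that is broken right now (unmounted media, chroot-only targets, and so on) may be perfectly healthy later, so the -xtype l output is a starting point, not a kill list.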
_softwareengineering.341946
There is considerable cost (and pain) for developers to debug external libraries, due to the fact that many libraries are distributed in two editions: one with debug information, the other without. The developer has to search for and download debug files, and point the debugging environment to their location. This is often a painful and error-prone task. There is also a lot of work related to building and distributing those libraries.

At the same time, the world has changed in ways that make the release version (i.e. the version without debugging information) obsolete. For instance:

A modern runtime can efficiently strip debugging information, optimize on-the-fly and run the program at full speed. Examples are the JVM and JavaScript interpreters.

Open-source projects, or binaries that are not published outside the company, have no reason to be obfuscated. Also, given that many modern languages now support reflection, obfuscation by stripping debug information is very limited, if effective at all.

Disk space is abundant, disk performance (SSDs) is getting better and better, and network bandwidth is improving as well.

Please note that this question is limited to libraries. I'm 100% OK with distributing applications without debugging information, which is actually extremely important in the mobile world. After a lot of trouble with .NET PDBs and .class files without debugging information, I think we should evolve processes and tools into a world where the default is to compile and distribute libraries with full debug information, embedded in the binary (like .class files) if that's possible. After all, the only remote reason for not doing that is protecting copyrights (which doesn't apply to many scenarios). Of course, the tools that pack applications should be smart enough to search out and destroy every piece of embedded debugging information before publishing for download. This is based on the fact that end-users seldom debug applications.
Should we compile and ship libraries with debug information whenever possible?
productivity;compiler;debugging;copyright;release
null
_scicomp.22107
Let $u_h$ be the finite element solution of a fourth-order equation (like the biharmonic equation), using polynomial degree two. If the convergence rate of $u_h$ is $2$, what is the convergence rate of the second-order derivatives of $u_h$? Is it possible that they do not converge?
Convergence of the second derivative of the finite element solution
finite element;error estimation
Yes, for several reasons.

First, it is instructive to look at how one usually proves convergence rates. In the standard setting, where your discrete approximation $u_h\in V_h$ is in the same space $V$ as the true solution $u$, the error $\|u-u_h\|_V$ in the natural space $V$ (where you have continuity and coercivity of the bilinear form) is bounded by the interpolation error of $u$ by functions in $V_h$ (this is Céa's lemma). So all you have to do is to get an estimate of the interpolation error $\|u- I_h u\|_V$. If $V=H^l$, the space of $l$ times weakly differentiable functions, and $V_h$ is the space of piecewise polynomials of degree $k$ on a given mesh, and your exact solution is in $H^{k+1}$ for $k+1\geq l$, then you can show that
$$ \| u- I_h u \|_{H^l} \leq C h^{k+1-l} |u|_{H^{k+1}},\tag{1}$$
where $|u|_{H^{s}}$ is the $H^s$ seminorm of $u$, i.e., the $L^2$ norm of all partial derivatives of order $s$, and $C>0$ is a constant independent of $h$ and depending only on the geometrical properties of the mesh. (The proof works by first considering a reference element, where the error can be bounded by a constant, and then transforming the norms to each element and summing; the power of $h$ in the estimate comes purely from how the different $H^s$ norms scale under affine transformations.) This means that the smoothness of the true solution limits the polynomial degree you can use to get better convergence. (You can always use higher degrees, of course, but you won't get better accuracy from them.) You can use different methods (e.g., the Aubin-Nitsche lemma) to trade a lower-order error norm for higher powers of $h$, i.e., an estimate of the form $(1)$ for the error $\|u-u_h\|_{H^l}$ usually holds also for smaller values of $l$.

Now to your specific case. The natural regularity of the biharmonic equation is $H^2$, since the weak form involves second derivatives. You are also using piecewise quadratic polynomials.
This means $l=k=2$, such that you would need to have the true solution in $H^3$, which is more than the natural regularity, and hence does not necessarily hold for the test problem you are solving. (But $u\in H^2$ is enough to get from $(1)$ a convergence order of $\mathcal{O}(h^2)$ in $H^0=L^2$, which is what you observe. You should also get $\mathcal{O}(h)$ in $H^1$, i.e., for the first derivatives, which is a useful test.)

Rule of thumb: You lose one order of convergence for each derivative, and you gain one order for each polynomial degree, up to a point determined by the regularity of the exact solution you are trying to approximate.

However, you are not in the standard setting: Functions in $H^2$ are continuously differentiable across element boundaries (apply the usual argument to the first derivatives), which is not the case for the standard piecewise polynomial spaces (which are only continuous across boundaries). Hence, $(1)$ doesn't apply in this case. Intuitively, since you are not enforcing the correct continuity across element boundaries, you might get the function values right (enough), but not the derivatives -- and especially not the second derivatives. (Here's where the test with the first derivatives comes in handy.) Instead, you also need an interior penalty method to enforce the required continuity of $H^2$ functions across element boundaries (such as discontinuous Galerkin methods; an alternative would be mixed methods). But how to do this would be a new question.
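To connect the observed rates to the rule of thumb above, the reduced-regularity variant of the interpolation estimate $(1)$ (a standard restatement, given here for reference) reads:

```latex
\| u - I_h u \|_{H^l} \leq C\, h^{\,s-l}\, |u|_{H^s},
\qquad 0 \le l \le s \le k+1.
```

With the natural regularity $s=2$ of the biharmonic problem this gives $\mathcal{O}(h^2)$ for $l=0$ (the observed rate), $\mathcal{O}(h)$ for $l=1$ (the first derivatives), and only $\mathcal{O}(1)$ for $l=2$: no convergence of the second derivatives is guaranteed by this bound.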
_webapps.58728
I have a long form for users to answer survey questions, with about 20 pages. It's long enough that users may want to stop and finish the survey later, or they may abandon the survey altogether halfway through. How can I...

Cause the answers on each page to be saved to a spreadsheet when the user clicks the continue button to move to the next page?

OR

Add the option for the user to save their responses and complete the survey later?
How to save users' responses in Google Form survey as they continue from page to page?
google apps;google forms;survey
null
_cs.62672
Where is the formal spec located? Like this?

Are tests made from the formal or the informal spec? I saw a project where they were generated from the formal spec, and that never made sense to me.

Is the verification step manual, or can it be automated?

If it is manual, we now have 3 things to maintain and keep in sync. The problem of a possible mismatch between informal spec and code has just been moved between the informal and the formal spec. What did we gain?

If the verification step can be automated completely, then why write any code at all? Just let it be generated from the spec automatically. If we fold those two steps into one, the situation is the same as before; we have a high-level description of our problem which a compiler turns into a low-level representation that can be executed by a machine. What did we gain?
Where in the toolchain does formal specification come in?
formal methods;software engineering;software verification
null
_cogsci.30
I've read research reporting that lower IQ is correlated with higher variability in reaction time. But such research does not seem to disentangle the possible effect of ADD, in which case you might have a separate cluster that does not follow the general trend.

So, if ADD/ADHD subjects are given a cognitive test on different days, will the variability in their outcomes be higher than that in controls?
What is the effect of ADD/ADHD on performance variability on cognitive tests over time?
performance;adhd;reliability
General thoughts on factors influencing test-retest correlations

From a theoretical perspective, it makes sense that groups that experience temporary states that lower state intelligence or lead to a poorer test-taking orientation would have lower test-retest cognitive test correlations. By lower state intelligence, I'm referring to states of being such as are observed with extreme sleep deprivation, alcohol intoxication, or while on psychedelic drugs, that would temporarily lower actual cognitive functioning. By test-taking orientation I'm referring to motivation and concentration issues related to taking a test. Because such factors are transient, they may be present in one test session and not another, and as such would be a source of error in measurement and would reduce test-retest correlations. In groups where such factors are greater or more prevalent, test-retest correlations should be reduced more than in other groups.

A clinician who administers a high-quality intelligence test should be aware of these transient factors when administering the test. If a test-taker is not taking a test seriously, is distracted, and so on, then the scores may have to be disregarded. Some studies might use group testing protocols that don't pick up these issues. And perhaps sometimes teasing out what is transient and what is permanent may be difficult.

In general, latent ability on a task is conceptualised in terms of an individual applying reasonable and normal levels of effort, under normal and healthy conditions, in a normal and reasonable environment. Under such conditions, a person is not able to improve their performance (discounting practice and cheating effects), but there are many ways that performance can decline. Sackett et al (1988) talk about the difference between typical and maximal performance.
Test performance is meant to be conducted under conditions of maximal performance, where the test taker is meant to be applying maximal effort to the task.

Thoughts about ADHD

Research does show that, in general, individuals diagnosed with ADHD tend to score lower on intelligence tests. Frazier et al (2004) performed a large-scale meta-analysis and found that healthy participants scored on average 0.61 standard deviations (i.e., weighted d) higher on full scale IQ than ADHD participants. Many of the symptoms of ADHD, such as being easily distracted, experiencing boredom on a task, and other problems related to sustained attention, would make the administration of some cognitive tests difficult. It is possible for any measure of an individual's IQ to fall below what the person is capable of, where such an effect can be due, in addition to random variation between testing sessions, to systematic variability in transient factors. Thus, it seems plausible that individuals with ADHD could more frequently experience such factors, and as such, experience lower test-retest correlations.

Caveat: ADHD research is not my area. I imagine there are some empirical studies that have compared test-retest correlations of tests between ADHD and other groups. I just haven't found them.

References

Frazier, T., Demaree, H., and Youngstrom, E. (2004). Meta-analysis of intellectual and neuropsychological test performance in attention-deficit/hyperactivity disorder. Neuropsychology, 18(3):543.

Sackett, P. R., Zedeck, S., and Fogli, L. (1988). Relations between measures of typical and maximum job performance. Journal of Applied Psychology, 73:482-486.
_codereview.107399
I maintain an application that has a method in a class for saving uploaded files to the filesystem, and I want it to run asynchronously as it saves multiple files in a single request. I don't know if it's best to use Task.FromResult or Task.Run, but I've modified the code to use Task.FromResult.

public class AttachmentProcessor
{
    public static Task<IEnumerable<Attachment>> SaveAttachmentsToFileSystemAsync(IEnumerable<HttpPostedFileBase> files, string cOfONumber)
    {
        if (files == null)
            return Task.FromResult(Enumerable.Empty<Attachment>());

        string topFolderPath = AppDomain.CurrentDomain.BaseDirectory + "Attachments\\";
        string folderName = cOfONumber;
        var directory = topFolderPath + folderName;
        string folderPath = directory + "\\";
        var attachments = new List<Attachment>();
        DirectoryInfo subFolder = Directory.CreateDirectory(directory);

        foreach (HttpPostedFileBase file in files)
        {
            if (file != null)
            {
                var fileName = Path.GetFileName(file.FileName);
                if (!string.IsNullOrEmpty(fileName))
                {
                    // file is saved to path
                    file.SaveAs(folderPath + fileName);
                    attachments.Add(new Attachment
                    {
                        CofONumber = cOfONumber,
                        FileName = fileName,
                        File = folderPath + fileName
                    });
                }
            }
        }

        return Task.FromResult(attachments.AsEnumerable());
    }
}

And the calling code:

[HttpPost]
public async Task<ActionResult> Index(AttachmentViewModel model)
{
    if (ModelState.IsValid)
    {
        try
        {
            var attachments = await AttachmentProcessor.SaveAttachmentsToFileSystemAsync(model.Others, model.CofONumber).ConfigureAwait(false);
            _repository.InsertEntities(attachments.ToList());
            _unitOfWork.Save();
            return RedirectToAction("Index", "Home");
        }
        catch (Exception)
        {
            ModelState.AddModelError("", "Unable to save data");
        }
    }
    return View(new AttachmentViewModel() { CofONumber = model.CofONumber });
}

My other thoughts are: calling Task.Run() within the SaveAttachmentsToFileSystemAsync method:

public static async Task<IEnumerable<Attachment>> SaveAttachmentsToFileSystemAsync(IEnumerable<HttpPostedFileBase> files, string cOfONumber)
{
    if (files == null)
        return null;

    var task = Task.Run(() =>
    {
        string topFolderPath = AppDomain.CurrentDomain.BaseDirectory + "Attachments\\";
        string folderName = cOfONumber.Replace(@"/", "_");
        var directory = topFolderPath + folderName;
        string folderPath = directory + "\\";
        var attachments = new List<Attachment>();
        DirectoryInfo subFolder = Directory.CreateDirectory(directory);

        foreach (HttpPostedFileBase file in files)
        {
            if (file != null)
            {
                var fileName = Path.GetFileName(file.FileName);
                if (!string.IsNullOrEmpty(fileName))
                {
                    // file is saved to path
                    file.SaveAs(folderPath + fileName);
                    attachments.Add(new Attachment
                    {
                        CofONumber = cOfONumber,
                        FileName = fileName,
                        File = folderPath + fileName
                    });
                }
            }
        }

        return attachments;
    });

    return await task;
}

Or leaving it synchronous as it is and using Task.Run from the calling code:

public static IEnumerable<Attachment> SaveAttachmentsToFileSystemAsync(IEnumerable<HttpPostedFileBase> files, string cOfONumber)
{
    if (files == null)
        return null;

    string topFolderPath = AppDomain.CurrentDomain.BaseDirectory + "Attachments\\";
    string folderName = cOfONumber.Replace(@"/", "_");
    var directory = topFolderPath + folderName;
    string folderPath = directory + "\\";
    var attachments = new List<Attachment>();
    DirectoryInfo subFolder = Directory.CreateDirectory(directory);

    foreach (HttpPostedFileBase file in files)
    {
        if (file != null)
        {
            var fileName = Path.GetFileName(file.FileName);
            if (!string.IsNullOrEmpty(fileName))
            {
                // file is saved to path
                file.SaveAs(folderPath + fileName);
                attachments.Add(new Attachment
                {
                    CofONumber = cOfONumber,
                    FileName = fileName,
                    File = folderPath + fileName
                });
            }
        }
    }

    return attachments;
}

// And using Task.Run from the controller
[HttpPost]
public async Task<ActionResult> Index(AttachmentViewModel model)
{
    if (ModelState.IsValid)
    {
        try
        {
            var attachments = await Task.Run(() => AttachmentProcessor.SaveAttachmentsToFileSystemAsync(model.Others, model.CofONumber));
            _repository.InsertEntities(attachments.ToList());
            _unitOfWork.Save();
            return RedirectToAction("Index", "Home");
        }
        catch (Exception)
        {
            ModelState.AddModelError("", "Unable to save data");
        }
    }
    return View(new AttachmentViewModel() { CofONumber = model.CofONumber });
}
Updating synchronous code to run asynchronously using async/await
c#;asynchronous;async await
null
_unix.58138
How do I search man pages for options? I currently do this:

/-<option>

Example: To see what grep -i does, I do this:

man grep

and then /-i. Or, I can do this:

man grep | grep .-i

Is there a better way of searching man pages for options?
How do I better read man pages?
man
null
_unix.52726
I'm running Linux Mint 13 MATE 64-bit. Everything has been working for several weeks. Yesterday, when I tried to boot up my computer, after the BIOS screen flashes I reach a screen with a black background that reads at the top:

GNU GRUB version 1.99-21ubuntu3.4

Then there is a box in which I can select from the following lines:

Linux Mint 13 MATE 64-bit, 3.2.0-31-generic (/dev/sdb2)
Linux Mint 13 MATE 64-bit, 3.2.0-31-generic (/dev/sdb2) -- recovery mode
Previous Linux versions
Memory test (memtest86+)
Memory test (memtest86+, serial console 115200)

At the bottom it reads:

Use the up and down arrow keys to select which entry is highlighted. Press enter to boot the selected OS, 'e' to edit the commands before booting or 'c' for a command-line.

I have no idea why it started doing this and, worse, I have no idea how to get out of here. No matter which option I select, I can't get it to boot the OS. If I select either of the first two, it reboots to splash the BIOS and then I'm right back where I started. If I choose Previous Linux versions I get essentially the same screen with only two choices (which are the same as the first two choices listed above, Linux Mint 13 MATE and the recovery mode). Again, choosing either one of those results in a reboot. If I try to run either of the memtest options, it reads:

error: unknown command 'linux16',
Press any key to continue...

Then it brings me back to the same screen. Can anyone help me please?

Hardware specification: Intel Core i5-2500; ASUS P8Z68-V LX Intel motherboard; G.Skill Ripjaws series F3-12800CL9D-8GBRL (4GB x2); Plextor 128GB M5S Series SSD

Update: If I press 'e' it reads as follows:

setparams 'Linux Mint 13 MATE 64-bit, 3.2.0-31-generic (/dev/sdb2)'
recordfail
gfxmode $linnux_gfx_mode
insmod part_gpt
insmod ext2
set root = '(hd1,gpt2)'
search --no-floppy --fs-uuid --set=root 249aaa9-029d-4599-b25d-92003c49e087
linux /boot/vmlinuz-3.2.0-31-generic root=UUID=2492aaa9-029d-4599-b25d-92003c49e087 ro \
quiet splash $vt_handoff
initrd /boot/initrd.img-3.2.0-31-generic
after BIOS splash, will not boot, asked to select OS, but can't
boot;grub
null
_softwareengineering.150332
As far as I know, a Turing Machine is the widely used model in computational theory to know whether something can be computed and, if it can, whether it can be computed in finite time (P, NP, NPSpace). But I have the following doubts:

Is a Turing Machine essentially a black-box model where a set of inputs gives a set of outputs, such that there could be no interaction during computation? By no interaction, I mean that during computation the variables wouldn't be altered by an external factor.

As an extension to the above question, are non-deterministic functions Turing complete?

Is a Turing Machine an efficient model for Quantum Computing?

Going by what I have learned so far, my answers are:

Turing Machines cannot handle interaction and random behavior, and it's not guaranteed even by Turing in his original paper. Non-deterministic functions may bring Turing machines to a halt.

No, since Turing machines cannot efficiently support the superposition of bits.

Note: I am not well-versed in either Computational Theory or Quantum Computing, so a lot of links and some beginner's stuff would be very helpful. Reading this article inspired this question.
Quantum computers and Turing Machine
computer science;turing completeness
1) As you pointed out, a lot of the theory about what can be computed is based on it. For that to work out, it is essential to know how it operates internally. A Turing machine is not a black box. A favorable property of Turing machines is their locality of change. Every step changes just very little, that is, the internal state (think of it as a number), the letter on the tape and the position on the tape. The latter can only be changed by 1 step to the left or to the right. In this model all input is in the form of what is written on the tape. The tape content is only changed by the machine. So - no interaction.

2) A machine or programming language is called Turing complete if it can simulate all Turing machines. Thus, non-deterministic Turing machines are Turing complete, because they can simulate a Turing machine by simply not using non-determinism. Interestingly enough, a deterministic Turing machine can simulate a non-deterministic one, simply by trying all possible outcomes of non-deterministic results sequentially. This is a brute force approach and not very efficient. It is unclear if there is an efficient way to do it. BTW, most computer scientists do not think that it can be done.

As for your own answer to this - Turing machines are supposed to halt. In this context it means that the computation is finished and has a result. You might think it means that the machine gets frozen. But it is the other way round. A frozen machine (e.g. your desktop computer) did not halt when it was supposed to, and now you are waiting forever and cannot do anything (but reboot). Non-determinism has no effect on a machine halting or not.

3) The only known way to simulate a quantum computing device uses non-determinism. As said under 2), we can simulate non-determinism, but not efficiently. And we probably never will.
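The sequential brute-force simulation from 2) can be sketched in a few lines of Python; subset-sum here is just a hypothetical stand-in for a problem a nondeterministic machine would solve with a single guess.

```python
from itertools import product

def nondet_subset_sum(xs, target):
    # A nondeterministic machine would "guess" an accepting subset in one
    # branch; the deterministic simulation tries all 2^n branches in turn.
    for mask in product([False, True], repeat=len(xs)):
        if sum(x for x, keep in zip(xs, mask) if keep) == target:
            return True   # some branch accepts
    return False          # every branch rejects

print(nondet_subset_sum([3, 5, 7], 12))  # True: 5 + 7
print(nondet_subset_sum([3, 5, 7], 4))   # False
```

The exponential blow-up in the number of branches is exactly why this simulation is correct but not efficient.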
_webmaster.106228
One of my web pages with sensitive information was indexed by Bing search and is in turn showing in the search results. I have now solved that with a robots meta tag in the header. However, I would like to know the root from which Bing got this link. To be specific: where did the web crawler identify my website link? Is there any way I could find that?
How do we know, what is the source from which BING search indexed my webpage? - A Blog or website where my website link is placed
web crawlers
null
_unix.222793
I am creating a script in CentOS 7 to move the latest file within a directory to another directory. The original directory that I'm copying from contains a valid file, however when I try to move or copy the file it errors out saying the file does not exist. I know the file does exist, as I prove below. Why does it fail, and what can I do to fix it?

If I run this line from my script in the shell, the $( ) command substitution expands the output into the variable as expected:

NEW=$(ls -Art /home/user/directory/ | tail -1)

I can prove this to myself by echoing the value of the variable like so:

echo $NEW
file.tar.gz

Then I try to move the file to a different directory:

mv $NEW /usr/local/directory/

..and this is where I get the error. Note that the error message explicitly names the file it cannot find:

mv: cannot stat file.tar.gz: No such file or directory

The shell appears to be telling me that it can't find the file and then naming the file it can't find. I have tried replacing the backticks with parentheses, but same result. I have tried changing the permissions of both the file and the directories above it to pretty much every permutation I can think of, and also changed ownership to user.user. I have tried running the command as both root and user, same result each time. I will appreciate any attempt to help resolve this.
File exists but mv errors out with: mv: cannot stat file.tar.gz: No such file or directory
shell;mv
It looks like you are not in the directory where the file is. You used ls -Art /home/user/directory/, which returns only the filename part into NEW, not the directory part. Your move command should be:

mv /home/user/directory/$NEW /usr/local/directory/
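A runnable sketch of the same fix, using throwaway directories in place of the question's paths:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/old.tar.gz"
sleep 1                           # distinct mtimes so the -rt ordering is stable
touch "$src/file.tar.gz"
NEW=$(ls -Art "$src" | tail -1)   # newest file: just the name, no directory
mv -- "$src/$NEW" "$dst/"         # prefix the directory so mv can stat it
ls "$dst"
```

Quoting "$src/$NEW" also keeps filenames containing spaces from breaking the move.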
_unix.329440
For example, I have a file with this content:
hello world
it's nice to see you
amazing night
what a wonderful day
my name is Robert
still breathing
speaking bottom soul
something wrong
I need to match those lines in which the second word has exactly two vowels. So the output should be:
it's nice to see you
my name is Robert
speaking bottom soul
How can I do this using grep?
Grep template for extracting lines where second word has only two vowels
grep;regular expression
grep with extended regular expressions:
grep -iE '^[^[:blank:]]+[[:blank:]]+([^aeiou]*[aeiou]){2}[^aeiou]*\>' file
grep with pcre:
grep -iP '^\S+\s+([^aeiou]*[aeiou]){2}[^aeiou]*\b' file
perl (honestly, I did this independently of steeldriver's comment):
perl -ane 'print if (lc($F[1]) =~ tr/aeiou/aeiou/) == 2' file
awk:
awk '{col2 = tolower($2); gsub(/[aeiou]/,"",col2)} length($2) - length(col2) == 2' file
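For reference, running the extended-regex variant over the sample input from the question reproduces the expected three lines (the `\>` word-boundary operator is GNU grep syntax):

```shell
# Recreate the sample file from the question and run the ERE variant
f=$(mktemp)
cat > "$f" <<'EOF'
hello world
it's nice to see you
amazing night
what a wonderful day
my name is Robert
still breathing
speaking bottom soul
something wrong
EOF
grep -iE '^[^[:blank:]]+[[:blank:]]+([^aeiou]*[aeiou]){2}[^aeiou]*\>' "$f"
rm -f "$f"
```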
_unix.231726
I'd like to keep my modifications to as few files as possible, so I don't want to touch .inputrc unless I absolutely have to. So, given .inputrc lines like:
"\e[5~": history-search-backward
"\e[6~": history-search-forward
How can I apply them only using bash? This SU post indicated that bind could read from .inputrc, and bind's help says:
$ help bind
bind: bind [-lpsvPSVX] [-m keymap] [-f filename] [-q name] [-u name] [-r keyseq] [-x keyseq:shell-command] [keyseq:readline-function or readline-command]
history-search-* look like readline functions, so I tried:
bind "\e[6~":history-search-forward
bind "\e[5~":history-search-backward
Page Up now triggers a bell, Page Down printed a ~. Is there a general way for me to use inputrc lines in bash?
How do I convert inputrc settings to bashrc ones?
bash;readline
According to what I have in my .bashrc you need something like
bind '"\e[6~": history-search-forward'
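As a general pattern, each inputrc line can be carried over by single-quoting it as a whole, so the double quotes around the key sequence reach readline intact. A sketch for the two bindings from the question, as lines for ~/.bashrc:

```shell
# In ~/.bashrc: the single quotes keep bash from consuming the inner double quotes
bind '"\e[5~": history-search-backward'
bind '"\e[6~": history-search-forward'
```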
_unix.141630
I am trying to run Chromium (the browser) on a Ubuntu install within Android (ARM) and am facing a weird issue: Webpages are not rendering at all.Links etc within the page are still clickable, however the webpage area is completely blank.Terminal output:chromium-browser --user-data-dir='/home/zac'[2495:2495:0710/143156:ERROR:browser_main_loop.cc(192)] GTK theme error: Unable to locate theme engine in module_path: pixmap,[2495:2495:0710/143156:ERROR:browser_main_loop.cc(192)] GTK theme error: Unable to locate theme engine in module_path: pixmap,[2495:2495:0710/143156:ERROR:browser_main_loop.cc(192)] GTK theme error: Unable to locate theme engine in module_path: pixmap,[2495:2495:0710/143156:ERROR:browser_main_loop.cc(212)] Gdk: shmget failed: error 38 (Function not implemented)Failed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoGkr-Message: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service filesFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoFailed to open /proc/cpuinfoIf you need any more info, let me know!
Chromium does not Render webpage (arm)
linux;ubuntu;chrome;arm
null
_computerscience.2222
I am trying to write a ray tracer to render boxes that are arbitrarily rotated, i.e. not necessarily axis aligned. While I am reasonably comfortable ray tracing axis-aligned bounding boxes (AABB), I don't know how to handle non-axis aligned objects. The geometry becomes quite difficult. I've looked through a textbook (Ray Tracing from the Ground Up) but couldn't find an answer.How should I do it? Detailed suggestions would be greatly appreciated.
How to convert Non-Axis Aligned Bounding Boxes to AABB
raytracing;geometry
You can assign a coordinate system to each nAABB in such a way that the nAABB becomes an AABB in its own coordinate system. We call this a local coordinate system.I assume rays are expressed in a world or global coordinate system. In order to test an nAABB for intersection, one first needs to apply the world-to-local transformation on the ray (origin and direction). This way we obtain a transformed ray which we can intersect with an AABB. Once an intersection point is found, we need to transform this back to world space via the inverse transformation (i.e. local-to-world transformation).In practice an nAABB is thus expressed as an AABB with a pair of world-to-local and local-to-world transformation matrices.If one applies a scaling ($S$) or translation ($T$) on an AABB, one obtains another AABB. If one applies a rotation ($R$) on an AABB, one obtains a nAABB.Typically the corresponding transformation matrices are multiplied as follows $T * R * S$. Thus translation is applied after rotation. Since the $R$ component must be included in the world-to-local transformation, you need to include the $T$ component as well. The $S$ component can be included or directly applied to a unit AABB centered at the origin of the world-coordinate system. Including a $T$ component requires your world-to-local and local-to-world transformation matrices to be of size $4$x$4$ (expressed in homogeneous coordinates).Here you can find C++ code to construct all the $4$x$4$ world-to-local matrices and their inverses you probably need.Note that this approach works for other geometrical objects as well (if the transformations involve scaling, rotation and translation).
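A sketch of that recipe (the function name and matrix-passing convention are made up for illustration): transform the ray into the box's local frame with the world-to-local matrix, run an ordinary AABB slab test there, then map the hit point back with the local-to-world matrix. Both transforms are 4x4 homogeneous matrices.

```python
import numpy as np

def ray_naabb_intersect(origin, direction, box_min, box_max,
                        world_to_local, local_to_world):
    """Intersect a world-space ray with a non-axis-aligned box by working
    in the box's local frame, where the box is an ordinary AABB."""
    # Transform the origin as a point (w=1) and the direction as a vector (w=0)
    o = (world_to_local @ np.append(origin, 1.0))[:3]
    d = (world_to_local @ np.append(direction, 0.0))[:3]
    # Standard slab test against the local AABB (inf from /0 is handled by min/max)
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (box_min - o) / d
        t2 = (box_max - o) / d
    tmin = np.nanmax(np.minimum(t1, t2))
    tmax = np.nanmin(np.maximum(t1, t2))
    if tmax < max(tmin, 0.0):
        return None                      # ray misses the box
    t = tmin if tmin > 0.0 else tmax
    # Transform the local hit point back to world space
    p_local = o + t * d
    return (local_to_world @ np.append(p_local, 1.0))[:3]
```

For example, with a unit box whose local-to-world transform is a translation by (10, 0, 0), a ray from the world origin along +x hits the near face at (9, 0, 0).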
_webmaster.8608
I am inexperienced in the configuration of DNS and have an issue with my domain hosting setup. I have two domains, 'www.mydomain1.com' and 'www.mydomain2.com', with mydomain2 pointed at the same place as mydomain1. The domains were passed to me recently by the person who previously controlled them. I have an account with Fasthosts in the UK. When I accepted the domains I could not access the DNS settings and inquired with Fasthosts as to why. The reply was: The delegate hosting option for both domains was enabled, and this is the reason why you were unable to find the option to edit the advanced DNS records. I have now disabled the delegate hosting option so you can now edit the advanced DNS records for both domains in your account. When I log into the Fasthosts control panel now I can access the DNS controls, but both domains have no A record or CNAME record set up. I am concerned that Fasthosts have blatted the previous nameserver entries and set me up on theirs without adding any records. 'www.mydomain1.com' currently still works but 'www.mydomain2.com' does not find the site anymore. I am worried I will lose mydomain1 too as the DNS changes filter through the system. My web hosting is at 'xxx.xxx.xxx.xxx/mydomain1.com/' and this is where I want both domains to point. Any advice would be much appreciated. One thing which is confusing me is that because I am on a shared server I have to put 'xxx.xxx.xxx.xxx/mydomain1.com/' to get to my site rather than just 'xxx.xxx.xxx.xxx'. The form on Fasthosts for the A record only allows an IP to be entered; does it add the mydomain1.com/ onto the end itself? Thanks for any help given; I'm quite worried about this. David
DNS nameserver, A and CNAME records
dns;nameserver
null
_unix.120124
module-assistant has been the de-facto method of compiling and building binary Debian packages containing kernel modules for a while now. More recently a comparable utility has appeared: dkms. If anyone has experience using both, please do a compare and contrast on the advantages and disadvantages of using one versus the other. One item to address in an answer is whether dkms builds binary Debian packages for kernel modules too, and if so, how, and what the differences between the packages built by m-a and dkms are, if any. I have personally never used dkms, but I have used module-assistant sporadically over many years, and it has been a good experience. I don't have immediate plans to experiment with dkms, so I don't think I am the right person to write an answer. Random googling found a discussion here, also this. Needless to say, any answer should be based on first-hand experience, not copying from web forums. A worked example using both would be nice. Possibly nvidia-kernel, since that is quite commonly used. Yes, I know it is a proprietary kernel module. :-( UPDATE: Thanks to jordanm for his answer. I'd like something that goes into a little more detail about what is going on under the hood, for both m-a and dkms, though I did not initially mention this. Also, it sounds like most of the time dkms will work transparently and automatically. But what are the failure modes of dkms? How do they both cope with manually compiled/installed kernels, either installed from a binary package or a local install?
dkms versus module-assistant
debian;dkms
null
_unix.193944
I tried to view failed user login attempts since a specific time, but all the methods I found by searching the internet do not work. I am using openSUSE 13.2.
journalctl -a --no-pager --since="2015-02-04 00:00:00"
gives me a long and ugly list of all system events (which does include the failed login attempts). Is there a better way to collect this information?
how to view failed login attempts?
users;sshd
This works for me:
journalctl `which sshd` -a --no-pager --since="2015-02-04 00:00:00" | grep Failed
Sample output:
Apr 02 10:18:13 sturm sshd[6068]: Failed password for aboettger from 192.168.2.40 port 4812 ssh2
Apr 02 10:18:18 sturm sshd[6068]: Failed password for aboettger from 192.168.2.40 port 4812 ssh2
Or use the -p option, e.g.:
journalctl `which sshd` -p 5 -a --no-pager --since="2015-02-04 00:00:00"
journalctl man page: -p, --priority= Filter output by message priorities or priority ranges. Takes either a single numeric or textual log level (i.e. between 0/emerg and 7/debug), or a range of numeric/text log levels in the form FROM..TO. The log levels are the usual syslog log levels as documented in syslog(3), i.e. emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), debug (7). If a single log level is specified, all messages with this log level or a lower (hence more important) log level are shown. If a range is specified, all messages within the range are shown, including both the start and the end value of the range. This will add PRIORITY= matches for the specified priorities.
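As a hypothetical follow-up, the Failed lines can be aggregated per source address with awk; the input lines below are just the sample output from above, piped in directly so the example is self-contained:

```shell
# Count failed attempts per client IP from sshd "Failed password" lines
printf '%s\n' \
  'Apr 02 10:18:13 sturm sshd[6068]: Failed password for aboettger from 192.168.2.40 port 4812 ssh2' \
  'Apr 02 10:18:18 sturm sshd[6068]: Failed password for aboettger from 192.168.2.40 port 4812 ssh2' |
awk '/Failed password/ { for (i = 1; i <= NF; i++) if ($i == "from") ip = $(i + 1); count[ip]++ }
     END { for (ip in count) print count[ip], ip }'
```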
_softwareengineering.152453
In bash at least, if and case blocks are closed like this:
if some-expr
then
  echo "hello world"
fi
case $some-var in
[1-5]) do-a-thing ;;
*) do-another-thing
esac
as opposed to the more typical close of end or endif/endcase. As far as I know, this rather funny convention is unique to shell scripting and I have never seen such an odd block terminator anywhere else. Sometimes things like this have an origin in another language (like Ruby's elsif coming from Perl), or a strange justification. Does this feature of shell scripting have a story behind it? Is it found in other languages?
Are backwards terminators for if and case unique to shell scripting?
syntax;shell;bash
According to Wikipedia, the sh program reused the syntax from ALGOL 68.Stephen Bourne carried into this shell some aspects of the ALGOL 68C compiler that he had been working on at Cambridge University. Notably he reused portions of ALGOL 68's if ~ then ~ elif ~ else ~ fi, case ~ in ~ esac and for ~ while ~ do ~ od (using done instead of od) clauses in the common Unix Bourne shell syntax.Since Bash was the GNU replacement of sh, it reused the same syntax for backwards compatibility.
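The inherited keyword pairs are easy to see side by side in a tiny runnable sample (note done where ALGOL 68 used od):

```shell
# if/fi, case/esac and do/done in Bourne-style shell
n=2
if [ "$n" -gt 1 ]
then echo "big"
fi
case "$n" in
  [1-5]) echo "small range" ;;
  *)     echo "other" ;;
esac
for w in fi esac done
do echo "$w"
done
```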
_unix.223922
I would like to update my OCaml distribution from version 4.01 to version 4.02, in particular the ocaml-nox package. This breaks the dependencies of several other packages that explicitly require 4.01. In my case, libctypes-ocaml[-dev] v0.2.3, libfindlib-ocaml[-dev] v1.4.1 and ocaml-findlib v1.4.1 have this dependency.I could not find updates for the dependent packages and the maintainers told me in a one-liner that the situation might not change any time soon. Is there any way for me to proceed with the update anyway (assuming that all packages would indeed work with 4.02)? Can I for example trick aptitude into believing that the new versions are also fine?
Package update on debian jessie breaks dependencies
debian;apt
You need to rebuild the packages for the new OCaml version yourself, or wait until Debian does it officially. For rebuilding, e.g. follow this general tutorial: https://wiki.debian.org/BuildingTutorial There is already a transition ongoing in Debian where all OCaml packages are rebuilt automatically for the new OCaml version. You can find details for this and related problems in the Bug tracking system. Rebuilt packages for the amd64 architecture can already be found here.
_codereview.25189
I wrote a small duct-taped VBA script which pings servers every 15 mins or so. If a server's status is anything other than "Alive", the server and timestamp are written to another worksheet called "Log".

Sub Countup()
    Dim CountDown As Date
    CountDown = Now + TimeValue("00:00:01")
    Application.OnTime CountDown, "Auto_Open"
End Sub

Sub Auto_Open()
    Dim count As Range
    Set count = Worksheets("Servers").Range("A1:A1")
    count.Value = count.Value - TimeSerial(0, 0, 1)
    If count <= 0 Then
        count = Worksheets("Servers").Range("C1:C1")
        Call GetComputerToPing
        Call Countup
        Exit Sub
    End If
    Call Countup
End Sub

Public Sub addDataToTable(ByVal strTableName As String, ByVal strData As String, ByVal Col As Integer)
    Dim lLastRow As Long
    Dim iHeader As Integer
    With ActiveSheet.ListObjects(strTableName)
        'find the last row of the list
        lLastRow = ActiveSheet.ListObjects(strTableName).ListRows.count
        'shift from an extra row if list has header
        If .Sort.Header = xlYes Then
            iHeader = 1
        Else
            iHeader = 0
        End If
    End With
    'add the data a row after the end of the list
    ActiveSheet.Cells(lLastRow + 1 + iHeader, Col).Value = strData
End Sub

'Requires references to Microsoft Scripting Runtime and Windows Script Host Object Model.
'Set these in Tools - References in VB Editor.
Function sPing(sHost) As String
    On Error Resume Next
    sHost = Trim(sHost)
    Dim ipaddress As String
    Dim computername As String
    Dim Model As String
    Dim memory As Long
    Dim oPing As Object, oRetStatus As Object
    Set oPing = GetObject("winmgmts:{impersonationLevel=impersonate}")
    Set oPing = oPing.execquery("select * from win32_pingstatus where address ='" & sHost & "'")
    For Each oRetStatus In oPing
        If IsNull(oRetStatus.statuscode) Then
            sPing = "Dead"
        ElseIf oRetStatus.statuscode = 11010 Then
            sPing = "Request Timed Out"
        ElseIf oRetStatus.statuscode = 11013 Then
            sPing = "Destination Host Unreachable"
        Else
            sPing = "Alive"
        End If
    Next
    Set oPing = Nothing
    Set oRetStatus = Nothing
End Function

Sub GetComputerToPing()
    Application.DisplayAlerts = False
    'On Error Resume Next
    Dim applicationobject As Object
    Dim i As Integer
    i = 3 'row to start checking servers from
    Do Until Cells(i, 1) = ""
        'If Cells(i, 1) <> "" Then
        'If Cells(i, 2) = "Request Timed Out" Or Cells(i, 2) = "" Or Cells(i, 2) = "Dead" Then
        Cells(i, 2) = sPing(Cells(i, 1))
        Cells(i, 3) = Now()
        'log it to "Log"
        If Cells(i, 2).Value <> "Alive" Then
            Call copytest(i)
        End If
        'End If
        'End If
        i = i + 1
    Loop
    Set applicationobject = Nothing
End Sub

Function findlast_Row() As Long
    Dim ws As Worksheet
    Set ws = ThisWorkbook.Sheets("Log")
    With ws
        findlast_Row = .Range("A" & .Rows.count).End(xlUp).Row
    End With
End Function

Sub copytest(ByVal intRow As Integer)
    'screens for last row in log sheet
    iLastRow = findlast_Row() + 1
    Worksheets("Log").Range("A" & CStr(iLastRow) & ":E" & CStr(iLastRow)).Value = Worksheets("Servers").Range("A" & CStr(intRow) & ":E" & CStr(intRow)).Value
End Sub

Is there another way (or a better way) to do the countdown timer?
How to make this ping test with timer more efficient?
optimization;vba;timer;excel
null
_webmaster.99240
On the Google structured data testing tool, it shows that they are all separate instances. I'm trying to have them link to each other. This is what I have now:

<section class="primary content-area" itemscope itemtype="http://schema.org/CollectionPage">
  <meta itemprop="name headline" content="Stepanie Schaefer" />
  <div itemid="http://www.cheapflights.com/news/author/stephanieschaefer/" itemscope itemtype="http://schema.org/Person">
    <h1 itemprop="headline"><span itemprop="name">Stephanie Schaefer</span></h1>
    <p itemprop="description">Stephanie is a Boston native who loves to find ways to escape New England winters. She's thrown a coin in the Trevi Fountain, sipped wine on a vineyard in Northern Spain and swam in the Mediterranean Sea. Although she hasn't been everywhere, it's definitely on her list.</p>
  </div>
  <div itemscope itemtype="http://schema.org/BlogPosting">
    <h1 itemprop="headline">Top 10 booze-infused getaways</h1>
    <link itemprop="author" href="http://www.cheapflights.com/news/author/stephanieschaefer/" />
    <p itemprop="description">Description of Blog</p>
  </div>
  <div itemscope itemtype="http://schema.org/BlogPosting">
    <h1 itemprop="headline">Top 10 booze-infused getaways</h1>
    <link itemprop="author" href="http://www.cheapflights.com/news/author/stephanieschaefer/" />
    <p itemprop="description">Description of Blog</p>
  </div>
</section>

I tried to link them together with itemref but it still doesn't seem to work.

<section class="primary content-area" itemscope itemtype="http://schema.org/CollectionPage" itemref="blogs1 blogs2">
  <meta itemprop="name headline" content="Stepanie Schaefer" />
  <div itemid="http://www.cheapflights.com/news/author/stephanieschaefer/" itemscope itemtype="http://schema.org/Person">
    <h1 itemprop="headline"><span itemprop="name">Stephanie Schaefer</span></h1>
    <p itemprop="description">Stephanie is a Boston native who loves to find ways to escape New England winters. She's thrown a coin in the Trevi Fountain, sipped wine on a vineyard in Northern Spain and swam in the Mediterranean Sea. Although she hasn't been everywhere, it's definitely on her list.</p>
  </div>
  <div itemscope itemtype="http://schema.org/BlogPosting" id="blogs1">
    <h1 itemprop="headline">Top 10 booze-infused getaways</h1>
    <link itemprop="author" href="http://www.cheapflights.com/news/author/stephanieschaefer/" />
    <p itemprop="description">Description of Blog</p>
  </div>
  <div itemscope itemtype="http://schema.org/BlogPosting" id="blogs2">
    <h1 itemprop="headline">Top 10 booze-infused getaways</h1>
    <link itemprop="author" href="http://www.cheapflights.com/news/author/stephanieschaefer/" />
    <p itemprop="description">Description of Blog</p>
  </div>
</section>

How do we link the two objects CollectionPage and BlogPosting together?
HTML5 Microdata - itemref to another itemscope (CollectionPage containing all the blogs)
html;html5;schema.org;microdata
You need to use properties (itemprop) if you want to relate items. The itemref attribute can be used if you can't nest the elements, but you still need to use properties. In your example, it seems that you can nest the elements, so there is no need to use itemref. As described in my answer to your previous question, you could use mainEntity with an ItemList value (and in the ItemList, itemListElement for each BlogPosting). Another option is to use hasPart. It's less expressive than the mainEntity/ItemList way, and using blogPost (in a Blog) might be preferable, but it's very simple, so it can illustrate the way how it works in Microdata.

Nesting (without itemref):

<section itemscope itemtype="http://schema.org/CollectionPage">
  <article itemprop="hasPart" itemscope itemtype="http://schema.org/BlogPosting">
  </article>
  <article itemprop="hasPart" itemscope itemtype="http://schema.org/BlogPosting">
  </article>
</section>

itemref (without nesting):

<section itemscope itemtype="http://schema.org/CollectionPage" itemref="blogs1 blogs2"></section>

<!-- don't nest this 'article' inside another element with 'itemscope' -->
<article itemprop="hasPart" itemscope itemtype="http://schema.org/BlogPosting" id="blogs1"></article>

<!-- don't nest this 'article' inside another element with 'itemscope' -->
<article itemprop="hasPart" itemscope itemtype="http://schema.org/BlogPosting" id="blogs2"></article>

(Google's SDTT will output an @id value for this example, using the id values, but this is likely a bug. You can overwrite it by giving each BlogPosting an itemid value.)
_unix.114770
On a device located between my local network and a router (all the traffic passes through it), I need to read the common name from the certificate the server sends after its Server Hello. So I'm trying to figure out the proper filter for tcpdump. I found help in this paper: http://www.wains.be/pub/networking/tcpdump_advanced_filters.txt It explains how to use advanced filters on IP and TCP fields. I tried this kind of filter:
tcpdump -i any 'tcp and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -A -s 0 -v | grep 'Host\|id-at-commonName='
As explained in the paper, we are matching any packet that contains data. It works for the Host field, as for many other data, but I can't match the field id-at-commonName=, which is in the SSL payload (so in the TCP data field?). To be sure, I captured a pcap file with the exact same filter (without the grep), and when I open it with Wireshark I can read every certificate common name. I must use a tcpdump filter because I need to get the data on the fly. Can someone tell me why I can't see this data through tcpdump?
tcpdump hello server certificate
tcpdump
null
_unix.82842
I have a Qualcomm-based USB modem; here is the configuration I've done in the past.

wvdial
# wvdial phone
--> WvDial: Internet dialer version 1.61
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
--> Re-Sending: ATZ
--> Modem not responding.

udev
$ cat /etc/udev/rules.d/option.rules
ATTRS{idVendor}=="05c6", ATTRS{idProduct}=="1000", RUN+="/usr/bin/usbModemScript"
ATTRS{idVendor}=="05c6", ATTRS{idProduct}=="1000", RUN+="/sbin/modprobe option"

script
$ cat /usr/bin/usbModemScript
#! /bin/bash
echo 05c6 1000 > /sys/bus/usb-serial/drivers/generic/new_id

lsusb
$ lsusb | grep 1000
Bus 003 Device 004: ID 05c6:1000 Qualcomm, Inc. Mass Storage Device

/dev
$ ls /dev/ttyUSB0
/dev/ttyUSB0

wvdial.conf
$ cat /etc/wvdial.conf
[Dialer phone]
Modem Type = Analog Modem
Phone = #777
ISDN = 0
Baud = 460800
Username = user
Password = pwd
Modem = /dev/ttyUSB0
Init1 = ATZ
Stupid Mode = 1

wvdialconf
$ wvdialconf
Editing `/etc/wvdial.conf'.
Scanning your serial ports for a modem.
Modem Port Scan<*1>: S0 S1 S2 S3
WvModem<*1>: Cannot get information for serial port.
ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud
ttyUSB0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up.
Sorry, no modem was detected!
Is it in use by another program?Did you configure it properly with setserial?Please read the FAQ at http://alumnit.ca/wiki/?WvDialwvdial$ wvdial phone--> WvDial: Internet dialer version 1.61--> Cannot get information for serial port.--> Initializing modem.--> Sending: ATZ--> Sending: ATQ0--> Re-Sending: ATZ--> Modem not responding.I'm using x86 fedora 18.UPDATE #1lsmod$ lsmodModule Size Used byoption 29833 0 usb_wwan 18701 1 optionip6table_filter 12712 0 ip6_tables 17745 1 ip6table_filterebtable_nat 12696 0 ebtables 21316 1 ebtable_natfuse 71577 9 bnep 18864 2 bluetooth 275642 7 bnepvboxpci 22897 0 vboxnetadp 25637 0 vboxnetflt 27262 0 vboxdrv 264146 3 vboxnetadp,vboxnetflt,vboxpcibe2iscsi 76220 0 iscsi_boot_sysfs 15122 1 be2iscsibnx2i 49543 0 cnic 57574 1 bnx2iuio 14413 1 cniccxgb4i 32075 0 cxgb4 97513 1 cxgb4icxgb3i 28034 0 cxgb3 130967 1 cxgb3imdio 13244 1 cxgb3libcxgbi 54562 2 cxgb3i,cxgb4iib_iser 32692 0 rdma_cm 37085 1 ib_iserib_addr 13513 1 rdma_cmiw_cm 13753 1 rdma_cmib_cm 36713 1 rdma_cmib_sa 23966 2 rdma_cm,ib_cmib_mad 37175 2 ib_cm,ib_saib_core 61976 6 rdma_cm,ib_cm,ib_sa,iw_cm,ib_mad,ib_iseriscsi_tcp 18016 0 libiscsi_tcp 19468 4 cxgb3i,cxgb4i,iscsi_tcp,libcxgbi libiscsi 44825 8 libiscsi_tcp,bnx2i,cxgb3i,cxgb4i,be2iscsi,iscsi_tcp,ib_iser,libcxgbiscsi_transport_iscsi 46616 8 bnx2i,be2iscsi,iscsi_tcp,ib_iser,libcxgbi,libiscsiarc4 12544 2 rtl8187 56256 0 eeprom_93cx6 12987 1 rtl8187mac80211 471137 1 rtl8187uvcvideo 71339 0 videobuf2_vmalloc 12840 1 uvcvideovideobuf2_memops 13191 1 videobuf2_vmallocvideobuf2_core 33259 1 uvcvideovideodev 91347 2 uvcvideo,videobuf2_coremedia 19720 2 uvcvideo,videodevcfg80211 170721 2 mac80211,rtl8187snd_hda_codec_conexant 56642 1 snd_hda_intel 32539 2 snd_hda_codec 109374 2 snd_hda_codec_conexant,snd_hda_inteltoshiba_acpi 18335 0 sparse_keymap 13343 1 toshiba_acpisnd_hwdep 13233 1 snd_hda_codecsnd_seq 54700 0 rfkill 20452 5 cfg80211,toshiba_acpi,bluetoothsnd_seq_device 13825 1 snd_seqsnd_pcm 81512 2 
snd_hda_codec,snd_hda_intelsnd_page_alloc 13710 2 snd_pcm,snd_hda_intelsnd_timer 23743 2 snd_pcm,snd_seqsnd 63247 12 /var/log/messages$ tail -f /var/log/messagesJul 13 14:16:43 localhost ntfs-3g[1519]: Version 2012.1.15 integrated FUSE 27Jul 13 14:16:43 localhost ntfs-3g[1519]: Mounted /dev/sda5 (Read-Write, label , NTFS 3.1)Jul 13 14:16:43 localhost ntfs-3g[1519]: Cmdline options: rwJul 13 14:16:43 localhost ntfs-3g[1519]: Mount options: rw,allow_other,nonempty,relatime,fsname=/dev/sda5,blkdev,blksize=4096Jul 13 14:16:43 localhost ntfs-3g[1519]: Ownership and permissions disabled, configuration type 1Jul 13 14:16:44 localhost systemd[1]: Starting Stop Read-Ahead Data Collection...Jul 13 14:16:44 localhost systemd[1]: Started Stop Read-Ahead Data Collection.Jul 13 14:17:17 localhost kernel: [ 102.933110] usb 3-1: new full-speed USB device number 2 using uhci_hcdJul 13 14:17:17 localhost kernel: [ 103.081203] usb 3-1: New USB device found, idVendor=05c6, idProduct=1000Jul 13 14:17:17 localhost kernel: [ 103.081214] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3Jul 13 14:17:17 localhost kernel: [ 103.081221] usb 3-1: Product: USB MMC StorageJul 13 14:17:17 localhost kernel: [ 103.081228] usb 3-1: Manufacturer: Qualcomm, IncorporatedJul 13 14:17:17 localhost kernel: [ 103.081234] usb 3-1: SerialNumber: 000000000002Jul 13 14:17:17 localhost mtp-probe: checking bus 3, device 2: /sys/devices/pci0000:00/0000:00:1a.0/usb3/3-1Jul 13 14:17:17 localhost mtp-probe: bus: 3, device: 2 was not an MTP deviceJul 13 14:17:17 localhost kernel: [ 103.132523] usbserial_generic 3-1:1.0: The generic usb-serial driver is only for testing and one-off prototypes.Jul 13 14:17:17 localhost kernel: [ 103.132531] usbserial_generic 3-1:1.0: Tell [email protected] to add your device to a proper driver.Jul 13 14:17:17 localhost kernel: [ 103.132535] usbserial_generic 3-1:1.0: generic converter detectedJul 13 14:17:17 localhost kernel: [ 103.134761] usb 3-1: generic converter now 
attached to ttyUSB0Jul 13 14:17:17 localhost kernel: [ 103.159122] usbcore: registered new interface driver optionJul 13 14:17:17 localhost kernel: [ 103.160896] USB Serial support registered for GSM modem (1-port)Jul 13 14:17:17 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) opening serial port...Jul 13 14:17:17 localhost dbus-daemon[593]: modem-manager[720]: <warn> (ttyUSB0): port attributes not fully setJul 13 14:17:17 localhost modem-manager[720]: <info> (ttyUSB0) opening serial port...Jul 13 14:17:17 localhost modem-manager[720]: <warn> (ttyUSB0): port attributes not fully setJul 13 14:17:29 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) closing serial port...Jul 13 14:17:29 localhost modem-manager[720]: <info> (ttyUSB0) closing serial port...Jul 13 14:17:59 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) serial port closedJul 13 14:17:59 localhost modem-manager[720]: <info> (ttyUSB0) serial port closedJul 13 14:17:59 localhost modem-manager[720]: <info> (ttyUSB0) opening serial port...Jul 13 14:17:59 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) opening serial port...Jul 13 14:18:05 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) closing serial port...Jul 13 14:18:05 localhost modem-manager[720]: <info> (ttyUSB0) closing serial port...Jul 13 14:18:09 localhost dbus-daemon[593]: dbus[593]: [system] Activating service name='net.reactivated.Fprint' (using servicehelper)Jul 13 14:18:09 localhost dbus[593]: [system] Activating service name='net.reactivated.Fprint' (using servicehelper)Jul 13 14:18:09 localhost dbus-daemon[593]: dbus[593]: [system] Successfully activated service 'net.reactivated.Fprint'Jul 13 14:18:09 localhost dbus[593]: [system] Successfully activated service 'net.reactivated.Fprint'Jul 13 14:18:09 localhost dbus-daemon[593]: Launching FprintObjectJul 13 14:18:09 localhost dbus-daemon[593]: ** Message: D-Bus service launched with name: 
net.reactivated.FprintJul 13 14:18:09 localhost dbus-daemon[593]: ** Message: entering main loopJul 13 14:18:35 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) serial port closedJul 13 14:18:35 localhost modem-manager[720]: <info> (ttyUSB0) serial port closedJul 13 14:18:35 localhost modem-manager[720]: <info> (ttyUSB0) opening serial port...Jul 13 14:18:35 localhost modem-manager[720]: <warn> (ttyUSB0): port attributes not fully setJul 13 14:18:35 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) opening serial port...Jul 13 14:18:35 localhost dbus-daemon[593]: modem-manager[720]: <warn> (ttyUSB0): port attributes not fully setJul 13 14:18:40 localhost dbus-daemon[593]: ** Message: No devices in use, exitJul 13 14:18:47 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) closing serial port...Jul 13 14:18:47 localhost modem-manager[720]: <info> (ttyUSB0) closing serial port...Jul 13 14:19:17 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) serial port closedJul 13 14:19:17 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) opening serial port...Jul 13 14:19:17 localhost modem-manager[720]: <info> (ttyUSB0) serial port closedJul 13 14:19:17 localhost modem-manager[720]: <info> (ttyUSB0) opening serial port...Jul 13 14:19:23 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) closing serial port...Jul 13 14:19:23 localhost modem-manager[720]: <info> (ttyUSB0) closing serial port...Jul 13 14:19:53 localhost dbus-daemon[593]: modem-manager[720]: <info> (ttyUSB0) serial port closedJul 13 14:19:53 localhost modem-manager[720]: <info> (ttyUSB0) serial port closedJul 13 14:19:55 localhost dbus-daemon[593]: dbus[593]: [system] Activating service name='net.reactivated.Fprint' (using servicehelper)Jul 13 14:19:55 localhost dbus[593]: [system] Activating service name='net.reactivated.Fprint' (using servicehelper)Jul 13 14:19:55 localhost dbus-daemon[593]: dbus[593]: [system] Successfully 
activated service 'net.reactivated.Fprint'Jul 13 14:19:55 localhost dbus[593]: [system] Successfully activated service 'net.reactivated.Fprint'Jul 13 14:19:55 localhost dbus-daemon[593]: Launching FprintObjectJul 13 14:19:55 localhost dbus-daemon[593]: ** Message: D-Bus service launched with name: net.reactivated.FprintJul 13 14:19:55 localhost dbus-daemon[593]: ** Message: entering main loopJul 13 14:20:01 localhost modem-manager[1827]: <info> ModemManager (version 0.6.0.0-2.fc18) starting...Jul 13 14:20:01 localhost modem-manager[1827]: <warn> Could not acquire the org.freedesktop.ModemManager service as it is already taken. Return: 3//here logged because I ve just tried to manually start modem-manager from terminal but the log below saysJul 13 14:20:25 localhost modem-manager[1833]: <info> [1373700025.052722] ModemManager (version 0.6.0.0-2.fc18) starting...Jul 13 14:20:25 localhost modem-manager[1833]: <warn> [1373700025.057438] Could not acquire the org.freedesktop.ModemManager service as it is already taken. Return: 3Jul 13 14:20:26 localhost dbus-daemon[593]: ** Message: No devices in use, exit
Qualcomm based modem Venus converted to ttyUSB0 but not responding forever on fedora 18
modem;fedora
null
_unix.42477
I'm developing for a specific TI ARM processor with custom drivers that made it into the kernel. I'm trying to migrate from 2.6.32 to 2.6.37, but the structure changed so much that I will have weeks of work to upgrade my code. For example, my chip is the DM365, which comes with video processing drivers. Now most of the old drivers which were directly exposed to me go through V4L2, which might make more sense. TI provides very little information for these upgrades. How am I supposed to keep up with the changes? When I google for specific file names, I rarely find more than a few patches, with few comments on what changed, why, and how the old relates to the new.
How am I supposed to keep up with kernels as a developer?
linux;kernel;upgrade
If you select a kernel to track, be sure to select one that is tagged for long-term support. But sooner or later you will have to move on...
_codereview.127966
I would like to ask more experienced developers to review my code. It's my first app, self-made; I know there is a lot to improve, and it would be great if you could just point out the main errors. I'm aware that my implementation is not universal, but I couldn't get past it. My edit option with its usage of DTOs seems clumsy to me; details can be found in the code: https://github.com/filemonczyk/crud

Thanks to all in advance.

@Controller
public class AnimalsController {

    @Autowired
    private AnimalDAO animalDAO;

    @RequestMapping("")
    public String startPage() {
        return "home";
    }

    // gives a list of animals of a specified type (dog, cat, snake)
    @RequestMapping("write{animal}")
    public String getAnimalList(Model model, @PathVariable("animal") String animal) {
        model.addAttribute("animalName", animalDAO.getAnimalList(animal));
        return "list";
    }

    @RequestMapping("{animal}/{name}/{id}/delete")
    public String dropAnimal(@PathVariable("id") int id, HttpServletResponse response) throws IOException {
        animalDAO.delete(id);
        response.sendRedirect("http://localhost:8080/crud");
        return "home";
    }

    // the following 3 methods add new animals to the database; I have used DTOs here,
    // and had no idea how to turn this into a more flexible solution
    @RequestMapping("addCat")
    public String addAnimal(HttpServletRequest request, HttpServletResponse response,
            @ModelAttribute("catDto") @Valid CatDTO catDto, BindingResult result) throws IOException {
        if (request.getMethod().equalsIgnoreCase("post") && !result.hasErrors()) {
            Cat cat = new Cat();
            cat.setName(catDto.getName());
            cat.setAge(catDto.getAge());
            cat.setDateOfBirth(catDto.getBirthDay());
            cat.setBreed(catDto.getBreed());
            cat.setWeight(catDto.getWeight());
            cat.setFurColor(catDto.getFurColor());
            animalDAO.addAnimal(cat);
            response.sendRedirect("http://localhost:8080/crud");
            return "home";
        }
        return "addCat";
    }

    @RequestMapping("addDog")
    public String addAnimal(HttpServletRequest request, HttpServletResponse response,
            @ModelAttribute("dogDto") @Valid DogDTO dogDto, BindingResult result) throws IOException {
        if (request.getMethod().equalsIgnoreCase("post") && !result.hasErrors()) {
            Dog dog = new Dog();
            dog.setName(dogDto.getName());
            dog.setAge(dogDto.getAge());
            dog.setDateOfBirth(dogDto.getBirthDay());
            dog.setWeight(dogDto.getWeight());
            dog.setTrained(dogDto.isTrained());
            dog.setOwnerName(dogDto.getOwnerName());
            animalDAO.addAnimal(dog);
            response.sendRedirect("http://localhost:8080/crud");
            return "home";
        }
        return "addDog";
    }

    @RequestMapping("addSnake")
    public String addAnimal(HttpServletRequest request, HttpServletResponse response,
            @ModelAttribute("snakeDto") @Valid SnakeDTO snakeDto, BindingResult result) throws IOException {
        if (request.getMethod().equalsIgnoreCase("post") && !result.hasErrors()) {
            Snake snake = new Snake();
            snake.setName(snakeDto.getName());
            snake.setAge(snakeDto.getAge());
            snake.setDateOfBirth(snakeDto.getBirthDay());
            snake.setWeight(snakeDto.getWeight());
            snake.setLength(snakeDto.getLength());
            snake.setVenomous(snakeDto.isVenomous());
            animalDAO.addAnimal(snake);
            response.sendRedirect("http://localhost:8080/crud");
            return "home";
        }
        return "addSnake";
    }

    // here is the core of my doubts: I wasn't able to configure the view-model communication
    // in a way that didn't force me to hard-type 3 different @ModelAttributes with 3 different object types
    @RequestMapping("{animal}/{name}/{id}")
    public String getAnimal(HttpServletResponse response, Model model, HttpServletRequest request,
            @PathVariable("animal") String name, @PathVariable("id") int id,
            @ModelAttribute("snakeDto") SnakeDTO snakeDto,
            @ModelAttribute("catDto") CatDTO catDto,
            @ModelAttribute("dogDto") DogDTO dogDto) throws IOException {
        if (request.getMethod().equalsIgnoreCase("post") && name.equalsIgnoreCase("snake")) {
            Snake an = (Snake) animalDAO.getAnimalById(id);
            an.setAge(snakeDto.getAge());
            an.setDateOfBirth(snakeDto.getBirthDay());
            an.setLength(snakeDto.getLength());
            an.setName(snakeDto.getName());
            an.setWeight(snakeDto.getWeight());
            an.setVenomous(snakeDto.isVenomous());
            animalDAO.updateAnimal(an);
            response.sendRedirect("http://localhost:8080/crud");
            return "home";
        }
        if (request.getMethod().equalsIgnoreCase("post") && name.equalsIgnoreCase("cat")) {
            Cat an = (Cat) animalDAO.getAnimalById(id);
            an.setAge(catDto.getAge());
            an.setDateOfBirth(catDto.getBirthDay());
            an.setBreed(catDto.getBreed());
            an.setName(catDto.getName());
            an.setWeight(catDto.getWeight());
            an.setFurColor(catDto.getFurColor());
            animalDAO.updateAnimal(an);
            response.sendRedirect("http://localhost:8080/crud");
            return "home";
        }
        if (request.getMethod().equalsIgnoreCase("post") && name.equalsIgnoreCase("dog")) {
            Dog an = (Dog) animalDAO.getAnimalById(id);
            an.setAge(dogDto.getAge());
            an.setDateOfBirth(dogDto.getBirthDay());
            an.setTrained(dogDto.isTrained());
            an.setName(dogDto.getName());
            an.setWeight(dogDto.getWeight());
            an.setOwnerName(dogDto.getOwnerName());
            animalDAO.updateAnimal(an);
            response.sendRedirect("http://localhost:8080/crud");
            return "home";
        }
        model.addAttribute("animalDetails", animalDAO.getAnimalById(id));
        return "details" + name;
    }
}

This is the corresponding view:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
</head>
<body>
<form method="post" methodAttribute="catDto">
Insert Cat's name <input type="text" name="name"/><br>
Insert Cat's age <input type="text" name="age"/><br>
Insert Cat's date of birth (dd-mm-yyyy) <input type="text" name="birthDay"><br>
Insert Cat's weight <input type="text" name="weight"/><br>
Insert Cat's breed <input type="text" name="breed"/><br>
Insert Cat's fur color <input type="text" name="furColor"/><br>
<input type="submit" value="add"/>
</form>
</body>
</html>
Code for my first crud app
java;spring;hibernate;crud
This is by no means a comprehensive review, just some little things that I would change.

- You are often injecting HttpServletRequest and HttpServletResponse as method arguments. This is not necessary and bloats the method signatures. You can just define two instance fields of those two classes and annotate them with @Autowired. Spring creates proxies for those classes and offers the correct instances to your request threads.
- Run a code formatter / cleaner on your code before committing! You should enable "format code on save" in your IDE, if you use one.
- You do not need to, and probably shouldn't, call request.getMethod().equalsIgnoreCase("post") the way you do: you can define the HTTP methods you want to accept in the @RequestMapping annotation via method = RequestMethod.POST. You can map multiple handler methods to the same path with different HTTP methods.
- If you decide to put a comment above a method, do it in the proper JavaDoc style.
- You should not use absolute redirects like response.sendRedirect("http://localhost:8080/crud"); use relative paths instead, since this would break if your application were to run on another server/port.
- Your entities do not use JPA constraints like nullable. Your animal's name should probably not be nullable, for example!
- You might want to implement equals and hashCode for your DTOs and entities, or let your IDE do it for you.
- You would benefit from using Spring Boot for applications like this. With Spring Boot you do not need to define applicationContext.xml and web.xml. It's very easy to use and does not require changes to applications on this level.
- You would benefit from using spring-data, which brings default implementations for the DAO pattern (the JpaRepository interface and its relatives).
- Your controllers might contain too much logic. They should call @Component/@Service bean methods that contain the DB logic. This serves separation of concerns and gives you code that can be called from elsewhere.
- Your entity method public void setId(int id) should probably not be public. When would you ever want to manipulate the entity's id? Let your JPA provider take care of it.
_unix.78481
I want a regex to match identifiers consisting of letters, digits and _, without double underscores. My current attempt:

^(?!_)(?!.*?_$)[a-zA-Z][a-zA-Z0-9_]+$

Example input:

Abdfsdfsdf__
1B2345_
v1__23456
__23456789
b12345-6789
1234567891
_fsdfdfsdf
v_fsdfsdf_fsdfdv_123
v__123
v134234_fsdfsd
123456
a1b2c3d4e5

Matched:

v1__23456
v_fsdfsdf_fsdfdv_123
v__123
v134234_fsdfsd
a1b2c3d4e5

How can I remove the rows v1__23456 and v__123 from the matches?
regex to match identifiers without double _
regular expression
^(?!_)            # don't start with _
(?!.*?_$)         # don't end with _
[a-zA-Z]          # the first character must be a letter
[a-zA-Z0-9_]+$    # after that, digits and underscores are ok

You've expressed in two different ways that the first character must not be _, but nothing here says anything about __ in the middle.

Using negative lookahead, it is simple to express "one or more alphanumeric characters or _, don't start with _ or a digit, don't end with _, and don't allow __ anywhere":

^(?![0-9_]|.*__|.*_$)[0-9A-Z_a-z]+$

Without negative lookahead (e.g. in awk or grep -E), you can split the pieces:

^[A-Za-z][0-9A-Za-z]*(_[0-9A-Za-z]+)*$

Start with a letter, then zero or more alphanumerics; then you can have underscores, but each one must be followed by one or more alphanumerics.
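As a sanity check, the lookahead approach ("don't start with a digit or _, no __ anywhere, don't end with _") can be exercised against the sample identifiers from the question. This is a sketch in Python; the pattern written out below is the intended single-regex form, and any PCRE-style engine should agree:

```python
import re

# Reject a leading digit or underscore, any double underscore, and a trailing underscore.
identifier = re.compile(r'^(?![0-9_]|.*__|.*_$)[0-9A-Za-z_]+$')

accepted = ["v_fsdfsdf_fsdfdv_123", "v134234_fsdfsd", "a1b2c3d4e5"]
rejected = ["v1__23456", "v__123", "Abdfsdfsdf__", "_fsdfdfsdf", "1B2345_", "b12345-6789"]

assert all(identifier.match(s) for s in accepted)
assert not any(identifier.match(s) for s in rejected)
```

The two previously unwanted matches, v1__23456 and v__123, are now rejected by the `.*__` alternative in the lookahead.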
_codereview.105329
I am using the following code to lazy load some social buttons on my personal blog. Before continuing, I would like to enumerate the dependencies first:

- jQuery 2.1.4;
- Font Awesome 4.1.0 (but it should work with any icon/font);
- Tested on the most recent Firefox, Chrome, Iceweasel and Safari.

So, the following HTML uses Font Awesome to present the Twitter and Facebook icons. It should be possible to perform the social actions even when no JavaScript is available, by clicking on the icons:

<li>
    <a id="tw-root" href="https://twitter.com/share" class="twitter-share-button"
       data-url="#example"
       data-text="Example to the CodeReview awesome QA site."
       data-counturl="#example"
       data-count="horizontal"
       data-via="ctwitterc" title="Tweet">
        <span class="fa-stack fa-lg">
            <i class="fa fa-circle fa-stack-2x"></i>
            <i class="fa fa-twitter fa-stack-1x fa-inverse"></i>
        </span>
    </a>
</li>
<li id="fb-root">
    <a id="fb-fa" href="https://www.facebook.com/sharer/sharer.php?u=#example"
       onclick="window.open('https://www.facebook.com/sharer/sharer.php?u=#example', 'facebook-share','width=580,height=296');return false;"
       title="Share on Facebook">
        <span class="fa-stack fa-lg">
            <i class="fa fa-circle fa-stack-2x"></i>
            <i class="fa fa-facebook fa-stack-1x fa-inverse"></i>
        </span>
    </a>
    <a class="fb-like"
       data-href="#example"
       data-layout="button_count"
       data-action="like"
       data-share="true"
       data-text="Example to the CodeReview awesome QA site."
       data-counturl="#example"></a>
</li>

(In a runnable snippet of this markup plus the scripts below, the Facebook button will not load, maybe due to security restrictions: SecurityError: The operation is insecure. You can still check the live site.)

Here is the link of my personal blog to a live example. That code should produce this result:

The following jQuery/JavaScript (if available) will:

- load the Twitter (or the Facebook) SDK on mouse enter.
- If the corresponding SDK is being loaded for the first time, the page might have multiple button IDs at the same time (#fb-root or #tw-root); if so, it will load all buttons at once, no matter which button was entered.
- If the corresponding SDK was already loaded and more button IDs appeared somehow (from ajax calls, for example), only the entered button will be loaded.

In both cases:

- The Twitter SDK will remove its Font Awesome icon automatically after it is loaded.
- The Facebook SDK will not remove its Font Awesome icon, but I explicitly coded it to do so (yes, the id="fb-fa" related stuff is a hack).

The buttons should load like this:

On touch devices, let's consider it untested (but the buttons should load too).

// Twitter JavaScript
// Inspiration source:
// http://www.paulund.co.uk/lazy-load-social-media
var tweetHover = function(e) {
    $(e).hover(
        function() { // mouse enter
            if ($(this).hasClass("share-enabled")) {
                // do nothing
            } else {
                if (typeof (twttr) != 'undefined') {
                    // will load just the entered #tw-root
                    twttr.widgets.load(this);
                    $(this).addClass("share-enabled");
                } else {
                    // will load all #tw-root, and it sucks
                    $.getScript('http://platform.twitter.com/widgets.js');
                    $(this).addClass("share-enabled");
                }
            }
        },
        function() { // mouse leave
        }
    );
};

$(document).ready(function() {
    tweetHover("#tw-root");
});

// Facebook JavaScript
// https://developers.facebook.com/docs/javascript/howto/jquery/v2.4

// some multilanguage support
var facebook_sdk = '//connect.facebook.net/en_US/sdk.js';
if (document.documentElement.lang = 'pt-BR') {
    facebook_sdk = '//connect.facebook.net/pt_BR/sdk.js';
};

var facebookHover = function(e) {
    $(e).hover(
        function() { // mouse enter
            if ($(this).hasClass("share-enabled")) {
                // do nothing
            } else {
                if (typeof (FB) != 'undefined') {
                    // will load just the entered #fb-root
                    FB.XFBML.parse(this);
                    $(this).addClass("share-enabled");
                    $(this).find("#fb-fa").remove();
                } else {
                    // will load all #fb-root, and it sucks too
                    $.getScript(facebook_sdk, function() {
                        FB.init({
                            appId: 'facebook-id-goes-here',
                            xfbml: true,
                            version: 'v2.4'
                        });
                    });
                    $(this).addClass("share-enabled");
                    $(document).find("[id=fb-fa]").each(function() { $(this).remove(); });
                }
            }
        },
        function() { // mouse leave
        }
    );
};

$(document).ready(function() {
    facebookHover("#fb-root");
});

I would appreciate any comments about the code style, and I really want to know if you have a better alternative in mind to do the same thing (or better, as you wish).
Lazy loading social buttons on mouse enter
javascript;jquery
Notes on the code style:

- Avoid deep nesting.
- Split high-level logic from low-level logic. This lets you see what a function does at a glance, and then dig deeper only if you need to. Code written this way is easier to understand. It also allows you to name the chunks of code with function names, rather than relying on comments. This splitting can be done fractally, at all levels. Think of a book organized into sections, sub-sections, and perhaps sub-sub-sections.
- Use function fnName() {} rather than var fnName = function() {} unless you have a good reason not to. It's shorter, and allows you to take advantage of JS's nice hoisting feature, whereby you can define a function after you use it. This lets you put implementation details below high-level logic.

As for the general approach, I'm not sure you need to do lazy loading. If you think the social buttons are important for your site, consider loading them as soon as the page loads. I know you are trying to respect your users' bandwidth, but I think the UX is worse with the lazy loading, because of the stutter and the button transformations. It's been a while since I did FB integration, but I don't remember having to load such a large SDK. You might see if there's a lighter-weight alternative. But I could be wrong on this point.

Below is a rewrite (untested) based on the principles above.

function loadFacebookWhenHovering(e) {
    var self = $(this);

    // The highest level of work.
    // Everything this function does is here to see in a single line.
    // It adds a hover handler to the given element.
    // Want to know the details of the handler? Keep reading down...
    $(e).hover(mouseEnterCallback, function() {});

    function mouseEnterCallback() { // mouse enter
        // use a guard clause instead of an empty if clause
        if (self.hasClass("share-enabled")) return;

        // These 2 lines contain all the high level logic
        // of the mouse enter callback
        var isFBLoaded = (typeof (FB) != 'undefined');
        isFBLoaded ? enableWhenFBIsLoaded() : loadFBAndEnable();
    }

    // These helper functions implement the high level logic for the callback
    function loadFBAndEnable() {
        $.getScript(facebook_sdk, function() {
            FB.init({
                appId: 'facebook-id-goes-here',
                xfbml: true,
                version: 'v2.4'
            });
        });
        self.addClass("share-enabled");
        $(document).find("[id=fb-fa]").each(function() { self.remove(); });
    }

    function enableWhenFBIsLoaded() {
        FB.XFBML.parse(self[0]);
        self.addClass("share-enabled");
        self.find("#fb-fa").remove();
    }
};
_codereview.123312
I have a piece of Clojure code that reverses a string recursively. It works, but it is too slow (I got the problem from Codewars, and it times out before validation completes).

The algorithm is:

take a string, e.g. "abc", and reverse it: (r "abc") -> "cba"
take the first position and reverse from there: "c" + (r "ba") -> "cab"
take the second position and reverse from there: "ca" + (r "b") -> "cab"
and so on...

First iteration:

(defn reverse-fun [s]
  (loop [r s
         p 0]
    (if (= p (count s))
      r
      (let [[b e] (split-at p r)
            e (reverse e)]
        (recur (apply str (concat b e)) (inc p))))))

However, it was too slow: I pass about 31 tests with this one before the timeout. Here is the fastest I have for now:

(defn reverse-fun [s]
  (apply str
         (loop [r (vec s)
                p 0]
           (if (= p (count r))
             r
             (let [b (subvec r 0 p)
                   e (rseq (subvec r p (count r)))]
               (recur (into [] (concat b e)) (inc p)))))))

This code passes about 72 tests before there is a timeout. How can I make this code faster?
Clojure reverse multiple times
performance;strings;functional programming;time limit exceeded;clojure
The intermediate reversals seem like a bit of a trick question and are not really necessary to solve this problem. They are a complicated way to describe reordering a sequence indexed 0 ... n by taking items n, 0, n-1, 1, n-2, 2, ..., alternating the end you take from each time. If you abandon the idea of creating the intermediate strings described, it becomes more about how to select the right character from the input string for each position:

(ns reversefun.core)

(defn reverse-fun [s]
  (->> (interleave (reverse s) s)
       (take (count s))
       (apply str)))
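To double-check the claim that the repeated suffix reversals reduce to picking characters alternately from both ends, here is a small standalone sketch (in Python rather than Clojure, purely for illustration; both function names are hypothetical):

```python
def repeated_reversals(s: str) -> str:
    """Literal version of the Codewars algorithm: reverse the suffix
    starting at position p, for p = 0, 1, ..., len(s) - 1."""
    r = s
    for p in range(len(s)):
        r = r[:p] + r[p:][::-1]
    return r


def reverse_fun(s: str) -> str:
    """Direct construction: interleave the reversed string with the
    original and keep the first len(s) characters (the same idea as
    the Clojure interleave/take/apply-str version)."""
    interleaved = [c for pair in zip(reversed(s), s) for c in pair]
    return "".join(interleaved[:len(s)])


# Both agree, e.g. "abc" -> "cab" and "abcd" -> "dacb".
for s in ["", "a", "ab", "abc", "abcd", "hello world"]:
    assert repeated_reversals(s) == reverse_fun(s)
```

The brute-force version performs the n suffix reversals literally, so agreement on a handful of inputs is good evidence that the direct construction is equivalent, and the direct version does O(n) work instead of O(n²).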
_softwareengineering.167859
According to Stephen Schach, Classical and Object Oriented Software Engineering, chapter 6:

"a module consists of a single block of code that can be invoked in the way that a procedure, function, or method is invoked"

This seems very vague and broad. Could anyone explain it in other, clearer words, and show an actual example of how to break a requirement into modules? Thanks.
What is actually a module in software engineering?
software;modules
null
_unix.321134
I have a Raspberry Pi that I have connected to our AD domain using realm. It is using sssd-ad to perform user authentication.

All accounts in the child domain to which the Pi is joined work. However, some accounts from the parent domain do not work, while others do. So far, all I am seeing are the following errors in /var/log/auth.log:

Nov 3 14:13:03 pi-ncb234 lightdm: pam_sss(lightdm:auth): authentication failure; logname= uid=0 euid=0 tty=:0 ruser= rhost= user=user_name@domain
Nov 3 14:13:03 pi-ncb234 lightdm: pam_sss(lightdm:auth): received for user user_name@domain: 10 (User not known to the underlying authentication module)

Are there any tools that will allow me to see how it is searching for user_name, which does exist?

Thanks
How to debug authentication failures?
raspbian;active directory;sssd
null
_codereview.8940
I wrote some code which divides a line at word boundaries so that each substring is no longer than MaxWidth. It works well, but it's very slow.

Pattern pattern = Pattern.compile("(.{1," + symbols + "}(\\b|\\s))"); // symbols - MaxWidth
Pattern pattern2 = Pattern.compile("\\s.*");
while ((line = in.readLine()) != null) { // reading file by lines
    String dopLine = "";
    if (pattern2.matcher(line).matches()) { // If a line begins with a space, it is the beginning of a paragraph and we need to add \n
        if (!tempDopLine.equals("")) { // tempDopLine - some part of the previous line, which is not a full screen
            tempStringBuffer.append(tempDopLine);
            tempStringBuffer.append("\n");
            lineCounter++;
            if (lineCounter == lines) {
                addPage(tempStringBuffer); // create page
                tempStringBuffer = new StringBuilder();
                lineCounter = 0;
                numberOfPages++;
            }
        }
        tempDopLine = "";
        dopLine = line;
    } else {
        dopLine = tempDopLine + " " + line; // if this line
    }
    Matcher matcher = pattern.matcher(dopLine); // divide the string into substrings
    HashMap<Integer, String> temp = new HashMap<Integer, String>();
    int i = 0;
    while (matcher.find()) {
        temp.put(i, matcher.group());
        i++;
    }
    for (i = 0; i < temp.size(); i++) {
        if (i < (temp.size() - 1)) {
            String tempL = temp.get(i);
            tempStringBuffer.append(tempL);
            tempStringBuffer.append("\n");
            lineCounter++;
            if (lineCounter == lines) {
                addPage(tempStringBuffer);
                tempStringBuffer = new StringBuilder();
                lineCounter = 0;
                numberOfPages++;
            }
        } else {
            tempDopLine = temp.get(i); // remember the last part of the string, to display it along with the next line
        }
    }
}
java;performance;strings
A few thoughts:

You do a while loop (while (matcher.find())) to find all matches, then you do a for loop to deal with them. I think that can be done in only one loop.

In your last for loop, you could remove the first if:

for (i = 0; i < (temp.size() - 1); i++) {
    String tempL = temp.get(i);
    tempStringBuffer.append(tempL);
    tempStringBuffer.append("\n");
    lineCounter++;
    if (lineCounter == lines) {
        addPage(tempStringBuffer);
        tempStringBuffer = new StringBuilder();
        lineCounter = 0;
        numberOfPages++;
    }
} // End for loop
tempDopLine = temp.get(temp.size() - 1);

Another suggestion: the HashMap temp could be just an array, because right now your code does a lot of autoboxing (from int to Integer) to add and to retrieve information from your HashMap.

Edit: here is a quick example of how to merge your two loops (note that matcher.group() may only be called after a successful matcher.find()):

String tempL = matcher.find() ? matcher.group() : "";
while (matcher.find()) {
    tempStringBuffer.append(tempL);
    tempStringBuffer.append("\n");
    lineCounter++;
    if (lineCounter == lines) {
        addPage(tempStringBuffer);
        tempStringBuffer = new StringBuilder();
        lineCounter = 0;
        numberOfPages++;
    }
    tempL = matcher.group();
}
tempDopLine = tempL;
_webmaster.74564
What is the syntax for showing an externally hosted image in a MediaWiki page? The MediaWiki help page doesn't seem to cover that scenario.

I've tried several things that don't work:

[http://example.com/image.jpg]
[[File:http://example.com/image.jpg]]
<img src="http://example.com/image.jpg">
How do you hot link an external image from a Media Wiki site?
images;mediawiki;hotlinking
Your example [http://example.com/image.jpg] is correct.

You will need to enable this by setting $wgAllowExternalImages to true in your configuration file. The default is false. Optionally, you can set $wgAllowExternalImagesFrom to allow exceptions while $wgAllowExternalImages remains set to false. You can also use $wgEnableImageWhitelist to allow exceptions based upon regular expressions. You will likely need to copy the appropriate line from DefaultSettings.php to LocalSettings.php.

Here are some links (in order):

http://www.mediawiki.org/wiki/Manual:LocalSettings.php
http://www.mediawiki.org/wiki/Manual:Configuration_settings
http://www.mediawiki.org/wiki/Manual:$wgAllowExternalImages
http://www.mediawiki.org/wiki/Manual:$wgAllowExternalImagesFrom
http://www.mediawiki.org/wiki/Manual:$wgEnableImageWhitelist
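For reference, the corresponding LocalSettings.php entries would look something like the sketch below. The whitelist values are placeholders, not values from the question, so adjust them to your wiki:

```php
# LocalSettings.php - allow external (hotlinked) images in wikitext

# Simplest option: allow all external image URLs
$wgAllowExternalImages = true;

# Alternative: keep the global switch off and whitelist URL prefixes
# $wgAllowExternalImages = false;
# $wgAllowExternalImagesFrom = array( 'http://example.com/' );

# Alternative: allow exceptions by regular expression
# (patterns are read from the "MediaWiki:External image whitelist" page)
# $wgEnableImageWhitelist = true;
```

After saving the file, plain URLs ending in an image extension, such as http://example.com/image.jpg written bare in wikitext, should render inline.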
_softwareengineering.181044
Possible Duplicate: "I've inherited 200K lines of spaghetti code, what now?"

Not long ago my company placed me in a team that deals with some of the most complex bugs that are in production. The thing is that almost all of these bugs are in legacy applications I am having a really difficult time understanding and debugging, and these are some of the reasons:

- Bad software design
- Lots of code duplication
- Misleading comments
- Bad names
- No documentation at all
- The creators of the software no longer work in the company
- Really big classes and methods, very badly programmed
- Bugs are very badly documented, and the operations team makes very poorly documented reports on the issues that occur.

It is very time consuming and frustrating. As a TDD and ATDD developer, I try to start by writing tests to triangulate and pinpoint the problem, but there are so many things that need to be mocked that it is very difficult. Also, the business analysts don't provide criteria for this software since they don't even know it themselves. The only thing they say is that it needs to be fixed.

I thought that maybe here I could find somebody with experience in legacy software that could give me some advice on how to deal with bugs in an environment like this. I feel I am doing software archeology when working with this. Also, it is very stressful, and this big amount of frustration makes me feel useless and unproductive, since it sometimes takes weeks to fix a bug. I have always worked in green-field development and this is the first time I am doing production support. I would really appreciate some advice.
How to understand and debug legacy software?
java;tdd;debugging;legacy
You have hit the nail on the head - what you have complained about is the definition of legacy software. Working on this class of software requires a different mindset to green-fields work. Your stress is a result of measuring your progress against unrealistic measures. Don't try to put today's ideology on yesterday's work. If it was not designed around TDD and unit testing, then retrofitting test-driven methodology is a major, expensive and difficult undertaking. As you are finding, shoehorning today's silver bullets into yesterday's failed silver bullets is far from the ideal way to work. If carefully thought through, with a clear understanding of goals and benefits, it can be worthwhile. However, it quickly devolves into negative returns. Go for the low-hanging fruit, and develop tests only if it is clear they will be used many times and will find many defects.

The best approach is to remove unrealistic expectations. Productivity will be low, code hard to analyze, and faults hard to fix. Unexpected side effects of changes are likely and, as you have found, regression testing is non-existent. Maintaining legacy code is slow and hard - that's why so many developers run off to green-fields projects. It can be rewarding, but only if you reset your expectations.

When you start work on a defect, stay focused on three things: what the code does now, not introducing unexpected side effects, and what it should do when fixed. You may find that code duplication is better than unexpected side effects... Without decent tests and requirements, introducing a regression is often the biggest sin of all. Always strive to leave the code in a better state than you found it, but be careful not to change its behavior except where required.