id | question | title | tags | accepted_answer |
---|---|---|---|---|
_unix.282128 | I am currently using Exceed to handle individual X windows being displayed back to my Windows machine, and this works flawlessly. I'm using Windows as my window manager, but I have also experimented with various X window managers (i.e., fluxbox, twm, mwm, gnome-wm) to no avail. The issue I'm having is that the graphics displayed for individual applications are poor compared to what I see when forwarding a full gnome-Desktop/gnome-session over the same Exceed connection. In fact, if I run an instance of gnome-session over X11 I am able to launch applications with their standard, high-quality look, as if directly connected to the machine. However, if I just use a window manager and display back single applications (not a full desktop/session), the quality reverts to a poorer, more basic design. Is there any way to launch gnome-session against a single application instead of having to create a whole desktop instance? Or a way to use its graphics engine to render the application as it would be in a normal session/desktop environment? I thought gnome-wm would do the trick, but no luck. | Best Graphics over X11 Forwarding | centos;x11;gnome;graphics;forwarding | null |
_hardwarecs.2634 | I own an i5 Plus. I want something more fancy. I am considering the Samsung Gear and the Microsoft Band. The Microsoft Band's battery only lasts 2 days, the Samsung Gear only works with a Samsung phone, and the iPhone watch only works with an iPhone. My i5 Plus is very good; however, it looks so dull that it doesn't impress my business partners. I want something like the i5 Plus but with a fancy colour. I'm not sure if I want an always-on display. Battery life is kind of a cool feature for me. I also like the fact that the i5 doesn't have an unusual charger and can be charged from any USB port. The i5 is great; I just want something more fancy. I am considering the Microsoft Band 2; however, it has a feature I don't need, namely GPS. I definitely like the heart rate monitor. | Recommend me a good high end smart band | smart device | Have you considered the Motorola Moto 360? It's got a coloured screen and looks pretty stylish in my opinion. According to reviews you need nightly charging; however, considering you charge your phone every day anyway, I don't see why that would be much of a problem. |
_softwareengineering.262545 | In Big O notation, is allocating an array of N elements O(1) or O(n)? For example, in C#, if I allocate an array like this: int[] a = new int[10] and then display it, I get: {0,0,0,0,0,0,0,0,0,0} | Big O notation allocate array of N element | array;big o;allocation | Normally, the size of an array has no effect on the complexity of the allocation itself. AFAIK, an array is internally a pointer to the address where the array begins, plus hidden fields for element size and element count, so it would be O(1). At least this is the case in Object Pascal, though I am not sure about C# or other languages. But finding a chunk of memory that fits your array has some higher complexity, which depends on the size at some point but is more dependent on how the active memory manager works: Time complexity of memory allocation. |
_unix.57601 | I'm using a program called node-webkit, but I can't start the program without specifying the full path to the executable file. Is there any way to associate a command (such as node-webkit) with an executable file on Linux, so that the full path to the file won't need to be specified? | Create a command for a Linux executable file | path;executable | A third option, perhaps the least intrusive, is to add an alias in your .bashrc file. This file is a set of options for bash which it reads every time an instance of bash is started. Open your .bashrc file with your editor, e.g. gedit ~/.bashrc, and add the line alias node-webkit=/path/to/node-webkit to the bottom of the file. Then run source ~/.bashrc to be able to use the alias as if it were a command. The way this works is like #define in C/C++: when you type node-webkit, it will be replaced with the right-hand side of the alias definition, which here is the full path to the executable. |
_opensource.4498 | I have a closed-source project, and I am the copyright holder. Can I incorporate parts from my closed-source project into an open-source project without running into legal issues when commercially using my closed-source project? Please note that the part incorporated into the open-source project would be a derivative of a small part of the closed-source project. | Incorporating closed-source derivative into GPL code | gpl 3;closed source | null |
_webmaster.77939 | I'm trying to use a domain name I have on Namecheap with Heroku. Heroku's documentation advised a CNAME alias. In the past, I've gotten other domains on the same service (Namecheap) to work with Heroku, but I can't get this domain to work. I followed Heroku's instructions and imitated the way the other domains are set up. Using Namecheap's web tool, I changed All Host Records. In that section, at the top of the page, I have: @ -> http://www.example.com (URL Redirect) and www -> example.herokuapp.com (CNAME alias). In Heroku's app settings, I can see that www.example.com is a domain of example.herokuapp.com, which I configured with the command line. When I go to www.example.com, however, I get The server at page www.www.example.com/?from=@ is not responding. Any ideas how to fix it? I already tried manually changing the DNS to Google's servers, as well as using just about every variation of my domain. The additional www and /?from=@ seem to be a clue, but I'm not sure what's causing it. To be clear, my objective is running my Heroku app through www.example.com. | Heroku CNAME redirect leads to www.www | redirects;url;heroku | For anyone who has this issue come up, changing the DNS to v1 using Namecheap's online interface and then setting the @ row to blank fixed the issue. |
_unix.157254 | Macbookpro Mountion Lion 10.8.5I am not new to technology or programming but relatively new to Web Development.My Unix is rusty.Issue. I cannot get GIT to execute commands. I have been researching and trouble shooting for 2 days.Some suggestions I have tried are listed at the end.Below will just be the commands and console output which may clearly identify the issue to people familiar with this issue.I am following a course online.I installed GIT from git-scm.com$ which git/usr/local/git/bin/git$ echo $PATH/usr/local/git/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/$ git --versiongit version 2.0.1$ git init new_repositorydyld: lazy symbol binding failed: Symbol not found: ___strlcpy_chkReferenced from: /usr/local/git/bin/gitExpected in: /usr/lib/libSystem.B.dylibTrace/BPT trap: 5$ Suggestions and Points I took from researching.A suggestion is that the version of GIT is a mismatch to my MacI am trying to uninstall this way.In the GIT.pkg there is an uninstall.sh file$ sudo sh uninstall.shdyld: DYLD_ environment variables being ignored because main executable (/usr/bin/sudo) is setuid or setgidPassword:sh: uninstall.sh: No such file or directoryLooks like the uninstall.sh was not copied when I installed GITI tried these commandsrm -rf /usr/local/git etc/paths.d/git rm /etc/manpaths.d/gitpermission deniedA suggestion to add/modify the path in my .bash_profile with these different options and this did not change anything:PATH=$PATH:/usr/local/binexport PATHorLD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/libexport LD_LIBRARY_PATHorexport PATH=/usr/local/git/bin:$PATHHow can I uninstall GIT ? | GIT on 'mountain lion' will not execute commands | path;git;macintosh | null |
_reverseengineering.12518 | I am debugging a 32-bit program on a 64-bit MS Windows 7 using IDA Pro 6.8 as seen in the image below:The instruction highlighted in the trace window (upper-left part of screen-shot) is supposed to MOV a word from some memory address in the .text segment (at the address given by the EDX register), into the EBX register.EDX = 0x013D4021 and the bytes stored at this address are 50 53 51 52, shown in the HexView of IDA in the lower half of the screen-shot above. Therefore, after executing the highlighted instruction mov ebx, [edx] I was expecting that EBX = 0x52515350. However, as you can see in the Result column of the trace window this is not true because EBX = 0x525153CC. Can anyone explain why the least significant byte in EBX is equal to CC instead of 50? Is it a bug in IDA or is it caused by the OS?NOTE: I tried the same program with IDA Pro 6.9 and encountered the same behavior.UPDATE: If you also have this issue and still want to debug the program, use hardware breakpoints. Hardware breakpoints do not modify the code like in the example above. IDA Pro allows enabling hardware breakpoints: hex-rays.com/products/ida/support/idadoc/1407.shtml | Unexpected memory value MOVed from text segment to register in Windows x86 32-bit program | ida;windows;debugging;x86;memory | CC is a single-byte encoding of int 3, which is the standard way of breaking to the debugger. In particular, debuggers often use it for break points and for single-stepping: they simply replace the first instruction byte with CC and wait for the interrupt. Then they write back the original instruction byte.The hexdump of the memory area around [edx] definitely looks like code, and the bytes loaded into ebx look like push opcodes. So it seems reasonable to suppose that either IDA is playing around with int 3 or someone else does... If your target program is aliasing memory then this could explain the whole confusion. |
_webapps.60445 | I have a Google Spreadsheet with a list of emails. I want to build a routine that send email automatically to those email addresses. I also want to attach a PDF to this email. The PDF file is located in my Google Drive.When running the following script, I'm getting the following error:TypeError: Cannot find function getAs in object FileIterator. (line 21, file Code)I'm new to Google Spreadsheets, please can you help me out?This is the code:// This constant is written in column C for rows for which an email// has been sent successfully.var EMAIL_SENT = EMAIL_SENT;function sendEmails2() { var sheet = SpreadsheetApp.getActiveSheet(); var startRow = 2; // First row of data to process var numRows = 1; // Number of rows to process // Fetch the range of cells A2:B3 var dataRange = sheet.getRange(startRow, 1, numRows, 2) // Fetch values for each row in the Range. var data = dataRange.getValues(); for (var i = 0; i < data.length; ++i) { var row = data[i]; var emailAddress = row[0]; // First column var message = row[1]; // Second column var emailSent = row[2]; // Third column if (emailSent != EMAIL_SENT) { // Prevents sending duplicates var subject = Sending emails from a Spreadsheet; var file = DriveApp.getFilesByName('test123.pdf') MailApp.sendEmail(emailAddress, subject, message, { attachments: [file.getAs(MimeType.PDF)], name: 'Automatic Emailer Script' }); sheet.getRange(startRow + i, 3).setValue(EMAIL_SENT); // Make sure the cell is updated right away in case the script is interrupted SpreadsheetApp.flush(); } }} | Send an email with attachment using Google Apps Script | google spreadsheets;google drive;google apps script | null |
_unix.5453 | Using bash, what is the easiest way to 'replace' a given part of the current path with something else? If my current path is of the form /xxxxx/foo/yyyyy, how can I jump to the /xxxxx/bar/baz/yyyyy directory with the shortest command? | Quickest way to change dir from /xxxxx/foo/yyyyyy to /xxxxx/bar/yyyyyy | bash | You can leverage a shell function to provide you this ability as needed:change() { path=`pwd`; cd `echo $path | sed s/$1/$2/`; }Which would be called from /foo/bar/ as:change bar gaziPlease note that the 's are not required for this example, but would be required for special strings such as directories with space character(s) in the name. |
_softwareengineering.337137 | I know it is best practice to split out your global requirements, which is something I do when documenting in Confluence. However, when it comes to grooming with the development team, I don't really know how best to incorporate these into our user stories in Jira. The obvious approach would, I guess, be to add them as acceptance criteria on each user story, but that defeats the purpose of making them global in the first place. E.g., for a checkout project: Global: all markup must use the styles provided in the style guide. Req: Reset password - As a customer who has forgotten his password, I want to be able to reset my password, so that I can proceed to purchase. Obviously, when connecting the functionality to the provided design (html/css), the developers need to ensure it remains fully responsive (details in html/css), but I would prefer not to spell it out in each user story. How do I deal with this? | How do you include global requirements? | agile;scrum;user story;jira | null |
_computergraphics.4151 | I have a WebGL circuit simulator. One of the problems it has is that, due to using quite a lot of intermediate float textures as it simulates, it doesn't work on various mobile devices. They only support byte textures.My intended solution to this problem is to encode the high-precision (i.e. 32-bit) floats as bytes. Every output float is packed into a nearly-IEEE format (I put the sign bit at the other end to avoid a few shifts, I don't do denormalized values, and I don't do infinities/NaNs). Similarly, every input is unpacked before being used.I have found various blog posts and answers related to this task out on the internet (example 1, example 2, example 3), but I haven't found any that work properly on all finite non-denormalized floats.The problem I'm running into is precision. I want to round trip the floats without introducing any error, but I can't seem to make a shader that preserves all 23 bits of the mantissa. There always seems to be some rounding on some machine that loses the last bit, though I can perturb the cases where the rounding happens and it happens differently on the various machines I've tested on.Here is my packing method:vec4 packFloatIntoBytes(float val) { if (val == 0.0) { return vec4(0.0, 0.0, 0.0, 0.0); } float mag = abs(val); float exponent = floor(log2(mag)); // Correct log2 approximation errors. exponent += float(exp2(exponent) <= mag / 2.0); exponent -= float(exp2(exponent) > mag); float mantissa; if (exponent > 100.0) { // Not sure why this needs to be done in two steps for the largest float to work. // Best guess is the optimizer rewriting '/ exp2(e)' into '* exp2(-e)', // but exp2(-128.0) is too small to represent. mantissa = mag / 1024.0 / exp2(exponent - 10.0) - 1.0; } else { mantissa = mag / float(exp2(exponent)) - 1.0; } float a = exponent + 127.0; mantissa *= 256.0; float b = floor(mantissa); mantissa -= b; mantissa *= 256.0; float c = floor(mantissa); mantissa -= c; mantissa *= 128.0; float d = floor(mantissa) * 2.0 + float(val < 0.0); return vec4(a, b, c, d) / 255.0;}And here's my unpacking method:float unpackBytesIntoFloat(vec4 v) { float a = floor(v.r * 255.0 + 0.5); float b = floor(v.g * 255.0 + 0.5); float c = floor(v.b * 255.0 + 0.5); float d = floor(v.a * 255.0 + 0.5); float exponent = a - 127.0; float sign = 1.0 - mod(d, 2.0)*2.0; float mantissa = float(a > 0.0) + b / 256.0 + c / 65536.0 + floor(d / 2.0) / 8388608.0; return sign * mantissa * exp2(exponent);}This method is close. It works on my laptop, as far as I can tell. But it doesn't work on my Nexus tablet. For example, the float -0.20717763900756836 should be encoded as [124, 168, 76, 193]. When I unpack that then repack it on the Nexus tablet the output is one ulp lower: [124, 168, 76, 191] (which encodes -0.2071776[5390872955]). Close, but I want perfect.Mostly I'm at a loss trying to figure out where the precision is being destroyed in this method. Changes almost seem to have random effects, like replacing x * exp2(n) with x / exp2(-n) might fix an error in one place but introduce an error in another place.Is there any exact way to pack floats into bytes, without losing precision?Some example values that such a method should work on:var testValues = new Float32Array([ 0, 0.5, 1, 2, -1, 1.1, 42, 16777215, 16777216, 16777218, 0.9999999403953552, // An ulp below 1. 1.0000001192092896, // An ulp above 1. Math.pow(2.0, -126), // Smallest non-denormalized 32-bit float. 0.9999999403953552 * Math.pow(2.0, 128), // Largest finite 32-bit float. 
Math.PI, Math.E]); | WebGL packing/unpacking functions that can roundtrip all typical 32-bit floats | webgl | null |
_softwareengineering.233733 | I recently started at a new job. The existing system works OK but is poorly designed and hard to maintain; they are planning to rebuild it in MVC and I fear it will be much worse (not because of MVC). I want to discourage the lead dev from using an anti-pattern. Existing issues include: hard-coded strings in the business logic which are also tied to the interface (e.g. if (x.CustomerType.Contains(Shipping Customer)) foobar(); is everywhere); some classes map to dbTables directly (which is good), but because the software has to be easy to add features to, there is a Sharepoint-esque DynamicData table for storing new fields that the customer may require, a la [FieldName] [FieldValue] [FieldType]; refusal to use Linq, but chaining lambda queries for about 2 page widths; exceptions not explicitly thrown or rethrown and not handled properly (the client can see the call stack because it is just dumped into a popup dialog); refusal to use var, but loving the idea of dynamic everything. What is the best way to go about convincing my lead dev to avoid these horrible methods and not put everything in the generic key-value-pair table in version 2.0? (Considering I am new here and the team lead has been here for many years.) | How can I explain this is an anti-pattern? | c#;maintenance;patterns and practices;anti patterns;technical debt | What you are really talking about are not anti-patterns, but smells. Your question covers multiple completely separate issues, and if you google enough you should find enough material to argue against your boss. There is also the problem of persuading your boss that those are actual problems and need to be addressed. Now, for your problems: Having xxxType in a class is a clear code smell and implies that there should be some kind of object hierarchy; refactoring is highly encouraged to create a good OOP design, and if the class can have multiple types, the Strategy pattern is a good fit. The second is commonly called the Entity-attribute-value model, and it has its merits and problems; it is up to you and your boss to figure out whether it fits your application. Note that lambda chains are also part of LINQ, and while using LINQ query syntax might create more readable code, some prefer not to use it. There is tons of discussion on how exception handling should be done; Google is your friend. var and dynamic are two completely different things, and when one uses dynamic, the person should be aware of the downsides it brings. Again, Google should help. |
_codereview.71550 | The below code takes an incoming sequence of objects, checks each object's type and then yields the object in a specific order. This is all taking place in an active pattern. As I am still new to F#, I would like to know if there is a better or more efficient way of accomplishing this task. type ArticleObj = | Submission | Photos | ReviewRating | BusinessReviewRating | ArticleTaglet (|OrderArticleObj|_|) (objSeq : seq<obj>) = // Create DU array let articleParts = [| Submission; Photos; ReviewRating; BusinessReviewRating; ArticleTag |] // create ordered array of objects let orderedSeq = seq{ for row in articleParts do for row2 in objSeq do match row with | Submission -> let isValid = match row2 with | :? ArticleSubmission as Sub -> true | _ -> false if isValid then yield row2 | Photos -> let isValid = match row2 with | :? seq<Photos> as Photo -> true | _ -> false if isValid then yield row2 | ReviewRating -> let isValid = match row2 with | :? ArticleReviewRating as RevRating -> true | _ -> false if isValid then yield row2 | BusinessReviewRating -> let isValid = match row2 with | :? BusinessReviewRating as BizRev -> true | _ -> false if isValid then yield row2 | ArticleTag -> let isValid = match row2 with | :? seq<ArticleTags> as ArtTag -> true | _ -> false if isValid then yield row2 } if not (Seq.isEmpty orderedSeq) then Some(orderedSeq) else None | Ordering a sequence of objects | f# | You could define a sorted list of the types and use that to sort the input. Here's what a function that does that would look like.let sortArticles : seq<obj> -> seq<obj> = let sorted = [ typeof<ArticleSubmission> typeof<seq<Photos>> typeof<ArticleReviewRating> typeof<BusinessReviewRating> typeof<seq<ArticleTags>> ] Seq.sortBy (fun o -> sorted |> List.findIndex ((=) (o.GetType()))) |
_webmaster.8447 | First off, my group and I don't have any experience with databases or anything of that sort, only programming (not web programming), so this post is just me wondering what things I should start researching, and possibly asking for a referral to a hosting company. The abstract idea is to have a list of things, each categorized, and each item with user-submitted reviews (made by users who are signed up with the website). Would absolutely everything be stored in a SQL database (the long text reviews, for example)? Does anyone have any suggestions on some 'web frameworks' we could use to jumpstart us? What should our absolute first step be? (I was thinking about first designing the basic database, so we have something to work around.) Should we worry about which host we choose right now; any recommendations? (Would it be a trivial task to switch hosts in the future?) Thanks again, any help is appreciated! | Looking to create a website ... need assistance on what technologies to use! | web development | null |
_cs.12090 | So, I have a book here which has an example of a context-sensitive grammar for the famous language $0^n1^n2^n$: $$\begin{align} S &\rightarrow 0BS2 \mid 012 \\ B0 &\rightarrow 0B \\ B1 &\rightarrow 11 \end{align}$$ I agree that the above works, but what is wrong with just saying $S \rightarrow 0S12 \mid \epsilon$? The above also generates the same number of $0$s as $1$s and $2$s. | Can this grammar be simplified? | formal languages;formal grammars;context sensitive | null |
_cogsci.6479 | I'm creating a game, and I would like to know what research I can consult to make it more addictive. The game is a casual one, like Candy Crush, Angry Birds, etc. | What does current research tell us about addictive behaviors in games? | motivation;addiction | null |
_webmaster.71193 | The main problem is that I have no previous design samples, like JPG or PSD files, from which the site can be sliced again, so I need the previous look and feel of the site from before the hacking. Otherwise, as I have no backup, knowing how to recover the background image files for this hacked website would also be fine. I have spent my whole day searching for a way to recover my theme files, especially the background image files which were used by the CSS files. I've tried online cache solutions so far but with no luck. For example, the following link shows only my CSS file, but I cannot recover my image files: http://web.archive.org/web/20141019170905/http://www.drinkfreshlysqueezed.com/wp-content/themes/manifest_v1.1/style.css I have also used http://www.viewcached.com/ but it is the same. If I use the Google cache link http://webcache.googleusercontent.com/search?q=cache:http://www.drinkfreshlysqueezed.com then it shows me the hacked site for now (though this is not a problem, as it will stay a few more days until I fix and update my site!). The solutions described in the following links are not working for me: How to recover a website from google cache?, https://webapps.stackexchange.com/questions/15633/how-to-modify-a-url-to-get-a-google-cached-version-of-page, https://webapps.stackexchange.com/questions/27414/how-to-use-googles-web-cache-to-view-a-page | How to see previous look and feel from cached versions of a hacked website? | google search console;background image;google cache;domain hacks;disaster recovery | null |
_datascience.9612 | I have a dataset containing data on temperature, precipitation and soybean yields for a farm for 10 years (2005 - 2014). I would like to predict yields for 2015 based on this data.Please note that the dataset has DAILY values for temperature and precipitation, but only 1 value per year for the yield, since harvesting of crop happens at end of growing season of crop.I want to build a regression or some other machine learning based model to predict 2015 yields, based on a regression/some other model derived by studying the relation between yields and temperature and precipitation in previous years.As per, Building a machine learning model to predict crop yields based on environmental data, I am using sklearn.cross_validation.LabelKFold to assign each year the same label.The question is that since I have a single target value per year, do I need to interpolate to fill in target values for all the other days of the year? Should I just use the same target value for each day of the year? | Assigning values to missing target vector values in scikit-learn | python;scikit learn;pandas | The model likely won't have much predictive power if the input is a single day. No weather patterns longer than one day can be captured that way.Instead you should aggregate the days together. You can come up with different features that describe your larger, aggregated unit of time (months, year). For example mean precipitation is a very simple one. Binning the data and using counts within those bins would also work.More advanced options would roll the time all the way up to a full year and learn a feature set at that level. |
_unix.179945 | I have little problem with my two ports USB 2.0. They don't work properly. If something (like mouse or keyboard) is connected on boot it's working but when I re-plug this device he often don't want to work. On Windows all works fine.Laptop: MSI GE60-2PE 640XPLSystem: Linux Mint 17.1Outputs:lsusbBus 002 Device 008: ID 1770:ff00 Bus 002 Device 002: ID 8087:8000 Intel Corp. Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 001 Device 005: ID 8087:07dc Intel Corp. Bus 001 Device 039: ID 046d:c52b Logitech, Inc. Unifying ReceiverBus 001 Device 002: ID 8087:8008 Intel Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 003 Device 007: ID 1532:0021 Razer USA, Ltd Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hublspci00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller (rev 06)00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05)00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05)00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05)00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5)00:1c.3 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 (rev d5)00:1c.4 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 (rev d5)00:1c.5 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #6 (rev d5)00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05)00:1f.0 ISA bridge: Intel Corporation HM87 Express LPC Controller (rev 05)00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05)01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 860M] (rev a2)03:00.0 Ethernet controller: Qualcomm Atheros Killer E2200 Gigabit Ethernet Controller (rev 13)04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. 
Device 5249 (rev 01)05:00.0 Network controller: Intel Corporation Wireless 3160 (rev 83)dmesg | grep USB [16334.667461] usb 1-1.2: USB disconnect, device number 15[16338.454165] usb 1-1.2: new full-speed USB device number 16 using ehci-pci[16338.862356] usb 1-1.2: new full-speed USB device number 17 using ehci-pci[16339.282564] usb 1-1.2: new full-speed USB device number 18 using ehci-pci[16339.616843] usb 1-1.2: New USB device found, idVendor=1532, idProduct=001f[16339.616846] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0[16354.657326] usb 1-1.2: USB disconnect, device number 18[16358.537997] usb 3-2: new full-speed USB device number 5 using xhci_hcd[16358.555730] usb 3-2: New USB device found, idVendor=1532, idProduct=001f[16358.555738] usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0[16368.024510] usb 3-2: USB disconnect, device number 5[16370.525259] usb 3-1: new full-speed USB device number 6 using xhci_hcd[16370.543091] usb 3-1: New USB device found, idVendor=1532, idProduct=001f[16370.543095] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0[16370.544550] hid-generic 0003:1532:001F.0019: input,hidraw3: USB HID v1.11 Mouse [Razer Razer Naga Epic] on usb-0000:00:14.0-1/input0[16370.545223] hid-generic 0003:1532:001F.001A: input,hidraw4: USB HID v1.11 Keyboard [Razer Razer Naga Epic] on usb-0000:00:14.0-1/input1[16519.936479] usb 3-1: USB disconnect, device number 6[16524.257417] usb 1-1.2: new full-speed USB device number 19 using ehci-pci[16524.609576] usb 1-1.2: new full-speed USB device number 20 using ehci-pci[16524.909786] usb 1-1.2: new full-speed USB device number 21 using ehci-pci[16525.390047] usb 1-1.2: new full-speed USB device number 22 using ehci-pci[16525.578107] hub 1-1:1.0: unable to enumerate USB device on port 2[16531.946007] usb 1-1.2: new full-speed USB device number 23 using ehci-pci[16532.454324] usb 1-1.2: new full-speed USB device number 24 using ehci-pci[16532.802429] usb 1-1.2: new full-speed USB device number 25 using ehci-pci[16533.069479] usb 1-1.2: New USB device found, idVendor=1532, idProduct=001f[16533.069482] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0[16549.944299] usb 1-1.2: USB disconnect, device number 25[16553.218730] usb 1-1.2: new full-speed USB device number 26 using ehci-pci[16553.616086] usb 1-1.2: New USB device found, idVendor=1532, idProduct=0021[16553.616096] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0[16553.698891] hid-generic 0003:1532:0021.001B: input,hidraw3: USB HID v1.11 Mouse [Razer] on usb-0000:00:1a.0-1.2/input0[16560.195433] usb 1-1.2: USB disconnect, device number 26[16569.256274] usb 3-1: new full-speed USB device number 7 using xhci_hcd[16569.274506] usb 3-1: New USB device found, idVendor=1532, idProduct=0021[16569.274515] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0[16569.277265] hid-generic 0003:1532:0021.001C: input,hidraw3: USB HID v1.11 Mouse [Razer Razer Naga Epic Dock] on usb-0000:00:14.0-1/input0[16569.279406] hid-generic 0003:1532:0021.001D: input,hidraw4: USB HID v1.11 Keyboard [Razer Razer Naga Epic Dock] on usb-0000:00:14.0-1/input1[16825.447012] usb 1-1.1: USB disconnect, device number 3[16834.361139] usb 1-1.1: new full-speed USB device number 27 using ehci-pci[16834.649436] usb 1-1.1: new full-speed USB device number 28 using ehci-pci[16834.933712] usb 1-1.1: new full-speed USB device number 29 using ehci-pci[16835.150017] usb 1-1.1: new full-speed USB device number 30 using 
ehci-pci[16835.305958] hub 1-1:1.0: unable to enumerate USB device on port 1[16846.918988] usb 1-1.2: new full-speed USB device number 31 using ehci-pci[16847.211267] usb 1-1.2: new full-speed USB device number 32 using ehci-pci[16847.491384] usb 1-1.2: new full-speed USB device number 33 using ehci-pci[16847.711794] usb 1-1.2: new full-speed USB device number 34 using ehci-pci[16847.859780] hub 1-1:1.0: unable to enumerate USB device on port 2[16858.495475] usb 3-2: new full-speed USB device number 8 using xhci_hcd[16858.513896] usb 3-2: New USB device found, idVendor=046d, idProduct=c52b[16858.513905] usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0[16858.513910] usb 3-2: Product: USB Receiver[16858.519636] logitech-djreceiver 0003:046D:C52B.0020: hiddev0,hidraw0: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:14.0-2/input2[16858.529910] logitech-djdevice 0003:046D:C52B.0021: input,hidraw1: USB HID v1.11 Keyboard [Logitech Unifying Device. Wireless PID:4003] on usb-0000:00:14.0-2:2[16863.584386] usb 3-2: USB disconnect, device number 8[16874.084629] usb 1-1.2: new full-speed USB device number 35 using ehci-pci[16874.373177] usb 1-1.2: new full-speed USB device number 36 using ehci-pci[16874.657421] usb 1-1.2: new full-speed USB device number 37 using ehci-pci[16874.873550] usb 1-1.2: new full-speed USB device number 38 using ehci-pci[16875.029590] hub 1-1:1.0: unable to enumerate USB device on port 2[16881.873228] usb 3-2: new full-speed USB device number 9 using xhci_hcd[16881.891818] usb 3-2: New USB device found, idVendor=046d, idProduct=c52b[16881.891828] usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0[16881.891833] usb 3-2: Product: USB Receiver[16881.897882] logitech-djreceiver 0003:046D:C52B.0024: hiddev0,hidraw0: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:14.0-2/input2[16881.903672] logitech-djdevice 0003:046D:C52B.0025: input,hidraw1: USB HID v1.11 Keyboard [Logitech Unifying Device. Wireless PID:4003] on usb-0000:00:14.0-2:2[17131.199699] usb 3-2: USB disconnect, device number 9[17136.516909] usb 1-1.2: new full-speed USB device number 39 using ehci-pci[17136.807238] usb 1-1.2: New USB device found, idVendor=046d, idProduct=c52b[17136.807242] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0[17137.221868] logitech-djreceiver 0003:046D:C52B.0027: hiddev0,hidraw0: USB HID v1.11 Device [Logitech] on usb-0000:00:1a.0-1.2/input2dmesg after replug device:[18116.850679] usb 1-1.2: USB disconnect, device number 39[18116.852812] logitech-djreceiver 0003:046D:C52B.0027: can't reset device, 0000:00:1a.0-1.2/input2, status -32[18117.818878] usb 1-1.2: new full-speed USB device number 40 using ehci-pci[18118.128109] usb 1-1.2: New USB device found, idVendor=046d, idProduct=c52b[18118.128114] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0[18118.128117] usb 1-1.2: Product: USB Receiver[18118.128118] usb 1-1.2: Manufacturer: Logitech[18118.245108] usbhid 1-1.2:1.0: can't add hid device: -71[18118.245136] usbhid: probe of 1-1.2:1.0 failed with error -71[18118.468629] logitech-djreceiver 0003:046D:C52B.0029: hiddev0,hidraw0: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:1a.0-1.2/input2 | USB 3.0 works always, USB 2.0 works sometimes | linux mint;usb | null |
_webapps.42347 | I recently got an app request from a friend on facebook. I don't use that app, or any app for that matter, and I wanted to delete it. This not being my first time, I hit the small x button next to it, expecting it to disappear, but it didn't. Confused, I pressed it again, but once again, it didn't disappear. I then went to the app center and tried to disable the request from there, but it didn't say that I had received an app recently. I, then, went and unfriended the person who sent me the request, hoping that the app itself would disappear. I checked, and it didn't. I went into my Account Settings on my facebook, went to Notifications, went to Apps and disabled all apps. I checked, and the persistent notification was still there. I believe the information that I've given to you so far has demonstrated how difficult it is to delete this app, and how badly I want it gone. If ANYONE has ANY idea about how to make this request disappear, I'll probably love you forever. Thanks! | Deleting Facebook App Requests | facebook | null |
_unix.107827 | I want to disable the menu in Linux Mint which shows when you press ALT+1. I can't even capture the screen when the menu is showing, so I've attached a photo. I tried everything and didn't find an answer on how to disable it. Any suggestions? EDIT: The solution posted below by @tohuwawohu works perfectly on Linux Mint 15 and 16, but doesn't work on my computer with Mint 13 (Maya). I'm still looking for a way to disable the menu. | How to disable menu under ALT+1 binding in Linux-Mint | keyboard shortcuts;linux mint | Open exactly that menu -> System -> Preferences -> Keyboard Shortcuts -> Desktop -> Show the panel's main menu, and use backspace to delete the shortcut (or assign any other shortcut). |
_webmaster.10280 | I've got a site that includes a blog, amongst other things. I'm adding the link tag for autodiscovery of the RSS feed, and there's only one feed on the site. Should it live on every page (i.e. should I put it in the base template)? Or should it just live on the blog index page and/or individual blog post pages? | RSS feed on which pages? | rss;feeds | Put it on every page. Your goal is to get people to subscribe to your feed, and you want to take advantage of every opportunity to get them to do so. Since all that tag does (in the major browsers) is put the RSS icon in the address bar, it's hardly intrusive. So put it in the <head> of every page and maximize the chances of someone subscribing. |
_softwareengineering.93770 | When developing software, I often have a centralised 'core' library containing handy code that can be shared and referenced by different projects. Examples: a set of functions to manipulate strings, commonly used regular expressions, and common deployment code. However, some of my colleagues seem to be turning away from this approach. They have concerns such as the maintenance overhead of retesting code used by many projects once a bug is fixed. Now I'm reconsidering when I should be doing this. What are the issues that make using a 'core' library a bad idea? | When is a 'core' library a bad idea? | code reuse;libraries | null |
_softwareengineering.179389 | Okay, say I have a point coordinate: var coordinate = { x: 10, y: 20 };. Now I also have a distance and an angle: var distance = 20; var angle = 72;. The problem I am trying to solve is: if I want to travel 20 points in the direction of angle from the starting coordinate, how can I find what my new coordinates will be? I know the answer involves things like sine/cosine, because I used to know how to do this, but I have since forgotten the formula. Can anyone help? | Find the new coordinates using a starting point, a distance, and an angle | javascript;math;geometry | SOHCAHTOA: Sine = Opposite/Hypotenuse; Cosine = Adjacent/Hypotenuse; Tangent = Opposite/Adjacent. In your example: Sine(72) = Y/20 -> Y = Sine(72) * 20, and Cosine(72) = X/20 -> X = Cosine(72) * 20. The problem is you have to be careful about which quadrant you are in. This works perfectly in the upper-right quadrant, but not so nicely in the other three quadrants. |
_cs.29851 | Suppose I want to build an operating system based on a very small native lower kernel that acts as a managed code interpreter/runtime and a larger upper kernel compiled to a non-native machine language (Java bytecode, CIL, etc.). Examples of similar operating systems would be Singularity and Cosmos.What pitfalls and development challenges exist writing an OS with this sort of infrastructure in contrast to a purely-native solution? | What are potential pitfalls with having a minimal kernel that runs managed code? | operating systems;type checking;interpreters;os kernel | Depending on the language, there can be many development challenges:Pointers: If a language doesn't have pointers, it will be a challenge to do relatively-easy tasks. For example, you can use pointers to write to VGA memory for printing to the screen. However, in a managed language, you will need some kind of plug (from C/C++) to do the same.Assembly: An OS always needs some assembly. Languages like C#, Java, etc. don't work so well with it, unlike C/C++. In C or C++ you can also have inline assembly which is very, very useful for many tasks. There are MANY cases where this is needed (examples in x86): loading a GDT, loading an IDT, enabling paging, setting up IRQs, etc. Control: If you are using something like Cosmos, you aren't having full control. Cosmos is a micro-kernel and essentially bootstraps your kernel. You can implement something like Cosmos from scratch if you really wanted to, however, that can take a long, long time.Overhead: With managed languages, there is A LOT of overhead compared to C or even C++. Things like Cosmos need to implement a lot of things before even a C# hello world kernel can be run. In C, you are ready to go, no real setup needed. In C++, there are just a few things that need to be implemented to use some of C++'s features. Structures: In C/C++ there are structs which many managed languages do not have, and so, you would need to implement some way of having something like a struct. For example, if you want to load a IDT (Interrupt Descriptor Table), in C/C++, you can create a struct (with a packed attribute), and load it using the x86 ASM instruction lidt. In a managed language, this is much harder to do...Managed languages, syntax-wise, are easier, however, for many OS related things are many times not very-well suited. That doesn't mean they can't be used, however, something like C/C++ are often recommended. |
_unix.40937 | I have old system with an AMD Athlon 1,2 GHz processor and [SiS] 65x/M650/740 graphics (output from lspci). Recently I discovered on a german ubuntu page that since version 10.10 some older processors are not longer supported, since ubuntu version 12.04 there are further restrictions. I guess this is completely related to the used kernel version. This leads me to the following questions:How can I find out which kernel versions support the processor and graphics card mentioned above? Which versions provide optimal support (concerning performance and stability)?When updating a system (for example between two ubuntu versions or more interesting, when running a rolling release like debian testing or archlinux), there seems to be the danger of loosing (optimal) hardware support when the kernel version is updated. Do I have to check the hardware support manually before each update or is it checked automatically in the three distros mentioned above (ubuntu, debian testing, archlinux)? | Choose kernel for specific hardware | kernel;distribution choice;hardware;distros;drivers | null |
_unix.242602 | I wonder whether it is allowed, and therefore possible, for duplicate labels to appear within a *.dts device tree file, and if so, what happens then? Does a new label allow overwriting/redefining the old label, for instance? To make the question more transparent, I would like to ask what happens with this example dts data: /dts-v1/; / { #address-cells = <1>; #size-cells = <1>; chosen { labelname: bootargs = lalalallal; labelname: bootargs2 = lalalallal; }; aliases { }; memory { device_type = memory; reg = <0 0>; }; }; in which we have a duplicate use of the label labelname. The motivation for this question was the inability to find clear and crisp documentation on the dts syntax stating that labels need to be unique. | In Linux device tree syntax, what happens when duplicate labels appear? | device tree | null |
_unix.94224 | I want to handle filenames as arguments in a bash script in a cleaner, more flexible way, taking 0, 1, or 2 arguments for input and output filenames.when args = 0, read from stdin, write to stdoutwhen args = 1, read from $1, write to stdoutwhen args = 2, read from $1, write to $2How can I make the bash script version cleaner, shorter?Here is what I have now, which works, but is not clean,#!/bin/bashif [ $# -eq 0 ] ; then #echo args 0 fgrep -v stuffelif [ $# -eq 1 ] ; then #echo args 1 f1=${1:-null} if [ ! -f $f1 ]; then echo file $f1 dne; exit 1; fi fgrep -v stuff $f1 elif [ $# -eq 2 ]; then #echo args 2 f1=${1:-null} if [ ! -f $f1 ]; then echo file $f1 dne; exit 1; fi f2=${2:-null} fgrep -v stuff $f1 > $f2fiThe perl version is cleaner,#!/bin/env perluse strict; use warnings;my $f1=$ARGV[0]||-;my $f2=$ARGV[1]||-;my ($fh, $ofh);open($fh,<$f1) or die file $f1 failed;open($ofh,>$f2) or die file $f2 failed;while(<$fh>) { if( !($_ =~ /stuff/) ) { print $ofh $_; } } | How to use filename arguments or default to stdin, stdout (brief) | bash;stdout;stdin;arguments | I'd make heavier use of I/O redirection:#!/bin/bash[[ $1 ]] && [[ ! -f $1 ]] && echo file $1 dne && exit 1[[ $1 ]] && exec 3<$1 || exec 3<&0[[ $2 ]] && exec 4>$2 || exec 4>&1fgrep -v stuff <&3 >&4Explanation[[ $1 ]] && [[ ! -f $1 ]] && echo file $1 dne && exit 1Test if an input file has been specified as a command line argument and if the file exists. [[ $1 ]] && exec 3<$1 || exec 3<&0If $1 is set, i.e. an input file has been specified, the specified file is opened at file descriptor 3, otherwise stdin is duplicated at file descriptor 3. [[ $2 ]] && exec 4>$2 || exec 4>&1Similarly if the $2 is set, i.e. an output file has been specified, the specified file is opened at file descriptor 4, otherwise stdout is duplicated at file descriptor 4.fgrep -v stuff <&3 >&4Lastly fgrep is invoked, redirecting its stdin and stdout to the previously set file descriptors 3 and 4 respectively.Reopening standard input and outputIf you'd prefer not to open intermediate file descriptors, an alternative is to replace the file descriptors corresponding to stdin and stdout directly with the specified input and output files:#!/bin/bash[[ $1 ]] && [[ ! -f $1 ]] && echo file $1 dne && exit 1[[ $1 ]] && exec 0<$1[[ $2 ]] && exec 1>$2fgrep -v stuffA drawback with this approach is that you loose the ability to differentiate output from the script itself from the output of the command which is the target for the redirection. In the original approach, you can direct script output to the unmodified stdin and stdout, which in turn might have been redirected by the caller of the script. The specified input and output files could still be accessed via the corresponding file descriptors, which are distinct from the script stdin and stdout. |
_reverseengineering.12944 | I am trying to ptrace_attach the main process and its threads (/proc/<pid>/task) of an android unity app to avoid malicious users debugging the app(which is a game). I developed a ndk library that forks from main process and ptrace_attach the parent process(being the main process) inside the JNI_OnLoad() function. After that periodically checks the /proc/<pid>/task folder to attach newly created threads. The problem is, this works well in normal apps but when I try to run this inside an app made with unity, the main process stops and screen becomes black or white not responding. But if you delay attaching a few seconds just enough to see the animation working on the screen, attaching works fine.Code is roughly something like this:if(!fork()){ parentPid = getppid(); // attach parent process if(ptrace(PTRACE_ATTACH,parentPid,0,0)<0) exit(-1); ptrace(PTRACE_SETOPTIONS, parentPid, 0, PTRACE_O_TRACEEXEC| PTRACE_O_TRACEVFORKDONE|PTRACE_O_TRACESYSGOOD |PTRACE_O_TRACEFORK |PTRACE_O_TRACEVFORK |PTRACE_O_TRACECLONE ); while(true) { // get signal from processes stoppedPid = waitpid(-1,&stat_loc, 0); ... // check if stoppedPid need to be attached // if so, attach ptrace(PTRACE_ATTACH,stoppedPid,0,0); ... // else, just continue the stopped process ptrace(PTRACE_CONT,stoppedPid,0,0); } }Maybe I should adjust the ptrace_setoptions ?Thanks in advance :) | Has anyone tried ptrace_attaching android unity apps for anti debugging? | android;anti debugging | Well somethings I found out - When I ptrace_attach the main process of the target app and wait for signals, I get SIGSEGV signal while app loads and just hangs there(because forked process cannot handle SIGSEGV). In the java code, it seems SIGSEGV occurs while calling View related functions. I guess UnityPlayer or Android app loader handles SIGSEGV smoothly while app loading time. Therefore, if you get a SIGSEGV, simply detaching it and attaching again does not hang the app. |
_unix.97751 | I have a Raspberry Pi that's running a Raspbmc distribution and I've noticed that a lot of the directories are either owned by the user 501 and the group dialout or both the user and group root. It's frustrating for me to move files from the main filesystem on the SD card to the external drive because I always need root access (and it makes automating tasks a pain too), so I'd really like to be able to chown it to the user pi. I've read up a little bit on what the 501 user and the dialout group are and don't see why I shouldn't do this, but my knowledge of Unix permissions is basic at best so I'd like to know if I've missed any considerations before I go ahead and change the permissions recursively on the entire drive. So my question would be: Is there any harm in doing a chown -R pi on the external drive? | What are the ramifications of recursively chown'ing the directories on an external drive that currently has 501:dialout or root:root permissions? | permissions;raspberry pi | If you create a common user between the systems where this disk is moving you can then make the ownership on this disk that single user and you'll no longer have to deal with this.Simply add a user on both systems, and make sure that this user's UID (user ID) and GID (group ID) are the same numbers on both systems. The names are immaterial, it's the numbers that need to be kept in sync, so that the UID/GID is recognized across both systems as a single user/group.When creating a user these are the parts that drive the recognition by the system which user/group owns files.ExampleSay I have this directory, it's user/group is saml & saml.$ ls -ld .drwx------. 245 saml saml 32768 Oct 26 22:41 .Using the -n switch to ls you can see what the numbers are for these fields.$ ls -ldn .drwx------. 245 500 501 32768 Oct 26 22:41 .So we need to make sure that I have the same user/group on both systems (saml/saml) and the UID/GID needs to be 500/501 as well.If you look in the /etc/group file you'll see the group saml + GID.$ grep ^saml /etc/groupsaml:x:501:Looking in /etc/passwd file you'll see the user saml + UID.$ grep ^saml /etc/passwdsaml:x:500:501:Sam M. (local):/home/saml:/bin/bashWhen running the useradd command you can control what UID/GID to use.$ sudo useradd -u 500 -g 501 saml |
_codereview.115869 | So this a exercise from the book COMPUTERS SYSTEMS A PROGRAMMERS PERSPECTIVE I need to add two signed numbers and in case it overflows or underflows return TMAX (011..1b) or TMIN (100..0b) respectively. In this case we can assumed two's complement representation. The book imposes a set of rules that the solution must follow : Forbidden:Conditionals, loops, function calls and macros.Division, modulus and multiplication. Relative comparison operators (<, >, <= and >=).Allowed operations:All bit level and logic operations.Left and right shifts, but only with shift amounts between 0 and w - 1Addition and subtraction. Equality (==) and inequality (!=) tests. Casting between int and unsigned.My codeint saturating_add(int x , int y) { int sum = x + y; int w = (sizeof(int) << 3) -1; int mask = (~(x ^ y) & (y ^ sum) & (x ^ sum)) >> w; int max_min = (1 << w) ^ (sum >> w); return (~mask & sum) + (mask & max_min);}Compiled code in my machineNote: I used the following command -> gcc -O2 -S sat_add.c.leal (%rdi,%rsi), %edx movl %edx, %eax movl %edx, %ecx xorl %esi, %eax xorl %edi, %esi xorl %edx, %edi notl %esi sarl $31, %ecx andl %esi, %eax addl $-2147483648, %ecx andl %edi, %eax sarl $31, %eax movl %eax, %esi andl %ecx, %eax notl %esi andl %edx, %esi leal (%rsi,%rax), %eax retSo I want to know if there is a better solution in terms of elegance and performance. Also if there is a solution that compiles to a single instruction in x86_64 (Maybe PADDS although this instruction may not be the one I am looking for). Also any other kind of feedback is welcome. | Saturated signed addition | performance;c;bitwise | Undefined behavior from signed overflowTechnically, your first line causes undefined behavior:int sum = x + y;It should be written instead as:int sum = (unsigned int) x + y;In C, signed integer overflow is undefined behavior but unsigned integer overflow is not. Your compiler probably will treat the two lines above identically, but to be safe you should use the unsigned add.Save a couple of instructionsThis line here could be optimized:int mask = (~(x ^ y) & (y ^ sum) & (x ^ sum)) >> w;to this:int mask = (~(x ^ y) & (x ^ sum)) >> w;If x and y have the same sign, then you only need to check one of them against sum instead of both of them. This saves 2 assembly instructions when you compile it. |
_unix.305371 | I am using open connect to create a split VPN connection. It works great... the first time. If the openconnect process dies, subsequent tries appear to succeed, but leave me unable to actually access anything behind the VPN. Rebooting temporarily allows openconnect to work once again, but I'd like to be able to turn the VPN on and off without having to reboot every time.I think the problem is related to improper closing/clean up of the VPN connection, but this is out of my depth and I have no idea what I'm doing. What is going on and how to fix it or set up a system that allows me to start and stop my VPN connection multiple times without rebooting. route produces the same output both when the VPN is working and when it isn't.Here is the script I use to connect:sudo openvpn --mktun --dev tun1 && \sudo ifconfig tun1 up && \sudo /usr/sbin/openconnect -s $VPNSCRIPT $VPNURL --user=$VPNUSER --authgroup=$VPNGRP --interface=tun1sudo ifconfig tun1 downopenvpn --rmtun --dev tun1where $VPNSCRIPT is a wrapper around the default vpnc-script to set up the environment for split VPN: #!/bin/sh# Add one IP to the list of split tunneladd_ip (){ export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_ADDR=$1 export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASK=255.255.255.255 export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASKLEN=32 export CISCO_SPLIT_INC=$(($CISCO_SPLIT_INC + 1))}# Initialize empty split tunnel listexport CISCO_SPLIT_INC=0# Delete DNS info provided by VPN server to use internet DNS# Comment following line to use DNS beyond VPN tunnelunset INTERNAL_IP4_DNS# List of IPs beyond VPN tunneladd_ip --REDACTED--# Execute default script. /usr/share/vpnc-scripts/vpnc-script # End of scriptThis is all happening on a Ubuntu 14.04 VPSresults of route -nNo connection attempt:Destination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 venet0Connected and workingDestination Gateway Genmask Flags Metric Ref Use Iface<HostA> 0.0.0.0 255.255.255.255 UH 0 0 0 tun1<VPN> 0.0.0.0 255.255.255.255 UH 0 0 0 venet0<HostB> 0.0.0.0 255.255.255.255 UH 0 0 0 tun1<VPN DHCP> 0.0.0.0 255.255.254.0 U 0 0 0 tun10.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 venet0Supposedly connected, but not workingDestination Gateway Genmask Flags Metric Ref Use Iface<HostA> 0.0.0.0 255.255.255.255 UH 0 0 0 tun1<VPN> 0.0.0.0 255.255.255.255 UH 0 0 0 venet0<HostB> 0.0.0.0 255.255.255.255 UH 0 0 0 tun1<VPN DHCP> 0.0.0.0 255.255.254.0 U 0 0 0 tun10.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 venet0where Host* is an entry in the split VPN config. | OpenConnect only works once | openvpn;vpn;openconnect | null |
_webapps.102260 | So I'm working on my website and I wanted to create a list of recently uploaded videos on my channel. I can do it manually but I wanted to know if there was a way to automate it, maybe a script or something. I tried looking up the YouTube API but I can't find something similar. | Is it possible to list a YouTube Channel's uploaded videos? | youtube;youtube channel | null |
_datascience.6076 | I am thinking of preprocessing techniques for the input data to a convolutional neural network (CNN) using sparse datasets and trained with SGD. In Andrew Ng's Coursera course, Machine Learning, he states that it is important to preprocess the data so it fits into the interval $\left[-3, 3\right]$ when using SGD. However, the most common preprocessing technique is to standardize each feature so $\mu = 0$ and $\sigma = 1$. When standardizing a highly sparse dataset, many of the values will not end up in that interval. I am therefore curious: would it be better to aim for e.g. $\mu = 0$ and $\sigma = 0.5$ in order for the values to be closer to the interval $\left[-3, 3\right]$? Could anyone argue, based on a knowledge of SGD, whether it is more important to aim for $\mu = 0$ and $\sigma = 1$ or for $\left[-3, 3\right]$? | Most important part of feature standardization and how is standardization affected by sparsity? | machine learning;feature scaling | No, you are misinterpreting his comments. If you have data that has some outliers in it, then the outliers will extend beyond 3 standard deviations; if you then standardize the data, some values will extend beyond the [-3,3] region. He is simply saying that you need to remove your outliers so they don't wreak havoc on your stochastic gradient descent algorithm. He is NOT saying that you need to use some weird scaling algorithm. You should standardize your data by subtracting the mean and dividing by the standard deviation, and then remove any points that extend beyond [-3,3], which are the outliers. In stochastic gradient descent, the presence of outliers could increase the instability of the minimization and make it thrash around excessively, so it's best to remove them. If the sparseness of the data prevents removal then... Do you need to use stochastic gradient descent, or can you just use gradient descent? Gradient descent (GD) might help to alleviate some of the problems relating to convergence. Finally, if GD is having trouble converging, you could always do a direct solve (e.g. direct matrix inversion) rather than an iterative solve. Hope this helps! |
_datascience.11853 | I am trying to figure out how many weights and biases are needed for a CNN.Say I have a (3, 32, 32)-image and want to apply a (32, 5, 5)-filter.For each feature map I have 5x5 weights, so I should have 3 x (5x5) x 32 parameters. Now I need to add the bias. I believe I only have (3 x (5x5) + 1) x 32 parameters, so is the bias the same across all colors (RGB)? Is this correct? Do I keep the same bias for each image across its depth (in this case 3) while I use different weights? Why is that? | Question about bias in Convolutional Networks | deep learning;convnet;backpropagation | Bias operates per virtual neuron, so there is no value in having multiple bias inputs where there is a single output - that would be equivalent to just adding up the different bias weights into a single bias.In the feature maps that are the output of the first hidden layer, the colours are no longer kept separate*. Effectively each feature map is a channel in the next layer, although they are usually visualised separately where the input is visualised with channels combined. Another way of thinking about this is that the separate RGB channels in the original image are 3 feature maps in the input.It doesn't matter how many channels or features are in a previous layer, the output to each feature map in the next layer is a single value in that map. One output value corresponds to a single virtual neuron, needing one bias weight.In a CNN, as you explain in the question, the same weights (including bias weight) are shared at each point in the output feature map. So each feature map has its own bias weight as well as previous_layer_num_features x kernel_width x kernel_height connection weights.So yes, your example resulting in (3 x (5x5) + 1) x 32 weights total for the first layer is correct for a CNN with first hidden layer processing RGB input into 32 separate feature maps.* You may be getting confused by seeing visualisations of CNN weights which can be separated into the colour channels that they operate on.
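The parameter count discussed above is easy to verify directly in a shell: 3 x 5 x 5 connection weights plus one bias per feature map, times 32 feature maps.

echo $(( (3 * 5 * 5 + 1) * 32 ))   # prints 2432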
_unix.116480 | To put it simply, I'm trying to use my computer as an alarm clock. It's slightly old and noisy, so I'd like it to start from power off at a scheduled time and then execute a command, such as playing an MP3 file. I'm running Linux Mint Nadia. How would I go about this? | Starting Linux from power off at predetermined time (and executing command)? | linux mint;hardware;scheduling | null |
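The question above has no accepted answer here, but a commonly used building block for this kind of alarm is rtcwake, which powers the machine down and programs the real-time clock to wake it (support for the off mode depends on the firmware). The MP3 player, file path, and the cron @reboot idea below are assumptions for illustration, not something stated in the question.

# Power off now and have the RTC start the machine again in 8 hours (28800 seconds).
sudo rtcwake -m off -s 28800

# Something then has to play the alarm at boot, e.g. a cron @reboot entry (crontab -e):
# @reboot mpg123 /home/me/alarm.mp3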
_codereview.70933 | Previous question:Tic-Tac-Toe in C++11 - follow-upIs there any way to improve this code?#include <iostream>#include <cctype>#include <algorithm>#include <functional>#include <array>enum struct Player : char{ none = '-', first = 'X', second = 'O'};std::ostream& operator<<(std::ostream& os, Player p){ return os << static_cast<char>(p);}enum struct Type : int{ row = 0, column = 1, diagonal = 2};enum struct Lines : int{ first = 0, second = 1, third = 2};class TicTacToe{public: TicTacToe(); bool isFull() const; void draw() const; void turn(Player player); bool check(Player player) const;private: bool applyMove(Player player, int position); static const std::size_t mDim = 3; std::array<Player, mDim * mDim> mGrid;};// utility functor to compute matching conditiontemplate<int dim>struct Match { Match(Type t, Lines i) : mCategory(t), mNumber(i){} bool operator() (int number) const { switch (mCategory) { case Type::row: return (std::abs(number / dim) == static_cast<int>(mNumber)); case Type::column: return (number % dim == static_cast<int>(mNumber)); case Type::diagonal: if (mNumber == Lines::first) return ((std::abs(number / dim) - number % dim) == static_cast<int>(mNumber)); else return ((std::abs(number / dim) + number % dim) == static_cast<int>(mNumber)); } return false; } Type mCategory; Lines mNumber;};TicTacToe::TicTacToe() { mGrid.fill(Player::none);}bool TicTacToe::applyMove(Player player, int position){ if (mGrid[position] != Player::none) return false; mGrid[position] = player; return true;}bool TicTacToe::isFull() const{ return 0 == std::count_if(mGrid.begin(), mGrid.end(), [](Player i) { return i == Player::none; });}bool TicTacToe::check(Player player) const{ // check for row or column wins std::array<bool, 8> win; win.fill(true); int j = 0; // checking condition loop std::for_each(mGrid.begin(), mGrid.end(), [&](Player i) { int x = j++; // columns if (Match<mDim>(Type::column, Lines::first)(x)) win[0] &= i == player;; if (Match<mDim>(Type::column, Lines::second)(x)) win[1] &= i == player; if (Match<mDim>(Type::column, Lines::third)(x)) win[2] &= i == player; // rows if (Match<mDim>(Type::row, Lines::first)(x)) win[3] &= i == player; if (Match<mDim>(Type::row, Lines::second)(x)) win[4] &= i == player; if (Match<mDim>(Type::row, Lines::third)(x)) win[5] &= i == player; // diagonals if (Match<mDim>(Type::diagonal, Lines::first)(x)) win[6] &= i == player; if (Match<mDim>(Type::diagonal, Lines::third)(x)) win[7] &= i == player; }); for (auto i : win) { if (i) return true; } return false;}void TicTacToe::draw() const{ //Creating a onscreen grid std::cout << ' '; for (auto i = 1; i <= mDim; ++i) std::cout << << i; int j = 0; char A = 'A'; for (auto i : mGrid) { if (Match<mDim>(Type::column, Lines::first)(j++)) std::cout << \n << A++; std::cout << ' ' << i << ' '; } std::cout << \n\n;}void TicTacToe::turn(Player player){ char row = 0; char column = 0; std::size_t position = 0; bool applied = false; std::cout << \n << player << : Please play. \n; while (!applied) { std::cout << Row(1,2,3,...): ; std::cin >> row; std::cout << player << : Column(A,B,C,...): ; std::cin >> column; position = mDim * (std::toupper(column) - 'A') + (row - '1'); if (position < mGrid.size()) { applied = applyMove(player, position); if (!applied) std::cout << Already Used. Try Again. \n; } else { std::cout << Invalid position. 
Try again.\n; } } std::cout << \n\n;}class Game{public: Game() = default; void run();private: TicTacToe mTicTacToe; std::array<Player, 2> mPlayers{ { Player::first, Player::second } }; int mPlayer = 1; void resultScreen(bool winner); std::function<void()> display = std::bind(&TicTacToe::draw, &mTicTacToe); std::function<void(Player)> turn = std::bind(&TicTacToe::turn, &mTicTacToe, std::placeholders::_1); std::function<bool(Player)> win = std::bind(&TicTacToe::check, &mTicTacToe, std::placeholders::_1); std::function<bool()> full = std::bind(&TicTacToe::isFull, &mTicTacToe);};void Game::run(){ while (!win(mPlayers[mPlayer]) && !full()) { mPlayer ^= 1; display(); turn(mPlayers[mPlayer]); } resultScreen(win(mPlayers[mPlayer]));}void Game::resultScreen(bool winner){ display(); if (winner) { std::cout << \n << mPlayers[mPlayer] << is the Winner!\n; } else { std::cout << \nTie game!\n; }}int main(){ Game game; game.run();} | Tic-Tac-Toe in C++11 - follow-up 2 | c++;game;c++11;tic tac toe | Here are some things that may allow you to improve your code:Separate responsibilitiesThe Model-View-Controller design pattern is often useful for programs like this. Because the view in this case is essentially just printing the board to std::cout, we can simplify a bit and just have a model, the TicTacToe class, and a controller, the Game class. Here's what the TicTacToe class looks like:class TicTacToe{public: TicTacToe() = delete; TicTacToe(const TicTacToe &t) = delete; TicTacToe(const TicTacToe &&t) = delete; TicTacToe(char ch, std::size_t dim) : mDim(dim), emptychar(ch), remaining(mDim*mDim), grid(remaining, emptychar) { } bool isNotFull() const { return remaining; } bool isWinner(char player) const; bool applyMove(char player, unsigned row, unsigned column); friend std::ostream &operator<<(std::ostream &out, const TicTacToe &t) { out << ' '; for (std::size_t i = 1; i <= t.mDim; ++i) out << << i; std::size_t j = 0; char A = 'A'; for (auto& i : t.grid) { if (j == 0) { out << \n << A++; j = mDim; } --j; out << ' ' << i << ' '; } return out << \n\n; }private: const std::size_t mDim; const char emptychar; unsigned remaining; std::vector<char> grid;};There are some differences in this class compared to yours, so I'll point out the salient features.Delete automatic functions which are not wantedThe way I've defined the TicTacToe class requires values to be passed to the constructor. For that reason, I've deleted the default constructor, the copy constructor and the move constructor. This prevents the class from being misused and alerts the user of the class that some things are not supported.Isolate the internal representation from the interfaceThe game is played on a square grid and not a linear array (even though that may be the internal representation), so the applyMove function in the revised version takes row and column arguments rather than a linear position value.Allow for dynamic sizingThe dimension of the board in the revised version of the TicTacToe class is a const value that is initialized with a value passed to the constructor. This allows for more than one size game to be played without recompiling. Also, this required changing from a std::array to a std::vector.Allow for any character representationsThis version does not specify the representations for an empty square, or any of the player tokens. In particular, the emptychar member function is initialized by the constructor. Perhaps more interesting is the fact that this class allows for more than two players. 
This can be seen most easily in the applyMove member function:// Returns `false` if requested move was applied, otherwise truebool TicTacToe::applyMove(char player, unsigned row, unsigned column){ unsigned position = row + mDim * column; if ((position > grid.size()) || (grid[position] != emptychar)) return true; grid[position] = player; --remaining; return false;}Define logical functions in a way that makes them most usefulIf we look at the original isFull routine, it was always being used as !isFull() so it seems that what's actualy more useful is a method to check if the grid is not full. For this reason, the function is now isNotFull() in the redefined version.Avoid inefficient algorithmsThe original isFull routine counts empty squares each time it is called, but a more efficient (and simpler) way to do this is to simply keep a running count as the game is played.Use clear function namesThe original code has a function named check but it's not clear what it checks. I've renamed it to isWinner so that it's very clear now that it's checking to see if a particular player is a winner or not. I've also reimplemented it to work simply and efficiently no matter what size the array happens to be:// returns true if the player is a winnerbool TicTacToe::isWinner(char player) const{ // check for row or column wins for(unsigned i = 0; i < mDim; ++i){ bool rowwin = true; bool colwin = true; for (unsigned j=0; j < mDim; ++j) { rowwin &= grid[i*mDim+j] == player; colwin &= grid[j*mDim+i] == player; } if (colwin || rowwin) return true; } // check for diagonal wins bool diagwin = true; for (unsigned i=0; i < mDim; ++i) diagwin &= grid[i*mDim+i] == player; if (diagwin) return true; diagwin = true; for (unsigned i=0; i < mDim; ++i) diagwin &= grid[i*mDim+(mDim-i-1)] == player; return diagwin; }Revise the Game class to be a controllerIn the interest in clearly separating responsibilities of the classes, here is the revised Game class:class Game{public: Game(std::size_t dim=3) : ttt(players[2], dim), player(1) {} void run(); void run(const char *move); void turn(); void showResult() const;private: const char players[3] = { 'X', 'O', '-' }; TicTacToe ttt; int player;};The most significant change here is that the turn method is a method of Game rather than of TicTacToe. This is important because the controller actually controls the game; the model simply reacts to the applied controls. This makes some alternatives much easier to implement as I'll describe. Put the player character representations within the Game classThe character representations for each player and an empty space are all solely concerns of the Game class. They don't need to be in global space as originally defined.Validate user input carefullyThe current code accepts such inputs as (0,B) which should be rejected. The revised code fixes this:void Game::turn(){ char row = 0; char column = 0; std::cout << \n << players[player] << : Please play. \n; for (bool pending = true; pending; ) { std::cout << Row(1,2,3,...): ; std::cin >> row; std::cout << players[player] << : Column(A,B,C,...): ; std::cin >> column; column = std::toupper(column) - 'A'; row -= '1'; pending = column < 0 || row < 0 || ttt.applyMove(players[player], row, column); if (pending) std::cout << Invalid position. 
Try again.\n; } std::cout << \n\n;}Note that it also changes the sense of the boolean variable from applied to pending which somewhat simplifies the code and requires no negations.Eliminate pointless obfuscationThe use of std::bind is really not needed in this program and makes the program that much harder to read and understand. The revised version of run doesn't need them and is easy to read and understand:void Game::run(){ while (!ttt.isWinner(players[player]) && ttt.isNotFull()) { player ^= 1; std::cout << ttt; turn(); } showResult();}Use const where possibleThe resultScreen function doesn't and shouldn't modify the underlying Game class, and so it should be declared const. Also, I've changed the name of the function to a more desscriptive showResult and eliminated the need to pass a variable.void Game::showResult() const{ std::cout << ttt; if (ttt.isWinner(players[player])) std::cout << \n << players[player] << is the Winner!\n; else std::cout << \nTie game!\n;}Note that this code only yields correct results when the game has already ended; one could add a check ttt.isNotFull() to handle any mid-game requests for results.Consider having the computer play by itselfBy separating the turn function in Game from the applyMove function in TicTacToe, we have the first step toward having the potential for the computer to play against a human player. I haven't implemented that, but I did implement a means by which a game can be run automatically, given a fixed series of moves. That function looks like this:void Game::run(const char *move){ unsigned row, column; while (!ttt.isWinner(players[player]) && ttt.isNotFull() && *move) { player ^= 1; std::cout << ttt; row = *move++ - '1'; column = std::toupper(*move++) - 'A'; std::cout << Applying << players[player] << to << row+1 << static_cast<char>('A'+column) << \n; ttt.applyMove(players[player], row, column); } showResult();}Note that this part of the code lacks much in the way of error handling, but it's meant solely as illustration.Putting it all togetherHere's a sample main function that plays one 3x3 tie game automatically, and then allows for two humans to play a 4x4 game against each other:#include <iostream>#include <cctype>#include <vector>// TicTacToe and Game classes go hereint main(){ Game game1; game1.run(2B1A2A2C1C3A3B1B3C); Game game2(4); game2.run();} |
_codereview.42121 | I have written an HTTP client wrapped the libcurl. It should be able to do HTTP get/post with string/map param, with cookies and proxy.Can somebody review the code? B.T.W., I'm not sure the way pass a map into HTTP header is correct, maybe I should remove these two interfaceCURLcode do_http_post(std::string post_url, std::map<std::string, std::string> map_param, void* user_data); and void set_http_header(std::map<std::string, std::string> map_param);.curl_wrapper.h#ifndef CURL_WRAPPER#define CURL_WRAPPER#include <string>#include <exception>#include curl.h#include <map>#define DATA_MAX_LEN CURL_MAX_WRITE_SIZE*30struct client_data;class curl_client{public: curl_client(); ~curl_client(); CURLcode do_http_get(std::string get_url, void* user_data); CURLcode do_http_post(std::string post_url, std::map<std::string, std::string> map_param, void* user_data); CURLcode do_http_post(std::string post_url, std::string post_fields, void* user_data); void set_http_header(std::string header_param); void set_http_header(std::map<std::string, std::string> map_param); void set_http_cookie(std::string cookie_file); void set_http_proxy(std::string proxy_url);private: static size_t write_data( char *ptr, size_t size, size_t nmemb, void *user_data); void set_common_opt(std::string url, void* user_data); void set_post_fields(std::string post_fields); void set_post_fields(std::map<std::string, std::string> map_param); CURLcode perform();private: CURL* curl_; CURLcode res_; curl_slist* p_header_list_;};struct client_data { client_data() { size = DATA_MAX_LEN; used = 0; buf = new char[DATA_MAX_LEN]; } ~client_data(){ if (buf!=nullptr){ delete[] buf; } } char *buf; int size; int used;};#endifcurl_wrapper.cpp#include curl_wrapper.hcurl_client::curl_client(){ curl_ = curl_easy_init(); p_header_list_ = nullptr;}curl_client::~curl_client(){ curl_easy_cleanup(curl_); curl_slist_free_all(p_header_list_);}CURLcode curl_client::do_http_get(std::string get_url, void* user_data){ set_common_opt(get_url, user_data); return perform();}CURLcode curl_client::do_http_post(std::string post_url, std::map<std::string, std::string> map_param, void* user_data){ set_common_opt(post_url, user_data); set_post_fields(map_param); return perform();}CURLcode curl_client::do_http_post(std::string post_url, std::string post_fields, void* user_data){ set_common_opt(post_url, user_data); set_post_fields(post_fields); return perform();}void curl_client::set_common_opt(std::string url, void* user_data){ curl_easy_setopt(curl_, CURLOPT_URL, url.c_str()); curl_easy_setopt(curl_, CURLOPT_WRITEFUNCTION, curl_client::write_data); curl_easy_setopt(curl_, CURLOPT_WRITEDATA, user_data); curl_easy_setopt(curl_, CURLOPT_HEADER, 1); curl_easy_setopt(curl_, CURLOPT_FOLLOWLOCATION, 1); // not verify host and ca for https curl_easy_setopt(curl_, CURLOPT_SSL_VERIFYPEER, 0); curl_easy_setopt(curl_, CURLOPT_SSL_VERIFYHOST, 0);#ifdef DEBUG curl_easy_setopt(curl_, CURLOPT_VERBOSE, 1); #endif}void curl_client::set_post_fields(std::map<std::string, std::string> map_param){ std::string post_fields; std::map<std::string, std::string>::iterator it; for (it = map_param.begin(); it!=map_param.end(); ++it){ if (it!=map_param.begin()){ post_fields += &; } post_fields += it->first + = + it->second; } set_post_fields(post_fields); }void curl_client::set_post_fields(std::string post_fields){ curl_easy_setopt(curl_, CURLOPT_POSTFIELDS, post_fields.c_str()); curl_easy_setopt(curl_, CURLOPT_POST, 1); }void curl_client::set_http_header(std::string header_param){ if 
(p_header_list_){ curl_slist_free_all(p_header_list_); p_header_list_ = nullptr; } p_header_list_ = curl_slist_append(p_header_list_, header_param.c_str()); curl_easy_setopt(curl_, CURLOPT_HTTPHEADER, p_header_list_);}void curl_client::set_http_header(std::map<std::string, std::string> map_param){ std::string header_param; std::map<std::string, std::string>::iterator it; for (it = map_param.begin(); it!=map_param.end(); ++it){ if (it!=map_param.begin()){ header_param += &; } header_param += it->first + = + it->second; } set_http_header(header_param);}void curl_client::set_http_cookie(std::string cookie_file) { curl_easy_setopt(curl_, CURLOPT_COOKIEJAR, cookie_file.c_str());//save to curl_easy_setopt(curl_, CURLOPT_COOKIEFILE, cookie_file.c_str());//read from}void curl_client::set_http_proxy(std::string proxy_url){ curl_easy_setopt(curl_, CURLOPT_PROXY, proxy_url.c_str());}size_t curl_client::write_data( char *ptr, size_t size, size_t nmemb, void *user_data){ client_data *the_buf = (client_data *)user_data; int bytes_passed_in = size * nmemb; int bytes_written = 0; if (the_buf->used + bytes_passed_in < DATA_MAX_LEN){ memcpy(the_buf->buf + the_buf->used, ptr, bytes_passed_in); the_buf->used += bytes_passed_in; *(the_buf->buf + the_buf->used) = 0; bytes_written = bytes_passed_in; }else { memcpy(the_buf->buf + the_buf->used, ptr, DATA_MAX_LEN - the_buf->used - 1); bytes_written = DATA_MAX_LEN - the_buf->used - 1; the_buf->used = DATA_MAX_LEN; *(the_buf->buf + DATA_MAX_LEN - 1) = 0; // here libcurl will signal an error for intact data written. } return bytes_written;}CURLcode curl_client::perform(){ if (curl_) { res_ = CURL_LAST; try { res_ = curl_easy_perform(curl_); } catch (std::exception e){ } return res_; }}An HTTP get test case: void curl_get_googleplay(){ curl_client curl; client_data user_data; CURLcode res = curl.do_http_get(std::string(play.google.com/store/apps/details?id=com.teamviewer.teamviewer.market.mobile), &user_data); } | Wrapping Curl, an HTTP client | c++;http;curl | Looking at your header file:Replace the #define DATA_MAX_LEN with a static const variable; Do the same with any other constant that is #define-d.Pass more complex parameters by const reference, to avoid making a copy (urls, parameters maps, etc).You define client_data structure for (I assume) receiving the results; If it is strongly typed, why do you pass it in by void*? This just allows client code to call your API in a way that will corrupt your code (e.g. a client might decide to pass there a pointer to a std::vector<uint8_t> instead).You do not use RAII and smart pointers (you probably should)After looking at your code, it is still unclear to me, if I would be able to use it to read the HTTP response headers (e.g. I want to know if the response came with the Cache-Control header specified, and what was it's value).Consider returning the result data as a return value (instead as an output parameter) and raising an exception in case you get a HTTP error. This would allow you to specify other error conditions as well.Just looking at the interface (not the implementation) I have no idea how your class will behave if I call with invalid parameters.Code:curl_client curl;client_data user_data;curl.set_http_cookie(\\); // does this throw? What does it throw?Looking at your implementation file:Your perform function calls curl_easy_perform (a C function) inside a try/catch block for std::exception. Exceptions are a C++ thing (i.e. 
curl_easy_perform will never throw an exception -- that's why it's using return codes to signal errors).You are using C-style casts; Don't! (see my point above about removing the void* code).There is no reason at all to use raw pointer and memcpy. Consider using std::vector instead (it will be safer, more efficient, exception-safe and already tested).set_post_fields concatenates strings into a different string, in a loop. Consider using a std::ostringstream instead. |
_webmaster.49707 | I have an eCommerce store that sells digital downloads. The website is hosted in the US, but I have a large customer base outside the United States, in different countries. I know the hosting here will be fast for US customers, but how can I make it fast for viewers all around the world? When someone outside the US accesses the website, the request will always go to the US data center. Is it possible that when the site is accessed from outside the US (e.g. the UK), the request goes to a UK (or nearest) data center?If I buy different TLDs (.co.uk, .com, etc.) for different countries (all targeting the same website), is it possible that in the UK it opens .co.uk? I can achieve that using an IP locator and redirecting the user to the respective URL, but how does the request go to the nearest server?Is this possible via a CDN, cloud hosting, or something else? How can I achieve this goal? | How to host a website in multiple countries for fast response times | web hosting;multiple domains | null
_unix.57986 | I'm using PulseAudio. The image is of the Output Devices tab and shows the Port set to Analog Output. This is fine and works great. There's another option called Headphones, but if you try to set the Port to Headphones it automatically switches back to Analog Output. This is also fine because I don't want to use that port. The problem is that if the volume increases over 87% it automatically switches to the Headphones port, which in turn auto-switches back to Analog, and back and forth quickly and forever, making lots of clicking noises. Perhaps this is related to that Base marker on the volume control? No idea here, very strange behavior. | Pulseaudio rapidly switches outputs at high volumes | linux;audio;alsa;pulseaudio | I used alsamixer to mute the headphones channel. That solved the problem.
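For anyone who wants to apply the same mute from a script rather than interactively in alsamixer, amixer drives the same controls; the control name Headphone below is an assumption that varies between sound cards, so list the controls first.

# List the simple mixer controls to find the exact name of the headphone control.
amixer scontrols

# Mute it (adjust the control name to whatever scontrols reports).
amixer set Headphone mute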
_codereview.124945 | I wrote a Brainfuck interpreter in JavaScript, however it is quite buggy and I can't figure out what I'm doing wrong. It works for programs I've written, but fails on most programs I find on the internet. It seems that the while loop doesn't work correctly when it is supposed to skip over. When debugging, the value of i (index) behaves as I intended. In addition, I feel that my code is overly complicated for such a simple task, and I think there might be a way to simplify my code, although I can't think of a way :(Here is the code:function execBrainf() { document.getElementById(output).value = ; var ptr = 0, i, ii; var cells = new Array(), labels = new Array(); for (i = 0; i < 30000; ++i) cells[i] = 0; for (i = ii = 0; i < document.getElementById(code).value.length; ++i) { switch (document.getElementById(code).value.charAt(i)) { case '>': ++ptr; break; case '<': --ptr; break; case '+': cells[ptr] = (++cells[ptr] % 256); break; case '-': cells[ptr] = (--cells[ptr] % 256); break; case ',': cells[ptr] = (ii > document.getElementById(input).length ? 0 : document.getElementById(input).value.charCodeAt(ii++)); break; case '.': document.getElementById(output).value = document.getElementById(output).value + String.fromCharCode(cells[ptr]); break; case '[': (cells[ptr] == 0 ? i = document.getElementById(code).value.indexLoopEnd(i) : labels.push(i)); break; case ']': (cells[ptr] == 0 ? labels.pop() : i = labels[labels.length - 1]); break; } }}function indexLoopEnd(i) { var x = 1; while (x > 0) { switch (this.charAt(++i)) { case '[': ++x; break; case ']': --x; break; } } return i;}I am using the html here: http://esotools.ml/brainfuck/interpreter.htmlI am looking for:- what the problem is- suggestions on making the code prettier- suggestions for improving the algorithmI am not too concerned about optimization yet, and would like to focus on these three things. | JavaScript Brainfuck interpreter | javascript;interpreter;brainfuck | Store document.getElement(s)By... 
callsYou call the method document.getElementById quite a lot in this code, and it's all to get the same exact elements: input, output, and codeWhile it is a wonderfully useful method, it is also expensive so it's best to reduce the calls to it as much as possible.function execBrainf() { var input = document.getElementById(input); var output = document.getElementById(output); var code = document.getElementById(code);FlexibilityThe above tip is somewhat linked to this.Right now, your code works only a very specific environment: there must be three elements with specific IDs and specific information in them else this is not going to work.Yes, I understand, you wrote this for that specific environment, but we can still make the code more easy to test if we instead have the function take the code, input, and output through parameters:function execBf(code, input) {Then, you could easily test this code by passing in strings as the code and input, and have the function return the output (thanks to Pieter Witvoet for proposing a better idea than functions)Simplify setting the cells to 0This is borrowing from a very good answer I once read.Rather than iterating 30000 times to initialize an array's values to 0, you can define a function that will return 0 for a cell's value based on its index if the cell's value is undefined (which is the default value for JavaScript array values).function getCell(i) { return cells[i] || 0;}Now you don't have to go through that long loop.Misc.Create new arrays with [], not new Array()....value.indexLoopEnd(i) Is that a bug/mistype/mis-version? You only defined indexLoopEnd as a function. |
_softwareengineering.273023 | In Stephen Cleary's article in MSDN magazine Introduction to Async/Await on ASP.NET he says that every thread pool thread on a modern OS has a 1MB stack. (modern OS == Windows 7/8 for this discussion) But I thought that this was 1MB of virtual memory, and that physical memory was allocated dynamically as the stack grew. Based on my dated C++ threading knowledge on other OSs, I believe that actual stack sizes rarely exceed 64k, especially in languages that make heavy use of the heap like .NET.Does Windows allocate 1MB of physical memory, or does the stack dynamically allocate more memory as needed? Maybe it is cheaper to allocate it all up front now and rely on swapping it out as needed? I was going to test this using the profiler but I don't see how to get the stack size from it. | How much physical memory is consumed by the stack of a .NET thread? | .net;multithreading;windows;stack;memory usage | The link Robert Harvey added as a comment does explain this more explicitly, but, by default, Windows allocates 1MB of virtual memory which is committed as physical memory as needed (as you described). |
_softwareengineering.180050 | I am an experienced Java programmer, and I want to create a complex web application requiring dynamic pages, drawings, etc (take SO as an example). Do I have to learn javascript/html in order to create such an application?It is not that I don't want to learn another language (I've done this before), but technology on the javascript environment seems to change so fast that when you finish learning one framework it is already obsolete. I have checked a number of java framework for web development (spring, play), but not deeply. So can these frameworks (or other possible java frameworks that I'm not aware of) be used without learning html/javascript? I also have some python experience. So if I can do the app in python it is also an option. | Do I have to learn html and javascript to create web applications? | java;javascript;python;web applications;html | You don't have to learn JavaScript and HTML to create web applications.But you will.If you really want to write webapps in mostly Java, have a look at the Google Web Toolkit, which does vast amounts of Java to JS, and can satisfy a good chunk of the code needed for a webapp. Django is a similar framework for Python.And if you really want to avoid writing HTML there are vast amounts of templates and What-you-see-is-what-you-get editors out there.But you see, regardless of the abstraction framework and HMTL templates you start with, at some point you'll be dissatisfied with the presentation. And so you'll get enough HTML/JS on your hands to change the one tiny little thing you want. And another thing. And another.And then one day you'll wake up in a cold sweat.And that's how you'll learn. That's how a lot of us learned, back in the era of point-and-click website makers like Geocities. After a while, if you're serious about the web, you'll learn the languages of the web, intentionally or not.So you don't have to learn HTML and JavaScript to make a site like StackOverflow. But if you really try and make a site like StackOverflow, you won't be able to stop yourself from learning them. |
_codereview.122730 | A friend of mine was practicing his programming skills with a textbook meant to prepare students for computer science exams. He asked for help with a specific task.The task is to capture user input (day of week and a year in the range 1500 to 2005 inclusive), and output all instances of the weekday in February that year.The differences between the Julian and Gregorian calendars are to be accounted for.Now, most likely the idea behind this task was to have the student create an algorithm to manually calculate the dates. However, as Java SE seems to be allowed in exams in my country, I came up with the idea of utilizing the GregorianCalendar class (which, despite its name, combines the Julian and Gregorian calendars).package calendar;import java.text.DateFormat;import java.util.GregorianCalendar;import java.util.HashMap;import java.util.Locale;import java.util.Scanner;public class CalendarTask { public static void main(String[] args) { GregorianCalendar cal = new GregorianCalendar(); HashMap<String, Integer> daysOfWeek = new HashMap<>(); daysOfWeek.put(monday, cal.MONDAY); daysOfWeek.put(tuesday, cal.TUESDAY); daysOfWeek.put(wednesday, cal.WEDNESDAY); daysOfWeek.put(thursday, cal.THURSDAY); daysOfWeek.put(friday, cal.FRIDAY); daysOfWeek.put(saturday, cal.SATURDAY); daysOfWeek.put(sunday, cal.SUNDAY); System.out.print(Enter day of week: ); Scanner sc = new Scanner(System.in); int dayOfWeek, year; try { dayOfWeek = daysOfWeek.get(sc.next().toLowerCase()); System.out.print(Enter year (1500-2005 inclusive): ); year = Integer.parseInt(sc.next()); if (year < 1500 || year > 2005) throw new Exception(); System.out.println(Output:); DateFormat df = DateFormat.getDateInstance(DateFormat.MEDIUM, Locale.GERMANY); cal.set(cal.YEAR, year); trySetDay(cal, dayOfWeek, 1); do { System.out.println(df.format(cal.getTime())); cal.add(cal.DAY_OF_MONTH, 7); } while (cal.get(cal.MONTH) == cal.FEBRUARY); } catch (Exception e) { System.out.println(Incorrect input!); } finally { sc.close(); } } private static void trySetDay(GregorianCalendar cal, int dayOfWeek, int weekOffset) { cal.set(cal.MONTH, cal.FEBRUARY); cal.set(cal.WEEK_OF_MONTH, weekOffset); cal.set(cal.DAY_OF_WEEK, dayOfWeek); if (cal.get(cal.MONTH) != cal.FEBRUARY) trySetDay(cal, dayOfWeek, weekOffset + 1); }}Example input and output (dates are output in DD.MM.YYYY format):Enter day of week: MondayEnter year (1500-2005 inclusive): 2000Output:07.02.200014.02.200021.02.200028.02.2000The trySetDay() method is meant for cases when the attempted day belongs to the previous month. In such a case, the week offset is increased by one to make sure we're dealing with February.This code works fine and I'm satisfied with it. The catch is there to handle NullPointerException (when attempting to assign null, a possible result of HashMap.get(), and also when parsing the year) and a generic Exception set to limit the possible input to the range 1500 to 2005.What can be improved about this code? Is there anything that caught your eye instantly and that could be done better? Any and all feedback is appreciated.Also, is it a good idea to access static class members via an instance of the class? Such as cal.MONDAY (where cal is an instance of GregorianCalendar), instead of GregorianCalendar.MONDAY? 
| Find all instances of a given weekday in February for a given year | java;datetime | Java 8 Time APIsInstead of the 'legacy' Calendar and DateFormat classes, you can rely on Java 8's new java.time.* APIs for more fluent chronology-related calculations.For starters, your manually constructed Map can be replaced with a simple look-up on the DayOfWeek enum:// scanner will be a wrapper over System.inprivate static DayOfWeek getDayOfWeek(Scanner scanner) { String values = Arrays.toString(DayOfWeek.values()); System.out.printf(Enter a day of week:%n%s%n, values); while (true) { try { return DayOfWeek.valueOf(scanner.nextLine().trim().toUpperCase()); } catch (IllegalArgumentException e) { System.err.printf(Please try again with one of these:%n%s%n, values); } }}The looping-validation ensures that only a valid DayOfWeek value is returned.Next, you can use a couple of TemporalAdjusters to get the LocalDates you require:firstInMonth(DayOfWeek): the first day-of-week of the month.lastDayOfMonth(): the last day of the month.next(DayOfWeek): the next day-of-week, i.e. 7 days from the LocalDate instance.Then, with a helpful serving of DateTimeFormatter, the main processing logic can just be:try (Scanner scanner = new Scanner(System.in)) { DayOfWeek dayOfWeek = getDayOfWeek(scanner); // getYear(Scanner) returns an int between the year range, MONTH = 2 LocalDate first = LocalDate.of(getYear(scanner), MONTH, 1) .with(TemporalAdjusters.firstInMonth(dayOfWeek)); LocalDate last = first.with(TemporalAdjusters.lastDayOfMonth()); DateTimeFormatter formatter = DateTimeFormatter.ofLocalizedDate(FormatStyle.MEDIUM) .withLocale(Locale.GERMANY); for (LocalDate date = first; !date.isAfter(last); date = date.with(TemporalAdjusters.next(dayOfWeek))) { System.out.println(formatter.format(date)); }}Other observationsIt's not recommended to throw a generic Exception as it's... too vague. You can consider using IllegalArgumentException for invalid year inputs outside \$[1500, 2005]\$, as the exception type is then more defined.More importantly, you shouldn't be throwing your own Exceptions and then catching it purely as a form of flow control.HashMap<String, Integer> daysOfWeek = new HashMap<>() can be better written as Map<String, Integer> daysOfWeek = new HashMap<>(), to program to an interface rather than the implementation.As illustrated above, you should also consider using try-with-resources on the Scanner instance for safe and efficient handling of the underlying I/O resource. |
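A quick way to cross-check the program's output from a shell is the classic cal utility; for the example run in the question (Mondays in February 2000) the Monday column reads 7, 14, 21, 28, matching the printed dates.

# Show February 2000 and read the Mo column: 7, 14, 21, 28.
cal 2 2000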
_unix.335690 | I recently overwrote ~100MB from the beginning of my 1TB external hdd using the dd command. This means my partition tables have likely been lost. fdisk -l shows no partition information.However since I actually had my drive mounted while issuing the dd command, the data on the drive (all partitions) can be accessed with the file explorer. The external hdd is still connected to the computer. This leads me to believe that the partition table can be recovered.Searches on this topic recommend data recovery tools that can restore the partition table, but these options assume the drive has been disconnected from the computer.Looking at /proc/partitions gives the size of each block device, but not their offsets in sectors.I assume that since I can view the file structure in nautilus, the partition offsets must be known. Is there a way to expose this information? | Recovering partitioning information from block devices | linux;partition;ntfs | You can get partition information from /sys, precisely from /sys/block/<disk>/<partition>/{start,size}.This shell function may help you:print_partitions(){ local disk=$1 local part local template=%-6s %16s %16s %16s\n printf $template Part. First sector Last sector # sectors for part in /sys/block/$disk/sd*; do st=$(cat $part/start) sz=$(cat $part/size) end=$((st + sz - 1)) printf $template ${part##*/} $st $end $sz done}Usage:$ print_partitions sddPart. First sector Last sector # sectorssdd1 2048 2099199 2097152sdd3 2099200 3907029167 3904929968Note: sectors here are 512 byte sectors.For a full dump of your partitions:for disk in /sys/block/sd*; do print_partitions ${disk##*/} echodone
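Since the start and size values exposed under /sys are in 512-byte sectors, converting them to byte offsets is plain arithmetic; those byte values are what a partitioning tool (parted, sfdisk, etc.) would need to recreate the table entries. The numbers below are simply the ones from the example output above.

# Byte offset and byte length of sdd1 from the example (512-byte sectors).
echo $(( 2048 * 512 ))      # start offset in bytes: 1048576
echo $(( 2097152 * 512 ))   # length in bytes: 1073741824 (1 GiB)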
_softwareengineering.266975 | I'm attempting to make an online game with a mini-economy in it. The challenge is that the only person selling anything will be the website itself, and hence the supply and demand curve doesn't apply (as it costs the website nothing to produce the items being sold). Products sold will have several factors that contribute to its value. Each of these factors can be represented by a number, but it is difficult to know if product X is better than product Y if X has a better A and Y has a better B. The monetary value of each factor (and thus, each product) is hard for me to calculate, which is why I would like an economy to do that for me.Some products are upper-end products, and should be inaccessible to players who don't have the money.There will be several types of products on the market, each serving completely different purposes in the game.I'm guessing that there will be a larger number of lower-level players.I have three ideas on how to develop it:Generate an initial price on each product. Each time a person buys a product A, the price of A goes up a fixed amount, X, and the price of every other product goes down X/(# of products).Generate an initial price on each product. Each time a person buys a product A, the price goes up X. Each minute, the price of A goes down Y.Produce each product at a fixed rate, with lower amounts of high-end products. Upon production, add the product to an auction, where each player can cast a bid on the amount he will pay. The initial bid will be generated.I believe that method #1 and #2 will overvalue low end products, and undervalue high end products, so #3 is my favorite. #3 also only has 1 value for me to generate, while #1 and #2 have multiple. However, #3 may have problems if the initial bid is too high for the products, or it produces products at a lower rate than it should, causing a shortage.Is auctioning off products the best way to determine the product's price? Is there a better way to do this? | How to code a one-sided virtual economy | algorithms;economics | Virtual economies can be very hard to get right. Fortunately, by not having players selling items they've crafted/looted themselves you've simplified the problem somewhat: you can control supply, which will allow you to prevent the massive deflation in item values that can occur when they're produced in much higher volumes than they're consumed.A couple of other problems to watch out for:Can players transfer items between themselves? If so, this will reduce demand for low level items substantially, as players trade away items they no longer need.(assuming you're talking about selling items for game currency, not real money) what prevents high level players from accumulating extremely large amounts of cash, which might cause runaway inflation?Assuming you have good answers to these (item decay, where items quality is reduced with use, is one possible answer to both problems, but a lot of players don't like it), auctions are a good bet. You can generate the initial bids as a percentage of recent selling prices for similar items, which should solve that problem. And then keep an eye out for products which often don't sell, or which always attract a lot of bids, as these could be warnings that you've got the resupply rate wrong.Here are a few must-read articles on online game economies:http://psychochild.org/?p=1179http://www.raphkoster.com/2006/09/07/agc-mmo-economies/http://www.raphkoster.com/2012/03/20/do-auction-houses-suck/ |
_unix.381695 | I have issues with reading csv format like this:foo,bar foo, foo bar, foo, bar, farbar, foobar foo, foo , bar, fobar, barTechnically, both lines should have 5 fields according to the separator ,.awk -F, '{print NF}' resolver.csv 66This is where the problem arises. AWK treats , as a separator even when it appears between quotation marks, and provides inaccurate results. Giving the separator as -F ',' only makes things worse.awk -F, '{print $3}' test.csv foo bar foo Is there any workaround? | Problem with csv and separator awk | text processing;awk;csv | null
_unix.116699 | What is the easiest way of installing JRE on my Debian OS (Linux)? | What's the easiest way of installing JRE on Debian? | linux;debian;java | It depends on the version of the language and the version of the implementation you're after.Sun/Oracle JRESun used to provide .deb packages for Java 6 that were present in Debian's official packages. So installing it was pretty straightforward:sudo apt-get install sun-java6-jreHowever, they do not provide .deb packages for Java 7. They do, however, provide binary packages you can install like this:wget http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gztar zxvf jdk-7-linux-x64.tar.gz -C /usr/lib64/jvm/update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0/bin/java 1065Alternatively, you can look for user-supplied repos that provide .deb packages (trust these at your own risk, since they are not officially supported by Debian). You can add this repo (source):echo deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main | tee -a /etc/apt/sources.listecho deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise main | tee -a /etc/apt/sources.listapt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886apt-get updateInstalling Java 7 becomes straightforward:apt-get install oracle-java7-installerOpenJDKOpenJDK is an open source implementation of the Java language specification and it's available in the Debian repository. You can install it with apt-get:sudo apt-get install openjdk-6-jresudo apt-get install openjdk-7-jre
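Whichever of the routes above is taken, it is worth confirming afterwards which runtime actually ends up on the PATH; on current Debian releases the metapackage default-jre (mentioned here as an addition, not part of the original answer) also pulls in the distribution's default OpenJDK runtime.

# Install the distribution's default OpenJDK runtime (current Debian).
sudo apt-get install default-jre

# Check which java the system will run, and switch between installed alternatives.
java -version
sudo update-alternatives --config java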
_unix.46127 | I have a directory with a big load of sub directories. I own all of them, and the permissions are all 777.pascal@azazel /box $ ls -altotal 147872drwxr-xr-x 293 root root 12288 aoû 22 19:44 .drwxr-xr-x 25 root root 4096 jun 28 18:49 ..drwxrwxrwx 7 pascal pascal 4096 aoû 4 2010 131082[...]I want to rename the directories:pascal@azazel /box $ mv 131073 NewNamemv: impossible de déplacer 131073 vers NewName: Permission non accordéeThe message is in French, basically it said that I don't have the permission to rename (move) the directory.What is happening? | Can't rename a directory that I own | permissions;files;directory;rename | Renaming a file (whatever its type, including directories) means changing its name in the directory where it is located. In fact, renaming and moving inside the filesystem are the same operation; the file is detached from its old name and attached to its new name, which requires modifying both the source and the destination directory (for renaming inside one directory, the source and target directories are the same). The upshot is that you need write permission on the containing directory, /box in your example.
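A short demonstration of the point made in the answer, using the names from the question; the chown/chmod lines are illustrative options to be run by root, not the only possible fix.

# The containing directory is owned by root and not writable by pascal,
# which is why the rename fails even though the subdirectory itself is 777:
ls -ld /box          # drwxr-xr-x ... root root ... /box

# Either perform the rename as root...
sudo mv /box/131073 /box/NewName

# ...or have root grant pascal write access to the containing directory:
sudo chown pascal /box    # or: sudo chmod o+w /box (much more permissive)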
_unix.62707 | I am refining a bash/yad script which runs when an event reminder is triggered in KOrganizer. (yad is a drop-in replacement for zenity. It is being actively developed and has many more features and options.)When a normal KOrganizer reminder is triggered - especially a recurring one, you get a lot of information that is less than useful.This is a very simple script, but it makes a big difference. It pops up an information box on top of the reminder that can have whatever you want in it to clearly describe the event.I'm having trouble getting it to work again. (It worked fine around a year ago when I last used it.)I have isolated the problem to the way KOrganizer passes arguments to the script. Passing HAPPY_BIRTHDAY works.Modifying the script to accept multiple arguments and passing it HAPPY BIRTHDAY works.But, what I want to pass it is something like<span color=\#FFD700\>\t\t\t\tHAPPY BIRTHDAY\!\n\n\t\t\t\t\tTo ME\!</span> which works from the command line, but I have no idea how to do it from KOrganizer.The script works fine from the command line. #!/bin/bash## custom_reminder## Copyleft 01/26/2013 - JPmicrosystems## Creates a pop up reminder for use with## korganizer## Usage: custom_reminder reminder text## reminder text can contain some special characters like \n## Theoretically, it can also conatain some markup tags## Got simple span to work in bash, but not insde korganizer yetif [ -z ${1} ]then MSG=ERROR - NO MESSAGEelse MSG=${1}fikstart --ontop -- yad --title Personal Event Calendar --info --text=${MSG} --width=300 --height=100The script is installed using edit existing reminder. Select What: Run application/script and enter the script name custom_reminder in Application/Script and the text in Arguments:. Any ideas would be appreciated. | Passing arguments to KOrganizer event reminder bash/yad scripts | bash;productivity;kde4;calendar | null |
_cs.23474 | I have a question that requires me to write the rules for parsing a Turing machine. This is the question:The PROBLEM involves writing a set of Turing Machine rules that will read and determine whether or not an input corresponds to the rules of a Turing Machine. This is the first step of a Universal Turing Machine. For example, if the input is a legal set of rules, then the machine shall accept this input (and any other legal input that describes a Turing Machine).The input would be IO = current state with the number of I's indicating what stateI = would be the current symbol being true with O being falseIIO = next state I = next symbol with O being falseI or O = direction with I being left and O being rightso a legal input would be IOOIIOOI which would be state 1 = current state false = next state 2 = next state false = leftHow would I write the rule so that, if I ran it through and it detected that the rules did not conform to the legal input, it would place II at the beginning of the input, and if it's correct, put OO at the beginning? | Parsing Turing Machine | turing machines;parsing | null
_unix.343081 | I added a persistent volume to an OpenStack server. I created the volume from the default image (CentOS 7). I want to boot from this persistent volume from now on, and maybe keep the other disk just for temporary files or remove it. How can I do this? | Change disk to boot from | centos;boot;hard disk | null
_unix.315876 | I have 2 texts in two different files, file1 and file2. I need a command that takes file1 and file2 as arguments and prints them on the terminal:This is text 1. This is This is text 2. This istext 1.This is text 1. This text 2.This is text 2. Thisis text 1. This is text 1. is text 2. This is text 2. This is text 1. This is text 2. | How to print 2 texts in two columns | text processing;columns;text formatting | For columns of width 10, spaced 20 characters apart:paste <(fold file1 -w 10) <(fold file2 -sw 10) | pr -t -e20fold options-w is the column width-s avoids splitting words across linespr options-t omits the header and footer (date, time and page number)-eN sets N to be the number of spaces replacing the tab produced by paste
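If the widths change often, the answer's pipeline generalizes naturally into a small shell function; the function name and default values below are arbitrary choices, not part of the original answer.

# two_cols FILE1 FILE2 [COLUMN_WIDTH] [GAP] -- print the two files side by side.
two_cols() {
    local w=${3:-10} gap=${4:-20}
    paste <(fold -w "$w" "$1") <(fold -sw "$w" "$2") | pr -t -e"$gap"
}

two_cols file1 file2 10 20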
_unix.490 | I'm looking for some common problems in unix system administration and ways that shell scripting can solve them. Completely for self-educational purposes. Also, I'd like to know how you would go about learning shell scripting. | Practical tasks to learn shell scripting | shell;scripting;learning | Any time you EVER find yourself doing something multiple times, script it. Think as lazy as you possibly can. Computers were built to do all of that menial crap. Anything that smells like busy work needs a shell script.Personally, I learned by rummaging around in Slackware for a couple of years. See what happens when you strip your system back as much as possible. Learn to be comfortable with text. While everybody else is oohing and aahing over NetworkManager, learn how simple it is to make your own damn NetworkManager. Sure, it might not have as many use cases, but you can get something up and running, dynamically connecting via ethernet and wireless on-demand pretty easily.
_unix.17023 | Coming from the Windows world, I have found the majority of the folder directory names to be quite intuitive:\Program Files contains files used by programs (surprise!)\Program Files (x86) contains files used by 32-bit programs on 64-bit OSes\Users (formerly Documents and Settings) contains users' files, i.e. documents and settings\Users\USER\Application Data contains application-specific data\Users\USER\Documents contains documents belonging to the user\Windows contains files that belong to the operation of Windows itself\Windows\Fonts stores font files (surprise!)\Windows\Temp is a global temporary directoryet cetera. Even if I had no idea what these folders did, I could guess with good accuracy from their names.Now I'm taking a good look at Linux, and getting quite confused about how to find my way around the file system.For example:/bin contains binaries. But so do /sbin, /usr/bin, /usr/sbin, and probably more that I don't know about. Which is which?? What is the difference between them? If I want to make a binary and put it somewhere system-wide, where do I put it?/media contains external media file systems. But so does /mnt. And neither of them contain anything on my system at the moment; everything seems to be in /dev. What's the difference? Where are the other partitions on my hard disk, like the C: and D: that were in Windows?/home contains the user files and settings. That much is intuitive, but then, what is supposed to go into /usr? And how come /root is still separate, even though it's a user with files and settings?/lib contains shared libraries, like DLLs. But so does /usr/lib. What's the difference?What is /etc? Does it really stand for et cetera, or something else? What kinds of files should go in there -- global or local? Is it a catch-all for things no one knew where to put, or is there a particular use case for it?What are /opt, /proc, and /var? What do they stand for and what are they used for? I haven't seen anything like them in Windows*, and I just can't figure out what they might be for.If anyone can think of other standard places that might be good to know about, feel free to add it to the question; hopefully this can be a good reference for people like me, who are starting to get familiar with *nix systems.*OK, that's a lie. I've seen similar things in WinObj, but obviously not on a regular basis. I still don't know what these do on Linux, though. | Standard and/or common directories on Unix/Linux OSes | linux;directory structure;fhs | Linux distributions use the FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html You can also try man hier.I'll try to sum up answers your questions off the top of my head, but I strongly suggest that you read through the FHS:/bin is for non-superuser system binaries/sbin is for superuser (root) system binaries/usr/bin & /usr/sbin are for non-critical shared non-superuser or superuser binaries, respectively/mnt is for temporarily mounting a partition/media is for mounting many removable media at once/dev contains your system device files; it's a long story :)The /usr folder, and its subfolders, can be shared with other systems, so that they will have access to the same programs/files installed in one place. Since /usr is typically on a separate filesystem, it doesn't contain binaries that are necessary to bring the system online./root is separate because it may be necessary to bring the system online without mounting other directories which may be on separate partitions/hard drives/serversYes, /etc stands for et cetera. 
Configuration files for the local system are stored there./opt is a place where you can install programs that you download/compile. That way you can keep them separate from the rest of the system, with all of the files in one place./proc contains information about the kernel and running processes/var contains variable size files like logs, mail, webpages, etc.To access a system, you generally don't need /var, /opt, /usr, /home; some of the potentially largest directories on a system.One of my favorites, which some people don't use, is /srv. It's for data that is being hosted via services like http/ftp/samba. I've seen /var used for this a lot, which isn't really its purpose.
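A few standard commands make the points in the answer easy to verify on your own machine (the exact output naturally depends on the distribution and how its filesystems are laid out):

# The built-in description of the filesystem hierarchy.
man hier

# See which of the large directories are mounted as separate filesystems here.
df -h / /usr /var /home

# Compare what actually lives in the different binary directories.
ls /bin /sbin /usr/bin /usr/sbin | less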
_codereview.171784 | The following code is an implementation of an hashtable in C.hashTable.c: #include <stdio.h> #include <string.h> #include <stdlib.h> #include hashTable.h int hashCode(int key) { return key % SIZE; } /* By given key returns a pointer to a DataItem with the same key Input: Pointer to hashArray array, int key value Output: If key found - the pointer to a DataItem else NULL */ DataItem *getValueByKey (DataItem** hashArray, int key) { /* Get the hash */ int hashIndex = hashCode(key); /* Move in array until an empty */ while(hashArray[hashIndex] != NULL) { if(hashArray[hashIndex]->key == key) return hashArray[hashIndex]; /* Go to next cell */ ++hashIndex; /* Wrap around the table */ hashIndex %= SIZE; } return NULL; } /* Adding a DataItem to a hashArray Input: Pointer to hashArray array, int key value, char* (string) data value Output: None */ void putValueForKey (DataItem** hashArray, int key, char *data) { /* Get the hash */ int hashIndex = hashCode(key); DataItem *item = (DataItem*) malloc(sizeof(DataItem)); item->data = data; item->key = key; /* Move in array until an empty or deleted cell */ while(hashArray[hashIndex] != NULL) { /* Go to next cell */ ++hashIndex; /* Wrap around the table */ hashIndex %= SIZE; } hashArray[hashIndex] = item; } /* Deleting a DataItem node from an hash array Input: Pointer to hashArray array, pointer to the item we want to delete Output: The deleted item pointer */ DataItem* deleteHash (DataItem** hashArray, DataItem* item) { int key; int hashIndex; if (item == NULL) { return NULL; } key = item->key; /* Get the hash */ hashIndex = hashCode(key); /* Move in array until an empty */ while(hashArray[hashIndex] != NULL) { if(hashArray[hashIndex]->key == key) { DataItem* temp = hashArray[hashIndex]; /* Assign a dummy item at deleted position */ hashArray[hashIndex] = NULL; return temp; } /* Go to next cell */ ++hashIndex; /* Wrap around the table */ hashIndex %= SIZE; } return NULL; } /* displaying an hash array Input: Pointer to hashArray array Output: None */ void displayHashTable (DataItem** hashArray) { int i = 0; for (i = 0; i < SIZE; i++) { if (hashArray[i] != NULL) printf( (%d,%s),hashArray[i]->key,hashArray[i]->data); else printf( ~~ ); } printf(\n); } /* Freeing an hash array Input: Pointer to hashArray array Output: None */ void freeHashTable (DataItem** hashArray) { int i = 0; for (i = 0; i < SIZE; ++i) { if (hashArray[i] != NULL) { free(hashArray[i]); } } } /* Initialize an hash array by setting all the nodes to NULL Input: Pointer to hashArray array Output: None */ void initializeHashArray (DataItem** hashArray) { int i = 0; for (i = 0; i < SIZE; i++) { hashArray[i] = NULL; } } int main() { DataItem* hashArray[SIZE]; initializeHashArray(hashArray); putValueForKey(hashArray, 100, MAIN); putValueForKey(hashArray, 107, LOOP); putValueForKey(hashArray, 121, END); putValueForKey(hashArray, 122, STR); putValueForKey(hashArray, 129, LENGTHHHHHHHHHHHHHHHHHHHHH); putValueForKey(hashArray, 132, K); putValueForKey(hashArray, 133, M1); displayHashTable(hashArray); DataItem* item = getValueByKey(hashArray, 100); if (item != NULL) { printf(Element found: %s\n, item->data); } else { printf(Element not found\n); } deleteHash(hashArray, item); item = getValueByKey(hashArray, 100); if (item != NULL) { printf(Element found: %s\n, item->data); } else { printf(Element not found\n); } displayHashTable(hashArray); freeHashTable(hashArray); }I believe this code is well commented and very understandable, if you think it's not please let me know.The main is 
there just for testing and will not be part of the final code.My main concerns are: The free function (freeHashTable) because I don't know how to test it so I don't know if it's working.The while loops that might get into an infinite loop condition for some reason.Also this is the first time I'm trying to implement hash table, so if I'm missing something from the logical aspect of hash code explanation will be very welcomed.hashTable.h:#define SIZE 20struct DataItem { char *data; int key;}typedef DataItem;DataItem *getValueByKey (DataItem** hashArray, int key);void putValueForKey (DataItem** hashArray, int key, char *data);DataItem* deleteHash (DataItem** hashArray, DataItem* item);void displayHashTable (DataItem** hashArray);void freeHashTable (DataItem** hashArray);void initializeHashArray (DataItem** hashArray);Thanks in advance! | C hashtable implementation | c;hash table | null |
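A note on the infinite-loop concern raised above: putValueForKey probes with hashIndex %= SIZE until it finds a NULL slot, so once all SIZE slots are occupied the while loop can never terminate. A minimal sketch of a guard, reusing SIZE and DataItem exactly as defined in the question (the probeCount counter and the int return code are illustrative additions, not part of the original design):

    int putValueForKeySafe(DataItem **hashArray, int key, char *data)
    {
        int hashIndex = key % SIZE;
        int probeCount = 0;
        /* after SIZE probes every slot has been visited once: the table is full */
        while (hashArray[hashIndex] != NULL)
        {
            if (++probeCount >= SIZE)
                return -1;
            hashIndex = (hashIndex + 1) % SIZE;
        }
        DataItem *item = malloc(sizeof(DataItem));
        if (item == NULL)
            return -1;
        item->key = key;
        item->data = data;
        hashArray[hashIndex] = item;
        return 0;
    }

For the freeHashTable concern, the usual check is to run the test binary under valgrind --leak-check=full and confirm no blocks are reported as leaked after main returns.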
_unix.141571 | To paste many files, whose names are incremental numbers:paste {1..8}| column -s $'\t' -tWhat if your files wasn't named by number, but only words?It can be up to ten files, what should I do?In addition, you have a list of files that contains all the files you want.So far, my approach is:mkdir pastej=0; while read i; do let j+=1; cp $i/ paste/$j; done<list;cd paste; paste {1..8}| column -s $'\t' -tI have no problem with this approach, I just want to ask if there is any shorter one.Actually my files have the same name, just on different locations, for instance 1MUI/PQR/A/sum, 2QHK/PQR/A/sum, 2RKF/PQR/A/sum. The paste command should be paste {list}/PQR/A/sum. The list file is:1MUI2QHK2RKF... | How to use paste command for many files whose names are not numbers? (paste columns from each file to one file) | shell script;command line;paste | With bash 4mapfile -t <listpaste ${MAPFILE[@]} | column -s $'\t' -tfor the paste {list}/PQR/A/sum version of the questionmapfile -t <listpaste ${MAPFILE[@]/%//PQR/A/sum} | column -s $'\t' -t |
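For readers unfamiliar with the expansion in the answer above: ${MAPFILE[@]/%//PQR/A/sum} uses the ${var/%pattern/string} form with an empty pattern, which appends the string to the end of every array element. A small illustration, assuming a list file containing the three IDs from the question:

    mapfile -t < list
    printf '%s\n' "${MAPFILE[@]/%//PQR/A/sum}"
    # 1MUI/PQR/A/sum
    # 2QHK/PQR/A/sum
    # 2RKF/PQR/A/sum

so paste receives the per-directory sum paths directly, without the copy-and-rename loop from the question.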
_unix.337743 | I am unable to filter traffic where packet loss is > 0. I am attempting the following, but I get an error instead:racluster -n -m saddr/24 daddr proto -% -s stime dur trans proto dport sport saddr dir daddr pkts bytes loss sjit djit | ra - loss pkts gt 0Essentially, I am wanting to find traffic where packet loss occurs. | How to set network traffic filter for packet loss? | networking;monitoring;argus | null |
_unix.299901 | I'm using Elementary OS 0.3.4 Freya 64 bits. I was just doing okay, but all of a sudden google chrome stopped working. I tried opening again but it just won't work. I uninstalled using sudo apt-get purge google-chrome-stable and installed it again with the .deb from the official website, but nothing works.I then tried launching it from the terminal and this is what I get:Gtk-Message: Failed to load module pantheon-filechooser-module[19712:19712:0801/205511:ERROR:browser_main_loop.cc(261)] GTK theme error: Unable to locate theme engine in module_path: pixmap,[19712:19712:0801/205511:ERROR:browser_main_loop.cc(261)] GTK theme error: Unable to locate theme engine in module_path: pixmap,[19712:19712:0801/205511:ERROR:browser_main_loop.cc(261)] GTK theme error: Unable to locate theme engine in module_path: pixmap,[19712:19712:0801/205511:ERROR:browser_main_loop.cc(261)] GTK theme error: Unable to locate theme engine in module_path: pixmap,[19712:19712:0801/205511:ERROR:browser_main_loop.cc(261)] GTK theme error: Unable to locate theme engine in module_path: pixmap,Bus error (core dumped)I didn't install any update, it just stopped working and never open again. Any help will be appreciated. | Google Chrome stopped working and won't open | ubuntu;chrome;elementary os;pantheon | null |
_webmaster.60664 | I am currently trying to find a host for my website. The website allows people to host projects in a somewhat similar way to this. I would rather not reveal more about what it is until I release it. The problem is, that with users uploading files for their mods/projects/etc I would quickly reach any inode limit, especially with my current folder structure.http://www.example.com/files/HOrrZCINYUoTxPVzaRrRFYUQRDxPPRun/wiD4Pj38Tkq/SomeFileUploadedWithTheProperNameStillHere.jar.As you can see from that, it is a quite long URL. The first set of jumble is the project id (every project has one), it stays the same for all files of the same project. The second set of jumble is the file id, randomly generated for each file. There is a file id so that files of the same name can be uploaded to each project (Example would if someone uploads 2 MyProject.jar files, one for version 1 and another for version 2). So far I cannot find any hosts that allow for unlimited inodes, every host has a 25,000-500,000 limit (hence too small). Even if that was big enough, I would prefer not to have the imminent loom of inode limit bypassing over my head all the time (It would drive me insane). I would like either a link to a host with unlimited MySQL, disk space, and inodes. Or another way to do this. I was thinking about storing the files in a MySQL database, however it seems like most hosts limit those too (Including the database's files in the inode count). | How do I work around hitting the inode limit imposed by web hosts? | web hosting | null |
_codereview.141919 | Starting with the fact that creating the classic database handler in Android is really annoying and it usually takes a lot of time since you have to create one handler for each object, I thought at creating a generic one that could avoid this long work.I'm posting this code for many reasons, the main one being that I would love some suggestions about how to optimize this code way to make it as fast as possible.This code is working, I don't need a fix, but I will surely need some optimizations because I'm not a senior and surely someone here knows how to perform the same operations with less memory impact.How to use it:At app start:Simply call this code: DatabaseHelper db = new DatabaseHelper(this); db.OpenDB(); try { db.CreateTable(new myClass1()); db.CreateTable(new myClass2()); ... } catch (Exception e) { e.printStackTrace(); }Now all tables are created or updated.Generic class(Every class that needs to be a database table, must extend this class)PS: the id field must be called id and must be UUIDimport android.content.Context;import java.util.ArrayList;import java.util.List;import java.util.UUID;interface IGenericClass { ArrayList<?> SelectAll(Class<?> type, Context ctx, String whereClause); int SelectCount(Class<?> type, Context ctx, String whereClause); boolean SaveAll(List<?> objects, Context ctx); boolean Save(Object object, Context ctx); boolean Save(Context ctx); boolean UpdateObject(Context ctx); Integer SumColumn(Class<?> type, Context ctx, String whereClause, String columnName); boolean DeleteAll(Class<?> type, Context ctx); boolean Delete(Class<?> ciboClass, Context ctx, String whereClause); Object SelectById(Class<?> type, Context ctx, UUID id);}public class GenericClass implements IGenericClass { public ArrayList<?> SelectAll(Class<?> type, Context ctx, String whereClause) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); ArrayList<?> returnList = new ArrayList<>(); try { return db.SelectAll(this.getClass(), whereClause); } catch (Exception ex) { return null; } } public int SelectCount(Class<?> type, Context ctx, String whereClause) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); try { return db.SelectCount(this.getClass(), whereClause); } catch (Exception ex) { return -1; } } public Integer SumColumn(Class<?> type, Context ctx, String whereClause, String columnName) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); try { return db.SumColumn(this.getClass(), whereClause, columnName); } catch (Exception ex) { return -1; } } public boolean Save(Object object, Context ctx) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); return db.Save(object); } public boolean Save(Context ctx) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); return db.Save(this); } public boolean SaveAll(List<?> objects, Context ctx) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); return db.SaveAll(objects); } public boolean UpdateObject(Context ctx) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); db.UpdateObject(this); return true; } public boolean DeleteAll(Class<?> type, Context ctx) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); return db.DeleteAll(type); } public boolean Delete(Class<?> ciboClass, Context ctx, String whereClause) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); return db.Delete(ciboClass, whereClause); } public Object SelectById(Class<?> type, Context ctx, UUID id) { DatabaseHelper db = new DatabaseHelper(ctx); db.OpenDB(); return db.SelectById(type, id); }}Database 
HelperThis is the handler for the SQLiteDatabaseimport android.content.Context;import android.database.Cursor;import android.database.sqlite.SQLiteDatabase;import android.os.Build;import android.util.Pair;import java.lang.reflect.Field;import java.util.ArrayList;import java.util.Date;import java.util.List;import java.util.Objects;import java.util.UUID;interface IDatabaseHelper { ArrayList<?> SelectAll(Class<?> tipo, String whereClause); int SelectCount(Class<?> type, String whereClause); boolean Save(Object object); boolean SaveAll(List<?> objects); boolean OpenDB(); boolean CreateTable(Object object); void Close(); Integer SumColumn(Class<?> type, String whereClause, String columnName); boolean DeleteAll(Class<?> type); Object SelectById(Class<?> type, UUID id); boolean Delete(Class<?> type, String whereClause); boolean UpdateObject(Object objToUpdate);}public class DatabaseHelper implements IDatabaseHelper { private static final String DATABASE_NAME = myDatabase.db; private static String DATABASE_FULLPATH = ; private static SQLiteDatabase database;// private static SimpleDateFormat simpleDateFormat = new SimpleDateFormat(dd/MM/yyyy, Locale.getDefault()); //constructor public DatabaseHelper(Context context) { DATABASE_FULLPATH = context.getFilesDir().getPath() + / + DATABASE_NAME; } //returns all object of the given class public ArrayList<?> SelectAll(Class<?> type, String whereClause) { if (whereClause == null) { whereClause = ; } String query = select * from + type.getSimpleName() + + whereClause; Cursor cursor = database.rawQuery(query, null); ArrayList list = new ArrayList(); try { if (cursor.moveToFirst()) { while (!cursor.isAfterLast()) { Object o = GetObjectFromCursor(type, cursor); list.add(o); cursor.moveToNext(); } } cursor.close(); return list; } catch (Exception ex) { return null; } } //returns the count of records of given type public int SelectCount(Class<?> type, String whereClause) { try { if (whereClause == null) { whereClause = ; } String query = select count(*) from + type.getSimpleName() + + whereClause; Cursor cursor = database.rawQuery(query, null); cursor.moveToFirst(); int count = cursor.getInt(0); cursor.close(); return count; } catch (Exception ex) { return -1; } } //save an object public boolean Save(Object object) { //we build the query for each object String insertQuery = insert into + object.getClass().getSimpleName() + (; try { ArrayList<Pair<String, Object>> name_value = GetFieldNameValue(object); String tableNames = ; String tableValues = ; //for each record we add the values and the field names for (Pair<String, Object> pair : name_value) { tableNames += pair.first + ,; tableValues += ' + pair.second.toString() + ' + ,; } //remove the last comma tableNames = tableNames.substring(0, tableNames.length() - 1); tableValues = tableValues.substring(0, tableValues.length() - 1); //finished adjusting query insertQuery += tableNames + )values( + tableValues + );; database.execSQL(insertQuery); return true; } catch (Exception ex) { return false; } } //save multiple objects into db public boolean SaveAll(List<?> objects) { int saved = 0; for (Object object : objects) { if (Save(object)) { saved++; } else { return false; } } return saved == objects.size(); } //open db public boolean OpenDB() { try { database = SQLiteDatabase.openOrCreateDatabase(DATABASE_FULLPATH, null, null); return true; } catch (Exception ex) { return false; } } //create a table if not exists public boolean CreateTable(Object object) { try { String query = GetCreateQueryFromObject(object); 
database.execSQL(query); //we check for each field if it exists, if not it create the field on the database String className = object.getClass().getSimpleName(); try { List<Pair<String, Object>> fields = GetFieldNameValue(object); for (Pair<String, Object> c : fields) { if (!CheckColumnExistInTable(className, c.first)) { String addColumnSql = alter table ; addColumnSql += className; addColumnSql += add ; addColumnSql += c.first; String columnType = GetSQLFieldType(c.first, c.second.getClass().getSimpleName()); addColumnSql += + columnType; database.execSQL(addColumnSql); } } return true; } catch (Exception ex) { return false; } }catch(Exception ex){ return false; } } //closes db public void Close() { database.close(); } //sum a column value for a given table public Integer SumColumn(Class<?> type, String whereClause, String columnName) { String sql = select sum( + columnName + ) as total from + type.getSimpleName() + + whereClause; Cursor cursor = database.rawQuery(sql, null); int columnIndex = cursor.getColumnIndex(total); if (columnIndex == -1) { return 0; } if (cursor.moveToFirst()) { int value = cursor.getInt(columnIndex); cursor.close(); return value; } else { cursor.close(); return 0; } } //delete all record in a table public boolean DeleteAll(Class<?> type) { String sql = delete from + type.getSimpleName(); database.execSQL(sql); return true; } //select a record from id public Object SelectById(Class<?> type, UUID id) { String query = select * from + type.getSimpleName() + where id=' + id.toString() + '; Cursor cursor = database.rawQuery(query, null); if (cursor.moveToFirst()) { try { Object object = GetObjectFromCursor(type, cursor); cursor.close(); return object; } catch(Exception ex){ return null; } } else { return null; } } //delete all records with a given condition public boolean Delete(Class<?> type, String whereClause) { try { String sql = delete from + type.getSimpleName() + + whereClause; database.execSQL(sql); return true; } catch (Exception ex) { return false; } } //update an object from his id public boolean UpdateObject(Object objToUpdate) { try { Field field = objToUpdate.getClass().getField(id); int id = (int) field.get(objToUpdate); String whereClause = where id = + id; String sqlQuery = update + objToUpdate.getClass().getSimpleName() + set ; ArrayList<Pair<String, Object>> name_value = GetFieldNameValue(objToUpdate); //for each field we add name and value for (Pair<String, Object> pair : name_value) { if (!pair.first.equals(id)) { sqlQuery += pair.first + =; sqlQuery += ' + pair.second.toString() + ' + ,; } } sqlQuery = sqlQuery.substring(0, sqlQuery.length() - 1); sqlQuery += ; sqlQuery += whereClause; database.execSQL(sqlQuery); return true; } catch (Exception ex) { return false; } } private boolean CheckColumnExistInTable(String tableName, String columnName) { Cursor mCursor = null; try { // Query 1 row mCursor = database.rawQuery(SELECT * FROM + tableName + LIMIT 0, null); // getColumnIndex() gives us the index (0 to ...) 
of the column - otherwise we get a -1 return mCursor.getColumnIndex(columnName) != -1; } catch (Exception Exp) { return false; } finally { if (mCursor != null) mCursor.close(); } } //given a cursor and a class, it returns the object from the cursor private Object GetObjectFromCursor(Class<?> tipo, Cursor cursor) throws Exception { Field[] fields = tipo.getFields(); Object o = tipo.newInstance(); for (int i = 0; i < fields.length; i++) { Object fieldValue = GetCursorFieldValue(cursor, i); if (fieldValue != null) { o = SetUnknownFieldValue(o, cursor.getColumnName(i), fieldValue); } } return o; } //returns from a given object a fieldName - fieldValue map private ArrayList<Pair<String, Object>> GetFieldNameValue(Object object) throws Exception { Field[] fields = object.getClass().getFields(); ArrayList<Pair<String, Object>> pairs = new ArrayList<>(); for (Field f : fields) { Object value = GetUnknownObjectFieldValue(f, object); String fieldName = f.getName(); if (value == null || fieldName.isEmpty()) { continue; } pairs.add(new Pair(fieldName, value)); } return pairs; } //returns from an object and a field name, the value private Object GetUnknownObjectFieldValue(Field field, Object object) throws Exception { field.setAccessible(true); Object o = field.get(object); //we save dates as longs Date date = new Date(); UUID uuid = UUID.randomUUID(); if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) { if (o != null && Objects.equals(o.getClass(), date.getClass())) { return new Date((long) o); } if (o != null && Objects.equals(o.getClass(), uuid.getClass())) { return o.toString(); } } else { if (o != null && o.getClass().equals(date.getClass())) { return new Date((long) o); } if (o != null && o.getClass().equals(uuid.getClass())) { return o.toString(); } } return o; } //we set the value val on the field field. 
we use this to value an object without having its name private Object SetUnknownFieldValue(Object object, String fieldName, Object fieldValue) throws Exception { Class<?> clazz = object.getClass(); Field field = clazz.getDeclaredField(fieldName); field.setAccessible(true); Object fieldCasted = CastField(field.getType(), fieldValue); field.set(object, fieldCasted); return object; } //we take the field type and the object way to convert the object in the required field private Object CastField(Class fieldType, Object fieldValue) throws Exception { switch (fieldType.getSimpleName().toLowerCase()) { case boolean: if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) { if (Objects.equals(fieldValue.getClass().getSimpleName(), String)) { return Objects.equals(fieldValue, true); } } else { if (fieldValue.getClass().getSimpleName().equals(String)) { return fieldValue.equals(true); } } break; case double: if (fieldValue == null) { return null; } return (double) Float.parseFloat(fieldValue.toString()); case date: return new Date((long) fieldValue); case uuid: return UUID.fromString((String) fieldValue); default: return fieldValue; } return fieldValue; } //return a generic field value from cursor private Object GetCursorFieldValue(Cursor cursor, int i) { switch (cursor.getType(i)) { /* FIELD_TYPE_NULL FIELD_TYPE_INTEGER FIELD_TYPE_FLOAT FIELD_TYPE_STRING FIELD_TYPE_BLOB */ case 0: return null; case 1: return cursor.getInt(i); case 2: return cursor.getFloat(i); case 3: return cursor.getString(i); case 4: return cursor.getBlob(i); default: cursor.close(); return null; } } //prende un oggetto e ne crea la query di creazione tabella private String GetCreateQueryFromObject(Object object) throws Exception { String fullQuery = create table if not exists ; String objectName = GetTableName(object); fullQuery += objectName + ; String properties = GetPropertiesFromObject(object); fullQuery += ( + properties + );; return fullQuery; } //returns object properties as: (fieldName fieldType, fieldName fieldType,..) 
private String GetPropertiesFromObject(Object object) throws Exception { Class<?> tClass = object.getClass(); Field[] fieldsArray = tClass.getFields(); ArrayList<Pair<String, String>> fieldMap = GetFields(fieldsArray);//1 field name, 2 field type String fields = ; for (Pair<String, String> field : fieldMap) { fields += field.first + ; fields += GetSQLFieldType(field.first, field.second) + , ; } return fields.substring(0, fields.length() - 2); } //returns from a given type, the sql type required private String GetSQLFieldType(String fieldName, String fieldType) throws Exception { if (fieldName.toLowerCase().equals(id)) { return TEXT PRIMARY KEY UNIQUE; } switch (fieldType.toLowerCase()) { case uuid: return TEXT; case string: return TEXT; case int: return INT; case double: return DOUBLE; case boolean: return BOOLEAN; case float: return FLOAT; case integer: return INT; case date: return INT; default: return BLOB; } } //returns from an object the table name private String GetTableName(Object object) { return object.getClass().getSimpleName(); } //returns a map with an object properties as <fieldType-fieldName> private ArrayList<Pair<String, String>> GetFields(Field[] fields) { ArrayList<Pair<String, String>> pairs = new ArrayList<>(); for (Field f : fields) { if (!f.getName().equals(shadow$_klass_) && !f.getName().equals(shadow$_monitor_) && !f.getName().equals($change)) { pairs.add(new Pair<>(f.getName(), f.getType().getSimpleName())); } } return pairs; }} | Android generic SQL database handler | java;performance;android;sqlite | null |
_codereview.4911 | I have implemented generic doubly-linked list class, which supports IEnumerable<T>,IEnumerator<T> interfaces.DoublyLinkedList<T> is fully compatible with standart BCL classes.Targets:class can be used as a standard replacement for Stack<T>, Queue<T> and List<T>.class itself is immutable, so you can-not change the containing value of the element.class methods raises exceptions which conforms for common considerations.class method's lock statement was used for locking current active element. class Push, Pop, Peek operations is breaking the current state of the enumerated object.How can we compare the performance for that class compared to the BCL classes and other implementation?Will you provide the C# code to compare to the BCL classes like Stack<T>, Querty<T>, List<T>?Unit tests (code coverage - 100%):[TestClass]public class DoublyLinkedListUnitTest{ [TestMethod] public void TestValueMethod() { DoublyLinkedList<int> dll = new DoublyLinkedList<int>(); int i1 = 1; int i2 = 2; int i3 = 3; Assert.AreEqual(((IDoublyLinkedList<int>)dll).Index, -1); try { dll.Peek(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } using (IEnumerator<int> dllEnumerator = dll.GetEnumerator()) { Assert.AreEqual(dll.MoveLast(), false); Assert.AreEqual(dll.MovePrevious(), false); Assert.AreEqual(dllEnumerator.MoveNext(), false); Assert.AreEqual(dllEnumerator.Current, default(int)); Assert.AreEqual(dll.Count, 0); Assert.AreEqual(dll.CurrentIndex, -1); try { dll.Pop(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.Add(i1); dll.MovePrevious(); try { dll.Remove(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.MoveLast(); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, i1); Assert.AreEqual(dll.Count, 1); Assert.AreEqual(dll.CurrentIndex, 0); dll.Add(i2); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, i2); Assert.AreEqual(dll.Count, 2); Assert.AreEqual(dll.CurrentIndex, 1); dll.MoveFirst(); try { dll.Remove(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } try { dll.Add(i1); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.MoveLast(); dll.Add(i3); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, i3); Assert.AreEqual(dll.Count, 3); Assert.AreEqual(dll.CurrentIndex, 2); } IEnumerator o = ((IEnumerable)dll).GetEnumerator(); o.MoveNext(); Assert.AreEqual(o.Current, i1); List<int> list = new List<int>(); dll.CopyTo(list); Assert.AreEqual(dll.ToList().Except(list).Count(), 0); Assert.AreEqual(list.Except(dll).Count(), 0); Assert.AreEqual(list.Count, 3); Assert.AreEqual(list[0], i1); Assert.AreEqual(list[1], i2); Assert.AreEqual(list[2], i3); using (IEnumerator<int> dllEnumerator = dll.GetEnumerator()) { if (dll.Count > 0) { dll.Remove(); } Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, i1); if (dll.Count > 0) { dll.Remove(); } Assert.AreEqual(dllEnumerator.MoveNext(), false); if (dll.Count > 0) { dll.Remove(); } } try { dll.Remove(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.AddRange(list); try { dll.AddRange(null); } catch (Exception ex) { Assert.AreEqual(typeof(ArgumentNullException), ex.GetType()); } try { dll.MoveFirst(); dll.AddRange(list); } catch 
(Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } Assert.AreEqual(dll.Contains(i1), true); Assert.AreEqual(dll.Contains(i2), true); Assert.AreEqual(dll.Contains(i3), true); Assert.AreEqual(dll.Contains(default(int)), false); using (IEnumerator<int> dllEnumerator = dll.GetEnumerator()) { Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, i1); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, i2); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, i3); } Stack<int> stack = new Stack<int>(); stack.Push(i3); stack.Push(i2); stack.Push(i1); foreach (object value in dll) { Assert.AreEqual(stack.Pop(), value); } try { int value = default(int); while ((value = dll.Pop()) != default(int)) { stack.Push(value); } } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.Push(i1); dll.Push(i2); dll.Push(i3); dll.Clear(); Assert.AreEqual(dll.MoveFirst(), false); stack.Clear(); stack.Push(i3); stack.Push(i2); stack.Push(i1); foreach (int value in stack) { dll.Push(value); Assert.AreEqual(dll.Peek(), value); } IDoublyLinkedList<int> dllInterface = dll; dllInterface.Reset(); Assert.AreEqual(dllInterface.MovePrevious(), false); Assert.AreEqual(dllInterface.Current, default(int)); Assert.AreEqual(dllInterface.CurrentIndex, -1); Assert.AreEqual(dllInterface.MoveNext(), true); Assert.AreEqual(dllInterface.Current, i1); Assert.AreEqual(dllInterface.CurrentIndex, 0); Assert.AreEqual(dllInterface.MoveNext(), true); Assert.AreEqual(dllInterface.Current, i2); Assert.AreEqual(dllInterface.CurrentIndex, 1); Assert.AreEqual(dllInterface.MoveNext(), true); Assert.AreEqual(dllInterface.Current, i3); Assert.AreEqual(dllInterface.CurrentIndex, 2); Assert.AreEqual(dllInterface.MoveFirst(), true); Assert.AreEqual(dllInterface.Current, i1); Assert.AreEqual(dllInterface.CurrentIndex, 0); Assert.AreEqual(dllInterface.MoveLast(), true); Assert.AreEqual(dllInterface.Current, i3); Assert.AreEqual(dllInterface.CurrentIndex, 2); Assert.AreEqual(dllInterface.MovePrevious(), true); Assert.AreEqual(dllInterface.Current, i2); Assert.AreEqual(dllInterface.CurrentIndex, 1); Assert.AreEqual(dllInterface.MovePrevious(), true); Assert.AreEqual(dllInterface.Current, i1); Assert.AreEqual(dllInterface.CurrentIndex, 0); Assert.AreEqual(dllInterface.MovePrevious(), true); Assert.AreEqual(dllInterface.Current, default(int)); Assert.AreEqual(dllInterface.CurrentIndex, -1); } [TestMethod] public void TestReferenceMethod() { DoublyLinkedList<object> dll = new DoublyLinkedList<object>(); object o1 = new object(); object o2 = new object(); object o3 = new object(); Assert.AreEqual(((IDoublyLinkedList<object>)dll).Index, -1); try { dll.Peek(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } using (IEnumerator<object> dllEnumerator = dll.GetEnumerator()) { Assert.AreEqual(dll.MoveLast(), false); Assert.AreEqual(dll.MovePrevious(), false); Assert.AreEqual(dllEnumerator.MoveNext(), false); Assert.AreEqual(dllEnumerator.Current, default(object)); Assert.AreEqual(dll.Count, 0); Assert.AreEqual(dll.CurrentIndex, -1); try { dll.Pop(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.Add(o1); dll.MovePrevious(); try { dll.Remove(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.MoveLast(); Assert.AreEqual(dllEnumerator.MoveNext(), 
true); Assert.AreEqual(dllEnumerator.Current, o1); Assert.AreEqual(dll.Count, 1); Assert.AreEqual(dll.CurrentIndex, 0); dll.Add(o2); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, o2); Assert.AreEqual(dll.Count, 2); Assert.AreEqual(dll.CurrentIndex, 1); dll.MoveFirst(); try { dll.Add(o1); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.MoveLast(); dll.Add(o3); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, o3); Assert.AreEqual(dll.Count, 3); Assert.AreEqual(dll.CurrentIndex, 2); } IEnumerator o = ((IEnumerable)dll).GetEnumerator(); o.MoveNext(); Assert.AreEqual(o.Current, o1); List<object> list = new List<object>(); dll.CopyTo(list); Assert.AreEqual(dll.ToList().Except(list).Count(), 0); Assert.AreEqual(list.Except(dll).Count(), 0); Assert.AreEqual(list.Count, 3); Assert.AreEqual(list[0], o1); Assert.AreEqual(list[1], o2); Assert.AreEqual(list[2], o3); using (IEnumerator<object> dllEnumerator = dll.GetEnumerator()) { if (dll.Count > 0) { dll.Remove(); } Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, o1); if (dll.Count > 0) { dll.Remove(); } Assert.AreEqual(dllEnumerator.MoveNext(), false); if (dll.Count > 0) { dll.Remove(); } } try { dll.Remove(); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.AddRange(list); try { dll.AddRange(null); } catch (Exception ex) { Assert.AreEqual(typeof(ArgumentNullException), ex.GetType()); } try { dll.MoveFirst(); dll.AddRange(list); } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } Assert.AreEqual(dll.Contains(o1), true); Assert.AreEqual(dll.Contains(o2), true); Assert.AreEqual(dll.Contains(o3), true); Assert.AreEqual(dll.Contains(default(object)), false); using (IEnumerator<object> dllEnumerator = dll.GetEnumerator()) { Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, o1); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, o2); Assert.AreEqual(dllEnumerator.MoveNext(), true); Assert.AreEqual(dllEnumerator.Current, o3); } Stack<object> stack = new Stack<object>(); stack.Push(o3); stack.Push(o2); stack.Push(o1); foreach (object value in dll) { Assert.AreEqual(stack.Pop(), value); } try { object value = default(object); while ((value = dll.Pop()) != default(object)) { stack.Push(value); } } catch (Exception ex) { Assert.AreEqual(typeof(InvalidOperationException), ex.GetType()); } dll.Push(o1); dll.Push(o2); dll.Push(o3); dll.Clear(); Assert.AreEqual(dll.MoveFirst(), false); stack.Clear(); stack.Push(o3); stack.Push(o2); stack.Push(o1); foreach (object value in stack) { dll.Push(value); Assert.AreEqual(dll.Peek(), value); } IDoublyLinkedList<object> dllInterface = dll; dllInterface.Reset(); Assert.AreEqual(dllInterface.MovePrevious(), false); Assert.AreEqual(dllInterface.Current, default(object)); Assert.AreEqual(dllInterface.CurrentIndex, -1); Assert.AreEqual(dllInterface.MoveNext(), true); Assert.AreEqual(dllInterface.Current, o1); Assert.AreEqual(dllInterface.CurrentIndex, 0); Assert.AreEqual(dllInterface.MoveNext(), true); Assert.AreEqual(dllInterface.Current, o2); Assert.AreEqual(dllInterface.CurrentIndex, 1); Assert.AreEqual(dllInterface.MoveNext(), true); Assert.AreEqual(dllInterface.Current, o3); Assert.AreEqual(dllInterface.CurrentIndex, 2); Assert.AreEqual(dllInterface.MoveFirst(), true); 
Assert.AreEqual(dllInterface.Current, o1); Assert.AreEqual(dllInterface.CurrentIndex, 0); Assert.AreEqual(dllInterface.MoveLast(), true); Assert.AreEqual(dllInterface.Current, o3); Assert.AreEqual(dllInterface.CurrentIndex, 2); Assert.AreEqual(dllInterface.MovePrevious(), true); Assert.AreEqual(dllInterface.Current, o2); Assert.AreEqual(dllInterface.CurrentIndex, 1); Assert.AreEqual(dllInterface.MovePrevious(), true); Assert.AreEqual(dllInterface.Current, o1); Assert.AreEqual(dllInterface.CurrentIndex, 0); Assert.AreEqual(dllInterface.MovePrevious(), true); Assert.AreEqual(dllInterface.Current, default(object)); Assert.AreEqual(dllInterface.CurrentIndex, -1); }}Source code:public interface IDoublyLinkedList<T> : IEnumerator<T>, IEnumerable<T>, IEnumerator, IEnumerable, IDisposable{ void Add(T value); void AddRange(IEnumerable<T> values); bool Contains(T item); void Remove(); bool MoveFirst(); bool MovePrevious(); bool MoveLast(); void CopyTo(List<T> list); List<T> ToList(); void Clear(); T Pop(); T Peek(); void Push(T value); int Count { get; } int Index { get; } int CurrentIndex { get; }}public class DoublyLinkedList<T> : IDoublyLinkedList<T>{ private DoublyLinkedList<T> _current; private DoublyLinkedList<T> _previous; private DoublyLinkedList<T> _first; private DoublyLinkedList<T> _last; private readonly T _value; private readonly int _count; public DoublyLinkedList() { _current = this; } private DoublyLinkedList(DoublyLinkedList<T> copy) { _current = copy; _previous = copy._previous; _first = copy._first; _last = copy._last; _count = copy._count; } public int Count { get { lock (_current) { if (_last != null) { return _last._count; } return 0; } } } public int Index { get { lock (_current) { return _count - 1; } } } public int CurrentIndex { get { lock (_current) { return _current._count - 1; } } } private DoublyLinkedList(DoublyLinkedList<T> previous, T value) { _current = this; _previous = previous; _value = value; _count = previous._count + 1; } public void Clear() { _current = this; _previous = null; _first = null; _last = null; } public bool Contains(T item) { lock (_current) { foreach (T value in this) { if (object.Equals(value, item)) return true; } return false; } } public void CopyTo(List<T> list) { lock (_current) { list.AddRange(this); } } public List<T> ToList() { lock (_current) { return new List<T>(this); } } public void AddRange(IEnumerable<T> values) { lock (_current) { if (values != null) { if (_current._first == null) { foreach (T value in values) { _current._first = new DoublyLinkedList<T>(_current, value); _current = _current._first; } _last = _current; return; } throw new InvalidOperationException(); } throw new ArgumentNullException(); } } public void Add(T value) { lock (_current) { if (_current._first == null) { _current._first = new DoublyLinkedList<T>(_current, value); _last = _current = _current._first; return; } throw new InvalidOperationException(); } } public void Remove() { lock (_current) { if (_current._first == null) { if (_current._previous != null) { _last = _current = _current._previous; _current._first = null; return; } throw new InvalidOperationException(); } throw new InvalidOperationException(); } } public T Pop() { lock (_current) { if (_last != null) { _current = _last; } if (_current._previous != null) { T value = _current._value; _last = _current = _current._previous; _current._first = null; return value; } throw new InvalidOperationException(); } } public T Peek() { lock (_current) { if (_last != null) { _current = _last; } if 
(_current._previous != null) { T value = _current._value; return value; } throw new InvalidOperationException(); } } public void Push(T value) { lock (_current) { if (_last != null) { _current = _last; } _current._first = new DoublyLinkedList<T>(_current, value); _last = _current = _current._first; } } public bool MoveFirst() { lock (_current) { if (_first != null) { _current = _first; return true; } return false; } } public bool MoveLast() { lock (_current) { if (_last != null) { _current = _last; return true; } return false; } } public bool MovePrevious() { lock (_current) { if (_current._previous != null) { _current = _current._previous; return true; } return false; } } T IEnumerator<T>.Current { get { lock (_current) { return _current._value; } } } public IEnumerator<T> GetEnumerator() { lock (_current) { return new DoublyLinkedList<T>(this); } } bool IEnumerator.MoveNext() { lock (_current) { if (_current._first != null) { _current = _current._first; return true; } return false; } } object IEnumerator.Current { get { lock (_current) { return _current._value; } } } void IEnumerator.Reset() { lock (_current) { _current = this; } } IEnumerator IEnumerable.GetEnumerator() { lock (_current) { return new DoublyLinkedList<T>(this); } } void IDisposable.Dispose() { }} | Comparing DoublyLinkedList implementation performance with other BCL (.NET) classes | c#;.net;performance | null |
_unix.352465 | I have the following script:#! /usr/bin/pythonimport glibimport reimport subprocessimport requestsimport bs4import datetimeimport sysimport osimport timefrom selenium import webdriverfrom pyudev import Context, Monitorfrom selenium.common.exceptions import NoSuchElementExceptiondef demote(): def result(): os.setgid(100) os.setuid(1000) return resultdef inotify(title, message): subprocess.call(['notify-send', '{}\n'.format(title), '{0}\n'.format(message)], preexec_fn=demote()) #os.system('notify-send ' + title + ' ' + message)def get_network_data(tout): Scrapes balance data from ISP website. if tout is not None: try: # Do some scraping if data_found: full_msg = '{0}\n{1}'.format(my_balance.capitalize(), airtime_balance.capitalize()) inotify('My Balance', full_msg) #subprocess.call(['notify-send', 'My Balance', '\n{0}\n{1}'.format(my_balance.capitalize(), airtime_balance.capitalize())], preexec_fn=demote()) else: print('Could not retrieve data from page...') full_msg = '{0}'.format('Error: Could not retrieve data from page.') inotify('My Balance', full_msg) #subprocess.call(['notify-send', 'My Balance', '\n{0}'.format('Error: Could not retrieve data from page.')], preexec_fn=demote()) except NoSuchElementException: print('Could not locate element...') full_msg = '{0}'.format('Error: Could not locate element - acc.') inotify('My Balance', full_msg) #subprocess.call(['notify-send', 'iMonitor:get_network_data', '\n{0}'.format('Error: Could not locate element - acc.')], preexec_fn=demote()) else: print('Could not find USB device...') full_msg = '\n{0}'.format('Error: Could not find USB device.') inotify('My Balance', full_msg) #subprocess.call(['notify-send', 'iMonitor', '\n{0}'.format('Error: Could not find USB device.')], preexec_fn=demote())def identify_phone(observer, device): Identifies if specific USB device (phone) is connected (tethered). global last_updated, initial_search, msg_count current_time = datetime.datetime.now() time_diff = current_time - last_updated if (time_diff.seconds > 300) or initial_search: try: time.sleep(0.25) tout = subprocess.check_output(lsusb | grep 1234:5678, shell=True) except subprocess.CalledProcessError: tout = None last_updated = datetime.datetime.now() initial_search = False get_network_data(tout) if time_diff.seconds > 10: msg_count = 1 if not initial_search and msg_count == 1: wait_time = datetime.datetime.fromtimestamp(600 - time_diff.seconds) message = wait_time.strftime('You may have to wait %-M minute(s), %-S second(s) before another check is done.') print('Could not retrieve data from page...') full_msg = '\n{0}'.format(message) inotify('My Balance', full_msg) #subprocess.call(['notify-send', 'iMonitor:Identify Phone', '\n{0}'.format(message)], preexec_fn=demote()) msg_count += 1try: initial_search = True last_updated = datetime.datetime.now() msg_count = 1 try: from pyudev.glib import MonitorObserver except ImportError: from pyudev.glib import GUDevMonitorObserver as MonitorObserver context = Context() monitor = Monitor.from_netlink(context) monitor.filter_by(subsystem='usb') observer = MonitorObserver(monitor) observer.connect('device-added', identify_phone) monitor.start() glib.MainLoop().run()except KeyboardInterrupt: print('\nShutdown requested.\nExiting gracefully...') sys.exit(0)I would like it to run overy boot, thus, I have created a service at /etc/systemd/system which does the calling of the script. 
However, as the script is meant to display a desktop notification, I haven't managed to make this work since it runs as root; that is despite the fact that I have changed the gid and uid. Any help would be appreciated. KDE Plasma Version 5.5.5 | How to run a Python script on every boot? | python;opensuse | null
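One direction that fits the constraint above (a sketch, not a verified fix for this exact machine): notify-send needs the user's session D-Bus, so instead of a root unit in /etc/systemd/system, a per-user unit under ~/.config/systemd/user/ runs the script as the logged-in user. The unit name and script path below are hypothetical:

    # ~/.config/systemd/user/balance-monitor.service
    [Unit]
    Description=USB tether balance monitor

    [Service]
    ExecStart=/usr/bin/python /home/user/bin/monitor.py
    Restart=on-failure

    [Install]
    WantedBy=default.target

It is enabled with systemctl --user enable balance-monitor.service and started with systemctl --user start balance-monitor.service, so it comes up with the user's graphical session rather than at system boot as root.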
_unix.360321 | I learnt that there was a command cloc to count lines of code. Now I wonder whether the file types it reports are accurate. Should I look at the cloc project to know how file types are detected? The reason I wonder is that cloc seems to have false positives, if I'm not mistaken: when I compare the file types to the output of tree or ls *.py, there is no output even though cloc reports Python files in the current directory. | Statistics for project file types | tree | null
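A quick way to check the classification directly (assuming a reasonably recent cloc; --by-file is a long-standing option) is to compare cloc's per-file listing against a plain filename search:

    cloc --by-file .
    find . -type f -name '*.py'

If --by-file lists entries as Python that the find does not show, cloc is classifying those files by content or by an extension other than .py, which would explain the apparent false positives.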
_softwareengineering.317254 | At my company, we're building an SDK consisting of a number of assemblies. For example, we deliver an assembly called Company.Platform.Security that contains the implementation of our authorization model.Now, this assembly contains code that talks to other services over HTTPS. It therefore needs to discover where those services live and needs to log the interactions and any errors that result from it.So the same SDK, contains assemblies called Company.Platform.Logging, Company.Platform.ServiceDiscovery etc. These assemblies expose interfaces called Company.Platform.Logging.ILogging and Company.Platform.ServiceDiscovery.IServiceDiscovery respectively.Now my question is, should the classes in Company.Platform.Security take these interfaces as dependencies in their constructors? Or should Company.Platform.Security define its own Company.Platform.Security.Interfaces.ILogging and Company.Platform.Security.Interfaces.IServiceDiscovery interface and take objects implementing those in the constructor of its classes?The latter makes the assembly more cohesive IMO as it protects it against changes in those interfaces (which may or may not be maintained by other team members or teams). But a lot of concrete classes will just be very simple adapter classes for the classes in the other assemblies. It might also make the system more complex.What are valid arguments for either approach? | Sdk building, declare dependencies inside the assembly or use external? | c#;design;dependencies;microservices;sdk | null |
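A minimal C# sketch of the second option described in the question (the type and member names below are illustrative, not the real SDK surface, and it assumes the shared ILogging exposes some error-logging method): Security declares its own narrow interface, and a thin adapter maps the shared interface onto it.

    // Inside Company.Platform.Security
    public interface ISecurityLogger
    {
        void Error(string message);
    }

    // Adapter, kept in the composition root or a small glue assembly
    public sealed class SecurityLoggerAdapter : ISecurityLogger
    {
        private readonly Company.Platform.Logging.ILogging _inner;

        public SecurityLoggerAdapter(Company.Platform.Logging.ILogging inner)
        {
            _inner = inner;
        }

        public void Error(string message)
        {
            _inner.Error(message); // assumes ILogging has an Error-like method
        }
    }

The classes in Company.Platform.Security then take ISecurityLogger in their constructors, so the assembly is insulated from changes to the shared interfaces at the cost of these small adapter classes.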
_webmaster.17641 | According to Google Analytics, a particular page has 97710 page views. This page has an exit rate of 57.82% and a bounce rate of 64.4%. According to the Navigation Summary page, 91.76% of page visits were entrances, and 8.24% came from previous pages.From the page views and entrance rate, I calculate the number of direct entrances as:.9176 * 97710 = 89659and from that, I calculate the number of bounces as:.644 * 89659 = 57740This seems reasonable. Then, from the exit rate and the number of page views, I calculate the number of exits as:.5782 * 97710 = 56496This too seems like a reasonable number...but then I saw that the number of bounces (57740) > the number of total exits (56496)!I've read about what exactly bounce rate and exit rate are, and I've looked at a few examples online where people showed examples of how to calculate bounce and exit rates. Does anyone see anything wrong with my math? Or do I misunderstand how Analytics works? | How can I have more bounces than exits in Google Analytics? | google analytics;bounce rate | the data is correct, your math is too. the only difference on why you get a discrepancy of 1244 is because you are calculating with only 2 decimal numbers.if for example your real bounce rate is 91.7653% (and not 91.76% flat) that would result in 89659 direct entrances.917653 * 97710 = 89663.644353 * 89663 = 57774and so forthalso your total exit rate can be smaller than your bounce rate. for example visitors could time out on your site and hence a new session is made without accounting for an exit ... |
_unix.199399 | I'm trying to list all the files with names no longer than 250 characters (including the directory each file is part of, i.e. the relative path from where my command is run). I've seen a similar thread, but that will only list the files recursively. Any idea on how to modify the script to only show files with names no longer than 250 characters (including the relative path)? | List files recursively in Linux CLI with path relative to the current directory, max 250 char | files;find;recursive | null
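A sketch of one way to do this, filtering find's output on path length (the leading ./ that find prints is stripped first, so the comparison is against the relative path exactly as asked):

    find . -type f | sed 's|^\./||' | awk 'length($0) <= 250'

The 250 can be replaced by any other limit, and -type f can be dropped if directories should be listed as well.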
_webmaster.23465 | Okay, so I checked the page rank again, and http://www.namhost.com's page rank is now 4. But I get this warning:
Google Pagerank for: http://www.namhost.com 4/10. Pagerank is valid! Attention! This domain has very little quality backlinks and you run the risk of the domain losing its page rank on the next Google pagerank update!
I used http://www.checkpagerank.net. I then used a few other checkers, e.g. http://www.whatsmypagerank.com/pagerank-checker.php; this one found my page rank to be zero. So my questions are:
which one is correct?
why are they different?
should I attempt to get rid of the +- 40 000 BAD backlinks I have, or should I focus my efforts on getting 40 000 GOOD backlinks? i.e. will the good backlinks trump the bad links (copyright pending)? | Page Rank not showing correctly everywhere? | seo;pagerank | null
_codereview.134410 | Original Problem: HackerRankI am trying to count the number of inversions using Binary Indexed Tree in an array. I've tried to optimise my code as much as I can. However, I still got TLE for the last 3 test cases. Any ideas that I can further optimise my code?# Enter your code here. Read input from STDIN. Print output to STDOUT#!/usr/bin/pythonfrom copy import deepcopyclass BinaryIndexedTree(object): Binary Indexed Tree def __init__(self, n): self.n = n + 1 self.bit = [0] * self.n def update(self, i, k): Adds k to element with index i while i <= self.n - 1: self.bit[i] += k i = i + (i & -i) def count(self, i): Returns the sum from index 1 to i total = 0 while i > 0: total += self.bit[i] i = i - (i & -i) return totaldef binary_search(arr, target): Binary Search left, right = 0, len(arr) - 1 while left <= right: mid = left + ((right - left) >> 1) if target == arr[mid]: return mid elif target < arr[mid]: right = mid - 1 else: left = mid + 1 # If not found, return -1. return -1T = input()for iterate in xrange(T): n = input() q = [ int( i ) for i in raw_input().strip().split() ] answer = 0 # Write code to compute answer using x, a and answer # Build a Binary Indexed Tree. bit = BinaryIndexedTree(n) # Copy q and sort it. arr = sorted(deepcopy(q)) # index array. index = map(lambda t: binary_search(arr, t) + 1, q) # Loop. for i in xrange(n - 1, -1, -1): answer += bit.count(index[i]) bit.update(index[i] + 1, 1) print answer | Count the number of inversions using Binary Indexed Tree in an array | python;algorithm;programming challenge;time limit exceeded | null |
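Two common TLE-oriented tweaks for code like the above (a sketch, assuming the usual HackerRank input format): read all input at once instead of line-by-line input()/raw_input(), and replace the per-element binary_search with a dictionary built once from the sorted copy, so each rank lookup is O(1). The deepcopy before sorted is also unnecessary, since sorted already returns a new list.

    import sys
    data = sys.stdin.read().split()
    # ... parse T, n and q from data ...

    arr = sorted(q)
    rank = {v: i + 1 for i, v in enumerate(arr)}   # for duplicates, the last index wins
    index = [rank[v] for v in q]

If the input can contain duplicates, make sure the rank convention matches whatever your binary_search currently returns, otherwise the inversion count changes.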
_vi.11373 | Like most programmers, I perform a lot of repetitive tasks. In optimising my workflow, I'm taking some of those repetitive tasks, and refactoring them into shell scripts.One thing that I'm trying to automate is the recreation of PostgreSQL views. I have the following view.create or replace view person asselect 1 as person_id , 'John'::text as first_name, 'Doe'::text as last_name;I can dump this view with psql -c \\d+ person and the output is as follows: View public.person Column | Type | Modifiers | Storage | Description ------------+---------+-----------+----------+------------- person_id | integer | | plain | first_name | text | | extended | last_name | text | | extended | View definition: SELECT 1 AS person_id, 'John'::text AS first_name, 'Doe'::text AS last_name;I can reformat this text into a CREATE OR REPLACE VIEW statement with the following keystrokes in vim: gg0dfiCREATE OR REPLACE VIEW <ESC>$cl AS<ESC>j0d/^View definition:<CR>ddG$:wq (I've reformatted <ESC>, etc, in the above).I've got the above working perfectly, except for the screen flashes that occur. For example, if at the shell I type a=$(psql -c \\d+ person | vim -s <vimscriptfile> -); echo $a then my screen flashes before outputting the nicely format SQL.Is there any way to remove this flash? Or is there a better using-vim-in-a-pipeline approach than what I'm employing? | How can you use vim as a stream editor? | vimscript;bash | null |
_unix.84669 | If I am not wrong, this is how umask is calculated:
for a dir, 777 - 022 (root's umask value) = 755;
for a file, 666 - 022 (root's umask value) = 644.
Now, where is this umask value defined? Is it the /etc/bashrc file? If so, then what is the file /etc/login.defs for? My /etc/login.defs file says 077 as umask - what does this mean? Also, where is cmask defined? The umask can be changed using the umask command, but that is temporary, right? If I have to make it permanent, I can edit the .bashrc file in my home dir and append the umask value to it. Also, say I am root and I want to set a specific umask for all other users - how do I do that? | Umask for root and other system users | umask | null
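A worked example for the 077 value asked about above. The umask is applied as a bitwise mask (mode & ~umask), not literal subtraction; subtraction just happens to give the same answer for simple masks like 022:

    umask 022  ->  directories 777 & ~022 = 755,  files 666 & ~022 = 644
    umask 077  ->  directories 777 & ~077 = 700,  files 666 & ~077 = 600
    umask -S       # prints the current mask symbolically, e.g. u=rwx,g=rx,o=rx

The subtraction shorthand breaks for masks whose bits are not all set in the base mode: with umask 027 a new file gets 640, while 666 - 027 would suggest 637. So 077 in /etc/login.defs means that files and directories created under that policy are readable and writable only by their owner.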
_unix.140780 | I've just connected to a Linux terminal using an ssh client. I'm running Linux on both desktops, but cannot open PDFs, an IDE, or other graphical programs from the ssh client. How can I fix this? | Cannot open GUIs via SSH connection? | ssh;x11;gui | null
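The generic checklist for this (distro-agnostic, since none is stated): X11 forwarding has to be requested by the client and allowed by the server.

    ssh -X user@remotehost      # or -Y for trusted forwarding
    echo $DISPLAY               # run on the remote side; should print something like localhost:10.0
    xclock                      # any small X client works as a test, if one is installed

On the server, /etc/ssh/sshd_config needs X11Forwarding yes (and xauth installed); restart sshd after changing it. If DISPLAY is empty on the remote side, forwarding was not set up and graphical programs cannot open.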
_unix.3329 | What's the best way to run a command inside a screen session such that its parent shell can be accessed? I want to be able to restart it from within the same screen session.The best I've managed to come up with is the following:$ cat mr-t.sh #!/bin/bashtopexec /bin/bashand then:screen -e'^\\\' -S top-in-screen ./mr-t.shThen, if top stops running, I'll at least get a new shell within the same screen session I can work with. This isn't very elegant, though, and I can't quite convince myself that the signal will reliably be sent to the right process if I hit C-c. Moreover, C-z doesn't work at all.I'm using bash 3.2.48 on OSX and 3.2.39 on Linux. Both are probably patched by OS vendors.There's nothing special about the top command here, of course.[As an aside, -e'^\\\' reassigns the Magick Screen Key from C-a (a bad default if there ever was one) to C-\.] | How to run a command inside screen such that you can get back to the command's parent shell? | gnu screen | Since asking this question, I've adopted a different but much more effective solution: use tmux.Start a new named, detached session:$ tmux new -d -s topSend a command to its zeroth window.$ tmux send-keys -t top:0 top C-mC-m is equivalent to hitting return. Often it's what you want; sometimes you'll want to leave it out.tmux' command set is well-documented. So it's easy to write far more complicated scripts to drive it.Normally this would be an extremely obnoxious answer, but since it's my question I'll allow myself some leeway, especially since I never really got any of the other methods to work reliably. In contrast, I was able to script tmux correctly the first time I tried to. |
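A usage note on the answer above: tmux new -d -s top creates the window running the default shell, and send-keys merely types top into it, so when top exits you are left at that parent shell, which is exactly the behaviour the question asks for. Reattach at any time with

    tmux attach -t top

and detach again with the default prefix followed by d (C-b d).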
_unix.312297 | I have a file with many numbers in it (only numbers, and each number is on its own line). I want to find out the number of lines in which the number is greater than 100 (or in fact any other threshold). How can I do that? | Counting the number of lines having a number greater than 100 | sed;awk;grep | null
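A sketch with awk, treating the first field as the number on each line (numbers.txt is a placeholder filename; swap 100 for any other threshold):

    awk '$1 > 100' numbers.txt | wc -l
    # or count in a single pass:
    awk '$1 > 100 {c++} END {print c+0}' numbers.txt

The comparison is numeric because 100 is a numeric constant and the fields look like numbers, so 9 is correctly treated as less than 100, unlike a lexicographic string comparison.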
_cstheory.3401 | Given a term t : $\forall x.\exists y.(\neg(x = 0) \Rightarrow x = S(y))$ in Martin-Lof's type theory, what's the value of w(t(0)), where w is the operator that extracts the witness of a term of existential type? | What happens if we try to extract a witness but it actually does not exist from a term of existential type? | lo.logic;type theory;proof theory | Any value. It depends upon which $t$ you are given. A term of type $\exists y.(\neg(0 = 0) \Rightarrow 0 = S(y))$ is a pair of an int $y$ and a function that takes a proof of $\neg(0=0)$ and gives you a proof of $0 = S(y)$. You can use a term of type $\neg(0 = 0)$ and type $0 = 0$ (from reflexivity) to derive a term of any type you want. This includes a term of type $0 = S(0)$, $0 = S(1)$, $\ldots$. So, you can make $y$ any integer you want.
_unix.134679 | In OS X, if you want to view the ACL information on a file, you can do so with the -e option of `ls.$ ls -lde app/cachedrwxrwxr-x+ 7 alanstorm staff 238 Apr 1 10:02 app/cache 0: user:alanstorm allow add_file,delete,add_subdirectory,file_inherit,directory_inherit 1: user:root allow add_file,delete,add_subdirectory,file_inherit,directory_inherit 2: user:_www allow add_file,delete,add_subdirectory,file_inherit,directory_inheritWhat's the format of the individual ACE lines? Is this documented anywhere? I couldn't find anything in the chmod or ls man pages, and most internet articles did a lot of and there's your ACL/ACE entries hand waving once they taught you the -e option. I can start to guess at the meanings the last column is obviously the individual permissions, the first is either a user or group, etc., but I don't know what the meaning of allow/deny is in OS X ACL talk, and I don't know if the 0, 1, 2 carry any semantic meaning, and (most importantly) I don't know what else I don't know. For example, there's an inherited column that shows up if a file's inherited permissions0: user:alanstorm allow add_file,delete,add_subdirectory,file_inherit,directory_inherit vs.0: user:alanstorm inherited allow add_file,delete,add_subdirectory,file_inherit,directory_inherit This screws up straight whitespace parsing, and I'd like to know if there's other places where stuff like this pops up.If anyone here could help clear up the individual questions I have about column 1 and column 3, or more generally describe the format, I'd appreciated it.Long time unix user here, but I'm not really up to speed on ACL stuff. Bitmasks, chmod, pry from my cold dead hand, etc. | OSX/Darwin ACL Format | osx;acl;darwin | null |
_codereview.150652 | This is the third question in the series. Number 1 had most of the official two-player rules implemented, and Number 2 was the basic UI. This one has the complete two-player rules implemented, an AI using minimax with alpha-beta pruning. All suggestions welcome, especially those that help me program in a more functionally and more cleanly.Checkers.fs contains my basic types:module public Checkers.Typestype Player = Black | Whitetype PieceType = Checker | Kingtype Coord = { Row :int; Column :int }let offset c1 c2 = { Row = c1.Row + c2.Row; Column = c1.Column + c2.Column }type Move = Coord Listtype MoveTree = { Move :Move; Parent :Option<MoveTree>; Children :Option<List<MoveTree>> }type internal AlphaBetaMove = { Alpha :float Option; Beta :float Option; Move :Move }Piece.fs contains the information on a piece:module public Checkers.Pieceopen Checkers.Typestype Piece = { Player :Player; PieceType :PieceType }let Promote piece = { Player = piece.Player; PieceType = King }let whiteChecker = Some <| { Player = White; PieceType = Checker }let whiteKing = Some <| { Player = White; PieceType = King }let blackChecker = Some <| { Player = Black; PieceType = Checker }let blackKing = Some <| { Player = Black; PieceType = King }Board.fs contains a type alias and some helper methods:module public Checkers.Boardopen Checkers.Typesopen Checkers.Pieceopen System.Collections.Generictype Board = Piece option list listlet square (coord :Coord) = List.item coord.Row >> List.item coord.Columnlet rowFromSeq (value :'a seq) = Some (List.ofSeq value)let listFromSeq (value :'a seq seq) = List.ofSeq (Seq.choose rowFromSeq value)let defaultBoard = [ List.replicate 4 [None; blackChecker] |> List.concat List.replicate 4 [blackChecker; None] |> List.concat List.replicate 4 [None; blackChecker] |> List.concat List.replicate 8 None List.replicate 8 None List.replicate 4 [whiteChecker; None] |> List.concat List.replicate 4 [None; whiteChecker] |> List.concat List.replicate 4 [whiteChecker; None] |> List.concat ]FSharpExtensions.fs contains a few functions that will be used by all the variants:module internal Checkers.FSharpExtensionsopen Checkersopen Checkers.Typesopen Systemlet internal getJumpedCoord startCoord endCoord = { Row = startCoord.Row - Math.Sign(startCoord.Row - endCoord.Row); Column = startCoord.Column - Math.Sign(startCoord.Column - endCoord.Column) }let internal moveIsDiagonal startCoord endCoord = startCoord <> endCoord && System.Math.Abs(startCoord.Row - endCoord.Row) = System.Math.Abs(startCoord.Column - endCoord.Column)let internal otherPlayer player = match player with | White -> Black | Black -> WhiteAmericanCheckers.fs contains the logic for American Checkers (or English Draughts, if you prefer):module internal Checkers.Variants.AmericanCheckersopen Checkers.Typesopen Checkers.Pieceopen Checkers.Boardopen Checkers.FSharpExtensionsopen Systemopen System.Collections.Generic[<Literal>]let Rows = 7[<Literal>]let Columns = 7let internal kingRowIndex(player) = match player with | Player.Black -> Rows | Player.White -> 0let internal coordExists coord = coord.Row >= 0 && coord.Row <= Rows && coord.Column >= 0 && coord.Column <= Columnslet internal checkMoveDirection piece startCoord endCoord = match piece.PieceType with | PieceType.Checker -> match piece.Player with | Player.Black -> startCoord.Row < endCoord.Row | Player.White -> startCoord.Row > endCoord.Row | PieceType.King -> truelet internal isValidCheckerHop startCoord endCoord (board :Board) = let piece = (square startCoord 
board).Value checkMoveDirection piece startCoord endCoord && (square endCoord board).IsNonelet internal isValidKingHop endCoord (board :Board) = (square endCoord board).IsNonelet internal isValidCheckerJump startCoord endCoord (board :Board) = let piece = (square startCoord board).Value let jumpedCoord = getJumpedCoord startCoord endCoord let jumpedPiece = square jumpedCoord board checkMoveDirection piece startCoord endCoord && (square endCoord board).IsNone && jumpedPiece.IsSome && jumpedPiece.Value.Player <> piece.Playerlet internal isValidKingJump startCoord endCoord (board :Board) = let piece = (square startCoord board).Value let jumpedCoord = getJumpedCoord startCoord endCoord let jumpedPiece = square jumpedCoord board (square endCoord board).IsNone && jumpedPiece.IsSome && jumpedPiece.Value.Player <> piece.Playerlet internal isValidHop startCoord endCoord (board :Board) = match (square startCoord board).Value.PieceType with | PieceType.Checker -> isValidCheckerHop startCoord endCoord board | PieceType.King -> isValidKingHop endCoord boardlet internal isValidJump startCoord endCoord (board :Board) = match (square startCoord board).Value.PieceType with | PieceType.Checker -> isValidCheckerJump startCoord endCoord board | PieceType.King -> isValidKingJump startCoord endCoord boardlet internal hasValidHop startCoord (board :Board) = let hopCoords = [ offset startCoord {Row = -1; Column = 1}; offset startCoord {Row = -1; Column = -1}; offset startCoord {Row = 1; Column = 1}; offset startCoord {Row = 1; Column = -1} ] let flattenedList = seq { for coord in hopCoords do yield coordExists coord && isValidHop startCoord coord board } flattenedList |> Seq.exists idlet internal hasValidJump startCoord (board :Board) = let jumpCoords = [ offset startCoord {Row = -2; Column = 2}; offset startCoord {Row = -2; Column = -2}; offset startCoord {Row = 2; Column = 2}; offset startCoord {Row = 2; Column = -2} ] let flattenedList = seq { for coord in jumpCoords do yield coordExists coord && isValidJump startCoord coord board } flattenedList |> Seq.exists idlet internal jumpAvailable player (board :Board) = let pieceHasJump row column = let piece = board.[row].[column] piece.IsSome && piece.Value.Player = player && hasValidJump { Row = row; Column = column } board let flattenedList = seq { for row in 0 .. Rows do for column in 0 .. Columns do yield (pieceHasJump row column) } flattenedList |> Seq.exists idlet internal moveAvailable (board :Board) player = let pieceHasMove row column = let piece = board.[row].[column] piece.IsSome && piece.Value.Player = player && (hasValidJump { Row = row; Column = column } board || hasValidHop { Row = row; Column = column } board) let flattenedList = seq { for row in 0 .. Rows do for column in 0 .. 
Columns do yield (pieceHasMove row column) } flattenedList |> Seq.exists idlet isWon (board :Board) = match (moveAvailable board) with | x when not <| x White -> Some Black | x when not <| x Black -> Some White | _ -> Nonelet internal setPieceAt coord piece (board :Board) = let boardItems = List.init (Rows + 1) (fun row -> match row with | i when i = coord.Row -> List.init (Columns + 1) (fun col -> match col with | j when j = coord.Column -> piece | _ -> board.[row].[col] ) | _ -> board.[row] ) boardItemslet internal jump startCoord endCoord (board :Board) = let kingRowIndex = kingRowIndex((square startCoord board).Value.Player) let piece = match endCoord.Row with | row when row = kingRowIndex -> Some <| Promote (square startCoord board).Value | _ -> (square startCoord board) let jumpedCoord = getJumpedCoord startCoord endCoord board |> setPieceAt startCoord None |> setPieceAt endCoord piece |> setPieceAt jumpedCoord Nonelet internal hop startCoord endCoord (board :Board) = let kingRowIndex = kingRowIndex (square startCoord board).Value.Player let piece = match endCoord.Row with | row when row = kingRowIndex -> Some <| Promote (square startCoord board).Value | _ -> (square startCoord board) board |> setPieceAt startCoord None |> setPieceAt endCoord piecelet internal playerTurnEnds (move :Move) (originalBoard :Board) (currentBoard :Board) = let lastMoveWasJump = Math.Abs(move.[0].Row - move.[1].Row) = 2 let pieceWasPromoted = (square (List.last move) currentBoard).Value.PieceType = King && (square move.[0] originalBoard).Value.PieceType = Checker pieceWasPromoted || not (lastMoveWasJump && hasValidJump (List.last move) currentBoard)let public isValidMove startCoord endCoord (board :Board) = coordExists startCoord && coordExists endCoord && moveIsDiagonal startCoord endCoord && (square startCoord board).IsSome && match Math.Abs(startCoord.Row - endCoord.Row) with | 1 -> isValidHop startCoord endCoord board && not <| jumpAvailable (square startCoord board).Value.Player board | 2 -> isValidJump startCoord endCoord board | _ -> falselet public movePiece startCoord endCoord (board :Board) :Option<Board> = match isValidMove startCoord endCoord board with | false -> None | true -> match Math.Abs(startCoord.Row - endCoord.Row) with | 1 -> Some <| hop startCoord endCoord board | 2 -> Some <| jump startCoord endCoord board | _ -> Nonelet rec public moveSequence (coordinates :Coord seq) (board :Option<Board>) = let coords = List.ofSeq(coordinates) match board with | None -> None | Some b -> match coords.Length with | b when b >= 3 -> let newBoard = movePiece coords.Head coords.[1] board.Value moveSequence coords.Tail newBoard | _ -> movePiece coords.Head coords.[1] board.Valuelet internal uncheckedMovePiece startCoord endCoord (board :Board) = match Math.Abs(startCoord.Row - endCoord.Row) with | 1 -> hop startCoord endCoord board | 2 -> jump startCoord endCoord boardlet rec internal uncheckedMoveSequence (coordinates :Coord seq) (board :Board) = let coords = List.ofSeq(coordinates) match coords.Length with | b when b >= 3 -> let newBoard = uncheckedMovePiece coords.Head coords.[1] board uncheckedMoveSequence coords.Tail newBoard | _ -> uncheckedMovePiece coords.Head coords.[1] boardAmericanCheckersAI.fs contains the variant-specific logic for AI:module Checkers.AIs.AmericanCheckersAIopen Checkers.Boardopen Checkers.Variants.AmericanCheckersopen Checkers.Typesopen Systemlet checkerWeights = [[0.0; 3.20; 0.0; 3.20; 0.0; 3.20; 0.0; 3.10]; [1.15; 0.0; 1.05; 0.0; 1.0; 0.0; 1.10; 0.0]; [0.0; 1.10; 0.0; 1.0; 
0.0; 1.05; 0.0; 1.15]; [1.15; 0.0; 1.05; 0.0; 1.0; 0.0; 1.10; 0.0]; [0.0; 1.10; 0.0; 1.0; 0.0; 1.05; 0.0; 1.15]; [1.15; 0.0; 1.05; 0.0; 1.0; 0.0; 1.10; 0.0]; [0.0; 1.10; 0.0; 1.0; 0.0; 1.05; 0.0; 1.15]; [3.10; 0.0; 3.20; 0.0; 3.20; 0.0; 3.20; 0.0]]let kingWeights = [[0.0; 1.05; 0.0; 1.0; 0.0; 1.0; 0.0; 1.0]; [1.05; 0.0; 1.10; 0.0; 1.05; 0.0; 1.05; 0.0]; [0.0; 1.10; 0.0; 1.15; 0.0; 1.10; 0.0; 1.0]; [1.0; 0.0; 1.15; 0.0; 1.20; 0.0; 1.05; 0.0]; [0.0; 1.05; 0.0; 1.20; 0.0; 1.15; 0.0; 1.0]; [1.0; 0.0; 1.10; 0.0; 1.15; 0.0; 1.10; 0.0]; [0.0; 1.05; 0.0; 1.05; 0.0; 1.10; 0.0; 1.05]; [1.0; 0.0; 1.0; 0.0; 1.0; 0.0; 1.05; 0.0]]let isPlayerPiece player coord (board :Board) = let piece = square coord board piece.IsSome && player = piece.Value.Playerlet nextPoint coord = match coord with | c when c.Row = Rows && c.Column = Columns -> None | c when c.Column = Columns -> Some {Row = c.Row + 1; Column = 0} | _ -> Some {coord with Column = coord.Column + 1}let calculateCheckerWeight coord (board :Board) = let piece = (square coord board).Value let kingRow = kingRowIndex piece.Player let weight = 8.0 - (float <| Math.Abs(kingRow - coord.Row)) + (square coord checkerWeights) match piece.Player with | Black -> weight | White -> -weightlet calculateKingWeight coord (board :Board) = let piece = (square coord board).Value let weight = 8.0 + (square coord kingWeights) match piece.Player with | Black -> weight | White -> -weightlet calculatePieceWeight coord (board :Board) = let piece = square coord board match piece.Value.PieceType with | Checker -> calculateCheckerWeight coord board | King -> calculateKingWeight coord boardlet calculateWeight player (board :Board) = let rec loop (weight :float) coord :float = match nextPoint coord with | Some c -> match isPlayerPiece player coord board with | true -> loop (weight + (calculatePieceWeight coord board)) c | false -> loop weight c | None -> weight loop 0.0 {Row = 0; Column = 0}let calculateWeightDifference (board :Board) = let rec loop (weight :float) coord = match nextPoint coord with | Some c -> let piece = square coord board match piece.IsSome with | true -> loop (weight + (calculatePieceWeight coord board)) c | false -> loop weight c | None -> weight loop 0.0 {Row = 0; Column = 0}let checkerJumps player = match player with | White -> [{Row = -2; Column = -2}; {Row = -2; Column = 2}] | Black -> [{Row = 2; Column = -2}; {Row = 2; Column = 2}]let kingJumps player = (checkerJumps player) @ (match player with | White -> [{Row = 2; Column = -2}; {Row = 2; Column = 2}] | Black -> [{Row = -2; Column = -2}; {Row = -2; Column = 2}])let checkerHops player = match player with | White -> [{Row = -1; Column = -1}; {Row = -1; Column = 1}] | Black -> [{Row = 1; Column = -1}; {Row = 1; Column = 1}]let kingHops player = (checkerHops player) @ (match player with | White -> [{Row = 1; Column = -1}; {Row = 1; Column = 1}] | Black -> [{Row = -1; Column = -1}; {Row = -1; Column = 1}])let getPieceSingleJumps coord (board :Board) = let piece = (square coord board).Value let moves = match piece.PieceType with | Checker -> checkerJumps piece.Player | King -> kingJumps piece.Player let hops = List.ofSeq (seq { for move in moves do let endCoord = offset coord move yield match coordExists endCoord && isValidJump coord endCoord board with | true -> Some [coord; endCoord] | false -> None }) List.map (fun (item :Option<Move>) -> item.Value) (List.where (fun (item :Option<Move>) -> item.IsSome) hops)let rec createMoveTree (move :Move) (board :Board) = let moveTree = { Move = move; Parent = None; 
Children = let newBoard = if move.Length = 1 then board else uncheckedMoveSequence move board let newJumps = getPieceSingleJumps (List.last move) newBoard let newMoveEndCoords = List.map (fun item -> List.last item) newJumps let oldPieceType = (square move.Head board).Value.PieceType let newPieceType = (square (List.last move) newBoard).Value.PieceType match newMoveEndCoords.IsEmpty || (oldPieceType = Checker && newPieceType = King) with | false -> let moves = List.map (fun (item :Coord) -> move @ [item]) newMoveEndCoords let children = List.map (fun item -> createMoveTree item board) moves Some children | true -> None } moveTreelet getPieceJumps coord (board :Board) = let moves = new System.Collections.Generic.List<Move>() let rec loop (moveTree :MoveTree) = match moveTree.Children with | None -> moves.Add(moveTree.Move) | Some t -> List.iter (fun item -> (loop item)) t let moveTree = createMoveTree [coord] board match moveTree.Children with | Some t -> loop <| createMoveTree [coord] board | None -> () List.ofSeq moveslet getPieceHops coord (board :Board) = let piece = (square coord board).Value let moves = match piece.PieceType with | Checker -> checkerHops piece.Player | King -> kingHops piece.Player let hops = List.ofSeq (seq { for move in moves do let endCoord = offset coord move yield match coordExists endCoord && isValidHop coord endCoord board with | true -> Some [coord; endCoord] | false -> None }) List.map (fun (item :Option<Move>) -> item.Value) (List.where (fun (item :Option<Move>) -> item.IsSome) hops)let calculateMoves player (board :Board) = let rec loop jumpAcc hopAcc coord = match isPlayerPiece player coord board with | true -> let newJumpAcc = getPieceJumps coord board @ jumpAcc match newJumpAcc with | [] -> let newHopAcc = getPieceHops coord board @ hopAcc match nextPoint coord with | Some c -> loop newJumpAcc newHopAcc c | None -> newHopAcc | _ -> match nextPoint coord with | Some c -> loop newJumpAcc [] c | None -> newJumpAcc | false -> match nextPoint coord with | Some c -> loop jumpAcc hopAcc c | None -> jumpAcc @ hopAcc loop [] [] {Row = 0; Column = 0}GameController.fs contains the state of the game as a whole. In the future, this will be expanded to tracking move history and other relevant information.module public Checkers.GameControlleropen Checkers.Typesopen Checkers.Boardtype GameController = { Board :Board; CurrentPlayer :Player; CurrentCoord :Option<Coord> }let newGame = { Board = Board.defaultBoard; CurrentPlayer = Black; CurrentCoord = None }Minimax.fs contains the general logic for the AI's. This module contains some very nasty code, including a few mutable variables. 
I'm sure my other code is a mess as well, but I'd like special attention for this.module internal Checkers.Minimaxopen Checkers.Typesopen Checkers.Boardopen Checkers.FSharpExtensionsopen Checkers.Variants.AmericanCheckersopen Checkers.AIs.AmericanCheckersAIlet rec internal bestMatchInList player highestDifference moveForHighestDifference (list :List<float * Move>) = let head::tail = list let weight = fst head let newMoveForHighestDifference = match player with | Black -> match weight > highestDifference with | true -> snd head | false -> moveForHighestDifference | White -> match weight < highestDifference with | true -> snd head | false -> moveForHighestDifference let newHighestDifference = (highestDifference, weight) ||> match player with | Black -> max | White -> min match tail with | [] -> (highestDifference, newMoveForHighestDifference) | _ -> bestMatchInList player newHighestDifference newMoveForHighestDifference list.Taillet internal chooseNewAlpha currentAlpha (candidateAlpha :float Option) = match currentAlpha with | Some x -> if candidateAlpha.IsSome then Some <| max x candidateAlpha.Value else currentAlpha | None -> candidateAlphalet internal chooseNewBeta currentBeta (candidateBeta :float Option) = match currentBeta with | Some x -> if candidateBeta.IsSome then Some <| min x candidateBeta.Value else currentBeta | None -> candidateBetalet rec minimax player searchDepth alpha beta (board :Board) = match searchDepth = 0 || (isWon board).IsSome with | true -> let weightDifference = Some <| calculateWeightDifference board let newAlpha = if player = Black then weightDifference else alpha let newBeta = if player = White then weightDifference else beta { Alpha = newBeta; Beta = newAlpha; Move = [] } | false -> let moves = calculateMoves player board let mutable alphaForNode = None let mutable betaForNode = None let mutable newAlpha = alpha let mutable newBeta = beta let mutable move = [] if searchDepth <> 0 then ignore <| List.map (fun x -> if newAlpha.IsNone || newBeta.IsNone || newAlpha.Value < newBeta.Value then let newBoard = uncheckedMoveSequence x board let alphaBetaMove = minimax (otherPlayer player) (searchDepth - 1) alphaForNode betaForNode newBoard match player with | Black -> alphaForNode <- chooseNewAlpha alphaForNode alphaBetaMove.Alpha newAlpha <- chooseNewAlpha newAlpha alphaForNode move <- if newAlpha = alphaBetaMove.Alpha then x else move | White -> betaForNode <- chooseNewBeta betaForNode alphaBetaMove.Beta newBeta <- chooseNewBeta newBeta betaForNode move <- if newBeta = alphaBetaMove.Beta then x else move ()) moves { Alpha = betaForNode; Beta = alphaForNode; Move = move }PublicAPI.fs is really kind of a wrapper that containing methods that are meant to be called from outside the library. 
On this note, note that all the internal methods are really meant to be private to the class, but xUnit doesn't support Portable Class Libraries (PCLs), so I need to expose them to the test library.module public Checkers.PublicAPIopen Checkers.Variantsopen Checkers.Typesopen Checkers.Boardopen Checkers.FSharpExtensionsopen Checkers.Variants.AmericanCheckersopen Checkers.Minimaxopen Checkers.GameControlleropen Systemlet isValidMove startCoord endCoord gameController = isValidMove startCoord endCoord gameController.Board && (square startCoord gameController.Board).Value.Player = gameController.CurrentPlayer && match gameController.CurrentCoord with | None -> true | coord -> startCoord = coord.Valuelet movePiece startCoord endCoord gameController :Option<GameController> = let board = movePiece startCoord endCoord gameController.Board match (isValidMove startCoord endCoord gameController) with | true -> Some <| { Board = board.Value; CurrentPlayer = match playerTurnEnds [startCoord; endCoord] gameController.Board board.Value with | true -> otherPlayer gameController.CurrentPlayer | false -> gameController.CurrentPlayer CurrentCoord = match playerTurnEnds [startCoord; endCoord] gameController.Board board.Value with | true -> None | false -> Some endCoord } | false -> Nonelet move (move :Coord seq) (gameController) :Option<GameController> = let board = moveSequence move (Some gameController.Board) match board with | Some b -> Some <| { Board = board.Value; CurrentPlayer = match playerTurnEnds (List.ofSeq move) gameController.Board board.Value with | true -> otherPlayer gameController.CurrentPlayer | false -> gameController.CurrentPlayer CurrentCoord = match playerTurnEnds (List.ofSeq move) gameController.Board board.Value with | true -> None | false -> Some (Seq.last move) } | None -> Nonelet getMove searchDepth gameController = (minimax gameController.CurrentPlayer searchDepth None None gameController.Board).Movelet isWon controller = isWon controller.Board | American Checkers with AI | game;f#;ai;checkers draughts | I don't like these at all:let internal chooseNewAlpha currentAlpha (candidateAlpha :float Option) = match currentAlpha with | Some x -> if candidateAlpha.IsSome then Some <| max x candidateAlpha.Value else currentAlpha | None -> candidateAlphalet internal chooseNewBeta currentBeta (candidateBeta :float Option) = match currentBeta with | Some x -> if candidateBeta.IsSome then Some <| min x candidateBeta.Value else currentBeta | None -> candidateBetaIn our chat you state you were told to use if when matching against a simple boolean (there's no specific reason to use it over match, or match over it for that situation, so I'll not comment on that) but you can rewrite that with one match instead of a match with a nested if:let internal chooseNewAlpha currentAlpha (candidateAlpha :float Option) = match (currentAlpha, candidateAlpha) with | (Some current, Some candidate) -> Some <| max current candidate | (Some current, None) -> Some current | (None, Some candidate) -> Some candidate | _ -> Nonelet internal chooseNewBeta currentBeta (candidateBeta :float Option) = match (currentBeta, candidateBeta) with | (Some current, Some candidate) -> Some <| min current candidate | (Some current, None) -> Some current | (None, Some candidate) -> Some candidate | _ -> NoneWhy match with a Tuple? Because current and candidate are codependent: each one can affect the result of the other. 
So we want to take both into account idiomatically.let internal moveIsDiagonal startCoord endCoord = startCoord <> endCoord && System.Math.Abs(startCoord.Row - endCoord.Row) = System.Math.Abs(startCoord.Column - endCoord.Column)F# has an abs method: startCoord <> endCoord && abs (startCoord.Row - endCoord.Row) = abs (startCoord.Column - endCoord.Column).I would consider a refactor:[<Literal>]let Rows = 7[<Literal>]let Columns = 7You use Rows and 0 in the same place, but now 0 doesn't make sense.let internal kingRowIndex(player) = match player with | Player.Black -> Rows | Player.White -> 0You should have:[<Literal>]Rows = 8[<Literal>]Columns = 8[<Literal>]FirstRow = 0[<Literal>]LastRow = Rows - 1[<Literal>]FirstColumn = 0[<Literal>]LastColumn = Columns - 1Thus we have:let internal kingRowIndex(player) = match player with | Player.Black -> LastRow | Player.White -> FirstRowSome of your functions can take advantage of function composition:let internal isValidJump startCoord endCoord (board :Board) = match (square startCoord board).Value.PieceType with | PieceType.Checker -> isValidCheckerJump startCoord endCoord board | PieceType.King -> isValidKingJump startCoord endCoord boardCould be something like:let internal isValidJump startCoord endCoord (board:Board) = let jumpFunc = match (square startCoord board).Value.PieceType with | PieceType.Checker -> isValidCheckerJump | PieceType.King -> isValidKingJump jumpFunc startCoord endCoord boardlet getPieceHops coord (board :Board) = let piece = (square coord board).Value let moves = match piece.PieceType with | Checker -> checkerHops piece.Player | King -> kingHops piece.Player let hops = List.ofSeq (seq { for move in moves do let endCoord = offset coord move yield match coordExists endCoord && isValidHop coord endCoord board with | true -> Some [coord; endCoord] | false -> None }) List.map (fun (item :Option<Move>) -> item.Value) (List.where (fun (item :Option<Move>) -> item.IsSome) hops)Boy is that a mess.You use a for loop (not F# idiomatic), filter in the for loop and return either None or Some, then use a List.map on a List.where to filter only the Some values and return the actual value.You use a List.where, which is a synonym for List.filter (which is the preferred method), and the worst part is you use it on a list that could have already been filtered.let getPieceHops coord (board :Board) = let piece = (square coord board).Value let moves = match piece.PieceType with | Checker -> checkerHops piece.Player | King -> kingHops piece.Player let hopsFilter = List.filter (fun (head::tail) -> let startCoord = head let endCoord = tail |> List.head coordExists endCoord && isValidHop startCoord endCoord board) moves |> List.map (fun move -> [coord; offset coord move]) |> hopsFilterWe eliminated multiple excessive methods, and return the same thing you did initially. (Or should have.)I've finally had time to review minimax, and I think this should be a suitable version that uses tail-call recursion and should do what you want.Do note I've not tested this at all yet, this was a rewrite in Notepad. 
Something like the following should work, though you've mentioned to me in chat, so I've updated it substantially.let rec minimax player searchDepth alpha beta (board:Board) = match searchDepth = 0 || (isWon board).IsSome with | true -> let weightDifference = Some <| calculateWeightDifference board let newAlpha = match player with | Black -> weightDifference | _ -> alpha let newBeta = match player with | White -> weightDifference | _ -> beta { Alpha = newBeta; Beta = newAlpha; Move = [] } | false -> let rec loop alphaForNode betaForNode newAlpha newBeta move moves = match List.isEmpty moves with | true -> { Alpha = betaForNode; Beta = alphaForNode; Move = move } | false -> let currentMove = moves |> List.head match newAlpha.IsNone || newBeta.IsNone || newAlpha.Value < newBeta.Value with | false -> loop alphaForNode betaForNode newAlpha newBeta move (moves |> List.tail) | true -> let newBoard = uncheckedMoveSequence currentMove board let alphaBetaMove = minimax (otherPlayer player) (searchDepth - 1) alphaForNode betaForNode newBoard match player with | Black -> let newAlphaForNode = chooseNewAlpha alphaForNode alphaBetaMove.Alpha let newNewAlpha = choseNewAlpha newAlpha newAlphaForNode let newMove = match newNewAlpha with | a when a = alphaBetaMove.Alpha -> currentMove | _ -> move loop newAlphaForNode betaForNode newNewAlpha newBeta newMove (moves |> List.tail) | White -> let newBetaForNode = chooseNewBeta betaForNode alphaBetaMove.Beta let newNewBeta = chooseNewBeta newBeta newBetaForNode let newMove = match newNewBeta with | b when b = alphaBetaMove.Beta -> currentMove | _ -> move loop alphaForNode newBetaForNode newAlpha newNewBeta newMove (moves |> List.tail) let moves = calculateMoves player board loop None None alpha beta [] movesOf course that's huge and does a lot of stuff. 
We obviously want to break it down.So we'll extract our match player with to use some methods, function composition, etc.It's still long, but it's slightly more maintainable.let rec minimax player searchDepth alpha beta (board:Board) = match searchDepth = 0 || (isWon board).IsSome with | true -> let weightDifference = Some <| calculateWeightDifference board let newAlpha = match player with | Black -> weightDifference | _ -> alpha let newBeta = match player with | White -> weightDifference | _ -> beta { Alpha = newBeta; Beta = newAlpha; Move = [] } | false -> let getNewValueAndMove chooseMethod nodeValue moveValue currentValue currentMove newMove = let newNodeValue = chooseMethod nodeValue moveValue let newValue = chooseMethod currentValue newNodeValue let finalMove = match newValue with | a when a = moveValue -> currentMove | _ -> newMove (newNodeValue, newValue, finalMove) let rec loop alphaForNode betaForNode newAlpha newBeta move moves = match List.isEmpty moves with | true -> { Alpha = betaForNode; Beta = alphaForNode; Move = move } | false -> let currentMove = moves |> List.head match newAlpha.IsNone || newBeta.IsNone || newAlpha.Value < newBeta.Value with | false -> loop alphaForNode betaForNode newAlpha newBeta move (moves |> List.tail) | true -> let newBoard = uncheckedMoveSequence currentMove board let alphaBetaMove = minimax (otherPlayer player) (searchDepth - 1) alphaForNode betaForNode newBoard match player with | Black -> let (newAlphaForNode, newNewAlpha, newMove) = getNewValueAndMove chooseNewAlpha alphaForNode alphaBetaMove.Alpha newAlpha currentMove move loop newAlphaForNode betaForNode newNewAlpha newBeta newMove (moves |> List.tail) | White -> let (newBetaForNode, newNewBeta, newMove) = getNewValueAndMove chooseNewBeta betaForNode alphaBetaMove.Beta newBeta currentMove move loop alphaForNode newBetaForNode newAlpha newNewBeta newMove (moves |> List.tail) loop None None alpha beta [] (calculateMoves player board)Finally, PublicAPI: I hate the whitespace there, the way you indented that.Generally, if I have to multi-line things like that I line break and then start indentation.let movePiece startCoord endCoord gameController :Option<GameController> = let board = movePiece startCoord endCoord gameController.Board match (isValidMove startCoord endCoord gameController) with | false -> None | true -> Some <| { Board = board.Value; CurrentPlayer = match playerTurnEnds [startCoord; endCoord] gameController.Board board.Value with | true -> otherPlayer gameController.CurrentPlayer | false -> gameController.CurrentPlayer CurrentCoord = match playerTurnEnds [startCoord; endCoord] gameController.Board board.Value with | true -> None | false -> Some endCoord }let move (move :Coord seq) (gameController) :Option<GameController> = let board = moveSequence move (Some gameController.Board) match board with | None -> None | Some b -> Some <| { Board = board.Value; CurrentPlayer = match playerTurnEnds (List.ofSeq move) gameController.Board board.Value with | true -> otherPlayer gameController.CurrentPlayer | false -> gameController.CurrentPlayer CurrentCoord = match playerTurnEnds (List.ofSeq move) gameController.Board board.Value with | true -> None | false -> Some (Seq.last move) }It's a lot easier to follow. I also tend to put the single-line match patterns at the top, where it allows, so that there's not one line underneath a massive block like that just hanging out.Overall it was quite good, excepting the few idiomatic issues. |
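One more follow-on to the chooseNewAlpha/chooseNewBeta rewrite in the review above: since the two versions differ only in whether they combine with max or min, they could share a single helper. A hedged sketch in the same F# style (the name combineOption is mine, not from the reviewed code):

    let internal combineOption f current candidate =
        match (current, candidate) with
        | (Some x, Some y) -> Some (f x y)   // both present: combine with max or min
        | (Some x, None)   -> Some x         // otherwise keep whichever side has a value
        | (None, other)    -> other

    let internal chooseNewAlpha current candidate = combineOption max current candidate
    let internal chooseNewBeta  current candidate = combineOption min current candidate

Whether the extra indirection is worth it is a judgment call; it mostly pays off if more lattice-style combines show up later.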
_cs.74985 | I am performing some inter-procedural abstract interpretation to capture certain program properties. Following the standard way, what I am doing is to maintain a work list of to-be-analyzed functions, and pick one from the work list to perform intra-procedural analysis until the work list is empty.Currently, I observed that the intra-procedural analysis of certain functions would not terminate. Basically, it is just taking too much memory and become too slow to reason for even one more statement.Of course, I can optimize my code base and improve the memory efficiency, but fundamentally that shouldn't help too much. Now I am seeking for some strategies to bypass such obstacle. Intuitively, what I can do is to cut a intra-procedural analysis off whenever the memory usage grows too high, return a Top as the return value, and switch to the next function in my work list for analysis.This seems very naturally, and should be sound as well! However, after some quick review of some abstract interpretation work, I don't see any related work performing such cut off. So here is my question:What is the standard approach in abstract interpretation if a intra-procedural analysis becomes too costly to finish? I assume the cut off and return Top approach is sound. Then why shouldn't I find any related literature employing such approach? Is it too ad-hoc and naive? | Cut costly functions in inter-procedural abstract interpretation | programming languages;formal methods;data flow analysis;static analysis | Sure. Cut off and return Top is sound. It's valid. It seems like a reasonable thing to do.I don't know if it has been studied or used before in the literature. I wouldn't be surprised if it has. It feels to me like a variant on widening, even though it's not identical to the usual widening operator. I've seen similar methods applied before (if you get stuck, give up and return Top) in other situations.I don't think there's a single standard approach to this problem. My impression is that there are multiple techniques. You can memoize analysis of a function (so you lazily generate a transfer function that summarizes the effect of the function, in terms of its arguments, and then reuse that information when you need it again in the future). You can use widening operators. You can use counter-example guided refinement to start with a coarse abstraction (which is efficient for static analysis), and then make the abstraction more precise as needed. I imagine you could also do the converse: start with a fine-grained abstraction, and if it is taking too long, switch to a more coarse abstraction and restart the analysis with that coarser abstraction. There are probably many other possible techniques, and the best one might depend on the specifics of your particular analysis. |
_hardwarecs.1258 | Does it exist a USB/bluetooth flash drive? I frequently need to copy data from a personal computer (I do not have bluetooth dongle) and a tablet. So I'm thinking about an hardware key acting as a flash drive but with both USB (for the PC) and bluetooth (for the tablet) interfaces. | Flash drive with USB and bluetooth interface | usb;bluetooth;wireless | null |
_cs.24494 | I'm currently taking a class in Automata Theory and it's kicking my butt. I have an assignment that my teacher gave me that consists of three questions. I have no idea where to start. My teacher and I have a language barrier that I can't seem to break. If anyone could help me understand how to solve the following problems or guide me in the right direction, I would appreciate it! Sorry if I write some of the symbols wrong. I will attach images of the handout as well an assignment/answer sheet he gave us on a previous assignment. Thank you!!Define the language generated by the grammar $G_c= (V_n, V_t, P, \delta)$ where:$V_n = \{\delta, A\}$,$V_t = \{0, 1\}$,$P = \{(\delta, A0), (\delta, 0\delta 0), (\alpha A,\alpha^T \alpha \alpha^T)\}$Give a rule tree for one sentence of the language $L(G_c)$ generated by the grammar $G_c$.Assume that we are given the language:$L(G_b) = \{0^n 1^k 0^k 1^n \mid k, n = 1, 2,... \}$.Find a context-free grammar for this language.Assume we are given the following grammar $G_a = (V_n, V_t, P, \delta)$ where:$V_n = \{\delta, A, B\}$,$V_t = \{0, 1\}$, and$P = \{(\delta, 0A), (\delta, 1B), (A, 1), (B, 0B), (B,1)\}$.Find the language $L(G_a)$ generated by the grammar $G_a$.The Assignment: http://i.imgur.com/qx2hB1u.jpgPrevious Assignment + Answers:https://imgur.com/a/jO124 | Automata Theory Questions: Rule Trees, Context-Free Grammar, Proving Ambiguity | context free;automata;trees | (1) It's always helpful to start listing some strings to get an idea of what kind of language we accept. I'm going to be making some assumptions here, but you should verify that these assumptions are true with whoever gave the assignment (they might be bad assumptions).$\delta$ is the start symbol. We want to derive this until we get a string of only terminal symbols, either $0$s or $1$s. What's the smallest string we can derive? Well, we can get $0A$ from $\delta$, and from $\alpha A$ (I assume $\alpha$ is any string), we can derive $\alpha^T \alpha \alpha^T$; therefore, from $0A$ we can derive $0^T 0 0^T$. Since $T$ isn't defined, I assume it's a free variable - from context, I assume an integer - which means it can assume any natural value. Therefore, we can derive $0$, $00$, ... (i.e., $0^+ = 00^*$) from $0A$.The next observation is that there is no way to get $\epsilon$, the empty string. Only the $A$ symbol can ever be removed, and the only productions that add $A$ also add $0$.The final observation is that productions only add the $0$ terminal symbol, so whatever language the grammar generates, it's a subset of $0^*$.In summary, we know that:1. The language contains $0^+ = 0^1 + 0^2 + ...$;2. The language doesn't contain $\epsilon = 0^0$;3. The language is a subset of $0^* = 0^0 + 0^1 + 0^2 + ...$.I'm not entirely sure what a rule tree would look like for a context-sensitive grammar, so someone else might need to address that. To get the string $000$ one could apply the rules $\delta \rightarrow 0A \rightarrow 0^T 0 0^T = 000$ by taking $T = 1$.(2) Something like this should work:Start := 0 Inner 1 | 0 Start 1Inner := 10 | 1 Inner 0Nonterminals are Start and Inner. Terminals are 0 and 1. Let's consider how this works: first, Start can generate $0^{n-1} S 1^{n-1}$ by applying the rule Start := 0 Start 1 $n-1$ times. It can then get $0^n I 1^n$ by applying Start := 0 Inner 1. Next, we can get $0^n1^{k-1}0^{k-1}1^n$ by applying Inner := 1 Inner 0 $k - 1$ times. Finally, we apply Inner := 10 once to get $0^n1^k0^k1^n$.(3) See the approach given for problem (1). 
You should find the language is exactly $01 + 10^*1$, that is, $\{01\} \cup \{1 0^n 1 \mid n \geq 0\}$: the strings $01, 11, 101, 1001, \ldots$ |
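As a concrete check of the grammars above: for (2), the string $0^2 1^1 0^1 1^2 = 001011$ is derived by Start $\Rightarrow$ 0 Start 1 $\Rightarrow$ 00 Inner 11 $\Rightarrow$ 001011, applying Inner := 10 in the last step. For (3), the string $1001 = 1 0^2 1$ comes from $\delta \Rightarrow 1B \Rightarrow 10B \Rightarrow 100B \Rightarrow 1001$, and the only derivation not of that shape is $\delta \Rightarrow 0A \Rightarrow 01$.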
_unix.2222 | Emacs lisp source code for .elc files? e.g. cal-mayan.elc
Files in the /bin directory? e.g. cat, split, and echo | Where can I find sources for | source | It depends a bit on what distribution you use. On a Debian-style system you could do something like this:
$ dpkg -S `which cat`
coreutils: /bin/cat
$ apt-get source coreutils
The last command will fetch the source archive and all the patches which were used to build the binary package that includes the cat command. Alternatively you could just google for it. Or even use Google Code Search.
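Following up on the dpkg -S / apt-get source answer above, the same route also covers the Emacs Lisp half of that question. A hedged sketch for a Debian-style system (the exact path under /usr/share/emacs and the package name depend on your Emacs version, so treat them as illustrative):

    # which installed package ships the compiled file?
    $ dpkg -S cal-mayan.elc
    # the plain-text .el source often sits right next to it (possibly gzipped)...
    $ ls /usr/share/emacs/*/lisp/calendar/cal-mayan.el*
    # ...and otherwise comes with the source package reported by dpkg -S
    $ apt-get source <package-from-dpkg-S>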
_unix.168811 | I'm using CentOS 7, and am trying to create a systemd service for my local user. However, before I can do that, I need my systemd user instance, which I cannot find or create to begin with.Running systemctl --user gives meFailed to issue method call: Process /bin/false exited with status 1Running systemctl status [email protected] (where 1000 is the id given to me from the id command) gives [email protected] Loaded: not-found (Reason: No such file or directory) Active: inactive (dead)How do I find/create my systemd user instance? | Can't start systemd user instance | centos;rhel;systemd;services | null |
_unix.209731 | I am trying to set up a Wifi hotspot on Debian 7.6. After reading a few tutorials, I have managed to get the Wifi working by installing and configuring hostapd. I need to install and configure this on many different machines, some with no internet access, thus, I downloaded the deb packages for hostapd and its dependencies from here: http://ftp.br.debian.org/debian/pool/mainand wrote a script to configure the network interfaces and what not.The problem is, if I install the required packages (libnl-genl-3-200_3.2.7-4_i386.deb, libnl-3-200_3.2.7-4_i386.deb and hostapd_1.0-3+deb7u2_i386.deb) using dpkg rather than using apt-get, the machine hangs at boot, however the Wifi is up and running.After manual install and configuration, running service hostapd restart hangs when it attempts to start the service. While the the terminal hangs at [....] Starting advanced IEEE 802.11 management: hostapd. the wifi becomes accessible and I can connect to it successfully. Terminating the hanging process by pressing Ctrl+C will kill the Wifi connection. It feels like hostapd is not started in a separate thread, and thus hangs.I have compared the md5sum of manually downloaded packages and those found in /var/cache/apt/archives and the md5sums match up, so it must be config, yet the config I apply when install via apt-get and dpkg are identical. I have read that apt-get uses dpkg to install the actual packages, so I have no idea how this happens with the same debs and same applied config. Any pointers will be appreciated. | Installing hostapd via apt-get and dpkg results in different boot behaviour | apt;dpkg;hostapd | null |
_webmaster.79231 | Currently when we search site:domain.com on google we get a return of 3.6 million pages indexed, About 3,650,000 results (0.31 seconds). When we search site:domain.com/childdirectory we get 3.8 million pages indexed, About 3,770,000 results (0.60 seconds). How is it possible that the child directory has more pages indexed than the parent? Is each directory's index counted separately? | Google Site: listing Parent Domain directory Index | google search;indexing;pagerank;subdirectory | Using site: is highly unreliable for counting indexed pages. It is often out of date, incorrect and actual serps is limited to sample data only. You should opt to use Google Webmaster Tools for a more accurate index count. |
_webapps.14235 | I just starting using rescuetime to try to keep track of my productivity, but I've run into a bit of a snag.Generally, I keep track of my unproductive time by being in incognito mode in Google Chrome. However, rescuetime doesn't distinguish between incognito mode and just browsing normally.Is there a way to get rescuetime to treat incognito mode specially, without using rescuetime's whitelist?Thanks! I would gladly provide any additional details, and apologies for being so terse. | How to hide data from rescuetime while in incognito | google chrome;web history | null |
_unix.336831 | I have a Netgear NAS (ReadyNAS Duo v2) running a custom variant of Debian 6.0.3 squeeze, named RAIDiator-arm 5.3.12 and there are no further upgrades as the device is no longer supported. I would like to install ffmpeg on this system but there are some conflicting packages. Here is the output of aptitude install ffmpeg:The following packages have unmet dependencies: libavfilter0: Depends: libavcodec52 (< 4:0.5.10-99) but 4:0.6.6-.netgear2 is installed. or libavcodec-extra-52 (< 4:0.5.10-99) which is a virtual package. libavdevice52: Depends: libavcodec52 (< 4:0.5.10-99) but 4:0.6.6-1.netgear2 is installed. or libavcodec-extra-52 (< 4:0.5.10-99) which is a virtual package. Depends: libavformat52 (< 4:0.5.10-99) but 4:0.6.6-1.netgear2 is installed. or libavformat-extra-52 (< 4:0.5.10-99) which is a virtual package.It appears some netgear-specific version of the libavcodec52 package is hindering the installation. What can I do?I had the idea of cross-compiling a static build of ffmpeg on my computer but run into other problems. | Install ffmpeg on Netgear NAS running custom Debian Squeeze | debian;apt;ffmpeg;aptitude | null |
_unix.254402 | Using BSD sed, how can I perform the following substitution? Before:
hello hello hello
hello hello hello
After:
hello world hello
hello hello hello
In other words, how can I replace only the Nth occurrence of a pattern? (Or in this case, the 2nd occurrence of a pattern?) | BSD sed: Replace only the Nth occurrence of a pattern | text processing;sed;regular expression;freebsd;bsd | null
_unix.220042 | #!/bin/sh# This script is for checking the status of SSHD2 Number of connectionsCHKOUT=Please also Check the Number of ORPHAN / DEFUNCT Process on `hostname` :DFOUT=`/usr/bin/ps -eaf | grep defucnt`SHCOUNT=`/usr/bin/pgrep sshd2|wc -l`if [ $SHCOUNT -gt 150 ]then mailx -s Warning! : `hostname` has more than $SHCOUNT SSHD2 Connections running Please Check! [email protected] << EOF`echo $CHKOUT``echo ==========================================================``echo $DFOUT`EOFfi======================================Hello Experts.. I am unable to get the output of DFOUT Variable, Can you guys suggest the correct way of doing this?===================Edited code that is working nice..#!/bin/sh# This script is for checking the status of SSHD2 Number of connectionsCHKOUT=Please also Check the Number of ORPHAN / DEFUNCT Process on `hostname` :PS=`/usr/bin/ps -eaf | grep -v grep| grep defunct`SHNT=`/usr/bin/pgrep ssh|wc -l`if [ $SHNT -gt 15 ]then mailx -s Warning! `hostname` has more than $SHNT SSHD2 Connections running Please Check! [email protected] << EOF Hi Team, $CHKOUT =========================================================== $PSEOFfi | How to call variable in KSH shell script | shell script | null |
_codereview.41795 | In my first iterative algorithm, I do thisfor (auto &widget : controls.getWidgets()){ if (!widget->visible) continue; widget->draw(); for (auto &widget_component : widget->components) { if (!widget_component->visible) continue; widget_component->draw(); for (auto &ww : widget_component->components) { if (!ww->visible) continue; ww->draw(); } }}Now, I see the flaw in this algorithm because of the repetition of code.I try to write a recursive function call to handle this thing.void swcApplication::recursiveDisplay(swcWidget *next){ if (next == nullptr) return; if ((next->parent != nullptr) && !next->parent->visible) return; if (next->visible) next->draw(); for (auto &i : next->components) { recursiveDisplay(i); }}void display(){ for (auto &widget : controls.getWidgets()) { recursiveDisplay(widget); }}Display first the parent before all other children, if parent is not visible, then don't draw its childrenIs the above phrase satisfy by this algorithm (As far as I can see, it is)? Is this optimal? I don't know what drawbacks might occur here because I just write in a few seconds.If you didn't find anything wrong here, now, how can I put it back in iterative way? I know iteration is better.UpdateI didn't use any z attribute here and instead I go for painter's algorithm.The flaw I am referring in my iteration version is that, it is limited to draw widgets depends on how deep I will code for iteration. | Iteration to recursive function | c++;optimization;c++11;recursion;iteration | null |
_codereview.166966 | I started learning Python just yesterday, and so I am aware of my lack of knowledge regarding the language. Below is a program I wrote to simulate a guessing game. I ended up creating a lot of ad hoc patches for some possible errors, and suspect that there are more elegant ways to write the program. I'd like pointers/assistance/an example of a best practices version of the program I write below:import random, syssys.setrecursionlimit(10000) #I expect any troll who's trying to fuck with the program to give up before they reach 10,000.control = True #global variable to control the behaviour of the 'guessNumber()' function.errors = 0 #To store the number of errors a user triggers.def guessNumber(maxx): global errors global control try: maxx = float(maxx) #Incase a cheeky user decided to submit floating point numbers. if int(maxx) != maxx: #If they submitted a floating point number instead of an integer. print(Input whole numbers only.) errors += 1 guessNumber(input('Input the Maximum number: \t')) #Call a new instance of 'guessNumber()' else: #If they indeed submitted a whole number. maxx = int(maxx) except ValueError: #They didn't submit a whole number. print(Input whole numbers only.) errors += 1 guessNumber(input('Input the Maximum number: \t')) #Call a new instance of 'guessNumber()' if not control: #control variable. If the program has already properly evaluated, it should terminate. return None print('I am thinking of a whole number between 1 and ' + str(maxx) + ' (both inclusive).') num = random.randint(1, maxx) count = 0 check = True var = None while check: print(Guess the number.) try: var = float(input()) #Incase a cheeky user decided to submit floating point numbers. if int(var) != var: #If they submitted a floating point number instead of an integer. print(Input whole numbers only.) errors += 1 continue else: #If they indeed submitted a whole number. var = int(var) except ValueError: #They didn't submit a whole number. print(Input whole numbers only.) errors += 1 continue count += 1 if var == num: #If they guess the number right. check = False elif var > num: print('Too high! \nTry a little lower.') elif var < num: print('Too low! \nTry a little higher.') print('Congratulations!!!\nYou guessed the number after ' + str(count) + tries and triggering + str(errors) + errors.) control = False #After every completed execution, we set control to false so that if the function was called from another instance of itself it terminates and doesn't cause errors. return NoneguessNumber(input('Input the Maximum number (integers only): \t'))I am particularly worried about the memory cost, due to using recursion to deal with errors. | Number guess game program in python 3 | python;beginner;python 3.x;number guessing game;memory optimization | null |
_codereview.124390 | I've made a data structure for mathematical expressions. I want to parse mathematical expressions like:\$x = 3\$\$y = 4\$\$z = x + y :\$into an evaluated document like:\$x = 3\$\$y = 4\$\$z = x + y : 7\$where \$=\$ is assignment and \$:\$ is evaluation.The data structure must handle errors:Invalid input like multiple equals signsInvalid expressions like referencing an undefined variableSmells and commentsHaskell is fantastic for this, but I'm still struggling with algebraic data structures; I'm used to object-oriented design. Because of this, I'd like some feedback!Smells:Structure for output document relates arbitrarily on structure for input documentsPicking type vs data seems randomShould I be using records?I struggle with finding good namesevalExp contains much repetitionComments on my style in general?I suspect that I'm evaluating nested expressions multiple times. Thoughts on how to fix this?Data structureThe data structure itself is most important. I've included code for serialization and evaluation for reference. These are less important, but I'm thankful for comments on those as well!module Document whereimport Text.Printf(printf)import Data.List(intercalate)import qualified Data.Map.Strict as M-- Source datadata Exp = Num Double | Add Exp Exp | Sub Exp Exp | Mult Exp Exp | Div Exp Exp | Neg Exp | Ref Name | Call Name [Exp] deriving (Show)type Name = Stringtype Evaluation = Booldata Statement = Statement (Maybe Name) Exp Evaluation | Informative String deriving (Show)data Document = Document [Statement] deriving (Show)instance Monoid Document where mempty = Document [] (Document a) `mappend` (Document b) = Document (a `mappend` b)-- Result datatype EvalError = Stringtype EvalRes = Either EvalError Doubledata StatementResult = StatementResult Statement EvalRes | JustInformative String deriving (Show)type DocumentResult = [StatementResult]data EvalState = Success Double -- Value found | InProgress -- For terminating cyclic dependencies | Error EvalError -- Unable to evaluatetype NameExpressions = M.Map Name Exptype NameValues = M.Map Name EvalState-- Serializationclass Serialize a where serialize :: a -> Stringinstance Serialize Exp where serialize (Num d) = show d serialize (Add x y) = printf (%s + %s) (serialize x) (serialize y) serialize (Sub x y) = printf (%s - %s) (serialize x) (serialize y) serialize (Neg x) = - ++ serialize x serialize (Mult x y) = printf %s * %s (serialize x) (serialize y) serialize (Div x y) = printf %s / %s (serialize x) (serialize y) serialize (Ref name) = name serialize (Call name exps) = printf %s(%s) name (intercalate , $ map serialize exps)instance Serialize Statement where serialize (Statement mn exp eval) = prefix mn ++ serialize exp ++ postfix eval where prefix (Just n) = n ++ = prefix Nothing = postfix True = : postfix False = serialize (Informative s) = sinstance Serialize Document where serialize (Document ls) = unlines . map serialize $ lsinstance Serialize StatementResult where serialize (StatementResult statement evalRes) = serialize statement ++ = ++ serializedEval evalRes where serializedEval (Left err) = err serializedEval (Right d) = show d serialize (JustInformative s) = sserializeResult :: DocumentResult -> StringserializeResult = unlines . 
map serializeEvaluationmodule Evaluator whereimport Documentimport qualified Data.Map.Strict as Mimport Control.Monad(liftM2, liftM)getNameExpressions :: Document -> NameExpressionsgetNameExpressions (Document statements) = let toKVPair (Statement (Just n) exp _) = [(n, exp)] toKVPair _ = [] in M.fromList $ statements >>= toKVPairevalDocument :: Document -> DocumentResultevalDocument doc@(Document statements) = map (evalStatement nameMap) statements where nameMap = getNameExpressions docevalStatement :: NameExpressions -> Statement -> StatementResultevalStatement nameMap s@(Statement _ exp _) = StatementResult s $ evalExp nameMap expevalStatement _ (Informative s) = JustInformative s-- Expression interpretation without cachingevalExp :: NameExpressions -> Exp -> EvalResevalExp d (Num n) = Right nevalExp d (Add x y) = liftM2 (+) (evalExp d x) (evalExp d y)evalExp d (Sub x y) = liftM2 (-) (evalExp d x) (evalExp d y)evalExp d (Mult x y) = liftM2 (*) (evalExp d x) (evalExp d y)evalExp d (Div x y) = liftM2 (/) (evalExp d x) (evalExp d y)evalExp d (Call sin (arg1:_)) = liftM sin (evalExp d arg1)evalExp d (Call cos (arg1:_)) = liftM cos (evalExp d arg1)evalExp d (Call tan (arg1:_)) = liftM tan (evalExp d arg1)evalExp d (Call asin (arg1:_)) = liftM asin (evalExp d arg1)evalExp d (Call acos (arg1:_)) = liftM acos (evalExp d arg1)evalExp d (Call atan (arg1:_)) = liftM atan (evalExp d arg1)evalExp d (Call sinh (arg1:_)) = liftM sinh (evalExp d arg1)evalExp d (Call cosh (arg1:_)) = liftM cosh (evalExp d arg1)evalExp d (Call tanh (arg1:_)) = liftM tanh (evalExp d arg1)evalExp d (Call asinh (arg1:_)) = liftM asinh (evalExp d arg1)evalExp d (Call acosh (arg1:_)) = liftM acosh (evalExp d arg1)evalExp d (Call atanh (arg1:_)) = liftM atanh (evalExp d arg1)evalExp d (Call log (arg1:_)) = liftM log (evalExp d arg1)evalExp d (Call exp (arg1:_)) = liftM exp (evalExp d arg1)evalExp d (Call abs (arg1:_)) = liftM abs (evalExp d arg1)evalExp d (Call sqrt (arg1:_)) = liftM sqrt (evalExp d arg1)evalExp d (Call pow (arg1:arg2:_)) = liftM2 (**) (evalExp d arg1) (evalExp d arg2)evalExp d (Neg x) = liftM negate (evalExp d x)evalExp d (Ref name) = case M.lookup name d of Just exp -> evalExp d exp Nothing -> Left $ No match for name: ++ nameevalExp _ _ = Left Not implemented | Data structure for expression evaluation in Haskell | beginner;haskell;math expression eval | Data.Functor.Foldable can take some boilerplate out of your code. 
Also I tried to make the recursion in evalExp's references as tight-looped as possible.import Data.Functor.Foldable-- Source datadata ExpF t = Num Double | Add t t | Sub t t | Mult t t | Div t t | Neg t | Ref Name | Call Name [t] deriving (Show, Functor, Foldable, Traversable)type Exp = Fix ExpFinstance Serialize Exp where serialize = cata $ \case Num d -> show d Add x y -> printf (%s + %s) x y Sub x y -> printf (%s - %s) x y Neg x -> - ++ x Mult x y -> printf %s * %s x y Div x y -> printf %s / %s x y Ref name -> name Call name exps -> printf %s(%s) name (intercalate , exps)evalExp :: NameExpressions -> Exp -> EvalResevalExp d = evalExp' d' where d' = evalExp' d' <$> d evalExp' d' = cata $ sequenceA >=> \case Ref name -> fromMaybe (Left $ No match for name: ++ name) (M.lookup name d') x -> first (const Not implemented) $ do Right $ case x of Num n -> n Add x y -> x + y Sub x y -> x - y Mult x y -> x * y Div x y -> x / y Call sin (arg1:_) -> sin arg1 Call cos (arg1:_) -> cos arg1 Call tan (arg1:_) -> tan arg1 Call asin (arg1:_) -> asin arg1 Call acos (arg1:_) -> acos arg1 Call atan (arg1:_) -> atan arg1 Call sinh (arg1:_) -> sinh arg1 Call cosh (arg1:_) -> cosh arg1 Call tanh (arg1:_) -> tanh arg1 Call asinh (arg1:_) -> asinh arg1 Call acosh (arg1:_) -> acosh arg1 Call atanh (arg1:_) -> atanh arg1 Call log (arg1:_) -> log arg1 Call exp (arg1:_) -> exp arg1 Call abs (arg1:_) -> abs arg1 Call sqrt (arg1:_) -> sqrt arg1 Call pow (arg1:arg2:_)) -> arg1 ** arg2 Neg x -> negate x |
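As a follow-up to the recursion-schemes rewrite above, a small usage sketch showing how the Fix-wrapped constructors fit together (assuming the reviewed definitions compile as given; the names example and env are mine):

    example :: Exp
    example = Fix (Add (Fix (Num 3)) (Fix (Ref "y")))   -- 3 + y

    env :: NameExpressions
    env = M.fromList [("y", Fix (Num 4))]

    -- serialize example    ==> "(3.0 + y)"
    -- evalExp env example  ==> Right 7.0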
_unix.232444 | I have Ubuntu 14.04 running on Oracle Virtual Box. Window Manager in Ubuntu is compiz. The issue I am facing is when I am playing a media file in VLC or any other media player the window is overlaying any other window (say terminal, firefox, etc.) and is not letting them come to the fore when I switch between windows. Even if I minimize the the VLC player it won't let other windows come to the fore.I know this was an issue in metacity but do not know the same issue exists in compiz as well. Is there any solution to this issue? | VLC media player window is overlaying other windows in Ubuntu 14.04 - compiz Window manager | ubuntu;compiz | null |
_unix.78185 | I have a Red Hat Kickstart process which reports its progress at key points via a POST request to a status server.This is fine during %pre and %post, but when the actual build is taking place between them, it's an informational black hole.I've written a simple shell snippet that reports on the number of packages installed to give a rough idea of progress. I've placed the following in %pre:%pre## various other stuff here, all works fine ##cat > /tmp/rpm_watcher.sh << EOF_RPMPREV=-1while truedo COUNT=\$(rpm -qa | wc -l) if [ \${COUNT} -ne \${PREV} ] ; then /bin/wget --post-data ${Hostname} : Package count \${COUNT} ${builddest}/log PREV=\${COUNT} fi sleep 15doneEOF_RPM/bin/sh /tmp/rpm_watcher.sh &disown -a%endHowever, when I launch this as a background task from %pre as above, it hangs waiting for the script to end -- %pre never completes (if I kill the spawned script %pre completes and the build proper starts).I can't use nohup as it isn't available in the pre-installation environment, the same goes for using at now and screen.I've attempted to use disown -a, which is available; this seems to successfully disown the process (such that it's owned by PID 1) but still it hangs waiting for the script to finish.Can anyone offer me an alternative? | How can I background a shell script during a Kickstart? | linux;bash;shell;kickstart | null |
_unix.213601 | I was executing the screen command with a split window in my CentOS server, then I lost connection to it. I reconnected to my server and I did screen -r to get my screen back but now I don't see them split, I only see the first screen and I have to switch to my other screen with the ctrl+a+space. How can I put them as they were before? Is there a way? | How to get back my split-screen after I lost connection with screen | bash;gnu screen | null |
_cs.55994 | I have some digital logic circuits in Algebraic Normal Form, and am limited to using XOR and AND logic gates.For instance:$B_{out} = B_1 B_2 \oplus B_1 B_3$I was wondering, are there any algorithms to simplify ANF to use a smaller number of gates? I'm looking to minimize ANDs specifically.The above equation would ideally become this:$B_{out} = B_1 (B_2 \oplus B_3)$Since AND and XOR act like multiplication and addition respectively, it also seems like the answer could be in algorithms which minimize operations in polynomials. If that is the case, I'm specifically looking to minimize multiplications (which is the equivelant of ANDs in ANF).One person suggested i use a Karnaugh map, but am unsure how (or if it's possible) to use a Karnaugh map with XOR/AND instead of OR/AND. I could convert OR/AND back and forth to XOR/AND terms as needed, but in that case I don't believe the result is garaunteed (or likely) to be minimal anymore.Are there algorithms for this? I feel like there has to be, but I haven't been able to find any. | Algorithm for simplifying ANF or polynomials? | boolean algebra;digital circuits | The algebraic normal form (ANF) is unique. You can't simplify the ANF; each formula has a single, unique ANF, and there's only one. Once you've found it, that's it; there's no other, simpler ANF for the same formula.Perhaps what you want is, given a formula, find the smallest circuit that uses only XOR and AND logic gates. In general, that circuit won't necessarily be in algebraic normal form. (For instance, $B_1 (B_2 \oplus B_3)$ is not in ANF; the ANF of that formula is $B_1 B_2 \oplus B_1 B_3$.) That's called logic minimization or logic synthesis or circuit minimization. Most prior work has considered how to use a gate basis of NAND or {AND, OR, NOT}; you are looking for an algorithm that uses the basis {AND, XOR}.If you want to minimize the total number of gates, I'd suggest you do a literature search on the literature on logic minimization, looking for methods that work with an arbitrary basis, or that work with the basis {AND, XOR}. (One possibly buzzword or phrase to search for is exclusive-or sum-of-products minimization; this covers the special case of circuits that have multi-input AND gates on the first level and a single multi-input XOR gate at the second level.)In general, essentially all of these circuit minimization problems are NP-hard, so you shouldn't expect any efficient algorithm that will always work. Instead, people rely on heuristics that sometimes work or are sometimes efficient.You can find people who have studied a similar problem in the cryptography world, because Yao-style garbled circuits naturally support AND and XOR gates. Cryptographers have studied how to implement various functions efficiently using only AND and XOR gates. However, in that world, for various reasons we can make XOR gates effectively free, so they generally try to minimize the multiplicative complexity, i.e., the minimum number of AND gates needed in any circuit over the basis {AND,XOR}. I couldn't tell whether that was what you wanted or not. If it is, you might enjoy the following page, which lists circuits of minimal multiplicative complexity for a variety of functions of cryptographic interest:http://cs-www.cs.yale.edu/homes/peralta/CircuitStuff/CMT.htmlPerhaps this will help you articulate a more precisely defined question. |
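A small worked example for the ANF answer above: because AND distributes over XOR in GF(2), $xy \oplus xz \oplus wy \oplus wz = (x \oplus w)(y \oplus z)$, so a four-AND ANF collapses to a single AND once XOR is allowed inside the products. This is exactly the kind of rewrite the multiplicative-complexity literature mentioned there is after, and it shows why the unique ANF is generally not the AND-minimal circuit.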
_softwareengineering.303388 | I'm programming a web app using Perl and Dancer. The logic is not trivial and I want to separate the web page logic (inside Dancer routes) from the business logic that reads from and writes to the database. The business logic is going to live in several modules that read and write the database using DBIx::Class. My question is: what is the best way to keep DB logic in the business logic modules but still be able to send messages explaining an error to a web page and store more details of the error in a log? I'd like a technique that works with nested functions. I've been thinking about this, but I haven't found a clear winner. Possible options might be: (1) Catch all the exceptions inside the function and return an array ref with the data; if an error happens, the result would be undef. (2) Catch all the exceptions inside the function and return a hash ref; if there hasn't been an error, the key ok will equal 1. The returned data would look like this: return ({ ok => 1, data => [ { name => 'Alfa', year => 2014, }, { name => 'Beta', year => 2015, }, ], }); In an error case the return value might be something like this: return ({ ok => 0, err => 'bussines.deliveries.estimated_date.no_access_dhl', err_web => 'No access to DHL site', }); (3) Catch exceptions in the web code using Try::Tiny. Edit 1: Expanding my question as an answer to Umut Kahramankaptan: I'm only starting with exceptions. In the past few days I've read several web pages about exceptions in Perl and how to handle them. What should trigger an exception? Is an exception only caused by an error (not being able to open a file, a division by 0, or a database not responding), or should it be triggered by some other cause that blocks the execution of a function? To explain the last question, I propose several cases: (a) A function executes several read and write operations against a database; if one of them fails, it should trigger an exception. (b) Similar to the previous case, a function has to perform several read and write operations against a database; if it finds that it can't continue its sequence (due to some logic or value already in the database), should it trigger an exception in this case? (c) A simpler example could be a login function: if the username and password are correct, it should return some user data. What should happen if it fails? Return undef, or trigger an exception? | How can I handle errors in functions | perl | null
_webapps.90911 | Google Inbox is now generally available without any invites. However, the Chrome Inbox app is still alive and maintained (the last version was February 9th). Does the Chrome extension provide any extra features that do not exist in the Inbox web site? | Is there a difference between the Gmail Inbox site and the Chrome extension? | inbox by gmail;google chrome extensions | Short answer: there is no difference at all. Explanation: the screenshot doesn't correspond to an extension. Note that the upper-right corner only displays the Visit website button, not the Add to Chrome button. The screenshot is the Inbox by Gmail directory/catalog entry in the Chrome Web Store. At this time the Chrome Web Store includes four categories (apps, games, extensions and themes) and two types (Chrome apps and websites).
_scicomp.5228 | I need to solve the same sparse linear system (300x300 to 1000x1000) with many right-hand sides (300 to 1000). In addition to this first problem, I would also like to solve different systems that share the same non-zero pattern (just with different values), that is, many sparse systems with a constant sparsity pattern. My matrices are indefinite. Performance of the factorization and initialization is not important, but performance of the solve stage is. Currently I'm considering PaStiX or UMFPACK, and I will probably play around with PETSc (which supports both solvers). Are there libraries capable of taking advantage of my specific needs (vectorization, multi-threading), or should I rely on general solvers and maybe modify them slightly for my needs? What if the sparse matrix is larger, up to $10^6 \times 10^6$? | Sparse linear solver for many right-hand sides | linear algebra;petsc;linear solver;sparse | null
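The record above has no accepted answer; the following is only a sketch of the factor once, solve many pattern using SciPy, which the question never mentions. The matrix here is a synthetic stand-in, diagonally shifted purely so the demo is comfortably nonsingular — it is not an indefinite matrix from a real application.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 1000
    A = sp.random(n, n, density=0.01, format='csc', random_state=0)
    A = (A + A.T + 20.0 * sp.identity(n)).tocsc()   # synthetic symmetric test matrix

    lu = spla.splu(A)              # factorization: done once; its cost is unimportant here
    B = np.random.rand(n, 300)     # 300 right-hand sides stored as columns of one dense block
    X = lu.solve(B)                # solve stage: reuses the factors for every column at once

    print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))   # relative residual check

For the same sparsity pattern, different values part of the question, direct solvers such as UMFPACK and PaStiX split the symbolic analysis from the numerical factorization, so the symbolic step can, as far as I know, be reused across all matrices sharing the pattern; only the numerical factorization and the comparatively cheap triangular solves are repeated.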
_softwareengineering.187536 | I am studying algorithm optimization (Prof. Skiena's Algorithm Design Manual). One of the exercises asks us to optimize an algorithm: Suppose the following algorithm is used to evaluate the polynomial p(x) = a_n*x^n + a_{n-1}*x^{n-1} + . . . + a_1*x + a_0: p := a_0; xpower := 1; for i := 1 to n do xpower := x * xpower; p := p + a_i * xpower; end (Here a_n, a_{n-1}, ... are the names given to distinct constants.) After giving this some thought, the only way I found to possibly improve that algorithm is as follows: p := a_0; xpower := 1; for i := 1 to n do xpower := x * xpower; for j := 1 to a_i do p := p + xpower; next j; next i; end With this solution I have converted the second multiplication into repeated addition inside a for loop. My questions: (1) Do you find any other way to optimize this algorithm? (2) Is the alternative I have suggested better than the original? (Edit: as suggested in the comments, this second question, though related to the problem, deserves a question of its own. Please ignore it.) | Addition vs multiplication on algorithm performance | algorithms;language agnostic | The standard solution is to notice that one has many multiplications by the same number x. So instead of p(x) = a_n*x^n + a_{n-1}*x^{n-1} + . . . + a_1*x + a_0 one might write p(x) = (...((a_n*x) + a_{n-1})*x + ... + a_1)*x + a_0. Perhaps easier to read: p(x) := a_3*x^3 + a_2*x^2 + a_1*x + a_0 and q(x) := ((a_3*x + a_2)*x + a_1)*x + a_0 satisfy p(x) == q(x). We call such evaluation Horner's method. It translates to approximately: p := a_n; for i := n-1 downto 0 do p := p*x + a_i. Horner's method has n multiplications and n additions and is optimal.
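A runnable rendering of both loops, in Python purely for illustration (the original pseudocode is language-neutral); the coefficients at the end are an arbitrary example I added, not part of the exercise.

    def horner(coeffs, x):
        # coeffs = [a_0, a_1, ..., a_n]; evaluates a_n*x^n + ... + a_1*x + a_0
        p = 0
        for a in reversed(coeffs):
            p = p * x + a          # one multiplication and one addition per coefficient
        return p

    def naive(coeffs, x):
        # the textbook loop from the question: one multiplication for the power, one for the term
        p, xpower = coeffs[0], 1
        for i in range(1, len(coeffs)):
            xpower *= x
            p += coeffs[i] * xpower
        return p

    coeffs = [7, 0, -3, 2]                     # 2*x^3 - 3*x^2 + 7
    assert horner(coeffs, 5) == naive(coeffs, 5) == 182

Horner's form needs n multiplications against roughly 2n for the naive loop, which is exactly the saving the accepted answer describes.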