id | question | title | tags | accepted_answer |
---|---|---|---|---|
_softwareengineering.208922 | How to build a common infrastructure for my web apps using ASP.NET MVC, and efficiently manage it? The goal is to increase developer productivity. The common infrastructure includes visual & interactive components like customizable menus, consistent web page layout, consistent navigation experience, breadcrumbs, etc. It will also have some APIs for user & role management. So this platform/infrastructure will have its own database & tables (which can be deployed using Entity Framework). But there are other things which I don't have an answer to:

The database will also have user-defined types, stored procs, etc. How should these get deployed (or adopted) in the target application?
There would be a set of controllers and views intended for user and role management. They need to be a part of the target project. What is the best way this can be included?

Just to outline the kind of experience a developer should get:

He'd start an MVC project.
Run some command (suitably in the IDE itself - I am thinking PowerShell or NuGet).
And he should get working...

At some point later, there would have been some updates in the platform... How should they be updated? What problems might he face here, and how should they be resolved? How to design such an infrastructure project? | common infrastructure for web apps using asp.net mvc | asp.net mvc;code reuse;software updates | null |
_codereview.110770 | I'm trying to design a multithreaded daemon for an industrial automation related project. The daemon will be using a number of 3rd party libs like MQTT, MySQL, etc.

My idea is to have worker threads (or callbacks registered with these 3rd party libs) relay events to the main thread using a global mutex and conditional variable based synchronisation mechanism. The main thread then performs some global state management and some lite processing that I'm willing to restrict to the main thread (like MySQL stuff). After that the main thread defers the processing to a pool of worker threads.

I haven't posted the logging and much of the error checking code here. I'm planning on using the start-stop-daemon facility for daemonizing. I wanted to know if I'm headed in the right direction. Constructive criticism is welcome.

#include <stdio.h>
#include <pthread.h>
#include <errno.h>

#define RESPONSE_MAX_LENGTH 256

/**
 * Internal data structures and enums
 */
typedef struct myThread_ {
    pthread_t id;
} myThreadT;

typedef enum myEventType_ {
    MY_EVENT_TYPE_NONE,
    MY_EVENT_TYPE_STOP,
    MY_EVENT_TYPE_RELOAD
} myEventTypeT;

typedef struct myEvent_ {
    int is_triggered;
    myEventTypeT type;
    void *data; /* may include a callback */
} myEventT;

const myEventT myEventInitializer = {
    .is_triggered = 0,
    .type = MY_EVENT_TYPE_NONE,
    .data = NULL
};

/**
 * Global variables
 */
static myEventT myEvent;
static myThreadT worker;

/* Mutex and cond_var */
static pthread_mutex_t my_event_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t my_event_trigger_cv = PTHREAD_COND_INITIALIZER;

/*
 * Triggers an internal myEvent
 * Blocks calling thread till event acknowledgement
 */
int trigger_event(myEventTypeT type, void *data) {
    pthread_mutex_lock(&my_event_mutex);
    myEvent.type = type;
    myEvent.data = data;
    // Trigger event!
    myEvent.is_triggered = 1;
    pthread_cond_signal(&my_event_trigger_cv);
    pthread_mutex_unlock(&my_event_mutex);
    // Block till current trigger has been acknowledged
    while(myEvent.is_triggered);
    return 0;
}

void *worker_loop() {
    while(1) {
        sleep(5); /* Do something worthwhile */
        trigger_event(MY_EVENT_TYPE_NONE, NULL);
    }
    pthread_exit(NULL);
}

/*
 * Called from main thread, mainly for state management & db related processing,
 * i.e. stuff we want to restrict to the main thread.
 */
int handle_event(myEventT *event, char *response) {
    // handle the event, fill response if required
    return 0;
}

int main(int argc, char* argv[]) {
    // Initialize globals
    myEvent = myEventInitializer;

    // Thread management
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    int thread_error;
    thread_error = pthread_create(&worker.id, &attr, worker_loop, NULL);
    if (thread_error != 0) {
        return -1;
    }
    pthread_attr_destroy(&attr);

    pthread_mutex_lock(&my_event_mutex);

    // Main loop: wait for event
    while(1) {
        // Wait for signal from worker threads/ callbacks
        int res_wait = pthread_cond_wait(&my_event_trigger_cv, &my_event_mutex);
        if (res_wait == EINVAL) {
            // TODO: Consolidated error handling
            return -1;
        }

        /**
         * Event handling construct
         */
        if (myEvent.is_triggered) {
            pthread_mutex_unlock(&my_event_mutex);

            char response_string[RESPONSE_MAX_LENGTH]; //TODO macro for size!
            if(handle_event(&myEvent, response_string) >= 0) {
                // Initiate deferred processing on another thread (maybe callback)
            }

            // Clear event triggered flag and reinit event data
            myEvent = myEventInitializer;

            pthread_mutex_lock(&my_event_mutex);
        }
    }

    // Clean up pthread resources
    pthread_mutex_destroy(&my_event_mutex);
    pthread_cond_destroy(&my_event_trigger_cv);
    pthread_exit(NULL);
}

| pthread_cond_wait() based multithreaded Linux daemon skeleton | c;linux;pthreads | null |
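A note on the synchronisation design in the question above: trigger_event busy-waits on while(myEvent.is_triggered); after dropping the mutex, which burns CPU and reads the flag without the lock held. The usual fix is to wait on a condition variable for the acknowledgement as well, under the same mutex. Below is an illustrative, non-authoritative sketch of that trigger/acknowledge handshake, written in Python because threading.Condition maps closely onto a pthread mutex plus condition variable; all names here are invented for the example and are not from the original code:

```python
import threading

class EventSlot:
    """One-slot trigger/acknowledge handshake, mirroring the question's design."""

    def __init__(self):
        self.cv = threading.Condition()  # plays the role of mutex + cond var
        self.triggered = False
        self.payload = None

    def trigger(self, payload):
        """Worker side: publish an event, then block until it is acknowledged."""
        with self.cv:
            self.payload = payload
            self.triggered = True
            self.cv.notify_all()
            # Wait on the condition variable instead of spinning on the flag.
            self.cv.wait_for(lambda: not self.triggered)

    def wait_and_handle(self, handler):
        """Main-thread side: wait for an event, handle it, acknowledge it."""
        with self.cv:
            self.cv.wait_for(lambda: self.triggered)
            result = handler(self.payload)
            self.triggered = False  # acknowledgement wakes the worker
            self.cv.notify_all()
            return result
```

The same shape in C would add a second pthread_cond_t for the acknowledgement, with both waits performed in predicate loops while my_event_mutex is held.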
_codereview.60601 | I have a toy app that contains:

editText1 for entering a number. Uses inputType of number.
editText2 for entering a second number. Uses inputType of number.
adderButton for triggering addition.
textView for displaying the result of the addition.

I know:

The naming could be better.
I should check for overflow/underflow on the addition and elsewhere.

This code is working for simple cases.

public void adder_click(View view) {
    EditText et1 = (EditText) findViewById(R.id.editText1);
    EditText et2 = (EditText) findViewById(R.id.editText2);
    Integer s = Integer.valueOf(et1.getText().toString()) + Integer.valueOf(et2.getText().toString());
    TextView tv = (TextView) findViewById(R.id.textView);
    tv.setText(s.toString());
}

Is there a more direct way of reading the values and performing the addition? | Using Java to Add the Contents of two EditText views in Android | java;android | No, that's as direct as it's going to be with an EditText (you could try it with a NumberPicker), as you cannot directly get an integer from it. But you could extract the process into its own method:

private int getInt(EditText editText) throws NumberFormatException {
    return Integer.valueOf(editText.getText().toString());
}

If you also use what @mleyfman said, your code would look like this:

public void adder_click() {
    tv.setText(String.valueOf(getInt(et1) + getInt(et2)));
}

Also note that I removed the method argument, as it's unused. And for the record, yes, the naming isn't very good. It's easy to just leave the names as editText1, editText2, etc., but this will get confusing really fast (as will tv, et1, et2, etc.). Also, use camelCase (adder_click should be adderClick). |
_unix.213783 | Is it possible to inspect the output of some command to determine the right settings for a given WPA2 Enterprise wifi network? If so, how?

I've just been through the process of getting my university eduroam network set up using netctl. There seem to be a lot of wpa_supplicant options to get just right, but most GUI clients seem able to autodetect these in many (if not all) cases. wifi-menu, on the other hand, doesn't work for WPA2 Enterprise.

I like the Arch way of text-based configuration, but it would be good to be able to connect to work networks without having to ask for help from the IT department (when these things are trivial for other users). | How can settings for a WPA2 enterprise network be determined? | wifi;wpa supplicant;netctl;wpa | null |
_softwareengineering.291663 | It's very interesting how to restore the read model in a system based on CQRS.

In regular mode the system processes commands, creates domain events and posts them to a message bus. Then another part of the system (call it the RM subsystem) processes these messages and saves them to the read model. This mode is good enough for regular purposes.

But how should I repair my read model? For example, the storage holding the read model was corrupted, or I changed the location of the storage. I want my system to restore the read model during initialization, before queries begin to try to read data. And I want to know when the repair process has finished. I can imagine two ways:

1. Create a REST controller through which my RM subsystem will be able to query all domain events (in message form) and restore the read model synchronously.
2. Create a special mechanism which my RM subsystem can call to start replaying all messages. As for me, this way isn't very good, because I cannot control when the repair process finishes. And second, if there are other consumers of the messages, they could corrupt their data.

Which way is preferable? | CQRS: How to restore read model | cqrs;event sourcing | null |
_softwareengineering.29334 | If you have a product that has been on the market for a long time but is still in active development daily, how often should the manuals be updated? Your users are constantly updated to the bleeding-edge version, because your organization sees fit that the latest bug fixes are always in the shipped version. Meaning, you could fix a bug one day and the next day it is hitting production. | Manuals - How Up To Date? | manuals;production | I would update the manual:

For each major release, and
When important new features become stable and mature enough that you know they're not going to change every five minutes. |
_unix.197237 | How do I extract the names from the gitolite info command output, for further piping into a script?

I'm writing a migration script to migrate all my repositories from gitolite to a GitLab server. Thus, I want to get all the repository names from gitolite and use those in GitLab. Below is the example output I'm trying to match against.

hello sitaram, this is gitolite v2.1-29-g5a125fa running on git 1.7.4.4
the gitolite config gives you the following access:

     R     SecureBrowse
     R W   anu-wsd
     R W   entrans
    @R W   git-notes
    @R W   gitolite
     R W   gitolite-admin
     R W   indic_web_input
 @C  R W   private/sitaram/[\w.-]+
     R W   proxy
 @C @R W   public/sitaram/[\w.-]+
   @R_ @W_ testing
     R W   vic

This should output:

SecureBrowse
anu-wsd
entrans
git-notes
gitolite
gitolite-admin
indic_web_input
proxy
testing
vic

Currently I'm trying to put this in a shell script, so it would be usable for others. My current approach is a grep command, which looks like the following:

grep -Eio '([A-Za-z0-9_-]+)$' gitolite-info-output

However, this also captures the 4 at the end of the first line. I've been trying several approaches, but I can't seem to exclude that properly, without including or excluding other things. I'm doing this on OS X 10.10.3. | Find repository names from gitolite info output | shell script;text processing;grep;regular expression;git | Based on the input you show in your question, this should work:

$ grep -oP '^[ @]*R.* \K.*' gitolite-info-output
SecureBrowse
anu-wsd
entrans
git-notes
gitolite
gitolite-admin
indic_web_input
proxy
testing
vic

This is using GNU grep's -P switch to enable Perl Compatible Regular Expressions, which give us \K: exclude anything matched up to this point. Combined with -o, we can search for lines starting with 0 or more spaces or @ (^[ @]*), then an R, then 0 or more characters until another space. All this is discarded because of the \K, so only the last word is printed.

If you don't have GNU grep (on OSX, for example), you can do something like this:

$ grep -E '^[ @]*R' gitolite-info-output | awk '{print $NF}'
SecureBrowse
anu-wsd
entrans
git-notes
gitolite
gitolite-admin
indic_web_input
proxy
testing
vic

Or do the whole thing in awk:

$ awk '/^[ @]*R/{print $NF}' gitolite-info-output
SecureBrowse
anu-wsd
entrans
git-notes
gitolite
gitolite-admin
indic_web_input
proxy
testing
vic

Or Perl:

$ perl -nle '/^[ @]*R.*\s(.*)/ && print $1' gitolite-info-output
SecureBrowse
anu-wsd
entrans
git-notes
gitolite
gitolite-admin
indic_web_input
proxy
testing
vic |
_unix.158333 | I set my prompt on tcsh 6.18.01 to use some silly emoji characters, but they don't show up.

> cat .cshrc
set prompt = '\n [%~]\n%# '
\360\237\224\245 [~] > source .cshrc
\360\237\224\245 [~] >

I pulled up the special characters window on OS X, found the character I wanted, and copied and pasted it into vi on the tcsh machine. It turned up like this: \xf0\x9f\x94\xa5. Here's the full uname if it helps shed any light.

FreeBSD raspberry-pi 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r271779: Fri Sep 19 01:18:53 UTC 2014 [email protected]:/usr/obj/arm.armv6/usr/src/sys/RPI-B arm | Unicode emoji not showing up in tcsh prompt | freebsd;locale;unicode;tcsh | null |
_webmaster.30593 | The navigation menu is in the form of categories on the website http://bankpo.in, but none of the category pages is being indexed by Google.

I searched with the exact URL for a category; still, related results are shown instead of the category page. I have checked Google Webmaster Tools and there are no crawling errors or any other error messages, so I am really confused about what the problem might be.

The website is not a new one and is continuously updated, so please can anyone tell me the reason for this? | Navigation Category page not indexed by Google | google index;navigation | null |
_codereview.11573 | From my original question on Stack Overflow: Is my implementation of reversing a linked list correct?

I'm a beginner at C and I'd like to know about style, and correctness of the reverse algorithms. What about memory management?

#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int val;
    struct Node *next;
} Node;

Node * build_list(int num);
Node * reverse_list(Node * head);
static Node * r_reverseItem(Node * last, Node * head);
Node * r_reverse(Node * head);

/*
 * Function which iterates to 100 creating nodes
 * and linking them to head
 */
Node * build_list(int num) {
    Node * head = NULL;
    Node * curr;
    int i;
    for(i=0;i<num;i++) {
        curr = (Node *)malloc(sizeof(Node));
        curr->val = i;
        curr->next = head;
        head = curr;
    }
    return head;
}

/*
 * Function which reverses linked list
 * by traversing list and flipping ptr of curr to last
 * @param head
 */
Node * reverse_list(Node * head) {
    if (head == NULL) //only one elem
        return NULL;
    Node * last = NULL;
    do {
        Node * next = head->next;
        head->next = last;
        last = head;
        head = next;
    } while (head != NULL);
    return last;
}

/*
 * Function which recursively reverses linked list
 * @param last
 * @param head
 */
static Node * r_reverseItem(Node * last, Node * head) {
    if (head == NULL)
        return last;
    Node * next = head->next;
    head->next = last;
    return r_reverseItem(head,next);
}

Node * r_reverse(Node * head) {
    return r_reverseItem(NULL, head);
}

int main (void) {
    Node * head = build_list(5);
    Node * curr;
    head = r_reverse(head);
    // head = reverse_list(head);
    while(head) {
        printf("%d\n", head->val);
        curr = head;
        head = head->next;
        free(curr);
    }
    return 0;
}

| Reversing a linked list by iteration and recursion | c;beginner;recursion;memory management | Code Comments:

curr = malloc(sizeof(Node));

malloc() returns void*, thus you need to cast it before assignment.

curr = (Node*) malloc(sizeof(Node));

Most (all I have seen) coding standards mention not to use this:

Node * head, * curr;
head = NULL;

I agree with them, as it just makes things hard to read (thus maintain). So one variable per line. And initialize on declaration (where appropriate).

Node *head = NULL;
Node *curr;

OK. The next one is arguable (so feel free to use best judgement). Personally I find this hard to read:

if (!head->next)

I think it would be easier to read as:

if (head->next == NULL)

But surprisingly, I normally prefer using the ! when the object is a pointer (or bool). I think in this case it is because it is combined with testing a nested member of a structure; it just does not look correct (but as I said it is an arguable case).

Edge Case

What happens if the list is empty?

Node * reverse_list(Node * head) {
    if (!head->next) //only one elem
        return head;

You have chosen the wrong edge case. The one you should be checking for is NULL. A list of one node is no different than a list of many. The edge case with pointers is nearly always the NULL case.

Algorithm

Iterative

The code before your while loop looks like the code inside the while loop. This sort of suggests you can replace it with a do-while loop. Thus your iterative code should look more like this:

Node* reverse(Node* list)
{
    if (list == NULL)
    {
        return NULL;
    }

    Node* last = NULL;
    do
    {
        Node* next = list->next;
        list->next = last;

        last = list;
        list = next;
    } while(list != NULL);

    return last;
}

Recursive

Converting a loop into recursion requires an extra function to cope with the extra variable used for last. So the main recursion looks like this (it just calls the actually recursive function, setting up the last parameter). You should have this wrapper function to keep users of the code from calling the recursive part of the function incorrectly.

Node* r_reverse(Node* list)
{
    return r_reverseItem(NULL, list);
}

Now we can do the recursion. Again the edge case is still NULL.

Node* r_reverseItem(Node* last, Node* list)
{
    if (list == NULL)
    {
        return last;
    }

    Node* next = list->next;
    list->next = last;

    return r_reverseItem(list, next);
} |
_cogsci.932 | What is a typical approach to conducting a survey-based psychological experiment?

There seems to be a host of issues involved; some that seem particularly important are:

the appropriate design of the experiment,
ethical approval,
securing funding.

There seem to be a number of books written about the topic, and it's hard to know where to begin. I'm largely after an outline of what one needs to know in order to go from nothing to having successfully conducted a survey-based experiment.

Motivation

In a previous question, I received an answer that suggested conducting a survey-based experiment. Assuming (once I do a thorough literature review) I don't find any other answer to the question, I'd be interested in implementing the experiment (partly to answer the question itself, and partly for professional development). I'm a scientist, but not in a discipline where surveys would typically arise. As such, I have no experience in conducting a survey-based experiment. | How does a researcher typically go about conducting a survey-based psychological experiment? | reference request;methodology;experimental psychology | null |
_softwareengineering.178707 | In my current project, I need to store a very long ASCII string with each instance of a given object. This string will receive 2 appends per minute and will not be retrieved very frequently. The worst-case scenario is a 5-10MB string. I'll have thousands of instances of my object, and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution.

Can anyone suggest an alternative? Maybe a key-value store? In this case, which one? Any other thoughts? | Storing lots of large strings with frequent appends and few reads | database;strings;file handling;file systems;storage | null |
_webapps.108027 | I want to trigger an immediate CommCare SMS reminder every time a particular question is answered in a form. If I set the case property trigger = "OK", the reminder will be triggered the first time I fill that form for a particular case. Will it be triggered the second time I fill it for the same case (since I'm not actually changing the value of that case property)? | Triggering a CommCare SMS reminder every time a form is filled | commcare | The reminder alert will not be triggered the second time, since the value of the case property is not changing.

One way to handle this use case is to use two values (such as 1 and 2) for the case property trigger, having the form alternate between changing it from 1 to 2 and 2 to 1 each time you require a new alert to be sent. Then you can have two reminders with exactly the same content; one sends when trigger = 1 and the other sends when trigger = 2.

It's important to note that if the form data is being collected on mobile, frequent syncs are important for this to work properly. For example, if the form gets filled out twice, changing trigger from 1 to 2 and then 2 to 1, and only then a sync happens, then no new alerts will be sent. |
_unix.267161 | I wonder if this is possible: make an alias that does a sudo apt-get install of the command if it is not there already, and then re-aliases itself to stop making these changes. Thus I am looking for these semantics:

smartalias top = if (not installed htop) then install htop; alias top htop; top | A tricky recursive bash alias? install at first use | bash;alias | null |
_webmaster.34740 | I started receiving a large (for me) amount of traffic on one of my pages yesterday. Today I thought that it would be useful to track goals from that page -- there's a link to my blog from it.I added the 'visited external link' goal to Piwik, and new visits are being recorded. However, it seems to me that there must be enough data in the database to retroactively apply the goal to past users -- is there a way to achieve that? | Retroactively applying a Piwik goal to visitors | goal tracking;piwik | null |
_unix.233066 | I have been working on building a Linux kernel for QEMU. I have been following the tutorial http://xecdesign.com/compiling-a-kernel/

As of now I have been able to boot the kernel until the start of the init process, but I have been getting the following error:

Kernel panic - not syncing: Attempted to kill init!

I tried debugging and found that executing /sbin/init in the function kernel_init() is resulting in the kernel panic.

...
Freeing init memory: 120K
Kernel panic - not syncing: Attempted to kill init!
Backtrace:
[<c0017348>] (dump_backtrace+0x0/0x10c) from [<c02fb118>] (dump_stack+0x18/0x1c)
 r6:cf815d60 r5:c03f179c r4:c0409738
[<c02fb100>] (dump_stack+0x0/0x1c) from [<c02fb25c>] (panic+0x64/0x188)
[<c02fb1f8>] (panic+0x0/0x188) from [<c0029d24>] (do_exit+0x564/0x61c)
 r3:cf815d60 r2:cf81be54 r1:cf81be54 r0:c0376584 r7:cf81a000
[<c00297c0>] (do_exit+0x0/0x61c) from [<c002a03c>] (do_group_exit+0x44/0xa4)
 r7:cf81a000
[<c0029ff8>] (do_group_exit+0x0/0xa4) from [<c0033e34>] (get_signal_to_deliver+0x13c/0x478)
 r4:cf81a000
[<c0033cf8>] (get_signal_to_deliver+0x0/0x478) from [<c0016620>] (do_signal+0x6c/0x530)
[<c00165b4>] (do_signal+0x0/0x530) from [<c0017070>] (do_notify_resume+0x50/0x5c)
[<c0017020>] (do_notify_resume+0x0/0x5c) from [<c0014438>] (work_pending+0x24/0x28)
 r4:ffffffff
Rebooting in 1 seconds..

I am using a custom root file system generated using Buildroot. The same rootfs works fine with the kernel-qemu originally downloaded from https://xecdesign.com/downloads/linux-qemu/kernel-qemu

Can somebody help me with getting this right? Let me know if further info is required. | not syncing: Attempted to kill init! in custom kernel for qemu | qemu | I found the fix with the help of this thread. I had to enable CONFIG_AEABI in my kernel. |
_softwareengineering.129714 | I'm new to GitHub, and am looking for advice on how to manage issues. I'm used to having priority and other ordering options, but see that none exist. How do others manage issues during the lifecycle of a bug/feature? Thanks in advance. | How to manage github issues for (priority, etc)? | project management;github;issue tracking | null |
_hardwarecs.5536 | I'm going to try and phrase this so the answer is not based on opinion.

I have a desktop computer that I use for a variety of tasks: web browsing, video editing, video gaming, university projects, software development, listening to music, etc. I recently dropped my old pair of cheap speakers, and figured it was time for an upgrade.

I am looking for a pair of desktop/bookshelf speakers. I would like to spend less than $100-150. Now, I know that there are thousands of options in this price range, but I would like to narrow it down to two. I am thinking of either the Edifier R1280T or the Mackie CR4. I would appreciate a recommendation, if anyone has experience with these two specific models, for which speaker would work best for my needs. I do not know enough about audio equipment to see the pros and cons of each of these two models, and would appreciate any advice you can give me. I am also open to other suggestions, but these are the two I am primarily looking at. Here are a few things I would like to have in my new speakers, and what I am considering:

hard budget of $100-150 ($150 max)
studio quality (for the price range)
clean aesthetics (basic form factor, nothing crazy)
RCA input

Can anyone recommend which of these two speakers better fits the above requirements? I appreciate opinions, but I would like to keep this on-topic for this website by rating which of the two has better sound quality, which has better bang-for-buck value, etc. I am more looking for metrics and values that can be measured, rather than just "I like this one more than that one." | Speaker comparison for desktop | speakers | null |
_webapps.75675 | I am tackling a bug on the Japanese site and want to search for three different strings so I can link them to the community.I would like to search for: refiner OR explainer OR illuminatorHow can I do that with the string search in Transifex? | How do you do a Transifex text search with an or operator? | transifex | Sam from the Transifex team here. Unfortunately, the search function doesn't support operators at the moment. You'll need to search for strings individually and then share the permalink of each search result with your community. |
_reverseengineering.8028 | I'm trying to gather information about a firmware image and extract its contents with binwalk on Kali. When I scanned rom.bin, the result had many lines:

1. Most of the lines are LZMA compressed data, but when I extract this data I can't open it.
2. The last line is Mcrypt 2.2, Blowfish encrypted.

Can someone help me: what can I do to extract the data correctly?

Here is the firmware:
http://wikisend.com/download/396604/rom.bin
http://wikisend.com/download/194102/ChannelList.bin

Thank you | Binwalk and firmware of a sat receiver | firmware | It's probably obfuscated: read more about an obfuscated firmware from WRT120N. I think that you should do hardware analysis in order to know how the firmware is unpacked... |
_cs.49847 | How is out-of-order instruction execution related to superscalar execution?

How is data hazard detection and handling affected by superscalar execution? | How is out-of-order instruction execution related to superscalar execution? | operating systems;parallel computing | null |
_codereview.30593 | It's an exercise from a book. There is a given long integer that I should convert into 8 4-bit values. The exercise is not very clear to me, but I think I understood the task correctly. I googled how to return an array from a function. In the splitter function I add the last 4 bits into the values array, then cut them from the original number. Is this a good way to do this?

#include <iostream>

int * splitter(long int number)
{
    static int values[8];
    for (int i = 0; i < 8; i++)
    {
        values[i] = (int)(number & 0xF);
        number = (number >> 4);
    }
    return values;
}

int main()
{
    long int number = 432214123;
    int *values;
    values = splitter(number);
    for (int i = 7; i >= 0; i--)
        std::cout << values[i] << " ";
    return 0;
}

| Split a long integer into eight 4-bit values | c++;bitwise;integer | There are only three issues I take with your code...

1. The caller of the splitter function has the possibility of trying to access a value outside of the returned array, and could cause a memory access violation if they did so. For this reason alone, it would be better to go with the vector approach that @Jamal posted.
2. You should use the C++ style casting (e.g. static_cast<int>(number & 0XF)) instead of the C style casting (e.g. (int)(number & 0xF)).
3. You should use std::int32_t instead of long, or std::uint32_t instead of unsigned long. long is not guaranteed to be 32 bits, especially on 64 bit platforms. I don't take as big of an issue with this as some do, though, because on most systems it will be at least 32 bits.

Here is a solution that only returns one nybble at a time and throws an exception if the caller tries to access an invalid portion of the number. Since your book hasn't talked about pointers or vectors yet, this might actually be the type of approach it is looking for (possibly minus the exception throwing, if it hasn't talked about exceptions yet either). If you take away the exception throwing, the caller still cannot cause a memory access violation though. If they passed something lower than 0 or greater than 7 as the value of part, they would just get a return value of 0.

#include <iostream>
#include <stdexcept>
#include <cstdint>

unsigned short get_nybble( std::uint32_t number, const unsigned short part )
{
    if( part > 7 )
        throw std::out_of_range("'part' must be a number between 0 and 7");

    return (number >> (4 * part)) & 0xF;
}

int main( int argc, char* argv[] )
{
    std::uint32_t number = 432214123;

    try
    {
        for( short i = 7; i >= 0; i-- )
            std::cout << get_nybble(number, i) << " ";
    }
    catch( std::exception& ex )
    {
        std::cerr << "Error: " << ex.what() << std::endl;
    }

    return 0;
}

This may not be the best solution, but another approach I find interesting is an anonymous union combined with a bit field struct. This works with the GNU G++ compiler and should also work with Visual C++. Note that, as I understand it, this is potentially unsafe on non-x86 systems due to the structure packing, which I'd bet your book has not talked about yet either.

#include <iostream>
#include <cstdint>

#pragma pack(push, 1)
typedef union
{
    std::uint32_t value;
    struct
    {
        std::uint32_t part1: 4;
        std::uint32_t part2: 4;
        std::uint32_t part3: 4;
        std::uint32_t part4: 4;
        std::uint32_t part5: 4;
        std::uint32_t part6: 4;
        std::uint32_t part7: 4;
        std::uint32_t part8: 4;
    };
} nybbles;
#pragma pack(pop)

int main( int argc, char* argv[] )
{
    nybbles num;
    num.value = 432214123;

    std::cout << "Parts = " << num.part8 << ", " << num.part7 << ", "
              << num.part6 << ", " << num.part5 << ", " << num.part4 << ", "
              << num.part3 << ", " << num.part2 << ", " << num.part1 << std::endl;

    return 0;
} |
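The shift-and-mask expression (number >> (4 * part)) & 0xF at the heart of both answers is language-independent. As a quick, hedged sanity check (not part of the original answer; the function name is invented here), the same extraction can be written and verified in a few lines of Python:

```python
def nybble(number, part):
    """Return 4-bit group `part` of a value, part 0 being the least significant."""
    if not 0 <= part <= 7:
        raise ValueError("part must be between 0 and 7")
    return (number >> (4 * part)) & 0xF

# 432214123 is 0x19C3106B, so from most to least significant
# the nybbles are 1, 9, 12, 3, 1, 0, 6, 11 — matching what the
# question's C++ program prints as decimal values.
print([nybble(432214123, i) for i in range(7, -1, -1)])
```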
_codereview.167148 | Found this problem in hackerrank.One day Bob drew a tree, with \$n\$ nodes and \$n-1\$ edges on a piece of paper. He soon discovered that parent of a node depends on the root of the tree. The following images shows an example of that:Learning the fact, Bob invented an exciting new game and decided to play it with Alice. The rules of the game is described below:Bob picks a random node to be the tree's root and keeps the identity of the chosen node a secret from Alice. Each node has an equal probability of being picked as the root.Alice then makes a list of \$g\$ guesses, where each guess is in the form \$u, v\$ and means Alice guesses that \${\rm parent}(v) = u\$ is true. It's guaranteed that an undirected edge connecting \$u\$ and \$v\$ exists in the tree.For each correct guess, Alice earns one point. Alice wins the game if she earns at least \$k\$ points (i.e., at least \$k\$ of her guesses were true).Alice and Bob play \$q\$ games. Given the tree, Alice's guesses, and the value of \$k\$ for each game, find the probability that Alice will win the game and print it on a new line as a reduced fraction in the format p/q.Solution: There is a tree with some edges marked with arrows. For every vertex in a tree you have to count how many arrows point towards it. For one fixed vertex this may be done via one depth-first search (DFS). Every arrow that was traversed during DFS in direction opposite to its own adds 1. If you know the answer for vertex \$v\$, you can compute the answer for vertex \$u\$ adjacent to \$v\$ in \$O(1)\$.It's almost the same as for \$v\$, but if there are arrows \$uv\$ or \$vu\$, their contributions are reversed. 
Now you can make the vertex \$u\$ crawl over the whole graph by moving to adjacent vertices in the second DFS.def gcd(a, b): if not b: return a return gcd(b, a%b)def dfs1(m, guess, root, seen): '''keep 1 node as root and calculate how many arrows are pointing towards it''' count = 0 stack = [] stack.append(root) seen.add(root) while len(stack): root = stack.pop(len(stack)-1) for i in m[root]: if i not in seen: seen.add(i) count += (1 if root in guess and i in guess[root] else 0) stack.append(i) return countdef dfs2(m, guess, root, seen, cost, k): '''now make every node as root and calculate how many nodes are pointed towards it; If u is the root node for which dfs1 calculated n (number of arrows pointed towards the root) then for v (adjacent node of u), it would be n-1 as v is the made the parent now in this step (only if there is a guess, if there is no guess then it would be not changed)''' stack = [] stack.append((root, cost)) seen.add(root) t_win = 0 while len(stack): (root, cost) = stack.pop(len(stack)-1) t_win += cost >= k for i in m[root]: if i not in seen: seen.add(i) stack.append((i, cost - (1 if root in guess and i in guess[root] else 0) + (1 if i in guess and root in guess[i] else 0))) return t_winq = int(raw_input().strip())for a0 in xrange(q): n = int(raw_input().strip()) m = {} guess = {} seen = set() for a1 in xrange(n-1): u,v = raw_input().strip().split(' ') u,v = [int(u),int(v)] if u not in m: m[u] = [] m[u].append(v) if v not in m: m[v] = [] m[v].append(u) g,k = raw_input().strip().split(' ') g,k = [int(g),int(k)] for a1 in xrange(g): u,v = raw_input().strip().split(' ') u,v = [int(u),int(v)] if u not in guess: guess[u] = [] guess[u].append(v) cost = dfs1(m, guess, 1, seen) seen = set() win = dfs2(m, guess, 1, seen, cost, k) g = gcd(win, n) print({0}/{1}.format(win/g, n/g)) | The Story of a Tree solved using depth-first search | python;algorithm;programming challenge;tree;depth first search | gcd is missing a docstring.gcd is tail-recursive, but 
Python doesn't have tail-recursion elimination, and so:

>>> gcd(3**7000, 2**10000)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
[...]
RecursionError: maximum recursion depth exceeded

It is more robust to turn the tail-recursion into a loop:

def gcd(a, b):
    """Return the greatest common divisor of a and b."""
    while b:
        a, b = b, a % b
    return a

and then:

>>> gcd(3**7000, 2**10000)
1

Python has a built-in greatest common divisor function, fractions.gcd, so you could use that and avoid writing your own.

The only use for gcd is to print a fraction in lowest terms. The built-in fractions.Fraction class can do that, so you could write:

from fractions import Fraction
print("{0.numerator}/{0.denominator}".format(Fraction(win, n)))

The names dfs1 and dfs2 are not very informative. Sure, they do some kind of depth-first search, but which one is which?

The docstrings for dfs1 and dfs2 don't explain the arguments m, guess, seen, cost, and k. What am I supposed to pass for these arguments?

The seen argument to dfs1 and dfs2 always gets an empty set which is not used again after the function returns. So it would be better for these functions to use a local variable and initialize it with an empty set.
Then there would be no risk of passing the wrong value for this argument.

Instead of creating an empty list and appending an item to it:

stack = []
stack.append(root)

it's simpler to create a list with one item:

stack = [root]

Instead of testing the length of a list:

while len(stack):

it's simpler to test the list itself:

while stack:

Instead of computing the index of the last element of a list:

root = stack.pop(len(stack)-1)

you can use a negative index:

root = stack.pop(-1)

but by default the pop method pops the last element, so you can write:

root = stack.pop()

Instead of writing this complex condition:

count += (1 if root in guess and i in guess[root] else 0)

you can take advantage of the fact that True equals 1 and False equals 0, and write:

count += root in guess and i in guess[root]

The only use you make of guess is to test if it contains a directed edge. But this is slightly awkward because the test takes two steps. It would be better if you built a data structure that was better adapted to the use you intend to make of it. In this case, you could build a set of directed edges. So instead of:

guess = {}
for a1 in xrange(g):
    u,v = raw_input().strip().split(' ')
    u,v = [int(u),int(v)]
    if u not in guess:
        guess[u] = []
    guess[u].append(v)

write:

guess = set()
for _ in range(g):
    u, v = map(int, raw_input().split())
    guess.add((u, v))

and then in dfs1 you can write:

count += (root, i) in guess

(The construction of guess can be turned into a set comprehension, like this:

guess = {tuple(map(int, raw_input().split())) for _ in range(g)}

but I think the explicit loop is a bit clearer.)

Similarly, in dfs2 you have the complicated expression:

stack.append((i, cost - (1 if root in guess and i in guess[root] else 0)
                 + (1 if i in guess and root in guess[i] else 0)))

Given the set of directed edges, this simplifies to:

stack.append((i, cost - ((root, i) in guess) + ((i, root) in guess)))

The top-level code is hard to test, because you can't call it from a Python program or the interactive interpreter.
Better to put it in its own function, conventionally called main.

The top-level code is full of one-letter variables (m, q, g, k, u, v). It is hard to guess what these mean. Although it makes sense to stick with the names from the HackerRank problem description, a comment or two would help remind the reader of what they mean.

When you don't use a loop variable like a1, it's conventional to use the name _.

Some of the repetition in the input code can be avoided by using collections.defaultdict:

# Map from node to list of neighbours.
tree = defaultdict(list)
for _ in range(n - 1):
    u, v = map(int, raw_input().split())
    tree[u].append(v)
    tree[v].append(u)
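Pulling several of these suggestions together (the set of directed edges, fractions.Fraction for the reduced output, and a pure function that can be called from a test), the per-game computation might be sketched like this. The function name win_probability and its signature are my own, not from the original code:

```python
from collections import defaultdict
from fractions import Fraction

def win_probability(n, edges, guesses, k):
    """Probability that at least k guessed (parent, child) pairs hold,
    over a uniformly random choice of root for the tree."""
    tree = defaultdict(list)
    for u, v in edges:
        tree[u].append(v)
        tree[v].append(u)
    guess = set(guesses)

    # First DFS: score with node 1 as the root.
    score = 0
    stack, seen = [1], {1}
    while stack:
        node = stack.pop()
        for child in tree[node]:
            if child not in seen:
                seen.add(child)
                score += (node, child) in guess
                stack.append(child)

    # Second DFS: re-root; moving the root across an edge only flips
    # the contributions of the two possible guesses on that edge.
    wins = 0
    stack, seen = [(1, score)], {1}
    while stack:
        node, cost = stack.pop()
        wins += cost >= k
        for child in tree[node]:
            if child not in seen:
                seen.add(child)
                stack.append((child,
                              cost - ((node, child) in guess)
                                   + ((child, node) in guess)))
    return Fraction(wins, n)
```

For example, for the 3-node tree 1-2, 1-3 with the single guess parent(2) = 1 and k = 1, two of the three possible roots (1 and 3) make the guess true, so the result is 2/3.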
_unix.183200 | I have the below file listing in one directory on Linux. I want to sort it like ls -ltr secu.meta and then rename the matching files from .meta to .xml. How can I do that with a loop in Linux? I tried a couple of options but had no luck. My goal is to loop through the file list, find a pattern, and rename all matching files to .xml. I may need to find another pattern as well, like ls -ltr est.meta to est.xml. This is just an example; I may have more than 5 patterns to search for.

401409.test_est.meta
301409.test_secu.meta
201409.test_secu.meta
201409.resp_secu.meta
2001409.test_esf.meta
101409.test_secu.meta

Thanks in advance to all Linux gurus. | How to loop through a file pattern and rename files in Linux? | linux | null
_unix.105322 | We have a big file with three columns: hour, response time and service name. These fields are separated by spaces. We need to write a script to calculate the number of occurrences of each service name on an hourly basis and find the average response time. A piece of the file is shown below.

15 999 createLead
15 999 getLead
15 999 jointCall
15 999 searchLead
16 1002 jointCall
16 1019 createLead
16 1031 jointCall
16 1032 jointCall
16 1040 jointCall
16 1044 jointCall
17 1011 createLead
17 1027 createLead
17 189 getLTSUserDetails
19 1439 searchLead
19 1708 searchFileStatus
19 1832 updateLead | script to calculate the average | text processing | awk

awk '
    {
        task[$3,$1] += $2
        count[$3,$1] += 1
    }
    END {
        for (t in task) {
            split(t, tHour, "\034");
            print tHour[1], tHour[2], count[t], task[t] / count[t]
        }
    }
' yourFile

Result (task, hour, number of occurrences, average response time):

searchFileStatus 19 1 1708
searchLead 19 1 1439
getLead 15 1 999
jointCall 15 1 999
jointCall 16 5 1029.8
updateLead 19 1 1832
createLead 15 1 999
createLead 16 1 1019
getLTSUserDetails 17 1 189
createLead 17 2 1019
searchLead 15 1 999

Previous attempt:

awk '
    {
        task[$3] += $2
        count[$3] += 1
    }
    END {
        for (t in task) {
            print t, count[t], task[t] / count[t]
        }
    }
' yourFile

Result (task, number of occurrences, average response time):

updateLead 1 1832
jointCall 6 1024.67
getLead 1 999
getLTSUserDetails 1 189
createLead 4 1014
searchLead 2 1219
searchFileStatus 1 1708
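The grouping the accepted awk program performs can be mirrored in Python for comparison. This sketch (the function name hourly_averages is mine) keys totals and counts by (service, hour), exactly as the awk arrays do:

```python
from collections import defaultdict

def hourly_averages(lines):
    """Per (service, hour): number of occurrences and mean response time."""
    total = defaultdict(int)
    count = defaultdict(int)
    for line in lines:
        hour, resp, service = line.split()
        total[service, hour] += int(resp)
        count[service, hour] += 1
    return {key: (count[key], total[key] / count[key]) for key in total}
```

For the sample rows above, hourly_averages(["16 1002 jointCall", "16 1031 jointCall"]) yields {('jointCall', '16'): (2, 1016.5)}.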
_cogsci.13370 | For my master's thesis I would like to run a survey on user emotions when using a web user interface. The participant should write down his emotions using the Geneva emotion wheel. (More information here: http://www.affective-sciences.org/gew).

Now I would like to know if there are any survey tools out there which offer this GEW as a clickable version. Or is it available somewhere as a widget which I can then add to my self-created online survey? | Survey tool with clickable version of the Geneva emotion wheel | emotion;survey | I made up an SVG version connected to a hidden HTML form. It's not perfect, but it may be useful anyway. You can find it here.

Note that a sufficient screen resolution is required.

Please send me feedback if you use it so that I can improve it.
_unix.11081 | I would like to highlight today's date in the output of the cal command. What is the best way?

This is what I have so far:

cal -m | grep -C6 --color $(date +%e)

but it doesn't work for all cases, e.g. when the date has a single digit. I also want the highlighting to work when I display the calendar for a year. | Highlight the current date in cal | date;highlighting;cal | null
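The question was left unanswered here; one portable approach is to let Python's calendar module render the month and wrap the day in ANSI reverse-video escapes, which also sidesteps the single-digit matching problem. A sketch (the function name and the header-skipping trick are my own; the year view would need the same substitution applied per month):

```python
import calendar
import re

ANSI_REV = '\x1b[7m'   # reverse video on
ANSI_OFF = '\x1b[0m'   # attributes off

def highlighted_month(year, month, day):
    """Render a month like cal(1), with `day` shown in reverse video."""
    text = calendar.TextCalendar().formatmonth(year, month)
    header, _, body = text.partition('\n')
    # \b keeps a single-digit day from also matching inside 1x/2x,
    # and skipping the header line protects the year number.
    body = re.sub(r'\b%d\b' % day,
                  lambda m: ANSI_REV + m.group(0) + ANSI_OFF,
                  body, count=1)
    return header + '\n' + body

print(highlighted_month(2011, 4, 9))
```

Feeding today's datetime.date.today() components in gives the highlighted current date.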
_datascience.6264 | I'm trying to cluster and classify users with Mahout. At the moment I am at the planning phase; my mind is completely mixed with ideas, and since I'm relatively new to the area I'm stuck at the data formatting.

Let's say we have two data tables (big enough). In the first table there are users and their actions. Every user has at least one action, and they can have very many actions, too. About 10000 different user_actions and millions of records are in the table.

user - user_action
u1 - a
u2 - b
u3 - a
u1 - c
u2 - c
u2 - c
u1 - b
u4 - f
u4 - e
u1 - e
u1 - d
u5 - d

In the other table, there are action categories. Every action may have none or multiple categories. There are 60 categories.

user_action - category
a - cat1
b - cat2
c - cat1
d - NULL
e - cat1, cat3
f - cat4

I'm going to try to build a user classification model with Mahout but I've no idea what I should do. What type of user vectors should I create? Or do I really need user vectors?

I think I need to create something like:

u1 (a, c, b, e, d)
u2 (b, c, c)
u3 (a)
u4 (f, e)
u5 ()

The problem here is that some users performed more than 100000 actions (some of them are the same actions). So this is more useful, I think:

u1 (cat1, cat1, cat2, cat1, cat3)
u2 (cat2, cat1, cat1)
u3 (cat1)
u4 (cat4, cat1, cat3)
u5 ()

The things I also worry about are:

How should I weight categories for users? For example u1 has at least three actions related to cat1, while u3 has only 1. Should these be weighted differently?

How can I decrease the difference between active users and passive ones? For example, u1 has very many actions (and so categories), while u3 has only 1.

Any guidance is welcome. | User profiling with Mahout from categorized user behavior | classification;clustering;apache mahout | null
_codereview.135497 | For my first Kotlin project, I'm implementing the Redux pattern with simple Increment and Decrement buttons and a text view to display the current value.

My main questions have to do with Kotlin and Android idioms and how I structured my code. Is the below a radical departure from how Kotlin is normally written? For example the when statement... Would you put an else clause in it instead of the return at the bottom of the function?

Does it look right to override the onStart and onStop or would it make sense to move the store.subscribe code into the onCreate? If I did that, would the GC successfully collect the activity and the text field? (The GC in general has me nervous.) Is it weird to be using so many function objects or is that acceptable in Kotlin? Any constructive criticism is also wanted.

package com.myapplication

import android.support.v7.app.AppCompatActivity
import android.os.Bundle
import android.view.View
import android.widget.Button
import android.widget.TextView

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        textView = findViewById(R.id.textView) as TextView

        val incrementButton = findViewById(R.id.increment_button) as Button
        incrementButton.setOnClickListener { store.dispatch(Action.INCREMENT) }

        val decrementButton = findViewById(R.id.decrement_button) as Button
        decrementButton.setOnClickListener { store.dispatch(Action.DECREMENT) }
    }

    override fun onStart() {
        super.onStart()
        unsubscriber = store.subscribe { state ->
            textView?.text = state.count.toString()
        }
    }

    override fun onStop() {
        unsubscriber?.invoke()
        super.onStop()
    }

    private var unsubscriber: (() -> Unit)? = null
    private var textView: TextView? = null
}

package com.myapplication

import com.redux.Store

enum class Action { INCREMENT, DECREMENT }

data class State(val count: Int = 0) { }

fun reducer(action: Action, state: State): State {
    when (action) {
        Action.INCREMENT -> return state.copy(count = state.count + 1)
        Action.DECREMENT -> return state.copy(count = state.count - 1)
    }
    return state
}

val store = Store<Action, State>(State(), ::reducer)

package com.redux

class Store<Action, State>(initialState: State, reducer: (action: Action, state: State) -> State) {
    fun dispatch(action: Action) {
        if (dispatching) {
            throw Error("Can't dispatch in the middle of a dispatch.")
        }
        dispatching = true
        currentState = reducer(action, currentState)
        notifySubscribers()
        dispatching = false
    }

    fun subscribe(subscriber: (state: State) -> Unit): () -> Unit {
        val id = uniqueID
        uniqueID += 1
        subscribers[id] = subscriber
        val dispose: () -> Unit = { subscribers.remove(id) }
        subscriber(currentState)
        return dispose
    }

    private var currentState = initialState
    private val reducer = reducer
    private var dispatching = false
    private var subscribers: MutableMap<Int, (state: State) -> Unit> = mutableMapOf()
    private var uniqueID = 0

    private fun notifySubscribers() {
        for (subscriber in subscribers.values) {
            subscriber(currentState)
        }
    }
} | Counter with increment and decrement buttons | beginner;android;kotlin;redux | Preface: I haven't worked in the Android ecosystem, so this review is mostly looking at the Kotlin side of things. I've also had minimal exposure to the Redux pattern.

Some low hanging fruit that IntelliJ IDEA points out:

return state is unreachable code in fun reducer. Control flow will always exit as action can only be .INCREMENT or .DECREMENT. I would use an else -> throw IllegalStateException("Invalid Action") in the when, to keep safety in the case of expanding the enum class Action in the future.
As a bonus, this is not marked as unreachable code, as IntelliJ recognizes it as a fallback case.

Explicit type arguments in val store = Store<Action, State>... are not required: just val store = Store(State(), ::reducer) works fine and Kotlin infers the types. This is Kotlin's biggest advantage over Java.

class Store<Action, State> can have declaration site variance. Read the link for information about what this means; it is written class Store<in Action, out State> and does not change any semantics of the present code.

private val reducer can be declared directly in the constructor. It's personal preference whether you declare it there or not, but if you do, I would probably also declare currentState there as well (instead of initialState).

Some other points:

I wouldn't use the package com.myapplication; it's a placeholder. Instead, give it something that identifies this package as yours. Oracle's Java tutorial recommends using a domain you own. Even though I do own http://cad97.com, rather than use package com.cad97.project I usually use just cad97.project, as a holdover from before I bought the domain. For you, this pattern would suggest a base package of daniel_t. If you do not own the domain, DO NOT use a com (or any other common TLD) base package.

Personally, I typically don't specify argument labels on function types. (This refers to the types for parameters to the Store constructor, the MutableMap types, the subscribe method.)

Java (and in my adventures so far, Kotlin) typically uses one-letter types for generic types, to distinguish them from regular types. This is compounded by the fact that your generic Action is named the same as your concrete Action.
How you resolve this is up to the programmer, but in an ideal world you would not have this conflict of names.

You can return dispose directly: just write return { subscribers.remove(id) }

Store::notifySubscribers can be written as a single-expression function:

private fun notifySubscribers() = subscribers.values.forEach { it(currentState) }

subscribers can be declared as private var subscribers = mutableMapOf<Int, (State) -> Unit>()

Rather than throw Error from Store::dispatch, use IllegalStateException, as it is more descriptive of the reason for the error.

By the format of Store::dispatch, it looks like what you're trying to do is use a Lock. Kotlin provides special support for this, which would be written something like this:

private val dispatchLock = ReentrantLock() // replaces dispatching

fun dispatch(action: Action) = dispatchLock.withLock {
    currentState = reducer(action, currentState)
    notifySubscribers()
}
_opensource.5740 | For a commercial project, I need to add support for HEIF images (Nokia High-Efficiency Image File Format). I found an SDK provided by Nokia that seems to do the job. However, I'm in trouble with the license agreement.

In the license, it is said:

Nokia Technologies Ltd (Nokia) hereby grants to you a non-sublicensable, perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this license) license, under its copyrights and Licensed Patents only to, use, run, modify (in a way that still complies with the Specification), and copy the Software within the Licensed Field. For the avoidance of doubt the Licensed Patents shall not include Codec Patents. Codec Patent licenses are neither granted, implied nor otherwise conveyed hereunder.

which seems to indicate that the source code may be used inside a commercial project. However, the term Licensed Field refers to this paragraph:

Licensed Field means the non-commercial purposes of evaluation, testing and academic research in each non-commercial case to use, run, modify (in a way that still complies with the Specification) and copy the Software to (a) generate, using one or more encoded pictures as inputs, a file complying with the Specification and including the one or more encoded pictures that were given as inputs; and/or (b) read a file complying with the Specification, resulting into one or more encoded pictures included in the file as outputs.

which seems to indicate the exact opposite of the first paragraph above. Copyright issues are not my strong point, so can somebody tell me whether the code covered by this agreement may be used in a commercial project or not? | Can the source code under a HEIF License be used for a commercial project?
| license compatibility;commercial | IANAL/IANYL.

In the first license excerpt you provided, it says you can "only to, use, run, modify (in a way that still complies with the Specification), and copy the Software within the Licensed Field". Then, you continue with:

which seems to indicate that the source code may be used inside a commercial project.

I don't see a justification for that assertion. It says "only ... in the Licensed Field". Without knowing what Licensed Field means you cannot assume anything. Knowing that, the license includes the second excerpt you provided, which says what Licensed Field means. Doing substitution results in the following created condition:

only to, use, run, modify (in a way that still complies with the Specification), and copy the Software within non-commercial purposes of evaluation, testing and academic research.

That seems to pretty much rule out any commercial purpose completely. It even rules out most non-commercial purposes except evaluation, testing and academic research. That seems to even rule out using it to display your family photos on your personal (non-commercial) website, since that is not evaluation, testing, or academic research.

In short: Find a different SDK, or get a commercial license from Nokia to use their SDK.
_codereview.144464 | I wanted to create containers from other container instances by changing the order of elements in a stridden pattern such as following:[1, 2, 3, 4, 5, 6, 7, 8] with factor of 2 as stride turns into:[1, 3, 2, 4, 5, 7, 6, 8]I try to keep it as generically as possible I can.If I have reinvented the wheel and this can easily done by STL algorithms please warn me. template<typename SeqContainer, typename Integral>SeqContainer strider(const SeqContainer & c, Integral stride){ static_assert(std::is_integral<Integral>::value, 'Integral' template type must be an integral type!); SeqContainer n; n.resize(c.size()); if(c.size() % stride != 0) { std::cerr << container size must be multiple of stride!\n; return SeqContainer{}; } std::vector<decltype(c.begin())> it_vec; it_vec.reserve(c.size() / stride); for(auto it = c.begin(), et = c.end(); it != et;) { it_vec.emplace_back(it); std::advance(it, stride); } for(auto it = n.begin(), et = n.end(); it != et;) { for(auto& iter : it_vec) { *it = std::move(*iter++); ++it; } } return n;}The ideone link : http://ideone.com/9QaPa0 | Stride Shuffling of Sequential STL Containers (except std::array) | c++;algorithm;c++11;collections | null |
_webmaster.29608 | In my Google Analytics, I have a Goal URL defined for an exact match of a thank you page. When I view in my reports under Conversion > Funnel Visualization, it says at the bottom of my report that 27 people completed the funnel and saw the thank you page. However, when I look at my content report and check for how many unique visitors saw the page, it says 38. It also says 38 when I click goal flow under the conversions reports. Shouldn't the numbers all be the same? Anybody know why it's not? | Google Analytics Goals Not Adding Up With Goal Funnel | analytics;goals | null |
_codereview.12190 | I am creating a skeleton application to demonstrate use of the Strategy design pattern and inversion of control inside a Strategy.

The application is the backend of a simple board game, which also has the capability to save and load the board. Since we can have different types of data stores for saving, I have used an interface PersistenceStrategy and concrete implementations of it like the FilePersistenceStrategy.

I have listed the code below. However, I am stuck in one place. Inside the FilePersistenceStrategy class, the save() as well as load() methods need to communicate with a file. I do not want these methods to create a file handle (Reader/Writer) themselves, since this will be bad for testability.

I know there are several ways for these methods to get hold of FileReader and FileWriter objects - like getting them from a factory, or getting the FQN from a properties file and instantiating them using reflection. The most common method of solving this problem - Dependency Injection - cannot be used here because the load() and save() methods come from an interface and we cannot add Strategy-specific parameters to the interface.

How would you solve this problem, and what in your opinion are the pros and cons of different approaches?

Code follows:

Board.java

package com.diycomputerscience.example.sanddi;

public class Board {

    private PersistenceStrategy persistenceStrategy;
    private BoardState state;

    public Board(PersistenceStrategy persistenceStrategy) {
        this.persistenceStrategy = persistenceStrategy;
    }

    //various methods of playing the board game

    public void load() throws PersistenceException {
        this.state = this.persistenceStrategy.load();
    }

    public void save() throws PersistenceException {
        this.persistenceStrategy.save(this.state);
    }
}

BoardState.java

package com.diycomputerscience.example.sanddi;

public class BoardState {
}

PersistenceStrategy.java

package com.diycomputerscience.example.sanddi;

public interface PersistenceStrategy {
    public void save(BoardState state) throws PersistenceException;
    public BoardState load() throws PersistenceException;
}

FilePersistenceStrategy.java

package com.diycomputerscience.example.sanddi;

import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class FilePersistenceStrategy implements PersistenceStrategy {

    @Override
    public void save(BoardState state) throws PersistenceException {
        FileWriter writer = getFileWriter();
        //logic to save the state
        if(writer != null) {
            try {
                writer.close();
            } catch(IOException ioe) {
                String msg = "Could not close FileWriter";
                throw new PersistenceException(msg, ioe);
            }
        }
    }

    @Override
    public BoardState load() throws PersistenceException {
        FileReader reader = getFileReader();
        BoardState state = parseFileForBoardState(reader);
        return state;
    }

    private BoardState parseFileForBoardState(FileReader reader) {
        // TODO Auto-generated method stub
        return null;
    }

    /**
     * HOW DO I USE INVERSION OF CONTROL HERE FOR BETTER TESTABILITY ?
     * @return
     */
    private FileReader getFileReader() {
        // Create a FileReader from a well known file name...
        // perhaps a hardcoded name or read name from a config file
        return null;
    }

    /**
     * HOW DO I USE INVERSION OF CONTROL HERE FOR BETTER TESTABILITY ?
     * @return
     */
    private FileWriter getFileWriter() {
        // Create a FileWriter from a well known file name...
        // perhaps a hardcoded name or read name from a config file
        return null;
    }
}

PersistenceException.java

package com.diycomputerscience.example.sanddi;

public class PersistenceException extends Exception {

    //Override all constructors from the superclass

    public PersistenceException(String msg, Throwable cause) {
        super(msg, cause);
    }
} | How to use inversion of control within a strategy when using the strategy pattern | java;design patterns;unit testing | For me the question is how the strategy is created. Is it possible to inject the file name during construction of the strategy?
That would be the way I would do it. Otherwise, you can create callbacks that are created and injected: when you need the file, call this callback and let it decide (e.g. by asking the user).
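Transposed to Python for brevity (the class and parameter names here are illustrative, not from the question): the strategy receives factories for its reader and writer at construction time, so production code can pass real file openers while a test passes in-memory streams.

```python
import io

class FilePersistenceStrategy:
    # The reader/writer factories are injected at construction instead of
    # being created inside save()/load(), which is what makes this testable.
    def __init__(self, reader_factory, writer_factory):
        self._reader_factory = reader_factory
        self._writer_factory = writer_factory

    def save(self, state):
        # Error handling and closing elided to keep the sketch short.
        writer = self._writer_factory()
        writer.write(state)

    def load(self):
        reader = self._reader_factory()
        return reader.read()

# Production would pass real file openers; a test passes StringIO doubles.
out = io.StringIO()
strategy = FilePersistenceStrategy(lambda: io.StringIO("saved board"),
                                   lambda: out)
strategy.save("current board")
print(out.getvalue())   # -> current board
print(strategy.load())  # -> saved board
```

In production one would pass something like lambda: open(path) and lambda: open(path, 'w'); the interface of the strategy itself never changes, which is the point of injecting the construction-time dependency.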
_codereview.54034 | This is my first Scala game. I would love some feedback on my coding style, or your brief input on how you would do it.

object tictactoe extends App {

  def tttFormat(board: Array[Char]): String =
    "|" + board(0) + "|" + board(1) + "|" + board(2) + "|\n" +
    "|" + board(3) + "|" + board(4) + "|" + board(5) + "|\n" +
    "|" + board(6) + "|" + board(7) + "|" + board(8) + "|\n"

  println("Enter the number of the square you want to occupy!\n" + tttFormat(GameObj.board))

  while(GameObj.atPlay) {
    println(tttFormat(GameObj.updatedStateArray))
  }
  GameObj.nextTurn
  println("game over! " + GameObj.nextTurn + "'s win!")
}

object GameObj {

  val board: Array[Char] = Array('1','2','3',
                                 '6','5','4',
                                 '7','8','9')

  var whosTurn = false

  def nextTurn: Char = {
    whosTurn = !whosTurn
    if (whosTurn) 'X' else 'O'
  }

  def atPlay: Boolean =
    !(board(0) == 'X' && board(1) == 'X' && board(2) == 'X') &&
    !(board(3) == 'X' && board(4) == 'X' && board(5) == 'X') &&
    !(board(6) == 'X' && board(7) == 'X' && board(8) == 'X') &&
    !(board(0) == 'X' && board(4) == 'X' && board(8) == 'X') &&
    !(board(6) == 'X' && board(4) == 'X' && board(2) == 'X') &&
    !(board(0) == 'X' && board(3) == 'X' && board(6) == 'X') &&
    !(board(1) == 'X' && board(4) == 'X' && board(7) == 'X') &&
    !(board(2) == 'X' && board(5) == 'X' && board(8) == 'X') &&
    !(board(0) == 'O' && board(1) == 'O' && board(2) == 'O') &&
    !(board(3) == 'O' && board(4) == 'O' && board(5) == 'O') &&
    !(board(6) == 'O' && board(7) == 'O' && board(8) == 'O') &&
    !(board(0) == 'O' && board(4) == 'O' && board(8) == 'O') &&
    !(board(6) == 'O' && board(4) == 'O' && board(2) == 'O') &&
    !(board(0) == 'O' && board(3) == 'O' && board(6) == 'O') &&
    !(board(1) == 'O' && board(4) == 'O' && board(7) == 'O') &&
    !(board(2) == 'O' && board(5) == 'O' && board(8) == 'O')

  def updatedStateArray: Array[Char] = {
    val in: Int = Integer.parseInt(readLine()) - 1
    board.update(in, nextTurn)
    board
  }
} | Scala Tic Tac Toe Game | game;functional programming;scala | null
_cs.13282 | It's known that the complement of a DFA can be easily formed. That is, given a machine $M$, we can construct $M'$ such that $L(M') = \Sigma^* \setminus L(M)$.

Is it possible to construct such a complement for a non-deterministic finite automaton (NFA)? To my knowledge, it isn't. | Complement of Non-deterministic Finite Automata | automata;finite automata;closure properties | null
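Since the question was left unanswered here, a small sketch can make the asymmetry concrete: for a DFA, complementation is just flipping the accepting states, while doing the same to an NFA fails because acceptance only requires some run to succeed (the standard fix is determinizing via the subset construction first). All names below are illustrative:

```python
def dfa_accepts(delta, start, accepting, word):
    """delta maps (state, symbol) -> state; follows the unique run."""
    state = start
    for sym in word:
        state = delta[(state, sym)]
    return state in accepting

def nfa_accepts(delta, start, accepting, word):
    """delta maps (state, symbol) -> set of states; accepts if some run does."""
    current = {start}
    for sym in word:
        current = {t for s in current for t in delta.get((s, sym), set())}
    return bool(current & accepting)

# DFA over {'a'} accepting an even number of a's; flipping the accepting
# set complements the language.
even = {('e', 'a'): 'o', ('o', 'a'): 'e'}
assert dfa_accepts(even, 'e', {'e'}, 'aa')
assert not dfa_accepts(even, 'e', {'o'}, 'aa')   # flipped set = complement

# NFA for { a^n : n >= 1 }; flipping its accepting set does NOT
# complement it: both machines accept the word 'a'.
nd = {('q0', 'a'): {'q0', 'q1'}}
assert nfa_accepts(nd, 'q0', {'q1'}, 'a')
assert nfa_accepts(nd, 'q0', {'q0'}, 'a')
```

The last two assertions show a word accepted both by the NFA and by its accept-flipped version, so the flipped machine cannot recognize the complement.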
_unix.11152 | How to get the icon for a MIME type?

I don't want to use g_file_info_get_icon() because its input parameter is a file, not a MIME type.

My use case: I have a wrapper file which embeds a media object. I can get the MIME type of the media object and want to find the icon for that MIME type. And I don't want to dump the media object to a separate file just to use g_file_info_get_icon(). | How to get the icon for a MIME type? | gnome;gtk;mime types;icons | null
_codereview.44745 | I've written a function to call with the deferred library to generate a large task queue, and at first without the recursion it timed out (DeadlineExceeded), so now I'm trying it with recursion - but is it guaranteed that the function will terminate? I actually ran it on GAE and it did the correct task - deleted all the empty blobs that were written previously because of a bug. (It seems that empty blobs with 0 bytes get written several times a day and the code is to delete those blobs.)

def do_blobs_delete(msg=None, b=None, c=None):
    logging.info(msg)
    edge = datetime.now() - timedelta(days=750)
    statmnt = "WHERE size < 1"  # size in bytes
    logging.debug(statmnt)
    try:
        gqlQuery = blobstore.BlobInfo.gql(statmnt)
        blobs = gqlQuery.fetch(999999)
        for blob in blobs:
            if blob.size < 1 or blob.creation < edge:
                blob.delete()
                continue
    except DeadlineExceededError:
        # Queue a new task to pick up where we left off.
        deferred.defer(do_blobs_delete, "Deleting empty blobs", 42, c=True)
        return
    except Exception, e:
        logging.error('There was an exception:%s' % str(e))
 | Generating a large task queue | python;recursion;error handling;google app engine | Yes, that function terminates (unless the semantics of blobstore are stranger than expected and blobs is an infinite generator). One thing to be concerned about might be that, should your query take a very long time, you would not delete a single blob before a DeadlineExceededError is thrown, and so schedule another task without doing any work. This is a bad thing, as you may end up with many jobs that simply get half way through a query, give up and then schedule themselves to run again. The worst part is that your only indication would be that your log would be full of info level messages (i.e. messages that will be ignored), giving you no idea that this travesty was unfolding in your task queue.

I would recommend you add some kind of limit to make sure you are decreasing the number of blobs towards zero each time.
You could think of it as an inductive proof almost; see Burstall's 69 paper on structural induction if you feel like formulating a mathematical proof. However, that is probably not necessary in this case. My suggested rewrite would be something like:

# The number of times one call to do_blobs_delete may `recurse` should probably
# be made a function of the count of all blobs
MAX_ATTEMPTS = 10

def do_blobs_delete(msg=None, b=None, c=None, attempt=0):
    logging.info(msg)
    edge = datetime.now() - timedelta(days=750)
    statmnt = "WHERE size < 1"  # size in bytes
    logging.debug(statmnt)
    try:
        gqlQuery = blobstore.BlobInfo.gql(statmnt)
        blobs = gqlQuery.fetch(999999)
        for blob in blobs:
            if blob.size < 1 or blob.creation < edge:
                blob.delete()
    except DeadlineExceededError:
        if MAX_ATTEMPTS <= attempt:
            logging.warning("Maximum blob delete attempt reached")
            return
        # Queue a new task to pick up where we left off.
        attempt += 1
        deferred.defer(do_blobs_delete, "Deleting empty blobs", 42, c=True,
                       attempt=attempt)
        return
    except Exception, e:
        logging.error('There was an exception:%s' % str(e))

Note that this does not explicitly address the lack of an inductive variable, though it does limit any damage done.

Another thing to note is that you have a race condition on the .delete. Should the method be called concurrently, say by process A and B, then the call to .fetch could return the same set of blobs to each. They would then attempt to delete the elements twice, which would raise an error on the .delete call, leading to fewer blob deletions than expected. This problem gets a lot worse when we consider more processes and things like unspecified ordering. The correct way to handle this is to treat any exceptions thrown by the .delete in a more nuanced way than you are currently, and accommodate multiple attempted deletes without prejudice.

Your current code will work fine at the moment; the problems will manifest themselves when things get more complicated.
While I am sure Google's infrastructure can handle these upsets, your bottom line may be less flexible. |
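The termination property the rewrite relies on can be isolated in a tiny harness that is independent of App Engine (purely illustrative; bounded_defer and the toy task are my names): the attempt counter guarantees the reschedule chain is finite even when the work never succeeds.

```python
def bounded_defer(task, max_attempts):
    """Toy model of the retry pattern above.

    `task(attempt)` returns True when it finished, or False when it hit
    the (simulated) deadline.  Rescheduling stops after max_attempts, so
    the loop below - the chain of deferred tasks - is always finite.
    """
    attempt = 0
    while not task(attempt):
        if attempt >= max_attempts:
            return False   # give up, as the logging.warning branch does
        attempt += 1
    return True

calls = []
def flaky(attempt):
    calls.append(attempt)
    return attempt >= 2    # succeeds on the third try

print(bounded_defer(flaky, 10))            # -> True
print(calls)                               # -> [0, 1, 2]
print(bounded_defer(lambda a: False, 3))   # -> False
```

The last call shows the safety net: a task that never finishes is still abandoned after a bounded number of reschedules instead of looping forever.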
_unix.371111 | Consider the following in bash:

root@debian-lap:/tmp I=$(echo)
root@debian-lap:/tmp echo $I

root@debian-lap:/tmp [ -z "$I" ] && echo TRUE || echo FALSE
TRUE

This means that the variable $I is zero-length. The same thing I could achieve with a negated test to see if the variable is non-zero, as ! reverses the test, so it checks whether the variable is zero:

root@debian-lap:/tmp ! [ -n "$I" ] && echo TRUE || echo FALSE
TRUE
root@debian-lap:/tmp

So, my question is: are there any special cases when to use -z and ! -n, or vice versa ! -z and -n, as they are basically doing the same test? Thanks | Usage of -n and -z in the test built-in - Bash | bash;shell script;shell;test | You are given -n and -z for the same reason that other test suites give you both == and !=, or AND and NOT. Some test cases can be made a lot clearer to future maintainers by eschewing double-negatives. Also, as mentioned in an above comment, ancient incarnations of sh (i.e. the Bourne and Thompson shells), as opposed to modern POSIX sh, did not have a ! keyword to negate the truthiness of test expressions.
_unix.366148 | A web application running on CentOS 7 (app server) in a private LAN needs to make database connections to another CentOS 7 server (database server) running on the same private LAN. When I type systemctl stop firewalld on the app server, the database connections to the remote database server work perfectly. But when I type systemctl start firewalld on the same application server, the web application is no longer able to connect to the remote database server. This tells me that I need to create an outbound firewalld rule on the application server. But that would require knowing what port needs to be used for the outbound connections. What specific commands can be used to determine which port is being used in the application server to make remote connections to the database server? | What port is a CentOS 7 app using to make remote connections? | centos;networking;firewall;firewalld | null |
_softwareengineering.85003 | What is the best way to develop and maintain legacy code not in version control? Adding it to version control is of course the obvious answer, but if you can't, for some reason, what would you do?A few reasons I can think of why version control wouldn't be possible are:Management is against it (doesn't understand it, thinks it would take to much time, isn't worth it, etc.)You don't have the administritative privileges to install the required software.The code runs/is stored on legacy systems with limited capabilities for version control.So, if real version control isn't available, what do you do? Set up some regular backup system? Or perhaps create version-named folders? | What is the best way to deal with legacy code not in version control? | project management;version control;legacy | There is no reason to not be able to have version control period.If management is against it even after having the benefits pointed out to them, they're clearly not fit to be managers and their boss should be approached.If you don't have administrative privileges, either get them or have someone from IT install the software.The code shouldn't be executed and be stored on the same machine. You should have centralized source control with a proper deployment pipeline. |
_unix.122105 | The debian packages console-data, console-setup, console-common and console-tools (maybe even more) all seem to do the same thing. What are the differences and which ones should I use? | What is the difference between console-data, console-setup, console-common and console-tools? | debian | Debian likes to split applications into small units, even when 99% of people would want to install everything, for the sake of the 1% with unusual needs. You exagerate however when you claim that they all seem to do the same thing the descriptions are pretty informative.console-data contains architecture-independent data such as keymaps and fonts. There is a single binary package for all architectures, which saves space on package mirrors and download bandwidth on sites with installations of multiple architectures. The data package isn't useful by itself, it'll get pulled in as a dependency of programs that use that data.console-tools contains the programs that use the data in console-data: set a keymap with loadkeys, set a font with consolechars, etc. The package also contains some tools to manage text consoles such as chvt, openvt, ... This package recommends console-data, but does not depend on it, because you don't have to have all the keymaps and fonts: you may want the package just for the other tools, or to load one keymap.console-common contains just the infrastructure necessary to load a keymap at boot time. It depends on both console-data (for the keymaps) and console-tools (for the loadkeys program). This package is there to provide an easy configuration; if you want a minimalist system without all the keymaps, you can do the same job manually.console-setup is an extra program to convert X11 keymaps into Linux console keymaps.You missed kbd, which is an alternative implementation of console-tools. 
I don't know what the differences are. For most users, the answer to which ones should I use is none: just let your distribution pull in whatever it wants by default. You won't be interacting with the console much anyway: as soon as X starts, all of this is irrelevant. |
_unix.148829 | I had a problem:Could not retrieve mirrorlist http://mirrorlist.centos.org/release=6&arch=x86_64&repo=os error was14: PYCURL ERROR 6 - Couldn't resolve host 'mirrorlist.centos.org'Error: Cannot find a valid baseurl for repo: baseI've read and tried this suggestion. Our server is remote, we connect to it with ssh. After issuing ifdown eth0, the server stopped responding.My OS is CentOS. How can I fix my problem? | Access remote server after running ifdown eth0 | ssh;ethernet | null |
_softwareengineering.314963 | I'm writing a library function that takes a list (or bunch) of items (let's say Student) and does something with them. What's the best way to write the function signature in the interface?
std::vector<Student>: The problem with this is that maybe my caller isn't using STL, and then they need to instantiate and populate a vector. Not clean, and performance suffers.
Student* + numOfStudents: doesn't seem right, this isn't C.
A function taking a begin and end iterator? Should it then be implemented as a template function? | How should I write an interface that takes a list of items? | c++;interfaces | If the goal is to accept as many different types of list as possible, then you should write a template function that accepts anything, and in your implementation either use the begin() and end() free functions to get iterators for it, or use a foreach loop (which the compiler will implement for you using begin() and end()). This will work for raw arrays, all standard container classes, and all non-standard container classes emulating the standard container interface. In fact, that level of generic-ness is exactly why begin() and end() free functions were added to the language. If you really need to accept anything, you can still provide additional overloads that take a begin and an end iterator or a pointer and a length.
_unix.121956 | When I run command lvs -a the output shows the logical volumes from which the mirrored-volume has been made but how to find the actual disks from which the sub-logical volume were allocated. Any specific command, options? Or do we have to manually find out?[root@cent06x32vm12 ~]# lvs -aLV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convertmvol3 datavg2 mwi-a-m--- 40.00m mvol3_mlog 100.00 [mvol3_mimage_0] datavg2 iwi-aom--- 40.00m <==== How to find which disks these come from [mvol3_mimage_1] datavg2 iwi-aom--- 40.00m <==== [mvol3_mlog] datavg2 lwi-aom--- 4.00m <==== | How to find the underlying disks of a mirrored volume in LVM (linux)? | linux;lvm | To show all lvs and where they come from you can check with:lvs -ao +devices |
_unix.204018 | Often when installing or upgrading packages the following appears in the log:* .tar.gz SHA256 SHA512 WHIRLPOOL size ;-) ...What does this mean and what does the emoticon signify? ;-) | What does WHIRLPOOL size ;-) mean? | gentoo;emerge | When a package maintainer creates a version of a package, the repoman tool takes the input files, usually a tar archive with source code and the ebuild itself, and calculates a number of hashes on it. This information is then recorded in a package's Manifest file. Before portage unpacks and compiles the package, it verifies that all these hashes are accurate. For example, if you look at /usr/portage/app-editors/vim/Manifest, you'll see a list of files for that package, along with a list of hashes. The check you are seeing is portage having verified that the hashes are right, and it will then proceed to unpacking/compiling/installing. The specific list you are seeing SHA256 SHA512 WHIRLPOOL size tells you that portage successfully verified the hashes SHA256, SHA512, WHIRLPOOL, and in addition, the file size. Why there's a smiley in there, I don't really know. To test the above, and see the check fail, simply make any small change to an ebuild, and then try to install it. For example, changing a single letter in what is the current vim version at the time of writing, I get:
# emerge -vp vim
These are the packages that would be merged, in order:
Calculating dependencies /
 * Digest verification failed:
 * /usr/portage/app-editors/vim/vim-7.4.273.ebuild
 * Reason: Failed on SHA256 verification
 * Got: 376375965ab5830f176e9825e1f69b98f88d14331db5527317308b201befa933
 * Expected: cbc64bcd5136f7c6059e379634e75117062204075001cf861d18a589c6f8535d
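The verification portage performs is ordinary hashing; a small sketch reproducing the idea by hand (assumes coreutils sha256sum; the file and values are illustrative):

```shell
# Record a file's SHA256, then verify it later -- the same check a
# Manifest entry encodes, minus portage's bookkeeping.
tmp=$(mktemp)
printf 'hello\n' > "$tmp"
recorded=$(sha256sum "$tmp" | cut -d' ' -f1)   # what a Manifest would store
actual=$(sha256sum "$tmp" | cut -d' ' -f1)     # what verification recomputes
if [ "$recorded" = "$actual" ]; then
  echo 'SHA256 ok ;-)'
else
  echo 'Digest verification failed' >&2
fi
rm -f "$tmp"
```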
_unix.312500 | I am trying to install sIBL-GUI, a standalone application for rendering hdri with 3D programs. I am using openSUSE Leap 42.1. I've googled for answers and also searched the community at hdrlabs (sIBL-GUI), but could not find anything. According to the instructions shipped with the package, you need PyQt and python 2.7.3. openSUSE Leap 42.1 is shipped with PyQt4 and python 2.7.12, and I installed python-setuptools. I downloaded the sIBL-GUI package from github, followed the installation instructions and got these errors:
../hdri/sibl-gui/sIBL_GUI-develop/sIBL_GUI> python setup.py install
Traceback (most recent call last):
  File "setup.py", line 66, in <module>
    long_description=get_long_description(),
  File "setup.py", line 46, in get_long_description
    if ".. code:: python" in line and len(description) >= 2:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 18: ordinal not in range(128)
..../hdri/sibl-gui/sIBL_GUI-develop/sIBL_GUI> What do these errors mean and what must I do? | errors with setup.py for sIBL-GUI | software installation;python;opensuse | null
_unix.59500 | I have taken a complete image of a hard drive using:dd if=/dev/sda of=/home/user/harddriveimg bs=4MIt would seem to me, that I should be able to re-size the partitions within it after suitably mounting it.As I am less than familiar with the command line parted, I tried:gparted /home/user/harddriveimgWhile this loaded the partition table, it couldn't find the partitions themselves, e.g. harddriveimg0.Is it possible to modify an image file like this, without writing it back to some disk, and if so how? I would be perfectly happy with a solution that uses only terminal commands. | How to re-size partitions in a complete hard drive image? | filesystems;dd;storage;block device;gparted | You need to associate a loopback device with the file:sudo losetup /dev/loop0 /home/user/harddriveimgThen run gparted on that. |
_softwareengineering.211835 | I'm not sure if this is an acceptable question, but compiler-os-design-where-to-start was, so I figured that I'd take a shot at it.I have taken no formal Computer Science classes. I have programmed in Python and attempted C# without success. My technical vocabulary is expansive, yet scattered over a very wide range of computer science topics.I have a very long way to go before I can get to a level where I could reasonably read a book about compiler design/theory. I am asking what steps I need to take before attempting compiler design. I have some examples here already: Computer architectureBinaryHow booting/kernels/OSes workImperative vs. Comparative language designGrammarsAt least these are some examples of what I've seen.Edit: I can't for the life of me see how this problem is unclear. I quite clearly bolded what I was asking. I was expecting it to be marked as biased, vague/broad, or too generalized, but certainly not unclear. Don't be afraid to say it's unconstructive (just make sure whatever it is classified under is accurate). | Path to learning compiler design | design;compiler | Compilers are not some mythical creatures, even though some people might like you to think that.A compiler is a program like any other program. It takes some input, tries to make sense of it, and generates some output. Have you ever written a program which reads a text file in some format and outputs some HTML based on that text? Well, congratulations: you already have written a compiler. A very simple one, I admit, but it is a compiler.You approach it like any other program: try, fail, learn, repeat.Some resources to help you fail less and learn more :-)Let's Build a Compiler, by Jack CrenshawA Nanopass Framework for Compiler Education, by Dipanwita Sarkar, Oscar Waddell, and R. Kent Dybvig |
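That observation can be made concrete in a dozen lines; a toy "compiler" (hypothetical markup rules, purely illustrative) that reads text and emits HTML, i.e. the same read, analyze, generate pipeline every compiler follows:

```python
# A toy compiler: translate a tiny markup language into HTML.
# Lines starting with "# " become headings; other non-blank lines
# become paragraphs. Read input, analyze each line, generate output.
def compile_to_html(source):
    out = []
    for line in source.splitlines():
        if line.startswith("# "):       # "syntax analysis": heading rule
            out.append("<h1>%s</h1>" % line[2:])
        elif line.strip():              # ordinary paragraph
            out.append("<p>%s</p>" % line)
    return "\n".join(out)

print(compile_to_html("# Title\nHello world"))
# <h1>Title</h1>
# <p>Hello world</p>
```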
_unix.231999 | I had a working copy of Kali Linux on a USB and something happened and I accidentally formatted the USB. I have placed the 64 bit iso back on the USB with LinuxLive USB, and changed my boot settings to boot from USB. Now when it loads up all I get is a flashing _ and nothing ever happens. Any ideas what to do ? | Kali Linux not loading from USB | system installation;kali linux;live usb | Some distributions don't work well with certain live USB creation tools. Your best bet is to just create the USB again using a different tool. For example:Live USB multibootunetbootinYUMIA great source of information for this kind of thing is the pendrive linux website. |
_datascience.6030 | The Vowpal Wabbit apparently supports sequence tagging functionality via SEARN. The problem is that I cannot find anywhere detailed parameter list with explanations and with some examples. The best I could find is Zinkov's blog entry with a very short example. The main wiki page barely mentions SEARN.In the checked out source code I found demo folder with some NER sample data. Unfortunately, the script running all the tests does not show how to run on this data. At least it was informative enough to see what is the expected format: almost the same as standard VW data format, except that entries are separated by blank lines (this is important).My current understanding is to run the following command:cat train.txt | vw -c --passes 10 --searn 25 --searn_task sequence \--searn_passes_per_policy 2 -b 30 -f twpos.vwwhere--searn 25 - the total number of NER labels (?)--searn_task sequence - sequence tagging task (?)--searn_passes_per_policy 2 - not clear what it doesOther parameters are standard to VW and need no additional explanation. Perhaps there are more parameters specific to SEARN? What is their importance and impact? How to tune them? Any rules of thumb?Any pointers to examples will be appreciated. | Using Vowpal Wabbit for NER | machine learning;nlp | null |
_computerscience.5397 | I've been following a guide to learn OpenGL, and I'm now learning how to do post-processing.In particular, I'm trying to apply a blur to my rendering through the following kernel:$\frac{\begin{bmatrix}1&2&1\\2&4&2\\1&2&1\end{bmatrix}}{16}$The problem I'm encountering is that whenever I try to access a float array's member through the [] operator giving it an int variable as parameter, it returns 0.0.Here's my fragment shader:#version 410 corein vec2 txcoords; // Texture coordinatesout vec4 color;uniform sampler2D tx;const float offset = 1.0 / 300.0;void main() { vec2 offsets[9] = vec2[]( vec2(-offset, offset), vec2( 0.0, offset), vec2( offset, offset), vec2(-offset, 0.0), vec2( 0.0, 0.0), vec2( offset, 0.0), vec2(-offset, -offset), vec2( 0.0, -offset), vec2( offset, -offset) ); const float kernel[9] = float[9]( 1.0 / 16, 2.0 / 16, 1.0 / 16, 2.0 / 16, 4.0 / 16, 2.0 / 16, 1.0 / 16, 2.0 / 16, 1.0 / 16 ); vec3 sampleTex[9]; vec3 final = vec3(0.0); for (int i = 0; i < 9; i++) { sampleTex[i] = vec3(texture(tx, txcoords.st + offsets[i])); } for (int i = 0; i < 9; i++) { final += sampleTex[i] * kernel[i]; } color = vec4(final, 1.0);}This code doesn't work(whole screen renders black), but if I change it tofinal += sampleTex[i] * kernel[0];then my scene is rendered correctly, but, of course, without any blur, just a bit darker than normal.It also doesn't just work with kernel[0], but with any number ([1], [2], etc.)If instead I change it to final += sampleTex[i] * (kernel[i] + 0.05);then it renders exactly the same as withfinal += sampleTex[i] * 0.05;And that's why I ended up suspecting that it's the [i] part that just won't work, for whatever reason.Does anyone have any idea as to why this is happening? I also tried with different shader versions (330, 400, etc.) but they all give the same result.Thanks for your time! | GLSL broken access operator | opengl;glsl | null |
_softwareengineering.333356 | Let's consider a class ClassA that needs an instance of another class ClassB; I can pass this class in via ClassA's constructor:public class ClassA{ private readonly ClassB _classB; public ClassA(ClassB classB) { _classB = classB; }}public class ClassB{ public void Method1() { } public void Method2() { } public void Method3() { } public void Method4() { }}I understand that this causes issues because ClassA now has a dependency on a concrete type; also, if I write some unit tests they would not purely be testing ClassA (so strictly speaking they would be integration tests). To solve both of these I could either create an abstract base class or an interface that ClassB could implement and then inject this into the constructor instead. My issue is this: I would only have a single concrete implementation of the base class (ignoring implementations created purely for the sake of unit testing). I would end up creating abstract base classes for all types that I create, many of which would never have more than one implementation. This seems like a waste of time and adds to the complexity of the code. Does the benefit of having all those abstractions really outweigh the cost? Could I not simply inject concrete types (e.g. ClassB) and if there is a need for a different implementation simply convert the concrete implementation into an abstract base class and then subclass from that? | Should I create interfaces/abstract base classes when I only have a single implementation? | c#;unit testing;dependency injection | null
_unix.326929 | CentOS 7 - fresh instance.I want to open port 8888, so issued: # firewall-cmd --zone=public --add-port=8888/tcp --permanent success# firewall-cmd --reload success# netstat -plntActive Internet connections (only servers)Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1706/master tcp 0 0 0.0.0.0:26532 0.0.0.0:* LISTEN 1470/sshd tcp6 0 0 ::1:25 :::* LISTEN 1706/master tcp6 0 0 :::26532 :::* LISTEN 1470/sshd Where is port 8888? So I try:# firewall-cmd --zone=public --list-allpublic (default) interfaces: sources: services: dhcpv6-client https ports: 8888/tcp 26532/tcp masquerade: no forward-ports: icmp-blocks: rich rules: What? How come this shows port 8888? So I try:# lsof -iCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEchronyd 497 chrony 1u IPv4 12405 0t0 UDP my.srvr.com:323 chronyd 497 chrony 2u IPv6 12406 0t0 UDP localhost:323 sshd 1321 root 3u IPv4 14788 0t0 TCP *:26532 (LISTEN)sshd 1321 root 4u IPv6 14797 0t0 TCP *:26532 (LISTEN)master 1716 root 13u IPv4 15666 0t0 TCP my.srvr.com:smtp (LISTEN)master 1716 root 14u IPv6 15667 0t0 TCP localhost:smtp (LISTEN)sshd 1969 root 3u IPv4 71914 0t0 TCP my.srvr.com:26532->184.71.11.86:35174 (ESTABLISHED)sshd 1971 rob 3u IPv4 71914 0t0 TCP my.srvr.com:26532->184.71.11.86:35174 (ESTABLISHED)smtpd 2572 postfix 6u IPv4 15666 0t0 TCP my.srvr.com:smtp (LISTEN)smtpd 2572 postfix 7u IPv6 15667 0t0 TCP localhost:smtp (LISTEN)So where is port 8888? Why does it show up under firewall-cmd but not the others?Usually I use Ubuntu, so am a CentOS newbie. | CentOS - where are my ports? | centos;networking | Opening a port in the firewall does not automatically mean that an application will try to bind it. These are separate things. 
Applications can bind to ports that haven't been opened in the firewall and vice versa.firewall-cmd (and probably also iptables -L) shows you what your firewall looks like.netstat and lsof show you what your applications are doing and trying to do. |
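The separation the answer describes (binding is the application's business, filtering is the firewall's) can be seen from the application side with a few lines of Python (loopback only, no firewall involved):

```python
import socket

# An application binds and listens regardless of firewall state; the
# firewall only decides whether packets from outside ever reach the port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the kernel pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print("listening on port", port)  # this is what netstat/lsof would report

# Loopback traffic is typically not filtered, so a local client connects:
cli = socket.create_connection(("127.0.0.1", port))
cli.close()
srv.close()
```

So after opening 8888 in firewalld, netstat will still show nothing until some daemon actually binds that port.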
_webapps.102694 | I am struggling to find the way to move a Trello card using the Trello App for Slack.In the page From The Trello App for Slack /trello commands I can find multiple commands, but not the one to move cards through columns.Does it exist? | How can I move cards using Trello App for Slack? | trello;slack | null |
_webapps.75358 | I want a field to copy (duplicate) the data from another field. I am building a Booking Form for my tours. So for the Lead Passenger I created a field named TRIP CODE. I want that code to be duplicated in the additional passenger field without having to type it again. So I created the same field in the additional passenger section and wrote the formula =TripCode in this new text field. But it is not working. I just realized that when working in this ADDITIONAL PASSENGERS section (which is a repeating section), field formulas only work with the fields created in this section, not with the ones created outside this section. | Cognito Forms: I want a field to copy (duplicate) the data from another field | cognito forms | null
_computergraphics.3580 | This question was originally asked on Physics, then moved to Cognitive Sciences. Consider the following image: You might want to display the image in a new page, in case it gets resized for mobile displays. On the top half, there is a pixel-sized checkerboard pattern with alternating black and white pixels; on the bottom half, there's a black to white gradient. Now, I don't know if you see the same, but for me, when viewing from a distance or defocusing my eyes and the top half blending into one color, I can't find any color in the gradient below which would match it. With simple arithmetic, my first guess would be that the resulting color is either RGB $(0.5, 0.5, 0.5)$ or, with gamma, $(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5})$. I cannot help myself, but the resulting color appears a lot warmer than the metallic gray which would appear if you for example zoomed this page or used a blur filter. I tried to add some yellow to the gradient, and the result looks more similar to the perceived color. Now, based on the comments, it appears some people perceive a yellowish tint and some don't. And I do on my LCD computer screen but not on my mobile display. Thus I guess it's based on some property of the display. Why don't we perceive the resulting color as real gray? Where does the yellow color come from? I have a theory: based on the color arrangement in a typical LCD pixel, one white pixel would contain the red and green color together and blue on the right side. Blue appears darker to the human eye than colors with the same physical intensity, so a white pixel is more green than red than blue. Green and red have roughly the same perceptual intensity, and mixing red and green additively gives yellow; thus the yellow tint? Well, shouldn't then all white pixels on the display appear a bit yellow? Is this possible?
Are there any other explanations? A side question: do you know any computer image scaling algorithm or blur filter that tries to mimic this, correctly simulating the blurry vision of a human eye? | Why does checkerboard pattern on a computer screen appear with a yellowish tint? | color;pixel graphics;color management | Because your monitor is not properly calibrated. On my screen at home the top and bottom parts have the same hue. At my office though, the top part tends to look a bit yellow compared to the bottom part, which looks more red. The difference: my screen at home was from a series that was decently calibrated out of the factory, and on top of that I did calibrate it properly with a color calibration tool. That was quite some years ago and its colors have probably degraded since, but it still makes a difference. A poorly calibrated monitor will not display exactly the color intensity requested for R, G and B, resulting in differences between the color that should be displayed and the one actually displayed. It can even vary depending on the area of the screen (many consumer-level LCD screens tend to have more light leaking near the edges, for example). The image you included is a good example to highlight a calibration issue, because it allows you to compare directly the average color at 50% intensity (combining two halves at 0% and 100% intensity) with the color displayed when requesting 50% intensity. Moreover, the gradient allows you to see if the hue is constant from 0% to 100% intensity.
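The brightness side of the question's arithmetic can be checked numerically; a sketch using a simple gamma-2.2 power law (a rough stand-in for sRGB, so the numbers are approximate):

```python
# Average a black (0.0) and a white (1.0) pixel the way light actually
# adds up: convert encoded values to linear light, average, re-encode.
GAMMA = 2.2

def to_linear(v):       # encoded value -> linear light
    return v ** GAMMA

def to_encoded(v):      # linear light -> encoded value
    return v ** (1.0 / GAMMA)

mixed = to_encoded((to_linear(0.0) + to_linear(1.0)) / 2.0)
print(round(mixed, 2))  # 0.73 -- noticeably brighter than the naive 0.5
```

This explains why the checkerboard matches a point well above the middle of the gradient; the hue shift itself is a separate effect (calibration or subpixel layout, as discussed above).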
_softwareengineering.340850 | I have been tasked with helping our HR department create some new interview questions for candidates applying for development positions. As part of the process, I would like to assess their ability to both understand code without the help of a compiler and spot potential future problems. As such, I have devised a basic console application. The source code I will provide to the candidates and ask them the following questions:Would this code compile without any errors? If not, why not?Can you spot any potential issues in the code which might cause bugs in future?What improvements would you make to the code?I've got my own answers to the questions, but I would love to get some feedback to see if there's anything that I may have missed.using System;using System.Collections.Generic;namespace DeveloperTest{ class Program { static void Main(string[] args) { var users = new List<User>(); users.Add(new User(John Smith, 42, DateTime.Today)); string name = Jane Smith; int age = 37; int yearJoined = 17; int monthJoined = 01; int dayJoined = 15; users.Add(new User() { Name = name, Age = age, DateJoined = new DateTime(day: dayJoined, month: monthJoined, year: yearJoined) }); users[0].PrintUserInfo(); RemoveUsersUnderAge(users, 40); Console.ReadKey(); } private void RemoveUsersUnderAge(List<User> users, int age) { foreach (var user in users) { if (user.Age < age) { users.Remove(user); } } } } class User { private string _name; private int _age; private DateTime _dateJoined; public User() { } public User(string name, string age, DateTime dateJoined) { if (name == null) Name = Unknown; if (age == null) Age = 0; if (dateJoined == null) DateJoined = DateTime.Today; Name = name; Age = Convert.ToInt32(age); } public string Name { get { return _name; } set { _name = value; } } public int Age { get { return _age; } set { _age = value; } } public DateTime DateJoined { get { return _dateJoined; } set { _dateJoined = value; } } private void PrintUserInfo() { 
Console.WriteLine($Name: {this.Name}.); Console.WriteLine($Age: {this.Age} years old.); Console.WriteLine($Joined: {this.DateJoined}.); } }} | Console application for developer interviews | c#;.net;code reviews;debugging;maintainability | null |
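One of the planted bugs, RemoveUsersUnderAge mutating the list inside a foreach, is a classic across languages (in C# it throws InvalidOperationException at runtime). A Python illustration of the same pitfall, plus one common fix:

```python
# Buggy: removing from the list being iterated skips elements, because
# deletion shifts the next item into the slot the iterator just left.
users = [30, 35, 41, 50]
for age in users:
    if age < 40:
        users.remove(age)
print(users)   # [35, 41, 50] -- 35 survived, silently

# Fix: iterate over a copy (or build a new filtered list instead).
users = [30, 35, 41, 50]
for age in list(users):
    if age < 40:
        users.remove(age)
print(users)   # [41, 50]
```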
_unix.12801 | I'm trying to build the PHP memcache extension (v2.2.6) for i386 (32bit) on my x86_64 Ubuntu 11.04../configure uses config.guess by default (which outputs x86_64-unknown-linux-gnu on my system) but I want to override that.How would I have to proceed? | Building 32-Bit on a 64-Bit system | 64bit;cross compilation;32bit;configure | You need two things to cross-compile: a compiler that can generate code for the target architecture, and the static libraries (*.a) for the target architecture. Install at least the libc6-dev-i386 packages, and possibly other lib32.*-dev packages. The libc6-dev-i386 also pulls in the components of gcc needed for cross-compilation in the gcc-multilib package . Then tell gcc to compile for i386 by passing it the -m32 flag through the CFLAGS variable.sudo apt-get install libc6-dev-i386 lib32ncurses5-dev # whatever 32-bit libraries you needexport CFLAGS='-m32'./configure If you don't find all the libraries you need, it'll probably be easier to install a 32-bit Ubuntu in a chroot. Ubuntu ships dchroot from the Debian buildd project, which makes running a chrooted system easy. Use debootstrap to perform the installation. There's a reasonable-looking dchroot tutorial on the Ubuntu forums. |
_unix.315150 | I am using dhclient on Debian (isc-dhcp-client, isc-dhcp-common).The logs that dhclient is logging are flooded with copyright information and other useless stuff (please consult README, please visit http://...)This is how simple dhcp session looks like:dhclient: Internet Systems Consortium DHCP Client 4.2.2dhclient: Copyright 2004-2011 Internet Systems Consortium.dhclient: All rights reserved.dhclient: For info, please visit https://www.isc.org/software/dhcp/dhclient: dhclient: Listening on LPF/wlan0/aa:bb:cc:dd:ee:ffdhclient: Sending on LPF/wlan0/aa:bb:cc:dd:ee:ffdhclient: Sending on Socket/fallbackdhclient: DHCPRELEASE on wlan0 to 192.168.0.1 port 67dhclient: send_packet: Network is unreachabledhclient: send_packet: please consult README file regarding broadcast address.dhclient: Internet Systems Consortium DHCP Client 4.2.2dhclient: Copyright 2004-2011 Internet Systems Consortium.dhclient: All rights reserved.dhclient: For info, please visit https://www.isc.org/software/dhcp/dhclient: dhclient: Listening on LPF/wlan0/aa:bb:cc:dd:ee:ffdhclient: Sending on LPF/wlan0/aa:bb:cc:dd:ee:ffdhclient: Sending on Socket/fallbackdhclient: DHCPRELEASE on wlan0 to 192.168.0.1 port 67dhclient: send_packet: Network is unreachabledhclient: send_packet: please consult README file regarding broadcast address.dhclient: Internet Systems Consortium DHCP Client 4.2.2dhclient: Copyright 2004-2011 Internet Systems Consortium.dhclient: All rights reserved.dhclient: For info, please visit https://www.isc.org/software/dhcp/dhclient: dhclient: Listening on LPF/wlan0/aa:bb:cc:dd:ee:ffdhclient: Sending on LPF/wlan0/aa:bb:cc:dd:ee:ffdhclient: Sending on Socket/fallbackdhclient: DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7dhclient: DHCPREQUEST on wlan0 to 255.255.255.255 port 67dhclient: DHCPOFFER from 192.168.0.1dhclient: DHCPACK from 192.168.0.1dhclient: bound to 192.168.0.16 -- renewal in 36865 seconds.The important information is mixed with garbage. 
This is an issue for me because I am forwarding important logs (including dhclient), and having them displayed in the background. And since dhclient often renews the lease, the log can get quite long pretty quickly. I then have to scroll up to see other logs.Is it possible to hide the garbage, and only keep the relevant information? | dhclient logs flooded with useless information | debian;logs;dhclient | null |
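Short of patching dhclient, the noise can be stripped where the logs are collected; a sketch with grep -vE (the pattern list is illustrative, extend it to taste):

```shell
# Drop dhclient's banner/boilerplate lines; keep the lease events.
filter_dhclient() {
  grep -vE 'Internet Systems Consortium|Copyright|All rights reserved|please visit|consult README|^dhclient: *$'
}

printf '%s\n' \
  'dhclient: Internet Systems Consortium DHCP Client 4.2.2' \
  'dhclient: Copyright 2004-2011 Internet Systems Consortium.' \
  'dhclient: DHCPACK from 192.168.0.1' \
  'dhclient: bound to 192.168.0.16 -- renewal in 36865 seconds.' \
  | filter_dhclient
# dhclient: DHCPACK from 192.168.0.1
# dhclient: bound to 192.168.0.16 -- renewal in 36865 seconds.
```

With rsyslog, the same idea can live in a filter rule before forwarding, so only the DHCP* and bound/renewal lines travel on.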
_cstheory.20050 | $\underline{\mathsf{EQUAL\mbox{ }k-COMPLEMENTARY\mbox{ }SUBSET\mbox{ }SUM(EkCSS)} }$Problem:Input: $a_1,\dots,a_n,b\in \mathbb Z$, with distinct $a_i$ and $k\in\Bbb Z^+$.Output: $k$ $\mbox{ }\underline{not\mbox{ }necessarily\mbox{ }disjoint}$ subsets of $a_i$s of sizes $n_1,\dots,n_k$ satisfying $\sum_{i=1}^kn_i=n$ such that each subset sums to $b$. (1) Is $\mathsf{EkCSS}$ $NP$-complete?(2) If we replace requiring $k$ subsets by requiring $\lceil\log^cn\rceil$ subsets for some fixed $c\in\Bbb R^+$ does the problem remain NP complete?(3) Is $\mathsf{EkCSS-Diff}\mbox{ }$ $NP$-complete where $\mathsf{EkCSS-Diff}$ is same as $\mathsf{EkCSS}$ but with added condition $n_i\neq n_j\forall i\neq j$ (different subset sizes)? | Multiple subset sum where subsets have complementary cardinality | np complete;subset sum | This is a reduction (attempt :-) from this slight variant of SUBSET SUM (which should be NPC) to prove that E2CSS is NP-complete:Given integers $A = a_1,a_2,...,a_n\;; a_i > 0$ (with the additional constraint that n is even) and a target sum $b$. Does exist $X \subseteq A$ such that $|X|=2m, m \geq 1$ (i.e. $|X|$ is even and greater than or equal to 2) and $sum(X)=b$? Reduction: suppose that $2^k > b+\sum_{i=1}^n |a_i|$; then we expand $A$ adding a dummy solution$A' = A \cup \{ -2^k, 2^k + b\}$If $X$ is a solution for the original problem, then $X, Y= \{ -2^k, 2^k + b\}$ are two distinct solutions for $A'$ and target sum $b$. Now we can further expand $A'$ to $|A''|$ adding padding pairs of integers that do not affect the sums but can be used to pad $X$ and $Y$ up to $|X|+|Y|=|A''|$. 
We have $|A'|=n+2$, so we add $n/2-1$ pairs $\{2^{k+i},-2^{k+i}\}$, $i=1..n/2-1$:
$A'' = \{a_1,...,a_n,-2^k, 2^k + b, 2^{k+1},-2^{k+1},...,2^{k+n/2-1},-2^{k+n/2-1}\}$, $|A''|=2n$
Using all the $n/2-1$ padding pairs we can build a set $Z$ such that $sum(Z)=0$ and $|Z|=2(n/2-1)=n-2$. We can notice that using the dummy solution and the padding pairs we can build a set $Y'=Y \cup Z$, $sum(Y')=b$, $|Y'|=n$ (which is half the size of $A''$). So in order to get another different solution we must use two or more of the $a_i$ plus some padding pairs. In the figure, a binary expansion of the elements of $A''$: The above reduction works for any fixed $k$: just start with the (NP-complete) subset sum variant in which $k$ divides $n$ and $|X|\geq k$, then add $k-1$ dummy solutions (each dummy solution made with $k$ elements) and $(n/k-1)$ padding k-tuples. If $k$ is part of the input (EkCSS), we have a simple generalization of the E2CSS problem and its NP-hardness follows immediately from the NP-completeness of E2CSS; the reduction is trivial: given an instance $A'',b$ of the E2CSS problem, just transform it to $A'',b,k=2$, which is a valid instance of the EkCSS problem. So EkCSS is NP-complete. (Very informally) The EkCSS-Diff should be NP-complete, too, and the reduction from EkCSS is: attach to every $a_i$ a group of elements $T_i$ of size $m_i=(i+1)*n$:
$T_i= \{ a_i+1*2^u, 2*2^u, 3*2^u, ..., (m_i-1)2^u, -2^u\sum_{j=1}^{m_i-1}j \}$, $|T_i|=(i+1)*n$
The first element of $T_i$, which replaces $a_i$, contains two detached bit components: the original $a_i$ and $1*2^u$; but the only way to clear $1*2^u$ is to include the other elements of $T_i$ and the final one, which is the only negative element (it clears all the previous $2^u$ components). Every $T_i$ has a distinct value for $u$ (chosen in such a way that the sums don't interfere with each other and with the sum of the original $a_i$s). |
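The E2CSS construction can be sanity-checked by brute force on a tiny instance; a sketch (the instance A = {1,2,3,4}, b = 5 is arbitrary and purely illustrative):

```python
from itertools import combinations

# Build A'' for A = {1,2,3,4}, b = 5: pick 2^k > b + sum(|a_i|) = 15,
# so 2^k = 16; add the dummy solution {-16, 16+b} and n/2-1 = 1 padding pair.
A, b = [1, 2, 3, 4], 5
P = 16                                    # 2^k
A2 = A + [-P, P + b] + [2 * P, -2 * P]    # |A''| = 2n = 8

# Enumerate all subsets of A'' that sum to b.
sols = [set(c) for r in range(1, len(A2) + 1)
               for c in combinations(A2, r) if sum(c) == b]

print({-P, P + b} in sols)             # True: the planted dummy solution
print(any(s <= set(A) for s in sols))  # True: e.g. {1, 4} or {2, 3}
```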
_computerscience.4858 | I'm wondering what metrics I could use to decide upon the visibility of a 3D object in VR or other 3D applications and what advantages each has. | What metrics are used for deciding if a 3D object is visible? | 3d | null |
_unix.322996 | On running the command ssh-add mykey.ppk, it asks for a passphrase:

Enter passphrase for mykey.ppk:

But I can see that the key does not have any passphrase and is not encrypted:

$ head mykey.ppk
PuTTY-User-Key-File-2: ssh-rsa
Encryption: none
Comment: imported-openssh-key
Public-Lines: 6
AAAAB3NzaC1yc2EAAAADAQABAAABAQC8V+PLuklXrfFDZ9GNluXB/L8foOzaEp5sjwaOL1iAxCKDWWsfsmyj9MbhV5r4Z6VGo/0T

Simply pressing enter at the prompt does not work. How can I add this key to the agent?

PS: I've already heard the sermon on security practices, so you can save your breath :-) | How to add a phrase-less key to ssh agent? | ssh;ssh agent | ssh-agent does not support private keys in PPK format (PuTTY). You need to convert the key to an OpenSSH key using PuTTYgen to be able to add it to your ssh-agent.

Related question on RaspberryPi.

These steps are needed:

1. Load your private key into PuTTYgen
2. Go to Conversions -> Export OpenSSH key and export your key as mykey.key
3. Add your key to your agent using ssh-add mykey.key

On Linux, the equivalent puttygen command is:

puttygen mykey.ppk -o mykey.key -O private-openssh
_unix.264677 | I have a Linux farm in VMware Enterprise 5.5. The VMs are (mostly) 64-bit amd64 Debian Jessie servers with SysVinit and not systemd. The VMs have open-vm-tools installed.

I paravirtualized their Ethernet and disk controllers. Paravirtual drivers are ones where the virtualization platform does not have to emulate another device, such as an Intel E1000 NIC or an LSI Logic SAS SCSI adapter. These paravirtual drivers essentially cut the middleman out by ditching the emulation layer, which usually results in significant performance increases.

As lspci | egrep "PVSCSI|VMXNET" can show, Ethernet and disks are now paravirtualized:

3:00.0 Serial Attached SCSI controller: VMware PVSCSI SCSI Controller (rev 02)
0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

Doing cat on /proc/interrupts easily shows there are interrupts associated with them and with the functionalities paravirtualization depends on:

56: 6631557 0 PCI-MSI 1572864-edge vmw_pvscsi
57: 72647654 0 PCI-MSI 5767168-edge eth0-rxtx-0
58: 44570979 0 PCI-MSI 5767169-edge eth0-rxtx-1
59: 0 0 PCI-MSI 5767170-edge eth0-event-2
60: 1 0 PCI-MSI 129024-edge vmw_vmci
61: 0 0 PCI-MSI 129025-edge vmw_vmci

vmw_vmci: The Virtual Machine Communication Interface. It enables high-speed communication between host and guest in a virtual environment via the VMCI virtual device.

Monitoring a moderately busy SSL-enabled web front end with itop, it seems obvious these interrupts are fairly used:

INT NAME RATE MAX
 57 [ 0 0 ] 142 Ints/s (max: 264)
 58 [ 0 0 ] 155 Ints/s (max: 185)
 59 [ 0 0 ] 119 Ints/s (max: 419)
 60 [ 0 0 ] 133 Ints/s (max: 479)

I am quite sure irqbalance is not needed in VMs with CPU affinity, nor in single-core VMs. The two servers where we have CPU affinity manually configured do have special needs; in general cases, the literature says irqbalance is supposed to do a better job.

So my question is: when is irqbalance necessary to distribute interrupt load via the different CPUs for multi-CPU Linux VMs?
Note: I already consulted some papers and a related (dated) Server Fault post; they are not very clear about it. I also found an academic paper voicing similar concerns for Xen: vBalance: Using Interrupt Load Balance to Improve I/O Performance for SMP Virtual Machines | When is `irqbalance` needed in a Linux VM under VMware? | linux;debian;vmware;interrupt | null
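Whether interrupts are actually being spread across vCPUs (which is exactly what irqbalance influences) can be checked by tallying the per-CPU counters in /proc/interrupts. A minimal illustrative sketch, run here against a sample shaped like the output in the question; a heavy skew toward CPU0, as below, is the situation irqbalance is meant to address.

```python
def per_cpu_counts(interrupts_text, num_cpus=2):
    """Sum the per-CPU interrupt counters from /proc/interrupts-style text.
    Rows whose first field is not a numeric IRQ (the header, ERR, MIS, ...)
    are skipped."""
    totals = [0] * num_cpus
    for line in interrupts_text.strip().splitlines():
        fields = line.split()
        if not fields or not fields[0].rstrip(':').isdigit():
            continue
        counters = fields[1:1 + num_cpus]
        for cpu, value in enumerate(counters):
            totals[cpu] += int(value)
    return totals

sample = """\
           CPU0       CPU1
 56:    6631557          0   PCI-MSI 1572864-edge      vmw_pvscsi
 57:   72647654          0   PCI-MSI 5767168-edge      eth0-rxtx-0
 58:   44570979          0   PCI-MSI 5767169-edge      eth0-rxtx-1
 59:          0          0   PCI-MSI 5767170-edge      eth0-event-2
"""
print(per_cpu_counts(sample))   # [123850190, 0]: all activity on CPU0
```

On a live system the same function can be fed `open("/proc/interrupts").read()`, adjusting `num_cpus` to the machine.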
_softwareengineering.311869 | THE SCENARIO

I learned about basic database design concepts such as basic CRUD operations, referential integrity, relationships, etc., years ago. I've messed around with databases and used this knowledge in an unofficial capacity over the years while learning about C# and WPF.

Now I find myself at the beginning of my first official database design project. I am about to design and write a WPF application for my company. This application is to replace an old FoxPro 2.6 / Visual FoxPro conglomeration they are currently using.

The new WPF application will use the MVVM design pattern and a Repository pattern for data access to the SQL database back-end. I will be converting seven FoxPro 2.6 tables to a new SQL database design:

Customers - each customer can have multiple pieces of equipment
Equipment - each piece of equipment can have multiple test types
TestType1 - each test can have multiple years of test results
    testtype1 year1 test result
    testtype1 year2 test result
TestType2
    testtype2 year1 test result
    testtype2 year2 test result
TestType3
    testtype3 year1 test result
    testtype3 year2 test result
etc...

Right now, the FoxPro customers table has a CustomerID field as the identifier, and the field is repeated throughout ALL of the tables, including the TestType tables for the equipment, along with an equipnumber field.

PRIMARY KEYS / FOREIGN KEYS

I intend on creating primary keys in all my tables and setting up standard one-to-many relationships between the Customer and Equipment tables and between the Equipment and Test Type tables.

That is what I want to do, right?

Should I also store a Customer's primary ID in my TestType tables? I know that would make SOME queries easier, but is that proper design?

I have worked in a couple of shops and have seen their database designs. To me, it didn't look like they were setting up relationships and enforcing referential integrity using SQL.
I think they were doing all that in C# / VB code.

Should I set up things like cascading deletions, updates, etc. and just let SQL Server handle it?

Is that a maintenance nightmare?

ISDELETED FIELD - Should I or Shouldn't I

Initially I thought I'd want to incorporate an UNDELETE feature into the new program. But I'm not sure I want to do that. I was going to have IsDeleted and DateDeleted fields for all my table records, and have the WPF app update the IsDeleted field to true and enter a date when a record is deleted. Then, in the Admin utilities, have a real PURGE feature that can purge by Date Deleted.

Am I asking for trouble here?

In what scenarios is this a good idea? A bad idea?

METADATA - Store IDs or actual data?
And of course your SQL queries can do the same check.Do you need to keep a history of changes in this table? If so, the dateDeleted field is a good way to go. Otherwise, why not physically delete the record - simplicity is good :)As to the MetaData question, the answer depends on the business requirements. If there is a 1:1 relationship to the parent table, then there is little to be gained by creating a second table. If The relationship is 1:many you need a separate table, and if many:many you need two tables. I would question the name of this table - MetaData in an SQL database has the connotation of describing the database columns (e.g. NUMBER(12,0) or VARCHAR2(166)), so it is a good term to avoid. And a couple of notes on foreign keys:Make sure you create an index on all the foreign key fields. A normalised database is great, but most of your queries will involve joins, and indexes are required to keep performance reasonable.Think carefully about enforcing foreign key constraints in the database. This slows down performance, particularly when deleting data. Many shops turn them on in development databases and turn them off for production. Personally I leave them on in production unless there is a proven performance problem.Best of luck - this was a good question. |
_cstheory.21863 | How can I construct a sorting network for $k$ numbers?

My goal is to implement sorting networks in Java for $k$ in the range $[3,\hspace{-0.03 in}32]$. To be even more specific, I only want to sort integers.

I found some implementation in this article (pages 2-3), but I don't understand it.

I've been trying to convert this problem to SAT. I started with a simple non-optimal network: $[01, 12, \ldots, (n-1)n, 01, 12, \ldots, (n-2)(n-1), \ldots, 01, 12, 01]$ (source). The idea is to convert it to SAT, find the shortest equisatisfiable SAT formula, and convert it back to a network representation. The problem is that in the network, the order of comparisons is important, so I don't know how to convert it to SAT. It sounds like someone has already been trying to do something like this, but I don't understand it completely. Related question. | How can I construct sorting network for $k$ numbers | sorting | there is some research angle here dating at least to Knuth's Art of Computer Programming and presumably earlier in finding optimal sorting networks for low $n$. it's intractable to find optimal sorting networks for small $n$ but it has been done up to about $n=10$, e.g. as in this recent notable paper, also using SAT. details about how to reduce the problem to SAT are in the paper. basically a large SAT formula encoding is built that asserts "these boolean variables configure a circuit that sorts all inputs for size $n$". (the more nonresearch angle is to use existing sort algorithms or sorting network configurations as mentioned in the paper by Har-Peled you cite to generate the (nonoptimal) circuits; this is more like a CS/EE exercise.)

Optimal Sorting Networks
Daniel Bundala, Jakub Závodný
In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n <= 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work. |
_reverseengineering.2574 | I want to alter an ELF executable's function call and replace one of its parameters.

The executable calls the dlopen() function and passes RTLD_NOW as the flag parameter. I want to change it to RTLD_LAZY.

What's the easiest way to detect the exact place where this call is made, and to replace the parameter?

I have to do it on a production environment, so I only have the GNU toolchain: gcc, gdb, etc. | Changing parameter of function call in ELF executable | linux;elf | null
_unix.312420 | Is there any way to print a custom phrase or run a custom script upon login failure? | PAM login with custom phrases upon failed login | login;pam | null |
_unix.97511 | When I type, for example, unix.stackexchange.com followed by Enter in a terminal, I get the following error:

unix.stackexchange.com: command not found

This is OK and as I expected. But when I type http://unix.stackexchange.com, I get another error message:

bash: http://unix.stackexchange.com: No such file or directory

I am not asking why I get errors. I want to know why these are different and, eventually, which process/function handles them. | Different error messages when using different strings in terminal | bash;gnome terminal;error handling;stderr | As also ewhac pointed out, the error messages differ because the latter command line contains forward slashes (/), which causes your shell to interpret it as a file path.

Both errors originate from your shell, which in this case is bash (which is evident from the second error message).

More specifically, the first error originates from the execute_disk_command() function defined in execute_command.c in the bash-4.2 source code. The execute_disk_command() function calls search_for_command() defined in findcmd.c, which, in case the specified pathname does not contain forward slashes, searches $PATH for the pathname. In case the pathname contains forward slashes, search_for_command() does not perform this lookup. In case search_for_command() does not return a command, execute_disk_command() will fail with the "command not found" internal error.

The second error originates from the shell_execve() function, also defined in execute_command.c. At this point in your scenario, search_for_command() would have returned successfully because there would not have been any lookup needed, and execute_disk_command() has called shell_execve(), which in turn performs the execve() system call. This fails, because the file execve() attempts to execute does not exist, and execve() indicates this by setting errno appropriately.
When execve() fails, shell_execve() uses strerror() to report the corresponding file error message (No such file or directory) and exits the shell immediately on the error. |
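The slash rule described in the answer can be mimicked in a few lines; `resolve_command` is an illustrative simplification, not bash's actual code.

```python
import os

def resolve_command(name, path_dirs):
    """Mimic the lookup rule described above: a name containing '/' is
    treated as a file path; anything else is searched for in $PATH.
    Returning None corresponds to "No such file or directory" in the
    first case and to "command not found" in the second."""
    if "/" in name:
        return name if os.path.exists(name) else None
    for d in path_dirs:
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    return None

print(resolve_command("unix.stackexchange.com", ["/usr/bin", "/bin"]))        # None
print(resolve_command("http://unix.stackexchange.com", ["/usr/bin", "/bin"])) # None
```

Both lookups fail here, but through two different branches, which is why bash reports two different errors for them.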
_unix.103045 | The post "repeat command every x seconds" shows that watch is the utility that is useful for invoking a command at a fixed interval repeatedly.

Now, I have very long commands, and have used aliases to group them logically to get quicker output, like:

$ alias c1='grep checking for file1.log'
$ alias c2='grep validated file1.log'
$ (echo Checking: `c1`) && (echo Validated: `c2`)

The output of the 3rd command is like:

Checking: 100
Validated: 80

It is a long-running process, and I need to check the status of this process to get the mentioned counts. But invoking the above with watch gives an error:

$ watch '(echo Checking: `c1`) && (echo Validated: `c2`)'
c1 command not found

I can put the entire command in there and remove the aliases, but is there any other work-around to get the aliases working with the watch command?

Note: I did quickly go through the man page for watch, but didn't find any reference to alias specifically. | Use an alias with watch command | alias;watch | null
_webapps.60363 | We often have the case of two people overwriting each other's changes by writing on the same cell at the same time.Is there a way to make it so that cells can't be written by more than one person at a time? For instance, by having them locked for other editors when they are selected by a user. | Preventing concurrent editing of a cell in Google Spreadsheet? | google spreadsheets | null |
_webmaster.68754 | In Google Webmaster Tools -> Search Traffic -> Links to your site, I have a lot of backlinks which are not there anymore.

For example, Google Webmaster Tools shows that there is a link coming from http://example.com. When I check that domain, there is nothing there linking back to my site anymore.

Is there a way to remove these ghost backlinks? | Google Webmaster Tools shows backlinks which are not active anymore - how do I remove them? | google search console;backlinks | null
_unix.334198 | I have a file which contains...

CNN
111
XXX
ABC
111
XXX
ABC
111
BBC

...and I need to change the 111 to 999, but only as part of ABC\n111\nXXX:

CNN
111
XXX
ABC
999
XXX
ABC
111
BBC

I have tried this, but it changes 111 everywhere:

perl -i -pe '/ABC\n111\nXXX/ if s/111/999/g' FILE

Note: We need to compare multiple lines, as 111 might be in many other places. The file size is 227kb. | Using perl to change multi-line expression | perl | null
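A sketch of the multi-line replacement in Python, as an alternative to the perl approach: the lookarounds confine the change to a 111 line sitting between an ABC line and an XXX line.

```python
import re

text = "CNN\n111\nXXX\nABC\n111\nXXX\nABC\n111\nBBC\n"

# Replace 111 only when it sits between an ABC line and an XXX line.
# The lookbehind/lookahead keep the surrounding lines out of the match.
result = re.sub(r"(?<=ABC\n)111(?=\nXXX)", "999", text)
print(result)
```

At 227kb the whole file fits comfortably in memory, so `re.sub(..., open("FILE").read())` would work directly; the perl equivalent would presumably need slurp mode (e.g. `-0777`) so that `\n` can be matched at all.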
_unix.211515 | I am new to Linux and using Elementary OS. I am facing an issue with the file browser. It displays only files and directories inside the home directory. The file navigator (sidebar) is missing and I cannot access other drives. | My file navigator (sidebar) is hidden in Elementary OS | elementary os;file browser | null |
_webmaster.90223 | I created a blog and I want to purchase a second-level domain; that's to say, I want to move from domain.altervista.org to domain.com.

I would like to register that domain with the .com extension but it is already registered. By doing research on Whois I found out that the site was created in May 2010 and expires in May 2016, and it was updated in September 2015 (the owner put it on sale for 1000 dollars), but I wonder if the update means that the license has been extended or if it refers to something else.

In any case I'm considering waiting to have the .com extension available again for free.

So, given the dates provided by Whois, do they mean that the domain.com will be available again in May, or should I wait longer? | top-level domain registration: does updating imply a license extension (Whois report)? | domain registration;top level domains;whois;registration;licenses | null
_unix.224301 | I'm using Linux Mint 17.2 Rafaela - Cinnamon (64-bit) on an Acer Aspire v3-571g. I was at kernel 3.16.0-38 (3.16.7-ckt10), but updated to 3.19 trying to solve this problem, without it helping.

Moving the mouse and using the buttons below the touchpad works. Scrolling of any kind does not, even though the settings for it are on at Mouse and Touchpad -> Touchpad.

synclient -l
Couldn't find synaptics properties. No synaptics driver loaded?

xinput list
 Virtual core pointer id=2 [master pointer (3)]
 Virtual core XTEST pointer id=4 [slave pointer (2)]
 PS/2 Generic Mouse id=14 [slave pointer (2)]
 ..

sudo apt-get install xserver-xorg-input-synaptics
Reading package lists... Done
Building dependency tree
Reading state information... Done
xserver-xorg-input-synaptics is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

I've tried reinstalling the package to no avail. Any ideas? The touchpad is of type ELAN PS/2 Port Smart-Pad. | Couldn't find synaptics properties. No synaptics driver loaded? after reboot. Scrolling doesn't work | drivers;touchpad | null
_unix.186823 | I just installed nagios on CentOS 6.5; while creating the default auth. user:

htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

here is what I got:

htpasswd: cannot create file /usr/local/nagios/etc/htpasswd.users

The command was run as root, /usr/local/nagios/etc/ exists, and SELinux is enabled.

So what might be the problem? | htpasswd: cannot create file /etc/nagios/htpasswd.users | nagios | null
_softwareengineering.21926 | The stereotypical view is that a programmer can't do graphics very well, from what I've read and seen. However, I love programming (preferably OOP, PHP, C++, Objective-C) and can't deny the fact I have a unique taste in web design, and others have said I am doing well at it (CSS). I thought to myself "Hey, wait, I'm a programmer - how can I design well?". The question is: is it possible to be good at programming and designing? Does anyone here feel the same?

For the record: actual images I have created have been called "programmer art" several times before by friends. | Is it possible to be good at both programming and graphic design? | design;stereotypes | Well, why not? Lots of people have multiple talents.

But the amount of time that you devote to a particular skill does make a difference. Spending more time on one skill means you have to spend less time on another, and spending less time means being less competent.

For my part, I have spent the vast majority of my time on coding, not design. As such, I am a pretty good programmer, but have stick-figure design skills (although I do believe I know good design when I see it).
_webmaster.99660 | I have category pages that contain products. Some of the products have fixed prices and some of the products have price ranges.

Also, any product has a seller that must be inside Offer, right? So I must use the Offer type and I cannot use PriceSpecification. Am I wrong?

How can I use Microdata with this condition?

A product with a price range on my product list page:

<li itemscope itemtype=http://schema.org/Product >
<span itemprop=name >ProductName</span>
<div itemprop=offers itemscope itemtype=http://schema.org/Offer>
<ul>
 <li><meta itemprop=maxprice content=1000000 /><meta itemprop=minprice content=10000 /><meta itemprop=priceCurrency content=USD />from 10,000 to 1,000,000 USD</li>
</ul>
<div itemprop=seller itemscope itemtype=http://schema.org/Organization>
<ul><li><span itemprop=telephone >00188341534</span></li></ul>
</div>
 <div itemprop=address itemscope itemtype=http://schema.org/PostalAddress>
<span itemprop=addressLocality>Washington</span>
</div>

I need valid code that looks like this, but this code is not valid. | How to use price range inside Offer type? | ecommerce;schema.org;microdata | You can have a PriceSpecification for the Offer, by using the priceSpecification property:

<article itemscope itemtype=http://schema.org/Product>
 <div itemprop=offers itemscope itemtype=http://schema.org/Offer>
  <div itemprop=priceSpecification itemscope itemtype=http://schema.org/PriceSpecification>
  </div>
 </div>
</article>

And inside the PriceSpecification, you can use the minPrice and maxPrice properties. (Note that they are case-sensitive.)
_unix.79129 | Is there a way to get the bandwidth, delay, jitter, collision, error rate and loss rate of a certain link through the interface on a local machine?

Let's say my machine is connected to a network via two interfaces, one wireless and the other Ethernet. I want to compare the quality of these two links through these measurements.

Is there any way to get these measurements in the Linux kernel? (v. 3.5.0) | how to get network QoS statistics in linux kernel? | networking;linux kernel;qos | null
_reverseengineering.3021 | I am starring at the following which looks like base64 but not quite:$ curl -s 'http://cgp.compmed.ucdavis.edu/chapr/education/PATHOBIOLOGY%20OF%20THE%20MOUSE%20TIER%201A/MICROANATOMY/Images/EX02-0006-4.svs?XINFO' | xpath -q -e '//cdata'<cdata>/9j/2wBDAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4+JS5ESUM8SDc9Pjv/xAAfAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgv/xAC1EAACAQMDAgQDBQUEBAAAAX0BAgMABBEFEiExQQYTUWEHInEUMoGRoQgjQrHBFVLR8CQzYnKCCQoWFxgZGiUmJygpKjQ1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4eLj5OXm5+jp6vHy8/T19vf4+fr/2Q==</cdata><cdata>/9j/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8SEhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/xAAfAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgv/xAC1EAACAQMDAgQDBQUEBAAAAX0BAgMABBEFEiExQQYTUWEHInEUMoGRoQgjQrHBFVLR8CQzYnKCCQoWFxgZGiUmJygpKjQ1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4eLj5OXm5+jp6vHy8/T19vf4+fr/2Q==</cdata><cdata>/9j/2wBDAAMCAgICAgMCAgIDAwMDBAYEBAQEBAgGBgUGCQgKCgkICQkKDA8MCgsOCwkJDRENDg8QEBEQCgwSExIQEw8QEBD/xAAfAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgv/xAC1EAACAQMDAgQDBQUEBAAAAX0BAgMABBEFEiExQQYTUWEHInEUMoGRoQgjQrHBFVLR8CQzYnKCCQoWFxgZGiUmJygpKjQ1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4eLj5OXm5+jp6vHy8/T19vf4+fr/2Q==</cdata>However this does not appear to be proper base64 encoded stream:$ openssl enc -base64 -d <<< /9j/2wBDAAMCAgICAgMCAgIDAwMDBAYEBAQEBAgGBgUGCQgKCgkICQkKDA8MCgsOCwkJDRENDg8QEBEQCgwSExIQEw8QEBD/xAAfAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgv/xAC1EAACAQMDAgQDBQUEBAAAAX0BAgMABBEFEiExQQYTUWEHInEUMoGRoQgjQrHBFVLR8CQzYnKCCQoWFxgZGiUmJygpKjQ1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4eLj5OXm5+jp6vHy8/T19vf4+fr/2Q==openssl returns nothing as if it was invalid 
base64.What is the encoding used in this case ?EDIT:I was confused by the output. The encoded is actually a valid JPEG header, it does not contains no image, but it contains a valid JPEG Quantization Table (DQT) & Huffman Table(s) (DHT). | Invalid base64 encoding | decryption;encodings | When I looked into the URL, I can see that the base64 encoded data is having newline character.When you remove newline characters and present that as one single line to openssl, you need the -A option.The below will work fine.$openssl enc -A -d -base64 <<< /9j/2wBDAAMCAgICAgMCAgIDAwMDBAYEBAQEBAgGBgUGCQgKCgkICQkKDA8MCgsOCwkJDRENDg8QEBEQCgwSExIQEw8QEBD/xAAfAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgv/xAC1EAACAQMDAgQDBQUEBAAAAX0BAgMABBEFEiExQQYTUWEHInEUMoGRoQgjQrHBFVLR8CQzYnKCCQoWFxgZGiUmJygpKjQ1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4eLj5OXm5+jp6vHy8/T19vf4+fr/2Q==Or you can also use still a simple method as below$ echo /9j/2wBDAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4+JS5ESUM8SDc9Pjv/xAAfAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgv/xAC1EAACAQMDAgQDBQUEBAAAAX0BAgMABBEFEiExQQYTUWEHInEUMoGRoQgjQrHBFVLR8CQzYnKCCQoWFxgZGiUmJygpKjQ1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4eLj5OXm5+jp6vHy8/T19vf4+fr/2Q== | base64 --decodeWe can also do this with notepad++ using MIME tools plugin.This proves this is base64 encoded data. |
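Decoding the payload in Python confirms it is ordinary base64: the first bytes are the JPEG SOI marker followed by a DQT segment. Like `openssl -A`, Python's `b64decode` handles the data once newlines are out of the way; in fact, with its default settings it simply discards non-alphabet characters such as newlines before decoding.

```python
import base64

# First few characters of the payload from the question.
prefix = "/9j/2wBD"
decoded = base64.b64decode(prefix)
print(decoded.hex())   # ffd8ffdb0043: JPEG SOI marker, then a DQT segment header

# Embedded newlines are not a problem for b64decode: with validate=False
# (the default), non-alphabet characters are discarded before decoding.
assert base64.b64decode("/9j/\n2wBD") == decoded
```

So the "invalid" result from the first openssl invocation was purely about line handling, not about the encoding itself, which matches the accepted answer.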
_webmaster.12759 | I'm developing an application at the moment which shows local content based on location.Since the site is intended for local users, there won't be much need to change domain.For example: someone from London visiting london.mysite.com will receive only London-related content and would not necessarily be interested in anything on edinburgh.mysite.com.The whole subdomain vs. sub-directory thing has been haunting me though and I'm not sure if this approach is best or if we'd be better with mysite.com/edinburgh.Can any SEO gurus out there lend some advice? | Should I choose sub-directories over sub-domains in this case? | seo;subdomain;subdirectory | @Gavin Morrice: From a hosting and development point of view, sub-directories are so much easier and quicker to maintain (especially if you're not on a dedicated server). While trying to find the latest information on which method is the preferred or best one, the general sense I get is that (as far as Google is concerned), it really doesn't matter anymore. If you build it, they will come. It's about content; it has always been about content. If you can sort out relevance and structure, whether your articles about London are on london.mysite.com or mysite.com/london/, doesn't make much of a difference and tales of sub-domains diluting Google's PageRank from the main site remain tales and certainly don't seem to be true anymore. |
_unix.249622 | Every time that I try to install updates in mintupdate, I get the following popup. Does anybody know what could be the cause of this? I noticed that it mentions gzip: stdout: No space left on device, so I checked my SSD; however, I still have almost 100GB of free space, if that has any relation.

After removing several older kernels and installing a new one then rebooting, I was greeted by this scary looking screen:

picture taken of monitor | Linux mintupdate constantly failing with error | linux;linux mint;upgrade | The error you are having is because you have run out of space in the /boot partition to create an initrd file (hence the error "no space left on device").

If you have several kernels installed, try to delete alternative kernels before upgrading. However, bear in mind it is good practice to have one alternative kernel to boot lying around in case of a bad kernel upgrade.

If you only have one kernel, you can try (and I have done it more often than I care to admit) to delete the running kernel to install the new one.

Often this situation happens in servers that have been upgraded to new versions several times, as the kernel and associated files used to be smaller. In that case, add this machine to an upgrade list in the short/medium term.
_unix.335050 | I have a critical problem with Plesk.

Symptoms:

- Plesk panel inaccessible: 403 forbidden nginx
- FTP inaccessible
- plesk command not found (version 12.5.30)

System Details:

Ubuntu 14.04.5 LTS
Plesk Version:

System Logs:

/var/log/sw-cp-server/error_log
[error] 32368#0: *1 directory index of /opt/psa/admin/htdocs/ is forbidden, client: 1X.X.X.X, server: , request: GET / HTTP/1.1, host: xxxx.net

/var/log/plesk/install/plesk_12.5.30_installation.log
ERR [panel] MailListManager encounter an error: listmng failed: file does not exist or is not executable: /opt/psa/admin/bin/listmng

Description:

After an installation of a new PHP version (around 11:30) on Plesk I didn't notice any problem, but since around 12:25 neither the Plesk panel nor FTP was accessible.

The file /root/.autoinstaller/microupdates.xml shows that an update occurred at 12:18, but I am not responsible for it; maybe it's an automatic update?

I tried to reinstall Plesk with this command:

/usr/local/psa/admin/sbin/autoinstaller --select-release-current --reinstall-patch --install-component base

but the result was "You already have the latest version of product(s) and all the..... Installation will not continue."

The plesk command was not found to run auto repair!

I appreciate your help to resolve this critical problem; don't hesitate to ask me for further details. | Plesk panel inaccessible 403 forbidden nginx | plesk | null
_unix.302546 | Is there a possibility to configure a proxy (globally, or for a few processes) only for one domain (and its subdomains)?

What I'd like to achieve: all connections (HTTP/HTTPS, database connections, etc.) to the subdomains of dupa.xyz should go through a proxy; connections to other domains should be direct. I know that I can define all_proxy and a long exclusion list using no_proxy environment variables, but I don't feel that it is the right way. Is there maybe a third-party application which provides this functionality? | Proxy only for some domains | proxy;domain;whitelist | Ok, I've found a solution. In my case all subdomains of dupa.xyz are in the same subnetwork, so the easiest solution is to use IPTABLES + redsocks + an ssh tunnel to a host inside this subnetwork. In theory, it should be possible to redirect DNS traffic via TCP through redsocks, but I didn't try that. I've just added appropriate entries to the hosts file.
_unix.137391 | Anyone know of a non-line-based tool to binary search/replace strings in a somewhat memory-efficient way? See this question too.

I have a +2GB text file that I would like to process similarly to what this appears to do:

sed -e 's/>\n/>/g'

That means I want to remove all newlines that occur after a >, but not anywhere else, so that rules out tr -d.

This command (that I got from the answer to a similar question) fails with "couldn't re-allocate memory":

sed --unbuffered ':a;N;$!ba;s/>\n/>/g'

So, are there any other methods without resorting to C? I hate perl, but am willing to make an exception in this case :-)

I don't know for sure of any character that does not occur in the data, so temporarily replacing \n with another character is something I'd like to avoid if possible.

Any good ideas, anyone? | Replace string containing newline in huge file | text processing;sed;newlines | This really is trivial in Perl, you shouldn't hate it!

perl -i.bak -pe 's/>\n/>/' file

Explanation

-i : edit the file in place, and create a backup of the original called file.bak. If you don't want a backup, just use perl -i -pe instead.
-pe : read the input file line by line and print each line after applying the script given as -e.
s/>\n/>/ : the substitution, just like sed.

And here's an awk approach:

awk '{if(/>$/){printf "%s",$0}else{print}}' file2
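The same line-by-line idea can be sketched in Python as a generator, so memory use stays constant regardless of file size; the function name and demo data here are illustrative.

```python
def strip_newline_after_gt(lines):
    """Drop the trailing newline of every line that ends in '>', merging it
    with the following line. Streams lazily, so a 2 GB file is fine."""
    for line in lines:
        yield line[:-1] if line.endswith(">\n") else line

demo = ["title>\n", "body\n", "tail>\n", "end\n"]
print("".join(strip_newline_after_gt(demo)))
```

With real files, the generator can be fed a file object directly, writing each chunk out as it comes: `for chunk in strip_newline_after_gt(src): dst.write(chunk)`.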
_unix.338222 | I want to logout/delete from multiple iscsi targetsiscsiadm --mode node --logoutall=all this will delete all the connections present in the system. How do I delete multiple nodes in just one command.for example the ip addresses are: 10.100.0.101192.168.1.10110.100.0.111192.168.1.11110.100.0.121192.168.1.12110.100.0.131192.168.1.13110.100.0.141192.168.1.141 | How to logout from multiple iscsi targets | linux;storage | null |
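The question has no accepted answer; one common approach is simply to issue one `iscsiadm ... --logout` per portal rather than `--logoutall=all`. The sketch below only *builds and prints* the commands (it does not run them), and the `--mode node --portal ... --logout` flags are taken from the open-iscsi man page as I recall it — verify against your `iscsiadm` version before executing:

```python
import shlex

# Portal list copied from the question; extend with the remaining addresses.
PORTALS = ["10.100.0.101", "192.168.1.101", "10.100.0.111", "192.168.1.111"]

def logout_commands(portals):
    """Build one iscsiadm logout invocation per portal.

    Commands are returned as argument lists (suitable for subprocess.run);
    nothing is executed here."""
    return [["iscsiadm", "--mode", "node", "--portal", p, "--logout"]
            for p in portals]

for cmd in logout_commands(PORTALS):
    print(shlex.join(cmd))
```

Piping the printed lines through `sh` (with appropriate privileges) logs out of exactly the listed targets and leaves all others connected.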
_codereview.126078 | I just finished writing a program that captures temperature information for a patient. I'm looking for suggestions on how to improve the code that I have. Although what I have compiles and does what I want, I wonder if there is anything methodology I'm using inappropriately despite it still working.Some particular areas I worry about are redundant code and my use of the conversion and fever methods (you can ignore the clearly bad math. I just wanted any calculation until I determined functionality).Temperature.hpp:#ifndef Temperature_h#define Temperature_h#include <iostream>#include <string>using namespace std;class Temperature {private: string newPatient; float newTemp; char newDegree;public: enum TempMethod {oral, arm, leg, bum}; Temperature(); Temperature(string, float, char); ~Temperature(); string getPatient() const; float getTemp() const; char getDegree() const; float setFaren(); float setCels(); void hasFever();};#endif /* Temperature_h */Temperature.cpp:Temperature::Temperature(string patient, float temp, char degree){ newPatient = patient; newTemp = temp; newDegree = degree;}Temperature::~Temperature(){}string Temperature::getPatient() const{ return newPatient;}float Temperature::getTemp() const{ return newTemp;}char Temperature::getDegree() const{ return newDegree;}float Temperature::setFaren(){ float faren = newTemp+32; return faren;}float Temperature::setCels(){ float celsi = newTemp-32; return celsi;}void Temperature::hasFever(){ if(newTemp >= 100.4){ cout << Patient << newPatient << has a fever!!\n; }}main.cpp:#include <iostream>#include <string>#include Temperature.husing namespace std;int main() { string patient; float temp; float celsius; float farenheit; char degree; cout << Enter Patient Name: ; getline(cin, patient); cout << Enter the tempature: ; cin >> temp; cout << Enter degree method (F or C): ; cin >> degree; Temperature Temp_1(patient, temp, degree); cout << \nPatient Name: << Temp_1.getPatient() << endl << 
Temperature: << Temp_1.getTemp() << degrees << Temp_1.getDegree() << endl; if(Temp_1.getDegree() == 'F'){ celsius = Temp_1.setCels(); cout << Temperate in Celsius: << celsius << degrees Celsius << endl; } else if(Temp_1.getDegree() == 'C'){ farenheit = Temp_1.setFaren(); cout << Temperature in Farenheit: << farenheit << degrees Farenheit << endl; } Temp_1.hasFever(); return 0;} | Capturing temperature information for a patient | c++;object oriented | TechnicalitiesYour destructor is the same as the default destructor -- I wouldn't bother defining one. It makes me think that the class is going to actually do something in there, so it's strange when it doesn't.Your header doesn't use iostream, only your implementation does. This means that the #include should be in the cpp file. Try to keep headers as tightly included as possible.You're not mutating the patient variable so there's no need to make a copy. Instead, you should take it by const reference.Temperature(const std::string& patient, float temp, char degree);Instead of assigning your parameters, you should use an initializer list. Assinging means that they might be default constructed first then assigned, and this is unnecessary.Temperature::Temperature(const std::string& patient, float temp, char degree) : patient(patient), temp(temp), degree(degree){ }You did a good job of making your getters const, but getFahrenheit, getCelsius and hasFever should all also be const (they should also be renamed as this implies -- see the Style section).You shouldn't use using namespace std in header files.For little toy programs like this, it doesn't matter much, but it can be a good habit to always remember that any kind of IO can fail. In particular, if you want to indulge in some paranoia, you should check that your cin extraction was successful. After all, what if someone puts in a non-sense value when you're expecting a float. 
You don't particularly want your program to continue after that.Stylefloat Temperature::setFaren(){ float faren = newTemp+32; return faren;}This isn't setting anything. This is a getter. It should be named getFaren, or even better, getFahrenheit. To get a bit cliche-y for a second, code is read much more than it's written. Strive for clarity even if that means typing a few extra characters.Also, the variable is a bit superfluous. I'd go with a simple return newTemp + 32;.#ifndef Temperature_hInclude guards are traditionally SCREAMING_SNAKE_CASE:#ifndef TEMPERATURE_HTemperature(string, float, char);This is confusing as a consumer of the class. You should give these parameters names:Temperature(string patient, float temperature, char degree);You have some non-optimal namings. For example, newPatient, newTemp, and newDegree don't really make sense. Why are they new? Just call them patient, temp and degree. If you're worried about them clashing with the local variables in the constructor, you can just use this->patient. I would also consider calling patient patientName since patient makes me expect some kind of model rather than a simple string.I also am not a fan of the parameter name temp. That makes me assume it's some kind of temporary variable. temperature is clearer.degree also unfortunately has issues. Fahrenheit and Celsius aren't degrees. They're scales of temperatures. This parameter could instead be named scale.void Temperature::hasFever(){ if(newTemp >= 100.4){ cout << Patient << newPatient << has a fever!!\n; }}This shouldn't output, but should instead return a bool. The consumer of the class should decide what to do with this information. What if you had a list of Temperatures and you wanted to count how many instances were considered to have a fever. You would likely not want this to output each time a fever was found. 
Instead, you want the method to return a bool that you can then decide what you want to do with it.Also, you need to be aware of what scale you're using here. The threshold for a fever in Celsius is much different than the threshold in Fahrenheit.enum TempMethod {oral, arm, leg, bum};As this seems to be unused, I would remove it.If this code were going to live longer than 1 homework assignment or practice, I would expect the class and its methods to be documented.DesignI would decide on a canonical scale and use that internally. Then, your other scale(s) would simply convert in their getters. For example, you could always store Celsius and just convert to Fahrenheit when needed (this would of course mean that your constructor would need to convert to Celcius when Fahrenheit is provided).I would consider using an enum for your scale instead of a char. An enum would mean that a user could only provide correct values. It also more explicitly signals what is supported. For example, someone might try to pass in K as your code currently is, then your conversion methods might not support that. Even worse, someone could currently pass in some non-sense like a.If you do keep it a char, make sure to validate that it's one of the expected values. It's strange for Temperature to have a patient name. It means that you're not really modeling a temperature; you're modeling a patient/temperature combination. In other words, you've coupled your Temperature object to your patient. What if you wanted to model the temperature of a room? Or of a bucket of water? There's nothing about a temperature that inherently has a patient name.You should likely have some form of a Patient model that has a temperature (or something even more complex than this in the real world). More pragmatically though, this is a completely reasonable way to do it as long as you're aware that you've made this decision and the program isn't going to grow much. 
Do keep in mind though that this is a very real tradeoff, and if this program were to ever grow in complexity, you'd likely want to decouple these. |
_datascience.15189 | I'm trying to build a recommendation engine for an e-commerce site. By using the common recommendation approach, I'm assuming that each product I recommend has the same value, so all I need to do is optimize the conversion rate probably using a common recommendation algorithm, but when the product's price varies a lot, what I really need to optimize is the following formula for each user:Value of recomendation = (probability to convert) * (product price)The bigger problem than choosing the right algorithm and approach is choosing the right metric, so I could compare the different algorithms. For example, if you would like to only optimize the conversion rate, I would use the precision and recall or false/positive metrics.What metrics and approaches/algorithms are recommended in this case?Thanks | Recommendation/personalization algorithm conflict | machine learning;recommender system | This is actually slightly similar to the problem that insurance companies face except that it seems like your loss costs are known. Insurers have some probability of loss and then, given loss, the magnitude of the loss follows some distribution. The cost to the insurer is dependent on both and they tend to be inversely related (lower losses are more likely than higher ones.) In your case, the value is known so you don't need to predict it the way insurers need to predict losses so you could simply:Model the probability (phat)Multiply the predicted probability by the known value (score = phat * value)Recommend based on the resulting scoreInsurance companies typically do the same thing in calculating premiums except that they also need to model value. They sometimes model the two components jointly but typically they have separate models for frequency and severity and then just multiply them together to determine how much premium they should charge somebody. |
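The "score = phat * value" recipe from the answer is a one-liner to rank with; the example below sketches it with made-up catalog entries (names, probabilities, and prices are illustrative only):

```python
def rank_by_expected_value(candidates):
    """candidates: iterable of (item, p_convert, price) tuples.

    Ranks by expected value p_convert * price, highest first — the
    'Value of recommendation' formula from the question."""
    return sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

catalog = [("cheap-likely", 0.50, 10.0),   # EV = 5.0
           ("pricey-rare", 0.10, 100.0),   # EV = 10.0
           ("mid", 0.30, 20.0)]            # EV = 6.0
print([name for name, _, _ in rank_by_expected_value(catalog)])
# -> ['pricey-rare', 'mid', 'cheap-likely']
```

For the metric question, the same idea applies to evaluation: instead of precision@k, compare algorithms by expected (or realized) revenue over the top-k recommended items.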
_webapps.48686 | Can my non-Facebook friends like my page?Can I invite people to like my page who are not friends on my personal Facebook page? I don't want some people to be my friends, but I want them to like the page. | Getting non Facebook friends to like my page | facebook;facebook pages | null
_codereview.79391 | I've been learning Common Lisp lately and I've implemented ANFIS network based on Sugeno model I.Network layout and details can be read in these slides by Adriano Oliveira Cruz.I use sigmoid as the fuzzy set membership function of each input (in layer 1)\$\mu(x) = \sigma(x) = \frac{1}{(1 + e^{b * (x - a)})}\$T-norm is a simple product (layer 2)\$w = \prod_i^n\mu(x_i)\$then those results are normalized in layer 3\$\overline{w_i} = \frac{w_i}{\sum_j^rw_j}\$which are used in layer 4 as a consequent of a rule which then outputs:\$\overline{w}f = \overline{w}*(px + qy + r)\$Final 5th layer just sums all the \$r\$ rules' consequents:\$\sum_i^r\overline{w_i}f_i\$Parameters \$a, b, p, q, r\$ are optimized using (online) gradient descent using this (for input dimension being 2):\$\delta = (t - o)\$\$a_i^{(k+1)} \leftarrow a_i^{(k)} + \eta\delta\frac{\sum_{j \neq i}^r w_j(f_i - f_j)}{(\sum_j^rw_j)^2}\mu_i(y)b_i\mu_i(x)(1 - \mu_i(x))\$\$b_i^{(k+1)} \leftarrow b_i^{(k)} - \eta\delta\frac{\sum_{j \neq i}^r w_j(f_i - f_j)}{(\sum_j^rw_j)^2}\mu_i(y)(x - a_i)\mu_i(x)(1 - \mu_i(x))\$\$p_i^{(k+1)} \leftarrow p_i^{(k)} + \eta\delta\overline{w_i}x\$\$q_i^{(k+1)} \leftarrow q_i^{(k)} + \eta\delta\overline{w_i}y\$\$r_i^{(k+1)} \leftarrow r_i^{(k)} + \eta\delta\overline{w_i}\$where \$t\$ is expected and \$o\$ network output and \$k\$ is iteration. For batch gradient descent just add sum for all samples after \$\eta\$.Parameters in code are stored as 2 arrays. One array for premise parameters \$a, b\$ for each rule and input dimension. So if dimension of input is \$n\$ and there are \$r\$ rules array length is \$2*n*r\$.The other array is consequent parameters which are stored in this order: \$r, p, q\$ per rule and the length of array is \$3*r\$.Here is the implementation:(defclass anfis () ((rules :initarg :rules :reader rules :type (integer 1) :documentation Number of rules.) 
(input-dim :initarg :input-dim :reader input-dim :type (integer 1) :documentation Dimension of the input) (fuzzy-set :initarg :fuzzy-set :reader fuzzy-set :type (cons (function (sequence number) (double-float 0.0d0 1.0d0)) (integer 1)) :documentation Parametrized membership function.) (t-norm :initarg :t-norm :reader t-norm :type (function (double-float double-float) (double-float 0.0d0 1.0d0)) :documentation T-norm function.) (premise-params :initarg :premise-params :accessor premise-params :type (vector double-float) :documentation Vector of parameter values for fuzzy sets.) (consequent-params :initarg :consequent-params :accessor consequent-params :type (vector double-float) :documentation Vector of consequent parameter values.)))(defun random-vector (size random-fun) Crates a vector of given SIZE using provided generator RANDOM-FUN. (declare (type (integer 0) size) (type (function) random-fun)) (let ((vec (make-array size))) (dotimes (i size vec) (setf (elt vec i) (funcall random-fun)))))(defun make-anfis (&key input-dim rules fuzzy-set t-norm) Takes numbers of INPUT-DIM and RULES, cons of membership function and numberof parameters in FUZZY-SET and T-NORM function. (let* ((fuzzy-fun (car fuzzy-set)) (fuzzy-params (cdr fuzzy-set)) (premise-params (* input-dim rules fuzzy-params)) (consequent-params (* (1+ input-dim) rules))) (make-instance 'anfis :input-dim input-dim :rules rules :t-norm t-norm :fuzzy-set fuzzy-fun :premise-params (random-vector premise-params (lambda () (1- (random 2.0d0)))) :consequent-params (random-vector consequent-params (lambda () (1- (random 2.0d0)))))))(defun sigmoid (params x) Sigmoid function for argument X with sequence of parameters PARAMS. (declare (type (real) x)) (let ((a (elt params 0)) (b (elt params 1))) (/ 1 (1+ (exp (* b (- x a)))))))(defun output-premise-layer (anfis input) Filters the INPUT through given ANFIS network premise layer of each rule. 
(declare (type anfis anfis) (type list input)) (let* ((premise-params (premise-params anfis)) (input-dim (input-dim anfis)) (rules (rules anfis)) (fuzzy-params (/ (array-total-size premise-params) input-dim rules)) (fuzzy-fun (fuzzy-set anfis))) (loop for r from 0 below rules collecting (loop for i from 0 below input-dim for in in input for start = (+ (* r input-dim fuzzy-params) (* i fuzzy-params)) for params = (subseq premise-params start (+ start fuzzy-params)) collecting (funcall fuzzy-fun params in)))))(defun output-consequent-layer (anfis prev-output input) Filters the INPUT and PREV-OUTPUT of previous layer through given ANFISnetwork consequent layer of each rule. (declare (type anfis anfis) (type list prev-output input)) (let ((consequent-params (consequent-params anfis)) (param-len (1+ (input-dim anfis)))) (loop for out in prev-output for start from 0 by param-len for params = (subseq consequent-params start (+ start param-len)) collecting (* out (weighted-sum (cons 1 input) params)))))(defun weighted-sum (x w) Return summed pairs of elements between given sequences X and W. (reduce #'+ (map 'list #'* x w)))(defun output-t-norm-layer (anfis input) Filters given INPUT, received from premise layer, through t-norm layer ofgiven ANFIS network. (declare (type anfis anfis) (type list input)) (let ((t-norm (t-norm anfis))) (loop for in in input collect (reduce t-norm in))))(defun normalize (input) Performs mathematical vector normalization on given sequence. (let ((sum (reduce #'+ input))) (mapcar (lambda (in) (/ in sum)) input)))(defun output-anfis (anfis input) Filters the INPUT pair (input . output) through the given ANFIS network.Returns values of each layer in reverse (the first value is the final output). 
(declare (type anfis anfis) (type list input)) (check-type anfis anfis) (check-type input list) (let* ((layer1 (output-premise-layer anfis input)) (layer2 (output-t-norm-layer anfis layer1)) (layer3 (normalize layer2)) (layer4 (output-consequent-layer anfis layer3 input)) (layer5 (reduce #'+ layer4))) (values layer5 layer4 layer3 layer2 layer1)))(defun target-function (x y) Function of 2 arguments X and Y (which is being optimized via anfis network). (* (+ (* (+ x 2) (+ x 2)) (- (* (- y 1) (- y 1))) (* 5 x y) -2) (sin (/ x 5)) (sin (/ x 5))))(defun generate-samples (fun start end) Return pairs of input and output for given FUN of 2 arguments where eachinput dimension is generated from START to END. (loop for x from start upto end appending (loop for y from start upto end collecting (cons (list x y) (funcall fun x y)))))(defparameter *train-data* (generate-samples #'target-function -4 4))(defparameter *train-expected* (mapcar #'cdr *train-data*))(defun consequent-delta (input out ws-norm) Return the deltas for parameters p, q and r in anfis consequent layer basedon given INPUT pair (input . output), anfis layer 5 OUT and layer 3 WS-NORM. (declare (type list input ws-norm) (type real out)) (let* ((xs (cons 1 (car input))) (expected (cdr input)) (err (- expected out)) (param-size (* (length xs) (length ws-norm))) (deltas (make-array param-size))) (loop for w-norm in ws-norm and start = 0 then (+ start (length xs)) do (loop for x in xs and i = start then (1+ i) do (setf (elt deltas i) (* err w-norm x)))) deltas))(defun premise-delta (input out consequents ws-norm ws memberships premise-params) Calculate the deltas for parameters a and b in premises based on givenINPUT, every anfis layer output OUT, CONSEQUENTS, WS-NORM, WS and MEMBERSHIPSas well as PREMISE-PARAMS. 
(declare (type list input consequents ws-norm ws memberships) (type vector premise-params) (type real out)) (let* ((xs (car input)) (expected (cdr input)) (err (- expected out)) (param-size (length premise-params)) (deltas (make-array param-size))) (loop for ms in memberships and start = 0 then (+ start (* 2 (length xs))) and i = 0 then (1+ i) do (loop for x in xs and pari = start then (+ 2 pari) and j = 0 then (1+ j) do (progn (let* ((ai (elt premise-params pari)) (bi (elt premise-params (1+ pari))) (w-delta (w-delta ws ws-norm consequents i)) (m-delta (membership-delta ms j))) (setf (elt deltas pari) (* err w-delta m-delta bi)) (setf (elt deltas (1+ pari)) (* err w-delta m-delta (- ai x))))))) deltas))(defun w-delta (ws ws-norm consequents index) Take anfis layer 2 outputs WS, layer 3 WS-NORM, layer 4 CONSEQUENTS andINDEX. Returns sum of ws * (f-index - fs) divided with squared sum of ws. (declare (type list ws ws-norm consequents) (type (integer 0) index)) (let* ((fs (mapcar #'/ consequents ws-norm)) (fi (elt fs index)) (sum-ws (reduce #'+ ws)) (wd 0.0d0)) (loop for w in ws and f in fs and i = 0 then (1+ i) do (unless (= i index) (incf wd (* w (- fi f))))) (/ wd (* sum-ws sum-ws))))(defun membership-delta (memberships index) Calculates product of MEMBERSHIPS values for sample but also multiplieswith 1 - membership on given INDEX. (declare (type list memberships) (type (integer 0) index)) (let ((prod 1.0d0)) (loop for mem in memberships and i = 0 then (1+ i) do (if (= i index) (setf prod (* prod mem (- 1 mem))) (setf prod (* prod mem)))) prod))(defun batch-learning (anfis input iterations min-error eta) Perform batch gradient learning on given ANFIS instance, sequence of INPUTpairs (input . output) across number of ITERATIONS or until MIN-ERROR isreached. ETA determines learn rate. Returns modified anfis instance. 
(declare (type anfis anfis) (type list input) (type (integer 1) iterations) (type real min-error eta)) (check-type anfis anfis) (check-type input list) (check-type iterations (integer 1)) (check-type min-error real) (check-type eta real) (dotimes (iter iterations anfis) (let ((cons-delta (make-array (length (consequent-params anfis)) :initial-element 0.0d0)) (prem-delta (make-array (length (premise-params anfis)) :initial-element 0.0d0))) (dolist (in input) (multiple-value-bind (out layer4 layer3 layer2 layer1) (output-anfis anfis (car in)) (let* ((premise-params (premise-params anfis)) (cd (consequent-delta in out layer3)) (pd (premise-delta in out layer4 layer3 layer2 layer1 premise-params))) (map-into cons-delta #'+ cons-delta cd) (map-into prem-delta #'+ prem-delta pd)))) (map-into (consequent-params anfis) (lambda (w d) (+ w (* eta d))) (consequent-params anfis) cons-delta) (map-into (premise-params anfis) (lambda (w d) (+ w (* eta d))) (premise-params anfis) prem-delta) (let ((mse (mean-square-error anfis input))) (print mse) (when (<= mse min-error) (return anfis))))))(defun online-learning (anfis input epochs min-error eta) Perform online gradient learning for given ANFIS instance using sequenceof INPUT (input . output) across number of EPOCHS or until MIN-ERROR is reached.ETA determines learn rate. Returns modified anfis instace. 
(declare (type anfis anfis) (type list input) (type (integer 1) epochs) (type real min-error eta)) (check-type anfis anfis) (check-type input list) (check-type epochs (integer 1)) (check-type min-error real) (check-type eta real) (dotimes (iter epochs anfis) (dolist (in input) (multiple-value-bind (out layer4 layer3 layer2 layer1) (output-anfis anfis (car in)) (let* ((premise-params (premise-params anfis)) (cd (consequent-delta in out layer3)) (pd (premise-delta in out layer4 layer3 layer2 layer1 premise-params))) (map-into (consequent-params anfis) (lambda (w d) (+ w (* eta d))) (consequent-params anfis) cd) (map-into (premise-params anfis) (lambda (w d) (+ w (* eta d))) (premise-params anfis) pd)))) (let ((mse (mean-square-error anfis input))) (print mse) (when (<= mse min-error) (return anfis)))))(defun mean-square-error (anfis inputs) Returns mean square error for ANFIS instance over sequence of INPUTS whichcontains (input . output) pairs. (declare (type anfis anfis) (type list inputs)) (check-type anfis anfis) (check-type inputs list) (let ((outputs (mapcar (lambda (in) (output-anfis anfis (car in))) inputs)) (expected (mapcar #'cdr inputs))) (/ (reduce #'+ (mapcar (lambda (e o) (* (- e o) (- e o))) expected outputs)) (length expected))))(defun sample-errors (anfis inputs) Return pair of input and ANFIS instance output difference using INPUTSsequence of (input . output) pairs. (declare (type anfis anfis) (type list inputs)) (check-type anfis anfis) (check-type inputs list) (let ((outputs (mapcar (lambda (in) (output-anfis anfis (car in))) inputs))) (mapcar (lambda (in o) (cons (car in) (- (cdr in) o))) inputs outputs)))And my question or rather a request is for you to comment on the:code style (idiomatic Common Lisp suggestions)potential generalizations of code/anfis networkpotential places for macro definitions (perhaps a macro for both types of learning methods?) | ANFIS network based on Sugeno model I | beginner;lisp;common lisp;machine learning | null |
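For readers less fluent in Common Lisp, the five-layer forward pass above (sigmoid memberships, product t-norm, normalization, linear Sugeno consequents, sum) condenses to a few lines. This is a sketch of the same computation in Python for a two-input network — function and parameter names are mine, and it mirrors the post's sigmoid μ(x) = 1/(1 + e^{b(x−a)}):

```python
import math

def sigmoid(a, b, x):
    # Membership function from the post: 1 / (1 + exp(b * (x - a))).
    return 1.0 / (1.0 + math.exp(b * (x - a)))

def anfis_forward(x, y, premise, consequent):
    """One Sugeno-I forward pass.

    premise:    per rule, ((a_x, b_x), (a_y, b_y)) sigmoid parameters.
    consequent: per rule, (p, q, r) so that f = p*x + q*y + r."""
    # Layers 1–2: memberships combined with the product t-norm.
    ws = [sigmoid(ax, bx, x) * sigmoid(ay, by, y)
          for (ax, bx), (ay, by) in premise]
    total = sum(ws)
    wbar = [w / total for w in ws]                     # layer 3: normalize
    fs = [p * x + q * y + r for (p, q, r) in consequent]
    return sum(wb * f for wb, f in zip(wbar, fs))      # layers 4–5
```

With a single rule the normalized weight is 1, so the output is exactly that rule's linear consequent — a handy sanity check when debugging the Lisp version.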
_unix.331088 | So I asked earlier about why timidity was running on my system and got a good explanation that related to systemd.Now I am wondering why it is hogging one of my cores (my processor is an AMD Athlon(tm) II X4 640 Processor so it has 4 cores).If I run strace on the process I get the following (ran it for a few seconds, seems to be repeating so I stopped after that):ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x7fff43107780) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x3cd407d5) = 0poll([{fd=6, events=POLLIN}], 1, 0) = 0 (Timeout)ioctl(6, SNDRV_SEQ_IOCTL_GET_QUEUE_STATUS, 0x16c83e0) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x7fff43107780) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x3cd407d5) = 0poll([{fd=6, events=POLLIN}], 1, 0) = 0 (Timeout)ioctl(6, SNDRV_SEQ_IOCTL_GET_QUEUE_STATUS, 0x16c83e0) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x7fff43107780) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x3cd407d5) = 0poll([{fd=6, events=POLLIN}], 1, 0) = 0 (Timeout)ioctl(6, SNDRV_SEQ_IOCTL_GET_QUEUE_STATUS, 0x16c83e0) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x7fff43107780) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x3cd407d5) = 0poll([{fd=6, events=POLLIN}], 1, 0) = 0 (Timeout)ioctl(6, SNDRV_SEQ_IOCTL_GET_QUEUE_STATUS, 0x16c83e0) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x7fff43107780) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x3cd407d5) = 0poll([{fd=6, events=POLLIN}], 1, 0) = 0 (Timeout)ioctl(6, SNDRV_SEQ_IOCTL_GET_QUEUE_STATUS, 0x16c83e0) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x7fff43107780) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x3cd407d5) = 0poll([{fd=6, events=POLLIN}], 1, 0) = 0 (Timeout)ioctl(6, SNDRV_SEQ_IOCTL_GET_QUEUE_STATUS, 0x16c83e0) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x7fff43107780) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x3cd407d5) = 0poll([{fd=6, events=POLLIN}], 1, 0) = 0 (Timeout)ioctl(6, SNDRV_SEQ_IOCTL_GET_QUEUE_STATUS, 0x16c83e0) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x7fff43107780) = 0ioctl(4, SNDRV_PCM_IOCTL_HWSYNC, 0x3cd407d5) = 0poll([{fd=6, events=POLLIN}], 1, 0) = 0 (Timeout)ioctl(6, 
_ai.2912 | I've been experimenting with a simple tic-tac-toe game to learn neural network programming (MLP and CNNs) with good results. I train the networks on board positions and the best moves, and the network is able to learn and correctly predict the best moves to make when it encounters those board positions.
But the network is unable to discover newer patterns/features from existing ones. For example, let's say that the board position is below and the move is for the X player (AI):
O _ _
_ O _
_ _ _
The recommended move would be 8 (0-based indices) so that the opponent doesn't win; the resulting board would be:
O _ _
_ O _
_ _ X
If I train the network on the above enough times, the AI (MLP or CNN based) learns to play 8 when it encounters the above situation. But it doesn't recognize the below as variations (rotated and shifted, respectively, but slanted straight lines in general) of the same pattern and is not able to correctly pick 6 and 0, respectively -
_softwareengineering.240901 | Sometimes while I'm editing something I see some useless code added by other developers, probably due to habit.Editing this code will not make any difference, so is it appropriate?In my specific case I'm talking about Java private fields like this:private int aSimpleInt = 0;private boolean myBool = false;private MyObject obj = null;declaring the default values is redundant and useless.Should I remove them or just skip? | Is it appropriate to remove redundant code explicitly assigning the default values? | java;team;code reviews | Declaring values that would already have been assigned by the compiler is useless for the behaviour of the program. However, that isn't what you should be optimizing as a professional developer. Instead, you should maintain your code base in the state that best supports ongoing development. If code were only about its semantics here and now, you could think really hard once, slam out the necessary assembler code and never think about it again. But in the real world, requirement changes, fixes, maintenance, format changes etc. etc. keep coming one after another without end. It is crucial that your code base not only does what it's supposed to, but also remains in a state that lets you do the ongoing work that will be required.Spelling out things that would already be the default can be helpful for that, so it's not automatically a bad idea. After all, you don't pay the compiler by the number of characters it consumes. You should declare things when doing so makes the code more readable. In Java, global variables are automatically initialized, but local ones aren't; depending on the experience levels in your team (your current team and the expected future team!) it may be a good idea to spell things out so nobody ever has to think, even for a second, what value some variable will have at run-time, or it may not be. 
But It doesn't change anything in the compiled program is not a sufficient argument to call code useless. |
_unix.39945 | My user user1 is running a graphical X session with pulse configured per-user.I need to run a graphical program that uses audio with user2.If I do su user2; program program doesn't start and I get no audio neither videoIf I do gksu -u user2 program the video is working, but I get no audio.Why there are these problems? What is the right way to start a pulse application that outputs sound on the pulse of my user? What the right way to start an X/audio application from another session(for example an SSH shell)? | Pulseaudio/X permission other user/SSH | linux;x11;pulseaudio;authorization | null |
_softwareengineering.200681 | I have an opensource project currently under MIT license. I have received a request from a company to use my code for their commercial project without having to give any attribution or credit.To be honest, when I released the code, my sole intention was only to help a fellow programmer, and I didn't really think about if I was credited. Choosing the license was just one of the step I had to do to set up the project on codeplex.On one hand, I feel honored and appreciate that they actually bothered to ask, on the other hand, I felt if I just allowed them to do so without any cost may just destroy the spirit of open source.What are the typical things I or other code owners can do or request from the company to make it a fair trade? Should I even allow it?I am thinking of asking the company to write a official letter of intent and I will sign against it just to make it more formal; and also to request a donation to project/charity of my choice or buy something on my wishlist as compensation (not very expensive). Will that be too much? | What to do when a company request permission to use open source code without attribution? | licensing;open source;mit license | Many open source applications have closed source licensing options for just this scenario. How much you charge them is dependent on:the size of the company (how much can they afford)what they're going to do with it (if they're stealing it or just using it)what they expect you to do (support/updates/extensions? what contractual level?)a ton of other things.Do you want to avoid tax implications of income? Do you hate the company? etc.In general, I would treat it as a business deal while knowing that you've got all the leverage. The mindset of I'd like to promote open source, so I'm charging you $5k (or whatever else high quote seems appropriate for that company for your project) - do you really not just want to give me attribution? |
_softwareengineering.156508 | I've used both Liferay and Alfresco trying to use them as the Document Management System for an intranet.I noticed the following:They use the file system and the database to store filesThey use a GUID to name the file on the filesystem and that GUID isused as an Id in the database.The GUID-named file is a binary fileThe GUID-named binary file stores all versions for a given fileThe path for the file in the DMS doesn't match the one in the filesystemThe URL makes reference to the GUID when a certain file is requestedWhat I want to know is why is this, and what would be the best way of doing it. Like how to would you create the binary file (zip?), and what parts would you keep in the binary file and what parts would you store in the database (meta-data, path?).I'm assuming some of the benefits of doing it like this. As having the same URL for a file, regardless of its current document path. And having only one file even if the file has changed names over time. | Why use binary files to stack up different versions on DMSs? | database;version control;file storage | Storing large binary blocks as files is typically more efficient then storing large BLOBs in a database. It depends.GUIDs have the advantage you can create one at random and use it without depending on some identity provider. Using a seed based ID generated in a DBMS would require you to first go to the database before writing a file to disk, with a GUID the order doesn't matter.Document revisions can perfectly fit an append model. It can keep adding revisions to the file without causing too much rewriting. It also allows for smart storage and just storing deltas going from revision to revision, similar to what a version control repository would do. 
Otherwise using compression can also make a significant difference compared to storing the revisions in their own file.It may also do this to avoid creating too many files on disk, which in turn may have a negative impact on performance. Copying around directories or making backups of directories with a huge amount of small files can be problematic slow.Perhaps you should not look at the files as files, it's just data. The GUID allows retrieval. Sticking it in the filename allows the file system to help out grabbing it.You could do without a database, you may import some work though that a DB already does for you. In a hybrid approach I would typically put stuff in the DB that I would query on (e.g Which documents are at path X?). This would avoid having to create my own indices and such around the file based repository. |
_unix.361581 | To prevent closing Firefox and other application by a mistaken Ctrl+Q I added a keyboard shortcut with this combination witch opens xclock.It works fine but unexpectedly the shortcut Ctrl+A ==> Select all doesn't work any more system wide.Why the universal shortcut Ctrl+A is implicitly affected by the create of the shortcut Ctrl+Q?How to prevent this? | Defining a new shortcut disables unexpectedly another shortcut | keyboard shortcuts;xfce | null |
_unix.118893 | I just installed fedora 20 on my second pc and its running pretty hot, I found a driver on the site http://support.amd.com/en-us/download/desktop?os=Linux%20x86_64.I unpacked it and I tried sudo sh amd-driver-installer-13.35.1005-x86.x86_64.run and I get this Supported adapter detected.Check if system has the tools required for installation.fglrx installation requires that the system have kernel headers for 3.7 release./lib/modules/3.11.10-301.fc20.i686+PAE/build/include/generated/uapi/linux/version.h cannot be found on this system.One or more tools required for installation cannot be found on the system. Install the required tools before installing the fglrx driver.Optionally, run the installer with --force option to install without the tools.Forcing install will disable AMD hardware acceleration and may make your system unstable. Not recommended.and I don't know how to solve it and I have already tried: yum install kernel-devel kernel-headers gcc -yhttp://www.amd.com/US/PRODUCTS/NOTEBOOK/GRAPHICS/7000M/7400M/Pages/radeon-7400m.aspx | Installing AMD Radeon HD 7400M series fedora 20 | drivers;fedora | null |
_unix.362909 | I'm running two lxc containers onto a VPS machine and being new in iptables area Im trying to create rules to forward external traffic into containers. The first one (192.168.1.2) is running an openvpn server while the second one (192.168.1.4) is running a web server.Until now i used only the openvpn lxc and had these iptables rules for forwarding the traffic:# Generated by iptables-save v1.4.21 on Fri Apr 28 16:07:58 2017*filter :INPUT ACCEPT [1189211:150089991] :FORWARD ACCEPT [902865:826112449] :OUTPUT ACCEPT [1324099:212970374] COMMIT# Completed on Fri Apr 28 16:07:58 2017# Generated by iptables-save v1.4.21 on Fri Apr 28 16:07:58 2017*nat :PREROUTING ACCEPT [36:1998] :INPUT ACCEPT [17:858] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0]-A PREROUTING -p udp -m udp --dport 1194 -j DNAT --to-destination 192.168.1.2:1194-A POSTROUTING -o eth0 -j MASQUERADE COMMIT# Completed on Fri Apr 28 16:07:58 2017Now, that I want to set up the web server, i added this iptables rule in order to forward http traffic to web server container.iptables -t nat -A PREROUTING -p tcp -m conntrack --ctstate NEW --dport 80 -j DNAT --to-destination 192.168.1.4:80The thing is that while the forwarding to port 80 seems to work (I can visit nginx's welcome page), openvpn clients doesn't have proper internet connection (although they can ping outside world). And by this, I mean that sites loads very slow and some others don't load at all ( It seems that http traffic is getting lost somewhere). 
If I remove the above rule everything in the openvpn client connection is working as expected but i loose the http server.P.S : The final rules are these# Generated by iptables-save v1.4.21 on Fri Apr 28 16:39:24 2017*filter:INPUT ACCEPT [1190228:150215153]:FORWARD ACCEPT [902877:826113261]:OUTPUT ACCEPT [1325229:213163664]COMMIT# Completed on Fri Apr 28 16:39:24 2017# Generated by iptables-save v1.4.21 on Fri Apr 28 16:39:24 2017*nat:PREROUTING ACCEPT [1:44]:INPUT ACCEPT [1:44]:OUTPUT ACCEPT [0:0]:POSTROUTING ACCEPT [0:0]-A PREROUTING -p udp -m udp --dport 1194 -j DNAT --to-destination 192.168.1.2:1194-A PREROUTING -p tcp -m conntrack --ctstate NEW -m tcp --dport 80 -j DNAT --to-destination 192.168.1.4:80-A POSTROUTING -o eth0 -j MASQUERADECOMMIT# Completed on Fri Apr 28 16:39:24 2017Also i though firstly to write the rules for the forwarding and then the ones for INPUT chain.Is there any way to use both of these protocols without mentioned conflicts?Thank you. | iptables rules for two different lxc containers | port forwarding;lxc;iptables redirect | null |
_softwareengineering.332864 | I sometimes hear about making web-site fully API based, meaning that even in browser the page is constructed based on API endpoint and practically nothing else.One of the benefits I see in this is that when you will make smartphone application you already have API that works and tested. And in case you need to change something you can change desktop and mobile app practically at the same time(with some tweaks maybe).Maybe I somehow misunderstood the idea, but I have some questions about the whole viability of it. Unfortunately I couldn't find any good article on this topic(which is strange, maybe it is because this is not what people do and I just misunderstood it completely).Just for example to be clear about the scope lets say we are building social-network-ish site.For mobile development API is the default way, but for desktop it seems to be overkill for me. I am used to Django framework and you can do page rendering quite easily - browser sends GET or POST request with parameters and you render a response page, can not be easier. But if I decide to use API on desktop browser then instead of simple request I need to construct JSON, send it to API endpoint, deserialize it and then render a response. And unlike with usual way, where I can generate very complex response in one step - with API endpoints I may actually need to use several endpoints to generate desktop webpage with lots of content, i.e. I end up sending multiple API requests. So by using API for everything on desktop version of the site I increase the number of requests to the server and also the amount of code needed. Yes, I will save time developing mobile app, but at the cost of increased bandwidth and decreased robustness, which is not alright in my opinion. Maybe I don't get something, would be nice if somebody can explain.Does it really make sense to do desktop version relying on API endpoints everywhere? 
I've seen people talk about it sometimes, but I have never seen any examples or big articles about it, which suggests to me that maybe this is indeed a stupid idea and I am wasting my time even thinking about it. Does anybody really make big desktop website where every little thing is tied to API endpoints? I may be not very experienced, but as far as I know on it is usually small API calls when they are helpful, but not the whole structure relies on API(unlike in mobile apps). So should I stick to usual development and leave API almost exclusively to mobile apps? | Fully API-based website - is it a good idea? | architecture;web development;api;mobile;django | It sounds like you are talking about a 'single page app' This term is used to refer to websites where all or most of the actions you take are accomplished via client side javascript AJAX calls, which retrieve json, convert that json to html and update the page rather than:Having the browser request a new html page which is returned and shown by the browser, discarding the old page.Although I agree there are downsides to this approach, it is now the de facto standard architecture for website design. The simple reason being that AJAX gives a clearly better user experience over page loads.Several javascript frameworks such as react and angular are available to make this approach easy to implement.So, most 'single page apps' are NOT all done on a single page. Admin pages, Settings etc are in my experience usually split off, in to what is effectively a separate 'single page app'. 
ie you have a Admin.html which also uses angular or whatever and has a set of apis to do its in page functionality.This makes it easier to separate out the concerns and address some problems that single page apps tend to have with authentication.The user experience is not really affected, because changing between these sections of the site is not something they do often and is seen as a 'big step' with the entire page changing.However, you should consider carefully where this approach is sensible. having some forms submit via standard http and some via your js framework would seem to me not to add any benefit. If you are using a framework, stick to it and use it for everything. If you are doing old skool html, stick to that. combining both approaches will simply mean you get the downsides of both. |
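The "retrieve json, convert that json to html" step from that answer is a small transformation. In a real single-page app the rendering happens in browser JavaScript; the sketch below uses Python purely for illustration, and the endpoint name and payload are invented:

```python
# Illustration only: what a JSON endpoint returns, and the client-side step
# that turns the JSON into an HTML fragment to swap into the page
# (done with JavaScript in a real browser app).
import json

def api_posts():
    """Stand-in for a JSON endpoint such as GET /api/posts."""
    return json.dumps([{"author": "alice", "text": "hello"},
                       {"author": "bob", "text": "hi"}])

def render_posts(payload: str) -> str:
    """Client-side step: JSON -> HTML fragment."""
    items = json.loads(payload)
    lis = "".join(f"<li><b>{p['author']}</b>: {p['text']}</li>" for p in items)
    return f"<ul>{lis}</ul>"

print(render_posts(api_posts()))
# <ul><li><b>alice</b>: hello</li><li><b>bob</b>: hi</li></ul>
```

The same `api_posts` payload can serve a mobile app unchanged, which is the reuse benefit raised in the question; only the rendering step differs per client.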