id | question | title | tags | accepted_answer
---|---|---|---|---|
_webmaster.22404 | Google: On Everything: Why does Google periodically shuffle/change the images, news, and videos sections ON THE SEARCH RESULT PAGE? And on what basis does it shuffle? I hope you have observed this: they shuffle the positions of images, news, and videos. | Google: On Everything : Why does google shuffle/change images, news, videos section periodically? | google;google search | If anyone could answer this question accurately, it would be valuable to online marketers. In reality it depends on what Google's algorithm determines is relevant out of all the data sources Google has available. Although it is unique to each search, you can make a number of observations: a search for a product will bring back data from Google Shopping feeds in the images section; a search with local intent (e.g. "pizza north london") will bring up local business data and Google Maps; if a search can be matched against a YouTube video, that video will appear prominently in the search results; and a search for a famous person will return a Google image and a Google News article in the top results. It really depends on the query and its intent; if you intend to optimize for competitive phrases, it is best to take this into account and try to get visibility for as much screen real estate as possible. That is even before you have taken paid adverts into account. A good article that illustrates this, and shows the impact of blended search on click-through rates using heatmaps, is available at SEOmoz here: http://www.seomoz.org/blog/eyetracking-google-serps |
_codereview.47957 | I have defined some DataTemplates that are similar. The templates are like this: <DataTemplate x:Key="DefaultCellTemplate"> <TextBlock Text="{Binding Value}"> <i:Interaction.Behaviors> <beh:AddErrorButtonAdornerToControlsBehavior DoOnButtonClick="{Binding RelativeSource={RelativeSource AncestorType={x:Type dxg:GridControl}}, Path=DataContext.ShowErrorDialogCommand}" FieldDescriptionId="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type dxg:GridCellContentPresenter}}, Path=Column.Tag}"/> </i:Interaction.Behaviors> </TextBlock></DataTemplate><DataTemplate x:Key="TimeCellTemplate"> <TextBlock Text="{Binding Value, Converter={StaticResource TimeConverter}}"> <i:Interaction.Behaviors> <beh:AddErrorButtonAdornerToControlsBehavior DoOnButtonClick="{Binding RelativeSource={RelativeSource AncestorType={x:Type dxg:GridControl}}, Path=DataContext.ShowErrorDialogCommand}" FieldDescriptionId="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type dxg:GridCellContentPresenter}}, Path=Column.Tag}"/> </i:Interaction.Behaviors> </TextBlock></DataTemplate><DataTemplate x:Key="StationCellTemplate"> <TextBlock Text="{Binding Value, Converter={StaticResource StationConverter}}"> <i:Interaction.Behaviors> <beh:AddErrorButtonAdornerToControlsBehavior DoOnButtonClick="{Binding RelativeSource={RelativeSource AncestorType={x:Type dxg:GridControl}}, Path=DataContext.ShowErrorDialogCommand}" FieldDescriptionId="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type dxg:GridCellContentPresenter}}, Path=Column.Tag}"/> </i:Interaction.Behaviors> </TextBlock></DataTemplate><DataTemplate x:Key="ProductionCategoryCellTemplate"> <TextBlock Text="{Binding Value, Converter={StaticResource ProductionCategoryConverter}}"> <i:Interaction.Behaviors> <beh:AddErrorButtonAdornerToControlsBehavior DoOnButtonClick="{Binding RelativeSource={RelativeSource AncestorType={x:Type dxg:GridControl}}, Path=DataContext.ShowErrorDialogCommand}" FieldDescriptionId="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type dxg:GridCellContentPresenter}}, Path=Column.Tag}"/> </i:Interaction.Behaviors> </TextBlock></DataTemplate>I don't know if I can define a template base. The base template should be a TextBlock with the behavior, and the derived templates should use a converter for the TextBlock value. Any idea? | Defining a DataTemplateBase | wpf;xaml | null |
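Since the only thing that differs between the four templates is the converter, one way to share the rest (a sketch, not verified against the DevExpress grid; CellTemplateFactory is a hypothetical helper) is to build the template in code:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;

public static class CellTemplateFactory
{
    // Builds the shared TextBlock cell template; only the converter varies.
    public static DataTemplate Create(IValueConverter converter = null)
    {
        var text = new FrameworkElementFactory(typeof(TextBlock));
        text.SetBinding(TextBlock.TextProperty,
            new Binding("Value") { Converter = converter });
        // Attaching the AddErrorButtonAdornerToControlsBehavior still has to
        // happen per element, e.g. from a Loaded handler, because
        // Interaction.Behaviors cannot be set directly from a style or factory.
        return new DataTemplate { VisualTree = text };
    }
}
```

The four resources then collapse to calls like CellTemplateFactory.Create() and CellTemplateFactory.Create(new TimeConverter()).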
_unix.83502 | I want to know which DNS servers are in effect when I run commands like nslookup, dig, host, ping, etc.The general answer is to cat /etc/resolv.conf, or to look at NetworkManager, but that's only going to show me the list of servers that I normally use. It won't show me any DNS servers that got pushed to me when I connect to a VPN.Is there a way to get an in-order list of DNS servers that commands like nslookup, dig, host, ping, etc will attempt to use? | List all DNS Servers, including those pushed by VPN | dns;vpn;networkmanager;openvpn;resolvconf | If you're using NetworkManager you can use the command line tool that's part of it, nmcli to get this list:$ nmcli dev list iface wlan0 | grep IP4IP4-SETTINGS.ADDRESS: 192.168.1.110IP4-SETTINGS.PREFIX: 24 (255.255.255.0)IP4-SETTINGS.GATEWAY: 192.168.1.1IP4-DNS1.DNS: 192.168.1.8IP4-DNS2.DNS: 192.168.1.5IP4-DNS3.DNS: 24.92.226.11You have to change the bit, wlan0 to whatever is your network interface. You can make it a bit more dynamic by using the iwgetid command:$ nmcli dev list iface $(iwgetid | awk '{print $1}') | grep IP4You can also use nm-tool to get a full report:$ nm-tool ... IPv4 Settings: Address: 192.168.1.110 Prefix: 24 (255.255.255.0) Gateway: 192.168.1.1 DNS: 192.168.1.8 DNS: 192.168.1.5 DNS: 24.92.226.11 |
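The nmcli dev list syntax shown in the answer is from older NetworkManager releases; on newer ones the subcommand was renamed, so a rough modern equivalent (interface name is an example) is:

```sh
$ nmcli device show wlan0 | grep IP4.DNS
IP4.DNS[1]:    192.168.1.8
IP4.DNS[2]:    192.168.1.5
```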
_unix.168908 | I have quite a strange problem using cron. I distilled it thus far:I created following simple bash script in /home/user1/cron_dir/cron.sh:#!/bin/bashecho SuccessAs user1 I created following crontab:*/1 * * * * sh /home/user1/cron_dir/cron.shThis gets installed and runs as expected (getting a Success-message from cron in my local mail). However, if I log out from my user1 account, wait a couple of minutes to perform the cron job, log back in and check my local mail, I get:sh: 0: Can't open /home/user1/cron_dir/cron.shEdit: Thanks to garethTheRed I realized the problem: my home directory is encrypted. Of course the directory is only accessible when I'm logged in. | Cron can't access my home directory when I'm logged out | shell script;permissions;cron | null |
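Given that diagnosis (an encrypted home directory that is only readable while its owner is logged in), a common workaround is to keep cron-run scripts outside the encrypted home; a sketch with example paths:

```sh
sudo mv /home/user1/cron_dir/cron.sh /usr/local/bin/cron.sh
# crontab entry now points outside the encrypted home:
# */1 * * * * sh /usr/local/bin/cron.sh
```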
_webmaster.621 | How much traffic is necessary for AdSense to make sense? | Adsense - is it worth setting up? | google;advertising;google adsense;traffic | I recommend listening to Podcast #64 of Stack Overflow, where they discuss their disappointment with AdSense. I also tried it on my personal blog, where I get only about 50 visitors per day on the high end. In that case it's definitely not worth it, and after a few months I ended up taking it down. From the Stack Overflow podcast mentioned above: "On the crushing disappointment of Google AdSense on Stack Overflow. The theory of AdSense, matching topical ads to the content on the page, is fantastic. The reality of the type of ads we actually saw on Stack Overflow is a terrible disappointment. They were barely relevant, and often quite ugly. Our hand-selected ads, targeted to our audience, perform 50 times better than AdSense. We believe that if Google could somehow tag a site with a specific audience topic (such as, say, programmers) it would do much better. If a site like Stack Overflow, which does almost a million pageviews a day, can't make enough to cover even one person at half time using Google AdSense, how does anyone make a living with AdSense? Does it even work? Joel says the only people making decent money with AdSense are scammers who specifically build websites to do nothing except target high pay-per-click keywords. I am not sure this is what Google had in mind. It is a stunning indictment of the power of the algorithm." |
_codereview.18366 | I am using C# and the .NET entity framework, and I'm also learning how to better use OOP concepts in my program. My code is all displayed below. I would like to ensure my logic is properly organized before I try to implement the next step in my program.My goal is twofold:Ensure my code is SOLID, OOP, reusable, etc.Enable my code to select via a menu, like the one that currently exists in program, which table it uses CRUD functions on. Because have 2 entities in my entity model, and the code currently uses only the Man entity. But I'm hoping I don't need additional copies of the CRUD functions for alternative entities. I'm hoping that someone in StackExchange knows how to implement CRUD methods that will take any entity I give it.I think that my code is organized in an MVC fashion, where Program is the controller, UserInterface is the view, and DataAccess is the Model.Program classclass Program{ private enum MenuStage { MENU_EXIT, MENU_CRUD //MENU_TableSelect } static private MenuStage _MyMenuStage; static void Main(string[] args) { DoUICycle(); Console.ReadLine(); } // ================================ // MENU DISPLAY AND PROCESSING // ================================ private static void DoUICycle() { string [] sCRUDMenu = new string[] {Exit, Create, Read, Update, Delete }; string [] sTableMenu = new string[] {Quit, Men, Locations}; string[] sMyMenuChoices; UserInterface MyUI = new UserInterface(); int iChoice = -1; _MyMenuStage = MenuStage.MENU_CRUD; while (_MyMenuStage != MenuStage.MENU_EXIT) { if (_MyMenuStage == MenuStage.MENU_CRUD) { sMyMenuChoices = sCRUDMenu; } else //if (MyStage == MenuStage.MENU_Tables) { sMyMenuChoices = sTableMenu; } while (iChoice != 0) { iChoice = MyUI.GetMenuInput(sMyMenuChoices); MenuSwitch(iChoice); } if (iChoice == 0) { _MyMenuStage = MenuStage.MENU_EXIT; } } } static private void MenuSwitch(int selection) { if (_MyMenuStage == MenuStage.MENU_CRUD) { switch (selection) { case 1: DoCreate(); break; case 2: DoRead(); break; case 3: DoUpdate(); break; case 4: DoDelete(); break; default: break; } } /* else if (_MyStage == MenuStage.MENU_TableSelect) { // should i really use enum? or IAmAnEntity interface variable switch (selection) { // not sure what to do here, somehow select a table to be used in CRUD functions? // do i need to call more CRUD functions for each new table I add? 
} }*/ } // ============================ // CRUD FUNCTIONS // ============================ static private void DoCreate() { int myID; bool bIsValidID; var dbEntities = new TestDatabaseEntities(); string sNewName; UserInterface MyUI = new UserInterface(); DataAccess MyDA = new DataAccess(); do { bIsValidID = MyUI.GetValidTypeInput<int>(Enter ID: , Error: ID must be an integer, int.TryParse, out myID); } while (!bIsValidID); sNewName = MyUI.GetInput<string>(Enter Name:, x => x.Trim()); MyDA.CreateMan(dbEntities, new Man() {ManID = myID, Name = sNewName }); SaveChanges(dbEntities); } static private void DoRead() { var dbEntities = new TestDatabaseEntities(); UserInterface MyUI = new UserInterface(); DataAccess MyDA = new DataAccess(); var query = from person in dbEntities.Men where true select person; MyDA.ReadMan(dbEntities, query); } static private void DoUpdate() { int myID; var dbEntities = new TestDatabaseEntities(); string sNewName = ; UserInterface MyUI = new UserInterface(); DataAccess MyDA = new DataAccess(); myID = MyUI.GetInput<int>(Enter ID to update: , int.Parse); sNewName = MyUI.GetInput<string>(Enter new name: , x => x.Trim()); var query = from person in dbEntities.Men where person.ManID == myID select person; MyDA.UpdateMan(dbEntities, query, new Man() { ManID = myID, Name = sNewName }); SaveChanges(dbEntities); } static private void DoDelete() { int myID; bool bValidInput; var dbEntities = new TestDatabaseEntities(); UserInterface MyUI = new UserInterface(); DataAccess MyDA = new DataAccess(); do { bValidInput = MyUI.GetValidTypeInput<int>(Enter ID to delete: , ID Invalid, please re-enter, int.TryParse, out myID); } while (!bValidInput); var Query = from person in dbEntities.Men where person.ManID == myID select person; MyDA.DeleteMan(dbEntities, Query); SaveChanges(dbEntities); } // ============================ // SAVECHANGES FUNCTION // ============================ private static void SaveChanges(TestDatabaseEntities dbEntities) { DataAccess MyDA = new DataAccess(); MyDA.TryDataBase(dbEntities, Changes saved successfully, () => dbEntities.SaveChanges()); }}UserInterface Classpublic class UserInterface{ /// <summary> /// Delegate that matches the signature of TryParse, method defined for all primitives. 
/// </summary> /// <typeparam name=T>Output type of This Delegate</typeparam> /// <param name=input>input for this Delegate to translate to type T</param> /// <param name=output>The translated variable to return via out parameter</param> /// <returns>Whether the Parse was successful or not, and output as output</returns> public delegate bool TryParse<T>(string input, out T output); /// <summary> /// Prompts user for input with given message, and converts input to type T /// </summary> /// <typeparam name=T>Value type to convert to, and return</typeparam> /// <param name=message>Message to be printed to console</param> /// <param name=transform>The type conversion function to use on user's input</param> /// <returns>Type T</returns> public T GetInput<T>(string message, Converter<string, T> transform) { DisplayPrompt(message); return transform(Console.ReadLine()); } /// <summary> /// Asks the user for valid input /// </summary> /// <typeparam name=T>The type of result to return as out parameter</typeparam> /// <param name=message>The message to prompt the user with</param> /// <param name=errorMessage>The message to Display to user with if input is invalid</param> /// <param name=TypeValidator>The TryParse function to use to test the input.</param> /// <returns>True if input is valid as per function given to TypeValidator, Result as type T</returns> public bool GetValidTypeInput<T>(string message, string errorMessage, TryParse<T> TypeValidator, out T result, int upper = -1, int lower = -1) { bool bIsValid = false; bool bTestValidRange = (upper != -1 && lower != -1); DisplayPrompt(message); bIsValid = TypeValidator(Console.ReadLine(), out result); if (!bIsValid) DisplayDBMessage(errorMessage); if (bTestValidRange && bIsValid) { bIsValid = isValidRange(int.Parse(result.ToString()), lower, upper); if (!bIsValid) DisplayDBMessage(Input out of valid range); } return bIsValid; } public bool isValidRange(int item, int Lower, int Upper) { return (Lower <= item && item <= Upper); } // ============================ // DISPLAY FUNCTIONS // ============================ public void DisplayDBMessage(string Msg) { Console.WriteLine(Msg); } public void DisplayPrompt(string Msg) { Console.Write(Msg); } public void DisplayRecord(string [] Msg) { if (Msg == null) return; foreach (string s in Msg) { Console.Write(s + ); } Console.Write(\n); } public void DisplayMenuItems(string[] items) { byte ChoiceIndex = 0; DisplayDivider('~'); Console.WriteLine(Select an action from menu); foreach (string s in items) { Console.WriteLine(ChoiceIndex++ + ) + s); } } public void DisplayDivider(char cCharacter = '|') { String sDivider = new String(cCharacter, 30); DisplayDBMessage(sDivider); } // ============================ // MENU PROMPTING FOR USER INPUT // ============================ public int GetMenuInput(string[] sChoices) { int selection; bool bValid = false; UserInterface MyUI = new UserInterface(); DisplayMenuItems(sChoices); do { bValid = (MyUI.GetValidTypeInput<int>(Enter> , Error: Numbers only> , int.TryParse, out selection, lower: 0, upper: sChoices.Length - 1)); } while (!bValid); return selection; }}DataAccess Classpublic class DataAccess{ // ============================ // CRUD FUNCTIONS for MAN TABLE // ============================ public bool CreateMan(TestDatabaseEntities dbEntities, Man M) { return TryDataBase(dbEntities, Man created successfully, () => { dbEntities.Men.Add(new Man { ManID = M.ManID, Name = M.Name }); }); } public bool UpdateMan(TestDatabaseEntities dbEntities, IQueryable<Man> query, Man 
man) { return TryDataBase(dbEntities, Man updated successfully, () => { foreach (Man M in query) { M.Name = man.Name; } }); } public bool DeleteMan(TestDatabaseEntities dbEntities, IQueryable myQuery) { return TryDataBase(dbEntities, Man deleted successfully, () => { foreach (Man M in myQuery) { dbEntities.Men.Remove(M); } }); } public bool ReadMan(TestDatabaseEntities dbEntities, IQueryable myQuery) { UserInterface MyUI = new UserInterface(); bool bSuccessful; bSuccessful = TryDataBase(dbEntities, Records read successfully, () => { MyUI.DisplayDivider(); foreach (Man m in myQuery) { MyUI.DisplayRecord(new string[] { m.ManID.ToString(), m.Name }); } MyUI.DisplayDivider(); }); return bSuccessful; } // ============================ // TRY FUNCTION // ============================ public bool TryDataBase(TestDatabaseEntities MyDBEntities, string SuccessMessage, Action MyDBAction) { UserInterface MyUI = new UserInterface(); try { MyDBAction(); MyUI.DisplayDBMessage(SuccessMessage); return true; } catch (Exception e) { MyUI.DisplayDBMessage(e.ToString()); return false; } }}Man Class (auto-generated)public partial class Man{ public Man() { this.Locations = new HashSet<Location>(); } public int ManID { get; set; } public string Name { get; set; } public virtual ICollection<Location> Locations { get; set; }}Location Class (auto-generated)public partial class Location{ public Location() { this.Men = new HashSet<Man>(); } public int PlaceID { get; set; } public string Place { get; set; } public virtual ICollection<Man> Men { get; set; }} | Preparing code for more versatile CRUD functions | c#;entity framework;crud | A few comments on your code as follows :Your UI code should not contain a SaveChanges methods as per SRP, it should be in EF itself.Your UserInterface is actually just a utility method. Encapsulate your validationand prompt logic different places. Although Utility method is not considered as good but it is okay to use in this case although their responsibility should be defined properly. Your whole architecture is like UI => BL =>EF so EF layer should not interact with UI Code. As TryDataBase methods is showing the errors in prompt. This is not good idea as this concerns to UI not EF or BL. Throw known exception in case of error do not handle it but log it inside BLL or EF.I would advise you to write test cases even before writing any line of code. By writing unit test properly you can ensure that you followed the SOLID pattern or design pattern. So do this re-factoring part using Unit Test. By EOD eventually you will start following all applicable design pattern.Create a layer between EF and BLL to improve the code reuse (put them in separate projects, this will help you keep the code in the correct place). |
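The "layer between EF and BLL" suggested at the end of that review could start as a generic repository, which also answers the poster's question about avoiding one set of CRUD methods per entity. A sketch, assuming TestDatabaseEntities derives from DbContext (the question doesn't show its base class):

```csharp
using System.Linq;
using System.Data.Entity;

public class Repository<T> where T : class
{
    private readonly DbContext context;

    public Repository(DbContext context) { this.context = context; }

    public void Create(T entity) { context.Set<T>().Add(entity); }
    public IQueryable<T> Read() { return context.Set<T>(); }
    public void Delete(T entity) { context.Set<T>().Remove(entity); }
    public void Save() { context.SaveChanges(); }
}
```

Instantiated as new Repository<Man>(dbEntities) or new Repository<Location>(dbEntities), both tables share one implementation instead of duplicated CreateMan/ReadMan-style methods.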
_unix.47433 | I have a USB FAT32 drive that is on /dev/sda2. I've mounted it as /media/bigdrive however, I get permission denied whenever I try to touch a file there as a non root user.When I run mount I can see this line:/dev/sda2 on /media/bigdrive type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=ascii,shortname=mixed,errors=remount-ro)My /etc/fstab has this line:/dev/sda2 /media/bigdrive vfat rw,user,exec,umask=000 0 0I've tried running sudo chmod 777 /media/bigdrive and sudo chmod 777 -R /media/bigdriveNeither one changes anything.Is there anything I'm missing?This is on a rasberry pi running raspbian BTW. | Mount USB drive (FAT32) so all users can write to it | debian;permissions;mount;fstab;raspberry pi | null |
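Worth noting for this record: FAT32 stores no Unix permission bits, so chmod on a mounted vfat filesystem is a no-op; access is fixed at mount time by the fmask/dmask/uid/gid options. The mount output above still shows fmask=0022,dmask=0022, which means the fstab options never took effect (the drive was likely mounted by something other than that fstab line). A line like this (uid/gid values are examples) followed by a remount usually fixes it:

```
/dev/sda2  /media/bigdrive  vfat  rw,user,uid=1000,gid=1000,umask=000  0  0
```

then `sudo umount /media/bigdrive && sudo mount /media/bigdrive` to apply the new options.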
_unix.304771 | I'm having an issue with my Ubuntu Server 16.04 installation. I have it running on a Zotac Z-Box CI23 Nano. It installed fine, but on its first boot all I had was a blank screen. I edited /etc/default/grub and changed GRUB_CMDLINE_LINUX_DEFAULT="nomodeset". This let me see the startup, but the text is all garbled white blocks (screenshot omitted). I first thought it might be a bad cable, so I switched cables with no change. I changed monitors with no effect. I switched to a TV using HDMI with no effect. Any ideas would be appreciated. | Garbled text on startup | boot;grub;startup;display | I finally got this figured out. At first I thought it was bad hardware, so I RMA'd the machine. The exact same thing was happening on the replacement. So I spent a few hours fiddling with grub settings. This is what ended up working for this machine, in /etc/default/grub: comment out GRUB_CMDLINE_LINUX_DEFAULT="splash quiet" entirely; uncomment GRUB_TERMINAL=console; uncomment GRUB_GFXMODE=640x480 (I set it to GRUB_GFXMODE=1280x800 because that's my monitor's native resolution). Save it, then sudo update-grub and reboot, and it displays as expected. Only the combination of all three changes seems to make it work for me, but YMMV. |
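Putting the three changes together, the relevant part of /etc/default/grub ends up looking like this (the resolution is whatever your monitor prefers):

```
#GRUB_CMDLINE_LINUX_DEFAULT="splash quiet"
GRUB_TERMINAL=console
GRUB_GFXMODE=1280x800
```

followed by `sudo update-grub` and a reboot.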
_unix.373277 | I am trying to install Fifth Browser (website) (github link) on Xubuntu 16.04.2 LTS. I was able to get all of the dependencies via the official distro repositories, using Synaptic for installation. One of them as listed on fifth's homepage is called liburlmatch (github link). It appears to be a simple library that lets you block URLs while using wildcards. I have installed urlmatch via:/git clone https://github.com/clbr/urlmatch.git and then/sudo checkinstall in a separate folder. This seemed to work flawlessly.When I do ./configure in the fifth folder the last few lines look like this:checking for fltk-config13... nochecking for fltk-config... fltk-configchecking for url_init in -lurlmatch... noconfigure: error: liburlmatch not foundYou can find the part of the configure file pertaining to urlmatch in the following pastebin to your convenience: codeblock from configure for liburlmatch. What am I doing wrong? Why doesn't the configure script recognise the urlmatcher library?Please consider in your answer that this is one of my first attempts to compile a programme like this, thanks. | Fifth Browser - how do I get the configure script to recognise a dependency? | ubuntu;compiling;browser | It looks like the issue is actually to do with how the configure script for fifth-5.0 constructs and runs the conftest for the urlmatch library.First, the errorchecking for url_init in -lurlmatch... noconfigure: error: liburlmatch not foundturns out to be somewhat misleading: if we look at the config.log we see that the conftest is actually failing to build because of an undefined reference to the uncompress function:configure:5511: checking for url_init in -lurlmatchconfigure:5546: g++ -o conftest -g -O2 -pthread -isystem /usr/include/cairo -isystem /usr/include/glib-2.0 -isystem /usr/lib/x86_64-linux-gnu/glib-2.0/include -isystem /usr/include/pixman-1 -isystem /usr/include/freetype2 -isystem /usr/include/libpng12 -isystem /usr/include/freetype2 -isystem /usr/include/cairo -isystem /usr/include/glib-2.0 -isystem /usr/lib/x86_64-linux-gnu/glib-2.0/include -isystem /usr/include/pixman-1 -isystem /usr/include/freetype2 -isystem /usr/include/libpng12 -fvisibility-inlines-hidden -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT -lz conftest.cpp -lurlmatch -Wl,-Bsymbolic-functions -lfltk_images -lfltk -lX11 >&5//usr/local/lib/liburlmatch.a(opti_init.o): In function `initbin':opti_init.c:(.text+0xd6): undefined reference to `uncompress'collect2: error: ld returned 1 exit statusconfigure:5552: $? = 1That's because uncompress is in libz - which is being linked before liburlmatch:. . . -lz conftest.cpp -lurlmatch -Wl,-Bsymbolic-functions -lfltk_images -lfltk -lX11 >&5failing to respect the necessary link order1 for the two libraries. We can trace that back further to the configure.ac file from which the configure script would have been generated: # Checks for libraries.OLD_LDFLAGS=[$LDFLAGS]LDFLAGS=[$LDFLAGS -lz]AC_CHECK_LIB([urlmatch], [url_init], [], AC_MSG_ERROR([liburlmatch not found]))LDFLAGS=[$OLD_LDFLAGS]i.e. rather than being added to the list of LIBS, -lz is added to the LDFLAGS (which is more typically used to specify additional library paths ahead of the LIBS).A quick'n'dirty workaround is to call ./configure with an explicit LIBS argument:./configure LIBS=-lzThis causes an extra -lz to be placed on the g++ command line after the urlmatch library (at the head of the other LIBS):. . . 
-lz conftest.cpp -lurlmatch -lz -Wl,-Bsymbolic-functions -lfltk_images -lfltk -lX11 >&5A more permanent solution might be to modify the configure.ac file to add -lz to LIBS instead of LDFLAGS, and then re-generate configure using autoconf (or autoreconf if necessary).Refs.:Why does the order of '-l' option in gcc matter? |
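The permanent fix that answer sketches would be a one-line change in fifth's configure.ac, adding -lz to LIBS instead of LDFLAGS so it lands after -lurlmatch on the link line (untested against the actual fifth source; AC_CHECK_LIB prepends -lurlmatch to LIBS on success, giving the right order):

```
# Checks for libraries.
LIBS="$LIBS -lz"
AC_CHECK_LIB([urlmatch], [url_init], [], AC_MSG_ERROR([liburlmatch not found]))
```

then regenerate configure with autoconf (or autoreconf).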
_unix.283865 | Does Forever always have its own log file? I have the following process running:data: uid command script forever pid id logfile uptimedata: [0] staff-intranet /usr/bin/node staff-intranet-server.js 24123 26733 /root/.forever/logfile.txt 0:0:10:47.967However, my config file looks like this:// Staff Intranet Configuration File{ uid: staff-intranet, append: true, script: staff-intranet-server.js, sourceDir: /path/to/file/staff-intranet, outFile: /path/to/file/staff-intranet/output.txt, errFile: /path/to/file/staff-intranet/errors.txt, logFile: /path/to/file/staff-intranet/logs.txt} But the logfile is still stored list as being stored in /root/.forever/logfile.txt. Why is it not being stored in /path/to/file/staff-intranet/logs.txt? | NPMJS Forever and Logfiles | node.js | null |
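If the config file isn't being picked up, the log locations can also be pinned explicitly with forever's standard command-line flags (paths follow the question; -a appends instead of failing when the log file already exists):

```sh
forever start -a \
  -l /path/to/file/staff-intranet/logs.txt \
  -o /path/to/file/staff-intranet/output.txt \
  -e /path/to/file/staff-intranet/errors.txt \
  staff-intranet-server.js
```

Without an explicit -l, forever falls back to a generated file under ~/.forever/, which matches the /root/.forever/logfile.txt shown in the process listing.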
_webmaster.81272 | We're trying to figure out if a certain type of web server architecture is better than our current web server. We need to A/B test these 2 boxes with live traffic to really get a sense of which one performs better on our internal KPIs. The hosts involved can't help with the A/B test. So naturally I'm looking to see if it's possible to roughly split the traffic 50:50 between the servers using PHP redirects and sub domains. For example:www.mydomain.com - (randomly redirects all requests to 1 of the following sub domains, maintaining url structure and query strings...)www1.mydomain.com (original webserver config)www2.mydomain.com (new webserver config)We would need to run this test for at least a week or two to generate good stats. What I'm not sure about is whether the search engines would see this as bad and effect our rankings. | Is it possible to A/B test web servers using a PHP redirect and sub domains without breaking all the SEO goodness? | seo;php;a b testing | null |
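A minimal sketch of the PHP splitter described (hypothetical file; uses a 302 so the split isn't treated as a permanent move):

```php
<?php
// ab_redirect.php: send ~50% of hits to each backend, keeping path + query
$target = (mt_rand(0, 1) === 0) ? 'www1.mydomain.com' : 'www2.mydomain.com';
header('Location: http://' . $target . $_SERVER['REQUEST_URI'], true, 302);
exit;
```

Google's published testing guidance at the time recommended exactly this shape for A/B tests: temporary (302) redirects plus rel="canonical" on the variant pages pointing back at www.mydomain.com, so ranking signals consolidate on the original URLs.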
_unix.245778 | My question is quite similar to another one here but not quite the same. I have a sequence of commands to create an ssl key/crt ect. And I want to be able to create an automated, default one. These are the commands (they came from this page):openssl genrsa -des3 -out server.key 2048openssl req -new -key server.key -out server.csrcp server.key server.key.orgopenssl rsa -in server.key.org -out server.keyopenssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crtIf each one only took one argument then it would be fine and I would do something like openssl genrsa -des3 -out server.key 2048 <<< arg1But one of them needs as many as 10 inputs which it asks for sequentially.I tried something like this but it didn't workopenssl genrsa -des3 -out server.key 2048 << fooarg1arg2fooEDIT: This approach is actually working I think but not for the arguments that are supposed to be passwords. Does anyone have a workaround for that?Could it make a difference that some of the arguments are passwords?What is the simplest way to go about this? | Automate textual input to a command from a bash script | bash;shell script | This works as expected...I've been piping heredocs to openssl to create certs for years (e.g. i wrote the script below sometime in 2002, and that's the new version of the script...no idea when i first wrote it).You need to provide ALL of the inputs that openssl expects, in the exact order that it expects them, even if some of those inputs are just a blank line (to accept the default).For example, here's (a slightly edited version of) my script to generate self-signed certs for postfix:#! /bin/shumask 077# $site is used for the subdir to hold the certs AND for# the certificate's Common Namesite=$1mkdir -p $siteumask 277REQ=$site/key.pemCERT=$site/cert.pemSERV=$site/server.pemFING=$site/cert.fingerprint# certificate details for herenow script (configurable)COUNTRY=AU # 2 letter country-codeSTATE=Victoria # state or province nameLOCALITY=Melbourne # Locality Name (e.g. city)ORGNAME=organisation name # Organization Name (eg, company)ORGUNIT= # Organizational Unit Name (eg. section)[email protected] # certificate's email address# optional extra detailsCHALLENGE= # challenge passwordCOMPANY= # company nameDAYS=-days 365# create the certificate requestcat <<__EOF__ | openssl req -new $DAYS -nodes -keyout $REQ -out $REQ$COUNTRY$STATE$LOCALITY$ORGNAME$ORGUNIT$site$EMAIL$CHALLENGE$COMPANY__EOF__# sign it - will ask for demoCA's passwordopenssl ca $DAYS -policy policy_anything -out $CERT -infiles $REQ# cert has to be readable by postfixchmod 644 $CERT# create server.pem for smtpd by concatenating the certificate (cert.pem) +# demoCA's public certificate + the host's private key (key.pem)cat $CERT ./demoCA/cacert.pem $REQ >$SERV# create fingerprint fileopenssl x509 -fingerprint -in $CERT -noout > $FINGNOTE: there is no error-checking here, just assumptions about the exact order of input required by openssl for this particular task. If you want error checking, use expect or perl's Expect.pm or python's pexpect. |
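The password prompts are the one part a heredoc can't feed, because openssl reads passphrases from the controlling terminal rather than stdin. openssl's own pass-phrase options sidestep that entirely; a sketch (the passphrase is a placeholder, the subject fields mirror the answer's script, and -subj suppresses the interactive req questions):

```sh
openssl genrsa -des3 -passout pass:secret123 -out server.key 2048
openssl req -new -key server.key -passin pass:secret123 -out server.csr \
  -subj "/C=AU/ST=Victoria/L=Melbourne/O=orgname/CN=example.com"
cp server.key server.key.org
openssl rsa -in server.key.org -passin pass:secret123 -out server.key
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
```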
_softwareengineering.283030 | Java documentation says it's strongly recommended to have them behaving consistently. But are there legitimate cases of java/c#/python/etc Object.equals() method behaving inconsistently with the method Comparable.compareTo()? | Legitimate cases of having .equals() behaving inconsistently with .compareTo()? | java;c#;object oriented;python;comparison | The reason you have two different methods is that they do two different things.The .equals method returns a boolean value indicating whether the object on which you call the method is equal to the object passed in as a parameter (for some definition of is equal to that is consistent with the nature of the object being compared). The .compareTo method returns a negative integer, zero, or a positive integer as this object is less than, equal to, or greater than the specified object. That makes it a useful method for sorting; it allows you to compare one instance with another for purposes of ordering.When the Java documentation says that these two methods must behave consistently, what they mean is that the .equals method must return true in exactly the same situations where the .compareTo method returns zero, and must return false in exactly the same situations where the .compareTo method returns a nonzero number.Is there any good reason to violate these rules? Generally no, for the same reasons that#define TRUE FALSEis a really bad idea. The only legitimate reason for inconsistent behavior is hinted at in the Java documentation itself: [if the] class has a natural ordering that is inconsistent with equals.To drive home the point, you can actually define .equals() in terms of compareTo(), thus guaranteeing consistent behavior. Consider this .equals() method from a Rational class which, after a few sanity checks, simply defines .equals as compareTo() == 0:public boolean equals(Object y) { if (y == null) return false; if (y.getClass() != this.getClass()) return false; Rational b = (Rational) y; return compareTo(b) == 0;} |
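A concrete standard-library instance of that documented exception (a "natural ordering that is inconsistent with equals") is java.math.BigDecimal, whose compareTo ignores scale while equals does not:

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");
        BigDecimal b = new BigDecimal("1.00");
        System.out.println(a.compareTo(b)); // 0: numerically equal
        System.out.println(a.equals(b));    // false: scales differ (1 vs 2)
    }
}
```

This is why a TreeSet<BigDecimal> and a HashSet<BigDecimal> can disagree on whether those two values are duplicates.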
_cs.69626 | Given a pseudo random number generator rand5() that generates a random integer in the set [0,1,2,3,4], how would someone use this to generate a function rand7() that outputs [0,1,2,3,4,5,6] with equal probability.I was thinking of approaching it this way: reduce the problem to generate number from 0 to 6 using a coin.We can represent [0,1,2,3,4,5,6] using bit-wise from 000 to 111.Selecting each bit from LSB to MSB can be done using tossing a coin.However, if we continue in this fashion from LSB to MSB then wecan get all bits as 111 which is decimal 7. So, in the case of MSBif we get 1 then discarding it and starting again should givethe result? Right?Reducing the problem from rand5() to coin can be done in this way: if the number is 0,2 then we can consider it as head and getting 1,3 can be considered as tail.Is my solution right? | Given a random number generator rand5() generate gen7() | algorithms;pseudo random generators | null |
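The question's bit-building plan is sound with one fix: mapping {0,2} to heads and {1,3} to tails must also reject a roll of 4, otherwise the coin is biased 3:2. The textbook alternative is rejection sampling on a 5x5 grid; a sketch in Python, with rand5() stubbed for illustration:

```python
import random

def rand5():
    return random.randint(0, 4)  # stand-in for the given generator

def rand7():
    while True:
        v = 5 * rand5() + rand5()  # uniform over 0..24
        if v < 21:                 # keep 21 = 3 * 7 outcomes, reject 21..24
            return v % 7
```

Each accepted draw is uniform over 0..6, and the expected number of rand5() calls per rand7() is 2 * 25/21, just under 2.4.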
_webapps.67047 | I have a column B which may have a value 1 or is empty.I want to calculate 2 fields: startFirstRange: the start rownumber of the first non-empty range, or -1 if not exist.endFirstRange: the end rownumber of the first non-empty range, or 0 if not exist.I've managed to get startFirstRange as follows: =ArrayFormula(MIN(IFERROR(FILTER(ROW(Sheet1!B2:B),NOT(ISBLANK(Sheet1!B2:B))),-1)))Naively, I tried the following to get endFirstRange: =ArrayFormula(MAX(FILTER(ROW(Sheet1!B:B),NOT(ISBLANK(Sheet1!B:B)))))However, this results in the end rownumber of the last non-empty range, instead of the end rownumber of the first non-empty range. How to proceed? | Google spreadsheet: calculate start and end rownumber of first non-empty range | google spreadsheets | My attempt (there may well be a shorter way):For startFirstRange, lets say invoked in A1:=INDEX(IFERROR(FILTER(ROW(Sheet1!B2:B),NOT(ISBLANK(Sheet1!B2:B))),-1),1)Then for endFirstRange, which will reference the result of startFirstRange in A1:=IF(A1=-1,0,IFERROR(INDEX(FILTER(ROW(Sheet1!B2:B)-1,ROW(Sheet1!B2:B)>A1,ISBLANK(Sheet1!B2:B)),1),ROWS(Sheet1!B:B))) |
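Worked example: with B2 blank, B3 = 1, B4 = 1, B5 blank, B6 = 1, the first formula returns 3 (the first non-empty row), and the second finds the first blank row after row 3 (row 5) and subtracts 1, giving endFirstRange = 4; the IFERROR fallback ROWS(Sheet1!B:B) covers the case where the first non-empty range runs all the way to the bottom of the sheet.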
_codereview.95458 | I have developed a class that utilises the session_set_saver_handler function so I can store sessions within my DB. The class works just as I would like. However, my only concern is the way I have approached the session timeout. Current the _read() function code looks like:/** * Read session function * @access public * @return the 'data' record providinf the PDO statement executed correctly. Otherwise, return false. */ public function _read($id) { $timeout = time() - $this->accessTime; $locked = false; $this->database->query('SELECT updatedTime, data FROM sessions WHERE session = :id AND locked = :locked'); $this->database->bind(':id', $id); $this->database->bind(':locked', $locked); if($this->database->execute()) { if($this->database->rowCount() > 0) { $row = $this->database->singleResult(); if($row['updatedTime'] < $timeout) { //Set the location of the user. $url = http:// . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI']; if($url != $this->redirectUrl) { header('Location: ' . $this->redirectUrl); return; } return ''; } return $row['data']; } } return ''; }When I originally create the script I hard coded the redirect URL. The problem was logout.php (the file the user is redirected to) contains the session class. Meaning I have a constant loop. So I approached it by implementing the following:if($row['updatedTime'] < $timeout) { //Set the location of the user. $url = http:// . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI']; if($url != $this->redirectUrl) { header('Location: ' . $this->redirectUrl); return; } return ''; }This seems to me more of a 'hack' than intelligent code. Did I approach this correct (probably not)? If not, what would be a more intelligent work around? | PHP session_set_saver_handler with session timeout | php;object oriented;session | Since the code for logout.php is not provided, I'll just make assumptions. Your code is pretty clean but I think it could be improved a bit. MiscellaneousBoth locked and timeout are defined but only used once. Since they are local variables, I consider that they could be removed and just replaced by their values.Still concerning the timeout, I prefer working with proper date and time types provided by MySQL rather than working with timestamps stored as integers. The main reason is that there are a lot of functions at your disposition, like for example NOW() and TIMESTAMPDIFF(), which could be used to rewrite your query easily SELECT TIMESTAMPDIFF(MINUTE, NOW(), updatedTime ) AS minutesSinceLastActivity, data FROM sessions WHERE session = :id AND locked = :locked Once you have that, your condition is pretty simple : if($row['minutesSinceLastActivity'] > $this->accessTime) . Which leads me to another point, the naming of accessTime is pretty bad, I would define it as something like minutesInactivityBeforeLogout.One last thing that I'm a bit unsure about (I know it works for C, don't remember for PHP), you could regroup the two conditions if($this->database->execute()) and if($this->database->rowCount() > 0) into one with the and operator. Since you'll use an and, if the execution of the query fails, execution the rest of the code in the if condition is unnecessary since it cannot change the result of the condition. Once again, this is something I know works in C, not quite sure about PHP.How it worksYou said you had a constant redirection loop with a previous version of your code. 
The main reason I could find for that is: if you detect a timeout, you don't do anything to your MySQL table, which means you'll always end up in the same condition, doing the same thing. That's why you should keep your structure and query as they are (you could put the timeout condition in the query, but doing so you would be unable to differentiate the case where the timeout is reached from the case where there is no record for the current id). The only remaining thing to do is, for example, to delete the record when the timeout condition is reached. |
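That deletion, spliced into the question's _read() using its own database wrapper and the renamed fields suggested above (a sketch):

```php
if ($row['minutesSinceLastActivity'] > $this->minutesInactivityBeforeLogout) {
    // Drop the stale session so the next request starts cleanly
    // instead of redirecting forever.
    $this->database->query('DELETE FROM sessions WHERE session = :id');
    $this->database->bind(':id', $id);
    $this->database->execute();
    return '';
}
```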
_codereview.85839 | I am trying to multithread a for loop, and this works so far. How can it be improved? Have I done anything incorrectly?The work is broken up into blocks and the block variable is the amount of objects to process per thread.Test.classpublic class Test extends MultiThreaded { public static void main(String args[]) { List<TestObject> values = new ArrayList<TestObject>(); for(int i = 1;i < 7;i++) values.add(new TestObject(String.valueOf(i))); int block = 2; int cursor = 0; List<TestObject> objectList = new ArrayList<TestObject>(); for(TestObject v : values) { cursor++; objectList.add(v); if(cursor < block) continue; final List<TestObject> objectListCopy = new ArrayList<TestObject>(); objectListCopy.addAll(objectList); Task task = new Task() { @Override public void run() { for(TestObject o : objectListCopy) o.print(); } }; startTask(task); objectList.clear(); cursor = 0; } while(MultiThreaded.processing()) { continue; } System.out.println(move on); } public static class TestObject { String line; public TestObject(String string) { this.line = string; } public void print() { System.out.println(line); } }}MultiThreaded.classpublic class MultiThreaded { public static HashSet<Task> tasks = new HashSet<Task>(); public static class Task extends Thread { public boolean running; public void run(){ return; } } public static void startTask(Task task) { task.start(); tasks.add(task); } public static Task getTask(int id) { for(Task task : tasks){ if(task.getId() == id) return task; } return null; } public static boolean processing() { boolean processing = false; for(Task task: tasks){ if(task.isAlive()) processing = true; } return processing; }}OUTPUT125634move on | Multithreaded for loop | java;multithreading | Multi-threaded concepts are complicated at times. If you really want to learn multi-threading, it is important to get it right. Some things can be extremely difficult to get right. I can highly recommend reading Java Concurrency in Practice.OverallIt is not that often I see multi-threaded code without a single synchronized keyword, or a single use of an atomic variable or concurrent data structure. I suggest you read up on all three of those to better understand what you will have to deal with sooner or later.I think that you are adding a lot of overhead with the way you are constructing your tasks, that it will remove any benefits you get by running it multi-threaded. Creating all the lists that you use simply takes more time than actually running the code.It is also important to consider that you are using System.out.println, this is a synchronized call, which means that only a single thread can only run it a time. This tends to be a big concurrency bottle-neck in a lot of multi-threaded code. As your task(s) seems to only be to use System.out.println, I think your code would be better of as single-threaded.Oops, missed some!In your current code, it does not run the print method for the last x items in the list. There already, it is broken.If you have 1024 items and use a block value of 100, 24 items will be missed. (Hint: What is the value of your cursor variable after your for-loop to start the tasks is finished?)Singleton mindsetThis code:public static HashSet<Task> tasks = new HashSet<Task>();By making it public static it seems like you are in the singleton mindset. Avoid this. This has no business being either public or static. Imagine if some code somewhere at some point would call tasks.clear(); - That could break a lot.This field is also a big memory leak. 
You are only adding tasks to it, you are not removing any tasks.Extending for no reasonYou are extending the MultiThreaded class, but all methods and fields in that class are static. Extending it has no real effect besides giving you quick-access to the methods and variables, by not having to write MultiThreaded. each time. This is a bad reason for extending a class.getTaskYour public static Task getTask(int id) method, although it is not used, suggests that you are using the wrong data-structure. You could use a Map<Integer, Task> which will make the getTask lookup much more efficient (from \$O(n)\$ to \$O(1)\$). As you are dealing with multi-threading, better make it a ConcurrentMap as well. You should however ask yourself if you really need this in the first place.Are we working? Are we working? Are we, are we, are we?This code:while(MultiThreaded.processing()) { continue; }Is a busy-loop. This code will run as quickly as possible and waste CPU power for you. Running this will essentially remove any benefits you get by using multi-threading in the first place. You should check into the wait-notify construct. This can be compared to three kids in the backseat of your car constantly asking you Are we there yet? Are we there yet? Are we, are we, are we?, I bet you can drive better without such a distraction. (No offense intended against kids)This busy-loop does not get better by the fact that your processing() method is an inefficient way to check if any task is still working. Use Atomic variables to keep track of the number of running tasks.The wheel...Java 8 provides the concept of parallel streams which significantly makes this easier.Your main method can be replaced with this:List<TestObject> values = new ArrayList<TestObject>();for (int i = 1; i < COUNT; i++) values.add(new TestObject(String.valueOf(i)));values.parallelStream().forEach(o -> o.print());System.out.println(move on); |
_unix.43646 | I still have two old ATA drives (one is only 8GB, one is somewhat defect) and I've been thinking about putting them into my PC and activating swap on them, too. What will be effect of this? Will my system spread data across the drives evenly so that swapping becomes faster? | Can I accelerate swap by using multiple harddrives? | linux;swap | Firstly, using a slow or defective hard drive for swap is not a good idea. It's like having really slow or buggy memory in a way.How your system spreads data across your swap partitions depends on the priority you give them in your /etc/fstabAs an example,/dev/hda5 none swap sw,pri=2 0 0/dev/hdb5 none swap sw,pri=1 0 0/dev/hdc6 none swap sw,pri=3 0 0Your system will use the partition with the highest priority first (in this case /dev/hdc6). Priorities go from 0 to 32767. You can assign the same priority to the different partitions and that will make your system use them equally (or spread the load across different drives). The main reason for this is that you want to use a faster (or less used but still fast) drive first, as it can have a major impact on your system.You can change your system's tendency to write to swap by setting swappiness. More info here. |
_unix.343301 | I use sendmail on Solaris. I want a secure configuration, so I use TLS 1.2 and RunAsUser configured with the user smmsp. Permissions are correct and TLS works fine. But when I send email it gives me this error: only super-user can use -l option. I solved it using chmod u+s mail.local, but honestly I dislike this solution. Does someone know a better solution for this problem? | An alternative solution for sendmail problem | sendmail | null |
_unix.61054 | From what I know, there are many ways to access the shell on Linux. So far the methods I know are: using a program such as Terminal or Konsole; using the shortcuts Ctrl+Alt+F1-F6; disabling X and booting straight into the command line; or SSHing in. What are the differences between these methods? Are there any more ways of accessing the shell on Linux? Apologies if this is a noob question :S | What's the difference between these ways of accessing the shell? | linux;shell;terminal;tty;konsole | null |
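Briefly, for the cases listed: a terminal emulator such as Terminal or Konsole runs your shell on a pseudo-terminal (/dev/pts/N); Ctrl+Alt+F1-F6 switch to virtual consoles, where the shell sits on /dev/tty1-6 (booting without X simply leaves you on those same consoles); and SSH allocates a pseudo-terminal on the remote machine. The shell itself is the same program in every case; only the terminal device differs, which you can check from inside any session:

```sh
$ tty
/dev/pts/0   # inside a terminal emulator or an SSH session
```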
_softwareengineering.95090 | I would like to know which convention (if any) you use for your PHP web application projects. | Directory structure of web application written in PHP | web development;php;web applications;coding standards | I would say it depends on your application structure. Is this multisite? Is there a heavy need in configuration? Are there a lot of external libraries?However, a good idea would be to get inspiration from existing frameworks (a better idea would be to use them), like symfony.Important thing: separate your public files (i.e, the files that can be downloaded from the browser: css, js, html, etc.) from your app internal files. Doing this, you can configure a virtualhost to secure your installation, by serving only public files directly. |
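A typical split along the lines that answer describes (directory names are conventions, and frameworks like symfony each have their own variant):

```
myapp/
    public/        # the only directory the virtual host serves
        index.php  # front controller
        css/
        js/
    src/           # application code
    config/
    templates/
    vendor/        # third-party libraries
```

with the virtual host's DocumentRoot pointed at public/ so internal files can't be downloaded directly.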
_unix.58091 | I'm trying to run some expect language, and grep/parse the output all in one script. I want to grep the output of and look for error (I should note that standard linux commands like awk, sed, grep, etc are not available on the remote VPlexcli machine)#!/bin/bashexpect - << EOF# connect to vplexclispawn vplexcli# Look for login promptexpect -re Name:# Send loginsend service\r# Look for password promptexpect -re Password:# Send passwordsend letmein123\rexpect -re VPlexcli:/> send ll /clusters/cluster-1/storage-elements/\rexpect -re VPlexcli:/> send exit\rEOFOutput looks like this:VPD83T3:6006016036c02c00e217465c0516e211 ok APM00121002844.SPA APM00121002844.SPB both 0x002e000000000000 implicit-explicitVPD83T3:6006016036c02c00e4dc0671f907e211 ok APM00121002844.SPA APM00121002844.SPB both 0x0010000000000000 implicit-explicitVPD83T3:6006016036c02c00ec79619bdd08e211 error APM00121002844.SPA APM00121002844.SPB none implicit-explicitVPD83T3:6006016036c02c00f0bfd3dedd08e211 error APM00121002844.SPA APM00121002844.SPB none implicit-explicit | grep output of expect script | bash;scripting;expect | null |
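Since grep only needs to run on the local side, one approach (a sketch, untested against a real VPlex) is to capture everything expect echoes from the spawned session, then filter after the heredoc ends:

```sh
#!/bin/bash
output=$(expect - << 'EOF'
spawn vplexcli
expect -re "Name:"       ; send "service\r"
expect -re "Password:"   ; send "letmein123\r"
expect -re "VPlexcli:/>" ; send "ll /clusters/cluster-1/storage-elements/\r"
expect -re "VPlexcli:/>" ; send "exit\r"
expect eof
EOF
)
printf '%s\n' "$output" | grep error
```

By default expect mirrors the spawned program's output to stdout (log_user is 1), so the whole remote listing is available to local grep even though the VPlexcli machine has no standard tools.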
_codereview.63863 | I have a catalogue of 56000 objects and I want to replace some columns with new values for instance replace NaN with -99 and 0 for specific columns (as you can see in the following code). Then I need to compute a value for some columns with a function which is called get_ebv_from_mapand subtract it from some other columns and so on.Using get_ebv_from_map function to estimate the values in the catalogue is taking too much time and I think what I did like using this loop is not very efficient.The function is given by:from astropy import units as ufrom astropy.coordinates import SkyCoordfrom scipy.ndimage.interpolation import map_coordinatesimport os.pathfrom collections import Iterableimport pyfits as fitsdef get_ebv_from_map(coordinates, interpolate=True, order=1): mapdir='/vol/Dust_Map/maps/' fname = os.path.join(mapdir, 'SFD_dust_4096_{0}.fits') # Parse input ra, dec = coordinates coordinates = SkyCoord(ra=ra, dec=dec, unit=(u.degree, u.degree)) # Convert to galactic coordinates. coordinates = coordinates.galactic l = coordinates.l.radian b = coordinates.b.radian # Check if l, b are scalar return_scalar = False if not isinstance(l, Iterable): return_scalar = True l, b = np.array([l]), np.array([b]) # Initialize return array ebv = np.empty_like(l) # Treat north (b>0) separately from south (b<0). for n, idx, ext in [(1, b >= 0, 'ngp'), (-1, b < 0, 'sgp')]: if not np.any(idx): continue hdulist = fits.open(fname.format(ext)) mapd = hdulist[0].data # Project from galactic longitude/latitude to lambert pixels. # (See SFD98). npix = mapd.shape[0] x = (npix / 2 * np.cos(l[idx]) * np.sqrt(1. - n*np.sin(b[idx])) + npix / 2 - 0.5) y = (-npix / 2 * n * np.sin(l[idx]) * np.sqrt(1. - n*np.sin(b[idx])) + npix / 2 - 0.5) # Get map values at these pixel coordinates. if interpolate: ebv[idx] = map_coordinates(mapd, [y, x], order=order) else: x=np.round(x).astype(np.int) y=np.round(y).astype(np.int) ebv[idx] = mapd[y, x] hdulist.close() if return_scalar: return ebv[0] return ebvThe loop is given in below:ABtoVEGA=[+0.77,-0.13,-0.02,+0.19,+0.49,-0.19,-0.18,-0.06,-0.06,+0.04,+0.10,+0.22,+0.27,+0.36,+0.45,+0.56,+0.50]ZP=np.loadtxt(zero-points.offsets.asc)Gal=np.loadtxt(photometry_galaxies_photometry.cat, dtype=None)ra=Gal[:,1]dec=Gal[:,2]Av=[3.836 ,3.070,2.542,2.058,1.313, 3.466 , 3.082 , 2.882 , 2.652 , 2.367 , 2.226 , 2.066 , 1.872 , 1.652 , 1.423 , 1.298 , 1.156]count=0for i in range(Gal.shape[0]): j=0 m=39 while (j<len(ABtoVEGA)): if ((isnan(float(Gal[i,j+5]))) or (isnan(float(Gal[i,j+22])))): Gal[i,j+5]='-99.0' Gal[i,j+22]='0.0' count+=1 else: value=get_ebv_from_map((ra[i], dec[i]), interpolate=True, order=1) Gal[i,j+5]=str(float(Gal[i,j+5])-ABtoVEGA[j]-ZP[j]-value*Av[j]) j+=1 while (m<45): if (math.isnan(float( Gal[i,m]))): Gal[i,m] = '0.0' m+=1Do you have any advice on how I could make it optimal and fast? | Catalogue manipulation | python;vectorization;multiprocessing;scipy;astropy | null |
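The dominant cost in the loop as written is calling get_ebv_from_map once per object, which re-opens the SFD FITS maps on every call. The function already accepts coordinate arrays (it wraps scalars itself), so it can be hoisted out of the loop and the magnitude update vectorized; a sketch keeping the question's column layout:

```python
import numpy as np

# One call for the whole catalogue: the FITS dust maps are opened once
# instead of once per object.
ebv = get_ebv_from_map((ra, dec), interpolate=True, order=1)

for j in range(len(ABtoVEGA)):
    mag = Gal[:, j + 5].astype(float)
    err = Gal[:, j + 22].astype(float)
    bad = np.isnan(mag) | np.isnan(err)
    mag = mag - ABtoVEGA[j] - ZP[j] - ebv * Av[j]
    Gal[:, j + 5] = np.where(bad, -99.0, mag)
    Gal[:, j + 22] = np.where(bad, 0.0, err)
```

The trailing m = 39..44 cleanup vectorizes the same way with np.isnan over Gal[:, 39:45]. If Gal is a string array, as the str()/float() round-trips suggest, converting it to a float array once up front avoids all the per-element parsing as well.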
_scicomp.18785 | The other day I had a discussion with a friend about the GAMS solvers and we were wondering what are the mathematical differences between the solvers. Which one to use for which kind of problem? How to know which solvers to use and what happens if the wrong solver is selected? | GAMS solvers: which one to use | linear programming;nonlinear programming | Somewhere in the GAMS file, after you've declared almost all of your model, you have to write a solve statement of the form solve <your_problem> using <formulation> {minimizing|maximizing} <your_objective_function_variable>;, where:<your_problem> should be replaced with the name of your problem<formulation> is one of the GAMS formulation types (lp, mip, nlp, etc.){minimizing|maximizing} means you're either solving a minimization problem or a maximization problem, so pick one of the two<your_objective_function_variable> is whatever gams variable you're using to encode the objective functionGiven a formulation type, GAMS provides a list of solvers you can use to solve that type of formulation. So if you use the wrong combination of formulation and solver, GAMS will return an error.You probably want to pick the most restrictive formulation type that satisfies the formulation you've declared in the GAMS file. That is, you could write out an LP and then write in your GAMS file solve MyProblem using nlp minimizing z;, but LP solvers generally exploit additional structure to make solves faster.For general solver recommendations, you can look at Hans Mittelmann's benchmark data for general purpose recommendations.My experience is as follows, with the caveat that solver performance obviously depends on problem instance and input parameters, you should try multiple solvers and parameter tuning for best performance, and you should definitely consult Mittelmann's benchmarks yourself:LP/QCP/MIP/MIQCP solvers: use CPLEX or Gurobi, if possible. Those two solvers are best of breed, and around 10x faster than anything else. SCIP is also pretty good, and CBC is one of the best free options.NLP solvers (convex): CONOPT and DICOPT were good for small to medium-scale problems; for medium-scale problems, SNOPT works pretty well, too; for large-scale problems, I'd use IPOPT first.MINLP/NLP (nonconvex): BARON's been considered the gold standard for a while. ANTIGONE is relatively new, and worth trying. COUENNE and BONMIN are also good options. |
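Concretely, the formulation/solver pairing looks like this in a model file (model and variable names are illustrative, from the classic transport example):

```gams
* Pick the LP solver, then solve; GAMS errors out if the
* declared formulation type and the chosen solver are mismatched.
option lp = cplex;
solve transport using lp minimizing z;
```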
_unix.248230 | I need to get data from a MySQL server and export it to a .csv file. I need to export the data to a new .csv file daily, automatically. Query: select count(*) count, create_date from tabpush where status=1 and create_date between '2015-12-05' AND '2015-12-06' order by create_date desc; How do I do that? I am new to shell scripting. Environment: Linux, CentOS 6.6. | how to export table data in to .csv file using shell script | shell script | null |
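A minimal sketch of the daily export (credentials, paths, and the rolling date window are placeholders; mysql's --batch mode emits tab-separated rows, which tr turns into CSV):

```sh
#!/bin/bash
# export_tabpush.sh: write yesterday's counts to a dated CSV
outfile="/var/reports/tabpush_$(date +%F).csv"
echo "count,create_date" > "$outfile"
mysql -u report_user -p'secret' mydb --batch --skip-column-names -e "
  select count(*) count, create_date from tabpush
  where status = 1
    and create_date between curdate() - interval 1 day and curdate()
  order by create_date desc;" | tr '\t' ',' >> "$outfile"
```

scheduled daily via cron, e.g. `0 1 * * * /usr/local/bin/export_tabpush.sh`. (SELECT ... INTO OUTFILE is the server-side alternative, but it writes on the database host and requires the FILE privilege.)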
_unix.352482 | I added an mSATA drive to my laptop and used it as an lvmcache for the LVM stored on my (spinning) hard drive. This LVM contains /home (as well as /var, /opt and /usr). Upon booting (with kernel 4.9.8-1), it takes a while and I see the following: (1 of 2) A start job is running for device dev-LVM-home.device (39sec / 1min 29sec) (2 of 2) A stop job is running for LVM2 PV scan on device 8:35 (39sec / 1min 29sec). After that minute and a half, I see: [ TIME ] Timed out waiting for device dev-LVM-home.device. [ DEPEND ] Dependency failed for /home. [ DEPEND ] Dependency failed for Local File Systems. [ DEPEND ] Dependency failed for File System check on /dev/LVM/home. I then get the "you are in emergency mode" prompt. Pressing Ctrl-D allows the laptop to boot up properly, and /home (and its cache) is mounted correctly! How do I fix this? How do I get it to correctly mount /home on boot? In my /etc/mkinitcpio.conf, I have: HOOKS="base systemd plymouth autodetect block sd-lvm2 filesystems keyboard fsck" | Device timed out on boot after enabling lvmcache | arch linux;lvm;ssd | I figured out a solution! Not sure why I didn't think of this before. In /etc/fstab, I had the /usr partition mounted after /home; that was causing the issue here. Mounting /usr right after / (and thus before /home) fixed the problem. |
_softwareengineering.123257 | Does anybody have experience with using a bug-tracking / issue-tracking software like bugzilla, mantis or JIRA not only for bugs or tasks, but to initiate and maintain discussions that in the end lead to a decision?For example, a developer thinks that all protected fields should be abolished and changed to private fields with protected methods that access them. It is not his call, and he would like to discuss it. Normally he brings up the point in the next developer meeting at the end of which a decision is made. Instead, my idea was for him to open an issue of a certain type decision and describe his intent like normally one would describe a bug or task.Other developers can make their comments if they feel like it, and in the end, the issue is closed as accepted or denied.The advantages I see in this:Asynchronous communication: no one is forced to voice his opinion in a meeting when they didn't have time yet to oversee all ramifications of said decision.Written log of considerations that lead to a decision. If one later raises that question again he can be referred to it.Relations to other issues can be made, e.g. a task can be followed back to a decision.Integration with version control software, e.g. a commit can be traced back to a decision.Disadvantages:Heavy smell of a golden hammer: issue tracking software normally is used to track actionable itemsOrganizational overhead may be disproportionate: instead of a small informal talk one has to communicate his ideas in written form | Using bug-tracking / issue-tracking software to discuss design questions, new tools etc | development process;tools;issue tracking | The way we work, issue tracking should track all issues. We don't know what issues are actionable until it has been analyzed. If the tracking system only has actionable issues in it, it is likely they are being triaged too soon, meaning any discussions and decision is lost. We take the approach every thing should go in (In our workflow anyway), as otherwise issues can be repeatably raised with no visibility. We have a category in our Jira implementation for Risk, so we are using Jira to track items that are not actionable, but have the ability to jeopardize the software in some way. Discussion on the item is tracked and once the risk is gone (or mitigated) the issue is closed. The example you have given could easily go into the Risk category. It is important that things such as this are discussed and tracked, and the decision recorded. When the developer re-raises the issue in a few months time, the Asked and answered response has a justification to it. |
_codereview.130105 | The code works perfectly fine for me (compiled it in BlueJ and Eclipse) but I was wondering what other, more experienced programmers thought of it. I'd specifically like to know if it could be shorter or be made more resource-efficient. /*** This program is just a simple addition calculator that can use whole numbers or decimals.* * @author Christopher Goodburn * @version 6/4/2016*/import java.util.Scanner;import java.math.*;public class Calculator{private static final Scanner askScanner = new Scanner(System.in);public static double answer;public static double firstNumber;public static double secondNumber; //makes variables for the whole classpublic static void main(String[] args) { calculator();} public static void calculator() { System.out.println("Basic calculator"); System.out.println("Pick one:"); System.out.println("1. Addition"); System.out.println("2. Subtraction"); System.out.println("3. Multiplication"); System.out.println("4. Division"); int pick = askScanner.nextInt(); if(pick == 1) { addition(); } else if(pick == 2) { subtraction(); } else if(pick == 3) { multiplication(); } else if(pick == 4) { division(); } else { System.out.println("You need to choose 1, 2, 3, or 4"); calculator(); } askScanner.close();} private static double getNumbers() { System.out.println("First number?"); firstNumber = askScanner.nextDouble(); System.out.println("Second Number?"); secondNumber = askScanner.nextDouble(); return firstNumber; } public static void subtraction() { System.out.println("Subtraction"); getNumbers(); answer = firstNumber - secondNumber; System.out.println("This is the difference of the two numbers: " + answer); calculator(); } public static void addition() { System.out.println("Addition"); getNumbers(); answer = firstNumber + secondNumber; System.out.println("This is the sum of the two numbers: " + answer); calculator(); } public static void multiplication() { System.out.println("Multiplication"); getNumbers(); answer = firstNumber * secondNumber; System.out.println("This is the product of the two numbers " + answer); calculator(); } public static void division() { System.out.println("Division"); getNumbers(); answer = firstNumber / secondNumber; System.out.println("This is the quotient of the two numbers: " + answer); calculator(); }} | Basic calculator that takes 2 numbers and does an operation with them | java;beginner;calculator | The main issue with your code is repetition. The 4 methods calculating the result based on user input are sketched in the same way: public static void operation() { // get numbers // perform operation and print result // return to main loop} As a result, there is copy-pasted code inside those 4 methods. The first remark is that all of them need to get numbers from the user. Instead of having each of addition, subtraction, etc. get the numbers, retrieve them beforehand from the user.
In the same way, all of them invoke calculator(); to ask the user for numbers again. A first possibility is therefore to have: int pick = askScanner.nextInt(); getNumbers(); // perform operation based on input calculator(); with a sample operation being: public static void addition() { System.out.println("Addition"); answer = firstNumber + secondNumber; System.out.println("This is the sum of the two numbers: " + answer);} That said, this still leaves a design issue: private static final Scanner askScanner = new Scanner(System.in);public static double answer;public static double firstNumber;public static double secondNumber; //makes variables for the whole class Having static global variables to keep the numbers input by the user isn't a good idea. What you want instead is to have local variables with the minimum possible scope. You'll need to drop the method getNumbers() completely: it needs the global variables (because it can only return one value, not two). Also, the operation methods won't operate on the global variables anymore; instead, they will be given the two operands as parameters. And, lastly, they won't set the result global variable but return the result. As such, the code would be: if (pick == 1) { addition(firstNumber, secondNumber);} else if (pick == 2) { subtraction(firstNumber, secondNumber);} else if (pick == 3) { multiplication(firstNumber, secondNumber);} else if (pick == 4) { division(firstNumber, secondNumber);} else { System.out.println("You need to choose 1, 2, 3, or 4");} with a sample operation being: public static double addition(double firstNumber, double secondNumber) { System.out.println("Addition"); double answer = firstNumber + secondNumber; System.out.println("This is the sum of the two numbers: " + answer); return answer;} You could also consider using a switch instead of the if/else blocks: switch (pick) {case 1: addition(firstNumber, secondNumber); break;case 2: subtraction(firstNumber, secondNumber); break;case 3: multiplication(firstNumber, secondNumber); break;case 4: division(firstNumber, secondNumber); break;default: System.out.println("You need to choose 1, 2, 3, or 4");} which makes the code a bit shorter (note the break statements, without which each case would fall through into the next). As a final note: your code is an infinite loop right now; the user has no way to exit. It would be nice to introduce an option giving the user the possibility to quit the program.
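To make that last remark concrete, here is a minimal sketch of one way to do it (the loop structure and option number 5 are my additions, not part of the original review): replace the recursive calculator() calls with a loop that offers an explicit quit choice.

import java.util.Scanner;

public class CalculatorLoop {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Pick one: 1. Addition 2. Subtraction 3. Multiplication 4. Division 5. Quit");
            int pick = scanner.nextInt();
            if (pick == 5) {
                break; // leave the loop instead of recursing back into calculator()
            }
            System.out.println("First number?");
            double first = scanner.nextDouble();
            System.out.println("Second number?");
            double second = scanner.nextDouble();
            switch (pick) {
                case 1: System.out.println("Sum: " + (first + second)); break;
                case 2: System.out.println("Difference: " + (first - second)); break;
                case 3: System.out.println("Product: " + (first * second)); break;
                case 4: System.out.println("Quotient: " + (first / second)); break;
                default: System.out.println("You need to choose 1, 2, 3, 4 or 5");
            }
        }
        scanner.close(); // close the scanner once, when the program is really done
    }
}

Besides fixing the exit problem, the loop also avoids the ever-growing call stack that the recursive calculator() calls would build up over a long session.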
_softwareengineering.115034 | So I'm a senior in college, and a couple of weeks ago I interviewed at Microsoft for an entry-level SDE position. While most of the interviews (out of the total 7) were on the general principles of data structures and algorithms, one of the interviewers asked me if I knew anything about distributed computing. I replied that, while I hadn't had a course on 'distributed computing' per se, I'd like to take a stab at the question. He proceeded to ask a question on mutexes and semaphores that was covered in our operating systems course, and which I thought was more about parallelism and concurrency than distributed computing. Having gone through this experience, I am led to wonder what exactly constitutes distributed computing. Do multiple parallel threads trying to synchronize for access to a resource constitute a problem in the domain of distributed computing? | What exactly is distributed computing? | distributed computing;operating systems | Having gone through this experience, I am led to wonder as to what exactly constitutes distributed computing? Distributed computing is an inherently parallel collection of processing elements that communicate with one another to tackle one or more problems. Those processing elements are sufficiently separated from each other that it is not practical to build a reliable and timely messaging fabric between them, and so it becomes impossible for there to be global knowledge of the state of the system. Particular features of messaging with distributed systems are that messages will get lost, will get garbled, will get delayed; solutions in this space have to take account of this. Thus, distributed programming is about dealing with networks and messages and parallelism and a lack of global information. The easiest method of working around the problems is to make a single processing element special, i.e., authoritative for a particular piece of information. Then the other elements can either refer back to it every time, or cache the information and hope that it doesn't go out of date (since they can't count on being told of changes). This is the classic client/server architecture. Internet computing is distributed computing, but without the ability to control what most of the distributed nodes really do. Do multiple parallel threads trying to synchronize for access to a resource constitute a problem in the domain of distributed computing? They constitute a possible solution that is useful when building the client/server model, but at the cost of a potentially dramatic increase in resource contention. For reads, that's not a very big deal (providing there's enough hardware), but for writes it's a big problem indeed. What you try to avoid, though, is distributed locks. The lack of reliable, timely messaging absolutely slays distributed decision protocols (unless you use something like the Paxos Commit protocol, but that's got a lot of caveats). The fundamental problem with distributed computing is that Bad Stuff Happens To Messages. (Relatively low-level protocols like TCP lessen the problems, but you can still come badly unstuck.)
_softwareengineering.238266 | I am starting with AngularJS. The back end will be Web API (which is new to me as well) and I'd like to use just one IDE, so I'm trying to figure out how to set up a project in Visual Studio 2013 for AngularJS. I'd like to keep it as a separate project, to keep it loosely coupled to the API (or keep the UI decoupled from the Web API). I'm having a hard time figuring out how to get this set up. So: (1) What project type is best for a pure AngularJS (pure HTML5) project? (2) Is it suggested to use Grunt, or is MSBuild better for handling build tasks? I'm thinking here of linting, minifying, concatenating files into a single distribution file, maybe copying to a web server. (3) How do you run tests? Is there a plugin to run Jasmine tests? Do you just run Karma separately? (4) Are there any good templates for AngularJS? Any that use ngBoilerplate? Any help would be appreciated. | Suggested setup for AngularJS development in Visual Studio 2013 | visual studio;html5;angularjs | We had the same choices to make. We decided to: (1) make the whole build process not depend on Visual Studio. We chose to use tools that are considered mainstream in the Angular development world; this way, getting support from the community is easier. (2) Use Visual Studio extensions, when available, to enhance the experience where possible. How: we use Grunt and Karma. We scaffolded a project using Yeoman's angular generator and used this as a template to set up our own build process; ours is almost as-is. We installed the Web Essentials 2013 extension, which uses the same .jscs and .jshintrc that your Grunt build uses. We decided to let the Web Essentials 2013 extension handle the .less files on save, so that index.html can refer to main.css and no build process is required. We made sure our Visual Studio editor settings are aligned with our .jscs and .jshintrc formatting rules (spacing, line endings, etc). Also: we run karma watch (or grunt watch) on the command line for our tests. Attaching to running Karma tests from Visual Studio works fine; you have to run your tests with IE. But we use the Chrome dev tools more often than not. We don't use the Jasmine web runner at all; Karma has all you need. To make things easy, our app files are in /static/ inside our Web API project; this way you don't need another server to serve your static files. You can use NTVS to debug Grunt or Karma or any other tool included in your build process from Visual Studio. Integration with our CI (CCNet) was simple: just invoke grunt ci, where ci is a task that does the build and then runs the tests using JUnit-style reporters instead of console-type reporters. We also made sure that running the build process is optional. The build process bundles stuff in a dist folder. We run integrated tests on this folder, but at dev time we run on raw JS files (not minified or concatenated). The Yeoman angular template got us there easily. I hope this helps.
_webapps.72814 | I want to get the questions out of some Facebook survey. It wasn't made by me. Is it possible to transfer to Excel or something similar? | How to get data out of a Facebook survey | facebook;data liberation | null |
_unix.161775 | I use ssh-keygen -t rsa to generate RSA key pairs. In the id_rsa.pub file, I see the username and the hostname. But I want to know what elements affect the key pairs. For example, if I change the host IP, do I need to regenerate the key pair? What about the hostname? Or even if I reinstall the operating system? When should I regenerate the key pairs? | What elements affect the key pairs generated by ssh-keygen? | ssh;ssh keygen | The key is randomly generated. There's nothing more special to its origin than that. Meaning that as long as only you hold the private key, there is no need to replace it. In SSH, a user key is generally used to identify a combination of user & origin system, meaning the key is not shared among users on the same system, or by the same user on multiple systems. Though this is not a technical limitation, as you can break either of these rules without issue; it's just good security practice. In a public key (id_rsa.pub), the last field is a comment. The ssh-keygen command typically puts your username & hostname in that last field. This serves no purpose other than as a comment to identify the key. As for why it puts username & hostname, see the previous paragraph. To directly answer your question, "When should I regenerate the key pairs?": You should regenerate the key when the key has been compromised and has potentially been obtained by someone else. At this time you should also revoke the previous public key from all remote systems which trust it (the authorized_keys file). That is literally the only reason. If you reinstall the OS, change the hostname, etc., you do not need to recreate the key. Though you can if you'd like.
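To illustrate the format described above (a small sketch, assuming a well-formed single-line public key; the key material shown is an obviously truncated placeholder): an id_rsa.pub line is just three whitespace-separated fields, and the third, the comment, can change freely without affecting the key itself.

public class PubKeyFields {
    public static void main(String[] args) {
        // field 1: key type, field 2: base64 key material, field 3: free-form comment
        String line = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC... alice@laptop";
        String[] fields = line.split("\\s+", 3);
        System.out.println("type    = " + fields[0]);
        System.out.println("key     = " + fields[1]);
        System.out.println("comment = " + (fields.length > 2 ? fields[2] : "(none)"));
    }
}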
_cstheory.3743 | In a previous question about time hierarchy, I've learned that equalities between two classes can be propagated to more complex classes, and inequalities can be propagated to less complex classes, with arguments using padding. Therefore, a question comes to mind: why do we study a question about different types of computation (or resources) in the smallest (closed) class possible? Most researchers believe that $P \neq NP$. This distinction of classes wouldn't be between classes that use the same type of resource. Therefore, one might think of this inequality as a universal rule: nondeterminism is a more powerful resource. Therefore, although an inequality, it could be propagated upwards by exploiting the different nature of the two resources. So, one could expect that $EXP \neq NEXP$ too. If one proved this relation or any other similar inequality, it would translate to $P \neq NP$. My argument could maybe become clearer in terms of physics. Newton would have a hard time understanding universal gravity by examining rocks (apples?) instead of celestial bodies. The larger object offers more details in its study, giving a more precise model of its behavior and allowing one to ignore small-scale phenomena that might be irrelevant. Of course, there is the risk that larger objects exhibit different behavior, in our case that the extra power of nondeterminism wouldn't be enough in larger classes. What if, after all, $P \neq NP$ is proven? Should we start working on $EXP \neq NEXP$ the next day? Do you consider this approach problematic? Do you know of research that uses larger classes than polynomial to distinguish the two types of computation? | Why don't we use larger classes to study determinism vs non-determinism? | cc.complexity theory;big picture | The issue may be a bit cleaner with $E = DTIME(2^{O(n)})$ and $NE = NTIME(2^{O(n)})$. The easiest way to think about these classes is that they are the same as $P$ and $NP$ but restricted to unary languages. That is, all inputs are of the form $1^k$. That is, the language $L$ is in $E$ if and only if the language $U_L = \{ 1^x : x \in L \}$ is in $P$ (identifying strings with numbers using binary representation), and similarly $NE$ is isomorphic to unary $NP$. So, trying to separate $NE$ from $E$ is just like trying not only to separate $P$ from $NP$, but actually to do it using a unary language. There is no reason it should make your life even conceptually easier.
_softwareengineering.304385 | I was going through the JBake code at https://github.com/jbake-org/jbake/blob/master/src/main/java/org/jbake/app/Asset.java (line 58). Please find the code below. Why are we sorting the array here? if (assets != null) { //TBD : Why is sorting required? Arrays.sort(assets); for (int i = 0; i < assets.length; i++) { if (assets[i].isFile()) { StringBuilder sb = new StringBuilder(); sb.append("Copying [" + assets[i].getPath() + "]..."); File sourceFile = assets[i]; File destFile = new File(sourceFile.getPath().replace(source.getPath()+File.separator+config.getString(ConfigUtil.Keys.ASSET_FOLDER), destination.getPath())); try { FileUtils.copyFile(sourceFile, destFile); sb.append("done!"); LOGGER.info(sb.toString()); } catch (IOException e) { sb.append("failed!"); LOGGER.error(sb.toString(), e); e.printStackTrace(); errors.add(e.getMessage()); } } if (assets[i].isDirectory()) { copy(assets[i]); } } } | Sorting an array before looping: best practice | java;coding style;coding standards;code reviews;clean code | The comparator of File performs a lexicographic comparison of pathnames. So, these files are sorted by name. So, obviously, the author wants the filesystem tree to be traversed in alphabetical order, and not in any other order. That's important, because according to the documentation of File.listFiles(): "There is no guarantee that the name strings in the resulting array will appear in any specific order; they are not, in particular, guaranteed to appear in alphabetical order." Java specifically refrains from giving such a guarantee because different host environments (i.e. Windows vs. Linux) are known to yield files in different orders, and sometimes even the host environment may refrain from giving any such guarantee. On the other hand, sorting the files can be expensive, so the creators of Java decided to pass this uncertainty on to the programmer, rather than imposing the extra overhead of sorting on every single Java application out there. So, the author wants to have a guaranteed order in which the files will be traversed. This is useful: (1) When debugging, because the behavior of your program does not change from debug run to debug run due to factors that are beyond your control. If you notice that file aardvark.txt was yielded first, and you try to see what happens when processing aardvark, and you set a breakpoint at some place and rerun, it is very annoying to discover that on the next run zoology.txt is processed first. (2) When testing, because you can create a temporary directory structure, create a tree of files in it, run your function on it, obtain the results in a new tree, and then make sure that the trees are exactly equal, without having to account for the fact that the trees may be equal but not in the same order. (3) In general, when writing software that you want to behave in a consistent way across different platforms.
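A minimal sketch of the point being made: without the sort, two runs of the same program may visit the files in different orders, depending on the filesystem.

import java.io.File;
import java.util.Arrays;

public class StableListing {
    public static void main(String[] args) {
        File dir = new File(args.length > 0 ? args[0] : ".");
        File[] entries = dir.listFiles(); // order is filesystem-dependent, per the Javadoc
        if (entries == null) {
            System.err.println("Not a readable directory: " + dir);
            return;
        }
        Arrays.sort(entries); // lexicographic by pathname: the same order on every run
        for (File entry : entries) {
            System.out.println(entry.getName());
        }
    }
}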
_softwareengineering.338853 | I'm writing a web application that requires access to a user's Google Calendar. I can obtain access to their calendar via Google's OAuth 2.0 protocol; that part is easy. Now, my application itself needs to expose a REST API to access data about a user when required. Here's an example flow: (1) User logs in with Google to http://example.com. (2) User grants permissions to their Google Calendar. (3) Webapp stores the user in a database. (4) Webapp has a route /super that creates a SuperEvent resource. (5) A REST API consumer wants to create a SuperEvent, so they try calling http://example.com/api/:event_id/super, which would involve querying Google Calendar on behalf of the user and making some calculations on the server. My problem lies in the fact that a client of the API server (even my own, since I'm dogfooding) needs to authenticate not only with http://example.com but also with Google's OAuth 2.0 server. Is it possible to accomplish such a paradigm, or is this not a recommended approach to the problem? The only other solution I can think of is to bundle my API and static web page server into one, and have mobile clients use native OAuth 2.0 bindings to talk to Google. This is a followup to my previous question | Serving a REST API authenticated against an OAuth 2.0 Provider | rest;oauth;oauth2 | null
_softwareengineering.145866 | Lately, I have been very intrigued by F#, which I have been working a bit with. Coming mostly from Java and C#, I like how concise and easily understandable it is. However, I believe that my background with these imperative languages disturbs my way of thinking when programming in F#. I found a comparison of the imperative and functional approach, and I surely do recognize the imperative way of programming, but I also find it difficult to define problems to fit well with the functional approach. So my question is: how do I best make the transition from object-oriented programming to functional programming? Can you provide some tips or perhaps provide some literature that can help one to think in functions in general? | How to make the transition to functional programming? | object oriented;f#;functional programming;imperative programming | Having spent decades writing imperative (i.e. if and while statements) and logical (i.e. PROLOG) programs, and having spent the last few months learning F#, I will give you some advice from my perspective. During those decades I have brushed against functional languages such as LISP and Mercury, with enough understanding to read the code and write simple programs, but not until using F# did I learn functional programming in enough detail to write production-quality code. Since you specifically mention the imperative languages Java and C# and the functional language F#, I will use them as the basis for my answer. Since you use the terms imperative and object-oriented in your question and specifically mention Java and C#, which are object-oriented, I will tend to reference object-oriented instead of imperative. Advice. How do I best make the transition from object-oriented programming to functional programming? The first piece of advice I will give is NOT to try to translate any imperative programs you already have into a functional language. This probably goes against most of the advice you will see, but I firmly believe that you need to build up the constructs of how functional programming works in a simple manner, rather than continuously trying to think imperatively and then trying to think functionally. The reason is that functional programming relies heavily on recursion, pattern matching, discriminated unions, first-class functions, and let bindings, and is function-level programming, while object-oriented code relies heavily on objects, polymorphism, inheritance, encapsulation, and interfaces. The point here is that while they are both kinds of programming languages, they rely on two different ways of thinking. The best analogy I can think of, albeit a bad one, is that it would be like trying to translate poetry into math. While I am sure you could do it and lay claim that it could be done, they both have very different uses and are well regarded by those who use them. The second piece of advice is to get one or two good books on F#. I have most of the F# books and find that most of them are more reference than reading books, which is good for the first few weeks, but after that you need to tie the concepts together and don't want a reference book. If I had to choose two books they would be Expert F# 2.0 by Syme, Granicz, and Cisternino, and Real-World Functional Programming: With Examples in F# and C# by Tomas Petricek and Jon Skeet. Expert F# 2.0 is more of a reference book, and Real-World Functional Programming is a book you read from start to end; don't use it as a reference book.
If Expert F# 2.0 is not your style, then as a substitute I would look at Beginning F# by Robert Pickering, or Programming F# by Chris Smith. The third piece of advice is to get a good IDE, e.g. Visual Studio 2010, for using F#. Don't think of this as learning a new programming language but as learning a new way of thinking. Most of the errors you get back will be unlike anything from OO, and the more you can use an IDE as a crutch, the faster you can hobble along; believe me, you will be hobbling along unless you already know a functional language. In particular, F# uses type inference, which means you have to understand this concept to fix the errors. The fourth piece of advice is to get a mentor. Functional programming relies heavily on recursion, and one has to think naturally in recursion to be effective. Luckily for me, I learned this when learning PROLOG in college; I can't imagine how hard this would be without a mentor. While it would be possible, it will take longer. If you don't have someone who can mentor you, then use SO. Most of the authors I mentioned answer questions here. The fifth piece of advice is to learn these concepts: let bindings, the F# list, recursion, discriminated unions and pattern matching. I probably use these concepts in more than 80% of the F# code I write. The reason you need the F# list is that you will see code that traverses a list building a second list of results and then reversing that list before returning the list of results. Because accessing the head of a list is O(1), this is both very fast and effective. The reason you need recursion is that functional programming does not use loops but recursion to perform looping. If you use a tail call, you have effectively made a loop which does not cause stack overflows. The reason you need discriminated unions and pattern matching is that you can create a single data type, i.e. a discriminated union, that holds all of the information for a function, and when combined with matching this is very effective. Microsoft's Introduction to F# is pretty good for these topics. The sixth piece of advice is don't rush ahead. If you cannot explain a part of F# to someone else with enough confidence and clarity for them to understand it, then you need to keep learning. The seventh piece of advice is to understand that functional programming is not a solution to all problems. While F# adds OO concepts to functional programming, I try to avoid the use of those concepts when programming with F#. Yes, I try to be a purist. By doing this, it keeps me focused on doing with F# what functional programming is best at. The eighth piece of advice is that once you feel comfortable with many portions of functional programming, write a real program based on a real-world problem. I find myself learning more about F# from working on a particular problem than from working on made-up problems. I have learned so much in the last few months, I won't even try to list it all here. One big benefit has been learning FParsec and a lot about LISP. Since my problem deals with AI, a lot of the books I read give sample code in LISP, which I have to translate into F#. Since LISP people like to enter data as S-expressions and LISP can process S-expressions, they tend to get away with skipping parsing; thus a reason to learn FParsec. I could list more pieces of advice, but at this point it becomes more of a list than sound advice.
Examples. Can you provide some tips or perhaps provide some literature that can help one to think in functions in general? Probably the easiest way to identify problems that can be solved with functional programming is to look at the data structure definitions. If you have a class or structure that has a reference back to the same class or structure, and you traverse from one object to the next using the reference, then you should consider using functional programming for the solution. A simple example is a linked list where you have to process all of the items in the list. You start at the first item in the list, then process the next item in the list until you reach the end of the list. A more advanced example is a tree where you have to process all of the items in the tree. You start at the root of the tree, then process each child in the tree until you have no more children left. See: tree traversal. An even more advanced example is a graph where you have to process all of the nodes in the graph. You start with a node, then process each adjacent node, keeping track of visited nodes, until you have no more unvisited nodes left. Also, if you have a sequence of data that needs to be generated from a starting value, then consider using functional programming for the solution. Another large area where functional programming is useful is with catamorphisms. While this is an advanced concept, think of it simply as a set of data that can be enumerated. You can map the data, e.g. turn an identity number into a name, and fold the data, e.g. sum up all of the values in a list. Any time you see an enumeration in an OO language, see if it can be solved with a recursive function. Other more advanced areas where functional programming is often used are parsing, theorem provers and AI. If you can find code written in other functional languages, then try to translate it into F#. Good luck.
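To make the accumulate-and-reverse idiom from the fifth point concrete for readers coming from the Java side, here is a rough sketch of the same recursive shape in Java (in F# the accumulator would be an immutable cons list, prepended to in O(1) and reversed once at the end; the example below only mirrors the structure, and note that Java does not optimize the tail call):

import java.util.ArrayList;
import java.util.List;

public class RecursiveAccumulator {
    static List<Integer> doubleAll(List<Integer> input) {
        return go(input, 0, new ArrayList<>());
    }

    // Tail-recursive shape: handle the head, recurse on the rest, and carry
    // partial results in an accumulator instead of combining on the way back.
    private static List<Integer> go(List<Integer> input, int i, List<Integer> acc) {
        if (i == input.size()) {
            return acc;                  // base case: nothing left to process
        }
        acc.add(input.get(i) * 2);
        return go(input, i + 1, acc);    // tail call; a loop in F#, a real call in Java
    }

    public static void main(String[] args) {
        System.out.println(doubleAll(List.of(1, 2, 3)));  // prints [2, 4, 6]
    }
}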
_webmaster.30364 | I have a URL rewrite that rewrites my URLs like mypage.aspx?id=123&title=mytitle into ...page/mytitle/123/. I'm trying to understand what the pros / cons are, if any, of having it like ...page/mytitle/123/ instead of ...page/123/mytitle/. Does it make any difference for search engines, and is there anything I'm doing wrong with this? | URL rewriting and search engines | search engines;url rewriting | Just pick one. (Read: you're putting too much thought into this.) They're not different enough to matter, and whatever benefit one might have over the other isn't going to be enough to offset the time spent debating. I would actually ask why you're retaining the id value at all. It doesn't serve any apparent purpose, and if you end up migrating to another CMS/whatever in the future, you may end up with new ID values, resulting in new URLs, broken links, waiting to get re-indexed, etc.
_codereview.123252 | (See the previous (and first) iteration.) Now, I have incorporated all the modifications proposed by janos, and, thus, I have the following: HtmlViewComponent.java package net.coderodde.html.view;/** * This interface defines the API for logical HTML view components, which * may consist of other view components. * * @author Rodion "rodde" Efremov * @version 1.6 (Mar 18, 2016) */public interface HtmlViewComponent { public String toHtml();} HtmlViewContainer.java package net.coderodde.html.view;import java.util.ArrayList;import java.util.List;import java.util.Objects;/** * This class defines the API for HTML elements that may contain other HTML * elements. * * @author Rodion "rodde" Efremov * @version 1.6 (Mar 18, 2016) */public abstract class HtmlViewContainer implements HtmlViewComponent { protected final List<HtmlViewComponent> components = new ArrayList<>(); public void addHtmlViewComponent(HtmlViewComponent component) { Objects.requireNonNull(component, "The input component is null."); components.add(component); } public boolean containsHtmlViewComponent(HtmlViewComponent component) { Objects.requireNonNull(component, "The input component is null."); return components.contains(component); } public void removeHtmlViewComponent(HtmlViewComponent component) { Objects.requireNonNull(component, "The input component is null."); components.remove(component); }} HtmlPage.java package net.coderodde.html.view;/** * This class is the top-level container of view components. * * @author Rodion "rodde" Efremov * @version 1.6 (Mar 18, 2016) */public class HtmlPage extends HtmlViewContainer { private final String title; public HtmlPage(String title) { this.title = title != null ? title : ""; } @Override public String toHtml() { StringBuilder sb = new StringBuilder().append("<!DOCTYPE html>\n") .append("<html>\n") .append("<head>\n") .append("<title>") .append(title) .append("</title>\n") .append("</head>\n") .append("<body>\n"); components.stream().forEach((component) -> { sb.append(component.toHtml()); }); return sb.append("</body>\n") .append("</html>").toString(); }} DivComponent.java package net.coderodde.html.view.support;import net.coderodde.html.view.HtmlViewContainer;/** * This class implements a {@code div} component. * * @author Rodion "rodde" Efremov * @version 1.6 (Mar 18, 2016) */public class DivComponent extends HtmlViewContainer { @Override public String toHtml() { StringBuilder sb = new StringBuilder("<div>\n"); components.stream().forEach((component) -> { sb.append(component.toHtml()); }); return sb.append("</div>\n").toString(); }} TableComponent.java package net.coderodde.html.view.support;import java.util.ArrayList;import java.util.List;import net.coderodde.html.view.HtmlViewComponent;/** * This class represents the table component. * * @author Rodion "rodde" Efremov * @version 1.6 (Mar 18, 2016) */public class TableComponent implements HtmlViewComponent { private final int columns; public TableComponent(int columns) { checkColumnNumber(columns); this.columns = columns; } private final List<List<? extends HtmlViewComponent>> table = new ArrayList<>(); public void addRow(List<HtmlViewComponent> row) { if (row.size() > columns) { table.add(new ArrayList<>(row.subList(0, columns))); } else { table.add(new ArrayList<>(row)); } } private void checkColumnNumber(int columns) { if (columns <= 0) { throw new IllegalArgumentException( "The number of columns must be a positive integer. Received " + columns); } } @Override public String toHtml() { StringBuilder sb = new StringBuilder().append("<table>\n"); for (List<? extends HtmlViewComponent> row : table) { sb.append("<tr>"); for (HtmlViewComponent cell : row) { sb.append("<td>"); sb.append(cell.toHtml()); sb.append("</td>"); } // Deal with the missing table cells at the current row. for (int i = row.size(); i < columns; ++i) { sb.append("<td></td>"); } sb.append("</tr>\n"); } return sb.append("</table>\n").toString(); }} TextComponent.java package net.coderodde.html.view.support;import net.coderodde.html.view.HtmlViewComponent;/** * This class represents a simple text. * * @author Rodion "rodde" Efremov * @version 1.6 (Mar 18, 2016) */public class TextComponent implements HtmlViewComponent { private String text; public TextComponent() { this(""); } public TextComponent(String text) { setText(text); } public void setText(String text) { this.text = text != null ? text : ""; } public String getText() { return text; } @Override public String toHtml() { return text; }} Demo.java import java.util.ArrayList;import java.util.Arrays;import java.util.List;import net.coderodde.html.view.HtmlPage;import net.coderodde.html.view.HtmlViewComponent;import net.coderodde.html.view.support.DivComponent;import net.coderodde.html.view.support.TableComponent;import net.coderodde.html.view.support.TextComponent;public class Demo { public static void main(String[] args) { HtmlPage page = new HtmlPage("FUNKEEH PAGE"); DivComponent div1 = new DivComponent(); DivComponent div2 = new DivComponent(); div1.addHtmlViewComponent(new TextComponent("Hey yo!\n")); TableComponent table = new TableComponent(3); // Arrays.asList is immutable, so copy to a mutable array list. List<HtmlViewComponent> row1 = new ArrayList<>( Arrays.asList(new TextComponent("Row 1, column 1"), new TextComponent("Row 1, column 2"), new TextComponent("Row 1, column 3"), new TextComponent("FAIL"))); List<HtmlViewComponent> row2 = new ArrayList<>( Arrays.asList(new TextComponent("Row 2, column 1"), new TextComponent("Row 2, column 2"))); table.addRow(row1); table.addRow(row2); div2.addHtmlViewComponent(table); div2.addHtmlViewComponent(new TextComponent("Bye, bye!\n")); page.addHtmlViewComponent(div1); page.addHtmlViewComponent(div2); System.out.println(page.toHtml()); }} Please tell me anything that comes to mind. | Mini HTML document builder in Java - follow-up | java;html;library | null
_unix.66355 | So! I'm not even 13; 11 to be exact. But I'm making a game, so, of course, I should have a website; that would make sense. And I'm making it from scratch, and I need to learn things. I'm using an SSH web server and have lots of questions. I'm using PHP but I can't seem to find any videos on YouTube, and I've tried looking it up on Google, which is how I found this. One major thing before I pay some money to publish the thing is tabs: how this website has /questions, /tags, /users, etc. I want to know how to do that. If you want, I can make a short video of what I've done so far. If you've made a website from scratch using the terminal, I would really like to know how to do a lot more things. I have Linux Mint, the KDE desktop. | Website building through the terminal on Linux Mint, please help! | linux mint | null
_webmaster.16638 | I just transferred a domain from GoDaddy to NameCheap.com. There was a period of downtime until I configured the proper settings at NameCheap. During this downtime browsers returned a 408 Request Timeout error. Next time, how do I avoid the downtime during domain transfers because the previous registrar's name servers stopped serving records? Specifically transfers to NameCheap (from GoDaddy). Note this question differs slightly from "Domain transfers - handling the downtime" because NameCheap copied the GoDaddy name server information for the domain, but GoDaddy name servers stopped serving DNS records. When I switched to NameCheap's name servers, I had to manually re-enter all the records. I could swear on a previous transfer (also from GoDaddy to NameCheap) the DNS records were all transferred automatically, with no downtime. What did I do differently last time? Was it just a matter of switching the name server ASAP? I think the issue is configuring the domain to use NameCheap's name servers before GoDaddy's name servers stop serving records. I've thought of two possible solutions, but I'm not sure if either is feasible: (1) set the TTL for the records at GoDaddy to a very long value (might not be possible if I don't already own the domain); (2) use an intermediate, third-party name server. Any better ideas? | How to seamlessly transfer a domain (avoid downtime because the previous registrar's nameservers stopped serving DNS records) | domains;transfer;domain registrar;downtime | Well, should have checked the NameCheap knowledgebase first: "How to transfer a domain into Namecheap without a huge downtime?" NameCheap offers a FreeDNS service, so their name servers can start handling DNS requests before a transfer. I suppose it would keep working when transferring away from NameCheap, too.
_cstheory.14953 | I'm interested in the critical 3-satisfiability (3-SAT) density $\alpha$. It's conjectured that such an $\alpha$ exists: if the number of randomly generated 3-SAT clauses is $(\alpha + \epsilon) n$ or more, they are almost surely unsatisfiable. (Here $\epsilon$ is any small constant and $n$ is the number of variables.) If the number is $(\alpha - \epsilon) n$ or less, they are almost surely satisfiable. The thesis "Belief propagation algorithms for constraint satisfaction problems" by Elitza Nikolaeva Maneva approaches the problem from the angle of belief propagation, known from information theory. On page 13, it says $3.52 < \alpha < 4.51$ if $\alpha$ exists. What are the best known bounds for $\alpha$? | Current tightest bounds for critical 3-SAT density | pr.probability;random k sat | null
_codereview.87478 | Recently, I asked in a code review whether my code was causing any pointer-related issues. They mentioned that I had problems with my dispose method. Basically, I'm disposing textures from a texture map by cleaning the texture at the ID given to the method. So I would like to ask you if I'm now disposing of my texture correctly, and if there's a better way to do it. void TextureManager::freeTexture(std::string id){ std::map<std::string, SDL_Texture*>::iterator it = m_textureMap.find(id); std::cout << "Disposing texture : [ID=" << id << "] [MemoryAddress=" << it->second << "] : \n"; if (it == m_textureMap.end()) { LOGERROR("ERROR : Invalid ID : cannot dispose texture"); return; } SDL_DestroyTexture(it->second); std::cout << "--> Destroyed !"; m_textureMap.erase(id); std::cout << "--> Erased from TextureMap !\n\n";} | C++ Disposing Textures | c++;memory management;sdl | null
_cstheory.9527 | This is a restatement of an earlier question. Consider the following impartial perfect-information game between two players, Alice and Bob. The players are given a permutation of the integers 1 through n. At each turn, if the current permutation is increasing, the current player loses and the other player wins; otherwise, the current player removes one of the numbers, and play passes to the other player. Alice plays first. For example: (1,2,3,4): Bob wins immediately, by definition. (4,3,2,1): Alice wins after three turns, no matter how anyone plays. (2,4,1,3): Bob can win on his first turn, no matter how Alice plays. (1,3,2,4): Alice wins immediately by removing the 2 or the 3; otherwise, Bob can win on his first turn by removing the 2 or the 3. (1,4,3,2): Alice eventually wins if she takes the 1 on her first turn; otherwise, Bob can win on his first turn by not removing the 1. Is there a polynomial-time algorithm to determine which player wins this game from a given starting permutation, assuming perfect play? More generally, because this is a standard impartial game, every permutation has a Sprague-Grundy value; for example, (1,2,4,3) has value *1 and (1,3,2) has value *2. How hard is it to compute this value? The obvious backtracking algorithm runs in O(n!) time, although this can be reduced to $O(2^n poly(n))$ time via dynamic programming. | Permutation game redux | gt.game theory;combinatorial game theory | null
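As an illustration of the $O(2^n poly(n))$ dynamic program mentioned at the end, here is a minimal sketch that computes win/loss only (not full Sprague-Grundy values), assuming n is small enough that a subset of remaining positions fits in an int bitmask:

import java.util.HashMap;
import java.util.Map;

public class PermutationGame {
    private final int[] perm;
    private final Map<Integer, Boolean> memo = new HashMap<>();

    public PermutationGame(int[] perm) { this.perm = perm; }

    // true if the player to move wins on the subsequence selected by mask
    public boolean playerToMoveWins(int mask) {
        Boolean cached = memo.get(mask);
        if (cached != null) return cached;
        boolean win = false;
        if (!isIncreasing(mask)) {            // increasing => the player to move loses
            for (int i = 0; i < perm.length && !win; i++) {
                if ((mask & (1 << i)) != 0) { // try removing element at position i
                    win = !playerToMoveWins(mask & ~(1 << i));
                }
            }
        }
        memo.put(mask, win);
        return win;
    }

    private boolean isIncreasing(int mask) {
        int prev = Integer.MIN_VALUE;
        for (int i = 0; i < perm.length; i++) {
            if ((mask & (1 << i)) != 0) {
                if (perm[i] < prev) return false;
                prev = perm[i];
            }
        }
        return true;
    }

    public static void main(String[] args) {
        PermutationGame g = new PermutationGame(new int[] {2, 4, 1, 3});
        int full = (1 << 4) - 1;              // all four numbers still present
        System.out.println(g.playerToMoveWins(full) ? "Alice wins" : "Bob wins");
    }
}

Running it on (2,4,1,3) prints "Bob wins", matching the example above; there are at most 2^n reachable masks, each processed in O(n) plus the O(n) increasing check, giving the stated bound.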
_softwareengineering.246740 | I use Volley to return JSON from an API. I parse this JSON and add StoreItem objects to a List, then use an adapter to display them in a ListView. I use the cache functionality in Volley, and I'm wondering how I can approach the issue of deleting/adding items. I know I'll need to make a POST/DELETE request with Volley to actually save/destroy the items in the database, but for user-experience purposes I don't want to have to reload the view with a fresh Volley request. I also don't want the cache to update the view without recognizing the item that was just added/destroyed. The Android documentation for notifyDataSetChanged says: "Notifies the attached observers that the underlying data has been changed and any View reflecting the data set should refresh itself." In this case, is the underlying data the cache, my server database, or the items in the list? Please let me know if I can clarify this question. | What are the implications of the cache when using notifyDataSetChanged? | android;android development | null
_webmaster.58372 | Is it a good idea to create a company blog on Medium instead of setting it up on your own subdomain? What are the advantages and disadvantages of using Medium for your company blog? Can you post your article on Medium after posting it on your blog? How would you avoid a penalty by Google for duplicate content? | Is it better to host your own blog or use Medium? | seo;subdomain;duplicate content;blog;content | null
_webmaster.39175 | Can you set a custom error page for when IIS is stopped or recycling for an extended period of time? | IIS 7.5: setting an error page with IIS stopped | iis7 | null
_codereview.106669 | I've spent some time writing a library to parse JSON into a statically typed object. Am I following most of the standard coding guidelines? Any thoughts on what I could do better, or any edge cases that are missing and aren't taken care of? private static T ParseDynamic(ExpandoObject input){ //Parse when given an ExpandoObject T output = default(T); var dict = input as IDictionary<string, object>; ParseDictionary<T>(dict, out output); return output;} protected static void ParseDictionary(IDictionary<string, object> Dict, out object Target, Type explicitType){ if(Dict == null) { throw new InvalidOperationException("Dictionary was null, cannot parse a null dictionary"); } if (explicitType.IsArray) { var length = Dict.Keys.Count(); Target = (Array)Activator.CreateInstance(explicitType, new object[] { length }); } else { Target = Activator.CreateInstance(explicitType); } foreach (var property in Target.GetType().GetProperties()) { var propertyName = property.Name; if (Dict.ContainsKey(propertyName) && Dict[propertyName] != null) { var val = Dict[propertyName]; var propertyVal = explicitType.GetProperty(propertyName); var expectedType = property.PropertyType; var valType = val.GetType(); if(valType == expectedType) { //Hurray, we matched! propertyVal.SetValue(Target, val); } else if (valType != expectedType && val is IConvertible) { Type safeType = Nullable.GetUnderlyingType(expectedType) ?? expectedType; //Special Case - INT64 to DATETIME Conversion (UNIX Time) if((valType == typeof(long) || valType == typeof(long?)) && (safeType == typeof(DateTime) || safeType == typeof(DateTime?))) { var longValue = (long)Convert.ChangeType(val, typeof(long)); var dateValue = UNIX_EPOCH.AddSeconds(longValue); val = dateValue; } //Convert if possible var explicitVal = (val == null ?
null : Convert.ChangeType(val, safeType)); propertyVal.SetValue(Target, explicitVal, null); } else if (val is IDictionary<string, object>) { //Parse non-simple object var propType = propertyVal.PropertyType; object explicitVal = Activator.CreateInstance(propType); ParseDictionary(val as IDictionary<string, object>, out explicitVal, propType); propertyVal.SetValue(Target, explicitVal); } else if (val is IList) { //Parse list/enumeration/array Type elementType; if (expectedType.IsArray) { //Array type is explicitly included with GetElementType elementType = expectedType.GetElementType(); } else if (expectedType.IsGenericType) { //Get List type by inspecting generic argument elementType = expectedType.GetGenericArguments()[0]; } else { //Not sure how we'd get here if we're neither an array nor generic, but we can't really do much continue; } //Create the necessary List implementation that we need var listType = typeof(List<>); var typedList = listType.MakeGenericType(elementType); var explicitList = (IList)Activator.CreateInstance(typedList); foreach(var element in val as IList<object>) { object explicitElement; ParseDictionary(element as IDictionary<string, object>, out explicitElement, elementType); explicitList.Add(explicitElement); } if(property.PropertyType.IsArray) { //Convert from list to array if necessary var arrayType = elementType.MakeArrayType(); var array = (Array)Activator.CreateInstance(arrayType, new object[] { explicitList.Count }); explicitList.CopyTo(array, 0); propertyVal.SetValue(Target, array); } else { propertyVal.SetValue(Target, explicitList); } } else { //Attempt to set it - will error if not compatible and all other checks are bypassed propertyVal.SetValue(Target, val); } } }} protected static void ParseDictionary<K>(IDictionary<string, object> Dict, out K Target) where K : class, new(){ Target = new K(); var explicitType = Target.GetType(); var outObject = new object(); ParseDictionary(Dict, out outObject, explicitType); Target = outObject as K;} | Parsing an ExpandoObject into a typed class using reflection | c#;parsing | protected static void ParseDictionary(IDictionary<string, object> Dict, out object Target, Type explicitType) I would say this method declaration is a big code smell. Having a void method with an out parameter just isn't right. Why don't you let the method return an object? There is just no reason not to. Although it has already been mentioned by @akw5013 in his answer: based on the .NET naming guidelines, method parameters should be named using camelCase casing. If you choose a style for yourself, which is a valid decision, you should stick to that style. At this declaration you are mixing styles, which is always a bad idea. if (Dict.ContainsKey(propertyName) && Dict[propertyName] != null){ var val = Dict[propertyName]; var propertyVal = explicitType.GetProperty(propertyName); This will be incredibly slow.
Internally this calls the FindEntry(key) method 3 times, through the following methods: public bool ContainsKey(TKey key) { return FindEntry(key) >= 0;} public TValue this[TKey key] { get { int i = FindEntry(key); if (i >= 0) return entries[i].value; ThrowHelper.ThrowKeyNotFoundException(); return default(TValue); } set { Insert(key, value, false); }} If you used Dictionary.TryGetValue(), this would result in only one call to public bool TryGetValue(TKey key, out TValue value) { int i = FindEntry(key); if (i >= 0) { value = entries[i].value; return true; } value = default(TValue); return false;} but fortunately this problem is easy to fix, like so: object val; var propertyName = property.Name; if (Dict.TryGetValue(propertyName, out val) && val != null) { var propertyVal = explicitType.GetProperty(propertyName); That was easy, wasn't it? if(valType == expectedType){ //Hurray, we matched! propertyVal.SetValue(Target, val);}else if (valType != expectedType && val is IConvertible) If the execution comes to the else if, there is a 100% chance that valType != expectedType will be true. else if (valType != expectedType && val is IConvertible){ Type safeType = Nullable.GetUnderlyingType(expectedType) ?? expectedType; //Special Case - INT64 to DATETIME Conversion (UNIX Time) if ((valType == typeof(long) || valType == typeof(long?)) && (safeType == typeof(DateTime) || safeType == typeof(DateTime?))) { var longValue = (long)Convert.ChangeType(val, typeof(long)); var dateValue = UNIX_EPOCH.AddSeconds(longValue); val = dateValue; } //Convert if possible var explicitVal = (val == null ? null : Convert.ChangeType(val, safeType)); propertyVal.SetValue(Target, explicitVal, null);} The use of dateValue can just be skipped; assigning the return value of UNIX_EPOCH.AddSeconds() to val is enough. The ternary expression would IMO be better as a plain if..else, like so: if (val == null){ propertyVal.SetValue(Target, null, null);}else { propertyVal.SetValue(Target, Convert.ChangeType(val, safeType), null);} else if (val is IDictionary<string, object>){ //Parse non-simple object var propType = propertyVal.PropertyType; object explicitVal = Activator.CreateInstance(propType); ParseDictionary(val as IDictionary<string, object>, out explicitVal, propType); propertyVal.SetValue(Target, explicitVal);} Here you are creating an instance of propType which is overwritten inside the recursively called ParseDictionary() method. Just skip it. var listType = typeof(List<>);var typedList = listType.MakeGenericType(elementType); The var listType is only used once, at the next line of code. You can compact this like so: var typedList = typeof(List<>).MakeGenericType(elementType); and if you want you can just keep your xxxType naming pattern, so typedList will become listType.
The creation of the List in this else if branch should be extracted to a separate method, like so: private static IList ParseAsList(object val, Type expectedType, PropertyInfo property){ Type elementType = null; if (expectedType.IsArray) //Array type is explicitly included with GetElementType { elementType = expectedType.GetElementType(); } else if (expectedType.IsGenericType) //Get List type by inspecting generic argument { elementType = expectedType.GetGenericArguments()[0]; } var listType = typeof(List<>).MakeGenericType(elementType); var explicitList = (IList)Activator.CreateInstance(listType); foreach (var element in val as IList<object>) { object explicitElement = ParseDictionary(element as IDictionary<string, object>, elementType); explicitList.Add(explicitElement); } return explicitList;} which results in the branch being: else if (val is IList){ //Parse list/enumeration/array if (!(expectedType.IsArray || expectedType.IsGenericType)) { //Not sure how we'd get here if we're neither an array nor generic, but we can't really do much continue; } //Create the necessary List implementation that we need var explicitList = ParseAsList(val, expectedType, property); if (expectedType.IsArray) { //Convert from list to array if necessary var arrayType = expectedType.GetElementType().MakeArrayType(); var array = (Array)Activator.CreateInstance(arrayType, new object[] { explicitList.Count }); explicitList.CopyTo(array, 0); propertyVal.SetValue(Target, array); } else { propertyVal.SetValue(Target, explicitList); }} I couldn't come up with a better name than ParseAsList(), so it's just that. The former ParseDictionary() method will now look like so: protected static object ParseDictionary(IDictionary<string, object> dict, Type explicitType){ if (dict == null) { throw new ArgumentNullException("dict", "Dictionary was null, cannot parse a null dictionary"); } object target; if (explicitType.IsArray) { var length = dict.Keys.Count(); target = (Array)Activator.CreateInstance(explicitType, new object[] { length }); } else { target = Activator.CreateInstance(explicitType); } foreach (var property in target.GetType().GetProperties()) { var propertyName = property.Name; object val; if (dict.TryGetValue(propertyName, out val) && val != null) { var propertyVal = explicitType.GetProperty(propertyName); var expectedType = property.PropertyType; var valType = val.GetType(); if (valType == expectedType) { propertyVal.SetValue(target, val); } else if (val is IConvertible) { Type safeType = Nullable.GetUnderlyingType(expectedType) ??
expectedType; //Special Case - INT64 to DATETIME Conversion (UNIX Time) if ((valType == typeof(long) || valType == typeof(long?)) && (safeType == typeof(DateTime) || safeType == typeof(DateTime?))) { var longValue = (long)Convert.ChangeType(val, typeof(long)); val = UNIX_EPOCH.AddSeconds(longValue); } if (val == null) { propertyVal.SetValue(target, null, null); } else { propertyVal.SetValue(target, Convert.ChangeType(val, safeType), null); } } else if (val is IDictionary<string, object>) { //Parse non-simple object var propType = propertyVal.PropertyType; object explicitVal = ParseDictionary(val as IDictionary<string, object>, propType); propertyVal.SetValue(target, explicitVal); } else if (val is IList) { //Parse list/enumeration/array if (!(expectedType.IsArray || expectedType.IsGenericType)) { //Not sure how we'd get here if we're neither an array nor generic, but we can't really do much continue; } //Create the necessary List implementation that we need var explicitList = ParseAsList(val, expectedType, property); if (expectedType.IsArray) { //Convert from list to array if necessary var arrayType = expectedType.GetElementType().MakeArrayType(); var array = (Array)Activator.CreateInstance(arrayType, new object[] { explicitList.Count }); explicitList.CopyTo(array, 0); propertyVal.SetValue(target, array); } else { propertyVal.SetValue(target, explicitList); } } else { //Attempt to set it - will error if not compatible and all other checks are bypassed propertyVal.SetValue(target, val); } } } return target;} As you can see, I have added some vertical space (new lines) to structure the code a little bit better, which makes it more readable too.
_webmaster.52310 | I am building a new website and I have a couple of domains to choose from. For SEO, what are some tips for choosing a good domain? For instance, which, in SEO terms, would be better: websitesfor[subject-of-website].com or [subject-of-websites]-websites.com? | Choosing a Domain for SEO | seo;domains | As far as SEO for Google goes, domains that match keywords, otherwise known as Exact Match Domains (EMD), are no longer of benefit. See this for more on that. Therefore, you should pick what's best for building a brand for your website and business that customers and the public will come to recognize. It would seem [subject-of-websites]-websites.com might be more identifiable, since the word "websites" is so common that it might not be as easily recognized if placed first. If possible, try to choose a domain without a hyphen, and with as few characters as possible, to aid in typing and remembering it.
_codereview.151309 | Suppose rectangles are parallel to the x-axis/y-axis. Check if two rectangles overlap or not, and if they do, output the overlap area. Here is my code; I track the min/max x-coordinate and min/max y-coordinate for each rectangle. Using Python 2.7. Any comments on code bugs, code style and performance improvements in terms of algorithm time complexity are appreciated. class Rectangle: def __init__(self, min_x, max_x, min_y, max_y): self.min_x = min_x self.max_x = max_x self.min_y = min_y self.max_y = max_y def is_intersect(self, other): if self.min_x > other.max_x or self.max_x < other.min_x: return False if self.min_y > other.max_y or self.max_y < other.min_y: return False return True def get_insersec_region(self, other): if not self.is_intersect(other): return None min_x = max(self.min_x, other.min_x) max_x = min(self.max_x, other.max_x) min_y = max(self.min_y, other.min_y) max_y = min(self.max_y, other.max_y) return Rectangle(min_x, max_x, min_y, max_y) def __str__(self): return str(self.min_x) + '\t' + str(self.max_x) + '\t' + str(self.min_y) + '\t' + str(self.max_y) if __name__ == "__main__": r1 = Rectangle(0,10,0,10) r2 = Rectangle(5,15,5,15) print r1.is_intersect(r2) print r1.get_insersec_region(r2) | Check if two rectangles overlap | python;algorithm;python 2.7 | null
_scicomp.7228 | I'm relatively new to CFD modelling and I'm making a VOF model of a rectangular box (l x w x h = 120 x 80 x 8 m) with an inlet (w x h = 10 x 8) on the long side. The pressure outlet is the upper face of the box (l x w = 120 x 80 m). The box is initially filled with 5 m of water. Water is flowing in through the inlet, with velocities at the inlet of around 0.1 to 0.01 m/s. How many cells/elements do I need to get a good indication of the flow pattern in the box? | How many cells/elements do I need? | mesh generation;elements | The answer to your question can depend on quite a few things, such as whether you need a turbulence model (and which one you choose if so), and how you are handling the top of your fluid, since moving interfaces are nontrivial. You are also imposing discontinuities on the boundary at the edges of your inlet, so a nonuniform mesh would be advantageous for you. While someone may be able to give you a real answer, the best way is to find it yourself. Mesh your problem with what you deem a reasonable mesh, solve it, mesh it again with twice as many points, and see if your answers changed. If they didn't, you are done, and you have your answer. If they did, either double your number of points again if you think you are close, or start over with the meshing based on where your biggest errors are. Not only will this guarantee a good answer for your problem, it will also start to build your intuition for what types of features need more mesh points, so the next time you will start with a much better mesh.
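The refine-and-compare procedure in this answer can be written down as a small driver loop. A rough sketch (solveOn() stands in for a full CFD run at a given resolution and returns one monitored quantity, e.g. a probe velocity; the names and the tolerance handling are mine, not from the answer):

public final class MeshStudy {
    interface Solver { double solveOn(int cellsPerDirection); }

    static int convergedResolution(Solver solver, int startCells, double tol, int maxDoublings) {
        int n = startCells;
        double previous = solver.solveOn(n);
        for (int i = 0; i < maxDoublings; i++) {
            n *= 2;                                // refine: double the resolution
            double current = solver.solveOn(n);
            double relChange = Math.abs(current - previous) / Math.max(Math.abs(previous), 1e-30);
            if (relChange < tol) return n;         // answers stopped changing: done
            previous = current;
        }
        throw new IllegalStateException("No mesh convergence after " + maxDoublings + " doublings");
    }

    public static void main(String[] args) {
        Solver fake = n -> 1.0 + 1.0 / n;          // toy stand-in that converges as n grows
        System.out.println("Converged at n = " + convergedResolution(fake, 10, 1e-3, 20));
    }
}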
_softwareengineering.328602 | If I have all my business logic in code and make use of Entity Framework, in what situations (if any) would I be better off moving some business logic to a stored procedure, instead of keeping it all in code? To be clear, I mean in conjunction with the current setup (business logic in code), not instead of it. I have seen a number of similar questions that are asking for the pros and cons of having all business logic in stored procedures, but I haven't found much regarding using stored procedures sparingly for edge-case logic, while keeping the rest of the business logic in code. If it makes a difference, I am using MSSQL and Entity Framework. These are the situations where I have used stored procedures before: (1) A complicated report that was taking minutes to run (this was a page in a web app). I found I could write SQL that was much more efficient (only taking seconds to run) than what LINQ was providing. (2) A web application needed to read and write to a few tables on a separate database which contained a lot of other, sensitive information that was irrelevant to the application. Rather than giving it access to everything, I used a stored procedure that only does what is needed, and only returns limited information. The web application could then be given access to this stored procedure only, without access to any tables etc. Other posts I have looked at before asking this question: "Stored Procedures a bad practice at one of worlds largest IT software consulting firms?"; "Pros and Cons of holding all the business logic in stored procedures in web application"; https://stackoverflow.com/questions/15142/what-are-the-pros-and-cons-to-keeping-sql-in-stored-procs-versus-code; https://dba.stackexchange.com/questions/2450/what-are-the-arguments-against-or-for-putting-application-logic-in-the-database/2452#2452; "When not to use ORM and prefer stored procedures?" | When should I use stored procedures? | design;sql server;orm;business logic;stored procedures | You have a couple of perfectly good scenarios already. There are lots of other reasons too. EF is really good at CRUD and at pretty straightforward reporting. Sometimes, though, EF is not the perfect tool. Some other reasons (scenarios) to consider using stored procedures in combination with Entity Framework would include: (1) You have complex units of work, perhaps involving many tables, that cannot be wrapped in a transaction easily using the features of EF. (2) Your database does not play well with EF because it fails to take advantage of declarative referential integrity (foreign key constraints). This is usually a bad scenario to find yourself in, but there are sometimes appropriate scenarios, such as databases used for ETL processes. (3) You need to work with data that crosses server boundaries with linked servers. (4) You have very complex data retrieval scenarios where bare-metal SQL is needed in order to ensure adequate performance. For example, you have complex joins that need query hints to work well in your specific situation. (5) Your application does not have full CRUD permissions on a table, but your application can be allowed to run under a security context that your server trusts (application identity, rather than user identity, for example). This may come up in situations where DBAs restrict table access to stored procs only, because it gives them more granular control over who can do what. I'm sure there are many more besides these.
The way to determine the best path forward in any particular circumstance is to use some common sense and to focus on the primary goal, which should be to write high quality, easily maintainable code. Pick the tool(s) that give you this result in each case. |
_softwareengineering.262692 | We've got a global system that we are attempting to solve a permissions issue around. Currently, our system serves a number of different applications out to our clients, and each client has their own list of user types and users for our system. However, all of these things can be independent of each other, such that Client A can have access to Application A and Application B. Client A might also have several different User Types, all with different Users. Each 'level' of usage would add a new level of complexity to how we give permissions out to everything. For example: I might have three permissions, called Grape, Purple, and Car. At the global level (meaning all applications, clients, user types, and users), I deny the permission on Grape, but allow permission for Purple and Car. That means that no application, no client, no user type, and no user can access Grape, but they can access Purple and Car. So now, we have two applications, named Fruit and Auto. Currently, both Fruit and Auto would look down to the global permissions created for Grape, Purple, and Car and use those. So, this is how things would look: Fruit — Grape: Denied, Purple: Granted, Car: Granted; Auto — Grape: Denied, Purple: Granted, Car: Granted. But, for the applications, this might not make sense. So, I would create an application-specific permission to grant permissions for Grape and deny permissions for Car for the application Fruit. So now, the permissions would look like this: Fruit — Grape: Granted (a), Purple: Granted, Car: Denied (a); Auto — Grape: Denied, Purple: Granted, Car: Granted. While I still have the global permissions telling Auto what permissions to use, I have specific permissions telling Fruit that it needs to grant and deny different permissions there. Now, we'd add another level of complexity and add in a client. Let's say I have a client called Fox. Fox uses both applications, Fruit and Auto. Right now, without adding anything else, the permissions to the three objects (Grape, Purple, and Car) come from the application-wide permission level. But maybe Fox wants things done differently. So, instead of having Purple granted on application Fruit, it denies that permission specifically. In terms of data, that would be a specific record you would add just for client Fox that overrides the permission of Purple to be denied, regardless of what the application or global permissions said for Purple. So, now you have something that looks like this: Fruit/Fox — Grape: Granted (a), Purple: Denied (c), Car: Denied (a); Auto/Fox — Grape: Denied, Purple: Granted, Car: Granted. Remember, too, that a lot of this can be pretty abstracted. You could have clients that only use one application, three applications, or every application you have in your system. And for those clients, you might have very specific user lists that can only use what the client can use, so they'd be inheriting permissions down from the client unless specifically overwritten. Obviously, this can get increasingly complex when you add in a level for the user type and then for the user. Keeping track of it all can be very cumbersome, but it's something we have to account for based on our system's model. So, we came up with a scoring solution for permissions. We would query the database and tables and pass in the current application, client, user type, and user to the database for whatever we have current at the time. The database would then use a scoring solution to determine if we can see or not see a particular permission.
Application would be a score of 1, client would be a score of 2, user type would be 4, user would be 8, etc. The more defined objects you put in, the higher the maximum score can get. This gives us a very clear definition of what permissions are active for a given set of objects. Our real problem now is performance. We now have 5 levels of objects in our system and might be adding more, if the design of the system requires it. However, querying all of these levels is a serious performance hit on SQL, and each object will just make the problem exponentially worse. My question is this: how can I achieve the same type of permission inheritance system here, and keep the ability to add new levels when needed, without exponentially increasing the performance hit? There have to be other companies and applications that have this same problem and have better solutions to it than we do. | Designing a better performing total permissions setup for multiple permission levels | design patterns;performance;permissions | null
_unix.345138 | I have an old Sun Blade 1500 with an UltraSPARC IIIi 1.5 GHz and 2GB ECC RAM, running SunOS 5.10 on a ufs filesystem. The hardware (MB, HDD and video card) is failing on me and I'm searching for a way to emulate it on x86 hardware. I have a dd raw copy of the HDD that I tried to mount on Linux, but I'm getting: disk cannot be accessed. I read about emulating it with qemu but I don't have much experience with Sun/Solaris and Linux environments. Has anyone tried and succeeded in emulating a system like this? Thank you | Sun UltraSparc IIIi Solaris 5.10 emulation | solaris;qemu;sparc | null
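A hedged starting point for the QEMU route (untested here — QEMU's sun4u machine emulation is incomplete, so Solaris 10 is not guaranteed to boot, and blade-hdd.img is just a placeholder name for the dd image):
    qemu-system-sparc64 -M sun4u -m 1024 -drive file=blade-hdd.img,format=raw -nographic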
_unix.163002 | I am trying to read files one by one using the for loop below, passing arguments to the function. Argument 3 causes a lot of problems due to spaces in file names. Example: source_file_restructure(){ echo inside the function source_folder=$1 target_folder=$2 file_pattern=$3 delimiter=$4 echo $file_pattern echo Getting inside the function.... for file_name in `ls -Al ${source_folder}*${file_pattern}*.csv` do some processing....... done The above ls command does not work properly when there are files containing white space, like file 1.csv. Input: Source Folder: /home/test/ File Pattern: file Function argument: source_file_restructure ${source_folder} ${target_folder} ${file_pattern} ${delimiter} Suggest some option to handle the problem. | How to List files which contain white space in filename? | linux;sed;awk | Replace for file_name in `ls -Al ${source_folder}*${file_pattern}*.csv` with: for file_name in ${source_folder}*${file_pattern}*.csv The output of a command in backticks, as in the first form above, is subject to word-splitting. The second form above does not use backticks. It will, by contrast, work with any file name, even ones that contain spaces, tabs, newlines or any other difficult character. Also, replace source_file_restructure ${source_folder} ${target_folder} ${file_pattern} ${delimiter} with: source_file_restructure "${source_folder}" "${target_folder}" "${file_pattern}" "${delimiter}" Without the double-quotes, the shell will perform word-splitting. With the double-quotes, the function will be called correctly even if source_folder or target_folder have spaces, tabs, newlines, or other difficult characters.
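One caveat worth adding to the glob form (my addition, not part of the original answer): if nothing matches the pattern, bash passes the glob through literally, so the loop body runs once with a non-existent name. Enabling nullglob beforehand makes an unmatched glob expand to nothing, so the loop simply does not run in that case:
    shopt -s nullglob   # unmatched globs expand to an empty list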
_webapps.105747 | I have a table of data that contains labels, and the date that the values next to those labels are summed in one row, and the data itself in subsequent rows like this:A B CSM 10,291 $3.09RCOM 5,171 $11.96ED 4,752 $5.70RS 31,748 $27.41AO 50,745 $5.4106/25/16 102,707 $53.57I would like to have a master total cell that sums only the cells that contain totals. So in English - Sum B if A is not text.I found the isnontext() function, and I know about sumif(), but I can't quite figure out how to make this work. Hopefully that's clear and someone can point me in the right direction. | Google Sheets, how to sumif value in another row is a date? | google spreadsheets | Seems:=sumif(A:A,>1,B:B) was adequate. There is no numeric value for text entries in a formula such as above, so they are ignored and only all the numeric ones need be 'captured', if the alternative to Text is only Date. |
_unix.375258 | ssh key types problem I use the -v option while trying to connect to another computer, so I see all the messages while it is trying to connect. I always specify the -i parameter (i.e. -i ~/.ssh/id_rsa ). However, it seems that all the keys are tested. I have an id_rsa, id_dsa and id_ecdsa. The messages tell me that the dsa key is rejected because it is not in . I searched through all the man pages for sshd_config, ssh_config, sshd, and ssh. I gound not one mention of PubkeyAuthenticationTypes. So, since I want to yes my dsa key, I decided to put a line in my sshd_config (on the target machine that said: PubkeuyAuthenticationTypes rsa, dsa, ecdsa Since that didn't have any beneficial effect, I changed the line to PubkeuyAuthenticationTypes ssh-rsa, ssh-dss, ssh-ecdsa That also had no effect, so I changed it back to what I originally had (rsa, dsa, ecdsa). I am quite sure that all my three public keys are in the target machines ~/.ssh/authorized_keys file. Yet, the sshd server still rejects my dsa and ecdsa keys, and I wonder if anyone can shed light on this situation. | difficulty using public key authenttication | ssh | null |
_webmaster.69133 | I have two websites that I would like to monitor and have created to google accounts for them:https://secure.example.com.au (secure sub-domains only)http://*.example.com.au (all domains including sub-domains)After I created the accounts and added the javascript codes, they don't appear to working on http://*.example.com.au. Is there anything that would cause only the secure variant to work? Or is it simple the javascript code I added isn't valid?One account is configured with the following configuration: var _gaq = _gaq || [];_gaq.push(['_setAccount', 'UA-2XXXXXXX-1']);_gaq.push(['_setDomainName', '.example.com.au']);_gaq.push(['_trackPageview']);(function() {var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);})();One account is configured as follows:(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)})(window,document,'script','//www.google-analytics.com/analytics.js','ga');ga('create', 'UA-5XXXXXXX-1', 'auto');ga('send', 'pageview');Also is there a way I can test that the codes are working? If I'm getting real-time data, it should work right? | How to debug Google Analytics when it it only works on my HTTPS site? | google analytics;analytics;tracking;debug | null |
_softwareengineering.37550 | One thing I'd like to do in C++ is to call non-member functions with the same syntax you call member functions:class A { };void f( A & this ) { /* ... */ }// ...A a;a.f(); // this is the same as f(a);Of course this could only work as long asf is not virtual (since it cannot appear in A's virtual table.f doesn't need to access A's non-public members.f doesn't conflict with a function declared in A (A::f).I'd like such a syntax because in my opinion it would be quite comfortable and would push good habits:calling str.strip() on a std::string (where strip is a function defined by the user) would sound a lot better than calling strip( str );.most of the times (always?) classes provide some member functions which don't require to be member (ie: are not virtual and don't use non-public members). This breaks encapsulation, but is the most practical thing to do (due to point 1).My question here is: what do you think of such feature? Do you think it would be something nice, or something that would introduce more issues than the ones it aims to solve? Could it make sense to propose such a feature to the next standard (the one after C++0x)?Of course this is just a brief description of this idea; it is not complete; we'd probably need to explicitly mark a function with a special keyword to let it work like this and many other stuff. | C++: calling non-member functions with the same syntax of member ones | c++;language design;language features | re your #2: Actually, making functions non-members often increases encapsulation, as Scott Meyers observed more than a decade ago. Anyway, what you describe sounds much like C#'s extension methods to me. They are good to soothe the minds of those that are frightened when they see totally free functions. :) Once you've braved the STL (which, BTW, is not the standard library, but only that part of it which comes from the original STL) where almost all functions are so free that they aren't even real functions, but function templates, you will no longer need those. In short: Don't try to bend a language to adapt to your mindset. Instead enhance your mindset to embrace the language's philosophy. You will emerge a bit taller from doing so. |
_unix.50 | I'd like to set up RAID 1 with two drives in Linux. I don't have a hardware RAID controller and don't want to buy one, so what is the best software RAID option for Linux? Note: if it makes a difference, I'm running Gentoo. | RAID 1 in linux | linux;gentoo;storage;software raid | Use mdadm, check the manpage. However, I will list one gotcha here. If you do this and really want reliability, you should make sure your master boot record is copied to both drives. By default it will likely only get copied to one drive. If that drive dies, you cannot boot from the other drive, even though all your data is safe. To copy the MBR to both drives, use something like dd to copy the first 446 bytes of one drive to the other: dd if=/dev/sda of=/dev/sdb bs=446 count=1 If you're building the RAID on top of the devices (/dev/sda) rather than on top of partitions on the devices (/dev/sda1), then you probably shouldn't do what I am suggesting, because you're writing data directly to the device underneath the md driver. At least, I've never tried it that way.
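Since the answer points at mdadm without showing the creation step, here is a hedged sketch (the device names are illustrative placeholders, and --create is destructive, so double-check them first):
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0                          # or whichever filesystem you prefer
    mdadm --detail --scan >> /etc/mdadm.conf    # config file location varies by distro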
_cs.77797 | Is it possible that image files store information such that a computer can identify what kind of image a file contains, based on how the file is stored? Or, when an image/video/file is sent or received over a network, can it be distinguished through metadata? | Can a computer differentiate between a normal image and a sensitive (dirty) image? | image processing | null
_cs.66420 | Describe the problem as a graph problem. I've tried to modify the breadth-first search algorithm, but haven't gotten anywhere. Any ideas? | Consider a social network. The goal is to find in the network the largest group of people who are all friends with each other. | graphs | null
_webmaster.23533 | I am trying to generate a sitemap for a customer's website, http://www.ruralmarketing.co.nz. When I use the online generators, they only ever create a sitemap with two URLs, the index and contact page. Is there something that I have done wrong with my URL structure? I can't find anything obvious. | Sitemap creators not visiting most pages | sitemap | null
_opensource.5052 | I used an AGPL-licensed library for a data study, and am hoping to release and link to a Jupyter Notebook with my methodology. I was hoping to learn what license, if any, I would need to include with my notebook. Here's my situation:The notebook imports a forked version of an AGPL tool (the fork only differs by three lines of code meant to catch an exception, which is currently an Open PR to the original repo.) This fork is in a repo separate from my notebook.The notebook relies heavily on methods from the AGPL fork, but does not copy or modify source code from it.The notebook will be published standalone, in its own/separate repo, with a comment linking to the forked repo above the relevant import statement in the notebook.From what I understand, the AGPL fork automatically inherits the AGPL license from the original repo. But I then am unsure about what happens when I import this fork into a notebook and publish it.If my notebook imports a fork of an AGPL library, but I don't make any changes or copy source code from the fork into my Jupyter notebook, can I release a notebook that imports/makes heavy use of this library without needing to license the notebook?My end goal is to link to the notebook in a data study post on a company blog so readers can see my methodology. I just want to make sure I am correctly licensing my study/providing proper attribution for this AGPL tool. | Do I need to license a Jupyter Notebook that uses a forked AGPL library? | licensing;license recommendation;license notice;agpl 3.0 | null |
_softwareengineering.275591 | I'm faced with having to do a bsf (find the first bit set) in a 512-bit bitmap. This is in the hot path, so I'd like to see how I can speed things up. Right now I'm maintaining a header entry to know in which 32-bit block the first set bit will be found. By doing a bsf on the header, plus a bsf in the entry designated by the header and some arithmetic, one can compute the bsf of the whole bitmap fairly fast. But this obviously requires maintaining the header in addition to the bitmap itself. I'd like to explore alternatives, notably SSE or AVX, but failed to come up with a solution. Is that possible at all? If yes, how? | Can SSE (or AVX) be used to do large bsf? | performance;assembly | null
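A hedged pointer, since the question is unanswered here: the usual SIMD pattern is to compare each 16- or 32-byte chunk against zero (PCMPEQB/VPCMPEQB), extract a byte mask (PMOVMSKB/VPMOVMSKB), scan for the first chunk whose equals-zero mask is not all ones, and finish with a scalar TZCNT/BSF inside that chunk — so SSE/AVX effectively replaces the header as the coarse locator, while the final bit index still comes from a scalar bit-scan.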
_unix.387119 | I have a Debian 9 VM server installed and I want to open ports 139 & 445. How do I do that? My firewall status is as follows: lsmod | grep ip_tables ip_tables 24576 1 iptable_filter x_tables 36864 3 ip_tables,iptable_filter,xt_tcpudp iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp ACCEPT tcp -- anywhere anywhere tcp dpt:netbios-ssn ACCEPT tcp -- anywhere anywhere tcp dpt:microsoft-ds Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination I even tried this: iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 139 -j ACCEPT but the ports are still closed. | could not open ports 139 & 445 in Debian 9 | linux;debian;iptables;firewall;tcp | null
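A hedged aside worth checking before adding more rules: every chain above has policy ACCEPT plus explicit ACCEPT rules, so iptables is almost certainly not what is refusing these connections — a TCP port only reports as open when a daemon is actually listening on it. Ports 139/445 belong to Samba, so check for a listener and, if there is none, install and start the service:
    ss -tlnp | grep -E ':139|:445'              # is anything listening?
    apt install samba && systemctl start smbd nmbd   # hypothetical fix if Samba is missing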
_unix.97009 | I have installed Linux CentOS 5.5. During installation, I configured the LAN driver and gave it an IP address. After installation, I checked the network connection but it shows not connected. I then tried to enable DHCP (from broadband) but system-config-network shows Determining IP information for eth0... failed; no link present. Check cable I checked the cable and everything is OK with it. I then added check_link_down(){ return 1; } to /etc/sysconfig/network-scripts/ifcfg-eth0 but system-config-network still shows the same message. I have checked service NetworkManager status and service network status; all is fine, but I still can't connect to the network. | Linux CentOS 5.5 not connecting to Network | networking;centos | null
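A hedged first check (my addition): ask the kernel directly whether it sees a carrier. If these report no link, the check_link_down override only masks the symptom — the real problem sits below the config layer, in the cable, the switch port, or the NIC driver:
    ethtool eth0 | grep 'Link detected'
    cat /sys/class/net/eth0/carrier   # 1 = carrier present, after bringing the interface up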
_unix.349033 | So, there has been this problem with netcat that has limited a lot of what I can do with it.So, let's say I start a server:netcat -l -p 11000And then I just spam some crap into the terminal:netcat -l -p 11000wddwdwwweeeAfterwards, I connect locally in another console:netcat localhost 11000And every time, I get all of that crud I spammed earlier sent immediately to me.netcat localhost 11000wddwdwwweeeNow, I have done a million different things to try and get rid of this queued data, which have included messing with buffers and using udp. However, every time, I always get this preliminary data back.This has limited many of my experimental projects, like getting voice chat to work through console. All of the packets include this data, so if the client takes even a second to connect, there will be a delay. Of course, there are the normal voice chat problems that I didn't address, but if you leave the server hanging for 10 seconds, there will be a 10 second delay on initial connection.I have asked this question before, but since that didn't work, I worded it specifically to my problem, and gave an example.Hopefully someone can finally help me. | How to keep Netcat from keeping data before connection to client | netcat | null |
_codereview.148611 | The Challenge: A valid string is one in which all distinct characters occur the same number of times. Given a string S, check if the string can be made valid by removing less than or equal to one character. If yes, output YES. Otherwise output NO. Constraints: \$1 \le \lvert S \rvert \le 10^5\$ String S contains lowercase letters only (a-z) I'd appreciate any comments and suggestions as to how my code can be improved. function processString(str) { if(str.length === 1){ console.log("YES"); } else{ const letterCount = buildCount(str.trim()); console.log(checkValid(letterCount) ? "YES" : "NO"); }} function buildCount(str){ const letterCount = {}; const len = str.length; for (let i = 0; i < len; i++) { letterCount[str[i]] = (letterCount[str[i]] || 0) + 1; } return letterCount;} function checkValid(letterCount){ const totals = []; for(let key of Object.keys(letterCount)){ totals.push(letterCount[key]); } //only one can be different - and it has to be more than the others //if always compare to the first, doesn't work if that is the one that is more const compareVal = Math.min(totals[0], totals[1]); const diffOnes = totals.filter(function(val){ return val !== compareVal; }); if(diffOnes.length > 1 || diffOnes[0] > (compareVal + 1)){ return false; } return true;} | Check if string is or can be made into a valid string | javascript;strings;programming challenge | Your overall approach is reasonable, but note one edge case the current logic misses: a letter that occurs exactly once may be removed entirely, so a string like "abbbb" should be YES. That aside, I think a function of this sort does not necessarily need to be broken down into different functions. This function would still serve a single responsibility and, without any wider need of such really specific functions in a wider application context, I see no reason to break it apart like this. I think what you have can be simplified to something like this (with the edge case handled): function isValidString(str) { var counter = str.split('').reduce(function (cnt, letter) { cnt[letter] = (cnt[letter] || 0) + 1; return cnt; }, {}); // tally how many letters share each occurrence count var freqOfCounts = {}; for (var letter in counter) { freqOfCounts[counter[letter]] = (freqOfCounts[counter[letter]] || 0) + 1; } var counts = Object.keys(freqOfCounts).map(Number); if (counts.length === 1) return 'YES'; // every letter already occurs equally often if (counts.length > 2) return 'NO'; // three distinct counts cannot be fixed by one removal var low = Math.min(counts[0], counts[1]); var high = Math.max(counts[0], counts[1]); if (low === 1 && freqOfCounts[low] === 1) return 'YES'; // drop the one letter occurring once, e.g. 'abbbb' if (high === low + 1 && freqOfCounts[high] === 1) return 'YES'; // drop one occurrence of the single most frequent letter, e.g. 'aaabbbb' return 'NO'; } Note here we use the reduce function to build the letter counter object, eliminating the need for a separate function. We then tally how many letters share each occurrence count; validity reduces to a few comparisons on those tallies, with early exits on the impossible cases. I probably would not include console logging within the function itself, as I think the function should do only one thing, which is determine if the string is valid, not output it as well. Obviously, it is trivial to log the output of this function like console.log(isValidString(testString)); Be careful in ensuring your function and variable names are meaningful. Your functions are vaguely named and do not specifically convey what the caller should expect to happen. 'letterCount' as a variable name would lead me to expect an integer value to be contained therein, not an object with letter counts in it. A simple change to 'letterCounter' would make a tremendous difference in conveying the meaning of this variable.
_softwareengineering.351800 | What is happening now? Currently, the client (let's call them abc enterprise) provides us the XML files containing employee data; we have an XSLT with which we transform the XML files into web-service request files and call the Business Rules Engine Web-Services. The BRE provides the decision determined by the Rules Engine. We parse all this information from the Web-Service response and store it in the database. A UI application displays this data in tabular and chart formats. So the existing applications are: a TestHarness application that reads XMLs, transforms them into Web-Service requests, parses the responses and puts them into the database tables; and a UI application that displays the data from the database in tabular and chart formats. What is the new requirement? The requirement is that the new client (let's call them xyz enterprise) provides us database access (MySQL, Oracle etc.) and we need to generate the XMLs from that DB, which the application then transforms into BRE request XMLs (using the same XSLT transformation above), and then does the rest the same way. So basically we will have to get the data from the database directly instead of from the XML files that abc provides. The business team eventually wants a UI where they can map the DB fields to the XML fields, but the mapping part can be manual for the initial stage of this project. Ideally, there will be: mapping of the DB(s) to the fields required in the XML (some kind of UI, with a DB table dropdown and column dropdown on the left and the required XML fields on the right); then store this mapping and use it to pull data from the database to either generate XML files that will then be transformed into BRE request XMLs, or generate BRE request XMLs directly. So my questions are: How do I approach this project? How and where do I store these mappings? Am I generating an XSD (or is it some other file) after mapping, which I can leverage in Java to retrieve the data from the DB? What libraries/tools can I leverage? The existing applications are written in Java. Not a very good one, but I am attaching a mockup of the mapping screen. | Database to XML Mapping to generate XMLs from DB data | java;design;sql;xml | null
_unix.331888 | I have to run an old application that requires my display to run at 8 bit colors. This application will use a dynamic colormap (using XStoreColors). If my display already uses too many colors, there will not be enough free spaces in the map for the applicatoin (as this is not my application I don't know the details here).Currently we are looking into using a more modern window manager for this application (Gnome 2.28 in this case), instead of Motif Window Manager, which it currently uses. How can I reduce the amount of colors used by Gnome? Looks are not important, a good solution would be if only the colors black and white are used. Also in the start menu only a few (custom) options need to be present, and the task bar can be disabled. So far I have been able to reduce the amount of colors vastly with the following settings (from the appearance settings menu):Use the high contrast controls and icons.Using the window borders theme 'Mist' (this actually still uses some shades).Setting the background to a solid color.Using monochrome font rendering.This still leaves many grayscales (Antialiassing effects) in the icons, but I can disable those. Are there any Gnome themes that do this even better, or settings to improve on this further? | How do I reduce the amount of colors Gnome uses | gnome;colors;theme | null |
_unix.165902 | In the tcsh manpage, the effects of set complete = enhance are defined as follows: [...] completion 1) ignores case and 2) considers periods, hyphens and underscores ('.', '-' and '_') to be word separators and hyphens and underscores to be equivalent. As regards 1), I know that readline can be configured to ignore case through set completion-ignore-case on. So my question concerns only 2). For instance, if I have a directory containing several files with similar basenames but different extensions like this: $ ls file0.dat file1.dat file2.dat files.sh then I want the shell to be able to complete on the file extension: $ cat .sh<TAB> $ cat files.sh Is there a way to achieve or, at least, mimic this behavior in bash? EDIT: Following the first answers, here are other examples that show more accurately how the tcsh feature works: $ ls abc.foo abc.bar cab.foo cab.bar $ cat a.f<TAB> $ cat abc.foo or: $ cat .b<TAB> abc.bar cab.bar $ cat .bar Actually, tcsh completes both basename and extension. The behavior intended here is to list the possibilities if more than one, as usual with TAB completion, not to insert them all. | How to get the equivalent of tcsh's enhanced completion in bash? | bash;autocomplete | This is not quite what you need but close. The Ctrl + x, g shortcut (C-x g in emacsspeak) will list the expansions of a glob. So, in your example: $ cat *.b*<Ctrl><x><g> abc.bar cab.bar So, unlike what you describe for tcsh, this needs a valid glob. In other words, it does the equivalent of $ echo *.b* Note that the shortcut is pressing Ctrl and x together, then releasing them and pressing g. This is documented in man bash: glob-list-expansions (C-x g) The list of expansions that would have been generated by glob-expand-word is displayed, and the line is redrawn. If a numeric argument is supplied, an asterisk is appended before pathname expansion.
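A partial complement (my addition, not part of the original answer): readline does have a knob aimed at the hyphen/underscore half of point 2 — with case-insensitive completion enabled, completion-map-case makes '-' and '_' be treated as equivalent during matching. A sketch for ~/.inputrc:
    set completion-ignore-case on
    set completion-map-case on
    set show-all-if-ambiguous on
As far as I know, readline still has no equivalent of tcsh's period-as-word-separator behaviour, so completing cat .sh<TAB> to files.sh remains out of reach apart from the C-x g globbing described above.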
_unix.338672 | I'm a newbie on Debian Jessie and I've used vagrant recently. I tried to vagrant up a box, but after several failing attempts I gave up.However, I noticed that my free space has shrunk notably.I think this happened because the virtual machines I've tried to build, although unsuccessful, somehow managed to occupy that space.So, please forgive me if this is a silly question, but is there any command on terminal that shows all virtual disks on my linux partition, so I can find the useless ones and delete them? I've searched for similar questions on the web, but I was unable to find them. Thanks. | About listing virtual disks on debian | debian;disk;vagrant | null |
_unix.196475 | +-------------+| || 1 || ||=============|| || 2 || |+-------------+ Each tmux split has vim open. I would like to do stuff like yank a line from 1 and then paste it into 2 with the vim shortcuts. | Share a buffer between tmux / vim splits | vim;tmux | null
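A hedged sketch of one way to do this (assuming default tmux key bindings): route the text through a tmux paste buffer. In the first vim, visually select the lines and run
    :'<,'>w !tmux load-buffer -
then switch to the other pane and press prefix + ] (or run tmux paste-buffer). If both vim instances are built with +clipboard, yanking with "+y in one and pasting with "+p in the other is simpler still.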
_unix.205973 | Why is this? I made a remote desktop connection from a Mac to an Ubuntu server, and I get a command prompt automatically for some reason. | RDP enters command prompt automatically | ubuntu;remote desktop | null
_softwareengineering.157171 | I have a very big file delimited by some sequence of characters '*L*I*N*E'. The file will be of the order of 250G, and each line comes to around 600 bytes to 1000 bytes. I will be performing the following operations on the file: read the file line by line and, for each line, give it to a parser which will do some calculation on the line and update some stats. The parser will take roughly 15 micros per line. As of now I am using the BufferedReader to read the lines and pass them over to the parser in a single thread. My question is: if I have a separate reader thread which only reads the file and dumps everything in a queue (in memory) and have my parser act on the in-memory queue (in a separate consumer thread), can I achieve better throughput? Nothing changes except that my parser acts on the in-memory data and another thread takes only the IO operation (read the file and dump it in a queue). It is a complex piece of software in which the above is one part, so I am trying to speed up the above part. Hence I am not able to post the actual code. | Java BufferedReader vs Separate Producer consumer thread | java;concurrency;big data | 250G split by 600 bytes to 1000 bytes gives you about 250,000,000 lines. Spending 15 microseconds at parsing these will take 3750 seconds, i.e. about an hour (one hour is 3600 seconds). About an hour would be the time spent on parsing in between execution of reader code in the same thread. To estimate the throughput gain I'd next benchmark the single-threaded version. To save time I'd likely benchmark with a smaller file, like 2.5G (timing would have to be multiplied by 100 in this case) or even 250Mb (timing multiplied by 1000). With the benchmarked single-threaded timing, I'd get the (optimistic) estimate of throughput gain: a single-threaded timing of 24h would give me >=23h in multithreaded execution; 4h would give me >=3h; 2h would give me >=1h; 1.5h or less would give me >=1h (yeah, at less than 2h, parsing would be the bottleneck, not file reading). Actually, having benchmarked less than 2h, I'd also check if parsing could run concurrently, because in that case having more than one thread to parse could increase the gain. Assume 1MB/s IO, can I achieve performance throughput with multithreaded mode? Assuming the above, reading a 250G file would take about 250,000 seconds; 250 gigabytes is 250,000 times 1 megabyte. 250,000 seconds is about three days. Parsing in a separate thread would save you one hour or less of 3 days (~2% I think). It's up to you to decide whether it is worth it. I personally would rather think about something like GFS plus MapReduce to handle stuff like that.
_softwareengineering.334747 | I have a large number of classes with an abstract base class (A) that contains behaviour that must be supported by all the sub classes say [B..I]. In my code, I end up with a collection of objects that is created based on input from an external system. These objects belong to subclasses described above and they go through various operations in my code. All of these operations require some core and common behaviour so I use the abstract base class and I can call methods on a collection of objects using only the base type ( A.DoTheThing() ).The problem is, only a small subset of these types must support a common set of functionality. If I add the methods to the base class with a default implementation that does nothing or returns null then I'm going towards a God class. Only a few subtypes would override these methods but I can keep using the base type to process the collection. The large group of subclasses would do nothing in response to method calls based on the default empty implementation in the base class. The ones that override the default behaviour would do what they need to do. If I don't want to put behaviour where it mostly does not belong, I'd have to define an interface (X) and implement it for the small subset of subtypes that'll be included in the collection. However, I now have a collection based on the type A and at some point after using methods from A, I need to perform operations on the subset of objects that implement X. The only option I'm left with is to filter the collection based on instanceof(X) and call relevant methods. Which one is the lesser evil and do I have another choice here? | Avoiding instanceof vs abstract God Class. Do I have an alternative? | design patterns;object oriented;polymorphism | Don't avoid things just because they are considered evil - understand first why they are considered evil, and then decide how to avoid that evilness.You are warned about these evil things because they are usually the easy, straightforward way to solve the problem - otherwise people wouldn't always attempt to do them and the warning would be redundant. The problem is that if you don't understand why method A is evil, you may end up using method B which is evil for the exact same reason as method A, but because method B is more awkward than A it was not very popular so nobody felt the need to warn you of it.I can't find it, but I remember seeing avoiding singleton by making all the class' fields static so that each new object of that class will use the same state(but there is no single instance so it's not a singleton!)Anyways, this is what you are doing with that master base class. Downcasting is not evil in your case, but that master base class suffers from the same problem that downcasting usually suffers from!Why is downcasting considered evil?Consider this:public abstract class Base { // bla bla bla}public class ImplA extends Base { // bla bla bla}public class ImplB extends Base { // bla bla bla}// somewhere elsevoid doit(Base base) { if (base instanceof ImplA) { ImplA implA = (ImplA)base; // ImplA specific code } else if (base instanceof ImplB) { ImplB implB = (ImplB)base; // ImplB specific code }}Why is this bad? Because it doesn't handle ImplC. But there is no ImplC!!!Well, there is no ImplC now, but nothing stops me - or someone else - from writing it next year, making it extend Base. And then they will create an instance of ImplC, and that instance will be passed to doit - which will probably handle it wrong. 
Because, while we don't know what ImplC means and how doit should handle it, if doit needs special code for ImplA and special code for ImplB, we should assume it'll need special code for ImplC. But you may not be able to add that code (because doit may be part of a third-party library), or you may simply not know you need to, because there is no compilation warning that doit does not have special code to handle ImplC. You'll realize eventually though, after a few hours/days of debugging trying to figure out why your program doesn't work... This is why downcasting is frowned upon, and it's recommended to favor polymorphism and method overriding: public abstract class Base { // bla bla bla public abstract void doit();} public class ImplA extends Base { // bla bla bla @Override public void doit() { // ImplA specific code }} public class ImplB extends Base { // bla bla bla @Override public void doit() { // ImplB specific code }} With this design, ImplC can have its own override of doit with its own specific code, and if you forget to write it you'll get a compilation error. The problem with the master base class I argue that your base class suffers from a similar problem - you need to modify the base class to add new functionality. It's not the worst case here, since you have access to the code and you won't forget to add the base methods because you need to use them. But still - you are avoiding downcasting by creating a construct that suffers from downcasting's general problem... Downcasting is OK in this case The problem with doit was that it was meant to deal with any Base, but in practice only handled the known types of Base - ImplA and ImplB. Your case is different. You are looping over a collection of A, looking for, let's say, only instances of B, downcasting them to B and using them as B. This loop is not meant to handle all instances of A - it's only meant to handle the Bs in the collection. C, D, ... I will have their own loops, probably elsewhere, that deal with them. And if you add a new subclass J it'll also need new loop(s) - but writing these loops is a fundamental part of subclassing A, not some random method somewhere that needs to be amended. What if you need to handle multiple subclasses in the same loop? If you find yourself writing something like this: for (A a : theBigCollection) { if (a instanceof B) { B b = (B)a; // B specific code } else if (a instanceof G) { G g = (G)a; // G specific code } // Other subclasses are not handled here } You are repeating doit's problem - if this loop needs to handle both B and G, how do we know it doesn't need to handle J? In this case, you should try to bundle this behavior with a mid subclass or with an interface - let's call it X: public interface X { // bla bla bla } public class B extends A implements X {} public class G extends A implements X {} // the loop from before for (A a : theBigCollection) { if (a instanceof X) { X x = (X)a; // X specific code } } Now if J needs to, it can implement X and get handled in this loop. This is probably what you need, since you mentioned a small set of subclasses implementing the same methods. You can have multiple such interfaces if you need to. Epilogue The problem with downcasting is the possibility of adding new subclasses that will go through the same code paths but won't have specialized code that handles them.
So when considering whether or not you should use downcasting, always think what will happen if someone adds a new subclass. I believe this is a good place to use the wider interpretation of the Zero-One-Infinity Rule. In any code unit, you should handle zero, one or infinite (== all possible) subclasses of A: zero: You don't use A in that code. Not interesting. one: One subclass, or one interface that you try to cast to. But not one as in I have one apple - it should be one as in God is one. It should be conceptually impossible to add alternative downcasts, no matter what other subclasses will be added; you are still allowed to change the one type to some interface - as long as it remains one. infinity: You deal with all possible subclasses of A - which means you don't downcast and use A itself. Note that there may be multiple levels - in my last example I was first downcasting to X (one), and then writing code for X which will work on all possible subclasses of X (infinity).
_codereview.36657 | I have previous revisions of my deck of cards project, but I may not need to link them here since the emphasis is on the use of templates.I've never used them before until now, and I like how intuitive my implementation has become. However, I'm also aware that there are many pitfalls in C++ templates. Although I don't think this implementation uses them heavily, I'd still like to know if I'm at a good start before I use them again.I'm also a bit unsure about some of my naming now, considering this is a template class. For instance, I've changed add()'s parameter from card to item because it cannot be assumed that the user will only use cards. Although it makes sense to just use cards (or anything representing cards), it would not seem practical to limit the implementation to just that usage. I would prefer this to be reusable.Of course, I'm okay with any other improvements that can be made.Deck.hpp#ifndef DECK_HPP#define DECK_HPP#include <algorithm>#include <stdexcept>#include <vector>template <class T> class Deck{private: std::vector<T> original; std::vector<T> playable;public: typedef typename std::vector<T>::size_type SizeType; void add(T const&); T draw(); void reset(); void shuffle(); SizeType size() const { return playable.size(); } bool empty() const { return playable.empty(); }};template <class T> void Deck<T>::add(T const& item){ original.push_back(item); playable.push_back(item);}template <class T> T Deck<T>::draw(){ if (playable.empty()) { throw std::out_of_range(deck empty); } T top(playable.back()); playable.pop_back(); return top;}template <class T> void Deck<T>::reset(){ playable = original;}template <class T> void Deck<T>::shuffle(){ if (!playable.empty()) { std::random_shuffle(playable.begin(), playable.end()); }}#endif | Use of templates with templated Deck class | c++;template;playing cards | Bottom line: your code here is focused and solid. It's a well-written introductory use of templates. However I'm not sure it's a great scenario in which to use them. First I'll talk about ways to improve its reusability, and then I'll say more about why I think it's a questionable scenario.As you start trying to implement your own templatized data structure, there are a lot of concerns to think about. Above all else you need to consider how your data structure will be used; what operations will consumers of it expect, what limitations will they have to put up with, and whether any of the data structure's implementation choices lock them into bad patterns. In general I would say you would do well to follow the lead of the STL data structures unless you have specific reasons not to. Here are a few of the places where you currently differ:void Deck<T>::add(T const &)This one actually is pretty much spot on. But I want to talk about the implications of how it's implemented. You are requiring T to be copyable. Thus if a consumer of this class needs to compare cards, T will need to compare itself through means other than by its address. This isn't particularly unusual, but I just wanted to be clear that comparing T instances by address is invalidated by Deck<T>'s implementation.The biggest question mark here is the lack of a range insertion, perhaps via constructor or add(InputIterator first, InputIterator last). 
Similarly there's no index-based insertion, but that wouldn't mix well with the two vectors that are often out of sync.T Deck<T>::draw()Before C++11 introduced move semantics, there was not necessarily a low cost way to examine and remove an item from a data structure. This is why vector and other data structures let you examine the front or back, but don't return the removed item on a pop_back(). While it's perfectly good to wrap the functionality of the underlying storage, as the cost of copying T goes up, so does the cost of this implementation of draw(). Implementing it in some other fashion (typically as two separate methods for peeking and removing) is suggested before C++11.Taking into account what Corbin and Loki Astari point out, it's more relevant to consider weak and strong exception guarantees (essentially the ability to reason about correctness in unusual situations) instead of the cost of copying (thanks especially to RVO). Whether you need to care about either is still completely dependent on your use case in practice, and thus only of extreme importance in general purpose data structures, or in ones you know need to offer it.Inside draw()Your implementation of draw() verifies that the playable deck has items remaining. This is certainly good for catching errors, but leads to repeated checks for the same data. Either the calling code will look like while (true) { ::: draw(); ::: } and will have to use a try/catch to find out when the deck was empty, or it will look like while (!deck.empty()) { ::: draw(); ::: }. The former is a questionable way to use exceptions, as an empty deck is hardly an unexpected condition, but the latter means both the caller and callee are repeating the empty check. It's also possible that the caller knows this in other ways, such as having put 52 cards in the deck and then looping 52 times to empty it.As an aside, it can still be useful to have this sort of check available by another name (consider the difference between vector<T>::operator[] and vector<T>::at), or available through a compile-time option (such as a DEBUG build). But it's typically good to make an unchecked version available (vector's operator[] case).void Deck<T>::reset()I'm lumping a couple comments in here, and this is less about following the STL's lead. Like in draw(), as the cost of copying T goes up or the size of your deck goes up, so does the cost of reset(); is it possible that you don't need to save the original order? If you don't, then perhaps it would be better to consider a different implementation:one vector<T> containing the order of the deck, one vector<T>::const_iterator or vector<T>::size_type containing the index/iterator through which you've already played. This would have a lot of repercussions, so I'm going to show the methods that it would impact, assuming the class has a size_type deck_top, and that we remove original. I like the shorter code, but that has less to do with the index and more to do with the other changes.template <class T> bool Deck<T>::empty() const{ return deck_top >= playable.size();}template <class T> T const & Deck<T>::draw() // note we can now return a reference{ return playable[deck_top++];}template <class T> void Deck<T>::reset(){ deck_top = 0;}template <class T> void Deck<T>::shuffle(){ std::random_shuffle(playable.begin() + deck_top, playable.end());}This of course has multiple tradeoffs besides the ones I already mentioned. 
Perhaps the biggest is that there's still no way to remove cards (temporarily or permanently), so it's hard to model a game that shuffles a partial deck while people are still holding some cards in their hands. I think you will rapidly find that it's hard to make this data structure usable for all the scenarios you think you want it to support without making it almost as generic as vector itself. At which point you will have to consider whether it's worth reusing the data structure, or if it's really just for a single program's use.Final notesAs a generic comment, I'm undecided on card vs. item. I think card conveys the intended use better, and doesn't really get in the way if someone wants, say, a Deck of tiles instead. But I do agree in general with the point about not boxing yourself in by your parameter names. |
_datascience.12274 | The model of linear regression is linear in parameters.What does this actually mean? | What does linear in parameters mean? | regression;linear regression | Consider an equation of the form$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \epsilon$ where $x$'s are the variables and $\beta$'s are the parameters. Here, y is a linear function of $\beta$'s (linear in parameters) and also a linear function of $x$'s (linear in variables). If you change the equation to$y = \beta_0 + \beta_1x_1 + \beta_2x_1^2 + \epsilon$Then, it is no longer linear in variables (because of the squared term) but it is still linear in parameters. And for (multiple) linear regression, that's all that matters because in the end, you are trying to find a set of $\beta$'s that minimizes a loss function. For that, you need to solve a system of linear equations. Given its nice properties, it has a closed form solution that makes our lives easier. Things get harder when you deal with nonlinear equations.Assume you are not dealing with a regression model but instead you have a mathematical programming problem: You are trying to minimize an objective function of the form $c^Tx$ subject to a set of constraints: $Ax \geq b$ and $x\geq0$. This is a linear programming problem in the sense that it is linear in variables. Unlike the regression model, you are trying to find a set of $x$'s (variables) that satisfies the constraints and minimizes the objective function. This will also require you to solve systems of linear equations but here it will be linear in variables. Your parameters won't have any effect on that system of linear equations. |
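To make the contrast concrete (an added illustration, not part of the original answer): a model such as $y = \beta_0 e^{\beta_1 x_1} + \epsilon$ is nonlinear in the parameter $\beta_1$ — no rearrangement of the regressors turns its least-squares problem into a system of linear equations in the $\beta$'s, so the closed-form solution mentioned above is lost and the fit has to be found iteratively (e.g., by Gauss-Newton).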
_webapps.58830 | I have several times encountered a situation where some particular book had a preview in Google Books, but later the same book (on the same link) was without a preview. On several occasions I have, for example, included a link to some particular page on Google Books in a post on some Stack Exchange site. I distinctly remember that at the time when I posted the link, the preview was available. Of course, I am aware that preview is not available for all users and that it is limited. But when I follow the same link now, which means that I get the same book and the same edition, the preview is not available anymore. I have noticed this in books about mathematics, since this is the area for which I most frequently search on Google Books. The examples I was able to find were books published between 1990 and 2000, so they are not very recent books. I suppose that preview on Google Books depends on some kind of agreement between Google and the publisher of the particular book. But I am somewhat surprised that the possibility to preview the book (and search in it) has changed. What are the reasons why preview is removed from some books? | Why does Google Books occasionally change some books, which had previews, to no preview? | google books | null
_unix.365840 | I have a host (Ubuntu xenial) running a container via systemd-nspawn (also xenial): systemd-nspawn --directory=gogs --network-macvlan=ens192 --boot ens192 is the host interface, which gets its IP address via DHCP. From within the container, I would like to get an IP address which would be provided by the network DHCP server, the one that previously provided it to the host (I first need to use a MAC address which is registered with the DHCP server): root@git:~# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: mv-ens192: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1 link/ether d2:b9:c3:77:25:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0 root@git:~# ifconfig mv-ens192 hw ether aa:a0:a0:a0:a0:01 root@git:~# ifconfig mv-ens192 up root@git:~# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: mv-ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1 link/ether 00:50:56:bb:60:3f brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 fe80::250:56ff:febb:603f/64 scope link valid_lft forever preferred_lft forever root@git:~# dhclient -v Internet Systems Consortium DHCP Client 4.3.3 Copyright 2004-2015 Internet Systems Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Listening on LPF/mv-ens192/aa:a0:a0:a0:a0:01 Sending on LPF/mv-ens192/aa:a0:a0:a0:a0:01 Sending on Socket/fallback DHCPDISCOVER on mv-ens192 to 255.255.255.255 port 67 interval 3 (xid=0xd36b8c1e) DHCPDISCOVER on mv-ens192 to 255.255.255.255 port 67 interval 7 (xid=0xd36b8c1e) The discovery does not go through. What is the reason this fails? The DHCP discovery packet goes to the host NIC, which then should dispatch it further (it should not be different from the call the host is making when requesting its own IP). Note: when running tshark on the host, I see the container's request: 634 8.404019212 0.0.0.0 -> 255.255.255.255 DHCP 342 DHCP Discover - Transaction ID 0xd36b8c1e | Can I get a DHCP address from within a container via the DHCP server the host is on? | networking;systemd;container;systemd nspawn | null
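A hedged observation to add here: 00:50:56 is a VMware OUI, so the Ubuntu host itself appears to be a VM. By default a VMware vSwitch rejects frames sent from MAC addresses it did not assign (the "MAC address changes" and "forged transmits" security policies), which silently drops the traffic of macvlan children in exactly this way — worth ruling out on the hypervisor side before debugging systemd-nspawn itself.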
_scicomp.20780 | I am developing a finite element simulation and want to evaluate the errors in the $H^1$ and $L_2$ norms. The problem is the classical Poisson equation, with Dirichlet B.C.: $$-\Delta u=f\mbox{ in }\Omega,$$$$u=g_{D}\mbox{ on }\Gamma_{D}.$$ I am using Lagrange bilinear 2/3D elements in the Galerkin approach. The error estimates are the standard $$\left\Vert u-u_{h}\right\Vert _{H^{1}\left(\Omega\right)}\leq C\,h\,\left\Vert u\right\Vert _{H^{2}\left(\Omega\right)}$$ and $$\left\Vert u-u_{h}\right\Vert _{L_{2}\left(\Omega\right)}\leq C\,h^{2}\,\left\Vert u\right\Vert _{H^{2}\left(\Omega\right)}.$$ After computing the numerical errors and calculating the experimental order of convergence, how can I evaluate how good the experimental order is? I mean, if I have an order of $2.30$ for the $L_2$ norm, is that good enough? What if it is $1.70$? What criteria can I use? | Evaluate numerical error estimates | finite element;error estimation | The theory says that the error follows the estimates you state asymptotically. It doesn't actually say very much about what the error would be for any given mesh and in relation to the next mesh. So what you need to do is to compute on finer and finer meshes. You will (or at least should) observe that the convergence rate will approach 2 if you did everything right, and that's all you typically want to demonstrate.
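For reference (added detail, standard textbook material): with errors $e_1$ and $e_2$ measured on two meshes of sizes $h_1 > h_2$, the experimental order of convergence is $$\mathrm{EOC}=\frac{\log(e_1/e_2)}{\log(h_1/h_2)},$$ and isolated values like $2.30$ or $1.70$ on a coarse pair of meshes usually just mean the computation is not yet in the asymptotic regime — what matters is that the sequence of EOCs over successive refinements settles near $2$ in $L_2$ (and near $1$ in $H^1$).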
_softwareengineering.51116 | Whether you learned from a university, a mentor, or what have you (I'm mainly concerned with university or equivalent), what did the institution do right? And what do you think they could've improved upon in teaching you your programming skills? I'm curious how everyone felt their institution did as far as teaching them how to become a good programmer. | What did your college do right and what could they improve upon? | education | I was not a comp sci major, but one comment I'll make: Undergrad education is supposed to be focused on timeless fundamentals of a field, not the latest buzzwords and technologies or mundane nitty-gritty practical details. If you want to learn the absolute latest research and buzzwords, that's what research journals and/or grad school are for. If you want to learn nitty-gritty practical stuff like source control and maintenance, that's what real-world experience is for. I majored in biomedical engineering, and I didn't understand this at the time. I always wondered why we weren't learning about the latest cool buzzword that would get me a job, and instead were wasting time on free body diagrams, reaction kinetics, or something boring like that. In hindsight the focus on the timeless fundamentals of engineering and biology in my undergrad education makes perfect sense. The latest cool buzzword changes too fast and is hard to understand deeply without a solid grasp of the fundamentals. Teaching lots of detail about source control and the latest development methodologies (agile, waterfall, RAD, SCRUM, or whatever else people use) is silly because it will be obsolete in 5 years, there's nothing conceptually deep about it, and it's easy to learn on your own. The timeless fundamentals of computer science are computer architecture, algorithms, complexity classes, data structures, the Church-Turing thesis, etc.
_cs.33715 | I'm learning about NP-completeness, and many reduction proofs start off by stating that a problem is trivially in NP. But I can't seem to wrap my head around this. Why is this so? | Why is Steiner Tree trivially in NP? | np complete;np | What is normally meant in cases like this is "there is a simple, obvious algorithm which demonstrates that the problem is in $\mathcal{NP}$", with an implicit "but we don't have enough space, can't be bothered, or don't want to bore the reader by including it".

Whether it really is trivial is a trickier thing (but often it really is).

Before returning to the Steiner Tree problem, recall the two common definitions of $\mathcal{NP}$ membership. A problem is in $\mathcal{NP}$ if:

1. given the input and a solution, we can check that the solution is correct in deterministic polynomial time, or
2. given the input, we can compute the solution in non-deterministic polynomial time.

So for the Steiner Tree problem (remember we only concern ourselves with the decision versions), they are saying that at least one of the following is true:

1. Given a graph, a set of required points, a value $k$ and a tree (the alleged solution), we can easily check that the tree is a Steiner tree of value at most $k$, or
2. given a graph, a set of required points and a value $k$, we can easily compute a Steiner tree of value at most $k$ if we can make some really good guesses.

Hopefully in this case both of these should be more obviously true. In the first case, we need only check that the tree [1] contains the required points, and that the total weight of the tree is at most $k$. In the second, we can use the magic of non-determinism to guess which edges are in the tree, then check that it's okay as in the first case.

The only further requirement is that these procedures be computable in polynomial time, but in both cases we at most need to look at each edge of the graph and each vertex of the graph once(-ish) (if we have sensible data structures to store the graph and tree), so even without being too precise, we can see that they can both be done in polynomial time.

Footnotes.

[1] As A. Schulz notes in the comments, there is an additional issue of ensuring that the witness (the solution handed to us) has an encoded length bounded by a polynomial in the size of the original input. This may be easy to see, but as A. Schulz also notes, if we're talking about the Geometric Steiner Tree problem, then we might have to make an additional argument about how we encode the geometry of the tree - not necessarily too complicated, but still important. If we're talking about the Steiner Tree Problem in Graphs, then it's more straightforward (as we don't have to worry about the geometry), and perhaps something you can gloss over. Nonetheless it is important to be careful :).
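To make the first option concrete, here is a sketch of such a polynomial-time checker in Python (the data representation - edges as pairs, weights as a dict keyed by those pairs - is an illustrative choice, not part of the original answer):

    def is_valid_witness(tree_edges, weight, required, k):
        """Check a claimed Steiner tree: it must span the required points,
        actually be a tree, and have total weight at most k.
        A sketch only; degenerate cases (e.g. a single required point,
        an empty tree) are ignored."""
        nodes = {v for edge in tree_edges for v in edge}
        if not set(required) <= nodes:
            return False                   # some required point is missing
        if sum(weight[e] for e in tree_edges) > k:
            return False                   # too expensive
        if len(tree_edges) != len(nodes) - 1:
            return False                   # wrong edge count for a tree
        adjacency = {v: [] for v in nodes}
        for u, v in tree_edges:
            adjacency[u].append(v)
            adjacency[v].append(u)
        seen, stack = set(), [next(iter(nodes))]
        while stack:                       # depth-first search
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adjacency[v])
        return seen == nodes               # connected with n-1 edges => a tree

Every step is a single pass over the vertices or edges, so the whole check runs in polynomial time, which is exactly what the first definition of membership in $\mathcal{NP}$ asks for.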
_unix.35452 | My Firefox automatically shortens the names of the files I download. For example, 231546798_20110608.pdf becomes 2315.pdf. I later realized that it may relate to the length of the path into which I try to download the file. The path is ridiculously long:

/windows-d/academic discipline/study objects/areas/human aspects/social sciences/communication/ways of communication/way of spread, ie electronic media and communication/application/telephone communication/examples/cell phone/me/verizon/bill/

I then built a directory named haha under the above long path. Note that haha is exactly as long as the shortened file name (the extension pdf is excluded). It turns out that Firefox doesn't download the file into the directory haha, although I tell it to. But I can download the file to another directory with a much shorter path name without any problem, and I have no problem copying the file into haha afterwards. I wonder how to explain this strange behaviour of Firefox? PS: My OS is Ubuntu 10.10, and Firefox is 11.0. The problem here is similar to my previous question, but there I asked about the OS, and here I am asking about Firefox. | My Firefox shortens names of the files I download | firefox | null
_hardwarecs.5583 | Is there a piece of hardware that can connect two monitors to a single video port? I want to fool an Xbox into thinking that it's connected to a single monitor, when in fact that display area is made up of two monitors on top of each other. Something like this:

--------------
|            |
|            |
|            |
|------------|
|            |
|            |
|            |
--------------

The Xbox would only see the outer rectangle, not realizing that there are two monitors present. Does such an adapter exist? | Split Xbox screen across multiple monitors | hdmi;video adapters | What you're looking for is called a Video Wall Controller. These are the boxes that you see behind the massive arrays of screens showing a single composited image in the big chain stores. They usually run around $1,000 or more. Without knowing the monitors involved, I can't make a recommendation on which one to use. I know of no such products priced at private-consumer levels, since usually that part of the market just uses a PC with two video outs to achieve the same thing (except, of course, that the video source has to come from the PC. You might try to rig up something with an HDMI-in capture card being passed out to two HDMI ports off a PC, but you probably wouldn't get true extended-screen support - more likely you'd have to span the image in a window drawn across both screens. Alas).
_cs.29960 | Given

mult = \x -> \y -> x*y

I am trying to reduce

(mult (1+2)) (2+3)

with each of the strategies:

- normal-order
- applicative-order
- call-by-name
- call-by-value

My attempt: for both applicative-order and call-by-value I get the same result (because no intermediate lambda expression contains anything reducible, hence applicative order does not do reduction under a lambda either):

(mult (1+2)) (2+3)
(mult 3) (2+3)
(\y -> 3*y) (2+3)
(\y -> 3*y) 5
3*5
15

With normal order and call-by-name: I am supposed to substitute (2+3) into the body of (mult (1+2)). But that does not have a body, unless I do a reduction. But then it is not normal order, nor call-by-name, since it is not the outermost redex I reduce. Hence it seems to me that these evaluation strategies get stuck. But I am probably misunderstanding something. Please show me the correct evaluation. | How to reduce this with all 4 of normal applicative by-name by-value? | lambda calculus;functional programming | null
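For reference, the normal-order/call-by-name case unsticks once unfolding the name mult is itself treated as a reduction step (a delta rule): replacing mult by its definition is then the leftmost-outermost step, and beta-reduction proceeds with the arguments still unevaluated. A sketch:

    (mult (1+2)) (2+3)
    ((\x -> \y -> x*y) (1+2)) (2+3)
    (\y -> (1+2)*y) (2+3)
    (1+2)*(2+3)
    3*(2+3)
    3*5
    15

Both normal order and call-by-name substitute (2+3) unevaluated; they differ from the applicative-order/call-by-value trace only in when the arguments are reduced, and here all strategies agree on the result 15.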
_unix.37993 | When I use a graphical desktop environment on Linux - no matter which one, really - I notice some icons tend to look aliased. It seems like there is a lack of anti-aliasing: curves and icons look far more blocky than they do on Windows. I am running at a high resolution, 1680x1050. Still, since things look so smooth in Windows 7, I want to know whether Linux graphics just are not there yet, or whether there is a problem with my configuration. Most of the time I am using Fluxbox, if that makes a difference. | Is there a way to get anti-aliased icons in X, running Linux? | xorg;desktop | null
_codereview.105543 | I've created some base classes for items, and I want to know how maintainable or expandable the method seems.

using System;
using System.Diagnostics;
using AbilitySystem;
using AbilitySystem.AbilityClasses;
using AbilitySystem.BehaviorClasses;
using AbilitySystem.EffectClasses;
using ItemSystem.Enums;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using UISystem;

namespace ItemSystem.ItemClasses
{
    public class Item
    {
        #region Properties
        /// <summary>
        /// The ID of the item, shared between server and client
        /// </summary>
        public int ID { get; }

        /// <summary>
        /// Name of the item
        /// </summary>
        public string Name { get; }

        /// <summary>
        /// Price of the item
        /// </summary>
        public int Price { get; set; }

        /// <summary>
        /// Prototype: Type of the item
        /// RemoveIf: I decide not to do type-based slots
        /// </summary>
        public ItemType Type { get; }

        /// <summary>
        /// Base description of the item
        /// Additional info will be added based on stats and effects
        /// </summary>
        public string Description { get; set; }

        /// <summary>
        /// The ability of the item, can be a stat modifier, an activated ability or any other variation
        /// </summary>
        public Ability Ability { get; }
        #endregion

        #region Ctor
        private Item(int id, string name, int price, ItemType type, string baseDescription, Ability ability)
        {
            ID = id;
            Name = name;
            Price = price;
            Type = type;
            Ability = ability;
            Description = string.Format("{0}: {1}\n{2}: {3}", Name, baseDescription, Ability.Name, Ability.Description);
        }
        #endregion

        #region Methods
        public static Item Create(int id, string name, int price, ItemType type, string description, Ability ability)
        {
            return new Item(id, name, price, type, description, ability);
        }

        public void Activate(IUnit unit)
        {
            if (Ability != null && Ability.IsActivatable)
                Ability.ActivateAbility(unit);
        }
        #endregion
    }

    public class TestItem
    {
        public Item Item1 { get; }
        public Item Item2 { get; }

        /// <summary>
        /// Example of a behavior that can take a parameter with which it will apply the behavior
        /// </summary>
        private class DealDamageBehavior : ActivatableBehavior
        {
            private int Damage { get; set; }

            protected override void BehaviorImplementation(IUnit destinationPlayer)
            {
                /* Deal the Damage variable to the Unit (when IUnit will be implemented) */
            }

            public override bool CanApplyBehaviorTo(IUnit unit)
            {
                /* Check if `unit` is a valid unit to apply this behavior on */
                return true;
            }

            public DealDamageBehavior(int damageToDeal)
            {
                Damage = damageToDeal;
            }
        }

        /// <summary>
        /// Example of a behavior that can happen on its own (has some debug print on Debug output and UI)
        /// </summary>
        private class ItemBehavior : ActivatableBehavior
        {
            private bool isDrawn;

            public ItemBehavior()
            {
                UI.SubscribeToUIDraw(PrintUi);
                isDrawn = false;
            }

            protected override void BehaviorImplementation(IUnit destinationPlayer)
            {
                Debug.Print("Behavior test print");
                isDrawn = !isDrawn;
            }

            public override bool CanApplyBehaviorTo(IUnit unit)
            {
                return true;
            }

            private void PrintUi(SpriteBatch spriteBatch)
            {
                if (!isDrawn)
                    return;

                spriteBatch.DrawString(UI.Font, string.Format("Test"), new Vector2(20, 50), Color.Black);
            }
        }

        public TestItem()
        {
            Item1 = Item.Create
            (
                id: 0,
                name: "Test Item",
                price: 30,
                type: ItemType.Weapon,
                description: "Just a test item",
                ability: Ability.CreateActivatable
                (
                    effect: new BehaviorApplyingEffect(new ItemBehavior()),
                    name: "Test ability",
                    isUnique: false,
                    description: "Just a test ability",
                    cooldown: TimeSpan.FromSeconds(3)
                )
            );

            Item2 = Item.Create
            (
                id: 1,
                name: "Second Test Item",
                price: 50,
                type: ItemType.Armor,
                description:
                "Just another test item",
                ability: Ability.CreateActivatable
                (
                    effect: new BehaviorApplyingEffect(new DealDamageBehavior(5)),
                    name: "Test ability that deals 5 damage",
                    isUnique: false,
                    description: "Just a test ability that deals 5 damage",
                    cooldown: TimeSpan.FromSeconds(3)
                )
            );
        }
    }
}

Here is the GitHub repo with whatever pieces of code you need, like the AbilitySystem and how it behaves. At the bottom of all that code, there are two examples of creating items and how they generally look. Do you think this is a good way of doing it? Is there a nicer way of doing it? | Creating items in a game | c#;design patterns;monogame | High-level architecture

Not something we usually review, but...

namespace ItemSystem.ItemClasses

The namespaces are surprising. You seem to have split the namespaces by object type - one for classes, another for enums:

namespace ItemSystem.Enums
{
    /// <summary>
    /// Enum for different basic types of items that are possible
    /// </summary>
    public enum ItemType
    {
        Weapon,
        Consumable,
        Armor
    }
}

You have created separate projects for what I'd consider namespaces:

MainModule -> ModuloZero
UISystem -> ModuloZero.UI
StatSystem -> ModuloZero.Stats
AbilitySystem -> ModuloZero.Abilities
ItemSystem -> ModuloZero.Items

All files define multiple types; projects are usually easier to manage, organize and - most importantly - navigate when there's one type per file.

Something isn't right here:

using UISystem;

But after a bit of investigation, I figured it was because of the test code you included in the same code file - this particular instruction is the culprit:

UI.SubscribeToUIDraw(PrintUi);

Along with this one:

spriteBatch.DrawString(UI.Font, string.Format("Test"), new Vector2(20, 50),

Both in ItemBehavior.

This isn't a very OOP way of tackling the problem:

public static class UI

In my opinion you have the dependency upside down, with the model depending on the UI. I would reverse that dependency and let the UI depend on the model; the application logic does not need to know there's a UI in the picture.

Now if you wanted to write a unit test for the Behavior classes, you'd be stuck with this static method call that's running code in another assembly. The problem is in the constructor:

public ItemBehavior()
{
    UI.SubscribeToUIDraw(PrintUi);
    isDrawn = false;
}

UI is a dependency. If you wanted to write a unit test that confirms that PrintUi is registered in the constructor, you would have no choice but to include that method in the code that needs to run during that test: ItemBehavior is tightly coupled with the UI class. A step-one refactoring could end up looking like this:

private readonly IDrawingEngine _engine;

public ItemBehavior(IDrawingEngine engine)
{
    _engine = engine;
    _engine.SubscribeToUIDraw(PrintUi);
    isDrawn = false;
}

Notice the difference: now if we want to write a unit test to validate that an ItemBehavior object gets created registered to the drawing event, we can do so by handing it a fake IDrawingEngine.

Step two would be to break the dependency on the UI assembly altogether, make it depend on the assembly that defines IDrawingEngine, and implement that interface.

#region

Don't do this:

#region Properties
#region Ctor
#region Methods

Instead, lay out your files in a consistent manner. You should be able to tell a property from a method, and a method from a constructor.
Which specific order you pick is totally your personal preference - I usually follow a format like this:public class Foo{ public const int SomeConstant = 42; private readonly Baz _baz; public Foo(string bar, Baz baz) { _bar = bar; _baz = baz; } private readonly string _bar; public string Bar { get { return _bar; } } public int FooBarBaz { get; set; } public void DoSomethingPublic() { } private void DoSomethingPrivate() { }}Whatever the order is - if it's consistent, there's no need for #region because you know where to find what. And with a single type per file, and a single responsibility per type, there shouldn't be much need for collapsing entire regions anyway; keep regions where they belong, in generated code. |
_webapps.86947 | Someone changed the password of my Google account. I tried to recover it by using Google's recovery process, but unfortunately that someone has already changed the phone number and the other email address that are needed for the recovery of my account. So I tried other options, like answering the questions that Google gave me. One of the questions there is "When did you create your account?" I really can't remember, and I'm so frustrated. I hope someone can help me recover my account. | Recover Google account when someone changed the phone number and the email address that is needed for recovery? | google account;account management | null
_softwareengineering.25188 | OK, so hopefully this is a subjective enough question for Programmers, but here goes. I am continuously broadening my knowledge of languages and software engineering practices... and I've run into something that just makes no sense to me whatsoever.In C++, class declarations include private: methods and parameters in the header file, which, theoretically, is what you pass to the user to include if you make them a lib.In Objective-C, @interfaces do pretty much the same thing, forcing you to list your private members (at least, there's a way to get private methods in the implementation file).From what I can tell, Java and C# allow you to provide an interface/protocol which can declare all the publicly accessible properties/methods and gives the coder the ability to hide all implementation details in the implementation file.Why? Encapsulation is one of the main principles of OOP, why do C++ and Obj-C lack this basic ability? Is there some kind of best-practices work-around for Obj-C or C++ that hides all implementation?Thanks, | Why are private variables described in the publicly accessible header file? | programming languages;language features | The question is whether the compiler needs to know how large an object is. If so, then the compiler has to know about the private members in order to count them up.In Java, there's primitive types and objects, and all the objects are allocated separately, and the variables containing them are really pointers. Therefore, since a pointer is a fixed-size object, the compiler knows how big a thing a variable represents, without knowing the actual size of the pointed-to object. The constructor handles all of that.In C++, it's possible to have objects represented locally or on the heap. Therefore, the compiler needs to know how big an object is, so that it can allocate a local variable or array.Sometimes it's desirable to divide class functionality into a public interface and a private everything else, and that's where the PIMPL (Pointer to IMPLementation) technique comes in. The class will have a pointer to a private implementation class, and the public class methods can call that. |
_unix.158663 | When using Linux (Debian), I often use dd to copy a disk image to an SD card. I have written a script that throws an error if the device file specified in the of option is too large. This keeps me from accidentally blowing away one of my hard disks.SD_SIZE=$(sudo sfdisk -s ${SD_DEV}) if [ $SD_SIZE -gt 33554432 ]; then echo might not be and SD card, exiting exit 1fiHowever, if I insert the SD card and forget to unmount it, results are sketchy. Sometimes the copy succeeds and sometimes it fails.I can amend my script with the answer here:How to check if a filesystem is mounted with a scriptHowever, is there an option in dd with this functionality? (on OS X dd will not write to a mounted disk by default)Also of interest, why will dd (on OS X) error when trying to copy to a mounted disk? Perhaps some differences in the kernel or dd? Here is the error you get if you try to dd to a drive that is mounted in OS X (10.9):dd: /dev/diskN: Resource busy, make sure the disk is not in useI can write to the disk using cp, so perhaps the system calls that dd is making are not as simple as OPEN then WRITE. | Differences between dd in Mac OS X and Linux | linux;mount;osx;dd;sd card | Sometimes the copy succeeds and sometimes it fails.Probably it fail because some process in the while wrote to the mounted filesystem, guess that's why is a good practice to umount before :-)dd should really be just open and write, I guess the MacOSX version add some control and I think is easy to understand why with their device names, compared to Linux I triple-check before dd'ing. |
_cogsci.13542 | I have often told people that I make a mistake two different ways before I really understand a situation. Is there any research that explains how and why this works? As an analogy, there is something called a paperclip tree, a branching structure of linked paperclips (or any other small objects that can be connected loosely). You can find the longest path through the tree by picking it up anywhere, then grasping the lowest point and picking it up again. Absolute, guaranteed, must work every time.So, is learning by making two mistakes something like this? Something like triangulation - two points of view? I also find that I have to sleep twice before something is really integrated in to my awareness and memory. After just one sleep it disappears and can't be recalled. This is why I often tell people that I cannot remember yesterday, but before that is clear. Also why naps are so helpful for learning. Any thoughts in how these things all work? Back propagation? Default Processing? Thank you. | Third time's the Charm: any research into how we learn from error? | learning;sleep | null |
_webapps.31733 | I frequently use SMS notifications in Google Calendar, but I seem to receive the SMS message from a different sending number each time. So my SMS list is full of different random numbers.I'd like to add a Google Calendar contact, adding all its phone numbers in order to get all the SMS messages in the same thread, under a unique name.Is there a list of all the phone numbers used by Google Agenda to send the SMS? | Where can I find a list of calling numbers used by Google Calendar SMS service | google calendar;sms | null |
_cs.48698 | Let be $F=(A\land B)$ and $G=\neg(\neg A \lor \neg B)$. Which of the following statements are correct$F=G, F\equiv G, \neg F=\neg G, \neg F\equiv\neg G$?Is there a difference? | Logical equivalence and equality | equality | Equality is a syntactic notion, equivalence is a semantic notion. Two expressions are equal if they are the same expression — in other words, an expression is only equal to itself. Two logical expressions are equivalent if they have the same truth value in every interpretation.Two equal expressions are always equivalent, but the converse doesn't hold. For example, $A \equiv \lnot\lnot A$ but $A \neq \lnot\lnot A$. |
_softwareengineering.347164 | I've been mainly programming PHP, and I recently started with C++.In PHP the return of a function can be of any type, so you can do checks like this:public function doSomething(){ if (! this->userHasAttribute()) { return false; } return you are logged in.;}So basically, if the user would not be logged in doSomething would return false, else it returns a string.Since in C++ you return strictly one datatype, how would you approach this?How to structure those checks/policies, because clearly you dont want to have them inside a function itself (or you will be throwing exceptions everywhere you do a minor check).Please correct me if I said anything incorrect.I also came across this post, talking about why the Single Entry Single Exit notion exists in the first place. Someone said that the strongest argument in favor or SESE has vanished.https://softwareengineering.stackexchange.com/a/118717/269571 | Return type declaration and checks | design patterns;object oriented;c++;architecture;php | null |
_unix.81040 | I'm hoping this is a stupid question with a simple answer, but I've been hunting around for half an hour and cannot find a way to install kdesudo in OpenSUSE 12.3.I need to use kdesudo because kdesu (which is pre-installed in OpenSUSE 12.3) is a basic tool which does not obey the sudoers configuration.I'm happy to build from source if it comes to it, but I can't work out where the authentic source can be safely downloaded for kdesudo. | How to install kdesudo in OpenSUSE 12.3 | software installation;sudo;opensuse;source | Search on this openSUSE page for kdesudo and you will get a list of personal repos with it. |
_unix.347839 | I have noticed plenty of times already that when the host OS is under heavy I/O load, all VMs struggle enormously with I/O, to the point where they fail even to boot, with systemd printing "CPU stuck" messages and other timeouts. It works this way under both VMware Workstation and VirtualBox on a Linux host, with any guest. Basically, if the host is performing, for example, a filesystem check or some checksum computations, guest OSes are half-functional because I/O barely works at all. It's worth mentioning that host processes have no problem sharing I/O during such load: when a second checksumming job is run, I/O just gets split more or less evenly across the two processes. Why is that? What makes VMs so special? Can it be helped? | Why do VMs struggle terribly with I/O when the host is under heavy I/O load? | virtual machine;performance;vmware;io;storage | Consumer-grade controllers/disks do not have the capacity to perform multiple heavy I/O operations efficiently; however, generally speaking, the faster the technology, the better the process will go (in theory).

Obviously, if you are running CPU/I/O-intensive operations like software RAID (as the OP said they are doing), it will degrade the performance of the whole setup, both by sharing controllers and by using up CPU resources. I would advise investing in a hardware RAID controller at least.

Server-grade hardware will generally deal better with situations of parallel heavy access by several VMs/several users. However, going for server-grade hardware is not the whole story, and there are optimisation strategies that are useful when dealing with either consumer- or server-grade hardware.

What helps a lot on the VM side, by making I/O tax the hypervisor less, is paravirtualizing services. Paravirtualization means adding special drivers that talk directly with the virtualisation services/kernel for bulk data transfer (aka PVSCSI in VMware parlance), so the actual media storage devices/NICs do not have to be emulated.

For all the VMware solutions, either Enterprise or Workstation, you have the open-vm-tools package for Linux and FreeBSD. Under Debian, you install it using:

apt-get install open-vm-tools

For Debian Stretch, it does not involve compiling anything anymore. For Jessie, I do recommend installing open-vm-tools from the backports, as the backports install open-vm-tools v10.

After installing the open-vm-tools, you have to shut down the VM, change the disk controller to the ParaVirtual type, and change the network controller to vmxnet3. Have a look at Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters (1010398).

The vm-tools also allow the VMs to do memory ballooning, so they won't need to hold on to RAM they are not using.

"Virtual memory ballooning is a computer memory reclamation technique used by a hypervisor to allow the physical host system to retrieve unused memory from certain guest virtual machines (VMs) and share it with others. Memory ballooning allows the total amount of RAM required by guest VMs to exceed the amount of physical RAM available on the host."

Support for virtualization technologies at the processor level, like VT and VT-d, also helps make the process smoother. See Intel Virtualization Technology for Directed I/O (VT-d).

Needless to say, optimizations at the OS level that diminish I/O also help; for instance, logging to remote logging systems instead of the local VM, or aligning partitions.
Partition alignment in VMware vSphere 5, a DeepDrive, Part-1
Partition alignment in VMware vSphere 5, a DeepDrive, Part-2

Beware also of other I/O optimisations, like moving database storage out of the /var partition, because the logging daemon flushes log files to maintain log integrity in case of sudden reboots.

It also helps to follow the Unix philosophy of running only the minimum required services. Empirically, from my use, smaller VMs use less I/O for housekeeping/paging. Obviously, if you use up more memory than you have, you will have problems with I/O (aka thrashing).

You can also fine-tune the I/O priority of a particular VM in the hypervisor, i.e. give it higher or lower priority. I know it can be done in vCenter/VMware ESX, but probably not in VMware Workstation: as it is a level-2 hypervisor, it is the host OS that deals with managing the I/O operations and slice quotas (more on that later on). It also goes without saying that when using level-2 hypervisors, many of the optimisations we talk about here should also be applied to the host OS when possible.

VMware hypervisor technology also seems to deal better with high-load I/O across multiple VMs than the alternatives. However, if you are concerned about performance, at least in the VMware realm and for production systems, I would advise going for their Type-1/bare-metal hypervisor (ESX or ESXi) instead of using VMware Workstation.

From Hypervisor:

"Type-1, native or bare-metal hypervisors: These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors.[4] These included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM's z/VM). Modern equivalents include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, the Citrix XenServer, Microsoft Hyper-V and VMware ESX/ESXi.

Type-2 or hosted hypervisors: These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors."

Finally, there is also the option of going for native hypervisors or container technologies that do not add a layer of emulation to mass-storage access, like Xen in PV mode for Linux hosts, Docker, or FreeBSD jails. These alternatives have their own advantages and disadvantages, which are out of the scope of this answer.
_webmaster.44847 | What you'd expect when you select a site in the IIS console and choose the stop button is that all bound ports would stop responding, right? That's what it did in earlier versions of IIS, at least. What I'm seeing today (on more than one server) is that, when I stop the site, the SSL port 443 stops as expected, but port 80 stays open - and it keeps responding to requests with 404 Not Found errors! TCPView shows that port 80 is being used by the System service with a PID of 0. Process Explorer does not show PID 0 using port 80. This is IIS 7.5 on Windows Server 2008 R2 x64. Is this perhaps some fancy new feature of the Windows Firewall? (The service is running, but the firewall is disabled by Group Policy.) | IIS 7.5 site is stopped, but it still responds to requests with 404 errors | iis7;404;windows server 2008 | Sounds like either something is sharing the port or it's failing to shut down correctly. Run netsh http show servicestate to diagnose whether anything is still registered with the HTTP service. Additionally, do a netstat -ano and see which PID the IIS service is running under. (I know you said TCPView is reporting PID 0, but this doesn't sound right and I'd put more trust in the netstat command.) If you find it's still running, then a simple taskkill /PID IDNUMBERHERE should do the trick.
_unix.89576 | root@ROUTER:~# cat maccheck.txt
logread | egrep ': STA |DHCPACK' | awk '{print $1" "$2" "$3" "$9}' | sed -e 's/\( [0-9] \)/0\1/' | sed "s/.\{15\}/&:/; s/: /:/g" | cut -d : -f 1,2,4,5,6,7,8,9 | sed "s/.\{13\}/&X/;" | sed 's/:X/ /g' | sed 's/XX:XX:XX:XX:XX:XX/AAAA/g' | sort -u -r

It is OK when it's here:

root@ROUTER:~# logread | egrep ': STA |DHCPACK' | awk '{print $1" "$2" "$3" "$9}' | head -1
Sep 2 03:03:25 XX:XX:XX:XX:XX:XX

but it's bad if I use the script that has further things in it:

root@ROUTER:~# sh maccheck.txt | head -1
Sep0 4 13:13 AAAA

So the "Sep0" is bad... how do I modify it to be "Sep"?

root@ROUTER:~# logread | egrep ': STA |DHCPACK' | awk '{print $1" "$2" "$3" "$9}' | sed -e 's/\( [0-9] \)/0\1/' | head -1
Sep0 2 03:03:25 AAAA

So the problem is with:

sed -e 's/\( [0-9] \)/0\1/'

Q: I need the same output, but without the "0" in "Sep0". How do I do it? | Problem with sed | sed | \1 gives you everything matched by your group (the \( \) section). Your group includes the surrounding spaces, so the zero is inserted before the space, and only then is the captured " 2 " appended. To fix it, move the spaces outside the group:

sed -e 's/ \([0-9]\) / 0\1 /'

Example

before

$ cat sample.txt | sed -e 's/\( [0-9] \)/0\1/'
Sep0 2 03:03:25 XX:XX:XX:XX:XX:XX

after

$ cat sample.txt | sed -e 's/ \([0-9]\) / 0\1 /'
Sep 02 03:03:25 XX:XX:XX:XX:XX:XX
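The same capture-group behaviour can be reproduced with Python's re module, which may make the difference easier to see (illustrative only):

    import re

    line = "Sep 2 03:03:25 AAAA"
    # group includes the spaces -> the zero lands before the captured " 2 ":
    print(re.sub(r"( [0-9] )", r"0\1", line))    # Sep0 2 03:03:25 AAAA
    # spaces outside the group -> the zero lands next to the digit:
    print(re.sub(r" ([0-9]) ", r" 0\1 ", line))  # Sep 02 03:03:25 AAAA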
_codereview.42802 | I needed to impose on a Python method the following locking semantics: the method can only be run by one thread at a time. If thread B tries to run the method while it is already being run by thread A, then thread B immediately receives a return value of None. I wrote the following decorator to apply these semantics:

from functools import wraps
from threading import Lock

def non_blocking_lock(fn):
    fn.lock = Lock()
    @wraps(fn)
    def locker(*args, **kwargs):
        if fn.lock.acquire(False):
            try:
                return fn(*args, **kwargs)
            finally:
                fn.lock.release()
    return locker

This works in my testing so far. Does anyone notice any gotchas or have any suggestions for improvements?

Revised version

After the suggestions by @RemcoGerlich, I have added a docstring and kept the lock local to the decorator:

from functools import wraps
from threading import Lock

def non_blocking_lock(fn):
    """Decorator. Prevents the function from being called multiple times simultaneously.
    If thread A is executing the function and thread B attempts to call the function,
    thread B will immediately receive a return value of None instead."""
    lock = Lock()
    @wraps(fn)
    def locker(*args, **kwargs):
        if lock.acquire(False):
            try:
                return fn(*args, **kwargs)
            finally:
                lock.release()
    return locker | A non-blocking lock decorator in Python | python;concurrency | I think this will work fine and it's very close to how I would write it myself. A few small things come to mind:

- There's no documentation of any kind that explains what the semantics are, and they're not explicit either (the "return None if the lock isn't acquired" is entirely implicit). I would put the short explanation you gave in this question into a docstring, and/or add an explicit else: return None to the if statement.
- Is there any reason why the lock object is exposed to the outside world by making it a property of the function (fn.lock)? I would simply make it a local variable, so that it's hidden. But I'm not sure.
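A quick way to see the semantics in action with the revised decorator (timing-dependent, so treat it as a demonstration rather than a reliable test):

    import threading
    import time

    @non_blocking_lock
    def work():
        time.sleep(0.2)
        return "done"

    results = []
    threads = [threading.Thread(target=lambda: results.append(work())) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)  # one thread gets 'done', the other (usually) None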