_webapps.86308
Can Google Sheets queries aggregate strings? I want to aggregate all locations for all years, so the following:

year  location
2013  Sudan
2014  Syria
2012  India
2014  Poland
2014  Great Britain

should be transformed to:

year  locations
2012  India
2013  Sudan
2014  Syria, Poland, Great Britain

The problem is that =QUERY("select year, sum(location) group by year") does not work, and neither does =QUERY("select year, concatenate(location) group by year").
Aggregate strings
google spreadsheets
No, the aggregation functions available for QUERY do not include concatenation of strings. An alternative approach is illustrated by the following example:

+---+------+---------------+------+------------------------------+
|   | A    | B             | C    | D                            |
+---+------+---------------+------+------------------------------+
| 1 | year | location      | year | locations                    |
| 2 | 2013 | Sudan         | 2012 | India                        |
| 3 | 2014 | Syria         | 2013 | Sudan                        |
| 4 | 2012 | India         | 2014 | Syria, Poland, Great Britain |
| 5 | 2014 | Poland        |      |                              |
| 6 | 2014 | Great Britain |      |                              |
+---+------+---------------+------+------------------------------+

C2: =sort(unique(A2:A)) returns the sorted list without repetitions.

D2: =if(C2="", "", join(", ", filter(B$2:B, A$2:A=C2))) picks the countries for the given year and joins them into a comma-separated string.

The formula in D2 needs to be dragged/copied down the column; I couldn't come up with an arrayformula variant for it.
_webmaster.77543
I work at a reasonably large e-commerce retailer that's working on doing a lot more with Google Analytics. One of the things we're doing is tagging each individual user that hits our site with a unique ID, and passing it in as a custom dimension to GA. This allows us to get a profile of an individual user, including every traffic source they ever come to our property from, etc. We've hit one snag, though.

Imagine a custom report... Dimensions are: Unique ID, Source / Medium, and Date. Metric is hits.

This will show every traffic source that a user's traffic is attributable to, by day. If the user comes in from organic one day, then a referral the next, we'll know about it. However, the one thing we won't know about is if a user comes from the same source twice in a row. If a user comes from organic twice in a day, we'll just see organic tracked. Ideally, we'd like to know that's two different organic arrivals.

Any ideas on how to tease this out? The end goal is pulling this down into a data warehouse for analysis outside of GA. (Obviously, I know that this can be done using Multi-Channel Funnels for users that have already made a purchase. But what about those that haven't?)
Identify Multiple Visits from Same Source / Medium in Google Analytics
google analytics
A simpler solution is to set up the dimension Date and Time instead of Date, which shows more of the interactions from one unique ID at once. It is not necessary to build another GA script, because GA has already collected all the data about a particular unique ID.
_codereview.37866
This week's challenge is essentially about fetching JSON data from the Web and deserializing it into objects. I don't have much time to devote to this one, so what I have is very basic: it displays the names of all pokemons in a WPF ListView. Here's the code for the window:

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        var service = new PokemonApp.Web.Pokedex();
        var pokemons = service.SelectAllPokemonReferences().OrderBy(pokemon => pokemon.Name);
        DataContext = new MainWindowViewModel { References = pokemons };
    }
}

As you've guessed, the interesting code is in the Pokedex class:

public class Pokedex
{
    public IEnumerable<ResourceReference> SelectAllPokemonReferences()
    {
        string jsonResult = GetJsonResult(PokeDexResultBase.BaseUrl + "pokedex/1/");
        dynamic pokemonRefs = JsonConvert.DeserializeObject(jsonResult);
        ICollection<ResourceReference> references = new List<ResourceReference>();
        foreach (var pokeRef in pokemonRefs.pokemon)
        {
            string uri = PokeDexResultBase.BaseUrl + pokeRef.resource_uri.ToString();
            references.Add(new ResourceReference { Name = pokeRef.name, ResourceUri = new Uri(uri) });
        }
        return references;
    }

    private static string GetJsonResult(string url)
    {
        string result;
        WebRequest request = HttpWebRequest.Create(url);
        request.ContentType = "application/json; charset=utf-8";
        try
        {
            using (var response = request.GetResponse())
            using (var stream = response.GetResponseStream())
            using (var reader = new StreamReader(stream))
            {
                result = reader.ReadToEnd();
            }
        }
        catch (Exception exception)
        {
            result = string.Empty; // throw?
        }
        return result;
    }
}

The ResourceReference class has nothing special:

public class ResourceReference
{
    public string Name { get; set; }
    public Uri ResourceUri { get; set; }
}

This quite basic code makes a foundation for the more complex objects:

public abstract class PokeDexResultBase : IPokeDexUri
{
    private readonly string _controller;

    protected PokeDexResultBase(string controller)
    {
        _controller = string.IsNullOrEmpty(controller) ? BaseUrl : controller;
    }

    public static string BaseUrl { get { return "http://pokeapi.co/api/v1/"; } }

    public int Id { get; set; }
    public Uri ResourceUri { get; set; }
    public DateTime Created { get; set; }
    public DateTime Modified { get; set; }

    public virtual string Url
    {
        get { return BaseUrl + _controller + (Id == 0 ? string.Empty : Id.ToString()); }
    }
}

That base class is inherited by everything that has an Id in the API:

public class Pokemon : PokeDexResultBase
{
    public Pokemon() : base("pokemon/") { }

    public string Name { get; set; }
    public ICollection<ResourceReference> Abilities { get; set; }
    public ICollection<ResourceReference> EggGroups { get; set; }
    public ICollection<PokemonEvolution> Evolutions { get; set; }
    public ICollection<PokemonMove> Moves { get; set; }
    public ICollection<PokemonType> Types { get; set; }
    public int Attack { get; set; }
    public int CatchRate { get; set; }
    public int Defense { get; set; }
    public int EggCycles { get; set; }
    public string EvolutionYield { get; set; }
    public int ExperienceYield { get; set; }
    public string GrowthRate { get; set; }
    public int Happiness { get; set; }
    public string Height { get; set; }
    public int HitPoints { get; set; }
    public string MaleFemaleRatio { get; set; }
    public int SpecialAttack { get; set; }
    public int SpecialDefense { get; set; }
    public string Species { get; set; }
    public int Speed { get; set; }
    public int Total { get; set; }
    public string Weight { get; set; }
}

public class PokemonEvolution
{
    public int Level { get; set; }
    public string Method { get; set; }
    public Uri ResourceUri { get; set; }
    public string ToPokemonName { get; set; }
}

There are other classes involved, but there's nothing much to review about them; they're very similar to the Pokemon class. As I extend my code I'll add more methods to the Pokedex class, which will use the GetJsonResult method. Have I analyzed the API well - I mean, is this code a solid foundation for deserializing pokemons and eventually getting them to fight against each other? What could be done better? Any nitpicks?
Gotta catch 'em all!
c#;json;community challenge;pokemon
While a class such as ResourceReference makes sense as far as wrapping the API is concerned, from a UI standpoint it's like using POCOs in the UI layer: if the data came from a database through Entity Framework, this code would be displaying the entities. This is a basic implementation, but if the goal is to make it an extensible skeleton implementation, there should be a ViewModel class for the UI to display, independent of the WebAPI-provided objects; the ViewModel doesn't care about ResourceUri - that property isn't meant to be displayed, it's plumbing for querying the API, and it doesn't belong in the UI.

As far as the Pokedex service class goes, it looks like it could work; however, it should be returning ViewModel objects for the UI to consume, and the static GetJsonResult method could be encapsulated in its own class, which would be an HTTP-based implementation of a helper object whose role is to fetch the data - there could be a SQL Server-based implementation that would do the same thing off a database, the idea being to decouple the data from its data source... but that could be overkill.

Usage of Uri in the POCO classes adds unnecessary complexity: they could just as well be kept as normal strings, so this code:

string uri = PokeDexResultBase.BaseUrl + pokeRef.resource_uri.ToString();

could then look like this, skipping the useless .ToString() call:

string uri = PokeDexResultBase.BaseUrl + pokeRef.resource_uri;

The PokeDexResultBase.BaseUrl static string property would probably be better off as a static property of the Pokedex class, although that would make PokeDexResultBase tightly coupled with Pokedex, which somehow makes sense.

Also, the class PokeDexResultBase should be called PokedexResultBase, for consistency.
_webmaster.100040
"Usually I can simply copy and paste content from any blog to my blog, and add a noindex tag to avoid any duplicate/copied content issues." I copied the above claim from a popular blog, and I need to confirm whether it is true.

My site has two parts: an article syndication part and a blog part. I usually write 4 x 1000+ word articles per month and add 100+ syndication posts from different popular sites. I am targeting SEO traffic for my blog only.

So is it 100% safe to add a noindex tag to all syndication posts to avoid any duplicate/copied content issues? Or should I add rel=canonical instead? Which is the best one in my case: the noindex tag or rel=canonical?
Is it 100% safe to add a NOINDEX tag to avoid duplicate/copied content issues?
seo;duplicate content;canonical url;noindex;syndication
null
_webapps.16189
I would like to have a blog site. I have created my own by writing the code, but... the hosting cost is killing me. I have my domain name, and I just want a blog where I can use my unused domain name. My blog will be 80% technical (i.e. I will show pieces of code and downloadable content to my users). Which blog platform would be useful for me? I just don't want my URL to look like http://mysite.hosterName.com - I want it like this: http://www.mysite.com
Blog site with existing domain name
blog;hosting;domain name
WordPress, Blogger, Posterous & Tumblr - all provide hosting with custom domains. I believe WordPress charges extra for custom domains, while Blogger, Posterous & Tumblr do not.
_cogsci.36
Supposedly, people with higher levels of intelligence learn faster than people with lower levels. But this is an awfully coarse observation, and different people can learn at drastically different speeds in different environments.
Does IQ affect learning speed?
learning;intelligence;iq
null
_unix.309757
In our research group we run a computing server for deep learning with a number of NVIDIA Titan X graphics cards and quite a few CPU cores. Given that it is a research lab with ~10 people using the machine, the load on the CPU/GPU cores is almost always high. I am now in charge of showing that the machine is over-used so that I can propose hardware upgrades. To make the argument, I want to create a detailed history of the CPU/GPU/MEM usage on the machine. The problem is, I don't know the right tools for the job. Of course, I can do some scripting, but I prefer off-the-shelf tools since I am not a system administrator :) For monitoring CPU/GPU usage I typically use nvidia-smi and htop, but these are not suitable for generating long-term histories. Any recommendations on creating such histories?
Creating CPU/GPU/Memory load average history
memory;nvidia;cpu;load
null
_scicomp.2464
Edit: I was advised to replace the question with a more specific one. Coming from a very theoretical background, I'm pretty ignorant about what practical matrix solvers exist. (I have been, and will continue, scouring the web for information, but I figured I would get direct and concise answers here as well.) Currently, what are the most important matrix solvers (implemented in any software)? I have a rudimentary programming background, and I'm familiar with classical theoretical methods for solving matrix systems. However, I'm well aware computer scientists have been at work finding ingenious ways to solve larger systems more quickly, especially for specialized classes of matrices. I'm looking at applications of matrix solvers in industry, in particular for use in simulation programs.
Wanting to learn about matrix solvers
matrices;reference request;linear solver
The best high-level overview that I know of is Trefethen and Bau. If I had to boil it down to a list, it would be (somewhat in pedagogical order):

- Dense $QR$ factorization
- Dense symmetric/Hermitian Eigenvalue Decomposition (EVD)
- Dense Singular Value Decomposition (SVD)
- The Conjugate Gradient method (CG)
- The Generalized Minimum Residual method (GMRES)
- Sparse Cholesky and $LU$ factorization
- Fast methods, such as multigrid, the Fast Fourier Transform (FFT), and the Fast Multipole Method (FMM)
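To get a feel for how a couple of the listed methods are used in practice, here is a minimal SciPy sketch; the 1-D Poisson test matrix, sizes, and names are purely illustrative:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A sparse, symmetric positive definite test matrix (1-D Poisson stencil).
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# The Conjugate Gradient method (CG), appropriate since A is SPD.
x_cg, info = spla.cg(A, b)

# Sparse LU factorization, reusable for repeated solves with the same A.
lu = spla.splu(A)
x_lu = lu.solve(b)

# Both should satisfy A x = b up to the iterative solver's tolerance.
print(np.linalg.norm(A @ x_cg - b), np.linalg.norm(A @ x_lu - b))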
_scicomp.2829
I have been reading a recent paper. In it, the authors performed molecular dynamics (MD) simulations of parallel-plate supercapacitors, in which liquid resides between the parallel-plate electrodes. To simplify the situation, let us suppose that the liquid between the electrodes is argon liquid.The system has a slab geometry, so the authors are only interested in variations of the liquid structure along the $z$ direction. Thus, the authors compute the particle number densities averaged over $x$ and $y$: $\bar{n}_\alpha(z)$, where $\alpha$ is a solvent species. (That is, in my simplified example, $\alpha$ is argon -- an argon atom.) $\bar{n} _\alpha(z)$ has dimensions of $\frac{\text{number}}{\text{length}^3}$ or simply $\text{length}^{-3}$, I think.The $xy$-plane is given by the inequalities $-x_0 < x < x_0$ and $-y_0 < y < y_0$. The area $A_0$ of the $xy$-plane is thus given by $A_0 = 4x_0y_0$.So, the authors define the particle number density averaged over $x$ and $y$ as follows: $$\bar{n}_\alpha(z) = A_0^{-1} \int_{-x_0}^{x_0} \int_{-y_0}^{y_0} dx^\prime dy^\prime n_\alpha(x^\prime, y^\prime, z)$$ where $A_0 = 4x_0y_0$ and $n_\alpha(x, y, z)$ is the local number density of $\alpha$ at $(x, y, z)$.Thus, $\bar{n}_\alpha(z)$ is simply proportional to $n_\alpha$ integrated over $x$ and $y$. But, my question is, what is $n_\alpha(x, y, z)$? How is $n_\alpha(x, y, z)$ determined in practice?As far as the computer is concerned, the argon atoms are point particles; they are modeled as having zero volume (although they interact by Lennard-Jones interactions). So how is it possible to define a number density?Do we simply cut the slab in slices along $z$ and then assign the particles to these slices? There might be 5 particles in the first $z$ slice, 10 in the second, 7 in the third, and so on. If I then divide 5, 10, and 7 by the volume of the respective slice, then I have a sort of number density, with units of $\frac{\text{number}}{\text{length}^3}$ or simply $\text{length}^{-3}$. But how do I now integrate this $n_\alpha(x^\prime, y^\prime, z)$ over $x$ and $y$? Do I have to additionally perform binning in the $x$ and $y$ directions?
In molecular dynamics (MD) simulations, how is particle number density computed in practice?
algorithms;computational chemistry;statistics;molecular dynamics
As is often the case in simulation papers, the mathematical description of the reported quantities does not literally describe the algorithm used to compute those quantities. (This typically happens when the main author and the compute monkey are not the same person.)

In your case, there is no point in first computing $n_\alpha(x',y',z)$ and then integrating out $x$ and $y$. As you suggest in the question, one may estimate $\bar{n}_\alpha(z)$ directly by only constructing bins along the $z$-axis. Just compute the (time-averaged) number of particles in a bin and divide it by the volume of the bin. That will give you an estimate of $\bar{n}_\alpha(z)$ in the bin.
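A minimal sketch of that binning estimate, assuming NumPy and a hypothetical trajectory array of shape (n_frames, n_atoms, 3); the box cross-section Lx, Ly and all names here are illustrative, not from the paper:

import numpy as np

def density_profile(positions, Lx, Ly, z_min, z_max, n_bins=100):
    """Estimate the xy-averaged number density n(z) by binning along z."""
    edges = np.linspace(z_min, z_max, n_bins + 1)
    bin_volume = Lx * Ly * (edges[1] - edges[0])  # volume of one z-slice
    # Histogram the z-coordinate of every particle in every frame ...
    counts, _ = np.histogram(positions[..., 2].ravel(), bins=edges)
    # ... then time-average by dividing by the number of frames.
    density = counts / (positions.shape[0] * bin_volume)  # units: length^-3
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density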
_codereview.4568
I started implementing a graph drawing application in JavaScript and the <canvas> element, and I just wanted to hear your thoughts on the progress so far. I'm very open to suggestions and I'm very interested to hear what you have to say. You can see the progress so far in the present code; if you have any questions about it, feel free to ask.

var graph = {
    init: function(edges) {
        graph.vertices = {};
        graph.edges = {};
        graph.canvas = document.getElementById('platno');
        graph.width = graph.canvas.width;
        graph.height = graph.canvas.height;
        graph.hookes_test = true;
        graph.ctx = graph.canvas.getContext('2d');
        document.addEventListener("mousedown", graph.klik, false);
        document.addEventListener("mouseup", graph.drop, false);
        document.addEventListener("dblclick", graph.dblclick, false);
        graph.addNodesFromEdgesList(edges);
        graph.addEdgesFromEdgesList(edges);
        graph.mapEdges(graph.oDuljina);
        setInterval(graph.draw, 1024 / 24);
    },
    addNodesFromEdgesList: function(EdgesList) {
        for (var r1 = 0; r1 < EdgesList.length; r1++) {
            for (var r2 = 0; r2 < EdgesList[r1].length - 1; r2++) {
                if ((typeof graph.vertices[EdgesList[r1][r2]]) === "undefined") {
                    graph.addNode({
                        id: EdgesList[r1][r2],
                        x: Math.floor(graph.width / 2 + 100 * Math.cos(Math.PI * (EdgesList[r1][r2] * 2) / 11)),
                        y: Math.floor(graph.height / 2 + 100 * Math.sin(Math.PI * (EdgesList[r1][r2] * 2 / 11))),
                        size: 6,
                        ostalo: 100
                    });
                }
            }
        }
    },
    addEdgesFromEdgesList: function(EdgesList) {
        for (var a = 0; a < EdgesList.length; a++) {
            graph.addEdge({ from: EdgesList[a][0], to: EdgesList[a][1], id: a });
        }
    },
    node: function(node) {
        this.id = node.id;
        this.pos = new vektor(node.x, node.y);
        this.size = node.size;
        this._size = node.size;
        this.expanded = false;
    },
    addNode: function(node) {
        (typeof graph.vertices[node.id]) === "undefined" ? graph.vertices[node.id] = new graph.node(node) : console.log("Duplikat cvora! Id: " + node.id);
    },
    removeNode: function(id) {
        if (typeof graph.vertices[id] !== "undefined") {
            graph.removeEdgeByNodeId(id);
            if (id == graph.info_node.id) graph.info_node = false;
            delete graph.vertices[id];
        } else {
            console.log("Ne postoji node! Id: " + id);
        }
    },
    edge: function(edge) {
        this.id = edge.id;
        this.from = graph.vertices[edge.from];
        this.to = graph.vertices[edge.to];
    },
    addEdge: function(edge) {
        (typeof graph.edges[edge.id]) === "undefined" ? graph.edges[edge.id] = new graph.edge(edge) : console.log("Duplikat brida! Id: " + edge.id);
    },
    removeEdgeByEdgeId: function(id) {
        (typeof graph.edges[id]) !== "undefined" ? delete graph.edges[id] : console.log("Ne postoji brid! Id: " + id);
    },
    removeEdgeByNodeId: function(id) {
        if (typeof graph.vertices[id] !== "undefined") {
            for (var edge in graph.edges) {
                if (graph.edges.hasOwnProperty(edge) && (graph.edges[edge].from.id == id || graph.edges[edge].to.id == id)) {
                    delete graph.edges[edge];
                }
            }
        } else {
            console.log("Ne postoji cvor! Id: " + id);
        }
    },
    clearGraph: function() {
        for (var id in graph.vertices) {
            if (graph.vertices.hasOwnProperty(id)) {
                graph.removeNode(id);
            }
        }
    },
    mapNodes: function(funkcija, obj) {
        var res = [], tmp, id;
        for (id in graph.vertices) {
            if (graph.vertices.hasOwnProperty(id)) {
                tmp = funkcija.apply(graph, [graph.vertices[id], obj || {}]);
                if (tmp) res.push(tmp);
            }
        }
        return res;
    },
    mapEdges: function(funkcija) {
        for (var id in graph.edges) {
            if (graph.edges.hasOwnProperty(id)) {
                funkcija.apply(graph, [graph.edges[id].from, graph.edges[id].to, graph.edges[id].id]);
            }
        }
    },
    vuci: function(e) {
        if (graph.drag) {
            graph.drag.pos.x = (e.pageX - graph.canvas.offsetLeft);
            graph.drag.pos.y = (e.pageY - graph.canvas.offsetTop);
            if (graph.drag.pos.x > graph.width - 6) graph.drag.pos.x = graph.width - 6;
            else if (graph.drag.pos.x < 6) graph.drag.pos.x = 6;
            else if (graph.drag.pos.y > graph.height - 6) graph.drag.pos.y = graph.height - 6;
            else if (graph.drag.pos.y < 6) graph.drag.pos.y = 6;
        }
    },
    klik: function(e) {
        graph.drag = graph.getNodeFromXY(e.pageX - graph.canvas.offsetLeft, e.pageY - graph.canvas.offsetTop)[0];
        document.addEventListener("mousemove", graph.vuci, false);
    },
    drop: function() {
        graph.drag = false;
        document.removeEventListener("mousemove", graph.vuci, false);
    },
    getNodeFromXY: function(_x, _y) {
        return graph.mapNodes(function(node, obj) {
            if ((obj.x > node.pos.x - node.size) && (obj.x < node.pos.x + node.size) && (obj.y > node.pos.y - node.size) && (obj.y < node.pos.y + node.size)) {
                return node;
            } else {
                return false;
            }
        }, { x: _x, y: _y });
    },
    draw: function() {
        graph.ctx.clearRect(0, 0, graph.width, graph.height);
        background();
        graph.mapEdges(crtaj_v);
        graph.mapNodes(crtaj_n);
        graph.info();
        if (!graph.hookes_test) graph.mapEdges(graph.hookes);

        function background() {
            var grd = graph.ctx.createRadialGradient(graph.width / 2, graph.height / 2, 30, graph.width / 2, graph.height / 2, graph.height);
            grd.addColorStop(0, "#42586d");
            grd.addColorStop(0.5, "#36495a");
            grd.addColorStop(1, "#26323e");
            graph.ctx.fillStyle = grd;
            graph.ctx.fillRect(0, 0, graph.width, graph.height);
        }

        function crtaj_n(v) {
            graph.ctx.fillStyle = 'rgba(0,0,0,0.4)';
            graph.ctx.beginPath();
            graph.ctx.arc(v.pos.x, v.pos.y, v.size, 0, Math.PI * 2, true);
            graph.ctx.fill();
            graph.ctx.strokeStyle = '#818f9a';
            graph.ctx.arc(v.pos.x, v.pos.y, v.size, 0, Math.PI * 2, true);
            graph.ctx.stroke();
            return false;
        }

        function crtaj_v(v1, v2) {
            graph.ctx.beginPath();
            graph.ctx.strokeStyle = 'rgba(129,143,154,0.1)';
            var duljina = [v1.pos.udaljenost(v2.pos) - v1.size, v1.pos.udaljenost(v2.pos) - v2.size];
            var kut = Math.atan2(v2.pos.y - v1.pos.y, v2.pos.x - v1.pos.x);
            graph.ctx.moveTo(v2.pos.x - (duljina[0] * Math.cos(kut)), v2.pos.y - (duljina[0] * Math.sin(kut)));
            graph.ctx.lineTo(v1.pos.x + (duljina[1] * Math.cos(kut)), v1.pos.y + (duljina[1] * Math.sin(kut)));
            graph.ctx.stroke();
        }
    },
    dblclick: function(e) {
        var dbl = graph.getNodeFromXY(e.pageX - platno.offsetLeft, e.pageY - platno.offsetTop)[0] || false;
        if (dbl.expanded) {
            dbl.size = dbl._size;
            dbl.expanded = false;
            graph.info_node = false;
        } else if (dbl) {
            graph.mapNodes(function(v1) {
                if (v1.expanded) {
                    v1.size = v1._size;
                    v1.expanded = false;
                    graph.info_node = false;
                }
            });
            dbl.size = 30;
            dbl.expanded = true;
            graph.info_node = dbl;
        }
    },
    info: function() {
        if (graph.info_node) {
            graph.ctx.font = "10px Verdana";
            graph.ctx.textAlign = "center";
            graph.ctx.fillStyle = '#ffffff';
            graph.ctx.fillText("Node: " + graph.info_node.id, graph.info_node.pos.x, graph.info_node.pos.y + 3, 30);
        }
    },
    hookes: function(v1, v2, id) {
        var duljina = v1.pos.oduzmi(v2.pos),
            udaljenost = duljina.duljina() - (graph.edges[id].duljina),
            HL = 20 * (udaljenost / duljina.duljina()),
            kut = Math.atan2(v2.pos.y - v1.pos.y, v2.pos.x - v1.pos.x);
        (graph.drag && (graph.drag.id != v1.id)) || !graph.drag ? graph.zbrojiLokacija(v1, kut, HL) : false;
        (graph.drag && (graph.drag.id != v2.id)) || !graph.drag ? graph.oduzmiLokacija(v2, kut, HL) : false;
    },
    oDuljina: function(v1, v2, id) {
        graph.hookes_test = false;
        graph.edges[id].duljina = v1.pos.oduzmi(v2.pos).duljina();
    },
    zbrojiLokacija: function(v1, kut, HL) {
        var dis = new vektor(HL * Math.cos(kut), HL * Math.sin(kut));
        if (v1.pos.x + dis.x > graph.width - v1.size || v1.pos.x + dis.x < 0 + v1.size) {
            v1.pos.x += dis.x * (-1);
            v1.pos.y += dis.y;
        } else if (v1.pos.y + dis.y > graph.height - v1.size || v1.pos.y + dis.y < 0 + v1.size) {
            v1.pos.x += dis.x;
            v1.pos.y += dis.y * (-1);
        } else {
            v1.pos = v1.pos.zbroji(dis);
        }
    },
    oduzmiLokacija: function(v1, kut, HL) {
        var dis = new vektor(HL * Math.cos(kut), HL * Math.sin(kut));
        if (v1.pos.x + dis.x > graph.width - v1.size || v1.pos.x + dis.x < 0 + v1.size) {
            v1.pos.x -= dis.x * (-1);
            v1.pos.y -= dis.y;
        } else if (v1.pos.y + dis.y > graph.height - v1.size || v1.pos.y + dis.y < 0 + v1.size) {
            v1.pos.x -= dis.x;
            v1.pos.y -= dis.y * (-1);
        } else {
            v1.pos = v1.pos.oduzmi(dis);
        }
    }
};

function vektor(x, y) {
    this.x = x;
    this.y = y;
}
vektor.prototype.zbroji = function(v1) {
    return new vektor(this.x + v1.x, this.y + v1.y);
};
vektor.prototype.oduzmi = function(v1) {
    return new vektor(this.x - v1.x, this.y - v1.y);
};
vektor.prototype.division = function(x) {
    return new vektor(this.x / x, this.y / x);
};
vektor.prototype.multiply = function(x) {
    return new vektor(this.x * x, this.y * x);
};
vektor.prototype.udaljenost = function(v1) {
    return Math.sqrt(Math.pow(v1.x - this.x, 2) + Math.pow(v1.y - this.y, 2));
};
vektor.prototype.duljina = function() {
    return Math.max(20, Math.sqrt(Math.pow(this.x, 2) + Math.pow(this.y, 2)));
};

////////////////////////////////////
//           Test data            //
////////////////////////////////////
var edges = [
    [1, 2, 1], [1, 3, 1], [2, 3, 1], [3, 4, 1], [3, 5, 1],
    [3, 6, 1], [4, 1, 1], [4, 2, 1], [5, 6, 1]
];
graph.init(edges);

UPDATE: added new code. This is just a preview; I didn't have time to optimize the code, and some of the function names and variables are written in my native language. I've also added a jsfiddle link so you can see the work so far in action: http://jsfiddle.net/nNcHJ/1/
Javascript graph skeleton implementation
javascript;graph
null
_softwareengineering.108812
I know that CRM stands for Customer Relationship Management, CMS stands for Content Management System, and ERP stands for Enterprise Resource Planner. I would like to know what each of these does best and which scenarios they are used in - a basic understanding of the three, the differences among them, and the environments in which they are used.

Edit: I read up on the wiki a bit more, and I now understand from the ERP wiki that a CRM is a part of an ERP. To be more specific: why have a CRM separately if it's already a part of the ERP? Abstraction? To log and store the information of a call center that calls a hundred-odd people to get info, which is better for achieving the following tasks, a CRM or an ERP?

- Store all the information about all the people that have been called
- Store information about which person called and how many hours a person has worked
- Find out which employee has been more productive

My friend strongly believes a CRM will get the job done. So I thought I'd ask you guys which would be better, and what makes it better.
What's the difference between CRM, CMS and ERP
cms;erp;crm
null
_unix.367015
The output of time can be controlled with formatted printing. For example, time -f "%e seconds" will print the time a command took, in seconds. Now, can I do some simple math on %e and print it out? For example, I want to print out the value of %e/60 (which is the time in minutes). How should I write this in the time command? Thank you very much.
time command output format
command line;time
null
_softwareengineering.204909
In order to submit a desktop application for the Windows 8 app store, you need to digitally sign any driver or .exe associated with the application. However, the application I was trying to submit contains several files that are redistributions of other companies' software, and some of these are not signed. My application was rejected on these grounds. Is it legal (or ethical) to sign other companies' work so that we can submit our application? I think it might be considered some form of false representation but I'm not sure.
Signing redistributed files
legal;ethics
I can only answer this question on an ethical basis. Whether my ethical viewpoint is reflected in law is totally outside of my domain of expertise. (Also, I don't know the ins and outs of Microsoft's signing practices, so please correct me if I say something that's inconsistent with MS's way of doing things.)

Suppose you sign some file F (having some contents C) with a signing key K. The resulting file/signature pair [F, S(K,C)] says:

"The owner of key K hereby asserts that file F has contents C."

Assuming you have the right to distribute the files unsigned, it would seem that you would also have the default right to distribute them with a signature. A signature is a purely programmatic cryptographic transformation of the file that carries the above assertion. (Saying that you can't cryptographically sign something is quite close to claiming you're not allowed to produce a hash of it.)

Your signature doesn't assert that you're the author of the code; rather, it asserts that you are a point of trust in the distribution of the code. That's not being misleading; that's merely a perfectly accurate representation of what is happening. The end recipient must decide whether they trust code distributed by you. The identity of the original author isn't relevant for our trust model, because you are the last person that touched the code before it arrived at their computer.
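To make that assertion concrete, here is a toy sketch using the third-party Python cryptography package; the file contents are placeholders, and note that nothing in it identifies an author, only the holder of the signing key:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # stands in for key K
contents = b"the exact bytes of file F"     # stands in for contents C
signature = private_key.sign(contents)      # S(K, C)

# Anyone with the public key can check that F still has contents C ...
public_key = private_key.public_key()
public_key.verify(signature, contents)      # passes silently

# ... and any change to the contents voids the assertion.
try:
    public_key.verify(signature, b"tampered contents")
except InvalidSignature:
    print("signature no longer matches the contents")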
_cseducators.2787
If you had to recommend a single book to introduce the way programmers think to anyone, but it had to be from outside the field of CS, what would it be? For programmers, my hands-down, all-time, crushing winner would be Becoming a Technical Leader by Gerald M. Weinberg, which is mainly about self-development but still for an already technical audience. When I was in high school, the book Godel, Escher, Bach by Douglas R. Hofstadter had just come out, and we had an entire course devoted to it. It is probably one of the most quoted books of all time. What one book (besides Alice in Wonderland) conveys the mindset and enjoyment of mental processes similar to programming to a non-technical person? What is it that makes this book important/vital?
What non-programming book is vital for learning the CS mindset?
teaching analogy;resource request;adult education;computational thinking;textbook
The hardest part of anything in computer science is determining the requirements first. If you don't know what the program should do, then there is no way to do it correctly. Thus, Winnie the Pooh is a wonderful book on the matter. It clearly describes, time and time again, how simple misunderstandings of the base assumptions lead to absurdities of action - from tracking a woozle of one's own making (and debugging where the error actually originated) to trying to define a heffalump and building something completely wrong as a result. With the summary of Winnie the Pooh in "In which Christopher Robin gives a Pooh party, and we say good-bye", we realize that our job as writers of code is to take the stuff of dreams and make it into reality. Winnie the Pooh has the advantage over many other books of being accessible to all levels of readers, and it can often be found in less expensive editions compared to many other textbooks.
_codereview.46559
I am trying to remove the file extension of a file name with many dots in it:

string a = "asdasdasd.asdas.adas.asdasdasdasd.edasdasd";
string b = a.Substring(a.LastIndexOf('.'), a.Length - a.LastIndexOf('.'));
string c = a.Replace(b, "");
Console.WriteLine(c);

Is there any better way of doing this?
How to remove file extension using C#?
c#
If you can, just use Path.GetFileNameWithoutExtension, which returns the file name of the specified path string without the extension:

Path.GetFileNameWithoutExtension("asdasdasd.asdas.adas.asdasdasdasd.edasdasd");

With one line of code you get the same result. If you want to create one by yourself, why not just use this?

int index = a.LastIndexOf('.');
b = index == -1 ? a : a.Substring(0, index);

You take everything from 0 (the start) to the last dot, which marks the start of the extension.

P.S. Special thanks to @Anthony and @CompuChip for pointing out some mistakes I made - bad day, maybe.
_codereview.27715
I wrote a script that updates some library files for the game framework libgdx by grabbing the latest nightly build .zip file from a server and extracting the contents to the appropriate locations.

#!/usr/bin/python
__appname__ = 'libgdx_library_updater'
__version__ = "0.1"
__author__ = "Jon Renner <[email protected]>"
__url__ = "http://github.com/jrenner/libgdx-updater"
__licence__ = "MIT"

import os, time, sys, urllib2, re, datetime, tempfile, zipfile, argparse

# error handling functions and utils
def fatal_error(msg):
    print "ERROR: %s" % msg
    sys.exit(1)

def warning_error(msg):
    print "WARNING: %s" % msg
    if not FORCE:
        answer = confirm("abort? (Y/n): ")
        if answer in YES:
            fatal_error("USER QUIT")

def confirm(msg):
    answer = raw_input(msg)
    return answer.lower()

def human_time(t):
    minutes = t / 60
    seconds = t % 60
    return "%.0fm %.1fs" % (minutes, seconds)

# constants
YES = ['y', 'ye', 'yes', '']
# for finding the time of the latest nightly build from the web page html
DATE_RE = r"[0-9]{1,2}-[A-Za-z]{3,4}-[0-9]{4}\s[0-9]+:[0-9]+"
REMOTE_DATE_FORMAT = "%d-%b-%Y %H:%M"
SUPPORTED_PLATFORMS = ['android', 'desktop', 'gwt']
CORE_LIBS = ["gdx.jar", "gdx-sources.jar"]
DESKTOP_LIBS = ["gdx-backend-lwjgl.jar", "gdx-backend-lwjgl-natives.jar", "gdx-natives.jar"]
ANDROID_LIBS = ["gdx-backend-android.jar", "armeabi/libgdx.so", "armeabi/libandroidgl20.so",
                "armeabi-v7a/libgdx.so", "armeabi-v7a/libandroidgl20.so"]
GWT_LIBS = ["gdx-backend-gwt.jar"]

# parse arguments
EPILOGUE_TEXT = "%s\n%s" % (__author__, __url__) + "\nUSE AT YOUR OWN RISK!"
parser = argparse.ArgumentParser(description='LibGDX Library Updater %s' % __version__, epilog=EPILOGUE_TEXT)
parser.add_argument('-d', '--directory', help='set the libgdx project/workspace directory', default=os.getcwd())
parser.add_argument('-i', '--interactive', action='store_true', help='ask for confirmation for every file', default=False)
parser.add_argument('-f', '--force-update', action='store_true',
                    help='no confirmations, just update without checking nightly\'s datetime', default=False)
parser.add_argument('-a', '--archive', help='specify libgdx zip file to use for update', default=None)
args = parser.parse_args()
PROJECT_DIR = args.directory
INTERACTIVE = args.interactive
FORCE = args.force_update
ARCHIVE = args.archive
# mutually exclusive
if FORCE:
    INTERACTIVE = False

# check the time of the latest archive on the nightlies server
def get_remote_archive_mtime():
    index_page = urllib2.urlopen("http://libgdx.badlogicgames.com/nightlies/")
    contents = index_page.read()
    print "-- OK --"
    # regex for filename
    regex = r"libgdx-nightly-latest\.zip"
    # add regex for anything followed by the nightly html time format
    regex += r".*%s" % DATE_RE
    try:
        result = re.findall(regex, contents)[0]
    except IndexError as e:
        print "REGEX ERROR: failed to find '%s' in:\n%s" % (regex, contents)
        fatal_error("regex failure to match")
    try:
        mtime = re.findall(DATE_RE, result)[0]
    except IndexError as e:
        print "REGEX ERROR: failed to find datetime in: %s" % result
        fatal_error("regex failure to match")
    dtime = datetime.datetime.strptime(mtime, REMOTE_DATE_FORMAT)
    return dtime

# downloads and returns a temporary file containing the latest nightly archive
def download_libgdx_zip():
    libgdx = tempfile.TemporaryFile()
    url = "http://libgdx.badlogicgames.com/nightlies/libgdx-nightly-latest.zip"
    # testing url - don't hammer badlogic server, host the file on localhost instead
    # url = "http://localhost/libgdx-nightly-latest.zip"
    resp = urllib2.urlopen(url)
    print "downloading file: %s" % url
    total_size = resp.info().getheader('Content-Length').strip()
    total_size = int(total_size)
    # base 10 SI units - following Ubuntu policy because it makes sense -
    # https://wiki.ubuntu.com/UnitsPolicy
    total_size_megabytes = total_size / 1000000.0
    bytes_read = 0
    chunk_size = 10000  # 10kB per chunk
    while True:
        chunk = resp.read(chunk_size)
        libgdx.write(chunk)
        bytes_read += len(chunk)
        bytes_read_megabytes = bytes_read / 1000000.0
        percent = (bytes_read / float(total_size)) * 100
        sys.stdout.write("\rprogress: {:>8}{:.2f} / {:.2f} mB ({:.0f}% complete)".format(
            "", bytes_read_megabytes, total_size_megabytes, percent))
        sys.stdout.flush()
        if bytes_read >= total_size:
            print "finished download"
            break
    return libgdx

def update_files(libs, locations, archive):
    for lib in libs:
        if lib in archive.namelist():
            if INTERACTIVE:
                answer = confirm("overwrite %s? (Y/n): " % lib)
                if answer not in YES:
                    print "skipped: %s" % lib
                    continue
            with archive.open(lib, "r") as fin:
                filename = os.path.basename(lib)
                final_path = os.path.join(locations[lib], filename)
                with open(final_path, "w") as fout:
                    fout.write(fin.read())
                print "extracted to %s" % final_path

def run_core(locations, archive):
    title("CORE")
    update_files(CORE_LIBS, locations, archive)

def run_android(locations, archive):
    title("ANDROID")
    update_files(ANDROID_LIBS, locations, archive)

def run_desktop(locations, archive):
    title("DESKTOP")
    update_files(DESKTOP_LIBS, locations, archive)

def run_gwt(locations, archive):
    title("GWT")
    update_files(GWT_LIBS, locations, archive)

def search_for_lib_locations(directory):
    platforms = []
    search_list = CORE_LIBS + DESKTOP_LIBS + ANDROID_LIBS
    locations = {}
    for element in search_list:
        locations[element] = None
    for (this_dir, dirs, files) in os.walk(directory):
        for element in search_list:
            split_path = os.path.split(element)
            path = os.path.split(split_path[0])[-1]
            filename = split_path[1]
            for f in files:
                match = False
                if filename == f:
                    f_dir = os.path.split(this_dir)[-1]
                    if path == "":
                        match = True
                    else:
                        if path == f_dir:
                            match = True
                if match:
                    if locations[element] != None:
                        print "WARNING: found %s in more than one place!" % element
                        if not FORCE:
                            answer = confirm("continue? (Y/n): ")
                            if answer not in YES:
                                fatal_error("USER ABORT")
                    locations[element] = this_dir
    for lib, loc in locations.items():
        if loc == None:
            print "WARNING: did not find library %s in directory tree of: %s" % (lib, directory)
    found_libraries = [lib for lib, loc in locations.items() if locations[lib] != None]
    if found_all_in_set(CORE_LIBS, found_libraries):
        platforms.append("core")
    if found_all_in_set(ANDROID_LIBS, found_libraries):
        platforms.append("android")
    if found_all_in_set(DESKTOP_LIBS, found_libraries):
        platforms.append("desktop")
    if found_all_in_set(GWT_LIBS, found_libraries):
        platforms.append("gwt")
    return platforms, locations

def found_all_in_set(lib_set, found_list):
    for lib in lib_set:
        if lib not in found_list:
            return False
    return True

def main():
    start_time = time.time()
    print "finding local libraries in %s" % PROJECT_DIR
    platforms, locations = search_for_lib_locations(PROJECT_DIR)
    if "core" not in platforms:
        fatal_error("did not find CORE libraries %s in project directory tree" % str(CORE_LIBS))
    else:
        print "found CORE libraries"
    for supported in SUPPORTED_PLATFORMS:
        if supported in platforms:
            print "found libraries for platform: %s" % supported.upper()
        else:
            print "WARNING: did not find libraries for platform: %s - WILL NOT UPDATE" % supported.upper()
    if ARCHIVE == None:
        print "checking latest nightly..."
        mtime = get_remote_archive_mtime()
        print "latest nightly from server: %s" % mtime
        if not FORCE:
            answer = confirm("replace local libraries with files from latest nightly? (Y/n): ")
            if answer not in YES:
                fatal_error("USER QUIT")
        libgdx = download_libgdx_zip()
    else:
        if not os.path.exists(ARCHIVE):
            fatal_error("archive file not found: %s" % ARCHIVE)
        if not FORCE:
            answer = confirm("replace local libraries with files from '%s'? (Y/n): " % os.path.basename(ARCHIVE))
            if answer not in YES:
                fatal_error("USER QUIT")
        libgdx = open(ARCHIVE, "r")
    with zipfile.ZipFile(libgdx) as archive:
        if "core" in platforms:
            run_core(locations, archive)
        if "desktop" in platforms:
            run_desktop(locations, archive)
        if "android" in platforms:
            run_android(locations, archive)
        if "gwt" in platforms:
            run_gwt(locations, archive)
    duration = time.time() - start_time
    print "finished updates in %s" % human_time(duration)
    libgdx.close()

def title(text):
    dashes = "-" * 10
    print dashes + " %s " % text + dashes

if __name__ == "__main__":
    main()
Updating libgdx game framework library files
python;library;libgdx
Your code looks pretty good. Some notes:

- According to PEP8, imports should be written on separate lines.
- fatal_error: I'd probably write the signature this way: fatal_error(msg, code=1).
- INTERACTIVE -> interactive. According to PEP8, global variables should be written lower-case.
- That's an opinion: I prefer to write multi-line lists/dictionaries in JSON style. You save indentation space and the reordering of elements is straightforward (all at the meager cost of two lines):

DESKTOP_LIBS = [
    "gdx-backend-lwjgl.jar",
    "gdx-backend-lwjgl-natives.jar",
    "gdx-natives.jar",
]

- The functions download_libgdx_zip and search_for_lib_locations are written (unnecessarily, IMO) in a very imperative fashion. I'd probably refactor them with a functional approach in mind. At the very least, don't reuse the same variable name to hold different values (i.e. total_size), as that takes away the sacred mathematical meaning of =.
- Those functions run_xyz(locations, archive) look very similar; why not a unique run(platform, locations, archive)?
- The function found_all_in_set can be written:

def found_all_in_set(lib_set, found_list):
    return all(lib in found_list for lib in lib_set)

Or:

def found_all_in_set(lib_set, found_list):
    return set(lib_set).issubset(set(found_list))

- if ARCHIVE == None: -> if ARCHIVE is None:, although I prefer the (almost) equivalent, more declarative if not ARCHIVE:.
- There are a lot of imperative snippets that could be written functionally. For example, this simple list comprehension replaces a dozen lines of your code:

platforms = [platform for (platform, libs) in zip(platforms, libs_list)
             if found_all_in_set(libs, found_libraries)]
_unix.116585
I've got Raspbian installed and need to mount a CIFS path after boot is completed. In fstab I have an entry for this with the noauto parameter; when using auto, boot hangs. So in Raspbian, the file is located in /etc/rc.local:

# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
  printf "My IP address is %s\n" "$_IP"
fi

#mount media storage via cifs
if grep -qs '/media/Seagate' /proc/mounts; then
  echo "Seagate already mounted."
else
  echo "Mounting Seagate.."
  sudo -n mount /media/Seagate
fi

exit 0

According to the man page, the -n parameter will suppress a password prompt for the sudo command. This is not the case, however. So I tried editing the sudoers file:

# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root ALL=(ALL:ALL) ALL

# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) NOPASSWD: ALL

# See sudoers(5) for more information on #include directives:
#includedir /etc/sudoers.d

pi ALL=(ALL) NOPASSWD: ALL

Only the %sudo line is changed. I assume root to be in group sudo, and since the %sudo line comes after the root line, no password should be required. Note that root does not have a password anyway on Raspbian; I can enter anything at the password prompt. Any suggestions? Other approaches like crontab may also be suitable.

EDIT 1

Further info: there is

#!/bin/sh -e

on top of the file; the '-e' parameter apparently makes it not halt on errors. The permissions are

drwxr-xr-x 2 root root 4096 Feb 23 12:09
disable sudo password prompt for boot script
mount;sudo;cifs;rc
null
_unix.388100
I have created a service file and placed it in /etc/systemd/system. It starts the service as a daemon at system startup. I don't want it to start automatically at boot; I want the service to start only when I run a command to start it. Thank you.
How to start a service in Linux by running a command, not at system startup?
linux;command line;services
An extract from the Debian systemd documentation:

Show the status of the service example1:

systemctl status example1

Enable example1 to be started on bootup:

systemctl enable example1

Disable example1 so it does not start during bootup:

systemctl disable example1

Start the service example1:

systemctl start example1
_unix.347073
The dev-texlive/texlive* packages are often outdated. As far as I have read, it is not trivial to keep the TeXLive distribution package up to date or to convert the tlmgr system to Gentoo ebuilds. As a solution, one can install TeXLive with the tlmgr installer from https://www.tug.org/texlive/ in user space and update on demand (several bugfixes and updates per day). Unfortunately some Gentoo packages depend on TeXLive, and the package manager will not see the installation in user space. And vice versa, some TeXLive packages depend on software on the Gentoo system. As a workaround I installed dev-texlive/texlive-latex via the package manager to satisfy most of the dependencies and then installed TeXLive (full installation) with tlmgr in user space. Is there a better solution? How can I run a recent TeXLive setup with daily updates on Gentoo Linux?
How to keep TeXLive up to date on gentoo?
gentoo;dependencies;latex;software updates
You may add packages to /etc/portage/profile/package.provided:

dev-texlive/texlive-latex-2016

Please note that:

- you must include a version
- you must not use a leading equals sign

For more information, read this Gentoo wiki article.

Update (2017-02-28): a new Gentoo wiki article has been published: TeX Live manual installation
_unix.204017
I'm trying to learn to use grep groups, like sed's \1\2\3, but I have a problem. For example, I am filtering the /etc/services file to separate all ports. What I do:

~$ grep -E '[0-9]{1,5}/(tcp|udp)' /etc/services

and now I get 'port/protocol'. Next, I try to separate it with groups:

~$ grep -E '\([0-9]{1,5}\)/(tcp|udp)' /etc/services

and it has no effect. Well, trying non-extended grep:

~$ grep '\([0-9]*\)/[tcp\|udp]\1' /etc/services

but the results are not right (/t or /u). So, how do I use groups?
grep searching groups
grep;regular expression
You are referring to regex back-references. Please check these two references:

https://stackoverflow.com/questions/4609949/what-does-1-in-sed-do
http://www.gnu.org/software/grep/manual/html_node/Back_002dreferences-and-Subexpressions.html

And see the output of

grep '\([0-9]\)\1' /etc/services

which will give you a result set of lines where a digit is directly followed by the same digit (the back-reference \1).
_softwareengineering.247197
During grooming, we usually have work items which get approved based upon the team understanding what needs to be done. The product owner does not discuss the details of how it will be done, and stops the discussion if the team tries to. This applies to all work items (new UI, new API, or changes to existing UI/API). The Product Owner's reasoning is that getting into the details (both technical and functional) of how it will be done is something that needs to happen during the sprint, and discussing it during grooming is not correct. The effort estimation also happens based upon this discussion.

But during sprint planning, approved items are taken for the sprint, and the expectation is that if a work item is approved, the team should know all the details of the solution and should be able to complete the item in the sprint. What happens is that the team spends the first 2-3 days doing research and getting the PO's approval for the solution (UI design, key business logic clarification). Unless the analysis results in a great difference in the effort estimate, the team is asked to complete the feature. This happens in each sprint.

I don't have a problem with putting extra effort into completing the work item. My question is regarding the process:

- Should the team say no to approving the item unless the team understands what the solution will look like?
- Should the work item be split into research/analysis, in which a solution prototype will be proposed to the PO? Once the PO approves the prototype, the main work item will be marked approved.
- Any other suggestion as to how this can be handled in a better way?
Split work item into prototype and main work item?
scrum;planning;sprint
null
_cstheory.11165
In the algebraic decision tree model, the result is clear: Elmasry and Belal's lower bound (Verification Of Minimum Redundancy Prefix Codes) shows that the worst-case complexity of computing an optimal prefix-free code over all possible sequences of $\sigma$ frequencies is $\Theta(\sigma\lg\sigma)$.

In the $\Omega(\lg \sigma)$-word RAM model, combining van Leeuwen's reduction to sorting (On The Construction Of Huffman Trees) with any $O(\sigma\lg\lg\sigma)$ sorting algorithm in the RAM model yields an upper bound of $O(\sigma(1+\lg\lg\sigma))$, which many people seem to assume to be optimal, probably by analogy with the situation in the algebraic decision tree model. But the only lower bound available is $\Omega(\sigma)$, the cost of reading the frequencies, leaving a gap between the two bounds.

What is the best upper bound known on the asymptotic complexity of computing an optimal prefix-free code in the word RAM model? Is it $\Theta(\sigma\lg\lg\sigma)$ or $\Theta(\sigma)$? Or somewhere in between?
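For concreteness, here is a minimal sketch of the two-queue construction that the reduction relies on once the frequencies are sorted; it computes just the cost of an optimal code in $O(\sigma)$ time, and the names are illustrative:

from collections import deque

def huffman_cost(sorted_freqs):
    """Cost of an optimal prefix-free code, given nondecreasing frequencies."""
    leaves = deque(sorted_freqs)   # original weights, already sorted
    internal = deque()             # merged weights, produced in sorted order

    def pop_min():
        if not internal or (leaves and leaves[0] <= internal[0]):
            return leaves.popleft()
        return internal.popleft()

    cost = 0
    while len(leaves) + len(internal) > 1:
        a, b = pop_min(), pop_min()
        cost += a + b              # each merge deepens the merged symbols by one level
        internal.append(a + b)
    return cost

# huffman_cost(sorted([5, 9, 12, 13, 16, 45])) == 224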
What is the best upper bound known for the complexity of computing an optimal prefix free code in the RAM model?
ds.algorithms;sorting;prefix free code
null
_unix.193389
I have some files with this structure:

2015-03-25 17:08:17
sysUpTimeInstance 93474;^M
1.ValueforState=2500

I want to remove the line break and join the third line onto the second line; I mean, the output would be like this:

2015-03-25 17:08:17
sysUpTimeInstance 93474;1.ValueforState=2500

I tried with sed:

sed 's/^M$//' myfile.dat > mynewfile.dat

But it only removed the ^M symbol. Any suggestions?
Remove a line break
text processing;sed;awk;newlines;special characters
null
_softwareengineering.234795
In the manufacturing of a device, each unit has to get a unique identifier. So far so good: one can have a file containing the last used ID on a shared drive, or a network-based service managing and assigning IDs, so that uploading the ID into the device is not limited to a single workstation. However, in the case of a total system crash and restoration from backup, one can end up with a few devices with the same ID if some devices were manufactured after the backup was made. One could have each workstation save the biggest used ID and check whether it is not larger than the one they got from the network, but it might happen that the workstation that produced the last device is not turned on after the crash. One might require workers to write down the last used ID after their shifts, but this doesn't sound fool-proof either. Is there a best practice, besides having multiple services managing the IDs with at least one of them off-site? The IDs are large enough (32 bits) compared to the number of devices to be produced to allow for small gaps, but not large enough to go the true-RNG way. This means one can increase the largest ID by a number significantly larger than the amount being produced per day, but how can one keep this company practice enforced for possibly decades?
How can one guarantee unique identifiers, even in the case of system collapse and restoration from backup
production
null
_webapps.19033
Is there any difference between making someone an acquaintance or setting their updates to only important?
Acquaintances vs. Only Important updates on Facebook
facebook
Yes. Not only does putting someone in the Acquaintances list reduce the updates you get from them, you can more easily manage those people's permissions. For instance, when you add your work history you can set permission to friends except acquaintances (which is a default option on all permissions menus now).
_computerscience.100
I often find myself copy-pasting code between several shaders. This includes both certain computations or data shared between all shaders in a single pipeline, and common computations which all of my vertex shaders need (or any other stage).Of course, that's horrible practice: if I need to change the code anywhere, I need to make sure I change it everywhere else.Is there an accepted best practice for keeping DRY? Do people just prepend a single common file to all their shaders? Do they write their own rudimentary C-style preprocessor which parses #include directives? If there are accepted patterns in the industry, I'd like to follow them.
Sharing code between multiple GLSL shaders
glsl
There's a bunch of approaches, but none is perfect.

It's possible to share code by using glAttachShader to combine shaders, but this doesn't make it possible to share things like struct declarations or #define-d constants. It does work for sharing functions.

Some people like to use the array of strings passed to glShaderSource as a way to prepend common definitions before your code, but this has some disadvantages:

- It's harder to control what needs to be included from within the shader (you need a separate system for this).
- It means the shader author cannot specify the GLSL #version, due to the following statement in the GLSL spec: "The #version directive must occur in a shader before anything else, except for comments and white space." Due to this statement, glShaderSource cannot be used to prepend text before the #version declarations. This means that the #version line needs to be included in your glShaderSource arguments, which means that your GLSL compiler interface needs to somehow be told what version of GLSL is expected to be used. Additionally, not specifying a #version will make the GLSL compiler default to using GLSL version 1.10. If you want to let shader authors specify the #version within the script in a standard way, then you need to somehow insert #include-s after the #version statement. This could be done by explicitly parsing the GLSL shader to find the #version string (if present) and making your inclusions after it, but having access to an #include directive might be preferable for controlling more easily when those inclusions need to be made. On the other hand, since GLSL ignores comments before the #version line, you could add metadata for includes within comments at the top of your file (yuck).

The question now is: is there a standard solution for #include, or do you need to roll your own preprocessor extension?

There is the GL_ARB_shading_language_include extension, but it has some drawbacks:

- It is only supported by NVIDIA (http://delphigl.de/glcapsviewer/listreports2.php?listreportsbyextension=GL_ARB_shading_language_include).
- It works by specifying the include strings ahead of time. Therefore, before compiling, you need to specify that the string "/buffers.glsl" (as used in #include "/buffers.glsl") corresponds to the contents of the file buffers.glsl (which you have loaded previously).
- As you may have noticed in point (2), your paths need to start with "/", like Linux-style absolute paths. This notation is generally unfamiliar to C programmers, and means you can't specify relative paths.

A common design is to implement your own #include mechanism, but this can be tricky, since you also need to parse (and evaluate) other preprocessor instructions like #if in order to properly handle conditional compilation (like header guards).

If you implement your own #include, you also have some liberties in how you want to implement it:

- You could pass strings ahead of time (like GL_ARB_shading_language_include).
- You could specify an include callback (this is done by DirectX's D3DCompiler library).
- You could implement a system that always reads directly from the filesystem, as done in typical C applications.

As a simplification, you can automatically insert header guards for each include in your preprocessing layer, so your preprocessor layer looks like:

if (#include and not_included_yet)
    include_file();

(Credit to Trent Reed for showing me the above technique.)

In conclusion, there exists no automatic, standard, and simple solution. In a future solution, you could use some SPIR-V OpenGL interface, in which case the GLSL to SPIR-V compiler could be outside of the GL API. Having the compiler outside the OpenGL runtime greatly simplifies implementing things like #include, since it's a more appropriate place to interface with the filesystem. I believe the current widespread method is to just implement a custom preprocessor that works in a way any C programmer should be familiar with.
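As an illustration of that last approach, here is a minimal sketch of such a preprocessor in Python; it assumes includes resolve relative to the including file, inserts automatic header guards by including each file at most once, and deliberately ignores #if and other conditional-compilation directives (the hard part mentioned above). All names are illustrative:

import os
import re

INCLUDE_RE = re.compile(r'^\s*#include\s+"(?P<path>[^"]+)"\s*$')

def preprocess(path, included=None):
    """Recursively expand #include "file" lines, including each file once."""
    if included is None:
        included = set()
    real = os.path.realpath(path)
    if real in included:  # automatic header guard
        return ""
    included.add(real)
    out = []
    with open(path) as f:
        for line in f:
            m = INCLUDE_RE.match(line)
            if m:
                # Resolve the include relative to the including file.
                child = os.path.join(os.path.dirname(path), m.group("path"))
                out.append(preprocess(child, included))
            else:
                out.append(line.rstrip("\n"))
    return "\n".join(out)

# usage: source = preprocess("shaders/main.frag"), then hand it to glShaderSource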
_webapps.44250
Google Wallet apparently makes it easy to send money to friends, but it's not clear what they can do with any funds received; there seems to be no mention of any method for withdrawing the money. Does this mean received funds need to be spent through a merchant that uses Google Wallet, or can they also be withdrawn (say, to a bank account)? If so, will this also work for non-US-based users? (e.g., I'm in New Zealand)
Can money received through Google Wallet be easily withdrawn?
international;google wallet
null
_softwareengineering.73152
I was talking with a friend about developing a mobile app (Android/iPhone), but I've never messed around with mobile app development code before. My friend flippantly told me that I don't need to know mobile app code... because there are converters out there that convert your code to mobile app code. My question is two-fold:

1. Is this true? Where can I find these fabled converters, and are they any good?
2. I'm pretty handy with PHP/Javascript/Java (in order of skill level). Which mobile application language would be a good starting place? Or... which of my languages converts best to a mobile app language?
Mobile App Development Language Converter?
android;iphone;source code;mobile
I haven't actually tested them, but I have done some research and found that there are more tools like Phonegap: you should look at Titanium from Appcelerator, or, if you have a game-oriented idea, you should look into Corona from AnscaMobile. I've found at least one success story of a boy who used Corona for a mobile app. If you want some nice comparisons between all of them, you should check out this question at SO.

If you have a background in Java, you'd be better off starting with Android; it should be easier for you to get a grip, because the dev language is Java. Besides, it seems to be growing stronger by the second, and the market experiences I've read about are not so bad.

EDIT: I forgot to place a reference to Phonegap, which is another way to accomplish the same results but uses HTML, Javascript and CSS, so if you have done any sort of web development it should be quite easy to learn. The question I referenced above considered Phonegap in the comparison as well; I think that this is the best answer (funny enough, it isn't the chosen one... :))

Hope I can help!
_webapps.61154
I need help importing just the symbols of the Dow from this page: http://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average

I can get the full table just using the following, but can't figure out how to get only the symbols listed on that page:

=IMPORTHTML("http://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average", "table")
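Ideally I want a formula that returns a single column, something along these lines (the table index and column number here are guesses -- they would need to match the actual page):

=INDEX(IMPORTHTML("http://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average", "table", 2), 0, 3)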
Importing certain cell range from a wiki page
wikipedia
null
_softwareengineering.211053
Haskell often provides two versions of a given function f, i.e.:

f :: Int ...
genericF :: Integral i => i ...

There exist many standard library functions with those two versions: length, take, drop, etc. Quoting the description of genericLength:

"The genericLength function is an overloaded version of length. In particular, instead of returning an Int, it returns any type which is an instance of Num. It is, however, less efficient than length."

My question is: where does the efficiency loss come from? Can't the compiler detect that we are using genericLength as an Int and therefore use length for better performance? Why isn't length generic by default?
Why isn't `length` generic by default?
haskell
It's the same reason we have map and fmap: error messages/usability for newbies.

It'd be mighty confusing for many a new programmer to write

myLength = subtract 1 . genericLength

x = myLength [1, 2, 3] :: Int
y = myLength [1, 2, 3] :: Integer

and get an error complaining about the monomorphism restriction. Plus, which do you prefer:

Couldn't match expected type Integer with type Bool

or

No instance for Num Bool arising from usage ....

It's simply a matter of usability. Furthermore, in a lot of cases you end up just wanting defaulting anyways, such as with function composition:

evenElems = even . subtract 1 . genericLength

vs

default ()
evenElems = even . genericLength -- Error: ambiguous type variable.

Here we just want GHC to pick a random Num type, and we really don't care which (it's Integer IIRC). If everything was fully generic, the defaulting rules would get, um... murky. As of right now there are only 4 things that are automatically available as defaults and no way to set anything fine-grained (per function).

As for efficiency, typeclasses mean potentially lugging around a typeclass dictionary, and defaulting to Integer, which is much more expensive than Int.

There are alternative preludes (I think classy-prelude is unmaintained, but interesting) that do attempt to be as generic as possible. Using them is as simple as

{-# LANGUAGE NoImplicitPrelude #-}
import My.Super.Awesome.Prelude
_unix.19915
I have a question: in Linux, is there any method to mount a block of memory as a filesystem? For example, on the x86 architecture, at power-on I reserve a block of memory of about 8 MB; then, when Linux starts up, I mount this block of memory as a filesystem and read and write files to it. What would the fs type be?
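Something like the following is the rough workflow I have in mind (the brd ramdisk below is only an approximation -- it allocates fresh RAM rather than using a pre-reserved block, so treat it as an illustration):

# Illustration only: create an 8 MB RAM-backed block device,
# then format and mount it like an ordinary disk.
modprobe brd rd_nr=1 rd_size=8192   # rd_size is in KiB
mkfs.ext2 /dev/ram0
mount /dev/ram0 /mnt/ramdisk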
Is there any method to mount a block of memory as a filesystem in linux?
kernel;filesystems;memory
null
_unix.313256
At work, I write bash scripts frequently. My supervisor has suggested that the entire script be broken into functions, similar to the following example:

#!/bin/bash

# Configure variables
declare_variables() {
    noun=geese
    count=three
}

# Announce something
i_am_foo() {
    echo "I am foo"
    sleep 0.5
    echo "hear me roar!"
}

# Tell a joke
walk_into_bar() {
    echo "So these ${count} ${noun} walk into a bar..."
}

# Emulate a pendulum clock for a bit
do_baz() {
    for i in {1..6}; do
        expr $i % 2 >/dev/null && echo "tick" || echo "tock"
        sleep 1
    done
}

# Establish run order
main() {
    declare_variables
    i_am_foo
    walk_into_bar
    do_baz
}

main

Is there any reason to do this other than readability, which I think could be equally well established with a few more comments and some line spacing?

Does it make the script run more efficiently (I would actually expect the opposite, if anything), or does it make it easier to modify the code beyond the aforementioned readability potential? Or is it really just a stylistic preference?

Please note that although the script doesn't demonstrate it well, the run order of the functions in our actual scripts tends to be very linear -- walk_into_bar depends on stuff that i_am_foo has done, and do_baz acts on stuff set up by walk_into_bar -- so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put declare_variables after walk_into_bar; that would break things.

An example of how I would write the above script would be:

#!/bin/bash

# Configure variables
noun=geese
count=three

# Announce something
echo "I am foo"
sleep 0.5
echo "hear me roar!"

# Tell a joke
echo "So these ${count} ${noun} walk into a bar..."

# Emulate a pendulum clock for a bit
for i in {1..6}; do
    expr $i % 2 >/dev/null && echo "tick" || echo "tock"
    sleep 1
done
Why write an entire bash script in functions?
bash;shell script;function
I've started using this same style of bash programming after reading Kfir Lavi's blog post "Defensive Bash Programming". He gives quite a few good reasons, but personally I find these the most important:

Procedures become descriptive: it's much easier to figure out what a particular part of code is supposed to do. Instead of a wall of code, you see "Oh, the find_log_errors function reads that log file for errors". Compare it with finding a whole lot of awk/grep/sed lines that use god knows what type of regex in the middle of a lengthy script -- you've no idea what it's doing there unless there are comments.

You can debug functions by enclosing them in set -x and set +x. Once you know the rest of the code works alright, you can use this trick to focus on debugging only that specific function. Sure, you can enclose parts of a script, but what if it's a lengthy portion? It's easier to do something like this:

set -x
parse_process_list
set +x

Printing usage with cat <<- EOF . . . EOF. I've used it quite a few times to make my code much more professional. In addition, parse_args() with the getopts function is quite convenient (see the sketch after this answer's examples). Again, this helps with readability, instead of shoving everything into the script as a giant wall of text. It's also convenient to reuse these.

And obviously, this is much more readable for someone who knows C or Java, or Vala, but has limited bash experience.

As far as efficiency goes, there's not a lot you can do -- bash itself isn't the most efficient language, and people prefer perl and python when it comes to speed and efficiency. However, you can nice a function:

nice -10 resource_hungry_function

Compared to calling nice on each and every line of code, this decreases a whole lot of typing AND can be conveniently used when you want only a part of your script to run with lower priority.

Running functions in the background, in my opinion, also helps when you want a whole bunch of statements to run in the background.

Some of the examples where I've used this style:

https://askubuntu.com/a/758339/295286
https://askubuntu.com/a/788654/295286
https://github.com/SergKolo/sergrep/blob/master/chgreeterbg.sh
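As a concrete illustration of the parse_args()/getopts point above, a minimal sketch (the option letters and the usage helper are made up for the example):

parse_args() {
    # call as: parse_args "$@"
    while getopts "hvf:" opt; do
        case "$opt" in
            h) usage; exit 0 ;;
            v) verbose=1 ;;
            f) input_file=$OPTARG ;;
            *) usage; exit 1 ;;
        esac
    done
}

Everything option-related stays in one named, reusable place instead of being scattered through the script.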
_webapps.68988
I have a spreadsheet with a lot of contact information. In order to sort the data, I want to rank the contacts according to the +1's given on their websites. Is it possible to get the +1 count of a website using Google Spreadsheets?
How to get the +1's of a website?
google spreadsheets;google plus 1
Yes, that's possible!! See the below-mentioned formula to do just that.

Formula

A2 = HYPERLINK("http://www.jacobjantuinstra.nl")

=IMPORTXML(
  "https://plusone.google.com/u/0/_/+1/fastbutton?&count=true&url=" & A2,  // URL
  "//div[@id='aggregateCount']"                                           // xpath_query
)

copy / paste

=IMPORTXML("https://plusone.google.com/u/0/_/+1/fastbutton?count=true&url="&A2,"//div[@id='aggregateCount']")

Explained

The URL that is fed as the first argument of the IMPORTXML formula is constructed to contain the plusone URL that Google uses to perform +1 counts. The second argument looks for a specific div, having an id equal to aggregateCount. All this is possible when a website has a button enabled for Google+.

Screenshots
(images: "hyperlinks" and "counts")

Example

I've created an example file for you: "How to get the +1's of a website?" See this post, here on Web Applications, for another usage of retrieving website info.

H/T: Bruce McPherson, "Extract the plus one counts from a page" - Desktop Liberation
_webmaster.98350
In GA's Audience Overview report I see Users as 145; however, in the Audience -> Behaviour -> New vs Returning report, I see New Visitors as 98 and Returning Visitors as 53. This is highly confusing:

1. What exactly is the difference between a user and a new/returning visitor?
2. If there were 145 users, and 98 new users, then I assume there were 47 returning users?
Google Analytics - User vs New Visitor vs Returning Visitor
google analytics;users;visitors;reporting
null
_softwareengineering.223775
I have to write a decent-size database, 1 GB more or less. I have only taken an introductory semester on SQL databases. During the course we learned that the relational model under SQL has no implied order, and that therefore when querying we should always have a column that sets the corresponding order in the table. However, when doing practical work I find it 'convenient' to make already-ordered small tables -- potentially an unlimited number of them. For example, let's say that you have 5 clients and their data will never be joined together. Is it admissible to have a potentially infinite number of tables, with a table per client?

Another example: I append new rows to the database already in order, so the table is already ordered. Is it admissible to make an unordered query and just assume that the result is implicitly ordered?

All this violates the principles that I know. My boss, who does not know about databases, says that if it works, it works, and that speed is important. What shall I answer him?
Violating SQL principles
database;sql;database design
"5 clients and their data will never be joined together. Is it admissible to have a potentially infinite number of tables, with a table per client?"

I'm not sure where you're going with this. Do you have multiple copies of your Client table? Do you have multiple copies of the tables that are joined with Client? Either way, both of these designs will cause problems in the future, mostly with long-term maintenance.

Usually what happens is that the data model changes, and then you have to propagate the change to all copied tables and hope that nothing breaks. For example, removing/modifying columns might not break referential integrity for most clients, but it might for some, so your test environment must include samples of all copied data.

Another issue is that if you're doing reporting, you need to join all the copied tables if you want your report to cover more than one subset of the copied structures. This can make for very long and unwieldy queries, and if you forget one client's table then your results will be wrong.

And then you have the problem of having queries and code bound to the names of copied tables, even though they may all be doing the same thing.

There are some cases where doing this is useful and possibly necessary, but I find it's something that needs a bit of thought and a good reason.

"Is it admissible to make an unordered query and just assume that the result is implicitly ordered?"

If the ordering of the result set doesn't matter, or if you plan to re-order it in your application later (maybe after some very specific in-application filtering/processing), then I don't see why this is a problem. If you want to ensure that the result set is ordered correctly, add an ORDER BY clause to your query.

Of course, if you find that some tables are frequently hit with the same query and ordering conditions, it might make sense to order the index that your query uses so that it's ordered in the most commonly used way. This could give you some performance improvements. Even if you do this, I'd say it's still a good idea to include the ORDER BY -- mostly so that someone who's unfamiliar with the code knows exactly what the order is and doesn't have to go digging through database definitions to figure it out. How you do this may vary depending on which database product you're using (if it's even supported at all, but I think most major database engines would support this).
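To make the ordering point concrete, the fix is just to state the order in the query itself (table and column names here are invented for illustration):

SELECT client_id, order_date, amount
FROM orders
WHERE client_id = 42
ORDER BY order_date;   -- never rely on insertion order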
_cogsci.12911
I have a friend who has a terrible fear of dams, water reservoirs, jetties, and bridges with big blocks on water -- in general, a fear of large structures on water. He often tells me that when he imagines being near the gates of a dam, he has anxiety attacks and becomes very afraid. However, he is not afraid of swimming or of common places with water, like the ocean or pools. On the contrary, he loves to swim. While he is not afraid of pools, he is very afraid of the wells of the pools where the water enters or where it is filtered.

His psychologist doesn't know the name of such a phobia and, after a while, his diagnosis was hydrophobia. The problem is that he is not afraid of water, only of these large structures over the water whose purpose is to control water flow.

How can I get information about this phobia -- is there a specific name? And how can it be treated?
What is the name of the phobia of dams?
phobia
null
_unix.309128
The hosts file allows us to configure the system to override the whole DNS server system and resolve a particular DNS name to a particular IP address. But what if I just want it to use a particular DNS server for this domain?
How to configure system to use a custom DNS server for a particular domain?
dns
null
_unix.313163
Content of file a.txt:

Event: 112506400,17,2016/07/13-15-25-59.00,,,,,,,,,,,112506400,115101234,02:00:00,pc,abc,4194,file_nam,F,,,LA,,jk,123,,,,,,,,,,

I need a file which doesn't have $20 (file_name) redirected to asort.txt. Is there a shorter command? Currently I am using the below:

cat a.txt | grep "Event:" | awk -F, '{print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37}' > asort.txt
Pruning fields out of a file
text processing;awk
null
_unix.387960
A few days ago, I accidentally replaced my desktop in Mint 18 with a virtual console via an errant keystroke. Not having an obvious way to reverse the change, I logged in and typed exit to leave the console and go back to the desktop. Instead, the computer rebooted, and I lost God-only-knows-how-much unsaved work. After showing the Mint logo and flashing a message underneath it far too quickly to read, the computer now boots only to a black screen. By pressing Esc during reboot, I can get to a Startup Menu. I ran disk checks and memory checks using it, and they all came up clean. None of the other options in this menu help. I have no idea what to do.

The screen is clearly receiving power, and the black screen that it boots to shows a dash or underscore in the top right corner.

How do I get my computer back? It's an HP Pavilion laptop, if that helps.
Mint 18 will boot only to a black screen
linux mint;boot
null
_unix.282781
I've seen posts in the past that advise against changing the partition number, but, like Fox Mulder: "All the evidence to the contrary is not entirely dissuasive."

I have an old MBR disk that I used to boot from before I installed Ubuntu on a faster M.2 SSD. I now want to remove my old Win8 partitions and give the space back to Linux.

fdisk /dev/sda

Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3457a860

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1  *          2048  524290047  524288000  250G  7 HPFS/NTFS/exFAT
/dev/sda2        524290048  548290559   24000512 11.5G 82 Linux swap / Solaris
/dev/sda3        548292606  648290303   99997698 47.7G  5 Extended
/dev/sda4        648290304 3907028991 3258738688  1.5T 83 Linux
/dev/sda5        548292608  648290303   99997696 47.7G 83 Linux

The only partition that I use is sda4.

sda1 was my old Win8 dual-boot partition
sda2 was for swap, but I now run swap on a separate SSD
sda5 was for /boot before my current SSD

Is it possible to delete sda1, sda2 and sda5, perform something like resize2fs to grow my ext4 filesystem into the extra space, and then renumber sda4 to sda1?

I'd rather not resort to buying a new 2TB drive just so I can back up my files and repartition the whole disk.
How to renumber partitions
partition;fdisk
null
_softwareengineering.167758
I've been wondering about this. What exactly do we mean by design and verification?

Should I just apply TDD to make sure my code is SOLID, and not check whether its external behaviour is correct? Should I use BDD for verifying that the behaviour is correct?

Where I also get confused is regarding TDD code katas: to me they looked more about verification than design -- shouldn't they be called BDD katas instead of TDD katas? I reckon that, for example, Uncle Bob's bowling kata leads in the end to a simple and nice internal design, but I felt that most of the process was centred more around verification than design. Design seemed to be a side effect of testing the external behaviour incrementally. I didn't feel we were focusing most of our efforts on design, but more on verification -- while normally we are told the contrary: that in TDD, verification is a side effect and design is the main purpose.

So my question is: what should I focus on exactly when I do TDD -- SOLID, external API usability, or something else? And how can I do that without being focused on verification? What do you focus your energy on when you are practising TDD?
TDD is about design, not verification; concretely, what does that mean?
design;unit testing;development process;tdd;bdd
TDD code katas are about learning TDD practices and how they drive design (and learning good design in that way - yes, normally this means learning how to write SOLID code).This means that TDD is about achieving good design - but having well designed code that is not solving a problem is useless. That's where verification comes in - this can be in many forms, BDD being one of them as one of many automated acceptance testing techniques. Verification is about ensuring that the code written is solving the correct problem.So, when doing TDD, focus on the design - make sure it is clean and SOLID.But don't forget to add unit tests and acceptance tests (and any other tests that are deemed needed).
_unix.237939
I need a bash script to source a file that is encrypted, as the file being sourced contains sensitive information. I would like the script to prompt for the GPG passphrase and then run, sourcing the encrypted file. I cannot figure out how to do this, though. There must be user input for the passphrase, as I don't want to store a key on the server with the encrypted file.

Looking into different methods: I do not want to decrypt the file, source the unencrypted file, then delete it afterwards. I would like to reduce the chance of leaving an unencrypted file behind if something goes wrong in the script.

Is there a way to get the GPG output of a file and source it this way? Possibly collecting STDOUT and parsing it (if GPG can even output the contents this way).

Also, if there is another way to encrypt a file that shell scripts can use, I am not aware of it, but I'm open to other possibilities.
Is there a way to source an encrypted (GPG) file on-the-fly in a script?
bash;shell script;gpg
You can do this using process substitution:

. <(gpg -qd "$encrypted_filename")

Here's an example:

% cat > to-source <<< 'echo Hello'
% . ./to-source
Hello
% gpg -e -r [email protected] to-source
% . <(gpg -qd to-source.gpg)
Hello

gpg -d does not persist the file to disk, it just outputs it to stdout. <() uses a FIFO, which also does not result in the actual file data being written to disk.

In bash, . and source are synonyms, but . is more portable (it's part of POSIX), so I've used it here. Note, however, that <() is not as portable -- as far as I know it's only supported in bash, zsh, ksh88, and ksh93. pdksh and mksh have coprocesses which can have the same effect.
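If you want the script itself to prompt for the passphrase instead of relying on gpg's own prompt, something roughly like this should work (a sketch -- the file name is invented, and gpg 2.1+ additionally needs --pinentry-mode loopback for --passphrase-fd to take effect):

read -rs -p "Passphrase: " pass; echo
. <(printf '%s\n' "$pass" | gpg --batch --passphrase-fd 0 -qd secrets.sh.gpg)
unset pass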
_unix.92849
I want to stop normal users from using the following commands:

/bin/bash
/usr/bin/sudo
/bin/su

I added them to the sudoers file, but normal users can still use them. This is how I added the entry in the sudoers file:

Cmnd_Alias NOTALLOWED = /bin/sh, /bin/bash, /usr/bin/sudo, /bin/su

Only the /bin/bash entry is working; other than that, sudo and su are not blocked, and users are still able to switch users.
How to stop regular users from Switching Users
rhel;sudo;users
Your question is a bit confusing. I think you want to prevent users from running commands as root. If that's what you want:

- Don't give them the root password. If they already have the root password, change it.
- Don't allow them to use sudo to run commands as root. Remove them from the sudoers file.

Forbidding users from running a few commands such as su and bash while allowing everything else is completely useless. They'll be able to run any of hundreds of commands that allow running other commands (sh, env, perl, vi, nethack, gcc). You can't achieve any extra security by blacklisting a few commands. If you don't want users to be allowed to run commands as root, don't allow them to run commands as root: keep them out of the sudoers file, or only allow a carefully chosen set of commands which do not provide a way to run a shell or to overwrite arbitrary files.

It's possible to set up a wheel group such that only users in that group can become root by running su, even if they know the root password. However, this is not really useful, since users who know the root password can log in with login. Again, if there are users who know the root password but shouldn't, that's what you need to address.
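For example, a carefully chosen allowlist in sudoers looks roughly like this (the user name and commands are hypothetical; edit with visudo):

# alice may restart one service and follow one log, and nothing else
alice ALL = (root) /usr/bin/systemctl restart myapp, \
                   /usr/bin/tail -f /var/log/myapp.log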
_unix.179477
I know how to use fail2ban and how to configure a jail, but I'm not comfortable with how it actually works. The thing is, there's a particular jail option that piques my curiosity: findtime.

When I configure a filter, it is necessary to use the HOST keyword (to match the IP address), so that fail2ban can know the IP to compare and ban if necessary. Alright. But there's no such thing for time: fail2ban can't know the exact time a line was added to a log file, because there's no TIME keyword, right? Actually, it can scan files without any time on any line and it will still work.

I guess this means fail2ban scans files periodically: it sets a scan time internally so it can handle options like findtime by comparing its own scan dates.

First, am I right? If so, what is the scan frequency? Can't it be a bottleneck if there are lots of big log files to scan often? Then, what happens if the scan interval is greater than the findtime option? Does fail2ban adapt to the smallest findtime option it finds to set its minimal scan frequency?
How does fail2ban detect the time of an intrusion attempt if the log files don't have timestamp?
fail2ban
First off: this is (perhaps) not an answer, but perhaps better than a comment (and a bit long for one).

Time stamps

Your statement:

"Actually, it can scan files without any time on any line and it will still work."

conflicts with the documentation. What do you mean by "work"? The manual (v 0.8, "filters" section) states:

"If you're creating your own failregex expressions, here are some things you should know: [...] In order for a log line to match your failregex, it actually has to match in two parts: the beginning of the line has to match a timestamp pattern or regex, and the remainder of the line has to match your failregex. If the failregex is anchored with a leading ^, then the anchor refers to the start of the remainder of the line, after the timestamp and intervening whitespace. The pattern or regex to match the time stamp is currently not documented, and not available for users to read or set. See Debian bug #491253. This is a problem if your log has a timestamp format that fail2ban doesn't expect, since it will then fail to match any lines. Because of this, you should test any new failregex against a sample log line, as in the examples below, to be sure that it will match. If fail2ban doesn't recognize your log timestamp, then you have two options: either reconfigure your daemon to log with a timestamp in a more common format, such as in the example log line above; or file a bug report asking to have your timestamp format included."

Note here that log files can be configured to include time stamps, as well as the format of the time stamps. (That includes dmesg, as mentioned in a comment.) Also see this thread, Message #14 and #19 in particular: "fail2ban: time pattern match is undocumented and unavailable to users".

Two examples. Note that you can also test with commands like:

fail2ban-regex /var/log/auth.log /etc/fail2ban/filter.d/sshd.conf

1. No time stamp:

$ fail2ban-regex ' [1.2.3.4] authentication failed' '\[<HOST>\] authentication failed'

Running tests
=============

Use failregex line : \[<HOST>\] authentication failed
Use single line : [1.2.3.4] authentication failed

Results
=======

Failregex: 0 total
Ignoreregex: 0 total
Date template hits:

Lines: 1 lines, 0 ignored, 0 matched, 1 missed
|- Missed line(s):
|  [1.2.3.4] authentication failed
`-

2. With time stamp:

$ fail2ban-regex 'Jul 18 12:13:01 [1.2.3.4] authentication failed' '\[<HOST>\] authentication failed'

Running tests
=============

Use failregex line : \[<HOST>\] authentication failed
Use single line : Jul 18 12:13:01 [1.2.3.4] authentication failed

Results
=======

Failregex: 1 total
|- #) [# of hits] regular expression
|  1) [1] \[<HOST>\] authentication failed
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
|  [1] MONTH Day Hour:Minute:Second
`-

Lines: 1 lines, 0 ignored, 1 matched, 0 missed

Scan times

The manual, under "Reaction time":

"It is quite difficult to evaluate the reaction time. Fail2ban waits 1 second before checking for new logs to be scanned. This should be fine in most cases. However, it is possible to get more login failures than specified by maxretry."

In that regard, also see this thread: "Re: Bug#481265: fail2ban: Poll interval is not configurable". But under "optional but recommended software" one finds Gamin:

"Gamin is a file alteration monitor. Gamin greatly benefits from an inotify-enabled kernel. Thus, active polling is no longer required to get the file modifications. If Gamin is installed and backend in jail.conf is set to auto (or gamin) - Gamin will be used."
_webmaster.25302
If my video contains a musical recording and the owner of the recording master rights authorizes YouTube to place ads on the video, do I have the right / capability to prevent ads from being displayed?
Can a video uploader block ads if he/she does not own all content?
advertising
null
_codereview.78831
A small project I recently started. The program must sum all the numbers the user inputs. You must first specify how many numbers there will be.

Sumaster.c

#include <stdio.h>

int main(void)
{
    unsigned short int num = 0;
    while(num <= 0 || num >= 256)
    {
        printf("Enter number of values to sum: ");
        scanf("%d", &num);
    }
    unsigned long long int arr[num];
    unsigned long long int total = 0;
    unsigned short int i;
    for(i = 0; i < num; i++)
    {
        scanf("%lld", arr + i);
        total += arr[i];
        if(i < num - 1)
            printf("+\n");
        else
            printf("=\n%lld\n", total);
    }
    return(0);
}
Summing user input
c;io
Small programs such as this deserve special care, since they are often used as practical examples, usually for beginners. So an appropriate review follows, as a noble improvement attempt on this particular variation.

Name of program

The name of a program like this should match its functional behavior. It must hint at the purpose as clearly as possible to the user, before he wastes effort and resources opening the mysterious source file. The name you picked, Sumaster, is somewhat idiosyncratic and certainly not self-explanatory. It could be interpreted in many odd ways. If it tries to exalt the program by calling it "Summation Master" then it applies, but it reads a bit funny once you think about it. My suggestion is simply Sumilizer, or something a bit more simplified like Auto Summation (enthusiasts can call it "The A.S Bot" to bring some 21st-century software eminence affection). Besides that, if you are in trouble figuring out a name, you can visit this powerful international internet cyber searcher and type "Good names for a ..." where "..." is the target object you are seeking a good name for.

Program description

...which is missing. The description must provide some additional information about the program's usage. If you will be the only one using this, there is obviously no need (except that time when we forget). If someone else wants to use the program, he might be unable to guess the unique functional mechanism you invented. Tell me: how many times have you been in trouble understanding how a program works? There is no need to dedicate a webpage or a readme.txt -- the readme can live in the source file, if this is intended to be open source. If this program's future involves beginners learning from it, then it would be kind of you to add some description. If it is for your own experience, then you can decide. In each circumstance, it isn't completely irrelevant.

/* Summation automatized program by Genis
 * Copyright: Free For Any Purposes
 * Compilation arguments: std=c99
 *
 * The first thing you'll have to specify is the length of the number table.
 * Enter to continue..
 * Then you are typing the first number you wish to add to the number table.
 * Enter to continue.. then the next number.
 * When the numbers reach the length of the number table, the program will
 * output the total of all the numbers in the number table.
 */

Internationalization

Assuming that this program will be widely used by people around the world, it could be internationalized (i.e. translated to a number of regional languages). The MinGW encoding defaults to UTF-8; the char datatype is set to be capable of representing any member of the basic execution character set and UTF-8 code units. Unicode programming allows you to use a larger set of characters.

Variable type-definition

What you have done invokes: a readability issue, a portability issue, and more pointless additional words -- more time required for the reader to digest the formed data type. I suggest you typedef these data types:

typedef unsigned short int uint_16, word;
typedef unsigned long long int uint_32, dword;

And then use these definitions instead. Use word and dword if the program is intended to work under Windows.

Variable naming

Readability issue #1. The name of the first variable, num, is not accurate. If someone (or you, in the forgetful future) wants to edit/analyze the code, he/she will need more time to assimilate the exact purpose of the variable. Why not the name arrSize, since it behaves like an index and it is an index? Alternatives are table_length for snake_case, or TableLength / tableLength for CamelCase name styling.

The name of the second variable, arr, will be fine if the first variable is named arrSize. That is, if we want to name the variables by their behavior, not their logical purpose. Otherwise, you can choose table. total or tableTotal is equally good.

Inconsiderate conversion

As we found out, num is a variable of type unsigned short. On line 10 we accept a signed int as input for num. This will wake up some compiler warnings, possibly -Woverflow, and it might produce quite undesired behavior after all. In the name of good sense, you can use the format specifier %hu, which will equalize the expectations.

Problematic function termination

If you copy/paste this code into Skype, return(0) will turn into a clock. It will look like you are returning a clock. Get rid of the brackets! Just kidding.

Overall

I have seen better implementations of a summation. I would prefer to use space to move to the next number of the table, so everything happens in one line, instead of being able to pass an unlimited amount of newlines during the process of scanf.
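To illustrate the %hu point as code (a fragment, not a full program):

/* Reading into an unsigned short with the matching specifier: */
unsigned short num = 0;
if (scanf("%hu", &num) != 1) {
    /* handle invalid input here */
}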
_unix.234270
I've been using Fedora 22 on my work PC for a number of months now. It worked fine, updated frequently with multiple kernel updates, up until one day when it could no longer find the which or clear commands.

bash: clear: command not found...
Install package 'ncurses' to provide command 'clear'? [N/y] y
 * Waiting in queue...
Failed to install packages: ncurses-5.9-18.20150214.fc22.x86_64 is already installed

bash: /usr/bin/which: No such file or directory

The next time I rebooted, the OS would not even start. Originally I would have put this down to it being a beta version of Fedora, but it has happened four times now... so if anyone can help it would be much appreciated.
Fedora 22 cannot find clear or which commands
linux;fedora
I managed to actually get it working after a few hours of trying different things.

I had to boot into the older kernel, where it still sort of works, and remove all third-party drivers -- still no clear or which at this point. Then I rebooted into the new version of the kernel, ran sudo dnf reinstall ncurses which, and reinstalled the drivers...

I have no idea why this happened, but I have my PC working again without having to flatten it.

Thanks for all your help.
_codereview.172274
I built a program in C that can tell how many odd and even Fibonacci numbers there are in a given range.

Input Specification

The first line of the input contains T, representing the number of test cases (1 <= T <= 50). Each test case contains two integers N and M (1 <= N <= M <= 10^18 and |N-M| <= 10^5), where N is the Nth Fibonacci number and M is the Mth Fibonacci number of the sequence.

Output Specification

Case T:
Odd = total number of odd Fibonacci numbers between N and M
Even = total number of even Fibonacci numbers between N and M

The full code

#include <stdio.h>
#include <math.h>

int main()
{
    int T, i, j, k;
    scanf("%d", &T);
    for (i=0; i<T; i++)
    {
        int N, M, val;
        scanf("%d%d", &N,&M);
        val = abs(N-M)+1;
        int n,m;
        if (M>N)
        {
            n=N;
            m=M;
        }
        if (N>M)
        {
            n=M;
            m=N;
        }
        int b_tri, e_tri, even=0, odd=0;
        if (n%3==1)
        {
            b_tri=3;
            even++;
            odd+=2;
        }
        if (n%3==2)
        {
            b_tri=2;
            odd+=2;
        }
        if (n%3==0)
        {
            b_tri=1;
            odd++;
        }
        if (m%3==1)
        {
            e_tri=1;
            even++;
        }
        if (m%3==2)
        {
            e_tri=2;
            odd++;
            even++;
        }
        if (m%3==0)
        {
            e_tri=3;
            odd+=2;
            even++;
        }
        val-=(e_tri+b_tri);
        val/=3;
        for (j=0; j<val; j++)
        {
            even++;
            odd+=2;
        }
        printf("Case %d:\nOdd = %d\nEven = %d\n", i+1, odd, even);
    }
    return 0;
}

Logic

1st Fibonacci number is 0 (even).
2nd Fibonacci number is 1 (odd).
3rd Fibonacci number is 1 (even + odd = odd).
4th Fibonacci number is 2 (odd + odd = even).
5th Fibonacci number is 3 (odd + even = odd).
6th Fibonacci number is 5 (even + odd = odd).
7th Fibonacci number is 8 (even).
8th Fibonacci number is 13 (odd).
9th Fibonacci number is 21 (odd).

I'd like advice on:

- Using mathematical formulas instead of loops to calculate this
- Making the code more efficient
A program to find out the number of odd and even Fibonacci numbers between given range
performance;c;programming challenge;fibonacci sequence
I'm going to assume you tested it for several inputs and got the right values. Your logic seems right, so if a couple of test cases work, it probably works. But I didn't look carefully for any off-by-one-like errors.

There is one logic issue that I see: you don't set n and m if N = M. Double-check your two if statements at the beginning of the loop; you only check for strict inequality on both sides. One of them should change to a >= or <=, or, better yet, just use if-else instead of two ifs.

As for optimization... on an average computer, your program is good enough for any N and M, because you specify N,M < 10^18. Even if you don't, for-loops by themselves are pretty cheap. However, there are still optimizations/cleanups. The biggest one is your last for loop:

for (j=0; j<val; j++)
{
    even++;
    odd+=2;
}

This can just be:

even+=val;
odd+=2*val;

Your two if chains can turn into if, else if, else instead of three ifs (same thing for your first if pair, which should just be if-else):

if (...)
{
}
else if (...)
{
}
else
{
}

although the compiler will most likely optimize that for you. I think that's the best you can do for your current algorithm.

However, it seems odd to me that you are checking all the mod cases for n and m; I think your idea is that you know the pattern for the large chunks in between N and M but you want to trim off the edge cases. Note that you won't need to trim down the case when n%3==1 or m%3==0, because you can just treat that as the even++, odd+=2 chunk.
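To address the "formulas instead of loops" wish directly: with this indexing (1st Fibonacci = 0, which is even), the even values sit exactly at indices k with k % 3 == 1, so the counts can be computed without any loop at all. A sketch, with edge cases unverified:

/* Even Fibonacci numbers among indices 1..x, given F(1)=0 is even
   (even indices are 1, 4, 7, ...). */
long long evens_up_to(long long x) { return x > 0 ? (x + 2) / 3 : 0; }

/* For a query [n, m]: */
long long even = evens_up_to(m) - evens_up_to(n - 1);
long long odd  = (m - n + 1) - even;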
_unix.192013
Under Debian Wheezy, permanent network configuration takes place in the /etc/network/interfaces file. Is it possible to configure hwaddress for an interface without configuring other network parameters like address or netmask? Something like:

root@1:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0
hwaddress ether DE:AD:BE:EF:69:01

# The primary network interface
auto eth0.100
iface eth0.100 inet static
    address 10.1.1.2
    netmask 255.255.255.0
    network 10.1.1.0
    broadcast 10.1.1.255
    gateway 10.1.7.1
root@1:~#
set MAC address in interfaces file without configuring other network parameters
linux;debian;ethernet;mac address
null
_codereview.135432
I am trying to implement a class for the node of a tree in C++ in order to represent an HTML structure. It is not complete by a long shot, but I'd still like an opinion.

TreeNode.h:

#ifndef TreeNode_H
#define TreeNode_H

#include <string>
#include <vector>

class TreeNode
{
private:
    std::string textContent;
    std::string tagName;
    TreeNode *parent;
    std::vector<TreeNode *> children;
    int countNodesRec(TreeNode *root, int& count);

public:
    TreeNode();
    TreeNode(std::string iTextContent, std::string iTagName);
    void appendChild(TreeNode *child);
    void setParent(TreeNode *parent);
    void popBackChild();
    void removeChild(int pos);
    bool hasChildren();
    bool hasParent();
    TreeNode* getParent();
    TreeNode* getChild(int pos);
    int childrenNumber();
    int grandChildrenNum();
    std::string getTextContent();
    std::string getTagName();
};

#endif

TreeNode.cpp:

#include "TreeNode.h"
#include <string>
#include <vector>
#include <iostream>

TreeNode::TreeNode() {};

TreeNode::TreeNode(std::string iTextContent, std::string iTagName)
    : textContent(iTextContent), tagName(iTagName), parent(NULL)
{
}

int TreeNode::countNodesRec(TreeNode *root, int& count)
{
    TreeNode *parent = root;
    TreeNode *child = NULL;
    for(int it = 0; it < parent->childrenNumber(); it++)
    {
        child = parent->getChild(it);
        count++;
        //std::cout<<child->getTextContent()<<" Number : "<<count<<std::endl;
        if(child->childrenNumber() > 0)
        {
            countNodesRec(child, count);
        }
    }
    return count;
}

void TreeNode::appendChild(TreeNode *child)
{
    child->setParent(this);
    children.push_back(child);
}

void TreeNode::setParent(TreeNode *theParent)
{
    parent = theParent;
}

void TreeNode::popBackChild()
{
    children.pop_back();
}

void TreeNode::removeChild(int pos)
{
    if(children.size() > 0)
    {
        children.erase(children.begin() + pos);
    }
    else
    {
        children.pop_back();
    }
}

bool TreeNode::hasChildren()
{
    if(children.size() > 0)
        return true;
    else
        return false;
}

bool TreeNode::hasParent()
{
    if(parent != NULL)
        return true;
    else
        return false;
}

TreeNode * TreeNode::getParent()
{
    return parent;
}

TreeNode* TreeNode::getChild(int pos)
{
    if(children.size() < pos)
        return NULL;
    else
        return children[pos];
}

int TreeNode::childrenNumber()
{
    return children.size();
}

int TreeNode::grandChildrenNum()
{
    int t = 0;
    if(children.size() < 1)
    {
        return 0;
    }
    countNodesRec(this, t);
    return t;
}

std::string TreeNode::getTextContent()
{
    return textContent;
}

std::string TreeNode::getTagName()
{
    return tagName;
}

Sample code:

#include <iostream>
#include <vector>
#include "TreeNode.h"

/* run this program using the console pauser or add your own getch, system("pause") or input loop */

int main(int argc, char** argv) {
    TreeNode *itr = NULL;
    TreeNode *node = new TreeNode("k", "p");
    node->appendChild(new TreeNode("test1", "testag"));
    node->appendChild(new TreeNode("test2", "testag"));
    node->appendChild(new TreeNode("test3", "testag"));
    itr = node->getChild(0);
    itr->appendChild(new TreeNode("test1a", "testtag"));
    itr->appendChild(new TreeNode("test1b", "testtag"));
    itr->getChild(0)->appendChild(new TreeNode("test1aa", "testtag"));
    itr = node->getChild(1);
    itr->appendChild(new TreeNode("test2a", "testtag"));
    itr->appendChild(new TreeNode("test2b", "testtag"));
    itr->getChild(0)->appendChild(new TreeNode("test2aa", "testtag"));
    std::cout << node->grandChildrenNum();
    return 0;
}
C++ Tree Class implementation
c++;algorithm;html;tree;dom
null
_unix.174835
Is there a way to run a GUI application (X11) in the background, so that if I disconnect I can resume the running app again?

I am using SmarTTY on Windows to connect to a remote CentOS machine. When I run a GUI application (e.g. gnome-help) it starts the Xming server and displays its window. I want to keep it running even if I disconnect, crash, or close the SSH connection, so that I can get back to the running application later. I have tried 'screen' and '&' and a combination of both, but neither works: I cannot connect to the GUI application again once the SSH connection is closed.

--EDIT-- As answered by Anthon:

1. Install both the VNC server and VNC viewer on the remote system (e.g. CentOS).
2. Start the VNC server on the remote machine: Xvnc -localhost :13
3. Start the VNC viewer so that it displays locally via X (e.g. on your Windows machine).
4. Set the display: export DISPLAY=:13
5. Start a GUI application and it will be displayed in the VNC viewer.
Run a GUI application in background and reconnect later
ssh;x11;background process;xming
The X application needs a screen to connect to; normally (if you connect via ssh using -X) that is your local screen. What you can do instead is use Xvnc to create a virtual screen for your X application to connect to, and then, after logging back in, use a vncviewer to observe what is happening on this virtual screen. This functions in a similar way to using screen or tmux for terminal sessions.

You start Xvnc via:

Xvnc -localhost -SecurityTypes=None :13

with 13 being a unique number. You use this number to set your DISPLAY environment variable before starting the X application.

During startup, Xvnc will tell you which port to use to connect (5913 in my case). If you do not specify -localhost, you can connect over the network directly using a VNC viewer without first having to log in using ssh (this depends on your firewall, of course, and you should use password-protected connections instead of -SecurityTypes=None).

On Debian-based systems you can install Xvnc from the package vnc4server.
_webmaster.101149
Assuming non-existent-page.html does not exist, and the user trying to access that page has triggered a 404 error: can I show the requested page URL,

http://www.example.com/non-existent-page.html

instead of the error page URL?

http://www.example.com/404.html

Solution: While looking at Stephen Ostermiller's answer, I knew I was using a relative URL, but I realized it had a missing trailing slash at the end because I was pointing to a directory and not a page. This mostly occurs with some xSPs with bad configurations.

Problem:
ErrorDocument 404 /error/404   <-- no slash

Fix:
ErrorDocument 404 /error/404/  <-- added slash
Have Apache show 404 at missing page URL instead of redirecting to error page URL
htaccess;apache;mod rewrite;url rewriting
The Apache server can be configured to show the error page at the error URL, or to redirect to the error page. It is almost always better to show the error page directly at the URL rather than redirecting to it.

The documentation for the Apache ErrorDocument directive explains how to implement it both ways:

"URLs can begin with a slash (/) for local web-paths (relative to the DocumentRoot), or be a full URL which the client can resolve."

In practical terms, that means if you specify the error document as an absolute URL, it will cause a redirect to the error page:

ErrorDocument 404 http://www.example.com/404.html

but if you specify the error document as a relative URL starting with a slash, it will show the error document at the original URL where the error occurred:

ErrorDocument 404 /404.html

My guess is that you have your ErrorDocument directive configured as an absolute URL, either in your .htaccess file or your httpd.conf file. You need to edit it to change it to a relative URL.
_codereview.107757
I need an external application that helps deployment of the web application. The application should create the aspnetdb database on the dedicated SQL server. Then it creates the wanted roles for the web application, and it defines the initial users and assigns their roles and properties. The code below works, but it probably needs some cleaning/refactoring (see the questions below the code):

The App.config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <connectionStrings>
    <add name="aspnetdbConnectionString"
         connectionString="Data Source=COMPUTER\SQL2014INSTANCE;Initial Catalog=aspnetdb;Persist Security Info=True;User ID=sa;Password=saSQL2014INSTANCE"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <system.web>
    <membership defaultProvider="SqlMembershipProv">
      <providers>
        <clear />
        <add name="SqlMembershipProv"
             type="System.Web.Security.SqlMembershipProvider"
             connectionStringName="aspnetdbConnectionString"
             enablePasswordRetrieval="false"
             enablePasswordReset="false"
             requiresQuestionAndAnswer="false"
             passwordFormat="Hashed" />
      </providers>
    </membership>
    <profile defaultProvider="profileProv">
      <providers>
        <clear />
        <add name="profileProv"
             type="System.Web.Profile.SqlProfileProvider"
             connectionStringName="aspnetdbConnectionString"
             applicationName="web_app" />
      </providers>
      <properties>
        <add name="FullName" />
        <add name="Phone" />
      </properties>
    </profile>
  </system.web>
</configuration>

The Program.cs:

using System;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Configuration;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Web.Management;
using System.Web.Profile;
using System.Web.Security;

namespace InitAspnetdbAppRolesAndUsers
{
    class Program
    {
        static void Main(string[] args)
        {
            // The connection string of the following name is used throughout the application.
            const string conStrName = "aspnetdbConnectionString";

            // Web application name.
            const string webAppName = "web_app";

            // Extract the info from the connection string.
            string conString = ConfigurationManager.ConnectionStrings[conStrName].ConnectionString;
            SqlConnectionStringBuilder cns = new SqlConnectionStringBuilder(conString);
            string SQLServerName = cns.DataSource;      // @"COMPUTER\SQL2014INSTANCE"
            string SQLServerUser = cns.UserID;
            string SQLServerPassword = cns.Password;
            string ASPNETdbName = cns.InitialCatalog;

            // Create the aspnetdb database. It does no harm if it exists.
            SqlServices.Install(SQLServerName, SQLServerUser, SQLServerPassword,
                                ASPNETdbName, SqlFeatures.All);

            // The RoleProvider could be removed from the App.config by using this.
            SqlRoleProvider roleProvider = new SqlRoleProvider();
            NameValueCollection roleProvConfig = new NameValueCollection {
                { "connectionStringName", conStrName },
                { "applicationName", webAppName }
            };
            roleProvider.Initialize("SqlRoleProv", roleProvConfig);

            // The roles to be created for the application.
            string[] roles = { "admin", "poweruser", "export" };
            foreach (var role in roles)
            {
                if (!roleProvider.RoleExists(role))
                    roleProvider.CreateRole(role);
            }

            // I was unsuccessful in removing the <membership> part from the App.config.
            SqlMembershipProvider memProv = new SqlMembershipProvider();
            NameValueCollection memProvConfig = new NameValueCollection {
                { "connectionStringName", conStrName },
                { "applicationName", webAppName },
                { "enablePasswordRetrieval", "false" },
                { "enablePasswordReset", "false" },
                { "requiresQuestionAndAnswer", "false" },
                { "passwordFormat", "Hashed" },
            };
            memProv.Initialize("SqlMembershipProv", memProvConfig);

            // I was unsuccessful in replacing the <profile> part of the App.config.
            // For example, I would like to use the code for a more general
            // application, where the properties can differ for different web
            // applications and would not be hardwired in App.config.
            //
            // SqlProfileProvider profileProv = new SqlProfileProvider();
            // NameValueCollection profProvConfig = new NameValueCollection {
            //     { "connectionStringName", conStrName },
            //     { "applicationName", webAppName },
            // };
            // profileProv.Initialize("SqlProfileProv", memProvConfig);

            // It is possible to create the users; however, I would like to get cleaner code.
            List<string[]> userList = new List<string[]> {
                new[] { "admin",  "adminPa$$word1234",  "[email protected]", "admin poweruser export", "Jan Admin",   "111 222 333" },
                new[] { "admin2", "admin2Pa$$word1234", "[email protected]", "admin poweruser",        "Petr Admin",  "222 333 444" },
                new[] { "admin3", "admin3Pa$$word1234", "[email protected]", "admin",                  "Lenka Admin", "333 444 555" },
            };

            foreach (var info in userList)
            {
                string name = info[0];
                string password = info[1];
                string email = info[2];

                // Get the list of the roles. If no role is defined, an array
                // of length 1 with an empty string inside is created.
                // See the testing below.
                string[] rolenames = info[3].Split();
                string fullname = info[4];
                string phone = info[5];

                if (memProv.GetUser(name, false) == null)
                {
                    MembershipCreateStatus status = new MembershipCreateStatus();
                    memProv.CreateUser(name, password, email,
                                       "none",   // password question
                                       "none",   // password answer
                                       true,     // isApproved
                                       null,     //
                                       out status);
                    Debug.Assert(status.Equals(MembershipCreateStatus.Success));

                    // Assign the roles to the user.
                    if (rolenames[0] != "")
                        roleProvider.AddUsersToRoles(new[] { name }, rolenames);

                    // Set the user properties.
                    ProfileBase pb = ProfileBase.Create(name);
                    pb.SetPropertyValue("FullName", fullname);
                    pb.SetPropertyValue("Phone", phone);
                    pb.Save();
                }
            }
        }
    }
}

I am rather new to C# (coming from C++ desktop native applications to ASP.NET C# web applications) -- I am not very strong in C# or the ASP.NET infrastructure.

My initial idea was to have a kind of general application that reads some proprietary form of config text file with the definitions to be interpreted. I tried to get rid of the App.config, but it seems like peeing against the wind.

So, the second idea was to let the App.config exist, but not use it for storing any information that can be specific to the deployment case. I tried to implement most of the functionality in Program.cs (here hardwired, but think in terms of getting the info from another text file, imported from an Excel table, etc.). I was able to fully remove the role provider part from the App.config. However, I was not able to replace the membership provider and the profile provider (including the property definitions) from the App.config.

I was also not able to get rid of the connection string definition in the App.config. So, I have taken the approach of at least not duplicating the information in the App.config and in Program.cs (e.g. the connection string from the App.config is parsed to get the information).

Anyway, notice the profile provider part of the App.config -- I do not know how to avoid duplicating the name of the web application. (I was able to do that for the other providers.)

I appreciate any help and ideas on how to improve the code and the approach.
External application for the web application deployment
c#;asp.net
static void Main(string[] args)

If you're not going to use args, you can omit them:

static void Main()

const string conStrName = "aspnetdbConnectionString";
SqlConnectionStringBuilder cns = new SqlConnectionStringBuilder(conString);

Try not to abbreviate variable names; it makes your code harder to understand. Also, consider using var, especially if the type of the variable is clear from the right side of the assignment.

string SQLServerName = cns.DataSource;

Local variables should start with a lowercase letter: sqlServerName.

List<string[]> userList = new List<string[]> { ... };
foreach (var info in userList)
{
    string name = info[0];
    string password = info[1];
    string email = info[2];

You shouldn't use arrays to store this kind of information; it's logically not an array. You can use an anonymous type instead:

new { name = "admin", password = "adminPa$$word1234", email = "[email protected]",
      roleNames = new[] { "admin", "poweruser", "export" },
      fullName = "Jan Admin", phone = "111 222 333" }

Note that this means it's much harder to confuse which array index contains which value, and you can (and should) use types other than string. (The incorrect indentation in the original code is due to a bug.)

MembershipCreateStatus status = new MembershipCreateStatus();
memProv.CreateUser(name, password, email,
                   "none",  // password question
                   "none",  // password answer
                   true,    // isApproved
                   null,    //
                   out status);

You don't need to initialize out variables. (And newing up an enum doesn't make much sense either.) Instead of comments, you should use named arguments; that way, the compiler verifies the names for you (comments can be wrong, code can't). Put together:

MembershipCreateStatus status;
memProv.CreateUser(name, password, email,
                   passwordQuestion: "none",
                   passwordAnswer: "none",
                   isApproved: true,
                   providerUserKey: null,
                   status: out status);

status.Equals(MembershipCreateStatus.Success)

You can compare enums using ==:

status == MembershipCreateStatus.Success
_codereview.101324
I have implemented an upgrades system in my Unity3D game, and I am pretty sure that I am not doing things in an optimal way. Any advice regarding best practices would be much appreciated.

The idea is that when the player collects a powerup, their run speed and jump speed increase. Collect enough powerups, and you can jump around the level like the Incredible Hulk. Play the game here: Super Voxel

The powerup is a basic Sphere object with its Sphere Collider "Is Trigger" checkbox activated. It has this script attached:

SphereTriggerScript.cs

public class SphereTriggerScript : MonoBehaviour {

    public GameObject sphereObject;
    private GameObject playerObject;

    void Start() {
        playerObject = GameObject.Find("Player1");
    }

    void OnTriggerEnter (Collider other) {
        Destroy(sphereObject);
        PlayerScript script = playerObject.GetComponent<PlayerScript>();
        script.jumpForce += 1;
        script.runSpeed += 1;
    }
}

In the scene I have a basic Player1 prefab with this script attached:

PlayerScript.cs

public class PlayerScript : MonoBehaviour {

    public int runSpeed;
    public int jumpForce;

    void Start () {
        runSpeed = 20;
        jumpForce = 10;
    }
}

The powerup references this script with GameObject.Find. I do understand that this is suboptimal; however, the powerups are created programmatically at run time, so it is not possible to simply drag and drop the Player1 game object onto the powerup prefab.

I have modified the default FPSController prefab (specifically the FirstPersonController script) to add a public GameObject field that I populate with the Player1 prefab. Then, when I need to get the jump or run speed, I access that script like so:

PlayerScript script = playerObject.GetComponent<PlayerScript>();
m_MoveDir.y = script.jumpForce;

Finally, I am doing something a bit funny to position the powerup orbs properly. When creating them, I use a raycast to position them at the top of the building:

int randomNumber = random.Next(0, 100);
if (randomNumber > 80) {
    int maxPossibleHeight = 250;
    RaycastHit hit;
    Ray ray = new Ray(new Vector3(x * tileSize, maxPossibleHeight, y * tileSize), Vector3.down);
    if (tileObject.GetComponentInChildren<MeshCollider>().Raycast(ray, out hit, 2.0f * maxPossibleHeight)) {
        int randomHeightOffset = random.Next(2, 12);
        float height = hit.point.y;
        GameObject powerup = (GameObject)GameObject.Instantiate(Resources.Load(powerupPrefabName));
        powerup.transform.position = new Vector3(x * tileSize, height + randomHeightOffset, y * tileSize);
    }
}
Do you want to be a super (voxel) hero?
c#;game;unity3d
In your PlayerScript class, you have two variables, runSpeed and jumpForce, and you initialize them in Start, like this:

void Start () {
    runSpeed = 20;
    jumpForce = 10;
}

This isn't necessary. Rather, you can do something like this:

public class PlayerScript : MonoBehaviour {
    public int runSpeed = 20;
    public int jumpForce = 10;
}

This will also mean that the variables runSpeed and jumpForce will have the default values of 20 and 10 in the Unity Editor as well. In addition, in your SphereTriggerScript class, you also don't need to initialize playerObject in Start.

In your SphereTriggerScript class, in the method OnTriggerEnter, you never do a check to make sure that the collider is actually the Player GameObject. In order to do this, you'll have to wrap the contents of SphereTriggerScript.OnTriggerEnter in this if statement:

if(other.gameObject.name == playerObject.name)
{
    ...
}

This ensures that in case any other GameObject happens to collide with a power-up, the power-up won't be destroyed.

The UnityEngine namespace comes with a static class Random, which I prefer over System.Random while using Unity3D. Rather than having to instantiate System.Random, I can just call the methods directly from UnityEngine.Random. Compare:

System.Random random = new System.Random();
int randomNumber = random.Next(0, 100);

to:

int randomNumber = (int)Random.Range(0, 100);

Other than the above, I just have a few small nitpicks. The general style for storing an instantiated GameObject is the below example, not containing the GameObject. prefix, and using as for a cast instead:

GameObject gameObject = Instantiate( ... ) as GameObject;

In addition, C# braces should be placed on the next line, not like Java braces, as seen in the below example:

if( ... )
{
    ...
}

Finally, you have a few magic numbers scattered across your code, so here's a small list of places containing magic numbers:

script.jumpForce += 1;
script.runSpeed += 1;
int randomNumber = random.Next(0, 100);
int randomHeightOffset = random.Next(2, 12);

Other than that, your code looks pretty nice! I'm excited to see where this game goes!
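As a side note on the player check above, a tag comparison is a common alternative (a sketch -- it assumes the player object has been given a "Player" tag in the editor):

void OnTriggerEnter(Collider other) {
    // CompareTag avoids allocating a new string for the comparison
    if (!other.CompareTag("Player"))
        return;

    Destroy(sphereObject);
    PlayerScript script = other.GetComponent<PlayerScript>();
    script.jumpForce += 1;
    script.runSpeed += 1;
}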
_codereview.66650
I've just started to write jQuery and have just written my first code using Ajax calls. I'm fetching data from the pokeapi and am using it to random generate new Pokemon.I have refactored it as much as possible, but I really think there must be a way to abstract the Ajax call into a function and only use one whilst still retaining the random values. I tried to do it myself but came across problems when trying to use data.name, data.types etc as I kept being told that data does not exist when I abstracted it.// There are 778 pokemon on the database// This function allows us to generate a random number between two limitsfunction randomIntFromInterval(min, max) { return Math.floor(Math.random() * (max - min + 1) + min);};// A more specific number between 0 and the number of poke on databasefunction randPokemon() { return randomIntFromInterval(0, 718).toString();}// Fetch a random pokemon namefunction generateName(urlinput, id) { var generateurl = http://pokeapi.co/api/v1/ + urlinput + randPokemon(); $.ajax({ type: GET, url: generateurl, // Set the data to fetch as jsonp to avoid same-origin policy dataType: jsonp, async: true, success: function(data) { // If the ajax call is successfull, add the name to the name span $(id).text(data.name); } });}// Fetch random pokemon typesfunction generateTypes(urlinput, id) { var generateurl = http://pokeapi.co/api/v1/ + urlinput + randPokemon() $.ajax({ type: GET, url: generateurl, dataType: jsonp, async: true, success: function(data) { var types = ; // Loop over all the types contained in an array for (var i = 0; i < data.types.length; i++) { // Set the current type we will add to the types span var typetoAdd = (data.types[i].name); // Capitalise the first letter of the current ability typetoAdd = typetoAdd.charAt(0).toUpperCase() + typetoAdd.slice(1, (typetoAdd.length)); // Append the current type to the overall types variable types += typetoAdd + ; } // Insert each type the pokemon is into the types span $(id).text(types); } });}// Fetch random pokemon abilitiesfunction generateAbilities(urlinput, id) { var generateurl = http://pokeapi.co/api/v1/ + urlinput + randPokemon() $.ajax({ type: GET, url: generateurl, dataType: jsonp, async: true, success: function(data) { var abilities = ; // Loop over all the abilities for (var i = 0; i < data.abilities.length; i++) { // Set the current ability we will add to the abilities span var abilityToAdd = (data.abilities[i].name); // Capitalise the first letter of the current ability abilityToAdd = abilityToAdd.charAt(0).toUpperCase() + abilityToAdd.slice(1, (abilityToAdd.length)); // Append the current ability to the overall abilities variable abilities += <li> + abilityToAdd + </li>; } // Insert abilities to abilities span $(id).html(abilities); } });}// Fetch a random pokemon imagefunction generateSprite(urlinput, id) { var generateurl = http://pokeapi.co/api/v1/ + urlinput + randPokemon() $.ajax({ type: GET, url: generateurl, dataType: jsonp, async: true, success: function(data) { var href = http://pokeapi.co + data.image; // Add random image source to the sprite image source $(id).attr(src, href); } });}// Use all generate functions together to make a new random pokemon!function makeAPokemon() { generateName(pokemon/, #name); generateTypes(pokemon/, #types); generateAbilities(pokemon/, #abilities) generateSprite(sprite/, #sprite)}// If the generate button is clicked, call the makeAPokemon() function$(#generate).on(click, makeAPokemon);// Call the makeAPokemon() function once initial page loadmakeAPokemon();body 
{ background-color: #FFFFFF; font-family: Open Sans, sans-serif; width: 100%;}header { width: 100%;}h1 { text-align: center;}h2 { font-size: 1.4em;}main { max-width: 600px; margin: 0 auto;}img { display: block; margin: 0 auto;}.pokemon { background-color: #E3E3E3; border: 4px solid #000000;}.name,.species { text-align: center;}.sprite { height: 100px;}.abilities { margin: 30px 25px; height: 160px;}.abilities h3 { padding-left: 60px; padding-bottom: 5px; border-bottom: 1px solid #999999;}.abilities ul { list-style-type: none; margin-left: 15px;}.abilities li { padding: 5px;}button { background-color: #EA0041; color: #FFFFFF; width: 200px; height: 50px; border: 1px solid black; box-shadow: none; margin: 50px auto; display: block;}<!doctype html><html lang=en><head> <title>Random Pokemon Generator</title> <link rel=shortcut icon type=image/ico href=/favicon.ico /> <!-- Styles --> <link href='http://fonts.googleapis.com/css?family=Open+Sans' rel='stylesheet' type='text/css'> <link href=assets/css/style.min.css rel=stylesheet type=text/css media=all /></head><body> <header> <h1>Random Pokemon Generator</h1> </header> <main> <div class=pokemon id=pokemon> <h2 class=name> <span>Name: </span> <span id=name></span> </h2> <h2 class=species> <span>Type: </span> <span id=types></span> </h2> <div class=sprite> <img src= id=sprite> </div> <div class=abilities> <h3>Abilities</h3> <ul id=abilities> <li>Ability One</li> <li>Ability Two</li> <li>Ability Three</li> <li>Ability Four</li> </ul> </div> <button id=generate role=button>Generate!</button> </div> </main> <script src=https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js></script></body></html>
Pokemon database utilizing pokeapi
javascript;jquery;ajax;pokemon
First, a question: If there are 778 Pokemon (as your documentation says), why does randPokemon() return a number between zero and 718? I know nothing about pokemon, so I don't know which one is correct. (I looked it up, though: It's 718)Also, bug: Your current implementation may return an ID of zero, which doesn't match any pokemon.Anyway, overall it's not bad, though I do have some comments:randPokemon should probably be randomPokemonID. Since the purpose of the entire script is to generate a random pokemon, you might think that the randPokemon function is the main function. But it just returns a number.Or actually, it returns a string, but that's not necessary; it only does so because you need a string later (well, you don't; JS will happily add a number to a URL string), but that's not this function's concern.Incidentally, while I applaud that you've extracted a generic random int between min and max function, it's an unnecessary complication in this case. Since there are a fixed number of pokemon, starting at ID 1, and ending at ID 718, all you need is:Math.floor(Math.random() * MAX_POKEMON_ID) + 1;And you're right: Those ajax calls should be abstracted somehow. I don't know what you tried, but I've included an example below.Looking at the current implementation, though, I'd call all the functions fetch... instead of generate...; they don't really generate data out of thin air.Also, there's no need to pass the API path to the functions, when the functions are specialized. For instance, generateSprite should always fetch data from /sprites, so passing that as an argument is unnecessary. However, if you abstract the fetching, it's of course necessary to pass the endpoint path.In terms of abstracting things, I'd start by wrapping everything in a generateRandomPokemon function, that produces an object containing all the data you want to display. And yes, this one I would call generate, since it ties together several random API calls.It'd decouple your UI-updating code from the data itself. By simply producing a plain object, you're free create/update UI independent of the data parsing.function generateRandomPokemon(callback) { // constants var MAX_POKEMON_ID = 718, BASE_URL = http://pokeapi.co; var fetches = [], // array to hold fetch operations pokemon = {}; // object to hold random pokemon data function getRandomID() { return 1 + Math.random() * MAX_POKEMON_ID | 0; // bitwise floor() trick } function fetchRandom(endpoint, callback) { var url = BASE_URL + /api/v1/ + endpoint + / + getRandomID(); return $.ajax({ type: GET, url: url, dataType: jsonp, success: callback }); } // fetch a random name fetches.push(fetchRandom('pokemon', function (data) { pokemon.name = data.name; })); // fetch random types fetches.push(fetchRandom('pokemon', function (data) { pokemon.types = data.types.map(function (type) { return type.name; }); })); // fetch random abilities fetches.push(fetchRandom('pokemon', function (data) { pokemon.abilities = data.abilities.map(function (type) { return type.name; }); })); // fetch random sprite fetches.push(fetchRandom('sprite', function (data) { pokemon.image = BASE_URL + data.image; })); // when all the fetches are done, trigger the callback with // the pokemon object. 
// If there was an error, trigger it with null (not really // informative, but better than nothing) $.when.apply(null, fetches) .done(function () { callback(pokemon); }) .fail(function () { callback(null); });}// ---------------------function displayRandomPokemon() { generateButton.prop(disabled, true); generateRandomPokemon(function (data) { generateButton.prop(disabled, false); if(!data) { alert(Oops); // an error occurred return; } // output example var properties = []; properties.push(Name: + data.name); properties.push(Types: + data.types.join(, )); properties.push(Abilities: + data.abilities.join(, )); $(#output).empty().text(properties.join(\n)); $(#sprite).attr(src, data.image); });}var generateButton = $(#generate);generateButton.on(click, displayRandomPokemon);// run on loaddisplayRandomPokemon();<script src=https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js></script><img id=sprite src=><pre id=output></pre><button id=generate disabled>Generate</button>
_unix.125183
The Question: I plugged in a device (a GSM modem) through a serial port (a.k.a. RS-232), and I need to find out which file in the /dev/ filesystem this device was tied to, so that I can communicate with it. Unfortunately there is no newly created file in /dev/, nor can anything be seen in the dmesg output, so this seems to be a hard question.

Background: I had never worked with a serial device, so yesterday, when the need arose, I tried to Google it but couldn't find anything helpful. I spent a few hours searching, and I want to share the answer I found, as it could be helpful for someone.
How to find which serial port is in use?
devices;tty;serial port
A Solution: The problem is that this serial port is non-plug-and-play, so the kernel does not know which device was plugged in. After reading a HOWTO tutorial I got a working idea.

In the /dev/ directory of a Unix-like OS there are files named ttySn, where the trailing n is a number. Most of these files are dummies, i.e. they don't correspond to an existing device, but some of them refer to a real port. To find out which, issue this command:

$ dmesg | grep ttyS
[    0.872181] 00:06: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    0.892626] 00:07: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[    0.915797] 0000:01:01.0: ttyS4 at I/O 0x9800 (irq = 19) is a ST16650V2
[    0.936942] 0000:01:01.1: ttyS5 at I/O 0x9c00 (irq = 18) is a ST16650V2

Above is example output from my PC. You can see the initialization of a few serial ports: ttyS0, ttyS1, ttyS4 and ttyS5. One of those serial ports is going to have a positive voltage when a device is plugged in, so by comparing the contents of the file /proc/tty/driver/serial in two different situations, namely:

  1. with the device plugged in
  2. without the device plugged in

we can easily find the ttyS related to our device. To do so, do the following:

$ sudo cat /proc/tty/driver/serial > /tmp/1
  ... (un)plug the device ...
$ sudo cat /proc/tty/driver/serial > /tmp/2

Next, check the difference between the two files. Below is the output on my PC:

$ diff /tmp/1 /tmp/2
2c2
< 0: uart:16550A port:000003F8 irq:4 tx:6 rx:0
---
> 0: uart:16550A port:000003F8 irq:4 tx:6 rx:0 CTS|DSR

The UART of our device is a 16550A. By looking up this entry in the dmesg output, we can find our device's corresponding ttyS, which here is ttyS0:

[    0.872181] 00:06: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A

So, our device is /dev/ttyS0. That's it: mission accomplished!
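If you do this often, the snapshot-and-diff procedure can be wrapped in a small script. This is only a sketch of the steps above (run it as root, since /proc/tty/driver/serial is usually root-readable); it is not part of the original answer:

#!/bin/sh
# Snapshot the serial driver state, wait for the user to (un)plug
# the device, then show which ttyS entry changed.
cat /proc/tty/driver/serial > /tmp/serial.before
printf '(Un)plug the device, then press Enter: '
read dummy
cat /proc/tty/driver/serial > /tmp/serial.after
diff /tmp/serial.before /tmp/serial.after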
_softwareengineering.335312
I'm a front-end web developer, and I am currently facing a situation where I have one form with a few parameters of one object, and to save them I have to make several requests to several different API endpoints. As I'm used to the REST ideology, I would expect to have only one API call to save one resource.

There is an entity (an object with a huge number of different properties), and to update it I have about 10 different API endpoints, one for each group of properties. This leads to situations where a single form with a few parameters of one object requires several requests to different endpoints, and that feels chaotic to me.

When I talked with my colleagues, they explained that this is done for security reasons: so that only certain users can access certain API endpoints and update certain groups of an object's properties. However, I still disagree with a design where I have to make several requests to update a single object.

So here are my questions:

1. I have heard about ACLs, but as far as I understand they control access to objects, not to individual properties of an object. If I'm wrong, how can ACLs be used in such a situation?
2. Why can't there be a single API, where the server decides which properties from the request should be updated and which should not, so that the whole process is transparent to me, the client-side developer?
3. If not ACLs, what architectures/techniques/approaches should be used in this situation (in web development)?
Control access to objects properties
rest;access control
null
_codereview.40881
Reservoir sampling implementation. Reservoir sampling is a family of randomized algorithms for randomly choosing a sample of k items from a list S containing n items, where n is either a very large or unknown number. If the question is unclear, let me know and I will reply asap. Looking for code review, optimizations and best practices.

public final class ReservoirSampling<T> {

    private final int k;

    /**
     * Constructs ReservoirSampling object with the input sample size.
     *
     * @param k the number of sample elements needed.
     * @throws IllegalArgumentException if k is not greater than 0.
     */
    public ReservoirSampling(int k) {
        if (k <= 0) {
            throw new IllegalArgumentException("The k should be greater than zero");
        }
        this.k = k;
    }

    /**
     * Returns a list of random `k` samples from the input list.
     *
     * @param list of elements from which we chose the k samples from.
     * @return the list containing k samples, chosen randomly.
     * @throws NullPointerException if the input list is null.
     */
    public List<T> sample(List<T> list) {
        final List<T> samples = new ArrayList<T>(k);
        int count = 0;
        final Random random = new Random();
        for (T item : list) {
            if (count < k) {
                samples.add(item);
            } else {
                // http://en.wikipedia.org/wiki/Reservoir_sampling
                // In effect, for all i, the ith element of S is chosen to be
                // included in the reservoir with probability k/i.
                int randomPos = random.nextInt(count);
                if (randomPos < k) {
                    samples.set(randomPos, item);
                }
            }
            count++;
        }
        return samples;
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        list.add(1);
        list.add(2);
        list.add(3);

        ReservoirSampling<Integer> reservoirSampling = new ReservoirSampling<Integer>(3);
        System.out.print("Expected: 1 2 3, Actual: ");
        for (Integer i : reservoirSampling.sample(list)) {
            System.out.print(i + " ");
        }
        System.out.println();

        System.out.print("Expected: random output: ");
        list.add(4);
        list.add(5);
        list.add(6);
        for (Integer i : reservoirSampling.sample(list)) {
            System.out.print(i + " ");
        }
    }
}
Reservoir sampling
java;algorithm;random
This is not an accurate implementation of the Reservoir Sampling algorithm.The Reservoir Algorithm creates a reservoir of size k and fills it from the first k items from the source data.It then iterates through the remaining source data, and selects a random value for each subsequent item in the data set. If the random value is within the limits of the Reservoir, then the item is placed in the reservoir at that point.The issue you have is in your details.... Consider a source dataset of size 4 (values a,b, c, and d), and a reservoir of size 3.There should be a 3-in-4 chance that the 4th item is sampled. Conversely, there should be a 1-in-4 chance that it is not sampled.In your code, if we applied this example, k would be 3. You would create a reservoir of size 3, and you would fill it with the values a, b, and c.At this point, you would loop again and your item would be 'd', your count would be 3, and we would enter the 'else' clause.You then get your random number with the expression: int randomPos = random.nextInt(count);, or, effectively nextInt(3).nextInt(3) will never return the value 3 since nextInt(int) is an exclusive-of-the-end-range function. As a result, it will always return one of 0, 1, or 2.These values are all less than 3.As a consequence, your algorithm will always include the k+1 element in your reservoir.You need to change the way you generate your random number to be: nextInt(count + 1); Alternatively, you should increment your count before it is used to generate the random value.
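In code, the fix described above is a one-line change to the sampling loop from the question (the comment is mine):

} else {
    // count items have been seen before this one, so this is item
    // number count + 1; it must be kept with probability k / (count + 1).
    int randomPos = random.nextInt(count + 1);
    if (randomPos < k) {
        samples.set(randomPos, item);
    }
}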
_unix.203556
I have this working:

% cat read.sh
#!/bin/sh
file=list.txt
while read line
do
echo $line
cut -d' ' -f27 | sed -n '$p' > file2
done < $file

% cat list.txt
sftp> #!/bin/sh
sftp>
sftp> cd u/aaa
sftp> ls -lrt x_*.csv
-rwxr-xr-x 0 1001 1001 12274972 May 13 21:07 x_20150501.csv
-rw-r--r-- 0 1001 1001 0 May 13 21:44 x_20150601.csv
-rw-r--r-- 0 1001 1001 0 May 13 21:44 x_20150701.csv
-rw-r--r-- 0 1001 1001 0 May 13 21:44 x_20150801.csv
-rw-r--r-- 0 1001 1001 0 May 13 21:44 x_20150901.csv
-rw-r--r-- 0 1001 1001 0 May 13 21:45 x_20151001.csv
-rw-r--r-- 0 1001 1001 0 May 13 21:45 x_20151101.csv
-rw-r--r-- 0 1001 1001 0 May 13 21:45 x_20151201.csv

% cat file2
x_20151201.csv

First question: Is there something more glamorous to read just the very last item on the very last line? Would you use cut and sed? This is a redirect of an sftp directory listing.

Second question: Whatever is in file2, I want to have it read from an sftp batch file to get that exact file.

% cat fetch.sh
#!/bin/sh
cd u/aaa
!sh read.sh
!< file2
get
bye

As you can imagine, sftp doesn't like get provided without any file, so how can I read in file2 to get that file from the sftp server?

% sftp -b fetch.sh user@pulse
sftp> #!/bin/sh
sftp> cd u/aaa
sftp> !sh read.sh
sftp> #!/bin/sh
sftp> !< file2
x_20151201.csv
sftp> get
You must specify at least one path after a get command.
Shell read script for sftp
shell;scripting;sftp;bsd;read
You can combine all the actions in one command:

sftp user@host:/path/to/file/$(tail -1 file1.txt | tr -s ' ' | cut -d ' ' -f 9)

This will fetch the file into the current working directory. If you need to fetch the file into another directory, specify the destination directory as the next argument to the sftp command.
_unix.307589
I wish to have a macvlan interface with an IP on the same subnet as its parent interface, but I am experiencing difficulties in getting traffic from the internet to flow back in to applications (namely ping) bound to the macvlan.I think it's an ARP issue.eth0 is on 192.168.1.214/24I would like another interface, static0 to get 192.168.1.5.I create and bring up a macvlan interface with eth0 as parent:ip link add static0 link eth0 type macvlan mode bridgeip link set static0 upI configure static0's ip to be on the same subnet as eth0:ip addr add 192.168.1.5/24 dev static0eth0's has hwaddr aa:aa:aa:aa:aa:aastatic0 gets bb:bb:bb:bb:bb:bbAt this point, my routing looks as follows:$ ip routedefault via 192.168.1.254 dev eth0 proto static metric 100 169.254.0.0/16 dev eth0 scope link metric 1000192.168.1.0/24 dev static0 proto kernel scope link src 192.168.1.5 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.214 metric 100 It's a bit redundant, but it all goes out the same physical link anyways.I do ping from the host, binding to my new interface, but the application does not receive the replies:# ping -I static0 google.comPING google.com (216.58.216.174) from 192.168.1.5 static0: 56(84) bytes of data.(no output)In a separate terminal, I tcpdump icmp traffic on static0, and see both the echo requests and replies, but they don't make their way to the application:# tcpdump -i static0 -vten icmpbb:bb:bb:bb:bb:bb > 00:00:0c:07:ac:00, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 36290, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.5 > 216.58.216.174: ICMP echo request, id 31813, seq 541, length 6400:1e:f6:46:2c:00 > bb:bb:bb:bb:bb:bb, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 55, id 0, offset 0, flags [none], proto ICMP (1), length 84) 216.58.216.174 > 192.168.1.5: ICMP echo reply, id 31813, seq 541, length 64Another thing I tried was to assign 192.168.1.5/32 to static0 instead of /24, which simplifies the routing table, and produces the same tcpdump trace, but doesn't help otherwise.
Is placing a macvlan interface on the same subnet as the parent interface supported?
networking;macvlan
null
_unix.211066
I have a Linux system with a user named service. I'm using the pam_succeed_if.so module to match this username. For example:auth required pam_succeed_if.so user = serviceBut it won't match a username of service, apparently because it is also a field accepted by pam_succeed_if.so. From the man page (edited for emphasis):Available fields are user, uid, gid, shell, home, ruser, rhost, tty and serviceHow do you escape values that match field names?Further Troubleshooting:I turned the debug option on for pam_succeed_if.so, and it's converting the username service to login:login: pam_succeed_if(login:auth): 'user' resolves to 'login'And this just so happens to be the PAM config for login, /etc/pam.d/login.
How do you get `pam_succeed_if.so` to recognize `service` user?
pam
pam_succeed_if.so converts the passed-in user to the field value, if that value exists. For example, using the user shell:login: pam_succeed_if(login:auth): 'user' resolves to '/bin/bash'I don't know how to keep it from doing this clearly undesired resolve. But there is a workaround that will work for almost everyone.Just check for the resolved value, instead of the actual value.ExamplesTo match the service user in the login PAM config:auth required pam_succeed_if.so user = loginTo match the service user in the sshd PAM config:auth required pam_succeed_if.so user = sshdTo match the shell user, whose configured shell is /bin/bash:auth required pam_succeed_if.so user = /bin/bash
_scicomp.25141
I am solving Biot equation with sparse matrix in MATLAB. I have no problem with global sparse matrices assembly, but when I assign Dirichlet boundary condition, it is so slow.From this topic, How to efficiently implement Dirichlet boundary conditions in global sparse finite element stiffnes matrices, @James also mentioned that it is so slow if global matrix is modified. My matrix is 25000x25000, and it took around 100s to assign boundary conditon. In MATLAB, many authors state that, by using setdiff function, only free nodes are considered when solving the equation. Only right hand vector is modified when applying dirichlet boundary condition and global matrix is kept unchanged.Hier is the code in MATLAB: % Dirichlet conditions u = sparse(size(coordinates,1),1);u(unique(dirichlet)) = u_d(coordinates(unique(dirichlet),:));b = b - A * u;% Computation of the solutionu(FreeNodes) = A(FreeNodes,FreeNodes) \ b(FreeNodes);What is the difference between two approaches? And is there any similar function as setdiff in MATLAB in C++?
Dirichlet boundary condition for sparse matrix - Solving Ax=b only for free nodes?
finite element;boundary conditions;sparse
I frequently solve simple FEM problems with 100 000 to 1 000 000 unknowns in Matlab and setting Dirichlet boundary conditions is blazing fast.Here is a code (although you cannot run it without the related functions but you get the idea):% get meshmesh=squaremesh(9);% build stiffness matrixK=bilin_assembly(@(u,v,ux,uy,vx,vy,x,y) ux.*vx+uy.*vy,mesh);% build load vectorf=lin_assembly(@(v,vx,vy,x,y) 1.*v,mesh);% number of nodesn=size(mesh.p,2);% nodes on the dirichlet boundaryD=boundary_dofs(mesh);% free nodes are all nodes except dirichlet boundary nodesI=setdiff(1:n,D);% initialize u as zerou=zeros(n,1);% set dirichlet conditionX=mesh.p(1,D);u(D)=sin(10*X);% solve the problem in the free nodesu(I)=K(I,I)\(f(I)-K(I,D)*u(D));% plot the solutionfigure(1);trisurf(mesh.t',mesh.p(1,:),mesh.p(2,:),u);The whole thing took me 3.5 seconds to run with n=263169. The result looks like this:One cannot really see what's happening without removing the element edge lines:What the commandu(I)=K(I,I)\(f(I)-K(I,D)*u(D));actually does is that it forms a smaller sparse matrix by including only the columns and rows defined in set I. In fact, runningtic;K(I,I);tocprintsElapsed time is 0.018481 seconds.so this is a very fast operation indeed.Let's say that in C++ you store your sparse matrix in compressed row storage format. Then you can easily drop rows by looking at the contents of your row index array. Next perform a transpose and drop rows again. This way you end up with a smaller sparse matrix that you can use in your computations.Edit: Forgot to mention, that in C++ standard library you have set_difference as well.
_datascience.18217
I know that there are various pre-trained models available for ImageNet (e.g. VGG 16, Inception v3, Resnet 50, Xception). Is there something similar for the tiny datasets (CIFAR-10, CIFAR-100, SVHN)?
Are pre-trained models vor CIFAR-10 / CIFAR-100 / SVHN available?
machine learning;keras
null
_unix.228812
I know that some shells at least support file test operators that detect when a filename names a symlink.Is there a POSIX utility1 that provides the same functionality?1 I may not be using the right terminology here. What I mean by utility is a free-standing executable living somewhere under /bin, /usr/bin, etc., as opposed to a shell built-in.
Looking for POSIX utility to test whether filename is a symlink
symlink;posix
You're looking for test:-h pathnameTrue if pathname resolves to a file that exists and is a symbolic link. False if pathname cannot be resolved, or if pathname resolves to a file that exists but is not a symbolic link. If the final component of pathname is a symlink, that symlink is not followed.Most shells have it as a builtin, but test also exists as a standalone program, which can be called from other programs without invoking an intermediate shell. This is the case for most builtins that shells may have, except for those that act on the shell itself (special builtins like break, export, set, ).[ -h pathname ] is equivalent to test -h pathname; [ works in exactly the same way as test, except that [ requires an extra ] argument at the end. [, like test, exists as a standalone program.For example:$ ln -s foo bar$ /usr/bin/test -h bar && echo yy
_softwareengineering.312915
With traditional Java/Spring web apps, I've historically used an JEE architecture where there's a domain tier and a web tier. The web tier mostly contains web controllers. The domain tier includes entities, data access objects (or repositories in Spring Data parlance), and transactional services.Lately I've been building more apps that incorporate external web services: sometimes as data sources, sometimes as providers of some service that my app needs.Currently I incorporate these web services by invoking them from the service tier. That way the web controllers don't have to handle them. (I like to treat web controllers as thin adapters between domain services and HTTP.)I wonder however whether there are better approaches.I like the idea of keeping the service tier responsible for managing the app's internal resources, as opposed to also incorporating external resources. Here maybe I could handling integration of internal and external resources above the service tier (but below the web tier).Either that, or else use the service tier to handle integration concerns, but separate the strictly internal resources from the other ones. Basically treat the internal/external distinction as an implementation detail that we hide behind the service interface.I'm sure there are other approaches too. Interested to hear about them and associated pros/cons.
Incorporating external web services in Java/Spring web app
java;architecture;web applications;web services;spring
null
_unix.171239
I have four partitions on 2 HD's, sda and sdb, each having Debian installed in different configurations. My general purpose OS is on sda1, and I want to be able to run VM's of the other OS's while logged in to sda1.A recommendation I got was to use KVM since it apparently allows use of an existing installed OS as it's 'image'. I have never used KVM before and would appreciate guidelines & references on how to go about doing install and setup.
How to install KVM to use existing installed OS's?
kvm
You can boot you other systems like this:$ qemu-system-x86_64 -hda /dev/sdb -m 2G -enable-kvm \ -net user,hostfwd=tcp::10022-:22 -net nicAssuming that /dev/sdb has a working grub installation.The -net enables simple networking support (TCP/UDP but no ICMP) and creates a port redirection for ssh (you can then connect to local port 10022 on the host to ssh into the VM). With -m 2G you assign 2GB RAM to this KVM instance.
_unix.230197
I am quite curious about why locale-archive files are preferred in many Linux distros and what and to which extent its advantage over compiled files for each locale would be.
What's the advantage of locale archives over locale files spread out in directories?
filesystems;locale;glibc
The locale-archive files contain the languages usedover the system man pages for example. This memory-mapping enablesreading the file as it's in memory, avoiding system callsused to perform disk-read operations, therefore it can result in much fasteraccess. Memory-mapped files (like shared libraries) are kind of part of the virtual memory of processes like the top command, VIRTfield.So, the part of the locale-archive file mapped tomemory adds up to the virtual memory of every processes that makes useof glibc (basically everything), while this is actually only once inmemory.At last, for each processes, the virtual memoryoverestimates the real memory of the process by, the amount of part ofthe locale-archive file which is memory-mapped.
_softwareengineering.303159
This is yet another follow up question about Object oriented design in general. I am trying to break them down to separate questions per advice I got from @Jay earlier.I get a new question, almost every step of the OO design process. I tried to get the ideas from books, articles and the web. It just gets more muddied up. I would like to get some advice from the experts here, so I can get my concepts right.Say I have the below Objects:public class AddressRecord { protected String addressLine1; protected String addressLine2; protected String city; protected String state; protected String postalCode; protected String country; ....}public class AddressValidationRequest { private AddressRecord addresses[]; protected ArrayList actions; protected ArrayList columns; protected Properties options; ...}Should I keep AddressRecord completely hidden inside Request (is it Aggregation?) or expose and access the AddressRecord from outside (Composition)? (To me, even inheritance seemed like an option to extend AddressRecord to a Request, though I feel they are distinct entities in real world, so I dropped that idea).What I mean is, should I have AddressRecord as a private object inside Request and add getters/setters in the Request itself for each field in AddressRecord?Or just add getter/setter to get/set AddressRecord[] from the Request and then set it up outside, say in a controller code?(This later idea I got when I tried to extract the AddressRecord class from Request in Eclipse like below:public class AddressSearchRequest { AddressRecord address; public AddressSearchRequest(String format, String customerId, String addressLine1, String suite, String city, String state, String postalCode) { this.format = format; this.customerId = customerId; this.address.setAddressLine1(addressLine1); this.address.setAddressLine2(Suite); this.address.setCity(city); this.address.setState(state); this.address.setPostalCode(postalcode); } public void setAddressLine1(String addressLine1) { this.address.setAddressLine1(addressLine1); } public String getSuite() { return address.getAddressLine2(); } public void setSuite(String suite) { this.address.setAddressLine2(suite); } public String getCity() { return address.getCity(); } public void setCity(String city) { this.address.setCity(city); } public String getState() { return address.getState(); } public void setState(String state) { this.address.setState(state); } public String getPostalCode() { return address.getPostalCode(); } public void setPostalCode(String postalCode) { this.address.setPostalCode(postalCode); }}UPDATE:There are actually 2 different types of Requests I am dealing with - a Search Request - takes only one address and may return multiple matches.A Validation Request may take one or more addresses. I had them both as Request causing some confusion. corrected it.Note: I am using AddressRecord also in other types of Request(s) and Response classes.
Java Object Oriented Question - 2
java;object oriented design
null
_cs.32332
The ACM Contest Problem 102 (HTML or PDF) can be paraphrased as:Given 3 bins each containing possibly different number of bottles of 3 colors, move the bottles so that there is one color per bin, minimizing the number of bottle movements.Specifically, we wish to decide on an allocation of colors to bins so that moving from the initial position to the sorted position requires the least number of bottle moves. The bins are assumed to have infinite capacity.For the problem as stated, one can brute-force check all $3! = 6$ permutations to see which has the fewest number of bottle movements. However, for $N > 3$ colors of glass, $N!$ grows very large. Is there a way to solve this problem for general $N$ that runs in polynomial time?
Is ecological bin packing NP-hard?
complexity theory;optimization;np hard
Yes, this can be solved in polynomial time, for arbitrary $N$. This is an instance of the assignment problem, for which polynomial-time algorithms are known.You want to assign a color of glass to each bin, such that each bin gets exactly one color, and each color is assigned to exactly one bin. The cost of assigning bin $b$ to color $c$ is the number of bottle moves needed to move all bottles of color $c$ that aren't currently in bin $b$, into bin $b$. Therefore, this is an assignment problem.(You can compute this cost function more efficiently as follows: the cost of assigning bin $b$ to color $c$ is the number of moves needed to move all bottles that aren't colour $c$ out of bin $b$. Thanks to @David Richerby for this insight.)The assignment problem can be solved using standard methods, e.g., the Hungarian algorithm or algorithms for min cost max flow.
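As an illustration of the reduction (my sketch, not part of the answer), here I assume bottles[b][c] holds the number of bottles of color c currently in bin b, and use SciPy's assignment solver:

import numpy as np
from scipy.optimize import linear_sum_assignment

def min_moves(bottles):
    bottles = np.asarray(bottles)  # bottles[b][c]: count of color c in bin b
    # Cost of assigning color c to bin b: every bottle in bin b that is
    # NOT of color c has to be moved out of that bin.
    cost = bottles.sum(axis=1, keepdims=True) - bottles
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

# Example: 3 bins, 3 colors.
print(min_moves([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]]))  # prints 30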
_unix.306393
I've got an empty modules.dep file after compiling a new kernel from source. After the kernel compile, and then the kernel module compile, and then make modules_install to the modules directory, I checked that the modules.dep file inside the destination modules directory is empty, and then I ran this command:

depmod -a -b <kernel-modules-dest-dir> -e -F <kernel-source-directory>/System.map -n -v 4.7.0

and the result was:

# Aliases extracted from modules themselves.
# Soft dependencies extracted from modules themselves.
# Aliases for symbols, used by symbol_request().
# Device nodes to trigger on-demand module loading.

I am compiling kernel 4.7.0 using a host with version 4.4.x (Ubuntu 16.04). There are a lot of .ko files inside <kernel-modules-dest-dir>, but somehow depmod doesn't see any of the compiled loadable kernel modules.

These are, roughly, the commands I ran:

cp ../../kernel-config ./.config-x86_64
make mrproper
make menuconfig
make -j8
make bzImage
cp arch/x86/boot/bzImage ../../vmlinuz
mkdir -p ../../kernel-modules
make modules
make modules_install INSTALL_MOD_PATH=../../kernel-modules

Is there anything wrong in my compilation steps?
empty modules.dep for new kernel compilation
shell script;linux kernel;compiling;kernel modules;modprobe
null
_cs.43892
I am trying to prove that

$$L=\{\langle M\rangle \mid M \text{ is a TM}, \exists w.\ \text{in } M(w) \text{ the head moves only right and } M(w)\!\uparrow \}$$

is decidable. I thought about the following solution: let's build $\hat{M}$, a TM that will decide $L$.

$\hat{M}$ on input $\langle N \rangle$:

1. $\Sigma^{|Q|+1}$ is decidable, so it has an enumerator $f$.
2. For every word $w \in \Sigma^{|Q|+1}$, simulate $N$ on $w$ in parallel for $|Q|+1$ steps:
   - if $N$ got to a blank, then $N$ moved only right and is stuck in a loop.
3. If all of those words stopped the simulation at a blank, accept; else reject.

I am quite sure I am missing something here. Can you help please?
Show that the set of all TMs that move only to the right and loop for some input is decidable
computability;turing machines
Your approach is on the right track. It should follow these ideas:

1. If there exists such a $w$, then, when $M$ reaches the first blank after reading $w$, it will be in a state $\tilde q$ that goes right (and then keeps going right, no matter the state changes). Whether a $\tilde q$ satisfying this property exists can easily be verified by checking all the states in $Q$ (of the given $M$).

2. If such a $\tilde q$ exists, we need to see whether it is reachable after reading some input $w \in \Sigma^*$. The idea you already noticed is that if this happens, then $\tilde q$ will be reachable after processing the end of some word in $\Sigma^{|Q|+1}$ (or the like). This is simply a pigeonhole argument.

3. Therefore it is enough to examine only the words in $\Sigma^{|Q|+1}$ and check whether $M$, on any of them, goes only right and reaches $\tilde q$ when hitting the first blank. Since the number of words in $\Sigma^{|Q|+1}$ is finite, this is doable in finite time.

You just need to properly formalize the above ideas as a decider $\hat M$.
_unix.26606
I've removed nagios3 with apt-get remove nagios3 and then removed the files in by using the command sudo rm -R /etc/nagios*Now when I run apt-get install nagios3 the config files (/etc/nagios3/*) are not present. How can I regenerate them?This box is on Ubuntu 10.04.
How can I regenerate the default '/etc/' config files?
linux;ubuntu;configuration;apt;aptitude
Purge nagios3. Then reinstall. That will probably work.

apt-get purge nagios3
apt-get install nagios3

The purge will get rid of the config files, which the system didn't delete initially, and so thought were still installed. If purging nagios3 is not an option, then it will be a little more complicated. If that is the case, leave a comment.
_unix.225590
I have a file of multiple columns. I would like to make an additional column based on the values of 2 columns from this file.

Example input:

A B C D E F
1 2 T TACA A 3 2
3 4 I R 8 2
9 3 A C 9 3

If the values in cols 3 and 4 (labelled C and D) are the letters A, C, G or T, col 7 should be "P".
If the letters in cols 3 and 4 are I, D, or R, col 7 should be "Q".
If there are multiple letters in either column 3 or 4, col 7 should be "Q".

Desired output:

A B C D E F G
1 2 T TACA A 3 2 Q
3 4 I R 8 2 Q
9 3 A C 9 3 P

I have the following code, except this replaces some of the col 3 values with '1'. I want to leave cols 1-6 unchanged.

awk '{if ((($3!="A" && $3!="C" && $3!="G" && $3!="T") || ($3="I" || $3="D" || $3="R")) || (($4!="A" && $4!="C" && $4!="G" && $4!="T") || ($4="I" || $4="D" || $4="R"))) { $7 = "INDEL" } else { $7 = "SNP" }}1' filename > newfilename
How to add new column, where the value is based on existing columns with awk
bash;text processing;awk
This works with mawk:

awk 'NR==1{$7="G"; print; next} \
     $3~/^[A,C,G,T]$/ || $4~/^[A,C,G,T]$/ {$7="P"} \
     $3~/^[I,D,R]$/ || $4~/^[I,D,R]$/ {$7="Q"} \
     $4~/[A-Z][A-Z]/ || $3~/[A-Z][A-Z]/ {$7="Q"} 1' file

1st line: on the first line (the header), write the "G".
2nd line: if $3 or $4 is A, C, G or T, then $7 is "P".
3rd line: if $3 or $4 is I, D, or R, then $7 is "Q".
4th line: if $3 or $4 is more than one letter, then $7 is "Q". The 1 at the end prints all lines.

(Note that the commas inside the bracket expressions also match a literal comma; [ACGT] and [IDR] would be the tighter classes, but the data contains no commas, so the result is the same.)
_softwareengineering.76666
Is it possible to generate every possible combination of a 32 character alpha numeric string? If so, how long would it take on today's fast computers?My lecturer at university said it's impossible, and I thought nothing is impossible.Any ideas on how long it'd take and how it'd be done? We're using Java at the moment. I would like to think I can prove him wrong.
Generating every combination of a 32 character alpha numeric string?
java;php;algorithms;cryptography
It boils down to the math, even if you take pretty crazy values. Let's say you have an instruction that can generate one 32 character combination in one cycle. Also, you're not storing these combinations so there is virtually no memory access. Finally let's assume that your effective clock speed for these instructions is 2.0 petahertz (ten to the fifteenth cycles per second), which doesn't exist yet outside of the fastest supercomputers on the planet.The number of 32 character combinations that are alphanumeric with repetition is 36 to the 32nd power. So, 10 to the 32nd power is a much smaller lower bound on this value. To compute this smaller group, you will need 10 to the 17th seconds (combinations / clock speed). There's approximately 32 million seconds in a year, so for this example, we'll take 100 million seconds as 3 years to make the math easy. So that leaves us with 10 to the 17th seconds being divided by 10 to the 8th seconds per 3 years. That means it will take more than 3 billion years for this computation to complete. Check that against the estimated time we have left before our sun becomes a red giant, that's just 5 billion years....
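The back-of-the-envelope numbers are easy to check. This is my own sketch of the arithmetic, using Python's arbitrary-precision integers and the same hypothetical 2 PHz machine as above:

combinations = 36 ** 32           # 36 alphanumeric symbols, 32 positions
rate = 2 * 10 ** 15               # hypothetical combinations per second
seconds = combinations // rate
years = seconds // 32_000_000     # ~32 million seconds in a year
print(f"{combinations:.3e} combinations, ~{years:.3e} years")
# Even the 10**32 lower bound used above already gives ~3e9 years.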
_webmaster.107223
I came across a site which serves multiple articles with a single click, using infinite scrolling. I can see the page URL updating as I reach the bottom of the first article, and so on.

I am curious to know how this works. What is the procedure to implement it, and what pros and cons might it have in terms of reporting and SEO?
What is the SEO impact of infinite scrolling of all articles?
seo;infinite scroll
null
_codereview.172770
I have created a GNUMake building system for pandoc. It has grown quite a bit, I wonder if I can optimize it further. Would it be possible, for instance, that it only runs a target if the output hasn't changed since last run?.DEFAULT_GOAL := pdfINPUTDIR=$(CURDIR)/sourceOUTPUTDIR=$(CURDIR)/outputSTYLEDIR=$(CURDIR)/styleNAME = $(notdir $(shell basename $(CURDIR)))FILFILES = $(wildcard style/*.py)FILTER := $(foreach FILFILES, $(FILFILES), --filter $(FILFILES))TEXFLAGS = --filter pandoc-crossref --filter pandoc-citeproc $(FILTER) --latex-engine=xelatexifeq ($(shell test -e $(STYLEDIR)/template.tex && echo -n yes),yes) TEXTEMPLATE = --template=$(STYLEDIR)/template.texendififeq ($(shell test -e $(STYLEDIR)/reference.docx && echo -n yes),yes) DOCXTEMPLATE = --reference-docx=$(STYLEDIR)/reference.docxendifhelp: @echo ' ' @echo 'Makefile for automated typography using pandoc. ' @echo 'Version 1.0 ' @echo ' ' @echo 'Usage: ' @echo ' make prepare first time use, setting the directories ' @echo ' make html generate a web version ' @echo ' make pdf generate a PDF file ' @echo ' make docx generate a Docx file ' @echo ' make tex generate a Latex file ' @echo ' make beamer generate a beamer presentation ' @echo ' make all generate all files ' @echo ' make update update the makefile to last version ' @echo ' make will fallback to PDF ' @echo ' ' @echo 'It implies some directories in the filesystem: source, output and style' @echo 'It also implies that the bibliography will be defined via the yaml ' @echo ' ' @echo 'Depends on pandoc-citeproc and pandoc-crossref ' @echo 'Get local templates with: pandoc -D latex/html/etc ' @echo ' 'all : tex docx html epub pdfpdf: pandoc $(INPUTDIR)/*.md \ -o $(OUTPUTDIR)/$(NAME).pdf \ $(TEXTEMPLATE) \ $(TEXFLAGS) 2>pandoc.log xdg-open $(OUTPUTDIR)/$(NAME).pdftex: pandoc $(INPUTDIR)/*.md \ --filter pandoc-crossref \ --filter pandoc-citeproc \ -o $(OUTPUTDIR)/$(NAME).tex \ --latex-engine=xelatexdocx: pandoc $(INPUTDIR)/*.md \ --filter pandoc-crossref \ --filter pandoc-citeproc \ $(DOCXTEMPLATE) \ --toc \ -o $(OUTPUTDIR)/$(NAME).docxhtml: pandoc $(INPUTDIR)/*.md \ -o $(OUTPUTDIR)/$(NAME).html \ --include-in-header=$(STYLEDIR)/style.css \ -t html5 \ --toc \ --standalone \ --filter pandoc-crossref \ --filter pandoc-citeproc \ --number-sections rm -rf $(OUTPUTDIR)/source mkdir $(OUTPUTDIR)/source cp -r $(INPUTDIR)/figures $(OUTPUTDIR)/source/figuresepub: pandoc $(INPUTDIR)/*.md \ -o $(OUTPUTDIR)/$(NAME).epub \ --toc \ --standalone \ --filter pandoc-crossref \ --filter pandoc-citeproc rm -rf $(OUTPUTDIR)/source mkdir $(OUTPUTDIR)/source cp -r $(INPUTDIR)/figures $(OUTPUTDIR)/source/figuresbeamer: pandoc $(INPUTDIR)/*.md \ -t beamer \ -o $(OUTPUTDIR)/$(NAME).pdf \ $(TEXTEMPLATE) \ $(TEXFLAGS) 2>pandoc.log xdg-open $(OUTPUTDIR)/$(NAME).pdfprepare: mkdir output mkdir source mkdir styleupdate: wget http://tiny.cc/mighty_make -O Makefileclean: rm -f $(OUTPUTDIR)/ *.md *.html *.pdf *.tex *.docx.PHONY: help pdf docx html tex clean
Pandoc builder system
makefile;make
only runs a target if the output hasn't changed since last run

Consider writing rules like this:

%.pdf: %.md
	pandoc -o $@ $<

For that to work, the current all: ... pdf would need to mention particular *.pdf files, perhaps by globbing: $(patsubst %.md,%.pdf,$(wildcard *.md))
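Put together, a minimal version of that idea for this project's single-PDF layout could look like the following. This is my own untested sketch built on the variables from the question (recipe lines must start with a tab):

MARKDOWN := $(wildcard $(INPUTDIR)/*.md)

# Rebuild the PDF only when one of the source files is newer than it.
$(OUTPUTDIR)/$(NAME).pdf: $(MARKDOWN)
	pandoc $^ -o $@ $(TEXTEMPLATE) $(TEXFLAGS)

pdf: $(OUTPUTDIR)/$(NAME).pdf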
_unix.359742
I already tried a lot of suggestions I found in the forums, but none of them have worked.

I have two machines, one with an Ubuntu distribution and another with openSUSE. The following happens:

-> ssh user@ubuntuhost (from the openSUSE machine): WORKS!
-> ssh user2@anothermachine (from the openSUSE machine): WORKS!
-> ssh user3@opensusehost (from the Ubuntu machine): DOES NOT WORK!
-> ssh user4@anothermachine (from the Ubuntu machine): WORKS!

I tried generating keys and adding them to the authorized_keys file, restarting the ssh service, and checking whether port 22 is open (I believe so). I do not know what else to do.
ssh not working on both directions (only one)
ssh
null
_codereview.45974
Please review my code for bearer token (JWT) authentication of Web API 2 (Self Hosted using OWIN)Are there any security issues in the implementation?Quick overview:Token creation and validation using JWT HandlerSymmetric key encryptionCORS support not yet checked for the authorization headerWeb traffic will be on SSL.The key cannot be auto-generated as it will break during a load balanced scenario. Can I save the key in config? Or switch to X509 certificates?This is the main class to create and validate tokens:public class TokenManager{ public static string CreateJwtToken(string userName, string role) { var claimList = new List<Claim>() { new Claim(ClaimTypes.Name, userName), new Claim(ClaimTypes.Role, role) //Not sure what this is for }; var tokenHandler = new JwtSecurityTokenHandler() { RequireExpirationTime = true }; var sSKey = new InMemorySymmetricSecurityKey(SecurityConstants.KeyForHmacSha256); var jwtToken = tokenHandler.CreateToken( makeSecurityTokenDescriptor(sSKey, claimList)); return tokenHandler.WriteToken(jwtToken); } public static ClaimsPrincipal ValidateJwtToken(string jwtToken) { var tokenHandler = new JwtSecurityTokenHandler() { RequireExpirationTime = true }; // Parse JWT from the Base64UrlEncoded wire form //(<Base64UrlEncoded header>.<Base64UrlEncoded body>.<signature>) JwtSecurityToken parsedJwt = tokenHandler.ReadToken(jwtToken) as JwtSecurityToken; TokenValidationParameters validationParams = new TokenValidationParameters() { AllowedAudience = SecurityConstants.TokenAudience, ValidIssuer = SecurityConstants.TokenIssuer, ValidateIssuer = true, SigningToken = new BinarySecretSecurityToken(SecurityConstants.KeyForHmacSha256), }; return tokenHandler.ValidateToken(parsedJwt, validationParams); } private static SecurityTokenDescriptor makeSecurityTokenDescriptor( InMemorySymmetricSecurityKey sSKey, List<Claim> claimList) { var now = DateTime.UtcNow; Claim[] claims = claimList.ToArray(); return new SecurityTokenDescriptor { Subject = new ClaimsIdentity(claims), TokenIssuerName = SecurityConstants.TokenIssuer, AppliesToAddress = SecurityConstants.TokenAudience, Lifetime = new Lifetime(now, now.AddMinutes(SecurityConstants.TokenLifetimeMinutes)), SigningCredentials = new SigningCredentials(sSKey, http://www.w3.org/2001/04/xmldsig-more#hmac-sha256, http://www.w3.org/2001/04/xmlenc#sha256), }; }}I have a message handler to intercept the requests and I verify the validity of token except for the route for log in, using TokenManager.ValidateJwtToken() above.To create the token, in the LoginController, I have the following code:[Route(login)][HttpPost]public HttpResponseMessage Login(LoginBindingModel login){ if (login.Username == admin && login.Password == password) //Do real auth { string role = Librarian; var jwtToken = TokenManager.CreateJwtToken(login.Username, role); return new HttpResponseMessage(HttpStatusCode.OK) { Content = new ObjectContent<object>(new { UserName = login.Username, Roles = role, AccessToken = jwtToken }, Configuration.Formatters.JsonFormatter) }; } return new HttpResponseMessage(HttpStatusCode.BadRequest);}The full working code is available here and the instructions to run the sample are in the Wiki.
Web API 2 authentication with JWT
c#;authentication;asp.net web api;jwt
null
_unix.276169
If X11 provides functionality to use a graphical application remotely, why is VNC used instead of the X11 server's capabilities, apart from overhead issues?

Can you configure your X11 client to connect to a remote X11 server and get access to the full desktop, like in VNC?

Can you have a fusion of local and remote desktops and manage both of them transparently?
Why is VNC so extended instead of X11
x11;vnc
The short version is that X11 and VNC serve different purposes, so you'd use them in different circumstances.It is possible to open a full X11 desktop remotely, using XDMCP; this is how old X11 terminals work (a central system provides the desktops and hosts all the applications, the terminals only display them). But as far as I'm aware you can't connect remotely to an existing X11 desktop.What you can do with X11 is have a local desktop and display remote applications on it, without a remote desktop. (This might be close to what you're thinking of with transparent management of fused local and remote desktops.)VNC's main advantage is that it's cross-platform, so you can view a Windows desktop on an X11 system etc. There are VNC servers available which allow a VNC client to connect to an existing X11 desktop, so you can re-connect remotely to your existing desktop without restarting everything. You can also share a desktop: local and remote users can use (or at least, view) the same desktop simultaneously.As far as overhead is concerned, plain VNC is less efficient than X11: VNC transfers pixel updates, whereas X11 transfers graphical primitives (e.g. draw a rectangle, print this text). This is less relevant nowadays since for example most text updates in X11 are now pixel-based.
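The "remote applications on a local desktop" case is the one X11 handles natively. For example, with X11 forwarding enabled on the remote sshd, you can typically run (hostname and application are placeholders):

$ ssh -X user@remotehost firefox

The application runs on the remote machine while its window appears on your local desktop, with no remote desktop involved.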
_codereview.152681
I use the following code in JavaScript to get the difference between two Date objects. I want the result to return the difference in:

seconds if the result is less than 60 secs
minutes if the result is less than 60 mins
hours if the result is less than 24 hours
days otherwise

The code is very long and I see lots of code duplication. Isn't there a smarter/shorter way to do this (without using a library)?

function dateDiff(a, b) {
    let utc1 = Date.UTC(a.getFullYear(), a.getMonth(), a.getDate(),
        a.getUTCHours(), a.getUTCMinutes(), a.getUTCSeconds());
    let utc2 = Date.UTC(b.getFullYear(), b.getMonth(), b.getDate(),
        b.getUTCHours(), b.getUTCMinutes(), b.getUTCSeconds());

    let result = (utc2 - utc1) / (1000 * 60 * 60 * 24);
    let floor = Math.floor(result);
    if (floor > 0) return floor + "d";

    result *= 24;
    floor = Math.floor(result);
    if (floor > 0) return floor + "h";

    result *= 60;
    floor = Math.floor(result);
    if (floor > 0) return floor + "min";

    result *= 60;
    floor = Math.floor(result);
    if (floor > 0) return floor + "sec";
}
Get the difference between two dates, in the most convenient unit
javascript;datetime;ecmascript 6
null
_softwareengineering.215655
I have a main activity that contains many buttons. I don't want to have a list of buttons like this. I know this looks bad, but only because I threw this together to show you what I have. As you can see, I really don't even have enough room. I don't want a ScrollView in my main activity. Does anyone have suggestions on building my activity to look sleek and simple?
Main activity design too busy
android
One approach would be to put the buttons in a NavigationDrawer and display only the most important info in the content of the Activity or one of the other screens as the starting point and the user can navigate from here through the NavigationDrawerIf you don't like this approach you could also try putting the buttons in an ActionBar Spinner as a list. This way you'll have one of the Fight Cards, Buy Tickets, etc screen as the starting point and the user can navigate from here through the SpinnerI don't know exactly what you're displaying in the application so I cannot decide what is the best solution for your problem, you'll have to figure that out yourself.
_unix.17153
I have a machine running Debian Linux (stable), but I'd also like a Debian unstable box to use as a sandbox to test and develop. Is there an easy way to create a virtualized Debian unstable machine on my Debian stable box?
Running Linux virtual machine guest on Linux Host
debian;virtual machine
If you want the alternate distribution as a development environment, and you don't need to run services (or only a few selected ones) or a different kernel in the unstable installation, put it in a chroot. See this guide (replace 64-bit/32-bit by stable/unstable).At the other extreme, if you want a completely separate installation, the easiest way is to fire up a full-fledged virtual machine and install Debian unstable there. VirtualBox is easy to set up; VMware and KVM are reasonable alternatives.There are other Linux-on-Linux virtualization technologies that provide better performance (less RAM usage, in particular) at the expense of ease of installation and the requirement of running Linux on Linux. Go for this approach only if you need to run all the normal services in the unstable installation, and you can't afford the RAM requirement of fully independent virtualization.
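For the chroot route, the usual starting point is debootstrap; something like the following sketch (the target directory and mirror URL are placeholders of my choosing):

# apt-get install debootstrap
# debootstrap unstable /srv/chroot/sid http://deb.debian.org/debian
# chroot /srv/chroot/sid /bin/bash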
_unix.379214
Executing . .bash_profile (or source ~/.bash_profile) doesn't seem to do anything. (.bash_profile below; I don't see the "bash profile end" echo in the terminal.) However, if I copy-paste lines from .bash_profile, they work.

I'm running an interactive login shell and have tried restarting, but now I am at a loss to understand why . .bash_profile doesn't return anything. (Additionally, when I open a new terminal, .bash_profile isn't sourced, but it was a couple of hours ago.) Any help would be most appreciated.

Mac OS Sierra.
SHELL: /bin/bash
PATH: /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Library/TeX/texbin

# Colors and styles
# export PS1="\[\033[36m\]\u\[\033[m\]@\[\033[32m\]\h:\[\033[33;1m\]\w\[\033[m\]\$ "
export CLICOLOR=1
export LSCOLORS=ExFxBxDxCxegedabagacad
alias ls='ls -GFh'
# -G == colorize output, -h size human readable, -F {/ after directory, * after executable, @ after symlink}

# Move homebrew's usr/local/bin ahead of /usr/bin
# export PATH="/usr/local/bin:/usr/local/sbin:~/bin:$PATH"

# Add homebrew directory
export PATH="/usr/local/bin:/usr/local/sbin:$PATH"

# virtualenvwrapper
export WORKON_HOME=~/.envs
export VIRTUALENVWRAPPER_PYTHON=/usr/local/Cellar/python3/3.6.2/bin/python3
export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
export VIRTUALENVWRAPPER_VIRTUALENV_ARGS='--no-site-packages'
source /usr/local/bin/virtualenvwrapper.sh

echo "bash profile end"
Mac terminal does not execute .bash_profile
bash;terminal
null
_reverseengineering.3766
Does anybody know a hack to debug a 16-bit DOS program in IDA 6.1?
DOS program debug in IDA?
ida;debugging;dos
IDA DOSBox plugin by Eric Fry. Note that it requires a modified DOSBox build.
_softwareengineering.306181
Traditionally, a function that wants to skip certain items during processing will accept a Collection or var-args argument, but in the Java 8 world I think I should switch to Predicates.

Now, since the code I am working on is mainly for testing, most of the exclusions will be ad-hoc literals, so I am suggesting users use the form

Arrays.asList("a", "b", ...)::contains

I am just wondering how acceptable this is?

(I see little value in wrapping the above form in an overload because it would be one more signature to learn, and it prevents the simple optimisation of extracting this literal to a variable.)

Edit: The old function would be like

public void processUnlessNameIn(String... names)

I want the new one to be

public void processUnlessName(Predicate<String> pred)
How confusing is `new SomeCollection(values...)::contains` as a Predicate?
java;coding style;lambda;java8
To directly answer your question: somewhat confusing, since the code example you gave just doesn't sit right, for me anyway.

TBH, if your collection always contains strings, then I think adding predicates/lambdas is overkill in this situation. What does it really get you? If you have a collection of a richer object type, and want the user to be able to operate on that, then supplying a lambda instead of many overrides for every possible filtering choice would be sensible.

(Also, I'd rename processUnlessName(...) to processExcept(...), which seems clearer, but that's purely my opinion.)
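For illustration, a minimal sketch of the two styles side by side (the processUnlessName* method names are the asker's; the Processor and Demo classes and the placeholder data are invented):

import java.util.Arrays;
import java.util.function.Predicate;

class Processor {
    // Old style: varargs of literals, kept as a thin wrapper
    public void processUnlessNameIn(String... names) {
        processUnlessName(Arrays.asList(names)::contains);
    }

    // New style: the caller supplies an arbitrary exclusion predicate
    public void processUnlessName(Predicate<String> excluded) {
        for (String item : new String[] {"a", "b", "c"}) { // placeholder data
            if (!excluded.test(item)) {
                System.out.println("processing " + item);
            }
        }
    }
}

class Demo {
    public static void main(String[] args) {
        Processor p = new Processor();
        p.processUnlessName(Arrays.asList("a", "b")::contains); // method reference
        p.processUnlessName(name -> name.startsWith("tmp"));    // richer predicate
    }
}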
_unix.333512
I'm currently reading Advanced Programming in the Unix Environment. In one of the exercises, it says:

The only user-level daemon that isn't a session leader is the rsyslogd process. Explain why the syslogd daemon isn't a session leader.

However, when I go to verify this myself, I see that on my system (Linux 4.4) it indeed is a session leader (because PID == SID):

UID     PID   PPID  PGID  SID   CMD
syslog  1171  1     1171  1171  /usr/sbin/rsyslogd -n

Is this a systemd thing? Some of the information in this book is a bit out of date now that everyone has jumped on the systemd bandwagon, whereas it talks mainly about classic System V init. Or perhaps they've simply changed how it works? The book clearly wants to make a point of why it's different, so if anyone knows why historically it didn't use to be a session leader, and why it is now, that'd be excellent.
rsyslogd as a session leader
linux;process
Edit

I actually had a look at an old Ubuntu 10.04 system that had rsyslogd 4.2.0 running. That one did not call setsid() at all (so it inherited the sid from the process that executed it) but instead did this (from strace output):

19391 open("/dev/tty", O_RDWR|O_LARGEFILE|O_CLOEXEC) = 0
19391 ioctl(0, TIOCNOTTY) = 0

to detach from the terminal. Looking at the source code, it does that only when HAVE_SETSID is not set. Obviously Linux has setsid(), and has had it for decades, so something was amiss.

Looking more at the source, it's just that the build procedure never set HAVE_SETSID, because it never checked for setsid() support in the first place. The bug (a typo: setsid spelled setid in the autoconf file) was fixed in 2013 and first released in rsyslogd 7.5.3. (By the way, see the TODO section about HP/UX in the old code, which shows the authors had already realised something was amiss, but didn't investigate it until much later.)

Keeping the original answer below, as one might find the information in it useful.

A wild guess: if you're a session leader and open a tty device without the O_NOCTTY flag, you become the controlling process of a terminal. That's why, when trying to run an application as a daemon that wasn't designed as one, it's recommended to do another fork after the setsid() before executing it, to make sure the process doesn't inadvertently acquire a controlling terminal if for some reason it opens a tty device.

syslogd typically does open tty devices to send user messages, so that could be why your book says syslogd is not a session leader: it describes the behaviour of syslogd implementations from a time when the O_NOCTTY flag didn't exist (though the flag has existed at least since the late 80s). The other approach is for syslogd to make sure all the files it opens are opened with O_NOCTTY, which is probably what your rsyslogd (nothing to do with systemd) does.
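To make the double fork concrete, here is a minimal sketch of the classic daemonisation sequence the answer describes (error handling omitted; this is illustrative, not rsyslogd's actual code):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    if (fork() > 0) exit(0);  /* parent exits; child is not a group leader   */
    setsid();                 /* child becomes session leader, with no ctty  */
    if (fork() > 0) exit(0);  /* grandchild is NOT a session leader, so it   */
                              /* can never acquire a controlling terminal    */

    /* ...so opening a tty is harmless; O_NOCTTY makes it explicit anyway */
    int fd = open("/dev/tty1", O_WRONLY | O_NOCTTY);
    if (fd >= 0) {
        write(fd, "message to a logged-in user\n", 28);
        close(fd);
    }
    return 0;
}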
_codereview.43096
I'm using jQuery's .get() method to load content from external HTML files into my main index file. I created 25 different functions, function videoLoad1(), function videoLoad2() etc, for the 25 videos that I'm loading separately when the corresponding link is clicked. The content that is being swapped out in my HTML index file is the video src and video details. I'm new to jQuery and have been trying to find a more practical way of writing the code.

HTML - links for the onclick function:

<div class="row">
  <div id="movie_list" class="movie_sec-1 pull-left">
    <h6><a href="javascript:void(0);" id="cars_hb">cars.com: be honest</a></h6>
    <h6><a href="javascript:void(0);" id="cars_t">cars.com: tag</a></h6>
  </div>
</div>

HTML for video inclusion:

<div id="kz-video" style="display: none;"></div>

External HTML file that is being loaded via $.get() (file name is cars_bh.html):

<div class="video-info">
  <h1>Video</h1>
  <h4>Now Playing</h4>
  <h4>cars.com: be honest</h4>
</div>

<!-- Video -->
<video id="kz-player" width="100%" height="100%" controls preload>
  <source src="vid/CarscomBeHonest.mp4" type="video/mp4;">
  <source src="vid/CarscomBeHonest.webmhd.webm" type="video/webm;">
  <source src="vid/CarscomBeHonest.oggtheora.ogv" type="video/ogg;">
</video>

jQuery function:

function videoLoad2() {
  $("a#cars_hb").click(function(e) {
    e.preventDefault();
    e.stopPropagation();
    $.get('cars_hb.html', function(data) {
      $('#kz-video').html(data);
    });
  });

  // close overlay/hide content
  $('.close').click(function(e) {
    e.stopPropagation();
    $('#kz-player')[0].pause();
    $('#kz-video').hide();
    $('.close').fadeOut(800);
    $('#video_overlay').fadeOut(800);
  });
}
Loading content from external HTML files
javascript;beginner;jquery;html;ajax
null
_softwareengineering.143633
I've got an SVN repository of a PHP site, and the last programmer didn't use source control properly. As a result, only code from after I started working here is in the repo. I have a bunch of old copies of the full code base saved as file backups, but they're not in source control. I don't know why most of the copies were saved, nor do I have any reasonable way to tie them to version numbers. I do have the dates the backups were made; all backups have proper file system timestamps.

Due to upgrades to the frameworks and database drivers involved, the old code is quite defunct; it no longer works on the current server configuration. However, the previous programmers had some unique logic, so I hate to be completely without old copies when trying to work out what on earth they were doing.

Should I keep this stuff in version control? How? Wall off the old code in separate tags/branches?
Should I add old code into my repository?
version control;svn
If you have reasonable timestamps on each of these working versions, then perhaps you can check them in one at a time until you get to the most recent version of the codebase, your latest changes.

The problem with the tag approach that everybody else suggests is that you will lose changeset history on each file, and this will make comparisons between older versions of the code more difficult.
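A sketch of that sequential import, assuming the backups live in dated directories (all paths and dates here are invented):

#!/bin/sh
# Commit each dated backup as one revision, oldest first.
WC=/path/to/working-copy   # hypothetical checked-out working copy
for snap in /backups/2009-03-01 /backups/2010-07-15 /backups/2011-01-20; do
    rsync -a --delete --exclude .svn "$snap"/ "$WC"/
    (cd "$WC" && svn add --force . && svn commit -m "Import backup $snap")
done

Note that files deleted between snapshots still need to be scheduled for removal (svn status will show them as missing); svn add --force only picks up additions and changes.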
_webmaster.68098
I set up a 301 redirect on my root that points to a new domain. When I type the old domain URL into the browser, the 301 works just fine. When I do a keyword search for my site and click on the result link, however, it still takes me to my old site. Below is the code:

<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http://(.*)?exampleA\.com [NC]
RewriteRule ^(.*) http://exampleB.com/ [L,R=301]
</IfModule>

I am redirecting from http://www.exampleA.com to http://exampleB.com.
301 redirect doesn't work in SERP
seo;301 redirect;regular expression
Use HTTP_HOST, not HTTP_REFERER:

<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_HOST} ^(www\.)?olddomain\.com$ [NC]
RewriteRule ^(.*) http://newdomain.com/ [L,R=301]
</IfModule>
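To verify the redirect outside the browser (and rule out browser caching), a quick check along these lines should show the 301 status and the Location header; the URL is a placeholder:

curl -sI http://www.exampleA.com/ | sed -n '1p;/^Location:/p'
# HTTP/1.1 301 Moved Permanently
# Location: http://exampleB.com/

Also note that links shown in search results will keep pointing at the old URLs until the search engine recrawls the pages and processes the redirect.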
_unix.35789
I just opened a legacy shell script (written in old ksh88 on Solaris) and found the following repeated all throughout the code:

[ -f $myfile ] && \rm -f $myfile

The escaping backslash strikes me as odd. I know it is deliberate, since this kind of (apparently useless) escaping is repeated all throughout the code. The original author is long gone, and I cannot get in touch with him to ask. Is this simply a funny idiosyncrasy of the author, or is it some sort of deprecated good practice that made sense at some point in time? Or is it maybe actually the recommended way of doing things and I'm missing something altogether?
Why escape trivial characters in shell script?
shell script;quoting;ksh
This is used for alias protection:

$ ls
.bashrc  a  b
$ alias ls
alias ls='ls $LS_OPTIONS'
$ \ls
a  b
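The backslash is only one way to suppress alias expansion; quoting any part of the command name, or using the command builtin, has the same effect:

\rm -f file         # backslash escape
'rm' -f file        # quoting also bypasses the alias
command rm -f file  # skips aliases and shell functions

So the script's author was most likely guarding against an alias such as rm='rm -i' interfering with the script in any session where one is defined.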
_webapps.74778
I created a form that has a list of items to be selected. I also created a spreadsheet that has a couple of sheets corresponding to the elements in the form. I want users to select a category and fill in data on the form for that category, so that the data will be saved in the particular sheet associated with that category. Is it possible to edit a particular sheet in a Google Spreadsheet based on a selection in a Google Form?
How to edit a particular sheet using Google Forms
google spreadsheets;google forms
null
_unix.317814
I had a 32GB SD card with this structure (or very close):

luis@Fresoncio:~$ sudo fdisk -l
Disk /dev/mmcblk0: 29.2 GiB, 31393316864 bytes, 61315072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xec4e4f57

Device         Boot    Start      End  Sectors  Size Id Type
/dev/mmcblk0p1              1   125000   125000   61M  c W95 FAT32 (LBA)
/dev/mmcblk0p2         125001 33292287 33167287 15.8G 83 Linux
/dev/mmcblk0p3       33292288 61315071 28022784 13.4G 83 Linux

and I transferred it (from another computer, so the devices were sda and sdb) to a 64GB SD card (I chose the wrong one) via dd (dcfldd, in fact):

# dcfldd if=/dev/sda of=/dev/sdb bs=1M

So now, my new 64GB SD card is:

luis@Fresoncio:~$ sudo fdisk -l
Disk /dev/mmcblk0: 59.5 GiB, 63864569856 bytes, 124735488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xec4e4f57

Device         Boot    Start      End  Sectors  Size Id Type
/dev/mmcblk0p1              1   125000   125000   61M  c W95 FAT32 (LBA)
/dev/mmcblk0p2         125001 33292287 33167287 15.8G 83 Linux
/dev/mmcblk0p3       33292288 61315071 28022784 13.4G 83 Linux

Well, no problem for now, but I no longer have the source 32GB SD card; only the 64GB SD card remains, and I would like to transfer it back to an empty 32GB SD card.

In this case, I assume I cannot simply use dd or dcfldd. What should I do? Can I use dd or dcfldd at all? What could happen when the transfer arrives at the 32GB boundary of the destination SD card (data integrity problems)?

Further notes:

- Any other method to clone the SD cards would be OK, but I have a problem: this is an SD card boot drive for a Raspberry Pi 2, and cloning via partimage or gparted did not work (the Raspberry does not boot). Only dd seems to do the cloning without flaws.
- There is a similar question, but I think it is not the same.
- The dcfldd tool has the same syntax and behavior as dd; it just gives more info (progress, etc.).
Using DD to copy only half (part) of removable device
dd
Assuming sda is your 64GB source SD card and sdb is your 32GB destination SD card, you can limit dd to copying only the required sectors with:

dd if=/dev/sda of=/dev/sdb bs=512 count=61315072
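That count is the end sector of the last partition plus one (61315071 + 1 in the fdisk listing above). If you want to compute it rather than read it off, a sketch along these lines works for the output format shown (it assumes no '*' boot-flag column shifting the fields):

# Highest partition end sector, taken from fdisk's sector listing
END=$(sudo fdisk -l /dev/sda | awk '$1 ~ /^\/dev\// {print $3}' | sort -n | tail -n 1)

# Copy everything up to and including that sector
sudo dd if=/dev/sda of=/dev/sdb bs=512 count=$((END + 1))

A larger block size is fine too, as long as bs * count still covers the same byte range.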
_codereview.33328
I want to move a vertical line segment across a plane, or put another way, sweep the plane from left to right. The figure illustrates how the segment moves along the X-axis: when x1 >= X, I return to the beginning and translate the segment to the next level up, and so on, until the segment reaches Y. You can think of it as how a scanner works. A line is (x1, y1, x2, y2). When the x1 or x2 coordinate becomes greater than or equal to the right border (X), I increase y1 and y2 to the next level, and so on, until y1 becomes greater than or equal to Y.

Algorithm:

#define STEP 9
#define Y 20
#define X 30

void moveLine(int, int, int, int);

int main() {
    moveLine(5, 0, 5, 10);
    return 0;
}

void moveLine(int x1, int y1, int x2, int y2) {
    // Reached the upper border (Y)
    if (y1 >= Y) {
        return;
    }
    // cout << y1 << " " << y2 << endl;

    // Return to the beginning and move up one level
    if (x1 >= X) {
        y1 += 10;
        y2 += 10;
        // Reinitialize x1 and x2
        x1 = -4;
        x2 = -4;
    }
    // Sweep again
    moveLine(x1 + STEP, y1, x2 + STEP, y2);
}

Explanation: I start with x1 = 5, x2 = 5, y1 = 0 and y2 = 10, then add STEP (9) to x1 and x2 until x1 >= X, at which point I reinitialize x1 and x2 and increase y1 and y2 by 10, until y1 reaches Y.

I wrote this piece of code and it works OK, but I wanted your advice on whether the recursive function is reasonable. Thank you.
Move Line across Plane
c++;algorithm;recursion;computational geometry
Your code has a lot of magic numbers. I can't tell what the significance of the 20 that is compared to y1 is, or of the 30 that is compared to x1, or whether the two thresholds are related (if I changed the number compared to y1, should I also change the number compared to x1?). I also can't tell at a glance whether each comparison was meant to use > or >=, or whether what's there is a bug. If you created named constants with descriptive names, probably all of those things would be obvious.

I think you have a misconception about how function arguments work. Despite having the same name and the same type, the int x1 in main() and the int x1 in moveLine() are different, and the x1 in each recursive call of moveLine() is different. The line moveLine(x1 += 9, y1, x2 += 9, y2); would work, but writing it as moveLine(x1 + 9, y1, x2 + 9, y2); is less confusing and would also work. Also, this code:

int x1 = 5;
int y1 = 0;
int x2 = 5;
int y2 = 10;
moveLine(x1, y1, x2, y2);

could be written like this:

moveLine(5, 0, 5, 10);

(I'm not suggesting you keep the 4-int style; Barry's suggestion is good. I'm just pointing out what looks like a misconception and trying to clear it up.)
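A quick sketch of the named-constants suggestion (the names are illustrative; the behaviour mirrors the posted code, including its -4 reset value):

#include <iostream>

constexpr int kStepX       = 9;   // horizontal sweep step
constexpr int kStepY       = 10;  // vertical jump between sweeps
constexpr int kRightBorder = 30;  // was the magic X
constexpr int kTopBorder   = 20;  // was the magic Y
constexpr int kResetX      = -4;  // x value after wrapping to a new row

void moveLine(int x1, int y1, int x2, int y2) {
    if (y1 >= kTopBorder) {            // swept past the top: done
        return;
    }
    std::cout << x1 << ',' << y1 << ' ' << x2 << ',' << y2 << '\n';
    if (x1 >= kRightBorder) {          // wrap to the start of the next row
        y1 += kStepY;
        y2 += kStepY;
        x1 = kResetX;
        x2 = kResetX;
    }
    moveLine(x1 + kStepX, y1, x2 + kStepX, y2);
}

int main() {
    moveLine(5, 0, 5, 10);
    return 0;
}

Now each threshold has a single, named definition, and changing one of them can't silently break the other.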
_softwareengineering.13798
After reading the latest CodeProject newsletter, I came across this article on bitwise operations. It makes for interesting reading, and I can certainly see the benefit of checking whether an integer is even or odd, but testing whether the n-th bit is set? What could the advantages of that possibly be?
What are the advantages of using bitwise operations?
operators;bitwise operators
Bitwise operations are absolutely essential when programming hardware registers in embedded systems. For example, every processor that I have ever used has one or more registers (usually at a specific memory address) that control whether an interrupt is enabled or disabled. To allow an interrupt to fire, the usual process is to set the enable bit for that one interrupt type while, most importantly, not modifying any of the other bits in the register.

When an interrupt fires, it typically sets a bit in a status register so that a single service routine can determine the precise reason for the interrupt. Testing the individual bits allows a fast decode of the interrupt source.

In many embedded systems the total RAM available may be 64, 128 or 256 bytes (that is bytes, not kilobytes or megabytes). In this environment it is common to use one byte to store multiple data items and boolean flags, and then use bit operations to set and read them.

I have, for a number of years, been working with a satellite communications system where the message payload is 10.5 bytes. To make the best use of this data packet, the information must be packed into the data block without leaving any unused bits between the fields. This means making extensive use of bitwise and shift operators to take the information values and pack them into the payload being transmitted.
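A short sketch of the set/clear/test idioms described above; the register addresses and bit positions are invented for illustration:

#include <stdint.h>

/* Hypothetical memory-mapped interrupt registers */
#define INT_ENABLE (*(volatile uint8_t *)0x4000)
#define INT_STATUS (*(volatile uint8_t *)0x4001)

#define UART_RX_BIT 3
#define TIMER_BIT   5

void enable_uart_irq(void) {
    INT_ENABLE |= (uint8_t)(1u << UART_RX_BIT);  /* set one bit, leave the rest alone */
}

void disable_timer_irq(void) {
    INT_ENABLE &= (uint8_t)~(1u << TIMER_BIT);   /* clear one bit, leave the rest alone */
}

int uart_irq_pending(void) {
    return (INT_STATUS & (1u << UART_RX_BIT)) != 0;  /* test the n-th bit */
}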
_webapps.47442
I've got two work email accounts from different companies, both running on separate domains on Google Apps for Business, so they both use the Gmail infrastructure. At the moment I'm managing them in Mac Mail, which is OK, but I'd like to move to Gmail in the browser to view, manage and compose emails. In Mac Mail I can switch between my two accounts easily (screenshot omitted here). Is there something similar inside Gmail that I can set up?
Managing multiple accounts from Gmail
gmail;email;google apps;google apps email
What I do is this:

1. I set up the ability to send from many e-mail accounts.
2. I forward the other accounts to my Gmail account.
3. I set up label filters for those accounts.
4. I select the label to view mail from the other accounts.
_softwareengineering.16936
And why do most programmers quit coding and become managers so early compared to other fields? You can work as a construction worker, a lawyer, a researcher, a writer, an architect or a movie director for decades, if not your whole life, without quitting the favorite thing you are doing. And yet most programmers think that being a manager, or better, a non-coding manager, is a cool thing and the ultimate goal of a software career. It's so rare to see an ordinary coder in his 40s or 50s (or 60s!) in a typical software company. Why? (And I am for being a coder for as long as you love doing it.)
Do non-manager coders in their forties or older sometimes regret not being managers?
management;coding
null
_unix.280996
How do I find files that have ~, *, and other special characters in the name? For example,

find . -name *\*

should match any characters and then *, but it matches nothing; how can I get the command to correctly match the files?
Finding special characters in name
find;regular expression
Implementations of find vary, but they should all handle character classes in wildcards (POSIX.2, section 3.13):

find . -name '*[~*]*'

If newline is among your special characters, you may need to work out how to get your shell to pass it to find. In Bash, you can use

find . -name $'*[\t \n]*'

to show files containing whitespace, for example. A simpler method, if supported, is to use a character class:

find . -name '*[[:space:]]*'
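A quick demonstration of the first form (file names invented):

$ touch plain 'back~up' 'star*name'
$ find . -name '*[~*]*'
./back~up
./star*name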
_codereview.154574
I've been working on a simple MVC PHP framework on and off for the last nearly 12 months, so I thought that I'd put it out there.The project began as a way to mentor and teach a junior (or a more junior than me) PHP developer, to show programming principles, OO methodologies and MVC-type file structures. The framework wasn't finished to my satisfaction as I moved on to a new employer in early 2017, but luckily I was allowed to continue its development, so I thought I'd put it out there.It's on GitHub here - I've called it FrameWork.php as I hope it to be as simple and stupid as I am. A quickly written quick start guide is here.I hope to finish the HTML builder so I can move on to making a MySQL/PDO query builder for the model core.The file structure is as follows:../application ./controller ./core ./library ./model ./view../public_html ./css ./jsThe entry point in the framework is via the Index.php class (i.e., mysite.local/home) as follows:<?phpuse Application\Core\Framework\Core;require_once(serverPath('/core/FrameworkCore.php'));require_once(serverPath('/core/GlobalHelpers.php'));header('X-Content-Type-Options: nosniff');header('X-XSS-Protection: 1');header('Strict-Transport-Security: max-age=31536000; includeSubDomains');session_set_cookie_params(0, '/', getDomain(host(), ($https = isHttps()) ), $https, true);if(session_id() == ){ session_start();}## filepath ../public_htmlclass Index{ protected $core; public $timeZone = Europe/London; /** * This will initiate the core to load the view * according to the uri path, one may also * change the default timezone for the project * by altering the public $timeZone string above * for a list of valid timezones, see: * http://php.net/manual/en/timezones.php * * @param na * @author sbebbington * @date 24 Jan 2017 - 09:49:15 * @version 0.0.2 * @return * @todo */ public function __construct(){ $this->core = new \Application\Core\Framework\Core(); setTimeZone($this->timeZone); $this->core->loadPage(); }}/** * This will correctly route to the application * directory on your server * * @param string * @author Shaun * @date 2 Feb 2017 - 13:01:13 * @version 0.0.2 * @return string * @todo */function serverPath(string $routeTo = ''){ $_x = str_replace(\\, /, dirname(__FILE__)) . '/application'; $_x = str_replace(public_html/, , $_x); $_x .= $routeTo; return str_replace(//, /, $_x);}// Creates new instance and therefore initiates the controllers, models and views etc...$page = new Index();This will then call the core, which works out which view (page) to show, it works like this:<?phpnamespace Application\Core\Framework;require_once(serverPath('/core/HtmlBuilder.php'));## filepath ../application/coreclass Core extends \Application\Core\Framework\HtmlBuilder{ public $segment; public $host; public $allowedSegments = array( 'home', ); public $pageController = array( 'home' => HomeController, ); public $partial; public $controller; public $title; public $description; public $serverPath; public $root; public $flash; public $filePath; public $uriPath; public $http; private $errorReporting = array( http://framework.php.local, https://framework.php.local ); private $allowedFileExts = array( 'htm', 'html', 'asp', 'aspx', 'js', 'php', 'phtml', ); public $canonical = ''; public function __construct(){ parent::__construct(); $this->host = isHttps() ? 
https:// : http://; $this->host .= host(); if(in_array($this->host, $this->errorReporting)){ error_reporting(-1); ini_set('display_errors', '1'); } $this->uriPath = ''; $page = array_filter(explode('/', $_SERVER['REQUEST_URI']), 'strlen'); if(count($page)>1){ foreach($page as $key => $data){ if($key != count($page)){ $this->uriPath .= {$data}/; } } } $this->segment = !empty($page) ? strtolower($page[count($page)]) : ; if(!empty($_GET)){ $segment = explode('?', $this->segment); $this->segment = $segment[0]; if(isset($segment[1]) && strlen($segment[1])>0){ $_GET = array(); $get = explode(&, $segment[1]); foreach($get as $data){ $_data = explode(=, $data); $_GET[$_data[0]] = urldecode($_data[1]); } } } if(strpos($this->segment, .) > 0){ $segments = explode(., $this->segment); if(in_array($segments[count($segments)-1], $this->allowedFileExts)){ $this->segment = $segments[count($segments)-2]; $this->canonical = <link rel=\canonical\ href=\{$this->host}/{$this->segment}\> . PHP_EOL; } } $this->serverPath = serverPath(); $this->root = str_replace(\\, /, $_SERVER['DOCUMENT_ROOT']); $this->controller = new \stdClass(); $this->flash = new \stdClass(); $this->partial = array( 'header' => serverPath(/view/partial/header.phtml), 'footer' => serverPath(/view/partial/footer.phtml), ); } /** * Sets the meta page titles in the views * * @param string * @author sbebbington * @date 2 Feb 2017 - 13:04:10 * @version 0.0.2 * @return string * @todo */ public function setTitle(string $page = ''){ $titles = array( 'home' => Example FrameWork.php skeleton site, ); return $titles[{$page}]; } /** * Sets the meta page descriptions in the views * * @param string * @author sbebbington * @date 2 Feb 2017 - 13:04:47 * @version 0.0.1 * @return string * @todo */ public function setDescription(string $page = ''){ $descriptions = array( 'home' => The Skeleton, ); return $descriptions[{$page}]; } /** * Bug fixed edition of the using ZF type view variables * * @param * @author sbebbington * @date 2 Feb 2017 - 13:05:41 * @version 0.0.3a * @return * @todo */ public function setView($instance, string $masterKey = ''){ foreach($instance as $key => $data){ if($masterKey == ''){ $this->$key = $data; }else{ $this->$masterKey->$key = $data; } } } /** * This will load the view and related controllers * It has an added exception for Jamie's admin html * template. This version should now allow Zend-alike * view variables - so if you set an object in a page * controller as $this->view->objName, you can use * $this->objName in the PHP/HTML view or something. 
* * @param na * @author sbebbington * @date 6 Feb 2017 - 11:15:14 * @version 0.0.3 * @return void * @todo */ public function loadPage(){ if($this->segment == ){ $this->segment = 'home'; } if(in_array($this->segment, $this->allowedSegments) == false || !file_exists(serverPath(/view/{$this->uriPath}{$this->segment}.phtml))){ $this->setView(array(_404Error => 1)); $this->title = '404 error - page not found, please try again'; $this->description = 'There\'s a Skeleton in the Sandbox'; require_once(serverPath(/view/404.phtml)); exit; } if(in_array($this->segment, $this->allowedSegments) == true){ $this->title = $this->setTitle($this->segment); $this->description = $this->setDescription($this->segment); foreach($this->pageController as $instance => $controller){ if($this->segment == $instance){ require_once(serverPath(/controller/{$controller}.php)); $this->controller->$instance = new $controller(); if(isset($this->controller->$instance->view)){ $this->setView($this->controller->$instance->view); $this->controller->$instance->view = null; } } } if(isset($_SESSION['flashMessage']) && !empty($_SESSION['flashMessage'])){ $this->setView($_SESSION['flashMessage'], flash); } require_once(serverPath(/view/{$this->uriPath}{$this->segment}.phtml)); } }}Each view will load a controller instance, routed in the ../application/controller directory, by default there is a ControllerCore class and a page controller class (assumes HomeController.php by default), as follows:<?phpnamespace Application\Controller;use Application\Library;require_once(serverPath('/library/Library.php'));## filepath ../application/controllerclass ControllerCore{ public $post; public $view; public $lib; public $sql; public $host; public function __construct(){ $this->lib = new \Application\Library\Library(); if(!isset($_SESSION['flashMessage'])){ $_SESSION['flashMessage'] = array(); } if(empty($this->post) || $this->post == null){ $this->post = array(); } if(!empty($_POST)){ $this->setPost(); } $this->view = new \stdClass(); $this->host = host(); } /** * Sanatizes posted data * * @param Array * @author Linden && Shaun * @date 7 Oct 2016 14:54:10 * @version 0.0.3 * @return na * @todo */ public function setPost(){ foreach($_POST as $key => $data){ $this->post[$key] = is_string($data) ? trim($data): $data; } } /** * This should empty the super global $_POST and the controller $this->post * * @param na * @author Shaun * @date 16 Jun 2016 11:25:04 * @version 0.0.1 * @return array * @todo */ public function emptyPost(){ $_POST = array(); $this->post = $_POST; } /** * Clears PHP session cookies * * @param na * @author Shaun * @date 14 Sep 2016 14:29:23 * @version 0.0.2 * @return * @todo */ public function emptySession(){ if(session_id() != ){ session_destroy(); } $this->session = null; } /** * Sets flash messages (recommend using string for value param) * * @param string, string | int | boolean * @author Shaun * @date 14 Sep 2016 09:48:53 * @version 0.0.1 * @return na * @todo */ public function setFlashMessage($key, $value){ $_SESSION['flashMessage'][$key] = $value; } /** * Checks if flash message exists * * @param string * @author Shaun * @date 14 Sep 2016 10:18:04 * @version 0.0.1 * @return boolean * @todo */ public function getFlashMessage($key){ return isset($_SESSION['flashMessage'][$key]) ? 
true : false; } /** * Clears flash messages * * @param string|array * @author Shaun * @date 15 Sep 2016 10:40:49 * @version 0.0.1 * @return na * @todo */ public function clearFlashMessage($key){ if(is_array($key)){ foreach($key as $_key => $_data){ $_SESSION['flashMessage'][$_data] = null; } }else{ $_SESSION['flashMessage'][$key] = null; } }}And the home controller:<?phprequire_once (serverPath('/controller/ControllerCore.php'));require_once (serverPath('/model/HomeModel.php'));## filepath ../application/controllerclass HomeController extends \Application\Controller\ControllerCore{ public function __construct(){ parent::__construct(); $this->sql = new HomeModel(); foreach($this->sql->getView() as $key => $data){ $key = $this->lib->convertSnakeCase($key); $this->view->$key = htmlspecialchars_decode($data); } // if(isset($this->post['submit'])){ // Do something with the posted data here, but for now // we'll simply see the contents of the posted data // $this->lib->debug($this->post, true); // } }}Any feedback is appreciated.
Simple OO MVC PHP framework
php;object oriented;mvc
null
_unix.237501
I was upgrading a package when I came across the following:

$ sudo aptitude install openjdk-9-jre=9~b87-1 openjdk-9-jre-headless=9~b87-1 -y
The following packages will be upgraded:
  openjdk-9-jre openjdk-9-jre-headless
2 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 30.9 MB of archives. After unpacking 175 MB will be freed.
Get: 1 http://httpredir.debian.org/debian/ experimental/main openjdk-9-jre amd64 9~b87-1 [39.4 kB]
Get: 2 http://httpredir.debian.org/debian/ experimental/main openjdk-9-jre amd64 9~b87-1 [39.4 kB]
Get: 3 http://httpredir.debian.org/debian/ experimental/main openjdk-9-jre-headless amd64 9~b87-1 [30.8 MB]
Fetched 30.9 MB in 21min 2s (24.4 kB/s)
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Reading changelogs... Done
apt-listchanges: Do you want to continue? [Y/n] y
apt-listchanges: Mailing root: apt-listchanges: changelogs for think-debian
(Reading database ... 375935 files and directories currently installed.)
Preparing to unpack .../openjdk-9-jre_9~b87-1_amd64.deb ...
Unpacking openjdk-9-jre:amd64 (9~b87-1) over (9~b80-2) ...
Preparing to unpack .../openjdk-9-jre-headless_9~b87-1_amd64.deb ...
Unpacking openjdk-9-jre-headless:amd64 (9~b87-1) over (9~b80-2) ...
Processing triggers for gnome-menus (3.13.3-6) ...
Processing triggers for mime-support (3.59) ...
Processing triggers for desktop-file-utils (0.22-1) ...
Setting up openjdk-9-jre-headless:amd64 (9~b87-1) ...
update-binfmts: warning: /usr/share/binfmts/jar: no executable /usr/bin/jexec found, but continuing anyway as you request
Setting up openjdk-9-jre:amd64 (9~b87-1) ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.

Any ideas why /usr/bin/jexec is missing?
the case of the missing /usr/bin/jexec
debian;java
null
_unix.173829
I have libnotify-bin installed, yet when I type

notify-send hello

nothing happens. No error, no message. I have the necessary packages:

# dpkg -l | grep notify
ii  libnotify-bin
ii  libnotify4:amd64

What could be the problem? I am using LXDE on Debian Wheezy.

UPDATE: While the solution suggested by @Anthon works (install notification-daemon), I am not sure whether that is the best solution. I was expecting it to be enough to have libnotify-bin, libnotify4 and dbus installed. Indeed, on my other machine, notify-send works without notification-daemon. Can somebody please clarify whether notification-daemon is necessary or not?
notify-send not working on Debian Wheezy
debian;lxde;libnotify
Your notification daemon has probably not been started. Try to start it by hand with:

/usr/lib/notification-daemon/notification-daemon

If you have a properly started daemon, you might have hit this bug, which causes the daemon to crash.
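To answer the follow-up: notify-send only talks to whatever implements the org.freedesktop.Notifications service on the session bus, and some desktop environments ship their own implementation, which would explain why the other machine works without the notification-daemon package. You can check whether anything is answering on that bus name with something like:

dbus-send --session --print-reply \
  --dest=org.freedesktop.Notifications \
  /org/freedesktop/Notifications \
  org.freedesktop.Notifications.GetServerInformation

If a daemon (or a D-Bus-activatable service) is present, this returns its name, vendor and version; if not, the call errors out, which matches the silent failure you saw.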