Q: Define functions in structs

Can we define functions in structs in the C programming language?

A: No, as functions are not data. But you can define function pointers inside a struct.

```c
struct foo {
    int a;
    void (*workwithit)(struct foo *);
};
```

A: No, you cannot have functions inside a struct in a C program. I wrote a single piece of code and saved it once as a .c file and once as a .cpp file. The .cpp file compiles and works as expected, but the .c file doesn't even compile. Here is the code for your reference. Save it as .cpp and run it, then save the same code as .c and compile it. You will get compilation errors.

```cpp
#include <stdio.h>

struct C {
    void Test(int value) {
        static int var = 0;
        if (var == value)
            printf("var == value\n");
        else
            printf("var != value\n");
        var = value;
    }
};

int main() {
    C c1;
    C c2;
    c1.Test(100);
    c2.Test(100);
    int ii;
    scanf("%d", &ii);
}
```

A: No. You can have function pointers in structs, but that's as close as you'll get.

A: You can't really declare stuff inside of a struct in C. This is not C++ or any other OO language where an object encapsulates some kind of scope. C structs are very simple objects; a struct is just syntactic sugar for managing a piece of memory. When you create a new struct "instance", struct A a;, the compiler just reserves stack space according to its size, and when you then do a.member, the compiler knows that member's offset, so it jumps to &a + offset of that member. Those offsets are usually not just sums of the sizes of preceding members, because the compiler usually adds some padding into the structure to align it more nicely in memory. Hope it helped a bit. You obviously expect slightly too much from C structures :-)

A: I came to this post because I was looking for a way to teach a bit of an object-oriented "style" of programming in C for a very simple data structures course. I did not want to teach C++ because I did not want to keep explaining its more advanced features.

But I wanted to explore how one might implement the OO pattern used in Python in a low-level language/run-time. By explaining what is going on in C, students might better understand the Python OO run-time patterns. So I went a bit beyond the first answer above and adapted some of the patterns from https://stackoverflow.com/a/12642862/1994792, but in a way that would elucidate OO run-time patterns a bit.

First I made the "class" with a "constructor" in point.c:

```c
#include <stdio.h>
#include <stdlib.h>

struct point {
    int x;
    int y;
    void (*print)(const struct point*);
    void (*del)(const struct point*);
};

void point_print(const struct point* self) {
    printf("x=%d\n", self->x);
    printf("y=%d\n", self->y);
}

void point_del(const struct point* self) {
    free((void *)self);
}

struct point * point_new(int x, int y) {
    struct point *p = malloc(sizeof(*p));
    p->x = x;
    p->y = y;
    p->print = point_print;
    p->del = point_del;
    return p;
}
```

Then I imported the class, constructed an object from the class, used the object, then destructed the object in main.c:

```c
#include "point.c"

int main(void) {
    struct point * p3 = point_new(4, 5);
    p3->print(p3);
    p3->del(p3);
}
```

It feels very "Pythonic in C".

A: No, you can't. Structs can only contain variables; storing function pointers inside the struct can give you the desired result.

A: No, you can't define functions inside structures in C programs. However, if the extension of your file is .cpp (that is not C), you can have member functions like classes; the default access modifier of these functions will be public (unlike a class). Read these links for more information on structures: a good link, another good link, one more good link. As a convention in C++, classes are used for storing both functions and variables, and structures are used only for storing information (i.e. data).

A: No, but you can in a C++ struct!

A: You can in C++ instead:

```cpp
// Example program
#include <iostream>
#include <string>

struct Node {
    int data;
    Node *prev, *next;

    Node(int x, Node* prev = NULL, Node* next = NULL) {
        this->data = x;
        this->prev = prev;
        this->next = next;
    }

    void print_list() {
        Node* temp = this; // temp is created on the function call stack
        while (temp != NULL) {
            std::cout << temp->data << " ";
            temp = temp->next;
        }
    }

    Node* insert_left(int x) {
        Node* temp = new Node(x, this->prev, this);
        this->prev = temp;
        return temp; // list gets a new head
    }

    Node* insert_right(int x) {
        Node* temp = new Node(x, this, this->next);
        this->next = temp;
        return this; // this would still be head
    }
};

int main() {
    Node* head = new Node(-1);    // -1
    head = head->insert_left(0);  // 0 -1
    head = head->insert_right(1); // 0 1 -1
    head = head->insert_left(2);  // 2 0 1 -1
    head->print_list();
}
```

A: You can use only function pointers in C. Assign the address of a real function to that pointer after struct initialization, for example:

```c
#include <stdio.h>
#include <stdlib.h>

struct unit {
    int result;
    int (*add)(int x, int y);
};

int sumFunc(int x, int y) { return x + y; }

void *unitInit() {
    struct unit *ptr = (struct unit*) malloc(sizeof(struct unit));
    ptr->add = &sumFunc;
    return ptr;
}

int main(int argc, char **argv) {
    struct unit *U = unitInit();
    U->result = U->add(5, 10);
    printf("Result is %i\n", U->result);
    free(U);
    return 0;
}
```

You can find a good example of using function pointers in a struct here: https://github.com/AlexanderAgd/CLIST — check the header and then the clist.c file.
{ "language": "en", "url": "https://stackoverflow.com/questions/9871119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Moving from Gulp/Grunt to Webpack for an AngularJS 1.x project

I have a two-year-old AngularJS 1.x project, which is built with Gulp for development and Grunt for production (don't ask me why; I don't know either). The build process is basically:

* Compile all scss files into one css file.
* Merge all JS files into one JS file. We are not using any import mechanism. Each file is basically one of AngularJS's controllers, components, services or filters. Something like this: angular.module("myApp").controller("myCtrl", function() {//...});
* Merge all html templates into one JS file. Each template is hardcoded with $templateCache.
* Move assets like images and fonts into the build folder.
* Move third-party libraries into the build folder.

Now I want to switch to webpack for this project. I want to incrementally modernize it, but the first step would be just building it with webpack with a process similar to the above. I would like to keep the code base as much the same as possible. I don't want to add import for all the JS files yet; there are too many. I would also like to add a babel-loader. I have some basic concepts about webpack, but have never really customized the configuration myself. Would anyone please give me some pointers, like which loaders/plugins I would need, etc.? Thanks!

A: My process for such a transition was gradual; I had a similar Grunt configuration. These are my notes and steps, in order, for transitioning to a webpack stack.

The longest step was refactoring the code so it uses ES6 imports/exports (yeah, I know you have said that is not a phase you want to make, but it is important in order to avoid hacks). At the end each file looks like this:

```javascript
// my-component.js
class MyComponentController {
    // ...
}

export const MyComponent = {
    bindings: {...},
    controller: MyComponentController,
    template: `...`
}
```

```javascript
// main.js
import {MyComponent} from 'my-component'

angular.module('my-module').component('myComponent', MyComponent);
```

In order to avoid going over all the files and changing them at once, we renamed all js files, adding a suffix of .not_module.js. Those files were the old, untouched files.

We added grunt-webpack as a step to the old build system (based on Grunt). It processed via webpack all the new files (those without the .not_module.js suffix). That step produced only one bundle containing all the files that were converted already; that file we added to the concat step.

One by one, each converted file gradually moved from being processed by Grunt tasks to being processed by webpack. You can take that webpack.config as a reference. Good luck.
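As a rough starting point for the first step the asker describes (build everything with webpack before adopting imports), a minimal configuration might look like the sketch below. The entry path, output folder, and the exact loader set are assumptions for illustration, not taken from the question.

```javascript
// webpack.config.js — a minimal sketch; paths and loader choices are
// assumptions, not the asker's actual setup.
const path = require('path');

module.exports = {
  // A single entry file that require()s the rest of the app.
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'build'),
    filename: 'app.bundle.js'
  },
  module: {
    rules: [
      // Transpile JS through Babel.
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
      // Compile scss; style-loader injects it at runtime.
      { test: /\.scss$/, use: ['style-loader', 'css-loader', 'sass-loader'] },
      // Pull html templates into the bundle.
      { test: /\.html$/, use: 'html-loader' },
      // Copy assets such as images and fonts.
      { test: /\.(png|jpg|woff2?|ttf)$/, use: 'file-loader' }
    ]
  }
};
```

In practice you would likely swap style-loader for MiniCssExtractPlugin to emit a single css file, mirroring the old Gulp step, and use something like angular-templatecache only until the templates are migrated.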
{ "language": "en", "url": "https://stackoverflow.com/questions/50083914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to launch an application from a link in a webpage on Mac

I am trying to have an application open when a link is clicked in a webpage. Let's say if I clicked a link that says "iTunes" it would open iTunes. I am not sure if this is similar, but if you go to an album link for iTunes that opens in the web browser, it will open iTunes to that page. Remember, this is for Mac. Also, the other tutorials that I have tried did not work. Other tutorials used JavaScript, so I am guessing that I will need to use this as well.

A: You cannot simply open an arbitrary application, but you can, for example, tell an application that has registered a protocol to perform an action. You can tell iTunes to open an album with the itmss:// protocol. Instant messengers may have registered e.g. aim://, skype:// or similar so that you can directly open a chat window. You can create a link with mailto:[email protected] to tell the default mail application to start a new mail to this address. But you cannot start an application directly by default.

If you have control over the system where the webpage is opened — e.g. an in-house launching service or something like that — you could think about creating a default protocol handler for launching applications.
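For the protocol-handler approach described above, the links are ordinary anchors and need no JavaScript. The addresses below are placeholders, not real targets:

```html
<!-- Each link targets a URL scheme registered by an application. -->
<a href="itmss://itunes.apple.com/album/...">Open this album in iTunes</a>
<a href="mailto:[email protected]">Send mail</a>
```

If the scheme is not registered on the visitor's machine, the browser will typically show an error or silently do nothing, which is why this only works for applications that install a handler.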
{ "language": "en", "url": "https://stackoverflow.com/questions/14415012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Sum and count by two different GROUP BYs in SQL Server

I want to print the total order and transaction amounts and counts for each account. When I run the query below I get the following error:

    Column 'orders.order_amount' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

How can I fix this?

```sql
SELECT trans.account_id, SUM(trans.amount), COUNT(trans.account_id),
       orders.order_amount, orders.order_count
FROM trans
FULL JOIN (SELECT [order].account_id,
                  SUM([order].amount) as order_amount,
                  COUNT([order].account_id) as order_count
           FROM [order]
           GROUP BY [order].account_id) AS orders
  ON (trans.account_id = orders.account_id)
GROUP BY trans.account_id
ORDER BY trans.account_id;
```

Trans table:

```
trans_id  account_id  type  amount  balance  account  date
-----------------------------------------------------------------
1         1           a     88      213      75       1995-03-24
7         1           b     156     66       75       1995-02-25
```

Order table:

```
order_id  account_id  bank_to  amount
-------------------------------------
1         1           a        88
7         1           b        156
```

A: You need to add orders.order_amount and orders.order_count to the GROUP BY:

```sql
select trans.account_id, SUM(trans.amount), COUNT(trans.account_id),
       orders.order_amount, orders.order_count
from trans
FULL JOIN (
    select [order].account_id,
           SUM([order].amount) as order_amount,
           COUNT([order].account_id) as order_count
    from [order]
    GROUP BY [order].account_id
) as orders ON (trans.account_id = orders.account_id)
group by trans.account_id, orders.order_amount, orders.order_count
order by trans.account_id;
```

A: You would be using a full join if you thought that accounts could be in either table, but not necessarily in both. If so, you should fix your query. I would recommend aggregating both tables before joining, and then fixing the ORDER BY conditions:

```sql
select coalesce(t.account_id, o.account_id),
       t.trans_amount, t.trans_count,
       o.order_amount, o.order_count
from (select t.account_id,
             sum(t.amount) as trans_amount,
             count(*) as trans_count
      from trans t
      group by t.account_id
     ) t
full join (select o.account_id,
                  sum(o.amount) as order_amount,
                  count(o.account_id) as order_count
           from [order] o
           group by o.account_id
          ) o
  on t.account_id = o.account_id
order by coalesce(t.account_id, o.account_id);
```

Note that a full join is often not needed. However, if it is needed, you should write the query correctly for it.
{ "language": "en", "url": "https://stackoverflow.com/questions/60766287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Separate nameserver for certain hostnames

On my home LAN I have a number of computers. They are connected to a router which connects me to the internet and acts as DHCP server and nameserver on the LAN. Some of my computers are Ubuntu machines, and I'm trying to set up a good /etc/resolv.conf on them. The problem is that my router is really lousy as a nameserver (very, very slow), so I prefer to use an external name server such as Google's nameserver or OpenDNS. But these public nameservers obviously do not know the IP addresses of my local machines.

So here's my question: is there any way to configure /etc/resolv.conf so that certain hostnames are resolved by one nameserver and other hostnames are resolved by another nameserver? (My guess is no, and in that case I'll have to set up fixed IP addresses. But I hope to avoid that.)

A: Install a local forwarding nameserver on the computers that need to make a choice between the local nameserver and the Internet nameserver. Use Unbound, because it's lightweight and easy to configure for this task. Put this into the Unbound config:

```
stub-zone:
    name: "internal.example.com"
    stub-host: internal.nameserver.ip.address

forward-zone:
    name: "."
    forward-host: internet.nameserver.ip.address
```

Put nameserver 127.0.0.1 into /etc/resolv.conf so that applications on the local host will use this instance of Unbound. Now when you try to resolve myhost.internal.example.com it will send the query to internal.nameserver.ip.address, and when you try to resolve www.google.com it will send the query to internet.nameserver.ip.address.

Hopefully all of your local hosts are grouped under a single local domain (internal.example.com above). Unfortunately, it is all too likely that your cheap router+DHCP-server+DNS-server sticks all of the hostnames it knows right into its synthesized root zone. If that's the case then you'll have to list them all one by one, as follows:

```
stub-zone:
    name: "hostname1"
    stub-host: internal.nameserver.ip.address

stub-zone:
    name: "hostname2"
    stub-host: internal.nameserver.ip.address

stub-zone:
    name: "hostname3"
    stub-host: internal.nameserver.ip.address

forward-zone:
    name: "."
    forward-host: internet.nameserver.ip.address
```

The problem with this is of course that now you have a bunch of Unbound instances on different machines that you have to configure and set up. You could avoid this by having just one host on your LAN provide this service and act as a recursive nameserver for all the other hosts, but if you're going to do that then you might as well make that host the DHCP server and authoritative nameserver for the local hosts too, and get rid of your slow embedded DHCP+DNS server in the first place.
{ "language": "en", "url": "https://stackoverflow.com/questions/11350868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How could I detect when an unauthenticated user leaves a public channel in laravel-websockets?

I have a Laravel/Angular application that doesn't require any authentication to use (an online Scrabble game). Whenever a guest connects with a username (the only constraint is that the username is unique), I register him in the database and connect him to a public WebSocket channel that contains other players (for matchmaking). I need to detect whenever a player suddenly leaves the game (e.g. loses connection), and I need to do that server side. Is that possible?

I have managed to trigger an event whenever a user leaves a channel using ArrayChannelManager->removeFromAllChannels. I can get the socket ID out of that event, but I have no idea which user triggered it.

```php
class LeaveEvent extends ArrayChannelManager
{
    public function removeFromAllChannels(ConnectionInterface $connection)
    {
        error_log($connection->socketId);
    }
}
```

Note: I cannot use a presence channel because I have no authentication.
{ "language": "en", "url": "https://stackoverflow.com/questions/73019310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Behaviour of the goto statement

```c
#include <stdio.h>

int main()
{
    int i;
    goto l;
    for (i = 0; i < 5; i++)
    {
l:      printf("Hi\n");
    }
    return 0;
}
```

The above code prints Hi three times. I don't have any idea how that happens; please explain it. If I reduce the value 5 to 3, then Hi is printed only once.

A: You are doing the comparison i < 5 and incrementing i in the for loop without initializing i first, causing undefined behavior (the value of i at that point is a garbage value). If you try this instead:

```c
#include <stdio.h>

int main()
{
    int i = 0;
    goto l;
    for (i = 0; i < 5; i++)
l:      printf("Hi\n");
    return 0;
}
```

it will have defined behavior and this program will print Hi 5 times. And to see what is going on, try:

```c
#include <stdio.h>

int main()
{
    int i = 4;
    goto l;
    for (i = 0; i < 5; i++)
l:      printf("Hi\n");
    return 0;
}
```

and Hi will be printed only once, because once you enter the for loop, i == 4. So basically you are jumping over the line for (i = 0; i < 5; i++), failing to initialize i, and thus getting undefined behavior for the reasons explained above.

Using goto is not always bad, but when it's used to control the flow of a program it makes the code hard to follow and understand, and it generally won't be necessary for that. But it's surely useful in some situations; for example, consider this case:

```c
FILE *file;
int *x;
int *y;

file = fopen("/path/to/some/file", "r");
if (file == NULL)
    return IO_ERROR_CODE;
x = malloc(SomeSize * sizeof(int));
if (x == NULL)
{
    fclose(file);
    return MEMORY_EXHAUST_ERROR_CODE;
}
y = malloc(SomeSize * sizeof(int));
if (y == NULL)
{
    free(x);
    fclose(file);
    return MEMORY_EXHAUST_ERROR_CODE;
}
return SUCCESS_CODE;
```

You have to add more and more cleanup code at each function exit point. But you could do this instead:

```c
FILE *file = NULL;
int *x = NULL;
int *y = NULL;

file = fopen("/path/to/some/file", "r");
if (file == NULL)
    return SOME_ERROR_CODE;
x = malloc(SomeSize * sizeof(int));
if (x == NULL)
    goto abort;
y = malloc(SomeSize * sizeof(int));
if (y == NULL)
    goto abort;
return SUCCESS_CODE;

abort:
if (x != NULL)
    free(x);
if (y != NULL)
    free(y);
if (file != NULL)
    fclose(file);
return MEMORY_EXHAUST_ERROR_CODE;
```

Of course, in your example there is absolutely no reason to use goto.

A: Your code exhibits undefined behavior. This is because when execution of the program reaches the goto statement, it jumps inside the body of the for loop, thus skipping the initialization part of the for loop. Thus i is uninitialized and contains a "garbage value".

As a side note: using gotos is considered bad practice, as it makes your code much harder to read and maintain.
{ "language": "en", "url": "https://stackoverflow.com/questions/27842921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-4" }
Q: Qt3d Slow rendering of thousands of curves From qt3d, I would like to pick an arbitrary number of face normals drawn at each facet of a mesh. Once the number of normals (drawn as individual lines each with its own VBO and attached QObjectPicker) exceeds a couple of thousand instances, the rendering speed slows to a crawl. I feel like there must be a better way to do this that preserves rendering speed. Any suggestions?
{ "language": "en", "url": "https://stackoverflow.com/questions/53808577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is std::string() the same as std::string("")?

Suppose I have a function that returns a std::string. In certain circumstances the string is not populated with anything. Is returning std::string() exactly equivalent to std::string("")? For example, would c_str() or data() give you the same character sequence? Perhaps std::string("") invokes a short-string optimisation but std::string() does no such thing until some characters are added. Does anyone know if the current standard (C++11) says anything definitive?

A: There's certainly no member function on std::string that would allow you to distinguish between a std::string() and a std::string(""). I defer to a philosopher or logician to verify whether that satisfies any definition of equality.

As for the standard itself, it states that std::string() will leave the capacity unspecified, but std::string("") will define a capacity of at least zero. So the internal state of the object could be different. On my particular STL implementation (MSVC2012), std::string() calls a function called _Tidy, whereas std::string("") calls _Tidy and assign (the base-class initialisation is identical). assign is a standard std::string function.

So could they be different? Yes. Can you tell if they are different? No.

A: If we look at the effects of each constructor in the C++ standard, section § 21.4.2 [string.cons]:

For explicit basic_string(const Allocator& a = Allocator()):

* data(): a non-null pointer that is copyable and can have 0 added to it
* size(): 0
* capacity(): an unspecified value

For basic_string(const charT* s, const Allocator& a = Allocator()):

* data(): points at the first element of an allocated copy of the array whose first element is pointed at by s
* size(): traits::length(s)
* capacity(): a value at least as large as size()

So strictly speaking, the two constructs are not identical: in particular, the capacity of the constructed std::string objects might be different. In practice, it's unlikely that this possible difference will have any observable effect on your code.

A: Yes, they are both the same. The default constructor of std::string prepares an empty string, the same as "".

explicit basic_string( const Allocator& alloc = Allocator() ); — Default constructor. Constructs an empty string (zero size and unspecified capacity).

basic_string( const CharT* s, const Allocator& alloc = Allocator() ); — Constructs the string with the contents initialized with a copy of the null-terminated character string pointed to by s. The length of the string is determined by the first null character. The behavior is undefined if s does not point at an array of at least Traits::length(s)+1 elements of CharT.

A: The only difference is that std::string() knows at compile time that it will produce a zero-length string, while std::string("") has to use strlen or something similar to determine at run time the length of the string it will construct. Therefore the default constructor should be faster.
{ "language": "en", "url": "https://stackoverflow.com/questions/26866438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Can we create two columns with the same name but different case in a database?

Can we create two (or more) columns in Postgres/MySQL with the same name but a different case? That is, does case sensitivity matter in column names? For example, can a table contain two columns with the names COL and col? The linked post, "can we create two columns with same name in database?", only talks about identical names, not about case sensitivity.

A: In standard SQL, quoted identifiers are case sensitive, and Postgres follows that standard. So the following:

```sql
select column_one as "COL",
       column_two as "col"
from ...
```

Or as part of a table:

```sql
create table dont_do_this
(
  "COL" integer,
  "col" integer
);
```

Those are two different names, as they become case sensitive due to the use of the double quotes. But I would strongly advise against doing that. It will most probably create confusion and problems down the line. I think this should work with MySQL as well, but as it traditionally doesn't care about following the SQL standard, I don't know.

A: It is not possible to create columns with the same name — with unquoted identifiers, case sensitivity does not make them distinct. Example in MySQL:

```sql
CREATE TABLE test(
  id int,
  id int
);

CREATE TABLE test1(
  id int,
  ID int
);
```

Output in MySQL:

    Schema Error: Error: ER_DUP_FIELDNAME: Duplicate column name 'id'

Output in PostgreSQL:

    Schema Error: error: column "id" specified more than once

SELECT statement:

```sql
SELECT id as "id", ID1 as "id" FROM test;
```

Output:

    id
    2

Case-sensitive SELECT:

```sql
SELECT id as "id", ID1 as "ID" FROM test;
```

Output:

    id  ID
    1   2
{ "language": "en", "url": "https://stackoverflow.com/questions/74795411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Fluent NHibernate: mapping a component with many-to-many and extra fields

I have been spinning around in the same place trying to figure out this issue with a many-to-many table with extra columns. I am swallowing my pride and asking more experienced Fluent NHibernate experts if you can help out here.

I have been trying to use a component for a many-to-many table, but I can't seem to find where to point the component to cause a join on a specific column. I guess as long as I populate the completion information object, the presence and usage of a component is not a requirement for me.

This is my simplified schema:

```
User
...................
Id
Name

Task
...................
Id
Name

Job
...................
Id
Name

JobTask
...................
JobId
TaskId
CompletedByUserId
CompletionDate
CompletionNotes
```

Then I have objects like so:

```csharp
public class Job
{
    long Id { get; set; }
    IList<Task> Tasks { get; set; }
}

public class Task
{
    long Id { get; set; }
    string Name { get; set; }
    CompletionInfo CompletionInfo { get; set; }
}

public class CompletionInfo
{
    User User { get; set; }
    DateTime CompletionDate { get; set; }
}
```

Can you help with your ideas on how to implement this in Fluent NHibernate? I had this mapping before on the JobMap and it would work, but only for the many-to-many columns (TaskId, JobId). I need the extra information related to that relationship (dateCompleted, userCompleted, etc.):

```csharp
HasManyToMany<Task>(job => job.Tasks)
    .Table(ManyToManyTable)
    .ParentKeyColumn("JobId")
    .ChildKeyColumn("TaskId")
    .Not.LazyLoad()
    .Cascade.Delete();
```

In order to simplify things with the many-to-many, I thought to create an object to represent the relationship and encapsulate a task, like the one below, and I would add a proxy in the JobTask to modify the actual Task properties. But that is not going anywhere. This must be something common out there; I am very surprised there is not much regarding this issue. Naturally I would have loved a way to extend HasManyToMany, but I am not sure how, hence this post.

```csharp
public class JobTask : Entity<long, JobTask>
{
    public virtual Job ParentJob { get; set; }
    public virtual Task Task { get; set; }

    public virtual TaskCompletionDetails CompletionInformation
    {
        get
        {
            if (this.Task == null) return null;
            return Task.CompletionInformation;
        }
        set
        {
            if (this.Task == null) return;
            this.Task.CompletionInformation = value;
        }
    }

    public virtual string CompletionNotes
    {
        get
        {
            if (this.Task == null || this.Task.CompletionInformation == null) return null;
            return Task.CompletionInformation.Notes;
        }
        set
        {
            if (this.Task == null) return;
            this.Task.CompletionInformation.Notes = value;
        }
    }

    public virtual DateTime? CompletionDate
    {
        get
        {
            if (this.Task == null || this.Task.CompletionInformation == null) return null;
            return Task.CompletionInformation.Date;
        }
        set
        {
            if (this.Task == null) return;
            this.Task.CompletionInformation.Date = value;
        }
    }

    public virtual IUser User
    {
        get
        {
            if (this.Task == null || this.Task.CompletionInformation == null) return null;
            return Task.CompletionInformation.User;
        }
        set
        {
            if (this.Task == null || value == null) return;
            if (this.Task.CompletionInformation != null)
                this.Task.CompletionInformation.User = value;
        }
    }
}
```

This would be the map direction I have started for it:

```csharp
public class JobTaskMap : ClassMap<JobTask>
{
    private const string ModelTable = "[JobTasks]";

    public JobTaskMap()
    {
        Table(ModelTable);

        Id(jobTask => jobTask.Id)
            .Column("Id")
            .GeneratedBy.Identity();

        References<Job>(jobTask => jobTask.ParentJob)
            .Column("JobId")
            .Fetch.Join();

        Component<Task>(
            jobTask => (Task)jobTask.Task, // This is the task
            comp =>
            {
                // This is the task completion information
                comp.Component<TaskCompletionDetails>(
                    task => (TaskCompletionDetails)task.CompletionInformation,
                    compInfo =>
                    {
                        compInfo.Map(info => info.Date)
                            .Column("CompletionDate")
                            .Nullable();
                        compInfo.Map(info => info.Notes)
                            .Column("CompletionNotes")
                            .Nullable();
                        compInfo.References<User>(info => info.User)
                            .Column("CompletedByUserId")
                            .Nullable()
                            .Fetch.Join();
                    });
            });
    }
}
```

These are some other related readings I have followed without success:

* Reference: http://wiki.fluentnhibernate.org/Fluent_mapping
* Fluent NHibernate Many-to-Many mapping with extra column

A: Here's a way which depends on the id generation strategy used. If Identity is used then this won't do (NHibernate discourages the use of Identity for various reasons, at least), but with every strategy that inserts the id itself it would work:

```csharp
class JobMap : ClassMap<Job>
{
    public JobMap()
    {
        Id(x => x.Id);
        HasMany(x => x.Tasks)
            .KeyColumn("JobId");
    }
}

class TaskMap : ClassMap<Task>
{
    public TaskMap()
    {
        Table("JobTask");
        Id(x => x.Id, "TaskId");
        Component(x => x.CompletionInfo, c =>
        {
            c.Map(x => x.CompletionDate);
            c.References(x => x.User, "CompletedByUserId");
        });
        Join("Task", join =>
        {
            join.Map(x => x.Name, "Name");
        });
    }
}
```
{ "language": "en", "url": "https://stackoverflow.com/questions/8066335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WPF Web (XBAP) using a WCF service throws a System.Net.WebPermission exception

I have an XBAP application running with partial trust on my local machine's IIS 7.5. I published the WCF service to the same directory as the XBAP. After jumping through some hoops I could get it working through Visual Studio for debugging purposes, but I can't seem to get it to work on an IIS server after it's published. I'm running on .NET 4.0.

Contents of the error:

    Request for the permission of type System.Net.WebPermission, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 failed.

Update: So I started over, making a new WCF service, a new XBAP, and a new site to deploy to. After verifying every step of the way, I got it to work. So I started to migrate my previous apps, one by one, over to the new site to discover what the problem was. I narrowed it down to my original WCF service, but after making it identical to the working one, it still has the WebPermission error. So I still don't know what was causing the problem, other than that redoing it fixed it.

A: Well, looking through the web.config file I noticed that I had commented out the following line:

```xml
<serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
```

I uncommented it and no more error.
{ "language": "en", "url": "https://stackoverflow.com/questions/3762520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Having a path in a module relative to the file calling the module

Let's say I have the current file structure:

* app\modules\module.py
* app\modules\somefile.txt
* app\scripts\script.py
* app\main.py

scripts\script.py:

```python
import sys
sys.path.append("..\\")
from modules.module import module
[...]
```

main.py:

```python
import sys
from modules.module import module
[...]
```

modules\module.py:

```python
[...]
fileToRead = "somefile.txt"
```

The problem is:

* If my module is imported from main.py, the path to somefile.txt should be "modules\\somefile.txt".
* If my module is imported from script.py, the path to somefile.txt should be "..\\modules\\somefile.txt".

I don't want to use an absolute path, as I want my app folder to be movable. I thought of a path relative to the root folder, but I don't know if that's a clean solution, and I don't want to pollute all my scripts with redundant stuff. Is there a clean way to deal with this?

A: I'm not sure what all you're doing, but since somefile.txt is in the same folder as module.py, you could make the path to it relative to the module by using its predefined __file__ attribute:

```python
import os

fileToRead = os.path.join(os.path.dirname(__file__), "somefile.txt")
```
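A small self-contained demonstration of the idea in the answer above. The file and directory names here are invented for the example; the point is that a path built from the module's own location works regardless of which script imported the module or what the current working directory is.

```python
import os
import tempfile

# Simulate a package directory containing both the module and its data file.
pkg = tempfile.mkdtemp()
data_path = os.path.join(pkg, "somefile.txt")
with open(data_path, "w") as f:
    f.write("hello")

# Inside module.py, __file__ would be the module's own path; we emulate it here.
module_file = os.path.join(pkg, "module.py")
file_to_read = os.path.join(os.path.dirname(module_file), "somefile.txt")

# The resolved path is correct no matter what the working directory is.
with open(file_to_read) as f:
    content = f.read()
print(content)  # hello
```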
{ "language": "en", "url": "https://stackoverflow.com/questions/30508146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Java Session Flash - Play Framework 2.6.2 I am currently having an issue getting a flash value from a Java controller to a Scala view. As a simplification of the code base and an example of the controller: public Result viewAccount(final String accountNo) { return ok( ViewAccount.render( accountNo)); } public Result adjustment(final String accountNo) { try { // do things flash("success", "Successfully updated account."); } catch (NumberFormatException e) { flash("error", "Could not update the account"); } return redirect(routes.AccountsController.viewAccount(accountNo)); } I then try to use flash.containsKey("success") within the view. @if(flash.containsKey("success")) { <div class="alert alert-success" role="alert"> <b>Success:</b> @flash.get("success") <button type="button" class="close" data-dismiss="alert"><span>&times;</span></button> </div> } I have attempted various other ways of doing it with no success: @if(flash.get("success") != null) { <div class="alert alert-success" role="alert"> <b>Success:</b> @flash.get("success") <button type="button" class="close" data-dismiss="alert"><span>&times;</span></button> </div> } From what I have found online, I potentially need to return the request param inside the viewAccount method. Then add @()(implicit request: Http.Request) (not really sure how this is added). After that I can call @if(request.flash.get("success") != null) { <div class="alert alert-success" role="alert"> <b>Success:</b> @flash.get("success") <button type="button" class="close" data-dismiss="alert"><span>&times;</span></button> </div> }
{ "language": "en", "url": "https://stackoverflow.com/questions/70243260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unity, editor-time script, On GameObject Added to Scene Event Say you have a trivial prefab, "Box", which we'll say is nothing more than a standard meter cube. * *1 - The prefab Box is in your Project panel *2 - Drag it to the Scene *3 - Obviously it now appears in the Hierarchy panel also, and probably selected and shown in Inspector To be clear, game is NOT Play when you do this, you're only in ordinary Editor mode. Is it possible to make a script (an "Editor script"?) so that, * *when you do "1" and "2" above, (again this is in Editor mode, not during a game) *when 3 happens, we can affect the new Box item in the scene *So, simple example: we will set the Z position to "2" always, no matter where you drop it. In short: Editor code so that every time you drag a prefab P to the scene, it sets the position z to 2.0. Is this possible in Unity? I know nothing of "editor scripts". It seems very obvious this should be possible. A: You can add a custom window editor which implements OnHierarchyChange to handle all the changes in the hierarchy window. This script must be inside the Editor folder. To make it work automatically make sure you have this window opened first. 
using System.Linq; using UnityEditor; using UnityEngine; public class HierarchyMonitorWindow : EditorWindow { [MenuItem("Window/Hierarchy Monitor")] static void CreateWindow() { EditorWindow.GetWindow<HierarchyMonitorWindow>(); } void OnHierarchyChange() { var addedObjects = Resources.FindObjectsOfTypeAll<MyScript>() .Where(x => x.isAdded < 2); foreach (var item in addedObjects) { //if (item.isAdded == 0) early setup if (item.isAdded == 1) { // do setup here, // will happen just after user releases mouse // will only happen once Vector3 p = item.transform.position; item.transform.position = new Vector3(p.x, 2f, p.z); } // finish with this: item.isAdded++; } } } I attached the following script to the box: public class MyScript : MonoBehaviour { public int isAdded { get; set; } } Note that OnHierarchyChange is called twice (once when you start dragging the box onto the scene, and once when you release the mouse button) so isAdded is defined as an int to enable its comparison with 2. So you can also have initialization logic when x.isAdded < 1 A: You could try this: using UnityEngine; #if UNITY_EDITOR using UnityEditor; #endif [ExecuteInEditMode] public class PrintAwake : MonoBehaviour { #if UNITY_EDITOR void Awake() // Start() also works perfectly { if(!EditorApplication.isPlaying) Debug.Log("Editor causes this Awake"); } #endif } See https://docs.unity3d.com/ScriptReference/ExecuteInEditMode.html Analysis: * *This does in fact work! *One problem! It happens when the object starts to exist, so, when you are dragging it to the scene, but before you let go. So in fact, if you specifically want to adjust the position in some way (snapping to a grid - whatever) it is not possible using this technique!
So, for example, this will work perfectly: using UnityEngine; #if UNITY_EDITOR using UnityEditor; #endif [ExecuteInEditMode] public class PrintAwake : MonoBehaviour { #if UNITY_EDITOR void Start() { if(!EditorApplication.isPlaying) { Debug.Log("Editor causes this START!!"); RandomSpinSetup(); } } #endif private void RandomSpinSetup() { float r = Random.Range(3,8) * 10f; transform.eulerAngles = new Vector3(0f, r, 0f); name = "Cube: " + r + "°"; } } Note that this works correctly, that is to say it does "not run" when you actually Play the game. If you hit "Play" it won't then again set random spins :) Great stuff A: I had a similar issue - I wanted to do some stuff after an object was dragged into the scene (or the scene was opened with the object already in it). But in my case the gameobject was disabled, so I couldn't use either Awake or Start. Solved via a kind of dirty trick - I just used the constructor for my MonoBehaviour class. Unity blocks any attempts to use most of the API inside MonoBehaviour constructors, but we can just wait for some time, for example via EditorApplication.delayCall. So the code looks like this: public class ExampleClass: MonoBehaviour { //... // some runtime logic //... #if UNITY_EDITOR //Constructor ExampleClass() { EditorApplication.delayCall += DoSomeStuffForDisabledObjectAfterCreation; } void DoSomeStuffForDisabledObjectAfterCreation() { if (!isActiveAndEnabled) { //Some useful stuff } } #endif } A: MonoBehaviours have a Reset method that only gets called in Editor mode, whenever you reset or first instantiate an object. public class Example : MonoBehaviour { void Reset() { transform.position = new Vector3(transform.position.x, 2.22f, transform.position.z); Debug.Log("Hi ..."); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/52070070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to Custom Position the Jquery dialog box On click of a button , i am opening a jquery dialog box as shown in this code <div class="editButton">Chat</div> <div id="dialogContent" title="This is a dialog box"> <textarea rows="14" cols="40" name="comment"></textarea> </div> $(document).on("click", ".editButton", function() { $("#dialogContent").dialog("open"); }); $(document).ready(function() { $(document).ready(function() { $("#dialogContent").dialog({ autoOpen: false, modal: false, resizable: false, height: 'auto', width: 'auto', draggable: true, closeOnEscape: true, show: { effect: "blind", duration: 1000 }, hide: { effect: "explode", duration: 1000 }, dialogClass: 'no-close success-dialog', buttons: { 'Submit': function() {}, 'Cancel': function() { $(this).dialog('close'); } } }); }); }); Could you please tell me how to make this dialog box appear just above the chat div ?? This is my jsfidle https://jsfiddle.net/g4sgoe45/3/ A: From the jQuery UI documentation, you can use the position option but it defaults to center (as shown in your example). Default: { my: "center", at: "center", of: window } Specifies where the dialog should be displayed when opened. The dialog will handle collisions such that as much of the dialog is visible as possible. The following code should suffice by positioning it to the right bottom with an offset for your editButton height, add this to your options: draggable: false, position: { my: "right bottom", at: "right bottom-44" }, See this updated fiddle. A: What's the purpose of the jQuery UI dialog? If you strip it out and use plain HTML/CSS the whole thing becomes a lot easier to manage. If that chat button has to move for some reason, or becomes scrollable, you're back to "stuck wrestling with this thing that's generally meant to take over the page and sit in the center"! Here's a sample of another way. You probably want to run it in "Full Page" so the dialog doesn't get truncated. 
/* JS only to toggle a class on the container */ $(document).on("click", ".editButton, .chat-cancel", toggleChat); function toggleChat(){ var $chatWindow = $('.chat-area'); $('#comment').val(''); $chatWindow.toggleClass('visible'); } /* Terrible CSS but hopefully you'll get the idea. 1), 2) and 3) are the main bits to take away. The rest is me faffing around. */ /* 1) By default, #dialogContent is hidden. */ #dialogContent { height: 0px; margin-bottom: 30px; overflow: hidden; position: relative; /* use CSS transitions to show it */ transition: height 0.5s ease-in-out; } /* 2) When someone clicks "chat" we add the class 'visible' to it*/ .visible #dialogContent { display: block; height: 270px; } .chat-area, .chat-area * { box-sizing: border-box; } /* 3) Fix the entire container to the bottom right and then position the needed elements within it */ .chat-area { bottom: 0; right: 10px; position: fixed; width: 200px; font-family: helvetica, arial, sans-serif; padding: 10px; background-color: green; } #comment { font-family: helvetica, sans-serif; font-size: 12px; margin-bottom: 4px; padding: 4px; } .editButton { background: green; bottom: 0; color: white; cursor: pointer; height: 30px; right: 0; padding: 0 10px 7px 10px; position: absolute; width: 100%; } .visible .editButton:before { content: "Close "; width: auto; } .chat-area h2 { color: #fff; display: inline-block; font-size: 15px; margin: 0 0 4px 0; } header .chat-cancel { color: #fff; display: inline-block; float: right; cursor: pointer; } button { background: #3498db; background-image: linear-gradient(to bottom, #999999, #7a7a7a); border: 0 none; border-radius: 5px; color: #ffffff; cursor: pointer; font-family: Arial; font-size: 15px; padding: 5px 10px; text-decoration: none; } button:hover { background: #3cb0fd; background-image: linear-gradient(to bottom, #555555, #444444); text-decoration: none; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <div class="chat-area"> 
<div id="dialogContent" title="This is a dialog box"> <header> <h2> Chat Dialog Header</h2> <span class="chat-cancel">X</span> </header> <textarea rows="14" cols="40" name="comment" id="comment"></textarea> <button class="chat-cancel"> Cancel </button> <button class="chat-save" type="submit"> Save </button> </div> <div class="editButton">Chat</div> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/34432340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Scss not loaded with Vite The build with Vite and Vue works like a charm (so the path is correct). However, it does not work with Storybook. Here is my config: vite.config.js import { defineConfig } from 'vite' import { resolve } from 'path' import vue from '@vitejs/plugin-vue' // https://vitejs.dev/config/ export default defineConfig({ css: { preprocessorOptions: { scss: { additionalData: `@import "./src/css/global.scss";` }, }, }, }) .storybook/main.js: module.exports = { "stories": [ "../src/**/*.stories.mdx", "../src/**/*.stories.@(js|jsx|ts|tsx)" ], "addons": [ "@storybook/addon-links", "@storybook/addon-essentials", "@storybook/preset-scss" ], "framework": "@storybook/vue3", "core": { "builder": "storybook-builder-vite" } } I am using storybook-builder-vite as vite is used to build the project too. package.json "devDependencies": { "@storybook/addon-actions": "^6.4.18", "@storybook/addon-essentials": "^6.4.18", "@storybook/addon-links": "^6.4.18", "@storybook/preset-scss": "^1.0.3", "@storybook/vue3": "^6.4.18", "sass": "^1.49.7", "sass-loader": "^12.4.0", "storybook-builder-vite": "^0.1.15", "typescript": "^4.4.4", "vite": "^2.7.2", "vue-i18n": "^8.27.0", "vue-loader": "^16.8.3", "vue-tsc": "^0.29.8" } Any ideas? A: The preprocessorOptions.*.additionalData parameter will only work if there is already loaded/imported css to prepend to, so basically use both options: import directly into your main.ts file for the bulk, and any other preprocessing can be defined in the vite.config.js file. The documentation at https://vitejs.dev/config/#css-preprocessoroptions unfortunately does NOT explain this, which did tarnish a perfectly good Saturday night A: I have had exactly the same problem. Although many vite vue 3 boilerplates do it the same way you and I do, strangely it didn't work; maybe it's a certain vite version that might have bugs.
The only thing that worked for me was the import in main.ts: import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import router from './router' //import your scss here import "@/assets/scss/style.scss"; const app = createApp(App) app.use(createPinia()) app.use(router) app.mount('#app') My versions: sass: 1.49.8 (you only need the sass package here and not the loader) vue: 3.2.29 A: Here's what worked for me: If you haven't, install the Storybook builder for Vite (@storybook/builder-vite) npm install @storybook/builder-vite --save-dev or yarn add --dev @storybook/builder-vite Then in the .storybook/main.js file, const { mergeConfig } = require('vite'); module.exports = { async viteFinal(config, { configType }) { // return the customized config return mergeConfig(config, { css: { preprocessorOptions: { scss: { // Next line will prepend the import in all you scss files as you did with your vite.config.js file additionalData: `@import "./src/styles/main";`, }, }, }, }); }, // ... other options here }; It is almost the same thing as you did, the trick here, is to add the same css.preprocessorOptions.scss object to the storybook configuration via mergeConfig in the main.js file in storybook. Based on the @storybook/builder-vite documentation
{ "language": "en", "url": "https://stackoverflow.com/questions/71064299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to use SSO for PWA? Is it possible to use SSO for PWAs for standalone display? I can login to my app through email but if I try twitter/fb/google SSO, it opens to a new browser since they are out of the scope. This makes it pretty useless... Is the only solution to just wrap the app? A: The short answer is, SSO-inside-an-installed-PWA is broken on Chrome for Desktop as of Chrome 70 (November 2018). The good news is, the W3C web.manifest standard has changed, and will no longer require browsers to open out-of-scope navigation in a separate window. This will fix the case of installed PWAs with single-sign-on authentication. This will be fixed in Chrome 71 on the desktop (due December 2018), and is already fixed on Chrome for Android. Here's the update to the W3C web.manifest spec that details the change. In short, the spec says browsers must not block out-of-scope navigation inside an installed PWA. Instead, the spec encourages browsers to show some prominent UI (perhaps a bar at the top) notifying the user that the navigation is out-of-scope. "Unlike previous versions of this specification, user agents are no longer required or allowed to block off-scope navigations, or open them in a new top-level browsing context. This practice broke a lot of sites that navigate to a URL on another origin..."
{ "language": "en", "url": "https://stackoverflow.com/questions/50538147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: I was creating a sticky notes website using HTML, CSS, JavaScript. I made the website but the problem is I want to store the data in a MongoDB database I was creating a sticky notes website using HTML, CSS, and JavaScript. I made the website, but the problem is I want to store the data in a MongoDB database. I'm able to save the text content in an array, but I don't know whether their styles will save or not. Also, I don't know how to store the array in a MongoDB database. Please help me store the data in the database. var random_margin = ["-5px", "1px", "5px", "10px", "7px"]; var random_colors = ["#c2ff3d","#ff3de8","#3dc2ff","#04e022","#bc83e6","#ebb328", "#e94652", "#ff9337", "#fad301", "#75e8b7", "#d3d75d", "#31cfe4", "#a6c7f7", "#abcaf8", "#ff7db9", "#323c44", "#fa8893", "#f9baa6", "#f9e75b", "#c6ee8f", "#99cf72", "#a2ecf0", "#7be4fa", "#e1e5f6", "#fec7dc", "#cbd4d7", "#f2efd7", "#ecf368", "#cff4f5", "#a1c2f6", "#fbcbd8", "#cbe1f2", "#f3d9e2", "#e9f2f8"]; var random_degree = ["rotate(3deg)", "rotate(1deg)", "rotate(-1deg)", "rotate(-3deg)", "rotate(-5deg)", "rotate(-8deg)"]; var index = 0; window.onload = document.querySelector("#user_input").select(); document.querySelector("#add_note").addEventListener("click", () => { document.querySelector("#modal").style.display = "block"; document.querySelector("#user_input").select(); }); document.querySelector("#hide").addEventListener("click", () => { document.querySelector("#modal").style.display = "none"; document.querySelector("#user_input").value=''; }); document.querySelector("#user_input").addEventListener('keydown', (event) => { if(event.key === 'Enter'){ if(document.querySelector("#user_input").value != 0){ const text = document.querySelector("#user_input"); createStickyNote(text.value); document.querySelector("#modal").style.display = "none"; document.querySelector("#user_input").value=''; } else { alert('Fill The Note'); document.querySelector("#user_input").value=''; }; }; }); createStickyNote = (text) => { let
note = document.createElement("div"); let details = document.createElement("div"); let noteText = document.createElement("h1"); note.className = "note"; details.className = "details"; noteText.textContent = text; details.appendChild(noteText); note.appendChild(details); if(index > random_colors.length - 1) index = 0; note.setAttribute("style", `margin:${random_margin[Math.floor(Math.random() * random_margin.length)]}; background-color:${random_colors[index++]}; transform:${random_degree[Math.floor(Math.random() * random_degree.length)]}`); note.addEventListener("dblclick", () => { note.remove(); }) document.querySelector("#all_notes").appendChild(note); } <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.min.css"> <link rel="shortcut icon" type="image/png" href="notes.png" /> <link rel="stylesheet" href="/stylesheets/sn_style.css"> <title>Sticky Notes</title> </head> <body> <div id="modal"> <div id="inner_modal"> <textarea placeholder="Note..." id="user_input" maxlength="160"></textarea> <i class="far fa-times-circle" id="hide"></i> </div> </div> <main> <header> <div class="container"> <div id="header"> <h1><a href="/">HOME</a></h1> <h1><i class="fas fa-sticky-note"></i> Sticky Notes</h1> <button id="add_note">Add Note</button> </div> </div> </header> <section> <div class="container"> <div id="all_notes"></div> </div> </section> </main> <script src="/javascripts/sn_script22.js"></script> </body> </html>
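On the "will their styles save" worry: the randomly chosen color, margin and rotation only survive a reload if they are stored explicitly alongside the text. A sketch of serializing one note for a backend that writes to MongoDB — the record shape and the /api/notes endpoint are assumptions, not part of the question:

```javascript
// Capture both the text and the randomly chosen style values when a
// note is created, so the exact look can be recreated later.
function makeNoteRecord(text, color, margin, rotation) {
  return {
    text: text,
    color: color,       // e.g. "#c2ff3d"
    margin: margin,     // e.g. "5px"
    rotation: rotation, // e.g. "rotate(3deg)"
    createdAt: new Date().toISOString(),
  };
}

// The record survives a JSON round trip, which is what a fetch() POST
// to a (hypothetical) endpoint would send for the server to insert:
//
//   fetch("/api/notes", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(makeNoteRecord(text, color, margin, deg)),
//   });
```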
{ "language": "en", "url": "https://stackoverflow.com/questions/71820168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating an Ad-hoc iOS build to test on other devices using an existing CSR / organization account Okay, I have put a day into this and am super-confused (hail Android, it's super easy). A client of mine has an organization developer account on Apple, and uses his public-private key pair to look after development and deployment of an app on test devices. I am developing another app for him, and can request his credentials (to generate the public-private key pair to sign and create an ad-hoc provisioning profile and builds / invite me as a developer of the team on Apple, and proceed). The problem is: I need to issue a CSR (Xcode -> Preferences -> Accounts -> Add Apple ID and request a public-private key), BUT if I add his account and generate a CSR request to create a key pair, would that not invalidate the CSR on his Mac and hamper his development? Is there a way to generate a separate CSR for me using his organization account so that I can create provisioning profiles for an ad-hoc build? Dang .. this is confusing.. A: You can do that by creating a new App ID specifically for you or your fellow developers and linking it with the developer profile you create. Alternatively, you can ask for the .p12 file, which can be exported from the keychain of the machine on which your client first created the CSR certificate. You would just need to add it to your Mac and download the matching profile and certificate from Apple's developer portal; finally you can install it on your device.
{ "language": "en", "url": "https://stackoverflow.com/questions/35154192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Static table view cells: Clear color not corresponding I have the below controller with the cell's, content view's, and table view's backgroundColor set to .clear, however, there is still a white background which I can't figure out what it is corresponding to. A: It is due to your table view cell colour. Select the table view cell: set its background colour as clear. A: I suggest you debug it in Xcode. To do this you can launch it on your device (or on a simulator) and click the 'Debug View Hierarchy' button. Then by Mouse Click + Dragging on the screen rectangle you can rotate all layers of your screen and see which view is causing that white background. A: This white background is due to the table view cell. Select the table view cell and from the navigator make its background clear, or you can do it in func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier: "YourCellIdentifier", for: indexPath) as! YourCellClass cell.backgroundColor = .clear return cell } this will solve your problem... :)
{ "language": "en", "url": "https://stackoverflow.com/questions/50615306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to display current date in textbox using placeholder in html <html> <body> Date: <input type="text" id="txt" placeholder=" "/> </body> </html> What should I specify inside the placeholder quotes so that I can view the current date in my textbox? A: Here are some solutions for displaying the current date in a textbox: 1. JAVASCRIPT SOLUTION <!DOCTYPE html> <html> <body onload="myFunction()"> Date: <input type="text" id="demo"/> <script> function myFunction() { document.getElementById('demo').value= Date(); } </script> </body> </html> EDIT Instead of value, in the same way by id, you could set the placeholder as you wish: document.getElementById('demo').placeholder= Date(); 2. JQUERY SOLUTION <!DOCTYPE html> <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> </head> <body onload="myFunction()"> Date: <input type="text" id="demo"/> <script> function myFunction() { $(document).ready(function(){ $('#demo').attr("placeholder", Date()); }); } </script> </body> </html> I think this will help you.
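Date() in both solutions yields the verbose default string (e.g. "Wed Mar 29 2023 ..."); if a compact dd/mm/yyyy placeholder is preferred, a small formatting helper can be used instead — a sketch, not part of the original answer:

```javascript
// Build a short dd/mm/yyyy string instead of the verbose Date() default.
function currentDateString(date) {
  const d = date || new Date();
  const pad = (n) => String(n).padStart(2, "0");
  return pad(d.getDate()) + "/" + pad(d.getMonth() + 1) + "/" + d.getFullYear();
}

// In the page, the string can then be set as the placeholder:
// document.getElementById("demo").placeholder = currentDateString();
```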
{ "language": "en", "url": "https://stackoverflow.com/questions/27670288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: return the image file from GetoutputmediaFile method Good Day I'm working in app that captuer image and display it in Gridview but when I click the button I get the following error java.lang.NullPointerException: file at android.net.Uri.fromFile(Uri.java:452) at CameraFragment.getOutputMediaFileUri(CameraFragment.java:117) at CameraFragment$1.onClick(CameraFragment.java:91) can anyone help? here my code myLists = new ArrayList<Images>(); adapter = new ImageListAdapter(getActivity(), R.layout.img_list_view, myLists); Button myButton = (Button) view.findViewById(R.id.camerabutton); myButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); fileUri = getOutputMediaFileUri(MEDIA_TYPE_IMAGE);// create a file to save the image intent.putExtra(MediaStore.EXTRA_OUTPUT, fileUri); // set the image file name startActivityForResult(intent, CAMERA_CAPTURE_IMAGE_REQUEST_CODE); // start the image capture Intent } }); myGridView = (GridView) view.findViewById(R.id.gridView); myGridView.setAdapter(adapter); public Uri getOutputMediaFileUri(int type) { return Uri.fromFile(getOutputMediaFile(type)); } public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); switch (requestCode) { case 1: if (resultCode == Activity.RESULT_OK) { Uri selectedImage = data.getData(); String[] filePathColumn = {MediaStore.Images.Media.DATA}; Cursor cursor = getActivity().getContentResolver().query(selectedImage, filePathColumn, null, null, null); cursor.moveToFirst(); int columnIndex = cursor.getColumnIndex(filePathColumn[0]); //file path of captured image String filePath = cursor.getString(columnIndex); //file path of captured image File f = new File(filePath); String filename = f.getName(); cursor.close(); //Convert file path into bitmap image using below line. 
Bitmap yourSelectedImage = BitmapFactory.decodeFile(filePath); //put bitmapimage in your imageview //newImageView.setImageBitmap(yourSelectedImage); images = new Images(); ByteArrayOutputStream stream = new ByteArrayOutputStream(); // compressing the image Bitmap image =Bitmap.createScaledBitmap(yourSelectedImage, yourSelectedImage.getWidth()/2, yourSelectedImage.getHeight()/2, true); image.compress(Bitmap.CompressFormat.PNG, 50, stream); // convert the image to a byte stream byte[] byte_arr = stream.toByteArray(); images.setImageBlob(byte_arr); images.setImageName(filename); } } if (resultCode == getActivity().RESULT_CANCELED) { // user cancelled Image capture Toast.makeText(getActivity(), "User cancelled image capture", Toast.LENGTH_SHORT) .show(); } else { // failed to capture image Toast.makeText(getActivity(), "Sorry! Failed to capture image", Toast.LENGTH_SHORT) .show(); } } } private static File getOutputMediaFile(int type) { // External sdcard location File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES), IMAGE_DIRECTORY_NAME); // Create the storage directory if it does not exist if (!mediaStorageDir.exists()) { if (!mediaStorageDir.mkdirs()) { Log.d(IMAGE_DIRECTORY_NAME, "Oops! Failed create " + IMAGE_DIRECTORY_NAME + " directory"); return null; } } // Create a media file name String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.getDefault()).format(new Date()); File mediaFile; if (type == MEDIA_TYPE_IMAGE) { mediaFile = new File(mediaStorageDir.getPath() + File.separator + "IMG_" + timeStamp + ".jpg"); } else { return null; } mCurrentPhotoPath = mediaFile.getAbsolutePath(); return mediaFile; } A: return Uri.fromFile(getOutputMediaFile(type)); Null Pointer Exception occurring on this line, Because getOutputMediaFile(type) return null. 
This happens when the directory creation fails, in which case the method returns null: // Create the storage directory if it does not exist if (!mediaStorageDir.exists()) { if (!mediaStorageDir.mkdirs()) { Log.d(IMAGE_DIRECTORY_NAME, "Oops! Failed create " + IMAGE_DIRECTORY_NAME + " directory"); return null; } }
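A defensive fix is to null-check the File before building the Uri, instead of letting Uri.fromFile(...) throw. The sketch below uses plain java.io.File and java.net.URI so it runs outside Android (Uri.fromFile is the Android counterpart of File.toURI here); the class and method names are illustrative assumptions:

```java
import java.io.File;
import java.net.URI;

public class MediaFileHelper {

    // Returns the output directory, or null when it cannot be created
    // (mirroring getOutputMediaFile's failure mode in the question).
    static File getOutputDir(File base, String name) {
        File dir = new File(base, name);
        if (!dir.exists() && !dir.mkdirs()) {
            return null; // creation failed, e.g. storage not writable
        }
        return dir;
    }

    // Guard against the null before converting to a URI, rather than
    // passing null straight into the URI conversion.
    static URI getOutputMediaFileUri(File base, String name) {
        File dir = getOutputDir(base, name);
        if (dir == null) {
            return null; // caller must handle the failure explicitly
        }
        return new File(dir, "IMG_test.jpg").toURI();
    }

    public static void main(String[] args) {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        URI uri = getOutputMediaFileUri(tmp, "MyCameraApp");
        System.out.println(uri != null ? "ok" : "failed");
    }
}
```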
{ "language": "en", "url": "https://stackoverflow.com/questions/35521030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Coloring rectangle in function of distance to nearest edge produces weird result in diagonals I'm trying to color a rectangle in ShaderToy/GLSL as a function of each pixel's distance to the nearest rectangle edge. However, a weird (darker) result can be seen on its diagonals: I'm using the rectangle UV coordinates for it, with the following piece of code: void mainImage( out vec4 fragColor, in vec2 fragCoord ) { // Normalized pixel coordinates (from 0 to 1) vec2 uv = fragCoord/iResolution.xy; vec2 uvn=abs(uv-0.5)*2.0; float maxc=max(uvn.y,uvn.x); vec3 mate=vec3(maxc); fragColor = vec4(mate.xyz,1); } As you can see, the error seems to come from the max(uvn.y,uvn.x); line of code, as it doesn't smoothly interpolate the color values as one would expect. For comparison, these are the images obtained by sampling uvn.y and uvn.x instead of the maximum of the two: You can play around with the shader at this URL: https://www.shadertoy.com/view/ldcyWH A: The effect that you can see is an optical illusion. You can make this visible by grading the colors. See the answer to the stackoverflow question Issue getting gradient square in glsl es 2.0, Gamemaker Studio 2.0. To achieve a better result, you can use a shader which smoothly changes the gradient, from a circular (or elliptical) gradient in the middle of the view, to a square gradient at the borders of the view: void mainImage( out vec4 fragColor, in vec2 fragCoord ) { // Normalized pixel coordinates (from 0 to 1) vec2 uv = fragCoord/iResolution.xy; vec2 uvn=abs(uv-0.5)*2.0; vec2 distV = uvn; float maxDist = max(abs(distV.x), abs(distV.y)); float circular = length(distV); float square = maxDist; vec3 color1 = vec3(0.0); vec3 color2 = vec3(1.0); vec3 mate=mix(color1, color2, mix(circular,square,maxDist)); fragColor = vec4(mate.xyz,1); } Preview:
{ "language": "en", "url": "https://stackoverflow.com/questions/48792209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Send POST request with Pillow image I get an image from pymongo like this: def get_my_photo(self, user_id): res = self.db.find_one({'user_id': user_id}) image_data = BytesIO(res['photo']) image = Image.open(image_data) image_data = BytesIO() image.save(image_data, "JPEG") image_data.seek(0) return image_data Then I try to send a POST request with image_data: request = requests.post(url, files={'photo': image}) But the headers are: 'Content-Type': 'text/html; charset=windows-1251'. And there is no photo uploading. What is the correct way to send a Pillow image with a request? Thanks. Edit: found the solution in this post
{ "language": "en", "url": "https://stackoverflow.com/questions/54611872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: sqlite transactions do not respect delete I need to modify all the content in a table. So I wrap the modifications inside a transaction to ensure either all the operations succeed, or none do. I start the modifications with a DELETE statement, followed by INSERTs. What I've discovered is that even if an INSERT fails, the DELETE still takes place, and the database is not rolled back to the pre-transaction state. I've created an example to demonstrate this issue. Put the following commands into a script called EXAMPLE.SQL CREATE TABLE A(id INT PRIMARY KEY, val TEXT); INSERT INTO A VALUES(1, “hello”); BEGIN; DELETE FROM A; INSERT INTO A VALUES(1, “goodbye”); INSERT INTO A VALUES(1, “world”); COMMIT; SELECT * FROM A; If you run the script: “sqlite3 a.db < EXAMPLE.SQL”, you will see: SQL error near line 10: column id is not unique 1|goodbye What's surprising is that the SELECT statement results did not show '1|hello'. It would appear the DELETE was successful, and the first INSERT was successful. But when the second INSERT failed (as it was intended to)… it did not ROLLBACK the database. Is this a sqlite error? Or an error in my understanding of what is supposed to happen? Thanks A: It works as it should. COMMIT commits all operations in the transaction. The one involving world had problems so it was not included in the transaction. To cancel the transaction, use ROLLBACK, not COMMIT. There is no automatic ROLLBACK unless you specify it as conflict resolution with e.g. INSERT OR ROLLBACK INTO .... And use ' single quotes instead of “ for string literals. A: This documentation shows the error types that lead to an automatic rollback: SQLITE_FULL: database or disk full SQLITE_IOERR: disk I/O error SQLITE_BUSY: database in use by another process SQLITE_NOMEM: out of memory SQLITE_INTERRUPT: processing interrupted by application request For other error types you will need to catch the error and roll back; more on this is covered in this SO question.
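The same behaviour is easy to demonstrate from application code, where the explicit ROLLBACK the answer recommends can live in an exception handler. A minimal sketch using Python's sqlite3 module (not from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A(id INT PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO A VALUES(1, 'hello')")
conn.commit()

try:
    # sqlite3 opens a transaction implicitly before the first DML
    # statement, so all three statements below belong to one transaction.
    conn.execute("DELETE FROM A")
    conn.execute("INSERT INTO A VALUES(1, 'goodbye')")
    conn.execute("INSERT INTO A VALUES(1, 'world')")  # violates PRIMARY KEY
    conn.commit()
except sqlite3.IntegrityError:
    # Explicit rollback undoes the DELETE and the first INSERT too.
    conn.rollback()

rows = list(conn.execute("SELECT * FROM A"))
print(rows)  # the pre-transaction row is back: [(1, 'hello')]
```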
{ "language": "en", "url": "https://stackoverflow.com/questions/23495065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When parsing a string in to DateTime, parse the string to take data before putting each variable into the DateTime Object I have checked all answers related to this error but I couldn't solve it. I have an admission form with fields like DateOfAdmission, PhoneNumber, TimeOfAdmission and other things. When I retrieve the values from the form, convert them, and then try to insert them into the SQL Server database, the application breaks with 'System.FormatException: When parsing a string into DateTime, parse the string to take data before putting each variable into the DateTime Object.' Here is my code; please look at it and tell me where I am going wrong. public partial class AdmissionForm : Window { public AdmissionForm() { InitializeComponent(); } private void SubmitFormClick(object sender, RoutedEventArgs e) { // Parsing Values // STRING TO INT int AdmissionNumber = int.Parse(AdmissionNumberTextBox.Text); int CNIC = int.Parse(CNICTextBox.Text); int Age = int.Parse(AgeTextBox.Text); int PhoneNumber = int.Parse(PhoneTextBox.Text); int MobileNumber = int.Parse(MobileTextBox.Text); int EmergencyNumber = int.Parse(EmergencyTextBox.Text); // Converting time 'String' into Byte[] //string time = DateTime.Now.ToString("hh:mm:tt"); //TimeOfDay.ToString("HH:mm:ss") byte[] TimeOfAdmission = Encoding.ASCII.GetBytes(DateTime.Now.ToString("hh:mm:tt")); // Converting date 'String' into DateTime //string date = DateTime.Now.ToString("dd/MM/yyyy"); DateTime DateOfAdmission = Convert.ToDateTime(DateTime.Now.ToString("dd/MM/yyyy")); // String var StudentName = StudentNameTextBox.Text; var FatherName = FatherNameTextBox.Text; var Qualification = QualificationTextBox.Text; var Nationality = NationalityTextBox.Text; var Adress = AdressTextBox.Text; var Timing = SheduleTime.SelectedValue.ToString(); var ClassLevel = ClassValue.SelectedValue.ToString(); var Reference = ReferenceTextBox.Text; // DB context Getting Entites and attributes var context = new
AcademyEntities(); // Creating new Model var Model = new StudentTable(); // Getting and Setting values // Reading Values Model.StudentName = StudentName; Model.FatherName = FatherName; Model.StudentCNIC = CNIC; Model.GuardianName = null; Model.Nationality = Nationality; Model.Qualification = Qualification; Model.StudentAge = Age; Model.AdmissionNumber = AdmissionNumber; Model.TimeOfAdmission = TimeOfAdmission; Model.DateOfAdmission = DateOfAdmission; Model.MobileNumber = MobileNumber; Model.PhoneNumber = PhoneNumber; Model.EmergencyNumber = EmergencyNumber; Model.Adress = Adress; Model.ClassID = ClassTime(Timing); Model.CourseLevelID = CourseLevelID(ClassLevel); Model.Reference = Reference; context.StudentTables.Add(Model); context.SaveChanges(); // Class Time Id int ClassTime(string Timing) { int classId = 1; if (Timing == "2 to 3 PM") { return classId; } else if (Timing == "3 to 4 PM") { classId = 2; return classId; } else if (Timing == "4 to 5 PM") { classId = 3; return classId; } else if (Timing == "5 to 6 PM") { classId = 4; return classId; } else if (Timing == "6 to 7 PM") { classId = 5; return classId; } return classId; } // Course Level Id int CourseLevelID(string ClassLevel) { int courseLevelId = 1; if (ClassLevel == "Lower Basic") { return courseLevelId; } else if (ClassLevel == "Basic") { courseLevelId = 2; return courseLevelId; } else if (ClassLevel == "Pre Starter") { courseLevelId = 3; return courseLevelId; } else if (ClassLevel == "Starter") { courseLevelId = 4; return courseLevelId; } else if (ClassLevel == "Intermediate") { courseLevelId = 5; return courseLevelId; } else if (ClassLevel == "Advance") { courseLevelId = 5; return courseLevelId; } return courseLevelId; } } A: Well I searched and found one solution to parse date as SQL SERVER timestamp string currentTime = DateTime.Now.ToString("hh:mm:tt"); byte[] TimeOfAdmission = Encoding.ASCII.GetBytes(currentTime);
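The general fix, independent of C#, is to validate each field before building the final object instead of letting a single Parse call throw. A sketch of that pattern in Python (the function name and format are illustrative, not from the original form):

```python
from datetime import datetime

def parse_date(text, fmt="%d/%m/%Y"):
    """Return a datetime, or None if the field doesn't match the expected format."""
    try:
        return datetime.strptime(text, fmt)
    except ValueError:
        return None  # caller can flag the offending field instead of crashing

print(parse_date("29/03/2023"))  # a valid admission date parses cleanly
print(parse_date("not a date"))  # → None
```

The same guard applies to the numeric fields; in C# the equivalents are DateTime.TryParse and int.TryParse.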
{ "language": "en", "url": "https://stackoverflow.com/questions/50955195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Multiple git subtree of same repo In a repo called apps I'm stuck with using a folder structure like /apps - /app1 - /shared - /app2 - /shared - /app3 - /shared where shared is a separate repo. How do I use git subtree such that changes to any app1/shared, app2/shared, app3/shared take effect in each other and can be pushed back to the shared repo?
{ "language": "en", "url": "https://stackoverflow.com/questions/28705190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: STM32CubeMx: How to add your own "USER CODE BEGIN / END" section? I have a project where I need to add a few lines to one of the generated C files. However, the place where I want to add the change does not have a "USER CODE BEGIN / END" section. So whenever I regenerate code, the changes are overwritten. I tried adding my own user code section as shown below, but even that got overwritten. It seems CubeMX looks for a predefined set of USER CODE blocks and overwrites everything else. /* USER CODE BEGIN 8 */ /* USER CODE END 8 */ I would like to be able to define my own user code blocks so that I can write custom code in places where CubeMX has not already provided a user code block. A: Adding custom user code sections is not supported by CubeMX. See this support post: https://community.st.com/s/question/0D50X0000ALxNlmSQF/is-it-possible-to-add-custom-user-code-sections
{ "language": "en", "url": "https://stackoverflow.com/questions/55934599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Empty file when using sprintf and system function on C I want to save some information in a text file, so I wrote this program: #include <stdio.h> #include <stdlib.h> #include <string.h> int main(int argc,char *argv[]) { FILE *fichier; char buffer[20]; char command[200]; char command1[100]; system(" cat /etc/resolv.conf | grep nameserver | awk -F' ' '{print $2}' | cut -d'.' -f1-3 | awk '{print $1\".1\"}' > ethernet_dns.txt"); fichier=fopen("ethernet_dns.txt","r"); memset(&buffer,0,sizeof(buffer)); fread(buffer,20,1,fichier); printf("buffer is: %s",buffer); snprintf(command,sizeof(command),"ping -c 1 -W 1 %s > /tmp/ping_result",buffer); printf("command is: %s",command); system(command); return 0; } Results: buffer is: 127.0.1.1 command is: ping -c 1 -W 1 127.0.1.1 The system command returns this: PING 127.0.1.1 (127.0.1.1) 56(84) bytes of data. 64 bytes from 127.0.1.1: icmp_seq=1 ttl=64 time=0.115 ms --- 127.0.1.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms But when I run cat /tmp/ping_result, I get an empty file. A: Problem: the code is reading '\n' into buffer[] yet trying to use that as part of the command. Need to trim the buffer.
See *** below. // Ensure file is open fichier=fopen("ethernet_dns.txt","r"); assert(fichier); // Use fgets //memset(&buffer,0,sizeof(buffer)); //fread(buffer,20,1,fichier); if (fgets(buffer, sizeof buffer, fichier)) { // lop off potential \n buffer[strcspn(buffer, "\n")] = '\0'; // *** printf("buffer is: <%s>\n",buffer); int n = snprintf(command, sizeof(command), "ping -c 1 -W 1 %s > /tmp/ping_result", buffer); printf("command is: <%s>\n",command); // Only issue command if no problems occurred in snprintf() if (n > 0 && n < sizeof(command)) system(command); A: the posted code has a couple of problems 1) outputs results of ping to stdout rather than to /tmp/ping_result 2) fails to remove the trailing newline from the buffer[] array The following code 1) cleans up the indenting 2) corrects the problems in the code 3) handles possible failure of the call to fopen() 4) eliminates the unneeded final statement: return 0 #include <stdio.h> #include <stdlib.h> #include <string.h> int main( void ) { FILE *fichier; char buffer[20]; char command[200]; system(" cat /etc/resolv.conf | grep nameserver | awk -F' ' '{print $2}' | cut -d'.' -f1-3 | awk '{print $1\".1\"}' > ethernet_dns.txt"); fichier=fopen("ethernet_dns.txt","r"); if( !fichier ) { perror( "fopen for ethernet_dns.txt failed"); exit( EXIT_FAILURE ); } // implied else, fopen successful memset(buffer,0,sizeof(buffer)); size_t len = fread(buffer,1, sizeof(buffer),fichier); printf( "len is: %lu\n", len ); buffer[len-1] = '\0'; // eliminates trailing newline printf("buffer is: %s\n",buffer); snprintf(command,sizeof(command),"ping -c 1 -W 1 "); strcat( command, buffer); strcat( command, " > /tmp/ping_result"); printf("command is: %s\n",command); system(command); } the resulting output, on my computer, is in file: /tmp/ping_result PING 127.0.1.1 (127.0.1.1) 56(84) bytes of data.
64 bytes from 127.0.1.1: icmp_seq=1 ttl=64 time=0.046 ms --- 127.0.1.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
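The pitfall both answers fix, a trailing newline from the shell pipeline ending up inside the command string, is easy to demonstrate in isolation. A small Python sketch of the trimming step (the IP value is a stand-in for whatever the pipeline wrote to ethernet_dns.txt):

```python
raw = "127.0.1.1\n"      # what fread() pulls from the file, newline included
host = raw.rstrip("\n")  # the equivalent of buffer[strcspn(buffer, "\n")] = '\0'

broken = f"ping -c 1 -W 1 {raw} > /tmp/ping_result"   # the '\n' splits the command
fixed = f"ping -c 1 -W 1 {host} > /tmp/ping_result"   # redirection stays on one line

print(repr(broken))  # the embedded newline pushes '> /tmp/ping_result' onto a second line
print(repr(fixed))
```

With the newline embedded, the shell treats the redirection as a separate (failing) command, which is why /tmp/ping_result came out empty.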
{ "language": "en", "url": "https://stackoverflow.com/questions/39978821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ASP.NET Core ILogger does not log to Trace unless a debugger is attached I am having a weird situation where I cannot receive any Trace output from the ASP.NET Core ILogger<T> unless a debugger is attached (even if I am running in the Debug configuration). First let me explain: I need to write logs to files by date, and I previously wrote a simple custom TraceListener for that, so I think I can reuse it without writing a new log provider: public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureLogging(logging => { logging.AddTraceSource("IISLogCleaner"); // I added below line to make sure I am not missing anything logging.SetMinimumLevel(LogLevel.Trace); }) .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); }); In Startup.ConfigureServices(), I add the listener: var listener = new DailyTraceListener(folder); Trace.Listeners.Add(listener); Now for the logging background task: ILogger<CleanupTask> logger; public CleanupTask(ILogger<CleanupTask> logger) { this.logger = logger; } // ... async Task<int> WorkAsync(CancellationToken token) { System.Diagnostics.Debug.WriteLine("Start Working"); this.logger.LogInformation("Start working"); await Task.Delay(1000); this.logger.LogInformation("Work finished"); return await Task.FromResult(10000); } Here's the problem: with the debugger attached, all the logs are written to the log file. However, when I choose Run without Debugging, even in the Debug configuration, no file is created and no content is written. After I added the Debug.WriteLine calls, only these lines get logged, but all the logger.LogInformation() calls are not recorded. The Output window, however, receives all the messages: To be safe, I have deleted the appsettings.Development.json file and only appsettings.json remains.
Here is the Log content: "Logging": { "LogLevel": { "Default": "Information", "Microsoft": "Warning", "Microsoft.Hosting.Lifetime": "Information" } }, A: Ages later, I know, but I believe you are hitting this: ASP.Net Core Logging and DebugView.exe. I spent 3 hours on this one today and feel your pain. TL;DR: Debugger.IsAttached is checked within Microsoft.Extensions.Logging.Debug.
{ "language": "en", "url": "https://stackoverflow.com/questions/58211042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Efficient way of calculating minimum distance between point and multiple faces I have multiple faces in 3D space creating cells. All these faces lie within a predefined cube (e.g. of size 100x100x100). Every face is convex and defined by a set of corner points and a normal vector. Every cell is convex. The cells are the result of a 3D Voronoi tessellation, and I know the initial seed points of the cells. Now for every integer coordinate I want the smallest distance to any face. My current solution uses this answer https://math.stackexchange.com/questions/544946/determine-if-projection-of-3d-point-onto-plane-is-within-a-triangle/544947 and calculates, for every point, for every face, for every possible triple of that face's points, the projection of the point onto the triangle created by the triple, and checks if the projection is inside the triangle. If this is the case, I return the distance between the projection and the original point. If not, I calculate the distance from the point to every possible line segment defined by two points of a face. Then I choose the smallest distance. I repeat this for every point. This is quite slow and clumsy. I would much rather calculate all points that lie on (or almost lie on) a face and then with these calculate the smallest distance to all neighbour points and repeat this. I have found this Get all points within a Triangle but I am not sure how to apply it to 3D space. Are there any techniques or algorithms to do this efficiently? A: Since we're working with a Voronoi tessellation, we can simplify the current algorithm. Given a grid point p, it belongs to the cell of some site q. Take the minimum over each neighboring site r of the distance from p to the plane that is the perpendicular bisector of qr. We don't need to worry whether the closest point s on the plane belongs to the face between q and r; if not, the segment ps intersects some other face of the cell, which is necessarily closer.
Actually it doesn't even matter if we loop r over some sites that are not neighbors. So if you don't have access to a point location subroutine, or it's slow, we can use a fast nearest neighbors algorithm. Given the grid point p, we know that q is the closest site. Find the second closest site r and compute the distance d(p, bisector(qr)) as above. Now we can prune the sites that are too far away from q (for every other site s, we have d(p, bisector(qs)) ≥ d(q, s)/2 − d(p, q), so we can prune s unless d(q, s) ≤ 2 (d(p, bisector(qr)) + d(p, q))) and keep going until we have either considered or pruned every other site. To do pruning in the best possible way requires access to the guts of the nearest neighbor algorithm; I know that it slots right into the best-first depth-first search of a kd-tree or a cover tree.
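The distance-to-bisector computation at the heart of this answer is a one-liner. A NumPy sketch (assuming q is the site owning p's cell and r is another site; the bisector plane passes through the midpoint of qr with the qr direction as its normal):

```python
import numpy as np

def dist_to_bisector(p, q, r):
    """Distance from point p to the perpendicular bisector plane of segment qr."""
    n = (r - q) / np.linalg.norm(r - q)  # unit normal of the bisector plane
    m = (q + r) / 2.0                    # midpoint of qr lies on the plane
    return abs(np.dot(p - m, n))

p = np.array([0.0, 0.0, 0.0])
q = np.array([1.0, 0.0, 0.0])  # site owning p's cell
r = np.array([3.0, 0.0, 0.0])  # another site; the bisector is the plane x = 2
print(dist_to_bisector(p, q, r))  # → 2.0
```

Looping this over candidate sites r and taking the minimum gives the distance from p to the nearest cell face, with no triangle projections needed.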
{ "language": "en", "url": "https://stackoverflow.com/questions/72857322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a way to take an ECS TaskDefinition as an input to a CloudFormation template? I am new to AWS CloudFormation. I have been testing out automating deployments of containers using this service. However, as of now, I am manually adding the name of my TaskDefinition to the YAML file of the CloudFormation template. Is there a way for me to define this as an input? A: Yes, you can use stack output exports: define the export value in one stack and import it in any other: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html
{ "language": "en", "url": "https://stackoverflow.com/questions/61233033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the simplest way to sort an array of objects? I want to sort an array like this [ { 'type': 'apple', 'like': 3 }, { 'type': 'pear', 'like': 5 }, ... ] I was using the lodash lib for the sorting. Is there a simpler way to sort the array by the like value? A: How about the pure JS sort method: var k=[{'type': 'apple', 'like': 7}, {'type': 'pear', 'like': 5},{'type': 'pear', 'like': 10}]; console.log(k.sort((a,b)=>a.like-b.like)); If you want to understand it better, read it here. I hope this helps. Thanks! A: You can directly do this in javascript in the following manner: var my_arr = [{'type': 'apple', 'like': 3}, {'type': 'pear', 'like': 5}, {'type': 'pea', 'like': 7}, {'type': 'orange', 'like': 1}, {'type': 'grape', 'like': 4}] console.log(my_arr) my_arr.sort((a, b) => (a.like > b.like) ? 1 : (a.like < b.like) ? -1 : 0) console.log(my_arr) where my_arr is the original array you were attempting to sort. (Note the comparator must return 0 for equal values to be consistent.)
{ "language": "en", "url": "https://stackoverflow.com/questions/61812559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Giving each subset of records a group ID I need to assign a rank to a subset of records. Sample data: Colour Code Size Code DWMUL040 7 DWMUL040 8 DWMUL040 9 DWMUL040 10 DWMUL040 7 DWMUL040 8 DWMUL040 9 DWMUL040 10 DWMUL040 7 DWMUL040 8 DWMUL040 9 DWMUL040 10 DWMUL040 7 DWMUL040 8 DWMUL040 9 DWMUL040 10 I need the data to look like this: Group ID Colour Code Size Code 1 DWMUL040 7 1 DWMUL040 8 1 DWMUL040 9 1 DWMUL040 10 2 DWMUL040 7 2 DWMUL040 8 2 DWMUL040 9 2 DWMUL040 10 3 DWMUL040 7 3 DWMUL040 8 3 DWMUL040 9 3 DWMUL040 10 4 DWMUL040 7 4 DWMUL040 8 4 DWMUL040 9 4 DWMUL040 10 The list will be ordered, and basically, whenever the Size Code changes, it is a new group. I have a feeling this one is going to be hard, as the size code (these are garment sizes) could be anything, such as "XL". A: select [colour code], [size code], row_number() over (partition by [colour code], [size code] order by 1/0) [group id] from tbl order by [group id], [colour code], [size code];
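The answer's trick is that the nth occurrence of a (colour, size) pair is exactly the group it belongs to. The same occurrence-numbering idea is easy to check in pandas (a sketch with the sample data above; column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "colour_code": ["DWMUL040"] * 8,
    "size_code": [7, 8, 9, 10, 7, 8, 9, 10],
})
# nth occurrence of each (colour, size) pair = the group it belongs to
df["group_id"] = df.groupby(["colour_code", "size_code"]).cumcount() + 1
print(df["group_id"].tolist())  # → [1, 1, 1, 1, 2, 2, 2, 2]
```

This works for any size codes ("XL" included), since only equality of the pair matters, not ordering.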
{ "language": "en", "url": "https://stackoverflow.com/questions/13982661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Correct concurrency handling using EF Core 2.1 with SQL Server I am currently working on an API using ASP.NET Core Web API along with Entity Framework Core 2.1 and a SQL Server database. The API is used to transfer money between two accounts, A and B. Given the nature of the B account, which is an account that accepts payments, a lot of concurrent requests might be executed at the same moment. As you know, if this is not well managed, it can result in some users not seeing their payments arrive. Having spent days trying to get the concurrency handling right, I can't figure out what the best approach is. For the sake of simplicity I created a test project trying to reproduce this concurrency issue. In the test project, I have two routes: request1 and request2. Each one performs a transfer to the same user; the first one has an amount of 10 and the second one 20. I put a Thread.Sleep(10000) in the first one as follows: [HttpGet] [Route("request1")] public async Task<string> request1() { using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable)) { try { Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault(); Thread.Sleep(10000); w.Amount = w.Amount + 10; w.Inserts++; _context.Wallets.Update(w); _context.SaveChanges(); transaction.Commit(); } catch (Exception ex) { transaction.Rollback(); } } return "request 1 executed"; } [HttpGet] [Route("request2")] public async Task<string> request2() { using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable)) { try { Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault(); w.Amount = w.Amount + 20; w.Inserts++; _context.Wallets.Update(w); _context.SaveChanges(); transaction.Commit(); } catch (Exception ex) { transaction.Rollback(); } } return "request 2 executed"; } After executing request1 and then request2 in a browser, the first transaction is rolled back due to: InvalidOperationException: An exception has been raised that is
likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call. I could also retry the transaction, but isn't there a better way, such as using locks? Serializable, being the most isolated level and also the most costly, is described in the documentation as follows: No other transactions can modify data that has been read by the current transaction until the current transaction completes. This means no other transaction can update data that has been read by another transaction, which is working as intended here, since the update in the request2 route waits for the first transaction (request1) to commit. The problem here is that we need to block reads by other transactions once the current transaction has read the wallet row. To solve the problem I need to use locking, so that when the first SELECT statement in request1 executes, all later transactions need to wait for the first transaction to finish so they can select the correct value. Since EF Core has no support for locking, I need to execute a SQL query directly, so when selecting the wallet I'll add a row lock to the selected row: //this locks the wallet row with id 1 //and also the default transaction isolation level is enough Wallet w = _context.Wallets.FromSql("select * from wallets with (XLOCK, ROWLOCK) where id = 1").FirstOrDefault(); Thread.Sleep(10000); w.Amount = w.Amount + 10; w.Inserts++; _context.Wallets.Update(w); _context.SaveChanges(); transaction.Commit(); Now this works perfectly; even after executing multiple requests, the combined result of all the transfers is correct. In addition to that, I am using a transaction table that holds every money transfer made, with its status, to keep a record of each transaction; in case something goes wrong, I am able to recompute all wallet amounts using this table.
Now there are other ways of doing it, like: * *Stored procedure: but I want my logic to be at the application level. *Making a synchronized method to handle the database logic: this way all the database requests are executed in a single thread. I read a blog post that advises using this approach, but maybe we'll use multiple servers for scalability. I don't know if I'm not searching well, but I can't find good material on handling pessimistic concurrency with Entity Framework Core; even while browsing GitHub, most of the code I've seen doesn't use locking. Which brings me to my question: is this the correct way of doing it? Cheers and thanks in advance. A: My suggestion is to catch DbUpdateConcurrencyException and use entry.GetDatabaseValues() and entry.OriginalValues.SetValues(databaseValues) in your retry logic. No need to lock the DB. Here is the sample from the EF Core documentation page: using (var context = new PersonContext()) { // Fetch a person from database and change phone number var person = context.People.Single(p => p.PersonId == 1); person.PhoneNumber = "555-555-5555"; // Change the person's name in the database to simulate a concurrency conflict context.Database.ExecuteSqlCommand( "UPDATE dbo.People SET FirstName = 'Jane' WHERE PersonId = 1"); var saved = false; while (!saved) { try { // Attempt to save changes to the database context.SaveChanges(); saved = true; } catch (DbUpdateConcurrencyException ex) { foreach (var entry in ex.Entries) { if (entry.Entity is Person) { var proposedValues = entry.CurrentValues; var databaseValues = entry.GetDatabaseValues(); foreach (var property in proposedValues.Properties) { var proposedValue = proposedValues[property]; var databaseValue = databaseValues[property]; // TODO: decide which value should be written to database // proposedValues[property] = <value to be saved>; } // Refresh original values to bypass next concurrency check entry.OriginalValues.SetValues(databaseValues); } else { throw new
NotSupportedException( "Don't know how to handle concurrency conflicts for " + entry.Metadata.Name); } } } } } A: Why don't you handle the concurrency problem in the code? Why does it need to be in the DB layer? You can have a method that updates the value of a given wallet by a given amount, and you can use a simple lock there. Like this: private readonly object walletLock = new object(); public void UpdateWalletAmount(int userId, int amount) { lock (walletLock) { Wallet w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault(); w.Amount = w.Amount + amount; w.Inserts++; _context.Wallets.Update(w); _context.SaveChanges(); } } So your methods will look like this: [HttpGet] [Route("request1")] public async Task<string> request1() { try { UpdateWalletAmount(1, 10); } catch (Exception ex) { // log error } return "request 1 executed"; } [HttpGet] [Route("request2")] public async Task<string> request2() { try { UpdateWalletAmount(1, 20); } catch (Exception ex) { // log error } return "request 2 executed"; } You don't even need to use a transaction in this context. A: You can use a distributed lock mechanism, with Redis for example. Also, you can lock by userId, so it will not block the method for other users.
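The in-process lock suggested in the second answer boils down to serializing the read-modify-write on the wallet. A Python sketch of the same idea with threading (illustrative only, and subject to the same caveat: it does not protect anything once the app runs on multiple server processes):

```python
import threading

wallet = {"amount": 0, "inserts": 0}
wallet_lock = threading.Lock()

def update_wallet(amount):
    # serialize the read-modify-write so concurrent transfers can't interleave
    with wallet_lock:
        wallet["amount"] += amount
        wallet["inserts"] += 1

threads = [threading.Thread(target=update_wallet, args=(10,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(wallet["amount"])  # → 1000, none of the 100 concurrent increments lost
```

A distributed lock (as the last answer suggests) generalizes this pattern across processes by moving the mutex into a shared service such as Redis.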
{ "language": "en", "url": "https://stackoverflow.com/questions/53584812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Get count of an item in a column when values in other column are equal to some value for i in range(0,9): num = training_data[training_data[i]==row[i]&training_data[9]==1].groupby([i,9]).size() print num count_equals.append(num) DataFrame 0 1 2 3 4 5 6 7 8 9 314 1 1 1 1 1 1 2 1 1 1 431 5 1 1 3 4 1 3 2 1 1 260 10 5 8 10 3 10 5 1 3 -1 91 3 1 1 2 2 1 1 1 1 1 337 1 1 1 1 2 1 3 1 1 1 I need the counts in a list; the groupby works without the second condition. If row = [1,1,1,1,1,1,1,1], then the count_equals list should be [2,4,4,2,4,1,3,4] Error: Traceback (most recent call last): File "naive.py", line 46, in num = training_data[training_data[i]==row[i]&training_data[9]==1].groupby([i,9]).size() File "/usr/local/lib/python2.7/dist-packages/pandas/core/generic.py", line 917, in nonzero .format(self.class.name)) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). A: In training_data[i]==row[i]&training_data[9]==1, the operator & has higher precedence than ==. Surround the relational expressions with parentheses before doing the AND: (training_data[i]==row[i]) & (training_data[9]==1)
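The precedence issue in the answer can be verified directly with a tiny frame (the data here is illustrative, not the question's):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 1], "b": [1, 1, 1]})

# parenthesized comparisons: & combines two boolean Series element-wise
mask = (df["a"] == 1) & (df["b"] == 1)
print(int(mask.sum()))  # → 2 rows match both conditions

# without parentheses, & binds tighter than ==, so Python sees the chained
# comparison df["a"] == (1 & df["b"]) == 1, whose implicit `and` calls
# bool() on a Series and raises the "truth value is ambiguous" ValueError
try:
    df["a"] == 1 & df["b"] == 1
except ValueError as e:
    print("raises:", e)
```

This reproduces exactly the error in the question's traceback.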
{ "language": "en", "url": "https://stackoverflow.com/questions/42882632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Query with a data frame and a database I'd like to know if it's possible to make a query between a data frame and a database like this: test <- sqlQuery(ch," SELECT * FROM table_from_database as A, dataframe as B WHERE a.id=b.id ") A: Use the RODBC package to connect to a MS SQL Server database. First you need to do some setup. Open the "Data Sources (ODBC)" application. (In Control Panel\System and Security\Administrative Tools, or search under the Start Menu.) Add a User DSN (or a System DSN if you have admin rights and want the connection for all users). Step 1: Give it a name like MyDataBase and select the server that it lives on. The name shouldn't be more than 32 characters or you'll get a warning. Step 2: Connection details are the same as you would use in SQL Server. Step 3: Change the default database to the one that you want to connect to. Finish and test your connection. Now you get to use R. It's as easy as library(RODBC) channel <- odbcConnect("MyDataBase") #or whatever name you gave query <- "SELECT * FROM MyTable WHERE x > 10" results <- sqlQuery(channel, query) odbcClose(channel) If you are feeling fancy or hate wizards, you can set up the ODBC connection by writing registry entries. Apologies for the big code chunk.
#' #' @param data_source_name String specifying the name of the data source to add. #' @param database The name of the database to use as the default. #' @param database_server The name of the server holding the database. #' @param type Type of connection to add. Either ``sql'' or ``sql_native''. #' @param permission Whether the connection is for the user or the system. #' @return Nothing. Called for the side-effect of registering ODBC data sources. #' @details A key with the specified data source name is created in #' ``Software\\ODBC\\ODBC.INI'', either in ``HKEY_CURRENT_USER'' or #' ``HKEY_LOCAL_MACHINE'', depending upon the value of \code{permission}. #' Four values are added to this key. ``Database'' is given the value of the #' \code{database} arg. ``Server'' is given the value of the #' \code{database_server} arg. ``Trusted_Connection'' is given the value ``Yes''. #' ``Driver'' is given the value from the appropriate subkey of #' ``HKEY_LOCAL_MACHINE\\SOFTWARE\\ODBC\\ODBCINST.INI'', depending upon the type. #' Another key with the specified data source name is created in #' ``Software\\ODBC\\ODBC.INI\\ODBC Data Sources''. register_odbc_data_source <- function(data_source_name, database, database_server, type = c("sql", "sql_native"), permission = c("user", "system")) { #assert_os_is_windows() #data_source_name <- use_first(data_source_name) permission <- match.arg(permission) type <- match.arg(type) #Does key exist? 
odbc_key <- readRegistry( file.path("Software", "ODBC", "ODBC.INI", fsep = "\\"), switch(permission, user = "HCU", system = "HLM") ) if(data_source_name %in% names(odbc_key)) { message("The data source ", sQuote(data_source_name), " already exists.") return(invisible()) } hive <- switch( permission, user = "HKEY_CURRENT_USER", system = "HKEY_LOCAL_MACHINE" ) key <- shQuote( file.path(hive, "Software", "ODBC", "ODBC.INI", data_source_name, fsep = "\\") ) odbc_data_sources_key <- shQuote( file.path(hive, "Software", "ODBC", "ODBC.INI", "ODBC Data Sources", fsep = "\\") ) type_name <- switch( type, sql = "SQL Server", sql_native = "SQL Server Native Client 11.0" ) driver <- read_registry( file.path("SOFTWARE", "ODBC", "ODBCINST.INI", type_name, fsep = "\\"), "HLM" )$Driver system0(key) system0(key, "/v Database /t REG_SZ /d", database) system0(key, "/v Driver /t REG_SZ /d", shQuote(driver)) system0(key, "/v Server /t REG_SZ /d", database_server) system0(key, "/v Trusted_Connection /t REG_SZ /d Yes") system0(odbc_data_sources_key, "/v", data_source_name, "/t REG_SZ /d", shQuote(type_name)) } #' Wrapper to system for registry calls #' #' Wraps the \code{system} function that calls the OS shell. #' @param ... Passed to \code{paste} to create the command. #' @return The command that was passed to system is invisibly returned. #' @note Not meant to be called directly. system0 <- function(...) { cmd <- paste("reg add", ...) res <- system(cmd, intern = TRUE) if(res != "The operation completed successfully.\r") { stop(res) } else { message(res) } invisible(cmd) }
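As for the original question, joining a database table against an in-memory data frame, a common workaround is to pull the table into the client and join locally (assuming it fits in memory; in R the equivalent is sqlQuery followed by merge). A sketch of that pattern with sqlite3/pandas, where the table and column names are illustrative:

```python
import sqlite3
import pandas as pd

# stand-in for the remote database table
con = sqlite3.connect(":memory:")
pd.DataFrame({"id": [1, 2, 3], "x": ["a", "b", "c"]}).to_sql(
    "table_from_database", con, index=False)

dataframe = pd.DataFrame({"id": [2, 3], "y": [10, 20]})  # the local data frame

db_table = pd.read_sql("SELECT * FROM table_from_database", con)
result = db_table.merge(dataframe, on="id")  # the WHERE a.id = b.id join, done client-side
print(result)
```

The alternative, when the table is too large to pull, is to upload the data frame to a temporary table and let the database do the join.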
{ "language": "en", "url": "https://stackoverflow.com/questions/16015106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Execution failed for task ':app:compileDebugJavaWithJavac'. Error:(2055, 52) error: ';' expected Error:(2055, 59) error: expected I am new to Android; my project was compiling and running properly a few moments ago, but after I tried to implement a navigation drawer it's giving me this error: FAILURE: Build failed with an exception. What went wrong: Execution failed for task ':app:compileDebugJavaWithJavac'. Compilation failed; see the compiler error output for details. Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. BUILD FAILED It's also complaining about a line in this generated file Error:(2055, 52) error: ';' expected Error:(2055, 59) error: expected C:\Users\muoki\AndroidStudioProjects\MaterialTest\app\build\generated\source\r\debug\com\muoki\materialtest\R.java which is here public static final int fragment_navigation-drawer=0x7f0c0068; I have tried running using the script parameter as explained in this question but it's still giving the same error. Here is my gradle: apply plugin: 'com.android.application' android { compileSdkVersion 22 buildToolsVersion "22.0.1" defaultConfig { applicationId "com.muoki.materialtest" minSdkVersion 15 targetSdkVersion 22 versionCode 1 versionName "1.0" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } } dependencies { compile fileTree(dir: 'libs', include: ['*.jar']) testCompile 'junit:junit:4.12' compile 'com.android.support:appcompat-v7:22.2.1' compile 'com.android.support:design:22.2.1' compile 'com.android.support:support-v4:22.2.1' } A: Well, the first thing you should do is remove the code you added and get back to a version that compiles. Then try again. You should also supply some code, because the errors are not enough to answer this problem on their own. You should also know that the R.java file is created each time you compile the app.
The error from the R.java file probably indicates that there is a problem with the way you have coded the fragment navigation drawer. Check whether you have used any spaces or hyphens in the name; the R.java error shows both an underscore and a hyphen, which may indicate a problem with the name. I would also recommend looking up some YouTube videos explaining how to use the LogCat output to identify errors. You should also read through the Google documentation on the navigation drawer (http://developer.android.com/training/implementing-navigation/nav-drawer.html) and compare the example code with the code you have written.

A: The short answer is that there is some syntax error in your code, possibly in a Java file or in XML. Fix that first and the error will go away. This has worked for me 100 percent of the time: here was my error, I found it and removed it, and the compile-time error was resolved.
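A concrete illustration of the naming issue described above (assuming, as the generated R.java field suggests, that the name comes from a layout file): Android resource names may contain only lowercase letters, digits, and underscores, so a hyphen in the file name produces an invalid Java identifier in the generated R.java.

```
Bad:  res/layout/fragment_navigation-drawer.xml
      -> R.java gets "fragment_navigation-drawer", which is not a valid Java identifier
Good: res/layout/fragment_navigation_drawer.xml
      -> referenced in code as R.layout.fragment_navigation_drawer
```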
{ "language": "en", "url": "https://stackoverflow.com/questions/34537910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Read and write NFC tag at once

Is it possible to read and write an NFC tag simultaneously with the Android API? For example, I have a counter in an NDEF message and would like to read it, increment it, and write it back. Or read an authentication key and update it. I can see only one operation per example.

A: Android's Near Field Communication documentation page states that it is indeed possible. Android-powered devices with NFC simultaneously support three main modes of operation:

*Reader/writer mode, allowing the NFC device to read and/or write passive NFC tags and stickers. . . .

A: Android will, in the default setup, automatically read any NDEF message and deliver it to your activity by invoking onNewIntent(..). Your app is then free to write to the tag, and can do so within the invocation of onNewIntent(..) or the following call to onResume(..).

A: I'm not sure if you are looking for React Native, for example, but you can do that from inside the scope of a single function that returns you the message, after which you can immediately write what you want. The package is this one: https://github.com/whitedogg13/react-native-nfc-manager The feature is in the tag-tech branch, open to discussion, and will sooner or later be merged to master.
{ "language": "en", "url": "https://stackoverflow.com/questions/39393639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Smaato custom banner view could not be instantiated (Android)

I tried to use a Smaato ad in my app. At first glance it is simple to use, but in practice I ran into a problem. I put a Smaato banner in the main view XML. Problem number one is that Eclipse shows me the following error message in the Graphical Layout tab:

The following classes could not be instantiated:
- com.smaato.SOMA.SOMABanner
See the Error Log (Window > Show View) for more details.
Tip: Use View.isInEditMode() in your custom views to skip code when shown in Eclipse

main.xml file code:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/RootLayout"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <com.smaato.SOMA.SOMABanner
        android:id="@+id/bannerView"
        android:layout_width="fill_parent"
        android:layout_height="90dp"
        android:layout_alignParentTop="true"
        android:layout_alignParentLeft="true" />

</LinearLayout>

According to the official SOMA SDK Developer guide:

1) I defined a banner view in my application (described above)
2) In code I added manually to the onCreate method:

SOMABanner mBanner = (SOMABanner)findViewById(R.id.BannerView);

//In order to fetch live ads inside the activity, add your PublisherID and AdspaceID in the
//onCreate method. For example:
mBanner.setPublisherId(my_publisher_id);
mBanner.setAdSpaceId(my_adspace_id);

When I ran the program, it throws an exception:

12-20 03:00:48.415: ERROR/AndroidRuntime(12819): FATAL EXCEPTION: main
12-20 03:00:48.415: ERROR/AndroidRuntime(12819): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.android.My/com.example.android.My.App}: java.lang.ClassCastException: android.widget.TextView
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1821)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1842)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.app.ActivityThread.access$1500(ActivityThread.java:132)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1038)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.os.Handler.dispatchMessage(Handler.java:99)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.os.Looper.loop(Looper.java:143)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.app.ActivityThread.main(ActivityThread.java:4268)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at java.lang.reflect.Method.invokeNative(Native Method)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at java.lang.reflect.Method.invoke(Method.java:507)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at dalvik.system.NativeStart.main(Native Method)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819): Caused by: java.lang.ClassCastException: android.widget.TextView
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at com.example.android.My.App.onCreate(Commander.java:132)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1093)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1785)
12-20 03:00:48.415: ERROR/AndroidRuntime(12819):     ... 11 more

Who knows what the problem is? What do I need to do to fix it? Does anybody work with Smaato? Used Smaato SDK version 2.5.4.

A: It shows you get a ClassCastException in your Commander class at line 132. Please post the onCreate method of your Commander class, or look into TextView casts in the onCreate method.

A: Here is the solution that worked for me: in Eclipse, right click on the project -> Properties -> Java Build Path -> Order and Export, check the SOMA jar file in the path, and try now!

A: I copied the Smaato jar into the libs/ directory; that seemed to help.
{ "language": "en", "url": "https://stackoverflow.com/questions/8569710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Keycloak: share same session between two client IDs

I'm not an expert on this subject. In Keycloak, on the same realm, we have defined two client IDs. My client IDs are configured with OpenID Connect and the authorization code flow, so a user can be authenticated with his credentials on both clients. I have a problem: one of my users shares the same Keycloak session between these two different client IDs. In fact, when the user logs in on client A, he is logged out of client B, and likewise, when he logs in on client B, he is logged out of client A. Why is it possible to share the same Keycloak session? And how can I be sure to have two different Keycloak sessions?

UPDATE: I noticed that when the user logs in on client A or client B, he uses the same browser. If he logs in on client A, he doesn't need to enter his login/password on client B. The result is that there is one Keycloak session. (If the user uses a different browser for each client, there is one Keycloak session per client.) Is it possible to force one Keycloak session per client ID?

A: What you are describing is a basic function in almost all AM software implementing OIDC; there is no such thing as logging in to clientA. The users always log in to the IdP, i.e. Keycloak. Clients don't have sessions by default; Keycloak does, and clients use the Keycloak session in their OIDC flow. For example, if you are already authenticated to Keycloak and you try to do an OIDC flow with ClientA or ClientB, you won't be prompted to enter a username/password; Keycloak will use the existing session. So if you want to have a different session for the same user, you have to create your own session. For example, if your clients are using Apache, you can use the Apache OIDC module to create a local session (which will be your ClientA session). As for the Keycloak session, you can't have two sessions, but you can have one Keycloak session for the two clients and two Apache sessions, one for each client.
{ "language": "en", "url": "https://stackoverflow.com/questions/64350905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C# Please specify the assembly explicitly in the type name

I have an Umbraco site where I wish to add a file uploader from a separate, working VS project. I have added it to my Umbraco VS project with Add Existing Project. I then moved the code from default.aspx to a user control so I could add it to my Umbraco site. But I get a 500 error when trying to upload some files. In my event viewer I have the following two exceptions:

Exception message: The type 'jQueryUploadTest.FileTransferHandler' is ambiguous: it could come from assembly 'C:\UmbracoSites\sitename\sitename\bin\sitename.DLL' or from assembly 'C:\UmbracoSites\sitename\sitename\bin\jQueryUploadTest.DLL'. Please specify the assembly explicitly in the type name.

The directory specified for caching compressed content C:\Users\name\AppData\Local\Temp\iisexpress\IIS Temporary Compressed Files\Clr4IntegratedAppPool is invalid. Static compression is being disabled.

What does it mean? How can it come from sitename.DLL when the DLLs etc. are copied from jQueryUploadTest on build and have nothing to do with sitename.DLL?

A: You surely have the type jQueryUploadTest.FileTransferHandler in both assemblies. Make sure that sitename.dll doesn't have a type name like that, or, to make sure this problem won't occur again, change the namespace of one of the projects. It's good practice to have different namespaces for different assemblies, exactly to avoid this kind of problem.
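Besides renaming, the error message's own suggestion can also be applied as a stopgap: wherever the handler type is registered, give the assembly-qualified type name ("Namespace.Type, AssemblyName") so the runtime knows which DLL to load it from. A hypothetical classic-mode web.config registration, with the path and assembly name assumed from the error message:

```
<system.web>
  <httpHandlers>
    <!-- "Namespace.Type, AssemblyName" pins the type to one assembly -->
    <add verb="*" path="FileTransferHandler.ashx"
         type="jQueryUploadTest.FileTransferHandler, jQueryUploadTest" />
  </httpHandlers>
</system.web>
```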
{ "language": "en", "url": "https://stackoverflow.com/questions/17816500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Identify offset of a value in numpy

Given a flattened NxN array in numpy, I'd like to find the minimum value and its offset in the array. I've managed to find the minimum value, but is it possible to identify the offset (which row and which column)? In the example below, a = 0.5; how can I know if it is the 0.5 from [1,0] or from [2,1]?

from numpy import *

value = 0
NUM_NODE = 5
EDGE = array(zeros((NUM_NODE, NUM_NODE)))
EDGE = [[ 0., 0., 0., 0., 0. ],
        [ 0.5, 0., 0., 0., 0. ],
        [ 1., 0.5, 0., 0., 0. ],
        [ 1.41421356, 1.11803399, 1., 0., 0. ],
        [ 1., 1.11803399, 1.41421356, 1., 0. ]]

a = reshape(EDGE, NUM_NODE*NUM_NODE)
print min(filter(lambda x : x > value, a))

A: You could use np.where:

>>> edge = np.array(EDGE)
>>> edge[edge > 0].min()
0.5
>>> np.where(edge == edge[edge > 0].min())
(array([1, 2]), array([0, 1]))

which gives the x coordinates and the y coordinates which hit the minimum value separately. If you want to combine them, there are lots of ways, e.g.

>>> np.array(np.where(edge == edge[edge > 0].min())).T
array([[1, 0],
       [2, 1]])

A few asides: from numpy import * is a bad habit because it replaces some built-in functions with numpy's versions, which work differently and in some cases have the opposite results; ALLCAPS variable names are usually only given to constants; and your EDGE = array(zeros((NUM_NODE, NUM_NODE))) line doesn't do anything, because your EDGE = [[ 0., ... line immediately makes a new list and binds EDGE to it instead. You made an array and threw it away. There's also no need to call array here; zeros already returns an array.

A: numpy.ndenumerate will enumerate over the array (by the way, you shouldn't lose position information by reshaping).

In [43]: a = array(EDGE)
In [44]: a
Out[44]:
array([[ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 0.5       ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 1.        ,  0.5       ,  0.        ,  0.        ,  0.        ],
       [ 1.41421356,  1.11803399,  1.        ,  0.        ,  0.        ],
       [ 1.        ,  1.11803399,  1.41421356,  1.        ,  0.        ]])

In [45]: min((i for i in ndenumerate(a) if i[1] > 0), key=lambda i: i[1])
Out[45]: ((1, 0), 0.5)

Or you can do it the old way if you want every occurrence:

In [11]: m, ms = float("inf"), []
In [12]: for pos, i in ndenumerate(a):
    ...:     if not i: continue
    ...:     if i < m:
    ...:         m, ms = i, [pos]
    ...:     elif i == m:
    ...:         ms.append(pos)
    ...:

In [13]: ms
Out[13]: [(1, 0), (2, 1)]
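An alternative to the np.where approaches above, if only the first offset of the minimum is needed: mask out the excluded zeros with +inf, take np.argmin (which returns a flat index), and convert it back to a (row, col) pair with np.unravel_index. A minimal sketch using the question's array:

```python
import numpy as np

EDGE = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.5, 0.0, 0.0, 0.0],
    [1.41421356, 1.11803399, 1.0, 0.0, 0.0],
    [1.0, 1.11803399, 1.41421356, 1.0, 0.0],
])

# Replace the excluded zeros with +inf so argmin ignores them,
# then map the flat index back to (row, col).
masked = np.where(EDGE > 0, EDGE, np.inf)
row, col = np.unravel_index(np.argmin(masked), masked.shape)
print(row, col, EDGE[row, col])  # -> 1 0 0.5
```

Note that argmin reports only the first of the tied minima, (1, 0); the np.where approaches above are still needed to get every occurrence.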
{ "language": "en", "url": "https://stackoverflow.com/questions/22894076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: export c++ classes to DLL, including external DLL

So I am trying to export into a DLL a class that uses a class from another DLL. An example will explain the situation better.

//aclass.h
class __declspec(dllexport) aclass{
public:
    void amethod();
};

//aclass.cpp
#include "aclass.h"

void aclass::amethod(){
    std::cout<<"a method"<<std::endl;
}

So without including anything external this compiles (aclass.dll) and runs from other projects, e.g.

#include <aclass.h>

void main(){
    aclass a;
    a.amethod();
    _getch();
}

The problem arises when I include an external header (that comes with a DLL and a LIB file, the paths of which are passed to the compiler). As soon as I include the external header:

//aclass.h
#include <externalapi.h>

class __declspec(dllexport) aclass{
public:
    void amethod();
};

without even calling any class or function from the externalapi, when I try to compile I get:

Error 1 error LNK2001: unresolved external symbol __imp__Math_e C:\...\aclass.obj aclass
Error 1 error LNK2001: unresolved external symbol __imp__Math_pi C:\...\aclass.obj aclass
Error 1 error LNK2001: unresolved external symbol __imp__Math_zero C:\...\aclass.obj aclass
....etc

Originally (without the __declspec(dllexport) directive) I would access these with something like Math::pi, Math::e, etc., as they are static constants of the externalapi. From what I understand of how the whole thing with DLL exporting works, this is what is called name mangling(?). So two questions:

*What should I change in the syntax so that the function names of the external library are "loaded" with their original C++ names? This has to be possible somehow. Up until now I was developing my code as a stand-alone application, meaning that I was not using the __declspec(dllexport) keyword; I was including the same header file, using the exact same DLL and LIB file, and everything was compiling and running smoothly. Obviously, the code above is an oversimplification of my actual code to point out the problem.

*In most of the "export to DLL" how-tos I have found, people use __declspec(dllexport) and __declspec(dllimport). I understand that __declspec(dllexport) more or less tells the compiler to export that part of the code to the DLL, and this makes sense. What exactly is the meaning of __declspec(dllimport)? For instance, why is the first piece of code I wrote at the beginning compiling and usable as a DLL without the need for __declspec(dllimport)?

Thanks for your time!

A: __declspec(dllimport) tells the compiler that the function will be imported from a DLL using an import LIB, rather than found in a different OBJ file or a static LIB. BTW: it sounds like you may not want DLLs at all. DLLs are specifically for swapping out the library after compilation without having to recompile the application, or for sharing large amounts of object code between applications. If all you want is to reuse a set of code between different projects without having to compile it for each one, static libraries are sufficient, and easier to reason about.

A: It's usually best to have a conditional macro that imports or exports depending on who is compiling:

#ifdef MODULE1
#define MODULE1_DECL __declspec(dllexport)
#else
#define MODULE1_DECL __declspec(dllimport)
#endif

This way you export the functions etc. that you want to export, and import what you want to use. For example, see this SO post. You #define MODULE1 (maybe as a project setting) in the project that will export the definitions, and use the MODULE1_DECL define rather than explicitly putting either __declspec(dllimport) or __declspec(dllexport) in your code. Read the manual for further details. Name mangling just happens in C++: it encodes the namespaces, the parameters a function overload takes, etc., for disambiguation.
{ "language": "en", "url": "https://stackoverflow.com/questions/17853382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best way to have two modules which use functions from one another in Perl?

Unfortunately, I'm a total noob when it comes to creating packages, exporting, etc. in Perl. I tried reading some of the documentation and often found myself dozing off from the long chapters. It would be helpful if I could find what I need to understand in just one simple web page without the need to scroll down. :P

Basically I have two modules, A & B, and A will use some functions from B and B will use some functions from A. I get a ton of warnings about functions being redefined when I try to compile via perl -c. Is there a way to do this properly? Or is my design flawed? If so, what would be a better way? The reason I did this was to avoid copying and pasting the other module's functions into this module and renaming them.

A: So... the suggestion to factor out common code into another module is a good one. But you shouldn't name modules *.pl, and you shouldn't load them by require-ing a certain pathname (as in require "../lib/foo.pl";). (For one thing, saying '..' makes your script depend on being executed from the same working directory every time. So your script may work when you run it as perl foo.pl, but it won't work when you run it as perl YourApp/foo.pl. That is generally not good.)

Let's say your app is called YourApp. You should build your application as a set of modules that live in a lib/ directory. For example, here is a "Foo" module; its filename is lib/YourApp/Foo.pm.

package YourApp::Foo;
use strict;

sub do_something {
    # code goes here
}

Now, let's say you have a module called "Bar" that depends on "Foo". You just make lib/YourApp/Bar.pm and say:

package YourApp::Bar;
use strict;
use YourApp::Foo;

sub do_something_else {
    return YourApp::Foo::do_something() + 1;
}

(As an advanced exercise, you can use Sub::Exporter or Exporter to make use YourApp::Foo install subroutines in the consuming package's namespace, so that you don't have to write YourApp::Foo:: before everything.)

Anyway, you build your whole app like this. Logical pieces of functionality should be grouped together in modules (or, even better, classes). To make all this run, you write a small script that looks like this (I put these in bin/, so let's call it bin/yourapp.pl):

#!/usr/bin/env perl
use strict;
use warnings;
use feature ':5.10';

use FindBin qw($Bin);
use lib "$Bin/../lib";

use YourApp;
YourApp::run(@ARGV);

The key here is that none of your code is outside of modules, except a tiny bit of boilerplate to start your app running. This is easy to maintain, and more importantly, it makes it easy to write automated tests. Instead of running something from the command line, you can just call a function with some values. Anyway, this is probably off-topic now. But I think it's important to know.

A: The simple answer is to not test-compile modules with perl -c... use perl -e'use Module' or perl -e0 -MModule instead. perl -c is designed for doing a test compile of a script, not a module. When recursively using modules, the key point is to make sure anything externally referenced is set up early. Usually this means at least making sure @ISA is set in a compile-time construct (in BEGIN{} or via "use parent" or the deprecated "use base"), and that @EXPORT and friends are set in BEGIN{}.

The basic problem is that if module Foo uses module Bar (which uses Foo), compilation of Foo stops right at that point until Bar is fully compiled and its mainline code has executed. Making sure that whatever parts of Foo that Bar's compilation and mainline code need are already there is the answer. (In many cases, you can sensibly separate out the functionality into more modules and break the recursion. This is best of all.)

A: It's not really good practice to have circular dependencies. I'd advise factoring something or other out to a third module, so you can have A depends on B, A depends on C, B depends on C.
{ "language": "en", "url": "https://stackoverflow.com/questions/580418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Tkinter OptionMenu arguments

I like to explicitly specify arguments. For example:

def func(a, b):
    return a+b

Anytime I call it I write:

func(a=6, b=7)

Instead of:

func(6, 7)

I can't do this with the OptionMenu class in Tkinter. The following example is within a custom class:

self.var = tk.StringVar()
choices = ['op1', 'op2']
self.menu_m = tk.OptionMenu(master=self.frame_1, variable=self.var, *choices)

This results in "multiple values for the argument 'master'". How can I explicitly define the master, variable, and list of options to use?

A: Unfortunately, the OptionMenu widget is somewhat poorly implemented. Regardless of your preferences, the OptionMenu isn't designed to accept keyword arguments for the first three parameters: master, variable, and value. They must be presented in that order as positional arguments.

self.menu_m = tk.OptionMenu(self.frame_1, self.var, *choices)
{ "language": "en", "url": "https://stackoverflow.com/questions/38536999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Scrapy for eCommerce with Selenium Infinite Scroll, Help Returning Values

I am rather new to programming. I've done a few small projects on my own and have started getting into making web scrapers with Scrapy. I'm trying to make a scraper for Home Depot and have run into issues. The problem this is trying to solve is that the Home Depot webpage has JavaScript that only loads when you scroll down the page, so I added some code I found that scrolls down the page to reveal all the products, so that it can grab the title, review count, and price of each product tile. Before adding this code, it was indeed scraping product info correctly; after adding it, I originally had issues with the code only scraping the last page of results, so I moved some things around. I think, being new, I just don't understand something about objects in Scrapy and how information is passed, particularly the HTML I'm trying to get it to return values for in parse_product. So far this does open the page and go to the next page, but it's not scraping any products anymore. Where am I going wrong? I have been struggling with this for hours. I'm taking a class in web scraping, and while I've had some success, it seems like if I have to do anything slightly off course it's a massive struggle.
import scrapy import logging from scrapy.utils.markup import remove_tags from selenium import webdriver from selenium.webdriver.chrome.options import Options from shutil import which from scrapy.selector import Selector from time import sleep from datetime import datetime class HdSpider(scrapy.Spider): name = 'hd' allowed_domains = ['www.homedepot.com'] start_urls = ['https://www.homedepot.com/b/Home-Decor-Artificial-Greenery-Artificial-Flowers/N-5yc1vZcf9y?Nao='] #Add %%Nao= to end of URL you got from search or category def parse(self, response): options = Options() chrome_path = which("chromedriver") driver = webdriver.Chrome(executable_path=chrome_path)#, chrome_options=options) p = 0 # The home depot URLs end in =24, =48 etc basically products are grouped 24 on a page so this is my way of getting the next page start_url = 'https://www.homedepot.com/b/Home-Decor-Artificial-Greenery-Artificial-Flowers/N-5yc1vZcf9y?Nao=' while p < 25: driver.get(start_url + str(p)) driver.set_window_size(1920, 1080) #sleep(2) scroll_pause_time = 1 screen_height = driver.execute_script("return window.screen.height;") # get the screen height of the web i = 1 while True: #this is the infinite scoll thing which reveals all javascript generated product tiles driver.execute_script("window.scrollTo(0, {screen_height}*{i});".format(screen_height=screen_height, i=i)) i += 1 sleep(scroll_pause_time) scroll_height = driver.execute_script("return document.body.scrollHeight;") if (screen_height) * i > scroll_height: break self.html = driver.page_source p = p + 24 def parse_product(self, response): resp = Selector(text=self.html) for products in resp.xpath("//div[@class='product-pod--padding']"): date = datetime.now().strftime("%m-%d-%y") brand = products.xpath("normalize-space(.//span[@class='product-pod__title__brand--bold']/text())").get() title = products.xpath("normalize-space(.//span[@class='product-pod__title__product']/text())").get() link = products.xpath(".//div//a//@href").get() 
model = products.xpath("normalize-space(.//div[@class='product-pod__model'][2]/text())").get() review_count = products.xpath("normalize-space(.//span[@class='product-pod__ratings-count']/text())").get() price = products.xpath("normalize-space(.//div[@class='price-format__main-price']//span[2]/text())").get() yield { 'Date scraped' : date, 'Brand' : brand, 'Title' : title, 'Product Link' : "https://www.homedepot.com" + remove_tags(link), 'Price' : "$" + price, 'Model #' : model, 'Review Count' : review_count } A: I don't see where you runs parse_product. It will not execute it automatically for you. Besides function like your parse_product with response is rather to use it in some yield Requests(supage_url, parse_product) to parse data from subpage, not from page which you get in parse. You should rather move code from parse_product into parse like this: def parse(self, response): options = Options() chrome_path = which("chromedriver") driver = webdriver.Chrome(executable_path=chrome_path)#, chrome_options=options) driver.set_window_size(1920, 1080) p = 0 # The home depot URLs end in =24, =48 etc basically products are grouped 24 on a page so this is my way of getting the next page start_url = 'https://www.homedepot.com/b/Home-Decor-Artificial-Greenery-Artificial-Flowers/N-5yc1vZcf9y?Nao=' scroll_pause_time = 1 screen_height = driver.execute_script("return window.screen.height;") # get the screen height of the web while p < 25: driver.get(start_url + str(p)) #sleep(2) i = 1 # scrolling while True: #this is the infinite scoll thing which reveals all javascript generated product tiles driver.execute_script("window.scrollTo(0, {screen_height}*{i});".format(screen_height=screen_height, i=i)) i += 1 sleep(scroll_pause_time) scroll_height = driver.execute_script("return document.body.scrollHeight;") if (screen_height) * i > scroll_height: break # after scrolling self.html = driver.page_source p = p + 24 resp = Selector(text=self.html) for products in 
resp.xpath("//div[@class='product-pod--padding']"): date = datetime.now().strftime("%m-%d-%y") brand = products.xpath("normalize-space(.//span[@class='product-pod__title__brand--bold']/text())").get() title = products.xpath("normalize-space(.//span[@class='product-pod__title__product']/text())").get() link = products.xpath(".//div//a//@href").get() model = products.xpath("normalize-space(.//div[@class='product-pod__model'][2]/text())").get() review_count = products.xpath("normalize-space(.//span[@class='product-pod__ratings-count']/text())").get() price = products.xpath("normalize-space(.//div[@class='price-format__main-price']//span[2]/text())").get() yield { 'Date scraped' : date, 'Brand' : brand, 'Title' : title, 'Product Link' : "https://www.homedepot.com" + remove_tags(link), 'Price' : "$" + price, 'Model #' : model, 'Review Count' : review_count } But I would do other changes - you use p = p + 24 but when I check page in browser then I see I need p = p + 48 to get all product. Instead of p = p + ... I would rather use Selenium to click button > to get next page. EDIT: My version with other changes. Everyone can run it without creating project. 
#!/usr/bin/env python3 import scrapy from scrapy.utils.markup import remove_tags from selenium import webdriver from selenium.webdriver.chrome.options import Options from shutil import which from scrapy.selector import Selector from time import sleep from datetime import datetime class HdSpider(scrapy.Spider): name = 'hd' allowed_domains = ['www.homedepot.com'] start_urls = ['https://www.homedepot.com/b/Home-Decor-Artificial-Greenery-Artificial-Flowers/N-5yc1vZcf9y?Nao='] #Add %%Nao= to end of URL you got from search or category def parse(self, response): options = Options() chrome_path = which("chromedriver") driver = webdriver.Chrome(executable_path=chrome_path) #, chrome_options=options) #driver.set_window_size(1920, 1080) print(dir(driver)) driver.maximize_window() scroll_pause_time = 1 # loading first page start_url = 'https://www.homedepot.com/b/Home-Decor-Artificial-Greenery-Artificial-Flowers/N-5yc1vZcf9y?Nao=0' driver.get(start_url) screen_height = driver.execute_script("return window.screen.height;") # get the screen height of the web #while True: # all pages for _ in range(5): # only 5 pages #sleep(scroll_pause_time) # scrolling page i = 1 while True: #this is the infinite scoll thing which reveals all javascript generated product tiles driver.execute_script(f"window.scrollBy(0, {screen_height});") sleep(scroll_pause_time) i += 1 scroll_height = driver.execute_script("return document.body.scrollHeight;") if screen_height * i > scroll_height: break # after scrolling resp = Selector(text=driver.page_source) for products in resp.xpath("//div[@class='product-pod--padding']"): date = datetime.now().strftime("%m-%d-%y") brand = products.xpath("normalize-space(.//span[@class='product-pod__title__brand--bold']/text())").get() title = products.xpath("normalize-space(.//span[@class='product-pod__title__product']/text())").get() link = products.xpath(".//div//a//@href").get() model = 
products.xpath("normalize-space(.//div[@class='product-pod__model'][2]/text())").get() review_count = products.xpath("normalize-space(.//span[@class='product-pod__ratings-count']/text())").get() price = products.xpath("normalize-space(.//div[@class='price-format__main-price']//span[2]/text())").get() yield { 'Date scraped' : date, 'Brand' : brand, 'Title' : title, 'Product Link' : "https://www.homedepot.com" + remove_tags(link), 'Price' : "$" + price, 'Model #' : model, 'Review Count' : review_count } # click button `>` to load next page try: driver.find_element_by_xpath('//a[@aria-label="Next"]').click() except: break # --- run without project and save in `output.csv` --- from scrapy.crawler import CrawlerProcess c = CrawlerProcess({ 'USER_AGENT': 'Mozilla/5.0', # save in file CSV, JSON or XML 'FEEDS': {'output.csv': {'format': 'csv'}}, # new in 2.1 }) c.crawl(HdSpider) c.start()
{ "language": "en", "url": "https://stackoverflow.com/questions/67083509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: homebrew Formula to install package

As part of onboarding new joiners, we provide an FAQ for running our internal Homebrew formulae, which install from zip/tar.gz files; we install them using "brew tap ...". However, we are separately running "brew install [email protected]" as well as running "brew tap ...".

Question: I wanted to consolidate all brew commands into one using a formula, i.e. I wanted the ability to install this package, "[email protected]", from the tap formula we have. Please share any information you have. Let me know if this is not clear.

A: You can run brew install [email protected] && brew tap ..., but you can't combine brew install [email protected] and brew tap ... into one brew command. However, running brew install on a formula from a tap you don't have automatically taps the latter:

brew install org/tap/thing

is equivalent to:

brew tap org/tap
brew install org/tap/thing

where org/tap is the GitHub repository https://github.com/org/tap. This means that if you want to install [email protected] as well as some other formula from that tap, you can run a command like this:

brew install [email protected] org/firsttimesetup/xyz

which is equivalent to:

brew tap org/firsttimesetup
brew install [email protected] org/firsttimesetup/xyz
{ "language": "en", "url": "https://stackoverflow.com/questions/64457082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unable to connect to PayPal Sandbox to verify message received by IPN listener I am testing my IPN listener using the IPN simulator. After sending IPN on the IPN simulator, my log shows that my IPN listener receives the IPN but when it sends back the message with cmd=_notify-validate added to it, the sandbox does not respond. The error shown on the IPN simulator is "IPN Delivery Failed:500 Internal Server Error". The following error is met after my listener sends out the message to sandbox: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 173.0.82.77:443". When I test my listener with the live PayPal URL, it responds with the "INVALID" message - which shows that my listener is working fine. After checking with my web hosting company, they advised that the PayPal Sandbox IP was not blocked and that the problem should lie with PayPal sandbox. Please advise if it is possible that PayPal has blocked the source IP and hence is not responding to it. If so, how can I go about unblocking the source IP? Any help is appreciated. Thanks. I am using sample ASP.NET C# code provided by PayPal for my IPN listener and my log stops at "Sending request to PayPal...". 
SSLogger.logger.Info("IPN Invoked");

//Post back to either sandbox or live
string strSandbox = "https://www.sandbox.paypal.com/cgi-bin/webscr";
string strLive = "https://www.paypal.com/cgi-bin/webscr";
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(strSandbox);

//Set values for the request back
req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";
byte[] param = Request.BinaryRead(Request.ContentLength);

//get request values
string strRequest = "cmd=_notify-validate&" + Encoding.ASCII.GetString(param);

//set request values
req.ContentLength = strRequest.Length;

SSLogger.logger.Info("sb: " + Encoding.ASCII.GetString(param));
SSLogger.logger.Info("strRequest: " + strRequest);

//Send the request to PayPal and get the response
SSLogger.logger.Info("Sending request to PayPal...");
StreamWriter streamOut = new StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII);
streamOut.Write(strRequest);
streamOut.Close();
SSLogger.logger.Info("Request sent out to PayPal.");

StreamReader streamIn = new StreamReader(req.GetResponse().GetResponseStream());
string strResponse = streamIn.ReadToEnd();
streamIn.Close();
SSLogger.logger.Info("Response from PayPal received.");
SSLogger.logger.Info(strResponse);

A: Try this instead:

req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";
byte[] param = Request.BinaryRead(HttpContext.Current.Request.ContentLength);
string strRequest = Encoding.ASCII.GetString(param);
strRequest += "&cmd=_notify-validate";
req.ContentLength = strRequest.Length;
{ "language": "en", "url": "https://stackoverflow.com/questions/15921181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can I make large rightBarButtonItem in Swift? What I have:

self.title = "Title"
navigationController?.navigationBar.prefersLargeTitles = true
self.navigationController?.navigationBar.largeTitleTextAttributes = [NSAttributedString.Key.foregroundColor : UIColor.white]
navigationItem.rightBarButtonItem = UIBarButtonItem(title: "Add", style: .plain, target: self, action: #selector(addTapped))
navigationItem.rightBarButtonItem?.tintColor = .white

Is it possible to make this Add look exactly like this Title? Something like this: Or any other solution for this - title left - button with image right, same heights?

A: Please go through this answer, which shows how to manage the navigation bar title when it is collapsed and when it is large. It won't give you an exact answer but it will definitely help you achieve what you want. Other than that, please go through this answer, which will help you understand how to give x,y positions to the right bar button item. A quick overview of how you can achieve what you want by combining these two answers:

* Set the position of the right bar button item according to the position or height of the navigation bar. Link number 2 will help you do that.
* Observe your navigation bar: when its height is increased or decreased, change the position of your right bar button item accordingly. And done.

By using this you can manage the position of your bar button item in both states: when the navigation bar is enlarged you can show the button at one position, and when it is collapsed you can show it at another, so the layout doesn't end up the reverse of what your question asks after the position change.
A: I've done something similar in my app - not exactly what you are looking for, but should give you enough to go on:

func setupNavBar() {
    let rightButton = UIButton()
    rightButton.setTitle("Leave", for: .normal)
    rightButton.addTarget(self, action: #selector(rightButtonTapped(button:)), for: .touchUpInside)
    navigationController?.navigationBar.addSubview(rightButton)
    rightButton.tag = 97
    rightButton.frame = CGRect(x: self.view.frame.width, y: 0, width: 120, height: 20)

    let targetView = self.navigationController?.navigationBar

    let trailingConstraint = NSLayoutConstraint(item: rightButton, attribute: .trailingMargin,
        relatedBy: .equal, toItem: targetView,
        attribute: .trailingMargin, multiplier: 1.0, constant: -16)
    let bottomConstraint = NSLayoutConstraint(item: rightButton, attribute: .bottom,
        relatedBy: .equal, toItem: targetView,
        attribute: .bottom, multiplier: 1.0, constant: -6)

    rightButton.translatesAutoresizingMaskIntoConstraints = false
    NSLayoutConstraint.activate([trailingConstraint, bottomConstraint])
}

I also created this function to remove it (hence the tag use above):

func removeRightButton() {
    guard let subviews = self.navigationController?.navigationBar.subviews else {
        log.info("Attempt to remove right button but subviews don't exist")
        return
    }
    for view in subviews {
        if view.tag == 97 {
            view.removeFromSuperview()
        }
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/63791874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Working with ampersands in $_GET functions If I have a URL like asdf.com/index.php?a=0&b=2, then using $_GET for a would be 0 and for b would be 2. However, the term I put into a single $_GET function has an ampersand in it already, like a=Steak&Cheese. Is there a way to make ampersands work without the $_GET variable thinking its job ends when the ampersand shows up (therefore not pulling the entire term)? A: urlencode() it so & turns into %26. If you need to make a query string out of some parameters, you can use http_build_query() instead and it will URL encode your parameters for you. On the receiving end, your $_GET values will be decoded for you by PHP, so the query string a=Steak%26Cheese corresponds to $_GET = array('a' => 'Steak&Cheese'). A: Yes, you must URL Encode before request URL. Read this http://www.w3schools.com/TAGS/ref_urlencode.asp A: Here is a previous post covering this in jquery AJAX requests, but to summarize you have to encoded the uri. This will convert the ampersand value to a ascii value. Ampersand in GET, PHP
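As an illustrative aside (not part of the original answers), the same split-then-decode behavior can be sketched in Python, whose urllib.parse mirrors PHP's urlencode()/$_GET handling: an unencoded & splits the value into a bogus extra parameter, while encoding it to %26 keeps the value intact.

```python
from urllib.parse import parse_qs, quote, urlencode

# unencoded ampersand: the parser sees three parameters, truncating 'a'
raw = "a=Steak&Cheese&b=2"
assert parse_qs(raw, keep_blank_values=True) == {'a': ['Steak'], 'Cheese': [''], 'b': ['2']}

# encode the value first: & becomes %26 and survives parsing intact
assert quote("Steak&Cheese", safe="") == "Steak%26Cheese"
encoded = urlencode({'a': 'Steak&Cheese', 'b': 2})
assert parse_qs(encoded) == {'a': ['Steak&Cheese'], 'b': ['2']}
```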
{ "language": "en", "url": "https://stackoverflow.com/questions/5228924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Asp.Net Core 5 Docker container exit (1) I'm trying to run a docker container for my api application. The application is asp.net mvc (.net core) 5.0. I'm publishing the application to a /publish folder like:

dotnet publish -c Release -o ./publish

The Dockerfile contains:

FROM mcr.microsoft.com/dotnet/sdk:5.0
WORKDIR /app
COPY publish/ app/
EXPOSE 80
ENTRYPOINT ["dotnet", "AmaraCode.RainMaker.DataService.dll"]

To create the image I'm using:

docker build -t amaracodellc/dataservice .

To create the container I'm using:

docker run --name dataservice_rm --network="rm" --ip 192.168.0.40 -p 7601:80 -d amaracodellc/dataservice:latest

The container creates but exits immediately with Exit (1). The log says:

Could not execute because the specified command or file was not found. Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET program, but dotnet-AmaraCode.RainMaker.DataService.dll does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.

I'm a bit stumped because the assembly name is correct since I copied it from the screen versus typing it in. There are many messages on the web for Exit (1) but I can't seem to find one that helps in this case. It is worth noting that I'm VERY new to Docker so I'm hoping someone here can see something that I'm not seeing. Thanks in advance Scott
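An aside not from the original thread: one likely cause, given the Dockerfile above, is the COPY destination. With WORKDIR /app, the relative destination in COPY publish/ app/ places the published files in /app/app/, so dotnet cannot find the dll in the working directory. A sketch of a corrected Dockerfile under that assumption:

```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:5.0
WORKDIR /app
# copy the published output into the working directory itself,
# not into a nested app/ subfolder (assumption: this is the bug)
COPY publish/ ./
EXPOSE 80
ENTRYPOINT ["dotnet", "AmaraCode.RainMaker.DataService.dll"]
```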
{ "language": "en", "url": "https://stackoverflow.com/questions/66104704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: windows 2008 server how to get ASPEmail to send to own domain I have a friend who runs his website on a windows 2008 server. I have set up an asp classic form to email page on his website (hisdomain.com) which takes the input from a contact form and sends it out by email. The problem is that it only works if the email is being sent to a different domain ([email protected]). If the email is being sent to the same domain ([email protected]) it never arrives.

Mailer.AddAddress "[email protected]"

Works.

Mailer.AddAddress "[email protected]"

Does not work. I have hunted this site and Google for a resolution but cannot find one. Does anyone know how to fix this issue? Many thanks Tog Porter

A: Update: It turns out he uses Gmail business to control his domain emails and there was a filter in there that bounced the messages because the sender was the same as the recipient. Bypassing the Gmail spam filter has fixed the problem.

A: Use this code and replace your smtp server information

<%
Set myMail=CreateObject("CDO.Message")
myMail.BodyPart.Charset = "UTF-8"
myMail.Subject= Your Message Subject
myMail.From= "[email protected]"
myMail.To=Receiver Email Address
myMail.CreateMHTMLBody "Test Email Subject"
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/sendusing")=2
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpserver")= SMTP_SERVER
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpserverport")=25
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate") = 1
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/sendusername")=SMTP_Email_Username
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/sendpassword")=Smtp_Email_Password
myMail.Configuration.Fields.Update
myMail.Send
set myMail=nothing
%>
{ "language": "en", "url": "https://stackoverflow.com/questions/51513741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to compile OpenGL with a python C++ extension using distutils on Mac OSX? When I try it I get:

ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/cscalelib.so, 2): Symbol not found: _glBindFramebufferEXT
Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/cscalelib.so
Expected in: dynamic lookup

I've tried all sort of things in the setup.py file. What do I actually need to put in it to link to OpenGL properly? My code compiles fine so there's no point putting that on there. Here is setup.py:

from distutils.core import setup, Extension

module1 = Extension('cscalelib',
    extra_compile_args = ["-framework OpenGL", "-lm", "-lGL", "-lGLU"],
    sources = ['cscalelib.cpp'])

setup (name = 'cscalelib',
    version = '0.1',
    description = 'Test for setup_framebuffer',
    ext_modules = [module1])

A: I didn't realise I had to remove the build directory. Now it imports correctly. For anyone that needs to know you need:

extra_link_args=['-framework', 'OpenGL']

Delete the build directory and try it again. It will work.
{ "language": "en", "url": "https://stackoverflow.com/questions/2754460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Ansible consume AD command output i got a problem with how to consume the following command output in Ansible. Basically im trying to get the list of Active Directory OUs and loop over that list to search for a specific name. My script works well when multiple OUs exist but i have an issue when only a single OU exists. Explained below:

tasks:
  - name: PS - Pull System OUs from AD
    win_command: powershell -
    args:
      stdin: "Get-ADOrganizationalUnit -LDAPFilter '(name=*)' -SearchScope 1 -SearchBase 'OU=SYSTEMS,DC=domain,DC=int' -Server domain.int | select-object name | ConvertTo-json"
    become: yes
    register: ou_reg_out

  - name: Select Systems OU
    block:
      - name: Set list of standardized OUs as facts
        set_fact:
          ou_reg_list: "{{ ou_reg_out.stdout }}"

      - name: Set System OU for system
        set_fact:
          ou_name: "OU={{item.name}}"
        loop: "{{ ou_reg_list }}"
        when: (item.name|upper) == (srv_type|upper)
    when: ou_reg_out.stdout|length != 0

basically i need to be able to loop over the ou_reg_out.stdout. It works when the command returns multiple OUs, as ou_reg_out.stdout returns a list:

ou_reg_out.stdout:
  - { name: OU1 }
  - { name: OU2 }

issue is when only a single OU exists, the command doesnt return a list:

ou_reg_out.stdout:
  { name: OU1 }

Any idea how to work around this problem?

A: Test the type of the variable and branch the code. The json_query filter helps to select the items from the list. Then ternary helps to conditionally select the value. The value of the first item that matches the condition is used. Defaults to 'NOTFOUND'.
For example the play below, for both versions of ou_reg_list:

- hosts: localhost
  vars:
    ou_reg_list:
      - { name: OU1 }
      - { name: OU2 }
    # ou_reg_list:
    #   { name: OU1 }
    srv_type: 'ou1'
  tasks:
    - set_fact:
        ou_name: "OU={{ (ou_reg_list.name == srv_type|upper)|
                         ternary( ou_reg_list.name, 'NOTFOUND') }}"
      when: ou_reg_list is mapping

    - block:
        - set_fact:
            ou_names: "{{ ou_reg_list|json_query(query) }}"
          vars:
            query: "[?name=='{{ srv_type|upper }}'].name"
        - set_fact:
            ou_name: "OU={{ (ou_names|length > 0)|
                             ternary( ou_names.0, 'NOTFOUND') }}"
      when: ou_reg_list is not mapping

    - debug:
        var: ou_name

gives

"ou_name": "OU=OU1"
{ "language": "en", "url": "https://stackoverflow.com/questions/59122026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C# XmlSerializer Deserialize Date format I need to bring (among others) a date from an XML file to a PostgreSQL table in my C# application. My problem is that if I declare the field as a string like this:

[XmlAttribute("startDate")]
public string StartDate { get; set; }

it doesn't get deserialized at all (the value is null). If I declare it as DateTime like this:

[XmlAttribute("startDate")]
public DateTime StartDate { get; set; }

no matter what I enter into the XML field I always get the value 01.01.0001 00:00:00. I tried to enter the date using YYYY-MM-DD, YYYY/MM/DD and DD.MM.YYYY. What am I doing wrong? It works perfectly fine for other strings and integers.

Edit: Example XML:

<?xml version="1.0" encoding="utf-8" ?>
<command name="TestCommand">
    <weeks>11</weeks> <!-- this works fine -->
    <startDate>2017/02/01</startDate> <!-- this doesn't -->
</command>

Deserialization happens using XmlSerializer.Deserialize() into a Config file which consists of the fields I gave examples of above.

A: Ok the answer is simple. This is not XmlAttribute ... this is XmlElement. Change the attribute to:

[XmlElement("startDate")]
public DateTime StartDate { get; set; }

Are you sure the element "weeks" works properly and is marked with XmlAttribute?
{ "language": "en", "url": "https://stackoverflow.com/questions/39368083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Pod OOM - Xmx not being respected I am starting a Java app as follows:

JAVA_OPTS=-Xmx7g -Xms512m -Xss1m -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow

The pod I am using for this has resources:

resources:
  limits:
    cpu: 4096m
    memory: 9Gi
  requests:
    cpu: 1024m
    memory: 512Mi

However, every now and then I see pods being killed with:

"Memory cgroup out of memory: Kill process 3284727 (java) score 1982 or sacrifice child Killed process 3284727 (java) total-vm:15318320kB, anon-rss:9380388kB, file-rss:20180kB, shmem-rss:0kB"

Why does this take place? How come the memory usage surpasses 9G given that I set Xmx=7G?

A: With -Xmx you only specify the Java heap size - there is a lot of other memory that the JVM uses (like stack, native memory for the JVM, direct buffers, etc.). In our experience the correct size for total usage of the JVM is 1.5 to 2 times the heap size, but this depends heavily on your use case (for example some applications using direct buffers may use 32GB of RAM with only 1GB of heap). So run your app with higher limits, check the actual usage, and then define that + 5% as the container limit. You should also adapt your request to at least 1Gi given -Xms512m.
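As an aside not in the original answer, the 1.5x-2x heuristic makes the mismatch easy to see with a quick back-of-the-envelope calculation (the factor is a rule of thumb, not a JVM guarantee; actual overhead depends on threads, buffers, and metaspace):

```python
# rough container-memory sizing from the 1.5x-2x heap heuristic
def suggested_limit_gib(heap_gib, factor):
    return heap_gib * factor

heap = 7  # -Xmx7g
low = suggested_limit_gib(heap, 1.5)
high = suggested_limit_gib(heap, 2.0)
print(f"suggested container limit: {low:.1f}-{high:.1f} GiB")  # 10.5-14.0 GiB

# the 9Gi limit in the question is below even the low end of the range
assert 9 < low
```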
{ "language": "en", "url": "https://stackoverflow.com/questions/57991719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Duplicated metrics using prometheus federation What happened? Duplicated metrics appear on the main prometheus server:

namespace_workload_pod:kube_pod_owner:relabel{cluster="sample-eks", job="federate", namespace="cert-manager", pod="cert-manager-848f547974-xkg8j", prometheus="monitoring/k8s", prometheus_replica="prometheus-k8s-0", workload="cert-manager", workload_type="deployment"}
namespace_workload_pod:kube_pod_owner:relabel{cluster="sample-eks", namespace="cert-manager", pod="cert-manager-848f547974-xkg8j", workload="cert-manager", workload_type="deployment"}

The difference is the additional job, prometheus and prometheus_replica labels. Did you expect to see something different? No duplicated metrics. How to reproduce it (as minimally and precisely as possible): Install two kube-prometheus stacks on different clusters:

* one main
* one child

The main cluster will get the data from the child cluster using the /federate endpoint. Federation config:

- job_name: 'federate'
  scrape_interval: 15s
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
      - '{job=~".*"}'
  static_configs:
    - targets:
      - 'prometheus-sample.dev.xyz.com'

Environment I use the last git tag of the kube-prometheus repo on AWS EKS 1.21.

* Prometheus Logs:

level=warn ts=2021-10-01T15:04:21.300Z caller=scrape.go:1399 component="scrape manager" scrape_pool=federate target="http://prometheus-sample.dev.sample.com:80/federate?match%5B%5D=%7Bjob%3D~%22.%2A%22%7D" msg="Error on ingesting samples with different value but same timestamp" num_dropped=20

A: The best option here is to deploy all prometheus servers with the option

replicaExternalLabelName: ""

More info here: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#prometheusspec
{ "language": "en", "url": "https://stackoverflow.com/questions/69493485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Flutter check if Textfield controller is empty i have a question in flutter about a Textfield and the controller attribute. I have a page add_new_currency_screen.dart; to begin with, i have textfields like: name, amount, price, startDate and endDate. Now my problem. The endDate is always "empty" if i open the "add screen"; if i enter a startDate, the endDate should be that date plus a year.

TextFormField(
  decoration: InputDecoration(labelText: 'EndDate'),
  controller: _endDateController..text = DateFormat("dd.MM.yyyy").format(_pickedStartDate.add(Duration(days: 365))),
  enabled: false,
),

The problem is im getting an error "add" was called on "null" in the controller. i can't use if else, and

_endDateController.text == null ? "empty" : DateFormat("dd.MM.yyyy").format(_pickedStartDate.add(Duration(days: 365))),

does not work too.... How can i check or fix this, so that if the startDate is picked, the controller in the endDate does its thing, else just dont use the "add"? hope i could explain it correctly and someone can help me :) thanks all, best regards Thommy

A: If you get "add" was called on null, then the problem has to do with _pickedStartDate. So perhaps try something like:

controller: _endDateController..text = _pickedStartDate != null
    ? DateFormat("dd.MM.yyyy").format(_pickedStartDate.add(Duration(days: 365)))
    : '',
{ "language": "en", "url": "https://stackoverflow.com/questions/67780007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Spring WebFlux How to get Flux execute result? I want to use WebClient in Spring WebFlux to call some urls, and then put all the monos into a flux. When I call Flux.blockLast, I cannot get the results.

@Test
public void reactiveGetTest() {
    long start = System.currentTimeMillis();
    List<String> results = new ArrayList<>();
    List<Mono<String>> monos = IntStream.range(0, 500)
            .boxed()
            .map(i -> reactiveGet("https://www.google.com/"))
            .collect(Collectors.toList());
    Flux.mergeSequential(monos)
            .map(results::add)
            .blockLast();
    System.out.println("result: " + results.size());
    System.out.println("total time: " + (System.currentTimeMillis() - start));
}

private Mono<String> reactiveGet(String url) {
    return WebClient.create(url)
            .get()
            .retrieve()
            .bodyToMono(String.class);
}

I want to get a list of size 500, but it was 0!

A: You can use Flux.collectList() to get all results in a list:

@Test
public void reactiveGetTest() {
    long start = System.currentTimeMillis();
    List<Mono<String>> monos = IntStream.range(0, 500)
            .boxed()
            .map(i -> reactiveGet("https://www.google.com/"))
            .collect(Collectors.toList());
    List<String> results = Flux.mergeSequential(monos).collectList().block();
    System.out.println("result: " + results.size());
    System.out.println("total time: " + (System.currentTimeMillis() - start));
}
{ "language": "en", "url": "https://stackoverflow.com/questions/55514202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What are the prohibited numbers in android dialer app? Learning the source code of the Android Dialpad Fragment I've noticed quite a strange thing - you cannot call several numbers.

if (number != null && !TextUtils.isEmpty(mProhibitedPhoneNumberRegexp)
        && number.matches(mProhibitedPhoneNumberRegexp)) {
    Log.i(TAG, "The phone number is prohibited explicitly by a rule.");
    if (getActivity() != null) {
        DialogFragment dialogFragment = ErrorDialogFragment.newInstance(
                R.string.dialog_phone_call_prohibited_message);
        dialogFragment.show(getFragmentManager(), "phone_prohibited_dialog");
    }

where the mProhibitedPhoneNumberRegexp should be loaded as

mProhibitedPhoneNumberRegexp = getResources().getString(
        R.string.config_prohibited_phone_number_regexp);

But I can't find the related string in the dialer module, nor in the PhoneCommon module, nor in the Contacts module of the Android source. So my question is where to find this String, and why does Android OS prohibit dialing some numbers - is it the White House or something?

A: You can find the default in this file: https://android.googlesource.com/platform/packages/apps/Dialer/+/master/res/values/donottranslate_config.xml?autodive=0%2F%2F As you can see, it is empty.
{ "language": "en", "url": "https://stackoverflow.com/questions/35768835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Python Regex with split Method I am trying to create a list from a csv file. However, I'm having a tough time using the split method because some of the attributes in the csv file have commas that are within quotes. for example, the csv file:

500,403,34,"hello there, this attribute has a comma in it",567

So for example, when I iterate through the file:

for line in f:
    fields = line.split(",")

fields = ['500','403','34','"hello there','this attribute has a comma in it"','567']

How can I get it to look like this:

fields = ['500','403','34','"hello there, this attribute has a comma in it"','567']

I would like to use Regex for this, but if there is an easier way I'd love to hear it. Thanks!

A:

import re

x = '500,403,34,"hello there, this attribute has a comma in it",567'
print re.split(r""",(?=(?:[^"]*"[^"]*"[^"]*)*[^"]*$)""", x)

Output:

['500', '403', '34', '"hello there, this attribute has a comma in it"', '567']

A: Just use the existing CSV package. Example:

import csv

with open('file.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        print ', '.join(row)

A: The CSV module is the easiest way to go:

import csv

with open('input.csv') as f:
    for row in csv.reader(f):
        print row

For input input.csv:

500,403,34,"hello there, this attribute has a comma in it",567
500,403,34,"hello there this attribute has no comma in it",567
500,403,34,"hello there, this attribute has multiple commas, in, it",567

The output is:

['500', '403', '34', 'hello there, this attribute has a comma in it', '567']
['500', '403', '34', 'hello there this attribute has no comma in it', '567']
['500', '403', '34', 'hello there, this attribute has multiple commas, in, it', '567']
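As an aside, the behavior of all three approaches above can be verified in Python 3 (the answers use Python 2 print syntax); here io.StringIO stands in for the file:

```python
import csv
import io
import re

line = '500,403,34,"hello there, this attribute has a comma in it",567'

# naive split breaks the quoted field apart (5 commas -> 6 pieces)
assert len(line.split(",")) == 6

# csv.reader respects the quotes (and strips them from the field)
row = next(csv.reader(io.StringIO(line)))
assert row == ['500', '403', '34',
               'hello there, this attribute has a comma in it', '567']

# the regex from the first answer: split only on commas followed by an
# even number of quotes, i.e. commas outside quoted fields; quotes are kept
parts = re.split(r""",(?=(?:[^"]*"[^"]*"[^"]*)*[^"]*$)""", line)
assert parts == ['500', '403', '34',
                 '"hello there, this attribute has a comma in it"', '567']
```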
{ "language": "en", "url": "https://stackoverflow.com/questions/31258629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: autoincrement in a mysql update query

String textbox1=request.getParameter("textbox1");
String textbox2=request.getParameter("textbox2");
String textbox3=request.getParameter("textbox3");
String textbox4=request.getParameter("textbox4");
String textbox5=request.getParameter("textbox5");
String textbox6=request.getParameter("textbox6");
String textbox7=request.getParameter("textbox7");
String textbox8=request.getParameter("textbox8");
String textbox9=request.getParameter("textbox9");
String textbox10=request.getParameter("textbox10");
String textbox11=request.getParameter("textbox11");
String textbox12=request.getParameter("textbox12");

for(int i=1;i<13;i++){
    String textbox=request.getParameter("textbox"+i+"");
    st.executeUpdate("update user_start2 set data='"+textbox+"'");
}

I have a table with columns data, name and id. I want to update the table with the above query, but the table gets populated with only the last value. Is it possible to auto-increment as I am updating? So that the first data goes to the first user, 2nd data to the 2nd user and so on in the table.. When I System.out I am able to retrieve values from the jsp in my servlet.

A: I do not know how/where you are using this, but bear in mind that it is completely vulnerable to SQL injection attacks.
String textbox1=request.getParameter("textbox1");
String textbox2=request.getParameter("textbox2");
String textbox3=request.getParameter("textbox3");
String textbox4=request.getParameter("textbox4");
String textbox5=request.getParameter("textbox5");
String textbox6=request.getParameter("textbox6");
String textbox7=request.getParameter("textbox7");
String textbox8=request.getParameter("textbox8");
String textbox9=request.getParameter("textbox9");
String textbox10=request.getParameter("textbox10");
String textbox11=request.getParameter("textbox11");
String textbox12=request.getParameter("textbox12");

for(int i=1;i<13;i++){
    String textbox=request.getParameter("textbox"+i+"");
    st.executeUpdate("update user_start2 set data='"+textbox+"' where id="+i+";");
}

A: I assume that id is a unique identifier in your table user_start2. If so, then you need to alter the update statement above so that it reads,

"update user_start2 set data= '"+textbox+"' where id = '"+Myuserid);

Based on what you are saying, I assume that there are 13 entries in your database and that you are updating each one. If so, there must be some correspondence between the ids and the index in your for loop. If it is a literal 1 to 1 mapping, then you can simply write your update statement as

"update user_start2 set data= '"+textbox+"' where id = '"+i+"'");

If there is some transformation that needs to be done first, you could create a temporary variable and set the id to that: For example

int temp = i+4;
"update user_start2 set data= '"+textbox+"' where id = '"+temp+"'");

A: Use PreparedStatement, it prevents SQL Injection in Java

for(int i=1;i<13;i++){
    String textbox = request.getParameter("textbox"+i+"");
    PreparedStatement ps = con.prepareStatement("update user_start2 set data=? where id=?");
    //set the values of ?(place holders)
    ps.setString(1,textbox);
    ps.setInt(2,i); //assuming id is i.
    ps.executeUpdate();
}

See also
* Oracle docs tutorial PreparedStatement
{ "language": "en", "url": "https://stackoverflow.com/questions/20901516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can't figure out why angular binding won't work in codepen script I'm trying to use angular in codepen, which it seems to support, but I can't bind to my controller's $scope object for some reason. I've tried a couple different versions of angular via cdn and there are other pens I've seen that are able to successfully use angular. Can anyone tell me why my implementation isn't working? Here is the pen. The version of angular used is 1.4.0. Here is the html code:

<div class="container" ng-app="App">
  <div class="row" ng-controller="catControl">
    <div class="col-md-12">
      <div class="well">
        {{2 + 2}} </br>
        {{'cat'}} </br>
        {{$scope.cat}} <!-- Why doesnt this one work? -->
      </div>
    </div>
  </div>
</div>

and here is the JS code:

var App = angular.module("App", [])
  .controller("catControl", function($scope) {
    $scope.cat = 'cat';
  });

Thanks.

A: You need to learn more about $scope. We never use {{$scope.key}} in the view. Instead we use just {{key}} in the view. Your code should be:

<div class="container" ng-app="App">
  <div class="row" ng-controller="catControl">
    <div class="col-md-12">
      <div class="well">
        {{2 + 2}} </br>
        {{'cat'}} </br>
        {{$scope.cat}} <!-- This will not work -->
        {{cat}} <!-- This will work -->
      </div>
    </div>
  </div>
</div>

Learn more about scope at This link & This Link
{ "language": "en", "url": "https://stackoverflow.com/questions/35763369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Populating parameter list with a query I have a custom big query that functions correctly, but I would like to add a parameter to that query. I would have expected to be able to populate the drop down list with another query but cannot see how to do it. Is this possible?

A: You cannot populate a BigQuery query parameter dropdown list using another BigQuery query. A workaround for this would be:

* create a Community Connector using Advanced Services
* Add the query parameter as a config param
* in getConfig, use BigQuery REST API or Apps Script BigQuery service to retrieve the list
* in getData, pass the parameter from your getConfig to BigQuery
{ "language": "en", "url": "https://stackoverflow.com/questions/67340036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Required String parameter 'newPassword' is not present SOLVED We've solved it. The backend required the data as params, so the form the Vue app sends was changed to params, not data! I'm making a password change page. Meanwhile, I got an error that there was no old password, new password data, but I definitely transferred the data from Vue through axios. There were answers suggesting 'required=false', so I tried that for oldpassword, but here's the result. However, we kept 'required=true' because we believe newpassword and oldpassword are essential data for this endpoint. When using post-man or swagger, the password is changed, returning 200 normally. I thought there might be a problem with 'patch', so I tried 'put', but the error didn't change.

frontend

<div class="input-with-label">
  <input v-model="newpassword" id="newpassword" placeholder="새 비밀번호를 입력하세요." :type="passwordType" />
  <label for="newpassword">새 비밀번호</label>
  <div class="error-text" v-if="error.newpassword">{{error.newpassword}}</div>
</div>

<form @submit.prevent="changepassword" @submit="checkForm">
  <div v-if="activeButton() && !isSubmit">
    <!-- <button @click="PopUpEmailModal" class="btn-bottom" >가입하기</button> -->
    <button class="btn-bottom">변경하기</button>
  </div>
  <div v-else>
    <button class="btn-bottom disabled" >변경하기</button>
  </div>
</form>

import axios from 'axios'

export default {
  name:'ChangePassword',
  data: ()=> {
    return{
      email:"",
      oldpassword:"",
      newpassword:"",
      isSubmit: true,
      error: {
        newpassword: false,
      },
      passwordType: "password",
      passwordSchema: new PV(),
    }
  },
  created() {
    this.passwordSchema
      .is()
      .min(8)
      .is()
      .max(100)
      .has()
      .digits()
      .has()
      .letters();
  },
  watch: {
    newpassword: function(v) {
      this.checkForm();
    },
  },
  methods:{
    changepassword(){
      axios({
        url:'http://127.0.0.1:8080/account/changePassword',
        method:'patch',
        data:{
          email: this.email,
          oldPassword: this.oldpassword,
          newPassword: this.newpassword,
        },
      })
      .then(res=>{
        console.log(res)
        this.$router.push({
          name:'Login'
        })
      })
      .catch(err=>{
console.log(typeof this.oldpassword) console.log(this.oldpassword) console.log(this.newpassword) console.log(this.email) console.log(err) }) }, } backend @PatchMapping("/account/changePassword") @ApiOperation(value = "비밀번호변경") public Object changePassword(@RequestParam(required = false) final String oldPassword, @RequestParam(required = true) final String newPassword, @RequestParam(required = true) final String email){ Optional<User> userOpt = userDao.findByEmail(email); if(!passwordEncoder.matches(oldPassword, userOpt.get().getPassword())){ throw new IllegalArgumentException("잘못된 비밀번호입니다."); } User user = new User(userOpt.get().getUid(), userOpt.get().getNickname(), email, passwordEncoder.encode(newPassword), userOpt.get().getIntroduction(), userOpt.get().getThumbnail(), userOpt.get().getRoles()); userDao.save(user); final BasicResponse result = new BasicResponse(); result.status = true; result.data = "success"; ResponseEntity response = null; response = new ResponseEntity<>("OK", HttpStatus.OK); return response; } A: I believe you are using @RequestParam annotations but you are not sending any params in the URL from the frontend, hence the 400 error. Since you are using Patch/Put I would suggest you change your changePassword function to take a dto. And since you are already sending data in the body from frontend so no change needed there. For example public Object changePassword(@RequestBody @Valid ChangePasswordDto changePasswordDto){ // your logic } Then create a ChangePasswordDto class with @Required on properties that are required. public class ChangePasswordDto{ private String email; private String newPassword; private String oldPassword; @Required public void setEmail(String email) { this.email= email; } @Required public void setOldPassword(String oldPassword) { this.oldPassword= oldPassword; } @Required public void setNewPassword(String newPassword) { this.newPassword= newPassword; } // corresponding getters }
{ "language": "en", "url": "https://stackoverflow.com/questions/68524837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Pyglet: how to handle heavy resource loading I'm learning pyglet in order to write a game, and I've been trying to create a loading screen/scene while loading all game resources (static images and image sprites), to avoid freezing the window during loading. I wonder what the best practices are for handling heavy resource loading with pyglet, on window/game startup or between scenes.
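No answer is recorded for this question. As a hedged, framework-agnostic sketch (the pattern, not pyglet-specific API), the usual approach is to do the heavy decoding on a worker thread and hand finished objects back to the main loop, which keeps drawing the loading scene and polls a queue; in pyglet the polling could live in a clock callback such as `pyglet.clock.schedule_interval`. Here `load_resource` is a hypothetical stand-in for the real, slow image/sprite loading call:

```python
# Sketch of a non-blocking loading screen: a worker thread loads resources
# while the main loop stays responsive and can update a progress bar.
import queue
import threading
import time

def load_resource(name):
    time.sleep(0.01)          # simulate slow disk/decode work
    return f"<resource {name}>"

def loader(names, done_queue):
    # Runs on the worker thread; pushes each finished resource to the queue.
    for name in names:
        done_queue.put((name, load_resource(name)))
    done_queue.put(None)      # sentinel: everything is loaded

def run_loading_screen(names):
    done_queue = queue.Queue()
    threading.Thread(target=loader, args=(names, done_queue), daemon=True).start()
    loaded = {}
    while True:               # in pyglet this poll would sit in a clock callback
        item = done_queue.get()
        if item is None:
            break
        name, res = item
        loaded[name] = res    # update the progress display here
    return loaded

resources = run_loading_screen(["player.png", "enemy.png", "tiles.png"])
```

Once the sentinel arrives, the loading scene can be swapped for the game scene; only the queue hand-off needs to touch the main thread, since most GL work must stay there.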
{ "language": "en", "url": "https://stackoverflow.com/questions/69349107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a way to indicate to Netty after a Bind occurs to timeout within a period of time if no client connects If I bind to a specific address/port, is there a way to force it to unbind if no client has connected within a period of time, and is there a way to be notified by a listener if such an option exists? A: There is nothing like this built in. That said, you can do this yourself. Just add a ChannelInboundHandlerAdapter to the childHandler method that overrides channelActive(...). Then schedule a timer that will check if this method was called within time X and, if not, close the Channel.
{ "language": "en", "url": "https://stackoverflow.com/questions/65362411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Vapor 4 Authentication Issue [User is not authenticated] I have an iOS client with OAuth 2.0 as an authentication mechanism. When a user signs in, I authenticate him with this method (Google sign in for example): func processGoogleLogin(request: Request, token: String) throws -> EventLoopFuture<ResponseEncodable> { try Google .getUser(on: request) .flatMap { userInfo in User .query(on: request.db) .filter(\.$email == userInfo.email) .first() .flatMap { foundUser in guard let existingUser = foundUser else { //creating a new user return user .save(on: request.db) .map { request.session.authenticate(user) //redirecting with info } } request.session.authenticate(existingUser) //redirecting with info } } } After the login, I want to check if the user is authenticated and if I've successfully authenticated the user. So I have an endpoint that I protect from unauthenticated users, but even after signing in, the user cannot access this endpoint as he is not authenticated. Error: { "error": true, "reason": "User not authenticated." } My User Model conforms to ModelSessionAuthenticatable. I also use the SessionMiddleware (ImperialController is the auth controller): let imperialController = ImperialController(sessionsMiddleware: app.sessions.middleware) app.middleware.use(app.sessions.middleware) In ImperialController: class ImperialController { private let sessionsMiddleware: Middleware init(sessionsMiddleware: Middleware) { self.sessionsMiddleware = sessionsMiddleware } .... And finally the protected route: let protected = app.grouped(User.guardMiddleware()) protected.get { req -> HTTPResponseStatus in return .ok } A: It may be that you are doing it, but just not showing it in your question. You need to create and register an instance of SessionsMiddleware using something like this: app.middleware.use(SessionsMiddleware(session: MemorySessions(storage: MemorySessions.Storage()))) Do this before you create the instance of your controller.
EDIT in reply to OP's comment: I normally pass the instances of the different middleware explicitly because I tend to apply subsets to groups of routes rather than all the middleware. For example: app.middleware.use(FileMiddleware(publicDirectory: app.directory.publicDirectory)) app.middleware.use(SessionsMiddleware(session: MemorySessions(storage: MemorySessions.Storage()))) app.middleware.use(User.sessionAuthenticator(.mysql)) try app.register(collection: APIController(middleware: UserToken.authenticator())) var middleware: [Middleware] = [CustomMiddleware.InternalErrorMiddleware()] try app.register(collection: InsecureController(middleware: middleware)) middleware.append(contentsOf: [User.redirectMiddleware(path: [C.URI.Solidus].string), User.authenticator(), User.guardMiddleware(), CustomMiddleware.SessionTimeoutMiddleware()]) try app.register(collection: CustomerController(middleware: middleware)) BTW, have you included my line 3 above? That may be your problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/73449861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: MS SQL Server connection using ODBC Driver from Python This question may be repetitive as I checked this forum for the same issue and did as suggested in the forum for my issue. But my issue seems to be not going. All, I am trying to connect to MS SQL Server installed on an ETL Server from my PC using python pyodbc module. import pyodbc print(pyodbc.drivers()) connStr = pyodbc.connect("Driver = {SQL Server Native Client 11.0};" "SERVER = S871DBSERV;" "DATABASE = myclientdb;" "Trusted_Connection=yes;") and I also tried below as well print(pyodbc.drivers()) connStr = pyodbc.connect("Driver = {ODBC Driver 13 for SQL Server};" "SERVER = S871DBSERV;" "DATABASE = myclientdb;" "Trusted_Connection=yes;") am still not able to make a connection and below is the output with error I am seeing. As you can see the drivers that I specified in the above connect statement is listed as result of print(pyodbc.drivers()) ['Driver da Microsoft para arquivos texto (*.txt; .csv)', 'Driver do Microsoft Access (.mdb)', 'Driver do Microsoft dBase (.dbf)', 'Driver do Microsoft Excel(.xls)', 'Driver do Microsoft Paradox (.db )', 'Microsoft Access Driver (.mdb)', 'Microsoft Access-Treiber (.mdb)', 'Microsoft dBase Driver (.dbf)', 'Microsoft dBase-Treiber (.dbf)', 'Microsoft Excel Driver (.xls)', 'Microsoft Excel-Treiber (.xls)', 'Microsoft ODBC for Oracle', 'Microsoft Paradox Driver (.db )', 'Microsoft Paradox-Treiber (.db )', 'Microsoft Text Driver (.txt; .csv)', 'Microsoft Text-Treiber (.txt; *.csv)', 'SQL Server', 'DataDirect 7.1 DB2 Wire Protocol', 'DataDirect 7.1 Informix Wire Protocol', 'DataDirect 7.1 Sybase Wire Protocol', 'DataDirect 7.1 SQL Server Wire Protocol', 'DataDirect 7.1 dBaseFile', 'DataDirect 7.1 FoxPro 3.0', 'DataDirect 7.1 MySQL Wire Protocol', 'DataDirect 7.1 New SQL Server Wire Protocol', 'DataDirect 7.1 Greenplum Wire Protocol', 'Informatica MongoDB ODBC Driver', 'DataDirect 7.1 Oracle Wire Protocol', 'Informatica Cassandra ODBC Driver', 'SQL Server Native Client 
11.0', 'ODBC Driver 13 for SQL Server'] Traceback (most recent call last): File "c:/Users/marunachalam/Downloads/FTPGetFiles.py", line 92, in connStr = pyodbc.connect("Driver = {'SQL Server Native Client 11.0'};" pyodbc.InterfaceError: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)') I am not sure what I am missing here. Please help. Thanks, Mani A
{ "language": "en", "url": "https://stackoverflow.com/questions/59991483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Entities not mapped when persistence unit is on EAR level I need an EAR java application in which I can define a persistence unit and use that persistence unit in different components. For example: I define 2 JAR's in my application.xml: my-product.jar and my-product-module.jar - both of the JAR files should be able to work with the same PersistenceUnit. This PersistenceUnit is defined in a separate jar which is the lib folder of my EAR application. In the JAR files, there's no other persistence.xml. So in the whole EAR folder: only 1 persistence.xml. I currently have the following EAR structure. We are deploying on a JBoss 6.1.0.Final server. / -- lib | | | -- my-persistence.jar | | | -- META-INF | | | -- persistence.xml -- my-product.jar | -- my-product.war | -- my-product-module.jar My persistence.xml is looking like this: <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd" version="2.0"> <persistence-unit name="MyPersistenceUnitName"> <jta-data-source>java:/MyDatasource</jta-data-source> <jar-file>../my-product.jar</jar-file> <jar-file>../my-product-module.jar</jar-file> <properties> <property name="jboss.entity.manager.factory.jndi.name" value="java:/MyProduct/ManagerFactory" /> <property name="jboss.entity.manager.jndi.name" value="java:/MyProduct/EntityManager" /> </properties> </persistence-unit> </persistence> All of this is working fine. The persistence unit gets detected and bounded inside the EAR context. But, from the moment we create a JPA query and we ask the EntityManager to perform the query, we get the following error: org.hibernate.hql.ast.QuerySyntaxException: MyEntity is not mapped [from MyEntity] So all of our entities are not mapped/detected when the application was loaded. It seems like the jar-file element is not doing what it should do. 
How can we detect all of our entities (in both of our JARs)? Anybody able to help us out? A: Our solution was to create a custom ANT task which gets all the classes annotated with @Entity (using reflections). This generates the persistence.xml for us, with explicit <class> nodes. So every class you want to map into the PersistenceContext needs to be listed in the persistence.xml. This persistence.xml is placed inside a META-INF folder which is wrapped into my-persistence.jar. This jar is placed in the lib folder of my EAR.
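A sketch of what such a generated persistence.xml could look like — the entity class names here are made up for illustration, while the unit name, datasource, and JNDI property are taken from the question:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="MyPersistenceUnitName">
    <jta-data-source>java:/MyDatasource</jta-data-source>
    <!-- every entity listed explicitly instead of relying on jar-file scanning -->
    <class>com.example.product.MyEntity</class>
    <class>com.example.product.module.OtherEntity</class>
    <exclude-unlisted-classes>true</exclude-unlisted-classes>
    <properties>
      <property name="jboss.entity.manager.factory.jndi.name" value="java:/MyProduct/ManagerFactory" />
      <property name="jboss.entity.manager.jndi.name" value="java:/MyProduct/EntityManager" />
    </properties>
  </persistence-unit>
</persistence>
```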
{ "language": "en", "url": "https://stackoverflow.com/questions/12969089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: setTimeout function # tag? Can someone please help? I have some jQuery code which is working perfectly, however there is an element within it which is: "#"+ and I don't understand what it does. Please see the JS below. setTimeout (function() { $("#"+toneId2nd).animate({ backgroundColor: 'red'}).animate({ backgroundColor: 'white'}, 4000); play_multi_sound('tone-'+toneId2nd); }, 1000); Any help greatly appreciated. A: $('#myElement').animate({ backgroundColor: 'red'}).animate({ backgroundColor: 'white'}, 4000); play_multi_sound('tone-myElement'); is the same as: var toneId2nd = 'myElement'; $('#'+toneId2nd).animate({ backgroundColor: 'red'}).animate({ backgroundColor: 'white'}, 4000); play_multi_sound('tone-'+toneId2nd); toneId2nd is just a supplied variable. jQuery uses CSS selectors to grab elements. I suggest you start here if you need more help with jQuery. A: It is the ID of the element: $('#ElementId') toneId2nd is some kind of element ID. Ex: $('#mydiv') A: See the answer on this question: using variables within a jquery selector And maybe try to do some research before asking? ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/28939571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: same img size across browsers Do you guys know how to show the same image size across browsers, or at least a similar size? I'm working on a site and the image I included in a div shows very different sizes; in Safari especially it looks too tiny. The image occupies 90% of the page in most browsers, but in Safari it uses about 25% of it. CSS goes like this #menu img{ max-width:70% !important; height:auto; display:block; } @media screen and (orientation: portrait) { img { max-width: 80%; } } @media screen and (orientation: landscape) { img { max-height: 50%; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/28313299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to update trained LSTM model by online data? I'm doing a time-sequence prediction task. First, I trained the LSTM model on the first 50% of the data; now I want to update the model with new data to simulate online prediction. Can I use a single new data point to update the model? The flow is just predict, update, predict, update, repeated. I'm using Keras and I put the new data into the model.fit function. Is this right? A: I don't think there is a direct API for this # to get indices of incorrect predictions incorrects = np.nonzero(model.predict_classes(X_test) != Y_test) # Now train on them newX, newY = X_test[incorrects], Y_test[incorrects] model.fit(newX, newY)
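On the "is this right?" part: calling `model.fit` again in Keras does continue training from the current weights rather than reinitializing them, so a predict-then-fit loop is a workable (if crude) form of online updating; a single point per update is very noisy, so refitting on a short sliding window of recent data is a common compromise. A framework-free sketch of that loop, with a trivial running-mean "model" standing in for the LSTM (the class and method names are illustrative, not a Keras API):

```python
# Online predict/update loop.  TinyModel is a deliberately trivial stand-in
# (a running mean) so the control flow, not the model, is the focus.
class TinyModel:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def predict(self):
        return self.total / self.count if self.count else 0.0

    def fit(self, window):                    # refit on a sliding window
        self.total = sum(window)
        self.count = len(window)

def online_loop(stream, window_size=3):
    model = TinyModel()
    window, predictions = [], []
    for x in stream:
        predictions.append(model.predict())   # 1) predict before seeing x
        window = (window + [x])[-window_size:]
        model.fit(window)                     # 2) update on recent data
    return predictions

preds = online_loop([1.0, 2.0, 3.0, 4.0])
# preds -> [0.0, 1.0, 1.5, 2.0]
```

With a real Keras model the two marked steps would be `model.predict(x)` and `model.fit(window_X, window_y, epochs=1)` on the recent batch.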
{ "language": "en", "url": "https://stackoverflow.com/questions/63546133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: GUI for Dialog-design for WiX WiX is great in that there is no GUI, you just write the installer you want it to be. No fiddling with GUI-wizards! However, drawing a GUI is actually one thing I prefer to use a GUI for. So, is there any Dialog-drawing program which exports WiX-data? (I suppose otherwise I could transform what Visual Studio's forms editor does into WiX-XML.) /L A: SharpDevelop also has built-in capabilities for laying out a WiX dialog. I prefer it over WixEdit. A: I created a full list of editors for WiX here: https://robmensching.com/blog/posts/2007/11/20/wix-editors/ (which is amazingly still up to date) A: This is an excellent GUI IDE and it is open source... try this: http://community.sharpdevelop.net/blogs/mattward/archive/2006/09/17/WixIntegration.aspx Download the IDE from here: http://www.icsharpcode.net/OpenSource/SD/Download/ A: You can try WixEdit. A: If you use Visual Studio 2008/2010 and want to install an application that requires .NET framework you might be interested in having a look at SharpSetup. It allows you to graphically edit installer UI as WinForms controls (and use VS designer for that).
{ "language": "en", "url": "https://stackoverflow.com/questions/2569707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Swift: Setting progress of NSProgressIndicator I have tried to setup my own NSProgressIndicator with the following method. class ProgressViewController : NSViewController { override func loadView() { let view = NSView(frame: NSMakeRect(0,0,300,120)) view.wantsLayer = true view.layer?.borderWidth = 0 self.view = view let text = NSTextField(frame: NSRect(x: 20, y: 45, width: 260, height: 20)) text.drawsBackground = true text.isBordered = false text.backgroundColor = NSColor.controlColor text.isEditable = false self.view.addSubview(text) text.stringValue = "Progress Text" let indicator = NSProgressIndicator(frame: NSRect(x: 20, y: 20, width: 260, height: 20)) indicator.minValue = 0.0 indicator.maxValue = 100.0 indicator.doubleValue = 33.0 self.view.addSubview(indicator) } } My problem is that the indicator is always shown full. Am I missing a trick? A: The default style is indeterminate. Set it to false and everything should be OK: indicator.isIndeterminate = false
{ "language": "en", "url": "https://stackoverflow.com/questions/42862161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Return only the items in php array that equal to my custom field month (array_filter) I am returning all of the Fridays for the current month and next month in a dropdown, which works fine. I have an additional step where I want to only show the months in that dropdown that equal to the month in this field: $productMonth = get_field('product_month'); That field returns the following value: "May". So I need to look at all items in that array and only return them if the month they have associated to them matches $productMonth. I have tried using array_filter but I know $var.date('F') isn't the answer, I assume I could probably do something by retrieving the last bit of the string as that would always return the month, but that's not ideal: $array2 = array_filter($fridaysUnique, "matches_month"); function matches_month($var, $productMonth) { return ($var.date('F') === $productMonth); } And here is the rest of my code: <?php $productMonth = get_field('product_month'); $thisMonth = date('F'); $nextMonth = date('F', strtotime("next month")); $fridays = array(); $fridays[0] = date('l jS F', strtotime('first friday of this month')); $fridays[1] = date('l jS F', strtotime('second friday of this month')); $fridays[2] = date('l jS F', strtotime('third friday of this month')); $fridays[3] = date('l jS F', strtotime('fourth friday of this month')); $fridays[4] = date('l jS F', strtotime('fifth friday of this month')); $fridays[5] = date('l jS F', strtotime('first friday of next month')); $fridays[6] = date('l jS F', strtotime('second friday of next month')); $fridays[7] = date('l jS F', strtotime('third friday of next month')); $fridays[8] = date('l jS F', strtotime('fourth friday of next month')); $fridays[9] = date('l jS F', strtotime('fifth friday of next month')); $fridaysUnique = array_unique($fridays); ?> <select> <?php foreach ( $fridaysUnique as $friday ) : ?> <option value=""><?php echo $friday; ?></option> <?php endforeach; ?> </select> Help would be greatly 
appreciated and any recommendations on making the code neater are welcomed. Thanks A: If you change your array_filter to the following, the filter method will give you the values you want: $array2 = array_filter($fridaysUnique, function ($val) use ($productMonth) { return DateTime::createFromFormat('l jS F', $val)->format('F') === $productMonth; }); What the code above does is it runs through all the values in your $fridaysUnique array and converts each value to a DateTime object that is formatted to a string for comparison with the value in $productMonth (note the use of use allowing you to use the $productMonth variable inside your anonymous filtering method). However instead of converting dates twice like this I would suggest you store DateTime objects in your array to start with.
{ "language": "en", "url": "https://stackoverflow.com/questions/67502154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Logical operators in jinja template Been pulling my hair out trying to use the "or" logical operator in a cloud-init jinja template. When I use: {% if distro == 'centos' or 'redhat' %} {% set group = 'wheel' %} cloud-init just ignores the directive. If I use separate if statements, (see below) then I get the desired result. I have tried with {% "value" or "value" %} and {% value or value %}, and the line is always ignored. Example snippet of the code: ## template: jinja #cloud-config {% set u1 = 'myuser' %} {% set u1pass = 'strong-passwd' %} {% set u1key = 'key1' %} {% set u2 = 'example' %} {% set u2pass = 'passwd2' %} {% set u2key = 'key2' %} ............. {% if distro == 'centos' %} {% set group = 'wheel' %} {% elif distro == 'rocky' %} {% set group = 'wheel' %} {% elif distro == 'ubuntu' or 'debian' %} {% set group = 'sudo' %} {%- endif %} distro: {{distro}} user1: {{u1}} user2: {{u2}} group: {{group}} ## Add users - name: {{ u1 }} groups: {{ group }} lock_passwd: false passwd: {{ u1pass }} ssh_authorized_keys: - {{ u1key }} shell: /bin/bash - name: {{ u2 }} groups: {{ group }} lock_passwd: false passwd: {{ u2pass }} ssh_authorized_keys: - {{ u2key }} shell: /bin/bash` ` I am trying to set jinja variables based on metadata values passed in from the datasource (LXD in my case) to dynamically build the user-data configuration, but can't seem to get the or logical operator to play well. Am I just stuck using separate if statements per metadata value? Thanks {% if distro == 'centos' or 'redhat' %} {% set group = 'wheel' %} Expecting: distro: redhat user1: myuser user2: example group: wheel A: Have you tried {% if distro == 'centos' or distro == 'redhat' %}
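The reason the original condition misbehaves is operator precedence plus truthiness, semantics Jinja shares with Python: `distro == 'centos' or 'redhat'` parses as `(distro == 'centos') or ('redhat')`, and a non-empty string is always truthy, so the branch is taken for every distro. The same behavior can be seen in plain Python:

```python
# `x == 'a' or 'b'` means `(x == 'a') or 'b'`; the non-empty string 'b' is
# truthy, so the whole expression is truthy no matter what x holds.
distro = "ubuntu"

broken = distro == "centos" or "redhat"        # evaluates to the string 'redhat'
correct = distro == "centos" or distro == "redhat"
membership = distro in ("centos", "redhat")    # often the tidiest fix; also valid Jinja
```

The `in` form works in Jinja too: `{% if distro in ('centos', 'redhat') %}`.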
{ "language": "en", "url": "https://stackoverflow.com/questions/75537000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Submit date from HTML form to controller I want to submit a date to my controller using an HTML form. I tried this but it doesn't work: <form action="/Home/Index" method='get'> <input type="date" name="Choisir une date"><br> <input type="submit" value="Submit"> <input type="reset"> </form> A: First of all, try the post method instead of get. A: Please add a proper name property to your input element, as in <input type="date" name="date" placeholder="Choisir une date"> You should get the value for the date param in your controller method now. The name property should match your controller method's parameter name. A: <form action="/Home/Index" method='get'> <input type="date" name="ChoisirDate"> <br/> <input type="submit" value="Submit"> <input type="reset"> </form> And in the controller: public IActionResult Index(string ChoisirDate) { string ChoisirDateText = ChoisirDate; return View(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/50118459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using 'map' object in Jupyter notebook environment Python's 'map' doesn't work in my Jupyter notebook, but it works well in the Python shell. Can you explain why? Thanks for your answer. A: It looks like map is still a value, but not a callable. Most likely, you assigned a value to it earlier in the Jupyter notebook (as jasonharper said in a comment). You can check what type of object map is by executing this in a code cell: map? The notebook should show an overlay window at the bottom describing the type, string representation, and docstring.
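The shadowing the answer describes is easy to reproduce outside a notebook: assigning to the name `map` hides the builtin in that namespace, and `del map` removes the shadowing name so the builtin is visible again (in a notebook, run `del map` in a cell):

```python
# Assigning to `map` shadows the builtin in this namespace.
map = [1, 2, 3]

try:
    map(str, [1, 2])           # `map` is now a list, not a callable
    shadowed_error = None
except TypeError as exc:
    shadowed_error = exc       # TypeError: 'list' object is not callable

del map                        # delete the shadowing name; the builtin reappears
restored = list(map(str, [1, 2]))
# restored -> ['1', '2']
```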
{ "language": "en", "url": "https://stackoverflow.com/questions/65200968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: App with PlayerDuel functionality doesn't get push notification I am developing an application which supports the PlayerDuel framework, in which two players can play against each other and one can send a challenge to the other. I followed this documentation: https://docs.urbanairship.com/display/DOCS/Getting+Started%3A+iOS%3A+Push I can get a notification when I send it from the command line (for testing) as described in the documentation above. But when I play the game, PlayerDuel can't send a notification when someone sends a challenge to another player. appdelegate code :- - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { //Init Airship launch options NSMutableDictionary *takeOffOptions = [[[NSMutableDictionary alloc] init] autorelease]; [takeOffOptions setValue:launchOptions forKey:UAirshipTakeOffOptionsLaunchOptionsKey]; // Create Airship singleton that's used to talk to Urban Airship servers. // Please populate AirshipConfig.plist with your info from http://go.urbanairship.com [UAirship takeOff:takeOffOptions]; // Register for notifications [[UAPush shared]registerForRemoteNotificationTypes:(UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound | UIRemoteNotificationTypeAlert)]; // Override point for customization after application launch.
self.appStarted = YES; UIImage *bgImage= [UIImage imageNamed:@"default.png"]; [PlayerDuel initializeWithGameKey:@"gamekey" andBackground:bgImage andDelegate:[navigationController.viewControllers objectAtIndex:0] andOrientation:UIInterfaceOrientationPortrait]; - (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken { NSLog(@"deviceToken:- %@",deviceToken); // Updates the device token and registers the token with UA [[UAPush shared] registerDeviceToken:deviceToken]; [PlayerDuel registerDeviceToken:deviceToken]; } - (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo { for (id key in userInfo) { NSLog(@"key: %@, value: %@", key, [userInfo objectForKey:key]); } NSLog(@"remote notification: %@",[userInfo description]); NSDictionary *apsInfo = [userInfo objectForKey:@"aps"]; NSString *alert = [apsInfo objectForKey:@"alert"]; NSLog(@"Received Push Alert: %@", alert); NSString *sound = [apsInfo objectForKey:@"sound"]; NSLog(@"Received Push Sound: %@", sound); NSString *itemName = @"my app"; NSString *messageTitle = [apsInfo objectForKey:@"alert"]; UIApplicationState state = [application applicationState]; if (state == UIApplicationStateActive){ AudioServicesPlaySystemSound(1007); [self _showAlert:messageTitle withTitle:itemName]; } else{ UIViewController *viewController = navigationController.visibleViewController; // NSLog(@"Controller Name:- %@",viewController); [viewController.view reloadInputViews]; [viewController playerDuelStartGame:nil]; } NSString *badge = [apsInfo objectForKey:@"badge"]; NSLog(@"Received Push Badge: %@", badge); } A: If the push notifications work directly through Urban Airship and not through PlayerDuel, You probably didn't specify the right urban airship details in PlayerDuel's developers website. Make sure you put Urban Airship's Master Secret and not the App Secret in PlayerDuel's website. 
A: I had used the App Secret as the Urban Airship Key. This was wrong. When I changed its value to the App Key, it worked fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/12383383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: discordjs using base64 image in webhook embed How do I insert an image into a discord embed using a webhook? I have the image saved as a base64 string which I get from the database. I have tried this but I only get an empty embed const data = b64image.split(',')[1]; const buf = new Buffer.from(data, 'base64'); const file = new Discord.MessageAttachment(buf, 'img.jpeg'); const embed = new Discord.MessageEmbed() .setImage('attachment://img.jpeg') webhookClient.send('', { username: userName, embeds: [embed], }); A: Discord has added a feature (or it was already there, I don't know) which enables you to do what you want to do. const data = b64image.split(',')[1]; const buf = new Buffer.from(data, 'base64'); const file = new Discord.MessageAttachment(buf, 'img.jpeg'); const embed = new Discord.MessageEmbed() .attachFiles(file) .setImage('attachment://img.jpeg') webhookClient.send('', { username: userName, embeds: [embed], }); A: I tried with a smaller image, and the code in the question worked. So it was a problem with the size of the request. I fixed it by setting up an express route to serve images and used the URL in the embed router.get('/thumb/:imgId', (req, res) => { const imgId = req.params.imgId.toString().trim(); let file = Buffer.from(b64Image.split(',')[1], 'base64') res.status(200); res.set('Content-Type', 'image/jpeg'); res.set('Content-Length', file.length) res.send(file) }); const embed = new Discord.MessageEmbed() .setImage(`${base_url}/img/thumb/${imgId}`) webhookClient.send('', { username: userName, embeds: [embed], });
{ "language": "en", "url": "https://stackoverflow.com/questions/64834944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the return type of address-of (&) operator? int main(){ int i = 0; int *p = 123; return 0; } The error message is: invalid conversion from 'int' to 'int*' [-fpermissive] int *p = 123; I know int *p = &i; could make this work, but how does the compiler convert the type of &i to int * (what is the return type of &)? Thanks to anyone explaining this to me! A: From the reference: For an expression of the form & expr If the operand is an lvalue expression of some object or function type T, operator& creates and returns a prvalue of type T*, with the same cv qualification, that is pointing to the object or function designated by the operand. So, the type of the expression &i is an int*, where i is of type int. Note that for expressions of this form where expr is of class type, the address-of operator may be overloaded, and there are no constraints on the type that can be returned from this overloaded operator.
{ "language": "en", "url": "https://stackoverflow.com/questions/65067467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to increment integer and return new value? I would like to have a table that stores sequences for objects that have too many types for each of them to have a dedicated table with their own auto incrementing primary key. Is there a way to have a single query increment the sequence by one and return the resulting number while also ensuring the sequence exists? The table would be simple: +------------------------------+ | name(varchar) | value(int) | |------------------------------| | foo | 1 | +------------------------------+
{ "language": "en", "url": "https://stackoverflow.com/questions/70333812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Update url when change slide I installed this cool demo http://tympanus.net/codrops/2014/06/26/draggable-dual-view-slideshow/ on this website http://wedding.halocommunication.com What I'm trying to do, with no success, is to update the URL (like on this site wild.as) when a slide changes. This demo uses the Dragdealer plugin skidding.github.io/dragdealer One of my experiments was to add this line at the end of this function, but it generates errors /** * DragDealer plugin callback: update current value */ DragSlideshow.prototype._navigate = function( x, y ) { // add class "current" to the current slide / remove that same class from the old current slide classie.remove( this.slides[ this.current || 0 ], 'current' ); this.current = this.dd.getStep()[0] - 1; classie.add( this.slides[ this.current ], 'current' ); window.location.href = 'http://test.com/' + this.slides.attr('data-content'); } Could anyone help me with some hints? Thank you
{ "language": "en", "url": "https://stackoverflow.com/questions/25134847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Avoid Matrix copy constructor I have a situation like this: using JacobiSVD = Eigen::JacobiSVD<MatrixXcd, Eigen::FullPivHouseholderQRPreconditioner>; class Foo { public: MatrixXcd matrixU; MatrixXcd matrixV; Foo(const Ref<const MatrixXcd>& mat); } Foo::Foo(const Ref<const MatrixXcd>& mat) { JacobiSVD svd(mat, Eigen::ComputeFullU | Eigen::ComputeFullV); matrixU = svd.matrixU(); matrixV = svd.matrixV(); // <proceed to mutate computeU and computeV> } I think the above creates a copy of svd.matrixU() and svd.matrixV() during construction of matrixU and matrixV. Is this true, and is there some way to avoid it? Thanks! A: No temporary copies are created during the construction of matrixU and matrixV. Eigen::JacobiSVD inherits from Eigen::SVDBase, which defines the member functions matrixU() and matrixV() which just return a reference to the protected member variables of Eigen::SVDBase that hold the actual matrices. However, you are still copying data, but explicitly: you copy the two matrices from the local variable svd into member variables of Foo. If you do not need to modify the U and V matrices in-place, you could store the whole svd in Foo, like so: class Foo { public: JacobiSVD svd; Foo(const Ref<const MatrixXcd>& mat); }; Foo::Foo(const Ref<const MatrixXcd>& mat): svd(mat, Eigen::ComputeFullU | Eigen::ComputeFullV) { // proceed to do other things } Unfortunately, you can't modify the member variables of svd. So if you really need to modify them, and don't need the original values, then your code is fine as it is.
{ "language": "en", "url": "https://stackoverflow.com/questions/58866557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Getting the column "number" and column name to make it easier to select several - not always adjacent - columns in a large df in pandas I have a large df that I'd like to do calculations and predictions on. The problem is that I can't find a way to get a list of all column names TOGETHER with the column "number". It's impossible to count from the top to see which number a column is, and I'd rather not write out all the column names. It would be nice to be able to use something like this: df.iloc[:, np.r_[2, 5:10, 22:102, 109:129]] but for that to work I need to know which column has what number. list(df) gives me a nice list, but without the numbers, which makes it pointless in this quest. A: I suggest creating a dictionary with enumerate: df = pd.DataFrame({ 'A':list('abcdef'), 'B':[4,5,4,5,5,4], 'C':[7,8.0,9,4.0,2,3], 'D':[1,3,5,7,1,0], 'E':[5,3,6,9,2,4], 'F':list('aaabbb') }) d = dict(enumerate(df)) print (d) {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F'} Or a list of tuples, as suggested by @Chris in the comments: L = list(enumerate(df)) print (L) [(0, 'A'), (1, 'B'), (2, 'C'), (3, 'D'), (4, 'E'), (5, 'F')]
{ "language": "en", "url": "https://stackoverflow.com/questions/58523129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Flash Pro CC HTML5 Canvas & CreateJS - how to update the lib files to latest versions? I'm using Flash Pro CC, and publishing as HTML5 Canvas. This incorporates & uses the CreateJS JavaScript libraries. I notice that, when the HTML file gets generated, the versions of the lib files are a bit older than what's available online on their CDN/GitHub. Since what's online will be ahead of what Flash Pro includes, I'm wondering how I can tell Flash to use newer versions of these libraries. Here are the script tags that Flash Pro CC added. I already switched from the option of using hosted libraries to using local libs... libs/easeljs-0.7.1.min.js libs/tweenjs-0.5.1.min.js libs/movieclip-0.7.1.min.js libs/preloadjs-0.4.1.min.js libs/soundjs-0.5.2.min.js Of course, I can go and get the newer lib file(s) from the CDN and place them into the same folder, and edit the Flash-created HTML after publishing, but the HTML will get over-written during a subsequent publish. I see there's an option to uncheck "Overwrite HTML", which could solve this issue. After doing this there was obviously some incompatibility that prevented the page from even being displayed. For example, I switched the JS tags from... libs/soundjs-0.5.2.min.js to libs/soundjs-0.6.0.min.js ...and my file no longer worked; no visuals were displayed in the browser. Does anybody know how to smoothly update to the new versions of the libs? Or is my approach just wrong? My goal is to use the latest versions for the maximum number of features that the CreateJS team has programmed into their libs. A: You should be able to swap the libraries as you suggested, but they need to all be swapped at once, otherwise you will run into incompatibilities around the event model and inheritance. Make sure to swap the MovieClip library as well. As you suggested, the easiest way to do this is to publish once, then turn off "overwrite HTML" and modify the html to point to the new libraries.
We tested fairly extensively, and the new libraries should be compatible with the latest Flash CC output. The only issue we encountered is with Flash CC's spritesheet export tool, which is not compatible with the latest version of EaselJS. That said, there may still be incompatibilities that we didn't find, so if you are able to reproduce an issue, let us know.
{ "language": "en", "url": "https://stackoverflow.com/questions/28569092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: how to do i18n while using react-templates react-templates is a good solution for separating template and logic code. However, what if I have multiple templates that all refer to one logic file, as i18n requires? E.g. post.js //which is the logic code post.en.rt //which is the template for English post.es.rt //for Spanish How do I load these in post.js? PS: I don't want to load all language templates in post.js; that would make a big file and would be a waste of network bandwidth. A: Finally, I used the bundle loader with a regexp to solve my problem. Code like this: { test: /(en|cn)\.rt$/, loaders: [ "bundle?regExp=(en|cn)\.rt$&name=[1]", "react-templates-loader" ] }, then all xxx.en.rt files will be bundled into one en.js
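As a hedged sketch of how post.js might then consume the per-language chunks (the loadTemplate helper and the bundles map are illustrative, not part of react-templates): with the bundle loader, each required template becomes a function that fetches its chunk on demand and then invokes a callback, so only the requested language is downloaded:

```javascript
// Hypothetical helper: given a map of language -> bundle-loader function,
// load only the requested language's template chunk (falling back to "en").
function loadTemplate(lang, bundles, done) {
  var load = bundles[lang] || bundles.en; // assumed fallback language
  load(done); // the bundle loader fetches the chunk, then calls done(template)
}
```

A usage sketch: loadTemplate('cn', { en: require('./post.en.rt'), cn: require('./post.cn.rt') }, render).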
{ "language": "en", "url": "https://stackoverflow.com/questions/37318727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: When and where should I drain the NSAutoreleasePool? I understand how to add items to an NSAutoreleasePool and how to drain the pool afterwards. But what's missing in my education is when and where this should be done. Clearly just doing it in main makes no sense, because that's no different than never releasing the memory at all. But the documentation I've read so far hasn't offered me any other guidance on this. A: It's useful to use an autorelease pool when you are allocating autoreleased objects in a loop; that will reduce the peak memory consumption of the underlying autorelease pool. More info on autorelease pools at https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/MemoryMgmt/Articles/mmAutoreleasePools.html A: The autorelease pool in main fulfills your application's responsibility to Cocoa that an autorelease pool always be available. This pool is drained at every cycle of the main event loop. Further, each NSThread you create must have its own autorelease pool. Beyond that, it is simply an issue of estimating how many autoreleased objects you are creating before the main autorelease pool drains. You may also use Instruments to look at the peak memory footprint as further evidence of where an autorelease pool may be used. A: The only time you need to manually manage NSAutoreleasePool objects is when you are running in a thread. If the thread doesn't use much memory then drain at the beginning and drain at the end. Otherwise drain every so many loop iterations. How many iterations between draining the pool depends on the amount of memory you are using in the pool. The more often you drain, the more efficient your memory usage is. If you are doing this for something like a particle system with tens of thousands of particles, then you are better off not allocating & releasing memory all the time but instead allocating once and using a ring buffer or something similar.
{ "language": "en", "url": "https://stackoverflow.com/questions/9793128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Docker on AWS filling up its thin pool while running somehow? I've got a server on ElasticBeanstalk on AWS. Even though no images are being pulled, the thin pool continually fills for under a day until the filesystem is re-mounted as read-only and the applications die. This happens with Docker 1.12.6 on latest Amazon AMI. I can't really make heads or tails of it. When an EC2 instance (hosting Beanstalk) starts, it has about 1.3GB in the thin pool. By the time my 1.2GB image is running, it has about 3.6GB (this is remembered info, it is very close to this). OK, that's fine. Cut to 5 hours later... (from the EC2 instance hosting it) docker info returns: Storage Driver: devicemapper Pool Name: docker-docker--pool Pool Blocksize: 524.3 kB Base Device Size: 107.4 GB Backing Filesystem: ext4 Data file: Metadata file: Data Space Used: 8.489 GB Data Space Total: 12.73 GB Data Space Available: 4.245 GB lvs agrees. In another few hours that will grow to be 12.73GB used and 0 B free. dmesg will report: [2077620.433382] Buffer I/O error on device dm-4, logical block 2501385 [2077620.437372] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error -28 writing to inode 4988708 (offset 0 size 8388608 starting block 2501632) [2077620.444394] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error [2077620.473581] EXT4-fs warning (device dm-4): ext4_end_bio:329: I/O error -28 writing to inode 4988708 (offset 8388608 size 5840896 starting block 2502912) [2077623.814437] Aborting journal on device dm-4-8. [2077649.052965] EXT4-fs error (device dm-4): ext4_journal_check_start:56: Detected aborted journal [2077649.058116] EXT4-fs (dm-4): Remounting filesystem read-only Yet hardly any space is used in the container itself... 
(inside the Docker container:) df -h /dev/mapper/docker-202:1-394781-1exxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 99G 1.7G 92G 2% / tmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/xvda1 25G 1.4G 24G 6% /etc/hosts shm 64M 0 64M 0% /dev/shm du -sh / 1.7G / How can this space be filling up? My programs are doing very low-volume logging, and the log files are extremely small. I have good reason not to write them to stdout/stderr. xxx@xxxxxx:/var/log# du -sh . 6.2M . I also did docker logs and the output is less than 7k: >docker logs ecs-awseb-xxxxxxxxxxxxxxxxxxx > w4 >ls -alh -rw-r--r-- 1 root root 6.4K Mar 27 19:23 w4 The same container does NOT do this to my local docker setup. And finally, running du -sh / on the EC2 instance itself reveals less than 1.4GB usage. It can't be being filled up by logfiles, and it isn't being filled inside the container. What can be going on? I am at my wits' end!
{ "language": "en", "url": "https://stackoverflow.com/questions/43061333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: WHMCS Remove "Country" selector from checkout page I work on WHMCS customizations, and want to hide the Country selector field. I tried commenting the country field out of the template, and the field is not shown, but when I try to check out, I get this error: Please correct the following errors before continuing: Please choose your country from the drop down box This is the code from the template that handles that part: <div class="col-sm-12"> <div class="form-group prepend-icon"> <label for="inputCountry" class="field-icon" id="inputCountryIcon"> <i class="fa fa-globe"></i> </label> <select name="country" id="inputCountry" class="field"{if $loggedin} disabled="disabled"{/if}> {foreach $countries as $countrycode => $countrylabel} <option value="{$countrycode}"{if (!$country && $countrycode == $defaultcountry) || $countrycode eq $country} selected{/if}> {$countrylabel} </option> {/foreach} </select> </div> </div> and also this one: <div class="col-sm-12"> <div class="form-group prepend-icon"> <select name="domaincontactcountry" id="inputDCCountry" class="field"> {foreach $countries as $countrycode => $countrylabel} <option value="{$countrycode}"{if (!$domaincontact.country && $countrycode == $defaultcountry) || $countrycode eq $domaincontact.country} selected{/if}> {$countrylabel} </option> {/foreach} </select> </div> </div> </div> So my question is how to proceed with checkout without that warning? A: For each country div add style="display:none", e.g.: This will hide the div but will keep the default country selection.
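A hedged variation on the answer's idea (the hideCountryFields helper is illustrative, not part of WHMCS; the element ids come from the template above): instead of editing the template markup, hide the wrapping divs from JavaScript, leaving the selects in the DOM so their default values are still posted with the form:

```javascript
// Hide each country <select>'s wrapper via an inline style, keeping the
// element in the form so validation still sees a selected country.
function hideCountryFields(doc) {
  ['inputCountry', 'inputDCCountry'].forEach(function (id) {
    var el = doc.getElementById(id);
    if (el && el.parentNode) el.parentNode.style.display = 'none';
  });
}

// In the page: hideCountryFields(document);
```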
{ "language": "en", "url": "https://stackoverflow.com/questions/49032685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to display Monday as first day CalendarView With: Calendar cal = Calendar.getInstance(); cal.setFirstDayOfWeek(Calendar.MONDAY); you just set Monday's integer value to 0, but I want to have Monday displayed as the first day (at the left end, and Sunday at the right) A: Use the xml parameter android:firstDayOfWeek with a value from Calendar; 2 is Monday. <CalendarView android:id="@+id/calendarView1" android:firstDayOfWeek="2" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_alignParentBottom="true" android:layout_centerHorizontal="true" android:layout_marginBottom="157dp" /> Or you can specify it from code: CalendarView calendarView = findViewById(R.id.calendarView1); calendarView.setFirstDayOfWeek(Calendar.MONDAY); A: String[] days = null; DateFormatSymbols names = new DateFormatSymbols(); days = names.getWeekdays(); for (int i=1; i<8; ++i) { System.out.println(days[i]); }
{ "language": "en", "url": "https://stackoverflow.com/questions/15218983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: using puppeteer with headless_shell I am using: https://www.npmjs.com/package/puppeteer-pdf which has puppeteer as a dependency. Heroku is angry about my >500mb slug size, so I am trying to reduce it. It looks like if I can set up puppeteer to use headless_shell instead of the full chromium download, then I can greatly reduce the size of my npm modules. However I am struggling to get it installed and working with headless_shell: mkdir headless && cd headless npm init -y touch .npmrc echo "PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true" > .npmrc npm i puppeteer Then I run node and try to launch puppeteer: const puppeteer = require('puppeteer'); puppeteer.launch({executablePath: 'out/Release/headless_shell'}); Uncaught: Error: Failed to launch the browser process! spawn out/Release/headless_shell ENOENT TROUBLESHOOTING: https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md at onClose (/Users/me/delete/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserRunner.js:194:20) If anyone knows how to get set up with headless_shell, that would be great.
{ "language": "en", "url": "https://stackoverflow.com/questions/68639229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Suggest a JPA unit test framework How do I unit test JPA code? Is there any way to generate the unit test cases themselves? Note: I am lazy and new to unit testing. A: Check this out, Unitils. Here is a related discussion, with some example code. Here is an example showing DBUnit, Spring and OpenJPA together. You might not use all of them, but this can take you somewhere if you want to go with DBUnit, I believe. A: I'm in the middle of trying out OpenEJB (http://openejb.apache.org/) for my ongoing project. It's an embeddable container with support for EJB 3.0 (and partially 3.1). My first impression of it is fairly good. A: Recently I faced the problem of testing: * *business logic operating on JPA Entities *complex JPQL queries mixed with business logic. I used JPA-Unit and it solved all my problems.
{ "language": "en", "url": "https://stackoverflow.com/questions/452630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to show one table at a time when you have many in Angularjs? The following HTML code will show some tables (all at once) with 4 rows in each. I want to show them one by one on showNext button click. The id is generated automatically. I need help with the jQuery code. <div ng-init="outerIndex=($index)" tableId="{{'table'+outerIndex}}" ng-repeat="oneList in mainList"> <table class="animate-if"> <thead> <tr> <th>Names</th> <th>Address</th> </tr> </thead> <tbody> <tr ng-repeat=" one in oneList | limitTo: 4 "> <td>{{one.name}}</td> <td>{{one.address}}</td> </tr> </tbody> </table> </div> <button class="btn " ng-click="showNext($tableId) "> Next </button> And here is the JavaScript code. I am getting the table id here, but how do I show the tables one by one? $scope.showNext = function (tableId) { $('[tableId ^= table]').each(function (i, e) { console.log($(e).attr('tableId')); //this is printing table-id on console //$('tableId').hide(); }); } Note: please see images for expected result scenario: without-click first-click and so on.
A: Consider the following code snippet to dynamically change tables: var mainList = [{tableId:1}, {tableId:2}, {tableId:3}] function mainController(){ var vm = this; vm.mainList = mainList; vm.currentIndex = 0; vm.showNext = showNext; vm.currentTable = currentTable(); function showNext(){ vm.currentIndex++; vm.currentTable = currentTable(); } function currentTable(){ return vm.mainList[vm.currentIndex]; } } <div tableId="{{'table'+currentTable.tableId}}" > <table class="animate-if"> <thead> <tr> <th>Names</th> <th>Address</th> </tr> </thead> <tbody> <tr ng-repeat=" one in currentTable | limitTo: 4 "> <td>{{one.name}}</td> <td>{{one.address}}</td> </tr> </tbody> </table> </div> Next If you really need to have ng-repeat over all available tables and show/hide tables on next press, then modify the code in this way: var mainList = [{tableId:1}, {tableId:2}, {tableId:3}] function mainController(){ var vm = this; vm.mainList = mainList; vm.currentIndex = 0; vm.showNext = showNext; vm.isActive = isActive; vm.currentTable = currentTable(); function showNext(){ vm.currentIndex++; vm.currentTable = currentTable(); } function currentTable(){ return vm.mainList[vm.currentIndex]; } function isActive(table){ var tableIndex = vm.mainList.indexOf(table); return tableIndex === vm.currentIndex; } } <div tableId="{{'table'+currentTable.tableId}}" ng-repeat="table in mainList" ng-if="isActive(table)"> <table class="animate-if"> <thead> <tr> <th>Names</th> <th>Address</th> </tr> </thead> <tbody> <tr ng-repeat=" one in currentTable | limitTo: 4 "> <td>{{one.name}}</td> <td>{{one.address}}</td> </tr> </tbody> </table> </div> <button class="btn " ng-click="showNext() "> Next </button> A: Replace the below code with your appropriate data structures, since it is unclear from the question.
var myApp = angular.module('myApp',[]); //myApp.directive('myDirective', function() {}); //myApp.factory('myService', function() {}); function MyCtrl($scope) { $scope.mainList=[[{name:"dummy1"},{name:"dummy2"}],[{name:"dummy3"},{name:"dummy4"}]]; $scope.count=1; $scope.showNext = function () { $scope.count=$scope.count+1; } } <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script> <div ng-app="myApp" ng-controller="MyCtrl"> <div ng-repeat="oneList in mainList | limitTo: count "> <table class="animate-if"> <thead> <tr> <th>Names</th> </tr> </thead> <tbody> <tr ng-repeat="one in oneList | limitTo: 4 "> <td>{{one.name}}</td> </tr> </tbody> </table> </div> <button class="btn" ng-click="showNext()"> Next </button> </div> A: After some workaround I found the answer what I was looking for. Thanks to @Ihor Korotenko. One modification in html file is "ng-show" attribute. <div ng-init="outerIndex=($index)" tableId="{{'table'+outerIndex}}" ng-repeat="oneList in mainList" ng-show="oneList.flagValue"> <table class="animate-if"> <thead> <tr> <th>Names</th> <th>Address</th> </tr> </thead> <tbody> <tr ng-repeat=" one in oneList | limitTo: 4 "> <td>{{one.name}}</td> <td>{{one.address}}</td> </tr> </tbody> </table> </div> <button class="btn " ng-click="more()"> More </button> <button class="btn " ng-click="less()"> Less </button> Now here is the jQuery code: $scope.getValueFromSvc= function ($index) { Service.myMethod().then(function (response) { var total = response.responseData; var mainList= []; // contains all oneList var oneList = []; //contain one table $.each(total, function (i, value){ // iterate data here to put in oneList then mainList } }); // assigning flag which will be helpful on applying condition in next jQuery segment $.each(mainList, function (i, value) { if (i == 0) value.flagValue = true; else value.flagValue = false; }); $scope.mainList = mainList; }); } And finally to show table on button click jQuery goes like... 
$scope.more = function () { $.each($scope.mainList, function (i, value) { if (!value.flagValue) { value.flagValue = true; return false; } }); }; $scope.less = function () { for (var v = $scope.mainList.length - 1; v > 0; v--){ if ($scope.mainList[v].flagValue) { $scope.mainList[v].flagValue = false; break; } } };
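The more()/less() handlers above reduce to flipping the first hidden flag (or the last visible one) in a list. A hedged sketch of just that core logic, with hypothetical helper names (the flags array mirrors each table's flagValue):

```javascript
// showMore: reveal the next hidden table; showLess: hide the last visible
// one, always keeping the first table on screen.
function showMore(flags) {
  var i = flags.indexOf(false);
  if (i !== -1) flags[i] = true;
  return flags;
}

function showLess(flags) {
  var i = flags.lastIndexOf(true);
  if (i > 0) flags[i] = false; // index 0 stays visible
  return flags;
}
```

Extracting the logic this way makes it easy to unit test without Angular or the DOM.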
{ "language": "en", "url": "https://stackoverflow.com/questions/45159678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How should Erlang filter the elements in the list, and add punctuation and []? -module(solarSystem). -export([process_csv/1, is_numeric/1, parseALine/2, parse/1, expandT/1, expandT/2, parseNames/1]). parseALine(false, T) -> T; parseALine(true, T) -> T. parse([Name, Colour, Distance, Angle, AngleVelocity, Radius, "1" | T]) -> T;%Where T is a list of names of other objects in the solar system parse([Name, Colour, Distance, Angle, AngleVelocity, Radius | T]) -> T. parseNames([H | T]) -> H. expandT(T) -> T. expandT([], Sep) -> []; expandT([H | T], Sep) -> T. % https://rosettacode.org/wiki/Determine_if_a_string_is_numeric#Erlang is_numeric(L) -> S = trim(L, ""), Float = (catch erlang:list_to_float(S)), Int = (catch erlang:list_to_integer(S)), is_number(Float) orelse is_number(Int). trim(A) -> A. trim([], A) -> A; trim([32 | T], A) -> trim(T, A); trim([H | T], A) -> trim(T, A ++ [H]). process_csv(L) -> X = parse(L), expandT(X). The problem is that a main program will call the process_csv/1 function in my module, where L will be a line from a file like this: [["name "," col"," dist"," a"," angv"," r "," ..."],["apollo11 ","white"," 0.1"," 0"," 77760"," 0.15"]] Or like this: ["planets ","earth","venus "] Or like this: ["a","b"] I need to display it as follows: apollo11 =["white", 0.1, 0, 77760, 0.15,[]]; Planets =[earth,venus] a,b [[59],[97],[44],[98]] My problem is that no matter how I change things, it only shows part of the output, and there is no punctuation. I can't find a way to split the list. In addition, because Erlang is a niche programming language, I can't even find examples online. So, can anyone help me? Thank you very much. Also, I am restricted from using recursion. A: I think the first problem is that it is hard to link what you are trying to achieve with what your code says thus far. Therefore, this feedback may not be exactly what you are looking for, but it might give some ideas.
Let's structure the problem into the common elements: (1) input, (2) process, and (3) output. * *Input You mentioned that L will be a file, but I assume it is a line in a file, where each line can be one of the 3 (three) samples. In this regard, the samples also do not have a consistent pattern. For this, we can build a function to convert each line of the file into Erlang terms and pass the result to the next step. *Process The question also does not mention the specific logic in parsing/processing the input. You also seem to care about the data types, so we will convert and display the results accordingly. Erlang as a functional language will naturally be handling lists, so in most cases we will need to use functions from the lists module *Output You didn't specifically mention where you want to display the result (an output file, screen/erlang shell, etc), so let's assume you just want to display it in the standard output/erlang shell. Sample file content test1.txt (please note the dot at the end of each line) [["name "," col"," dist"," a"," angv"," r "],["apollo11 ","white","0.1"," 0"," 77760"," 0.15"]]. ["planets ","earth","venus "]. ["a","b"]. Howto run: solarSystem:process_file("/Users/macbook/Documents/test1.txt"). Sample Result: (dev01@Macbooks-MacBook-Pro-3)3> solarSystem:process_file("/Users/macbook/Documents/test1.txt"). apollo11 = ["white",0.1,0,77760,0.15] planets = ["earth","venus"] a = ["b"] Done processing 3 line(s) ok Module code: -module(solarSystem). -export([process_file/1]). -export([process_line/2]). -export([format_item/1]). %%This is the main function, input is file full path %%Howto call: solarSystem:process_file("file_full_path").
process_file(Filename) -> %%Use file:consult to convert the file content into erlang terms %%File content is a dot (".") separated line {StatusOpen, Result} = file:consult(Filename), case StatusOpen of ok -> %%Result is a list and therefore each element must be handled using lists function Ctr = lists:foldl(fun process_line/2, 0, Result), io:format("Done processing ~p line(s) ~n", [Ctr]); _ -> %%This is for the case where file not available io:format("Error converting file ~p due to '~p' ~n", [Filename, Result]) end. process_line(Term, CtrIn) -> %%Assume there are few possibilities of element. There are so many ways to process the data as long as the input pattern is clear. %%We basically need to identify all possibilities and handle them accordingly. %%Of course there are smarter (dynamic) ways to handle them, but below may give you some ideas. case Term of %%1. This is to handle this pattern -> [["name "," col"," dist"," a"," angv"," r "],["apollo11 ","white"," 0.1"," 0"," 77760"," 0.15"]] [[_, _, _, _, _, _], [Name | OtherParams]] -> %%At this point, Name = "apollo11", OtherParamsList = ["white"," 0.1"," 0"," 77760"," 0.15"] OtherParamsFmt = lists:map(fun format_item/1, OtherParams), %%Display the result to standard output io:format("~s = ~p ~n", [string:trim(Name), OtherParamsFmt]); %%2. This is to handle this pattern -> ["planets ","earth","venus "] [Name | OtherParams] -> %%At this point, Name = "planets ", OtherParamsList = ["earth","venus "] OtherParamsFmt = lists:map(fun format_item/1, OtherParams), %%Display the result to standard output io:format("~s = ~p ~n", [string:trim(Name), OtherParamsFmt]); %%3. Other cases _ -> %%Display the warning to standard output io:format("Unknown pattern ~p ~n", [Term]) end, CtrIn + 1. %%This is to format the string accordingly format_item(Str) -> StrTrim = string:trim(Str), %%first, trim it format_as_needed(StrTrim). 
format_as_needed(Str) -> Float = (catch erlang:list_to_float(Str)), case Float of {'EXIT', _} -> %%It is not a float -> check if it is an integer Int = (catch erlang:list_to_integer(Str)), case Int of {'EXIT', _} -> %%It is not an integer -> return as is (string) Str; _ -> %%It is an int Int end; _ -> %%It is a float Float end.
{ "language": "en", "url": "https://stackoverflow.com/questions/65084224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Change row color using GridMvc How can I change the background color of a row using GridMvc? I've looked at the posting here: Changing Background of GridMvc Column on Condition I've tried using the SetRowCssClasses property (see below) but nothing happens. So, I don't know if I'm using it incorrectly or I need to do more than what I have in the below. Given that I'm relatively new to MVC and definitely new to GridMvc, it could be both. So, any help on what I need to change to get the rows where the bitTestOrderFlag is true to red is appreciated. Thank you. @Html.Grid(Model).SetRowCssClasses(i => i.bitTestOrderFlag ? "cssClassRed" : string.Empty).Columns(columns => { columns.Add(c => c.intOrderNumber).Titled("Order Number") .Encoded(false) .Sanitized(false) .SetWidth(30) .RenderValueAs(o => Html.ActionLink(o.intOrderNumber.ToString(), "GetOrderDetails", "Orders", new { orderNumber = o.intOrderNumber }, null)); columns.Add(c => c.strCustomerNumber).Titled("Customer Number"); columns.Add(c => c.dtEntryDate).Titled("Entered Date").Format("{0:MM/dd/yyyy}"); columns.Add(c => c.strBillToName).Titled("BillTo Name"); columns.Add(c => c.strBillToStreetAddr).Titled("BillTo Street Address"); columns.Add(c => c.strBillToCity).Titled("BillTo City"); columns.Add(c => c.strShipToName).Titled("ShipTo Name"); columns.Add(c => c.strShipToStreetAddr).Titled("ShipTo Street Address"); columns.Add(c => c.strShipToCity).Titled("ShipTo City"); columns.Add(c => c.strPoNumber).Titled("PO Number"); columns.Add(c => c.bitTestOrderFlag).Titled("TestOrder"); }).WithPaging(8) A: After trying unsuccessfully to add this to Site.css and GridMvc.css: .cssClassRed { background-color:red !important; } I ended up adding a new MyStyleSheet.css to the Content folder and placing the above in it. 
Then I updated the _Layout.cshtml adding to the head section the following: <link href="@Url.Content("~/Content/MyStyleSheet.css")" rel="stylesheet" type="text/css" /> Once I did these two things, the rows were colored red as I needed. I'm not sure whether what I did was the correct way or the best way, but it worked. So, I thought I would share my solution in case others run into a similar issue as I was.
{ "language": "en", "url": "https://stackoverflow.com/questions/63695480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Change the precedence of the swagger-ui.html handler mapping When using springdoc-openapi-ui 1.6 in a Spring Boot 2.7 project, the handler mappings for the /v3/api-docs and /swagger-ui.html urls have different precedences. While working on a project that uses spring-integration-http, I noticed that its inbound components have: * *a lower precedence than the mapper for /v3/api-docs *a higher precedence than the mapper for /swagger-ui.html In other words, in some situations, though the openapi doc is available, the swagger ui isn't. I'll admit that the example is an unorthodox use of Spring Integration, but still, I am wondering if there is any possibility to configure the swagger ui mapper so that it has the same precedence as the openapi doc mapper.
{ "language": "en", "url": "https://stackoverflow.com/questions/75617367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }