Q: R nls exponential curve I have two vectors X and y2, and I wish to fit an exponential curve to the data. I tried many approaches described on Stack Overflow topics but all of them give me just a straight line. e.g. I tried this: model.three <- lm(log(y2) ~ log(X)) plot(X,predict(model.three)) abline(model.three) My data: X <- seq(1:50) Y <- rnorm(50,mean=0,sd=1) y2 <- exp(X) y2 <- Y+y2 A: Is this what you are looking for? model.three <- lm(log(y2) ~ log(X)) plot(X,predict(model.three)) ## Instead of abline(), use this: lines(model.three$fitted.values) A: Your data expresses an exponential relationship between Y and X, which is Y = exp(X) + eps where eps is some noise. Therefore, I would suggest fitting a model between log(Y) and X, to capture the linear relationship between the two: model.three <- lm(log(y2) ~ X) summary(model.three) The summary confirms that the relationship captured is as expected (i.e. the coefficient for X is very close to 1). Since plotting the data on a linear scale will not be useful, I think it is a good idea to plot the fitted straight line with abline. Note: to be exact, it would be more accurate to capture the relationship between y2 and exp(X), but with your data, the fit is essentially perfect.
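To sanity-check the log-linear approach from the second answer, here is an illustrative sketch in Python/NumPy rather than R (the data generation mirrors the R code above; the fixed seed is an added assumption, for reproducibility only):

```python
import numpy as np

np.random.seed(0)  # assumed seed, only so the run is reproducible

# Mirror the R data: y2 = exp(X) + standard-normal noise
X = np.arange(1, 51)
y2 = np.exp(X) + np.random.normal(0, 1, 50)

# Fitting log(y2) against X (not log(X)) recovers the linear relationship
# log(y2) ~ X, so the slope should come out very close to 1.
slope, intercept = np.polyfit(X, np.log(y2), 1)
print(slope, intercept)
```

This is the same reasoning as the answer's `lm(log(y2) ~ X)`: because the noise is additive on `y2` rather than multiplicative, the fit is only approximate for small X, but the slope still lands near 1.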
{ "language": "en", "url": "https://stackoverflow.com/questions/37399316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Overlay numbers from 0000 to 9999 on a picture I have a picture with some white/blank space. My goal is to automatically generate a number from 0000 to 9999 and put it on top of the picture, then export/save it as a png. The result should be

mypicture_0000.png
mypicture_0001.png
...
mypicture_9999.png

Has anyone tried something similar? I am thinking about trying AutoIt, but will that work? If so, which software should I use with AutoIt? Thank you.

A: AutoIt may work, but I'd use Python PIL. You can specify a font, convert the text to a layer and overlay it on top of the preexisting image. EDIT: actually ImageMagick can be easier than PIL: http://www.imagemagick.org/Usage/text/

A: Should not be much of a problem if you have Python and the Python Imaging Library (PIL) installed:

from PIL import Image, ImageFont, ImageDraw

BACKGROUND = '/path/to/background.png'
OUTPUT = '/path/to/mypicture_{0:04d}.png'
START = 0
STOP = 9999

# Create a font object from a True-Type font file and specify the font size.
fontobj = ImageFont.truetype('/path/to/font/arial.ttf', 24)

for i in range(START, STOP + 1):
    img = Image.open(BACKGROUND)
    draw = ImageDraw.Draw(img)
    # Write a text over the background image.
    # Parameters: location(x, y), text, textcolor(R, G, B), fontobject
    draw.text((0, 0), '{0:04d}'.format(i), (255, 0, 0), font=fontobj)
    img.save(OUTPUT.format(i))

print('Script done!')

Please consult the PIL manual for other ways of creating font objects for other font formats
{ "language": "en", "url": "https://stackoverflow.com/questions/8017815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Having trouble sniffing for a DNS response in scapy (Python 3) I currently have this piece of code that is proven to sniff for the DNS query, which is defined within DNSsniff(pkt). I also have a second definition called DNSsniff2(pkt) that tries to sniff for the DNS response; however, I have had little to no luck in getting it to work. Any help would be greatly appreciated :)

import scapy.all as scapy

def DNSsniff(pkt):
    if scapy.IP in pkt:
        ip_src=pkt[scapy.IP].src
        ip_dst=pkt[scapy.IP].dst
        if pkt.haslayer(scapy.DNS) and pkt.getlayer(scapy.DNS.qr) == 1:
            qname=pkt.getlayer(scapy.DNS).qd.qname
            print(str(ip_src) + " -> " + str(ip_dst) + " : " + "(" + str(qname)+ ")")

def DNSsniff2(pkt):
    if scapy.IP in pkt:
        ip_src=pkt[scapy.IP].src
        ip_dst=pkt[scapy.IP].dst
        if pkt.haslayer(scapy.DNS) and pkt.getlayer(scapy.DNS.an) ==1:
            arname=pkt.getlayer(scapy.DNS).an.rdata
            print(str(ip_src) + " -> " + str(ip_dst) + " : " + "(" + str(arname) + ")")

# attempts to capture and filter the necessary DNS response from the server.
capture = scapy.sniff(iface="Ethernet",filter="udp port 53", store=2, count=2)
print("\nsniffing....")
for i in capture:
    DNSsniff(i)
    DNSsniff2(i)
print("\nsniffing dun")

I have tried to change the scapy.DNS.an to scapy.DNS.rr but the code didn't work
{ "language": "en", "url": "https://stackoverflow.com/questions/74602637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Recyclerview with multiple view types I created a recyclerview with multiple view types.First problem is that images don't show and second problem is when i scroll recyclerview, values of its items changes and I don't know how can I fix it. my adapter code: public class HeterogenousRecyclerviewAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder>{ private ArrayList<DataObject> mDataset; Context context; RecyclerView.ViewHolder viewHolder; public HeterogenousRecyclerviewAdapter(ArrayList<DataObject> myDataset) { this.mDataset = myDataset; } @Override public int getItemCount() { return mDataset.size(); } @Override public int getItemViewType(int position) { int view_type=mDataset.get(position).getView_type(); return view_type; } @Override public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { context=parent.getContext(); LayoutInflater inflater = LayoutInflater.from(parent.getContext()); switch (viewType){ case 0: View v1 = inflater.inflate(R.layout.layout_viewholder1, parent, false); viewHolder = new ViewHolder1(v1); break; case 1: View v2 = inflater.inflate(R.layout.layout_viewholder2, parent, false); viewHolder = new ViewHolder2(v2); break; } return viewHolder; } @Override public void onBindViewHolder(RecyclerView.ViewHolder holder, int position) { switch (viewHolder.getItemViewType()){ case 0: ViewHolder1 vh1 = (ViewHolder1) viewHolder; configureViewHolder1(vh1, position); break; case 1: ViewHolder2 vh2 = (ViewHolder2) viewHolder; configureViewHolder2(vh2, position); break; } } private void configureViewHolder1(ViewHolder1 vh1, int position) { if (mDataset != null) { vh1.getLabel1().setText(mDataset.get(position).getName()); //vh1.getLabel2().setText("Hometown: " + user.hometown); } } private void configureViewHolder2(ViewHolder2 vh2, int position) { //vh2.getImageView().setImageResource(R.mipmap.img1); try { Resources res = context.getResources(); int resourceId = res.getIdentifier(mDataset.get(position).getImg(), "mipmap", 
context.getPackageName()); vh2.getImageView().setImageResource(resourceId); } catch (Exception e) { // TODO: handle exception } } } this is my ViewHolder1 class: public class ViewHolder1 extends RecyclerView.ViewHolder { private TextView label1; public ViewHolder1(View v) { super(v); label1 = (TextView) v.findViewById(R.id.text1); //label2 = (TextView) v.findViewById(R.id.text2); } public TextView getLabel1() { return label1; } public void setLabel1(TextView label1) { this.label1 = label1; } } this is ViewHolder1.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical"> <TextView android:id="@+id/text1" android:layout_width="match_parent" android:layout_height="wrap_content" android:gravity="center_vertical" android:textStyle="bold" /> </LinearLayout> this is ViewHolder2 class: public class ViewHolder2 extends RecyclerView.ViewHolder { private ImageView ivExample; public ViewHolder2(View v) { super(v); ivExample = (ImageView) v.findViewById(R.id.ivExample); } public ImageView getImageView() { return ivExample; } public void setImageView(ImageView ivExample) { this.ivExample = ivExample; } } this is ViewHolder2.xml: <?xml version="1.0" encoding="utf-8"?> <ImageView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/ivExample" android:adjustViewBounds="true" android:scaleType="fitXY" android:layout_width="200dp" android:layout_height="200dp"/> this is my main activity code: ArrayList<DataObject> personList = new ArrayList<DataObject>(); DataObjectDBAdapter dataObjectDBAdapter = new DataObjectDBAdapter(getApplicationContext()); personList = dataObjectDBAdapter.getALL(); adapter = new HeterogenousRecyclerviewAdapter(personList); mRecyclerView = (RecyclerView) findViewById(R.id.my_recycler_view); mLayoutManager = new LinearLayoutManager(this, LinearLayoutManager.VERTICAL, false); 
mRecyclerView.setLayoutManager(mLayoutManager); mRecyclerView.setAdapter(adapter); Can somebody help me? A: I hadn't saved the names of the images in the SQLite database, so the first problem was because of that, and the second problem was in my adapter code. I had to write "viewHolder" instead of "holder" in onBindViewHolder(), so the correct method is: @Override public void onBindViewHolder(RecyclerView.ViewHolder viewHolder, int position) { switch (viewHolder.getItemViewType()){ case 0: ViewHolder1 vh1 = (ViewHolder1) viewHolder; configureViewHolder1(vh1, position); break; case 1: ViewHolder2 vh2 = (ViewHolder2) viewHolder; configureViewHolder2(vh2, position); break; } } A: First, make sure the given statement only returns 0 or 1, because view types must start with 0: int view_type=mDataset.get(position).getView_type(); There is no need to keep viewHolder as a member of the class: RecyclerView.ViewHolder viewHolder; Don't check viewType in onBindViewHolder. Instead, check the holder with instanceof: @Override public void onBindViewHolder(RecyclerView.ViewHolder holder, int position) { if (holder instanceof ViewHolder1) { ((ViewHolder1) holder).getLabel1().setText(mDataset.get(position).getName()); } else { try { Resources res = context.getResources(); int resourceId = res.getIdentifier(mDataset.get(position).getImg(), "mipmap", context.getPackageName()); ((ViewHolder2) holder).getImageView().setImageResource(resourceId); } catch (Exception e) {} } } A: I had the same problem (values change after scrolling), and overriding these methods solved it: @Override public long getItemId(int position) { return position; } @Override public int getItemCount() { return dataList.size(); } As for the image, check whether the image resourceId is null or not; I am not sure if your way of setting the resource is good. A: While using multiple view types with dynamic data some users may face issues like duplicate data items, data being swapped between questions. 
To avoid that you have to set a unique view type id for every item. @Override public int getItemViewType(int position) { // Here you can get decide from your model's ArrayList, which type of view you need to load. Like if (list.get(position).type == Something) { // Put your condition, according to your requirements return VIEW_TYPE_ONE; } return VIEW_TYPE_TWO; } The above code for getItemViewType can fail where 3 consecutive items will have the same type. For example, if the user enters ans1 in item 1 edit text,ans2 in item 2 edit text, ans3 in item 3 edit text and scroll the recycler view up and down then some users may face issues like duplicate data items, data being swapped between questions. Formula to create unique view type id : Formula : pos * Constants.Max + viewType; set value Constants.Max = 100000; class DataAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> { private final int VIEW_TYPE_EDIT_TEXT = 1; private final int VIEW_TYPE_IMAGE_VIEW = 2; private final ArrayList<DataModel> dataModelArrayList = new ArrayList<>(); @Override public int getItemViewType(int position) { String dataType = dataModelArrayList.get(position).getDataType(); int viewType = VIEW_TYPE_EDIT_TEXT; switch (dataType) { case "A": case "E": viewType = VIEW_TYPE_EDIT_TEXT; break; case "I": viewType = VIEW_TYPE_IMAGE_VIEW; break; } int pos = position + 1; return pos * Constants.Max + viewType; } @Override public long getItemId(int position) { return position; } @NonNull @Override public RecyclerView.ViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int type) { int count = (int) Math.ceil(type / Constants.Max); int viewType = type - count * Constants.Max; if (viewType == VIEW_TYPE_EDIT_TEXT) { return new ViewHolderTypeEditText(LayoutInflater.from(context).inflate(R.layout.adapter_status_prompts_et_item, parent, false)); } else if (viewType == VIEW_TYPE_IMAGE_VIEW) { return new 
ViewHolderTypeImageView(LayoutInflater.from(context).inflate(R.layout.adapter_status_prompts_sp_item, parent, false)); } return new ViewHolderTypeEditText(LayoutInflater.from(context).inflate(R.layout.adapter_status_prompts_et_item, parent, false)); } @Override public void onBindViewHolder(@NonNull RecyclerView.ViewHolder holder, int pos) { int type = holder.getItemViewType(); int count = (int) Math.ceil(type / Constants.Max); int viewType = type - count * Constants.Max; if (viewType == VIEW_TYPE_EDIT_TEXT) { DataAdapter.ViewHolderTypeEditText viewHolderTypeEditText = (DataAdapter.ViewHolderTypeEditText) holder; } else if (viewType == VIEW_TYPE_IMAGE_VIEW) { DataAdapter.ViewHolderTypeImageView viewHolderTypeImageView = (DataAdapter.ViewHolderTypeImageView) holder; } } @Override public int getItemCount() { return dataModelArrayList.size(); } public class ViewHolderTypeEditText extends RecyclerView.ViewHolder { EditText etText; public ViewHolderTypeEditText(@NonNull View itemView) { super(itemView); etText = itemView.findViewById(R.id.et_text); } } public class ViewHolderTypeImageView extends RecyclerView.ViewHolder { ImageView imageView; public ViewHolderTypeImageView(@NonNull View itemView) { super(itemView); imageView = itemView.findViewById(R.id.image_view); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/34469254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Laravel groupBy rank resulting in multiple grouped rows I'm using the groupBy() function on my query to get some stats on two groups of athletes. I have a table with my athletes and one with their performance. - athletes table -- id -- name -- group_id - performance table -- id -- DateTime -- athlete_id -- sport_id -- time Each athlete does the same sport for multiple days (which is not the same for all athletes). I would like to see the evolution of the time average over days per group in a given sport. I run this query: $query = '*, AVG(time) as average_time' ; $Performance = Performance::join('athletes','performance.athlete_id','=','athletes.id') ->select(\DB::raw($query)) ->where('performance.sport_id', '=', '43') ->orderBy('group_id', 'asc') ->groupBy('group_id') ->get() ->toArray(); However, this gives me the average for a given sport_id but not over each day. How can I see the average for every day in a given sport, grouped by group_id? A: If DateTime is a timestamp then you can use the MySQL date functions, specifically GROUP BY WEEKDAY(timestamp_field). Added into your query (the function call is wrapped in \DB::raw() so the query builder does not quote it as a column name) it looks like this: $query = '*, AVG(time) as average_time' ; $Performance = Performance::join('athletes','performance.athlete_id','=','athletes.id') ->select(\DB::raw($query)) ->where('performance.sport_id', '=', '43') ->orderBy('group_id', 'asc') ->groupBy(\DB::raw('WEEKDAY(DateTime)')) ->groupBy('group_id') ->get() ->toArray();
{ "language": "en", "url": "https://stackoverflow.com/questions/23047259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Vue: Setting data from bound prop is using the default value When I pass in name as a prop, it works as expected and sets the nameData data field so that I can change it within the component. Parent <child name="charles"></child> Child data() { return { nameData: this.name } }, props: { name: {type: String, default: "NONE"} } When I bind the prop like below, the nameData data field is set to the default prop, which is "NONE". Why is that? Parent data() { return { firstName: "Charles" } } <child :name="firstName"></child> Child data() { return { nameData: this.name } }, props: { name: {type: String, default: "NONE"} } A: See my example * First child component works as expected (your code) * Second displays "NONE" because its data is initialized with the prop value, which is undefined at the time the (child's) data() is executed. Any change to the prop in the future (in mounted in my example) won't affect child's data... const child = Vue.component('child', { data() { return { nameData: this.name } }, props: { name: { type: String, default: "NONE" } }, template: `<div> {{ nameData }} </div>` }) const vm = new Vue({ el: "#app", components: { child }, data() { return { firstName: "Charles", secondName: undefined } }, mounted() { this.secondName = "Fred" } }) <script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script> <div id="app"> <child :name="firstName"></child> <child :name="secondName"></child> </div> A: name="charles" - you passed down the string "charles"; :name="firstName" - you passed down a variable "firstName" which seems to be undefined in the parent component at the time of child rendering and the prop in the child component gets the default value you provided it with. UPD: I played a little with Michal's example. You can use computed instead of data() {} or directly a prop itself if you don't need any data transformation. Because it seems that you assign parent's firstName value in async mode or just later. 
const child = Vue.component('child', { computed: { nameData() { return this.name; } }, props: { name: { type: String, default: "NONE" } }, template: `<div> {{ nameData }} </div>` }) const vm = new Vue({ el: "#app", components: { child }, data() { return { firstName: "Charles", secondName: undefined } }, mounted() { this.secondName = "Fred" } }) <script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.10/vue.js"></script> <div id="app"> <child :name="firstName"></child> <child :name="secondName"></child> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/66346835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Android app security test failed! ... Saying, component is not Protected. An intent-filter exists There is a security issue reported by our client about some of our Activities and BroadcastReceivers. The security test result said that (com.****.*****.Activity / BroadcastReceiver) is not protected and an intent-filter exists. The thing they have in common is that they all contain an intent-filter. Please suggest what I should do. A: You can set android:exported="false" for the activity in your manifest: android:exported : This element sets whether the activity can be launched by components of other applications — "true" if it can be, and "false" if not. If "false", the activity can be launched only by components of the same application or applications with the same user ID. If you are using intent filters, you should not set this element "false". If you do so, and an app tries to call the activity, system throws an ActivityNotFoundException. Instead, you should prevent other apps from calling the activity by not setting intent filters for it. If you do not have intent filters, the default value for this element is "false". If you set the element "true", the activity is accessible to any app that knows its exact class name, but does not resolve when the system tries to match an implicit intent. This attribute is not the only way to limit an activity's exposure to other applications. You can also use a permission to limit the external entities that can invoke the activity (see the permission attribute). <activity android:name=".activities.YourActivity" android:exported="false" /> You can do the same for a BroadcastReceiver.
{ "language": "en", "url": "https://stackoverflow.com/questions/44063387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ADF pipeline going into queue state I have a Copy activity where the source and destination are both Blobs. When i tried the copy pipeline previously,it ran successfully. But currently it is going into queue state for a long time i.e. 30 minutes. Can i know the reason behind it? A: This is not an answer/solution to the problem. Since i cannot comment yet, had to put it in the Answer section. It could be delay in assigning the compute resources. Please check the details. You can check the details by hovering mouse pointer between Name and Type beside Copy.
{ "language": "en", "url": "https://stackoverflow.com/questions/61456136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Generating Random Number In Each Row In Oracle Query I want to select all rows of a table followed by a random number between 1 to 9: select t.*, (select dbms_random.value(1,9) num from dual) as RandomNumber from myTable t But the random number is the same from row to row, only different from each run of the query. How do I make the number different from row to row in the same execution? A: you don’t need a select … from dual, just write: SELECT t.*, dbms_random.value(1,9) RandomNumber FROM myTable t A: Something like? select t.*, round(dbms_random.value() * 8) + 1 from foo t; Edit: David has pointed out this gives uneven distribution for 1 and 9. As he points out, the following gives a better distribution: select t.*, floor(dbms_random.value(1, 10)) from foo t; A: If you just use round then the two end numbers (1 and 9) will occur less frequently, to get an even distribution of integers between 1 and 9 then: SELECT MOD(Round(DBMS_RANDOM.Value(1, 99)), 9) + 1 FROM DUAL A: At first I thought that this would work: select DBMS_Random.Value(1,9) output from ... However, this does not generate an even distribution of output values: select output, count(*) from ( select round(dbms_random.value(1,9)) output from dual connect by level <= 1000000) group by output order by 1 1 62423 2 125302 3 125038 4 125207 5 124892 6 124235 7 124832 8 125514 9 62557 The reasons are pretty obvious I think. I'd suggest using something like: floor(dbms_random.value(1,10)) Hence: select output, count(*) from ( select floor(dbms_random.value(1,10)) output from dual connect by level <= 1000000) group by output order by 1 1 111038 2 110912 3 111155 4 111125 5 111084 6 111328 7 110873 8 111532 9 110953
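The uneven distribution that the last answer demonstrates with DBMS_RANDOM is easy to reproduce outside Oracle. Here is an illustrative Python simulation (the sample size and seed are arbitrary choices, not part of the original answer):

```python
import math
import random
from collections import Counter

random.seed(42)  # arbitrary seed, only for reproducibility
N = 100_000

# round(uniform(1, 9)): the endpoints 1 and 9 each own only a half-width
# interval ([1, 1.5) and (8.5, 9]), so they turn up about half as often
# as the middle values 2..8.
rounded = Counter(round(random.uniform(1, 9)) for _ in range(N))

# floor(uniform(1, 10)): every value 1..9 owns a full-width interval,
# so the distribution is uniform.
floored = Counter(math.floor(random.uniform(1, 10)) for _ in range(N))

print(rounded[1], rounded[5], rounded[9])   # endpoints roughly half the middle
print(floored[1], floored[5], floored[9])   # all roughly equal
```

The counts mirror the Oracle `connect by level` experiment in the answer: with `round` the endpoint buckets land near N/16 while the middle buckets land near N/8, and with `floor` all nine buckets land near N/9.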
{ "language": "en", "url": "https://stackoverflow.com/questions/1568630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: "Possible multiple enumeration of IEnumerable" issue? I'm getting a Possible multiple enumeration of IEnumerable with Resharper and I'm trying to find out if it's really an issue or not. This is my method: public IEnumerable<Contact> GetContacts(IContactManager contactManager, string query) { IEnumerable<Contact> contacts = contactManager.GetContacts(query); if (contacts.Any()) return contacts; // Get the warning on this line // Do some other stuff return new[] { new Contact { Name = "Example" } } } Should be obvious, but I'm doing a search for Contact and if the search returns no results I'm returning an array of default values. The consumer should just receive a list which can be enumerated, not modified. Where is the "multiple enumeration" here? And if there is indeed one, is this not the best type to use in the situation? A: The multiple enumeration potential is you calling Any, which will cause the first enumeration, and then a potential second enumeration by the caller of that method. In this instance, I'd guess it is mostly guaranteed that two enumerations will occur at least. The warning exists because an IEnumerable can disguise something expensive such as a database call (most likely an IQueryable), and as IEnumerable doesn't have caching as part of it's contract, it will re-enumerate the source fresh. This can lead to performance issues later on (we have been stung by this a surprising amount and we don't even use IQueryable, ours was on domain model traversal). That said, it is still only a warning, and if you are aware of the potential expense of calling an enumerable over potentially slow source multiple times then you can suppress it. The standard answer to caching the results is ToList or ToArray. Although I do remember making an IRepeatable version of IEnumerable once that internally cached as it went along. 
That was lost in the depths of my gratuitous code library :-) A: Enumerable.Any executes the query to check whether or not the sequence contains elements. You could use DefaultIfEmpty to provide a different default value if there are no elements: public IEnumerable<Contact> GetContacts(IContactManager contactManager, string query) { IEnumerable<Contact> contacts = contactManager.GetContacts(query) .DefaultIfEmpty(new Contact { Name = "Example" }); return contacts; } Note that this overload is not supported in LINQ-To-SQL.
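The same pitfall exists with lazy sequences in other languages, which makes the warning easy to see concretely. A sketch in Python, where a generator function plays the role of the deferred IEnumerable (the contact names and the call counter are made up for illustration):

```python
call_count = 0

def get_contacts(query):
    # Stand-in for an expensive source (e.g. a database query) that
    # re-executes every time it is enumerated.
    global call_count
    call_count += 1
    yield from ["alice", "bob"]

# Double enumeration: the Any()-style check, then the caller's own pass.
if any(True for _ in get_contacts("q")):
    names = list(get_contacts("q"))
print(call_count)  # -> 2: the source executed twice

# The usual fix, equivalent to ToList()/ToArray(): materialize once.
call_count = 0
cached = list(get_contacts("q"))
if cached:
    names = cached
print(call_count)  # -> 1: the source executed once
```

This is exactly why the ReSharper warning exists: nothing in the IEnumerable contract says the source caches, so every enumeration may repeat the underlying work.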
{ "language": "en", "url": "https://stackoverflow.com/questions/23335853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Filtering with two conditions - Remove duplicates less than a certain value while keeping the original I have a table:

total type
23 Original
3 Duplicate
11 Duplicate
5 Original
16 Duplicate
4 Duplicate

I want to filter the df['total'] column for only values greater than 10; however, I want to remove only the Duplicates less than or equal to 10. So if an Original row is less than 10 it can still be in the df. This is my desired output:

total type
23 Original
11 Duplicate
5 Original
16 Duplicate

I tried this: df[(df['total'] > 10) & df['type'] == "Duplicate"] but it is not working. Any idea?

A: Each condition must be enclosed in its own parentheses; without them, the & is evaluated before the == comparison. And to get the output you showed, you need to add a condition (df['type'] == "Original"), in my opinion.

a = df[(df['total'] > 10) & (df['type'] == "Duplicate")|(df['type'] == "Original")]
print(a)

Output:

total type
0 23 Original
2 11 Duplicate
3 5 Original
4 16 Duplicate
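The accepted fix can also be condensed: keeping a row whenever it is an Original or its total exceeds 10 produces the same result with one fewer condition (Duplicates above 10 satisfy the second clause, Originals satisfy the first regardless of total). A runnable sketch with the sample data:

```python
import pandas as pd

df = pd.DataFrame({
    "total": [23, 3, 11, 5, 16, 4],
    "type": ["Original", "Duplicate", "Duplicate", "Original",
             "Duplicate", "Duplicate"],
})

# Keep Originals unconditionally; Duplicates only when total > 10.
out = df[(df["type"] == "Original") | (df["total"] > 10)]
print(out)
```

This prints the four desired rows (23 Original, 11 Duplicate, 5 Original, 16 Duplicate) with their original index labels, matching the answer's output.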
{ "language": "en", "url": "https://stackoverflow.com/questions/72217997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Firebase do not return ascending order as expected by using orderByKey I have some data which stored on Firebase by using timestamp as its key value, but I cannot get all of them by sorting it with orderByKey as expected: returning value do not back in ascending order. My JSON structure looks like: { "1476986154" : { "Cons" : { "Black" : 91.531099, "Cancel" : 98.832554, "Happy" : 97.104925, "Pair" : 95.515542 }, "Fairy" : { "Apple" : { "Chair" : { "Area" : 1, "Count" : 96, "Teen" : 0.162139, "Score" : 95.16093 }, "Fake" : { "Area" : 3, "Count" : 98, "Teen" : 0.683259, "Score" : 98.249105 } }, "Dark" : { "Lake" : { "Read" : { "Height" : 0, "Width" : 0, "X" : 0, "Y" : 0 } }, "Red" : { "Read" : { "Height" : 0, "Width" : 0, "X" : 0, "Y" : 0 } } } }, "PhotoName" : "oo", "Versions" : { "Library" : "5.5.6.6" } }, "1477280739" : { "Cons" : { "Black" : 96.389055, "Cancel" : 98.265668, "Happy" : 93.661556, "Pair" : 91.361142 }, "Fairy" : { "Apple" : { "Chair" : { "Area" : 1, "Count" : 100, "Teen" : 0.171286, "Score" : 90.849593 }, "Fake" : { "Area" : 3, "Count" : 99, "Teen" : 0.200965, "Score" : 92.367154 } }, "Dark" : { "Lake" : { "Read" : { "Height" : 0, "Width" : 0, "X" : 0, "Y" : 0 } }, "Red" : { "Read" : { "Height" : 0, "Width" : 0, "X" : 0, "Y" : 0 } } }, }, "Name" : "pp", "Versions" : { "Library" : "5.5.6.6" } } } It returns me the largest one first(1490034200), then from smallest to second-largest value(from 1476510510 to 1488805137). Which confused me since I need to know if the callback is going to the last now. Code is here: I just query for last data before query for all data with same ValueEventListener: DatabaseReference.orderByKey().limitToLast(recordQty).addValueEventListener(valueEventListener); //... DatabaseReference.orderByKey().addValueEventListener(valueEventListener); Can anyone understand why this attempt not work? 
Here's how my listener looks: private class ValueEventOnChangeListener implements ValueEventListener { @Override public void onDataChange(DataSnapshot dataSnapshot) { if (mDataArrayList == null) { mDataArrayList = new ArrayList<>(); } else { mDataArrayList.clear(); } getDataChange(dataSnapshot); if (mListener != null) { mListener.success(mDataArrayList); } } private void getDataChange(DataSnapshot dataSnapshot){ try { if (mSubDataType == null){ for (DataSnapshot data : dataSnapshot.getChildren()){ long timestamp = Long.parseLong(data.getKey()); mDataArrayList.add(timestamp); } } } catch (Exception e){ Log.e(TAG, "onDataChange Exception: " + e.toString()); } } } A: I can't reproduce the problem. If I run this snippet of code: myRef.orderByKey().addValueEventListener(new ValueEventListener() { public ArrayList<Long> mDataArrayList; @Override public void onDataChange(DataSnapshot dataSnapshot) { if (mDataArrayList == null) { mDataArrayList = new ArrayList<>(); } else { mDataArrayList.clear(); } getDataChange(dataSnapshot); System.out.println(mDataArrayList); } private void getDataChange(DataSnapshot dataSnapshot){ try { for (DataSnapshot data : dataSnapshot.getChildren()){ long timestamp = Long.parseLong(data.getKey()); mDataArrayList.add(timestamp); } } catch (Exception e){ System.err.println("onDataChange Exception: " + e.toString()); } } @Override public void onCancelled(DatabaseError databaseError) { throw databaseError.toException(); } }); With this JSON: { "1476986154" : true, "1477280739" : true } It prints: [1476986154, 1477280739] Which seems the correct order to me since 1476986154 < 1477280739.
{ "language": "en", "url": "https://stackoverflow.com/questions/46765943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: cancancan manage a relationship Rails I am running into an issue with the cancancan gem for rails gem 'cancancan', '~> 1.10' I have four models: User, Company, Locations, Groups User: Belongs to Company Company: Has many Locations Location: Belongs to company Group: Belongs to Location In the abilities model I have this: can :manage, Group, :location => {:id => user.company.locations.map{|l| l.id}} When creating a new group I am denied (don't have access) I am looking for the correct way to allow a User to create a group with one of the companies location id's (NOTE: Without cancancan on this all works and all ID's are related and so on). A: To make sure I'm understanding your question completely... You're looking for a way to limit a User's ability to manage Groups based on the locations of the Company that User belongs to? Assuming I've got that correct, I would recommend using #pluck: can :manage, Group, location_id: user.company.locations.pluck(:id) This collects all the :id's on a User's Company's Locations, and ensures a User can only manage Groups that have a :location_id contained within that collection. Functionally, this is identical to what you've done, but is more efficient in two ways: * *It doesn't involve any unnecessary Ruby logic *Using #pluck only queries the :id on a location, as opposed to the entire location and all its attributes Generally speaking, whenever you're accessing your database in Rails, you're better off doing so entirely with ActiveRecord methods. This will help prevent you from wasting time querying for extra information you don't need, and eliminate the extra overhead of using Ruby to re-structure your data. Hope that helps!
{ "language": "en", "url": "https://stackoverflow.com/questions/38643866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Celery receives periodic tasks but doesn't execute them I use celery to run periodic tasks in my Django DRF app. Unfortunately, registered tasks are not executed. Project structure: project_name ___ cron_tasks ______ __init__.py ______ celery.py celery.py: app = Celery('cron_tasks', include=['cron_tasks.celery']) app.conf.broker_url = settings.RABBITMQ_URL app.autodiscover_tasks() app.conf.redbeat_redis_url = settings.REDBEAT_REDIS_URL app.conf.broker_pool_limit = 1 app.conf.broker_heartbeat = None app.conf.broker_connection_timeout = 30 app.conf.worker_prefetch_multiplier = 1 app.conf.beat_schedule = { 'first_warning_overdue': { 'task': 'cron_tasks.celery.test_task', 'schedule': 60.0, # seconds 'options': {'queue': 'default', 'expires': 43100.0} } } @shared_task def test_task(): app.send_task('cron_tasks.celery.test_action') def test_action(): print('action!') # print is not executed # I also tried to change the data, but it never happens too. from django.contrib.auth import get_user_model u = get_user_model().objects.get(id=1) u.first_name = "testttt" u.save() setting.py: RABBITMQ_URL = os.environ.get('RABBITMQ_URL') REDBEAT_REDIS_URL = os.environ.get('REDBEAT_REDIS_URL') CELERY_BROKER_URL = os.environ.get('RABBITMQ_URL') CELERYD_TASK_SOFT_TIME_LIMIT = 60 CELERY_ACCEPT_CONTENT = ['application/json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_RESULT_BACKEND = os.environ.get('REDBEAT_REDIS_URL') CELERY_IMPORTS = ("cron_tasks.celery", ) from kombu import Queue CELERY_DEFAULT_QUEUE = 'default' CELERY_QUEUES = ( Queue('default'), ) CELERY_CREATE_MISSING_QUEUES = True redbeat_redis_url = REDBEAT_REDIS_URL Rabbitmq is running properly. I can see it's there in the celery worker terminal output: - ** ---------- .> transport: amqp://admin:**@localhost:5672/my_vhost Redis is pinging well. I use redis to send beats. 
I run: celery beat -S redbeat.RedBeatScheduler -A cron_tasks.celery:app --loglevel=debug It shows: [2019-02-15 09:32:44,477: DEBUG/MainProcess] beat: Waking up in 10.00 seconds. [2019-02-15 09:32:54,480: DEBUG/MainProcess] beat: Extending lock... [2019-02-15 09:32:54,481: DEBUG/MainProcess] Selecting tasks [2019-02-15 09:32:54,482: INFO/MainProcess] Loading 1 tasks [2019-02-15 09:32:54,483: INFO/MainProcess] Scheduler: Sending due task first_warning_overdue (cron_tasks.celery.test_task) [2019-02-15 09:32:54,484: DEBUG/MainProcess] cron_tasks.celery.test_task sent. id->f89083aa-11dc-41fc-9ebe-541840951f8f Celery worker is run this way: celery worker -Q default -A cron_tasks.celery:app -n .%%h --without-gossip --without-mingle --without-heartbeat --loglevel=info --max-memory-per-child=512000 It says: -------------- celery@.%me.local v4.2.1 (windowlicker) ---- **** ----- --- * *** * -- Darwin-16.7.0-x86_64-i386-64bit 2019-02-15 09:31:50 -- * - **** --- - ** ---------- [config] - ** ---------- .> app: cron_tasks:0x10e2a5ac8 - ** ---------- .> transport: amqp://admin:**@localhost:5672/my_vhost - ** ---------- .> results: disabled:// - *** --- * --- .> concurrency: 4 (prefork) -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker) --- ***** ----- -------------- [queues] .> default exchange=default(direct) key=default [tasks] . cron_tasks.celery.test_task [2019-02-15 09:31:50,833: INFO/MainProcess] Connected to amqp://admin:**@127.0.0.1:5672/my_vhost [2019-02-15 09:31:50,867: INFO/MainProcess] celery@.%me.local ready. [2019-02-15 09:41:46,218: INFO/MainProcess] Received task: cron_tasks.celery.test_task[3c121f04-af3b-4cbe-826b-a32da6cc156e] expires:[2019-02-15 21:40:05.779231+00:00] [2019-02-15 09:41:46,220: INFO/ForkPoolWorker-2] Task cron_tasks.celery.test_task[3c121f04-af3b-4cbe-826b-a32da6cc156e] succeeded in 0.001324941000000024s: None Expected behavior: This should run my test_action(). 
But, even though the celery worker output says succeeded in 0.001324941000000024s, the function never executes.
{ "language": "en", "url": "https://stackoverflow.com/questions/54706666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: strange behavior when using list of another class objects as class attribute in python I am having some issues with the classes below: class book: def __init__(self,name=None): self.name=name return class bookshelf: def __init__(self,books=[]): self.books=books return def add_book(self,name): self.books.append( book(name) ) return Next, I initialize a list of bookshelves: bookshelves = [bookshelf() for i in range(3)] With all of them empty, both bookshelves[0].books and bookshelves[1].books are []. However, after adding a book to the first bookshelf, bookshelves[0].add_book('Book1') all bookshelves have a book named "Book1": both bookshelves[0].books[0].name and bookshelves[1].books[0].name have the value 'Book1'. This does not change even after I reinitialize the list of bookshelves. But if I rerun the section defining the classes, the bookshelves will be cleared. Any idea how this happens? How should I implement this part correctly? By the way, I am running Python 3.8.3. A: Python Mutable Default Arguments are counter-intuitive! Python creates the list once when the method is defined, not when it is called. Therefore the list object is shared between instances. Here's a good article on the issue. I would instead remove the default argument and check/set it in the init method. class bookshelf: def __init__(self,books): self.books = [] if books: self.books = books
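To make the shared-default pitfall and its fix concrete, here is a minimal sketch using the usual `None`-default idiom (the capitalized class names are just for illustration):

```python
class Book:
    def __init__(self, name=None):
        self.name = name

class Bookshelf:
    def __init__(self, books=None):
        # books=[] in the signature would be evaluated once, at function
        # definition time, and that single list would be shared by every
        # Bookshelf instance. A None default sidesteps this.
        self.books = books if books is not None else []

    def add_book(self, name):
        self.books.append(Book(name))

shelves = [Bookshelf() for _ in range(3)]
shelves[0].add_book('Book1')
print(len(shelves[0].books), len(shelves[1].books))  # 1 0 -- no longer shared
```

With the original `books=[]` default, the second number would also be 1, which is exactly the behavior the question describes.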
{ "language": "en", "url": "https://stackoverflow.com/questions/63387054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there any good tool/framework to profile performance for a C/C++ application I'm new to C/C++ and facing a performance issue: my program runs very slowly. I want to find where the hot spot is to reduce the overall execution time of my code. What's the most popular and easiest way to profile a C/C++ application on Windows? I've been amazed by how easy it is to profile a .NET application using MiniProfiler. Do we have any similar library in C/C++ that gives us high-quality, reliable results with minimal added code? Or is there any tool similar to the RedGate ANTS performance profiler that also provides insightful information about the running code? A: Intel's VTune or AMD's CodeAnalyst are both very good tools. On Linux, Perf or OProfile will do the same thing. A: While you are hunting around for a profiler, run the program in the debugger IDE and try this method. Some programmers rely on it. There's an example here of how it is used. In that example here's what happens. A series of problems are found and removed. * *The first iteration saved 33% of the time. (Speedup factor 1.5) *Of the time remaining, the second iteration saved 17%. (Speedup factor 1.2) *Of the time remaining, the third iteration saved 13%. (Speedup factor 1.15) *Of the time remaining, the fourth iteration saved 66%. (Speedup factor 2.95) *Of the time remaining, the fifth iteration saved 61%. (Speedup factor 2.59) *Of the time remaining, the sixth iteration saved 98%. (Speedup factor 45.9) All those big-percent changes were not big percents of the original time, but they became so after other problems were removed. The total amount of time saved from the original program was over 99.8%. The speedup was 730 times. Most programs that have not gone through a process like this have lots of room for speedup, but you're not likely to realize it using only a profiler, because all they do is make measurements.
They don't always point out to you what you need to fix, and each problem you miss keeps you from getting the really significant speedup. To put it another way, the final speedup factor is the product of all those individual factors, and if any one of them is missed, it is not only absent from the product, but it reduces the following factors. That's why, in performance diagnosis, "good enough" is not good enough. You have to find every problem.
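The multiplication claim is easy to check: the six speedup factors listed in the example multiply out to the totals it reports (a couple of throwaway lines; `math.prod` needs Python 3.8+):

```python
import math

# Speedup factors from the six iterations described in the answer above.
factors = [1.5, 1.2, 1.15, 2.95, 2.59, 45.9]

total = math.prod(factors)   # overall speedup, roughly 726x ("730 times")
saved = 1 - 1 / total        # fraction of the original run time removed
print(f"overall speedup: {total:.0f}x, time saved: {saved:.2%}")
```

The product comes out near 726, and the fraction of time saved just over 99.8%, matching the figures quoted in the example.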
{ "language": "en", "url": "https://stackoverflow.com/questions/14528629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Selenium Unable to select a date from date picker unable to select a date from date picker this is the website working on https://www.phptravels.net/ when i used developer options and Ctrl + F on firebug //div[@style='display: block; top: 390px; left: 680px;']//text()[contains(.,'15')] i am able to find the date on the page but when i am trying from the code i am unable to select the element This is my code self.driver.find_element(By.XPATH, "//div[@style='display: block; top: 390px; left: 680px;']//text()[contains(.,'"+start_date+"')]").click() test_Flight.py:37: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ..\pages\search_flights_form.py:68: in set_start_date_pick self.driver.find_element(By.XPATH, "//div[@style='display: block; top: 390px; left: 680px;']//text()[contains(.,'15')]").click() ..\..\..\..\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\webdriver.py:976: in find_element return self.execute(Command.FIND_ELEMENT, { ..\..\..\..\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\webdriver.py:321: in execute self.error_handler.check_response(response) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <selenium.webdriver.remote.errorhandler.ErrorHandler object at 0x052D6D18> response = {'status': 404, 'value': '{"value":{"error":"no such element","message":"Unable to locate element: //div[@style=\'disp...ntent/shared/webdriver/Errors.jsm:395:5\\nelement.find/</<@chrome://remote/content/marionette/element.js:300:16\\n"}}'} A: Try like below and confirm. 
from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver.get("https://www.phptravels.net/") wait = WebDriverWait(driver,30) checkin = wait.until(EC.element_to_be_clickable((By.ID,"checkin"))) checkin.click() date = 15 select_date = wait.until(EC.element_to_be_clickable((By.XPATH,f"//div[@class='datepicker-days']//td[text()='{date}']"))) select_date.click() Update: As per the comments, to select a date from the Flights section. driver.get("https://www.phptravels.net/") wait = WebDriverWait(driver,30) flights = wait.until(EC.element_to_be_clickable((By.XPATH,"//button[@aria-controls='flights']"))) flights.click() departure_date = wait.until(EC.element_to_be_clickable((By.XPATH,"//input[contains(@class,'depart')]"))) departure_date.click() date = 15 select_date = wait.until(EC.element_to_be_clickable((By.XPATH,f"(//div[@class='datepicker-days'])[3]/table/tbody/tr[3]/td[text()='{date}']"))) select_date.click()
{ "language": "en", "url": "https://stackoverflow.com/questions/71346348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: PHP-FPM (7.4.2) doesn't stop gracefully? During updating, we want to stop php-fpm and wait for all running scripts to be finished before we make any file changes. We found out that we needed to set process_control_timeout, so we placed "process_control_timeout = 36000s" in "/etc/php/7.4/fpm/pool.d/zz-00-overrides.ini" (and we restarted php-fpm). Then we created a test script to test it out. Our test script creates a file, then 30 seconds later, it creates another file. The script: $id = random_int(10000, 99999); file_put_contents(__DIR__ . '/' . $id . '-start', ''); sleep(30); file_put_contents(__DIR__ . '/' . $id . '-end', ''); When we run the script normally (browser -> nginx -> php-fpm), it creates the 1st file, 30 seconds later it creates the 2nd file. When we run the script, wait a few seconds, and then try to stop it (run the same way as before: browser -> nginx -> php-fpm) (stopped by: "service php7.4-fpm stop"), it will create the 1st file, then the service stop command is run which only takes 2-3 seconds, then the browser says "502 Bad Gateway", and then the 2nd file is never created. It doesn't gracefully stop. The desired outcome for us is that "service php7.4-fpm stop" waits for all the scripts to be done, and then stops, instead of it killing off any running scripts the way it is doing now in order to forcefully stop. Are we missing something, are we doing something wrong? Is it a bug somewhere somehow? Any help would be really appreciated. * *Debian 10 (Linux 4.19.0-6-cloud-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux) *PHP 7.4.2 *Nginx 1.14.2 A: Running kill -QUIT $(cat /run/php/php7.4-fpm.pid) does take the process_control_timeout config in account. It will cause the PHP-FPM process to stop as soon as all the scripts have finished their execution. At that point the PID will be removed. 
So, in order to make it work: * *run kill -QUIT $(cat /run/php/php7.4-fpm.pid) *in a loop, check if /run/php/php7.4-fpm.pid still exists; if not, break the loop.
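Those two steps can be sketched in Python as follows. The pid-file path is the one from the answer; the function name, timeout values, and the injectable `kill` parameter are illustrative assumptions (the last exists only to make the sketch testable without a real php-fpm):

```python
import os
import signal
import time

def stop_fpm_gracefully(pid_file, timeout=60.0, poll=0.5, kill=os.kill):
    # Step 1: SIGQUIT asks PHP-FPM for a graceful stop, which honours
    # process_control_timeout and lets running scripts finish.
    with open(pid_file) as f:
        pid = int(f.read().strip())
    kill(pid, signal.SIGQUIT)
    # Step 2: poll until the pid file disappears, i.e. FPM has exited.
    deadline = time.monotonic() + timeout
    while os.path.exists(pid_file):
        if time.monotonic() > deadline:
            raise TimeoutError(f"php-fpm still running after {timeout}s")
        time.sleep(poll)

# e.g. stop_fpm_gracefully("/run/php/php7.4-fpm.pid", timeout=36000)
```

The timeout here should be at least as large as the configured process_control_timeout, otherwise the wait can give up before FPM does.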
{ "language": "en", "url": "https://stackoverflow.com/questions/60673012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can one cast an EnvDTE.Project into a VCProject I have seen two posts so far that concern my question. I am wondering how can one cast an EnvDTE.Project into a VCProject. In this post, fun4jimmy's answer does that exactly in the following line of code (taken from his answer) : VCProject vcProject = project.Object as VCProject; I have tried doing the same thing in my solution : using EnvDTE; using Microsoft.VisualStudio.VCProjectEngine; [...] private List<string> BuildAssembliesAndReturnTheirName(EnvDTE80.DTE2 dte) { Solution sln = dte.Solution; bool isDirty = false; foreach (Project project in sln.Projects) { VCProject vcProject = project.Object as VCProject; Configuration activeConfiguration = project.ConfigurationManager.ActiveConfiguration; foreach (VCConfiguration vcConfiguration in vcProject.Configurations) { //business logic } } [...] A solution is opened in VS. The solution contains a few C# projects. Everything seems to be in order for this code to execute until I reach foreach (VCConfiguration vcConfiguration in vcProject.Configurations) only to realise that this cast VCProject vcProject = project.Object as VCProject; returns null. Can anyone tell me why that is? I've seen this post in which hveiras suggests There is a VCCodeModel.dll for each VS version. If that's the case for VCProjectEngine.dll as well, how can I fix my issue? I have changed my reference to VCProjectEngine.dll so that it uses the one for Visual Studio 2012 (what I'm working with) but vcProject remains null. A: VCProject is for C++ projects, in order to use a similar interface with C#/VB project you'll have to use VSProject. There are a number of VSLangProj overloads/extensions and you'll have to find the one that is specific to the version you need to use. See: https://msdn.microsoft.com/en-us/library/1xt0ezx9.aspx for all the VSLangProj interfaces from 2 through 100 (I think thats Version 2 through Version 10). 
A: You can't cast the Project object itself, because there's no inheritance relationship. But you can use the inner object: VCProject vcProject = project.Object as VCProject;
{ "language": "en", "url": "https://stackoverflow.com/questions/29951353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it possible to retrieve a Javascript file from a server? In Python, I can retrieve Javascript from an HTML document using the following code. import urllib2, jsbeautifier from bs4 import BeautifulSoup f = urllib2.urlopen("http://www.google.com.ph/") soup = BeautifulSoup(f, "lxml") script_raw = str(soup.script) script_pretty = jsbeautifier.beautify(script_raw) print(script_pretty) But what if the script comes from a Javascript file on the server, like: <script src="some/directory/example.js" type="text/javascript"> Is it possible to retrieve "example.js"? If so, how? Context: I'm examining the Javascript of phishing web pages. A: <script src="some/directory/example.js" type="text/javascript"> The tag above will load some/directory/example.js from the server; just make the folder and file structure follow that pattern. A: The easiest way is to right-click on the page in your browser, view the page source, click on that .js link, and it will be there. A: If part of your JavaScript depends on other JavaScript and you need to load scripts at run time, you can use require.js.
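Staying in the Python context the question starts from (but using Python 3's standard library instead of the old urllib2), here is a hedged sketch of collecting and downloading every external script a page references; the helper names are made up for illustration:

```python
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class ScriptSrcParser(HTMLParser):
    """Collect the src attribute of every <script src=...> tag."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

def script_urls(page_url, html):
    parser = ScriptSrcParser()
    parser.feed(html)
    # Resolve relative paths like some/directory/example.js against the page URL.
    return [urllib.parse.urljoin(page_url, src) for src in parser.srcs]

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# e.g. html = fetch("http://www.google.com.ph/")
#      for url in script_urls("http://www.google.com.ph/", html):
#          source = fetch(url)  # the contents of each referenced .js file
```

For phishing analysis, running each fetched source through jsbeautifier (as the question already does for inline scripts) works the same way.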
{ "language": "en", "url": "https://stackoverflow.com/questions/40035331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Switching Permissions of User Files I am new to Bash and trying to write a find command that finds all files that don't have read, write, or execute permissions for the user, and then changes the mode to enable read access. I've tried a few different ways including: find . ! -perm /u+rwx | -exec chmod u+r {} \; Am I on the right track?
{ "language": "en", "url": "https://stackoverflow.com/questions/62964060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to get the file sent in xml response in perl? Using the LWP user agent I am sending the request and getting the response. I will get the response in html format and a file attached in it. eg: `<html> <head> <title>Download Files</title> <meta http-equiv=\'Content-Type\' content=\'text/html; charset=utf-8\'> <link rel=\'stylesheet\' href=\'http://res.mytoday.com/css/main.css\' type=\'text/css\'> <link rel=\'stylesheet\' href=\'http://res.mytoday.com/css/Menu.css\' type=\'text/css\'> <link rel=\'stylesheet\' href=\'/statsdoc/freeze.css\' type=\'text/css\'> </head> <body> <table border=1> <tr class=\'rightTableData\'> <th>No.</th> <th>File Name</th> <th>File Size</th> </tr><tr class=\'rightTableData\'> <td>1</td><td> <a href=\'/dlr_download?file=/mnt/dell6/SRM_DATA/data/API_FILE /20160329/LSUZisbZahtHNeImZJm_1-1.csv.zip\'>1-1.csv.zip</a> </td><td>487 bytes</td> </tr> </table> </br></br> <center><a href=\'/dlr_download?file=/mnt/dell6/SRM_DATA/data/API_FILE/20160329/LSUZisbZahtHNeImZJm-csv.zip\'>Download all</a></center> </body></html>` From this response I need to get the file. Can anyone help me to get the file from response. A: Use a parser to extract the information. I used XML::LibXML, but I had to remove the closing br tags that made the parser fail. 
#!/usr/bin/perl use warnings; use strict; my $html = '<html> <head> <title>Download Files</title> <meta http-equiv=\'Content-Type\' content=\'text/html; charset=utf-8\'> <link rel=\'stylesheet\' href=\'http://res.mytoday.com/css/main.css\' type=\'text/css\'> <link rel=\'stylesheet\' href=\'http://res.mytoday.com/css/Menu.css\' type=\'text/css\'> <link rel=\'stylesheet\' href=\'/statsdoc/freeze.css\' type=\'text/css\'> </head> <body> <table border=1> <tr class=\'rightTableData\'> <th>No.</th> <th>File Name</th> <th>File Size</th> </tr><tr class=\'rightTableData\'> <td>1</td><td> <a href=\'/dlr_download?file=/mnt/dell6/SRM_DATA/data/API_FILE /20160329/LSUZisbZahtHNeImZJm_1-1.csv.zip\'>1-1.csv.zip</a> </td><td>487 bytes</td> </tr> </table> <!-- </br></br> I had to comment this out! --> <center><a href=\'/dlr_download?file=/mnt/dell6/SRM_DATA/data/API_FILE/20160329/LSUZisbZahtHNeImZJm-csv.zip\'>Download all</a></center> </body></html>'; use XML::LibXML; my $dom = 'XML::LibXML'->load_html( string => $html ); print $dom->findvalue('/html/body/table/tr[2]/td[2]/a/@href'); You could also use the recover flag to parse invalid HTML: my $dom = 'XML::LibXML'->load_html( string => $html, recover => 1 );
{ "language": "en", "url": "https://stackoverflow.com/questions/36310346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Failed to use Apache RewriteRule I would like to use Apache RewriteRule to change the URL target page to abc.php. I have set RewriteEngine On but I found this problem. Regexp I used: RewriteRule ^viewthread\.php.tid=12345$ abc.php The URL string to match: viewthread.php?tid=12345 Why is it not successfully matched? A: Rewriting URLs with query strings is slightly more complicated than rewriting plain URLs. You'll have to write something like this: RewriteCond %{REQUEST_URI} ^/viewthread\.php$ RewriteCond %{QUERY_STRING} ^tid=12345$ RewriteRule ^(.*)$ http://mydomain.site/abc.php [R=302,L] See these articles for more help: * *http://www.simonecarletti.com/blog/2009/01/apache-query-string-redirects/ *http://www.simonecarletti.com/blog/2009/01/apache-rewriterule-and-query-string/ A: I think it's because you have missed the ? in the rule... RewriteRule ^viewthread.php?tid=12345$ abc.php A: Shouldn't it be: RewriteRule ^/viewthread\.php\?tid=12345$ /abc.php
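A side note worth verifying: the regex itself is not at fault here. It matches the full URL string just fine (the unescaped . even happens to match the ?). The rule fails because mod_rewrite applies the RewriteRule pattern only to the URL path, with the query string already stripped, which is why the first answer tests %{QUERY_STRING} in a separate RewriteCond. A quick demonstration, using Python's re module as a stand-in for Apache's regex engine:

```python
import re

pattern = r'^viewthread\.php.tid=12345$'

# What the asker expects mod_rewrite to see:
print(bool(re.match(pattern, 'viewthread.php?tid=12345')))  # True

# What mod_rewrite actually matches RewriteRule patterns against:
print(bool(re.match(pattern, 'viewthread.php')))            # False
```

So the two answers proposing to escape or add the ? in the pattern would not help either; the query string has to be matched via %{QUERY_STRING}.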
{ "language": "en", "url": "https://stackoverflow.com/questions/8906615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: unable to create sqlite database in Android I have a class CreateDB public class CreateDB extends SQLiteOpenHelper{ // SQLiteOpenHelper auto create if db not exists. private static final int DB_VERSION = 1; private static final String DB_NAME = "mydb.db"; public CreateDB(Context ctx) { super(ctx, DB_NAME, null, DB_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL("CREATE TABLE friends (_id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, phonenumber INTEGER);"); }} I call CreateDB in another class which acts as a background thread public class manager extends AsyncTask<Void, Void, String> { public Context context; @Override protected String doInBackground(Void... params) { CreateDB dbhelp = new CreateDB(context); } } When I run it, it stops working and the emulator reports that the app has stopped responding, but when I run CreateDB dbhelp = new CreateDB(this); in the main activity, it works and the database is created. Please help me so that I can create the database in a background thread. A: but when I run CreateDB dbhelp = new CreateDB(this); in the main activity, it works and the database is created Because you forgot to initialize your context object in your AsyncTask. public class manager extends AsyncTask<Void, Void, String> { public Context context; <-- Declared but not initialized @Override protected String doInBackground(Void... params) { CreateDB dbhelp = new CreateDB(context); } You could create a constructor for your AsyncTask : public manager(Context context) { this.context = context; } And then in your activity : new manager(this).execute(); Also try to respect naming conventions. A: You need to pass a valid context as a parameter: CreateDB dbhelp = new CreateDB(getContext());
{ "language": "en", "url": "https://stackoverflow.com/questions/19605294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Dispatch all methods in class C# Is there any way to dispatch all methods in a class in an easy way? For example, my class is an adapter for UI items and contains static methods. So instead of dispatching all methods separately in the class, it could be done like this: whoever calls a method from this class, everything is dispatched through the UI thread. Like from: public MyClass { ... public static void Method1 { Application.Current.Dispatcher.Invoke(() => { /* do something */ } } public static bool Method2 { return Application.Current.Dispatcher.Invoke(() => { return UIitem.SomeProperty; } } } To this: [DispatcherAttribute] /* or something similar */ public MyClass { ... public static void Method1 { /* do something */ } public static bool Method2 { return UIitem.SomeProperty; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/27637363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: AppLocalizations missing reference - Internationalization - Flutter flutter_gen AppLocalizations missing reference in Visual Studio Code. I already tried * *flutter upgrade *flutter pub cache clean *flutter clean / flutter pub get *Dart: Restart Analysis Server (VS Code) *Developer: Reload Window (VS Code) My pubspec.yaml has * *generate: true *flutter_localizations:sdk: flutter A: Solution: * *My machine had 2 Flutter SDKs installed. I uninstalled one, and everything worked again.
{ "language": "en", "url": "https://stackoverflow.com/questions/73596775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to use actors in akka client side websockets I am using Akka client-side WebSockets (https://doc.akka.io/docs/akka-http/current/client-side/websocket-support.html). I have a server which takes requests and responds in JSON; this is the pattern of my request and response. request #1 { "janus" : "create", "transaction" : "<random alphanumeric string>" } response #1 { "janus": "success", "session_id": 2630959283560140, "transaction": "asqeasd4as3d4asdasddas", "data": { "id": 4574061985075210 } } Then, based on response #1, I need to initiate request #2, and upon receiving response #2 I need to initiate request #3, and so on. For example, based on id 4574061985075210 I will send request #2 and receive its response. request # 2 { } response # 2 { } ---- How can I use actors with the source and sink, and reuse the flow? Here is my initial code: import akka.http.scaladsl.model.ws._ import scala.concurrent.Future object WebSocketClientFlow { def main(args: Array[String]) = { implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() import system.dispatcher val incoming: Sink[Message, Future[Done]] = Sink.foreach[Message] { case message: TextMessage.Strict => println(message.text) // suppose that here, based on the server response, I need to send another message to the server, and so on; do I need to repeat this same code here again?
} val outgoing = Source.single(TextMessage("hello world!")) val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("ws://echo.websocket.org")) val (upgradeResponse, closed) = outgoing .viaMat(webSocketFlow)(Keep.right) // keep the materialized Future[WebSocketUpgradeResponse] .toMat(incoming)(Keep.both) // also keep the Future[Done] .run() val connected = upgradeResponse.flatMap { upgrade => if (upgrade.response.status == StatusCodes.SwitchingProtocols) { Future.successful(Done) } else { throw new RuntimeException(s"Connection failed: ${upgrade.response.status}") } } connected.onComplete(println) closed.foreach(_ => println("closed")) } } and here i used Source.ActorRef val url = "ws://0.0.0.0:8188" val req = WebSocketRequest(url, Nil, Option("janus-protocol")) implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() import system.dispatcher val webSocketFlow = Http().webSocketClientFlow(req) val messageSource: Source[Message, ActorRef] = Source.actorRef[TextMessage.Strict](bufferSize = 10, OverflowStrategy.fail) val messageSink: Sink[Message, NotUsed] = Flow[Message] .map(message => println(s"Received text message: [$message]")) .to(Sink.ignore) val ((ws, upgradeResponse), closed) = messageSource .viaMat(webSocketFlow)(Keep.both) .toMat(messageSink)(Keep.both) .run() val connected = upgradeResponse.flatMap { upgrade => if (upgrade.response.status == StatusCodes.SwitchingProtocols) { Future.successful(Done) } else { throw new RuntimeException(s"Connection failed: ${upgrade.response.status}") } } val source = """{ "janus": "create", "transaction":"d1403sa54a5s3d4as3das"}""" val jsonAst = source.parseJson ws ! TextMessage.Strict(jsonAst.toString()) now i need help in how can i initiate the second request here because i need the "id" returned from the server to initiate request #2
{ "language": "en", "url": "https://stackoverflow.com/questions/63486992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: C# Property Grid Pass Constructor Variable I'm using the C# property grid to add new objects and change settings of a specific object. I need to know how to pass a variable to the constructor using the Component Model. The reason is that a parameter is required to correctly define the initial values of the chart object. List<Chart> charts = new List<Chart>(); [Description("Charts")] [Category("4. Collection Charts")] [DisplayName("Charts")] public List<Chart> _charts { get { return charts; } set { charts = value ; } } public class Chart { public static string collectionName = ""; int chartPosition = GetMaxChartIndex(collectionName); [Description("Chart position in document")] [Category("Control Chart Settings")] [DisplayName("Chart Position")] public int _chartPosition { get { return chartPosition; } set { chartPosition = value; } } public Chart(string _collectionName) { collectionName = _collectionName; } } A: What you can do is declare a custom TypeDescriptionProvider for the Chart type, early before you select your object into the PropertyGrid: ... TypeDescriptor.AddProvider(new ChartDescriptionProvider(), typeof(Chart)); ...
And here is the custom provider (you'll need to implement the CreateInstance method): public class ChartDescriptionProvider : TypeDescriptionProvider { private static TypeDescriptionProvider _baseProvider = TypeDescriptor.GetProvider(typeof(Chart)); public override object CreateInstance(IServiceProvider provider, Type objectType, Type[] argTypes, object[] args) { // TODO: implement this return new Chart(...); } public override IDictionary GetCache(object instance) { return _baseProvider.GetCache(instance); } public override ICustomTypeDescriptor GetExtendedTypeDescriptor(object instance) { return _baseProvider.GetExtendedTypeDescriptor(instance); } public override string GetFullComponentName(object component) { return _baseProvider.GetFullComponentName(component); } public override Type GetReflectionType(Type objectType, object instance) { return _baseProvider.GetReflectionType(objectType, instance); } public override Type GetRuntimeType(Type reflectionType) { return _baseProvider.GetRuntimeType(reflectionType); } public override ICustomTypeDescriptor GetTypeDescriptor(Type objectType, object instance) { return _baseProvider.GetTypeDescriptor(objectType, instance); } public override bool IsSupportedType(Type type) { return _baseProvider.IsSupportedType(type); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/38418842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: HTML- HREF Tag inside a Button This is my button code, which has a reference tag, and the URL takes me to an index action correctly <button type="submit" class="btn btn-primary"><a style="color:white" href="<?php echo $this->basePath('calendar/details/index').'?month='.$this->previousMonth?>"> Previous </a></button> and this is the picture of the button The problem is that only the text Previous on button is clickable, other than that any click on the blue part of the button does not work. How to make the whole area of the button work, without disturbing the URL? A: <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/> <a class="btn btn-primary" style="color: white" href="#">Previous</a> <a class="btn btn-primary" style="color: white" href="<?php echo $this->basePath('calendar/details/index').'?month='.$this->previousMonth?>">Previous</a>
{ "language": "en", "url": "https://stackoverflow.com/questions/42104997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I override a plugin registration in MVVMCross I'm using the WPF Sqlite plugin in MVVMCross, but I want to override the directory that the database is written to. I've written a custom ISQLiteConnectionFactory which currently resides in my WPF bootstrapper project: internal class CustomMvxWpfSqLiteConnectionFactory : ISQLiteConnectionFactory { const string DirectoryName = "ProductName"; public ISQLiteConnection Create(string address) { var appData = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData); var dir = Path.Combine(appData, DirectoryName); Directory.CreateDirectory(dir); var path = Path.Combine(dir, address); return new SQLiteConnection(path, SQLiteOpenFlags.ReadWrite | SQLiteOpenFlags.Create, false); } } What I can't figure out is how to override the Mvx.RegisterSingleton<ISQLiteConnectionFactory>(new MvxWpfSqLiteConnectionFactory()); that Cirrious.MvvmCross.Plugins.Sqlite.Wpf.Plugin does. My PCL project's App registers a singleton service that depends on ISQLiteConnectionFactory during Initialize, so I obviously want to override the IOC registration before then. But no matter what I do, the plugin's registration of MvxWpfSqLiteConnectionFactory rather than my own registration of CustomMvxWpfSqLiteConnectionFactory seems to take precedence. I've tried putting my register call in all sorts of overrides in my WPF Setup.cs, but nothing has worked so far. A: An article on how plugins are loaded is included in https://github.com/MvvmCross/MvvmCross/wiki/MvvmCross-plugins#how-plugins-are-loaded The Sqlite plugin by default is initialised during PerformBootstrapActions in Setup - see https://github.com/MvvmCross/MvvmCross/wiki/Customising-using-App-and-Setup#setupcs for where this occurs in the start sequence. From your question, it's not clear which overrides in Setup you've tried - I'm not sure which positions "all sorts" includes. 
However, it sounds like want to register your ISQLiteConnectionFactory at any point after PerformBootstrapActions and before InitializeApp - so one way to do this would be to override InitializeApp: protected virtual void InitializeApp(IMvxPluginManager pluginManager) { // your code here base.InitializeApp(pluginManager); } Some possible other ideas to consider: * *if you want to prevent the Sqlite plugin from self-initializing in the Wpf case, then you could remove the Sqlite bootstrap file from your Wpf project (but beware that nuget might try to add it again later) *the new "community fork" of the MvvmCross sqlite project has source code updated to the latest SQLite-net version (via @jarroda) and has a BasePath CreateEx option to allow the folder to be specified - see https://github.com/MvvmCross/MvvmCross-SQLite/blob/master/Sqlite/Cirrious.MvvmCross.Community.Plugins.Sqlite.Wpf/MvxWpfSqLiteConnectionFactory.cs#L24
{ "language": "en", "url": "https://stackoverflow.com/questions/19939634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to set select all data up to data_upload_max_number_fields (example 500) in django admin listing page First of all, sorry if a similar question already exists; please let me know, because I am new to this community. And thanks in advance for your valuable responses. In the screenshot below from the Django admin, how can I override the "select all 1005" with a custom number, like select up to 500? As you know, data_upload_max_number_fields has a default limit of 1000, but I don't want to increase the limit every time; if I have 100,000 rows, how far would I have to raise data_upload_max_number_fields in the Django settings? So is there a way to cap this "select all" at less than the upload max field, for example select only the first 500 rows? I don't want the default behaviour shown in the 2nd image A: Well, there does not seem to be a direct attribute/method to change the max limit, but what you can do is set a custom value for the list_per_page attribute of the ModelAdmin class in admin.py, like below: class MyAppAdmin(admin.ModelAdmin): list_per_page = 500 admin.site.register(MyApp, MyAppAdmin) The default value of list_per_page is 100; after modifying it you can directly select all 500 (in your case) at once per page and perform the requisite operation.
{ "language": "en", "url": "https://stackoverflow.com/questions/70812708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unable to Update Global State for Redux Flow in SwiftUI I am trying to implement the Redux state in SwiftUI and I am stuck with updating the global state after the action creator has been executed. I am lost as to what to do in the Store dispatch function. // Store.swift typealias ActionCreator = (_ dispatch: @escaping (Action) -> ()) -> () func getMovies() -> ActionCreator { return { dispatch in DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) { dispatch(.populateMovies([Movie(title: "ABC")])) } } } class Store: ObservableObject { var reducer: Reducer @Published private (set) var appState: AppState init(appState: AppState, reducer: Reducer) { self.appState = appState self.reducer = reducer } func dispatch(_ dispatch: ActionCreator) { // how to send the movies to the reducer to update the state //self.reducer.update(&appState, dispatch) } } // ContentView: store.dispatch(getMovies())
{ "language": "en", "url": "https://stackoverflow.com/questions/63783970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Speed of "sum" comprehension in Python I was under the impression that using a sum construction was much faster than running a for loop. However, in the following code, the for loop actually runs faster: import time Score = [[3,4,5,6,7,8] for i in range(40)] a=[0,1,2,3,4,5,4,5,2,1,3,0,5,1,0,3,4,2,2,4,4,5,1,2,5,4,3,2,0,1,1,0,2,0,0,0,1,3,2,1] def ver1(): for i in range(100000): total = 0 for j in range(40): total+=Score[j][a[j]] print (total) def ver2(): for i in range(100000): total = sum(Score[j][a[j]] for j in range(40)) print (total) t0 = time.time() ver1() t1 = time.time() ver2() t2 = time.time() print("Version 1 time: ", t1-t0) print("Version 2 time: ", t2-t1) The output is: 208 208 Version 1 time: 0.9300529956817627 Version 2 time: 1.066061019897461 Am I doing something wrong? Is there a way to do this faster? (Note that this is just a demo I set up, in my real application the scores will not be repeated in this manner) Some additional info: This is run on Python 3.4.4 64-bit, on Windows 7 64-bit, on an i7. A: This seems to depend on the system, probably the python version. On my system, the difference is is about 13%: python sum.py 208 208 ('Version 1 time: ', 0.6371259689331055) ('Version 2 time: ', 0.7342419624328613) The two version are not measuring sum versus manual looping because the loop "bodies" are not identical. ver2 does more work because it creates the generator expression 100000 times, while ver1's loop body is almost trivial, but it creates a list with 40 elements for every iteration. You can change the example to be identical, and then you see the effect of sum: def ver1(): r = [Score[j][a[j]] for j in range(40)] for i in xrange(100000): total = 0 for j in r: total+=j print (total) def ver2(): r = [Score[j][a[j]] for j in xrange(40)] for i in xrange(100000): total = sum(r) print (total) I've moved everything out of the inner loop body and out of the sum call to make sure that we are measuring only the overhead of hand-crafted loops. 
Using xrange instead of range further improves the overall runtime, but this applies to both versions and thus does not change the comparison. The results of the modified code on my system is: python sum.py 208 208 ('Version 1 time: ', 0.2034609317779541) ('Version 2 time: ', 0.04234910011291504) ver2 is five times faster than ver1. This is the pure performance gain of using sum instead of a hand-crafted loop. Inspired by ShadowRanger's comment on the question about lookups, I have modified the example to compare the original code and check if the lookup of bound symbols: def gen(s,b): for j in xrange(40): yield s[j][b[j]] def ver2(): for i in range(100000): total = sum(gen(Score, a)) print (total) I create a small custom generator which locally binds Score and a to prevent expensive lookups in parent scopes. Executing this: python sum.py 208 208 ('Version 1 time: ', 0.6167840957641602) ('Version 2 time: ', 0.6198039054870605) The symbol lookups alone account for ~12% of the runtime. A: Since j is iterating over both lists, I thought I'd see if zip worked any better: def ver3(): for i in range(100000): total = sum(s[i] for s,i in zip(Score,a)) print (total) On Py2 this runs about 30% slower than version 2, but on Py3 about 20% faster than version 1. If I change zip to izip (imported from itertools), this cuts the time down to between versions 1 and 2.
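For readers who want to repeat the comparison without hand-rolled time.time() bookkeeping, the stdlib timeit module handles the repetition; a minimal sketch using the same Score/a data (timings vary by interpreter and machine, so no expected figures are shown):

```python
import timeit

Score = [[3, 4, 5, 6, 7, 8] for i in range(40)]
a = [0,1,2,3,4,5,4,5,2,1,3,0,5,1,0,3,4,2,2,4,4,5,1,2,5,4,3,2,0,1,1,0,2,0,0,0,1,3,2,1]

def loop_total():
    # the hand-crafted loop from ver1
    total = 0
    for j in range(40):
        total += Score[j][a[j]]
    return total

def sum_total():
    # sum over zipped pairs, as in the zip answer
    return sum(s[i] for s, i in zip(Score, a))

# Sanity check: both variants must produce the 208 printed above.
assert loop_total() == sum_total() == 208

for fn in (loop_total, sum_total):
    print(fn.__name__, timeit.timeit(fn, number=10000))
```

timeit disables garbage collection during the measurement and picks the best available clock, which removes two sources of noise from the time.time() approach.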
{ "language": "en", "url": "https://stackoverflow.com/questions/35191815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Help with Generics in VB I'm new to VB. I am coming from a Java background. In the following code Sub PrintList(Of T)(ByVal list As List(Of T)) For Each obj As T In list Console.Write(obj.ToString() + " ") Next Console.WriteLine() End Sub Can someone help me to understand what Sub PrintList(Of T)(ByVal list As List(Of T)) means? Why do you need the (Of T) part? Why isn't (ByVal list As List(Of T)) sufficient? A: In Java, this would be something like: public static <T> void printList(List<T> list) The (Of T) after PrintList is the equivalent of the <T> before void in the Java version. In other words, it's declaring the type parameter for the generic method.
{ "language": "en", "url": "https://stackoverflow.com/questions/6998359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Add a border around parts of a region, matplotlib/geopandas I have a map showing the municipalities of Stockholm. Displayed below. fig, ax = plt.subplots(1, figsize=(4, 4)) matplotlib.rcParams["figure.dpi"] = 250 ax.axis('off') ax1 = geo_df1.plot(edgecolor='black', column=geo_df1.rel_grp, cmap=my_cmp, linewidth=0.3, ax=ax, categorical=True)#, plt.show(ax1) I want to add an amplified border to the east. Something like this. How can I do this in matplotlib? A: * *question does not include geometry, so have sourced *it's a simple case of plotting a LineString that is the eastern edge. Have generated one for purpose of example import requests import geopandas as gpd import shapely.ops import shapely.geometry res = requests.get("http://data.insideairbnb.com/sweden/stockholms-län/stockholm/2021-10-29/visualisations/neighbourhoods.geojson") # get geometry of stockholm gdf = gpd.GeoDataFrame.from_features(res.json()).set_crs("epsg:4326") # plot regions of stockholm ax = gdf.plot() # get linestring of exterior of all regions in stockholm ls = shapely.geometry.LineString(shapely.ops.unary_union(gdf["geometry"]).exterior.coords) b = ls.bounds # clip boundary of stockholm to left edge ls = ls.intersection(shapely.geometry.box(*[x-.2 if i==2 else x for i,x in enumerate(b)])) # add left edge to plot gpd.GeoSeries(ls).plot(edgecolor="yellow", lw=5, ax=ax)
{ "language": "en", "url": "https://stackoverflow.com/questions/70413099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MySQL insert record I have this SQL query: insert into messages (message, hash, date_add) values ('message', 'hash', NOW()) ON DUPLICATE KEY IGNORE hash is unique; what is wrong with the query? I got the error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IGNORE' at line 1 A: According to the MySQL docs, the syntax should be: INSERT IGNORE INTO messages (message, hash, date_add) VALUES('message', 'hash', NOW());
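The same behaviour can be sandbox-tested without a MySQL server: SQLite's INSERT OR IGNORE is the analogous statement, so a small Python sketch (using the question's schema, but an in-memory database) shows the duplicate hash being skipped silently:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (message TEXT, hash TEXT UNIQUE, date_add TEXT)")

# First insert succeeds; the second has a duplicate hash and is silently skipped.
conn.execute("INSERT OR IGNORE INTO messages VALUES ('message', 'hash', datetime('now'))")
conn.execute("INSERT OR IGNORE INTO messages VALUES ('other', 'hash', datetime('now'))")

count = conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
print(count)  # only one row survives
```

Note the keyword order differs between the two engines: MySQL puts IGNORE between INSERT and INTO, SQLite spells it INSERT OR IGNORE.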
{ "language": "en", "url": "https://stackoverflow.com/questions/5395855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: JsonDecode FormatException? Could someone help me find out why Flutter is saying this about my JSON: FormatException: Unexpected character: "variable1 : nul " I already checked my json file and it looks right, but when calling jsonDecode this problem occurs. This is the json example; it's a list with about 1000 items. [ { "id":2, "first_name":"NRD", "phone":"", "description":"", "created_at":"2020-12-22 08:02:20", "aveg":112, "updated_at":"2020-12-22 08:02:20", "long":1, "lat":"1", "link":"localhost:8000", "temperature_id":1, "email":"[email protected]", "user":"null" }, ]
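Dart's jsonDecode enforces the same strict JSON grammar that Python's json module does, so the failure can be reproduced outside Flutter. A hedged sketch (the snippets below are illustrative, not the asker's real payload) shows that a bare nul, or a trailing comma like the one in the sample above, both fail to parse at exactly that token:

```python
import json

good = '{"id": 2, "user": null}'
bad = '{"id": 2, "user": nul}'   # bare "nul" is not valid JSON
trailing = '[{"id": 2}, ]'       # a trailing comma is also invalid JSON

print(json.loads(good))

for text in (bad, trailing):
    try:
        json.loads(text)
    except json.JSONDecodeError as exc:
        print("rejected:", exc.msg)
```

So the fix is usually on the server side: emit null (or the quoted string "null" if a string is intended) and no trailing commas, then the decoder accepts the document.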
{ "language": "en", "url": "https://stackoverflow.com/questions/65399389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: pattern example for apache index of When reading the following documentation: https://httpd.apache.org/docs/2.2/mod/mod_autoindex.html I see the option: * *P=pattern lists only files matching the given pattern However, I can't find an example anywhere and can't get it to work. Does someone know? A: It's quite simple really. Let's suppose: * *Your DocumentRoot is /var/www *You have defined Options Indexes or +Indexes for /var/www *Your DocumentRoot has this file list: a,b,c,d,d1,d2,f,g *You want to list files starting with d. In this case all you have to do is request this: http://example.com/?P=d* The pattern syntax is similar to the one used since DOS: ? for a single character, * for matching any run of characters. So if you wanted to match files whose third character is "n" you would use this pattern ??n* and it will list only files matching that pattern. Try it out.
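If you want to experiment with those wildcard semantics before touching the server, Python's fnmatch implements the same DOS-style ? and * rules described above (mod_autoindex's exact matching details are in the Apache docs, so treat this as an approximation):

```python
from fnmatch import fnmatch

files = ["a", "b", "c", "d", "d1", "d2", "f", "g", "banana"]

# * matches any run of characters; ? matches exactly one character.
starts_with_d = [f for f in files if fnmatch(f, "d*")]
third_char_n = [f for f in files if fnmatch(f, "??n*")]

print(starts_with_d)
print(third_char_n)
```

With the answer's file list plus one longer name, d* selects d, d1 and d2, and ??n* selects only names whose third character is n.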
{ "language": "en", "url": "https://stackoverflow.com/questions/39248046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: context.GetInput returns base class instead of specified derived class in $Type Using Azure Durable Functions, I am trying to use the context.GetInput<model>() function which returns the specified model. The model being used has a parameter that is another model which is a derived class. The model that is outputted from context.GetInput<model>() returns the model with the base class instead of the derived class. I have checked the $type specified in the context, which shows the derived class, but when checking the outputted model, the result is the base class. for example: public class Student{ public Book book {get;set;} } public class Textbook:Book { public string classfor {get;set;} } public class Book { public string title {get;set;} } [ActivityTrigger] DurableActivityContextBase context is a parameter to the function. Then I would be calling : var model = context.GetInput<Student>() where the context includes { "$type": "Student", "book" : { "$type": "Textbook", "classfor" : "Math", "title" : "PreAlgebra" } } Yet the result is Model of student which contains a Book instead of Textbook, where the title is assigned "PreAlgebra" I expect the output of Student model to have a Textbook with properties: title = "PreAlgebra" classfor = "Math" but the actual Student output contains a Book with the property title = "PreAlgebra" A: I've encountered the same problem you did last week. Unfortunately right now Azure Functions (even 2.x) don't support polymorphism for durable functions. The durable context serializes your object to JSON, but there's no way to pass JSON serialization settings as described here on GitHub. There's also another issue about this specific problem. In my case I have an abstract base class, but you can use the same approach for your derived types. You can create a custom JSON converter that will deal with picking the correct type during deserialization. 
So for example if you have this sort of inheritance: [JsonConverter(typeof(DerivedTypeConverter))] public abstract class Base { [JsonProperty("$type")] public abstract string Type { get; } } public class Child : Base { public override string Type => nameof(Child); } public class Child2 : Base { public override string Type => nameof(Child2); } Then you can have your JSON converter: public class BaseDerivedTypeConverter : DefaultContractResolver { // You need this to protect yourself against circular dependencies protected override JsonConverter ResolveContractConverter(Type objectType) { return typeof(Base).IsAssignableFrom(objectType) && !objectType.IsAbstract ? null : base.ResolveContractConverter(objectType); } } public class DerivedTypeConverter : JsonConverter { private static readonly JsonSerializerSettings Settings = new JsonSerializerSettings() { ContractResolver = new BaseDerivedTypeConverter() }; public override bool CanConvert(Type objectType) => (objectType == typeof(Base)); public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) { JObject jsonObject = JObject.Load(reader); // Check whether jsonObject["$type"].Value<string>() is a supported type // You can have a static dictionary or a const array of supported types // You can leverage the array or dictionary to get the type you want again var type = Type.GetType("Full namespace to the type you want", false); // the false flag means that the method call won't throw an exception on error if (type != null) { return JsonConvert.DeserializeObject(jsonObject.ToString(), type, Settings); } else { throw new ValidationException("No valid $type has been specified!"); } } public override bool CanWrite => false; public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer) => throw new NotImplementedException(); } In my usage when I call context.GetInput<Base>() I can get either Child or Child2 because Base is abstract.
In your case it can be Book or Student depending on what the actual value is. This also applies to other durable function operations, like var foobar = await context.CallActivityAsync<Base>("FuncName", context.GetInput<int>()); The converter will deal with that and you'll get the object you want inside foobar. A: Per my understanding, the class Textbook extends Book, so "Book" is the parent class and "Textbook" is a subclass. In your context, you want to turn the child class (Textbook) into the parent class (Book). After that, "book" will just have the attribute "title", which is their common attribute, but doesn't have the specific attribute "classfor". You can refer to the code below: A: Tracked the updates to pass in Json serialization to Azure Functions here showing that it will be in v2.1!
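Stripped of the Json.NET plumbing, the converter above implements one idea: read the $type discriminator first, then deserialize into the matching concrete class. Here is that dispatch sketched language-neutrally in Python, with stand-in Book/Textbook classes mirroring the question (the registry and class names are illustrative, not part of any C# API):

```python
import json
from dataclasses import dataclass

@dataclass
class Book:
    title: str

@dataclass
class Textbook(Book):
    classfor: str

# Registry of supported discriminator values -> concrete types.
TYPES = {"Book": Book, "Textbook": Textbook}

def deserialize(payload: str) -> Book:
    data = json.loads(payload)
    cls = TYPES.get(data.pop("$type"))
    if cls is None:
        # mirrors the C# ValidationException branch
        raise ValueError("no valid $type has been specified")
    return cls(**data)

book = deserialize('{"$type": "Textbook", "title": "PreAlgebra", "classfor": "Math"}')
print(type(book).__name__, book.title, book.classfor)
```

The caller asks for the base type but gets back the derived instance, which is exactly what context.GetInput fails to do without the custom converter.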
{ "language": "en", "url": "https://stackoverflow.com/questions/58752301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Make grep print missing queries I'm using grep -f and would like to make grep also print lines in file1 that are missing in file2: file1: hello my name is bernardo file 2: hello 1 my 2 name 3 is 4 ideal output: hello 1 my 2 name 3 is 4 bernardo A: This will print the lines in file2 that are not in file1: fgrep -F -x -v -f file1 file2 The -F means to treat the input as fixed strings rather than patterns, the -x means to match the whole line, the -v means to print lines that don't match rather than those that do match, and -f file1 uses file1 as a list of patterns. Your question is kind of unclear but I'm guessing that you want all of the lines that appear in one or the other file but not both. There are several ways to do that. One is to do two greps: fgrep -F -x -v -f file2 file1; fgrep -F -x -v -f file1 file2 Another, if the order of the lines in the output doesn't matter, is to sort them and use comm: sort file1 -o sortfile1 sort file2 -o sortfile2 comm -3 sortfile1 sortfile2 A: grep -f file1 file2 && grep -o -f file1 file2 | sed s'/^\(.*\)$/-e "\1"/g' | tr '\n' ' ' | xargs grep -v file1 What this does is print all matches from file2 by patterns in file1, and after that print all lines from file1 that do not match lines in file2. The second part is done as follows: * *grep -o -f file1 file2 returns matches between file1 and file2, but only the matching parts of the lines; *sed s'/^\(.*\)$/-e "\1"/g' | tr '\n' ' ' prefixes those matching parts with -e, encases them in double quotes, and replaces newlines printed by the grep -f command with spaces. This builds a string of the form -e "[pattern1]" -e "[pattern2]" ..., which is what grep uses for multiple-pattern matching. The quotes (hopefully) ensure that spaces in patterns will not be a problem; *xargs grep -v file1 builds and executes the command grep -v file1 [whatever was piped to xargs].
The result is all lines from file1 that have no match in the output of the first command (and, thus, in file2). I'm not completely sure this solves your problem since non-matching lines from file1 are printed at the end (by far the easiest option), and you do not say where you want them. It probably could be done more elegantly, too. Here's a sample output: sh-4.3$ cat file1 hello my name is bernardo sh-4.3$ cat file2 hello 1 my 2 name 3 is 4 sh-4.3$ grep -f file1 file2 && grep -o -f file1 file2 | sed s'/^\(.*\)$/-e "\1"/g' | tr '\n' ' ' | xargs grep -v file1 hello 1 my 2 name 3 is 4 bernardo
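For comparison, the same matching logic can be sketched in Python, using plain substring matching the way un-anchored grep does (a toy reimplementation to show the set logic, not a replacement for the shell pipeline):

```python
patterns = ["hello", "my", "name", "is", "bernardo"]   # file1
lines = ["hello 1", "my 2", "name 3", "is 4"]          # file2

# Lines from file2 that match some pattern from file1 (grep -f file1 file2).
matched_lines = [l for l in lines if any(p in l for p in patterns)]

# Patterns from file1 that match no line in file2: the "missing" ones.
unmatched = [p for p in patterns if not any(p in l for l in lines)]

for out in matched_lines + unmatched:
    print(out)
```

This prints the four matched lines followed by bernardo, which is the asker's ideal output; note that substring matching means a short pattern like "is" would also hit longer words containing it, the same caveat grep -f has without -x.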
{ "language": "en", "url": "https://stackoverflow.com/questions/37572449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to livestream webcam server in c++? I am working on a personal project, i want to learn how to create a livestream webcam server. I was trying to upload files and POST them to my server, but the lag results to be too large to be livestream. does anyone have some source code examples for a c++ webcam server? or any pointers that would be helpful? thanks. A: RTSP, as suggested by its name, should suit better for real-life streaming applications. To reduce the lag you can play with bitrate and dropping frames. There is proprietary pvServer though, which is capable of HTTP-streaming.
{ "language": "en", "url": "https://stackoverflow.com/questions/11807463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to retrieve the A1-style reference of an Excel worksheet cell in VBA How do I retrieve the A1-style reference of an Excel worksheet cell in VBA? I'm using Access 2007 VBA. So where for example for the cell MyWorksheet.Range("A1").Offset(2, 3) the value "D3" is returned. It seems such a simple question. A: To print the A1 style address to the Immediate Window, use the following. By specifying that you don't want the columns or rows to be absolute, you don't have to use the replace function. Public Sub Test() Debug.Print Range("A1").Offset(2, 3).Address(RowAbsolute:=False, ColumnAbsolute:=False) End Sub A: MyWorksheet.Range("A1").Offset(2,3).Address(False,False) The arguments (all optional) for address are RowAboslute - False for no dollars signs ColumnAbsolute - False for no dollar signs ReferenceStyle - default is xlA1 (constant value is 1 if your late binding) External - include the workbook/worksheet name RelativeTo - This one's a complete mystery to me. It never works how I expect.
{ "language": "en", "url": "https://stackoverflow.com/questions/4451362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to migrate from string column to foreign key pointing to row in other table with same string value in Laravel? I am using Laravel 5.6 and need some help migrating a column from a populated table preserving the content logic. There is a table pages with a column named icon that accepts string values. Ex: Schema::create('pages', function (Blueprint $table) { $table->increments('id'); ... $table->string('icon')->nullable(); } The pages table is populated and the icon column, being nullable, is not always used. A new icons table was created to store all the usable icon classes. Ex: Schema::create('icons', function (Blueprint $table) { $table->increments('id'); $table->string('name'); }); How can I migrate the icon column from the pages table to be a foreign key that points to the icons table row that has the same value in the name column, or null if not populated? A: I'd suggest a polymorphic many-to-many approach here so that icons are reusable and don't require a bunch of pivot tables, should you want icons on something other than a page. Schema::create('icons', function(Blueprint $table) { $table->increments('id'); $table->string('name'); }); Schema::create('iconables', function(Blueprint $table) { $table->integer('icon_id'); $table->integer('iconables_id'); $table->string('iconables_type'); // morph type columns store class names, so string rather than integer }); Now you just need to determine if the pages have an existing Icon.
If they do, then hold reference to them so you can insert them: $pagesWithIcons = Page::whereNotNull('icon')->get(); At this point you need to define the polymorphic relations in your models: // icon class Icon extends Model { public function pages() { return $this->morphedByMany(Page::class, 'iconable'); } } // page class Page extends Model { public function icons() { return $this->morphToMany(Icon::class, 'iconable'); } } Now you just need to create the icons (back in our migration), and then attach them if they exist:
{ "language": "en", "url": "https://stackoverflow.com/questions/52829380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Web application in Apache cannot access another web application on the same server running on localhost? We have an Angular application running on Apache. This needs to access services from a backend application, a Java application running on localhost:9090. The website loads, but on pages with backend calls I'm seeing CORS errors: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:9090/cord/titlesAuthors/. (Reason: CORS request did not succeed). I searched SO; there are several posts about CORS, but most of them refer to localhost within Apache. * *The site is running on Apache (on CentOS 8, VM) at http://cordboard.info. *The page http://cordboard.info/papers internally calls a service at http://localhost:9090/cord/titlesAuthors *The service is also running on the same VM and can also fetch data if I do a curl -v http://localhost:9090/cord/titlesAuthors *Not sure where I am going wrong: the public site works, and the service individually works. Questions: * *If needed, I can change the IP and port for the backend application. What should it be? I thought that since the Angular application only needs to access the backend, localhost would be fine (the backend does not need to be public). *Is there something we should do on Google Cloud VM's network / rules / firewall?
Our setup: * *Google cloud platform - VM (static ip <-- website domain has an A record) *Apache configs /etc/httpd/sites-enabled/cordboard.info.conf <VirtualHost *:80> ServerName www.site.com ServerAlias site.info DocumentRoot /var/www/site.com/html ErrorLog /var/www/site.com/log/error.log CustomLog /var/www/site.com/log/requests.log combined <Directory "/var/www/site.com/html"> Order Allow,Deny Allow from all AllowOverride All </Directory> </VirtualHost> and /etc/httpd/conf/httpd.conf LoadModule headers_module modules/mod_headers.so LoadModule rewrite_module modules/mod_rewrite.so *Backend application running as a service on localhost sudo systemctl status cord-java ... Loaded: loaded (/etc/systemd/system/cord-java.service; enabled; vendor preset: disabled) Active: active (running) since Wed 2020-05-20 21:42:39 UTC; 32min ago
{ "language": "en", "url": "https://stackoverflow.com/questions/61877220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails Functional Tests and OAuth Parameter problem I've been pulling my hair out over an issue with using OAuth signed requests in Rails functional tests. I'd appreciate help, or pointers to working examples. I'm trying to work with the built-in ActionController::TestRequest overrides that are in the oauth gem (0.4.5). I'd already tried this solution to no avail: http://d.hatena.ne.jp/falkenhagen/20091110/1257830144 This is what I'm doing now... require 'oauth/client/action_controller_request' I've created a method for doing the login to which I can pass one of my OauthConsumer objects (ActiveRecord), and my URL parameters (for the query string). def _do_oauth(consumer, params = {}) c=OAuth::Consumer.new(consumer.consumer_key, consumer.consumer_secret) t=OAuth::AccessToken.new(c) ActionController::TestRequest.use_oauth=true @request.configure_oauth(c, t, params) end and call it like so in my test case: params = { :store => 'foo' } _do_oauth(oauth_consumers(:one), params) # currently not working for passing params get :index, { :format => :json }.merge(params) But it doesn't look like the requests are picking up the "params" or encoding them properly.
The error I'm getting is (which occurs on the "get" line above): ArgumentError: comparison of Array with Array failed /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/helper.rb:37:in `sort' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/helper.rb:37:in `normalize' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/request_proxy/base.rb:98:in `normalized_parameters' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/request_proxy/base.rb:113:in `signature_base_string' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/signature/base.rb:77:in `signature_base_string' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/signature/hmac/base.rb:12:in `digest' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/signature/base.rb:65:in `signature' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/signature.rb:23:in `sign' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/client/helper.rb:45:in `signature' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/client/helper.rb:75:in `header' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/client/action_controller_request.rb:54:in `set_oauth_header' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/client/action_controller_request.rb:50:in `apply_oauth!' 
/home/sp/.rvm/gems/ruby-1.9.2-p180/gems/oauth-0.4.5/lib/oauth/client/action_controller_request.rb:14:in `process_with_new_base_test' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/actionpack-3.0.7/lib/action_controller/test_case.rb:412:in `process' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/actionpack-3.0.7/lib/action_controller/test_case.rb:47:in `process' /home/sp/.rvm/gems/ruby-1.9.2-p180/gems/actionpack-3.0.7/lib/action_controller/test_case.rb:350:in `get' test/functional/deals_controller_test.rb:56:in `block in <class:DealsControllerTest>' I'm assuming it's something to do with the query params not being encoded correctly, or the header not being formatted properly. Any help (or even pointers to examples that do work) would be greatly appreciated. I should also point out that the app in question I am trying to test is a 2-legged OAuth provider. So, the app is just parsing the signature and checking that the consumer key/secret check out. A: This probably won't help with the initial problem at this point but it might save someone a few minutes. The problem is that the sort method on hash freaks out if a hash has a mixture of symbol and string keys. Oauth adds some entries keyed by strings into the params hash.
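That diagnosis is easy to demonstrate outside Ruby: ordering keys of mixed types is ill-defined, and Python 3 fails in an analogous way when a params collection mixes key types (ints standing in for Ruby symbols here), which is why normalizing every key to one type fixes the sort:

```python
# Mimic an OAuth params hash whose keys are a mix of types
# (Ruby symbols vs. strings; here ints vs. strings).
params = [("oauth_nonce", "abc"), (42, "xyz")]

try:
    sorted(params)
except TypeError as exc:
    # comparing ("oauth_nonce", ...) with (42, ...) is undefined
    print("sort failed:", exc)

# Normalizing every key to a string makes the sort well-defined again.
normalized = sorted((str(k), v) for k, v in params)
print(normalized)
```

The Ruby message "comparison of Array with Array failed" is the same failure one level down: the key/value pairs are arrays, and comparing them falls over on the mismatched key types inside.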
{ "language": "en", "url": "https://stackoverflow.com/questions/6823563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Multiple threads increases wifi speed? Why? I used my laptop to set up a hostednetwork and tested the wifi speed between my laptop and my android phones. The two phones are the same by the way. I first connect my laptop and one of my phone, and transmit a file of 400 Mb from laptop to phone and it took 169.6s on average. Then, I connect my laptop with two phones and use two threads to transmit a file of 200 Mb to each phone respectively. So totally 400 Mb data is transmitted. And on average, it only took 136.2s, which is less than 169.6s. My question is, since my laptop only have one wifi chipset, how can it be faster when I use two threads to transmit data of the same size? One thing I am sure is that it cannot be the interference from other wifi devices since it occurs all the time, even when there is no wifi devices around. Is it possible that when I did it using one thread, the computer didn't allocate all the wifi resources to that thread?
{ "language": "en", "url": "https://stackoverflow.com/questions/20626026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How should I apply this function to the strings of a list given this situation? How can I apply the above function to each string of this list?: lis = ['hi how are you', 'pretty good', 'the quick brown fox', 'the is quick brown fox play', 'play the quick brown fox'] I tried to: [ request(x) for x in lis ] I also tried to map over the elements of the list and it did not work. A: I believe that the problem lies in your global variable, lis_. You manipulate this through all of the lis entries. You return the list reference for each element of your comprehension, but continue to update the list on later parsing. What you get is a list of identical pointers (each one is the reference to lis_), each of which contains all of the processing. To fix this, use proper encapsulated programming practices. Use local variables, clean them on each call, and return only the value needed in the calling program. Does that get you moving? A: Would this be helpful? lis = ['hi how are you', 'pretty good', 'the quick brown fox', 'the is quick brown fox play', 'play the quick brown fox'] def requ(text): # This mimics the request output = [] for word in text.split(): # This just mimics getting some ID for each word xyz = "".join(str([ord(x) for x in word])).replace(", ", "") # This mimics the word root part if word in ["are", "is"]: output.append((word, "be", xyz)) else: output.append((word, word, xyz)) return output new_lis = [requ(w) for w in lis] print(new_lis) Output: [[('hi', 'hi', '[104105]'), ('how', 'how', '[104111119]'), ('are', 'be', '[97114101]'), ('you', 'you', '[121111117]')], [('pretty', 'pretty', '[112114101116116121]'), ('good', 'good', '[103111111100]')], [('the', 'the', '[116104101]'), ('quick', 'quick', '[11311710599107]'), ('brown', 'brown', '[98114111119110]'), ('fox', 'fox', '[102111120]')], [('the', 'the', '[116104101]'), ('is', 'be', '[105115]'), ('quick', 'quick', '[11311710599107]'), ('brown', 'brown', '[98114111119110]'), ('fox', 'fox', '[102111120]'),
('play', 'play', '[11210897121]')], [('play', 'play', '[11210897121]'), ('the', 'the', '[116104101]'), ('quick', 'quick', '[11311710599107]'), ('brown', 'brown', '[98114111119110]'), ('fox', 'fox', '[102111120]')]] A: Your list comprehension is fine, the reason you get strange output looks to be that the request function always returns a reference to the global variable lis_, which you keep appending things to. You should create lis_ as a local variable inside convert().
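Here is a minimal sketch of the pitfall both answers describe — a helper that appends to a module-level list and returns that same list object — with invented names, since the original request function is not shown:

```python
# Invented stand-in for the original `request` function: it appends to a
# module-level list and returns that same list to every caller.
shared = []

def bad_request(text):
    shared.append(text.upper())
    return shared            # every call returns the SAME list object

out = [bad_request(w) for w in ["hi", "ok"]]
# Both entries are references to `shared`, so both show all the processing:
print(out)                   # [['HI', 'OK'], ['HI', 'OK']]

def good_request(text):
    result = []              # fresh local list on every call
    result.append(text.upper())
    return result

print([good_request(w) for w in ["hi", "ok"]])   # [['HI'], ['OK']]
```

The fix is exactly what the first answer says: make the accumulator local so each call returns its own list.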
{ "language": "en", "url": "https://stackoverflow.com/questions/42656083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: override ip address or host label in Ansible play On my personal network I have a simple list of hosts: [host1] 192.168.1.2 [host2] 192.168.1.10 When I set up a local host, say 'host2', it has a random (dhcp) IP address. I've changed my hosts.ini and overridden the host IP address, then I use host vars to actually set the IP address I want into its dhcpcd.conf. My play has all of my local machines, so I need the host label to match. But I can't get this to work on the first boot without some manual work. I can think of a few workarounds: * *override the IP in a manual inventory on the ansible-playbook commandline *specify a host ip and hostname in a manual inventory on the ansible-playbook commandline *set an ansible host name on the ansible-playbook commandline The problem is I can't get any of them to work: ansible-playbook play.yml -i "[host2]\n192.168.0.123," --limit host2 [WARNING]: Could not match supplied host pattern, ignoring: host2 ansible-playbook play.yml -i "192.168.0.123 ansible_host=host2," --limit host2 [WARNING]: Could not match supplied host pattern, ignoring: host2 ansible-playbook play.yml -i "192.168.0.123," -e "ansible_host=host2" --limit host2 [WARNING]: Could not match supplied host pattern, ignoring: host2 I really think the third idea has merit, I just can't get there from here. Since this is a oneshot type of problem I don't want to have to create a temporary hosts file, but I'm unsure of another way to do it. Note having an earlier play/task that calls add_host almost works, but given host2 already exists in inventory, I either have to null out my inventory or (somehow) call my host exclusively. Remember, this is for bootstrapping, so the idea is to avoid any magic later. Apologies for this being kinda long. I wanted to give context for the XY problem and also ask my specific strategy/problem. A: Why not use a dynamic inventory based on the mac address of your devices? Just a small example. 
Of course it needs to be improved but it is for your reference: #!/usr/bin/env python # -*- coding:utf-8 -*- from __future__ import (absolute_import, division, print_function, unicode_literals) import json import socket import subprocess import re def main(): print(json.dumps(inventory(), sort_keys=True, indent=2)) def inventory(): ip_address = find_ip() return { 'all': { 'hosts': [ip_address], 'vars': {}, }, '_meta': { 'hostvars': { ip_address: { 'ansible_ssh_user': 'ansible', } }, }, 'ip': [ip_address] } def find_ip(): lines = subprocess.check_output(['arp', '-a']).decode('utf-8').split('\n') for line in lines: if re.search('a0:d7:95:1a:80:f8', line): ip = re.search(r"(\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b)", line) return ip.group(1) if __name__ == '__main__': main() Output: { "_meta": { "hostvars": { "192.168.0.100": { "ansible_ssh_user": "ansible" } } }, "all": { "hosts": [ "192.168.0.100" ], "vars": { "ansible_connection": "local" } }, "ip": [ "192.168.0.100" ] } Example: ansible-playbook -i inventories/dynamic/mydyn.py hosts.yml PLAY [Test wait] **************************************************************************************************************** TASK [Debug] ******************************************************************************************************************** ok: [192.168.0.100] => { "ansible_host": "192.168.0.100" } TASK [Ping] ********************************************************************************************************************* ok: [192.168.0.100] PLAY RECAP ********************************************************************************************************************** 192.168.0.100 : ok=2 changed=0 unreachable=0 failed=0
{ "language": "en", "url": "https://stackoverflow.com/questions/50998150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to use a third variable to choose color for scatter plot in Matplotlib I have a data set which provides latitude, longitude, and temperature for a time series of oceanographic readings. Right now I am plotting the trajectory through lat/lon space thus: fig = plt.figure() ax = fig.add_subplot(111) ax.plot(lon, lat, 'o') I want to have each dot display in a color representative of the temperature at that location (and display a colorbar). How would I go about doing this? If it's pertinent, this is actually a map from the Basemap toolkit. Thanks. This was marked as duplicate; none of the solutions provided in the questions which it purportedly duplicates works. Here is the code in its entirety: map = Basemap(llcrnrlon = -160, llcrnrlat = -90, urcrnrlon = 40, urcrnrlat = 10, resolution = 'l') map.drawcoastlines() map.drawcountries() map.fillcontinents(color='gray') map.drawmapboundary() cs = map.scatter(lon,lat, c = temp[0]) plt.savefig('test.pdf') plt.show() It raises this error: ValueError: Invalid RGBA argument: <xarray.DataArray 'TEMP' ()> array(6.459000110626221)
{ "language": "en", "url": "https://stackoverflow.com/questions/47558863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Bind Caliburn Micro to EF Entity Error I'm binding an Entity Framework 6.0 Entity to a Caliburn Micro View: <aura:AuditView Grid.Row="0" x:Name="SelectedAudit" cal:View.Model="{Binding SelectedAudit}" cal:View.Context="SelectedAudit"/> The error produced on screen is: "Cannot find view for System.Data.Entity.DynamicProcies.Audit_9B5A..." SelectedAudit is the entity property on the ViewModel. Should I create a map (AutoMapper) from + to entity to a new AuditModel? Or am I missing something magical? EDIT: code for aura:AuditView <UserControl x:Class="Aura.AuditView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:aura="clr-namespace:Aura" xmlns:cal="http://www.caliburnproject.org" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="300"> <UserControl.Resources> <Style x:Key="LockedTextBox" TargetType="{x:Type TextBox}"> <Setter Property="Focusable" Value="False"/> <Setter Property="IsHitTestVisible" Value="False"/> <Setter Property="IsReadOnly" Value="True"/> <Setter Property="Background" Value="#FFEFE2E2"/> </Style> </UserControl.Resources> <Grid Grid.Row="0"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Label Grid.Row="0" Grid.Column="0" Content="Description" HorizontalAlignment="Right"/> <TextBox Grid.Row="0" Grid.Column="1" Text="{Binding Description}" Margin="3" /> <Label Grid.Row="1" Grid.Column="0" Content="User" HorizontalAlignment="Right"/> <TextBox Grid.Row="1" Grid.Column="1" Text="{Binding UserId}" Margin="3" Style="{StaticResource LockedTextBox}"/> <Label Grid.Row="0" Grid.Column="2" 
Content="Begin date" HorizontalAlignment="Right"/> <DatePicker Grid.Row="0" Grid.Column="3" SelectedDate="{Binding BeginDate}" Margin="3"/> <Label Grid.Row="1" Grid.Column="2" Content="Deadline" HorizontalAlignment="Right"/> <DatePicker Grid.Row="1" Grid.Column="3" SelectedDate="{Binding Deadline}" Margin="3"/> <Label Grid.Row="2" Grid.Column="2" Content="End date" HorizontalAlignment="Right"/> <TextBox Grid.Row="2" Grid.Column="3" Text="{Binding EndDate, StringFormat=dd/MM/yyyy}" Margin="3" Style="{StaticResource LockedTextBox}"/> </Grid> </UserControl> A: I believe you actually want cal:Bind.Model="{Binding SelectedAudit}" Otherwise you are trying to do viewmodel-first resolution in which case Caliburn Micro will look to resolve a view for the VM instead of using the view that you have provided. e.g. <aura:AuditView Grid.Row="0" x:Name="SelectedAudit" cal:Bind.Model="{Binding SelectedAudit}" />
{ "language": "en", "url": "https://stackoverflow.com/questions/36693361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Delete row before insert a new one How can I delete a row before inserting a new one in the same table? I tried it with a trigger, but I read that it is not possible because it could cause a deadlock. I also wanted to save the row which should be deleted to another table (for example, Table B) before deleting it and then insert a new one (into Table A). Is there any other way to do it? PS: They will have the same key A: You could use UPDATE... UPDATE tbl SET col1 = newCol1, col2 = newCol2 WHERE etc = etc And if you want to insert the updated row into another table, you could use a TRIGGER AFTER UPDATE for that. CREATE TRIGGER TriggerName ON Tbl AFTER UPDATE AS INSERT INTO Log (Col1, Col2) SELECT Col1, Col2 FROM deleted
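The trigger in the answer is SQL Server flavored, but the underlying "archive, delete, insert" pattern is portable. Here is a runnable sketch of that flow using Python's built-in sqlite3 — the table names, columns, and values are invented for illustration:

```python
import sqlite3

# Table `a` holds the live rows; table `b` receives the row being overwritten.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE b (id INTEGER, val TEXT);
    INSERT INTO a VALUES (1, 'old');
""")

def replace_row(con, row_id, new_val):
    # archive the existing row, then delete it and insert the replacement
    con.execute("INSERT INTO b SELECT id, val FROM a WHERE id = ?", (row_id,))
    con.execute("DELETE FROM a WHERE id = ?", (row_id,))
    con.execute("INSERT INTO a VALUES (?, ?)", (row_id, new_val))
    con.commit()

replace_row(con, 1, "new")
print(con.execute("SELECT val FROM a WHERE id = 1").fetchone()[0])  # new
print(con.execute("SELECT id, val FROM b").fetchone())              # (1, 'old')
```

In a production database you would wrap the three statements in one transaction (as the function does here) so a failure between the delete and the insert cannot lose the row.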
{ "language": "en", "url": "https://stackoverflow.com/questions/36937157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Pivot table custom aggregation function I have a table like this in an excel spreadsheet: Col1 | Col2 | Col3 -----------+-----------+----------- A | X | 1 A | Y | 2 B | X | 3 B | Y | 4 B | Z | 5 I want to use the aggregation feature of the pivot table. Using the typical SUM of VALUES aggregation, provided by Excel, I get: Col1 | Col3 -----------+----------- A | 3 B | 12 But I want to use a different aggregation function. I want to use something that does: Square root of ( Sum of (Square(x))) So that in the end I get the table: Col1 | Col3 -----------+----------- A | SQRT(5) <= Sqrt(1*1 + 2*2) B | SQRT(50) <= Sqrt(3*3 + 4*4 + 5*5) Is there ANY way (VBA, C++, assembly, whatever it takes) I can specify my own functions in the aggregation list? NOTE: I KNOW HOW TO DO THIS IN A SHEET, DON'T BOTHER ANSWERING IF IT'S NOT ABOUT WRITING CUSTOM AGGREGATION FUNCTIONS FOR PIVOTTABLE A: It's not possible to write a custom aggregation function for a standard pivot table. But you can probably do what you want using MDX... maybe an MDX expert would like to comment?
{ "language": "en", "url": "https://stackoverflow.com/questions/12204462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to resolve "Dependency convergence error" when using maven enforcer plugin? I am just trying to get started with maven-enforcer-plugin using a small pom (before I jump into my project pom, which has 100+ dependencies.) After I have added the enforcer plugin, I am seeing a Dependency convergence error. The pom.xml file is below (sorry, it's not tidy). How can I fix the errors without disabling the enforcer plugin? Basically I want to understand the concept behind how to use the dependencyConvergence rule. <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.demo</groupId> <artifactId>enforcer</artifactId> <version>0.0.1-SNAPSHOT</version> <dependencyManagement> <dependencies> <!-- <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> <version>5.2.13.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> <version>5.2.13.RELEASE</version> </dependency> --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>5.2.10.RELEASE</version> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>5.3.5</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-web</artifactId> <version>5.4.5</version> </dependency> </dependencies> <build> <plugins> <plugin> <artifactId>maven-enforcer-plugin</artifactId> <version>3.0.0-M3</version> <executions> <execution> <id>dependency-convergence</id> <goals> <goal>enforce</goal> </goals> <configuration> <rules> <dependencyConvergence/> </rules> </configuration> </execution> </executions> <configuration> <rules> <dependencyConvergence /> 
</rules> </configuration> </plugin> </plugins> </build> </project> Does it mean that, I have to declare each non converging dependency in the dependencyManagement explicitly as in this version of pom.xml(added dependencies to dependencyManagement). The problem with spring-context still exists as I have added it as direct dependency and then in the dependency management with different version. Basically - am able to fix the error, but not able to grasp the rules crystal clear yet. * *fix one - pom.xml - updated the version in dependency management to the one used explicitly. So now there is no need to give the version explicitly in dependencies. But this would require me to have access to dependencyManagment of parent pom. If my statement is right, this might not be the situation every time. *fix two pom.xml - excluded spring-context from spring-security-web and it worked. But if there are a dozen of exclusion to be done, its going to be a pain. If this is the way to go about with the convergence rule? In an enterprise project with 100+ dependencies and 100+ of their transitive dependencies, then the Bill of Materials(BOM) is gonna be quite huge and take time to build. hhhmmm. (I agree, there is going to be more control over the versions used and using property like <xyz.version>, upgrades can be done easily). I will very much appreciate if anyone can list down the rules involving convergence. A: A dependency convergence error means that * *the dependency is not in dependencyManagement *there are different versions of the dependency in the dependency tree The typical resolution is to define an entry in dependencyManagement that resolves the issue or to import an appropriate BOM into the dependencyManagement. This is best done in the main POM of a multi module project, but also possible in modules. Note that it is better to leave out the <version> tag in the <dependencies> section so that dependencyManagement will be used everywhere.
{ "language": "en", "url": "https://stackoverflow.com/questions/69852850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Correct way to implement a connection pool I'm trying to write a multithreading program that connects to a MySQL database and processes the returned set for a query (which has thousands of rows). The problem is that I have implemented the connection pool and I get every thread to open a connection to the database and get the resulting set. But I don't understand what the advantage of using connection pooling is if retrieving that big set takes so much time. Wouldn't it be better to get the whole set with only one connection (without using pooling) and then use thread pooling to process it? Or is there a way that every thread takes the next row of the resulting set? A: If you have a limited number of threads, I would have a connection per thread. A connection pool is more efficient if the number of threads which could use a connection is too high and those threads use the connections a relatively low percentage of the time.
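To make the trade-off concrete, here is a minimal sketch of what a connection pool actually is — a bounded queue of reusable connections. The connection factory below is a placeholder object, not a real MySQL connector:

```python
import queue

class ConnectionPool:
    """A fixed-size set of reusable connections handed out through a queue."""

    def __init__(self, make_conn, size):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(make_conn())   # create every connection up front

    def acquire(self):
        return self._q.get()           # blocks while all connections are busy

    def release(self, conn):
        self._q.put(conn)              # hand the connection back for reuse

created = []
def make_conn():
    conn = object()                    # stand-in for a real DB connection
    created.append(conn)
    return conn

pool = ConnectionPool(make_conn, size=2)
for _ in range(5):                     # five checkouts, but only two connections
    conn = pool.acquire()
    pool.release(conn)
print(len(created))                    # 2
```

This is where the answer's point shows up: pooling wins when many short-lived tasks share a few connections, while for one long query feeding worker threads, a connection per thread (or one reader thread handing rows to a thread pool) can be the simpler design.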
{ "language": "en", "url": "https://stackoverflow.com/questions/12192631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What thread calls the delegate when using iPhone CoreLocation framework? If I create CLLocationManager, assign it's delegate, and finally tell it to start updating, exactly which thread is calling the delegate? Some system thread? A: Since the documentation doesn't say anything, you can safely assume that the delegate will be called from the run loop (main thread or UI thread, depending on which term you prefer).
{ "language": "en", "url": "https://stackoverflow.com/questions/1262982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: No value retrieved from dropdown list with python mechanize when scraping a dynamic webpage I'm completely new to web scraping. I'm trying to follow the code snippet found in this question Web Scraper for dynamic forms in python I'm doing a similar search with http://www.goodlifefitness.com/fitness-classes/find-a-class/. Filling in Province, City and Class Name, and searching for a schedule. But I'm stuck with step one, retrieving a list of Provinces: #!/usr/bin/env python import re import mechanize from bs4 import BeautifulSoup br = mechanize.Browser() br.open('http://www.goodlifefitness.com/fitness-classes/find-a-class/') br.select_form('aspnetForm') ctl = br.form.find_control('ctl00$Copy$ddlRegion') But it seems that I cannot even get anything from the dropdown list: >>> items=ctl.get_items() >>> items [<Item name='' id=None selected='selected' contents='' value='' label=''>] But when I inspect the element on the webpage, clearly there are values in the first dropdown list: <select name="ctl00$Copy$ddlRegion" id="ctl00_Copy_ddlRegion" title="Select a Province" class="dropdown" onchange="comboBoxSearch_onChange(this);"> <option value="">Select a Province</option><option value="Alberta">Alberta</option><option value="British Columbia">British Columbia</option><option value="Manitoba">Manitoba</option><option value="New Brunswick">New Brunswick</option><option value="Newfoundland">Newfoundland</option><option value="Nova Scotia">Nova Scotia</option><option value="Ontario">Ontario</option><option value="Saskatchewan">Saskatchewan</option></select> Why did ctl.get_items() return nothing? Any pointers will be much appreciated. A: As you can see if you do View Source in Firefox, the items you're looking for aren't in the original HTML markup sent by the server. In fact, they are added by a JavaScript after the page has loaded. Mechanize doesn't run JavaScript, so it can't see those items; it only sees what's in the HTML. 
As an aside, this completely unnecessary use of JavaScript is a plague on modern Web development and makes doing things like you're trying to do much harder than they should be. (But then, maybe that's why they do it.) Anyway, to scrape that information from the page, you need to use something that actually loads the page in a real Web browser, such as Selenium. The other SO question you linked is different because the targeted site actually sends an HTTP POST when you select from the menus, and receives a whole new HTTP page back. This page doesn't do that.
{ "language": "en", "url": "https://stackoverflow.com/questions/36228185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: C# - How can I declare a variable in MainForm class of type "Generic Class" without specifying the generic type I have the following generic class: internal class AutoRegisterThread<T> where T: AutoRegisterAbstract { field1.... method1... } I have 5 classes that implement AutoRegisterAbstract (abstract class). In my Main form (internal partial class MainForm : Form), I need to declare a field : AutoRegisterThread<> _currentThread without specifying the generic type, because I may initialize _currentThread as: _currentThread=new AutoRegisterThread<implementedClass1> or _currentThread=new AutoRegisterThread<implementedClass2> _currentThread: will be used across the Form (in many events) A: When you have a generic class in C#, you have to provide the type parameter. You could write another class that would not be generic. If there is any logic that should be shared between generic and non-generic classes, you can move that logic to one more new class. A: Inherit from a non-generic base class: internal abstract class AutoRegisterThreadBase { } internal class AutoRegisterThread<T> : AutoRegisterThreadBase where T: AutoRegisterAbstract { field1.... method1... } Your main form field can now be of type AutoRegisterThreadBase. Note, if desired, the non-generic parent class can have the same name as the generic class; in your case, AutoRegisterThread. 
EDIT: Extended example, with usage: internal abstract class AutoRegisterThreadBase { /* Leave empty, or put methods that don't depend on typeof(T) */ } internal abstract class AutoRegisterAbstract { /* Can have whatever code you need */ } internal class AutoRegisterThread<T> : AutoRegisterThreadBase where T : AutoRegisterAbstract { private int someField; public void SomeMethod() { } } internal class AutoRegisterWidget : AutoRegisterAbstract { /* An implementation of AutoRegisterAbstract; put any relevant code here */ } // A type that stores an AutoRegisterThread<T> (as an AutoRegisterThreadBase) class SomeType { public AutoRegisterThreadBase MyAutoRegisterThread { get; set; } } // Your code that uses/calls the above types class Program { static void Main(string[] args) { var someType = new SomeType(); // Any sub-class of AutoRegisterThreadBase, including generic classes, is valid someType.MyAutoRegisterThread = new AutoRegisterThread<AutoRegisterWidget>(); // You can then get a local reference to that type // in the code that's created it - since you know the type here var localRefToMyAutoRegisterThread = someType.MyAutoRegisterThread as AutoRegisterThread<AutoRegisterWidget>; localRefToMyAutoRegisterThread.SomeMethod(); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/54728029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: OpenCV w/ Raspberry Pi: VideoCapture IP camera not working I'm working on a project that includes running OpenCV on the Raspberry Pi 3B in Java. I've followed the instructions on its website for installation in Linux, and uploaded my code: cap = new VideoCapture(); cap.open("http://192.168.137.1:8000/video.mjpg"); System.out.println(cap.isOpened()); On my Windows computer, it prints out true, but on the Pi, it prints out false. However, I am able to wget the .mjpg file, and it downloads fine. Also, it works with my usb camera (cap.open(0);). I have found online that it could be ffmpeg, but I do have libav installed, so that should be fine. It worked on my Raspberry Pi model B, but not on my model 3 B. Is there a set of libraries I'm missing? A: Did you try to install the complete opencv package with all development dependencies? apt-get install libopencv-dev A: Okay, I was able to figure out what was going on. Apparently, if you don't install the prerequisites FIRST, CMake will take into account not having them, and will disable the feature completely. I was able to figure this out during the CMake process: it stated it was "looking" for the libav libraries, and in turn did not find them. So, I decided to completely reinstall the OS (probably didn't have to, but wanted to save space) and reinstall the prereq's first, then made sure CMake was happy before the make process. I guess that's why they call them pre-requisites, huh?
{ "language": "en", "url": "https://stackoverflow.com/questions/41295135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: print the ping status output in tabular column I am a novice to Ansible. I am trying to ping a set of remote hosts and get its status. The ping status I need to print in one neat tabular column format. The below code is not showing the exact output that I am looking for. Could you please share the sample Ansible code which does this? - name: check reachable hosts hosts: protex gather_facts: false tasks: - command: ping -c1 {{ inventory_hostname }} delegate_to: localhost register: ping_result #ignore_errors: yes - debug: msg:"{{ping_result.rc}}" I am expecting the output in this format. Hostname Ping Status 10.0.0.1 Reachable 10.0.0.2 Not Reachable
{ "language": "en", "url": "https://stackoverflow.com/questions/74455037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I create a new file in a simple way in spacemacs? Currently, I know that I can create a new file in the following ways: * *c key in Neotree *SPC ' in shell layer, and use the touch xxx command I am wondering whether there is a simple way (something like SPC f xxx) or not. Thanks. A: Yes, you can use SPC f f, enter the name of the new file and then select the line starting with [?] (which is the default if no other file matches). Note you can also use this to create files in non-existing subfolders, like SPC f f my/sub/folder/file.txt RET. A: If you are using the standard vim-like keybindings, :e /path/to/file opens a file (which doesn't have to exist before), :x saves the file and closes the buffer, and :w saves without closing.
{ "language": "en", "url": "https://stackoverflow.com/questions/35531759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Multiple sticky headers Is there another way I can get multiple sticky headers to stack under each other than setting the top offset as the height of the previous sticky headers? In the code snippet if I set top: 50px in .inner-header it works fine but I am looking for some other solution where I don't need to touch .inner-header styles .container { overflow: auto; height: 300px } .header { height: 50px; background-color: pink; position: sticky; top: 0; z-index: 1; } .content { height: 1000px; } .section { height: 150px; border: 1px solid black; margin-top: 40px; } .inner-header { height: 30px; border-bottom: 1px solid black; position: sticky; top: 0; background-color: gray; } <div class="container"> <div class="header"> Main sticky header </div> <div class="content"> <div class="section"> <div class="inner-header"> Section sticky header </div> </div> </div> </div> A: .container { overflow: auto; height: 300px } .header { padding: 10px; text-align: center; background-color: pink; position: sticky; top: 0; z-index: 1; } .content { height: 1000px; } .section { height: 150px; border: 1px solid black; margin-top: 40px; } .inner-header { height: 30px; border-bottom: 1px solid black; position: sticky; top: 2.4rem; background-color: gray; } <div class="container"> <div class="header"> Main sticky header </div> <div class="content"> <div class="section"> <div class="inner-header"> Section sticky header </div> </div> </div> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/70434354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Missing Template error for json return Rails I have a locations_controller and inside I have #index and #show action. I have a map on index.html.erb and when user pans/moves on the map #show action should send json data to map to show the listings. I am sending json request to #show action. But it returns an error saying; ActionView::MissingTemplate (Missing template locations/show, application/show with {:locale=>[:en], :formats=>[:html], :variants=>[], :handlers=>[:erb, :builder, :raw, :ruby, :coffee, :jbuilder]}. Searched in: * "/usr/local/lib/ruby/gems/2.2.0/gems/web-console-2.0.0.beta3/lib/action_dispatch/templates" * "/Users/emreozkan/Desktop/yedek/Last.1/app/views" * "/usr/local/lib/ruby/gems/2.2.0/gems/web-console-2.0.0.beta3/app/views" ): app/controllers/locations_controller.rb:48:in 'show' here is the request in index.html.erb <script> (function ( $ ) { $('#map-canvas').mapSearch({ request_uri: 'locations/show', initialPosition: [ <%= @initlat %> , <%= @initlng %> ], filters_form : '#filters', listing_template : function(listing){ return '<div class="listing">' + '<h3>'+listing.address + '</h3>' + '<div class="row">' + '<div class="col-sm-2">' + '<img class="thumbnail img-responsive" src="http://dummyimage.com/150x150/000/fff.jpg">' + '</div>' + '<div class="col-sm-5">' + '<p><strong>Address : </strong>' + listing.address+ '</p>' + '<p>'+'...'+', '+'...'+' '+l'...'+'</p>' + '<p>Reg Year: ' + '...'+'</p>' + '</div>' + '<div class="col-sm-5">' + '<p><strong>Demo:</strong> '+'...'+'</p>' + '<p><strong>Demo:</strong> '+'...'+'</p>' + '</div>' + '</div>' + '</div>'; }, marker_clusterer : true }); }( jQuery )); </script> And here is my locations_controller; class LocationsController < ApplicationController def index if params[:search].present? 
location = Geocoder.search(params[:search]) @locations =location[0] else @locations = Location.all.first end @initlat = @locations.latitude @initlng = @locations.longitude end def show ne_lat = params[:ne_lat].to_f ne_lng = params[:ne_lng].to_f sw_lat = params[:sw_lat].to_f sw_lng = params[:sw_lng].to_f mylatlong2 = Location.all locs = {'results' => mylatlong2} respond_to do |format| format.html format.json {render json: locs} end end end I do not know where I am going wrong with the JSON request. If you can help me, I would appreciate it. Thank you A: In the line request_uri: 'locations/show', try using '/locations/show.json' in place of 'locations/show'. Hope this one helps!
{ "language": "en", "url": "https://stackoverflow.com/questions/30394722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to properly save Common Lisp image using SBCL? If I want to create a Lisp-image of my program, how do I do it properly? Are there any prerequisites? And doesn't it play nicely with QUICKLISP? Right now, if I start SBCL (with just QUICKLISP pre-loaded) and save the image: (save-lisp-and-die "core") And then try to start SBCL again with this image sbcl --core core And then try to do: (ql:quickload :cl-yaclyaml) I get the following: To load "cl-yaclyaml": Load 1 ASDF system: cl-yaclyaml ; Loading "cl-yaclyaml" ....... debugger invoked on a SB-INT:EXTENSION-FAILURE in thread #<THREAD "main thread" RUNNING {100322C613}>: Don't know how to REQUIRE sb-sprof. See also: The SBCL Manual, Variable *MODULE-PROVIDER-FUNCTIONS* The SBCL Manual, Function REQUIRE Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL. restarts (invokable by number or by possibly-abbreviated name): 0: [RETRY ] Retry completing load for #<REQUIRE-SYSTEM "sb-sprof">. 1: [ACCEPT ] Continue, treating completing load for #<REQUIRE-SYSTEM "sb-sprof"> as having been successful. 2: Retry ASDF operation. 3: [CLEAR-CONFIGURATION-AND-RETRY] Retry ASDF operation after resetting the configuration. 4: [ABORT ] Give up on "cl-yaclyaml" 5: Exit debugger, returning to top level. (SB-IMPL::REQUIRE-ERROR "Don't know how to ~S ~A." REQUIRE "sb-sprof") 0] Alternatively, if I try: (require 'sb-sprof) when sbcl is started with saved core, I get the same error. If sbcl is started just as sbcl there is no error reported. In fact, pre-loading QUICKLISP is not a problem: the same problem happens if sbcl is called initially with sbcl --no-userinit --no-sysinit. Am I doing it wrong? PS. If I use roswell, ros -L sbcl-bin -m core run somehow doesn't pick up the image (tested by declaring variable *A* before saving and not seeing it once restarted). PS2. So far what it looks like is that sbcl does not provide extension modules (SB-SPROF, SB-POSIX, etc.) unless they are explicitly required prior saving the image. 
A: Thanks for the help from @jkiiski here is the full explanation and solution: * *SBCL uses extra modules (SB-SPROF, SB-POSIX and others) that are not always loaded into the image. These module reside in contrib directory located either where SBCL_HOME environment variable pointing (if it is set) or where the image resides (for example, in /usr/local/lib/sbcl/). *When an image is saved in another location and if SBCL_HOME is not set, SBCL won't be able to find contrib, hence the errors that I saw. *Setting SBCL_HOME to point to contrib location (or copying contrib to image location or new image to contrib location) solves the problem. *Finally, about roswell: roswell parameter -m searches for images in a specific location. For SBCL (sbcl-bin) it would be something like ~/.roswell/impls/x86-64/linux/sbcl-bin/1.3.7/dump/. Secondly, the image name for SBCL must have the form <name>.core. And to start it, use: ros -m <name> -L sbcl-bin run. (Quick edit: better use ros dump for saving images using roswell as it was pointed out to me) A: If you want to create executables, you could try the following: (sb-ext:save-lisp-and-die "core" :compression t ;; this is the main function: :toplevel (lambda () (print "hell world") 0) :executable t) With this you should be able to call QUICKLOAD as you wish. Maybe you want to checkout my extension to CL-PROJECT for creating executables: https://github.com/ritschmaster/cl-project
{ "language": "en", "url": "https://stackoverflow.com/questions/39133421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Removing encoded text from strings read from txt file
Here's the problem: I copied and pasted this entire list to a txt file from https://www.cboe.org/mdx/mdi/mdiproducts.aspx
Sample of text lines:
BFLY - The CBOE S&P 500 Iron Butterfly Index
BPVIX - CBOE/CME FX British Pound Volatility Index
BPVIX1 - CBOE/CME FX British Pound Volatility First Term Structure Index
BPVIX2 - CBOE/CME FX British Pound Volatility Second Term Structure Index
These lines of course appear normal in my text file, and I saved the file with utf-8 encoding. My goal is to use Python to strip out only the symbols from this long list, e.g. BFLY, BPVIX, etc., and write them to a new file.
I am using the following code to read the file and split it:
x=open('sometextfile.txt','r')
y=x.read().split()
The issue I'm seeing is that there are unfamiliar characters popping up and they are affecting my ability to filter the list. Example:
print(y[0])
BFLY
I'm guessing that these characters have something to do with the encoding, and I have tried a few different things with the codec module without success. Using .decode('utf-8') throws an error when trying to use it against the above variables x or y. I am able to use .encode('utf-8'), which obviously makes things even worse.
The main problem comes when I try to loop through the list and remove any items that are not all upper case or that contain non-alpha characters. Ex:
y[0].isalpha()
False
y[0].isupper()
False
So in this example the symbol BFLY ends up being removed from the list. Funny thing is that these characters are not present in the txt file if I do something like:
q=open('someotherfile.txt','w')
q.write(y[0])
Any help would be greatly appreciated. I would really like to understand why this frequently happens when copying and pasting text from web pages like this one.
A: Why not use Regex? I think this will catch the letters in caps:
"[A-Z]{1,}/?[A-Z]{1,}[0-9]?"
This is better. I got a list of all such symbols. Here's my result. 
['BFLY', 'CBOE', 'BPVIX', 'CBOE/CME', 'FX', 'BPVIX1', 'CBOE/CME', 'FX', 'BPVIX2', 'CBOE/CME', 'FX']
Here's the code:
import re
reg_obj = re.compile(r'[A-Z]{1,}/?[A-Z]{1,}[0-9]?')
sym = reg_obj.findall(a)  # a holds the text read from the file
print(sym)
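The stray characters in the question are typically a UTF-8 BOM or zero-width characters picked up in the copy-paste. Another option (a sketch, not from the original answers) is to drop non-ASCII bytes before matching, and to anchor the pattern to the " - " separator so description words like CBOE or FX are not captured:

```python
import re

def extract_symbols(text):
    # Copy-pasted web text often carries a BOM or zero-width characters;
    # dropping everything non-ASCII makes isalpha()/regex checks behave.
    clean = text.encode("ascii", "ignore").decode("ascii")
    # Only match an all-caps token immediately followed by " -", i.e. the
    # symbol at the start of each line, not words inside the description.
    return re.findall(r"\b[A-Z][A-Z0-9/]+\b(?= -)", clean)
```

Feeding it the question's sample lines yields just ['BFLY', 'BPVIX', 'BPVIX1', 'BPVIX2'].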
{ "language": "en", "url": "https://stackoverflow.com/questions/38572626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best practice TDD - Java Object Validation and clean code
Suppose a Java class called Car, whose objects are initialized through a static factory:
public class Car {
    private String name;

    private Car(String name){//...}

    public static Car createCar(String name){
        //mechanism to validate the car attributes
        return new Car(name);
    }
}
Of course, I want to extract the validation process into a dedicated class named CarValidator. There are two ways of providing this validator to the factory:
Not stubbable/mockable validator:
public static Car createCar(String name){
    new CarValidator(name); // throw exception for instance in invalid cases
    return new Car(name);
}
Stubbable/mockable validator:
public static Car createCar(CarValidator carValidator, String name){ //ideally being an interface instead
    carValidator.validate();
    return new Car(name);
}
It looks like a redundancy here: CarValidator already contains the name value, since it stores the Car parameters as its own fields (a priori the cleanest way), thus we could bypass the second argument like this:
public static Car createCar(CarValidator carValidator){
    carValidator.validate();
    return new Car(carValidator.getName());
}
However, this looks unclear... why would a Car find its values from a Validator => no sense. So, we could refactor like this:
public static Car createCar(CarValidator carValidator, String name){
    carValidator.validate(name); // throwing exception for instance in invalid cases
    return new Car(carValidator.name());
}
Sounds less weird, but CarValidator loses the benefit of creating fields rather than passing arguments to each of its necessary private methods like:
private checkForCarName(String name);
Which method should I choose?
A: My proposition is the following: I would not mix validation of a domain object with the object itself. It would be a lot cleaner if the domain object assumed that the data passed to it are valid, and validation were performed somewhere else (e.g.
in a factory, but not necessarily). In that "factory" you would perform the data preparation step (validation, vulnerability removal, etc.) and then create the new object. You will be able to test the factory (whether it is validating properly) and not the domain object itself.
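A minimal sketch of that proposal (in Python for brevity; class and method names are illustrative, not taken from the question): the domain object trusts its input, the factory owns validation, and the validator is an injectable seam you can stub in tests.

```python
class Car:
    """Domain object: assumes the data it receives is already valid."""
    def __init__(self, name):
        self.name = name

class CarValidator:
    def validate(self, name):
        if not name or not name.strip():
            raise ValueError("car name must be non-empty")

def create_car(name, validator=None):
    # The validator is passed in (or defaulted), so tests can stub it out;
    # the factory, not the domain object, is what you test for validation.
    (validator or CarValidator()).validate(name)
    return Car(name)
```

In a test you would pass a stub validator (or assert that create_car raises on bad input), while Car itself stays free of validation logic.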
{ "language": "en", "url": "https://stackoverflow.com/questions/16388651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I log something in USQL UDO?
I have a custom extractor, and I'm trying to log some messages from it. I've tried obvious things like Console.WriteLine, but cannot find where the output is. However, I found some system logs in adl://<my_DLS>.azuredatalakestore.net/system/jobservice/jobs/Usql/.../<my_job_id>/.
How can I log something? Is it possible to specify a log file somewhere on Data Lake Store or Blob Storage Account?
A: A recent release of U-SQL has added diagnostic logging for UDOs. See the release notes here.
// Enable the diagnostics preview feature
SET @@FeaturePreviews = "DIAGNOSTICS:ON";
// Extract as one column
@input =
    EXTRACT col string
    FROM "/input/input42.txt"
    USING new Utilities.MyExtractor();
@output =
    SELECT *
    FROM @input;
// Output the file
OUTPUT @output
TO "/output/output.txt"
USING Outputters.Tsv(quoting : false);
This was my diagnostic line from the UDO:
Microsoft.Analytics.Diagnostics.DiagnosticStream.WriteLine(System.String.Format("Concatenations done: {0}", i));
This is the whole UDO:
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.Analytics.Interfaces;

namespace Utilities
{
    [SqlUserDefinedExtractor(AtomicFileProcessing = true)]
    public class MyExtractor : IExtractor
    {
        //Contains the row
        private readonly Encoding _encoding;
        private readonly byte[] _row_delim;
        private readonly char _col_delim;

        public MyExtractor()
        {
            _encoding = Encoding.UTF8;
            _row_delim = _encoding.GetBytes("\n\n");
            _col_delim = '|';
        }

        public override IEnumerable<IRow> Extract(IUnstructuredReader input, IUpdatableRow output)
        {
            string s = string.Empty;
            string x = string.Empty;
            int i = 0;

            foreach (var current in input.Split(_row_delim))
            {
                using (System.IO.StreamReader streamReader = new StreamReader(current, this._encoding))
                {
                    while ((s = streamReader.ReadLine()) != null)
                    {
                        //Strip any line feeds
                        //s = s.Replace("/n", "");
                        // Concatenate the lines
                        x += s;
                        i += 1;
                    }
                    Microsoft.Analytics.Diagnostics.DiagnosticStream.WriteLine(System.String.Format("Concatenations done: {0}", i));
                    //Create the output
                    output.Set<string>(0, x);
                    yield return output.AsReadOnly();
                    // Reset
                    x = string.Empty;
                }
            }
        }
    }
}
And these were my results, found in the following directory: /system/jobservice/jobs/Usql/2017/10/20.../diagnosticstreams
A: good question. I have been asking myself the same thing. This is theoretical, but I think it would work (I'll update if I find differently). One very hacky way is that you could insert rows into a table with your log messages as a string column. Then you can select those out and filter based on some log_producer_id column. You also get the benefit of logging if part of the script works but later parts do not, assuming the failure does not roll back. The table can be dumped to a file at the end as well.
For the error cases, you can use the Job Manager in ADLA to open the job graph and then view the job output. The errors often have detailed information for data-related errors (e.g. row number in file with error and an octal/hex/ascii dump of the row with the issue marked with ###).
Hope this helps, J
ps. This isn't a comment or an answer really, since I don't have working code. Please provide feedback if the above ideas are wrong.
{ "language": "en", "url": "https://stackoverflow.com/questions/46800248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: OrientDB 2.2.13 console.sh: getting "Cannot create a connection to remote server address(es)"
I have an embedded database where I start an OServer and try to connect to it from the console. I've been doing this successfully for many months, upgrading the database as new versions come out. Now, with 2.2.13, the embedded operations seem to work but I can't connect to the server with the 2.2.13 console.sh. I get the message:
Error: com.orientechnologies.orient.core.exception.OStorageException: Cannot create a connection to remote server address(es): [127.0.0.1:2424] DB name="master"
The Java code running the embedded database gets the following exception:
$ANSI{green {db=db}} Error executing request
com.orientechnologies.orient.core.exception.ODatabaseException: Error on plugin lookup: the server did not start correctly DB name="db"
at com.orientechnologies.orient.server.OServer.getPlugin(OServer.java:850)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.openDatabase(ONetworkProtocolBinary.java:857)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.handshakeRequest(ONetworkProtocolBinary.java:229)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.execute(ONetworkProtocolBinary.java:194)
at com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:77)
It seems to be looking for the 'cluster' plugin. Any idea why this doesn't work anymore? It did work in 2.2.12.
Thanks
Curtis
A: Seems I had automatic backup turned on but the config file was missing. So the server looked like it started up, but actually didn't. I created the config file and set enabled to false. It still didn't start up, because it sees the false, stops the configuration, and throws an exception because the 'delay' parameter isn't set.
I think OrientDB should start up without backups enabled if the config file is missing or the enabled parameter is set to false. 
At least the console is working now.
{ "language": "en", "url": "https://stackoverflow.com/questions/40812923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: vb Table cannot find primary key when using find
I am trying to use the Rows Find on a dataset column, but it comes back saying "Table does not have primary key". It does have a primary key, and the keyColumns show that the primary key does exist. Why doesn't this work?
Dim dr As DataRow
Dim cid As String
Dim table As New DataTable
Dim ds As New DataSet
table.Columns.Add("cid", GetType(String))
table.Columns.Add("filename", GetType(String))
table.PrimaryKey = New DataColumn() {table.Columns("cid")}
table.AcceptChanges()
ds.Tables.Add(table)
cmd = dbconn.CreateCommand()
cmd.CommandText = "Select cid, filename from filetable"
Dim myreader As DbDataReader = cmd.ExecuteReader()
ds.Load(myreader, LoadOption.OverwriteChanges, "table")
myreader.Close()
' check to see if primary key exists - it does.
Dim keyColumns As DataColumn()
keyColumns = table.PrimaryKey
dr = ds.Tables("table").Rows.Find("8")
A: Set the primary key after you load from the database. I don't think data adapters set the primary key.
{ "language": "en", "url": "https://stackoverflow.com/questions/34407245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does the Open-Closed Principle decide which instance of an interface to use?
I am reading the Clean Architecture book, chapter 8 (page 72), The Open-Closed Principle. The chapter has a thought experiment about a system that displays financial data on a web page, and there is a requirement to show the data on a black-and-white printed page with proper page headers, page footers, etc. Uncle Bob says that the problem should be modeled as shown in the diagram. In the diagram the controller has no dependency on the Screen Presenter or Web Presenter, and it's easy to add another Presenter as well. Does this architecture mean that, for formatting the data in PDF format, I will have to initialize a new instance of the Financial Report Controller with the Print Presenter as one of the instance variables?
A: I will have to initialize a new instance of the Financial Report Controller with Print Presenter as one of the instance variables?
No. But you will have to pass the appropriate Print Presenter to the Financial Report Controller somehow. When you decide which one is appropriate doesn't have to be at initialization. You could pass it later with a setter. Or you could pass a collection of them to choose from. Or, like you said, create a new instance of the controller. They all work. Use what makes sense for your situation.
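To make the dependency direction concrete, here is a minimal sketch (in Python rather than the book's Java; the class and method names are illustrative, not from the book): the controller only knows the presenter interface, so adding a PDF or print presenter means writing one new class and passing it in, while the controller itself stays closed for modification.

```python
class FinancialReportController:
    def __init__(self, presenter):
        # The presenter is supplied from outside; the controller never
        # constructs a concrete presenter itself.
        self.presenter = presenter

    def report(self, data):
        return self.presenter.present(data)

class WebPresenter:
    def present(self, data):
        return "<p>{}</p>".format(data)

class PrintPresenter:
    def present(self, data):
        return "--- page header ---\n{}\n--- page footer ---".format(data)
```

Switching output formats is then a wiring decision (constructor argument, setter, or a lookup table of presenters), not a change to the controller.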
{ "language": "en", "url": "https://stackoverflow.com/questions/74763397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Angularjs push overwriting scope array
Every time someone clicks on an answer and thus executes data-ng-click="addAnswer(questionId, 0)", the $scope.answers array gets overwritten. I am unsure why it is doing this. Is it because every time the new template is loaded, the $scope gets reset? If so, that wasn't the behaviour I had expected. Thank you for any assistance.
index.html (abbreviated)
<div id="q" class="cta1_content ugh" data-ng-controller="testYourself">
  <div ng-view></div>
</div>
test-yourself.html
<div>
  <div class="row">
    <h1 class="text-center">{{question.name}}</h1>
    <div class="col-xs-6 col-md-3 col-md-offset-3 text-center yesno">
      <a href="#/test/{{questionId+1}}" class="q" data-ng-click="addAnswer(questionId, 1)">
        <span class="cta_next"><i class="icon ion-checkmark-round"></i></span>
      </a>
    </div>
    <div class="col-xs-6 col-md-3 text-center yesno">
      <a href="#/test/{{questionId+1}}" class="q" data-ng-click="addAnswer(questionId, 0)">
        <span class="cta_next"><i class="icon ion-close-round"></i></span>
      </a>
    </div>
  </div>
</div>
app.js
var calculonApp = angular.module('calculonApp', [
  'ngRoute',
  'calculonControllers',
  'ui.bootstrap.showErrors'
]);
calculonApp.config(['$routeProvider', function($routeProvider) {
  $routeProvider.
    when('/test/:questionId', {
      templateUrl: 'app/partials/test-yourself.html',
      controller: 'testYourself'
    }).
    otherwise({
      redirectTo: '/test/0'
    });
}]);
controller.js
calculonControllers.controller('testYourself', ['$scope', '$routeParams', function($scope, $routeParams) {
  $scope.quiz = [
    {name:"a", answer: [{0: '1.', 1: '2'}], weight:25},
    {name:"b", answer: [{0: '1', 1: '2'}], weight:25}
  ];
  $scope.question = $scope.quiz[$routeParams.questionId];
  $scope.questionId = parseInt($routeParams.questionId);
  $scope.answers = [];
  $scope.addAnswer = function(a) {
    $scope.answers.push({
      'question':$scope.questionId,
      'answer':a
    });
  };
}]);
A: You will need to create a service to keep track of the answers; yes, you are correct that when the route changes the answers array will be overwritten.
calculonApp.service('AnswerService', function() {
  var answers = [];
  this.addAnswers = function(questionId, a) {
    answers.push({
      'question': questionId,
      'answer': a
    });
  };
  return this;
});
calculonControllers.controller('testYourself', ['$scope', '$routeParams', 'AnswerService', function($scope, $routeParams, AnswerService) {
  $scope.quiz = [
    {name:"a", answer: [{0: '1.', 1: '2'}], weight:25},
    {name:"b", answer: [{0: '1', 1: '2'}], weight:25}
  ];
  $scope.question = $scope.quiz[$routeParams.questionId];
  $scope.questionId = parseInt($routeParams.questionId);
  $scope.addAnswer = function(a) {
    AnswerService.addAnswers($scope.questionId, a);
  };
}]);
{ "language": "en", "url": "https://stackoverflow.com/questions/26277438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CSS Flex-box justify-self / align-self not working
I have a component in React using Semantic UI React. I am trying to justify a single container to the flex-end but it's not working.
<Item key={item._id} id={item._id}>
  <img className="image" src={item.imageURL} alt="Project Canvas"/>
  <div className="content">
    <div className="title">{item.title}</div>
    <Item.Meta>
      {new Date(item.createdDate).toUTCString()}
    </Item.Meta>
    <Item.Description>{item.description}</Item.Description>
    <div className="footer">
      <Button primary floated='right' href={item.workURL} target="_blank">
        Go to application <Icon name='right chevron' />
      </Button>
      <Label color='teal' tag>
        React
      </Label>
    </div>
  </div>
</Item>
The component that I am trying to flex-end is the <div class="footer">.
My CSS:
.content{
  margin: 1vh !important;
  display: flex !important;
  flex-direction: column !important;
}
.footer{
  padding-top: 2vh !important;
  border-top: 1px solid rgba(0,173,181,1) !important;
  justify-self: flex-end !important;
  align-self: flex-end !important;
}
The justify-self and align-self don't work.
A: If you have defined your layout using display: flex, justify-self will be ignored, i.e. it will have no effect. It will only have an effect when you are using block or grid layout, or have positioned an element using absolute. You can read more on that here. With display: flex, the following properties are supported:
justify-content: flex-end; /* main (horizontal) axis when flex-direction is row */
align-items: flex-end; /* cross (vertical) axis when flex-direction is row */
So if you are trying to place the footer at the right-bottom of your parent container, i.e. content, try this:
.footer{
  padding-top: 2vh !important;
  border-top: 1px solid rgba(0,173,181,1) !important;
  justify-content: flex-end !important;
  align-items: flex-end !important;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/61767121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Generating a quiz in Emacs Lisp?
Forgive the "duplicate" question. I'd like to see this solved in Emacs Lisp too, and if I just tagged it for both topics, I probably would have only gotten one answer. The Emacs answer should be sufficiently different that it's probably worthwhile to have it.
I want to teach myself Spanish and I've got several word lists like the data shown below. How can I generate a quiz from the data that looks like this?
amarillo? [ ] blue [ ] yellow [ ] gray [ ] pink
azul? [ ] red [ ] blue [ ] green [ ] orange
. . .
verde? [ ] purple [ ] gold [ ] green [ ] black
The idea is to randomly include the answer with 3 randomly chosen incorrect answers. Ideally, the incorrect answers would not be too repetitive.
amarillo|yellow
azul|blue
blanco|white
dorado|golden
gris|gray
marrón|brown
naranja|orange
negro|black
oro|gold
púrpura|purple
rojo|red
rosa|pink
verde|green
A: Ok, so I'm assuming that you have the input in a file opened in an Emacs buffer.
(defun insert-quiz (a-buffer)
  (interactive "bBuffer name: ")
  (let* ((question-pairs (split-string (with-current-buffer a-buffer (buffer-string))))
         (quiz-answers (mapcar (lambda (x) (cadr (split-string x "|"))) question-pairs)))
    (insert (apply #'concat
                   (mapcar (lambda (x)
                             (let ((q-pair (split-string x "|")))
                               (make-question (car q-pair)
                                              (answers-list quiz-answers (cadr q-pair)))))
                           question-pairs)))))
insert-quiz is an interactive function that takes a buffer name, and uses the stuff in that buffer to generate a quiz for you, then inserts that quiz at point as a side-effect. It calls some smaller functions which I'll explain below.
(defun make-question (question answers)
  (apply #'format "%-16s[ ] %-16s[ ] %-16s[ ] %-16s[ ] %s \n"
         (append (list (concat question "?")) answers)))
make-question takes a question and a list of answers, and formats them as one line of the quiz.
(defun answers-list (quiz-answers right-answer)
  (replace (n-wrong-answers quiz-answers right-answer)
           (list right-answer)
           :start1 (random 3)))
answers-list takes a list of all possible answers in the quiz, and the right answer, and uses n-wrong-answers to create a list of four answers, one of which is the correct one.
(defun n-wrong-answers (answer-list right-answer &optional answers)
  (if (= 4 (list-length answers))
      answers
    (n-wrong-answers answer-list right-answer
                     (add-to-list 'answers (random-wrong-answer answer-list right-answer)))))
n-wrong-answers takes a list of all possible answers in the quiz, and the right answer, then uses random-wrong-answer to return a list of four unique incorrect answers.
(defun random-wrong-answer (answer-list right-answer)
  (let ((gen-answer (nth (random (list-length answer-list)) answer-list)))
    (if (and gen-answer (not (string= gen-answer right-answer)))
        gen-answer
      (random-wrong-answer answer-list right-answer))))
Finally, at the lowest level, random-wrong-answer takes a list of all possible answers in the quiz, and returns a single wrong answer.
After you load the above functions into Emacs, use M-x insert-quiz and type the name of the buffer you have your input loaded into (you'll get tab completion). It wouldn't be too difficult to change the insert-quiz function so that it takes a filename rather than an open buffer-name. The input you list above will yield:
amarillo? [ ] yellow [ ] orange [ ] gray [ ] red
azul? [ ] gold [ ] purple [ ] blue [ ] orange
blanco? [ ] pink [ ] red [ ] white [ ] black
dorado? [ ] yellow [ ] golden [ ] red [ ] orange
gris? [ ] red [ ] pink [ ] gray [ ] green
marrón? [ ] brown [ ] yellow [ ] white [ ] golden
naranja? [ ] orange [ ] gold [ ] black [ ] golden
negro? [ ] pink [ ] black [ ] blue [ ] white
oro? [ ] red [ ] gold [ ] purple [ ] brown
púrpura? [ ] purple [ ] orange [ ] gray [ ] black
rojo? [ ] gray [ ] red [ ] black [ ] pink
rosa? [ ] red [ ] green [ ] pink [ ] yellow
verde? [ ] green [ ] purple [ ] red [ ] brown
Hope that helps.
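For comparison (not part of the original answer), the same quiz-building idea -- pair each right answer with three distinct wrong answers drawn from the other translations, then shuffle -- is a short sketch in Python:

```python
import random

def make_quiz(pairs, n_choices=4, rng=random):
    """pairs: list of (word, translation). Returns (question, choices) tuples."""
    translations = [t for _, t in pairs]
    quiz = []
    for word, right in pairs:
        # three distinct wrong answers, never the right one
        wrong = rng.sample([t for t in translations if t != right], n_choices - 1)
        choices = wrong + [right]
        rng.shuffle(choices)  # unlike the Elisp above, the right answer can land in any slot
        quiz.append((word + "?", choices))
    return quiz
```

Passing a seeded random.Random as rng makes the quiz reproducible, which the interactive Elisp version doesn't attempt.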
{ "language": "en", "url": "https://stackoverflow.com/questions/2264286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to delay curl request
I'd like to add sleep to this request so as not to stress the server with too many requests at a go. I've tried adding sleep but I don't get the expected behaviour. The help is appreciated.
xargs -I{} curl --location --request POST 'https://g.com' \
--header 'Authorization: Bearer cc' \
--header 'Content-Type: application/json' \
--data-raw '{ "c_ids": [ "{}" ] }' '; sleep 5m' < ~/recent.txt
A: Putting the sleep in the xargs is a bit wonky. I advise the following approach, which is more likely to supply the desired result.
#!/bin/sh
siteCommon="--location --request POST 'https://g.com' \
  --header 'Authorization: Bearer cc' \
  --header 'Content-Type: application/json' "

while read -r line
do
  eval curl ${siteCommon} --data-raw \'{ \"c_ids\": [ \"${line}\" ] }\'
  sleep 5m
done < ~/recent.txt
A: Escaping arbitrary strings into valid JSON is a job for jq. If you don't have a particular reason to define the curl args outside your loop:
while IFS= read -r json; do
  curl \
    --location --request POST 'https://g.com' \
    --header 'Authorization: Bearer cc' \
    --header 'Content-Type: application/json' \
    --data-raw "$json"
  sleep 5m
done < <(jq -Rc '{"c_ids": [ . ]}' recent.txt)
...or if you do:
curl_args=(
  --location --request POST 'https://g.com'
  --header 'Authorization: Bearer cc'
  --header 'Content-Type: application/json'
)
while IFS= read -r json; do
  curl "${curl_args[@]}" --data-raw "$json"
  sleep 5m
done < <(jq -Rc '{"c_ids": [ . ]}' recent.txt)
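If the shell quoting gets tiresome, the same throttle-and-POST loop is easy to sketch in Python. This mirrors the question's payload shape (the endpoint and bearer token are the question's placeholders); the actual HTTP call is left as an injectable callable so you can plug in urllib or requests:

```python
import json
import time

def post_ids(path, send, delay=300):
    """POST one JSON payload per id listed in `path`, sleeping `delay`
    seconds between calls (300 s mirrors the question's `sleep 5m`).

    `send` is any callable taking (payload, headers) -- e.g. a small
    wrapper around requests.post("https://g.com", ...).
    """
    headers = {
        "Authorization": "Bearer cc",
        "Content-Type": "application/json",
    }
    with open(path) as fh:
        for line in fh:
            cid = line.strip()
            if not cid:
                continue
            # json.dumps handles the quoting that was painful in the shell
            send(json.dumps({"c_ids": [cid]}), headers)
            time.sleep(delay)
```

Because the sender is injected, you can also dry-run it (collect the payloads instead of sending) before pointing it at a real server.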
{ "language": "en", "url": "https://stackoverflow.com/questions/74136847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to hide those .exe and .o created by codeblocks
It is kinda annoying to have all those .exe and .o files lying around, but I only want to hide the .exe created by Code::Blocks, not those of other apps, or set the result of the compilation to another folder. Is that possible?
{ "language": "en", "url": "https://stackoverflow.com/questions/44225816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What does the "$" behind the class name mean when doing a Dump Java Heap in Android Studio?
Sorry, I can't upload an image... So when I do a dump I can see, for some of my classes:
myActivity---------------------- 1 (total count)--- 1 (heap count)
myActivity$1-------------------- 1 (total count)--- 1 (heap count)
myActivity$2-------------------- 1 (total count)--- 1 (heap count)
I have a count of 1, but I can see my class 3 times... Is it a memory leak or something?
And another question... I'm doing robustness tests, and maybe it's a stupid question... I'm entering and exiting an activity many times. I can see the count increase. But when the garbage collector decides to pass (or when I launch it in Android Studio...), the occurrences disappear and I can see only one. So that seems to be normal. BUT in this gap, when the garbage collector has not passed, my app is vulnerable to an out of memory depending on how the users manipulate it. How do I prevent this behavior?
And a last question... Is 100 MB (in the Android Studio monitor...) of RAM too much for an app?
Thanks!
A: These usually refer to anonymous inner classes.
{ "language": "en", "url": "https://stackoverflow.com/questions/36000897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Selecting and Storing DIVs on ReactJs
I have an App in which you select some products (DIVs) that change their border color (with state). I would like to know how I can store these onClick actions, so that when people click on another link and come back, the same DIVs are selected. I know I can use Cookies or Sessions, but how to identify each DIV on React and make them auto-select once you refresh the page, for example? How do you guys manage this? Thank you.
A: If you want to keep state after reloads you might want to take a look at HTML Web Storage.
A: In order of preference I would use:
1) If you are on react 16.3 or greater, use the react context api
2) If you are not on 16.3 or greater, you can use a library such as redux or flux
3) you can use HTML local storage
Here is more info on Redux vs context api: https://daveceddia.com/context-api-vs-redux/
Web storage is the least desirable because it doesn't enforce any rules around state management like the other options do.
A: Here, an example with local storage:
import React from 'react';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = { list: null };
  }

  onSearch = (e) => {
    e.preventDefault();
    const { value } = this.input;
    if (value === '') {
      return;
    }
    const cached = localStorage.getItem(value);
    if (cached) {
      this.setState({ list: JSON.parse(cached) });
      return;
    }
    fetch('https://search?query=' + value)
      .then(response => response.json())
      .then(result => {
        localStorage.setItem(value, JSON.stringify(result.list));
        this.setState({ list: result.list });
      });
  }

  render() {
    return (
      <div>
        <form type="submit" onSubmit={this.onSearch}>
          <input type="text" ref={node => this.input = node} />
          <button type="button">Search</button>
        </form>
        {
          this.state.list &&
          this.state.list.map(item => <div key={item.objectID}>{item.title}</div>)
        }
      </div>
    );
  }
}
A: From your question, it sounds like you
*have some products on your page that you're representing as <div> elements
*are changing their border on
click
*want them to show as selected when the user refreshes the page
React is about the presentation of some data, but doesn't decide how you get that data onto the page. It sounds like you want to store the list of the products selected somewhere, then load that list onto the page again when the user refreshes. The Web Storage api might be helpful, but cookies and sessions could do the same thing. You need to
*choose what to store (probably a list of product ids)
*choose where to store it (localStorage, cookie, server, or maybe in the url with https://reach.tech/router)
*when your react page loads (componentDidMount for some component), read the list from localstorage into your state
*match your list of 'selected products' to your individual products in render
So, if you've loaded the list from one of the storage options into your state as selectedProductIds and your list of products is in state as products:
isSelected = (product) =>
  this.state.selectedProductIds.includes(product.id)

render() {
  return <section>
    {this.state.products.map((product) =>
      <div className={this.isSelected(product) ? 'selected item' : 'item'}>
        {product.name}
      </div>
    )}
  </section>
}
Keeping React state in sync with some other storage mechanism can get pretty messy.
{ "language": "en", "url": "https://stackoverflow.com/questions/52953981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Open and show windows command prompt to user inside Qt application
How would I go about opening the windows command prompt, then sending a command to it and showing it to the user inside my Qt application? I know you can send commands to the command prompt and get output behind the scenes without showing the command prompt to the user, but I want the user to be able to interact with the command prompt window and send their own commands.
A: Found what I needed:
QProcess CommandPrompt;
QStringList Arguments;
Arguments << "/K" << "echo" << "hello";
CommandPrompt.startDetached("cmd", Arguments);
{ "language": "en", "url": "https://stackoverflow.com/questions/22610590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unable to start sonarqube 6.1
Installed sonarqube 6.1, set JAVA_HOME as C:\JDK\jdk1.8.0_92 in the StartSonar.bat file, and also added C:\JDK\jdk1.8.0_92\bin to the path variable. When trying to start, I get the below error:
Setting JAVA_HOME
PATH: C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\TortoiseSVN\bin;C:\svnrepository\bin;C:\SVN\bin;C:\apache-maven-3.0.4\bin;C:\Sonar\sonar-runner-2.0\bin;C:\JDK\jdk1.7.0_60\bin;C:\ANT\ant-1.8.2\bin;C:\Maven\apache-maven-3.0.4\bin;C:\JDK\jdk1.8.0_92/bin;
JAVA_HOME: C:\JDK\jdk1.8.0_92
wrapper | --> Wrapper Started as Console
wrapper | Launching a JVM...
jvm 1 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
jvm 1 | Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
jvm 1 |
jvm 1 | WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
jvm 1 |
jvm 1 | WrapperSimpleApp Usage:
jvm 1 | java org.tanukisoftware.wrapper.WrapperSimpleApp {app_class} [app_arguments]
jvm 1 |
jvm 1 | Where:
jvm 1 | app_class: The fully qualified class name of the application to run.
jvm 1 | app_arguments: The arguments that would normally be passed to the
jvm 1 | application.
jvm 1 | Picked up _JAVA_OPTIONS: -Xms1024m -Xmx2048m
wrapper | <-- Wrapper Stopped
A: It looks like you're trying to run a class compiled with Java 8 on an older version of the JVM. Is that Tanuki wrapper honouring the JAVA_HOME variable that you set? What happens if you run it without going through the wrapper? See here: How to fix java.lang.UnsupportedClassVersionError: Unsupported major.minor version
Edit: Also, I see that your path refers to both JDK 1.7 and JDK 1.8. I would try to remove the reference to JDK 1.7 to see if that makes a difference.
{ "language": "en", "url": "https://stackoverflow.com/questions/41095952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Count unsigned integers less than a given unsigned integer with same number of set bits
Is there an efficient way to count all unsigned integers less than a given unsigned integer with the same amount of set bits?
What I've tried so far
Given an unsigned integer represented as a list of set bits in descending order, I can use colex functions to count smaller unsigned integers with the same number of set bits. I've implemented both a recursive process (to show the logic) and an iterative process below in Python 3. The idea came from this paper. I'd rather have a solution that didn't rely on the bits being sorted.
from math import factorial

def n_choose_k(n, k):
    return 0 if n < k else factorial(n) // (factorial(k) * factorial(n - k))

def indexset_recursive(bitset, lowest_bit=0):
    """Return number of bitsets with same number of set bits but less than given bitset.

    Args:
        bitset (sequence) - Sequence of set bits in descending order.
        lowest_bit (int) - Name of the lowest bit. Default = 0.

    >>> indexset_recursive([51, 50, 49, 48, 47, 46, 45])
    133784559
    >>> indexset_recursive([52, 51, 50, 49, 48, 47, 46], lowest_bit=1)
    133784559
    >>> indexset_recursive([6, 5, 4, 3, 2, 1, 0])
    0
    >>> indexset_recursive([7, 6, 5, 4, 3, 2, 1], lowest_bit=1)
    0
    """
    m = len(bitset)
    first = bitset[0] - lowest_bit
    if m == 1:
        return first
    else:
        t = n_choose_k(first, m)
        return t + indexset_recursive(bitset[1:], lowest_bit)

def indexset(bitset, lowest_bit=0):
    """Return number of bitsets with same number of set bits but less than given bitset.

    Args:
        bitset (sequence) - Sequence of set bits in descending order.
        lowest_bit (int) - Name of the lowest bit. Default = 0.

    >>> indexset([51, 50, 49, 48, 47, 46, 45])
    133784559
    >>> indexset([52, 51, 50, 49, 48, 47, 46], lowest_bit=1)
    133784559
    >>> indexset([6, 5, 4, 3, 2, 1, 0])
    0
    >>> indexset([7, 6, 5, 4, 3, 2, 1], lowest_bit=1)
    0
    """
    m = len(bitset)
    g = enumerate(bitset)
    return sum(n_choose_k(bit - lowest_bit, m - i) for i, bit in g)
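A direct-on-the-integer variant (a sketch, using the same colexicographic ranking as indexset above) avoids the pre-sorted bit list entirely: scan the set bits in ascending order and let the j-th lowest set bit b contribute C(b, j):

```python
from math import comb  # Python 3.8+; equivalent to n_choose_k above

def rank_same_popcount(x):
    """Number of integers below x with the same number of set bits."""
    bits = [i for i in range(x.bit_length()) if (x >> i) & 1]
    # colex rank: the j-th lowest set bit b (1-indexed) contributes C(b, j)
    return sum(comb(b, j) for j, b in enumerate(bits, start=1))
```

For example, rank_same_popcount(0b1111111) is 0 (it is the smallest value with seven set bits), and the bits {51, ..., 45} give 133784559, matching the doctests above.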
{ "language": "en", "url": "https://stackoverflow.com/questions/35662746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: audit_token_to_pid Undefined symbol I am trying to use audit_token_to_pid in Objective-C code. I have included #import <bsm/libbsm.h>, but when I build the project I am seeing the following build error:
Undefined symbol: _audit_token_to_pid
Maybe I require some library, but I am not sure how to resolve this?
A: #import is not enough. Read What is the difference between include and link when linking to a library? and search for similar questions. You have to link the binary with this library: Select your target, switch to Build Phases and add libbsm to the Link Binary with Libraries section. Or add -l bsm to the clang command line options.
{ "language": "en", "url": "https://stackoverflow.com/questions/63315985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: File path in asp.net mvc application with jquery file tree I'm trying to get a grip on the jquery file tree plugin, and I have a problem with file paths. The thing is, the jquery call sets a root directory, and if it is set to "/" I want it to be a path in my server directory. So I set this in the server code that the jquery code interacts with. Here's the jquery call:
<script type="text/javascript">
    $(document).ready(function () {
        root: "/", //Have made sure that the HomeController makes this correspond to the server application path or subfolder.
        $('#result').fileTree({
            script: 'Home/JqueryFileTree',
            expandSpeed: 1000,
            collapseSpeed: 1000,
            multiFolder: false
        }, function (file) {
            alert(file); //This shows the name of the file if you click it
        });
    });
</script>
And here's how I set the root "/" to correspond to a location in my web application:
if (Request.Form["dir"] == null || Request.Form["dir"].Length <= 0 || Request.Form["dir"] == "/")
    dir = Server.MapPath(Request.ApplicationPath); //Works but creates a strange mix of slashes and backslashes...
else
    dir = Server.UrlDecode(Request.Form["dir"]);
This works fine as far as getting the file tree to show the correct file tree. But the problem is, when I click a file, and the alert function is called in the jquery, the file path shown by the alert box is a mixture of a windows path (the one specified as root above), and a url (the relative end part of the path). E.g. c:\my documents\visual studio\MvcApplication\FileArea/Public/file.txt. If I had specified the root in the server code to be "/", as it was in the original script sample from jquery file tree, I would only get the last relative part in the alert box. Also, in the file tree generated I got the root of my c: drive, not the root of the web application... But now when I specify a path relative to my web application, I get this mixture of a whole absolute path.
Since I want to be able to grab this path and do stuff to the file, I anticipate this path will be a problem. So what's going on here, why does the path end up like that and how can I fix it? I have no idea how to specify the path relative to my web application in the jquery, so doing it in the server code was the only thing I could think of. In any case, I guess it's good I got a whole absolute path anyway, as long as I can fix it so that it uses one format. But can anyone tell me how?
EDIT: I thought I'd post the actual jquery fileTree code as well if that helps:
// if (jQuery) (function ($) { $.extend($.fn, { fileTree: function (o, h) { // Defaults if (!o) var o = {}; if (o.root == undefined) o.root = '/'; if (o.script == undefined) o.script = 'jqueryFileTree.php'; if (o.folderEvent == undefined) o.folderEvent = 'click'; if (o.expandSpeed == undefined) o.expandSpeed = 500; if (o.collapseSpeed == undefined) o.collapseSpeed = 500; if (o.expandEasing == undefined) o.expandEasing = null; if (o.collapseEasing == undefined) o.collapseEasing = null; if (o.multiFolder == undefined) o.multiFolder = true; if (o.loadMessage == undefined) o.loadMessage = 'Loading...'; $(this).each(function () { function showTree(c, t) { $(c).addClass('wait'); $(".jqueryFileTree.start").remove(); $.post(o.script, { dir: t }, function (data) { $(c).find('.start').html(''); $(c).removeClass('wait').append(data); if (o.root == t) $(c).find('UL:hidden').show(); else $(c).find('UL:hidden').slideDown({ duration: o.expandSpeed, easing: o.expandEasing }); bindTree(c); }); } function bindTree(t) { $(t).find('LI A').bind(o.folderEvent, function () { if ($(this).parent().hasClass('directory')) { if ($(this).parent().hasClass('collapsed')) { // Expand if (!o.multiFolder) { $(this).parent().parent().find('UL').slideUp({ duration: o.collapseSpeed, easing: o.collapseEasing }); $(this).parent().parent().find('LI.directory').removeClass('expanded').addClass('collapsed'); } $(this).parent().find('UL').remove(); // cleanup showTree($(this).parent(), escape($(this).attr('rel').match(/.*\//))); $(this).parent().removeClass('collapsed').addClass('expanded'); } else { // Collapse $(this).parent().find('UL').slideUp({ duration: o.collapseSpeed, easing: o.collapseEasing }); $(this).parent().removeClass('expanded').addClass('collapsed'); } h($(this).attr('rel')); //Testing how to get the folder name to display... Works fine. 
} else { h($(this).attr('rel')); //Calls the callback event in the calling method on the page, with the rel attr as parameter } return false; }); // Prevent A from triggering the # on non-click events if (o.folderEvent.toLowerCase != 'click') $(t).find('LI A').bind('click', function () { return false; }); } //ASN: I think it starts here, the stuff before are just definitions that need to be called here. // Loading message $(this).html('<ul class="jqueryFileTree start"><li class="wait">' + o.loadMessage + '<li></ul>'); // Get the initial file list showTree($(this), escape(o.root)); }); } }); })(jQuery); So, long story short: I don't get how the file paths work here. Just specifying "/" as the root seems to work as a relative path (since the alert box then shows only a relative path), but it gives me a file tree of the root (c:) of my computer. So how do I work with this to use the relative path of my web application instead, and still get proper paths that I can work with? Any help appreciated!
{ "language": "en", "url": "https://stackoverflow.com/questions/3485773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to check whether one branch is an ancestor of the other in Git I have two Git branches A and B. How to check whether A is an ancestor of B, or vice versa? git merge-base does give the common ancestor. However I'd like to know whether there are even better solutions.
A: git merge-base --is-ancestor A B
if [ $? -eq 0 ]
then
    # it's an ancestor
else
    # it's not an ancestor
fi
This is obviously working on the commits that the branches point to. Git doesn't really track branch lineage the way something like ClearCase does though, so it's quite possible that you could have had A first, then branched off B, and then as a result of some merging end up with B as an ancestor of A.
{ "language": "en", "url": "https://stackoverflow.com/questions/26103673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Reorder LINQ result (take 1 row and move it to top) when a condition match I have a LINQ statement that generates an anonymous type, for example: BookID, AuthorID, [Authors]
Authors returns an IEnumerable which also contains many authors; it has 2 columns: AuthorID and AuthorName. For example:
1 | 32 | 12, Author 1
         20, Author 3
         32, Author 19
How can I re-order the Authors object so that [32, Author 19] is on top, as:
1 | 32 | 32, Author 19
         12, Author 1
         20, Author 3
Thank you very much, Kenny.
A: As Alex says, you'll just recreate the anonymous type. To get a specific author to the top of the list, you can use the orderby clause (or OrderBy extension method), which, I think, is a bit easier than using Where and Union:
new {
    ...
    Authors = from a in record.Authors
              orderby a.AuthorID == 32 descending
              select a
};
The only trick is that you can use a boolean value (AuthorID == 32) as a key for the ordering. In this way, you'll first get all elements for which the predicate returns true (the one with ID=32) and then all other values (for which the predicate returned false).
A: You can just recreate an object of the anonymous type, because its read-only properties can not be changed. Something like this:
record = new {
    record.BookId,
    record.AuthorId,
    Authors = record.Authors.Where(author => author.Id == 32).Union(
              record.Authors.Where(author => author.Id != 32))
};
{ "language": "en", "url": "https://stackoverflow.com/questions/2061478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: I am getting Microsoft jscript runtime error when initialising it on command prompt Being new here I am not allowed to attach an image, so here is the link to the image for more clarity on my problem. Script code:
var http = require('http');
var dt = require('./myfirstmodule');

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.write("The date and time is currently: " + dt.myDateTime());
    res.end();
}).listen(8080);
A: As I mentioned earlier, there is no issue with your code. When executing an application created using the http server for the first time on the Windows platform, you will get the firewall dialog shown in the figure below. It's better to check Private Network and then click Allow access. If you fail to confirm from Windows Firewall to open the port, you will get the above error.
{ "language": "en", "url": "https://stackoverflow.com/questions/49455948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: formula to count how many instances of a row occur with one cell containing a string and the other having its entire content equal to another string I would like to count the number of times that the cell BN in my spreadsheet contains the string "FP_T" AND the cell BO of the same row contains the string "Step3CallerAndCalleeClassTracesImpliesMethodTracePattern". In other words, I would like to count how many times a row like the one highlighted in yellow and shown in the following picture occurs: I tried to use the formula: =COUNTIFS($BN$:$BN$,"FP_T",$BO$:$BO$,"Step3CallerAndCalleeClassTracesImpliesMethodTracePattern") but it's not working. Note that the cell BN contains the string that I am searching for (FP_T) while the cell BO has its entire content equal to the string that I am searching for. A: You can use SUMPRODUCT: =SUMPRODUCT(ISNUMBER(SEARCH("/FP_T",$C$2:$C$4))*($D$2:$D$4="Step3CallerAndCalleeClassTracesImpliesMethodTracePattern"))
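For what it's worth, the COUNTIFS route can likely be kept as well: a whole-column reference is written $BN:$BN (the $BN$:$BN$ form in the attempt above is not valid syntax), and a text criterion may use * wildcards for a "contains" test. This assumes the BN values are stored as text, since wildcards do not match numeric cells:

```
=COUNTIFS($BN:$BN,"*FP_T*",$BO:$BO,"Step3CallerAndCalleeClassTracesImpliesMethodTracePattern")
```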
{ "language": "en", "url": "https://stackoverflow.com/questions/61754805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: winapi - display int variable value in MessageBox using macros For debugging purposes I am trying to make a short macro to display various types, instead of constantly copying all the MessageBox functions' params. For strings I have the following macro:
#define DEBUG(x) MessageBox(NULL, x,"DEBUG",MB_ICONINFORMATION|MB_OK);
Calling it works great, whether I pass a variable (array of char) or a direct string. Now, I try to make the same thing for int. I have defined macros like this:
#define STRIGIFY(x) #x
#define TOSTRING(x) STRIGIFY(x)
#define DEBUGINT(x) DEBUG(TOSTRING(x))
It works only in case I pass a direct integer value:
DEBUGINT(742);
However if I pass an int variable, MessageBox displays the variable name instead of its value:
int count = 3;
DEBUGINT(count);
The thing I find pretty interesting is that I can pass literally anything to the DEBUGINT macro and it will still work:
DEBUGINT(some unescaped string)
How do I define a macro that would use a variable's value instead of its name?
A: This doesn't answer the question as it was asked, but I'll risk my reputation and suggest a different solution. PLEASE, do yourself a favor and never use MessageBox() or other modal UI to display debug information. If you do want to interrupt program execution at that point, use the breakpoint; it also allows you to attach a condition, so that you don't need to examine the value manually. If you do not want the interruption, just print the value to a debug output window using ::OutputDebugString(). That can be seen in the debugger if it is attached, or via the DebugView tool. Another small suggestion (for Visual Studio users): if you prepend your output with a source file name and the code line number, double-clicking on that line in the output window will take you straight to that line. Just use __FILE__ and __LINE__ in your formatted string.
A: You can't.
The preprocessor doesn't know anything about variables or their values, because it does nothing at run time; it only works at compile time.
A: You can use a variable argument list:
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <windows.h>

void message(const char* format, ...)
{
    int len;
    char *buf;
    va_list args;
    va_start(args, format);
    len = _vscprintf(format, args) + 1; //add room for terminating '\0'
    buf = (char*)malloc(len * sizeof(char));
    vsprintf_s(buf, len, format, args);
    va_end(args);
    MessageBoxA(0, buf, "debug", 0);
    //OutputDebugStringA(buf);
    free(buf);
}

message("test %s %d %d %d", "str", 1, 2, 3);
You might also want to change to the Unicode version.
A: You need to "print" the variable to a buffer (array of char) using something like sprintf (or snprintf in VS 2015) and pass the resulting output to MessageBox as the string to be displayed.
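A middle ground between the stringizing macro and the variadic function is a variadic macro (C99) that formats with snprintf and then hands the buffer to the message sink. This is a sketch: the names DEBUG_FMT/DEBUGF and the 256-byte buffer are arbitrary choices, and fputs stands in for MessageBoxA so the example also compiles off Windows.

```c
#include <stdio.h>

/* Format like printf into a caller-supplied array; sizeof only works here
   because buf must be a real array, not a pointer. */
#define DEBUG_FMT(buf, ...) snprintf((buf), sizeof (buf), __VA_ARGS__)

/* One-liner call sites, e.g. DEBUGF("count = %d", count);
   On Windows, replace the fputs/fputc pair with
   MessageBoxA(NULL, dbg_buf_, "DEBUG", MB_ICONINFORMATION | MB_OK); */
#define DEBUGF(...) do {                          \
        char dbg_buf_[256];                       \
        DEBUG_FMT(dbg_buf_, __VA_ARGS__);         \
        fputs(dbg_buf_, stdout);                  \
        fputc('\n', stdout);                      \
    } while (0)
```

With the int from the question, DEBUGF("count = %d", count); expands the value at run time, which is what the stringizing approach cannot do.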
{ "language": "en", "url": "https://stackoverflow.com/questions/33829374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Patch (findByIdAndUpdate) in Mongoose I have to do a simple CRUD in a Node/Mongoose API. I also have to implement PUT and PATCH, and while doing PATCH I'm a bit confused, because I wanted to use the findByIdAndUpdate method, but in the examples I see, only a single field is updated, e.g. {name: "newName"}. I was wondering what I should do if, for example, I wanted to update two fields?
My schema:
const userSchema = new Schema({
    login: String,
    email: String,
    registrationDate: Date,
});
My PATCH code:
router.patch('/:id', async (req, res) => {
    const id = req.params.id;
    User.findByIdAndUpdate(id, { ??? }, function (err, docs) {
        if (err) {
            console.log(err)
        } else {
            console.log("Updated User : ", docs);
        }
    });
});
I don't know what I should write in "???", because what if I wanted to update only the login, and what if I wanted to update the name and email? Maybe I'm wrong and PATCH is used only to edit ONE field and in other cases I should use PUT?
Edit: I made it work using something like this:
router.patch('/:id', async (req, res) => {
    const id = req.params.id;
    let updates = {}
    if (req.body.login) {
        updates["login"] = req.body.login
    }
    if (req.body.email) {
        updates["email"] = req.body.email
    }
    if (req.body.registrationDate) {
        updates["registrationDate"] = req.body.registrationDate
    }
    User.findByIdAndUpdate(id, updates, function (err, docs) {
        if (err) {
            console.log(err)
        } else {
            console.log("Updated User : ", docs);
        }
    });
});
Anyway I have a question: what should I do to "stop" the action? I'm using HTTPie and when I write http PATCH......, it seems like I can't write anything else, because it's still working; I need to do CTRL+C to stop the query.
A: Just write the fields you want to change and the new values separated by commas.
const id = req.params.id;
const email = "newemail";
const date = "1991/13/01";
const user = await User.findByIdAndUpdate(
    { _id: id },
    { email, registrationDate: date },
    { upsert: false }
);
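A compact alternative to the chain of ifs in the edit is to filter req.body down to an allow-list of schema fields. pickDefined is a hypothetical helper name; the field list mirrors the schema above:

```javascript
// Keep only the allowed keys that are actually present in the request body,
// so a single PATCH route can update any subset of fields.
function pickDefined(body, allowed) {
  const updates = {};
  for (const key of allowed) {
    if (body[key] !== undefined) updates[key] = body[key];
  }
  return updates;
}

// Sketch of the route handler:
// const updates = pickDefined(req.body, ['login', 'email', 'registrationDate']);
// const docs = await User.findByIdAndUpdate(id, updates, { new: true });
// res.json(docs);
```

As for the hanging HTTPie call: that is a separate issue — the handler in the edit never sends a response, so the client keeps waiting; ending the handler with something like res.json(docs) closes the request.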
{ "language": "en", "url": "https://stackoverflow.com/questions/69614196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating a regex to parse html to MXML syntax I searched a lot over Stack Overflow and found very interesting questions, including: How to create a Regular Expression for a span attribute? and Javascript regex to replace text div and &lt; &gt;. But it turns out that I couldn't manage my goal: to replace div with the data-type attribute and remove the data-type attribute from the strings. Here's what I did.
//Doesn't work with multi lines, just gets first occurrence and nothing more.
// Regex: /\s?data\-type\=(?:['"])?(\d+)(?:['"])?/
var source_code = $("body").html();
var rdiv = /div/gm; // remove divs
var mxml = source_code.match(/\S?data\-type\=(?:['"])?(\w+)(?:['"])?/);
var rattr = source_code.match(/\S?data\-type\=(?:['"])?(\w+)(?:['"])/gm);
var outra = source_code.replace(rdiv, 's:' + mxml[1]);
var nestr = outra.replace(rattr[0], ''); // worked with only first element
console.log(nestr);
console.log(mxml);
console.log(rattr);
Over this HTML sample page:
<div id="app" data-type="Application">
    <div data-type="Label"></div>
    <div data-type="Button"></div>
    <div data-type="VBox"></div>
    <div data-type="Group"></div>
</div>
Any light on that specific thing? I may be missing something, but I really have no clue; there's nothing left other than asking here. I've created a jsFiddle to show it, just open the console of the browser to see the results I have with me.
http://jsfiddle.net/uWCjV/
Feel free to answer over jsFiddle or with a better explanation of my regex and why it fails. Until I get any feedback, I will keep trying to see if I can manage to replace the text. Thanks in advance.
A: It would probably be easier to parse the markup into a tree of Objects and then convert that into MXML.
Something like this:
var source_code = $("body").html();

var openStartTagRx = /^\s*<div/i;
var closeStartTagRx = /^\s*>/i;
var closeTagRx = /^\s*<\/div>/i;
var attrsRx = new RegExp(
    '^\\s+' +
    '(?:(data-type)|([a-z-]+))' + // group 1 is "data-type", group 2 is any attribute
    '\\=' +
    '(?:\'|")' +
    '(.*?)' + // group 3 is the data-type or attribute value
    '(?:\'|")',
    'mi');

function Thing() {
    this.type = undefined;
    this.attrs = undefined;
    this.children = undefined;
}

Thing.prototype.addAttr = function (key, value) {
    this.attrs = this.attrs || {};
    this.attrs[key] = value;
};

Thing.prototype.addChild = function (child) {
    this.children = this.children || [];
    this.children.push(child);
};

function getErrMsg(expected, str) {
    return 'Malformed source, expected: ' + expected + '\n"' + str.slice(0, 20) + '"';
}

function parseElm(str) {
    var result, elm, childResult;
    if (!openStartTagRx.test(str)) {
        return;
    }
    elm = new Thing();
    str = str.replace(openStartTagRx, '');
    // parse attributes
    result = attrsRx.exec(str);
    while (result) {
        if (result[1]) {
            elm.type = result[3];
        } else {
            elm.addAttr(result[2], result[3]);
        }
        str = str.replace(attrsRx, '');
        result = attrsRx.exec(str);
    }
    // close off that tag
    if (!closeStartTagRx.test(str)) {
        throw new Error(getErrMsg('end of opening tag', str));
    }
    str = str.replace(closeStartTagRx, '');
    // if it has child tags
    childResult = parseElm(str);
    while (childResult) {
        str = childResult.str;
        elm.addChild(childResult.elm);
        childResult = parseElm(str);
    }
    // the tag should have a closing tag
    if (!closeTagRx.test(str)) {
        throw new Error(getErrMsg('closing tag for the element', str));
    }
    str = str.replace(closeTagRx, '');
    return {
        str: str,
        elm: elm
    };
}

console.log(parseElm(source_code).elm);
jsFiddle
This parses the markup you provided into the following:
{
    "type" : "Application",
    "attrs" : { "id" : "app" },
    "children" : [
        { "type" : "Label" },
        { "type" : "Button" },
        { "type" : "VBox" },
        { "type" : "Group" }
    ]
}
It's recursive, so embedded groups are parsed, too.
{ "language": "en", "url": "https://stackoverflow.com/questions/16641498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Subversion upgrade from 1.6 to 1.7 I'm trying to upgrade Subversion from 1.6 to 1.7 and having issues. I'm using Linux Red Hat 5; I'm installing the SVN server only and not using Apache. I'm then accessing Subversion via Eclipse Kepler. I have used the RPM from CollabNet to upgrade my version of SVN and all appears to have worked. When I run svn --version, I get:
svn, version 1.7.16 (r1569520) compiled Apr 9 2014, 14:32:02
I have then checked out a few test branches from my repository but I'm not seeing any change or benefit. The working copy still has an .svn folder in every directory and still takes ages to check out or commit changes. Is there anything I'm missing from the install? I followed the instructions from CollabNet to the letter. Do I need to do anything to Eclipse to make it recognise 1.7? I should add this is on a test server running parallel to our live version of SVN. Eclipse has repositories from both servers. Apologies if you need more information; if you do, let me know and I will provide as needed. Thanks
A: Generally speaking, the server-side upgrade does not affect your client. The client is still based on Subversion 1.6. You have to upgrade the client to benefit from the client-side improvements. In other words, upgrade the svn plug-in that you use in Eclipse (Subclipse / Subversive or whatever you use in the IDE) to the latest version.
A: sudo yum update
sudo yum groupinstall "Development tools"
sudo yum groupinstall "Additional Development"
wget https://archive.apache.org/dist/subversion/subversion-1.7.8.tar.gz
tar zxvf subversion-1.7.8.tar.gz
cd subversion-1.7.8
./get-deps.sh
./configure
make
make check
sudo make install
On my system this seems to put the binary in /usr/local/bin/svn whereas the 1.6 binary is in /usr/bin/svn so you might need to set up an alias.
{ "language": "en", "url": "https://stackoverflow.com/questions/23292167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Embed Container View with Navigation Controller Programmatically I want to embed a navigation controller with the container view and use the same navigation controller to push some other view controller. Here is the workaround with the help of the storyboard. Can I embed the navigation controller with the Container View programmatically? I am able to add SecondViewController's content as a subview in the container view, but in that case my navigation controller will not work. In the BaseViewController I have added this code:
let secondViewController = self.storyboard?.instantiateViewController(withIdentifier: "SecondViewController")
let navigationController = CustomNavigation(rootViewController: secondViewController!)

// taking a navigation controller reference, so that I can use this to push other view controllers.
Helper.shared.customNavController = navigationController

self.addChildViewController(navigationController)
secondViewController?.view.frame = CGRect(x: 0, y: 0, width: containerView.frame.size.width, height: containerView.frame.size.height)
containerView.addSubview((secondViewController?.view)!)
It added the SecondViewController's content in the Container View. But using this navigation controller (Helper.shared.customNavController) I am not able to push any other view controller.
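For reference, the usual UIKit containment sequence adds the child navigation controller's own view to the container (rather than the root view controller's view) and finishes with didMove(toParentViewController:). A sketch reusing the names from the question (CustomNavigation, Helper, containerView), not verified against this particular project:

```swift
let secondViewController = storyboard?.instantiateViewController(withIdentifier: "SecondViewController")
let navigationController = CustomNavigation(rootViewController: secondViewController!)
Helper.shared.customNavController = navigationController

addChildViewController(navigationController)
// Add the navigation controller's view, not the root view controller's view,
// so the navigation stack (and its pushes) stays in the view hierarchy.
navigationController.view.frame = containerView.bounds
containerView.addSubview(navigationController.view)
navigationController.didMove(toParentViewController: self)
```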
{ "language": "en", "url": "https://stackoverflow.com/questions/50363385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Adding new Excel files to MS Access database as they come in I am in the situation where I have a questionnaire that is basically just a plain Excel spreadsheet with two columns:
*one column with the questions and
*a second column next to it where users can fill in their answers.
Each respondent has been sent a copy of the file and they will email back their files individually over a long time period. I can't wait until I have all files back; instead I would like to collect (and use) the data in Access as the files come in. Two questions:
*What is the best set-up in terms of the manual steps required when a new data file comes in? Can one just save the file in a specific folder and somehow have the column (column B) with responses "automatically" added to the main database? If not fully automatically, what could be done with just a few manual steps involved?
*I realize that the shape of the questionnaire is not ideal (variables are in rows, not in columns). What's the best way to deal with that?
Thanks in advance for any pointers!
PS: I'd be open to (simple) alternatives, if Access is not the best choice for this. Analysis of the data will be done in Excel again in the end.
Update, to clarify the questions below:
1) In the short to medium term, we are expecting 50-100 replies. In the long term, it will be more, as people will be asked to send updates when their situation changes; these will have to be added as new entries with a new date attached to them, i.e. it will be a continuous process with a few answers coming in every few weeks.
2) There are 80 questions on the questionnaire.
3) The Excel files come back as email attachments.
4) I was contemplating using Access, as I thought it would a) make it a bit cleaner and less error prone, especially as project managers might change in the future, b) allow for better handling of the data, as it will have to be mashed up and reshaped in different ways for the analysis (e.g.
it has to be un-pivoted, which I don't even know if Excel can do), and c) I thought it would give us more flexibility in the future when it comes to using different tools for analysis, i.e. each tool can just query the database. I am open to other suggestions, including Excel-only solutions, if that makes it easier, though.
5) I envision the base table to have all the 80 variables in different columns, and the answers as rows (i.e. each new column that comes with each Excel file will need to be transposed and added as a new row). There will be other data tables with the same primary key as the row identifier in this table.
6) I haven't worked on the analysis part yet, but I know that it will require a lot of reshaping and merging of data sets.
A: Answer 1 - Questions
You do not provide enough information to allow anyone to give you pointers. Some initial questions:
*How many questionnaires are you expecting: 10, 100, 1000?
*How many questions are there per questionnaire?
*How are the questionnaires reaching you? You say "email back". Does this mean as an attachment or as a table in the body of the email?
*You say the data is arriving as Excel files and you intend to do the analysis in Excel. Why are you storing the answers in Access? I am not saying you are wrong to store the results in Access; I just want to be convinced you have a reason.
*Have you designed the planned table structure for Access?
*Have you designed the structure of the Excel workbook(s) on which you will perform the analysis?
A: Answer 2
Firstly, I should say that I agree with Mat. I am not an expert on questionnaires but my understanding is that there are companies that will host online questionnaires and provide the results in a convenient form. Most of the rest of this answer assumes it is too late to consider an online questionnaire or you have, for whatever reason, rejected that approach. An Access project is, to a degree, self-documenting.
You can look at its list of tables and see that Table 1 has columns A, B and C. If created properly, you can see the relationships between tables. With an Excel workbook you just have a number of worksheets which can contain anything. There is no automatic documentation. However, with both Excel and Access the author can create complete documentation that explains each table, worksheet, report and macro. If this project is going to last indefinitely and have a succession of project managers, such documentation will be essential. I can tell you from bitter experience that trying to understand a complex Access project or Excel workbook that you have inherited without proper documentation is at best difficult and at worst impossible. Don't even start this unless you plan to create and maintain proper documentation. I do not mean: "We will knock up something when we have finished." Once it is finished, people will be moving on to their next projects and will have little time for boring stuff like documentation. After-the-event documentation also loses all the decisions and the reasons for those decisions. The next team is left wondering why their predecessors did it that way. The reason will not matter in many cases but I have seen a product destroyed by a new team removing "unnecessary complexity" they did not understand. I always kept a notebook in which I recorded what I was doing and why during the day. I encouraged my staff to do the same. I insisted on something for the project log every week. The level of detail depends on the project. The question I asked myself was: "If I had just inherited this project, what would I need to know about what happened during the last week?" This was in addition to an up-to-date specification for each component. Sorry, I will get off my hobby-horse.
"In the short to medium term, we are expecting 50-100 replies.
In the long term, it will be more, as people will be asked to send updates when their situation changes; these will have to be added as new entries with a new date attached to them."
If you are going to keep a history of answers then Access will probably be a better repository than Excel. However, who is going to maintain the Access project and the central Excel workbooks? Access does not operate in the same way as Excel. Access VBA is not quite the same as Excel VBA. This will not matter if you are employing professionals experienced in both Access and Excel. But if you are employing amateurs who are picking up the necessary skills on the job, then using both Access and Excel will increase what they have to learn and the likelihood that they will get confused. If there are only 100 people/organisations submitting responses, you could merge responses and maintain one workbook per respondent to create something like:
Answers -->
Question   1May2014   20Jun2014   7Nov2014
Aaaaaa     aa         bb         cc
Bbbbbb     dd         ee         ff
I am not necessarily recommending an Excel approach but it will have benefits in some circumstances. Personally, unless I was using professional programmers, I would start with an Excel-only solution until I knew why I needed Access.
"I envision the base table to have all the 80 variables in different columns, and the answers as rows (i.e. each new column that comes with each Excel file will need to be transposed and added as a new row)."
I interpret this to mean a row will contain:
*Respondent identifier
*Date
*Answer to Q1
*Answer to Q2
*: :
*Answer to Q80.
My Access is very rusty. Is there a way of accessing attribute "Answer to Q(n)" or are you going to need 80 statements to move answers in and out? I hope there is no possibility of new questions. I found updating the database when a row changed a pain.
I always favoured small rows such as:
*Respondent identifier
*Date
*Question number
*Answer
There are disadvantages to having lots of small rows but I always found the advantages outweighed them. Hope this helps.
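The "small rows" layout favoured above can be sketched concretely. SQLite stands in here for the Access/Jet table, and all names and sample values are illustrative:

```python
import sqlite3

# The "tall" layout: one row per (respondent, date, question).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE answers (
        respondent  TEXT,
        answered_on TEXT,
        question_no INTEGER,
        answer      TEXT,
        PRIMARY KEY (respondent, answered_on, question_no)
    )
""")
rows = [
    ("Aaaaaa", "2014-05-01", 1, "aa"),
    ("Aaaaaa", "2014-05-01", 2, "bb"),
    ("Bbbbbb", "2014-06-20", 1, "dd"),
]
con.executemany("INSERT INTO answers VALUES (?, ?, ?, ?)", rows)

# No per-question columns are needed: any of the 80 questions is reached
# by number, and the latest update per respondent falls out of ORDER BY.
def answer_for(respondent, question_no):
    cur = con.execute(
        "SELECT answer FROM answers "
        "WHERE respondent = ? AND question_no = ? "
        "ORDER BY answered_on DESC LIMIT 1",
        (respondent, question_no),
    )
    row = cur.fetchone()
    return row[0] if row else None
```

The same single query pattern replaces the 80 move-in/move-out statements the wide layout would need.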
{ "language": "en", "url": "https://stackoverflow.com/questions/23081112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Portainer compose error failed to deploy a stack volumes must be a string, number, boolean or null

I am trying to deploy the below stack in Portainer.io.

version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "admin"
      DB_MYSQL_PASSWORD: "adminpwd"
      DB_MYSQL_NAME: "nginx"
      volumes:
        - '/mnt/nginx/data:/data'
        - '/mnt/nginx/letsencrypt:/etc/letsencrypt'
  db:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'adminpwd'
      MYSQL_DATABASE: 'nginx'
      MYSQL_USER: 'admin'
      MYSQL_PASSWORD: 'adminpwd'
    volumes:
      - '/mnt/nginx/data/mysql:/var/lib/mysql'

Issue: But I am getting the below error,

Deployment error
failed to deploy a stack: services.app.environment.volumes must be a string, number, boolean or null

Question: I tried to change the format of volumes to different things but with no luck. What is wrong with this compose?

A: In the app service, volumes is indented one level too deep, so YAML parses it as a key inside the environment mapping; that is why the error path is services.app.environment.volumes. volumes is a list that belongs at the service level. Un-indent it so it lines up with environment, as in your db service, and it should work:

version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "admin"
      DB_MYSQL_PASSWORD: "adminpwd"
      DB_MYSQL_NAME: "nginx"
    volumes:
      - '/mnt/nginx/data:/data'
      - '/mnt/nginx/letsencrypt:/etc/letsencrypt'
  db:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'adminpwd'
      MYSQL_DATABASE: 'nginx'
      MYSQL_USER: 'admin'
      MYSQL_PASSWORD: 'adminpwd'
    volumes:
      - '/mnt/nginx/data/mysql:/var/lib/mysql'
{ "language": "en", "url": "https://stackoverflow.com/questions/74087616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: replacing integers in a data frame (logic issues)

So I am trying to change some values in a df using pandas and, having already tried df.replace, df.mask, and df.where, I came to the conclusion that it must be a logical mistake, since it keeps throwing the same error:

ValueError: The truth value of a Series is ambiguous.

I am trying to normalize a column in a dataset, thus the function and not just a single line. I need to understand why my logic is wrong; it seems to be such a dumb mistake. This is my function:

def overweight_normalizer():
    if df[df["overweight"] > 25]:
        df.where(df["overweight"] > 25, 1)
    elif df[df["overweight"] < 25]:
        df.where(df["overweight"] < 25, 0)

A: df[df["overweight"] > 25] is not a valid condition: it evaluates to a filtered DataFrame, and Python cannot decide whether a whole DataFrame should count as True or False, which is exactly what raises the ambiguity error. Try this:

import pandas as pd

def overweight_normalizer():
    df = pd.DataFrame({'overweight': [2, 39, 15, 45, 9]})
    df["overweight"] = [1 if i > 25 else 0 for i in df["overweight"]]
    return df

overweight_normalizer()

Output:

   overweight
0           0
1           1
2           0
3           1
4           0
{ "language": "en", "url": "https://stackoverflow.com/questions/71084981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: AWS EC2 email sending limit when using third party smtp server

Are there any limits on the number of emails I can send from an EC2 instance when I am using a third party SMTP server to send out emails? I use the EC2 instance to call the client's SMTP server.

Thanks
Santhosh

A: Yes, if you are connecting to the third-party server over TCP port 25, there is a limit imposed by the EC2 infrastructure, as an anti-spam measure.

You can request that this restriction be lifted, or, the simplest and arguably most correct solution, connect to the server on port 587 (SMTP-MSA) instead of 25 (SMTP-MTA). (The third party mail server should support it unless they really haven't been paying attention for several years.) See http://en.m.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol

Or, using SSL would be even better. If you aren't connecting to the 3rd party server on port 25, then there's absolutely no limit.

https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request is the form you can use to request removal of the port 25 block; it also requires you to establish reverse DNS and take additional responsibility for the lifted restriction, if you want to take that route instead.
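If you go the port 587 route, a minimal sketch with Python's standard smtplib looks like this (hostname, credentials and addresses are placeholders; your third-party server's settings will differ):

```python
import smtplib

SUBMISSION_PORT = 587  # SMTP-MSA; not subject to EC2's port-25 restriction

def send_via_msa(host, user, password, sender, recipient, message,
                 port=SUBMISSION_PORT):
    """Relay one message through a third-party SMTP server on the submission port."""
    with smtplib.SMTP(host, port, timeout=30) as server:
        server.starttls()              # upgrade the connection to TLS first
        server.login(user, password)   # the submission port requires authentication
        server.sendmail(sender, [recipient], message)
```

Port 465 (implicit TLS, via smtplib.SMTP_SSL) avoids the EC2 block in the same way, since only port 25 is throttled.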
{ "language": "en", "url": "https://stackoverflow.com/questions/26311747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Authorized use of goo.gl API to add URLs to user history

With Google opening the goo.gl API a few weeks ago, it's quite easy to use it with POST:

curl -F "url=LONGURL" http://goo.gl/api/shorten

The response is like this:

{"short_url":"http://goo.gl/A9MR","added_to_history":false}

So, does anyone know how to perform an authorized POST to the goo.gl API so that the shortened URL is added to the user's history, as if you were using the browser? I tried providing a basic Authorization header using my Google mail address and password, but that doesn't work.

A: It's not really a public API, yet. What you're using is what the goo.gl site uses itself, but it's not designed for public use like you're trying to do. They do plan on launching one though, and when they do I'm sure they'll add it as an option. See this post

EDIT: This is now possible with the newly launched API. See the docs here.
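Whichever way the POST is authorized, the reply is plain JSON. A minimal sketch of handling the response shown above with Python's standard library (the goo.gl service has since been retired, so this parses the sample reply rather than making a live call):

```python
import json

# Sample reply from the goo.gl endpoint, exactly as shown above.
reply = '{"short_url":"http://goo.gl/A9MR","added_to_history":false}'

data = json.loads(reply)
short_url = data["short_url"]          # the shortened link
in_history = data["added_to_history"]  # stays False until the POST is authorized

print(short_url, in_history)
```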
{ "language": "en", "url": "https://stackoverflow.com/questions/4314538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Magento list associated simple products as configurable product

I'm gonna tell my problem with an example. I have a configurable product named “test”, and I created 4 simple products with different sizes. They are associated:

* Test-Small
* Test-Medium
* Test-Large
* Test-XLarge

I'm getting all products with this code:

$collection = Mage::getResourceModel('reports/product_collection')
    ->addAttributeToSelect('*')
    ->joinField('category_id', 'catalog/category_product', 'category_id', 'product_id=entity_id', null, 'left')
    ->addStoreFilter()
    ->setOrder('created_at', 'desc')
    ->setVisibility(Mage::getSingleton('catalog/product_visibility')->getVisibleInCatalogIds());

I want these 4 products returned as one, with all size attributes. If I add a “size=small” filter to the code, “test” should be the returned product. The same goes for “size=medium”, “size=large” and “size=xlarge”: “test” should be returned each time. How can I do that? Maybe I need advanced SQL; let me know please.

A: I don't fully understand your question but you might be able to filter the collection with the type attribute, e.g. type_id='configurable' or type_id='simple':

->addAttributeToFilter('type_id', array('eq' => 'configurable'));
->addAttributeToFilter('type_id', array('eq' => 'simple'));

**EDIT following comments below

So, then, I think what you want to do is run the collection as-is, then from the simple product, find out what its configurable is using the function

$productParentId = Mage::getResourceSingleton('catalog/product_type_configurable')->getParentIdsByChild($simpleProductId);

The function will return an array and you may need additional logic to deduce which configurable, but I expect most stores operate with only one configurable product per simple product. I use $productParentId[0] to identify the configurable products in my store.
{ "language": "en", "url": "https://stackoverflow.com/questions/22914780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Retro, Lomo, Vignette filter in C/C++?

I am trying to apply image filters to images, like lomo, retro and vignette. Can anyone show me some sample code in C/C++? Or is there any ready-to-use library implementing image filtering? Thanks

A: OpenCV is probably the easiest library to get started with; there are some tutorials on image filtering.
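The vignette in particular is just a radial darkening mask multiplied into the image. A minimal pure-Python sketch of the idea (illustration only; the 0.7 falloff strength is an arbitrary choice, and a library such as OpenCV would do the per-pixel work in optimized C++ far faster):

```python
import math

def vignette_mask(width, height, strength=0.7):
    """Per-pixel multipliers in [0, 1]: 1.0 at the centre, darker toward corners."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    max_dist = math.hypot(cx, cy)  # centre-to-corner distance
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            d = math.hypot(x - cx, y - cy) / max_dist  # 0 at centre, 1 at corners
            row.append(1.0 - strength * d * d)         # quadratic falloff
        mask.append(row)
    return mask

def apply_vignette(gray_pixels, strength=0.7):
    """Multiply a 2-D list of grayscale values (0-255) by the vignette mask."""
    h, w = len(gray_pixels), len(gray_pixels[0])
    mask = vignette_mask(w, h, strength)
    return [[int(p * m) for p, m in zip(prow, mrow)]
            for prow, mrow in zip(gray_pixels, mask)]
```

The lomo/retro looks are built from the same kind of per-pixel arithmetic (channel curves plus a vignette), which is why an image library rather than hand-rolled loops is usually the practical answer.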
{ "language": "en", "url": "https://stackoverflow.com/questions/6944166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to limit the fields returned from Silverlight 4 RIA services query

I'm using RIA Services with Silverlight 4 and would like to limit the fields that are returned from the service. For example:

TableA:
* ID
* Field1
* Field2
* Field3

TableB:
* ID
* TableAID (foreign key)
* Field1
* RestrictedField2

In my domain service class I have something like this that was generated when I created the service. I added the includes (which are working fine):

<RequiresAuthentication()>
Public Function GetTableA() As IQueryable(Of TableA)
    Return Me.ObjectContext.TableA.Include("TableB")
End Function

My question is, how do I get all of the columns from TableA and also get Field1 from TableB without returning RestrictedField2? I'm pretty sure this is done through some LINQ fanciness, but I'm not quite sure how. Thanks!

Matt

Update

One requirement that I didn't list above: the column must be removed on the server side, as the data in RestrictedField2 cannot have any chance of being sent to the client. Also, I will need to use this field in a different domain service method (protected with RequiresRoleAttribute), so I can expose the information to an administrator. This requirement means that I don't want to create a different complex type and return that. I would prefer to continue working with the EF model type.

A: Check this link, I think it may solve your problem without the need of a view model: http://social.msdn.microsoft.com/Forums/en/adodotnetentityframework/thread/ab7b251a-ded0-487e-97a9-

It appears you can return an anonymous type then convert it to your needed type.

A: Based on some information that I found, the best way to accomplish what I need is to create a view in the database and expose the data I need via EF and RIA Services. This appears to be the best solution available.
{ "language": "en", "url": "https://stackoverflow.com/questions/5813074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: R read.zoo error for incorrect date format

I have a data set that has one date column and 10 other columns. The date column has the format 199010, i.e. yyyymm. It seems that zoo/xts requires the date to have day info in it. Is there any way to address this issue? Here is my data:

structure(list(Date = 198901:198905, NoDur = c(5.66, -1.44, 5.51, 5.68, 5.32)),
          .Names = c("Date", "NoDur"), class = "data.frame", row.names = c(NA, 5L))

data <- read.zoo("C:/***/data_port.csv", sep = ",", format = "%Y%m", header = TRUE,
                 index.column = 1, colClasses = c("character", rep("numeric", 1)))

A: The code has these problems:

* the data is space separated but the code specifies that it is comma separated
* the data does not describe dates since there is no day, but the code is using the default of dates
* the data is not provided in reproducible form. Note how one can simply copy the data and code below and paste it into R without any additional work.

Try this:

Lines <- "Date NoDur
198901 5.66
198902 -1.44
198903 5.51
198904 5.68
198905 5.32
"

library(zoo)
read.zoo(text = Lines, format = "%Y%m", FUN = as.yearmon, header = TRUE,
         colClasses = c("character", NA))

The above converts the index to "yearmon" class, which probably makes most sense here, but it would alternately be possible to convert it to "Date" class by using

FUN = function(x, format) as.Date(as.yearmon(x, format))

in place of the FUN argument above.
{ "language": "en", "url": "https://stackoverflow.com/questions/23019197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: unable to write code in the main activity java file in android studio

After upgrading to version 4.1.1, I am not able to write code in MainActivity.java; the code is not editable in Android Studio. How can I correct it?

A: If I am not wrong, when you click on any line in the source code the lines get highlighted but the caret indicator is not showing, and the code is not editable. I have faced the same problem many times; after File -> Invalidate Caches/Restart, the problem has been solved.

A: The problem has been solved by reverting Android Studio to its default settings.

A: I was facing this exact same problem. I solved it by pressing the Insert key once; apparently, I had inadvertently toggled Insert/Overwrite mode, and that was preventing me from editing code.
{ "language": "en", "url": "https://stackoverflow.com/questions/65092994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }