Q: corruption in OpenGL texture with glDrawArrays, iOS I'm working on a game for iOS and I'm having some trouble with a texture using glDrawArrays (using cocos2D v1.0.1, with OpenGL ES 1.1). I first create an array of CGPoints that define the top and bottom of a hill, along with an array for the texture coordinates:
hillVertices[nHillVertices] = CGPointMake((topLeftVertexX)*CC_CONTENT_SCALE_FACTOR(), topLeftVertexY*CC_CONTENT_SCALE_FACTOR());
hillTexCoords[nHillVertices] = CGPointMake((topLeftVertexX)/textureSize.width, 0.0);
nHillVertices++;
hillVertices[nHillVertices] = CGPointMake((bottomLeftVertexX)*CC_CONTENT_SCALE_FACTOR(), bottomLeftVertexY*CC_CONTENT_SCALE_FACTOR());
hillTexCoords[nHillVertices] = CGPointMake((topLeftVertexX)/textureSize.width, 1.0);
nHillVertices++;
and then draw using...
glBindTexture(GL_TEXTURE_2D, groundSprite.texture.name);
glVertexPointer(2, GL_FLOAT, 0, hillVertices);
glTexCoordPointer(2, GL_FLOAT, 0, hillTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (nHillVertices-1));
In the beginning of the game (when the "topLeftVertexX" value is small), the hills look good...
But as the game continues, the x offset value (topLeftVertexX) increases, and things start to get worse...
And near the end, it gets bad...
I think the problem is that in the beginning of the game, the "topLeftVertexX" value is small, so the result of
(topLeftVertexX)/textureSize.width
in
hillTexCoords[nHillVertices] = CGPointMake((topLeftVertexX)/textureSize.width, 0.0);
is small. However, as the game continues and the x values increase, I think the value is getting large enough to cause some sort of corruption. The textureSize is 512x512, and the "topLeftVertexX" begins at 0 at the beginning of the game, and goes up to about 200,000.
I have tried increasing the number of triangles (including adding additional strips), but that doesn't help. I also tried using @jpsarda's CCSpriteBiCurve class and got the same result. I also tried glHint settings (such as GL_LINE_SMOOTH_HINT with GL_NICEST), but I haven't found anything that helps.
Any ideas how I might fix this?
A: It's not getting corrupted, you're just losing floating point precision as the magnitude of your number increases.
As your number gets larger and larger, the delta between each successive point gets larger as well.
In IEEE 754 single precision, the difference between 1.0 and the next larger representable number is about 0.00000012.
At 200,000, the next larger representable number is 200,000.02 (courtesy of an IEEE 754 converter). I'm not even 100% positive what kind of floating-point precision the GPU uses internally (the quick ref card indicates it might be a 14-bit mantissa in the fragment stage?), so in reality it could be even worse.
If you're just looking at a small window of a large number, then the error will continue to grow. I suspect that as the precision goes down, your texture starts to look more and more 'blocky'.
The only thing I can think of to do is to design your code such that the number does not have to grow unbounded forever. Is there a smarter way you could wrap the number over so that it doesn't have to get so large?
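The precision loss is easy to reproduce outside OpenGL. Here is a small sketch in Python (chosen for brevity; the effect is identical in any language) that rounds values through 32-bit floats with struct, and then shows one way to wrap the x offset by the texture period so coordinates stay small. The 512-texel width comes from the question; GL_REPEAT-style wrapping of the texture is an assumption about the setup.

```python
import struct

def f32(x):
    # Round a Python double to the nearest 32-bit float, as the GPU stores it
    return struct.unpack('f', struct.pack('f', x))[0]

# Near 1.0 the gap between adjacent 32-bit floats is ~1.2e-7:
assert f32(1.0 + 1e-7) != 1.0
# Near 200,000 the gap has grown to ~0.016, so sub-texel steps vanish:
assert f32(200000.0 + 0.005) == 200000.0

# Possible fix: with a repeating texture, u and (u mod 1) sample the same
# texel, so wrap the x offset by the texture width *before* dividing.
texture_width = 512.0
x = 200000.7                              # large world-space offset
u = (x % texture_width) / texture_width   # stays in [0, 1)
assert 0.0 <= u < 1.0
```

Wrapping keeps every texture coordinate in a range where 32-bit floats still have sub-texel resolution, which is one way to "wrap the number over" as suggested above.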
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/11379954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Java EE Webservice - How to save a json without a database I have a REST service with a simple GET and POST method in Java EE. The POST method saves the received JSON in a file using Gson and a FileWriter. On my local system the file is saved in C:\Users...\Documents\Glassfish Domains\Domain\config. The GET method reads this file and returns the JSON.
When I test this on my local system using Postman, everything works fine, but when I deploy the project on an Ubuntu Server VM with Glassfish installed, I can connect but I get an HTTP 500 Internal Server Error. I managed to find out that the errors are thrown when the FileReader/FileWriter tries to do its work. I suppose access to this directory is restricted on a real Glassfish instance.
So my question is whether there is a file path where I am allowed to write a file and read it afterwards. The file has to stay there (at least while the application runs) and has to be the same for every request (a scheduler writes some data into the file every 24 hours). If anyone has a simple alternative for saving the JSON in Java EE without an extra database instance, that would be helpful, too :)
A: If you have access to the server then you can create a directory using the glassfish server user. Configure this path in some property file in your application and then use this property for reading and writing the file. This way you can configure different directory paths in different environments.
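The idea sketched in Python for brevity (a Java version would load a .properties file with java.util.Properties in the same spirit). The APP_DATA_DIR variable name and file name here are made up for the illustration; the point is only that the path comes from per-environment configuration rather than being hard-coded:

```python
import json
import os
import tempfile

# Hypothetical configuration: in production this would be a property set
# per environment (e.g. a directory created for the Glassfish user).
data_dir = os.environ.get("APP_DATA_DIR", tempfile.gettempdir())
data_file = os.path.join(data_dir, "state.json")

def save_state(obj):
    # POST handler: persist the received JSON at the configured path
    with open(data_file, "w") as f:
        json.dump(obj, f)

def load_state():
    # GET handler (and the 24h scheduler) read the very same file
    with open(data_file) as f:
        return json.load(f)

save_state({"counter": 42})
assert load_state() == {"counter": 42}
```

Because both reader and writer resolve the same configured path, the file survives across requests for as long as the application (and the directory) lives.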
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53387868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to implement online judge bot?(ex. TopCoder, Uva, ACM-ICPC) There are many online judge sites which can verify your program by comparing its output to the correct answers. What's more, they also check the running time and memory usage to make sure that your program doesn't exceed the maximum limit.
So here is my question: since some online judge sites run several test programs at the same time, how do they achieve performance isolation? And how do they get the same running time for the same program when it is run at another time?
I think there are isolated environments, like VMware or a sandbox, that always return the same result. Is this correct? Any ideas about how to implement these things?
Current Solution
I'm using Docker for sandboxing. It's dead simple and the safest way.
A: Unfortunately it is VERY hard to actually guarantee consistent running times, even on a dedicated machine versus a VM. If you do want to implement something like this, then as was mentioned you probably want a VM to keep all the code that will run sandboxed. Usually you don't want to service more than a couple of requests per core, so for algorithms that are memory- and CPU-bound I would use at most 2 VMs per physical core of the machine.
Although I can only speculate, why not try different numbers of VMs per core and see how it performs? Aim for about a 90% or higher rate of SLO compliance (or 98-99% if you really need it) and you should be just fine. Again, it's hard to tell you exactly what to do, as a lot of this requires just testing it out and seeing how it does.
A: May be overly simplistic depending on your other requirements which aren't in the question, but;
If the algorithms are CPU bound, simply running it in an isolated VM (or FreeBSD jail, or...) and using the built-in operating system instrumentation would be the simplest.
(Could be as simple as using the 'time' command in unix and setting memory limits with "limit")
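For a concrete feel, here is a minimal judge loop sketched in Python (my own illustration, not code from any real judge): it enforces a wall-clock limit with subprocess and compares the program's output against the expected answer. A real judge would add memory limits (setrlimit, cgroups) and run everything inside the sandbox discussed above.

```python
import subprocess
import sys

def judge(cmd, stdin_text, expected, time_limit=2.0):
    """Run one submission with a wall-clock limit and return a verdict."""
    try:
        result = subprocess.run(cmd, input=stdin_text, capture_output=True,
                                text=True, timeout=time_limit)
    except subprocess.TimeoutExpired:
        return "Time Limit Exceeded"
    if result.returncode != 0:
        return "Runtime Error"
    if result.stdout.strip() == expected.strip():
        return "Accepted"
    return "Wrong Answer"

# A toy "submission" that doubles the number it reads:
cmd = [sys.executable, "-c", "print(int(input()) * 2)"]
print(judge(cmd, "21\n", "42"))  # -> Accepted
```

The timeout handling is the whole trick: the submission is killed when the limit expires, so a looping program cannot hold a worker hostage.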
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8768537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Merge Duplicate object in array I have an array in which I need to merge duplicate values, summing their amount.
What would be an efficient algorithm?
var arr = [{
item: {
id: 1,
name: "Abc"
},
amount: 1
}, {
item: {
id: 1,
name: "Abc"
},
amount: 2
}, {
item: {
id: 2,
name: "Abc"
},
amount: 2
},{
item: {
id: 1,
name: "Abc"
},
amount: 2
}]
I need the solution to be:
[{
item: {
id: 1,
name: "Abc"
},
amount: 5
}, {
item: {
id: 2,
name: "Abc"
},
amount: 2
}]
A: Simply use Object.values() with Array.reduce() to merge the objects, then take the values:
var arr = [{ item: { id: 1, name: "Abc" }, amount: 1 }, { item: { id: 1, name: "Abc" }, amount: 2 }, { item: { id: 2, name: "Abc" }, amount: 2 },{ item: { id: 1, name: "Abc" }, amount: 2 }];
var result = Object.values(arr.reduce((a,curr)=>{
if(!a[curr.item.id])
a[curr.item.id] = Object.assign({},curr); // Object.assign() is used so that the original element(object) is not mutated.
else
a[curr.item.id].amount += curr.amount;
return a;
},{}));
console.log(result);
A: used map to catch em all :D
var arr = [{ item: { id: 1, name: "Abc" }, amount: 1 }, { item: { id: 1, name: "Abc" }, amount: 2 }, { item: { id: 2, name: "Abc" }, amount: 2 },{ item: { id: 1, name: "Abc" }, amount: 2 }];
var res = {};
arr.map((e) => {
if(!res[e.item.id]) res[e.item.id] = Object.assign({},e); // clone, credits to: @amrender singh
else res[e.item.id].amount += e.amount;
});
console.log(Object.values(res));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51347045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Unable to upload an object to Firebase realtime database I'm trying to upload this object to Firebase realtime database:
public class DecorationRequest {
private String mName;
private String mRooms;
private String mBudget;
private String mArea;
private String mDescription;
private List<Uri> mPhotoUrls;
public DecorationRequest() {
// Default constructor required for calls to DataSnapshot.getValue(User.class)
}
public DecorationRequest(String name, String rooms, String budget, String area, String description, List<Uri> photoUrls) {
mName = name;
mRooms = rooms;
mBudget = budget;
mArea = area;
mDescription = description;
mPhotoUrls = photoUrls;
}
public String getName() {
return mName;
}
public void setName(String name) {
mName = name;
}
public String getRooms() {
return mRooms;
}
public void setRooms(String rooms) {
mRooms = rooms;
}
public String getBudget() {
return mBudget;
}
public void setBudget(String budget) {
mBudget = budget;
}
public String getArea() {
return mArea;
}
public void setArea(String area) {
mArea = area;
}
public String getDescription() {
return mDescription;
}
public void setDescription(String description) {
mDescription = description;
}
public List<Uri> getPhotoUrls() {
return mPhotoUrls;
}
public void setPhotoUrls(List<Uri> photoUrls) {
mPhotoUrls = photoUrls;
}
}
I have an activity called FillInformationActivity which has this code in it:
mSubmitButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(checkFilledInformation()){
DecorationRequest decorationRequest = new DecorationRequest(name,
mRoomsPicker.getText().toString(),
mEditBudget.getText().toString(),
mPickArea.getText().toString(),
mEditDescription.getText().toString(),
pictures);
HelperTools.sendHelpRequest(mActivity, decorationRequest);
}
}
});
And it's calling the method in a class called HelperTools; here is the method:
public static void sendHelpRequest(final AppCompatActivity activity, DecorationRequest decorationRequest){
mRequestDatabaseReference.push().setValue(decorationRequest);
List<Uri> pictures = decorationRequest.getPhotoUrls();
for(int i = 0; i < pictures.size(); i++){
final StorageReference photoRef = mChatPhotosStorageReference.child(pictures.get(i).getLastPathSegment());
Task<Uri> urlTask = photoRef.putFile(pictures.get(i)).continueWithTask(new Continuation<UploadTask.TaskSnapshot, Task<Uri>>() {
@Override
public Task<Uri> then(@NonNull Task<UploadTask.TaskSnapshot> task) throws Exception {
if(!task.isSuccessful()) {
throw task.getException();
} else {
return photoRef.getDownloadUrl();
}
}
}).addOnCompleteListener(new OnCompleteListener<Uri>() {
@Override
public void onComplete(@NonNull Task<Uri> task) {
if(task.isSuccessful()){
Uri downloadUri = task.getResult();
ChatMessage chatMessage =
new ChatMessage(mUserName, getTime(), null, downloadUri.toString());
mMessagesDatabaseReference.push().setValue(chatMessage);
} else {
Toast.makeText(activity.getApplicationContext(), "Failure", Toast.LENGTH_LONG).show();
}
}
});
}
}
And finally here is the error log. It starts with literally about 5000 lines of this looping:
at com.google.android.gms.internal.firebase_database.zzkt.zzi(Unknown Source)
at com.google.android.gms.internal.firebase_database.zzkt.zzi(Unknown Source)
at com.google.android.gms.internal.firebase_database.zzkt.zzl(Unknown Source)
at com.google.android.gms.internal.firebase_database.zzku.zzm(Unknown Source)
And then ends with this:
at com.google.firebase.database.DatabaseReference.zza(Unknown Source)
at com.google.firebase.database.DatabaseReference.setValue(Unknown Source)
at com.example.tino.interiordecoration.HelperTools$override.sendHelpRequest(HelperTools.java:75)
at com.example.tino.interiordecoration.HelperTools$override.access$dispatch(HelperTools.java)
at com.example.tino.interiordecoration.HelperTools.sendHelpRequest(HelperTools.java)
at com.example.tino.interiordecoration.FillInformationActivity$2.onClick(FillInformationActivity.java:82)
at android.view.View.performClick(View.java:5706)
at android.view.View$PerformClick.run(View.java:22822)
at android.os.Handler.handleCallback(Handler.java:836)
at android.os.Handler.dispatchMessage(Handler.java:103)
at android.os.Looper.loop(Looper.java:203)
at android.app.ActivityThread.main(ActivityThread.java:6301)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1084)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:945)
The line 75 at HelperTools is this line:
mRequestDatabaseReference.push().setValue(decorationRequest);
If I change the DecorationRequest object to some random string, the code runs and the data does upload to the Firebase Realtime Database, but obviously that is not what I want.
A: The Uri class is not a data type that is supported in Firebase. The List should contain String objects and not Uri objects.
My code works when I simply change this
public List<Uri> getPhotoUrls() {
return mPhotoUrls;
}
to this
public List<String> getPhotoUrls() {
List<String> photoUrlStrings = new ArrayList<String>();
for(int i=0; i<mPhotoUrls.size(); i++){
photoUrlStrings.add(mPhotoUrls.get(i).toString());
}
return photoUrlStrings;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51726066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: ElasticSearch group by documents field and count occurrences My ElasticSearch 6.5.2 index looks like:
{
"_index" : "searches",
"_type" : "searches",
"_id" : "cCYuHW4BvwH6Y3jL87ul",
"_score" : 1.0,
"_source" : {
"querySearched" : "telecom"
}
},
{
"_index" : "searches",
"_type" : "searches",
"_id" : "cSYuHW4BvwH6Y3jL_Lvt",
"_score" : 1.0,
"_source" : {
"querySearched" : "telecom"
}
},
{
"_index" : "searches",
"_type" : "searches",
"_id" : "eCb6O24BvwH6Y3jLP7tM",
"_score" : 1.0,
"_source" : {
"querySearched" : "industry"
}
And I would like a query that returns this result:
"result":
{
"querySearched" : "telecom",
"number" : 2
},
{
"querySearched" : "industry",
"number" : 1
}
I just want to group by occurrence and get the count for each, limited to the ten biggest counts. I tried with aggregations but the bucket is empty.
Thanks!
A: In case your mapping looks like this:
PUT /index
{
"mappings": {
"doc": {
"properties": {
"querySearched": {
"type": "text",
"fielddata": true
}
}
}
}
}
Your query should look like:
GET index/_search
{
"size": 0,
"aggs": {
"result": {
"terms": {
"field": "querySearched",
"size": 10
}
}
}
}
You should add fielddata: true in order to enable aggregations on a text-type field (more on that in the link I've shared).
"size": 10 => limits the result to the top 10 buckets.
After a short discussion with @Kamal, I feel obligated to let you know that if you choose to enable fielddata: true, you must know that
it can consume a lot of heap space.
From the link I've shared:
Fielddata can consume a lot of heap space, especially when loading high cardinality text fields. Once fielddata has been loaded into the heap, it remains there for the lifetime of the segment. Also, loading fielddata is an expensive process which can cause users to experience latency hits. This is why fielddata is disabled by default.
Another alternative (a more efficient one):
PUT /index
{
"mappings": {
"doc": {
"properties": {
"querySearched": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
}
}
Then your aggregation query
GET index/_search
{
"size": 0,
"aggs": {
"result": {
"terms": {
"field": "querySearched.keyword",
"size": 10
}
}
}
}
Both solutions work, but you should take this into consideration.
Hope it helps
A: What did you try?
POST
/searches/_search
{
"size": 0,
"aggs": {
"byquerySearched": {
"terms": {
"field": "querySearched",
"size": 10
}
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58733898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: NetworkStream.Write returns immediately - how can I tell when it has finished sending data? Despite the documentation, NetworkStream.Write does not appear to wait until the data has been sent. Instead, it waits until the data has been copied to a buffer and then returns. That buffer is transmitted in the background.
This is the code I have at the moment. Whether I use ns.Write or ns.BeginWrite doesn't matter - both return immediately. The EndWrite also returns immediately (which makes sense since it is writing to the send buffer, not writing to the network).
bool done;
void SendData(TcpClient tcp, byte[] data)
{
NetworkStream ns = tcp.GetStream();
done = false;
ns.BeginWrite(bytWriteBuffer, 0, data.Length, myWriteCallBack, ns);
while (done == false) Thread.Sleep(10);
}
public void myWriteCallBack(IAsyncResult ar)
{
NetworkStream ns = (NetworkStream)ar.AsyncState;
ns.EndWrite(ar);
done = true;
}
How can I tell when the data has actually been sent to the client?
I want to wait 10 seconds (for example) for a response from the server after sending my data; otherwise I'll assume something went wrong. If it takes 15 seconds to send my data, then it will always time out, since I can only start counting from when NetworkStream.Write returns - which is before the data has been sent. I want to start counting 10 seconds from when the data has left my network card.
The amount of data and the time to send it could vary - it could take 1 second, 10 seconds, or a minute. The server does send a response when it has received the data (it's an SMTP server), but I don't want to wait forever if my data was malformed and the response will never come, which is why I need to know whether I'm waiting for the data to be sent or waiting for the server to respond.
I might want to show the status to the user - I'd like to show "sending data to server", and "waiting for response from server" - how could I do that?
A: TCP is a "reliable" protocol, which means the data will be received at the other end if there are no socket errors. I have seen numerous efforts at second-guessing TCP with a higher level application confirmation, but IMHO this is usually a waste of time and bandwidth.
Typically the problem you describe is handled through normal client/server design, which in its simplest form goes like this...
The client sends a request to the server and does a blocking read on the socket waiting for some kind of response. If there is a problem with the TCP connection then that read will abort. The client should also use a timeout to detect any non-network related issue with the server. If the request fails or times out then the client can retry, report an error, etc.
Once the server has processed the request and sent the response it usually no longer cares what happens - even if the socket goes away during the transaction - because it is up to the client to initiate any further interaction. Personally, I find it very comforting to be the server. :-)
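That request/response pattern with a read timeout can be sketched in a few lines. This Python illustration uses a socketpair in place of a real TCP connection (an assumption made so the sketch is self-contained); the client side - sendall, settimeout, then a blocking recv - is the part that maps onto the C# question:

```python
import socket
import threading

def server(conn):
    # Server: read the request, send the response, and stop caring.
    request = conn.recv(1024)
    conn.sendall(b"OK: " + request)
    conn.close()

client, srv = socket.socketpair()
threading.Thread(target=server, args=(srv,)).start()

client.settimeout(10.0)           # give up if no response within 10 s
client.sendall(b"HELO")           # "sending data to server"
try:
    response = client.recv(1024)  # "waiting for response from server"
except socket.timeout:
    response = None               # report the error, retry, etc.
client.close()
```

The two commented phases also give you the user-visible status strings asked about in the question: show one message around the send, the other while blocked in the timed read.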
A: I'm not a C# programmer, but the way you've asked this question is slightly misleading. The only way to know when your data has been "received", for any useful definition of "received", is to have a specific acknowledgment message in your protocol which indicates the data has been fully processed.
The data does not "leave" your network card, exactly. The best way to think of your program's relationship to the network is:
your program -> lots of confusing stuff -> the peer program
A list of things that might be in the "lots of confusing stuff":
*
*the CLR
*the operating system kernel
*a virtualized network interface
*a switch
*a software firewall
*a hardware firewall
*a router performing network address translation
*a router on the peer's end performing network address translation
So, if you are on a virtual machine, which is hosted under a different operating system, that has a software firewall which is controlling the virtual machine's network behavior - when has the data "really" left your network card? Even in the best case scenario, many of these components may drop a packet, which your network card will need to re-transmit. Has it "left" your network card when the first (unsuccessful) attempt has been made? Most networking APIs would say no, it hasn't been "sent" until the other end has sent a TCP acknowledgement.
That said, the documentation for NetworkStream.Write seems to indicate that it will not return until it has at least initiated the 'send' operation:
The Write method blocks until the requested number of bytes is sent or a SocketException is thrown.
Of course, "is sent" is somewhat vague for the reasons I gave above. There's also the possibility that the data will be "really" sent by your program and received by the peer program, but the peer will crash or otherwise not actually process the data. So you should do a Write followed by a Read of a message that will only be emitted by your peer when it has actually processed the message.
A: In general, I would recommend sending an acknowledgment from the client anyway. That way you can be 100% sure the data was received, and received correctly.
A: If I had to guess, the NetworkStream considers the data to have been sent once it hands the buffer off to the Windows Socket. So, I'm not sure there's a way to accomplish what you want via TcpClient.
A: I can not think of a scenario where NetworkStream.Write wouldn't send the data to the server as soon as possible. Barring massive network congestion or disconnection, it should end up on the other end within a reasonable time. Is it possible that you have a protocol issue? For instance, with HTTP the request headers must end with a blank line, and the server will not send any response until one occurs -- does the protocol in use have a similar end-of-message characteristic?
Here's some cleaner code than your original version, removing the delegate, field, and Thread.Sleep. It performs exactly the same way functionally.
void SendData(TcpClient tcp, byte[] data) {
NetworkStream ns = tcp.GetStream();
// BUG?: should bytWriteBuffer == data?
IAsyncResult r = ns.BeginWrite(bytWriteBuffer, 0, data.Length, null, null);
r.AsyncWaitHandle.WaitOne();
ns.EndWrite(r);
}
Looks like the question was modified while I wrote the above. The .WaitOne() may help your timeout issue. It can be passed a timeout parameter. This is a lazy wait -- the thread will not be scheduled again until the result is finished, or the timeout expires.
A: I try to understand the intent of the .NET NetworkStream designers, and they must have designed it this way on purpose. After Write, the data to send is no longer handled by .NET, so it is reasonable that Write returns immediately (and the data will be sent out from the NIC soon after).
So in your application design you should follow this pattern rather than trying to make it work your way. For example, use a longer timeout before any data is received from the NetworkStream, to compensate for the time consumed before your command leaves the NIC.
In any case, it is bad practice to hard-code a timeout value in source files. If the timeout value is configurable at runtime, everything should work fine.
A: How about using the Flush() method.
ns.Flush()
That should ensure the data is written before continuing.
A: Below .NET sits Windows Sockets, which uses TCP.
TCP uses ACK packets to notify the sender that the data has been transferred successfully.
So the sending machine knows when the data has been transferred, but there is no way (that I am aware of) to get that information in .NET.
edit:
Just an idea, never tried:
Write() blocks only if the socket's buffer is full. So if we lower that buffer's size (SendBufferSize) to a very low value (8? 1? 0?) we may get what we want :)
A: Perhaps try setting
tcp.NoDelay = true
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: how to close MessageBox in RCP plugin development? I have created a MessageBox while developing an Eclipse plugin, which opens when you perform some action... however, even after I click "OK" on that MessageBox, the dialog appears again and again.
Can anyone tell me how to close the MessageBox once it has been shown to the user?
To open the dialog box I wrote the following code:
MessageBox dialog = new MessageBox(new Shell(), SWT.OK);
dialog.setMessage("Some message");
dialog.setText("Title");
dialog.open();
A: Your problem has nothing to do with the code you posted. Please provide us with additional information. Also consider setting the shell of the currently active widget as the parent shell in the MessageBox constructor (e.g. new MessageBox(swtControl.getShell(), SWT.OK)). Otherwise the dialog might not be modal; this depends on the modal style of the Shell.
A: After research I found that you need to dispose of components that are no longer needed once a specific action has completed. So once my MessageDialog appears and the user clicks
OK... I need to dispose of my MessageDialog using Display.getCurrent().dispose()
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15108404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Finding the degree of an undirected graph I am trying to find the degree distribution of an undirected graph. And I tried the following code:
graph = { "a" : ["c"],
"b" : ["c", "e"],
"c" : ["a", "b", "d", "e"],
"d" : ["c"],
"e" : ["c", "b"],
"f" : []
}
def generate_edges(graph):
edges = []
for node in graph:
for neighbour in graph[node]:
edges.append((node, neighbour))
return edges
print(generate_edges(graph))
And my output is something like this:
[('c', 'a'), ('c', 'b'), ('c', 'd'), ('c', 'e'), ('b', 'c'), ('b', 'e'), ('a', 'c'), ('e', 'c'), ('e', 'b'), ('d', 'c')]
I am trying to find the degree but I am not getting it. I need my output to be [1,2,2,0,1], a list where the index ranges from 0 to the maximum degree in the graph (in the above graph, 4 is the maximum degree, for "c") and the value at each index is the number of nodes with degree equal to that index. (In the above graph there is 1 node with degree 0, 2 with degree 1, 2 with degree 2, none with degree 3, and finally 1 with degree 4.) Hence [1,2,2,0,1]. Can anyone help me with this without using NetworkX?
A: graph = { "a" : ["c"],
"b" : ["c", "e"],
"c" : ["a", "b", "d", "e"],
"d" : ["c"],
"e" : ["c", "b"],
"f" : [] }
def max_length(x):
return len(graph[x])
# Determine what index has the longest value
index = max(graph, key=max_length)
m = len(graph[index])
# Fill the list with m+1 zeroes (one slot per degree 0..m)
out = [0 for x in range(m+1)]
for k in graph:
l = len(graph[k])
out[l]+=1
print(out)
Outputs [1, 2, 2, 0, 1]
A: Another solution using Counter:
from collections import Counter
a = Counter(map(len, graph.values())) # map with degree as key and number of nodes as value
out = [a[i] for i in range(max(a)+1)] # transform the map to a list
A: You can find the degrees of individual nodes by simply taking the length of each node's adjacency list.
all_degrees = map(len, graph.values())
This, in your case, produces the individual degrees, not necessarily in the same order as the elements.
[1, 4, 2, 2, 1, 0]
Next thing is simply frequency count in the list.
from collections import defaultdict
freq = defaultdict(int)
for i in all_degrees:
freq[i] += 1
print freq
Out: defaultdict(<type 'int'>, {0: 1, 1: 2, 2: 2, 4: 1})
As expected, freq now holds the count for each degree that occurs. Note that freq.values() alone would give [1, 2, 2, 1]: degrees that never occur (here, 3) are missing, and dictionary ordering is not guaranteed, so it is not quite the required output. Instead, create an empty list and append the count for every degree from 0 to the maximum:
out = list()
for i in range(max(all_degrees)+1):
out.append(freq[i])
This returns out = [1,2,2,0,1] - the required output.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/22438238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Quick and Efficient way to generate random numbers in Java I am writing a multi-threaded Java program that generates a lot of random numbers.
Additional Details:
These numbers are used to create a list of random numbers from 0-99 without repetition and such that every number in the range 0-99 exists in the list (In other words, the list contains 100 unique elements in the range 0-99).
Generating Random Numbers [Things already tried!]
*
*I have an ArrayList of numbers from 0-100. I generate a random number and use it as an index which is used to pop out an element from the ArrayList.
*I have used Collections.shuffle().
Here is the code for approach 1:
ArrayList<Integer> arr = new ArrayList<Integer>();
for (int i = 0; i < N; i++){
arr.add(i, i);
}
for(int i=0; i<N; i++){
int indx = rand.nextInt(arr.size());
res.add(arr.get(indx));
arr.remove(indx);
}
For the second approach, I replaced the second for loop with Collections.shuffle(arr).
As generating list of random numbers is the most expensive part of my algorithm, I want to optimize it. This brings me to the questions:
*
*What is the fastest way to generate random numbers?
*What is the fastest way to generate the list of random numbers as described above?
PS:
*
*I found Collections.shuffle() to be slower than the first approach
*Someone suggested using rngd to generate random numbers from hardware on Unix. Has anyone tried this before? How do you do it?
A: I think the problem with Collections.shuffle() is that it uses the default Random instance, which is a thread-safe singleton. You say your program is multi-threaded, so I can imagine synchronization in Random being the bottleneck.
If you are happily running on Java 7, simply use ThreadLocalRandom. Look carefully, there is a version of shuffle() taking Random instance explicitly:
Collections.shuffle(arr, threadLocalRandom);
where threadLocalRandom is created only once.
On Java 6 you can simply create a single instance of Random once per thread. Note that you shouldn't create a new instance of Random per run, unless you can provide random seed every time.
A: Part of the problem might be the overhead of the Integer boxing and unboxing. You might find it helpful to reimplement the Fisher-Yates shuffle directly on an int[].
A: My approach would be to generate the numbers with the Math.random() method as in the example here, and to initialize the list via a static init block like this:
private static List<Integer> list = new ArrayList<Integer>();
static {
for(int i = 0; i < 100; i++) {
// randomize number
list.add(number);
}
}
Hope this helped, have Fun!
A: To shuffle an array a of n elements (indices 0..n-1):
for i from n − 1 downto 1 do
j ← random integer with 0 ≤ j ≤ i
exchange a[j] and a[i]
Check the Fisher-Yates algorithm.
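For reference, that pseudocode translates almost line for line; here is a sketch in Python (the same loop ports directly to Java on a plain int[], which also avoids the Integer boxing mentioned above):

```python
import random

def fisher_yates(a):
    # In-place Fisher-Yates shuffle: every permutation is equally likely
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # random integer with 0 <= j <= i
        a[i], a[j] = a[j], a[i]    # exchange a[j] and a[i]
    return a

nums = fisher_yates(list(range(100)))
# Still the numbers 0-99, each exactly once, just in random order:
assert sorted(nums) == list(range(100))
```

This runs in O(n) with no removals from the middle of a list, unlike the ArrayList-pop approach in the question, which is O(n²) because each remove(indx) shifts the tail.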
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9649266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: With hive's complex struct data type, how to write query with where clause I have the following Hive table with the complex data type STRUCT. Can you please help me write a Hive query with a WHERE clause for a specific city?
CREATE EXTERNAL TABLE user_t (
name STRING,
id BIGINT,
isFTE BOOLEAN,
role VARCHAR(64),
salary DECIMAL(8,2),
phones ARRAY<INT>,
deductions MAP<STRING, FLOAT>,
address ARRAY<STRUCT<street:STRING, city:STRING, state:STRING, zip:INT>>,
others UNIONTYPE<FLOAT,BOOLEAN,STRING>,
misc BINARY
)
I'm able to use the STRUCT data type in the SELECT clause but not in the WHERE clause.
Working:
select address.city from user_t;
Not working:
select address.city from user_t where address.city = 'XYZ'
The documentation says there are limitations when using struct fields in a group by or where clause, and it gives a workaround as well, but I didn't understand it clearly.
Link: Documentation
Please suggest. Thank you.
A: Demo
create table user_t
(
id bigint
,address array<struct<street:string, city:string, state:string, zip:int>>
)
;
insert into user_t
select 1
,array
(
named_struct('street','street_1','city','city_1','state','state_1','zip',11111)
,named_struct('street','street_2','city','city_1','state','state_1','zip',11111)
,named_struct('street','street_3','city','city_3','state','state_3','zip',33333)
)
union all
select 2
,array
(
named_struct('street','street_4','city','city_4','state','state_4','zip',44444)
,named_struct('street','street_5','city','city_5','state','state_5','zip',55555)
)
;
Option 1: explode
select u.id
,a.*
from user_t as u
lateral view explode(address) a as details
where details.city = 'city_1'
;
+----+---------------------------------------------------------------------+
| id | details |
+----+---------------------------------------------------------------------+
| 1 | {"street":"street_1","city":"city_1","state":"state_1","zip":11111} |
| 1 | {"street":"street_2","city":"city_1","state":"state_1","zip":11111} |
+----+---------------------------------------------------------------------+
Option 2: inline
select u.id
,a.*
from user_t as u
lateral view inline(address) a
where a.city = 'city_1'
;
+----+----------+--------+---------+-------+
| id | street | city | state | zip |
+----+----------+--------+---------+-------+
| 1 | street_1 | city_1 | state_1 | 11111 |
| 1 | street_2 | city_1 | state_1 | 11111 |
+----+----------+--------+---------+-------+
Option 3: self join
select u.*
from user_t as u
join (select distinct
u.id
from user_t as u
lateral view inline(address) a
where a.city = 'city_1'
) as u2
on u2.id = u.id
;
+----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | address |
+----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 1 | [{"street":"street_1","city":"city_1","state":"state_1","zip":11111},{"street":"street_2","city":"city_1","state":"state_1","zip":11111},{"street":"street_3","city":"city_3","state":"state_3","zip":33333}] |
+----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43148996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: A/B testing for React SSR app with chunking Versions:
webpack: 4.30.0,
react: 16.8.6,
react-loadable: 5.5.0,
I have one entry JS in webpack. Other chunks are currently created using react-loadable, and the vendor JS is created using splitChunks.
Idea of my application:
SSR and chunking are working with react-loadable for route-level components, and a dynamic component's chunk can be made using either webpack's import() promise or react-loadable.
Now, I want to introduce A/B testing to these dynamic components.
I've 2 approaches:
*
*First approach: Wrap the dynamic-import statement inside if-else to import the A/B variation of that dynamic component. At the server level I would know the segment for a given user, and I would set a flag in the Redux store in order to select the A/B variation.
(This approach works whether the app is SSR or an SPA.)
Notes for this approach:
a. Seems doable :P
b. Can't scale to A/B/C very easily. In this case, I would have to introduce a third else condition to dynamically import the C variation. It'll bust the browser cache for the route-level component's chunk (containing these variations) because the hash in the file name changes. And the same will happen to all the route-level components affected by adding the additional C test.
c. Have to unnecessarily pollute the code with if-conditions. If the A/B testing is completed, the code will have to be modified again, which busts the browser cache again.
*Second approach: I'll have only one dynamic import in code, but I want 3 different chunks for the same component A, B and C, like A/some_chunk[hash].js, B/some_chunk[hash].js and C/some_chunk[hash].js. At the server level I would use the same logic for segmenting the user for A/B/C testing as in approach 1, but instead of setting a flag in the Redux store, I'll serve some_chunk[hash].js from the A, B or C folder as per the user's segment.
Notes for this approach:
a. I need to ask how we can create chunks for a dynamic component without it actually being imported in any file.
b. Can very easily scale to A/B/C testing, since A/B testing now depends on which variation of the file the server serves.
c. No polluting of the client code with if-else conditions. No issues of browser cache busting.
d. Will need separate wrapper component for server code for ReactDOMServer.renderToString to know which variation to pick for server side rendering.
Questions:
*
*So, I need to know how we can create chunks as per the 2nd approach? Because webpack won't create a chunk for a file that is not reachable (i.e. not imported by any other file).
*Would you recommend this approach? What's the right micro-frontend approach to doing A/B testing?
PS:
If this is doable, then my roadmap would be to chunk for base layouts as well. Where I can avoid if-else conditions in code and entire layout can be changed as per which A/B variation server sends to browser.
A: This can be done using webpack's weak resolve
Example from webpack docs:
const page = 'Foo'; // Trick: Can be taken from props
__webpack_modules__[require.resolveWeak(`./page/${page}`)];
My use case:
Suppose, we're doing A/B testing on component D which has variations D1, D2 and D3.
We can make a folder D/ with the D1.js, D2.js and D3.js variations inside it. Now, require.resolveWeak(`./D/${variation}`) will pack chunks for D1, D2 and D3 in the build folder. At runtime, passing the props to pick the particular variation will dynamically load that JS.
Note: For example, to pick the D2 variation, the experiment name must also be D2 (or else you must store a mapping of experiment names to component names) to be passed as props. Generally, people do A/B testing by just having multiple if-elses. So, in loadVariationOfD.js, instead of the weak-resolve import statement, if-else is used with dynamic imports (I'm using loadable-components for this).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56305311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do I send a parameter list to be used as arguments when a delegate is invoked? I've implemented a simple extension method in my asp.net mvc 3 app to pull objects out of session using generics:
public static T GetVal<T>(this HttpSessionStateBase Session, string key, Func<T> getValues)
{
if (Session[key] == null)
Session[key] = getValues();
return (T)Session[key];
}
This works great if getValues() doesn't require any arguments.
I was attempting to write an overload that takes in params object[] args to allow me to pass arguments if necessary to the getValues() function, but I don't know what the syntax is to apply those variables to the function.
Is this even possible? Thanks in advance for your advice.
A: I would argue that you shouldn't need to do this - the caller can handle that with a lambda expression. For example:
int x = session.GetVal<int>("index", () => "something".IndexOf("o"));
Here we're capturing the idea of calling IndexOf on "something" passing in the argument "o". All of that is captured in a simple Func<int>.
A: You can add an overload to your function
public static T GetVal<T>(this HttpSessionStateBase Session, string key, Func<IList<object>,T> getValues, IList<object> args)
{
if (Session[key] == null)
Session[key] = getValues(args);
return (T)Session[key];
}
A: You'll have to define your own delegate rather than Func. The following will work perfectly here:
public delegate TResult ParamsFunc<TResult>(params object[] args);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/6521623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: WPF - Binding IsMouseOver to Visibility I have a window which overrides a RadioButton's ControlTemplate to show a custom control inside of it. Inside the custom control, I have a button's visibility tied to IsMouseOver, which works correctly in showing the button only when the mouse is hovering over the control. However, when I click on the RadioButton, the Button disappears. After some debugging and reading, it seems that the RadioButton is capturing the mouse on click, and this makes IsMouseOver for the UserControl false.
I tried binding the Button's visibility to FindAncestor {x:Type RadioButton} and it works, but it seems a bit fragile to me to have the UserControl depend on who is containing it. The code for the window and the user control is below. Any suggestions?
<Window x:Name="window" x:Class="WPFTest.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:WPFTest="clr-namespace:WPFTest"
Title="Window1" Height="300" Width="300">
<Window.Resources>
<Style TargetType="{x:Type RadioButton}">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type RadioButton}">
<WPFTest:TestUC />
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Window.Resources>
<Border BorderBrush="Black" BorderThickness="2">
<StackPanel>
<RadioButton x:Name="OptionButton" Height="100" />
<TextBlock Text="{Binding ElementName=OptionButton, Path=IsMouseOver}" />
</StackPanel>
</Border>
</Window>
<UserControl x:Name="_this" x:Class="WPFTest.TestUC"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Height="300" Width="300">
<UserControl.Resources>
<BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter" />
</UserControl.Resources>
<StackPanel>
<TextBlock Text="SomeText" />
<TextBlock Text="{Binding ElementName=_this, Path=IsMouseOver}" />
<Button x:Name="_cancelTextBlock" Content="Cancel" Visibility="{Binding ElementName=_this, Path=IsMouseOver, Converter={StaticResource BooleanToVisibilityConverter}}" />
</StackPanel>
</UserControl>
A: I seem to have fixed the problem by setting a trigger in the control template, which binds to the RadioButton's IsMouseOver, and sets a custom DependencyProperty on the UserControl.
Something like:
<ControlTemplate TargetType="{x:Type RadioButton}">
<WPFTest:TestUC x:Name="UC" />
<ControlTemplate.Triggers>
<Trigger Property="IsMouseOver" Value="True">
<Setter Property="ShowCancel" Value="True" TargetName="UC"/>
</Trigger>
</ControlTemplate.Triggers>
</ControlTemplate>
I'm still confused as to why the Mouse Capture falsifies IsMouseOver on the UserControl child of the RadioButton however. Can anyone shed some light on this?
A: After the event is handled by the RadioButton, it is only marked as handled, but in reality it still bubbles up. So you just need to specify that you want to handle handled events too.
For that you need to look at handledEventsToo.
Unfortunately I don't think it can be set in XAML, only in code.
A: Very interesting problem. I myself would like to know more about why the UserControl's IsMouseOver changes to false when the TextBlock(s) in its visuals are mouse-downed upon.
However, here is another way to solve it ... maybe you will like this approach better.
Instead of using RadioButton (since you are retemplating it) why don't you just use Control? (I think IsMouseOver is getting changed to false due to the fact that it is a Button derived control.)
Following is the xaml for the Window ...
<Window
x:Class="WpfApplication1.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="clr-namespace:WpfApplication1"
Title="Window1"
Width="300"
Height="300"
>
<Window.Resources>
<Style TargetType="{x:Type Control}">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type Control}">
<local:UserControl1/>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Window.Resources>
<Border BorderBrush="Black" BorderThickness="2">
<StackPanel>
<Control x:Name="OptionButton" Height="100"/>
<TextBlock Text="{Binding ElementName=OptionButton, Path=IsMouseOver}"/>
</StackPanel>
</Border>
</Window>
EDIT:
I just wanted to add ... that if you're okay with the above approach ... then, the right thing to do is probably to just use the UserControl in the Window's visual tree versus retemplating a Control. So ... like this:
<Window
x:Class="WpfApplication1.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="clr-namespace:WpfApplication1"
Title="Window1"
Width="300"
Height="300"
>
<Border BorderBrush="Black" BorderThickness="2">
<StackPanel>
<local:UserControl1 x:Name="OptionButton" Height="100"/>
<TextBlock Text="{Binding ElementName=OptionButton, Path=IsMouseOver}"/>
</StackPanel>
</Border>
</Window>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/258824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: React app, API call that fetches routes for app Hello, I am using React and Redux. I have my action creator that fetches the routes of the page, and I create the routing with them this way:
First, in app.js I'm calling the action creator (using mapDispatchToProps) in useEffect and passing the result (mapStateToProps) to the Routes component:
useEffect(() => {
fetchMenu();
}, []);
<Routes menu={menu} />
Then in Routes.js:
{menu.map((item) => (
<PrivateRoute
key={item.id}
exact
path={item.link}
parentClass="theme-1"
component={(props) => selectPage(item.linkText, props)}
/>
))}
The problem is that if I refresh the page, there is a little delay between the API call and the render of the page, so for one second the browser shows the "NOT FOUND" page and then instantly redirects to the route. How can I make it work properly? Thank you!
A: Basically what you want is to be able to know that the data hasn't been loaded yet, and render differently based on that. A simple check would be see if the menu is empty. Something like this:
export const Menu = ({ menu, fetchMenu }) => {
useEffect(() => {
fetchMenu();
}, []);
if ( menu.length > 0 ) {
return <Routes menu={menu} />
} else {
return <MenuLoading />
}
}
A more advanced setup would be able to tell the difference between an empty menu due to an API error and a menu that's empty because it's still loading, but to do that you would need to store information about the status of the API call in the state.
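A minimal sketch of that more advanced setup, as a plain Redux-style reducer (the action type names here are illustrative, not from the question's code):

```javascript
// Track the status of the API call alongside the data, so the UI can
// distinguish "still loading" from "loaded but empty" and "failed".
const initialState = { status: 'idle', items: [], error: null };

function menuReducer(state = initialState, action) {
  switch (action.type) {
    case 'menu/fetchStarted':
      return { ...state, status: 'loading' };
    case 'menu/fetchSucceeded':
      return { status: 'succeeded', items: action.payload, error: null };
    case 'menu/fetchFailed':
      return { ...state, status: 'failed', error: action.error };
    default:
      return state;
  }
}
```

The component can then branch on `status` ('loading' vs 'failed' vs 'succeeded') instead of checking `menu.length`.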
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64424826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: PyQt: Overriding QGraphicsView.drawItems I need to customize the drawing process of a QGraphicsView, and so I override the drawItems method like this:
self.graphicsview.drawItems=self.drawer.drawItems
where self.graphicsview is a QGraphicsView, and self.drawer is a custom class with a method drawItems.
In this method I check a few flags to decide how to draw each item, and then call item.paint, like this:
def drawItems(self, painter, items, options):
for item in items:
print "Processing", item
# ... Do checking ...
item.paint(painter, options, self.target)
self.target is the QGraphicsView's QGraphicsScene.
However, once it reaches item.paint, it breaks out of the loop - without any errors. If I put conditionals around the painting, and for each possible type of QGraphicsItem paste the code that is supposed to be executed (by looking at the Qt git-sources), everything works.
Not a very nice solution though... And I don't understand how it could even break out of the loop?
A: There is an exception that occurs when the items are painted, but it is not reported right away. On my system (PyQt 4.5.1, Python 2.6), no exception is reported when I monkey-patch the following method:
def drawItems(painter, items, options):
print len(items)
for idx, i in enumerate(items):
print idx, i
if idx > 5:
raise ValueError()
Output:
45
0 <PyQt4.QtGui.QGraphicsPathItem object at 0x3585270>
1 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356ca68>
2 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356ce20>
3 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cc88>
4 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cc00>
5 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356caf0>
6 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cb78>
However, once I close the application, the following message is printed:
Exception ValueError: ValueError() in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
I tried printing threading.currentThread(), but it returns the same thread whether it's called in- or outside the monkey-patched drawItems method.
In your code, this is likely due to the fact that you pass options (which is a list of style options objects) to the individual items rather than the respective option object. Using this code should give you the correct results:
def drawItems(self, painter, items, options):
for item, option in zip(items, options):
print "Processing", item
# ... Do checking ...
item.paint(painter, option, self.target)
Also, you say the self.target is the scene object. The documentation for paint() says:
This function, which is usually called by QGraphicsView, paints the contents of an item in local coordinates. ... The widget argument is optional. If provided, it points to the widget that is being painted on; otherwise, it is 0. For cached painting, widget is always 0.
and the type is QWidget*. QGraphicsScene inherits from QObject and is not a widget, so it is likely that this is wrong, too.
Still, the fact that the exception is not reported at all, or not right away suggests some foul play, you should contact the maintainer.
A: The reason why the loop suddenly exits is that an Exception is thrown. Python doesn't handle it (there is no try: block), so it's passed to the called (Qt's C++ code) which has no idea about Python exceptions, so it's lost.
Add a try/except around the loop and you should see the reason why this happens.
Note: Since Python 2.4, you should not override methods this way anymore.
Instead, you must derive a new class from QGraphicsView and add your drawItems() method to this new class. This will replace the original method properly.
Don't forget to call super() in the __init__ method! Otherwise, your object won't work properly.
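To illustrate the difference in plain Python (a PyQt-free sketch; the `View` class below just stands in for QGraphicsView), assigning a function onto an instance replaces the method only for that object and bypasses normal method binding, while subclassing overrides it properly:

```python
class View:
    """Stands in for QGraphicsView."""
    def draw_items(self, items):
        return ["default:%s" % i for i in items]


# Monkey-patching an instance (the discouraged way): the replacement is a
# plain function stored on the object, so it shadows the class method and
# does not participate in normal attribute/method resolution.
patched = View()
patched.draw_items = lambda items: ["custom:%s" % i for i in items]


# Subclassing (the recommended way): the override is a real method, can
# call super(), and applies to every instance of the subclass.
class MyView(View):
    def draw_items(self, items):
        return ["custom:%s" % i for i in items]
```

With PyQt, the same pattern means `class MyGraphicsView(QGraphicsView)` with a `drawItems()` method, plus a `super().__init__()` call in `__init__`.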
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1142970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: ViewPager and Fragments: Why do I always see the same thing? I have created a ViewPager with three "pages". The code is this
MainActivity.java
public class MainActivity extends FragmentActivity {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
ViewPager viewPager = (ViewPager) findViewById(R.id.pager);
PagerTabStrip pagerTabStrip = (PagerTabStrip) findViewById(R.id.pager_tab_strip);
FragmentPagerAdapter fragmentPagerAdapter = new MyFragmentPagerAdapter(
getSupportFragmentManager());
viewPager.setAdapter(fragmentPagerAdapter);
pagerTabStrip.setDrawFullUnderline(true);
pagerTabStrip.setTabIndicatorColor(Color.DKGRAY);
}
}
MyFragmentPageAdapter.java
public class MyFragmentPagerAdapter extends FragmentPagerAdapter {
private String[] pageTitle = {
"Page1", "Page2", "Page3"
};
public MyFragmentPagerAdapter(FragmentManager fragmentManager) {
super(fragmentManager);
}
@Override
public Fragment getItem(int position) {
Fragment fragment = new PageFragment();
Bundle arguments = new Bundle();
arguments.putString("pageIndex", Integer.toString(position + 1));
fragment.setArguments(arguments);
return fragment;
}
@Override
public int getCount() {
return pageTitle.length;
}
@Override
public CharSequence getPageTitle(int position) {
return pageTitle[position];
}
}
PageFragment.java
public class PageFragment extends Fragment {
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
View view = getActivity().getLayoutInflater().inflate(R.layout.fragment_page, null);
return view;
}
}
activity_main.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/RelativeLayout1"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
>
<android.support.v4.view.ViewPager xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/pager"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity" >
<android.support.v4.view.PagerTabStrip
android:id="@+id/pager_tab_strip"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="top"
android:background="#33B5E5"
android:textColor="#FFFFFF"
android:paddingTop="10dp"
android:paddingBottom="10dp" />
</android.support.v4.view.ViewPager>
</RelativeLayout>
fragment_page.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="wrap_content"
android:layout_height="wrap_content" >
<TextView
android:id="@+id/textView1"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:layout_marginTop="20dp"
android:textSize="16sp"
android:textStyle="italic"
android:gravity="center_horizontal"
android:textColor="@color/red"
android:text="@string/inf" />
<TextView
android:id="@+id/textView2"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:layout_marginTop="60dp"
android:textSize="28sp"
android:gravity="center_horizontal"
android:textStyle="bold"
android:text="@string/ben" />
<TextView
android:id="@+id/textView3"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:layout_marginTop="130dp"
android:gravity="center_horizontal"
android:textSize="18sp"
android:text="Prova"
/>
<Button
android:id="@+id/button1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_vertical|center_horizontal"
android:layout_centerInParent="true"
android:text="@string/verifica" />
</RelativeLayout>
But now I see the same thing on all three pages. If, for example, on page 2 I want a TextView with the text "This is page 2", on the third page a TextView with the text "This is page 3", and on the first page two TextViews with the button... how can I do that? I'm going crazy, please show me the code to do this. Please.
A: Once you inflate PageFragment's layout you need to get a reference of the TextView so you can display the position on it via the Bundle you are passing using setArguments(). Use your view variable inside onCreateView() to get a reference of the TextView. (i.e. view.findViewById()). Then use getArguments() in your PageFragment to retrieve the Bundle with that has position, and set the TextView to that value.
A: this is a good example for what you want.
A: Just create a function in your page fragment class to configure the elements, and modify onCreateView to attach the children.
public class PageFragment extends Fragment {
    TextView tv1;
    int position;
    // ...
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.fragment_page, container, false);
        // Attach your children
        tv1 = (TextView) view.findViewById(R.id.textView1);
        // The view exists only now, so apply the stored position here
        tv1.setText("This is " + position);
        return view;
    }
    public void configure(int position) {
        // Only store the value: configure() is called from getItem(),
        // before onCreateView() has run, so tv1 would still be null here
        this.position = position;
    }
}
Then just call the function configure in getItem function
@Override
public Fragment getItem(int position) {
PageFragment fr = new PageFragment();
fr.configure(position + 1);
return fr;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18829893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What is difference between the KFP SDK v2, v2 namespace, and v2 compatible mode? What is the difference between KFP SDK v2, the v2 namespace in KFP SDK v1, and v2 compatible mode?
A: The KFP SDK has two major versions: v1.8.x and v2.x.x (in pre-release at the time of writing this).
KFP SDK v2.x.x compiles pipelines and components to IR YAML [example], a platform neutral pipeline representation format. It can be run on the KFP open source backend or on other platforms, such as Google Cloud Vertex AI Pipelines.
KFP SDK v1.8.x, by default, compiles pipelines and components to Argo Workflow YAML. Argo Workflow YAML is executed on Kubernetes and is not platform neutral.
KFP SDK v1.8.x provides two ways to author pipelines using v2 Python syntax:
KFP SDK v2-compatible mode is a feature in KFP SDK v1.8.x which permits using v2 Python authoring syntax within KFP SDK v1 but compiles to Argo Workflow YAML. v2-compatible mode is deprecated and should not be used.
The KFP SDK v2 namespace in KFP SDK v1.8.x (from kfp.v2 import dsl, compiler) permits using v2 Python authoring syntax within KFP SDK v1 and compiles to IR YAML [usage example]. While this mode is not deprecated, users should prefer authoring IR YAML via the pre-release KFP SDK v2.x.x.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73964749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using the split function in Python I am working with the CSV module, and I am writing a simple program which takes the names of several authors listed in the file, and formats them in this manner: john.doe
So far, I've achieved the results that I want, but I am having trouble getting the code to exclude titles such as "Mr.", "Mrs.", etc. I've been thinking about using the split function, but I am not sure if this would be a good use for it.
Any suggestions? Thanks in advance!
Here's my code so far:
import csv
books = csv.reader(open("books.csv","rU"))
for row in books:
print '.'.join ([item.lower() for item in [row[index] for index in (1, 0)]])
A: It depends on how messy the strings are; in the worst cases this regexp-based solution should do the job:
import re
x=re.compile(r"^\s*(mr|mrs|ms|miss)[\.\s]+", flags=re.IGNORECASE)
x.sub("", text)
(I'm using re.compile() here since for some reason the Python 2.6 re.sub doesn't accept the flags= kwarg; it was only added in 2.7.)
UPDATE: I wrote some code to test that and, although I wasn't able to figure out a way to automate results checking, it looks like that's working fine.. This is the test code:
import re
x=re.compile(r"^\s*(mr|mrs|ms|miss)[\.\s]+", flags=re.IGNORECASE)
names = ["".join([a,b,c,d]) for a in ['', ' ', ' ', '..', 'X'] for b in ['mr', 'Mr', 'miss', 'Miss', 'mrs', 'Mrs', 'ms', 'Ms'] for c in ['', '.', '. ', ' '] for d in ['Aaaaa', 'Aaaa Bbbb', 'Aaa Bbb Ccc', ' aa ']]
print "\n".join([" => ".join((n,x.sub('',n))) for n in names])
A: Depending on the complexity of your data and the scope of your needs you may be able to get away with something as simple as stripping titles from the lines in the csv using replace() as you iterate over them.
Something along the lines of:
titles = ["Mr.", "Mrs.", "Ms", "Dr"] #and so on
for line in lines:
line_data = line
for title in titles:
line_data = line_data.replace(title,"")
#your code for processing the line
This may not be the most efficient method, but depending on your needs may be a good fit.
How this could work with the code you posted (I am guessing the Mr./Mrs. is part of column 1, the first name):
import csv
books = csv.reader(open("books.csv","rU"))
for row in books:
first_name = row[1]
last_name = row[0]
for title in titles:
first_name = first_name.replace(title,"")
    print '.'.join((first_name, last_name)).lower()
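Putting the two answers together, a self-contained sketch (Python 3 here; the title list and the regex are illustrative, not exhaustive):

```python
import re

# Strip common honorifics from the start of a first name, then format the
# pair as "first.last", as in the question's "john.doe" example.
TITLE_RE = re.compile(r"^\s*(mr|mrs|ms|miss|dr)[.\s]+", flags=re.IGNORECASE)


def author_key(first_name, last_name):
    cleaned_first = TITLE_RE.sub("", first_name).strip()
    return "{0}.{1}".format(cleaned_first, last_name.strip()).lower()
```

Note the ordering inside the alternation: "mr" alone would not swallow "mrs" because the `[.\s]+` that follows must still match, forcing the regex to backtrack to the longer alternative.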
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8498514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Cannot reach spring-boot-starter-parent dependencies I'm using https://start.spring.io/ to create a Spring Boot project, and in development I run into version conflicts between the Spring Boot starters and these dependencies:
<dependency>
<groupId>org.springframework.ws</groupId>
<artifactId>spring-ws-support</artifactId>
<version>3.0.10.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-oxm</artifactId>
<version>5.3.2</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.5.13</version>
</dependency>
<dependency>
<groupId>com.sun.xml.messaging.saaj</groupId>
<artifactId>saaj-impl</artifactId>
<version>1.5.2</version>
</dependency>
I get warning messages:
Overriding managed version 5.3.1 for spring-oxm
Overriding managed version 5.3.1 for spring-oxm
Duplicating managed version 4.5.13 for httpclient
Duplicating managed version 1.5.2 for saaj-impl
I've searched for how to fix it; one workaround some people refer to is adding <!--$NO-MVN-MAN-VER$ --> at the end of the </version> tag to ignore the warning.
My question is how to make these (and other future dependencies) be recognized by Spring Boot's dependency management?
EDIT #1: pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.0</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>xx.xx.xxx</groupId>
<artifactId>yyyy</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>yyyy</name>
<description>zzzzz</description>
<properties>
<java.version>11</java.version>
<apache.cxf.version>3.4.1</apache.cxf.version>
<apache.httpcomponents.version>4.5.13</apache.httpcomponents.version>
<jaxb2.maven2.version>0.14.0</jaxb2.maven2.version>
<springframework.version>5.3.2</springframework.version>
<springframework.ws.version>3.0.10.RELEASE</springframework.ws.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-rt-frontend-jaxws</artifactId>
<version>${apache.cxf.version}</version>
</dependency>
<dependency>
<groupId>org.jvnet.jaxb2.maven2</groupId>
<artifactId>maven-jaxb2-plugin</artifactId>
<version>${jaxb2.maven2.version}</version>
<type>maven-plugin</type>
</dependency>
<dependency>
<groupId>org.springframework.ws</groupId>
<artifactId>spring-ws-support</artifactId>
<version>${springframework.ws.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-oxm</artifactId>
<version>${springframework.version}</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>${apache.httpcomponents.version}</version>
</dependency>
</dependencies>
</project>
A: The solution is to remove the <version>...</version> elements and let Spring Boot handle the versions.
Thanks to tgdavies.
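For example, the httpclient dependency becomes the snippet below. Note this only helps for artifacts that the spring-boot-starter-parent actually manages (httpclient is one of them); dependencies outside Boot's dependency management still need an explicit version.

```xml
<!-- No <version> element: the version is inherited from the
     spring-boot-starter-parent's dependency management. -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
</dependency>
```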
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65881656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Xcode 6 infinite building I have a really strange problem. I'm writing a Swift iOS project. It has several Swift files. One of them is an internet requester, which contains some small methods (5-6 lines each). If I have 8 methods, my project builds in a second and runs well. But if I add one extra method (even an empty one), the build gets stuck at "Compiling Swift files" for an infinite time, my PC starts lagging badly, and sometimes SourceKitService terminates. Can you help me?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25482501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Shim config not supported in Node, may or may not work I am working on a project which works fine in the browser; now we are trying to run it on the server side using Node.js.
I have below configurations :
*
*node : v4.2.1
*npm : v2.14.7
and when I try to run my project on Node.js, I get the error:
Shim config not supported in Node, may or may not work
Since the modules and dependencies (AMD) are working fine in the browser, I assume the shim config is correct.
Please let me know if I am missing something?
https://github.com/jrburke/requirejs/issues/1443
Regards
Manish
A:
Since the modules and dependencies (AMD) are working fine in the browser, I assume the shim config is correct.
That's an incorrect assumption. The problem is that Node.js operates with a set of basic assumptions that are very different from how browsers work. Consider this statement:
var foo = "something";
If you execute this at the top of your scope in Node.js, you've created a variable which is local to the file Node is executing. If you really want to make it global, then you have to explicitly shove it into global.
Now, put the same statement at the top of the scope of a script you load in a browser with the script element. The same statement creates a global variable. (The variable is global whether or not var is present.)
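A quick way to see the difference (a minimal sketch; save it to a file and run it with node):

```javascript
// In a file run by Node, top-level `var` declares a module-local variable;
// it does NOT become a property of the global object, unlike a browser <script>.
var foo = "something";

console.log(foo);        // "something" - visible inside this module
console.log(global.foo); // undefined - nothing leaked to the global space
```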
RequireJS' shim configuration is used for scripts that expect the second behavior. They expect a) that anything they declare at the top of their scope is leaked to the global space and b) that anything that has been leaked to the global space by scripts they depend on is available in the global space. Both expectations are almost always false in Node. (It would not be impossible for a module designed for node to manipulate global but that's very rare.)
The author of RequireJS explained in this issue report that he does not see it advisable for RequireJS to try to replicate the browser behavior in Node. I agree with him. If you want to run front-end code in the back-end you should replace those modules that need shim in the browser with modules that are designed to run in Node.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33407645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Authentication not working with Supabase and Node.js I am not sure even this is possible but I am trying to use Supabase with Node.js. I get the code from a React front-end, this link, which works okay.
Below is my code
client.js
const { createClient } = require('@supabase/supabase-js')
const supabaseUrl = "MY_URL"
const public_anon_key = "MY_ANON_KEY"
const supabase = createClient(supabaseUrl,public_anon_key)
exports.supabase = supabase
app.js
const { supabase } = require('./client')
async function signOut() {
/* sign the user out */
await supabase.auth.signOut();
}
async function signInWithGithub() {
/* authenticate with GitHub */
await supabase.auth.signIn({
provider: 'github'
});
}
async function printUser() {
const user = supabase.auth.user()
console.log(user.email)
}
signInWithGithub()
printUser()
signOut()
I am new to Node.js and I suspect something is wrong with how the Promises are handled.
Is it possible to retrieve the user data from Supabase, if so, what is my code missing?
Thanks
Edit : title
A: I don't know what Supabase does, but the problem seems to be Promise-related.
Try to change your app.js as follow
const { supabase } = require('./client')
function signOut() {
/* sign the user out */
supabase.auth.signOut();
}
function signInWithGithub() {
/* authenticate with GitHub */
return supabase.auth.signIn({
provider: 'github'
});
}
function printUser() {
const user = supabase.auth.user()
console.log(user.email)
}
signInWithGithub().then(() => {
printUser()
return signOut()
})
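The failure mode is easy to reproduce without Supabase at all. In this sketch, fakeSignIn is a hypothetical stand-in for the async sign-in call; the point is only the ordering:

```javascript
// Stand-in for an async sign-in: the user only exists after the promise resolves.
function fakeSignIn(session) {
  return new Promise(resolve =>
    setTimeout(() => { session.user = { email: "user@example.com" }; resolve(); }, 10));
}

const session = {};
fakeSignIn(session);       // fire-and-forget, like the original app.js
console.log(session.user); // undefined - the sign-in hasn't resolved yet

fakeSignIn(session).then(() => {
  console.log(session.user.email); // "user@example.com" - safe to read now
});
```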
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72042412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: C++ and member function pointers
I'm trying to send an address of a member function to my "Thread" class so I can activate it from there.
I read that I can use functors, but I want it to be generic so that I can pass it to my "Thread" constructor, and functors need templates, so that won't be enough for me...
Does anybody know a way to do this?
thanks :)
A: If I may suggest a different approach: derive from your thread class and make a virtual Run() function.
The reason is that although it is possible to call a function pointer from the static thread entry function, you face problem after problem. For example, you can solve the problem of having the right function signature with templates and variadic parameters, but it is not of much avail, because the entry function won't know what to send to your function.
On the other hand, deriving from Thread is easy and straightforward. You put into the contructor whatever the thread needs to know. Or, optionally, you can call any number of other functions and set any number of members before you create the thread. Once you do create the thread, the static thread entry function will simply call the virtual Run function. Now... the Run function is part of the thread object, so it knows anything the class knows -- no problem.
The extra overhead of a single virtual function call and of one pointer in the vtable is also ridiculously small compared to how easy it is.
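A minimal sketch of the pattern (Thread here is a hypothetical stand-in; a real implementation would spawn an OS thread in start() and pass this as the thread argument):

```cpp
#include <cassert>

class Thread {
public:
    virtual ~Thread() = default;
    // A real implementation would create an OS thread here and have it call
    // entry(this); for the sketch we just call it directly.
    void start() { entry(this); }
protected:
    virtual void run() = 0;  // derived classes put their work here
private:
    static void entry(Thread* self) { self->run(); }  // static thread entry point
};

class Worker : public Thread {
public:
    bool ran = false;
protected:
    void run() override { ran = true; }  // full access to the object's members
};
```

Because run() is a virtual member function, it can use whatever state the constructor stored in the object, which is exactly why the static entry function never needs to know what to pass.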
A: I suggest you look at the treatment of this topic in the C++ FAQ Lite. In short, pointers to member functions are problematic, but there are several work-arounds, as well as a number of reasons why they should be avoided for certain purposes.
A: Finally I decided to use a stub function as suggested in this thread:
Create thread is not accepting the member function
thanks everbody :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5316943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Why does (ps -f) create no subshell but a separate process? I need some help because I don't get something. From what I read on the Internet, a subshell is created when we execute a shell script or when we run commands in parentheses: ( )
I tried to test this with a script which contains only the following command:
ps -f
When I run it I see the following result:
UID PID PPID C STIME TTY TIME CMD
me 2213 2160 0 08:53 pts/14 00:00:00 bash
me 3832 2213 0 18:41 pts/14 00:00:00 bash
me 3833 3832 0 18:41 pts/14 00:00:00 ps -f
Which is good, because I see that my bash process has spawned another bash process for my script.
But when I do:
( ps -f )
it produces:
UID PID PPID C STIME TTY TIME CMD
me 2213 2160 0 08:53 pts/14 00:00:00 bash
me 3840 2213 0 18:46 pts/14 00:00:00 ps -f
So if parentheses spawn a subshell, why is it not shown in the process list? And why is ps -f counted as another process? Does every command run as a separate process?
A: It seems you've caught bash in a little bit of an optimization. If a subshell contains only a single command, why really make it a subshell?
$ ( ps -f )
UID PID PPID C STIME TTY TIME CMD
jovalko 29393 24133 0 12:05 pts/10 00:00:00 bash
jovalko 29555 29393 0 12:07 pts/10 00:00:00 ps -f
However, add a second command, say : (the bash null command, which does nothing) and this is the result:
$ ( ps -f ; : )
UID PID PPID C STIME TTY TIME CMD
jovalko 29393 24133 0 12:05 pts/10 00:00:00 bash
jovalko 29565 29393 0 12:08 pts/10 00:00:00 bash
jovalko 29566 29565 0 12:08 pts/10 00:00:00 ps -f
One of the main reasons to use a subshell is that you can perform operations like I/O redirection on a group of commands instead of a single command, but if your subshell contains only a single command there's not much reason to really fork a new bash process first.
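For example, a single redirection can cover every command in the group:

```shell
# One redirection applies to the whole subshell group, not to each command.
tmp=$(mktemp)
( echo one; echo two ) > "$tmp"
cat "$tmp"      # both lines ended up in the same file
rm -f "$tmp"
```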
As to ps counting as a process, it varies. Many commands you use, like ls, grep and awk, are external programs. But there are builtins too, like cd and kill.
You can determine which a command is in bash using the type command:
$ type ps
ps is hashed (/bin/ps)
$ type cd
cd is a shell builtin
A: The main part of the question is:
Does every command run as a separate process?
YES! Every command that isn't built into bash (like declare and such) runs as a separate process. How does it work?
When you type ps and press Enter, bash analyzes what you typed, does the usual things such as globbing, variable expansion and so on, and finally, when it is an external command:
*
*bash forks itself.
Forking means that immediately after the fork you have two identical bash processes (each with a different process ID, or PID) - called the "parent" and the "child". The only difference between these two running bash processes is that the parent gets the child's PID as the return value of the fork, while the child doesn't know the parent's PID (for the child, fork returns 0).
*
*after the fork (bash is written this way), the child replaces itself with the new program image (such as ps) using the exec call.
*after this, the child bash of course doesn't exist anymore; only the newly executed command - e.g. ps - is running.
Of course, when bash is about to fork itself, it does many other things, like setting up I/O redirections, opening and closing file handles, changing signal handling for the child, and much more.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25689656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: customizing the NSPredicateEditor I want to customize the NSPredicateEditorRowTemplate, but I don't want the minus (-) button on the first, second, and third row templates, like in the Finder application.
A: If you tell the NSPredicateEditor that it cannot remove all the rows in the editor, then the editor will automatically remove the (-) button when necessary.
You can do this by unchecking the "Can Remove All Rows" checkbox when editing the predicate editor in a xib, or by doing it programmatically with the -setCanRemoveAllRows: method.
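Programmatically that is a one-liner (assuming an outlet named predicateEditor; canRemoveAllRows is inherited from NSRuleEditor):

```objc
// Keep at least one row; the editor then hides the (-) button when only one row remains.
[predicateEditor setCanRemoveAllRows:NO];
```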
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7607032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Python Selenium: How can I print the link? How can I print the value of the href attribute?
<a href="aaaaa.pdf"></a>
How can I print the link aaaaa.pdf with python selenium?
HTML:
<div class="xxxx">
<a href="aaaaa.pdf"></a>
</div>
A: You can do like this:
print(driver.find_element_by_css_selector(".xxxx a").get_attribute('href'))
A: Try the below:
pName = driver.find_element_by_css_selector(".xxxx a").text
print(pName)
or
pName = driver.find_element_by_css_selector(".xxxx a").get_attribute("href")
print(pName)
A: div.xxxx a
First, check whether this CSS selector matches the desired element.
Steps to check:
Press F12 in Chrome -> go to element section -> do a CTRL + F -> then paste the css and see, if your desired element is getting highlighted with 1/1 matching node.
If yes, then use explicit waits:
wait = WebDriverWait(driver, 20)
print(wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div.xxxx a"))).get_attribute('href'))
Imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
A: The value of the href attribute i.e. aaaaa.pdf is within the <a> tag which is the only descendant of the <div> tag.
Solution
To print the value of the href attribute you can use either of the following locator strategies:
*
*Using css_selector:
print(driver.find_element(By.CSS_SELECTOR, "div.xxxx > a").get_attribute("href"))
*Using xpath:
print(driver.find_element(By.XPATH, "//div[@class='xxxx']/a").get_attribute("href"))
To extract the value ideally you have to induce WebDriverWait for the visibility_of_element_located() and you can use either of the following Locator Strategies:
*
*Using CSS_SELECTOR:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.xxxx > a"))).get_attribute("href"))
*Using XPATH:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='xxxx']/a"))).get_attribute("href"))
*Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71938299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to get Cartesian product from different group member? As the title mentioned, I have some problems in C++.
If I have a std::vector<std::vector<int>> tmpvec
vector < vector <int> > tmpvec = {
{1,2,3},
{4,5},
{6,7,8,9},
{10,11}
};
how can I generate all possible combinations, taking one element from each inner vector?
1,4,6,10
1,4,6,11
1,4,7,10
1,4,7,11
......
There are 48 different combinations in total (3 × 2 × 4 × 2).
A: You may do
bool increase(const std::vector<std::vector<int>>& v, std::vector<std::size_t>& it)
{
for (std::size_t i = 0, size = it.size(); i != size; ++i) {
const std::size_t index = size - 1 - i;
++it[index];
if (it[index] >= v[index].size()) {
it[index] = 0;
} else {
return true;
}
}
return false;
}
void do_job(const std::vector<std::vector<int>>& v,
const std::vector<std::size_t>& it)
{
for (std::size_t i = 0; i != it.size(); ++i) {
// TODO: manage case where v[i] is empty if relevant.
std::cout << v[i][it[i]] << " ";
}
std::cout << std::endl;
}
void iterate(const std::vector<std::vector<int>>& v)
{
std::vector<std::size_t> it(v.size(), 0u);
do {
do_job(v, it);
} while (increase(v, it));
}
Live Demo
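As a sanity check, the odometer-style increase above returns true until every index position wraps, so it visits exactly 3 × 2 × 4 × 2 = 48 combinations for the question's input. A self-contained copy of the function for testing:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Odometer-style increment, as in the answer: bump the rightmost index and
// carry leftwards whenever a position wraps around.
bool increase(const std::vector<std::vector<int>>& v, std::vector<std::size_t>& it)
{
    for (std::size_t i = 0, size = it.size(); i != size; ++i) {
        const std::size_t index = size - 1 - i;
        ++it[index];
        if (it[index] >= v[index].size()) {
            it[index] = 0;   // wrapped: carry into the position to the left
        } else {
            return true;     // produced a fresh combination
        }
    }
    return false;            // every position wrapped: all combinations visited
}
```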
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/32049360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: How to correctly move ownership from raw pointer to std::unique_ptr? My approach is:
class SomeClass
{
std::vector<std::unique_ptr<MyObject>> myObjects;
public:
void takeOwnership(MyObject *nowItsReallyMyObject)
{
myObjects.emplace_back(std::move(nowItsReallyMyObject));
}
};
Am I doing everything correctly or are there any better solutions?
A: You should accept the unique_ptr from the get-go:
class SomeClass
{
std::vector<std::unique_ptr<MyObject>> myObjects;
public:
// tells the world you 0wNz this object
void takeOwnership(std::unique_ptr<MyObject> myObject)
{
myObjects.push_back(std::move(myObject));
}
};
This way you make it clear you take ownership and you also help other programmers to avoid using raw pointers.
Further reading: CppCoreGuidelines R.32
A: The move is redundant.
Myself, I'd do this:
void takeOwnership(std::unique_ptr<MyObject> nowItsReallyMyObject)
{
myObjects.emplace_back(std::move(nowItsReallyMyObject));
}
because I would want to move the unique_ptr ownership semantics as far "out" as possible.
I might write this utility function:
template<class T>
std::unique_ptr<T> wrap_in_unique( T* t ) {
return std::unique_ptr<T>(t);
}
so callers can:
foo.takeOwnership(wrap_in_unique(some_ptr));
but even better, they can push the borders of unique_ptr semantics out as far as they reasonably can.
I might even do:
template<class T>
std::unique_ptr<T> wrap_in_unique( T*&& t ) {
auto* tmp = t;
t = 0;
return std::unique_ptr<T>(tmp);
}
template<class T>
std::unique_ptr<T> wrap_in_unique( std::unique_ptr<T> t ) {
return std::move(t);
}
which lets callers transition their T* into unique_ptrs more easily. Every T* -> unique_ptr<T> transition is now wrapped in a std::move, and the source pointer is zeroed.
So if they had
struct I_am_legacy {
T* I_own_this = 0;
void GiveMyStuffTo( SomeClass& sc ) {
sc.takeOwnership( wrap_in_unique(std::move(I_own_this)) );
}
};
the code can be transformed into:
struct I_am_legacy {
std::unique_ptr<T> I_own_this;
void GiveMyStuffTo( SomeClass& sc ) {
sc.takeOwnership( wrap_in_unique(std::move(I_own_this)) );
}
};
and it still compiles and works the same. (Other interaction with I_own_this may have to change, but part of it will already be unique_ptr compatible).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43857590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Delete a record in MySql database by clicking a dynamically generated button in Winforms I'm new to programming and hoping someone will help me with my problem. I'm developing an e-commerce application and looking to add buttons to a dynamically generated list of items in the cart. I used a loop statement to go through each row of the table and show the results but I don't know how to remove a record once the user clicks a button. Here is the code for the cart. Please advise. end result screenshot
private void CartPage_Load(object sender, EventArgs e)
{
DB db = new DB();
DataTable table = new DataTable();
MySqlDataAdapter adapter = new MySqlDataAdapter();
MySqlCommand command = new MySqlCommand("SELECT * FROM `Product_Info` WHERE `Cart_Number` = @cart", db.getConnection());
command.Parameters.Add("@cart", MySqlDbType.Int32).Value = Global.cart;
adapter.SelectCommand = command;
adapter.Fill(table);
int rowcount = table.Rows.Count;
if (rowcount > 0)
{
//MessageBox.Show("You have products");
int Left = 45;
int Top = 100;
int rownumb = 0;
for (int i = 0; i < rowcount; i++)
{
Label ProdName_label = new Label();
this.Controls.Add(ProdName_label);
Label Name_label = new Label();
this.Controls.Add(Name_label);
Label ProdNo_label = new Label();
this.Controls.Add(ProdNo_label);
Label No_label = new Label();
this.Controls.Add(No_label);
Label ProdQty_label = new Label();
this.Controls.Add(ProdQty_label);
Label Qty_label = new Label();
this.Controls.Add(Qty_label);
Label ProdPrice_label = new Label();
this.Controls.Add(ProdPrice_label);
Label Price_label = new Label();
this.Controls.Add(Price_label);
Label PrdSub_label = new Label();
this.Controls.Add(PrdSub_label);
Label Sub_label = new Label();
this.Controls.Add(Sub_label);
Button Remove_button = new Button();
this.Controls.Add(Remove_button);
Panel Product_panel = new Panel();
this.Controls.Add(Product_panel);
Product_panel.BackColor = System.Drawing.Color.White;
Product_panel.Controls.Add(Remove_button);
Product_panel.Controls.Add(PrdSub_label);
Product_panel.Controls.Add(Sub_label);
Product_panel.Controls.Add(ProdNo_label);
Product_panel.Controls.Add(No_label);
Product_panel.Controls.Add(ProdQty_label);
Product_panel.Controls.Add(Qty_label);
Product_panel.Controls.Add(ProdPrice_label);
Product_panel.Controls.Add(Price_label);
Product_panel.Controls.Add(ProdName_label);
Product_panel.Controls.Add(Name_label);
Product_panel.Location = new System.Drawing.Point(Left, Top);
Product_panel.Name = "Product_panel";
Product_panel.Size = new System.Drawing.Size(200, 200);
Product_panel.TabIndex = 2;
//
// ProdName_label
//
ProdName_label.AutoSize = true;
ProdName_label.Location = new System.Drawing.Point(28, 25);
ProdName_label.Name = "ProdName_label";
ProdName_label.Size = new System.Drawing.Size(195, 32);
ProdName_label.TabIndex = 3;
ProdName_label.Text = "Product Name:";
Name_label.AutoSize = true;
Name_label.Location = new System.Drawing.Point(145, 25);
Name_label.Name = "Name_label";
Name_label.Size = new System.Drawing.Size(195, 32);
Name_label.TabIndex = 3;
Name_label.Text = table.Rows[rownumb]["Prod_Name"].ToString();
////
//// ProdNo_label
////
ProdNo_label.AutoSize = true;
ProdNo_label.Location = new System.Drawing.Point(28, 45);
ProdNo_label.Name = "ProdNo_label";
ProdNo_label.Size = new System.Drawing.Size(220, 32);
ProdNo_label.TabIndex = 4;
ProdNo_label.Text = "Product Number:";
No_label.AutoSize = true;
No_label.Location = new System.Drawing.Point(145, 45);
No_label.Name = "No_label";
No_label.Size = new System.Drawing.Size(220, 32);
No_label.TabIndex = 4;
No_label.Text = "1";
No_label.Text = table.Rows[rownumb]["Prod_Number"].ToString();
//
// ProdPrice_label
//
ProdPrice_label.AutoSize = true;
ProdPrice_label.Location = new System.Drawing.Point(28, 65);
ProdPrice_label.Name = "ProdPrice_label";
ProdPrice_label.Size = new System.Drawing.Size(80, 32);
ProdPrice_label.TabIndex = 4;
ProdPrice_label.Text = "Price:";
Price_label.AutoSize = true;
Price_label.Location = new System.Drawing.Point(145, 65);
Price_label.Name = "Price_label";
Price_label.Size = new System.Drawing.Size(80, 32);
Price_label.TabIndex = 4;
Price_label.Text = table.Rows[rownumb]["Prod_Price"].ToString();
////
//// ProdQty_label
////
ProdQty_label.AutoSize = true;
ProdQty_label.Location = new System.Drawing.Point(28, 85);
ProdQty_label.Name = "ProdQty_label";
ProdQty_label.Size = new System.Drawing.Size(173, 32);
ProdQty_label.TabIndex = 4;
ProdQty_label.Text = "Product Q-ty:";
Qty_label.AutoSize = true;
Qty_label.Location = new System.Drawing.Point(145, 85);
Qty_label.Name = "ProdQty_label";
Qty_label.Size = new System.Drawing.Size(173, 32);
Qty_label.TabIndex = 4;
Qty_label.Text = table.Rows[rownumb]["Prod_Qty"].ToString();
////
//// PrdSub_label
////
PrdSub_label.AutoSize = true;
PrdSub_label.Location = new System.Drawing.Point(28, 105);
PrdSub_label.Name = "PrdSub_label";
PrdSub_label.Size = new System.Drawing.Size(226, 32);
PrdSub_label.TabIndex = 5;
PrdSub_label.Text = "Product Subtotal:";
Sub_label.AutoSize = true;
Sub_label.Location = new System.Drawing.Point(145, 105);
Sub_label.Name = "Sub_label";
Sub_label.Size = new System.Drawing.Size(226, 32);
Sub_label.TabIndex = 5;
Sub_label.Text = table.Rows[rownumb]["Prod_Subtotal"].ToString();
////
//// Remove_button
////
Remove_button.BackColor = System.Drawing.Color.LightGreen;
Remove_button.Location = new System.Drawing.Point(40, 150);
Remove_button.Name = "Remove_button" + Convert.ToString(rownumb);
Remove_button.Size = new System.Drawing.Size(120, 34);
Remove_button.TabIndex = 3;
Remove_button.Text = "Remove";
Remove_button.UseVisualStyleBackColor = false;
Remove_button.Click += new System.EventHandler(Remove_button_Click);
Top += 220;
rownumb += 1;
}
}
else
{
MessageBox.Show("You don't have any products in your cart");
}
}
private void Remove_button_Click(object sender, EventArgs e)
{
MessageBox.Show("Are you sure you want to remove this item?", "Remove", MessageBoxButtons.OKCancel, MessageBoxIcon.Question);
}
A: Your issue apparently is "how do I know which button was clicked?".
You already know how to create a button, add it to the form and attach a click-handler:
Button Remove_button = new Button();
this.Controls.Add(Remove_button);
Remove_button.Name = "Remove_button" + Convert.ToString(rownumb);
Remove_button.Click += new System.EventHandler(Remove_button_Click);
To know which button was clicked, you can set its .Tag property:
Remove_button.Tag = Convert.ToString(rownumb); // or whatever the primary key for this row is
Next, in the click handler, you need to read that property.
Fortunately, when the button calls the handler, it uses (a reference to) itself as the sender parameter.
So when you cast that sender to a Button, you get the real button that originated the command and you can access all its properties, such as .Tag:
private void Remove_button_Click(object sender, EventArgs e)
{
Button clickedButton = (Button)sender;
string rownumb = (string)clickedButton.Tag;
DialogResult selection = MessageBox.Show("Are you sure you want to remove item #" + rownumb + "?",
"Remove", MessageBoxButtons.OKCancel, MessageBoxIcon.Question);
// TODO execute a DELETE command for this row when the user clicked OK
}
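A hedged sketch of that TODO, reusing the question's DB helper and column names (Product_Info, Prod_Number, and Cart_Number come from the question; everything else is an assumption):

```csharp
private void RemoveCartItem(string prodNumber)
{
    // Hypothetical: parameterized DELETE against the same table the cart loads from.
    DB db = new DB();
    using (MySqlCommand command = new MySqlCommand(
        "DELETE FROM `Product_Info` WHERE `Prod_Number` = @prod AND `Cart_Number` = @cart",
        db.getConnection()))
    {
        command.Parameters.Add("@prod", MySqlDbType.VarChar).Value = prodNumber;
        command.Parameters.Add("@cart", MySqlDbType.Int32).Value = Global.cart;
        command.ExecuteNonQuery(); // assumes the helper returns an open connection
    }
    // Afterwards, remove the clicked panel (or rebuild the list) so the UI matches the DB.
}
```

This would be called from the click handler with the key read from the button's Tag, after the user confirms the message box.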
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68118973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: determining the character set to use my delphi 2009 app has a basic translation system that uses GNUGetText. i had used some win API calls to prepare the fonts. i thought it was working correctly until recently when someone from Malta had a problem with my app failing precisely in this area. my app is used globally. some of this code may have become obsolete since d2009 uses unicode.
is all of this truly necessary in order for my app to work in all locales?
TForm.Font.Charset
it's been my understanding i must set the TForm instance's Font.Charset according to the user's locale. is this correct?
TranslateCharsetInfo( ) win API function
delphi 2009's windows.pas says:
function TranslateCharsetInfo(var lpSrc: DWORD; var lpCs: TCharsetInfo;
dwFlags: DWORD): BOOL;
delphi 5's windows.pas says:
function TranslateCharsetInfo(var lpSrc: DWORD; var lpCs: TCharsetInfo;
dwFlags: DWORD): BOOL; stdcall;
from microsoft's MSDN:
BOOL TranslateCharsetInfo(
__inout DWORD FAR *lpSrc,
__out LPCHARSETINFO lpCs,
__in DWORD dwFlags
);
back when this code was written (back in delphi 5 days), the word was that the import of the function was incorrect and the correct way was:
function TranslateCharsetInfo(lpSrc: Pointer; var lpCs: TCharsetInfo;
dwFlags: DWORD): BOOL; stdcall; external gdi32;
notice that the d2009 windows.pas file copy is not stdcall. which declaration of TranslateCharsetInfo should i be using?
The code
that aside, essentially i've been doing the following:
var
Buffer : PChar;
iSize, iCodePage : integer;
rCharsetInfo: TCharsetInfo;
begin
// SysLocale.DefaultLCID = 1802
iSize := GetLocaleInfo(SysLocale.DefaultLCID, LOCALE_IDefaultAnsiCodePage,
nil, 0);
// size=14
GetMem(Buffer, iSize);
try
if GetLocaleInfo(SysLocale.DefaultLCID, LOCALE_IDefaultAnsiCodePage, Buffer,
iSize)=0 then
RaiseLastOSError;
// Buffer contains 0 so codepage = 0
iCodePage := StrToInt(string(Buffer));
finally
FreeMem(Buffer);
end;
// this function is not called according to MSDN's directions for
// TCI_SRCCODEPAGE and the call fails.
if not TranslateCharsetInfo(Pointer(iCodePage), rCharsetInfo,
TCI_SRCCODEPAGE) then
RaiseLastOSError;
// acts upon the form
Font.Charset:= rCharsetInfo.ciCharset;
end;
i just don't know enough about this...strangely enough, years ago when i wrote this, i was persuaded that it was working correctly. the results of...failing to check API call return code...
isn't there a smarter way to do all this? doesn't the RTL/VCL do most/all of this automatically? my instincts tell me i'm working too hard on this...
thank you for your help!
A: Actually, I'm not sure about Delphi 2009, but MSDN says:
Note that DEFAULT_CHARSET is not a real charset; rather, it is a constant akin to NULL that means "show characters in whatever charsets are available."
So my guess is that you just need to remove all the code that you mentioned, and it should work.
A: Not really an answer to this question, but a small note on possible memory corruption with this code under D2009+. GetLocaleInfo ("MSDN: Returns the number of characters retrieved in the locale data buffer...") counts CHARACTERS, not BYTES, so under D2009+ you MUST allocate 2 bytes for each character. The best way to do this is to write:
GetMem(Buffer, iSize * SizeOf(Char)); //This will be safe for all delphi versions
Without this you can get unpredictable AVs (D2009+): GetLocaleInfo can overwrite your memory, because you have allocated too small a buffer.
Also, I don't understand why you're trying to change the charset to the user's locale; I think you should change the charset to match your target translation (e.g. if your program is translated into Russian but running on an English OS, you need to set RUSSIAN_CHARSET, not ANSI_CHARSET). And under D2009+ I'm not sure this is needed at all, but I might be wrong.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1730926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: .net windows service local application data is different than in normal app In a normal console app I have this:
Environment.SpecialFolder.LocalApplicationData is C:\Users\Simon\AppData\Local\
In Windows service
Environment.SpecialFolder.LocalApplicationData is C:\Windows\system32\config\systemprofile\AppData\Local\
How can I get the same path in both types of application?
A: Remember that the services run under a different user profile (can be a LOCAL_SERVICE, NETWORK_SERVICE, etc.) If you'd like them to be the same, run the service under your user profile (You can specify this ServiceProcessInstaller.Account property when you create the installer, or in the Services manager of windows).
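In code, that looks roughly like the fragment below inside the service's installer class (the account name is a placeholder):

```csharp
using System.ServiceProcess;

// Hypothetical installer fragment: run the service under a specific user account
// so Environment.SpecialFolder.LocalApplicationData resolves to that user's profile.
ServiceProcessInstaller installer = new ServiceProcessInstaller();
installer.Account = ServiceAccount.User;
installer.Username = @"MYMACHINE\Simon"; // placeholder account
installer.Password = null;               // null makes installutil prompt for credentials
```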
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4247581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: What does the BluetoothGATTSetCharacteristicValue ReliableWriteContext parameter actually do? Using C++ VS2017 Win10...
I am testing some code to set a characteristic value on a BLE device. I had originally had the default function call
HRESULT hr = BluetoothGATTSetCharacteristicValue(
hBleService.get(),
_pGattCharacteristic,
gatt_value,
NULL,
BLUETOOTH_GATT_FLAG_NONE);
I had to switch the flag to BLUETOOTH_GATT_FLAG_WRITE_WITHOUT_RESPONSE to get any sets to take place even though IsWritable and IsReadable were the only 2 properties set showing true but that is a different story and another topic.
Anyway, before I found the problem with the flag I had tried to use the ReliableWriteContext parameter so the code changed to
HANDLE hDevice = _bleDeviceContext.getBleDeviceHandle();
BTH_LE_GATT_RELIABLE_WRITE_CONTEXT ReliableWriteContext = NULL;
HRESULT hr1 = BluetoothGATTBeginReliableWrite(hDevice,
&ReliableWriteContext,
BLUETOOTH_GATT_FLAG_NONE);
HRESULT hr = BluetoothGATTSetCharacteristicValue(
hBleService.get(),
_pGattCharacteristic,
gatt_value,
ReliableWriteContext,
BLUETOOTH_GATT_FLAG_WRITE_WITHOUT_RESPONSE);
if (NULL != ReliableWriteContext) {
BluetoothGATTEndReliableWrite(hDevice,
ReliableWriteContext,
BLUETOOTH_GATT_FLAG_NONE);
}
Once I fixed the flag issue my BluetoothGATTSetCharacteristicValue() function would work with either the NULL or the ReliableWriteContext parameters. No difference that I could see.
My question is,then, what does the ReliableWriteContext do exactly? MS docs only say:
The BluetoothGATTBeginReliableWrite function specifies that reliable
writes are about to begin.
Well that doesn't tell me anything. So should I keep the reliable writes, because the word "Reliable" sure sounds like something that I want? Or should I not use it, because it does not seem to be necessary?
Any insight would be appreciated.
Ed
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73240693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Create a list of selected objects from S3 bucket with Boto3 I have a S3 bucket with this structure:
data/date=1900-01-01-00/id=abc123def/file1.parquet
data/date=1900-01-01-00/id=ghi456jkl/file2.parquet
data/date=2021-07-11-00/id=mno789pqr/file3.parquet
data/date=2021-07-11-00/id=stu123vwy/file4.parquet
.
.
.
The files in the folder date=1900-01-01-00 are dummy files; the others are "real files" coming from data acquisition.
I would like to create a list of S3Objects containing just the first dummy file and the first real file.
This is the code I wrote:
def __data_list(self):
datastore_bucket = s3_resource.Bucket(S3_DATASTORE_BUCKET)
len_dummy_file = len(
list(
datastore_bucket.objects.filter(
Prefix="data/date=1900-01-01-00/"
)
)
)
data_list = list(
datastore_bucket.objects.filter(
Prefix="data/"
).limit(len_dummy_file + 1)
)
return [data_list[0], data_list[-1]]
I can't know the number of dummy files, and I could have thousands of real files, so reading the whole bucket could take a lot of time, which I want to avoid.
Does anyone know a better way to create the list?
A: You can do logic on the Key returned from the object list:
first_dummy = None
first_real = None
for object in s3_resource.Bucket(BUCKET_NAME).objects.filter(Prefix='data/'):
if not first_dummy and 'date=1900-01-01-00' in object.key:
first_dummy = object.key
elif not first_real and 'date=1900-01-01-00' not in object.key:
first_real = object.key
if first_dummy and first_real:
break
print(first_dummy, first_real)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68338713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: My script does not detect the redirection URL This is the case:
I needed a responsive HTML5 audio player, and the best one I found was made by a person called Osvalds.
The first problem I found was that the "autoplay" attribute played the same song twice in Firefox. I could fix this by adding a few lines of javascript (though the play button does not change the song to be played).
The second problem (and this is where I need help) is that the javascript fails with a redirecting URL, but it does work with the final URL.
I mean, my PHP variable returns a URL like:
"http://www.goear.com/action/sound/get/104fc33"
But this in turn redirects me to a final URL:
"http://live1.goear.com/listen/d6a9757a390b5d49b8584288d5589318/52eac3f1/sst2/mp3files/10102006/450929654ac4765a83324119603d02d6.mp3"
Ok, when I used a flash-based player, there was no problem. And when I use the "audio" tag without the script, there is no problem.
I tried to fix this by passing the PHP variable through cURL, and it worked!
But it turned out that my host does not support CURLOPT_FOLLOWLOCATION.
Another option was to look at the audio player's javascript source code.
This is my html:
<audio id="audiobox" preload="auto" controls loop>
<source src="http://live1.goear.com/listen/d941195f4a5f477381d8a95ba666a0cb/52eac666/sst2/mp3files/10102006/450929654ac4765a83324119603d02d6.mp3">
<script type="text/javascript">
function play() {
document.getElementById('audiobox').play();
}
play();
</script>
</audio>
<script src="jquery-1.11.0.min.js"></script>
<script src="audioplayer.js"></script>
In this direction is the code javascript audio player:
I'm no expert in javascript, but I think the problem must be in line 56: "this.attr AudioFile = $ ('src')". I think if I could define the PHP variable directly in the script, it could run.
But I don't know if there's another way:
http://pastebin.com/WxnTQmXa
This left a jsFiddle example, in this case the "src" attribute is calling the final URL that ends in MP3 and as you can see, the music plays and the player works.:
http://jsfiddle.net/aEXsL/2/
And here is the example with the problem: the same code, but the "source" tag points to the URL that my variable returns (remember that this only fails when using this script):
http://jsfiddle.net/aEXsL/3/
Only one button appears (meaning the player is failing).
Hope you can help me. Thanks in advance.
Edit:
Here is the link to the script author:
http://tympanus.net/Development/AudioPlayer/AudioPlayer.zip
A: Use this code
<script type="text/javascript">
var song=new Audio('http://live1.goear.com/listen/d941195f4a5f477381d8a95ba666a0cb/52eac666/sst2/mp3files/10102006/450929654ac4765a83324119603d02d6.mp3');
song.play();
</script>
but I think your link does not contain an MP3 anymore
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21468319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How long will it take for the in-app purchase pending status to change to success in the Apple purchase history?
This is a question regarding my application's in-app purchase being pending. I am using the StoreKit delegate method, with code as shown below:
for (SKPaymentTransaction * transaction in transactions) {
switch (transaction.transactionState)
{
case SKPaymentTransactionStatePurchased:
[self completeTransaction:transaction];
break;
case SKPaymentTransactionStateFailed:
[self failedTransaction:transaction];
break;
case SKPaymentTransactionStateRestored:
[self restoreTransaction:transaction];
break; // note: without this break, execution falls through into the deferred case
case SKPaymentTransactionStateDeferred:
[self DefferedTransaction:transaction];
break;
case SKPaymentTransactionStatePurchasing:
[self PurchasingTransaction:transaction];
break;
default:
break;
}
};
Then I get a pending transaction, and I update my server database with a temporary value. But I want to know when the pending transaction becomes a success.
A: It was quite interesting; I spent about a week chasing the in-app pending issue, and I got an answer from Apple's side. When we handle the in-app purchase with the code below:
for (SKPaymentTransaction * transaction in transactions) {
switch (transaction.transactionState)
{
case SKPaymentTransactionStatePurchased:
[self completeTransaction:transaction];
break;
case SKPaymentTransactionStateFailed:
[self failedTransaction:transaction];
break;
case SKPaymentTransactionStateRestored:
[self restoreTransaction:transaction];
break;
case SKPaymentTransactionStateDeferred:
[self DefferedTransaction:transaction];
break;
case SKPaymentTransactionStatePurchasing:
[self PurchasingTransaction:transaction];
break;
default:
break;
}
}
When a transaction happens without any failure, it will get the SKPaymentTransactionStatePurchased state, and we can proceed with the application's success methods. There is no need to worry about the pending transaction: Apple considers this a successful transaction, and within 48 hours the amount will be credited to the account.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58992477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How should I batch upload to S3 and insert to MongoDB from a Node.js web server, with a final callback? I have a web server that accepts images from the client, processes them, uploads them to S3, batch-inserts the URLs into my MongoDB, and lastly sends the JSON result back to the client.
Working with a single image works as follows:
router.post("/upload", function(req, res){
var form = new multiparty.Form();
form.parse(req,function(err,fields,files){
s3.upload({
Key: filename,
Bucket: bucketname,
ACL: "public-read",
Body: fs.createReadStream(filepath)
}, function(err, data){
if(err) //error handle
Model.collection.insert({"name": "name", "url" : data.Location}, function(err, result){
if(err) //error handle
res.json({result: result})
})
})
})
})
This works very well as I am simply uploading the file data to s3 -> when done, insert s3's output (url) to database -> when done, send mongo's result as jsonarray to the client.
The issue is: my client HTML accepts multiple type=file inputs with the same name=images, so that I can access them in my form parser as images[i]. The above algorithm is repeated images.length times. The problem arises when I have to return the JSON array result back to the client, as I would have to wait for all of the asynchronous S3 upload -> Mongo insert chains to finish, and I can't pinpoint how and where the callback for this job would be.
What I've tried are the following:
*
*Iterate through images, upload them to S3 first, and populate an array with the resulting URLs ([data.Location]). Batch-insert them into MongoDB and return the JSON array result to the client in the callback. This didn't work, as the Mongo insert doesn't wait for the S3 uploads.
*Iterate through images, uploading and inserting each image to S3 and MongoDB per iteration. If (currentIndex == images.length), return the JSON array result. This doesn't work reliably, as I do not know which image will finish last (they have different sizes).
How should I design the algorithm to batch-upload to S3, batch-insert into Mongo, and return a result including the S3 URLs, filenames, etc. back to the client as a JSON array?
Thanks in advance!
A: I usually solve this kind of problems with Promises, see: Bluebird.
You could then do a batch upload to S3 using Promise.all(); once that resolves, you can batch-insert into Mongo, and when that is done, run the final callback. OR, you could do a batch where each task does both things (upload -> insert into Mongo), and when all of those are done, return the final callback. It will depend on your server and on how many files you want to upload at once. You could also use Promise.map() with the concurrency option set to however many concurrent tasks you want to run.
Pseudo-code example:
Let's assume that getFiles, uploadFile and uploadToMongo each return a Promise object.
var maxConcurrency = 10;
getFiles()
.map(function(file){
return uploadFile(file)
.then(uploadToMongo)
},{concurrency: maxConcurrency})
.then(function(){
return finalCallback();
}).catch(handleError);
Example of how to manually "promisify" S3:
function uploadMyFile(filename, filepath, bucketname) {
return new Promise(function(resolve, reject){
s3.upload({
Key: filename,
Bucket: bucketname,
ACL: "public-read",
Body: fs.createReadStream(filepath)
}, function(err, data){
//This err will get to the "catch" statement.
if (err) return reject(err);
// Handle success and eventually call:
return resolve(data);
});
});
}
You can use this as:
uploadMyFile(filename, filepath, bucketname)
.then(handleSuccess)
.catch(handleFailure);
All nice and pretty!
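The same idea also works with native Promise.all and no Bluebird. Below is a minimal runnable sketch of that control flow; uploadToS3 and insertToMongo are stand-in stubs (not the real AWS/Mongo calls), just to show where the final callback fires:

```javascript
// Stand-in stubs: real code would call s3.upload and
// Model.collection.insert; these resolve immediately.
function uploadToS3(file) {
  return Promise.resolve({ Location: "https://bucket.s3.example.com/" + file });
}

function insertToMongo(s3Result) {
  return Promise.resolve({
    name: s3Result.Location.split("/").pop(),
    url: s3Result.Location
  });
}

// Each file runs its own upload -> insert chain; Promise.all resolves
// once every chain has finished (results in input order), which is the
// moment to send the combined JSON array back to the client.
function bulkUpload(files) {
  return Promise.all(files.map(function (file) {
    return uploadToS3(file).then(insertToMongo);
  }));
}

bulkUpload(["a.png", "b.png"]).then(function (results) {
  console.log(results.length); // one entry per file, in input order
});
```

If any single chain rejects, Promise.all rejects immediately, so error handling stays in one .catch at the end.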
A: If you can't get promises to work, you can store the status of your calls in a local variable. You would just break your calls up into two functions: a single upload and a bulk upload.
This is dirty code, but you should be able to get the idea:
router.post("/upload", function(req, res){
var form = new multiparty.Form();
form.parse(req,function(err,fields,files){
if (err){
cb(err);
} else {
bulkUpload(files, fields, function(err, result){
if (err){
cb(err);
} else {
res.json({result:result});
}
})
}
})
})
function singleUpload(file, field, cb){
s3.upload({
Key: filename,
Bucket: bucketname,
ACL: "public-read",
Body: fs.createReadStream(filepath)
}, function(err, data){
if (err) {
cb(err);
} else {
Model.collection.insert({"name": "name", "url" : data.Location}, function(err, result){
if(err){
cb(err);
} else {
cb(null, result);
}
})
}
})
}
function bulkUpload (files, fields, cb) {
var count = files.length;
var successes = 0;
var errors = 0;
for (i=0;i<files.length;i++) {
singleUpload(files[i], fields[i], function (err, res) {
if (err) {
errors++
//do something with the error?
} else {
successes++
//do something with the result?
}
//when you have worked through all of the files, call the final callback
if ((successes + errors) >= count) {
cb(
null,
{
successes:successes,
errors:errors
}
)
}
});
}
}
This would not be my recommended method, but another user has already suggested promises. I figure an alternative method would be more helpful.
Good Luck!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34512559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Use 100% div width if 50% width is too small How could one go about creating a div that has a default width of XX%, but switches to 100% width if the screen gets too small? I have tried using the min-width property, but the problem is that the element is no longer responsive once min-width is reached, and it overflows if the screen is even smaller.
A: You have to use @media queries. So let's say you have a <div> that should take up only 50% of the web page and then you need to show it full width once it enters mobile phone, say 640px width:
div {
width: 50%;
}
@media screen and (max-width: 640px) {
div {
width: 100%;
}
}
A: You can do it with @media queries, e.g.:
div {
width: 50%;
height: 100px;
border: 1px solid;
}
@media (max-width: 568px) { /* adjust to your needs */
div {width: 100%}
}
<div></div>
A: You must use @media for that, like this:
@media screen and (min-width: 769px) {
/* STYLES HERE */
}
@media screen and (min-device-width: 481px) and (max-device-width: 768px) {
/* STYLES HERE */
}
@media only screen and (max-device-width: 480px) {
/* STYLES HERE */
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47339410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to decode a UTF-8 encoded string split across two buffers right in between a 4-byte-long char? A character in UTF-8 encoding is up to 4 bytes long. Now imagine I read from a stream into one buffer and then into another. It just so happens that the first 2 bytes of a 4-byte UTF-8 encoded character are left at the end of the first buffer, and the remaining 2 bytes sit at the beginning of the second buffer.
Is there a way to partially decode that string (leaving the 2 remaining bytes for later) without copying those two buffers into one big one?
string str = "Hello\u263AWorld";
Console.WriteLine(str);
Console.WriteLine("Length of 'HelloWorld': " + Encoding.UTF8.GetBytes("HelloWorld").Length);
var bytes = Encoding.UTF8.GetBytes(str);
Console.WriteLine("Length of 'Hello\u263AWorld': " + bytes.Length);
Console.WriteLine(Encoding.UTF8.GetString(bytes, 0, 6));
Console.WriteLine(Encoding.UTF8.GetString(bytes, 7, bytes.Length - 7));
This returns:
Hello☺World
Length of 'HelloWorld': 10
Length of 'Hello☺World': 13
Hello�
�World
The smiley face is 3 bytes long.
Is there a class that deals with decoding split strings?
I would like to get first "Hello" and then "☺World", reusing the remainder of the not-yet-decoded byte array, without copying both arrays into one big array. I really just want to use the remainder of the first buffer and somehow make the magic happen.
A: You should use a Decoder, which is able to maintain state between calls to GetChars - it remembers the bytes it hasn't decoded yet.
using System;
using System.Text;
class Test
{
static void Main()
{
string str = "Hello\u263AWorld";
var bytes = Encoding.UTF8.GetBytes(str);
var decoder = Encoding.UTF8.GetDecoder();
// Long enough for the whole string
char[] buffer = new char[100];
// Convert the first "packet"
var length1 = decoder.GetChars(bytes, 0, 6, buffer, 0);
// Convert the second "packet", writing into the buffer
// from where we left off
// Note: 6 not 7, because otherwise we're skipping a byte...
var length2 = decoder.GetChars(bytes, 6, bytes.Length - 6,
buffer, length1);
var reconstituted = new string(buffer, 0, length1 + length2);
Console.WriteLine(str == reconstituted); // true
}
}
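As an aside (an analogy, not part of the original answer): JavaScript's WHATWG TextDecoder solves the same problem with its stream option, holding incomplete byte sequences between calls just as C#'s Decoder does:

```javascript
// "Hello\u263AWorld" in UTF-8; the smiley (E2 98 BA) is 3 bytes,
// and we split the buffer in the middle of it.
const bytes = new TextEncoder().encode("Hello\u263AWorld");
const part1 = bytes.slice(0, 6); // "Hello" + first byte of the smiley
const part2 = bytes.slice(6);    // remaining 2 smiley bytes + "World"

const decoder = new TextDecoder("utf-8");
// stream: true tells the decoder to hold an incomplete sequence
// instead of emitting a replacement character for it.
const first = decoder.decode(part1, { stream: true }); // "Hello"
const second = decoder.decode(part2);                  // "\u263AWorld"

console.log(first + second === "Hello\u263AWorld"); // true
```

The decoder instance carries the pending smiley byte from the first call into the second, so no buffer copying is needed.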
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23940623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Building an uberjar with Gradle I want to build an uberjar (AKA fatjar) that includes all the transitive dependencies of the project. What lines do I need to add to build.gradle?
This is what I currently have:
task uberjar(type: Jar) {
from files(sourceSets.main.output.classesDir)
manifest {
attributes 'Implementation-Title': 'Foobar',
'Implementation-Version': version,
'Built-By': System.getProperty('user.name'),
'Built-Date': new Date(),
'Built-JDK': System.getProperty('java.version'),
'Main-Class': mainClassName
}
}
A: Simply add this to your java module's build.gradle.
mainClassName = "my.main.Class"
jar {
manifest {
attributes "Main-Class": "$mainClassName"
}
from {
configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }
}
}
This will result in [module_name]/build/libs/[module_name].jar file.
A: I found this project very useful. Using it as a reference, my Gradle uberjar task would be
task uberjar(type: Jar, dependsOn: [':compileJava', ':processResources']) {
from files(sourceSets.main.output.classesDir)
from configurations.runtime.asFileTree.files.collect { zipTree(it) }
manifest {
attributes 'Main-Class': 'SomeClass'
}
}
A: I replaced the task uberjar(...) with the following:
jar {
from(configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }) {
exclude "META-INF/*.SF"
exclude "META-INF/*.DSA"
exclude "META-INF/*.RSA"
}
manifest {
attributes 'Implementation-Title': 'Foobar',
'Implementation-Version': version,
'Built-By': System.getProperty('user.name'),
'Built-Date': new Date(),
'Built-JDK': System.getProperty('java.version'),
'Main-Class': mainClassName
}
}
The exclusions are needed because in their absence you will hit this issue.
A: Have you tried the fatjar example in the gradle cookbook?
What you're looking for is the shadow plugin for gradle
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/10986244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
}
|
Q: Save file from HTTP response Looking for some guidance or pointers, not necessarily the solution however I will post a solution back when done.
I am using WSO2 ESB / Integrator to call an external service. According to the external service [1], it responds with a file in the HTTP response. It is this file that I am after, both to save the binary to disk and to send to a user via a web page redirect.
Can anybody point me in a direction in which I can take the HTTP response and create a file? I am presuming that I need to transform the response and use VFS.
Any help would be greatly appreciated.
[1] - https://support.webmerge.me/hc/en-us/articles/206526216-Webhook
A: It is fairly straightforward. The first step is to enable the transport sender for VFS (and the receiver, if you also want to read files) in the axis2.xml config file.
The lines are already there and just need to be uncommented.
<transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>
<transportSender name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportSender"/>
Next you will need to transform your response to whatever format you want to write to the file. You could use an xslt transformation, payload mediator or datamapper mediator for example.
After that you simply send the payload to a VFS endpoint. This can be a local file location or a remote FTP location, for example. Make sure you have the correct privileges to write a file in that location. For example, if you run your ESB as the WSO2 user and want to write files to /opt/wso2/fileexchange, make sure the WSO2 user has write privileges on that folder.
Then to write to that location just use a send mediator like this:
<property xmlns:ns2="http://org.apache.synapse/xsd" name="transport.vfs.ReplyFileName" value="myFile.txt" scope="transport"/>
<property name="OUT_ONLY" value="true"/>
<send>
<endpoint name="FileEpr">
<address uri="vfs:file:///opt/wso2/fileexchange"/>
</endpoint>
</send>
Make sure to set the transport.vfs.ReplyFileName property to the filename you want to write. And the OUT_ONLY property, since the filesystem is not going to send a nice reply back to you.
For more on this subject you can always check this blog
A: I prefer to send a URI referring to the file instead of sending the binary data. The more efficient way is to upload the file to a file system, then pass it around as a string (the URI). Only when you need to read the file's data do you download it from the file system. And you should ensure the network between the file system and the final user is OK.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53924900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Azure Service Bus queues have a URL; do they also have an IP address or even a range? I was wondering if there is any possibility of Service Buses, or even their queues, having an IP address or even IP ranges. I have been searching both in the documentation and in Azure, but I couldn't find anything.
What are the possibilities here when it comes to IP control, so to speak? Is it a question of upgrading the pricing tier for the service bus?
A: It doesn't work that way. The service is only accessible via DNS, not IP address.
Upgrading from Standard tier to Premium gives you throughput and latency commitment, but not an IP address.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44205451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there a way to dynamically evaluate/use an angular (angular 2+) pipe? Let's say I have a pipeVar, it can be any pipe, eg.: number, uppercase, customPipe, etc
Is there a simple way to just call something like
{{ myVal | pipeVar }}
or some special syntax like
{{ myVal | #pipeVar }}
?
The closest thing I have found so far is Dynamic pipe in Angular 2
A: I have updated the plunkr: plunkr link
Change dynamic-pipe.ts like this:
let dynamicPipe = "";
// Simple example logic: suppose your dynamic pipes are
this.dynamicPipe = ['number','uppercase','customPipe']; // pipe, pipe1 ... pipeN
// Now build a string like " | number | uppercase | customPipe"
for (let i = 0; i < this.dynamicPipe.length; i++){
    dynamicPipe = dynamicPipe + " | " + this.dynamicPipe[i];
}
@Component({
selector: 'dynamic-comp',
template: '{{ data ' + dynamicPipe + '}}'
})
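The loop above just concatenates " | pipeName" onto a template expression. A minimal standalone sketch of that string-building step (the pipe names are only examples):

```javascript
// Build the template expression "{{ data | pipe1 | pipe2 ... }}"
// from a list of pipe names. An empty list yields "{{ data }}".
function buildPipeTemplate(pipeNames) {
  let expr = "data";
  for (const name of pipeNames) {
    expr += " | " + name;
  }
  return "{{ " + expr + " }}";
}

console.log(buildPipeTemplate(["number", "uppercase"]));
// → {{ data | number | uppercase }}
```

The resulting string can then be fed into the dynamically created component's template, as the answer's @Component decorator does.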
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42240598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Unity Leap Motion Error Message "INPUT AXIS NOT SET UP" I'm currently doing my first VR project with Leap Motion, HTC Vive and Unity.
When I create a new project and add the LeapRig, everything is just fine, but after adding the Interaction Manager as a child of LeapRig, I get the following error message, which gets repeated multiple times a second:
INPUT AXIS NOT SET UP. Go to your Input Manager and add a definition for on the 9th Joystick Axis.
UnityEngine.Debug:LogError(Object)
Leap.Unity.Interaction.InteractionXRController:fixedUpdateGraspButtonState(Boolean) (at Assets/LeapMotion/Modules/InteractionEngine/Scripts/InteractionXRController.cs:733)
Leap.Unity.Interaction.InteractionXRController:fixedUpdateGraspingState() (at Assets/LeapMotion/Modules/InteractionEngine/Scripts/InteractionXRController.cs:706)
Leap.Unity.Interaction.InteractionController:fixedUpdateGrasping() (at Assets/LeapMotion/Modules/InteractionEngine/Scripts/InteractionController.cs:1783)
Leap.Unity.Interaction.InteractionController:Leap.Unity.Interaction.IInternalInteractionController.FixedUpdateController() (at Assets/LeapMotion/Modules/InteractionEngine/Scripts/InteractionController.cs:259)
Leap.Unity.Interaction.InteractionManager:fixedUpdateInteractionControllers() (at Assets/LeapMotion/Modules/InteractionEngine/Scripts/InteractionManager.cs:372)
Leap.Unity.Interaction.InteractionManager:FixedUpdate() (at Assets/LeapMotion/Modules/InteractionEngine/Scripts/InteractionManager.cs:299)
Does anyone have an idea why this happens and how to fix it?
I had already worked with the Interaction Manager before, but suddenly this error message occurred.
I can run my program just fine, too, but the error bothers me anyway and makes it difficult to use the console properly.
Greetings from Germany
Marc
A: From the Interaction Engine docs:
If you intend to use the Interaction Engine with Oculus Touch or Vive
controllers, you'll need to configure your project's input settings
before you'll be able to use the controllers to grasp objects. Input
settings are project settings that cannot be changed by imported
packages, which is why we can't configure these input settings for
you. You can skip this section if you are only interested in using
Leap hands with the Interaction Engine.
Go to your Input Manager (Edit -> Project Settings -> Input) and set
up the joystick axes you'd like to use for left-hand and right-hand
grasps. (Controller triggers are still referred to as 'joysticks' in
Unity's parlance.) Then make sure each InteractionXRController has its
grasping axis set to the corresponding axis you set up. The default
prefabs for left and right InteractionXRControllers will look for axes
named LeftXRTriggerAxis and RightXRTriggerAxis, respectively.
Helpful diagrams and axis labels can be found in Unity's
documentation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55863548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: CMFCPropertyGridCtrl last item not drawn? I have a CMFCPropertyGridCtrl that I'm using in an options dialog box. I have a method in my options dialog class called InitPropertyGrid(). This method clears any properties and populates the CMFCPropertyGrid objects (using a custom Settings object for the property values) and appends them to the grid.
When I open my dialog box the first time all the properties show correctly. However, if I then close my dialog box and reopen it, the very last property is not drawn on the screen. All other properties are drawn normally:
First time:
All subsequent times:
As you can see, the plus/minus icon shows a minus in both cases, indicating the section is expanded. When the last item is not showing, clicking the +/- icon once to collapse and once to expand causes the last item to be shown correctly.
Note that when I close the dialog box, I do not destroy it but just reshow it later. However, immediately before calling ShowWindow on the dialog, I call the InitPropertyGrid() method (via UpdateToCurrentSettings):
if(optionsDialog_ == NULL)
{
optionsDialog_ = new OptionsDialog(settings_, this);
optionsDialog_->Create(OptionsDialog::IDD, this);
}
optionsDialog_->UpdateToCurrentSettings();
optionsDialog_->ShowWindow(SW_SHOW);
A: I found I can eliminate this problem simply by calling myPropertyGrid.ExpandAll(TRUE) at the end of the code where I initialize the property grid (InitPropertyGrid() for me). This seems to force all the properties to expand.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7590565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: JavaScript object data for functions How do I access some object data in a javascript function? All I want to do is take some input from the HTML file and then, if the input text is === to the name of one of my objects in javascript, access some data from that object to use within my function.
For example:
I have an HTML form with 2 text inputs and a button that calls the function. In the javascript document I have two objects called bob and susan, with the data bob.age = 25 and susan.age = 30. I want a function that calculates bob.age + susan.age, but I want to use the inputs "bob" and "susan" from my HTML form. So when the inputs are bob and susan, I want the function to compute bob.age + susan.age.
here is my html form:
<form name="mmForm">
<label for="element1">E1</label>
<input type="text" id="element1">
<label for="element2">E2</label>
<input type="text" id="element2">
<input type="button" value="Calculate" onclick="procesForm_mm()">
<div id="resultfield_mm">Result:</div>
</form>
here my javascript function:
function procesForm_mm() {
var e1 = document.mmForm.element1.value;
var e2 = document.mmForm.element2.value;
result_mm = parseInt(e1) + parseInt(e2);
document.getElementById("resultfield_mm").innerHTML += result_mm;
}
And this is the data I want to access:
var Fe = new Object();
Fe.denumire = "Fier";
Fe.A = 56;
Fe.Z = 26;
Fe.grupa = "VIIIB";
Fe.perioada = 4;
A: Try this (a lot of guessing involved):
function procesForm_mm() {
var e1 = document.mmForm.element1.value;
var e2 = document.mmForm.element2.value;
result_mm = parseInt(eval(e1).A) + parseInt(eval(e2).A);
document.getElementById("resultfield_mm").innerHTML += result_mm;
}
var Fe = new Object();
Fe.denumire = "Fier";
Fe.A = 56;
Fe.Z = 26;
Fe.grupa = "VIIIB";
Fe.perioada = 4;
var Co = new Object();
Co.denumire = "Cobalt";
Co.A = 59;
Co.Z = 27;
Co.grupa = "IXB";
Fe.perioada = 4;
See it working here: http://jsfiddle.net/KJdMQ/.
It's important to keep in mind that use of the JS eval function has some disadvantages: https://stackoverflow.com/a/86580/674700.
A better approach would be to keep your JS objects in an array and avoid the use of the eval function:
function procesForm_mm() {
var e1 = document.mmForm.element1.value;
var e2 = document.mmForm.element2.value;
result_mm = parseInt(tabelPeriodic[e1].A) + parseInt(tabelPeriodic[e2].A);
document.getElementById("resultfield_mm").innerHTML += result_mm;
}
var tabelPeriodic = [];
tabelPeriodic["Fe"] = new Object();
tabelPeriodic["Co"] = new Object();
var el = tabelPeriodic["Fe"];
el.denumire = "Fier";
el.A = 56;
el.Z = 26;
el.grupa = "VIIIB";
el.perioada = 4;
el = tabelPeriodic["Co"];
el.denumire = "Cobalt";
el.A = 59;
el.Z = 27;
el.grupa = "IXB";
el.perioada = 4;
(See it working here)
Note: This looks like a chemistry application, I assumed that the form is supposed to add some chemical property values for the chemical elements (i.e. A possibly being the standard atomic weight). The form would take as input the names of the JS objects (Fe and Co).
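As a further variation on the second approach (my sketch, not from the answers above): a plain object literal is the more idiomatic lookup table in JavaScript, since the array there is only used for its string keys:

```javascript
// Plain-object lookup table: the form input text ("Fe", "Co") is the key.
var tabelPeriodic = {
  Fe: { denumire: "Fier",   A: 56, Z: 26, grupa: "VIIIB", perioada: 4 },
  Co: { denumire: "Cobalt", A: 59, Z: 27, grupa: "IXB",   perioada: 4 }
};

function sumA(e1, e2) {
  // Guard against unknown element names instead of crashing.
  if (!(e1 in tabelPeriodic) || !(e2 in tabelPeriodic)) return NaN;
  return tabelPeriodic[e1].A + tabelPeriodic[e2].A;
}

console.log(sumA("Fe", "Co")); // 115
```

In the form handler, sumA(e1, e2) would replace the parseInt(...) + parseInt(...) line, and the NaN result can be used to show an error for a typo in the input.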
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17970194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: root in binary tree always NULL The program should read a txt file, store all the words alphabetically, and print them in order, with the number of times each word appears in the txt.
The problem seems to be in the Insert method, because it never prints TEST, so it seems pAux is always NULL for some reason. And because of that, the Print method returns on its first call.
What am I doing wrong?
tree.h
#ifndef TREE_H_
#define TREE_H_
typedef struct Item{
char* key;
int no;
} TItem;
typedef struct No{
TItem item;
struct No* pLeft;
struct No* pRight;
} TNo;
void TTree_Insert (TNo**, char[]);
void TTree_Print (TNo*);
#endif
tree.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "tree.h"
TNo* TNo_Create (char* c){
TNo* pNo = malloc(sizeof(TNo));
pNo->item.key = malloc(sizeof(char)*strlen(c));
strcpy(pNo->item.key, c);
pNo->item.no = 1;
pNo->pLeft = NULL;
pNo->pRight = NULL;
return pNo;
}
void TTree_Insert (TNo** pRoot, char word[80]){
char* c = malloc(sizeof(char)*strlen(word));
strcpy(c, word);
TNo** pAux;
pAux = pRoot;
while (*pAux != NULL){
if (strcmp(c, (*pAux)->item.key) < 0) pAux = &((*pAux)->pLeft);
else if (strcmp(c, (*pAux)->item.key) > 0) pAux = &((*pAux)->pRight);
else{
(*pAux)->item.no++;
return;
}
}
*pAux = TNo_Create(c);
return;
}
void TTree_Print (TNo *p){
if (p == NULL) return;
TTree_Print (p->pLeft);
printf("%s - %d", p->item.key, p->item.no);
TTree_Print (p->pRight);
}
main.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include "tree.h"
int main(){
TNo* pRoot = NULL;
FILE* txt = fopen("Loremipsum.txt", "r");
char aux[80];
int c, x = 0;
while ((c = fgetc(txt)) != EOF){
while (!(isalpha((char)c))) c = fgetc(txt);
while (isalpha((char)c)) {
if (isupper((char)c)) c = c+32;
if (islower((char)c)) aux[x++] = (char)c;
c = fgetc(txt);
}
aux[x] = '\0';
TTree_Insert(&pRoot, aux);
x = 0;
aux[0] = '\0';
}
TTree_Print(pRoot);
fclose(txt);
return 0;
}
A: I did not look through all your code. I will answer only your question. You have to pass pRoot to TTree_Insert by reference. Otherwise you pass its copy to the function and any changes of the copy within the function do not influence the original value.
For example
void TTree_Insert ( TNo **pRoot, char word[80] ){
char* c = malloc(sizeof(char)*strlen(word) + 1 ); // <==
strcpy( c, word ); // <==
TNo* pAux;
pAux = *pRoot;
//...
And in main you have to call the function like
TTree_Insert( &pRoot, aux );
Take into account that you have to adjust all other code of the function. For example
void TTree_Insert( TNo **pRoot, const char word[80] )
{
char* c = malloc( sizeof( char ) * strlen( word ) + 1 );
strcpy( c, word );
TNo **pAux = pRoot;
while ( *pAux != NULL )
{
printf("TESTE");
if ( strcmp(c, ( *pAux )->item.key ) < 0 )
{
pAux = &( *pAux )->pLeft;
}
else if ( strcmp(c, ( *pAux )->item.key ) > 0 )
{
pAux = &( *pAux )->pRight;
}
else
{
( *pAux )->item.no++;
break;
}
}
if ( *pAux == NULL ) *pAux = TNo_Create(c);
return;
}
I hope it will work.:)
A: pRoot is initially NULL, and you never change it later.
so it seems the pAux is always NULL for some reason
Well, that's the reason... why don't you use a debugger or do some printing?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26960856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: which is fastest for an oracle query? This
select * from customers where id = 1;
select * from customers where id = 2;
or
select * from customers where id in(1,2);
which is faster?
A: The first one is actually two statements, causing you to make two round trips to the database.
The second one will most likely be faster as it is just one statement.
A: Is this really what you are trying to determine? Are you asking if it is faster to make one trip returning two rows or two trips each returning one row? If that is the question, then I agree with the comments -- try it, measure it, and compare.
If you are trying to make this kind of thing efficient, then you should probably look at using bind variables instead. If your question really means what it says, then probably any answer here will do.
A: Any question with "faster" is always going to be dependent on the specifics of your database. I don't really have anything to add over plhmhck and MJB about the fact that you're talking about 2 queries vs. 1 query.
But be aware that the optimizer will usually (always?) rewrite WHERE id IN (1,2) to WHERE (id = 1 OR id = 2)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8872904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: .Net HotChocolate GraphQL Mutations pick from a list of strings like enum I have a bunch of data in a .NET GraphQL API that is basically a list of strings, like
public List<string> avarageMessageSize = new List<string>{
"Up to 5 KB",
"Between 5 KB and 4 MB",
"Between 4 MB and 20 MB",
"20 MB and bigger"
};
When a new entity is made with a GraphQL mutation, other values that are stored in enums have input types that let me pick the correct value in the mutation. Is this also possible for list items, to pick and choose the correct value (string) in a mutation?
A: You can easily make the string an enum. Essentially a GraphQL enum can be represented by any object in dotnet.
The strings have to follow the GraphQL enum rules. So Up to 5 KB would probably be represented by Up_to_5KB. But in your dotnet API you would just use the Up to 5 KB string.
The GraphQL engine would translate it to the enum in GraphQL. You also cannot have arbitrary values anymore, they must be represented by the enum values allowed in this case.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73443042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to send a notification without waking the screen on Android I have a foreground service in my app, whose persistent notification has a timer in it, which means that the notification is sent once per second to update the timer that is shown in the notification. This means that on some devices, where notifications are set to either wake the screen entirely or show a dark version on the screen briefly, the screen is constantly awake.
Is there a way to send the notification in a way that it won't wake up the screen? Setting it as a silent notification on the device fixes this, but the point of a foreground service notification is that it's prominent on the device, so this isn't a great solution, and not all users would know to do that.
This is how I'm building the notification:
NotificationCompat.Builder(this, CHANNEL_ID)
.setContentTitle("Currently Reading")
.setSound(null)
.setContentIntent(TaskStackBuilder.create(this).run {
addNextIntentWithParentStack(timerIntent)
getPendingIntent(
0,
PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
)
})
.setSmallIcon(R.drawable.ic_launcher_foreground)
.setPriority(NotificationCompat.PRIORITY_DEFAULT)
.setAutoCancel(true)
A: Have you tried updating the notification instead? And use setOnlyAlertOnce()
"You can optionally call setOnlyAlertOnce() so your notification interupts the user (with sound, vibration, or visual clues) only the first time the notification appears and not for later updates."
Check this link
https://developer.android.com/training/notify-user/build-notification.html#Updating
A:
setPriority(NotificationCompat.PRIORITY_DEFAULT)?
lower it, so it won't be shown on the lock screen (Android 7 and up)
and here we go with a small fix for this part:
setPriority(NotificationCompat.PRIORITY_LOW)
you can also add:
setVisibility(NotificationCompat.VISIBILITY_PRIVATE)
but it'll also remove the notification from the lock screen...
docs:
* https://developer.android.com/training/notify-user/build-notification.html#lockscreenNotification
* https://developer.android.com/training/notify-user/channels#importance
hope this helps =]
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70035917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I assign RGB color codes to a WORD? I am assigning color values to the display frame buffer, and that buffer pointer's type is BYTE. But I am not able to assign the RGB color value into it. I am doing this to set the pixel location using DirectDraw on the WinCE platform. Here is the snapshot code.
BYTE* pDisplayMemOffset = (BYTE*) ddsd.lpSurface;
int x = 100;
int y = 100;
pDisplayMemOffset += x*ddsd.lXPitch + y*ddsd.lPitch;
*(WORD*)pDisplayMemOffset = 0x0f00;
But how can I assign the RGB(100,150,100) combination? I have tried using DWORD instead of WORD for the assignment, but it doesn't work. I know I need the color as a hex value in 0x000000 (RGB) format, but I think a BYTE can't store such a large value.
Can anyone tell me how to do this?
A: How this assignment can be done is very dependent on the pixel-format you specified when acquiring ddsd. See the field ddpfPixelFormat and also specifically in there: dwRGBBitCount.
Maybe you can provide this pixel format information so that I can improve my answer. However, I can easily give you an example of how to do this pixel-color assignment if e.g. the pixel-format is:
[1 byte red] [1 byte green] [1 byte blue] [1 byte unused]
Here's the example:
*(pDisplayMemOffset+0) = 0x10;// assigning 0x10 to the red-value of first pixel
*(pDisplayMemOffset+1) = 123; // assigning 123 to green-value of first pixel
// (no need for hex)
*(pDisplayMemOffset+4) = 200; // assigning 200 to red-value of second pixel
// (BYTE is unsigned)
If you have to extract the color values from an integer it largely depends on which byte-ordering and color-ordering that integer was given in, but you can try it out easily.
First I would try this:
*(((unsigned int*)pDisplayMemOffset)+0) = 0x1A2A3A4A
*(((unsigned int*)pDisplayMemOffset)+1) = 0x1B2B3B4B
If this works, then the pixel-format had either an unused 4th byte (like my example above) or an alpha value that is now set to one of the values. Again: besides the pixel-format, the ordering of the bytes in your integer also decides whether this works directly or whether you have to do some byte-swapping.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5271331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Image files stored on Google cloud storage are not displayed but only strange text I have a project that uploads multiple images to Google cloud storage. The upload works fine, but when I download an image, it only shows strange text.
There are 2 steps to upload a file to Google cloud storage. In step 1, I get the URL that will help me upload the file to storage. In step 2, I upload the file to the response URL I got from step 1, and I can download the file from the response in step 2.
Step 1: get upload URL from GCS
Options requestHeaders = await getRequestOptions(jwtBearer, hashing);
requestHeaders.headers!['Content-Type'] = 'application/json';
var signedUrlFiles = await dio.post(
genUrlsApi,
data: {
...(body != null ? body.toJson() : {}),
'files': files
.map((f) => FileElement(
fileName: f.fullName, type: buildApiQuery(f.type)["type"]!)
.toJson())
.toList()
},
queryParameters: query,
options: requestHeaders,
);
var response = DBSBaseResponse.fromJsonFactory(signedUrlFiles.data);
GCSUploadFileResponse result = GCSUploadFileResponse.fromJsonFactory(response.data);
Step 2: upload the file to the GCS URL
for (var url in result.files) {
var file = files.firstWhere((f) => f.fullName == url.fileName);
FormData formData = FormData.fromMap({
'file': await MultipartFile.fromFile(
file.uri,
filename: url.fileName,
),
});
var result = await dio.put(
url.url,
data: formData,
onSendProgress: (
int sent,
int total,
) {
//show logger
},
);
}
I cannot show the image because of company policy. Sorry for the trouble.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74051683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: My click() function doesn't work with the href on this site, how can I fix it and make it work? from selenium import webdriver
from selenium.webdriver.chrome.service import Service
# Service("path to the chromeDriver directory")
driver = webdriver.Firefox('/home/arch/Downloads/bot/')
driver.get("https://blaze.com/pt/?modal=auth&tab=login")
element=driver.find_element_by_name("username").send_keys("email")
driver.find_element_by_name("password").send_keys("pass")
# link of the desired URL
driver.find_element_by_xpath('/html/body/div[1]/main/div[3]/div/div[2]/div[2]/form/div[4]/button').click()
driver.find_element_by_xpath('/html/body/div[1]/main/div[1]/div[4]/div[2]/div[1]/div/div/div[4]/div[1]/div/div[3]/div/a')
I have this code in Python that opens blaze.com and logs in, but when I try to click the game Mines, which is in this href:
<a href="/pt/games/mines"/>
it doesn't work, it just stops. Could someone help with this error?
A: You probably forgot to click the "mines" button. Don't forget the .click() at the end of your code, like this:
driver.find_element_by_xpath('/html/body/div[1]/main/div[1]/div[4]/div[2]/div[1]/div/div/div[4]/div[1]/div/div[3]/div/a').click()
And you also might want to give the site some time to load by importing time and using time.sleep(x) which sleeps the code for x seconds.
import time
time.sleep(2) # Use this between the .click() lines to give the site time to load after clicking a button
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71085539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why does my request not include If-None-Match in the header I am using the https://github.com/filipw/AspNetWebApi-OutputCache NUGET package to manage caching for my ASP.NET Web API.
The server side caching works well. However I don't see it working on my browser (client side).
On the request I see that max-age and an ETag are being returned.
However, when I make further requests I don't see the ETag being supplied in the request header as an If-None-Match parameter. This is why I get a 200 (OK) response back with response data. It should have served the data from the cache itself.
https://dl.dropboxusercontent.com/u/2781659/stackoverflow/Cache.jpg
Can anybody please advise?
A: I just briefly looked at the Github project you referenced, and I wanted to throw in my $0.02, for whatever it's worth -
Breeze.js is responsible for taking care of client side caching for you. The idea is to not have to worry about what the back-end is doing and simply make a logical decision on the client on whether to proceed and hit the server for data again. If you don't need to refresh the data, then never hit the server, just return from Breeze's cache.
The project you are referencing seems to do both - server side and client-side caching. The decision of caching on the server is one not to be taken lightly, but this project seems to handle it pretty well.
The problem is that you are trying to mix two libraries which, at best, conflict in the area of what their concerns are. There may be a way to marry the two up, but at what cost? Why would you need to cache on the server that which is already cached on the client? Why cache on the client if you plan to cache the exact same data on the server?
The only reason I can think of is for paging of data (looking at a subset of the whole) and wanting to see the next data set without having to hit the server again. In that case, your query to the server should not match the original anyway and therefore one would expect that you need to customize the two solutions to do what you are asking for.
In general that project seems to ignore queryString parameters, from what I can tell, to support JSONP so you should have no problem coming up with a custom solution, if you still think that is necessary.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19314530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Sort any random list of files by modification date Looking for a generic way to sort a random list of files by their modification time, so something like:
./make_list_of_files | some_sorter_by_mtime
my current solution is (here the make_list_of_files is the find command):
find / -type f -print |\
perl -nle 'push @in,$_;END {@out = sort{ (stat($a))[9] <=> (stat($b))[9] } @in; $,="\n";print @out}'
exists some simpler solution (e.g. without perl)?
A: Your some_sorter_by_mtime should be for example:
xargs stat -f "%m %N" | sort -n | cut -d' ' -f2-
The idea behind it is:
* print out the file modification time and the filename
* sort the output numerically (so by modification time)
* cut out the time field
so,
find / -type f -print | xargs stat -f "%m %N" | sort -n | cut -d' ' -f2-
A: Like this?
find / -type f -print | xargs ls -l --time-style=full-iso | sort -k6 -k7 | sed 's/^.* \//\//'
A: Yes, without perl:
find / -type f -exec ls -lrt '{}' \+
Guru.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12508076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: CSS rotate3d works correctly only on Firefox The CSS rotate3d animation seems to depend on the (modern) browser you use!!! Just test the code...
@keyframes KF_Rotate {
0% { transform: rotate3d(0,0,0, 0deg); }
100% { transform: rotate3d(0,1,0,180deg); }
}
.Rotate:hover { animation: KF_Rotate 3s; }
How is it possible?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31733327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Authorize ARB subscription payment I am creating a subscription using Authorize.net ARB API. I can see in my account, where the subscription is created.
It's a very simple subscription, where a user would be paying $100 every month. I would like to know if the user would be billed at the time of creation of the subscription.
My understanding is that he would be billed only from next month? Any idea how to bill the customer from the time the subscription is created?
Thanks
A: ARBCreateSubscriptionRequest has a parameter "startDate" where you can set the date the subscription begins.
If the first price should be different from the monthly payments, you can also set the parameter "trialAmount" for the first payment.
You can find all the information here:
http://www.authorize.net/support/ARB_guide.pdf
A: You should always charge the first subscription payment using the AIM API. The AIM API will process immediately and act as a verification for the credit card. You will know immediately if a card is invalid and before you create the subscription. If you schedule a payment to process the same day the subscription is created it does not process immediately. That process later that night. If the credit card is invalid you lose the opportunity to have the user correct it because they are no longer present at your website.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21374778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to sync the modification date of folders within two directories that are the same? I have a Dropbox folder on one computer with all the original modification dates. Recently, after transferring my data onto another computer, due to a .DS_Store issue, some of the folder's "Date Modified" dates were changed to today. I am trying to write a script that would take the original modification date of a folder, and then be able to find the corresponding folder in my new computer, and change it using touch. The idea is to use stat and touch -mt to do this. Does anyone have any suggestions or better thoughts? Thanks.
A: Use one folder as the reference for another with --reference=SOURCE:
$ cd "$(mktemp --directory)"
$ touch -m -t 200112311259 ./first
$ touch -m -t 200201010000 ./second
$ ls -l | sed "s/${USER}/user/g"
total 0
-rw-r--r-- 1 user user 0 Dec 31 2001 first
-rw-r--r-- 1 user user 0 Jan 1 2002 second
$ touch -m --reference=./first ./second
$ ls -l | sed "s/${USER}/user/g"
total 0
-rw-r--r-- 1 user user 0 Dec 31 2001 first
-rw-r--r-- 1 user user 0 Dec 31 2001 second
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/66536643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Overriding Rails Default Routing Helpers I'm writing an app where I need to override the default routing helpers for a model. Say I have a model named Model, with the corresponding helper model_path(), which generates "/model/[id]". I'd like to override that helper to generate "/something/[model.name]". I know I can do this in a view helper, but is there a way to override it at the routing level?
A: You can define to_param on your model. Its return value is going to be used in generated URLs as the id.
class Thing
def to_param
name
end
end
Then you can adapt your routes to scope your resource like so
scope "/something" do
resources :things
end
Alternatively, you could also use sub-resources if applicable.
Finally you need to adapt your controller as Thing.find(params[:id]) will not work obviously.
class ThingsController < ApplicationController
def show
@thing = Thing.where(:name => params[:id]).first
end
end
You probably want to make sure that the name of your Thing is unique as you will observe strange things if it is not.
To save the hassle from implementing all of this yourself, you might also be interested in friendly_id which gives you this and some additional behavior (e.g. for using generated slugs)
A: You need the scope in routes.rb
scope "/something" do
resources :models
end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9738952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: iOS command-line build: How to create xcode archive from command line? I am using the command below to create a signed ipa file from the command line, i.e. terminal.
xcrun -sdk iphoneos PackageApplication \
"path/to/build/MyApp.app" \
-o "output/path/to/MyApp.ipa" \
--sign "iPhone Distribution: My Company" \
--embed "path/to/something.mobileprovision"
As understood from the above, this will create an ipa file. But I want to create an xcarchive file which will be used to upload to the App Store using Application Loader. How can I modify this command to achieve that? Any help will be greatly appreciated. Thanks...
A: Just use the xcodebuild archive command
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21529718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Teamwork Ruby on Rails I've been working with RoR for a while but now I need to work with designers and other developers. Is there a tool like GitHub, or something like Dropbox, where you can share files with your team but with a URL where you can check any change live? For example, on my own I just run rails s and I can see what happens on my localhost, but for a designer it isn't that simple. Also, we don't want everybody running their own Rails project on their localhost.
So is there a tool, or what do you guys do when you have to work with other collaborators?
A: Have you considered using a staging environment?
A staging environment (stage) is a nearly exact replica of a production environment for software testing. Staging environments are made to test codes, builds, and updates to ensure quality under a production-like environment before application deployment. The staging environment requires a copy of the same configurations of hardware, servers, databases, and caches. Everything in a staging environment should be as close a copy to the production environment as possible to ensure the software works correctly.
See the source
To use it, I recommend an application like Heroku; after configuring it, you can 'deploy' your app by committing to a branch (it's not real time, but it works for your case).
If you have a VM, I recommend this tutorial: https://emaxime.com/2014/adding-a-staging-environment-to-rails.html
A: Open questions like this are not really best placed on StackOverflow, which is geared more toward solving specific issues, with provided code examples and errors etc.
However, in answer to your question:
I see you mention GitHub in your question, but do you fully understand the underlying concept of Git version control, or is there a specific reason why it doesn't meet your needs? As far as I believe, its main purpose is to solve your exact scenario.
https://guides.github.com/introduction/git-handbook/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54755098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: TabBarController status bar issue in iOS7 I added the UITabBarController view to the UIWindow. The TabBarController view is messing up the status bar. The TabBarController is in MainWindow.xib. How can I fix this?
window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
tabController.viewControllers = [NSArray arrayWithObjects:nearbySplit, mySplit, allSplit, messageSplit, nil];
tabController.selectedIndex = 0;
window.rootViewController = tabController;
[window addSubview:tabController.view];
[window makeKeyAndVisible];
A: Add this code in your view controller
if ([self respondsToSelector:@selector(edgesForExtendedLayout)])
self.edgesForExtendedLayout = UIRectEdgeNone; // iOS 7 specific
in your viewDidLoad method.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20632316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: youtube api v3 does something when a user uploads a new video How can I make my JavaScript code in Node.js execute something when a user uploads a new video?
I read the docs and I didn't find anything.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62863910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to upload a folder to blob storage using SAS URI in storage explorer I'm trying to upload a folder to a blob container using Storage Explorer via a SAS URI, but the upload fails for the folder & files. How can I achieve that? When I connect to blob storage using the account name and key it works fine, but not with the SAS URI.
A: I've created several tests and all succeeded with SAS URI.
I think you should check a few places:
* According to your screenshot, maybe your SAS key has expired?
* The URI configuration. We should concat the connection string and the SAS token.
The configuration is as follows:
A: First you need to check the permissions allotted to the SAS and also the expiry date. There can be two cases here: one, you may not have adequate access to upload; and two, the SAS might have expired.
If this is a one-time activity, then you can use the AzCopy tool as well to upload files to blob storage.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65670195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Need Help To Uncheck Radio Button If Already Checked Using jQuery I need help to uncheck or check the same radio button if the button is already checked. I have a jQuery function which is used to enable or disable other radio buttons if one is already checked; now I want to add a function to check or uncheck the same radio button.
(function ($) {
$(document).ready(function () {
$('input:radio').click(function(){
var $inputs = $('input:radio')
if($(this).is(':checked')){
$inputs.not(this).prop('disabled',true);
}else{
$inputs.prop('disabled',false);
}
})
});
})(jQuery);
<input type="radio" value="Test1" />Test1
<input type="radio" value="Test2" />Test2
<input type="radio" value="Test1" />Test1
<input type="radio" value="Test2" />Test2
A: Although it is too late to answer, it could help others
I tried this solution.
Works very well for me!
$("input[type='radio'].myClass").click(function(){
var $self = $(this);
if ($self.attr('checkstate') == 'true')
{
$self.prop('checked', false);
$self.each( function() {
$self.attr('checkstate', 'false');
})
}
else
{
$self.prop('checked', true);
$self.attr('checkstate', 'true');
$("input[type='radio'].myClass:not(:checked)").attr('checkstate', 'false');
}
})
A: Simply replace disabled with checked:
$inputs.prop("checked", false);
or for this element:
this.checked = false;
However, if you are looking for a form element which can be checked and unchecked, maybe <input type="checkbox" /> is what you need.
A: $inputs.filter(':checked').prop('checked', false);
A: I think you need a checkbox instead of a radio button to uncheck the checked one:
<input class="checkDisable" type="checkbox" value="Test1" />Test1
<input class="checkDisable" type="checkbox" value="Test2" />Test2
<input class="checkDisable" type="checkbox" value="Test3" />Test3
<input class="checkDisable" type="checkbox" value="Test4" />Test4
(function ($) {
$(document).ready(function () {
$('input:checkbox.checkDisable').change(function(){
var $inputs = $('input:checkbox.checkDisable')
if($(this).is(':checked')){
$inputs.not(this).prop('disabled',true);
}else{
$inputs.prop('disabled',false);
$(this).prop('checked',false);
}
})
});
})(jQuery);
A: The answer of Pablo Araya is good, but ...
$self.prop('checked', true);
is superfluous here. The radio button already has a checked state for every click.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/16055839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Windows Phone 8 & Internet Explorer 10 caches 302 redirects We are developing a web app for mobile devices and we are experiencing a rare caching issue in Windows Phone 8 & Internet Explorer 10.
Our app is based on JSF and jQuery Mobile, and we are using the "redirect-after-post" system for navigation.
When we make a call to a new page, two requests should be executed, and so they are in every OS and browser except our beloved Microsoft system (WP8 & IE10, Nokia Lumia 620):
* Browser requests the URL
* Server returns HTTP 302 status with the new location
* Browser requests the new location
The fact is that WP8 & IE10 don't execute the last request, and show a cached result. We are including the "Cache-control", "Pragma" and "Expires" headers in the 302 response so the browser won't show a cached page, but IE10 ignores them.
Any clue to solve this problem?
thanks & regards
A: Try forcing the Uri returned in step 2 to be unique (append a random or incrementing value to the end of the query string).
This works around the caching behaviour in the HttpWebRequest class in the SDK.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18844553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: how do I link my src/app.js file to babel My src/app.js file keeps showing it doesn't exist in git bash
I tried adding quotations
babel src/app.js --out-file=public/scripts/app.js --presets="env,react"
instead of this
babel src/app.js --out-file=public/scripts/app.js --presets=env,react but it still did not work
A: Consider looking at the .gitignore file, please. If you put the src folder or src/app in .gitignore, then it's not shown in git bash.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74681628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: connect is unsuccessful in my socket programming function in C I have a function which initiates the socket connection. I am trying to connect to a server with a name, let's say xp. socket() is working fine, but when it comes to connect, I am getting an error. I tried to get the IP of that server, and I got some random IP. I passed the parameters to the connect API and print the results in a log file. I think the error lies within connect(). I am working on Linux Ubuntu. Here is my program for SocketInit(); I can't find the error in it.
I call the SocketInit function as
SocketInit(argv[2]); argv[2] has my server name.
short SocketInit(char *xp)
{
if ( (local_socket = socket(AF_INET, SOCK_STREAM, 0)) < 0 ) {
printf("socket creation is unsuccessful check in SocketInit() \n");
sprintf(log_msg, "create socket descriptor error = %d", errno);
LogMsg('E', log_msg);
return(-1);
}
else
{
printf("socket connection is success\n");
}
pos_socket.sin_family = AF_INET;
pos_socket.sin_port = htons(port_no);
pos_socket.sin_addr.s_addr = inet_addr(xp);
if ( connect( local_socket, (struct sockaddr *) &pos_socket, sizeof(pos_socket) ) < 0 ) {
sprintf(log_msg, "connect on socket error=%d", errno);
printf("socket connect api is unsuccessful check in SocketInit() \n");
LogMsg('E', log_msg);
return(-1);
}
else{
printf("connect is successful\n");
return 0;
}
}
How can I connect to the server? How can I pass the address to pos_socket.sin_addr.s_addr? Sometimes I get connect error 110 or 111, but I still can't connect.
A: Use perror() to print the human-readable error string when connect() or most other unix-like system calls return an error. But since you told us the value of errno, I looked in errno.h for the meaning, and found:
#define ETIMEDOUT 110 /* Connection timed out */
#define ECONNREFUSED 111 /* Connection refused */
(BTW, you cannot count on errno's being the same from one unix to another which is why you need to use these defines when checking for specific errors. Never hard-code numeric errno values into your code. It worked out for me this time, but it won't necessarily every time).
ECONNREFUSED means that there was a machine listening at the specified IP address, but that no process was listening for connections on the specified port number. Either the remote process is not running, it is not binding or accepting connection properly, or it potentially could be blocked by some sort of firewall.
In any case, this points to a problem with the server.
So, check to make sure your remote process is actually ready to accept the connection. You can use telnet or netcat as a test client to see if other client programs that are known to work are able to connect to your server.
Also, I notice that your variable port_no is not declared, so we have no way of knowing what port you are trying to connect to. Make sure that this variable is of the correct type and has the correct value for the service you are trying to connect to; if port_no doesn't specify the correct port you will get the same type of error.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44141616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: A program written in Go opens many connections to MongoDB I have a very simple HTTP server written in Go which serves my AngularJS front end data from a MongoDB instance through an API.
Here is the code:
// ReceiveData - used to handle incoming data
func ReceiveData(w http.ResponseWriter, r *http.Request) {
if r.Method != "POST" {
http.NotFound(w, r)
return
}
body, err := ioutil.ReadAll(r.Body)
if err != nil {
panic(err)
}
// database
session, err := mgo.Dial("localhost")
if err != nil {
panic(err)
} else {
fmt.Println("session created")
database := session.DB("schedule_calculator")
collection := database.C("schedule_save")
num, err := collection.Count()
if err == nil {
fmt.Println("schedule_save collection count = ", num)
mongodbData := SavedData{ID: bson.NewObjectId(), Data: string(body), Date: time.Now()}
collection.Insert(mongodbData)
num, _ := collection.Count()
fmt.Println("new count: ", num)
} else {
fmt.Println("schedule_save error - ", err)
}
}
if err := json.NewEncoder(w).Encode("todos"); err != nil {
panic(err)
}
}
type SavedData struct {
ID bson.ObjectId `bson:"_id"`
Data string
Date time.Time
}
// SendData - Called by UI to get saved data
func SendData(w http.ResponseWriter, r *http.Request) {
fmt.Println("SendData function")
session, err := mgo.Dial("localhost")
defer closeSession(session)
if err != nil {
panic(err)
} else {
fmt.Println("session created")
database := session.DB("schedule_calculator")
collection := database.C("schedule_save")
num, err := collection.Count()
if err == nil {
fmt.Println("schedule_save collection count = ", num)
var myData SavedData
dbSize, err2 := collection.Count()
if err2 != nil {
panic(err2)
}
if dbSize > 0 {
// db not empty
err2 = collection.Find(nil).Skip(dbSize - 1).One(&myData)
if err2 != nil {
// TODO: handle error
panic(err2)
}
// fmt.Println(myData.Data)
w.Header().Set("Content-Type", "application/json; charset=UTF-8")
w.WriteHeader(http.StatusOK)
if err := json.NewEncoder(w).Encode(myData.Data); err != nil {
// TODO: handle error
panic(err)
}
} else {
// db empty
fmt.Println("DB is empty")
}
} else {
fmt.Println("schedule_save error - ", err)
}
}
}
// closes the mongodb session
// TODO: make it use only 1 session
func closeSession(session *mgo.Session) {
session.Close()
fmt.Println("session closed")
}
and this is what I get in the console after some short interaction with the API:
2016-08-10T19:22:59.734+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55401 #60 (6 connections now open)
2016-08-10T19:22:59.740+0300 I NETWORK [conn60] end connection 127.0.0.1:55401 (5 connections now open)
2016-08-10T19:23:58.794+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55405 #61 (6 connections now open)
2016-08-10T19:23:58.800+0300 I NETWORK [conn61] end connection 127.0.0.1:55405 (5 connections now open)
2016-08-10T19:24:24.219+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55411 #62 (6 connections now open)
2016-08-10T19:24:24.225+0300 I NETWORK [conn62] end connection 127.0.0.1:55411 (5 connections now open)
2016-08-10T19:25:56.149+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55434 #63 (6 connections now open)
2016-08-10T19:25:56.155+0300 I NETWORK [conn63] end connection 127.0.0.1:55434 (5 connections now open)
2016-08-10T19:33:54.127+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55460 #64 (6 connections now open)
2016-08-10T19:33:54.133+0300 I NETWORK [conn64] end connection 127.0.0.1:55460 (5 connections now open)
2016-08-10T19:35:12.060+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55476 #65 (6 connections now open)
2016-08-10T19:35:12.066+0300 I NETWORK [conn65] end connection 127.0.0.1:55476 (5 connections now open)
2016-08-10T19:35:22.827+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55477 #66 (6 connections now open)
2016-08-10T19:35:22.833+0300 I NETWORK [conn66] end connection 127.0.0.1:55477 (5 connections now open)
2016-08-10T19:35:37.720+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55478 #67 (6 connections now open)
2016-08-10T19:35:52.725+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55487 #68 (7 connections now open)
2016-08-10T19:36:20.498+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55488 #69 (8 connections now open)
2016-08-10T19:36:20.508+0300 I NETWORK [conn69] end connection 127.0.0.1:55488 (7 connections now open)
2016-08-10T19:36:33.100+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55490 #70 (8 connections now open)
2016-08-10T19:36:37.155+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55492 #71 (9 connections now open)
2016-08-10T19:36:48.105+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55493 #72 (10 connections now open)
2016-08-10T19:36:50.284+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55494 #73 (11 connections now open)
2016-08-10T19:36:52.157+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55495 #74 (12 connections now open)
2016-08-10T19:36:53.328+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55496 #75 (13 connections now open)
2016-08-10T19:37:01.375+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55497 #76 (14 connections now open)
2016-08-10T19:37:05.287+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55498 #77 (15 connections now open)
2016-08-10T19:37:05.827+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55499 #78 (16 connections now open)
2016-08-10T19:37:05.836+0300 I NETWORK [conn78] end connection 127.0.0.1:55499 (15 connections now open)
2016-08-10T19:37:08.333+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55500 #79 (16 connections now open)
2016-08-10T19:37:16.376+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55521 #80 (17 connections now open)
2016-08-10T19:37:23.323+0300 W NETWORK [HostnameCanonicalizationWorker] Failed to obtain name info for: [ (192.168.0.102, "nodename nor servname provided, or not known"), (192.168.0.102, "nodename nor servname provided, or not known") ]
2016-08-10T19:40:41.079+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55546 #81 (18 connections now open)
2016-08-10T19:40:41.087+0300 I NETWORK [conn81] end connection 127.0.0.1:55546 (17 connections now open)
I am very new to Go, so this was the simplest way I managed to make it work. However, I would really like to know how to limit the open connections to one.
A: You missed a defer closeSession(session) in ReceiveData, so that handler never closes the session it dials.
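Beyond that one-line fix, the usual mgo pattern is to Dial once at startup and give each request a cheap Copy of the root session, released with defer. The defer discipline can be sketched without a running MongoDB (the Session type below is a stand-in, not mgo itself):

```go
package main

import "fmt"

// Stand-in for mgo.Session: tracks how many per-request copies are open.
type Session struct{ open *int }

// Dial simulates creating the one root session at program startup.
func Dial() *Session { n := 0; return &Session{open: &n} }

func (s *Session) Copy() *Session { *s.open++; return &Session{open: s.open} }
func (s *Session) Close()         { *s.open-- }
func (s *Session) Leaked() int    { return *s.open }

var root = Dial() // one shared session/pool for the whole program

func handleRequest() {
	sess := root.Copy() // cheap per-request copy (reuses root's sockets in real mgo)
	defer sess.Close()  // the line that was missing in ReceiveData
	_ = sess            // ... use sess.DB("schedule_calculator").C("schedule_save") here ...
}

func main() {
	for i := 0; i < 100; i++ {
		handleRequest()
	}
	fmt.Println("leaked copies:", root.Leaked()) // 0 when every handler defers Close
}
```

With real mgo the shape is the same: mgo.Dial in main, session.Copy() at the top of each handler, defer copy.Close() immediately after.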
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38879356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I modify all comments in git repository? I have managed my project using Git.
But I must publish this repository, so I have to modify all the comments and author names in the Git repo, because I can't publish some of the comments and author names.
My Git repo has 99 commits and is synced with a remote repository.
There are a lot of commits to edit when using git rebase -i.
How can I modify the comments and author names?
A: You are after git filter-branch. With it you can easily change committer names, author names and commit messages throughout the whole history. But be aware that this changes all SHA1 values, so if someone has cloned that repository and based work off of it, he has to manually rebase all his branches onto the new history.
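A hedged sketch of what that looks like (the repository, names, and sed pattern below are all illustrative; run it on a throwaway clone first, and remember that every SHA1 changes):

```shell
# Throwaway repo to demonstrate the rewrite.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name="Secret Author" -c user.email="secret@corp.example" \
    commit -q --allow-empty -m "internal: secret comment"

# Rewrite the whole history: new author/committer identity, scrubbed messages.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f \
  --env-filter '
    export GIT_AUTHOR_NAME="Public Author"
    export GIT_AUTHOR_EMAIL="public@example.com"
    export GIT_COMMITTER_NAME="Public Author"
    export GIT_COMMITTER_EMAIL="public@example.com"
  ' \
  --msg-filter 'sed "s/secret/redacted/g"' -- --all

git log --format='%an <%ae> %s'
```

After the rewrite you would force-push (git push --force) to replace the published history, since none of the old commits survive.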
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41186417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3307/db I have seen this error pop up many times. I have searched the web and tried things like adding the mysql-connector JAR to the build path, but nothing worked for me.
This code works when running under Spigot; here it is running as another Linux user, without Spigot.
I'm running this code in a plugin for the JTS3ServerMod for TS3 on Debian Linux with Java 8.
The full error message is:
java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3307/db
at java.sql.DriverManager.getConnection(DriverManager.java:689)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at net.mysticsouls.TeamSpeakBot.utils.NameUUIDUtils.connect(NameUUIDUtils.java:31)
at net.mysticsouls.TeamSpeakBot.utils.Updater.start(Updater.java:11)
at net.mysticsouls.TeamSpeakBot.TeamSpeakBot.activate(TeamSpeakBot.java:45)
at de.stefan1200.jts3servermod.JTS3ServerMod.e(Unknown Source)
at de.stefan1200.jts3servermod.JTS3ServerMod.b(Unknown Source)
at de.stefan1200.jts3servermod.i.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
And my Code calling it is:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.UUID;
public class NameUUIDUtils {
private static final String host = "localhost";
private static final String port = "3307";
private static final String database = "db";
private static final String username = "username";
private static final String password = "*********";
private static Connection connection;
public static Connection getConnection() {
return connection;
}
public static boolean isConnected() {
return connection != null;
}
public static void connect() {
if (!isConnected()) {
try {
connection = DriverManager.getConnection("jdbc:mysql://" + host + ":" + port + "/" + database, username,
password);
System.out.println("[NameUUID] MySQL connected!");
} catch (SQLException ex) {
ex.printStackTrace();
System.out.println("[NameUUID] MySQL failed to connect!");
}
}
if (isConnected()) {
try {
PreparedStatement preparedStatement = getConnection().prepareStatement(
"CREATE TABLE IF NOT EXISTS CoinSystem (Spielername VARCHAR(100), UUID VARCHAR(100), Coins INT(100), Strafpunkte INT(100))");
preparedStatement.executeUpdate();
preparedStatement.close();
System.out.println("[NameUUID] MySQL Table created!");
} catch (SQLException ex) {
System.out.println("[NameUUID] MySQL Table failed to create!");
}
}
}
}
What am I doing wrong?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45936521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to load related objects after saving a junction table record by foreign keys only? I am trying to add a new student and after that, insert data into a junction table called 'AdditionalCourse' with the help of EF Core 3.1.
Whenever I add the junction rows (additional courses) using only the foreign keys, the Course property is not filled in as expected after saving. The Student property, however, is filled in.
Can someone figure out what I'm doing wrong?
I can solve this by just adding a 'get' call in between to receive the freshly updated object, but I believe this should not be necessary.
A minimal working prototype below:
Program.cs
static void Main(string[] args)
{
using (var context = new SchoolContext())
{
Course course = new Course("Spanish");
context.Add(course);
context.SaveChanges();
Course course2 = new Course("French");
context.Add(course2);
context.SaveChanges();
Student student = new Student("Bill");
context.Add(student);
context.SaveChanges();
AdditionalCourse addCourse1 = new AdditionalCourse() { CourseId = course.Id, StudentId = student.Id };
AdditionalCourse addCourse2 = new AdditionalCourse() { CourseId = course2.Id, StudentId = student.Id };
student.AdditionalCourses.Add(addCourse1);
student.AdditionalCourses.Add(addCourse2);
context.SaveChanges();
// Debug at this line
var x = 0;
}
}
AdditionalCourse junction table
public class AdditionalCourse
{
public int CourseId { get; set; }
public Course Course { get; set; }
public int StudentId { get; set; }
public Student Student { get; set; }
}
Student.cs
public class Student
{
public int Id { get; set; }
public string Name { get; set; }
public List<AdditionalCourse> AdditionalCourses { get; set; }
public Student()
{
}
public Student(string name)
{
Name = name;
AdditionalCourses = new List<AdditionalCourse>();
}
}
Course.cs
public class Course
{
public int Id { get; set; }
public string Name { get; set; }
public Course()
{
}
public Course(string name)
{
Name = name;
}
}
SchoolContext.cs
public class SchoolContext : DbContext
{
public SchoolContext()
{ }
public SchoolContext(DbContextOptions<SchoolContext> options) : base(options)
{ }
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseSqlServer("Data Source=SERVER_NAME;Initial Catalog=DATABASE_NAME;Integrated Security=True;");
}
public DbSet<Student> Students { get; set; }
public DbSet<Course> Courses { get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Student>().ToTable("student");
modelBuilder.Entity<Course>().ToTable("course");
modelBuilder.Entity<AdditionalCourse>().ToTable("additional_course");
modelBuilder.Entity<AdditionalCourse>()
.HasKey(x => new { x.CourseId, x.StudentId });
modelBuilder.Entity<AdditionalCourse>()
.HasOne(x => x.Student)
.WithMany(x => x.AdditionalCourses)
.HasForeignKey(x => x.StudentId);
modelBuilder.Entity<AdditionalCourse>()
.HasOne(x => x.Course)
.WithMany()
.HasForeignKey(x => x.CourseId);
}
}
A:
Whenever I add the junction rows (additional courses) using only the foreign keys, the Course property is not filled in as expected after saving. The Student property, however, is filled in.
By design, EF Core does not automatically load navigation properties. The only exceptions are when lazy loading is enabled and the navigation property getter is called, when querying navigations to owned entities (or, in EF Core 5.0+, navigations configured as AutoInclude), or (which is the case with your Student property) when the referenced object is already loaded (tracked) by the context. The latter is called navigation property fixup, and it can happen both during query materialization and later.
Since you are not using lazy loading, the only guaranteed way to get navigation properties populated is to eager/explicitly load them afterwards.
For more info, see Loading Related Data section of the EF Core documentation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65325557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to parse an array to json using ruby on rails I'm trying to render a hash as JSON, but in index.json.jbuilder I get an empty hash.
What am I doing wrong? Here is my code:
def self.fake_objects
fake_objects = {id: 1,
title: 'appointment one',
description: 'bla bla bla',
start_time: '2014-08-19 14:00:00.000000000 Z',
end_time: '2014-08-19 14:30:00.000000000 Z'}
end
events_controller
def index
@events = Event.all
@fake_objects = Event.fake_objects
end
index.json.jbuilder
(@fake_objects).to_json do |event|
json.extract! event, :id, :title, :description
json.start event.start_time
json.end event.end_time
json.url event_url(event, format: :html)
end
class Event < ActiveRecord::Base
def self.fake_objects
fake_objects = Event.new(id: 1,
title: 'appointment one',
description: 'bla bla bla',
start_time: '2014-08-19 14:00:00.000000000 Z',
end_time: '2014-08-19 14:30:00.000000000 Z')
end
end
A: try something like:
json.events @fake_objects do |json, event|
json.extract! event, :id, :title, :description
json.start event.start_time
json.end event.end_time
json.url event_url(event, format: :html)
end
but @fake_objects should be defined as an array ([]); for a single event you could write:
json.event do |json|
json.extract! @fake_object, :id, :title, :description
json.start @fake_object.start_time
json.end @fake_object.end_time
json.url event_url(@fake_object, format: :html)
end
If you need just an array of results (without the enclosing events: []), write json.array! instead.
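The shape of the JSON a json.array! template produces can be sketched in plain Ruby, without Rails or jbuilder (the Event stand-in below is hypothetical):

```ruby
require "json"

# Hypothetical stand-in for the Event model.
Event = Struct.new(:id, :title, :description, :start_time, :end_time)

events = [
  Event.new(1, "appointment one", "bla bla bla",
            "2014-08-19 14:00:00 Z", "2014-08-19 14:30:00 Z"),
]

# The array of hashes a `json.array!` block would serialize:
payload = events.map do |event|
  { id: event.id, title: event.title, description: event.description,
    start: event.start_time, end: event.end_time }
end

puts JSON.generate(payload)
```

Note that payload is an array even for a single event, which matches the calendar-style consumers that expect a top-level JSON array.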
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25485704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Develop a chrome extension to count tags of currently accessed webpage Hey, I'm new to JavaScript and Chrome extension development. I'm trying to develop a Chrome extension which uses a browser action to count the number of tags in the currently active tab of the Google Chrome browser. I can use getElementsByTagName().length to calculate the number of tags, and I know that I can use the console API to access the DOM of a webpage, but I have no idea how to call that API from my JavaScript file. Do you guys know anything regarding this?
A: To get access to the current page DOM you need to write a content script.
1. Specify the content script in manifest.json
"content_scripts": [
{
"matches": ["http://www.google.com/*"],
"css": ["mystyles.css"],
"js": ["jquery.js", "myscript.js"]
}
]
If you need to inject the script on demand, use programmatic injection by specifying the permissions field:
{
"name": "My extension",
...
"permissions": [
"activeTab"
],
...
}
2. I would prefer the latter in this case. In popup.js add the code:
function printResult(result){
//or You can print the result using innerHTML
console.log(result);
}
chrome.tabs.executeScript(null, { file: 'content.js' },function(result){
printResult(result);
});
3. In the content script you have access to the current page/active tab DOM.
var result=document.getElementsByTagName("input").length;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28688674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Xcode: How can I slide left/right between UIViews? I want the easiest way to implement a sliding right/left gesture between multiple UIViews like AngryBirds levels as in the below screenshot
A: Your best bet is probably to not use a slide gesture, instead use a UIScrollView with a contentSize.width smaller than its frame.size.width (to show the previous/next pages), with pagingEnabled = YES and clipsToBounds = NO.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/10798595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there any option to block messages of a specific person without blocking him from calling? Is there any option on Android phones to block messages from a specific person without blocking him from calling? Also, will the sender be able to tell that he is blocked from calling/messaging that phone?
A: Try this one: SMS Blacklist for Android to block spam.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15127550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: how to find the remainder of division Q=A/B, where Q is a real number expressed as a pair of 8-bit fields:
*
*most significant 8 bits for the integer part
*least significant 8 bits for the fractional part
*the number is unsigned
for example:
0 0 1 0 1 1 0 1 . 0 1 0 1 0 0 0 0
Can you find the remainder of the division on paper if you know B? How?
I'll give an example:
2/172 :
0000 0010 . 0000 0000 /
1010 1100 . 0000 0000
=0000 0000 . 0001 0010
0000 0000 . 0001 0010 *
1010 1100 . 0000 0000
=0000 0000 . 1100 0001 (should be 2, or at least something greater than 1.5)
A: There are two algorithms: restoring and non-restoring. This is very well described in Division Algorithms and Hardware Implementations by Sherif Galal and Dung Pham. And here is a page about an implementation in VHDL.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19693111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Django: validating forms declared on client side with angularjs I'd like to replicate something similar to the ModelForm, but for many instances (through an array of dictionaries).
If we have
models.py
class Article(models.Model):
title = models.CharField()
author = models.CharField()
and modelforms.py
class ArticleForm(ModelForm):
class Meta:
model = Article
I can do
>> form = ArticleForm({"title": "example title", "author": "example author"})
>> form.is_valid()
True
Now, suppose I have an array of dictionaries [{"title": "example title 1", "author": "example author 1"}, {"title": "example title 2", "author": "example author 2"}, {"title": "example title 3", "author": "example author 3"}] that was passed by a post call, I'd like to do form validation with the passed info.
Is there a way to do it all in one go so I can check if all of them are valid? Or is a list comprehension with model forms my best workaround? I know of formsets, but I'm using client-side forms and want to validate them on the server side.
A:
Or is list comprehension with modelforms my best workaround?
Yes, or maps and filters:
valid_forms = filter(lambda fm: fm.is_valid(),
map(ArticleForm, some_list_of_dictionaries))
If you're using Python3, this will return a generator object, which you can iterate over or you can immediately evaluate using something like the following:
valid_forms = list(filter(lambda fm: fm.is_valid(),
map(ArticleForm, some_list_of_dictionaries)))
If you're only interested in finding out if all of them are valid, you can do:
all_forms_are_valid = all(fm.is_valid()
for fm in map(ArticleForm, some_list_of_dictionaries))
But if you want to access them later on (and you're on Python3), it seems like a list comprehension is a better way to go because otherwise you're inefficiently building the same generator object twice.
Edit with a question:
Is there any difference between map(ArticleForm, some_list_of_dictionaries) and [ArticleForm(dict) for dict in some_list_of_dictionaries]?
It depends on whether you're using Python 3. In Python 3, map will return a generator while in Python 2 it will evaluate the expression (return a list); the list comprehension will evaluate the complete list in either version.
In Python3, there can be some advantages to doing something like this:
valid_forms = filter(lambda fm: fm.is_valid(),
map(ArticleForm, some_list_of_dictionaries))
Because you end up with a generator that pulls data through it instead of first evaluating into a list with the map and then evaluating into another list with the filter.
Edit 2:
Forgot to mention that you can turn your list comprehension into a generator by using parentheses instead:
all_valid = all(ArticleForm(fm).is_valid() for fm in some_list_of_dictionaries)
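The laziness difference can be demonstrated without Django by substituting a hypothetical stand-in for ArticleForm:

```python
# Stand-in for ArticleForm: "valid" when both fields are non-empty.
class FakeForm:
    def __init__(self, data):
        self.data = data

    def is_valid(self):
        return bool(self.data.get("title")) and bool(self.data.get("author"))


payload = [
    {"title": "example title 1", "author": "example author 1"},
    {"title": "", "author": "example author 2"},   # invalid: empty title
    {"title": "example title 3", "author": "example author 3"},
]

# Python 3: filter(map(...)) builds a lazy pipeline; list() forces it once.
valid_forms = list(filter(lambda fm: fm.is_valid(), map(FakeForm, payload)))
print(len(valid_forms))                      # 2

# all() short-circuits on the first invalid form.
all_valid = all(fm.is_valid() for fm in map(FakeForm, payload))
print(all_valid)                             # False
```

Swapping FakeForm for the real ArticleForm gives exactly the one-pass validation the question asks about.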
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25106649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: C++ Developer functions that can only be called by other developer functions We have a number of functions that are very useful for development and testing, but should not be part of any production code - mostly for performance reasons. Our goal is to have the compiler ensure that functions marked as DEV_ONLY can only be called by functions with the same tag.
How would I implement something like:
virtual int foo() DEV_ONLY;
int bar() {
foo(); // fails
}
int blah() DEV_ONLY {
foo(); // works
}
with DEV_ONLY being a macro or something else?
The following ideas have been proposed so far, but are not completely what I am looking for:
*
*volatile: One option that I found was to mark them as "volatile" (see Dr. Dobbs), but I have two issues with that. First, it would misuse a specifier that has different semantics, potentially causing issues in the future. Second, the compiler warnings about functions being "volatile" would not be as helpful.
*friend: In my understanding, this would require friendship to be declared in the class that implements such a method. Since the tests or dev tools that use the method are not known beforehand, I am not a friend of the friend solution.
*not exporting: The code that may or may not use the method is possibly even within the same class.
*substituting with noop in Release build: Tests might still require those methods in Release mode.
A: The #ifdef preprocessor directive should be the most straightforward way to achieve two of your goals:
*
*not be part of any production code
*ensure that they can only be called from functions with the same "tag" (if they're not there, a build with DEV_ONLY undefined would not compile)
That would mean to wrap the function bodies as well as the corresponding calls.
As to the test methods that should be available in release builds: Then they are not DEV_ONLY, and should not be marked as such.
A: One way to deal with this is to use a build system that allows you to define libraries as test only and restricts production binaries from using them. For example, bazel provides the testonly option (http://bazel.io/docs/be/common-definitions.html#common.testonly)
Then you organize your code into your main binary/library, your test only library, and your test code. That would give you something like:
cc_library(
name = "foo",
srcs = ["foo.cc"],
hdrs = ["foo.h"],
)
cc_library(
name = "test-utils",
srcs = ["test-utils.cc"],
hdrs = ["test-utils.h"],
testonly = 1,
)
cc_test(
...
deps = ["foo", "test-utils"], # works
)
cc_library(
...
deps = [..., "test-utils"], # fails
)
cc_binary(
...
deps = ["test-utils"], # fails
)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39019011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: c# htmlAgilityPack read from id block until another id block I have an HTML document like this:
<tr id="__TOC_1">
<div id="AUTOGENBOOKMARK_3_7899df20-f104-434d-a5e4-fa293412f5db">
<div style="visibility:hidden"> </div>
</div>
<div>District name</div>
<div>July 2019</div>
<div>Something</div>
<div>Something</div>
<div>Address</div>
<div>
<div style="visibility:hidden"> </div>
</div>
<div>
<div style="visibility:hidden"> </div>
</div>
</tr>
<tr>
<td class="style_6" >
<div class="style_7" id="AUTOGENBOOKMARK_4_d4d6">Apartment number.</div>
</td>
<td class="style_6">
<div class="style_7" id="AUTOGENBOOKMARK_5_87b456a7" >Personal account</div>
</td>
<td class="style_6">
<div class="style_7" id="AUTOGENBOOKMARK_6_2b05c0c6">Accrued</div>
</td>
<td class="style_6" >
<div class="style_7" id="AUTOGENBOOKMARK_7_f66f8084">Received</div>
</td>
</tr>
<tr>
<td class="style_6">
<div>195</div>
</td>
<td class="style_6">
<div>00060631402</div>
</td>
<td class="style_6">
<div>155.63</div>
</td>
<td class="style_6">
<div>155.63</div>
</td>
</tr>
<tr>
<td class="style_6">
<div>Total</div>
</td>
<td class="style_6">
<div>30</div>
</td>
<td class="style_6">
<div>0.00</div>
</td>
<td class="style_6">
<div>271.04</div>
</td>
</tr>
and this block repeats n times, with __TOC_2, __TOC_3, and so on.
I need to take the district name and the month date, skip the next (header) block, and take all the info until the 'Total' row,
and I want to write it into my objects:
public class PaymentInfo
{
public string District { get; set; }
public string PaymentDate { get; set; }
public string Address { get; set; }
}
public class Payment
{
public string ApartmentNumber { get; set; }
public int PersonalAccount { get; set; }
public decimal Accrued { get; set; }
public decimal Received { get; set; }
}
I think I need to read from the first id block until the next one, but I don't understand how to stop and how to filter the information.
A: You can get these nodes with the XPath starts-with function:
//tr[starts-with(@id,'__TOC')]
Then foreach over the results to process each block, relying on its fixed structure:
*
*the div order within the block to get the district name, address, ...
*the div ids AUTOGENBOOKMARK_4, AUTOGENBOOKMARK_5, ... to get Apartment number, Personal account, ...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60595933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: The argument type 'User? Function(User)' can't be assigned to the parameter type 'User? Function(User?)' I've been coding a method that searches for the user. The analyzer shows that the argument type 'User? Function(User)' can't be assigned to the parameter type 'User? Function(User?)'.
What does this error mean? Please tell me what is wrong with this piece of code.
Stream<User?> get currentUser {
return _firebaseAuth.authStateChanges().map((User user){
return user != null ? User.fromFirebase(user, 0) : null;
});
}
A: The .map method here requires you to pass a function that takes a nullable user (User?) and returns a nullable user, but you have declared the parameter as a non-nullable User. Add a ? to the type of your parameter to make it nullable.
Stream<User?> get currentUser {
return _firebaseAuth.authStateChanges().map((User? user){
return user != null ? User.fromFirebase(user, 0) : null;
});
}
Update
I used the CLASS User and, at the same time, the OBJECT User from Firebase. This led to confusion because I used the same name for two completely different constructs.
Stream<Client?> get currentUser {
return _firebaseAuth
.authStateChanges()
.map((User? user) =>
user != null ? Client.fromFirebase(user) : null);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68259318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Difference betwen TSO/RMO/PSO and Power/ARM This question is about memory consistency. There is an example below that might help if it's unclear.
A problem I am looking at asks for code that can do something when executed on Power/ARM that it could not do on SPARC under RMO. Is this possible, please? Many thanks.
[The hint I am given is that Power/ARM don't do atomic stores. This is in the sense that a store can appear in different L1 caches at different times, rather than in the sense that it's possible to get a view of a partly executed write. I think the hint might not be right because RMO does not preserve load order, and that in turn can do anything that a non-atomic write could?]
To clarify the question, suppose I asked about TSO rather than RMO. The answer could have four threads: i) x=1, ii) y=2, iii) r1=[x]; r2=[y]; iv) r3=[y]; r4=[x]. Variables x and y are initialised to 0. The outcome r1, r2, r3, r4 = 1, 0, 2, 0 would not be possible under TSO which only delays writes (and does not reorder anything). Either the assignment to x or y happened first, and the result is inconsistent with either of those possibilities. The outcome can however occur on the ARM because different CPUs can see different writes at different times.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30529927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: I am trying to place a bootstrap carousel on top of a bootstrap modal in my rails app, is this possible I upload my images using Paperclip, and I have six images. Right now when I click the image, the modal opens up and shows that image. That's good.
Now I am trying to add a carousel to the modal, so that when I click the image the modal opens up and I can slide through all the uploaded images for each project in my portfolio.
- content_for :title, "Portfolio Page"
= stylesheet_link_tag "application", media: :all
= stylesheet_link_tag "articles", media: :all
= stylesheet_link_tag "portfolio", media: :all
= javascript_include_tag "application"
= javascript_include_tag "portfolio"
%div.container.buffered-top
-# List articles in reverse order to show the last article first.
- @portfolios.reverse.each_with_index do |portfolio,index|
= will_paginate @portfolios, renderer: BootstrapPagination::Rails
/ Trigger the modal with a button
/ Modal
.modal.fade{:role => "dialog", id: "#{"myModal" + index.to_s}"}
.modal-dialog
/ Modal content
.modal-content
.modal-header
%i.fa.fa-times.fa-2x.close{"aria-hidden" => "true","data-dismiss" => "modal", :type => "button"}
%br
%h4.modal-title=portfolio.title
.modal-body
.picture
.carousel.slide{"data-ride" => "carousel",id: "#{"carousel-example-generic" + index.to_s}"}
%ol.carousel-indicators
%li.active{"data-slide-to" => "0", "data-target" => "#carousel-example-generic#{index.to_s}"}
%li{"data-slide-to" => "1", "data-target" => "#carousel-example-generic#{index.to_s}"}
%li{"data-slide-to" => "2", "data-target" => "#carousel-example-generic#{index.to_s}"}
.carousel-inner{:role => "listbox"}
.carousel-item.active
=image_tag portfolio.image1.url(:thumb),:class => "style_image img-responsive"
.carousel-item
=image_tag portfolio.image2.url(:thumb),:class => "style_image img-responsive"
.carousel-item
=image_tag portfolio.image3.url(:thumb),:class => "style_image img-responsive"
%a.left.carousel-control{"data-slide" => "prev", :href => "#carousel-example-generic#{index.to_s}", :role => "button"}
%span.icon-prev{"aria-hidden" => "true"}
%span.sr-only Previous
%a.right.carousel-control{"data-slide" => "next", :href => "#carousel-example-generic#{index.to_s}", :role => "button"}
%span.icon-next{"aria-hidden" => "true"}
%span.sr-only Next
- if !portfolio.link.blank?
%a.link{:href => "http://#{portfolio.link}",:target => "_blank"}
%i.fa.fa-github.fa-2x
%br
%br
-# Show the first 350 characters of the article
%p.text= portfolio.text
.col-xs-12.col-sm-6.col-md-6
.image.col-sm-12
%a{"data-target" => "#myModal#{index.to_s}", "data-toggle" => "modal", :index => index}
=image_tag portfolio.image1.url(:medium),:class => "style_image img-responsive", :index => index
%a{"data-target" => "#myModal#{index.to_s}", "data-toggle" => "modal"}
.imgDescription
.tags
=raw portfolio.all_tags
.title
=portfolio.title
%br
%i.fa.fa-search.fa-2x{"aria-hidden" => "true"}
A: Yes, it's possible.
Take a look at this example:
<body>
<div class="container">
<div class="wrapper">
<a class="btn btn-primary btn-lg" id="open-modal-button" data-target=".mymodal" data-toggle="modal">Open Me </a>
</div>
<div aria-hidden="true" aria-labelledby="myModalLabel" class="modal fade mymodal" role="dialog" tabindex="-1">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="carousel slide" data-interval="false" data-ride="carousel" id="carousel">
<div class="carousel-inner">
<div class="item active">
<div class="row">
<div class="col-md-12">
<%= image_tag("image1.png") %>
<button aria-label="Close" class="btn btn-primary" data-dismiss="modal">Close Window </button>
</div>
</div>
</div>
<div class="item">
<div class="row">
<div class="col-md-6 col-md-offset-3">
<%= image_tag("image2.png") %>
<button aria-label="Close" class="btn btn-primary" data-dismiss="modal">Close Window </button>
</div>
</div>
</div>
<div class="item">
<div class="row">
<div class="col-md-6 col-md-offset-3">
<%= image_tag("image3.png") %>
<button aria-label="Close" class="btn btn-primary" data-dismiss="modal">Close Window </button>
</div>
</div>
</div>
<div class="item">
<div class="row">
<div class="col-md-6 col-md-offset-3">
<%= image_tag("image4.png") %>
<button aria-label="Close" class="btn btn-primary" data-dismiss="modal">Close Window </button>
</div>
</div>
</div>
<a class="left carousel-control" data-slide="prev" href="#carousel" role="button"><span class="glyphicon glyphicon-chevron-left"></span></a>
<a class="right carousel-control" data-slide="next" href="#carousel" role="button"><span class="glyphicon glyphicon-chevron-right"></span></a>
</div>
</div>
</div>
</div>
</div>
</body>
Some Css:
body {
margin: 0 auto;
text-align: center;
}
.wrapper {
text-align: center;
}
#open-modal-button {
position: absolute;
top: 30%;
}
Just be sure to put a conditional statement before the carousel in order to validate the presence of the images.
Note: Just use bootstrap.js and bootstrap.css.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38666729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Create an array in object type I want to create an array in an object type, but I couldn't make it work. How can I do it?
CREATE OR REPLACE TYPE object AS OBJECT
(
type array1 IS VARRAY(1000) OF INTEGER,
exAr1 array1,
type array2 IS VARRAY(1000) OF INTEGER,
exAr2 array2,
);
/
A: You need to create the other types as database objects too:
create type array1 is varray(1000) of integer;
/
create type array2 is varray(1000) of integer;
/
create or replace type object as object
(
exar1 array1,
exar2 array2
);
Of course, since array1 and array2 types are identical, you don't really need them both:
create type array is varray(1000) of integer;
/
create or replace type object as object
(
exar1 array,
exar2 array
);
A: Try creating the TABLE types first and then referencing them while creating the OBJECT type. Let me know if this helps.
--Table type creation first
CREATE OR REPLACE TYPE NUMBER_NTT1
IS
TABLE OF NUMBER;
CREATE OR REPLACE TYPE NUMBER_NTT
IS
TABLE OF NUMBER;
--Object creation after that
CREATE OR REPLACE TYPE object
AS
OBJECT
(
exAr1 NUMBER_NTT,
exAr2 NUMBER_NTT1
);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34088291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Initializing an ActionBarDrawerToggle with some confusing code Android Studio 1.1 Beta 4
Hello,
I am inspecting some source code below, and I can't understand the reason behind it. I can understand this part
ActionBarDrawerToggle mActionBarDrawerToggle =
new ActionBarDrawerToggle(getActivity(), mDrawerLayout, toolbar, R.string.open, R.string.close)
Creating a new instance of the ActionBarDrawerToggle with a constructor that takes 5 arguments.
The part that is confusing is the braces after it; I have never seen that before. Is this a shortcut for doing something?:
{
@Override
public void onDrawerOpened(View drawerView) {
super.onDrawerOpened(drawerView);
}
@Override
public void onDrawerClosed(View drawerView) {
super.onDrawerClosed(drawerView);
}
};
complete:
private void init() {
ActionBarDrawerToggle mActionBarDrawerToggle =
new ActionBarDrawerToggle(getActivity(), mDrawerLayout, toolbar, R.string.open, R.string.close) {
@Override
public void onDrawerOpened(View drawerView) {
super.onDrawerOpened(drawerView);
}
@Override
public void onDrawerClosed(View drawerView) {
super.onDrawerClosed(drawerView);
}
};
A: The given construction:
SomeType st = new SomeType(){
...
}
creates an anonymous subclass extending SomeType and allows you to override/add methods, add members, perform initialization, etc.
In your case the {} creates an anonymous subclass extending ActionBarDrawerToggle and overriding the methods onDrawerOpened() and onDrawerClosed().
P.S. It's useful when you need a class only once.
A: ActionBarDrawerToggle implements DrawerLayout.DrawerListener, which has abstract methods. This means you have to define them.
ActionBarDrawerToggle does define them itself, but you can override them in the {} after the constructor. You are actually creating an anonymous subclass of ActionBarDrawerToggle (a class without a name).
You can read about it in the Java documentation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28498211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I configure my contact form to receive messages I am trying to configure my contact form for my remote server. The technical people at my hosting company have issued me these settings for my mail configuration:
(POP3/IMAP) & outgoing mail (SMTP) server name is: mail.yourdomain.com
ports are: POP3 -> 110, IMAP -> 143 and SMTP -> 25 or 2525
Unfortunately, I do not know where to insert that into my PHP contact file.
These are the contact files:
contact.html
<form role="form" action="contact.php" method="post">
<div class="text-fields">
<div class="form-group">
<input type="text" class="form-control" name="bbname" id="bbname" placeholder="name:">
</div>
<div class="form-group">
<input type="email" class="form-control" name="bbemail" id="bbemail" placeholder="email:">
</div>
<div class="form-group">
<input type="text" class="form-control" name="bbphone" id="bbphone" placeholder="phone:">
</div>
</div>
<div class="submit-area">
<div class="form-group">
<textarea class="form-control" placeholder="message:" name="bbmessage" id="bbmessage"></textarea>
</div>
<button type="submit" class="btn btn-default" id="bbsubmit">Send it</button>
</div>
</form>
contact.php
<?php
$field_name = $_POST['bbname'];
$field_email = $_POST['bbemail'];
$field_phone = $_POST['bbphone'];
$field_message = $_POST['bbmessage'];
$mail_to = '[email protected]';
$subject = 'Message from '.$field_name;
$body_message = 'From: '.$field_name."\n";
$body_message .= 'E-mail: '.$field_email."\n";
$body_message .= 'Phone: '.$field_phone."\n";
$body_message .= 'Message: '.$field_message;
$headers = 'From: '.$field_email."\r\n";
$headers .= 'Reply-To: '.$field_email."\r\n";
$mail_status = mail($mail_to, $subject, $body_message, $headers);
if ($mail_status) { ?>
<script language="javascript" type="text/javascript">
alert('Thank you for the message. I will contact you shortly.');
window.location = 'contact.html';
</script>
<?php
}
else { ?>
<script language="javascript" type="text/javascript">
alert('Message failed. Please, send an email to [email protected]');
window.location = 'contact.html';
</script>
<?php
}
?>
A: I advise you to use a ready-made mailer class for that instead; http://phpmailer.worxware.com/ is the best choice. I have used it for many years, and you can configure the SMTP connection very easily.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23416891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Importing a CSV file to a textbox but not formatting properly I want to add the contents of a CSV file containing hex values to a textbox and output each byte to a listbox.
When the add file button is clicked, the contents of the CSV file show up in the textbox, each byte separated by a comma. But when I hit the write button it throws an exception:
"System.FormatException: Additional non-parsable characters are at the end of the string."
I can't see how this is happening, as it works just fine when other hex values separated by commas are entered. The format is exactly the same in the textbox (e.g. AA,66,FF,EE), but it just doesn't seem to work with CSV files?
private void AddFileSPI_Click(object sender, EventArgs e)
{
string AddFile = "";
DialogResult result = openFile.ShowDialog();
if (result == DialogResult.OK)
{
string file = openFile.FileName;
try
{
AddFile = File.ReadAllText(file);
}
catch (IOException ex)
{
MessageBox.Show(ex.Message);
}
}
Value.Text = AddFile;
}
private void Write_Click(object sender, EventArgs e)
{
string hex = Value.Text;
string[] hex1 = hex.Split(',');
byte[] bytes1 = new byte[hex1.Length];
for (int j = 0; j < hex1.Length; j++)
{
bytes1[j] = Convert.ToByte(hex1[j], 16);
hexValues1.Add(bytes1[j]);
writebuff = hexValues1.ToArray();
hexValue = writebuff[x].ToString("X2");
WriteHexValues.Items.Add("0x" + hexValue);
x++;
}
}
A: Remove whitespace:
for (int j = 0; j < hex1.Length; j++)
{
string fieldString = hex1[j].Trim();
if(string.IsNullOrWhiteSpace(fieldString)) throw ... // or other error handling
bytes1[j] = Convert.ToByte(fieldString, 16);
Should help...
A: change hex.Split(','); to
hex.Split(",\r\n".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
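Both answers amount to the same idea: trim each field and discard empty ones before parsing. A language-neutral sketch of that idea in Python (a hypothetical helper I'm adding for illustration, not part of either answer):

```python
def parse_hex_csv(text):
    # Split on commas, trim whitespace (spaces, \r, \n) from each field,
    # and skip empty fields left by a trailing comma or newline.
    fields = (field.strip() for field in text.split(","))
    return [int(field, 16) for field in fields if field]

print(parse_hex_csv("AA,66,FF,EE\n"))  # [170, 102, 255, 238]
print(parse_hex_csv("AA,66,"))         # [170, 102]
```

The trailing `\n` that a file read typically leaves behind is exactly the "non-parsable characters at the end of the string" that the converter chokes on; stripping it before conversion resolves the exception.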
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27623596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: `CakeResponse` referencing undefined variables in `compact()` call triggers errors/notices I’m updating my first CakePHP application on a web server, but pages show the errors listed at the end of this post. I can’t find the file(s) which have not uploaded correctly. Thanks for your help.
CakePHP version : 2.9
Error messages :
Notice (8): compact(): Undefined variable: etagMatches [CORE/Cake/Network/CakeResponse.php, line 1171]
Notice (8): compact() [function.compact]: Undefined variable: timeMatches [CORE/Cake/Network/CakeResponse.php, line 1171]
Notice (8): compact() [function.compact]: Undefined variable: subject [CORE/Cake/Utility/ObjectCollection.php, line 128]
A: As of PHP 7.3 compact() will trigger an error when referencing undefined variables.
This has been fixed in CakePHP 2.10.13; either upgrade your application (preferred) or downgrade your PHP version.
https://github.com/cakephp/cakephp/pull/12487
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62863886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: I just need Flutter/Dart help to guide me on how to draw a fixed box at the bottom of this map which shows location, speed, and battery information I need help drawing a fixed overlay box at the bottom of this map that shows location, speed, and battery information. This code shows the location from Firebase; as the map is embedded in a builder, I am confused about how to overlay the map with a static box containing text. Thanks.
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Live UOH Buses Locations'),
),
body: Builder(
builder: (context) {
return StreamBuilder(
stream: FirebaseFirestore.instance.collection('location').snapshots(),
builder: (context, AsyncSnapshot<QuerySnapshot> snapshot) {
if (_added) {
mymap(snapshot);
}
if (!snapshot.hasData) {
return Center(child: CircularProgressIndicator());
}
return GoogleMap(
mapType: MapType.normal,
markers: {
Marker(
position: LatLng(
snapshot.data!.docs.singleWhere(
(element) => element.id == widget.user_id)['latitude'],
snapshot.data!.docs.singleWhere(
(element) => element.id == widget.user_id)['longitude'],
),
markerId: MarkerId('id'),
icon: BitmapDescriptor.defaultMarker),
},
initialCameraPosition: CameraPosition(
target: LatLng(
snapshot.data!.docs.singleWhere(
(element) => element.id == widget.user_id)['latitude'],
snapshot.data!.docs.singleWhere(
(element) => element.id == widget.user_id)['longitude'],
),
zoom: 14.47),
onMapCreated: (GoogleMapController controller) async {
setState(() {
_controller = controller;
_added = true;
});
},
);
},
);
}
)
);
}
Future<void> mymap(AsyncSnapshot<QuerySnapshot> snapshot) async {
await _controller
.animateCamera(CameraUpdate.newCameraPosition(CameraPosition(
target: LatLng(
snapshot.data!.docs.singleWhere(
(element) => element.id == widget.user_id)['latitude'],
snapshot.data!.docs.singleWhere(
(element) => element.id == widget.user_id)['longitude'],
),
zoom: 14.47)));
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72569901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Huggingface Transformers Tensorflow fine-tuned distilgpt2 bad outputs I fine-tuned a model starting from the 'distilgpt2' checkpoint. I fit the model with the model.fit() method and saved the resulting model with the .save_pretrained() method.
When I use this model to generate text:
import transformers
from transformers import TFAutoModelForCausalLM, AutoTokenizer
original_model = 'distilgpt2'
path2model = 'clm_model_save'
path2tok = 'clm_tokenizer_save'
tuned_model = TFAutoModelForCausalLM.from_pretrained(path2model, from_pt=False)
tuned_tokenizer = AutoTokenizer.from_pretrained(path2tok)
input_context = 'The dog'
input_ids = tuned_tokenizer.encode(input_context, return_tensors='tf') # encode input context
outputs = tuned_model.generate(input_ids=input_ids,
max_length=40,
temperature=0.7,
num_return_sequences=3,
do_sample=True) # generate 3 candidates using sampling
for i in range(3): # 3 output sequences were generated
print(f'Generated {i}: {tuned_tokenizer.decode(outputs[i], skip_special_tokens=True)}')
The model returns the output:
>>>All model checkpoint layers were used when initializing TFGPT2LMHeadModel.
>>>All the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at clm_model_save.
>>>If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.
>>>Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
>>>Generated 0: The dog!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>>Generated 1: The dog!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>>Generated 2: The dog!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
When I use the original checkpoint, distilgpt2, the model generates text just fine. Is this a sign of some sort of misconfiguration, or simply a sign of a poorly trained model?
I've tried using the original checkpoint's tokenizer, manually setting the pad_token_id, using a much longer input context, and changing several parameters of the .generate() method. Same results each time.
Also, I added special tokens to my tuned_tokenizer:
tuned_tokenizer.special_tokens_map
>>>{'bos_token': '<|startoftext|>',
>>> 'eos_token': '<|endoftext|>',
>>> 'unk_token': '<|endoftext|>',
>>> 'pad_token': '<|PAD|>'}
Compared to the original tokenizer:
tokenizer.special_tokens_map
>>> {'bos_token': '<|endoftext|>',
>>> 'eos_token': '<|endoftext|>',
>>> 'unk_token': '<|endoftext|>'}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70369412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Redirect from jQuery modal to non-modal dashboard on log in Zend Framework application using jQuery. Login takes place via a nyroModal (jQuery plugin from http://nyromodal.nyrodev.com/). Everything works great, validation, etc - but once the user is logged in and Zend_Auth writes the identity I want to redirect to the dashboard. The redirect takes place inside of the modal instead of reloading the browser frame.
Here is the view script of the modal:
<?php if($this->login_success) echo $this->login_success; ?>
<div id="login_modal">
<h2>Login</h2>
<?php echo $this->form; ?>
<div class="submit" onclick="submitForm('login')">Log In</div>
</div>
Here is my submitForm():
function submitForm(thisform) {
var action = $('#' + thisform + '_form').attr('action');
var form = $('#' + thisform + '_form').serialize();
$.post(action, form, function(result) {
var response = $(result);
var html = response.filter('div:first').html();
$('#' + thisform + '_modal').html(html);
});
}
Here is the response from the authController on successful login:
$url = $this->view->url(array('controller' => 'billing',
'action' => 'index'), null, null);
$this->view->login_success = '<script type="text/javascript">
window.location = "'.$url.'"
</script>';
I've also tried just using:
$this->_helper->redirector('index', 'billing');
But that was always loading the dashboard in the modal; now I'm just seeing the Login header and form, per the first code block above.
Looking forward to answers on how to get this modal closed and the user properly redirected to /billing!
A: Try this:
parent.window.location = "http://your.url.com";
I think that's enough.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12271669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Connection Strings for Entity Framework I want to share the same database information across multiple entities in Silverlight, but I want the connection string to be named xyz and have everyone access that connection string from machine.config...
The metadata part of the entities will be different, since I didn't name the entities the same.
Can I put multiple entities in that metadata section?
Here is an example.. I want to use this connection string but note that i put multiple entities in the metadata section..
Basically I want to take this Connection String
<add name="XYZ" connectionString="metadata=res://*/ModEntity.csdl|res://*/ModEntity.ssdl|res://*/ModEntity.msl;provider=System.Data.SqlClient;provider connection string="Data Source=SomeServer;Initial Catalog=SomeCatalog;Persist Security Info=True;User ID=Entity;Password=SomePassword;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient" />
And this Connection String
<add name="XYZ" connectionString="metadata=res://*/Entity.csdl|res://*/Entity.ssdl|res://*/Entity.msl;provider=System.Data.SqlClient;provider connection string="Data Source=SOMESERVER;Initial Catalog=SOMECATALOG;Persist Security Info=True;User ID=Entity;Password=Entity;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient" />
To make this Connection String
<add name="XYZ" connectionString="metadata=res://*/Entity.csdl|res://*/Entity.ssdl|res://*/Entity.msl|res://*/ModEntity.csdl|res://*/ModEntity.ssdl|res://*/ModEntity.msl;provider=System.Data.SqlClient;provider connection string="Data Source=SOMESERVER;Initial Catalog=SOMECATALOG;Persist Security Info=True;User ID=Entity;Password=SOMEPASSWORD;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient" />
But it simply doesn't work. Neither project can connect to it.
string encConnection = ConfigurationManager.ConnectionStrings[connectionName].ConnectionString;
Type contextType = typeof(test_Entities);
object objContext = Activator.CreateInstance(contextType, encConnection);
return objContext as test_Entities;
A: Instead of using config files you can use a configuration database with a scoped systemConfig table and add all your settings there.
CREATE TABLE [dbo].[SystemConfig]
(
[Id] [int] IDENTITY(1, 1)
NOT NULL ,
[AppName] [varchar](128) NULL ,
[ScopeName] [varchar](128) NOT NULL ,
[Key] [varchar](256) NOT NULL ,
[Value] [varchar](MAX) NOT NULL ,
CONSTRAINT [PK_SystemConfig_ID] PRIMARY KEY NONCLUSTERED ( [Id] ASC )
WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON ) ON [PRIMARY]
)
ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[SystemConfig] ADD CONSTRAINT [DF_SystemConfig_ScopeName] DEFAULT ('SystemConfig') FOR [ScopeName]
GO
With such a configuration table you can create rows like this:
Then, from your application DAL(s) wrapping EF, you can easily retrieve the scoped configuration.
If you are not using DAL(s) and are working directly with EF on the wire, you can make an entity from the SystemConfig table and use the value depending on the application you are on.
A: Unfortunately, combining multiple entity contexts into a single named connection isn't possible. If you want to use named connection strings from a .config file to define your Entity Framework connections, they will each have to have a different name. By convention, that name is typically the name of the context:
<add name="ModEntity" connectionString="metadata=res://*/ModEntity.csdl|res://*/ModEntity.ssdl|res://*/ModEntity.msl;provider=System.Data.SqlClient;provider connection string="Data Source=SomeServer;Initial Catalog=SomeCatalog;Persist Security Info=True;User ID=Entity;Password=SomePassword;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient" />
<add name="Entity" connectionString="metadata=res://*/Entity.csdl|res://*/Entity.ssdl|res://*/Entity.msl;provider=System.Data.SqlClient;provider connection string="Data Source=SOMESERVER;Initial Catalog=SOMECATALOG;Persist Security Info=True;User ID=Entity;Password=Entity;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient" />
However, if you end up with namespace conflicts, you can use any name you want and simply pass the correct name to the context when it is generated:
var context = new Entity("EntityV2");
Obviously, this strategy works best if you are using either a factory or dependency injection to produce your contexts.
Another option would be to produce each context's entire connection string programmatically, and then pass the whole string in to the constructor (not just the name).
// Get "Data Source=SomeServer..."
var innerConnectionString = GetInnerConnectionStringFromMachinConfig();
// Build the Entity Framework connection string.
var connectionString = CreateEntityConnectionString("Entity", innerConnectionString);
var context = new EntityContext(connectionString);
How about something like this:
Type contextType = typeof(test_Entities);
string innerConnectionString = ConfigurationManager.ConnectionStrings["Inner"].ConnectionString;
string entConnection =
string.Format(
"metadata=res://*/{0}.csdl|res://*/{0}.ssdl|res://*/{0}.msl;provider=System.Data.SqlClient;provider connection string=\"{1}\"",
contextType.Name,
innerConnectionString);
object objContext = Activator.CreateInstance(contextType, entConnection);
return objContext as test_Entities;
... with the following in your machine.config:
<add name="Inner" connectionString="Data Source=SomeServer;Initial Catalog=SomeCatalog;Persist Security Info=True;User ID=Entity;Password=SomePassword;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />
This way, you can use a single connection string for every context in every project on the machine.
A: First try to understand how an Entity Framework connection string works; then you will get an idea of what is wrong.
*
*You have two different models, Entity and ModEntity.
*This means you have two different contexts; each context has its own storage model, conceptual model, and mapping between the two.
*You have simply combined the strings, but how will Entity's context know that it has to pick up Entity.csdl and that ModEntity will pick up ModEntity.csdl? Someone could write some intelligent code for that, but I don't think that is the primary role of the EF development team.
*Also, machine.config is a bad idea.
*If web apps are moved to a different machine, to a shared hosting environment, or around for maintenance purposes, it can lead to problems.
*Everybody will be able to access it; you are making it insecure. If anyone can deploy a web app or any .NET app on the server, they get full access to your connection string, including your sensitive password information.
Another alternative is to create your own constructor for your context, pass in your own connection string, and write some if conditions etc. to load defaults from web.config.
A better approach is to leave the connection strings as they are, give your application pool an identity that has access to your database server, and not include the username and password inside the connection string.
A: To enable the same edmx to access multiple databases and database providers, and vice versa, I use the following technique:
1) Define a ConnectionManager:
public static class ConnectionManager
{
public static string GetConnectionString(string modelName)
{
var resourceAssembly = Assembly.GetCallingAssembly();
var resources = resourceAssembly.GetManifestResourceNames();
if (!resources.Contains(modelName + ".csdl")
|| !resources.Contains(modelName + ".ssdl")
|| !resources.Contains(modelName + ".msl"))
{
throw new ApplicationException(
"Could not find connection resources required by assembly: "
+ System.Reflection.Assembly.GetCallingAssembly().FullName);
}
var provider = System.Configuration.ConfigurationManager.AppSettings.Get(
"MyModelUnitOfWorkProvider");
var providerConnectionString = System.Configuration.ConfigurationManager.AppSettings.Get(
"MyModelUnitOfWorkConnectionString");
string ssdlText;
using (var ssdlInput = resourceAssembly.GetManifestResourceStream(modelName + ".ssdl"))
{
using (var textReader = new StreamReader(ssdlInput))
{
ssdlText = textReader.ReadToEnd();
}
}
var token = "Provider=\"";
var start = ssdlText.IndexOf(token);
var end = ssdlText.IndexOf('"', start + token.Length);
var oldProvider = ssdlText.Substring(start, end + 1 - start);
ssdlText = ssdlText.Replace(oldProvider, "Provider=\"" + provider + "\"");
var tempDir = Environment.GetEnvironmentVariable("TEMP") + '\\' + resourceAssembly.GetName().Name;
Directory.CreateDirectory(tempDir);
var ssdlOutputPath = tempDir + '\\' + Guid.NewGuid() + ".ssdl";
using (var outputFile = new FileStream(ssdlOutputPath, FileMode.Create))
{
using (var outputStream = new StreamWriter(outputFile))
{
outputStream.Write(ssdlText);
}
}
var eBuilder = new EntityConnectionStringBuilder
{
Provider = provider,
Metadata = "res://*/" + modelName + ".csdl"
+ "|" + ssdlOutputPath
+ "|res://*/" + modelName + ".msl",
ProviderConnectionString = providerConnectionString
};
return eBuilder.ToString();
}
}
2) Modify the T4 that creates your ObjectContext so that it will use the ConnectionManager:
public partial class MyModelUnitOfWork : ObjectContext
{
public const string ContainerName = "MyModelUnitOfWork";
public static readonly string ConnectionString
= ConnectionManager.GetConnectionString("MyModel");
3) Add the following lines to App.Config:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<connectionStrings>
<add name="MyModelUnitOfWork" connectionString=... />
</connectionStrings>
<appSettings>
<add key="MyModelUnitOfWorkConnectionString" value="data source=MyPc\SqlExpress;initial catalog=MyDB;integrated security=True;multipleactiveresultsets=True" />
<add key="MyModelUnitOfWorkProvider" value="System.Data.SqlClient" />
</appSettings>
</configuration>
The ConnectionManager will replace the ConnectionString and Provider with whatever is in the App.Config.
You can use the same ConnectionManager for all ObjectContexts (so they all read the same settings from App.Config), or edit the T4 so it creates one ConnectionManager for each (in its own namespace), so that each reads separate settings.
A: What I understand is that you want the same connection string with different metadata in it. You can use a connection string as given below and replace the <METADATA> part. I have used your given connection strings in the same sequence.
connectionString="<METADATA>provider=System.Data.SqlClient;provider connection string="Data Source=SomeServer;Initial Catalog=SomeCatalog;Persist Security Info=True;User ID=Entity;Password=SomePassword;MultipleActiveResultSets=True""
For first connectionString replace <METADATA> with "metadata=res://*/ModEntity.csdl|res://*/ModEntity.ssdl|res://*/ModEntity.msl;"
For second connectionString replace <METADATA> with "metadata=res://*/Entity.csdl|res://*/Entity.ssdl|res://*/Entity.msl;"
For third connectionString replace <METADATA> with "metadata=res://*/Entity.csdl|res://*/Entity.ssdl|res://*/Entity.msl|res://*/ModEntity.csdl|res://*/ModEntity.ssdl|res://*/ModEntity.msl;"
Happy coding!
A: Silverlight applications do not have direct access to machine.config.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5781059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: Handle SSL connections from an inetd ruby script I'd like to run a Ruby script that handles encrypted communications from inetd. As I need the certificate information for further processing, I can't "offload" the SSL to something like stunnel.
In order to do so, I'd have to somehow use STDIN and STDOUT with the Ruby SSL object. Unfortunately, OpenSSL::SSL::SSLSocket only accepts an IO in its constructor. Is there a way to tie STDIN and STDOUT to an IO, so that it reads from standard input and writes to standard output?
A: $stdin and $stdout can be interchangeably used as IO objects. You may pass them to the SSLSocket. Does that help? Otherwise I'd need more code to help you out.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/6479611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Spring Boot test with multiple configurations In my Spring Boot 2.1 project I have different @Configurations for different tests (ConfigurationA and ConfigurationB), which reside in different packages. Both configurations define the same set of beans, but in a different manner (mocked vs. the real thing).
As I am aware of the Bean overriding mechanism introduced in Spring Boot 2.1, I have set the property: spring.main.allow-bean-definition-overriding=true.
However, I have a test with the following configuration and test class setup. First, there is a @Configuration in the production part (I'm using Maven):
package com.stackoverflow;
@Configuration
public class ProdConfiguration{
...
}
Then in the test branch there is a general Test @Configuration on the same package level:
package com.stackoverflow;
@Configuration
public class TestConfiguration {
@Bean
public GameMap gameMap() {
return Mockito.mock(GameMap.class);
}
}
And in a subpackage I have another @Configuration:
package com.stackoverflow.impl;
@Configuration
public class RealMapTestConfiguration {
@Bean
public GameMap gameMap() {
return new GameMap("testMap.json");
}
}
And then of course there is the test that is troubling me:
package com.stackoverflow.impl;
@ExtendWith(SpringExtension.class)
@SpringBootTest
@ContextConfiguration(classes={RealMapTestConfiguration.class, ProdConfiguration.class})
@ActiveProfiles("bug") // spring.main.allow-bean-definition-overriding=true
public class MapImageServiceIT {
@Autowired
private GameMap map;
}
It turns out that the GameMap injected into my test is a mock instance from TestConfiguration instead of the real thing from RealMapTestConfiguration. Apparently my test ends up with the configuration from ProdConfiguration and TestConfiguration, when I wanted ProdConfiguration and RealMapTestConfiguration. The combination works as long as the beans defined in ProdConfiguration and the *TestConfiguration classes differ, but TestConfiguration and RealMapTestConfiguration define the same bean. It seems TestConfiguration is picked up by component scanning because it lives in the same package as ProdConfiguration.
I was under the impression that when overriding beans the bean definition that is closer to the test class would be preferred. However this seems not to be the case.
So here are my questions:
*
*When overriding beans, what is the order? Which bean overrides which one?
*How to go about to get the correct instance in my test (using a different bean name is not an option, as in reality the injected bean is not directly used in the test but in a service the test uses and there is no qualifier on it.)
A: I've not used the spring.main.allow-bean-definition-overriding=true property, but specifying specific config in a test class has worked fine for me as a way of switching between objects in different tests.
You say...
It turns out that the injected GameMap into my test is a mock instance from TestConfiguration instead of the real thing from RealMapTestConfiguration.
But RealMapTestConfiguration does return a mock
package com.stackoverflow.impl;
@Configuration
public class RealMapTestConfiguration {
@Bean
public GameMap gameMap() {
return Mockito.mock(GameMap.class);
}
}
A: I think the problem here is that including ContextConfiguration nullifies (part of) the effect of @SpringBootTest. @SpringBootTest has the effect of looking for @SpringBootConfiguration in your application (starting from the same package, I believe). However, if ContextConfiguration is applied, then configurations are loaded from there.
Another way of saying that: because you have ContextConfiguration in your test, scanning for @Configuration classes is disabled, and TestConfiguration is not loaded.
I don't think I have a full picture of your configuration setup so can't really recommend a best practice here, but a quick way to fix this is to add TestConfiguration to your ContextConfiguration in your test. Make sure you add it last, so that it overrides the bean definitions in the other two configurations.
The other thing that might work is removing @ContextConfiguration entirely and letting the SpringBootApplication scanning do its thing - that's where what you said about the bean definition that is closest may apply.
A: In that case just don't use @Configuration on the configuration class and import it into the test manually using @Import, for example:
@SpringBootTest
@Import(MyTest.MyTestConfig.class)
public class MyTest {
@Autowired
private String string;
@Test
public void myTest() {
System.out.println(string);
}
static class MyTestConfig {
@Bean
public String string() {
return "String";
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53685135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Conflicts between Autowired and Validated For a few days I have been looking into an issue that is either a dependency conflict or a misconfiguration I introduced when adding Spring Boot to the project.
All dependencies are injected normally (@Autowired) as long as @Validated is not present in the code; as soon as it is added, nothing gets injected by Spring anymore.
gradle.properties
SPRING_BOOT_VERSION=2.0.3.RELEASE
SPRING_CLOUD_VERSION=Finchley.RELEASE
I tried the version:
SPRING_BOOT_VERSION=2.1.0.M2
dependencies build.gradle
buildscript {
ext {
springBootVersion = SPRING_BOOT_VERSION
springCloudVersion = SPRING_CLOUD_VERSION
}
repositories {
jcenter()
mavenLocal()
mavenCentral()
maven { url 'https://plugins.gradle.org/m2/' }
maven { url "https://repo.spring.io/snapshot" }
maven { url "https://repo.spring.io/milestone" }
}
dependencies {
classpath "org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}",
"org.owasp:dependency-check-gradle:3.2.1"
}
}
dependencies {
compile "org.springframework.boot:spring-boot-starter-actuator",
"org.springframework.boot:spring-boot-starter-webflux",
"org.springframework.boot:spring-boot-starter-validation",
"org.springframework.boot:spring-boot-starter-logging",
"org.springframework.boot:spring-boot-starter-jetty",
"org.springframework.boot:spring-boot-starter-cache",
"org.springframework.boot:spring-boot-starter-jdbc",
"org.springframework.boot:spring-boot-starter-web",
"org.springframework.boot:spring-boot-starter-aop",
"org.springframework:spring-context",
"io.springfox:springfox-swagger2:2.9.0",
"io.springfox:springfox-swagger-ui:2.9.0"
compile "org.eclipse.jetty:jetty-alpn-conscrypt-server",
"org.eclipse.jetty.http2:http2-server",
"org.owasp:dependency-check-gradle:3.2.1",
"org.codehaus.groovy:groovy-all:2.4.15",
"org.liquibase:liquibase-core:3.6.1"
compileOnly "org.projectlombok:lombok:1.18.0"
runtime "mysql:mysql-connector-java:5.1.34"
}
Application.java
package br.com.app;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.transaction.annotation.EnableTransactionManagement;
/**
* Spring Boot Application class
*/
@EnableTransactionManagement
@SpringBootApplication(scanBasePackages = "br.com.app.*")
public class Application {
/**
* Spring boot application main
*
* @param args
*/
public static void main(final String[] args) {
SpringApplication.run(Application.class, args);
}
}
UserController.java
package br.com.app.entrypoints.rest;
import br.com.app.user.usecases.CreateUser;
import io.swagger.annotations.Api;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.MediaType;
import org.springframework.validation.annotation.Validated;
import org.springframework.web.bind.annotation.*;
@Api(value = "/user", tags = "User")
@RequestMapping(
value = "/user",
produces = MediaType.APPLICATION_JSON_VALUE,
consumes = MediaType.APPLICATION_JSON_VALUE
)
@Validated
@RequiredArgsConstructor
@RestController
@Slf4j
public class UserController {
private final CreateUser createUser;
@PostMapping(value = "/create")
public final ResponseEntity<Response> createUser(@RequestHeader("token") final String token, @RequestBody final Request request) {
return createUser.execute( ... ); // createUser is null
}
}
Log
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.NullPointerException
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:982) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:877) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) ~[javax.servlet-api-3.1.0.jar:3.1.0]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:851) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) ~[javax.servlet-api-3.1.0.jar:3.1.0]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter.doFilter(WebSocketUpgradeFilter.java:215) ~[websocket-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:158) ~[spring-boot-actuator-2.0.3.RELEASE.jar:2.0.3.RELEASE]
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:126) ~[spring-boot-actuator-2.0.3.RELEASE.jar:2.0.3.RELEASE]
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:111) ~[spring-boot-actuator-2.0.3.RELEASE.jar:2.0.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90) ~[spring-boot-actuator-2.0.3.RELEASE.jar:2.0.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:109) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) ~[jetty-security-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473) ~[jetty-servlet-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:724) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.Server.handle(Server.java:531) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352) ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) [jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281) [jetty-io-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) [jetty-io-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) [jetty-io-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762) [jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680) [jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
at java.base/java.lang.Thread.run(Thread.java:844) [na:na]
Caused by: java.lang.NullPointerException: null
at br.com.app.entrypoints.rest.UserController.createUser(UserController.java:33) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:564) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:877) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:783) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:974) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
... 53 common frames omitted
These are some links that I used as a reference:
https://coderanch.com/t/602044/java/Autowiring-work-custom-constraint-validator
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/validation/annotation/Validated.html
https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples/spring-boot-sample-property-validation
https://memorynotfound.com/custom-password-constraint-validator-annotation
Thanks for the help.
A: I understood that, by defining a method final, the class designer promises this method will always work as described and can never be overridden. But @Validated makes Spring wrap the controller in a proxy subclass, and that partial customization is only possible without the final: the proxy cannot override a final method, so the call runs on the proxy instance whose fields were never injected.
changing from:
public final ResponseEntity<Response> createUser(@RequestHeader("token") final String token, @RequestBody final Request request) {
return createUser.execute( ... ); // createUser is null
}
to:
public ResponseEntity<Response> createUser(@RequestHeader("token") final String token, @RequestBody final Request request) {
    return createUser.execute( ... ); // createUser is now injected
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52133139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: How to update a model but return unmodified model in Django? I'm using django-piston to write a RESTful Web Service and have a problem.
in models.py:
class Status(models.Model):
user = models.ForeignKey(User)
content = models.TextField(max_length=140)
class StatusReply(models.Model):
user = models.ForeignKey(User)
reply_to = models.ForeignKey(Status, related_name='replies')
content = models.TextField(max_length=140)
has_read = models.BooleanField(default=False, help_text="has the publisher of the status read the reply")
in handlers.py:
class StatusHandler(BaseHandler):
allowed_methods = ('GET', 'POST', 'DELETE' )
model = Status
fields = ('id',
('user', ('id', 'username', 'name')),
'content',
('replies', ('id',
('user', ('id', 'username', 'name')),
'content',
'has_read'),
),
)
@need_login
def read(self, request, id, current_user): # the current_user arg is an instance of user created in @need_login
try:
status = Status.objects.get(pk=id)
except ObjectDoesNotExist:
return rc.NOT_FOUND
else:
if status.user == current_user: #if current_user is the publisher of the status, set all replies read
status.replies.all().update(has_read=True)
return status
In the handler, it returned a specific status by id. Now I want to return the status before status.replies.all().update(has_read=True) but also do the update operation in database. How to do it? Thanks in advance.
A: Not sure if I understand what you need. As I understand your code, status.replies.all().update(has_read=True) doesn't change status itself but only the replies. If that's true, the code should already do what you want. If it isn't, you could make a copy of status (for example with the standard library's copy module) and return the copy:
import copy

if status.user == current_user:
    old_status = copy.copy(status)
    status.replies.all().update(has_read=True)
    return old_status
return status
Or do you just want the method to return early and do the database update asynchronously? Then you should have a look at celery and maybe this nice explanation.
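The copy-before-mutate idea from the answer above can be sketched in plain Python, independent of Django; the Status class and attribute names here are made up for illustration, and copy.copy stands in for whatever copying mechanism fits your model:

```python
import copy

class Status:
    """Toy stand-in for a Django model instance (illustrative only)."""
    def __init__(self, content, replies_unread):
        self.content = content
        self.replies_unread = replies_unread

status = Status("hello", replies_unread=2)

# Snapshot the object before mutating it, then return the snapshot.
old_status = copy.copy(status)
status.replies_unread = 0  # analogous to replies.update(has_read=True)

print(old_status.replies_unread)  # 2
print(status.replies_unread)      # 0
```

Note that a shallow copy only snapshots top-level attributes; if the state you care about lives in nested mutable objects, copy.deepcopy would be needed instead.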
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/6264258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Angular 6 application not running in IE and Edge I have an SPA developed in Angular 6. It works fine in Chrome and Firefox, but I get a blank screen in IE and Edge. I have uncommented all the imports in polyfills.ts for IE and Edge, but I still can't see my application running in either browser.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53166808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I convert form data from multiselect dropdown list from array to string In angular 7 Typescript? I have a form with a field that dynamically loads data from the database into an ng-multiselect dropdown list. I want to store that form data in the database using a POST request, which is not working. When all fields are plain text inputs it works fine. My research tells me I have to convert the ng-multiselect values to a string for it to work.
Component.ts
export class AssociateDetailsComponent implements OnInit,OnDestroy{
//Variable for dropdown multiselect
str : string;
//list : Skills[];
dropdownList = [];
selectedItems =[];
dropdownSettings = {};
readonly rootURL= 'http://localhost:67764/api';
emailPattern = "^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,4}$";
constructor(private associateService: AssociateServiceService,
private data : DataService, private http : HttpClient) { }
ngOnInit() {
this.http.get(this.rootURL+ '/Skills').toPromise()
.then(res => this.dropdownList = res as Skills[]);
this.dropdownSettings = {
singleSelection: false,
idField: 'SkillsId',
textField: 'Skill',
selectAllText: 'Select All',
unSelectAllText: 'UnSelect All',
itemsShowLimit: 100,
allowSearchFilter: false
};
this.selectedItems = [
{ SkillsId: 3, Skill: 'java' }
];
//Shared service for data transfer between forms
this.AsId=this.data.AsId;
this.associateService.formData={
Name: '',
AssociateId: this.AsId,
Skills:'',
Hobbies : '',
Experience : '',
}
The POST call (TypeScript, not part of the template):
postAssociateDetail(formData: Associate){
//JSON.stringify(formData);
return this.http.post(this.rootURL+'/AssociateDetails',formData);
}
component.html
<div class="col s6">
<div class="row">
<h3><div class="col s6">Name</div></h3>
<h3>{{AsId}}</h3>
</div>
<div class="container">
<div class="card">
<div class="sm-jumbotron center center-align">
<h2>Please Provide Below Details</h2>
</div>
</div>
<div class="row">
<div class="col s12">
<form #form="ngForm" autocomplete="off" (submit)="OnSubmit(form)">
<div class="formgroup">
<label data-error="Required field!">AssociateId</label>
<input type="text" placeholder="AssociateId" class="validate" name="AssociateId" #AssociateId="ngModel" [(ngModel)]="associateService.formData.AssociateId" required>
</div>
<div class="formgroup">
<blockquote>
List your skills
</blockquote>
<ng-multiselect-dropdown
name ="Skills"
type="input-field"
[placeholder]="'Skills'"
[data]="dropdownList"
#Skills="ngModel"
[(ngModel)]="associateService.formData.Skills"
[settings]="dropdownSettings"
>
</ng-multiselect-dropdown>
</div>
<div class="formgroup">
<blockquote>
Please enter your interests and hobbies
</blockquote>
<input type="text" placeholder="Hobbies" class="validate" name="hobbies" #Hobbies="ngModel" [(ngModel)]="associateService.formData.Hobbies" required>
</div>
<div class="formgroup">
<blockquote>
Please share your experiences.
</blockquote>
<input type="text" placeholder="Experience" class="validate" name="experience" #Experience="ngModel" [(ngModel)]="associateService.formData.Experience" required>
</div>
<div class="row">
<div class="col s6">
<div class="formgroup">
<button type="submit" class="btn btn-large waves-effect">Submit</button>
</div>
</div>
<div class="col s6">
<div class="formgroup">
<button type="reset" class="btn btn-large waves-effect" (click)="resetForm()">Reset</button>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
</div>
Backend
[HttpPost]
public async Task<IActionResult> PostAssociateDetails([FromBody] AssociateDetails associateDetails)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    _context.NewAssociateDetails.Add(associateDetails);
    await _context.SaveChangesAsync();
    return CreatedAtAction("GetAssociateDetails", new { id = associateDetails.Id }, associateDetails);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54898236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Remove duplicates with less null values I have a table of employees which contains about 25 columns. Right now there are a lot of duplicates and I would like to try and get rid of some of these duplicates.
First, I want to find the duplicates by looking for multiple records that have the same values in first name, last name, employee number, company number and status.
SELECT
firstname,lastname,employeenumber, companynumber, statusflag
FROM
employeemaster
GROUP BY
firstname,lastname,employeenumber,companynumber, statusflag
HAVING
(COUNT(*) > 1)
This gives me the duplicates, but my goal is to find and keep the best single record and delete the others. The "best single record" is defined as the record with the fewest NULL values across all of the other columns. How can I do this?
I am using Microsoft SQL Server 2012 MGMT Studio.
EXAMPLE:
Red: DELETE
Green: KEEP
NOTE: There are a lot more columns in the table than what this table shows.
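The selection rule ("keep the duplicate with the fewest NULLs") can be prototyped outside SQL before committing to a DELETE. A minimal Python sketch, assuming the rows are plain dicts with NULL represented as None; the column names mirror the example above but are otherwise made up:

```python
from collections import defaultdict

def dedupe(records, key_cols):
    """Per duplicate group, keep the record with the fewest None values."""
    groups = defaultdict(list)
    for rec in records:
        groups[tuple(rec[c] for c in key_cols)].append(rec)
    # Fewest Nones wins; min() keeps the first record seen on ties.
    return [min(g, key=lambda r: sum(v is None for v in r.values()))
            for g in groups.values()]

rows = [
    {"first": "Jake", "last": "Jones", "emp": 1234, "user": "JJONES",  "branch": "PHX"},
    {"first": "Jake", "last": "Jones", "emp": 1234, "user": None,      "branch": "PHX"},
    {"first": "Jake", "last": "Jones", "emp": 1234, "user": None,      "branch": None},
    {"first": "Jane", "last": "Jones", "emp": 5678, "user": "JJONES2", "branch": None},
]
kept = dedupe(rows, ["first", "last", "emp"])
print(len(kept))                       # 2
print(sorted(r["emp"] for r in kept))  # [1234, 5678]
```

This is the same ordering criterion the ROW_NUMBER-based answers below express in SQL: partition by the key columns, order by the non-null count, keep rank 1.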
A: You can use the sys.columns table to get a list of columns and build a dynamic query. This query will return a 'KeepThese' value for every record you want to keep based on your given criteria.
-- insert test data
create table EmployeeMaster
(
Record int identity(1,1),
FirstName varchar(50),
LastName varchar(50),
EmployeeNumber int,
CompanyNumber int,
StatusFlag int,
UserName varchar(50),
Branch varchar(50)
);
insert into EmployeeMaster
(
FirstName,
LastName,
EmployeeNumber,
CompanyNumber,
StatusFlag,
UserName,
Branch
)
values
('Jake','Jones',1234,1,1,'JJONES','PHX'),
('Jake','Jones',1234,1,1,NULL,'PHX'),
('Jake','Jones',1234,1,1,NULL,NULL),
('Jane','Jones',5678,1,1,'JJONES2',NULL);
-- get records with most non-null values with dynamic sys.column query
declare @sql varchar(max)
select @sql = '
select e.*,
row_number() over(partition by
e.FirstName,
e.LastName,
e.EmployeeNumber,
e.CompanyNumber,
e.StatusFlag
order by n.NonNullCnt desc) as KeepThese
from EmployeeMaster e
cross apply (select count(n.value) as NonNullCnt from (select ' +
replace((
select 'cast(' + c.name + ' as varchar(50)) as value union all select '
from sys.columns c
where c.object_id = t.object_id
for xml path('')
) + '#',' union all select #','') + ')n)n'
from sys.tables t
where t.name = 'EmployeeMaster'
exec(@sql)
A: Try this.
;WITH cte
AS (SELECT Row_number()
OVER(
partition BY firstname, lastname, employeenumber, companynumber, statusflag
ORDER BY (SELECT NULL)) rn,
firstname,
lastname,
employeenumber,
companynumber,
statusflag,
username,
branch
FROM employeemaster),
cte1
AS (SELECT a.firstname,
a.lastname,
a.employeenumber,
a.companynumber,
a.statusflag,
Row_number()
OVER(
partition BY a.firstname, a.lastname, a.employeenumber, a.companynumber, a.statusflag
ORDER BY (CASE WHEN a.username IS NULL THEN 1 ELSE 0 END +CASE WHEN a.branch IS NULL THEN 1 ELSE 0 END) )rn
-- add the remaining columns in case statement
FROM cte a
JOIN employeemaster b
ON a.firstname = b.firstname
AND a.lastname = b.lastname
AND a.employeenumber = b.employeenumber
AND a.companynumber = b.companynumber
AND a.statusflag = b.statusflag)
SELECT *
FROM cte1
WHERE rn = 1
A: I tested with MySQL, using NULL string concatenation to find the best record: LENGTH(NULL || 'data') produces no usable length, so a length only exists when all concatenated columns are non-NULL. Maybe this approach is not perfect.
create table EmployeeMaster
(
Record int auto_increment,
FirstName varchar(50),
LastName varchar(50),
EmployeeNumber int,
CompanyNumber int,
StatusFlag int,
UserName varchar(50),
Branch varchar(50),
PRIMARY KEY(record)
);
INSERT INTO EmployeeMaster
(
FirstName, LastName, EmployeeNumber, CompanyNumber, StatusFlag, UserName, Branch
) VALUES ('Jake', 'Jones', 1234, 1, 1, 'JJONES', 'PHX'), ('Jake', 'Jones', 1234, 1, 1, NULL, 'PHX'), ('Jake', 'Jones', 1234, 1, 1, NULL, NULL), ('Jane', 'Jones', 5678, 1, 1, 'JJONES2', NULL);
My query idea looks like this
SELECT e.*
FROM employeemaster e
JOIN ( SELECT firstname,
lastname,
employeenumber,
companynumber,
statusflag,
MAX( LENGTH ( username || branch ) ) data_quality
FROM employeemaster
GROUP BY firstname, lastname, employeenumber, companynumber, statusflag
HAVING count(*) > 1
) g
ON LENGTH ( username || branch ) = g.data_quality
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27927251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Pygraphviz crashes after drawing 170 graphs I am using pygraphviz to create a large number of graphs for different configurations of data. I have found that no matter what information is put in the graph, the program crashes after drawing the 170th graph. There are no error messages; the program just stops. Is there something that needs to be reset when drawing this many graphs?
I am running Python 3.7 on a Windows 10 machine, Pygraphviz 1.5, and graphviz 2.38
for graph_number in range(200):
config_graph = pygraphviz.AGraph(strict=False, directed=False, compound=True, ranksep='0.2', nodesep='0.2')
# Create Directory
if not os.path.exists('Graph'):
os.makedirs('Graph')
# Draw Graph
print('draw_' + str(graph_number))
config_graph.layout(prog = 'dot')
config_graph.draw('Graph/'+str(graph_number)+'.png')
A: I was able to constantly reproduce the behavior with:
*
*Python 3.7.6 (pc064 (64bit), then also with pc032)
*PyGraphviz 1.5 (that I built - available for download at [GitHub]: CristiFati/Prebuilt-Binaries - Various software built on various platforms. (under PyGraphviz, naturally).
Might also want to check [SO]: Installing pygraphviz on Windows 10 64-bit, Python 3.6 (@CristiFati's answer))
*Graphviz 2.42.2 ((pc032) same as #2.)
I suspected an Undefined Behavior somewhere in the code, even if the behavior was precisely the same:
*
*OK for 169 graphs
*Crash for 170
Did some debugging (added some print(f) statements in agraph.py, and cgraph.dll (write.c)).
PyGraphviz invokes Graphviz's tools (.exes) for many operations. For that, it uses subprocess.Popen and communicates with the child process via its 3 available streams (stdin, stdout, stderr).
From the beginning I noticed that 170 * 3 = 510 (awfully close to 512 (0x200)), but didn't pay as much attention as I should have until later (mostly because the Python process (running the code below) had no more than ~150 open handles in Task Manager (TM) and also Process Explorer (PE)).
However, a bit of Googling revealed:
*
*[SO]: Is there a limit on number of open files in Windows (@stackprogrammer's answer) (and from here)
*[MS.Learn]: _setmaxstdio (which states (emphasis is mine)):
C run-time I/O now supports up to 8,192 files open simultaneously at the low I/O level. This level includes files opened and accessed using the _open, _read, and _write family of I/O functions. By default, up to 512 files can be open simultaneously at the stream I/O level. This level includes files opened and accessed using the fopen, fgetc, and fputc family of functions. The limit of 512 open files at the stream I/O level can be increased to a maximum of 8,192 by use of the _setmaxstdio function.
*[SO]: Python: Which command increases the number of open files on Windows? (@NorthCat's answer)
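For completeness, the same limit bump can also be sketched without PyWin32, via ctypes (an untested-here, Windows-only sketch; the assumption is that _setmaxstdio is reachable through the msvcrt CRT - on other platforms the helper simply does nothing):

```python
import ctypes
import sys

def bump_max_stdio(limit=2048):
    # Raise the CRT stream-I/O limit (default 512, documented max 8192).
    # Windows-only: _setmaxstdio lives in the MS C runtime (assumption:
    # exported by msvcrt); elsewhere there is nothing to bump.
    if sys.platform != "win32":
        return None
    crt = ctypes.cdll.msvcrt
    # returns the new maximum on success, -1 on failure
    return crt._setmaxstdio(limit)

print(bump_max_stdio(1024))
```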
Below is your code, modified for debugging and for reproducing the error. For brevity's sake it uses the PyWin32 package (python -m pip install pywin32), although the same thing can be achieved via ctypes.
code00.py:
#!/usr/bin/env python

import os
import sys
#import time

import pygraphviz as pgv
import win32file as wfile


def handle_graph(idx, dir_name):
    graph_name = "draw_{:03d}".format(idx)
    graph_args = {
        "name": graph_name,
        "strict": False,
        "directed": False,
        "compound": True,
        "ranksep": "0.2",
        "nodesep": "0.2",
    }
    graph = pgv.AGraph(**graph_args)
    # Draw Graph
    img_base_name = graph_name + ".png"
    print(" {:s}".format(img_base_name))
    graph.layout(prog="dot")
    img_full_name = os.path.join(dir_name, img_base_name)
    graph.draw(img_full_name)
    graph.close()  # !!! Has NO (visible) effect, but I think it should be called anyway !!!


def main(*argv):
    print("OLD max open files: {:d}".format(wfile._getmaxstdio()))
    # 513 is enough for your original code (170 graphs), but you can set it up to 8192
    wfile._setmaxstdio(513)  # !!! COMMENT this line to reproduce the crash !!!
    print("NEW max open files: {:d}".format(wfile._getmaxstdio()))
    dir_name = "Graph"
    # Create Directory
    if not os.path.isdir(dir_name):
        os.makedirs(dir_name)
    #ts_global_start = time.time()
    start = 0
    count = 170
    #count = 1
    step_sleep = 0.05
    for i in range(start, start + count):
        #ts_local_start = time.time()
        handle_graph(i, dir_name)
        #print(" Time: {:.3f}".format(time.time() - ts_local_start))
        #time.sleep(step_sleep)
    handle_graph(count, dir_name)
    #print("Global time: {:.3f}".format(time.time() - ts_global_start - step_sleep * count))


if __name__ == "__main__":
    print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")),
                                                   64 if sys.maxsize > 0x100000000 else 32, sys.platform))
    rc = main(*sys.argv[1:])
    print("\nDone.\n")
    sys.exit(rc)
Output:
e:\Work\Dev\StackOverflow\q060876623> "e:\Work\Dev\VEnvs\py_pc064_03.07.06_test0\Scripts\python.exe" ./code00.py
Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)] 064bit on win32
OLD max open files: 512
NEW max open files: 513
draw_000.png
draw_001.png
draw_002.png
...
draw_167.png
draw_168.png
draw_169.png
Done.
Conclusions:

* Apparently, some file handles (fds) remain open, although they are not "seen" by TM or PE (they are probably at a lower level). However, I don't know why this happens (is it an MS UCRT bug?) - as far as I am concerned, once a child process ends, its streams should be closed, but I don't know how to force that (which would be the proper fix)
* Also, the behavior (crash) when attempting to write (not open) to an fd above the limit seems a bit strange
* As a workaround, the max open fds number can be increased. Based on the inequality 3 * (graph_count + 1) <= max_fds, you can get an idea of the numbers. From there, if you set the limit to 8192 (I didn't test this), you should be able to handle 2729 graphs (assuming no additional fds are opened by the code)
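The arithmetic in that last conclusion can be checked directly (pure arithmetic derived from the inequality above, no other assumptions):

```python
# From 3 * (graph_count + 1) <= max_fds, the largest safe graph count is:
def max_graphs(max_fds):
    return max_fds // 3 - 1

print(max_graphs(513))   # the limit used in code00.py -> 170
print(max_graphs(8192))  # the documented CRT ceiling  -> 2729
```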
Side notes:

* While investigating, I ran into or noticed several adjacent issues that I tried to fix:
  * Graphviz:
    * [GitLab]: graphviz/graphviz - [Issue #1481]: MSB4018 The NativeCodeAnalysis task failed unexpectedly. (merged on 200406)
  * PyGraphviz:
    * [GitHub]: pygraphviz/pygraphviz - AGraph Graphviz handle close mechanism (merged on 200720)
* There's also an issue open for this behavior (probably from the same author): [GitHub]: pygraphviz/pygraphviz - Pygraphviz crashes after drawing 170 graphs
A: I tried your code and it generated 200 graphs with no problem (I also tried with 2000).
My suggestion is to use these versions of the packages; I installed them in a conda environment on macOS with Python 3.7:
graphviz 2.40.1 hefbbd9a_2
pygraphviz 1.3 py37h1de35cc_1