Q: Return path and selected file Xamarin I'm having trouble returning the name and path of a file on Android. How can I do this? My code below already lets the user select a file; I just need to return the name of the selected file and the path it lives in. Can anyone tell me how?
private void PicSelected()
{
Intent intent = new Intent();
intent.SetType("file/*");
intent.SetAction(Intent.ActionGetContent);
this.StartActivityForResult(Intent.CreateChooser(intent, "Selecione o arquivo"), 0);
//this.StartActivityForResult(intent, FILE_SELECT_CODE);
}
A: You need to override OnActivityResult. In its arguments you will get the Intent containing the data you requested with StartActivityForResult.
From that Intent you can get the Uri of the file you picked by reading its Data property. From there you can get whatever else you need.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43502500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What's an elegant solution in beanshell for loops and arrays I'm working with beanshell to parse SWIFT data and need to extract values by referencing these SWIFT tags. Right now, I statically get these values as such:
String getACRU = swiftMessage.getTagData("19A",":ACRU//");
String getANTO = swiftMessage.getTagData("19A",":ANTO//");
String getCHAR = swiftMessage.getTagData("19A",":CHAR//");
String getCOUN = swiftMessage.getTagData("19A",":COUN//");
String getEXEC = swiftMessage.getTagData("19A",":EXEC//");
String getISDI = swiftMessage.getTagData("19A",":ISDI//");
String getLADT = swiftMessage.getTagData("19A",":LADT//");
String getLEVY = swiftMessage.getTagData("19A",":LEVY//");
String getLOCL = swiftMessage.getTagData("19A",":LOCL//");
String getLOCO = swiftMessage.getTagData("19A",":LOCO//");
String getMARG = swiftMessage.getTagData("19A",":MARG//");
String getOTHR = swiftMessage.getTagData("19A",":OTHR//");
String getPOST = swiftMessage.getTagData("19A",":POST//");
String getREGF = swiftMessage.getTagData("19A",":REGF//");
String getSHIP = swiftMessage.getTagData("19A",":SHIP//");
String getSPCN = swiftMessage.getTagData("19A",":SPCN//");
String getSTAM = swiftMessage.getTagData("19A",":STAM//");
String getSTEX = swiftMessage.getTagData("19A",":STEX//");
String getTRAN = swiftMessage.getTagData("19A",":TRAN//");
String getTRAX = swiftMessage.getTagData("19A",":TRAX//");
String getVATA = swiftMessage.getTagData("19A",":VATA//");
String getWITH = swiftMessage.getTagData("19A",":WITH//");
String getCOAX = swiftMessage.getTagData("19A",":COAX//");
String getACCA = swiftMessage.getTagData("19A",":ACCA//");
My question is two-fold: what's the best way to rewrite this elegantly, and what is the best way in BeanShell to add a method/function that removes the first three characters, changes the comma to a period and, once all those values have been parsed out of the message, adds them all up?
A: Sorry, I'm still a newbie at BeanShell and Java, but would this work? (It's something of a workaround...)
String [] tagArray = new String []
{ "ACRU", "ANTO", "CHAR", "COUN", "EXEC",
"ISDI", "LADT", "LEVY", "LOCL", "LOCO",
"MARG", "OTHR", "POST", "REGF", "SHIP",
"SPCN", "STAM", "STEX", "TRAN", "TRAX",
"VATA", "WITH", "COAX", "ACCA" };
for (i: tagArray) {
// it was a test: print(i);
eval("String get" + i + " = swiftMessage.getTagData(\"19A\", \":" + i + "//\")");
}
(Sorry for my bad english too...)
A: It seems that this worked quite well. Store all the values I need in an array:
String [] tagArray = new String [] { ":ACRU//",":ANTO//",":CHAR//",":COUN//",":EXEC//",":ISDI//",":LADT//",":LEVY//",":LOCL//",":LOCO//",":MARG//",":OTHR//",":POST//",":REGF//",":SHIP//",":SPCN//",":STAM//",":STEX//",":TRAN//",":TRAX//",":VATA//",":WITH//",":COAX//",":ACCA//" };
And create a function to loop and add:
double sumTags(SwiftMessage inboundSwiftmessage, String inboundTagNumber, String [] inboundTagArray){
    double getTotal = 0; // must be initialized before accumulating
    for( tagArrayData : inboundTagArray ){
        String getData = stripData(inboundSwiftmessage.getTagData(inboundTagNumber, tagArrayData));
        getTotal = getTotal + Double.parseDouble(getData);
    }
    return getTotal;
}
And this is the function that removes the first 3 characters and converts the comma into a period:
String stripData(String inboundString){
    if (inboundString == null){
        return "0";
    }
    // Nothing parseable left after the three-character prefix.
    if (inboundString.length() <= 3){
        return "0";
    }
    // Drop the first three characters, then turn the decimal comma into a period.
    return inboundString.substring(3).replace(",", ".");
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9072596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Python-Guessing a number-High and Low For my assignment I had to create a program where the user chooses a number between 0 and 511 and my program has to guess it within 10 tries. An error came up telling me I had to define "response", but I'm not sure exactly what to write. If anyone has any advice to fix my code, that would be great. I am completely new to programming, and any advice would help greatly!
Here is my code:
LOW = 0
HIGH = 511
guess = (LOW + HIGH)/2
response =
print("Think of an integer from", LOW, "to", HIGH)
while not(response == "y" ):
response = input
print("Is the answer", guess, "?")
if (response == "L"):
LOW = guess
guess = (LOW + HIGH)/2
elif (response == "y" ):
high = into(guess)<br>
guess = int(low + high)/2
HIGH = guess
print("Is the answer", guess, "?")
response = input()
response("got it")
A: In general "help me do my homework" will not be answered here - see https://softwareengineering.meta.stackexchange.com/questions/6166/open-letter-to-students-with-homework-problems
However, I think you might find the following enlightening - often we know how to do these sorts of tasks ourselves, and (especially as a student) have trouble breaking down the steps. I suggest the following:
Find a friend, and do the procedure with him. I see from your code that you know the rough procedure. Just do it yourself - keep numbers on paper if you need to. Don't concentrate on HOW you're doing it, don't analyse it. Just do it. Then do it again, writing down the steps you took - don't use loops at this stage, and don't generalize it yet, just note it down - if you have an audio recorder, say the steps out loud, so you can concentrate on what you're doing practically, not the underlying code. Write it down, break it down into steps, look at what you've done, roll up the loops. Then write the code.
A: Anything other than "y" should work for response. I suggest setting it to None: response = None. See https://docs.python.org/2/library/constants.html for info about None.
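Putting the answers together, here is a minimal working sketch of the binary-search guesser the assignment describes. The user's replies are simulated by a target number so the logic is testable; the function name and the "y"/"L"/"H" reply convention are illustrative assumptions, not taken from the original code:

```python
def guess_number(answer, low=0, high=511):
    """Binary-search guesser: `answer` simulates the number the user thought of.

    The user would reply "y" (correct), "L" (their number is lower than the
    guess), or "H" (higher). Here those replies are derived from `answer`.
    """
    tries = 0
    while low <= high:
        guess = (low + high) // 2  # midpoint of the remaining range
        tries += 1
        if answer == guess:        # reply "y"
            return guess, tries
        elif answer < guess:       # reply "L": discard the upper half
            high = guess - 1
        else:                      # reply "H": discard the lower half
            low = guess + 1
    return None, tries

print(guess_number(300))  # -> (300, 9); any value in 0..511 takes at most 10 tries
```

The 10-try bound works because each guess halves the remaining range, and 512 values need at most ceil(log2(512)) + 1 = 10 halvings.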
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26725583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How to use ADSBACKUP.EXE without password I want to backup an Advantage Database Server 11.10 with ADSBACKUP.EXE however I can't get it to work. It looks like the source path is interpreted as password, but we don't have an ADSSYS password.
When I try:
adsbackup.exe -p C:\Database\db.add E:\
I get
Missing argument, no destination path given
Backup arguments:
Source path: E:\
Destination path: NULL
When I leave the -p parameter out:
adsbackup.exe C:\Database\db.add E:\
I get:
Error 7078: The Advantage Database Server cannot authenticate the user. Make sure the user name and password are correct. axServerConnect
I'm following: https://devzone.advantagedatabase.com/dz/webhelp/Advantage9.1/mergedProjects/devguide/part1point5/creating_a_backup_using_adsbackup_exe.htm
Update after comment Jens
I already tried empty passwords in the parameter with '', "" or even NULL. Neither worked.
When I try to run the backup through asqlcmd.exe I get (I'm logged in to the server by RDP):
Error: 5185 Error 5185: Local server connections are restricted in this environment. See the 5185 error code documentation for details. axServerConnect
This happens while we have MTIER_LOCAL_CONNECTIONS=1 in ADS.INI.
And when I try the query in Advantage Data Architect I can't connect as AdSys:
Could it be that there IS a password for AdSys, even though the vendor says there isn't? Or do I have some completely different problem?
A: As the adsbackup.exe tool basically just runs the sp_BackupDatabase / sp_BackupFreeTables stored procedures, you can easily replace it with the newer asqlcmd.exe tool:
https://devzone.advantagedatabase.com/dz/webhelp/Advantage12/index.html?master_sql_command_line_switches.htm
Maybe you have more luck with the command line switches there.
On the other hand you say that you don't have an adssys password. First of all this is a big security risk!
I don't know if it is possible, but maybe you can add a second user for backup purposes that has a password. That way you could circumvent the password problem with adsbackup.exe.
Another approach would be to write your own tool in any language supported by ADS. If you have a talented programmer at hand that shouldn't be a big deal.
Finally, I have another idea: have you tried quoting your empty password with single or double quotes? Maybe the adsbackup.exe tool does quote processing and/or trimming on the password switch. You could also try passing a quoted string that consists of one or more whitespace characters.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54822914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: OPC UA with Codesys v. 2.3 I'm trying to use a Wago 750-881 as a OPC UA Server, which I'm programming with Codesys v. 2.3.
My problem is how to make this happen? I have found a lot of info about OPC UA with Codesys v. 3.5, but not any information about Codesys v. 2.3.
Could please anyone help me?
A: Codesys v2.3 only has OPC-DA (availability depends on the PLC model and manufacturer). OPC-UA may not always be available even on a PLC with Codesys v3.5, so check your vendor's documentation to see whether it is an option.
There are gateways (hardware and software) that allow "conversion" between OPC-DA and OPC-UA, or between other protocols and OPC-UA, for example if your PLC has Modbus-TCP/RTU. It is also possible to use another PLC, as Camile G. commented, but the cost and development would likely be more complicated; depending on the case it may be worth migrating the entire system to a single PLC with Codesys v3.5 and OPC-UA.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/48150355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: SSL Server on Windows CE 4.2 Does the web server on Windows CE 4.2 support TLS 1.0? From this documentation:
http://msdn.microsoft.com/en-us/library/ms836811.aspx#wincesec_topic6
It looks like Window CE 4.2 does support SSL 3.1 (TLS 1.0), but when I try to connect it does not work. Is there just something I'm missing?
Thanks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27431050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Sorting the values of linked list and printing them I have a list:
struct node
{
string name;
int value;
node* next;
};
For example, I save 5 elements to it.
Now I want to find the three greatest elements.
node* help = head;
for (int i = 0; i < M; i++)
{
node* help2 = head;
while (help->next)
{
if (help->next->value > help->value)
{
help2 = help->next;
}
help = help->next;
}
cout << help2->name;
}
Thanks to that, I can find the greatest number and show the name of it. But I don't know how I can find the second and third elements and show them.
A: If you don't know sorting, refer to this link https://www.geeksforgeeks.org/sorting-algorithms/ to better understand the concept and the types of sorting algorithms.
In this example we use the bubble sort algorithm to sort the elements in descending order:
// Sort your linked list this way; as you said, you have 5 elements,
// so n = 5, i.e. the size of the linked list.
node* temp;
int val;    // temporary used when swapping values
string nm;  // temporary used when swapping names
for (int i = 0; i < n; i++) // bubble sort: descending order
{
    temp = head;
    for (int j = 0; j < n - 1 - i; j++)
    {
        if (temp->value < temp->next->value)
        {
            // swap the payloads of the two adjacent nodes
            val = temp->value;
            nm = temp->name;
            temp->value = temp->next->value;
            temp->name = temp->next->name;
            temp->next->value = val;
            temp->next->name = nm;
        }
        temp = temp->next;
    }
}
temp = head;
for (int i = 0; i < 3; i++) // print 3 elements, as you want the 3 greatest numbers
{
    cout << temp->value << endl;
    cout << temp->name << endl;
    temp = temp->next;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/66992426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Access individual items in Blazor Virtualized list I have a simple collection of "Conversation" like this:
public class Conversation
{
public string? contactName { get; set; }
public string? thread { get; set; }
public DateTime newestMsg { get; set; }
}
I was originally using @foreach to put these onto a page to display as a scrollable list. I'd load 30 of them from an API call and populate them into "ConversationList":
@foreach (var conv in ConversationList)
{
<div class="conversation">
<div>@conv.contactName</div>
<div>@conv.thread</div>
<div>@conv.newestMsg</div>
</div>
}
@code {
Conversation[]? ConversationList = new Conversation[] { };
}
I saw Blazor has the Virtualize component which would let me have tons of conversations be scrollable and load on demand, so I changed my code to use it:
<Virtualize ItemsProvider="@LoadConversations" Context="conv">
<ItemContent>
<div class="conversation">
<div>@conv.contactName</div>
<div>@conv.thread</div>
<div>@conv.newestMsg</div>
</div>
</ItemContent>
</Virtualize>
@code {
private async ValueTask<ItemsProviderResult<Conversation>> LoadConversations(ItemsProviderRequest request)
{
string apiURL = "https://myAPI:00000?start=" + request.StartIndex + "&limit=" + request.Count;
ConversationRequest? convRequest = await Http.GetFromJsonAsync<ConversationRequest>(apiURL);
if (convRequest != null && convRequest.success)
{
return new ItemsProviderResult<Conversation>(convRequest.Conversations, request.StartIndex + request.Count + 4);
}
else
{
return new ItemsProviderResult<Conversation>(null, 0);
}
}
}
In my first attempt, I could select a specific item in "ConversationList" and modify its contents, having the state change on the page. For example, if a new message came in from SignalR, I could select a Conversation by its "thread" property and update the "newestMsg" datetime.
Now that I have them Virtualized instead, it's not clear how to do this. I can't use Virtualize's "ItemsProvider" and "Items" at the same time. I want to keep the ItemsProvider for the sake of automatic loading while scrolling, knowing the StartIndex and Count automatically.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72321708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: I published a package on npm, but it's not showing in the search list when I search I published a package as public and I'm trying to search for it on npm
(https://www.npmjs.com/), but there is no package available with that name on npm.
Tried with:
npm install package-name -> working fine
Here is the package link:
https://www.npmjs.com/package/and-or-search
Is there anything I am missing?
A: It takes some time for the website to show the latest version, and it can also take some time for npm show <package-name>. From my experience I haven't noticed a difference between the command and the website. You should also receive an email.
I recommend waiting a few minutes.
A: The npm website takes time to show the latest packages or package versions because of the delays in CDN, website cache etc.
But it will show up eventually. Meanwhile, you can check for the package with:
npm show <package-name>
This will output all the versions of the package as well so you can be confident that the package exists or the latest version is published.
Your package now shows up correctly in npm website at https://www.npmjs.com/package/and-or-search
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54059264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to get only some events in an overlay that sits on top of all applications in Android I'm trying to develop an app that is an overlay working on top of all applications in the system; up to there I don't have problems.
The issue is that my overlay has to be able to handle some events and let others pass through underneath it.
Example:
The app has to handle onFling events and do some operations.
The app must not handle onClick events, letting them go to the activity positioned under it.
A: I have found some information about how to do that, although you need root permissions to inject events into the system or into another app running on it.
Here you can find further information :
http://www.pocketmagic.net/2012/04/injecting-events-programatically-on-android/#.UReb1KWIes0
http://www.pocketmagic.net/2013/01/programmatically-injecting-events-on-android-part-2/#.UReb3aWIes2
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14778539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Lev Distance vs Use fuzzy search in ElasticSearch I am trying to understand whether writing an in memory algorithm like Lev Dist (or any other) is better or utilizing the already existing fuzzy search query in ES (which uses lev dist underneath) is better?
The most important metric I am looking for is computation time.
*
*Which of the two options is faster? How can I evaluate which one works better for my use case (Kotlin programming)?
*Any other pros and cons of using one vs the other? One thing I can think of is that if I write my own algorithm I can add customizations to improve speed. But if the ES infrastructure is comparable, I do not want to spend time building my own algorithm unnecessarily.
Thanks
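For reference, the classic dynamic-programming Levenshtein distance that both options are conceptually built on fits in a few lines. This is a plain Python sketch useful for rough benchmarking of the "in memory" option, not Elasticsearch's actual implementation (Lucene-backed fuzzy queries use Levenshtein automata rather than this table):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance with unit-cost insert, delete, and substitute operations."""
    if len(a) < len(b):
        a, b = b, a  # iterate over the longer string; rows are the shorter one
    prev = list(range(len(b) + 1))  # distance from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # -> 3
```

Timing this against an ES fuzzy query on your own data is the fairest comparison: the DP runs in O(len(a) x len(b)) per pair, while ES amortizes its cost over the index.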
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74855399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Convert hyphen delimited string to camelCase? For example:
abc-def-xyz to abcDefXyz
the-fooo to theFooo
etc.
What's the most efficient way to do this PHP?
Here's my take:
$parts = explode('-', $string);
$new_string = '';
foreach($parts as $part)
$new_string .= ucfirst($part);
$new_string = lcfirst($new_string);
But I have a feeling that it can be done with much less code :)
ps: Happy Holidays to everyone !! :D
A: $parts = explode('-', $string);
$parts = array_map('ucfirst', $parts);
$string = lcfirst(implode('', $parts));
You might want to replace the first line with $parts = explode('-', strtolower($string)); in case someone uses uppercase characters in the hyphen-delimited string though.
A: $subject = 'abc-def-xyz';
// anonymous function (the original create_function is deprecated since PHP 7.2 and removed in PHP 8)
$results = preg_replace_callback('/-(.)/', function ($matches) { return strtoupper($matches[1]); }, $subject);
echo $results; // abcDefXyz
A: If that works, why not use it? Unless you're parsing a ginormous amount of text you probably won't notice the difference.
The only thing I see is that with your code the first letter is going to get capitalized too, so maybe you could add this:
foreach($parts as $k=>$part)
$new_string .= ($k == 0) ? strtolower($part) : ucfirst($part);
A: str_replace('-', '', lcfirst(ucwords('foo-bar-baz', '-'))); // fooBarBaz
ucwords accepts a word separator as a second parameter, so we only need to pass a hyphen, then lowercase the first letter with lcfirst, and finally remove all hyphens with str_replace.
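For comparison outside PHP, the same hyphen-to-camelCase transform is equally compact in Python (purely illustrative, not part of the original question):

```python
import re

def camelize(s: str) -> str:
    """Convert a hyphen-delimited string to camelCase."""
    # Uppercase the letter after each hyphen, dropping the hyphen itself;
    # lowercasing first handles mixed-case input like "FOO-bar".
    return re.sub(r'-(.)', lambda m: m.group(1).upper(), s.lower())

print(camelize("abc-def-xyz"))  # -> abcDefXyz
print(camelize("the-fooo"))     # -> theFooo
```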
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8631434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Guzzle HTTP from Docker to NodeJS on the host Hello, folks.
I have a service running on my host machine. It is a NodeJS app with Express. It works fine at "localhost:3000".
Then, in a separate project, I have a Laravel App running fine inside Docker, and I access it at "http://localhost".
Now, my Laravel app needs to call the NodeJS app. I saw in Docker documentation I should use "host.docker.internal", since it will resolve to my host machine.
The this->http is a Guzzle\Client instance.
In my PHP code I have this:
$response = $this->http->request('POST', env($store->remote), [
    'form_params' => [
        'login' => $customer->login,
        'password' => $customer->password,
    ]
]);
If I call the NodeJS app from Postman it works fine. But calling from that PHP I got this error:
"message": "Client error: `POST http://host.docker.internal:3000` resulted in a `404 Not Found` response:\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>Cannot POST /</p (truncated...)\n",
"exception": "GuzzleHttp\\Exception\\ClientException",
"file": "/var/www/html/vendor/guzzlehttp/guzzle/src/Exception/RequestException.php",
"line": 113,
Does anyone have any clue how I can call my node app from PHP in Docker?
EDIT
I was wondering whether I should open port 80 and bind it to port 3000 in my PHP instance (since the request runs inside the php Docker image). I put these port attributes in my docker-compose file:
php:
build: ./docker
volumes:
- .:/var/www/html
- ./.env:/var/www/html/.env
- ./docker/config/php.ini:/usr/local/etc/php/php.ini
- ./docker/config/php-fpm.conf:/usr/local/etc/php/php-fpm.conf
- ./docker/config/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
links:
- mysql
ports:
- "3000:80"
So, port 80 in my PHP instance would bind to my OSX port 3000. But Docker complains port 3000 is in use:
Cannot start service php: b'driver failed programming external connectivity on endpoint project_php_1 (241090....): Error starting userland proxy: Bind for 0.0.0.0:3000 failed: port is already allocated'
Yes! In fact it is allocated. It is allocated by my NodeJS app, that is where I want to go. It looks like I do not know very well how ports and DNS works inside Docker for Mac.
Any help is very appreciated.
SOLVED
Hey, guys. I figured it out. I turned off the Docker container, pointed a regular Apache at my Laravel project, and saw what was happening: CORS.
I already had cors in my Express app, but after configuring it better, it worked!
Here it is, in case anyone stumbled here and needs it:
1) Add cors to your Express (if you haven't yet)
2) Configure cors for your domains. For now, I will keep it open but, for production apps, please take care and control wisely who can query your app:
// Express app:
app.use(
cors({
"origin": "*",
"methods": "GET,HEAD,PUT,PATCH,POST,DELETE",
"preflightContinue": false,
"optionsSuccessStatus": 204
})
);
app.options('*', cors());
3) Use the host.docker.internal address (in my case, host.docker.internal:3000, since my app is running on that port) from PHP to get to your Express app on the OSX host machine. In my case, it will be a different domain/IP when it gets to production.
4) Just use Guzzle\Client to make your http call:
$response = $this->http->request('POST', env($store->remote) . '/store-api/customers/login', [
'json' => [
"login" => $customer->login,
"password" => encrypt($customer->password),
]
]);
An important point to note: Express expects JSON (in my app, at least), so do NOT use "form_params"; use the "json" option for POST requests:
At least, it was NOT a duplicate of the other answers, as marked by @Phil, because those answers point to the same solution I have already mentioned: using the 'host.docker.internal' address.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52350205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to convert byte array to BigInteger in Java I'm working in Java. I want to know how to convert a byte array into a BigInteger.
Actually I used MD5's digest method, which returned a byte array that I want to convert into a BigInteger.
A: This example Get MD5 hash in a few lines of Java has a related example.
I believe you should be able to do
MessageDigest m = MessageDigest.getInstance("MD5");
m.update(message.getBytes()); // hash all the bytes (String.length() may differ from the byte count for non-ASCII text)
BigInteger bi = new BigInteger(1, m.digest());
and if you want it printed in the style "d41d8cd98f00b204e9800998ecf8427e" you should be able to do
System.out.println(bi.toString(16));
A:
Actually I used MD5's digest method which returned a byte array which I want to convert into a BigInteger.
You can use new BigInteger(byte[]).
However, it should be noted that the MD5 hash is not really an integer in any useful sense. It is really just a binary bit pattern.
I guess you are just doing this so that you can print or order the MD5 hashes. But there are less memory hungry ways of accomplishing both tasks.
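To see why the answer calls the hash "just a binary bit pattern", here is the same bytes-to-integer trick sketched in Python for illustration (Java's new BigInteger(1, bytes) is likewise unsigned and big-endian). One caveat carries over: BigInteger.toString(16) drops leading zeros, so zero-padding to 32 hex digits is needed for a canonical MD5 string:

```python
import hashlib

digest = hashlib.md5(b"").digest()      # 16 raw bytes (MD5 of the empty string)
as_int = int.from_bytes(digest, "big")  # interpret the bytes as an unsigned big-endian integer

# Zero-pad to 32 hex digits so hashes whose leading byte is 0x00 print correctly.
print(format(as_int, "032x"))  # -> d41d8cd98f00b204e9800998ecf8427e
```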
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4182029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: rake db:migrate aborted abnormally: rake aborted! uninitialized constant Rake::DSL Could anyone help me?
I searched for the same problem, but I still can't figure out a solution.
I ran "bundle update" and "bundle install" successfully, but when running "rake db:migrate" I got the following problem...
rake aborted!
uninitialized constant Rake::DSL
C:/Ruby192/lib/ruby/gems/1.9.1/gems/rake-0.9.2.2/lib/rake/tasklib.rb:8:in `<class:TaskLib>'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/rake-0.9.2.2/lib/rake/tasklib.rb:6:in `<module:Rake>'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/rake-0.9.2.2/lib/rake/tasklib.rb:3:in `<top (required)>'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/rdoc-3.11/lib/rdoc/task.rb:37:in `<top (required)>'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/tasks/documentation.rake:2:in `<top (required)>'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/tasks.rb:15:in `block in <top (required)>'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/tasks.rb:6:in `each'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/tasks.rb:6:in `<top (required)>'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/application.rb:215:in `initialize_tasks'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/application.rb:139:in `load_tasks'
C:/Ruby192/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/application.rb:77:in `method_missing'
C:/F/desktop/Projects/recle/recle/rails/eway/Rakefile:7:in `<top (required)>'
C:/Ruby192/lib/ruby/1.9.1/rake.rb:2373:in `load'
C:/Ruby192/lib/ruby/1.9.1/rake.rb:2373:in `raw_load_rakefile'
C:/Ruby192/lib/ruby/1.9.1/rake.rb:2007:in `block in load_rakefile'
C:/Ruby192/lib/ruby/1.9.1/rake.rb:2058:in `standard_exception_handling'
C:/Ruby192/lib/ruby/1.9.1/rake.rb:2006:in `load_rakefile'
C:/Ruby192/lib/ruby/1.9.1/rake.rb:1991:in `run'
C:/Ruby192/bin/rake:31:in `<main>'
A: Put this in your Rakefile above require 'rake':
require 'rake/dsl_definition'
OR, if the above solution does not work,
write this in your Gemfile for rake:
gem "rake", "0.8.7"
and go to the command prompt and run:
gem uninstall rake
This will uninstall the existing rake gem.
Then type bundle update in your project folder, which will install rake 0.8.7 again.
And enjoy Rails :).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8147084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: _Field table in Progress 4GL database Which field in this table gives the table that a _Field record belongs to? For example, say _Field has a record with _field-name = 'XYZ'; how can I identify which table this XYZ field belongs to?
A: The record ID (RECID) of the _file table is stored in a field in the _field table.
FOR EACH _file NO-LOCK, EACH _field NO-LOCK WHERE _field._file-recid = RECID(_file):
DISPLAY _file._file-name _field._field-name.
END.
Or utilize the primary index in the query using the "OF" operator:
FOR EACH _file NO-LOCK, EACH _field NO-LOCK OF _file:
DISPLAY _file._file-name _field._field-name.
END.
A: It is linked to the _File table through the _File-recid field.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30499304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: A list of dataframes in R, and I have to make three sub-dataframes from each df element I have a set of csv files, 64 of them. All of them are named like "LineNum_nnn.csv" (LineNum_101.csv, LineNum_107.csv, LineNum_501.csv, ...)
Each csv file has five columns:
Date / On / Off / Transfer / LineNum
2020-01-02 / 8874 / 7170 / 1886 / 211
2020-01-03 / 8928 / 7170 / 1886 / 211
... so on. All of them have about 800 rows.
I used the lapply function to do two things:
1. import all the csv files in my working directory, and 2. apply my pre-processing function (named function_Merged) to all 67 dataframes in the list.
I imported the csv files with this code. Works well.
wholeDataList = lapply(fileList, function(x) read.csv(x, encoding = "UTF-8"))
head(wholeDataList[[2]], 3)
A data.frame: 3 × 5
Date On Off Transfer LineNum
<chr> <int> <int> <int> <int>
1 2020-01-02 16830 14564 4536 102
2 2020-01-03 17440 14978 4614 102
3 2020-01-04 12579 10862 3011 102
Applied my pre-processing function (function_Merged) to the dataframe list:
wholeDataList_merged = lapply(wholeDataList, function(x) function_Merged(x))
Also works well. Now I have a one-dimensional list with 67 processed dataframes in a row. for example:
head(wholeDataList_merged[[2]], 3)
gives me
A data.frame: 3 × 11
Date On Off Transfer LineNum Days Workdays On_RunMed NumericDate Loess_Fit Loess_SE
<date> <int> <int> <int> <int> <chr> <fct> <dbl> <dbl> <dbl> <dbl>
1 2020-01-02 16830 14564 4536 102 Thu TRUE 16709 18263 16628.28 261.7660
6 2020-01-07 15734 13311 4268 102 Tue TRUE 16709 18268 16919.24 187.8690
7 2020-01-08 16709 14375 4698 102 Wed TRUE 16830 18269 16960.49 175.9965
Then the major problem: as you can see, these dataframes have a Date column, and I have to split all 67 dataframes, each into three frames based on these dates: 2010-08-09 / 2020-11-17 / 2021-07-04.
for example, that "wholeDataList_merged[[2]]" (dataframe of line number 102. right above one) has to be trimmed into three dataframes with their date information:
I want Line102_Phase1(before 20-08-09), Line102_Phase2(btw 20-08-09 and 20-11-17), Line102_Phase3(btw 20/11/17 and 21/07/04).
... for all those 67 dataframes. (sigh)
I know an R function cannot return multiple outputs, so I hope I can do this (splitting a dataframe into three sub-frames based on date values)
like:
function_name <- function(dataframe) {
temp_list <- list()  # was "list[]", which is a syntax error
df_Phase1 <-
dataframe["2020-02-18" <= dataframe$Date
& dataframe$Date <= "2020-08-09",]
temp_list <- append(temp_list, list(df_Phase1))  # wrap in list() so the dataframe is kept whole
df_Phase2 <-
dataframe["2020-08-10" <= dataframe$Date
& dataframe$Date <= "2020-11-17",]
temp_list <- append(temp_list, list(df_Phase2))
df_Phase3 <-
dataframe["2020-11-18" <= dataframe$Date
& dataframe$Date <= "2021-07-04",]
temp_list <- append(temp_list, list(df_Phase3))
return(temp_list)
# and attach those three split frames right next to the original dataframe (then it would be 67*4),
# or make a new 67*3 list with those split frames... whatever.
}
And most of all, these split dataframes have to contain 1. their line numbers (102, 104, ...) and 2. their phase (1, 2, 3) in the dataframe variable name, for example "Line104_Phase2".
How can I do this? Do I have to unlist those frames and extract three of them by date from each dataframe with for loops and dynamic variable names?
The splitting... I think it could be done with lots of effort somehow, but I cannot even grasp anything with the variable names. Help me.
A: Since you didn't provide a reproducible example, here are some data that hopefully replicate your problem.
set.seed(42)
dateIntervals<-as.Date(c("2010-08-09", "2020-11-17", "2021-07-04"))
possibleDates<-seq(dateIntervals[1]-1000, dateIntervals[3], by = "day")
genDF<-function() data.frame(Date = sample(possibleDates, 100), Value = runif(100))
listdf<-replicate(2, genDF(), simplify = FALSE)
Now listdf, which should play the role of your wholeDataList_merged, has only two elements and each element just two columns, but it shouldn't make any difference. Next, you can try:
lapply(listdf, function(x) split(x, findInterval(x$Date, dateIntervals)))
And you will see each element being split into three elements depending on the date.
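The findInterval idea (a binary search of each date against the cutoff vector, then grouping rows by the resulting index) is easy to sanity-check outside R as well. A minimal Python sketch of the same technique, with made-up dates and values, purely for illustration:

```python
from bisect import bisect_right
from collections import defaultdict
from datetime import date

# The cutoff dates play the role of dateIntervals in the R answer.
cutoffs = [date(2020, 8, 9), date(2020, 11, 17), date(2021, 7, 4)]

rows = [
    (date(2020, 1, 2), 16830),   # before the first cutoff -> phase 0
    (date(2020, 9, 1), 15734),   # between cutoffs 1 and 2 -> phase 1
    (date(2021, 1, 5), 16709),   # between cutoffs 2 and 3 -> phase 2
]

# bisect_right acts like R's findInterval: it returns how many cutoffs
# are <= the date, which is exactly the phase index for that row.
phases = defaultdict(list)
for day, value in rows:
    phases[bisect_right(cutoffs, day)].append((day, value))
```

Each key of `phases` then holds one of the date-based sub-frames, just as `split(x, findInterval(...))` does in R.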
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72640841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: C# TCP client does not connect to IP address Hello, I am using an Android phone's Wi-Fi hotspot to create a network, then using C# to connect to this hotspot.
The IP address of the hotspot is: 192.168.43.1.
First I connect my laptop to the Wi-Fi hotspot.
Now I am using this C# code:
private void connectToServer()
{
try
{
TcpClient tcpclnt = new TcpClient();
Console.WriteLine("Connecting.....");
tcpclnt.Connect("192.168.43.1", 8001);
// use the ipaddress as in the server program
Console.WriteLine("Connected");
Console.Write("Enter the string to be transmitted : ");
String str = Console.ReadLine();
Stream stm = tcpclnt.GetStream();
ASCIIEncoding asen = new ASCIIEncoding();
byte[] ba = asen.GetBytes(str);
Console.WriteLine("Transmitting.....");
stm.Write(ba, 0, ba.Length);
byte[] bb = new byte[100];
int k = stm.Read(bb, 0, 100);
for (int i = 0; i < k; i++)
Console.Write(Convert.ToChar(bb[i]));
tcpclnt.Close();
}
catch (Exception e)
{
Console.WriteLine("Error..... " + e.Message);
}
}
But I always get this exception:
Error..... No connection could be made because the target machine
actively refused it 192.168.43.1:8001
After some searching with netstat, I found that the port is not open on my machine:
TCP 127.0.0.1:5037 admin-PC:65298 TIME_WAIT
TCP 127.0.0.1:5037 admin-PC:65299 TIME_WAIT
TCP 127.0.0.1:5037 admin-PC:65300 TIME_WAIT
TCP 127.0.0.1:5037 admin-PC:65301 TIME_WAIT
TCP 127.0.0.1:5037 admin-PC:65302 TIME_WAIT
TCP 127.0.0.1:5037 admin-PC:65304 TIME_WAIT
TCP 127.0.0.1:5037 admin-PC:65305 TIME_WAIT
TCP 127.0.0.1:49165 admin-PC:49436 ESTABLISHED
TCP 127.0.0.1:49263 admin-PC:49264 ESTABLISHED
TCP 127.0.0.1:49264 admin-PC:49263 ESTABLISHED
TCP 127.0.0.1:49265 admin-PC:49266 ESTABLISHED
TCP 127.0.0.1:49266 admin-PC:49265 ESTABLISHED
TCP 127.0.0.1:49436 admin-PC:49165 ESTABLISHED
TCP 127.0.0.1:49559 admin-PC:49560 ESTABLISHED
TCP 127.0.0.1:49560 admin-PC:49559 ESTABLISHED
TCP 127.0.0.1:51477 admin-PC:51478 ESTABLISHED
TCP 127.0.0.1:51478 admin-PC:51477 ESTABLISHED
TCP 127.0.0.1:55300 admin-PC:55301 ESTABLISHED
TCP 127.0.0.1:55301 admin-PC:55300 ESTABLISHED
TCP 127.0.0.1:61797 admin-PC:61798 ESTABLISHED
TCP 127.0.0.1:61798 admin-PC:61797 ESTABLISHED
TCP 127.0.0.1:61800 admin-PC:61801 ESTABLISHED
TCP 127.0.0.1:61801 admin-PC:61800 ESTABLISHED
TCP 127.0.0.1:61807 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:61809 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:61810 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:61811 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:61813 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63271 admin-PC:63272 ESTABLISHED
TCP 127.0.0.1:63272 admin-PC:63271 ESTABLISHED
TCP 127.0.0.1:63274 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63275 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63279 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63284 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63304 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63351 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63353 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63354 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63355 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63356 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63357 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63358 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63359 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63367 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63368 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63370 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63373 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63377 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63378 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63385 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63386 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63387 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63388 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63389 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63396 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:63462 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:64544 admin-PC:49333 TIME_WAIT
TCP 127.0.0.1:64545 admin-PC:64546 TIME_WAIT
TCP 127.0.0.1:64555 admin-PC:5037 TIME_WAIT
TCP 127.0.0.1:64557 admin-PC:5037 TIME_WAIT
TCP 127.0.0.1:64558 admin-PC:5037 TIME_WAIT
TCP 127.0.0.1:64919 admin-PC:5037 ESTABLISHED
TCP 127.0.0.1:65303 admin-PC:5563 SYN_SENT
TCP 192.168.1.34:64035 43.239.149.131:http TIME_WAIT
TCP 192.168.12.2:63262 192.168.12.101:22469 ESTABLISHED
I have read the answers but still get this error.
Here is my android code:
public class MainActivity extends Activity {
private ServerSocket serverSocket;
Handler updateConversationHandler;
Thread serverThread = null;
private TextView text;
public static final int SERVERPORT = 8001;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
text = (TextView) findViewById(R.id.text2);
updateConversationHandler = new Handler();
this.serverThread = new Thread(new ServerThread());
this.serverThread.start();
}
@Override
protected void onStop() {
super.onStop();
try {
serverSocket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
class ServerThread implements Runnable {
public void run() {
Socket socket = null;
try {
serverSocket = new ServerSocket(SERVERPORT);
} catch (IOException e) {
e.printStackTrace();
}
while (!Thread.currentThread().isInterrupted()) {
try {
socket = serverSocket.accept();
CommunicationThread commThread = new CommunicationThread(socket);
new Thread(commThread).start();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
class CommunicationThread implements Runnable {
private Socket clientSocket;
private BufferedReader input;
public CommunicationThread(Socket clientSocket) {
this.clientSocket = clientSocket;
try {
this.input = new BufferedReader(new InputStreamReader(this.clientSocket.getInputStream()));
} catch (IOException e) {
e.printStackTrace();
}
}
public void run() {
while (!Thread.currentThread().isInterrupted()) {
try {
String read = input.readLine();
updateConversationHandler.post(new updateUIThread(read));
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
class updateUIThread implements Runnable {
private String msg;
public updateUIThread(String str) {
this.msg = str;
}
@Override
public void run() {
text.setText(text.getText().toString()+"Client Says: "+ msg + "\n");
}
}
}
A: Based on the exception you are getting, the problem is not in your code, it is in the connection itself. This can be a firewall issue or the process listening on a different port.
EDIT: The OP found that the problem was in IIS and that resetting IIS solved it. To reset IIS, you can do this either manually or through the command prompt:
Run (Win+R) -> open cmd (with admin privileges) -> type "iisreset" (without "")
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42052056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to fix an error when creating a controller in Laravel? I am using the command below to create a controller.
$ php artisan make:controller PagesController
After executing the command I get the error below.
PHP Warning: require(C:\xampp\htdocs\WebDevApp\bootstrap/../vendor/autoload.php): failed to open stream: No such file or directory in C:\xampp\htdocs\WebDevApp\bootstrap\autoload.php on line 17
Warning: require(C:\xampp\htdocs\WebDevApp\bootstrap/../vendor/autoload.php): failed to open stream: No such file or directory in C:\xampp\htdocs\WebDevApp\bootstrap\autoload.php on line 17
PHP Fatal error: require(): Failed opening required 'C:\xampp\htdocs\WebDevApp\bootstrap/../vendor/autoload.php' (include_path='C:\xampp\php\PEAR') in C:\xampp\htdocs\WebDevApp\bootstrap\autoload.php on line 17
Fatal error: require(): Failed opening required 'C:\xampp\htdocs\WebDevApp\bootstrap/../vendor/autoload.php' (include_path='C:\xampp\php\PEAR') in C:\xampp\htdocs\WebDevApp\bootstrap\autoload.php on line 17
A: Just run this inside the directory where you installed your project:
composer install
after
php artisan make:controller PagesController
A: composer update --no-scripts
Run this command; after that, add a .env file to your folder.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/48860066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: lambda -> delegate doesn't compile The last statement does not compile. Please refer to the comments in the code for the details of my question.
class Test
{
private static void Foo(Delegate d){}
private static void Bar(Action a){}
static void Main()
{
Foo(new Action(() => { Console.WriteLine("a"); })); // Action converts to Delegate implicitly
Bar(() => { Console.WriteLine("b"); }); // lambda converts to Action implicitly
Foo(() => { Console.WriteLine("c"); }); // Why doesn't this compile ? (lambda converts to Action implicitly, and then Action converts to Delegate implicitly)
}
}
A: Because the .NET compiler doesn't know what type of delegate to turn the lambda into. It could be an Action, or it could be a void MyDelegate().
If you change it as follows, it should work:
Foo(new Action(() => { Console.WriteLine("c"); }));
A: Why should the compiler know how to two-step: from lambda -> Action -> Delegate?
This compiles:
class Test
{
private static void Foo(Delegate d) { }
private static void Bar(Action a) { }
static void Main()
{
Foo(new Action(() => { Console.WriteLine("world2"); })); // Action converts to Delegate implicitly
Bar(() => { Console.WriteLine("world3"); }); // lambda converts to Action implicitly
Foo((Action)(() => { Console.WriteLine("world3"); })); // This compiles
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/6338459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: converting infix to prefix in python I am trying to write an Infix to Prefix Converter where e.g. I would like to convert this:
1 + ((C + A ) * (B - F))
to something like:
add(1, multiply(add(C, A), subtract(B, F)))
but I get this instead :
multiply(add(1, add(C, A), subtract(B, F)))
This is the code I have so far
postfix = []
temp = []
newTemp = []
def textOperator(s):
if s is '+':
return 'add('
elif s is '-':
return 'subtract('
elif s is '*':
return 'multiply('
else:
return ""
def typeof(s):
if s is '(':
return leftparentheses
elif s is ')':
return rightparentheses
elif s is '+' or s is '-' or s is '*' or s is '%' or s is '/':
return operator
elif s is ' ':
return empty
else :
return operand
infix = "1 + ((C + A ) * (B - F))"
for i in infix :
type = typeof(i)
if type is operand:
newTemp.append(i)
elif type is operator:
postfix.append(textOperator(i))
postfix.append(newTemp.pop())
postfix.append(', ')
elif type is leftparentheses :
newTemp.append(i)
elif type is rightparentheses :
next = newTemp.pop()
while next is not '(':
postfix.append(next)
next = newTemp.pop()
postfix.append(')')
newTemp.append(''.join(postfix))
while len(postfix) > 0 :
postfix.pop()
elif type is empty:
continue
print("newTemp = ", newTemp)
print("postfix = ", postfix)
while len(newTemp) > 0 :
postfix.append(newTemp.pop())
postfix.append(')')
print(''.join(postfix))
Can someone please help me figure out how I would fix this.
A: What I see, with the parenthetical clauses, is a recursive problem crying out for a recursive solution. The following is a rethink of your program that might give you some ideas of how to restructure it, even if you don't buy into my recursion argument:
import sys
from enum import Enum
class Type(Enum): # This could also be done with individual classes
leftparentheses = 0
rightparentheses = 1
operator = 2
empty = 3
operand = 4
OPERATORS = { # get your data out of your code...
"+": "add",
"-": "subtract",
"*": "multiply",
"%": "modulus",
"/": "divide",
}
def textOperator(string):
if string not in OPERATORS:
sys.exit("Unknown operator: " + string)
return OPERATORS[string]
def typeof(string):
if string == '(':
return Type.leftparentheses
elif string == ')':
return Type.rightparentheses
elif string in OPERATORS:
return Type.operator
elif string == ' ':
return Type.empty
else:
return Type.operand
def process(tokens):
stack = []
while tokens:
token = tokens.pop()
category = typeof(token)
print("token = ", token, " (" + str(category) + ")")
if category == Type.operand:
stack.append(token)
elif category == Type.operator:
stack.append((textOperator(token), stack.pop(), process(tokens)))
elif category == Type.leftparentheses:
stack.append(process(tokens))
elif category == Type.rightparentheses:
return stack.pop()
elif category == Type.empty:
continue
print("stack = ", stack)
return stack.pop()
INFIX = "1 + ((C + A ) * (B - F))"
# pop/append work from right, so reverse, and require a real list
postfix = process(list(INFIX[::-1]))
print(postfix)
The result of this program is a structure like:
('add', '1', ('multiply', ('add', 'C', 'A'), ('subtract', 'B', 'F')))
Which you should be able to post process into the string form you desire (again, recursively...)
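That recursive post-processing step is short. One way to sketch it: walk the nested tuples and emit the call-style string the question asks for.

```python
def to_call_syntax(node):
    # Leaves are plain strings; internal nodes are (operator_name, arg, arg).
    if isinstance(node, tuple):
        name, *args = node
        return name + "(" + ", ".join(to_call_syntax(a) for a in args) + ")"
    return node

tree = ('add', '1', ('multiply', ('add', 'C', 'A'), ('subtract', 'B', 'F')))
print(to_call_syntax(tree))  # add(1, multiply(add(C, A), subtract(B, F)))
```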
PS: type and next are Python built-ins and/or reserved words, don't use them for variable names.
PPS: replace INFIX[::-1] with sys.argv[1][::-1] and you can pass test cases into the program to see what it does with them.
PPPS: like your original, this only handles single digit numbers (or single letter variables), you'll need to provide a better tokenizer than list() to get that working right.
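On that last point, a regex-based tokenizer is one way to keep multi-digit numbers and multi-letter names together; a sketch (extend the character class as needed):

```python
import re

def tokenize(expr):
    # Numbers and identifiers become single tokens; operators and
    # parentheses are matched one character at a time. Whitespace and
    # anything else unmatched is simply dropped by findall.
    return re.findall(r"\d+|[A-Za-z]+|[()+\-*/%]", expr)

print(tokenize("12 + (CC * 3)"))  # ['12', '+', '(', 'CC', '*', '3', ')']
```

You would then call `process(tokenize(INFIX)[::-1])` instead of reversing the raw string character by character.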
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/35759247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Need to write files to disk, how can I reference the folder in a web application? In a web application, that may be installed anywhere on the filesystem, I need to figure out the path to the root of the installation folder.
I want to write xml files to the directory:
c:/installation/path/web_app/files/
Is this possible or do I have to store this path in the web.config?
A: You can use Server.MapPath()
as in
Server.MapPath("~/files/")
A: Assuming "web_app" in your example is always the root folder of your web application, you can reference the files like...
string path = Server.MapPath("/files/");
A: You can use var rootFolder = Server.MapPath("~") to retrieve the physical path.
The tilde character ~ is replaced with the root directory of your web application, e.g. c:\installation\path\web_app
A: Use Server.MapPath
http://msdn.microsoft.com/en-us/library/ms524632.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2359471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Triggering multiple functions with React/TypeScript I'm trying to trigger a function every time a different radio button is clicked in my form. This is the form:
<FormControl>
<RadioGroup
row
aria-labelledby="demo-row-radio-buttons-group-label"
name="row-radio-buttons-group"
value = {selectionValue}
onChange={handleSelection}
>
<FormLabel id="demo-radio-buttons-group-label">Price Selection</FormLabel>
<FormControlLabel id="standard" value="standard" control={<Radio />} label="standard" />
<FormControlLabel id="premium" value="premium" control={<Radio />} label="premium" />
<FormControlLabel id="excelium" value="excelium" control={<Radio />} label="excelium" />
</RadioGroup>
</FormControl>
and it triggers onChange={handleSelection}, which works and calls serviceCalc():
const [selectionValue, setSelectionValue] = useState("")
const handleSelection = (event : any) => {
setSelectionValue(event.target.value);
serviceCalc()
}
My problem is that when I get to serviceCalc(), the function prints my console.log and that's it. How can I get standard(), premium() and excelium() to go through?
const serviceCalc = () => {
console.log("service calc")
const standard1 = (document.getElementById("standard") as HTMLInputElement);
const premium1 = (document.getElementById("premium") as HTMLInputElement);
const excelium1 = (document.getElementById("excelium") as HTMLInputElement);
if (standard1.checked){
standard();
}
else if (premium1.checked) {
premium();
}
else if (excelium1.checked) {
excelium();
}
}
any help is greatly appreciated.
A: The id you're passing to the <FormControlLabel id="someId"/> component is NOT the id of the <input> HTML element but the id of its <label> element.
So when you check document.getElementById("someId").checked you always get undefined, and then you never get through your if - else checks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71639394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Move an entire HTML div from one page to another using jQuery How can I move an entire HTML div from one page to another using jQuery?
Here is an example:
index.html
<div id="parent1" class="con">
<h1>text</h1>
<p>Lorem ipsum dolor sit amet,
consectetur adipisicing elit, sed do eiusmod t
empor incididunt ut labore et dolore
<span class="icon">icon</span>
</p>
</div>
<a href="#" class="btn">click me</a>
</div>
input.html
<div id="parent2" class="con">
</div>
I want, when I click on the btn, to move what is inside #parent1 to the #parent2 that is on the input.html page.
Any help please, and thank you in advance. I have read the question below but it didn't help me solve the problem:
How to move all HTML element children to another parent using JavaScript?
A: Try this code; it should do what you want.
On 1st Page store data into localStorage variable
var page_content = document.getElementsByTagName("body")[0].innerHTML;
console.log( page_content );
localStorage.setItem("page_content", page_content );
Retrieve on 2nd page
document.getElementById("parent2").innerHTML = localStorage.getItem("page_content");
console.log( localStorage.getItem("page_content") );
Check the console on the 1st page to confirm the data is stored successfully. Note that localStorage is only shared between pages on the same origin.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57201504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Executing Local Query with loaded metadata fails I'm new to breeze, this looks like a bug, but thought I'd ask here in case I just don't get it.
Setup loading metadata:
var metadataStore = new breeze.MetadataStore();
metadataStore.importMetadata(metadata);
queryOptions = new breeze.QueryOptions( {
fetchStrategy: breeze.FetchStrategy.FromLocalCache
});
mgr = new breeze.EntityManager({
serviceName: 'breeze',
metadataStore: metadataStore,
queryOptions: queryOptions
});
Executing local query explicitly works:
var q = breeze.EntityQuery.from("Boards")
.toType('Board')
.where('isImplicit', 'equals', withImplicits)
.orderBy('name');
return manager.executeQueryLocally(q) // returns result
But using query.using doesn't:
var q = breeze.EntityQuery.from("Boards")
.toType('Board')
.where('isImplicit', 'equals', withImplicits)
.orderBy('name');
q = q.using(breeze.FetchStrategy.FromLocalCache)
return manager.executeQuery(q)
UPDATE: To clarify, the above throws an error as it tries to fetchMetadata and there is no endpoint to fetch from. If I monkey-patch the code below, it works fine. It seems that if the dataService .hasServerMetadata, you don't need to fetch it. I'm creating a test harness for a breeze adapter, so I want to be able to run without the backend.
Looks like problem is this line in EntityManager:
if ( (!dataService.hasServerMetadata ) || this.metadataStore.hasMetadataFor(dataService.serviceName)) {
promise = executeQueryCore(this, query, queryOptions, dataService);
} else {
var that = this;
promise = this.fetchMetadata(dataService).then(function () {
return executeQueryCore(that, query, queryOptions, dataService);
});
}
I believe the line should be if( dataService.hasServerMetadata || ..., but being new to Breeze I thought I'd ask here before opening a GH issue.
A: EntityManager.executeQueryLocally is a synchronous function and you can use its result immediately. i.e.
var myEntities = myEntityManager.executeQueryLocally(myQuery);
Whereas EntityManager.executeQuery is an asynchronous function (even if the query has a 'using' call that specifies that this is a local query). So you need to call it like this:
var q2 = myQuery.using(breeze.FetchStrategy.FromLocalCache);
myEntityManager.executeQuery(q2).then(function(data) {
var myEntities = data.results;
});
The idea behind this is that with executeQuery you treat all queries in exactly the same fashion, i.e. asynchronously, regardless of whether they are actually asynchronous under the hood.
If you want to create an EntityManager that does not go to the server for metadata you can do the following:
var ds = new breeze.DataService({
serviceName: "none",
hasServerMetadata: false
});
var manager = new breeze.EntityManager({
dataService: ds
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25653216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Angular 4 Filter Search Custom Pipe for selected columns I have created a custom pipe for filtering table data. Now I want to add a dropdown that contains the table columns; on selecting a particular column, the search should filter by that column.
Any help on this is appreciated.
code as below :
home.component.html
<select id="Select1" [(ngModel)]="selected">
<option>EmpID</option>
<option>EmpName</option>
<option>Age</option>
<option>Address1</option>
<option>Address2</option>
</select>
<input type="text" placeholder="Search" [(ngModel)]="query">
<table *ngIf="employee">
<tr>
<th>EmpID</th>
<th>EmpName</th>
<th>EmpAge</th>
<th>Address1</th>
<th>Address2</th>
<th>Change Detail</th>
<th>Add Detail</th>
</tr>
<tr *ngFor="let employe of employee | search:query | paginate: { itemsPerPage: 10, currentPage: p }" >
<td>{{employe.EmpID}}</td>
<td>{{employe.EmpName}}</td>
<td>{{employe.Age}}</td>
<td>{{employe.Address1}}</td>
<td>{{employe.Address2}}</td>
<td><button class="btn btn-primary" (click)="open(employe);">Edit</button></td>
<td><button class="btn btn-primary" (click)="add();">Add</button></td>
</tr>
</table>
Search.pipe.ts
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: 'search'
})
export class SearchPipe implements PipeTransform {
transform(value: any, args?: any): any {
if(!value)return null;
if(!args)return value;
args = args.toLowerCase();
return value.filter(function(item){
return JSON.stringify(item).toLowerCase().includes(args);
});
}
home.component.ts
employee: any [] = [{
"EmpID": "1",
"EmpName": "mukesh12",
"Age": "182",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "2",
"EmpName": "Rakesh",
"Age": "1821",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "3",
"EmpName": "abhishek",
"Age": "184",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "4",
"EmpName": "rawt",
"Age": "186",
"Address1": "ktreptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "5",
"EmpName": "boy",
"Age": "11",
"Address1": "Vtgdreptopelia",
"Address2": "Ttrnneptopelia hghg"
},
{
"EmpID": "6",
"EmpName": "himanshu",
"Age": "28",
"Address1": "MStreptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "7",
"EmpName": "katat",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "8",
"EmpName": "gd",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "9",
"EmpName": "tyss",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "10",
"EmpName": "mukesh",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "11",
"EmpName": "mukesh",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "12",
"EmpName": "lopa",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "13",
"EmpName": "todo",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "14",
"EmpName": "mukesh",
"Age": "16",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "15",
"EmpName": "mukesh",
"Age": "38",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "16",
"EmpName": "mukesh",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "17",
"EmpName": "see",
"Age": "08",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "18",
"EmpName": "hmmm",
"Age": "18",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "19",
"EmpName": "mukesh",
"Age": "28",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
},
{
"EmpID": "20",
"EmpName": "tuta",
"Age": "68",
"Address1": "Streptopelia",
"Address2": "Streptopelia hghg"
}];
If EmpID is selected, then it will search according to EmpID in the search field; if EmpName is selected, then it will search according to EmpName; and so on.
A: Add another parameter to your pipe:
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({ name: 'search' })
export class SearchPipe implements PipeTransform {
transform(value: any, q?: any, colName: any = "EmpName"): any {
if(!value) return null;
if(!q) return value;
q = q.toLowerCase();
return value.filter((item)=> {
return item[colName].toLowerCase().includes(q);
});
}
}
home.component.html
<tr *ngFor="let employe of employee | search:query:selected | paginate: { itemsPerPage: 10, currentPage: p }" >
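The transform itself boils down to a case-insensitive substring match on the selected column. For readers who want to sanity-check that logic outside Angular, here is the same filter sketched in plain Python (the data and names are illustrative, not from the question):

```python
def search(rows, query, col_name="EmpName"):
    # Mirrors the pipe: an empty query returns everything; otherwise keep
    # rows whose selected column contains the query, case-insensitively.
    if not query:
        return rows
    q = query.lower()
    return [row for row in rows if q in str(row[col_name]).lower()]

employees = [
    {"EmpID": "1", "EmpName": "mukesh12", "Age": "182"},
    {"EmpID": "2", "EmpName": "Rakesh", "Age": "1821"},
]
print(search(employees, "rak"))  # the Rakesh row only
```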
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54627870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to set a device admin app as default launcher without prompt on a rooted device? I have a rooted device where I have successfully made my app a device admin app and a home launcher application too, but the user is given the choice to start my app or the default Google launcher. I'd like my app to always be the default launcher. Is there a way to do this without user interaction?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31244826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Speed of PHP UDP scraper is incredibly slow, how to improve? I am working on a little project of mine and have built a UDP scraper that uses sockets to return data about a specific SHA-1 hash.
It works but is incredibly slow, and I wondered if anyone knows how I could speed it up or improve the existing code.
The code is below:
// SCRAPE UDP
private function scrapeUDP($tracker, $hash) {
// GET TRACKER DETAILS
preg_match('%udp://([^:/]*)(?::([0-9]*))?(?:/)?%i', $tracker, $info);
// GENERATE TRANSACTION ID
$transID = mt_rand(0, 65535);
// PACKED TRANSACTION ID
$packedTransID = pack('N', $transID);
// ATTEMPT TO CREATE A SOCKET
if(!$socket = @fsockopen('udp://' . $info[1], $info[2], $errno, $errstr, 2)) {
return;
}
// SET STREAM TIMEOUT
stream_set_timeout($socket, 2);
// CONNECTION ID
$connID = "\x00\x00\x04\x17\x27\x10\x19\x80";
// BUILD CONNECTION REQUEST PACKET
$packet = $connID . pack('N', 0) . $packedTransID;
// SEND PACKET
fwrite($socket, $packet);
// CONNECTION RESPONSE
$response = fread($socket, 16);
// CHECK CONNECTION RESPONSE LENGTH
if(strlen($response) < 16) {
return;
}
// UNPACK CONNECTION RESPONSE
$returnData = unpack('Naction/NtransID', $response);
// CHECK CONNECTION RESPONSE DATA
if($returnData['action'] != 0 || $returnData['transID'] != $transID) {
return;
}
// GET CONNECTION ID
$connID = substr($response, 8, 8);
// BUILD SCRAPE PACKET
$packet = $connID . pack('N', 2) . $packedTransID . $hash;
// SEND SCRAPE PACKET
fwrite($socket, $packet);
// SCRAPE RESPONSE
$response = fread($socket, 20);
// CHECK SCRAPE RESPONSE LENGTH
if(strlen($response) < 20) {
return;
}
// UNPACK SCRAPE RESPONSE
$returnData = unpack('Naction/NtransID', $response);
// CHECK SCRAPE RESPONSE DATA
if($returnData['action'] != 2 || $returnData['transID'] != $transID) {
return;
}
// UNPACK SCRAPE INFORMATION
$returnData = unpack('Nseeders/Ncompleted/Nleechers', substr($response, 8, 12));
// RETURN TRACKER INFORMATION
return array('seeders' => $returnData['seeders'], 'leechers' => $returnData['leechers'],);
}
This is the first time I have ever created anything to do with sockets or UDP, so forgive me if it is a mess!
Thanks...
A: You have to make parallel requests using socket_select() and non-blocking sockets (or forks), because you are spending most of the time waiting for responses. Additionally, it may be better to use low-level functions like socket_read() or similar, to control the connection and data transmission more precisely.
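As an illustration of that advice, here is the non-blocking, select-based pattern sketched in Python rather than PHP; the idea carries over directly to PHP's socket_select(). All names and the 4096-byte read size are illustrative:

```python
import selectors
import socket

def parallel_udp_query(targets, payload, timeout=2.0):
    # Fire the same payload at every endpoint first, then wait for all the
    # replies together: total wall time is bounded by the slowest tracker,
    # not by the sum of every round trip as in the sequential version.
    sel = selectors.DefaultSelector()
    for addr in targets:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setblocking(False)
        sock.sendto(payload, addr)
        sel.register(sock, selectors.EVENT_READ, data=addr)

    replies = {}
    pending = len(targets)
    while pending:
        events = sel.select(timeout)
        if not events:
            break  # overall timeout: give up on whoever hasn't answered
        for key, _ in events:
            data, _peer = key.fileobj.recvfrom(4096)
            replies[key.data] = data
            sel.unregister(key.fileobj)
            key.fileobj.close()
            pending -= 1
    for key in list(sel.get_map().values()):
        key.fileobj.close()
    return replies
```

The same restructuring in PHP (one send loop, then one socket_select() loop over all pending sockets) removes the per-tracker two-second waits that dominate the scraper's running time.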
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/10849587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Create a user with email which has been processed from the previous page with UserCreationForm Django I am trying to create a user with the email as username, where the email is first screened, i.e., if the email is not registered then a user is created. How can I pass or set the email in forms.py, given that it was already processed on the previous page?
Models.py
from django.contrib.auth.models import AbstractBaseUser
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
class CUserManager(models.Manager):
def _create_user(self, email, password, **extra_fields):
now = timezone.now()
if not email:
raise ValueError(_('Email is required'))
user = self.model(
email = email,
date_joined = now, **extra_fields
)
user.set_password(password)
user.save(using = self._db)
return user
def create_user(self, email, password=None, **extra_fields):
return self._create_user(email, password, **extra_fields)
def get_by_natural_key(self, email):
return self.get(email=email)
def create_superuser(self, email, password, **extra_fields):
return self._create_user(email, password,**extra_fields)
class CUser(AbstractBaseUser):
email = models.EmailField(_('email address'), unique=True)
first_name = models.CharField(_('first name'), max_length=255)
last_name = models.CharField(_('last name'), max_length=255)
date_joined = models.DateTimeField(_('date created'), auto_now_add=True)
is_active = models.BooleanField(default=True)
objects = CUserManager()
USERNAME_FIELD = 'email'
...
Forms.py
from django.contrib.auth.forms import UserCreationForm
class RegistrationForm(UserCreationForm):
class Meta:
model = CUser
fields = ('first_name', 'last_name', 'password1', 'password2')
in HTML
<form action="" method="post" role="form">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Submit" />
</form>
The email comes from session['email'] which is saved in the previous page.
How can I pass this session['email'] to forms.py?
A: You can try like this:
# Form
class RegistrationForm(UserCreationForm):
class Meta:
model = CUser
fields = ('first_name', 'last_name', 'password1', 'password2')
def save(self, **kwargs):
email = kwargs.pop('email')
user = super(RegistrationForm, self).save(commit=False)
user.set_password(self.cleaned_data['password1'])
user.email = email
user.save()
return user
# View
# your form codes...
if form.is_valid():
form.save(email=request.session.get('email'))
# rest of the codes
What I am doing here: first I override the save method to catch the email value from the keyword arguments; then I pass the email value to the ModelForm's save method. Hope it helps!!
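The kwargs.pop trick used in that save override is plain Python and worth seeing in isolation: the override consumes the extra keyword before delegating, so the parent save() never receives an argument it does not expect. A Django-free sketch (class names are made up for illustration):

```python
class BaseForm:
    # Stand-in for the framework's save(); it knows nothing about email.
    def save(self, commit=True):
        return {"committed": commit}

class RegistrationForm(BaseForm):
    def save(self, **kwargs):
        email = kwargs.pop("email")      # consume the extra argument
        result = super().save(**kwargs)  # delegate with what remains
        result["email"] = email
        return result

form = RegistrationForm()
print(form.save(email="a@b.com"))  # {'committed': True, 'email': 'a@b.com'}
```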
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53424205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Unable to edit spreadsheets on server using ASP.NET web system I am currently creating an ASP.NET/VB web system that manipulates an Excel file as part of its functionality. This Excel file contains multiple combo boxes, text boxes and labels that need to be edited, as well as certain cells.
When trying to edit any of the "form fields" - the labels, text boxes and combo boxes - I get the following error.
Public member 'comboOrderedBy' on type 'Worksheet' not found.
This appears on the line
xlWorkSheet.comboOrderedBy.Text = OrderedBy.Text
What is causing this? This error only appears when running the system from the server - when I run it using localhost (Visual Studio's built-in debugger), it works fine.
The code for this section is;
Dim xlApp As Excel.Application
Dim xlWorkBook As Excel.Workbook
Dim xlWorkSheet As Excel.Worksheet
xlApp = New Excel.Application
xlWorkBook = xlApp.Workbooks.Open("C:\Testing\BAndQCardSubmissionsTemplate.xlsx")
xlWorkSheet = xlWorkBook.Worksheets("Card Order Details")
xlWorkSheet.comboOrderedBy.Text = OrderedBy.Text
xlWorkSheet.txtcustref = CustomerRef.Text
xlWorkSheet.frm_txt_CustRef.Text = CustomerRef.Text
xlWorkSheet.Cells(8, 4) = CardNo1Label.Text & CheckDigit1Label.Text.Trim()
xlWorkSheet.Cells(8, 5) = Value1Label.Text
xlWorkSheet.Cells(8, 6) = PropRef1Label.Text
xlWorkSheet.Cells(8, 7) = NewLetName1Label.Text & RepairName1Label.Text
xlWorkSheet.Cells(8, 8) = NewLetOrRep1Label.Text
xlWorkSheet.Cells(8, 9) = Concatenation1.Text
Dim FileDate As String = SysdateTextBox.Text
FileDate = FileDate.Replace(" ", "")
FileDate = FileDate.Replace("/", "")
FileDate = FileDate.Replace(":", "")
xlWorkBook.SaveAs("C:\Testing\BAndQCardSubmission" & FileDate & ".xlsx")
xlWorkBook.Close(True)
xlApp.Quit()
A: This was fixed when I instead decided to add the data into some other cells, and then create a Macro that ran in order to grab the information from the cells and place them in the form fields.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27376348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Flutter Firebase Dynamic Link is not working on new install I am facing a problem with the first run with query parameters after a fresh install of the app. I use a Flutter app and check for new links with this piece of code:
final PendingDynamicLinkData? initialLink =
await FirebaseDynamicLinks.instance.getInitialLink();
And it works if I just click on link when app is closed or in tray (paused). So custom firebase scheme works in this scenario.
https://example.com/?route=season
and full link at firebase console:
https://example.page.link/?link=https://example.com/?route%3Dseason&apn=***&isi=***&ibi=***
Sometimes during installing I get this log from Firebase:
my.custom.schema://google/link/?request_ip_version=IP_V6&match_message=No%20pre-install%20link%20matched%20for%20this%20device.
But I use the same link that works while the app is installed. Note that I don't use the "Skip the preview page" flag.
Flow with this link (when app is not installed):
*
*click on link from email
*see preview page "https://preview.page.link/example.page.link/season"
*redirect to app store (after click on OPEN)
*first open an App
Any help would be appreciated! Thanks in advance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72394369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Elasticsearch completion suggester issue Issue - completion suggester with custom keyword lowercase analyzer not working as expected. We can reproduce the issue with the following steps.
Not able to understand what's causing the issue here. However, if we search for "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE", it returns a result.
Create index
curl -X PUT "localhost:9200/com.tmp.index?pretty" -H 'Content-Type: application/json' -d'{
"mappings": {
"dynamic": "false",
"properties": {
"namesuggest": {
"type": "completion",
"analyzer": "keyword_lowercase_analyzer",
"preserve_separators": true,
"preserve_position_increments": true,
"max_input_length": 50,
"contexts": [
{
"name": "searchable",
"type": "CATEGORY"
}
]
}
}
},
"settings": {
"index": {
"mapping": {
"ignore_malformed": "true"
},
"refresh_interval": "5s",
"analysis": {
"analyzer": {
"keyword_lowercase_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "keyword"
}
}
},
"number_of_replicas": "0",
"number_of_shards": "1"
}
}
}'
Index document
curl -X PUT "localhost:9200/com.tmp.index/_doc/123?pretty" -H 'Content-Type: application/json' -d'{
"namesuggest": {
"input": [
"PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE LIMITED."
],
"contexts": {
"searchable": [
"*"
]
}
}
}
'
Issue - Complete suggest not giving result
curl -X GET "localhost:9200/com.tmp.index/_search?pretty" -H 'Content-Type: application/json' -d'{
"suggest": {
"legalEntity": {
"prefix": "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE LIMITED.",
"completion": {
"field": "namesuggest",
"size": 10,
"contexts": {
"searchable": [
{
"context": "*",
"boost": 1,
"prefix": false
}
]
}
}
}
}
}'
A: You are facing this issue because the default value of the max_input_length parameter is 50.
Below is the description of this parameter from the documentation:
Limits the length of a single input, defaults to 50 UTF-16 code
points. This limit is only used at index time to reduce the total
number of characters per input string in order to prevent massive
inputs from bloating the underlying datastructure. Most use cases
won’t be influenced by the default value since prefix completions
seldom grow beyond prefixes longer than a handful of characters.
If you enter the below string, which is exactly 50 characters, then you will get a response:
PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE
Now if you add one or two more characters to the above string then it will not return the result:
PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE L
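Since the limit is counted in UTF-16 code points, the cutoff is easy to verify; for plain ASCII input, a quick Python length check (illustrative only) shows why the first string matches and the second does not:

```python
ok = "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE"
too_long = "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE L"

# For ASCII text, one character == one UTF-16 code point
print(len(ok))        # 50 -> fits the default max_input_length
print(len(too_long))  # 52 -> input is cut off at index time
```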
You can keep this default behaviour, or you can update your index mapping with an increased value of the max_input_length parameter and reindex your data.
{
"mappings": {
"dynamic": "false",
"properties": {
"namesuggest": {
"type": "completion",
"analyzer": "keyword_lowercase_analyzer",
"preserve_separators": true,
"preserve_position_increments": true,
"max_input_length": 100,
"contexts": [
{
"name": "searchable",
"type": "CATEGORY"
}
]
}
}
},
"settings": {
"index": {
"mapping": {
"ignore_malformed": "true"
},
"refresh_interval": "5s",
"analysis": {
"analyzer": {
"keyword_lowercase_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "keyword"
}
}
},
"number_of_replicas": "0",
"number_of_shards": "1"
}
}
}
You will get a response like the one below after updating the index:
"suggest": {
"legalEntity": [
{
"text": "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE LIMITED",
"offset": 0,
"length": 58,
"options": [
{
"text": "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE LIMITED.",
"_index": "74071871",
"_id": "123",
"_score": 1,
"_source": {
"namesuggest": {
"input": [
"PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE LIMITED."
],
"contexts": {
"searchable": [
"*"
]
}
}
},
"contexts": {
"searchable": [
"*"
]
}
}
]
}
]
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74071871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: PHP - file_get_contents skip style and script I am trying to pull the HTML of a site.
I am using file_get_contents($url).
When I run file_get_contents it takes too much time to pull the HTML of the host site.
Can I skip styles, scripts and images?
I think it would then take less time to pull the HTML of that site.
A: Try:
$file = file_get_contents($url);
$only_body = preg_replace("/.*<body[^>]*>|<\/body>.*/si", "", $file);
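The same body-extraction idea, sketched in Python for comparison (the regex mirrors the PHP one and carries the same caveat that regex-parsing HTML is fragile):

```python
import re

def only_body(html: str) -> str:
    # Drop everything up to and including <body ...>, and from </body> onwards
    return re.sub(r"(?s).*<body[^>]*>|</body>.*", "", html)

page = "<html><head><style>h1{}</style></head><body class='x'>Hello</body></html>"
print(only_body(page))  # Hello
```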
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18426158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: javascript ampersand (&) in return data will not show as value I have this bit of code:
...
var aData = request.responseXML.getElementsByTagName('data')[0];
var sDescription = aData.getElementsByTagName('description')[0].firstChild.data;
alert(escape(sDescription));
document.getElementById('tempLabourLineDescription').value = sDescription;
...
sDescription is outputting: SUPPORT ASSY-FUEL TANK MOUNTING, R&R (LH) (L-ENG)
I think it is obvious what I want to do here (get sDescription into a field called tempLabourLineDescription) but that just will not work.
However, if I replace or delete the &-character from that string in my PHP script, it all works fine. So I thought: just escape the darn string. But that will just not work.
Alerting the string doesn't work either until I remove the &-character.
What is doing this? Is sDescription not a string when it comes out of the XML file?
How can I solve this?
A: The answer is in this snippet:
var aData = request.responseXML...
You're expecting XML. An & by itself is not legal XML. You need to output your result like this:
SUPPORT ASSY-FUEL TANK MOUNTING, R&amp;R (LH) (L-ENG)
A: It's very difficult to tell without seeing your output script, but the first thing to try is to mask the ampersand: &amp;
The neater way, though, would be to add CDATA to your XML output:
<data><![CDATA[SUPPORT ASSY-FUEL TANK MOUNTING, R&R (LH) (L-ENG)]]></data>
your XML parser on client side should understand it no problem.
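For reference, most languages ship a helper for exactly this kind of escaping; in Python, for example:

```python
from xml.sax.saxutils import escape

raw = "SUPPORT ASSY-FUEL TANK MOUNTING, R&R (LH) (L-ENG)"
print(escape(raw))
# SUPPORT ASSY-FUEL TANK MOUNTING, R&amp;R (LH) (L-ENG)
```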
A: You escape the ampersand by using the HTML equivalent &amp;
A: If you are unable to alter the XML output from the server (it's not your app or some other issue), a "hack" fix would be:
function htmlizeAmps(s){
return s.replace(/\x26/g,"&amp;"); // global replace "&" (hex 26) with "&amp;"
}
document.getElementById('tempLabourLineDescription').value = htmlizeAmps(sDescription);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2141119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: PHP Count 1 too many on Array as it's 0 based? I've had this problem a few times now when for-looping over an array.
In this instance I'm generating all 2 letter combinations of the alphabet.
The code works (and I know there's a much easier way of doing it with 2 for loops, but I'm trying something different).
However I have to do count - 1, as count() returns 26 for the array length, but the item at index 26 obviously doesn't exist as the array is 0-based.
Is there not a version of count() that works on a zero-based basis?
<?php
$alphas = range('a', 'z');
$alphacount = count($alphas);
// Why do I have to do this bit here?
$alphaminus = $alphacount -1;
$a = 0;
for ($i=0;$i<$alphacount;$i++) {
$first = $alphas[$a];
$second = $alphas[$i];
if ($i === $alphaminus && $a < $alphaminus ) {
$i = 0;
$a ++;
}
echo "$first$second<br>";
}
?>
Without $alphaminus = $alphacount -1; I get undefined offset 26?
A: How about:
<?php
$alphas = range('a', 'z');
$alphacount = count($alphas);
$a = 0;
for ($i=0;$i<$alphacount;$i++) {
$first = $alphas[$a];
$second = $alphas[$i];
if ($i >= $alphacount && $a < $alphacount - 1 ) {
$i = 0;
$a ++;
}
echo "$first$second<br>";
}
So you don't have to do the -1, since you don't like it! :)
And how about:
$alphas = range('a', 'z');
for ($i = 0; $i < count($alphas); $i++) {
for ($a = 0; $a < count($alphas); $a++) {
echo "{$alphas[$i]}{$alphas[$a]}\n";
}
}
Or forget about arrays! This is more fun :)
array_walk($alphas, function ($a) use ($alphas) {
array_walk($alphas, function ($b) use ($a) {
print "$a$b\n";
});
});
A: The problem is that you reset $i to 0 in the loop; then on encountering the end of the loop $i is incremented, so the next run in the loop will be with $i = 1 instead of $i = 0.
That is, the next subrange of letters starts with (letter)b instead of (letter)a. (See your output: the next line after az is bb rather than ba.)
Solution: reset $i to -1 in the loop, then at the end it will run with the value 0 again.
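The corrected control flow is easier to see in a language-neutral sketch; this Python version (illustrative, not the PHP fix itself) resets the inner counter and advances the outer one explicitly, producing all 676 pairs:

```python
alphas = [chr(c) for c in range(ord("a"), ord("z") + 1)]

pairs = []
a = i = 0
while a < len(alphas):
    pairs.append(alphas[a] + alphas[i])
    i += 1
    if i == len(alphas):  # inner counter wrapped: reset it, advance the outer one
        i = 0
        a += 1

print(len(pairs))                       # 676
print(pairs[0], pairs[25], pairs[26])   # aa az ba
```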
A: You have 26 characters, but arrays in PHP are indexed from 0. So, indexes are 0, 1, ... 25.
A: count is 1-based and arrays created by range() are 0-based.
It means that:
$alphas[0] == a
$alphas[25] == z
$count($alphas) = 26; // there are 26 elements. First element is $alphas[0]
A: Why does it have to be so complicated? You could simply do
foreach ($alphas as $alpha)
{
foreach($alphas as $alpha2)
{
echo $alpha.$alpha2."<br>";
}
}
Note: It is mostly not a good idea to manipulate the loop counter variable inside the body of that very loop. You set $i to 0 on a certain condition. That could give you unexpected results, hence the reason why you have to navigate around it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23979502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Azure eventhub Kafka org.apache.kafka.common.errors.TimeoutException for some of the records I have an ArrayList containing 80 to 100 records and am trying to stream and send each individual record (POJO, not the entire list) to a Kafka topic (Event Hub). I scheduled a cron job to run every hour to send these records (POJOs) to the Event Hub.
I am able to see messages being sent to the Event Hub, but after 3 to 4 successful runs I get the following exception (several messages are sent and several fail with the exception below):
Expiring 14 record(s) for eventhubname: 30125 ms has passed since batch creation plus linger time
Following is the config for Producer used,
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.ACKS_CONFIG, "1");
props.put(ProducerConfig.RETRIES_CONFIG, "3");
Message Retention period - 7
Partition - 6
using spring Kafka(2.2.3) to send the events
method marked as @Async where kafka send is written
@Async
protected void send() {
kafkatemplate.send(record);
}
Expected - No exception to be thrown from kafka
Actual - org.apache.kafka.common.errors.TimeoutException is been thrown
A: Prakash - we have seen a number of issues where spiky producer patterns see batch timeout.
The problem here is that the producer has two TCP connections that can go idle for > 4 mins - at that point, Azure load balancers close out the idle connections. The Kafka client is unaware that the connections have been closed so it attempts to send a batch on a dead connection, which times out, at which point retry kicks in.
*
*Set connections.max.idle.ms to < 4mins – this allows Kafka client’s network client layer to gracefully handle connection close for the producer’s message-sending TCP connection
*Set metadata.max.age.ms to < 4mins – this is effectively a keep-alive for the producer metadata TCP connection
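Expressed as producer property overrides (the 3-minute values below are illustrative — the only requirement is staying under the ~4-minute Azure load balancer idle timeout):

```python
# Hypothetical override values; anything below the ~4 minute Azure
# load balancer idle timeout works.
producer_overrides = {
    "connections.max.idle.ms": 180000,  # 3 min: keeps the data connection alive
    "metadata.max.age.ms": 180000,      # 3 min: keeps the metadata connection alive
}

for key, value in producer_overrides.items():
    assert value < 4 * 60 * 1000, f"{key} must stay under 4 minutes"
```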
Feel free to reach out to the EH product team on Github, we are fairly good about responding to issues - https://github.com/Azure/azure-event-hubs-for-kafka
A: This exception indicates you are queueing records at a faster rate than they can be sent. Once a record is added to a batch, there is a time limit for sending that batch to ensure it has been sent within a specified duration. This is controlled by the producer configuration parameter request.timeout.ms. If the batch has been queued longer than the timeout limit, the exception will be thrown. Records in that batch will be removed from the send queue.
Please check the below for similar issue, this might help better.
Kafka producer TimeoutException: Expiring 1 record(s)
you can also check this link
when-does-the-apache-kafka-client-throw-a-batch-expired-exception/34794261#34794261 for more details about the reason behind the batch expired exception.
Also implement proper retry policy.
Note this does not account for any network issues on the sender side. With network issues you will not be able to send to either hub.
Hope it helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58010247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is it necessary to call end() on a received http request in Node? I have a Node server which can reject requests based upon security headers. If I reject a request, is it necessary to use blank data and end handlers to read the request body or can I just send the response, a 401, and leave the request unread?
It seems that if I leave the response unread then I get occasional "The existing connection has been forcibly closed by the remote host" errors at the client. Adding code to wait for the request body to be read does seem to fix the issue but then again, adding delays at various points in the server code also seems to have a beneficial effect. It can be hard to tell with an intermittent fault.
The coffeescript code that seems to fix the issue is:
@res.writeHead status, message, @headers
@req.on 'data', (d) ->
# wait for request to be completely read before ending response stream
@req.on 'end', => @res.end()
The empty data handler is required to get the end event and the end event is possibly required to avoid the error at the client. Given that the request body might be megabytes is this the best way to send a 401 response or is there a better way that doesn't require reading the whole request.
A: Further investigation has revealed a large can of worms. It would seem that there is no properly implemented method to kill the request stream without potentially causing an error at the client.
This question covers the difficulty of terminating a request early without causing an error:
How to cancel HTTP upload from data events?
I have decided to abort very long requests and to allow shorter ones to complete. In my application this should only normally abort requests that are probably a DOS attack (legitimate requests are usually short)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25845733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to erase specific row from QListWidget I have a small UI with 1 QListWidget. In this UI there is a predefined row in the QListWidget called "Add New".
I drag and drop several files from a folder into the QListWidget. So it contains all my files plus the "Add New" record.
The "Add New" row is used as a double-click target to create a new file and store it both in the QListWidget and in the local folder of the computer.
The problem is that when I erase a specific record from the QListWidget, several files are erased together and I don't know why.
See below the steps:
*
*I launch the application:
*I drag and drop some files and I right click on the last one to erase it:
*As soon as I erase that, several other are also erased as shown below:
Below is the logic I used:
mainwindow.cpp
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
, ui(new Ui::MainWindow)
{
ui->setupUi(this);
QAction *remove;
remove = new QAction(QIcon(":/icons/remove_item.png"), "Remove", this);
QObject::connect(remove, SIGNAL(triggered()), this, SLOT(on_eraseBtn_clicked()));
ui->listWidget->setContextMenuPolicy(Qt::ActionsContextMenu);
ui->listWidget->addAction(remove);
setAcceptDrops(true);
ui->listWidget->addItem("Add New");
ui->listWidget->item(0)->setSelected(true);
}
void MainWindow::on_eraseBtn_clicked()
{
for(int i = 0; i < ui->listWidget->count(); ++i)
{
QString str = ui->listWidget->item(i)->text();
if (str != "Add New")
delete ui->listWidget->item(i);
qDebug() << ui->listWidget->item(i) << str;
}
}
UPDATE
As proposed by @absolute.madeness, a while loop could be used too. Not exactly the same behavior, but close. With the following loop, all the records (rows) of the QListWidget are erased, with the exception of the "Add New" row:
void MainWindow::on_eraseBtn_clicked()
{
for(int i = 0; i < ui->listWidget->count(); ++i)
{
while(i < ui->listWidget->count() && ui->listWidget->item(i)->text() != "Add New")
delete ui->listWidget->item(i);
}
}
What I have done so far is:
*
*on the slot on_eraseBtn_clicked() I just added the line delete ui->listWidget->currentItem(); to see if at least the record was successfully erased. Which it was, so I was sure that the slot was properly triggered.
*After that I did some research and the best approach was to loop through the whole QListWidget and, as soon as the record with the QString "Add New" is found, keep it, pass to the next record and erase that one. This methodology is also described in this post. But for some reason more than one record is erased.
*I tried to do additional research and found this post
*I tried to delete by context as explained in this post.
Any other pointers you can suggest?
A: I originally wrote a straightforward for loop, but ran into the problem of deleting item(i) while iterating. The item previously accessible as item(i+1) shifts down by 1, creating a strange effect of erasing several rows unexpectedly.
As proposed by @absolute.madeness, below is a solution that avoids that. I hope others will benefit from this solution and won't get stuck wondering why a simple loop can create such unexpected issues:
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
, ui(new Ui::MainWindow)
{
ui->setupUi(this);
remove = new QAction(QIcon(":/icons/remove_item.png"), "Remove", this);
QObject::connect(remove, SIGNAL(triggered()), this, SLOT(on_eraseBtn_clicked()));
// other operations in the constructor....
}
void MainWindow::on_eraseBtn_clicked()
{
for(auto item: ui->listWidget->selectedItems())
if(item->text() != "Add New") delete item;
}
This way the user can selectively erase single rows in the QListWidget, leaving only a specific row.
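The underlying pitfall — deleting by index while iterating forward — is language-agnostic; a small Python sketch shows the same every-other-row effect and the fix of iterating over a snapshot (analogous to selectedItems() returning a copy of the selection):

```python
# Buggy: delete from the live list while indexing into it
buggy = ["a.txt", "b.txt", "c.txt", "Add New"]
for i, item in enumerate(buggy):
    if item != "Add New":
        del buggy[i]          # later indices shift down by one
print(buggy)                  # ['b.txt', 'Add New'] -- every other row survives

# Fixed: iterate over a snapshot, mutate the original
fixed = ["a.txt", "b.txt", "c.txt", "Add New"]
for item in list(fixed):      # list(...) takes a copy of the items
    if item != "Add New":
        fixed.remove(item)
print(fixed)                  # ['Add New']
```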
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73639771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: JavaScript small game example Learning JavaScript and stuck on a small game example.
Why does println(beaver.holes) have a "NaN" value? It should be a normal number without a floating point.
var Beaver = function(x, y) {
this.x = x;
this.y = y;
this.img = getImage("creatures/Hopper-Happy");
this.sticks = 0;
};
Beaver.prototype.draw = function() {
fill(255, 0, 0);
this.y = constrain(this.y, 0, height-50);
image(this.img, this.x, this.y, 40, 40);
};
Beaver.prototype.hop = function() {
this.img = getImage("creatures/Hopper-Jumping");
this.y -= 5;
};
Beaver.prototype.fall = function() {
this.img = getImage("creatures/Hopper-Happy");
this.y += 5;
};
Beaver.prototype.checkForStickGrab = function(stick) {
if ((stick.x >= this.x && stick.x <= (this.x + 40)) &&
(stick.y >= this.y && stick.y <= (this.y + 40))) {
stick.y = -11;
this.sticks++;
}
};
Beaver.prototype.checkForHoleDrop = function(hole) {
if ((hole.x >= this.x && hole.x<=(this.x +40)) &&
(hole.y >=this.y && hole.y <= (this.y + 40))) {
hole.y = -11;
this.holes++;
}
};
var Stick = function(x, y) {
this.x = x;
this.y = y;
};
var Hole = function(x,y) {
this.x = x;
this.y = y;
};
Hole.prototype.draw = function() {
rectMode(CENTER);
noStroke();
fill(120, 144, 204);
ellipse(this.x,this.y, 51,19);
fill(0, 40, 71);
ellipse(this.x,this.y, 40,15);
};
Stick.prototype.draw = function() {
fill(89, 71, 0);
rectMode(CENTER);
rect(this.x, this.y, 5, 40);
};
var beaver = new Beaver(37, 300);
var sticks = [];
for (var i = 0; i < 40; i++) {
sticks.push(new Stick(i * 44 + 300, random(44, 260)));
}
// holes var
var holes = [];
for (var i = 0;i< 10; i ++) {
holes.push(new Hole(random(33,400)*i + 306, 378));
}
var grassXs = [];
for (var i = 0; i < 25; i++) {
grassXs.push(i*18);
}
draw = function() {
// static
background(227, 254, 255);
fill(130, 79, 43);
rectMode(CORNER);
rect(0, height*0.90, width, height*0.10);
for (var i = 0; i < grassXs.length; i++) {
image(getImage("cute/GrassBlock"), grassXs[i], height*0.85, 20, 20);
grassXs[i] -= 1;
if (grassXs[i] <= -32) {
grassXs[i] = width;
}
}
// making holes
for (var i = 0; i < holes.length; i++) {
holes[i].draw();
beaver.checkForHoleDrop(holes[i]);
holes[i].x -=2;
}
for (var i = 0; i < sticks.length; i++) {
sticks[i].draw();
beaver.checkForStickGrab(sticks[i]);
sticks[i].x -= 2;// value turn speed of sticks
}
So here, if we print beaver.holes on screen, it gives us a NaN value.
textSize(18);
text("Score: " + beaver.sticks, 20, 30);
textSize(18);
text("Score: " + beaver.holes, 105, 30);
if (beaver.sticks/sticks.length >= 0.95) {
textSize(36);
text("YOU WIN!!!!", 100, 200);
}
if (beaver.hole >41) {
textSize(36);
text("YOU LOSE!!!",100,200);}
//println(beaver.holes/holes.length);
if (keyIsPressed && keyCode === 0) {
beaver.hop();
} else {
beaver.fall();
}
beaver.draw();
};
A: Looks like holes was never initialized in the Beaver constructor.
Try modifying the constructor function to look like so.
var Beaver = function(x, y) {
this.x = x;
this.y = y;
this.img = getImage("creatures/Hopper-Happy");
this.sticks = 0;
this.holes = 0;
};
Performing a ++ operation on an undefined variable returns a NaN result.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40953945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Joomla 3 - template only works with category, will not show single article If I set the menu link to be a category blog, it all works fine... but even if I remove all coding from the template 'index.php' file (besides etc) and add one word, it doesn't work. Just shows a blank page.
e.g.
http://dev.addrenaline.com/acces/index.php?option=com_content&view=article&id=2&Itemid=101
*** I just realized there was a dedicated Joomla stack site - have posted this over there now...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30741601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to share variables across scripts in two Python files I want to share two variables between two Python files; they are working over serial communication.
Python serial communication to Python GUI (tkinter)
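One common pattern — sketched minimally here, with made-up names and no real serial I/O — is to keep the shared values in one mutable object that both sides reference (in a real project the dict would live in its own module that both scripts import):

```python
# Shared mutable state; both the serial side and the GUI side hold a
# reference to the same dict, so writes from one are visible to the other.
state = {"reading": None, "status": "idle"}

def serial_worker(state):
    # Stand-in for code reading from the serial port
    state["reading"] = 42
    state["status"] = "ok"

def gui_refresh(state):
    # Stand-in for the tkinter side polling the shared values
    return f"{state['status']}: {state['reading']}"

serial_worker(state)
print(gui_refresh(state))  # ok: 42
```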
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72933722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Mockito test cases for foreach loop I have an ArrayList and I am converting it into another ArrayList of a different type using a forEach loop. I want to write test cases using Mockito. How can I do it?
List<Product1> list1 = new ArrayList<Product1>();
List<Product2> list2 = new ArrayList<Product2>();
list1.forEach(product1 -> list2.add(new Product2(product1.getName())));
class Product1{
}
class Product2{
String name;
public Product2(String name){
this.name=name;
}
}
A: You dont' need mocking here. You can write a simple test such as
@Test
public void testListConversionForEmpty() {
assertThat(theConvertingMethod(emptyListOfProduct1), is(emptyListOfProduct2));
}
And then you go in, and add more test methods that act on lists with real content.
In other words: you only use mocking frameworks when creating "real" objects is too complicated.
In your case, you should simply instantiate a few Product1 and Product2 objects, put them into lists, and make sure that your conversion code delivers the expected results. Meaning: you can fully control the input without mocking anything.
( for the record: is() up there is a hamcrest matcher )
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54920566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Kronecker tensor product in javascript I need to calculate the Kronecker product of two matrices (like the kron() function in MATLAB). I can't seem to find any code for it already; maybe someone has some lying on their computer that is ready to use? I've already searched GitHub, and none of the results seem to work properly.
https://en.wikipedia.org/wiki/Kronecker_product
So for example:
A = [1, 2];
B = [3, 4];
C = kroneckerProduct(A, B)
C will then give [3, 4, 6, 8]
A: I don't know the topic, I had to read up on it. So I'm not entirely sure the code works right (for every input), though I checked it against some examples I found on the net.
function mapAB(a,b,fn){
var k=0, out = Array(a.length*b.length);
for(var i=0; i<a.length; ++i)
for(var j=0; j<b.length; ++j)
out[k++] = fn(a[i], b[j]);
return out;
}
function kroneckerProduct(a,b){
return Array.isArray(a)?
Array.isArray(b)?
mapAB(a,b, kroneckerProduct):
a.map(v => kroneckerProduct(v, b)):
Array.isArray(b)?
b.map(v => kroneckerProduct(a, v)):
a*b;
}
function compute() {
var a = document.getElementById("a").value;
var b = document.getElementById("b").value;
var text;
try {
text = JSON.stringify(kroneckerProduct(
JSON.parse(a.trim()),
JSON.parse(b.trim())
), null, 2);
} catch (err) {
text = err;
}
document.getElementById("out").innerHTML = text;
}
<input id=a value=[1,2]><br>
<input id=b value=[3,4]><br>
<input type=button value=compute onclick=compute()>
<div id=out></div>
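For the 1-D vectors in the question, the same product can be sketched in a few lines of Python, which also makes the definition easy to check against MATLAB's kron():

```python
def kron_1d(a, b):
    # Kronecker product of two row vectors: each a[i] scales the whole of b
    return [x * y for x in a for y in b]

print(kron_1d([1, 2], [3, 4]))  # [3, 4, 6, 8]
```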
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43823896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: AccessViolationException outside Visual Studio? I'm developing a small C# winforms application that consumes an unmanaged C++ library.
I have no access to the code of this library.
If I run my application from Visual Studio and make my calls to the library, everything works fine. Once I run with CTRL + F5 I get an AccessViolationException.
I did some testing: I created another WinForms application in .NET 2.0 (the others were in .NET 4.0) and there I don't receive the AccessViolationException. So I thought I'd create a .NET 2.0 class library in my .NET 4.0 solution and consume that class lib. This didn't help; I still got the AccessViolationException.
Tried setting allow unsafe code, optimize code on and off but that didn't help.
Why am I getting the AccessViolationException once I'm out of debug mode?
Thanks
A: I just stumbled upon the same issue. To reproduce the problem in the debugger, I had to go to:
Tools\Options
Debugging\General
and disable: Suppress JIT optimization on module load (managed only).
Of course the problem would only appear for a optimized code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2679248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to Access Activation Functions from Saved Model .h5 without importing tensorflow? Is the activation function for each layer stored in the .h5 file produced by model.save()? Or is it already "baked in" to the weights?
I am writing an AWS Lambda function to generate time-series predictions from multiple regression models every five minutes. Unfortunately, TensorFlow is too large of a library to be loaded into an AWS Lambda function, so I am writing my own Python code to load the saved .h5 model file and generate predictions based on the weights and input data. Here's where I'm at so far:
def generate_predictions(model_path, df):
model_info = h5py.File(model_path, 'r')
model_weights = model_info['model_weights']
# Initialize predictions matrix with preprocessed inputs
predictions = preprocessing.scale(df[inputs])
layer_list = list(model_weights.keys())
for layer in layer_list:
weights = model_weights[layer][layer]['kernel:0'][:]
bias = model_weights[layer][layer]['bias:0'][:]
predictions = predictions.dot(weights)
predictions += bias
# How to retrieve activation function for layer?
# predictions = activation_function(predictions)
return predictions
I understand I'll probably want some kind of case/switch statement to handle the various activation functions.
A: The model configuration is accessible through an attribute called "model_config" on the top group that seems to contain the full model configuration JSON that is produced by model.to_json().
import json
import h5py
model_info = h5py.File('model.h5', 'r')
model_config_json = json.loads(model_info.attrs['model_config'])
A: If you save the full model with model.save, you can access each layer and its activation function.
from tensorflow.keras.models import load_model
model = load_model('model.h5')
for l in model.layers:
try:
print(l.activation)
except: # some layers don't have any activation
pass
<function tanh at 0x7fa513b4a8c8>
<function softmax at 0x7fa513b4a510>
Here, for example, softmax is used in the last layer.
If you don't want to import tensorflow, you can also read from h5py.
import h5py
import json
model_info = h5py.File('model.h5', 'r')
model_config = json.loads(model_info.attrs.get('model_config').decode('utf-8'))
for k in model_config['config']['layers']:
if 'activation' in k['config']:
print(f"{k['class_name']}: {k['config']['activation']}")
LSTM: tanh
Dense: softmax
Here, last layer is a dense layer which has softmax activation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61580758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can't get Chrome Extension to store settings via popup.html & popup.js I am trying to setup a Chrome Extension, so when the user clicks on the extension icon, it presents two checkboxes that the user can check or not (via popup.html). This works. Then, I want the user to click on a Save button, and have popup.js fire, which will store their choices in chrome storage. I cannot get this to work at all. I can't get anything to show up in console.log either.
Ultimately, I want my code to check to see if anything has been set in storage, retrieve the values and pre-set them, and then have the choices show to the user. If they change something, I want to establish a listener that notes the change, and saves the current settings to storage. But I need to walk before I run.
I should point out that I am a neophyte web developer and have done my best searching the web, looking at code examples, etc. for weeks to no avail. Any help would be greatly appreciated !
Here are my key files:
manifest.json
{
"manifest_version": 2,
"name": "WSH",
"description": "WSH Description",
"version": "2.7.3",
"web_accessible_resources": [
"icon-32.png"
],
"content_scripts": [{
"matches": [
"*://*.boardgamegeek.com/*"
],
"js": [
"content.js"
],
"css": [
"content.css"
],
"run_at": "document_idle"
}],
"permissions" : [
"https://raw.githubusercontent.com/*",
"storage"
],
"browser_action": {
"default_icon": {
"32": "icon-32.png",
"48": "icon-48.png",
"128": "icon-128.png"
},
"default_title": "Click for GCV Options",
"default_popup": "popup.html"
}
}
popup.html
<!doctype html>
<html>
<style>
/* The container */
.container {
width: 125px;
margin: auto;
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
/* Hide the browser's default checkbox */
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
height: 0;
width: 0;
}
/* Create a custom checkbox */
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
}
/* On mouse-over, add a grey background color */
.container:hover input ~ .checkmark {
background-color: #ccc;
}
/* When the checkbox is checked, add a blue background */
.container input:checked ~ .checkmark {
background-color: #2196F3;
}
/* Create the checkmark/indicator (hidden when not checked) */
.checkmark:after {
content: "";
position: absolute;
display: none;
}
/* Show the checkmark when checked */
.container input:checked ~ .checkmark:after {
display: block;
}
/* Style the checkmark/indicator */
.container .checkmark:after {
left: 9px;
top: 5px;
width: 5px;
height: 10px;
border: solid white;
border-width: 0 3px 3px 0;
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
}
</style>
<body>
<h1>GCV Objects</h1>
<label class="container">Cards
<input type="checkbox" id="GCVCards" name="GCVCards" value="1" checked>
<span class="checkmark"></span>
</label>
<label class="container">Tokens
<input type="checkbox" id="GCVTokens" name="GCVTokens" value="1" checked>
<span class="checkmark"></span>
</label>
<input id="Save" type="submit" name="Save" value="Save">
<script type="text/javascript" src="popup.js"></script>
</body>
</html>
popup.js
function save_options(){
var GCVCards = document.getElementById('GCVCards').value;
chrome.storage.sync.set({'GCVCards': GCVCards}, function() {
console.log('GCVCards Value is set to: ' + GCVCards);
});
}
document.getElementById('Save').onclick = save_options;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61239619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to pop up a div inside a <tr>? Here is my code:
<tr style="display: none"><td colspan="5">
<div id="sub-155642" style="display:none;">
<table width="100%">
<tr>
<td class="inner-table"></td>
<td class="inner-table">Document No</td>
<td class="inner-table">Document Type</td>
<td class="inner-table" id="amount-row">Total Amount</td>
</tr>
</table>
</div >
</td>
</tr>
I want to pop up the content of <div id="sub-155642"></div> from JavaScript or jQuery.
A: You can clone table to some popup and show it:
HTML
<table>
<tr style="display: none">
<td colspan="5">
<div id="sub-155642" style="display:none;">
<table width="100%">
<tr>
<td class="inner-table"></td>
<td class="inner-table">Document No</td>
<td class="inner-table">Document Type</td>
<td class="inner-table" id="amount-row">Total Amount</td>
</tr>
</table>
</div>
</td>
</tr>
</table>
<button onclick="popup()">Pop-up</button>
JavaScript
var popupEl;
function popup() {
var divEl,
tableEl,
xEl;
if(!popupEl) {
// Find table
tableEl = document.querySelector('#sub-155642 > table');
divEl = tableEl.parentNode;
// Create popup and clone table to it
popupEl = document.createElement('div');
popupEl.innerHTML = divEl.innerHTML;
popupEl.setAttribute('style', 'position:fixed;top:50%;left:50%;width:300px;height:100px;margin-left:-150px;margin-top:-50px;border:1px solid gray');
// Show popup
document.querySelector('body').appendChild(popupEl);
} else {
document.querySelector('body').removeChild(popupEl);
popupEl = null;
}
}
JSBin: http://jsbin.com/OyeXiNu/1/edit
otherwise you can make wrapping tables visible (but be sure IDs are unique):
JavaScript
var div = document.getElementById('sub-155642');
div.style.display = 'block';
div.parentNode.parentNode.style.display = 'table-row';
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21571012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
}
|
Q: Unpacking mono channel wave data and storing it in an array I'm trying to unpack the data from a single channel WAVE file using struct.unpack. I want to store the data in an array and be able to manipulate it (say by adding noise of given variance). I have extracted the header data and stored it in a dictionary as follows:
stHeaderFields['ChunkSize'] = struct.unpack('<L', bufHeader[4:8])[0]
stHeaderFields['Format'] = bufHeader[8:12]
stHeaderFields['Subchunk1Size'] = struct.unpack('<L', bufHeader[16:20])[0]
stHeaderFields['AudioFormat'] = struct.unpack('<H', bufHeader[20:22])[0]
stHeaderFields['NumChannels'] = struct.unpack('<H', bufHeader[22:24])[0]
stHeaderFields['SampleRate'] = struct.unpack('<L', bufHeader[24:28])[0]
stHeaderFields['ByteRate'] = struct.unpack('<L', bufHeader[28:32])[0]
stHeaderFields['BlockAlign'] = struct.unpack('<H', bufHeader[32:34])[0]
stHeaderFields['BitsPerSample'] = struct.unpack('<H', bufHeader[34:36])[0]
When I pass in a file I get the following output:
NumChannels: 1
ChunkSize: 78476
BloackAlign: 0
Filename: foo.wav
ByteRate: 32000
BlockAlign: 2
AudioFormat: 1
SampleRate: 16000
BitsPerSample: 16
Format: WAVE
Subchunk1Size: 16
I then try to get the data by doing struct.unpack('<h', self.bufHeader[36:])[0] but doing this returns a simple integer value 24932. I'm not allowed to use the wave library or anything else to do with waves specifically, as I will have to adapt this to other sorts of signals. How can I store and manipulate the actual wave data?
EDIT:
while chunk_reader < stHeaderFields['ChunkSize']:
data.append(struct.unpack('<H', bufHeader[chunk_reader:chunk_reader+stHeaderFields['BlockAlign']]))
A: Okay, I'll try to write a complete walk-through.
First, it is a common mistake to treat a WAV (or, more generally, RIFF) file as a linear structure. It is actually a tree, with each element having a 4-byte tag, a 4-byte length of the data and/or child elements, and some kind of data inside.
It is just common for WAV files to have only two child elements ('fmt ' and 'data'), but it also may have metadata ('LIST') with some child elements ('INAM', 'IART', 'ICMT' etc.) or some other elements. Also there's no actual order requirement for blocks, so it is incorrect to think that 'data' follows 'fmt ', because metadata may stick in between.
So let's look at the RIFF file:
'RIFF'
|-- file type ('WAVE')
|-- 'fmt '
| |-- AudioFormat
| |-- NumChannels
| |-- ...
| L_ BitsPerSample
|-- 'LIST' (optional)
| |-- ... (other tags)
| L_ ... (other tags)
L_ 'data'
|-- sample 1 for channel 1
|-- ...
|-- sample 1 for channel N
|-- sample 2 for channel 1
|-- ...
|-- sample 2 for channel N
L_ ...
So, how should you read a WAV file? Well, first you need to read 4 bytes from the beginning of the file and make sure it is RIFF or RIFX tag, otherwise it is not a valid RIFF file. The difference between RIFF and RIFX is the former uses little-endian encoding (and is supported everywhere) while the latter uses big-endian (and virtually nobody supports it). For simplicity let's assume we're dealing only with little-endian RIFF files.
Next you read the root element length (in file endianness) and following file type. If file type is not WAVE, it is not a WAV file, so you might abandon further processing. After reading the root element, you start to read all child elements and process those you're interested in.
Reading fmt header is pretty straightforward, and you have actually done it in your code.
Data samples are usually represented as 1, 2, 3 or 4 bytes (again, in the file endianness). The most common format is a so-called s16_le (you might have seen such naming in some audio processing utilities like ffmpeg), which means samples are presented as signed 16-bit integers in little endian. Other possible formats are u8 (8-bit samples are unsigned numbers!), s24_le, s32_le. Data samples are interleaved, so it is easy to seek to arbitrary position in a stream even for multi-channel audio. Note: this is valid only for uncompressed WAV files, as indicated by AudioFormat == 1. For other formats data samples may have another layout.
So let's take a look at a simple WAV reader:
stHeaderFields = dict()
rawData = None
with open("file.wav", "rb") as f:
riffTag = f.read(4)
if riffTag != 'RIFF':
print 'not a valid RIFF file'
exit(1)
riffLength = struct.unpack('<L', f.read(4))[0]
riffType = f.read(4)
if riffType != 'WAVE':
print 'not a WAV file'
exit(1)
# now read children
while f.tell() < 8 + riffLength:
tag = f.read(4)
length = struct.unpack('<L', f.read(4))[0]
if tag == 'fmt ': # format element
fmtData = f.read(length)
fmt, numChannels, sampleRate, byteRate, blockAlign, bitsPerSample = struct.unpack('<HHLLHH', fmtData)
stHeaderFields['AudioFormat'] = fmt
stHeaderFields['NumChannels'] = numChannels
stHeaderFields['SampleRate'] = sampleRate
stHeaderFields['ByteRate'] = byteRate
stHeaderFields['BlockAlign'] = blockAlign
stHeaderFields['BitsPerSample'] = bitsPerSample
elif tag == 'data': # data element
rawData = f.read(length)
else: # some other element, just skip it
f.seek(length, 1)
Now we know file format info and its sample data, so we can parse it. As it was said, sample may have any size, but for now let's assume we're dealing only with 16-bit samples:
blockAlign = stHeaderFields['BlockAlign']
numChannels = stHeaderFields['NumChannels']
# some sanity checks
assert(stHeaderFields['BitsPerSample'] == 16)
assert(numChannels * stHeaderFields['BitsPerSample'] == blockAlign * 8)
for offset in range(0, len(rawData), blockAlign):
samples = struct.unpack('<' + 'h' * numChannels, rawData[offset:offset+blockAlign])
# now samples contains a tuple with sample values for each channel
# (in case of mono audio, you'll have a tuple with just one element).
# you may store it in the array for future processing,
# change and immediately write to another stream, whatever.
So now you have all the samples in rawData, and you may access and modify it as you like. It might be handy to use Python's array() to effectively access and modify data (but it won't do in case of 24-bit audio, you'll need to write your own serialization and deserialization).
After you're done with data processing (which may involve upscaling or downscaling the number of bits per sample, changing the number of channels, sound level manipulation etc.), you just write a new RIFF header with the correct data length (usually computed with the simplified formula 36 + len(rawData)), an altered fmt header and the data stream.
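As a rough sketch of that last step (Python 3 syntax, not from the original answer), here is a minimal writer that produces just the 'fmt ' and 'data' chunks, using the 36 + len(rawData) formula above; it assumes uncompressed PCM (AudioFormat == 1) with the samples already serialized into raw_data:

```python
# Sketch: write processed samples back out as a minimal RIFF/WAVE file
# (only 'fmt ' + 'data' chunks; chunk layout per the tree shown earlier).
import struct

def write_wav(path, raw_data, num_channels, sample_rate, bits_per_sample=16):
    byte_rate = sample_rate * num_channels * bits_per_sample // 8
    block_align = num_channels * bits_per_sample // 8
    with open(path, 'wb') as f:
        f.write(b'RIFF')
        f.write(struct.pack('<L', 36 + len(raw_data)))  # root chunk length
        f.write(b'WAVE')
        # 'fmt ' chunk: 16 bytes of format data, AudioFormat 1 = PCM
        f.write(b'fmt ')
        f.write(struct.pack('<LHHLLHH', 16, 1, num_channels,
                            sample_rate, byte_rate, block_align,
                            bits_per_sample))
        # 'data' chunk: length followed by the interleaved samples
        f.write(b'data')
        f.write(struct.pack('<L', len(raw_data)))
        f.write(raw_data)
```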
Hope this helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27370180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to specify soft constraints using prolog? (puzzle with soft constraints) I've been given as an assignment to solve a puzzle using prolog.
Being a beginner in prolog, who just read a couple of online tutorials on prolog the last few hours, I have no idea how to even start this..
The puzzle is much larger scale and has more constraints, but I will simplify it here because I just want to get the idea so I can scale it later.
The puzzle goes like this:
You need to schedule shifts for 2 employees, A and B, for the week.(including weekends)
Rules:
*
*In a single day, either A or B (but not both) must be working
*A must work on Tuesday
*no one can work continuously for more than 2 days
Soft constraints:
*
*A prefers to work on Thursday (add 2 points if A works on Thursday )
*B dislike to work on Wednesday (add 5 points if B doesn't work on Wednesday)
The chart:
|--------|--------|--------|---------|-----------|----------|--------|----------|
| | Sunday | Monday | Tuesday | Wednesday | Thursday | Friday | Saturday |
|--------|--------|--------|---------|-----------|----------|--------|----------|
| A | | | | | | | |
|--------|--------|--------|---------|-----------|----------|--------|----------|
| B | | | | | | | |
|--------|--------|--------|---------|-----------|----------|--------|----------|
How to schedule this so that can achieve the highest points possible?
One possible solution (there are several) will be:
|--------|--------|--------|---------|-----------|----------|--------|----------|
| | Sunday | Monday | Tuesday | Wednesday | Thursday | Friday | Saturday |
|--------|--------|--------|---------|-----------|----------|--------|----------|
| A | | | X | X | | | X |
|--------|--------|--------|---------|-----------|----------|--------|----------|
| B | X | X | | | X | X | |
|--------|--------|--------|---------|-----------|----------|--------|----------|
The maximum we can get is 5 points. Since there's no solution that can get full marks of 7 points while satisfying all rules.
Question that I have:
*
*How do we set rules with soft constraints? Since it is impossible to respect all the rules under soft constraints above, have to decide which one to take base on highest points.
*How to represent a 2d array such as above in prolog? I have read about List, but not sure how to represent a 2d List.
Thanks in advance!
A: Plain Prolog is probably not the best choice here. Such problems are most easily modelled using 0/1 Integer Programming, and solved with an IP or Finite-Domain solver, which several enhanced Prologs provide. Here is a solution in ECLiPSe (disclaimer: I'm a co-developer). The soft constraints are handled via an objective function.
:- lib(ic).
:- lib(ic_global_gac).
:- lib(branch_and_bound).
schedule(Points, As, Bs) :-
As = [_ASu,_AMo, ATu,_AWe, ATh,_AFr,_ASa],
Bs = [_BSu,_BMo,_BTu, BWe,_BTh,_BFr,_BSa],
As :: 0..1,
Bs :: 0..1,
( foreach(A,As), foreach(B,Bs) do A+B #= 1 ), % Rule 1
ATu = 1, % Rule 2
sequence(0, 2, 3, As), % 0..2 out of 3, Rule 3
sequence(0, 2, 3, Bs),
Points #= 2*ATh + 5*(1-BWe), % Soft
Cost #= -Points,
append(As, Bs, ABs),
minimize(labeling(ABs), Cost).
?- schedule(P, As, Bs).
P = 5
As = [0, 0, 1, 1, 0, 0, 1]
Bs = [1, 1, 0, 0, 1, 1, 0]
Yes (0.03s cpu)
A: In Prolog, for simple problems, we can try to apply a simple 'pattern', plain old generate and test, interesting for its simplicity. We 'just' provide appropriate domain generator and test.
generate(D0, D) :-
length(D0, DL),
length(D, DL),
maplist(dom_element, D).
dom_element(a).
dom_element(b).
test(D, Score, D) :-
D = [_Sunday, _Monday, Tuesday, Wednesday, Thursday, _Friday, _Saturday],
% 1. In a single day, either A or B (but not both) must be working
% Here True by construction
% 2. A must work on Tuesday
Tuesday = a,
% 3. no one can work continuously for more than 2 days
no_more_than_2(D),
% soft 1. A prefers to work on Thursday (add 2 points if A works on Thursday )
( Thursday = a -> S1 is 2 ; S1 is 0 ),
% soft 2. B dislike to work on Wednesday (add 5 points if B doesn't work on Wednesday)
( Wednesday \= b -> S2 is 5 ; S2 is 0 ),
Score is S1 + S2.
% edit: jshimpfs suggestion, beside being much better, highlights a bug
no_more_than_2(D) :-
nth0(I, D, E),
length(D, L),
J is (I+1) mod L,
nth0(J, D, E),
K is (J+1) mod L,
nth0(K, D, E),
!, fail.
no_more_than_2(_).
solve(D0, Best) :-
D0 = [sun,mon,tue,wed,thu,fri,sat],
setof(Score/Sol, D^(generate(D0, D), test(D, Score, Sol)), All),
last(All, Best).
test:
?- solve(_,L).
L = 5/[b, b, a, a, b, b, a].
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25470468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Selenium | Unable to create new service: ChromeDriverService | Driver info: driver.version: unknown (SessionNotCreated)' | C# Try to start selenium with chrome by remoteDriver with a hub.
I create a config.json for capabilities, configuration etc...
I manage to run tests with IE and Firefox but can't with Chrome.
When I start my test with Chrome, I get this error:
System.InvalidOperationException : 'Unable to create new service: ChromeDriverService
... Driver info: driver.version: unknown (SessionNotCreated)'
I run my test with Chrome v84 and I use ChromeDriver 84. I define my version like this:
"browserName":"chrome",
"browserVersion" : 84.0,
"maxInstances":10,
"seleniumProtocol":"WebDriver",
"webdriver.chrome.driver": "D:/Tools/Selenium/WebDrivers/chromedriver84.exe"
I try "version" : 84.0 and "version" : "84.0" and "version" : 84 and "version" : "latest" etc...
But I don't understand why driver.version is unknown.
Thanks in advance !
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63230979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Node.JS - How to identify which async HTTP GET response is returning data in a callback? I'm trying to follow the NodeSchool tutorials on async principles. There is a lesson with the requirement to:
*
*make asynchronous GET requests to 3 URLs
*collect the data returned in HTTP responses using callbacks
*print the data collected from each response, preserving the order of the correlating input URLs
One way I thought of doing this was to make 3 separate callbacks, each writing to its own dedicated buffer. But this seemed like a poor solution to me, because it requires code duplication and is not scalable. However, I am stuck on figuring out another way to get a callback to remember "where it came from" for lack of a better term. Or what order it was called in, etc.
I feel like I'm close, but I've been stuck at 'close' for long enough to look for help. Here's the closest I've felt so far:
var http = require('http');
var replies = 0;
var results = ['','',''];
var running = '';
for(var i = 2; i < 5; i++) {
http.get(process.argv[i], function (response) {
response.setEncoding('utf8');
response.on('data', handleGet);
response.on('error', handleError);
response.on('end', handleEnd);
});
}
function handleGet(data) {
running += data;
}
function handleError(error) {
console.log(error);
}
function handleEnd() {
results[replies] = running;
running = '';
replies++;
if(replies === 3) {
for(var i = 0; i < 3; i++) {
console.log(results[i]);
}
}
}
How can I get the callback to recognize which GET its response is a response to?
Note: the assignment specifically prohibits the use of 3rd party libs such as async or after.
Edit: another thing I tried (that also obviously failed), was inlining the callback definition for handleGet like so, to try and preserve the 'index' of the callback:
response.on('data', function(data) {
results[i-2] += data;
});
On execution, this always indexes to results[3] because the async callbacks don't happen until after the for loop is already long done. (Actually I'm not sure why the value of i is preserved at 5 at all, since the completed for loop would go out of scope, as I understand it... I would have thought it'd be undefined in retrospect.)
A: I will suggest using your second solution but passing the value of i and creating another function which encloses that variable. By this I mean the following:
(function(){
var index = i;
http.get(process.argv[i], function (response) {
response.setEncoding('utf8');
response.on('data', handleGetFrom(index));
response.on('error', handleError);
response.on('end', handleEnd);
})
}());
and the handleGetFrom:
var handleGetFrom = function(i) {
return function(data) {
results[i-2] += data;
}
}
Edited my original answer.
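To make the closure idea concrete, here is a standalone sketch (no HTTP involved; the names are illustrative) of why each function returned by the factory remembers its own index even after the loop has finished:

```javascript
// Factory: each call creates a new scope capturing its own copy of i.
function handleGetFrom(i) {
  return function (data) {
    return 'response ' + i + ': ' + data;
  };
}

var handlers = [];
for (var i = 0; i < 3; i++) {
  handlers.push(handleGetFrom(i)); // i is copied into the factory's scope here
}

// The loop is done (i === 3), but each handler kept its own index:
console.log(handlers[0]('a')); // response 0: a
console.log(handlers[2]('c')); // response 2: c
```

(In modern JavaScript, declaring the loop variable with let gives each iteration its own binding and achieves the same effect without the factory.)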
A: You can use the 'this' object:
var http = require('http');
http.get('http://www.google.com', function (response) {
response.setEncoding('utf8');
response.on('data', function(chunk) {
console.log(this); // Holds the respons object
console.log(this.headers.location); // Holds the request url
});
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28529636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Receiving IrDA message on STM32H7 I'm trying to receive some data using USART in IrDA mode on an STM32H7 board with HAL drivers.
I get the reply as I expect it on the gpio pin (baud rate, logic, and timing are ok) but for some reason the data is never moved to the RDR register of the USART and when I try to read it I just get zeroes on the first try and a timeout after that (polling mode).
After filling the IRDA handle structure I call HAL_IRDA_DeInit() and HAL_IRDA_Init(). I configure the GPIO in HAL_IRDA_MspInit() and send the first message, which reaches the target (with HAL_IRDA_Transmit()). The target then sends a reply that I can check on the UART_RX pin. And here something happens... or rather, doesn't happen. If I read the UART with HAL_IRDA_Receive() (1 byte at a time) I get only zero and then timeouts.
IRDA_HandleTypeDef hirda4;
void vIrdaInit(void)
{
hirda4.Instance = USART3;
hirda4.Init.BaudRate = 60100;
hirda4.Init.WordLength = IRDA_WORDLENGTH_9B;
hirda4.Init.Parity = IRDA_PARITY_NONE;
hirda4.Init.Mode = IRDA_MODE_TX_RX;
hirda4.Init.Prescaler = 1;
hirda4.Init.PowerMode = IRDA_POWERMODE_NORMAL;
/* Initialize the IRDA registers. Here also HAL_IRDA_MspInit() will be called */
if (HAL_IRDA_Init(&hirda4) != HAL_OK)
{
Error_Handler();
}
}
/* Initialize IrDA low level resources. This function is called by HAL_IRDA_Init() */
void HAL_IRDA_MspInit(IRDA_HandleTypeDef* irdaHandle)
{
GPIO_InitTypeDef GPIO_InitStruct = {0};
if(irdaHandle->Instance==USART3)
{
/* USART3 clock enable */
__HAL_RCC_USART3_CLK_ENABLE();
__HAL_RCC_GPIOB_CLK_ENABLE();
/**USART3 GPIO Configuration
PB10 ------> USART3_RX
PB11 ------> USART3_TX
*/
GPIO_InitStruct.Pin = GPIO_PIN_10;
GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
GPIO_InitStruct.Pull = GPIO_NOPULL;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
GPIO_InitStruct.Alternate = GPIO_AF7_USART3;
HAL_GPIO_Init(GPIOB, &GPIO_InitStruct);
GPIO_InitStruct.Pin = GPIO_PIN_11;
GPIO_InitStruct.Mode = GPIO_MODE_INPUT;
HAL_GPIO_Init(GPIOB, &GPIO_InitStruct);
}
}
here the call to HAL_IRDA_Transmit()/HAL_IRDA_Receive():
if(HAL_IRDA_Transmit(&hirda4, (uint8_t*)TxBuf, sizeof(TxBuf), 5000)!= HAL_OK)
{
Error_Handler();
}
memset(RxBuf, '\0', sizeof(RxBuf));
for (i = 0; i < 8; i++)
{
// blocks here until timeout or data
if(HAL_IRDA_Receive(&hirda4, (uint8_t*)RxBuf, 1, 500)!= HAL_OK)
{
Error_Handler();
}
}
The first time through the loop, the RXNE flag is raised, but RDR contains only 0. The following iterations always result in a timeout (from IRDA_WaitOnFlagUntilTimeout()). I have no idea where to look... The pulses I receive are longer than 3/16 of a bit period and the levels are OK, but it seems I cannot get the message through the SIR receive decoder and into the data register.
UPDATE: here a screenshot with the received signal:
Baud rate is fine, start and stop bits are present, and the message (9 bits) is what I am waiting for. But it is not recognized by the decoder and passed to the UART.
A: Adding a dummy read of 1 byte just after calling the init function allowed me to successfully read once without Timeouts. The problem is that after that, the Receive() function starts again to return Timeouts. The only workaround I could find was to re-init the UART just before calling the Receive() function. This allows me to retrieve the full message. It is sloppy, but it works.
I tried to find the registers that change but I was not able to isolate what is causing the problem.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55357876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: lodash break down array based on sub-array retaining the array record I have an array like below. As you can see, a book can have many authors, but I want to group the books by author. Before grouping, I want to break the array apart so that an entry with multiple authors is split into separate entries, one per author.
[0]
'book_name': 'book1'
'authors': array(1)
0:
author_id:'value2'
author_name: 'Name2'
[1]
'book_name': 'book2'
'authors': array(2)
0:
author_id:'value1'
author_name: 'Name1'
1:
author_id:'value2'
author_name: 'Name2'
I want to have expected output like this
[0]
'book_name': 'book1'
'authors': array(1)
0:
author_id:'value2'
author_name: 'Name2'
[1]
'book_name': 'book2'
'authors': array(1)
0:
author_id:'value1'
author_name: 'Name1'
[2]
'book_name': 'book2'
'authors': array(1)
0:
author_id:'value2'
author_name: 'Name2'
What I have done so far is:
grouping = _.chain(books)
.flatMap('authors')
.groupBy('author_id')
.value()
My solution actually works in that it flattens the array, but the problem is that it only returns the 'authors'; I also want to retain the 'book' information.
A: Actually you don't need lodash to achieve the result you're expecting:
const books = [
{
'book_name': 'book1',
'authors': [
{
author_id: 'value2',
author_name: 'Name2',
},
],
},
{
'book_name': 'book2',
'authors': [
{
author_id: 'value1',
author_name: 'Name1',
},
{
author_id: 'value2',
author_name: 'Name2',
}
],
},
];
const transformed = books.reduce((accumulator, book) => {
const authors = book.authors.map(author => ({
...book,
authors: [author]
}));
return accumulator.concat(authors);
}, []);
console.log('original', books);
console.log('transformed', transformed);
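For completeness, since the question asked about lodash: the same transformation can be written with Array.prototype.flatMap (Node 11+), and lodash's _.flatMap behaves the same way if you prefer it. This is a sketch assuming the book/author shapes shown above:

```javascript
// One record per (book, author) pair, via flatMap + map.
const books = [
  { book_name: 'book1', authors: [{ author_id: 'value2', author_name: 'Name2' }] },
  { book_name: 'book2', authors: [{ author_id: 'value1', author_name: 'Name1' },
                                  { author_id: 'value2', author_name: 'Name2' }] }
];

const transformed = books.flatMap(book =>
  // spread keeps all book fields, then overrides authors with a 1-element array
  book.authors.map(author => ({ ...book, authors: [author] }))
);

console.log(transformed.length); // 3 (one record per book/author pair)
```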
A: It can be done with simple JavaScript.
const books = [{ 'book_name': 'book1',
'authors': [{author_id:'value2',
author_name: 'Name2'}] },
{'book_name': 'book2',
'authors': [{author_id:'value3',
author_name: 'Name3'},
{author_id:'value4',
author_name: 'Name4'}] }];
const arr = [];
books.map((book)=>{
book.authors.forEach((author) => {
arr.push({
book_name: book.book_name,
authors: [author]
});
});
})
console.log(arr);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52078077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Run reducer after state is updated by another reducer Let's say I've got an app with two reducers - tables and footer combined using combineReducers().
When I click on some button two actions are being dispatched - one after another: "REFRESH_TABLES" and "REFRESH_FOOTER".
tables reducer is listening for the first action and it modifies the state of tables. The second action triggers footer reducer. The thing is it needs current state of tables in order to do it's thing.
My implementation looks something like below.
Button component:
import React from 'react';
const refreshButton = React.createClass({
refresh () {
this.props.refreshTables();
this.props.refreshFooter(this.props.tables);
},
render() {
return (
<button onClick={this.refresh}>Refresh</button>
)
}
});
export default refreshButton;
ActionCreators:
export function refreshTables() {
return {
type: REFRESH_TABLES
}
}
export function refreshFooter(tables) {
return {
type: REFRESH_FOOTER,
tables
}
}
The problem is that the props haven't updated at this point, so the state of tables that the footer reducer gets is also not updated yet and contains the data from before the tables reducer ran.
So how do I get a fresh state to the reducer when multiple actions are dispatched one after another from the view?
A: Seems you need to handle the actions async, so you can use a custom middleware like redux-thunk to do something like this:
actions.js
function refreshTables() {
return {
type: REFRESH_TABLES
}
}
function refreshFooter(tables) {
return {
type: REFRESH_FOOTER,
tables
}
}
export function refresh() {
    return function (dispatch, getState) {
        // dispatching a plain action is synchronous, so getState()
        // below already sees the refreshed tables
        dispatch(refreshTables())
        dispatch(refreshFooter(getState().tables))
    }
}
component
const refreshButton = React.createClass({
refresh () {
this.props.refresh();
},
{/* ... */}
});
A: Although splitting it asynchronously may help, the issue may be in the fact that you are using combineReducers. You should not have to rely on the tables from props; you want to use the source of truth, which is state.
You need to look at rewriting the root reducer so you have access to all of state. I have done so by writing it like this.
const rootReducer = (state, action) => ({
tables: tableReducer(state.tables, action, state),
footer: footerReducer(state.footer, action, state)
});
With that you now have access to full state in both reducers so you shouldn't have to pass it around from props.
Your reducer could then looks like this.
const footerReducer = (state, action, { tables }) => {
...
};
That way you are not actually pulling in all parts of state as it starts to grow and only access what you need.
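A minimal runnable sketch of this pattern (plain functions, no Redux dependency; the action types and state shapes here are illustrative) shows the footer reducer reading the already-refreshed tables on the second dispatch:

```javascript
// Each slice reducer can receive the full previous state as a third argument.
const tableReducer = (state = [], action) =>
  action.type === 'REFRESH_TABLES' ? ['t1', 't2'] : state;

const footerReducer = (state = {}, action, { tables }) =>
  action.type === 'REFRESH_FOOTER' ? { count: tables.length } : state;

const rootReducer = (state = { tables: [], footer: {} }, action) => ({
  tables: tableReducer(state.tables, action),
  footer: footerReducer(state.footer, action, state)
});

let state = rootReducer(undefined, { type: '@@INIT' });
state = rootReducer(state, { type: 'REFRESH_TABLES' });
state = rootReducer(state, { type: 'REFRESH_FOOTER' });
// The footer saw the tables refreshed by the previous dispatch:
console.log(state.footer); // { count: 2 }
```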
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38609715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can't list_options for capture card in ffmpeg : "Unable to BindToObject" I'm trying to list the options of a "AJA Kona3G Quad" device on Windows 10 using ffmpeg.
[dshow @ 000001343e7d9740] DirectShow video devices (some may be both
video and audio devices) [dshow @ 000001343e7d9740] "AJA S-Capture
Kona3GQuad - 0"
[dshow @ 000001343e7d9740] Alternative name
"@device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}{89BB1170-565C-49B6-9DED-D6D2DFCA06A8}"
[dshow @ 000001343e7d9740] DirectShow audio
[dshow @ 000001343e7d9740] "AJA S-Capture [Audio] Kona3GQuad - 0"
[dshow @ 000001343e7d9740] Alternative name
"@device_sw_{33D9A762-90C8-11D0-BD43-00A0C911CE86}{89BB1190-565C-49B6-9DED-D6D2DFCA06A8}"
Listing the devices seems OK, but once I get my device name and try to list its options, ffmpeg returns the following:
[dshow @ 000001aeac2a96c0] Unable to BindToObject for AJA S-Capture Kona3GQuad - 0
[dshow @ 000001aeac2a96c0] Could not find video device with name [AJA
S-Capture Kona3GQuad - 0] among source devices of type video.
video=AJA S-Capture Kona3GQuad - 0: I/O error
Have any of you already encountered this issue, and if so, could you point me in the right direction?
Thanks and take care during these stressful times
A: The symptoms you mention indicate a problem in the AJA Kona3G driver (actually, not even a driver per se, but its DirectShow integration, implemented as a custom AJA DirectShow filter).
The system has this integration, but it is either broken or out of date, so the DirectShow API issue you see comes from the third-party component and is forwarded to FFmpeg.
You need to have AJA stuff re-installed and/or contact their support for a solution.
It is not really a programming question.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61253610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: I need a way to make sure that words that are given to an array are real words. (RTP-hsa, console) I solved the issue of making sure that only letters in the array are used; now I need a way to import or load a dictionary into the console so fake words can't be used. I had the idea of finding a txt document and loading it into an array when the program starts, but I haven't had any luck finding one. If anyone finds a way, please provide a good description of how to load it into the hsa.console form. Thanks. (Java Ready to Program hsa.console)
A: Here is some code that solves the "does a word exist" part. I wasn't sure how you were storing the array of characters, so I guessed it was a 2D array of chars. The method wordExists is the one used to check if the word exists. I used a test to check that it worked, and it did on multiple inputs. Edit: I just realized that this could cause the word to cycle back onto itself if it's a palindrome; I'll post the fixed code as soon as I make it.
public static boolean wordExists(char[][] characters, String word)
{
    char[] wordAsChar = word.toCharArray();
    // Try every cell in the grid as a starting point.
    for (int i = 0; i < characters.length; i++)
    {
        for (int j = 0; j < characters[0].length; j++)
        {
            if (checkWord(characters, wordAsChar, 0, i, j))
            {
                return true;
            }
        }
    }
    return false;
}
public static boolean checkWord(char[][] characters, char[] word, int index, int x, int y)
{
    boolean wordExists = false;
    if (index < word.length) {
        if (characters[x][y] == word[index])
        {
            // Recurse into the four orthogonal neighbours, bounds-checked
            // (note the >= 0 checks so column/row 0 is not skipped).
            if (x + 1 < characters.length)
            {
                wordExists = wordExists || checkWord(characters, word, index + 1, x + 1, y);
            }
            if (x - 1 >= 0)
            {
                wordExists = wordExists || checkWord(characters, word, index + 1, x - 1, y);
            }
            if (y + 1 < characters[0].length)
            {
                wordExists = wordExists || checkWord(characters, word, index + 1, x, y + 1);
            }
            if (y - 1 >= 0)
            {
                wordExists = wordExists || checkWord(characters, word, index + 1, x, y - 1);
            }
        }
    }
    // The final character matching at this cell completes the word.
    if (index == word.length - 1 && characters[x][y] == word[index])
    {
        wordExists = true;
    }
    return wordExists;
}
A: CheckLetter != (" ") doesn't check content equality, it checks reference equality - are these 2 variables referring to the same object. Use !" ".equals(CheckLetter). Better yet, don't use String for holding single characters, there is char for this and there is Character class that has some methods for handling them (like toUpperCase).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14348763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using the '-' sign in jQuery Selector gives error I am trying to run the following code:
$(document).ready({
$("#menu-nav a").hover(
function () {
$(this).css ( marginRight: '20px' );
},
function () {
$(this).css ( marginRight: '10px' );
}
);
}); //end ready
But my Dreamweaver reports an error on the line $("#menu-nav a").hover(. Is it okay to use the selector #menu-nav a, or should it be something else?
A: The problem is instead of this:
$(document).ready({
You need this:
$(document).ready(function () {
I am sure you knew that, but it's easy to overlook since the error is shown to be in the following line.
Another issue:
I think you will also run into problems here:
$(this).css ( marginRight: '20px' );
Per the jQuery docs, you should use this:
$(this).css ('margin-right', '20px');
An alternative:
Here's one more thing, just to give a complete answer. As is noted in the comments, you really don't need jQuery at all for this, if you don't want to use it. Try this:
#menu-nav a:hover { margin-right: 20px; }
You can add whatever styles you want like that.
A: In your example
$(document).ready({...});
should be
$(document).ready(function(){
//...
});
And also change
$(this).css ( marginRight: '20px' );
to
$(this).css('marginRight','20px');
or
$(this).css({'marginRight':'20px'});
in both lines
A: You are not passing an object, so you cannot use property: value; instead you can write:
$(this).css({ marginRight: '20px'});
or:
$(this).css( 'marginRight', '20px' );
A: I use this convention to avoid getting lost and confused in a myriad of brackets and parentheses:
$(document).ready(DocReady);
function DocReady()
{
// More code here
}
An advantage is that you can also call the DocReady function from a button click (maybe to test it).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13495192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: How to fetch Images from server Using API and push in Array I want to fetch images from my server using an API and then push them into an array, so that I can show the images in a loop or as a slideshow. I have a service named common service where I have defined the image API URL. Now I just want to know how to get the images in my component.ts and push them into an array.
I am using Angular 4.
I have tried to create a function in the .ts to get the images, but it is not working.
common.service.ts
this.surveyImageUrl = this.authService.website_url + '/SurveyImages/';
here i tried to get images in ts.file
optionimage: any[] = [{}];
surveyImageUrl = function () {
debugger;
this.commonService.surveyImageUrl().subscribe(data => {
if (data.success) {
data.survey.array.forEach(element => {
var obj = {
is_optionimages: true,
surveyImage: element.surveyImage,
}
this.optionimage.push(element);
});
}
});
}
I just want to get the images, push them into an array, and display them in the HTML page with a loop.
A: Try this
optionimage: any[] = [];
surveyImageUrl = function () {
debugger;
this.commonService.surveyImageUrl().subscribe(data => {
if (data.success) {
data.survey.array.forEach(element => {
var obj = {
is_optionimages: true,
surveyImage: element.surveyImage,
}
this.optionimage.push(obj);
});
}
});
}
Then you can bind your image urls to View as below.
<div *ngFor="let image of optionimage">
<img [src] = "image.surveyImage"/>
</div>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54721231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: cross fading multiple images gallery autoplay I'd like to create a fixed gallery as a background on my main page, cross-fading 90 pictures in autoplay. I got the code, but when I preview it there are a lot of glitches and it's not working as expected, as you can see here.
I did this code with JS but I don't know why it's glitching.
Can anyone help me understand this problem?
Thanks so much.
I got this code:
HTML
<div id="fadey">
<img src="https://freight.cargo.site/t/original/i/e594ec985f9c8cd1b35aa442e5ba9db42c2fa32f4a238ff025bdb1514eda6824/IMG_8959.JPG">
<img src="https://freight.cargo.site/t/original/i/d8dc257fac5364abd772ad1b7b296899bf14c012b11d1e5266e163756cdb0c90/IMG_7321.JPG">
<img src="https://freight.cargo.site/t/original/i/e594ec985f9c8cd1b35aa442e5ba9db42c2fa32f4a238ff025bdb1514eda6824/IMG_8959.JPG">
<img src="https://freight.cargo.site/t/original/i/d8dc257fac5364abd772ad1b7b296899bf14c012b11d1e5266e163756cdb0c90/IMG_7321.JPG">
<img src="https://freight.cargo.site/t/original/i/e594ec985f9c8cd1b35aa442e5ba9db42c2fa32f4a238ff025bdb1514eda6824/IMG_8959.JPG">
<img src="https://freight.cargo.site/t/original/i/d8dc257fac5364abd772ad1b7b296899bf14c012b11d1e5266e163756cdb0c90/IMG_7321.JPG">
<img src="https://freight.cargo.site/t/original/i/e594ec985f9c8cd1b35aa442e5ba9db42c2fa32f4a238ff025bdb1514eda6824/IMG_8959.JPG">
<img src="https://freight.cargo.site/t/original/i/d8dc257fac5364abd772ad1b7b296899bf14c012b11d1e5266e163756cdb0c90/IMG_7321.JPG">
<img src="https://freight.cargo.site/t/original/i/e594ec985f9c8cd1b35aa442e5ba9db42c2fa32f4a238ff025bdb1514eda6824/IMG_8959.JPG">
<img src="https://freight.cargo.site/t/original/i/d8dc257fac5364abd772ad1b7b296899bf14c012b11d1e5266e163756cdb0c90/IMG_7321.JPG">
</div>
CSS:
@keyframes fadey {
0% {
opacity: 1; }
16.66% {
opacity: 1; }
25% {
opacity: 0; }
91.66% {
opacity: 0; }
100% {
opacity: 1; }
}
#fadey {
position: absolute;
top: 0;
left: 0;
z-index: -1;
width: 100%;
height: 100vh;
max-height: 100vh;
}
#fadey img {
display: block;
width: 100%;
font-size: 0;
}
and JS:
var cssFadey = function(newOptions) {
var options = (function() {
var mergedOptions = {},
defaultOptions = {
presentationTime: 25,
durationTime: 15,
fadeySelector: '#fadey',
cssAnimationName: 'fadey',
fallbackFunction: function() {}
};
for (var option in defaultOptions) mergedOptions[option] = defaultOptions[option];
for (var option in newOptions) mergedOptions[option] = newOptions[option];
return mergedOptions;
})(),
CS = this;
CS.animationString = 'animation';
CS.hasAnimation = false;
CS.keyframeprefix = '';
CS.domPrefixes = 'Webkit Moz O Khtml'.split(' ');
CS.pfx = '';
CS.element = document.getElementById(options.fadeySelector.replace('#', ''));
CS.init = (function() {
if (CS.element.style.animationName !== undefined) CS.hasAnimation = true;
if (CS.hasAnimation === false) {
for (var i = 0; i < CS.domPrefixes.length; i++) {
if (CS.element.style[CS.domPrefixes[i] + 'AnimationName'] !== undefined) {
CS.pfx = domPrefixes[i];
CS.animationString = pfx + 'Animation';
CS.keyframeprefix = '-' + pfx.toLowerCase() + '-';
CS.hasAnimation = true;
break;
}
}
}
if (CS.hasAnimation === true) {
function loaded() {
var imgAspectRatio = firstImage.naturalHeight / (firstImage.naturalWidth / 100);
var imageCount = CS.element.getElementsByTagName("img").length,
totalTime = (options.presentationTime + options.durationTime) * imageCount,
css = document.createElement("style");
css.type = "text/css";
css.id = options.fadeySelector.replace('#', '') + "-css";
css.innerHTML += "@" + keyframeprefix + "keyframes " + options.cssAnimationName + " {\n";
css.innerHTML += "0% { opacity: 1; }\n";
css.innerHTML += (options.presentationTime / totalTime) * 100+"% { opacity: 1; }\n";
css.innerHTML += (1/imageCount)*100+"% { opacity: 0; }\n";
css.innerHTML += (100-(options.durationTime/totalTime*100))+"% { opacity: 0; }\n";
css.innerHTML += "100% { opacity: 1; }\n";
css.innerHTML += "}\n";
css.innerHTML += options.fadeySelector + " img { position: absolute; top: 0; left: 0; " + keyframeprefix + "animation: " + options.cssAnimationName + " " + totalTime + "s ease-in-out infinite; }\n";
css.innerHTML += options.fadeySelector + "{ box-sizing: border-box; padding-bottom: " + imgAspectRatio + "%; }\n";
for (var i=0; i < imageCount; i++) {
css.innerHTML += options.fadeySelector + " img:nth-last-child("+(i+1)+") { " + keyframeprefix + "animation-delay: "+ i * (options.durationTime + options.presentationTime) + "s; }\n";
}
document.body.appendChild(css);
}
var firstImage = CS.element.getElementsByTagName("img")[0];
if (firstImage.complete) {
loaded();
} else {
firstImage.addEventListener('load', loaded);
firstImage.addEventListener('error', function() {
alert('error');
})
}
} else {
// fallback function
options.fallbackFunction();
}
})();
}
cssFadey();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73545265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can I judge my status by android sensor, moving or motionless? We can do many things with Android sensors. However, can I determine my status, such as moving or motionless?
A: The simplest way is to use the Activity Recognition API: http://developer.android.com/training/location/activity-recognition.html .
Or you could read raw data from the sensors to recognize patterns. But eventually you'll arrive at the same functionality.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19655019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to detect colors one at a time I need to detect two colors, one after the other.
So this is an example of my program's workflow:
detect an object with a specific color; after that object is close enough (by a specified amount) to the camera, the program should start looking for another color.
This is what I tried so far:
# All python's imports
vs = VideoStream(src=0).start()
# Defining the two colors bound
blueLower = np.array([110, 50, 50])
blueUpper = np.array([130, 255, 255])
greenLower = np.array([29, 86, 6])
greenUpper = np.array([64, 255, 255])
# Defining a function to start the loop so i can later rerun it with different color bounds
def loop(lower, upper):
while True:
....
....
# If the object is close enough, change the loop arguments to search for a new color
if radius > 250:
loop(greenLower, greenUpper)
What happens is, when the radius is bigger than 250, it just reruns the original loop.
A: You can achieve this with itertools, which is in the standard library (you do not need to install it, just import it). Although there are other ways you can toggle between values, this one is convenient. I changed some parts of your code; you can let me know if there's something you do not understand.
import itertools
blueLower = [110, 50, 50]
blueUpper = [130, 255, 255]
greenLower = [29, 86, 6]
greenUpper = [64, 255, 255]
greenBounds = (greenLower, greenUpper)
blueBounds = (blueLower, blueUpper)
def loop(colorBounds, iterator):
radius = 0
lower, upper = colorBounds
print(lower, upper)
while True:
radius += 1
if radius > 250:
loop(iterator(), iterator)
toggle = itertools.cycle([greenBounds, blueBounds]).__next__
loop(greenBounds, toggle)
To clarify, I added radius=0 and radius += 1 for my own testing purposes.
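If the recursion in loop is a concern (each color switch nests another call, which would eventually hit the recursion limit), the same itertools.cycle idea works in one flat loop. A minimal sketch, with a list of simulated radii standing in for the camera frames — the track function and frames list are illustrative, not from the question:

```python
import itertools

blue_bounds = ([110, 50, 50], [130, 255, 255])
green_bounds = ([29, 86, 6], [64, 255, 255])

def track(frames):
    """Return the (lower, upper) bounds used for each frame,
    switching colors whenever the object gets close enough."""
    bounds_cycle = itertools.cycle([green_bounds, blue_bounds])
    lower, upper = next(bounds_cycle)   # start with green, as in the question
    used = []
    for radius in frames:               # stand-in for the camera loop
        used.append((lower, upper))
        if radius > 250:                # object close enough: switch color
            lower, upper = next(bounds_cycle)
    return used

# simulated radii: the object approaches once, triggering one color switch
history = track([100, 200, 260, 100])
```

Because the initial assignment already consumed the first element of the cycle, each later next() call returns the other color, so the bounds alternate correctly on every switch.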
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62031399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Error in subset "incorrect number of dimensions" when using "last" and "lag" After downloading stock data using the quantmod package, I want to subset the data and also compare the last row in the xts with the previous row (using last / lag).
First, I created a function to classify the volume into its quartile.
Second, I create a new dataset to filter out which stocks in the list had a volume of 3 (3rd quartile) yesterday: "stocks_with3".
Now I'd like to subset the newly created "stocks_with3" dataset again.
Specifically, what I'm trying to get is the TRUE/FALSE of comparing the "Open" of yesterday (using last) and the "Close" of before yesterday (using lag).
Exactly what I'm trying to get is whether the "Open" was less than or equal to the "Close" of before yesterday, for the stocks that yesterday had a volume in the 3rd quartile.
But when running the subset I'm getting an error message: "incorrect number of dimensions".
My approach for the subset is using last (to get the last available data in the xts) and lag (to compare it with the immediately previous row).
#Get stock list data
library(quantmod)
library(xts)
Symbols <- c("XOM","MSFT","JNJ","IBM","MRK","BAC","DIS","ORCL","LW","NYT","YELP")
start_date=as.Date("2018-06-01")
getSymbols(Symbols,from=start_date)
stock_data = sapply(.GlobalEnv, is.xts)
all_stocks <- do.call(list, mget(names(stock_data)[stock_data]))
#function to split volume data quartiles into 0-4 results
Volume_q_rank <- function(x) {
stock_name <- stringi::stri_extract(names(x)[1], regex = "^[A-Z]+")
stock_name <- paste0(stock_name, ".Volqrank")
column_names <- c(names(x), stock_name)
x$volqrank <- as.integer(cut(quantmod::Vo(x),
quantile(quantmod::Vo(x),probs=0:4/4),include.lowest=TRUE))
x <- setNames(x, column_names)
return(x)
}
all_stocks <- lapply(all_stocks, Volume_q_rank)
#Create a new dataset using names and which with stocks of Volume in the 3rd quartile.
stock3 <- sapply(all_stocks, function(x) {last(x[, grep("\\.Volqrank",names(x))]) == 3})
stocks_with3 <- names(which(stock3 == TRUE))
#Here is when I get the error.
stock3_check <- sapply(stocks_with3, function(x) {last(x[, grep("\\.Open",names(x))]) <= lag(x[, grep("\\.Close", 1), names(x)])})
#Expected result could be the same or running this for a single stock but applied to all the stocks in the list:
last(all_stocks$MSFT$MSFT.Open) <= lag(all_stocks$MSFT$MSFT.Close, 1)
#But I'm having the error when trying to apply to whole list using "sapply" "last" and "lag"
Any suggestion will be appreciated.
Thank you very much.
A: You have 2 mistakes in your sapply function. First, you are trying to use a character vector (stocks_with3) instead of a list (all_stocks). Second, the function used inside the sapply is incorrect: the lag closing bracket is before the grep.
This should work.
stock3_check <- sapply(all_stocks[stocks_with3], function(x) {
last(x[, grep("\\.Open", names(x))]) <= lag(x[, grep("\\.Close", names(x))])
})
additional comments
I'm not sure what you are trying to achieve with this code. As for retrieving your data, the following code is easier to read, and doesn't first put all the objects in your R session and then move them into a list:
my_stock_data <- lapply(Symbols , getSymbols, auto.assign = FALSE)
names(my_stock_data) <- Symbols
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53962372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: gradle exclude external resource folder from all jars Using Gradle 5.4 with a project dependency on an external folder in a NetBeans project.
The external folder contains resources like images, xml and custom objects that can only be created by this netbeans project. These external assets are then used to create binary files that get packed into a separate jar by that netbeans project.
These same external resources are also used during runtime for development in the gradle project. While I need the resources for development in the gradle project, I do not need or want them to be included in any jars anywhere for any reason when using the task build command because only the binaries are needed for distribution.
How do I exclude the external resources from any and all jar files in the Gradle project, while still allowing them on the classpath so I can run the project?
Some code examples of failure.
apply plugin: 'java'
apply plugin: 'application'
apply plugin: 'idea'
sourceSets {
main {
resources {
srcDir '../ExternalResourceFolder/assets'
}
}
}
jar {
exclude('../ExternalResourceFolder/assets/**')
}
dependencies {
runtimeOnly files('../ExternalResourceFolder/assets')
}
jar {
exclude('../ExternalResourceFolder/assets/**')
}
distributions {
main {
contents {
exclude '../ExternalResourceFolder/assets/**'
}
}
}
Tried many more things like adding to classPath and exclude but it would just be clutter to add them. Changing from sourceSet to dependency only moves the problem around from "build/lib" folder to "build/distributions" folder.
A: Had to exclude per file type in the end.
sourceSets {
main {
resources {
srcDir '.'
exclude ('**/*.j3odata','**/*.mesh','**/*.skeleton',\
'**/*.mesh.xml','**/*.skeleton.xml','**/*.scene',\
'**/*.material','**/*.obj','**/*.mtl','**/*.3ds',\
'**/*.dae','**/*.blend','**/*.blend*[0-9]','**/*.bin',\
'**/*.gltf')
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56072293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: preparing a private message system I am working on preparing a private message system
The problem is that it will only show one message even though there are 4 in the database.
I want it to display all 4 at once, not just one of them.
The problem can be seen here
http://billedeupload.dk/?v=rXFTj.png
$sql = "SELECT id, title, datoTime, checkpm FROM pm WHERE til=? ORDER BY datoTime DESC";
if ($stmt = $this->mysqli->prepare($sql))
{
$stmt->bind_param('i', $til);
$til = $_GET["id"];
$stmt->execute();
$stmt->store_result();
$stmt->bind_result($id, $title, $datoTime, $checkpm);
$stmt->fetch();
$count = $stmt->num_rows;
$stmt->close();
if($count >= 1)
{
?>
<tr>
<td><img src="/img/besked/reply.png" alt="svar" id="beskedu"></td>
<td><a href="/pm-set/<?php echo $id;?>/"><?php echo $title;?></a></td>
<td>
<?php
if($checkpm == 0)
{
?>
<a href="/pm-set/<?php echo $id;?>/"><img src="/img/besked/ulase.png" alt="ulæst" id="beskedu"></a>
<?php
}
else
{
?>
<a href="/pm-set/<?php echo $id;?>/"><img src="/img/besked/lase.png" alt="læst" id="beskedu"></a>
<?php
}
?>
</td>
<td><?php echo date("H:i - d, M - Y", strtotime($datoTime));?></td>
<td>Slet</td>
</tr>
<?php
}
else
{
?>
<div id="error"><p>Ingen besked</p></div>
<?php
}
}
else
{
echo 'Der opstod en fejl i erklæringen: ' . $this->mysqli->error;
}
A: At the moment you read one line and then close the result.
You need to loop through the results reading and processing one line at a time and then only once you are done close the result.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20593067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: MYSQL select 3 results of each "page id" I have two tables... the first is a page list with (page_id, page_title) rows. The second is a list of items ON those pages each with a price (item_id, page_id, item_title, item_price).
I'd like to grab the top three items from each page (ordered by highest item_price first), with the page having the cumulatively highest price ordered first. This is quite beyond my MySQL abilities and I'm looking for advice on how to make this the most efficient! :) Thanks!
A: You could do this a few different ways. What I would do is run one query that says "get me all pages ordered by the sum total of their item prices", then loop through them in PHP, and for each one, do a "get me the top 3 items for the current page".
Make sense?
Query one (untested, written on my phone):
SELECT p.page_name, (SELECT SUM(item_price) FROM items WHERE page_id = p.page_id) AS cumulative_price FROM pages p ORDER BY cumulative_price DESC;
Query two (also untested) looping through results of query one:
SELECT * FROM items WHERE page_id = '$currentPageId' ORDER BY item_price DESC LIMIT 3;
A: *
*I'm assuming every page_id is equal to item_id, and that's how they're linked together. (If not, please correct me.)
*I'm just going to call the second table "table2" and the item_id I'm looking up "SOME_ITEM_ID"
SELECT * FROM `table2` WHERE `item_id` = 'SOME_ITEM_ID' ORDER BY `item_price` DESC;
In English what this is saying is:
Select everything from table2 where the item_id is this, and order the list by item_price in descending order
This SQL statement will return every hit, but you would just output the first three in your code.
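If the rows have already been fetched into the application, the top-three-per-page selection (with pages ordered by cumulative price) that the answers describe can also be done in ordinary code. A Python sketch with made-up sample rows — the function and data names are illustrative, not from the question:

```python
from collections import defaultdict

# (page_id, item_title, item_price) rows as they might come back from MySQL
rows = [
    (1, "a", 5.0), (1, "b", 9.0), (1, "c", 7.0), (1, "d", 1.0),
    (2, "e", 4.0), (2, "f", 2.0),
]

def top_items_per_page(rows, n=3):
    """Group rows by page, keep the n highest-priced items per page,
    and order pages by cumulative item price (highest first)."""
    by_page = defaultdict(list)
    for page_id, title, price in rows:
        by_page[page_id].append((title, price))
    ranked = []
    for page_id, items in by_page.items():
        items.sort(key=lambda it: it[1], reverse=True)
        ranked.append((sum(p for _, p in items), page_id, items[:n]))
    ranked.sort(key=lambda r: r[0], reverse=True)  # cumulative price first
    return [(page_id, items) for _, page_id, items in ranked]

result = top_items_per_page(rows)
```

This mirrors the two-query PHP approach in a single pass, at the cost of pulling every item into memory.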
A: My gut tells me that there is probably no faster way to do this than to do a for-loop in the application over all of the pages, doing a little select item_price from item where page_id = ? order by item_price desc limit 3 for each page, and possibly sticking the results in something like memcached so you aren't taxing your database too much.
But I like a challenge, so i'll attempt it anyhow.
SELECT p1.*, i1.*,
(SELECT count(*)
FROM items i2
WHERE i1.item_price < i2.item_price
AND p1.page_id = i2.page_id) price_rank
FROM pages p1
LEFT OUTER JOIN items i1
ON p1.page_id = i1.page_id
WHERE price_rank < 3;
That odd sub-select is going to probably do an awful lot of work for every single row in items. many other RDBMses have a feature called window functions that can do the above much more elegantly. For instance, if you were using PostgreSQL, you could have written:
SELECT p1.*, i1.*,
RANK(i1.item_price)
OVER (PARTITION BY p1.page_id
ORDER BY i1.item_price DESC) price_rank
FROM pages p1
LEFT OUTER JOIN items i1
ON p1.page_id = i1.page_id
WHERE price_rank <= 3;
And the planner would have arranged to visit rows in an order that results in the rank occurring properly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9857514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Why is my basic Ajax script not working? I have been playing around with Javascript and now I came to Ajax. I am trying to write a very simple script that would get the file contents and print the txt file contents in the div with id=test. This is the script:
function loadXMLDoc(url)
{
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp = new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.open("GET" , url ,false);
xmlhttp.send(null);
document.getElementById('test').innerHTML = xmlhttp.responseText;
}
when I use it on this website :
<div id="test" name="test"> HELLo </div>
<button type="button" onclick="loadXMLDoc('test1.txt')">ClickMe1</button>
With this script HELLo is substituted by nothing - the script empties the container.
Maybe I am missing something trivial, but do I need PHP installed? I don't think so, but... I am not sure what is happening here. When I am debugging, xmlhttp is empty the whole time. Why?
A: You'll need to check the readyState and the HTTP response status before replacing the text:
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("test").innerHTML=xmlhttp.responseText;
}
example on http://www.w3schools.com/ajax/ajax_xmlhttprequest_onreadystatechange.asp
Please let me know if it works.
A: For browsers other than IE
IE's ActiveX object seems not to care much about the ready state; other browsers may not have the text loaded quickly enough at the time you run your function (hence why you are getting the blank instead of the file contents). IE's ActiveX seems to handle this automatically and ignores the ready state, so you have to break up the code differently, as below. Normally you check the status of the request to see if it has been fully read before accessing the responseText.
With onreadystatechange you cannot check the status attribute, since there are no HTTP requests being made for a file-system request. (The status will always be 0 for requests not made via HTTP.) The best I can offer is this:
function loadXMLDoc(url)
{
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function() {
document.getElementById('test').innerHTML = xmlhttp.responseText;
}
xmlhttp.open( "GET", url );
xmlhttp.send(null);
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
xmlhttp.open( "GET", url );
xmlhttp.send(null);
document.getElementById('test').innerHTML = xmlhttp.responseText;
}
}
For CHROME
If you are using CHROME you must start chrome up with the --allow-file-access-from-files switch. Otherwise, it will refuse file system ajax requests. (You will have to set this even if using a so-called "easier" library such as jQuery).
Running AJAX apps on File System In General
Not usually a good idea, a lot of caveats to this route. Typically local development is done with a web server installed to localhost on your development machine.
A: Today it's old-fashioned to call Ajax like xmlhttp = new XMLHttpRequest();
You have many other options for this.
*
*http://api.jquery.com/jQuery.ajax/
*http://www.w3schools.com/jquery/jquery_ref_ajax.asp
*http://net.tutsplus.com/tutorials/javascript-ajax/5-ways-to-make-ajax-calls-with-jquery/
A: Firstly, you have to fight with Same Origin Policy.
A simple working example of an asynchronous request follows:
var req = new XMLHttpRequest();
req.onreadystatechange = function() {
    if (req.status == 200 && req.readyState == 4) {
        ...
    }
};
req.open('GET', url, true);
req.send(null);
Note this is working for Firefox/Opera/Chrome. If IE, use:
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
A: Try with jQuery. Download the last version here and write this code snippet:
function loadXMLDoc(url) {
$("#test").load(url);
}
It's much simpler and less error prone
A: You need a server listening for requests. Your regular file system will not be able to respond to AJAX requests.
You don't need PHP, however you'll need apache or a similar web server.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9634273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Collecting names/ids of failing items in Camel I would like to implement an aggregator-splitter pattern using Camel where I process a list of files. removeOnFailure(true) would help remove files that would cause/trigger a failure from the list and the process will complete successfully as shown in the code below.
def files(exchange: Exchange): java.util.Iterator[Path]
def fileProcessor(exchange: Exchange): Unit
from(aDirectEndpoint) ==> {
aggregate(
split(files _)
.streaming
.parallelProcessing
.idempotentConsumer(_.in)
.eager(true)
.skipDuplicate(true)
.repository(aRepository)
.removeOnFailure(true)
.process(fileProcessor _),
new GroupedExchangeAggregationStrategy
)
.completionTimeout(completionTimeoutMillis)
.process(jobProcessor)
I'm wondering if there is a way to collect those failing files using the Camel API?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45508111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to append to the beginning of an int in Python? First of all, is it possible to append to an int? If not, I guess I'll have to convert it to a string first. I know how to append to a list.
Anyway, how would you append a digit to the beginning of a number (instead of appending it to the end)? Say number = 345, and you'd like to append super, which is equal to 2, to the beginning of number to make it "2345".
How would you do it?
To append it to the end of a list i'd use:
alist.append("hello")
A: To "append" to numbers, you need to convert them to strings first with str(). Numbers are immutable objects in Python. Strings are not mutable either but they have an easy-to-use interface which makes it easy to create modified copies of them.
number = 345
new_number = int('2' + str(number))
Once you are done editing your string, you can easily convert it back to an int with int().
Note that strings don't have an append method like lists do. You can easily concatenate strings with the + operator however.
A: As @Fredrik comments, you can get from any integer to any other integer by a single addition - and this is how you should probably do it, if you deal with integers only.
>>> i = 345
>>> 2000 + i
2345
There are, however, many way to express "prepend a 2 to 345", one would be to use format:
>>> '2{}'.format(345)
'2345'
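Generalizing the arithmetic route from the first example: the digit to prepend must be shifted left by the number of digits in the original number. A small sketch (the prepend_digit name is illustrative; it assumes a non-negative number):

```python
def prepend_digit(number, digit):
    """Return number with digit prepended, using integer arithmetic only.

    Assumes number is non-negative (len(str(...)) would count the sign).
    """
    shift = len(str(number))        # how many decimal places to shift past
    return digit * 10 ** shift + number

result = prepend_digit(345, 2)      # 2 * 1000 + 345 == 2345
```

This avoids the string round-trip entirely, which matters only if you do it in a tight loop.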
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18815780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Accessing an element in django (for) I have a Django template with the following code which creates multiple buttons and tries to hide/show description text on a click (on the same button in each card):
{% for comida in comidas %}
{% if comida.slug_food == page.slug %}
<div class="food2">
<div id="food-title">{{comida.titulo}}</div>
<div id="food-price">{{comida.precio|floatformat:"-1"}}€</div>
<button class="button" onclick="showDescription()">ver+
<div id="food-description" >
{{comida.descripcion|safe}}
</div>
</button>
<div id="add-cart">AÑADIR AL PEDIDO</div>
{% if comida.imagen != null %}
<img src="{{comida.imagen.url}}"></img>
{% endif %}
</div>
{% endif %}
{% endfor %}
where comidas is a list of strings, and later in the script block I have
function showDescription(){
var showText = document.getElementById("food-description");
if (showText.style.display === "block"){
showText.style.display = "none";
} else {
showText.style.display = "block";
}
}
The function runs, but as you may expect, it runs only on the first element of my for loop.
My question is: can anyone help me? I want all my buttons to work, not only the first element.
A: Use {{comida.id}} to get unique ids :
{% for comida in comidas %}
{% if comida.slug_food == page.slug %}
<div class="food2">
<div id="food-title">{{comida.titulo}}</div>
<div id="food-price">{{comida.precio|floatformat:"-1"}}€</div>
<button class="button" onclick="showDescription('{{comida.id}}')">ver+
<div id="food-description-{{comida.id}}" >
{{comida.descripcion|safe}}
</div>
</button>
<div id="add-cart">AÑADIR AL PEDIDO</div>
{% if comida.imagen != null %}
<img src="{{comida.imagen.url}}"></img>
{% endif %}
</div>
{% endif %}
{% endfor %}
And javascript :
function showDescription(comidaId){
var showText = document.getElementById("food-description-" + comidaId);
if (showText.style.display === "block"){
showText.style.display = "none";
} else {
showText.style.display = "block";
}
}
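The fix works because each card now gets a unique DOM id instead of every card reusing food-description (getElementById only ever returns the first match). The id scheme itself can be checked outside the browser; this Python sketch with made-up data just mimics what the template emits:

```python
# Hypothetical stand-in for the `comidas` queryset.
comidas = [{"id": 1, "titulo": "Paella"}, {"id": 2, "titulo": "Tortilla"}]

# One id per card, exactly as food-description-{{comida.id}} produces.
ids = ["food-description-{}".format(c["id"]) for c in comidas]

assert ids == ["food-description-1", "food-description-2"]
assert len(set(ids)) == len(ids)  # no duplicates, so every lookup is unambiguous
```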
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63522524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Visual Studio 2015 Xamarin freezing I have an Android app on Visual Studio 2015 with Xamarin and it is freezing frequently.
The only scenario I can reproduce every time is right after deploy. VS freezes right after the app is deployed on emulator or device. The app works fine, but the VS stands frozen for several seconds. It also freezes in other scenarios too.
VS doesn't show the "not responding" message no matter what I do. It seems like it's doing some background work and it's not really frozen, but I can't figure out what the problem is.
I recreated my project from scratch and the problem started to happen again when I added my resources and installed some NuGet packages (appcompat v7 and firebase cloud messaging).
I'm guessing it's somehow related to VS not recognizing some attributes in some layout files.
1 - Any help on the freezing problem?
2 - What can I do to VS recognize those attributes?
3 - Can I configure the resources.cs to be regenerated only on build?
Update
Found this, but it didn't work for me!
Update 2
Ok, now I'm 100% sure the problem is with the generation of Android resources. aapt.exe is the villain of this story, but I still can't make it stop executing and freezing Visual Studio every time. Is there some setting to ignore XML errors in layout files or something like that?
A: I am the guy who created the bugzilla post you are referencing.
Are you sure you have the <AndroidResgenExtraArgs>--no-crunch </AndroidResgenExtraArgs> on the right place? It must be inside the configuration you are using, i.e. the .csproj must be like this if you are deploying in Debug | AnyCPU configuration:
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
<AndroidResgenExtraArgs>--no-crunch </AndroidResgenExtraArgs>
(also notice the trailing space after --no-crunch)
BTW I also have trouble with aapt.exe when installing Nugets and --no-crunch does not help with this. What helps is temporarily renaming aapt.exe to aapt2.exe - Visual Studio does not run the aapt process, because it can't find the exe file and Nuget installations are fast. But this would probably not work for build and deploy, I think it throws an error.
A: Well, after trying a lot of stuff I made a lucky guess and it worked.
I was looking to my csproj and noticed some weird things, like:
<ItemGroup>
<AndroidResource Include="Resources\layout\onefile.axml" />
</ItemGroup>
<ItemGroup>
<AndroidResource Include="Resources\layout\anotherfile.axml" />
</ItemGroup>
<ItemGroup>
<AndroidResource Include="Resources\layout\onemorefile.axml">
<SomeOtherAttributes />
</AndroidResource>
</ItemGroup>
So I imagined that organizing the ItemGroups would worth a try and put all ItemGroups (class, layout, image, string, etc) together like:
<ItemGroup>
<AndroidResource Include="Resources\layout\onefile.axml" />
<AndroidResource Include="Resources\layout\anotherfile.axml" />
<AndroidResource Include="Resources\layout\onemorefile.axml" />
</ItemGroup>
And it worked beyond what I expected. Not only did it fix my freezing problems... it made VS faster than before the freezing problems started... REALLY fast!
I don't know if reorganizing the csproj file can cause any kind of problem, but I'll do the same with other projects and see what happens.
I did some quick research about csproj and ItemGroup and found nothing relevant... I believe it's kind of a bug!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42416968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: No value given for one or more required parameters. c# oledb Not sure where the problem is. No value given for one or more parameters. I've tried moving the code out of the if statements but it hasn't helped. Not sure if the data adapter requires me to enter all of the column values.
int ID = Convert.ToInt32(txtStudent.Text);
string grade = "A";
int result = Convert.ToInt32(txtResult.Text);
string sql = "UPDATE Student SET grade=@grade WHERE ID = ?";
OleDbCommand command = new OleDbCommand(sql, connectionString);
if ((result >= 70) && (result <= 100))
{
grade = "A";
MessageBox.Show("Result is: A");
command.Parameters.Add("@grade", OleDbType.VarChar).Value = grade;
command.Parameters.Add("@ID", OleDbType.VarChar).Value = ID;
}
else if ((result >= 60) && (result < 70))
{
grade = "B";
MessageBox.Show("Result is: B");
command.Parameters.Add("@grade", OleDbType.VarChar).Value = grade;
command.Parameters.Add("@ID", OleDbType.VarChar).Value = ID;
}
else if ((result >= 50) && (result < 60))
{
grade = "C";
MessageBox.Show("Result is: C");
command.Parameters.Add("@grade", OleDbType.VarChar).Value = grade;
command.Parameters.Add("@ID", OleDbType.VarChar).Value = ID;
}
else if (result < 50)
{
grade = "F";
MessageBox.Show("Result is: F");
command.Parameters.Add("@grade", OleDbType.VarChar).Value = grade;
command.Parameters.Add("@ID", OleDbType.VarChar).Value = ID;
}
else
{
MessageBox.Show("Incorrect data entered");
}
OleDbDataAdapter dataadapter = new OleDbDataAdapter(command);
DataSet ds = new DataSet();
dataadapter.Fill(ds, "Student");
dataGridView1.DataSource = ds;
dataGridView1.DataMember = "Student";
A: I think the problem is on this line:
OleDbDataAdapter dataadapter = new OleDbDataAdapter(sql, connectionString);
You added your parameters to your command, but you are still using the sql string, which expects parameters and their values, in the OleDbDataAdapter constructor.
Use your command instead of your sql query:
OleDbDataAdapter dataadapter = new OleDbDataAdapter(command);
And use a using statement to dispose your OleDbConnection, OleDbCommand and OleDbDataAdapter automatically.
As for your second problem, from the documentation of OleDbCommand.Parameters:
The OLE DB .NET Provider does not support named parameters for passing parameters to an SQL statement or a stored procedure called by an OleDbCommand when CommandType is set to Text.
Actually, it does support named parameters, but the names are ignored; only the parameter order matters. If you add the @ID parameter to your command first, it will be bound to the first placeholder in your command, which is @grade. As you can see, that would generate a problem.
Make sure your parameters are added in the right order as well:
command.Parameters.Add("@grade", OleDbType.VarChar).Value = grade;
command.Parameters.Add("@ID", OleDbType.VarChar).Value = ID;
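The order-sensitivity of positional placeholders is easy to demonstrate with any qmark-style database driver; the sketch below uses Python's built-in sqlite3 as an analogy (not OLE DB itself, but its '?' markers bind strictly by position the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Student (ID INTEGER, grade TEXT)")
conn.execute("INSERT INTO Student VALUES (1, 'F')")

# Correct order: grade first (SET clause), then ID (WHERE clause).
conn.execute("UPDATE Student SET grade = ? WHERE ID = ?", ("A", 1))
assert conn.execute("SELECT grade FROM Student WHERE ID = 1").fetchone()[0] == "A"

# Swapped order binds the wrong values: WHERE ID = 'B' silently matches no rows.
conn.execute("UPDATE Student SET grade = ? WHERE ID = ?", (1, "B"))
assert conn.execute("SELECT grade FROM Student WHERE ID = 1").fetchone()[0] == "A"
```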
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30260748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Make div use remaining width I have found several post with similar problems, but none targeting my exact problem:
I am creating a menu with this HTML code:
<ul id="menu">
<li id="item1">Item 1</li>
<li id="item2" class="active">Item 2</li>
<li id="item3">Item 3</li>
<li id="item4">Item 4</li>
</ul>
What I want to do is to make the .active element use the rest of the remaining width on the page. I will add a click event on each LI to switch the active class.
Is it possible to do the width part with only css?
Here is the CSS I have so far:
ul#menu
{
list-style:none;
background: grey;
position: absolute;
left: 0;
right: 0;
top: 0;
height: 50px;
}
ul#menu li
{
float:left;
height:30px;
border:1px solid black;
width: 50px;
}
ul#menu li.active
{
/* what to put here to make it use the rest of the width */
}
Here is the jsfiddle to play with:
http://jsfiddle.net/GMpeD/
A: Style it as a table row, set width to 100% for both the table and the “active” cell, and prevent line breaks inside cells. Demo: http://jsfiddle.net/yucca42/jyTCw/1/
This won’t work on older versions of IE. To cover them as well, use an HTML table and style it similarly.
A: If you are fine with css3, you could use box-flex property.
box-flex property specifies how a box grows to fill the box that contains it.
Try this,
#menu {
width: 100%;
padding: 0;
list-style: none;
display: -moz-box; /* Mozilla */
-moz-box-orient: horizontal; /* Mozilla */
display: -webkit-box; /* WebKit */
-webkit-box-orient: horizontal; /* WebKit */
display: box;
box-orient: horizontal;
}
.active {
-moz-box-flex: 1; /* Mozilla */
-webkit-box-flex: 1; /* WebKit */
box-flex: 1;
}
A: No. You have to add some JS that will monitor window size changes and adjusts your element accordingly.
A: Instead of giving widths in px, set the widths in percentages.
So you could have the non-active each be 10%, and the active will be give a width of 70%
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8880993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: LAMP server on EC2 (Amazon Linux Micro Instance) I've launched an instance of the Basic 32-bit Amazon Linux AMI which has an 8GB volume as its root device. If I terminate it, the EBS volume is destroyed as well. What I'd like to know is whether or not my data is protected (for example, the apache document root, or MySQL data) if the server crashes? A lot of tutorials seem to indicate that another EBS volume should be created and my data stored on that, but I'm not really seeing why two EBS volumes are needed?
Or is the current setup okay for a web server setup?
Many thanks in advance for your help!
A: When you spin an EC2 instance up, the root volume is ephemeral - that is, when the instance is terminated, the root volume is destroyed** (taking any data you put there with it). It doesn't matter how you partition that ephemeral volume and where you tuck your data on it - when it is destroyed, everything contained in that volume is lost.
So if the data in the volume is entirely transient and fully recoverable/retrievable from somewhere else the next time you need it, there's no problem; terminate the instance, then spin a new one up and re-acquire the data you need to carry on working.
However, if the data is NOT transient, and needs to be persisted so that work can carry on after an instance crash (and by crash, I mean something that terminates the instance or otherwise renders it inoperable and unrecoverable) then your data MUST NOT be on the root volume, but should be on another EBS volume which is attached to the instance. If and when that instance terminates or breaks irretrievably, your data is safe on that other volume - it can then be re-attached to a new instance for work to continue.
** the exception is where your instance is EBS-backed and you swapped root volumes - in this case, the root volume is left behind after the instance terminates because it wasn't part of the 'package' created by the AMI when you started it.
A: The other volume would be needed in case your server gets broken and you cannot start it. In such case you would just remove initial server, create a second one and attach the additional storage to the new server. You cannot attach root volume of one server to another.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7431995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Cordova Youtube App - sorting videos So I'm working on this Youtube Channel app that I've developed with Cordova and it all went very well until I had to sort the videos newest first.
So I have this code:
channelID: string = '[xxxxx]';
maxResults: string = '10';
pageToken: string;
googleToken: string = '[xxxxx]';
searchQuery: string = '';
I tried adding:
order: 'date'
but nothing happend.
I would really appreciate some help, thank you :)
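For reference, the YouTube Data API v3 search endpoint accepts an order parameter, and order=date returns newest first. Building such a request URL can be sketched without a network call (the channel id and key below are placeholders):

```python
from urllib.parse import urlencode

params = {
    "part": "snippet",
    "channelId": "CHANNEL_ID",  # placeholder
    "maxResults": 10,
    "order": "date",            # newest videos first
    "key": "API_KEY",           # placeholder
}
url = "https://www.googleapis.com/youtube/v3/search?" + urlencode(params)
assert "order=date" in url
```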
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42646550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: HtmlWebpackPlugin doesn't pass pug variables to included pug file I have a basic understanding of how webpack and pug can work together, using HtmlWebpackPlugin to generate a page with the bundled assets using the pug template.
I've created a very simple test project with two pug files: head.pug contains the stuff that goes in <head>, and index.pug is the rest. I created some variables in index.pug which I expect to use in head.pug by using include head.pug. Here is what they look like:
// head.pug //
title #{title}
if isProduction
base(href='myurl.com/welcome/')
// index.pug //
- var isProduction = true
- var title = 'Testing'
doctype html
html
head
include head.pug
body
p My Site
If I use the pug-cli to compile index.pug, it creates the following index.html file:
<!DOCTYPE html>
<html>
<head>
<title>Testing</title>
<base href="myurl.com/welcome/">
</head>
<body>
<p>My Site</p>
</body>
</html>
Looks good. Now, if I use webpack to build my assets and generate index.html, it looks like this:
<!DOCTYPE html>
<html>
<head>
<title></title>
</head>
<body>
<p>My Site</p>
<script src="/bundle6028aa4f7993fc1329ca.js"></script>
</body>
</html>
As you can see, the title wasn't defined, and isProduction is false, so <base> isn't inserted. What's going wrong? Here is my webpack config file:
const webpack = require('webpack');
const path = require('path');
const { CleanWebpackPlugin } = require('clean-webpack-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');
module.exports = {
entry: './src/js/index.js',
output: {
path: path.join(__dirname, 'dist'),
filename: 'bundle[contenthash].js'
},
module: {
rules: [
{ test: /\.pug$/, loader: "pug-loader" },
]
},
plugins: [
new CleanWebpackPlugin(),
new HtmlWebpackPlugin({
template: '!!pug-loader!src/pug/index.pug',
filename: path.join(__dirname, 'dist/index.html')
})
]
};
A: Use the Webpack rule for pug files with these loaders:
...
{
test: /\.pug$/,
use: [
{
loader: 'html-loader'
},
{
loader: 'pug-html-loader'
}
],
},
...
And maybe you can get rid of !!pug-loader! for the plugin's template property:
...
new HtmlWebpackPlugin({
template: './src/pug/index.pug',
filename: path.join(__dirname, 'dist/index.html')
})
...
Probably you have to install the loaders via npm:
npm i html-loader pug-html-loader
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65627085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Microsoft Teams Tab Authentication with ADAL - can't get token in Teams DESKTOP app I'm working on a custom MS Teams Tab app, and inside it I need to get an authentication token from Azure.
I did it using "AuthenticationContext" from the 'adal-angular' library.
Relevant code:
var options: AuthenticationContext.Options = {
clientId: this.properties.azureAppId,
extraQueryParameter: "scope=openid+profile&login_hint=" + encodeURIComponent(this._teamsContext.loginHint),
popUp: true,
redirectUri: window.location.origin
};
authContext = new AuthenticationContext(options);
authContext.acquireTokenPopup(authContext.config.loginResource, undefined, undefined, (errDesc, token, err) => {
this.properties.azureAuthToken = token;
// ... and so on...
This works fine for the Teams web client, but does not work in the Teams Desktop app.
In there, instead of a pop-up, a real tab gets opened in my default browser, but then the Teams Desktop app and the browser "lose track of each other".
After inspecting developer tools of the Teams Desktop app, I found that the "authContext.acquireTokenPopup" call fails, and the error message is "Popup Window is null. This can happen if you are using IE".
Can anybody help? What else do I need to do, or is it even possible for this to work from the Teams Desktop app as well?
Thanks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64914570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Not getting correct value in return The idea of the task is to allow the user to add and withdraw "money" to and from their account. The problem is I can add money, but I can't withdraw it
$funds = $_POST['funds'];
$withdraw_or_add = $_POST['list'];
if($withdraw_or_add == "add")
{
$sql = "UPDATE users SET userFunds = '".$funds."' WHERE userId = 1";
}
else
{
$info = mysql_query("SELECT * FROM users WHERE userId = '1'");
$info = mysql_fetch_assoc($info);
$new_fund = $info['userFunds'] - $funds;
$sql = "UPDATE users SET userFunds = '".$new_fund."' WHERE userId = 1";
}
mysql_select_db('details_db');
$retval = mysql_query( $sql, $conn );
if(! $retval ) {
die('Could not update data: ' . mysql_error());
}
echo "Updated data successfully\n";
mysql_close($conn);
So for example, let's say $fund = 5 and $info['userFunds'] = 20 then the variable $new_fund should be 15. But instead it equals -5. If anyone can help it would be much appreciated.
A: First, at the top of the page, put the database connection code:
$conn = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('details_db');
and then, in the code below, remove the mysql_select_db('details_db'); line (it is commented out below):
$funds = $_POST['funds'];
$withdraw_or_add = $_POST['list'];
if($withdraw_or_add == "add")
{
$sql = "UPDATE users SET userFunds = '".$funds."' WHERE userId = 1";
}
else
{
$info = mysql_query("SELECT * FROM users WHERE userId = '1'");
$info = mysql_fetch_assoc($info);
$new_fund = $info['userFunds'] - $funds;
$sql = "UPDATE users SET userFunds = '".$new_fund."' WHERE userId = 1";
}
//mysql_select_db('details_db');
$retval = mysql_query( $sql, $conn );
if(! $retval ) {
die('Could not update data: ' . mysql_error());
}
echo "Updated data successfully\n";
mysql_close($conn);
Note: Please stop using mysql_* functions. The mysql_* extension has been removed in PHP 7. Please use PDO or MySQLi instead.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40611373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: SpringBoot: Interceptor to read particular field from request and set it in the response All requests and responses handled by our Spring Rest Controller has a Common section which has certain values:
{
"common": {
"requestId": "foo-bar-123",
"otherKey1": "value1",
"otherKey2": "value2",
"otherKey3": "value3"
},
...
}
Currently all my controller functions are reading the common and copying it into the response manually. I would like to move it into an interceptor of some sort.
I tried to do this using ControllerAdvice and ThreadLocal:
@ControllerAdvice
public class RequestResponseAdvice extends RequestBodyAdviceAdapter
implements ResponseBodyAdvice<MyGenericPojo> {
private ThreadLocal<Common> commonThreadLocal = new ThreadLocal<>();
/* Request */
@Override
public boolean supports(
MethodParameter methodParameter, Type type, Class<? extends HttpMessageConverter<?>> aClass) {
return MyGenericPojo.class.isAssignableFrom(methodParameter.getParameterType());
}
@Override
public Object afterBodyRead(
Object body,
HttpInputMessage inputMessage,
MethodParameter parameter,
Type targetType,
Class<? extends HttpMessageConverter<?>> converterType) {
var common = ((MyGenericPojo) body).getCommon();
if (common.getRequestId() == null) {
common.setRequestId(generateNewRequestId());
}
commonThreadLocal.set(common);
return body;
}
/* Response */
@Override
public boolean supports(
MethodParameter returnType, Class<? extends HttpMessageConverter<?>> converterType) {
return MyGenericPojo.class.isAssignableFrom(returnType.getParameterType());
}
@Override
public MyGenericPojo beforeBodyWrite(
MyGenericPojo body,
MethodParameter returnType,
MediaType selectedContentType,
Class<? extends HttpMessageConverter<?>> selectedConverterType,
ServerHttpRequest request,
ServerHttpResponse response) {
body.setCommon(commonThreadLocal.get());
commonThreadLocal.remove();
return body;
}
}
This works when I test sending one request at a time. But, is it guaranteed that afterBodyRead and beforeBodyWrite is called in the same thread, when multiple requests are coming?
If not, or even otherwise, what is the best way of doing this?
A: I think that there is no need of your own ThreadLocal you can use request attributes.
@Override
public Object afterBodyRead(
Object body,
HttpInputMessage inputMessage,
MethodParameter parameter,
Type targetType,
Class<? extends HttpMessageConverter<?>> converterType) {
var common = ((MyGenericPojo) body).getCommon();
if (common.getRequestId() == null) {
common.setRequestId(generateNewRequestId());
}
Optional.ofNullable((ServletRequestAttributes) RequestContextHolder.getRequestAttributes())
.map(ServletRequestAttributes::getRequest)
.ifPresent(request -> {request.setAttribute(Common.class.getName(), common);});
return body;
}
@Override
public MyGenericPojo beforeBodyWrite(
MyGenericPojo body,
MethodParameter returnType,
MediaType selectedContentType,
Class<? extends HttpMessageConverter<?>> selectedConverterType,
ServerHttpRequest request,
ServerHttpResponse response) {
Optional.ofNullable(RequestContextHolder.getRequestAttributes())
.map(rc -> rc.getAttribute(Common.class.getName(), RequestAttributes.SCOPE_REQUEST))
.ifPresent(o -> {
Common common = (Common) o;
body.setCommon(common);
});
return body;
}
EDIT
Optionals can be replaced with
RequestContextHolder.getRequestAttributes().setAttribute(Common.class.getName(),common,RequestAttributes.SCOPE_REQUEST);
RequestContextHolder.getRequestAttributes().getAttribute(Common.class.getName(),RequestAttributes.SCOPE_REQUEST);
EDIT 2
About thread safety
1) In a standard servlet-based Spring web application we have a thread-per-request scenario. The request is processed by one of the worker threads through all the filters and routines. The processing chain will be executed by the very same thread from start to end. So afterBodyRead and beforeBodyWrite are guaranteed to be executed by the very same thread for a given request.
2) Your RequestResponseAdvice by itself is stateless. We used RequestContextHolder.getRequestAttributes() which is ThreadLocal and declared as
private static final ThreadLocal<RequestAttributes> requestAttributesHolder =
new NamedThreadLocal<>("Request attributes");
And ThreadLocal javadoc states:
This class provides thread-local variables. These variables differ from
their normal counterparts in that each thread that accesses one (via
its get or set method) has its own, independently initialized copy of
the variable.
So I don't see any thread-safety issues in this solution.
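The per-thread isolation that both ThreadLocal and RequestContextHolder rely on can be demonstrated with Python's threading.local as an analogy (illustrative only, not the Spring code):

```python
import threading

holder = threading.local()  # analogous to Java's ThreadLocal
results = {}

def worker():
    # The worker thread does not see the value set by the main thread.
    results["worker_sees"] = getattr(holder, "common", None)
    holder.common = "worker-value"

holder.common = "main-value"
t = threading.Thread(target=worker)
t.start()
t.join()

# The worker's write did not disturb the main thread's copy either.
assert results["worker_sees"] is None
assert holder.common == "main-value"
```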
A: Quick answer: RequestBodyAdvice and ResponseBodyAdvice are invoked within the same thread for one request.
You can debug the implementation at: ServletInvocableHandlerMethod#invokeAndHandle
The way you're doing it is not safe though:
*
*ThreadLocal should be defined as static final, otherwise it's similar to any other class property
*Exception thrown in body will skip invocation of ResponseBodyAdvice (hence the threadlocal data is not removed)
"More safe way": Make the request body support any class (not just MyGenericPojo); in the afterBodyRead method:
*
*First call ThreadLocal#remove
*Check if type is MyGenericPojo then set the common data to threadlocal
A: I have already answered in this thread, but I prefer another way to solve this kind of problem.
I would use Aspect-s in this scenario.
I have written this in one file, but you should create proper separate classes.
@Aspect
@Component
public class CommonEnricher {
// annotation to mark methods that should be intercepted
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface EnrichWithCommon {
}
@Configuration
@EnableAspectJAutoProxy
public static class CommonEnricherConfig {}
// Around advice targeting methods annotated with @EnrichWithCommon
@Around("@annotation(com.example.CommonEnricher.EnrichWithCommon)")
public Object enrich(ProceedingJoinPoint joinPoint) throws Throwable {
MyGenericPojo myGenericPojo = (MyGenericPojo) joinPoint.getArgs()[0];
var common = myGenericPojo.getCommon();
if (common.getRequestId() == null) {
common.setRequestId(UUID.randomUUID().toString());
}
//actual rest controller method invocation
MyGenericPojo res = (MyGenericPojo) joinPoint.proceed();
//adding common to body
res.setCommon(common);
return res;
}
//example controller
@RestController
@RequestMapping("/")
public static class MyRestController {
@PostMapping("/test" )
@EnrichWithCommon // mark method to intercept
public MyGenericPojo test(@RequestBody MyGenericPojo myGenericPojo) {
return myGenericPojo;
}
}
}
We have here an annotation @EnrichWithCommon which marks endpoints where enrichment should happen.
A: If it's only meta data that you copy from the request to the response, you can do one of the following:
1- store the meta data in the request/response header, and just use filters to do the copy:
@WebFilter(filterName="MetaDatatFilter", urlPatterns ={"/*"})
public class MyFilter implements Filter{
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
throws IOException, ServletException {
HttpServletRequest httpServletRequest = (HttpServletRequest) request;
HttpServletResponse httpServletResponse = (HttpServletResponse) response;
httpServletResponse.setHeader("metaData", httpServletRequest.getHeader("metaData"));
chain.doFilter(request, response); // continue the filter chain
}
}
2- move the work into the service layer, where you can do the copy through a reusable common method, or have it run through AOP
public void copyMetaData(whatEverType request,whatEverType response) {
response.setMeta(request.getMeta);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60488621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Set CSS Font Size With Drop Box I'm trying to add a live-updating font size list box to a project and I can't seem to get it to work.
I've added an HTML font size selector and some JavaScript to an existing working project with no luck. I can't seem to figure it out, as it looks complete to me and I think it should work.
Here is my code that I'm using.
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
<link rel="stylesheet" type="text/css" href="http://www.jqueryscript.net/demo/Easy-Google-Web-Font-Selector-With-jQuery-Fontselect/fontselect.css" />
<script src="http://www.jqueryscript.net/demo/Easy-Google-Web-Font-Selector-With-jQuery-Fontselect/jquery.fontselect.js"></script>
<style>
body { padding:50px; background-color:#333;}
p, h1 { color:#fff;}
</style>
</head>
<body>
<script>
$(function(){
$('#font').fontselect().change(function(){
// replace + signs with spaces for css
var font = $(this).val().replace(/\+/g, ' ');
// split font into family and weight
font = font.split(':');
// set family on paragraphs
$('p').css('font-family', font[0]);
});
});
</script>
<script>
$("#size").change(function() {
$('p').css("font-size", $(this).val() + "px");
});
</script>
<p> </p>
<input id="font" type="text" />
<select id="size">
<option value="7">7</option>
<option value="10">10</option>
<option value="20">20</option>
<option value="30">30</option>
</select>
<p>Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Lorem Ipsum has been the industry's standard dummy text ever since the 1500s,
when an unknown printer took a galley of type and scrambled it to make a type
specimen book.</p>
</body>
</html>
A: Here is my updated answer with a code snippet. The problems are: 1) missing <html><head> at top, 2) missing jQuery package, 3) fontselect.js and fontselect.css need to be called with https://, not http://.
<html>
<head>
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<link rel="stylesheet" type="text/css" href="https://www.jqueryscript.net/demo/Easy-Google-Web-Font-Selector-With-jQuery-Fontselect/fontselect.css" />
<script src="https://www.jqueryscript.net/demo/Easy-Google-Web-Font-Selector-With-jQuery-Fontselect/jquery.fontselect.js"></script>
<style>
body { padding:50px; background-color:#333;}
p, h1 { color:#fff;}
</style>
</head>
<body>
<script>
$(function(){
$('#font').fontselect().change(function(){
// replace + signs with spaces for css
var font = $(this).val().replace(/\+/g, ' ');
// split font into family and weight
font = font.split(':');
// set family on paragraphs
$('p').css('font-family', font[0]);
});
});
</script>
<script>
$("#size").change(function() {
$('p').css("font-size", $(this).val() + "px");
});
</script>
<select id="size">
<option value="7">7</option>
<option value="10">10</option>
<option value="20">20</option>
<option value="30">30</option>
</select>
<p> </p>
<input id="font" type="text" />
<p>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.</p>
</body>
</html>
A: Chrome is beginning to refuse to download links from non-encrypted sources in an effort to combat phishing fraud and you can read more about that here.
So, the issue is that your <script> links to download the JQuery and fontselect libraries are being done via http and not https and your browser is not downloading them because of that.
Change:
<script src="http://ajax.googleapis.com/ajax/l...
<script src="http://www.jqueryscript.net/demo/Easy-Google-Web-F...
To:
<script src="https://ajax.googleapis.com/ajax/l...
<script src="https://www.jqueryscript.net/demo/Easy-Google-Web-F...
$(function(){
$('#font').fontselect().change(function(){
// replace + signs with spaces for css
var font = $(this).val().replace(/\+/g, ' ');
// split font into family and weight
font = font.split(':');
// set family on paragraphs
$('p').css('font-family', font[0]);
});
});
$("#size").change(function() {
$('p').css("font-size", $(this).val() + "px");
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://www.jqueryscript.net/demo/Easy-Google-Web-Font-Selector-With-jQuery-Fontselect/jquery.fontselect.js"></script>
<link rel="stylesheet" type="text/css" href="http://www.jqueryscript.net/demo/Easy-Google-Web-Font-Selector-With-jQuery-Fontselect/fontselect.css">
<style>
body { padding:50px; background-color:#333;}
p, h1 { color:#fff;}
</style>
<select id="size">
<option value="7">7</option>
<option value="10">10</option>
<option value="20">20</option>
<option value="30">30</option>
</select>
<p> </p>
<input id="font" type="text" />
<p>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.</p>
With that said and done, this feature is incredibly easy to implement yourself, without any external libraries at all:
// First, get a reference to the drop down and the target element
var dd = document.getElementById("size");
var target = document.querySelector("p");
// Then, set up an event handling function for when the value of the select changes
dd.addEventListener("change", function(){
// Remove the previously applied class
target.className = "";
// Just apply the CSS class that corresponds to the size selected
target.classList.add(dd.value);
});
body { padding:50px; background-color:#333;}
p, h1 { color:#fff;}
/* CSS class names must start with a letter */
.seven { font-size:7px; }
.ten { font-size:10px; }
.twenty { font-size:20px; }
.thirty { font-size:30px; }
<select id="size">
<option value="">Choose a font size...</option>
<!-- The values here must match the names of the CSS classes -->
<option value="seven">7</option>
<option value="ten">10</option>
<option value="twenty">20</option>
<option value="thirty">30</option>
</select>
<p>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.</p>
A: First, make sure you have the jQuery library linked in your document. Then, make sure that $(this).val() gives you a correct value.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/46365927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Render MDX content in Next.js I am trying to load MDX content in a Next.js application, but the content comes out like this instead of the normal rendered view:
function MDXContent(_ref) { let { components } = _ref, props = _objectWithoutProperties(_ref, ["components"]); return Object(_mdx_js_react__WEBPACK_IMPORTED_MODULE_1__["mdx"])(MDXLayout, _extends({}, layoutProps, props, { components: components, mdxType: "MDXLayout", __self: this, __source: { fileName: _jsxFileName, lineNumber: 17, columnNumber: 10 } }), Object(_mdx_js_react__WEBPACK_IMPORTED_MODULE_1__["mdx"])("h1", { __self: this, __source: { fileName: _jsxFileName, lineNumber: 18, columnNumber: 5 } }, `Hello from MD!`)); }
My _app.js is like this -
import Head from 'next/head';
import { MDXProvider } from '@mdx-js/react';
export default function App({ Component, pageProps }) {
return (
<MDXProvider>
<Head>
<link
href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500;600;700;800;900&display=swap"
rel="stylesheet"
/>
</Head>
<Component {...pageProps} />
</MDXProvider>
);
}
My next.config.js is like this
const withMDX = require('@next/mdx')({
options: {
remarkPlugins: [images],
rehypePlugins: []
}
});
module.exports = withMDX();
I tried following the setup instructions for Webpack by referring to https://mdxjs.com/getting-started/webpack and https://nextjs.org/docs/api-reference/next.config.js/custom-webpack-config, but the result was still the same. Can anyone help me out?
A: Looking at the official nextjs mdx example this is the correct configuration:
const withMDX = require('@next/mdx')({
extension: /\.mdx?$/,
})
module.exports = withMDX({
pageExtensions: ['js', 'jsx', 'mdx'],
})
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63772502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Doctrine entity repository with additional aggregated fields I have an entity Client, with a relation to an entity Contract. Contract has got a field amount and a field payingDelay.
Client.php
/**
* @ORM\OneToMany(targetEntity="Contract", mappedBy="client")
* @ORM\JoinColumn(name="contract_id", referencedColumnName="id")
*/
private $contract;
I'd like to show a list of all clients with some basic client fields and also some calculated (SUM, etc.) information on contracts, like this:
name - num contracts - sum(amounts) - aggregated risk
John - COUNT(contracts) - SUM(C.amount) - SUM(C.amount * C.payingDelay)
This is my basic `findClientWithCalculations()` method in `ClientRepository`:
return $this->createQueryBuilder('CLI')
->join('CLI.contract', 'CON')
->orderBy('CON.startDate', 'DESC')
->getQuery()
->getResult();
Is there a way I can add extra columns to this QueryBuilder, even if the final structure doesn't match the structure of a Client object or this must be done outside from a repository?
If not, maybe I can build a custom query in a controller and pass the query result to a twig template to show this structure.
Thank you.
A: Although not trivial, the question was not correctly formulated. I thought that an entity repository method always had to implement some kind of findBy() method and return an object, or a collection of objects, of the entity the repository belongs to.
Actually, an entity repository method can return anything, so this problem can be solved using a native query inside the entity repository method.
For example:
ClientRepository.php:
public function findWithContractStatus($contractStatusShortname)
{
$em = $this->getEntityManager();
    $clientQuery = "select distinct CLI.id, CLI.name, COUNT(CON.id) as ncontracts, SUM(CON.amount) as amount from client CLI join contract CON on CON.client_id = CLI.id group by CLI.id, CLI.name";
$rsm = new ResultSetMapping();
$rsm->addScalarResult('id', 'id');
$rsm->addScalarResult('name', 'name');
$rsm->addScalarResult('ncontracts', 'ncontracts');
$rsm->addScalarResult('amount', 'amount');
$query = $em->createNativeQuery($clientQuery, $rsm);
return $query->getResult();
}
This will return an array with the given structure - id, name, ncontracts, amount - which can be iterated in controller, twig template or whereever.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62105892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: PHP CURL GET/POST Digest authentication I am using cURL to fetch data from a REST API. The API requires digest authentication. I have implemented digest authentication, but it is not working for the POST method. It works fine for the GET method.
$username = 'username';
$password = 'password';
$method = 'GET';
// $method = 'POST';
// FOR POST METHOD. API REQUIRE THIS FORMAT
// $fields = array('APIRquestData' => '{"name":"value","name1":["v4","v5"]}');
$url = "http://apiurl/getmethodname";
// $url = "http://apiurl/postmethodname";
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL, $url);
curl_setopt($ch,CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch,CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch,CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch,CURLOPT_FOLLOWLOCATION, false);
curl_setopt($ch,CURLOPT_TIMEOUT, 30);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT, 30);
if($method == 'POST')
{
$fieldsData = http_build_query($fields);
curl_setopt($ch,CURLOPT_POSTFIELDS, $fieldsData);
}
curl_setopt($ch, CURLOPT_HEADER, 1);
$first_response = curl_exec($ch);
$info = curl_getinfo($ch);
preg_match('/WWW-Authenticate: Digest (.*)/', $first_response, $matches);
if(!empty($matches))
{
$auth_header = $matches[1];
$auth_header_array = explode(',', $auth_header);
$parsed = array();
foreach ($auth_header_array as $pair)
{
$vals = explode('=', $pair);
$parsed[trim($vals[0])] = trim($vals[1], '" ');
}
$response_realm = (isset($parsed['realm'])) ? $parsed['realm'] : "";
$response_nonce = (isset($parsed['nonce'])) ? $parsed['nonce'] : "";
$response_opaque = (isset($parsed['opaque'])) ? $parsed['opaque'] : "";
$authenticate1 = md5($username.":".$response_realm.":".$password);
$authenticate2 = md5($method.":".$url);
$authenticate_response = md5($authenticate1.":".$response_nonce.":".$authenticate2);
$request = sprintf('Authorization: Digest username="%s", realm="%s", nonce="%s", opaque="%s", uri="%s", response="%s"',
$username, $response_realm, $response_nonce, $response_opaque, $url, $authenticate_response);
$request_header = array($request);
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL, $url);
curl_setopt($ch,CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch,CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch,CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch,CURLOPT_FOLLOWLOCATION, false);
curl_setopt($ch,CURLOPT_TIMEOUT, 30);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT, 30);
if($method == 'POST')
{
$fieldsData = http_build_query($fields);
curl_setopt($ch,CURLOPT_POSTFIELDS, $fieldsData);
}
curl_setopt($ch, CURLOPT_HTTPHEADER, $request_header);
$result['response'] = curl_exec($ch);
$result['info'] = curl_getinfo ($ch);
$result['info']['errno'] = curl_errno($ch);
$result['info']['errmsg'] = curl_error($ch);
}
/*
I am getting this as response
Array
(
[response] =>
HTTP Status 404 -
type Status report
message
description The requested resource () is not available.
[info] => Array
(
[url] => http://apiurl/postmethodname
[content_type] => text/html;charset=ISO-8859-1
[http_code] => 404
[header_size] => 361
[request_size] => 551
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 0.109
[namelookup_time] => 0
[connect_time] => 0.063
[pretransfer_time] => 0.063
[size_upload] => 114
[size_download] => 956
[speed_download] => 8770
[speed_upload] => 1045
[download_content_length] => 956
[upload_content_length] => 114
[starttransfer_time] => 0.109
[redirect_time] => 0
[redirect_url] =>
[primary_ip] => XXX.X.XX.XX
[certinfo] => Array
(
)
[primary_port] => 80
[local_ip] => XX.XX.XX.XXX
[local_port] => 58850
[errno] => 0
[errmsg] =>
)
)
*/
The response says 404, meaning the URL was not found, but the URL is correct.
cURL Information 7.40.0
Thanks in advance.
A: Instead of making that first cURL request, you can use get_headers($url) to obtain the WWW-Authenticate challenge.
Also add Content-Type: application/json to the request header:
add the line $request_header[] = 'Content-Type:application/json'; after $request_header = array($request);.
A: Thanks @Sufi. Working code for a POST request (in case someone else needs this):
<?php
$username = 'username';
$password = 'password';
$url = "your url";
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL, $url);
curl_setopt($ch,CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch,CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch,CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch,CURLOPT_FOLLOWLOCATION, false);
curl_setopt($ch,CURLOPT_TIMEOUT, 30);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($ch,CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_HEADER, 1);
$first_response = curl_exec($ch);
$info = curl_getinfo($ch);
preg_match('/WWW-Authenticate: Digest (.*)/', $first_response, $matches);
if(!empty($matches))
{
$auth_header = $matches[1];
$auth_header_array = explode(',', $auth_header);
$parsed = array();
foreach ($auth_header_array as $pair)
{
$vals = explode('=', $pair);
$parsed[trim($vals[0])] = trim($vals[1], '" ');
}
$response_realm = (isset($parsed['realm'])) ? $parsed['realm'] : "";
$response_nonce = (isset($parsed['nonce'])) ? $parsed['nonce'] : "";
$response_opaque = (isset($parsed['opaque'])) ? $parsed['opaque'] : "";
$authenticate1 = md5($username.":".$response_realm.":".$password);
$authenticate2 = md5("POST:".$url);
$authenticate_response = md5($authenticate1.":".$response_nonce.":".$authenticate2);
$request = sprintf('Authorization: Digest username="%s", realm="%s", nonce="%s", opaque="%s", uri="%s", response="%s"',
$username, $response_realm, $response_nonce, $response_opaque, $url, $authenticate_response);
$request_header = array($request);
$request_header[] = 'Content-Type:application/json';
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL, $url);
curl_setopt($ch,CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch,CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch,CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch,CURLOPT_FOLLOWLOCATION, false);
curl_setopt($ch,CURLOPT_TIMEOUT, 30);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($ch,CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_HTTPHEADER, $request_header);
$result['response'] = curl_exec($ch);
$result['info'] = curl_getinfo ($ch);
$result['info']['errno'] = curl_errno($ch);
$result['info']['errmsg'] = curl_error($ch);
var_dump($result);
}
?>
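For reference, the digest response that the PHP above computes can be sketched in Python (this is RFC 2617 digest without the qop extension; the variable roles mirror the PHP):

```python
import hashlib

def md5_hex(s):
    # Hex-encoded MD5, matching PHP's md5()
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(username, password, realm, nonce, method, uri):
    # RFC 2617 (no qop): response = MD5(HA1 : nonce : HA2)
    ha1 = md5_hex(f"{username}:{realm}:{password}")  # like $authenticate1
    ha2 = md5_hex(f"{method}:{uri}")                 # like $authenticate2
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

print(digest_response("username", "password", "realm", "abc123", "POST", "/api/method"))
```

One thing worth checking in the PHP code: it passes the full URL as the uri directive, but the digest uri is supposed to be the request-URI (the path), and some servers reject a mismatch.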
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31892143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: use extern in C I am trying to use a global variable in my project, but it doesn't work. I declared my variable like this:
In file kernel.h :
extern DBConnection * conn;
And, in my other file, called kernel.c, i do this:
#include "kernel.h"
int get_info() {
conn = (DBConnection *) malloc(sizeof(DBConnection));
}
But at compile time I received this error:
/home/fastway/VFirewall-Monitor/kernel.c:19: undefined reference to `conn'
What am I doing wrong?
A: You provided a declaration but you also need a definition. Add this to your kernel.c, at the top after the include:
DBConnection * conn;
A: extern DBConnection * conn;
declares the variable without defining it.
You need to add a file scope definition in one source file, for example in kernel.c:
DBConnection * conn;
A: extern doesn't allocate memory for the variable it qualifies, it only allows it to be used. You'll need a definition of conn, i.e. a declaration without the extern. You could add this to your kernel.c:
DBConnection * conn;
A: Try This :
#include "kernel.h"
DBConnection * conn;
int get_info() {
conn = (DBConnection *) malloc(sizeof(DBConnection));
}
You need to add a file-scope definition of conn in kernel.c.
A: The extern keyword simply states that there is a variable somewhere in the final linked binary that has that name and type, it doesn't define said variable. The error message you're getting is about not being able to find the definition that the extern is referring to.
Define the variable in your .c file, outside of any function definition.
A: To use a variable in many files, define it outside a function in one file, then declare it with extern in the other files. Example:
file 1 :
int externalvar;
main(void)
{
//stuff ...
}
file 2 :
extern int externalvar;
void someFunc(void)
{
externalvar = 5;
//stuff ...
}
A: Actually this is not a compiler error but a linker error. For compilation, the extern declaration is enough: the compiler only needs to know the object's type and that it is defined in some other file. As long as the .c file being compiled has seen the declaration, compilation succeeds, so the code snippet below also compiles without error.
extern DBConnection * conn;
int get_info()
{
conn = (DBConnection *) malloc(sizeof(DBConnection));
}
But at link time, when kernel.o (the object file) is linked, the linker looks for the actual definition of this object; if it cannot find the object defined in any other object file, a linker error is thrown.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19146116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: parallelize a time-consuming Python loop I have a nested for loop that is time-consuming. I think parallelization can make it faster, but I do not know how to use it. This is the for loop in my code:
for itr2 in range(K):
tmp_cl=clusters[itr2+1]
if len(tmp_cl)>1:
BD_cent=np.zeros((len(tmp_cl),1))
for itr3 in range(len(tmp_cl)):
sumv=0
for itr5 in range(len(tmp_cl)):
condition = psnr_bitrate == tmp_cl[itr3,:]
where_result = np.where(condition)
tidx1 = where_result[0]
condition = psnr_bitrate == tmp_cl[itr5,:]
where_result = np.where(condition)
tidx2 = where_result[0]
BD_R=bd_rate(rate[tidx1[0],:],tmp_cl[itr3,:],rate[tidx2[0],:],tmp_cl[itr5,:])
BD_R=(BD_R-min_BDR)/(max_BDR-min_BDR)
BD_Q=bd_PSNR(rate[tidx1[0],:],tmp_cl[itr3,:],rate[tidx2[0],:],tmp_cl[itr5,:])
BD_Q=(BD_Q-min_BDQ)/(max_BDQ-min_BDQ)
value=(wr*BD_R+wq*BD_Q)
if value!=np.NINF:
sumv+=(value)
else:
sumv+=1000#for the curve which has not overlap with others
BD_cent[itr3]=sumv/len(tmp_cl)
new_centroid_index=np.argmin(BD_cent)
centroid[itr2]=clusters[itr2+1][new_centroid_index]
I checked some other examples of parallelization on Stack Overflow, but as a beginner I could not understand the solution. Do I have to define a function for the code inside the for loops? These loops compute the distance between every two points in K=6 different clusters, but for parallelization I do not know how to use asyncio or joblib. Is it possible for these loops or not?
A: CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor.
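Building on that advice, here is a minimal sketch of how the candidate-scoring work could be fanned out with concurrent.futures.ProcessPoolExecutor. The pair_cost function below is a hypothetical stand-in for the expensive bd_rate/bd_PSNR computation, not the real one; only the structure (score every candidate independently, then take the argmin) mirrors the loop above:

```python
from concurrent.futures import ProcessPoolExecutor

def pair_cost(a, b):
    # Stand-in for the expensive bd_rate/bd_PSNR computation on two curves.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean_cost(args):
    # Average cost of candidate row `idx` against every row of its cluster.
    idx, cluster = args
    return sum(pair_cost(cluster[idx], row) for row in cluster) / len(cluster)

def best_centroid(cluster, workers=4):
    # Each candidate's score is independent of the others, so the tasks can
    # be distributed across processes (the cluster is copied to each task for
    # simplicity; shared memory would avoid that in a real implementation).
    tasks = [(i, cluster) for i in range(len(cluster))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        costs = list(pool.map(mean_cost, tasks))
    return min(range(len(costs)), key=costs.__getitem__)

if __name__ == "__main__":
    cluster = [[1.0, 2.0], [1.1, 2.1], [5.0, 9.0], [1.05, 2.05]]
    print("best centroid index:", best_centroid(cluster, workers=2))
```

Since the candidates are embarrassingly parallel, joblib would work the same way: Parallel(n_jobs=4)(delayed(mean_cost)(t) for t in tasks). asyncio would not help here, because the work is CPU-bound rather than I/O-bound.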
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73941643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Compare() of two variables returns 'Equal' but identical() returns FALSE Suppose you declared the following function:
compound <- function(x,i,t) {
x*(1+i)^t
}
What are the fundamentals of following results:
compare(compound(100, 0.1, 2),121) => 'Equal'
and
identical(compound(100, 0.1, 2),121) => FALSE
In the package testthat, expect_identical() checks the second condition and reports a failure in this case, although the value prints as 121. What is a better alternative to verify the above function compound()?
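Although the question is about R, the behaviour comes from IEEE-754 double arithmetic, which any language reproduces. A quick sketch in Python (the function mirrors compound() above):

```python
import math

def compound(x, i, t):
    # Same arithmetic as the R function: x * (1 + i) ^ t
    return x * (1 + i) ** t

value = compound(100, 0.1, 2)

# 0.1 has no exact binary representation, so the result is not exactly 121.
print(repr(value))               # prints something like 121.00000000000001
print(value == 121)              # False: what identical()/expect_identical() test
print(math.isclose(value, 121))  # True: tolerance-based, like compare()
```

As for testthat: expect_equal() compares numbers with a tolerance (it is built on all.equal()), so it is the usual way to verify a function like compound(), while expect_identical() demands exact, bit-for-bit equality.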
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56876265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why the whitespace vanished when I added text in inline-block element? This is the code when I add text inside an inline-block div.
.block{
display:inline-block;
}
.box{
height:300px;
width:300px;
background:red;
}
.blue{
background:blue;
}
<div class="box block">
<p>hello</p>
</div>
<div class="box blue"></div>
This is the code without any text inside the inline-block div.
.block{
display:inline-block;
}
.box{
height:300px;
width:300px;
background:red;
}
.blue{
background:blue;
}
<div class="box block">
</div>
<div class="box blue"></div>
We know that a whitespace gap appears below inline elements by default. The same applies to inline-block elements. But why does that whitespace automatically vanish when I put some text inside the inline-block element?
I was thinking of using the vertical-align: bottom property, but the text did the work by itself.
What's the concept behind this?
A: It's related to the baseline of the element. An empty inline-block will have its bottom edge as the baseline:
The baseline of an 'inline-block' is the baseline of its last line box in the normal flow, unless it has either no in-flow line boxes or if its 'overflow' property has a computed value other than 'visible', in which case the baseline is the bottom margin edge. ref
So in both cases the baseline is not the same.
To better understand consider some text next to your inline-block element to see how baseline is creating that space:
.box {
height: 100px;
width: 100px;
display: inline-block;
background: red;
}
<div style="border:2px solid">
<div class="box block">
<p>hello</p>
</div>
pq jy
</div>
<div style="border:2px solid">
<div class="box block">
</div>
pq jy
</div>
I think the above is self-explanatory. I have used descender letters to show why we need that bottom space.
If you change the alignment, which is the common solution provided in most of the question, you will no more have that space:
.box {
height: 100px;
width: 100px;
display: inline-block;
background: red;
vertical-align:top; /* or bottom or middle */
}
<div style="border:2px solid">
<div class="box block">
<p>hello</p>
</div>
pq jy
</div>
<div style="border:2px solid">
<div class="box block">
</div>
pq jy
</div>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69422449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Should front end restrict list size or back end? So I've got a list of objects which is processed in the back end of my application and then passed to the front end for display purposes. However, I only want to show 4 items from the list at most. My question is: should I clip the list in the back end, or should I pass the whole list to the front end and let it clip the list?
Thanks.
A: Of course, on the back end, to avoid unneeded network transmission. You need to do it only once for all your front ends.
I don't think there are practical use-cases where it's more suitable to do it on the front end: even if the back end doesn't trim and thus saves its CPU a bit, much more processing is done by the back end's underlying OS to transmit the redundant bytes to clients.
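As a sketch of what back-end trimming could look like (a hypothetical handler shape; the limit of 4 comes from the question):

```python
def build_list_payload(items, limit=4):
    # Trim on the back end so only what the UI will display crosses the
    # network, but include the total so the front end can still show
    # "N more items" without receiving them all.
    return {"items": items[:limit], "total": len(items)}

print(build_list_payload(list(range(10))))
```

Returning the total alongside the trimmed list is a common compromise: the front end still knows how many items exist without the back end shipping them all.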
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13124269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to find interface in assemblies? I am trying to create a plugin system, but I get an "Object reference not set to an instance of an object" error.
I have loaded each DLL as an assembly, then used Assembly.GetTypes(). I loop through the types to find my IPlugin interface. I then use Activator.CreateInstance() to create a new IPlugin instance and add it to my Plugins list.
foreach(string dll in dllFiles) {
AssemblyName an = AssemblyName.GetAssemblyName(dll);
Assembly assembly = Assembly.Load(an);
Type[] types = assembly.GetTypes();
foreach(Type type in types) {
if(type.GetInterface(PluginType.FullName) != null) {
IPlugin plugin = Activator.CreateInstance(type) as IPlugin;
Plugins.Add(plugin);
Console.WriteLine("Added a plugin!");
}
}
}
I expect to be able to loop through my Plugins list and call the method "Do()" on each, but I get an "object reference not set to an instance of an object" error.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55931819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Issues with Select Option loop with jQuery I am using jQuery to loop through many select option HTML elements. I can get it to work one at a time, but I'm having difficulty accessing the data when it is stored in a jQuery variable.
For example I have these 2 Select html elements
var lList = $(".p1_lSelect");
var rList = $(".p1_rSelect");
I can get my desired output with the following
$(".p1_lSelect option:selected").each(function() {
selected = $(this).text(); //selected == "p1_lSelect_Data"
console.log(selected);
});
$(".p1_rSelect option:selected").each(function() {
selected = $(this).text(); //selected == "p1_rSelect_Data"
console.log(selected);
});
//Console logs .p1_lSelect_Data and .p1_rSelect_Data as expected
What I would like is something similar to the following so that I can change which list I'm feeding into the loop
function foo(){
var lList = $(".p1_lSelect");
var rList = $(".p1_rSelect");
bar(lList);
bar(rList);
}
function bar(list){
$("list option:selected").each(function() {
selected = $(this).text();
console.log(selected);
});
//Console log does not print
}
How would I go about doing this?
A: You can pass the name of the list and append it to the selector as follows:
function foo(){
bar("p1_lSelect");
bar("p1_rSelect");
}
function bar(list){
$(`.${list} option:selected`).each(function() {
selected = $(this).text();
console.log(selected);
});
}
A: Please try the following code, based on your structure:
function foo(){
var lList = $(".p1_lSelect");
var rList = $(".p1_rSelect");
bar(lList);
bar(rList);
}
function bar(list){
$(list).find("option:selected").each(function() {
selected = $(this).text();
console.log(selected);
});
}
A: Use the find() method:
function bar(list) {
list.find("option:selected").each(function() {
console.log($(this).text());
});
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62439017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Symfony get url parameters with spaces and underscore I'm new to Symfony, and I'm trying to filter my table with a search field. I'm using KnpPaginatorBundle to paginate and sort my table, and I've created a form to filter my request (method GET).
It generally works, but when I use spaces or underscores in my search input it doesn't work. I assume it has something to do with the GET method and the way the text is encoded, but I don't know what.
Here's my code :
View :
<div class="row">
<div class="well row">
<form action="" method="get">
<div class="col-md-2">
<label for="famille">Famille d'articles</label>
<select name="famille">
<option value="0">Toutes</option>
{% for famille in listFamilles %}
<option value="{{ famille.id }}" {% if data.famille is defined %} {% if famille.id == data.famille %} selected {% endif %} {% endif %}>{{ famille.nom }}</option>
{% endfor %}
</select>
</div>
<div class="col-md-4">
<input type="checkbox" name="rds" {% if data.rds == 1 %} checked {% endif %}>
<label for="rds" style="margin-left:5px">Montrer les articles en rupture de stock</label>
</div>
<div class="col-md-4">
<label for="recherche">Recherche</label>
<input name="recherche" style="width:100%" type="text" placeholder="Recherche" {% if data.recherche is defined %} value="{{ data.recherche }}" {% endif %}>
</div>
<div class="col-md-2" style="text-align:center">
<button type="submit" class="btn btn-primary">Rechercher</button>
</div>
</form>
</div>
<div class="well row">
<table class="table table-bordered table-striped" style="width: 100%" cellspacing="0">
<thead>
<tr>
<th>{{ knp_pagination_sortable(listArticles, 'Référence client', 'a.ref_article') }}</th>
<th>{{ knp_pagination_sortable(listArticles, 'Référence interne', 'a.ref_logistique') }}</th>
<th>{{ knp_pagination_sortable(listArticles, 'Famille', 'f.nom') }}</th>
<th>{{ knp_pagination_sortable(listArticles, 'Libellé', 'a.libelle') }}</th>
<th>{{ knp_pagination_sortable(listArticles, 'Alerte', 'a.stock_alerte') }}</th>
<th>{{ knp_pagination_sortable(listArticles, 'Stock', 'a.stock_actuel') }}</th>
</tr>
</thead>
<tbody id="bodyListeArticles">
{% for article in listArticles %}
<tr>
<td><a href="{{ path('gr_bo_modif_article', {'article_id': article.id}) }}">{{ article.refArticle }}</a></td>
<td>{{ article.refLogistique }}</td>
<td>{{ article.famille.nom }}</td>
<td>{{ article.libelle }}</td>
<td>{{ article.StockAlerte }}</td>
<td>{{ article.StockActuel }}</td>
</tr>
{% endfor %}
</tbody>
</table>
<div class="navigation text-center">
{{ knp_pagination_render(listArticles) }}
</div>
</div>
</div>
Controller :
public function listeAction(Request $request) {
if ($this->get('security.authorization_checker')->isGranted('ROLE_OPERATEUR')) {
$session = $request->getSession();
if ($session->get('client_id')) {
$clientId = $session->get('client_id');
} else {
$request->getSession()->getFlashBag()->add('info', 'Vous devez sélectionner un client pour accéder à la liste de ses articles.');
return $this->redirectToRoute('gr_bo_liste_clients');
}
} elseif ($this->get('security.authorization_checker')->isGranted('ROLE_SUPERCOLLABORATEUR') || ($this->get('security.authorization_checker')->isGranted('ROLE_COLLABORATEUR') && $this->getUser()->getListeArticles())) {
$clientId = $this->getUser()->getClient()->getId();
} else {
$request->getSession()->getFlashBag()->add('info', 'Vous n\'avez pas les permissions requises pour accéder à cette page.');
return $this->redirectToRoute('gr_bo_liste_commandes');
}
$em = $this->getDoctrine()->getManager();
$data = [];
$data['clientId'] = $clientId;
if ($request->query->getAlnum('recherche')) {
$data['recherche'] = $request->query->getAlnum('recherche');
}
if ($request->query->getAlnum('famille') && $request->query->getAlnum('famille') != "0") {
$data['famille'] = $request->query->getAlnum('famille');
}
if ($request->query->getAlNum('rds') == "on" || ($request->query->getAlnum('rds') == "" && $request->query->getAlnum('famille') == "" && $request->query->getAlnum('recherche') == "")) {
$data['rds'] = 1;
} else {
$data['rds'] = 0;
}
$listArticles = $em->getRepository('GRBackOfficeBundle:Article')->getQueryArticles($data);
/**
* @var $paginator \Knp\Component\Pager\Paginator
*/
$paginator = $this->get('knp_paginator');
$result = $paginator->paginate(
$listArticles, $request->query->getInt('page', 1), $request->query->getInt('limit', 5)
);
$listFamilles = $em->getRepository('GRBackOfficeBundle:Famille')->findAll();
return $this->render('GRBackOfficeBundle:Article:liste_articles.html.twig', array(
'listArticles' => $result,
'listFamilles' => $listFamilles,
'data' => $data
));
}
Repository :
public function getQueryArticles($data) {
$query = $this->createQueryBuilder('a')
->leftJoin('a.images', 'i')
->addSelect('i')
->leftJoin('a.type_stockage', 't')
->addSelect('t')
->leftJoin('a.famille', 'f')
->addSelect('f');
if (array_key_exists('famille', $data)) {
$query->andWhere('f.id = :famille')
->setParameter('famille', $data['famille']);
}
if (array_key_exists('rds', $data)) {
if ($data['rds'] == 0) {
$query->andWhere('a.stock_actuel > 0');
}
}
if (array_key_exists('recherche', $data)) {
$query->andWhere('a.ref_article LIKE :recherche OR a.ref_logistique LIKE :recherche OR a.libelle LIKE :recherche')
->setParameter('recherche', '%' . $data['recherche'] . '%');
}
$query->leftJoin('a.sousfamille', 's')
->addSelect('s')
->leftJoin('a.client', 'c')
->addSelect('c')
->andWhere('c.id = :client')
->setParameter('client', $data['clientId'])
->orderBy('a.ref_article', 'ASC')
->getQuery();
return $query;
}
When I use a space or an underscore in my "recherche" search filter, my table appears empty, and the spaces or underscores seem to be deleted: if I try "John Doe" or "John_Doe", it returns the results for "JohnDoe", which is empty.
If someone has an idea of how I can proceed, it would be appreciated!
A: You can use urlencode on your data.recherche, but there is also a more natural way to do this in Twig (the url_encode filter). The likely culprit, though, is $request->query->getAlnum('recherche'): getAlnum() keeps only the alphabetic characters and digits of the value, which is exactly why "John Doe" and "John_Doe" both become "JohnDoe". Use $request->query->get('recherche') instead.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44546499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Recycler view list item image onclick I am working on Xamarin Android apps. I use a RecyclerView to display an image in each row. Each item's image has a link to which I need to redirect. How can I implement this onclick action for each image in the RecyclerView?
Please help me in this regard.
Thanks
A: When creating a recycler view, you need to create a RecyclerView adapter which (among other things) implements methods for creating and binding a viewholder to the item in the recycler view. Somewhere in your code (oftentimes within this recycler view adapter class), you need to define the viewholder that you will use for your recyclerview items. This is where you should assign the onClickListener to your imageView.
Here is an example of a viewholder definition that I think may help you:
public class YourViewHolder extends RecyclerView.ViewHolder {
protected ImageView yourImage;
public YourViewHolder(View v) {
super(v);
final View theView = v;
yourImage = (ImageView) v.findViewById(R.id.yourImage);
yourImage.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
// image clicked... do something
}
});
}
}
Let me know if you need more information on how to set up the recycler view adapter class (I have assumed that you already have done this, but I could be mistaken).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30893457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Changing protobuf field type from double to float I have a proto message I'm using in a service to store data in various places.
My message is like this:
message Matrix {
double width = 1;
double height = 2;
repeated double entries = 3;
}
My team has decided that the Matrix message is too large, and changing the types to float seems like an easy way to achieve a payload size reduction.
But when I change the proto definition to use float instead of double here, and try to read old data (in a Python reader), it looks corrupted.
One option I can think of is to add a new float option for each field:
message Matrix {
oneof r_oneof {
double width_d = 1;
float width_f = 4;
}
oneof c_oneof {
double height_d = 2;
float height_f = 5;
}
oneof e_oneof {
repeated double entries_d = 3;
repeated float entries_f = 6;
}
}
Then my deserializing code can check whether each oneof field is the double or float field. This works, but feels like a clunky design pattern.
Is there another way to provide backwards-compatibility with old data in this example?
A: I think you have the right idea. You will want to keep the old fields together with their field numbers unchanged as long as there is data stored in the old format. One of the great things about protocol buffers is that unset fields are essentially free, so you can add as many new fields as you want to facilitate the migration.
What I would do is add a new set of float fields and rename the double fields while preserving their field numbers:
message Matrix {
// Values in these fields should be transitioned to floats
double deprecated_width = 1;
double deprecated_height = 2;
repeated double deprecated_entries = 3;
float width = 4;
float height = 5;
repeated float entries = 6;
}
Whenever you read a Matrix from persistent storage move any values from the deprecated fields to the non-deprecated fields and write back the result. This should facilitate incremental migration to floats.
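The read-migrate-write step above can be sketched in Python. This is only an illustration: plain dicts stand in for the generated protobuf message (field presence modeled as key existence); with real generated classes you would check `HasField()` / `len()` on the message instead.

```python
# Sketch of the migration step. The dict keys mirror the field names in the
# transitional Matrix message; this is NOT real protobuf API, just the logic.

def migrate_matrix(matrix):
    """Move values from the deprecated double fields to the new float fields."""
    renames = [("deprecated_width", "width"),
               ("deprecated_height", "height"),
               ("deprecated_entries", "entries")]
    for old, new in renames:
        # Only migrate if the old field is set and the new one isn't,
        # so already-migrated messages pass through unchanged.
        if old in matrix and new not in matrix:
            matrix[new] = matrix.pop(old)
    return matrix

old_msg = {"deprecated_width": 3.0, "deprecated_height": 2.0,
           "deprecated_entries": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]}
migrated = migrate_matrix(old_msg)
# migrated now holds only the new field names, ready to be written back
```

Running this once per read (and writing the result back) lets old and new readers coexist during the transition.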
I will mention one more thing that you probably already know: protocol buffers don't care about the fields names. Only field numbers matter for serialization and deserialization. This means fields can be renamed freely as long as the code that manipulates them is likewise updated.
At some point in the future when the migration has been completed remove the migration code and delete the deprecated fields but reserve their field numbers:
message Matrix {
reserved 1, 2, 3;
float width = 4;
float height = 5;
repeated float entries = 6;
}
This ensures any stray messages in the old format blow up on deserialization instead of cause data corruption.
Hope this helps!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61648365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: If an array is not sorted, binarySearch returns a random index; what is the logic behind the returned index? If I have an array that looks like this:
int[] arr = {6, 12, 3, 9, 8, 25, 10};
Why does this return -2:
Arrays.binarySearch(arr, 8);
I understand that binarySearch only works if the array is sorted. My question is what determines the returned index?
A: As @assylias mentioned in the comments, this behaviour is covered by the documentation for binarySearch; quoting from it:
Returns:
index of the search key, if it is contained in the array within the specified range; otherwise, (-(insertion point) - 1). The insertion point is defined as the point at which the key would be inserted into the array: the index of the first element in the range greater than the key, or toIndex if all elements in the range are less than the specified key. Note that this guarantees that the return value will be >= 0 if and only if the key is found.
So basically this is what happens in your attempt to search unsorted array:
{6, 12, 3, 9, 8, 25, 10}
*it takes the middle element 9 and compares it to your searched element 8; since 8 is lower, it takes the lower half {6, 12, 3}
*in the second step it compares 12 to 8, and since 8 is again lower, it takes the lower half {6}
*6 doesn't equal 8, and since it's the last element, the search didn't find what you wanted
*it returns (-(insertion point) - 1), where the insertion point is
the index of the first element in the range greater than the key
*in your case that index is 1, since the first element greater than 8 is 12 and its index is 1
*when you put that into the equation, it returns (-1 - 1), which equals -2
Hope I have answered your question.
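The walk-through above can be verified directly; this self-contained snippet reproduces the -2 result on the unsorted array and shows the search succeeding once the array is sorted:

```java
import java.util.Arrays;

public class UnsortedBinarySearch {
    public static void main(String[] args) {
        int[] arr = {6, 12, 3, 9, 8, 25, 10};

        // 8 is present in the array, but because the array is not sorted
        // the probes land on 9, then 12, then 6, and never see it.
        int result = Arrays.binarySearch(arr, 8);
        System.out.println(result); // prints -2: (-(insertion point) - 1) with insertion point 1

        // After sorting {3, 6, 8, 9, 10, 12, 25}, the same call finds 8 at index 2.
        Arrays.sort(arr);
        System.out.println(Arrays.binarySearch(arr, 8)); // prints 2
    }
}
```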
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47222267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Windows DLL linker errors with template class
template <typename SenderType__, typename... Args__>
class CORE_API Event
{
public:
typedef typename std::function<void(const SenderType__*, Args__ ...)> EventHandler;
Event& operator+=(const EventHandler& toSubscribe)
{
Subscribe(toSubscribe);
return *this;
}
void operator()(const SenderType__* sender, Args__ ... args) const
{
Invoke(sender, args...);
}
void Subscribe(const EventHandler& toSubscribe)
{
std::lock_guard<std::mutex> locker(m_callbackMutex);
m_callbacks.push_back(toSubscribe);
}
void Clear()
{
std::lock_guard<std::mutex> locker(m_callbackMutex);
m_callbacks.clear();
}
void Invoke(const SenderType__* sender, Args__ ... args) const
{
std::lock_guard<std::mutex> locker(m_callbackMutex);
for (auto iter = m_callbacks.begin(); iter != m_callbacks.end(); ++iter)
{
(*iter)(sender, args...);
}
}
private:
std::vector<EventHandler> m_callbacks;
mutable std::mutex m_callbackMutex;
};
template class CORE_API Event<std::string, std::string>;
In a consumer of the DLL....
TEST(EventTest, TestEventFiresAndPassesArgs)
{
Event<std::string, std::string> event;
event += &TestFunction;
event += &TestFunction2;
std::string sender = "TestEventFiresAndPassesArgs";
std::string arg = "boo!";
event.Invoke(&sender, arg);
ASSERT_EQ(sender, testEventFiresAndPassesArgsSenderName);
ASSERT_EQ(arg, testEventFiresAndPassesArgsTestArg);
ASSERT_EQ(sender + "Function2", testEventFiresAndPassesArgsSenderName2);
ASSERT_EQ(arg + "Function2", testEventFiresAndPassesArgsTestArg2);
}
Then the linker output is:
EventTest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: class CompanyName::Utils::Event<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > > & __cdecl CompanyName::Utils::Event<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > >::operator+=(class std::function<void __cdecl(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const *,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)> const &)" (__imp_??Y?$Event@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V12@@Utils@CompanyName@@QEAAAEAV012@AEBV?$function@$$A6AXPEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V12@@Z@std@@@Z) referenced in function "private: virtual void __cdecl EventTest_TestEventFiresAndPassesArgs_Test::TestBody(void)" (?TestBody@EventTest_TestEventFiresAndPassesArgs_Test@@EEAAXXZ)
2>EventTest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: void __cdecl CompanyName::Utils::Event<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > >::Invoke(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const *,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)const " (__imp_?Invoke@?$Event@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V12@@Utils@CompanyName@@QEBAXPEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V45@@Z) referenced in function "private: virtual void __cdecl EventTest_TestEventFiresAndPassesArgs_Test::TestBody(void)" (?TestBody@EventTest_TestEventFiresAndPassesArgs_Test@@EEAAXXZ)
2>EventTest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl CompanyName::Utils::Event<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > >::Event<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > >(void)" (__imp_??0?$Event@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V12@@Utils@CompanyName@@QEAA@XZ) referenced in function "private: virtual void __cdecl EventTest_TestEventFiresAndPassesArgs_Test::TestBody(void)" (?TestBody@EventTest_TestEventFiresAndPassesArgs_Test@@EEAAXXZ)
2>EventTest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl CompanyName::Utils::Event<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > >::~Event<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > >(void)" (__imp_??1?$Event@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@V12@@Utils@CompanyName@@QEAA@XZ) referenced in function "private: virtual void __cdecl EventTest_TestEventFiresAndPassesArgs_Test::TestBody(void)" (?TestBody@EventTest_TestEventFiresAndPassesArgs_Test@@EEAAXXZ)
2>..\runCoreUnitTests.exe : fatal error LNK1120: 4 unresolved externals
Any ideas what I'm doing wrong here?
A: You need functions to be exported using __declspec(dllexport) when creating a DLL.
You can use such functions from another DLL by declaring those functions using __declspec(dllimport).
These work great for regular functions.
However, for class templates and function templates, the templates are instantiated on an as-needed basis. They don't get exported to the DLL in which they are defined. Hence, they can't get imported from the DLL either. For this reason, you don't use __declspec(dllexport) or __declspec(dllimport) with class templates and function templates.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28181020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: PDO with k/v pairs and unknown numbers? I have a question about PDO for talking to databases,
the example I am familiar with is:
$data = array('Cathy', '9 Dark and Twisty Road', 'Cardiff');
$STH = $DBH->prepare("INSERT INTO folks (name, addr, city) values (?, ?, ?)");
$STH->execute($data);
But, if we had a k/v pair, would it be the same? ala
$data = array('one'=>'Cathy', 'two'=>'9 Dark and Twisty Road', 'three'=>'Cardiff');
$STH = $DBH->prepare("INSERT INTO folks (?, ?, ?) values (?, ?, ?)");
$STH->execute($data);
And what if we had a none ascertainable amount of values?
$data = range(0, rand(1, 99));
$STH = $DBH->prepare("INSERT INTO folks (/* how would you put stuff here? */) values (/* how would you put stuff here? */)");
$STH->execute($data);
It leaves me more confused than not....
Could someone show me how the above two would work with k/v pairs and unknown counts?
Much thanks
A: You don't have to use ? as the binding placeholder; you can use :names and an associative array. You can then pass the associative array as the binding list, and PDO will know to match the keys of the array with the :binding_names. For example, with an associative array, if the keys match the fields in the database, you can do something like this:
$data = array('one'=>'Cathy', 'two'=>'9 Dark and Twisty Road', 'three'=>'Cardiff');
$fields = array_keys($data);
$field_str = '`'.implode('`,`',$fields).'`';
$bind_vals = ':'.implode(',:',$fields);
$sql = 'INSERT INTO tablename ('.$field_str.') VALUES ('.$bind_vals.')';
$sth = $dbh->prepare($sql);
$sth->execute($data);
That will handle an unknown number of name/value pairs. There is no getting around not knowing what field names to use for the insert. This example would also work with ? as the binding placeholder. So instead of names, you could just repeat the ?:
$bind_vals = rtrim(str_repeat('?,', count($data)), ','); // trim the trailing comma
$sql = 'INSERT INTO tablename ('.$field_str.') VALUES ('.$bind_vals.')';
A: Prepared statements only work with literals, not with identifiers. So you need to construct the SQL statement with the identifiers filled in (and properly escaped).
Properly escaping literals is tricky, though. PDO doesn't provide a method for doing literal-escaping, and MySQL's method of escaping literals (using `) is completely different from every other database and from the ANSI SQL standard. See this question for more detail and for workarounds.
If we simplify the issue of escaping the identifiers, you can use a solution like this:
// assuming mysql semantics
function escape_sql_identifier($ident) {
if (preg_match('/[\x00`\\\\]/', $ident)) {
throw new UnexpectedValueException("SQL identifier cannot have backticks, nulls, or backslashes: {$ident}");
}
return '`'.$ident.'`';
}
// returns a prepared statement and the positional parameter values
function prepareinsert(PDO $pdo, $table, $assoc) {
$params = array_values($assoc);
$literals = array_map('escape_sql_identifier', array_keys($assoc));
$sqltmpl = "INSERT INTO %s (%s) VALUES (%s)";
$sql = sprintf($sqltmpl, escape_sql_identifier($table), implode(',', $literals), implode(',', array_fill(0, count($literals), '?')));
return array($pdo->prepare($sql), $params);
}
function prefixkeys($arr) {
$prefixed = array();
foreach ($arr as $k => $v) {
$prefixed[':'.$k] = $v;
}
return $prefixed;
}
// returns a prepared statement with named parameters
// this is less safe because the parameter names (keys) may be vulnerable to sql injection
// In both circumstances make sure you do not trust column names given through user input!
function prepareinsert_named(PDO $pdo, $table, $assoc) {
$params = prefixkeys($assoc);
$literals = array_map('escape_sql_identifier', array_keys($assoc));
$sqltmpl = "INSERT INTO %s (%s) VALUES (%s)";
$sql = sprintf($sqltmpl, escape_sql_identifier($table), implode(',',$literals), implode(', ', array_keys($params)));
return array($pdo->prepare($sql), $params);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9318188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Issue with firebase query My query:
let query = recentRef.queryOrderedByChild(FRECENT_GROUPID).queryEqualToValue(group_id)
query.observeSingleEventOfType(.Value, withBlock: { snapshot in
And database structure is :
And my query looks like:
(/Recent {
ep = fb343534ca520c70fe35b0a316ea8e4c;
i = groupId;
sp = fb343534ca520c70fe35b0a316ea8e4c;
})
and getting Snap (Recent) <null> when I print(snapshot).
It's strange that it was working fine, but now it has suddenly stopped working.
EDIT:
Complete JSON:
{
"Message" : {
"fb343534ca520c70fe35b0a316ea8e4c" : {
"-Kp0jed1EZ5BLllL5_cm" : {
"createdAt" : 1.500046597341153E9,
"groupId" : "fb343534ca520c70fe35b0a316ea8e4c",
"objectId" : "-Kp0jed1EZ5BLllL5_cl",
"senderId" : "lI6SRppSboScWo5xVjcfLL82Ogr2",
"senderName" : "Test1 Test1",
"status" : "",
"text" : "hi",
"type" : "text",
"updatedAt" : 1.50004659734136E9
}
}
},
"Recent" : {
"-Kp0jecwejhzQbbm62CW" : {
"counter" : 0,
"createdAt" : 1.500046600967624E9,
"description" : "Test1 Test1",
"groupId" : "fb343534ca520c70fe35b0a316ea8e4c",
"lastMessage" : "hi",
"members" : [ "lI6SRppSboScWo5xVjcfLL82Ogr2", "fnRvHFpaoDhXqM1se7NoTSiWZIZ2" ],
"objectId" : "-Kp0jecwejhzQbbm62CV",
"picture" : "",
"type" : "private",
"updatedAt" : 1.500046600967647E9,
"userId" : "fnRvHFpaoDhXqM1se7NoTSiWZIZ2"
},
"-Kp0jed-FU1PXt1iPr29" : {
"counter" : 0,
"createdAt" : 1.500046600971885E9,
"description" : "Srikant Root",
"groupId" : "fb343534ca520c70fe35b0a316ea8e4c",
"lastMessage" : "hi",
"members" : [ "lI6SRppSboScWo5xVjcfLL82Ogr2", "fnRvHFpaoDhXqM1se7NoTSiWZIZ2" ],
"objectId" : "-Kp0jed-FU1PXt1iPr28",
"picture" : "https://s3.amazonaws.com/top500golfdev/uploads/profile/[email protected]/profilepicture.jpg",
"type" : "private",
"updatedAt" : 1.500046600971896E9,
"userId" : "lI6SRppSboScWo5xVjcfLL82Ogr2"
}
},
"User" : {
"fnRvHFpaoDhXqM1se7NoTSiWZIZ2" : {
"createdAt" : 1.500045753102713E9,
"email" : "[email protected]",
"firstname" : "Srikant",
"fullname" : "Srikant Yadav",
"handle" : "Srikant",
"lastname" : "Yadav",
"networkImage" : "https://s3.amazonaws.com/top500golfdev/uploads/profile/[email protected]/profilepicture.jpg",
"objectId" : "fnRvHFpaoDhXqM1se7NoTSiWZIZ2",
"online" : false,
"updatedAt" : 1.500045753102731E9
},
"lI6SRppSboScWo5xVjcfLL82Ogr2" : {
"createdAt" : 1.500045791892967E9,
"email" : "[email protected]",
"firstname" : "Test1",
"fullname" : "Test1 Test1",
"handle" : "test1",
"lastname" : "Test1",
"networkImage" : "",
"objectId" : "lI6SRppSboScWo5xVjcfLL82Ogr2",
"online" : false,
"updatedAt" : 1.500046571456235E9
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45105653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to make a shortcut for rebuild and relaunch debug session with eclipse cdt I use Eclipse Neon with the CDT plugin for embedded development (mainly Cortex-M targets).
Most projects are makefile-based. For remote debugging, I use the GDB Hardware Debugging plugin and a Segger J-Link (running JLinkGDBServer).
Each time I make changes in source code and then want to run them e.g. under debugger control, I have to:
*
*stop the current debug session
*build the project
*lauch a new debug session
I want to have these 3 steps executed by a single button click (or keyboard shortcut).
Does anybody know how to achieve this?
I think NXP's LPCXpresso (also Eclipse-based) has such a button.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40709276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Find the file having lowest timestamp I have a requirement which needs to be done using only JavaScript, and I would appreciate it if someone can help here.
*
*We have 10 files coming into a folder of format MMddyyyyHHmmss (ex. 07192013114030) - MonthDayYearHourMinuteSecond
*The file gets dropped from an external system once every day
*When the 11th file comes in I need to find the file that was dropped on the first and delete it so that the total count of the files should always be 10 (latest 10 files)
Sample example
07192013114030
07202013114030
07212013114030
07222013114030
07232013114030
07242013114030
07252013114030
07262013114030
07272013114030
07282013114030
When the 11th file comes in on 07292013114030, I want to find the file 07192013114030 using JavaScript.
I can provide the incoming file names in any format, ex. MM/dd/yyyy/HHmmss or MM_dd_yyyy_HH_mm_ss if that helps to do this using JS
A: Since you can get the dates in any format, get them in YYYYMMDDHHmmss format. Then get those timestamps in an array. There's not enough information about your system in your question to explain how to do this but just loop through the files pulling out the timestamps and pushing them into an array.
Basically you should have an array like this when you're done:
dates = ['20130719114030',
'20130720114030',
'20130721114030',
'20130722114030',
'20130723114030',
'20130724114030',
'20130725114030',
'20130726114030',
'20130727114030',
'20130728114030'];
Once done, simply sort the array:
dates.sort();
Dates will be in alphanumeric order, which also happens to be chronological order because of our date format. The oldest date will be the first one in the array, so
dates[0] // '20130719114030'
Again, there's not enough information about your system to explain how to delete the file, but perhaps you could loop through the files again to find a matching timestamp, then delete the file.
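Putting the whole first answer together — reordering MMddyyyyHHmmss into a sortable form, sorting, and picking the oldest — a minimal sketch (the file names here are just the sample timestamps from the question):

```javascript
// Convert MMddyyyyHHmmss to the sortable yyyyMMddHHmmss form.
function toSortable(stamp) {
  return stamp.slice(4, 8) + stamp.slice(0, 4) + stamp.slice(8);
}

// Given the current file stamps plus the newly arrived one,
// return the stamp of the file that should be deleted (the oldest).
function oldestStamp(stamps) {
  const sorted = stamps.slice().sort(
    (a, b) => toSortable(a).localeCompare(toSortable(b)));
  return sorted[0];
}

const stamps = [
  '07192013114030', '07202013114030', '07212013114030',
  '07222013114030', '07232013114030', '07242013114030',
  '07252013114030', '07262013114030', '07272013114030',
  '07282013114030', '07292013114030' // 11th file just arrived
];

console.log(oldestStamp(stamps)); // -> 07192013114030
```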
A: I'm not experienced with Javascript, but my logical progression would be:
Out of the 11 files, find the lowest year
If the same
Out of the 11 files, find the lowest month
[...]
all the way down to second
A: Convert them all to date objects and then compare them. You would only have to do two passes through the list to find the smallest date (one to convert and one to compare)... instead of extracting each snippet and going through the list multiple times.
http://www.w3schools.com/js/js_obj_date.asp
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17753314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Streaming from IP camera using chromecast I am trying to stream from an IP camera through my Android device.
I have edited the URL in the code posted on GitHub. It streams in the Chrome browser when I run it from the camera's IP address.
But when I try to cast it, it shows me player status: IDLE and a blank screen on the TV.
Other videos play fine, but for this stream I am facing a problem. Any help??
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20489251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Making qgis2web search engine to search multiple columns rather than just one column? I have created a map using Leaflet and qgis2web. The search engine works fine; however, it looks through only one column of the attribute table. My goal is to have it search through four different columns (Country, division, subdivision, language) rather than just one (language). For that, I want to create a custom data array from layer features when loading the layer, which will contain four array items for each feature. The new array will then be used as the source data (using the sourceData option) for the search engine. The only issue remaining is that if the user searches for a language that is spoken at various locations in Africa, how can I have the map show all the locations that speak the language searched for?
Code sample that I will be using to create a custom data array:
var data = [
{"loc":[41.575330,13.102411], "title":"aquamarine"},
{"loc":[41.575730,13.002411], "title":"black"},
{"loc":[41.807149,13.162994], "title":"blue"},
{"loc":[41.507149,13.172994], "title":"chocolate"},
{"loc":[41.847149,14.132994], "title":"coral"},
{"loc":[41.219190,13.062145], "title":"cyan"},
{"loc":[41.344190,13.242145], "title":"darkblue"},
{"loc":[41.679190,13.122145], "title":"darkred"},
{"loc":[41.329190,13.192145], "title":"darkgray"},
{"loc":[41.379290,13.122545], "title":"dodgerblue"},
{"loc":[41.409190,13.362145], "title":"gray"},
{"loc":[41.794008,12.583884], "title":"green"},
{"loc":[41.805008,12.982884], "title":"greenyellow"},
{"loc":[41.536175,13.273590], "title":"red"},
{"loc":[41.516175,13.373590], "title":"rosybrown"},
{"loc":[41.506175,13.173590], "title":"royalblue"},
{"loc":[41.836175,13.673590], "title":"salmon"},
{"loc":[41.796175,13.570590], "title":"seagreen"},
{"loc":[41.436175,13.573590], "title":"seashell"},
{"loc":[41.336175,13.973590], "title":"silver"},
{"loc":[41.236175,13.273590], "title":"skyblue"},
{"loc":[41.546175,13.473590], "title":"yellow"},
{"loc":[41.239190,13.032145], "title":"white"}
];
var map = new L.Map('map', {zoom: 9, center: new L.latLng(data[0].loc) });
map.addLayer(new L.TileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png')); //base layer
function localData(text, callResponse)
{
//here can use custom criteria or merge data from multiple layers
callResponse(data);
return { //called to stop previous requests on map move
abort: function() {
console.log('aborted request:'+ text);
}
};
}
Below is the JavaScript code showing the fields I export:
<tr>\
<th scope="row">Language</th>\
<td>' + (!!feature.properties['q2wHide_lang2'] ? autolinker.link(feature.properties['q2wHide_lang2'].toLocaleString()) : '') + '</td>\
</tr>\
<tr>\
<th scope="row">State</th>\
<td>' + (!!feature.properties['State'] ? autolinker.link(feature.properties['State'].toLocaleString()) : '') + '</td>\
</tr>\
<tr>\
<th scope="row">Local Gove</th>\
<td>' + (!!feature.properties['Local Gove'] ? autolinker.link(feature.properties['Local Gove'].toLocaleString()) : '') + '</td>\
</tr>\
<tr>\
<th scope="row">Country</th>\
<td>' + (!!feature.properties['Country'] ? autolinker.link(feature.properties['Country'].toLocaleString()) : '') + '</td>\
</tr>\
</table>';
Below is the code for the search box:
setBounds();
map.addControl(new L.Control.Search({
layer: layer_Eth_Region_2013_Project_Merg_1,
initial: false,
hideMarkerOnCollapse: true,
propertyName: 'q2wHide_lang2'}));
document.getElementsByClassName('search-button')[0].className +=
' fa fa-binoculars';
var mapDiv = document.getElementById('map');
I have checked online and have found similar issues, but not exactly the same. The solution for those was concatenation; however, I am not sure if this is the correct solution for me. I appreciate any help.
A: Have you thought about creating a fifth column by adding the data of the four columns into it?
Just use the QGIS field calculator to do it in a single command.
Then do the qgis2web setup in this fifth column.
This way when your users start typing the language they will see the different locations in the drop-down menu that they can select for zooming.
Greetings.
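Building on that suggestion, a sketch of how the custom sourceData callback could concatenate the four columns and return every matching feature, so a language spoken in several places yields several results. The row structure and field names below are illustrative assumptions, not the real layer API; the real code would iterate the layer's features and read feature.properties.

```javascript
// Hypothetical rows pulled from the layer's attribute table on load.
var rows = [
  {loc: [9.03, 38.74], country: 'Ethiopia', state: 'Oromia',
   local: 'Ada\'a', lang: 'Afaan Oromoo'},
  {loc: [11.59, 37.39], country: 'Ethiopia', state: 'Amhara',
   local: 'Bahir Dar', lang: 'Amharic'},
  {loc: [9.31, 42.12], country: 'Ethiopia', state: 'Harari',
   local: 'Harar', lang: 'Afaan Oromoo'}
];

// One searchable record per feature, with all four columns in the title.
var data = rows.map(function (r) {
  return {loc: r.loc,
          title: [r.lang, r.country, r.state, r.local].join(' | ')};
});

// sourceData callback: return *every* record matching the typed text.
function localData(text, callResponse) {
  var needle = text.toLowerCase();
  callResponse(data.filter(function (d) {
    return d.title.toLowerCase().indexOf(needle) !== -1;
  }));
  return {abort: function () {}};
}
```

Passing localData as the sourceData option (instead of layer/propertyName) would then surface all matching locations in the drop-down.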
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70713803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|