cee6dde89e83b40ced7f303b5b60e5b40a510bd6 | Stackoverflow Stackexchange
Q: VueJS - Function to display image I'm new to VueJS and having a hard time with this situation.
display.vue
<template>
<img :src="getLogo(logo)" />
</template>
<script>
export default {
methods: {
getLogo(logo){
return '../assets/'+logo;
}
}
}
</script>
I got a 404 error with that code.
But when I tried not using the getLogo() function, the image displayed.
<template>
<img src="../assets/logo.svg" />
</template>
The image structure is:
src/assets/logo1.svg
webpack.base.conf.js
test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
loader: 'url-loader',
options: {
limit: 10000,
name: utils.assetsPath('img/[name].[hash:7].[ext]')
}
Can anybody here help me display the image using the getLogo function? Thank you very much!
A: I reckon when using v-bind:src it should be as follows
<img v-bind:src="'../assets/logo.svg'">
<!-- or shorthand -->
<img :src="'../assets/logo.svg'">
Notice the quotes ' ' around the path.
While using <img src="../assets/logo.svg" /> you are not binding a string expression, which is why it works.
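The answer above does not cover the webpack side of the 404. A common approach with webpack's url-loader (an addition of mine, not from the original thread; names are taken from the question) is to return a module request via require(), so the loader resolves the hashed asset URL at build time rather than the raw runtime path:

```javascript
// Sketch of a getLogo that works with webpack's url-loader / file-loader.
// require() turns the path into a build-time module request, so webpack
// rewrites it to the emitted asset URL (e.g. img/logo1.xxxxxxx.svg).
export default {
  data() {
    return { logo: "logo1.svg" };
  },
  methods: {
    getLogo(logo) {
      // The directory prefix must be static so webpack can enumerate the files.
      return require("../assets/" + logo);
    },
  },
};
```

With `<img :src="getLogo(logo)" />` in the template, webpack bundles every matching file under ../assets at build time. This is a framework-convention fragment, not a standalone script.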
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:905289",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668344"
} |
a45001898ef71bde664b20eee4c8fab193be2c56 | Stackoverflow Stackexchange
Q: Amazon SNS - Sending SMS, delivery status I am trying to send messages using Amazon SNS but it's showing atypical behavior: it delivers messages to some numbers but not reliably to others.
import boto3
client = boto3.client('sns', .....)
client.publish(PhoneNumber=phone_number, Message=message)
I am using the publish API to directly send SMS for OTPs without using Topics. Is there a way I can get the delivery status for them? Would region/DND affect the delivery. This is for Indian numbers. I am using Transactional messages for the same.
A: On the console there is 'Text Messaging Preferences'. One needs to set up an IAM role for CloudWatch Logs access and a report storage bucket. Once they are set up, Amazon will start storing delivery logs in CloudWatch, show delivery rates and status in the console UI, and keep detailed logs in CloudWatch.
There are many reasons for which delivery may fail, and the logs list all of them. DND can also affect delivery even if the message type is Transactional.
There is also an API available to query these logs.
| stackoverflow | {
"language": "en",
"length": 184,
"provenance": "stackexchange_0000F.jsonl.gz:905320",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668417"
} |
4a27f779e3fa2b0a5d9e733abb68ed25f68d5b29 | Stackoverflow Stackexchange
Q: Java 8 Stream: Extract map from list of objects and sum specific key I have a list of Customer objects
class Customer {
//Other properties
Map<String, Object> additionalData;
}
When I got a List<Customer> customers, I want to sum a key called transactions in additionalData. How can I do that with java 8 streams?
A: You can map each element of your list to the value of the "transactions" key in the corresponding map, and then sum these values:
int sum = customers.stream()
    .map(c -> (Integer) c.additionalData.get("transactions"))
    .filter(Objects::nonNull)
    .mapToInt(Integer::intValue)
    .sum();
For example:
Customer c1 = new Customer();
c1.additionalData = new LinkedHashMap<>();
c1.additionalData.put("transactions", 14);
Customer c2 = new Customer();
c2.additionalData = new LinkedHashMap<>();
c2.additionalData.put("transactions", 7);
Customer c3 = new Customer();
c3.additionalData = new LinkedHashMap<>();
List<Customer> customers = Arrays.asList(c1, c2, c3);
int sum = customers.stream()
    .map(c -> (Integer) c.additionalData.get("transactions"))
    .filter(Objects::nonNull)
    .mapToInt(Integer::intValue)
    .sum();
System.out.println("sum is " + sum);
Output:
sum is 21
This code is assuming the additionalData member is never null, and the value of the "transactions" key (if it exists in the Map) is always an Integer. If these assumptions are incorrect, the code should be adjusted accordingly.
| stackoverflow | {
"language": "en",
"length": 189,
"provenance": "stackexchange_0000F.jsonl.gz:905326",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668426"
} |
b45fa5ccfeb94bd45ca92ebcf39e08d7b20ba0c6 | Stackoverflow Stackexchange
Q: Angular 2: get x, y coordinate from selected point on image In my Angular 2 app I would like to have a canvas with an image. When I click on the canvas, I want to get the x, y coordinates of the clicked point on the image.
Thanks for any advice!
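No answer was posted for this question. The usual approach is to read the click event's client coordinates and subtract the canvas's bounding rectangle, scaling if the canvas is resized by CSS. A framework-agnostic sketch; the helper name clientToCanvas is mine, and in Angular 2 you would call it from a `(click)` handler with `$event`:

```javascript
// Convert a mouse click's client coordinates to canvas-local pixel coordinates.
// `rect` is the canvas's bounding client rect (getBoundingClientRect());
// `width`/`height` are the canvas's intrinsic pixel dimensions (canvas.width/height).
function clientToCanvas(clientX, clientY, rect, width, height) {
  const scaleX = width / rect.width;   // compensate for CSS scaling
  const scaleY = height / rect.height;
  return {
    x: (clientX - rect.left) * scaleX,
    y: (clientY - rect.top) * scaleY,
  };
}

// Hypothetical usage in a component's click handler:
// onCanvasClick(event) {
//   const canvas = event.target;
//   const rect = canvas.getBoundingClientRect();
//   const { x, y } = clientToCanvas(event.clientX, event.clientY, rect,
//                                   canvas.width, canvas.height);
// }
```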
| stackoverflow | {
"language": "en",
"length": 50,
"provenance": "stackexchange_0000F.jsonl.gz:905334",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668451"
} |
56c570a7417916a89f1118fed274340cf4b73e21 | Stackoverflow Stackexchange
Q: ORDER BY performance when executing a query in Oracle I have been working on a Spring application that connects to an Oracle database.
After three years, the number of records in our tables has grown so much that query response times are bad and our customer is dissatisfied.
So, I searched and found this URL on Oracle performance tuning.
Factor 22 at this URL says NOT to use ORDER BY in a query when the response time is important; indeed, if I omit ORDER BY from my query, the response time drops by more than half.
But I cannot omit ORDER BY from my query, because the customer needs the sorting.
How do I fix my problem, so that I keep the ordering and still get an acceptable response time?
A: One of the best solutions, which Markus Winand mentions in his blog, is using a pipelined ORDER BY; it is detailed in this link.
A:
Factor 22 of this URL says NOT to use ORDER BY in the query when the
response time is important; omitting ORDER BY from my query drops the
response time by more than half.
On the Internet, you should always question every advice you get.
In order for the ORDER BY clause to be fast, you need to use the right index. Make sure the sorting is done using a database index, therefore avoiding a full-table scan or an explicit sort operation. When in doubt, just search for SQL performance issues on Markus Winand's Use the Index Luke site or, even better, read his SQL Performance Explained book.
So, you should make sure that the Buffer Pool is properly configured and you have enough RAM to hold the data working set and indexes as well.
If you really have huge data (e.g. billions of records), then you should use partitioning. Otherwise, for tens or hundreds of millions of records, you could just scale vertically using more RAM.
Also, make sure you use compact data types. For example, don't store an Enum ordinal value into a 32-bit integer value since a single byte would probably be more than enough to store all Enum values you might use.
| stackoverflow | {
"language": "en",
"length": 374,
"provenance": "stackexchange_0000F.jsonl.gz:905338",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668461"
} |
fd7d4e5de185bb13afbbdffbf5aa4e745c55813b | Stackoverflow Stackexchange
Q: What is "Keys" in Certificates, Identifiers & Profiles section of Apple Dev center Today I noticed a new section named "Keys." I don't know which services use this. Anybody have any idea? Or am I a beta user seeing this?
A: I noticed it also quite recently and used it right away for push notification configuration of a 3rd party service. In my case I created a key and then added it to the Visual Studio Mobile Center push notification configuration site along with the BundleID and the TeamID.
In addition to this, you still have to configure Push Notifications on your App Identifier in the corresponding section.
It looks like the keys here are a new and more convenient way of passing push authentication info, like the PEM files before.
But I can't actually find any official docs on this topic from Apple yet :(.
A: Found this info https://developer.clevertap.com/docs/how-to-create-an-ios-apns-auth-key
If you'd like to send push notifications to your iOS users, you will need to upload either an APNs Push Certificate, or an APNs Auth Key. We recommend that you create and upload an APNs Auth Key for the following reasons:
*No need to re-generate the push certificate every year
*One auth key can be used for all your apps – this avoids the complication of maintaining different certificates
When sending push notifications using an APNs Auth Key, we require the following information about your app:
*Auth Key file
*Team ID
*Your app's bundle ID
This sounds like a convenient way to send APNs notifications, since there is no need to renew annually; but one key is used for all your apps, and the .p8 file can only be downloaded once after it is generated. I'm not sure whether APNs will still work if I delete the key afterward.
A: Keys are used for a variety of Apple services. Here's a screenshot:
| stackoverflow | {
"language": "en",
"length": 309,
"provenance": "stackexchange_0000F.jsonl.gz:905340",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668464"
} |
2335a04e7ad26f7018c6542de5eeb9f5b99dd54c | Stackoverflow Stackexchange
Q: Django OAuth Toolkit: could not import ext.rest_framework I am trying to set up OAuth2 authentication system for my Django REST API (using DjangoRestFramework and Django-Oauth-Toolkit).
I wrote everything according to the official documentation, but the system gives error "could not import ext.rest_framework"
Here is my settings.py file:
OAUTH2_PROVIDER = {
# this is the list of available scopes
'SCOPES': {'read': 'Read scope', 'write': 'Write scope', 'groups': 'Access to your groups'}
}
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': [
'oauth2_provider.ext.rest_framework.OAuth2Authentication',
],
'DEFAULT_PERMISSION_CLASSES': ('rest_framework.permissions.IsAuthenticated',),
'PAGE_SIZE': 10
}
Thanks!
A: OK, I checked the source code for oauth2_provider. Apparently they changed the structure, but did not update the tutorial on their website. So, oauth2_provider.ext package does not exist anymore, you should use oauth2_provider.contrib instead. That is, the following code works fine:
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': (
'oauth2_provider.contrib.rest_framework.OAuth2Authentication',
),
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'PAGE_SIZE': 10
}
| stackoverflow | {
"language": "en",
"length": 141,
"provenance": "stackexchange_0000F.jsonl.gz:905350",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668493"
} |
13e4ab390d60445d13b77a1adaf062130e05b44a | Stackoverflow Stackexchange
Q: Columns index has to be unique for fixed format error in pandas time series I got ValueError: Columns index has to be unique for fixed format when I tried to save a dataframe that I formed by combining multiple time series dataframes. This is a sample of what I have done:
df1=pd.concat([d1,d2,d3,d4],axis=1)
df2=pd.DataFrame(d5)
df3=pd.concat([d6,d7,d8],axis=1)
main_df=pd.concat([df1,df2,df3],axis=1)
main_df=main_df.dropna()
main_df.head()
Till here it works fine, but when I try to save the data into an HDF5 file it gives me this error: Columns index has to be unique for fixed format
fi=pd.read_hdf("data.h5")
fi['df']=main_df #this line cause the error
A: You can use cumcount to count duplicates, replace 0 if necessary, and add the result to the original column names:
df = pd.DataFrame([[1,2,3,4]], columns = list('abbc'))
print (df)
a b b c
0 1 2 3 4
s = df.columns.to_series()
df.columns = s + s.groupby(s).cumcount().astype(str).replace({'0':''})
print (df)
a b b1 c
0 1 2 3 4
| stackoverflow | {
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:905352",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668502"
} |
c602202b5127e2b132808c67d89e1ca3deb4c0d5 | Stackoverflow Stackexchange
Q: MBProgressHud View Customization Not Work In Swift 3 And Not Showing? I am using the following code but the progress HUD is not showing, so please help with that. A simple HUD shows fine but the customised one does not.
let loadingHUD = MBProgressHUD()
loadingHUD.mode = MBProgressHUDModeCustomView
loadingHUD.labelText = nil
loadingHUD.detailsLabelText = nil
let customView = UIView.init(frame: CGRect(x: 0, y: 0, width: 80, height: 80))
let gifmanager = SwiftyGifManager(memoryLimit:20)
let gif = UIImage(gifName: "miniballs1.gif")
let imageview = UIImageView(gifImage: gif, manager: gifmanager)
imageview.frame = CGRect(x: 0 , y: 0, width: customView.frame.width, height: customView.frame.height)
customView.addSubview(imageview)
customView.bringSubview(toFront: imageview)
loadingHUD.customView = customView
loadingHUD.customView.bringSubview(toFront: customView)
loadingHUD.show(true)
A: Try this library: ACProgressHud-Swift.
Make whatever view you want in a xib file, customise it, and use it.
For Show
ACProgressHUD.shared.showHUD(withStatus: "Your Message Name")
For hide
ACProgressHUD.shared.hideHUD()
A: I Solved This Problem In Swift 3
var hud = MBProgressHUD()
hud.backgroundColor = UIColor.clear
// Set an image view with a checkmark.
let gifmanager = SwiftyGifManager(memoryLimit:20)
let gif = UIImage(gifName: "eclipse.gif")
let imageview = UIImageView(gifImage: gif, manager: gifmanager)
hud.labelText = NSLocalizedString(string, comment: "")
hud.labelColor = UIColor.red
imageview.frame = CGRect(x: 0 , y: 0, width: 80 , height: 80)
let views = UIView.init(frame: CGRect(x: 0 , y: 0, width: 80 , height: 80))
views.backgroundColor = UIColor.black
views.addSubview(imageview)
hud.customView = views
hud.customView.backgroundColor = UIColor.clear
hud.dimBackground = true
hud.show(true)
A: import UIKit
class Loader: NSObject {
class func setup()
{
MBProgressHUD.setDefaultMaskType(.Black)
MBProgressHUD.setBackgroundColor(UIColor(white: 0, alpha: 0.7))
MBProgressHUD.setForegroundColor(UIColor(white: 1, alpha: 1))
}
class func Show(message:String = "loading..."){
MBProgressHUD.showWithStatus(message)
UIApplication.sharedApplication().networkActivityIndicatorVisible = true
}
class func Hide(){
MBProgressHUD.dismiss()
UIApplication.sharedApplication().networkActivityIndicatorVisible = false
}
}
Make a class wrapping MBProgressHUD and use it as Loader.Show(), or you can customise it as per your requirement.
| stackoverflow | {
"language": "en",
"length": 268,
"provenance": "stackexchange_0000F.jsonl.gz:905360",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668532"
} |
e8ceec3445aa9f13a93e252fd9253c7de9d8afb9 | Stackoverflow Stackexchange
Q: How to setup routes with Express and NGINX? I'm trying to configure an Express server with NGINX as a reverse proxy. NGINX to serve static files, and Express for the dynamic content.
Problem : The normal root link works (website.com) , but when I navigate to (website.com/api), I get a 404 from NGINX
This is my server.js :
var express = require("express");
var app = express();
var server = app.listen(process.env.PORT || 5000);
console.log("Server Running");
app.get("/",function(req,res){res.send("HOME PAGE")});
app.get("/api", function(req, res) {
res.send('API PAGE');
});
This is my NGINX Config file:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name website.com www.website.com;
location ~ ^/(assets/|images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico) {
root /home/foobar/public; #this is where my static files reside
access_log off;
expires 24h;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
try_files $uri $uri/ =404;
}
}
A: Try to remove this line:
try_files $uri $uri/ =404;
With this directive nginx tries to serve a static file (or directory), and returns 404 if there is no such file.
A: In my case, the location block inside server is this:
location / {
proxy_pass http://localhost:3000;
rewrite ^/(.*)$ /$1 break; # --> this helps
}
| stackoverflow | {
"language": "en",
"length": 206,
"provenance": "stackexchange_0000F.jsonl.gz:905366",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668559"
} |
b9b0913e364b740a011f52d6367bf92601a7ceae | Stackoverflow Stackexchange
Q: Unable to understand javascript syntax I was going through the code of react-jsonschema-form. I came across the following line, which I am unable to comprehend.
var formData = (0, _utils.getDefaultFormState)(schema, props.formData, definitions);
How is the content within the first parentheses a function to which the arguments (schema, props.formData, etc.) are passed?
A: I guess the answer to this question is that in the first expression, (0, _utils.getDefaultFormState), the comma operator evaluates its operands and returns the last one.
So the comma operator works through its operands from left to right and returns the rightmost evaluated operand of the expression.
With a function as the last operand, that returned value is the function itself, which can then be called.
// sample from MDN.
function myFunc() {
var x = 0;
return (x += 1, x); // the same as return ++x;
}
As I mentioned in the comment:
The first parentheses evaluate to a function belonging to the _utils object, and that function is then called with the 3 arguments.
A: In that context the first parenthesis pair wraps a sequence of expressions whose value is the value of the last expression. Then:
(0,_utils.getDefaultFormState)
returns the function object _utils.getDefaultFormState, which is then called with the following arguments.
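A runnable illustration of the answers' point, plus an added observation of mine (not from the original answers): the comma expression detaches the method from its object, so `this` is not bound when the function is called, which is why transpilers emit this pattern for imported functions:

```javascript
// The comma operator evaluates its operands left to right and yields the last.
const last = (0, 1, 2);
console.log(last); // 2

const utils = {
  name: "utils",
  getDefaultFormState: function () {
    "use strict"; // so a bare call leaves `this` undefined
    return this && this.name;
  },
};

// Called as a method, `this` is bound to `utils`:
console.log(utils.getDefaultFormState()); // "utils"

// The comma expression yields the bare function, so `this` is lost:
console.log((0, utils.getDefaultFormState)()); // undefined
```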
| stackoverflow | {
"language": "en",
"length": 198,
"provenance": "stackexchange_0000F.jsonl.gz:905434",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668777"
} |
dd000aca6b60cd992b71bf39b50d38462c2e125e | Stackoverflow Stackexchange
Q: passkit - pass not refreshing via push notification. (Server response was malformed) I have successfully integrated PassKit but I am facing an issue when updating the pass via push notification.
The notification is successfully received by Wallet, but the pass is not updated.
These are the steps I have implemented:
*
*Used XMPP server (PHP).
*Used this library in PHP to generate the pass (https://github.com/tschoffelen/PHP-PKPass). I have replaced the pass type and team identifier.
*Swift code to add the pass to Wallet. The pass is successfully created and added to Wallet.
*To refresh the pass, "pull to refresh" works in the Wallet app.
Here is my full code (PHP + SWIFT):
https://www.dropbox.com/sh/e3wk8bwqgv8zs3f/AACZa_x7vD8KByl6WdrrgNExa?dl=0
Here are some logs:
*
*While creating pass: https://www.dropbox.com/s/j14zfudy9mbllmp/add%20card.png?dl=0
*Add card on wallet : -https://www.dropbox.com/s/yek9rf8js45p8xb/add%20card%20to%20wallet.png?dl=0
*Pull to refresh two request from wallet app https://www.dropbox.com/s/k1sfpxfbqlwwu6q/pull%20to%20refresh%20request%201.png?dl=0
https://www.dropbox.com/s/9jall5xmxpx806o/pull%20to%20refresh%20request%202.png?dl=0
*when push notification received, two request from wallet:
https://www.dropbox.com/s/sg3v9sgyu0w1e3n/push%20request%201.png?dl=0
https://www.dropbox.com/s/xd2us3771f2xn3s/push%20request%202.png?dl=0
The error is Server response was malformed...
Please help!
Thanks!
A: I have solved this myself.
The problem was in the 'Last-Modified' header date format.
It should be header('Last-Modified: ' . gmdate('D, d M Y H:i:s T')); in the PKPass.php file.
Now the push notification is received and my pass is updated automatically.
Thanks!
| stackoverflow | {
"language": "en",
"length": 188,
"provenance": "stackexchange_0000F.jsonl.gz:905479",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44668916"
} |
Stackoverflow Stackexchange
Q: Serverless offline custom error from API gateway with Lambda Is there any way to return a custom error object and status code from API Gateway? I am getting a 200 status.
var response = {
status: 400,
errors: [
{
code: "226",
message: "Password and password confirmation do not match."
}
]
}
context.done(JSON.stringify(response));
A: If you want to respond with an error, you have to use the success callback with an error response construct.
If you are using the context.fail() callback, AWS will assume that the Lambda technically failed and respond with the default mapping present in your API Gateway.
Sample error response:
'use strict';
module.exports.hello = (event, context, callback) => {
const response = {
statusCode: 400,
body: JSON.stringify({
errors:[{
code: "226",
message:"Password confirmation do not match"
}]
}),
};
context.done(null, response);
};
A: Another way is to change the API Gateway response mapping. I manage my API response by adding this code to s-templates.json:
"ValidationError": {
    "selectionPattern": ".*ValidationError.*",
    "statusCode": "400",
    "responseParameters": {
        "method.response.header.Access-Control-Allow-Headers": "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,Cache-Control,Token'",
        "method.response.header.Access-Control-Allow-Methods": "'*'",
        "method.response.header.Access-Control-Allow-Origin": "'*'"
    },
    "responseModels": {},
    "responseTemplates": {
        "application/json": "$input.path('$.errorMessage')"
    }
}
This way I return my response with a 400 status code and a valid message.
module.exports.handler = function(event, context) {
    const validationError = {
        statusCode: "ValidationError",
        section: "Login",
        validationType: "emailConfirmation",
        message: "Email is not confirmed",
        otherInfo: "Would you like to get the email again?",
        client: "web|ios|android"
    };
    // Fail the Lambda so errorMessage contains "ValidationError" and matches the selectionPattern.
    context.fail(JSON.stringify(validationError));
};
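The selectionPattern in the mapping above is a regex that API Gateway applies to the failed Lambda's errorMessage string. A minimal Node sketch of that matching step (the object fields are just illustrative):

```javascript
// API Gateway matches each integration response's selectionPattern
// (a regex) against the errorMessage of a failed Lambda invocation.
const validationError = {
  statusCode: "ValidationError",
  message: "Email is not confirmed",
};
// Failing with the stringified object makes errorMessage contain "ValidationError"...
const errorMessage = JSON.stringify(validationError);
// ...so the ".*ValidationError.*" pattern selects the 400 response.
const pattern = /.*ValidationError.*/;
console.log(pattern.test(errorMessage)); // true
```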
Stackoverflow Stackexchange
Q: Declaring empty collections in Kotlin How can I declare an empty collection for mapOf, listOf and setOf in Kotlin?
I have declared below variables:
val occupations = mapOf(Pair("Malcolm", "Captain"), Pair("Kaylee", "Mechanic"))
val shoppingList = listOf("catfish", "water", "tulips", "blue paint")
val favoriteGenres = setOf("Rock", "Classical", "Hip hop")
I want to check whether these collections are empty or not.
A: You can create empty collections like this:
val occupations = mapOf<String, String>()
val shoppingList = listOf<String>()
val favoriteGenres = setOf<String>()
A:
I want to check whether these collections are empty or not.
Why can't you simply use the isEmpty() method?
print(occupations.isEmpty()) // >>> false
print(shoppingList.isEmpty()) // >>> false
print(favoriteGenres.isEmpty()) // >>> false
Anyway, if you really want to declare an empty collection, you can do it like this:
val emptyList = listOf<String>()
val emptySet = setOf<String>()
val emptyMap = mapOf<String, String>()
OR
val emptyList = emptyList<String>()
val emptySet = emptySet<String>()
val emptyMap = emptyMap<String, String>()
Let's take a look under the hood. Method listOf() called with no arguments has the following implementation:
/** Returns an empty read-only list. The returned list is serializable (JVM). */
@kotlin.internal.InlineOnly
public inline fun <T> listOf(): List<T> = emptyList()
It's easy to see that it simply calls another method - emptyList():
/** Returns an empty read-only list. The returned list is serializable (JVM). */
public fun <T> emptyList(): List<T> = EmptyList
that returns an internal object named EmptyList:
internal object EmptyList : List<Nothing>, Serializable, RandomAccess {
// <...>
}
So the summary is that (as @brescia123 said) these methods do exactly the same thing: both of them return an empty immutable List and it's up to you to decide which one to use.
Stackoverflow Stackexchange
Q: No server chosen by ReadPreferenceServerSelector I am using the MongoDB Java async driver, which is newly released. I am writing some simple test code:
MongoClient mongoClient = MongoClients.create("mongodb://192.168.1.162:27017");
MongoDatabase database = mongoClient.getDatabase("mongodb1");
MongoCollection<Document> collection = database.getCollection("t_test");
collection.count(
new SingleResultCallback<Long>() {
@Override
public void onResult(final Long count, final Throwable t) {
System.out.println(count);
}
});
System.out.println("AAAAAAAAAAAAAAAAAAAAAAAAA------end");
However, the callback function is not called; the console output is:
2017-06-21 13:57:58 [ INFO ] Cluster created with settings {hosts=[192.168.1.162:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2017-06-21 13:57:58 [ DEBUG ] Updating cluster description to {type=UNKNOWN, servers=[{address=192.168.1.162:27017, type=UNKNOWN, state=CONNECTING}]
2017-06-21 13:57:58 [ INFO ] No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, all=[ServerDescription{address=192.168.1.162:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
Stackoverflow Stackexchange
Q: installing Oracle JDK 8 on Debian 9 I always installed Oracle JDK 8 on Debian 8 using the following instructions with no problems
echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee /etc/apt/sources.list.d/webupd8team-java.list
echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
apt-get update
apt-get install oracle-java8-installer
But on Debian 9 when I execute the following command
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
I get the following errors
root@debian:/etc/apt/sources.list.d# apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
Executing: /tmp/apt-key-gpghome.OEi3EFigqe/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/tmp/apt-key-gpghome.OEi3EFigqe/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
How can I install Oracle JDK 8 on Debian 9?
A: You can install it manually if you want.
Download JDK 8:
wget --no-cookies --no-check-certificate --header "Cookie:oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz"
Then set the environment variables of your OS:
JAVA_HOME=~/path_Of_Your_JDK_DIR
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
export PATH
I hope that helps you, Arya.
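Note that exports like these only last for the current shell; to persist them you would typically append the same lines to ~/.profile or /etc/profile.d/. A small sketch (the JDK directory name here is hypothetical — substitute your actual extraction path):

```shell
# Hypothetical extraction directory for the downloaded JDK tarball.
JDK_DIR="$HOME/jdk1.8.0_131"
export JAVA_HOME="$JDK_DIR"
export PATH="$PATH:$JAVA_HOME/bin"
# Sanity check: the JDK's bin directory is now on PATH.
echo "JAVA_HOME is $JAVA_HOME"
```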
A: I was able to fix with the following commands:
*
*apt remove gnupg
*apt install --reinstall gnupg2 (possibly not necessary, but doesn't hurt)
*apt install dirmngr
A: It seems that dirmngr is not installed: gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory. You could try to install it via apt-get install dirmngr and retry.
Stackoverflow Stackexchange
Q: Using a variable inside Timer I'm working on a program that prints the time only when a function is called. I'm using a timer for the continuous adding of seconds.
Timer gameTimer = new Timer ();
TimerTask time = new TimerTask() {
int sec = 0;
public void run()
{
sec++;
}
};
gameTimer.scheduleAtFixedRate(time, 1000, 1000);
However, I cannot use the variable sec outside run() so that I can print it. I tried to place sec outside the TimerTask, but of course sec++ would not work there. Any help? Thanks!
A: TimerTask is an abstract class, so an interface cannot extend it; make your own abstract class extending the original one instead.
https://docs.oracle.com/javase/tutorial/java/IandI/nogrow.html
abstract class CustomTimerTask extends TimerTask {
    public abstract int getTicks();
}
CustomTimerTask time = new CustomTimerTask () {
int sec = 0;
@Override
public void run()
{
sec++;
}
@Override
public synchronized int getTicks() {
return sec;
}
};
System.out.println("The time passed is: " + time.getTicks());
Just make sure you make it synchronized because you're working with two threads now.
A: Since only final variables are accessible in an anonymous class, you can use the hack below to achieve what you want.
final int [] result = new int[1]; // Create a final array
TimerTask time = new TimerTask() {
int sec = 0;
public void run()
{
sec++;
result[0] = sec;
}
};
// Now print it whenever you want
System.out.println(result[0]);
This way you are not reassigning the array to a new object, just changing the content inside it.
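An alternative not shown in the answers above: java.util.concurrent.atomic.AtomicInteger gives the same "mutable value behind an effectively final reference" without the array trick, and its reads and writes are thread-safe between the timer thread and the main thread. A sketch (class and field names are just illustrative); here run() is invoked directly so the output is deterministic, where a real program would schedule the task on a Timer:

```java
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

public class TickCounter {
    // Effectively final reference; the AtomicInteger's value can still change,
    // and increments/reads need no explicit synchronization.
    static final AtomicInteger sec = new AtomicInteger(0);

    static final TimerTask task = new TimerTask() {
        @Override
        public void run() {
            sec.incrementAndGet();
        }
    };

    public static void main(String[] args) {
        // A real program would do: new Timer().scheduleAtFixedRate(task, 1000, 1000);
        task.run();
        task.run();
        System.out.println("Seconds elapsed: " + sec.get()); // prints "Seconds elapsed: 2"
    }
}
```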
Stackoverflow Stackexchange
Q: An element disappears from the DOM A fragment of my web output, when seen under Sources in Chrome Development Tools and Debugger in Edge, is:
</div>
<form id="form" style="display:none;">
<textarea class="form-control" id="editor" name="editor"></textarea>
<input class="btn btn-primary" type="submit" value="Submit">
<button class="btn btn-default" onclick="discard();" type="button">Cancel</button>
</form>
<!--**END**-->
However, under Elements in Chrome and DOM Explorer in Edge, the <form> element is missing, as follows:
Though I have some scripts, none of them did anything to the <form> element. What could be causing the anomaly?
A: Check your DOM: a form element cannot be generated nested inside another form.
Maybe there is another form element in a parent tag.
Stackoverflow Stackexchange
Q: openCV's gnustl conflict with dlib's c++_shared in the android NDK I'm trying to integrate both OpenCV and dlib-android in the NDK.
I'm able to get both OpenCV and dlib working in separate projects, but the project breaks when they are both integrated.
This is my gradle config for dlib
android {
compileSdkVersion 25
buildToolsVersion "25.0.3"
defaultConfig {
...
externalNativeBuild {
cmake {
cppFlags "-std=c++11 -frtti -fexceptions"
arguments "-DANDROID_PLATFORM=android-16",
"-DANDROID_TOOLCHAIN=clang",
"-DANDROID_STL=c++_shared",
"-DANDROID_CPP_FEATURES=rtti exceptions"
}
}
sourceSets {
main {
jniLibs.srcDirs = ["src/main/jniLibs/dlib/libs"]
}
}
}
...
When I integrate openCV, I get
undefined reference to 'cv::CascadeClassifier::detectMultiScale'
The solution to which according to this answer, is to have the stl as gnustl_shared
dlib with gnustl_shared gives errors like std::exceptions not found.
How do I go forward and integrate both?
I tried to recompile OpenCV with c++_shared in CMake, but ran into
fatal error: iostream: No such file or directory
#include <iostream>
Stackoverflow Stackexchange
Q: Google maps infowindow autopan: is it possible to customize the distance to the map's top? I have a map with a semi-transparent search bar on top of it. I have also got a bunch of markers with infowindow attached to them.
The problem is that google maps autopan feature obviously doesn't take my search bar into account, therefore if my marker is too close to the top, a part of the infowindow gets covered by the bar.
Is it possible to somehow specify the minimum distance the infowindow needs to be from the map's top?
I was also thinking of limiting the bounds of the map using markers' positions but in my case the markers can also end up under the search bar, so it is not an option.
Any ideas? Thank you for your time!
A: There exists a library called Snazzy Info Window that can be used instead of the usual InfoWindow. Then you can pass an option edgeOffset in pixels to this object and voila, problem solved. Hope this helps someone in the future!
Stackoverflow Stackexchange
Q: How to implement aws ses SendRawEmail with attachment in golang I need to implement Amazon SES SendRawEmail with an attachment in golang.
I tried the following code:
session, err := session.NewSession()
svc := ses.New(session, &aws.Config{Region: aws.String("us-west-2")})
source := aws.String("XXX <[email protected]>")
destinations := []*string{aws.String("xxx <[email protected]>")}
message := ses.RawMessage{ Data: []byte(` From: xxx <[email protected]>\\nTo: xxx <[email protected]>\\nSubject: Test email (contains an attachment)\\nMIME-Version: 1.0\\nContent-type: Multipart/Mixed; boundary=\"NextPart\"\\n\\n--NextPart\\nContent-Type: text/plain\\n\\nThis is the message body.\\n\\n--NextPart\\nContent-Type: text/plain;\\nContent-Disposition: attachment; filename=\"sample.txt\"\\n\\nThis is the text in the attachment.\\n\\n--NextPart--" `)}
input := ses.SendRawEmailInput{Source: source, Destinations: destinations, RawMessage: &message}
output, err := svc.SendRawEmail(&input)
but in the mail I receive, it shows the content which I have given in the message instead of the attachment. Not sure what exactly is wrong?
A: if you're trying to attach a file from bytes:
msg.Attach("report.pdf", gomail.SetCopyFunc(func(w io.Writer) error {
_, err := w.Write(reportData)
return err
}))
A: Refer to AWS example for Sending RAW email with attachment.
Implementation suggestion: to easily compose the email, get it as bytes, and send it to SES as mentioned in the reference example above,
use the library gopkg.in/gomail.v2 to compose your email message with the attachment and then call the WriteTo method.
var emailRaw bytes.Buffer
emailMessage.WriteTo(&emailRaw)
// while create instance of RawMessage
RawMessage: &ses.RawMessage{
Data: emailRaw.Bytes(),
}
Good luck!
EDIT: For the comment.
Compose the email-
msg := gomail.NewMessage()
msg.SetHeader("From", "[email protected]")
msg.SetHeader("To", "[email protected]", "[email protected]")
msg.SetHeader("Subject", "Hello!")
msg.SetBody("text/html", "Hello <b>Bob</b> and <i>Cora</i>!")
msg.Attach("/home/Alex/lolcat.jpg")
var emailRaw bytes.Buffer
msg.WriteTo(&emailRaw)
message := ses.RawMessage{ Data: emailRaw.Bytes() }
// Remaining is same as what you mentioned the question.
Stackoverflow Stackexchange
Q: Displaying rows values in column In Excel, I am trying to show the A1 cell value in C1, the B1 cell value in C2, and so on. Not sure how to do this. Is there any way to do this in VBA? Please help!
A: try the following code
Public Sub program()
Dim i As Long
Dim j As Long
i = 1
j = 1
Do While Cells(i, "A").Value <> ""
Cells(j, "C").Value = Cells(i, "A").Value
j = j + 1
Cells(j, "C").Value = Cells(i, "B").Value
i = i + 1
j = j + 1
Loop
End Sub
A: Do it with the formula: =IF(MOD(ROW();2)=1;INDEX(A:B;(1+ROW())/2;1);INDEX(A:B;ROW()/2;2)). Just place it in C1 and drag it down.
A: Copy and Paste Special - Paste in Cell C1, Tick Transpose Box
Shortcut: Ctrl + C (together) Then Alt , E , S , E (in succession)
Stackoverflow Stackexchange
Q: How Can I Restart The Redis Server If It Goes Down Automatically using a script? Redis has a master-slave configuration. If the master goes down, the slave becomes the new master. How can I restart the previous Redis master (as a slave of the new master, or if it again becomes the master that's fine) using a script? I don't want to do it manually.
A: This can be done using a small script. Make a watcher script that keeps pinging Redis at port 6379 and, if it fails, restarts the server using the redis-server command.
#!/bin/bash
a=$(redis-cli -p 6379 PING)
if [ "$a" = "PONG" ]
then
echo 'Already running'
else
b=$(/etc/init.d/redis_6379 start)
echo $b
fi
Now schedule this script in crontab to run every minute.
A: OK. I had to teach myself after a catastrophic server failure caused by redis dying and staying dead:
*
*Edit /etc/systemd/system/multi-user.target.wants/redis.service and add this in the [Service] section, at the bottom:
Restart=always
RestartSec=10s
*Run sudo systemctl daemon-reload
*Run sudo systemctl restart redis
It will now restart after 10 seconds every time it crashes.
This is usually "Good enough" for redis.
A: Taking from the answers above, I have added a bit more description to make it easier to understand.
Step One: Create a script on the root folder, or on a folder of your choice
with the command:
vi redis-watch.sh
Step two: Paste the script below:
#!/bin/bash
a=$(redis-cli -p 6379 PING)
if [ "$a" = "PONG" ]
then
echo 'Already running'
else
b=$(systemctl start redis)
echo $b
fi
Save the changes.
Step three: Change the permissions of the file to prevent permission denied response on execution. Run:
sudo chmod 777 redis-watch.sh
Test if the script executes. Run:
./redis-watch.sh
The expected response should be Already running if Redis is already running okay; otherwise the script will start Redis.
Step four: To automate the check, create a cron job to run the script at an interval of your choosing. For our case we will have it running every minute.
Run : crontab -e and paste the line:
* * * * * ./redis-watch.sh
That should be it.
Tested on Cent OS 8
A: #!/bin/bash
a=$(redis-cli -p 6379 PING)
if [ "$a" != "PONG" ]
then
    b=$(systemctl start redis)
    echo $b
fi
A: Run
$ redis-cli
> config set stop-writes-on-bgsave-error no
> exit
cda1c33b4e6a4c602ccd2c6e4ecd29fb3d35500b | Stackoverflow Stackexchange
Q: How to modify "remember me" expired time in Laravel 5.2+? After using php artisan make:auth, Laravel's "remember me" will remember the user indefinitely.
How to change that time? Like make it expire in 7 days?
A: Using @Hexor's answer has a problem: when the user first logs in, you can't use
Cookie::get($rememberTokenName);
it's empty!
You should get the queued cookie value first, then reset the cookie's expire time.
$rememberTokenExpireMinutes = 20;
// First, get the name of the "remember me" cookie; this name is usually randomly generated.
$rememberTokenName = \Auth::getRecallerName();
$cookieJar = $this->guard()->getCookieJar();
$cookieValue = $cookieJar->queued($rememberTokenName)->getValue();
$cookieJar->queue($rememberTokenName, $cookieValue, $rememberTokenExpireMinutes);
$jumpUrl = '/user/xxxx';
return $this->authenticated($request, $this->guard()->user())
?: redirect()->intended($jumpUrl);
| Q: How to modify "remember me" expired time in Laravel 5.2+? After using php artisan make:auth, Laravel's "remember me" will remember the user indefinitely.
How to change that time? Like make it expire in 7 days?
A: Using @Hexor's answer has a problem: when the user first logs in, you can't use
Cookie::get($rememberTokenName);
it's empty!
You should get the queued cookie value first, then reset the cookie's expire time.
$rememberTokenExpireMinutes = 20;
// First, get the name of the "remember me" cookie; this name is usually randomly generated.
$rememberTokenName = \Auth::getRecallerName();
$cookieJar = $this->guard()->getCookieJar();
$cookieValue = $cookieJar->queued($rememberTokenName)->getValue();
$cookieJar->queue($rememberTokenName, $cookieValue, $rememberTokenExpireMinutes);
$jumpUrl = '/user/xxxx';
return $this->authenticated($request, $this->guard()->user())
?: redirect()->intended($jumpUrl);
A: Step 1
In LoginController, you'll see use AuthenticatesUsers.
Let's copy protected function sendLoginResponse(Request $request) from AuthenticatesUsers to LoginController.
Step 2
We can change the cookie's expire time before server response to the browser. Let's add some code into sendLoginResponse() in LoginController. Like this
class LoginController extends Controller
{
...
protected function sendLoginResponse(Request $request)
{
// set remember me expire time
$rememberTokenExpireMinutes = 60;
// first we need to get the "remember me" cookie's key, this key is generate by laravel randomly
// it looks like: remember_web_59ba36addc2b2f9401580f014c7f58ea4e30989d
$rememberTokenName = Auth::getRecallerName();
// reset that cookie's expire time
Cookie::queue($rememberTokenName, Cookie::get($rememberTokenName), $rememberTokenExpireMinutes);
// the code below is just copy from AuthenticatesUsers
$request->session()->regenerate();
$this->clearLoginAttempts($request);
return $this->authenticated($request, $this->guard()->user())
?: redirect()->intended($this->redirectPath());
}
}
A: You can set the remember me cookie duration by adding 'remember' => 43800 //(use minutes) in the config in config/auth.php
Just change this:
'guards' => [
'web' => [
'driver' => 'session',
'provider' => 'users',
],
],
to:
'guards' => [
'web' => [
'driver' => 'session',
'provider' => 'users',
'remember' => 43800 // Set remember me duration here
],
],
Note: The 'remember' key name is mandatory, because it is read by Laravel in the Illuminate\Auth\AuthManager class
A: I know the question is old but I had a hard time getting a solution for Laravel 7.3, so I thought I should add what worked for me here.
In your App\Http\Controllers\Auth\LoginController.php file, make the following changes
//Add this after namespace declaration
use Illuminate\Http\Request;
//Add this function to the class
protected function sendLoginResponse(Request $request)
{
$rememberTokenExpiresAt = 60*24*30; //expires in 30 days
$rememberTokenCookieKey = $this->guard()->getRecallerName();
$cookieJar = $this->guard()->getCookieJar();
/* check if remember me token exists and then override it using the same name and value but different expiration time.
If you don't add the if condition, it will throw an error when user doesn't check the remember me box*/
if ($cookieJar->queued($rememberTokenCookieKey)) {
$cookieValue = $cookieJar->queued($rememberTokenCookieKey)->getValue();
$cookieJar->queue($rememberTokenCookieKey, $cookieValue, $rememberTokenExpiresAt);
}
$request->session()->regenerate();
$this->clearLoginAttempts($request);
if ($response = $this->authenticated($request, $this->guard()->user())) {
return $response;
}
return $request->wantsJson()
? new JsonResponse([], 204)
: redirect()->intended($this->redirectPath());
}
A: In Laravel 5.8, in the framework class CookieJar.php, the forever() method is:
function forever($name, $value, $path = null, $domain = null, $secure = null, $httpOnly = true, $raw = false, $sameSite = null)
return $this->make($name, $value, 2628000, $path, $domain, $secure, $httpOnly, $raw, $sameSite);
change it into this (2628000 minutes is 5 years; 1440 minutes is 24 hours):
return $this->make($name, $value, 1440, $path, $domain, $secure, $httpOnly, $raw, $sameSite);
(Note that edits to framework files are overwritten by a composer update.)
| stackoverflow | {
"language": "en",
"length": 493,
"provenance": "stackexchange_0000F.jsonl.gz:905681",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669541"
} |
402ce93f82e18074ca280ef3df74b8f2906270ff | Stackoverflow Stackexchange
Q: Send and receive data through one channel on different Go package/file +----------------+ +-----------------+
| Channel foo | | Channel foo |
| a.go | | b.go |
+----------------+ +-----------------+
| |
|__________________________________________|
Send or receive data through 'foo' channel
Can I create two Go source code files and send or receive data through one channel such as described from image/illustration above? So, these files could communicate each other with sending data while source code is running.
A: Channels are just variables in Go, like int, []byte or bool. If you can pass any variables between packages, you can obviously then pass channels between packages as well. And once you've passed a channel from one package to another, you are of course free to use it there, just as you would any other variable. That means you can send data in one package, and receive it in another.
Ultimately channels wouldn't be very useful if they couldn't be shared between packages.
| Q: Send and receive data through one channel on different Go package/file +----------------+ +-----------------+
| Channel foo | | Channel foo |
| a.go | | b.go |
+----------------+ +-----------------+
| |
|__________________________________________|
Send or receive data through 'foo' channel
Can I create two Go source code files and send or receive data through one channel such as described from image/illustration above? So, these files could communicate each other with sending data while source code is running.
A: Channels are just variables in Go, like int, []byte or bool. If you can pass any variables between packages, you can obviously then pass channels between packages as well. And once you've passed a channel from one package to another, you are of course free to use it there, just as you would any other variable. That means you can send data in one package, and receive it in another.
Ultimately channels wouldn't be very useful if they couldn't be shared between packages.
| stackoverflow | {
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:905686",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669550"
} |
18e330d131ea455fa2a7563048facc4d34d55fb5 | Stackoverflow Stackexchange
Q: Restart docker when it consumes certain amount of memory I have multiple docker processes running on a machine, I'd like to monitor them and restart docker process once it passes certain memory threshold.
Can I make docker restart it's process when it hits memory threshold?
What's my options?
Thanks
Alex
A: Always remember that if you don't set --memory-swap, the container will not restart at the --memory limit, because Docker will use swap after reaching the --memory limit. So, if you want it to restart at the --memory value, you need to set --memory-swap to the same value as the --memory limit.
docker run --memory 50m --memory-swap 50m --rm -it progrium/stress --vm 1 --vm-bytes 62914560 --timeout 1s
| Q: Restart docker when it consumes certain amount of memory I have multiple docker processes running on a machine, I'd like to monitor them and restart docker process once it passes certain memory threshold.
Can I make docker restart it's process when it hits memory threshold?
What's my options?
Thanks
Alex
A: Always remember that if you don't set --memory-swap, the container will not restart at the --memory limit, because Docker will use swap after reaching the --memory limit. So, if you want it to restart at the --memory value, you need to set --memory-swap to the same value as the --memory limit.
docker run --memory 50m --memory-swap 50m --rm -it progrium/stress --vm 1 --vm-bytes 62914560 --timeout 1s
A: You could make a shell script to monitor resources usage, and restart Docker daemon when it reaches your memory limit, but I think that's not actually a good approach.
Using this command you can see your containers ordered by memory usage. Find which container is using too much memory and try to find the reason because that's happening.
docker stats --no-stream --format "table {{.Name}}\t{{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" | sort -k 4 -h
Also, if memory consumption of your containers is normal, but you want to limit it, you can limit resources assigned to each container. You can do this using option --memory in docker run.
For further information about memory limits, check this info in Docker docs: https://docs.docker.com/engine/admin/resource_constraints/
Hope this helps, good luck.
Edit: Answering your response, if your container runs out of memory, it will be automatically killed by the kernel. You can configure a memory limit using option --memory and set restart policy as --restart=always. This way, your container will be killed automatically by an OOM (out-of-memory) error, but it will be restarted since its restart policy is to keep restarting after any error.
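For completeness, the combination just described — a hard memory cap plus an always-restart policy — can be sketched in a Compose file (v2 file format); the service and image names here are placeholders:

```yaml
services:
  myservice:
    image: myimage:latest
    restart: always       # bring the container back after the kernel OOM-kills it
    mem_limit: 256m       # equivalent of --memory
    memswap_limit: 256m   # equal to mem_limit, so swap cannot absorb the overage
```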
| stackoverflow | {
"language": "en",
"length": 299,
"provenance": "stackexchange_0000F.jsonl.gz:905691",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669568"
} |
2eea9e03514a9f1076d28bbec6811f8607e3b9f1 | Stackoverflow Stackexchange
Q: 'ng' is not recognized as an internal or external command, operable program or batch file I tried running npm install -g angular-cli
I also tried adding it to the Environment Variables under PATH: (C:\Users\Administrator\AppData\Roaming\npm\node_modules\angular-cli\bin\ng), with no success also.
A: make sure environment variables are set properly.
control panel-> system->advanced system settings-> select advanced Tab->
click on environment variables
and make sure in the path below line is available
`C:\Users\username\AppData\Roaming\npm`
here username will change based on the user.
If it's still not working, your environment variables are not getting reflected, so please restart your machine; it should then work fine.
If you are still facing the issue, your Angular CLI is not installed properly,
so please run the below commands to reinstall:
npm uninstall -g @angular/cli
npm cache clean or npm cache clean --force
npm install -g @angular/cli@latest
| Q: 'ng' is not recognized as an internal or external command, operable program or batch file I tried running npm install -g angular-cli
I also tried adding it to the Environment Variables under PATH: (C:\Users\Administrator\AppData\Roaming\npm\node_modules\angular-cli\bin\ng), with no success also.
A: make sure environment variables are set properly.
control panel-> system->advanced system settings-> select advanced Tab->
click on environment variables
and make sure in the path below line is available
`C:\Users\username\AppData\Roaming\npm`
here username will change based on the user.
If it's still not working, your environment variables are not getting reflected, so please restart your machine; it should then work fine.
If you are still facing the issue, your Angular CLI is not installed properly,
so please run the below commands to reinstall:
npm uninstall -g @angular/cli
npm cache clean or npm cache clean --force
npm install -g @angular/cli@latest
A: This error is simply telling you that Angular CLI is either not installed or not added to the PATH. To solve this error, first, make sure you’re running Node 6.9 or higher. A lot of errors can be resolved by simply upgrading your Node to the latest stable version.
Open up the Terminal on macOS/Linux or Command Prompt on Windows and run the following command to find out the version of Node you are running:
node --version
A: I had the same issue on Windows 7. I resolved it by setting the correct path.
*
*First find the ng.cmd file on your system. It will usually be at:
E:\Users\<USERNAME>\AppData\Roaming\npm
*Set PATH to this location.
*Close existing command window and open new one
*Type
ng version
Also remember to install angular with -g command.
npm install -g @angular/cli
A: This answer is based on the following answer by @YuSolution https://stackoverflow.com/a/44622211/4567504.
In my case, installing MySQL changed my path variable, and even after reinstalling @angular/cli globally many times I was not able to fix the issue.
Solution:
In command prompt, run the following command
npm config get prefix
A path will be returned like
C:\Users\{{Your_Username}}\AppData\Roaming\npm
Copy this path and go to ControlPanel > System and Security > System, Click on Advanced System settings, go to advanced tab and select environment variable button like
Now in User Variables box click on Path row and edit and in variable value box paste your copied path.
Restart the command prompt and it will work
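That check can also be scripted: take the prefix npm reports and test whether it is one of the PATH entries. A sketch for a Unix-like shell — path_contains is an illustrative helper, and on Windows cmd the separator is ; and the variable is %PATH%:

```shell
#!/bin/sh
# path_contains DIR PATHSTRING - print yes if DIR is one of PATHSTRING's entries.
path_contains() {
  case ":$2:" in
    *":$1:"*) echo yes ;;
    *)        echo no  ;;
  esac
}

# Real use would be: path_contains "$(npm config get prefix)" "$PATH"
path_contains "/usr/local/bin" "/usr/bin:/usr/local/bin"   # prints: yes
path_contains "/opt/npm"       "/usr/bin:/usr/local/bin"   # prints: no
```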
A: Just adding a little info to the previous answers, If you have windows 7 or above then go to start and search Node.js command prompt and you will be directly shown the app. Click on it, and start working by using that command prompt for angular.
A: You should not add C:\Users\Administrator\AppData\Roaming\npm\node_modules\angular-cli\bin\ng to your PATH. There is only a javascript file which you cannot use in terminal.
You need ng.cmd which is probably located at %AppData%\Roaming\npm.
Make sure this path is included in your PATH variable.
A: If angular cli is installed and ng command is not working then please see below suggestion, it may work
In my case the problem was with the npm config file (.npmrc), which is available at C:\Users\{user}. That file did not contain the line
registry https://registry.npmjs.org/=true. When I added that line, the command started working. Use the below command to edit the config file; edit the file and save, then try to run the command again. It should work now.
npm config edit
A: You don't need to set any path. Follow the below step to resolve the problem-
Step 1- go to
C:\Users\user\AppData\Roaming and delete npm, npm-update and npm-cache folder
Step 2- run
npm install -g @angular/cli@yourangularversion again.
A: No need to uninstall angular/cli.
*
*You just need to make sure the PATH to npm is in your environment PATH and at the top.
C:\Users\yourusername\AppData\Roaming\npm
*Then close whatever git or command client you're using and run ng -v again, and it should work
A: I have tried the below steps and it's working fine:
Download the latest version of Node.js; it should work.
A: I just installed angular cli and it solved my issue, simply run:
npm install -g @angular/cli
A: Note: you may lose the values once the system restarts.
You can also add system environment variables without admin rights in Windows 10.
Now don't restart; close any opened cmd or PowerShell window, reopen cmd, and test with the ng version command. If the version info is printed, it is confirmed to be working fine.
Hope this helps
A: I was with the same problem and now discovered a working solution.
After successful installation of node and angular CLI do the following steps.
Open C:\usr\local and copy the path, or the path where Angular CLI is located on your machine.
Now open environment variable in your Windows, and add copied path in the following location:
Advanced > Environment Variable > User Variables and System Variables as below image:
That's all, now open cmd and try with any 'ng' command:
A: You can also try:
> npm run ng <command>
A: You should add the path where ng.cmd located. By default, it should be located on C:\Users\user\AppData\Roaming\npm
NB: Here "user" may vary as per your pc username!
A: What worked for me was that I was missing a file
.npmrc
which is located under
C:\Users\username
That file should contain
prefix=$(APPDATA)\npm
Also my environment path was pointing to my admin user
A: npm update solves the issue for me
A: This issue also bothered me, and I found some cases that reproduce it.
When I run my window as administrator, ng works fine,
but when I run it from my second account (as another user) I get this issue.
So if I want to run my Angular application there, I need to run the command
npm run ng serve, which works,
but when I run the command with --host (npm run ng serve --host IP) it does not work and gives some error.
So I found a possible solution:
1. Go to the user\admin\AppData\Roaming\npm folder and copy its path. If you are using another user account (user\newuser\AppData\Roaming\npm), you can copy this npm folder from the other user, i.e. the admin user account.
If you do not want to copy this folder, then copy the path of the user\admin\AppData\Roaming\npm folder, open your environment variable settings, and add this path to the Path variable.
Enter this path in the system Path variable, not the user variable:
C:\Users\admin\AppData\Roaming\npm
Then run the command prompt as administrator and run the ng command; it will work.
A: Short answer:
Just install the latest version of nodejs and then restart your system.
More description:
It's related to the environment variables on your system, at least as far as I know. You can make changes to the Path variable as others have discussed in this thread, but the easiest way to solve this is installing Node.js!
| stackoverflow | {
"language": "en",
"length": 1114,
"provenance": "stackexchange_0000F.jsonl.gz:905695",
"question_score": "80",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669589"
} |
9167d74e6a5ef51c3545b9e721e41dd0809bdf0c | Stackoverflow Stackexchange
Q: Eureka Getting List of Services How can I fetch the already registered service from eureka?
The below code gives the details about a particular service. But I want the list of the registered services .
Code:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
@EnableDiscoveryClient
@SpringBootApplication
public class EurekaClientApplication {
public static void main(String[] args) {
SpringApplication.run(EurekaClientApplication.class, args);
}
}
@RestController
class ServiceInstanceRestController {
@Autowired
private DiscoveryClient discoveryClient;
@RequestMapping("/service-instances/{applicationName}")
public List<ServiceInstance> serviceInstancesByApplicationName(
@PathVariable String applicationName) {
return this.discoveryClient.getInstances(applicationName);
}
}
A: Pretty simple really :)
// uses Netflix Eureka's com.netflix.discovery.shared.Application and com.netflix.appinfo.InstanceInfo
List<Application> applications = discoveryClient.getApplications().getRegisteredApplications();
for (Application application : applications) {
List<InstanceInfo> applicationsInstances = application.getInstances();
for (InstanceInfo applicationsInstance : applicationsInstances) {
String name = applicationsInstance.getAppName();
String url = applicationsInstance.getHomePageUrl();
System.out.println(name + ": " + url);
}
}
| Q: Eureka Getting List of Services How can I fetch the already registered service from eureka?
The below code gives the details about a particular service. But I want the list of the registered services .
Code:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
@EnableDiscoveryClient
@SpringBootApplication
public class EurekaClientApplication {
public static void main(String[] args) {
SpringApplication.run(EurekaClientApplication.class, args);
}
}
@RestController
class ServiceInstanceRestController {
@Autowired
private DiscoveryClient discoveryClient;
@RequestMapping("/service-instances/{applicationName}")
public List<ServiceInstance> serviceInstancesByApplicationName(
@PathVariable String applicationName) {
return this.discoveryClient.getInstances(applicationName);
}
}
A: Pretty simple really :)
// uses Netflix Eureka's com.netflix.discovery.shared.Application and com.netflix.appinfo.InstanceInfo
List<Application> applications = discoveryClient.getApplications().getRegisteredApplications();
for (Application application : applications) {
List<InstanceInfo> applicationsInstances = application.getInstances();
for (InstanceInfo applicationsInstance : applicationsInstances) {
String name = applicationsInstance.getAppName();
String url = applicationsInstance.getHomePageUrl();
System.out.println(name + ": " + url);
}
}
A: Or, if you don't want to drag in the whole world of spring-boot/Eureka libs and prefer a "clean" thin client, you can just perform a simple GET towards
http://<eureka host>:<port>/eureka/apps
as is described here Using Eureka as a registry using REST APIs or, and as we do it, utilize springboot admin's API (described here https://codecentric.github.io/spring-boot-admin/1.5.7/ for instance) and just do a simple GET towards
http://<springbootadmin host>:<port>/api/applications
while providing login credentials using a Basic auth header, ie
"Basic", java.util.Base64.getEncoder().encodeToString(("<springboot admin user>" + ":" + "<pwd>").getBytes());
You'll then get a nice JSON response easily parsed into collections of java objects utilizing JSON-attributes
private String name; // service name
private String managementUrl; // an url from which host and port can be easily grabbed
private StatusInfo statusInfo; // telling whether this service instance is up or down
for instance. Add proper toString()-based hashCode() and equals() and you have something rather proper to work with, whether you want to work with sets of running services "on any instance" or unique instances.
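As an illustration of the thin-client route, here is roughly what the flow looks like from a shell. The host name is a placeholder (8761 is Eureka's default port), the JSON is a canned stand-in for a live response, and the grep is a crude substitute for a real JSON parser:

```shell
#!/bin/sh
# A live call would be:
#   curl -s -H 'Accept: application/json' "http://eureka-host:8761/eureka/apps"
# Eureka nests the registered apps under .applications.application[]; fake a response here:
apps_json='{"applications":{"application":[{"name":"SERVICE-A"},{"name":"SERVICE-B"}]}}'

# Pull out the service names (quick-look only, not production parsing):
echo "$apps_json" | grep -o '"name":"[^"]*"'
```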
| stackoverflow | {
"language": "en",
"length": 304,
"provenance": "stackexchange_0000F.jsonl.gz:905699",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669600"
} |
b666d4e07db3d2e0979dcbda2949db3f7eff3c51 | Stackoverflow Stackexchange
Q: What is the Android Studio Master Password? I want to generate a signed APK. I have created a keystore file. When I generate a signed APK it asks for the keystore file, alias, and password. I fill all the details in, and when I click next it asks for a "Master Password". So what is the "Master Password"?
A:
Your Master Password is the key to your encrypted database.
This is the only password you will need to remember, and it is critical that you do not lose or forget this password.
When you reset this password, all app keys and aliases will be lost.
Reset Master Password
Goto settings-> system settings -> passwords
Under Disk storage protection You can see Master password button click it
In dialog click on reset
| Q: What is the Android Studio Master Password? I want to generate a signed APK. I have created a keystore file. When I generate a signed APK it asks for the keystore file, alias, and password. I fill all the details in, and when I click next it asks for a "Master Password". So what is the "Master Password"?
A:
Your Master Password is the key to your encrypted database.
This is the only password you will need to remember, and it is critical that you do not lose or forget this password.
When you reset this password, all app keys and aliases will be lost.
Reset Master Password
Goto settings-> system settings -> passwords
Under Disk storage protection You can see Master password button click it
In dialog click on reset
A: Seems like you need to reset your password, in case Android Studio is not asking you for the initial setting of the master password. Given the fact that you still know your Key store password and Key password, of course.
*
*File -> Settings -> Type "password" in search. The appropriate entry will be highlighted.
*Click on master password -> leave empty -> click OK.
*Click on master password again -> reset -> enter new password
Additional information:
The internal password manager of Android Studio, to which this (Master) password belongs, is just for convenience, so you don't have to enter your Key store password and Key password each time you generate a signed APK. It can also save your repository credentials for you.
A: It is the password you set at some point for important Android Studio functions like pushing the project to GitHub. For me, whenever I want to push the project it asks for the master password.
| stackoverflow | {
"language": "en",
"length": 293,
"provenance": "stackexchange_0000F.jsonl.gz:905717",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669656"
} |
f0591a210d0db2a767c112042f2756d21b40590b | Stackoverflow Stackexchange
Q: Detect when Editor exit/closes I am making an Editor plugin that communicates with a native plugin made with C++. I am required to notify the native plugin when the Editor is about to be closed. I spent a few hours on Unity's docs looking for an event or callback function that can be used to detect when the Editor closes, but I couldn't find one.
Usually, OnApplicationQuit, OnApplicationPause and OnApplicationFocus are used for something like this on a standalone build, but this is for the Editor so they wouldn't work.
Does anyone know of any function or event to do this? If there is no built-in solution, is there a hack or some other way to do this?
A: There is no native way to detect the shutdown of the editor itself.
However, you could possibly hook into the process itself and wait for the exited event as described in this answer.
But if possible you will want to do this on the c++ side itself instead.
C++, How to determine if a Windows Process is running?
| Q: Detect when Editor exit/closes I am making an Editor plugin that communicates with a native plugin made with C++. I am required to notify the native plugin when the Editor is about to be closed. I spent a few hours on Unity's docs looking for an event or callback function that can be used to detect when the Editor closes, but I couldn't find one.
Usually, OnApplicationQuit, OnApplicationPause and OnApplicationFocus are used for something like this on a standalone build, but this is for the Editor so they wouldn't work.
Does anyone know of any function or event to do this? If there is no built-in solution, is there a hack or some other way to do this?
A: There is no native way to detect the shutdown of the editor itself.
However, you could possibly hook into the process itself and wait for the exited event as described in this answer.
But if possible you will want to do this on the c++ side itself instead.
C++, How to determine if a Windows Process is running?
A: now there is EditorApplication.quitting in unity 2018.2
UnityEditor.EditorApplication.quitting += OnQuitting;
private void OnQuitting()
{
// do what you want to do when editor quit
}
| stackoverflow | {
"language": "en",
"length": 204,
"provenance": "stackexchange_0000F.jsonl.gz:905727",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669678"
} |
dc71a36922ce467e72d7b036c7f6d450a1922617 | Stackoverflow Stackexchange
Q: How to connect postgres to xcode I'm making an application on xcode 8.3.3 and am using swift to make the app. We have a hosted postgres database which is up and running but I don't know how to connect to it through swift and xcode. I have a host, port number, username and password but I don't know how to connect to the database so I can get the data. I'm a newbie to xcode, swift and basically programming in general please help.
A: A simple Google search would yield tremendous results.
http://druware.tumblr.com/post/112163075395/getting-started-with-pgsqlkit-and-swift
I believe the above link should get you sorted.
Basically you have to use the C API libpq library.
See also: https://github.com/stepanhruda/PostgreSQL-Swift
| Q: How to connect postgres to xcode I'm making an application on xcode 8.3.3 and am using swift to make the app. We have a hosted postgres database which is up and running but I don't know how to connect to it through swift and xcode. I have a host, port number, username and password but I don't know how to connect to the database so I can get the data. I'm a newbie to xcode, swift and basically programming in general please help.
A: A simple Google search would yield tremendous results.
http://druware.tumblr.com/post/112163075395/getting-started-with-pgsqlkit-and-swift
I believe the above link should get you sorted.
Basically you have to use the C API libpq library.
See also: https://github.com/stepanhruda/PostgreSQL-Swift
| stackoverflow | {
"language": "en",
"length": 116,
"provenance": "stackexchange_0000F.jsonl.gz:905736",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669710"
} |
0906227b5d5eed2dac04b6f28fe096b60da936ed | Stackoverflow Stackexchange
Q: DevExtreme override css of dxButton I have the following dxButton element.
<div class="dx-field-value" data-bind="dxButton: { text: name, onClick: $root.name }"></div>
The text of the button is being assigned via the variable name. Now, I want the button text to be always uppercase for this specific element only.
I have tried setting an id and a class for it, and create custom css however it did not work. I have also tried inline styling as follows:
<div class="dx-field-value" style="text-transform: uppercase;" data-bind="dxButton: { text: 'Text' }"></div>
A: Use the dx-button-text class name to customize your button text.
Make all buttons lowercase:
.dx-button-text {
text-transform: lowercase;
}
Then, add a specific css class to the button you want to be uppercase:
<div data-bind="dxButton: { text: name }" class="uppercase"></div>
And apply the following rule:
.uppercase .dx-button-text {
text-transform: uppercase;
}
Demo.
| stackoverflow | {
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:905762",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669809"
} |
0ed8394bfb909e007cecc191c7e81e84ab7bfbb1 | Stackoverflow Stackexchange
Q: How to add a button to nav-bar? How can I add another button/dropdown to the navbar in sonata admin listing template for my MapAdmin class?
I just want this button in one admin class.
A: You have to override the default template (layout: 'SonataAdminBundle::standard_layout.html.twig') with your own, adding your logic there.
Here is an extract of the existing code:
{% block sonata_admin_content_actions_wrappers %}
{% if _actions|replace({ '<li>': '', '</li>': '' })|trim is not empty %}
<ul class="nav navbar-nav navbar-right">
{% if _actions|split('</a>')|length > 2 %}
<li class="dropdown sonata-actions">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">{{ 'link_actions'|trans({}, 'SonataAdminBundle') }} <b class="caret"></b></a>
<ul class="dropdown-menu" role="menu">
{{ _actions|raw }}
</ul>
</li>
{% else %}
{{ _actions|raw }}
{% endif %}
</ul>
{% endif %}
{% endblock sonata_admin_content_actions_wrappers %}
A: It requires adding a custom action and overriding a certain template. You can follow the documentation on symfony.com.
Read up to the following code block:
{# src/AppBundle/Resources/views/CRUD/list__action_clone.html.twig #}
<a class="btn btn-sm" href="{{ admin.generateObjectUrl('clone', object)}}">clone</a>
A: I have just come across the same problem. I am using Symfony 3.4.6 and Sonata Admin Bundle 3.9.1. These are the steps I've followed:
1. Find the standard template, which lives in: /vendor/sonata-project/admin-bundle/src/Resources/views/standard_layout.html.twig.
2. Go to /app/config/config.yml and under the key sonata_admin, you just override that template as shown below
sonata_admin:
templates:
# Layout
layout: '@MyBundle/Admin/Default/Layout/standard_layout.html.twig'
3. Within your newly created template (standard_layout.html.twig) make sure you have extended the Sonata standard template file like so: {% extends '@SonataAdmin/standard_layout.html.twig' %}. Now, all you need to do is override any block you want from the original Sonata template file as described in point 1. In my case I've just overridden the block tab_menu_navbar_header and added my custom button like so:
{% block tab_menu_navbar_header %}
{% if _navbar_title is not empty %}
<div class="navbar-header">
<a class="navbar-brand" href="#">{{ _navbar_title|raw }}</a>
{% if object.state is defined and object.state is not null and object.state is same as('finished') %}
<button type="button" class="btn btn-info" style="margin-top: 10px;">
<i class="fa fa-check-square" aria-hidden="true"> History</i>
</button>
{% endif %}
</div>
{% endif %}
{% endblock %}
| stackoverflow | {
"language": "en",
"length": 335,
"provenance": "stackexchange_0000F.jsonl.gz:905779",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44669861"
} |
aa2e2a31405ed7cd23ecc787cd2e8d4ee9764ae5 | Stackoverflow Stackexchange
Q: 'Installation failed: Could not create directory.' I get this error every time I try to install a plugin in WP 4.8. I was getting an error accessing my admin panel. The error was due to a plugin. I fixed that issue by renaming the plugin folder from cPanel and changing it back. Since then, I can't install any plugins and get the error
Installation failed: Could not create directory.
A: The permissions or ownership on wp-content/plugins is incorrect. That directory should have a 755 permission set.
If it's already that, then reapply 755 and check “apply recursively to all directories and files”.
A: You need to set permissions on the plugin folder/directory; currently the user does not have permission to create a folder/directory inside the plugins folder/directory.
You need to set permissions on wp-content and every folder/directory inside it to 755.
You can set permission via terminal
sudo chmod -R 755 wp-content
wp-content should be your folder path; for a Linux server the path should be /var/www/html/project_folder/wp-content
OR
you can set permissions via FTP. Right-click on the wp-content folder, click on the permission menu and set 755 permission on the folder.
A: If you use a docker container or docker-compose, you also need to give permission to the database volume.
| stackoverflow | {
"language": "en",
"length": 199,
"provenance": "stackexchange_0000F.jsonl.gz:905822",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670008"
} |
004ba492e07fbb9099e26ffa26f01a1be8e261ad | Stackoverflow Stackexchange
Q: Insert a separator string in a field value in MySQL I have this value generated using MAX and LPAD
result:
What I wanted is to put a dash after every 3rd number, just like 000-000-002, but I don't know what to do. Please help me.
Thank you in advance!
A: Use PHP's wordwrap() function to put a specific character after a given number of characters. The wordwrap() function wraps a string into new lines when it reaches a specific length.
$output = wordwrap($orNumber,3,'-',true);
Note: This function may leave white spaces at the beginning of a line, so you have to trim your data to remove blank spaces.
You can check the manual here: PHP wordwrap()
A: MySQL :
set @var ='000000002';
select CONCAT_WS('-', SUBSTR(@var , 1, 3), SUBSTR(@var, 4,3), SUBSTR(@var, 7))
This will accept only varchar. For int values, numbers starting with 0 will not give the correct result. Hence it is recommended to handle this on the PHP side.
A: set @chr = '000000002';
select @chr,
concat(substring(@chr,1,3),'-',substring(@chr,4,3),'-',substring(@chr,7,9)) hyphenated
Result
+-----------+-------------+
| @chr | hyphenated |
+-----------+-------------+
| 000000002 | 000-000-002 |
+-----------+-------------+
1 row in set (0.00 sec)
A: If the length is always fixed, you can use LEFT(), MID(), and RIGHT()
update TABLE_NAME set orNumber = CONCAT(LEFT(orNumber, 3), '-' , MID(orNumber, 4, 3), '-' ,RIGHT(orNumber, 3))
find more on docs
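For readers outside PHP/MySQL, the same grouping logic is easy to sketch in Python (the hyphenate helper below is a hypothetical name, not from the answers above); it splits the zero-padded value into chunks of three characters and joins them with a dash, mirroring wordwrap($orNumber, 3, '-', true):

```python
def hyphenate(value, size=3, sep="-"):
    # Split `value` into `size`-character chunks and join them with `sep`,
    # e.g. "000000002" -> "000-000-002".
    return sep.join(value[i:i + size] for i in range(0, len(value), size))

print(hyphenate("000000002"))  # prints 000-000-002
```

Like the SQL SUBSTR approach, this assumes the value is already a zero-padded string; an integer would first need to be formatted to a fixed width.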
| stackoverflow | {
"language": "en",
"length": 217,
"provenance": "stackexchange_0000F.jsonl.gz:905862",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670145"
} |
e0aa8364dd9769272dfeb6559643de65bdb0c288 | Stackoverflow Stackexchange
Q: How to start multiple Processes in Contiki to create load on CPU How can I start multiple processes in parallel to create congestion in CPU usage? Below is the code I was trying:
#include "contiki.h"
#include <stdio.h>
PROCESS(cpu_load_process_1, "CPU Loading Process 1");
PROCESS(cpu_load_process_2, "CPU Loading Process 2");
PROCESS(cpu_load_process_3, "CPU Loading Process 3");
PROCESS(cpu_load_process_4, "CPU Loading Process 4");
AUTOSTART_PROCESSES(&cpu_load_process_1);
AUTOSTART_PROCESSES(&cpu_load_process_2);
PROCESS_THREAD(cpu_load_process_1, ev, data)
{
PROCESS_BEGIN();
PROCESS_END();
}
PROCESS_THREAD(cpu_load_process_3, ev, data)
{
PROCESS_BEGIN();
PROCESS_END();
}
but this gives the following error:
/home/user/contiki-3.0/core/./sys/autostart.h:48:24: error: redefinition of ‘autostart_processes’
struct process * const autostart_processes[] = {__VA_ARGS__, NULL}struct process * const autostart_processes[] = {__VA_ARGS__, NULL}
Please guide me through. Any alternative way/suggestion of creating CPU congestion would also be helpful.
A: Behind the macro AUTOSTART_PROCESSES, a structure definition is hidden.
#define AUTOSTART_PROCESSES(...) \
struct process * const autostart_processes[] = {__VA_ARGS__, NULL}
By calling AUTOSTART_PROCESSES twice, you define the autostart_processes array twice.
Solution:
Given the macro parameters, I guess that you should write:
AUTOSTART_PROCESSES(&cpu_load_process_1, &cpu_load_process_2);
Instead of
AUTOSTART_PROCESSES(&cpu_load_process_1);
AUTOSTART_PROCESSES(&cpu_load_process_2);
| stackoverflow | {
"language": "en",
"length": 163,
"provenance": "stackexchange_0000F.jsonl.gz:905867",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670154"
} |
b3fd9d0f58f7e94d2f2afb3fd9dbdd7a8d2c2809 | Stackoverflow Stackexchange
Q: Make datatables work with turbolinks I would like to use Turbolinks in my web application. However, i have some troubles with turbolinks and datatables.
Indeed, when I load my datatable for the first time, it generates the DOM for it and everything is ok. But with Turbolinks, the page is cached with the generated DOM, so when I change page and return to my datatable, none of the datatable features work (paging, order, filter ...)
I think reloading the datatable when its DOM is already generated doesn't work, so I tried to destroy it before reloading it: same problem.
Here is my datatable function:
Datatable function
And here is the call to it: call to it
Have you ever met this problem, and what can I do to resolve it?
Thanks for your responses
A: Do not initialize the DataTable again when returning from the Turbolinks cache.
document.addEventListener("turbolinks:load", function() {
"use strict";
if ($("#users-table_wrapper").length == 0) {
$('#users-table').DataTable();
}
})
| stackoverflow | {
"language": "en",
"length": 164,
"provenance": "stackexchange_0000F.jsonl.gz:905873",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670160"
} |
004a816656d234964d4dafe8e4c6c65eee7642c4 | Stackoverflow Stackexchange
Q: Open another app from My app? I am having two apps.
I want to open a SecondApp from my FirstApp Button Click.
Second App is having that custom Schema which is required for deep linking.
Now I want to know what code I need to do on my FirstApp button click to open SecondApp?
A: Here is what I can tell you in brief.
You need to add a custom URL scheme to your application.
For example you need to launch App2 from App1.
This is the code that you need to add in App2 info.plist or you can add "URL Types" in your info section of target.
<array>
<dict>
<key>CFBundleTypeRole</key>
<string>Editor</string>
<key>CFBundleURLName</key>
<string>com.company.App1</string>
<key>CFBundleURLSchemes</key>
<array>
<string>CompanyApp2</string>
</array>
</dict>
</array>
And this is the code that you need to add in your App1 info.plist file.
<key>LSApplicationQueriesSchemes</key>
<array>
<string>CompanyApp2</string>
</array>
Then you will launch App2 from App1 like so:
let app2Url: URL = URL(string: "CompanyApp2://")!
if UIApplication.shared.canOpenURL(app2Url) {
UIApplication.shared.openURL(app2Url)
}
Hope this will help.
A: Try below code
let appURL: URL = URL(string: "CustomUrlScheme://")!
if UIApplication.shared.canOpenURL(appURL) {
UIApplication.shared.openURL(appURL)
}
A: In Android, we can do it in the ways below
//Dial a phone
Intent callIntent = new Intent(Intent.ACTION_CALL);
callIntent.setData(Uri.parse("tel:0377778888"));
startActivity(callIntent);
//View a map
// Map point based on address
Uri location = Uri.parse("geo:0,0?q=1600+Amphitheatre+Parkway,+Mountain+View,+California");
// Or map point based on latitude/longitude
// Uri location = Uri.parse("geo:37.422219,-122.08364?z=14"); // z param is zoom level
Intent mapIntent = new Intent(Intent.ACTION_VIEW, location);
startActivity(mapIntent);
//View a webpage
Uri webpage = Uri.parse("http://www.android.com");
Intent webIntent = new Intent(Intent.ACTION_VIEW, webpage);
startActivity(webIntent);
| stackoverflow | {
"language": "en",
"length": 250,
"provenance": "stackexchange_0000F.jsonl.gz:905880",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670176"
} |
7c9e76e75d8cbe28ce47ff0aaeec329c7969d637 | Stackoverflow Stackexchange
Q: How to compile C code in Linux to run on Windows? I am using Linux/GNU GCC to compile C source code. Is there any way I can generate .exe files for Windows running on x86 or x64 architecture? The compiled code needs to be generated on the Linux machine.
A: You would need a cross-compiler to create a Windows executable in Linux.
Mingw-w64 is an advancement of the original mingw.org project, created to support the GCC compiler on Windows systems.
Installing the cross-compiler:
sudo apt-get install mingw-w64
32bit
i686-w64-mingw32-gcc -o test.exe test.c
64bit
x86_64-w64-mingw32-gcc -o test.exe test.c
A: It is called cross-compiling. But GCC does not provide that functionality on its own. You can use the toolset provided by MinGW and/or MinGW-w64 projects.
MinGW allows you to cross compile to Win32 platform and MinGW-w64 allows you to do the same thing for both Win32 and Win64.
They are both based on GCC.
A: You need a cross compiler: A compiler that targets another system than it runs on.
A gcc targeting windows comes in the mingw package and can also be compiled to run on Linux, and many Linux distributions already have packages containing it, so just search your package management tools for "mingw".
Cross compilers have by convention the name of the system they target prepended to their name, so the variant of gcc for compiling windows executables will probably be named something like i686-w64-mingw32-gcc, but this might differ depending on the packages provided by your Linux distribution.
A: You need to use a cross-compiler to compile for a different OS. The most popular is MinGW.
| stackoverflow | {
"language": "en",
"length": 266,
"provenance": "stackexchange_0000F.jsonl.gz:905893",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670209"
} |
4528c635bd01e7878a1044ab16c2942e835d1ec1 | Stackoverflow Stackexchange
Q: Deep Populate With Condition I have a Message Collection which contains messages, models look like this.
var MessageSchema = new mongoose.Schema({
groupId: { type: Schema.ObjectId, ref: 'Group' },
});
var GroupSchema = new mongoose.Schema({
type: String,
groupMembers: [{ "user": { type: Schema.ObjectId, ref: 'User' } }],
});
Here is my code:
Message.find({ 'groupId': { $in: groupIds } })
.populate(
{ path: 'groupId', select: 'groupMembers type name level',
populate: { path: 'groupMembers.user', select: 'name _id photo', model: 'User' } })
How can I populate groupMembers.user only if groupId.type matches a condition?
I've tried this but :-(
{match:"groupId.type":'individual'}
| stackoverflow | {
"language": "en",
"length": 98,
"provenance": "stackexchange_0000F.jsonl.gz:905938",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670346"
} |
08be7f3c6c343ff66d8b5ef9b64a8588dc838f87 | Stackoverflow Stackexchange
Q: Typescript + jasmine.createSpy() I'm writing unit tests for my angular project with Typescript
When I try to create a mock for some service, I do it this way:
const serviceMock = <IMyService>{
method: _.noop,
};
beforeEach(inject($injector => {
testingService = new AccountingService(serviceMock);
spyOn(serviceMock, 'method').and.callFake(()=>'hello');
}
this works ok
but when I try to use jasmine.createSpy(), I get compilation errors:
const serviceMock = <IMyService>{
method: jasmine.createSpy('method').and.callFake(()=>'hello'),
};
Type '{ method: Spy;}' cannot be converted to type 'MyService'. Property 'getParams' is missing in type '{ method: Spy;}'.
But getParams is a private method of MyService
What am I doing wrong?
A: Try it with a mapped type
export type Spied<T> = {
[Method in keyof T]: jasmine.Spy;
};
and cast your service mock with it
const serviceMock = <Spied<IMyService>>{
Take a look here for a detailed description
A: Use the type already defined and used by Jasmine SpyObj<T>.
const serviceMock: jasmine.SpyObj<IMyService> = jasmine.createSpyObj<IMyService>('service',['method']);
This way every method of IMyService will be augmented with the Spy ones:
serviceMock.method.and.callFake(()=>'hello');
A: Try using the Partial type:
const serviceMock = <Partial<IMyService>>{
For more information check out: https://www.typescriptlang.org/docs/handbook/advanced-types.html
| stackoverflow | {
"language": "en",
"length": 179,
"provenance": "stackexchange_0000F.jsonl.gz:905964",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670443"
} |
ac7c7babd5ccced4ce05bb77450f09e968e765f4 | Stackoverflow Stackexchange
Q: onPress in TouchableOpacity doesn't trigger I need your help! My goal is to change the style of my button after I clicked it! I heard about direct manipulation and I decided to give it a try. Now I don't know why but the onPress inside my TouchableOpacity doesn't work. Here is the code:
<TouchableOpacity onPress={() => this.changeStyle}>
<TouchableHighlight style={styles.answer} ref="answer1">
<Text ...> Some Text </Text>
</TouchableHighlight>
</TouchableOpacity>
And here is my changeStyle function:
changeStyle() {
this.refs['answer1'].setNativeProps({
style: { backgroundColor: "#13a88a"}
});
}
Now i don't know why but the 'onPress' is never triggered.
Thank you for your answers!
A: If you want to execute the function by using this.changeStyle, write your onPress like so:
<TouchableOpacity onPress={this.changeStyle}/>
If you're going to pass a function within the onPress prop that executes this.changeStyle, write your onPress like so:
<TouchableOpacity onPress={() => this.changeStyle()}/>
P.S: Why do you have <TouchableHighlight/> inside a <TouchableOpacity/>? Just use one and add the onPress prop on it.
A: You need to import TouchableOpacity from react-native instead of importing it from react-native-gesture-handler. The version in react-native-gesture-handler is 100% broken. The version in react-native works.
| stackoverflow | {
"language": "en",
"length": 185,
"provenance": "stackexchange_0000F.jsonl.gz:905974",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670473"
} |
75f07188db137539fcc11e6a829a231dca4d5bd0 | Stackoverflow Stackexchange
Q: Why version is not printable? I have this one liner:
perl -Mversion -e 'our $VERSION = v1.02; print $VERSION'
The output is (it is not visible; there are two characters: 1, 2):
Why is the version not printable? I expect to see v1.02
A: I have found this DOC
print v9786; # prints SMILEY, "\x{263a}"
print v102.111.111; # prints "foo"
print 102.111.111; # same
Answering my question:
Although v1.02 is a v-string, it is not a string internally. When we want to print it, we should take extra steps, for example, using the version module as suggested above.
UPD
I found next solution (DOC):
printf "%vd", $VERSION; # prints "1.2"
UPD
And this should be read:
There are two ways to enter v-strings: a bare number with two or more decimal points, or a bare number with one or more decimal points and a leading 'v' character (also bare). For example:
$vs1 = 1.2.3; # encoded as \1\2\3
$vs2 = v1.2; # encoded as \1\2
| stackoverflow | {
"language": "en",
"length": 167,
"provenance": "stackexchange_0000F.jsonl.gz:905997",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670559"
} |
8686c2a57007b53d9375e6338e451d6bd808a9e3 | Stackoverflow Stackexchange
Q: Regex for Indonesian phone number I need a regular expression that can accept the following number format.
+62 361 222777
+62 813-444-5555
+62 812-3333-3333
+62 811 391 2103
+62 361-2277777
(0361) 227337
+62 8113912103
08134455555
0361-2277777 (*)
+62 812 3333 3333 (*)
+62 877 80803550 (*)
081339222111 (*)
081 339 222 111 (*)
+62 811338429196 (*)
Here is what I did so far:
\+62\s\d{3}[-\.\s]??\d{3}[-\.\s]??\d{3,4}|\(0\d{2,3}\)\s?\d+|0\d{2,3}\s?\d{6,7}|\+62\s?361\s?\d+|\+62\d+|\+62\s?(?:\d{3,}-)*\d{3,5}
The last six numbers (indicated by *) is not fully found by the regular expression I have above. Can someone help me? Thanks.
A: My advice would be to try breaking the problem into smaller ones.
You can solve like this:
(\+62 ((\d{3}([ -]\d{3,})([- ]\d{4,})?)|(\d+)))|(\(\d+\) \d+)|\d{3}( \d+)+|(\d+[ -]\d+)|\d+
You can see a demo here. I've broken the problem into smaller ones, divided like this ()|()|..., so you can see which cases I was solving by deleting individual bracket groups.
Even easier way is:
\+?([ -]?\d+)+|\(\d+\)([ -]\d+)
You can see this version here.
| Q: Regex for Indonesian phone number I need a regular expression that can accept the following number format.
+62 361 222777
+62 813-444-5555
+62 812-3333-3333
+62 811 391 2103
+62 361-2277777
(0361) 227337
+62 8113912103
08134455555
0361-2277777 (*)
+62 812 3333 3333 (*)
+62 877 80803550 (*)
081339222111 (*)
081 339 222 111 (*)
+62 811338429196 (*)
Here is what I did so far:
\+62\s\d{3}[-\.\s]??\d{3}[-\.\s]??\d{3,4}|\(0\d{2,3}\)\s?\d+|0\d{2,3}\s?\d{6,7}|\+62\s?361\s?\d+|\+62\d+|\+62\s?(?:\d{3,}-)*\d{3,5}
The last six numbers (indicated by *) is not fully found by the regular expression I have above. Can someone help me? Thanks.
A: My advice would be to try breaking problem into smaller ones.
You can solve like this:
(\+62 ((\d{3}([ -]\d{3,})([- ]\d{4,})?)|(\d+)))|(\(\d+\) \d+)|\d{3}( \d+)+|(\d+[ -]\d+)|\d+
You can see the demo here. I've broken the problem into smaller ones, divided like this ()|()|..., so you can see which cases I was solving by deleting individual bracketed groups.
An even easier way is:
\+?([ -]?\d+)+|\(\d+\)([ -]\d+)
You can see this version here.
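A quick way to sanity-check any candidate pattern against the whole sample list at once is a small Python harness. This is only a sketch using the simpler pattern from the answer above; re.fullmatch ensures the entire string is consumed, not just a prefix:

```python
import re

# The "even easier" pattern from the answer above.
PATTERN = re.compile(r'\+?([ -]?\d+)+|\(\d+\)([ -]\d+)')

samples = [
    "+62 361 222777", "+62 813-444-5555", "+62 812-3333-3333",
    "+62 811 391 2103", "+62 361-2277777", "(0361) 227337",
    "+62 8113912103", "08134455555", "0361-2277777",
    "+62 812 3333 3333", "+62 877 80803550", "081339222111",
    "081 339 222 111", "+62 811338429196",
]

for number in samples:
    # fullmatch (rather than search) catches patterns that only
    # match a prefix of the number.
    assert PATTERN.fullmatch(number), "failed on: " + number
print("all {} samples matched".format(len(samples)))
```

Running this against every format you need to accept makes it easy to see which alternative of the pattern each case falls into.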
| stackoverflow | {
"language": "en",
"length": 153,
"provenance": "stackexchange_0000F.jsonl.gz:906009",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670612"
} |
3b7a865474982097a0edba9f36a69a007a9526f0 | Stackoverflow Stackexchange
Q: How can I add a custom intrinsic in LLVM? I am new to LLVM. I have added a custom intrinsic foo_sqrt by creating IntrinsicsFoo.td in include/llvm/IR/. I then built the entire llvm project and the intrinsic foo has been added successfully (foo_sqrt has been added to the Intrinsic namespace). But I am unable to figure out how to add the pseudo instruction for it so that the Intrinsic::getDeclaration() function will work inside my pass. If I want my intrinsic foo to calculate the square root of a floating-point number, where should I add this instruction? I have searched a lot on the Internet and couldn't find anything concrete.
Here is the content of the td file:
let TargetPrefix = "foo" in { // All intrinsics start with "llvm.foo."
def int_foo_sqrt : GCCBuiltin<"__builtin_foo_sqrt">,
Intrinsic<[llvm_anyfloat_ty], [llvm_anyfloat_ty],
[IntrNoMem]>;
} // end TargetPrefix
| Q: How can I add a custom intrinsic in LLVM? I am new to LLVM. I have added a custom intrinsic foo_sqrt by creating IntrinsicsFoo.td in include/llvm/IR/. I then built the entire llvm project and the intrinsic foo has been added successfully (foo_sqrt has been added to the Intrinsic namespace). But I am unable to figure out how to add the pseudo instruction for it so that the Intrinsic::getDeclaration() function will work inside my pass. If I want my intrinsic foo to calculate the square root of a floating-point number, where should I add this instruction? I have searched a lot on the Internet and couldn't find anything concrete.
Here is the content of the td file:
let TargetPrefix = "foo" in { // All intrinsics start with "llvm.foo."
def int_foo_sqrt : GCCBuiltin<"__builtin_foo_sqrt">,
Intrinsic<[llvm_anyfloat_ty], [llvm_anyfloat_ty],
[IntrNoMem]>;
} // end TargetPrefix
| stackoverflow | {
"language": "en",
"length": 140,
"provenance": "stackexchange_0000F.jsonl.gz:906017",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670637"
} |
7936750a854a2bd0aead22b3853549dddbfb2d6b | Stackoverflow Stackexchange
Q: Knitr: print text from code block as R markdown I have the following R Markdown document:
---
title: "Test"
output: html_document
---
```{r cars, echo=FALSE}
myCondition <- TRUE
if(myCondition) {
print("## Car Summary")
}
summary(cars)
```
When I Knit it to HTML, the "Car Summary" header is rendered in "terminal-like" monospaced font as this:
## [1] "## Car Summary"
But I want it rendered as a header. How do I achieve this?
A: This should work for you:
```{r cars, echo=FALSE, results='asis'}
myCondition <- TRUE
if(myCondition) {
cat("## Car Summary")
}
```
```{r, echo=FALSE}
summary(cars)
```
Note that the option results = 'asis' is important for printing the header. Also note that print() will not work here; use cat() instead.
| Q: Knitr: print text from code block as R markdown I have the following R Markdown document:
---
title: "Test"
output: html_document
---
```{r cars, echo=FALSE}
myCondition <- TRUE
if(myCondition) {
print("## Car Summary")
}
summary(cars)
```
When I Knit it to HTML, the "Car Summary" header is rendered in "terminal-like" monospaced font as this:
## [1] "## Car Summary"
But I want it rendered as a header. How do I achieve this?
A: This should work for you:
```{r cars, echo=FALSE, results='asis'}
myCondition <- TRUE
if(myCondition) {
cat("## Car Summary")
}
```
```{r, echo=FALSE}
summary(cars)
```
Note that the option results = 'asis' is important for printing the header. Also note that print() will not work here; use cat() instead.
| stackoverflow | {
"language": "en",
"length": 119,
"provenance": "stackexchange_0000F.jsonl.gz:906035",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670712"
} |
74cba553ff01053516e51e5ba386b2571b692f22 | Stackoverflow Stackexchange
Q: How to create a custom file input in Ionic 2+ with a button styling? Here is my template:
<label>{{label}}</label>
<input type="file" (change)="fileUpload($event)" id="file-input" style="position:absolute; top: -999999px" #fileInp>
<button ion-button (click)="onClick()">Upload</button>
and the ts file:
@ViewChild('fileInp') fileInput: ElementRef;
@Input() label: string;
@Output() data = new EventEmitter<FormData>();
fileUpload(event) {
let fd = new FormData();
fd.append('file', event.srcElement.files[0]);
this.data.emit(fd);
}
onClick() {
this.fileInput.nativeElement.click();
}
Everything works fine on Android and in the browser, but not on iOS.
The same code works if I disable the button in the template.
A: You can't trigger the click on a file input on iOS. A workaround is to use css to set the opacity of the input element to 0, and place it just on top of the button. That way, the file input won't be seen, but it will be clicked when clicking the button.
<ion-item>
<label>{{label}}</label>
<input type="file" (change)="fileUpload($event)" id="file-input" style="opacity: 0" #fileInp>
<button ion-button (click)="onClick()">Upload</button>
</ion-item>
and then in the .scss file:
#file-input {
opacity: 0;
position: absolute;
top: 0;
width: 100%;
height: 100%;
left: 0;
z-index: 999;
}
There're probably some other ways to fix this issue, but that's how I managed on an app I worked on in the past.
| Q: How to create a custom file input in Ionic 2+ with a button styling? Here is my template:
<label>{{label}}</label>
<input type="file" (change)="fileUpload($event)" id="file-input" style="position:absolute; top: -999999px" #fileInp>
<button ion-button (click)="onClick()">Upload</button>
and the ts file:
@ViewChild('fileInp') fileInput: ElementRef;
@Input() label: string;
@Output() data = new EventEmitter<FormData>();
fileUpload(event) {
let fd = new FormData();
fd.append('file', event.srcElement.files[0]);
this.data.emit(fd);
}
onClick() {
this.fileInput.nativeElement.click();
}
Everything works fine on Android and in the browser, but not on iOS.
The same code works if I disable the button in the template.
A: You can't trigger the click on a file input on iOS. A workaround is to use css to set the opacity of the input element to 0, and place it just on top of the button. That way, the file input won't be seen, but it will be clicked when clicking the button.
<ion-item>
<label>{{label}}</label>
<input type="file" (change)="fileUpload($event)" id="file-input" style="opacity: 0" #fileInp>
<button ion-button (click)="onClick()">Upload</button>
</ion-item>
and then in the .scss file:
#file-input {
opacity: 0;
position: absolute;
top: 0;
width: 100%;
height: 100%;
left: 0;
z-index: 999;
}
There're probably some other ways to fix this issue, but that's how I managed on an app I worked on in the past.
A: I usually do the following.
<ion-item>
<ion-button color="primary" (click)="inputFile.click()">
<ion-icon name="attach"></ion-icon> Anexar documentos
</ion-button>
<input #inputFile id="input-file" style="opacity:0" type="file" (change)="uploadFiles($event)"
multiple/>
</ion-item>
A: The best way I found to do it is to use a label with the for attribute and customize it using CSS. When the user clicks on the label, the input is triggered. Keep in mind that the label's for attribute must match the input's id.
<label class="myFakeUploadButton" for="myFileInput">Upload</label>
<input type="file" id="myFileInput">
#myFileInput{
position: absolute;
opacity: 0;
}
.myFakeUploadButton{
/* Whatever you want */
}
| stackoverflow | {
"language": "en",
"length": 287,
"provenance": "stackexchange_0000F.jsonl.gz:906059",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44670800"
} |
2541517e19b07d69d2e98fcc570639271e074be4 | Stackoverflow Stackexchange
Q: Angular2 insert custom attribute in tag with directive I have an angular 2 project and I'm using PrimeNG.
I'm using a special tag with a lot of custom attributes and these attributes are always the same for this tag.
I want to externalize these attributes and I created a custom directive used to add all attributes I need.
The problem is that some of these attributes aren't native and maybe they aren't recognized. I get the error "Failed to execute 'setAttribute' on 'Element': '[myCustomAttribute]' is not a valid attribute name."
This is my directive:
@Directive({
selector: '[def-calendar]'
})
export class DefaultCalendarDirective {
constructor(private _elRef: ElementRef, private _renderer: Renderer2) {
}
ngOnInit() {
this._renderer.setAttribute(this._elRef.nativeElement, '[yearRange]', '1900:2100');
}
}
Anyone know how can I fix it?
I don't know if there is a way to copy the element as a string and manipulate the string to add my attributes.
Thanks
Fabrizio
A: This might be useful for you.
Angular2 add attribute with Renderer using a directive.
I think the square bracket between the yearRange is the culprit. Hope this helps.
| Q: Angular2 insert custom attribute in tag with directive I have an angular 2 project and I'm using PrimeNG.
I'm using a special tag with a lot of custom attributes and these attributes are always the same for this tag.
I want to externalize these attributes and I created a custom directive used to add all attributes I need.
The problem is that some of these attributes aren't native and maybe they aren't recognized. I get the error "Failed to execute 'setAttribute' on 'Element': '[myCustomAttribute]' is not a valid attribute name."
This is my directive:
@Directive({
selector: '[def-calendar]'
})
export class DefaultCalendarDirective {
constructor(private _elRef: ElementRef, private _renderer: Renderer2) {
}
ngOnInit() {
this._renderer.setAttribute(this._elRef.nativeElement, '[yearRange]', '1900:2100');
}
}
Anyone know how can I fix it?
I don't know if there is a way to copy the element as a string and manipulate the string to add my attributes.
Thanks
Fabrizio
A: This might be useful for you.
Angular2 add attribute with Renderer using a directive.
I think the square bracket between the yearRange is the culprit. Hope this helps.
A: You can't use renderer.setAttribute(...) to set attributes that don't belong to the native HTML element you're using.
To be accurate, yearRange isn't even an attribute of any native HTML element. It should be declared as an input in the directive's class in order to set values for it properly:
@Directive({
selector: '[def-calendar]'
})
export class DefaultCalendarDirective implements OnInit {
@Input() yearRange: string = '1900:2100';
constructor() {
}
public ngOnInit() {}
}
You can also change the input value by passing it a string (or you can also use binding instead) when you're using the directive on an element.
<someElement def-calendar yearRange="1900:2100"></someElement>
A: We can use setAttribute method of Renderer2 class
import {Directive, ElementRef, Renderer2, Input, HostListener} from '@angular/core';
@Directive({
selector: '[DirectiveName]'
})
export class DirectiveNameDirective {
constructor(public renderer : Renderer2,public hostElement: ElementRef){}
ngOnInit() {
this.renderer.setAttribute(this.hostElement.nativeElement, "data-name", "testname");
}
}
| stackoverflow | {
"language": "en",
"length": 319,
"provenance": "stackexchange_0000F.jsonl.gz:906175",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671144"
} |
313e096909ada2c092ff6b61deb5c7a193988e57 | Stackoverflow Stackexchange
Q: Jackson filtering out fields without annotations I was trying to filter out certain fields from serialization via SimpleBeanPropertyFilter using the following (simplified) code:
public static void main(String[] args) {
ObjectMapper mapper = new ObjectMapper();
SimpleFilterProvider filterProvider = new SimpleFilterProvider().addFilter("test",
SimpleBeanPropertyFilter.filterOutAllExcept("data1"));
try {
String json = mapper.writer(filterProvider).writeValueAsString(new Data());
System.out.println(json); // output: {"data1":"value1","data2":"value2"}
} catch (JsonProcessingException e) {
e.printStackTrace();
}
}
private static class Data {
public String data1 = "value1";
public String data2 = "value2";
}
As I use SimpleBeanPropertyFilter.filterOutAllExcept("data1"), I was expecting that the created serialized JSON string would contain only {"data1":"value1"}; however, I get {"data1":"value1","data2":"value2"}.
How to create a temporary writer that respects the specified filter (the ObjectMapper cannot be reconfigured in my case)?
Note: Because of the usage scenario in my application I can only accept answers that do not use Jackson annotations.
A: If for some reason MixIns does not suit you. You can try this approach:
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.setAnnotationIntrospector(new JacksonAnnotationIntrospector(){
@Override
public boolean hasIgnoreMarker(final AnnotatedMember m) {
List<String> exclusions = Arrays.asList("field1", "field2");
return exclusions.contains(m.getName())|| super.hasIgnoreMarker(m);
}
});
| Q: Jackson filtering out fields without annotations I was trying to filter out certain fields from serialization via SimpleBeanPropertyFilter using the following (simplified) code:
public static void main(String[] args) {
ObjectMapper mapper = new ObjectMapper();
SimpleFilterProvider filterProvider = new SimpleFilterProvider().addFilter("test",
SimpleBeanPropertyFilter.filterOutAllExcept("data1"));
try {
String json = mapper.writer(filterProvider).writeValueAsString(new Data());
System.out.println(json); // output: {"data1":"value1","data2":"value2"}
} catch (JsonProcessingException e) {
e.printStackTrace();
}
}
private static class Data {
public String data1 = "value1";
public String data2 = "value2";
}
As I use SimpleBeanPropertyFilter.filterOutAllExcept("data1"), I was expecting that the created serialized JSON string would contain only {"data1":"value1"}; however, I get {"data1":"value1","data2":"value2"}.
How to create a temporary writer that respects the specified filter (the ObjectMapper cannot be reconfigured in my case)?
Note: Because of the usage scenario in my application I can only accept answers that do not use Jackson annotations.
A: If for some reason MixIns does not suit you. You can try this approach:
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.setAnnotationIntrospector(new JacksonAnnotationIntrospector(){
@Override
public boolean hasIgnoreMarker(final AnnotatedMember m) {
List<String> exclusions = Arrays.asList("field1", "field2");
return exclusions.contains(m.getName())|| super.hasIgnoreMarker(m);
}
});
A: You would normally annotate your Data class to have the filter applied:
@JsonFilter("test")
class Data {
You have specified that you can't use annotations on the class. You could use mix-ins to avoid annotating the Data class.
@JsonFilter("test")
class DataMixIn {}
Mixins have to be specified on an ObjectMapper and you specify you don't want to reconfigure that. In such a case, you can always copy the ObjectMapper with its configuration and then modify the configuration of the copy. That will not affect the original ObjectMapper used elsewhere in your code. E.g.
ObjectMapper myMapper = mapper.copy();
myMapper.addMixIn(Data.class, DataMixIn.class);
And then write with the new ObjectMapper
String json = myMapper.writer(filterProvider).writeValueAsString(new Data());
System.out.println(json); // output: {"data1":"value1"}
A: The example of excluding properties by name:
public Class User {
private String name = "abc";
private Integer age = 1;
//getters
}
@JsonFilter("dynamicFilter")
public class DynamicMixIn {
}
User user = new User();
String[] propertiesToExclude = {"name"};
ObjectMapper mapper = new ObjectMapper()
.addMixIn(Object.class, DynamicMixIn.class);
FilterProvider filterProvider = new SimpleFilterProvider()
.addFilter("dynamicFilter", SimpleBeanPropertyFilter.filterOutAllExcept(propertiesToExclude));
mapper.setFilterProvider(filterProvider);
mapper.writeValueAsString(user); // {"name":"abc"}
You can instead of DynamicMixIn create MixInByPropName
@JsonIgnoreProperties(value = {"age"})
public class MixInByPropName {
}
ObjectMapper mapper = new ObjectMapper()
.addMixIn(Object.class, MixInByPropName.class);
mapper.writeValueAsString(user); // {"name":"abc"}
Note: If you want exclude property only for User you can change parameter Object.class of method addMixIn to User.class
Excluding properties by type you can create MixInByType
@JsonIgnoreType
public class MixInByType {
}
ObjectMapper mapper = new ObjectMapper()
.addMixIn(Integer.class, MixInByType.class);
mapper.writeValueAsString(user); // {"name":"abc"}
A: It seems you have to add an annotation to the bean class indicating which filter to use during serialization if you want the filter to work:
@JsonFilter("test")
public class Data {
public String data1 = "value1";
public String data2 = "value2";
}
EDIT
The OP has just added a note that only answers not using bean annotations will be accepted. In that case, if the fields you want to export are few, you can just retrieve that data and build a Map yourself; there seems to be no other way to do that.
Map<String, Object> map = new HashMap<String, Object>();
map.put("data1", obj.getData1());
...
// do the serialization on the map object just created.
If you want to exclude specific fields and keep most of them, you could do that with reflection. Following is a method I have written to transform a bean into a map; you can change the code to meet your own needs:
protected Map<String, Object> transBean2Map(Object beanObj){
if(beanObj == null){
return null;
}
Map<String, Object> map = new HashMap<String, Object>();
try {
BeanInfo beanInfo = Introspector.getBeanInfo(beanObj.getClass());
PropertyDescriptor[] propertyDescriptors = beanInfo.getPropertyDescriptors();
for (PropertyDescriptor property : propertyDescriptors) {
String key = property.getName();
if (!key.equals("class")
&& !key.endsWith("Entity")
&& !key.endsWith("Entities")
&& !key.endsWith("LazyInitializer")
&& !key.equals("handler")) {
Method getter = property.getReadMethod();
if(key.endsWith("List")){
Annotation[] annotations = getter.getAnnotations();
for(Annotation annotation : annotations){
if(annotation instanceof javax.persistence.OneToMany){
if(((javax.persistence.OneToMany)annotation).fetch().equals(FetchType.EAGER)){
List entityList = (List) getter.invoke(beanObj);
List<Map<String, Object>> dataList = new ArrayList<>();
for(Object childEntity: entityList){
dataList.add(transBean2Map(childEntity));
}
map.put(key,dataList);
}
}
}
continue;
}
Object value = getter.invoke(beanObj);
map.put(key, value);
}
}
} catch (Exception e) {
Logger.getAnonymousLogger().log(Level.SEVERE,"transBean2Map Error " + e);
}
return map;
}
But I recommend using Google Gson as the JSON deserializer/serializer. The main reason is that I hate dealing with exception handling; it just messes up the coding style.
And it's pretty easy to satisfy your need by taking advantage of the version-control annotation on the bean class like this:
@Since(GifMiaoMacro.GSON_SENSITIVE) //mark the field as sensitive data and will not export to JSON
private boolean firstFrameStored; // won't export this field to JSON.
You can define the Macro whether to export or hide the field like this:
public static final double GSON_SENSITIVE = 2.0f;
public static final double GSON_INSENSITIVE = 1.0f;
By default, Gson will export every field that is not annotated with @Since, so you don't have to do anything for fields you don't care about; they are simply exported.
And if there is some field you do not want to export to JSON, i.e. sensitive info, just add the annotation to that field. Then generate the JSON string with this:
private static Gson gsonInsensitive = new GsonBuilder()
.registerTypeAdapter(ObjectId.class,new ObjectIdSerializer()) // you can omit this line and the following line if you are not using mongodb
.registerTypeAdapter(ObjectId.class, new ObjectIdDeserializer()) //you can omit this
.setVersion(GifMiaoMacro.GSON_INSENSITIVE)
.disableHtmlEscaping()
.create();
public static String toInsensitiveJson(Object o){
return gsonInsensitive.toJson(o);
}
Then just use this:
String jsonStr = StringUtils.toInsensitiveJson(yourObj);
Since Gson is stateless, it's fine to use a static method to do the job. I have tried a lot of JSON serialization/deserialization frameworks in Java, but found Gson to be the sharpest in both performance and ease of use.
| stackoverflow | {
"language": "en",
"length": 939,
"provenance": "stackexchange_0000F.jsonl.gz:906180",
"question_score": "21",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671154"
} |
894f6d30650885b7a607c761d856cea5b1b211c0 | Stackoverflow Stackexchange
Q: TypeScript - Define a subset of type Say I have a type like so:
interface IAll {
foo: boolean,
bar: Function,
baz: number
}
instead of manually defining all the possible subtypes of IAll, like so:
interface IAll1 {
foo: boolean,
bar: Function,
}
interface IAll2 {
bar: Function,
baz: number
}
interface IAll3 {
foo: boolean,
}
interface IAll4 {
foo: boolean,
}
...etc
and then doing
type IAll = IAll1 | IAll2 | IAll3 ... etc.
Is there a way for TypeScript to statically check whether an object is a subtype or subset of another?
This is useful for some cases where we combine several subtypes or subsets to form a full type.
A: You can use Partial<T>. This will make all the properties in IAll optional:
type SubsetOfIAll = Partial<IAll>;
| Q: TypeScript - Define a subset of type Say I have a type like so:
interface IAll {
foo: boolean,
bar: Function,
baz: number
}
instead of manually defining all the possible subtypes of IAll, like so:
interface IAll1 {
foo: boolean,
bar: Function,
}
interface IAll2 {
bar: Function,
baz: number
}
interface IAll3 {
foo: boolean,
}
interface IAll4 {
foo: boolean,
}
...etc
and then doing
type IAll = IAll1 | IAll2 | IAll3 ... etc.
Is there a way for TypeScript to statically check whether an object is a subtype or subset of another?
This is useful for some cases where we combine several subtypes or subsets to form a full type.
A: You can use Partial<T>. This will make all the properties in IAll optional:
type SubsetOfIAll = Partial<IAll>;
| stackoverflow | {
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:906185",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671164"
} |
4d47b3240ebbf9e7df1af98a63b965d10161eca4 | Stackoverflow Stackexchange
Q: How to use STDIN twice from pipe I have a awk script something like
awk 'FNR==NR {col1[$1]++; col2[$2]++; next} {print $0, col2[$2] "/" length(col1)}' input input
But in case I have lot of files and need to use this script for concatenated files together like:
cat *all_input | awk 'FNR==NR {col1[$1]++; col2[$2]++; next} {print $0, col2[$2] "/" length(col1)}' STDIN STDIN
Does not work. How to use STDIN twice from pipe?
A: You don't need to use a pipe. If you are using bash, use process substitution, <(cmd), to achieve a redirection where the input or output of a process (some sequence of commands) appears as a temporary file.
awk 'FNR==NR {col1[$1]++; col2[$2]++; next} {print $0, col2[$2] "/" length(col1)}' <(cut -f3 5- input) <(cut -f3 5- input)
| Q: How to use STDIN twice from pipe I have a awk script something like
awk 'FNR==NR {col1[$1]++; col2[$2]++; next} {print $0, col2[$2] "/" length(col1)}' input input
But in case I have lot of files and need to use this script for concatenated files together like:
cat *all_input | awk 'FNR==NR {col1[$1]++; col2[$2]++; next} {print $0, col2[$2] "/" length(col1)}' STDIN STDIN
Does not work. How to use STDIN twice from pipe?
A: You don't need to use a pipe. If you are using bash, use process substitution, <(cmd), to achieve a redirection where the input or output of a process (some sequence of commands) appears as a temporary file.
awk 'FNR==NR {col1[$1]++; col2[$2]++; next} {print $0, col2[$2] "/" length(col1)}' <(cut -f3 5- input) <(cut -f3 5- input)
A: The answer to How to use STDIN twice from pipe is "you can't". If you want to use the data from stdin twice then you need to save it somewhere when you read it the first time so you have it next time. For example:
$ seq 3 |
awk '
BEGIN {
if ( ("mktemp"|getline line) > 0) tmp=line; else exit
ARGV[ARGC]=tmp; ARGC++
}
NR==FNR { print > tmp }
{ print FILENAME, NR, FNR, $0 }
' -
- 1 1 1
- 2 2 2
- 3 3 3
/var/folders/11/vlqr7jmn6jj3fglyl12lj0l00000gn/T/tmp.Y03l9pS7 4 1 1
/var/folders/11/vlqr7jmn6jj3fglyl12lj0l00000gn/T/tmp.Y03l9pS7 5 2 2
/var/folders/11/vlqr7jmn6jj3fglyl12lj0l00000gn/T/tmp.Y03l9pS7 6 3 3
or you can store it in an internal array or string and read it back from there later.
Having said that, your specific problem doesn't need anything that fancy, just a simple:
cat *all_input | awk 'FNR==NR {col1[$1]; col2[$2]++; next} {print $0, col2[$2] "/" length(col1)}' - *all_input
would do it but unless your files are huge all you really need is the store-it-in-array approach:
awk '{ col1[$1]; col2[$2]++; f0[NR]=$0; f2[NR]=$2 }
END {
for (nr=1; nr<=NR; nr++) {
print f0[nr], col2[f2[nr]] "/" length(col1)
}
}' *all_input
A: I don't know if that can help because I am not an awk expert, but any Linux application (including awk) can read stdin straight from /proc/self/fd/0
Note that this is way less portable than open(0) and will only work on Linux with readable procfs (nearly all Linux distributions today).
If the application allows for parallel file descriptor consumption, you can open that file descriptor twice and read from it twice.
self in the path designates the PID of the accessing application.
| stackoverflow | {
"language": "en",
"length": 401,
"provenance": "stackexchange_0000F.jsonl.gz:906229",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671307"
} |
5ae29c4a03b7a87d5c14e57b25d8d53be2b5e4e9 | Stackoverflow Stackexchange
Q: How to start a nodejs child process in separate cmd window or terminal I want to start a nodejs child process in a separate console window rather than listening to its data event.
as per the Documentation
with the detached option, the child is supposed to get its own console window, but that's not happening.
my code in main.js
const { spawn} = require("child_process");
var child = spawn("node", ["./count.js"], {
detached: true,
stdio: 'ignore'
});
in the count.js file, I have
console.log(`running in child process with PID ${process.pid}`)
A: The only solution I've found to work on windows 10 is to spawn a separate cmd.exe process entirely:
const { spawn} = require("child_process");
var child = spawn("cmd.exe", ["/c", "node", "count.js"], {
detached: true,
stdio: 'ignore'
});
Also be sure to add some delay to your child process so it doesn't quit before you can see it:
console.log(`running in child process with PID ${process.pid}`);
// wait 5 seconds before closing
setTimeout(() => true, 5000);
And finally, if you want your child window to be entirely independent from the parent (stays open even when the parent closes), you should unref it after you spawn it:
child.unref();
| Q: How to start a nodejs child process in separate cmd window or terminal I want to start a nodejs child process in a separate console window rather than listening to its data event.
as per the Documentation
with the detached option, the child is supposed to get its own console window, but that's not happening.
my code in main.js
const { spawn} = require("child_process");
var child = spawn("node", ["./count.js"], {
detached: true,
stdio: 'ignore'
});
in the count.js file, I have
console.log(`running in child process with PID ${process.pid}`)
A: The only solution I've found to work on windows 10 is to spawn a separate cmd.exe process entirely:
const { spawn} = require("child_process");
var child = spawn("cmd.exe", ["/c", "node", "count.js"], {
detached: true,
stdio: 'ignore'
});
Also be sure to add some delay to your child process so it doesn't quit before you can see it:
console.log(`running in child process with PID ${process.pid}`);
// wait 5 seconds before closing
setTimeout(() => true, 5000);
And finally, if you want your child window to be entirely independent from the parent (stays open even when the parent closes), you should unref it after you spawn it:
child.unref();
| stackoverflow | {
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:906235",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671330"
} |
1e3a4f310cdfd077f3ac9a7a78e9ec76230e2323 | Stackoverflow Stackexchange
Q: OSError Bad file descriptor when using Popen I wrote the following sample code to demonstrate my problem:
from subprocess import Popen
while 1:
try:
proc = Popen(['tail', '-3', '/var/log/syslog'])
except KeyboardInterrupt:
break
When I type ctrl + c, I'll get the traceback:
File "/usr/lib/python2.7/subprocess.py" line 1317, in _execute_child
os.close(errpipe_read)
OSError: [Errno 9] Bad file descriptor
I originally thought the file descriptor was shared between the spawned process and the parent process, so after googling I tried the following code:
proc = Popen(['tail', '-3', '/var/log/syslog'], preexec_fn=os.setpgrp)
But I still get that traceback. How can I solve this? Any suggestion is appreciated.
| Q: OSError Bad file descriptor when using Popen I wrote the following sample code to demonstrate my problem:
from subprocess import Popen
while 1:
try:
proc = Popen(['tail', '-3', '/var/log/syslog'])
except KeyboardInterrupt:
break
When I type ctrl + c, I'll get the traceback:
File "/usr/lib/python2.7/subprocess.py" line 1317, in _execute_child
os.close(errpipe_read)
OSError: [Errno 9] Bad file descriptor
I originally thought the file descriptor was shared between the spawned process and the parent process, so after googling I tried the following code:
proc = Popen(['tail', '-3', '/var/log/syslog'], preexec_fn=os.setpgrp)
But I still get that traceback. How can I solve this? Any suggestion is appreciated.
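For context, the traceback appears to arise because Ctrl+C can land while Popen is still setting up its internal error pipe on Python 2, so the interrupted cleanup touches an already-closed descriptor. One hedged workaround (a sketch, not a guaranteed fix for every interleaving) is to treat an EBADF OSError raised during the spawn the same way as the KeyboardInterrupt:

```python
import errno
from subprocess import Popen

while True:
    try:
        proc = Popen(['tail', '-3', '/var/log/syslog'])
        proc.wait()  # also avoids accumulating zombie children in the loop
    except KeyboardInterrupt:
        break
    except OSError as exc:
        # On Python 2, a Ctrl+C that interrupts Popen mid-setup can
        # surface as EBADF from the internal error-pipe cleanup rather
        # than as KeyboardInterrupt; treat it as the same signal to stop.
        if exc.errno == errno.EBADF:
            break
        raise
```

Calling proc.wait() inside the loop is an assumption on my part about the intended behavior; the original code respawns without waiting.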
| stackoverflow | {
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:906236",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671347"
} |
8f20dac3d165aa17b7dcf8d50b494a6e802a50bb | Stackoverflow Stackexchange
Q: How to test a Lucene Analyzer? I'm not getting the expected results from my Analyzer and would like to test the tokenization process.
The answer to this question: How to use a Lucene Analyzer to tokenize a String?
List<String> result = new ArrayList<String>();
TokenStream stream = analyzer.tokenStream(field, new StringReader(keywords));
try {
while(stream.incrementToken()) {
result.add(stream.getAttribute(TermAttribute.class).term());
}
}
catch(IOException e) {
// not thrown b/c we're using a string reader...
}
return result;
Uses the TermAttribute to extract the tokens from the stream. The problem is that TermAttribute is no longer in Lucene 6.
What has it been replaced by?
What would the equivalent be with Lucene 6.6.0?
A: I'm pretty sure it was replaced by CharTermAttribute javadoc
The ticket is pretty old, but maybe the code was kept around a bit longer:
https://issues.apache.org/jira/browse/LUCENE-2372
| Q: How to test a Lucene Analyzer? I'm not getting the expected results from my Analyzer and would like to test the tokenization process.
The answer to this question: How to use a Lucene Analyzer to tokenize a String?
List<String> result = new ArrayList<String>();
TokenStream stream = analyzer.tokenStream(field, new StringReader(keywords));
try {
while(stream.incrementToken()) {
result.add(stream.getAttribute(TermAttribute.class).term());
}
}
catch(IOException e) {
// not thrown b/c we're using a string reader...
}
return result;
Uses the TermAttribute to extract the tokens from the stream. The problem is that TermAttribute is no longer in Lucene 6.
What has it been replaced by?
What would the equivalent be with Lucene 6.6.0?
A: I'm pretty sure it was replaced by CharTermAttribute javadoc
The ticket is pretty old, but maybe the code was kept around a bit longer:
https://issues.apache.org/jira/browse/LUCENE-2372
| stackoverflow | {
"language": "en",
"length": 132,
"provenance": "stackexchange_0000F.jsonl.gz:906251",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671388"
} |
e4abb9beac1f1eebbf0b8cee6b0d718e60d520e4 | Stackoverflow Stackexchange
Q: How to perform row wise OR operation on a 2D numpy array? I have a numpy array.
[[1, 0, 1],
[1, 0, 0],
[0, 0, 1]]
I want to perform a row-wise OR operation on it so that the resulting array looks like this:
[1, 0, 1]
Is there a straightforward way of doing this without implementing loops?
I will be very grateful if someone could suggest something. Thanks
A: You could do this by calling any to generate a boolean mask and then cast to int to convert the True and False to 1 and 0 respectively:
In[193]:
a.any(0).astype(int)
Out[193]: array([1, 0, 1])
The first param to any is the axis arg, here we can see the differences between axis 0 and 1:
In[194]:
a.any(0)
Out[194]: array([ True, False, True], dtype=bool)
In[195]:
a.any(1)
Out[195]: array([ True, True, True], dtype=bool)
| Q: How to perform row wise OR operation on a 2D numpy array? I have a numpy array.
[[1, 0, 1],
[1, 0, 0],
[0, 0, 1]]
I want to perform a row-wise OR operation on it so that the resulting array looks like this:
[1, 0, 1]
Is there a straightforward way of doing this without implementing loops?
I will be very grateful if someone could suggest something. Thanks
A: You could do this by calling any to generate a boolean mask and then cast to int to convert the True and False to 1 and 0 respectively:
In[193]:
a.any(0).astype(int)
Out[193]: array([1, 0, 1])
The first param to any is the axis arg, here we can see the differences between axis 0 and 1:
In[194]:
a.any(0)
Out[194]: array([ True, False, True], dtype=bool)
In[195]:
a.any(1)
Out[195]: array([ True, True, True], dtype=bool)
A: If you'd prefer to stick with bitwise or (the | operator in Python is a bitwise or, whereas the or operator is the boolean or), you can use np.bitwise_or(). However, this only takes two arrays as input, so you can use Numpy's reduce() capabilities to combine all the subarrays in the array.
>>> a = np.array([[1, 0, 1],[1, 0, 0],[0, 0, 1]])
>>> np.bitwise_or.reduce(a, 0)
array([1, 0, 1])
I like how explicit this is, but the a.any() solution is common enough to not raise any eyebrows. The first argument for reduce is of course the array and the second is the axis you're reducing along. So you could also do it column-wise, if you preferred, or any other axis for that matter.
>>> a = np.array([[1, 0, 1],[1, 0, 0],[0, 0, 1]])
>>> np.bitwise_or.reduce(a, 1)
array([1, 1, 1])
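For completeness, the same reduction can be spelled with NumPy's logical (rather than bitwise) OR — a minor variant, not from the original answers:

```python
import numpy as np

a = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 0, 1]])

# Reduce with logical OR down axis 0, then cast back to 0/1 integers.
result = np.logical_or.reduce(a, axis=0).astype(int)
```

For a 0/1 array the logical and bitwise forms give identical results; logical_or also behaves sensibly if the array ever contains values other than 0 and 1.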
| stackoverflow | {
"language": "en",
"length": 281,
"provenance": "stackexchange_0000F.jsonl.gz:906257",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671407"
} |
62e47feb925cc96983e9d2d484d5d79054288eed | Stackoverflow Stackexchange
Q: What is =~ operator in elixir From the documentation I understand how =~ operator works to match regex, but I don't understand the general use of this operator.
For example, what does "foo" =~ "foo" mean?
How is it different from "foo" == "foo"?
A: It's not documented on that page, but it's documented in Kernel.=~/2 that when the RHS is a string, =~ checks if LHS contains RHS:
iex(1)> "foo" =~ "f"
true
iex(2)> "foo" =~ "o"
true
It does not implicitly convert RHS to regex:
iex(3)> "foo" =~ "."
false
If RHS is a regular expression, returns true if left matches right:
iex(4)> "abcd" =~ ~r/e/
false
| Q: What is =~ operator in elixir From the documentation I understand how =~ operator works to match regex, but I don't understand the general use of this operator.
For example, what does "foo" =~ "foo" mean?
How is it different from "foo" == "foo"?
A: It's not documented on that page, but it's documented in Kernel.=~/2 that when the RHS is a string, =~ checks if LHS contains RHS:
iex(1)> "foo" =~ "f"
true
iex(2)> "foo" =~ "o"
true
It does not implicitly convert RHS to regex:
iex(3)> "foo" =~ "."
false
If RHS is a regular expression, returns true if left matches right:
iex(4)> "abcd" =~ ~r/e/
false
| stackoverflow | {
"language": "en",
"length": 110,
"provenance": "stackexchange_0000F.jsonl.gz:906268",
"question_score": "40",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671445"
} |
4669e23bb1e67a3f33a430d941d4e45e1802aeca | Stackoverflow Stackexchange
Q: Laravel changing foreign key constraint I have a table with an already created foreign key constraint:
$table->foreign('cms_id')->references('id')->on('inventories');
I need to change this foreign key so that it references remote_id and not id column in the inventories table.
I have tried that by doing this:
public function up()
{
Schema::table('contents', function (Blueprint $table) {
$table->dropForeign('contents_cms_id_foreign');
$table->foreign('cms_id')->references('remote_id')->on('inventories');
});
}
But, I get:
[Illuminate\Database\QueryException]
SQLSTATE[HY000]: General error: 1215 Cannot add foreign key constraint
(SQL : alter table contents add constraint
contents_cms_id_foreign foreign k ey (cms_id) references
inventories (remote_id))
[PDOException]
SQLSTATE[HY000]: General error: 1215 Cannot add foreign key constraint
A: Add new foreign key in two steps, aside from separating to Schema::table:
public function up()
{
Schema::table('contents', function (Blueprint $table) {
$table->dropForeign('contents_cms_id_foreign');
$table->integer('cms_id')->unsigned();
});
Schema::table('contents', function (Blueprint $table) {
$table->foreign('cms_id')->references('remote_id')->on('inventories');
});
}
| Q: Laravel changing foreign key constraint I have a table with an already created foreign key constraint:
$table->foreign('cms_id')->references('id')->on('inventories');
I need to change this foreign key so that it references remote_id and not id column in the inventories table.
I have tried that by doing this:
public function up()
{
Schema::table('contents', function (Blueprint $table) {
$table->dropForeign('contents_cms_id_foreign');
$table->foreign('cms_id')->references('remote_id')->on('inventories');
});
}
But, I get:
[Illuminate\Database\QueryException]
SQLSTATE[HY000]: General error: 1215 Cannot add foreign key constraint
(SQL : alter table contents add constraint
contents_cms_id_foreign foreign k ey (cms_id) references
inventories (remote_id))
[PDOException]
SQLSTATE[HY000]: General error: 1215 Cannot add foreign key constraint
A: Add new foreign key in two steps, aside from separating to Schema::table:
public function up()
{
Schema::table('contents', function (Blueprint $table) {
$table->dropForeign('contents_cms_id_foreign');
$table->integer('cms_id')->unsigned();
});
Schema::table('contents', function (Blueprint $table) {
$table->foreign('cms_id')->references('remote_id')->on('inventories');
});
}
| stackoverflow | {
"language": "en",
"length": 129,
"provenance": "stackexchange_0000F.jsonl.gz:906294",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671525"
} |
c0063b43ad9301481ce137e4d28bcc0c9f515efb | Stackoverflow Stackexchange
Q: How to download dependency jars from S3 using gradle? How to download dependency jars from S3 using gradle?
Here is my build.gradle
group 'com.hello'
version '1.0-SNAPSHOT'
apply plugin: 'java'
apply plugin: 'maven-publish'
repositories {
mavenCentral()
maven {
url "https://s3.amazonaws.com/maven-repo-bucket"
credentials(AwsCredentials) {
accessKey "${System.getenv('aws_access_id')}"
secretKey "${System.getenv('aws_secret_key')}"
}
}
}
dependencies {
compile files('hello1.jar')
compile files('hello2.jar')
compile group: 'com.google.code.gson', name: 'gson', version: '2.7'
testCompile group: 'junit', name: 'junit', version: '4.11'
}
maven-repo-bucket is my s3 bucket name and hello1.jar and hello2.jar is the name of my jar files that are under my s3 bucket I don't know the group id and artifact id of these files but I want to download hello1.jar and hello2.jar and put into local maven repo just like any other dependency.
| Q: How to download dependency jars from S3 using gradle? How to download dependency jars from S3 using gradle?
Here is my build.gradle
group 'com.hello'
version '1.0-SNAPSHOT'
apply plugin: 'java'
apply plugin: 'maven-publish'
repositories {
mavenCentral()
maven {
url "https://s3.amazonaws.com/maven-repo-bucket"
credentials(AwsCredentials) {
accessKey "${System.getenv('aws_access_id')}"
secretKey "${System.getenv('aws_secret_key')}"
}
}
}
dependencies {
compile files('hello1.jar')
compile files('hello2.jar')
compile group: 'com.google.code.gson', name: 'gson', version: '2.7'
testCompile group: 'junit', name: 'junit', version: '4.11'
}
maven-repo-bucket is my s3 bucket name and hello1.jar and hello2.jar is the name of my jar files that are under my s3 bucket I don't know the group id and artifact id of these files but I want to download hello1.jar and hello2.jar and put into local maven repo just like any other dependency.
| stackoverflow | {
"language": "en",
"length": 124,
"provenance": "stackexchange_0000F.jsonl.gz:906299",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671548"
} |
ce5106e3aef74e4a1e6f177bc9a34f82ebc64da1 | Stackoverflow Stackexchange
Q: Wamp not working properly after enabling ssl module I am using WampServer Version 3.0.6 32bit with Windows 10 (32bit) in VMware Workstation 12 Player.
Apache 2.4.23 -
PHP 5.6.25 -
MySQL 5.7.14
and also using
Win32OpenSSL-1_1_0f
I am trying to run localhost over https.
I followed this link.
private.key & certificate.crt were created successfully and I have set them up properly in the httpd-ssl.conf file as per the instructions.
But when I enable the modules below by removing the # in the Apache config:
LoadModule ssl_module modules/mod_ssl.so
Include conf/extra/httpd-ssl.conf
After restarting the WampServer services, WampServer does not start properly and its icon only turns from red to orange, never green.
But if I disable the modules below and start WampServer, it works fine:
LoadModule ssl_module modules/mod_ssl.so
Include conf/extra/httpd-ssl.conf
When I run httpd -t it gives:
"Cannot load modules/mod_ssl.so into server: The operating system cannot run %1"
I have also enabled the PHP extension:
extension=php_openssl.dll
Please help me configure localhost with https.
| Q: Wamp not working properly after enabling ssl module I am using WampServer Version 3.0.6 32bit with Windows 10 (32bit) in VMware Workstation 12 Player.
Apache 2.4.23 -
PHP 5.6.25 -
MySQL 5.7.14
and also using
Win32OpenSSL-1_1_0f
I am trying to run localhost over https.
I followed this link.
private.key & certificate.crt were created successfully and I have set them up properly in the httpd-ssl.conf file as per the instructions.
But when I enable the modules below by removing the # in the Apache config:
LoadModule ssl_module modules/mod_ssl.so
Include conf/extra/httpd-ssl.conf
After restarting the WampServer services, WampServer does not start properly and its icon only turns from red to orange, never green.
But if I disable the modules below and start WampServer, it works fine:
LoadModule ssl_module modules/mod_ssl.so
Include conf/extra/httpd-ssl.conf
When I run httpd -t it gives:
"Cannot load modules/mod_ssl.so into server: The operating system cannot run %1"
I have also enabled the PHP extension:
extension=php_openssl.dll
Please help me configure localhost with https.
| stackoverflow | {
"language": "en",
"length": 154,
"provenance": "stackexchange_0000F.jsonl.gz:906303",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671557"
} |
2f78ae593de3380119d64ab8151016c63bfd99e4 | Stackoverflow Stackexchange
Q: How to replace null values with a specific value in Dataframe using spark in Java? I am trying to improve the accuracy of a Logistic regression algorithm implemented in Spark using Java. For this I'm trying to replace null or invalid values present in a column with the most frequent value of that column. For example:-
Name|Place
a |a1
a |a2
a |a2
|d1
b |a2
c |a2
c |
|
d |c1
In this case I'll replace all the NULL values in column "Name" with 'a' and in column "Place" with 'a2'. Till now I am able to extract only the most frequent columns in a particular column. Can you please help me with the second step on how to replace the null or invalid values with the most frequent values of that column.
A: You can use DataFrame.na.fill() to replace the null with some value
To update at once you can do as
val map = Map("Name" -> "a", "Place" -> "a2")
df.na.fill(map).show()
But if you want to replace a bad record too then you need to validate the bad records first. You can do this by using regular expression with like function.
| Q: How to replace null values with a specific value in Dataframe using spark in Java? I am trying to improve the accuracy of a Logistic regression algorithm implemented in Spark using Java. For this I'm trying to replace null or invalid values present in a column with the most frequent value of that column. For example:-
Name|Place
a |a1
a |a2
a |a2
|d1
b |a2
c |a2
c |
|
d |c1
In this case I'll replace all the NULL values in column "Name" with 'a' and in column "Place" with 'a2'. Till now I am able to extract only the most frequent columns in a particular column. Can you please help me with the second step on how to replace the null or invalid values with the most frequent values of that column.
A: You can use DataFrame.na.fill() to replace the null with some value
To update at once you can do as
val map = Map("Name" -> "a", "Place" -> "a2")
df.na.fill(map).show()
But if you want to replace a bad record too then you need to validate the bad records first. You can do this by using regular expression with like function.
A: You can use .na.fill function (it is a function in org.apache.spark.sql.DataFrameNaFunctions).
Basically the function you need is: def fill(value: String, cols: Seq[String]): DataFrame
You can choose the columns, and you choose the value you want to replace the null or NaN.
In your case it will be something like:
val df2 = df.na.fill("a", Seq("Name"))
.na.fill("a2", Seq("Place"))
A: You'll want to use the fill(String value, String[] columns) method of your dataframe, which automatically replaces Null values in a given list of columns with the value you specified.
So if you already know the value that you want to replace Null with...:
String[] colNames = {"Name"}
dataframe = dataframe.na.fill("a", colNames)
You can do the same for the rest of your columns.
A: In order to replace the NULL values with a given string I've used fill function present in Spark for Java. It accepts the word to be replaced with and a sequence of column names. Here is how I have implemented that:-
List<String> colList = new ArrayList<String>();
colList.add(cols[i]);
Seq<String> colSeq = scala.collection.JavaConverters.asScalaIteratorConverter(colList.iterator()).asScala().toSeq();
data=data.na().fill(word, colSeq);
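As a hedged aside (pandas, not the Spark API — useful only for checking the expected result of the two-step idea on a small sample), computing each column's most frequent value and then filling looks like:

```python
import pandas as pd

def fill_with_mode(df):
    # Build a column -> most-frequent-value map, then fill the gaps,
    # mirroring what Spark's df.na.fill(map) does once the modes are known.
    # Series.mode() ignores NaN, so only valid entries vote.
    fill_map = {col: df[col].mode().iloc[0] for col in df.columns}
    return df.fillna(fill_map)
```

In Spark itself the same column-to-mode map would be fed to the na.fill variant that takes a map of column names to replacement values.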
| stackoverflow | {
"language": "en",
"length": 368,
"provenance": "stackexchange_0000F.jsonl.gz:906316",
"question_score": "27",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671597"
} |
281a299eb167ae4ca77c12dfc90a17f10c8e1869 | Stackoverflow Stackexchange
Q: Incorrect value returned by IdentityHashMap - Why? To my understanding, the following code should print false as it is doing identity based comparison.
However, when I run the following code it is printing true:
public class Test1 {
public static void main(String[] args) {
IdentityHashMap m = new IdentityHashMap();
m.put("A", new String("B"));
System.out.println(m.remove("A", new String("B")));
}
}
Can someone help me understand why it is behaving like this?
A: You have actually hit a bug in JDK, see JDK-8178355. IdentityHashMap does not have custom implementation of the remove(K,V) method added to Map via default method, which is causing this issue.
| Q: Incorrect value returned by IdentityHashMap - Why? To my understanding, the following code should print false as it is doing identity based comparison.
However, when I run the following code it is printing true:
public class Test1 {
public static void main(String[] args) {
IdentityHashMap m = new IdentityHashMap();
m.put("A", new String("B"));
System.out.println(m.remove("A", new String("B")));
}
}
Can someone help me understand why it is behaving like this?
A: You have actually hit a bug in JDK, see JDK-8178355. IdentityHashMap does not have custom implementation of the remove(K,V) method added to Map via default method, which is causing this issue.
A: You put "A", new "B"
You remove "A", new "B"
So, yes, your assumption that this IdentityHashMap should not remove that value looks correct.
But you are using the remove(key, value) method from the base AbstractMap - which is not overriden by this specific subclass!
So, although the javadoc says:
This class implements the Map interface with a hash table, using reference-equality in place of object-equality when comparing keys (and values).
The (and values) part is (probably) implemented only for inserting key/value pairs.
So, the important part again comes from the javadoc:
This class is not a general-purpose Map implementation! While this class implements the Map interface, it intentionally violates Map's general contract, which mandates the use of the equals method when comparing objects. This class is designed for use only in the rare cases wherein reference-equality semantics are required.
My (probably opinionated) take-away: this class is a very special thing. It has a very clear, and narrow purpose. And you found an example where it falls apart. (which I don't find surprising: when you "change" semantics but decide to re-use existing code, it is almost inevitable to run into such kind of inconsistencies).
It could be seen as bug; and as the other answer confirms: it is a bug!
| stackoverflow | {
"language": "en",
"length": 312,
"provenance": "stackexchange_0000F.jsonl.gz:906362",
"question_score": "33",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671737"
} |
5d6435cdb45a52fde47c2ec9ca101dc6db8056d7 | Stackoverflow Stackexchange
Q: Facebook Webhook Subscription not showing in Lead Ads Testing Tool I have subscribed to leadgen topic for my page. Tested it using Webhooks console, it works fine. I created an ad with lead form and tried to create a test lead. But the webhook is not being triggered. What could be the reason?
I'm the admin of the app.
In Lead Ads Testing Tool I'm able to see the below message. When I navigate to Webhooks page I can see that I have already subscribed to leadgen topic.
WEBHOOK SUBSCRIPTION FOR THE SELECTED PAGE
There is no webhook subscription with Lead Ads for the selected page
| Q: Facebook Webhook Subscription not showing in Lead Ads Testing Tool I have subscribed to leadgen topic for my page. Tested it using Webhooks console, it works fine. I created an ad with lead form and tried to create a test lead. But the webhook is not being triggered. What could be the reason?
I'm the admin of the app.
In Lead Ads Testing Tool I'm able to see the below message. When I navigate to Webhooks page I can see that I have already subscribed to leadgen topic.
WEBHOOK SUBSCRIPTION FOR THE SELECTED PAGE
There is no webhook subscription with Lead Ads for the selected page
| stackoverflow | {
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:906366",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671751"
} |
0fa1464c82b469d242c79d74678f48f7a37aadeb | Stackoverflow Stackexchange
Q: How to call Javascript function on AMX page load? I'm trying to call a Javascript function on page load using invoke action in AMX page but its throwing exception.
I'm using following code.
My js file contains following code:
(function(){
if (!window.application) window.application = {};
DayView.gotoFirstOperation =function(){
var element =document.getElementById('box');
alert('Method executed');
if( 'null' != element){
element.scrollIntoView();
}
}; })();
In my invoke action method I'm calling the js function with the following code:
AdfmfContainerUtilities.invokeContainerJavaScriptFunction(AdfmfJavaUtilities.getFeatureName(), "DayView.gotoFirstOperation", new Object[]{});
I'm getting following exception:
invokeContainerUtilitiesMethod 'invokeContainerJavaScriptFunction' encountered an error[ERROR[oracle.adfmf.framework.exception.AdfException]-JS Response returned a nil response.].
Is there any other way I can call the js function on AMX page load?
A: Try to add that code inside amx:facet of amx page:
And remember to include your js file to maf-feature.xml content list.
<amx:verbatim id="v1">
<![CDATA[
<script type="text/javascript">
document.onload = myMethod();
</script>
]]>
</amx:verbatim>
| Q: How to call Javascript function on AMX page load? I'm trying to call a Javascript function on page load using invoke action in AMX page but its throwing exception.
I'm using following code.
My js file contains following code:
(function(){
if (!window.application) window.application = {};
DayView.gotoFirstOperation =function(){
var element =document.getElementById('box');
alert('Method executed');
if( 'null' != element){
element.scrollIntoView();
}
}; })();
In my invoke action method I'm calling the js function with the following code:
AdfmfContainerUtilities.invokeContainerJavaScriptFunction(AdfmfJavaUtilities.getFeatureName(), "DayView.gotoFirstOperation", new Object[]{});
I'm getting following exception:
invokeContainerUtilitiesMethod 'invokeContainerJavaScriptFunction' encountered an error[ERROR[oracle.adfmf.framework.exception.AdfException]-JS Response returned a nil response.].
Is there any other way I can call the js function on AMX page load?
A: Try to add that code inside amx:facet of amx page:
And remember to include your js file to maf-feature.xml content list.
<amx:verbatim id="v1">
<![CDATA[
<script type="text/javascript">
document.onload = myMethod();
</script>
]]>
</amx:verbatim>
| stackoverflow | {
"language": "en",
"length": 139,
"provenance": "stackexchange_0000F.jsonl.gz:906381",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44671793"
} |
f76b302dfedcbc16b8a6207de41a53624ddd5d71 | Stackoverflow Stackexchange
Q: Share Multiple images + Text in Whatsapp in Android I want to share multiple images with a single text on Whatsapp.
I am using this code
Intent intent = new Intent();
intent.setAction(Intent.ACTION_SEND_MULTIPLE);
intent.putExtra(Intent.EXTRA_TEXT, "Text caption message!!");
intent.putExtra(Intent.EXTRA_HTML_TEXT, "<html>Text caption message!!");
intent.setType("text/plain");
intent.setType("image/jpeg");
intent.setPackage("com.whatsapp");
intent.putParcelableArrayListExtra(Intent.EXTRA_STREAM, files);
startActivity(intent);
This code is working fine but the problem is that the text is coming on every image. I want the text to be placed on only one image. Is there any way I can do that? TIA
| Q: Share Multiple images + Text in Whatsapp in Android I want to share multiple images with a single text on Whatsapp.
I am using this code
Intent intent = new Intent();
intent.setAction(Intent.ACTION_SEND_MULTIPLE);
intent.putExtra(Intent.EXTRA_TEXT, "Text caption message!!");
intent.putExtra(Intent.EXTRA_HTML_TEXT, "<html>Text caption message!!");
intent.setType("text/plain");
intent.setType("image/jpeg");
intent.setPackage("com.whatsapp");
intent.putParcelableArrayListExtra(Intent.EXTRA_STREAM, files);
startActivity(intent);
This code is working fine but the problem is that the text is coming on every image. I want the text to be placed on only one image. Is there any way I can do that? TIA
| stackoverflow | {
"language": "en",
"length": 84,
"provenance": "stackexchange_0000F.jsonl.gz:906445",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672007"
} |
2ad7bd7b973a0d5c0cd7b2b87569b66e9d02a93d | Stackoverflow Stackexchange
Q: Call child component method from parent component in reactjs I have a requirement to call a child component method from a parent component
in reactjs. I have tried using refs but am not able to do it. Can anyone please suggest a solution.
Thanks.
A: If using React Hooks, you can make use of useRef and useImperativeHandle hooks.
In the child component, add the functions in the hook.
const Child = forwardRef((props, ref) => {
const printSomething = () =>{
console.log('printing from child function')
}
useImperativeHandle(ref, () => ({
printSomething: printSomething
}));
return <h1>Child Component</h1>;
});
Call the exposed function from the parent with the ref.
const Parent = () => {
const childRef = useRef();
return (
<div>
<Child ref={childRef} />
<button onClick={() => childRef.current.printSomething()}>Click</button>
</div>
);
};
| Q: Call child component method from parent component in reactjs I have a requirement to call a child component method from a parent component
in reactjs. I have tried using refs but am not able to do it. Can anyone please suggest a solution.
Thanks.
A: If using React Hooks, you can make use of useRef and useImperativeHandle hooks.
In the child component, add the functions in the hook.
const Child = forwardRef((props, ref) => {
const printSomething = () =>{
console.log('printing from child function')
}
useImperativeHandle(ref, () => ({
printSomething: printSomething
}));
return <h1>Child Component</h1>;
});
Call the exposed function from the parent with the ref.
const Parent = () => {
const childRef = useRef();
return (
<div>
<Child ref={childRef} />
<button onClick={() => childRef.current.printSomething()}>Click</button>
</div>
);
};
A: Don't :)
Hoist the function to the parent and pass data down as props. You can pass the same function down, in case the child needs to call it also.
https://facebook.github.io/react/docs/lifting-state-up.html
A: You can assign a ref to the child component and then call the function form parent like
class Parent extends React.Component {
callChildFunction = () => {
this.child.handleActionParent(); ///calling a child function here
}
render(){
return (
<div>
{/* other things */}
<Child ref={(cd) => this.child = cd}/>
</div>
)
}
}
class Child extends React.Component {
handleActionParent = () => {
console.log('called from parent')
}
render() {
return (
{/*...*/}
)
}
}
A: You can pass a registerCallback prop to your child, call it from componentDidMount, and pass a reference to your child component method; then you can call that method at any time.
A: in your parent you can create a reference
in the constructor:
this.child = React.createRef();
in any function:
execute=(comment)=>{
this.child.current.addComment();
}
render(){
return (
<div>
<Messages ref={this.child} comment={this.state.comment}/>
</div>
)
}
and in the Message(child) component
addComment=()=>{
console.log("Pi ", this.props);
};
| stackoverflow | {
"language": "en",
"length": 304,
"provenance": "stackexchange_0000F.jsonl.gz:906448",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672021"
} |
bca88ecb34cc65804568661a1895cc0f23043b03 | Stackoverflow Stackexchange
Q: Not able to take screenshot for a specific element in protractor I want to take a snapshot of an element using protractor, and protractor supports element.takeScreenshot(). However, when I use it, it throws a session error (below)
element(by.model('model.username')).takeScreenshot().then(ab=>{
});
Error
**- Failed: GET /session/5d58e1ca-f55d-4b51-aee8-1d518498cb35/element/0/screenshot
Build info: version: '3.4.0', revision: 'unknown', time: 'unknown'
System info: host: 'INBEN10174', ip: '157.237.220.180', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_121'
Driver info: driver.version: unknown**
A: You can screenshot the entire page and then crop the image to the element you want:
const fs = require('fs');
const PNG = require('pngjs').PNG;
var elem = element(by.model('model.username'));
promise.all([
elem.getLocation(),
elem.getSize(),
browser.takeScreenshot()
]).then(function(result) {
var src = PNG.sync.read(Buffer.from(result[2], 'base64'));
var dst = new PNG({width: result[1].width, height: result[1].height});
PNG.bitblt(src, dst, result[0].x, result[0].y, dst.width, dst.height, 0, 0);
fs.writeFileSync('out.png', PNG.sync.write(dst));
});
This will output a .png image of the selected element.
As mentioned below, you will need to make sure the element is on the screen prior to this; which is achievable like so:
var elem = element(by.model('model.username'));
browser.actions().mouseMove(elem).perform();
| Q: Not able to take screenshot for a specific element in protractor I want to take a snapshot of an element using protractor, and protractor supports element.takeScreenshot(). However, when I use it, it throws a session error (below)
element(by.model('model.username')).takeScreenshot().then(ab=>{
});
Error
**- Failed: GET /session/5d58e1ca-f55d-4b51-aee8-1d518498cb35/element/0/screenshot
Build info: version: '3.4.0', revision: 'unknown', time: 'unknown'
System info: host: 'INBEN10174', ip: '157.237.220.180', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_121'
Driver info: driver.version: unknown**
A: You can screenshot the entire page and then crop the image to the element you want:
const fs = require('fs');
const PNG = require('pngjs').PNG;
var elem = element(by.model('model.username'));
promise.all([
elem.getLocation(),
elem.getSize(),
browser.takeScreenshot()
]).then(function(result) {
var src = PNG.sync.read(Buffer.from(result[2], 'base64'));
var dst = new PNG({width: result[1].width, height: result[1].height});
PNG.bitblt(src, dst, result[0].x, result[0].y, dst.width, dst.height, 0, 0);
fs.writeFileSync('out.png', PNG.sync.write(dst));
});
This will output a .png image of the selected element.
As mentioned below, you will need to make sure the element is on the screen prior to this; which is achievable like so:
var elem = element(by.model('model.username'));
browser.actions().mouseMove(elem).perform();
A: As said by @suresh-salloju, it's a new feature, and even on my chromedriver 2.30 and selenium 3.4.0 it throws the same error.
If you want to be able to take a screenshot of an element you can maybe use protractor-image-comparison. The methods saveElement or checkElement can help with testing. Only be sure that you scroll the element in the viewport.
| stackoverflow | {
"language": "en",
"length": 231,
"provenance": "stackexchange_0000F.jsonl.gz:906499",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672177"
} |
bbd095c4545fede45739ab18d749426dd3b89295 | Stackoverflow Stackexchange
Q: Import nameof function from the ts-nameof package that is not exported in d.ts I found an interesting package and want to use it in my typescript application: https://github.com/dsherret/ts-nameof
But I cannot import nameof function. It is not exported in d.ts file:
declare module "ts-nameof" {
interface Api {
...
}
var func: Api;
export = func;
}
declare function nameof<T>(func?: (obj: T) => void): string;
declare function nameof(obj: Object | null | undefined): string;
declare namespace nameof {
function full<T>(periodIndex?: number): string;
function full(obj: Object | null | undefined, periodIndex?: number): string;
}
How should I import nameof function into my typescript module?
For import 'ts-nameof'; I get an Uncaught ReferenceError: nameof is not defined error.
A: Add this into tsd.d.ts:
/// <reference path="../node_modules/ts-nameof/ts-nameof.d.ts" />
Make sure to put correct path to node_modules
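If all you need at runtime is the name of a variable or property (without the compile-time transform that ts-nameof performs), a plain object-shorthand trick works in any JS/TS codebase — a sketch, not part of the ts-nameof API:

```javascript
// Runtime stand-in for nameof: pass a variable via object shorthand
// and read back the single key, e.g. nameOf({ pageSize }) -> "pageSize".
function nameOf(wrapper) {
  return Object.keys(wrapper)[0];
}

const pageSize = 25;
const label = nameOf({ pageSize }); // "pageSize"
```

Unlike the real nameof, this is not erased at compile time, but it survives refactor-renames the same way.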
| Q: Import nameof function from the ts-nameof package that is not exported in d.ts I found an interesting package and want to use it in my typescript application: https://github.com/dsherret/ts-nameof
But I cannot import nameof function. It is not exported in d.ts file:
declare module "ts-nameof" {
interface Api {
...
}
var func: Api;
export = func;
}
declare function nameof<T>(func?: (obj: T) => void): string;
declare function nameof(obj: Object | null | undefined): string;
declare namespace nameof {
function full<T>(periodIndex?: number): string;
function full(obj: Object | null | undefined, periodIndex?: number): string;
}
How should I import nameof function into my typescript module?
For import 'ts-nameof'; I get an Uncaught ReferenceError: nameof is not defined error.
A: Add this into tsd.d.ts:
/// <reference path="../node_modules/ts-nameof/ts-nameof.d.ts" />
Make sure to put correct path to node_modules
| stackoverflow | {
"language": "en",
"length": 133,
"provenance": "stackexchange_0000F.jsonl.gz:906542",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672308"
} |
4dbc54554626bb5a9175d2821c93a74ddaf36a42 | Stackoverflow Stackexchange
Q: iOS how to add a provisioning profile I am using Xcode Version 8.3.3 (8E3004b). I have an app developed that I would like to deploy to Apple's App Store.
In order to Archive and deploy, I understand I first need a provisioning profile. So, in the developer console, I set up an iOS Distribution Provisioning Profile.
I also have the following certificates:
On my MacBook, I added the following certificates to the key chain:
Then when I go to Xcode, I would expect to have a Provisioning Profile:
But as you can see, it Failed to create provisioning profile and No profiles for 'com.ionicframework.thewhozoo912107' were found.
I am obviously missing some step in order to create the Provisioning Profile in order to distribute the app to the Apple App Store.
Question
If anyone can suggest what I need to do in order to create a working provisioning profile in order to distribute the app, I would appreciate the help.
More info:
A: Solution:
I fixed this in Xcode by unchecking Automatically Manage Signing, and then selecting my provisioning profile.
| Q: iOS how to add a provisioning profile I am using Xcode Version 8.3.3 (8E3004b). I have an app developed that I would like to deploy to Apple's App Store.
In order to Archive and deploy, I understand I first need a provisioning profile. So, in the developer console, I set up an iOS Distribution Provisioning Profile.
I also have the following certificates:
On my MacBook, I added the following certificates to the key chain:
Then when I go to Xcode, I would expect to have a Provisioning Profile:
But as you can see, it Failed to create provisioning profile and No profiles for 'com.ionicframework.thewhozoo912107' were found.
I am obviously missing some step in order to create the Provisioning Profile in order to distribute the app to the Apple App Store.
Question
If anyone can suggest what I need to do in order to create a working provisioning profile in order to distribute the app, I would appreciate the help.
More info:
A: Solution:
I fixed this in Xcode by unchecking Automatically Manage Signing, and then selecting my provisioning profile.
A: I had that problem as well when I was developing my first app. It took a while before I managed to fix it. Try to plug your iOS device into your computer, and then it should work.
| stackoverflow | {
"language": "en",
"length": 216,
"provenance": "stackexchange_0000F.jsonl.gz:906543",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672311"
} |
131b6370bd82610dd288f5d3cd813df387be699c | Stackoverflow Stackexchange
Q: How can I read from a computed field? In Odoo 9 I have a computed field in a model called page_price.
class Page(models.Model):
page_price = fields.Float(compute='compute_page_price')
def compute_page_price(self):
self.page_price = 7 # this value is an example
If I show this field in a view, it shows 7.
The problem is when I try to get the value from another model.
class Book(models.Model):
book_price = fields.Float(compute='compute_book_price')
def compute_book_price(self):
# page_id has the value of a Page row id
page_price = self.env['Page'].search([('id', '=', page_id)])[0].page_price
self.book_price = page_price * 10
Here, the value of book_price is always 0 instead of 70.
The value of page_price inside the compute_book_price function is 0 instead of 7.
Why is that and how can I obtain the correct value?
Note: If the page_price field is defined as a Float field instead of a computed field, the result of book_price is 70.
| Q: How can I read from a computed field? In Odoo 9 I have a computed field in a model called page_price.
class Page(models.Model):
page_price = fields.Float(compute='compute_page_price')
def compute_page_price(self):
self.page_price = 7 # this value is an example
If I show this field in a view, it shows 7.
The problem is when I try to get the value from another model.
class Book(models.Model):
book_price = fields.Float(compute='compute_book_price')
def compute_book_price(self):
# page_id has the value of a Page row id
page_price = self.env['Page'].search([('id', '=', page_id)])[0].page_price
self.book_price = page_price * 10
Here, the value of book_price is always 0 instead of 70.
The value of page_price inside the compute_book_price function is 0 instead of 7.
Why is that and how can I obtain the correct value?
Note: If the page_price field is defined as a Float field instead of a computed field, the result of book_price is 70.
| stackoverflow | {
"language": "en",
"length": 146,
"provenance": "stackexchange_0000F.jsonl.gz:906546",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672321"
} |
7eedc1ebe44df5079b5546772d06d60f75def645 | Stackoverflow Stackexchange
Q: I am running json_object in Oracle 12.1.0.2 and it is raising an error. Why? I have run this query. I am new to Oracle.
> CREATE TABLE sample_emp ( empno NUMBER(4) NOT NULL PRIMARY
> KEY, ename VARCHAR2(10), hiredate DATE, sal
> NUMBER(7,2) );
> INSERT INTO sample_emp VALUES (7369, 'Smith',
> '17-DEC-1980', 800);
> INSERT INTO sample_emp VALUES (7782, 'Clark',
> '09-JUN-1981', 2450);
> INSERT INTO sample_emp VALUES (7839, 'King',
> '17-NOV-1981', 5000); COMMIT;
>
> SELECT * from sample_emp SELECT json_object ('ename' VALUE
> d.ename,'hiredate' VALUE d.hiredate) FROM sample_emp d
oracle version
error coming
Can we check whether a particular keyword exists in Oracle, so that I can verify that
json_object
is present in Oracle? And why is this error occurring?
A: Working fine for "Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production". You can try using concat operator.
SELECT '{"ename":"'||d.ename||'","hiredate":"'||d.hiredate||'"}' from sample_emp d;
| Q: I am running json_object in Oracle 12.1.0.2 and it is raising an error. Why? I have run this query. I am new to Oracle.
> CREATE TABLE sample_emp ( empno NUMBER(4) NOT NULL PRIMARY
> KEY, ename VARCHAR2(10), hiredate DATE, sal
> NUMBER(7,2) );
> INSERT INTO sample_emp VALUES (7369, 'Smith',
> '17-DEC-1980', 800);
> INSERT INTO sample_emp VALUES (7782, 'Clark',
> '09-JUN-1981', 2450);
> INSERT INTO sample_emp VALUES (7839, 'King',
> '17-NOV-1981', 5000); COMMIT;
>
> SELECT * from sample_emp SELECT json_object ('ename' VALUE
> d.ename,'hiredate' VALUE d.hiredate) FROM sample_emp d
oracle version
error coming
Can we check whether a particular keyword exists in Oracle, so that I can verify that
json_object
is present in Oracle? And why is this error occurring?
A: Working fine for "Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production". You can try using concat operator.
SELECT '{"ename":"'||d.ename||'","hiredate":"'||d.hiredate||'"}' from sample_emp d;
A: JSON_OBJECT is available from DB 12.2 onwards
| stackoverflow | {
"language": "en",
"length": 157,
"provenance": "stackexchange_0000F.jsonl.gz:906548",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672324"
} |
6eef64a3fb19019df2ce8aadcbcbc8a9b055a593 | Stackoverflow Stackexchange
Q: Don't execute jenkins job if svn polling failed I have a jenkins job that is polling svn every 5 minutes and executing my unit tests if some changes occurred.
My problem is, the svn polling fails randomly due to an unreachable proxy.
org.tmatesoft.svn.core.SVNAuthenticationException: svn: E170001: HTTP proxy authorization failed
I guess this problem is related to some issues with the proxy we use and not the configuration of my job or machine.
My question now is, can I skip the job if the svn poll is failing and only execute if it was successful?
So that I don't have failed builds in my job list because of the proxy issue.
Or does anyone have an idea why this random error can occur?
Fyi, I don't want the proxy problem itself fixed, as this is probably happening due to network problems, but I just want to skip the execution of the job if the svn poll fails.
A: Instead of polling svn, you can try a post-commit hook so that svn notifies Jenkins of changes; see https://wiki.jenkins-ci.org/display/JENKINS/Subversion+Plugin?focusedCommentId=43352266
| Q: Don't execute jenkins job if svn polling failed I have a jenkins job that is polling svn every 5 minutes and executing my unit tests if some changes occurred.
My problem is, the svn polling fails randomly due to an unreachable proxy.
org.tmatesoft.svn.core.SVNAuthenticationException: svn: E170001: HTTP proxy authorization failed
I guess this problem is related to some issues with the proxy we use and not the configuration of my job or machine.
My question now is, can I skip the job if the svn poll is failing and only execute if it was successful?
So that I don't have failed builds in my job list because of the proxy issue.
Or does anyone have an idea why this random error can occur?
Fyi, I don't want the proxy problem itself fixed, as this is probably happening due to network problems, but I just want to skip the execution of the job if the svn poll fails.
A: Instead of polling svn, you can try a post-commit hook so that svn notifies Jenkins of changes; see https://wiki.jenkins-ci.org/display/JENKINS/Subversion+Plugin?focusedCommentId=43352266
A: In order to prevent running the next action when the previous action has failed,
add set -e to the top of your shell script.
The -e option makes the shell exit immediately when any command returns a non-zero status (which means it failed). Also, @mikep's answer is a useful thought: instead of polling, a post-commit hook is more efficient.
| stackoverflow | {
"language": "en",
"length": 227,
"provenance": "stackexchange_0000F.jsonl.gz:906568",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672379"
} |
0e3f6cd4beeecb332b8af3f289ce280cd39e678a | Stackoverflow Stackexchange
Q: Facet wrap radar plot with three apexes in R I have created the following plot which gives the shape of the plot I desire. But when I facet wrap it, the shapes no longer remain triangular and become almost cellular. How can I keep the triangular shape after faceting?
Sample data:
lvls <- c("a","b","c","d","e","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15")
df <- data.frame(Product = factor(rep(lvls, 3)),
variable = c(rep("Ingredients", 20),
rep("Defence", 20),
rep("Benefit", 20)),
value = rnorm(60, mean = 5))
Now when I use this code, I get the shapes I desire.
ggplot(df,
aes(x = variable,
y = value,
color = Product,
group = Product)) +
geom_polygon(fill = NA) +
coord_polar()
However, the products are all on top of one another so ideally I would like to facet wrap.
ggplot(df,
aes(x = variable,
y = value,
color = Product,
group = Product)) +
geom_polygon(fill = NA) +
coord_polar() +
facet_wrap(~Product)
But when I facet wrap, the shapes become oddly cellular and not triangular (straight lines from point to point). Any ideas on how to alter this output?
Thanks.
| Q: Facet wrap radar plot with three apexes in R I have created the following plot which gives the shape of the plot I desire. But when I facet wrap it, the shapes no longer remain triangular and become almost cellular. How can I keep the triangular shape after faceting?
Sample data:
lvls <- c("a","b","c","d","e","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15")
df <- data.frame(Product = factor(rep(lvls, 3)),
variable = c(rep("Ingredients", 20),
rep("Defence", 20),
rep("Benefit", 20)),
value = rnorm(60, mean = 5))
Now when I use this code, I get the shapes I desire.
ggplot(df,
aes(x = variable,
y = value,
color = Product,
group = Product)) +
geom_polygon(fill = NA) +
coord_polar()
However, the products are all on top of one another so ideally I would like to facet wrap.
ggplot(df,
aes(x = variable,
y = value,
color = Product,
group = Product)) +
geom_polygon(fill = NA) +
coord_polar() +
facet_wrap(~Product)
But when I facet wrap, the shapes become oddly cellular and not triangular (straight lines from point to point). Any ideas on how to alter this output?
Thanks.
| stackoverflow | {
"language": "en",
"length": 173,
"provenance": "stackexchange_0000F.jsonl.gz:906601",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672480"
} |
5ed61642f2c6514a175b32fd30dbf90c979928ce | Stackoverflow Stackexchange
Q: Hadoop metrics via REST API: allocatedMB, allocatedVcores and runningContainers is always -1 we try to report our monthly hadoop application metrics for each user and use the REST API using the following REST API path:
http://[host:port]/ws/v1/cluster/app
The returned data looks good except allocatedMB, allocatedVcores and runningContainers which is always -1.
Can anybody explain why that is?
A: If there are no running jobs on your cluster when you call the RM cluster apps API, you are looking at historical data. Based on the Hadoop code (QueueStatisticsPBImpl.java under hadoop-yarn-project/), -1 is used as a default value when the RM doesn't know the value of that item.
@Override
public long getAllocatedVCores() {
QueueStatisticsProtoOrBuilder p = viaProto ? proto : builder;
return (p.hasAllocatedVCores()) ? p.getAllocatedVCores() : -1;
}
Since the other fields are values that would be stored in the Job History Server (other than allocatedMB, allocatedVCores, and runningContainers), they contain actual values.
https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html
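When post-processing the /ws/v1/cluster/apps response for per-user reporting, it helps to translate the -1 sentinel into an explicit missing value before aggregating — a sketch with a hypothetical app object shape (normalizeAppMetrics is not a Hadoop API):

```javascript
// Map the RM's -1 "unknown" sentinel to null so that monthly
// aggregations do not silently subtract from user totals.
function normalizeAppMetrics(app) {
  const desentinel = (v) => (v === -1 ? null : v);
  return Object.assign({}, app, {
    allocatedMB: desentinel(app.allocatedMB),
    allocatedVCores: desentinel(app.allocatedVCores),
    runningContainers: desentinel(app.runningContainers),
  });
}
```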
| Q: Hadoop metrics via REST API: allocatedMB, allocatedVcores and runningContainers is always -1 we try to report our monthly hadoop application metrics for each user and use the REST API using the following REST API path:
http://[host:port]/ws/v1/cluster/app
The returned data looks good except allocatedMB, allocatedVcores and runningContainers which is always -1.
Can anybody explain why that is?
A: If there are no running jobs on your cluster when you call the RM cluster apps API, you are looking at historical data. Based on the Hadoop code (QueueStatisticsPBImpl.java under hadoop-yarn-project/), -1 is used as a default value when the RM doesn't know the value of that item.
@Override
public long getAllocatedVCores() {
QueueStatisticsProtoOrBuilder p = viaProto ? proto : builder;
return (p.hasAllocatedVCores()) ? p.getAllocatedVCores() : -1;
}
Since the other fields are values that would be stored in the Job History Server (other than allocatedMB, allocatedVCores, and runningContainers), they contain actual values.
https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html
| stackoverflow | {
"language": "en",
"length": 150,
"provenance": "stackexchange_0000F.jsonl.gz:906636",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672566"
} |
25ee087d2f71465177a49089d2d87b9e6bad5e87 | Stackoverflow Stackexchange
Q: Why doesn't equality involving "mod" typecheck in Idris? Why won't the following typecheck:
v1 : mod 3 2 = 1
v1 = Refl
Yet this will typecheck fine:
v2 : 3 - 2 = 1
v2 = Refl
A: It happens due to the partiality of the mod function (thanks to @AntonTrunov for the clarification). It's polymorphic, and by default numeric constants are Integers.
Idris> :t mod
mod : Integral ty => ty -> ty -> ty
Idris> :t 3
3 : Integer
Idris> :t mod 3 2
mod 3 2 : Integer
For the Integer type, the mod function is not total.
Use the modNatNZ function instead, so everything type-checks and works.
v1 : modNatNZ 3 2 SIsNotZ = 1
v1 = Refl
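The same totality concern can be illustrated outside Idris: a mod that may fail on a zero divisor forces callers to handle the missing case explicitly, which is roughly what modNatNZ's SIsNotZ proof obligation encodes at the type level. A JavaScript sketch, purely as an analogy:

```javascript
// A "total" mod in JS terms: instead of throwing or returning NaN
// for a zero divisor, make the failure case explicit in the result.
function safeMod(a, b) {
  if (b === 0) return { ok: false };
  return { ok: true, value: a % b };
}
```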
| Q: Why doesn't equality involving "mod" typecheck in Idris? Why won't the following typecheck:
v1 : mod 3 2 = 1
v1 = Refl
Yet this will typecheck fine:
v2 : 3 - 2 = 1
v2 = Refl
A: It happens due to the partiality of the mod function (thanks to @AntonTrunov for the clarification). It's polymorphic, and by default numeric constants are Integers.
Idris> :t mod
mod : Integral ty => ty -> ty -> ty
Idris> :t 3
3 : Integer
Idris> :t mod 3 2
mod 3 2 : Integer
For the Integer type, the mod function is not total.
Use the modNatNZ function instead, so everything type-checks and works.
v1 : modNatNZ 3 2 SIsNotZ = 1
v1 = Refl
| stackoverflow | {
"language": "en",
"length": 121,
"provenance": "stackexchange_0000F.jsonl.gz:906640",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672590"
} |
d3964612473b63333262f62638367d554d07969c | Stackoverflow Stackexchange
Q: Is it possible to get color name in swift I am trying to get the color name from a UIButton in Swift instead of the value. Is there any way to do that? Thanks
I am using tintColor to set value as well to get value
clickButton.tintColor = UIColor.blue
var color = clickButton.tintColor
when I print the color value I get (UIExtendedSRGBColorSpace 0 0 1 1); is there any way I can get blue instead of the value?
A: From iOS 14.0+ you can also use https://developer.apple.com/documentation/uikit/uicolor/3600314-accessibilityname
UIColor.systemRed.accessibilityName // returns "Red"
| Q: Is it possible to get color name in swift I am trying to get the color name from a UIButton in Swift instead of the value. Is there any way to do that? Thanks
I am using tintColor to set value as well to get value
clickButton.tintColor = UIColor.blue
var color = clickButton.tintColor
when I print the color value I get (UIExtendedSRGBColorSpace 0 0 1 1); is there any way I can get blue instead of the value?
A: From iOS 14.0+ you can also use https://developer.apple.com/documentation/uikit/uicolor/3600314-accessibilityname
UIColor.systemRed.accessibilityName // returns "Red"
A: You cannot get the "human-readable" name of a UIColor by using a built-in. However you can get the RGB values, as described in this post.
If you really want to get the name of the color, you can build your own dictionary, as @BoilingFire pointed out in their answer:
var color = clickButton.tintColor! // it is set to UIColor.blue
var colors = [UIColor.red:"red", UIColor.blue:"blue", UIColor.black:"black"] // you should add more colors here, as many as you want to support.
var colorString = String()
if colors.keys.contains(color){
colorString = colors[color]!
}
print(colorString) // prints "blue"
A: You can use this extension to get name of color created via Color Assets in XCode.
extension UIColor {
/// Name of color. Only colors created with XCode Color Assets will return actual name, colors created programatically will always return nil.
var name: String? {
let str = String(describing: self).dropLast()
guard let nameRange = str.range(of: "name = ") else {
return nil
}
let cropped = str[nameRange.upperBound ..< str.endIndex]
if cropped.isEmpty {
return nil
}
return String(cropped)
}
}
Result:
A: Swift 5 and above iOS14
Create an extension for UIColor
extension UIColor {
convenience init(_ r: Double,_ g: Double,_ b: Double,_ a: Double) {
self.init(red: CGFloat(r/255), green: CGFloat(g/255), blue: CGFloat(b/255), alpha: CGFloat(a))
}
convenience init(hex: String) {
let scanner = Scanner(string: hex)
scanner.scanLocation = 0
var rgbValue: UInt64 = 0
scanner.scanHexInt64(&rgbValue)
let r = (rgbValue & 0xff0000) >> 16
let g = (rgbValue & 0xff00) >> 8
let b = rgbValue & 0xff
self.init(
red: CGFloat(r) / 0xff,
green: CGFloat(g) / 0xff,
blue: CGFloat(b) / 0xff, alpha: 1
)
}
func getRGBAComponents() -> (red: Int, green: Int, blue: Int, alpha: Int)?
{
var fRed : CGFloat = 0
var fGreen : CGFloat = 0
var fBlue : CGFloat = 0
var fAlpha: CGFloat = 0
if self.getRed(&fRed, green: &fGreen, blue: &fBlue, alpha: &fAlpha) {
let iRed = Int(fRed * 255.0)
let iGreen = Int(fGreen * 255.0)
let iBlue = Int(fBlue * 255.0)
let iAlpha = 1
return (red:iRed, green:iGreen, blue:iBlue, alpha:iAlpha)
} else {
// Could not extract RGBA components:
return nil
}
}
class func colorWithRGB(r: CGFloat, g: CGFloat, b: CGFloat, alpha: CGFloat = 1.0) -> UIColor {
return UIColor(red: r/255.0, green: g/255.0, blue: b/255.0, alpha: alpha)
}
convenience init(red: Int, green: Int, blue: Int) {
assert(red >= 0 && red <= 255, "Invalid red component")
assert(green >= 0 && green <= 255, "Invalid green component")
assert(blue >= 0 && blue <= 255, "Invalid blue component")
self.init(red: CGFloat(red) / 255.0, green: CGFloat(green) / 255.0, blue: CGFloat(blue) / 255.0, alpha: 1.0)
}
convenience init(rgb: Int) {
self.init(
red: (rgb >> 16) & 0xFF,
green: (rgb >> 8) & 0xFF,
blue: rgb & 0xFF
)
}
}
Use like below Code
let RGBArray = YOUR_RGB_COLOR?.components(separatedBy: ",")
let _color = UIColor(red:Int(RGBArray?[0] ?? "0") ?? 255 , green: Int(RGBArray?[1] ?? "0") ?? 255, blue: Int(RGBArray?[2] ?? "0") ?? 255)
if #available(iOS 14.0, *) {
let mString = _color.accessibilityName
print(mString)
}
OUTPUT:
Red
A: Add this extension to your project
extension UIColor {
var name: String? {
switch self {
case UIColor.black: return "black"
case UIColor.darkGray: return "darkGray"
case UIColor.lightGray: return "lightGray"
case UIColor.white: return "white"
case UIColor.gray: return "gray"
case UIColor.red: return "red"
case UIColor.green: return "green"
case UIColor.blue: return "blue"
case UIColor.cyan: return "cyan"
case UIColor.yellow: return "yellow"
case UIColor.magenta: return "magenta"
case UIColor.orange: return "orange"
case UIColor.purple: return "purple"
case UIColor.brown: return "brown"
default: return nil
}
}
}
Now you can write
print(UIColor.red.name) // Optional("red")
A: I don't think it's possible, but you could build your own dictionary and search for the key that corresponds to that color object.
Not every color has a name anyway.
var colors = ["blue": UIColor.blue, ...]
A: Swift 5.5
Let's assume you have a variable Orange of type UIColor.
var orange: UIColor = UIColor(displayP3Red: 1, green: 0.5, blue: 0, alpha: 1)
You can get the color name by:
var colorName: String {
orange.accessibilityName
}
Hope this helped. Cheers
A: An extencion for class Color in SwiftUI can be :
extension Color {
var name: String? {
let description = self.description
let firstOccurenceIndex = description.firstIndex(of: "\"") ?? description.startIndex
let startIndex = description.index(firstOccurenceIndex, offsetBy: 1)
let suffix = description.suffix(from: startIndex)
let lastOccurenceIndex = suffix.firstIndex(of: "\"") ?? description.endIndex
let name = suffix.prefix(upTo: lastOccurenceIndex)
return String(name)
}
}
| stackoverflow | {
"language": "en",
"length": 804,
"provenance": "stackexchange_0000F.jsonl.gz:906641",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672594"
} |
31d46e2b20ba4e9ae8e721a86956df0fb1a0bdc7 | Stackoverflow Stackexchange
Q: What does the 'training loss' mean in machine learning? I found some sample code on the tensorflow website as follows.
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x_train}, y_train, batch_size=4, num_epochs=1000)
eval_input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x_eval}, y_eval, batch_size=4, num_epochs=1000)
# We can invoke 1000 training steps by invoking the method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did.
train_loss = estimator.evaluate(input_fn=input_fn)
eval_loss = estimator.evaluate(input_fn=eval_input_fn)
print("train loss: %r"% train_loss)
print("eval loss: %r"% eval_loss)
Would you let me know what the 'training loss' means?
A: Training loss is the loss on training data. Loss is a function that takes the correct output and model output and computes the error between them. The loss is then used to adjust weights based on how big the error was and which elements contributed to it the most.
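As a toy illustration of that definition (plain JavaScript, not TensorFlow): mean squared error takes the correct outputs and the model outputs and reduces the per-example errors to one number.

```javascript
// Mean squared error: average of (prediction - target)^2 over a batch.
function meanSquaredError(targets, predictions) {
  const n = targets.length;
  let sum = 0;
  for (let i = 0; i < n; i++) {
    const err = predictions[i] - targets[i];
    sum += err * err;
  }
  return sum / n;
}
```

Training loss is this number computed on the training set; eval loss is the same function applied to held-out data.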
| Q: What does the 'training loss' mean in machine learning? I found some sample code on the tensorflow website as follows.
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x_train}, y_train, batch_size=4, num_epochs=1000)
eval_input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x_eval}, y_eval, batch_size=4, num_epochs=1000)
# We can invoke 1000 training steps by invoking the method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did.
train_loss = estimator.evaluate(input_fn=input_fn)
eval_loss = estimator.evaluate(input_fn=eval_input_fn)
print("train loss: %r"% train_loss)
print("eval loss: %r"% eval_loss)
Would you let me know what the 'training loss' means?
A: Training loss is the loss on training data. Loss is a function that takes the correct output and model output and computes the error between them. The loss is then used to adjust weights based on how big the error was and which elements contributed to it the most.
| stackoverflow | {
"language": "en",
"length": 136,
"provenance": "stackexchange_0000F.jsonl.gz:906710",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672832"
} |
07c84a5656641506625742286c2ad59f8b190c82 | Stackoverflow Stackexchange
Q: Is it possible to precompile xib/storyboard files? We have a large monolithic app: 700 ObjC files, 300 Swift files and up to 100 xib and storyboard files.
We are looking for any techniques that would help us to optimize the app build time.
One idea is to precompile some of the stable xibs/storyboards so that we don't compile them every time we build the app. Is there a technique to do this?
P.S. I know that xibs are compiled only the first time; however, just as iOS frameworks are precompiled binaries, the question is how to do the same for xib/storyboard files.
| Q: Is it possible to precompile xib/storyboard files? We have a large monolithic app: 700 ObjC files, 300 Swift files and up to 100 xib and storyboard files.
We are looking for any techniques that would help us to optimize the app build time.
One idea is to precompile some of the stable xibs/storyboards so that we don't compile them every time we build the app. Is there a technique to do this?
P.S. I know that xibs are compiled only the first time; however, just as iOS frameworks are precompiled binaries, the question is how to do the same for xib/storyboard files.
| stackoverflow | {
"language": "en",
"length": 106,
"provenance": "stackexchange_0000F.jsonl.gz:906720",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672855"
} |
6e55b1c9593d0652a5d49cf9a24e183e1871902a | Stackoverflow Stackexchange
Q: Can tqdm be embedded to html? I want to embed the tqdm progress bar in HTML, or at least print it as an HTML tag, but I can't find any documentation on it. I only found how to print the progress bar in a Python notebook.
Is it possible to embed it in html?
Also is it possible to integrate tqdm with bokeh?
A: Tqdm progress bars can't be embedded into HTML. The progress bar in the browser should somehow communicate with Python in order to update the progress bar. Here is one good example of how to do this in Flask.
Bokeh has a request opened in 2017 for a progress bar that is still open and here is a similar question for how to create a progress bar in Bokeh.
| Q: Can tqdm be embedded to html? I want to embed the tqdm progress bar in HTML, or at least print it as an HTML tag, but I can't find any documentation on it. I only found how to print the progress bar in a Python notebook.
Is it possible to embed it in html?
Also is it possible to integrate tqdm with bokeh?
A: Tqdm progress bars can't be embedded into HTML. The progress bar in the browser should somehow communicate with Python in order to update the progress bar. Here is one good example of how to do this in Flask.
Bokeh has a request opened in 2017 for a progress bar that is still open and here is a similar question for how to create a progress bar in Bokeh.
| stackoverflow | {
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:906738",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672906"
} |
6a17e51ad8a18a1f13d66207c8ddcf9bc3bcc93c | Stackoverflow Stackexchange
Q: Stream response to file using Fetch API and fs.createWriteStream I'm creating an Electron application and I want to stream an image to a file (so basically download it).
I want to use the native Fetch API because the request module would be a big overhead.
But there is no pipe method on the response, so I can't do something like
fetch('https://imageurl.jpg')
.then(response => response.pipe(fs.createWriteStream('image.jpg')));
So how can I combine fetch and fs.createWriteStream?
A: Fetch is not really able to work with nodejs Streams out of the box, because the Stream API in the browser differs from the one nodejs provides, i.e. you can not pipe a browser stream into a nodejs stream or vice versa.
The electron-fetch module seems to solve that for you. Or you can look at this answer: https://stackoverflow.com/a/32545850/2016129 to have a way of downloading files without the need of nodeIntegration.
There is also needle, a smaller alternative to the bulkier request, which of course supports Streams.
| Q: Stream response to file using Fetch API and fs.createWriteStream I'm creating an Electron application and I want to stream an image to a file (so basically download it).
I want to use the native Fetch API because the request module would be a big overhead.
But there is no pipe method on the response, so I can't do something like
fetch('https://imageurl.jpg')
.then(response => response.pipe(fs.createWriteStream('image.jpg')));
So how can I combine fetch and fs.createWriteStream?
A: Fetch is not really able to work with nodejs Streams out of the box, because the Stream API in the browser differs from the one nodejs provides, i.e. you can not pipe a browser stream into a nodejs stream or vice versa.
The electron-fetch module seems to solve that for you. Or you can look at this answer: https://stackoverflow.com/a/32545850/2016129 to have a way of downloading files without the need of nodeIntegration.
There is also needle, a smaller alternative to the bulkier request, which of course supports Streams.
A: I guess today the answer is with nodejs 18+
node -e 'fetch("https://github.com/stealify").then(response => stream.Readable.fromWeb(response.body).pipe(fs.createWriteStream("./github.com_stealify.html")))'
In the above example we use the -e flag, which tells Node.js to execute our CLI code. We download the page of an interesting project here and save it as ./github.com_stealify.html in the current working directory. The code below shows the same inside a Node.js .mjs file for convenience.
Cli example using CommonJS
node -e 'fetch("https://github.com/stealify").then(({body:s}) =>
stream.Readable.fromWeb(s).pipe(fs.createWriteStream("./github.com_stealify.html")))'
fetch.cjs
fetch("https://github.com/stealify").then(({body:s}) =>
require("node:stream").Readable.fromWeb(s)
.pipe(require("node:fs").createWriteStream("./github.com_stealify.html")));
Cli example using ESM
node --input-type module -e 'stream.Readable.fromWeb(
(await fetch("https://github.com/stealify")).body)
.pipe(fs.createWriteStream("./github.com_stealify.html"))'
fetch_tla_no_tli.mjs
(await import("node:stream")).Readable.fromWeb(
(await fetch("https://github.com/stealify")).body).pipe(
(await import("node:fs")).createWriteStream("./github.com_stealify.html"));
fetch.mjs
import stream from 'node:stream';
import fs from 'node:fs';
stream.Readable
.fromWeb((await fetch("https://github.com/stealify")).body)
.pipe(fs.createWriteStream("./github.com_stealify.html"));
see: https://nodejs.org/api/stream.html#streamreadablefromwebreadablestream-options
Update: I would not use this method when dealing with files.
This is the correct usage, as fs.promises supports all forms of iterators, equivalent to the stream/consumers API:
node -e 'fetch("https://github.com/stealify").then(({ body }) =>
  fs.promises.writeFile("./github.com_stealify.html", body))'
A: I got it working. I made a function which transforms the response into a readable stream.
const { Readable } = require('stream');
const responseToReadable = response => {
const reader = response.body.getReader();
const rs = new Readable();
rs._read = async () => {
const result = await reader.read();
if(!result.done){
rs.push(Buffer.from(result.value));
}else{
rs.push(null);
return;
}
};
return rs;
};
So with it, I can do
fetch('https://imageurl.jpg')
.then(response => responseToReadable(response).pipe(fs.createWriteStream('image.jpg')));
| stackoverflow | {
"language": "en",
"length": 372,
"provenance": "stackexchange_0000F.jsonl.gz:906751",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672942"
} |
ec945454e9b52ab26a42465998a3cca32da4a2ae | Stackoverflow Stackexchange
Q: " D/OpenGLRenderer: DecorView @4fea9e4 is drawn by HWUI ". repeats in Android Studio log " D/OpenGLRenderer: DecorView @4fea9e4 is drawn by HWUI ".
The above line keeps repeating in the Android Studio log. I have selected the app name and the "show only selected application" option. I have googled for it, but found no references to such a log. Can anyone please tell me what this log is about and what should be done to eliminate/hide it? I have attached the screenshot link android log screenshot. Please help me.
| Q: " D/OpenGLRenderer: DecorView @4fea9e4 is drawn by HWUI ". repeats in Android Studio log " D/OpenGLRenderer: DecorView @4fea9e4 is drawn by HWUI ".
The above line keeps repeating in the Android Studio log. I have selected the app name and the "show only selected application" option. I have googled for it, but found no references to such a log. Can anyone please tell me what this log is about and what should be done to eliminate/hide it? I have attached the screenshot link android log screenshot. Please help me.
| stackoverflow | {
"language": "en",
"length": 87,
"provenance": "stackexchange_0000F.jsonl.gz:906757",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44672959"
} |
08d07658e8b3abdc27e70de84dff133fd667358d | Stackoverflow Stackexchange
Q: Does Azure blob storage support alias for referring to a location or a file? Does Azure blob storage support alias for referring to a location or a file?
I have an Azure subscription and would like to know if blob storage supports aliases when referring to a location or a file.
If file is available like this - /storage/container/folderA/fileA
I want to do something like this -/foo/pointertofileA
which will still allow me to retrieve the file if I access the above url
A: There are no such path aliases with Azure Storage. You need to specify the blob's full URI.
Remember that Blob Storage is not the same as file I/O.
You may want to look at Azure Files, which attaches as SMB mount. Then you can use normal File I/O for accessing content.
| Q: Does Azure blob storage support alias for referring to a location or a file? Does Azure blob storage support alias for referring to a location or a file?
I have an Azure subscription and would like to know if blob storage supports aliases when referring to a location or a file.
If file is available like this - /storage/container/folderA/fileA
I want to do something like this -/foo/pointertofileA
which will still allow me to retrieve the file if I access the above url
A: There are no such path aliases with Azure Storage. You need to specify the blob's full URI.
Remember that Blob Storage is not the same as file I/O.
You may want to look at Azure Files, which attaches as SMB mount. Then you can use normal File I/O for accessing content.
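Since every blob has to be addressed by its full URI, any "/foo/pointertofileA"-style shortcut has to be built on the client side. A minimal sketch (the helper name is made up; the endpoint format is the standard one for the public Azure cloud):

```python
# Hypothetical helper: build a blob's full URI from its parts.
def blob_uri(account: str, container: str, blob_path: str) -> str:
    return f"https://{account}.blob.core.windows.net/{container}/{blob_path}"

print(blob_uri("mystorage", "container", "folderA/fileA"))
# -> https://mystorage.blob.core.windows.net/container/folderA/fileA
```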
| stackoverflow | {
"language": "en",
"length": 136,
"provenance": "stackexchange_0000F.jsonl.gz:906796",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673092"
} |
992a1204473271390426edf9ca2705aad52d64dc | Stackoverflow Stackexchange
Q: How to import database into vagrant mysql? In window cmd ,I usually do like this and it is working
C:\Users\davi> mysql -u root -p mydb < D:/xampp/mysql/bin/db.sql
I am new to vagrant; this is what I do in vagrant ssh
[vagrant@localhost bs]$ mysql -u root -p mydb < D:/xampp/mysql/bin/db.sql
But I got this error
-bash: D:/xampp/mysql/bin/db.sql: ??????????????????????
Is there anyway to import correctly ?
A: You do not have D:/xampp/mysql/bin/db.sql from your vagrant VM
*
*Place the file db.sql in the same folder where you have the Vagrantfile
*vagrant ssh as you normally do to get into the VM
*run the following command to import the db
$ mysql -u root -p mydb < /vagrant/db.sql
The /vagrant folder is a default shared folder in the VM.
| Q: How to import database into vagrant mysql? In window cmd ,I usually do like this and it is working
C:\Users\davi> mysql -u root -p mydb < D:/xampp/mysql/bin/db.sql
I am new to vagrant; this is what I do in vagrant ssh
[vagrant@localhost bs]$ mysql -u root -p mydb < D:/xampp/mysql/bin/db.sql
But I got this error
-bash: D:/xampp/mysql/bin/db.sql: ??????????????????????
Is there anyway to import correctly ?
A: You do not have D:/xampp/mysql/bin/db.sql from your vagrant VM
*
*Place the file db.sql in the same folder where you have the Vagrantfile
*vagrant ssh as you normally do to get into the VM
*run the following command to import the db
$ mysql -u root -p mydb < /vagrant/db.sql
The /vagrant folder is a default shared folder in the VM.
A: Copy DB.sql into your project folder on Windows.
Inside Vagrant, after logging in to MySQL, you can load it with source /var/www/PROJECTNAME/DB.sql
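If you want the import to run automatically, the same command can go into a provisioning block in the Vagrantfile (the credentials, database name, and dump filename below are assumptions; adjust them to your setup):

```ruby
# Vagrantfile (sketch) -- runs the import on `vagrant provision`
Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: <<-SHELL
    # /vagrant is the default synced folder, so db.sql next to the
    # Vagrantfile is visible inside the VM
    mysql -u root -proot mydb < /vagrant/db.sql
  SHELL
end
```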
| stackoverflow | {
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:906823",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673158"
} |
7ef973f5579108a954b667f6331d8185558b67ce | Stackoverflow Stackexchange
Q: How to generate rsa key pair in client side using angular2? I need to know how to generate 'rsa' key-pair on the client-side using angular2.
I need to generate a private/public key pair, save the private key into the database, and use the public key on the client side. How can I implement this?
I found this https://www.npmjs.com/package/generate-rsa-keypair for generating a key pair, but it's for Node. Can I use it on my client side? If yes, how?
Is there any other way to implement this?
A: You must use the https://github.com/juliangruber/keypair library
then import it in angular component like
import * as keypair from 'keypair';
and use library method
const pubprivkey = keypair();
console.log(pubprivkey);
It will return an object containing the RSA public and private keys:
{ public: '-----BEGIN RSA PUBLIC KEY-----\r\nMIGJAoGBAM3CosR73CBNcJsLvAgMBAAE=\r\n-----END RSA PUBLIC KEY-----\n',
private: '-----BEGIN RSA PRIVATE KEY-----\r\nMIICXAIBAAKBgQDNwqLEe9wgTXNHoyxi7Ia\r\nPQUCQCwWU4U+v4lD7uYBw00Ga/xt+7+UqFPlPVdz1yyr4q24Zxaw0LgmuEvgU5dycq8N7Jxj\r\nTubX0MIRR+G9fmDBBl8=\r\n-----END RSA PRIVATE KEY-----\n' }
| Q: How to generate rsa key pair in client side using angular2? I need to know how to generate 'rsa' key-pair on the client-side using angular2.
I need to generate a private/public key pair, save the private key into the database, and use the public key on the client side. How can I implement this?
I found this https://www.npmjs.com/package/generate-rsa-keypair for generating a key pair, but it's for Node. Can I use it on my client side? If yes, how?
Is there any other way to implement this?
A: You must use the https://github.com/juliangruber/keypair library
then import it in angular component like
import * as keypair from 'keypair';
and use library method
const pubprivkey = keypair();
console.log(pubprivkey);
It will return an object containing the RSA public and private keys:
{ public: '-----BEGIN RSA PUBLIC KEY-----\r\nMIGJAoGBAM3CosR73CBNcJsLvAgMBAAE=\r\n-----END RSA PUBLIC KEY-----\n',
private: '-----BEGIN RSA PRIVATE KEY-----\r\nMIICXAIBAAKBgQDNwqLEe9wgTXNHoyxi7Ia\r\nPQUCQCwWU4U+v4lD7uYBw00Ga/xt+7+UqFPlPVdz1yyr4q24Zxaw0LgmuEvgU5dycq8N7Jxj\r\nTubX0MIRR+G9fmDBBl8=\r\n-----END RSA PRIVATE KEY-----\n' }
| stackoverflow | {
"language": "en",
"length": 141,
"provenance": "stackexchange_0000F.jsonl.gz:906832",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673186"
} |
c26d0e0647c22b2f9e9be618de94e981c98df1a5 | Stackoverflow Stackexchange
Q: Failed to initialise build environment I am trying to build "yocto" with the "jethro" version (https://wiki.yoctoproject.org/wiki/Releases), but when I try to initialise the build environment it gives the following error.
Error: 'meta-poky/conf' must be a directory containing local.conf &
bblayers.conf
I found out that the meta-poky folder is not available in the jethro version.
What am I doing wrong in the initialisation?
I tried with the later krogoth version and it works fine.
A: meta-yocto was indeed renamed meta-poky in Krogoth. There is code to handle your configuration in the upgrade case (going from jethro to krogoth) but downgrade probably isn't tested: I'm guessing you did a build with a newer release and then jethro.
This could maybe be fixed by just modifying conf/templateconf.cfg & conf/bblayers.conf manually (to refer to "meta-yocto" instead of "meta-poky"). Alternatively you could move your whole conf/ out of the way, re-generate a template configuration with . oe-init-build-env and then redo any local configuration you had.
Q: Failed to initialise build environment I am trying to build "yocto" with the "jethro" version (https://wiki.yoctoproject.org/wiki/Releases), but when I try to initialise the build environment it gives the following error.
Error: 'meta-poky/conf' must be a directory containing local.conf &
bblayers.conf
I found out that the meta-poky folder is not available in the jethro version.
What am I doing wrong in the initialisation?
I tried with the later krogoth version and it works fine.
A: meta-yocto was indeed renamed meta-poky in Krogoth. There is code to handle your configuration in the upgrade case (going from jethro to krogoth) but downgrade probably isn't tested: I'm guessing you did a build with a newer release and then jethro.
This could maybe be fixed by just modifying conf/templateconf.cfg & conf/bblayers.conf manually (to refer to "meta-yocto" instead of "meta-poky"). Alternatively you could move your whole conf/ out of the way, re-generate a template configuration with . oe-init-build-env and then redo any local configuration you had.
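The first option above can be done with a one-liner run from inside the build directory. The demo below works on throwaway copies so it is self-contained; in a real build directory you would run the same sed directly on conf/ (back the files up first — the assumption is that the stale meta-poky references are plain text in those two files):

```shell
# demo on throwaway copies; in a real build dir, run sed on conf/ directly
mkdir -p /tmp/yocto-demo/conf
echo 'BBLAYERS += " ${TOPDIR}/../meta-poky "' > /tmp/yocto-demo/conf/bblayers.conf
echo 'meta-poky/conf' > /tmp/yocto-demo/conf/templateconf.cfg
sed -i 's/meta-poky/meta-yocto/g' /tmp/yocto-demo/conf/bblayers.conf /tmp/yocto-demo/conf/templateconf.cfg
cat /tmp/yocto-demo/conf/templateconf.cfg
```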
| stackoverflow | {
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:906842",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673220"
} |
37ebbbe80087a5f091874c87906b07bb9b0c7d6b | Stackoverflow Stackexchange
Q: Edittext field when using password input does not hide password The EditText field in Android does not hide the password when using password input. It worked before, but I am unable to figure out what went wrong or what changed. Here is the source code:
XML
<EditText
android:id="@+id/login_password"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="center_vertical"
android:background="@drawable/rectangular_border_edittext"
android:hint="@string/enter_password"
android:inputType="textPassword"
android:maxLines="1"
android:padding="8dp" />
JAVA
password.setSingleLine();
password.setImeOptions(EditorInfo.IME_ACTION_NEXT);
password.setImeActionLabel(getResources().getString(R.string.goButton), EditorInfo.IME_ACTION_NEXT);
password.setOnEditorActionListener((v, actionId, event) -> {
if (actionId == EditorInfo.IME_ACTION_NEXT) {
checkPasswordAndSend();
}
return false;
});
If somebody has encountered a similar problem before, please let me know. Also, I am using the latest version of the support libraries (25.3.1).
A: So I figured it out. The maxLines attribute on the password field leads to this behaviour. Remove maxLines="1" from the XML and setSingleLine from the Java code and everything goes back to normal. Don't know why this works, but it just works.
Hope this helps someone.
Q: Edittext field when using password input does not hide password The EditText field in Android does not hide the password when using password input. It worked before, but I am unable to figure out what went wrong or what changed. Here is the source code:
XML
<EditText
android:id="@+id/login_password"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="center_vertical"
android:background="@drawable/rectangular_border_edittext"
android:hint="@string/enter_password"
android:inputType="textPassword"
android:maxLines="1"
android:padding="8dp" />
JAVA
password.setSingleLine();
password.setImeOptions(EditorInfo.IME_ACTION_NEXT);
password.setImeActionLabel(getResources().getString(R.string.goButton), EditorInfo.IME_ACTION_NEXT);
password.setOnEditorActionListener((v, actionId, event) -> {
if (actionId == EditorInfo.IME_ACTION_NEXT) {
checkPasswordAndSend();
}
return false;
});
If somebody has encountered a similar problem before, please let me know. Also, I am using the latest version of the support libraries (25.3.1).
A: So I figured it out. The maxLines attribute on the password field leads to this behaviour. Remove maxLines="1" from the XML and setSingleLine from the Java code and everything goes back to normal. Don't know why this works, but it just works.
Hope this helps someone.
A: Remove this line in your XML :
android:maxLines="1"
You should have this :
<EditText
android:id="@+id/login_password"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="center_vertical"
android:background="@drawable/rectangular_border_edittext"
android:hint="@string/enter_password"
android:inputType="textPassword"
android:padding="8dp" />
And keep this line in your JAVA like in your example :
password.setSingleLine();
| stackoverflow | {
"language": "en",
"length": 182,
"provenance": "stackexchange_0000F.jsonl.gz:906846",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673233"
} |
9367f09748b3ca674a5041a428e0bbb49b1e1a20 | Stackoverflow Stackexchange
Q: Tab not working after upgrade Wordpress 4.8 The tabs below on the article edit page stopped working after upgrading to WP 4.8.
I get this js error in console:
Any idea to solve this?
Thank you
A: I've tested this on a wordpress install with version 4.8, and it works fine for me.
Most probably this is caused by a plugin, so I recommend temporarily disabling your plugins until you figure out which one is causing the issue.
Start by disabling the plugin which adds the sidebar visible on the left.
Then contact the plugin developer to see if this is a known issue.
Q: Tab not working after upgrade Wordpress 4.8 The tabs below on the article edit page stopped working after upgrading to WP 4.8.
I get this js error in console:
Any idea to solve this?
Thank you
A: I've tested this on a wordpress install with version 4.8, and it works fine for me.
Most probably this is caused by a plugin, so I recommend temporarily disabling your plugins until you figure out which one is causing the issue.
Start by disabling the plugin which adds the sidebar visible on the left.
Then contact the plugin developer to see if this is a known issue.
| stackoverflow | {
"language": "en",
"length": 104,
"provenance": "stackexchange_0000F.jsonl.gz:906863",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673313"
} |
4e0d2ef5bb8d80c85b71c0f963803a5b463a33a0 | Stackoverflow Stackexchange
Q: What are the difference between res_model, src_model and model terminology in odoo 10? These three terminologies can be seen in XML and python files, so can anyone please explain its usage, and what are the effects of these keywords?
A: <act_window
id="act_test_test"
name="Am here"
res_model="account.move.line"
context="{'search_default_unreconciled':1, 'search_default_payable':1}"
domain="[('partner_id', '=', False)]"
src_model="account.invoice"/>
In the above example code, act_window is used to create a window action of a particular model. Here it is account.move.line. That is res_model is used to define a resource model.
Now you have an action you need to call it from somewhere. To do that you have to define a menu link. This task is done by src_model. You define a model name and the menu link will appear under a submenu of Action/More in form/list view. In the above example, I have defined account.invoice, so the menu will appear under 'Action' in form/list view of 'account.invoice model'.
This is what I understood. Hope it will help you.
| Q: What are the difference between res_model, src_model and model terminology in odoo 10? These three terminologies can be seen in XML and python files, so can anyone please explain its usage, and what are the effects of these keywords?
A: <act_window
id="act_test_test"
name="Am here"
res_model="account.move.line"
context="{'search_default_unreconciled':1, 'search_default_payable':1}"
domain="[('partner_id', '=', False)]"
src_model="account.invoice"/>
In the above example code, act_window is used to create a window action of a particular model. Here it is account.move.line. That is res_model is used to define a resource model.
Now you have an action you need to call it from somewhere. To do that you have to define a menu link. This task is done by src_model. You define a model name and the menu link will appear under a submenu of Action/More in form/list view. In the above example, I have defined account.invoice, so the menu will appear under 'Action' in form/list view of 'account.invoice model'.
This is what I understood. Hope it will help you.
| stackoverflow | {
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:906874",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673341"
} |
ff822676274669f56f0316ad5cf2dcecb85ed9a2 | Stackoverflow Stackexchange
Q: How to use user variables with file provisioner in Packer? I have a packer json like:
"builders": [{...}],
"provisioners": [
{
"type": "file",
"source": "packer/myfile.json",
"destination": "/tmp/myfile.json"
}
],
"variables": {
"myvariablename": "value"
}
and myfile.json is:
{
"var" : "{{ user `myvariablename`}}"
}
The variable in the file does not get replaced; is a sed replacement with a shell provisioner after the file provisioner the only option available here?
Using packer version 0.12.0
A: You have to pass these as environment variables. For example:
"provisioners": [
{
"type": "shell"
"environment_vars": [
"http_proxy={{user `proxy`}}",
],
"scripts": [
"some_script.sh"
],
}
],
"variables": {
"proxy": null
}
And in the script you can use $http_proxy
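A sketch of what such a script could look like (the script name and message are made up; the point is just that environment_vars arrive as ordinary environment variables):

```shell
#!/bin/sh
# some_script.sh (sketch): http_proxy was injected via environment_vars;
# it is empty if the `proxy` user variable was never set
echo "configured proxy: ${http_proxy}"
```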
| Q: How to use user variables with file provisioner in Packer? I have a packer json like:
"builders": [{...}],
"provisioners": [
{
"type": "file",
"source": "packer/myfile.json",
"destination": "/tmp/myfile.json"
}
],
"variables": {
"myvariablename": "value"
}
and myfile.json is:
{
"var" : "{{ user `myvariablename`}}"
}
The variable in the file does not get replaced; is a sed replacement with a shell provisioner after the file provisioner the only option available here?
Using packer version 0.12.0
A: You have to pass these as environment variables. For example:
"provisioners": [
{
"type": "shell"
"environment_vars": [
"http_proxy={{user `proxy`}}",
],
"scripts": [
"some_script.sh"
],
}
],
"variables": {
"proxy": null
}
And in the script you can use $http_proxy
A: So far I've only come up with the solution of using the file & shell provisioners: upload the file and then replace variables in it via the shell provisioner, which can be fed from template variables provided by e.g. HashiCorp Vault.
A: You may use the OS export function to set an environment variable and pass it to Packer.
Here is a config using the OS ENV_NAME value to choose the local folder to copy from.
export ENV_NAME=dev will set the local folder to dev
{
"variables": {
...
"env_folder": "{{env `ENV_NAME`}}",
},
"builders": [{...}]
"provisioners": [
{
"type": "file",
"source": "files/{{user `env_folder`}}/",
"destination": "/tmp/"
},
{...}
]
}
A: User variables must first be defined in a variables section within your template. Even if you want a user variable to default to an empty string, it must be defined. This explicitness helps reduce the time it takes for newcomers to understand what can be modified using variables in your template.
The variables section is a key/value mapping of the user variable name to a default value. A default value can be the empty string. An example is shown below:
{
"variables": {
"aws_access_key": "",
"aws_secret_key": ""
},
"builders": [{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
// ...
}]
}
check this link for more information
| stackoverflow | {
"language": "en",
"length": 321,
"provenance": "stackexchange_0000F.jsonl.gz:906901",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673408"
} |
0d8cc7047b208819ffae869c1aa96f6371ec697a | Stackoverflow Stackexchange
Q: How to prevent the JSESSIONID showing in the URL I have created a login page in a servlet using Google Datastore, and it is working fine, but sometimes it shows the JSESSIONID in the URL.
How can I prevent the JSESSIONID from being sent through the URL?
Why is it passed through the URL instead of the request message?
A: Add the following entry in your web.xml.
<session-config>
<tracking-mode>COOKIE</tracking-mode>
</session-config>
This will instruct the container that the client supports cookies and hence there is no need to put the JSessionId in the URL.
Q: How to prevent the JSESSIONID showing in the URL I have created a login page in a servlet using Google Datastore, and it is working fine, but sometimes it shows the JSESSIONID in the URL.
How can I prevent the JSESSIONID from being sent through the URL?
Why is it passed through the URL instead of the request message?
A: Add the following entry in your web.xml.
<session-config>
<tracking-mode>COOKIE</tracking-mode>
</session-config>
This will instruct the container that the client supports cookies and hence there is no need to put the JSessionId in the URL.
A: Are you using response.encodeURL()? If so, try to remove it or disable "URL Rewriting" and check the URL.
See also:
*
*disableURLRewriting
Apache Tomcat Configuration Reference
Additional information:
response.encodeURL(URL) adds ;jsessionid=xxxx... to the URL. To disable this ("URL Rewriting"),
Tomcat 7.0 or later:
<session-config>
<tracking-mode>COOKIE</tracking-mode>
</session-config>
Tomcat 6.0:
<Context disableURLRewriting="true" ...
| stackoverflow | {
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:906922",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673490"
} |
8536cd2b0391c01c64c9de028ac496e55b7f800d | Stackoverflow Stackexchange
Q: Regex to restrict special characters in the beginning of an email address PFB the regex. I want to make sure that the regex does not allow any special character just after the @ and just before it. In between, it can allow any combination.
The regex I have now:
@"^[^\W_](?:[\w.-]*[^\W_])?@(([a-zA-Z0-9]+)(\.))([a-zA-Z]{2,3}|[0-9]{1,3})(\]?)$"))"
For example, the regex should not match
[email protected]
[email protected]
SSDFF-SAF@-_.SAVAVSAV-_.IP
A: Since you consider _ special, I'd recommend using [^\W_] at the beginning and then rearrange the starting part a bit. To prevent a special char before a @, just make sure there is a letter or digit there. I also recommend to remove redundant capturing groups/convert them into non-capturing:
@"^[^\W_](?:[\w.-]*[^\W_])?@(?:\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.|(?:[\w-]+\.)+)(?:[a-zA-Z]{2,3}|[0-9]{1,3})\]?$"
Here is a demo of how this regex matches now.
The [^\W_](?:[\w.-]*[^\W_])? matches:
*
*[^\W_] - a digit or a letter only
*(?:[\w.-]*[^\W_])? - a 1 or 0 occurrences of:
*
*[\w.-]* - 0+ letters, digits, _, . and -
*[^\W_] - a digit or a letter only
Q: Regex to restrict special characters in the beginning of an email address PFB the regex. I want to make sure that the regex does not allow any special character just after the @ and just before it. In between, it can allow any combination.
The regex I have now:
@"^[^\W_](?:[\w.-]*[^\W_])?@(([a-zA-Z0-9]+)(\.))([a-zA-Z]{2,3}|[0-9]{1,3})(\]?)$"))"
For example, the regex should not match
[email protected]
[email protected]
SSDFF-SAF@-_.SAVAVSAV-_.IP
A: Since you consider _ special, I'd recommend using [^\W_] at the beginning and then rearrange the starting part a bit. To prevent a special char before a @, just make sure there is a letter or digit there. I also recommend to remove redundant capturing groups/convert them into non-capturing:
@"^[^\W_](?:[\w.-]*[^\W_])?@(?:\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.|(?:[\w-]+\.)+)(?:[a-zA-Z]{2,3}|[0-9]{1,3})\]?$"
Here is a demo of how this regex matches now.
The [^\W_](?:[\w.-]*[^\W_])? matches:
*
*[^\W_] - a digit or a letter only
*(?:[\w.-]*[^\W_])? - a 1 or 0 occurrences of:
*
*[\w.-]* - 0+ letters, digits, _, . and -
*[^\W_] - a digit or a letter only
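As a quick sanity check of the local-part rules above, the pattern can be exercised with Python's re module (the sample addresses are made up; Python's \w is Unicode-aware, but that makes no difference for these ASCII inputs):

```python
import re

# the answer's pattern, minus the C# verbatim-string quoting
EMAIL_RE = re.compile(
    r"^[^\W_](?:[\w.-]*[^\W_])?"
    r"@(?:\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.|(?:[\w-]+\.)+)"
    r"(?:[a-zA-Z]{2,3}|[0-9]{1,3})\]?$"
)

print(bool(EMAIL_RE.match("john.doe@example.com")))  # letters/digits around '@': True
print(bool(EMAIL_RE.match("_john@example.com")))     # leading underscore: False
print(bool(EMAIL_RE.match("john_@example.com")))     # underscore just before '@': False
```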
A: Change the initial [\w-\.]+ for [A-Za-z0-9\-\.]+.
Note that this excludes many acceptable email addresses.
Update
As pointed out, [A-Za-z0-9] is not an exact translation of \w. However, you appear to have a specific definition as to what you consider special characters and so it is probably easier for you to define within the square brackets what you class as allowable.
| stackoverflow | {
"language": "en",
"length": 218,
"provenance": "stackexchange_0000F.jsonl.gz:906959",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673600"
} |
4982d12a8657695402efae90459111571b5532cd | Stackoverflow Stackexchange
Q: How to add color overlay on an image So I have an image. I need to put a color overlay rgba(56, 59, 64, 0.7) on top of this image.
HTML:
<div class="home">
<img src="http://via.placeholder.com/350x150" />
</div>
CSS:
.home img {
width: 100%;
padding: 0;
margin: 0;
}
.home img {
width: 100%;
padding: 0;
margin: 0;
}
<div class="home">
<img src="http://via.placeholder.com/350x150" />
</div>
A: You can use pseudo-elements like :before and absolutely position it on top of the image
Added a blue background color for example purposes, so you can see it better, but you can use any color with opacity
img {
width: 100%;
padding: 0;
margin: 0;
}
.home {
position:relative;
}
.home:before {
content: "";
width: 100%;
height: 100%;
position: absolute;
background: rgba(0,0,255,0.5);
}
<div class="home">
<img src="http://via.placeholder.com/350x150">
</div>
| Q: How to add color overlay on an image So I have an image. I need to put a color overlay rgba(56, 59, 64, 0.7) on top of this image.
HTML:
<div class="home">
<img src="http://via.placeholder.com/350x150" />
</div>
CSS:
.home img {
width: 100%;
padding: 0;
margin: 0;
}
.home img {
width: 100%;
padding: 0;
margin: 0;
}
<div class="home">
<img src="http://via.placeholder.com/350x150" />
</div>
A: You can use pseudo-elements like :before and absolutely position it on top of the image
Added a blue background color for example purposes, so you can see it better, but you can use any color with opacity
img {
width: 100%;
padding: 0;
margin: 0;
}
.home {
position:relative;
}
.home:before {
content: "";
width: 100%;
height: 100%;
position: absolute;
background: rgba(0,0,255,0.5);
}
<div class="home">
<img src="http://via.placeholder.com/350x150">
</div>
A: Here you go
.home {
}
img {
width: 100%;
padding: 0;
margin: 0;
display:block;
}
.wrap {
position:relative;
}
.wrap:before {
content:"";
position: absolute;
top:0;
left:0;
height:100%;
width:100%;
background: rgba(0,0,0,0.5);
z-index:999;
}
<div class="home">
<div class="wrap">
<img src="http://via.placeholder.com/350x150" />
</div>
</div>
A: HTML
<div class="home">
<img src="mango.jpg" />
<div class="overlay"></div>
</div>
SCSS
.home {
position: relative;
img {
max-width: 100%;
padding: 0;
margin: 0;
}
.overlay {
position: absolute;
top: 0;
left: 0;
bottom: 0;
right: 0;
background-color: rgba(56, 59, 64, 0.7);
}
}
| stackoverflow | {
"language": "en",
"length": 220,
"provenance": "stackexchange_0000F.jsonl.gz:906982",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673676"
} |
78b9fc12144607c8198cb17f4432c646263ca73d | Stackoverflow Stackexchange
Q: Load encrypted PKCS1 private key from file I have a PKCS1 private key in a file and I load it using
b, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
Then, I try to convert it into private key object
block, _ := pem.Decode(b)
der, err := x509.DecryptPEMBlock(block, []byte("qwerty"))
if err != nil {
return nil, err
}
bytes := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: der})
return x509.ParsePKCS1PrivateKey(bytes)
But this code throws an exception in DecryptPEMBlock
x509: no DEK-Info header in block
I didn't find any documentation about this in golang
A: I made a mistake with my private key file; here is the working code
func GetPrivateKey(path string) (*rsa.PrivateKey, error) {
b, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
block, _ := pem.Decode(b)
der, err := x509.DecryptPEMBlock(block, []byte(*PrivateKeyPassword))
if err != nil {
return nil, err
}
return x509.ParsePKCS1PrivateKey(der)
}
P.S. Go does have a package to decrypt PKCS1 private keys, but does not have one for PKCS8.
| Q: Load encrypted PKCS1 private key from file I have a PKCS1 private key in a file and I load it using
b, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
Then, I try to convert it into private key object
block, _ := pem.Decode(b)
der, err := x509.DecryptPEMBlock(block, []byte("qwerty"))
if err != nil {
return nil, err
}
bytes := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: der})
return x509.ParsePKCS1PrivateKey(bytes)
But this code throws an exception in DecryptPEMBlock
x509: no DEK-Info header in block
I didn't find any documentation about this in golang
A: I made a mistake with my private key file and here is a working code
func GetPrivateKey(path string) (*rsa.PrivateKey, error) {
b, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
block, _ := pem.Decode(b)
der, err := x509.DecryptPEMBlock(block, []byte(*PrivateKeyPassword))
if err != nil {
return nil, err
}
return x509.ParsePKCS1PrivateKey(der)
}
P.S. Go does have a package to decrypt PKCS1 private keys, but does not have one for PKCS8.
A: Go does not seem to have a package to decode PKCS files
Check out this link for more details PKCS
I can't find a package to decode PKCS1 keys
Use this package to decode PKCS8 files; there are some packages to decode PKCS8, PKCS10 and PKCS12, but not for PKCS1
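As an aside, the "no DEK-Info header in block" error from the question means the PEM block carries no legacy OpenSSL encryption headers — i.e. the key file was not actually encrypted, which matches the asker's "mistake with my private key file". A quick way to check for those headers, sketched in Python (stdlib only; the header names follow the legacy OpenSSL encrypted-PEM format):

```python
def pem_is_encrypted(pem_text: str) -> bool:
    """Return True if a PEM block carries legacy OpenSSL encryption headers."""
    # DecryptPEMBlock-style decryption only applies when both headers are present.
    return "Proc-Type: 4,ENCRYPTED" in pem_text and "DEK-Info:" in pem_text

plain = "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----"
encrypted = (
    "-----BEGIN RSA PRIVATE KEY-----\n"
    "Proc-Type: 4,ENCRYPTED\n"
    "DEK-Info: AES-256-CBC,8E4619...\n"
    "\n"
    "MIIE...\n"
    "-----END RSA PRIVATE KEY-----"
)

print(pem_is_encrypted(plain))      # → False
print(pem_is_encrypted(encrypted))  # → True
```

If the headers are absent, skip DecryptPEMBlock and hand block.Bytes straight to ParsePKCS1PrivateKey.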
| stackoverflow | {
"language": "en",
"length": 217,
"provenance": "stackexchange_0000F.jsonl.gz:906989",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673695"
} |
5801da03df14a7401d70c2a8b1ce460b51def89b | Stackoverflow Stackexchange
Q: How to get disp_apache2.4.dll for Apache 2.4 version for AEM Recently I installed Apache HTTP Server 2.4, but I need disp_apache2.4.dll. I have gone through a number of forums but had no luck. Please suggest some links or forums where I can get that dll file.
Note : I am using Adobe CQ (AEM)
A: Only "Dispatcher for Apache HTTP Server 2.2" is available for IIS on Windows. Dispatcher for Apache HTTP Server 2.4 is not supported and therefore the version you are looking for is not available on this platform.
| Q: How to get disp_apache2.4.dll for Apache 2.4 version for AEM Recently I installed Apache HTTP Server 2.4, but I need disp_apache2.4.dll. I have gone through a number of forums but had no luck. Please suggest some links or forums where I can get that dll file.
Note : I am using Adobe CQ (AEM)
A: Only "Dispatcher for Apache HTTP Server 2.2" is available for IIS on Windows. Dispatcher for Apache HTTP Server 2.4 is not supported and therefore the version you are looking for is not available on this platform.
| stackoverflow | {
"language": "en",
"length": 90,
"provenance": "stackexchange_0000F.jsonl.gz:906991",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673698"
} |
f1315c760b1a95eea61b5af9d160c72ccefea9b8 | Stackoverflow Stackexchange
Q: Rspec - capture stderr output of linux process I want to test that my bash script outputs some message to stderr.
I try this:
require 'rspec'
describe 'My behaviour' do
it 'should do something' do
expect { `echo "foo" 1>&2` }.to output.to_stderr
end
end
But it seems the output to stderr does not happen during the test.
A: RSpec's output.to_stderr matcher is looking for things that write to $stdout/$stderr -- which your shell command is not doing, as it runs as a separate sub-process.
In order to test this, you need to explicitly capture the stdout and stderr of the shell code. You could build your own implementation of this quite easily using the Open3 standard library, or for example use the rspec-bash gem:
require 'rspec'
require 'rspec/bash'
describe 'My behaviour' do
include Rspec::Bash
let(:stubbed_env) { create_stubbed_env }
it 'should do something' do
stdout, stderr, status = stubbed_env.execute(
'echo "foo" 1>&2'
)
expect(stderr).to eq('foo')
end
end
| Q: Rspec - capture stderr output of linux process I want to test that my bash script outputs some message to stderr.
I try this:
require 'rspec'
describe 'My behaviour' do
it 'should do something' do
expect { `echo "foo" 1>&2` }.to output.to_stderr
end
end
But it seems the output to stderr does not happen during the test.
A: RSpec's output.to_stderr matcher is looking for things that write to $stdout/$stderr -- which your shell command is not doing, as it runs as a separate sub-process.
In order to test this, you need to explicitly capture the stdout and stderr of the shell code. You could build your own implementation of this quite easily using the Open3 standard library, or for example use the rspec-bash gem:
require 'rspec'
require 'rspec/bash'
describe 'My behaviour' do
include Rspec::Bash
let(:stubbed_env) { create_stubbed_env }
it 'should do something' do
stdout, stderr, status = stubbed_env.execute(
'echo "foo" 1>&2'
)
expect(stderr).to eq('foo')
end
end
A: Found a less precise, but more easier to read method:
require 'rspec'
describe 'My behaviour' do
it 'should do something' do
expect { system('echo "foo" 1>&2 ') }.to output.to_stderr_from_any_process
end
end
AFAIU - it cannot check exact message, but it's enough for me
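The distinction the first answer draws — the backtick command runs in a separate process, so nothing is written to the test process's own $stderr — is easy to demonstrate outside RSpec. A Python sketch using the stdlib subprocess module to capture the child's streams:

```python
import subprocess

# Run the same shell snippet and capture the child's streams separately.
result = subprocess.run(
    ["sh", "-c", 'echo "foo" 1>&2'],
    capture_output=True,
    text=True,
)

print(repr(result.stdout))  # → ''
print(repr(result.stderr))  # → 'foo\n'
```

This mirrors what rspec-bash's stubbed_env.execute does: run the script as a child process and hand the captured stdout/stderr back to the test.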
| stackoverflow | {
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:907003",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673739"
} |
63c2087b62d1ad516cdd0f8915faa26cd8c3a4c0 | Stackoverflow Stackexchange
Q: How to test a PrivateObject in a .net standard test project? I am following this tutorial, https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-with-mstest
but I don't have the type PrivateObject available, so I am wondering if it would be possible to test private objects with a .net standard 2.0 project.
A: You can always use reflection
ClassToTest obj = new ClassToTest();
Type t = typeof(ClassToTest);
FieldInfo f = t.GetField("field", BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public);
f.SetValue(obj, "Don't panic");
t.InvokeMember("PrintField",
BindingFlags.InvokeMethod | BindingFlags.NonPublic |
BindingFlags.Public | BindingFlags.Instance,
null, obj, null);
You should write a helper class for this, or else your tests will contain a lot of identical code
P.S. Sample of code is from here
| Q: How to test a PrivateObject in a .net standard test project? I am following this tutorial, https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-with-mstest
but I don't have the type PrivateObject available, so I am wondering if it would be possible to test private objects with a .net standard 2.0 project.
A: You can always use reflection
ClassToTest obj = new ClassToTest();
Type t = typeof(ClassToTest);
FieldInfo f = t.GetField("field", BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public);
f.SetValue(obj, "Don't panic");
t.InvokeMember("PrintField",
BindingFlags.InvokeMethod | BindingFlags.NonPublic |
BindingFlags.Public | BindingFlags.Instance,
null, obj, null);
You should write a helper class for this, or else your tests will contain a lot of identical code
P.S. Sample of code is from here
A: Private objects are accessible only within the body of the class, so in order to test them you must do one of the following:
*
*make private objects public
or
*implement public methods which will interact with these private objects
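The reflection route works in other runtimes too. For comparison, a Python sketch of the same idea — a test reaching a "private" (name-mangled) field and method without modifying the class; the class and member names here are illustrative, not part of MSTest:

```python
class ClassToTest:
    def __init__(self):
        self.__field = "secret"   # name-mangled to _ClassToTest__field

    def __print_field(self):      # "private" method, mangled likewise
        return self.__field

obj = ClassToTest()

# Reflection-style access from a test, without modifying the class:
setattr(obj, "_ClassToTest__field", "Don't panic")
result = getattr(obj, "_ClassToTest__print_field")()
print(result)  # → Don't panic
```

As with the C# FieldInfo approach, this couples the test to implementation details, so the "make it public / wrap it in a public method" options above are usually the cleaner fix.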
| stackoverflow | {
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:907030",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673817"
} |
e3c1b8f589bf17e377034b7c508b044dbc0c9aa2 | Stackoverflow Stackexchange
Q: Rank error in tf.nn.dynamic_rnn I am trying to build a CNN + RNN model and I am getting the following error.
Any help will be appreciated.
fc2 has shape (?,4096)
cell = tf.contrib.rnn.BasicLSTMCell(self.rnn_hidden_units)
stack = tf.contrib.rnn.MultiRNNCell([cell]*self.rnn_layers)
initial_state = cell.zero_state(self.batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
outputs, _ = tf.nn.dynamic_rnn(stack, fc2,dtype=tf.float32)
File "rcnn.py", line 182, in model
outputs, _ = tf.nn.dynamic_rnn(stack, [fc2],dtype=tf.float32)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 574, in dynamic_rnn
dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 637, in _dynamic_rnn_loop
for input_ in flat_input)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 637, in
for input_ in flat_input)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 649, in with_rank_at_least
raise ValueError("Shape %s must have rank at least %d" % (self, rank))
ValueError: Shape (4096, ?) must have rank at least 3
A: Copying the answer of @jdehesa from his comment for better visibility:
The error seems fairly clear, tf.nn.dynamic_rnn expects a 3-dimensional tensor as input (i.e. rank 3), but fc2 has only two dimensions. The shape of fc2 should be something like (<batch_size>, <max_time>, <num_features>) (or (<max_time>, <batch_size>, <num_features>) if you pass time_major=True)
| Q: Rank error in tf.nn.dynamic_rnn I am trying to build a CNN + RNN model and I am getting the following error.
Any help will be appreciated.
fc2 has shape (?,4096)
cell = tf.contrib.rnn.BasicLSTMCell(self.rnn_hidden_units)
stack = tf.contrib.rnn.MultiRNNCell([cell]*self.rnn_layers)
initial_state = cell.zero_state(self.batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
outputs, _ = tf.nn.dynamic_rnn(stack, fc2,dtype=tf.float32)
File "rcnn.py", line 182, in model
outputs, _ = tf.nn.dynamic_rnn(stack, [fc2],dtype=tf.float32)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 574, in dynamic_rnn
dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 637, in _dynamic_rnn_loop
for input_ in flat_input)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 637, in
for input_ in flat_input)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 649, in with_rank_at_least
raise ValueError("Shape %s must have rank at least %d" % (self, rank))
ValueError: Shape (4096, ?) must have rank at least 3
A: Copying the answer of @jdehesa from his comment for better visibility:
The error seems fairly clear, tf.nn.dynamic_rnn expects a 3-dimensional tensor as input (i.e. rank 3), but fc2 has only two dimensions. The shape of fc2 should be something like (<batch_size>, <max_time>, <num_features>) (or (<max_time>, <batch_size>, <num_features>) if you pass time_major=True)
A: The default input of tf.nn.dynamic_rnn has a dimension of 3 (batch_size, sequence_length, num_features). Since your num_features is 1, you can expand your fc2 with
fc2 = tf.expand_dims(fc2, axis = 2)
and then use the code you have above.
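The shape fix in the last answer is easy to sanity-check outside TensorFlow. A NumPy sketch of what expand_dims(axis=2) does to the rank-2 tensor (the batch size of 8 is an assumed example value):

```python
import numpy as np

fc2 = np.zeros((8, 4096))               # rank 2: (batch_size, sequence_length)

# Append a trailing axis so each timestep carries a single feature:
fc2_seq = np.expand_dims(fc2, axis=2)   # rank 3: (batch_size, sequence_length, 1)

print(fc2.shape)      # → (8, 4096)
print(fc2_seq.shape)  # → (8, 4096, 1)
```

Alternatively, expand_dims(fc2, axis=1) would produce (batch_size, 1, num_features) if you instead want to treat the 4096 values as the features of a single timestep.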
| stackoverflow | {
"language": "en",
"length": 206,
"provenance": "stackexchange_0000F.jsonl.gz:907046",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673865"
} |
db317a1432a2541bf53721b7c64a9871f2c80740 | Stackoverflow Stackexchange
Q: React Native App Not Installed Error I'm testing out react native and can run the dev server fine on an emulator. When I go to build the apk to test on a real device, I get an
Application not installed error
System
*
*My Device is Android 5.1.1
*Emulator is Android 4.4
*React Native 16
Steps to build apk
*
*keytool -genkey -v -keystore my-app-key.keystore -alias my-app-alias -keyalg RSA -keysize 2048 -validity 10000
*react-native bundle --platform android --dev false --entry-file index.android.js --bundle-output android/app/src/main/assets/index.android.bundle --assets-dest android/app/src/main/res/
*cd android && ./gradlew assembleRelease
*Got apk app-release-unsigned.apk
Is there anything I am missing and is the generated file supposed to be unsigned.apk ?
A: This happens because your app is already installed for other users, so it is not visible in the app drawer. You need to uninstall the app for all users and then reinstall it.
*
*Go to Device/Android OS Settings
*Select APPS
*Your app will be listed here
*Go to the details of your app by selecting it from the list
*Tap the three-dot menu at the top right corner
*Press uninstall for all users
*Try again to install your app
*Now It will install successfully.
| Q: React Native App Not Installed Error I'm testing out react native and can run the dev server fine on an emulator. When I go to build the apk to test on a real device, I get an
Application not installed error
System
*
*My Device is Android 5.1.1
*Emulator is Android 4.4
*React Native 16
Steps to build apk
*
*keytool -genkey -v -keystore my-app-key.keystore -alias my-app-alias -keyalg RSA -keysize 2048 -validity 10000
*react-native bundle --platform android --dev false --entry-file index.android.js --bundle-output android/app/src/main/assets/index.android.bundle --assets-dest android/app/src/main/res/
*cd android && ./gradlew assembleRelease
*Got apk app-release-unsigned.apk
Is there anything I am missing and is the generated file supposed to be unsigned.apk ?
A: This happens because your app is already installed for other users, so it is not visible in the app drawer. You need to uninstall the app for all users and then reinstall it.
*
*Go to Device/Android OS Settings
*Select APPS
*Your app will be listed here
*Go to the details of your app by selecting it from the list
*Tap the three-dot menu at the top right corner
*Press uninstall for all users
*Try again to install your app
*Now It will install successfully.
A: For me, the same problem was from insufficient storage space. Maybe it'll help you.
A: I solved the issue by just uninstalling the older app on my device. It seems I had installed a debug build before releasing my app, and you cannot have the debug and release APKs of the same app installed simultaneously, since they are signed with different keys.
A: The app was previously installed; after removing that instance, the issue was resolved.
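On the original question's "app-release-unsigned.apk": assembleRelease only emits a signed, installable APK once a release signing config is wired into android/app/build.gradle. A sketch along the lines of the React Native signing guide (the keystore path, alias, and passwords are placeholders for your own values):

```groovy
// android/app/build.gradle (fragment)
android {
    signingConfigs {
        release {
            storeFile file("my-app-key.keystore")   // keystore from the keytool step
            storePassword "your-store-password"
            keyAlias "my-app-alias"
            keyPassword "your-key-password"
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.release
        }
    }
}
```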
| stackoverflow | {
"language": "en",
"length": 257,
"provenance": "stackexchange_0000F.jsonl.gz:907084",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44673996"
} |
b0b033aad68f39c5218334d5986f0a13b860f17a | Stackoverflow Stackexchange
Q: Can I make my Telegram bot for private chat only (not groups)? Is it possible to prevent my bot from being invited into groups and for it to only be available for private chat? I intend to create a bot that will give user-specific information and this would be confusing within a group.
A: Sure, you can use /setjoingroups command in @BotFather.
BTW, there is also leaveChat; your bot can leave a group by itself if it joined one before you set this.
| Q: Can I make my Telegram bot for private chat only (not groups)? Is it possible to prevent my bot from being invited into groups and for it to only be available for private chat? I intend to create a bot that will give user-specific information and this would be confusing within a group.
A: Sure, you can use /setjoingroups command in @BotFather.
BTW, there is also leaveChat; your bot can leave a group by itself if it joined one before you set this.
A: When receiving a message, check whether the chat id and the user id are the same; if they are not, the message came from a group chat, so ignore it.
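That check can be written directly against the Update object the Bot API delivers; simpler still, the chat object carries a type field. A Python sketch (the field names follow the Telegram Bot API Update/Chat objects; the sample updates are illustrative):

```python
def is_private_chat(update: dict) -> bool:
    """Return True when an incoming update came from a one-on-one chat."""
    chat = update.get("message", {}).get("chat", {})
    # In a private chat, chat.type is "private" and chat.id equals the user's id.
    return chat.get("type") == "private"

private_update = {"message": {"chat": {"id": 42, "type": "private"},
                              "from": {"id": 42}}}
group_update = {"message": {"chat": {"id": -1001, "type": "group"},
                            "from": {"id": 42}}}

print(is_private_chat(private_update))  # → True
print(is_private_chat(group_update))    # → False
```

Combined with /setjoingroups in @BotFather, this gives a belt-and-braces guarantee that group traffic is never answered.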
| stackoverflow | {
"language": "en",
"length": 96,
"provenance": "stackexchange_0000F.jsonl.gz:907108",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44674075"
} |
e72237247d69c053005309dffbe976695025413f | Stackoverflow Stackexchange
Q: window.location.href on thymeleaf I have this piece of code in my Thymeleaf template but it does not work properly since this is the location generated
deviceevent/@%7B/deviceevent/list/%7Bid%7D(id=$%7BdeviceEvent.id%7D)%7D
in the template
<tr th:each="deviceEvent : ${deviceEvents}" onclick="window.location.href = '@{/deviceevent/list/{id}(id=${deviceEvent.id})}'" >
A: Thymeleaf doesn't evaluate attributes unless they are prefixed with th. In this case, th:onclick. The complete string should look like this:
th:onclick="'window.location.href = \'' + @{/deviceevent/list/{id}(id=${deviceEvent.id})} + '\''"
| Q: window.location.href on thymeleaf I have this piece of code in my Thymeleaf template but it does not work properly since this is the location generated
deviceevent/@%7B/deviceevent/list/%7Bid%7D(id=$%7BdeviceEvent.id%7D)%7D
in the template
<tr th:each="deviceEvent : ${deviceEvents}" onclick="window.location.href = '@{/deviceevent/list/{id}(id=${deviceEvent.id})}'" >
A: Thymeleaf doesn't evaluate attributes unless they are prefixed with th. In this case, th:onclick. The complete string should look like this:
th:onclick="'window.location.href = \'' + @{/deviceevent/list/{id}(id=${deviceEvent.id})} + '\''"
| stackoverflow | {
"language": "en",
"length": 67,
"provenance": "stackexchange_0000F.jsonl.gz:907114",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44674095"
} |
d0d3d3dc9af69e37f7d3d02ea6972410e090d9a5 | Stackoverflow Stackexchange
Q: Alternate to "`--whole-archive`" in bazel I want to link an external static lib in one of my Bazel-based C++ projects. I need the "whole-archive" option when linking the library, as in this gcc or g++ build:
g++ main.cc -Wl,--whole-archive -lhttp -Wl,--no-whole-archive
Can anybody suggest what is the alternate to "--whole-archive" in bazel?
A: Sadly, alwayslink doesn't work with precompiled libraries, only with cc_library compiled and linked by Bazel. There is one undocumented hack (I guess I'm just documenting it by mentioning it here), and it's to rename .a file to .lo file. Then Bazel will link it as whole archive.
Beware that this is a hack, and will stop working without warning. We have plans for some variation of cc_import rule exactly for this use case, to import a precompiled binary into the workspace with the ability to set whole archiveness on it. It's just not there yet.
| Q: Alternate to "`--whole-archive`" in bazel I want to link an external static lib in one of my Bazel-based C++ projects. I need the "whole-archive" option when linking the library, as in this gcc or g++ build:
g++ main.cc -Wl,--whole-archive -lhttp -Wl,--no-whole-archive
Can anybody suggest what is the alternate to "--whole-archive" in bazel?
A: Sadly, alwayslink doesn't work with precompiled libraries, only with cc_library compiled and linked by Bazel. There is one undocumented hack (I guess I'm just documenting it by mentioning it here), and it's to rename .a file to .lo file. Then Bazel will link it as whole archive.
Beware that this is a hack, and will stop working without warning. We have plans for some variation of cc_import rule exactly for this use case, to import a precompiled binary into the workspace with the ability to set whole archiveness on it. It's just not there yet.
A: https://bazel.build/versions/master/docs/be/c-cpp.html#cc_library.alwayslink
alwayslink
Boolean; optional; nonconfigurable; default is 0
If 1, any binary that depends (directly or indirectly) on this C++
library will link in all the object files for the files listed in
srcs, even if some contain no symbols referenced by the binary. This
is useful if your code isn't explicitly called by code in the binary,
e.g., if your code registers to receive some callback provided by some
service.
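Tying the two answers together, the rename hack from the first answer might look like this in a BUILD file (a sketch only — the .lo trick is undocumented and may stop working, as the answer warns; libhttp.a renamed to libhttp.lo is assumed to live in this package):

```starlark
# BUILD (sketch): Bazel treats the .lo extension as "link whole archive"
cc_library(
    name = "http_whole",
    srcs = ["libhttp.lo"],   # libhttp.a, renamed
    linkstatic = 1,
)

cc_binary(
    name = "main",
    srcs = ["main.cc"],
    deps = [":http_whole"],
)
```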
| stackoverflow | {
"language": "en",
"length": 219,
"provenance": "stackexchange_0000F.jsonl.gz:907139",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44674163"
} |