qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
6,548,940 |
I have an SQLite database, which does not support trig functions. I would like to sort a set of lat,lng pairs in my table by distance compared to a second lat,lng pair. I'm familiar with the standard haversine distance formula for sorting lat,lng pairs by distance.
In this case I don't particularly care about precision; my points are separated by large distances, so I don't mind rounding off the distances by treating curves as straight lines.
My question: is there a generally accepted formula for this kind of query? Remember, no trig functions!
|
2011/07/01
|
['https://Stackoverflow.com/questions/6548940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/475329/']
|
If your points are within a reasonable distance of each other (i.e. not across half the world, and not across the date line), you can apply a correction for the difference between latitude and longitude degrees (a degree of longitude covers a shorter distance, except at the Equator), and then just calculate the distance as if the earth were flat.
As you only want to sort the values, you don't even need the square root; you can just add the squares of the differences.
Example, where `@lat` and `@lng` are your current position, and `2` is the difference correction:
```
select *
from Points
order by (lat - @lat) * (lat - @lat) + ((lng - @lng) * 2) * ((lng - @lng) * 2)
```
You can calculate the difference correction for a specific latitude as `1 / cos(lat)`.
---
Cees Timmerman came up with this formula which also works across the date line:
```
pow(lat-lat2, 2) + pow(2 * min(abs(lon-lon2), 360 - abs(lon-lon2)), 2)
```
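For intuition, here is a sketch of that sort key outside SQL (Python for illustration; the function and argument names are mine). It uses the date-line-safe wrap from the formula above, and scales longitude differences by `cos(lat)`, that is, it divides by the `1 / cos(lat)` correction rather than multiplying, so longitude differences shrink toward the poles as they should:

```python
import math

def sort_key(lat, lng, ref_lat, ref_lng):
    """Squared flat-earth 'distance' in degrees^2. Monotonic in the true
    distance for nearby points, so it works for ORDER BY-style sorting."""
    dlng = abs(lng - ref_lng)
    dlng = min(dlng, 360.0 - dlng)            # wrap across the date line
    # A longitude degree spans cos(latitude) of a latitude degree.
    dlng *= math.cos(math.radians(ref_lat))
    return (lat - ref_lat) ** 2 + dlng ** 2
```

For example, seen from `(60, 0)`, the point `(60, 2)` (two longitude degrees at 60° north) compares as closer than `(61.5, 0)`, which matches the real distances.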
|
You could always truncate the [Taylor series expansion](http://en.wikipedia.org/wiki/Cosine#Series_definitions) of sine and use the fact that sin^2(x)+cos^2(x)=1 to get the approximation of cosine. The only tricky part would be [using Taylor's theorem to estimate the number of terms that you'd need for a given amount of precision](http://en.wikipedia.org/wiki/Taylor%27s_theorem#Explicit_formulae_for_the_remainder).
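As a hedged sketch of that idea (truncating the cosine series directly rather than going through sine; the function name and default term count are mine):

```python
import math

def cos_taylor(x, terms=10):
    """Approximate cos(x) with a truncated Maclaurin series: no trig calls.
    Range-reduce to [-pi, pi] so ten terms are plenty; Taylor's theorem
    bounds the error by the first omitted term (about pi^20/20! here)."""
    x = math.fmod(x, 2 * math.pi)
    if x > math.pi:
        x -= 2 * math.pi
    elif x < -math.pi:
        x += 2 * math.pi
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 1) * (2 * k + 2))   # next series term
    return total
```

With ten terms this agrees with `math.cos` to roughly 1e-8 or better over a full period.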
|
6,548,940 |
I have an SQLite database, which does not support trig functions. I would like to sort a set of lat,lng pairs in my table by distance compared to a second lat,lng pair. I'm familiar with the standard haversine distance formula for sorting lat,lng pairs by distance.
In this case I don't particularly care about precision; my points are separated by large distances, so I don't mind rounding off the distances by treating curves as straight lines.
My question: is there a generally accepted formula for this kind of query? Remember, no trig functions!
|
2011/07/01
|
['https://Stackoverflow.com/questions/6548940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/475329/']
|
If your points are within a reasonable distance of each other (i.e. not across half the world, and not across the date line), you can apply a correction for the difference between latitude and longitude degrees (a degree of longitude covers a shorter distance, except at the Equator), and then just calculate the distance as if the earth were flat.
As you only want to sort the values, you don't even need the square root; you can just add the squares of the differences.
Example, where `@lat` and `@lng` are your current position, and `2` is the difference correction:
```
select *
from Points
order by (lat - @lat) * (lat - @lat) + ((lng - @lng) * 2) * ((lng - @lng) * 2)
```
You can calculate the difference correction for a specific latitude as `1 / cos(lat)`.
---
Cees Timmerman came up with this formula which also works across the date line:
```
pow(lat-lat2, 2) + pow(2 * min(abs(lon-lon2), 360 - abs(lon-lon2)), 2)
```
|
If you want proper spatial data in your model then use SpatiaLite, a spatially-enabled version of SQLite:
<http://www.gaia-gis.it/spatialite/>
It's like PostGIS for PostgreSQL. All your SQLite functionality will work unchanged, and you'll get spatial functions too.
|
6,548,940 |
I have an SQLite database, which does not support trig functions. I would like to sort a set of lat,lng pairs in my table by distance compared to a second lat,lng pair. I'm familiar with the standard haversine distance formula for sorting lat,lng pairs by distance.
In this case I don't particularly care about precision; my points are separated by large distances, so I don't mind rounding off the distances by treating curves as straight lines.
My question: is there a generally accepted formula for this kind of query? Remember, no trig functions!
|
2011/07/01
|
['https://Stackoverflow.com/questions/6548940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/475329/']
|
If your points are within a reasonable distance of each other (i.e. not across half the world, and not across the date line), you can apply a correction for the difference between latitude and longitude degrees (a degree of longitude covers a shorter distance, except at the Equator), and then just calculate the distance as if the earth were flat.
As you only want to sort the values, you don't even need the square root; you can just add the squares of the differences.
Example, where `@lat` and `@lng` are your current position, and `2` is the difference correction:
```
select *
from Points
order by (lat - @lat) * (lat - @lat) + ((lng - @lng) * 2) * ((lng - @lng) * 2)
```
You can calculate the difference correction for a specific latitude as `1 / cos(lat)`.
---
Cees Timmerman came up with this formula which also works across the date line:
```
pow(lat-lat2, 2) + pow(2 * min(abs(lon-lon2), 360 - abs(lon-lon2)), 2)
```
|
Changing "\*" to "/" works for me:
```
select *
from Points
order by (lat - @lat) * (lat - @lat) + ((lng - @lng) / 2) * ((lng - @lng) / 2)
```
|
6,548,940 |
I have an SQLite database, which does not support trig functions. I would like to sort a set of lat,lng pairs in my table by distance compared to a second lat,lng pair. I'm familiar with the standard haversine distance formula for sorting lat,lng pairs by distance.
In this case I don't particularly care about precision; my points are separated by large distances, so I don't mind rounding off the distances by treating curves as straight lines.
My question: is there a generally accepted formula for this kind of query? Remember, no trig functions!
|
2011/07/01
|
['https://Stackoverflow.com/questions/6548940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/475329/']
|
If you want proper spatial data in your model then use SpatiaLite, a spatially-enabled version of SQLite:
<http://www.gaia-gis.it/spatialite/>
It's like PostGIS for PostgreSQL. All your SQLite functionality will work unchanged, and you'll get spatial functions too.
|
You could always truncate the [Taylor series expansion](http://en.wikipedia.org/wiki/Cosine#Series_definitions) of sine and use the fact that sin^2(x)+cos^2(x)=1 to get the approximation of cosine. The only tricky part would be [using Taylor's theorem to estimate the number of terms that you'd need for a given amount of precision](http://en.wikipedia.org/wiki/Taylor%27s_theorem#Explicit_formulae_for_the_remainder).
|
6,548,940 |
I have an SQLite database, which does not support trig functions. I would like to sort a set of lat,lng pairs in my table by distance compared to a second lat,lng pair. I'm familiar with the standard haversine distance formula for sorting lat,lng pairs by distance.
In this case I don't particularly care about precision; my points are separated by large distances, so I don't mind rounding off the distances by treating curves as straight lines.
My question: is there a generally accepted formula for this kind of query? Remember, no trig functions!
|
2011/07/01
|
['https://Stackoverflow.com/questions/6548940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/475329/']
|
You could always truncate the [Taylor series expansion](http://en.wikipedia.org/wiki/Cosine#Series_definitions) of sine and use the fact that sin^2(x)+cos^2(x)=1 to get the approximation of cosine. The only tricky part would be [using Taylor's theorem to estimate the number of terms that you'd need for a given amount of precision](http://en.wikipedia.org/wiki/Taylor%27s_theorem#Explicit_formulae_for_the_remainder).
|
Changing "\*" to "/" works for me:
```
select *
from Points
order by (lat - @lat) * (lat - @lat) + ((lng - @lng) / 2) * ((lng - @lng) / 2)
```
|
6,548,940 |
I have an SQLite database, which does not support trig functions. I would like to sort a set of lat,lng pairs in my table by distance compared to a second lat,lng pair. I'm familiar with the standard haversine distance formula for sorting lat,lng pairs by distance.
In this case I don't particularly care about precision; my points are separated by large distances, so I don't mind rounding off the distances by treating curves as straight lines.
My question: is there a generally accepted formula for this kind of query? Remember, no trig functions!
|
2011/07/01
|
['https://Stackoverflow.com/questions/6548940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/475329/']
|
If you want proper spatial data in your model then use SpatiaLite, a spatially-enabled version of SQLite:
<http://www.gaia-gis.it/spatialite/>
It's like PostGIS for PostgreSQL. All your SQLite functionality will work unchanged, and you'll get spatial functions too.
|
Changing "\*" to "/" works for me:
```
select *
from Points
order by (lat - @lat) * (lat - @lat) + ((lng - @lng) / 2) * ((lng - @lng) / 2)
```
|
23,051,085 |
Is there any way that I can create a program that gives input to another file and collects its output?
The best that Google gives me is this. And I tried to recreate it (read: copying the code in some unknown manner (read: stabbing in the dark)),
and I got this:
```
import time
string="file.py"
process=subprocess.Popen(string,stdin=subprocess.PIPE,stdout=subprocess.PIPE);
process.stdin.write("3 5")
time.sleep(1)
print process.stdout.read()
```
And it gives me error
```
File "D:\Pekerjaan non website\IO\reader.py", line 3, in <module>
process=subprocess.Popen(string,stdin=subprocess.PIPE,stdout=subprocess.PIPE);
NameError: name 'subprocess' is not defined
```
Can anybody tell me how to create that program?
Note: I have no knowledge in this kind of program I/O or subprocess
Note: It's up to you. Where you will explain this from my code or throw my code away and explain this from zero.
Thank you beforehand.
(PS: If my previous statement is confusing. My point is simple: Can you teach me? :))
|
2014/04/14
|
['https://Stackoverflow.com/questions/23051085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2872852/']
|
[subprocess](https://docs.python.org/2/library/subprocess.html) is a stdlib module that you need to import (the same way `time` is) - so you just need to:
```
import subprocess
```
some time before you try to use the functions in it (usually, you want to do this near the top of your code, right underneath your current `import time`).
|
In addition to lvc's answer, you should consider using [Popen.communicate()](https://docs.python.org/library/subprocess.html#subprocess.Popen.communicate). Something like:
```
import subprocess
string="file.py"
process=subprocess.Popen(string,stdin=subprocess.PIPE,stdout=subprocess.PIPE)
res=process.communicate("3 5")
print res[0]
```
This way you don't need the sleep call: `communicate()` waits for the process to terminate and returns a tuple `(stdoutdata, stderrdata)`.
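For readers on Python 3, a hedged sketch of the same pattern (`text=True` makes `communicate` work with `str` instead of bytes; here the child process is the interpreter itself so the example is self-contained, whereas in the question it would be `["python", "file.py"]`):

```python
import subprocess
import sys

# Spawn a child that reads one line and echoes it back; stand-in for
# the question's "file.py".
child = subprocess.Popen(
    [sys.executable, "-c", "print(input())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,                           # str in/out instead of bytes
)
out, err = child.communicate("3 5\n")    # waits for the child to exit
print(out.strip())                       # -> 3 5
```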
|
59,642,732 |
I have a Rails app which uses db `my_database_development` in my `config/database.yml`:
```
development:
<<: *default
database: my_database_development
```
Works correctly when I run `rails server`.
Now I want to use another db, so I change my `config/database.yml`:
```
development:
<<: *default
database: my_prev_database
```
Now when I run `rails server`, I get an `ActiveRecord::PendingMigrationError. To resolve this issue, run: bin/rails db:migrate RAILS_ENV=development`. When I run that command, `my_prev_database` gets cleared. I don't want that to happen; I want to use `my_prev_database` and all the data it has (which I backed up from somewhere).
How can I switch database in Rails effectively?
Thank you!
|
2020/01/08
|
['https://Stackoverflow.com/questions/59642732', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3943491/']
|
When you switch databases, you get a new `schema_migrations` table. In the new database, `schema_migrations` is empty, so Rails thinks you have pending migrations. I think you need to re-run the migrations in your new database. You can use something like a database dump to migrate the data from the old database to the new one.
|
I am now unable to replicate the problem above. (Not sure why; maybe I was copying over my database incorrectly.)
Also, a possible resolution was to copy tables over individually from `my_prev_database` to `my_database_development`.
Note for anyone troubleshooting similar problems: the commenters mentioned that
- Running `rails db:migrate` shouldn't delete database data
|
333,882 |
One problem that has always bothered me is the limitations of computers in studying math. With a chaotic dynamical system, for example, we know *mathematically* that they possess trajectories that never repeat themselves. But *computationally*, it seems that such an orbit can never be realized (since given the finite number of values that a computer can work with, periodicity seems inevitable). Yet, people study chaos with computers. They use mathematics of course, but considering the role that computers play in helping us gain intuition of complex systems, I wonder if there are aspects of a system we might totally miss by using computers in such a fashion.
Thanks,
Jack
|
2013/03/18
|
['https://math.stackexchange.com/questions/333882', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/62659/']
|
Numerical simulation of dynamical systems is indeed hard. One difficulty is that it is implemented using floating-point arithmetic, which is subject to rounding errors. For chaotic dynamical systems, the ones that have strange attractors, rounding errors are potentially serious, because orbits starting at nearby points can diverge from each other exponentially. Sometimes this strong sensitivity to initial conditions does not affect the overall picture, because numerically computed orbits are *shadowed* by exact orbits that capture the typical behavior of the system. However, the truth is that rounding errors affect numerical simulations of dynamical systems in very complex ways [1]. Well-conditioned dynamical systems may display chaotic numerical behavior [2,3]. Conversely, numerical methods can suppress chaos in some chaotic dynamical systems [3].
*(Text extracted from [4].)*
Nevertheless, there are computational methods for studying dynamical systems that work reliably even in the presence of rounding errors. See the work of Warwick Tucker, Zbigniew Galias, Pawel Pilarczyk, and others.
[1] Corless, What good are numerical simulations of chaotic dynamical systems? *Comput. Math. Appl.* **28** (1994) 107–121. [MR1300684](http://www.ams.org/mathscinet-getitem?mr=1300684)
[2] Adams et al., Computational chaos may be due to a single local error. *J. Comput. Phys.* **104** (1993) 241–250. [MR1198231](http://www.ams.org/mathscinet-getitem?mr=1198231)
[3] Corless et al., Numerical methods can suppress chaos, *Physics Letters A* **157** (1991) 127–136. [DOI 10.1016/0375-9601(91)90404-V](http://www.sciencedirect.com/science/article/pii/037596019190404V)
[4] Paiva et al., Robust visualization of strange attractors using affine arithmetic, *Computers & Graphics* **30** (2006), no. 6, 1020-1026. [DOI 10.1016/j.cag.2006.08.016](http://www.sciencedirect.com/science/article/pii/S0097849306001543)
|
It's a valid concern, and I have a historical anecdote to match it. When Mandelbrot published the first picture of the Mandelbrot set, the image was of poor quality due to the state of computing at the time. Seeing what he thought were "specks of dust" on the picture, his editor erased them. Since then we have obtained much better pictures, of course, and we know that these specks of dust are "islands" that are linked to the main body of the Mandelbrot set by "hairs" of empty interior (which were not very visible in Mandelbrot's picture).
The point is, computers are a tool to guide your intuition, but you should not rely solely on them. Think of imperfect computer pictures/simulations as drawing a surface instead of an n-manifold: it helps you understand what is going on, but it works *together* with a more rigorous understanding of the objects you are manipulating. Otherwise you are indeed bound to make mistakes.
|
333,882 |
One problem that has always bothered me is the limitations of computers in studying math. With a chaotic dynamical system, for example, we know *mathematically* that they possess trajectories that never repeat themselves. But *computationally*, it seems that such an orbit can never be realized (since given the finite number of values that a computer can work with, periodicity seems inevitable). Yet, people study chaos with computers. They use mathematics of course, but considering the role that computers play in helping us gain intuition of complex systems, I wonder if there are aspects of a system we might totally miss by using computers in such a fashion.
Thanks,
Jack
|
2013/03/18
|
['https://math.stackexchange.com/questions/333882', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/62659/']
|
Numerical simulation of dynamical systems is indeed hard. One difficulty is that it is implemented using floating-point arithmetic, which is subject to rounding errors. For chaotic dynamical systems, the ones that have strange attractors, rounding errors are potentially serious, because orbits starting at nearby points can diverge from each other exponentially. Sometimes this strong sensitivity to initial conditions does not affect the overall picture, because numerically computed orbits are *shadowed* by exact orbits that capture the typical behavior of the system. However, the truth is that rounding errors affect numerical simulations of dynamical systems in very complex ways [1]. Well-conditioned dynamical systems may display chaotic numerical behavior [2,3]. Conversely, numerical methods can suppress chaos in some chaotic dynamical systems [3].
*(Text extracted from [4].)*
Nevertheless, there are computational methods for studying dynamical systems that work reliably even in the presence of rounding errors. See the work of Warwick Tucker, Zbigniew Galias, Pawel Pilarczyk, and others.
[1] Corless, What good are numerical simulations of chaotic dynamical systems? *Comput. Math. Appl.* **28** (1994) 107–121. [MR1300684](http://www.ams.org/mathscinet-getitem?mr=1300684)
[2] Adams et al., Computational chaos may be due to a single local error. *J. Comput. Phys.* **104** (1993) 241–250. [MR1198231](http://www.ams.org/mathscinet-getitem?mr=1198231)
[3] Corless et al., Numerical methods can suppress chaos, *Physics Letters A* **157** (1991) 127–136. [DOI 10.1016/0375-9601(91)90404-V](http://www.sciencedirect.com/science/article/pii/037596019190404V)
[4] Paiva et al., Robust visualization of strange attractors using affine arithmetic, *Computers & Graphics* **30** (2006), no. 6, 1020-1026. [DOI 10.1016/j.cag.2006.08.016](http://www.sciencedirect.com/science/article/pii/S0097849306001543)
|
The set of nonlinear systems that can be understood analytically is a measure zero set. Numerical experiments, and numerical algorithms, are a necessary tool in the study of these systems. But I agree that there is a need for more robust numerical tools for studying complex systems.
Anyway, regarding chaotic sets, the "set-oriented" Perron-Frobenius operator-theoretic approach of numerically studying the densities and probability distributions of these systems, rather than individual trajectories, has been quite successful; see work by G. Froyland, M. Dellnitz, I. Mezic, etc. On the other hand, topological quantities such as topological entropy are robust to numerical errors and don't rely on exact computation of trajectories; see, e.g., work by P. Boyland. So there are other ways of understanding these systems than just numerically tracking individual trajectories.
|
34,175,959 |
I was debugging a WatchKit extension app with a notification, on the device and watch.
The watch app runs with the notification and should start the companion app on the iPhone using WCSession, but the iPhone prints only this in the log.
What could the problem be? All settings are the defaults offered by Xcode. The iPhone app is an old app; I just added the WatchKit extension.
Using watchOS 2 and iOS 9.1, minimum target iOS 6.0.
```
iPhone SpringBoard[61] <Warning>: [FBSystemService] Error launching com.appId: Disabled (5)
```
|
2015/12/09
|
['https://Stackoverflow.com/questions/34175959', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/312627/']
|
I had to remove this from the plist file.
```
<key>UIApplicationExitsOnSuspend</key>
<true/>
```
|
Check the value of `WKCompanionAppBundleIdentifier` in the Info.plist of your WatchKit app. It should match the bundle identifier of your application.
|
31,799,751 |
I have two lists of phone numbers. The 1st list is a subset of the 2nd list. I ran the two different algorithms below to determine which phone numbers are contained in both lists.
* Way 1:
+ Sort the 1st list: Arrays.sort(FirstList);
+ Loop over the 2nd list to find matched elements: if Arrays.binarySearch(FirstList, 'each of 2nd list') then OK
* Way 2:
+ Convert the 1st list into a HashMap with key/value pairs of ('each of 1st list', Boolean.TRUE)
+ Loop over the 2nd list to find matched elements: if FirstList.containsKey('each of 2nd list') then OK
The result: Way 2 ran in 5 seconds, considerably faster than Way 1 at 39 seconds. I can't understand why.
I appreciate any comments.
|
2015/08/04
|
['https://Stackoverflow.com/questions/31799751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1517270/']
|
Because a hash lookup is *O(1)* on average, while a binary search is *O(log N)*.
|
Look at the source code for HashMap: it creates and stores a hash for each added (key, value) pair, then the containsKey() method calculates a hash for the given key, and uses a very fast operation to check if it is already in the map. So most retrieval operations are very fast.
|
31,799,751 |
I have two lists of phone numbers. The 1st list is a subset of the 2nd list. I ran the two different algorithms below to determine which phone numbers are contained in both lists.
* Way 1:
+ Sort the 1st list: Arrays.sort(FirstList);
+ Loop over the 2nd list to find matched elements: if Arrays.binarySearch(FirstList, 'each of 2nd list') then OK
* Way 2:
+ Convert the 1st list into a HashMap with key/value pairs of ('each of 1st list', Boolean.TRUE)
+ Loop over the 2nd list to find matched elements: if FirstList.containsKey('each of 2nd list') then OK
The result: Way 2 ran in 5 seconds, considerably faster than Way 1 at 39 seconds. I can't understand why.
I appreciate any comments.
|
2015/08/04
|
['https://Stackoverflow.com/questions/31799751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1517270/']
|
`HashMap` relies on a very efficient algorithm called 'hashing' which has been in use for many years and is reliable and effective. Essentially the way it works is to split the items in the collection into much smaller groups which can be accessed extremely quickly. Once the group is located a less efficient search mechanism can be used to locate the specific item.
Identifying the group for an item occurs via an algorithm called a 'hashing function'. In Java the hashing method is `Object.hashCode()` which returns an `int` representing the group. As long as `hashCode` is well defined for your class you should expect `HashMap` to be very efficient which is exactly what you've found.
There's a very good discussion on the various types of `Map` and which to use at [Difference between HashMap, LinkedHashMap and TreeMap](https://stackoverflow.com/questions/2889777/difference-between-hashmap-linkedhashmap-and-treemap)
My shorthand rule-of-thumb is to always use `HashMap` unless you can't define an appropriate `hashCode` for your keys or the items need to be ordered (either natural or insertion).
|
Look at the source code for HashMap: it creates and stores a hash for each added (key, value) pair, then the containsKey() method calculates a hash for the given key, and uses a very fast operation to check if it is already in the map. So most retrieval operations are very fast.
|
31,799,751 |
I have two lists of phone numbers. The 1st list is a subset of the 2nd list. I ran the two different algorithms below to determine which phone numbers are contained in both lists.
* Way 1:
+ Sort the 1st list: Arrays.sort(FirstList);
+ Loop over the 2nd list to find matched elements: if Arrays.binarySearch(FirstList, 'each of 2nd list') then OK
* Way 2:
+ Convert the 1st list into a HashMap with key/value pairs of ('each of 1st list', Boolean.TRUE)
+ Loop over the 2nd list to find matched elements: if FirstList.containsKey('each of 2nd list') then OK
The result: Way 2 ran in 5 seconds, considerably faster than Way 1 at 39 seconds. I can't understand why.
I appreciate any comments.
|
2015/08/04
|
['https://Stackoverflow.com/questions/31799751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1517270/']
|
Because a hash lookup is *O(1)* on average, while a binary search is *O(log N)*.
|
Way 1:
* Sorting: around `O(nlogn)`
* Search: around `O(logn)`
Way 2:
* Creating HashTable: `O(n)` for small density (no collisions)
* Contains: `O(1)`
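The two approaches can be sketched and checked against each other (Python for brevity; the list sizes are made up). The asymptotic gap above, not correctness, is what separates them, since both find exactly the same matches:

```python
import bisect
import random

first = random.sample(range(1_000_000), 50_000)   # the "1st list"
second = random.sample(first, 10_000)             # a subset, as in the question

# Way 1: sort once, then binary-search each element: O(n log n) + O(m log n)
sorted_first = sorted(first)
def contains_bsearch(x):
    i = bisect.bisect_left(sorted_first, x)
    return i < len(sorted_first) and sorted_first[i] == x

# Way 2: hash-based membership: O(n) to build, O(1) average per lookup
first_set = set(first)

matches_1 = [x for x in second if contains_bsearch(x)]
matches_2 = [x for x in second if x in first_set]
```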
|
31,799,751 |
I have two lists of phone numbers. The 1st list is a subset of the 2nd list. I ran the two different algorithms below to determine which phone numbers are contained in both lists.
* Way 1:
+ Sort the 1st list: Arrays.sort(FirstList);
+ Loop over the 2nd list to find matched elements: if Arrays.binarySearch(FirstList, 'each of 2nd list') then OK
* Way 2:
+ Convert the 1st list into a HashMap with key/value pairs of ('each of 1st list', Boolean.TRUE)
+ Loop over the 2nd list to find matched elements: if FirstList.containsKey('each of 2nd list') then OK
The result: Way 2 ran in 5 seconds, considerably faster than Way 1 at 39 seconds. I can't understand why.
I appreciate any comments.
|
2015/08/04
|
['https://Stackoverflow.com/questions/31799751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1517270/']
|
Because a hash lookup is *O(1)* on average, while a binary search is *O(log N)*.
|
`HashMap` relies on a very efficient algorithm called 'hashing' which has been in use for many years and is reliable and effective. Essentially the way it works is to split the items in the collection into much smaller groups which can be accessed extremely quickly. Once the group is located a less efficient search mechanism can be used to locate the specific item.
Identifying the group for an item occurs via an algorithm called a 'hashing function'. In Java the hashing method is `Object.hashCode()` which returns an `int` representing the group. As long as `hashCode` is well defined for your class you should expect `HashMap` to be very efficient which is exactly what you've found.
There's a very good discussion on the various types of `Map` and which to use at [Difference between HashMap, LinkedHashMap and TreeMap](https://stackoverflow.com/questions/2889777/difference-between-hashmap-linkedhashmap-and-treemap)
My shorthand rule-of-thumb is to always use `HashMap` unless you can't define an appropriate `hashCode` for your keys or the items need to be ordered (either natural or insertion).
|
31,799,751 |
I have two lists of phone numbers. The 1st list is a subset of the 2nd list. I ran the two different algorithms below to determine which phone numbers are contained in both lists.
* Way 1:
+ Sort the 1st list: Arrays.sort(FirstList);
+ Loop over the 2nd list to find matched elements: if Arrays.binarySearch(FirstList, 'each of 2nd list') then OK
* Way 2:
+ Convert the 1st list into a HashMap with key/value pairs of ('each of 1st list', Boolean.TRUE)
+ Loop over the 2nd list to find matched elements: if FirstList.containsKey('each of 2nd list') then OK
The result: Way 2 ran in 5 seconds, considerably faster than Way 1 at 39 seconds. I can't understand why.
I appreciate any comments.
|
2015/08/04
|
['https://Stackoverflow.com/questions/31799751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1517270/']
|
`HashMap` relies on a very efficient algorithm called 'hashing' which has been in use for many years and is reliable and effective. Essentially the way it works is to split the items in the collection into much smaller groups which can be accessed extremely quickly. Once the group is located a less efficient search mechanism can be used to locate the specific item.
Identifying the group for an item occurs via an algorithm called a 'hashing function'. In Java the hashing method is `Object.hashCode()` which returns an `int` representing the group. As long as `hashCode` is well defined for your class you should expect `HashMap` to be very efficient which is exactly what you've found.
There's a very good discussion on the various types of `Map` and which to use at [Difference between HashMap, LinkedHashMap and TreeMap](https://stackoverflow.com/questions/2889777/difference-between-hashmap-linkedhashmap-and-treemap)
My shorthand rule-of-thumb is to always use `HashMap` unless you can't define an appropriate `hashCode` for your keys or the items need to be ordered (either natural or insertion).
|
Way 1:
* Sorting: around `O(nlogn)`
* Search: around `O(logn)`
Way 2:
* Creating HashTable: `O(n)` for small density (no collisions)
* Contains: `O(1)`
|
50,493,197 |
Currently I'm practicing Solidity. However, I'm a little confused about accessing a private variable in a contract.
For example, here:
```
address private a;
address private b;
mapping (bytes32 => uint) public people;
mapping (bytes32 => mapping(address => uint)) public listOfEmp;
bytes32[] public list;
bytes32 private z;
```
I can access 'a' with
```
web3.eth.getStorageAt("0x501...", 0)
```
How can I access 'z' here? From a different contract.
Thank you
|
2018/05/23
|
['https://Stackoverflow.com/questions/50493197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6181020/']
|
You can access the storage of your contract even if it's private.
Try this:
```
web3.eth.getStorageAt("0x501...", 5)
```
If you want to access the map or array, check this document for layout of state variables: <https://solidity.readthedocs.io/en/v0.4.24/miscellaneous.html>
By the way, you should always use getProof to validate the value.
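For context on why the index is 5: Solidity assigns state variables to consecutive 32-byte slots in declaration order, packing a variable into the current slot only when it fits. A rough sketch of that rule (sizes in bytes; a mapping or dynamic array takes one full slot as a marker, with its contents stored elsewhere; the helper name is mine):

```python
def assign_slots(sizes):
    """Map each state variable (given its byte size; use 32 for mappings,
    dynamic arrays, and bytes32) to a storage slot, packing where possible."""
    slots, slot, used = [], 0, 0
    for size in sizes:
        if used + size > 32:          # doesn't fit: start a new slot
            slot, used = slot + 1, 0
        slots.append(slot)
        used += size
    return slots

# a, b: address (20 bytes); people, listOfEmp, list: one marker slot each;
# z: bytes32
print(assign_slots([20, 20, 32, 32, 32, 32]))   # -> [0, 1, 2, 3, 4, 5]
```

So `z` lands in slot 5, which is why `web3.eth.getStorageAt("0x501...", 5)` returns it.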
|
I don't believe you can. A private variable is meant to only be used within the contract in which it is defined.
See here: <http://solidity.readthedocs.io/en/v0.4.21/contracts.html>
|
50,493,197 |
Currently I'm practicing Solidity. However, I'm a little confused about accessing a private variable in a contract.
For example, here:
```
address private a;
address private b;
mapping (bytes32 => uint) public people;
mapping (bytes32 => mapping(address => uint)) public listOfEmp;
bytes32[] public list;
bytes32 private z;
```
I can access 'a' with
```
web3.eth.getStorageAt("0x501...", 0)
```
How can I access 'z' here? From a different contract.
Thank you
|
2018/05/23
|
['https://Stackoverflow.com/questions/50493197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6181020/']
|
Think of Ethereum as a process running on your machine or remotely. Using `web3.eth.getStorageAt` you read data from the process memory. In the same way you can read the data of every program on your computer.
On the other hand, high-level programming languages like Java, C++ or Solidity frequently define access rules on variables and functions (private, protected, etc.). But these rules only hold in the context of the program's execution; for Solidity, that context is the execution of a transaction.
This means that a private field is private only to other contracts trying to read it, but it can be read by external (and pretty low-level) APIs like `web3.eth.getStorageAt`.
|
I don't believe you can. A private variable is meant to only be used within the contract in which it is defined.
See here: <http://solidity.readthedocs.io/en/v0.4.21/contracts.html>
|
50,493,197 |
Currently, I'm practicing Solidity. However, I'm a little confused about accessing a private variable in a contract.
For example here;
```
address private a;
address private b;
mapping (bytes32 => uint) public people;
mapping (bytes32 => mapping(address => uint)) public listOfEmp;
bytes32[] public list;
bytes32 private z;
```
I can access 'a' with
```
web3.eth.getStorageAt("0x501...", 0)
```
How can I access 'z' here from a different contract?
Thank you
|
2018/05/23
|
['https://Stackoverflow.com/questions/50493197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6181020/']
|
You can access the storage of your contract even if it's private.
Try this:
```
web3.eth.getStorageAt("0x501...", 5)
```
If you want to access the map or array, check this document for layout of state variables: <https://solidity.readthedocs.io/en/v0.4.24/miscellaneous.html>
By the way, you should always use `web3.eth.getProof` to validate the value.
|
Think of Ethereum as a process running on your machine or remotely. Using `web3.eth.getStorageAt` you read data from the process memory. In the same way you can read the data of every program on your computer.
On the other hand, high-level programming languages like Java, C++, or Solidity frequently define access rules on variables and functions (private, protected, etc.). But these rules only hold in the context of program execution. For Solidity, that context is the execution of a transaction.
This means that a private field is private only to other contracts trying to read it, but it can still be read by external (and fairly low-level) APIs like `web3.eth.getStorageAt`.
|
68,885,669 |
I have an RGBA image from a canvas and I use a typed array to remove the alpha channel.
```
// data - arr from canvas.
// [1,2,3,255, 1,2,3,255, 1,2,3,255,]
// R G B A R G B A R G B A
const delta = 4;
const length = data.length;
const newLength = length - length / delta;
const rgbArr = new Uint8Array(newLength);
let j = 0;
for (let i = 0; i < data.length; i = i + delta) {
rgbArr[j] = data[i]; // R
rgbArr[j + 1] = data[i + 1]; // G
rgbArr[j + 2] = data[i + 2]; // B
j = j + 3;
}
// rgbArr [1,2,3, 1,2,3, 1,2,3]
```
I copy every 3 bytes into a new Uint8Array. Can I do it in a more optimized way, without copying byte by byte?
|
2021/08/22
|
['https://Stackoverflow.com/questions/68885669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2497351/']
|
Looks like your solution is pretty good. At least none of the alternatives I came up with so far comes anywhere close in performance. Run the snippet to see for yourself.
**Updated** with Justin's suggestion using `.filter` -- elegant but not faster.
```js
const data = new Uint8Array(1e8);
const delta = 4;
const length = data.length;
const newLength = length - length / delta;
const rgbArr = new Uint8Array(newLength);
let j = 0;
console.time('first');
for (let i = 0; i < data.length; i = i + delta) {
rgbArr[j] = data[i]; // R
rgbArr[j + 1] = data[i + 1]; // G
rgbArr[j + 2] = data[i + 2]; // B
j = j + 3;
}
console.timeEnd('first');
j = 0;
console.time('set');
const rgbArr2 = new Uint8Array(newLength);
for (let i = 0; i < data.length; i = i + delta) {
rgbArr2.set(data.slice(i, i + 3), j); // slice end is exclusive, so this copies R, G, B
j = j + 3;
}
console.timeEnd('set');
console.time('filter');
data.filter((el,i) => {
return i % 4 !== 4 - 1
})
console.timeEnd('filter');
j = 0;
console.time('copyWithin');
for (let i = 0; i < data.length; i = i + delta) {
data.copyWithin(j, i, i + 3); // end index is exclusive, so this copies R, G, B
j = j + 3;
}
console.timeEnd('copyWithin');
```
Results:
```
first: 102.900ms
set: 1185.700ms
filter: 2779.800ms
copyWithin: 415.100ms
```
|
filter would be good here
```js
let array = new Uint8Array([1,2,3,255,1,2,3,255,1,2,3,255,1,2,3,255])
let filtered = array.filter((el,i) => {
return i % 4 !== 4 - 1
})
console.log(filtered)
```
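As a quick sanity check, here is a sketch (not from either answer) showing that the filter approach yields exactly the same bytes as the manual copy loop:

```javascript
// Two RGBA pixels of sample data.
const data = new Uint8Array([1, 2, 3, 255, 4, 5, 6, 255]);

// Manual copy loop, as in the question.
const rgb = new Uint8Array(data.length - data.length / 4);
let j = 0;
for (let i = 0; i < data.length; i += 4) {
  rgb[j] = data[i];         // R
  rgb[j + 1] = data[i + 1]; // G
  rgb[j + 2] = data[i + 2]; // B
  j += 3;
}

// Filter approach: drop every fourth byte (the alpha channel).
const filtered = data.filter((el, i) => i % 4 !== 3);

console.log(rgb.join(","));      // 1,2,3,4,5,6
console.log(filtered.join(",")); // 1,2,3,4,5,6
```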
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
You can run the example by providing a path to the arm-linux-gnueabi shared libraries using the `-L` flag:
```
qemu-arm -L /usr/arm-linux-gnueabi/
```
Also make sure `LD_LIBRARY_PATH` is not set:
```
unset LD_LIBRARY_PATH
```
|
```
$ export QEMU_LD_PREFIX=/usr/arm-linux-gnueabi
```
This works for me.
It's basically the same thing as:
```
$ qemu-arm -L /usr/arm-linux-gnueabi/
```
You can add it to the ~/.bashrc file so you don't have to type it every time you open the terminal.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
You can run the example by providing a path to the arm-linux-gnueabi shared libraries using the `-L` flag:
```
qemu-arm -L /usr/arm-linux-gnueabi/
```
Also make sure `LD_LIBRARY_PATH` is not set:
```
unset LD_LIBRARY_PATH
```
|
If you want to run **ARM** code without Linux, then you need a different compiler (at least). `arm-linux-gnueabi-gcc` is a compiler for **Linux**; the compiler and `libc` are intimately linked. You will need a `newlib` compiler with a portability layer for *qemu* ([porting newlib](http://wiki.osdev.org/Porting_Newlib)).
See: [Balau](http://balau82.wordpress.com/2010/02/28/hello-world-for-bare-metal-arm-using-qemu/) and [Google newlib+qemu](https://www.google.ca/search?q=newlib+qemu). A `newlib` port is hosted on [Github](https://github.com/balau/arm-sandbox/tree/master/qemu-newlib) and seems to be the same as the *Balau* blog.
Typically a non-Linux gcc is called `arm-none-eabi-gcc`. The prefix *arm-none-eabi-* is recognized by some configure scripts.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
You can run the example by providing a path to the arm-linux-gnueabi shared libraries using the `-L` flag:
```
qemu-arm -L /usr/arm-linux-gnueabi/
```
Also make sure `LD_LIBRARY_PATH` is not set:
```
unset LD_LIBRARY_PATH
```
|
A variant that worked for me was to pass the loader directly and to specify the required library paths using the loader's `--library-path` parameter. For example:
```
$ TOOLCHAIN_ROOT=/usr/local/gcc-linaro-arm-linux-gnueabihf-4.7-2013.03-20130313_linux/arm-linux-gnueabihf
$ qemu-arm $TOOLCHAIN_ROOT/libc/lib/ld-linux-armhf.so.3 --library-path $TOOLCHAIN_ROOT/libc/lib/arm-linux-gnueabihf:/$TOOLCHAIN_ROOT/lib ./my_executable
```
Or equivalently export `LD_LIBRARY_PATH` instead of using `--library-path`.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
I also ran into this problem when running a C program with assembly code. My solution is to build the executable with the `-static` option, for instance:
```
arm-linux-gnueabi-gcc -static -g main.c square.s
```
Then
```
qemu-arm a.out
```
will not report the error saying "cannot find /lib/ld-linux.so.3".
The only drawback is that the executable can be quite large, but that's fine when you just want to test your code.
Of course, you can go with the method from Balau (see artless noise's answer). But if you don't want to wrestle with things like UART serial ports at this stage, when all you want is to run a simple test function, give this fix a try.
|
A variant that worked for me was to pass the loader directly and to specify the required library paths using the loader's `--library-path` parameter. For example:
```
$ TOOLCHAIN_ROOT=/usr/local/gcc-linaro-arm-linux-gnueabihf-4.7-2013.03-20130313_linux/arm-linux-gnueabihf
$ qemu-arm $TOOLCHAIN_ROOT/libc/lib/ld-linux-armhf.so.3 --library-path $TOOLCHAIN_ROOT/libc/lib/arm-linux-gnueabihf:/$TOOLCHAIN_ROOT/lib ./my_executable
```
Or equivalently export `LD_LIBRARY_PATH` instead of using `--library-path`.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
```
$ export QEMU_LD_PREFIX=/usr/arm-linux-gnueabi
```
This works for me.
It's basically the same thing as:
```
$ qemu-arm -L /usr/arm-linux-gnueabi/
```
You can add it to the ~/.bashrc file so you don't have to type it every time you open the terminal.
|
A variant that worked for me was to pass the loader directly and to specify the required library paths using the loader's `--library-path` parameter. For example:
```
$ TOOLCHAIN_ROOT=/usr/local/gcc-linaro-arm-linux-gnueabihf-4.7-2013.03-20130313_linux/arm-linux-gnueabihf
$ qemu-arm $TOOLCHAIN_ROOT/libc/lib/ld-linux-armhf.so.3 --library-path $TOOLCHAIN_ROOT/libc/lib/arm-linux-gnueabihf:/$TOOLCHAIN_ROOT/lib ./my_executable
```
Or equivalently export `LD_LIBRARY_PATH` instead of using `--library-path`.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
If you want to run **ARM** code without Linux, then you need a different compiler (at least). `arm-linux-gnueabi-gcc` is a compiler for **Linux**; the compiler and `libc` are intimately linked. You will need a `newlib` compiler with a portability layer for *qemu* ([porting newlib](http://wiki.osdev.org/Porting_Newlib)).
See: [Balau](http://balau82.wordpress.com/2010/02/28/hello-world-for-bare-metal-arm-using-qemu/) and [Google newlib+qemu](https://www.google.ca/search?q=newlib+qemu). A `newlib` port is hosted on [Github](https://github.com/balau/arm-sandbox/tree/master/qemu-newlib) and seems to be the same as the *Balau* blog.
Typically a non-Linux gcc is called `arm-none-eabi-gcc`. The prefix *arm-none-eabi-* is recognized by some configure scripts.
|
A variant that worked for me was to pass the loader directly and to specify the required library paths using the loader's `--library-path` parameter. For example:
```
$ TOOLCHAIN_ROOT=/usr/local/gcc-linaro-arm-linux-gnueabihf-4.7-2013.03-20130313_linux/arm-linux-gnueabihf
$ qemu-arm $TOOLCHAIN_ROOT/libc/lib/ld-linux-armhf.so.3 --library-path $TOOLCHAIN_ROOT/libc/lib/arm-linux-gnueabihf:/$TOOLCHAIN_ROOT/lib ./my_executable
```
Or equivalently export `LD_LIBRARY_PATH` instead of using `--library-path`.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
I also ran into this problem when running a C program with assembly code. My solution is to build the executable with the `-static` option, for instance:
```
arm-linux-gnueabi-gcc -static -g main.c square.s
```
Then
```
qemu-arm a.out
```
will not report the error saying "cannot find /lib/ld-linux.so.3".
The only drawback is that the executable can be quite large, but that's fine when you just want to test your code.
Of course, you can go with the method from Balau (see artless noise's answer). But if you don't want to wrestle with things like UART serial ports at this stage, when all you want is to run a simple test function, give this fix a try.
|
If you want to run **ARM** code without Linux, then you need a different compiler (at least). `arm-linux-gnueabi-gcc` is a compiler for **Linux**; the compiler and `libc` are intimately linked. You will need a `newlib` compiler with a portability layer for *qemu* ([porting newlib](http://wiki.osdev.org/Porting_Newlib)).
See: [Balau](http://balau82.wordpress.com/2010/02/28/hello-world-for-bare-metal-arm-using-qemu/) and [Google newlib+qemu](https://www.google.ca/search?q=newlib+qemu). A `newlib` port is hosted on [Github](https://github.com/balau/arm-sandbox/tree/master/qemu-newlib) and seems to be the same as the *Balau* blog.
Typically a non-Linux gcc is called `arm-none-eabi-gcc`. The prefix *arm-none-eabi-* is recognized by some configure scripts.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
You can run the example by providing a path to the arm-linux-gnueabi shared libraries using the `-L` flag:
```
qemu-arm -L /usr/arm-linux-gnueabi/
```
Also make sure `LD_LIBRARY_PATH` is not set:
```
unset LD_LIBRARY_PATH
```
|
I also ran into this problem when running a C program with assembly code. My solution is to build the executable with the `-static` option, for instance:
```
arm-linux-gnueabi-gcc -static -g main.c square.s
```
Then
```
qemu-arm a.out
```
will not report the error saying "cannot find /lib/ld-linux.so.3".
The only drawback is that the executable can be quite large, but that's fine when you just want to test your code.
Of course, you can go with the method from Balau (see artless noise's answer). But if you don't want to wrestle with things like UART serial ports at this stage, when all you want is to run a simple test function, give this fix a try.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
```
$ export QEMU_LD_PREFIX=/usr/arm-linux-gnueabi
```
This works for me.
It's basically the same thing as:
```
$ qemu-arm -L /usr/arm-linux-gnueabi/
```
You can add it to the ~/.bashrc file so you don't have to type it every time you open the terminal.
|
I solved the problem by copying the following libraries into /lib, but I believe there should be a much better solution than this nasty one!
```
sudo cp /usr/arm-linux-gnueabi/lib/ld-linux.so.3 /lib
sudo cp /usr/arm-linux-gnueabi/lib/libgcc_s.so.1 /lib
sudo cp /usr/arm-linux-gnueabi/lib/libc.so.6 /lib
```
Please let me know if there are better solutions, as I'd be interested to hear them.
|
16,158,995 |
Hi all, I've been up all night trying to get this to work. I'm trying to test the `$year` value against the date to make sure that it is not less than the current date.
Here is my code; at the moment it doesn't seem to work and I'm not sure what I'm doing wrong:
```
public function checkDateField($month, $day, $year)
{
if (!is_numeric($month) || !is_numeric($day) || !is_numeric($year) || !checkdate($month, $day, $year)< date("m-d-y")) {
$msg = '*Invalid date';
}
return $msg;
}
```
|
2013/04/23
|
['https://Stackoverflow.com/questions/16158995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228587/']
|
I solved the problem by copying the following libraries into /lib, but I believe there should be a much better solution than this nasty one!
```
sudo cp /usr/arm-linux-gnueabi/lib/ld-linux.so.3 /lib
sudo cp /usr/arm-linux-gnueabi/lib/libgcc_s.so.1 /lib
sudo cp /usr/arm-linux-gnueabi/lib/libc.so.6 /lib
```
Please let me know if there are better solutions, as I'd be interested to hear them.
|
A variant that worked for me was to pass the loader directly and to specify the required library paths using the loader's `--library-path` parameter. For example:
```
$ TOOLCHAIN_ROOT=/usr/local/gcc-linaro-arm-linux-gnueabihf-4.7-2013.03-20130313_linux/arm-linux-gnueabihf
$ qemu-arm $TOOLCHAIN_ROOT/libc/lib/ld-linux-armhf.so.3 --library-path $TOOLCHAIN_ROOT/libc/lib/arm-linux-gnueabihf:/$TOOLCHAIN_ROOT/lib ./my_executable
```
Or equivalently export `LD_LIBRARY_PATH` instead of using `--library-path`.
|
32,376,066 |
I have a `registerTempTable` call in Apache Spark using Zeppelin, as below:
```
val hvacText = sc.textFile("...")
case class Hvac(date: String, time: String, targettemp: Integer, actualtemp: Integer, buildingID: String)
val hvac = hvacText.map(s => s.split(",")).filter(s => s(0) != "Date").map(
s => Hvac(s(0),
s(1),
s(2).toInt,
s(3).toInt,
s(6))).toDF()
hvac.registerTempTable("hvac")
```
After I am done with my queries on this temp table, how do I remove it?
I checked all the docs and it seems I am getting nowhere.
Any guidance ?
|
2015/09/03
|
['https://Stackoverflow.com/questions/32376066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5296786/']
|
**Spark 2.x**
For temporary views you can use [`Catalog.dropTempView`](https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.catalog.Catalog@dropTempView%28viewName:String%29:Boolean):
```
spark.catalog.dropTempView("df")
```
For global views you can use [`Catalog.dropGlobalTempView`](https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.catalog.Catalog@dropGlobalTempView%28viewName:String%29:Boolean):
```
spark.catalog.dropGlobalTempView("df")
```
Both methods are safe to call if the view doesn't exist and, since Spark 2.1, return a boolean indicating whether the operation succeeded.
**Spark 1.x**
You can use [`SQLContext.dropTempTable`](https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.sql.SQLContext):
```
scala.util.Try(sqlContext.dropTempTable("df"))
```
It can still be used in Spark 2.0, but it delegates processing to `Catalog.dropTempView` and is safe to use if the table doesn't exist.
|
If you want to remove your temp table in Zeppelin, try this:
```
sqlc.dropTempTable("hvac")
```
or
```
%sql DROP VIEW hvac
```
You can find the information you need in the Spark API docs (<http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package>).
|
32,376,066 |
I have a `registerTempTable` call in Apache Spark using Zeppelin, as below:
```
val hvacText = sc.textFile("...")
case class Hvac(date: String, time: String, targettemp: Integer, actualtemp: Integer, buildingID: String)
val hvac = hvacText.map(s => s.split(",")).filter(s => s(0) != "Date").map(
s => Hvac(s(0),
s(1),
s(2).toInt,
s(3).toInt,
s(6))).toDF()
hvac.registerTempTable("hvac")
```
After I am done with my queries on this temp table, how do I remove it?
I checked all the docs and it seems I am getting nowhere.
Any guidance ?
|
2015/09/03
|
['https://Stackoverflow.com/questions/32376066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5296786/']
|
**Spark 2.x**
For temporary views you can use [`Catalog.dropTempView`](https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.catalog.Catalog@dropTempView%28viewName:String%29:Boolean):
```
spark.catalog.dropTempView("df")
```
For global views you can use [`Catalog.dropGlobalTempView`](https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.catalog.Catalog@dropGlobalTempView%28viewName:String%29:Boolean):
```
spark.catalog.dropGlobalTempView("df")
```
Both methods are safe to call if the view doesn't exist and, since Spark 2.1, return a boolean indicating whether the operation succeeded.
**Spark 1.x**
You can use [`SQLContext.dropTempTable`](https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.sql.SQLContext):
```
scala.util.Try(sqlContext.dropTempTable("df"))
```
It can still be used in Spark 2.0, but it delegates processing to `Catalog.dropTempView` and is safe to use if the table doesn't exist.
|
In newer versions of Spark (2.0 and later), one should use `createOrReplaceTempView` in place of `registerTempTable` (deprecated).
The corresponding method to deallocate it is `dropTempView`:
```
spark.catalog.dropTempView("temp_view_name") //drops the table
```
|
32,376,066 |
I have a `registerTempTable` call in Apache Spark using Zeppelin, as below:
```
val hvacText = sc.textFile("...")
case class Hvac(date: String, time: String, targettemp: Integer, actualtemp: Integer, buildingID: String)
val hvac = hvacText.map(s => s.split(",")).filter(s => s(0) != "Date").map(
s => Hvac(s(0),
s(1),
s(2).toInt,
s(3).toInt,
s(6))).toDF()
hvac.registerTempTable("hvac")
```
After I am done with my queries on this temp table, how do I remove it?
I checked all the docs and it seems I am getting nowhere.
Any guidance ?
|
2015/09/03
|
['https://Stackoverflow.com/questions/32376066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5296786/']
|
**Spark 2.x**
For temporary views you can use [`Catalog.dropTempView`](https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.catalog.Catalog@dropTempView%28viewName:String%29:Boolean):
```
spark.catalog.dropTempView("df")
```
For global views you can use [`Catalog.dropGlobalTempView`](https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.catalog.Catalog@dropGlobalTempView%28viewName:String%29:Boolean):
```
spark.catalog.dropGlobalTempView("df")
```
Both methods are safe to call if the view doesn't exist and, since Spark 2.1, return a boolean indicating whether the operation succeeded.
**Spark 1.x**
You can use [`SQLContext.dropTempTable`](https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.sql.SQLContext):
```
scala.util.Try(sqlContext.dropTempTable("df"))
```
It can still be used in Spark 2.0, but it delegates processing to `Catalog.dropTempView` and is safe to use if the table doesn't exist.
|
You can use a SQL `DROP TABLE`/`DROP VIEW` statement to remove it, like below:
```
spark.sql("drop view hvac");
```
|
32,376,066 |
I have a `registerTempTable` call in Apache Spark using Zeppelin, as below:
```
val hvacText = sc.textFile("...")
case class Hvac(date: String, time: String, targettemp: Integer, actualtemp: Integer, buildingID: String)
val hvac = hvacText.map(s => s.split(",")).filter(s => s(0) != "Date").map(
s => Hvac(s(0),
s(1),
s(2).toInt,
s(3).toInt,
s(6))).toDF()
hvac.registerTempTable("hvac")
```
After I am done with my queries on this temp table, how do I remove it?
I checked all the docs and it seems I am getting nowhere.
Any guidance ?
|
2015/09/03
|
['https://Stackoverflow.com/questions/32376066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5296786/']
|
If you want to remove your temp table in Zeppelin, try this:
```
sqlc.dropTempTable("hvac")
```
or
```
%sql DROP VIEW hvac
```
You can find the information you need in the Spark API docs (<http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package>).
|
You can use a SQL `DROP TABLE`/`DROP VIEW` statement to remove it, like below:
```
spark.sql("drop view hvac");
```
|
32,376,066 |
I have a `registerTempTable` call in Apache Spark using Zeppelin, as below:
```
val hvacText = sc.textFile("...")
case class Hvac(date: String, time: String, targettemp: Integer, actualtemp: Integer, buildingID: String)
val hvac = hvacText.map(s => s.split(",")).filter(s => s(0) != "Date").map(
s => Hvac(s(0),
s(1),
s(2).toInt,
s(3).toInt,
s(6))).toDF()
hvac.registerTempTable("hvac")
```
After I am done with my queries on this temp table, how do I remove it?
I checked all the docs and it seems I am getting nowhere.
Any guidance ?
|
2015/09/03
|
['https://Stackoverflow.com/questions/32376066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5296786/']
|
In newer versions of Spark (2.0 and later), one should use `createOrReplaceTempView` in place of `registerTempTable` (deprecated).
The corresponding method to deallocate it is `dropTempView`:
```
spark.catalog.dropTempView("temp_view_name") //drops the table
```
|
You can use a SQL `DROP TABLE`/`DROP VIEW` statement to remove it, like below:
```
spark.sql("drop view hvac");
```
|
20,344,535 |
Here is the sample code. Since there is no source to backtrack through, the task is to identify the pattern given only this code.
```
ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");
while (rs.next()) {
int x = rs.getInt("a");
String s = rs.getString("b");
float f = rs.getFloat("c");
}
```
I think this is some pattern; it has a name and more applications that I am unaware of. What is the pattern's name?
|
2013/12/03
|
['https://Stackoverflow.com/questions/20344535', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2458372/']
|
A ResultSet is an instance of the [Iterator](http://en.wikipedia.org/wiki/Iterator_pattern) pattern, though not of the `java.util.Iterator` interface.
[Iterator as ArrayList http://ts1.mm.bing.net/th?id=H.4589598971789992&pid=15.1](http://ts1.mm.bing.net/th?id=H.4589598971789992&pid=15.1)
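A minimal JavaScript sketch (hypothetical names, not from the answer) of why a `ResultSet`-style cursor is an instance of the Iterator pattern: `next()` advances the cursor and reports whether a row is available, while the getters read the current row.

```javascript
// Hypothetical ResultSet-style cursor with the Iterator pattern's shape.
class RowCursor {
  constructor(rows) {
    this.rows = rows;
    this.pos = -1; // starts before the first row, like a JDBC ResultSet
  }
  next() {
    return ++this.pos < this.rows.length; // advance, report success
  }
  getInt(column) {
    return this.rows[this.pos][column]; // read a field of the current row
  }
}

const rs = new RowCursor([{ a: 1 }, { a: 2 }, { a: 3 }]);
let sum = 0;
while (rs.next()) sum += rs.getInt("a");
console.log(sum); // 6
```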
|
The pattern name is the "Iterator Pattern".
|
5,384,157 |
I have zero experience with OpenGL and a small amount of experience with Objective-C, but I'm fairly decent with C++. What resources should I be looking at to start learning how to use OpenGL within Objective-C?
I read somewhere that NSOpenGLView is a good place to start.
|
2011/03/21
|
['https://Stackoverflow.com/questions/5384157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657850/']
|
Given your affinity for C++, [this](http://iphone-3d-programming.labs.oreilly.com/) will probably be the most appealing resource for you. It covers OpenGL ES 1.1 and 2.0, so you'll get all you need to know.
Personally, I really enjoy [this set of tutorials](http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html), which avoid C++ and are quite well written. It only covers 1.1, but that may be an easier place to start anyway.
|
[NeHe's tutorials](http://nehe.gamedev.net/) include full Cocoa versions of all the early tutorials.
|
5,384,157 |
I have zero experience with OpenGL and a small amount of experience with Objective-C, but I'm fairly decent with C++. What resources should I be looking at to start learning how to use OpenGL within Objective-C?
I read somewhere that NSOpenGLView is a good place to start.
|
2011/03/21
|
['https://Stackoverflow.com/questions/5384157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657850/']
|
Honestly, you're probably not going to need to know much Objective-C for dealing with OpenGL, just C. OpenGL is C-based, so you don't need to learn anything new, language-wise, to deal with it. Objective-C knowledge is only really necessary when you plan on using Cocoa to build up your interface. Even then, the language is not hard to pick up if you're coming from a solid C / C++ background.
I highly recommend the book [iPhone 3D Programming](http://oreilly.com/catalog/9780596804831?cmp=il-orm-ofps-iphone3d) that Matt's first resource is based on. Even though you're asking about desktop OpenGL, and this book covers OpenGL ES, much is shared between the two APIs. The book does a great job of starting with simple concepts and fundamentals, and then building to more advanced topics like environment mapping and custom shaders. The author uses C++ as his base language for the book, so you should be familiar with even the most complex code he shows. OpenGL ES is effectively a subset of OpenGL, so almost everything translates across to the desktop.
Within a desktop Cocoa application, you have two ways of presenting OpenGL content: NSOpenGLView and CAOpenGLLayer. The former is an older NSView subclass that you can customize to place your rendering code within. The latter is a Core Animation CALayer that also acts as an OpenGL rendering target, but it gives you a little more flexibility in how you can overlay other items on top of the OpenGL content. Getting the display set up for your OpenGL rendering will not take a lot of effort, with most of your time being spent on your OpenGL code.
You might want to pick apart some of Apple's sample applications, such as [GLSL Showpiece](http://developer.apple.com/library/mac/#samplecode/GLSLShowpiece/Introduction/Intro.html#//apple_ref/doc/uid/DTS10003935), [Cocoa OpenGL](http://developer.apple.com/library/mac/#samplecode/CocoaGL/Introduction/Intro.html#//apple_ref/doc/uid/DTS10004501), [GLEssentials](http://developer.apple.com/library/mac/#samplecode/GLEssentials/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010104), and [CubePuzzle](http://developer.apple.com/library/mac/#samplecode/CubePuzzle/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000521), among the other OpenGL examples they have in the developer center.
|
Given your affinity for C++, [this](http://iphone-3d-programming.labs.oreilly.com/) will probably be the most appealing resource for you. It covers OpenGL ES 1.1 and 2.0, so you'll get all you need to know.
Personally, I really enjoy [this set of tutorials](http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html), which avoid C++ and are quite well written. It only covers 1.1, but that may be an easier place to start anyway.
|
5,384,157 |
I have zero experience with OpenGL and a small amount of experience with Objective-C, but I'm fairly decent with C++. What resources should I be looking at to start learning how to use OpenGL within Objective-C?
I read somewhere that NSOpenGLView is a good place to start.
|
2011/03/21
|
['https://Stackoverflow.com/questions/5384157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657850/']
|
Honestly, you're probably not going to need to know much Objective-C for dealing with OpenGL, just C. OpenGL is C-based, so you don't need to learn anything new, language-wise, to deal with it. Objective-C knowledge is only really necessary when you plan on using Cocoa to build up your interface. Even then, the language is not hard to pick up if you're coming from a solid C / C++ background.
I highly recommend the book [iPhone 3D Programming](http://oreilly.com/catalog/9780596804831?cmp=il-orm-ofps-iphone3d) that Matt's first resource is based on. Even though you're asking about desktop OpenGL, and this book covers OpenGL ES, much is shared between the two APIs. The book does a great job of starting with simple concepts and fundamentals, and then building to more advanced topics like environment mapping and custom shaders. The author uses C++ as his base language for the book, so you should be familiar with even the most complex code he shows. OpenGL ES is effectively a subset of OpenGL, so almost everything translates across to the desktop.
Within a desktop Cocoa application, you have two ways of presenting OpenGL content: NSOpenGLView and CAOpenGLLayer. The former is an older NSView subclass that you can customize to place your rendering code within. The latter is a Core Animation CALayer that also acts as an OpenGL rendering target, but it gives you a little more flexibility in how you can overlay other items on top of the OpenGL content. Getting the display set up for your OpenGL rendering will not take a lot of effort, with most of your time being spent on your OpenGL code.
You might want to pick apart some of Apple's sample applications, such as [GLSL Showpiece](http://developer.apple.com/library/mac/#samplecode/GLSLShowpiece/Introduction/Intro.html#//apple_ref/doc/uid/DTS10003935), [Cocoa OpenGL](http://developer.apple.com/library/mac/#samplecode/CocoaGL/Introduction/Intro.html#//apple_ref/doc/uid/DTS10004501), [GLEssentials](http://developer.apple.com/library/mac/#samplecode/GLEssentials/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010104), and [CubePuzzle](http://developer.apple.com/library/mac/#samplecode/CubePuzzle/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000521), among the other OpenGL examples they have in the developer center.
|
[NeHe's tutorials](http://nehe.gamedev.net/) include full Cocoa versions of all the early tutorials.
|
59,770,828 |
How can I create a script of inserts for my Sybase to Oracle migration? The Migration Wizard only gives me the option to migrate procedures, triggers, and such; there is no selection for just tables. When I try to migrate tables offline and move data, the datamove/ folder is empty. I also want to migrate only specific tables (the ones with long identifiers), because I was able to migrate the rest with Copy to Oracle.
I must also note that I do not want to upgrade to a new version of Oracle. I am currently on ~12.1, so I need to limit the identifiers.
How can I get the offline scripts for table inserts?
|
2020/01/16
|
['https://Stackoverflow.com/questions/59770828', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10847608/']
|
You can find a solution for your problem in the following CodeSandbox:
<https://codesandbox.io/s/reactjs-accordion-automatic-close-mechanism-yejc0>
Change the prop names to fit your code base, but the logic is solid.
|
You could do something like this, using the state hook in the App component
```
export default function App() {
const items = [
{ id: 1, title: 'First Accordion', content: 'Hello' },
{ id: 2, title: 'Click me', content: 'Hello 2' },
{ id: 3, title: 'Third Accordion Accordion', content: 'Hello 3' },
]
const [selectedItem, setSelectedItem] = useState(1)
const handleClick = id => {
setSelectedItem(id)
}
return (
<div className="App">
{items.map(x => {
return (
<Accordion
key={x.id}
id={x.id}
title={x.title}
open={x.id === selectedItem}
onClick={handleClick}
>
<p>{x.title}</p>
</Accordion>
)
})}
</div>
);
}
```
Then your accordion component is a bit simpler
```
class Accordion extends React.Component {
accToggle() {
this.props.onClick(this.props.id);
}
sectionClasses() {
let classes = "accordion";
classes += this.props.open ? " sec-on" : "";
classes += " sec-underway"; // leading space so classes stay separated before trim()
return classes.trim();
}
render() {
return (
<section className={this.sectionClasses()} id={this.props.id}>
<div className="acc-title" onClick={this.accToggle.bind(this)}>
<h3 className="acc-text">{this.props.title}</h3>
<div className="acc-nav">
<span className="acc-toggle" />
</div>
</div>
<div className="acc-content">{this.props.children}</div>
</section>
);
}
}
Accordion.defaultProps = {
open: false
};
Accordion.propTypes = {
id: PropTypes.number.isRequired,
children: PropTypes.any,
onFocus: PropTypes.func,
progress: PropTypes.oneOfType([
PropTypes.number,
PropTypes.string,
PropTypes.bool
]),
title: PropTypes.string,
open: PropTypes.bool
};
export default Accordion;
```
The accordion calls a function on the app component, which updates the state, and the open/closed display logic is passed down via the props.
|
353,015 |
Specifically, when a post is flagged as **in need of moderator intervention**, what does it mean?
A while ago, this happened:
---
[](https://i.stack.imgur.com/lPfVU.png)
---
Little backstory:
I asked a question which was *related* to a question supplied in the comments but which did not answer me. Following that, I received a private prompt to review whether that post answered my question or not. Obviously, I clicked "no" and pressed submit. I also notified the user who marked it as a duplicate and edited the question to point more clearly towards my intent as the author. After a few hours, I saw my post closed as a [duplicate], and **the comments on the post were deleted**. Upon flagging the post for moderator intervention, the flag was declined with the above message. No *custom comments* were added either.
This of course looks like a custom message, but was my flag a standard one?
---
I have read the thread
[What are acceptable reasons for flagging as "Requires Moderator attention"?](https://meta.stackexchange.com/questions/10848/what-are-acceptable-reasons-for-flagging-as-requires-moderator-attention)
but unfortunately it's a continuation of:
[Is "Please Vote to Close" a valid use for Flag - Requires Moderation Attention](https://meta.stackexchange.com/questions/10781/is-please-vote-to-close-a-valid-use-for-flag-requires-moderation-attention)
and its answers genuinely do not address what I have asked.
|
2020/08/14
|
['https://meta.stackexchange.com/questions/353015', 'https://meta.stackexchange.com', 'https://meta.stackexchange.com/users/826559/']
|
I've looked through a few of your flags and I think you need to rethink how flags work on our sites and what moderators are and do.
Flags - particularly moderator attention flags - are designed to draw attention of the moderators when there is something that needs special treatment. These are generally rare flags and users should attempt to use the standard flags first as some of these are given to community members to review rather than the very few moderators on the site.
A question getting closed or marked as a duplicate is not generally something a moderator should be asked to look into. If you want to understand why a post was closed or get a post reopened, flag to reopen the post or go to the per-site meta for the site you're using at the time - so if you're on [Physics](https://physics.stackexchange.com/), go to [Physics Meta](https://physics.meta.stackexchange.com/). On some sites, you may also be able to talk to other users about it in chat.
If you see comments deleted - firstly, question whether they were necessary. Comments aren't intended to be permanent, so it's quite common for them to be removed by moderators... and users can themselves remove their own comments, so it's possible that mods had nothing to do with it at all. Again, if you have questions about comments being removed, go to meta, don't use flags.
Additionally, our sites are community edited, so edits are normal. If you find an edit changes your question drastically in a way you didn't intend, you're able to roll it back but don't get into a "rollback war" with someone who is repeatedly editing your post. Try to understand why the edits were made and whether they are making your post easier to understand or adding missing details. If you can, engage them in the comments to understand the edits they're making or take it to Meta to discuss if it's a bigger issue.
All in all, use chat and meta more, use flags less. Moderators are not generally going to unilaterally reopen your questions when they get closed - that's for the community to decide, so start communicating with the community, not the moderators.
|
>
> Also notified the user who marked it as duplicate and edited the question to make it look pointed towards the author's intent(me).
>
>
>
That's unfortunate, but the correct procedure here is to [edit your question to clarify why it's not a duplicate](https://meta.stackexchange.com/q/194476/295232). That's something you **need** to do yourself, not the task of ♦ moderators. Why? Because if everybody did that, the workload for them would be unbearable. Therefore, a moderator flag is not warranted for these cases. Perhaps the response you got isn't entirely appropriate, but since your flag message is rather vague, chances are the moderator didn't understand what you wanted them to do.
>
> Comments on the post deleted?
>
>
>
The "Does this answer your question" comment is automatically deleted when the question is closed as a duplicate. If you clarified why you think it's not a duplicate in the comments, please edit that information into the question itself; comments are [temporary by nature](/help/privileges/comment) and are sometimes cleaned up.
|
60,485,133 |
I got to find that we could read the contents of a file into a std::vector like this:
```
ifstream fin(..., ios::in);
std::vector<char> buf(
std::istreambuf_iterator<char>(fin),
std::istreambuf_iterator<char>());
```
Will this method cause many memory reallocations, as when I call `buf.push_back()` many times? What is the fastest or best method to read a file into a `std::vector`?
Edit:
By the way, I find there is a method to read a file into a stringstream:
```
stringstream ss;
ifstream fin(..., ios::in);
fin >> ss.rdbuf();
```
Will this method have the same memory-reallocation problem?
|
2020/03/02
|
['https://Stackoverflow.com/questions/60485133', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10132474/']
|
[`std::istreambuf_iterator`](https://en.cppreference.com/w/cpp/iterator/istreambuf_iterator) is an input iterator, so the distance between begin and end is not known in advance. There will be several reallocations during the constructor, unless the file is very small. For a random access iterator the distance would be known and in such case the constructor could avoid the extra memory allocations.
If you roughly know the size of the file, you can use [`reserve()`](https://en.cppreference.com/w/cpp/container/vector/reserve) before reading:
```
std::vector<char> buf;
buf.reserve(file_size);
buf.insert(buf.end(), std::istreambuf_iterator<char>(fin),
std::istreambuf_iterator<char>());
```
|
A vector keeps its data in contiguous memory.
When a new element is added and there is no free memory after the last element, the vector needs to move all the data to a place in memory with enough room for both the old and the new data.
The best solution is to give the vector a buffer up front with `vector::reserve(size)`.
Your code could be:
```
std::vector<char> buf;
buf.reserve(10000);
buf.assign(std::istreambuf_iterator<char>(fin),
std::istreambuf_iterator<char>());
buf.shrink_to_fit(); //free the unused memory
```
|
66,880,148 |
I have a custom function and I would like to evaluate its average performance. To do this, I would like to execute my function in a loop a certain number of times. I want to do this because I see that the runtime is very unstable.
A typical execution measuring the execution time would look something like this
```
@time my_function()
```
or
```
@time begin
my_function()
end
```
In this particular case, I can only visualize the execution time, but I don't know how to record it so that I can store the time of each iteration in a vector. For example, I would like something like this
```
vector_time = 1:100
for i in 1:100
@time my_function()
vector_time[i] = get_time_i # register the time of the i-th iteration
end
```
|
2021/03/31
|
['https://Stackoverflow.com/questions/66880148', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12005038/']
|
Try [`@elapsed`](https://docs.julialang.org/en/v1/base/base/#Base.@elapsed) instead to get the execution time in seconds.
|
If you are timing multiple sections of your code at once, you can also use [TimerOutputs.jl](https://github.com/KristofferC/TimerOutputs.jl), which I've found to be very convenient. It automatically computes average runtimes and % total runtime.
|
593,683 |
Sometimes it's quite confusing to determine how to answer a probability question. Confusion always arises over whether I should multiply, add, or condition the probabilities. For example, consider the following:
>
> Consider influenza epidemics for two parent heterosexual families.
> Suppose that the probability is 15% that at least one of the parents
> has contracted the disease. The probability that the father has
> contracted influenza is 10% while that the mother contracted the
> disease is 9%. What is the probability that both contracted influenza
> expressed as a whole number percentage?
>
>
>
Let P(F) = Probability that father catches it; P(M) for mother.
I thought that P(both catch it) = P(F)P(M), but the answer is P(at least 1 catch it) = P(F)+P(M)-P(F AND M), solving for P(F AND M).
My first question: I find it particularly difficult to decide between the addition and the multiplication rule when it comes to probabilities of independent events.
My second question: if I'm to use P(at least 1 catch it) = P(F)+P(M)-P(F AND M), I would think I could also write something like P(at least 1 catch it) = P(F)P(NOT M)+P(M)P(NOT F)+P(F AND M). But it seems the P(F AND M) terms from the two cases are not equivalent? Aren't these two expressions representing the same thing?
My third question: even when I calculate P(at least 1 catch it) = 1-P(both not catching it) = 1-P(NOT F)\*P(NOT M), P(at least 1 catch it) does not equal the .15 given in the question. What's wrong with my calculation?
Are there any rules in governing which approach to use when solving a probability problem?
|
2022/10/27
|
['https://stats.stackexchange.com/questions/593683', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/331633/']
|
Let's follow up on GlenB's advice and make those [Venn diagrams](https://en.m.wikipedia.org/wiki/Venn_diagram). We do this below using the stereotypical colours: red/pink for 'mother sick' and blue for 'father sick'.
[](https://i.stack.imgur.com/2tPUI.jpg)
With the two variables mother and father you can create 4 different disjoint situations.
```
father sick and not mom sick
not father sick and mom sick
father sick and mom sick
not father sick and not mom sick
```
It is with those 4 situations that you can perform additive computations.
Intuitively, you want to figure out how much the two situations 'mom sick' and 'father sick' overlap (those two need not be disjoint)
[](https://i.stack.imgur.com/xr1J1.jpg)
Your formula
>
> but the answer is P(at least 1 catch it)= P(F)+P(M)-P(F AND M) and solve for P(F AND M)
>
>
>
Stems from the following algebra
[](https://i.stack.imgur.com/j9e5t.jpg)
You can compare it to a situation with 4 unknowns (the areas/probabilities of the 4 disjoint pieces) where you try to figure out the values by means of 4 equations. You know
```
mom sick
0.09 = P(mom sick & not dad sick) + P(mom sick & dad sick)
dad sick
0.10 = P(mom sick & dad sick) + P(not mom sick & dad sick)
one or more sick
0.15 = P(mom sick & not dad sick) + P(not mom sick & dad sick) + P(mom sick & dad sick)
total probability must be one
1.00 = P(mom sick & not dad sick) + P(not mom sick & dad sick) + P(mom sick & dad sick) + P(not mom sick & not dad sick)
```
One final figure to explain the product and sum rule:
[](https://i.stack.imgur.com/zkt8y.jpg)
* When events are disjoint then you can use summation $$P(A \text{ or } B) = P(A) + P(B)$$ note that 'father sick' and 'mom sick' do not need to be disjoint events. You still get a sum of those events in your solution, but that is due to algebra where we combine multiple equations.
* When events are independent then you can use the product $$P(A \text{ and } B) = P(A) \cdot P(B)$$ The independence means that the ratios of the areas/probabilities are unaffected by the other variable. In the image you see the ratios of 'mom sick' for the different states of 'dad sick': whether or not dad is sick, the ratio remains the same.
|
>
> My first question is that: I find it particularly difficult to differentiate between addition or multiplication rule when it comes to probabilities from independent events.
>
>
>
That's not a question (you don't ask anything), but the answer to what I assume is your implied question is simple: there *isn't an addition rule* for independent events.
The "addition rule" $P(A \text{ or } B) = P(A)+P(B)$ is for *mutually exclusive events*.
Draw a Venn diagram, from which it's obvious why there's another term there for non-mutually exclusive events (representing the overlap which gets counted twice, once in A and once in B, whereupon you must then subtract one of the overlaps back off again).
>
> My third question, even when I calculate P(at least 1 catch it) = 1-P(both not catching it) = 1-P(NOT F)\*P(NOT M), P(at least 1 catch it) does not equal to .15 given in the question. What's wrong with my calculation?
>
>
>
Note that the multiplication rule requires independence.
Did you make sure the events whose probability you multiplied were independent?
---
Rules for union ("OR") and intersection ("AND") are:
(i) $P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B)$
(ii) $P(A \text{ and } B) = P(A)\times P(B|A)$ $\:$ ([General product rule](https://en.wikipedia.org/wiki/Chain_rule_%28probability%29#Two_events))
*If* you have mutually exclusive events, the third term on the RHS in (i) is $0$, whence "addition rule for mutually exclusive events".
*If* you have independent events, the second term on the RHS in (ii) is equal to $P(B)$, whence "multiplication rule for independent events".
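To make the arithmetic concrete, here is a quick numeric check of both rules using the numbers from the question (a plain Python sketch that simply restates the formulas above):

```python
# Given probabilities from the influenza question.
p_f, p_m, p_either = 0.10, 0.09, 0.15  # father, mother, at least one

# Rule (i), inclusion-exclusion, solved for the intersection:
p_both = p_f + p_m - p_either
print(round(p_both, 2))                      # 0.04

# If F and M were independent, the intersection would instead be:
print(round(p_f * p_m, 3))                   # 0.009, not 0.04, so F and M are dependent

# Which is also why 1 - P(not F)*P(not M) fails to reproduce 0.15:
print(round(1 - (1 - p_f) * (1 - p_m), 3))   # 0.181, not 0.15
```

The last two lines answer the asker's third question directly: the multiplication rule silently assumed independence, and these events are not independent.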
|
32,126,987 |
Where can I find tutorials about advanced borders and box-shadows in CSS?
I discovered the Shapes of CSS page but can't explain this:
```css
#space-invader{
box-shadow:
0 0 0 1em red,
0 1em 0 1em red,
-2.5em 1.5em 0 .5em red,
2.5em 1.5em 0 .5em red,
-3em -3em 0 0 red,
3em -3em 0 0 red,
-2em -2em 0 0 red,
2em -2em 0 0 red,
-3em -1em 0 0 red,
-2em -1em 0 0 red,
2em -1em 0 0 red,
3em -1em 0 0 red,
-4em 0 0 0 red,
-3em 0 0 0 red,
3em 0 0 0 red,
4em 0 0 0 red,
-5em 1em 0 0 red,
-4em 1em 0 0 red,
4em 1em 0 0 red,
5em 1em 0 0 red,
-5em 2em 0 0 red,
5em 2em 0 0 red,
-5em 3em 0 0 red,
-3em 3em 0 0 red,
3em 3em 0 0 red,
5em 3em 0 0 red,
-2em 4em 0 0 red,
-1em 4em 0 0 red,
1em 4em 0 0 red,
2em 4em 0 0 red;
background: red;
width: 1em;
height: 1em;
overflow: hidden;
margin: 50px 0 70px 65px;
}
```
```html
<div id="space-invader"></div>
```
[Link](https://css-tricks.com/examples/ShapesOfCSS/)
Can anyone explain to me how it works?
Does this work in all browsers?
Thanks.
|
2015/08/20
|
['https://Stackoverflow.com/questions/32126987', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5230154/']
|
You could use a mutex (one per car).
Lock: before changing location of the associated car
Unlock: after changing location of the associated car
Lock: before getting location of the associated car
Unlock: after done doing work that relies on that location being up to date
|
There are several ways to do this. Which way you choose depends a lot on the number of cars, the frequency of updates and position requests, the expected response time, and how accurate (up to date) you want the position reports to be.
The easiest way to handle this is with a simple mutex (lock) that allows only one thread at a time to access the data structure. Assuming you're using a dictionary or hash map, your code would look something like this:
```
Map Cars = new Map(...)
Mutex CarsMutex = new Mutex(...)
Location GetLocation(carKey)
{
acquire mutex
result = Cars[carKey].Location
release mutex
return result
}
```
You'd do that for Add, Remove, Update, etc. Any method that reads or updates the data structure would require that you acquire the mutex.
If the number of queries far outweighs the number of updates, then you can do better with a reader/writer lock instead of a mutex. With an RW lock, you can have an unlimited number of readers, OR you can have a single writer. With that, querying the data would be:
```
acquire reader lock
result = Cars[carKey].Location
release reader lock
return result
```
And Add, Update, and Remove would be:
```
acquire writer lock
do update
release writer lock
```
Many runtime libraries have a concurrent dictionary data structure already built in. .NET, for example, has `ConcurrentDictionary`. With those, you don't have to worry about explicitly synchronizing access with a Mutex or RW lock; the data structure handles synchronization for you, either with a technique similar to that shown above, or by implementing lock-free algorithms.
As mentioned in comments, a relational database can handle this type of thing quite easily and can scale to a very large number of requests. Modern relational databases, properly constructed and with sufficient hardware, are surprisingly fast and can handle huge amounts of data with very high throughput.
There are other, more involved, methods that can increase throughput in some situations depending on what you're trying to optimize. For example, if you're willing to have some latency in reported position, then you could have position requests served from a list that's updated once per minute (or once every five minutes). So position requests are fulfilled immediately with no lock required from a static copy of the list that's updated once per minute. Updates are queued and once per minute a new list is created by applying the updates to the old list, and the new list is made available for requests.
There are *many* different ways to solve your problem.
|
32,126,987 |
Where can I find tutorials about advanced borders and box-shadows in CSS?
I discovered the Shapes of CSS page but can't explain this:
```css
#space-invader{
box-shadow:
0 0 0 1em red,
0 1em 0 1em red,
-2.5em 1.5em 0 .5em red,
2.5em 1.5em 0 .5em red,
-3em -3em 0 0 red,
3em -3em 0 0 red,
-2em -2em 0 0 red,
2em -2em 0 0 red,
-3em -1em 0 0 red,
-2em -1em 0 0 red,
2em -1em 0 0 red,
3em -1em 0 0 red,
-4em 0 0 0 red,
-3em 0 0 0 red,
3em 0 0 0 red,
4em 0 0 0 red,
-5em 1em 0 0 red,
-4em 1em 0 0 red,
4em 1em 0 0 red,
5em 1em 0 0 red,
-5em 2em 0 0 red,
5em 2em 0 0 red,
-5em 3em 0 0 red,
-3em 3em 0 0 red,
3em 3em 0 0 red,
5em 3em 0 0 red,
-2em 4em 0 0 red,
-1em 4em 0 0 red,
1em 4em 0 0 red,
2em 4em 0 0 red;
background: red;
width: 1em;
height: 1em;
overflow: hidden;
margin: 50px 0 70px 65px;
}
```
```html
<div id="space-invader"></div>
```
[Link](https://css-tricks.com/examples/ShapesOfCSS/)
Can anyone explain to me how it works?
Does this work in all browsers?
Thanks.
|
2015/08/20
|
['https://Stackoverflow.com/questions/32126987', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5230154/']
|
I'd answer with:
Try to make threading an external concept to your system, while making the system as modular and encapsulated as possible. That will allow you to add concurrency at a later phase at low cost, and if the solution happens to work nicely in a single thread (say, by making it event-loop-based), no time will have been wasted.
|
There are several ways to do this. Which way you choose depends a lot on the number of cars, the frequency of updates and position requests, the expected response time, and how accurate (up to date) you want the position reports to be.
The easiest way to handle this is with a simple mutex (lock) that allows only one thread at a time to access the data structure. Assuming you're using a dictionary or hash map, your code would look something like this:
```
Map Cars = new Map(...)
Mutex CarsMutex = new Mutex(...)
Location GetLocation(carKey)
{
acquire mutex
result = Cars[carKey].Location
release mutex
return result
}
```
You'd do that for Add, Remove, Update, etc. Any method that reads or updates the data structure would require that you acquire the mutex.
If the number of queries far outweighs the number of updates, then you can do better with a reader/writer lock instead of a mutex. With an RW lock, you can have an unlimited number of readers, OR you can have a single writer. With that, querying the data would be:
```
acquire reader lock
result = Cars[carKey].Location
release reader lock
return result
```
And Add, Update, and Remove would be:
```
acquire writer lock
do update
release writer lock
```
Many runtime libraries have a concurrent dictionary data structure already built in. .NET, for example, has `ConcurrentDictionary`. With those, you don't have to worry about explicitly synchronizing access with a Mutex or RW lock; the data structure handles synchronization for you, either with a technique similar to that shown above, or by implementing lock-free algorithms.
As mentioned in comments, a relational database can handle this type of thing quite easily and can scale to a very large number of requests. Modern relational databases, properly constructed and with sufficient hardware, are surprisingly fast and can handle huge amounts of data with very high throughput.
There are other, more involved, methods that can increase throughput in some situations depending on what you're trying to optimize. For example, if you're willing to have some latency in reported position, then you could have position requests served from a list that's updated once per minute (or once every five minutes). So position requests are fulfilled immediately with no lock required from a static copy of the list that's updated once per minute. Updates are queued and once per minute a new list is created by applying the updates to the old list, and the new list is made available for requests.
There are *many* different ways to solve your problem.
|
26,499,775 |
I'm trying to use GMP with C++11, but apparently it's not allowed to use mpz\_class in constexpr functions because mpz\_class is not a literal type.
```
#include <iostream>
#include <gmpxx.h>
using namespace std;
constexpr mpz_class factorial(mpz_class n)
{
if (n == 0) return 1;
else return n * factorial(n - 1);
}
int main()
{
cout << factorial(20);
}
```
Is it possible?
|
2014/10/22
|
['https://Stackoverflow.com/questions/26499775', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4168350/']
|
Instagram says:
>
> Note that we do not include an expiry time. Our access\_tokens have no explicit expiry, though your app should handle the case that either the user revokes access or we expire the token after some period of time. In this case, your response’s meta will contain an “error\_type=OAuthAccessTokenError”. In other words: do do not assume your access\_token is valid forever.
>
>
>
If the token has somehow expired, then request a new one, but that should be a rare case, as Instagram says they do not include an expiry time.
For more [info read this](http://instagram.com/developer/authentication/). Hope this helps.. :)
|
First you have to get the access token and save it in your app. The access token is not valid forever, so you should implement handling for that case in your application. Follow this tutorial for login with Instagram using OAuth and its implementation on the app side:
<http://codegerms.com/login-with-instagram-ios-application-objective-c/>
|
53,116,406 |
When I am working in a subdirectory of a git repo, I have to change directory to the parent to issue commands like
```
$ cd ..; git add .; git commit -m "2018-11-02 17:58:09" ; cd -
[master 0984351] 2018-11-02 17:58:09
12 files changed, 558 insertions(+), 13 deletions(-)
```
That is: change to the parent dir, commit the changes, and change back.
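One way to avoid the round-trip entirely (a sketch, not part of the original question) is git's global `-C <path>` option, which runs any git command as if git had been started in that directory:

```shell
# Run the commands against the parent directory without leaving the current one:
git -C .. add .
git -C .. commit -m "2018-11-02 17:58:09"

# Caveat: a plain `git add .` run from a subdirectory stages only paths under
# that subdirectory, whereas `git add -A` stages the entire work tree from
# anywhere in the repo.
```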
How can I commit directly from within the child directory?
|
2018/11/02
|
['https://Stackoverflow.com/questions/53116406', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7301792/']
|
In our official documentation for the Kubernetes filter, we have an example of how to make your Pod suggest a parser for your data based on an annotation:
<https://docs.fluentbit.io/manual/filter/kubernetes>
|
Look at this configmap:
<https://github.com/fluent/fluent-bit-kubernetes-logging/blob/master/output/elasticsearch/fluent-bit-configmap.yaml>
The nginx parser should be there:
```
[PARSER]
Name nginx
Format regex
Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
```
|
20,735,570 |
It is my understanding that in Lua 5.2, environments are stored in upvalues named `_ENV`. This has made it really confusing for me to modify the environment of a chunk before running it, but after loading it.
I would like to load a file with some functions and use the chunk to inject those functions into various environments. Example:
```
chunk = loadfile( "file" )
-- Inject chunk's definitions
chunk._ENV = someTable -- imaginary syntax
chunk( )
chunk._ENV = someOtherTable
chunk( )
```
Is this possible from within Lua? The only examples I can find of modifying this upvalue are [with the C api](https://stackoverflow.com/questions/19167986/how-do-i-set-via-the-lua-c-api-the-environment-table-for-a-chunk-of-lua-code-p) (another [example from C api](http://lua.2524044.n2.nabble.com/Change-value-of-upvalue-from-within-function-td7642150.html)), but I am trying to do this from within Lua. Is this possible?
Edit: I'm unsure of accepting answers using the debug library. The [docs state](http://www.lua.org/manual/5.2/manual.html#6.10) that the functions may be slow. I'm doing this for efficiency so that entire chunks don't have to be parsed from strings (or a file, even worse) just to inject variable definitions into various environments.
Edit: Looks like this is impossible: [Recreating setfenv() in Lua 5.2](https://stackoverflow.com/questions/14290527/recreating-setfenv-in-lua-5-2?rq=1)
Edit: I suppose the best way for me to do this is to bind a C function that can modify the environment. Though this is a much more annoying way of going about it.
Edit: I believe a more natural way to do this would be to load all chunks into separate environments. These can be "inherited" by any other environment by setting a metatable that refers to a global copy of a chunk. This does not require any upvalue modification post-load, but still allows for multiple environments with those function definitions.
|
2013/12/23
|
['https://Stackoverflow.com/questions/20735570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1825760/']
|
The simplest way to allow a chunk to be run in different environments is to make this explicit and have it receive an environment. Adding this line at the top of the chunk achieves this:
`_ENV=...`
Now you can call `chunk(env1)` and later `chunk(env2)` at your pleasure.
There, no `debug` magic with upvalues.
Although it will be clear if your chunk contains that line, you can add it at load time, by writing a suitable reader function that first sends that line and then the contents of the file.
|
I do not understand why you want to avoid using the debug library, while you are happy to use a C function (neither is possible in a sandbox.)
It can be done using `debug.upvaluejoin`:
```
function newEnvForChunk(chunk, index)
local newEnv = {}
local function source() return newEnv end
debug.upvaluejoin(chunk, 1, source, 1)
if index then setmetatable(newEnv, {__index=index}) end
return newEnv
end
```
Now load any chunk like this:
```
local myChunk = load "print(x)"
```
It will initially inherit the enclosing `_ENV`. Now give it a new one:
```
local newEnv = newEnvForChunk(myChunk, _ENV)
```
and insert a value for 'x':
```
newEnv.x = 99
```
Now when you run the chunk, it should see the value for `x`:
```
myChunk()
```
=> `99`
|
20,735,570 |
It is my understanding that in Lua 5.2 environments are stored in upvalues named `_ENV`. This has made it really confusing for me to modify the environment of a chunk before running it, but after loading it.
I would like to load a file with some functions and use the chunk to inject those functions into various environments. Example:
```
chunk = loadfile( "file" )
-- Inject chunk's definitions
chunk._ENV = someTable -- imaginary syntax
chunk( )
chunk._ENV = someOtherTable
chunk( )
```
Is this possible from within Lua? The only examples I can find of modifying this upvalue are [with the C api](https://stackoverflow.com/questions/19167986/how-do-i-set-via-the-lua-c-api-the-environment-table-for-a-chunk-of-lua-code-p) (another [example from C api](http://lua.2524044.n2.nabble.com/Change-value-of-upvalue-from-within-function-td7642150.html)), but I am trying to do this from within Lua. Is this possible?
Edit: I'm unsure of accepting answers using the debug library. The [docs state](http://www.lua.org/manual/5.2/manual.html#6.10) that the functions may be slow. I'm doing this for efficiency so that entire chunks don't have to be parsed from strings (or a file, even worse) just to inject variable definitions into various environments.
Edit: Looks like this is impossible: [Recreating setfenv() in Lua 5.2](https://stackoverflow.com/questions/14290527/recreating-setfenv-in-lua-5-2?rq=1)
Edit: I suppose the best way for me to do this is to bind a C function that can modify the environment. Though this is a much more annoying way of going about it.
Edit: I believe a more natural way to do this would be to load all chunks into separate environments. These can be "inherited" by any other environment by setting a metatable that refers to a global copy of a chunk. This does not require any upvalue modification post-load, but still allows for multiple environments with those function definitions.
|
2013/12/23
|
['https://Stackoverflow.com/questions/20735570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1825760/']
|
I do not understand why you want to avoid using the debug library, while you are happy to use a C function (neither is possible in a sandbox.)
It can be done using `debug.upvaluejoin`:
```
function newEnvForChunk(chunk, index)
local newEnv = {}
local function source() return newEnv end
debug.upvaluejoin(chunk, 1, source, 1)
if index then setmetatable(newEnv, {__index=index}) end
return newEnv
end
```
Now load any chunk like this:
```
local myChunk = load "print(x)"
```
It will initially inherit the enclosing `_ENV`. Now give it a new one:
```
local newEnv = newEnvForChunk(myChunk, _ENV)
```
and insert a value for 'x':
```
newEnv.x = 99
```
Now when you run the chunk, it should see the value for `x`:
```
myChunk()
```
=> `99`
|
If you don't want to modify your chunk (per LHF's great answer) here are two alternatives:
### Set up a blank environment, then dynamically change its environment to yours
```lua
function compile(code)
local meta = {}
local env = setmetatable({},meta)
return {meta=meta, f=load('return '..code, nil, nil, env)}
end
function eval(block, scope)
block.meta.__index=scope
return block.f()
end
local block = compile('a + b * c')
print(eval(block, {a=1, b=2, c=3})) --> 7
print(eval(block, {a=2, b=3, c=4})) --> 14
```
### Set up a blank environment, and re-set its values with your own each time
```lua
function compile(code)
local env = {}
return {env=env, f=load('return '..code, nil, nil, env)}
end
function eval(block, scope)
for k,_ in pairs(block.env) do block.env[k]=nil end
for k,v in pairs(scope) do block.env[k]=v end
return block.f()
end
local block = compile('a + b * c')
print(eval(block, {a=1, b=2, c=3})) --> 7
print(eval(block, {a=2, b=3, c=4})) --> 14
```
Note that if micro-optimizations matter, the first option is about 2✕ as slow as the `_ENV=...` answer, while the second option is about 8–9✕ as slow.
|
20,735,570 |
It is my understanding that in Lua 5.2 environments are stored in upvalues named `_ENV`. This has made it really confusing for me to modify the environment of a chunk before running it, but after loading it.
I would like to load a file with some functions and use the chunk to inject those functions into various environments. Example:
```
chunk = loadfile( "file" )
-- Inject chunk's definitions
chunk._ENV = someTable -- imaginary syntax
chunk( )
chunk._ENV = someOtherTable
chunk( )
```
Is this possible from within Lua? The only examples I can find of modifying this upvalue are [with the C api](https://stackoverflow.com/questions/19167986/how-do-i-set-via-the-lua-c-api-the-environment-table-for-a-chunk-of-lua-code-p) (another [example from C api](http://lua.2524044.n2.nabble.com/Change-value-of-upvalue-from-within-function-td7642150.html)), but I am trying to do this from within Lua. Is this possible?
Edit: I'm unsure of accepting answers using the debug library. The [docs state](http://www.lua.org/manual/5.2/manual.html#6.10) that the functions may be slow. I'm doing this for efficiency so that entire chunks don't have to be parsed from strings (or a file, even worse) just to inject variable definitions into various environments.
Edit: Looks like this is impossible: [Recreating setfenv() in Lua 5.2](https://stackoverflow.com/questions/14290527/recreating-setfenv-in-lua-5-2?rq=1)
Edit: I suppose the best way for me to do this is to bind a C function that can modify the environment. Though this is a much more annoying way of going about it.
Edit: I believe a more natural way to do this would be to load all chunks into separate environments. These can be "inherited" by any other environment by setting a metatable that refers to a global copy of a chunk. This does not require any upvalue modification post-load, but still allows for multiple environments with those function definitions.
|
2013/12/23
|
['https://Stackoverflow.com/questions/20735570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1825760/']
|
The simplest way to allow a chunk to be run in different environments is to make this explicit and have it receive an environment. Adding this line at the top of the chunk achieves this:
`_ENV=...`
Now you can call `chunk(env1)` and later `chunk(env2)` at your pleasure.
There, no `debug` magic with upvalues.
Although it will be clear if your chunk contains that line, you can add it at load time, by writing a suitable reader function that first sends that line and then the contents of the file.
|
If you don't want to modify your chunk (per LHF's great answer) here are two alternatives:
### Set up a blank environment, then dynamically change its environment to yours
```lua
function compile(code)
local meta = {}
local env = setmetatable({},meta)
return {meta=meta, f=load('return '..code, nil, nil, env)}
end
function eval(block, scope)
block.meta.__index=scope
return block.f()
end
local block = compile('a + b * c')
print(eval(block, {a=1, b=2, c=3})) --> 7
print(eval(block, {a=2, b=3, c=4})) --> 14
```
### Set up a blank environment, and re-set its values with your own each time
```lua
function compile(code)
local env = {}
return {env=env, f=load('return '..code, nil, nil, env)}
end
function eval(block, scope)
for k,_ in pairs(block.env) do block.env[k]=nil end
for k,v in pairs(scope) do block.env[k]=v end
return block.f()
end
local block = compile('a + b * c')
print(eval(block, {a=1, b=2, c=3})) --> 7
print(eval(block, {a=2, b=3, c=4})) --> 14
```
Note that if micro-optimizations matter, the first option is about 2✕ as slow as the `_ENV=...` answer, while the second option is about 8–9✕ as slow.
|
11,306,060 |
I'm trying to do it like this:
Every single user can choose fields (like structures in MySQL), where these fields hold their respective values; it's like building a DB inside a DB.
But how can I do it using a single table?
(I'm not talking about user accounts etc., where I could use a pointer to each user's own "structure".)
Should I do something like a varchar key that stores something like "Name:asd", where PHP explodes on ":" to get the respective structure ('name' in this case) and the respective value ('asd')?
Or use a BLOB? Can someone turn the light on for me? I don't know how to do this any better than my current explanation describes.
I know my text is confusing; sorry for any bad English.
EDIT:
Also, users should be able to add multiple keys/"structures", each accepting a new value.
And they are not able to see the database or tables; they are still normal users.
My server does not support Postgres
|
2012/07/03
|
['https://Stackoverflow.com/questions/11306060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996134/']
|
In my opinion you should create two tables.
1. with the user info
2. with 3 fields (userid, key and value)
Each user has 1 record in the first table. Each user can have 0 or more records in the second table. This will ensure you can still search the data and that users can easily add more key/value pairs when needed.
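A minimal sketch of this two-table design, using Python's built-in `sqlite3` in place of MySQL for illustration (the table and column names here are my own invention, not from the question):

```python
import sqlite3

# In-memory database standing in for the MySQL server.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        userid INTEGER PRIMARY KEY,
        name   TEXT NOT NULL
    );
    CREATE TABLE user_fields (
        userid INTEGER NOT NULL REFERENCES users(userid),
        key    TEXT NOT NULL,
        value  TEXT NOT NULL
    );
""")

# One record in the first table per user...
conn.execute("INSERT INTO users (userid, name) VALUES (1, 'alice')")
# ...and zero or more key/value records in the second table.
conn.executemany(
    "INSERT INTO user_fields (userid, key, value) VALUES (?, ?, ?)",
    [(1, "Name", "asd"), (1, "Color", "blue")],
)

# Fetching one user's custom fields is a plain query on userid.
fields = dict(conn.execute(
    "SELECT key, value FROM user_fields WHERE userid = ?", (1,)
))
print(fields)
```

Adding a new "structure" for a user is then just another `INSERT` into `user_fields`; no schema change is needed.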
|
Use a table with key-value pairs. So three columns:
* user id
* key ("name")
* value ("asd")
Add an index on *user id*, so that you can query a user's attributes easily. If you wanted to query all users with the same properties, then you could add a second index on *key* and/or *value*.
|
11,306,060 |
I'm trying to do it like this:
Every single user can choose fields (like structures in MySQL), where these fields hold their respective values; it's like building a DB inside a DB.
But how can I do it using a single table?
(I'm not talking about user accounts etc., where I could use a pointer to each user's own "structure".)
Should I do something like a varchar key that stores something like "Name:asd", where PHP explodes on ":" to get the respective structure ('name' in this case) and the respective value ('asd')?
Or use a BLOB? Can someone turn the light on for me? I don't know how to do this any better than my current explanation describes.
I know my text is confusing; sorry for any bad English.
EDIT:
Also, users should be able to add multiple keys/"structures", each accepting a new value.
And they are not able to see the database or tables; they are still normal users.
My server does not support Postgres
|
2012/07/03
|
['https://Stackoverflow.com/questions/11306060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996134/']
|
In my opinion you should create two tables.
1. with the user info
2. with 3 fields (userid, key and value)
Each user has 1 record in the first table. Each user can have 0 or more records in the second table. This will ensure you can still search the data and that users can easily add more key/value pairs when needed.
|
Don't start building a database inside a database. In this case, since the user defines the fields himself, there is no relation between the fields, as I understand it? In that case it would make sense to take a look at NoSQL databases, since they seem to fit this kind of situation very well.
Another thing to check is something like:
<http://www.postgresql.org/docs/8.4/static/hstore.html>
Do not try to build tables like: records, fields, field types etc. That's a bad practice and should not be needed.
For a more specific answer on your wishes we need a bit more info about the data the user is storing.
|
11,306,060 |
I'm trying to do it like this:
Every single user can choose fields (like structures in MySQL), where these fields hold their respective values; it's like building a DB inside a DB.
But how can I do it using a single table?
(I'm not talking about user accounts etc., where I could use a pointer to each user's own "structure".)
Should I do something like a varchar key that stores something like "Name:asd", where PHP explodes on ":" to get the respective structure ('name' in this case) and the respective value ('asd')?
Or use a BLOB? Can someone turn the light on for me? I don't know how to do this any better than my current explanation describes.
I know my text is confusing; sorry for any bad English.
EDIT:
Also, users should be able to add multiple keys/"structures", each accepting a new value.
And they are not able to see the database or tables; they are still normal users.
My server does not support Postgres
|
2012/07/03
|
['https://Stackoverflow.com/questions/11306060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996134/']
|
In my opinion you should create two tables.
1. with the user info
2. with 3 fields (userid, key and value)
Each user has 1 record in the first table. Each user can have 0 or more records in the second table. This will ensure you can still search the data and that users can easily add more key/value pairs when needed.
|
While I think the rational answer to this question is the one given by PeeHaa, if you really want the data to fit into one table you could try saving a serialized PHP array in one of the fields. Check out [serialize](http://php.net/manual/en/function.serialize.php) and [unserialize](http://www.php.net/manual/en/function.unserialize.php)
>
> Generates a storable representation of a value
>
>
> This is useful for storing or passing PHP values around without losing
> their type and structure.
>
>
>
This method is discouraged as it is not at all scalable.
|
11,306,060 |
I'm trying to do it like this:
Every single user can choose fields (like structures in MySQL), where these fields hold their respective values; it's like building a DB inside a DB.
But how can I do it using a single table?
(I'm not talking about user accounts etc., where I could use a pointer to each user's own "structure".)
Should I do something like a varchar key that stores something like "Name:asd", where PHP explodes on ":" to get the respective structure ('name' in this case) and the respective value ('asd')?
Or use a BLOB? Can someone turn the light on for me? I don't know how to do this any better than my current explanation describes.
I know my text is confusing; sorry for any bad English.
EDIT:
Also, users should be able to add multiple keys/"structures", each accepting a new value.
And they are not able to see the database or tables; they are still normal users.
My server does not support Postgres
|
2012/07/03
|
['https://Stackoverflow.com/questions/11306060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996134/']
|
In my opinion you should create two tables.
1. with the user info
2. with 3 fields (userid, key and value)
Each user has 1 record in the first table. Each user can have 0 or more records in the second table. This will ensure you can still search the data and that users can easily add more key/value pairs when needed.
|
I assume you are also using a programming language to fetch and present the data.
You can have a single table with a varchar field, in which you store the serialized data of the field structure and its values. When you want the structure back, query the data and deserialize that varchar field.
As far as I know, every programming language supports serialization and deserialization.
Edited: this is not a scalable option.
|
11,306,060 |
I'm trying to do it like this:
Every single user can choose fields (like structures in MySQL), where these fields hold their respective values; it's like building a DB inside a DB.
But how can I do it using a single table?
(I'm not talking about user accounts etc., where I could use a pointer to each user's own "structure".)
Should I do something like a varchar key that stores something like "Name:asd", where PHP explodes on ":" to get the respective structure ('name' in this case) and the respective value ('asd')?
Or use a BLOB? Can someone turn the light on for me? I don't know how to do this any better than my current explanation describes.
I know my text is confusing; sorry for any bad English.
EDIT:
Also, users should be able to add multiple keys/"structures", each accepting a new value.
And they are not able to see the database or tables; they are still normal users.
My server does not support Postgres
|
2012/07/03
|
['https://Stackoverflow.com/questions/11306060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996134/']
|
Don't start building a database inside a database. In this case, since the user defines the fields himself, there is no relation between the fields, as I understand it? In that case it would make sense to take a look at NoSQL databases, since they seem to fit this kind of situation very well.
Another thing to check is something like:
<http://www.postgresql.org/docs/8.4/static/hstore.html>
Do not try to build tables like: records, fields, field types etc. That's a bad practice and should not be needed.
For a more specific answer on your wishes we need a bit more info about the data the user is storing.
|
Use a table with key-value pairs. So three columns:
* user id
* key ("name")
* value ("asd")
Add an index on *user id*, so that you can query a user's attributes easily. If you wanted to query all users with the same properties, then you could add a second index on *key* and/or *value*.
|
11,306,060 |
I'm trying to do it like this:
Every single user can choose fields (like structures in MySQL), where these fields hold their respective values; it's like building a DB inside a DB.
But how can I do it using a single table?
(I'm not talking about user accounts etc., where I could use a pointer to each user's own "structure".)
Should I do something like a varchar key that stores something like "Name:asd", where PHP explodes on ":" to get the respective structure ('name' in this case) and the respective value ('asd')?
Or use a BLOB? Can someone turn the light on for me? I don't know how to do this any better than my current explanation describes.
I know my text is confusing; sorry for any bad English.
EDIT:
Also, users should be able to add multiple keys/"structures", each accepting a new value.
And they are not able to see the database or tables; they are still normal users.
My server does not support Postgres
|
2012/07/03
|
['https://Stackoverflow.com/questions/11306060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996134/']
|
While I think the rational answer to this question is the one given by PeeHaa, if you really want the data to fit into one table you could try saving a serialized PHP array in one of the fields. Check out [serialize](http://php.net/manual/en/function.serialize.php) and [unserialize](http://www.php.net/manual/en/function.unserialize.php)
>
> Generates a storable representation of a value
>
>
> This is useful for storing or passing PHP values around without losing
> their type and structure.
>
>
>
This method is discouraged as it is not at all scalable.
|
Use a table with key-value pairs. So three columns:
* user id
* key ("name")
* value ("asd")
Add an index on *user id*, so that you can query a user's attributes easily. If you wanted to query all users with the same properties, then you could add a second index on *key* and/or *value*.
|
11,306,060 |
I'm trying to do it like this:
Every single user can choose fields (like structures in MySQL), where these fields hold their respective values; it's like building a DB inside a DB.
But how can I do it using a single table?
(I'm not talking about user accounts etc., where I could use a pointer to each user's own "structure".)
Should I do something like a varchar key that stores something like "Name:asd", where PHP explodes on ":" to get the respective structure ('name' in this case) and the respective value ('asd')?
Or use a BLOB? Can someone turn the light on for me? I don't know how to do this any better than my current explanation describes.
I know my text is confusing; sorry for any bad English.
EDIT:
Also, users should be able to add multiple keys/"structures", each accepting a new value.
And they are not able to see the database or tables; they are still normal users.
My server does not support Postgres
|
2012/07/03
|
['https://Stackoverflow.com/questions/11306060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996134/']
|
Don't start building a database inside a database. In this case, since the user defines the fields himself, there is no relation between the fields, as I understand it? In that case it would make sense to take a look at NoSQL databases, since they seem to fit this kind of situation very well.
Another thing to check is something like:
<http://www.postgresql.org/docs/8.4/static/hstore.html>
Do not try to build tables like: records, fields, field types etc. That's a bad practice and should not be needed.
For a more specific answer on your wishes we need a bit more info about the data the user is storing.
|
I assume you are also using a programming language to fetch and present the data.
You can have a single table with a varchar field, in which you store the serialized data of the field structure and its values. When you want the structure back, query the data and deserialize that varchar field.
As far as I know, every programming language supports serialization and deserialization.
Edited: this is not a scalable option.
|
11,306,060 |
I'm trying to do it like this:
Every single user can choose fields (like structures in MySQL), where these fields hold their respective values; it's like building a DB inside a DB.
But how can I do it using a single table?
(I'm not talking about user accounts etc., where I could use a pointer to each user's own "structure".)
Should I do something like a varchar key that stores something like "Name:asd", where PHP explodes on ":" to get the respective structure ('name' in this case) and the respective value ('asd')?
Or use a BLOB? Can someone turn the light on for me? I don't know how to do this any better than my current explanation describes.
I know my text is confusing; sorry for any bad English.
EDIT:
Also, users should be able to add multiple keys/"structures", each accepting a new value.
And they are not able to see the database or tables; they are still normal users.
My server does not support Postgres
|
2012/07/03
|
['https://Stackoverflow.com/questions/11306060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996134/']
|
While I think the rational answer to this question is the one given by PeeHaa, if you really want the data to fit into one table you could try saving a serialized PHP array in one of the fields. Check out [serialize](http://php.net/manual/en/function.serialize.php) and [unserialize](http://www.php.net/manual/en/function.unserialize.php)
>
> Generates a storable representation of a value
>
>
> This is useful for storing or passing PHP values around without losing
> their type and structure.
>
>
>
This method is discouraged as it is not at all scalable.
|
I assume you are also using a programming language to fetch and present the data.
You can have a single table with a varchar field, in which you store the serialized data of the field structure and its values. When you want the structure back, query the data and deserialize that varchar field.
As far as I know, every programming language supports serialization and deserialization.
Edited: this is not a scalable option.
|
52,835,726 |
I want to hide the internal type from the user of a library.
Currently I have something like this:
foo.h
```
typedef struct public
{
uint16 a;
//...
unsigned char internals[4];
} public_type;
```
foo.c
```
typedef struct public
{
uint32_t a;
}internals_type;
```
Then in the functions, I'm doing a cast like this:
```
void bar(public_type * const public_struct)
{
internals_type* const internals = &public_struct->internals;
internals->a = getSomething();
// .....
}
```
**Is there a better way of doing this?**
I've tried some weird stuff with unions and pointers in the header, but nothing seems to be better and I'm curious if this can get any cleaner or at least if the warnings from casting a pointer from one type to pointer to another could be removed.
|
2018/10/16
|
['https://Stackoverflow.com/questions/52835726', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7757891/']
|
I suggest you read more about [opaque data types](https://en.wikipedia.org/wiki/Opaque_data_type), and consider e.g. the `FILE` structure.
In short, don't split your structure into "public" and "private" variants (that way lies madness and possible undefined behavior). Instead just *declare* a structure in a public header file, and have your functions return pointers to that structure or accept arguments that are pointers to that structure.
Then internally in the library you have a private header file which have the *definition* of the structure, and use that header file for your implementation.
---
Simple example
==============
Public header file
------------------
```
#ifndef PUBLIC_HEADER_FILE_H
#define PUBLIC_HEADER_FILE_H
typedef struct my_private_structure MY_PUBLIC_TYPE;
MY_PUBLIC_TYPE *mylib_create(void);
void mylib_destroy(MY_PUBLIC_TYPE *ptr);
#endif
```
Private header file
-------------------
```
#ifndef PRIVATE_HEADER_FILE_H
#define PRIVATE_HEADER_FILE_H
#include "public_header_file.h"
struct my_private_structure
{
// Some private fields here
};
#endif
```
Private library source file
---------------------------
```
#include "private_header_file.h"
#include <stdlib.h>
MY_PUBLIC_TYPE *mylib_create(void)
{
MY_PUBLIC_TYPE *ptr = malloc(sizeof *ptr);
return ptr;
}
void mylib_destroy(MY_PUBLIC_TYPE *ptr)
{
free(ptr);
}
```
You distribute `public_header_file.h` together with your library. It's the header file that the users of the library will use.
The source of your library, and especially the `private_header_file.h` file should *not* be distributed, or at least not installed if you make an open-source library.
Note that this scheme makes *all* of the structure "private", which is usually a good idea since you can then modify it as you like without users of the library needing to rebuild their applications. To access members of the private structure you can provide functions which simply return the value of whatever member needs to be accessed.
|
You can do this in foo.h:
```
typedef struct internals_struct internals_type;  /* forward declaration only */

typedef struct {
    uint16_t a;
    internals_type *internals;
} public_type;
```
Including foo.h is then enough for your users to compile, without them knowing exactly what is inside `internals_type` (its definition stays in foo.c).
|
52,835,726 |
I want to hide the internal type from the user of a library.
Currently I have something like this:
foo.h
```
typedef struct public
{
uint16 a;
//...
unsigned char internals[4];
} public_type;
```
foo.c
```
typedef struct public
{
uint32_t a;
}internals_type;
```
Then in the functions, I'm doing a cast like this:
```
void bar(public_type * const public_struct)
{
internals_type* const internals = &public_struct->internals;
internals->a = getSomething();
// .....
}
```
**Is there a better way of doing this?**
I've tried some weird stuff with unions and pointers in the header, but nothing seems to be better and I'm curious if this can get any cleaner or at least if the warnings from casting a pointer from one type to pointer to another could be removed.
|
2018/10/16
|
['https://Stackoverflow.com/questions/52835726', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7757891/']
|
I suggest you read more about [opaque data types](https://en.wikipedia.org/wiki/Opaque_data_type), and consider e.g. the `FILE` structure.
In short, don't split your structure into "public" and "private" variants (that way lies madness and possible undefined behavior). Instead just *declare* a structure in a public header file, and have your functions return pointers to that structure or accept arguments that are pointers to that structure.
Then internally in the library you have a private header file which have the *definition* of the structure, and use that header file for your implementation.
---
Simple example
==============
Public header file
------------------
```
#ifndef PUBLIC_HEADER_FILE_H
#define PUBLIC_HEADER_FILE_H
typedef struct my_private_structure MY_PUBLIC_TYPE;
MY_PUBLIC_TYPE *mylib_create(void);
void mylib_destroy(MY_PUBLIC_TYPE *ptr);
#endif
```
Private header file
-------------------
```
#ifndef PRIVATE_HEADER_FILE_H
#define PRIVATE_HEADER_FILE_H
#include "public_header_file.h"
struct my_private_structure
{
// Some private fields here
};
#endif
```
Private library source file
---------------------------
```
#include "private_header_file.h"
#include <stdlib.h>
MY_PUBLIC_TYPE *mylib_create(void)
{
MY_PUBLIC_TYPE *ptr = malloc(sizeof *ptr);
return ptr;
}
void mylib_destroy(MY_PUBLIC_TYPE *ptr)
{
free(ptr);
}
```
You distribute `public_header_file.h` together with your library. It's the header file that the users of the library will use.
The source of your library, and especially the `private_header_file.h` file should *not* be distributed, or at least not installed if you make an open-source library.
Note that this scheme makes *all* of the structure "private", which is usually a good idea since you can then modify it as you like without users of the library needing to rebuild their applications. To access members of the private structure you can provide functions which simply return the value of whatever member needs to be accessed.
|
Typically you use `void*` to hide your implementation, e.g.
```
void *create_foo(int param1, int param2);
void print_foo(void* foo);
int operate_on_foo(void* foo);
```
So you "cast" the `void*` in your functions to your internal type.
The downside of this is that the compiler can't help you with the types: e.g. the user of the library can pass an `int*` to your function and the compiler will accept it. When you use an actual type, the compiler will complain.
|
4,643,055 |
I have to find the range of $f(x)=x\sqrt{1-x^2}$ on the interval $[-1,1]$. I have done so by setting $x=\sin\theta$ and thus finding it to be $[-0.5,0.5]$.
Let $x=\sin\theta$. Then, for $x\in[-1,1]$ we get $\theta \in [-\frac{\pi}{2}, \frac{\pi}{2}]$. Thus, $f(x)$ becomes:
$f(\theta)=\sin\theta \sqrt{1-\sin^2\theta}= \sin\theta\,|\cos\theta|$. Since for $\theta \in [-\frac{\pi}{2}, \frac{\pi}{2}]$ we have $\cos\theta \geq 0$:
$f(\theta)=\sin\theta\cos\theta=\frac{1}{2}\sin 2\theta$, which has maximum value $\frac{1}{2}$ when $\theta = \frac{\pi}{4}$ and minimum value $-\frac{1}{2}$ when $\theta = -\frac{\pi}{4}$. When $\theta = \frac{\pi}{4}$ we have $x=\frac{\sqrt{2}}{2}$, and when $\theta = -\frac{\pi}{4}$ we have $x=-\frac{\sqrt{2}}{2}$; these are the maximum and minimum positions respectively. So $f(\frac{\sqrt{2}}{2})=\frac{1}{2}$ and $f(-\frac{\sqrt{2}}{2})=-\frac{1}{2}$. Thus the range of $f(x)$ is $[-\frac{1}{2},\frac{1}{2}]$.
I know that it can be found with derivatives as well. I was wondering how **quadratic theory** can be used to find the range.
|
2023/02/20
|
['https://math.stackexchange.com/questions/4643055', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1033909/']
|
You could reason like this: $f(x)$ will have its maximum value at the same point as $g(x) = (f(x))^2 = x^2 - x^4.$ This is quadratic in $x^2$ and an even function. Since $z-z^2$ takes its maximum value when $z=1/2$ (by finding the vertex of the parabola) then we know $g(x)$, and hence $f(x)$ takes its maximum value when $x^2=1/2$ or $x=\pm 1/\sqrt{2}.$ And $f(1/\sqrt{2}) = 1/2.$ To get the lower end of the range, you can argue by symmetry.
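As a quick numerical sanity check on this quadratic argument (just verification, not part of the proof), one can sample $f$ on a fine grid over $[-1,1]$:

```python
import math

# f(x) = x * sqrt(1 - x^2), the function whose range we want.
f = lambda x: x * math.sqrt(1 - x * x)

# Sample f on a fine grid over [-1, 1] and locate the extremes.
xs = [i / 10000 for i in range(-10000, 10001)]
fmax = max(f(x) for x in xs)
fmin = min(f(x) for x in xs)

assert abs(fmax - 0.5) < 1e-6            # max 1/2, near x = 1/sqrt(2)
assert abs(fmin + 0.5) < 1e-6            # min -1/2, by odd symmetry
assert abs(f(1 / math.sqrt(2)) - 0.5) < 1e-12   # exact maximizer from the proof
```

The grid maximum sits within numerical tolerance of $1/2$ at $x\approx 1/\sqrt{2}$, matching the vertex of $z-z^2$ at $z=x^2=1/2$.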
|
Alternatively, observe that the function is odd over $[-1,1]$. Hence, if the max value occurs at $x\_0 > 0$, then the min occurs at $-x\_0$. But if $x > 0$, then by the AM-GM inequality: $f(x) =x\sqrt{1-x^2} \le \dfrac{x^2+ (1-x^2)}{2}=\dfrac{1}{2}$, with equality when $x=\sqrt{1-x^2}\implies x^2 = 1-x^2\implies x=\dfrac{1}{\sqrt{2}}$. So the max is $\dfrac{1}{2}$ and the min is $-\dfrac{1}{2}$.
|
47,222,510 |
Hi, I am using `json` parsing. I am getting a `json` response from the backend and parsing works well, but the issue is that when I try to set the data as per the response, it sets only the last index. Following are my snippet code and `json` response; can anyone help me with this, please?
Right now it shows output
(in my first textview)
Buy 12 Canex + 1 Dose Free
(in my second textview)
Buy 12 Canex + 1 Dose Free
```
{
"Data": {
"shippingText": "heyy",
"productOffersList": [
{
"bgColorA": "#ffffff",
"bgColorI": "255*255*255",
"offerLine": [
{
"text": "BUY 6",
"colorA": "#82d7ff",
"colorI": "130*215*255"
},
{
"text": " Canex + 1 Dose Free",
"colorA": "#ff8282",
"colorI": "255*130*130"
}
]
},
{
"bgColorA": "#ffffff",
"bgColorI": "255*255*255",
"offerLine": [
{
"text": "BUY 12",
"colorA": "#65dd63",
"colorI": "101*221*99"
},
{
"text": " Canex + 1 Dose Free",
"colorA": "#ff8282",
"colorI": "255*130*130"
}
]
}
]
},
"Status": 1,
"Message": "",
"UserMessage": ""
}
```
Code
```
JSONArray productOffersList=drawerdatas.getJSONArray("productOffersList");
for(int k=0;k<productOffersList.length();k++)
{
JSONObject joofer = productOffersList.getJSONObject(k);
JSONArray offerLine=joofer.getJSONArray("offerLine");
offertextlist=new ArrayList<ProductOffersModel>();
for(int l=0;l<offerLine.length();l++)
{
JSONObject jooferline = offerLine.getJSONObject(l);
ProductOffersModel pom=new ProductOffersModel();
pom.setProductOffers_text(jooferline.getString("text"));
pom.setProductOffers_colorA(jooferline.getString("colorA"));
offertextlist.add(pom);
}
}
for(int v=0;v<offertextlist.size();v++)
{
product_view_offertextfirst.setText(offertextlist.get(v).getProductOffers_text()+" "+offertextlist.get(v).getProductOffers_text());
}
for(int v=0;v<offertextlist.size();v++)
{
product_view_offertexttwo.setText(offertextlist.get(v).getProductOffers_text()+" "+offertextlist.get(v).getProductOffers_text());
}
```
|
2017/11/10
|
['https://Stackoverflow.com/questions/47222510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4817574/']
|
Because you are setting the text of the TextViews in a for loop, you are setting all the offers on both views, and the last one is what you are able to see.
```
JSONArray productOffersList=drawerdatas.getJSONArray("productOffersList");
offertextlist=new ArrayList<ProductOffersModel>();
// You were re creating the list array list inside the for loop so you were losing the data from the 1st product.
for(int k=0;k<productOffersList.length();k++)
{
JSONObject joofer = productOffersList.getJSONObject(k);
JSONArray offerLine=joofer.getJSONArray("offerLine");
for(int l=0;l<offerLine.length();l++)
{
JSONObject jooferline = offerLine.getJSONObject(l);
ProductOffersModel pom=new ProductOffersModel();
pom.setProductOffers_text(jooferline.getString("text"));
pom.setProductOffers_colorA(jooferline.getString("colorA"));
offertextlist.add(pom);
}
}
if(offertextlist !=null && offertextlist.size()==4)
{
product_view_offertextfirst.setText(offertextlist.get(0).getProductOffers_text()
+ " " + offertextlist.get(1).getProductOffers_text());
product_view_offertexttwo.setText(offertextlist.get(2).getProductOffers_text()
+ " " + offertextlist.get(3).getProductOffers_text());
}
```
|
Handling JSON data manually is not a recommended practice for large datasets. You should go with a JSON parsing library, Gson may be a good choice.
<https://github.com/google/gson>
|
47,222,510 |
Hi, I am using `json` parsing. I am getting a `json` response from the backend and parsing works well, but the issue is that when I try to set the data as per the response, it sets only the last index. Following are my snippet code and `json` response; can anyone help me with this, please?
Right now it shows output
(in my first textview)
Buy 12 Canex + 1 Dose Free
(in my second textview)
Buy 12 Canex + 1 Dose Free
```
{
"Data": {
"shippingText": "heyy",
"productOffersList": [
{
"bgColorA": "#ffffff",
"bgColorI": "255*255*255",
"offerLine": [
{
"text": "BUY 6",
"colorA": "#82d7ff",
"colorI": "130*215*255"
},
{
"text": " Canex + 1 Dose Free",
"colorA": "#ff8282",
"colorI": "255*130*130"
}
]
},
{
"bgColorA": "#ffffff",
"bgColorI": "255*255*255",
"offerLine": [
{
"text": "BUY 12",
"colorA": "#65dd63",
"colorI": "101*221*99"
},
{
"text": " Canex + 1 Dose Free",
"colorA": "#ff8282",
"colorI": "255*130*130"
}
]
}
]
},
"Status": 1,
"Message": "",
"UserMessage": ""
}
```
Code
```
JSONArray productOffersList=drawerdatas.getJSONArray("productOffersList");
for(int k=0;k<productOffersList.length();k++)
{
JSONObject joofer = productOffersList.getJSONObject(k);
JSONArray offerLine=joofer.getJSONArray("offerLine");
offertextlist=new ArrayList<ProductOffersModel>();
for(int l=0;l<offerLine.length();l++)
{
JSONObject jooferline = offerLine.getJSONObject(l);
ProductOffersModel pom=new ProductOffersModel();
pom.setProductOffers_text(jooferline.getString("text"));
pom.setProductOffers_colorA(jooferline.getString("colorA"));
offertextlist.add(pom);
}
}
for(int v=0;v<offertextlist.size();v++)
{
product_view_offertextfirst.setText(offertextlist.get(v).getProductOffers_text()+" "+offertextlist.get(v).getProductOffers_text());
}
for(int v=0;v<offertextlist.size();v++)
{
product_view_offertexttwo.setText(offertextlist.get(v).getProductOffers_text()+" "+offertextlist.get(v).getProductOffers_text());
}
```
|
2017/11/10
|
['https://Stackoverflow.com/questions/47222510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4817574/']
|
Because you are setting the text of the TextViews in a for loop, you are setting all the offers on both views, and the last one is what you are able to see.
```
JSONArray productOffersList=drawerdatas.getJSONArray("productOffersList");
offertextlist=new ArrayList<ProductOffersModel>();
// You were re creating the list array list inside the for loop so you were losing the data from the 1st product.
for(int k=0;k<productOffersList.length();k++)
{
JSONObject joofer = productOffersList.getJSONObject(k);
JSONArray offerLine=joofer.getJSONArray("offerLine");
for(int l=0;l<offerLine.length();l++)
{
JSONObject jooferline = offerLine.getJSONObject(l);
ProductOffersModel pom=new ProductOffersModel();
pom.setProductOffers_text(jooferline.getString("text"));
pom.setProductOffers_colorA(jooferline.getString("colorA"));
offertextlist.add(pom);
}
}
if(offertextlist !=null && offertextlist.size()==4)
{
product_view_offertextfirst.setText(offertextlist.get(0).getProductOffers_text()
+ " " + offertextlist.get(1).getProductOffers_text());
product_view_offertexttwo.setText(offertextlist.get(2).getProductOffers_text()
+ " " + offertextlist.get(3).getProductOffers_text());
}
```
|
Try with this parsing:
```
JSONArray productOffersList=drawerdatas.getJSONArray("productOffersList");
for(int k=0;k<productOffersList.length();k++)
{
JSONObject joofer = productOffersList.getJSONObject(k);
JSONArray offerLine=joofer.getJSONArray("offerLine");
offertextlist=new ArrayList<ProductOffersModel>();
ProductOffersModel pom=new ProductOffersModel();
pom.setProductOffers_text
(offerLine.getJSONObject(0).getString("text")+""+
offerLine.getJSONObject(1).getString("text"));
offertextlist.add(pom);
}
for(int v=0;v<offertextlist.size();v++)
{
if(v==0)
{
product_view_offertextfirst.setText
(offertextlist.get(v).getProductOffers_text());
}
else if(v==1)
{
product_view_offertexttwo.setText(
offertextlist.get(v).getProductOffers_text());
}
}
```
|
47,222,510 |
Hi, I am using `json` parsing. I am getting a `json` response from the backend and parsing works well, but the issue is that when I try to set the data as per the response, it sets only the last index. Following are my snippet code and `json` response; can anyone help me with this, please?
Right now it shows output
(in my first textview)
Buy 12 Canex + 1 Dose Free
(in my second textview)
Buy 12 Canex + 1 Dose Free
```
{
"Data": {
"shippingText": "heyy",
"productOffersList": [
{
"bgColorA": "#ffffff",
"bgColorI": "255*255*255",
"offerLine": [
{
"text": "BUY 6",
"colorA": "#82d7ff",
"colorI": "130*215*255"
},
{
"text": " Canex + 1 Dose Free",
"colorA": "#ff8282",
"colorI": "255*130*130"
}
]
},
{
"bgColorA": "#ffffff",
"bgColorI": "255*255*255",
"offerLine": [
{
"text": "BUY 12",
"colorA": "#65dd63",
"colorI": "101*221*99"
},
{
"text": " Canex + 1 Dose Free",
"colorA": "#ff8282",
"colorI": "255*130*130"
}
]
}
]
},
"Status": 1,
"Message": "",
"UserMessage": ""
}
```
Code
```
JSONArray productOffersList=drawerdatas.getJSONArray("productOffersList");
for(int k=0;k<productOffersList.length();k++)
{
JSONObject joofer = productOffersList.getJSONObject(k);
JSONArray offerLine=joofer.getJSONArray("offerLine");
offertextlist=new ArrayList<ProductOffersModel>();
for(int l=0;l<offerLine.length();l++)
{
JSONObject jooferline = offerLine.getJSONObject(l);
ProductOffersModel pom=new ProductOffersModel();
pom.setProductOffers_text(jooferline.getString("text"));
pom.setProductOffers_colorA(jooferline.getString("colorA"));
offertextlist.add(pom);
}
}
for(int v=0;v<offertextlist.size();v++)
{
product_view_offertextfirst.setText(offertextlist.get(v).getProductOffers_text()+" "+offertextlist.get(v).getProductOffers_text());
}
for(int v=0;v<offertextlist.size();v++)
{
product_view_offertexttwo.setText(offertextlist.get(v).getProductOffers_text()+" "+offertextlist.get(v).getProductOffers_text());
}
```
|
2017/11/10
|
['https://Stackoverflow.com/questions/47222510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4817574/']
|
Try with this parsing:
```
JSONArray productOffersList=drawerdatas.getJSONArray("productOffersList");
for(int k=0;k<productOffersList.length();k++)
{
JSONObject joofer = productOffersList.getJSONObject(k);
JSONArray offerLine=joofer.getJSONArray("offerLine");
offertextlist=new ArrayList<ProductOffersModel>();
ProductOffersModel pom=new ProductOffersModel();
pom.setProductOffers_text
(offerLine.getJSONObject(0).getString("text")+""+
offerLine.getJSONObject(1).getString("text"));
offertextlist.add(pom);
}
for(int v=0;v<offertextlist.size();v++)
{
if(v==0)
{
product_view_offertextfirst.setText
(offertextlist.get(v).getProductOffers_text());
}
else if(v==1)
{
product_view_offertexttwo.setText(
offertextlist.get(v).getProductOffers_text());
}
}
```
|
Handling JSON data manually is not a recommended practice for large datasets. You should go with a JSON parsing library, Gson may be a good choice.
<https://github.com/google/gson>
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
As others said, `du` is the way to go. But knowing the options to `du` is essential. Here they are:
```
du -m --max-depth 1 /foo /bar
```
This will give you the size in megabytes of the directories contained in `/foo` and `/bar`. If you want the output to be sorted, pipe it through the `sort` utility:
```
du -m --max-depth 1 /foo /bar | sort -n -k 1
```
|
Or you can pass:
```
du -sm /dir1 /dir2 | sort -nrk 1
#or
du -sm * | sort -nrk 1
```
The difference between the first and the second is that the second will pick up all the files and dirs in the current directory, while the first uses just the dirs you passed.
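A throwaway demonstration of the second form, sorted largest first (assumes GNU coreutils; the directories are scratch paths created just for the demo):

```shell
tmp=$(mktemp -d)                        # scratch area for the demo
mkdir -p "$tmp/big" "$tmp/small"
dd if=/dev/zero of="$tmp/big/f"   bs=1M count=3 2>/dev/null
dd if=/dev/zero of="$tmp/small/f" bs=1M count=1 2>/dev/null
du -sm "$tmp"/* | sort -nrk 1           # the 3 MB dir should be listed first
rm -rf "$tmp"
```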
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
>
> du [options] [directories and/or files]
>
>
>
|
From the command line I prefer to list the biggest last so I use:
```
du -shm ./* | sort -n
```
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
Or you can pass:
```
du -sm /dir1 /dir2 | sort -nrk 1
#or
du -sm * | sort -nrk 1
```
The difference between the first and the second is that the second will pick up all the files and dirs in the current directory, while the first uses just the dirs you passed.
|
use `du` in terminal.
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
As others said, `du` is the way to go. But knowing the options to `du` is essential. Here they are:
```
du -m --max-depth 1 /foo /bar
```
This will give you the size in megabytes of the directories contained in `/foo` and `/bar`. If you want the output to be sorted, pipe it through the `sort` utility:
```
du -m --max-depth 1 /foo /bar | sort -n -k 1
```
|
use `du` in terminal.
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
If you would like a graphical (X11) display, consider installing **`xdiskusage`**. You can either pipe the output of `du` into it (as you might do if you're running `du` as another user, or on another system, or at another time), or you can run it interactively and it will invoke `du` for itself.
As usual, once it's installed, consult the man page.
|
From the command line I prefer to list the biggest last so I use:
```
du -shm ./* | sort -n
```
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
>
> du [options] [directories and/or files]
>
>
>
|
use `du` in terminal.
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
Or you can pass:
```
du -sm /dir1 /dir2 | sort -nrk 1
#or
du -sm * | sort -nrk 1
```
The difference between the first and the second is that the second will pick up all the files and dirs in the current directory, while the first uses just the dirs you passed.
|
If you would like a graphical (X11) display, consider installing **`xdiskusage`**. You can either pipe the output of `du` into it (as you might do if you're running `du` as another user, or on another system, or at another time), or you can run it interactively and it will invoke `du` for itself.
As usual, once it's installed, consult the man page.
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
Or you can pass:
```
du -sm /dir1 /dir2 | sort -nrk 1
#or
du -sm * | sort -nrk 1
```
The difference between the first and the second is that the second will pick up all the files and dirs in the current directory, while the first uses just the dirs you passed.
|
From the command line I prefer to list the biggest last so I use:
```
du -shm ./* | sort -n
```
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
Or you can pass:
```
du -sm /dir1 /dir2 | sort -nrk 1
#or
du -sm * | sort -nrk 1
```
The difference between the first and the second is that the second will pick up all the files and dirs in the current directory, while the first uses just the dirs you passed.
|
>
> du [options] [directories and/or files]
>
>
>
|
68,928 |
Is there any command that could show me the size of several folders in linux, perhaps ranked from biggest to smallest?
|
2009/11/11
|
['https://superuser.com/questions/68928', 'https://superuser.com', 'https://superuser.com/users/4586/']
|
If you would like a graphical (X11) display, consider installing **`xdiskusage`**. You can either pipe the output of `du` into it (as you might do if you're running `du` as another user, or on another system, or at another time), or you can run it interactively and it will invoke `du` for itself.
As usual, once it's installed, consult the man page.
|
use `du` in terminal.
|
38,042,541 |
Function
```
private void startService() {
if (!onForeground) {
Log.d(Constants.TAG, "RecordService startService");
Intent intent = new Intent(this, MainActivity.class);
// intent.setAction(Intent.ACTION_VIEW);
// intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
intent.setFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
PendingIntent pendingIntent = PendingIntent.getActivity(
getBaseContext(), 0, intent, 0);
Notification notification = new NotificationCompat.Builder(
getBaseContext())
.setContentTitle(
this.getString(R.string.notification_title))
.setTicker(this.getString(R.string.notification_ticker))
.setContentText(this.getString(R.string.notification_text))
.setSmallIcon(R.drawable.ic_launcher)
.setContentIntent(pendingIntent).setOngoing(false)
.getNotification();
notification.flags = Notification.FLAG_AUTO_CANCEL;
startForeground(1337, notification);
onForeground = false;
}
}
```
What should I do so that the notification is cancelable when I want to remove it from the notification tray? I am new to Android, so please help me.
|
2016/06/26
|
['https://Stackoverflow.com/questions/38042541', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5702485/']
|
>
> NotificationCompat.Builder **setAutoCancel (boolean autoCancel)** Setting
> this flag will make it so the notification is automatically canceled
> when the user clicks it in the panel.
>
>
>
You can find more about how to create a notification [here](http://www.tutorialspoint.com/android/android_notifications.htm).
|
Add an ID in your notification builder; after this, call `notificationManager.cancel(id)` on a `NotificationManager` to remove it.
|
38,042,541 |
Function
```
private void startService() {
if (!onForeground) {
Log.d(Constants.TAG, "RecordService startService");
Intent intent = new Intent(this, MainActivity.class);
// intent.setAction(Intent.ACTION_VIEW);
// intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
intent.setFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
PendingIntent pendingIntent = PendingIntent.getActivity(
getBaseContext(), 0, intent, 0);
Notification notification = new NotificationCompat.Builder(
getBaseContext())
.setContentTitle(
this.getString(R.string.notification_title))
.setTicker(this.getString(R.string.notification_ticker))
.setContentText(this.getString(R.string.notification_text))
.setSmallIcon(R.drawable.ic_launcher)
.setContentIntent(pendingIntent).setOngoing(false)
.getNotification();
notification.flags = Notification.FLAG_AUTO_CANCEL;
startForeground(1337, notification);
onForeground = false;
}
}
```
What should I do so that the notification is cancelable when I want to remove it from the notification tray? I am new to Android, so please help me.
|
2016/06/26
|
['https://Stackoverflow.com/questions/38042541', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5702485/']
|
Notifications associated with a foreground service are ongoing, i.e., not cancelable. You should provide a separate action in the notification to cancel the notification (which would have to call [stopForeground()](https://developer.android.com/reference/android/app/Service.html#stopForeground(boolean)) to remove the notification).
|
Add an ID in your notification builder; after this, call `notificationManager.cancel(id)` on a `NotificationManager` to remove it.
|
23,201,713 |
I need to create directories from for loop. I have a list.
```
a = [banana 1234, apple 456, orange 789]
```
And I need to create folders named by the numbers at the end of each item in the list. For example:
```
C:\folder\1234; C:\folder\456
```
and so on.
code is
```
for length in atrinkta:
folder = re.findall(r'\d{9,}', length)
os.makedirs("D:/python_study/images/skelbimai/" + folder)
```
but I get the error **"TypeError: cannot concatenate 'str' and 'list' objects"** and I don't know what to do. Please, someone help.
**EDIT**
I tried your method a 2nd time. After deleting the created folder, for whatever reason Python gives me the error
```
WindowsError: [Error 183] Cannot create a file when that file already exists: 'D:/python_study/images/skelbimai/'
```
**2ndEdit**
shaktimaan, now I get another error: **AttributeError: 'NoneType' object has no attribute 'group'**. What does this mean?
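For reference, a hedged sketch of what all three errors point at (function name and inputs are illustrative): `re.findall` returns a *list*, which is what the original "cannot concatenate 'str' and 'list'" error complains about, so take a single match with `re.search` instead — and `re.search` returns `None` when nothing matches, which is where the `'NoneType' ... 'group'` error comes from:

```python
import os
import re

def make_folders(items, base):
    """Create base/<digits> for each item like 'banana 1234' (sketch)."""
    for item in items:
        match = re.search(r'\d+', item)       # first run of digits, or None
        if match:                             # guards the NoneType.group error
            path = os.path.join(base, match.group())
            if not os.path.isdir(path):       # avoids "file already exists"
                os.makedirs(path)
```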
|
2014/04/21
|
['https://Stackoverflow.com/questions/23201713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3383245/']
|
You will receive the following error
```
undefined method `collect' for nil:NilClass
```
on
```
<%= f.select :id , @courses.collect{|c| [c.id , c.name]} %>
```
Only when `@courses` instance variable **was not set in the action that rendered this particular view.**
I see that `@courses` variable is set in the `existing` method. If you are using `existing` as an action which renders this view then your view name must be `existing.html.erb`.
Or if you are rendering the view from a different action then in that case you should set `@courses` value in that particular action by either directly setting the value within action OR by calling `existing` method from there.
|
If you have your courses as a database table, you might want to try using rails' built in field helper `collection_select`. It will populate your select field with all of the data available in your model. If you want a drop-down like the one you are describing, I believe using collection select is the best way to handle it.
You can read up on it here: <http://api.rubyonrails.org/classes/ActionView/Helpers/FormOptionsHelper.html>
Alternatively, if you have a ton of courses, maybe try looking into using a text field with autocomplete. Jquery UI has a plugin that makes this very easy. You can check out the railscasts for it here: <http://railscasts.com/episodes/102-auto-complete-association-revised>.
It requires a pro account but if you do a lot of rails developing it will be the best $9 you spend every month.
If you would like to continue to do it this way, make sure that you are defining
`@courses = Course.all` in the correct controller action, otherwise you will have nothing to render.
|
12,861 |
I am looking for ideas for the following. I want to design a flexible plate where I can control the bending of the plate. In particular, a 5”x2” plate will be used which can be bent with some attached mechanisms. The plate will be dragged in water, so it should not buckle under water pressure itself; rather, its buckling should be controllable. I know there are certain shape memory alloys which can do that when powered (maybe some piezoelectric actuator can do this also, I am not sure).
[](https://i.stack.imgur.com/Mtht6.jpg)
Can it be designed with some actuating mechanism like this below:
[](https://i.stack.imgur.com/LRl1c.jpg)
I would appreciate your comments and suggestion. Thanks
|
2016/12/15
|
['https://engineering.stackexchange.com/questions/12861', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/9237/']
|
There are some works by Professor Wiciak, where vibrations or noise are controlled by piezoelements.
[Development of noise reduction panel using piezoelectric material](http://www.sciencedirect.com/science/article/pii/S2212017316305503)
Piezomaterials can be used for active vibration control.
Maybe you could apply this principle to this problem. How about a bimetal? Can you use heat? Or cooling of a predeformed shape?
|
You will want to choose a plate thickness that resists the water flow conditions, then select an actuator with enough force to deflect the plate.
Your stroke length will be set by how much deflection you need. Your actuator type will be selected based on the required accuracy and budget.
Piezo would work great since it has the highest accuracy and is waterproof (probably only in very clean water, though). I probably wouldn't mess with shape memory alloys, as that is likely to introduce more complexity into a simple design requirement. A gear-motor linear actuator would work well too if the accuracy is sufficient and it can be kept out of the water. Radio-controlled servo actuators come in a wide range of torques and accuracies and would also likely work. Possibly hydraulics if the location is too hazardous for electric actuators.
As far as attaching the actuator; the easiest would be in tension like you show. Piano wire could be used to allow the actuator to be located in a more convenient or dryer location.
The actuator could also be in compression from the end if necessary. You just need to make sure the load is pinned and loaded eccentrically so it deflects the same way every time.
|
12,861 |
I am looking for ideas for the following. I want to design a flexible plate where I can control the bending of the plate. In particular, a 5”x2” plate will be used which can be bent with some attached mechanisms. The plate will be dragged in water so it should not buckle under water pressure itself, rather its buckling should be controllable. I know there are certain shape memory alloys which can do that when powered (may be some piezoelectric actuator can do this also, I am not sure).
[](https://i.stack.imgur.com/Mtht6.jpg)
Can it be designed with some actuating mechanism like this below:
[](https://i.stack.imgur.com/LRl1c.jpg)
I would appreciate your comments and suggestion. Thanks
|
2016/12/15
|
['https://engineering.stackexchange.com/questions/12861', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/9237/']
|
There are some works by professor Wiciak, where vibrations or noise are controlled by piezoelements.
[Development of noise reduction panel using piezoelectric material](http://www.sciencedirect.com/science/article/pii/S2212017316305503)
Piezomaterials can be used for active vibration control.
Maybe you could apply this principle to this problem. How about bimetal? Can you use heat? Or cooling of predeformed shape?
|
Presumably you will also want a reasonably well defined curve.
To create a smooth, repeatable and adjustable curve you will want to stay within the elastic limit of the material you are using, which will depend on the material itself and its thickness. Steel or a composite sheet would seem like reasonable candidates.
This also puts a limit on the stiffness of the surface as the elastic deformation will be limited by the thickness of the plate.
The profile you end up with can be calculated by beam deflection theory and will depend on where you apply loads, in what direction and how the ends of the plate are supported.
Probably the simplest solution is to support one end and apply a load at the other (much like your example mechanism), although there are lots of potential permutations of load and support arrangements.
|
26,009,316 |
I am trying to use dynamic memory for this project. I am getting a seg fault but I cannot figure out what I am doing incorrectly. Can anyone point to where my mistake is? The file seems to read in correctly... but I'm assuming the fault is a rogue pointer. Help!
I am just trying to read in "heart two 2" to "spade ace 11" from a file, all words separated by a space. My program worked before using dynamic memory.
```
#include <iostream>
#include <fstream>
#include <ctime>
#include <stdlib.h>
#include <string>
using namespace std;
//global constant(s)
const int maxCards = 52;
//Structs
struct card
{
char *suit;
char *rank;
int cvalue;
char location;
};
void readDeck(card* deckPtr);
void cardsInit(char *finNameP,card *deckPtr);
//program
int main()
{
card *deckPtr = new card[52];
char *finNameP = new char[13];
strcopy(finNameP,"cardFile.txt");
cardsInit(finNameP,deckPtr); // function i wrote that works
readDeck(deckPtr); //simply reads the deck from &deckPtr[0] -> &deck[51]
delete [] finNameP;
}
void cardsInit(char *finNameP, card *deckPtr)
{
//set up card file to be read in
ifstream fin;
cout << "Please enter file name...(cardFile.txt)" << endl;;
cin >> *finNameP;
fin.open(finNameP);
//create pointer and set initial value
card *deckHome = deckPtr;
for(int i=0;i<52;i++)
{
(*deckPtr).suit = new char[9];
(*deckPtr).rank = new char[9];
deckPtr++;
}
deckPtr = deckHome;
//check if cardFile.txt opens correctly
if(!fin.good())
{
cout << "Error with card file" << endl;
}
else
{
while(fin.good())
{
for(deckPtr = &deckPtr[0]; deckPtr < &deckPtr[maxCards];deckPtr++)
{
fin >> (*deckPtr).suit;
fin >> (*deckPtr).rank;
fin >> (*deckPtr).cvalue;
}
}
}
fin.close();
delete []finNameP;
delete [] (*deckPtr).suit;
delete [] (*deckPtr).rank;
}
```
|
2014/09/24
|
['https://Stackoverflow.com/questions/26009316', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4073150/']
|
This is a really ancient way to program. Instead of using `new`, use `std::string` or `std::vector<char>`. Those also use dynamic memory but they make it much harder for you to accidentally cause memory allocation bugs.
The first problem comes here:
```
cin >> *finNameP;
```
Since `finNameP` has type `char *`, then `*finNameP` has type `char`. So this instruction reads a single character. Then you go onto do `fin.open(finNameP);` which causes undefined behaviour because there is no string in `finNameP`.
The simplest fix is to make `finNameP` be a `std::string`. Note that doing `cin >> finNameP` (without changing the type) would compile, however it is a bad idea because there is no buffer overflow protection. You could write `cin >> setw(12) >> finNameP;` but that is still substantially worse than using a string.
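A minimal sketch of that fix (illustrative helper, not the asker's full program): read the file name into a `std::string`, which grows as needed, so there is no fixed `char*` buffer to overflow and no single-character read.

```cpp
#include <cassert>
#include <iostream>
#include <sstream>
#include <string>

// Reads one whitespace-delimited token as the file name.
// With std::string there is no buffer to overflow, and operator>>
// reads a whole word rather than a single char.
std::string read_filename(std::istream &in) {
    std::string name;
    in >> name;
    return name;
}
```

In the asker's code this replaces both `char *finNameP = new char[13];` and the `cin >> *finNameP;` line, and the matching `delete []` calls go away.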
|
`deckPtr < &deckPtr[maxCards]` is always true, so the for loop runs forever.
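To see why: `&deckPtr[maxCards]` is just `deckPtr + maxCards`, and that bound moves along with `deckPtr`. A hedged sketch of the fix (illustrative function, not the asker's code) is to compute the end pointer once from a base that does not move:

```cpp
#include <cassert>

const int maxCards = 52;

// Iterates with a fixed end pointer. The asker's condition
// `deckPtr < &deckPtr[maxCards]` compares deckPtr against
// deckPtr + maxCards, which can never be false; comparing
// against base + maxCards terminates after maxCards steps.
int count_elements(int *base) {
    int n = 0;
    for (int *p = base, *end = base + maxCards; p < end; ++p)
        ++n;
    return n;
}
```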
|
126,666 |
I cannot understand a situation with the `MeijerG` function. My problem is as follows. I obtained a `MeijerG` function as the result of a Fourier transform:
```
Integrate[
Integrate[
Exp[I*k*r*Cos[φ]]/(k^4 + 1)*k, {φ, 0, 2 π},
Assumptions -> {k > 0, r > 0}],
{k, 0, ∞}, Assumptions -> {r > 0}]
(* 1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256] *)
```
Let us look at it:
```
Plot[MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], {r, 0, 5}]
```
[](https://i.stack.imgur.com/qBNsQ.jpg)
However, a few years ago I remember having obtained the same integral in terms of the more familiar Kelvin kei function as follows: $$-\frac{\mathrm{kei}\_0(r)}{2 \pi }.$$ I tried to get it using `FullSimplify`, but failed.
**This is my first question on this subject**: are you aware of an approach to transform this `MeijerG` function into any Bessel-based (or Kelvin kei) form?
OK, at that time I used the tables of integrals of Prudnikov, Brychkov and Marichev, and these tables might have had an error.
However, now I try to get this result at `r = 0`. According to the plot above it is regular in this point. This calculation might be done in several ways: by a direct substitution `r -> 0` in the above result, provided I change the assumptions to `Assumptions -> {r >= 0}` everywhere. Alternatively I can go to a limit `r -> 0` or within the old assumptions I can substitute, say, `r -> 0.001` to get a close numeric answer. I can, finally, simply put 0 instead of `r` into the `MeijerG` function. Here you are:
```
MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0] // Evaluate
(* MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0] *)
MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0]
(* MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0] *)
MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r] /. r -> 0
(* MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0] *)
MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0.00001]
(* 3.00952 *)
Limit[ MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], r -> 0.00001]
(* 3.14159 *)
```
In addition I can do the following:
```
f[r_] := Integrate[
Integrate[
Exp[I*k*r*Cos[φ]]/(k^4 + 1)*k, {φ, 0, 2 π}, Assumptions -> {k > 0, r >= 0}], {k, 0, ∞},
Assumptions -> {r >= 0}] ;
f[0]
(* π^2/2 *)
```
However, one can proceed the other way around: substitute `r = 0` into the initial integral, which then reduces to the following:
```
1/(2 π)*Integrate[k/(k^4 + 1), {k, 0, ∞}]
(* 1/8 *)
```
which evidently differs from the values of about 3 obtained above.
So, **the second question is**: what happens here? Why is it so difficult to obtain the value of the `MeijerG` function at zero?
|
2016/09/19
|
['https://mathematica.stackexchange.com/questions/126666', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/788/']
|
The standard way to express `MeijerG` in terms of more specific special functions is with `FunctionExpand`. For example
```
MeijerG[{{1}, {}}, {{1/2, 1, 3/2}, {}}, z]
```
>
>
> ```
> MeijerG[{{1}, {}}, {{1/2, 1, 3/2}, {}}, z]
>
> ```
>
>
```
FunctionExpand[%]
```
>
>
> ```
> -2 π z - π^2 z BesselY[1, 2 Sqrt[z]] + π^2 z StruveH[1, 2 Sqrt[z]]
>
> ```
>
>
However your expression does not simplify with this approach
```
FunctionExpand[1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], r > 0]
```
>
>
> ```
> 1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256]
>
> ```
>
>
---
The best workaround I could find is to convert the 3-argument form of `MeijerG` into a 4-argument form, which will be correct on the positive real axis.
```
mei = 1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256];
FunctionExpand[mei /. {
MeijerG[as_, bs_, c_. r^n_?Positive] :> MeijerG[as, bs, c^(1/n) r, 1/n]
}]
```
>
>
> ```
> -2 π KelvinKei[0, r]
>
> ```
>
>
---
Another (less successful) workaround is to solve the ODE your `MeijerG` satisfies.
```
DifferentialRootReduce[1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], r]
```
[](https://i.stack.imgur.com/rPzTV.png)
Now `FunctionExpand` knows how to handle `DifferentialRoot` objects, but it seems to spin forever on this input presumably because the solution from `DSolve` on this ODE has very complicated coefficients.
If we approximate the initial conditions, we can see your expression is really $-2\pi \text{kei}\_0(r)$.
```
DSolveValue[{
r^3 y[r] + y'[r] - r y''[r] + 2 r^2 y'''[r] + r^3 y''''[r] == 0,
y[1] == 3.1101430273071208511784026627334110346453670869257748826627`30.,
y'[1] == -2.2140054621659709246306930546689437640742733642857570978703`30.,
y''[1] == 0.412577224007051831900819509950661362815906739014512338784`30.,
y'''[1] == 1.7377422767006054496810801840889187820973078345984715247552`30.
}, y[r], r] // Chop
```
>
>
> ```
> -6.28318530717958647692528677 KelvinKei[0, r]
>
> ```
>
>
---
Addressing your second question, my guess is this particular `MeijerG` isn't evaluating at the origin because it's a (rather complicated looking) branch point:
[](https://i.stack.imgur.com/ExAxp.png)
|
To evaluate at `r = 0`, take the `Limit` of the `Series` expansion
```
int = Assuming[{r > 0},
Integrate[
Exp[I*k*r*Cos[φ]]/(k^4 + 1)*k, {k, 0, ∞}, {φ,
0, 2 π}]]
(* (1/2)*Pi*MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256] *)
approx = Series[int, {r, 0, 1}] // Normal // FullSimplify[#, r > 0] &
(* (Pi*(3456*Pi - r^2*(6*EulerGamma*(-576 + r^4) +
864*(4 + Log[16]) + r^4*(-11 + Log[64])) +
3456*r^2*Log[r]))/6912 *)
```
`approx` is also undefined at `r = 0` due to the `Log[r]`
```
Plot[{int, approx}, {r, 0, 2.8},
PlotStyle -> {Automatic, Dashed},
PlotLegends -> {"int1", "approx"}]
```
[](https://i.stack.imgur.com/sGOF3.png)
```
Limit[approx, r -> 0]
(* Pi^2/2 *)
```
which has an approximate numeric value of
```
% // N
(* 4.9348 *)
```
|
126,666 |
I cannot understand a situation with `MeijerG` function. My problem is as follows. I obtained a `MeijerG` function as the result of a Fourier transform:
```
Integrate[
Integrate[
Exp[I*k*r*Cos[φ]]/(k^4 + 1)*k, {φ, 0, 2 π},
Assumptions -> {k > 0, r > 0}],
{k, 0, ∞}, Assumptions -> {r > 0}]
(* 1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256] *)
```
Let us look at it:
```
Plot[MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], {r, 0, 5}]
```
[](https://i.stack.imgur.com/qBNsQ.jpg)
However, a few years ago I remember having obtained the same integral in terms of the more familiar Kelvin kei function: $$-\frac{\mathrm{kei}\_0(r)}{2 \pi }.$$ I tried to get it using `FullSimplify`, but failed.
**This is my first question on this subject**: are you aware of an approach to transform this `MeijerG` function into any Bessel-based (or Kelvin kei) form?
OK, at that time I used the tables of integrals of Prudnikov, Brychkov and Marichev, and these tables might have had an error.
However, now I try to get this result at `r = 0`. According to the plot above, it is regular at this point. This calculation can be done in several ways: by direct substitution `r -> 0` into the above result, provided I change the assumptions to `Assumptions -> {r >= 0}` everywhere; alternatively, I can take the limit `r -> 0`, or, keeping the old assumptions, substitute, say, `r -> 0.001` to get a close numeric answer. Finally, I can simply put 0 instead of `r` into the `MeijerG` function. Here you are:
```
MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0] // Evaluate
(* MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0] *)
MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0]
(* MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0] *)
MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r] /. r -> 0
(* MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0] *)
MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, 0.00001]
(* 3.00952 *)
Limit[ MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], r -> 0.00001]
(* 3.14159 *)
```
In addition I can do the following:
```
f[r_] := Integrate[
Integrate[
Exp[I*k*r*Cos[φ]]/(k^4 + 1)*k, {φ, 0, 2 π}, Assumptions -> {k > 0, r >= 0}], {k, 0, ∞},
Assumptions -> {r >= 0}] ;
f[0]
(* π^2/2 *)
```
However, one can proceed the other way around: substitute `r = 0` into the initial integral, which then reduces to the following:
```
1/(2 π)*Integrate[k/(k^4 + 1), {k, 0, ∞}]
(* 1/8 *)
```
which evidently differs from the values of about 3 obtained above.
So, **the second question is**: what happens here? Why is it so difficult to obtain the value of the `MeijerG` function at zero?
|
2016/09/19
|
['https://mathematica.stackexchange.com/questions/126666', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/788/']
|
The standard way to express `MeijerG` in terms of more specific special functions is with `FunctionExpand`. For example
```
MeijerG[{{1}, {}}, {{1/2, 1, 3/2}, {}}, z]
```
>
>
> ```
> MeijerG[{{1}, {}}, {{1/2, 1, 3/2}, {}}, z]
>
> ```
>
>
```
FunctionExpand[%]
```
>
>
> ```
> -2 π z - π^2 z BesselY[1, 2 Sqrt[z]] + π^2 z StruveH[1, 2 Sqrt[z]]
>
> ```
>
>
However your expression does not simplify with this approach
```
FunctionExpand[1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], r > 0]
```
>
>
> ```
> 1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256]
>
> ```
>
>
---
The best workaround I could find is to convert the 3-argument form of `MeijerG` into a 4-argument form, which will be correct on the positive real axis.
```
mei = 1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256];
FunctionExpand[mei /. {
MeijerG[as_, bs_, c_. r^n_?Positive] :> MeijerG[as, bs, c^(1/n) r, 1/n]
}]
```
>
>
> ```
> -2 π KelvinKei[0, r]
>
> ```
>
>
---
Another (less successful) workaround is to solve the ODE your `MeijerG` satisfies.
```
DifferentialRootReduce[1/2 π MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], r]
```
[](https://i.stack.imgur.com/rPzTV.png)
Now `FunctionExpand` knows how to handle `DifferentialRoot` objects, but it seems to spin forever on this input presumably because the solution from `DSolve` on this ODE has very complicated coefficients.
If we approximate the initial conditions, we can see your expression is really $-2\pi \text{kei}\_0(r)$.
```
DSolveValue[{
r^3 y[r] + y'[r] - r y''[r] + 2 r^2 y'''[r] + r^3 y''''[r] == 0,
y[1] == 3.1101430273071208511784026627334110346453670869257748826627`30.,
y'[1] == -2.2140054621659709246306930546689437640742733642857570978703`30.,
y''[1] == 0.412577224007051831900819509950661362815906739014512338784`30.,
y'''[1] == 1.7377422767006054496810801840889187820973078345984715247552`30.
}, y[r], r] // Chop
```
>
>
> ```
> -6.28318530717958647692528677 KelvinKei[0, r]
>
> ```
>
>
---
Addressing your second question, my guess is this particular `MeijerG` isn't evaluating at the origin because it's a (rather complicated looking) branch point:
[](https://i.stack.imgur.com/ExAxp.png)
|
In addition to Chip's nice answer, another way would be to do a "round trip" using the Mellin transform and its inverse:
```
InverseMellinTransform[
MellinTransform[π/2 MeijerG[{{}, {}}, {{0, 1/2, 1/2}, {0}}, r^4/256], r, t], t, r]
-2 π KelvinKei[0, r]
```
---
**Bonus:** the original integral can in fact be expressed as an appropriate Mellin convolution:
```
2 π MellinConvolve[BesselJ[0, k], k^2/(1 + k^4), k, r]
```
Unfortunately, this also returns the normal $G$ function (as with `Integrate[]`) instead of the more useful (at least in this case) extended $G$ function.
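As a further numeric sanity check (a pure-Python sketch, assuming only identities already stated in this thread): the angular integral of `Exp[I k r Cos[φ]]` over `φ ∈ [0, 2π]` equals `2 π BesselJ[0, k r]`, so at `r = 1` the double integral reduces to a one-dimensional Bessel integral, whose value should match `-2 π kei₀(1) ≈ 3.1101` (the number used as `y[1]` in the `DSolve` approach above).

```python
import math

# Numeric check: at r = 1 the original double integral equals
#   2*Pi * Integral_0^inf J0(k) * k/(k^4 + 1) dk,
# which per the -2*Pi*KelvinKei[0, r] identity should be ~ 3.1101.
M = 200
SINES = [math.sin(i * math.pi / M) for i in range(M + 1)]

def j0(x):
    # Bessel J0 from its integral representation (1/Pi) Int_0^Pi cos(x sin t) dt,
    # via the trapezoidal rule; the integrand is smooth, so this converges fast.
    s = 0.5 * (math.cos(x * SINES[0]) + math.cos(x * SINES[M]))
    s += sum(math.cos(x * t) for t in SINES[1:M])
    return s / M

def f(k):
    return j0(k) * k / (k**4 + 1)

n, kmax = 3000, 40.0   # the oscillatory tail beyond kmax is negligible
h = kmax / n
radial = h * (0.5 * (f(0.0) + f(kmax)) + sum(f(i * h) for i in range(1, n)))
value = 2 * math.pi * radial
print(value)  # ~ 3.110, close to -2*Pi*kei_0(1)
```

The agreement with the `DSolve` initial condition supports the `-2 π KelvinKei[0, r]` closed form.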
|
69,674,741 |
How do you completely delete an object via the Amazon S3 Console? "Completely delete" meaning the object is entirely gone and isn't visible when enabling "Show versions." I cannot delete the "Show versions" objects themselves as the button is greyed out when they're selected. I've also tried deleting them via CLI but have had no luck.
|
2021/10/22
|
['https://Stackoverflow.com/questions/69674741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4471173/']
|
If you have versioning enabled, deleting an object just adds another version of the object, the so-called "delete marker". If you want to remove all versions, including the delete marker, you'll need to enumerate and delete all versions of the object. For instance, with the Python SDK, you can enumerate and delete them with one line after some setup:
```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("-example-bucket-")  # bucket name (not the full ARN)
key = "example_object.txt"
versions = bucket.object_versions.filter(Prefix=key)
count = len(list(versions.all()))
versions.delete()  # removes every version, including any delete markers
print(f"Deleted {count} versions for '{key}'")
```
|
Looks like the IAM user accessing the bucket does not have the `s3:DeleteObjectVersion` permission. Once you add this permission to your policy, you should be able to delete the versions as well.
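For reference, a minimal policy statement granting that permission could look like the following (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```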
|
11,334,089 |
My CouchApp has the following folder structur, where files inside the app folder are compiled into the `_attachments` folder:
```
my_couchapp
├── _attachments/
│ ├── app.js
│ ├── app-tests.js
│ └── index.html
├── app/
│ └── app.js
├── Assetfile
└── views/
```
I want to exclude the file `Assetfile`, `_attachments/app-tests.js` and the folder `app`.
My current `.couchappignore` looks like this:
```
[
"app",
"Assetfile",
"_attachments/app-tests.js"
]
```
But this doesn't seem to work. All files beginning with `app` inside the `_attachments` folder are not pushed.
How do I define folders and specific files to be excluded when the CouchApp is pushed via `couchapp push`?
|
2012/07/04
|
['https://Stackoverflow.com/questions/11334089', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65542/']
|
After a little more experimentation I found a way: the `app` folder can be excluded by specifying `app$`, so the final `.couchappignore` now looks like this:
```
[
"app$",
"Assetfile",
"app-tests.js"
]
```
|
In case you arrived here looking for a way to ignore subfolders, you are just like me. Here's my problem:
```
my-couchapp/
├── node_modules/
│ ├── react.js
│ ├── url/
│ ├── browserify/
│ └── coffee-script/
├── app/
│ └── app.js
└── views/
```
I wanted to include `node_modules/react.js` and `node_modules/url/` (and all subfolders), but didn't want to include `node_modules/browserify/` and `node_modules/coffee-script/`.
I was trying
```
[
"node_modules/browserify$",
"node_modules/coffee-script$"
]
```
but it wasn't working.
```
[
"node_modules\/browserify",
"node_modules\/coffee-script"
]
```
also didn't work.
The only thing that worked was
```
[
"browserify",
"coffee-script"
]
```
I don't know why.
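One plausible explanation (an assumption, not verified against couchapp's source) is that ignore entries are treated as regular expressions tested against individual path components rather than the full relative path, so a pattern containing a slash can never match a single component. A small sketch of that hypothesis:

```python
import re

# Hypothesis (not verified against couchapp's source): each .couchappignore
# entry is a regex matched against individual path components, so patterns
# containing "/" never match, while a bare component name does.
def ignored(path, patterns):
    parts = path.split("/")
    return any(re.search(p, part) for p in patterns for part in parts)

print(ignored("node_modules/browserify/index.js", ["node_modules/browserify$"]))  # False
print(ignored("node_modules/browserify/index.js", ["browserify"]))                # True
print(ignored("node_modules/react.js", ["browserify", "coffee-script"]))          # False
```

This would also explain why `app$` in the accepted answer excludes the `app` folder without touching `_attachments/app.js`.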
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest version of Ubuntu. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but with no success; the URL **192.168.99.102:30080** is still unreachable.
Would you please help me? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
Even though you are exposing port 30080 via a NodePort, minikube will still not expose it directly, because it uses its own external port to listen for this service: minikube tunnels the service to expose it to the outside world. To find that exposed port:
```
minikube service $SERVICE_NAME
```
so, in your case
```
minikube service fleetman-webapp
```
|
There are a lot of different hypervisors which can work with `minikube`. Choosing one will be highly dependent on variables like operating system. Some of them are:
* Virtualbox
* Hyper-V
* VMware Fusion
* KVM2
* Hyperkit
* "Docker (`--vm-driver=none`)" (see the quotes)
There is official documentation talking about it: *[Kubernetes.io: Minikube: Specifying the vm driver](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)*
**Choosing `Hypervisor` will affect how the `minikube` will behave.**
Focusing on:
* Docker: `--vm-driver=none`
* Virtualbox: `--vm-driver=virtualbox`
Docker
------
Official documentation sums it up:
>
> Minikube also supports a `--vm-driver=none` option that runs the Kubernetes components on the host and not in a VM. Using this driver requires [Docker](https://www.docker.com/products/docker-desktop) and a Linux environment but not a hypervisor.
>
>
> -- *[Kubernetes.io: Install minikube: Install a hypervisor](https://kubernetes.io/docs/tasks/tools/install-minikube/#install-a-hypervisor)*
>
>
>
The output of the command `$ sudo minikube ip` will show the IP address of the **host machine**.
A `Service` of type `NodePort` will be available at `IP_ADDRESS_OF_HOST:NODEPORT_PORT`.
Following up with the command `$ kubectl get nodes -o wide`:
```sh
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
K8S Ready master 95s v1.17.3 192.168.0.114 <none> Ubuntu 18.04.4 LTS 5.3.0-28-generic docker://19.3.8
```
Please take a close look at:
```sh
INTERNAL-IP
192.168.0.114
```
**It's the same IP address as the host it's working on.** You can (for example) `curl` pods without any restrictions. Please also consider the caution from the official documentation:
>
> **Caution:** The `none` VM driver can result in security and data loss issues. Before using `--vm-driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information.
>
>
>
You can check what was exposed with command:
`$ sudo netstat -tulpn`
Virtualbox
----------
Creating a `minikube` instance with `--vm-driver=virtualbox` will create a virtual machine with Virtualbox as host.
Virtual machine created with this kind of `--vm-driver` will have 2 network interfaces provided below:
* NAT
* **Host-only adapter**
What is important is that your minikube instance will be accessible by **Host-only adapter**.
>
> **Host-only networking.** This can be used to create a network containing the host and a set of virtual machines, without the need for the host's physical network interface. Instead, a virtual network interface, similar to a loopback interface, is created on the host, providing connectivity among virtual machines and the host.
>
>
> -- *[Virtualbox.org: Virtual networking](https://www.virtualbox.org/manual/ch06.html)*
>
>
>
For example:
* minikube host-only adapter will have an address: `192.168.99.103`
* Your host-only adapter will have an address: `192.168.99.1`
They must be different!
**If you are having issues with connecting to this adapter please check:**
* If minikube's host-only adapter address is responding to `ping` when `minikube start` completed successfully.
* **Your** host-only adapter is present in your network configuration by issuing either:
+ `ip a`
+ `ifconfig`
* Your host-only adapter address is in range of your minikube instance (subnet)
In my experience, rebooting or recreating this adapter has always fixed things when something wasn't right.
The output of the command `$ sudo minikube ip` will show the IP address of the **Host-only adapter**.
Following up with the command `$ kubectl get nodes -o wide`:
```sh
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m01 Ready master 29m v1.17.3 192.168.99.103 <none> Buildroot 2019.02.9 4.19.94 docker://19.3.6
```
Please take a close look once more at `INTERNAL-IP` and the IP address associated with it.
Service object type of `NodePort` will be available with:
`IP_ADDRESS_OF_HOST_ONLY_ADAPTER:NODEPORT_PORT`.
I recreated your `Deployment` and `Service` attached to it and it worked in both `--vm-driver=none` and `--vm-driver=virtualbox` cases.
Please let me know if you have any questions in this topic.
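To quickly test from the host whether the NodePort is reachable at all (independent of the browser), a small probe like the following can help; the IP and port below are simply the ones from the question, so substitute whatever `minikube ip` prints for you:

```python
import socket

# Probe whether a TCP port is reachable from the host machine.
# Host/port values are the ones from the question; replace with your own.
def port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("192.168.99.102", 30080))
```

If this prints `False` while `minikube service fleetman-webapp --url` works, the problem is the adapter/driver routing described above rather than the Service definition.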
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest version of Ubuntu. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but with no success; the URL **192.168.99.102:30080** is still unreachable.
Would you please help me? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
There are a lot of different hypervisors which can work with `minikube`. Choosing one will be highly dependent on variables like operating system. Some of them are:
* Virtualbox
* Hyper-V
* VMware Fusion
* KVM2
* Hyperkit
* "Docker (`--vm-driver=none`)" (see the quotes)
There is official documentation talking about it: *[Kubernetes.io: Minikube: Specifying the vm driver](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)*
**Choosing `Hypervisor` will affect how the `minikube` will behave.**
Focusing on:
* Docker: `--vm-driver=none`
* Virtualbox: `--vm-driver=virtualbox`
Docker
------
Official documentation sums it up:
>
> Minikube also supports a `--vm-driver=none` option that runs the Kubernetes components on the host and not in a VM. Using this driver requires [Docker](https://www.docker.com/products/docker-desktop) and a Linux environment but not a hypervisor.
>
>
> -- *[Kubernetes.io: Install minikube: Install a hypervisor](https://kubernetes.io/docs/tasks/tools/install-minikube/#install-a-hypervisor)*
>
>
>
The output of the command `$ sudo minikube ip` will show the IP address of the **host machine**.
A `Service` of type `NodePort` will be available at `IP_ADDRESS_OF_HOST:NODEPORT_PORT`.
Following up with the command `$ kubectl get nodes -o wide`:
```sh
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
K8S Ready master 95s v1.17.3 192.168.0.114 <none> Ubuntu 18.04.4 LTS 5.3.0-28-generic docker://19.3.8
```
Please take a close look at:
```sh
INTERNAL-IP
192.168.0.114
```
**It's the same IP address as the host it's working on.** You can (for example) `curl` pods without any restrictions. Please also consider the caution from the official documentation:
>
> **Caution:** The `none` VM driver can result in security and data loss issues. Before using `--vm-driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information.
>
>
>
You can check what was exposed with command:
`$ sudo netstat -tulpn`
Virtualbox
----------
Creating a `minikube` instance with `--vm-driver=virtualbox` will create a virtual machine with Virtualbox as host.
Virtual machine created with this kind of `--vm-driver` will have 2 network interfaces provided below:
* NAT
* **Host-only adapter**
What is important is that your minikube instance will be accessible by **Host-only adapter**.
>
> **Host-only networking.** This can be used to create a network containing the host and a set of virtual machines, without the need for the host's physical network interface. Instead, a virtual network interface, similar to a loopback interface, is created on the host, providing connectivity among virtual machines and the host.
>
>
> -- *[Virtualbox.org: Virtual networking](https://www.virtualbox.org/manual/ch06.html)*
>
>
>
For example:
* minikube host-only adapter will have an address: `192.168.99.103`
* Your host-only adapter will have an address: `192.168.99.1`
They must be different!
**If you are having issues with connecting to this adapter please check:**
* If minikube's host-only adapter address is responding to `ping` when `minikube start` completed successfully.
* **Your** host-only adapter is present in your network configuration by issuing either:
+ `ip a`
+ `ifconfig`
* Your host-only adapter address is in range of your minikube instance (subnet)
In my experience, rebooting or recreating this adapter has always fixed things when something wasn't right.
The output of the command `$ sudo minikube ip` will show the IP address of the **Host-only adapter**.
Following up with the command `$ kubectl get nodes -o wide`:
```sh
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m01 Ready master 29m v1.17.3 192.168.99.103 <none> Buildroot 2019.02.9 4.19.94 docker://19.3.6
```
Please take a close look once more at `INTERNAL-IP` and the IP address associated with it.
Service object type of `NodePort` will be available with:
`IP_ADDRESS_OF_HOST_ONLY_ADAPTER:NODEPORT_PORT`.
I recreated your `Deployment` and `Service` attached to it and it worked in both `--vm-driver=none` and `--vm-driver=virtualbox` cases.
Please let me know if you have any questions in this topic.
|
For this specific (and really great) course about [Kubernetes on Udemy from Richard Chesterwood](https://www.udemy.com/course/kubernetes-microservices/), the following solution should work out of the box on Windows: just start `minikube` with the `hyperv` driver, and it will automatically map all the expected ports onto your host machine, as Dawid Kruk explained in detail. Therefore all you need to start minikube "correctly" is the following command:
```
minikube start --driver=hyperv
```
---
Be careful when specifying an exact amount of memory for this minikube instance. In my experience Hyper-V is a bit sensitive about how much memory you give it, which can result in errors:
```
minikube start --driver=hyperv --memory=8192
...
Not enough memory in the system to start the virtual machine minikube.
Could not initialize memory: Not enough memory resources are available to complete this operation. (0x8007000E).
'minikube' failed to start. (Virtual machine ID D4BC7B61-4E4D-4079-94DE-...)
Not enough memory in the system to start the virtual machine minikube with ram size 8192 megabytes. (Virtual machine ID ...)
```
Therefore just use the plain command given above, without a memory flag, and Hyper-V will figure out how much memory it really needs on its own.
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest version of Ubuntu. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but with no success; the URL **192.168.99.102:30080** is still unreachable.
Would you please help me? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
There are a lot of different hypervisors which can work with `minikube`. Choosing one will be highly dependent on variables like operating system. Some of them are:
* Virtualbox
* Hyper-V
* VMware Fusion
* KVM2
* Hyperkit
* "Docker (`--vm-driver=none`)" (see the quotes)
There is official documentation talking about it: *[Kubernetes.io: Minikube: Specifying the vm driver](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)*
**Choosing `Hypervisor` will affect how the `minikube` will behave.**
Focusing on:
* Docker: `--vm-driver=none`
* Virtualbox: `--vm-driver=virtualbox`
Docker
------
Official documentation sums it up:
>
> Minikube also supports a `--vm-driver=none` option that runs the Kubernetes components on the host and not in a VM. Using this driver requires [Docker](https://www.docker.com/products/docker-desktop) and a Linux environment but not a hypervisor.
>
>
> -- *[Kubernetes.io: Install minikube: Install a hypervisor](https://kubernetes.io/docs/tasks/tools/install-minikube/#install-a-hypervisor)*
>
>
>
The output of the command `$ sudo minikube ip` will show the IP address of the **host machine**.
A `Service` of type `NodePort` will be available at `IP_ADDRESS_OF_HOST:NODEPORT_PORT`.
Following up with the command `$ kubectl get nodes -o wide`:
```sh
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
K8S Ready master 95s v1.17.3 192.168.0.114 <none> Ubuntu 18.04.4 LTS 5.3.0-28-generic docker://19.3.8
```
Please take a close look at:
```sh
INTERNAL-IP
192.168.0.114
```
**It's the same IP address as the host it's working on.** You can (for example) `curl` pods without any restrictions. Please also consider the caution from the official documentation:
>
> **Caution:** The `none` VM driver can result in security and data loss issues. Before using `--vm-driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information.
>
>
>
You can check what was exposed with command:
`$ sudo netstat -tulpn`
Virtualbox
----------
Creating a `minikube` instance with `--vm-driver=virtualbox` will create a virtual machine with Virtualbox as host.
Virtual machine created with this kind of `--vm-driver` will have 2 network interfaces provided below:
* NAT
* **Host-only adapter**
What is important is that your minikube instance will be accessible by **Host-only adapter**.
>
> **Host-only networking.** This can be used to create a network containing the host and a set of virtual machines, without the need for the host's physical network interface. Instead, a virtual network interface, similar to a loopback interface, is created on the host, providing connectivity among virtual machines and the host.
>
>
> -- *[Virtualbox.org: Virtual networking](https://www.virtualbox.org/manual/ch06.html)*
>
>
>
For example:
* minikube host-only adapter will have an address: `192.168.99.103`
* Your host-only adapter will have an address: `192.168.99.1`
They must be different!
**If you are having issues with connecting to this adapter please check:**
* If minikube's host-only adapter address is responding to `ping` when `minikube start` completed successfully.
* **Your** host-only adapter is present in your network configuration by issuing either:
+ `ip a`
+ `ifconfig`
* Your host-only adapter address is in range of your minikube instance (subnet)
From my experience reboot/recreation of this adapter worked all the time if something wasn't right.
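The subnet check from the list above can be sketched as a quick shell test (the addresses are the example values used in this answer; substitute the output of `ip a` and `minikube ip` on your machine):

```shell
# Compare the first three octets (the usual /24 host-only network) of the
# host-only adapter address and the minikube address.
host_adapter=192.168.99.1     # your host-only adapter (from `ip a`)
minikube_ip=192.168.99.103    # from `minikube ip`
if [ "${host_adapter%.*}" = "${minikube_ip%.*}" ]; then
  echo "same /24 subnet"
else
  echo "different subnets - fix the host-only adapter"
fi
```

If the two prefixes differ, the host cannot reach the minikube instance and the host-only adapter needs to be recreated.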
The output of the command `$ sudo minikube ip` will show the IP address of the **Host-only adapter**.
Following with command: `$ kubectl get nodes -o wide`:
```sh
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m01 Ready master 29m v1.17.3 192.168.99.103 <none> Buildroot 2019.02.9 4.19.94 docker://19.3.6
```
Please take a specific look once more on `INTERNAL-IP` and `ip address` associated with it.
Service object type of `NodePort` will be available with:
`IP_ADDRESS_OF_HOST_ONLY_ADAPTER:NODEPORT_PORT`.
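Putting the two values together, the address to open in the browser can be built like this (a small sketch; the values are assumed from this answer and from the Service spec in the question):

```shell
MINIKUBE_IP=192.168.99.103   # host-only adapter address from `minikube ip`
NODE_PORT=30080              # nodePort from the Service spec
url="http://${MINIKUBE_IP}:${NODE_PORT}"
echo "$url"
```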
I recreated your `Deployment` and `Service` attached to it and it worked in both `--vm-driver=none` and `--vm-driver=virtualbox` cases.
Please let me know if you have any questions in this topic.
|
I have had the same issue and have been trying to solve it for the last 2 days. I have tried to install the ingress addon:
```sh
minikube addons enable ingress
```
and also tried to run :
```sh
minikube tunnel
```
I also looked for a way to allow the host machine to access the container IP address, but apparently couldn't find one.
If you run minikube on docker:
```sh
minikube start --driver=docker
```
you won't be able to access the **minikube IP** from your **host machine**, since the minikube container's IP address would be accessible through `DockerDesktopVM` but not from your host machine.
You could run **minikube** on another driver such as `VirtualBox` or `Hyperv`, and that might help.
```sh
minikube start --driver=hyperv
minikube start --driver=virtualbox
```
Read more about the [minikube drivers](https://minikube.sigs.k8s.io/docs/drivers/)
>
> In fact, that's really annoying if you don't have enough resources on your computer to run both the Docker desktop VM and the minikube VM at the same time and will eventually slow down your computer.
>
>
>
To solve that `docker-desktop UI` for `Mac and Windows` provides an easier alternative compared to minikube, which you could simply [activate the Kubernetes feature on your docker-desktop UI](https://docs.docker.com/desktop/kubernetes/):

once it is setup you can right click on the docker desktop icon > Kubernetes

To verify now that your deployment/service works properly:
```sh
kubectl apply -f /file.yaml
```
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest Ubuntu version. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but no success: the url **192.168.99.102:30080** is still unreachable.
Would you help me please? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
There are a lot of different hypervisors which can work with `minikube`. Choosing one will be highly dependent on variables like operating system. Some of them are:
* Virtualbox
* Hyper-V
* VMware Fusion
* KVM2
* Hyperkit
* "Docker (`--vm-driver=none`)" (see the quotes)
There is official documentation talking about it: *[Kubernetes.io: Minikube: Specifying the vm driver](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)*
**Choosing `Hypervisor` will affect how the `minikube` will behave.**
Focusing on:
* Docker: `--vm-driver=none`
* Virtualbox: `--vm-driver=virtualbox`
Docker
------
Official documentation sums it up:
>
> Minikube also supports a `--vm-driver=none` option that runs the Kubernetes components on the host and not in a VM. Using this driver requires [Docker](https://www.docker.com/products/docker-desktop) and a Linux environment but not a hypervisor.
>
>
> -- *[Kubernetes.io: Install minikube: Install a hypervisor](https://kubernetes.io/docs/tasks/tools/install-minikube/#install-a-hypervisor)*
>
>
>
The output of the command `$ sudo minikube ip` will show the IP address of the **host machine**.
Service object type of `NodePort` will be available with `IP_ADDRESS_OF_HOST:NODEPORT_PORT`.
Following with command: `$ kubectl get nodes -o wide`:
```sh
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
K8S Ready master 95s v1.17.3 192.168.0.114 <none> Ubuntu 18.04.4 LTS 5.3.0-28-generic docker://19.3.8
```
Please take a specific look on:
```sh
INTERNAL-IP
192.168.0.114
```
**It's the same IP address as the host it's running on.** You can (for example) `curl` pods without any restrictions. Please consider reading the documentation in the citation below:
>
> **Caution:** The `none` VM driver can result in security and data loss issues. Before using `--vm-driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information.
>
>
>
You can check what was exposed with command:
`$ sudo netstat -tulpn`
Virtualbox
----------
Creating a `minikube` instance with `--vm-driver=virtualbox` will create a virtual machine with Virtualbox as the hypervisor.
Virtual machine created with this kind of `--vm-driver` will have 2 network interfaces provided below:
* NAT
* **Host-only adapter**
What is important is that your minikube instance will be accessible by **Host-only adapter**.
>
> **Host-only networking.** This can be used to create a network containing the host and a set of virtual machines, without the need for the host's physical network interface. Instead, a virtual network interface, similar to a loopback interface, is created on the host, providing connectivity among virtual machines and the host.
>
>
> -- *[Virtualbox.org: Virtual networking](https://www.virtualbox.org/manual/ch06.html)*
>
>
>
For example:
* minikube host-only adapter will have an address: `192.168.99.103`
* Your host-only adapter will have an address: `192.168.99.1`
They must be different!
**If you are having issues with connecting to this adapter please check:**
* If minikube's host-only adapter address is responding to `ping` when `minikube start` completed successfully.
* **Your** host-only adapter is present in your network configuration by issuing either:
+ `ip a`
+ `ifconfig`
* Your host-only adapter address is in range of your minikube instance (subnet)
From my experience reboot/recreation of this adapter worked all the time if something wasn't right.
The output of the command `$ sudo minikube ip` will show the IP address of the **Host-only adapter**.
Following with command: `$ kubectl get nodes -o wide`:
```sh
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m01 Ready master 29m v1.17.3 192.168.99.103 <none> Buildroot 2019.02.9 4.19.94 docker://19.3.6
```
Please take a specific look once more on `INTERNAL-IP` and `ip address` associated with it.
Service object type of `NodePort` will be available with:
`IP_ADDRESS_OF_HOST_ONLY_ADAPTER:NODEPORT_PORT`.
I recreated your `Deployment` and `Service` attached to it and it worked in both `--vm-driver=none` and `--vm-driver=virtualbox` cases.
Please let me know if you have any questions in this topic.
|
If you are running minikube on Windows, then minikube must be run from an Administrator command prompt window.
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest Ubuntu version. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but no success: the url **192.168.99.102:30080** is still unreachable.
Would you help me please? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
Even though you are exposing port 30080 via NodePort in minikube, minikube will still not expose it directly, because it uses its own external port to listen to this service. Minikube tunnels the service to expose it to the outer world. To find that exposed port:
```
minikube service $SERVICE_NAME
```
so, in your case
```
minikube service fleetman-webapp
```
|
For this specific (and really great) course about [Kubernetes on Udemy from Richard Chesterwood](https://www.udemy.com/course/kubernetes-microservices/), the following solution should work out of the box on Windows: just start `minikube` with the `hyperv` driver, and it will automatically map all the ports you are expecting onto your host machine, as explained in detail by Dawid Kruk. Therefore all you need to start minikube "correctly" is the following command:
```
minikube start --driver=hyperv
```
---
Be careful when specifying the exact amount of memory you give to this minikube instance. In my experience Hyper-V is a bit sensitive about how much memory you give it, which can result in errors:
```
minikube start --driver=hyperv --memory=8192
...
Not enough memory in the system to start the virtual machine minikube.
Could not initialize memory: Not enough memory resources are available to complete this operation. (0x8007000E).
'minikube' failed to start. (Virtual machine ID D4BC7B61-4E4D-4079-94DE-...)
Not enough memory in the system to start the virtual machine minikube with ram size 8192 megabytes. (Virtual machine ID ...)
```
Therefore just use the unspecific command given above and Hyper-V will figure out how much memory it really needs on its own.
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest Ubuntu version. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but no success: the url **192.168.99.102:30080** is still unreachable.
Would you help me please? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
Even though you are exposing port 30080 via NodePort in minikube, minikube will still not expose it directly, because it uses its own external port to listen to this service. Minikube tunnels the service to expose it to the outer world. To find that exposed port:
```
minikube service $SERVICE_NAME
```
so, in your case
```
minikube service fleetman-webapp
```
|
I have had the same issue and have been trying to solve it for the last 2 days. I have tried to install the ingress addon:
```sh
minikube addons enable ingress
```
and also tried to run :
```sh
minikube tunnel
```
I also looked for a way to allow the host machine to access the container IP address, but apparently couldn't find one.
If you run minikube on docker:
```sh
minikube start --driver=docker
```
you won't be able to access the **minikube IP** from your **host machine**, since the minikube container's IP address would be accessible through `DockerDesktopVM` but not from your host machine.
You could run **minikube** on another driver such as `VirtualBox` or `Hyperv`, and that might help.
```sh
minikube start --driver=hyperv
minikube start --driver=virtualbox
```
Read more about the [minikube drivers](https://minikube.sigs.k8s.io/docs/drivers/)
>
> In fact, that's really annoying if you don't have enough resources on your computer to run both the Docker desktop VM and the minikube VM at the same time and will eventually slow down your computer.
>
>
>
To solve that `docker-desktop UI` for `Mac and Windows` provides an easier alternative compared to minikube, which you could simply [activate the Kubernetes feature on your docker-desktop UI](https://docs.docker.com/desktop/kubernetes/):

once it is setup you can right click on the docker desktop icon > Kubernetes

To verify now that your deployment/service works properly:
```sh
kubectl apply -f /file.yaml
```
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest Ubuntu version. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but no success: the url **192.168.99.102:30080** is still unreachable.
Would you help me please? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
Even though you are exposing port 30080 via NodePort in minikube, minikube will still not expose it directly, because it uses its own external port to listen to this service. Minikube tunnels the service to expose it to the outer world. To find that exposed port:
```
minikube service $SERVICE_NAME
```
so, in your case
```
minikube service fleetman-webapp
```
|
If you are running minikube on Windows, then minikube must be run from an Administrator command prompt window.
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest Ubuntu version. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but no success: the url **192.168.99.102:30080** is still unreachable.
Would you help me please? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
For this specific (and really great) course about [Kubernetes on Udemy from Richard Chesterwood](https://www.udemy.com/course/kubernetes-microservices/), the following solution should work out of the box on Windows: just start `minikube` with the `hyperv` driver, and it will automatically map all the ports you are expecting onto your host machine, as explained in detail by Dawid Kruk. Therefore all you need to start minikube "correctly" is the following command:
```
minikube start --driver=hyperv
```
---
Be careful when specifying the exact amount of memory you give to this minikube instance. In my experience Hyper-V is a bit sensitive about how much memory you give it, which can result in errors:
```
minikube start --driver=hyperv --memory=8192
...
Not enough memory in the system to start the virtual machine minikube.
Could not initialize memory: Not enough memory resources are available to complete this operation. (0x8007000E).
'minikube' failed to start. (Virtual machine ID D4BC7B61-4E4D-4079-94DE-...)
Not enough memory in the system to start the virtual machine minikube with ram size 8192 megabytes. (Virtual machine ID ...)
```
Therefore just use the unspecific command given above and Hyper-V will figure out how much memory it really needs on its own.
|
If you are running minikube on Windows, then minikube must be run from an Administrator command prompt window.
|
60,710,171 |
I have created one service called **fleetman-webapp**:
```
apiVersion: v1
kind: Service
metadata:
name: fleetman-webapp
spec:
selector:
app: webapp
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
```
also, a pod named **webapp**:
```
apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: richardchesterwood/k8s-fleetman-webapp-angular:release0
```
I have checked the **minikube ip**:
***192.168.99.102***
But when I type in the browser **192.168.99.102:30080**, the webapp is not reachable:[](https://i.stack.imgur.com/v6GiX.png)
Please note that I use the latest Ubuntu version. I have furthermore verified whether proxies and firewalls are active:
**cat /etc/environment**:
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
```
**iptables -L**:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
```
I have also disabled **ufw** in Ubuntu, but no success: the url **192.168.99.102:30080** is still unreachable.
Would you help me please? Thanks in advance for your answer.
|
2020/03/16
|
['https://Stackoverflow.com/questions/60710171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8384029/']
|
I have had the same issue and have been trying to solve it for the last 2 days. I have tried to install the ingress addon:
```sh
minikube addons enable ingress
```
and also tried to run :
```sh
minikube tunnel
```
I also looked for a way to allow the host machine to access the container IP address, but apparently couldn't find one.
If you run minikube on docker:
```sh
minikube start --driver=docker
```
you won't be able to access the **minikube IP** from your **host machine**, since the minikube container's IP address would be accessible through `DockerDesktopVM` but not from your host machine.
You could run **minikube** on another driver such as `VirtualBox` or `Hyperv`, and that might help.
```sh
minikube start --driver=hyperv
minikube start --driver=virtualbox
```
Read more about the [minikube drivers](https://minikube.sigs.k8s.io/docs/drivers/)
>
> In fact, that's really annoying if you don't have enough resources on your computer to run both the Docker desktop VM and the minikube VM at the same time and will eventually slow down your computer.
>
>
>
To solve that `docker-desktop UI` for `Mac and Windows` provides an easier alternative compared to minikube, which you could simply [activate the Kubernetes feature on your docker-desktop UI](https://docs.docker.com/desktop/kubernetes/):

once it is setup you can right click on the docker desktop icon > Kubernetes

To verify now that your deployment/service works properly:
```sh
kubectl apply -f /file.yaml
```
|
If you are running minikube on Windows, then minikube must be run from an Administrator command prompt window.
|
8,737,854 |
I have a string in php formatted like this:
```
http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb
```
where aaa... and bbb.... represent random characters and are random in length.
I would like to parse the string so that I am left with this:
```
http://bbbbbbbbbbbbbbb
```
|
2012/01/05
|
['https://Stackoverflow.com/questions/8737854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657818/']
|
In this case I wouldn't recommend regex but a simple substring or explode
```
$data = "http://aaaaaaaaaa/*http://bbbbbbbbbbb";
$parts = explode('*', $data);
echo $parts[1];
```
fin :)
|
You don't need regular expressions at all in this case:
```
$str = 'http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb';
echo substr($str, strpos($str, 'http://', 1));
```
|
8,737,854 |
I have a string in php formatted like this:
```
http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb
```
where aaa... and bbb.... represent random characters and are random in length.
I would like to parse the string so that I am left with this:
```
http://bbbbbbbbbbbbbbb
```
|
2012/01/05
|
['https://Stackoverflow.com/questions/8737854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657818/']
|
You don't need regular expressions at all in this case:
```
$str = 'http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb';
echo substr($str, strpos($str, 'http://', 1));
```
|
Here is a clean solution: grab everything after the last occurrence of "http://".
```
$start = strrpos($input, 'http://');
$output = substr($input, $start);
```
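The same last-occurrence idea can be verified from the command line with shell parameter expansion (a quick sketch, not part of the original answer: `##*http://` drops the longest prefix ending in `http://`):

```shell
s='http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb'
result="http://${s##*http://}"   # keep everything after the last "http://"
echo "$result"
```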
|
8,737,854 |
I have a string in php formatted like this:
```
http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb
```
where aaa... and bbb.... represent random characters and are random in length.
I would like to parse the string so that I am left with this:
```
http://bbbbbbbbbbbbbbb
```
|
2012/01/05
|
['https://Stackoverflow.com/questions/8737854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657818/']
|
In this case I wouldn't recommend regex but a simple substring or explode
```
$data = "http://aaaaaaaaaa/*http://bbbbbbbbbbb";
$parts = explode('*', $data);
echo $parts[1];
```
fin :)
|
Here is the regular expression way:
```
$str = 'http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb';
$url = preg_replace("/^.*(http:\/\/.*[^(http:\/\/)+])$/", "$1", $str);
echo $url;
```
|
8,737,854 |
I have a string in php formatted like this:
```
http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb
```
where aaa... and bbb.... represent random characters and are random in length.
I would like to parse the string so that I am left with this:
```
http://bbbbbbbbbbbbbbb
```
|
2012/01/05
|
['https://Stackoverflow.com/questions/8737854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657818/']
|
In this case I wouldn't recommend regex but a simple substring or explode
```
$data = "http://aaaaaaaaaa/*http://bbbbbbbbbbb";
$parts = explode('*', $data);
echo $parts[1];
```
fin :)
|
Hi, this would help you to get the address:
```
$str = 'http://www.example.com/*http://www.another.org/';
$pattern = '/^http:\/\/[\.\w\-]+\/\*(http:\/\/.+)$/';
//$result = preg_replace($pattern, '$1', $str);
$found = preg_match_all($pattern, $str, $result);
$url = (!$found==0) ? $result[1][0] : '';
echo $str . '<br />' . $url;
```
|
8,737,854 |
I have a string in php formatted like this:
```
http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb
```
where aaa... and bbb.... represent random characters and are random in length.
I would like to parse the string so that I am left with this:
```
http://bbbbbbbbbbbbbbb
```
|
2012/01/05
|
['https://Stackoverflow.com/questions/8737854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657818/']
|
In this case I wouldn't recommend regex but a simple substring or explode
```
$data = "http://aaaaaaaaaa/*http://bbbbbbbbbbb";
$parts = explode('*', $data);
echo $parts[1];
```
fin :)
|
Here is a clean solution: grab everything after the last occurrence of "http://".
```
$start = strrpos($input, 'http://');
$output = substr($input, $start);
```
|
8,737,854 |
I have a string in php formatted like this:
```
http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb
```
where aaa... and bbb.... represent random characters and are random in length.
I would like to parse the string so that I am left with this:
```
http://bbbbbbbbbbbbbbb
```
|
2012/01/05
|
['https://Stackoverflow.com/questions/8737854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/657818/']
|
Here is the regular expression way:
```
$str = 'http://aaaaaaaaaa/*http://bbbbbbbbbbbbbbb';
$url = preg_replace("/^.*(http:\/\/.*[^(http:\/\/)+])$/", "$1", $str);
echo $url;
```
|
Here is a clean solution: grab everything after the last occurrence of "http://".
```
$start = strrpos($input, 'http://');
$output = substr($input, $start);
```
|