content
stringlengths 228
999k
| pred_label
stringclasses 1
value | pred_score
float64 0.5
1
|
---|---|---|
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. Join them; it only takes a minute:
Sign up
Here's how it works:
1. Anybody can ask a question
2. Anybody can answer
3. The best answers are voted up and rise to the top
show that if $a,b,c \in \mathbb{R}^+$ different from zero, then:
$$(a^2+b^2+c^2)\cdot\left(\frac{1}{a^2}+\frac{1}{b^2}+\frac{1}{c^2}\right)\leq(a+b+c)\cdot\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)$$ I had no success in my attempts
share|cite|improve this question
2
Just saying, you have $x,y,z$ and $a,b,c$ ;) – k1next Dec 18 '12 at 20:10
1
@macydanim Thank you. – Henfe Dec 18 '12 at 20:12
up vote 3 down vote accepted
$a=2012,b=c=1$.
Any other large number should work instead of $2012$.
If the inequality is reversed, just multiply and prove the following simple Lemma:
Lemma $f(x) =x+ \frac{1}{x}$ is increasing on $[1, \infty)$. In particular, for all $x\geq 1$ we have
$$f(x^2) \geq f(x) \,.$$
P.S. Maybe even simpler
$$\frac{a}{b}+\frac{b}{a} \geq 2 \Rightarrow \frac{a^2}{b^2}+\frac{b^2}{a^2}-\frac{a}{b}+\frac{b}{a}=(\frac{a}{b}+\frac{b}{a})^2-2-(\frac{a}{b}+\frac{b}{a}) \geq 2(\frac{a}{b}+\frac{b}{a} )-2-(\frac{a}{b}+\frac{b}{a}) \geq 0 $$
share|cite|improve this answer
The direction of the inequality is flipped. Here is a proof of the correct direction. Upon expansion we see it suffices to prove: $$\frac{a^2}{b^2} + \frac{a^2}{c^2} + \frac{b^2}{a^2} + \frac{b^2}{c^2} + \frac{c^2}{a^2} + \frac{c^2}{b^2} \ge \frac ab + \frac ac + \frac ba + \frac bc + \frac ca + \frac ca$$
Now note that for any positive real number $x$ we have $x^2 + \frac 1{x^2} \ge x + \frac 1x$ because the the function $f(x) = x + 1/x$ is monotonically increasing in $[1,\infty]$ and monotonically decreasing in $(0,1]$. Thus the result follows by applying this inequality on each of the terms of the above expression.
share|cite|improve this answer
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 1 |
CSS3 box-shadow实现纸张的曲线投影效果实例页面
展示
回到相关文章 >>
代码
CSS代码:
.curved_box {
display: inline-block;
*display: inline;
width: 200px;
height: 248px;
margin: 20px;
background-color: #fff;
border: 1px solid #eee;
-webkit-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.27), 0 0 60px rgba(0, 0, 0, 0.06) inset;
-moz-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.27), 0 0 40px rgba(0, 0, 0, 0.06) inset;
box-shadow: 0 1px 4px rgba(0, 0, 0, 0.27), 0 0 40px rgba(0, 0, 0, 0.06) inset;
position: relative;
*zoom: 1;
}
.curved_box:before {
-webkit-transform: skew(-15deg) rotate(-6deg);
-moz-transform: skew(-15deg) rotate(-6deg);
transform: skew(-15deg) rotate(-6deg);
left: 15px;
}
.curved_box:after {
-webkit-transform: skew(15deg) rotate(6deg);
-moz-transform: skew(15deg) rotate(6deg);
transform: skew(15deg) rotate(6deg);
right: 15px;
}
.curved_box:before, .curved_box:after {
width: 70%;
height: 55%;
content: ' ';
-webkit-box-shadow: 0 8px 16px rgba(0, 0, 0, 0.3);
-moz-box-shadow: 0 8px 16px rgba(0, 0, 0, 0.3);
box-shadow: 0 8px 16px rgba(0, 0, 0, 0.3);
position: absolute;
bottom: 10px;
z-index: -1;
}
HTML代码:
<div class="curved_box"></div>
<div class="curved_box"></div>
<div class="curved_box"></div>
<div class="curved_box"></div>
<div class="curved_box"></div>
<div class="curved_box"></div>
|
__label__pos
| 0.870541 |
Menu
SEC. 1 B : There are 10 questions in this test. Give your self 10 minutes to answer them all.
1. Alan spent £1.50 on two bottle of water estimate the water
2. John distribute £7 equally among two children how much doe each child get?
3. Average the following number to make the target even number
4. The total sum of two numbers is 17. 0ne of the numbers is 9. What is the number?
5. Alan spends £ 1 at a shop. John spends £ 2.32. How must more money did john spend than Alan?
6. Make each of the following number 10 times greater?a) 41b) 2.7
7. John pays to his food with a £ 5 note. He receives £ 2045 change. How much change did john did food cost?
8. Joe cuts of 75cm of a3 ½ m rope. What length of rope remains?
9. Round up 1420 to the nearest thousand?
10. Round up 635cm to the nearest 1/2m
11. Round up 3.5km to the nearest whole km
12. Round up 8km 905m to the nearest km
|
__label__pos
| 1 |
Skip to content
RateLimitPolicy API v2
Summary
Proposal of new API for the Kuadrant's RateLimitPolicy (RLP) CRD, for improved UX.
Motivation
The RateLimitPolicy API (v1beta1), particularly its RateLimit type used in ratelimitpolicy.spec.rateLimits, designed in part to fit the underlying implementation based on the Envoy Rate limit filter, has been proven to be complex, as well as somewhat limiting for the extension of the API for other platforms and/or for supporting use cases of not contemplated in the original design.
Users of the RateLimitPolicy will immediately recognize elements of Envoy's Rate limit API in the definitions of the RateLimit type, with almost 1:1 correspondence between the Configuration type and its counterpart in the Envoy configuration. Although compatibility between those continue to be desired, leaking such implementation details to the level of the API can be avoided to provide a better abstraction for activators ("matchers") and payload ("descriptors"), stated by users in a seamless way.
Furthermore, the Limit type – used as well in the RLP's RateLimit type – implies presently a logical relationship between its inner concepts – i.e. conditions and variables on one side, and limits themselves on the other – that otherwise could be shaped in a different manner, to provide clearer understanding of the meaning of these concepts by the user and avoid repetition. I.e., one limit definition contains multiple rate limits, and not the other way around.
Goals
1. Decouple the API from the underlying implementation - i.e. provide a more generic and more user-friendly abstraction
2. Prepare the API for upcoming changes in the Gateway API Policy Attachment specification
3. Improve consistency of the API with respect to Kuadrant's AuthPolicy CRD - i.e. same language, similar UX
Current WIP to consider
1. Policy attachment update (kubernetes-sigs/gateway-api#1565)
2. No merging of policies (kuadrant/architecture#10)
3. A single Policy scoped to HTTPRoutes and HTTPRouteRule (kuadrant/architecture#4) - future
4. Implement skip_if_absent for the RequestHeaders action (kuadrant/wasm-shim#29)
Highlights
• spec.rateLimits[] replaced with spec.limits{<limit-name>: <limit-definition>}
• spec.rateLimits.limits replaced with spec.limits.<limit-name>.rates
• spec.rateLimits.limits.maxValue replaced with spec.limits.<limit-name>.rates.limit
• spec.rateLimits.limits.seconds replaced with spec.limits.<limit-name>.rates.duration + spec.limits.<limit-name>.rates.unit
• spec.rateLimits.limits.conditions replaced with spec.limits.<limit-name>.when, structured field based on well-known selectors, mainly for expressing conditions not related to the HTTP route (although not exclusively)
• spec.rateLimits.limits.variables replaced with spec.limits.<limit-name>.counters, based on well-known selectors
• spec.rateLimits.rules replaced with spec.limits.<limit-name>.routeSelectors, for selecting (or "sub-targeting") HTTPRouteRules that trigger the limit
• new matcher spec.limits.<limit-name>.routeSelectors.hostnames[]
• spec.rateLimits.configurations removed – descriptor actions configuration (previously spec.rateLimits.configurations.actions) generated from spec.limits.<limit-name>.when.selectorspec.limits.<limit-name>.counters and unique identifier of the limit (associated with spec.limits.<limit-name>.routeSelectors)
• Limitador conditions composed of "soft" spec.limits.<limit-name>.when conditions + a "hard" condition that binds the limit to its trigger HTTPRouteRules
For detailed differences between current and new RLP API, see Comparison to current RateLimitPolicy.
Guide-level explanation
Examples of RLPs based on the new API
Given the following network resources:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
name: istio-ingressgateway
namespace: istio-system
spec:
gatewayClassName: istio
listeners:
- hostname:
- "*.acme.com"
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
name: toystore
namespace: toystore
spec:
parentRefs:
- name: istio-ingressgateway
namespace: istio-system
hostnames:
- "*.toystore.acme.com"
rules:
- matches:
- path:
type: PathPrefix
value: "/toys"
method: GET
- path:
type: PathPrefix
value: "/toys"
method: POST
backendRefs:
- name: toystore
port: 80
- matches:
- path:
type: PathPrefix
value: "/assets/"
backendRefs:
- name: toystore
port: 80
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
set:
- name: Cache-Control
value: "max-age=31536000, immutable"
The following are examples of RLPs targeting the route and the gateway. Each example is independent from the other.
Example 1. Minimal example - network resource targeted entirely without filtering, unconditional and unqualified rate limiting
In this example, all traffic to *.toystore.acme.com will be limited to 5rps, regardless of any other attribute of the HTTP request (method, path, headers, etc), without any extra "soft" conditions (conditions non-related to the HTTP route), across all consumers of the API (unqualified rate limiting).
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore-infra-rl
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
base: # user-defined name of the limit definition - future use for handling hierarchical policy attachment
- rates: # at least one rate limit required
- limit: 5
unit: second
How is this RLP implemented under the hood?
gateway_actions:
- rules:
- paths: ["/toys*"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
- paths: ["/toys*"]
methods: ["POST"]
hosts: ["*.toystore.acme.com"]
- paths: ["/assets/*"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-infra-rl/base"
descriptor_value: "1"
limits:
- conditions:
- toystore/toystore-infra-rl/base == "1"
max_value: 5
seconds: 1
namespace: TDB
Example 2. Targeting specific route rules, with counter qualifiers, multiple rates per limit definition and "soft" conditions
In this example, a distinct limit will be associated ("bound") to each individual HTTPRouteRule of the targeted HTTPRoute, by using the routeSelectors field for selecting (or "sub-targeting") the HTTPRouteRule.
The following limit definitions will be bound to each HTTPRouteRule: - /toys* → 50rpm, enforced per username (counter qualifier) and only in case the user is not an admin ("soft" condition). - /assets/* → 5rpm / 100rp12h
Each set of trigger matches in the RLP will be matched to all HTTPRouteRules whose HTTPRouteMatches is a superset of the set of trigger matches in the RLP. For every HTTPRouteRule matched, the HTTPRouteRule will be bound to the corresponding limit definition that specifies that trigger. In case no HTTPRouteRule is found containing at least one HTTPRouteMatch that is identical to some set of matching rules of a particular limit definition, the limit definition is considered invalid and reported as such in the status of RLP.
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore-per-endpoint
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
toys:
rates:
- limit: 50
duration: 1
unit: minute
counters:
- auth.identity.username
routeSelectors:
- matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)
- path:
type: PathPrefix
value: "/toys"
when:
- selector: auth.identity.group
operator: neq
value: admin
assets:
rates:
- limit: 5
duration: 1
unit: minute
- limit: 100
duration: 12
unit: hour
routeSelectors:
- matches: # matches the 2nd HTTPRouteRule (i.e. /assets/*)
- path:
type: PathPrefix
value: "/assets/"
How is this RLP implemented under the hood?
gateway_actions:
- rules:
- paths: ["/toys*"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
- paths: ["/toys*"]
methods: ["POST"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-per-endpoint/toys"
descriptor_value: "1"
- metadata:
descriptor_key: "auth.identity.group"
metadata_key:
key: "envoy.filters.http.ext_authz"
path:
- segment:
key: "identity"
- segment:
key: "group"
- metadata:
descriptor_key: "auth.identity.username"
metadata_key:
key: "envoy.filters.http.ext_authz"
path:
- segment:
key: "identity"
- segment:
key: "username"
- rules:
- paths: ["/assets/*"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-per-endpoint/assets"
descriptor_value: "1"
limits:
- conditions:
- toystore/toystore-per-endpoint/toys == "1"
- auth.identity.group != "admin"
variables:
- auth.identity.username
max_value: 50
seconds: 60
namespace: kuadrant
- conditions:
- toystore/toystore-per-endpoint/assets == "1"
max_value: 5
seconds: 60
namespace: kuadrant
- conditions:
- toystore/toystore-per-endpoint/assets == "1"
max_value: 100
seconds: 43200 # 12 hours
namespace: kuadrant
Example 3. Targeting a subset of an HTTPRouteRule - HTTPRouteMatch missing
Consider a 150rps rate limit set on requests to GET /toys/special. Such specific application endpoint is covered by the first HTTPRouteRule in the HTTPRoute (as a subset of GET or POST to any path that starts with /toys). However, to avoid binding limits to HTTPRouteRules that are more permissive than the actual intended scope of the limit, the RateLimitPolicy controller requires trigger matches to find identical matching rules explicitly defined amongst the sets of HTTPRouteMatches of the HTTPRouteRules potentially targeted.
As a consequence, by simply defining a trigger match for GET /toys/special in the RLP, the GET|POST /toys* HTTPRouteRule will NOT be bound to the limit definition. In order to ensure the limit definition is properly bound to a routing rule that strictly covers the GET /toys/special application endpoint, first the user has to modify the spec of the HTTPRoute by adding an explicit HTTPRouteRule for this case:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
name: toystore
namespace: toystore
spec:
parentRefs:
- name: istio-ingressgateway
namespace: istio-system
hostnames:
- "*.toystore.acme.com"
rules:
- matches:
- path:
type: PathPrefix
value: "/toys"
method: GET
- path:
type: PathPrefix
value: "/toys"
method: POST
backendRefs:
- name: toystore
port: 80
- matches:
- path:
type: PathPrefix
value: "/assets/"
backendRefs:
- name: toystore
port: 80
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
set:
- name: Cache-Control
value: "max-age=31536000, immutable"
- matches: # new (more specific) HTTPRouteRule added
- path:
type: Exact
value: "/toys/special"
method: GET
backendRefs:
- name: toystore
port: 80
After that, the RLP can target the new HTTPRouteRule strictly:
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore-special-toys
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
specialToys:
rates:
- limit: 150
unit: second
routeSelectors:
- matches: # matches the new HTTPRouteRule (i.e. GET /toys/special)
- path:
type: Exact
value: "/toys/special"
method: GET
How is this RLP implemented under the hood?
gateway_actions:
- rules:
- paths: ["/toys/special"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-special-toys/specialToys"
descriptor_value: "1"
limits:
- conditions:
- toystore/toystore-special-toys/specialToys == "1"
max_value: 150
seconds: 1
namespace: kuadrant
Example 4. Targeting a subset of an HTTPRouteRule - HTTPRouteMatch found
This example is similar to Example 3. Consider the use case of setting a 150rpm rate limit on requests to GET /toys*.
The targeted application endpoint is covered by the first HTTPRouteRule in the HTTPRoute (as a subset of GET or POST to any path that starts with /toys). However, unlike in the previous example where, at first, no HTTPRouteRule included an explicit HTTPRouteMatch for GET /toys/special, in this example the HTTPRouteMatch for the targeted application endpoint GET /toys* does exist explicitly in one of the HTTPRouteRules, thus the RateLimitPolicy controller would find no problem to bind the limit definition to the HTTPRouteRule. That would nonetheless cause a unexpected behavior of the limit triggered not strictly for GET /toys*, but also for POST /toys*.
To avoid extending the scope of the limit beyond desired, with no extra "soft" conditions, again the user must modify the spec of the HTTPRoute, so an exclusive HTTPRouteRule exists for the GET /toys* application endpoint:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
name: toystore
namespace: toystore
spec:
parentRefs:
- name: istio-ingressgateway
namespace: istio-system
hostnames:
- "*.toystore.acme.com"
rules:
- matches: # first HTTPRouteRule split into two – one for GET /toys*, other for POST /toys*
- path:
type: PathPrefix
value: "/toys"
method: GET
backendRefs:
- name: toystore
port: 80
- matches:
- path:
type: PathPrefix
value: "/toys"
method: POST
backendRefs:
- name: toystore
port: 80
- matches:
- path:
type: PathPrefix
value: "/assets/"
backendRefs:
- name: toystore
port: 80
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
set:
- name: Cache-Control
value: "max-age=31536000, immutable"
The RLP can then target the new HTTPRouteRule strictly:
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toy-readers
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
toyReaders:
rates:
- limit: 150
unit: second
routeSelectors:
- matches: # matches the new more specific HTTPRouteRule (i.e. GET /toys*)
- path:
type: PathPrefix
value: "/toys"
method: GET
How is this RLP implemented under the hood?
gateway_actions:
- rules:
- paths: ["/toys*"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toy-readers/toyReaders"
descriptor_value: "1"
limits:
- conditions:
- toystore/toy-readers/toyReaders == "1"
max_value: 150
seconds: 1
namespace: kuadrant
Example 5. One limit triggered by multiple HTTPRouteRules
In this example, both HTTPRouteRules, i.e. GET|POST /toys* and /assets/*, are targeted by the same limit of 50rpm per username.
Because the HTTPRoute has no other rule, this is technically equivalent to targeting the entire HTTPRoute and therefore similar to Example 1. However, if the HTTPRoute had other rules or got other rules added afterwards, this would ensure the limit applies only to the two original route rules.
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore-per-user
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
toysOrAssetsPerUsername:
rates:
- limit: 50
duration: 1
unit: minute
counters:
- auth.identity.username
routeSelectors:
- matches:
- path:
type: PathPrefix
value: "/toys"
method: GET
- path:
type: PathPrefix
value: "/toys"
method: POST
- matches:
- path:
type: PathPrefix
value: "/assets/"
How is this RLP implemented under the hood?
gateway_actions:
- rules:
- paths: ["/toys*"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
- paths: ["/toys*"]
methods: ["POST"]
hosts: ["*.toystore.acme.com"]
- paths: ["/assets/*"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-per-user/toysOrAssetsPerUsername"
descriptor_value: "1"
- metadata:
descriptor_key: "auth.identity.username"
metadata_key:
key: "envoy.filters.http.ext_authz"
path:
- segment:
key: "identity"
- segment:
key: "username"
limits:
- conditions:
- toystore/toystore-per-user/toysOrAssetsPerUsername == "1"
variables:
- auth.identity.username
max_value: 50
seconds: 60
namespace: kuadrant
Example 6. Multiple limit definitions targeting the same HTTPRouteRule
In case multiple limit definitions target a same HTTPRouteRule, all those limit definitions will be bound to the HTTPRouteRule. No limit "shadowing" will be be enforced by the RLP controller. Due to how things work as of today in Limitador nonetheless (i.e. the rule of the most restrictive limit wins), in some cases, across multiple limits triggered, one limit ends up "shadowing" others, depending on further qualification of the counters and the actual RL values.
E.g., the following RLP intends to set 50rps per username on GET /toys*, and 100rps on POST /toys* or /assets/*:
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore-per-endpoint
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
readToys:
rates:
- limit: 50
unit: second
counters:
- auth.identity.username
routeSelectors:
- matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)
- path:
type: PathPrefix
value: "/toys"
method: GET
postToysOrAssets:
rates:
- limit: 100
unit: second
routeSelectors:
- matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)
- path:
type: PathPrefix
value: "/toys"
method: POST
- matches: # matches the 2nd HTTPRouteRule (i.e. /assets/*)
- path:
type: PathPrefix
value: "/assets/"
How is this RLP implemented under the hood?
gateway_actions:
- rules:
- paths: ["/toys*"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
- paths: ["/toys*"]
methods: ["POST"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-per-endpoint/readToys"
descriptor_value: "1"
- metadata:
descriptor_key: "auth.identity.username"
metadata_key:
key: "envoy.filters.http.ext_authz"
path:
- segment:
key: "identity"
- segment:
key: "username"
- rules:
- paths: ["/toys*"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
- paths: ["/toys*"]
methods: ["POST"]
hosts: ["*.toystore.acme.com"]
- paths: ["/assets/*"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-per-endpoint/readToys"
descriptor_value: "1"
- generic_key:
descriptor_key: "toystore/toystore-per-endpoint/postToysOrAssets"
descriptor_value: "1"
limits:
- conditions: # actually applies to GET|POST /toys*
- toystore/toystore-per-endpoint/readToys == "1"
variables:
- auth.identity.username
max_value: 50
seconds: 1
namespace: kuadrant
- conditions: # actually applies to GET|POST /toys* and /assets/*
- toystore/toystore-per-endpoint/postToysOrAssets == "1"
max_value: 100
seconds: 1
namespace: kuadrant
This example was only written in this way to highlight that it is possible that multiple limit definitions select a same HTTPRouteRule. To avoid over-limiting between GET|POST /toys* and thus ensure the originally intended limit definitions for each of these routes apply, the HTTPRouteRule should be split into two, like done in Example 4.
Example 7. Limits triggered for specific hostnames
In the previous examples, the limit definitions and therefore the counters were set indistinctly for all hostnames – i.e. no matter if the request is sent to games.toystore.acme.com or dolls.toystore.acme.com, the same counters are expected to be affected. In this example on the other hand, a 1000rpd rate limit is set for requests to /assets/* only when the hostname matches games.toystore.acme.com.
First, the user needs to edit the HTTPRoute to make the targeted hostname games.toystore.acme.com explicit:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
name: toystore
namespace: toystore
spec:
parentRefs:
- name: istio-ingressgateway
namespace: istio-system
hostnames:
- "*.toystore.acme.com"
- games.toystore.acme.com # new (more specific) hostname added
rules:
- matches:
- path:
type: PathPrefix
value: "/toys"
method: GET
- path:
type: PathPrefix
value: "/toys"
method: POST
backendRefs:
- name: toystore
port: 80
- matches:
- path:
type: PathPrefix
value: "/assets/"
backendRefs:
- name: toystore
port: 80
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
set:
- name: Cache-Control
value: "max-age=31536000, immutable"
After that, the RLP can target specifically the newly added hostname:
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore-per-hostname
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
games:
rates:
- limit: 1000
unit: day
routeSelectors:
- matches:
- path:
type: PathPrefix
value: "/assets/"
hostnames:
- games.toystore.acme.com
How is this RLP implemented under the hood?
gateway_actions:
- rules:
- paths: ["/assets/*"]
hosts: ["games.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-per-hostname/games"
descriptor_value: "1"
limits:
- conditions:
- toystore/toystore-per-hostname/games == "1"
max_value: 1000
seconds: 86400 # 1 day
namespace: kuadrant
Example 8. Targeting the Gateway
Note: Additional meaning and context may be given to this use case in the future, when discussing defaults and overrides.
Targeting a Gateway is a shortcut to targeting all individual HTTPRoutes referencing the gateway as parent. This differs from Example 1 nonetheless because, by targeting the gateway rather than an individual HTTPRoute, the RLP applies automatically to all HTTPRoutes pointing to the gateway, including routes created before and after the creation of the RLP. Moreover, all those routes will share the same limit counters specified in the RLP.
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: gw-rl
namespace: istio-ingressgateway
spec:
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: istio-ingressgateway
limits:
base:
- rates:
- limit: 5
unit: second
How is this RLP implemented under the hood?
gateway_actions:
- rules:
- paths: ["/toys*"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
- paths: ["/toys*"]
methods: ["POST"]
hosts: ["*.toystore.acme.com"]
- paths: ["/assets/*"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "istio-system/gw-rl/base"
descriptor_value: "1"
limits:
- conditions:
- istio-system/gw-rl/base == "1"
max_value: 5
seconds: 1
namespace: TDB
Comparison to current RateLimitPolicy
Current New Reason
1:1 relation between Limit (the object) and the actual Rate limit (the value) (spec.rateLimits.limits) Rate limit becomes a detail of Limit where each limit may define one or more rates (1:N) (spec.limits.<limit-name>.rates)
• It allows to reuse when conditions and counters for groups of rate limits
Parsed spec.rateLimits.limits.conditions field, directly exposing the Limitador's API Structured spec.limits.<limit-name>.when condition field composed of 3 well-defined properties: selector, operator and value
spec.rateLimits.configurations as a list of "variables assignments" and direct exposure of Envoy's RL descriptor actions API Descriptor actions composed from selectors used in the limit definitions (spec.limits.<limit-name>.when.selector and spec.limits.<limit-name>.counters) plus a fixed identifier of the route rules (spec.limits.<limit-name>.routeSelectors)
• Abstract the Envoy-specific concepts of "actions" and "descriptors"
• No risk of mismatching descriptors keys between "actions" and actual usage in the limits
• No user-defined generic descriptors (e.g. "limited = 1")
• Source value of the selectors defined from an implicit "context" data structure
Key-value descriptors Structured descriptors referring to a contextual well-known data structure
Limitador conditions independent from the route rules Artificial Limitador condition injected to bind routes and corresponding limits
• Ensure the limit is enforced only for corresponding selected HTTPRouteRules
translate(spec.rateLimits.rules) ⊂ httproute.spec.rules spec.limits.<limit-name>.routeSelectors.matches ⊆ httproute.spec.rules.matches
• HTTPRouteRule selector (via HTTPRouteMatch subset)
• Gateway API language
• Preparation for inherited policies and defaults & overrides
spec.rateLimits.limits.seconds spec.limits.<limit-name>.rates.duration and spec.limits.<limit-name>.rates.unit
• Support for more units beyond seconds
• duration: 1 by default
spec.rateLimits.limits.variables spec.limits.<limit-name>.counters
• Improved (more specific) naming
spec.rateLimits.limits.maxValue spec.limits.<limit-name>.rates.limit
• Improved (more generic) naming
Reference-level explanation
By completely dropping out the configurations field from the RLP, composing the RL descriptor actions is now done based essentially on the selectors listed in the when conditions and the counters, plus an artificial condition used to bind the HTTPRouteRules to the corresponding limits to trigger in Limitador.
The descriptor actions composed from the selectors in the "soft" when conditions and counter qualifiers originate from the direct references these selectors make to paths within a well-known data structure that stores information about the context (HTTP request and ext-authz filter). These selectors in "soft" when conditions and counter qualifiers are thereby called well-known selectors.
Other descriptor actions might be composed by the RLP controller to define additional RL conditions to bind HTTPRouteRules and corresponding limits.
Well-known selectors
Each selector used in a when condition or counter qualifier is a direct reference to a path within a well-known data structure that stores information about the context (L4 and L7 data of the original request handled by the proxy), as well as auth data (dynamic metadata occasionally exported by the external authorization filter and injected by the proxy into the rate-limit filter).
The well-known data structure for building RL descriptor actions resembles Authorino's "Authorization JSON", whose context component consists of Envoy's AttributeContext type of the external authorization API (marshalled as JSON). Compared to the more generic RateLimitRequest struct, the AttributeContext provides a more structured and arguibly more intuitive relation between the data sources for the RL descriptors actions and their corresponding key names through which the values are referred within the RLP, in a context of predominantly serving for HTTP applications.
To keep compatibility with the Envoy Rate Limit API, the well-known data structure can optionally be extended with the RateLimitRequest, thus resulting in the following final structure.
context: # Envoy's Ext-Authz `CheckRequest.AttributeContext` type
source:
address:
service:
destination:
address:
service:
request:
http:
host:
path:
method:
headers: {}
auth: # Dynamic metadata exported by the external authorization service
ratelimit: # Envoy's Rate Limit `RateLimitRequest` type
domain: # generated by the Kuadrant controller
descriptors: {} # descriptors configured by the user directly in the proxy (not generated by the Kuadrant controller, if allowed)
hitsAddend: # only in case we want to allow users to refer to this value in a policy
Mechanics of generating RL descriptor actions
From the perspective of a user who writes a RLP, the selectors used in then when and counters fields are paths to the well-known data structure (see Well-known selectors). While desiging a policy, the user intuitively pictures the well-known data structure and states each limit definition having in mind the possible values assumed by each of those paths in the data plane. For example,
The user story:
Each distinct user (auth.identity.username) can send no more than 1rps to the same HTTP path (context.request.http.path).
...materializes as the following RLP:
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
dolls:
rates:
- limit: 1
unit: second
counters:
- auth.identity.username
- context.request.http.path
The following selectors are to be interpreted by the RLP controller: - auth.identity.username - context.request.http.path
The RLP controller uses a map to translate each selector into its corresponding descriptor action. (Roughly described:)
context.source.address → source_cluster(...) # TBC
context.source.service → source_cluster(...) # TBC
context.destination... → destination_cluster(...)
context.destination... → destination_cluster(...)
context.request.http.<X> → request_headers(header_name: ":<X>")
context.request... → ...
auth.<X> → metadata(key: "envoy.filters.http.ext_authz", path: <X>)
ratelimit.domain → <hostname>
...to yield effectively:
rate_limits:
- actions:
- metadata:
descriptor_key: "auth.identity.username"
metadata_key:
key: "envoy.filters.http.ext_authz"
path:
- segment:
key: "identity"
- segment:
key: "username"
- request_headers:
descriptor_key: "context.request.http.path"
header_name: ":path"
Artificial Limitador condition for routeSelectors
For each limit definition that explicitly or implicitly defines a routeSelectors field, the RLP controller will generate an artificial Limitador condition that ensures that the limit applies only when the filterred rules are honoured when serving the request. This can be implemented with a 2-step procedure: 1. generate an unique identifier of the limit - i.e. <policy-namespace>/<policy-name>/<limit-name> 2. associate a generic_key type descriptor action with each HTTPRouteRule targeted by the limit – i.e. { descriptor_key: <unique identifier of the limit>, descriptor_value: "1" }.
For example, given the following RLP:
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore-non-admin-users
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
toys:
routeSelectors:
- matches:
- path:
type: PathPrefix
value: "/toys"
method: GET
- path:
type: PathPrefix
value: "/toys"
method: POST
rates:
- limit: 50
duration: 1
unit: minute
when:
- selector: auth.identity.group
operator: neq
value: admin
assets:
routeSelectors:
- matches:
- path:
type: PathPrefix
value: "/assets/"
rates:
- limit: 5
duration: 1
unit: minute
when:
- selector: auth.identity.group
operator: neq
value: admin
Apart from the following descriptor action associated with both routes:
- metadata:
descriptor_key: "auth.identity.group"
metadata_key:
key: "envoy.filters.http.ext_authz"
path:
- segment:
key: "identity"
- segment:
key: "group"
...and its corresponding Limitador condition:
auth.identity.group != "admin"
The following additional artificial descriptor actions will be generated:
# associated with route rule GET|POST /toys*
- generic_key:
descriptor_key: "toystore/toystore-non-admin-users/toys"
descriptor_value: "1"
# associated with route rule /assets/*
- generic_key:
descriptor_key: "toystore/toystore-non-admin-users/assets"
descriptor_value: "1"
...and their corresponding Limitador conditions.
In the end, the following Limitador configuration is yielded:
- conditions:
- toystore/toystore-non-admin-users/toys == "1"
- auth.identity.group != "admin"
max_value: 50
seconds: 60
namespace: kuadrant
- conditions:
- toystore/toystore-non-admin-users/assets == "1"
- auth.identity.group != "admin"
max_value: 5
seconds: 60
namespace: kuadrant
Support in wasm shim and Envoy RL API
This proposal tries to keep compatibility with the Envoy API for rate limit and does not introduce any new requirement that otherwise would require the use of wasm shim to be implemented.
In the case of implementation of this proposal in the wasm shim, all types of matchers supported by the HTTPRouteMatch type of Gateway API must be also supported in the rate_limit_policies.gateway_actions.rules field of the wasm plugin configuration. These include matchers based on path (prefix, exact), headers, query string parameters and method.
Drawbacks
HTTPRoute editing occasionally required
Need to duplicate rules that don't explicitly include a matcher wanted for the policy, so that matcher can be added as a special case for each of those rules.
Risk of over-targeting
Some HTTPRouteRules might need to be split into more specific ones so a limit definition is not bound to beyond intended (e.g. target method: GET when the route matches method: POST|GET).
Prone to consistency issues
Typos and updates to the HTTPRoute can easily cause a mismatch and invalidate a RLP.
Two types of conditions – routeSelectors and when conditions
Although with different meanings (evaluates in the gateway vs. evaluated in Limitador) and meant for expressing different types of rules (HTTPRouteRule selectors vs. "soft" conditions based on attributes not related to the HTTP request), users might still perceive these as two ways of expressing conditions and find difficult to understand at first that "soft" conditions do not accept expressions related to attributes of the HTTP request.
Rationale and alternatives
Targeting full HTTPRouteRules
Requiring users to specify full HTTPRouteRule matches in the RLP (as opposed to any subset of HTTPRoureMatches of targeted HTTPRouteRules – current proposal) contains some of the same drawbacks of this proposal, such as HTTPRoute editing occasionally required and prone to consistency issues. If, on one hand, it eliminates the risk of over-targeting, on the other hand, it does it at the cost of requiring excessively verbose policies written by the users, to the point of sometimes expecting user to have to specify trigger matching rules that are significantly more than what's originally and strictly intended.
E.g.:
On a HTTPRoute that contains the following HTTPRouteRules (simplified representation):
{ header: x-canary=true } → backend-canary
{ * } → backend-rest
Where the user wants to define a RLP that targets { method: POST }. First, the user needs to edit the HTTPRoute and duplicate the HTTPRouteRules:
{ header: x-canary=true, method: POST } → backend-canary
{ header: x-canary=true } → backend-canary
{ method: POST } → backend-rest
{ * } → backend-rest
Then, user needs to include the following trigger in the RLP so only full HTTPRouteRules are specified:
{ header: x-canary=true, method: POST }
{ method: POST }
The first matching rule of the trigger (i.e. { header: x-canary=true, method: POST }) is beoynd the original user intent of targeting simply { method: POST }.
This issue can be even more concerning in the case of targeting gateways with multiple child HTTPRoutes. All the HTTPRoutes would have to be fixed and the HTTPRouteRules that cover for all the cases in all HTTPRoutes listed in the policy targeting the gateway.
All limit definitions apply vs. Limit "shadowing"
The proposed binding between limit definition and HTTPRouteRules that trigger the limits was thought so multiple limit definitions can be bound to a same HTTPRouteRule that triggers those limits in Limitador. That means that no limit definition will "shadow" another at the level of the RLP controller, i.e. the RLP controller will honour the intended binding according to the selectors specified in the policy.
Due to how things work as of today in Limitador nonetheless, i.e., the rule of the most restrictive limit wins, and because all limit definitions triggered by a given shared HTTPRouteRule, it might be the case that, across multiple limits triggered, one limit ends up "shadowing" other limits. However, that is by implementation of Limitador and therefore beyond the scope of the API.
An alternative to the approach of allowing all limit definitions to be bound to a same selected HTTPRouteRules would be enforcing that, amongst multiple limit definitions targeting a same HTTPRouteRule, only the first of those limits definitions is bound to the HTTPRouteRule. This alternative approach effectively would cause the first limit to "shadow" any other on that particular HTTPRouteRule, as by implementation of the RLP controller (i.e., at API level).
While the first approach causes an artificial Limitador condition of the form <policy-ns>/<policy-name>/<limit-name> == "1", the alternative approach ("limit shadowing") could be implemented by generating a descriptor of the following form instead: ratelimit.binding == "<policy-ns>/<policy-name>/<limit-name>".
The downside of allowing multiple bindings to the same HTTPRouteRule is that all limits apply in Limitador, thus making status report frequently harder. The most restritive rate limit strategy implemented by Limitador might not be obvious to users who set multiple limit definitions and will require additional information reported back to the user about the actual status of the limit definitions stated in a RLP. On the other hand, it allows enables use cases of different limit definitions that vary on the counter qualifiers, additional "soft" conditions, or actual rate limit values to be triggered by a same HTTPRouteRule.
Writing "soft" when conditions based on attributes of the HTTP request
As a first step, users will not be able to write "soft" when conditions to selective apply rate limit definitions based on attributes of the HTTP request that otherwise could be specified using the routeSelectors field of the RLP instead.
On one hand, using when conditions for route filtering would make it easy to define limits when the HTTPRoute cannot be modified to include the special rule. On the other hand, users would miss information in the status. An HTTPRouteRule for GET|POST /toys*, for example, that is targeted with an additional "soft" when condition that specifies that the method must be equal to GET and the path exactly equal to /toys/special (see Example 3) would be reported as rate limited with extra details that this is in fact only for GET /toys/special. For small deployments, this might be considered acceptable; however it would easily explode to unmanageable number of cases for deployments with only a few limit definitions and HTTPRouteRules.
Moreover, by not specifying a more strict HTTPRouteRule for GET /toys/special, the RLP controller would bind the limit definition to other rules that would cause the rate limit filter to invoke the rate limit service (Limitador) for cases other than strictly GET /toys/special. Even if the rate limits would still be ensured to apply in Limitador only for GET /toys/special (due to the presence of a hypothetical "soft" when condition), an extra no-op hop to the rate limit service would happen. This is avoided with the current imposed limitation.
Example of "soft" when conditions for rate limit based on attributes of the HTTP request (NOT SUPPORTED):
apiVersion: kuadrant.io/v2beta1
kind: RateLimitPolicy
metadata:
name: toystore-special-toys
namespace: toystore
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: toystore
limits:
specialToys:
rates:
- limit: 150
unit: second
routeSelectors:
- matches: # matches the original HTTPRouteRule GET|POST /toys*
- path:
type: PathPrefix
value: "/toys"
method: GET
when:
- selector: context.request.http.method # cannot omit this selector or POST /toys/special would also be rate limited
operator: eq
value: GET
- selector: context.request.http.path
operator: eq
value: /toys/special
How is this RLP would be implemented under the hood if supported?
gateway_actions:
- rules:
- paths: ["/toys*"]
methods: ["GET"]
hosts: ["*.toystore.acme.com"]
- paths: ["/toys*"]
methods: ["POST"]
hosts: ["*.toystore.acme.com"]
configurations:
- generic_key:
descriptor_key: "toystore/toystore-special-toys/specialToys"
descriptor_value: "1"
- request_headers:
descriptor_key: "context.request.http.method"
header_name: ":method"
- request_headers:
descriptor_key: "context.request.http.path"
header_name: ":path"
limits:
- conditions:
- toystore/toystore-special-toys/specialToys == "1"
- context.request.http.method == "GET"
- context.request.http.path == "/toys/special"
max_value: 150
seconds: 1
namespace: kuadrant
Possible variations for the selectors (conditions and counter qualifiers)
The main drivers behind the proposed design for the selectors (conditions and counter qualifiers), based on (i) structured condition expressions composed of fields selector, operator, and value, and (ii) when conditions and counters separated in two distinct fields (variation "C" below), are: 1. consistency with the Authorino AuthConfig API, which also specifies when conditions expressed in selector, operator, and value fields; 2. explicit user intent, without subtle distinction of meaning based on presence of optional fields.
Nonetheless here are a few alternative variations to consider:
Structured condition expressions Parsed condition expressions
Single field A
selectors:
- selector: context.request.http.method
operator: eq
value: GET
- selector: auth.identity.username
B
selectors:
- context.request.http.method == "GET"
- auth.identity.username
Distinct fields C ⭐️
when:
- selector: context.request.http.method
operator: eq
value: GET
counters:
- auth.identity.username
D
when:
- context.request.http.method == "GET"
counters:
- auth.identity.username
⭐️ Variation adopted for the examples and (so far) final design proposal.
Prior art
Most implementations currently orbiting around Gateway API (e.g. Istio, Envoy Gateway, etc) for added RL functionality seem to have been leaning more to the direct route extension pattern instead of Policy Attachment. That might be an option particularly suitable for gateway implementations (gateway providers) and for those aiming to avoid dealing with defaults and overrides.
Unresolved questions
1. In case a limit definition lists route selectors such that some can be bound to HTTPRouteRules and some cannot (see Example 6), do we bind the valid route selectors and ignore the invalid ones or the limit definition is invalid altogether and bound to no HTTPRouteRule at all?
A: By allowing multiple limit definitions to target a same HTTPRouteRule, the issue here stated will become less often. For the other cases where a limit definition still fails to select an HTTPRouteRule (e.g. due to mismatching trigger matches), the limit definition is not considered invalid. Possibly the limit definitions is considered "stale" (or "orphan"), i.e., not bound to any HTTPRouteRule.
2. What should we fill domain/namespace with, if no longer with the hostname? This can be useful for multi-tenancy.
A: For now, the domain/namespace field of the RL configuration (Envoy and Limitador ends) will be filled with a fixed (configurable) string (e.g. "kuadrant"). This can change in future to better support multi-tenancy and/or other use cases where a total sharding of the limit definitions within a same instance of Kuadrant is desired.
3. How do we support lists of hostnames in Limitador conditions (single counter)? Should we open an issue for a new in operator?
A: Not needed. The hostnames must exist in the targeted object explicitly, just like any other routing rules intended to be targeted by a limit definition. By setting the explicit hostname in the targeted network object (Gateway or HTTPRoute), the also becomes a route rules available for "hard" trigger configuration.
4. What "soft" condition operators do we need to support (e.g. eq, neq, exists, nexists, matches)?
5. Do we need special field to define shared counters across clusters/Limitador instances or that's to be solved at another layer (Limitador, Kuadrant CRDs, MCTC)?
Future possibilities
• Port routeSelectors and the semantics around it to the AuthPolicy API (aka "KAP v2").
• Defaults and overrides, either along the lines of architecture#4 or architecture#10.
|
__label__pos
| 0.997151 |
What is 326/793 as a decimal?
Converting 326/793 to a decimal is quite possibly one of the easiest calculations you can make. In this (very short) guide, we'll show you how to turn any fraction into a decimal in 3 seconds of less! Here we go!
Want to quickly learn or show students how to convert 326/793 to a decimal? Play this very quick and fun video now!
First things first, if you don't know what a numerator and a denominator are in a fraction, we need to recap that:
326 (numerator) / 793 (denominator)
Here's the little secret you can use to instantly transform any fraction to a decimal: Simply divide the numerator by the denominator:
= 326/793
= 326 ÷ 793
= 0.41109709962169
That's literally all there is to it! 326/793 as a decimal is 0.41109709962169.
I wish I had more to tell you about converting a fraction into a decimal but it really is that simple and there's nothing more to say about it.
If you want to practice, grab yourself a pen and a pad and try to calculate some fractions to decimal format yourself. If you're really feeling lazy you can use our calculator below instead!
Why would you want to convert 326/793 to a decimal?
This is a great question. We have lots of calculations on this site about converting a fraction into a decimal but why would you want or need to do that in the first place?
Well, first of all it's just a good way to represent a fraction in a better way that allows you to do common arithmetic with them (like addition, subtration, division and multiplication).
In real life, we mostly deal with decimals (like currency, for example) and since our brains are taught from a young age to understand and compare decimals more often than they are fractions, it's easier to understand and compare fractions if they are converted to a decimal first!
Here's a little real life example of converting a fraction to a decimal when using quantities. Let's say you're cooking and you can usually see fractionally how much of an ingredient is left in a pack. However, electronic scales measure weight in decimals and not as a fraction of the ingredient left. This makes converting between fractions and decimals a useful skill in cooking.
Hopefully this tutorial has helped you to understand how to convert a fraction to a decimal number. You can now go forth and convert fractions to decimal as much as your little heart desires!
Cite, Link, or Reference This Page
If you found this content useful in your research, please do us a great favor and use the tool below to make sure you properly reference us wherever you use it. We really appreciate your support!
• "What is 326/793 as a decimal?". VisualFractions.com. Accessed on June 28, 2022. http://visualfractions.com/calculator/fraction-as-decimal/what-is-326-793-as-a-decimal/.
• "What is 326/793 as a decimal?". VisualFractions.com, http://visualfractions.com/calculator/fraction-as-decimal/what-is-326-793-as-a-decimal/. Accessed 28 June, 2022.
• What is 326/793 as a decimal?. VisualFractions.com. Retrieved from http://visualfractions.com/calculator/fraction-as-decimal/what-is-326-793-as-a-decimal/.
Fraction to Decimal Calculator
Fraction as Decimal
Enter a numerator and denominator
Next Fraction to Decimal Calculation
Random Fraction to Decimal Problems
If you made it this far down the page then you must REALLY love converting fractions to decimals? Below are a bunch of randomly generated calculations for your decimal loving pleasure:
|
__label__pos
| 0.54891 |
首页 后端技术 Java内部是如何处理判断一个对象是否被实例化的?(js判断是否为对象)
Java内部是如何处理判断一个对象是否被实例化的?(js判断是否为对象)
Java内部是如何处理判断一个对象是否被实例化的?
谢邀。
这个部分,从给出的代码看:User user = um.showUserById(JSONObject.getNames(uid)[0]);从java的内存分布来看,user实际上是指向堆中一块内存区域的地址引用。题主提到的有人说这在java里叫做什么+实例化。后面的等于User user = new User();再user = um.showUserById(JSONObject.getNames(uid)[0]);这个部分基本上写代码都是只会实例化一次,不会需要实例化两次,如果有写成实例化两次,从代码规范角度来说,其实是不建议的,在方法 um.showUserById(JSONObject.getNames(uid)[0])中返回User的实例对象,而代码user = um.showUserById(JSONObject.getNames(uid)[0])这儿仅仅是将user = um.showUserById(JSONObject.getNames(uid)[0])的返回值赋值给user变量。showUserById的代码可能是
public User showUserById(String uid){
User user = new User();
user.Xxxx=xxxx;
.....
return user;
}
java和dotnet有什么区别
我个人觉得java/J2EE、.net、嵌入式开发是不错的三个方向。 如果非要在java和.net中选择的话,我个人是推荐java的,原因: 1:Java能够开发linux、unix服务器上的程序 2:JavaEE和.NET在企业应用方面的关注点是差不多的,但是学习了java如果将来想转到.net上会比较简单,几天的学习然后上手开发应该是问题不大的,但是反之就不行了。 当然如果有了些经验之后,你就会发现语言或者平台不再是技术上的阻碍,一些原理性的东西学到手,然后再加上项目经验,使用什么平台,反而不是最重要的。不过如果非要用一种平台进入企业开发的领域,至少在目前,我推荐java。其实新手要想学JAVA 也不难 关键是要坚持 多看看视频 多编写代码一年之后 相信你会个差不多的 我给你推荐个网站上面有尚学堂马士兵的学习过程 很经典 而且上面有很多资料还有一整套尚学堂视频 下面这两个网站就有 /?u=6695 /index.php?fromuid=29811
java static 是什么意思
是静态修饰符,什么叫静态修饰符呢?大家都知道,在程序中任何变量或者代码都是在编译时由系统自动分配内存来存储的,而所谓静态就是指在编译后所分配的内存会一直存在,直到程序退出内存才会释放这个空间,也就是只要程序在运行,那么这块内存就会一直存在。这样做有什么意义呢?
在Java程序里面,所有的东西都是对象,而对象的抽象就是类,对于一个类而言,如果要使用他的成员,那么普通情况下必须先实例化对象后,通过对象的引用才能够访问这些成员,但是有种情况例外,就是该成员是用static声明的(在这里所讲排除了类的访问控制),例如:
未声明为static
class ClassA{
int b;
public void ex1(){
}
}
class ClassB{
void ex2{
int i;
ClassA a = new ClassA();
i = a.b; //这里通过对象引用访问成员变量b
a.ex1; //这里通过对象引用访问成员函数ex1
}
}
声明为static
class ClassA{
static int b;
static void ex1(){
}
}
class ClassB{
void ex2{
int i;
i = ClassA.b; //这里通过类名访问成员变量b
ClassA.ex1; //这里通过类名访问成员函数ex1
}
}
通过以上两种比较,就可以知道static用来修饰类成员的主要作用了,在java类库当中有很多类成员都声明为static,可以让用户不需要实例化对象就可以引用成员,最基本的有Integer.parseInt(),Float.parseFloat()等等用来把对象转换为所需要的基本数据类型。这样的变量和方法我们又叫做类变量和类方法。
接下来讲一下被static修饰后的变量的值的问题,刚才在前面讲过,被static修饰后的成员,在编译时由内存分配一块内存空间,直到程序停止运行才会释放,那么就是说该类的所有对象都会共享这块内存空间,看一下下面这个例子:
class TStatic{
static int i;
public TStatic(){
i = 4;
}
public TStatic(int j){
i = j;
}
public static void main(String args[]){
TStatic t = new TStatic(5); //声明对象引用,并实例化
TStatic tt = new TStatic(); //同上
System.out.println(t.i);
System.out.println(tt.i);
System.out.println(t.i);
}
}
这段代码里面Tstatic类有一个static的int变量I,有两个构造函数,第一个用于初始化I为4,第二个用于初始化i为传进函数的值,在main中所传的值是5,程序首先声明对象的引用t,然后调用带参数的构造函数实例化对象,此时对象t的成员变量I的值为5,接着声明对象tt,调用无参数的构造函数,那么就把对象tt的成员变量i的值初始化为4了,注意了,在这里i是static,那就是说该类的所有对象都共享该内存,那也就是说在实例化对象tt的时候改变了i的值,那么实际上对象t的i值也变了,因为实际上他们引用的是同一个成员变量。最后打印的结果是三个4。呵呵,写到这里大家是否明白了呢?不明白就再看看书或者多写几个例子印证一下,呵呵。
--点击为您加载更多--
关于作者: 小巷
最新文章
发表评论
您的电子邮箱地址不会被公开。 必填项已用*标注
|
__label__pos
| 0.890077 |
Creare un gestionale per strutture ricettive in C# – 20 – Calcolare Totale Prenotazione
Creare un gestionale per strutture ricettive in C# – 20 – Calcolare Totale Prenotazione
Creare un gestionale per strutture ricettive in C# – 20 – Calcolare Totale Prenotazione: Proseguendo nello sviluppo del nostro gestionale, andriamo ad aggiungere una funzione che andrà a calcolare il prezzo totale della prenotazione.
Il Form “Prenotazioni” è quasi completo, ed in questa lezione andremo a creare una funzione che andrà a leggere il costo della camera, il numero di notti e di ospiti, eventuali costi extra, sconti e tasse di soggiorno, e mostrerà il valore totale della prenotazione.
Prima di fare ciò abbiamo corretto un bug lasciato scoperto nella precedente lezione.
Infatti in caso di tentativo di rimozione di un costo extra o di uno sconto, se non c’erano elementi o nessun elemento era selezionato il software crashava.
Per risolvere questo errore abbiamo aggiunto un controllo, che verifica la presenza di elementi nella lista e che almeno 1 elemento sia selezionato.
Inoltre abbiamo aggiunto una messagebox che ci chiede conferma prima dell’eliminazione.
if (lstCosti.Items.Count > 0 && lstCosti.SelectedIndex >= 0)
{
if (MessageBox.Show("Sicuro di voler rimuovere: " + lstCosti.Items[lstCosti.SelectedIndex].ToString(), "Rimozione Costo Extra",
MessageBoxButtons.YesNo) == DialogResult.Yes)
{
lstCosti.Items.RemoveAt(lstCosti.SelectedIndex);
}
}
In questo esermpio verifichiamo che la lista abbia almeno 1 elemento e che sia selezionato, poi chiediamo all’utente se vuole confermare la rimozione dell’elemento selezionato.
In caso di risposta affermativa andiamo a rimuovere l’elemento dalla lista.
Creare un gestionale per strutture ricettive in C# – 20 – Calcolare Totale Prenotazione
Il calcolo del totale della prenotazione prevede 5 fasi:
• Costo Camera * Numero di Notti
• Costo degli Extra
• Costo degli Sconti
• Costo Tasse di Soggiorno * Notti * Ospiti
• Somma dei valori
Le 5 fasi le andremo ad inserire all’interno di una funzione che riporterà alla fine il costo totale della prenotazione.
La funzione completa è:
string PrezzoTotale()
{
string PrezzoFinale = "";
//CALCOLO PREZZO CAMERA X NOTTI
var CameraSelezionata = (ComboBoxItem)cmbCamera.SelectedItem;
decimal PrezzoCameraTotale = CameraSelezionata.Prezzo() * numNotti.Value;
//AGGIUNTA COSTO EXTRA
decimal TotaleCostiExtra = 0;
if (lstCosti.Items.Count > 0)
{
for (int a = 0; a <= lstCosti.Items.Count - 1; a++)
{
decimal CostoExtraOggettoSelezionato = 0;
var CostoExtraSelezionato = (ExtraScontiItem)lstCosti.Items[a];
if (CostoExtraSelezionato.TipologiaExtra() == 0)
{
CostoExtraOggettoSelezionato = CostoExtraSelezionato.PrezzoExtra();
}
else if (CostoExtraSelezionato.TipologiaExtra() == 1)
{
CostoExtraOggettoSelezionato = PrezzoCameraTotale * CostoExtraSelezionato.PrezzoExtra() / 100;
}
else
{
CostoExtraOggettoSelezionato = 0;
}
TotaleCostiExtra = TotaleCostiExtra + CostoExtraOggettoSelezionato;
}
}
//AGGIUNTA SCONTO EXTRA
decimal TotaleScontiExtra = 0;
if (lstSconti.Items.Count > 0)
{
for (int a = 0; a <= lstSconti.Items.Count - 1; a++)
{
decimal ScontoExtraOggettoSelezionato = 0;
var ScontoExtraSelezionato = (ExtraScontiItem)lstSconti.Items[a];
if (ScontoExtraSelezionato.TipologiaExtra() == 0)
{
ScontoExtraOggettoSelezionato = ScontoExtraSelezionato.PrezzoExtra();
}
else if (ScontoExtraSelezionato.TipologiaExtra() == 1)
{
ScontoExtraOggettoSelezionato = PrezzoCameraTotale * ScontoExtraSelezionato.PrezzoExtra() / 100;
}
else
{
ScontoExtraOggettoSelezionato = 0;
}
TotaleScontiExtra = TotaleScontiExtra + ScontoExtraOggettoSelezionato;
}
}
//AGGIUNTA TASSE SOGGIORNO
decimal TotaleTasseSoggiorno = 0;
if (chkTasse.Checked == true)
{
TotaleTasseSoggiorno = Properties.Settings.Default.TasseDiSoggiorno * numOspiti.Value * numNotti.Value;
}
//TOTALE
PrezzoFinale = Convert.ToString(PrezzoCameraTotale + TotaleCostiExtra - TotaleScontiExtra + TotaleTasseSoggiorno);
return PrezzoFinale;
}
Ma andiamo ad analizzare nel dettaglio le 5 fasi:
Costo Camera * Numero di Notti
//CALCOLO PREZZO CAMERA X NOTTI
var CameraSelezionata = (ComboBoxItem)cmbCamera.SelectedItem;
decimal PrezzoCameraTotale = CameraSelezionata.Prezzo() * numNotti.Value;
In questa prima parte andiamo semplicemente a recuperare il costo della camera e lo moltiplichiamo per il numero di notti.
Costo degli Extra
//AGGIUNTA COSTO EXTRA
decimal TotaleCostiExtra = 0;
if (lstCosti.Items.Count > 0)
{
for (int a = 0; a <= lstCosti.Items.Count - 1; a++)
{
decimal CostoExtraOggettoSelezionato = 0;
var CostoExtraSelezionato = (ExtraScontiItem)lstCosti.Items[a];
if (CostoExtraSelezionato.TipologiaExtra() == 0)
{
CostoExtraOggettoSelezionato = CostoExtraSelezionato.PrezzoExtra();
}
else if (CostoExtraSelezionato.TipologiaExtra() == 1)
{
CostoExtraOggettoSelezionato = PrezzoCameraTotale * CostoExtraSelezionato.PrezzoExtra() / 100;
}
else
{
CostoExtraOggettoSelezionato = 0;
}
TotaleCostiExtra = TotaleCostiExtra + CostoExtraOggettoSelezionato;
}
}
In questa seconda fase controlliamo che siano presenti dei costi extra nella lista.
Nel caso in cui fossero presenti dei costi extra andiamo ad analizzare se il costo è segnato con una valota o in percentuale.
Per i costi espressi direttamente con la valuta, andiamo ad assegnare il suo valore ad una variabile temporanea.
Per quelli espressi in percentuale invece moltiplichiamo il prezzo della camera per la percentuale e quindi lo dividiamo per 100 ed assegniamo il valore alla variabile temporanea.
Alla fine del ciclo sommiamo tutti gli extra in una variabile.
Costo degli Sconti
//AGGIUNTA SCONTO EXTRA
decimal TotaleScontiExtra = 0;
if (lstSconti.Items.Count > 0)
{
for (int a = 0; a <= lstSconti.Items.Count - 1; a++)
{
decimal ScontoExtraOggettoSelezionato = 0;
var ScontoExtraSelezionato = (ExtraScontiItem)lstSconti.Items[a];
if (ScontoExtraSelezionato.TipologiaExtra() == 0)
{
ScontoExtraOggettoSelezionato = ScontoExtraSelezionato.PrezzoExtra();
}
else if (ScontoExtraSelezionato.TipologiaExtra() == 1)
{
ScontoExtraOggettoSelezionato = PrezzoCameraTotale * ScontoExtraSelezionato.PrezzoExtra() / 100;
}
else
{
ScontoExtraOggettoSelezionato = 0;
}
TotaleScontiExtra = TotaleScontiExtra + ScontoExtraOggettoSelezionato;
}
}
Il calcolo degli sconti è identico a quello degli extra, solo che viene realizzato sull’elenco sconti e non su quello degli extra.
Costo Tasse di Soggiorno * Notti * Ospiti
//AGGIUNTA TASSE SOGGIORNO
decimal TotaleTasseSoggiorno = 0;
if (chkTasse.Checked == true)
{
TotaleTasseSoggiorno = Properties.Settings.Default.TasseDiSoggiorno * numOspiti.Value * numNotti.Value;
}
Il totale delle tasse di soggiorno viene calcolato caricando il valore delle tasse di soggiorno inserito nelle impostazioni e moltiplicato per il numero degli ospiti e quindi per il numero di notti.
NOTA: Il gestionale non prevede il calcolo delle tasse di soggiorno separato per ogni ospite, ma è possibile creare uno sconto da applicare per rimuovere le tasse di soggiorno per un singolo ospite o per una singola notte.
Somma dei valori
//TOTALE
PrezzoFinale = Convert.ToString(PrezzoCameraTotale + TotaleCostiExtra - TotaleScontiExtra + TotaleTasseSoggiorno);
return PrezzoFinale;
L’ultima parte prevede semplicemente la somma di tutti i valori e la sottrazione degli sconti.
Il totale poi verrà inviato in output.
Rispondi
|
__label__pos
| 0.872731 |
阅读和查看 Automation Anywhere 文档
Automation Anywhere Automation 360
关闭内容
内容
打开内容
在自定义软件包中处理会话
• 已更新:1/07/2021
• Automation 360 v.x
• 版本
• RPA 工作区
在自定义软件包中处理会话
您可以使用 SessionsMap 中的会话名称提取会话。
您可以使用 Sessions 属性检索 SessionsMap
• 注解只能应用于类字段,需要相应的 public setter。
• 变量必须是 Map<String,Object> 类型。
示例:
@BotCommand
@CommandPkg(label = "Start session", name = "startSession", description = "Start new session",
icon = "pkg.svg", node_label = "start session {{sessionName}}|") public class Start {
@Sessions
private Map<String, Object> sessions;
@Execute
public void start(@Idx(index = "1", type = TEXT) @Pkg(label = "Session name",
default_value_type = STRING, default_value = "Default") @NotEmpty String sessionName) {
// Check for existing session
if (sessions.containsKey(sessionName))
throw new BotCommandException(MESSAGES.getString("xml.SessionNameInUse", sessionName));
// Do some operation
// Create new session
sessions.put(sessionName, new Session(operation));
}
public void setSessions(Map<String, Object> sessions) {
this.sessions = sessions;
}
}
@BotCommand
@CommandPkg(label = "End session", name = "endSession", description = "End session", icon =
"pkg.svg", node_label = "End session {{sessionName}}|")
public class EndSession {
@Sessions
private Map<String, Object> sessions;
@Execute
public void end(
@Idx(index = "1", type = TEXT) @Pkg(label = "Session name", default_value_type = STRING,
default_value = "Default") @NotEmpty String sessionName) {
sessions.remove(sessionName);
}
public void setSessions(Map<String, Object> sessions) {
this.sessions = sessions;
}
}
发送反馈
|
__label__pos
| 0.941932 |
阿里云ecshop漏洞修复_丝画阁(homeforexchange.cn)
阿里云ecshop漏洞修复
发布时间:2018-11-20 20:45:25编辑:丝画阁阅读(676)
ecshop没有对会员注册处的username过滤,保存重的用户信息时,可以直接写入shell。
ecshop的后台编辑文件/admin/affiliate_ck.php中,对输入参数auid未进行正确类型转义,导致整型注入的发生。
41行:if(!is_int($_GET['auid'])){$_GET['auid']=0;}
在ecshop特定版定的goods.php文件中,对id参数过滤不严,黑客可以利用该注入点进行SQL注入,获取数据库中敏感信息。
$goods_id = isset($_REQUEST['id']) ? intval(preg_replace("/[^-\d]+[^\d]/",'',$_REQUEST['id'])) : 0;
1、ECShop存在一个盲注漏洞,问题存在于/api/client/api.php文件中,提交特制的恶意POST请求可进行SQL注入攻击,可获得敏感信息或操作数据库。
路径:/api/client/includes/lib_api.php
参照以下修改:
function API_UserLogin($post)
{
/* SQL注入过滤 */
if (get_magic_quotes_gpc())
{
$post['UserId'] = $post['UserId']
}
else
{
$post['UserId'] = addslashes($post['UserId']);
}
/* end */
$post['username'] = isset($post['UserId']) ? trim($post['UserId']) : '';
2、ecshop的后台编辑文件/admin/shopinfo.php中,对输入参数id未进行正确类型转义,导致整型注入的发生。
路径: /admin/shopinfo.php
参照以下修改(53-71-105-123行):
源代码:
admin_priv('shopinfo_manage');
改为:
admin_priv('shopinfo_manage');
$_REQUEST['id'] = intval($_REQUEST['id']);
3、文件/admin/affiliate_ck.php中,对输入参数auid未进行正确类型转义,导致整型注入的发生。
参照以下修改(31行和51行):
源代码:
$logdb = get_affiliate_ck();
改成:
$_GET['auid'] = intval($_GET['auid']);
$logdb = get_affiliate_ck();
4、ecshop的/admin/comment_manage.php中,对输入参数sort_by、sort_order未进行严格过滤,导致SQL注入。
参照以下修改:
$filter['sort_by'] = empty($_REQUEST['sort_by']) ? 'add_time' : trim(htmlspecialchars($_REQUEST['sort_by']));
$filter['sort_order'] = empty($_REQUEST['sort_order']) ? 'DESC' : trim(htmlspecialchars($_REQUEST['sort_order']));
5、ecshop没有对会员注册处的username过滤,保存重的用户信息时,可以直接写入shell。
路径:/admin/integrate.php
大概109行,参照以下修改:
$code = empty($_GET['code']) ? '' : trim(addslashes($_GET['code']));
大概601行,参照以下修改:
源代码:
@file_put_contents(ROOT_PATH . 'data/repeat_user.php', $json->encode($repeat_user));
修改成:
@file_put_contents(ROOT_PATH.'data/repeat_user.php',''.$json->encode($repeat_user));
6、ecshop后台模版编译导致黑客可插入任意恶意代码。
路径:/admin/edit_languages.php
大概在第120行
$dst_items[$i] = $_POST['item_id'][$i] .' = '. '"' .$_POST['item_content'][$i]. '";';
修改为
$dst_items[$i] = $_POST['item_id'][$i] .' = '. '\'' .$_POST['item_content'][$i]. '\';';
7、ecshop过滤不严导致SQL注入漏洞。
路径:/category.php 、 /ecsapi/category.php
修改方法:javascript:;
8、ecshop的/includes/lib_insert.php文件中,对输入参数未进行正确类型转义,导致整型注入的发生。
有$arr['num'] 、$arr['id']、$arr['type']这些参数的,在函数开头加上:
大概289行加上:
$arr['num'] = intval($arr['num']);
$arr['id'] = intval($arr['id']);
大概454行加上:
$arr['id'] = intval($arr['id']);
$arr['type'] = addslashes($arr['type']);
大概495行加上:
$arr['id'] = intval($arr['id']);
9、ECSHOP支付插件存在SQL注入漏洞,此漏洞存在于/includes/modules/payment/alipay.php文件中,该文件是ECshop的支付宝插件。由于ECShop使用了str_replace函数做字符串替换,黑客可绕过单引号限制构造SQL注入语句。只要开启支付宝支付插件就能利用该漏洞获取网站数据,且不需要注册登入。
搜索代码:
$order_sn = str_replace($_GET['subject'], '', $_GET['out_trade_no']);
将下面一句改为:
$order_sn = trim(addslashes($order_sn));
关键字
|
__label__pos
| 0.550466 |
Anyone had success with HTTP/2 speculative push?
How Speculative Push with HTTP/2 Really Works
Coming to grips with the 6 stages of server PUSH
by Joe Honton
In this episode Bjørne traces the path of a web page request and its dependencies when HTTP/2 speculative push is enabled. Caution: this article may cause premature aging, white hair, and memory loss.
Full disclosure: Joe Honton is the founder of Read Write Tools and the author of Read Write Serve, the HTTP/2 server used in this article to explain the inner workings of speculative push.
Devin and Ken have been charting a course forward with HTTP/2, performing benchmark tests to see how fast they could make things go on cheap commodity servers. Only one thing was still eluding them: speculative push (SP) technology.
Ernesto was a bit envious of the impressive results being reported by Devin and Ken. He decided to enter the fray. But he knew that implementing speculative push would not be easy — others had been there and gotten mired in gory implementation details.
So he started off slowly, studying in detail exactly how HTTP/2 is optimized to provide faster page loads. He shared his findings in Discover the 4 pillars of HTTP/2. His final summary: persistent sessions, multiplexed streams, and header compression contribute to faster page loads, while prioritization fine tunes the transmission schedule to meet website-specific requirements.
But speculative push was still a big gap in his knowledge base. Here's what he knew for sure:
Most of the time, in a client/server architecture, the client is responsible for initiating a request, and the server is responsible for fulfilling the request. This is a pull request. But with HTTP/2, once a request has been initiated by the client, the server can take matters into its own hands and initiate the transfer of additional resources without waiting for the client's formal request. This is a push request.
A push request still adheres to the full HTTP request/response protocol with its familiar set of headers for content negotiation, content encoding, caching, etc. But a server both initiates and fulfills a push request, transmitting the response to the client over the persistent session that has already been established.
It does this by opening a new stream, which is interleaved with the existing source document stream. A single session can have 100 or more concurrent streams (with the actual limit being negotiated by the two parties). This concurrency means that an entire web page and all its dependent resources can be on the wire simultaneously.
Of course the public network doesn't have unlimited bandwidth capacity, so the throughput for all those resources has a finite limit. Concurrency simply allows for fewer wasted frames. Keeping the network saturated to full capacity means that that finite limit can be reached, and maximum server efficiency achieved. In one sense, speculative push is all about keeping the pipeline full.
So that's the basic concept.
But beyond this, Ernesto was getting all bogged down with sessions, streams, frames, and the TCP protocol they were all running over. He decided to reach out to anyone who had been there and had made it work.
Ernesto was getting all bogged down
He tweeted out a plea for help. "Anyone had success with #HTTP2 speculative push? I mean real success. Not just hello world."
No takers.
Then he remembered the name of someone from last year's bootcamp and DM'd him. "Hey Bjørne, did I remember correctly that you've actually gotten HTTP/2 push working?"
Bjørne's answer came quickly. "Yes. Check out the pages on https://ecmascript.engineer and you can see it in action. Just open up the browser inspector and check the network tab. You'll see 'Push' next to all the resources that the server sent unsolicited."
"Nice!" replied Ernesto, "Was it hard to get it working?"
"Initially, yes. But not anymore," answered Bjørne. "Now it's simply a matter of turning it on in the RWSERVE configuration. Here's what my config looks like:"
server {
modules {
cache-control on
etag on
push-priority on
}
response {
push-priority {
push `*.css` *weight=64
push `*.js` *weight=128
push `*.woff2` *weight=32
}
}
}
Ernesto mocked it up on a his test server. Lo and behold, it worked for him too! Simple.
But he was puzzled. So he pestered Bjørne again. "How come I didn't need to use any link rel='preload' statements? And what prevents the browser from separately requesting the same resources? And what's the purpose of the rw-pushtag cookie? And how . . ."
"Whoa," Bjørne stopped him midway through, "slow down pardner! I've been meaning to write up what I've learned. I suppose now is as good a time as any."
HTTP/2 PUSH, in a nutshell
Later that day, Ernesto received a long post from Bjørne, explaining exactly what was going on. Here's Bjørne's explanation in a nutshell:
The server's work progresses in three stages:
1. The automatic preload detection (APD) algorithm determines which resources are candidates for PUSH.
2. The server opens concurrent streams, one for each resource candidate that is approved, immediately sending bytes to the browser without waiting for a response.
3. The server calculates a pushtag for each PUSHed resource, uploading it as a cookie for the browser's safekeeping.
The browser's complementary work occurs in three separate stages:
1. Provisionally PUSHed resources are saved to the browser's push-cache.
2. The browser builds a resource manifest (of all the resources it needs to fully render the page) from the original document's DOM.
3. The browser claims any resource that is available in the push-cache, or fetches any resource that is not available using its traditional request/response algorithm.
Automatic Preload Detection
The RWSERVE HTTP/2 Server has automatic preload detection (APD) for HTML documents encoded with blue-phrase notation. That means you don't have to comb through your entire website to find which resources to target for SP. The server does it for you, just-in-time. And it caches the results.
Here's how APD works. When a document is requested the first time, the server parses and examines it for elements that reference resources using a src or href attribute. That includes these HTML tags: link, script, img, source, audio, video, track, iframe, embed and object.
Each of the external resources referenced in those tags is examined to see if it is a local resource or an external resource. The external resource references are ignored, but the local references are captured and placed in a resource map associated with the source document. The resource map is then saved to a private server cache for use by all subsequent requests for the source document, regardless of who issues the request.
To get things going, the APD algorithm examines the resource map to determine which resources are candidates for SP. Then it looks at the server's rules, as configured by the webmaster, to filter the candidates. For example, take a look at the configuration snippet I sent to you, and look at its push-priority settings.
With that setup, the filtering step will retain any candidates that it found for style sheets, scripts and fonts, but it will discard any candidates that it found for images, audios and videos, because there are no configuration rules defined that match files with an extension of jpg, gif, png, mp3, or webm.
Part of the filtering process is to remove duplicates. This sometimes occurs with images that appear in more than one place in the document — there's no point in pushing the same resource twice.
Next, the APD algorithm will order the candidates by weight, with larger weights coming first. So, even though CSS resources were found in <link> tags located in the document's <head>, and JavaScript resources were found in <script> tags located near the document's end, their priority ordering would be: JavaScript first (weight=128), style sheets next(weight=64), and fonts last(weight=32).
Now comes the clever part. There's no point in pushing a resource that the client already has. So the APD algorithm computes a pushtag for each candidate, and compares it to all of the pushtags that have already been sent to the current client. If there's a match, the candidate is discarded. Pushtags are like Etags: they uniquely identify a resource and its version.
The Big Push
With all of the preliminaries out of the way, the server is ready to go. Note that all of this will be happening prior to sending the originally requested source document down the wire.
Using the currently opened stream as a base, the server creates a series of push streams, one for each approved candidate, in descending weighted order. Each push stream's priority flag is set using the weight configured by the webmaster.
Appropriate HTTP response headers are generated for each approved candidate, including content-type, content-length, etag, cache-control and status.
Now, in sequence, each resource file is read from the server's file system, and together with the generated headers, are sent over the associated push stream. The server does not wait for the client to respond, because it doesn't. It simply starts pushing one resource after another until everything is on the wire.
While all of this is happening, the original source document is still pending. This is important. All of the resources that are to be pushed must be in transit before the source document is sent. This prevents the browser from prematurely parsing the document and requesting resources that the server still intends to push.
One more final housekeeping chore still needs to be taken care of. Keeping track of which resources have been pushed. The server creates a cookie value as an encoded string of pushtags, and adds a set-cookie header to the source document's pending response. By doing this, the browser keeps track of which resources it has received via speculative push. In this way, on subsequent requests for other web pages, the browser can inform the server of which resources to ignore. The server examines the incoming cookie, and the APD algorithm discards push candidates that it has already pushed on a prior request.
Finally, and only after all the push streams have initiated their responses, the server sends the source document's response, with the pushtag cookie and the file's contents.
A few caveats:
• Only resources that are actually used by the source document are of any value. All others are discarded and don't make it into the browser's cache. So pushing resources in anticipation of using them on future pages won't work.
• Only resources from the same domain can be pushed. So, for example, you can't optimize links to Google Analytics or Twitter feeds or other third-party mashups.
• Finally, only push a resource once. If you accidentally push the same resource twice, the browser will reset the stream and throw away whatever it has so far.
The Browser's Role in All of This
All of this server processing occurs within the initial request from the browser. No additional back and forth occurs. One request / multiple responses. Here's what it looks like from the browser's perspective:
A user requests a source document, using a standard GET method, without any special headers. The rw-pushtag cookie, if present, is sent along with the initial source document request.
While awaiting the server's response for the source document itself, the browser begins to receive data over the incoming push streams opened by the server. Those data streams are provisionally placed in the "unclaimed push streams container" a.k.a. the push-cache.
Once all of the push streams have been fulfilled and closed, the source document data, that has been pending on stream 0, is finalized.
At this point the browser, which may have already begun tokenizing the source document while the push streams are arriving, finishes its HTML tokenization. It is now ready to build a resource manifest, from the HTML DOM tree, of all the document resources that are needed by the page.
For each reference in the resource manifest, the browser checks the push-cache to see there is a provisional resource with the same name and the same content-type. If there is, it is "claimed".
When a document resource is not found in the push-cache, the browser follows its standard fulfillment rules: checking its persistent http-cache and, if found but expired, issuing a conditional request or, if missing, issuing an unconditional request from the server.
On the other hand, for each claimed resource, the browser creates a reference to it in the browser's memory-cache. It then examines the cache-control and etag headers sent along with the pushed resource, and copies the resource to the persistent http-cache when those instructions say to do so.
Once all of the document resources listed in the manifest have been examined, and either claimed or fulfilled, control is passed to the browser's renderer. At this point the push-cache has completed its task and is no longer needed. The browser discards any unclaimed resources that the server might have erroneously pushed.
The only other housekeeping chore is for the browser to honor the set-cookie: pushtag header sent along with the server's response to the original document. It is processed following the standard browser behavior for cookies, and sent along with all future requests to the same domain, expiring according to standard cookie expiration rules.
If the server's APD algorithm has correctly identified all of the resources needed by the source document, then all of this processing will have occurred within a single request/response cycle. If anything goes haywire with that request, the browser will fall back to it's standard fetching and rendering process.
Wrapping it up
Ernesto was blown away by the write-up. "Wow. This is way more than I expected. Thanks, Bjørne!"
"NP", Bjørne shot back.
"BTW, How good is the APD algorithm? It looks like it doesn't pick up CSS-defined background images or @font-face rules."
"That's correct," Bjørne replied. "But if you add preload statements to your document's head, you can get those pushed as well. Something like this:
<link href='image/hero.png' rel='preload' as='image' />
<link href='fonts/neue.woff2' rel='preload' as='font' crossorigin />
"¡Muchas gracias amigo!"
"Vær så god."
Ernesto was starting to feel a little more confident about HTTP/2 speculative push. Still, he wanted to measure things to be sure it was the right thing to do.
No minifig characters were harmed in the production of this Tangled Web Services episode.
How Speculative Push with HTTP/2 Really Works — Coming to grips with the 6 stages of server PUSH
🔎
|
__label__pos
| 0.611955 |
Why Automate ?
18. April 2019 Uncategorized 0
Digital Transformation and Digital Technologies are gaining momentum, therefore applications need to get further robust and market ready. The challenging digital transformation requires an uncompromising testing plan that needs to be repetitive and consistent. This will be feasible only with a test automation plan that can facilitate easy, repetitive, and cost-effective testing.
The latest software testing training in Cochin is also teaching advanced methods to students that can help them to oversee the process of automation testing. If you are keen on developing a career as a software tester, it is important that you learn everything about the industry and its testing operations from a leading education center which gives the best software testing training in Kerala that can offer highly competent software performance testing lessons.
While planning the delivery of the software projects, there is always a need to decide on the test automation strategy and what aspects of the application that should be automated. So, it is important to understand the reason or objective of automation. It can bother your team and ultimately fail your business objectives if not done in a thought-through way
Talking about automating functional testing, the objective here is to automate the testing of the features and functionality of the software and touch upon every probability of failure or dysfunctional.
Test Automation is mainly considered to avoid repeated manual work, gain faster response, cut down the time for running tests, and ensure that our tests are consistent with the expected presumptions and objectives. Moreover, automation can help eliminate manual errors while executing the tests repeatedly. There are also chances that the manual execution of tests might not give similar results each time it is tested.
Another major point to consider is to generate quicker feedback, resulting in faster time-to-market. When feedback is received at a rocket’s pace, it leads to effective continuous integration and continuous delivery. Moreover, it helps to preserve the tests as an asset for making the framework available whenever required for similar testing.
This makes the testing process cost-effective and lucrative. In this way, automation further helps in implementing changes and also refactoring the code.
However, there could be some challenges while performing functional tests, as huge sets of test cases get generated. This leads to inconvenience, as during lengthy regression tests, there could be issues in committing changes. As a result, developers tend to commit less regularly.
So, it is critical to consider the essential factors for successful functional test automation that makes the approach more efficient and helps in covering a large section of the application while testing.
The objective is to ensure that all functional aspects of the application are tested and defects are identified. Ultimately, enabling development of a valuable test suite and further focus on critical areas of the product. This helps target 2 goals – keep the testing process relevant to the business needs and bring down the rate of failure with every feature.
|
__label__pos
| 0.89983 |
How to Return All Odd Numbers in a JavaScript Array?
You can find all odd numbers in a JavaScript array by:
You should avoid checks for odd numbers with "n % 2 === 1" (where n is an integer, and % is the remainder operator) because, for negative numbers, this would return -1. Therefore, it may be easier to simply check if the number is not an even number (i.e. n % 2 !== 0) because it accounts for both, negative and positive numbers (since -0 === 0 in JavaScript). Otherwise, you would need an extra/explicit check for negative numbers.
Using Array.prototype.filter()
To get all odd numbers in an array of integers using the Array.prototype.filter() method, you can do the following:
// ES5+
const numbers = [-5, -2, -1, 0, 1, 3, 4, 7];
const oddNumbers = numbers.filter(function (number) {
return number % 2 !== 0;
});
console.log(oddNumbers); // [-5, -1, 1, 3, 7]
This would return a new array containing all odd numbers. In ES6+, you can shorten this to a one-liner using the arrow function syntax:
// ES6+
const numbers = [-5, -2, -1, 0, 1, 3, 4, 7];
const oddNumbers = numbers.filter((number) => number % 2 !== 0);
console.log(oddNumbers); // [-5, -1, 1, 3, 7]
When no matches are found, an empty array is returned:
// ES6+
const numbers = [-2, 0, 4];
const oddNumbers = numbers.filter((number) => number % 2 !== 0);
console.log(oddNumbers); // []
Using a Loop
You can loop over an array of integers and create a new array to which you add all odd numbers to, for example, like so:
const numbers = [-5, -2, -1, 0, 1, 3, 4, 7];
const oddNumbers = [];
for (let i = 0; i < numbers.length; i++) {
if (numbers[i] % 2 !== 0) {
oddNumbers.push(numbers[i]);
}
}
console.log(oddNumbers); // [-5, -1, 1, 3, 7]
This would return a new array containing all odd numbers, and when no match is found, an empty array is returned.
Hope you found this post useful. It was published . Please show your love and support by sharing this post.
|
__label__pos
| 0.999932 |
Take the 2-minute tour ×
Server Fault is a question and answer site for professional system and network administrators. It's 100% free, no registration required.
I need some help on speeding websites loading.
I have two virtual machines created on my ESXi host. Thay are both on public IP addresses and that is how they communicate.
On one virtual machines I have apache installed and configured and that is where the websites are located. It is shared hosting. And on the second virtual machine I have mysql 5.5 installed.
I configured it all and it all works. My websites are loading, but the problem is speed. I have waiting time on website loading from 1.5s to 3s. This seems a little bit high and I want to reduce it.
When I put my websites databases on localhost (where the apache is also) it opens very fast with waiting time in 100-500ms. Is there a way to do it?
Here is the ping from apache to database VM:
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=1 ttl=63 time=0.374 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=2 ttl=63 time=0.953 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=3 ttl=63 time=0.369 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=4 ttl=63 time=0.630 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=5 ttl=63 time=0.278 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=6 ttl=63 time=0.408 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=7 ttl=63 time=0.325 ms
Mysql VM has 4 GB of RAM, and I have set mysql to use about 3 GB of it.
I have configured mysql with this options:
skip-name-resolve
skip-host-cache
local_infile=0
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow_query.log
long_query_time = 4
max_allowed_packet = 12MB
table_cache = 2048
table_open_cache = 2048
table_definition_cache = 2048
sort_buffer_size = 4M
join_buffer_size = 4M
read_buffer_size = 4M
read_rnd_buffer_size = 4M
key_buffer_size = 128M
myisam_sort_buffer_size = 128M
thread_cache_size = 16
query_cache_size= 128M
query_cache_limit = 4M
tmp_table_size = 512MB
max_heap_table_size = 512MB
innodb_buffer_pool_size = 800M
innodb_additional_mem_pool_size = 128M
innodb_file_per_table
innodb_log_file_size = 256M
innodb_flush_method=O_DIRECT
max_connections=80
wait_timeout = 30
interactive_timeout = 30
How can I get rid of all this latency? Mysqltuner and tuning-primer scripts tells me that I have configured almost everything ok. But there is still high level of temp tables created on disk:
[!!] Temporary tables created on disk: 41% (893 on disk / 2K total).
How can I reduce this? Because it is shared hosting I don't know how much BLOB or TEXT fields are there in databases.
Or is there a way that I can connect these two VMs internally (not by public IP addresses, but also to remain public) so maybe that will improve performance?
If you need anything else I will post it (like mysqltuner report,....). Thanks in advance.
share|improve this question
You say that Apache is running on a virtual machine but also say that it's shared hosting. Perhaps you should explain that in more detail because as soon as I see "shared hosting" I see a situation where it's just about impossible to determine what resources are really available at any given point in time, particularly CPU power. – John Gardeniers Aug 2 '12 at 11:43
Yes I meant what I said it is shared hosting. It runs apache. My customers put their websites there. And they put their databases on the other virtual machine. And this has nothing to do with CPU power at all. Server load is quite normal. I just got latency from network because my databases are on other VM that is on public IP. I solved it by creating virtual switch and connecting these two VMs on that vswitch. But thanks anyway. – dennisg Aug 3 '12 at 12:12
If you've solved the problem you should post the solution as an answer. – John Gardeniers Aug 3 '12 at 14:01
add comment
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Browse other questions tagged or ask your own question.
|
__label__pos
| 0.542419 |
Site Sponsors:
Waterfall & Agile - Can't We All Just Get Along?
Big Need
Working with the Pentagon, one discovers that planning works like any other big business. There is a very real need to let people who are giving you lots of dollars know what they are going to get for their money.
Indeed, from DoDAF to UML, the designs we generate - as well as the software re-use we plan - often needs to be pretty specific. -Not only are lives at risk, but in a time when national defense is rapidly loosing funding to "social progress," the Waterfall approach is increasingly important. Why? Because every dollar counts. More than one organization can be involved. Meticulous planning is required.
Old Patterns
In general, many have discovered that - the bigger the undertaking, the larger the need for comprehensive planning - sometimes down to a demonstrative code-level / integration test plan.
Conversely, when it comes to our prototypical, polar, or recursive development activities (modernly known as "Agile") the need to take a sprint down the requirements de-jure, manage our burn-down-lists, test, and even continuously integrate, are time-proven ways to keep both system & software teams accountable.
Unfortunately, many folks do not know how old "Agile" techniques truly are! Rather than understanding that Waterfall and Agile have always worked together, I often discover people saying odd things such as "we are an agile", or "we are a waterfall" company?
Indeed, when someone is handing you a check for hundreds of thousands - or millions - of dollars, rather than expecting the "trust me, we are agile" mantra, or “waterfall will get us there,” it is not beyond the realm of realistic exception to have someone say "that is not enough."
Strategy & Tactics
Of course, software development is all about patterns. When it comes to the unnecessary camps between Waterfall & Agile, I therefore like to tell developers that – for any large undertaking – there are usually two (2) different plans: The Strategic, and the Tactical.
Sound like warfare? Perhaps. But strategic and tactical planning patterns are found in several other places, as well.
For example, in large-scale, modern business, the need for the “where are we going” over the “what are you doing” techniques often find their expression in an “Enterprise Architecture”, and the ever-present “Implementation Architecture?” While the delineation between the two needs might indeed seem a little loose at the moment, such is the way of a work-in-progress abstraction.
Nevertheless, in highly visible, large undertakings I propose that Waterfall is the strategic, while Agile – the tactical? Why? Because not only can BOTH Agile & Waterfall be rightly required, but for decades they BOTH have worked together just fine.
New Idea?
In warfare, knowing where we are going, as well as how we are getting there, are important. Considering each is why both strategic & tactical planning are important. Waterfall planing, Agile execution.
In software, and from time to time, surely both Waterfall & Agile should be found co-existing. Certainly whenever huge amounts of resources are to be expended. Individually, both undertakings just make sense. Together, both Agile & Waterfall can help us work miracles. Just like in battle.
Heads & Tails
Whenever both Agile & Waterfall approaches are working well together, we can certainly see the strategic, as well as the tactical aspects-in-motion. Yet, because the concept of creating a synergistic cooperation between Agile & Waterfall is unique to many, perhaps a new term is required?
So I was thinking (a dangerous pastime, I know!) -In as much as the collaboration between the strategic & tactical dates back thru a millennia of combat, rather than soft-warfare, perhaps a new moniker is required. Whenever tactics and strategies begin to complement one another a tad more classically than they presently might, then perhaps we should call it waging 'warefare, instead?
Enjoy the Journey!
-Rn
[ view entry ] ( 2246 views ) | permalink
<<First <Back | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | Next> Last>>
|
__label__pos
| 0.814621 |
Often asked: Add User To Sudo Group?
How do I add an existing user to a sudo group?
Steps to Add Sudo User on Ubuntu
1. Step 1: Create New User. Log into the system with a root user or an account with sudo privileges.
2. Step 2: Add User to Sudo Group. Most Linux systems, including Ubuntu, have a user group for sudo users.
3. Step 3: Verify User Belongs to Sudo Group.
4. Step 4: Verify Sudo Access.
How do I give a user sudo access?
To use this tool, you need to issue the command sudo -s and then enter your sudo password. Now enter the command visudo and the tool will open the /etc/sudoers file for editing). Save and close the file and have the user log out and log back in. They should now have a full range of sudo privileges.
What is sudo group?
Sudo (sometimes considered as short for Super-user do) is a program designed to let system administrators allow some users to execute some commands as root (or another user). The basic philosophy is to give as few privileges as possible but still allow people to get their work done.
You might be interested: Quick Answer: At The Moment Synonym?
How do I create a sudo group?
Steps to Create a Sudo User
1. Log in to your server. Log in to your system as the root user: ssh [email protected]_ip_address.
2. Create a new user account. # Create a new user account using the adduser command.
3. Add the new user to the sudo group. By default on Ubuntu systems, members of the group sudo are granted with sudo access.
How do I add a user to a group in Linux?
You can add a user to a group in Linux using the usermod command. To add a user to a group, specify the -a -G flags. These should be followed by the name of the group to which you want to add a user and the user’s username.
How do I add a user in Ubuntu terminal?
Steps to create a user account on Ubuntu Linux
1. Open the terminal application.
2. Log in to remote box by running the ssh [email protected]
3. To add a new user in Ubuntu run sudo adduser userNameHere.
4. Enter password and other needed info to create a user account on Ubuntu server.
How do you add a user in Linux?
Steps to add new user on Linux:
1. Launch a terminal application.
2. Run adduser command with a username as argument.
3. Enter password for current user if necessary.
4. adduser will add the user along with other details.
5. Enter desired password for the user followed by [ENTER] twice.
How do I give Sudo root access to user in Linux?
To enable sudo for your user ID on RHEL, add your user ID to the wheel group:
1. Become root by running su.
2. Run usermod -aG wheel your_user_id.
3. Log out and back in again.
You might be interested: Quick Answer: Get Rid Of Synonym?
How do I give a user Sudo access in Linux?
Verify it with ADSI Edit: open the Schema naming context and look for the sudoRole class. Now create the sudoers OU on your domain root, this OU will hold all the sudo settings for all your Linux workstations. Now set its attributes as follows:
1. sudoHost: foo32linux.
2. sudoCommand: ALL.
3. sudoUser: stewie. griffin.
How do I know if sudo user?
To know whether a particular user is having sudo access or not, we can use -l and -U options together. For example, If the user has sudo access, it will print the level of sudo access for that particular user. If the user don’t have sudo access, it will print that user is not allowed to run sudo on localhost.
How do I list users in Linux?
In order to list users on Linux, you have to execute the “cat” command on the “/etc/passwd” file. When executing this command, you will be presented with the list of users currently available on your system. Alternatively, you can use the “less” or the “more” command in order to navigate within the username list.
How do I see users in wheel group?
To find out who is the in wheel group, look in the /etc/group file, but keep in mind that users may be members of the wheel group through their /etc/passwd file entries. To see if special privileges are given to the wheel group (this is not uncommon), look at the /etc/sudoers file.
How do I list users in Ubuntu?
Listing users in Ubuntu can be found in the /etc/passwd file. The /etc/passwd file is where all your local user information is stored. You can view the list of users in the /etc/passwd file through two commands: less and cat.
You might be interested: Quick Answer: What To Do In Palawan?
How do I login as root?
Log in as the root user
1. Choose Apple menu > Log Out to log out of your current user account.
2. At the login window, log in with the user name ”root” and the password you created for the root user. If the login window is a list of users, click Other, then log in.
Leave a Reply
Your email address will not be published. Required fields are marked *
Releated
Question: Turn Off Avast Cybercapture?
If you would like to disable CyberCapture, open the Avast user interface and go to ☰ Menu ▸ Settings ▸ Protection ▸ Core Shields. Untick the box next to Enable CyberCapture. Contents1 How do I temporarily turn off Avast Antivirus?2 How do I stop Avast scanning?3 What are the 5 ways to disable Avast Antivirus?4 […]
Question: Autocad 2018 System Requirements?
Solution: System requirements for AutoCAD 2018 CPU Type 32-bit: 1 gigahertz (GHz) or faster 32-bit (x86) processor 64-bit: 1 gigahertz (GHz) or faster 64-bit (x64) processor Memory 32-bit: 2 GB (4 GB recommended) 64-bit: 4 GB (8 GB recommended) 11 • Contents1 Is 4GB RAM enough for AutoCAD 2018?2 How much RAM do I need […]
|
__label__pos
| 0.99996 |
Jump to: navigation, search
EclipseLink/Development/Testing/foundation
< EclipseLink | Development | Testing
Revision as of 15:36, 26 November 2008 by Eric.gwin.oracle.com (Talk | contribs) (Running the foundation LRG)
Running the foundation LRG
The foundation tests consist of an LRG (long regression), SRG (short regression), and various non-classified tests (non-LRG). If any code was changed in org.eclipse.persistence.core it is recommended to run the LRG. If minimal changes were made the SRG is sufficient. Depending on what was changed, running some of the non-LRG tests may also be desirable.
There are several ways to run the tests.
• Ant (1.7 or greater)
• Eclipse JUnit
• Testing Browser
Prior to running the tests, a build.properties file must exist in your user home directory with the following properties:
• junit.lib=<full path of junit 4 jar>
• tools.lib=<full path of tools jar in your JDK installation - usually in JAVA_HOME/lib>
• jdbc.driver=<full path to your JDBC driver>
• db.driver=<classname of your JDBC driver>
• db.url=<JDBC url of your DB>
• db.user=<db username>
• db.pwd=<db password>
• (optional) db.platform=<EclipseLink Database Platform to use> - this is only necessary if EclipseLink does not properly dectect you DB with its database detection
Ant
The tests can be run from two ant build scripts
1. The base build.xml (in trunk)
You can run the following targets:
• test-core - run the Core LRG
• test-core-srg - run the Core SRG
2. The build.xml script in foundation/eclipselink.core.test
• test-lrg - run the Core LRG
• test-srg - run the Core SRG
Test logging will appear on Standard out and Standard Error. Test results will appear in foundation/eclipselink.core.test/reports
Eclipse JUnit
The eclipselink.core.test Eclipse project contains launch targets in the elipselink.core.test/run directory to run the LRG or SRG through the Eclipse JUnit integration.
Note that the majority of the foundation tests are built using a legacy test framework which has been migrated to extend JUnit. One issue is that the total number of tests are unknown before the tests are setup, so the progress bar in Eclipse will not be accurate.
Ensure the following (Java | Build Path | Classpath Variables) are set:
• TOOLS_LIB variable has been mapped to your JDK tools.jar
• JDBC_LIB variable has been mapped to your database's JDBC driver (vendor-agnostic driver)
• Some JDBC drivers may require the system library path to be set, or dlls to be on the path
Testing Browser
Early in its existence, this product was tested with an in-house testing framework. A large number of tests are still available through this test framework. Testing is gradually being migrated to frameworks such as JUnit, but the GUI tool available in the legacy framework is still quite useful for testing. The code for this test framework is stored in the eclipselink.core.test project.
The eclipselink.core.test Eclipse project contains launch targets in the eclipselink.core.test/run directory to run the Testing Browser. Prior to running that target, you should run the "process-resource" target in the build.xml file in eclipselink.core.test (If you have run a full build, it will have been run already). Launching the browser is a matter of running that build target.
When you run the test target:
1. Input your DB login information. You can do this by selecting a value from the "Quick Login" drop box and then edit the form on the right side of the GUI with any information that is different from what is already populated
2. Run either the SRGTestModel or the LRGTestModel listed on the right side of the GUI by selecting it and then clicking the "Run Test" button.
3. You can run tests individually by navigating to them. You can open folders by double-clicking on them. If they don't immediately open, click the "setup" button and when setup is done, they should open.
You could also configure your own run target:
1. Run the process.resource target of the build.xml in the base directory of the eclipselink.core.test project as an ant script. This will copy some xml files to your eclipselink.core.test/run directory.
2. Create a Java Application run target on the eclipselink.core.test project - I'll call it Testing Browser
3. Use org.eclipse.persistence.testing.framework.ui.TestingBrowserFrame as the main class
4. Add "-Xmx256m -Djava.security.manager -Djava.security.policy==${workspace_loc:eclipselink.core.test}/resource/java.policy.allpermissions" to the VM arguments section of the Arguments tab
5. Add the following to the classpath: (note these are listed by category and you may be able to run subsets of the tests without certain categories)
1. JDBC
1. your JDBC driver of choice
2. JPA
1. jpa.core
2. jpa.test
3. Oracle Extensions
1. eclipselink.extension.oracle (if doing Oracle specific testing)
2. eclipselink.extension.oracle.test (if doing Oracle specific testing)
3. jars required to compile eclipselink.extension.oracle - aqapi.jar, sdoapi.jar, dms.jar, xdb.jar, xml.jar, xmlparserv2.jar (if doing Oracle specific testing)
4. External Extensions
1. <trunk>/extension.lib.external (user created)
2. jars required run included (Testing Browser.launch) configuration
3. Add an empty library project called extension.lib.external (user created) and put any and the following launch required jars into this folder - point the project to this folder and any dependencies in the classpath of the included eclipse launch targets will resolve.
4. postgresql_jdbc3_82.jar, db2java_9.zip, jconn3.jar, mysql-connector-java-5.0.7-bin.jar
5. Java
1. a TOOLS_LIB variable that points to the tools jar in your java installation
6. Generated classes
1. Add the run directory of the eclipselink.core.test project. This is where all java output goes. We output some java files and then compile them. They will go there.
|
__label__pos
| 0.555159 |
Linear Equations
NCERT Exercise 3.2
Part 1
Question 1: Form a pair of linear equations in the following problems, and find their solutions graphically.
(a) 10 students of Class X took part in a Mathematics quiz. If the number of girls is 4 more than the number of boys, find the number of boys and girls who took part in the quiz.
Solution: Let us assume that number of boys = x and number of girls = y. We get following equations as per question:
`x + y = 10`
Or, `y = 10 – x` ………(1)
This equation will give following values for x and y;
x1234
y9876
`y = x + 4` ………..(2)
This equation will give following values for x and y;
x1234
y5678
Following graph is plotted for the given pair of linear equations:
linear equations exercise solution
Number of boys = 3 and number of girls = 7
(b) 5 pencils and 7 pens together cost Rs. 50, whereas 7 pencils and 5 pens together cost Rs. 46. Find the cost of one pencil and that of one pen.
Solution: Let us assume that price of a pencil is x and that of a pen is y. We get following equations as per question:
`5x + 7y = 50` ………(1)
This equation will give following values for x and y;
x1234
y6.45.754.2
`7x + 5y = 46` ……….(2)
This equation will give following values for x and y;
x1234
y7.86.453.6
Following graph is plotted for the given pair of linear equations.
linear equations exercise solution
Price of one pencil = Rs. 3 and Price of one pen = Rs. 5
Question 2: On comparing the ratios `(a_1)/(a_2)`, `(b_1)/(b_2)` and `(c_1)/(c_2)` find out whether the lines representing the following pairs of linear equations intersect at a point, are parallel or coincident.
(a) `5x – 4y + 8 = 0` and `7x + 6y – 9 = 0`
Solution: In the given pair of linear equations;
`(a_1)/(a_2)=5/7`
`(b_1)/(b_2)=-4/6=-2/3`
It is clear that;
`(a_1)/(a_2)≠(b_1)/(b_2)`
Hence the lines representing the given pair of linear equations intersect at a point.
(b) `9x + 3y + 12 = 0` and `18x + 6y + 24 = 0`
Solution: In the given pair of linear equations;
`(a_1)/(a_2)=(9)/(18)=1/2`
`(b_1)/(b_2)=3/6=1/2`
`(c_1)/(c_2)=(12)/(24)=1/2`
It is clear that;
`(a_1)/(a_2)=(b_1)/(b_2)=(c_1)/(c_2)`
Hence the lines representing the given pair of linear equations will be coincident.
(c) `6x – 3y + 10 = 0` and `2x- y + 9 = 0`
Solution: For the given pair of linear equations;
`(a_1)/(a_2)=6/2=3`
`(b_1)/(b_2)=(-3)/(-1)=3`
`(c_1)/(c_2)=(10)/(9)`
It is clear that;
`(a_1)/(a_2)=(b_1)/(b_2)≠(c_1)/(c_2)`
Hence the lines representing the given pair of linear equations will be parallel.
Question 3: On comparing the ratios `(a_1)/(a_2)`, `(b_1)/(b_2)` and `(c_1)/(c_2)` find out whether the following pairs of linear equations are consistent or inconsistent.
(a) `3x + 2y = 5` and `2x – 3y = 7`
Solution: For the given pair of linear equations;
`(a_1)/(a_2)=3/2`
`(b_1)/(b_2)=(2)/(-3)`
It is clear that;
`(a_1)/(a_2)≠(b_1)/(b_2)`
Hence, the given pair of linear equations is consistent.
(b) `2x – 3y = 8` and `4x – 6y = 9`
Solution: For the given pair of linear equations;
`(a_1)/(a_2)=2/4=1/2`
`(b_1)/(b_2)=(-3)/(-6)=1/2`
`(c_1)/(c_2)=8/9`
It is clear that;
`(a_1)/(a_2)=(b_1)/(b_2)≠(c_1)/(c_2)`
Hence the given pair of linear equations is inconsistent.
(c) `(3)/(2)x + (5)/(3)y = 7` and `9x – 10y = 14`
Solution: For the given pair of linear equations;
`(a_1)/(a_2)=(3)/(2)÷9=(3)/(18)`
`(b_1)/(b_2)=(5)/(3)÷-10=-5/6`
It is clear that;
`(a_1)/(a_2)≠(b_1)/(b_2)`
(d) `(4)/(3)x + 2y = 8` and `2x + 3y = 12`
Solution: For the given pair of linear equations;
`(a_1)/(a_2)=(4)/(3)÷2=(2)/(3)`
`(b_1)/(b_2)=2/3`
`(c_1)/(c_2)=2/3`
It is clear that;
`(a_1)/(a_2)=(b_1)/(b_2)= (c_1)/(c_2)`
Hence the given pair of linear equations is dependent and consistent.
Copyright © excellup 2014
|
__label__pos
| 0.999961 |
blob: ff4dc0893e2032bf16c2c340ea8aecbd12220772 [file] [log] [blame]
// Copyright 2018 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "peridot/bin/sessionmgr/storage/session_storage_xdr.h"
#include "peridot/lib/base64url/base64url.h"
namespace modular {
// Serialization and deserialization of fuchsia::modular::internal::StoryData
// and fuchsia::modular::StoryInfo to and from JSON.
namespace {
fuchsia::ledger::PageId PageIdFromBase64(const std::string& base64) {
// Both base64 libraries available to us require that we allocate an output
// buffer large enough to decode any base64 string of the input length, which
// for us it does not know contains padding since our target size is 16, so we
// have to allocate an intermediate buffer. Hex would not require this but
// results in a slightly larger transport size.
std::string decoded;
fuchsia::ledger::PageId page_id;
if (base64url::Base64UrlDecode(base64, &decoded)) {
size_t size;
if (decoded.length() != page_id.id.size()) {
FXL_LOG(ERROR) << "Unexpected page ID length for " << base64
<< " (decodes to " << decoded.length() << " bytes; "
<< page_id.id.size() << " expected)";
size = std::min(decoded.length(), page_id.id.size());
memset(page_id.id.data(), 0, page_id.id.size());
} else {
size = page_id.id.size();
}
memcpy(page_id.id.data(), decoded.data(), size);
} else {
FXL_LOG(ERROR) << "Unable to decode page ID " << base64;
}
return page_id;
}
std::string PageIdToBase64(const fuchsia::ledger::PageId& page_id) {
return base64url::Base64UrlEncode(
{reinterpret_cast<const char*>(page_id.id.data()), page_id.id.size()});
}
// Serialization and deserialization of fuchsia::modular::internal::StoryData
// and fuchsia::modular::StoryInfo to and from JSON. We have different versions
// for backwards compatibilty.
//
// Version 0: Before FIDL2 conversion. ExtraInfo fields are stored as "key"
// and "value", page ids are stored as vector.
void XdrStoryInfoExtraEntry_v0(
XdrContext* const xdr, fuchsia::modular::StoryInfoExtraEntry* const data) {
xdr->Field("key", &data->key);
xdr->Field("value", &data->value);
}
void XdrStoryInfo_v0(XdrContext* const xdr,
fuchsia::modular::StoryInfo* const data) {
xdr->Field("last_focus_time", &data->last_focus_time);
xdr->Field("url", &data->url);
xdr->Field("id", &data->id);
xdr->Field("extra", &data->extra, XdrStoryInfoExtraEntry_v0);
}
void XdrStoryData_v0(XdrContext* const xdr,
fuchsia::modular::internal::StoryData* const data) {
FXL_CHECK(xdr->op() == XdrOp::FROM_JSON)
<< "A back version is never used for writing.";
data->set_story_page_id(fuchsia::ledger::PageId());
xdr->Field("story_info", data->mutable_story_info(), XdrStoryInfo_v0);
xdr->Field("story_page_id", &data->mutable_story_page_id()->id);
}
// Version 1: During FIDL2 conversion. ExtraInfo fields are stored as "key"
// and "value", page ids are stored as base64 string.
void XdrStoryInfoExtraEntry_v1(
XdrContext* const xdr, fuchsia::modular::StoryInfoExtraEntry* const data) {
xdr->Field("key", &data->key);
xdr->Field("value", &data->value);
}
void XdrStoryInfo_v1(XdrContext* const xdr,
fuchsia::modular::StoryInfo* const data) {
xdr->Field("last_focus_time", &data->last_focus_time);
xdr->Field("url", &data->url);
xdr->Field("id", &data->id);
xdr->Field("extra", &data->extra, XdrStoryInfoExtraEntry_v1);
}
void XdrStoryData_v1(XdrContext* const xdr,
fuchsia::modular::internal::StoryData* const data) {
static constexpr char kStoryPageId[] = "story_page_id";
xdr->Field("story_info", data->mutable_story_info(), XdrStoryInfo_v1);
switch (xdr->op()) {
case XdrOp::FROM_JSON: {
std::string page_id;
xdr->Field(kStoryPageId, &page_id);
if (page_id.empty()) {
} else {
data->set_story_page_id(fuchsia::ledger::PageId());
data->set_story_page_id(PageIdFromBase64(page_id));
}
break;
}
case XdrOp::TO_JSON: {
std::string page_id;
if (data->has_story_page_id()) {
page_id = PageIdToBase64(data->story_page_id());
}
xdr->Field(kStoryPageId, &page_id);
break;
}
}
}
// Version 2: After FIDL2 conversion was complete. ExtraInfo fields are stored
// as @k and @v, page ids are stored as array wrapped in a struct.
void XdrStoryInfoExtraEntry_v2(
XdrContext* const xdr, fuchsia::modular::StoryInfoExtraEntry* const data) {
xdr->Field("@k", &data->key);
xdr->Field("@v", &data->value);
}
void XdrStoryInfo_v2(XdrContext* const xdr,
fuchsia::modular::StoryInfo* const data) {
xdr->Field("last_focus_time", &data->last_focus_time);
xdr->Field("url", &data->url);
xdr->Field("id", &data->id);
xdr->Field("extra", &data->extra, XdrStoryInfoExtraEntry_v2);
}
void XdrPageId_v2(XdrContext* const xdr, fuchsia::ledger::PageId* const data) {
xdr->Field("id", &data->id);
}
void XdrStoryData_v2(XdrContext* const xdr,
fuchsia::modular::internal::StoryData* const data) {
xdr->Field("story_info", data->mutable_story_info(), XdrStoryInfo_v2);
xdr->Field("story_page_id", data->mutable_story_page_id(), XdrPageId_v2);
}
// Version 3: ExtraInfo fields are stored as @k and @v, page ids are stored as
// array, and we set an explicit @version field.
void XdrStoryData_v3(XdrContext* const xdr,
fuchsia::modular::internal::StoryData* const data) {
if (!xdr->Version(3)) {
return;
}
// NOTE(mesch): We reuse subsidiary filters of previous versions as long as we
// can. Only when they change too we create new versions of them.
xdr->Field("story_info", data->mutable_story_info(), XdrStoryInfo_v2);
xdr->Field("story_page_id", data->mutable_story_page_id(), XdrPageId_v2);
}
// Version 4: Includes is_kind_of_proto_story field.
void XdrStoryData_v4(XdrContext* const xdr,
fuchsia::modular::internal::StoryData* const data) {
if (!xdr->Version(4)) {
return;
}
// NOTE(mesch): We reuse subsidiary filters of previous versions as long as we
// can. Only when they change too we create new versions of them.
xdr->Field("story_info", data->mutable_story_info(), XdrStoryInfo_v2);
xdr->Field("story_page_id", data->mutable_story_page_id(), XdrPageId_v2);
}
// Version 5: Includes story_name field.
void XdrStoryData_v5(XdrContext* const xdr,
fuchsia::modular::internal::StoryData* const data) {
if (!xdr->Version(5)) {
return;
}
// NOTE(mesch): We reuse subsidiary filters of previous versions as long as we
// can. Only when they change too we create new versions of them.
xdr->Field("story_info", data->mutable_story_info(), XdrStoryInfo_v2);
xdr->Field("story_name", data->mutable_story_name());
xdr->Field("story_page_id", data->mutable_story_page_id(), XdrPageId_v2);
}
void XdrStoryOptions_v1(XdrContext* const xdr,
fuchsia::modular::StoryOptions* const data) {
xdr->Field("kind_of_proto_story", &data->kind_of_proto_story);
}
// Version 6: Includes story_options field.
void XdrStoryData_v6(XdrContext* const xdr,
fuchsia::modular::internal::StoryData* const data) {
if (!xdr->Version(6)) {
return;
}
// NOTE(mesch): We reuse subsidiary filters of previous versions as long as we
// can. Only when they change too we create new versions of them.
xdr->Field("story_info", data->mutable_story_info(), XdrStoryInfo_v2);
xdr->Field("story_name", data->mutable_story_name());
xdr->Field("story_options", data->mutable_story_options(),
XdrStoryOptions_v1);
xdr->Field("story_page_id", data->mutable_story_page_id(), XdrPageId_v2);
}
} // namespace
// clang-format off
XdrFilterType<fuchsia::modular::internal::StoryData> XdrStoryData[] = {
XdrStoryData_v6,
XdrStoryData_v5,
XdrStoryData_v4,
XdrStoryData_v3,
XdrStoryData_v2,
XdrStoryData_v1,
XdrStoryData_v0,
nullptr,
};
// clang-format on
} // namespace modular
|
__label__pos
| 0.988956 |
Login
Python-like string interpolation in Javascript
Author:
phxx
Posted:
June 20, 2010
Language:
JavaScript
Version:
1.2
Tags:
template javascript string interpolation replace
Score:
2 (after 2 ratings)
Provides python-like string interpolation. It supports value interpolation either by keys of a dictionary or by index of an array.
Examples:
interpolate("Hello %s.", ["World"]) == "Hello World."
interpolate("Hello %(name)s.", {name: "World"}) == "Hello World."
interpolate("Hello %%.", {name: "World"}) == "Hello %."
This version doesn't do any type checks and doesn't provide formating support.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
/**
* Provides python-like string interpolation.
* It supports value interpolation either by keys of a dictionary or
* by index of an array.
*
* Examples::
*
* interpolate("Hello %s.", ["World"]) == "Hello World."
* interpolate("Hello %(name)s.", {name: "World"}) == "Hello World."
* interpolate("Hello %%.", {name: "World"}) == "Hello %."
*
* This version doesn't do any type checks and doesn't provide
* formating support.
*/
function interpolate(s, args) {
var i = 0;
return s.replace(/%(?:\(([^)]+)\))?([%diouxXeEfFgGcrs])/g, function (match, v, t) {
if (t == "%") return "%";
return args[v || i++];
});
}
More like this
1. Base64Field: base64 encoding field for storing binary data in Django TextFields by bikeshedder 6 years, 11 months ago
2. Verbose template filter : avoid too many if by romain-hardouin 8 years ago
3. testdata tag for templates by showell 7 years, 2 months ago
4. django_template decorator by fredd4 8 years, 4 months ago
5. render_markup filter, specify the markup filter as a string by exogen 9 years, 2 months ago
Comments
brokenseal (on June 21, 2010):
Isn't this supposed to be a Django snippets site?
#
Please login first before commenting.
|
__label__pos
| 0.97256 |
class Ameba::Rule::Lint::UselessAssign
Overview
A rule that disallows useless assignments.
For example, this is considered invalid:
def method
var = 1
do_something
end
And has to be written as the following:
def method
var = 1
do_something(var)
end
YAML configuration example:
Lint/UselessAssign:
Enabled: true
ExcludeTypeDeclarations: false
Included Modules
Defined in:
ameba/rule/lint/useless_assign.cr
Constant Summary
MSG = "Useless assignment to variable `%s`"
Constructors
Class Method Summary
Instance Method Summary
Instance methods inherited from class Ameba::Rule::Base
==(other) ==, catch(source : Source) catch, excluded?(source) excluded?, group group, hash hash, name name, special? special?, test(source : Source, node : Crystal::ASTNode, *opts)
test(source : Source)
test
Class methods inherited from class Ameba::Rule::Base
default_severity : Ameba::Severity default_severity
Macros inherited from class Ameba::Rule::Base
issue_for(*args, **kwargs, &block) issue_for
Macros inherited from module Ameba::Config::RuleConfig
properties(&block) properties
Constructor Detail
def self.new(ctx : YAML::ParseContext, node : YAML::Nodes::Node) #
def self.new(config = nil) #
A rule that disallows useless assignments.
For example, this is considered invalid:
def method
var = 1
do_something
end
And has to be written as the following:
def method
var = 1
do_something(var)
end
YAML configuration example:
Lint/UselessAssign:
Enabled: true
ExcludeTypeDeclarations: false
[View source]
Class Method Detail
def self.parsed_doc : String | Nil #
Returns documentation for this rule, if there is any.
module Ameba
# This is a test rule.
# Does nothing.
class MyRule < Ameba::Rule::Base
def test(source)
end
end
end
MyRule.parsed_doc # => "This is a test rule.\nDoes nothing."
Instance Method Detail
def description : String #
def description=(description : String) #
def enabled=(enabled : Bool) #
def enabled? : Bool #
def exclude_type_declarations=(exclude_type_declarations : Bool) #
def exclude_type_declarations? : Bool #
def excluded : Array(String) | Nil #
def excluded=(excluded : Array(String) | Nil) #
def severity : Ameba::Severity #
def severity=(severity : Ameba::Severity) #
def test(source, node, scope : AST::Scope) #
[View source]
def test(source) #
[View source]
|
__label__pos
| 0.973335 |
May 29, 2023
faubourg36-lefilm
Think spectacular technology
Pcs in Each Day Life Experienced Humble Beginnings
Due to the fact computers are so common in present day culture, it is vital to have at minimum a doing work knowledge of in which they come from as very well as what types there are. While we may possibly feel we know everything, just since we have a person sitting down on a desk at dwelling, you could uncover there’s a bit extra you could learn. And understanding is what a personal computer ought to be for, is just not it?
As soon as on a time, a computer was a particular person who did calculations and of class computations. As early as 1613 these men and women were being termed ‘computers’. In the 19th century, the first computing equipment came about. These bundled issues modern-day men and women would take into account really easy, this sort of as the abacus, sliding rule and an astronomical clock that tracked the stars and the indicators of the zodiac.
Computer systems are the result of two systems. Just one is that of automatic calculation and the other is about programmability. In a typical sense, a computer system is a multi-purpose device created all over a microprocessor. It incorporates a tough travel, memory, a modem as properly as other elements. A personal computer person is in a position to style documents, ship e mail and search the world-wide-web. Of system, they are also useful for participating in video games. Any laptop or computer will offer you a system of inputting data and acquiring an output.
1 kind of laptop or computer is named a personal laptop or desktop, shortened to the acronym Laptop. A Personal computer is utilized by an personal man or woman rather than a significant group of people. Their immediate predecessors are mainframe computer systems, which were employed to do batch processing and expected an intervening operator among consumer and program. PCs are far more for phrase processing, spreadsheets, world-wide-web use and databases. PCs also allow buyers to perform games these types of as solitary participant titles or multi-substantial on-line and much more interactive game titles.
Laptops are very similar to PCs in the way of components but marginally various when it comes to use and objective. They are intended to be light-weight and cellular, but are also excellent for making documents, accessing the web and taking part in online games. Laptops are getting additional and far more attractiveness and far more laptops have been marketed in 2008 than PCs. One major big difference concerning a Computer system and laptop computer is the pointing device. In which PCs use a mouse, a laptop will use a touchpad or a pointing adhere. Numerous laptops can outperform the ordinary Computer system when it will come to games and world-wide-web use.
Online games are a key motive why folks invest in new personal computers. Though the pc and specially the net have a powerful academic purpose in present-day culture, most people are familiar with playing games on their Computer or laptop computer. It was not that lengthy back that video games could only be found in arcades or on recreation consoles on your own. Now, even so, the unique laptop user can participate in video games on their individual, some having up to forty serious time hours in advance of completion, or engage in in worlds whole of countless numbers of other logged on end users. Several game titles demand a laptop or computer that has a distinctive graphics processing unit or GPU, also regarded as a online video card, in get to run the large-desire graphics that most game designers use.
Computers are almost everywhere currently and it is really essential that anyone who uses just one have a minimal bit of info on what they are about. Though game titles are a major draw for receiving a personal computer, it can be also critical to remember that computer systems can present much instruction and information. With the ideal facts and means, a one user with a personalized pc can be a incredibly impressive affect on the environment, in a lot of great ways.
|
__label__pos
| 0.782033 |
Provided by: libarchive-tools_3.2.2-3.1_amd64 bug
NAME
bsdcat — expand files to standard output
SYNOPSIS
bsdcat [options] [files]
DESCRIPTION
bsdcat expands files to standard output.
OPTIONS
bsdcat typically takes a filename as an argument or reads standard input when used in a
pipe. In both cases decompressed data it written to standard output.
EXAMPLES
To decompress a file:
bsdcat example.txt.gz > example.txt
To decompress standard input in a pipe:
cat example.txt.gz | bsdcat > example.txt
Both examples achieve the same results - a decompressed file by redirecting output.
SEE ALSO
uncompress(1), zcat(1), bzcat(1), xzcat(1), libarchive-formats(5),
|
__label__pos
| 0.937097 |
Welcome Guest, Not a member yet? Register Sign In
Cannot modify header information - Codeigniter / DataMapper OverZealous Edition
#1
[eluser]Watermark Studios[/eluser]
Just so everyone knows...yes, I have checked other threads here and other places online for help.
I'm getting the below error:
Code:
A PHP Error was encountered
Severity: Warning
Message: Cannot modify header information - headers already sent by (output started at E:\inetpub\vhosts\...\httpdocs\main\application\controllers\test.php:19)
Filename: codeigniter/Common.php
Line Number: 356
A Database Error Occurred
Unable to connect to your database server using the provided settings.
Let me set the stage here a bit. I have a working site that uses the CI/DMZ config to access the database. Every other model is working fine. I set up a test controller so I could investigate. Below is the method that is returning the error:
Code:
15 function country(){
16 $location = new Location();
17 $location->where('country','US')->where('type','MAI')->get();
18 $count = $location->where('country','US')->where('type','MAI')->count();
19 echo $count . "<br/>";
20 foreach ($location->all as $loc) {
21 $loc->user->get();
22 echo $loc->user->lastname . "<br/>";
23 }
24 }
The Count variable is correctly returning the number of results. The first 30 last names are also returned, so it is working initially. The rest of the results are omitted and the error kicks in.
Any thoughts?
#2
[eluser]theprodigy[/eluser]
does it work correctly without the echo's in lines 19 and 22? If not, what is the error reported then?
#3
[eluser]Watermark Studios[/eluser]
If I comment out both lines 19 and 22, then I get an error 500. Database resource can not be found. If I uncomment, I at least get some response, but then it breaks shortly into it and throws the PHP errors.
#4
[eluser]Watermark Studios[/eluser]
I tracked down the error.
Code:
Error: Unknown column 'locations.country' in 'where clause'
Unfortunately, there is a country column in the locations table. My full query looks like this:
Code:
SELECT `users`.* FROM (`users`) WHERE `locations`.`country` = 'US' AND `type` = 'MAI' ORDER BY `users`.`lastname` asc LIMIT 50
Any thoughts?
#5
[eluser]theprodigy[/eluser]
yeah, I have a thought. You aren't joining the locations table to your query. You are trying to base a where clause on a table not in the query.
#6
[eluser]Watermark Studios[/eluser]
I was just thinking the same thing. But I'm using DMZ ORM to manage the transaction. It shouldn't use a query like that. What I write is:
Code:
$user_info = new User();
$user_info->where_related_location('country','US')->where('type','MAI')->get();
foreach($user_info as $user){
echo $user->fullname . "<br/>";
}
then for some reason it's translating it to what I put above. DMZ works great for all of the other transactions. As a matter of fact, this is the only transaction I have any problems with and I can't figure it out for the life of me.
#7
[eluser]theprodigy[/eluser]
I've never used DMZ before, but have you specified (or even do you have to specify) the relationship in between the two models in the the models? Do you have to set a $has_one = 'location' in the users model and a $has_many = 'users' in the location model or something?
If you do, have these been set properly?
#8
[eluser]Watermark Studios[/eluser]
Yep...both are set. There seems to be some glitch I'm going to have to work around.
Theme © iAndrew 2016 - Forum software by © MyBB
|
__label__pos
| 0.929995 |
Cs lectures - Startsida Facebook
6624
identifierare - Engelsk översättning - Linguee
I consistently fall for the same traps. VB.NET identifiers are not case-sensitive. When assigning a value to a variable with a different data type (and with Option Strict not turned on), VB.NET will coerce the value if possible. This automatic coercion can sometimes lead to unexpected results, for example: Techopedia Explains Identifier. Like in C/C++, identifiers in C# are case-sensitive. Microsoft recommends the use of Camel or Pascal notations, along with semantics, for naming identifiers instead of the Hungarian notation that was used prior to .NET programming.
1. Ogifta par youtube
2. Att gora i gavle med barn
3. Inorganic compounds
4. Hus lan
identifiers enclosed in double quotes) are case-sensitive and can start with and contain any valid characters, including: Numbers Special characters ( . , ' , ! , @ , # , $ , % , ^ , & , * , etc.) Each of them have the same killer combination: they are case-sensitive with identifiers, but they are scripting language that do not resolve identifiers at parse-time. I consistently fall for the same traps. VB.NET identifiers are not case-sensitive. When assigning a value to a variable with a different data type (and with Option Strict not turned on), VB.NET will coerce the value if possible. This automatic coercion can sometimes lead to unexpected results, for example: Techopedia Explains Identifier.
Skiftlägeskänsliga in English with contextual examples
May 28, 2020 Quoted Identifiers Ignore Case. To solve our case sensitivity problem, Snowflake has a connection setting we can use that will turn off case Jan 6, 2016 In JavaScript, identifiers are used to name variables, keywords, functions, and labels. Their naming rules are not much different from any other Sep 20, 2018 Case-sensitive means that the computer program only matches values with the same case (lower/upper). Case-insensitive means the program Jul 25, 2020 Case-insensitive symbol table lookup would allow you to write your identifiers in whatever case you like.
Identifiers are not case-sensitive
contentstructuremenu/show_content_structure Node ID
Others are case-insensitive (i.e., not case-sensitive), such as ABAP , Ada , most BASICs (an exception being BBC BASIC ), Fortran , SQL (for the syntax, and for some vendor implementations, e.g. Microsoft SQL Server , the data itself) [NB 2] and Pascal . Identifiers and reserved words should not be case sensitive, although many follow a convention to use capitals for reserved words and Pascal case for identifiers. See SQL-92 Sec. 5.2 Share Se hela listan på bitdegree.org Delimited identifiers (i.e. identifiers enclosed in double quotes) are case-sensitive and can start with and contain any valid characters, including: Numbers Special characters ( . , ' , ! , @ , # , $ , % , ^ , & , * , etc.) Each of them have the same killer combination: they are case-sensitive with identifiers, but they are scripting language that do not resolve identifiers at parse-time.
Identifiers are not case-sensitive
They must not begin with a digit. 3. Uppercase and lowercase letters are distinct. That is, identifiers are case sensitive.
Hur haller man ett bra tal
By convention, constant identifiers are always uppercase. Identifiers and persistent identifiers (PID) constitute an important part of "FAIR" research data.
PL/SQL is not case sensitive except within string and character literals. So, if the only difference between identifiers is the case of corresponding letters, PL/SQL considers the identifiers to be the same, as the following example shows: We need to record the fact that identifiers are quoted/not-quoted in the sql We need to extend the SPI, so that connectors can specify whether table/column..etc identifiers are case sensitive This should work for fields in row types also Rule should be that if either side is case insensitive then the match is done ignoring case Yes, it is case-sensitive. It is this way because of its heritage from C. To keep the language more familiar to what people were used to "in the day", they left it as case-sensitive.
Bygglag
Identifiers are not case-sensitive marita karlson
engelman bakery
konsumentverket vad handlar ekonomi om
sollentuna kommun
bagageutrymme xc60
transport sector retirement fund forms
Hedvig Charlotta Nordenflycht - Wikipedia
If you name a schema object using a quoted identifier, then you must use the double quotation marks whenever you refer to that object. A nonquoted identifier is not surrounded by any punctuation.
Kapunda gardens
personalakte aufbewahrungsfrist
Programming Paradigms - PDF Free Download - DocPlayer.se
You cannot start name of an identifier with a digit whereas underscore can be used as first character while naming identifier. Other special characters are not allowed while naming an identifier.
you agree to be bound by the followin - Central Bank of Ireland
This means, Variable and variable are not the same. Always give the identifiers a name that makes sense. While c = 10 is a valid name, writing count = 10 would make more sense, and it would be easier to figure out what it represents when you look at your code after a long gap. A nonquoted identifier is not surrounded by any punctuation. You can use either quoted or nonquoted identifiers to name any database object. However, database names, global database names, database link names, disk group names, and pluggable database (PDB) names are always case insensitive and are stored as uppercase.
NEXTGOV Robo-calls and other annoyances Identification of violations. Emergency Commercially Available. Sensitive. Considerations for curation and use of ”Data” build a complete WiROI™ Business Case. Ayvazian is Not a LifeProof case?
|
__label__pos
| 0.573463 |
ERR_EMPTY_RESPONSE on Google Chrome I 100% Fix
Google Chrome is one of the best web browsers in the world. But its position in this competitive world has declined in recent years. Still, he’s one step ahead of everyone. I think you’re using Google Chrome, just like many others. You can then correlate the different problems that occur when using them. Today we will discuss the most common error – ERR_EMPTY_RESPONSE on Google Chrome. We will mention a few measures to prevent this mistake. So let’s start our journey with a discussion about what ERR_EMPTY_RESPONSE really is.
Error ERR_EMPTY_RESPONSE
What is the ERR_EMPTY_RESPONSE error?
This mainly happens on Google Chrome, which indicates a poor connection to the Internet. A frustrated user sees a message on the screen that the website he is trying to visit is not working. The user tries again and again, but to no avail.
When this error occurs, you can see additional records and a detailed explanation of the reason. Now that you know the error, it’s better to know the possible causes. Some of them are mentioned below:
1. Poor internet connection.
2. Problem creating temporary files.
3. Overloaded browser and session cache.
4. Defective extensions in the background, which has a negative effect on the operation.
We will give you the best recommendations and detailed instructions on how to correct this error ERR_EMPTY_RESPONSE. Just follow our steps and stick to the subject. We will certainly solve your problem.
How can I fix ERR_EMPTY_RESPONSE in Google Chrome?
Method 1 Browser Data Cleanup
Sometimes it seems that Google Chrome itself carries and causes an ERR_EMPTY_RESPONSE error. We ask you to delete all data from your browser and to check whether your problem has been solved or not. You can perform the following steps for the rest of the procedure:
Step 1: First of all, start Google Chrome.
Step 2: Click on the top part of the screen and select the options marked with Plus, they look like the three dots.
Step 3: The next step is to select the History option in the menu after placing the cursor on the History option.
Step 4: Select the Delete data option from the top left window of your computer.
Step five: Within the time interval, select All the time and proceed to the next steps.
You can now check how you removed the data from your browser by revisiting the site. Reboot your computer and check if the error has been fixed – a blank answer or not.
Sometimes bad extensions can cause this problem or damage other chrome files. We can therefore advise you to ensure that your browser is not infected by viruses. It can be stopped by installing Auslogics Anti-Malware.
The system automatically checks whether an unwanted program is running in the background. In addition, it is guaranteed that a cookie in which your personal data is stored is immediately recognised when you use it. By using this system we can increase the safety of our viewers.
Method No 2 Resetting network parameters
Sometimes your computer’s network settings are not set correctly. To do this correctly, you must therefore reset the settings according to the following instructions.
Step 1: Hold the Windows and S keys together and open a command prompt.
Step 2:Now enter the following commands in sequence by pressing the Enter key after each command.
ipconfig / Release
ipconfig / update
ipconfig/Flushdns
Reset the Winsock gauze
Net stop DHKP
a clear beginning of the DHKP
winhttp-reset proxy
We hope that after executing all these commands in the order in which they are listed, your ERR_EMPTY_RESPONSE component can be deleted.
Method No 3 Updating the device driver
If old and incompatible network drivers are present, even an ERR_EMPTY_RESPONSE may occur. To remedy this, you should therefore update your driver in the following way, which may be useful to you.
1. Via Device Manager
One of the easiest ways to update drivers is to have device management software on your computer. You must now follow these steps to get good results.
Step 1: Press the Windows + R keys on the keyboard and start the dialog box.
Step 2: If a dialog box is open, type devmgmt.msc and press Enter.
Step 3: Increasing the capacity of the network adapter category
Step 4: As you browse through the options, you will find updated drivers. You have to click on it and the drivers will be reinstalled by default.
2. Manufacturer’s website driver updates
The device manager does most of the work, but can sometimes install an older version of the drivers. Therefore, the best way to solve this problem is to download the driver manually from the manufacturer’s website. You need to download a driver that is compatible with your PC.
Conclusion
Therefore, starting with the ERR_EMPTY_RESPONSE discussion, we discussed methods to solve this problem. We hope you enjoy this article. If you have any suggestions or any other method, please contact us.
Related Tags:
error empty response android,err_empty_response chrome windows 10,err_empty_response didn’t send any data,err_empty_response php,err_empty_response wordpress,fetch err_empty_response,err_empty_response chrome,net::err_empty_response,err_empty_response youtube,err_empty_response windows 10,err_empty_response android,err_empty_response server,err_empty_response chrome mac,err_empty_response all browsers
|
__label__pos
| 0.730639 |
Insert text and select it?
#1
I’m attempting to make a simple function to create a Markdown-formatted link from selected text, such that
selected text
becomes
[selected text](URL)
but I’d like the “URL” text to be selected (such that if you then start typing your link, it will replace that text with your link).
I’ve gotten it to a point where I can get the text inserted:
editor = atom.workspaceView.getActiveView().getEditor()
selection = editor.getLastSelection()
curText = selection.getText()
selection.insertText('['+curText+'](URL)')
But then how do I move the editor’s cursor/selection to “URL”?
Thanks!
#2
I would use selection.getBufferRange() and selection.setBufferRange() to modify the selection you already have to exactly where you want it to be.
|
__label__pos
| 1 |
The Bigmemory Project's memory-efficient k-means cluster analysis
Description
k-means cluster analysis without the memory overhead, and possibly in parallel using shared memory.
Usage
1
bigkmeans(x, centers, iter.max = 10, nstart = 1)
Arguments
x
a big.matrix object.
centers
a scalar denoting the number of clusters, or for k clusters, a k by ncol(x) matrix.
iter.max
the maximum number of iterations.
nstart
number of random starts, to be done in parallel if there is a registered backend (see below).
Details
The real benefit is the lack of memory overhead compared to the standard kmeans function. Part of the overhead from kmeans() stems from the way it looks for unique starting centers, and could be improved upon. The bigkmeans() function works on either regular R matrix objects, or on big.matrix objects. In either case, it requires no extra memory (beyond the data, other than recording the cluster memberships), whereas kmeans() makes at least two extra copies of the data. And kmeans() is even worse if multiple starts (nstart>1) are used.
If nstart>1 and you are using bigkmeans() in parallel, a vector of cluster memberships will need to be stored for each worker, which could be memory-intensive for large data. This isn't a problem if you use are running the multiple starts sequentially.
Unless you have a really big data set (where a single run of kmeans not only burns memory but takes more than a few seconds), use of parallel computing for multiple random starts is unlikely to be much faster than running iteratively.
Only the algorithm by MacQueen is used here.
Value
An object of class kmeans, just as produced by kmeans.
Note
A comment should be made about the excellent package foreach. By default, it provides foreach, which is used much like a for loop, here over the nstart random starting points. Even so, there are efficiencies, doing a comparison of each result to the previous best result (rather than saving everything and doing a final comparison of all results).
When a parallel backend has been registered (see packages doSNOW, doMC, and doMPI, for example), bigkmeans() automatically distributes the nstart random starting points across the available workers. This is done in shared memory on an SMP, but is distributed on a cluster *IF* the big.matrix is file-backed. If used on a cluster with an in-RAM big.matrix, it will fail horribly. We're considering an extra option as an alternative to the current behavior.
Author(s)
John W. Emerson <[email protected]>
References
Hartigan, J. A. and Wong, M. A. (1979). A K-means clustering algorithm. Applied Statistics 28, 100–108.
MacQueen, J. (1967) Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, eds L. M. Le Cam & J. Neyman, 1, pp. 281–297. Berkeley, CA: University of California Press.
See Also
big.matrix, foreach
Examples
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
# Simple example (with one processor):
library(bigmemory)
x <- big.matrix(100000, 3, init=0, type="double")
x[seq(1,100000,by=2),] <- rnorm(150000)
x[seq(2,100000,by=2),] <- rnorm(150000, 5, 1)
head(x)
ans <- bigkmeans(x, 1) # One cluster isn't always allowed
# but is convenient.
ans$centers
ans$withinss
ans$size
apply(x, 2, mean)
ans <- bigkmeans(x, 2, nstart=5) # Sequential multiple starts.
class(ans)
names(ans)
ans$centers
ans$withinss
ans$size
# To use a parallel backend, try something like the following,
# assuming you have at least 3 cores available on this machine.
# Each processor does incur memory overhead for the storage of
# cluster memberships.
## Not run:
library(doSNOW)
cl <- makeCluster(3, type="SOCK")
registerDoSNOW(cl)
ans <- bigkmeans(x, 2, nstart=5)
## End(Not run)
# Both the following are run iteratively, but with less memory overhead
# using bigkmeans(). Note that the gc() comparisons aren't completely
# fair, because the big.matrix objects aren't reflected in the gc()
# summary. But the savings is there.
gc(reset=TRUE)
time.new <- system.time(print(bigkmeans(x, 2, nstart=5)$centers))
gc()
y <- x[,]
rm(x)
gc(reset=TRUE)
time.old <- system.time(print(kmeans(y, 2, nstart=5)$centers))
gc()
# The new kmeans() centers should match the old kmeans() centers, without
# the memory overhead amd running more quickly.
time.new
time.old
comments powered by Disqus
|
__label__pos
| 0.677697 |
Commit 36135952 authored by Dries's avatar Dries
- Patch #603702 by Xano: remove _taxonomy_term_select().
parent a243e821
......@@ -670,7 +670,21 @@ function taxonomy_form_term($form, &$form_state, $vocabulary, $edit = array()) {
}
$exclude[] = $edit['tid'];
$form['advanced']['parent'] = _taxonomy_term_select(t('Parents'), $parent, $vocabulary->vid, t('Parent terms') . '.', '<' . t('root') . '>', $exclude);
$tree = taxonomy_get_tree($vocabulary->vid);
$options = array('<' . t('root') . '>');
foreach ($tree as $term) {
if (!in_array($term->tid, $exclude)) {
$options[$term->tid] = str_repeat('-', $term->depth) . $term->name;
}
}
$form['advanced']['parent'] = array(
'#type' => 'select',
'#title' => t('Parent terms'),
'#options' => $options,
'#default_value' => $parent,
'#multiple' => TRUE,
);
}
$form['advanced']['synonyms'] = array(
'#type' => 'textarea',
......
......@@ -146,9 +146,6 @@ function taxonomy_field_build_modes($obj_type) {
*/
function taxonomy_theme() {
return array(
'taxonomy_term_select' => array(
'arguments' => array('element' => NULL),
),
'taxonomy_overview_vocabularies' => array(
'arguments' => array('form' => array()),
),
......@@ -575,33 +572,6 @@ function taxonomy_terms_static_reset() {
entity_get_controller('taxonomy_term')->resetCache();
}
/**
* Generate a form element for selecting terms from a vocabulary.
*
* @param $vid
* The vocabulary ID to generate a form element for
* @param $value
* The existing value of the term(s) in this vocabulary to use by default.
* @param $help
* Optional help text to use for the form element. If specified, this value
* MUST be properly sanitized and filtered (e.g. with filter_xss_admin() or
* check_plain() if it is user-supplied) to prevent XSS vulnerabilities. If
* omitted, the help text stored with the vocaulary (if any) will be used.
* @return
* An array describing a form element to select terms for a vocabulary.
*
* @see _taxonomy_term_select()
* @see filter_xss_admin()
*/
function taxonomy_form($vid, $value = 0, $help = NULL) {
$vocabulary = taxonomy_vocabulary_load($vid);
$help = ($help) ? $help : filter_xss_admin($vocabulary->help);
$blank = t('- Please choose -');
return _taxonomy_term_select(check_plain($vocabulary->name), $value, $vid, $help, $blank);
}
/**
* Generate a set of options for selecting a term from all vocabularies.
*/
......@@ -966,70 +936,6 @@ function taxonomy_term_load($tid) {
return $term ? $term[$tid] : FALSE;
}
/**
* Create a select form element for a given taxonomy vocabulary.
*
* NOTE: This function expects input that has already been sanitized and is
* safe for display. Callers must properly sanitize the $title and
* $description arguments to prevent XSS vulnerabilities.
*
* @param $title
* The title of the vocabulary. This MUST be sanitized by the caller.
* @param $value
* The currently selected terms from this vocabulary, if any.
* @param $vocabulary_id
* The vocabulary ID to build the form element for.
* @param $description
* Help text for the form element. This MUST be sanitized by the caller.
* @param $multiple
* Boolean to control if the form should use a single or multiple select.
* @param $blank
* Optional form choice to use when no value has been selected.
* @param $exclude
* Optional array of term ids to exclude in the selector.
* @return
* A FAPI form array to select terms from the given vocabulary.
*
* @see taxonomy_form()
* @see taxonomy_form_term()
*/
function _taxonomy_term_select($title, $value, $vocabulary_id, $description, $blank, $exclude = array()) {
$tree = taxonomy_get_tree($vocabulary_id);
$options = array();
if ($blank) {
$options[0] = $blank;
}
if ($tree) {
foreach ($tree as $term) {
if (!in_array($term->tid, $exclude)) {
$choice = new stdClass();
$choice->option = array($term->tid => str_repeat('-', $term->depth) . $term->name);
$options[] = $choice;
}
}
}
return array('#type' => 'select',
'#title' => $title,
'#default_value' => $value,
'#options' => $options,
'#description' => $description,
'#weight' => -15,
'#theme' => 'taxonomy_term_select',
);
}
/**
* Format the selection field for choosing terms
* (by default the default selection field is used).
*
* @ingroup themeable
*/
function theme_taxonomy_term_select($variables) {
return theme('select', $variables['element']);
}
/**
* Implement hook_help().
*/
......
......@@ -367,7 +367,7 @@ class TaxonomyTermTestCase extends TaxonomyWebTestCase {
// Edit $term2, setting $term1 as parent.
$edit = array();
$edit['parent'] = $term1->tid;
$edit['parent[]'] = array($term1->tid);
$this->drupalPost('taxonomy/term/' . $term2->tid . '/edit', $edit, t('Save'));
// Check the hierarchy.
......@@ -454,7 +454,7 @@ class TaxonomyTermTestCase extends TaxonomyWebTestCase {
);
// Explicitly set the parents field to 'root', to ensure that
// taxonomy_form_term_submit() handles the invalid term ID correctly.
$edit['parent'] = 0;
$edit['parent[]'] = array(0);
// Create the term to edit.
$this->drupalPost('admin/structure/taxonomy/' . $this->vocabulary->vid . '/list/add', $edit, t('Save'));
......
Markdown is supported
0% or
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first!
Please register or to comment
|
__label__pos
| 0.998909 |
Multithreading in Cn
Multithreading is a specialized form of multitasking and a multitasking is the feature that allows your computer to run two or more programs concurrently. In general, there are two types of multitasking: process-based and thread-based.
Process-based multitasking handles the concurrent execution of programs. Thread-based multitasking deals with the concurrent execution of pieces of the same program.
A multithreaded program contains two or more parts that can run concurrently. Each part of such a program is called a thread, and each thread defines a separate path of execution.
C does not contain any built-in support for multithreaded applications. Instead, it relies entirely upon the operating system to provide this feature.
This tutorial assumes that you are working on Linux OS and we are going to write multi-threaded C program using POSIX. POSIX Threads, or Pthreads provides API which are available on many Unix-like POSIX systems such as FreeBSD, NetBSD, GNU/Linux, Mac OS X and Solaris.
The following routine is used to create a POSIX thread −
#include <pthread.h>
pthread_create (thread, attr, start_routine, arg)
Here, pthread_create creates a new thread and makes it executable. This routine can be called any number of times from anywhere within your code. Here is the description of the parameters.
ParameterDescription
threadAn opaque, unique identifier for the new thread returned by the subroutine.
attrAn opaque attribute object that may be used to set thread attributes. You can specify a thread attributes object, or NULL for the default values.
start_routineThe C routine that the thread will execute once it is created.
argA single argument that may be passed to start_routine. It must be passed by reference as a pointer cast of type void. NULL may be used if no argument is to be passed.
The maximum number of threads that may be created by a process is implementation dependent. Once created, threads are peers, and may create other threads. There is no implied hierarchy or dependency between threads.
Terminating Threads
There is following routine which we use to terminate a POSIX thread –
#include <pthread.h>
pthread_exit (status)
Here pthread_exit is used to explicitly exit a thread. Typically, the pthread_exit() routine is called after a thread has completed its work and is no longer required to exist.
If main() finishes before the threads it has created, and exits with pthread_exit(), the other threads will continue to execute. Otherwise, they will be automatically terminated when main() finishes.
Example Code
#include <iostream>
#include <cstdlib>
#include <pthread.h>
using namespace std;
#define NUM_THREADS 5
void *PrintHello(void *threadid) {
long tid;
tid = (long)threadid;
printf("Hello World! Thread ID, %d
", tid); pthread_exit(NULL); } int main () { pthread_t threads[NUM_THREADS]; int rc; int i; for( i = 0; i < NUM_THREADS; i++ ) { cout << "main() : creating thread, " << i << endl; rc = pthread_create(&threads[i], NULL, PrintHello, (void *)i); if (rc) { printf("Error:unable to create thread, %d
", rc); exit(-1); } } pthread_exit(NULL); }
Output
$gcc test.cpp -lpthread
$./a.out
main() : creating thread, 0
main() : creating thread, 1
main() : creating thread, 2
main() : creating thread, 3
main() : creating thread, 4
Hello World! Thread ID, 0
Hello World! Thread ID, 1
Hello World! Thread ID, 2
Hello World! Thread ID, 3
Hello World! Thread ID, 4
Advertisements
|
__label__pos
| 0.849913 |
Abstracting DNS Record Management with Ansible and Jinja 2
Synchronizing properly implemented DNS zones is, to put it lightly, a real chore:
• Creating forward DNS entries, e.g. A, AAAA, CNAME. These names are used to resolve to resources.
• Creating reverse DNS entries, e.g. PTR.
• Creating DNS entries that define the zone, e.g. SOA, NS
For a system to behave properly, your forward and reverse entries need to be identical, but software like BIND/Unbound rely on zonefiles that don't connect the two. Many information systems / DNS zones exist with improperly implemented reverse DNS, or partially implemented forward DNS asymptomatically for a time. Certain events (e.g. CA validation, discovery, implementing IPv6) can bring things to the forefront if ordinary network management practice doesn't.
For this post, we'll first work on abstracting the DNS zonefile - ensuring that a user can deploy zonefiles conformant to a standard - and then we'll illustrate how that can be used with Netbox to automatically populate DNS entries from Netbox.
Abstracting the zonefile here will achieve a few goals - but the file size is guaranteed to be longer than if you simply managed the zone files from source. Here are some advantages:
• This pipeline ABSOLUTELY MUST establish forward and reverse records from the same data!
• This pipeline must test zonefiles, and avoid installing them if they aren't good (prevents outages)
• This pipeline must establish documentation standards for a DNS zone (abstract the standard)
• This pipeline must scale to support large quantities of DNS zones / records
• This pipeline must be easy to use, even with inexperienced DNS administrators (we can't have it all be on the shoulders of that one guy who can safely make DNS changes)
To achieve this, we'll first establish a YAML schema and Jinja2 template to structure the data. Here's the YAML schema:
1zones:
2 - name: filename
3 zonename:
4 soa:
5 settings:
6 ttl:
7 serial:
8 refresh:
9 retry:
10 expiry:
11 nameservers: []
12 reverse_zones:
13 ip4:
14 ip6:
15 records: [{ "name": "", "type": "", "addr": ""}]
There are also some subtle differences between IPv4 and IPv6 reverse zones, so in this case, we're going to use three Jinja2 templates (in the Gist below).
It also assumes that there's a dedicated classful prefix for each DNS zone. This isn't always true for more complex deployments, but they can also do stuff like buy Infoblox.
I have also included a GitHub Action in the gist, because it provides a good place to demostrate best practices (e.g. using venv) in one compact place. If you want to install generated zone files on-premises, you can run this on a self-hosted runner with an Ansible inventory group (e.g. nameservers).
It's still a little clunky, the next step should help with that (harvesting DDI information from Netbox IPAM data).
GitHub Link
|
__label__pos
| 0.990291 |
/**************************************************************************** ** ** Copyright (C) 2016 The Qt Company Ltd. ** Contact: https://www.qt.io/licensing/ ** ** This file is part of Qt Creator. ** ** Commercial License Usage ** Licensees holding valid commercial Qt licenses may use this file in ** accordance with the commercial license agreement provided with the ** Software or, alternatively, in accordance with the terms contained in ** a written agreement between you and The Qt Company. For licensing terms ** and conditions see https://www.qt.io/terms-conditions. For further ** information use the contact form at https://www.qt.io/contact-us. ** ** GNU General Public License Usage ** Alternatively, this file may be used under the terms of the GNU ** General Public License version 3 as published by the Free Software ** Foundation with exceptions as appearing in the file LICENSE.GPL3-EXCEPT ** included in the packaging of this file. Please review the following ** information to ensure the GNU General Public License requirements will ** be met: https://www.gnu.org/licenses/gpl-3.0.html. ** ****************************************************************************/ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include using namespace Core; namespace Debugger { namespace Internal { ///////////////////////////////////////////////////////////////////////// // // GdbOptionsPageWidget - harmless options // ///////////////////////////////////////////////////////////////////////// class GdbOptionsPageWidget : public QWidget { Q_OBJECT public: GdbOptionsPageWidget(); Utils::SavedActionSet group; }; class GdbOptionsPage : public Core::IOptionsPage { Q_OBJECT public: GdbOptionsPage(); QWidget *widget(); void apply(); void finish(); private: QPointer m_widget; }; GdbOptionsPageWidget::GdbOptionsPageWidget() { auto groupBoxGeneral = new QGroupBox(this); groupBoxGeneral->setTitle(GdbOptionsPage::tr("General")); auto labelGdbWatchdogTimeout = new QLabel(groupBoxGeneral); labelGdbWatchdogTimeout->setText(GdbOptionsPage::tr("GDB timeout:")); labelGdbWatchdogTimeout->setToolTip(GdbOptionsPage::tr( "The number of seconds Qt Creator will wait before it terminates\n" "a non-responsive GDB process. The default value of 20 seconds should\n" "be sufficient for most applications, but there are situations when\n" "loading big libraries or listing source files takes much longer than\n" "that on slow machines. In this case, the value should be increased.")); auto spinBoxGdbWatchdogTimeout = new QSpinBox(groupBoxGeneral); spinBoxGdbWatchdogTimeout->setToolTip(labelGdbWatchdogTimeout->toolTip()); spinBoxGdbWatchdogTimeout->setSuffix(GdbOptionsPage::tr("sec")); spinBoxGdbWatchdogTimeout->setLayoutDirection(Qt::LeftToRight); spinBoxGdbWatchdogTimeout->setMinimum(20); spinBoxGdbWatchdogTimeout->setMaximum(1000000); spinBoxGdbWatchdogTimeout->setSingleStep(20); spinBoxGdbWatchdogTimeout->setValue(20); auto checkBoxSkipKnownFrames = new QCheckBox(groupBoxGeneral); checkBoxSkipKnownFrames->setText(GdbOptionsPage::tr("Skip known frames when stepping")); checkBoxSkipKnownFrames->setToolTip(GdbOptionsPage::tr( "
" "Allows Step Into to compress several steps into one step\n" "for less noisy debugging. For example, the atomic reference\n" "counting code is skipped, and a single Step Into for a signal\n" "emission ends up directly in the slot connected to it.")); auto checkBoxUseMessageBoxForSignals = new QCheckBox(groupBoxGeneral); checkBoxUseMessageBoxForSignals->setText(GdbOptionsPage::tr( "Show a message box when receiving a signal")); checkBoxUseMessageBoxForSignals->setToolTip(GdbOptionsPage::tr( "Displays a message box as soon as your application\n" "receives a signal like SIGSEGV during debugging.")); auto checkBoxAdjustBreakpointLocations = new QCheckBox(groupBoxGeneral); checkBoxAdjustBreakpointLocations->setText(GdbOptionsPage::tr( "Adjust breakpoint locations")); checkBoxAdjustBreakpointLocations->setToolTip(GdbOptionsPage::tr( "GDB allows setting breakpoints on source lines for which no code \n" "was generated. In such situations the breakpoint is shifted to the\n" "next source code line for which code was actually generated.\n" "This option reflects such temporary change by moving the breakpoint\n" "markers in the source code editor.")); auto checkBoxUseDynamicType = new QCheckBox(groupBoxGeneral); checkBoxUseDynamicType->setText(GdbOptionsPage::tr( "Use dynamic object type for display")); checkBoxUseDynamicType->setToolTip(GdbOptionsPage::tr( "Specifies whether the dynamic or the static type of objects will be " "displayed. Choosing the dynamic type might be slower.")); auto checkBoxLoadGdbInit = new QCheckBox(groupBoxGeneral); checkBoxLoadGdbInit->setText(GdbOptionsPage::tr("Load .gdbinit file on startup")); checkBoxLoadGdbInit->setToolTip(GdbOptionsPage::tr( "Allows or inhibits reading the user's default\n" ".gdbinit file on debugger startup.")); auto checkBoxLoadGdbDumpers = new QCheckBox(groupBoxGeneral); checkBoxLoadGdbDumpers->setText(GdbOptionsPage::tr("Load system GDB pretty printers")); checkBoxLoadGdbDumpers->setToolTip(GdbOptionsPage::tr( "Uses the default GDB pretty printers installed in your " "system or linked to the libraries your application uses.")); auto checkBoxIntelFlavor = new QCheckBox(groupBoxGeneral); checkBoxIntelFlavor->setText(GdbOptionsPage::tr("Use Intel style disassembly")); checkBoxIntelFlavor->setToolTip(GdbOptionsPage::tr( "GDB shows by default AT&&T style disassembly." "")); auto checkBoxIdentifyDebugInfoPackages = new QCheckBox(groupBoxGeneral); checkBoxIdentifyDebugInfoPackages->setText(GdbOptionsPage::tr("Create tasks from missing packages")); checkBoxIdentifyDebugInfoPackages->setToolTip(GdbOptionsPage::tr( "
Attempts to identify missing debug info packages " "and lists them in the Issues output pane.
" "Note: This feature needs special support from the Linux " "distribution and GDB build and is not available everywhere.
")); QString howToUsePython = GdbOptionsPage::tr( "
To execute simple Python commands, prefix them with \"python\".
" "
To execute sequences of Python commands spanning multiple lines " "prepend the block with \"python\" on a separate line, and append " "\"end\" on a separate line.
" "
To execute arbitrary Python scripts, " "use python execfile('/path/to/script.py').
"); auto groupBoxStartupCommands = new QGroupBox(this); groupBoxStartupCommands->setTitle(GdbOptionsPage::tr("Additional Startup Commands")); groupBoxStartupCommands->setToolTip(GdbOptionsPage::tr( "
GDB commands entered here will be executed after " "GDB has been started, but before the debugged program is started or " "attached, and before the debugging helpers are initialized.
%1" "").arg(howToUsePython)); auto textEditStartupCommands = new QTextEdit(groupBoxStartupCommands); textEditStartupCommands->setAcceptRichText(false); textEditStartupCommands->setToolTip(groupBoxStartupCommands->toolTip()); auto groupBoxPostAttachCommands = new QGroupBox(this); groupBoxPostAttachCommands->setTitle(GdbOptionsPage::tr("Additional Attach Commands")); groupBoxPostAttachCommands->setToolTip(GdbOptionsPage::tr( "
GDB commands entered here will be executed after " "GDB has successfully attached to remote targets.
" "
You can add commands to further set up the target here, " "such as \"monitor reset\" or \"load\"." "")); auto textEditPostAttachCommands = new QTextEdit(groupBoxPostAttachCommands); textEditPostAttachCommands->setAcceptRichText(false); textEditPostAttachCommands->setToolTip(groupBoxPostAttachCommands->toolTip()); auto groupBoxCustomDumperCommands = new QGroupBox(this); groupBoxCustomDumperCommands->setTitle(GdbOptionsPage::tr("Debugging Helper Customization")); groupBoxCustomDumperCommands->setToolTip(GdbOptionsPage::tr( "
GDB commands entered here will be executed after " "Qt Creator's debugging helpers have been loaded and fully initialized. " "You can load additional debugging helpers or modify existing ones here.
" "%1").arg(howToUsePython)); auto textEditCustomDumperCommands = new QTextEdit(groupBoxCustomDumperCommands); textEditCustomDumperCommands->setAcceptRichText(false); textEditCustomDumperCommands->setToolTip(groupBoxCustomDumperCommands->toolTip()); auto groupBoxExtraDumperFile = new QGroupBox(this); groupBoxExtraDumperFile->setTitle(GdbOptionsPage::tr("Extra Debugging Helpers")); groupBoxExtraDumperFile->setToolTip(GdbOptionsPage::tr( "Path to a Python file containing additional data dumpers.")); auto pathChooserExtraDumperFile = new Utils::PathChooser(groupBoxExtraDumperFile); pathChooserExtraDumperFile->setExpectedKind(Utils::PathChooser::File); /* groupBoxPluginDebugging = new QGroupBox(q); groupBoxPluginDebugging->setTitle(GdbOptionsPage::tr( "Behavior of Breakpoint Setting in Plugins")); radioButtonAllPluginBreakpoints = new QRadioButton(groupBoxPluginDebugging); radioButtonAllPluginBreakpoints->setText(GdbOptionsPage::tr( "Always try to set breakpoints in plugins automatically")); radioButtonAllPluginBreakpoints->setToolTip(GdbOptionsPage::tr( "This is the slowest but safest option.")); radioButtonSelectedPluginBreakpoints = new QRadioButton(groupBoxPluginDebugging); radioButtonSelectedPluginBreakpoints->setText(GdbOptionsPage::tr( "Try to set breakpoints in selected plugins")); radioButtonNoPluginBreakpoints = new QRadioButton(groupBoxPluginDebugging); radioButtonNoPluginBreakpoints->setText(GdbOptionsPage::tr( "Never set breakpoints in plugins automatically")); lineEditSelectedPluginBreakpointsPattern = new QLineEdit(groupBoxPluginDebugging); labelSelectedPluginBreakpoints = new QLabel(groupBoxPluginDebugging); labelSelectedPluginBreakpoints->setText(GdbOptionsPage::tr( "Matching regular expression: ")); */ auto chooser = new VariableChooser(this); chooser->addSupportedWidget(textEditCustomDumperCommands); chooser->addSupportedWidget(textEditPostAttachCommands); chooser->addSupportedWidget(textEditStartupCommands); chooser->addSupportedWidget(pathChooserExtraDumperFile->lineEdit()); auto formLayout = new QFormLayout(groupBoxGeneral); formLayout->addRow(labelGdbWatchdogTimeout, spinBoxGdbWatchdogTimeout); formLayout->addRow(checkBoxSkipKnownFrames); formLayout->addRow(checkBoxUseMessageBoxForSignals); formLayout->addRow(checkBoxAdjustBreakpointLocations); formLayout->addRow(checkBoxUseDynamicType); formLayout->addRow(checkBoxLoadGdbInit); formLayout->addRow(checkBoxLoadGdbDumpers); formLayout->addRow(checkBoxIntelFlavor); formLayout->addRow(checkBoxIdentifyDebugInfoPackages); auto startLayout = new QGridLayout(groupBoxStartupCommands); startLayout->addWidget(textEditStartupCommands, 0, 0, 1, 1); auto postAttachLayout = new QGridLayout(groupBoxPostAttachCommands); postAttachLayout->addWidget(textEditPostAttachCommands, 0, 0, 1, 1); auto customDumperLayout = new QGridLayout(groupBoxCustomDumperCommands); customDumperLayout->addWidget(textEditCustomDumperCommands, 0, 0, 1, 1); auto extraDumperLayout = new QGridLayout(groupBoxExtraDumperFile); extraDumperLayout->addWidget(pathChooserExtraDumperFile, 0, 0, 1, 1); auto gridLayout = new QGridLayout(this); gridLayout->addWidget(groupBoxGeneral, 0, 0, 5, 1); gridLayout->addWidget(groupBoxExtraDumperFile, 5, 0, 1, 1); gridLayout->addWidget(groupBoxStartupCommands, 0, 1, 2, 1); gridLayout->addWidget(groupBoxPostAttachCommands, 2, 1, 2, 1); gridLayout->addWidget(groupBoxCustomDumperCommands, 4, 1, 2, 1); group.insert(action(GdbStartupCommands), textEditStartupCommands); group.insert(action(ExtraDumperFile), pathChooserExtraDumperFile); group.insert(action(ExtraDumperCommands), textEditCustomDumperCommands); group.insert(action(GdbPostAttachCommands), textEditPostAttachCommands); group.insert(action(LoadGdbInit), checkBoxLoadGdbInit); group.insert(action(LoadGdbDumpers), checkBoxLoadGdbDumpers); group.insert(action(UseDynamicType), checkBoxUseDynamicType); group.insert(action(AdjustBreakpointLocations), checkBoxAdjustBreakpointLocations); group.insert(action(GdbWatchdogTimeout), spinBoxGdbWatchdogTimeout); group.insert(action(IntelFlavor), checkBoxIntelFlavor); group.insert(action(IdentifyDebugInfoPackages), checkBoxIdentifyDebugInfoPackages); group.insert(action(UseMessageBoxForSignals), checkBoxUseMessageBoxForSignals); group.insert(action(SkipKnownFrames), checkBoxSkipKnownFrames); //lineEditSelectedPluginBreakpointsPattern-> // setEnabled(action(SelectedPluginBreakpoints)->value().toBool()); //connect(radioButtonSelectedPluginBreakpoints, &QRadioButton::toggled, // lineEditSelectedPluginBreakpointsPattern, &QLineEdit::setEnabled); } GdbOptionsPage::GdbOptionsPage() { setId("M.Gdb"); setDisplayName(tr("GDB")); setCategory(Constants::DEBUGGER_SETTINGS_CATEGORY); setDisplayCategory(QCoreApplication::translate("Debugger", Constants::DEBUGGER_SETTINGS_TR_CATEGORY)); setCategoryIcon(QLatin1String(Constants::DEBUGGER_COMMON_SETTINGS_CATEGORY_ICON)); } QWidget *GdbOptionsPage::widget() { if (!m_widget) m_widget = new GdbOptionsPageWidget; return m_widget; } void GdbOptionsPage::apply() { if (m_widget) m_widget->group.apply(ICore::settings()); } void GdbOptionsPage::finish() { if (m_widget) { m_widget->group.finish(); delete m_widget; } } ///////////////////////////////////////////////////////////////////////// // // GdbOptionsPageWidget2 - dangerous options // ///////////////////////////////////////////////////////////////////////// class GdbOptionsPageWidget2 : public QWidget { Q_OBJECT public: GdbOptionsPageWidget2(); Utils::SavedActionSet group; }; GdbOptionsPageWidget2::GdbOptionsPageWidget2() { auto groupBoxDangerous = new QGroupBox(this); groupBoxDangerous->setTitle(GdbOptionsPage::tr("Extended")); auto labelDangerous = new QLabel(GdbOptionsPage::tr( "The options below should be used with care.")); labelDangerous->setToolTip(GdbOptionsPage::tr( "The options below give access to advanced " "or experimental functions of GDB. Enabling them may negatively " "impact your debugging experience.")); QFont f = labelDangerous->font(); f.setItalic(true); labelDangerous->setFont(f); auto checkBoxTargetAsync = new QCheckBox(groupBoxDangerous); checkBoxTargetAsync->setText(GdbOptionsPage::tr( "Use asynchronous mode to control the inferior")); auto checkBoxAutoEnrichParameters = new QCheckBox(groupBoxDangerous); checkBoxAutoEnrichParameters->setText(GdbOptionsPage::tr( "Use common locations for debug information")); checkBoxAutoEnrichParameters->setToolTip(GdbOptionsPage::tr( "Adds common paths to locations " "of debug information such as /usr/src/debug " "when starting GDB.")); // FIXME: Move to common settings page. auto checkBoxBreakOnWarning = new QCheckBox(groupBoxDangerous); checkBoxBreakOnWarning->setText(CommonOptionsPage::msgSetBreakpointAtFunction("qWarning")); checkBoxBreakOnWarning->setToolTip(CommonOptionsPage::msgSetBreakpointAtFunctionToolTip("qWarning")); auto checkBoxBreakOnFatal = new QCheckBox(groupBoxDangerous); checkBoxBreakOnFatal->setText(CommonOptionsPage::msgSetBreakpointAtFunction("qFatal")); checkBoxBreakOnFatal->setToolTip(CommonOptionsPage::msgSetBreakpointAtFunctionToolTip("qFatal")); auto checkBoxBreakOnAbort = new QCheckBox(groupBoxDangerous); checkBoxBreakOnAbort->setText(CommonOptionsPage::msgSetBreakpointAtFunction("abort")); checkBoxBreakOnAbort->setToolTip(CommonOptionsPage::msgSetBreakpointAtFunctionToolTip("abort")); QCheckBox *checkBoxEnableReverseDebugging = 0; if (isReverseDebuggingEnabled()) { checkBoxEnableReverseDebugging = new QCheckBox(groupBoxDangerous); checkBoxEnableReverseDebugging->setText(GdbOptionsPage::tr("Enable reverse debugging")); checkBoxEnableReverseDebugging->setToolTip(GdbOptionsPage::tr( "
Enables stepping backwards.
" "Note: This feature is very slow and unstable on the GDB side. " "It exhibits unpredictable behavior when going backwards over system " "calls and is very likely to destroy your debugging session.
")); } auto checkBoxAttemptQuickStart = new QCheckBox(groupBoxDangerous); checkBoxAttemptQuickStart->setText(GdbOptionsPage::tr("Attempt quick start")); checkBoxAttemptQuickStart->setToolTip(GdbOptionsPage::tr( "Postpones reading debug information as long as possible. " "This can result in faster startup times at the price of not being able to " "set breakpoints by file and number.")); auto checkBoxMultiInferior = new QCheckBox(groupBoxDangerous); checkBoxMultiInferior->setText(GdbOptionsPage::tr("Debug all children")); checkBoxMultiInferior->setToolTip(GdbOptionsPage::tr( "Keeps debugging all children after a fork." "")); auto formLayout = new QFormLayout(groupBoxDangerous); formLayout->addRow(labelDangerous); formLayout->addRow(checkBoxTargetAsync); formLayout->addRow(checkBoxAutoEnrichParameters); formLayout->addRow(checkBoxBreakOnWarning); formLayout->addRow(checkBoxBreakOnFatal); formLayout->addRow(checkBoxBreakOnAbort); if (checkBoxEnableReverseDebugging) formLayout->addRow(checkBoxEnableReverseDebugging); formLayout->addRow(checkBoxAttemptQuickStart); formLayout->addRow(checkBoxMultiInferior); auto gridLayout = new QGridLayout(this); gridLayout->addWidget(groupBoxDangerous, 0, 0, 2, 1); group.insert(action(AutoEnrichParameters), checkBoxAutoEnrichParameters); group.insert(action(TargetAsync), checkBoxTargetAsync); group.insert(action(BreakOnWarning), checkBoxBreakOnWarning); group.insert(action(BreakOnFatal), checkBoxBreakOnFatal); group.insert(action(BreakOnAbort), checkBoxBreakOnAbort); group.insert(action(AttemptQuickStart), checkBoxAttemptQuickStart); group.insert(action(MultiInferior), checkBoxMultiInferior); if (checkBoxEnableReverseDebugging) group.insert(action(EnableReverseDebugging), checkBoxEnableReverseDebugging); } // The "Dangerous" options. class GdbOptionsPage2 : public Core::IOptionsPage { Q_OBJECT public: GdbOptionsPage2(); QWidget *widget(); void apply(); void finish(); private: QPointer m_widget; }; GdbOptionsPage2::GdbOptionsPage2() { setId("M.Gdb2"); setDisplayName(tr("GDB Extended")); setCategory(Constants::DEBUGGER_SETTINGS_CATEGORY); setDisplayCategory(QCoreApplication::translate("Debugger", Constants::DEBUGGER_SETTINGS_TR_CATEGORY)); setCategoryIcon(QLatin1String(Constants::DEBUGGER_COMMON_SETTINGS_CATEGORY_ICON)); } QWidget *GdbOptionsPage2::widget() { if (!m_widget) m_widget = new GdbOptionsPageWidget2; return m_widget; } void GdbOptionsPage2::apply() { if (m_widget) m_widget->group.apply(ICore::settings()); } void GdbOptionsPage2::finish() { if (m_widget) { m_widget->group.finish(); delete m_widget; } } // Registration void addGdbOptionPages(QList *opts) { opts->push_back(new GdbOptionsPage); opts->push_back(new GdbOptionsPage2); } } // namespace Internal } // namespace Debugger #include "gdboptionspage.moc"
|
__label__pos
| 0.797721 |
logo
Ask Questions, Get Answers
X
Search
Want to ask us a question? Click here
Browse Questions
Ad
Home >> CBSE XII >> Math >> Matrices
0 votes
True or False: If $A =\begin{bmatrix} 2 & 3 & 1 \\ 1 &4 & 2 \end{bmatrix}\; and\; B=\begin{bmatrix}2 & 3 \\ 4& 5 \\ 2 & 1 \end{bmatrix},$ then AB and BA are defined and equal.
In this question, AB and BA need not be evaluated. While AB is a 2 X 2 matrix, BA is a 3 X 3 matrix by definition of matrix multiplication. By definition of equality of two matrices, since they are not of the same type, AB is not equal to BA. So the statement is false. This could be taken as an alternate solution.
Can you answer this question?
1 Answer
0 votes
Toolbox:
• If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by dot product of the corresponding row of A and the corresponding column of B: $\begin{bmatrix}AB\end{bmatrix}_{i,j} = A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + A_{i,3}B_{3,j} ... A_{i,n}B_{n,j}$
Step1:
Given
$A =\begin{bmatrix} 2 & 3 & 1 \\ 1 &4 & 2 \end{bmatrix}$
$ B=\begin{bmatrix}2 & 3 \\ 4& 5 \\ 2 & 1 \end{bmatrix}$
$AB=\begin{bmatrix} 2 & 3 & 1 \\ 1 &4 & 2 \end{bmatrix} \begin{bmatrix}2 & 3 \\ 4& 5 \\ 2 & 1 \end{bmatrix}$,
$ AB=\begin{bmatrix}2(2)+3(4)+1(2) & 2(3)+3(5)+1(1) \\ 1(2)+4(4)+2(2)& 1(3)+4(5)+2(1) \end{bmatrix}$
$ AB=\begin{bmatrix}4+12+2 & 6+15+1 \\ 2+16+4& 3+20+2 \end{bmatrix}$
$ AB=\begin{bmatrix}18 & 22 \\ 22& 25 \end{bmatrix}$
Step2:
$BA=\begin{bmatrix}2 & 3 \\ 4& 5 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 2 & 3 & 1 \\ 1 &4 & 2 \end{bmatrix}$
$BA =\begin{bmatrix} 2(2)+3(1) & 2(3)+3(4) & 2(1)+3(2) \\ 4(2)+5(1) &4(3)+5(4) & 4(1)+5(2) \\2(2)+1(1) &2(3)+1(4)&2(1)+1(2) \end{bmatrix}$
$BA =\begin{bmatrix} 4+3 & 6+12 & 2+6 \\ 8+5 &12+20 & 4+10 \\4+1 &6+4&2+2 \end{bmatrix}$
$BA =\begin{bmatrix} 7 & 18 & 8 \\ 13 &32 & 14 \\5 &10&4 \end{bmatrix}$
Hence AB $\neq $ BA
Thus its False
answered Apr 5, 2013 by sharmaaparna1
Related questions
Ask Question
Tag:MathPhyChemBioOther
student study plans
x
JEE MAIN, CBSE, NEET Mobile and Tablet App
The ultimate mobile app to help you crack your examinations
Get the Android App
...
|
__label__pos
| 0.996491 |
earliest JP firmware with themes + menuhax?
Discussion in '3DS - Homebrew Development and Emulators' started by reiyu, Jan 20, 2016.
1. reiyu
OP
reiyu Canadian, eh?
Member
861
187
Jan 8, 2008
Canada
i know for sure 8.1.0J n3DS has themes already, what's the earliest JP firmware that introduced themes? was it the same for both o3DS and n3DS?
also, would i be able to use menuhax on that FW on o3DS?
thanks!
2. LemmyT
LemmyT Member
Newcomer
32
7
Mar 23, 2015
United States
8.1.0J is the earliest and menuhax only works on 9.0 and above.
3. reiyu
OP
reiyu Canadian, eh?
Member
861
187
Jan 8, 2008
Canada
hm, how come it only works for 9.0+?
4. daxtsu
daxtsu GBAtemp Guru
Member
5,495
3,881
Jun 9, 2007
Yellows8 simply never added support for it (or wasn't able to, he might not have an 8.1.0-0J N3DS for all we know, or know anyone who has/had one).
5. reiyu
OP
reiyu Canadian, eh?
Member
861
187
Jan 8, 2008
Canada
ah, that's unfortunate. was hoping to update a friend's o3DS to 8.1 with a cart and run themehax from there.
|
__label__pos
| 0.971121 |
Search Type: Posts; User: TyroneA
Search: Search took 0.02 seconds.
1. Hi,
I have tweeted @Sencha and replied to the thread about consistent 503's when using the src.sencha.io service. Please please can someone tell me if this service is still being supported?
I...
2. Replies
3
Views
7,670
I keep getting the same in production. I am using this service in production, and I keep getting 503's. I don't know if I should pull the service. This has been happening for a while now.
I also...
Results 1 to 2 of 2
|
__label__pos
| 0.866898 |
Computers
Topics: Computer, Computer monitor, Computer software Pages: 2 (415 words) Published: February 20, 2014
I got this task from the school. Thank you for reading and leaving comment and correction.
Some people believe that computers are more a hindrance than a help in todays's world. Others feel that they are such indispensable tools that they would not be able to live or work without them. - In what ways are computers a hindrance
- What is your opinion?
Nowadays we can see computers increase their popularity; however, using computer too much may lead to negative results. Some people believe that computers have more adverse effects than their helps while others comment that computers are necessary. In my opinion, although computers are sometimes a hindrance, they can make our lives become easier in many ways. Many years ago people start to realize that computers bring many disadvantages to our lives. For example, computers cause many social problems. Computer game addiction is one of the most important disturbance affecting children and young adults in many regions. Computer-related crimes are increasing and threatening our privacy protections. Many jobs are permanently replaced by computers so, millions of people become unemployed. In addition, some people also aware of adverse effects to heath that are caused by computers. For example, overuse of the computer may lead to cervical spine degenerative disease, eye strain, and prolong UV exposure from the computer screen. For those possible negative effects of computers, some people feel it is serious enough to consider computers as a hindrance. Despite its many troubles, as far as I am concerned, there are many advantages derived from computers that can outweigh their bad effects. Computers have important roles in many aspects. Communication is easier and faster by the application of computers. Many information and knowledge can be transferred to distance places around the world within seconds. Furthermore, people can use computers as a learning tool. Computers provide a variety of learning program which either...
Continue Reading
Please join StudyMode to read the full document
You May Also Find These Documents Helpful
• Computer Essay
• Atlantic Computer Essay
• Intro to Computers Essay
• History of Computers Essay
• COMPUTER REPAIR Essay
• History of Computers Essay
• Essay on History Of Computers
• Generation of Computers Essay
Become a StudyMode Member
Sign Up - It's Free
|
__label__pos
| 0.715084 |
source: trunk/abcl/CHANGES @ 13003
Last change on this file since 13003 was 13003, checked in by ehuelsmann, 13 years ago
Rephrase a little bit.
• Property svn:eol-style set to native
File size: 24.3 KB
Line
1Version 0.23
2============
3svn://common-lisp.net/project/armedbear/svn/tags/0.23.0/abcl
4(????, 2010)
5
6Features
7--------
8
9* [svn r12986] Update to ASDF 2.010.1
10
11* [svn r12982] Experimental support for the long form
12 of DEFINE-METHOD-COMBINATION
13
14* [svn r12994] New java-interop macros: CHAIN and JMETHOD-LET
15
16Fixes
17-----
18
19* [svn r12995-12997] Changes to generated byte code to prevent JRockit JVM
20 from crashing when optimizing it
21
22* Various fixes in order to complete the Maxima test suite without failures
23
24* [ticket #98] THREAD type specifier not exported from the THREADS package
25
26* [svn r12946] Fix CLOS thread-safety
27
28* [svn r12930] Fix non-constantness of constant symbols when using SET
29
30* [svn r12929] Don't throw conditions on floating point underflow
31 (fixes Maxima failures)
32
33* [svn r12928] Fix for Java-collections-as-lisp-sequences support
34
35* [svn r12927] Fix for regression to moved threads related symbols
36
37* [ticket #104] SET changes value of symbols defined with DEFCONSTANT
38
39* [ticket #88] Need a predicate to indicate source of compiled version
40 ie Java vs Lisp
41
42* [ticket #106] DEFSTRUCT :include with :conc-name creating overwriting
43 inherited slot accessors
44
45* [ticket #97] Symbol imported in multiple packages reported multiple
46 times by APROPOS
47
48* [ticket #107] Incorrect compilation of (SETF STRUCTURE-REF) expansion
49
50* [ticket #105] DIRECTORY ignores :WILD-INFERIORS
51
52Other
53-----
54
55* [svn r12918] Compiler byte code generator cleanup: introduction
56 of generic class file writer, elimination of special purpose code
57 in the compiler.
58
59* Number of hashtable implementations reduced to 1 (from 5)
60
61* Reduced use of 'synchronized' global hash table access by using
62 the java.util.concurrent package
63
64Version 0.22
65============
66svn://common-lisp.net/project/armedbear/svn/tags/0.22.0/abcl
67(September 24, 2010)
68
69Fixes
70-----
71
72* [svn r12902] Fix reading data with scandinavian latin1 characters
73
74* [svn r12906] Respect the CLASSPATH environment variable in the
75 abcl wrapper scripts
76
77* [ticket #103] DOCUMENTATION not autoloaded
78
79Other
80-----
81
82* [svn r12819] Until-0.22-compatibility hacks (in threads support) removed
83
84
85
86Version 0.21
87============
88svn://common-lisp.net/project/armedbear/svn/tags/0.21.0/abcl
89(July 24, 2010)
90
91
92Features
93--------
94
95* [svn r12818] Update to ASDF 2.004
96
97* [svn r12738-805] Support for custom CLOS slot definitions and
98 custom class options.
99
100* [svn r12756] slot-* functions work on structures too.
101
102* [svn r12774] Improved Java integration: jmake-proxy can implement
103 more than one interface.
104
105* [svn r12773] Improved Java integration: functions to dynamically
106 manipulate the classpath.
107
108* [svn r12755] Improved Java integration: CL:STRING can convert Java
109 strings to Lisp strings.
110
111Fixes
112-----
113
114* [svn 12809-10-20] Various printing fixes.
115
116* [svn 12804] Fixed elimination of unused local functions shadowed by macrolet.
117
118* [svn r12798-803] Fixed pathname serialization across OSes.
119 On Windows pathnames are always printed with forward slashes,
120 but can still be read with backslashes.
121
122* [svn r12740] Make JSR-223 classes compilable with Java 1.5
123
124Other
125-----
126
127* [svn r12754] Changed class file generation and FASL loading
128 to minimize reflection.
129
130* [svn r12734] A minimal Swing GUI Console with a REPL
131 is now included with ABCL.
132
133Version 0.20
134============
135svn://common-lisp.net/project/armedbear/svn/tags/0.20.0/abcl
136(24 May, 2010)
137
138
139Features
140--------
141
142* [svn r12576] Support for CLOS METACLASS feature.
143
144* [svn r12591-602] Consolidation of copy/paste code in the readers.
145
146* [svn r12619] Update to ASDF2 (specifically to ASDF 1.719).
147
148* [svn r12620] Use interpreted function in FASL when compilation fails.
149
150* [ticket #95] PATHNAME-JAR and PATHNAME-URL subtypes now handle jar
151 and URL references working for OPEN, LOAD, PROBE-FILE,
152 FILE-WRITE-DATE, DIRECTORY, et. al.
153
154* Many small speed improvements (by marking functions 'final').
155
156* [ticket #91] Threads started through MAKE-THREAD now have a
157 thread-termination restart available in their debugger.
158
159* [svn r12663] JCLASS supports an optional class-loader argument.
160
161* [svn r12634] THREADS:THREAD-JOIN implemented.
162
163* [svn r12671] Site specific initialization code can be included in
164 builds via the 'abcl.startup.file' Ant property.
165
166Fixes
167-----
168
169* [ticket #89] Inlining of READ-LINE broken when the return value
170 is unused.
171
172* [svn r12636] Java class verification error when compiling PROGV
173 in a context wanting an unboxed return value (typically a
174 logical expression).
175
176* [svn r12635] ABCL loads stale fasls instead of updated source
177 even when LOAD is called with a file name without extension.
178
179* [ticket #92] Codepoints between #xD800 and #xDFFF are incorrectly
180 returned as characters from CODE-CHAR.
181
182* [ticket #93] Reader doesn't handle zero returned values from
183 macro functions correctly.
184
185* [ticket #79] Different, yet similarly named, uninterned symbols
186 are incorrectly coalesced into the same object in a fasl.
187
188* [ticket #86] No restarts available to kill a thread, if none
189 bound by user code.
190
191* [svn r12586] Increased function dispatch speed by eliminating
192 FIND-CLASS calls (replacing them by constant references).
193
194* [svn r12656] PATHNAME-JAR now properly uses HTTP/1.1 HEAD requests
195 to detect if remote resource has been changed.
196
197* [svn r12643] PATHNAME-JAR now properly references Windows drive
198 letters on DEVICE other than the default.
199
200* [svn r12621] Missing 'build-from-lisp.sh' referenced in README now
201 included in source release.
202
203Other
204-----
205
206* [svn r12581] LispCharacter() constructors made private, in favor
207 of getInstance() for better re-use of pre-constructed characters.
208
209* [svn r12583] JAVA-CLASS reimplemented in Lisp.
210
211* [svn r12673] Load 'system.lisp' moved later in boot sequence so
212 unhandled conditions drop to debugger.
213
214* [svn r12675] '--nosystem' commandline option inhibits loading of
215 'system.lisp'.
216
217* [svn r12642] Under Windows, pathname TYPE components can now contain
218 embedded periods iff they end in '.lnk' to support shortcuts.
219
220
221Version 0.19
222============
223svn://common-lisp.net/project/armedbear/svn/trunk/abcl
224(14 Mar, 2010)
225
226Features
227--------
228
229* [svn r12518] *DISASSEMBLER* may now contain a hook which returns the
230 command to disassemble compiled functions.
231
232* [svn r12516] An implementation of user-extensible sequences as
233 proposed in Christopher Rhodes, "User-extensible sequences in Common
234 Lisp", Proc. of the 2007 International Lisp Conference.
235
236* [svn r12513] Implement SYS:SRC and SYS:JAVA logical pathname
237 translations for system Lisp source and the root of the Java package
238 structure, respectively.
239
240* [svn r12505] All calls to anonymous functions and local functions that have
241 been declared inline are now converted to LET* forms, reducing stack usage
242 and the number of generated classes.
243
244* [svn r12487] An initial port ASDF-INSTALL now forms the first ABCL
245 contrib. Such contribs are optionally built by the Ant target
246 'abcl.contrib'. ASDF-INSTALL is not expected to work very well
247 under Windows in its present state.
248
249* [svn r12447] [ticket:80] REQUIRE now searches for ASDF systems.
250
251* [svn r12422] Jar pathname support extensively re-worked and tested
252 so that LOAD, PROBE-FILE, TRUENAME, DIRECTORY, and WRITE-FILE-DATE
253 all work both for local and remote jar pathnames of the form
254 "jar:URL!/JAR-ENTRY".
255
256 The loading ASDF systems from jar files is now possible.
257
258 SYS:PATHNAME-JAR-P predicate signals whether a pathname references a
259 jar.
260
261 NB: jar pathnames do *not* currently work as an argument to OPEN.
262
263 SYS:UNZIP implemented to unpack ZIP files.
264
265 SYS:ZIP now has a three argument version for creating zip files with
266 hierarchical entries.
267
268* [svn r12450] Collect unprocessed command-line arguments in
269 EXT:*COMMAND-LINE-ARGUMENT-LIST* (Dennis Lambe Jr.)
270
271* [svn r12414] SYS::%GET-OUTPUT-STREAM-ARRAY returns a Lisp byte array
272 from a Java byte array stream.
273
274* [svn 12402] ABCL.TEST.LISP:RUN-MATCHING will now execute that subset
275 of tests which match a string.
276
277
278Fixes/Optimizations
279-------------------
280
281* [svn r12526] Unbinding of PROGV bound variables on local transfer
282 of control (within-java-function jump targets)
283
284* [svn r12510] The new ansi-test WITH-STANDARD-IO-SYNTAX.23 passes.
285 Our with-standard-io-syntax implementation now correctly resets all necessary
286 pprint variables. Patch by Douglas R. Miles, thanks for the contribution!
287
288* [svn r12485] Pathnames starting with "." can now have TYPE.
289
290* [svn r12484] FASLs containing "." characters not used to indicate
291 type (i.e. ".foo.bar.baz.abcl") can now be loaded.
292
293* [svn r12422] Pathname.java URL contructor under Windows now properly
294 interprets the drive letter.
295
296* [svn r12449] The 'abcl.jar' produced by Netbeans now contains a valid
297 manifest (found by Paul Griffionen).
298
299* [svn r12441] ZipCache now caches all references to ZipFiles based on
300 the last-modified time for local files. Remote files are always
301 retrieved due to problems in the underlying JVM code.
302
303 SYS:REMOVE-ZIP-CACHE implements a way to invalidate an entry given a
304 pathname.
305
306* [svn r12439] Remove duplication of java options in Windows
307 'abcl.bat' script.
308
309* [svn r12437] CHAR-CODE-LIMIT is the upper execlusive limit (found by
310 Paul Griffionen).
311
312* [svn r12436] Describe formatting was missing a newline (reported by
313 Blake McBride).
314
315* [svn 12469] Ensure that FILE-ERROR always has a value (possibly NIL)
316 for its PATHNAME member.
317
318* [svn r14222] MERGE-PATHNAMES no longer potentially shares structure
319 between its result and *DEFAULT-PATHNAME-DEFAULTS*.
320
321* [svn r12416] Fixed ANSI LAMBDA.nn test failures caused by errors in
322 lambda inlining.
323
324* [svn r12417] [ticket:83] Fix TRANSLATE-LOGICAL-PATHNAME regression.
325 (Alan Ruttenberg).
326
327* [svn r12412] Optimize memory efficiency of FORMAT by use of a
328 hashtable rather than a CHAR-CODE-LIMIT array.
329
330* [svn r12408] FIND-SYMBOL requires a string argument.
331
332* [svn r12400] Make NIL (as symbol) available to the compiler.
333
334* [svn r12398] Move lambda list analysis to compile time where possible.
335
336* [svn r12397] BROADCAST-STREAM obeys default external format fixing
337 ANSI MAKE-BROADCAST-STREAM.8.
338
339* [svn r12395] Improve arglist display for SLIME (Matthias Hölzl).
340
341* [svn r12394] Optimize array utilization in closures.
342
343* [svn r12393] Optimize array functions in compiler which don't
344 require clearing the VALUES array.
345
346* [svn r12392] Optimize/normalize aspects of boot.lisp
347
348* [svn r12391] Prevent duplicated subclasses form occuring.
349
350
351Other
352-----
353
354* [svn r12447] SYS::*MODULE-PROVIDER-FUNCTION* now provides a mechanism
355 to extend the REQUIRE resolver mechanism at runtime.
356
357* [svn r12430] Ant based build no longer writes temporary files to
358 contain the Lisp build instructions.
359
360* [svn r12481] STANDARD-CLASS now has slots to be inherited by
361 deriving metaclasses in support of the (in progress) work on
362 metaclass.
363
364* [svn r12425] No longer ignore the METACLASS defclass option in
365 support of the (in progress) work on metaclass
366
367* [svn r12422] SYS::*LOAD-TRUENAME-FASL* now contains the TRUENAME of
368 the Java "*.cls" component when loading a packed FASL.
369
370* [svn r12461] Human readable Java representations for class cast
371 exceptions for NULL and UNBOUND values.
372
373* [svn r12453 et. ff.] Large numbers of the implementation of Java
374 primitives have been declared in a way so that a stack trace
375 provides a much more readable indication of what has been invoked.
376 Primitives which extend Primitive are prefixed with "pf_"; those
377 which extend SpecialOperator are prefixed with "sf_".
378
379* [svn r12422] The internal structure of a jar pathname has changed.
380 Previously a pathname with a DEVICE that was itself a pathname
381 referenced a jar. This convention was not able to simultaneously
382 represent both jar entries that were themselves jar files (as occurs
383 with packed FASLs within JARs) and devices which refer to drive
384 letters under Windows. Now, a pathname which refers to a jar has a
385 DEVICE which is a proper list of at most two entries. The first
386 entry always references the "outer jar", and the second entry (if it
387 exists) references the "inner jar".
388
389* [svn r12419] Ant 'abcl.release' target centralizes the build steps
390 necessary for creating releases.
391
392* [svn r12409] Compiler now rewrites function calls with (LAMBDA âŠ) as
393 the operator to LET* forms.
394
395* [svn r12415] CLASS-FILE renamed to ABCL-CLASS-FILE to prepare for
396 (in progress) reworking of Stream inheritance.
397
398* [svn r123406] 'test/lisp/abcl/bugs.lisp' forms a default location to
399 add unit tests for current bug testing. The intention is to move
400 these tests into the proper location elsewhere in the test suite
401 once they have been fixed.
402
403* [svn r124040] Java tests upgraded to use junit-4.8.1. Netbeans
404 project runtime classpath now uses compilation results before source
405 directory, allowing the invocation of ABCL in interpreted mode if
406 the Ant 'abcl.compile.lisp.skip' property is set. Java unit tests
407 for some aspects of jar pathname work added.
408
409* New toplevel 'doc' directory now contains:
410
411 + [svn r12410] Design for the (in progress) reworking of the Stream
412 inheritance.
413
414 + [svn r12433] Design and current status for the re-implementation
415 of jar pathnames.
416
417* [svn r12402] Change ABCL unit tests to use the ABCL-TEST-LISP definition
418 contained in 'abcl.asd'. Fixed and renabled math-tests. Added new
419 tests for work related to handling jar pathnames.
420
421* [svn r12401] The REFERENCES-NEEDED-P field of the LOCAL-FUNCTION structure now
422 tracks whether local functions need the capture of an actual
423 function object.
424
425
426Version 0.18.1
427==============
428svn://common-lisp.net/project/armedbear/svn/tags/0.18.1/abcl
429(17 Jan, 2010)
430
431Features:
432
433 * Support for printing java objects with print-object
434 * Support for disassembling proxied functions
435
436Bugs fixed:
437
438 * maxima works again
439
440Version 0.18.0
441==============
442svn://common-lisp.net/project/armedbear/svn/tags/0.18.0/abcl
443(12 Jan, 2010)
444
445
446Features:
447
448 * Programmable handling of out-of-memory and stack-overflow conditions
449 * Faster initial startup (to support Google App Engine)
450 * Faster special variable lookup
451 * New interface for binding/unwinding special variables
452 * Implement (SETF (STREAM-EXTERNAL-FORMAT <stream>) <format>)
453 * Implement (SETF (JAVA:JFIELD <object>) <value>)
454 * Constant FORMAT strings get compiled for performance
455
456
457Bugs fixed:
458
459 * FASLs are system default encoding dependent (ticket 77)
460 * I/O of charset-unsupported characters causes infinite loop (ticket 76)
461 * Memory leak where on unused functions with documentation
462 * ANSI PRINT-LEVEL.* tests
463 * Continued execution after failing to handle Throwable exceptions
464 * Line numbers in generated java classes incorrect
465 * JCALL, JNEW doesn't select best match when multiple applicable methods
466 * STREAM-EXTERNAL-FORMAT always returns :DEFAULT, instead of actual format
467 * REPL no longer hangs in Netbeans 6.[578] output window
468 * Lambda-list variables replaced by surrounding SYMBOL-MACROLET
469
470
471Other changes
472
473 * LispObject does not inherit from Lisp anymore
474 * Many functions declared 'final' for performance improvement
475 * SYSTEM:*SOURCE* FASLs for system files no longer refer to intermediate build location
476
477
478Version 0.17.0
479==============
480svn://common-lisp.net/project/armedbear/svn/tags/0.17.0/abcl
481(07 Nov, 2009)
482
483
484Features:
485
486 * Google App Engine example project "Hello world"
487 * Support for loading FASLs from JAR files
488 * Checking of init-arguments for MAKE-INSTANCE (CLOS)
489 * Support for *INVOKE-DEBUGGER-HOOK* (to support SLIME)
490 * Reduced abcl.jar size (bytes and number of objects)
491 * Faster access to locally bound specials (compiler efficiency)
492 * Java property to print autoloading information: abcl.autoload.verbose
493 * Experimental: binary fasls
494 * Default Ant build target now "abcl.clean abcl.wrapper" (from abcl.help)
495 * ConditionThrowable class renamed to ControlTransfer,
496 parent class changed to RuntimeException (to make it unchecked)
497 * API no longer throws ConditionThrowable/ControlTransfer
498
499
500Bugs fixed:
501
502 * Better fix for #63: Prevent exceptions from happening (GO and RETURN-FROM)
503 * Restore ability for ABCL to be build host for SBCL
504 * CLOS performance improvements through looser COMPILE dependency
505 * Compilation fix for highest SPEED setting (triggered by CL-BENCH)
506 * COMPILE's use of temp files eliminated
507 * OpenJDK on Darwin now correctly identified
508 * Incorrect block names for SETF functions defined by LABELS
509 * Fixed MULTIPLE-VALUE-CALL with more than 8 arguments
510 * Incorrect identification of lexical scope on recursive TAGBODY/GO
511 and BLOCK/RETURN-FROM blocks (compiler and interpreter)
512 * Correctly return 65k in char-code-limit (was 256, incorrectly)
513 * Fixes to be able to run the BEYOND-ANSI tests (part of ANSI test suite)
514 * Compiler typo fix
515 * Implementation of mutex functionality moved to lisp from Java
516 * Functions handling #n= and #n# are now compiled
517 * Autoload cleanups
518 * System package creation cleaned up
519 * CHAR-CODE-LIMIT correctly reflects CHAR-CODE maximum return value
520 * Precompiler macroexpansion failure for macros expanding into
521 special operators
522
523
524Version 0.16.1
525==============
526svn://common-lisp.net/project/armedbear/svn/tags/0.16.1/abcl
527(17 Oct, 2009)
528
529Bugs fixed:
530
531 * More careful checking for null args in LispStackFrame
532 * Honor appearance of &allow-other-keys in CLOS MAKE-INSTANCE
533 * Fix #63: GO forms to non-existent TAGBODY labels would exit ABCL
534 * Don't leak temp files during compilation
535
536Version 0.16.0
537==============
538(06 SEP 2009)
539svn://common-lisp.net/project/armedbear/svn/tags/0.16.0/abcl
540
541 Summary of changes:
542 ------------------
543 * Fixed generated wrapper for path names with spaces (Windows)
544 * Fixed ticket #58: Inspection of Java objects in Lisp code
545 * Restored functionality of the built-in profiler
546 * Profiler extended with hot-spot counting (as opposed to call counting)
547 * Stack sampling in the profiler moved to scheduler thread to
548 reduce impact on the program execution thread
549 * THE type-checking for the interpreter
550 (for simple-enough type specifications)
551 * Added structure argument type checking in structure slot
552 accessor functions
553 * Make GENSYM thread-safe
554 * Various performance fixes found by running the raytracer
555 from http://www.ffconsultancy.com/languages/ray_tracer/benchmark.html
556 * Better initarg checking for make-instance and change-class
557 Fixes ansi-test errors CHANGE-CLASS.1.11, MAKE-INSTANCE.ERROR.3,
558 MAKE-INSTANCE.ERROR.4, CHANGE-CLASS.ERROR.4 and SHARED-INITIALIZE.ERROR.4
559 * Improve performance of StackFrames (Erik Huelsmann, Ville Voutilainen,
560 with input from Peter Graves and Douglas Miles)
561 * Improve performance of CLOS eql-specializers via cache (Anton Vodonosov)
562 * 'build-from-lisp.sh' shell script (Tobias Rittweiler)
563 * New threading primitives aligned with Java/JVM constructs (Erik Huelsmann)
564
565 SYNCHRONIZED-ON
566 OBJECT-NOTIFY
567 OBJECT-NOTIFY-ALL
568 * THREADS package created to hold threads related primitives:
569
570 THREADP THREAD-UNLOCK THREAD-LOCK THREAD-NAME THREAD-ALIVE-P
571 CURRENT-THREAD DESTROY-THREAD INTERRUPT-THREAD WITH-THREAD-LOCK
572 MAKE-THREAD-LOCK MAKE-THREAD INTERRUPT-THREAD
573
574 MAPCAR-THREADS
575
576 GET-MUTEX MAKE-MUTEX WITH-MUTEX RELEASE-MUTEX
577
578 These primitives are still part of the EXTENSIONS package but are
579 now to be considered as deprecated, marked to be removed with
580 0.22
581 * Stacktraces now contain calls through Java code relevant to
582 debugging (Tobias Rittweiler)
583
584 Backtrace functionality been moved from EXT:BACKTRACE-AS-LIST to
585 SYS:BACKTRACE to mark this changes. The methods SYS:FRAME-TO-STRING
586 and SYS:FRAME-TO-LIST can be used to inspect the new
587 LISP_STACK_FRAME and JAVA_STACK_FRAME objects
588 * Various stream input performance optimizations
589 * Fixed breakage when combining Gray streams and the pretty printer
590 * Performance improvements for resolution of non-recursive #=n and #n#
591
592
593Version 0.15.0
594==============
595svn://common-lisp.net/project/armedbear/svn/tags/0.15.0/abcl
596(07 Jun, 2009)
597
598 Summary of changes:
599 -------------------
600 * 2 more MOP exported symbols to support Cells port
601 * Updated FASL version
602 * Support (pre)compilation of functions with a non-null lexical environment
603 * Compiler and precompiler cleanups
604 * 'rt.lisp' copy from ANSI test suite removed
605 * Many documentation additions for the (pre)compiler
606 * JSR-233 support improvements
607 * Refactoring of classes:
608 - deleted: CompiledFunction, ClosureTemplateFunction, CompiledClosure,
609 Primitive0R, Primitive1R, Primitive2R
610 - renamed: CompiledClosure [from ClosureTemplateFunction]
611 * Compiler support for non-constant &key and &optional initforms
612 * Fixed ticket #21: JVM stack inconsistency [due to use of RET/JSR]
613 * Numerous special bindings handling fixes, especially with respect
614 to (local) transfer of control with GO/RETURN-FROM
615 * Paths retrieved using URL.getPath() require decoding (r11815)
616 * Build doesn't work inside paths with spaces (r11813)
617 * Compilation of export of a symbol not in *package* (r11808)
618 * Moved compiler-related rewriting of forms from precompiler to compiler
619 * Removed chained closures ('XEPs') in case of &optional arguments only
620 * Loading of SLIME fails under specific conditions (r11791)
621 * Binding of *FASL-ANONYMOUS-PACKAGE* breaks specials handling (r11783)
622 * Fixed ANSI tests: DO-ALL-SYMBOLS.{6,9,12}, DEFINE-SETF-EXPANDER.{1,6,?},
623 MULTIPLE-VALUE-SETQ.{5,8}, SYMBOL-MACROLET.8, COMPILE-FILE.{17,18}
624 * COMPILE and COMPILE-FILE second and third values after a failed
625 invocation inside the same compilation-unit (r11769)
626 * JCLASS on non-existing classes should signal an error (r11762)
627 * Dotted lambda lists break interpretation (r11760)
628 * Implementation of MACROEXPAND-ALL and COMPILER-LET (r11755)
629 * Switch from casting to 'instanceof' for performance (r11754)
630 * Google App Engine support: don't die if 'os.arch' isn't set (r11750)
631 * Excessive stack use while resolving #n= and #n# (r11474)
632
633
634Version 0.14.1
635==============
636(5 Apr, 2009)
637svn://common-lisp.net/project/armedbear/svn/tags/0.14.1/abcl
638
639 Summary of changes:
640 -------------------
641 * Include this CHANGES file and scripting files in the tar and zip files
642
643
644Version 0.14.0
645==============
646(5 Apr, 2009)
647svn://common-lisp.net/project/armedbear/svn/tags/0.14.0/abcl
648
649 Summary of changes:
650 -------------------
651 * Increased clarity on licensing (Classpath exception
652 mentioned in COPYING, removed LICENSE)
653 * Resolved infinite recursion on TRACEing the compiler
654 * Changes on the lisp based build system for parity with Ant
655 * Fixed interpreter creation in Java Scripting
656 * libabcl.so no longer created; it was solely about installing
657 a SIGINT handler. Libraries should not do that.
658 * boxing of LispObject descendants in JCALL/JCALL-RAW fixed
659 * OpenBSD and NetBSD platform detection
660 * fixed special bindings restores in compiled code for
661 MULTIPLE-VALUE-BIND/LET/LET*/PROGV and function bodies
662 * introduced variadic list() function to replace list1() ... list9()
663 * fix return value type of ACOS with complex argument
664 * fixed precision of multiplication of complex values
665 * fixed use of COMPILE inside file compilation (i.e. COMPILE-FILE)
666 * fix expansion of macros inside RESTART-CASE
667 (fixes RESTART-CASE ANSI failures)
668 * fix macroexpansion in the precompiler
669 * Fixnum and Bignum now use a static factory method;
670 constructors are now private -> increases chances of numbers
671 being EQ
672 * Code cleanup in EXPT to fix (EXPT <any-number> <Bignum>)
673
674
675Version 0.13.0
676==============
677(28 Feb, 2009)
678svn://common-lisp.net/project/armedbear/svn/tags/0.13.0/abcl
679
680 Summary of changes:
681 -------------------
682 * Separated J and ABCL into two trees
683 * Many many compiler code cleanups
684 * NetBeans project files
685 * Support for CDR6 (See http://cdr.eurolisp.org/document/6/)
686 * More efficient code emission in the compiler
687 * Ant build targets for testing (abcl.test)
688 * Use ConcurrentHashMap to store the lisp threads for increased performance
689 * Fix adjustability of expressly adjustable arrays (ticket #28)
690 * Fix calculation of upperbound on ASH in the compiler
691 (don't calculate numbers too big, instead, return '*')
692 * Introduce LispInteger as the super type of Bignum and Fixnum
693 * Boxing/unboxing for SingleFloat and DoubleFloat values,
694 inclusive of unboxed calculations
695 * Fixed URL decoding bug in loadCompiledFunction (use java.net.URLDecoder)
696 * Fixed line number counting
697 * Inlining of simple calculations (+/-/*)
698 * All static fields declared 'final'
699 * Add support for java.lang.Long based on Bignum to our FFI
700
701
Note: See TracBrowser for help on using the repository browser.
|
__label__pos
| 0.67458 |
Hummingbird Google Update: Some Key Points to Quickly Understand
June
15
0 comments
The Google algorithm update that is going to change how people search forever! The Hummingbird Update focuses on understanding relationships among things and phrases. It also takes into account what you say when searching with the voice as well as keyword matching before-now known for being an original strategy of just matching keywords in this day’s world
The Hummingbird algorithm was developed to understand the meanings of phrases in a query and display search pages that accurately match those meaning. The more related it is, the better chance you will have for finding what your looking for with Google’s Semantic Search technology which extracts relationship between things like people or places through its powerful yet simple-to use artificial intelligence system called Keyword Graph
This means when someone searches on something specific they can have conversations about their interests without even knowing!
What is the Hummingbird algorithm?
Hummingbird
The Hummingbird Algorithm is crucial to the future of Google. It has taken over the next big project, called RankBrain. The Hummingbird Algorithm was designed with the purpose of using complex algorithms that would help users find the right time to schedule their next meeting.
The algorithm will also use words that will be most relevant to what you might want to do with your time. Google has openly stated that it is like they can read your thoughts!
Why is this new Hummingbird algorithm so important?
In the past, Google has been criticized for not providing enough information about the meaning of a word. They would say that a word is “too broad” or “not found in our database.”
But now, with this new algorithm called Hummingbird, they’ll be able to provide more thorough results.
They have also created an entirely new system for understanding what you want out of life and suggesting things that might interest you as well as offering suggestions on how to find time for those activities.
It’s just like Google knows what you’re thinking!
How it will help you find more time for activities?
You will be able to receive word-of-mouth information from friends, family, or colleagues about what they are doing. You can also find out what is popular by visiting places like Amazon, Netflix, or Facebook. These algorithms are designed to predict what you might want to do with your time.
The Hummingbird Algorithm will make it easier for you to find the right time for your next meeting by using words that are relevant to you. The algorithm knows that if someone is thinking about work then they might not respond until they have finished their shift at the office. It will also take into account where they were last seen on social media, which can give you an idea of when they are most likely to answer phone calls or emails.
The future of the Hummingbird algorithm
Google has been on a mission to make the world better for those with mental illness, and their latest algorithms are helping people cope in ways that before were not possible.
With these new tools at hand Google can detect subtle signs of an upcoming panic attack or depression much more accurately than ever before –
delivering personalized insights into one’s emotional health through emails and social media posts which will allow sufferers access information they need without having any idea how selfless this company really is!
Google has stated that they “can read your mind,” which can benefit users in so many ways! This new algorithm will help people schedule their next meeting, help them find the right time to do things with their time, and will help those who have mental illness learn to trust what they read on social media.
Why Google knows what you’re thinking!
Privacy concerns may or may not be what you are thinking about next. They are absolutely something to consider. When Google “reads your mind” it is also tracking where you go and what you do on the internet, which is like reading every thought in your head! Privacy isn’t just about what you post on social media or write in an email. It’s about who knows where you live, how many pets you have if you’re religious – everything!
Since Google can read everyone’s minds, they’re likely starting to think about their privacy too. One of the most recent changes they made was to make it possible for people to turn off Location History on their account (which is good!). But this still doesn’t stop all the information that they collect. If you’re thinking about this, then here’s a site to check out:
Why is this new Hummingbird Algorithm so important?
The Hummingbird Algorithm will make it easier for Google users to find the right time to schedule their next meeting. It will also give users insight into what they might want to do with their time by examining all the words used in emails.
All of this information is crucial to the future of Google because it has taken over for a new algorithm called RankBrain. The Hummingbird Algorithm was created with the purpose of using words that are most relevant to the task at hand.
This algorithm will be able to predict words people will use in emails and give users insight on how they should answer the email or what they might want to do with their time. Google has stated that it is like they can read your thoughts!
Google has recently developed a new algorithm called Hummingbird, which is designed to help people find the right time for their next meeting.
The Hummingbird Algorithm will also be able to predict what you might want to do with your time and give insight into how someone should answer an email or respond on social media.
What was the purpose of the google hummingbird algorithm update?
What was the purpose of the google hummingbird algorithm update?
The Google Hummingbird algorithm update was designed to provide users with more relevant and accurate search results. This algorithms update affected both the way Google crawls and indexes websites as well as changes the way Google ranks search results.
One of the primary goals of the Hummingbird update was to improve uponGoogles understanding of user intent when conducting a search. Prior to this update, Googles algorithms would largely focus on the individual keywords that were used in a query. However, thisUpdate emphasizes looking at an entire phrases meaning (semantics) in order to better understand what the searcher is looking for.
In addition tobetter understanding user intent, another goalof Hummingbirdwasto deliver more precise and comprehensive search results. This is especially important for longer, more complex queries, as they can be difficult for Google to understand. By taking into account the entire query, Hummingbird is able to provide users with better results.
Overall, the Google Hummingbird algorithm update was a major change to the way Google ranks and displays search results. While it was a large update, it is just one of many that Google has made over the years in order to improve the quality of its search results.
The other Google algorithm Panda, Penguin along with Hummingbird
Google algorithm Panda, Penguin Hummingbird
Google’s search algorithm is constantly evolving, with new updates being rolled out on a regular basis. In the past, Google has made major changes to its algorithm with updates such as Panda, Penguin, and Hummingbird.
While these updates may have initially caused some disruption in the search results, they ultimately led to a better user experience by delivering more relevant and accurate results.
Panda, Penguin and Hummingbird are just a few of the algorithm updates that have helped to make Google the dominant search engine today.
Also Read About Google Input Tools
About the author, Team Digital Shiksha
Digital Shiksha is the leading online and interactive digital marketing training institute in India. We offer professional certification courses in Digital Marketing, which will help you create effective digital marketing strategies. Our students have access to the latest tools and techniques used in online marketing, including social networking, mobile marketing, online communities, viral marketing, wikis, and blogs. With a career in online, interactive, and digital marketing, you can progress into roles such as campaign planning and brand development. At Digital Shiksha we are committed to supporting and educating our students to reach their full potential in the field of digital marketing.
{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Learn more about [your subject]. Start Now!
E-book 01
E-book 02
>
|
__label__pos
| 0.945759 |
7
In the Gift.hs example from the second lecture in Plutus Pioneers Program, when I give ada, a UTxO is produced. When I want to grab that ada in the same block, it fails (There is no input or output when simulated in the palyground). But when I wait for one block and then try to grab, It succeeds.
My conclusion is we cannot produce and consume a UTxO in the same block. We can only consume a UTxO from the next block onward. Am I right or...?
3 Answers 3
7
That is correct. For a UTxO to be consumed it first has to exist on the blockchain. For it to exist on the blockchain the transaction that created it must exist in a block that has been confirmed by the network.
3
You can only grab the gift once it's confirmed on the ledger. Search for awaitTxConfirmed in the source code.
That effect may have configurable level of confirmation in the future as indicated in the source code.
1
It is possible, at least with a recent version of cardano-cli (don't know about earlier ones, but I assume it would work too) to issue two transactions sequentially where the output of tx n is used as the input of tx n + 1. Both transactions will be put into the same block.
Here's a block where I've done so a few days ago, the first two transactions are by me. You can see that the second tx uses the first one as input.
Edit: In the context of plutus smart contracts however, I think Matthias is correct.
1
• This isn't a limitation of Plutus scripts in general, but the plutus-contract library. Transaction chaining is possible with and without Plutus scripts involved.
– james
Oct 6, 2022 at 8:06
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.658432 |
How To Get Into An Iphone 4 Without The Password
• There are a few ways to get into an iPhone 4 without the password.
• One way is to use Siri.
• If you have a passcode, you can ask Siri to open the phone for you.
• Another way is to use iTunes. If you have a computer with iTunes installed, you can plug your iPhone 4 into the computer and it will allow you to access the files on the phone.
How do you unlock an iPhone 4 if you forgot the password without iTunes?
There are a few ways to unlock an iPhone 4 without the password. One way is to use the “Forgot Password” feature on iTunes. Another way is to use a third-party unlocking program.
How do you enable a disabled iPhone 4?
There are a few ways to enable a disabled iPhone 4. One way is to use iCloud or iTunes to restore the device. Another way is to use the “Find My iPhone” feature to unlock the device.
How to bypass iphone 4 passcode
How to bypass iphone passcode with out a computer
How do I reset my iPhone without a password or computer
If you have forgotten your iPhone’s password, or you don’t have a computer to reset it with, there is still a way to reset your device. First, try restarting your phone by pressing and holding the Sleep/Wake button until the slider appears. Then drag the slider to turn off your phone. After your phone has turned off, press and hold the Sleep/Wake button again until you see the Apple logo.
How do you unlock a locked iPhone?
If you have forgotten your iPhone’s passcode, you can reset it with iTunes. Connect your iPhone to iTunes on a computer that you trust, and select your device. In the Summary tab, click Restore. If you have a backup of your data, iTunes will restore it to your device. If you don’t have a backup, you’ll need to set up your device as new.
How do you unlock an iPhone without the passcode or face ID?
How To Convert Video to Mp3 On iPhone?
There are a few ways to unlock an iPhone without the passcode or face ID. One way is to use Siri. If you have a valid pair of Apple headphones, you can hold down the home button and say “Hey Siri, unlock my phone.” Another way is to use your computer. If you have iTunes installed on your computer, you can connect your iPhone to your computer and unlock it using the “Find My iPhone” feature.
How do you unlock an iPhone that has never been synced?
If you have an iPhone that has never been synced before, you will need to unlock it by using the correct passcode. To find out what the passcode is, you will need to restore the iPhone to its factory settings.
How do you unlock an iPhone without the passcode without losing data?
If you have a backup of your iPhone data, you can restore your iPhone from the backup. If you do not have a backup, you can try to use a third-party tool to unlock your iPhone.
How do you unlock a phone without the password?
If you have forgotten your phone’s password, you can try to unlock it using your Google account. If you have not set up a Google account on your phone, you will need to reset your phone to its factory settings.
How do you break into an iPhone with Face ID?
There is no known way to break into an iPhone with Face ID. Face ID is a secure authentication system that uses facial recognition to unlock the phone. It is very difficult to fool the system, and even if someone manages to do so, they would not be able to access the phone’s data without the user’s password.
How To Download Mobdro On Iphone?
Can you trick Face ID with a picture?
There have been reports that you can trick Face ID with a picture, but Apple has stated that this is not the case. Face ID is designed to be secure and accurate, and it is unlikely that you will be able to fool it with a picture.
Can you unlock an iPhone without carrier?
Yes, you can unlock an iPhone without a carrier. There are a few ways to do this, but the most common is to use a third-party unlocking service.
Can you unlock an iPhone from another device?
Yes, you can unlock an iPhone from another device by using the Find My iPhone feature. You can also unlock an iPhone from another device by using your Apple ID and password.
Can you unlock iPhone with eyes closed?
There is no known way to unlock an iPhone with your eyes closed. While there are many rumors and myths about this, no one has been able to successfully do it.
Share:
Share on facebook
Share on whatsapp
Share on twitter
Share on linkedin
|
__label__pos
| 0.958421 |
MEF detect appropriate constructor
Jul 2, 2011 at 2:32 PM
Hello,
I have some problem to undestand how create a Module with a constructor that receive parameters, for instance:
[ImportingConstructor ]
public ModuleInit(IRegionManager regionManager)
{
this._regionManager = regionManager;
}
When I try to get the object from the bootstrapper I receive the following error:
Code in MefBootStrapper:
this.Container.GetExportedValue<Contract.Menu.IModuleInit
>();
Error:
The composition produced a single composition error. The root cause is provided below. Review the CompositionException.Errors property for more detailed information. 1) Cannot create an instance of type 'JS.FWK.Prism.Menu.ModuleInit' because a constructor could not be selected for construction. Ensure that the type either has a default constructor, or a single constructor marked with the 'System.ComponentModel.Composition.ImportingConstructorAttribute'. Resulting in: Cannot activate part 'JS.FWK.Prism.Menu.ModuleInit'. Element: JS.FWK.Prism.Menu.ModuleInit --> JS.FWK.Prism.Menu.ModuleInit --> DirectoryCatalog (Path="D:\Jerry\Develop\00_Enviroment\99_Prism") Resulting in: Cannot get export 'JS.FWK.Prism.Menu.ModuleInit (ContractName="JS.FWK.Prism.Contract.Menu.IModuleInit")' from part 'JS.FWK.Prism.Menu.ModuleInit'. Element: JS.FWK.Prism.Menu.ModuleInit (ContractName="JS.FWK.Prism.Contract.Menu.IModuleInit") --> JS.FWK.Prism.Menu.ModuleInit --> DirectoryCatalog (Path="D:\Jerry\Develop\00_Enviroment\99_Prism")
Could someone help me?
Many thanks for you support
Jerry
|
__label__pos
| 0.896171 |
Skip to main content
Enhancing the Web Experience: The Vital Role of Web Accessibility Design and Testing
web accessibility best practices
In the digital age, the web has become an indispensable part of our lives, providing a gateway to information, communication, and opportunities like never before. However, as we strive for inclusivity and equality, it is crucial to recognize that not all individuals navigate the online realm in the same way. This is where web accessibility design and testing come into play, ensuring that websites and web applications are usable and enjoyable for everyone, regardless of their abilities or disabilities. In this blog post, we will explore the significance of web accessibility and delve into the best practices of design and testing, empowering you to create online experiences that leave no one behind.
Why web accessibility is important
There are several standards to meet when it comes to building an accessible website. These standards exist to help us developers ensure our products are easy to navigate and meet the needs of a broad range of users. In the end, investing time and effort into meeting these standards can help attract new and more pleased customers, which can have a positive impact on both revenue and brand sentiment.
Let’s keep in mind that there are a number of ways to make our websites more accessible. Some of these include adding alt text to images, providing transcripts for audio or video content, and using concise and clear language. In the following sections, we’ll look deeper into a handful of web accessibility design and testing best practices. Following these, we can make sure our digital products are appealing and helpful to as many people as possible.
What is web accessibility design?
When it comes to ensuring accessibility across our websites and web apps, a good place to start is with web content design. Accessible design should consider several various attributes, such as age, economic situation, geographic location, language, race, and gender, among many others. Accessible design should be driven by a conscious understanding of user backgrounds and capabilities. It’s important to keep in mind that digital interfaces that focus on inclusive design can positively impact the user experience by promoting a wider sense of belonging.
There are 2 concepts worth mentioning here: Universal Design and Inclusive Design. Both of them aim to reduce barriers between users and technology and create inclusive experiences, but are distinguished by the following characteristics:
• Universal design focuses on creating an experience that can be accessed and used by the greatest extent of people. Far apart from inclusive design, universal design enforces a single design solution without the need for adaptations or specialized design.
• Inclusive design accepts and embraces multiple design variations as long as they achieve the desired results. For these reasons, universal design is more widely used compared to inclusive design, which is applied more often to digital product design because it’s relatively cheaper and easier to adapt such interfaces.
Small changes can achieve great results
If we think about a user population of elderly people or those with visual disabilities, text legibility and dark mode can make a big difference. From this point of view, designers must think about using reasonably large font sizes, having high contrast between characters in the foreground and background, and using a clean, easily legible font. Such features are important to all users but are particularly important considerations for elderly users and those who may face challenges with interfaces with poor legibility.
Web Content Accessibility Guidelines (WCAG)
In order to have a standardized way to determine if a website complies with accessibility, the World Wide Web Consortium (W3C) recommends adhering to a number of ground rules recognized internationally and compiled in what is known as the Web Content Accessibility Guidelines (WCAG).
The first version of this document was published in 1999 as WCAG 1.0. In 2008, the WCAG 2.0 was released and remains today as the world standard. Nevertheless, a new update was released in 2018 called WCAG 2.1, including everything from WCAG 2.0 in addition to support for web content and mobile devices using the same terminology for both versions.
The WCAG is separated into three sections:
• Guidelines: This section includes short statements providing guidance on what should be considered by designers and developers to make a website accessible
• Design principles: Includes the four overarching principles of accessible website development: perceivable, operable, understandable, and robust (POUR)
• Success criteria: Contains specific technical requirements to ensure that a website is compliant with the standard
Let’s take a closer look at each one of these sections in more detail, starting with the guidelines establishing twelve rules or requirements.
WCAG Guidelines
1. Perceivable
1.1 Provide text alternatives for any non-text content so that it can be changed into other forms people may need, such as large print, Braille, speech, symbols, or simpler language.
1.2 Provide alternatives for time-based media.
1.3 Create content that can be presented in different ways (for example, simpler layout) without losing information or structure.
1.4 Make it easier for users to see and hear content including separating foreground from background.
2. Operable
2.1 Make all functionality available from a keyboard.
2.2 Provide users enough time to read and use content.
2.3 Do not design content in a way that is known to cause seizures.
2.4 Provide ways to help users navigate, find content, and determine where they are.
3. Understandable
3.1 Make text content readable and understandable.
3.2 Make Web pages appear and operate in predictable ways.
3.3 Help users avoid and correct mistakes.
4. Robust
4.1 Maximize compatibility with current and future user agents, including assistive technologies.
WCAG Design Principles
To ensure that your organization complies with the WCAG standards, it must follow the four POUR design principles:
• Perceivable: Information and user interface components must be presentable to users in ways they can perceive.
• Operable: User interface components and navigation must be operable.
• Understandable: Information and the operation of the user interface must be understandable.
• Robust: Content must be robust enough that it can be interpreted reliably by a wide variety of user agents, including assistive technologies.
WCAG Success Criteria
To achieve WCAG compliance, W3C has broken up the success criteria into three different implementation levels. These levels are known as Level A, AA, and AAA respectively.
In the original WCAG standard, W3C described the differences between the levels as follows:
Priority 1: A Web content developer must satisfy this checkpoint. Otherwise, one or more groups will find it impossible to access information in the document. Satisfying this checkpoint is a basic requirement for some groups to be able to use Web documents.
Priority 2: A Web content developer should satisfy this checkpoint. Otherwise, one or more groups will find it difficult to access information in the document. Satisfying this checkpoint will remove significant barriers to accessing Web documents.
Priority 3: A Web content developer may address this checkpoint. Otherwise, one or more groups will find it somewhat difficult to access information in the document. Satisfying this checkpoint will improve access to Web documents.
So basically, if the first point was achieved this would meet Level A, if both the first and second points were achieved it would meet Level AA, and if all three were achieved it would meet Level AAA.
Web Accessibility Testing
In order to ensure that your web applications comply correctly with the WCAG standards, accessibility testing focuses on understanding how well digital content adheres to these established standards for people who have disabilities. It’s important that accessibility testing be an essential component of usability testing.
While performing accessibility testing, the product is assessed to determine how well it meets the needs of users with disabilities. Some examples of content that is commonly tested for accessibility compliance include email, electronic documents, social media posts, web applications, and websites.
When it comes to accessibility testing, it can be automated with software tools, performed manually, or even by applying a hybrid approach that uses both automated test tools and manual methodologies. From this perspective, a website could be evaluated for criteria such as how well pages have been optimized for screen readers with features like adequate color contrast, appropriate use of white space, introducing alt text, including video transcripts, and proper readability levels.
Automated Accessibility Testing
From a testing perspective, automation is always desirable since once created, a test suite can be run over and over again without adding additional costs to the project. They are also much faster than manual tests by reducing the time to run repetitive tests from days to hours, faster time to market, reusability of automated tests, and higher overall test coverage, just to mention a few benefits.
Automated accessibility testing examples and tools:
A great candidate for automated accessibility testing is regression testing, as it provides a good way to save on the costs of the testing execution by eliminating most of the manual testing efforts and resources invested in the process.
For automated accessibility testing early in the development life cycle, you can always examine generic accessibility testing libraries for different platforms, such as Android, iOS, and HTML that provide a set of acceptance tests for accessibility.
Within the build process, you can integrate tools like axe-core, jsx-a11y, Lighthouse Audits, or AccessLint.js into your project to programmatically add accessibility tests and catch errors as you build out your website.
In your projects, you can always think about integrating open-source and licensed automation frameworks into the continuous integration pipeline. A good practice is to merge into the main branch only if the commit passes accessibility checks first. For the web, you can use something like Deque’s axe-core. For iOS, a good example of a tool you can implement is Google’s Toolbox for Accessibility (GTXiLib); and for Android, you can use Google’s Accessibility Test Framework (ATF).
To add another example, we can mention AATT, which stands for Automated Accessibility Testing Tool. This option provides an accessibility API and custom web application for HTML CodeSniffer, Axe, and Chrome developer tool. Using the AATT web application, you can configure test server configurations inside the firewall, and test individual pages.
AATT web application, you can configure test server configurations inside the firewall, and test individual pages which is a web accessibility best practices
Manual Accessibility Testing
No doubt the easiest way to initially test your project’s compliance with WCAG is manual accessibility testing. There are different options for manually testing your projects and ensuring they comply with these rules, from adding plugins to your browser to going manually yourself with the Tab key on your keyboard.
Here are some pretty handy tools you can use in your day-to-day testing:
If you’re a Browserstack user, a cool feature you can take advantage of is the Screen Reader which lets you go through your URL content by just adding it to the browser bar and enabling the Screen Reader option. This way, you can manually check the content contained on your website and experience how a person with reduced eyesight or blindness could navigate through it and listen to a description of every feature.
the Screen Reader which lets you go through your URL content by just adding it to the browser bar and enabling the Screen Reader option. This way, you can manually check the content contained on your website and experience how a person with reduced eyesight or blindness could navigate through it and listen to a description of every feature.
Wave is another good example of an accessibility evaluation tool you can take advantage of to test your website’s compliance with WCAG standards. In my case, I have used it as a Chrome extension. This add-on offers very complete analysis to users in a very graphic way. It marks all the suggested changes on the website itself and provides a visual summary of them.
Wave offers very complete analysis to users in a very graphic way. It marks all the suggested changes on the website itself and provides a visual summary of them.
Axe DevTools, is an accessibility testing tool that also offers its users a browser extension to analyze their websites. The add-on version provides an option for the user to test compliance with WCAG by providing a scan through the site.
In this case, you open it on your URL and get a compendium about the issues found, how critical they are, and best practices suggestions.
Axe DevTools, is an accessibility testing tool that also offers its users a browser extension to analyze their websites.
These are just some of the options available for accessibility testing tools. The best thing you can do is to evaluate the best option to implement on your projects depending on your needs and resources.
In Conclusion
To conclude, let’s keep in mind that all companies publishing web content have a shared responsibility to comply with international standards for web accessibility. It is our duty as professionals in the software development industry to create awareness of the importance of being inclusive despite our project’s limitations.
In the end, the main purpose of accessibility is to grant access to people with different capabilities to our sites, systems, and applications. Accessibility must always be part of your project’s design and be considered throughout the entire software development process.
New call-to-action
|
__label__pos
| 0.915629 |
Matrices as an array of numbers
1. At present I introduce matrices as an array of numbers and then carry out various matrix operations. Is there a more tangible way of introducing this topic?
I have thought of transformations but my experience with students has been that they get lost in the transformations and so give up on matrices.
These are weak engineering students who struggle and think of mathematics as too abtract.
2. jcsd
3. Re: Matrices
How about describing them as (special kinds of) functions? Engineers certainly know functions and feel familiar with them. They certainly also understand that functions might be important.
4. Re: Matrices
What?! An amazing attitude sir! Calling the students weak before you have even broached the topic? I hope you are not at a publicly funded institution!
Go through Gilbert Strang's MIT OCW videos on youtube, try to see how he motivates the topic. He does not do a very good job. Then instead of portraying arrays as being 'collection of numbers' show them what they can do visually. Look at the CS applications of linear algebra for this. They normally present images of rotations and scaling and other transformations of vectors by linear algebra operations. Then imagine questions like, 'given a linear array of letters, how can I find nested combinations of letters in there? eg. asdflkasdflj<h>asdfjad</h>adfadf<h><h>fgadf</h>asdfa</h>, how can you automatically find <h> and </h> and all the letters in between?' Well, this particular example is esoteric, but surely you, strong teacher, have enough creativity to find questions possessing such simple characteristics? Or how about, given a black/white image, essentially an MxN array, find out how many connected components are there? Or how about the basics of graph theory? I could go on with CS apps here. But let's get back to engineering.
Linear algebra is used in the state space representation of ODEs in controls class, coupled system vibration analysis (similar concept), and solid mechanics (principle stress components of a stress tensor, the invariance concept can be introduced here) and solutions to finite element/difference system of equations, fracture mechanics (Williams solution for stress singularity is based on the solution of a homogeneous linear system), linear optimization, think about the Jacobian or hessian matrices, you extract the eigenvalues there to recognize system singularities. It's easy to go on here. You would do well to introspect about your own learning before preaching in public. Every engineer understands ODEs. Show them how the characteristic equation to the homogeneous equation obtained from 'assuming' the solution to be $\text{e}^{rx}$ and the very equation itself is the same as the characteristic equation and eigenvalues obtained from the eigenanalysis of a matrix. Show them how the linear algebra concept of linear independence applies in this very case as well and how you employ that concept in formulating the general solution.
Pardon my hostility, but I do not think highly of instructors who consider their students weak. They only succeed in conveying their personal sense of confusion which results from their own ennui, in class. It is but a bad teacher who calls his acolytes that. You do a shoddy job now, and your students end up suffering for the next few years of their undergraduate as this is one of the most fundamental topics in engineering. Your job is not writing grant proposals! Your job is to teach them, get them interested if they are not; not molest them into a state of intellectual submission and scar them forever! In other words don't screw with who gives you your basic pay!!
Know someone interested in this topic? Share a link to this question via email, Google+, Twitter, or Facebook
|
__label__pos
| 0.963852 |
Commit b1ab1bc1 authored by Jean-Francois Dockes's avatar Jean-Francois Dockes
Browse files
upnp to recoll search translation sort of working
parent c2a515fa
......@@ -66,7 +66,7 @@ class PlgWithSlave::Internal {
public:
Internal(PlgWithSlave *_plg, const string& exe, const string& hst,
int prt, const string& pp)
: plg(_plg), exepath(exe), upnphost(hst), upnpport(prt), pathprefix(pp),
: plg(_plg), exepath(exe), upnphost(hst), upnpport(prt), pathprefix(pp),
laststream(this) {
}
......@@ -118,7 +118,7 @@ static int answer_to_connection(void *cls, struct MHD_Connection *connection,
// and then dispatch the request.
PlgWithSlave::Internal *plgi = (PlgWithSlave::Internal*)cls;
PlgWithSlave *realplg =
dynamic_cast<PlgWithSlave*>(plgi->plg->m_services->getpluginforpath(url));
dynamic_cast<PlgWithSlave*>(plgi->plg->m_services->getpluginforpath(url));
if (nullptr == realplg) {
LOGERR("answer_to_connection: no plugin for path [" << url << endl);
return MHD_NO;
......@@ -245,22 +245,22 @@ string PlgWithSlave::get_media_url(const string& path)
{
LOGDEB0("PlgWithSlave::get_media_url: " << path << endl);
if (!m->maybeStartCmd()) {
return string();
return string();
}
time_t now = time(0);
if (m->laststream.path.compare(path) ||
(now - m->laststream.opentime > 10)) {
unordered_map<string, string> res;
if (!m->cmd.callproc("trackuri", {{"path", path}}, res)) {
LOGERR("PlgWithSlave::get_media_url: slave failure\n");
return string();
}
auto it = res.find("media_url");
if (it == res.end()) {
LOGERR("PlgWithSlave::get_media_url: no media url in result\n");
return string();
}
unordered_map<string, string> res;
if (!m->cmd.callproc("trackuri", {{"path", path}}, res)) {
LOGERR("PlgWithSlave::get_media_url: slave failure\n");
return string();
}
auto it = res.find("media_url");
if (it == res.end()) {
LOGERR("PlgWithSlave::get_media_url: no media url in result\n");
return string();
}
m->laststream.clear();
m->laststream.path = path;
m->laststream.media_url = it->second;
......@@ -287,7 +287,7 @@ PlgWithSlave::~PlgWithSlave()
}
static int resultToEntries(const string& encoded, int stidx, int cnt,
vector<UpSong>& entries)
vector<UpSong>& entries)
{
Json::Value decoded;
istringstream input(encoded);
......@@ -298,25 +298,31 @@ static int resultToEntries(const string& encoded, int stidx, int cnt,
for (unsigned int i = stidx; i < decoded.size(); i++) {
#define JSONTOUPS(fld, nm) {song.fld = decoded[i].get(#nm, "").asString();}
if (dolimit && --cnt < 0) {
break;
}
UpSong song;
// tp is container ("ct") or item ("it")
if (dolimit && --cnt < 0) {
break;
}
UpSong song;
JSONTOUPS(id, id);
JSONTOUPS(parentid, pid);
JSONTOUPS(title, tt);
JSONTOUPS(artUri, upnp:albumArtURI);
JSONTOUPS(artist, upnp:artist);
JSONTOUPS(upnpClass, upnp:class);
// tp is container ("ct") or item ("it")
string stp = decoded[i].get("tp", "").asString();
if (!stp.compare("ct")) {
song.iscontainer = true;
if (!stp.compare("ct")) {
song.iscontainer = true;
string ss = decoded[i].get("searchable", "").asString();
if (!ss.empty()) {
song.searchable = stringToBool(ss);
}
} else if (!stp.compare("it")) {
song.iscontainer = false;
JSONTOUPS(uri, uri);
JSONTOUPS(artist, dc:creator);
JSONTOUPS(genre, upnp:genre);
} else if (!stp.compare("it")) {
song.iscontainer = false;
JSONTOUPS(uri, uri);
JSONTOUPS(artist, dc:creator);
JSONTOUPS(genre, upnp:genre);
JSONTOUPS(album, upnp:album);
JSONTOUPS(tracknum, upnp:originalTrackNumber);
JSONTOUPS(tracknum, upnp:originalTrackNumber);
JSONTOUPS(mime, res:mime);
string srate = decoded[i].get("res:samplefreq", "").asString();
if (!srate.empty()) {
......@@ -326,18 +332,13 @@ static int resultToEntries(const string& encoded, int stidx, int cnt,
if (!sdur.empty()) {
song.duration_secs = atoi(sdur.c_str());
}
} else {
LOGERR("PlgWithSlave::result: bad type in entry: " << stp << endl);
continue;
}
JSONTOUPS(id, id);
JSONTOUPS(parentid, pid);
JSONTOUPS(title, tt);
JSONTOUPS(artUri, upnp:albumArtURI);
JSONTOUPS(artist, upnp:artist);
JSONTOUPS(upnpClass, upnp:class);
} else {
LOGERR("PlgWithSlave::result: bad type in entry: " << stp <<
"(title: " << song.title << ")\n");
continue;
}
LOGDEB1("PlgWitSlave::result: pushing: " << song.dump() << endl);
entries.push_back(song);
entries.push_back(song);
}
// We return the total match size, the count of actually returned
// entries can be obtained from the vector
......@@ -463,7 +464,7 @@ int PlgWithSlave::browse(const string& objid, int stidx, int cnt,
LOGDEB1("PlgWithSlave::browse\n");
entries.clear();
if (!m->maybeStartCmd()) {
return errorEntries(objid, entries);
return errorEntries(objid, entries);
}
string sbflg;
switch (flg) {
......@@ -489,13 +490,13 @@ int PlgWithSlave::browse(const string& objid, int stidx, int cnt,
unordered_map<string, string> res;
if (!m->cmd.callproc("browse", {{"objid", objid}, {"flag", sbflg}}, res)) {
LOGERR("PlgWithSlave::browse: slave failure\n");
return errorEntries(objid, entries);
LOGERR("PlgWithSlave::browse: slave failure\n");
return errorEntries(objid, entries);
}
auto it = res.find("entries");
if (it == res.end()) {
LOGERR("PlgWithSlave::browse: no entries returned\n");
LOGERR("PlgWithSlave::browse: no entries returned\n");
return errorEntries(objid, entries);
}
......@@ -523,9 +524,14 @@ int PlgWithSlave::search(const string& ctid, int stidx, int cnt,
LOGDEB("PlgWithSlave::search: [" << searchstr << "]\n");
entries.clear();
if (!m->maybeStartCmd()) {
return errorEntries(ctid, entries);
return errorEntries(ctid, entries);
}
// Computing a pre-cooked query. For simple-minded plugins.
// Note that none of the qobuz/gmusic/tidal plugins actually use
// the slavefield part (defining in what field the term should
// match).
//
// Ok, so the upnp query language is quite powerful, but us, not
// so much. We get rid of parenthesis and then try to find the
// first searchExp on a field we can handle, pretend the operator
......@@ -543,9 +549,9 @@ int PlgWithSlave::search(const string& ctid, int stidx, int cnt,
// The sequence can now be either [field, op, value], or
// [field, op, value, and/or, field, op, value,...]
if ((vs.size() + 1) % 4 != 0) {
LOGERR("PlgWithSlave::search: bad search string: [" << searchstr <<
LOGERR("PlgWithSlave::search: bad search string: [" << searchstr <<
"]\n");
return errorEntries(ctid, entries);
return errorEntries(ctid, entries);
}
string slavefield;
string value;
......@@ -571,7 +577,7 @@ int PlgWithSlave::search(const string& ctid, int stidx, int cnt,
}
classfilter = what;
} else if (!upnpproperty.compare("upnp:artist") ||
!upnpproperty.compare("dc:author")) {
!upnpproperty.compare("dc:author")) {
slavefield = "artist";
value = vs[i+2];
break;
......@@ -585,14 +591,10 @@ int PlgWithSlave::search(const string& ctid, int stidx, int cnt,
break;
}
}
if (slavefield.empty()) {
LOGERR("PlgWithSlave: unsupported search: [" << searchstr << "]\n");
return errorEntries(ctid, entries);
}
// In cache ?
ContentCacheEntry *cep;
string cachekey(m_name + ":" + objkind + ":" + slavefield + ":" + value);
string cachekey(m_name + ":" + ctid + ":" + searchstr);
if ((cep = o_scache.get(cachekey)) != nullptr) {
int total = cep->toResult(classfilter, stidx, cnt, entries);
delete cep;
......@@ -602,19 +604,19 @@ int PlgWithSlave::search(const string& ctid, int stidx, int cnt,
// Run query
unordered_map<string, string> res;
if (!m->cmd.callproc("search", {
{"objid", ctid},
{"objkind", objkind},
{"objid", ctid},
{"objkind", objkind},
{"origsearch", searchstr},
{"field", slavefield},
{"value", value} }, res)) {
LOGERR("PlgWithSlave::search: slave failure\n");
return errorEntries(ctid, entries);
{"value", value} }, res)) {
LOGERR("PlgWithSlave::search: slave failure\n");
return errorEntries(ctid, entries);
}
auto it = res.find("entries");
if (it == res.end()) {
LOGERR("PlgWithSlave::search: no entries returned\n");
return errorEntries(ctid, entries);
LOGERR("PlgWithSlave::search: no entries returned\n");
return errorEntries(ctid, entries);
}
// Convert the whole set and store in cache
ContentCacheEntry e;
......
......@@ -23,6 +23,7 @@ import posixpath
import re
import conftree
import cmdtalkplugin
import urllib
import uprclfolders
import uprclsearch
......@@ -76,7 +77,7 @@ def trackuri(a):
msgproc.log("trackuri: [%s]" % a)
if 'path' not in a:
raise Exception("trackuri: no 'path' in args")
path = a['path']
path = urllib.quote(a['path'])
media_url = rclpathtoreal(path, pathprefix, uprclhost, pathmap)
msgproc.log("trackuri: returning: %s" % media_url)
return {'media_url' : media_url}
......@@ -131,7 +132,8 @@ def search(a):
upnps = a['origsearch']
entries = uprclsearch.search(rclconfdir, objid, upnps, g_myprefix, httphp, pathprefix)
entries = uprclsearch.search(rclconfdir, objid, upnps, g_myprefix,
httphp, pathprefix)
encoded = json.dumps(entries)
return {"entries" : encoded}
......
......@@ -144,40 +144,6 @@ def inittree(confdir):
g_dirvec = _rcl2folders(g_alldocs, confdir)
def _cmpentries(e1, e2):
tp1 = e1['tp']
tp2 = e2['tp']
isct1 = tp1 == 'ct'
isct2 = tp2 == 'ct'
# Containers come before items, and are sorted in alphabetic order
if isct1 and not isct2:
return 1
elif not isct1 and isct2:
return -1
elif isct1 and isct2:
tt1 = e1['tt']
tt2 = e2['tt']
if tt1 < tt2:
return -1
elif tt1 > tt2:
return 1
else:
return 0
# 2 tracks. Sort by album then track number
k = 'upnp:album'
a1 = e1[k] if k in e1 else ""
a2 = e2[k] if k in e2 else ""
if a1 < a2:
return -1
elif a1 > a2:
return 1
k = 'upnp:originalTrackNumber'
a1 = e1[k] if k in e1 else "0"
a2 = e2[k] if k in e2 else "0"
return int(a1) - int(a2)
def _objidtodiridx(pid):
if not pid.startswith(g_myprefix):
raise Exception("folders.browse: bad pid %s" % pid)
......@@ -230,13 +196,21 @@ def browse(pid, flag, httphp, pathprefix):
if e:
entries.append(e)
return sorted(entries, cmp=_cmpentries)
return sorted(entries, cmp=cmpentries)
# return path for objid, which has to be a container. This is good old pwd
def dirpath(objid):
diridx = _objidtodiridx(objid)
# We may get called from search, on the top dir (above [folders]). Return
# empty in this case
try:
diridx = _objidtodiridx(objid)
except:
return ""
if diridx == 0:
return "/"
lpath = []
while True:
fathidx = g_dirvec[diridx][".."][0]
......
......@@ -21,29 +21,70 @@ def _readword(s, i):
w += s[j]
return j,w
# Called with '"' already read:
def _readstring(s, i):
str = '"'
# Called with '"' already read.
# Upnp search term strings are double quoted, but we should not take
# them as recoll phrases. We separate parts which are internally
# quoted, and become phrases, and lists of words which we interpret as
# an and search (comma-separated). Internal quotes come backslash-escaped
def _parsestring(s, i=0):
uplog("parseString: input: <%s>" % s[i:])
# First change '''"hello \"one phrase\"''' world" into
# '''hello "one phrase" world'''
# Note that we can't handle quoted dquotes inside string
str = ''
escape = False
instring = False
for j in range(i, len(s)):
#print("s[j] [%s] out now [%s]" % (s[j],out))
if s[j] == '\\':
if not escape:
escape = True
str += '\\'
continue
if instring:
if escape:
if s[j] == '"':
str += '"'
instring = False
else:
str += '\\' + s[j]
escape = False
else:
if s[j] == '\\':
escape = True
else:
str += s[j]
if s[j] == '"':
str += '"'
if not escape:
return j+1, str
else:
str += s[j]
escape = False
return len(s), str
if escape:
str += s[j]
escape = False
if s[j] == '"':
instring = True
else:
if s[j] == '\\':
escape = True
elif s[j] == '"':
j += 2
break
else:
str += s[j]
tokens = stringToStrings(str)
return j, tokens
def _appendterms(out, v, field, oper):
uplog("_appendterms: v %s field <%s> oper <%s>" % (v,field,oper))
swords = ""
phrases = []
for w in v:
if len(w.split()) == 1:
if swords:
swords += ","
swords += w
else:
phrases.append(w)
out.append(swords)
for ph in phrases:
out.append(field)
out.append(oper)
out.append('"' + ph + '"')
def upnpsearchtorecoll(s):
uplog("upnpsearchtorecoll:in: <%s>" % s)
......@@ -52,6 +93,8 @@ def upnpsearchtorecoll(s):
out = []
hadDerived = False
i = 0
field = ""
oper = ""
while True:
i,c = _getchar(s, i)
if not c:
......@@ -67,13 +110,18 @@ def upnpsearchtorecoll(s):
out = ["mime:*"]
break
if c == '(' or c == ')' or c == '>' or c == '<' or c == '=':
if c == '(' or c == ')':
out.append(c)
elif c == '>' or c == '<' or c == '=':
oper += c
else:
if c == '"':
i,w = _readstring(s, i)
if not w.endswith('"'):
raise Exception("Unterminated string in [%s]" % out)
i,v = _parsestring(s, i)
uplog("_parsestring ret: %s" % v)
_appendterms(out, v, field, oper)
oper = ""
field = ""
continue
else:
i -= 1
i,w = _readword(s, i)
......@@ -81,20 +129,25 @@ def upnpsearchtorecoll(s):
#print("Got word [%s]" % w)
if w == 'contains':
out.append(':')
oper = ':'
elif w == 'doesNotContain':
if len(out) < 1:
raise Exception("doesNotContain can't be the first word")
out.insert(-1, "-")
out.append(':')
oper = ':'
elif w == 'derivedFrom':
hadDerived = True
out.append(':')
oper = ':'
elif w == 'true':
out.append('*')
oper = ""
elif w == 'false':
out.append('xxxjanzocsduochterrrrm')
elif w == 'exists':
out.append(':')
oper = ':'
elif w == 'and':
# Recoll has implied AND, but see next
pass
......@@ -105,13 +158,9 @@ def upnpsearchtorecoll(s):
# use parentheses
out.append('OR')
else:
if hadDerived:
hadDerived = False
if len(w) >= 1 and w[-1] == '"':
w = w[:-1] + '*' + '"'
else:
w += '*'
out.append(w)
field = upnp2rclfields[w]
out.append(field)
oper = ""
ostr = ""
for tok in out:
......@@ -124,9 +173,10 @@ def search(rclconfdir, objid, upnps, idprefix, httphp, pathprefix):
rcls = upnpsearchtorecoll(upnps)
filterdir = uprclfolders.dirpath(objid)
rcls += " dir:\"" + filterdir + "\""
if filterdir and filterdir != "/":
rcls += " dir:\"" + filterdir + "\""
uplog("Search: recoll search: %s" % rcls)
uplog("Search: recoll search: <%s>" % rcls)
rcldb = recoll.connect(confdir=rclconfdir)
try:
......@@ -142,19 +192,19 @@ def search(rclconfdir, objid, upnps, idprefix, httphp, pathprefix):
entries = []
maxcnt = 0
totcnt = 0
while True:
docs = rclq.fetchmany()
for doc in docs:
id = idprefix + '$' + 'seeyoulater'
e = rcldoctoentry(id, objid, httphp, pathprefix, doc)
entries.append(e)
totcnt += 1
if (maxcnt > 0 and totcnt >= maxcnt) or len(docs) != rclq.arraysize:
if e:
entries.append(e)
if (maxcnt > 0 and len(entries) >= maxcnt) or \
len(docs) != rclq.arraysize:
break
uplog("Search retrieved %d docs" % (totcnt,))
uplog("Search retrieved %d docs" % (len(entries),))
return entries
return sorted(entries, cmp=cmpentries)
......
......@@ -3,6 +3,7 @@ from __future__ import print_function
import sys
import posixpath
import urllib
import os
audiomtypes = frozenset([
'audio/mpeg',
......@@ -11,9 +12,20 @@ audiomtypes = frozenset([
'audio/aac',
'audio/mp4',
'audio/x-aiff',
'audio/x-wav'
'audio/x-wav',
'inode/directory'
])
upnp2rclfields = {'upnp:album': 'album',
'releasedate' : 'date',
'upnp:originalTrackNumber' : 'tracknumber',
'upnp:artist' : 'artist',
'upnp:genre' : 'genre',
'res:mime' : 'mtype',
'duration' : 'duration',
'res:samplefreq' : 'sample_rate'
}
def rcldoctoentry(id, pid, httphp, pathprefix, doc):
"""
Transform a Doc objects into the format expected by the parent
......@@ -38,8 +50,8 @@ def rcldoctoentry(id, pid, httphp, pathprefix, doc):
http://host:port/pathprefix/track?version=1&trackId=<trackid>
"""
uplog("rcldoctoentry: pid %s id %s httphp %s pathprefix %s" %
(pid, id, httphp, pathprefix))
uplog("rcldoctoentry: pid %s id %s mtype %s" %
(pid, id, doc.mtype))
li = {}
if doc.mtype not in audiomtypes:
......@@ -47,7 +59,7 @@ def rcldoctoentry(id, pid, httphp, pathprefix, doc):
li['pid'] = pid
li['id'] = id
li['tp'] = 'it'
li['tp'] = 'ct' if doc.mtype == 'inode/directory' else 'it'
# Why no dc.title??
li['tt'] = doc.title
......@@ -72,11 +84,7 @@ def rcldoctoentry(id, pid, httphp, pathprefix, doc):
#lyricist=
#lyrics=
for oname,dname in [('upnp:album', 'album'), ('releasedate','date'),
('upnp:originalTrackNumber', 'tracknumber'),
('upnp:artist', 'artist'), ('upnp:genre', 'genre'),
('res:mime', 'mtype'), ('duration', 'duration'),
('res:samplefreq', 'sample_rate')]:
for oname,dname in upnp2rclfields.iteritems():
val = getattr(doc, dname)
if val:
li[oname] = val
......@@ -98,6 +106,49 @@ def rcldoctoentry(id, pid, httphp, pathprefix, doc):
uplog("rcldoctoentry: uri: %s" % li['uri'])
return li
def cmpentries(e1, e2):
tp1 = e1['tp']
tp2 = e2['tp']
isct1 = tp1 == 'ct'
isct2 = tp2 == 'ct'
# Containers come before items, and are sorted in alphabetic order
if isct1 and not isct2:
return 1
elif not isct1 and isct2:
return -1
|
__label__pos
| 0.98245 |
相关推荐
Spring 官网
Spring Framework 官方文档
spring-framework Github
SpringIOC 详解
SpringAOP 详解
什么是动态代理
动态代理其实就是Java中的一个方法,这个方法可以实现:
动态创建一组指定的接口的实现对象(在运行时,创建实现了指定的一组接口的对象) 例如:
interface A {}
interface B {}
//obj对象的类型实现了A和B两个接口
Object obj = 方法(new Class[]{A.class, B.class})
动态代理初体验
我们根据上面的思路来体验一下 Java 中的动态代理吧,首先我们要先写两个接口。
interface A {
public void a();
}interface B {
public void b();
}
然后我们就先来看一下动态代理的代码:
public static Object newProxyInstance(ClassLoader loader,
Class<?>[] interfaces,
InvocationHandler h)
```
上面这个就是动态代理类(Proxy)类中的创建代理对象的方法,下面介绍一下方法的三个参数:
+ `ClassLoader loader`:方法需要动态生成一个类,这个类实现了A和B两个接口,然后创建这个类的对象。需要生成一个类,而且这个类也需要加载到方法区中,所以我们需要一个`ClassLoader`来加载该类
+ `Class<?>[] interfaces`:我们需要代理对象实现的数组
+ `InvocationHandler h`:调用处理器
这里你可能对`InvocationHandler`有疑惑,这里先买个关子,下面马上揭晓。
我们现在就使用动态代理创建一个代理对象吧。
```java
@Test
public void test1() {
/**
* 三个参数
* 1、ClassLoader
* 方法需要动态生成一个类,这个类实现了A和B两个接口,然后创建这个类的对象
* 需要生成一个类,这个类也需要加载到方法区中,所以我们需要一个ClassLoader来加载该类
*
* 2、Class[] interfaces
* 我们需要代理对象实现的数组
*
* 3、InvocationHandler
* 调用处理器
*/
ClassLoader classLoader = this.getClass().getClassLoader();
//这里创建一个空实现的调用处理器。
InvocationHandler invocationHandler = new InvocationHandler() {
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
return null;
}
};
Object obj = Proxy.newProxyInstance(classLoader, new Class[]{A.class, B.class}, invocationHandler);
//强转为A和B接口类型,说明生成的代理对象实现了A和B接口
A a = (A) obj;
B b = (B) obj;
}
经过测试代码运行成功,说明生成的代理对象确实实现了 A 接口和 B 接口,但是我想你一定会对代理对象如何实现了 A 接口和 B 接口感兴趣,你一定想知道如果使用代理对象调用相应接口的方法会发生什么感兴趣,下面我们一起来探究一下:
上面代码的基础上加上下面的代码
a.a();
b.b();
我们可以发现什么也没有发生。这是因为我们根本没有为代理对象添加实现逻辑。可是实现逻辑添加在哪里呢?哈哈,当然是InvocationHandler中了。下面就看一看添加了实现逻辑的代码:
@Test
public void test2() {
/**
* 三个参数
* 1、ClassLoader
* 方法需要动态生成一个类,这个类实现了A和B两个接口,然后创建这个类的对象
* 需要生成一个类,这个类也需要加载到方法区中,所以我们需要一个ClassLoader来加载该类
*
* 2、Class[] interfaces
* 我们需要代理对象实现的数组
*
* 3、InvocationHandler
* 调用处理器
*
* 代理对象实现的所有接口中的方法,内容都是调用InvocationHandler的invoke()方法
*/
ClassLoader classLoader = this.getClass().getClassLoader();
//这里创建一个空实现的调用处理器。
InvocationHandler invocationHandler = new InvocationHandler() {
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
System.out.println("你好!!!!");//注意这里添加了一点小逻辑
return null;
}
};
Object obj = Proxy.newProxyInstance(classLoader, new Class[]{A.class, B.class}, invocationHandler);
//强转为A和B接口类型,说明生成的代理对象实现了A和B接口
A a = (A) obj;
B b = (B) obj;
a.a();
b.b();
}
截图如下:
这里我们发现 A 接口和 B 接口的实现逻辑都是调用了invoke这个方法中的逻辑,其实除了调用代理对象的native方法,调用代理对象的其他所有方法本质都是调用了invoke方法,下面我们再来看第三个实例,让我们对动态代理有更深刻的认识。
public void test3() {
/**
* 三个参数
* 1、ClassLoader
* 方法需要动态生成一个类,这个类实现了A和B两个接口,然后创建这个类的对象
* 需要生成一个类,这个类也需要加载到方法区中,所以我们需要一个ClassLoader来加载该类
*
* 2、Class[] interfaces
* 我们需要代理对象实现的数组
*
* 3、InvocationHandler
* 调用处理器
*
* 代理对象实现的所有接口中的方法,内容都是调用InvocationHandler的invoke()方法
*/
ClassLoader classLoader = this.getClass().getClassLoader();
//这里创建一个空实现的调用处理器。
InvocationHandler invocationHandler = new InvocationHandler() {
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
System.out.println("你好!!!!");
return "Hello";//这里改为返回"Hello"
}
};
Object obj = Proxy.newProxyInstance(classLoader, new Class[]{A.class, B.class}, invocationHandler);
//强转为A和B接口类型,说明生成的代理对象实现了A和B接口
A a = (A) obj;
B b = (B) obj;
a.toString();//注意这里调用了toString()方法
b.getClass();//注意这里调用了getClass()方法
//这里在A接口中添加了一个方法public Object aaa(String s1, int i);
Object hello = a.aaa("Hello", 100);
System.out.println(obj.getClass());//这里看一下代理对象是什么
System.out.println(hello);//这里看一下返回值是什么
}
通过代码的结果我们大胆的猜测一下,代理对象方法的返回值其实就是invoke方法的返回值,代理对象其实就是使用反射机制实现的一个运行时对象。哈哈,当然这些肯定不是猜测了,其实确实就是这样。下面是时候总结一下InvocationHandlerinvoke方法了。如下图所示:
当我们调用代理对象的方法时,其对应关系就如上图所示。
初步实现 AOP
在我们对动态代理有了一定的认识之后,我们就可以实现最基本版本的 AOP 了,当然,这是一个非常残缺的 AOP 实现,甚至都不能称之为 AOP 实现。 我们先写一个接口:
package demo2;
/**
* Created by Yifan Jia on 2018/6/5.
*/
//服务生
public interface Waiter {
//服务方法
public void server();
}
然后给出该接口的实现类:
package demo2;
/**
* Created by Yifan Jia on 2018/6/5.
*/
public class ManWaiter implements Waiter {
@Override
public void server() {
System.out.println("服务中");
}
}
然后我们就通过动态代理来对上面的ManWaiter进行增强:
package demo2;
import org.junit.Test;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
/**
* Created by Yifan Jia on 2018/6/5.
*/
public class Demo2 {
@Test
public void test1() {
Waiter waiter = new ManWaiter();
waiter.server();
}
@Test
public void test2() {
Waiter manWaiter = new ManWaiter();
ClassLoader classLoader = this.getClass().getClassLoader();
Class[] interfaces = {Waiter.class};
InvocationHandler invocationHandler = new WaiterInvocationHandler(manWaiter);
//得到代理对象,代理对象就是在目标对象的基础上进行了增强的对象
Waiter waiter = (Waiter) Proxy.newProxyInstance(classLoader, interfaces, invocationHandler);
waiter.server();//前面添加“你好”,后面添加“再见”
}
}
class WaiterInvocationHandler implements InvocationHandler {
private Waiter waiter;
WaiterInvocationHandler(Waiter waiter) {
this.waiter = waiter;
}
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
System.out.println("你好");
waiter.server();//调用目标对象的方法
System.out.println("再见");
return null;
}
}
结果如下:
你肯定要说了,这算什么 AOP,增强的代码都是硬编码到invoke方法中的,大家稍安勿躁,我们不是已经对需要增强的对象做了增强吗。这里可以的目标对象为manWaiter,增强为System.out.println("你好");System.out.println("再见");,切点为server()方法调用。其实还是可以看做一下原始的 AOP 的。
完善的 AOP 实现
我们从初步实现的 AOP 中可以发现很多问题,比如我们不能把增强的逻辑硬编码到代码中,我们需要实现可变的增强,下面我们就解决一下这些问题,来实现一个比较完善的 AOP。 我们仍然引用上面的Waiter接口和Manwaiter实现类。 然后我们添加一个前置增强接口:
/**
* 前置增强
*/
public interface BeforeAdvice {
public void before();
}
再添加一个后置增强接口:
public interface AfterAdvice {
public void after();
}
我们把产生代理对象的代码封装为一个类:
package demo3;
import com.sun.org.apache.regexp.internal.RE;
import org.junit.After;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
/**
* ProxFactory用来生成代理对象
* 它需要所有的参数:目标对象,增强,
* Created by Yifan Jia on 2018/6/5.
*/
/**
* 1、创建代理工厂
* 2、给工厂设置目标对象、前置增强、后置增强
* 3、调用creatProxy()得到代理对象
* 4、执行代理对象方法时,先执行前置增强,然后是目标方法,最后是后置增强
*/
//其实在Spring中的AOP的动态代理实现的一个织入器也是叫做ProxyFactory
public class ProxyFactory {
private Object targetObject;//目标对象
private BeforeAdvice beforeAdvice;//前值增强
private AfterAdvice afterAdvice;//后置增强
/**
* 用来生成代理对象
* @return
*/
public Object creatProxy() {
/**
* 给出三个参数
*/
ClassLoader classLoader = this.getClass().getClassLoader();
//获取当前类型所实现的所有接口类型
Class[] interfaces = targetObject.getClass().getInterfaces();
InvocationHandler invocationHandler = new InvocationHandler() {
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
/**
* 在调用代理对象的方法时,会执行这里的内容
*/
if(beforeAdvice != null) {
beforeAdvice.before();
}
Object result = method.invoke(targetObject, args);//调用目标对象的目标方法
//执行后续增强
afterAdvice.after();
//返回目标对象的返回值
return result;
}
};
/**
* 2、得到代理对象
*/
Object proxyObject = Proxy.newProxyInstance(classLoader, interfaces, invocationHandler);
return proxyObject;
}
//get和set方法略
}
然后我们将相关的参数注入到ProxyFactory后就可以通过creatProxy()方法获取代理对象了,代码如下:
package demo3;
import org.junit.Test;
/**
* Created by Yifan Jia on 2018/6/5.
*/
public class Demo3 {
@Test
public void tset1() {
ProxyFactory proxyFactory = new ProxyFactory();//创建工厂
proxyFactory.setTargetObject(new ManWaiter());//设置目标对象
//设置前置增强
proxyFactory.setBeforeAdvice(new BeforeAdvice() {
@Override
public void before() {
System.out.println("客户你好");
}
});
//设置后置增强
proxyFactory.setAfterAdvice(new AfterAdvice() {
@Override
public void after() {
System.out.println("客户再见");
}
});
Waiter waiter = (Waiter) proxyFactory.creatProxy();
waiter.server();
}
}
结果如下:
这时候我们已经可以自定义任意的增强逻辑了,是不是很神奇。
动态代理实现 AOP 总结
通过上面的内容,我们已经通过动态代理实现了一个非常简陋的 AOP,这里的 AOP 实现还是有很多的不足之处。下面我把 Spring 中的ProxyFactory实现贴出来,大家可以研究一下 Spring 中的ProxyFactory的优势在哪里,另外,Spring 中还有其他的基于动态代理实现的织入器,ProxyFactory只是其中最基础的版本,大家有兴趣可以研究一下。
public class ProxyFactory extends ProxyCreatorSupport {
public ProxyFactory() {
}
public ProxyFactory(Object target) {
Assert.notNull(target, "Target object must not be null");
this.setInterfaces(ClassUtils.getAllInterfaces(target));
this.setTarget(target);
}
public ProxyFactory(Class... proxyInterfaces) {
this.setInterfaces(proxyInterfaces);
}
public ProxyFactory(Class<?> proxyInterface, Interceptor interceptor) {
this.addInterface(proxyInterface);
this.addAdvice(interceptor);
}
public ProxyFactory(Class<?> proxyInterface, TargetSource targetSource) {
this.addInterface(proxyInterface);
this.setTargetSource(targetSource);
}
public Object getProxy() {
return this.createAopProxy().getProxy();
}
public Object getProxy(ClassLoader classLoader) {
return this.createAopProxy().getProxy(classLoader);
}
public static <T> T getProxy(Class<T> proxyInterface, Interceptor interceptor) {
return (new ProxyFactory(proxyInterface, interceptor)).getProxy();
}
public static <T> T getProxy(Class<T> proxyInterface, TargetSource targetSource) {
return (new ProxyFactory(proxyInterface, targetSource)).getProxy();
}
public static Object getProxy(TargetSource targetSource) {
if(targetSource.getTargetClass() == null) {
throw new IllegalArgumentException("Cannot create class proxy for TargetSource with null target class");
} else {
ProxyFactory proxyFactory = new ProxyFactory();
proxyFactory.setTargetSource(targetSource);
proxyFactory.setProxyTargetClass(true);
return proxyFactory.getProxy();
}
}
}
|
__label__pos
| 0.988781 |
(* Title: Tools/Code/code_runtime.ML Author: Florian Haftmann, TU Muenchen Runtime services building on code generation into implementation language SML. *) signature CODE_RUNTIME = sig val target: string val eval: string option -> (Proof.context -> unit -> 'a) * ((unit -> 'a) -> Proof.context -> Proof.context) * string -> ((term -> term) -> 'a -> 'a) -> theory -> term -> string list -> 'a val setup: theory -> theory end; structure Code_Runtime : CODE_RUNTIME = struct (** generic **) val target = "Eval"; fun evaluation_code thy module_name tycos consts = let val (consts', (naming, program)) = Code_Thingol.consts_program thy false consts; val tycos' = map (the o Code_Thingol.lookup_tyco naming) tycos; val (ml_code, target_names) = Code_Target.produce_code_for thy target NONE module_name [] naming program (consts' @ tycos'); val (consts'', tycos'') = chop (length consts') target_names; val consts_map = map2 (fn const => fn NONE => error ("Constant " ^ (quote o Code.string_of_const thy) const ^ "\nhas a user-defined serialization") | SOME const'' => (const, const'')) consts consts'' val tycos_map = map2 (fn tyco => fn NONE => error ("Type " ^ (quote o Sign.extern_type thy) tyco ^ "\nhas a user-defined serialization") | SOME tyco'' => (tyco, tyco'')) tycos tycos''; in (ml_code, (tycos_map, consts_map)) end; (** evaluation **) fun eval some_target cookie postproc thy t args = let val ctxt = ProofContext.init_global thy; fun evaluator naming program ((_, (_, ty)), t) deps = let val _ = if Code_Thingol.contains_dictvar t then error "Term to be evaluated contains free dictionaries" else (); val value_name = "Value.VALUE.value" val program' = program |> Graph.new_node (value_name, Code_Thingol.Fun (Term.dummy_patternN, ((([], ty), [(([], t), (NONE, true))]), NONE))) |> fold (curry Graph.add_edge value_name) deps; val (program_code, [SOME value_name']) = Code_Target.produce_code_for thy (the_default target some_target) NONE "Code" [] naming program' [value_name]; val value_code = space_implode " " (value_name' :: map (enclose "(" ")") args); in ML_Context.value ctxt cookie (program_code, value_code) end; in Code_Thingol.dynamic_eval_value thy postproc evaluator t end; (** instrumentalization by antiquotation **) local structure Code_Antiq_Data = Proof_Data ( type T = (string list * string list) * (bool * (string * ((string * string) list * (string * string) list)) lazy); fun init _ = (([], []), (true, (Lazy.value ("", ([], []))))); ); val is_first_occ = fst o snd o Code_Antiq_Data.get; fun register_code new_tycos new_consts ctxt = let val ((tycos, consts), _) = Code_Antiq_Data.get ctxt; val tycos' = fold (insert (op =)) new_tycos tycos; val consts' = fold (insert (op =)) new_consts consts; val acc_code = Lazy.lazy (fn () => evaluation_code (ProofContext.theory_of ctxt) "Code" tycos' consts'); in Code_Antiq_Data.put ((tycos', consts'), (false, acc_code)) ctxt end; fun register_const const = register_code [] [const]; fun register_datatype tyco constrs = register_code [tyco] constrs; fun print_const const all_struct_name tycos_map consts_map = (Long_Name.append all_struct_name o the o AList.lookup (op =) consts_map) const; fun print_code is_first print_it ctxt = let val (_, (_, acc_code)) = Code_Antiq_Data.get ctxt; val (ml_code, (tycos_map, consts_map)) = Lazy.force acc_code; val ml_code = if is_first then ml_code else ""; val all_struct_name = "Isabelle"; in (ml_code, print_it all_struct_name tycos_map consts_map) end; in fun ml_code_antiq raw_const background = let val const = Code.check_const (ProofContext.theory_of background) raw_const; val is_first = is_first_occ background; val background' = register_const const background; in (print_code is_first (print_const const), background') end; end; (*local*) (** reflection support **) fun check_datatype thy tyco consts = let val constrs = (map (fst o fst) o snd o Code.get_type thy) tyco; val missing_constrs = subtract (op =) consts constrs; val _ = if null missing_constrs then [] else error ("Missing constructor(s) " ^ commas (map quote missing_constrs) ^ " for datatype " ^ quote tyco); val false_constrs = subtract (op =) constrs consts; val _ = if null false_constrs then [] else error ("Non-constructor(s) " ^ commas (map quote false_constrs) ^ " for datatype " ^ quote tyco); in () end; fun add_eval_tyco (tyco, tyco') thy = let val k = Sign.arity_number thy tyco; fun pr pr' fxy [] = tyco' | pr pr' fxy [ty] = Code_Printer.concat [pr' Code_Printer.BR ty, tyco'] | pr pr' fxy tys = Code_Printer.concat [Code_Printer.enum "," "(" ")" (map (pr' Code_Printer.BR) tys), tyco'] in thy |> Code_Target.add_tyco_syntax target tyco (SOME (k, pr)) end; fun add_eval_constr (const, const') thy = let val k = Code.args_number thy const; fun pr pr' fxy ts = Code_Printer.brackify fxy (const' :: the_list (Code_Printer.tuplify pr' Code_Printer.BR (map fst ts))); in thy |> Code_Target.add_const_syntax target const (SOME (Code_Printer.simple_const_syntax (k, pr))) end; fun add_eval_const (const, const') = Code_Target.add_const_syntax target const (SOME (Code_Printer.simple_const_syntax (0, (K o K o K) const'))); fun process (code_body, (tyco_map, (constr_map, const_map))) module_name NONE thy = thy |> Code_Target.add_reserved target module_name |> Context.theory_map (ML_Context.exec (fn () => ML_Context.eval_text true Position.none code_body)) |> fold (add_eval_tyco o apsnd Code_Printer.str) tyco_map |> fold (add_eval_constr o apsnd Code_Printer.str) constr_map |> fold (add_eval_const o apsnd Code_Printer.str) const_map | process (code_body, _) _ (SOME file_name) thy = let val preamble = "(* Generated from " ^ Path.implode (Thy_Header.thy_path (Context.theory_name thy)) ^ "; DO NOT EDIT! *)"; val _ = File.write (Path.explode file_name) (preamble ^ "\n\n" ^ code_body); in thy end; fun gen_code_reflect prep_type prep_const raw_datatypes raw_functions module_name some_file thy = let val datatypes = map (fn (raw_tyco, raw_cos) => (prep_type thy raw_tyco, map (prep_const thy) raw_cos)) raw_datatypes; val _ = map (uncurry (check_datatype thy)) datatypes; val tycos = map fst datatypes; val constrs = maps snd datatypes; val functions = map (prep_const thy) raw_functions; val result = evaluation_code thy module_name tycos (constrs @ functions) |> (apsnd o apsnd) (chop (length constrs)); in thy |> process result module_name some_file end; val code_reflect = gen_code_reflect Code_Target.cert_tyco Code.check_const; val code_reflect_cmd = gen_code_reflect Code_Target.read_tyco Code.read_const; (** Isar setup **) val _ = ML_Context.add_antiq "code" (fn _ => Args.term >> ml_code_antiq); local val datatypesK = "datatypes"; val functionsK = "functions"; val fileK = "file"; val andK = "and" val _ = List.app Keyword.keyword [datatypesK, functionsK]; val parse_datatype = Parse.name --| Parse.$$$ "=" -- (Parse.term ::: (Scan.repeat (Parse.$$$ "|" |-- Parse.term))); in val _ = Outer_Syntax.command "code_reflect" "enrich runtime environment with generated code" Keyword.thy_decl (Parse.name -- Scan.optional (Parse.$$$ datatypesK |-- (parse_datatype ::: Scan.repeat (Parse.$$$ andK |-- parse_datatype))) [] -- Scan.optional (Parse.$$$ functionsK |-- Scan.repeat1 Parse.name) [] -- Scan.option (Parse.$$$ fileK |-- Parse.name) >> (fn (((module_name, raw_datatypes), raw_functions), some_file) => Toplevel.theory (code_reflect_cmd raw_datatypes raw_functions module_name some_file))); end; (*local*) val setup = Code_Target.extend_target (target, (Code_ML.target_SML, K I)); end; (*struct*)
|
__label__pos
| 0.994321 |
Skip to main content
The 2024 Developer Survey results are live! See the results
Paul Sigonoso's user avatar
Paul Sigonoso's user avatar
Paul Sigonoso's user avatar
Paul Sigonoso
Paul
• Member for 7 years, 11 months
• Last seen more than 4 years ago
• France
15 votes
4 answers
46k views
"GetPassWarning: Can not control echo on the terminal" when running from IDLE
10 votes
4 answers
586 views
How "generate " multiple TCP clients using Threads instead of opening multiple instances of the terminal and run the script several times?
9 votes
1 answer
797 views
Concurrency with subprocess module. How can I do this?
2 votes
2 answers
4k views
Program for ARP scanning
2 votes
1 answer
233 views
Creating a chat (client) program. How can I add simultaneous conversation?
2 votes
1 answer
388 views
Porting a python 2 code to Python 3: ICMP Scan with errors
1 vote
1 answer
310 views
Difference between UDP server and UDP client: sock.bind((host, port)) is on the client or server side?
0 votes
0 answers
655 views
How to develop an UDP port scanner? It is not working very well
0 votes
2 answers
11k views
How to Create a port scanner TCP SYN using the method (TCP SYN )?
0 votes
3 answers
735 views
Program to transform a string in hexadecimal?
-1 votes
1 answer
60 views
How to make a program to sleep until my Thread ends? [closed]
|
__label__pos
| 0.999957 |
Users and processes without root or administrator privileges within virtual machines have the capability to connect or disconnect devices, such as network adaptors and CD-ROM drives, as well as the ability to modify device settings. To increase virtual machine security, remove these devices. If you do not want to permanently remove a device, you can prevent a virtual machine user or process from connecting or disconnecting the device from within the guest operating system.
Before you begin
Turn off the virtual machine.
Procedure
1. Log in to a vCenter Server system using the vSphere Client and select the virtual machine.
2. On the Summary tab, click Edit Settings.
3. Select Options > Advanced > General and click Configuration Parameters.
4. Add or edit the following parameters.
Name
Value
isolation.device.connectable.disable
true
isolation.device.edit.disable
true
These options override any settings made in the guest operating system's VMware Tools control panel.
5. Click OK to close the Configuration Parameters dialog box, and click OK again to close the Virtual Machine Properties dialog box.
6. (Optional) : If you made changes to the configuration parameters, restart the virtual machine.
|
__label__pos
| 0.88207 |
What is Software Piracy?
Tue. 06 Feb. 20241233
Software piracy refers to the unauthorized copying of software. What is software piracy and its consequences? Let's go!
What is Software Piracy?
Software piracy is the unauthorized reproduction, distribution, or use of copyrighted software. This practice can take various forms, ranging from the duplication of software to its unauthorized distribution, installation, or use on personal computers or networks.
Software piracy not only violates intellectual property laws but also poses significant risks and consequences for individuals, businesses, and the software industry at large.
What is Software Piracy?
How to Secure a PDF on Windows
How to Secure a PDF on Mac
Types of Software Piracy
The Impact of Software Piracy
Software Piracy: Best Practices
Software Piracy: FAQ
What is Software Piracy?
Software piracy is the unauthorized copying, distribution, or use of copyrighted software.
It encompasses various activities, such as making illegal copies for sale or personal use, downloading software from unauthorized websites, and using software beyond the terms of the license agreement.
This illegal practice undermines the software industry, causing significant financial losses, security risks for users, and hindering innovation. It's a global issue that affects creators, businesses, and consumers, prompting the need for stringent copyright laws, awareness, and technological measures to combat it.
How to Secure a PDF on Windows
Step 1: Starting Up
First, open PDF Reader Pro on your Windows system. You can either drag and drop your PDF file directly onto the main interface or select "Open File" to browse and open your document.
pdfimage
Navigate, edit, and
convert PDFs like a Pro
with PDF Reader Pro
check
Easily customize PDFs: Edit text, images,
pages, and annotations with ease.
check
Advanced PDF conversion: Supports
multi-format document processing with OCR.
check
Seamless workflow on Mac,
Windows, iOS, and Android.
Don’t let PDFs slow your efficiency DOWNLOAD NOW
Step 2: Accessing Security Settings
Next, navigate to the "Editor" within the tools menu. Go to the "Security" option and choose "Set Passwords" to unlock the document security settings.
Windows PDF Reader Pro set password
Step 3: Encrypting Your Document
Finally, secure your document by setting a password. Type in your desired password and hit "Encrypt." This ensures that only individuals with the password can access and view your PDF, safeguarding your sensitive information.
Windows PDF Reader Pro Encrypt
How to Secure a PDF on Mac
Enhance the security of your sensitive PDF files on your Mac with PDF Reader Pro's intuitive encryption tools. Here's how to easily manage access rights to your documents:
pdfimage
Navigate, edit, and
convert PDFs like a Pro
with PDF Reader Pro
check
Easily customize PDFs: Edit text, images,
pages, and annotations with ease.
check
Advanced PDF conversion: Supports
multi-format document processing with OCR.
check
Seamless workflow on Mac,
Windows, iOS, and Android.
Don’t let PDFs slow your efficiency DOWNLOAD NOW
Step 1: Initiating Document Opening
Begin by launching PDF Reader Pro on your Mac. To open your confidential document, either click on "Open File" to select it from your files or simply drag and drop it into the PDF Reader Pro's home interface.
PDF Reader Pro Mac open file
Step 2: Accessing Encryption Settings
Once your document is open, head to the "Editor" on the menu bar. Within this section, find and select "Security," then choose "Set Passwords" from the options available. This action leads you to the security features for document protection.
PDF Reader Pro Mac security
Step 3: Implementing Document Security
In the final step, secure your document by activating the "Require Password" option. Enter the password you wish to use for your document and confirm by clicking on the "Encrypt" button. This process ensures that your document is protected and can only be accessed by entering the specified password.
PDF Reader Pro Mac
Types of Software Piracy
• Counterfeiting: This involves the creation and distribution of unauthorized copies of software that are often presented as legitimate versions. Counterfeit software may be sold online or through physical markets, misleading consumers about its authenticity.
• Internet Piracy: The internet has facilitated the easy sharing and downloading of copyrighted software without proper licenses. This includes software available through peer-to-peer networks, illegal download sites, or file-sharing platforms.
• End-User Piracy: Sometimes, legitimate users of software might install and use software beyond the scope of the license agreement. This includes installing software on more computers than the license permits or sharing it with others not covered under the license.
• Client-Server Overuse: This form of piracy occurs when more individuals are using the software at the same time than the license permits, common in workplace environments where software is installed on a network.
• Hard-Disk Loading: This practice involves installing unauthorized copies of software on computers being sold, offering customers bundled software without proper licenses.
The Impact of Software Piracy
Software piracy has far-reaching consequences, affecting everyone from individual creators to large corporations:
• Economic Losses: The software industry loses billions of dollars annually due to piracy, impacting revenue, investment in research and development, and employment opportunities within the sector.
• Security Risks: Pirated software often lacks official support and updates, making it vulnerable to malware, viruses, and other security threats. This can lead to significant data loss or theft for users.
• Legal and Financial Penalties: Individuals and organizations caught using pirated software may face hefty fines, legal action, and damage to reputation.
• Harm to Innovation: The loss of revenue due to piracy can reduce the resources available for developing new, innovative software products, ultimately affecting the quality and variety of software available to consumers.
Software Piracy: Best Practices
Illegal Software and Online Piracy
The proliferation of illegal software downloads and online piracy poses significant risks to software users, including exposure to malware and legal consequences. Encouraging the use of licensed software and educating about the dangers of online piracy are essential steps in reducing these practices. With over 10 sources confirming the widespread nature of online piracy, it's clear that concerted efforts are required to address this issue.
Confronting Software Pirates and Protecting Copyright Holders
Engaging with legal mechanisms to protect intellectual property rights is crucial. Copyright holders, supported by 8 sources, can take legal action against software pirates, who, as identified by 9 sources, often distribute unlicensed software without regard for the law. Implementing robust copyright management systems can deter illegal copying, a concern highlighted by 8 sources.
Combatting Commercial Software Piracy
Commercial software piracy, including the unauthorized distribution of software programs through online auction sites or other channels, undermines the software industry. Education about the consequences of software piracy, cited by 4 sources, and the enforcement of end-user license agreements can help protect original software and its creators.
The Role of Software Licenses and Licensing Agreements
Understanding and adhering to software licenses and licensing agreements is vital in preventing software piracy. These legal documents, which dictate the permissible use of a piece of software, can help curb the unauthorized copying of software. Moreover, software vendors and publishers must ensure that their licensing terms are clear and enforceable to prevent misunderstandings that could lead to copyright violations.
Technological Measures and Software Security
Employing technological solutions such as software licensing solutions, security patches, and regular updates can enhance software security and deter piracy. Proprietary software, often targeted by pirates, requires robust protection mechanisms to prevent unauthorized access and distribution.
Legal and Financial Implications
The consequences of engaging in or facilitating software piracy are severe, including criminal penalties and monetary damages. Statutory damages, intended to compensate copyright holders for losses, underscore the legal risks associated with copyright infringements.
Encouraging Ethical Use
Promoting the ethical use of software through education and awareness campaigns can change user behavior. Highlighting the risks of software piracy, including potential online security threats and the impact on software quality and innovation, encourages users to support original products and software vendors.
Software Piracy: FAQ
What are the legal ramifications of software piracy?
Software piracy carries significant legal ramifications, including potential civil and criminal penalties. Individuals or organizations found guilty of copyright infringement can face fines, damages, and in some cases, imprisonment. Legal repercussions are meant to deter piracy by imposing consequences that underscore the seriousness of copyright violations.
Can using pirated software expose me to malware attacks?
Yes, malware attacks are a common risk associated with using pirated software. Malware infection can occur when downloading software from untrustworthy sources, often leading to data theft, system damage, and privacy breaches. Pirated software typically lacks the security updates and support provided by legitimate versions, making them vulnerable to malicious software.
What is a manufacturing subsidiary’s role in combating software piracy?
A manufacturing subsidiary, especially in the context of producing and distributing software, has a crucial role in implementing anti-piracy measures. This includes ensuring that all software produced and sold has the correct license and is free from illegal copies. By controlling the production process, these subsidiaries can significantly reduce the availability of fake copies in the market.
What does having the correct license mean for software use?
Having the correct license for software use means that the user or organization has legally obtained the rights to install, use, or distribute the software according to the terms set by the legal owner or copyright holder. This ensures compliance with copyright laws and supports ethical use and distribution of software.
What are the consequences of making extra copies of software without authorization?
Making extra copies of software without the proper authorization from the copyright holder is a form of copyright infringement. This activity can lead to legal action against the individuals or entities involved, resulting in fines, damages, and other legal repercussions aimed at compensating the copyright owner for lost revenues and deterring further piracy.
How do fake copies of software harm licensed users and the industry?
Fake copies of software undermine the value of legitimate software investments made by licensed users and the industry. They lead to lost revenues for software companies, which can impact the development of new products and support for existing ones. Moreover, they can create unfair competition and degrade the overall quality and security of software in the market.
What are anti-piracy software and software copy protection?
Anti-piracy software and software copy protection are technologies designed to prevent the unauthorized copying and distribution of software. These measures include digital rights management (DRM), encryption, and licensing controls that help ensure only licensed users can access and use the software. They play a critical role in safeguarding the intellectual property rights of software creators.
How can software licensing services help combat software piracy?
Software licensing services provide mechanisms for managing and enforcing the use of software according to the terms of a licensing agreement. By facilitating the distribution of licenses, tracking usage, and ensuring compliance, these services help prevent unauthorized use and distribution, thereby reducing the incidence of software piracy.
Are there ethical considerations in the decision-making process regarding software piracy?
Yes, ethical decision-making plays a significant role in addressing software piracy. Individuals and organizations must consider the moral implications of their actions, including respect for intellectual property rights and the impact of piracy on the software industry and wider community. Ethical considerations encourage responsible behavior and compliance with copyright laws.
Get Started with PDF Reader Pro Today!
|
__label__pos
| 0.986926 |
Eclipse
Eclipse Zest Plugin Tutorial
Eclipse Zest is a visualization toolkit for graphs. This tutorial explains how to directly create a Zest Graph in Eclipse IDE and how to use the JFace abstraction.
1. Introduction
Eclipse Zest is a visualization toolkit for graphs. It is based on SWT / Draw2D to provide layout locations for a set of entities and relationships. Zest supports the viewer concept from JFace Viewers and therefore allows to separate the model from the graphical representation of the model. This article assumes that developers are already familiar with Eclipse RCP or Eclipse Plugin development.
Fig. 1: Eclipse Zest Overview Diagram
Fig. 1: Eclipse Zest Overview Diagram
1.1 Zest Components
Eclipse Zest has the following components:
• Graph Node: Node in the graph with the properties.
• Graph Connections: Arrow / Edge of the graph which connections to two nodes.
• Graph Container: Use for, a graph within a graph.
• Graph: Holds the other elements (nodes, connections, container).
1.2 Zest Layout Managers
Eclipse Zest provides the graph layout managers. A graph layout manager determines how the nodes (and the arrows) of a graph are arranged on the screen. The following layout managers are provided:
Layout ManagerDescription
TreeLayoutAlgorithmGraph is displayed in the form of a vertical tree.
HorizontalTreeLayoutAlgorithmSimilar to TreeLayoutAlgorithm but the layout is horizontal.
RadialLayoutAlgorithmRoot is in the center, the others nodes are placed on this node.
GridLayoutAlgorithmA Layout algorithm that takes advantage of the positions and directions of connection points. Lays out all the nodes in a grid like a pattern based on the structure of the diagram.
SpringLayoutAlgorithmLayout the graph so that all connections should have the same length and that the edges overlap minimally.
HorizontalShiftMoves overlapping nodes to the right.
CompositeLayoutAlgorithmCombines other layout algorithms, for example HorizontalShift can be the second layout algorithm to move nodes which were still overlapping with another algorithm if used.
1.3 Zest Filters
Developers can also define filters (org.eclipse.zest.layouts.Filter) on the layout managers via the setFilter(filter) method. This defines which nodes and connections should be displayed. The filter receives an LayoutItem, the actual graph element can be received with the getGraphData() method.
1.4 Zest Installation
Developers can use the Eclipse update manager to install the Graphical Editing Framework Zest Visualization Toolkit. You may have to un-flag Group items by category to see Eclipse Zest.
Now, open up the Eclipse IDE and let’s start building the application!
2. Eclipse Zest Plugin Tutorial
2.1 Tools Used
We are using Eclipse Kepler SR2, JDK 1.7 and Eclipse Zest plugin to create the visualization components. Having said that, we have tested the code against JDK 1.8 and it works well.
2.2 Project Structure
Firstly, let’s review the final project structure, in case you are confused about where you should create the corresponding files or folder later!
Fig. 2: Zest Plugin Sample Application Structure
Fig. 2: Zest Plugin Sample Application Structure
3. Application Building
Below are the steps involved in developing this application.
3.1 Getting Started
Create a new Eclipse RCP application com.jcg.zest.first and use the Eclipse RCP with a view as a template. Add org.eclipse.zest.core and org.eclipse.zest.layouts as dependencies to the MANIFEST.MF. Add the following code to View.java as this code creates a simple graph and connects its elements.
View.java
package com.jcg.zest.first;
import org.eclipse.swt.SWT;
import org.eclipse.swt.events.SelectionAdapter;
import org.eclipse.swt.events.SelectionEvent;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.ui.part.ViewPart;
import org.eclipse.zest.core.widgets.Graph;
import org.eclipse.zest.core.widgets.GraphConnection;
import org.eclipse.zest.core.widgets.GraphNode;
import org.eclipse.zest.core.widgets.ZestStyles;
import org.eclipse.zest.layouts.LayoutStyles;
import org.eclipse.zest.layouts.algorithms.SpringLayoutAlgorithm;
import org.eclipse.zest.layouts.algorithms.TreeLayoutAlgorithm;
public class View extends ViewPart {
private Graph graphObj;
private int layoutObj = 1;
public static final String ID = "com.jcg.zest.first.view";
public void createPartControl(Composite parent) {
// Graph Will Hold All Other Objects
graphObj = new Graph(parent, SWT.NONE);
// Adding A Few Graph Nodes
GraphNode node_1 = new GraphNode(graphObj, SWT.NONE, "Jim");
GraphNode node_2 = new GraphNode(graphObj, SWT.NONE, "Jack");
GraphNode node_3 = new GraphNode(graphObj, SWT.NONE, "Joe");
GraphNode node_4 = new GraphNode(graphObj, SWT.NONE, "Bill");
// Setting Up A Directed Connection
new GraphConnection(graphObj, ZestStyles.CONNECTIONS_DIRECTED, node_1, node_2);
// Dotted Graphical Connection
new GraphConnection(graphObj, ZestStyles.CONNECTIONS_DOT, node_2, node_3);
// Standard Connection
new GraphConnection(graphObj, SWT.NONE, node_3, node_1);
// Change Line Color and Line Width
GraphConnection graphConnection = new GraphConnection(graphObj, SWT.NONE, node_1, node_4);
graphConnection.changeLineColor(parent.getDisplay().getSystemColor(SWT.COLOR_GREEN));
// Setting Up A Dummy Text
graphConnection.setText("This is a text");
graphConnection.setHighlightColor(parent.getDisplay().getSystemColor(SWT.COLOR_RED));
graphConnection.setLineWidth(3);
graphObj.setLayoutAlgorithm(new SpringLayoutAlgorithm(LayoutStyles.NO_LAYOUT_NODE_RESIZING), true);
// Adding A Selection Listener On Graph Object
graphObj.addSelectionListener(new SelectionAdapter() {
public void widgetSelected(SelectionEvent selectionEventObj) {
System.out.println(selectionEventObj);
}
});
}
public void setLayoutManager() {
switch (layoutObj) {
case 1:
graphObj.setLayoutAlgorithm(new TreeLayoutAlgorithm(LayoutStyles.NO_LAYOUT_NODE_RESIZING), true);
layoutObj++;
break;
case 2:
graphObj.setLayoutAlgorithm(new SpringLayoutAlgorithm(LayoutStyles.NO_LAYOUT_NODE_RESIZING), true);
layoutObj = 1;
break;
}
}
// Passing the Focus Request To The Viewer's Control.
public void setFocus() { }
}
Execute the application and the below name graph will appear on the View console.
Fig. 3: Application (View.java) Output
Fig. 3: Application (View.java) Output
3.2 Layout Manager Selection via a command
Create a command with the following default handler: com.jcg.zest.first.handler.ChangeLayout which will change the layout for the graph. Assign the command to the menu and add the following code to it.
ChangeLayout.java
package com.jcg.zest.first.handler;
import org.eclipse.core.commands.AbstractHandler;
import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.ui.IViewPart;
import org.eclipse.ui.handlers.HandlerUtil;
public class ChangeLayout extends AbstractHandler {
@Override
public Object execute(ExecutionEvent event) throws ExecutionException {
IViewPart findViewObj = HandlerUtil.getActiveWorkbenchWindow(event).getActivePage().findView("com.jcg.zest.first.view");
// Setting The View Object To Find View
View viewObj = (View) findViewObj;
// Chaning The View Layout By Selecting A Command
viewObj.setLayoutManager();
return null;
}
}
Execute the application and if you select your command the layout of your view should change.
4. Eclipse Zest and JFace
JFace provides viewers to encapsulate the data from the presentation. A JFace viewer requires a content provider and a label provider. Eclipse Zest provides as a viewer the class GraphViewer. Content provider in Eclipse Zest is either based on the connections or on the nodes.
Standard Zest Content providers are:
Content ProviderDescription
IGraphContentProviderIt is based on the connections and the connections contain the information which nodes they refer to. This interface cannot display nodes without connections.
IGraphEntityContentProviderBased on the Node which contains the information about which relationship they have. These relationships are available in the label provider as EntityConnectionData objects.
IGraphEntityRelationshipContentProviderNode based, the content provider defines getRelationShips(sourceNode, destinationNode) which determines the connections. The advantages compared with IGraphEntityContentProvider is that you decide which objects you return.
A label provider Eclipse Zest can use the standard JFace interface ILabelProvider (implemented by the class LabelProvider) or the Zest specific IEntityStyleProvider.
4.1 Eclipse Zest and JFace Example
4.1.1 Project Creation
Create a new RCP application com.jcg.zest.jface. Use the RCP application with a view as a template and add the zest dependencies to your MANIFEST.MF. Do remember, to change the Perspective.java to the following: (we do not want a stand-alone view).
Perspective.java
package com.jcg.zest.jface;
import org.eclipse.ui.IPageLayout;
import org.eclipse.ui.IPerspectiveFactory;
public class Perspective implements IPerspectiveFactory {
public void createInitialLayout(IPageLayout pageLayoutObj) {
String editorAreaBoj = pageLayoutObj.getEditorArea();
pageLayoutObj.setEditorAreaVisible(false);
pageLayoutObj.setFixed(true);
pageLayoutObj.addView(View.ID, IPageLayout.LEFT, 1.0f, editorAreaBoj);
}
}
4.1.2 Creating a Model & a POJO Class
Create the model class and add the following code to it:
MyNode.java
package com.jcg.zest.jface.model;
import java.util.ArrayList;
import java.util.List;
public class MyNode {
private final String id;
private final String name;
private List connections;
public MyNode(String id, String name) {
this.id = id;
this.name = name;
this.connections = new ArrayList();
}
public String getId() {
return id;
}
public String getName() {
return name;
}
public List getConnectedTo() {
return connections;
}
}
Note: The model can be anything as long as developers can logically convert it into a connected graph.
Let’s create a POJO class now and add the following code to it:
MyConnection.java
package com.jcg.zest.jface.model;
public class MyConnection {
final String id;
final String label;
final MyNode source;
final MyNode destination;
public MyConnection(String id, String label, MyNode source, MyNode destination) {
this.id = id;
this.label = label;
this.source = source;
this.destination = destination;
}
public String getLabel() {
return label;
}
public MyNode getSource() {
return source;
}
public MyNode getDestination() {
return destination;
}
}
4.1.3 Creating a Data Model Class
We are building this class which provides an instance of the data model. Add the following code to it:
NodeModelContentProvider.java
package com.jcg.zest.jface.model;
import java.util.ArrayList;
import java.util.List;
public class NodeModelContentProvider {
private List connections;
private List nodes;
public NodeModelContentProvider() {
nodes = new ArrayList();
MyNode node = new MyNode("1", "Hamburg");
nodes.add(node);
node = new MyNode("2", "Frankfurt");
nodes.add(node);
node = new MyNode("3", "Berlin");
nodes.add(node);
node = new MyNode("4", "Munich");
nodes.add(node);
node = new MyNode("5", "Eppelheim");
nodes.add(node);
node = new MyNode("6", "Ahrensboek");
nodes.add(node);
connections = new ArrayList();
MyConnection connect = new MyConnection("1", "1", nodes.get(0),nodes.get(1));
connections.add(connect);
connect = new MyConnection("2", "2", nodes.get(0), nodes.get(4));
connections.add(connect);
connect = new MyConnection("3", "3", nodes.get(2), nodes.get(1));
connections.add(connect);
connect = new MyConnection("4", "3", nodes.get(1), nodes.get(3));
connections.add(connect);
for (MyConnection connection : connections) {
connection.getSource().getConnectedTo().add(connection.getDestination());
}
}
public List getNodes() {
return nodes;
}
}
4.1.4 Creating the Providers
Here in this section, we will be creating the Zest Content Provider and Label Provider classes to setup the relationship between the nodes. This node relationship will be available as the label provider.
ZestNodeContentProvider.java
package com.jcg.zest.jface.zestviewer;
import org.eclipse.jface.viewers.ArrayContentProvider;
import org.eclipse.zest.core.viewers.IGraphEntityContentProvider;
import com.jcg.zest.jface.model.MyNode;
public class ZestNodeContentProvider extends ArrayContentProvider implements IGraphEntityContentProvider {
@Override
public Object[] getConnectedTo(Object entity) {
if (entity instanceof MyNode) {
MyNode node = (MyNode) entity;
return node.getConnectedTo().toArray();
}
throw new RuntimeException("Type Not Supported");
}
}
ZestLabelProvider.java
package com.jcg.zest.jface.zestviewer;
import org.eclipse.jface.viewers.LabelProvider;
import org.eclipse.zest.core.viewers.EntityConnectionData;
import com.jcg.zest.jface.model.MyConnection;
import com.jcg.zest.jface.model.MyNode;
public class ZestLabelProvider extends LabelProvider {
@Override
public String getText(Object element) {
if (element instanceof MyNode) {
MyNode myNode = (MyNode) element;
return myNode.getName();
}
// Not Called With The IGraphEntityContentProvider
if (element instanceof MyConnection) {
MyConnection myConnection = (MyConnection) element;
return myConnection.getLabel();
}
if (element instanceof EntityConnectionData) {
EntityConnectionData test = (EntityConnectionData) element;
return "";
}
throw new RuntimeException("Wrong type: "
+ element.getClass().toString());
}
}
4.1.5 Creating a View
The View class is used to create the graphical representation of the nodes. Add the following code to it:
View.java
package com.jcg.zest.jface;
import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.ui.IActionBars;
import org.eclipse.ui.part.ViewPart;
import org.eclipse.zest.core.viewers.AbstractZoomableViewer;
import org.eclipse.zest.core.viewers.GraphViewer;
import org.eclipse.zest.core.viewers.IZoomableWorkbenchPart;
import org.eclipse.zest.core.viewers.ZoomContributionViewItem;
import org.eclipse.zest.layouts.LayoutAlgorithm;
import org.eclipse.zest.layouts.LayoutStyles;
import org.eclipse.zest.layouts.algorithms.TreeLayoutAlgorithm;
import com.jcg.zest.jface.model.NodeModelContentProvider;
import com.jcg.zest.jface.zestviewer.ZestLabelProvider;
import com.jcg.zest.jface.zestviewer.ZestNodeContentProvider;
public class View extends ViewPart implements IZoomableWorkbenchPart {
public static final String ID = "com.jcg..zest.jface.view";
private GraphViewer viewerObj;
public void createPartControl(Composite parent) {
viewerObj = new GraphViewer(parent, SWT.BORDER);
viewerObj.setContentProvider(new ZestNodeContentProvider());
viewerObj.setLabelProvider(new ZestLabelProvider());
NodeModelContentProvider modelObj = new NodeModelContentProvider();
viewerObj.setInput(modelObj.getNodes());
LayoutAlgorithm layoutObj = setLayout();
viewerObj.setLayoutAlgorithm(layoutObj, true);
viewerObj.applyLayout();
fillToolBar();
}
private LayoutAlgorithm setLayout() {
LayoutAlgorithm selectedLayoutObj;
// selectedLayoutObj = new SpringLayoutAlgorithm(LayoutStyles.NO_LAYOUT_NODE_RESIZING);
selectedLayoutObj = new TreeLayoutAlgorithm(LayoutStyles.NO_LAYOUT_NODE_RESIZING);
// selectedLayoutObj = new GridLayoutAlgorithm(LayoutStyles.NO_LAYOUT_NODE_RESIZING);
// selectedLayoutObj = new HorizontalTreeLayoutAlgorithm(LayoutStyles.NO_LAYOUT_NODE_RESIZING);
// selectedLayoutObj = new RadialLayoutAlgorithm(LayoutStyles.NO_LAYOUT_NODE_RESIZING);
return selectedLayoutObj;
}
// Passing the Focus Request To The Viewer's Control.
public void setFocus() { }
private void fillToolBar() {
ZoomContributionViewItem toolbarZoom = new ZoomContributionViewItem(this);
IActionBars barsObj = getViewSite().getActionBars();
barsObj.getMenuManager().add(toolbarZoom);
}
@Override
public AbstractZoomableViewer getZoomableViewer() {
return viewer;
}
}
4.2 Project Demo
The result should look like the following.
Fig. 4: Zest Application Output
Fig. 4: Zest Application Output
5. Tips and Tricks
By default, the user can move the nodes in Zest plugin. To disable this, a developer has to extend the Graph. Let’s take a look at the sample code.
NonMovableGraph.java
package com.jcg.zest.movenodes.graph;
import org.eclipse.draw2d.SWTEventDispatcher;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.zest.core.widgets.Graph;
public class NonMovableGraph extends Graph {
public NonMovableGraph(Composite parent, int style) {
super(parent, style);
this.getLightweightSystem().setEventDispatcher(new SWTEventDispatcher() {
public void dispatchMouseMoved(org.eclipse.swt.events.MouseEvent me) {
// Do Nothing
}
});
}
}
That’s all for this post. Happy Learning!!
6. Conclusion
Here, in this example, we learned about the implementation of the Zest plugin in Eclipse IDE. I hope this simple reference tutorial was helpful.
7. Download the Eclipse Project
This was an example of Eclipse Zest Plugin.
Download
You can download the full source code of this example here: Eclipse Zest Example
Want to know how to develop your skillset to become a Java Rockstar?
Join our newsletter to start rocking!
To get you started we give you our best selling eBooks for FREE!
1. JPA Mini Book
2. JVM Troubleshooting Guide
3. JUnit Tutorial for Unit Testing
4. Java Annotations Tutorial
5. Java Interview Questions
6. Spring Interview Questions
7. Android UI Design
and many more ....
Receive Java & Developer job alerts in your Area
I have read and agree to the terms & conditions
Yatin
An experience full-stack engineer well versed with Core Java, Spring/Springboot, MVC, Security, AOP, Frontend (Angular & React), and cloud technologies (such as AWS, GCP, Jenkins, Docker, K8).
Subscribe
Notify of
guest
This site uses Akismet to reduce spam. Learn how your comment data is processed.
0 Comments
Inline Feedbacks
View all comments
Back to top button
|
__label__pos
| 0.921936 |
Belajar CodeIgniter Part 4: Cara Membuat Template Web Dengan CodeIgniter
Ditulis oleh Irvan Nurfazri
Selamat datang. Salam jumpa kembali di pertemuan yang ke-4 dalam sesi belajar CodeIgniter. Dalam sesi ini saya akan menjelaskan cara membuat template website sederhana dengan CodeIgniter, dan akan dijelaskan cara menggunakan teknik multiple view untuk membuat template web yang dinamis.
Caranya sama seperti menggunakan include() pada bagian header dan footer untuk penggunaan PHP native, tetapi untuk CodeIgniter kita membuatnya dengan me-load view CodeIgniter yang sudah kita pisahkan sesuai keinginan.
Misalkan, kamu dapat memisahkan bagian header, sidebar, footer dan konten untuk mencegah pengulangan penulisan syntax, hal ini dapat memudahkan kita dalam memodifikasi template website.
Baca: Cara Membuat Helper HTML CodeIgniter
Tutorial Membuat Template Website Dengan Multiple View CodeIgniter
Langkah pertama, buat controller yang menampilkan sebuah view, saya membuat controller baru dengan nama web.php dan view nya diberi nama contoh_index.php
application/controllers/web.php
Dan kode program untuk view contoh_index.php
letakkan pada application/views/contoh_index.php
Nah sekarang kita buat file css nya. Karena dalam folder installan CodeIgniter belum terdapat css, maka kita perlu membuat sendiri.
Buat folder dengan nama assets/css/ dan kemudian buat file cssnya dengan memberi nama style.css
masukan kode program dibawah ini
assets/css/style.css
Jangan lupa setting base_url() terlebih dahulu agar nanti kita dapat menghubungkan CodeIgniter dengan file css.
edit file config.php pada folder application/config/config.php dan masukan kode program dibawah ini
Sesuaikan dengan nama folder teman – teman yang di install CodeIgniter, kemudian dapat diperhatikan contoh diatas bahwa untuk menghubungkan file css dengan bantuan base_url()
Jadi hasil dari <?php echo base_url() ?>assets/css/style.css”> adalah http://localhost/irvan_gen/assets/css/style.css (sesuai kita meletakan file css nya).
Karena disini kita menggunakan base_url() CodeIgniter, maka kita perlu mengaktifkan helper url seperti yang sudah saya buat diatas, pada controller web.php dan saya mengaktifkan helper url pada function construct.
Nah untuk melihat hasilnya seperti apa, silahkan ketik alamat pada browser localhost/nama_folder/index.php/web
localhost/irvan_gen/index.php/web
Cara membuat template website dengan codeigniter - #IRVANGEN
Cara membuat template website dengan codeigniter – #IRVANGEN
Sampai disini kita sudah berhasil membuat template web sederhana, namun halaman ini belum dinamis, kita perlu membuat bagian header dan footer secara berulang pada halaman lainnya.
Kita harus memecah template ini menjadi beberapa bagian, seperti header dan footer. Berikut dibawah ini cara membuat Template Web Dinamis Dengan CodeIgniter.
Membuat Template Website Dinamis Dengan CodeIgniter
Langkah pertama buat kembali view header, disini saya memberi nama contoh_header.php dan meletakkan pada folder views
application/views/contoh_header.php
Kemudian buat view footer
application/views/contoh_footer.php
Untuk konten halaman utama, buat view dengan nama contoh_index.php (file sebelumnya hapus saja ganti dengan yang ini)
application/views/contoh_index.php
Sekarang template ini sudah menjadi 3 bagian, yaitu view header, view footer, dan view index. Cara pemanggilannya harus berurutan dimulai dari
1. contoh_header
2. contoh_index
3. contoh_footer
Selanjutnya kita buat controller, oh iya controller web.php sebelumnya hapus saja dan ganti dengan ini.
application/controllers/web.php
Jika dijalankan hasilnya akan sama, tapi kelebihannya kita dengan mudah bisa membuat halaman lain dan tinggal memanggil view saja yang sudah kita buat tadi untuk halaman lainnya.
Coba perhatikan lagi kode program yang ada di contoh_header disana terdapat hyperlink pada menu Beranda dan Tentang, nah disini belum sempat saya setting. Untuk menyesuaikannya kita perlu membuat halaman tentang yang kemudian akan disambungkan pada menu tersebut.
Sekarang kita buat view baru dengan nama contoh_tentang.php
application/views/contoh_tentang.php
Tambahkan lagi method Tentang pada controller web.php atau bisa replace untuk membuat halaman baru
application/contorollers/web.php
Sekarang jalankan, dan klik Tentang maka kamu akan diarahkan ke halaman Tentang dan alamatnya akan mengarah ke http://localhost/irvan_gen/index.php/
Halaman Tentang - #IRVANGEN
Halaman Tentang – #IRVANGEN
Kesimpulan
Dengan cara membuat template website multiple view atau memecah template menjadi beberapa bagian, maka akan mempermudah kita karena tidak perlu menulis syntax secara berulang.
Tinggal panggil saja misalkan, memanggil header, index, footer dan halaman lainnya. Khusus untuk halaman index tidak perlu lagi membuat halaman header dan footer dan tidak perlu juga membuat halaman about. Intinya dengan menggunakan cara ini semua halaman yang kita buat dapat digunakan tanpa menulis syntax kembali.
Penutup
Mungkin itu saja yang dapat saya sampaikan di pertemuan ini tentang Tutorial Membuat Template Website Sederhada Dengan CodeIgniter. Jika kurang mengerti atau salah dalam penulisan maupun penyampaian, silahkan diskusikan di form komentar, semoga apa yang sudah kita pelajari dapat bermanfaat bagi kita semua.
Terimakasih.
Income Search
• cara membuat website dinamis dengan codeigniter
• membuat halaman web dinamis dengan codeigniter
• pengertian multiple view
• membuat web dengan codeigniter
• cara membuat web dengan codeigniter
• template codeigniter
• template web CI
• membuat header di codeigniter
• template ci
• membangun website dengan ci
Tentang Penulis
Irvan Nurfazri
No one can bring you true happiness except Allah
Instagram : ig.com/irvan_gen
Tinggalkan Komentar
|
__label__pos
| 0.958178 |
src/Pure/thm.ML
author wenzelm
Tue Mar 17 12:10:42 2009 +0100 (2009-03-17)
changeset 30556 7be15917f3fa
parent 30554 73f8bd5f0af8
child 30711 952fdbee1b48
permissions -rw-r--r--
eq_assumption: slightly more efficient by checking (open) result of Logic.assum_problems directly;
tuned;
1 (* Title: Pure/thm.ML
2 Author: Lawrence C Paulson, Cambridge University Computer Laboratory
3 Author: Makarius
4
5 The very core of Isabelle's Meta Logic: certified types and terms,
6 derivations, theorems, framework rules (including lifting and
7 resolution), oracles.
8 *)
9
10 signature BASIC_THM =
11 sig
12 (*certified types*)
13 type ctyp
14 val rep_ctyp: ctyp ->
15 {thy_ref: theory_ref,
16 T: typ,
17 maxidx: int,
18 sorts: sort OrdList.T}
19 val theory_of_ctyp: ctyp -> theory
20 val typ_of: ctyp -> typ
21 val ctyp_of: theory -> typ -> ctyp
22
23 (*certified terms*)
24 type cterm
25 exception CTERM of string * cterm list
26 val rep_cterm: cterm ->
27 {thy_ref: theory_ref,
28 t: term,
29 T: typ,
30 maxidx: int,
31 sorts: sort OrdList.T}
32 val crep_cterm: cterm ->
33 {thy_ref: theory_ref, t: term, T: ctyp, maxidx: int, sorts: sort OrdList.T}
34 val theory_of_cterm: cterm -> theory
35 val term_of: cterm -> term
36 val cterm_of: theory -> term -> cterm
37 val ctyp_of_term: cterm -> ctyp
38
39 (*theorems*)
40 type thm
41 type conv = cterm -> thm
42 type attribute = Context.generic * thm -> Context.generic * thm
43 val rep_thm: thm ->
44 {thy_ref: theory_ref,
45 tags: Properties.T,
46 maxidx: int,
47 shyps: sort OrdList.T,
48 hyps: term OrdList.T,
49 tpairs: (term * term) list,
50 prop: term}
51 val crep_thm: thm ->
52 {thy_ref: theory_ref,
53 tags: Properties.T,
54 maxidx: int,
55 shyps: sort OrdList.T,
56 hyps: cterm OrdList.T,
57 tpairs: (cterm * cterm) list,
58 prop: cterm}
59 exception THM of string * int * thm list
60 val theory_of_thm: thm -> theory
61 val prop_of: thm -> term
62 val tpairs_of: thm -> (term * term) list
63 val concl_of: thm -> term
64 val prems_of: thm -> term list
65 val nprems_of: thm -> int
66 val cprop_of: thm -> cterm
67 val cprem_of: thm -> int -> cterm
68 val transfer: theory -> thm -> thm
69 val weaken: cterm -> thm -> thm
70 val weaken_sorts: sort list -> cterm -> cterm
71 val extra_shyps: thm -> sort list
72 val strip_shyps: thm -> thm
73
74 (*meta rules*)
75 val assume: cterm -> thm
76 val implies_intr: cterm -> thm -> thm
77 val implies_elim: thm -> thm -> thm
78 val forall_intr: cterm -> thm -> thm
79 val forall_elim: cterm -> thm -> thm
80 val reflexive: cterm -> thm
81 val symmetric: thm -> thm
82 val transitive: thm -> thm -> thm
83 val beta_conversion: bool -> conv
84 val eta_conversion: conv
85 val eta_long_conversion: conv
86 val abstract_rule: string -> cterm -> thm -> thm
87 val combination: thm -> thm -> thm
88 val equal_intr: thm -> thm -> thm
89 val equal_elim: thm -> thm -> thm
90 val flexflex_rule: thm -> thm Seq.seq
91 val generalize: string list * string list -> int -> thm -> thm
92 val instantiate: (ctyp * ctyp) list * (cterm * cterm) list -> thm -> thm
93 val instantiate_cterm: (ctyp * ctyp) list * (cterm * cterm) list -> cterm -> cterm
94 val trivial: cterm -> thm
95 val class_triv: theory -> class -> thm
96 val unconstrainT: ctyp -> thm -> thm
97 val dest_state: thm * int -> (term * term) list * term list * term * term
98 val lift_rule: cterm -> thm -> thm
99 val incr_indexes: int -> thm -> thm
100 val assumption: int -> thm -> thm Seq.seq
101 val eq_assumption: int -> thm -> thm
102 val rotate_rule: int -> int -> thm -> thm
103 val permute_prems: int -> int -> thm -> thm
104 val rename_params_rule: string list * int -> thm -> thm
105 val compose_no_flatten: bool -> thm * int -> int -> thm -> thm Seq.seq
106 val bicompose: bool -> bool * thm * int -> int -> thm -> thm Seq.seq
107 val biresolution: bool -> (bool * thm) list -> int -> thm -> thm Seq.seq
108 end;
109
110 signature THM =
111 sig
112 include BASIC_THM
113 val dest_ctyp: ctyp -> ctyp list
114 val dest_comb: cterm -> cterm * cterm
115 val dest_fun: cterm -> cterm
116 val dest_arg: cterm -> cterm
117 val dest_fun2: cterm -> cterm
118 val dest_arg1: cterm -> cterm
119 val dest_abs: string option -> cterm -> cterm * cterm
120 val adjust_maxidx_cterm: int -> cterm -> cterm
121 val capply: cterm -> cterm -> cterm
122 val cabs: cterm -> cterm -> cterm
123 val major_prem_of: thm -> term
124 val no_prems: thm -> bool
125 val terms_of_tpairs: (term * term) list -> term list
126 val maxidx_of: thm -> int
127 val maxidx_thm: thm -> int -> int
128 val hyps_of: thm -> term list
129 val full_prop_of: thm -> term
130 val axiom: theory -> string -> thm
131 val axioms_of: theory -> (string * thm) list
132 val get_name: thm -> string
133 val put_name: string -> thm -> thm
134 val get_tags: thm -> Properties.T
135 val map_tags: (Properties.T -> Properties.T) -> thm -> thm
136 val norm_proof: thm -> thm
137 val adjust_maxidx_thm: int -> thm -> thm
138 val rename_boundvars: term -> term -> thm -> thm
139 val match: cterm * cterm -> (ctyp * ctyp) list * (cterm * cterm) list
140 val first_order_match: cterm * cterm -> (ctyp * ctyp) list * (cterm * cterm) list
141 val incr_indexes_cterm: int -> cterm -> cterm
142 val varifyT: thm -> thm
143 val varifyT': (string * sort) list -> thm -> ((string * sort) * indexname) list * thm
144 val freezeT: thm -> thm
145 val future: thm future -> cterm -> thm
146 val pending_groups: thm -> Task_Queue.group list -> Task_Queue.group list
147 val proof_body_of: thm -> proof_body
148 val proof_of: thm -> proof
149 val join_proof: thm -> unit
150 val extern_oracles: theory -> xstring list
151 val add_oracle: binding * ('a -> cterm) -> theory -> (string * ('a -> thm)) * theory
152 end;
153
154 structure Thm:> THM =
155 struct
156
157 structure Pt = Proofterm;
158
159
160 (*** Certified terms and types ***)
161
162 (** certified types **)
163
164 datatype ctyp = Ctyp of
165 {thy_ref: theory_ref,
166 T: typ,
167 maxidx: int,
168 sorts: sort OrdList.T};
169
170 fun rep_ctyp (Ctyp args) = args;
171 fun theory_of_ctyp (Ctyp {thy_ref, ...}) = Theory.deref thy_ref;
172 fun typ_of (Ctyp {T, ...}) = T;
173
174 fun ctyp_of thy raw_T =
175 let
176 val T = Sign.certify_typ thy raw_T;
177 val maxidx = Term.maxidx_of_typ T;
178 val sorts = Sorts.insert_typ T [];
179 in Ctyp {thy_ref = Theory.check_thy thy, T = T, maxidx = maxidx, sorts = sorts} end;
180
181 fun dest_ctyp (Ctyp {thy_ref, T = Type (s, Ts), maxidx, sorts}) =
182 map (fn T => Ctyp {thy_ref = thy_ref, T = T, maxidx = maxidx, sorts = sorts}) Ts
183 | dest_ctyp cT = raise TYPE ("dest_ctyp", [typ_of cT], []);
184
185
186
187 (** certified terms **)
188
189 (*certified terms with checked typ, maxidx, and sorts*)
190 datatype cterm = Cterm of
191 {thy_ref: theory_ref,
192 t: term,
193 T: typ,
194 maxidx: int,
195 sorts: sort OrdList.T};
196
197 exception CTERM of string * cterm list;
198
199 fun rep_cterm (Cterm args) = args;
200
201 fun crep_cterm (Cterm {thy_ref, t, T, maxidx, sorts}) =
202 {thy_ref = thy_ref, t = t, maxidx = maxidx, sorts = sorts,
203 T = Ctyp {thy_ref = thy_ref, T = T, maxidx = maxidx, sorts = sorts}};
204
205 fun theory_of_cterm (Cterm {thy_ref, ...}) = Theory.deref thy_ref;
206 fun term_of (Cterm {t, ...}) = t;
207
208 fun ctyp_of_term (Cterm {thy_ref, T, maxidx, sorts, ...}) =
209 Ctyp {thy_ref = thy_ref, T = T, maxidx = maxidx, sorts = sorts};
210
211 fun cterm_of thy tm =
212 let
213 val (t, T, maxidx) = Sign.certify_term thy tm;
214 val sorts = Sorts.insert_term t [];
215 in Cterm {thy_ref = Theory.check_thy thy, t = t, T = T, maxidx = maxidx, sorts = sorts} end;
216
217 fun merge_thys0 (Cterm {thy_ref = r1, t = t1, ...}) (Cterm {thy_ref = r2, t = t2, ...}) =
218 Theory.merge_refs (r1, r2);
219
220
221 (* destructors *)
222
223 fun dest_comb (ct as Cterm {t = c $ a, T, thy_ref, maxidx, sorts}) =
224 let val A = Term.argument_type_of c 0 in
225 (Cterm {t = c, T = A --> T, thy_ref = thy_ref, maxidx = maxidx, sorts = sorts},
226 Cterm {t = a, T = A, thy_ref = thy_ref, maxidx = maxidx, sorts = sorts})
227 end
228 | dest_comb ct = raise CTERM ("dest_comb", [ct]);
229
230 fun dest_fun (ct as Cterm {t = c $ _, T, thy_ref, maxidx, sorts}) =
231 let val A = Term.argument_type_of c 0
232 in Cterm {t = c, T = A --> T, thy_ref = thy_ref, maxidx = maxidx, sorts = sorts} end
233 | dest_fun ct = raise CTERM ("dest_fun", [ct]);
234
235 fun dest_arg (ct as Cterm {t = c $ a, T = _, thy_ref, maxidx, sorts}) =
236 let val A = Term.argument_type_of c 0
237 in Cterm {t = a, T = A, thy_ref = thy_ref, maxidx = maxidx, sorts = sorts} end
238 | dest_arg ct = raise CTERM ("dest_arg", [ct]);
239
240
241 fun dest_fun2 (Cterm {t = c $ a $ b, T, thy_ref, maxidx, sorts}) =
242 let
243 val A = Term.argument_type_of c 0;
244 val B = Term.argument_type_of c 1;
245 in Cterm {t = c, T = A --> B --> T, thy_ref = thy_ref, maxidx = maxidx, sorts = sorts} end
246 | dest_fun2 ct = raise CTERM ("dest_fun2", [ct]);
247
248 fun dest_arg1 (Cterm {t = c $ a $ _, T = _, thy_ref, maxidx, sorts}) =
249 let val A = Term.argument_type_of c 0
250 in Cterm {t = a, T = A, thy_ref = thy_ref, maxidx = maxidx, sorts = sorts} end
251 | dest_arg1 ct = raise CTERM ("dest_arg1", [ct]);
252
253 fun dest_abs a (ct as
254 Cterm {t = Abs (x, T, t), T = Type ("fun", [_, U]), thy_ref, maxidx, sorts}) =
255 let val (y', t') = Term.dest_abs (the_default x a, T, t) in
256 (Cterm {t = Free (y', T), T = T, thy_ref = thy_ref, maxidx = maxidx, sorts = sorts},
257 Cterm {t = t', T = U, thy_ref = thy_ref, maxidx = maxidx, sorts = sorts})
258 end
259 | dest_abs _ ct = raise CTERM ("dest_abs", [ct]);
260
261
262 (* constructors *)
263
264 fun capply
265 (cf as Cterm {t = f, T = Type ("fun", [dty, rty]), maxidx = maxidx1, sorts = sorts1, ...})
266 (cx as Cterm {t = x, T, maxidx = maxidx2, sorts = sorts2, ...}) =
267 if T = dty then
268 Cterm {thy_ref = merge_thys0 cf cx,
269 t = f $ x,
270 T = rty,
271 maxidx = Int.max (maxidx1, maxidx2),
272 sorts = Sorts.union sorts1 sorts2}
273 else raise CTERM ("capply: types don't agree", [cf, cx])
274 | capply cf cx = raise CTERM ("capply: first arg is not a function", [cf, cx]);
275
276 fun cabs
277 (ct1 as Cterm {t = t1, T = T1, maxidx = maxidx1, sorts = sorts1, ...})
278 (ct2 as Cterm {t = t2, T = T2, maxidx = maxidx2, sorts = sorts2, ...}) =
279 let val t = Term.lambda t1 t2 in
280 Cterm {thy_ref = merge_thys0 ct1 ct2,
281 t = t, T = T1 --> T2,
282 maxidx = Int.max (maxidx1, maxidx2),
283 sorts = Sorts.union sorts1 sorts2}
284 end;
285
286
287 (* indexes *)
288
289 fun adjust_maxidx_cterm i (ct as Cterm {thy_ref, t, T, maxidx, sorts}) =
290 if maxidx = i then ct
291 else if maxidx < i then
292 Cterm {maxidx = i, thy_ref = thy_ref, t = t, T = T, sorts = sorts}
293 else
294 Cterm {maxidx = Int.max (maxidx_of_term t, i), thy_ref = thy_ref, t = t, T = T, sorts = sorts};
295
296 fun incr_indexes_cterm i (ct as Cterm {thy_ref, t, T, maxidx, sorts}) =
297 if i < 0 then raise CTERM ("negative increment", [ct])
298 else if i = 0 then ct
299 else Cterm {thy_ref = thy_ref, t = Logic.incr_indexes ([], i) t,
300 T = Logic.incr_tvar i T, maxidx = maxidx + i, sorts = sorts};
301
302
303 (* matching *)
304
305 local
306
307 fun gen_match match
308 (ct1 as Cterm {t = t1, sorts = sorts1, ...},
309 ct2 as Cterm {t = t2, sorts = sorts2, maxidx = maxidx2, ...}) =
310 let
311 val thy = Theory.deref (merge_thys0 ct1 ct2);
312 val (Tinsts, tinsts) = match thy (t1, t2) (Vartab.empty, Vartab.empty);
313 val sorts = Sorts.union sorts1 sorts2;
314 fun mk_cTinst ((a, i), (S, T)) =
315 (Ctyp {T = TVar ((a, i), S), thy_ref = Theory.check_thy thy, maxidx = i, sorts = sorts},
316 Ctyp {T = T, thy_ref = Theory.check_thy thy, maxidx = maxidx2, sorts = sorts});
317 fun mk_ctinst ((x, i), (T, t)) =
318 let val T = Envir.typ_subst_TVars Tinsts T in
319 (Cterm {t = Var ((x, i), T), T = T, thy_ref = Theory.check_thy thy,
320 maxidx = i, sorts = sorts},
321 Cterm {t = t, T = T, thy_ref = Theory.check_thy thy, maxidx = maxidx2, sorts = sorts})
322 end;
323 in (Vartab.fold (cons o mk_cTinst) Tinsts [], Vartab.fold (cons o mk_ctinst) tinsts []) end;
324
325 in
326
327 val match = gen_match Pattern.match;
328 val first_order_match = gen_match Pattern.first_order_match;
329
330 end;
331
332
333
334 (*** Derivations and Theorems ***)
335
336 datatype thm = Thm of
337 deriv * (*derivation*)
338 {thy_ref: theory_ref, (*dynamic reference to theory*)
339 tags: Properties.T, (*additional annotations/comments*)
340 maxidx: int, (*maximum index of any Var or TVar*)
341 shyps: sort OrdList.T, (*sort hypotheses*)
342 hyps: term OrdList.T, (*hypotheses*)
343 tpairs: (term * term) list, (*flex-flex pairs*)
344 prop: term} (*conclusion*)
345 and deriv = Deriv of
346 {max_promise: serial,
347 open_promises: (serial * thm future) OrdList.T,
348 promises: (serial * thm future) OrdList.T,
349 body: Pt.proof_body};
350
351 type conv = cterm -> thm;
352
353 (*attributes subsume any kind of rules or context modifiers*)
354 type attribute = Context.generic * thm -> Context.generic * thm;
355
356 (*errors involving theorems*)
357 exception THM of string * int * thm list;
358
359 fun rep_thm (Thm (_, args)) = args;
360
361 fun crep_thm (Thm (_, {thy_ref, tags, maxidx, shyps, hyps, tpairs, prop})) =
362 let fun cterm max t = Cterm {thy_ref = thy_ref, t = t, T = propT, maxidx = max, sorts = shyps} in
363 {thy_ref = thy_ref, tags = tags, maxidx = maxidx, shyps = shyps,
364 hyps = map (cterm ~1) hyps,
365 tpairs = map (pairself (cterm maxidx)) tpairs,
366 prop = cterm maxidx prop}
367 end;
368
369 fun terms_of_tpairs tpairs = fold_rev (fn (t, u) => cons t o cons u) tpairs [];
370
371 fun eq_tpairs ((t, u), (t', u')) = t aconv t' andalso u aconv u';
372 fun union_tpairs ts us = Library.merge eq_tpairs (ts, us);
373 val maxidx_tpairs = fold (fn (t, u) => Term.maxidx_term t #> Term.maxidx_term u);
374
375 fun attach_tpairs tpairs prop =
376 Logic.list_implies (map Logic.mk_equals tpairs, prop);
377
378 fun full_prop_of (Thm (_, {tpairs, prop, ...})) = attach_tpairs tpairs prop;
379
380 val union_hyps = OrdList.union TermOrd.fast_term_ord;
381 val insert_hyps = OrdList.insert TermOrd.fast_term_ord;
382 val remove_hyps = OrdList.remove TermOrd.fast_term_ord;
383
384
385 (* merge theories of cterms/thms -- trivial absorption only *)
386
387 fun merge_thys1 (Cterm {thy_ref = r1, ...}) (th as Thm (_, {thy_ref = r2, ...})) =
388 Theory.merge_refs (r1, r2);
389
390 fun merge_thys2 (th1 as Thm (_, {thy_ref = r1, ...})) (th2 as Thm (_, {thy_ref = r2, ...})) =
391 Theory.merge_refs (r1, r2);
392
393
394 (* basic components *)
395
396 val theory_of_thm = Theory.deref o #thy_ref o rep_thm;
397 val maxidx_of = #maxidx o rep_thm;
398 fun maxidx_thm th i = Int.max (maxidx_of th, i);
399 val hyps_of = #hyps o rep_thm;
400 val prop_of = #prop o rep_thm;
401 val tpairs_of = #tpairs o rep_thm;
402
403 val concl_of = Logic.strip_imp_concl o prop_of;
404 val prems_of = Logic.strip_imp_prems o prop_of;
405 val nprems_of = Logic.count_prems o prop_of;
406 fun no_prems th = nprems_of th = 0;
407
408 fun major_prem_of th =
409 (case prems_of th of
410 prem :: _ => Logic.strip_assums_concl prem
411 | [] => raise THM ("major_prem_of: rule with no premises", 0, [th]));
412
413 (*the statement of any thm is a cterm*)
414 fun cprop_of (Thm (_, {thy_ref, maxidx, shyps, prop, ...})) =
415 Cterm {thy_ref = thy_ref, maxidx = maxidx, T = propT, t = prop, sorts = shyps};
416
417 fun cprem_of (th as Thm (_, {thy_ref, maxidx, shyps, prop, ...})) i =
418 Cterm {thy_ref = thy_ref, maxidx = maxidx, T = propT, sorts = shyps,
419 t = Logic.nth_prem (i, prop) handle TERM _ => raise THM ("cprem_of", i, [th])};
420
421 (*explicit transfer to a super theory*)
422 fun transfer thy' thm =
423 let
424 val Thm (der, {thy_ref, tags, maxidx, shyps, hyps, tpairs, prop}) = thm;
425 val thy = Theory.deref thy_ref;
426 val _ = Theory.subthy (thy, thy') orelse raise THM ("transfer: not a super theory", 0, [thm]);
427 val is_eq = Theory.eq_thy (thy, thy');
428 val _ = Theory.check_thy thy;
429 in
430 if is_eq then thm
431 else
432 Thm (der,
433 {thy_ref = Theory.check_thy thy',
434 tags = tags,
435 maxidx = maxidx,
436 shyps = shyps,
437 hyps = hyps,
438 tpairs = tpairs,
439 prop = prop})
440 end;
441
442 (*explicit weakening: maps |- B to A |- B*)
443 fun weaken raw_ct th =
444 let
445 val ct as Cterm {t = A, T, sorts, maxidx = maxidxA, ...} = adjust_maxidx_cterm ~1 raw_ct;
446 val Thm (der, {tags, maxidx, shyps, hyps, tpairs, prop, ...}) = th;
447 in
448 if T <> propT then
449 raise THM ("weaken: assumptions must have type prop", 0, [])
450 else if maxidxA <> ~1 then
451 raise THM ("weaken: assumptions may not contain schematic variables", maxidxA, [])
452 else
453 Thm (der,
454 {thy_ref = merge_thys1 ct th,
455 tags = tags,
456 maxidx = maxidx,
457 shyps = Sorts.union sorts shyps,
458 hyps = insert_hyps A hyps,
459 tpairs = tpairs,
460 prop = prop})
461 end;
462
463 fun weaken_sorts raw_sorts ct =
464 let
465 val Cterm {thy_ref, t, T, maxidx, sorts} = ct;
466 val thy = Theory.deref thy_ref;
467 val more_sorts = Sorts.make (map (Sign.certify_sort thy) raw_sorts);
468 val sorts' = Sorts.union sorts more_sorts;
469 in Cterm {thy_ref = Theory.check_thy thy, t = t, T = T, maxidx = maxidx, sorts = sorts'} end;
470
471
472
473 (** sort contexts of theorems **)
474
475 fun present_sorts (Thm (_, {hyps, tpairs, prop, ...})) =
476 fold (fn (t, u) => Sorts.insert_term t o Sorts.insert_term u) tpairs
477 (Sorts.insert_terms hyps (Sorts.insert_term prop []));
478
479 (*remove extra sorts that are non-empty by virtue of type signature information*)
480 fun strip_shyps (thm as Thm (_, {shyps = [], ...})) = thm
481 | strip_shyps (thm as Thm (der, {thy_ref, tags, maxidx, shyps, hyps, tpairs, prop})) =
482 let
483 val thy = Theory.deref thy_ref;
484 val present = present_sorts thm;
485 val extra = Sorts.subtract present shyps;
486 val extra' =
487 Sorts.subtract (map #2 (Sign.witness_sorts thy present extra)) extra
488 |> Sorts.minimal_sorts (Sign.classes_of thy);
489 val shyps' = Sorts.union present extra';
490 in
491 Thm (der, {thy_ref = Theory.check_thy thy, tags = tags, maxidx = maxidx,
492 shyps = shyps', hyps = hyps, tpairs = tpairs, prop = prop})
493 end;
494
495 (*dangling sort constraints of a thm*)
496 fun extra_shyps (th as Thm (_, {shyps, ...})) = Sorts.subtract (present_sorts th) shyps;
497
498
499
500 (** derivations **)
501
502 fun make_deriv max_promise open_promises promises oracles thms proof =
503 Deriv {max_promise = max_promise, open_promises = open_promises, promises = promises,
504 body = PBody {oracles = oracles, thms = thms, proof = proof}};
505
506 val empty_deriv = make_deriv ~1 [] [] [] [] Pt.MinProof;
507
508
509 (* inference rules *)
510
511 fun promise_ord ((i, _), (j, _)) = int_ord (j, i);
512
513 fun deriv_rule2 f
514 (Deriv {max_promise = max1, open_promises = open_ps1, promises = ps1,
515 body = PBody {oracles = oras1, thms = thms1, proof = prf1}})
516 (Deriv {max_promise = max2, open_promises = open_ps2, promises = ps2,
517 body = PBody {oracles = oras2, thms = thms2, proof = prf2}}) =
518 let
519 val max = Int.max (max1, max2);
520 val open_ps = OrdList.union promise_ord open_ps1 open_ps2;
521 val ps = OrdList.union promise_ord ps1 ps2;
522 val oras = Pt.merge_oracles oras1 oras2;
523 val thms = Pt.merge_thms thms1 thms2;
524 val prf =
525 (case ! Pt.proofs of
526 2 => f prf1 prf2
527 | 1 => MinProof
528 | 0 => MinProof
529 | i => error ("Illegal level of detail for proof objects: " ^ string_of_int i));
530 in make_deriv max open_ps ps oras thms prf end;
531
532 fun deriv_rule1 f = deriv_rule2 (K f) empty_deriv;
533 fun deriv_rule0 prf = deriv_rule1 I (make_deriv ~1 [] [] [] [] prf);
534
535
536
537 (** Axioms **)
538
539 fun axiom theory name =
540 let
541 fun get_ax thy =
542 Symtab.lookup (Theory.axiom_table thy) name
543 |> Option.map (fn prop =>
544 let
545 val der = deriv_rule0 (Pt.axm_proof name prop);
546 val maxidx = maxidx_of_term prop;
547 val shyps = Sorts.insert_term prop [];
548 in
549 Thm (der, {thy_ref = Theory.check_thy thy, tags = [],
550 maxidx = maxidx, shyps = shyps, hyps = [], tpairs = [], prop = prop})
551 end);
552 in
553 (case get_first get_ax (theory :: Theory.ancestors_of theory) of
554 SOME thm => thm
555 | NONE => raise THEORY ("No axiom " ^ quote name, [theory]))
556 end;
557
558 (*return additional axioms of this theory node*)
559 fun axioms_of thy =
560 map (fn s => (s, axiom thy s)) (Symtab.keys (Theory.axiom_table thy));
561
562
563 (* tags *)
564
565 val get_tags = #tags o rep_thm;
566
567 fun map_tags f (Thm (der, {thy_ref, tags, maxidx, shyps, hyps, tpairs, prop})) =
568 Thm (der, {thy_ref = thy_ref, tags = f tags, maxidx = maxidx,
569 shyps = shyps, hyps = hyps, tpairs = tpairs, prop = prop});
570
571
572 fun norm_proof (Thm (der, args as {thy_ref, ...})) =
573 let
574 val thy = Theory.deref thy_ref;
575 val der' = deriv_rule1 (Pt.rew_proof thy) der;
576 val _ = Theory.check_thy thy;
577 in Thm (der', args) end;
578
579 fun adjust_maxidx_thm i (th as Thm (der, {thy_ref, tags, maxidx, shyps, hyps, tpairs, prop})) =
580 if maxidx = i then th
581 else if maxidx < i then
582 Thm (der, {maxidx = i, thy_ref = thy_ref, tags = tags, shyps = shyps,
583 hyps = hyps, tpairs = tpairs, prop = prop})
584 else
585 Thm (der, {maxidx = Int.max (maxidx_tpairs tpairs (maxidx_of_term prop), i), thy_ref = thy_ref,
586 tags = tags, shyps = shyps, hyps = hyps, tpairs = tpairs, prop = prop});
587
588
589
590 (*** Meta rules ***)
591
592 (** primitive rules **)
593
594 (*The assumption rule A |- A*)
595 fun assume raw_ct =
596 let val Cterm {thy_ref, t = prop, T, maxidx, sorts} = adjust_maxidx_cterm ~1 raw_ct in
597 if T <> propT then
598 raise THM ("assume: prop", 0, [])
599 else if maxidx <> ~1 then
600 raise THM ("assume: variables", maxidx, [])
601 else Thm (deriv_rule0 (Pt.Hyp prop),
602 {thy_ref = thy_ref,
603 tags = [],
604 maxidx = ~1,
605 shyps = sorts,
606 hyps = [prop],
607 tpairs = [],
608 prop = prop})
609 end;
610
611 (*Implication introduction
612 [A]
613 :
614 B
615 -------
616 A ==> B
617 *)
618 fun implies_intr
619 (ct as Cterm {t = A, T, maxidx = maxidxA, sorts, ...})
620 (th as Thm (der, {maxidx, hyps, shyps, tpairs, prop, ...})) =
621 if T <> propT then
622 raise THM ("implies_intr: assumptions must have type prop", 0, [th])
623 else
624 Thm (deriv_rule1 (Pt.implies_intr_proof A) der,
625 {thy_ref = merge_thys1 ct th,
626 tags = [],
627 maxidx = Int.max (maxidxA, maxidx),
628 shyps = Sorts.union sorts shyps,
629 hyps = remove_hyps A hyps,
630 tpairs = tpairs,
631 prop = Logic.mk_implies (A, prop)});
632
633
634 (*Implication elimination
635 A ==> B A
636 ------------
637 B
638 *)
639 fun implies_elim thAB thA =
640 let
641 val Thm (derA, {maxidx = maxA, hyps = hypsA, shyps = shypsA, tpairs = tpairsA,
642 prop = propA, ...}) = thA
643 and Thm (der, {maxidx, hyps, shyps, tpairs, prop, ...}) = thAB;
644 fun err () = raise THM ("implies_elim: major premise", 0, [thAB, thA]);
645 in
646 case prop of
647 Const ("==>", _) $ A $ B =>
648 if A aconv propA then
649 Thm (deriv_rule2 (curry Pt.%%) der derA,
650 {thy_ref = merge_thys2 thAB thA,
651 tags = [],
652 maxidx = Int.max (maxA, maxidx),
653 shyps = Sorts.union shypsA shyps,
654 hyps = union_hyps hypsA hyps,
655 tpairs = union_tpairs tpairsA tpairs,
656 prop = B})
657 else err ()
658 | _ => err ()
659 end;
660
661 (*Forall introduction. The Free or Var x must not be free in the hypotheses.
662 [x]
663 :
664 A
665 ------
666 !!x. A
667 *)
668 fun forall_intr
669 (ct as Cterm {t = x, T, sorts, ...})
670 (th as Thm (der, {maxidx, shyps, hyps, tpairs, prop, ...})) =
671 let
672 fun result a =
673 Thm (deriv_rule1 (Pt.forall_intr_proof x a) der,
674 {thy_ref = merge_thys1 ct th,
675 tags = [],
676 maxidx = maxidx,
677 shyps = Sorts.union sorts shyps,
678 hyps = hyps,
679 tpairs = tpairs,
680 prop = Term.all T $ Abs (a, T, abstract_over (x, prop))});
681 fun check_occs a x ts =
682 if exists (fn t => Logic.occs (x, t)) ts then
683 raise THM ("forall_intr: variable " ^ quote a ^ " free in assumptions", 0, [th])
684 else ();
685 in
686 case x of
687 Free (a, _) => (check_occs a x hyps; check_occs a x (terms_of_tpairs tpairs); result a)
688 | Var ((a, _), _) => (check_occs a x (terms_of_tpairs tpairs); result a)
689 | _ => raise THM ("forall_intr: not a variable", 0, [th])
690 end;
691
692 (*Forall elimination
693 !!x. A
694 ------
695 A[t/x]
696 *)
697 fun forall_elim
698 (ct as Cterm {t, T, maxidx = maxt, sorts, ...})
699 (th as Thm (der, {maxidx, shyps, hyps, tpairs, prop, ...})) =
700 (case prop of
701 Const ("all", Type ("fun", [Type ("fun", [qary, _]), _])) $ A =>
702 if T <> qary then
703 raise THM ("forall_elim: type mismatch", 0, [th])
704 else
705 Thm (deriv_rule1 (Pt.% o rpair (SOME t)) der,
706 {thy_ref = merge_thys1 ct th,
707 tags = [],
708 maxidx = Int.max (maxidx, maxt),
709 shyps = Sorts.union sorts shyps,
710 hyps = hyps,
711 tpairs = tpairs,
712 prop = Term.betapply (A, t)})
713 | _ => raise THM ("forall_elim: not quantified", 0, [th]));
714
715
716 (* Equality *)
717
718 (*Reflexivity
719 t == t
720 *)
721 fun reflexive (ct as Cterm {thy_ref, t, T, maxidx, sorts}) =
722 Thm (deriv_rule0 Pt.reflexive,
723 {thy_ref = thy_ref,
724 tags = [],
725 maxidx = maxidx,
726 shyps = sorts,
727 hyps = [],
728 tpairs = [],
729 prop = Logic.mk_equals (t, t)});
730
731 (*Symmetry
732 t == u
733 ------
734 u == t
735 *)
736 fun symmetric (th as Thm (der, {thy_ref, maxidx, shyps, hyps, tpairs, prop, ...})) =
737 (case prop of
738 (eq as Const ("==", Type (_, [T, _]))) $ t $ u =>
739 Thm (deriv_rule1 Pt.symmetric der,
740 {thy_ref = thy_ref,
741 tags = [],
742 maxidx = maxidx,
743 shyps = shyps,
744 hyps = hyps,
745 tpairs = tpairs,
746 prop = eq $ u $ t})
747 | _ => raise THM ("symmetric", 0, [th]));
748
749 (*Transitivity
750 t1 == u u == t2
751 ------------------
752 t1 == t2
753 *)
754 fun transitive th1 th2 =
755 let
756 val Thm (der1, {maxidx = max1, hyps = hyps1, shyps = shyps1, tpairs = tpairs1,
757 prop = prop1, ...}) = th1
758 and Thm (der2, {maxidx = max2, hyps = hyps2, shyps = shyps2, tpairs = tpairs2,
759 prop = prop2, ...}) = th2;
760 fun err msg = raise THM ("transitive: " ^ msg, 0, [th1, th2]);
761 in
762 case (prop1, prop2) of
763 ((eq as Const ("==", Type (_, [T, _]))) $ t1 $ u, Const ("==", _) $ u' $ t2) =>
764 if not (u aconv u') then err "middle term"
765 else
766 Thm (deriv_rule2 (Pt.transitive u T) der1 der2,
767 {thy_ref = merge_thys2 th1 th2,
768 tags = [],
769 maxidx = Int.max (max1, max2),
770 shyps = Sorts.union shyps1 shyps2,
771 hyps = union_hyps hyps1 hyps2,
772 tpairs = union_tpairs tpairs1 tpairs2,
773 prop = eq $ t1 $ t2})
774 | _ => err "premises"
775 end;
776
777 (*Beta-conversion
778 (%x. t)(u) == t[u/x]
779 fully beta-reduces the term if full = true
780 *)
781 fun beta_conversion full (Cterm {thy_ref, t, T, maxidx, sorts}) =
782 let val t' =
783 if full then Envir.beta_norm t
784 else
785 (case t of Abs (_, _, bodt) $ u => subst_bound (u, bodt)
786 | _ => raise THM ("beta_conversion: not a redex", 0, []));
787 in
788 Thm (deriv_rule0 Pt.reflexive,
789 {thy_ref = thy_ref,
790 tags = [],
791 maxidx = maxidx,
792 shyps = sorts,
793 hyps = [],
794 tpairs = [],
795 prop = Logic.mk_equals (t, t')})
796 end;
797
798 fun eta_conversion (Cterm {thy_ref, t, T, maxidx, sorts}) =
799 Thm (deriv_rule0 Pt.reflexive,
800 {thy_ref = thy_ref,
801 tags = [],
802 maxidx = maxidx,
803 shyps = sorts,
804 hyps = [],
805 tpairs = [],
806 prop = Logic.mk_equals (t, Envir.eta_contract t)});
807
808 fun eta_long_conversion (Cterm {thy_ref, t, T, maxidx, sorts}) =
809 Thm (deriv_rule0 Pt.reflexive,
810 {thy_ref = thy_ref,
811 tags = [],
812 maxidx = maxidx,
813 shyps = sorts,
814 hyps = [],
815 tpairs = [],
816 prop = Logic.mk_equals (t, Pattern.eta_long [] t)});
817
818 (*The abstraction rule. The Free or Var x must not be free in the hypotheses.
819 The bound variable will be named "a" (since x will be something like x320)
820 t == u
821 --------------
822 %x. t == %x. u
823 *)
824 fun abstract_rule a
825 (Cterm {t = x, T, sorts, ...})
826 (th as Thm (der, {thy_ref, maxidx, hyps, shyps, tpairs, prop, ...})) =
827 let
828 val (t, u) = Logic.dest_equals prop
829 handle TERM _ => raise THM ("abstract_rule: premise not an equality", 0, [th]);
830 val result =
831 Thm (deriv_rule1 (Pt.abstract_rule x a) der,
832 {thy_ref = thy_ref,
833 tags = [],
834 maxidx = maxidx,
835 shyps = Sorts.union sorts shyps,
836 hyps = hyps,
837 tpairs = tpairs,
838 prop = Logic.mk_equals
839 (Abs (a, T, abstract_over (x, t)), Abs (a, T, abstract_over (x, u)))});
840 fun check_occs a x ts =
841 if exists (fn t => Logic.occs (x, t)) ts then
842 raise THM ("abstract_rule: variable " ^ quote a ^ " free in assumptions", 0, [th])
843 else ();
844 in
845 case x of
846 Free (a, _) => (check_occs a x hyps; check_occs a x (terms_of_tpairs tpairs); result)
847 | Var ((a, _), _) => (check_occs a x (terms_of_tpairs tpairs); result)
848 | _ => raise THM ("abstract_rule: not a variable", 0, [th])
849 end;
850
851 (*The combination rule
852 f == g t == u
853 --------------
854 f t == g u
855 *)
856 fun combination th1 th2 =
857 let
858 val Thm (der1, {maxidx = max1, shyps = shyps1, hyps = hyps1, tpairs = tpairs1,
859 prop = prop1, ...}) = th1
860 and Thm (der2, {maxidx = max2, shyps = shyps2, hyps = hyps2, tpairs = tpairs2,
861 prop = prop2, ...}) = th2;
862 fun chktypes fT tT =
863 (case fT of
864 Type ("fun", [T1, T2]) =>
865 if T1 <> tT then
866 raise THM ("combination: types", 0, [th1, th2])
867 else ()
868 | _ => raise THM ("combination: not function type", 0, [th1, th2]));
869 in
870 case (prop1, prop2) of
871 (Const ("==", Type ("fun", [fT, _])) $ f $ g,
872 Const ("==", Type ("fun", [tT, _])) $ t $ u) =>
873 (chktypes fT tT;
874 Thm (deriv_rule2 (Pt.combination f g t u fT) der1 der2,
875 {thy_ref = merge_thys2 th1 th2,
876 tags = [],
877 maxidx = Int.max (max1, max2),
878 shyps = Sorts.union shyps1 shyps2,
879 hyps = union_hyps hyps1 hyps2,
880 tpairs = union_tpairs tpairs1 tpairs2,
881 prop = Logic.mk_equals (f $ t, g $ u)}))
882 | _ => raise THM ("combination: premises", 0, [th1, th2])
883 end;
884
885 (*Equality introduction
886 A ==> B B ==> A
887 ----------------
888 A == B
889 *)
890 fun equal_intr th1 th2 =
891 let
892 val Thm (der1, {maxidx = max1, shyps = shyps1, hyps = hyps1, tpairs = tpairs1,
893 prop = prop1, ...}) = th1
894 and Thm (der2, {maxidx = max2, shyps = shyps2, hyps = hyps2, tpairs = tpairs2,
895 prop = prop2, ...}) = th2;
896 fun err msg = raise THM ("equal_intr: " ^ msg, 0, [th1, th2]);
897 in
898 case (prop1, prop2) of
899 (Const("==>", _) $ A $ B, Const("==>", _) $ B' $ A') =>
900 if A aconv A' andalso B aconv B' then
901 Thm (deriv_rule2 (Pt.equal_intr A B) der1 der2,
902 {thy_ref = merge_thys2 th1 th2,
903 tags = [],
904 maxidx = Int.max (max1, max2),
905 shyps = Sorts.union shyps1 shyps2,
906 hyps = union_hyps hyps1 hyps2,
907 tpairs = union_tpairs tpairs1 tpairs2,
908 prop = Logic.mk_equals (A, B)})
909 else err "not equal"
910 | _ => err "premises"
911 end;
912
913 (*The equal propositions rule
914 A == B A
915 ---------
916 B
917 *)
918 fun equal_elim th1 th2 =
919 let
920 val Thm (der1, {maxidx = max1, shyps = shyps1, hyps = hyps1,
921 tpairs = tpairs1, prop = prop1, ...}) = th1
922 and Thm (der2, {maxidx = max2, shyps = shyps2, hyps = hyps2,
923 tpairs = tpairs2, prop = prop2, ...}) = th2;
924 fun err msg = raise THM ("equal_elim: " ^ msg, 0, [th1, th2]);
925 in
926 case prop1 of
927 Const ("==", _) $ A $ B =>
928 if prop2 aconv A then
929 Thm (deriv_rule2 (Pt.equal_elim A B) der1 der2,
930 {thy_ref = merge_thys2 th1 th2,
931 tags = [],
932 maxidx = Int.max (max1, max2),
933 shyps = Sorts.union shyps1 shyps2,
934 hyps = union_hyps hyps1 hyps2,
935 tpairs = union_tpairs tpairs1 tpairs2,
936 prop = B})
937 else err "not equal"
938 | _ => err"major premise"
939 end;
940
941
942
943 (**** Derived rules ****)
944
945 (*Smash unifies the list of term pairs leaving no flex-flex pairs.
946 Instantiates the theorem and deletes trivial tpairs. Resulting
947 sequence may contain multiple elements if the tpairs are not all
948 flex-flex.*)
949 fun flexflex_rule (th as Thm (der, {thy_ref, maxidx, shyps, hyps, tpairs, prop, ...})) =
950 let val thy = Theory.deref thy_ref in
951 Unify.smash_unifiers thy tpairs (Envir.empty maxidx)
952 |> Seq.map (fn env =>
953 if Envir.is_empty env then th
954 else
955 let
956 val tpairs' = tpairs |> map (pairself (Envir.norm_term env))
957 (*remove trivial tpairs, of the form t==t*)
958 |> filter_out (op aconv);
959 val der' = deriv_rule1 (Pt.norm_proof' env) der;
960 val prop' = Envir.norm_term env prop;
961 val maxidx = maxidx_tpairs tpairs' (maxidx_of_term prop');
962 val shyps = Envir.insert_sorts env shyps;
963 in
964 Thm (der', {thy_ref = Theory.check_thy thy, tags = [], maxidx = maxidx,
965 shyps = shyps, hyps = hyps, tpairs = tpairs', prop = prop'})
966 end)
967 end;
968
969
970 (*Generalization of fixed variables
971 A
972 --------------------
973 A[?'a/'a, ?x/x, ...]
974 *)
975
976 fun generalize ([], []) _ th = th
977 | generalize (tfrees, frees) idx th =
978 let
979 val Thm (der, {thy_ref, maxidx, shyps, hyps, tpairs, prop, ...}) = th;
980 val _ = idx <= maxidx andalso raise THM ("generalize: bad index", idx, [th]);
981
982 val bad_type = if null tfrees then K false else
983 Term.exists_subtype (fn TFree (a, _) => member (op =) tfrees a | _ => false);
984 fun bad_term (Free (x, T)) = bad_type T orelse member (op =) frees x
985 | bad_term (Var (_, T)) = bad_type T
986 | bad_term (Const (_, T)) = bad_type T
987 | bad_term (Abs (_, T, t)) = bad_type T orelse bad_term t
988 | bad_term (t $ u) = bad_term t orelse bad_term u
989 | bad_term (Bound _) = false;
990 val _ = exists bad_term hyps andalso
991 raise THM ("generalize: variable free in assumptions", 0, [th]);
992
993 val gen = TermSubst.generalize (tfrees, frees) idx;
994 val prop' = gen prop;
995 val tpairs' = map (pairself gen) tpairs;
996 val maxidx' = maxidx_tpairs tpairs' (maxidx_of_term prop');
997 in
998 Thm (deriv_rule1 (Pt.generalize (tfrees, frees) idx) der,
999 {thy_ref = thy_ref,
1000 tags = [],
1001 maxidx = maxidx',
1002 shyps = shyps,
1003 hyps = hyps,
1004 tpairs = tpairs',
1005 prop = prop'})
1006 end;
1007
1008
1009 (*Instantiation of schematic variables
1010 A
1011 --------------------
1012 A[t1/v1, ..., tn/vn]
1013 *)
1014
1015 local
1016
1017 fun pretty_typing thy t T = Pretty.block
1018 [Syntax.pretty_term_global thy t, Pretty.str " ::", Pretty.brk 1, Syntax.pretty_typ_global thy T];
1019
1020 fun add_inst (ct, cu) (thy_ref, sorts) =
1021 let
1022 val Cterm {t = t, T = T, ...} = ct;
1023 val Cterm {t = u, T = U, sorts = sorts_u, maxidx = maxidx_u, ...} = cu;
1024 val thy_ref' = Theory.merge_refs (thy_ref, merge_thys0 ct cu);
1025 val sorts' = Sorts.union sorts_u sorts;
1026 in
1027 (case t of Var v =>
1028 if T = U then ((v, (u, maxidx_u)), (thy_ref', sorts'))
1029 else raise TYPE (Pretty.string_of (Pretty.block
1030 [Pretty.str "instantiate: type conflict",
1031 Pretty.fbrk, pretty_typing (Theory.deref thy_ref') t T,
1032 Pretty.fbrk, pretty_typing (Theory.deref thy_ref') u U]), [T, U], [t, u])
1033 | _ => raise TYPE (Pretty.string_of (Pretty.block
1034 [Pretty.str "instantiate: not a variable",
1035 Pretty.fbrk, Syntax.pretty_term_global (Theory.deref thy_ref') t]), [], [t]))
1036 end;
1037
1038 fun add_instT (cT, cU) (thy_ref, sorts) =
1039 let
1040 val Ctyp {T, thy_ref = thy_ref1, ...} = cT
1041 and Ctyp {T = U, thy_ref = thy_ref2, sorts = sorts_U, maxidx = maxidx_U, ...} = cU;
1042 val thy' = Theory.deref (Theory.merge_refs (thy_ref, Theory.merge_refs (thy_ref1, thy_ref2)));
1043 val sorts' = Sorts.union sorts_U sorts;
1044 in
1045 (case T of TVar (v as (_, S)) =>
1046 if Sign.of_sort thy' (U, S) then ((v, (U, maxidx_U)), (Theory.check_thy thy', sorts'))
1047 else raise TYPE ("Type not of sort " ^ Syntax.string_of_sort_global thy' S, [U], [])
1048 | _ => raise TYPE (Pretty.string_of (Pretty.block
1049 [Pretty.str "instantiate: not a type variable",
1050 Pretty.fbrk, Syntax.pretty_typ_global thy' T]), [T], []))
1051 end;
1052
1053 in
1054
1055 (*Left-to-right replacements: ctpairs = [..., (vi, ti), ...].
1056 Instantiates distinct Vars by terms of same type.
1057 Does NOT normalize the resulting theorem!*)
1058 fun instantiate ([], []) th = th
1059 | instantiate (instT, inst) th =
1060 let
1061 val Thm (der, {thy_ref, hyps, shyps, tpairs, prop, ...}) = th;
1062 val (inst', (instT', (thy_ref', shyps'))) =
1063 (thy_ref, shyps) |> fold_map add_inst inst ||> fold_map add_instT instT;
1064 val subst = TermSubst.instantiate_maxidx (instT', inst');
1065 val (prop', maxidx1) = subst prop ~1;
1066 val (tpairs', maxidx') =
1067 fold_map (fn (t, u) => fn i => subst t i ||>> subst u) tpairs maxidx1;
1068 in
1069 Thm (deriv_rule1 (fn d => Pt.instantiate (map (apsnd #1) instT', map (apsnd #1) inst') d) der,
1070 {thy_ref = thy_ref',
1071 tags = [],
1072 maxidx = maxidx',
1073 shyps = shyps',
1074 hyps = hyps,
1075 tpairs = tpairs',
1076 prop = prop'})
1077 end
1078 handle TYPE (msg, _, _) => raise THM (msg, 0, [th]);
1079
1080 fun instantiate_cterm ([], []) ct = ct
1081 | instantiate_cterm (instT, inst) ct =
1082 let
1083 val Cterm {thy_ref, t, T, sorts, ...} = ct;
1084 val (inst', (instT', (thy_ref', sorts'))) =
1085 (thy_ref, sorts) |> fold_map add_inst inst ||> fold_map add_instT instT;
1086 val subst = TermSubst.instantiate_maxidx (instT', inst');
1087 val substT = TermSubst.instantiateT_maxidx instT';
1088 val (t', maxidx1) = subst t ~1;
1089 val (T', maxidx') = substT T maxidx1;
1090 in Cterm {thy_ref = thy_ref', t = t', T = T', sorts = sorts', maxidx = maxidx'} end
1091 handle TYPE (msg, _, _) => raise CTERM (msg, [ct]);
1092
1093 end;
1094
1095
1096 (*The trivial implication A ==> A, justified by assume and forall rules.
1097 A can contain Vars, not so for assume!*)
1098 fun trivial (Cterm {thy_ref, t =A, T, maxidx, sorts}) =
1099 if T <> propT then
1100 raise THM ("trivial: the term must have type prop", 0, [])
1101 else
1102 Thm (deriv_rule0 (Pt.AbsP ("H", NONE, Pt.PBound 0)),
1103 {thy_ref = thy_ref,
1104 tags = [],
1105 maxidx = maxidx,
1106 shyps = sorts,
1107 hyps = [],
1108 tpairs = [],
1109 prop = Logic.mk_implies (A, A)});
1110
1111 (*Axiom-scheme reflecting signature contents: "OFCLASS(?'a::c, c_class)" *)
1112 fun class_triv thy c =
1113 let
1114 val Cterm {t, maxidx, sorts, ...} =
1115 cterm_of thy (Logic.mk_inclass (TVar ((Name.aT, 0), [c]), Sign.certify_class thy c))
1116 handle TERM (msg, _) => raise THM ("class_triv: " ^ msg, 0, []);
1117 val der = deriv_rule0 (Pt.PAxm ("Pure.class_triv:" ^ c, t, SOME []));
1118 in
1119 Thm (der, {thy_ref = Theory.check_thy thy, tags = [], maxidx = maxidx,
1120 shyps = sorts, hyps = [], tpairs = [], prop = t})
1121 end;
1122
1123 (*Internalize sort constraints of type variable*)
1124 fun unconstrainT
1125 (Ctyp {thy_ref = thy_ref1, T, ...})
1126 (th as Thm (_, {thy_ref = thy_ref2, maxidx, shyps, hyps, tpairs, prop, ...})) =
1127 let
1128 val ((x, i), S) = Term.dest_TVar T handle TYPE _ =>
1129 raise THM ("unconstrainT: not a type variable", 0, [th]);
1130 val T' = TVar ((x, i), []);
1131 val unconstrain = Term.map_types (Term.map_atyps (fn U => if U = T then T' else U));
1132 val constraints = map (curry Logic.mk_inclass T') S;
1133 in
1134 Thm (deriv_rule0 (Pt.PAxm ("Pure.unconstrainT", prop, SOME [])),
1135 {thy_ref = Theory.merge_refs (thy_ref1, thy_ref2),
1136 tags = [],
1137 maxidx = Int.max (maxidx, i),
1138 shyps = Sorts.remove_sort S shyps,
1139 hyps = hyps,
1140 tpairs = map (pairself unconstrain) tpairs,
1141 prop = Logic.list_implies (constraints, unconstrain prop)})
1142 end;
1143
1144 (* Replace all TFrees not fixed or in the hyps by new TVars *)
1145 fun varifyT' fixed (Thm (der, {thy_ref, maxidx, shyps, hyps, tpairs, prop, ...})) =
1146 let
1147 val tfrees = fold Term.add_tfrees hyps fixed;
1148 val prop1 = attach_tpairs tpairs prop;
1149 val (al, prop2) = Type.varify tfrees prop1;
1150 val (ts, prop3) = Logic.strip_prems (length tpairs, [], prop2);
1151 in
1152 (al, Thm (deriv_rule1 (Pt.varify_proof prop tfrees) der,
1153 {thy_ref = thy_ref,
1154 tags = [],
1155 maxidx = Int.max (0, maxidx),
1156 shyps = shyps,
1157 hyps = hyps,
1158 tpairs = rev (map Logic.dest_equals ts),
1159 prop = prop3}))
1160 end;
1161
1162 val varifyT = #2 o varifyT' [];
1163
1164 (* Replace all TVars by new TFrees *)
1165 fun freezeT (Thm (der, {thy_ref, maxidx, shyps, hyps, tpairs, prop, ...})) =
1166 let
1167 val prop1 = attach_tpairs tpairs prop;
1168 val prop2 = Type.freeze prop1;
1169 val (ts, prop3) = Logic.strip_prems (length tpairs, [], prop2);
1170 in
1171 Thm (deriv_rule1 (Pt.freezeT prop1) der,
1172 {thy_ref = thy_ref,
1173 tags = [],
1174 maxidx = maxidx_of_term prop2,
1175 shyps = shyps,
1176 hyps = hyps,
1177 tpairs = rev (map Logic.dest_equals ts),
1178 prop = prop3})
1179 end;
1180
1181
1182 (*** Inference rules for tactics ***)
1183
1184 (*Destruct proof state into constraints, other goals, goal(i), rest *)
1185 fun dest_state (state as Thm (_, {prop,tpairs,...}), i) =
1186 (case Logic.strip_prems(i, [], prop) of
1187 (B::rBs, C) => (tpairs, rev rBs, B, C)
1188 | _ => raise THM("dest_state", i, [state]))
1189 handle TERM _ => raise THM("dest_state", i, [state]);
1190
1191 (*Increment variables and parameters of orule as required for
1192 resolution with a goal.*)
1193 fun lift_rule goal orule =
1194 let
1195 val Cterm {t = gprop, T, maxidx = gmax, sorts, ...} = goal;
1196 val inc = gmax + 1;
1197 val lift_abs = Logic.lift_abs inc gprop;
1198 val lift_all = Logic.lift_all inc gprop;
1199 val Thm (der, {maxidx, shyps, hyps, tpairs, prop, ...}) = orule;
1200 val (As, B) = Logic.strip_horn prop;
1201 in
1202 if T <> propT then raise THM ("lift_rule: the term must have type prop", 0, [])
1203 else
1204 Thm (deriv_rule1 (Pt.lift_proof gprop inc prop) der,
1205 {thy_ref = merge_thys1 goal orule,
1206 tags = [],
1207 maxidx = maxidx + inc,
1208 shyps = Sorts.union shyps sorts, (*sic!*)
1209 hyps = hyps,
1210 tpairs = map (pairself lift_abs) tpairs,
1211 prop = Logic.list_implies (map lift_all As, lift_all B)})
1212 end;
1213
1214 fun incr_indexes i (thm as Thm (der, {thy_ref, maxidx, shyps, hyps, tpairs, prop, ...})) =
1215 if i < 0 then raise THM ("negative increment", 0, [thm])
1216 else if i = 0 then thm
1217 else
1218 Thm (deriv_rule1 (Pt.map_proof_terms (Logic.incr_indexes ([], i)) (Logic.incr_tvar i)) der,
1219 {thy_ref = thy_ref,
1220 tags = [],
1221 maxidx = maxidx + i,
1222 shyps = shyps,
1223 hyps = hyps,
1224 tpairs = map (pairself (Logic.incr_indexes ([], i))) tpairs,
1225 prop = Logic.incr_indexes ([], i) prop});
1226
1227 (*Solve subgoal Bi of proof state B1...Bn/C by assumption. *)
1228 fun assumption i state =
1229 let
1230 val Thm (der, {thy_ref, maxidx, shyps, hyps, prop, ...}) = state;
1231 val thy = Theory.deref thy_ref;
1232 val (tpairs, Bs, Bi, C) = dest_state (state, i);
1233 fun newth n (env as Envir.Envir {maxidx, ...}, tpairs) =
1234 Thm (deriv_rule1
1235 ((if Envir.is_empty env then I else (Pt.norm_proof' env)) o
1236 Pt.assumption_proof Bs Bi n) der,
1237 {tags = [],
1238 maxidx = maxidx,
1239 shyps = Envir.insert_sorts env shyps,
1240 hyps = hyps,
1241 tpairs =
1242 if Envir.is_empty env then tpairs
1243 else map (pairself (Envir.norm_term env)) tpairs,
1244 prop =
1245 if Envir.is_empty env then (*avoid wasted normalizations*)
1246 Logic.list_implies (Bs, C)
1247 else (*normalize the new rule fully*)
1248 Envir.norm_term env (Logic.list_implies (Bs, C)),
1249 thy_ref = Theory.check_thy thy});
1250
1251 val (close, asms, concl) = Logic.assum_problems (~1, Bi);
1252 val concl' = close concl;
1253 fun addprfs [] _ = Seq.empty
1254 | addprfs (asm :: rest) n = Seq.make (fn () => Seq.pull
1255 (Seq.mapp (newth n)
1256 (if Term.could_unify (asm, concl) then
1257 (Unify.unifiers (thy, Envir.empty maxidx, (close asm, concl') :: tpairs))
1258 else Seq.empty)
1259 (addprfs rest (n + 1))))
1260 in addprfs asms 1 end;
1261
1262 (*Solve subgoal Bi of proof state B1...Bn/C by assumption.
1263 Checks if Bi's conclusion is alpha-convertible to one of its assumptions*)
1264 fun eq_assumption i state =
1265 let
1266 val Thm (der, {thy_ref, maxidx, shyps, hyps, prop, ...}) = state;
1267 val (tpairs, Bs, Bi, C) = dest_state (state, i);
1268 val (_, asms, concl) = Logic.assum_problems (~1, Bi);
1269 in
1270 (case find_index (fn asm => Pattern.aeconv (asm, concl)) asms of
1271 ~1 => raise THM ("eq_assumption", 0, [state])
1272 | n =>
1273 Thm (deriv_rule1 (Pt.assumption_proof Bs Bi (n + 1)) der,
1274 {thy_ref = thy_ref,
1275 tags = [],
1276 maxidx = maxidx,
1277 shyps = shyps,
1278 hyps = hyps,
1279 tpairs = tpairs,
1280 prop = Logic.list_implies (Bs, C)}))
1281 end;
1282
1283
1284 (*For rotate_tac: fast rotation of assumptions of subgoal i*)
1285 fun rotate_rule k i state =
1286 let
1287 val Thm (der, {thy_ref, maxidx, shyps, hyps, prop, ...}) = state;
1288 val (tpairs, Bs, Bi, C) = dest_state (state, i);
1289 val params = Term.strip_all_vars Bi
1290 and rest = Term.strip_all_body Bi;
1291 val asms = Logic.strip_imp_prems rest
1292 and concl = Logic.strip_imp_concl rest;
1293 val n = length asms;
1294 val m = if k < 0 then n + k else k;
1295 val Bi' =
1296 if 0 = m orelse m = n then Bi
1297 else if 0 < m andalso m < n then
1298 let val (ps, qs) = chop m asms
1299 in list_all (params, Logic.list_implies (qs @ ps, concl)) end
1300 else raise THM ("rotate_rule", k, [state]);
1301 in
1302 Thm (deriv_rule1 (Pt.rotate_proof Bs Bi m) der,
1303 {thy_ref = thy_ref,
1304 tags = [],
1305 maxidx = maxidx,
1306 shyps = shyps,
1307 hyps = hyps,
1308 tpairs = tpairs,
1309 prop = Logic.list_implies (Bs @ [Bi'], C)})
1310 end;
1311
1312
1313 (*Rotates a rule's premises to the left by k, leaving the first j premises
1314 unchanged. Does nothing if k=0 or if k equals n-j, where n is the
1315 number of premises. Useful with etac and underlies defer_tac*)
1316 fun permute_prems j k rl =
1317 let
1318 val Thm (der, {thy_ref, maxidx, shyps, hyps, tpairs, prop, ...}) = rl;
1319 val prems = Logic.strip_imp_prems prop
1320 and concl = Logic.strip_imp_concl prop;
1321 val moved_prems = List.drop (prems, j)
1322 and fixed_prems = List.take (prems, j)
1323 handle Subscript => raise THM ("permute_prems: j", j, [rl]);
1324 val n_j = length moved_prems;
1325 val m = if k < 0 then n_j + k else k;
1326 val prop' =
1327 if 0 = m orelse m = n_j then prop
1328 else if 0 < m andalso m < n_j then
1329 let val (ps, qs) = chop m moved_prems
1330 in Logic.list_implies (fixed_prems @ qs @ ps, concl) end
1331 else raise THM ("permute_prems: k", k, [rl]);
1332 in
1333 Thm (deriv_rule1 (Pt.permute_prems_prf prems j m) der,
1334 {thy_ref = thy_ref,
1335 tags = [],
1336 maxidx = maxidx,
1337 shyps = shyps,
1338 hyps = hyps,
1339 tpairs = tpairs,
1340 prop = prop'})
1341 end;
1342
1343
1344 (** User renaming of parameters in a subgoal **)
1345
1346 (*Calls error rather than raising an exception because it is intended
1347 for top-level use -- exception handling would not make sense here.
1348 The names in cs, if distinct, are used for the innermost parameters;
1349 preceding parameters may be renamed to make all params distinct.*)
1350 fun rename_params_rule (cs, i) state =
1351 let
1352 val Thm (der, {thy_ref, tags, maxidx, shyps, hyps, ...}) = state;
1353 val (tpairs, Bs, Bi, C) = dest_state (state, i);
1354 val iparams = map #1 (Logic.strip_params Bi);
1355 val short = length iparams - length cs;
1356 val newnames =
1357 if short < 0 then error "More names than abstractions!"
1358 else Name.variant_list cs (Library.take (short, iparams)) @ cs;
1359 val freenames = Term.fold_aterms (fn Free (x, _) => insert (op =) x | _ => I) Bi [];
1360 val newBi = Logic.list_rename_params (newnames, Bi);
1361 in
1362 (case duplicates (op =) cs of
1363 a :: _ => (warning ("Can't rename. Bound variables not distinct: " ^ a); state)
1364 | [] =>
1365 (case cs inter_string freenames of
1366 a :: _ => (warning ("Can't rename. Bound/Free variable clash: " ^ a); state)
1367 | [] =>
1368 Thm (der,
1369 {thy_ref = thy_ref,
1370 tags = tags,
1371 maxidx = maxidx,
1372 shyps = shyps,
1373 hyps = hyps,
1374 tpairs = tpairs,
1375 prop = Logic.list_implies (Bs @ [newBi], C)})))
1376 end;
1377
1378
1379 (*** Preservation of bound variable names ***)
1380
1381 fun rename_boundvars pat obj (thm as Thm (der, {thy_ref, tags, maxidx, shyps, hyps, tpairs, prop})) =
1382 (case Term.rename_abs pat obj prop of
1383 NONE => thm
1384 | SOME prop' => Thm (der,
1385 {thy_ref = thy_ref,
1386 tags = tags,
1387 maxidx = maxidx,
1388 hyps = hyps,
1389 shyps = shyps,
1390 tpairs = tpairs,
1391 prop = prop'}));
1392
1393
1394 (* strip_apply f (A, B) strips off all assumptions/parameters from A
1395 introduced by lifting over B, and applies f to remaining part of A*)
1396 fun strip_apply f =
1397 let fun strip(Const("==>",_)$ A1 $ B1,
1398 Const("==>",_)$ _ $ B2) = Logic.mk_implies (A1, strip(B1,B2))
1399 | strip((c as Const("all",_)) $ Abs(a,T,t1),
1400 Const("all",_) $ Abs(_,_,t2)) = c$Abs(a,T,strip(t1,t2))
1401 | strip(A,_) = f A
1402 in strip end;
1403
1404 (*Use the alist to rename all bound variables and some unknowns in a term
1405 dpairs = current disagreement pairs; tpairs = permanent ones (flexflex);
1406 Preserves unknowns in tpairs and on lhs of dpairs. *)
1407 fun rename_bvs([],_,_,_) = I
1408 | rename_bvs(al,dpairs,tpairs,B) =
1409 let
1410 val add_var = fold_aterms (fn Var ((x, _), _) => insert (op =) x | _ => I);
1411 val vids = []
1412 |> fold (add_var o fst) dpairs
1413 |> fold (add_var o fst) tpairs
1414 |> fold (add_var o snd) tpairs;
1415 (*unknowns appearing elsewhere be preserved!*)
1416 fun rename(t as Var((x,i),T)) =
1417 (case AList.lookup (op =) al x of
1418 SOME y =>
1419 if member (op =) vids x orelse member (op =) vids y then t
1420 else Var((y,i),T)
1421 | NONE=> t)
1422 | rename(Abs(x,T,t)) =
1423 Abs (the_default x (AList.lookup (op =) al x), T, rename t)
1424 | rename(f$t) = rename f $ rename t
1425 | rename(t) = t;
1426 fun strip_ren Ai = strip_apply rename (Ai,B)
1427 in strip_ren end;
1428
1429 (*Function to rename bounds/unknowns in the argument, lifted over B*)
1430 fun rename_bvars(dpairs, tpairs, B) =
1431 rename_bvs(List.foldr Term.match_bvars [] dpairs, dpairs, tpairs, B);
1432
1433
1434 (*** RESOLUTION ***)
1435
1436 (** Lifting optimizations **)
1437
1438 (*strip off pairs of assumptions/parameters in parallel -- they are
1439 identical because of lifting*)
1440 fun strip_assums2 (Const("==>", _) $ _ $ B1,
1441 Const("==>", _) $ _ $ B2) = strip_assums2 (B1,B2)
1442 | strip_assums2 (Const("all",_)$Abs(a,T,t1),
1443 Const("all",_)$Abs(_,_,t2)) =
1444 let val (B1,B2) = strip_assums2 (t1,t2)
1445 in (Abs(a,T,B1), Abs(a,T,B2)) end
1446 | strip_assums2 BB = BB;
1447
1448
1449 (*Faster normalization: skip assumptions that were lifted over*)
1450 fun norm_term_skip env 0 t = Envir.norm_term env t
1451 | norm_term_skip env n (Const("all",_)$Abs(a,T,t)) =
1452 let val Envir.Envir{iTs, ...} = env
1453 val T' = Envir.typ_subst_TVars iTs T
1454 (*Must instantiate types of parameters because they are flattened;
1455 this could be a NEW parameter*)
1456 in Term.all T' $ Abs(a, T', norm_term_skip env n t) end
1457 | norm_term_skip env n (Const("==>", _) $ A $ B) =
1458 Logic.mk_implies (A, norm_term_skip env (n-1) B)
1459 | norm_term_skip env n t = error"norm_term_skip: too few assumptions??";
1460
1461
1462 (*Composition of object rule r=(A1...Am/B) with proof state s=(B1...Bn/C)
1463 Unifies B with Bi, replacing subgoal i (1 <= i <= n)
1464 If match then forbid instantiations in proof state
1465 If lifted then shorten the dpair using strip_assums2.
1466 If eres_flg then simultaneously proves A1 by assumption.
1467 nsubgoal is the number of new subgoals (written m above).
1468 Curried so that resolution calls dest_state only once.
1469 *)
1470 local exception COMPOSE
1471 in
1472 fun bicompose_aux flatten match (state, (stpairs, Bs, Bi, C), lifted)
1473 (eres_flg, orule, nsubgoal) =
1474 let val Thm (sder, {maxidx=smax, shyps=sshyps, hyps=shyps, ...}) = state
1475 and Thm (rder, {maxidx=rmax, shyps=rshyps, hyps=rhyps,
1476 tpairs=rtpairs, prop=rprop,...}) = orule
1477 (*How many hyps to skip over during normalization*)
1478 and nlift = Logic.count_prems (strip_all_body Bi) + (if eres_flg then ~1 else 0)
1479 val thy = Theory.deref (merge_thys2 state orule);
1480 (** Add new theorem with prop = '[| Bs; As |] ==> C' to thq **)
1481 fun addth A (As, oldAs, rder', n) ((env as Envir.Envir {maxidx, ...}, tpairs), thq) =
1482 let val normt = Envir.norm_term env;
1483 (*perform minimal copying here by examining env*)
1484 val (ntpairs, normp) =
1485 if Envir.is_empty env then (tpairs, (Bs @ As, C))
1486 else
1487 let val ntps = map (pairself normt) tpairs
1488 in if Envir.above env smax then
1489 (*no assignments in state; normalize the rule only*)
1490 if lifted
1491 then (ntps, (Bs @ map (norm_term_skip env nlift) As, C))
1492 else (ntps, (Bs @ map normt As, C))
1493 else if match then raise COMPOSE
1494 else (*normalize the new rule fully*)
1495 (ntps, (map normt (Bs @ As), normt C))
1496 end
1497 val th =
1498 Thm (deriv_rule2
1499 ((if Envir.is_empty env then I
1500 else if Envir.above env smax then
1501 (fn f => fn der => f (Pt.norm_proof' env der))
1502 else
1503 curry op oo (Pt.norm_proof' env))
1504 (Pt.bicompose_proof flatten Bs oldAs As A n (nlift+1))) rder' sder,
1505 {tags = [],
1506 maxidx = maxidx,
1507 shyps = Envir.insert_sorts env (Sorts.union rshyps sshyps),
1508 hyps = union_hyps rhyps shyps,
1509 tpairs = ntpairs,
1510 prop = Logic.list_implies normp,
1511 thy_ref = Theory.check_thy thy})
1512 in Seq.cons th thq end handle COMPOSE => thq;
1513 val (rAs,B) = Logic.strip_prems(nsubgoal, [], rprop)
1514 handle TERM _ => raise THM("bicompose: rule", 0, [orule,state]);
1515 (*Modify assumptions, deleting n-th if n>0 for e-resolution*)
1516 fun newAs(As0, n, dpairs, tpairs) =
1517 let val (As1, rder') =
1518 if not lifted then (As0, rder)
1519 else (map (rename_bvars(dpairs,tpairs,B)) As0,
1520 deriv_rule1 (Pt.map_proof_terms
1521 (rename_bvars (dpairs, tpairs, Bound 0)) I) rder);
1522 in (map (if flatten then (Logic.flatten_params n) else I) As1, As1, rder', n)
1523 handle TERM _ =>
1524 raise THM("bicompose: 1st premise", 0, [orule])
1525 end;
1526 val env = Envir.empty(Int.max(rmax,smax));
1527 val BBi = if lifted then strip_assums2(B,Bi) else (B,Bi);
1528 val dpairs = BBi :: (rtpairs@stpairs);
1529
1530 (*elim-resolution: try each assumption in turn*)
1531 fun eres [] = raise THM ("bicompose: no premises", 0, [orule, state])
1532 | eres (A1 :: As) =
1533 let
1534 val A = SOME A1;
1535 val (close, asms, concl) = Logic.assum_problems (nlift + 1, A1);
1536 val concl' = close concl;
1537 fun tryasms [] _ = Seq.empty
1538 | tryasms (asm :: rest) n =
1539 if Term.could_unify (asm, concl) then
1540 let val asm' = close asm in
1541 (case Seq.pull (Unify.unifiers (thy, env, (asm', concl') :: dpairs)) of
1542 NONE => tryasms rest (n + 1)
1543 | cell as SOME ((_, tpairs), _) =>
1544 Seq.it_right (addth A (newAs (As, n, [BBi, (concl', asm')], tpairs)))
1545 (Seq.make (fn () => cell),
1546 Seq.make (fn () => Seq.pull (tryasms rest (n + 1)))))
1547 end
1548 else tryasms rest (n + 1);
1549 in tryasms asms 1 end;
1550
1551 (*ordinary resolution*)
1552 fun res () =
1553 (case Seq.pull (Unify.unifiers (thy, env, dpairs)) of
1554 NONE => Seq.empty
1555 | cell as SOME ((_, tpairs), _) =>
1556 Seq.it_right (addth NONE (newAs (rev rAs, 0, [BBi], tpairs)))
1557 (Seq.make (fn () => cell), Seq.empty));
1558 in
1559 if eres_flg then eres (rev rAs) else res ()
1560 end;
1561 end;
1562
1563 fun compose_no_flatten match (orule, nsubgoal) i state =
1564 bicompose_aux false match (state, dest_state (state, i), false) (false, orule, nsubgoal);
1565
1566 fun bicompose match arg i state =
1567 bicompose_aux true match (state, dest_state (state,i), false) arg;
1568
1569 (*Quick test whether rule is resolvable with the subgoal with hyps Hs
1570 and conclusion B. If eres_flg then checks 1st premise of rule also*)
1571 fun could_bires (Hs, B, eres_flg, rule) =
1572 let fun could_reshyp (A1::_) = exists (fn H => Term.could_unify (A1, H)) Hs
1573 | could_reshyp [] = false; (*no premise -- illegal*)
1574 in Term.could_unify(concl_of rule, B) andalso
1575 (not eres_flg orelse could_reshyp (prems_of rule))
1576 end;
1577
1578 (*Bi-resolution of a state with a list of (flag,rule) pairs.
1579 Puts the rule above: rule/state. Renames vars in the rules. *)
1580 fun biresolution match brules i state =
1581 let val (stpairs, Bs, Bi, C) = dest_state(state,i);
1582 val lift = lift_rule (cprem_of state i);
1583 val B = Logic.strip_assums_concl Bi;
1584 val Hs = Logic.strip_assums_hyp Bi;
1585 val compose = bicompose_aux true match (state, (stpairs, Bs, Bi, C), true);
1586 fun res [] = Seq.empty
1587 | res ((eres_flg, rule)::brules) =
1588 if !Pattern.trace_unify_fail orelse
1589 could_bires (Hs, B, eres_flg, rule)
1590 then Seq.make (*delay processing remainder till needed*)
1591 (fn()=> SOME(compose (eres_flg, lift rule, nprems_of rule),
1592 res brules))
1593 else res brules
1594 in Seq.flat (res brules) end;
1595
1596
1597
1598 (*** Future theorems -- proofs with promises ***)
1599
1600 (* future rule *)
1601
1602 fun future_result i orig_thy orig_shyps orig_prop raw_thm =
1603 let
1604 val _ = Theory.check_thy orig_thy;
1605 val thm = strip_shyps (transfer orig_thy raw_thm);
1606 val _ = Theory.check_thy orig_thy;
1607 fun err msg = raise THM ("future_result: " ^ msg, 0, [thm]);
1608
1609 val Thm (Deriv {max_promise, ...}, {shyps, hyps, tpairs, prop, ...}) = thm;
1610 val _ = prop aconv orig_prop orelse err "bad prop";
1611 val _ = null tpairs orelse err "bad tpairs";
1612 val _ = null hyps orelse err "bad hyps";
1613 val _ = Sorts.subset (shyps, orig_shyps) orelse err "bad shyps";
1614 val _ = max_promise < i orelse err "bad dependencies";
1615 in thm end;
1616
1617 fun future future_thm ct =
1618 let
1619 val Cterm {thy_ref = thy_ref, t = prop, T, maxidx, sorts} = ct;
1620 val thy = Context.reject_draft (Theory.deref thy_ref);
1621 val _ = T <> propT andalso raise CTERM ("future: prop expected", [ct]);
1622
1623 val i = serial ();
1624 val future = future_thm |> Future.map (future_result i thy sorts prop);
1625 val promise = (i, future);
1626 in
1627 Thm (make_deriv i [promise] [promise] [] [] (Pt.promise_proof thy i prop),
1628 {thy_ref = thy_ref,
1629 tags = [],
1630 maxidx = maxidx,
1631 shyps = sorts,
1632 hyps = [],
1633 tpairs = [],
1634 prop = prop})
1635 end;
1636
1637
1638 (* pending task groups *)
1639
1640 fun pending_groups (Thm (Deriv {open_promises, ...}, _)) =
1641 fold (insert Task_Queue.eq_group o Future.group_of o #2) open_promises;
1642
1643
1644 (* fulfilled proofs *)
1645
1646 fun raw_proof_of (Thm (Deriv {body, ...}, _)) = Proofterm.proof_of body;
1647
1648 fun proof_body_of (Thm (Deriv {open_promises, promises, body, ...}, {thy_ref, ...})) =
1649 let
1650 val _ = Exn.release_all (map (Future.join_result o #2) (rev open_promises));
1651 val ps = map (apsnd (raw_proof_of o Future.join)) promises;
1652 in Pt.fulfill_proof (Theory.deref thy_ref) ps body end;
1653
1654 val proof_of = Proofterm.proof_of o proof_body_of;
1655 val join_proof = ignore o proof_body_of;
1656
1657
1658 (* closed derivations with official name *)
1659
1660 fun get_name thm =
1661 Pt.get_name (hyps_of thm) (prop_of thm) (raw_proof_of thm);
1662
1663 fun put_name name (thm as Thm (der, args)) =
1664 let
1665 val Deriv {max_promise, open_promises, promises, body, ...} = der;
1666 val {thy_ref, hyps, prop, tpairs, ...} = args;
1667 val _ = null tpairs orelse raise THM ("put_name: unsolved flex-flex constraints", 0, [thm]);
1668
1669 val ps = map (apsnd (Future.map proof_of)) promises;
1670 val thy = Theory.deref thy_ref;
1671 val (pthm, proof) = Pt.thm_proof thy name hyps prop ps body;
1672
1673 val open_promises' = open_promises |> filter (fn (_, p) =>
1674 (case Future.peek p of SOME (Exn.Result _) => false | _ => true));
1675 val der' = make_deriv max_promise open_promises' [] [] [pthm] proof;
1676 val _ = Theory.check_thy thy;
1677 in Thm (der', args) end;
1678
1679
1680
1681 (*** Oracles ***)
1682
1683 (* oracle rule *)
1684
1685 fun invoke_oracle thy_ref1 name oracle arg =
1686 let val Cterm {thy_ref = thy_ref2, t = prop, T, maxidx, sorts} = oracle arg in
1687 if T <> propT then
1688 raise THM ("Oracle's result must have type prop: " ^ name, 0, [])
1689 else
1690 let val prf = Pt.oracle_proof name prop in
1691 Thm (make_deriv ~1 [] [] (Pt.make_oracles prf) [] prf,
1692 {thy_ref = Theory.merge_refs (thy_ref1, thy_ref2),
1693 tags = [],
1694 maxidx = maxidx,
1695 shyps = sorts,
1696 hyps = [],
1697 tpairs = [],
1698 prop = prop})
1699 end
1700 end;
1701
1702
1703 (* authentic derivation names *)
1704
1705 fun err_dup_ora dup = error ("Duplicate oracle: " ^ quote dup);
1706
1707 structure Oracles = TheoryDataFun
1708 (
1709 type T = serial NameSpace.table;
1710 val empty = NameSpace.empty_table;
1711 val copy = I;
1712 val extend = I;
1713 fun merge _ oracles : T = NameSpace.merge_tables (op =) oracles
1714 handle Symtab.DUP dup => err_dup_ora dup;
1715 );
1716
1717 val extern_oracles = map #1 o NameSpace.extern_table o Oracles.get;
1718
1719 fun add_oracle (b, oracle) thy =
1720 let
1721 val naming = Sign.naming_of thy;
1722 val (name, tab') = NameSpace.define naming (b, serial ()) (Oracles.get thy)
1723 handle Symtab.DUP _ => err_dup_ora (Binding.str_of b);
1724 val thy' = Oracles.put tab' thy;
1725 in ((name, invoke_oracle (Theory.check_thy thy') name oracle), thy') end;
1726
1727 end;
1728
1729 structure BasicThm: BASIC_THM = Thm;
1730 open BasicThm;
|
__label__pos
| 0.996188 |
HomeDev GuideAPI Reference
Dev GuideAPI ReferenceUser GuideGitHubNuGetDev CommunityDoc feedbackLog In
GitHubNuGetDev CommunityDoc feedback
Restrict content types in properties
Describes how to control the addition of certain items to a property of type ContentArea, ContentReference or ContentReferenceList.
Restrict a ContentArea
The AllowedTypes attribute (placed in the EPiServer.DataAnnotations namespace, EPiServer assembly) can be applied to a property.
As you can see from examples below the array declaration is not always needed.
[AllowedTypes(new [] { typeof(PageData) })]
public virtual ContentArea RelatedContentArea { get; set; }
[AllowedTypes(typeof(PageData))]
public virtual ContentArea OtherRelatedContentArea { get; set; }
The same attribute can also be set as:
[AllowedTypes(AllowedTypes = new [] { typeof(PageData) })]
public virtual ContentArea RelatedContentArea { get; set; }
When an item that is part of the allowed types is dragged over this property, the property is highlighted, and the editor can add the item to the property. But if editor tries to drag another item which is not part of allowed types, the property is grayed out, and the item can't be added. The content selector dialog is also filtered out accordingly. Through the Create a new block link in the content area, editors can only add allowed item types.
550
You can specify several allowed types as well as inherited types:
[AllowedTypes(new [] {typeof(PageData), typeof(BlockData)})]
public virtual ContentArea RelatedContentArea { get; set; }
[AllowedTypes(typeof(PageData), typeof(BlockData))]
public virtual ContentArea OtherRelatedContentArea { get; set; }
Restrict a certain set of types but allow all others
The AllowedTypes attribute can also be used to restrict a certain set of types from a larger pool of types.
For example, if you supply two arrays of types as constructor arguments, the first array argument considers allowed items, while the second argument considers restricted items.
[AllowedTypes(new [] { typeof(BlockData) }, new [] { typeof(EditorialBlock) })]
public virtual ContentArea RelatedContentArea { get; set; }
or
[AllowedTypes(AllowedTypes = new [] { typeof(BlockData) }, RestrictedTypes = new [] { typeof(EditorialBlock) })]
public virtual ContentArea RelatedContentArea { get; set; }
In this case, all BlockData items can be added to the content area except the EditorialBlock, which is a BlockData but also part of restricted types. When an editor drags the EditorialBlock or its sub types, the content area is grayed out. Similarly, when an editor clicks the Create a new block link, the content selector is filtered out accordingly.
Restrict based on base classes and interfaces
Once you place the AllowTypes attribute on a ContentArea, ContentReference or a list of ContentReference property, the user interface not only allows and restricts based on the given type but also on all of the given type's sub types. For example:
[AllowedTypes(new [] { typeof(SiteBlockData) })]
public virtual ContentArea RelatedContentArea { get; set; }
In this case, the property not only allows the SiteBlockData to be added but all other types inherited from the SiteBlockData also behave the same.
However, if you want to allow and restrict based on an interface, you must implement the UIDescriptor for the interface.
interface ISpecialInterface { }
public class SpecialBlock : BlockData, ISpecialInterface {}
If you now want to enable a content area to allow all blocks to be added except the ones inherited from the ISpecialInterface, you first have to implement a UIDescriptor for the ISpecialInterface.
[UIDescriptorRegistration]
public class SpecialInterfaceDescriptor : UIDescriptor<ISpecialInterface> { }
Additionally, the interface has to inherits from IContentData interface.
public interface ISpecialInterface: IContentData
{
// properties and methods
}
After that, place the AllowedTypes on the content area property.
[AllowedTypes(AllowedTypes = new [] { typeof(BlockData) }, RestrictedTypes = new [] { typeof(ISpecialInterface) })]
public virtual ContentArea SomeArea { get; set; }
This code allows all blocks to be added to SomeArea but restricts blocks that implement ISpecialInterface.
Restrict content reference properties
The AllowedTypes attribute can be used for ContentReference or list of ContentReference properties as well:
[AllowedTypes( typeof(ProductPage))]
public virtual ContentReference SomeLink { get; set; }
[AllowedTypes( typeof(ProductPage))]
public virtual IList<ContentReference> SomeLinks { get; set; }
This code results in the same behavior as for content areas when dragging items to the property; that is, only items of the type ProductPage can be added to the property. Items not allowed (according to the AllowedTypes attribute) are not selectable in the content selector dialog.
Known limitations
• AllowedTypes only works with ContentArea, ContentReference and list of ContentReference properties.
• RestrictedTypes always win. That means that if an item (a class or an interface) is given in RestrictedTypes, all of its instances and sub items are restricted whether or not the item or sub items exist in AllowedTypes.
|
__label__pos
| 0.996419 |
Permalink
Browse files
Change the `protect_from_forgery` prepend default to `false`
Per this comment
#18334 (comment) we want
`protect_from_forgery` to default to `prepend: false`.
`protect_from_forgery` will now be insterted into the callback chain at the
point it is called in your application. This is useful for cases where you
want to `protect_from_forgery` after you perform required authentication
callbacks or other callbacks that are required to run after forgery protection.
If you want `protect_from_forgery` callbacks to always run first, regardless of
position they are called in your application, then you can add `prepend: true`
to your `protect_from_forgery` call.
Example:
```ruby
protect_from_forgery prepend: true
```
• Loading branch information...
eileencodes committed Dec 7, 2015
1 parent ba1bfa7 commit 39794037817703575c35a75f1961b01b83791191
View
@@ -1,3 +1,26 @@
* Change the `protect_from_forgery` prepend default to `false`
Per this comment
https://github.com/rails/rails/pull/18334#issuecomment-69234050 we want
`protect_from_forgery` to default to `prepend: false`.
`protect_from_forgery` will now be insterted into the callback chain at the
point it is called in your application. This is useful for cases where you
want to `protect_from_forgery` after you perform required authentication
callbacks or other callbacks that are required to run after forgery protection.
If you want `protect_from_forgery` callbacks to always run first, regardless of
position they are called in your application then you can add `prepend: true`
to your `protect_from_forgery` call.
Example:
```ruby
protect_from_forgery prepend: true
```
* Eileen M. Uchitelle*
* In url_for, never append a question mark to the URL when the query string
is empty anyway. (It used to do that when called like `url_for(controller:
'x', action: 'y', q: {})`.)
@@ -102,21 +102,21 @@ module ClassMethods
#
# Valid Options:
#
# * <tt>:only/:except</tt> - Only apply forgery protection to a subset of actions. Like <tt>only: [ :create, :create_all ]</tt>.
# * <tt>:only/:except</tt> - Only apply forgery protection to a subset of actions. For example <tt>only: [ :create, :create_all ]</tt>.
# * <tt>:if/:unless</tt> - Turn off the forgery protection entirely depending on the passed Proc or method reference.
# * <tt>:prepend</tt> - By default, the verification of the authentication token is added to the front of the
# callback chain. If you need to make the verification depend on other callbacks, like authentication methods
# (say cookies vs OAuth), this might not work for you. Pass <tt>prepend: false</tt> to just add the
# verification callback in the position of the protect_from_forgery call. This means any callbacks added
# before are run first.
# * <tt>:prepend</tt> - By default, the verification of the authentication token will be added at the position of the
# protect_from_forgery call in your application. This means any callbacks added before are run first. This is useful
# when you want your forgery protection to depend on other callbacks, like authentication methods (Oauth vs Cookie auth).
#
# If you need to add verification to the beginning of the callback chain, use <tt>prepend: true</tt>.
# * <tt>:with</tt> - Set the method to handle unverified request.
#
# Valid unverified request handling methods are:
# * <tt>:exception</tt> - Raises ActionController::InvalidAuthenticityToken exception.
# * <tt>:reset_session</tt> - Resets the session.
# * <tt>:null_session</tt> - Provides an empty session during request but doesn't reset it completely. Used as default if <tt>:with</tt> option is not specified.
def protect_from_forgery(options = {})
options = options.reverse_merge(prepend: true)
options = options.reverse_merge(prepend: false)
self.forgery_protection_strategy = protection_method_class(options[:with] || :null_session)
self.request_forgery_protection_token ||= :authenticity_token
@@ -540,10 +540,10 @@ def test_verify_authenticity_token_is_not_prepended
assert_equal(expected_callback_order, @controller.called_callbacks)
end
def test_verify_authenticity_token_is_prepended_by_default
def test_verify_authenticity_token_is_not_prepended_by_default
@controller = PrependDefaultController.new
get :index
expected_callback_order = ["verify_authenticity_token", "custom_action"]
expected_callback_order = ["custom_action", "verify_authenticity_token"]
assert_equal(expected_callback_order, @controller.called_callbacks)
end
end
0 comments on commit 3979403
Please sign in to comment.
|
__label__pos
| 0.993419 |
Installation guide
There is a newer version of this article under: https://github.com/redaxmedia/redaxscript/wiki/Installation-guide
Wizard
1. Download and unpack the latest Redaxscript package
2. Upload all files to your webspace
3. Grant write permission on config.php by setting chmod to 666
4. Execute install.php and follow the instructions
5. Revoke write permissions on config.php by setting chmod to 444
6. Delete install.php from your webspace
Manual
1. Download and unpack the latest Redaxscript and SQL package
2. Upload needed files to your webspace, except install.php
3. Login to your database and import the alternate SQL file
4. Edit config.php to setup your database connection
5. You now can login with user and password: admin
Upgrade
1. Download and unpack the latest Redaxscript and SQL package
2. Backup your files and database
3. Login to your database and import the upgrade SQL file
4. Delete deprecated files from your webspace, except config.php
5. Upload latest files to your webspace, except install.php
Powered by Redaxscript 2.1.0 • Design and realization by Jörg Steinhauer & Henry Ruhs
|
__label__pos
| 0.979641 |
Stack Overflow is a community of 4.7 million programmers, just like you, helping each other.
Join them; it only takes a minute:
Sign up
Join the Stack Overflow community to:
1. Ask programming questions
2. Answer and help your peers
3. Get recognized for your expertise
I've tried a few different things, and this is as close I have been able to get:
<script type="text/javascript" charset="utf-8">
$(function () {
new Highcharts.Chart({
chart: { renderTo: 'orders_chart' },
title: { text: 'Orders by Day' },
xAxis: { type: 'datetime' },
yAxis: {
title: { text: 'Dollars' }
},
tooltip: {
formatter: function () {
return Highcharts.dateFormat("%B %e %Y", this.x) + ': ' +
'$' + Highcharts.numberFormat(this.y, 2);
}
},
series: [{
pointInterval: <%= 1.day * 1000 %>,
pointStart: <%= 0.days.ago.at_midnight.to_i * 1000 %>,
data: <%= @daily_count[0] %>
}]
});
});
</script>
The problem is specifically: data: <%= @daily_count[0] %> this currently gives me one datapoint. I've tried just <%= @daily_count %> but that doesn't work. What I need is a way to put an array, and specifically [daily_count[0], daily_count[1]...] into data.
share|improve this question
what is the content of @daily_count – PriteshJ Aug 23 '12 at 19:00
It's an array that might look like this [1,2,3,8,2..]. – Noah Clark Aug 23 '12 at 19:10
up vote 3 down vote accepted
try using to_json
data: <%= @daily_count.to_json %>
share|improve this answer
1
Thanks, but it doesn't work! – Noah Clark Aug 23 '12 at 19:11
I'm pretty sure it needs to be <%= ... %> as other examples use that format. If I remove the = I get the following error: Uncaught SyntaxError: Unexpected token } Nothing at all in the rails error logs or javascript console otherwise. – Noah Clark Aug 23 '12 at 19:30
@NoahClark, what is the value of <%= @daily_count.to_json %> after the page loads? – PriteshJ Aug 23 '12 at 19:34
data: [ <% @daily_count.each do |d|%><%= d.inspect %><%end%>] makes it work. – Noah Clark Aug 23 '12 at 19:38
1
glad that help :) – PriteshJ Aug 23 '12 at 19:41
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.906833 |
Commit 3ba66e74 authored by Emmanuel Christophe's avatar Emmanuel Christophe
Browse files
STYLE: removing trailing spaces (Code)
parent fd444d90
......@@ -10,8 +10,8 @@
See OTBCopyright.txt for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notices for more information.
=========================================================================*/
......@@ -35,7 +35,7 @@ namespace otb
* \ingroup ImageFilters
*/
template <class TInputImage, class TOutputImage>
class ITK_EXPORT BSplineDecompositionImageFilter :
class ITK_EXPORT BSplineDecompositionImageFilter :
public itk::ImageToImageFilter<TInputImage,TOutputImage>
{
public:
......@@ -47,7 +47,7 @@ public:
/** Run-time type information (and related methods). */
itkTypeMacro(BSplineDecompositionImageFilter, ImageToImageFilter);
/** New macro for creation of through a Smart Pointer */
itkNewMacro( Self );
......@@ -115,7 +115,7 @@ private:
/** Copies a vector of data from m_Scratch to the Coefficients image. */
void CopyScratchToCoefficients( OutputLinearIterator & );
};
......
......@@ -10,8 +10,8 @@
See OTBCopyright.txt for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notices for more information.
=========================================================================*/
......@@ -48,7 +48,7 @@ template <class TInputImage, class TOutputImage>
void
BSplineDecompositionImageFilter<TInputImage, TOutputImage>
::PrintSelf(
std::ostream& os,
std::ostream& os,
itk::Indent indent) const
{
Superclass::PrintSelf( os, indent );
......@@ -61,13 +61,13 @@ template <class TInputImage, class TOutputImage>
bool
BSplineDecompositionImageFilter<TInputImage, TOutputImage>
::DataToCoefficients1D()
{
{
// See Unser, 1993, Part II, Equation 2.5,
// or Unser, 1999, Box 2. for an explaination.
// See Unser, 1993, Part II, Equation 2.5,
// or Unser, 1999, Box 2. for an explaination.
double c0 = 1.0;
double c0 = 1.0;
if (m_DataLength[m_IteratorDirection] == 1) //Required by mirror boundaries
{
return false;
......@@ -76,30 +76,30 @@ BSplineDecompositionImageFilter<TInputImage, TOutputImage>
// Compute overall gain
for (int k = 0; k < m_NumberOfPoles; k++)
{
// Note for cubic splines lambda = 6
// Note for cubic splines lambda = 6
c0 = c0 * (1.0 - m_SplinePoles[k]) * (1.0 - 1.0 / m_SplinePoles[k]);
}
// apply the gain
// apply the gain
for (unsigned int n = 0; n < m_DataLength[m_IteratorDirection]; n++)
{
m_Scratch[n] *= c0;
}
// loop over all poles
for (int k = 0; k < m_NumberOfPoles; k++)
// loop over all poles
for (int k = 0; k < m_NumberOfPoles; k++)
{
// causal initialization
// causal initialization
this->SetInitialCausalCoefficient(m_SplinePoles[k]);
// causal recursion
// causal recursion
for (unsigned int n = 1; n < m_DataLength[m_IteratorDirection]; n++)
{
m_Scratch[n] += m_SplinePoles[k] * m_Scratch[n - 1];
}
// anticausal initialization
// anticausal initialization
this->SetInitialAntiCausalCoefficient(m_SplinePoles[k]);
// anticausal recursion
// anticausal recursion
for ( int n = m_DataLength[m_IteratorDirection] - 2; 0 <= n; n--)
{
m_Scratch[n] = m_SplinePoles[k] * (m_Scratch[n + 1] - m_Scratch[n]);
......@@ -132,7 +132,7 @@ BSplineDecompositionImageFilter<TInputImage, TOutputImage>
::SetPoles()
{
/* See Unser, 1997. Part II, Table I for Pole values */
// See also, Handbook of Medical Imaging, Processing and Analysis, Ed. Isaac N. Bankman,
// See also, Handbook of Medical Imaging, Processing and Analysis, Ed. Isaac N. Bankman,
// 2000, pg. 416.
switch (m_SplineOrder)
{
......@@ -195,7 +195,7 @@ BSplineDecompositionImageFilter<TInputImage, TOutputImage>
{
/* accelerated loop */
sum = m_Scratch[0]; // verify this
for (unsigned int n = 1; n < horizon; n++)
for (unsigned int n = 1; n < horizon; n++)
{
sum += zn * m_Scratch[n];
zn *= z;
......@@ -224,11 +224,11 @@ void
BSplineDecompositionImageFilter<TInputImage, TOutputImage>
::SetInitialAntiCausalCoefficient(double z)
{
// this initialization corresponds to mirror boundaries
// this initialization corresponds to mirror boundaries
/* See Unser, 1999, Box 2 for explaination */
// Also see erratum at http://bigwww.epfl.ch/publications/unser9902.html
m_Scratch[m_DataLength[m_IteratorDirection] - 1] =
(z / (z * z - 1.0)) *
(z / (z * z - 1.0)) *
(z * m_Scratch[m_DataLength[m_IteratorDirection] - 2] + m_Scratch[m_DataLength[m_IteratorDirection] - 1]);
}
......@@ -266,7 +266,7 @@ BSplineDecompositionImageFilter<TInputImage, TOutputImage>
// Perform 1D BSpline calculations
this->DataToCoefficients1D();
// Copy scratch back to coefficients.
// Brings us back to the end of the line we were working on.
CIterator.GoToBeginOfLine();
......@@ -303,7 +303,7 @@ BSplineDecompositionImageFilter<TInputImage, TOutputImage>
++inIt;
++outIt;
}
}
......
......@@ -10,8 +10,8 @@
See OTBCopyright.txt for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notices for more information.
=========================================================================*/
......@@ -41,11 +41,11 @@ namespace otb
* \ingroup ImageFunctions
*/
template <
class TImageType,
class TImageType,
class TCoordRep = double,
class TCoefficientType = double >
class ITK_EXPORT BSplineInterpolateImageFunction :
public itk::InterpolateImageFunction<TImageType,TCoordRep>
class ITK_EXPORT BSplineInterpolateImageFunction :
public itk::InterpolateImageFunction<TImageType,TCoordRep>
{
public:
/** Standard class typedefs. */
......@@ -57,7 +57,7 @@ public:
/** Run-time type information (and related methods). */
itkTypeMacro(BSplineInterpolateImageFunction, InterpolateImageFunction);
/** New macro for creation of through a Smart Pointer */
itkNewMacro( Self );
......@@ -87,25 +87,25 @@ typedef typename InputImageType::RegionType RegionType;
/** Internal Coefficient typedef support */
typedef TCoefficientType CoefficientDataType;
typedef itk::Image<CoefficientDataType,
typedef itk::Image<CoefficientDataType,
itkGetStaticConstMacro(ImageDimension)
> CoefficientImageType;
/** Define filter for calculating the BSpline coefficients */
typedef otb::BSplineDecompositionImageFilter<TImageType, CoefficientImageType>
typedef otb::BSplineDecompositionImageFilter<TImageType, CoefficientImageType>
CoefficientFilter;
typedef typename CoefficientFilter::Pointer CoefficientFilterPointer;
/** Evaluate the function at a ContinuousIndex position.
*
* Returns the B-Spline interpolated image intensity at a
* Returns the B-Spline interpolated image intensity at a
* specified point position. No bounds checking is done.
* The point is assume to lie within the image buffer.
*
* ImageFunction::IsInsideBuffer() can be used to check bounds before
* calling the method. */
virtual OutputType EvaluateAtContinuousIndex(
const ContinuousIndexType & index ) const;
virtual OutputType EvaluateAtContinuousIndex(
const ContinuousIndexType & index ) const;
/** Derivative typedef support */
typedef itk::CovariantVector<OutputType,
......@@ -113,13 +113,13 @@ typedef typename InputImageType::RegionType RegionType;
> CovariantVectorType;
CovariantVectorType EvaluateDerivative( const PointType & point ) const
{
{
ContinuousIndexType index;
this->GetInputImage()->TransformPhysicalPointToContinuousIndex( point, index );
return ( this->EvaluateDerivativeAtContinuousIndex( index ) );
}
}
CovariantVectorType EvaluateDerivativeAtContinuousIndex(
CovariantVectorType EvaluateDerivativeAtContinuousIndex(
const ContinuousIndexType & x ) const;
......@@ -133,7 +133,7 @@ typedef typename InputImageType::RegionType RegionType;
virtual void SetInputImage(const TImageType * inputData);
/** Update coefficients filter. Coefficient filter are computed over the buffered
/** Update coefficients filter. Coefficient filter are computed over the buffered
region of the input image. */
virtual void UpdateCoefficientsFilter(void);
......@@ -148,34 +148,34 @@ protected:
typename TImageType::SizeType m_DataLength; // Image size
unsigned int m_SplineOrder; // User specified spline order (3rd or cubic is the default)
typename CoefficientImageType::ConstPointer m_Coefficients; // Spline coefficients
typename CoefficientImageType::ConstPointer m_Coefficients; // Spline coefficients
private:
BSplineInterpolateImageFunction( const Self& ); //purposely not implemented
/** Determines the weights for interpolation of the value x */
void SetInterpolationWeights( const ContinuousIndexType & x,
const vnl_matrix<long> & EvaluateIndex,
vnl_matrix<double> & weights,
void SetInterpolationWeights( const ContinuousIndexType & x,
const vnl_matrix<long> & EvaluateIndex,
vnl_matrix<double> & weights,
unsigned int splineOrder ) const;
/** Determines the weights for the derivative portion of the value x */
void SetDerivativeWeights( const ContinuousIndexType & x,
const vnl_matrix<long> & EvaluateIndex,
vnl_matrix<double> & weights,
void SetDerivativeWeights( const ContinuousIndexType & x,
const vnl_matrix<long> & EvaluateIndex,
vnl_matrix<double> & weights,
unsigned int splineOrder ) const;
/** Precomputation for converting the 1D index of the interpolation neighborhood
/** Precomputation for converting the 1D index of the interpolation neighborhood
* to an N-dimensional index. */
void GeneratePointsToIndex( );
/** Determines the indicies to use give the splines region of support */
void DetermineRegionOfSupport( vnl_matrix<long> & evaluateIndex,
const ContinuousIndexType & x,
void DetermineRegionOfSupport( vnl_matrix<long> & evaluateIndex,
const ContinuousIndexType & x,
unsigned int splineOrder ) const;
/** Set the indicies in evaluateIndex at the boundaries based on mirror
/** Set the indicies in evaluateIndex at the boundaries based on mirror
* boundary conditions. */
void ApplyMirrorBoundaryConditions(vnl_matrix<long> & evaluateIndex,
void ApplyMirrorBoundaryConditions(vnl_matrix<long> & evaluateIndex,
unsigned int splineOrder) const;
......@@ -186,7 +186,7 @@ private:
CoefficientFilterPointer m_CoefficientFilter;
RegionType m_CurrentBufferedRegion;
};
} // namespace otb
......
......@@ -10,8 +10,8 @@
See OTBCopyright.txt for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notices for more information.
=========================================================================*/
......@@ -51,7 +51,7 @@ template <class TImageType, class TCoordRep, class TCoefficientType>
void
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::PrintSelf(
std::ostream& os,
std::ostream& os,
itk::Indent indent) const
{
Superclass::PrintSelf( os, indent );
......@@ -60,7 +60,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
}
template <class TImageType, class TCoordRep, class TCoefficientType>
void
void
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::UpdateCoefficientsFilter(void)
{
......@@ -72,7 +72,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
m_CurrentBufferedRegion =m_CoefficientFilter->GetInput()->GetBufferedRegion();
}
template <class TImageType, class TCoordRep, class TCoefficientType>
void
void
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::SetInputImage(const TImageType * inputData)
{
......@@ -82,7 +82,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
// the Coefficient Filter requires that the spline order and the input data be set.
// TODO: We need to ensure that this is only run once and only after both input and
// spline order have been set. Should we force an update after the
// spline order have been set. Should we force an update after the
// splineOrder has been set also?
UpdateCoefficientsFilter();
......@@ -101,7 +101,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
template <class TImageType, class TCoordRep, class TCoefficientType>
void
void
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::SetSplineOrder(unsigned int SplineOrder)
{
......@@ -117,13 +117,13 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
for (unsigned int n=0; n < ImageDimension; n++)
{
m_MaxNumberInterpolationPoints *= ( m_SplineOrder + 1);
}
}
this->GeneratePointsToIndex( );
}
template <class TImageType, class TCoordRep, class TCoefficientType>
typename
typename
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::OutputType
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
......@@ -132,8 +132,8 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
//UpdateCoefficientsFilter();
vnl_matrix<long> EvaluateIndex(ImageDimension, ( m_SplineOrder + 1 ));
// compute the interpolation indexes
this->DetermineRegionOfSupport(EvaluateIndex, x, m_SplineOrder);
// compute the interpolation indexes
this->DetermineRegionOfSupport(EvaluateIndex, x, m_SplineOrder);
// Determine weights
vnl_matrix<double> weights(ImageDimension, ( m_SplineOrder + 1 ));
......@@ -145,8 +145,8 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
// Modify EvaluateIndex at the boundaries using mirror boundary conditions
this->ApplyMirrorBoundaryConditions(EvaluateIndex, m_SplineOrder);
// perform interpolation
// perform interpolation
double interpolated = 0.0;
IndexType coefficientIndex;
// Step through eachpoint in the N-dimensional interpolation cube.
......@@ -167,7 +167,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
// m_Coefficients cube.
interpolated += w * m_Coefficients->GetPixel(coefficientIndex);
}
/* double interpolated = 0.0;
IndexType coefficientIndex;
// Step through eachpoint in the N-dimensional interpolation cube.
......@@ -184,16 +184,16 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
}
interpolated += w * m_Coefficients->GetPixel(coefficientIndex);
}
}
}
*/
return(interpolated);
}
template <class TImageType, class TCoordRep, class TCoefficientType>
typename
typename
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
:: CovariantVectorType
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
......@@ -202,7 +202,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
UpdateCoefficientsFilter();
vnl_matrix<long> EvaluateIndex(ImageDimension, ( m_SplineOrder + 1 ));
// compute the interpolation indexes
// compute the interpolation indexes
// TODO: Do we need to revisit region of support for the derivatives?
this->DetermineRegionOfSupport(EvaluateIndex, x, m_SplineOrder);
......@@ -215,7 +215,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
// Modify EvaluateIndex at the boundaries using mirror boundary conditions
this->ApplyMirrorBoundaryConditions(EvaluateIndex, m_SplineOrder);
// Calculate derivative
CovariantVectorType derivativeValue;
double tempValue;
......@@ -225,7 +225,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
derivativeValue[n] = 0.0;
for (unsigned int p = 0; p < m_MaxNumberInterpolationPoints; p++)
{
tempValue = 1.0 ;
tempValue = 1.0 ;
for (unsigned int n1 = 0; n1 < ImageDimension; n1++)
{
//coefficientIndex[n1] = EvaluateIndex[n1][sp];
......@@ -238,7 +238,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
}
else
{
tempValue *= weights[n1][ m_PointsToIndex[p][n1] ];
tempValue *= weights[n1][ m_PointsToIndex[p][n1] ];
}
}
derivativeValue[n] += m_Coefficients->GetPixel(coefficientIndex) * tempValue ;
......@@ -247,21 +247,21 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
}
return(derivativeValue);
}
template <class TImageType, class TCoordRep, class TCoefficientType>
void
void
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::SetInterpolationWeights( const ContinuousIndexType & x, const vnl_matrix<long> & EvaluateIndex,
::SetInterpolationWeights( const ContinuousIndexType & x, const vnl_matrix<long> & EvaluateIndex,
vnl_matrix<double> & weights, unsigned int splineOrder ) const
{
// For speed improvements we could make each case a separate function and use
// function pointers to reference the correct weight order.
// Left as is for now for readability.
double w, w2, w4, t, t0, t1;
switch (splineOrder)
{
case 3:
......@@ -346,13 +346,13 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
throw err;
break;
}
}
template <class TImageType, class TCoordRep, class TCoefficientType>
void
void
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::SetDerivativeWeights( const ContinuousIndexType & x, const vnl_matrix<long> & EvaluateIndex,
::SetDerivativeWeights( const ContinuousIndexType & x, const vnl_matrix<long> & EvaluateIndex,
vnl_matrix<double> & weights, unsigned int splineOrder ) const
{
// For speed improvements we could make each case a separate function and use
......@@ -362,10 +362,10 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
// Left as is for now for readability.
double w, w1, w2, w3, w4, w5, t, t0, t1, t2;
int derivativeSplineOrder = (int) splineOrder -1;
switch (derivativeSplineOrder)
{
// Calculates B(splineOrder) ( (x + 1/2) - xi) - B(splineOrder -1) ( (x - 1/2) - xi)
case -1:
// Why would we want to do this?
......@@ -390,11 +390,11 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
weights[n][0] = 0.0 - w1;
weights[n][1] = w1 - w;
weights[n][2] = w;
weights[n][2] = w;
}
break;
case 2:
for (unsigned int n = 0; n < ImageDimension; n++)
{
w = x[n] + .5 - (double)EvaluateIndex[n][2];
......@@ -405,11 +405,11 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
weights[n][0] = 0.0 - w1;
weights[n][1] = w1 - w2;
weights[n][2] = w2 - w3;
weights[n][3] = w3;
weights[n][3] = w3;
}
break;
case 3:
for (unsigned int n = 0; n < ImageDimension; n++)
{
w = x[n] + 0.5 - (double)EvaluateIndex[n][2];
......@@ -449,7 +449,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
weights[n][5] = w5;
}
break;
default:
// SplineOrder not implemented yet.
itk::ExceptionObject err(__FILE__, __LINE__);
......@@ -458,7 +458,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
throw err;
break;
}
}
......@@ -491,13 +491,13 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
template <class TImageType, class TCoordRep, class TCoefficientType>
void
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::DetermineRegionOfSupport( vnl_matrix<long> & evaluateIndex,
const ContinuousIndexType & x,
::DetermineRegionOfSupport( vnl_matrix<long> & evaluateIndex,
const ContinuousIndexType & x,
unsigned int splineOrder ) const
{
{
long indx;
// compute the interpolation indexes
// compute the interpolation indexes
for (unsigned int n = 0; n< ImageDimension; n++)
{
if (splineOrder & 1) // Use this index calculation for odd splineOrder
......@@ -512,8 +512,8 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
}
}
else // Use this index calculation for even splineOrder
{
{
indx = (long)vcl_floor((float)(x[n] + 0.5)) - splineOrder / 2;
//std::cout<<"x: "<<x<<std::endl;
//std::cout<<"splineOrder: "<<splineOrder<<std::endl;
......@@ -529,16 +529,16 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
template <class TImageType, class TCoordRep, class TCoefficientType>
void
BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
::ApplyMirrorBoundaryConditions(vnl_matrix<long> & evaluateIndex,
::ApplyMirrorBoundaryConditions(vnl_matrix<long> & evaluateIndex,
unsigned int splineOrder) const
{
for (unsigned int n = 0; n < ImageDimension; n++)
{
long dataLength = m_DataLength[n];
long dataOffset = m_CurrentBufferedRegion.GetIndex()[n];
// apply the mirror boundary conditions
// apply the mirror boundary conditions
// TODO: We could implement other boundary options beside mirror
if (m_DataLength[n] == 1)
{
......@@ -561,7 +561,7 @@ BSplineInterpolateImageFunction<TImageType,TCoordRep,TCoefficientType>
}
}
}
}
......
......@@ -10,8 +10,8 @@
See OTBCopyright.txt for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notices for more information.
=========================================================================*/
......@@ -26,12 +26,12 @@ namespace otb
/**
* \class BinaryImageMinimalBoundingRegionCalculator
* \brief This class compute the smallest region of the image
* containing every pixel with the foreground value.
* containing every pixel with the foreground value.
*
* This class is used for instance in the RCC8 calculator filter,
|
__label__pos
| 0.944345 |
NVL smarts
Oracle appears to have a new smart means for handling NVL around bind variables.
A common issue for reports and any query where users can pass parameters is how to handle the "optional" parameter. Here's a typical example:
Table CP can be queried where column X is equal to optional parameter P.
Should we code:
select *
from CP
where ( X = :P or :P is null)
OR
select *
from CP
where X = NVL(:P,X)
(As always) the best way to find this out, is with a test case - first some test data
SQL> create table CP ( x number not null, y number);
Table created.
SQL> insert into cp select rownum,rownum
2 from all_objects
3 where rownum < 20000;
19999 rows created.
SQL> commit;
Commit complete.
SQL> create index cp1 on cp (x );
Index created.
SQL> analyze table cp estimate statistics;
Table analyzed.
Now lets see what happens with each scenario
SQL> variable p number
SQL> set autotrace traceonly explain
SQL> select *
2 from cp
3 where ( x = :p or :p is null )
4 /
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=33 Card=1001 Bytes=68068)
1 0 TABLE ACCESS (FULL) OF 'CP' (Cost=33 Card=1001 Bytes=68068)
SQL> select *
2 from cp
3 where x = nvl(:p,x);
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=34 Card=2 Bytes=136)
1 0 CONCATENATION
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'CP' (Cost=1 Card=1 Bytes=68)
4 1 FILTER
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'CP' (Cost=1 Card=1 Bytes=68)
6 5 INDEX (RANGE SCAN) OF 'CP1' (NON-UNIQUE) (Cost=1 Card=1)
The second one possibly looks worse because it looks like a full scan and an index scan, but look at the cost for the full scan. This looks a little odd - the cost is "1" whereas the previous explain plan thinks the full scan costs "33". In the second case, what Oracle is doing is some smarts, where it will defer the decision on the whether to do the full scan or the index scan based on whether the parameter is actually provided or not. This apparently has come in somewhere around the 8i stage, it does not appear to do this on 8.0. We can prove this "smart-choice" with some timing results:
SQL> set autotrace traceonly statistics
SQL> exec :p := 123;
PL/SQL procedure successfully completed.
SQL> select *
2 from cp
3 where x = nvl(:p,x);
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
3 physical reads
0 redo size
307 bytes sent via SQL*Net to client
214 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
So we used the index to quickly get to the row, but when we null out the parameter
SQL> exec :p := null;
PL/SQL procedure successfully completed.
SQL> select *
2 from cp
3 where x = nvl(:p,x);
19999 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
1537 consistent gets
210 physical reads
0 redo size
623623 bytes sent via SQL*Net to client
92189 bytes received via SQL*Net from client
1336 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
19999 rows processed
And here we did the full scan (the best option because no parameter was given).
You'll find that if you use the other syntax, you will get a full tablescan every time. The reason for this is that the queries actually could return a different result. It all all depends if the column being queried does not contain any nulls. If the column could contain nulls, then of course, the check:
where x = nvl(:p,x)
will not pick up any rows for which X is null (whereas the other query will)
Moral of the story
a) Ensure that any columns that will not be null are defined as such in the database
b) Use the NVL clause on those columns to handle optional parameters
c) If the column can contain nulls, if possible, dissolve the SQL into two queries:
if :p is null then
select *
from CP
else
select *
from CP
where x = :p
end if;
to at least get the index benefit when the parameter is actually given.
|
__label__pos
| 0.800143 |
Rigid Body
What is a rigid body?
A rigid body is one that is directly controlled by the physics engine in order to simulate the behavior of physical objects. In order to define the shape of the body, it must have one or more Shape objects assigned. Note that setting the position of these shapes will affect the body’s center of mass.
How to control rigid body
A rigid body’s behavior can be altered by setting its properties such as friction, mass, bounce, etc. These properties can be set in the Inspector or via code. See RigidBody for the full list of properties and their effects.
There are several ways to control a rigid body’s movement, depending on your desired application.
If you only need to place a rigid body once, for example to set its initial location, you can use the methods provided by the Spatial node, such as set_global_transform() or look_at(). However, these functions can not be called every frame or the physics engine will not be able to correctly simulate the body’s state. As an example, consider a rigid body that you want to rotate so that it points towards another object. A common mistake when implementing this kind of behavior is to use look_at() every frame, which breaks the physics simulation. Below, we’ll demonstrate how to implement this correctly.
The fact that you can’t use set_global_transform() or look_at() methods doesn’t mean that you can’t have full control of a rigid body. Instead, you can control it by using the _integrate_forces() callback. In this function, you can add forces, apply impulses, or set the velocity in order to achieve any movement you desire.
Look at function
As described above, using the Spatial node’s look_at() function can’t be used each frame to follow a target. Here is a custom look_at() function that will work reliably with rigid bodies:
extends RigidBody
func look_follow(state, current_transform, target_position):
var up_dir = Vector3(0, 1, 0)
var cur_dir = current_transform.basis.xform(Vector3(0, 0, 1))
var target_dir = (target_position - current_transform.origin).normalized()
var rotation_angle = acos(cur_dir.x) - acos(target_dir.x)
state.set_angular_velocity(up_dir * (rotation_angle / state.get_step()))
func _integrate_forces(state):
var target_position = $my_target_spatial_node.get_global_transform().origin
look_follow(state, get_global_transform(), target_position)
class Body : RigidBody
{
private void lookFollow(PhysicsDirectBodyState state, Transform currentTransform, Vector3 targetPosition)
{
var upDir = new Vector3(0, 1, 0);
var curDir = currentTransform.basis.Xform(new Vector3(0, 0, 1));
var targetDir = (targetPosition - currentTransform.origin).Normalized();
var rotationAngle = Mathf.Acos(curDir.x) - Mathf.Acos(targetDir.x);
state.SetAngularVelocity(upDir * (rotationAngle / state.GetStep()));
}
public override void _IntegrateForces(PhysicsDirectBodyState state)
{
var targetPosition = (GetNode("my_target_spatial_node") as Spatial).GetGlobalTransform().origin;
lookFollow(state, GetGlobalTransform(), targetPosition);
}
}
This function uses the rigid body’s set_angular_velocity() method to rotate the body. It first calculates the difference between the current and desired angle and then adds the velocity needed to rotate by that amount in one frame’s time.
Note
This script will not work with rigid bodies in character mode because in this mode the body’s rotation is locked. In this case, you would have to rotate the attached mesh node instead using the standard Spatial methods.
|
__label__pos
| 0.985625 |
How to rename batches of files using .NET Regular Expressions
by Klaus Graefensteiner 9. June 2008 06:16
Introduction
The tool that I am presenting here renames batches of files that have some kind of numerical index as part of their file name. It can rename the text before the index, it can shift the index numbers, give the files a new extension and add or remove leading zeros to and from the numerical index part of the file name. It uses Regular Extensions to parse the file names.
Thumbnail images of the scanned pages of my 1988 year book
Figure 1: Thumbnail images of the scanned pages of my 1988 year book
The War Story
I was trying the other day to scan in an old high school year book. I used a Canon CanoScan LiDE 600F scanner (0302B002). The Toolbox software that came with the scanner indexes the file names automatically. If you call your project e.g. “Abizeitung” then the name of the file of the first scan is called “Abizeitung_0001.jpg”, the name of the 10th scan is called “Abizeitung_0010.jpg” and so on. My goal was to have the index in the file name match the page number of the scanned page. After scanning 38 pages I made a mistake. I scanned the sheet with page 39 and page 40 twice. At this point the file index and the pages got out of sync. Page 41 was now “Abizeitung_0043.jpg”. The index of the files of page 41 and higher was shifted by 2. Just later when I was about to scan pages 120 and 121 I noticed that I screwed up the file numbering and deleted the duplicate scans. I deleted “Abizeitung_0039.jpg” and “Abizeitung_0040.jpg” and continued scanning. This time the CanoScan Toolbox software outsmarted me again, by using the now two empty file name slots to store the scans of pages 120 and 121. The following table shows the full extent of the scan screw up:
File Name Index Page Number Comment
1 1
2 2
... ...
37 37
38 38
39 37, 120
First I scanned page 37 twice, which created file index 39, then later when I discovered that page number and file index are out of synch I deleted the file with index 39. But the scanning tool used the empty file index slot and filled it up with the scan of page number 120.
40 38, 121
First I scanned page 38 twice, which created file index 40, then later when I discovered that page number and file index are out of synch I deleted the file with index 40. But the scanning tool used the empty file index slot and filled it up with the scan of page number 121.
41 39
42 40
... ...
119 117
120 118
121 119
The scan of page 120 filled up file index slot 39 and the scan of pate 121 filled up file index slot 40.
122 122
Now the page number and the file index are in synch again.
... ...
136 136
PowerShell is not EasyShell
I thought "no problem, just write a quick PowerShell script". Fairly quickly I found out that PowerShell is not EasyShell and actually requires a little bit of learning. I decided to first start reading Bruce Payette's book Windows PowerShell in Action and revisit this idea once I am finised with the book.
C Sharp is EasyDev
For the quick fix I decided to create little C# WinForms application using regular expressions and a SortedDictionary<int, FileInfo> to straighten out the stray file names.
Source code
The following paragraph shows the source code file that does the actual file name parsing and renaming work.
1: using System;
2: using System.Collections.Generic;
3: using System.ComponentModel;
4: using System.Data;
5: using System.Drawing;
6: using System.Text;
7: using System.Windows.Forms;
8: using System.Text.RegularExpressions;
9: using System.IO;
10:
11: namespace RenameFiles
12: {
13: public partial class Form1 : Form
14: {
15: public Form1()
16: {
17: InitializeComponent();
18: button2.Enabled = false;
19: }
20:
21: private Regex FileNameRegex;
22: private void button1_Click(object sender, EventArgs e)
23: {
24: openFileDialog1.ShowDialog();
25: }
26:
27:
28: private FileInfo FirstFile;
29: private Match FileNameMatch;
30: private void openFileDialog1_FileOk(object sender, CancelEventArgs e)
31: {
32: if (!e.Cancel)
33: {
34: OpenFileDialog FirstFileDlg = sender as OpenFileDialog;
35: FirstFile = new FileInfo(FirstFileDlg.FileName);
36: FileNameRegex = new Regex(textBox1.Text, RegexOptions.Compiled | RegexOptions.IgnoreCase);
37: FileNameMatch = FileNameRegex.Match(FirstFile.Name);
38: if (FileNameMatch.Success)
39: {
40: textBox2.Text = FileNameMatch.Groups["NAME"].Value;
41: numericUpDown1.Value = Convert.ToInt32(FileNameMatch.Groups["INDEX"].Value);
42: textBox3.Text = FileNameMatch.Groups["EXT"].Value;
43:
44:
45: ChangeLabel();
46:
47: button2.Enabled = true;
48: numericUpDown2.Enabled = true;
49: numericUpDown1.Enabled = true;
50: numericUpDown3.Enabled = true;
51: numericUpDown3.Value = numericUpDown1.Value;
52: checkBox1.Enabled = true;
53: textBox2.Enabled = true;
54: textBox3.Enabled = true;
55: }
56: else
57: {
58: MessageBox.Show("Selected File doesn't match Regular Expression!");
59: }
60: }
61: }
62:
63: private void button2_Click(object sender, EventArgs e)
64: {
65: FileNameRegex = new Regex(textBox1.Text, RegexOptions.Compiled | RegexOptions.IgnoreCase);
66: DirectoryInfo Folder = FirstFile.Directory;
67:
68: Match m;
69: SortedDictionary<int, FileInfo> MatchedFiles = new SortedDictionary<int, FileInfo>();
70:
71: int Index = 0;
72: int MinIndex = (int)numericUpDown1.Value + 9999;
73: int MaxIndex = 0;
74: int ShiftIndexBy = (int)numericUpDown2.Value;
75:
76: foreach (FileInfo f in Folder.GetFiles())
77: {
78:
79: //Match file names
80: m = FileNameRegex.Match(f.Name);
81:
82: if (m.Success)
83: {
84: Index = Convert.ToInt32(m.Groups["INDEX"].Value);
85:
86: if (Index >= numericUpDown1.Value && Index <= numericUpDown3.Value)
87: {
88: try
89: {
90: MatchedFiles.Add(Index, f);
91: if (Index <= MinIndex) MinIndex = Index;
92: if (Index >= MaxIndex) MaxIndex = Index;
93: }
94: catch (Exception Ex)
95: {
96: MessageBox.Show(Ex.Message + "\n" + "Another match for "
97: + f.Name + " already exists in file list!", "Error while filtering files",
98: MessageBoxButtons.OK, MessageBoxIcon.Error);
99: }
100: }
101: }
102:
103: }
104:
105: MessageBox.Show(MatchedFiles.Count.ToString() + " files found\n" +
106: "First file index " + MinIndex.ToString() + "\n" +
107: "Last file index " + MaxIndex.ToString(), "Renaming files...",
108: MessageBoxButtons.OK, MessageBoxIcon.Information);
109:
110: if((MatchedFiles.Count == 0) || ( ShiftIndexBy == 0
111: && textBox2.Text == FileNameMatch.Groups["NAME"].Value
112: && checkBox1.Checked == true
113: && textBox3.Text == FileNameMatch.Groups["EXT"].Value ))
114: {
115: MessageBox.Show("There is no file to be renamed. DONE!");
116: }
117:
118: if (MatchedFiles.Count > 0 && ShiftIndexBy <= 0)
119: {
120: FileInfo fi;
121: for (int i = MinIndex; i <= MaxIndex; i++ )
122: {
123: if (MatchedFiles.TryGetValue(i, out fi))
124: {
125: string NewFileName = textBox2.Text
126: + (checkBox1.Checked ? String.Format("{0:0000}", i
127: + numericUpDown2.Value) : String.Format("{0}", i
128: + numericUpDown2.Value)) + textBox3.Text;
129: try
130: {
131: fi.MoveTo(NewFileName);
132: }
133: catch (Exception Ex)
134: {
135: MessageBox.Show(Ex.Message + "\n" + "Attempt to rename "
136: + fi.Name + " to " + NewFileName + " failed!",
137: "Error while moving files", MessageBoxButtons.OK, MessageBoxIcon.Error);
138: }
139: }
140: }
141: }
142:
143:
144:
145: if (MatchedFiles.Count > 0 && ShiftIndexBy > 0)
146: {
147: FileInfo fi;
148: for (int i = MaxIndex; i >= MinIndex; i--)
149: {
150: if (MatchedFiles.TryGetValue(i, out fi))
151: {
152: string NewFileName = textBox2.Text
153: + (checkBox1.Checked ? String.Format("{0:0000}", i
154: + numericUpDown2.Value) : String.Format("{0}", i
155: + numericUpDown2.Value)) + textBox3.Text;
156: try
157: {
158: fi.MoveTo(NewFileName);
159: }
160: catch (Exception Ex)
161: {
162: MessageBox.Show(Ex.Message + "\n" + "Attempt to rename "
163: + fi.Name + " to " + NewFileName + " failed!",
164: "Error while moving files", MessageBoxButtons.OK,
165: MessageBoxIcon.Error);
166: }
167: }
168: }
169: }
170: }
171:
172: private void numericUpDown1_ValueChanged(object sender, EventArgs e)
173: {
174: ChangeLabel();
175: }
176:
177: private void ChangeLabel()
178: {
179: label1.Text = "Renames all files in a folder from e.g. \""
180: + FileNameMatch.Groups["NAME"].Value
181: + String.Format("{0:0000}", (int)numericUpDown1.Value)
182: + FileNameMatch.Groups["EXT"].Value + "\" to \""
183: + (checkBox1.Checked ? textBox2.Text
184: + String.Format("{0:0000}", (Convert.ToInt32(FileNameMatch.Groups["INDEX"].Value)
185: + numericUpDown2.Value)) : textBox2.Text
186: + String.Format("{0}", (Convert.ToInt32(FileNameMatch.Groups["INDEX"].Value)
187: + numericUpDown2.Value))) + textBox3.Text +"\"";
188: }
189:
190: private void numericUpDown2_ValueChanged(object sender, EventArgs e)
191: {
192: ChangeLabel();
193: }
194:
195: private void textBox2_TextChanged(object sender, EventArgs e)
196: {
197: ChangeLabel();
198: }
199:
200: private void checkBox1_CheckedChanged(object sender, EventArgs e)
201: {
202: ChangeLabel();
203: }
204:
205: private void textBox3_TextChanged(object sender, EventArgs e)
206: {
207: ChangeLabel();
208: }
209:
210: private void Form1_Load(object sender, EventArgs e)
211: {
212:
213: }
214:
215: private void numericUpDown3_ValueChanged(object sender, EventArgs e)
216: {
217: ChangeLabel();
218: }
219:
220:
221: }
222: }
Download
The complete Visual Studio 2005 Solution can be downloaded here: RenameFiles_1_0.zip
Batch Rename Files in Action
The movie
The following movie shows the utility in action:
The step-by-step instructions
Step 1
First rename the files with the index 39 and 40 with new place holder name to prepare the next step.
Batch Rename Files First Step
Figure 2: Batch Rename Files Step 1
Step 2
Now rename files by decrementing the index by 2 starting with file index 41 and ending with file index 121.
Batch Rename Files Second Step
Figure 3: Batch Rename Files Step 2
Step 3
Now we need to move the two files from step 1 to the final destination.
Batch Rename Files Third Step
Figure 4: Batch Rename Files Step 3
Ausblick
Initially I just wanted to scan a year book and share it with my former class mates as PDF file, but I ended up creating a C# application for renaming numbered files. Onother classical case of "Vom Hundertsten Ins Tausendste!"
Tags: , , , , , ,
.NET Framework | Photoshop | PowerShell
Comments are closed
About Klaus Graefensteiner
I like the programming of machines.
Add to Google Reader or Homepage
LinkedIn FacebookTwitter View Klaus Graefensteiner's profile on Technorati
Klaus Graefensteiner
Klaus Graefensteiner
works as Developer In Test and is founder of the PowerShell Unit Testing Framework PSUnit. More...
Open Source Projects
PSUnit is a Unit Testing framwork for PowerShell. It is designed for simplicity and hosted by Codeplex.
BlogShell is The tool for lazy developers who like to automate the composition of blog content during the writing of a blog post. It is hosted by CodePlex.
Administration
About
Powered by:
BlogEngine.Net
Version: 1.6.1.0
License:
Creative Commons License
Copyright:
© Copyright 2015, Klaus Graefensteiner.
Disclaimer:
The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.
Theme design:
This blog theme was designed and is copyrighted 2015 by Klaus Graefensteiner
Rendertime:
Page rendered at 2/28/2015 12:46:36 PM (PST Pacific Standard Time UTC DST -7)
|
__label__pos
| 0.988321 |
Respected Sir/Madam
I need to find out the execution time of a fragment of code.
For that I am using
time_t
structure, but it gives time in seconds.
And the value of CLOCKS_PER_SEC in my comp is 1000000.
Will you please me in getting the time in milliseconds
Thanks & regards
Srishekh
123456 microseconds equals how many milliseconds?
It gives you the time in seconds and you need to know how many milliseconds? Well a millisecond is a thousandth of a second ... one thousand milliseconds in a second. Did I misunderstand your question??
Respected Sir/madam
You misunderstood my question.
When it takes time greater than or equal to 1 second it gives
the correct output,but when it takes time less than 1 second
it gives the 0 as the answer.
For example, if the code takes 0.00045 seconds, the output
must be 0.00045,
How to get such a precesion.
I maybe wrong, but I remember reading in a Visual Studio 6.0 book that the best resolution possible in it is 1 millisecond. So if that is the case you wont be able to measure 0.00045 seconds. It will come as 0.
PS.
Isn't the greeting "Respected Sir/madam" a bit out of date? It drives me nuts.
The functions in time.h are only accurate to 1 second -- so no matter how you divide it you still have only 1 second accuracy. You need to use other functions, such as clock(), or win32 api function GetTickCount(), or GetSystemTime() that provide more accuracy.
The functions in time.h are only accurate to 1 second -- so no matter how you divide it you still have only 1 second accuracy. You need to use other functions, such as clock(), or win32 api function GetTickCount(), or GetSystemTime() that provide more accuracy.
Do you know the resolution of those functions. As I said earlier I remember that Clock() had only a resolution of 1ms.
If you are in Unix:
setitimer() and getitimer() have microsecond resolution on POSIX-compliant systems.
Do you know about profiling and profilers?
I need to find out the execution time of a fragment of code.
You don't need finer resolution timing. If you want to measure the thickness of a piece of paper, would you buy an expensive microscope, or would you measure a stack of 200 pieces of paper and then divide by 200?
i don't think you can use the concept of 200 pages to measure time accurately.
let us suppose the width of a paper is 1/10 th of mm and you have a scale which can measure only upto a mm. suppose you end up measuring 209 pages with the scale, than you'll end up with a reading that'llgive you the width of each page as .95 = 20/209.
The problem becomes more acute in computing where instructions take around a nano sec to complete.
lots of platform specific api. On windows this works <windows.h>
GetTickCount() . gives the number of ms since midnight
The most precise unit is as precise as you can get, so Rashakil's analogy does not make sense to me. You can get less precise, but not more precise.
You don't need finer resolution timing. If you want to measure the thickness of a piece of paper, would you buy an expensive microscope, or would you measure a stack of 200 pieces of paper and then divide by 200?
In other words, run it a whole bunch of times until it runs to 10 seconds, then divide 10 by the number of times it ran. Now, of course, all that counting and looping will throw off your results -- after all, that code and all the calls to time() also take time. So you'll be off only by 5-15%.
i don't think you can use the concept of 200 pages to measure time accurately.
but you can wait 3 1/2 years to respond to a thread and divide that time by the number of total posts, for a rough estimate of the number of thread-posts per year.
amirite?
commented: That's the same as the number of pointless thread bumps per month :) +36
uRrite. Just another case of some turkey not caring that 100 pages in on a forum might be moot thread. Nothing better to do, obviously.
commented: And it wasn't even a good post either! +36
Be a part of the DaniWeb community
We're a friendly, industry-focused community of 1.18 million developers, IT pros, digital marketers, and technology enthusiasts learning and sharing knowledge.
|
__label__pos
| 0.935216 |
Russel Ledge Russel Ledge - 5 months ago 31
C++ Question
luabind: How to pass value from C++ to lua function by reference?
When programming on C++, you can do the following:
void byReference(int &y)
{
y = 5;
}
int main()
{
int x = 2; // x = 2
byReference(x); // x = 5
}
How to do the same using luabind?
Luabind docs says:
If you want to pass a parameter as a reference, you have to wrap it
with the
Boost.Ref
.
Like this:
int ret = call_function(L, "fun", boost::ref(val));
But when I'm trying to do this:
#include <iostream>
#include <conio.h>
extern "C"
{
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"
}
#include <luabind\luabind.hpp>
using namespace std;
using namespace luabind;
int main()
{
lua_State *myLuaState = luaL_newstate();
open(myLuaState);
int x = 2;
do
{
luaL_dofile(myLuaState, "script.lua");
cout<<"x before = "<< x <<endl;
call_function<void>(myLuaState, "test", boost::ref(x));
cout<<"x after = "<< x <<endl;
} while(_getch() != 27);
lua_close(myLuaState);
}
script.lua
function test(x)
x = 7
end
My program crashes at runtime with the following error:
Unhandled exception at at
0x76B5C42D
in
LuaScripting.exe
: Microsoft
C++ exception:
std::runtime_error
at memory location
0x0017F61C
.
So, how to pass value from C++ to lua function by reference, so I can change it inside the script and it will be changed in c++ program too? I'm using boost 1.55.0, lua 5.1, luabind 0.9.1
EDIT :
When I try-catched
try {
call_function<void>(myLuaState, "test", boost::ref(x));
}catch(const std::exception &TheError) {
cerr << TheError.what() << endl;
it gave me a
"Trying to use unregistered class"
error.
EDIT 2 :
After a little research, I found that
"Trying to use unregistered class"
error throwing because of
boost::ref(x)
. I registered a
int&
class(just a guess):
module(myLuaState)
[
class_<int&>("int&")
];
and
"Trying to use unregistered class"
error disappeared. But calling
print(x)
in
test()
still causes
"lua runtime error"
, and program still not doing what I want from it to do.
Answer
I've managed to make this thing work. Well, not exactly like in the question: instead of int I passed my custom type GameObject.
But for some reason I still can't make it work with simple types like int, float etc.
So, first I decided to build luabind and lua by myself, just to be sure that they will work in VS2012. I followed the instructions from this question.
Then I wrote this test program:
#include <iostream>
#include <conio.h>
extern "C"
{
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"
}
#include <luabind\luabind.hpp>
#include <luabind\adopt_policy.hpp>
using namespace std;
using namespace luabind;
struct Vector2
{
float x, y;
};
class Transform
{
public:
Transform()
{
pos.x = 0;
pos.y = 0;
}
public:
Vector2 pos;
};
class Movement
{
public:
Movement(){}
public:
Vector2 vel;
};
class GameObject
{
public:
GameObject(){}
Transform& getTransform()
{
return _transform;
}
Movement& getMovement()
{
return _movement;
}
private:
Transform _transform;
Movement _movement;
};
int main()
{
lua_State *myLuaState = luaL_newstate();
open(myLuaState);
module(myLuaState) [
class_<Vector2>("Vector2")
.def(constructor<>())
.def_readwrite("x", &Vector2::x)
.def_readwrite("y", &Vector2::y),
class_<Transform>("Transform")
.def(constructor<>())
.def_readwrite("pos", &Transform::pos),
class_<Movement>("Movement")
.def(constructor<>())
.def_readwrite("vel", &Movement::vel),
class_<GameObject>("GameObject")
.def(constructor<>())
.def("getTransform", &GameObject::getTransform)
.def("getMovement", &GameObject::getMovement)
];
GameObject _testGO;
_testGO.getMovement().vel.x = 2;
_testGO.getMovement().vel.y = 3;
do
{
cout<<"_testGO.pos.x before = "<< _testGO.getTransform().pos.x <<endl;
cout<<"_testGO.pos.y before = "<< _testGO.getTransform().pos.y <<endl;
try
{
luaL_dofile(myLuaState, "script.lua");
call_function<void>(myLuaState, "testParams", boost::ref(_testGO), 0.3);
}
catch(const exception &TheError)
{
cerr << TheError.what() << endl;
}
cout<<"_testGO.pos.x after = "<< _testGO.getTransform().pos.x <<endl;
cout<<"_testGO.pos.y after = "<< _testGO.getTransform().pos.y <<endl;
}
while(_getch() != 27);
lua_close(myLuaState);
return 0;
}
script.lua:
function testParams(owner, dt)
owner:getTransform().pos.x = owner:getMovement().vel.x * dt;
owner:getTransform().pos.y = owner:getMovement().vel.y * dt;
end
And it worked:
_testGO.pos.x before = 0
_testGO.pos.y before = 0
_testGO.pos.x after = 0.6
_testGO.pos.y after = 0.9
Will try to figure out how to manage simple types.
UPDATE:
This is the answer for the same question, I got on the luabind mailing lists:
'It is impossible to pass object types that are modelled in lua with built-in types by reference. In Lua, there exists no attribute to types like reference or value type. A type is either a reference type (tables), or a value type (built-in types like number). What you can do however is return the new value in the lua-side. A policy could be established, that transforms reference integral types to a list of parsed return types, so it is transparent on the c++ side. That would place hard constraints on the lua side though.' link: http://sourceforge.net/p/luabind/mailman/message/32692053/
|
__label__pos
| 0.965459 |
Take the 2-minute tour ×
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It's 100% free, no registration required.
I was watching the movie 21 yesterday, and in the first 15 minutes or so the main character is in a classroom, being asked a "trick" question (in the sense that the teacher believes that he'll get the wrong answer) which revolves around theoretical probability.
The question goes a little something like this (I'm paraphrasing, but the numbers are all exact):
You're on a game show, and you're given three doors. Behind one of the doors is a brand new car, behind the other two are donkeys. With each door you have a $1/3$ chance of winning. Which door would you pick?
The character picks A, as the odds are all equally in his favor.
The teacher then opens door C, revealing a donkey to be behind there, and asks him if he would like to change his choice. At this point he also explains that most people change their choices out of fear; paranoia; emotion and such.
The character does change his answer to B, but because (according to the movie), the odds are now in favor of door B with a $1/3$ chance of winning if door A is picked and $2/3$ if door B is picked.
What I don't understand is how removing the final door increases the odds of winning if door B is picked only. Surely the split should be 50/50 now, as removal of the final door tells you nothing about the first two?
I assume that I'm wrong; as I'd really like to think that they wouldn't make a movie that's so mathematically incorrect, but I just can't seem to understand why this is the case.
So, if anyone could tell me whether I'm right; or if not explain why, I would be extremely grateful.
share|improve this question
17
This is known as the Monty Hall problem. The point is that your odds of winning with the original door have not changed. Since the total odds have to add up to 1, the odds of $B$ being the correct door are now 2/3. In fact, "switching to B" is equivalent to "pick the best of whatever is behind doors B and C" (you know that what is behind B is no worse than what has been revealed behind C), which clearly gives you a 2/3rds odds of winning. The precise conditions of the game are very important, though. – Arturo Magidin Jan 6 '12 at 3:27
1
I would expect that this has been asked before; I found two questions asking about variants (here and here), but not the plain question. I'm probably just not finding it? – Arturo Magidin Jan 6 '12 at 3:33
@ArturoMagidin I looked as well and was shocked not to find one. – Alex Becker Jan 6 '12 at 3:37
2
I'd suggest that the question is ambiguous as commonly stated. It doesn't specify whether the teacher selected a door at random which just happened to have a donkey behind it, or if the teacher deliberately selected a door with a donkey behind it. – Winston Ewert Feb 6 '12 at 3:43
1
Of course if you lived in the mountains of Nepal a donkey would be preferred to a car... – Bogatyr Sep 16 '12 at 19:02
10 Answers 10
up vote 52 down vote accepted
This problem, known as the Monty Hall problem, is famous for being so bizarre and counter-intuitive. It is in fact best to switch doors, and this is not hard to prove either. In my opinion, the reason it seems so bizarre the first time one (including me) encounters it is that humans are simply bad at thinking about probability. What follows is essentially how I have justified switching doors to myself over the years.
At the start of the game, you are asked to pick a single door. There is a $1/3$ chance that you have picked correctly, and a $2/3$ chance that you are wrong. This does not change when one of the two doors you did not pick is opened. The second time is that you are choosing between whether your first guess was right (which has probability $1/3$) or wrong (probability $2/3$). Clearly it is more likely that your first guess was wrong, so you switch doors.
This didn't sit well with me when I first heard it. To me, it seemed that the situation of picking between two doors has a certain kind of symmetry-things are either behind one door or the other, with equal probability. Since this is not the case here, I was led to ask where the asymmetry comes from? What causes one door to be more likely to hold the prize than the other? The key is that the host knows which door has the prize, and opens a door that he knows does not have the prize behind it.
To clarify this, say you choose door $A$, and are then asked to choose between doors $A$ and $B$ (no doors have been opened yet). There is no advantage to switching in this situation. Say you are asked to choose between $A$ and $C$; again, there is no advantage in switching. However, what if you are asked to choose between a) the prize behind door $A$ and b) the better of the two prizes behind door $B$ and $C$. Clearly, in this case it is in your advantage to switch. But this is exactly the same problem as the one you've been confronted with! Why? Precisely because the host always opens (hence gets rid of) the door that you did not pick which has the worse prize behind it. This is what I mean when I say that the asymmetry in the situation comes from the knowledge of the host.
share|improve this answer
4
Fantastic explanation! Thanks for the clarification :). – Avicinnian Jan 6 '12 at 4:11
5
+1 what if you are asked to choose between a) the prize behind door A and b) the better of the two prizes behind door B and C really cements the idea. – dj18 May 2 '12 at 17:34
Good explanation. But I would say that the fact that the host knows the door which not to pick, is more a kind of asymmetry than symmetry. – Cris Stringfellow Dec 20 '12 at 0:39
@CrisStringfellow I wrote asymmetry. – Alex Becker Dec 20 '12 at 18:40
There's one point that I'd like to make: Before the host opens one of the "bad" doors, each door has an equal probability of $1/3$ of being the "good" door. However, after the host has revealed one of the two bad doors, there are now two doors---one good door and one bad door---left. So now the probability---which is in fact the conditional probability---of either of the other two doors being the good door is $1/2$ because the contestant at this choice has an open choice of either to stick with his original selection or choose the third remaining door. What is the flaw in this reasoning? – Saaqib Mahmuud Aug 9 '14 at 3:31
You begin with probability of 2/3 of losing.
You have more possibilities of losing with the first door, so , if you are losing (which is more probable 2/3) and you CHANGE is 100% sure that you will WIN because the remaining door IS (yes... IS) the one with the prize, because the host showed you based in your first guess and with his knowledge of the good door which was the bad door, the remaining door have more posibilities (2/3) of being the one with the prize.
share|improve this answer
The movie 21 didn't state the riddle correctly. The movie failed to state the rules governing how the game show host will behave.
Assuming the riddle in the movie follows the "Monty Hall Problem" as described on Wikipedia there a few critical assumptions the movie failed to mention:
1) the host must always open a door that was not picked by the contestant 2) the host must always open a door to reveal a goat and never the car 3) the host must always offer the chance to switch between the originally chosen door and the remaining closed door.
Knowings the rules it makes the riddle much easier to understand. Many of the explanations above will suffice and Wikipedia has a good explanation.
The problem is that the movie failed to state these critical assumptions.
share|improve this answer
1
It's also necessary that if the contestant initially chooses the winning door, the host chooses either of the remaining doors with equal probability. See my comment above. – augurar Aug 9 '14 at 18:46
Rather than looking at the player, I prefer to explain the paradox from the host's standpoint, as this only involves one step.
As the player gets one door, the host gets two. There are 3 possibilities with the same probability:
• donkey-donkey => leaves a donkey after a door is open
• car-donkey => leaves the car
• donkey-car => leaves the car
So in two cases out of three the door that the host leaves closed hides the car.
share|improve this answer
Merely knowing that the teacher showed a losing door does not provide any information unless one knows how the correctness of one's initial answer would influence the likelihood of the teacher showing the losing door. Consider the following four possible "strategies" for the teacher:
1. The host knows where the prize is, wants the contestant to lose, and will show an empty door only if the contestant had picked the one with the prize [if the contest already picked a wrong door, the host would reveal either the contestant's door or the one with the prize].
2. The host knows where the prize is, wants the contestant to win, and will show an empty door only if the contestant had picked the other empty door [if the contestant had already picked the right door, the host would simply show it].
3. The host knows where the prize is, and will always show an empty door [the empty door if the contestant's initial guess was wrong, or an arbitrarily-selected empty door if it was right].
4. The host picks a door at random; if it contains the prize, the contestant loses; otherwise, the contestant is allowed to switch to the other unseen door.
In the first two scenarios, the host's decision to show or not show an empty door will indicate to anyone who knows the host's strategy whether the player's guess was right or not. In the third scenario, the host's decision to offer a switch provides no information about whether the contestant's initial guess was right, but converts the 2/3 probability that the contestant's initial guess was wrong into a 2/3 probability that the prize is under the remaining door.
To evaluate the last scenario intuitively, imagine that the host draws an "X" on the player's door, flips a coin to pick a door at random and draws an "Y" on it, and finally draws a "Z" on the remaining door. If neither the host nor player has any clue as to where the prize is, doors 1, 2, and 3 will each have an equal probability of holding the prize, and the marking of letters X, Y, or Z by people who have no idea where the prize is don't change that. If the host asks the player if he'd like to switch to Z before anyone knows what's under Y, the decision will be helpful 1/3 of the time, harmful 1/3 of the time, and irrelevant 1/3 of the time. If door Y is shown to be empty, the irrelevant case will be eliminated so of the cases that remain, the other two will have 1/2 probability each.
Note: Many discussions of the "Monty Hall Paradox" assume that the host uses strategy #3, but fail to explicitly state that fact. That assumption is critically important to assessing the probability that a switch will be a winning move, since without it (depending upon the host's strategy) the probability of the prize being under the remaining door could be anything from 0% to 100%. I don't know the strategy used by the real-life game-show host for whom the "paradox" is named, but am pretty certain I've seen players revealed as winners or losers without being given a chance to switch, implying that while Monty Hall might sometimes have used strategy #3, he did not do so consistently [the normal arguments/proofs would hold if the host's decision of whether or not the player would be shown an empty door and allowed to switch was made before the player selected his door, but I have no particular reason to believe Monty Hall did things that way].
share|improve this answer
The person who changes his choice will win if and only if his first choice was wrong, and there is a probability of $\frac{2}{3}$ on that.
The person who does not change his choice will win if and only if his first choice was right. There is a probability of $\frac{1}{3}$ on that.
share|improve this answer
Let us use some theory here. Lets call the doors $0, 1, 2,$ and the right door $D$. $$P(D=0)=P(D=1)=P(D=2)=\frac13$$ $D$ is random. Now let us call the door we choose $C$. $C$ is not random. Without loss of generality, let $C=0$. Also we have $R$, the revealed door, which is random. Since the person won't reveal the right door or the one you choose, $R \neq C$ and $R \neq D$. Since we know $C$, without knowledge of $D$ we have: $$P(R=1)=P(R=2)=\frac12$$ Let us say, without loss of generality, we get the information $R=2$. Then what we are looking for is $P(D=1|R=2)$. $$\frac{P(D=1 \wedge R=2)}{P(R=2)}$$ $$\frac{P(R=2|D=1)P(D=1)}{\frac12}$$ Now, $P(R=2|D=1)=1$, since $R$ can't be 0 or 1, since those are already taken by $C$ and $D$. $$\frac{1 \cdot \frac13}{\frac12}$$ So the answer is: $\frac23=66.\overline6 \%$.
share|improve this answer
Look The Monte Hall problem is corrupted b/c everyone is influenced by previous answers given in movies and an article in a magazine years ago.
You are all solving a false equation. Door #1, #2, or #3. there is 1 door of value and 2 doors of non value.
The Host always removes a non value door, single time. This gives you a new choice, a new equation. Which is never accounted for in the given explanations.
The real choice is A, B or C where C = B = $0 and A=$ After you choose, regardless of that choice B or C is removed from the equation and you are given another choice. A or B (or C)
The first equation is thrown out and a new one put in its place. essentially A or 0.
The Host was always going to make the real choice between A and 0, so you never really had a choice of A, B and C.
The fact that you don't know that doesn't change the real equation.
share|improve this answer
5
Your explanation is confusing. Please clean it up, and explain how other approaches go wrong (better yet, what yur assumptions are, and why they are more realistic). – vonbrand Feb 11 '13 at 2:01
Its simple, switching allows you to pick 2 out of the 3 doors. Choosing door number 1 and then always switching is the equivalent of saying "door number 2 or door number 3, but NOT door number 1". When you look at it that way, you should see that you have a 2/3 chance of being right, and that the reveal simply confirms which door it must be if you are right. Increase the number of doors and it should become even more obvious that saying "door 2 or 3 or 4 or 5 or ... but not 1" is the right way to bet. You have an $1-1/x$ chance of being right, and a $1/x$ chance of being wrong.
share|improve this answer
To understand why your odds increase by changing door let us take an extreme example first. Say there are $10000$ doors. Behind one of them is a car and behind the rest are donkeys. Now, the odds of choosing a car is $1\over10000$ and the odds of choosing a donkey are $9999\over10000$. Say you pick a random door which we call X for now. According to the rules of the game, the game show host now opens all the doors except for two, one of which contains the car. You now have the option to switch. Since the probability for not choosing the car initially was $9999\over10000$ it is very likely you didn't choose the car. So assuming now that door X is a goat and you switch you get the car. This means that as long as you pick the goat on your first try you will always get the car.
If we return to the original problem where there are only 3 doors we see that the exact same logic applies. The probability that you choose a goat on your first try is $2\over3$ while choosing a car is $1\over3$. If your choose a goat on your first try and switch you will get a car and if you choose the car on your first try and switch you will get a goat. Thus the probability that you will get a car if you switch is $2\over3$ (which is more than the initial $1\over3$).
share|improve this answer
1
+1 This means that as long as you pick the goat on your first try you will always get the car. - really clarified the topic. – dj18 May 2 '12 at 17:36
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.565704 |
Socket.BeginAccept Method (Int32, AsyncCallback, Object)
System_CAPS_noteNote
The .NET API Reference documentation has a new home. Visit the .NET API Browser on docs.microsoft.com to see the new experience.
Begins an asynchronous operation to accept an incoming connection attempt and receives the first block of data sent by the client application.
Namespace: System.Net.Sockets
Assembly: System (in System.dll)
[HostProtectionAttribute(SecurityAction.LinkDemand, ExternalThreading = true)]
public IAsyncResult BeginAccept(
int receiveSize,
AsyncCallback callback,
object state
)
Parameters
receiveSize
Type: System.Int32
The number of bytes to accept from the sender.
callback
Type: System.AsyncCallback
The AsyncCallback delegate.
state
Type: System.Object
An object that contains state information for this request.
Return Value
Type: System.IAsyncResult
An IAsyncResult that references the asynchronous Socket creation.
Exception Condition
ObjectDisposedException
The Socket object has been closed.
NotSupportedException
Windows NT is required for this method.
InvalidOperationException
The accepting socket is not listening for connections. You must call Bind and Listen before calling BeginAccept.
-or-
The accepted socket is bound.
ArgumentOutOfRangeException
receiveSize is less than 0.
SocketException
An error occurred when attempting to access the socket. See the Remarks section for more information.
Connection-oriented protocols can use the BeginAccept method to asynchronously process incoming connection attempts. Accepting connections asynchronously enables you to send and receive data within a separate execution thread. This overload allows you to specify the number of bytes to accept in the initial transfer in the receiveSize parameter.
Before calling the BeginAccept method, you must call the Listen method to listen for and queue incoming connection requests.
You must create a callback method that implements the AsyncCallback delegate and pass its name to the BeginAccept method. To do this, at the very minimum, you must pass the listening Socket object to BeginAccept through the state parameter. If your callback needs more information, you can create a small class to hold the Socket and the other required information. Pass an instance of this class to the BeginAccept method through the state parameter.
Your callback method should invoke the EndAccept method. When your application calls BeginAccept, the system usually uses a separate thread to execute the specified callback method and blocks on EndAccept until a pending connection is retrieved.
EndAccept returns a new Socket that you can use to send and receive data with the remote host. You cannot use this returned Socket to accept any additional connections from the connection queue. If you want the original thread to block after you call the BeginAccept method, use WaitHandle.WaitOne. Call the Set method on a ManualResetEvent in the callback method when you want the original thread to continue executing.
The system may also use the calling thread to invoke the callback method. In this case, the CompletedSynchronously property on the returned IAsyncResult will be set to indicate that the BeginAcceptmethod completed synchronously.
For additional information on writing callback methods see Marshaling a Delegate as a Callback Method.
To cancel a pending call to the BeginAccept method, close the Socket. When the Closemethod is called while an asynchronous operation is in progress, the callback provided to the BeginAccept method is called. A subsequent call to the EndAcceptmethod will throw an ObjectDisposedException to indicate that the operation has been cancelled.
System_CAPS_noteNote
You can call use the RemoteEndPoint property of the returned Socket object to identify the remote host's network address and port number.
System_CAPS_noteNote
If you receive a SocketException, use the SocketException.ErrorCode property to obtain the specific error code. After you have obtained this code, refer to the Windows Sockets version 2 API error code documentation in the MSDN library for a detailed description of the error.
System_CAPS_noteNote
This member outputs trace information when you enable network tracing in your application. For more information, see Network Tracing in the .NET Framework.
System_CAPS_noteNote
The execution context (the security context, the impersonated user, and the calling context) is cached for the asynchronous Socket methods. After the first use of a particular context (a specific asynchronous Socket method, a specific Socket instance, and a specific callback), subsequent uses of that context will see a performance improvement.
The following code example opens a socket and accepts an asynchronous connection. In this example, the socket accepts the initial 10 bytes of data. The number of bytes received and the data are displayed on the console by the callback delegate. See BeginReceive for a description of how the remaining data is received.
// This server waits for a connection and then uses asynchronous operations to
// accept the connection with initial data sent from the client.
// Establish the local endpoint for the socket.
IPHostEntry ipHostInfo = Dns.GetHostEntry(Dns.GetHostName());
IPAddress ipAddress = ipHostInfo.AddressList[0];
IPEndPoint localEndPoint = new IPEndPoint(ipAddress, 11000);
// Create a TCP/IP socket.
Socket listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp );
// Bind the socket to the local endpoint, and listen for incoming connections.
listener.Bind(localEndPoint);
listener.Listen(100);
while (true)
{
// Set the event to nonsignaled state.
allDone.Reset();
// Start an asynchronous socket to listen for connections and receive data from the client.
Console.WriteLine("Waiting for a connection...");
// Accept the connection and receive the first 10 bytes of data.
int receivedDataSize = 10;
listener.BeginAccept(receivedDataSize, new AsyncCallback(AcceptReceiveCallback), listener);
// Wait until a connection is made and processed before continuing.
allDone.WaitOne();
}
}
public static void AcceptReceiveCallback(IAsyncResult ar)
{
// Get the socket that handles the client request.
Socket listener = (Socket) ar.AsyncState;
// End the operation and display the received data on the console.
byte[] Buffer;
int bytesTransferred;
Socket handler = listener.EndAccept(out Buffer, out bytesTransferred, ar);
string stringTransferred = Encoding.ASCII.GetString(Buffer, 0, bytesTransferred);
Console.WriteLine(stringTransferred);
Console.WriteLine("Size of data transferred is {0}", bytesTransferred);
// Create the state object for the asynchronous receive.
StateObject state = new StateObject();
state.workSocket = handler;
handler.BeginReceive( state.buffer, 0, StateObject.BufferSize, 0,
new AsyncCallback(ReadCallback), state);
}
.NET Framework
Available since 2.0
Return to top
Show:
|
__label__pos
| 0.611892 |
Evolution computer technology essay
evolution computer technology essay The current advancements in computer technology are likely to transform computing machines into intelligent ones that possess self organizing skills the evolution of computers will continue, perhaps till the day their processing powers equal human intelligence.
The evolution of computer technology introduction fifty years ago, the us army unveiled the electronic numerical integrator and computer the world's first operational, general purpose, electronic digital computer developed at moore school of electrical engineering, university of pennsylvania. Technology continued to prosper in the computer world into the nineteenth century a major figure during this time is charles babbage , designed the idea of the difference engine in the year 1820 it was a calculating machine designed to tabulate the results of mathematical functions (evans, 38). This research paper is mainly going to discuss how the computer technology evolved from the end of the fifth generation to current day sixth generation. The tools you need to write a quality essay or term paper evolution of information technology has affected all these practices, and has helped them in improving .
Read this essay on evolution of computer technology come browse our large digital warehouse of free sample essays get the knowledge you need in order to pass your classes and more. Computers: essay on computers (992 words) fact during the evolution, each time computers are being launched that are lighter, smaller, speedier and more powerful . Computers have been around a lot longer than many people might imagine history & evolution of computers march 31, 2015 by: technology made a giant leap . Free essay: the personal computer underwent drastic changes with the introduction to advanced computing software and hardware the evolution of computers did.
Computer essay 1 (100 words) a computer is a great invention of the modern technology it is generally a machine which has capability to store large data value in its memory. Evolution of computer technology describe how concepts such as risc, pipelining, cache memory, and virtual memory have evolved over the past 25 years to improve system performance. Sample technology essays is the step towards the radio air-interface evolution for 3g technology to deliver “mobile broadband” essay violent computer .
Effects of technology on people: computer word processing, social networking, and the text message, the spo- the swiss-french philosopher in his essay on the . From windows 1 to windows 10: 29 years of windows evolution a mouse before the mouse was a common computer input device of microsoft’s directx 10 technology windows media player 11 and . Essay on evolution of computer technology and operating systems 1601 words | 7 pages the personal computer underwent drastic changes with the introduction to advanced computing software and hardware. Computer evolution has been a fascinating process as we find out here the generation of computers may be broadly classified into 5 stages : 1 first generation ( 1940 – 1956 ) 2. 1 computer technology essay the history of computers - 602 words did the computer begin who came up the idea of this genius creation that would allow you to surf the internet and chat with friends worldwide.
Evolution computer technology essay
All the computers, electronic gadgets, different games and so many other kinds of things were innovated and invented through the use of high technology by the experts things are made easily and very convenient to all because of what technology brings to our lives. Over a period of time computers have evolved and toady with the artificial intelligence technology, we use the most advanced kind of computers that have helped man in every sectors of life at every generations of the computers or in fact during the evolution, each time computers are being launched that are lighter, smaller, speedier and more . Evolution of the computer - in pictures the first computer was arguably invented around 4,000 years ago with the advent of the abacus, the first machine designed to help humans count and calculate.
• The concept of pipelining would be to store and forward jobs for the computer to execute it enhances the performance of the system by .
• Essay: evolution of computer technology sample essay throughout the last 25 years, computers have evolved from being low speed simple machines to high speed behemoths while still remaining affordable.
• Technology essays: evolution of computers of computers this essay evolution of computers and other computer have a mind the evolution of cellular automata .
Technology term papers, essays, research papers on technology free technology college papers and model essays also evolution occurs in technology take for . Essay: evolution of technology primitive men cleaved their universe into friends and enemies and responded with quick, deep emotion to even the mildest threats emanating from outside the arbitrary boundary. Computer and technology communications and media essay the evolution of computing technology student name school affiliation course title due date the evolution of computing has exponentially taken place over the years exponentially, this traces from devices such as the ancient chinese abacus, the slide rule and today's modern devices that involve digital processing to perform complex tasks . The personal computer underwent drastic changes with the introduction to advanced computing software and hardware the evolution of computers did not develop on its own key influential figures such as steve jobs and bill gates led the revolution of technology these well-known individuals competed .
evolution computer technology essay The current advancements in computer technology are likely to transform computing machines into intelligent ones that possess self organizing skills the evolution of computers will continue, perhaps till the day their processing powers equal human intelligence. evolution computer technology essay The current advancements in computer technology are likely to transform computing machines into intelligent ones that possess self organizing skills the evolution of computers will continue, perhaps till the day their processing powers equal human intelligence. evolution computer technology essay The current advancements in computer technology are likely to transform computing machines into intelligent ones that possess self organizing skills the evolution of computers will continue, perhaps till the day their processing powers equal human intelligence.
Evolution computer technology essay
Rated 4/5 based on 30 review
Download
|
__label__pos
| 0.950594 |
Creating and Using a Simple, Bayesian Linear Model (in brms and R)
[This article was first published on [R]eliability, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
This post is my good-faith effort to create a simple linear model using the Bayesian framework and workflow described by Richard McElreath in his Statistical Rethinking book.1 As always – please view this post through the lens of the eager student and not the learned master. I did my best to check my work, but it’s entirely possible that something was missed. Please let me know – I won’t take it personally. As McElreath notes in his lectures – “if you’re confused, it’s because you’re paying attention”. And sometimes I get confused – this a lot harder than my old workflow which consisted of clicking “add a trendline” in Excel. Thinking Bayesian is still relatively new to me. Disclaimer over – let’s get to it.
I’m playing around with a bunch of fun libraries in this one.
library(tidyverse)
library(styler)
library(ggExtra)
library(knitr)
library(brms)
library(cowplot)
library(gridExtra)
library(skimr)
library(DiagrammeR)
library(rayshader)
library(av)
library(rgl)
I made up this data set. It represents hypothetical values of ablation time and tissue impedance as measured by sensors embedded in a RF ablation catheter. This type of device is designed to apply RF or thermal energy to the vessel wall. The result is a lesion that can aid in improve arrhythmia, reduce hypertension, or provide some other desired outcome.
In RF ablations, the tissue heats up over the course of the RF cycle, resulting in a drop in impedance that varies over time. As described above, the goal will be to see how much of the variation in impedance is described by time (over some limited range) and then communicate the uncertainty in the predictions visually. None of this detail is terribly important other than I like to frame my examples from within my industry and McElreath emphasizes grounding our modeling in real world science and domain knowledge. This is what an ablation catheter system looks like:2
To get things started, load the data and give it a look with skim(). There are no missing values.
ablation_dta_tbl <- read.csv(file = "abl_data_2.csv")
ablation_dta_tbl <- ablation_dta_tbl %>% select(temp, time)
ablation_dta_tbl %>% skim()
## Skim summary statistics
## n obs: 331
## n variables: 2
##
## -- Variable type:numeric -------------------------------------------------------
## variable missing complete n mean sd p0 p25 p50 p75 p100
## temp 0 331 331 77.37 3.9 68.26 74.61 77.15 80.33 89.53
## time 0 331 331 22.57 3.22 15.83 20.22 22.54 24.69 31.5
## hist
## <U+2581><U+2585><U+2587><U+2586><U+2586><U+2583><U+2581><U+2581>
## <U+2582><U+2586><U+2587><U+2587><U+2587><U+2583><U+2582><U+2581>
Let’s start with a simple visualization. The code below builds out a scatterplot with marginal histograms which I think is a nice, clean way to evaluate scatter data.3 These data seem plausible since the impedance will typically drop as the tissue heats up during the procedure. In reality the impedance goes asymptotic but we’ll work over a limited range of time where the behavior might reasonably be linear.
scatter_1_fig <- ablation_dta_tbl %>% ggplot(aes(x = time, y = temp)) +
geom_point(
colour = "#2c3e50",
fill = "#2c3e50",
size = 2,
alpha = 0.4
) +
labs(
x = "Ablation Time (seconds)",
y = "Tissue Temperature (deg C)",
title = "Ablation Time vs. Tissue Temperature",
subtitle = "Simulated Catheter RF Ablation"
)
scatter_hist_1_fig <- ggMarginal(scatter_1_fig,
type = "histogram",
color = "white",
alpha = 0.7,
fill = "#2c3e50",
xparams = list(binwidth = 1),
yparams = list(binwidth = 2.5)
)
# ggExtra needs these explit calls to display in Markdown docs *shrug*
grid::grid.newpage()
grid::grid.draw(scatter_hist_1_fig)
It helps to have a plan. If I can create a posterior distribution that captures reasonable values for the model parameters and confirm that the model makes reasonable predictions then I will be happy. Here’s the workflow that hopefully will get me there.
grViz("digraph flowchart {
# node definitions with substituted label text
node [fontname = Helvetica, shape = rectangle, fillcolor = yellow]
tab1 [label = 'Step 1: Propose a distribution for the response variable \n Choose a maximum entropy distribution given the constraints you understand']
tab2 [label = 'Step 2: Parameterize the mean \n The mean of the response distribution will vary linearly across the range of predictor values']
tab3 [label = 'Step 3: Set priors \n Simulate what the model knows before seeing the data. Use domain knowledge as constraints.']
tab4 [label = 'Step 4: Define the model \n Create the model using the observed data, the likelihood function, and the priors']
tab5 [label = 'Step 5: Draw from the posterior \n Plot plausible lines using parameters visited by the Markov chains']
tab6 [label = 'Step 6: Push the parameters back through the model \n Simulate real data from plausible combinations of mean and sigma']
# edge definitions with the node IDs
tab1 -> tab2 -> tab3 -> tab4 -> tab5 -> tab6;
}
")
Step 1: Propose a distribution for the response variable
A Gaussian model is reasonable for the outcome variable Temperature as we know it is a measured from the thermocouples on the distal end of the catheter. According to McElreath (pg 75):
Measurement errors, variations in growth, and the velocities of molecules all tend towards Gaussian distributions. These processes do this because at their heart, these processes add together fluctuations. And repeatedly adding finite fluctuations results in a distribution of sums that have shed all information about the underlying process, aside from mean and spread.
Here’s us formally asserting Temperature as a normal distribution with mean and standard deviation . These two parameters are all that is needed to completely describe the distribution and also pin down the likelihood function.
\(T_i \sim \text{Normal}(\mu_i, \sigma)\)
Step 2: Parameterize the mean
If we further parameterize , we can do some neat things like move the mean around with the predictor variable. This is a pretty key concept - you move the mean of the outcome variable around by parameterizing it. If we make it a line then it will move linearly with the predictor variable. The real data will still have a spread once the terms is folded back in, but we can think of the whole distribution shifting up and down based on the properties of the line.
Here’s us asserting we want mu to move linearly with changes in the predictor variable (time). Subtracting the mean from each value of the predictor variable “centers” the data which McElreath recommends in most cases. I will explore the differences between centered and un-centered later on.
\(\mu_i = \alpha + \beta (x_i - \bar{x})\)
Step 3: Set priors
We know some things about these data. Temperature is a continuous variable so we want a continuous distribution. We also know from the nature of the treatment that there isn’t really any physical mechanism within the device that would be expected to cool down the tissue below normal body temperature. Since only heating is expected, the slope should be positive or zero.
McElreath emphasizes simulating from the priors to visualize “what the model knows before it sees the data”. Here are some priors to consider. Let’s evaluate.
# Set seed for repeatability
set.seed(1999)
# number of sims
n <- 150
# random draws from the specified prior distributions
# lognormal distribution is used to constrain slopes to positive values
a <- rnorm(n, 75, 15)
b <- rnorm(n, 0, 1)
b_ <- rlnorm(n, 0, 0.8)
# calc mean of time and temp for later use
mean_temp <- mean(ablation_dta_tbl$temp)
mean_time <- mean(ablation_dta_tbl$time)
# dummy tibble to feed ggplot()
empty_tbl <- tibble(x = 0)
# y = b(x - mean(var_1)) + a is equivalent to:
# y = bx + (a - b * mean(var_1))
# in this fig we use the uninformed prior that generates some unrealistic values
prior_fig_1 <- empty_tbl %>% ggplot() +
geom_abline(
intercept = a - b * mean_time,
slope = b,
color = "#2c3e50",
alpha = 0.3,
size = 1
) +
ylim(c(0, 150)) +
xlim(c(0, 150)) +
labs(
x = "time (sec)",
y = "Temp (C)",
title = "Prior Predictive Simulations",
subtitle = "Uninformed Informed"
)
# in this fig we confine the slopes to broad ranges informed by what we know about the domain
prior_fig_2 <- empty_tbl %>% ggplot() +
geom_abline(
intercept = a - b_ * mean_time,
slope = b_,
color = "#2c3e50",
alpha = 0.3,
size = 1
) +
ylim(c(0, 150)) +
xlim(c(0, 150)) +
labs(
x = "time (sec)",
y = "Temp (C)",
title = "Prior Predictive Simulations",
subtitle = "Mildly Informed Prior"
)
plot_grid(prior_fig_1, prior_fig_2)
The plots above show what the model thinks before seeing the data for two different sets of priors. In both cases, I have centered the data by subtracting the mean of the time from each individual value of time. This means the intercept has the meaning of the expected temperature at the mean of time. The family of lines on the right seem a lot more realistic despite having some slopes that predict strange values out of sample (blood coagulates at ~90C). Choosing a log normal distribution for time ensures positives slopes. You could probably go even tighter on these priors but for this exercise I’m feeling good about proceeding.
Looking only at the time window of the original observations and the Temp window bounded by body temperature (lower bound) and water boiling (upper bound).
empty_tbl %>% ggplot() +
geom_abline(
intercept = a - b_ * mean_time,
slope = b_,
color = "#2c3e50",
alpha = 0.3,
size = 1
) +
ylim(c(37, 100)) +
xlim(c(15, 40)) +
labs(
x = "time (sec)",
y = "Temp (C)",
title = "Prior Predictive Simulations",
subtitle = "Mildly Informed Prior, Original Data Range"
)
Here are the prior distributions selected to go forward.
\(\alpha \sim \text{Normal}(75, 15)\)
\(\beta \sim \text{LogNormal}(0, .8)\)
\(\sigma \sim \text{Uniform}(0, 30)\)
Step 4: Define the model
Here I use the brm() function in brms to build what I’m creatively calling: “model_1”. This one uses the un-centered data for time. This function uses Markov Chain Monte Carlo to survey the parameter space. After the warm up cycles, the relative amount of time the chains spend at each parameter value is a good approximation of the true posterior distribution. I’m using a lot of warm up cycles because I’ve heard chains for the uniform priors on sigma can take a long time to converge. This model still takes a bit of time to chug through the parameter space on my modest laptop.
#model_1 <-
# brm(
# data = ablation_dta_tbl, family = gaussian,
# temp ~ 1 + time,
# prior = c(
# prior(normal(75, 15), class = Intercept),
# prior(lognormal(0, .8), class = b),
# prior(uniform(0, 30), class = sigma)
# ),
# iter = 41000, warmup = 40000, chains = 4, cores = 4,
# seed = 4
# )
For presenting in a post, I found it easier to store the model and call it back when compiling the page rather than replicate the fitting each time.
#saveRDS(model_1, file = "blr_rethinking_model_1.rds")
model_1 <- readRDS(file = "blr_rethinking_model_1.rds")
Step 5: Draw from the posterior
The fruits of all my labor! The posterior holds credible combinations for sigma and the slope and intercept (which together describe the mean of the outcome variable we care about). Let’s take a look.
post_samplesM1_tbl <-
posterior_samples(model_1) %>%
select(-lp__) %>%
round(digits = 3)
post_samplesM1_tbl %>%
head(10) %>%
kable(align = rep("c", 3))
b_Intercept b_time sigma
58.509 0.841 2.682
55.983 0.949 2.648
56.195 0.937 2.540
56.661 0.919 2.474
55.143 0.978 2.593
55.170 0.977 2.667
54.908 0.996 2.621
58.453 0.836 2.534
54.134 1.031 2.647
58.713 0.828 2.707
The plotting function in brms is pretty sweet. I’m not expert in MCMC diagnostics but I do know the “fuzzy caterpillar” look of the trace plots is desirable.
plot(model_1)
Posterior_summary() can grab the model results in table form.
mod_1_summary_tbl <-
posterior_summary(model_1) %>%
as.data.frame() %>%
rownames_to_column() %>%
as_tibble() %>%
mutate_if(is.numeric, funs(as.character(signif(., 2)))) %>%
mutate_at(.vars = c(2:5), funs(as.numeric(.)))
mod_1_summary_tbl %>%
kable(align = rep("c", 5))
rowname Estimate Est.Error Q2.5 Q97.5
b_Intercept 57.00 1.000 55.00 59.0
b_time 0.91 0.045 0.83 1.0
sigma 2.60 0.100 2.40 2.8
lp__ -790.00 1.300 -790.00 -790.0
Now let’s see what changes if the time data is centered. Everything is the same here in model_2 except the time_c data which is transformed by subtracting the mean from each value.
ablation_dta_tbl <- ablation_dta_tbl %>% mutate(time_c = time - mean(time))
#model_2 <-
# brm(
# data = ablation_dta_tbl, family = gaussian,
# temp ~ 1 + time_c,
# prior = c(
# prior(normal(75, 15), class = Intercept),
# prior(lognormal(0, .8), class = b),
# prior(uniform(0, 30), class = sigma)
# ),
# iter = 41000, warmup = 40000, chains = 4, cores = 4,
# seed = 4
# )
Plotting model_2 to compare with the output of model_1 above.
plot_mod_2_fig <- plot(model_2)
The slope B and sigma are very similar. The intercept is the only difference with model_1 ranging from low to high 50’s. Model 2 is tight around 77. We should visualize the lines proposed by the parameters in the posteriors of our models to understand the uncertainty associated with the mean and also understand why the intercepts are different between models. First, store the posterior samples as a tibble in anticipation for ggplot.
post_samplesM2_tbl <-
posterior_samples(model_2) %>%
select(-lp__) %>%
round(digits = 3)
post_samplesM2_tbl %>%
head(10) %>%
kable(align = rep("c", 3))
b_Intercept b_time_c sigma
77.323 0.894 2.350
77.430 0.881 2.516
77.335 0.957 2.571
77.011 0.947 2.776
77.209 1.013 2.691
77.517 0.820 2.488
77.335 0.881 2.682
77.313 0.857 2.538
77.423 0.873 2.569
77.302 0.926 2.340
Visualize the original data (centered and un-centered versions) along with plausible values for regression line of the mean:
mean_regressionM1_fig <-
ablation_dta_tbl %>%
ggplot(aes(x = time, y = temp)) +
geom_point(
colour = "#481567FF",
size = 2,
alpha = 0.6
) +
geom_abline(aes(intercept = b_Intercept, slope = b_time),
data = post_samplesM1_tbl,
alpha = 0.1, color = "gray50"
) +
geom_abline(
slope = mean(post_samplesM1_tbl$b_time),
intercept = mean(post_samplesM1_tbl$b_Intercept),
color = "blue", size = 1
) +
labs(
title = "Regression Line Representing Mean of Slope",
subtitle = "Data is As-Observed (No Centering of Predictor)",
x = "Time (s)",
y = "Temperature (C)"
)
mean_regressionM2_fig <-
ablation_dta_tbl %>%
ggplot(aes(x = time_c, y = temp)) +
geom_point(
color = "#55C667FF",
size = 2,
alpha = 0.6
) +
geom_abline(aes(intercept = b_Intercept, slope = b_time_c),
data = post_samplesM2_tbl,
alpha = 0.1, color = "gray50"
) +
geom_abline(
slope = mean(post_samplesM2_tbl$b_time_c),
intercept = mean(post_samplesM2_tbl$b_Intercept),
color = "blue", size = 1
) +
labs(
title = "Regression Line Representing Mean of Slope",
subtitle = "Predictor Data (Time) is Centered",
x = "Time (Difference from Mean Time in seconds)",
y = "Temperature (C)"
)
combined_mean_fig <-
ablation_dta_tbl %>%
ggplot(aes(x = time, y = temp)) +
geom_point(
colour = "#481567FF",
size = 2,
alpha = 0.6
) +
geom_point(
data = ablation_dta_tbl, aes(x = time_c, y = temp),
colour = "#55C667FF",
size = 2,
alpha = 0.6
) +
geom_abline(aes(intercept = b_Intercept, slope = b_time),
data = post_samplesM1_tbl,
alpha = 0.1, color = "gray50"
) +
geom_abline(
slope = mean(post_samplesM1_tbl$b_time),
intercept = mean(post_samplesM1_tbl$b_Intercept),
color = "blue", size = 1
) +
geom_abline(aes(intercept = b_Intercept, slope = b_time_c),
data = post_samplesM2_tbl,
alpha = 0.1, color = "gray50"
) +
geom_abline(
slope = mean(post_samplesM2_tbl$b_time_c),
intercept = mean(post_samplesM2_tbl$b_Intercept),
color = "blue", size = 1
) +
labs(
title = "Regression Line Representing Mean of Slope",
subtitle = "Centered and Un-Centered Predictor Data",
x = "Time (s)",
y = "Temperature (C)"
)
combined_predicts_fig <- combined_mean_fig +
ylim(c(56,90)) +
labs(title = "Points Represent Observed Data (Green is Centered)",
subtitle = "Regression Line Represents Rate of Change of Mean (Grey Bands are Uncertainty)")
Now everything is clear. The slopes are exactly the same (as we saw in the density plots between model_1 and model_2 in summary()). The intercepts are different because in the centered data (green) the intercept occurs when the predictor equals 0 (its new mean). The outcome variable temp must therefore also be at its mean value in the “knot” of the bow-tie.
For the un-centered data (purple), the intercept is the value of Temperature when the un-adjusted time is at 0. The slope of the regression line for the mean is much more uncertain here.
Another way to look at the differences is as a map of the plausible parameter space. We need a plot that can represent 3 parameters: intercept, slope, and sigma. Each point will be a credible combination of the three parameters as observed in 1 row of the posterior distribution tibble(s).
First, the un-centered model.
p_spaceM1_fig <-
post_samplesM1_tbl[1:1000, ] %>%
ggplot(aes(x = b_time, y = b_Intercept, color = sigma)) +
geom_point(alpha = 0.5) +
geom_density2d(color = "gray30") +
scale_color_viridis_c() +
labs(
title = "Parameter Space - Model 1 (Un-Centered)",
subtitle = "Intercept Represents the Expected Temp at Time = 0"
)
Now the centered version:
p_spaceM2_fig <-
post_samplesM2_tbl[1:1000, ] %>%
ggplot(aes(x = b_time_c, y = b_Intercept, color = sigma)) +
geom_point(alpha = 0.5) +
geom_density2d(color = "gray30") +
scale_color_viridis_c() +
labs(
title = "Parameter Space - Model 2 (Centered)",
subtitle = "Intercept Represents the Expected Temp at Mean Time"
)
#p_spaceM2_fig
#ggsave(filename = "p_spaceM2_fig.png")
These look way different, but part of it is an illusion of the scaling on the y-axis. Remember how the credible values of the intercept were much tighter for the centered model? If we plot them both on the same canvas we can understand better, and it’s pretty (to my eye at least).
p_spaceC_tbl <-
post_samplesM2_tbl[1:1000, ] %>%
ggplot(aes(x = b_time_c, y = b_Intercept, color = sigma)) +
geom_point(alpha = 0.5) +
geom_point(data = post_samplesM1_tbl, aes(x = b_time, y = b_Intercept, color = sigma), alpha = 0.5) +
scale_color_viridis_c() +
labs(
title = "Credible Parameter Values for Models 1 and 2",
subtitle = "Model 1 is Un-Centered, Model 2 is Centered",
x = expression(beta["time"]),
y = expression(alpha["Intercept"])) +
ylim(c(54, 80))
Now we see they aren’t as different as they first seemed. They cover very similar ranges for the slope and the un-centered model covers a wider range of plausible intercepts.
I’ve been looking for a good time to fire up the rayshader package and I’m not throwing away my shot here. Plotting with rayshader feels like a superpower that I shouldn’t be allowed to have. It’s silly how easy it is to make these ridiculous visuals. First, a fancy 3d plot providing some perspective on the relative “heights” of theta.
#par(mfrow = c(1, 1))
#plot_gg(p_spaceC_tbl, width = 5, height = 4, scale = 300, multicore = TRUE, windowsize = c(1200, 960),
# fov = 70, zoom = 0.45, theta = 330, phi = 40)
#Sys.sleep(0.2)
#render_depth(focus = 0.7, focallength = 200)
If you want more, this code below renders a video guaranteed to impress small children and executives.
#install.packages("av")
#library(av)
# Set up the camera position and angle
#phivechalf = 30 + 60 * 1/(1 + exp(seq(-7, 20, length.out = 180)/2))
#phivecfull = c(phivechalf, rev(phivechalf))
#thetavec = 0 + 60 * sin(seq(0,359,length.out = 360) * pi/180)
#zoomvec = 0.45 + 0.2 * 1/(1 + exp(seq(-5, 20, length.out = 180)))
#zoomvecfull = c(zoomvec, rev(zoomvec))
# Actually render the video.
#render_movie(filename = "hex_plot_fancy_2", type = "custom",
# frames = 360, phi = phivecfull, zoom = zoomvecfull, theta = thetavec)
Step 6: Push the parameters back through the model
After a lot of work we have finally identified the credible values for our model parameters. We now want to see what sort of predictions our posterior makes. Again, I’ll work with both the centered and un-centered data to try to understand the difference between the approaches. The first step in both cases is to create a sequence of time data to predict off of. For some reason I couldn’t get the predict() function in brms to cooperate so I wrote my own function to predict values. You enter a time value and the function makes a temperature prediction for every combination of mean and standard deviation derived from the parameters in the posterior distribution. Our goal will be to map this function over the sequence of predictor values we just set up.
#sequence of time data to predict off of. Could use the same for both models but I created 2 for clarity
time_seq_tbl <- tibble(pred_time = seq(from = -15, to = 60, by = 1))
time_seq_tbl_2 <- tibble(pred_time_2 = seq(from = -15, to = 60, by = 1))
#function that takes a time value and makes a prediction using model_1 (un-centered)
rk_predict <-
function(time_to_sim){
rnorm(n = nrow(post_samplesM1_tbl),
mean = post_samplesM1_tbl$b_Intercept + post_samplesM1_tbl$b_time*time_to_sim,
sd = post_samplesM1_tbl$sigma
)
}
#function that takes a time value and makes a prediction using model_2 (centered)
rk_predict2 <-
function(time_to_sim){
rnorm(n = nrow(post_samplesM2_tbl),
mean = post_samplesM2_tbl$b_Intercept + post_samplesM2_tbl$b_time_c*time_to_sim,
sd = post_samplesM2_tbl$sigma
)
}
#map the first prediction function over all values in the time sequence
#then calculate the .025 and .975 quantiles in anticipation of 95% prediction intervals
predicts_m1_tbl <- time_seq_tbl %>%
mutate(preds_for_this_time = map(pred_time, rk_predict)) %>%
mutate(percentile_2.5 = map_dbl(preds_for_this_time, ~quantile(., .025))) %>%
mutate(percentile_97.5 = map_dbl(preds_for_this_time, ~quantile(., .975)))
#same for the 2nd prediction function
predicts_m2_tbl <- time_seq_tbl_2 %>%
mutate(preds_for_this_time = map(pred_time_2, rk_predict2)) %>%
mutate(percentile_2.5 = map_dbl(preds_for_this_time, ~quantile(., .025))) %>%
mutate(percentile_97.5 = map_dbl(preds_for_this_time, ~quantile(., .975)))
#visualize what is stored in the nested prediction cells (sanity check)
test_array <- predicts_m2_tbl[1, 2] %>% unnest(cols = c(preds_for_this_time))
test_array %>%
round(digits = 2) %>%
head(5) %>%
kable(align = rep("c", 1))
preds_for_this_time
68.13
61.67
65.55
62.12
64.05
And now the grand finale - overlay the 95% prediction intervals on the original data along with the credible values of mean. We see there is no difference between the predictions made from centered data vs. un-centered.
big_enchilada <-
tibble(h=0) %>%
ggplot() +
geom_point(
data = ablation_dta_tbl, aes(x = time, y = temp),
colour = "#481567FF",
size = 2,
alpha = 0.6
) +
geom_point(
data = ablation_dta_tbl, aes(x = time_c, y = temp),
colour = "#55C667FF",
size = 2,
alpha = 0.6
) +
geom_abline(aes(intercept = b_Intercept, slope = b_time),
data = post_samplesM1_tbl,
alpha = 0.1, color = "gray50"
) +
geom_abline(
slope = mean(post_samplesM1_tbl$b_time),
intercept = mean(post_samplesM1_tbl$b_Intercept),
color = "blue", size = 1
) +
geom_abline(aes(intercept = b_Intercept, slope = b_time_c),
data = post_samplesM2_tbl,
alpha = 0.1, color = "gray50"
) +
geom_abline(
slope = mean(post_samplesM2_tbl$b_time_c),
intercept = mean(post_samplesM2_tbl$b_Intercept),
color = "blue", size = 1
) +
geom_ribbon(
data = predicts_m1_tbl, aes(x = predicts_m1_tbl$pred_time, ymin = predicts_m1_tbl$percentile_2.5, ymax = predicts_m1_tbl$percentile_97.5), alpha = 0.25, fill = "pink", color = "black", size = .3
) +
geom_ribbon(
data = predicts_m2_tbl, aes(x = predicts_m2_tbl$pred_time_2, ymin = predicts_m2_tbl$percentile_2.5, ymax = predicts_m2_tbl$percentile_97.5), alpha = 0.4, fill = "pink", color = "black", size = .3
) +
labs(
title = "Regression Line Representing Mean of Slope",
subtitle = "Centered and Un-Centered Predictor Data",
x = "Time (s)",
y = "Temperature (C)"
) +
scale_x_continuous(limits = c(-10, 37), expand = c(0, 0)) +
scale_y_continuous(limits = c(40, 120), expand = c(0, 0))
What a ride! This seemingly simple problem really stretched my brain. There are still a lot of question I want to go deeper on - diagnostics for the MCMC, impact of the regularizing priors, different between this workflow and frequentist at various sample sizes and priors, etc… but that will have to wait for another day.
Thank you for reading.
To leave a comment for the author, please follow the link and comment on their blog: [R]eliability.
R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you're looking to post or find an R/data-science job.
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
Never miss an update!
Subscribe to R-bloggers to receive
e-mails with the latest R posts.
(You will not see this message again.)
Click here to close (This popup will not appear again)
|
__label__pos
| 0.971997 |
Running Images Cartoon, Sony Dvp-sr510h Manual, Pet Proof Carpet Reviews, Outdoor Gourmet Grills Parts, Quito Ecuador Elevation Meters, Bernat Pop Yarn Colors, Online Radio Codes, Mx Linux Vs Linux Mint 2020, Teresa R Brown, 30 Inch Whole House Fan, Book Titles Related To Music, " /> Running Images Cartoon, Sony Dvp-sr510h Manual, Pet Proof Carpet Reviews, Outdoor Gourmet Grills Parts, Quito Ecuador Elevation Meters, Bernat Pop Yarn Colors, Online Radio Codes, Mx Linux Vs Linux Mint 2020, Teresa R Brown, 30 Inch Whole House Fan, Book Titles Related To Music, " />
who invented dynamic programming
Topics in this lecture … He named it Dynamic Programming to hide the fact he was really … Lectures in Dynamic Programming and Stochastic Control Arthur F. Veinott, Jr. Spring 2008 MS&E 351 Dynamic Programming and Stochastic Control Department of Management Science and Engineering Stanford University Stanford, California 94305 Dynamic Programming History ‘Bellman … explained that he invented the name “dynamic programming” to hide the fact that he was doing mathematical research at RAND under a Secretary of Defense who “had a pathological fear and hatred of the term, research.” He settled on “dynamic programming” because it would be … This article reviews the history and theory of dynamic programming (DP), a recursive method of solving sequential decision problems under uncertainty. ‘Hello World:’ Programming Has … He settled on the term ‘dynamic programming’ because it A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events, especially in the context of Markov information sources and hidden Markov models (HMM).. Dynamic Programming 11.1 Overview Dynamic Programming is a powerful technique that allows one to solve many different types of problems in time O(n2) or O(n3) for which a naive approach would take exponential time. It wasn’t until The C Programming Language when “Hello World” really took off. The 1956 Dynamic Programming kicks off with examples that can be applied to ordinary calculus. Dynamic programming is widely considered the only feasible way of solving general stochastic optimal control problems. Why Is Dynamic Programming Called Dynamic Programming? Professor George Dantzig: Linear Programming Founder Turns 80 SIAM News, November 1994 In spite of impressive developments in computational optimization in the last 20 years, including the rapid advance of interior point methods, the simplex method, invented by George B. Dantzig in 1947, has stood the test of time quite … It discusses computational algorithms for the numerical solution of DP problems, and an important limitation in our ability to solve realistic large-scale dynamic programming … In this lecture, we discuss this technique, and present a few key examples. Using this example aims to reinforce the unpopular idea that humans can, in fact, talk to computers. Bellman named it Dynamic Programming because at the time, RAND (his employer), disliked mathematical research and didn't want to fund it. The algorithm has found universal application in decoding … he invented the name ‘dynamid programming’ to hide the fact that he was doing mathe matical research at RAND under a Secretary of Defense who ‘had a pathological fear and hatred of the term, research’. It suffers from what Bellman called "the curse of dimensionality," meaning that its computational requirements grow exponentially with the number of state variables, but it is still far more efficient and more widely … Richard Bellman invented DP in the 1950s. Also known as backward induction, it is used to nd optimal decision rules in figames ... was there that he invented the term dynamic programming that is now the generally accepted synonym for backward … Dynamic Programming is a recursive method for solving sequential decision problems (hereafter abbre-viated as SDP). In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing.
Running Images Cartoon, Sony Dvp-sr510h Manual, Pet Proof Carpet Reviews, Outdoor Gourmet Grills Parts, Quito Ecuador Elevation Meters, Bernat Pop Yarn Colors, Online Radio Codes, Mx Linux Vs Linux Mint 2020, Teresa R Brown, 30 Inch Whole House Fan, Book Titles Related To Music,
Napsat komentář
|
__label__pos
| 0.80391 |
(284 Views)
possible hardware issues (ACPI and USB)
BIOS, Win 7, Win Vista, Win XP
Question
I deleted Windows from my new U300s and am currently experiencing a number of bugs which appear to be BIOS -related.
Is it possible to update my BIOS without Windows?
What are my options?
Answer
You can use your Windows key to activate an install of Windows from Microsoft media, as long as they are the same version (Do double check that that you have the right version, though it will not matter if it were a 32 or 64 bit - refer to the Windows sticker on your machine.)
Note that you may need to use the phone option to complete activation.
|
__label__pos
| 0.59388 |
1. HUGE Androidforums.com UPDATE! Learn what's new (and download the new app!)
Dismiss Notice
My prediction....New dimension in mobile access
Last Updated:
1. ckali7
ckali7 Well-Known Member This Topic's Starter
Joined:
Jan 12, 2010
Messages:
124
Likes Received:
8
So here it is, I'm calling it now.....I'm predicting that within the next year that we'll see a company (maybe google??) offering network service WITHOUT cellular contracts.
What I'm saying is that I think that Google's (Google Talk) program along with other VOIP programs will soon replace the need for cellular service. It seems that this is where Google is heading with their unlocked phones and their Google Voice project. Imagine being able buy network service for like $30/month with no cellular contract.....
Thoughts?
Advertisement
2. itype2slo
itype2slo Well-Known Member
Joined:
Nov 18, 2009
Messages:
172
Likes Received:
15
Why is there no cure for cancer? Why has the US not found Bin Laden? Why isn't there a viable alternative fuel to gasoline?..... Because, the people that have the power set the rules. That being said, I think you are right, but I don't see a major provider jumping on board any time soon.
3. JrzDroid
JrzDroid Well-Known Member
Joined:
Nov 30, 2009
Messages:
1,682
Likes Received:
308
Google Voice service w/ Full data + talk for $30 a month? Sign me up please.
I can already downgrade my texting and calling plans to the minimum and just use voice for calling and sms for FREE! Google should buy T mobile, switch to LTE asap, expand their coverage to that of Verizons and have plans in the 30-60 range. What other provider could compete?
4. IOWA
IOWA Mr. Logic Pants Moderator
Joined:
Dec 2, 2009
Messages:
8,852
Likes Received:
2,367
I could compete. I have an exclusive on the two cans(del monte) and a string model.
5. ckali7
ckali7 Well-Known Member This Topic's Starter
Joined:
Jan 12, 2010
Messages:
124
Likes Received:
8
Just make sure you download an app killer for those cans or your transmissions will start slowing down.... :)
Share This Page
Loading...
|
__label__pos
| 0.765315 |
phpday 2022
ReflectionClass::newInstanceArgs
(PHP 5 >= 5.1.3, PHP 7, PHP 8)
ReflectionClass::newInstanceArgsCrea una nueva instancia de clase a partir de los argumentos dados
Descripción
public ReflectionClass::newInstanceArgs(array $args = ?): object
Crea una nueva instancia de la clase, pasando los argumentos al constructor de la clase.
Parámetros
args
Los parámetros a pasar al constructor de la clase, como array.
Valores devueltos
Devuelve una nueva instancia de la clase.
Ejemplos
Ejemplo #1 Uso básico de ReflectionClass::newInstanceArgs()
<?php
$clase
= new ReflectionClass('ReflectionFunction');
$instancia $clase->newInstanceArgs(array('substr'));
var_dump($instancia);
?>
El resultado del ejemplo sería:
object(ReflectionFunction)#2 (1) {
["name"]=>
string(6) "substr"
}
Errores/Excepciones
Lanza una excepción de tipo ReflectionException si el constructor de la clase no es público.
Lanza una excepción de tipo ReflectionException si la clase no tuviera un constructor y el parámetro args contuviera uno o más parámetros.
Ver también
add a note add a note
User Contributed Notes 11 notes
up
5
foxbunny
11 years ago
It should be noted that the the values in the array are mapped to constructor arguments positionally, rather than by name, so using an associative array will not make any difference.
up
5
sarfraznawaz2005 at gmail dot com
12 years ago
I use reflection class and also detect whether arguments are passed by reference or passed by value
and then initiate/call the method successfully with those arguments:
<?php
if (count($args) > 1)
{
if (
method_exists($class_name'__construct') === false)
{
exit(
"Constructor for the class <strong>$class_name</strong> does not exist, you should not pass arguments to the constructor of this class!");
}
$refMethod = new ReflectionMethod($class_name'__construct');
$params = $refMethod->getParameters();
$re_args = array();
foreach(
$params as $key => $param)
{
if (
$param->isPassedByReference())
{
$re_args[$key] = &$args[$key];
}
else
{
$re_args[$key] = $args[$key];
}
}
$refClass = new ReflectionClass($class_name);
$class_instance = $refClass->newInstanceArgs((array) $re_args);
}
?>
up
3
richardcook at gmail dot com
12 years ago
the newInstanceArgs function cannot call a class' constructor if it has references in its arguments, so be careful what you pass into it:
<?php
class Foo {
function
__construct (&$arr) {
$this->arr = &$arr;
}
function
createInstance () {
$reflectionClass = new ReflectionClass("Bar");
return
$reflectionClass->newInstanceArgs(array($this, $this->arr));
}
function
mod($key, $val) {
$this->arr[$key] = $val;
}
}
class
Bar {
function
__construct (&$foo, &$arr) {
$this->foo = &$foo;
$this->arr = &$arr;
}
function
mod($key, $val) {
$this->arr[$key] = $val;
}
}
$arr = array();
$foo = new Foo($arr);
$arr["x"] = 1;
$foo->mod("y", 2);
$bar = $foo->createInstance();
$bar->mod("z", 3);
echo
"<pre>";
print_r($arr);
print_r($foo);
print_r($bar);
echo
"</pre>";
/*
Output:
Warning: Invocation of Bar's constructor failed in [code path] on line 31
Fatal error: Call to a member function mod() on a non-object in [code path] on line 58
*/
?>
up
2
kirillsaksin at no-spam dot yandex dot ru
6 years ago
Hack to properly instantiate class with private constructor:
<?php
class TestClass
{
private
$property;
private function
__construct($argument)
{
$this->property = $argument;
}
}
$ref = new ReflectionClass(TestClass::class);
$instance = $ref->newInstanceWithoutConstructor();
var_dump($instance);
echo
PHP_EOL . '------------------------' . PHP_EOL . PHP_EOL;
$constructor = $ref->getConstructor();
$constructor->setAccessible(true);
$constructor->invokeArgs($instance, ['It works!']);
var_dump($instance);
// Output:
// class TestClass#3 (1) {
// private $property =>
// NULL
// }
//
// ------------------------
//
// class TestClass#3 (1) {
// private $property =>
// string(9) "It works!"
// }
?>
up
1
talk at stephensugden dot com
11 years ago
This is the way I dynamically instantiate objects in my lightweight IoC container
<?php
class SimpleContainer {
// ...
// Creates an instance of an object with the provided array of arguments
protected function instantiate($name, $args=array()){
if(empty(
$args))
return new
$name();
else {
$ref = new ReflectionClass($name);
return
$ref->newInstanceArgs($args);
}
}
// ...
}
?>
I explicitly do NOT handle the case where a user passes constructor arguments for a constructor-less class, as I this SHOULD fail.
up
1
dev at yopmail dot com
6 years ago
With PHP 5.6, we can use the ... (T_ELLIPSIS) operator
<?php
class Test {
public function
__construct($a, $b) {
echo
$a . ' ' . $b;
}
}
$args = array(12, 34);
new
Test(... $args); // Displays "12 34"
?>
up
-1
sausage at tehsausage dot com
11 years ago
Annoyingly, this will throw an exception for classes with no constructor even if you pass an empty array for the arguments. For generic programming you should avoid this function and use call_user_func_array with newInstance.
up
-1
kevinpeno at gmail dot com
12 years ago
I misunderstood this function to be a sort of setter of Reflection::newInstance() arguments in an array form rather than a creator of new instances itself.
This function is equivilant to call_user_func_array() while Reflection::newInstance() is equivilant to call_user_func()
up
-2
gromit at mailinator dot com
13 years ago
Be aware that calling the method newInstanceArgs with an empty array will still call the constructor with no arguments. If the class has no constructor then it will generate an exception.
You need to check if a constructor exists before calling this method or use try and catch to act on the exception.
up
-3
cimbomzek1905 at hotmail dot com
4 years ago
ayyıldız tim sizin bir öğrenciniz tek hayalim olan heacker olmak istiyorum lütfen başvurumu onaylayın
up
-4
cimbomzek1905 at hotmail dot com
4 years ago
ayyıldız tim sizin bir öğrenciniz tek hayalim olan heacker olmak istiyorum lütfen başvurumu onaylayın
To Top
|
__label__pos
| 0.79506 |
CSV datasource, using and combining
(Matt Sully) #1
Matrix Version: Squiz Matrix v5.3.4.0
I am using a .csv datasource, for my tasks.
I need to try to display it as a look up from mapping.
For example: http://www.exmoor-nationalpark.gov.uk/test-area/donate-map?ticket_no=3669
So the gate ticket number is:3669.
This is the records of the datasource.
I have applied the Page contents:
%asset_listing%
%ds__ticket_no% %ds__problem_type% %ds__donatestatus% %ds__donatedescription% %ds__easting% %ds__northing% %ds__grid_ref% %ds__cost%
How do I go about linking the 2 together?
A very easy question hopefully, as I am new to using Squiz Matrix.
Thanks
(Bart Banda) #2
Hi Matt, not sure I understand the requirement. What do you mean when you say “linking the 2 together” ?
(Matt Sully) #3
Bart Thanks for coming back to me.
I have a CSV datasource. I wish the output to be a webpage with a webmap in it.
and also some information from the .csv file format, shown above. That looks like this
So it takes elements from the .csv file format?
Does that make sense?
(Bart Banda) #4
Right, that helps, thanks.
So you basically just want to list a single item from the data source list?
I think you are wanting to put the data source keywords (%ds__ticket_no%) in the default type format body copy rather than the page contents, like you have done. Something like this:
|
__label__pos
| 0.987503 |
CryptoLock(Variant) repair script
Use this script to search for files that have .encrypted appended to their name and replace them with a version from shadow copy
This powershell script will create the symlink given the ShadowCopy name you provide, it will then search the folder specified and replace all effected files removing the encrypted versions.
This script is modified version from here – https://rcmtech.wordpress.com/2016/01/27/restore-malware-encrypted-files-from-vss-snapshots/
Function New-SymLink ($link, $target)
{
#if (test-path -pathtype container $target)
#{
$command = "cmd /c mklink /d"
#}
#else
#{
# $command = "cmd /c mklink"
#}
invoke-expression "$command $link $target"
}
Function Remove-SymLink ($link)
{
if (test-path -pathtype container $link)
{
$command = "cmd /c rmdir"
}
else
{
$command = "cmd /c del"
}
invoke-expression "$command $link"
}
# Before running this script:
# Use: vssadmin list shadows to find the latest unencrypted shadow copy - see the date & time they were created
# Record the Shadow Copy Volume, and use this to create a symbolic link:
# Create a folder to hold the symbolic link: md C:\VSS
# Then use: cmd /c mklink /d C:\VSS\67 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1555\
# You need to add a trailing backslash to the Shadow Copy Volume name produced by vssadmin.
# Once done, remove the symbolic link by using: cmd /c rd C:\VSS\67
# This is the path on the file server that got encrypted:
$EncryptedPath = "E:\File Shares\"
# This is the path to your shadow copy symbolic link:
$VSSPath = "c:\vsstemp\"
# File extension that the encrypted files have:
$Extension = ".encrypted"
# File name (minus extension) used for the "How to get your stuff unencrypted" files:
$RecoverFileFilter = "HOW_TO_RESTORE_FILES"
#Be sure to inlcude the trailing \
$VSSName="\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy250\"
#The folder to be used temporarily to mount the VSS snapshot
Remove-SymLink( $VSSPath )
New-SymLink($VSSPath,$VSSName)
$FileList = Get-ChildItem -LiteralPath $EncryptedPath -Filter *$Extension -Recurse -Force
$TotalFiles = $FileList.Count
Write-Host ("Found "+$TotalFiles)
$Counter = 0
foreach($EncryptedFile in $FileList){
$DestFileName = $EncryptedFile.FullName.Replace($Extension,"")
#$VSSFileName = $DestFileName.Replace("F:\",$VSSPath)
#Strip the first 3 characters from the full path and replace it with the temporary VSS path
$StrippedName=$DestFileName.Substring(3,$DestFileName.Length-3)
$VSSFileName = "$VSSPath$StrippedName"
try{
# Use LiteralPath to prevent problems with paths containing special characters, e.g. square brackets
Copy-Item -LiteralPath $VSSFileName -Destination $DestFileName -ErrorAction Stop
Remove-Item -LiteralPath $EncryptedFile.FullName -Force
}
catch{
$Error[0]
}
Write-Progress -Activity "Fixing" -Status $DestFileName -PercentComplete ($Counter/$TotalFiles*100)
$Counter++
}
Write-Progress -Activity "Fixing" -Completed
Write-Host "Done recoverying files. Now cleaning up."
$RecoveryFileList = Get-ChildItem -LiteralPath $EncryptedPath -Filter *$RecoverFileFilter* -Recurse
foreach($RecoveryFile in $RecoveryFileList){
try{
Remove-Item -LiteralPath $RecoveryFile.FullName -force -ErrorAction Stop
}
catch{
$Error[0]
}
}
|
__label__pos
| 0.966412 |
How to Calculate the Mode in R
This is something I keep looking up, because for whatever reason R does not come with a built-in function to calculate the mode. (The mode() function does something else, not what I’d expect given that there are mean() and median()…) It’s quite easy to write a short function to calculate the mode in R:
Mode <- function(x) {
uni <- unique(x)
uni[which.max(tabulate(match(x, uni)))]
}
3 thoughts on “How to Calculate the Mode in R
1. Mode <- function(x) {
uni <- unique(x)
uni[which.max(tabulate(match(x, uni)))]
}
When there is no mode, this function returns a mode e.g.
fruit = c(rep('apple', 5), rep('pear', 5), rep('banana', 5))
Mode(fruit)
1. I think it is more common to say that there is more than one mode in this case. So yes, the function only gives the first mode if there are multiple ones. I have something for this: library(agrmt); modes(collapse(fruit)) # this is a function I have written to identify multiple modes. The handling is a bit different, because it uses frequency vectors — hence the collapse() function in the middle. Perhaps this helps?
Leave a Reply
Fill in your details below or click an icon to log in:
WordPress.com Logo
You are commenting using your WordPress.com account. Log Out / Change )
Google photo
You are commenting using your Google account. Log Out / Change )
Twitter picture
You are commenting using your Twitter account. Log Out / Change )
Facebook photo
You are commenting using your Facebook account. Log Out / Change )
Connecting to %s
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
__label__pos
| 0.956475 |
Grails vs. Zend Framework vs. FuelPHP
Get help choosing one of these Get news updates about these tools
Favorites
22
Favorites
6
Favorites
2
Hacker News, Reddit, Stack Overflow Stats
• 30
• 164
• 28.4K
• -
• 796
• 20.3K
• 186
• 122
• 518
GitHub Stats
Description
What is Grails?
Grails is a framework used to build web applications with the Groovy programming language. The core framework is very extensible and there are numerous plugins available that provide easy integration of add-on features.
What is Zend Framework?
Zend Framework 2 is an open source framework for developing web applications and services using PHP 5.3+. Zend Framework 2 uses 100% object-oriented code and utilises most of the new features of PHP 5.3, namely namespaces, late static binding, lambda functions and closures.
What is FuelPHP?
FuelPHP is a fast, lightweight PHP 5.4 framework. In an age where frameworks are a dime a dozen, We believe that FuelPHP will stand out in the crowd. It will do this by combining all the things you love about the great frameworks out there, while getting rid of the bad.
Pros
Why do developers choose Grails?
• Why do you like Grails?
Why do developers choose Zend Framework?
Why do you like Zend Framework?
Why do developers choose FuelPHP?
Why do you like FuelPHP?
Cons
What are the cons of using Grails?
No Cons submitted yet for Grails
Downsides of Grails?
What are the cons of using Zend Framework?
No Cons submitted yet for Zend Framework
Downsides of Zend Framework?
What are the cons of using FuelPHP?
No Cons submitted yet for FuelPHP
Downsides of FuelPHP?
Companies
What companies use Grails?
41 companies on StackShare use Grails
What companies use Zend Framework?
33 companies on StackShare use Zend Framework
What companies use FuelPHP?
2 companies on StackShare use FuelPHP
Integrations
What tools integrate with Grails?
4 tools on StackShare integrate with Grails
What tools integrate with Zend Framework?
3 tools on StackShare integrate with Zend Framework
What tools integrate with FuelPHP?
2 tools on StackShare integrate with FuelPHP
What are some alternatives to Grails, Zend Framework, and FuelPHP?
• Node.js - Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications
• Rails - Web development that doesn't hurt
• Android SDK - The Android SDK provides you the API libraries and developer tools necessary to build, test, and debug apps for Android.
• Django - The Web framework for perfectionists with deadlines
See all alternatives to Grails
Interest Over Time
Get help choosing one of these
|
__label__pos
| 0.99727 |
4
$\begingroup$
In general, Frobenius splitting only defines on field of characteristic $p$ (algebraically closed) field.
I am reading Brion and Kumar's book and I can see that there are geometric results can be proven by it on flag variety such as all higher cohomology groups of ample line bundle vanish, rational singularity, normality etc.
I am pretty curious about what kind of results over characteristic zero were/could be proven by Frobenius splitting technique. Can anyone give me some examples or reference about this aspect?
$\endgroup$
• 1
$\begingroup$ Techniques such as Frobenius splitting were introduced in prime characteristic because it's so hard to work out analogues involving geometry or representations of algebraic groups already known in characteristic 0. But there are fruitful interactions between characteristic $p$ questions and quantum groups (often at a root of unity) in characteristic 0, while Frobenius splitting methods have applications to quantum groups; see for instance a paper by Kumar-Littelmann, Algebraization of Frobenius splitting via quantum groups, Ann. of Math. 155 (2002), 491–551. $\endgroup$ – Jim Humphreys Nov 8 '15 at 14:54
• 1
$\begingroup$ If you have access to MathSciNet, it may be helpful to check their list of 100+ citations of the Brion-Kumar book. But again I'd stress that the problems solved using Frobenius splitting tend to be solved already in characteristic 0 by more traditional methods. $\endgroup$ – Jim Humphreys Nov 8 '15 at 14:56
• 3
$\begingroup$ I'm not sure this will pass even the 'techniques such as Frobenius splitting' filter, but to me two amazing applications of characteristic p to prove results in characteristic zero are Mori's proof of the Hartshorne conjecture and the Deligne-Illusie (algebraic) proof of Kodaira vanishing $\endgroup$ – aginensky Nov 8 '15 at 16:36
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Browse other questions tagged or ask your own question.
|
__label__pos
| 0.842457 |
Commit 2c93c6fd authored by Ben Gamari's avatar Ben Gamari 🐢 Committed by Ben Gamari
rts: Introduce non-moving heap census
This introduces a simple census of the non-moving heap (not to be
confused with the heap census used by the heap profiler). This
collects basic heap usage information (number of allocated and free
blocks) which is useful when characterising fragmentation of the
nonmoving heap.
parent 6db4be95
...@@ -467,6 +467,7 @@ library ...@@ -467,6 +467,7 @@ library
sm/MBlock.c sm/MBlock.c
sm/MarkWeak.c sm/MarkWeak.c
sm/NonMoving.c sm/NonMoving.c
sm/NonMovingCensus.c
sm/NonMovingMark.c sm/NonMovingMark.c
sm/NonMovingScav.c sm/NonMovingScav.c
sm/NonMovingSweep.c sm/NonMovingSweep.c
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#include "NonMoving.h" #include "NonMoving.h"
#include "NonMovingMark.h" #include "NonMovingMark.h"
#include "NonMovingSweep.h" #include "NonMovingSweep.h"
#include "NonMovingCensus.h"
#include "StablePtr.h" // markStablePtrTable #include "StablePtr.h" // markStablePtrTable
#include "Schedule.h" // markScheduler #include "Schedule.h" // markScheduler
#include "Weak.h" // dead_weak_ptr_list #include "Weak.h" // dead_weak_ptr_list
...@@ -747,6 +748,10 @@ static void nonmovingMark_(MarkQueue *mark_queue, StgWeak **dead_weaks, StgTSO * ...@@ -747,6 +748,10 @@ static void nonmovingMark_(MarkQueue *mark_queue, StgWeak **dead_weaks, StgTSO *
ASSERT(nonmovingHeap.sweep_list == NULL); ASSERT(nonmovingHeap.sweep_list == NULL);
debugTrace(DEBUG_nonmoving_gc, "Finished sweeping."); debugTrace(DEBUG_nonmoving_gc, "Finished sweeping.");
traceConcSweepEnd(); traceConcSweepEnd();
#if defined(DEBUG)
if (RtsFlags.DebugFlags.nonmoving_gc)
nonmovingPrintAllocatorCensus();
#endif
// TODO: Remainder of things done by GarbageCollect (update stats) // TODO: Remainder of things done by GarbageCollect (update stats)
......
/* -----------------------------------------------------------------------------
*
* (c) The GHC Team, 1998-2018
*
* Non-moving garbage collector and allocator: Accounting census
*
* This is a simple space accounting census useful for characterising
* fragmentation in the nonmoving heap.
*
* ---------------------------------------------------------------------------*/
#include "Rts.h"
#include "NonMoving.h"
#include "Trace.h"
#include "NonMovingCensus.h"
struct NonmovingAllocCensus {
uint32_t n_active_segs;
uint32_t n_filled_segs;
uint32_t n_live_blocks;
uint32_t n_live_words;
};
// N.B. This may miss segments in the event of concurrent mutation (e.g. if a
// mutator retires its current segment to the filled list).
static struct NonmovingAllocCensus
nonmovingAllocatorCensus(struct NonmovingAllocator *alloc)
{
struct NonmovingAllocCensus census = {0, 0, 0, 0};
for (struct NonmovingSegment *seg = alloc->filled;
seg != NULL;
seg = seg->link)
{
census.n_filled_segs++;
census.n_live_blocks += nonmovingSegmentBlockCount(seg);
unsigned int n = nonmovingSegmentBlockCount(seg);
for (unsigned int i=0; i < n; i++) {
StgClosure *c = (StgClosure *) nonmovingSegmentGetBlock(seg, i);
census.n_live_words += closure_sizeW(c);
}
}
for (struct NonmovingSegment *seg = alloc->active;
seg != NULL;
seg = seg->link)
{
census.n_active_segs++;
unsigned int n = nonmovingSegmentBlockCount(seg);
for (unsigned int i=0; i < n; i++) {
if (nonmovingGetMark(seg, i)) {
StgClosure *c = (StgClosure *) nonmovingSegmentGetBlock(seg, i);
census.n_live_words += closure_sizeW(c);
census.n_live_blocks++;
}
}
}
for (unsigned int cap=0; cap < n_capabilities; cap++)
{
struct NonmovingSegment *seg = alloc->current[cap];
unsigned int n = nonmovingSegmentBlockCount(seg);
for (unsigned int i=0; i < n; i++) {
if (nonmovingGetMark(seg, i)) {
StgClosure *c = (StgClosure *) nonmovingSegmentGetBlock(seg, i);
census.n_live_words += closure_sizeW(c);
census.n_live_blocks++;
}
}
}
return census;
}
void nonmovingPrintAllocatorCensus()
{
for (int i=0; i < NONMOVING_ALLOCA_CNT; i++) {
struct NonmovingAllocCensus census =
nonmovingAllocatorCensus(nonmovingHeap.allocators[i]);
uint32_t blk_size = 1 << (i + NONMOVING_ALLOCA0);
// We define occupancy as the fraction of space that is used for useful
// data (that is, live and not slop).
double occupancy = 100.0 * census.n_live_words * sizeof(W_)
/ (census.n_live_blocks * blk_size);
if (census.n_live_blocks == 0) occupancy = 100;
(void) occupancy; // silence warning if !DEBUG
debugTrace(DEBUG_nonmoving_gc, "Allocator %d (%d bytes - %d bytes): "
"%d active segs, %d filled segs, %d live blocks, %d live words "
"(%2.1f%% occupancy)",
i, 1 << (i + NONMOVING_ALLOCA0 - 1), 1 << (i + NONMOVING_ALLOCA0),
census.n_active_segs, census.n_filled_segs, census.n_live_blocks, census.n_live_words,
occupancy);
}
}
/* -----------------------------------------------------------------------------
*
* (c) The GHC Team, 1998-2018
*
* Non-moving garbage collector and allocator: Accounting census
*
* ---------------------------------------------------------------------------*/
#pragma once
void nonmovingPrintAllocatorCensus(void);
Markdown is supported
0%
or
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first!
Please register or to comment
|
__label__pos
| 0.973884 |
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. Join them; it only takes a minute:
Sign up
Here's how it works:
1. Anybody can ask a question
2. Anybody can answer
3. The best answers are voted up and rise to the top
We can join the north pole and the south pole of an sphere by an unlimited number of geodesics.
1: Is this property still valid if we take any manifold that is diffeomorphic to the sphere, i.e. are there any two points in this manifold that are connected by an infinite number of geodesic?
2: If the answer of question 1 is no, is it possible to find a manifold diffeomorphic to the sphere, such that, for all two distinct points there is only one geodesic passing through this points?
Thanks
share|cite|improve this question
For (1), are you asking about any metric? For instance, if you've chosen the pullback of the spherical metric, then every two points are connected by uncountably many geodesics. – Neal Oct 27 '12 at 13:09
up vote 3 down vote accepted
For $2$ the answer is no. In fact, more generally, on any closed manifold there are always at least two points with two geodesics between them.
The reason is that every closed Riemannian manifold has at least one closed geodesic. I thought this result was due to Birhoff, but according to https://www.encyclopediaofmath.org/index.php/Closed_geodesic it's due to Lyusternik and Fet. (In the non-simply connected case, it's not too hard to prove and is due to Cartan).
Now, let $\gamma:[0,L]$ be a unit speed closed geodesic. We may assume wlog that $\gamma$ is minimal in the sense that if $\gamma$ is restricted to any subinterval of $[0,L]$, the resulting geodesic is not a closed geodesic.
Now, consider the points $\gamma(0)$ and $\gamma(L/2)$. If these are not the same point, then the geodesic $\gamma$ and "follow $\gamma$ backwards from $\gamma(0)$" are two geodesics between $2$ different points.
If, on the other hand, $\gamma(0) = \gamma(L/2)$ (but $\gamma'(0)\neq \gamma'(L/2)$, since otherwise we'd contradict minimality of $\gamma$), then we repeat the argument with $\gamma(0)$ and $\gamma(L/4)$. Eventually the sequence $\gamma(L/2), \gamma(L/4),...,\gamma(L/2^k)$ gets within the injectivity radius at $\gamma(0)$, and then the argument stops.
share|cite|improve this answer
Very interesting @JasonDeVito. Thank you. Any idead for 1? – Tomás Oct 27 '12 at 13:22
The two geodesics produced by this proof are always continuations of each other. Is is also possible to find two points with geodesics between them whose maximal extensions are different subsets of the manifold? (This would necessarily require stronger conditions than closedness because it's not true for the real projective plane). – Henning Makholm Oct 27 '12 at 13:23
@Tomás, no, just take an ellipsoid with three different axis lengths, so it is not a surface of revolution. There was a fair amount of published work either side of 1900 about this sort of thing. The dissertation of Bliss characterizing all geodesics of a torus of revolution ("anchor ring") was published in the Annals about 1901. – Will Jagy Oct 27 '12 at 15:27
Thanks you , @WillJagy. – Tomás Oct 28 '12 at 11:12
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.911785 |
Personal tools
You are here: Home Research Trends & Opportunities New Media and New Digital Economy Data Science and Analytics Data Science and Landscape User Knowledge, Data Modeling. and Visualization
User Knowledge, Data Modeling, and Visualization
Castle_Bonn_Germany_092820A
[Castle, Bonn, Germany]
- User Knowledge, Data Modeling and Visualization
In modern technology, the level of knowledge is increasing day by day. This growth is reflected in volume, velocity and variety. Understanding this knowledge is essential for individuals to extract meaningful insights from it. With advances in computer and image-based technologies, visualization has become one of the most important platforms for extracting, interpreting, and communicating information.
In data modeling, visualization is the process of extracting knowledge to reveal the detailed data structures and processes of data.
- Data Visualization
Data visualization is the practice of transforming information into a visual environment, such as a map or graph, to make it easier for the human mind to understand data and draw insights from it. The main goal of data visualization is to more easily identify patterns, trends, and outliers in large datasets. The term is often used interchangeably with other terms, including infographics, information visualization, and statistical graphics.
Data visualization is one of the steps of the data science process, which states that after data is collected, processed, and modeled, it must be visualized to draw conclusions. Data visualization is also an element of the broader discipline of Data Presentation Architecture (DPA), which aims to identify, locate, manipulate, format and deliver data in the most efficient way possible.
Data visualization is important to almost any career. Teachers can use it to display test results for students, computer scientists can use it to explore advances in artificial intelligence (AI), or executives who want to share information with stakeholders. It also plays an important role in big data projects. As businesses accumulated large amounts of data in the early days of the big data trend, they needed a way to quickly and easily get an overview of the data. Visualizers are a natural fit.
For similar reasons, visualization is at the heart of advanced analytics. When data scientists write advanced predictive analytics or machine learning (ML) algorithms, it becomes important to visualize the output to monitor results and ensure the model is performing as expected. This is because visualizations of complex algorithms are often easier to interpret than numerical outputs.
- Data Modeling
Data modeling refers to the process of creating a visual representation of an entire information system or parts thereof to convey relationships between data points and structures. The purpose is to show the types of data stored in the system, the relationships between the data types, the format and properties of the data, and how the data is grouped and organized.
Data models are usually created around business requirements. Requirements and rules are predefined through feedback obtained from business stakeholders so that they can be used to design new systems. The data modeling process starts with gathering information about business needs from stakeholders and end users. Business requirements are then translated into data structures to develop a specific database design.
Today, data modeling has applications in every field you can think of, from financial institutions to the healthcare industry. A LinkedIn study named data modeling the fastest-growing occupation in the current job market.
- Data Modeling and Visualization: Key Similarities
Following are the key similarities between data modeling and visualization:
• They both deal with data: data is central to data modeling and data visualization. They help users make sense of ambiguous data sets and obtain relevant metrics to help make better decisions.
• No need for ML algorithms: Neither data modeling nor visualization requires the use of machine learning algorithms to get correct results.
• They both use visual elements: In both data modeling and data visualization, answers are in the form of visual elements, not text or numbers. However, they differ in the types of visual elements used.
• No data analysis required: Neither data modeling nor visualization requires analyzing data. Instead, data engineers and data modelers go straight to the data as-is to find inconsistencies in the data.
[More to come ...]
Document Actions
|
__label__pos
| 0.987921 |
Stack Overflow is a community of 4.7 million programmers, just like you, helping each other.
Join them; it only takes a minute:
Sign up
Join the Stack Overflow community to:
1. Ask programming questions
2. Answer and help your peers
3. Get recognized for your expertise
I have a question about these two C statements:
1. x = y++;
2. t = *ptr++;
With statement 1, the initial value of y is copied into x then y is incremented.
With statement 2, We look into the value pointed at by *ptr, putting that into variable t, then sometime later increment ptr.
For statement 1, the suffix increment operator has higher precedence than the assignment operator. So shouldn't y be incremented first and then x is assigned to the incremented value of y?
I'm not understanding operator precedence in these situations.
share|improve this question
You'll probably want to read about sequence points then if this is confusing to you. – Jeff Mercado May 26 '12 at 6:47
You're mistaken about the meaning of your 2]. Post-increment always yields the value from before the increment, then sometime afterward increments the value.
Therefore, t = *ptr++ is essentially equivalent to:
t = *ptr;
ptr = ptr + 1;
The same applies with your 1] -- the value yielded from y++ is the value of y before the increment. Precedence doesn't change that -- regardless of how much higher or lower the precedence of other operators in the expression, the value it yields will always be the value from before the increment, and the increment will be done sometime afterwards.
share|improve this answer
Chk this out. As I said the pointer gets incremented first. c-faq.com/aryptr/ptrary2.html – Pkp May 26 '12 at 6:49
1
Check this out: "The result of the postfix ++ operator is the value of the operand. After the result is obtained, the value of the operand is incremented." (from §6.5.2.4/2 of the C99 standard). – Jerry Coffin May 26 '12 at 6:51
you are correct. I thought I could trust the c-faq , following the standard it the best policy. Apologies. – Pkp May 26 '12 at 6:53
1
@Pkp: I think you're misinterpreting the FAQ. The question being addressed there is whether the ++ will apply to the pointer itself, or what the pointer refers to (which, BTW, is also what precedence determines in this case). The answer to that is that it applies to the pointer itself, not the pointee. – Jerry Coffin May 26 '12 at 6:58
Oh. now i see it. Thanks – Pkp May 26 '12 at 7:09
Difference between pre-increment and post-increment in C:
Pre-increment and Post-increment are built-in Unary Operators. Unary means: "A function with ONE input". "Operator" means: "a modification is done to the variable".
The increment (++) and decrement (--) builtin Unary operators modify the variable that they are attached to. If you tried to use these Unary Operators against a constant or a literal, you will get an error.
In C, here is a list of all the Built-in Unary operators:
Increment: ++x, x++
Decrement: −−x, x−−
Address: &x
Indirection: *x
Positive: +x
Negative: −x
Ones_complement: ~x
Logical_negation: !x
Sizeof: sizeof x, sizeof(type-name)
Cast: (type-name) cast-expression
These builtin operators are functions in disguise that take the variable input and place the result of the calculation back out into the same variable.
Example of post-increment:
int x = 0; //variable x receives the value 0.
int y = 5; //variable y receives the value 5
x = y++; //variable x receives the value of y which is 5, then y
//is incremented to 6.
//Now x has the value 5 and y has the value 6.
//the ++ to the right of the variable means do the increment after the statement
Example of pre-increment:
int x = 0; //variable x receives the value 0.
int y = 5; //variable y receives the value 5
x = ++y; //variable y is incremented to 6, then variable x receives
//the value of y which is 6.
//Now x has the value 6 and y has the value 6.
//the ++ to the left of the variable means do the increment before the statement
Example of post-decrement:
int x = 0; //variable x receives the value 0.
int y = 5; //variable y receives the value 5
x = y--; //variable x receives the value of y which is 5, then y
//is decremented to 4.
//Now x has the value 5 and y has the value 4.
//the -- to the right of the variable means do the decrement after the statement
Example of pre-decrement:
int x = 0; //variable x receives the value 0.
int y = 5; //variable y receives the value 5
x = --y; //variable y is incremented to 6, then variable x receives
//the value of y which is 6.
//x has the value 4 and y has the value 4.
//the -- to the right of the variable means do the decrement before the statement
share|improve this answer
int rm=10,vivek=10;
printf("the preincrement value rm++=%d\n",++rmv);//the value is 11
printf("the postincrement value vivek++=%d",vivek++);//the value is 10
share|improve this answer
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.534293 |
This short post is prompted by a question that came in through Twitter – I *knew* it was worth joining and spending time on it (http://twitter.com/PaulRandal).
The (paraphrased) question is "can FILESTREAM data be stored remotely?". This has been confusing people, and neither FILESTREAM BOL nor my FILESTREAM whitepaper (see here) explicitly answer the question.
The FILESTREAM data container for a database must be placed on an NTFS volume on locally-connected storage. Just like database data and log files, the directory cannot be on a UNC share. The confusion comes from the fact that FILESTREAM data *can* be *accessed* remotely through a UNC share – but as far as the host instance is concerned, the FILESTREAM storage must be local.
A second question that came up a while ago is whether FILESTREAM data containers can share the same directory or be nested. The answers are kind-of, and no, respectively. Let's see.
I'll create the first database with a FILESTREAM data container:
CREATE DATABASE FileStreamTestDB1 ON PRIMARY
(NAME = FileStreamTestDB1_data, FILENAME = N'C:\SQLskills\FSTestDB1_data.mdf'),
FILEGROUP FileStreamFileGroup CONTAINS FILESTREAM
(NAME = FileStreamTestDB1Documents, FILENAME = N'C:\SQLskills\FSDC\Documents')
LOG ON
(NAME = FileStreamTestDB1_log, FILENAME = N'C:\SQLskills\FSTestDB1_log.ldf');
GO
And now let's try another database with the same parent directory:
CREATE DATABASE FileStreamTestDB2 ON PRIMARY
(NAME = FileStreamTestDB2_data, FILENAME = N'C:\SQLskills\FSTestDB2_data.mdf'),
FILEGROUP FileStreamFileGroup CONTAINS FILESTREAM
(NAME = FileStreamTestDB2Documents, FILENAME = N'C:\SQLskills\FSDC\Documents2')
LOG ON
(NAME = FileStreamTestDB2_log, FILENAME = N'C:\SQLskills\FSTestDB2_log.ldf');
GO
This works fine. You can't have another database use the *same* directory as the first database (i.e. N'C:\SQLskills\FSDC\Documents'), but two FILESTREAM data containers can have the same parent directory.
And now let's try a nested one:
CREATE DATABASE FileStreamTestDB3 ON PRIMARY
(NAME = FileStreamTestDB3_data, FILENAME = N'C:\SQLskills\FSTestDB3_data.mdf'),
FILEGROUP FileStreamFileGroup CONTAINS FILESTREAM
(NAME = FileStreamTestDB3Documents, FILENAME = N'C:\SQLskills\FSDC\Documents\Documents3')
LOG ON
(NAME = FileStreamTestDB3_log, FILENAME = N'C:\SQLskills\FSTestDB3_log.ldf');
GO
Msg 5136, Level 16, State 2, Line 1
The path specified by 'C:\SQLskills\FSDC\Documents\Documents3' cannot be used for a FILESTREAM container since it is contained in another FILESTREAM container.
Msg 1802, Level 16, State 2, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
Doesn't work, as expected, as this is documented in BOL here.
Thanks
|
__label__pos
| 0.665582 |
/* * This file is part of MPlayer. * * MPlayer is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * MPlayer is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License along * with MPlayer; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. */ #ifndef MPLAYER_VCD_READ_DARWIN_H #define MPLAYER_VCD_READ_DARWIN_H #define _XOPEN_SOURCE 500 #include #include #include #include #include #include #include #include #include #include #include #include "compat/mpbswap.h" #include "core/mp_msg.h" #include "stream.h" //=================== VideoCD ========================== #define CDROM_LEADOUT 0xAA typedef struct { uint8_t sync [12]; uint8_t header [4]; uint8_t subheader [8]; uint8_t data [2324]; uint8_t spare [4]; } cdsector_t; typedef struct mp_vcd_priv_st { int fd; cdsector_t buf; dk_cd_read_track_info_t entry; struct CDDiscInfo hdr; CDMSF msf; } mp_vcd_priv_t; static inline void vcd_set_msf(mp_vcd_priv_t* vcd, unsigned int sect) { vcd->msf = CDConvertLBAToMSF(sect); } static inline unsigned int vcd_get_msf(mp_vcd_priv_t* vcd) { return CDConvertMSFToLBA(vcd->msf); } static int vcd_seek_to_track(mp_vcd_priv_t* vcd, int track) { struct CDTrackInfo entry; memset( &vcd->entry, 0, sizeof(vcd->entry)); vcd->entry.addressType = kCDTrackInfoAddressTypeTrackNumber; vcd->entry.address = track; vcd->entry.bufferLength = sizeof(entry); vcd->entry.buffer = &entry; if (ioctl(vcd->fd, DKIOCCDREADTRACKINFO, &vcd->entry)) { mp_msg(MSGT_STREAM,MSGL_ERR,"ioctl dif1: %s\n",strerror(errno)); return -1; } vcd->msf = CDConvertLBAToMSF(be2me_32(entry.trackStartAddress)); return VCD_SECTOR_DATA*vcd_get_msf(vcd); } static int vcd_get_track_end(mp_vcd_priv_t* vcd, int track) { struct CDTrackInfo entry; if (track > vcd->hdr.lastTrackNumberInLastSessionLSB) { mp_msg(MSGT_OPEN, MSGL_ERR, "track number %d greater than last track number %d\n", track, vcd->hdr.lastTrackNumberInLastSessionLSB); return -1; } //read track info memset( &vcd->entry, 0, sizeof(vcd->entry)); vcd->entry.addressType = kCDTrackInfoAddressTypeTrackNumber; vcd->entry.address = trackhdr.lastTrackNumberInLastSessionLSB?track+1:vcd->hdr.lastTrackNumberInLastSessionLSB; vcd->entry.bufferLength = sizeof(entry); vcd->entry.buffer = &entry; if (ioctl(vcd->fd, DKIOCCDREADTRACKINFO, &vcd->entry)) { mp_msg(MSGT_STREAM,MSGL_ERR,"ioctl dif2: %s\n",strerror(errno)); return -1; } if (track == vcd->hdr.lastTrackNumberInLastSessionLSB) vcd->msf = CDConvertLBAToMSF(be2me_32(entry.trackStartAddress) + be2me_32(entry.trackSize)); else vcd->msf = CDConvertLBAToMSF(be2me_32(entry.trackStartAddress)); return VCD_SECTOR_DATA*vcd_get_msf(vcd); } static mp_vcd_priv_t* vcd_read_toc(int fd) { dk_cd_read_disc_info_t tochdr; struct CDDiscInfo hdr; dk_cd_read_track_info_t tocentry; struct CDTrackInfo entry; CDMSF trackMSF; mp_vcd_priv_t* vcd; int i, min = 0, sec = 0, frame = 0; //read toc header memset(&tochdr, 0, sizeof(tochdr)); tochdr.buffer = &hdr; tochdr.bufferLength = sizeof(hdr); if (ioctl(fd, DKIOCCDREADDISCINFO, &tochdr) < 0) { mp_msg(MSGT_OPEN,MSGL_ERR,"read CDROM toc header: %s\n",strerror(errno)); return NULL; } //print all track info mp_msg(MSGT_IDENTIFY, MSGL_INFO, "ID_VCD_START_TRACK=%d\n", hdr.firstTrackNumberInLastSessionLSB); mp_msg(MSGT_IDENTIFY, MSGL_INFO, "ID_VCD_END_TRACK=%d\n", hdr.lastTrackNumberInLastSessionLSB); for (i=hdr.firstTrackNumberInLastSessionLSB ; i<=hdr.lastTrackNumberInLastSessionLSB + 1; i++) { if (i <= hdr.lastTrackNumberInLastSessionLSB) { memset( &tocentry, 0, sizeof(tocentry)); tocentry.addressType = kCDTrackInfoAddressTypeTrackNumber; tocentry.address = i; tocentry.bufferLength = sizeof(entry); tocentry.buffer = &entry; if (ioctl(fd,DKIOCCDREADTRACKINFO,&tocentry)==-1) { mp_msg(MSGT_OPEN,MSGL_ERR,"read CDROM toc entry: %s\n",strerror(errno)); return NULL; } trackMSF = CDConvertLBAToMSF(be2me_32(entry.trackStartAddress)); } else trackMSF = CDConvertLBAToMSF(be2me_32(entry.trackStartAddress) + be2me_32(entry.trackSize)); //mp_msg(MSGT_OPEN,MSGL_INFO,"track %02d: adr=%d ctrl=%d format=%d %02d:%02d:%02d\n", if (i<=hdr.lastTrackNumberInLastSessionLSB) mp_msg(MSGT_OPEN,MSGL_INFO,"track %02d: format=%d %02d:%02d:%02d\n", (int)tocentry.address, //(int)tocentry.entry.addr_type, //(int)tocentry.entry.control, (int)tocentry.addressType, (int)trackMSF.minute, (int)trackMSF.second, (int)trackMSF.frame ); if (mp_msg_test(MSGT_IDENTIFY, MSGL_INFO)) { if (i > hdr.firstTrackNumberInLastSessionLSB) { min = trackMSF.minute - min; sec = trackMSF.second - sec; frame = trackMSF.frame - frame; if ( frame < 0 ) { frame += 75; sec --; } if ( sec < 0 ) { sec += 60; min --; } mp_msg(MSGT_IDENTIFY, MSGL_INFO, "ID_VCD_TRACK_%d_MSF=%02d:%02d:%02d\n", i - 1, min, sec, frame); } min = trackMSF.minute; sec = trackMSF.second; frame = trackMSF.frame; } } vcd = malloc(sizeof(mp_vcd_priv_t)); vcd->fd = fd; vcd->hdr = hdr; vcd->msf = trackMSF; return vcd; } static int vcd_end_track(mp_vcd_priv_t* vcd) { return vcd->hdr.lastTrackNumberInLastSessionLSB; } static int vcd_read(mp_vcd_priv_t* vcd,char *mem) { if (pread(vcd->fd,&vcd->buf,VCD_SECTOR_SIZE,vcd_get_msf(vcd)*VCD_SECTOR_SIZE) != VCD_SECTOR_SIZE) return 0; // EOF? vcd->msf.frame++; if (vcd->msf.frame==75) { vcd->msf.frame=0; vcd->msf.second++; if (vcd->msf.second==60) { vcd->msf.second=0; vcd->msf.minute++; } } memcpy(mem,vcd->buf.data,VCD_SECTOR_DATA); return VCD_SECTOR_DATA; } #endif /* MPLAYER_VCD_READ_DARWIN_H */
|
__label__pos
| 0.582849 |
...
View Full Version : Writing line break in JS
spacepoet
01-08-2012, 06:52 PM
Hello:
First post here on CS ... I am wondering why my code will not write a new line or line break.
This is fine:
<script>
document.write("Hello and welcome to javascript!");
</script>
But when I try to put a message on two lines, it does not work. The book I'm reading and several websites have stated the below codes will work, but neither do:
<script>
document.write("Hello and welcome to javascript! \n");
document.write("It is a tricky language to learn!")
</script>
<script>
document.writeln("Hello and welcome to javascript!");
document.writeln("It is a tricky language to learn!")
</script>
Am I missing something?
Amphiluke
01-08-2012, 07:00 PM
Why not simply output a line break explicitly?
document.write("Hello and welcome to javascript!<br />");
document.write("It is a tricky language to learn!")
DanInMa
01-08-2012, 08:08 PM
well the code looks fine. maybe try including the type attribute in your script tag, and make sure your using a valid document type for the page itself.
spacepoet
01-09-2012, 05:15 AM
Hi:
I tried the type attribute already:
<script type="text/javascript">
document.write("Hello and welcome to javascript! \n");
document.write("It is a tricky language to learn!")
</script>
but is does not work that way, either ...
Which is why I am confused ...
Any other ideas?
xelawho
01-09-2012, 05:32 AM
1. Find another way to display text. Almost all of them are better than document.write
2. if you must, as per Amphiluke's suggestion:
<script type="text/javascript">
document.write("Hello and welcome to javascript!<br>It is a tricky language to learn!");
</script>
Old Pedant
01-09-2012, 07:46 AM
Here's some questions for you, SpacePoet.
When was the last time you created some ordinary HTML?
And when you did that, how did you force a line break on the screen?
Didn't you use <br/> (or at least <br>)??
So why would you think that creating line breaks from JavaScript would be any different?
Looks to me like you completely ignored Amphiluke's advice in post #2.
FWIW, this is an HTML thing, nothing to do with JavaScript. HTML *requires* that browsers treat *ALL* "whitespace" (that is, spaces, line breaks, tabs, etc.) the same. To wit: Any number of ANY kind of whitespace is rendered on the screen as a single space character.
********
N.B.: Of course there are other ways to get a line break with HTML.
Examples:
<pre>
The pre tag will preserve
line breaks and all spaces.
</pre>
...
<textarea>
And textareas preserve line
breaks and spaces as well.
</textarea>
But by far the most common thing to use is <br/>
jmrker
01-09-2012, 03:44 PM
Hi:
I tried the type attribute already:
<script type="text/javascript">
document.write("Hello and welcome to javascript! \n");
document.write("It is a tricky language to learn!")
</script>
but is does not work that way, either ...
Which is why I am confused ...
Any other ideas?
Also, because we don't see any other code that might be on the page,
be sure that you are not overwriting the document.write with
a later part of your program. :D
spacepoet
01-10-2012, 06:09 PM
Hello:
Thanks for the replies but it didn't answer the question to why it is not working. Do modern browsers not support this code anymore?
I know I can use "<br />" or CSS "clear: both", etc.
I am starting JS from the beginning and following code examples from a book (Learning PHP, MySQL, and JavaScript: A Step-By-Step Guide to Creating Dynamic Websites - Robin Nixon). I find it to be a good book, but this simple code does not work:
<html>
<head></head>
<body>
<script type="text/javascript">
document.write("Hello and welcome to javascript! \n");
document.write("It is a tricky language to learn!")
</script>
</body>
</html>
All of the examples I have found on other sites use the same "\n" or "document.writeln (Which I am told does not work in xhtml)."
The point is I am trying to learn to write JS code from the very beginning so I can get better at writing my own code.
Is there a better or more modern way to do the above?
Philip M
01-10-2012, 06:20 PM
<script type="text/javascript">
document.write("Hello and welcome to javascript! <br>");
document.write("It is a tricky language to learn!");
alert ("Hello and welcome to javascript! \nIt is a tricky language to learn!")
</script>
<br> is the HTML code for a newline. document.write() outputs HTML.
\n is the Javascript code for a newline. Windows translates that internally into \r\n (Carriage Return - Line Feed or CRLF)
Old Pedant
01-10-2012, 08:56 PM
Philip is exactly correct.
Look, SpacePoet, your code (from post #8) would result in the following HTML:
<html>
<head></head>
<body>
Hello and welcome to javascript!
It is a tricky language to learn!
</body>
</html>
That is, the JavaScript code would indeed put a LINE BREAK after the first line created by document.write, thanks to the \n in there.
But now that you has done that writing, it is up to the browser to follow the HTML rules.
And, as I said, HTML dictates that multiple whitespace characters are converted into a single space.
So that will render on the screen as simply
Hello and welcome to javascript! It is a tricky language to learn!
Look, I know you know ASP coding. In ASP you might code this as
<html>
<head></head>
<body>
<%
Response.Write "Hello and welcome to javascript!" & vbNewLine
Response.Write "It is a tricky language to learn!"
%>
</body>
</html>
And surely if you did that from ASP you would NOT be surprised to find that there is no line break in the screen presentation. Surely you would instead use
Response.Write "Hello and welcome to javascript!<br/>"
to force the line break. Right?
So WHY are you surprised to find that you must use <br/> with JavaScript, as well?????
**********
In any case, document.write should almost be outlawed. It's quite often a really bad choice when using JavaScript to create page content. For one thing, if you use document.write *after* a page is fully loaded, you WIPE OUT all content on the page (including even the JavaScript that did the document.write!). So okay, use document.write when you are playing around, but then be prepared to stop using it except on rare occasions.
Old Pedant
01-10-2012, 09:00 PM
Is there a better or more modern way to do the above?
Yes.
<html>
<head>
<script type="text/javascript">
function addMessage( )
{
document.getElementById("message").innerHTML =
"Hello and welcome to javascript!<br/>"
+ "It is one of the easiest of all languages to learn!";
}
window.onload = addMessage;
</script>
</head>
<body>
Demonstration:
<div id="message">this will be wiped out</div>
</body>
</html>
felgall
01-10-2012, 09:08 PM
Or even more modern:
<html>
<head>
</head>
<body>
Demonstration:
<div id="message">this will be wiped out</div>
<script type="text/javascript">
(function() {
document.getElementById("message").innerHTML =
"Hello and welcome to javascript!<br/>"
+ "It is one of the easiest of all languages to learn!";
})();
</script>
</body>
</html>
This variant provides the following additional benefits:
1. The rest of the page loads faster when the JavaScript is at the bottom.
2. The JavaScript runs sooner because it doesn't have to wait for everything else in the page to load.
3. It is as completely unobtrusive as it can be and will not interfere with any other JavaScript in the page (unless that JavaScript also wants to update the same id.
Old Pedant
01-10-2012, 09:18 PM
Ummm...why do you need a function? What does that buy you?
If you want to put the JS after the HTML, you could simply do
<html>
<head>
</head>
<body>
Demonstration:
<div id="message">this will be wiped out</div>
<script type="text/javascript">
document.getElementById("message").innerHTML =
"Hello and welcome to javascript!<br/>"
+ "It is one of the easiest of all languages to learn!";
</script>
</body>
</html>
(Yes, yes, I know...in many other situations the function will be advantageous...such as when you need to use variables and don't want name clashes.)
spacepoet
01-10-2012, 11:39 PM
...thanks for clarifying ... that makes more sense to me...
This is why learning some of this has been difficult for me. The book is a good book and very recent, but the first intro to JS and the code doesn't work .. lol ..
I will stick with the "innerHTML" examples .. what does "innerHTML" do, anyway?
One interesting question: I am noticing a tread - as mentioned - of adding the JS after the rest of the content (at least I have been coming across this is some HTML5 code).
Good idea or not? I know it will not work until after it has loaded - which is why I believe most JS is between the HEAD tags.
Thanks for the tips.
Anyway - onto lesson 2 - working with "Date();" an such ..
Oh, one last thing - what is a good JS debugger to download. I know FireFox has one, but I was hoping for other options.
xelawho
01-10-2012, 11:54 PM
... is nobody going to mention our old friend DOM?
<script>
bod=document.body;
text1=document.createTextNode("Hello and welcome to javascript!")
text2=document.createTextNode("It is a tricky language to learn!")
brk=document.createElement("br")
bod.appendChild(text1)
bod.appendChild(brk)
bod.appendChild(text2)
</script>
Old Pedant
01-11-2012, 12:29 AM
Chrome and MSIE 9 both have builtin debugger that are, if anything, even better then FireFox.
But any of them are more than adequate for most usage. Most of the time all you want to do is set breakpoints and inspect variables, and they all do that just fine.
Just avoid MSIE 8 and below. You *can* debug them using the MS Script Debugger or using Visual Studio (any variety), but not as easily as with any other browser.
**********
Don't be afraid of document.write, esp. in early lessons, but don't think it does anything magic that is outside of the rules of HTML.
spacepoet
01-11-2012, 09:01 PM
I will look into those debuggers ... just taking each lesson in my own time now that I have some to devote to me .. lol ..
DOM: Thanks for posting that. I believe I get the idea.
I did this book because it is teaching about DOM and gives good, simple examples and then builds upon them.
Should be an interesting trip ..
:)
EZ Archive Ads Plugin for vBulletin Copyright 2006 Computer Help Forum
|
__label__pos
| 0.706144 |
Report inadequate content
What is informatics!???
{
}
What is informatics!???
Informatics is the scientific study of information. This incredibly broad field is sometimes treated as the parent field for information technology and computer science , two fields which rely on informatics to organize, display, and transmit data in ways which are meaningful to users. There are a number of subfields within the discipline of informatics, such as bioinformatics, which involves the application of informatics to the field of biology, classically in the realm of health care.
Both natural and artificial systems which involve information can be examined within the framework of informations, including the brain, computer systems, and paper filing methods. Informatics is concerned with how data is collected and stored, how it is organized, and how it is retrieved and transmitted. It can also include issues like data security, storage limitations, and so forth.
{
}
{
}
Leave your comment What is informatics!???
Log in to Obolog, or create your free blog if you are not registered yet.
User avatar Your name
|
__label__pos
| 0.992785 |
Configure Role-Based Access Control (RBAC) for Microsoft Azure Resources
For Alert Logic to protect assets in Microsoft Azure, you must create an app registration with administrative permissions. Role-Based Access Control (RBAC) enables fine-grained access management for Azure accounts. When you assign a RBAC role to the app registration, you grant Alert Logic access to monitor your environments, and no further access.
This procedure requires administrative permissions in Azure, and the installation of one of the following command line interfaces:
If you have Azure CLI 1.0 installed, Microsoft recommends you upgrade to CLI 2.0 and use the deprecated CLI 1.0 only for support with the Azure Service Management (ASM) model with "classic" resources. For more information, contact Microsoft Azure support.
To configure your RBAC role in Azure, you must:
1. Create an app registration in Azure
2. Create a custom RBAC role
3. Assign the role to the user account
Create an app registration in Azure
1. Log into the Azure portal.
2. In the left menu, click Azure Active Directory.
3. On the left panel, under Manage, click App registrations.
4. Click New registration, and enter a name. Note the name of the registration, which you will need later when you create an Azure deployment.
5. Click Register. Note the Application (client) ID, and the Directory (tenant) ID, which you will need later.
6. On the left panel, under Manage, click Certificates & secrets, and then click +New client secret.
7. Enter a description, and then on Expire, select Never.
8. Click Add. Note the key value, which you will need later.
Create a custom RBAC role
RBAC roles enable fine-grained access management for Azure. After you create an app registration, you must assign an RBAC role to that registration to grant Alert Logic permission to monitor your environments.
For more information about Azure RBAC or managing roles with command-line applications, see:
To create a custom RBAC role, you must first create a role document and then create a custom role in the Azure portal.
To create a custom RBAC role, you must:
Create a role document
To create a role document:
1. Create a new text file and copy the Alert Logic RBAC role into it. Note the directory where you save the file. You must know the path and file name for later in the procedure.
2. Make the following changes to the file:
1. In the "Name": "<role name>", line, change the "<role name>" entry to the name for the app registration you just created.
2. In the "AssignableScopes":"/subscriptions/<subscription id>" line, change the <subscription ID> value to the Subscription ID found on your Azure portal Subscriptions blade.
3. Save the text file as a JSON file.
Create a custom role in Azure
To create a custom role in Azure:
1. Open either Azure CLI 2.0 or Azure PowerShell, and log in to your Azure account, and then specify the default subscription.
2. Create your custom role in Azure.
3. In the Azure portal, under Subscriptions, select your subscription, and then click select Access control (IAM).
4. Click Roles to verify that the RBAC role you created appears in the portal.
5. If the role does not appear, refresh the list of roles.
Assign the role to the user account
After you create the RBAC role, you must assign it to the Azure app you registered. In Azure, roles are assigned in the Access Control portion of the Subscriptions blade.
1. In the Azure Navigation Menu, click Subscriptions.
2. In the Subscriptions blade, select the subscription you want Alert Logic to protect, and then click Access Control (IAM). Note the subscription ID, which you will need when you create an Azure deployment.
3. Click +Add, and then click Add role assignment.
4. Select the RBAC role you created.
5. From the list, click the app you registered earlier.
6. Click SAVE.
Create a deployment in the Alert Logic console
The steps you must take to create a deployment vary based on your subscription level.
For Essentials subscriptions, see Microsoft Azure Deployment Configuration (Essentials Subscription)
For Professional subscriptions, see Microsoft Azure Deployment Configuration (Professional Subscription).
|
__label__pos
| 0.991387 |
Cara Sederhana Mencari IP Valid
Salam Sobat Sederhana, dalam artikel kali ini kita akan membahas tentang cara sederhana untuk mencari IP valid. IP atau Internet Protocol adalah sebuah protokol yang digunakan untuk mengirimkan dan menerima data di internet. Setiap perangkat yang terhubung ke internet memiliki sebuah IP address yang unik.
Apa itu IP Valid?
IP valid adalah IP yang benar-benar dapat digunakan dan terdaftar di jaringan internet. Ada beberapa jenis IP yang bisa digunakan di internet seperti IP public, IP private, dan IP loopback. Namun, tidak semua IP tersebut dapat digunakan untuk mengakses internet atau melakukan koneksi ke jaringan lain.
Untuk itu, penting bagi kita untuk mengetahui cara mencari IP valid yang dapat digunakan untuk berbagai keperluan seperti mengakses website, menjalankan aplikasi, dan lain sebagainya. Berikut ini adalah cara sederhana untuk mencari IP valid.
Cara Mencari IP Valid
1. Menggunakan Command Prompt
Salah satu cara yang paling mudah untuk mencari IP valid adalah melalui Command Prompt di Windows. Berikut ini adalah langkah-langkahnya:
Langkah-langkah
Cara Melakukannya
Buka Command Prompt
Tekan tombol Windows + R, ketik “cmd”, lalu tekan Enter
Ketik “ipconfig”
Ketik perintah “ipconfig” di Command Prompt, lalu tekan Enter
Cari IP Address
Temukan bagian “IPv4 Address” dan cari alamat IP yang tertera di sana
Dengan cara ini, kita dapat dengan mudah mengetahui alamat IP yang valid dan dapat digunakan untuk berbagai keperluan.
2. Menggunakan Website Penyedia Layanan IP Lookup
Selain melalui Command Prompt, kita dapat juga mencari IP valid melalui website penyedia layanan IP Lookup. Beberapa website tersebut antara lain:
• www.whatismyip.com
• www.iplocation.net
• www.ipchicken.com
TRENDING 🔥 Cara Membuat Aplikasi Pembelajaran Sederhana untuk Anak TK
Cara menggunakannya pun cukup mudah, kita hanya perlu mengunjungi website tersebut kemudian alamat IP akan ditampilkan di layar.
3. Menggunakan Perangkat Lunak
Jika kita memerlukan cara yang lebih canggih untuk mencari IP valid, kita dapat menggunakan perangkat lunak seperti Advanced IP Scanner, Angry IP Scanner, atau Fing. Dengan perangkat lunak tersebut, kita dapat melihat IP address yang terhubung ke jaringan kita dan mengetahui apakah alamat IP tersebut valid atau tidak.
FAQ
Apa itu IP Address?
IP address adalah sebuah alamat unik yang diberikan kepada sebuah perangkat yang terhubung ke jaringan internet. Alamat IP digunakan untuk mengidentifikasi perangkat tersebut di internet dan memungkinkan perangkat tersebut untuk berkomunikasi dengan perangkat lain di jaringan.
Apa itu IP Valid?
IP valid adalah IP yang benar-benar dapat digunakan dan terdaftar di jaringan internet. IP tersebut dapat digunakan untuk mengakses internet atau melakukan koneksi ke jaringan lain.
Bagaimana cara mengetahui IP Address saya?
Untuk mengetahui IP address pada komputer Windows, kamu dapat membuka Command Prompt dan mengetikkan “ipconfig” kemudian mencari bagian “IPv4 Address”. Sedangkan pada perangkat Android, kamu dapat membuka Pengaturan dan mencari bagian “Wi-Fi”. Klik pada jaringan yang sedang terhubung dan alamat IP akan ditampilkan di sana.
Kesimpulan
Mencari IP valid memang penting dilakukan jika kita ingin menggunakan internet atau melakukan koneksi ke jaringan lain. Kita dapat mencari IP valid dengan mudah melalui Command Prompt, website penyedia layanan IP Lookup, atau menggunakan perangkat lunak khusus. Semoga informasi ini bermanfaat untuk Sobat Sederhana. Sampai jumpa di artikel menarik lainnya!
Semoga Bermanfaat dan sampai jumpa di artikel menarik lainnya.
TRENDING 🔥 Cara Bikin Pancake Sederhana
Cara Sederhana Mencari IP Valid
|
__label__pos
| 0.999312 |
Redirecting output streams in Powershell
Found this on MS connect site discussion and reposting here because I don’t know about you but I can never remember the syntax
Posted by Microsoft on 15.2.2012 at 16:40
# In PS 3.0, we've extended output redirection to include the following streams:
# Pipeline (1)
# Error (2)
# Warning (3)
# Verbose (4)
# Debug (5)
# All (*)
# We still use the same operators
# > Redirect to a file and replace contents
# >> Redirect to a file and append to existing content
# >&1 Merge with pipeline output
# First, let's set some preference variables and get a temp file
$VerbosePreference = "Continue"; $DebugPreference = "Continue"; $filename = [System.IO.Path]::GetTempFileName()
# Scenario 1
# Merge warning output with pipeline output (3>&1)
# Pipeline output is assigned to a variable
# The error output goes to the host (no redirection)
$var = $(Write-Output "Pipeline Output"; Write-Warning "Warning Output"; Write-Error "Error Output") 3>&1
# The variable $var contains the pipeline and warning output
$var
# Scenario 2
# Redirect warning output to a file (3>$filename)
# All other output is merged with pipeline output (2>&1 4>&1 5>&1)
$var = $(Write-Output "Pipeline Output"; Write-Warning "Warning Output"; Write-Verbose "Verbose Output"; Write-Error "Error Output") 2>&1 4>&1 5>&1 3>$filename
# Pipeline, Verbose, and Error output are now in $var
$var
# Warning output is in the file
Get-Content $filename
# Scenario 3
# Redirect all output to a file (*>$filename)
$(Write-Output "Pipeline Output"; Write-Warning "Warning Output"; Write-Verbose "Verbose Output"; Write-Error "Error Output"; Write-Debug "Debug Output") *>$filename
# All output is now in the file
Get-Content $filename
# Scenario 4
# Merge warning, error, and verbose output with the pipeline output (2>&1 3>&1 4>&1)
# Redirect the pipeline output to a file (1>$filename)
# Debug output goes to the host (no redirection)
$(Write-Output "Pipeline Output"; Write-Warning "Warning Output"; Write-Verbose "Verbose Output"; Write-Error "Error Output"; Write-Debug "Debug Output") 2>&1 3>&1 4>&1 1>$filename
# All other output is now in the file
Get-Content $filename
One recent example from me:
# redirect all but debug stream into a file
# Sharepoint 2013 install - keeping for future reference
Initialize-SPResourceSecurity -Verbose 2>&1 3>&1 4>&1 1> C:\Install\install_logs\Initialize-SPResourceSecurity.txt
Tagged:
Leave a Reply
Fill in your details below or click an icon to log in:
WordPress.com Logo
You are commenting using your WordPress.com account. Log Out / Change )
Google photo
You are commenting using your Google account. Log Out / Change )
Twitter picture
You are commenting using your Twitter account. Log Out / Change )
Facebook photo
You are commenting using your Facebook account. Log Out / Change )
Connecting to %s
smsagent
Scripts, tools and tips, mostly around Microsoft SCCM and EMS
To The Point
Anything about Technology and Business
Brian's Power Windows Blog
Microsoft in the Enterprise. Windows, Hyper-V, Exchange, SQL, and more!
PowerScripting Podcast
Shownotes and links for the PowerScripting Podcast, a podcast to help people learn Windows Powershell
Learn Powershell | Achieve More
What is this Powershell of which you speak?
%d bloggers like this:
|
__label__pos
| 0.978514 |
5
$\begingroup$
Let $X$ be a complex manifold, you can assume it's compact, if necessary. We have the Dolbeault complex $$0 \rightarrow \mathcal{A}^{0,0} \xrightarrow{\bar{\partial}} \mathcal{A}^{0,1} \xrightarrow{\bar{\partial}} \ldots \xrightarrow{\bar{\partial}} \mathcal{A}^{0,n} \rightarrow 0.$$ It is well known that we can consider the completion of $\mathcal{A}^{0,*}$ with respect to the $L^2$-norm induced by some metric on $X$, get a Hilbert space $H$, and perform functional calculus on the Dolbeault operator $D=\bar{\partial} + \bar{\partial}^*$, to get a Fredholm module $(\frac{D}{\sqrt{1+D^2}} , H) \in KK(C_0(X), \mathbb{C})$ in Kasparov K-group. The reference I'm using for this process is chapter 10 of Analytic K-homology, by Higson and Roe.
Now, let's say instead of working with the Hilbert space $H$, we want to work with Hilbert $C_0(X)$-modules. So I guess I need to consider the continuous sections of the bundle $\wedge ^{0,*} T^*X$ over $X$, and $\bar{\partial}$ is a densely defined (unbounded) operator acting on this space. If I understand correctly, unlike the case of Hilbert spaces, performing functional calculus for a densely defined operator on a Hilbert module is not automatic, and one needs a "regularity" condition on the operator(the operator $T$ acting on Hilbert module $E$ is regular if it is densely defined, the adjoint $T^*$ is also densely defined, and $1+T^*T$ is invertible.).
My question is if my understanding above is correct, and more importantly, if I can perform functional calculus on the Dolbeault operator or not?(i.e. is $D$ regular?) Even if one can somehow define it up to compact operators, that's good enough for me.
Thanks in advance for any help!
Edit: As Johannes pointed out, $D$ is not $C_0(X)$-linear. However, what I need is to only define functional calculus from $$\mathcal{L}_{C_0(X)}(H_X, H^{odd})/\mathcal{K}_{C_0(X)}(H_X, H^{odd}) \rightarrow \mathcal{L}_{C_0(X)}(H_X, H^{even}) / \mathcal{K}_{C_0(X)}(H_X, H^{even})$$, where in here, $H_X$, is the canonical Hilbert $C_0(X)$-module $\mathcal{l}^2(C_0(X))$ of sequences in $C_0(X)$, $\mathcal{L}_{C_0(X)}(H_1,H_2)$ is the space of $C_0(X)$-linear bounded adjointable maps from $H_1$ to $H_2$, $\mathcal{K}$ is the ideal of compact operators, and $H^{odd}$ is Hilbert $C_0(X)$-module corresponding to $\mathcal{A}^{0,odd}$ ($H^{even}$ is defined similarly).
I realize that one might still need $D$ to be $C_0(X)$-linear to define this, but is there a way to get around this issue(since we may only need $D$ to be $C_0(X)$-linear up to compact operators for "only a dense subset" for this to make sense)?
Thanks,
$\endgroup$
2
• 1
$\begingroup$ The problem is that D is not C(X)-linear at all. $\endgroup$ – Johannes Ebert Nov 25 '17 at 20:39
• $\begingroup$ Thanks for pointing out the issue. I edited the question, since I needed something slightly weaker. I was wondering if the weaker statement above can be true? $\endgroup$ – Kashayar Nov 27 '17 at 1:02
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Browse other questions tagged or ask your own question.
|
__label__pos
| 0.990822 |
Monday, August 5, 2024
Coding
The Rise of Rust: A Language for Modern Development
Last Updated on September 28, 2023
Introduction
The programming language, Rust, has been gaining popularity and adoption in modern development. This blog chapter will explore the rise of Rust, discussing its key points and increasing usage.
Let’s delve into the world of Rust and its significance in the development community.
In the dynamic landscape of programming, Rust emerges as a powerhouse, gaining widespread recognition and adoption.
1. Surge in Popularity: Rust’s popularity skyrockets as developers seek a language that combines performance, safety, and modern syntax.
2. Adoption Across Industries: From system-level programming to web development, Rust finds applications in diverse domains, showcasing its versatility.
3. Key Features Unveiled: Explore Rust’s unique features, such as zero-cost abstractions, ownership system, and fearless concurrency, underpinning its prowess.
4. Memory Safety Revolution: Rust’s ownership model revolutionizes memory safety, providing a robust solution to common programming pitfalls.
5. Community-Driven Evolution: The vibrant Rust community fosters innovation, continuously evolving the language with a focus on developer satisfaction and usability.
Embark on a journey to unravel the ascent of Rust—a language tailor-made for the challenges of modern development.
Witness its meteoric rise, understand its core features, and discover why it’s becoming the go-to choice for developers worldwide.
Overview of Rust
Rust is a systems programming language that prioritizes safety, performance, and concurrency.
What is Rust?
Rust is a modern programming language that aims to provide reliable and efficient software development solutions.
It focuses on three key pillars:
• Safety: Rust ensures memory safety and prevents common programming errors that can lead to crashes or security vulnerabilities.
• Performance: Rust emphasizes zero-cost abstractions and low-level control to deliver high-performance applications.
• Concurrency: Rust enables developers to write concurrent code, allowing efficient utilization of modern hardware.
Origins and Development
Rust was initially developed by Mozilla as a personal project by Graydon Hoare in 2006.
It took several years of iteration and collaboration with the open-source community to reach its stable 1.0 release in 2015. Since then, Rust has gained significant popularity and support.
The language draws inspiration from various programming paradigms and languages. It incorporates ideas from C++, Haskell, and others to provide a unique and powerful development experience.
Safety, Performance, and Concurrency
Rust’s strong focus on safety makes it stand out among other languages. It achieves this through its ownership system, borrowing rules, and strict compiler checks.
The ownership system in Rust ensures that memory allocation and deallocation are managed safely, eliminating common bugs such as use-after-free and data races.
With Rust’s borrow checker, the compiler enforces strict rules around mutable and immutable references, preventing data races and guaranteeing thread safety.
This commitment to safety does come at a small cost of additional complexity and learning curve, but these trade-offs are well worth it in the long run for building reliable and efficient software.
Regarding performance, Rust’s zero-cost abstractions and low-level control allow developers to have fine-grained control over their program’s execution without sacrificing performance.
The language’s emphasis on concurrency comes with built-in support for multi-threading and asynchronous programming, enabling developers to take full advantage of modern hardware without compromising safety or performance.
In general, Rust presents a powerful option for modern development with its focus on safety, performance, and concurrency.
Although it may have a learning curve, Rust’s unique features make it an attractive choice for building reliable and efficient software systems with ease.
Read: https://learncodingusa.com/using-google-analytics-api-coding-custom-dashboards/
Benefits of Rust
Rust is gaining popularity as a language for modern development with its numerous advantages over other programming languages. In this section, we will discuss some of the key benefits of using Rust.
Strong Memory Safety Guarantees
One of the major advantages of Rust is its strong memory safety guarantees. Unlike languages like C and C++, Rust eliminates common memory-related bugs such as null pointer dereferences and buffer overflows.
These bugs can cause crashes, security vulnerabilities, and other critical issues in software.
Rust achieves memory safety through its ownership system and borrowing mechanism. The ownership system ensures that each piece of data only has one owner at a time, preventing data races and memory leaks.
The borrowing mechanism allows safe sharing of data between different parts of the code without introducing any runtime overhead.
Strict Compile-Time Checks
Rust’s compiler performs extensive static checks to ensure program correctness at compile-time. This helps catch errors early in the development process, saving time and effort in debugging later on.
The compiler checks for issues like uninitialized variables, type mismatches, and unused code.
Unlike languages with permissive compilers, Rust’s strictness can be seen as a strength rather than a limitation. By forcing developers to write robust and reliable code, Rust helps create software that is more resilient and less prone to bugs.
Prevention of Common Programming Errors and Vulnerabilities
Rust’s unique features contribute to its ability to prevent common programming errors and vulnerabilities. The ownership system and borrowing mechanism eliminate issues like null pointer exceptions and data races, making programs more secure and less prone to crashes.
Rust also provides built-in mechanisms for error handling, such as the Result and Option types. These types ensure that the presence of errors is explicitly handled in the code, reducing the chances of unexpected failures.
Additionally, Rust’s pattern matching makes it easier to handle different error scenarios.
Support for Parallelism and Concurrency
Rust’s design includes features that make it well-suited for parallel and concurrent programming. The ownership system ensures thread safety, allowing multiple threads to access and modify shared data without conflicts.
This eliminates the need for manual synchronization primitives like locks and mutexes.
Rust’s support for async/await syntax enables efficient asynchronous programming, making it easier to write scalable and responsive software.
Asynchronous programming can enhance the performance and responsiveness of applications, especially when dealing with I/O-bound operations.
In essence, Rust offers several benefits for modern development. Its strong memory safety guarantees, strict compile-time checks, prevention of common errors and vulnerabilities, and support for parallelism and concurrency make it a powerful language for building reliable and efficient software.
Read: https://learncodingusa.com/shortcuts-for-faster-coding-in-ides/
Use Cases
Explore the various domains and industries where Rust is gaining traction
Rust, an innovative programming language, has found its way into diverse domains and industries due to its unique features and capabilities. Let’s delve into some of these areas:
1. Systems Programming: Rust’s emphasis on memory safety and low-level control makes it an ideal language for developing operating systems, device drivers, embedded systems, and other system-level software.
2. Web Development: Despite being a systems language, Rust is gaining popularity in the web development space. Developers can use the Rocket framework to build high-performance web applications that are both secure and scalable.
3. Network Programming: Rust’s ability to handle concurrent programming and its lightweight runtime make it well-suited for developing networking tools, servers, and protocols. Tokio, a powerful asynchronous runtime, enables efficient network programming in Rust.
4. Game Development: Rust’s control over memory and performance, along with its modern syntax, has attracted game developers. The Amethyst game engine, built in Rust, has gained recognition for its speed and flexibility.
5. Blockchain and Cryptocurrency: Due to its focus on security and performance, Rust is a popular choice for blockchain development. Projects like Parity Ethereum and Grin utilize Rust to ensure the reliability and efficiency of their decentralized applications.
Discuss its suitability for systems programming, web development, and network programming
Rust’s versatility enables it to excel in different programming domains, making it a powerful language for systems programming, web development, and network programming:
• Systems Programming: Rust’s strict compile-time checks guarantee memory safety and prevent common bugs like null pointer dereferences and buffer overflows. Its performance is on par with C and C++, making it suitable for building efficient and secure systems software.
• Web Development: Rust’s concurrency model and thread safety, coupled with frameworks like Rocket and Actix, enable the rapid development of highly performant and secure web applications. Its static typing catches many vulnerabilities at compile-time.
• Network Programming: Rust’s asynchronous programming model, supported by Tokio, enables developers to build fast and scalable network applications with ease. Its memory safety guarantees ensure the robustness of network protocols, reducing the risk of attacks.
Provide examples of companies or projects that have adopted Rust successfully
Rust’s adoption has been steadily growing, with several notable companies and projects leveraging its power:
1. Dropbox: Dropbox has incorporated Rust into its core infrastructure to improve synchronization performance and enhance security.
2. Cloudflare: Cloudflare, a leading web infrastructure and security company, relies on Rust for its high-performance edge computing platform.
3. Mozilla: As the primary developer of Rust, Mozilla uses the language extensively in various projects, including the Firefox web browser.
4. Brave: Brave, a privacy-focused web browser, utilizes Rust to provide a secure and efficient browsing experience.
5. Discord: Discord, a popular communication platform, utilizes Rust for its audio processing system, ensuring low latency and high-quality voice communication.
These are just a few examples of successful Rust adoption, highlighting the language’s versatility and effectiveness in different domains.
Basically, Rust’s unique features and impressive performance make it an outstanding choice for a wide range of applications and industries.
Whether it’s systems programming, web development, network programming, or beyond, Rust continues to gain traction and prove its worth in the modern development landscape.
Read: https://learncodingusa.com/coding-programming-difference/
The Rise of Rust: A Language for Modern Development
Growing Community and Ecosystem
Rust, a language for modern development, has seen a significant expansion in its community of developers and enthusiasts in recent years.
Expanding Community of Rust Developers and Enthusiasts
The Rust programming language has gained immense popularity due to its strong emphasis on performance, reliability, and safety.
As a result, it has attracted a growing number of developers who are eager to harness its power.
The community of Rust developers is not only diverse but also highly collaborative.
They actively contribute to the language’s development, offer support to fellow developers, and share their knowledge and experiences through various channels, including online forums, mailing lists, and social media networks.
Availability of Documentation, Libraries, and Frameworks
One of the key factors contributing to the growth of Rust’s community is the availability of extensive documentation.
The Rust community has put significant effort into creating comprehensive and user-friendly documentation, making it easier for newcomers to learn the language and existing developers to refine their skills.
Moreover, the Rust ecosystem boasts a wide range of libraries and frameworks that developers can leverage to expedite their development process.
These libraries, often created by community members, aim to address common programming challenges and offer reusable code snippets and functionalities.
From networking and web development to data processing and cryptography, Rust libraries cater to various domains.
Vibrant Ecosystem and Active Development
One of the remarkable aspects of Rust’s growth is its vibrant ecosystem. The Rust community is known for its active development, with numerous projects and initiatives constantly taking shape.
This continuous development ensures that Rust remains relevant and up to date with the latest trends and requirements of modern development.
Rust is often praised for its package manager, Cargo, which streamlines the process of managing dependencies and building projects.
Cargo has become an integral part of the Rust development workflow, providing developers with a seamless experience.
In addition to Cargo, the Rust community actively maintains and updates popular frameworks like Rocket, Actix, and Warp, making it easier for developers to build robust web applications.
These frameworks offer powerful abstractions and essential features, empowering developers to create scalable and high-performing applications using Rust.
The rise of Rust has been accompanied by a rapidly growing community of developers and enthusiasts.
The availability of extensive documentation, libraries, and frameworks has been instrumental in attracting developers to the language.
Additionally, the vibrant ecosystem and active development happening around Rust ensure that it remains a language well-suited for modern development.
As Rust continues to evolve, its community will likely expand further, fostering innovation and providing developers with new tools and resources.
With its performance and safety guarantees, Rust is poised to play a significant role in shaping the future of software development.
Read: https://learncodingusa.com/google-app-engine/
Challenges and Limitations
Potential Challenges of Using Rust
1. Rust’s strict ownership and borrowing system can be challenging for new developers.
2. Writing code in Rust requires a deeper understanding of memory management concepts.
3. The learning curve of Rust can be steep compared to more established languages like Java or Python.
4. Rust’s compiler often produces complex error messages, which can be frustrating for beginners.
5. Porting existing codebases to Rust may require rewriting large portions of the code.
6. The community and ecosystem around Rust are still growing, leading to fewer available resources and experts.
The Learning Curve and Understanding Complex Concepts
Rust introduces new concepts like ownership, borrowing, and lifetimes, which may not be familiar to developers coming from other languages.
Understanding these concepts is crucial for writing safe and efficient Rust code.
The ownership system, while powerful, can be difficult to grasp initially. It enforces strict rules on how memory is managed, preventing common bugs like use-after-free and data races.
However, it requires developers to understand ownership transfer and lifetime scopes.
Rust’s borrowing system allows multiple references to data, but with strict rules. While preventing data races, it can be challenging to navigate these rules when writing complex code.
Developers need to understand borrowing and how it affects code organization and performance.
Lifetimes, another key concept in Rust, ensure that references remain valid. While they enable the compiler to guarantee memory safety, they can be confusing to newcomers.
Properly annotating lifetimes can sometimes be tedious and error-prone.
Limited Number of Libraries
Rust is still a relatively new language compared to more established ones like Java or JavaScript. Consequently, the number of available libraries and frameworks in Rust is more limited.
The lack of libraries can pose a challenge when developing certain applications or when porting existing codebases to Rust. Developers may need to implement functionalities from scratch or search for alternative solutions.
However, Rust’s community and ecosystem have been growing rapidly, and efforts are being made to fill these gaps.
Many popular libraries from other languages are being ported or reimplemented in Rust, providing broader options for developers.
In short, while Rust brings many advantages, it also presents challenges and limitations. Its learning curve can be steep, requiring developers to understand complex concepts like ownership, borrowing, and lifetimes.
Additionally, the limited number of libraries compared to more established languages may pose difficulties for specific use cases.
However, as Rust’s community and ecosystem continue to expand, these challenges are likely to diminish over time.
Conclusion
The rise of Rust as a language for modern development is evident. Throughout this blog post, we discussed the key points that make Rust a compelling choice for developers.
We explored its strong focus on safety and performance, along with its ability to provide concurrency without sacrificing simplicity.
We also discussed how Rust is gaining popularity and adoption in various industries, such as system programming, web development, and game development.
Its unique features, such as zero-cost abstractions and fearless concurrency, make it stand out among other programming languages.
We encourage readers to explore Rust and consider its potential benefits for their own projects.
By leveraging Rust’s powerful features, developers can write safer and more efficient code, leading to more reliable and robust software.
Whether you are a seasoned developer or just starting your journey in programming, Rust offers a valuable toolset that can enhance your development process. So, don’t hesitate to dive into Rust and discover its vast possibilities.
In summary, Rust has proven to be a language at the forefront of modern development.
Its rise in popularity and adoption is a testament to its efficacy and potential. Embark on your Rust journey and unlock the numerous benefits it offers for your projects.
Leave a Reply
Your email address will not be published. Required fields are marked *
|
__label__pos
| 0.938369 |
Dataform values provider using Array with custom object not working as expected
#1
Hello,
I’ve “Picker” type entity on dataform with values provider as below
get provinceProvider(){
if(!this._provinceProvider){
this._provinceProvider = {
key: “id”,
label: “name”,
//items: JSON.parse(appSettings.getString(“provinces”))
items:[{
“id”: “2”,
“name”: “The Eastern Cape”
}, {
“id”: “3”,
“name”: “The Free State”
}, {
“id”: “4”,
“name”: “Gauteng”
}, {
“id”: “5”,
“name”: “KwaZulu-Natal”
}, {
“id”: “6”,
“name”: “Limpopo”
}, {
“id”: “7”,
“name”: “Mpumalanga”
}, {
“id”: “8”,
“name”: “The Northern Cape”
}, {
“id”: “9”,
“name”: “North West”
}, {
“id”: “10”,
“name”: “The Western Cape”
}]
};
}
return this._provinceProvider;
}
Labels are correctly getting displayed, but when I pick an item underlying source property “provinceId” is getting assigned the “label” instead of the “id”, below is the value of provinceId printed to console during dfPropertyCommitted.
I"m expecting the value of “4” to be assigned to “provinceId”?
#2
Could anybody confirm if I’m doing something wrong here, or there is an actual issue Dataform?
|
__label__pos
| 0.999992 |
#include "blaswrap.h" #include "f2c.h" /* Subroutine */ int dtbtrs_(char *uplo, char *trans, char *diag, integer *n, integer *kd, integer *nrhs, doublereal *ab, integer *ldab, doublereal *b, integer *ldb, integer *info) { /* -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University March 31, 1993 Purpose ======= DTBTRS solves a triangular system of the form A * X = B or A**T * X = B, where A is a triangular band matrix of order N, and B is an N-by NRHS matrix. A check is made to verify that A is nonsingular. Arguments ========= UPLO (input) CHARACTER*1 = 'U': A is upper triangular; = 'L': A is lower triangular. TRANS (input) CHARACTER*1 Specifies the form the system of equations: = 'N': A * X = B (No transpose) = 'T': A**T * X = B (Transpose) = 'C': A**H * X = B (Conjugate transpose = Transpose) DIAG (input) CHARACTER*1 = 'N': A is non-unit triangular; = 'U': A is unit triangular. N (input) INTEGER The order of the matrix A. N >= 0. KD (input) INTEGER The number of superdiagonals or subdiagonals of the triangular band matrix A. KD >= 0. NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. AB (input) DOUBLE PRECISION array, dimension (LDAB,N) The upper or lower triangular band matrix A, stored in the first kd+1 rows of AB. The j-th column of A is stored in the j-th column of the array AB as follows: if UPLO = 'U', AB(kd+1+i-j,j) = A(i,j) for max(1,j-kd)<=i<=j; if UPLO = 'L', AB(1+i-j,j) = A(i,j) for j<=i<=min(n,j+kd). If DIAG = 'U', the diagonal elements of A are not referenced and are assumed to be 1. LDAB (input) INTEGER The leading dimension of the array AB. LDAB >= KD+1. B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS) On entry, the right hand side matrix B. On exit, if INFO = 0, the solution matrix X. LDB (input) INTEGER The leading dimension of the array B. LDB >= max(1,N). INFO (output) INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value > 0: if INFO = i, the i-th diagonal element of A is zero, indicating that the matrix is singular and the solutions X have not been computed. ===================================================================== Test the input parameters. Parameter adjustments */ /* Table of constant values */ static integer c__1 = 1; /* System generated locals */ integer ab_dim1, ab_offset, b_dim1, b_offset, i__1; /* Local variables */ static integer j; extern logical lsame_(char *, char *); extern /* Subroutine */ int dtbsv_(char *, char *, char *, integer *, integer *, doublereal *, integer *, doublereal *, integer *); static logical upper; extern /* Subroutine */ int xerbla_(char *, integer *); static logical nounit; #define b_ref(a_1,a_2) b[(a_2)*b_dim1 + a_1] #define ab_ref(a_1,a_2) ab[(a_2)*ab_dim1 + a_1] ab_dim1 = *ldab; ab_offset = 1 + ab_dim1 * 1; ab -= ab_offset; b_dim1 = *ldb; b_offset = 1 + b_dim1 * 1; b -= b_offset; /* Function Body */ *info = 0; nounit = lsame_(diag, "N"); upper = lsame_(uplo, "U"); if (! upper && ! lsame_(uplo, "L")) { *info = -1; } else if (! lsame_(trans, "N") && ! lsame_(trans, "T") && ! lsame_(trans, "C")) { *info = -2; } else if (! nounit && ! lsame_(diag, "U")) { *info = -3; } else if (*n < 0) { *info = -4; } else if (*kd < 0) { *info = -5; } else if (*nrhs < 0) { *info = -6; } else if (*ldab < *kd + 1) { *info = -8; } else if (*ldb < max(1,*n)) { *info = -10; } if (*info != 0) { i__1 = -(*info); xerbla_("DTBTRS", &i__1); return 0; } /* Quick return if possible */ if (*n == 0) { return 0; } /* Check for singularity. */ if (nounit) { if (upper) { i__1 = *n; for (*info = 1; *info <= i__1; ++(*info)) { if (ab_ref(*kd + 1, *info) == 0.) { return 0; } /* L10: */ } } else { i__1 = *n; for (*info = 1; *info <= i__1; ++(*info)) { if (ab_ref(1, *info) == 0.) { return 0; } /* L20: */ } } } *info = 0; /* Solve A * X = B or A' * X = B. */ i__1 = *nrhs; for (j = 1; j <= i__1; ++j) { dtbsv_(uplo, trans, diag, n, kd, &ab[ab_offset], ldab, &b_ref(1, j), & c__1); /* L30: */ } return 0; /* End of DTBTRS */ } /* dtbtrs_ */ #undef ab_ref #undef b_ref
|
__label__pos
| 0.99999 |
5
(let (lineStr (thing-at-point 'line t))
(body-form......))
throws the error,
call-interactively: `let' bindings can have only one value-form: thing-at-point, (quote line), t [2 times]
Can't a let variable take a elisp form, and eval the form, get its value assigned?
13
Read the documentation: C-hf let RET
let is a special form in `C source code'.
(let VARLIST BODY...)
Bind variables according to VARLIST then eval BODY.
The value of the last form in BODY is returned.
Each element of VARLIST is a symbol (which is bound to nil)
or a list (SYMBOL VALUEFORM) (which binds SYMBOL to the value of VALUEFORM).
All the VALUEFORMs are evalled before any symbols are bound.
So VARLIST is a list:
(let (...)
BODY)
And elements which bind values are also lists:
(let ((SYMBOL VALUEFORM)
(SYMBOL VALUEFORM))
BODY)
9
You're missing a necessary set of parentheses around the VARLIST. Your code should look like this:
(let ((lineStr (thing-at-point 'line t))) (body-form......))
(See the documentation for 'let' at 'C-h f let'.) The purpose of the VARLIST being a list is to allow multiple variables to be bound within a single 'let'. The extra parentheses separate the VARLIST from the BODY.
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.87418 |
quick-lint-js
Find bugs in JavaScript programs.
Install for Sublime Text on macOS
Install prerequisites
Install the LSP Sublime Text package:
1. Open Sublime Text.
2. Open the Command Palette (press Control-Shift-P) and type "Install Package Control".
3. Open the Command Palette and type "Package Control: Install Package", then type "LSP".
Install quick-lint-js
1. Download the latest release of quick-lint-js for your platform:
2. Extract the downloaded archive.
3. Copy the extracted quick-lint-js/bin/quick-lint-js file into a directory in your PATH.
For example, copy it into the /usr/local/bin directory.
4. Open a terminal window. Type quick-lint-js --version and press enter to verify the installation succeeded.
Configure Sublime Text
After installing quick-lint-js, you need to register quick-lint-js in Sublime Text's LSP package:
1. Open Sublime Text.
2. Open the Command Palette (press Control-Shift-P) and type "Preferences: LSP Settings".
3. In the LSP.sublime-settings file which opens, add the following code. If "clients" already exists, add "quick-lint-js" inside the existing { }:
{
"clients": {
"quick-lint-js": {
"command": ["quick-lint-js", "--lsp-server"],
"enabled": true,
"languageId": "javascript",
"syntaxes": ["Packages/JavaScript/JavaScript.sublime-syntax"]
}
}
}
Sublime Text with LSP.sublime-settings open, showing closure-lsp and quick-lint-js settings
Example LSP settings for Sublime Text
|
__label__pos
| 0.996296 |
Handling PID of Genserver in params and HTML.eex template
Following is my PID while printing in log
IO.puts("++++++pid")
IO.inspect(pid)
as result I am getting
++++++pid
#PID<0.9069.0>
I am pasisng this PID in following ways-
1- inserting into session and render to live view
2- From live view passing it to html
3- Then Html to a controller.
Now, what happens I am getting error as following-
protocol Phoenix.Param not implemented for #PID<0.9069.0>. This protocol is implemented for: Map, BitString, Atom, Integer, Any
Now, I want to pass this PID as I have to used it later on. And to do so, I must ensure that it pass well in params as well as other places. How can I do this? Any help will be appreciated.
PIDs are mainly opaque, even though you could send its textual representation to the client, you weren’t be able to get back a PID from it on the server side.
If you really need your client to know the “address” of a certain GenServer, rather than letting the server know which of the running GenServers is responsible for the users session, then use a symbolic name and Registry.
1 Like
I have identifier, so could I still get the PIDs. If yes, then tell me that way.
Then you can just use the identifier when calling GenServer.call/GenServer.cast.
Both functions can deal with pids or with whatever :name you gave the GenServer when starting it.
1 Like
Even, when I am passing them into map, still getting same above error message. I am passing like this-
{_, process_id} = start_timer(job_opening, page, applied_opening, identifier)
pid = %{"process_id" => process_id}
IO.puts("++++++pid")
IO.inspect(pid)
as result-
++++++pid
%{"process_id" => #PID<0.2944.0>}
Here is how I am passing into Live view-
session: %{
tenant: Repo.get_prefix(),
duration: job_opening.duration,
attempts: attempts,
job_opening_hash: job_opening.share_url_route,
job_opening_id: job_opening.id,
user_id: user.id,
state: applied_opening.state,
identifier: identifier,
pid: pid
}
and In html-
<%= render_many @attempts.list, ApolloWeb.AttemptView, "answers/_type.html", conn: @conn, attempts: @attempts, page: @attempts.page, user: @user_id, job_opening: @job_opening_id, hash: @job_opening_hash, identifier: @identifier, pid: @pid, as: :attempt %>
Also, I have tried the function you mention- GenServer.call/GenServer.cast but they don’t accept identifier as parameter (here). Could you give example of it.
Again, you shall not use the pid to send it between client and server, it is not properly deserializable.
You can use though an atom as name for the GenServer though and send it to the client. But this does not scale well.
Better were to use Registry and :via tuples to name the GenServer on start. Then you can send the moving parts of the :via tuple to the client (perhaps a session id or something like that) and reconstruct the tuple on the server.
2 Likes
|
__label__pos
| 0.603423 |
Data Structures and Algorithms | Set 19
Following questions have been asked in GATE CS 2009 exam.
1. Let X be a problem that belongs to the class NP. Then which one of the following is TRUE?
(A) There is no polynomial time algorithm for X.
(B) If X can be solved deterministically in polynomial time, then P = NP.
(C) If X is NP-hard, then it is NP-complete.
(D) X may be undecidable.
Answer (C)
(A) is incorrect because set NP includes both P(Polynomial time solvable) and NP-Complete .
(B) is incorrect because X may belong to P (same reason as (A))
(C) is correct because NP-Complete set is intersection of NP and NP-Hard sets.
(D) is incorrect because all NP problems are decidable in finite set of operations.
2. What is the number of swaps required to sort n elements using selection sort, in the worst case?
(A) Θ(n)
(B) Θ(n log n)
(C) Θ(n2 )
(D) Θ(nn2 log n)
Answer (A)
Here is Selection Sort algorithm for sorting in ascending order.
1. Find the minimum value in the list
2. Swap it with the value in the first position
3. Repeat the steps above for the remainder of the list (starting at
the second position and advancing each time)
As we can see from the algorithm, selection sort performs swap only after finding the appropriate position of the current picked element. So there are O(n) swaps performed in selection sort.
Because swaps require writing to the array, selection sort is preferable if writing to memory is significantly more expensive than reading. This is generally the case if the items are huge but the keys are small. Another example where writing times are crucial is an array stored in EEPROM or Flash. There is no other algorithm with less data movement.
References:
http://en.wikipedia.org/wiki/Selection_sort
3. The running time of an algorithm is represented by the following recurrence relation:
if n <= 3 then T(n) = n
else T(n) = T(n/3) + cn
Which one of the following represents the time complexity of the algorithm?
(A) Θ(n)
(B) Θ(n log n)
(C) Θ(n2)
(D) Θ(n2 log n)
Answer(A)
T(n) = cn + T(n/3)
= cn + cn/3 + T(n/9)
= cn + cn/3 + cn/9 + T(n/27)
Taking the sum of infinite GP series. The value of T(n) will
be less than this sum.
T(n) <= cn(1/(1-1/3))
<= 3cn/2
or we can say
cn <= T(n) <= 3cn/2
Therefore T(n) = Θ(n)
This can also be solved using Master Theorem for solving recurrences. The given expression lies in Case 3 of the theorem.
4. The keys 12, 18, 13, 2, 3, 23, 5 and 15 are inserted into an initially empty hash table of length 10 using open addressing with hash function h(k) = k mod 10 and linear probing. What is the resultant hash table?
Answer (C)
To get the idea of open addressing concept, you can go through below lines from Wikipedia
.
Open addressing, or closed hashing, is a method of collision resolution in hash tables. With this method a hash collision is resolved by probing, or searching through alternate locations in the array (the probe sequence) until either the target record is found, or an unused array slot is found, which indicates that there is no such key in the table. Well known probe sequences include:
linear probing in which the interval between probes is fixed--often at 1.
quadratic probing in which the interval between probes increases linearly (hence, the indices are described by a quadratic function).
double hashing in which the interval between probes is fixed for each record but is computed by another hash function.
Please write comments if you find any of the answers/explanations incorrect, or you want to share more information about the topics discussed above.
GATE CS Corner Company Wise Coding Practice
Recommended Posts:
0 Average Difficulty : 0/5.0
No votes yet.
|
__label__pos
| 0.987738 |
How to write an improper fraction
What is an example of an improper fraction?
An improper fraction is a fraction in which the numerator (top number) is greater than or equal to the denominator (bottom number). Fractions such as 65 or 114 are “improper”.
How do you write a mixed number as an improper fraction?
To convert a mixed fraction to an improper fraction, follow these steps:
1. Multiply the whole number part by the fraction’s denominator.
2. Add that to the numerator.
3. Then write the result on top of the denominator.
How do you write a mixed number as a fraction?
What is 1 and 2/3 as an improper fraction?
Answer and Explanation: The mixed number 1 2/3 is 5/3 as an improper fraction.
What is 1 and 3/4 as an improper fraction?
Answer and Explanation: The mixed number 1 3/4 would be equal to the improper fraction 7/4.
What is 2 and 3/4 as an improper fraction?
23/4 as improper fraction
Hence, the improper fraction is 11/4.
What is 3/4 as a mixed number?
Basic Math Examples
Since 34 is a proper fraction, it cannot be written as a mixed number.
What is 3 and 3/4 as an improper fraction?
The improper fraction 15/4 is equivalent to the mixed fraction 3 3/4.
What is 3 as a fraction?
Decimal to fraction conversion table
DecimalFraction
0.251/4
0.285714292/7
0.33/10
0.333333331/3
What is 3/8 as a decimal?
Answer: 3/8 as a decimal is 0.375.
What are the 7 types of fractions?
Based on the numerators and denominators, fractions are classified into the following types:
• Proper Fractions.
• Improper Fractions.
• Mixed Fractions.
• Like Fractions.
• Unlike Fractions.
• Equivalent Fractions.
• Unit Fractions.
What is 2% in a fraction?
Decimal as a Fraction Calculator
2 as a fraction is 1/5.
What is 70% as a fraction?
Percent to fraction conversion table
PercentFraction
60%3/5
70%7/10
71.435/7
75%3/4
What is the fraction of 35%?
How to Write 0.35 or 35% as a Fraction?
DecimalFractionPercentage
0.459/2045%
0.48/2040%
0.357/2035%
0.36/2030%
What is 3/4 as a percent?
Answer: 3/4 is expressed as 75% in terms of Percentage.
How do you write 1/3 as a percent?
Now we can see that our fraction is 33.333333333333/100, which means that 1/3 as a percentage is 33.3333%.
How do you write 3/4 as a decimal?
3/4 as a decimal is 0.75.
What grade is a 75%?
Letter GradePercentage RangeMid-Range
A80% to 89%85%
B+75% to 79%77.5%
B70% to 74%72.5%
C+65% to 69%67.5%
How to write an improper fraction
Leave a Reply
Your email address will not be published. Required fields are marked *
This site uses Akismet to reduce spam. Learn how your comment data is processed.
Scroll to top
|
__label__pos
| 1 |
Save Points in GTA 2 Mission Scripts
The three official levels allow you to save your game at the church. I’ll explain how to do them for your maps.
The best way is to do it in a THREAD_TRIGGER command (some examples elsewhere in this guide). So let’s give an example of a save game thread:
FORWARD savepoint_1:
THREAD_TRIGGER thr_savepoint_1 = THREAD_WAIT_FOR_CHAR_IN_AREA (p1, 44.5,100.5,2.0, 3.0,1.0, savepoint_1:)
savepoint_1:
PERFORM_SAVE_GAME (thr_savepoint_1, 44.5,100.5,2.0, 3.0,1.0)
RETURN
Let’s go through this. You can call the trigger whatever you like (replace the thr_savepoint_1 with a name you like). Choose the player to wait for (p1) and the (X,Y,Z) coordinates. The two digits after the (X,Y,Z) are the check width and check height and the final one is from your FORWARD function. On the line with PERFORM_SAVE_GAME, the code is identical, but you need to tell it what the THREAD_TRIGGER name is. Hopefully this should all work!
(Originally written by Chris “Pyro” Hockley and formatted by Ben “Cerbera” Millard with full permission.)
|
__label__pos
| 0.779628 |
Huffman Encoding And Decoding Python
The name of the module refers to the full name of the inventor of the Huffman code tree algorithm: David Albert Huffman (August 9, 1925 – October 7, 1999). for all finite lists. Literally encoding means to convert body of information from one system to another system in the form of codes. Standard Huffman coding applies to a particular data set being encoded, with the set-specific symbol table prepended to the output data stream. In this tutorial, you will understand the working of Huffman coding with working code in C, C++, Java, and Python. Huffman Encoding is performed over the secret image/message before embedding and each bit of Huffman code of secret image/message is embedded inside the cover image by altering the least significant bit (LSB). Programs are created to implement algorithms. Huffman tree based on the phrase „Implementation of Huffman Coding algorithm” (source: huffman. In this article I describe the DEFLATE algorithm that GZIP implements and depends on. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file) where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for. Huffman coding is an entropy encoding algorithm used for lossless data compression. Area coding. Topics include: elements of information theory, Huffman coding, run-length coding and fax, arithmetic coding, dictionary techniques, and predictive coding. The built-in function bin returns the binary representation of a number as a string: >>> 42 42 >>> 0b101010 42 >>> bin(42) '0b101010'. Insert a node for a character in Huffman decoding tree. Let’s explain with a simple example how encoding and decoding is carried out in Bit plane compression. Huffman encoding is a method used to reduce the number of bits used to store a message. This algorithm performs few operations and Huffman encoding of secret image helps to protect the images from stealing or misuse by unintended users. Python Completions: 224. for a list of all code families known to Sage. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum. * The weight of a `Leaf` is the frequency of appearance of the character. KY - White Leghorn Pullets). 1 Introduction. Discrete Cosine transform (DCT) is performed both by classical & Chen s Flowgraph methods. Slawek Ligus 2010. To construct a Unicode string, simply call unicode(). 허프만 트리 저장하기 (Recording Huffman tree) (0) 2015. Closed Policy. Huffman decoding 04:05. js Ocaml Octave Objective-C Oracle Pascal Perl Php PostgreSQL Prolog Python Python 3 R Rust Ruby Scala Scheme Sql Server. Huffman Coding. doc), PDF File (. Problem 3: Decoding; Problem 2: Encoding. This is a closed project. 허프만 트리 저장하기 (Recording Huffman tree) (0) 2015. It yields the smallest possible number of code symbols per source symbol [7]. He asked the question: if I assigned bit sequences to each of those symbols, what assignment would produce the shortest output. You can rate examples to help us improve the quality of examples. 10 KB __license__ returns an huffman tree containing all the datas from the list. Given the above symbols and probabilities, we might ask what is the optimal way to encode them in bits. Huffman coding implementation in Python. Algorithm Visualizations. In python, ‘heapq’ is a library that lets us implement this easily. The codebook alongside encoded. Recently, Huffman code has been used to improve MFCVQ-based reversible data hiding (Yang et al. Taken from wikipedia. Hi havia una vegada un MOOC L'afició als MOOCs està fent que dediqui menys temps al blog però també em proporciona idees per a posts. Ideone is something more than a pastebin; it's an online compiler and debugging tool which allows to compile and run code online in more than 40 programming languages. Unlike ASCII codes, Huffman codes use lesser number of bits. First, a disclaimer: this is a very superficial scientific vulgatisation post about a topic that I have no formal background about, and I try to keep it very simple. Design and implementation of "Huffman coding and decoding and compression using Huffman" using matlab Jan 2019 – Apr 2019 Implement of "Image Denoising using Wavelet Transform" using python. Huffman coding algorithm was invented by David Huffman in 1952. Huffman Encoding (Finding bit-encoding for each character) Huffman encoding is a form of compression which assigns least number of bits for encoding the most frequent character. thank you. decompress already undoes for you. Compression is achieved due to this because the overall text requires lesser number of bits to encode. you encode the given string. the huffman tree for decoding. why we use python, because in python we can fi. Key generator reads the text file and writes a new key file. The first step in this process is to build a histogram of the number of occurrences of each symbol in the data to be. For the given picture, a codebook (table) is created. Ultimately though the author should investigate arithmetic encoding because it always produces equivalent or better results than Huffman, which is a derivative of the Shannon-Fano algorithm. In this example, the Most Significant Bit(MSB) alone is considered and encoded. the huffman tree for decoding. According to section 4. Developing decoding and encoding skills is essential for a solid understanding of reading. The result of the encoding is displayed in Base64 format. Huffman Coding The last phase of MP3 encoding compresses the filtered signal using Huffman coding. rgb and using a 3 quantization level. Interview Data (Identifying & Coding Themes) Open coding. 对任意文件进行压缩 并且生成压缩文件保存在磁盘上 1. Blocks now have more adaptive coding. Even if they used it, it would be more rational to give the shortest possible code, S, the shortest possible code, and for the rarest letter, T (or U, or '\ n'), give the code more authentic. (ii) It is a widely used and beneficial technique for compressing data. One of difficulties has been when experimenting with in-built python data structures such as heapq or priority queue using the example text 'DAVIDAHUFFMAN' with equal counts for several of the letters when I put these in heapq or PriorityQueue and then came to remove them the. Huffman while he was a Sc. These are the top rated real world Python examples of HuffmanCoding. Encoding a string can be done by replacing each letter in the string with its binary code (the Huffman code). Over the years various people have made good progress on decoding. Huffman’sAlgorithm: 1. hamming_code. It is suboptimal in the sense that it does not achieve the lowest possible expected code word length like Huffman coding. Huffman, Run Length, Arithmetic, LZW, and Shift Coding). Introduction to Huffman decoding. 2020-06-18 python keras encoding nlp decoding 파이썬 및 외국어와 관련된 인코딩 문제 2020-06-16 python-3. – Decoding: not contain characters outside the Base64 Alphabet. Resolve ties by giving single letter groups precedence (put to the left) over multiple letter groups, then alphabetically. you encode the given string. such a code proceeds by means of Huffman coding, an algorithm developed by David A. Unlike ASCII codes, Huffman codes use lesser number of bits. In adaptive huffman coding, the character will be inserted at the highest leaf possible to be decoded, before eventually getting pushed down the tree by higher. you need to provide input (text) file as well as the. This is a closed project. Huffman Coding (link to Wikipedia) is a compression algorithm used for loss-less data compression. Python Programming Course for Manchester, UK Attend face-to-face, remote-live, on-demand or on site at your facility On-Demand Training with Personal Facilitation. (iii) Huffman's greedy algorithm uses a table of the frequencies of occurrences of each character to build up an optimal way of representing each character as a binary string. The following pseudo code modifies the standard form of the decoding algorithm for a range scale of [0, ∑c i):. Huffman coding is a very popular algorithm for encoding data. Huffman using run lengths In my runlength/Huffman encoder the maximum runlength is 69 for the reasons explained in this document (postscript) |. Replace these bits with bits of data to be hidden. For example, the ASCII standard code used to represent text in computers encodes each character as a sequence of seven bits, and 128 characters are encoded in total. The goal Huffman coding intends to achieve is to use a lower amount of bits than origianlly used. As I mentioned earlier, I'm leaving out the details for now, since I don't want to ruin anyone else's fun in decoding the message. once before you start coding the message will make your task much easier, Decoding a message produced in this way requires the following steps: 5. •Giv e soptimal (min average code-length) prefix-free binary code to each ai ∈Σofor a givenprobabilities p(ai)>0. Huffman encoding is a method used to reduce the number of bits used to store a message. 2) recall that a Huffman tree can be represented as a sequence of integers. Image in steganography process neglects the basic demand of robustness. def idctii(x, axes=None): """ Compute a multi-dimensional inverse DCT-II over specified array axes. However, the prerequisite for this kind of compression is that both the encoding and decoding systems recognize the coding. Huffman encoding and decoding. Detail on Huffman Encoding Once you start looking at things, you'll see that there's "static" Huffman Encoding and "dynamic" or "adaptive" Huffman Encoding. The Huffman code uses the frequency of appearance of letters in the text, calculate and sort the characters from the most frequent to the least frequent. The expected output of a program for custom text with 100 000 words: 100 000 words compression (Huffman Coding algorithm) Algorithms Data structures Huffman. Using the code. 将树节点升序排序,每次选出最小的两个节点,合并为一个节点。再将此节点放回节点列表。. i have to perform some analysis using arithmetic coding instead of huffman. First you map your input string based on the original character encoding : 00 A 10 B 10 B 01 C 01 C 01 C 00 A 10 B 11 D 01 C 01 C 01 C 01 C 01 C 10 B 01 C 10 B 00 A. It should be in Python language. To decode the encoded string, follow the zeros and ones to a leaf and return the character there. In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. 23 colour image: A continuous-tone image that has more than one component. The Base64 string can be copied to the clipboard as CSS background property or HTML img tag format. exe -i compressedfilename -o. CS216: Program and Data Representation University of Virginia Computer Science Spring 2006 David Evans Lecture 15: Compression http://www. For this lab only, you may use string methods to work with. How to use Base64 Image? Convert Image to Base64. Huffman encoder and lossless compression of data. Huffman encoding is a relatively slower process since it uses two passes- one for building the statistical model and another for encoding. New method of denoising 4. GitHub Gist: instantly share code, notes, and snippets. Non-Baseline JPEG may use also Arithmetic coding. Generate binary tree which represents best encoding. 1 kB) File type Source Python version None Upload date Jan 26, 2009 Hashes View. For decoding it takes in a. Task 1 - Shannon-Fano coding and Huffman coding. Let's look at the encoding process now. "dessert"에 대해서 심볼, 출현 횟수를 "(심볼,횟수)"로 표현하면 (d,1) (e,2) (s,2) (r,1) (t,1)로 표현 할 수 있고, 각각을 이진트리의 node로 다시 표현하면 아래와 같다. Key generator reads the text file and writes a new key file. To handle multi language text, besides default ASCII string, Python support a Unicode strings datatype. Huffman Encoding. – Encoding: result must be represented in lines of no more than 76 characters each and use a carriage return followed by a linefeed (\r ). The encoding graph for systematic linear block codes is proposed. Huffman encoding is part of the deflate compression that zlib. In practice, the efficiency of Huffman decoding is a major issue in the design of the Huffman decoder. Huffman Encoding. In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. Encoding And Decoding In Java - Online base64, base64 decode, base64 encode, base64 converter, python, to text _decode decode image, javascript, convert to image, to string java b64 decode, decode64 , file to, java encode, to ascii php, decode php , encode to file, js, _encode, string to text to decoder, url characters, atob javascript, html img, c# encode, 64 bit decoder, decode linuxbase. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file) where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value. Find Complete Code at GeeksforGeeks Article: http://www. JPEG Image compression using Huffman encoding and decoding. The resulting data for all 8×8 blocks is further compressed with a lossless algorithm, a variant of Huffman encoding. A una de les “video lectures” s’explicava el format jpeg i es parlava de l’us de la codificació de Huffman dins d’aquest format. Note that this time we run the loop for length of message instead of length of string. He asked the question: if I assigned bit sequences to each of those symbols, what assignment would produce the shortest output. The process of finding or using such a code proceeds by means of Huffman coding, an algorithm developed by David A. These codes are called as prefix code. can any one please help me with the code or some kind of help?. 1 Run-length coding. The specific coding steps are. USING MATLAB Internal Guide : SVMG Phani Kumar C ECE-B Coordinator : Mrs. Python huffman_decode - 2 examples found. 1 Introduction. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum. The last node remaining in the queue T will be the final Huffman tree. exe -i compressedfilename -o. – Decoding: not contain characters outside the Base64 Alphabet. Task 1 - Shannon-Fano coding and Huffman coding. May 28, 2019. The process of finding or using such a code proceeds by means of Huffman coding, an algorithm developed by David A. Noiseless Coding (3) Example for state-of-the art coding kernel (MPEG-2/4 AAC) Multi-dimensional (2 or 4-dim. I studied using this site and write. Here you are encoding Example. Web Development JavaScript React CSS Angular PHP Node. huffman encoding and decoding method - 特許庁 可変長 復号化 方法及び装置 例文帳に追加 METHOD AND DEVICE FOR VARIABLE LENGTH DECODING - 特許庁. Huffman (PS4) 9517 Compress (LZW) 10000 (“file unchanged”) Gzip(not LZW) 8800 This is quite surprising! UVa CS216 Spring 2006 -Lecture 15: Numbers 9 GIF •Graphics Interchange Format developed by Compuserve(1987) •Algorithm: –Divide image into 8x8 blocks –Find optimal Huffman encoding for those blocks –Encode result using LZW. No need to transmit any table to the decoder. DCs are coded as the differences to the previous ones (inside macroblock component) and instead of being coded as 9-bit number they are now Huffman-coded and table is selected depending on component bit depth. You can rate examples to help us improve the quality of examples. 4 or higher. Huffman encoding and decoding python. Decoding strategies are techniques that help students to develop reading capabilities. do it in trees. I learned that this is not a canonical Huffman code (from #25798 ticket i created), so i can't just use the freq to create the Huffman for the decoding. When a child reads the words 'The ball is big,' for. First, a disclaimer: this is a very superficial scientific vulgatisation post about a topic that I have no formal background about, and I try to keep it very simple. Try these out using the encoding and decoding objects give above. A Huffman in early 1950’s: story of invention Analyze and process data before compression Not developed to compress data “on-the-fly” Represent data using variable length codes Each letter/chunk assigned a codeword/bitstring Codeword for letter/chunk is produced by traversing. 读入文件并且保存在一个字符串数组内 或者保存文件指针 或文件流对象 2. We now know how to decode for Huffman code. Solution proposal - week 13 Solutions to exercises week 13. py files) are typically compiled to an intermediate bytecode language (. Entropy coding is a type of lossless coding to compress digital data by representing frequently occurring patterns with few bits and rarely occurring patterns with many bits. This is an Open Source project, code licensed MIT. It is an algorithm which works with integer length codes. build_decoding_dict. 19: 허프만코딩 21. The encoding graph for systematic linear block codes is proposed. The last node remaining in the queue T will be the final Huffman tree. RUN - LEVEL coding is used instead of Huffman codi. MATLAB image processing codes with examples, explanations and flow charts. Task 1 - Shannon-Fano coding and Huffman coding. 디코딩 시작 (Begin the decoding of Huffman encoded file) (0) 2015. That is, the intervening data structure, the Huffman decoding tree, is completely eliminated, replacing it with a call graph that does the decoding instead. 81) is not hard to follow either, and contains a lot of useful flowcharts showing the decoding and encoding process. Huffman coding processing. USING MATLAB Internal Guide : SVMG Phani Kumar C ECE-B Coordinator : Mrs. Huffman tree is a specific method of representing each symbol. As mentioned in the text, the algorithm will use 2 auxiliary data structures, a priority queue and a binary tree. This method of compression is based on an inefficiency in normal representation of data strings. Let’s start by. For example, if you use letters as symbols and have details of the frequency of occurrence of those letters in typical strings, then you could just encode each letter with a fixed number of bits. Python Forums on Bytes. Learn Python: Online training VCDIFF also includes options for encoding/decoding with alternate byte-code tables. A Huffman tree represents Huffman codes for the character that might appear in a text file. Encoded String “1001011” represents the string “ABACA” You have to decode an encoded string using the huffman tree. For this lab only, you may use string methods to work with. Huffman coding is a very popular algorithm for encoding data. such a code proceeds by means of Huffman coding, an algorithm developed by David A. I assume the length of the file is known to the decoder; this allows the compressed file to be about 6 bits shorter than if I ensured that the file is self-delimiting in some way, for example, using the EOF character. Save changes image's si Lossless compression method to shorten Fast search in compressed text files What are some alternatives. (ii) It is a widely used and beneficial technique for compressing data. Encode a String in Huffman Coding: In order to encode a string first, we need to build a min-heap tree So, we are using a Module called heapq in Python. We now know how to decode for Huffman code. Huffman, Run Length, Arithmetic, LZW, and Shift Coding). You are given pointer to the root of the Huffman tree and a binary coded string to decode. Standard Huffman coding applies to a particular data set being encoded, with the set-specific symbol table prepended to the output data stream. The decoding process reverses these steps, except the quantization because it is irreversible. It doubles the bitrate, but is cheap and simple to implement. These users have contributed to this kata: Similar Kata: 5 kyu. As far as I can tell, the encoding used was arbitrary, and does not have any fundamental meaning. Huffman, in his 1952 paper, called Huffman Coding. such a code proceeds by means of Huffman coding, an algorithm developed by David A. Files for huffman-encoder-decoder, version 0. One, predictably, is in the. It is suboptimal in the sense that it does not achieve the lowest possible expected code word length like Huffman coding. signal decoders category is a curation of 10 web resources on , iPhone DTMF Decoder, Digital Speech Decoder, Globe-S for RTL1090. , 2^5 = 32, which is enough to represent 26 values), thus reducing the overall memory. Solution proposal - week 13 Solutions to exercises week 13. Huffman encoding ensures that our encoded bitstring is as small as possible without losing any information. Tag: python,image,encoding,character-encoding Note: I don't know much about Encoding / Decoding, but after I ran into this problem, those words are now complete jargon to me. Using the frequency table shown below, build a Huffman Encoding Tree. The standard form of arithmetic coding's decoding is also based on fractional ranges on a probability line between 0 and 1. 4 or higher. ) The first element in the result array is a simple sum. It should be in Python language. thank you. In Huffman encoding of images, a symbol represents an image block. I came across a neat shortcut to decoding a Huffman table the other day and thought I would share it. Two Types of Source (Image) Coding •Lossless coding (entropy coding) – Data can be decoded to form exactly the same bits – Used in “zip” – Can only achieve moderate compression (e. Summary of Styles and Designs. * * Every `Leaf` node of the tree represents one character of the alphabet that the tree can encode. This tutorial is focussed on the topic: Run Length Encoding in a String in Python. Programs are created to implement algorithms. Get exceptionally good at coding interviews by solving one problem every day. geeksforgeeks. Programming is writing computer code to create a program, to solve a problem. How to use Base64 Image? Convert Image to Base64. The encoding side of Huffman is fairly expensive, though; the whole data set has to be scanned and a frequency table built up. Note: Please use this button to report only Software related issues. Huffman Coding (link to Wikipedia) is a compression algorithm used for loss-less data compression. The probabilities or frequencies have to be written, as side information, to the. The encoding graph for systematic linear block codes is proposed. The built-in function bin returns the binary representation of a number as a string: >>> 42 42 >>> 0b101010 42 >>> bin(42) '0b101010'. The result of the encoding is displayed in Base64 format. One of difficulties has been when experimenting with in-built python data structures such as heapq or priority queue using the example text 'DAVIDAHUFFMAN' with equal counts for several of the letters when I put these in heapq or PriorityQueue and then came to remove them the. You need to print the actual string. How do I implement Huffman encoding and decoding using an array and not a tree? Based on how the question is formulated I’ll assume you know how to do it with a tree. Unicode in Python. In case of Huffman coding, the most generated character will get the small code and least generated character will get the large code. Thus, the lossless techniques that use Huffman encoding are considerably slower than others. For example, the ASCII standard code used to represent text in computers encodes each character as a sequence of seven bits, and 128 characters are encoded in total. This tutorial is focussed on the topic: Run Length Encoding in a String in Python. Huffman, in his 1952 paper, called Huffman Coding. Surprisingly, i was nevertheless unable to find a general-purpose module for the Python programming language that allowed for some tweaking, as was necessary for the development of a specific artistic project. For example, if you use letters as symbols and have details of the frequency of occurrence of those letters in typical strings, then you could just encode each letter with a fixed number of bits. Python source files (. Bit-tree encoding is performed like decoding, except that bit values are taken from the input integer to be encoded rather than from the result of the bit decoding functions. Each symbol at a leaf is assigned a weight (which is its relative frequency), and each non-leaf node contains a weight that is the sum of all the weights of the leaves lying below it. etc can be encoded and decoded. Huffman Coding Algorithm - Programiz. The word “codec” is a contraction of two words: “COde” and “DECode”. Entropy coding is a type of lossless coding to compress digital data by representing frequently occurring patterns with few bits and rarely occurring patterns with many bits. To decode look for match left to right, bit by bit record letter when a match is found begin next character where you left off Computer Science Huffman Encoding Example Decode!! 1011111001010! Try it!. **Functions**: 1. Python can handle various encoding processes, and different types of modules need to be imported to make these encoding techniques work. The console is straightforward to use to encode a source file to a Huffman compressed one:. Figure 2 The first step in the Huffman coding algorithm. This ADT does the work of coding and decoding using a Huffman tree. Run-length encoding (RLE) is a very simple form of data compression in which a stream of data is given as the input (i. Ideone is something more than a pastebin; it's an online compiler and debugging tool which allows to compile and run code online in more than 40 programming languages. Huffman coding is a method in which we will enter the symbols with there frequency and the output will be the binary code for each symbol. We now present an arithmetic coding view, with the aid of Figure 1. Huffman encoding ensures that our encoded bitstring is as small as possible without losing any information. Closed Policy. – Suppose your encoding of a file is 8001 bits long – Then the resulting encoded file will have 8008 bits – Clearly, there are 7 bits at the end of the encoded file that don’t correspond data in the original file – But 7 is a lot of bits for Huffman, and it’s likely that we would decode a few extra characters. In this method, lossless decoding is done using the property that the codes generated are prefix codes. The huffmandict, huffmanenco, and huffmandeco functions support Huffman coding and decoding. A una de les “video lectures” s’explicava el format jpeg i es parlava de l’us de la codificació de Huffman dins d’aquest format. Built a text file encoder using Huffman Encoding in the C language. Huffman Encoder and Decoder مارس 2020 – يونيو 2020 An implementation of the Huffman Coding algorithm for Encoding and Decoding text files in Python. This allows unambiguous, linear-time decoding: 101 b 111 d 1100 f 0 a 100 c 1101 e Prefix coding means that we can draw our code as a binary tree, with the leaves representing code-words (see Figure19. 허프만 트리 저장하기 (Recording Huffman tree) (0) 2015. People who share documents over the Internet with people who work in other languages, or with people using different computer systems, may use this feature to store the text as numeric values to ensure their recipients can download and use the files. decoding, as no code word is a prefix of any other code word. Noiseless Coding (3) Example for state-of-the art coding kernel (MPEG-2/4 AAC) Multi-dimensional (2 or 4-dim. For queries regarding questions and quizzes, use the comment area below respective pages. Ensured security of the encoded message by One Time Pad where a randomly Developed an encryption and decryption algorithm for encoding data into DNA nucleotides as well as for decoding using Python. Modular conversion, encoding and encryption online. We relate arithmetic coding to the process of sub- dividing the unit interval, and we make two points: Point I Each codeword (code point) is the sum of the proba- bilities of the preceding symbols. Insert a node for a character in Huffman decoding tree. Actually, that’s too strong for us. Encoding & Decoding; Meaning of Encoding in communication. Looks like you're on Windows with Python 2. We now present an arithmetic coding view, with the aid of Figure 1. BIT Stream Definition: A Bitstream refers to binary bits of information (1's and 0's) transferred from one device to another. , ZIP, JPEG, MPEG). py) and decode it. Huffman Code Algorithm Overview: Encoding. A Huffman tree represents Huffman codes for the character that might appear in a text file. Despite use of the word "standard" as part of its name, readers are advised that this document is not an Internet Standards Track specification; it is being. Hi, We are also using these functions. Pure Python implementation, only using standard library. LinearCode is used to represent the former. Encoding & Decoding; Meaning of Encoding in communication. This means that we can prove that decoding inverts encoding, using the “Unfoldr–Foldr Theorem” stated in the Haskell documentation for (in fact, the only property of stated there!). huffman_decode extracted from open source projects. In the remainder of this section, the encoding and decoding processes are described in more detail. That is, the intervening data structure, the Huffman decoding tree, is completely eliminated, replacing it with a call graph that does the decoding instead. When applying Huffman encoding technique on an Image, the source symbols can be either pixel intensities of the Image, or the output of an intensity mapping function. Replace these bits with bits of data to be hidden. Huffman Encoding (Finding bit-encoding for each character) Huffman encoding is a form of compression which assigns least number of bits for encoding the most frequent character. decompress Text. Like: huffman. As mentioned in the text, the algorithm will use 2 auxiliary data structures, a priority queue and a binary tree. No line separator is. It yields the smallest possible number of code symbols per source symbol [7]. The default is the current directory, but it may be more appropriate. One of difficulties has been when experimenting with in-built python data structures such as heapq or priority queue using the example text 'DAVIDAHUFFMAN' with equal counts for several of the letters when I put these in heapq or PriorityQueue and then came to remove them the. The following pseudo code modifies the standard form of the decoding algorithm for a range scale of [0, ∑c i):. In this method, lossless decoding is done using the property that the codes generated are prefix codes. If you don't understand the above, then please go and read about HUFFMAN, SHANNON-FANO coding and Minimum redundancy coding. huf file and decodes it back to it's original format. RUN - LEVEL coding is used instead of Huffman coding. Whereas Huffman encoding uses variable length sequences to represent a fixed length string (usually a character), LZW compression uses a fixed length sequence to represent a variable length string. you encode the given string. [Active] Homework 10: Huffman Code In this problem, your job is to write Huffman Code and related functionalities, in three parts: Part 1 Huffman Encoding: In this part, you generate the Huffman Codes for the different characters in the given string (contains only small case english letters) Part 2-Huffman Decoding: In this part, using the given set of codes. It’s free, open source, and most often classified as a scripting language (meaning it doesn’t require an explicit compilation step). Here you are encoding Example. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file) where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for. decompress Text. In some cases a "shortcut" is appropriate with Huffman coding. The result of the encoding is displayed in Base64 format. •Giv e soptimal (min average code-length) prefix-free binary code to each ai ∈Σofor a givenprobabilities p(ai)>0. Huffman, Run Length, Arithmetic, LZW, and Shift Coding). Binary tree python github. Let’s start by. The test data is frequencies of the letters of the alphabet in English text. Encoding And Decoding In Java - Online base64, base64 decode, base64 encode, base64 converter, python, to text _decode decode image, javascript, convert to image, to string java b64 decode, decode64 , file to, java encode, to ascii php, decode php , encode to file, js, _encode, string to text to decoder, url characters, atob javascript, html img, c# encode, 64 bit decoder, decode linuxbase. algorithm documentation: Huffman Coding. He asked the question: if I assigned bit sequences to each of those symbols, what assignment would produce the shortest output. These users have contributed to this kata: Similar Kata: 5 kyu. The specific coding steps are. huffman encoding and decoding method - 特許庁 可変長 復号化 方法及び装置 例文帳に追加 METHOD AND DEVICE FOR VARIABLE LENGTH DECODING - 特許庁. Examples: DEED 10100101 (8 bits) MUCK 111111001110111101 (18 bits) Problem 3: Decoding. However, the prerequisite for this kind of compression is that both the encoding and decoding systems recognize the coding. 20 coding: Encoding or decoding. Files for huffman-encoder-decoder, version 0. Your function will receive the encoded string and the root to the huffman tree only. 3 Outline of this Lecture Given an encoded message, decoding is the process of turning it back into the original message. that encode and decode data. C , Python. Huffman Encoding And Decoding; Huffman Encoding Errors; Making A Decode Method For Huffman Encoding? Python; PHP; Mobile Development; ASP. INF2310, spring 2017. In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. Daily Coding Problem is a mailing list for coding interview problems. test book. Slawek Ligus 2010. The weights are not used in the encoding or the decoding process. , Python III: Python. They use a String array of 26, one spot for each letter. •Giv e soptimal (min average code-length) prefix-free binary code to each ai ∈Σofor a givenprobabilities p(ai)>0. why we use python, because in python we can fi. When applying Huffman encoding technique on an Image, the source symbols can be either pixel intensities of the Image, or the output of an intensity mapping function. According to section 4. rgb and using a 1 quantization level. for all finite lists. Huffman encoding is a method for lossless compression of information. Files for huffman-encoder-decoder, version 0. This allows unambiguous, linear-time decoding: 101 b 111 d 1100 f 0 a 100 c 1101 e Prefix coding means that we can draw our code as a binary tree, with the leaves representing code-words (see Figure19. , ZIP, JPEG, MPEG). Follow 45 views (last 30 days) Alif Kusumah on 4 Dec 2012. python实现huffman编码代码. Now i wanted to evaluate using IPP Huffman Encoding/Decoding functions in different area of our software and i noticed that the functions are all deprecated. python version 3 needed. More frequent characters are assigned shorter codewords and less frequent characters are assigned longer codewords. To decode the encoded string, follow the zeros and ones to a leaf and return the character there. Check out the videos on Youtube below: Huffman Coding - Explanation and Example; Huffman Coding - Python Implementation and Demo. 2020-06-18 python keras encoding nlp decoding 파이썬 및 외국어와 관련된 인코딩 문제 2020-06-16 python-3. CS216: Program and Data Representation University of Virginia Computer Science Spring 2006 David Evans Lecture 15: Compression http://www. Using the Huffman code table, which is either explicitly contained in the compressed data stream or is already pre-fixed, Huffman decoding 62 decodes (i. We'll be using the python heapq library to implement a priority queue, so if you're unfamiliar with that library, go back and read our previous guide. Here you are encoding Example. Firstly we are going to have an introduction. Design and implementation of "Huffman coding and decoding and compression using Huffman" using matlab Jan 2019 – Apr 2019 Implement of "Image Denoising using Wavelet Transform" using python. The Huffman Codec is a record that stores two records: the Huffman Tree thatwas created in the previous exercise, and a codebook, which was created by analysing the Huffman tree. 1 kB) File type Source Python version None Upload date Jan 26, 2009 Hashes View. For one thing, a terminal can only display one codepage at a given time, and a document with an ISO-8859-* encoding can only contain one character set. Huffman Coding (link to Wikipedia) is a compression algorithm used for loss-less data compression. **Functions**: 1. It’s free, open source, and most often classified as a scripting language (meaning it doesn’t require an explicit compilation step). Each symbol at a leaf is assigned a weight (which is its relative frequency), and each non-leaf node contains a weight that is the sum of all the weights of the leaves lying below it. The specific coding steps are. linear_code. To decode look for match left to right, bit by bit record letter when a match is found begin next character where you left off Computer Science Huffman Encoding Example Decode!! 1011111001010! Try it!. Detail on Huffman Encoding Once you start looking at things, you'll see that there's "static" Huffman Encoding and "dynamic" or "adaptive" Huffman Encoding. Here’s the basic idea: each ASCII character is usually represented with 8 bits, but if we had a text filed composed of only the lowercase a-z letters we could represent each character with only 5 bits (i. doc), PDF File (. why we use python, because in python we can fi. exe -i compressedfilename -o. This means that we can prove that decoding inverts encoding, using the “Unfoldr–Foldr Theorem” stated in the Haskell documentation for (in fact, the only property of stated there!). dahuffman is a pure Python module for Huffman encoding and decoding, commonly used for lossless data compression. You can find the code here. Two 8 bit gray level image of size M X N and P X Q are used as cover image and secret image respectively. Python语言实现哈夫曼编码 9466 2017-12-02 汉语版:使用python实现huffman编码是一个能够很快地实现。所以我们选择使用python来实现我们这个程序。 l E-version: we will use python to realize this program called huffman encoding and decoding. Huffman tree (as list) char_count build_huffman_tree Codes dictionary {char:code} generate_code. Tag: python,image,encoding,character-encoding Note: I don't know much about Encoding / Decoding, but after I ran into this problem, those words are now complete jargon to me. Introduction to Huffman decoding. 3 Decode and get the original data by walking the Huffman encoding tree. The number of bits involved in encoding the string isn’t reduced. py), save the encoded string to file and then open this file from another program (script server. Huffman encoding is a method used to reduce the number of bits used to store a message. Now encoding is a simple and decoding a simple. 1) generate 2 Huffman trees( called dictionary, one for alphabet and repeat marks, one for distance) and encoding the text file with these 2 trees. HUFFMAN ENCODING AND DECODING. The default is the current directory, but it may be more appropriate. Huffman while he was a Sc. Enter a brief summary of what you are selling. In some cases a "shortcut" is appropriate with Huffman coding. The DEFLATE algorithm uses a combination of LZ77, Huffman codes and run-length-encoding; this article describes each in detail by walking through an example and developing source code to implement the algorithm. Since unipolar line encoding has one of its states at 0 Volts, it’s also called Return to Zero (RTZ) as shown in Figure. As you noted, a standard Huffman coder has access to the probability mass function of its input sequence, which it uses to construct efficient encodings for the most probable symbol values. once before you start coding the message will make your task much easier, Decoding a message produced in this way requires the following steps: 5. python实现huffman编码代码. If you don't understand the above, then please go and read about HUFFMAN, SHANNON-FANO coding and Minimum redundancy coding. Find Complete Code at GeeksforGeeks Article: http://www. 2 Huffman Decoding Huffman decoding is the reverse process of encoding, which is used to decompress the image. The built-in function bin returns the binary representation of a number as a string: >>> 42 42 >>> 0b101010 42 >>> bin(42) '0b101010'. Like encoding, we have to rescale our calculations for the decoding process. Project Due: Saturday 11/17 at 11:00 PM. Closed Policy. Huffman coding. Hi, We are also using these functions. 15 kilobytes. In some cases a "shortcut" is appropriate with Huffman coding. Here are some things that will help you: Think about dealing with text as if you were dealing with images. huffman_decode extracted from open source projects. In this post decoding is discussed. Correspondingly, delta encoding followed by Huffman and/or run-length encoding is a common strategy for compressing signals. , decompresses) the Huffman-encoded data stream. The test data is frequencies of the letters of the alphabet in English text. To decode look for match left to right, bit by bit record letter when a match is found begin next character where you left off Computer Science Huffman Encoding Example Decode!! 1011111001010! Try it!. Encoding & Decoding; Meaning of Encoding in communication. Like encoding, we have to rescale our calculations for the decoding process. It is part of Dave Coffin's dcraw code. The huffmandict, huffmanenco, and huffmandeco functions support Huffman coding and decoding. Read the description of tree from the compressed file, thus reconstructing the original Huffman tree that was used to encode the message (you will write the code to do this in util. Lecture 15: Huffman Coding CLRS- 16. Huffman encoding ensures that our encoded bitstring is as small as possible without losing any information. Python Programming Course for Manchester, UK Attend face-to-face, remote-live, on-demand or on site at your facility On-Demand Training with Personal Facilitation. Two 8 bit gray level image of size M X N and P X Q are used as cover image and secret image respectively. Huffman, in his 1952 paper, called Huffman Coding. For this project, you only need to do static encoding (which is probably easier). Zlib gzip example c. 19: 허프만코딩 22. By using the right codepage, 8-bit bytes can be made quite suitable for encoding reasonable sized (phonetic) alphabets. The class sage. We now know how to decode for Huffman code. Each tree node will have a value, a set of characters in the text, and a priority, the sum of the frequencies of those characters in the text. No line separator is. 62 Run length encoding introduction 63 Run length encoding implementation - encode 64 Run length encoding implementation - decode 65 Huffman encoding introduction 66 Huffman decoding 67 Huffman encoding implementation I - helper classes 68 Huffman encoding implementation II - encoding 69 Huffman encoding implementation III - testing. Algorithm Visualizations. Huffman Trees can exploit this fact, to make the files even smaller. Topics include: elements of information theory, Huffman coding, run-length coding and fax, arithmetic coding, dictionary techniques, and predictive coding. Miele French Door Refrigerators; Bottom Freezer Refrigerators; Integrated Columns – Refrigerator and Freezers. You are expected to do all of the work on this project without consulting with anyone other than the CMSC 132 instructors and TAs. Python print binary data. How do I implement Huffman encoding and decoding using an array and not a tree? Based on how the question is formulated I’ll assume you know how to do it with a tree. Huffman Encoding / Decoding My code essentially reads from a file, encodes, and writes an encoded ". The number of bits involved in encoding the string isn’t reduced. Using the Huffman code table, which is either explicitly contained in the compressed data stream or is already pre-fixed, Huffman decoding 62 decodes (i. Huffman Code Algorithm Overview: Encoding. CONTENTS v 5 Lossless Compression 71 5. Literally encoding means to convert body of information from one system to another system in the form of codes. Complete coding may be done by calling an easy to use main program (or main function), where input argument is the sequences you want to compress and the output is the compressed bitstream, as a vector of bytes. Like encoding, we have to rescale our calculations for the decoding process. The most commonly used encodings are UTF-8 (which uses one byte for any ASCII characters, which have the same code values in both UTF-8 and ASCII encoding, and up to four bytes for other characters), the now-obsolete UCS-2 (which uses two bytes for each character but cannot encode every character in the current Unicode standard), and UTF-16. Since unipolar line encoding has one of its states at 0 Volts, it’s also called Return to Zero (RTZ) as shown in Figure. This time, gzip use dynamic Huffman codes. Decoding strategies are techniques that help students to develop reading capabilities. In this tutorial, we are going to see how to encode a string in Huffman coding in Python. Algebraic, combinatorial, and geometric approaches to coding theory are adopted with the aim of highlighting how coding can have an important real-world impact. Hi Rodion, Thanks for pointers and encouragement. Do not worry about punctuation or capitalization. DCT The DCT is a mathematical operation that transform a set of data, which is sampled at a given sampling rate, to it's frequency components. Modular conversion, encoding and encryption online. 6-Huffman Encoding and Decoding Huffman encoding, utilizes fewer numbers of bits to encode the picture pixels. In Huffman encoding of images, a symbol represents an image block. build_decoding_dict. Hash function Hex to Ascii85 Caesar cipher Vigenère cipher. Program: Run Length Encoding in a String in Python. Python can handle various encoding processes, and different types of modules need to be imported to make these encoding techniques work. Huffman Encoding. Such as, the sequence ‘11010111’ might be decoded into the String ‘decd’. Huffman Encoding. The result of the encoding is displayed in Base64 format. dahuffman is a pure Python module for Huffman encoding and decoding, commonly used for lossless data compression. For algorithms that try to compute the encoding with the shortest post-range-encoding size, the encoder also needs to provide an estimate of that. Entropy encoding is used to further compresses the quantized values losslessly to give better overall compression. Each tree node will have a value, a set of characters in the text, and a priority, the sum of the frequencies of those characters in the text. Huffman Coding First presented by David A. It is a significant advancement over the other lossless methods. Closed Policy. Web Development JavaScript React CSS Angular PHP Node. Insert a node for a character in Huffman decoding tree. Huffman coding is one of the basic compression methods, that have proven useful in image and video compression standards. Take an arbitrary Huffman codebook (Lecture 6/Internet/etc. In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. Like encoding, we have to rescale our calculations for the decoding process. The name of the module refers to the full name of the inventor of the Huffman code tree algorithm: David Albert Huffman (August 9, 1925 – October 7, 1999). We analyze a generalization of Huffman coding to the quantum case. The following pseudo code modifies the standard form of the decoding algorithm for a range scale of [0, ∑c i):. Huffman encoding is a way to assign binary codes to symbols that reduces the overall number of bits used to encode a typical string of those symbols. To construct a Unicode string, simply call unicode(). The procedure is simple enough that we can present it here. Because it carefully balances both theory and applications, this book will be an indispensable resource for readers seeking a timely treatment of error-correcting codes. 3 Decode and get the original data by walking the Huffman encoding tree. Assignment 3a: Huffman Encoding Description For this project, you will implement a program to encode text using Huffman coding. Sai Sruthi (14321A04A6) Contents Aim Block Diagram Huffman Coding Flow Chart of Huffman algorithm Nature of Huffman coding Matlab Specific Syntaxes Result Advantages and Applications Conclusion and Future scope Aim. The principle of Huffman code is based on the frequency of each data item. Huffman coding is used in image compression; however, in JPEG2000, an arithmetic codec is employed. Decoding an encoded string can be done by looking at the bits in the coded string from left to right until. At the heart of the. Like encoding, we have to rescale our calculations for the decoding process. For example, if you use letters as symbols and have details of the frequency of occurrence of those letters in typical strings, then you could just encode each letter with a fixed number of bits. Huffman coding as the final step [10]. Data can be presented in different kinds of encoding, such as CSV, XML, and JSON, etc. Huffman’sAlgorithm: 1. Algorithm Visualizations. •Giv e soptimal (min average code-length) prefix-free binary code to each ai ∈Σofor a givenprobabilities p(ai)>0. Huffman Encoding. This algorithm is also widely use for compressing any type of file that may have redundancy (e. Rupa (14321A04A0) B. dahuffman is a pure Python module for Huffman encoding and decoding, commonly used for lossless data compression. In other words, you are breaking down the data into first level concepts, or master headings, and second-level categories, or subheadings. The Wikipedia article has a pretty good description of the adaptive Huffman coding process using one of the notable implementations, the Vitter algorithm. It does that based on the probabilities of the symbols. In this tutorial, we are going to discuss Huffman Decoding in C++. Since length of all the binary codes is different, it becomes difficult for the decoding software to detect. Huffman coding is a type of entropy coding. MIME – Uses the “The Base64 Alphabet” as specified in Table 1 of RFC 2045 for encoding and decoding operation. Here are some things that will help you: Think about dealing with text as if you were dealing with images. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum. Canonical Huffman coding. ) and encode the subband values of each signal block with it (take a look at the attached Grundlagen der Videotechnok lecture 7, pages 6 (how to define codebook) and 11 (how to pack bitstream)) Save the resulting bitstream (all blocks) as binary file and compare its size to. Hash function Hex to Ascii85 Caesar cipher Vigenère cipher. Let’s start by. Learn Python: Online training VCDIFF also includes options for encoding/decoding with alternate byte-code tables. We analyze a generalization of Huffman coding to the quantum case. We give the algorithm in several steps: 1. pdf), Text File (. A Huffman binary tree was built to organize all the characters in the given text file based on occurrence of the character. p y # import sys, string codes = {} def frequency (str) : freqs = {} for ch in str : freqs[ch] = freqs. Encoding strategies enable the development of writing and spelling capabilities. In this example, the Most Significant Bit(MSB) alone is considered and encoded. You need to print the actual string. For this lab only, you may use string methods to work with. Whereas Huffman encoding uses variable length sequences to represent a fixed length string (usually a character), LZW compression uses a fixed length sequence to represent a variable length string. huffman编码以根节点到叶子节点的路径来编码的,左为0,右为1? 1. "AAABBCCCC") and the output is a sequence of counts of consecutive data values in a row (i. Huffman encoding is a method used to reduce the number of bits used to store a message. There’s certainly not a lack of implementations for Huffman coding (a kind of data compression) in the web. This means that we can prove that decoding inverts encoding, using the “Unfoldr–Foldr Theorem” stated in the Haskell documentation for (in fact, the only property of stated there!). that encode and decode data. Huffman decoding 62 is a reverse operation of Huffman encoding. Huffman Coding The last phase of MP3 encoding compresses the filtered signal using Huffman coding. Huffman Compression Algorithm Codes and Scripts Downloads Free. – Encoding: result must be represented in lines of no more than 76 characters each and use a carriage return followed by a linefeed (\r ). Huffman encoder and lossless compression of data. Huffman coding is used in image compression; however, in JPEG2000, an arithmetic codec is employed. I am sure you are right. More frequent characters are assigned shorter codewords and less frequent characters are assigned longer codewords. Huffman, involves assigning variable code length to characters based on their probability of occurences (Huffman, 1952). Huffman Encoding. Unipolar encoding uses only one level of value 1 as a positive value and 0 remains Idle.
|
__label__pos
| 0.933182 |
Solution: Sum of Three Values
Let's solve the Sum of Three Values problem using the Two Pointers pattern.
Statement
Given an array of integers, nums, and an integer value, target, determine if there are any three integers in nums whose sum is equal to the target, that is, nums[i] + nums[j] + nums[k] == target. Return TRUE if three such integers exist in the array. Otherwise, return FALSE.
Note: A valid triplet consists of elements with distinct indexes. This means, for the triplet nums[i], nums[j], and nums[k], i \not = j, i \not = k and j \not = k.
Constraints:
• 33 \leq nums.length \leq 500500
• 103-10^3 \leq nums[i] \leq 10310^3
• 103-10^3 \leq target \leq 10310^3
Level up your interview prep. Join Educative to access 70+ hands-on prep courses.
|
__label__pos
| 0.932929 |
The Evolution of Web Development: Navigating the Digital Frontier
Web development has undergone a remarkable transformation in recent years, shaping the digital landscape we navigate daily. From the early days of static HTML pages to the dynamic and interactive web applications of today, the field has evolved to meet the growing demands of users and businesses alike.
One significant shift in web development is the adoption of responsive design. With the proliferation of various devices and screen sizes, developers now prioritize creating websites that seamlessly adapt to different platforms. Responsive design ensures a consistent and optimal user experience, whether someone is accessing a site on a desktop computer, tablet, or smartphone.
Furthermore, the rise of JavaScript frameworks has played a pivotal role in enhancing the interactivity of web applications. Technologies like React, Angular, and Vue.js have empowered developers to build dynamic, single-page applications that mimic the fluidity of desktop software. These frameworks enable the creation of highly responsive user interfaces, providing a smoother and more engaging experience.
The importance of user experience (UX) and user interface (UI) design has also become a focal point in web development. Websites are not only expected to function flawlessly but also to offer aesthetically pleasing and intuitive interfaces. As a result, web developers collaborate closely with designers to ensure that both the functionality and visual appeal of a website meet modern standards.
Moreover, the advent of serverless architecture has revolutionized the way web applications are deployed and scaled. Serverless computing allows developers to focus solely on writing code without the need to manage server infrastructure. This not only streamlines the development process but also offers cost-effective solutions, as resources are allocated dynamically based on demand.
In recent times, the importance of web security has grown exponentially. With cyber threats becoming more sophisticated, web developers are incorporating robust security measures into their applications. From encryption protocols to secure coding practices, ensuring the protection of user data has become a top priority in the development process.
The concept of progressive web apps (PWAs) has gained traction, combining the best of web and mobile applications. PWAs offer users an app-like experience directly through their web browsers, eliminating the need for installations. This approach enhances accessibility and user engagement, making it a sought-after development strategy.
In conclusion, web development has evolved into a dynamic and multifaceted field, driven by technological advancements and changing user expectations. From responsive design to serverless architecture and enhanced security measures, developers continue to push the boundaries of what is possible on the web. As we navigate the digital frontier, the evolution of web development remains a testament to the industry’s adaptability and commitment to delivering innovative and user-centric solutions.
|
__label__pos
| 0.934067 |
Linear Algebra and Its Applications, Exercise 1.5.3
Exercise 1.5.3. From equations (6) and (3) respectively we have
L = E^{-1}F^{-1}G^{-1} = \begin{bmatrix} 1&0&0 \\ 2&1&0 \\ -1&-1&1 \end{bmatrix} \quad GFE = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ -1&1&1 \end{bmatrix}
Multiply the two matrices, in both orders. Explain the two answers.
Answer: We have
\begin{bmatrix} 1&0&0 \\ 2&1&0 \\ -1&-1&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ -1&1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix}
and
\begin{bmatrix} 1&0&0 \\ -2&1&0 \\ -1&1&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 2&1&0 \\ -1&-1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix}
In other words, the product of the matrices is the same in both cases, and is the identity matrix I. This can be explained in two ways:
First, GFE is the matrix that embodies the elimination steps to take the matrix A to the upper triangular matrix U, while L takes U back to A: (GFE)A = U and LU = A. So applying GFE to A followed by L will take us back to A:
(E^{-1}F^{-1}G^{-1})(GFE)A = (E^{-1}F^{-1}G^{-1})U = LU = A
Thus (E^{-1}F^{-1}G^{-1})(GFE) is the identity matrix I.
Similarly, applying L to U followed by GFE takes us back to U:
(GFE)(E^{-1}F^{-1}G^{-1})U = (GFE)LU = (GFE)A = U
Thus (GFE)(E^{-1}F^{-1}G^{-1}) is also the identity matrix I.
Second, from the definition of the inverse of a matrix and the associative property of matrix multiplication we have
(E^{-1}F^{-1}G^{-1})(GFE) = (E^{-1}F^{-1})(G^{-1}G)(FE) = (E^{-1}F^{-1})I(FE) = (E^{-1}F^{-1})(FE) = E^{-1}(F^{-1}F)E = E^{-1}IE = E^{-1}E = I
and
(GFE)(E^{-1}F^{-1}G^{-1}) = (GF)(EE^{-1})(F^{-1}G^{-1}) = (GF)I(F^{-1}G^{-1}) = (GF)(F^{-1}G^{-1}) = G(FF^{-1})G^{-1} = GIG^{-1} = GG^{-1} = I
Thus
(E^{-1}F^{-1}G^{-1})(GFE) = (GFE)(E^{-1}F^{-1}G^{-1}) = I
NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.
If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
This entry was posted in linear algebra. Bookmark the permalink.
Leave a Reply
Fill in your details below or click an icon to log in:
WordPress.com Logo
You are commenting using your WordPress.com account. Log Out / Change )
Google photo
You are commenting using your Google account. Log Out / Change )
Twitter picture
You are commenting using your Twitter account. Log Out / Change )
Facebook photo
You are commenting using your Facebook account. Log Out / Change )
Connecting to %s
|
__label__pos
| 0.999825 |
How do you preview like in Mancandy´s tuts?
I d like to be able to preview my mesh while in pose mode like they do in Mancandy´s videotuts, with the mesh lines displayed in green while the mesh texture is in solid mode; like this:
Attachments
To show the mesh lines, use Draw Extra > Wire in the Draw buttons panel. I think the green is due to the mesh being part of a group. Add your mesh to a group and by default those lines will be drawn green.
hey bugman, perfect!
thank you
Note for the ones to come:
To access the “Draw Extra” Panel press F7 and then go to the three arrows button!
did you try this with 2.48
i tried it on simple shape and not working
but may be bug in 2.48 ?
let me know
Thanks
Hey Rick, i ve just posted a thread baut that bug in the 2.47 version here.
i dunno if it is the same bug as yours.
i would recommend you to use beta versions only if you are an advanced user.
let me know if is it the same bug we are talking about
|
__label__pos
| 0.501318 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.