README
One thing I haven't often seen people talk about doing in Haskell is working with data in column-major order, or as a struct of arrays. If we take a look, though, there are some interesting possibilities and theory underlying this relatively simple concept.
The conkin package is the result of my explorations along this line of thinking.
<!--
Setup
This is a literate haskell file, so we need to specify all our LANGUAGE pragmas and imports up front. But just because we need to include them doesn't mean we need to show them to our reader; thus the HTML comments.
{-# OPTIONS_GHC -Wno-name-shadowing #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE NoMonomorphismRestriction #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE PatternSynonyms #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE ExplicitNamespaces #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE PackageImports #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE TypeFamilies #-}
module Main where
import Data.Functor.Identity (Identity(..))
import Control.Applicative (Alternative(..))
import "conkin" Conkin (type (~>)((~$~)))
import qualified "conkin" Conkin
import Numeric (showHex)
import Data.Char (toUpper)
import Data.Maybe (fromJust, fromMaybe, isJust)
import Data.Default (Default(..))
import Data.Monoid (All(..), (<>))
import GHC.Generics
import Test.DocTest
main :: IO ()
main = doctest $ words "-pgmL markdown-unlit README.lhs"
A couple of things only need to be set for the tests.
{-$
>>> :set -XTypeApplications -XTypeOperators -XStandaloneDeriving -XDeriveGeneric
-}
By using an alternate printer, we get much more legible example results in the doctests:
{-$
>>> import Text.Show.Pretty (pPrint)
>>> :set -interactive-print pPrint
-}
And some custom data types are handy, but could be distracting pedagogically:
type Dollars = Double
newtype UPC = UPC { getUPC :: Integer }
deriving (Num, Eq, Ord)
instance Show UPC where
showsPrec _ (UPC u) = showString "0x" . (map toUpper (showHex u []) ++)
-->
An example of use
Suppose we have a list of items we wish to manipulate in column-major order:
items :: [Item]
items = [ chocolateBar, toiletPaper, ibuprofen ]
chocolateBar, toiletPaper, ibuprofen :: Item
chocolateBar = Item 0xDE1EC7AB1E "chocolate bar" 1.50
toiletPaper = Item 0xDEFEC8 "toilet paper" 9.99
ibuprofen = Item 0x43A1A11 "ibuprofen" 5.25
Using the Functor instance for lists, we can easily extract each field into its own list:
extractFields0 :: [Item] -> ([UPC], [String], [Double])
extractFields0 items = ( upc <$> items, name <$> items, price <$> items )
{-$-----------------------------------------------------------------------------
>>> extractFields0 items
( [ 0xDE1EC7AB1E , 0xDEFEC8 , 0x43A1A11 ]
, [ "chocolate bar" , "toilet paper" , "ibuprofen" ]
, [ 1.5 , 9.99 , 5.25 ]
)
-}
We've lost a bit of semantic meaning, however, as we've switched from our own custom data type to a generic tuple. We can regain this meaning if we define a type specifically for a collection of items, parameterized by the collection type:
extractFields1 :: [Item] -> ItemF []
extractFields1 items = ItemF (upc <$> items) (name <$> items) (price <$> items)
{-$-----------------------------------------------------------------------------
>>> extractFields1 items
ItemF
{ _upc = [ 0xDE1EC7AB1E , 0xDEFEC8 , 0x43A1A11 ]
, _name = [ "chocolate bar" , "toilet paper" , "ibuprofen" ]
, _price = [ 1.5 , 9.99 , 5.25 ]
}
-}
data ItemF f = ItemF
{ _upc :: f UPC
, _name :: f String
, _price :: f Dollars
}
deriving instance (Show (f String), Show (f Dollars), Show (f UPC)) => Show (ItemF f)
deriving instance (Eq (f String), Eq (f Dollars), Eq (f UPC)) => Eq (ItemF f)
With a little help from PatternSynonyms we can derive the Item type from ItemF, making sure the two definitions don't slip out of step:
{-$-----------------------------------------------------------------------------
>>> items
[ ItemF
{ _upc = Identity 0xDE1EC7AB1E
, _name = Identity "chocolate bar"
, _price = Identity 1.5
}
, ItemF
{ _upc = Identity 0xDEFEC8
, _name = Identity "toilet paper"
, _price = Identity 9.99
}
, ItemF
{ _upc = Identity 0x43A1A11
, _name = Identity "ibuprofen"
, _price = Identity 5.25
}
]
-}
-- import Data.Functor.Identity (Identity(..))
-- ...
type Item = ItemF Identity
-- {-# LANGUAGE PatternSynonyms #-}
-- ...
pattern Item :: UPC -> String -> Dollars -> Item
pattern Item upc name price = ItemF (Identity upc) (Identity name) (Identity price)
upc :: Item -> UPC
upc = runIdentity . _upc
name :: Item -> String
name = runIdentity . _name
price :: Item -> Dollars
price = runIdentity . _price
So what else can we do with ItemF? We can't make it a Functor; it's got the wrong kind.
{-$-----------------------------------------------------------------------------
>>> instance Functor ItemF where fmap = undefined
<BLANKLINE>
...
• Expected kind ‘* -> *’, but ‘ItemF’ has kind ‘(* -> *) -> *’
• In the first argument of ‘Functor’, namely ‘ItemF’
In the instance declaration for ‘Functor ItemF’
-}
But it's still got this parameter that it's covariant and homogeneous in - all the fields must use the same container of kind * -> *, and changing what container we're using should be easy.
So let's define a different Functor class for types of kind (k -> *) -> *.
{-$-----------------------------------------------------------------------------
>>> :i Conkin.Functor
class Conkin.Functor (f :: (k -> *) -> *) where
Conkin.fmap :: forall (a :: k -> *) (b :: k -> *).
(forall (x :: k). a x -> b x) -> f a -> f b
...
-}
-- import qualified Conkin
-- ...
instance Conkin.Functor ItemF where
fmap f (ItemF {..}) = ItemF
{ _upc = f _upc
, _name = f _name
, _price = f _price
}
Now we can use Conkin.fmap to convert an individual Item into an ItemF []:
{-$-----------------------------------------------------------------------------
>>> :t Conkin.fmap (\(Identity x) -> [x])
Conkin.fmap (\(Identity x) -> [x])
:: Conkin.Functor f => f Identity -> f []
>>> Conkin.fmap (\(Identity x) -> [x]) chocolateBar
ItemF
{ _upc = [ 0xDE1EC7AB1E ]
, _name = [ "chocolate bar" ]
, _price = [ 1.5 ]
}
-}
We could stitch together multiple of these ItemF [] values into one if ItemF [] had a Monoid instance:
extractFields2 :: [Item] -> ItemF []
extractFields2 = foldMap $ Conkin.fmap $ pure . runIdentity
{-$-----------------------------------------------------------------------------
>>> extractFields2 items
ItemF
{ _upc = [ 0xDE1EC7AB1E , 0xDEFEC8 , 0x43A1A11 ]
, _name = [ "chocolate bar" , "toilet paper" , "ibuprofen" ]
, _price = [ 1.5 , 9.99 , 5.25 ]
}
-}
-- import Control.Applicative (Alternative(..))
-- ...
instance Alternative a => Monoid (ItemF a) where
mempty = ItemF empty empty empty
left `mappend` right = ItemF
{ _upc = _upc left <|> _upc right
, _name = _name left <|> _name right
, _price = _price left <|> _price right
}
Of course we could do this before with extractFields1, but there's nothing specific to ItemF in the definition of extractFields2. The same definition would work for any Conkin.Functor that formed a Monoid:
{-$-----------------------------------------------------------------------------
>>> :t foldMap $ Conkin.fmap $ pure . runIdentity
foldMap $ Conkin.fmap $ pure . runIdentity
:: (Applicative b, Conkin.Functor f, Monoid (f b), Foldable t) =>
t (f Identity) -> f b
-}
Another useful monoid is ItemF Maybe. This could let us combine multiple partially specified items into one:
{-$-----------------------------------------------------------------------------
>>> mempty { _price = Just 2.99 }
ItemF { _upc = Nothing , _name = Nothing , _price = Just 2.99 }
>>> mempty { _price = Just 2.99 } <> mempty { _upc = Just 0x0 }
ItemF { _upc = Just 0x0 , _name = Nothing , _price = Just 2.99 }
-}
(Side note - I love being able to partially specify ItemF Maybe using mempty with record notation. All the succinctness of ItemF { _price = Just 2.99 }, but none of the missing fields.)
We can use <> (aka mappend) to transform a partially specified item into a fully specified one:
withDefaults0 :: ItemF Maybe -> Item
withDefaults0 partial = Conkin.fmap (Identity . fromJust) $ partial <> ItemF
{ _upc = Just 0x0
, _name = Just "unknown"
, _price = Just 0
}
{-$-----------------------------------------------------------------------------
>>> withDefaults0 mempty
ItemF
{ _upc = Identity 0x0
, _name = Identity "unknown"
, _price = Identity 0.0
}
>>> withDefaults0 mempty { _price = Just 2.99, _name = Just "flyswatter" }
ItemF
{ _upc = Identity 0x0
, _name = Identity "flyswatter"
, _price = Identity 2.99
}
-}
However, I'm not a big fan of this solution. We've abandoned some safety by using the partial fromJust. If a future developer alters a default to be Nothing, the compiler won't complain; we'll just get a runtime error.
What I'd rather be using is the safer fromMaybe, but since that's a two-argument function, I can't just use it via fmap. I need ItemF to be an Applicative.
We'll need a slightly different Applicative class than Prelude's, as ItemF again has the wrong kind:
{-$-----------------------------------------------------------------------------
>>> :i Conkin.Applicative
class Conkin.Functor f =>
Conkin.Applicative (f :: (k -> *) -> *) where
Conkin.pure :: forall (a :: k -> *). (forall (x :: k). a x) -> f a
(Conkin.<*>) :: forall (a :: k -> *) (b :: k -> *).
f (a ~> b) -> f a -> f b
...
>>> :i (~>)
type role (~>) representational representational nominal
newtype (~>) (a :: k -> *) (b :: k -> *) (x :: k)
= Conkin.Arrow {(~$~) :: a x -> b x}
...
-}
instance Conkin.Applicative ItemF where
pure a = ItemF a a a
ItemF fi fs fd <*> ItemF ai as ad
= ItemF (fi ~$~ ai) (fs ~$~ as) (fd ~$~ ad)
Now we can lift fromMaybe:
withDefaults1 :: ItemF Maybe -> Item
withDefaults1 = Conkin.liftA2 (\(Identity x) -> Identity . fromMaybe x) ItemF
{ _upc = Identity 0x0
, _name = Identity "unknown"
, _price = Identity 0
}
{-$-----------------------------------------------------------------------------
>>> withDefaults1 mempty
ItemF
{ _upc = Identity 0x0
, _name = Identity "unknown"
, _price = Identity 0.0
}
>>> withDefaults1 mempty { _price = Just 2.99, _name = Just "flyswatter" }
ItemF
{ _upc = Identity 0x0
, _name = Identity "flyswatter"
, _price = Identity 2.99
}
-}
Using data-default's Default class, we can generalize this idea to create a function that converts any partially-specified Conkin.Applicative to a fully specified one.
withDefaults2 :: (Conkin.Applicative f, Default (f Identity)) => f Maybe -> f Identity
withDefaults2 = Conkin.liftA2 (\(Identity x) -> Identity . fromMaybe x) def
instance Default Item where
def = ItemF
{ _upc = Identity 0x0
, _name = Identity "unknown"
, _price = Identity 0
}
{-$-----------------------------------------------------------------------------
>>> withDefaults2 mempty :: ItemF Identity
ItemF
{ _upc = Identity 0x0
, _name = Identity "unknown"
, _price = Identity 0.0
}
>>> withDefaults2 mempty { _price = Just 2.99, _name = Just "flyswatter" }
ItemF
{ _upc = Identity 0x0
, _name = Identity "flyswatter"
, _price = Identity 2.99
}
-}
What also might be nice is a way to test whether an ItemF Maybe is actually fully specified:
isAllJust :: Conkin.Foldable f => f Maybe -> Bool
isAllJust = getAll . Conkin.foldMap (All . isJust)
{-$-----------------------------------------------------------------------------
>>> isAllJust mempty { _upc = Just 0x1111111111 }
False
>>> isAllJust ItemF { _upc = Just 0xDEADBEEF, _name = Just "hamburger", _price = Just 1.99 }
True
-}
At this point, it should not be surprising that we need a slightly different Foldable in order to collapse ItemF values:
{-$-----------------------------------------------------------------------------
>>> :i Conkin.Foldable
class Conkin.Foldable (t :: (k -> *) -> *) where
Conkin.foldr :: forall (a :: k -> *) b.
(forall (x :: k). a x -> b -> b) -> b -> t a -> b
Conkin.foldMap :: forall m (a :: k -> *).
Monoid m =>
(forall (x :: k). a x -> m) -> t a -> m
...
-}
instance Conkin.Foldable ItemF where
foldMap f (ItemF {..}) = f _upc <> f _name <> f _price
We could use isAllJust to safely create an Item from a fully-specified ItemF Maybe:
toItem0 :: ItemF Maybe -> Maybe Item
toItem0 i | isAllJust i = Just $ Conkin.fmap (Identity . fromJust) i
| otherwise = Nothing
But the conkin package already provides a function that does just that:
{-$-----------------------------------------------------------------------------
>>> Conkin.apportion mempty { _upc = Just 0x1111111111 }
Nothing
>>> Conkin.apportion ItemF { _upc = Just 0xDEADBEEF, _name = Just "hamburger", _price = Just 1.99 }
Just
ItemF
{ _upc = Identity 0xDEADBEEF
, _name = Identity "hamburger"
, _price = Identity 1.99
}
>>> :t Conkin.apportion
Conkin.apportion
:: (Conkin.Traversable g, Applicative f) => g f -> f (g Identity)
-}
Although conkin does require that ItemF implement its custom Traversable class, it provides helpers for tuple-like types like ItemF.
{-$-----------------------------------------------------------------------------
>>> :m +Data.Functor.Compose
>>> :i Conkin.Traversable
class (Conkin.Foldable t, Conkin.Functor t) =>
Conkin.Traversable (t :: (i -> *) -> *) where
Conkin.traverse :: forall j (f :: (j -> *) -> *) (a :: i
-> *) (b :: i -> j -> *).
Conkin.Applicative f =>
(forall (x :: i). a x -> f (b x))
-> t a -> f (Compose t (Conkin.Flip b))
Conkin.sequenceA :: forall j (f :: (j -> *) -> *) (a :: i
-> j -> *).
Conkin.Applicative f =>
t (Compose f a) -> f (Compose t (Conkin.Flip a))
...
-}
instance Conkin.Traversable ItemF where
sequenceA (ItemF {..}) = Conkin.liftT3 ItemF _upc _name _price
We could also attempt to use apportion to invert extractFields2, but it mixes up the columns:
{-$-----------------------------------------------------------------------------
>>> items
[ ItemF
{ _upc = Identity 0xDE1EC7AB1E
, _name = Identity "chocolate bar"
, _price = Identity 1.5
}
, ItemF
{ _upc = Identity 0xDEFEC8
, _name = Identity "toilet paper"
, _price = Identity 9.99
}
, ItemF
{ _upc = Identity 0x43A1A11
, _name = Identity "ibuprofen"
, _price = Identity 5.25
}
]
>>> Conkin.apportion (extractFields2 items)
[ ItemF
{ _upc = Identity 0xDE1EC7AB1E
, _name = Identity "chocolate bar"
, _price = Identity 1.5
}
, ItemF
{ _upc = Identity 0xDE1EC7AB1E
, _name = Identity "chocolate bar"
, _price = Identity 9.99
}
, ItemF
{ _upc = Identity 0xDE1EC7AB1E
, _name = Identity "chocolate bar"
, _price = Identity 5.25
}
...
, ItemF
{ _upc = Identity 0x43A1A11
, _name = Identity "ibuprofen"
, _price = Identity 1.5
}
, ItemF
{ _upc = Identity 0x43A1A11
, _name = Identity "ibuprofen"
, _price = Identity 9.99
}
, ItemF
{ _upc = Identity 0x43A1A11
, _name = Identity "ibuprofen"
, _price = Identity 5.25
}
]
-}
This is because of []'s Applicative instance. If we use the ZipList newtype wrapper, we can get the behaviour we desire:
{-$-----------------------------------------------------------------------------
>>> import Control.Applicative (ZipList(..))
>>> Conkin.align (ZipList items)
ItemF
{ _upc =
ZipList { getZipList = [ 0xDE1EC7AB1E , 0xDEFEC8 , 0x43A1A11 ] }
, _name =
ZipList
{ getZipList = [ "chocolate bar" , "toilet paper" , "ibuprofen" ] }
, _price = ZipList { getZipList = [ 1.5 , 9.99 , 5.25 ] }
}
>>> Conkin.apportion (Conkin.align (ZipList items))
ZipList
{ getZipList =
[ ItemF
{ _upc = Identity 0xDE1EC7AB1E
, _name = Identity "chocolate bar"
, _price = Identity 1.5
}
, ItemF
{ _upc = Identity 0xDEFEC8
, _name = Identity "toilet paper"
, _price = Identity 9.99
}
, ItemF
{ _upc = Identity 0x43A1A11
, _name = Identity "ibuprofen"
, _price = Identity 5.25
}
]
}
-}
Here we use the handy align function as yet another way to implement extractFields:
{-$-----------------------------------------------------------------------------
>>> :t Conkin.align
Conkin.align
:: (Conkin.Applicative g, Traversable f) => f (g Identity) -> g f
-}
A little bit of theory
Typically in Haskell, we talk about the category Hask, where the objects are types of kind * and the arrows are normal Haskell functions. In general, a functor is a mapping between categories, mapping each object or arrow in one category to an object or arrow (respectively) in another.
The Prelude's Functor typeclass actually describes endofunctors from Hask to Hask; given a Functor f, we can map any type a in Hask to the type f a in Hask (so f must have kind * -> *), and we can map any arrow (function) a -> b in Hask to an arrow f a -> f b in Hask (using fmap).
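For reference, here is the core of the Prelude class being described (a sketch showing just the essential method; the real class also provides (<$)):
class Functor f where
  fmap :: (a -> b) -> f a -> f b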
The conkin package focuses on the functors from Haskᵏ to Hask. In Haskᵏ, the objects are types of kind k -> *, and the arrows are transformations a ~> b where (a ~> b) x ~ (a x -> b x). A functor from Haskᵏ to Hask must then be able to map any type a :: k -> * in Haskᵏ to a type f a :: * in Hask (so f must have kind (k -> *) -> *), and must be able to map any arrow a ~> b in Haskᵏ to an arrow f a -> f b in Hask.
(I'm not very well read in category theory, so it's thoroughly possible Haskᵏ has a more common name in the literature; I just chose that one out of similarity with type exponentials.)
You can lift any functor from Hask to Hask to a functor from Haskᵏ to Hask using Dispose:
{-$-----------------------------------------------------------------------------
>>> :i Conkin.Dispose
type role Conkin.Dispose representational nominal nominal
newtype Conkin.Dispose (f :: * -> *) (x :: k) (a :: k -> *)
= Conkin.Dispose {Conkin.getDispose :: f (a x)}
...
-}
And any functor from Haskᵏ to Hask can be lifted to a functor from Hask to Hask using Coyoneda:
{-$-----------------------------------------------------------------------------
>>> :i Conkin.Coyoneda
type role Conkin.Coyoneda representational representational
data Conkin.Coyoneda (t :: (k -> *) -> *) u where
Conkin.Coyoneda :: forall k (t :: (k -> *) -> *) u (a :: k -> *).
(forall (x :: k). a x -> u) -> (t a) -> Conkin.Coyoneda t u
...
-}
Not only do both of these encodings preserve functoriality, but they also preserve foldability, applicativity, and traversability (e.g. Traversable t => Conkin.Traversable (Conkin.Dispose t x)).
Another interesting facet of functors from Haskᵏ to Hask is the similarity between their kind, (k -> *) -> *, and the type of continuations, type Cont r a = (a -> r) -> r. This continuation kind is where the conkin package gets its name.
If we start to look at these functors as types of kind Cont Type i, then we can start thinking of how to compose them in an algebra, using
• Conkin.Product f g :: Cont Type (i,j) as the product type of functors f :: Cont Type i and g :: Cont Type j
• Conkin.Coproduct f g :: Cont Type (Either i j) as the coproduct type of functors f :: Cont Type i and g :: Cont Type j
Interestingly, Conkin.Product f g a is isomorphic to f (Compose g a), making Conkin.sequenceA the equivalent of Data.Tuple.swap.
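To make that isomorphism concrete, here's a minimal sketch at ordinary kinds (Product' is a purely illustrative stand-in, not conkin's actual, more poly-kinded definition):
import Data.Functor.Compose (Compose(..))

-- Illustrative stand-in; conkin's real Product is more general.
newtype Product' f g a = Product' { getProduct' :: f (Compose g a) }

-- Both directions of the isomorphism hold by construction:
toProduct' :: f (Compose g a) -> Product' f g a
toProduct' = Product'

fromProduct' :: Product' f g a -> f (Compose g a)
fromProduct' = getProduct'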
Notes and concerns
Existing Work
The conkin package isn't unprecedented. In addition to Edward Kmett's even more general categories package, there's also Gracjan Polak's fieldwise package, which supports a similar set of operations for types of kind (k -> *) -> *.
Boilerplate instances
Instances of Conkin's Functor, Applicative, Foldable, and Traversable classes are mainly mechanical, and seem like excellent candidates for using -XDeriveGeneric and -XDefaultSignatures to reduce the amount of boilerplate needed for use. This isn't currently possible, as you cannot encode a type like ItemF using the fundamental representational types GHC knows about:
{-$-----------------------------------------------------------------------------
>>> deriving instance Generic1 (ItemF)
...
• Can't make a derived instance of ‘Generic1 ItemF’:
Constructor ‘ItemF’ applies a type to an argument involving the last parameter
but the applied type is not of kind * -> *, and
Constructor ‘ItemF’ applies a type to an argument involving the last parameter
but the applied type is not of kind * -> *, and
Constructor ‘ItemF’ applies a type to an argument involving the last parameter
but the applied type is not of kind * -> *
• In the stand-alone deriving instance for ‘Generic1 (ItemF)’
-}
It's very possible to hand-write instances of Generic1 for functors from Haskᵏ to Hask using a new fundamental representational type, Par2:
newtype Par2 (x :: k) (a :: k -> *) = Par2 { unPar2 :: a x }
instance Generic1 ItemF where
type Rep1 ItemF =
D1 ('MetaData "ItemF" "Main" "conkin" 'True)
(C1 ('MetaCons "ItemF" 'PrefixI 'True)
(S1 ('MetaSel ('Just "_upc") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy)
(Par2 UPC)
:*:
S1 ('MetaSel ('Just "_name") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy)
(Par2 String)
:*:
S1 ('MetaSel ('Just "_cost") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy)
(Par2 Dollars)))
from1 (ItemF {..}) = M1 (M1 (M1 (Par2 _upc) :*: M1 (Par2 _name) :*: M1 (Par2 _price)))
to1 (M1 (M1 (M1 (Par2 _upc) :*: M1 (Par2 _name) :*: M1 (Par2 _price)))) = ItemF {..}
However, the verbosity of the above makes it less useful as a way to avoid boilerplate.
This is not necessarily the end of all hope; I could make a pull request to GHC including Par2 and updates to the DeriveGeneric mechanism, or write some TemplateHaskell macros to generate the instances. Until I do so, I've gone the fairly low-effort route of providing a few helper functions to make Conkin.Traversable instances easier to write.
Use of unsafeCoerce
In my personal Haskell experience, my only uses of unsafeCoerce until this package had been for newtype wrappers and such (i.e. excellent candidates to use coerce instead). This library marks the first time I found myself using unsafeCoerce because I just couldn't think of another way to convince the compiler of something, in Dispose's implementation of Conkin.Traversable:
instance Prelude.Traversable t => Traversable (Dispose t x) where
sequenceA = teardown . Prelude.traverse setup . getDispose where
setup :: Compose f a x -> Coyoneda f (Exists (a x))
setup = Coyoneda Exists . getCompose
teardown :: (Functor f, Prelude.Functor t) => Coyoneda f (t (Exists (a x))) -> f (Compose (Dispose t x) (Flip a))
teardown (Coyoneda k fax) = Compose . Dispose . Prelude.fmap Flip . unwrap k <$> fax
-- by parametricity, `t`'s implementation of `Prelude.sequenceA :: t (g e) ->
-- g (t e)` can't inspect the value of `e`, so all `Exists a` values
-- must be wrapped `a x` values, so this should be an okay use
-- of `unsafeGetExists`.
unwrap :: Prelude.Functor t => (b x -> t (Exists a)) -> b x -> t (a x)
unwrap k bx = Prelude.fmap (unsafeGetExists bx) $ k bx
unsafeGetExists :: proxy x -> Exists a -> a x
unsafeGetExists _ (Exists az) = unsafeCoerce az
data Exists (a :: k -> *) where
Exists :: a x -> Exists a
I've managed to convince myself that my use of unsafeCoerce is, well, safe, but only until someone finds a law-abiding Traversable that proves me wrong. I should probably come back to this and come up with a more formal proof of validity, rather than the loose argument I present in the code.
Literate Haskell
This README.md file is a literate haskell file, for use with markdown-unlit. To allow GHC to recognize it, it's softlinked as README.lhs, which you can compile with
$ ghc -pgmL markdown-unlit README.lhs
Many of the above examples are doctest-compatible, and can be run with
$ doctest -pgmL markdown-unlit README.lhs
Alternatively, you can have cabal manage the dependencies and compile and test this with:
$ cabal install happy
$ cabal install --enable-tests
$ cabal test readme
Theming Storybook
Storybook is theme-able! Just set a theme in the options parameter!
Global theming
It’s really easy to theme Storybook globally.
We’ve created two basic themes that look good of the box: “normal” (a light theme) and “dark” (a dark theme).
As the simplest example, you can tell Storybook to use the “dark” theme by modifying .storybook/config.js:
import { addParameters } from '@storybook/react';
import { themes } from '@storybook/theming';
// Option defaults.
addParameters({
options: {
theme: themes.dark,
},
});
When setting a theme, set a full theme object. The theme is replaced, not combined.
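If you'd rather tweak a built-in theme than write one from scratch, one simple pattern (a sketch, not an official API; brandTitle is just an assumed override) is to spread the base theme into your own object before passing it along:
import { addParameters } from '@storybook/react';
import { themes } from '@storybook/theming';

// Start from the built-in dark theme, then override selected fields.
const myTheme = {
  ...themes.dark,
  brandTitle: 'My custom storybook',
};

addParameters({
  options: {
    theme: myTheme,
  },
});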
Dynamic theming
You can also theme dynamically based on the story you’re viewing or based on UI in an addon (e.g. a theme picker).
For example, you can update the theme when the user is viewing a specific component:
import { storiesOf } from '@storybook/react';
import yourTheme from './yourTheme';
storiesOf('MyComponent', module)
.addParameters({ options: { theme: yourTheme } })
.add(...);
Read on for more on how to create your own theme.
Create a theme quickstart
The easiest way to customize Storybook is to generate a new theme using the create() function from @storybook/theming. This function includes shorthands for the most common theme variables. Here’s how to use it:
First create a new file in .storybook called yourTheme.js.
Next paste the code below and tweak the variables.
import { create } from '@storybook/theming';
export default create({
base: 'light',
colorPrimary: 'hotpink',
colorSecondary: 'deepskyblue',
// UI
appBg: 'white',
appContentBg: 'silver',
appBorderColor: 'grey',
appBorderRadius: 4,
// Typography
fontBase: '"Open Sans", sans-serif',
fontCode: 'monospace',
// Text colors
textColor: 'black',
textInverseColor: 'rgba(255,255,255,0.9)',
// Toolbar default and active colors
barTextColor: 'silver',
barSelectedColor: 'black',
barBg: 'hotpink',
// Form colors
inputBg: 'white',
inputBorder: 'silver',
inputTextColor: 'black',
inputBorderRadius: 4,
brandTitle: 'My custom storybook',
brandUrl: 'https://example.com',
brandImage: 'https://placehold.it/350x150',
});
Finally, import your theme into .storybook/config.js and add it to your Storybook parameters.
import yourTheme from './yourTheme';
addParameters({
options: {
theme: yourTheme,
},
});
The @storybook/theming package is built using TypeScript, which should help TypeScript users create a valid theme. The types are part of the package itself.
Many theme variables are optional; the base property is NOT. This is a perfectly valid theme:
import { create } from '@storybook/theming';
export default create({
base: 'light',
brandTitle: 'My custom storybook',
brandUrl: 'https://example.com',
brandImage: 'https://placehold.it/350x150',
});
Addons and theme creation
Some addons require specific theme variables that a Storybook user must add. If you share your theme with the community, make sure to support the official and other popular addons so your users have a consistent experience.
For example, the popular Actions addon uses react-inspector which has themes of its own. Supply additional theme variables to style it like so:
addonActionsTheme: {
...chromeLight,
BASE_FONT_FAMILY: typography.fonts.mono,
BASE_BACKGROUND_COLOR: 'transparent',
}
Using the theme for addon authors
For a native Storybook experience, we encourage addon authors to reuse the theme variables above. The theming engine relies on emotion, a CSS-in-JS library.
import { styled } from '@storybook/theming';
Use the theme variables in object notation:
const Component = styled.div(({ theme }) => ({
background: theme.background.app,
width: 0,
}));
Or with template literals:
const Component = styled.div`
background: ${props => props.theme.background.app};
width: 0;
`;
data structure3.pptx
1. 1. CS261 DATA STRUCTURES & ALGORITHMS (WEEK-3) LECTURE-5 & 6 INTRODUCTION TO DATA STRUCTURES & ALGORITHMS Lecturer Azka Aziz [email protected]
2. 2. Data structures & Algorithms Lecture#05 Link List(continue..)
3. 3. Course contents Link List Data Structure (Declaration, Initialization, Updating) Insert After a specific location Insert a value before a specific location Insert a value as a first item of list Delete a value from a link list
4. 4. Insert After a specific Location Insert_after (position , value) 1. If (position <= 0); then Write(“invalid position”); [End of if structure] 2. Temp = ptfirst; 3. Repeat for I = 1,2,3…. < position 3.1. Temp = temp -> link; 3.2. If (temp == NULL) 3.2.1. Write(“invalid position”); 3.2.2. Return; [End of if structure of 3.2] [End of loop of step number 3] 4. Q = new node; 5. Q -> data = value; 6. Q -> link = temp -> link; 7. Temp -> link = Q; 8. Return;
6. 6. Insert a Value Before a Specific Location Insert_before (value , position) 1. If (position <= 0); then Write(“Invalid position”) Return; [End of if structure] 2. Temp = ptfirst; 2.1. Repeat for I = 1,2,3….. <= position-1; 2.1.1. Previous = temp; 2.1.2. Temp = temp -> link; 2.1.3. If (temp == NULL) Write(“invalid position”) Return [End of if structure of step 2.1.3] [End of loop of step 2.1] 3. Temp = new node; 4. Temp -> data = value; 5. Temp -> link = previous -> link; 6. Previous -> link = temp; 7. Return
7. 7. Insert a Value as a First List Item Insert_First(value) 1. [create a new node] temp= new node 2. temp->data = value 3. Temp-> link = ptfirst 4. Ptfirst = temp 5. Return
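A minimal C++ sketch of this insert-at-head algorithm (the node struct and the global ptfirst pointer mirror the ones used later in these slides):
struct node { int data; struct node *link; };
struct node *ptfirst = NULL; // global pointer to the first node

void insert_first(int value) {
    node *temp = new node;  // 1. create a new node
    temp->data = value;     // 2. store the value
    temp->link = ptfirst;   // 3. old head follows the new node
    ptfirst = temp;         // 4. new node becomes the head
}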
8. 8. Delete a Value from Link List Delete(value) 1. [initialization] Temp = ptfirst; Previous = temp; 2. Repeat while (temp != NULL) 2.1. If (temp -> data == value) then 2.1.1. If (temp == ptfirst) then Ptfirst = temp -> link; Delete temp; Else Previous -> link = temp -> link; Delete temp; [end of if-else structure of 2.1.1] Return; Else Previous = temp; Temp = temp -> link; [end of if-else structure of step 2.1] [end of loop of step 2] 3. Write(“value not found”); 4. Return;
9. 9. Data structures & Algorithms Lecture#06 Double Link List
10. 10. Course contents Doubly Link List ADT Operations Sample Code Structure of a DLL Insert value Insert first
11. 11. Insert First Insert_first(value) 1. [create new node] Temp = new node; 2. [insert value] Temp -> data = value; 3. Temp -> prev = NULL; 4. Temp -> next = ptfirst; 5. If (ptfirst != NULL) Ptfirst -> prev = temp; [End of if-structure] 6. Ptfirst = temp; 7. Return
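A matching C++ sketch for the doubly linked version (dnode and start are illustrative names; the slides themselves use node and ptfirst):
struct dnode { dnode *prev; dnode *next; int data; };
dnode *start = NULL; // head of the doubly linked list

void insert_first_dll(int value) {
    dnode *temp = new dnode;
    temp->data = value;
    temp->prev = NULL;      // new head has no predecessor
    temp->next = start;     // old head follows the new node
    if (start != NULL)
        start->prev = temp; // back-link from the old head
    start = temp;           // new node becomes the head
}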
12. 12. Double Link list Doubly linked list (or two way linked list) is a type of linked list with every node containing two node pointers (next, prev) instead of only one (next). [Diagram: a doubly linked list starting at Start, with yellow pointers representing next-links, green pointers representing previous-links, and NULL at both ends.]
13. 13. Double Link list … Node ADT
14. 14. Double Link list … Doubly Linked List ADT [Diagram: a doubly linked list between Start and End pointers, with NULL at both ends.]
15. 15. Doubly Linked List … Operations Operation Description Pseudocode int size(); Counts and returns the number of nodes in Doubly Linked List Create a Node Pointer ‘temp’ and point it to ‘start’ Create an integer variable ‘count’ and assign 0 to it Iterate temp through all nodes of list till the end of list is reached During every iteration add one to ‘count’ Return ‘count’ when iterations are over
16. 16. Doubly Linked List … Operations Operation Description Pseudocode void print_all(); Prints all nodes’ data values Create a node pointer ‘temp’ and point it to ‘start’ of list Traverse through the list until ‘temp’ reaches to the end of list (NULL) During every iteration print the ‘data’ value
17. 17. Doubly Linked List … Operations Operation Description Pseudocode void print_all_backward(); Print all nodes’ ‘data’ values in backward direction Create a node pointer ‘temp’ and point it to ‘end’ of list Traverse back through the list until ‘temp’ reaches to the node before the start of list (NULL) During every iteration print the ‘data’ value
18. 18. Doubly Linked List … Operations Operation Description Pseudocode Node* search(int); Searches through the list for a Node with given value and returns its reference Create a node pointer ‘temp’ and point it to ‘start’ Traverse through the list until either ‘temp’ reaches to the Node with given value or it reaches to the end of list (NULL) Return the node pointer ‘temp’
19. 19. Doubly Linked List … add_node()
20. 20. Doubly Linked List … Operations Operation Description Pseudocode void append_node(int); Create a node with given value and append (add to end) it to the list Create a node with given ‘key’ as its ‘data’, NULL as ‘next’, NULL as ‘prev’ Point the newly created node with pointer ‘temp’ If size() of list is ZERO, assign ‘temp’ to ‘start’, assign ‘temp’ to ‘end’ Otherwise Assign ‘temp’ to ‘next’ of ‘end’ Assign ‘end’ to ‘prev’ of ‘temp’ Assign ‘temp’ to ‘end’
21. 21. Doubly Linked List … append_node()
22. 22. Doubly Linked List … Operations Operation Description Pseudocode void delete_node(int); Delete the node with given value from the list Search node with given value as key (say it is ‘temp’) If ‘temp’ is equal to NULL then Exit Else If ‘temp’ is equal to ‘start’ and it is equal to ‘end’ Assign NULL to ‘start’ and assign NULL to ‘end’ Exit Else if ‘temp’ is equal to ‘start’ Assign ‘next’ of ‘start’ to ‘start’ Assign NULL to ‘prev’ of ‘start’ Delete ‘temp’, Exit Else if ‘temp’ is equal to ‘end’ Assign ‘prev’ of ‘end’ to ‘end’ Assign NULL to ‘next’ of ‘end’ Delete ‘temp’, Exit Otherwise, Create node pointers ‘n’ and ‘p’ and assign them ‘next’ of ‘temp’ and ‘prev’ of ‘temp’ respectively Assign ‘n’ to the ‘next’ of ‘p’ Assign ‘p’ to the ‘prev’ of ‘n’ Delete ‘temp’, Exit
23. 23. Doubly Linked List … delete_node
24. 24. Doubly Linked List … insert_after
25. 25. Doubly Linked List … Operations Operation Description Pseudocode void insert_before(int before, int key); Insert a node with given value in linked list immediately before a specific node already present in the list Search the node with data is equal to ‘before’, say it is ‘temp’ If ‘temp’ is pointing to NULL, exit Else, Create a new node in memory and point it be a pointer ‘new_node’ Set ‘key’ as data of ‘new_node’ Assign ‘prev’ of ‘temp’ to the ‘prev’ of ‘new_node’ Assign ‘temp’ to ‘next’ of ‘new_node’ Assign ‘new_node’ to the ‘prev’ of ‘temp’ If ‘prev’ of ‘new_node’ is not NULL, assign ‘new_node’ to the ‘next’ of ‘prev’ of ‘new_node’ Exit
25. 25. Doubly Linked List … Operations Operation Description Pseudocode void insert_before(int before, int key); Insert a node with given value in linked list immediately before a specific node already present in the list Search the node whose data is equal to ‘before’, say it is ‘temp’ If ‘temp’ is pointing to NULL, exit Else, Create a new node in memory and point to it with a pointer ‘new_node’ Set ‘key’ as data of ‘new_node’ Assign ‘prev’ of ‘temp’ to the ‘prev’ of ‘new_node’ Assign ‘temp’ to ‘next’ of ‘new_node’ Assign ‘new_node’ to the ‘prev’ of ‘temp’ If ‘prev’ of ‘new_node’ is not NULL, assign ‘new_node’ to the ‘next’ of ‘prev’ of ‘new_node’ Exit
27. 27. Double Link list … sample code Sample code is provided during class
28. 28. Structure for node in DLL struct node { struct node *prev; struct node *next; int data; }; struct node *ptfirst = NULL; (global variable)
29. 29. Main Algorithm for DLL operations Start Repeat (loop) while true (true = 1) [menu option] Write(cout) (“1: Insert ”); Write (“2: Display ”); Write (“3: Insert first ”); Write (“4: Insert after ”); Write (“5: Insert before ”); Write (“6: Delete ”); Write (“7: Exit ”); Write (“Enter option number ”); Read (cin) option number (option_number) Select option_number
30. 30. Select option Option 1 Write (“Insert value ”); Read value Insert (value) Option 2 Display(); Option 3 …… Option 7 Exit(); [Exit of selection structure] [End of loop structure] End
31. 31. Insert a value in DLL Insert(value) { 1. [check for empty list] If (ptfirst == NULL) then Ptfirst = new node; Ptfirst -> prev = NULL; Temp = ptfirst; Else 2. Temp = ptfirst; 3. Repeat while (temp -> next != NULL) Temp = temp -> next; [end of loop] Temp -> next = new node; Temp -> next -> prev = temp; Temp = temp -> next; [end of if-else] 4. Temp -> data = value; 5. Temp -> next = NULL; 6. Return; }
32. 32. Display/Traverse a DLL: Display() { If(ptfirst == NULL) then Write (“list is empty ”); Return ; [End of if-structure] temp = ptfirst; Repeat while(temp != NULL); write(temp -> data); Temp = temp -> next; [ end of loop] Return }
Angular Js Directives v.1.3.0 Cheat Sheet
Angular Js Directives
A
Modifies the default behavior of the html A tag so that the default action is prevented when the href attribute is empty.
Priority level: 0
Example:
<a href="" ng-click="list.addItem()">Add Item</a>
Form
Submitting a form and preventing the default action.
CSS classes:
ng-valid is set if the form is valid.
ng-invalid is set if the form is invalid.
ng-pristine is set if the form is pristine.
ng-dirty is set if the form is dirty.
ng-submitted is set if the form was submitted.
Arguments:
name(optional, string): If specified, the form controller will be published into related scope, under this name.
Priority level 0
ngApp
Use this directive to auto-bootstrap an AngularJS application. The ngApp directive designates the root element of the application and is typically placed near the root element of the page - e.g. on the <body> or <html> tags.
ngApp - {angular.Module} - an optional application module name to load.
ngStrictDi (optional) - {boolean} - if this attribute is present on the app element, the injector will be created in "strict-di" mode. This means that the application will fail to invoke functions which do not use explicit function annotation (and are thus unsuitable for minification), as described in the Dependency Injection guide, and useful debugging info will assist in tracking down the root of these bugs.
ngBind
The ngBind attribute tells Angular to replace the text content of the specified HTML element with the value of a given expression, and to update the text content when the value of that expression changes.
ngBind - {expression} - Expression to evaluate.
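Example (illustrative; user.name is an assumed scope property):
<span ng-bind="user.name"></span>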
ngBindHtml
Creates a binding that will innerHTML the result of evaluating the expression into the current element in a secure way. By default, the innerHTML-ed content will be sanitized using the $sanitize service. To utilize this functionality, ensure that $sanitize is available, for example, by including ngSanitize in your module's dependencies (not in core Angular.) You may also bypass sanitization for values you know are safe. To do so, bind to an explicitly trusted value via $sce.trustAsHtml. See the example under Strict Contextual Escaping (SCE).
ngBindHtml - {expression} - Expression to evaluate.
ngBindTemplate
The ngBindTemplate directive specifies that the element text content should be replaced with the interpolation of the template in the ngBindTemplate attribute. Unlike ngBind, the ngBindTemplate can contain multiple {{ }} expressions. This directive is needed since some HTML elements (such as TITLE and OPTION) cannot contain SPAN elements.
ngBindTemplate - {string} - template of form {{ expression }} to eval.
ngBlur
Specify custom behavior on blur event.
Note: As the blur event is executed synchronously also during DOM manipulations (e.g. removing a focussed input), AngularJS executes the expression using scope.$evalAsync if the event is fired during an $apply to ensure a consistent state.
ngBlur - {expression} - Expression to evaluate upon blur. (Event object is available as $event)
ngChange
Evaluate the given expression when the user changes the input. The expression is evaluated immediately, unlike the JavaScript onchange event which only triggers at the end of a change (usually, when the user leaves the form element or presses the return key).
Note, this directive requires ngModel to be present.
ngChange - {expression} - Expression to evaluate upon change in input value.
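Example (illustrative; query and search() are assumed scope members; ngModel is required alongside ngChange):
<input type="text" ng-model="query" ng-change="search(query)">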
ngChecked
The HTML specification does not require browsers to preserve the values of boolean attributes such as checked. (Their presence means true and their absence means false.) If we put an Angular interpolation expression into such an attribute then the binding information would be lost when the browser removes the attribute. The ngChecked directive solves this problem for the checked attribute. This complementary directive is not removed by the browser and so provides a permanent reliable place to store the binding information.
Argument:
ngChecked - {expression} - If the expression is truthy, then special attribute "checked" will be set on the element
ngClass
The ngClass directive allows you to dynamically set CSS classes on an HTML element by databinding an expression that represents all classes to be added.
The directive operates in three different ways, depending on which of three types the expression evaluates to:
- If the expression evaluates to a string, the string should be one or more space-delimited class names.
- If the expression evaluates to an array, each element of the array should be a string that is one or more space-delimited class names.
- If the expression evaluates to an object, then for each key-value pair of the object with a truthy value the corresponding key is used as a class name.
The directive won't add duplicate classes if a particular class was already set.
When the expression changes, the previously added classes are removed and only then the new classes are added.
Animations:
add - happens just before the class is applied to the element
remove - happens just before the class is removed from the element
Arguments:
ngClass - {expression} - Expression to eval. The result of the evaluation can be a string representing space delimited class names, an array, or a map of class names to boolean values. In the case of a map, the names of the properties whose values are truthy will be added as css classes to the element.
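Example (illustrative; isActive and isError are assumed scope properties):
<div ng-class="{active: isActive, 'text-error': isError}">...</div>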
ngClassEven ngClassOdd
The ngClassOdd and ngClassEven directives work exactly as ngClass, except they work in conjunction with ngRepeat and take effect only on odd (even) rows.
Usage:
as attribute:
<ANY
ng-class-even="">
...
</ANY>
as CSS class:
<ANY class="ng-class-even: ;"> ... </ANY>
Arguments
ngClassEven | ngClassOdd - {expression} - Expression to eval. The result of the evaluation can be a string representing space delimited class names or an array.
ngClick
The ngClick directive allows you to specify custom behavior when an element is clicked.
Arguments:
ngClick - {expression} - Expression to evaluate upon click. (Event object is available as $event)
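Example (illustrative; count is an assumed scope property):
<button ng-click="count = count + 1">Increment</button>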
ngCloak
The ngCloak directive is used to prevent the Angular html template from being briefly displayed by the browser in its raw (uncompiled) form while your application is loading. Use this directive to avoid the undesirable flicker effect caused by the html template display.
ngController
The ngController directive attaches a controller class to the view. This is a key aspect of how angular supports the principles behind the Model-View-Controller design pattern.
MVC components in angular:
Model — Models are the properties of a scope; scopes are attached to the DOM where scope properties are accessed through bindings.
View — The template (HTML with data bindings) that is rendered into the View.
Controller — The ngController directive specifies a Controller class; the class contains business logic behind the application to decorate the scope with functions and values
Arguments
ngController - {expression} - Name of a constructor function registered with the current $controllerProvider or an expression that on the current scope evaluates to a constructor function. The controller instance can be published into a scope property by specifying ng-controller="as propertyName". If the current $controllerProvider is configured to use globals (via $controllerProvider.allowGlobals()), this may also be the name of a globally accessible constructor function (not recommended).
ngCopy
Specify custom behavior on copy event.
Arguments:
ngCopy - {expression} - Expression to evaluate upon copy. (Event object is available as $event)
ngCsp
Enables CSP (Content Security Policy) support.
This is necessary when developing things like Google Chrome Extensions.
CSP forbids apps to use eval or Function(string) generated functions (among other things). For Angular to be CSP compatible there are only two things that we need to do differently:
- don't use Function constructor to generate optimized value getters
- don't inject custom stylesheet into the document
AngularJS uses Function(string) generated functions as a speed optimization. Applying the ngCsp directive will cause Angular to use CSP compatibility mode. When this mode is on AngularJS will evaluate all expressions up to 30% slower than in non-CSP mode, but no security violations will be raised.
ngCut
Specify custom behavior on cut event.
ngCut - {expression} - Expression to evaluate upon cut. (Event object is available as $event)
ngKeydown
Specify custom behavior on keydown event.
Arguments:
ngKeydown - {expression} - Expression to evaluate upon keydown. (Event object is available as $event and can be interrogated for keyCode, altKey, etc.)
ngKeypress
Specify custom behavior on keypress event.
Arguments:
ngKeypress - {expression} - Expression to evaluate upon keypress. (Event object is available as $event and can be interrogated for keyCode, altKey, etc.)
ngKeyup
Specify custom behavior on keyup event.
Arguments:
ngKeyup - {expression} - Expression to evaluate upon keyup. (Event object is available as $event and can be interrogated for keyCode, altKey, etc.)
ngList
Text input that converts between a delimited string and an array of strings. The default delimiter is a comma followed by a space - equivalent to ng-list=", ". You can specify a custom delimiter as the value of the ngList attribute - for example, ng-list=" | ".
Arguments:
ngList (optional) - {string} - optional delimiter that should be used to split the value.
ngModel
The ngModel directive binds an input,select, textarea (or custom form control) to a property on the scope using NgModelController, which is created and exposed by this directive.
ngModelOptions
Allows tuning how model updates are done. Using ngModelOptions you can specify a custom list of events that will trigger a model update and/or a debouncing delay so that the actual update only takes place when a timer expires; this timer will be reset after another change takes place.
Arguments:
ngModelOptions - {Object} - options to apply to the current model. Valid keys are:
updateOn: string specifying which events the input should be bound to. You can set several events using a space delimited list. There is a special event called default that matches the default events belonging to the control.
debounce: integer value which contains the debounce model update value in milliseconds. A value of 0 triggers an immediate update. If an object is supplied instead, you can specify a custom value for each event. For example: ng-model-options="{ updateOn: 'default blur', debounce: {'default': 500, 'blur': 0} }"
getterSetter: boolean value which determines whether or not to treat functions bound to ngModel as getters/setters.
timezone: Defines the timezone to be used to read/write the Date instance in the model for <input type="date">, <input type="time">, ... . Right now, the only supported value is 'UTC', otherwise the default timezone of the browser will be used.
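Example (illustrative; user.name is an assumed scope property, and the options mirror the debounce sample above):
<input type="text" ng-model="user.name" ng-model-options="{ updateOn: 'default blur', debounce: { 'default': 500, 'blur': 0 } }">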
ngMousedown
The ngMousedown directive allows you to specify custom behavior on mousedown event.
Arguments:
ngMousedown - {expression} - Expression to evaluate upon mousedown. (Event object is available as $event)
ngMouseenter
Specify custom behavior on mouseenter event.
Arguments:
ngMouseenter - {expression} - Expression to evaluate upon mouseenter. (Event object is available as $event)
ngMouseleave
Specify custom behavior on mouseleave event.
Arguments:
ngMouseleave - {expression} - Expression to evaluate upon mouseleave. (Event object is available as $event)
ngMousemove
Specify custom behavior on mousemove event.
Arguments:
ngMousemove - {expression} - Expression to evaluate upon mousemove. (Event object is available as $event)
ngMouseover
Specify custom behavior on mouseover event.
Arguments:
ngMouseover - {expression} - Expression to evaluate upon mouseover. (Event object is available as $event)
ngMouseup
Specify custom behavior on mouseup event.
Arguments:
ngMouseup - {expression} - Expression to evaluate upon mouseup. (Event object is available as $event)
ngNonBindable
The ngNonBindable directive tells Angular not to compile or bind the contents of the current DOM element. This is useful if the element contains what appears to be Angular directives and bindings but which should be ignored by Angular. This could be the case if you have a site that displays snippets of code, for instance.
ngOpen
The HTML specification does not require browsers to preserve the values of boolean attributes such as open. (Their presence means true and their absence means false.) If we put an Angular interpolation expression into such an attribute then the binding information would be lost when the browser removes the attribute. The ngOpen directive solves this problem for the open attribute. This complementary directive is not removed by the browser and so provides a permanent reliable place to store the binding information.
Arguments:
ngOpen - {expression} - If the expression is truthy, then special attribute "open" will be set on the element.
ngPaste
Specify custom behavior on paste event.
Arguments:
ngPaste - {expression} - Expression to evaluate upon paste. (Event object is available as $event)
ngPluralize
ngPluralize is a directive that displays messages according to en-US localization rules. These rules are bundled with angular.js, but can be overridden (see Angular i18n dev guide). You configure ngPluralize directive by specifying the mappings between plural categories and the strings to be displayed.
Arguments:
count - {string | expression} - The variable to be bound to.
when - {string} - The mapping between plural category to its corresponding strings.
offset (optional) - {number} - Offset to deduct from the total number.
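Example (illustrative; personCount is an assumed scope property):
<ng-pluralize count="personCount" when="{'0': 'Nobody is viewing.', 'one': '1 person is viewing.', 'other': '{} people are viewing.'}"></ng-pluralize>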
ngReadonly
The HTML specification does not require browsers to preserve the values of boolean attributes such as readonly. (Their presence means true and their absence means false.) If we put an Angular interpolation expression into such an attribute then the binding information would be lost when the browser removes the attribute. The ngReadonly directive solves this problem for the readonly attribute. This complementary directive is not removed by the browser and so provides a permanent reliable place to store the binding information.
Arguments:
ngReadonly - {expression} - If the expression is truthy, then special attribute "readonly" will be set on the element
ngRepeat
The ngRepeat directive instantiates a template once per item from a collection. Each template instance gets its own scope, where the given loop variable is set to the current collection item, and $index is set to the item index or key.
Special properties are exposed on the local scope of each template instance, including:
$index - number - iterator offset of the repeated element (0..length-1)
$first - boolean - true if the repeated element is first in the iterator.
$middle - boolean - true if the repeated element is between the first and last in the iterator.
$last - boolean - true if the repeated element is last in the iterator.
$even - boolean - true if the iterator position $index is even (otherwise false).
$odd - boolean - true if the iterator position $index is odd (otherwise false).
Arguments:
ngRepeat - {repeat_expression} - The expression indicating how to enumerate a collection. These formats are currently supported:
variable in expression – where variable is the user defined loop variable and expression is a scope expression giving the collection to enumerate.
For example: album in artist.albums.
(key, value) in expression – where key and value can be any user defined identifiers, and expression is the scope expression giving the collection to enumerate.
For example: (name, age) in {'adam':10, 'amalie':12}.
variable in expression track by tracking_expression – You can also provide an optional tracking function which can be used to associate the objects in the collection with the DOM elements. If no tracking function is specified the ng-repeat associates elements by identity in the collection. It is an error to have more than one tracking function to resolve to the same key. (This would mean that two distinct objects are mapped to the same DOM element, which is not possible.) Filters should be applied to the expression, before specifying a tracking expression.
For example: item in items is equivalent to item in items track by $id(item). This implies that the DOM elements will be associated by item identity in the array.
variable in expression as alias_expression – You can also provide an optional alias expression which will then store the intermediate results of the repeater after the filters have been applied. Typically this is used to render a special message when a filter is active on the repeater, but the filtered result set is empty.
For example: item in items | filter:x as results will store the fragment of the repeated items as results, but only after the items have been processed through the filter.
For example: item in items track by $id(item). A built in $id() function can be used to assign a unique $$hashKey property to each item in the array. This property is then used as a key to associated DOM elements with the corresponding item in the array by identity. Moving the same object in array would move the DOM element in the same way in the DOM.
For example: item in items track by item.id is a typical pattern when the items come from the database. In this case the object identity does not matter. Two objects are considered equivalent as long as their id property is same.
For example: item in items | filter:searchText track by item.id is a pattern that might be used to apply a filter to items in conjunction with a tracking expression.
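Example (illustrative; items is an assumed collection of objects with id and name fields):
<li ng-repeat="item in items track by item.id">{{ item.name }}</li>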
ngSelected
The HTML specification does not require browsers to preserve the values of boolean attributes such as selected. (Their presence means true and their absence means false.) If we put an Angular interpolation expression into such an attribute then the binding information would be lost when the browser removes the attribute. The ngSelected directive solves this problem for the selected attribute. This complementary directive is not removed by the browser and so provides a permanent reliable place to store the binding information.
Arguments:
ngSelected - {expression} - If the expression is truthy, then special attribute "selected" will be set on the element
ngShow
The ngShow directive shows or hides the given HTML element based on the expression provided to the ngShow attribute. The element is shown or hidden by removing or adding the .ng-hide CSS class onto the element. The .ng-hide CSS class is predefined in AngularJS and sets the display style to none (using an !important flag). For CSP mode please add angular-csp.css to your html file (see ngCsp).
Arguments:
ngShow - {expression} - If the expression is truthy then the element is shown.
input
HTML input element control with angular data-binding.
Arguments:
ngModel(string): Assignable angular expression to data-bind to.
name(optional, string): Property name of the form under which the control is published.
required(optional, string): Sets required validation error key if the value is not entered.
ngRequired(optional, boolean): Sets required attribute if set to true
ngMinlength(optional,number): Sets minlength validation error key if the value is shorter than minlength.
ngMaxlength(optional,number): Sets maxlength validation error key if the value is longer than maxlength.
ngPattern(optional, string): Sets pattern validation error key if the value does not match the RegExp pattern expression. Expected value is /regexp/ for inline patterns or regexp for patterns defined as scope expressions.
ngChange(optional,string) Angular expression to be executed when input changes due to user interaction with the input element.
priority level 0
input[checkbox]
HTML checkbox.
Arguments:
ngModel, string: Assignable angular expression to data-bind to.
name(optional), string: Property name of the form under which the control is published.
ngTrueValue(optional), expression: The value to which the expression should be set when selected.
ngFalseValue(optional), expression: The value to which the expression should be set when not selected.
ngChange(optional), string: Angular expression to be executed when input changes due to user interaction with the input element.
Priority level 0
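For example, a minimal sketch (order.wrapping and the two constant values are illustrative):
<input type="checkbox" ng-model="order.wrapping" ng-true-value="'gift'" ng-false-value="'plain'">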
input[datetime-local]
Input with datetime validation and transformation. In browsers that do not yet support the HTML5 date input, a text element will be used. In that case, the text must be entered in a valid ISO-8601 local datetime format (yyyy-MM-ddTHH:mm), for example: 2010-12-28T14:57. The model must be a Date object.
ngModel, string: Assignable angular expression to data-bind to.
name(optional), string: Property name of the form under which the control is published.
min(optional), string: Sets the min validation error key if the value entered is less than min. This must be a valid ISO datetime format (yyyy-MM-ddTHH:mm).
max(optional), string: Sets the max validation error key if the value entered is greater than max. This must be a valid ISO datetime format (yyyy-MM-ddTHH:mm).
required(optional), string: Sets required validation error key if the value is not entered.
ngRequired (optional), string: Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngChange(optional), string: Angular expression to be executed when input changes due to user interaction with the input element.
priority level 0
input[date]
Input with date validation and transformation. In browsers that do not yet support the HTML5 date input, a text element will be used. In that case, text must be entered in a valid ISO-8601 date format (yyyy-MM-dd), for example: 2009-01-06. The model must always be a Date object.
ngModel, string: Assignable angular expression to data-bind to.
name(optional), string: Property name of the form under which the control is published.
min(optional), string: Sets the min validation error key if the value entered is less than min. This must be a valid ISO date string (yyyy-MM-dd).
max(optional), string: Sets the max validation error key if the value entered is greater than max. This must be a valid ISO date string (yyyy-MM-dd).
required(optional), string: Sets required validation error key if the value is not entered.
ngRequired(optional), string: Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngChange(optional), string: Angular expression to be executed when input changes due to user interaction with the input element.
priority level 0
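For example, a minimal sketch (trip.start is an assumed scope property and must hold a Date object):
<input type="date" ng-model="trip.start" min="2019-01-01" max="2019-12-31" required>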
input[time]
Input with time validation and transformation. In browsers that do not yet support the HTML5 time input, a text element will be used. In that case, the text must be entered in a valid ISO-8601 local time format (HH:mm:ss), for example: 14:57:00. The model must be a Date object. This binding always outputs a Date object set to January 1, 1970 in local time, that is, new Date(1970, 0, 1, HH, mm, ss).
ngModel - {string} - Assignable angular expression to data-bind to.
name (optional) - {string} - Property name of the form under which the control is published.
min (optional) - {string} - Sets the min validation error key if the value entered is less than min. This must be a valid ISO time format (HH:mm:ss).
max (optional) - {string} - Sets the max validation error key if the value entered is greater than max. This must be a valid ISO time format (HH:mm:ss).
required (optional) - {string} - Sets required validation error key if the value is not entered.
ngRequired (optional) - {string} - Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngChange (optional) - {string} - Angular expression to be executed when input changes due to user interaction with the input element.
input[url]
Text input with URL validation. Sets the url validation error key if the content is not a valid URL.
ngModel - {string} - Assignable angular expression to data-bind to.
name (optional) - {string} - Property name of the form under which the control is published.
required (optional) - {string} - Sets required validation error key if the value is not entered.
ngRequired (optional) - {string} - Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngMinlength (optional) - {number} - Sets minlength validation error key if the value is shorter than minlength.
ngMaxlength (optional) - {number} - Sets maxlength validation error key if the value is longer than maxlength.
ngPattern (optional) - {string} - Sets pattern validation error key if the value does not match the RegExp pattern expression. Expected value is /regexp/ for inline patterns or regexp for patterns defined as scope expressions.
ngChange (optional) - {string} - Angular expression to be executed when input changes due to user interaction with the input element.
input[week]
Input with week-of-the-year validation and transformation to Date. In browsers that do not yet support the HTML5 week input, a text element will be used. In that case, the text must be entered in a valid ISO-8601 week format (yyyy-W##), for example: 2013-W02. The model must always be a Date object.
ngModel - {string} - Assignable angular expression to data-bind to.
name (optional) - {string} - Property name of the form under which the control is published.
min (optional) - {string} - Sets the min validation error key if the value entered is less than min. This must be a valid ISO week format (yyyy-W##).
max (optional) - {string} - Sets the max validation error key if the value entered is greater than max. This must be a valid ISO week format (yyyy-W##).
required (optional) - {string} - Sets required validation error key if the value is not entered.
ngRequired (optional) - {string} - Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngChange (optional) - {string} - Angular expression to be executed when input changes due to user interaction with the input element.
input[email]
Text input with email validation. Sets the email validation error key if not a valid email address.
ngModel, string: Assignable angular expression to data-bind to.
name(optional), string: Property name of the form under which the control is published.
required(optional), string: Sets required validation error key if the value is not entered.
ngRequired(optional), string: Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngMinlength(optional), number: Sets minlength validation error key if the value is shorter than minlength.
ngMaxlength(optional), number: Sets maxlength validation error key if the value is longer than maxlength.
ngPattern(optional), string: Sets pattern validation error key if the value does not match the RegExp pattern expression. Expected value is /regexp/ for inline patterns or regexp for patterns defined as scope expressions.
ngChange(optional), string: Angular expression to be executed when input changes due to user interaction with the input element.
priority level 0
input[month]
Input with month validation and transformation. In browsers that do not yet support the HTML5 month input, a text element will be used. In that case, the text must be entered in a valid ISO-8601 month format (yyyy-MM), for example: 2009-01. The model must always be a Date object.
ngModel, string: Assignable angular expression to data-bind to.
name(optional), string: Property name of the form under which the control is published.
min(optional), string: Sets the min validation error key if the value entered is less than min. This must be a valid ISO month format (yyyy-MM).
max(optional), string: Sets the max validation error key if the value entered is greater than max. This must be a valid ISO month format (yyyy-MM).
required(optional), string: Sets required validation error key if the value is not entered.
ngRequired(optional), string: Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngChange(optional), string: Angular expression to be executed when input changes due to user interaction with the input element.
priority level 0
input[number]
Text input with number validation and transformation. Sets the number validation error if not a valid number.
ngModel, string: Assignable angular expression to data-bind to.
name(optional), string: Property name of the form under which the control is published.
min(optional), string: Sets the min validation error key if the value entered is less than min.
max(optional), string: Sets the max validation error key if the value entered is greater than max.
required(optional), string: Sets required validation error key if the value is not entered.
ngRequired(optional), string: Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngMinlength(optional), number: Sets minlength validation error key if the value is shorter than minlength.
ngMaxlength(optional), number: Sets maxlength validation error key if the value is longer than maxlength.
ngPattern(optional), string: Sets pattern validation error key if the value does not match the RegExp pattern expression. Expected value is /regexp/ for inline patterns or regexp for patterns defined as scope expressions.
ngChange(optional), string: Angular expression to be executed when input changes due to user interaction with the input element.
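For example, a minimal sketch (cart.quantity is an assumed scope property):
<input type="number" ng-model="cart.quantity" min="1" max="10" required>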
input[text]
Standard HTML text input with angular data binding, inherited by most of the input elements.
ngModel - {string} - Assignable angular expression to data-bind to.
name (optional) - {string} - Property name of the form under which the control is published.
required (optional) - {string} - Adds required validation error key if the value is not entered.
ngRequired (optional) - {string} - Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngMinlength (optional) - {number} - Sets minlength validation error key if the value is shorter than minlength.
ngMaxlength (optional) - {number} - Sets maxlength validation error key if the value is longer than maxlength.
ngPattern (optional) - {string} - Sets pattern validation error key if the value does not match the RegExp pattern expression. Expected value is /regexp/ for inline patterns or regexp for patterns defined as scope expressions.
ngChange (optional) - {string} - Angular expression to be executed when input changes due to user interaction with the input element.
ngTrim (optional) - {boolean} - If set to false Angular will not automatically trim the input. This parameter is ignored for input[type=password] controls, which will never trim the input. (default: true)
input[radio]
HTML radio button.
ngModel, string: Assignable angular expression to data-bind to.
value, string: The value to which the expression should be set when selected.
name(optional), string: Property name of the form under which the control is published.
ngChange(optional), string: Angular expression to be executed when input changes due to user interaction with the input element.
ngValue, string: Angular expression which sets the value to which the expression should be set when selected.
priority level 0.
ngDblclick
The ngDblclick directive allows you to specify custom behavior on a dblclick event.
Arguments:
ngDblclick - {expression} - Expression to evaluate upon a dblclick. (The Event object is available as $event)
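For example, a minimal sketch (count is an assumed scope value):
<button ng-dblclick="count = count + 1">Double-click me: {{count}}</button>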
ngDisabled
The HTML specification does not require browsers to preserve the values of boolean attributes such as disabled. (Their presence means true and their absence means false.) If we put an Angular interpolation expression into such an attribute then the binding information would be lost when the browser removes the attribute. The ngDisabled directive solves this problem for the disabled attribute. This complementary directive is not removed by the browser and so provides a permanent reliable place to store the binding information.
Arguments:
ngDisabled - {expression} - If the expression is truthy, then special attribute "disabled" will be set on the element
ngFocus
Specify custom behavior on focus event.
Arguments:
ngFocus - {expression} - Expression to evaluate upon focus. (Event object is available as $event)
ngForm
Nestable alias of the form directive. HTML does not allow form elements to be nested, but nesting forms is useful, for example, when the validity of a sub-group of controls needs to be determined.
Arguments:
ngForm | name (optional) - {string} - Name of the form. If specified, the form controller will be published into related scope, under this name.
ngHide
The ngHide directive shows or hides the given HTML element based on the expression provided to the ngHide attribute. The element is shown or hidden by removing or adding the ng-hide CSS class onto the element. The .ng-hide CSS class is predefined in AngularJS and sets the display style to none (using an !important flag). For CSP mode please add angular-csp.css to your html file (see ngCsp).
Arguments:
ngHide - {expression} - If the expression is truthy then the element is hidden, otherwise it is shown.
ngHref
Using Angular markup like {{hash}} in an href attribute will make the link go to the wrong URL if the user clicks it before Angular has a chance to replace the {{hash}} markup with its value. Until Angular replaces the markup the link will be broken and will most likely return a 404 error. The ngHref directive solves this problem.
Arguments:
ngHref - {template} - any string which can contain {{}} markup.
ngIf
The ngIf directive removes or recreates a portion of the DOM tree based on an {expression}. If the expression assigned to ngIf evaluates to a false value then the element is removed from the DOM, otherwise a clone of the element is reinserted into the DOM.
Arguments:
ngIf - {expression} - If the expression is falsy then the element is removed from the DOM tree. If it is truthy a copy of the compiled element is added to the DOM tree.
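For example, a minimal sketch (user.isAdmin is an assumed scope value):
<div ng-if="user.isAdmin">Admin tools (removed from the DOM for non-admins)</div>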
ngInclude
Fetches, compiles and includes an external HTML fragment.
Arguments:
ngInclude | src - {string} - angular expression evaluating to URL. If the source is a string constant, make sure you wrap it in single quotes, e.g. src="'myPartialTemplate.html'".
onload (optional) - {string} - Expression to evaluate when a new partial is loaded.
autoscroll (optional) - {string} - Whether ngInclude should call $anchorScroll to scroll the viewport after the content is loaded.
- If the attribute is not set, disable scrolling.
- If the attribute is set without value, enable scrolling.
- Otherwise enable scrolling only if the expression evaluates to truthy value.
Events:
$includeContentRequested - Emitted every time the ngInclude content is requested.
Type: emit
Target: the scope ngInclude was declared in
$includeContentLoaded - Emitted every time the ngInclude content is reloaded.
Type: emit
Target: the current ngInclude scope
$includeContentError - Emitted when a template HTTP request yields an erroneous response (status < 200 || status > 299)
Type: emit
Target: the scope ngInclude was declared in
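A minimal usage sketch (the partial path and onload expression are illustrative):
<div ng-include="'partials/header.html'" onload="headerLoaded = true" autoscroll></div>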
ngInit
The ngInit directive allows you to evaluate an expression in the current scope.
Arguments:
ngInit - {expression} - Expression to eval.
ngSrc
Using Angular markup like {{hash}} in a src attribute doesn't work right: The browser will fetch from the URL with the literal text {{hash}} until Angular replaces the expression inside {{hash}}. The ngSrc directive solves this problem.
Arguments:
ngSrc - {template} - any string which can contain {{}} markup.
ngSrcset
Using Angular markup like {{hash}} in a srcset attribute doesn't work right: The browser will fetch from the URL with the literal text {{hash}} until Angular replaces the expression inside {{hash}}. The ngSrcset directive solves this problem.
Arguments:
ngSrcset - {template} - any string which can contain {{}} markup.
ngStyle
The ngStyle directive allows you to set CSS style on an HTML element conditionally.
Arguments:
ngStyle - {expression} - Expression which evals to an object whose keys are CSS style names and values are corresponding values for those CSS keys.
Since some CSS style names are not valid keys for an object, they must be quoted. See the 'background-color' style in the example below.
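A minimal sketch with illustrative values:
<span ng-style="{color: 'white', 'background-color': 'coral'}">Styled text</span>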
ngSubmit
Enables binding angular expressions to onsubmit events.
Arguments:
ngSubmit - {expression} - Expression to eval. (Event object is available as $event)
ngSwitch
The ngSwitch directive is used to conditionally swap DOM structure on your template based on a scope expression. Elements within ngSwitch but without ngSwitchWhen or ngSwitchDefault directives will be preserved at the location as specified in the template.
Arguments:
ngSwitch | on - {*} - expression to match against ng-switch-when. On child elements add:
ngSwitchWhen: the case statement to match against. If matched, this case will be displayed. If the same match appears multiple times, all the matching elements will be displayed.
ngSwitchDefault: the default case when no other case matches. If there are multiple default cases, all of them will be displayed when no other case matches.
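For example, a minimal sketch (selection is an assumed scope value):
<div ng-switch on="selection">
  <div ng-switch-when="settings">Settings</div>
  <div ng-switch-when="home">Home</div>
  <div ng-switch-default>Nothing selected</div>
</div>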
ngTransclude
Directive that marks the insertion point for the transcluded DOM of the nearest parent directive that uses transclusion.
Any existing content of the element that this directive is placed on will be removed before the transcluded content is inserted.
ngValue
Binds the given expression to the value of input[select] or input[radio], so that when the element is selected, the ngModel of that element is set to the bound value.
Arguments:
ngValue (optional) - {string} - angular expression, whose value will be bound to the value attribute of the input element
script
Load the content of a <script> element into $templateCache, so that the template can be used by ngInclude, ngView, or directives. The type of the <script> element must be specified as text/ng-template, and a cache name for the template must be assigned through the element's id, which can then be used as a directive's templateUrl.
Arguments:
type - {string} - Must be set to 'text/ng-template'.
id - {string} - Cache name of the template.
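For example, a minimal sketch:
<script type="text/ng-template" id="templateId.html">
  <p>This is the content of the template</p>
</script>
<div ng-include="'templateId.html'"></div>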
select
HTML SELECT element with angular data-binding.
ngOptions:
The ngOptions attribute can be used to dynamically generate a list of <option> elements for the <select> element using the array or object obtained by evaluating the ngOptions comprehension_expression.
Arguments:
ngModel - {string} - Assignable angular expression to data-bind to.
name (optional) - {string} - Property name of the form under which the control is published.
required(optional) - {string} - The control is considered valid only if value is entered.
ngRequired(optional) - {string} - Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngOptions(optional) - {comprehension_expression} - in one of the following forms:
for array data sources:
label for value in array
select as label for value in array
label group by group for value in array
select as label group by group for value in array track by trackexpr
for object data sources:
label for (key, value) in object
select as label for (key, value) in object
label group by group for (key, value) in object
select as label group by group for (key, value) in object
Where:
array / object: an expression which evaluates to an array / object to iterate over.
value: local variable which will refer to each item in the array or each property value of object during iteration.
key: local variable which will refer to a property name in object during iteration.
label: The result of this expression will be the label for <option> element. The expression will most likely refer to the value variable (e.g. value.propertyName).
select: The result of this expression will be bound to the model of the parent <select> element. If not specified, select expression will default to value.
group: The result of this expression will be used to group options using the <optgroup> DOM element.
trackexpr: Used when working with an array of objects. The result of this expression will be used to identify the objects in the array. The trackexpr will most likely refer to the value variable (e.g. value.propertyName).
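For example, a minimal sketch using the select as label for value in array form (cities, city.id, and city.name are assumed scope data):
<select ng-model="selectedCityId"
        ng-options="city.id as city.name for city in cities">
</select>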
textarea
HTML textarea element control with angular data-binding. The data-binding and validation properties of this element are exactly the same as those of the input element.
Arguments:
ngModel - {string} - Assignable angular expression to data-bind to.
name (optional) - {string} - Property name of the form under which the control is published.
required(optional) - {string} - Sets required validation error key if the value is not entered.
ngRequired(optional) - {string} - Adds required attribute and required validation constraint to the element when the ngRequired expression evaluates to true. Use ngRequired instead of required when you want to data-bind to the required attribute.
ngMinlength(optional) - {number} - Sets minlength validation error key if the value is shorter than minlength.
ngMaxlength(optional) - {number} - Sets maxlength validation error key if the value is longer than maxlength.
ngPattern (optional) - {string} - Sets pattern validation error key if the value does not match the RegExp pattern expression. Expected value is /regexp/ for inline patterns or regexp for patterns defined as scope expressions.
ngChange (optional) - {string} - Angular expression to be executed when input changes due to user interaction with the input element.
ngTrim (optional) - {boolean} - If set to false Angular will not automatically trim the input. (default: true)
syslog-ng Premium Edition 7.0.32 - Administration Guide
Formatting messages, filenames, directories, and tablenames
The syslog-ng PE application can dynamically create filenames, directories, or names of database tables using macros that help you organize your log messages. Macros refer to a property or a part of the log message, for example, the ${HOST} macro refers to the name or IP address of the client that sent the log message, while ${DAY} is the day of the month when syslog-ng has received the message. Using these macros in the path of the destination log files allows you, for example, to collect the logs of every host into separate files for every day.
A set of macros can be defined as a template object and used in multiple destinations.
Another use of macros and templates is to customize the format of the syslog message, for example, to add elements of the message header to the message text.
NOTE: If a message uses the IETF-syslog format (RFC5424), only the text of the message can be customized (that is, the $MESSAGE part of the log), the structure of the header is fixed.
Templates and macros
The syslog-ng PE application allows you to define message templates, and reference them from every object that can use a template. Templates can include strings, macros (for example, date, the hostname, and so on), and template functions. For example, you can use templates to create standard message formats or filenames. For a list of macros available in syslog-ng Premium Edition, see Macros of syslog-ng PE. Fields from the structured data (SD) part of messages using the new IETF-syslog standard can also be used as macros.
Declaration
template <template-name> {
template("<template-expression>") <template-escape(yes)>;
};
Template objects have a single option called template-escape(), which is disabled by default (template-escape(no)). This behavior is useful when the messages are passed to an application that cannot handle escaped characters properly. Enabling template escaping (template-escape(yes)) causes syslog-ng to escape the ', ", and backslash characters from the messages.
If you do not want to enable the template-escape() option (which is rarely needed), you can define the template without the enclosing braces.
template <template-name> "<template-expression>";
You can also refer to an existing template from within a template. The result of the referred template will be pasted into the second template.
template first-template "sample-text";
template second-template "The result of the first-template is: $(template first-template)";
If you want to use a template only once, you can define the template inline, for example:
destination d_file {
file ("/var/log/messages" template("${ISODATE} ${HOST} ${MSG}\n") );
};
Macros can be included by prefixing the macro name with a $ sign, just like in Bourne compatible shells. Although using braces around macro names is not mandatory, and the "$MSG" and "${MSG}" formats are equivalent, using the "${MSG}" format is recommended for clarity.
Macro names are case-sensitive, that is, "$message" and "$MESSAGE" are not the same.
To use a literal $ character in a template, you have to escape it. In syslog-ng PE versions 4.0-4.2, use a backslash (\$). In version 5.0 and later, use $$.
NOTE: To use a literal @ character in a template, use @@.
Default values for macros can also be specified by appending the :- characters and the default value of the macro. If a message does not contain the field referred to by the macro, or it is empty, the default value will be used when expanding the macro. For example, if a message does not contain a hostname, the following macro can specify a default hostname.
${HOST:-default_hostname}
By default, syslog-ng sends messages using the following template: ${ISODATE} ${HOST} ${MSGHDR}${MSG}\n. (The ${MSGHDR}${MSG} part is written together because the ${MSGHDR} macro includes a trailing whitespace.)
Example: Using templates and macros
The following template (t_demo_filetemplate) adds the date of the message and the name of the host sending the message to the beginning of the message text. The template is then used in a file destination: messages sent to this destination (d_file) will use the message format defined in the template.
template t_demo_filetemplate {
template("${ISODATE} ${HOST} ${MSG}\n"); };
destination d_file {
file("/var/log/messages" template(t_demo_filetemplate)); };
If you do not want to enable the template-escape() option (which is rarely needed), you can define the template without the enclosing braces. The following two templates are equivalent.
template t_demo_template-with-braces {
template("${ISODATE} ${HOST} ${MSG}\n");
};
template t_demo_template-without-braces "${ISODATE} ${HOST} ${MSG}\n";
Templates can also be used inline, if they are used only at a single location. The following destination is equivalent with the previous example:
destination d_file {
file ("/var/log/messages" template("${ISODATE} ${HOST} ${MSG}\n") );
};
The following file destination uses macros to daily create separate logfiles for every client host.
destination d_file {
file("/var/log/${YEAR}.${MONTH}.${DAY}/${HOST}.log");
};
NOTE: Macros can be used to format messages, and also in the name of destination files or database tables. However, they cannot be used in sources as wildcards, for example, to read messages from files or directories that include a date in their name.
Date-related macros
The macros related to the date of the message (for example: ${ISODATE}, ${HOUR}, and so on) have three further variants each:
• S_ prefix, for example, ${S_DATE}: The ${S_DATE} macro represents the date found in the log message, that is, when the message was sent by the original application.
Caution:
To use the S_ macros, the keep-timestamp() option must be enabled (this is the default behavior of syslog-ng PE).
• R_ prefix, for example, ${R_DATE}: ${R_DATE} is the date when syslog-ng PE has received the message.
• C_ prefix, for example, ${C_DATE}: ${C_DATE} is the current date, that is when syslog-ng PE processes the message and resolves the macro. Note that syslog-ng PE evaluates the macro every time it is processed, so even if you use the same macro for the same message, its value can be different. For example, if you use ${C_USEC} in a filter and in a destination filename, their values will be different even for the same message.
The ${DATE} macro equals the ${S_DATE} macro.
The values of the date-related macros are calculated using the original timezone information of the message. To convert it to a different timezone, use the time-zone() option (see the sketch after the list below). You can set the time-zone() option as a global option, or per destination. For sources, it applies only if the original message does not contain timezone information. Converting the timezone changes the values of the following date-related macros (the MSEC and USEC macros are not changed):
• AMPM
• DATE
• DAY
• FULLDATE
• HOUR
• HOUR12
• ISODATE
• MIN
• MONTH
• MONTH_ABBREV
• MONTH_NAME
• MONTH_WEEK
• SEC
• STAMP
• TZ
• TZOFFSET
• UNIXTIME
• WEEK
• WEEK_DAY
• WEEK_DAY_ABBREV
• WEEK_DAY_NAME
• YEAR
• YEAR_DAY
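For example, a minimal sketch of converting a single destination to a different timezone (the file path and timezone value are illustrative):
destination d_warsaw {
    file("/var/log/messages-warsaw.log" time-zone("Europe/Warsaw"));
};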
Hard versus soft macros
Hard macros contain data that is directly derived from the log message, for example, the ${MONTH} macro derives its value from the timestamp. Hard macros are read-only. Soft macros (sometimes also called name-value pairs) are either built-in macros automatically generated from the log message (for example, ${HOST}), or custom user-created macros generated by using the syslog-ng pattern database or a CSV-parser. In contrast to hard macros, soft macros are writable and can be modified within syslog-ng PE, for example, using rewrite rules.
Hard and soft macros are rather similar and often treated as equivalent. Macros are most commonly used in filters and templates, which does not modify the value of the macro, so both soft and hard macros can be used. However, it is not possible to change the values of hard macros in rewrite rules or via any other means.
The following macros in syslog-ng PE are hard macros and cannot be modified: BSDTAG, CONTEXT_ID, DATE, DAY, FACILITY_NUM, FACILITY, FULLDATE, HOUR, ISODATE, LEVEL_NUM, LEVEL, MIN, MONTH_ABBREV, MONTH_NAME, MONTH, MONTH_WEEK, PRIORITY, PRI, RCPTID, SDATA, SEC, SEQNUM, SOURCEIP, STAMP, TAG, TAGS, TZOFFSET, TZ, UNIXTIME, WEEK_DAY_ABBREV, WEEK_DAY_NAME, WEEK_DAY, WEEK, YEAR_DAY, YEAR.
The following macros can be modified: FULLHOST_FROM, FULLHOST, HOST_FROM, HOST, LEGACY_MSGHDR, MESSAGE, MSG, MSGID, MSGONLY, PID, PROGRAM, SOURCE. Custom values created using rewrite rules or parsers can be modified as well, just like stored matches of regular expressions ($0 ... $255).
#----------------------------------------------------------------------------------------------
# File: data-structures-introduction.R
# Date: 03-25-2012
# Author: Eric Nantz
# URL: https://github.com/thercast/data_structures/blob/master/data-structures-introduction.R
# Email: [email protected]
# Purpose: Examples of creating data structures discussed in the R-Podcast episode 4
# www.r-podcast.org/the-r-podcast-episode-4-data-structures-introduction
# License: Creative Commons Attribution-ShareAlike 3.0 Unported License
#----------------------------------------------------------------------------------------------
# numeric vector of 16 high school GPA scores
gpa.scores <- c(3.67, 3.95, 2.89, 4.00, 3.83, 3.18, 3.72, 2.75,
3.01, 3.29, 4.00, 2.15, 2.89, 3.80, 3.55, 3.99)
# useful functions for exploring vectors in general
length(gpa.scores)
unique(gpa.scores)
length(unique(gpa.scores)) # number of unique elements
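# vectors can also be subset with logical conditions,
# e.g. the scores of at least 3.5:
gpa.scores[gpa.scores >= 3.5]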
# character vector of grade level
grade.character <- c("freshman", "junior", "sophomore", "freshman",
"senior", "junior", "freshman", "sophomore",
"freshman", "junior", "junior", "sophomore",
"senior", "senior", "senior", "sophomore")
# useful functions for character vectors (note there are many others!)
tolower(grade.character)
toupper(grade.character)
# create a factor out of the grade character vector
grade.factor <- factor(grade.character)
# useful functions for factors
table(grade.factor)
levels(grade.factor)
# matrix of gpa scores with two columns corresponding to mid-semester gpa and end of semester gpa
gpa.matrix <- matrix(gpa.scores, nrow=8, byrow=FALSE)
# use the [i,j] notation to extract specific elements from matrix
gpa.matrix[4,2]
# row names and column names can be added to the matrix or supplied in the matrix function
rownames(gpa.matrix) <- c("R1", "R2", "R3", "R4", "R5", "R6", "R7", "R8")
colnames(gpa.matrix) <- c("mid", "end")
gpa.matrix.named <- matrix(gpa.scores,
nrow=8,
byrow=FALSE,
dimnames=list(c("R1", "R2", "R3", "R4", "R5", "R6", "R7", "R8"),
c("mid", "end")))
# you can extract elements using the names of rows and columns
gpa.matrix.named["R4", "end"]
# useful functions for exploring matrices
dim(gpa.matrix)
nrow(gpa.matrix)
ncol(gpa.matrix)
dimnames(gpa.matrix)
# create a list with the objects created so far
gpa.list <- list(gpa.scores, gpa.matrix, grade.character, grade.factor)
# extract components of a list using [[i]]
gpa.list[[1]]
# names of the components of a list can be supplied the following ways
names(gpa.list) <- c("score.vec", "score.matrix", "grade.ch", "grade.f")
gpa.list <- list(score.vec=gpa.scores,
score.matrix=gpa.matrix,
grade.ch=grade.character,
grade.f=grade.factor)
# use the names to extract certain components
gpa.list$score.vec
gpa.list[["score.vec"]]
# useful functions for lists
names(gpa.list)
length(gpa.list)
# create a data frame with gpa.scores and grade levels
gpa.data <- data.frame(gpa.scores, grade.character, grade.factor)
# assign names to the columns using any of the following
names(gpa.data) <- c("scores", "grade.level.ch", "grade.level.f")
colnames(gpa.data) <- c("scores", "grade.level.ch", "grade.level.f")
gpa.data <- data.frame(scores=gpa.scores, grade.level.ch=grade.character, grade.level.f=grade.factor)
# extract rows and columns of data frame using matrix indexing or list indexing
gpa.data[,1]
gpa.data$scores
gpa.data[["scores"]]
gpa.data[2,3]
gpa.data[2, "grade.level.f"]
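# logical conditions also select data frame rows,
# e.g. only the seniors:
gpa.data[gpa.data$grade.level.f == "senior", ]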
# other useful functions can be applied to data frames:
nrow(gpa.data)
ncol(gpa.data)
unique(gpa.data) # returns all of the unique rows of the data frame
names(gpa.data)
# exploring the class and other attributes of objects in R
class(gpa.data) # try this on the other data structures to see how it compares
mode(gpa.data)
attributes(gpa.data)
In this JUnit 5 Maven example, we will learn how to get the required dependencies for writing JUnit 5 tests with Maven. Also, we will learn how to configure the Maven Surefire plugin to run the tests.
This article is part of the JUnit 5 Tutorial.
Required dependencies
Update 18th of June 2018: Beginning from Maven Surefire 2.22.0 there is now native support for JUnit Jupiter. This means that the configuration is now easier.
Update 1st of March 2019: Beginning from JUnit Jupiter 5.4.0 there is now an aggregator artifact junit-jupiter that transitively pulls in dependencies on junit-jupiter-api, junit-jupiter-params, and junit-jupiter-engine for simplified dependency management.
First, to be able to write JUnit 5 tests we need the junit-jupiter artifact as a dependency in pom.xml:
<dependencies>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter</artifactId>
<version>5.4.0</version>
<scope>test</scope>
</dependency>
</dependencies>
In addition, to be able to run JUnit 5 tests we have to add the Maven Surefire plugin:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.1</version>
</plugin>
</plugins>
</build>
Now we have the most basic setup for writing and running JUnit 5 tests with Maven.
Configuration prior to Maven Surefire 2.22.0 and JUnit Jupiter 5.4.0
To be able to write JUnit 5 tests we need the junit-jupiter-api artifact as a dependency:
<dependencies>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.3.2</version>
<scope>test</scope>
</dependency>
</dependencies>
In addition, when using Maven Surefire 2.21.0 or older, we have to use a provider for Maven Surefire plugin to run the tests:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.21.0</version>
<dependencies>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-surefire-provider</artifactId>
<version>1.3.2</version>
</dependency>
</dependencies>
</plugin>
</plugins>
</build>
We also have to add JUnit Jupiter test engine to the runtime classpath. We add the dependency to maven-surefire-plugin dependencies:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.21.0</version>
<dependencies>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-surefire-provider</artifactId>
<version>1.3.2</version>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
<version>5.3.2</version>
</dependency>
</dependencies>
</plugin>
</plugins>
</build>
Now we have a setup for writing and running JUnit 5 tests with older versions of Maven Surefire.
Running tests
The maven-surefire-plugin discovers tests under src/test/java directory by default.
Let’s check if our configuration works by adding a very simple test that does nothing:
import org.junit.jupiter.api.Test;

class MavenExampleTest {
    @Test
    void shouldRun() {
    }
}
Now we can run our tests on the command line with:
$ mvn test
We should see output similar to this:
[INFO] --- maven-surefire-plugin:2.22.0:test (default-test) @ junit5-maven ---
[INFO]
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running MavenExampleTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 s - in MavenExampleTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
There we go! Maven Surefire is now running our tests.
Summary
In this JUnit 5 Maven example we have learned how to add the required dependency for writing JUnit 5 tests and how to configure Maven Surefire plugin to be able to run the tests.
The example code for this guide can be found on GitHub.
MLIR 20.0.0git
SPIRVCanonicalization.cpp
1 //===- SPIRVCanonicalization.cpp - MLIR SPIR-V canonicalization patterns --===//
2 //
3 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4 // See https://llvm.org/LICENSE.txt for license information.
5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6 //
7 //===----------------------------------------------------------------------===//
8 //
9 // This file defines the folders and canonicalization patterns for SPIR-V ops.
10 //
11 //===----------------------------------------------------------------------===//
12
13 #include <optional>
14 #include <utility>
15
17
22 #include "mlir/IR/Matchers.h"
23 #include "mlir/IR/PatternMatch.h"
24 #include "llvm/ADT/STLExtras.h"
25 #include "llvm/ADT/SmallVectorExtras.h"
26
27 using namespace mlir;
28
29 //===----------------------------------------------------------------------===//
30 // Common utility functions
31 //===----------------------------------------------------------------------===//
32
33 /// Returns the boolean value under the hood if the given `boolAttr` is a scalar
34 /// or splat vector bool constant.
35 static std::optional<bool> getScalarOrSplatBoolAttr(Attribute attr) {
36 if (!attr)
37 return std::nullopt;
38
39 if (auto boolAttr = llvm::dyn_cast<BoolAttr>(attr))
40 return boolAttr.getValue();
41 if (auto splatAttr = llvm::dyn_cast<SplatElementsAttr>(attr))
42 if (splatAttr.getElementType().isInteger(1))
43 return splatAttr.getSplatValue<bool>();
44 return std::nullopt;
45 }
46
47 // Extracts an element from the given `composite` by following the given
48 // `indices`. Returns a null Attribute if error happens.
50 ArrayRef<unsigned> indices) {
51 // Check that given composite is a constant.
52 if (!composite)
53 return {};
54 // Return composite itself if we reach the end of the index chain.
55 if (indices.empty())
56 return composite;
57
58 if (auto vector = llvm::dyn_cast<ElementsAttr>(composite)) {
59 assert(indices.size() == 1 && "must have exactly one index for a vector");
60 return vector.getValues<Attribute>()[indices[0]];
61 }
62
63 if (auto array = llvm::dyn_cast<ArrayAttr>(composite)) {
64 assert(!indices.empty() && "must have at least one index for an array");
65 return extractCompositeElement(array.getValue()[indices[0]],
66 indices.drop_front());
67 }
68
69 return {};
70 }
71
72 static bool isDivZeroOrOverflow(const APInt &a, const APInt &b) {
73 bool div0 = b.isZero();
74 bool overflow = a.isMinSignedValue() && b.isAllOnes();
75
76 return div0 || overflow;
77 }
78
79 //===----------------------------------------------------------------------===//
80 // TableGen'erated canonicalizers
81 //===----------------------------------------------------------------------===//
82
83 namespace {
84 #include "SPIRVCanonicalization.inc"
85 } // namespace
86
87 //===----------------------------------------------------------------------===//
88 // spirv.AccessChainOp
89 //===----------------------------------------------------------------------===//
90
91 namespace {
92
93 /// Combines chained `spirv::AccessChainOp` operations into one
94 /// `spirv::AccessChainOp` operation.
95 struct CombineChainedAccessChain final
96 : OpRewritePattern<spirv::AccessChainOp> {
98
99 LogicalResult matchAndRewrite(spirv::AccessChainOp accessChainOp,
100 PatternRewriter &rewriter) const override {
101 auto parentAccessChainOp =
102 accessChainOp.getBasePtr().getDefiningOp<spirv::AccessChainOp>();
103
104 if (!parentAccessChainOp) {
105 return failure();
106 }
107
108 // Combine indices.
109 SmallVector<Value, 4> indices(parentAccessChainOp.getIndices());
110 llvm::append_range(indices, accessChainOp.getIndices());
111
112 rewriter.replaceOpWithNewOp<spirv::AccessChainOp>(
113 accessChainOp, parentAccessChainOp.getBasePtr(), indices);
114
115 return success();
116 }
117 };
118 } // namespace
119
120 void spirv::AccessChainOp::getCanonicalizationPatterns(
121 RewritePatternSet &results, MLIRContext *context) {
122 results.add<CombineChainedAccessChain>(context);
123 }
124
125 //===----------------------------------------------------------------------===//
126 // spirv.IAddCarry
127 //===----------------------------------------------------------------------===//
128
129 // We are required to use CompositeConstructOp to create a constant struct as
130 // they are not yet implemented as constant, hence we can not do so in a fold.
131 struct IAddCarryFold final : OpRewritePattern<spirv::IAddCarryOp> {
133
134 LogicalResult matchAndRewrite(spirv::IAddCarryOp op,
135 PatternRewriter &rewriter) const override {
136 Location loc = op.getLoc();
137 Value lhs = op.getOperand1();
138 Value rhs = op.getOperand2();
139 Type constituentType = lhs.getType();
140
141 // iaddcarry (x, 0) = <0, x>
142 if (matchPattern(rhs, m_Zero())) {
143 Value constituents[2] = {rhs, lhs};
144 rewriter.replaceOpWithNewOp<spirv::CompositeConstructOp>(op, op.getType(),
145 constituents);
146 return success();
147 }
148
149 // According to the SPIR-V spec:
150 //
151 // Result Type must be from OpTypeStruct. The struct must have two
152 // members...
153 //
154 // Member 0 of the result gets the low-order bits (full component width) of
155 // the addition.
156 //
157 // Member 1 of the result gets the high-order (carry) bit of the result of
158 // the addition. That is, it gets the value 1 if the addition overflowed
159 // the component width, and 0 otherwise.
160 Attribute lhsAttr;
161 Attribute rhsAttr;
162 if (!matchPattern(lhs, m_Constant(&lhsAttr)) ||
163 !matchPattern(rhs, m_Constant(&rhsAttr)))
164 return failure();
165
166 auto adds = constFoldBinaryOp<IntegerAttr>(
167 {lhsAttr, rhsAttr},
168 [](const APInt &a, const APInt &b) { return a + b; });
169 if (!adds)
170 return failure();
171
172 auto carrys = constFoldBinaryOp<IntegerAttr>(
173 ArrayRef{adds, lhsAttr}, [](const APInt &a, const APInt &b) {
174 APInt zero = APInt::getZero(a.getBitWidth());
175 return a.ult(b) ? (zero + 1) : zero;
176 });
177
178 if (!carrys)
179 return failure();
180
181 Value addsVal =
182 rewriter.create<spirv::ConstantOp>(loc, constituentType, adds);
183
184 Value carrysVal =
185 rewriter.create<spirv::ConstantOp>(loc, constituentType, carrys);
186
187 // Create empty struct
188 Value undef = rewriter.create<spirv::UndefOp>(loc, op.getType());
189 // Fill in adds at id 0
190 Value intermediate =
191 rewriter.create<spirv::CompositeInsertOp>(loc, addsVal, undef, 0);
192 // Fill in carrys at id 1
193 rewriter.replaceOpWithNewOp<spirv::CompositeInsertOp>(op, carrysVal,
194 intermediate, 1);
195 return success();
196 }
197 };
198
199 void spirv::IAddCarryOp::getCanonicalizationPatterns(
200 RewritePatternSet &patterns, MLIRContext *context) {
201 patterns.add<IAddCarryFold>(context);
202 }
203
204 //===----------------------------------------------------------------------===//
205 // spirv.[S|U]MulExtended
206 //===----------------------------------------------------------------------===//
207
208 // We are required to use CompositeConstructOp to create a constant struct as
209 // they are not yet implemented as constant, hence we can not do so in a fold.
210 template <typename MulOp, bool IsSigned>
211 struct MulExtendedFold final : OpRewritePattern<MulOp> {
213
214 LogicalResult matchAndRewrite(MulOp op,
215 PatternRewriter &rewriter) const override {
216 Location loc = op.getLoc();
217 Value lhs = op.getOperand1();
218 Value rhs = op.getOperand2();
219 Type constituentType = lhs.getType();
220
221 // [su]mulextended (x, 0) = <0, 0>
222 if (matchPattern(rhs, m_Zero())) {
223 Value zero = spirv::ConstantOp::getZero(constituentType, loc, rewriter);
224 Value constituents[2] = {zero, zero};
225 rewriter.replaceOpWithNewOp<spirv::CompositeConstructOp>(op, op.getType(),
226 constituents);
227 return success();
228 }
229
230 // According to the SPIR-V spec:
231 //
232 // Result Type must be from OpTypeStruct. The struct must have two
233 // members...
234 //
235 // Member 0 of the result gets the low-order bits of the multiplication.
236 //
237 // Member 1 of the result gets the high-order bits of the multiplication.
238 Attribute lhsAttr;
239 Attribute rhsAttr;
240 if (!matchPattern(lhs, m_Constant(&lhsAttr)) ||
241 !matchPattern(rhs, m_Constant(&rhsAttr)))
242 return failure();
243
244 auto lowBits = constFoldBinaryOp<IntegerAttr>(
245 {lhsAttr, rhsAttr},
246 [](const APInt &a, const APInt &b) { return a * b; });
247
248 if (!lowBits)
249 return failure();
250
251 auto highBits = constFoldBinaryOp<IntegerAttr>(
252 {lhsAttr, rhsAttr}, [](const APInt &a, const APInt &b) {
253 if (IsSigned) {
254 return llvm::APIntOps::mulhs(a, b);
255 } else {
256 return llvm::APIntOps::mulhu(a, b);
257 }
258 });
259
260 if (!highBits)
261 return failure();
262
263 Value lowBitsVal =
264 rewriter.create<spirv::ConstantOp>(loc, constituentType, lowBits);
265
266 Value highBitsVal =
267 rewriter.create<spirv::ConstantOp>(loc, constituentType, highBits);
268
269 // Create empty struct
270 Value undef = rewriter.create<spirv::UndefOp>(loc, op.getType());
271 // Fill in lowBits at id 0
272 Value intermediate =
273 rewriter.create<spirv::CompositeInsertOp>(loc, lowBitsVal, undef, 0);
274 // Fill in highBits at id 1
275 rewriter.replaceOpWithNewOp<spirv::CompositeInsertOp>(op, highBitsVal,
276 intermediate, 1);
277 return success();
278 }
279 };
280
282 void spirv::SMulExtendedOp::getCanonicalizationPatterns(
283 RewritePatternSet &patterns, MLIRContext *context) {
284 patterns.add<SMulExtendedOpFold>(context);
285 }
286
287 struct UMulExtendedOpXOne final : OpRewritePattern<spirv::UMulExtendedOp> {
289
290 LogicalResult matchAndRewrite(spirv::UMulExtendedOp op,
291 PatternRewriter &rewriter) const override {
292 Location loc = op.getLoc();
293 Value lhs = op.getOperand1();
294 Value rhs = op.getOperand2();
295 Type constituentType = lhs.getType();
296
297 // umulextended (x, 1) = <x, 0>
298 if (matchPattern(rhs, m_One())) {
299 Value zero = spirv::ConstantOp::getZero(constituentType, loc, rewriter);
300 Value constituents[2] = {lhs, zero};
301 rewriter.replaceOpWithNewOp<spirv::CompositeConstructOp>(op, op.getType(),
302 constituents);
303 return success();
304 }
305
306 return failure();
307 }
308 };
309
311 void spirv::UMulExtendedOp::getCanonicalizationPatterns(
312 RewritePatternSet &patterns, MLIRContext *context) {
313 patterns.add<UMulExtendedOpFold, UMulExtendedOpXOne>(context);
314 }
315
316 //===----------------------------------------------------------------------===//
317 // spirv.UMod
318 //===----------------------------------------------------------------------===//
319
320 // Input:
321 // %0 = spirv.UMod %arg0, %const32 : i32
322 // %1 = spirv.UMod %0, %const4 : i32
323 // Output:
324 // %0 = spirv.UMod %arg0, %const32 : i32
325 // %1 = spirv.UMod %arg0, %const4 : i32
326
327 // The transformation is only applied if one divisor is a multiple of the other.
328
329 // TODO(https://github.com/llvm/llvm-project/issues/63174): Add support for vector constants
330 struct UModSimplification final : OpRewritePattern<spirv::UModOp> {
332
  LogicalResult matchAndRewrite(spirv::UModOp umodOp,
                                PatternRewriter &rewriter) const override {
    auto prevUMod = umodOp.getOperand(0).getDefiningOp<spirv::UModOp>();
    if (!prevUMod)
      return failure();

    IntegerAttr prevValue;
    IntegerAttr currValue;
    if (!matchPattern(prevUMod.getOperand(1), m_Constant(&prevValue)) ||
        !matchPattern(umodOp.getOperand(1), m_Constant(&currValue)))
      return failure();

    APInt prevConstValue = prevValue.getValue();
    APInt currConstValue = currValue.getValue();

    // Ensure that one divisor is a multiple of the other. If not, fail the
    // transformation.
    if (prevConstValue.urem(currConstValue) != 0 &&
        currConstValue.urem(prevConstValue) != 0)
      return failure();

    // The transformation is safe. Replace the existing UMod operation with a
    // new UMod operation, using the original dividend and the current divisor.
    rewriter.replaceOpWithNewOp<spirv::UModOp>(
        umodOp, umodOp.getType(), prevUMod.getOperand(0), umodOp.getOperand(1));

    return success();
  }
};

void spirv::UModOp::getCanonicalizationPatterns(RewritePatternSet &patterns,
                                                MLIRContext *context) {
  patterns.insert<UModSimplification>(context);
}

//===----------------------------------------------------------------------===//
// spirv.BitcastOp
//===----------------------------------------------------------------------===//

OpFoldResult spirv::BitcastOp::fold(FoldAdaptor /*adaptor*/) {
  Value curInput = getOperand();
  if (getType() == curInput.getType())
    return curInput;

  // Look through nested bitcasts.
  if (auto prevCast = curInput.getDefiningOp<spirv::BitcastOp>()) {
    Value prevInput = prevCast.getOperand();
    if (prevInput.getType() == getType())
      return prevInput;

    getOperandMutable().assign(prevInput);
    return getResult();
  }

  // TODO(kuhar): Consider constant-folding the operand attribute.
  return {};
}

//===----------------------------------------------------------------------===//
// spirv.CompositeExtractOp
//===----------------------------------------------------------------------===//

OpFoldResult spirv::CompositeExtractOp::fold(FoldAdaptor adaptor) {
  Value compositeOp = getComposite();

  while (auto insertOp =
             compositeOp.getDefiningOp<spirv::CompositeInsertOp>()) {
    if (getIndices() == insertOp.getIndices())
      return insertOp.getObject();
    compositeOp = insertOp.getComposite();
  }

  if (auto constructOp =
          compositeOp.getDefiningOp<spirv::CompositeConstructOp>()) {
    auto type = llvm::cast<spirv::CompositeType>(constructOp.getType());
    if (getIndices().size() == 1 &&
        constructOp.getConstituents().size() == type.getNumElements()) {
      auto i = llvm::cast<IntegerAttr>(*getIndices().begin());
      if (i.getValue().getSExtValue() <
          static_cast<int64_t>(constructOp.getConstituents().size()))
        return constructOp.getConstituents()[i.getValue().getSExtValue()];
    }
  }

  auto indexVector = llvm::map_to_vector(getIndices(), [](Attribute attr) {
    return static_cast<unsigned>(llvm::cast<IntegerAttr>(attr).getInt());
  });
  return extractCompositeElement(adaptor.getComposite(), indexVector);
}

//===----------------------------------------------------------------------===//
// spirv.Constant
//===----------------------------------------------------------------------===//

OpFoldResult spirv::ConstantOp::fold(FoldAdaptor /*adaptor*/) {
  return getValue();
}

//===----------------------------------------------------------------------===//
// spirv.IAdd
//===----------------------------------------------------------------------===//

OpFoldResult spirv::IAddOp::fold(FoldAdaptor adaptor) {
  // x + 0 = x
  if (matchPattern(getOperand2(), m_Zero()))
    return getOperand1();

  // According to the SPIR-V spec:
  //
  // The resulting value will equal the low-order N bits of the correct result
  // R, where N is the component width and R is computed with enough precision
  // to avoid overflow and underflow.
  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(),
      [](APInt a, const APInt &b) { return std::move(a) + b; });
}

//===----------------------------------------------------------------------===//
// spirv.IMul
//===----------------------------------------------------------------------===//

OpFoldResult spirv::IMulOp::fold(FoldAdaptor adaptor) {
  // x * 0 == 0
  if (matchPattern(getOperand2(), m_Zero()))
    return getOperand2();
  // x * 1 = x
  if (matchPattern(getOperand2(), m_One()))
    return getOperand1();

  // According to the SPIR-V spec:
  //
  // The resulting value will equal the low-order N bits of the correct result
  // R, where N is the component width and R is computed with enough precision
  // to avoid overflow and underflow.
  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(),
      [](const APInt &a, const APInt &b) { return a * b; });
}

//===----------------------------------------------------------------------===//
// spirv.ISub
//===----------------------------------------------------------------------===//

OpFoldResult spirv::ISubOp::fold(FoldAdaptor adaptor) {
  // x - x = 0
  if (getOperand1() == getOperand2())
    return Builder(getContext()).getIntegerAttr(getType(), 0);

  // According to the SPIR-V spec:
  //
  // The resulting value will equal the low-order N bits of the correct result
  // R, where N is the component width and R is computed with enough precision
  // to avoid overflow and underflow.
  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(),
      [](APInt a, const APInt &b) { return std::move(a) - b; });
}

//===----------------------------------------------------------------------===//
// spirv.SDiv
//===----------------------------------------------------------------------===//

OpFoldResult spirv::SDivOp::fold(FoldAdaptor adaptor) {
  // sdiv (x, 1) = x
  if (matchPattern(getOperand2(), m_One()))
    return getOperand1();

  // According to the SPIR-V spec:
  //
  // Signed-integer division of Operand 1 divided by Operand 2.
  // Results are computed per component. Behavior is undefined if Operand 2 is
  // 0. Behavior is undefined if Operand 2 is -1 and Operand 1 is the minimum
  // representable value for the operands' type, causing signed overflow.
  //
  // So don't fold during undefined behavior.
  bool div0OrOverflow = false;
  auto res = constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [&](const APInt &a, const APInt &b) {
        if (div0OrOverflow || isDivZeroOrOverflow(a, b)) {
          div0OrOverflow = true;
          return a;
        }
        return a.sdiv(b);
      });
  return div0OrOverflow ? Attribute() : res;
}

//===----------------------------------------------------------------------===//
// spirv.SMod
//===----------------------------------------------------------------------===//

OpFoldResult spirv::SModOp::fold(FoldAdaptor adaptor) {
  // smod (x, 1) = 0
  if (matchPattern(getOperand2(), m_One()))
    return Builder(getContext()).getZeroAttr(getType());

  // According to SPIR-V spec:
  //
  // Signed remainder operation for the remainder whose sign matches the sign
  // of Operand 2. Behavior is undefined if Operand 2 is 0. Behavior is
  // undefined if Operand 2 is -1 and Operand 1 is the minimum representable
  // value for the operands' type, causing signed overflow. Otherwise, the
  // result is the remainder r of Operand 1 divided by Operand 2 where if
  // r ≠ 0, the sign of r is the same as the sign of Operand 2.
  //
  // So don't fold during undefined behavior
  bool div0OrOverflow = false;
  auto res = constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [&](const APInt &a, const APInt &b) {
        if (div0OrOverflow || isDivZeroOrOverflow(a, b)) {
          div0OrOverflow = true;
          return a;
        }
        APInt c = a.abs().urem(b.abs());
        if (c.isZero())
          return c;
        if (b.isNegative()) {
          APInt zero = APInt::getZero(c.getBitWidth());
          return a.isNegative() ? (zero - c) : (b + c);
        }
        return a.isNegative() ? (b - c) : c;
      });
  return div0OrOverflow ? Attribute() : res;
}

//===----------------------------------------------------------------------===//
// spirv.SRem
//===----------------------------------------------------------------------===//

OpFoldResult spirv::SRemOp::fold(FoldAdaptor adaptor) {
  // x % 1 = 0
  if (matchPattern(getOperand2(), m_One()))
    return Builder(getContext()).getZeroAttr(getType());

  // According to SPIR-V spec:
  //
  // Signed remainder operation for the remainder whose sign matches the sign
  // of Operand 1. Behavior is undefined if Operand 2 is 0. Behavior is
  // undefined if Operand 2 is -1 and Operand 1 is the minimum representable
  // value for the operands' type, causing signed overflow. Otherwise, the
  // result is the remainder r of Operand 1 divided by Operand 2 where if
  // r ≠ 0, the sign of r is the same as the sign of Operand 1.

  // Don't fold if it would do undefined behavior.
  bool div0OrOverflow = false;
  auto res = constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [&](APInt a, const APInt &b) {
        if (div0OrOverflow || isDivZeroOrOverflow(a, b)) {
          div0OrOverflow = true;
          return a;
        }
        return a.srem(b);
      });
  return div0OrOverflow ? Attribute() : res;
}

//===----------------------------------------------------------------------===//
// spirv.UDiv
//===----------------------------------------------------------------------===//

OpFoldResult spirv::UDivOp::fold(FoldAdaptor adaptor) {
  // udiv (x, 1) = x
  if (matchPattern(getOperand2(), m_One()))
    return getOperand1();

  // According to the SPIR-V spec:
  //
  // Unsigned-integer division of Operand 1 divided by Operand 2. Behavior is
  // undefined if Operand 2 is 0.
  //
  // So don't fold during undefined behavior.
  bool div0 = false;
  auto res = constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [&](const APInt &a, const APInt &b) {
        if (div0 || b.isZero()) {
          div0 = true;
          return a;
        }
        return a.udiv(b);
      });
  return div0 ? Attribute() : res;
}

//===----------------------------------------------------------------------===//
// spirv.UMod
//===----------------------------------------------------------------------===//

OpFoldResult spirv::UModOp::fold(FoldAdaptor adaptor) {
  // umod (x, 1) = 0
  if (matchPattern(getOperand2(), m_One()))
    return Builder(getContext()).getZeroAttr(getType());

  // According to the SPIR-V spec:
  //
  // Unsigned modulo operation of Operand 1 modulo Operand 2. Behavior is
  // undefined if Operand 2 is 0.
  //
  // So don't fold during undefined behavior.
  bool div0 = false;
  auto res = constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [&](const APInt &a, const APInt &b) {
        if (div0 || b.isZero()) {
          div0 = true;
          return a;
        }
        return a.urem(b);
      });
  return div0 ? Attribute() : res;
}

//===----------------------------------------------------------------------===//
// spirv.SNegate
//===----------------------------------------------------------------------===//

OpFoldResult spirv::SNegateOp::fold(FoldAdaptor adaptor) {
  // -(-x) = 0 - (0 - x) = x
  auto op = getOperand();
  if (auto negateOp = op.getDefiningOp<spirv::SNegateOp>())
    return negateOp->getOperand(0);

  // According to the SPIR-V spec:
  //
  // Signed-integer subtract of Operand from zero.
  return constFoldUnaryOp<IntegerAttr>(
      adaptor.getOperands(), [](const APInt &a) {
        APInt zero = APInt::getZero(a.getBitWidth());
        return zero - a;
      });
}

//===----------------------------------------------------------------------===//
// spirv.NotOp
//===----------------------------------------------------------------------===//

OpFoldResult spirv::NotOp::fold(spirv::NotOp::FoldAdaptor adaptor) {
  // !(!x) = x
  auto op = getOperand();
  if (auto notOp = op.getDefiningOp<spirv::NotOp>())
    return notOp->getOperand(0);

  // According to the SPIR-V spec:
  //
  // Complement the bits of Operand.
  return constFoldUnaryOp<IntegerAttr>(adaptor.getOperands(), [&](APInt a) {
    a.flipAllBits();
    return a;
  });
}

//===----------------------------------------------------------------------===//
// spirv.LogicalAnd
//===----------------------------------------------------------------------===//

OpFoldResult spirv::LogicalAndOp::fold(FoldAdaptor adaptor) {
  if (std::optional<bool> rhs =
          getScalarOrSplatBoolAttr(adaptor.getOperand2())) {
    // x && true = x
    if (*rhs)
      return getOperand1();

    // x && false = false
    if (!*rhs)
      return adaptor.getOperand2();
  }

  return Attribute();
}

//===----------------------------------------------------------------------===//
// spirv.LogicalEqualOp
//===----------------------------------------------------------------------===//

OpFoldResult
spirv::LogicalEqualOp::fold(spirv::LogicalEqualOp::FoldAdaptor adaptor) {
  // x == x -> true
  if (getOperand1() == getOperand2()) {
    auto trueAttr = BoolAttr::get(getContext(), true);
    if (isa<IntegerType>(getType()))
      return trueAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, trueAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [](const APInt &a, const APInt &b) {
        return a == b ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.LogicalNotEqualOp
//===----------------------------------------------------------------------===//

OpFoldResult spirv::LogicalNotEqualOp::fold(FoldAdaptor adaptor) {
  if (std::optional<bool> rhs =
          getScalarOrSplatBoolAttr(adaptor.getOperand2())) {
    // x != false -> x
    if (!rhs.value())
      return getOperand1();
  }

  // x == x -> false
  if (getOperand1() == getOperand2()) {
    auto falseAttr = BoolAttr::get(getContext(), false);
    if (isa<IntegerType>(getType()))
      return falseAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, falseAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [](const APInt &a, const APInt &b) {
        return a == b ? APInt::getZero(1) : APInt::getAllOnes(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.LogicalNot
//===----------------------------------------------------------------------===//

OpFoldResult spirv::LogicalNotOp::fold(FoldAdaptor adaptor) {
  // !(!x) = x
  auto op = getOperand();
  if (auto notOp = op.getDefiningOp<spirv::LogicalNotOp>())
    return notOp->getOperand(0);

  // According to the SPIR-V spec:
  //
  // Complement the bits of Operand.
  return constFoldUnaryOp<IntegerAttr>(adaptor.getOperands(),
                                       [](const APInt &a) {
                                         APInt zero = APInt::getZero(1);
                                         return a == 1 ? zero : (zero + 1);
                                       });
}

void spirv::LogicalNotOp::getCanonicalizationPatterns(
    RewritePatternSet &results, MLIRContext *context) {
  results
      .add<ConvertLogicalNotOfIEqual, ConvertLogicalNotOfINotEqual,
           ConvertLogicalNotOfLogicalEqual, ConvertLogicalNotOfLogicalNotEqual>(
          context);
}

//===----------------------------------------------------------------------===//
// spirv.LogicalOr
//===----------------------------------------------------------------------===//

OpFoldResult spirv::LogicalOrOp::fold(FoldAdaptor adaptor) {
  if (auto rhs = getScalarOrSplatBoolAttr(adaptor.getOperand2())) {
    if (*rhs) {
      // x || true = true
      return adaptor.getOperand2();
    }

    if (!*rhs) {
      // x || false = x
      return getOperand1();
    }
  }

  return Attribute();
}

//===----------------------------------------------------------------------===//
// spirv.SelectOp
//===----------------------------------------------------------------------===//

OpFoldResult spirv::SelectOp::fold(FoldAdaptor adaptor) {
  // spirv.Select _ x x -> x
  Value trueVals = getTrueValue();
  Value falseVals = getFalseValue();
  if (trueVals == falseVals)
    return trueVals;

  ArrayRef<Attribute> operands = adaptor.getOperands();

  // spirv.Select true x y -> x
  // spirv.Select false x y -> y
  if (auto boolAttr = getScalarOrSplatBoolAttr(operands[0]))
    return *boolAttr ? trueVals : falseVals;

  // Check that all the operands are constant
  if (!operands[0] || !operands[1] || !operands[2])
    return Attribute();

  // Note: getScalarOrSplatBoolAttr will always return a boolAttr if we are in
  // the scalar case. Hence, we are only required to consider the case of
  // DenseElementsAttr in foldSelectOp.
  auto condAttrs = dyn_cast<DenseElementsAttr>(operands[0]);
  auto trueAttrs = dyn_cast<DenseElementsAttr>(operands[1]);
  auto falseAttrs = dyn_cast<DenseElementsAttr>(operands[2]);
  if (!condAttrs || !trueAttrs || !falseAttrs)
    return Attribute();

  auto elementResults = llvm::to_vector<4>(trueAttrs.getValues<Attribute>());
  auto iters = llvm::zip_equal(elementResults, condAttrs.getValues<BoolAttr>(),
                               falseAttrs.getValues<Attribute>());
  for (auto [result, cond, falseRes] : iters) {
    if (!cond.getValue())
      result = falseRes;
  }

  auto resultType = trueAttrs.getType();
  return DenseElementsAttr::get(cast<ShapedType>(resultType), elementResults);
}

//===----------------------------------------------------------------------===//
// spirv.IEqualOp
//===----------------------------------------------------------------------===//

OpFoldResult spirv::IEqualOp::fold(spirv::IEqualOp::FoldAdaptor adaptor) {
  // x == x -> true
  if (getOperand1() == getOperand2()) {
    auto trueAttr = BoolAttr::get(getContext(), true);
    if (isa<IntegerType>(getType()))
      return trueAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, trueAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a == b ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.INotEqualOp
//===----------------------------------------------------------------------===//

OpFoldResult spirv::INotEqualOp::fold(spirv::INotEqualOp::FoldAdaptor adaptor) {
  // x == x -> false
  if (getOperand1() == getOperand2()) {
    auto falseAttr = BoolAttr::get(getContext(), false);
    if (isa<IntegerType>(getType()))
      return falseAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, falseAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a == b ? APInt::getZero(1) : APInt::getAllOnes(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.SGreaterThan
//===----------------------------------------------------------------------===//

OpFoldResult
spirv::SGreaterThanOp::fold(spirv::SGreaterThanOp::FoldAdaptor adaptor) {
  // x == x -> false
  if (getOperand1() == getOperand2()) {
    auto falseAttr = BoolAttr::get(getContext(), false);
    if (isa<IntegerType>(getType()))
      return falseAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, falseAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a.sgt(b) ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.SGreaterThanEqual
//===----------------------------------------------------------------------===//

OpFoldResult spirv::SGreaterThanEqualOp::fold(
    spirv::SGreaterThanEqualOp::FoldAdaptor adaptor) {
  // x == x -> true
  if (getOperand1() == getOperand2()) {
    auto trueAttr = BoolAttr::get(getContext(), true);
    if (isa<IntegerType>(getType()))
      return trueAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, trueAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a.sge(b) ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.UGreaterThan
//===----------------------------------------------------------------------===//

OpFoldResult
spirv::UGreaterThanOp::fold(spirv::UGreaterThanOp::FoldAdaptor adaptor) {
  // x == x -> false
  if (getOperand1() == getOperand2()) {
    auto falseAttr = BoolAttr::get(getContext(), false);
    if (isa<IntegerType>(getType()))
      return falseAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, falseAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a.ugt(b) ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.UGreaterThanEqual
//===----------------------------------------------------------------------===//

OpFoldResult spirv::UGreaterThanEqualOp::fold(
    spirv::UGreaterThanEqualOp::FoldAdaptor adaptor) {
  // x == x -> true
  if (getOperand1() == getOperand2()) {
    auto trueAttr = BoolAttr::get(getContext(), true);
    if (isa<IntegerType>(getType()))
      return trueAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, trueAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a.uge(b) ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.SLessThan
//===----------------------------------------------------------------------===//

OpFoldResult spirv::SLessThanOp::fold(spirv::SLessThanOp::FoldAdaptor adaptor) {
  // x == x -> false
  if (getOperand1() == getOperand2()) {
    auto falseAttr = BoolAttr::get(getContext(), false);
    if (isa<IntegerType>(getType()))
      return falseAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, falseAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a.slt(b) ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.SLessThanEqual
//===----------------------------------------------------------------------===//

OpFoldResult
spirv::SLessThanEqualOp::fold(spirv::SLessThanEqualOp::FoldAdaptor adaptor) {
  // x == x -> true
  if (getOperand1() == getOperand2()) {
    auto trueAttr = BoolAttr::get(getContext(), true);
    if (isa<IntegerType>(getType()))
      return trueAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, trueAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a.sle(b) ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.ULessThan
//===----------------------------------------------------------------------===//

OpFoldResult spirv::ULessThanOp::fold(spirv::ULessThanOp::FoldAdaptor adaptor) {
  // x == x -> false
  if (getOperand1() == getOperand2()) {
    auto falseAttr = BoolAttr::get(getContext(), false);
    if (isa<IntegerType>(getType()))
      return falseAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, falseAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a.ult(b) ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.ULessThanEqual
//===----------------------------------------------------------------------===//

OpFoldResult
spirv::ULessThanEqualOp::fold(spirv::ULessThanEqualOp::FoldAdaptor adaptor) {
  // x == x -> true
  if (getOperand1() == getOperand2()) {
    auto trueAttr = BoolAttr::get(getContext(), true);
    if (isa<IntegerType>(getType()))
      return trueAttr;
    if (auto vecTy = dyn_cast<VectorType>(getType()))
      return SplatElementsAttr::get(vecTy, trueAttr);
  }

  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), getType(), [](const APInt &a, const APInt &b) {
        return a.ule(b) ? APInt::getAllOnes(1) : APInt::getZero(1);
      });
}

//===----------------------------------------------------------------------===//
// spirv.ShiftLeftLogical
//===----------------------------------------------------------------------===//

OpFoldResult spirv::ShiftLeftLogicalOp::fold(
    spirv::ShiftLeftLogicalOp::FoldAdaptor adaptor) {
  // x << 0 -> x
  if (matchPattern(adaptor.getOperand2(), m_Zero())) {
    return getOperand1();
  }

  // Unfortunately due to below undefined behaviour can't fold 0 for Base.

  // Results are computed per component, and within each component, per bit...
  //
  // The result is undefined if Shift is greater than or equal to the bit width
  // of the components of Base.
  //
  // So we can use the APInt << method, but don't fold if undefined behaviour.
  bool shiftToLarge = false;
  auto res = constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [&](const APInt &a, const APInt &b) {
        if (shiftToLarge || b.uge(a.getBitWidth())) {
          shiftToLarge = true;
          return a;
        }
        return a << b;
      });
  return shiftToLarge ? Attribute() : res;
}

//===----------------------------------------------------------------------===//
// spirv.ShiftRightArithmetic
//===----------------------------------------------------------------------===//

OpFoldResult spirv::ShiftRightArithmeticOp::fold(
    spirv::ShiftRightArithmeticOp::FoldAdaptor adaptor) {
  // x >> 0 -> x
  if (matchPattern(adaptor.getOperand2(), m_Zero())) {
    return getOperand1();
  }

  // Unfortunately due to below undefined behaviour can't fold 0, -1 for Base.

  // Results are computed per component, and within each component, per bit...
  //
  // The result is undefined if Shift is greater than or equal to the bit width
  // of the components of Base.
  //
  // So we can use the APInt ashr method, but don't fold if undefined behaviour.
  bool shiftToLarge = false;
  auto res = constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [&](const APInt &a, const APInt &b) {
        if (shiftToLarge || b.uge(a.getBitWidth())) {
          shiftToLarge = true;
          return a;
        }
        return a.ashr(b);
      });
  return shiftToLarge ? Attribute() : res;
}

//===----------------------------------------------------------------------===//
// spirv.ShiftRightLogical
//===----------------------------------------------------------------------===//

OpFoldResult spirv::ShiftRightLogicalOp::fold(
    spirv::ShiftRightLogicalOp::FoldAdaptor adaptor) {
  // x >> 0 -> x
  if (matchPattern(adaptor.getOperand2(), m_Zero())) {
    return getOperand1();
  }

  // Unfortunately due to below undefined behaviour can't fold 0 for Base.

  // Results are computed per component, and within each component, per bit...
  //
  // The result is undefined if Shift is greater than or equal to the bit width
  // of the components of Base.
  //
  // So we can use the APInt lshr method, but don't fold if undefined behaviour.
  bool shiftToLarge = false;
  auto res = constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(), [&](const APInt &a, const APInt &b) {
        if (shiftToLarge || b.uge(a.getBitWidth())) {
          shiftToLarge = true;
          return a;
        }
        return a.lshr(b);
      });
  return shiftToLarge ? Attribute() : res;
}

//===----------------------------------------------------------------------===//
// spirv.BitwiseAndOp
//===----------------------------------------------------------------------===//

OpFoldResult
spirv::BitwiseAndOp::fold(spirv::BitwiseAndOp::FoldAdaptor adaptor) {
  // x & x -> x
  if (getOperand1() == getOperand2()) {
    return getOperand1();
  }

  APInt rhsMask;
  if (matchPattern(adaptor.getOperand2(), m_ConstantInt(&rhsMask))) {
    // x & 0 -> 0
    if (rhsMask.isZero())
      return getOperand2();

    // x & <all ones> -> x
    if (rhsMask.isAllOnes())
      return getOperand1();

    // (UConvert x : iN to iK) & <mask with N low bits set> -> UConvert x
    if (auto zext = getOperand1().getDefiningOp<spirv::UConvertOp>()) {
      int valueBits =
          getElementTypeOrSelf(zext.getOperand()).getIntOrFloatBitWidth();
      if (rhsMask.zextOrTrunc(valueBits).isAllOnes())
        return getOperand1();
    }
  }

  // According to the SPIR-V spec:
  //
  // Type is a scalar or vector of integer type.
  // Results are computed per component, and within each component, per bit.
  // So we can use the APInt & method.
  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(),
      [](const APInt &a, const APInt &b) { return a & b; });
}

//===----------------------------------------------------------------------===//
// spirv.BitwiseOrOp
//===----------------------------------------------------------------------===//

OpFoldResult spirv::BitwiseOrOp::fold(spirv::BitwiseOrOp::FoldAdaptor adaptor) {
  // x | x -> x
  if (getOperand1() == getOperand2()) {
    return getOperand1();
  }

  APInt rhsMask;
  if (matchPattern(adaptor.getOperand2(), m_ConstantInt(&rhsMask))) {
    // x | 0 -> x
    if (rhsMask.isZero())
      return getOperand1();

    // x | <all ones> -> <all ones>
    if (rhsMask.isAllOnes())
      return getOperand2();
  }

  // According to the SPIR-V spec:
  //
  // Type is a scalar or vector of integer type.
  // Results are computed per component, and within each component, per bit.
  // So we can use the APInt | method.
  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(),
      [](const APInt &a, const APInt &b) { return a | b; });
}

//===----------------------------------------------------------------------===//
// spirv.BitwiseXorOp
//===----------------------------------------------------------------------===//

OpFoldResult
spirv::BitwiseXorOp::fold(spirv::BitwiseXorOp::FoldAdaptor adaptor) {
  // x ^ 0 -> x
  if (matchPattern(adaptor.getOperand2(), m_Zero())) {
    return getOperand1();
  }

  // x ^ x -> 0
  if (getOperand1() == getOperand2())
    return Builder(getContext()).getZeroAttr(getType());

  // According to the SPIR-V spec:
  //
  // Type is a scalar or vector of integer type.
  // Results are computed per component, and within each component, per bit.
  // So we can use the APInt ^ method.
  return constFoldBinaryOp<IntegerAttr>(
      adaptor.getOperands(),
      [](const APInt &a, const APInt &b) { return a ^ b; });
}

//===----------------------------------------------------------------------===//
// spirv.mlir.selection
//===----------------------------------------------------------------------===//

namespace {
// Blocks from the given `spirv.mlir.selection` operation must satisfy the
// following layout:
//
//     +--------------------------------------------------+
//     | header block                                     |
//     | spirv.BranchConditionalOp %cond, ^case0, ^case1  |
//     +--------------------------------------------------+
//                          /   \
//                           ...
//
//
//   +---------------------------+   +---------------------------+
//   | case #0                   |   | case #1                   |
//   | spirv.Store %ptr %value0  |   | spirv.Store %ptr %value1  |
//   | spirv.Branch ^merge       |   | spirv.Branch ^merge       |
//   +---------------------------+   +---------------------------+
//
//
//                           ...
//                          \   /
//                            v
//                     +-------------+
//                     | merge block |
//                     +-------------+
//
struct ConvertSelectionOpToSelect final : OpRewritePattern<spirv::SelectionOp> {
  using OpRewritePattern::OpRewritePattern;

  LogicalResult matchAndRewrite(spirv::SelectionOp selectionOp,
                                PatternRewriter &rewriter) const override {
    Operation *op = selectionOp.getOperation();
    Region &body = op->getRegion(0);
    // Verifier allows an empty region for `spirv.mlir.selection`.
    if (body.empty()) {
      return failure();
    }

    // Check that region consists of 4 blocks:
    // header block, `true` block, `false` block and merge block.
    if (llvm::range_size(body) != 4) {
      return failure();
    }

    Block *headerBlock = selectionOp.getHeaderBlock();
    if (!onlyContainsBranchConditionalOp(headerBlock)) {
      return failure();
    }

    auto brConditionalOp =
        cast<spirv::BranchConditionalOp>(headerBlock->front());

    Block *trueBlock = brConditionalOp.getSuccessor(0);
    Block *falseBlock = brConditionalOp.getSuccessor(1);
    Block *mergeBlock = selectionOp.getMergeBlock();

    if (failed(canCanonicalizeSelection(trueBlock, falseBlock, mergeBlock)))
      return failure();

    Value trueValue = getSrcValue(trueBlock);
    Value falseValue = getSrcValue(falseBlock);
    Value ptrValue = getDstPtr(trueBlock);
    auto storeOpAttributes =
        cast<spirv::StoreOp>(trueBlock->front())->getAttrs();

    auto selectOp = rewriter.create<spirv::SelectOp>(
        selectionOp.getLoc(), trueValue.getType(),
        brConditionalOp.getCondition(), trueValue, falseValue);
    rewriter.create<spirv::StoreOp>(selectOp.getLoc(), ptrValue,
                                    selectOp.getResult(), storeOpAttributes);

    // `spirv.mlir.selection` is not needed anymore.
    rewriter.eraseOp(op);
    return success();
  }

private:
  // Checks that given blocks follow the following rules:
  // 1. Each conditional block consists of two operations, the first operation
  //    is a `spirv.Store` and the last operation is a `spirv.Branch`.
  // 2. Each `spirv.Store` uses the same pointer and the same memory
  //    attributes.
  // 3. A control flow goes into the given merge block from the given
  //    conditional blocks.
  LogicalResult canCanonicalizeSelection(Block *trueBlock, Block *falseBlock,
                                         Block *mergeBlock) const;

  bool onlyContainsBranchConditionalOp(Block *block) const {
    return llvm::hasSingleElement(*block) &&
           isa<spirv::BranchConditionalOp>(block->front());
  }

  bool isSameAttrList(spirv::StoreOp lhs, spirv::StoreOp rhs) const {
    return lhs->getDiscardableAttrDictionary() ==
               rhs->getDiscardableAttrDictionary() &&
           lhs.getProperties() == rhs.getProperties();
  }

  // Returns a source value for the given block.
  Value getSrcValue(Block *block) const {
    auto storeOp = cast<spirv::StoreOp>(block->front());
    return storeOp.getValue();
  }

  // Returns a destination value for the given block.
  Value getDstPtr(Block *block) const {
    auto storeOp = cast<spirv::StoreOp>(block->front());
    return storeOp.getPtr();
  }
};

LogicalResult ConvertSelectionOpToSelect::canCanonicalizeSelection(
    Block *trueBlock, Block *falseBlock, Block *mergeBlock) const {
  // Each block must consist of 2 operations.
  if (llvm::range_size(*trueBlock) != 2 || llvm::range_size(*falseBlock) != 2) {
    return failure();
  }

  auto trueBrStoreOp = dyn_cast<spirv::StoreOp>(trueBlock->front());
  auto trueBrBranchOp =
      dyn_cast<spirv::BranchOp>(*std::next(trueBlock->begin()));
  auto falseBrStoreOp = dyn_cast<spirv::StoreOp>(falseBlock->front());
  auto falseBrBranchOp =
      dyn_cast<spirv::BranchOp>(*std::next(falseBlock->begin()));

  if (!trueBrStoreOp || !trueBrBranchOp || !falseBrStoreOp ||
      !falseBrBranchOp) {
    return failure();
  }

  // Checks that given type is valid for `spirv.SelectOp`.
  // According to SPIR-V spec:
  // "Before version 1.4, Result Type must be a pointer, scalar, or vector.
  // Starting with version 1.4, Result Type can additionally be a composite
  // type other than a vector."
  bool isScalarOrVector =
      llvm::cast<spirv::SPIRVType>(trueBrStoreOp.getValue().getType())
          .isScalarOrVector();

  // Check that each `spirv.Store` uses the same pointer, memory access
  // attributes and a valid type of the value.
  if ((trueBrStoreOp.getPtr() != falseBrStoreOp.getPtr()) ||
      !isSameAttrList(trueBrStoreOp, falseBrStoreOp) || !isScalarOrVector) {
    return failure();
  }

  if ((trueBrBranchOp->getSuccessor(0) != mergeBlock) ||
      (falseBrBranchOp->getSuccessor(0) != mergeBlock)) {
    return failure();
  }

  return success();
}
} // namespace

void spirv::SelectionOp::getCanonicalizationPatterns(RewritePatternSet &results,
                                                     MLIRContext *context) {
  results.add<ConvertSelectionOpToSelect>(context);
}
How to Build a Vue Front End for a Headless CMS
In this guide, we’ll learn how to build a modern blog website using Vue.js and GraphCMS, a headless CMS platform.
If you’re looking to start a quick blog today, my recommendation is to go straight to WordPress.
But what if you’re a media powerhouse and you want to deliver your content as fast as possible to multiple devices? You’ll probably also need to integrate your content with ads and other third-party services. Well, you could do that with WordPress, but you’ll come across a few problems with that platform.
1. You’ll need to install a plugin to implement additional features. The more plugins you install, the slower your website becomes.
2. PHP is quite slow compared to most JavaScript web frameworks. From a developer’s perspective, it’s much easier and faster to implement custom features on a JavaScript-powered front end.
JavaScript offers superior performance to PHP in browser loading tests. In addition, modern JavaScript and its ecosystem provides a far more pleasant development experience when it comes to building new web experiences fast.
So there’s been a growth of headless CMS solutions — which are simply back ends for managing content. With this approach, developers can focus on building fast and interactive front ends using a JavaScript framework of their choice. Customizing a JavaScript-powered front end is much easier than making changes on a WordPress site.
GraphCMS differs from most Headless CMS platforms in that, instead of delivering content via REST, it does so via GraphQL. This new technology is superior to REST, as it allows us to construct queries that touch on data belonging to multiple models in a single request.
Consider the following model schema:
Post
• id: Number
• title: String
• content: String
• comments: array of Comments
Comment
• id: Number
• name: String
• message: String
The above models have a one(Post)-to-many(Comments) relationship. Let’s see how we can fetch a single Post record attached with all linked Comment records.
If the data is in a relational database, you have to construct either one inefficient SQL statement or two separate SQL statements to fetch the data cleanly. If the data is stored in a NoSQL database, you can use a modern ORM like Vuex ORM to fetch the data easily for you, like this:
const post = Post.query()
.with('comments')
.find(1);
Quite simple! You can easily pass this data via REST to the intended client. But here’s the problem: whenever the data requirement changes at the client end, you’ll be forced to go back to your back-end code to either update your existing API endpoint, or create a new one that provides the required data set. This back and forth process is tiring and repetitive.
What if, at the client level, you could just ask for the data you need and the back end will provide it for you, without you doing extra work? Well, that’s what GraphQL is for.
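To make that concrete, here's a sketch of the kind of GraphQL query you could send for the schema above, asking for one post together with all of its comments in a single request. Treat the exact argument shape as illustrative, since it depends on how your API generates filters:
query {
  post(where: { id: 1 }) {
    title
    content
    comments {
      name
      message
    }
  }
}
The response mirrors the query exactly, so when the client needs more or fewer fields, you change the query rather than the back end.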
Prerequisites
Before we begin, I’d like to note that this is a guide for intermediate to advanced users. I won’t be going over the basics, but rather will show you how to quickly build a Vue.js blog using GraphCMS as the back end. You’ll need to be proficient in the following areas:
• ES6 and ES7 JavaScript
• Vue.js (using CLI version 3)
• GraphQL
That’s all you need to know to get started with this tutorial. Also, a background in using REST will be great, as I’ll be referencing this a lot. If you’d like a refresher, this article might help: “REST 2.0 Is Here and Its Name Is GraphQL”.
About the Project
We’ll build a very simple blog application with a basic comment system. Below are the links you can visit to check out the completed project:
Please note that a READ-ONLY token has been used in the demo and consequently the comments system won’t work. You’ll need to supply your OPEN permission token and endpoint as per the instructions in this tutorial for it to work.
Create GraphCMS Project Database
Head over to the GraphCMS website and click the “Start Building for Free” button. You’ll be taken to their signup page.
Signing up to GraphCMS
Sign up using your preferred method. Once you’ve completed the account authentication and verification process, you should be able to access the main dashboard.
The GraphCMS main dashboard
In the above example, I’ve already created a project called “BlogDB”. Go ahead and create a new one, and call it whatever you want. After you’ve entered the name, you can leave the rest of the fields in their defaults. Click Create and you’ll be taken to their project plan.
GraphCMS plans
For the purposes of this tutorial, select the free Developer plan then click Continue. You’ll be taken to the project’s dashboard, which looks something like this:
The GraphCMS project dashboard
Go to the Schema tab. We’re going to create the following models, each with the following fields:
Category
• name: Single line text, required, unique
Post
• slug: Single line text, required, unique
• title: Single line text, required, unique
• content: Multi line text
Comment
• name: Single line text, required
• message: Multi line text, required
Use the Create Model button to create models. On the right side, you should find a hidden panel for Fields, which is activated by clicking the Fields button. Drag the appropriate field type onto the model’s panel. You will be presented with a form to fill in your field’s attributes. Do note at the bottom there’s a pink button labeled Advanced. Clicking it will expand the panel to give you more field attributes you can enable.
Field attributes in the Advanced tab
Next, you’ll need to add the relationship between models as follows:
• Post > Categories (many-to-many)
• Post > Comments (one-to-many)
Use the Reference field to define this relationship. You can add this field to any side; GraphCMS will automatically create the opposite relation field in the referenced model. When you’ve completed defining the models, you should have something like this:
GraphCMS models
You’ve now completed the first part. Let’s now provide some data to our models.
GraphQL Data Migration
To add content to your models, you can simply click the Content tab in the project dashboard where you can create new records for each of your models. However, if you find this to be a slow method, you’ll be happy to know that I’ve created a GraphCMS migration tool that copies data from CSV files and uploads them to your GraphCMS database. You can find the project here in this GitHub repository. To start using the project, simply download it into your workspace like this:
git clone git@github.com:sitepoint-editors/graphcsms-data-migration.git
cd graphcsms-data-migration
npm install
Next, you’ll need to grab your GraphCMS project’s API endpoint and token from the dashboard’s Settings page. You’ll need to create a new token. For the permission level, use OPEN, as this will allow the tool to perform READ and WRITE operations on your GraphCMS database. Create a file called .env and put it at the root of the project:
ENDPOINT=<Put api endpoint here>
TOKEN=<Put token with OPEN permission here>
Next, you may need to populate the CSV files in the data folder with your own. Here’s some sample data that has been used:
// Categories.csv
name
Featured
Food
Fashion
Beauty
// Posts.csv
title,slug,content,categories
Food Post 1,food-post-1,Breeze through Thanksgiving by making this Instant Pot orange cranberry sauce,Food|Featured
Food Post 2,food-post-2,This is my second food post,Food
Food Post 3,food-post-3,This is my last and final food post,Food
Fashion Post 1,fashion-post-1,This is truly my very first fashion post,Fashion|Featured
Fashion Post 2,fashion-post-2,This is my second fashion post,Fashion
Fashion Post 3,fashion-post-3,This is my last and final fashion post,Fashion
Beauty Post 1,Beauty-post-1,This is truly my very first Beauty post,Beauty|Featured
Beauty Post 2,Beauty-post-2,This is my second beauty post,Beauty
You can change the content if you want. Make sure not to touch the top row, as otherwise you’ll change the field names. Please note, for the column categories, I’ve used the pipe | character as a delimiter.
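For what it's worth, splitting such a cell back into individual category names is a one-liner in JavaScript, which is roughly what the migration tool does internally (assumed behavior, not part of its documented API):
const categories = "Food|Featured".split("|"); // ["Food", "Featured"]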
To upload the CSV data to your GraphCMS database, execute the following commands in this order:
npm run categories
npm run posts
Each script will print out records that have uploaded successfully. The reason we uploaded categories first is so that the post records can link successfully to existing category records, as the sketch below illustrates.
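Here's a hedged sketch of the kind of mutation such a post upload boils down to. It assumes name can be used to identify a category uniquely, which holds because we marked that field unique when building the schema:
mutation {
  createPost(data: {
    title: "Food Post 1",
    slug: "food-post-1",
    content: "Breeze through Thanksgiving by making this Instant Pot orange cranberry sauce",
    categories: { connect: [{ name: "Food" }, { name: "Featured" }] },
    status: PUBLISHED
  }) {
    id
  }
}
If the category records didn't exist yet, the connect step would have nothing to attach to, hence the upload order.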
If you want to clean out your database, you can run the following command:
npm run reset
This script will delete all your model’s contents. You’ll get a report indicating how many records were deleted for each model.
I hope you find the tool handy. Go back to the dashboard to confirm that data for the Posts and Categories have successfully been uploaded.
With the back end taken care of, let’s start building our front-end blog interface.
Building the Blog’s Front End Using Vue.js
As mentioned earlier, we are going to build a very simple blog application powered by a GraphCMS database back end. Launch a terminal and navigate to your workspace.
If you haven’t got Vue CLI installed, do that now:
npm install -g @vue/cli
Then create a new project:
vue create vue-graphcms
Choose to manually select features, then select the following options:
• Features: Babel, Router
• Router History Mode: Y
• ESLint with error prevention only
• Lint on save
• Config file placement: Dedicated Config Files
• Save preset: your choice
Once the project creation process is complete, change into the project directory and install the following dependencies:
npm install bootstrap-vue axios
To set up Bootstrap-Vue in our project, simply open src/main.js and add the following code:
import BootstrapVue from "bootstrap-vue";
import "bootstrap/dist/css/bootstrap.css";
import "bootstrap-vue/dist/bootstrap-vue.css";
Vue.config.productionTip = false;
Vue.use(BootstrapVue);
Next, we need to start laying down our project structure. In the src/components folder, delete the existing files and create these new ones:
• CommentForm.vue
• CommentList.vue
• Post.vue
• PostList.vue
In the src/views folder, delete About.vue and create a new file called PostView.vue. As seen from the demo, we’ll have several category pages each displaying a list of posts filtered by category. Technically, there will only be one page that will display a different list of posts based on an active route name. The PostList component will filter posts based on the current route.
Let’s first set up the routes. Open src/router.js and replace the existing code with this:
import Vue from "vue";
import Router from "vue-router";
import Home from "./views/Home.vue";
import Post from "./views/PostView.vue";
Vue.use(Router);
export default new Router({
mode: "history",
base: process.env.BASE_URL,
linkActiveClass: "active",
routes: [
{
path: "/",
name: "Featured",
component: Home
},
{
path: "/food",
name: "Food",
component: Home
},
{
path: "/fashion",
name: "Fashion",
component: Home
},
{
path: "/beauty",
name: "Beauty",
component: Home
},
{
path: "/post/:slug",
name: "Post",
component: Post
}
]
});
Now that we have our routes, let’s set up our navigation menu. Open src/App.vue and replace the existing code with this:
<template>
<div id="app">
<b-navbar toggleable="md" type="dark" variant="info">
<b-navbar-toggle target="nav_collapse"></b-navbar-toggle>
<b-navbar-brand href="#">GraphCMS Vue</b-navbar-brand>
<b-collapse is-nav id="nav_collapse">
<b-navbar-nav>
<router-link class="nav-link" to="/" exact>Home</router-link>
<router-link class="nav-link" to="/food">Food</router-link>
<router-link class="nav-link" to="/fashion">Fashion</router-link>
<router-link class="nav-link" to="/beauty">Beauty</router-link>
</b-navbar-nav>
</b-collapse>
</b-navbar>
<b-container>
<router-view/>
</b-container>
</div>
</template>
This will add a nav bar to the top of our site with links to our different categories.
Save the file and update the following files accordingly:
src/views/Home.vue
<template>
<div class="home">
<PostList />
</div>
</template>
<script>
import PostList from "@/components/PostList.vue";
export default {
name: "home",
components: {
PostList
}
};
</script>
src/components/PostList.vue
<template>
<section class="post-list">
<h1>{{ category }} Articles</h1>
<hr/>
<p>Put list of posts here!</p>
</section>
</template>
<script>
export default {
name: "PostList",
data() {
return {
category: ""
};
},
created() {
this.category = this.$route.name;
},
watch: {
$route() {
this.category = this.$route.name;
}
}
};
</script>
Notice that, in the PostList component, we’re using a custom watcher to update our category data property, based on our current URL.
Now we’re ready to perform a quick test to confirm the routes are working. Spin up the Vue.js server using the command npm run serve. Open a browser at localhost:8080 and test each navigation link. The category property should output the same value we defined in route name’s attribute.
A page view of our app
Pulling in Data From GraphCMS
Now that we have our routing code working, let's see how we can pull information from our GraphCMS back end. At the root of your project, create a .env.local file and populate it with values for the following fields:
VUE_APP_ENDPOINT=
VUE_APP_TOKEN=
Do note that Vue.js single-page applications only load custom environment variables starting with VUE_APP. You can find the API endpoint and token from your GraphCMS dashboard settings page. For the token, make sure to create one with OPEN permission, as that will allow both READ and WRITE operations. Next, create the file src/graphcms.js and copy the following code:
import axios from "axios";
export const ENDPOINT = process.env.VUE_APP_ENDPOINT;
const TOKEN = process.env.VUE_APP_TOKEN;
const headers = {
"Content-Type": "application/json",
Authorization: `Bearer ${TOKEN}`
};
export const apiClient = axios.create({
headers
});
export const POSTS_BY_CATEGORY_QUERY = `
query PostsByCategory($category: String!){
category(where: {
name: $category
}
){
name,
posts {
id
slug
title
content
categories {
name
}
}
}
}
`;
export const POST_BY_SLUG_QUERY = `
query PostBySlug($slug: String!){
post(where: {
slug: $slug
})
{
id
title
content
categories {
name
}
comments {
name
message
}
}
}
`;
export const CREATE_COMMENT_MUTATION = `
mutation CreateComment($post: PostWhereUniqueInput!, $name: String!, $message: String!){
createComment(data: {
name: $name,
message: $message,
post: {
connect: $post
},
status: PUBLISHED
})
{
id
name
message
}
}
`;
This helper file we just created serves two main purposes:
• It creates an instance of axios that’s configured to perform authorized requests to your GraphCMS back end.
• It contains GraphQL queries and mutations used in this project. These are responsible for fetching posts (either by category or by slug), as well as for creating new comments. If you’d like to find out more about GraphQL queries and mutations, please consult the GraphQL docs.
You can also use the API explorer in your project dashboard to test out these queries and mutations. To do this, copy the query or mutation from the code above and paste it into the top window of the API explorer. Enter any query variables in the window below that, then hit the Play button. You should see the results in a new pane on the right.
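For example, the PostsByCategory query above takes a category variable, so the query variables window could contain something like this (the category name must match one defined in your GraphCMS project):

{
  "category": "Food"
}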
Here’s a query example:
A GraphCMS query example
Here’s a mutation example:
A GraphCMS mutation example
Displaying the Data in a Template
Now, let’s create our HTML template in our src/components/PostList.vue that will display a list of posts in a neat way. We’ll also add the axios code that will pull in posts data from our GraphCMS database:
<template>
<section class="post-list">
<h1>{{ category }} Articles</h1>
<hr/>
<b-row v-if="loading">
<b-col class="text-center">
<div class="lds-dual-ring"></div>
</b-col>
</b-row>
<div v-if="!loading" >
<b-card tag="article" v-for="post in posts" :key="post.id" :title="post.title" :sub-title="post.categories.map(cat => cat.name).toString()">
<p class="card-text">
{{ post.content }}
</p>
<router-link class="btn btn-primary" :to="'post/' + post.slug">
Read Post
</router-link>
</b-card>
</div>
</section>
</template>
<script>
import { ENDPOINT, apiClient, POSTS_BY_CATEGORY_QUERY } from "../graphcms.js";
export default {
name: "PostList",
data() {
return {
category: "",
loading: false,
posts: []
};
},
methods: {
async fetchPosts() {
try {
this.loading = true;
const response = await apiClient.post(ENDPOINT, {
query: POSTS_BY_CATEGORY_QUERY,
variables: {
category: this.category
}
});
const body = await response.data.data;
this.posts = await body.category.posts;
this.loading = false;
} catch (error) {
console.log(error);
}
}
},
created() {
this.category = this.$route.name;
this.fetchPosts();
},
watch: {
$route() {
this.category = this.$route.name;
this.posts = [];
this.fetchPosts();
}
}
};
</script>
<style>
h1{
margin-top: 25px !important;
}
.lds-dual-ring {
display: inline-block;
width: 64px;
height: 64px;
}
.lds-dual-ring:after {
content: " ";
display: block;
width: 46px;
height: 46px;
margin: 1px;
border-radius: 50%;
border: 5px solid #ccc;
border-color: #ccc transparent #ccc transparent;
animation: lds-dual-ring 1.2s linear infinite;
}
@keyframes lds-dual-ring {
0% {
transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
}
}
</style>
Let’s quickly go through the code’s main features:
• Loading. When a request is made, a loading spinner is displayed to indicate to the user there’s something in progress. When the request is fulfilled, the loading spinner is replaced with a list of posts.
• Query. In order to get a list of posts by category, I found it easier to query for the category, then use the category-to-posts relationship to get access to the filtered posts.
• Created. When the page is loaded for the first time, the fetchPosts() function is called from within the created lifecycle hook.
• Watch. When the route URL changes, the fetchPosts() function is called each time.
After making those changes, you should now have the following view:
Our post list
Displaying an Individual Post
Make sure the top main navigation is working as expected. Let’s now work on the Post component. It will have its own fetchPost() function, where it will query by slug. If you’re wondering where the slug parameter is coming from, let me remind you of this bit of code we put in router.js:
//...
{
path: '/post/:slug',
name: 'Post',
component: Post
},
//...
This states that anything that comes after /post/ in the URL is available to us in the component as this.$route.params.slug.
The post component is a parent of the CommentForm and CommentList components. The comments data will be passed as props to the CommentList component from the Posts record. Let’s insert code for src/components/CommentList.vue now:
<template>
<section class="comment-list">
<hr/>
<h4 class="text-muted">Comments</h4>
<b-card v-for="comment in comments" :title="comment.name" title-tag="h5" :key="comment.id">
<p class="card-text text-muted">{{ comment.message }} </p>
</b-card>
<p v-if="comments.length === 0" class="text-center text-muted">No comments posted yet!</p>
</section>
</template>
<script>
export default {
name: "CommentsList",
props: ["comments"]
};
</script>
Unless you’ve manually entered comments via the GraphCMS dashboard, don’t expect to see any results just yet. Let’s add code to src/components/CommentForm.vue that will enable users to add comments to a blog post:
<template>
<section class="comment-form">
<h4 class="text-muted">Comment Form</h4>
<b-form @submit.prevent="onSubmit">
<b-form-group label="Name">
<b-form-input id="input-name" type="text" v-model="name" placeholder="Enter your name" required></b-form-input>
</b-form-group>
<b-form-group label="Message">
<b-form-textarea id="input-message" v-model="message" placeholder="Enter your comment" :rows="3" :max-rows="6" required>
</b-form-textarea>
</b-form-group>
<b-button type="submit" variant="primary">Submit</b-button>
</b-form>
</section>
</template>
<script>
import { apiClient, ENDPOINT, CREATE_COMMENT_MUTATION } from "../graphcms.js";
export default {
name: "CommentForm",
props: ["post"],
data() {
return {
name: "",
message: ""
};
},
methods: {
async onSubmit() {
const formattedComment = {
name: this.name,
message: this.message,
post: {
id: this.post.id
}
};
try {
const response = await apiClient.post(ENDPOINT, {
query: CREATE_COMMENT_MUTATION,
variables: formattedComment
});
const body = await response.data.data;
const newComment = body.createComment;
this.post.comments.push(newComment);
this.name = "";
this.message = "";
} catch (error) {
console.log(error);
}
}
}
};
</script>
<style>
.comment-form {
margin-top: 35px;
}
</style>
We now have a basic comment form capable of submitting a new comment to our GraphQL back-end system. Once the new comment is saved, we’ll take the returned object and add it to the post.comments array. This should trigger the CommentList component to display the newly added Comment.
Let’s now build the src/components/Post.vue component:
<template>
<section class="post">
<b-row v-if="loading">
<b-col>
<div class="lds-dual-ring text-center"></div>
</b-col>
</b-row>
<b-row v-if="!loading">
<b-col>
<h1>{{post.title}}</h1>
<h4 class="text-muted">{{post.categories.map(cat => cat.name).toString()}}</h4>
<hr>
<p>{{ post.content }}</p>
</b-col>
</b-row>
<!-- List of comments -->
<b-row v-if="!loading">
<b-col>
<CommentList :comments="post.comments" />
</b-col>
</b-row>
<!-- Comment form -->
<b-row v-if="!loading">
<b-col>
<CommentForm :post="post" />
</b-col>
</b-row>
</section>
</template>
<script>
import { ENDPOINT, apiClient, POST_BY_SLUG_QUERY } from "../graphcms.js";
import CommentList from "@/components/CommentList";
import CommentForm from "@/components/CommentForm";
export default {
name: "Post",
components: {
CommentList,
CommentForm
},
data() {
return {
loading: false,
slug: "",
post: {}
};
},
methods: {
async fetchPost() {
try {
this.loading = true;
const response = await apiClient.post(ENDPOINT, {
query: POST_BY_SLUG_QUERY,
variables: {
slug: this.slug
}
});
const body = await response.data.data;
this.post = body.post;
this.loading = false;
} catch (error) {
console.log(error);
}
}
},
created() {
this.slug = this.$route.params.slug;
this.fetchPost();
}
};
</script>
Finally, here’s the code for src/views/PostView.vue to tie everything together:
<template>
<div class="post-view">
<Post/>
</div>
</template>
<script>
import Post from "@/components/Post.vue";
export default {
name: "PostView",
components: {
Post
}
};
</script>
You should now have the following view for Posts. Take note of the :slug at the end of the URL localhost:8080/post/fashion-post-1:
Post comments
In the above example, I’ve added a couple of comments to test out the new feature. Make sure you do the same on yours.
Summary
I hope you’ve seen how easy it is to build a blog website using Vue.js and GraphQL. If you had been using plain PHP and MySQL, you would have written much more code. Even with a PHP framework, you would still have written more code for a simple blog application.
For the sake of this tutorial, I had to keep things as simple as possible, so this blog project falls short of even a minimal blog setup. There are several things we haven’t tackled, such as error handling, form validation and caching. For the caching part, I recommend Apollo Client, as it has mechanisms for caching GraphQL query results. Then, of course, there needs to be an author model, and a proper comments system that supports authentication and message approval.
If you’re up to it, please go ahead and take this simple Vue.js GraphCMS blog even further.
Problems integrating the NewebPay (智付寶) payment gateway
I recently switched to NewebPay for payment processing and followed the API documentation they provide for the integration, but for some reason I'm still running into problems in quite a few places.
Here are the problems I've run into:
1. I configured the payment method as one-time credit card payment only, but the payment page still shows all the other payment methods.
2. I set ReturnURL according to the documentation, but after payment completes, the browser is not redirected to the specified URL.
3. I set NotifyURL according to the documentation, but I can't read the parameters the gateway POSTs back.
4. I set EmailModify to 0, meaning I don't want users to have to fill in their Email again on the payment page, but it has no effect; users still have to enter it again.
API documentation used for the integration: https://cwww.newebpay.com/website/Page/content/download_api#1
My form code:
<form name="newebpay" method="post" action="https://ccore.newebpay.com/MPG/mpg_gateway" id="newebpay" class="d-none">
<input type="hidden" name="MerchantID" value="...">
<input type="hidden" name="RespondType" value="JSON">
<input type="hidden" name="TimeStamp" value="1561712191">
<input type="hidden" name="Version" value="1.5">
<input type="hidden" name="MerchantOrderNo" value="926x58834852">
<input type="hidden" name="Amt" value="335">
<input type="hidden" name="ItemDesc" value="冰山紅火焰">
<input type="hidden" name="TradeLimit" value="900">
<input type="hidden" name="ReturnURL" value="http://yeshealth.fanswoo.com/admin/my/order/order/edit/view?id=926">
<input type="hidden" name="NotifyURL" value="http://yeshealth.fanswoo.com/shop/order/pay/catchPay?orderId=926&type=back">
<input type="hidden" name="Email" value="[email protected]">
<input type="hidden" name="EmailModify" value="0">
<input type="hidden" name="LoginType" value="0">
<input type="hidden" name="OrderComment" value="...">
<input type="hidden" name="TradeInfo" value="...">
<input type="hidden" name="TradeSha" value="...">
<input type="hidden" name="CREDIT" value="1">
<input type="submit" value="submit">
</form>
Code that receives the returned parameters:
public function catch($arg = []) // function that receives the returned parameters
{
$json = $_POST['JSONData'];
$enjson = json_decode($json);
$orderId = $arg['orderId'];
$type = $arg['type'];
$OrderField = OrderField::orm( $orderId );
if( $enjson->Status == 'SUCCESS' )
{
$OrderField->responseStatus = $enjson->Status;
$OrderField->responseMsg = $enjson->Message;
$OrderField->amt = $enjson->Amt;
$OrderField->tradeNo = $enjson->TradeNo;
$OrderField->payTypeRespond = $enjson->PaymentType;
$OrderField->PayTime = $enjson->PayTime;
$OrderField->cardFourNo = $enjson->Card4No;
$OrderField->creditAmountRespond = $enjson->Inst;
$OrderField->stast = $enjson->InstFirst;
$OrderField->staed = $enjson->InstEach;
$OrderField->paycheckStatus = 1;
$OrderField->save();
}
else
{
$OrderField->paycheckStatus = 3;
$OrderField->save();
}
}
2 answers
浩瀚星空 (iT邦超人 1 級) ‧ 2019-07-01 14:18:52
Best answer
1. Payment methods: I set credit card one-time payment only, but the payment page still shows all the other options.
You'll need to ask their engineers about this one. Usually all you can tell from the outside is that you are probably missing a parameter, so the gateway can't tell what to display.
2. I set ReturnURL according to the documentation, but there's no redirect to the specified URL after payment.
The callback domain needs to be HTTPS, otherwise there's a chance the redirect back will fail, at least whenever the other side does it via JS.
A common dirty trick is to handle it with a frame, but some bank and credit card pages now have frame-busting protection, so the frame ends up being forcibly broken out of.
I struggled with this problem for a long time. My current workaround is to use an HTTPS page of my own as the return target and then redirect from there back to the client's page.
3. I set NotifyURL according to the documentation, but can't read the POSTed parameters.
Try receiving the values via GET and see what happens. If it's a raw data stream, you can try reading it with:
$post_data = file_get_contents('php://input');
and see whether that picks it up.
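A quick way to see exactly what the gateway is actually sending to your NotifyURL (a debugging sketch; the log path is just an example):

file_put_contents('/tmp/newebpay_notify.log',
    print_r(['GET' => $_GET, 'POST' => $_POST, 'raw' => file_get_contents('php://input')], true),
    FILE_APPEND);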
4. I set EmailModify to 0 so users don't have to fill in their Email again on the payment page, but it has no effect.
This is also one for their engineers. That said, in many cases I've seen, this simply can't be turned off. I usually substitute a placeholder mail address to handle it, provided the client can accept that, of course.
Thanks! I found the cause: I hadn't added the other parameters to the array that gets encrypted, so they never received the parameters I was sending over.
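In other words, every transaction parameter has to go inside the encrypted TradeInfo blob, not just appear as a plain form field. A rough PHP sketch of the idea follows; the exact field list, key handling and cipher settings must be taken from the official API document, so treat this as an illustration rather than NewebPay reference code:

$tradeData = [
    'MerchantID'      => $merchantId,
    'RespondType'     => 'JSON',
    'TimeStamp'       => time(),
    'Version'         => '1.5',
    'MerchantOrderNo' => $orderNo,
    'Amt'             => $amount,
    'ItemDesc'        => $itemDesc,
    'ReturnURL'       => $returnUrl,  // include these here, not only as separate form fields
    'NotifyURL'       => $notifyUrl,
    'EmailModify'     => 0,
    'CREDIT'          => 1,
];
$queryString = http_build_query($tradeData);
$tradeInfo = bin2hex(openssl_encrypt($queryString, 'aes-256-cbc', $hashKey, OPENSSL_RAW_DATA, $hashIv));
$tradeSha = strtoupper(hash('sha256', "HashKey={$hashKey}&{$tradeInfo}&HashIV={$hashIv}"));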
小魚 (iT邦大師 1 級) ‧ 2019-07-01 11:44:28
For this kind of problem, it would be faster to ask NewebPay directly. Ask them whether they have sample code; a vendor this well known should provide samples...
I did ask. They said that because the API is provided for free, there's no way to reach their engineers directly; they offered to pass my questions along, but so far I've had no reply, and this is a bit urgent =_=
小魚 (iT邦大師 1 級) ‧ 2019-07-01 12:31:57
You're probably not a corporate account; they likely figure there's no money to be made from you...
11 Nagarro SQL Interview Questions (Updated 2024)
Updated on June 23, 2024
At Nagarro, SQL is used for analyzing large datasets for actionable insights and for managing data schemas in the tech consulting industry. That's why Nagarro almost always evaluates jobseekers on SQL query questions in interviews for Data Science, Data Engineering and Data Analytics jobs.
So, to help you study, we've curated 11 Nagarro SQL interview questions – can you solve them?
11 Nagarro SQL Interview Questions
SQL Question 1: Rolling 3 Months Active Users per Product
Assume that Nagarro is an e-commerce company. Their product team wants to analyze user engagement on a rolling 3 months basis. Specifically, they want to calculate how many unique active users a product has on a rolling 3 months basis, sorted by date and product.
Use the schema and sample data from the reviews table provided below for this question.
Example Input:
review_id | user_id | submit_date | product_id | stars
6171 | 123 | 2022-01-08 | 50001 | 4
7802 | 265 | 2022-02-10 | 69852 | 4
5293 | 362 | 2022-03-18 | 50001 | 3
6352 | 192 | 2022-04-26 | 69852 | 3
4517 | 981 | 2022-05-05 | 69852 | 2
We define an active user as a user that has submitted at least one review in the timeframe. Each rolling 3-month window ends on the last day of a calendar month, as the example output below shows.
Example Output:
end_of_period | product_id | active_users
2022-03-31 | 50001 | 2
2022-03-31 | 69852 | 1
2022-04-30 | 50001 | 1
2022-04-30 | 69852 | 2
2022-05-31 | 50001 | 0
2022-05-31 | 69852 | 2
Answer:
This query calculates the number of unique active users per product on a rolling 3-month basis. It uses a subquery to filter the relevant reviews for each rolling 3-month window, and then counts the distinct users. The results are ordered by the end of the rolling 3-month period in ascending order and product id in ascending order. Keep in mind that when no user has reviewed a product in a given 3-month period, its count shows up as zero, as in the 2022-05-31 row for product 50001 above.
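One possible PostgreSQL query matching this description (a sketch; the table and column names follow the example input above):

WITH periods AS (
  SELECT DISTINCT
    (DATE_TRUNC('month', submit_date) + INTERVAL '1 month - 1 day')::date AS end_of_period
  FROM reviews
),
products AS (
  SELECT DISTINCT product_id FROM reviews
)
SELECT
  p.end_of_period,
  pr.product_id,
  (SELECT COUNT(DISTINCT r.user_id)
     FROM reviews r
    WHERE r.product_id = pr.product_id
      AND r.submit_date >  p.end_of_period - INTERVAL '3 months'
      AND r.submit_date <= p.end_of_period) AS active_users
FROM periods p
CROSS JOIN products pr
ORDER BY p.end_of_period ASC, pr.product_id ASC;

Note that this sketch also emits the first two partial windows (2022-01-31 and 2022-02-28); filter on p.end_of_period if only full 3-month windows are wanted.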
Pro Tip: Window functions are a frequent SQL interview topic, so practice all the window function problems on DataLemur
DataLemur Window Function SQL Questions
SQL Question 2: 2nd Highest Salary
Assume you had a table of Nagarro employee salary data. Write a SQL query to find the 2nd highest salary at the company.
Nagarro Example Input:
employee_id | salary
1 | 2500
2 | 800
3 | 1000
4 | 1200
Example Output:
second_highest_salary
1200
Try this question and run your code right in the browser:
2nd Highest Salary SQL Interview Question
Answer:
You can find a detailed solution here: 2nd Highest Salary.
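If you just want the gist, here is one common way to write it (a sketch; the table name employee is an assumption, since the example above doesn't name its table):

SELECT MAX(salary) AS second_highest_salary
FROM employee
WHERE salary < (SELECT MAX(salary) FROM employee);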
SQL Question 3: In SQL, are NULL values the same as zero or a blank space?
A NULL value represents a missing or unknown value. It is different from a zero or a blank space, which are actual values.
It's important to handle NULL values properly in SQL because they can cause unexpected results if not treated correctly. For example, comparisons involving NULL values always result in NULL. Also, NULL values are not included in aggregate calculations; for example, SUM(column) will ignore NULL values in the column.
SQL Question 4: Employee Management System
Nagarro is a global software development and technology consultancy company. To manage its vast pool of employees and their projects efficiently, Nagarro is thinking of developing an Employee Management System. Design the database for this system. The database should be capable of storing details about employees, their departments, projects they are assigned to, and their roles in those projects. Also, write an SQL query to retrieve a list of all the employees who are working on more than one project.
Consider the following tables:
Example Input:
employee_id | first_name | last_name | department_id
1 | John | Doe | 100
2 | Jane | Smith | 200
3 | Jim | Brown | 300
Example Input:
department_id | department_name
100 | Engineering
200 | Marketing
300 | Human Resources
Example Input:
project_id | project_name | department_id
50 | System Design | 100
60 | Brand Campaign | 200
Example Input:
employee_id | project_id
1 | 50
2 | 60
1 | 60
3 | 50
Answer:
This query joins the employees table and the employee-to-project assignment table on the employee_id field. It groups the result by the employee's first and last name, and then filters the groups to only include those where an employee is working on more than one project. This is achieved by using the HAVING clause to filter the result of the GROUP BY clause. The result is a list of employees who are working on multiple projects.
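A sketch of such a query (the table names employees and employee_projects are assumptions, since the schema above doesn't name its tables):

SELECT e.first_name, e.last_name, COUNT(ep.project_id) AS project_count
FROM employees e
JOIN employee_projects ep ON e.employee_id = ep.employee_id
GROUP BY e.first_name, e.last_name
HAVING COUNT(ep.project_id) > 1;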
SQL Question 5: How do the 'BETWEEN' and 'IN' commands differ?
While both the BETWEEN and IN operators are used to filter data based on some criteria, BETWEEN selects values within a given range, whereas IN checks whether the value is in a given list of values.
For example, say you had a table of employees that held each employee's salary, along with the country in which they reside.
To find all employees who made between $80k and $120k, you could use the BETWEEN operator:
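(A sketch; the table name employees and the columns salary and country are assumptions:)

SELECT *
FROM employees
WHERE salary BETWEEN 80000 AND 120000;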
To find all employees that reside in the US or Canada, you could use the IN operator:
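(With the same assumed schema:)

SELECT *
FROM employees
WHERE country IN ('US', 'Canada');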
SQL Question 6: Filter Customers Based on Specific Conditions
You are a data analyst at Nagarro. The data management team has a table consisting of customer records that includes their contact information and the date they joined the company. Your goal is to design a SQL query that filters this table to return only the customers who joined the company after 1st January 2020 and are either from the US or have an email with domain '@nagarro.com'.
The following table presents an example of this data:
Example Input:
customer_id | first_name | last_name | email | Country | join_date
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
Answer:
With this query, you will get a list of customers who joined the company after January 1, 2020, and are either from the United States or have an email account with the '@nagarro.com' domain. The LIKE operator is used to match text string patterns; in our case, any email that ends with '@nagarro.com'. The AND and OR operators used in the WHERE clause filter the records based on multiple conditions.
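A sketch of the query (assuming the table is named customers, with the columns shown above):

SELECT *
FROM customers
WHERE join_date > '2020-01-01'
  AND (Country = 'US' OR email LIKE '%@nagarro.com');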
SQL Question 7: What do the EXCEPT / MINUS SQL commands do?
Note: interviews at Nagarro often aren't trying to test you on a specific flavor of SQL. As such, you don't need to know exactly that EXCEPT is available in PostgreSQL and SQL Server, while MINUS is available in Oracle; you just need to know the general concept!
Your answer should mention that the EXCEPT / MINUS operator is used to return all rows from the first SELECT statement that are not returned by the second SELECT statement.
Here's a PostgreSQL example of using EXCEPT to find all of Nagarro's Facebook video ads with more than 50k views that aren't also being run on YouTube:
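(A sketch; the table and column names here are assumptions made for illustration:)

SELECT ad_id
FROM facebook_video_ads
WHERE views > 50000
EXCEPT
SELECT ad_id
FROM youtube_video_ads;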
If you want to retain duplicates, you can use the EXCEPT ALL operator instead of EXCEPT. The EXCEPT ALL operator will return all rows, including duplicates.
SQL Question 8: Average Project Duration
You are given a table 'projects' in Nagarro's database. Each row of the table represents a project carried out by the company. The table has the following Columns: project_id, start_date, end_date. Can you write a SQL query that gives the average duration (in days) of all projects undertaken by Nagarro in the year 2020?
Example Input:
project_id | start_date | end_date
1 | 2020-01-05 | 2020-03-10
2 | 2020-07-01 | 2020-09-30
3 | 2020-04-03 | 2020-05-12
4 | 2020-12-19 | 2021-01-10
5 | 2019-11-23 | 2019-12-02
Example Output:
avg_duration_days
78
Answer:
In the above SQL query, we extract each project's duration as the difference between end_date and start_date, measured in days. We filter to projects from the year 2020 using the WHERE clause, then calculate the average of these durations using the AVG function. The output is the average project duration for Nagarro in 2020. Please note, this query assumes that start_date and end_date are of type DATE.
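One way to write it in PostgreSQL (a sketch; subtracting two DATE values yields an integer number of days, and the year filter here keeps projects that started in 2020):

SELECT AVG(end_date - start_date) AS avg_duration_days
FROM projects
WHERE EXTRACT(YEAR FROM start_date) = 2020;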
To practice a very similar question try this interactive Facebook Average Post Hiatus (Part 1) Question which is similar for calculating duration between dates or this Facebook Active User Retention Question which is similar for date-based analysis.
SQL Question 9: Find the Maximum Sold Product per Month
Nagarro, being a custom software development and services company, provides many software services and solutions to their clients. Let's assume that we want to find the product that has been sold the most for each month.
Consider a scenario where we have a sales table in the database that keeps track of all software products sold, the client who bought them, and the date of purchase.
Example Input:
sale_id | client_id | purchase_date | product_id | quantity
1015 | 67 | 06/05/2022 | A01 | 5
1026 | 72 | 06/14/2022 | A03 | 10
1038 | 90 | 06/30/2022 | A01 | 7
1045 | 67 | 07/11/2022 | A02 | 9
1058 | 90 | 07/16/2022 | A01 | 6
The task is to write an SQL query to find the software product that has been sold the most for each month. The output should provide the month, product_id, and sum of quantity sold.
Example Output:
mth | product_id | maximum_sold
6 | A01 | 12
7 | A02 | 9
Answer:
This SQL query makes use of the EXTRACT function to get the month from the date of the sales data. We then group by both the extracted month and product_id to get the sum of the quantity sold for each product in each month. The ORDER BY clause is used to sort the results in descending order of the quantity sold.
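A sketch that also keeps only the top product per month, using a window function (the table name sales and a DATE-typed purchase_date column are assumptions):

WITH monthly AS (
  SELECT
    EXTRACT(MONTH FROM purchase_date) AS mth,
    product_id,
    SUM(quantity) AS maximum_sold
  FROM sales
  GROUP BY mth, product_id
)
SELECT mth, product_id, maximum_sold
FROM (
  SELECT m.*, RANK() OVER (PARTITION BY mth ORDER BY maximum_sold DESC) AS rnk
  FROM monthly m
) ranked
WHERE rnk = 1
ORDER BY mth;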
SQL Question 10: Can you explain the distinction between a unique and a non-unique index?
While both types of indexes improve the performance of SQL queries by providing a faster way to lookup rows of data, a unique index enforces the uniqueness of the indexed columns while a non-unique index allows duplicate values in the indexed columns.
Suppose you had a table of Nagarro employees. Here's an example of a unique index, say on an employee ID column:
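(A sketch; the index, table and column names are illustrative assumptions:)

CREATE UNIQUE INDEX idx_employee_id ON employees (employee_id);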
This index would ensure that no two Nagarro employees share the same employee ID, which could then be used as a unique identifier for each employee.
Here's a non-unique index example, say on a job title column:
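(With the same assumed table:)

CREATE INDEX idx_job_title ON employees (job_title);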
This index would not enforce uniqueness, but it could be used to improve the performance of queries that filter or sort the data based on that column. For example, if you want to quickly retrieve all Data Scientists, the database can use the index to efficiently locate and retrieve the desired records without having to do a full table scan on all Nagarro employees.
SQL Question 11: Fetch Customer Records With Specific Pattern
Given a table of customer records, your task is to write a SQL query that will fetch all the records of customers whose email addresses end with "@nagarro.com".
Example Input:
customer_id | first_name | last_name | email_address
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
Answer:
You can make use of the LIKE keyword in SQL to filter the records based on a specific pattern. To get the required results, you can use the following PostgreSQL query:
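(A sketch, matching the column names above; the table name customers is an assumption:)

SELECT *
FROM customers
WHERE email_address LIKE '%@nagarro.com';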
This query will return all the records where the email address column ends with "@nagarro.com". The percent sign (%) is a wildcard character that matches any sequence of characters.
Example Output:
customer_id | first_name | last_name | email_address
[email protected]
[email protected]
[email protected]
Preparing For The Nagarro SQL Interview
The best way to prepare for a Nagarro SQL interview is to practice, practice, practice. Beyond just solving the earlier Nagarro SQL interview questions, you should also solve the 200+ SQL questions on DataLemur which come from companies like Google, Facebook, Microsoft and Amazon. DataLemur SQL Interview Questions
Each SQL question has hints to guide you, step-by-step solutions and most importantly, there's an interactive SQL code editor so you can instantly run your SQL query answer and have it checked.
To prep for the Nagarro SQL interview it is also helpful to solve SQL questions from other tech companies like:
However, if your SQL coding skills are weak, forget about jumping right into solving questions – strengthen your SQL foundations with this SQL tutorial for Data Analytics.
DataLemur SQL Tutorial for Data Science
This tutorial covers things like UNION vs. joins and Self-Joins – both of which come up often in SQL interviews at Nagarro.
Nagarro Data Science Interview Tips
What Do Nagarro Data Science Interviews Cover?
Beyond writing SQL queries, the Nagarro Data Science interview covers several other types of questions as well.
Nagarro Data Scientist
How To Prepare for Nagarro Data Science Interviews?
I'm sorta biased, but I think the optimal way to prep for Nagarro Data Science interviews is to read my book Ace the Data Science Interview.
The book has 201 data interview questions taken from companies like Microsoft, Google & Amazon. It also has a refresher covering Stats, SQL & ML. And finally it's vouched for by the data community, which is why it's earned 1000+ reviews on Amazon and a 4.5-star rating.
Ace the Data Science Interview
Removing Django REST Framework's forced CSRF check

In a recent project, the front-end JS ran on localhost and made requests to our back-end development server. Then, out of nowhere, the front end reported that every POST request was failing CSRF validation. Very strange: we had already commented out Django's CSRF middleware to make development easier, so why the error? Since the response body was still in Django REST Framework's common format, the problem had to be in there somewhere, so I dug into its source code.
from django.middleware.csrf import CsrfViewMiddleware
class CSRFCheck(CsrfViewMiddleware):
def _reject(self, request, reason):
# Return the failure reason instead of an HttpResponse
return reason
class SessionAuthentication(BaseAuthentication):
"""
Use Django's session framework for authentication.
"""
def authenticate(self, request):
"""
Returns a `User` if the request session currently has a logged in user.
Otherwise returns `None`.
"""
# Get the underlying HttpRequest object
request = request._request
user = getattr(request, 'user', None)
# Unauthenticated, CSRF validation not required
if not user or not user.is_active:
return None
self.enforce_csrf(request)
# CSRF passed with authenticated user
return (user, None)
def enforce_csrf(self, request):
"""
Enforce CSRF validation for session based authentication.
"""
reason = CSRFCheck().process_view(request, None, (), {})
if reason:
# CSRF failed, bail with explicit error message
raise exceptions.PermissionDenied('CSRF Failed: %s' % reason)
So that was it. We had recently added user login to the system, using SessionAuthentication and TokenAuthentication. SessionAuthentication calls self.enforce_csrf(request), which in turn uses the CSRFCheck class above, a subclass of Django's CSRF middleware; and there seems to be no switch to turn this feature off. Even if you remove the middleware from Django's settings, this code path still runs.
So how do we disable it? After all, cross-origin requests are exactly what we want to make here.
1. The simplest option: just comment out the self.enforce_csrf(request) line above, or add a setting and gate the call behind it, for example:
GLOBAL_CSRF_CHECK = True
if GLOBAL_CSRF_CHECK:
    self.enforce_csrf(request)
2. Alternatively, let's keep reading the source code, this time inside Django's CSRF middleware.
class CsrfViewMiddleware(object):
"""
Middleware that requires a present and correct csrfmiddlewaretoken
for POST requests that have a CSRF cookie, and sets an outgoing
CSRF cookie.
This middleware should be used in conjunction with the csrf_token template
tag.
"""
# The _accept and _reject methods currently only exist for the sake of the
# requires_csrf_token decorator.
def _accept(self, request):
# Avoid checking the request twice by adding a custom attribute to
# request. This will be relevant when both decorator and middleware
# are used.
request.csrf_processing_done = True
return None
def _reject(self, request, reason):
logger.warning('Forbidden (%s): %s',
reason, request.path,
extra={
'status_code': 403,
'request': request,
}
)
return _get_failure_view()(request, reason=reason)
def process_view(self, request, callback, callback_args, callback_kwargs):
if getattr(request, 'csrf_processing_done', False):
return None
try:
csrf_token = _sanitize_token(
request.COOKIES[settings.CSRF_COOKIE_NAME])
# Use same token next time
request.META['CSRF_COOKIE'] = csrf_token
except KeyError:
csrf_token = None
# Generate token and store it in the request, so it's
# available to the view.
request.META["CSRF_COOKIE"] = _get_new_csrf_key()
# Wait until request.META["CSRF_COOKIE"] has been manipulated before
# bailing out, so that get_token still works
if getattr(callback, 'csrf_exempt', False):
return None
# Assume that anything not defined as 'safe' by RFC2616 needs protection
if request.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
if getattr(request, '_dont_enforce_csrf_checks', False):
# Mechanism to turn off CSRF checks for test suite.
# It comes after the creation of CSRF cookies, so that
# everything else continues to work exactly the same
# (e.g. cookies are sent, etc.), but before any
# branches that call reject().
return self._accept(request)
if request.is_secure():
# Suppose user visits http://example.com/
# An active network attacker (man-in-the-middle, MITM) sends a
# POST form that targets https://example.com/detonate-bomb/ and
# submits it via JavaScript.
#
# The attacker will need to provide a CSRF cookie and token, but
# that's no problem for a MITM and the session-independent
# nonce we're using. So the MITM can circumvent the CSRF
# protection. This is true for any HTTP connection, but anyone
# using HTTPS expects better! For this reason, for
# https://example.com/ we need additional protection that treats
# http://example.com/ as completely untrusted. Under HTTPS,
# Barth et al. found that the Referer header is missing for
# same-domain requests in only about 0.2% of cases or less, so
# we can use strict Referer checking.
referer = request.META.get('HTTP_REFERER')
if referer is None:
return self._reject(request, REASON_NO_REFERER)
# Note that request.get_host() includes the port.
good_referer = 'https://%s/' % request.get_host()
if not same_origin(referer, good_referer):
reason = REASON_BAD_REFERER % (referer, good_referer)
return self._reject(request, reason)
if csrf_token is None:
# No CSRF cookie. For POST requests, we insist on a CSRF cookie,
# and in this way we can avoid all CSRF attacks, including login
# CSRF.
return self._reject(request, REASON_NO_CSRF_COOKIE)
# Check non-cookie token for match.
request_csrf_token = ""
if request.method == "POST":
request_csrf_token = request.POST.get('csrfmiddlewaretoken', '')
if request_csrf_token == "":
# Fall back to X-CSRFToken, to make things easier for AJAX,
# and possible for PUT/DELETE.
request_csrf_token = request.META.get('HTTP_X_CSRFTOKEN', '')
if not constant_time_compare(request_csrf_token, csrf_token):
return self._reject(request, REASON_BAD_TOKEN)
return self._accept(request)
def process_response(self, request, response):
if getattr(response, 'csrf_processing_done', False):
return response
# If CSRF_COOKIE is unset, then CsrfViewMiddleware.process_view was
# never called, probaby because a request middleware returned a response
# (for example, contrib.auth redirecting to a login page).
if request.META.get("CSRF_COOKIE") is None:
return response
if not request.META.get("CSRF_COOKIE_USED", False):
return response
# Set the CSRF cookie even if it's already set, so we renew
# the expiry timer.
response.set_cookie(settings.CSRF_COOKIE_NAME,
request.META["CSRF_COOKIE"],
max_age = 60 * 60 * 24 * 7 * 52,
domain=settings.CSRF_COOKIE_DOMAIN,
path=settings.CSRF_COOKIE_PATH,
secure=settings.CSRF_COOKIE_SECURE,
httponly=settings.CSRF_COOKIE_HTTPONLY
)
# Content varies with the CSRF cookie, so set the Vary header.
patch_vary_headers(response, ('Cookie',))
response.csrf_processing_done = True
return response
There are two main methods here: process_view and process_response. At this point it's worth recalling how Django middleware works.
https://docs.djangoproject.com/en/1.6/topics/http/middleware/
process_request() is called on each request, before Django decides which view to execute.
process_view() is called just before Django calls the view.
process_response() is called on all responses before they’re returned to the browser.
So this middleware's process_view is called before the request reaches the view function; you can think of it as a filter.
if request.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
if getattr(request, '_dont_enforce_csrf_checks', False):
return self._accept(request)
Here, if the request carries a _dont_enforce_csrf_checks attribute, it goes straight through to the view and skips all the checks below. So all we need to do is set that attribute on the request ourselves, and the most direct way to do that is, of course, to write our own middleware. Ha!
The code is very simple:
class DisableCSRFCheck(object):
def process_request(self, request):
setattr(request, '_dont_enforce_csrf_checks', True)
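For the middleware to run, it still has to be registered in settings. A sketch, assuming the class lives in a module called myapp.middleware (this post quotes the Django 1.6 docs, so the old-style MIDDLEWARE_CLASSES setting is used here):

MIDDLEWARE_CLASSES = (
    'myapp.middleware.DisableCSRFCheck',
    # ... the rest of your middleware ...
)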
Implementing Resizable Controls in VC++
Friday Jan 17th 2003 by George Anescu
Implement resizable Windows controls in VC++ using MFC. The full solutions for Button, Edit Box, and List Box are given.
Environment: VC6, MFC
To implement an interactively resizable control (using the mouse), you generally need to treat the following Windows messages:
• WM_SETCURSOR—to set the appropriate mouse cursor for the different regions of the control's border. The following mouse cursors are used:
• IDC_SIZEWE—when the mouse is on the left or right side
• IDC_SIZENS—when the mouse is on the top or bottom side
• IDC_SIZENWSE—when the mouse is on the top left or bottom right corner
• IDC_SIZENESW—when the mouse is on the top right or bottom left corner
• IDC_ARROW—when the mouse is on the control but not on the border)
• WM_MOUSEMOVE—to resize the control with the mouse movement.
• WM_NCLBUTTONDOWN—to start the resizing action.
• WM_LBUTTONUP—to end the resizing action.
During the code development phase I detected some variables and actions that are general for all the resizable controls and for this reason I decided to move them in a general interface, IResizeControl, from which all the resizable controls are to inherit. Technically, IResizeControl is not an interface because it doesn't have any pure virtual functions, but I still consider it an interface because it doesn't make any sense to declare IResizeControl objects. In fact, I am preventing the creation of IResizeControl objects by declaring the IResizeControl's constructor and destructor as protected. The IResizeControl class declaration is given below:
class IResizeControl
{
public:
//Enabling Flags
void EnableNorth(bool bN=true);
void EnableWest(bool bW=true);
void EnableSouth(bool bS=true);
void EnableEast(bool bE=true);
//Change Limits
bool SetWidth(int iMinWidth, int iMaxWidth);
bool SetHeight(int iMinHeight, int iMaxHeight);
//Resize Message
static const UINT UWM_CONTROLRESIZE;
protected:
//Constructor and Destructor declared protected prevents
//creation of IResizeControl objects
//CONSTRUCTOR
IResizeControl(bool bN, bool bW, bool bS, bool bE,
int iMinWidth, int iMaxWidth,
int iMinHeight, int iMaxHeight, bool bNotify);
//DESTRUCTOR
virtual ~IResizeControl();
//Find the current Mouse Position
virtual int FindPosition(POINT const& rPt, CRect const& roRect);
//Determine the new Dimensions
virtual void NewDimensions(POINT const& rPt, CRect const& roRect,
                           int& riLeft, int& riTop,
                           int& riWidth, int& riHeight,
                           bool& rbResize);
//Mouse Cursor Positions
enum { POSDEF=0, POSN=1, POSNW=2, POSW=3, POSSW=4, POSS=5,
POSSE=6, POSE=7, POSNE=8 };
//Mouse Cursors
static HCURSOR sm_hWE, sm_hNS, sm_hNWSE, sm_hNESW, sm_hDEF;
//Enabling Flags
bool m_bN, m_bW, m_bS, m_bE;
//Tracking Flag
bool m_bTrack;
//Notification Flag
bool m_bNotify;
//Position
int m_iPosition;
//Limits
int m_iMinWidth, m_iMaxWidth, m_iMinHeight, m_iMaxHeight;
};
The member functions EnableNorth(), EnableWest(), EnableSouth(), and EnableEast() are used to alter the set of resizable borders after construction (initially the set of resizable borders is decided at construction). If two adjacent borders are in the set of resizable borders, the corner in between is also active for mouse resizing; for example, if the member variables m_bN and m_bW are both true, the NW (top left) corner can be caught with the mouse and both borders can be resized at the same time.
The member functions SetWidth() and SetHeight() are used to change the minimum and maximum size limits of the control after construction (initially the size limits are decided at construction). You cannot interactively resize the control beyond these limits.
The user message UWM_CONTROLRESIZE is posted to the parent window each time the control is resized, but only if the bNotify flag is set to true (this flag can be set only in the constructor). The UWM_CONTROLRESIZE message can be used by the parent window (which can be a dialog box, for example) to take some specific action, for example, to resize other controls accordingly.
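Handling the notification in a parent dialog might look like this (a sketch: the handler name is an assumption, and, as the OnMouseMove() implementation below shows, wParam carries the control ID passed via GetDlgCtrlID()):

// In the dialog's message map:
ON_MESSAGE(IResizeControl::UWM_CONTROLRESIZE, OnControlResize)

// Handler; declare it in the class as: afx_msg LRESULT OnControlResize(WPARAM, LPARAM);
LRESULT CTestDlg::OnControlResize(WPARAM wParam, LPARAM)
{
    UINT nCtrlID = (UINT)wParam; // ID of the control that was resized
    // Reposition or resize sibling controls here as needed
    return 0;
}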
The direction, size limits, and notification member variables can be decided from the constructor. All of them, excepting the notification flag, can be changed later.
The virtual member function FindPosition() is used to find the current mouse position. It decides whether the current position of the mouse cursor is inside or outside the argument rectangle, on the borders, or on the corners. The return values are from the enumeration:
enum { POSDEF=0, POSN=1, POSNW=2, POSW=3, POSSW=4, POSS=5,
POSSE=6, POSE=7, POSNE=8 };
The POSDEF value is used when the mouse is not on the rectangle's border; the other values are self-explanatory. The current position on the control is also maintained by the m_iPosition member variable. This function can be overridden in derived classes if needed.
The virtual members function NewDimensions() is used to determine the new control's dimensions when the mouse is moved. The rPt argument transmits the current mouse position. The roRect rectangle argument transmits the current control's dimensions. The arguments riLeft, riTop, riWidth, and riHeight return the new control's dimensions and the rbResize flag is informing back whether the control has to be resized. This function can be overridden in derived classes if needed.
The cursor handles sm_hWE, sm_hNS, sm_hNWSE, sm_hNESW, and sm_hDEF are keeping some preloaded mouse cursor (sm_hDEF is for the default arrow cursor, the other are self-explanatory).
The tracking flag m_bTrack is set to true only during the resizing operation.
All the implemented resizable Windows controls are deriving from the IResizeControl interface. I give as a code example only the resizable button, CResizeButton class, the other (CResizeEdit, CResizeListBox) being similar. CResizeButton inherits from both CButton and IResizeControl:
class CResizeButton : public CButton, public IResizeControl
As explained before, the messages WM_MOUSEMOVE, WM_SETCURSOR, WM_LBUTTONUP, and WM_NCLBUTTONDOWN have to be treated by each resizable control:
// ResizeButton.h : header file
//...
//In class declaration
//{{AFX_MSG(CResizeButton)
afx_msg void OnMouseMove(UINT nFlags, CPoint point);
afx_msg BOOL OnSetCursor(CWnd* pWnd, UINT nHitTest, UINT message);
afx_msg void OnLButtonUp(UINT nFlags, CPoint point);
afx_msg void OnNcLButtonDown(UINT nHitTest, CPoint point);
//}}AFX_MSG
DECLARE_MESSAGE_MAP()
// ResizeButton.cpp : implementation file
//...
BEGIN_MESSAGE_MAP(CResizeButton, CButton)
//{{AFX_MSG_MAP(CResizeButton)
ON_WM_MOUSEMOVE()
ON_WM_SETCURSOR()
ON_WM_LBUTTONUP()
ON_WM_NCLBUTTONDOWN()
//}}AFX_MSG_MAP
END_MESSAGE_MAP()
One peculiarity for the button control is that the static edge style, WS_EX_STATICEDGE, has to be set for the WM_NCLBUTTONDOWN message to work properly (it can be easily set from the Resource Editor). The implementation of the message handlers is given below:
void CResizeButton::OnMouseMove(UINT nFlags, CPoint point)
{
if(true == m_bTrack)
{
CRect oRect;
GetWindowRect(&oRect);
//Transform from screen coordinates to parent client
//coordinates
GetParent()->ScreenToClient(&oRect);
ClientToScreen(&point);
GetParent()->ScreenToClient(&point);
//Determine the new Dimensions
int iLeft, iTop, iWidth, iHeight;
bool bResize;
NewDimensions(point, oRect, iLeft, iTop, iWidth, iHeight,
bResize);
if(true == bResize)
{
SetWindowPos(NULL, iLeft, iTop, iWidth, iHeight,
SWP_NOZORDER);
//Notify the parent about size change
if(true == m_bNotify)
GetParent()->PostMessage(UWM_CONTROLRESIZE,
GetDlgCtrlID());
}
}
CButton::OnMouseMove(nFlags, point);
}
void CResizeButton::OnNcLButtonDown(UINT nHitTest, CPoint point)
{
SetCapture();
m_bTrack = true;
CButton::OnNcLButtonDown(nHitTest, point);
}
void CResizeButton::OnLButtonUp(UINT nFlags, CPoint point)
{
if(true == m_bTrack)
{
ReleaseCapture();
m_bTrack = false;
}
CButton::OnLButtonUp(nFlags, point);
}
BOOL CResizeButton::OnSetCursor(CWnd* pWnd, UINT nHitTest,
UINT message)
{
if(HTBORDER == nHitTest)
{
//Is on Border, find out where
CRect oRect;
GetWindowRect(&oRect);
POINT pt = GetCurrentMessage()->pt;
m_iPosition = FindPosition(pt, oRect);
switch(m_iPosition)
{
case POSN:
case POSS:
::SetCursor(sm_hNS);
break;
case POSE:
case POSW:
::SetCursor(sm_hWE);
break;
case POSNW:
case POSSE:
::SetCursor(sm_hNWSE);
break;
case POSNE:
case POSSW:
::SetCursor(sm_hNESW);
break;
}
}
else
::SetCursor(sm_hDEF);
//Message handled
return TRUE;
}
How to Use
1. Copy the files: ResizeControl.h, ResizeListBox.h, ResizeButton.h, ResizeEdit.h, ResizeControl.cpp, ResizeListBox.cpp, ResizeButton.cpp, and ResizeEdit.cpp into your project.
2. Replace everywhere in these files the line
   #include "TestDlg.h"
   with your application's header.
3. Create the controls from the Resource Editor.
4. Include the header files in your class header, where appropriate:
   #include "ResizeListBox.h"
   #include "ResizeButton.h"
   #include "ResizeEdit.h"
5. Declare the member variable controls in your class declaration, where appropriate:
   CResizeListBox m_oResizeListBox;
   CResizeButton m_oResizeButton;
   CResizeEdit m_oResizeEdit;
6. Subclass the controls in the appropriate initialization function; for example, if you use the controls in a Dialog Box, the appropriate place is the OnInitDialog() function:
   BOOL CTestDlg::OnInitDialog()
   //...
   m_oResizeListBox.SubclassDlgItem(IDC_LIST1, this);
   m_oResizeButton.SubclassDlgItem(IDC_BUTTON1, this);
   m_oResizeEdit.SubclassDlgItem(IDC_EDIT1, this);
Downloads
Download demo project - 20 Kb
Download source - 8 Kb
Windows XP: How to get rid of Windows Messenger
"The fastest and safest way to remove Windows Messenger:
1) Click Start button
2) Click Run
3) Copy and paste this command into the Run dialog:
RunDll32 advpack.dll,LaunchINFSection %windir%\INF\msmsgs.inf,BLC.Remove
4) Reboot and Windows Messenger is removed cleanly"
I got a "cannot unregister" message, but it worked. Phew.
A bookmarklet is a JavaScript-based bookmark that adds to a web browser. I’d like to show you some awesome web browser hacks to aid your web development workflow and how to convert those hacks into time-saving bookmarklets.
1. Activating design mode
2. Applying a background to everything
3. Simulating events
4. Setting cookies
5. Toggling classes
6. Color widget bookmark
7. What other bookmarklets can you think of?
Activating design mode
Design mode (styled as designMode, since it's a JavaScript property) is for anyone who likes to experiment with variations of copy on a live website. For example, copywriters who like to observe how content reads within the flow of the website's design, or, say, designers who want to ensure that text fits comfortably within a certain space at a certain font size.
JavaScript has a mightily simple feature that can render an entire HTML document editable. It works exactly like HTML’s contenteditable="true" name-value attribute (or contentEditable="true" in JavaScript) but for the whole document. If you’d like to see how it works, start by entering the browser’s console using the relevant keyboard shortcut:
• Chrome: Option + ⌘ + J / Shift + CTRL + J
• Firefox: Option + ⌘ + K / Shift + CTRL + K
• Safari: Option + ⌘ + C / Shift + CTRL + C
Next, type document.designMode="on" into the console, hit Return, and then click on any text element. You’ll see that this text element (and all other text elements) are now editable simply by clicking on them. This method of editing text on a live website is much faster than having to open DevTools, then right-clicking and selecting the “Edit Text” option… and much less tiresome.
Showing an edited version of the CSS-Tricks guide landing page using the design mode bookmarklet.
“Guides and Thangs” — my favorite part of CSS-Tricks
While I’m not sure that “design mode” is the most accurate description of the feature, it’s super useful nonetheless and it’s actually been around for a really long time, surprisingly.
And what’s even an even faster way to enable it? A bookmarklet, of course! Create a bookmark using javascript: document.designMode="on";void 0; as the URL.
Showing the bookmarklet installation.
Applying a background to everything
When HTML elements don’t have backgrounds, it can be difficult to visualize their bounds and/or accurately measure the distance between them and other elements. Developers might want to better visualize bounds when dealing with optical imbalance (i.e. when something “looks off” even though it’s not), margin collapse (when certain margins are ignored), various issues with display:/float:/position:, and more.
Applying backgrounds means applying a semi-transparent background to all HTML elements in order to better visualize their bounds and spacings. It’s something many of us commonly do by opening up DevTools then typing a CSS declaration like selector { background: rgb(0 0 0 / 10%); } into the “Styles” box. But again, it’s really tiresome and repetitive — and something we can simplify with a bookmarklet.
Once again, to create a bookmark, we’re going to make a URL. Here’s what we can use for this one:
javascript: document.querySelectorAll("*").forEach(element => element.style.background="rgb(0 0 0 / 10%)");
We’re using a semi-transparent background because the transparency stacks, which ensures that every nested element is distinguishable and the distances between them can be measured.
Showing the CSS-Tricks guides landing page with all backgrounds fill with varying shades of gray.
Apply a background to everything to see what’s happening.
Simulating events
Have you ever had to test a web event that first requires a series of interactions, or certain conditions to be met? It’s super time-consuming to have to test or debug these kinds of functionalities. This event simulation bookmarklet can be used to instantly trigger specific events, making testing a breeze.
Simulating an event means coding a “throwaway” button that triggers a JavaScript event, making it much easier to quickly and repeatedly test the event without having to meet any usual user-facing conditions, like needing to be logged in.
Assuming that you have your JavaScript event listeners set up, create a bookmark for each event that you’d like to trigger/simulate and submit the following URL:
javascript: document.querySelector("SELECTOR").click();
Replace “SELECTOR” with your unique selector, replace “click” with “focus” or “blur” (when necessary), or extend the snippet to make it trigger more complex events, like scroll.
Setting cookies
Cookies are tokens that are stored on a website visitor’s computer by the website that they’re visiting. Cookies contain data that can be read by the website that created them until they’ve exceeded their expiration date or have been deleted. The mere existence of a cookie can determine whether or not a visitor is logged in, whereas the data itself can store user information.
An example of a scenario where you might want to set a cookie using a bookmarklet is when you want to force a logged-in state during website testing. Websites often look very different for users that are logged in; however, logging in and out eventually becomes very tedious, so this bookmarklet can save quite a bit of time.
Manually writing expires= dates for cookies is awkward as heck, but luckily this create-your-own-set-cookie-bookmarklet app can generate a bookmarklet for a specific cookie, if you know its exact name.
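The generated bookmarklets boil down to something like this (a sketch: NAME and VALUE are placeholders for your cookie's actual name and value, and max-age is used here to sidestep hand-written expires= dates):

javascript: document.cookie = 'NAME=VALUE; path=/; max-age=86400'; void 0;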
Toggling classes
You may want to add or remove a class from an HTML element in order to trigger a fresh state or a change in appearance, otherwise known as toggling classes. Class toggling happens behind the scenes of most live websites, but it can also be used during testing to skip having to meet certain user-facing conditions.
Class toggling can be used to trigger changes in appearance (e.g. alternative themes or states) and even animations, but it can be a little fiddly when doing it with developer tools when it’s only for testing reasons (i.e. the website doesn’t actually function that way for users). Similar to the other bookmarklets, use this one to rapidly toggle classes and save yourself time.
Create the following bookmarklet to target all elements that match your chosen “SELECTOR”, which, in turn, toggles the “CLASS.”
javascript: document.querySelectorAll("SELECTOR").forEach(element => element.classList.toggle("CLASS"));
Color widget bookmark
While not technically a “bookmarklet,” this bookmarkable data URI by Scott Jehl opens up an <input type="color"> in a new tab:
data:text/html;charset=utf-8,%3Chtml%3E%3Ctitle%3EColor Picker%3C%2Ftitle%3E%3Cinput type%3D"color"%3E%3C%2Fhtml%3E
Why is that cool? Well, how many times have you needed to grab a color value off a page, only to find yourself cracking open DevTools, clicking through a bunch of elements, and pouring over CSS properties to find the value? Better to run this little guy, click the element, and get a color right away!
What other bookmarklets can you think of?
Are there any overly repetitive web development workflows that require you to use the web browser’s sometimes-awkward developer tools? If so, it’s super easy to create your own time-saving bookmarklets. Just remember to start the URL with javascript:!
And if you’ve made a bookmarklet to simplify your workflow, I’d love to see it! Share them here in the comments and let’s get a nice collection going.
Chroma subsampling
From Wikipedia, the free encyclopedia
Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.[1]
It is used in many video encoding schemes — both analog and digital — and also in JPEG encoding.
Rationale
In full size, this image shows the difference between four subsampling schemes. Note how similar the color images appear. The lower row shows the resolution of the color information.
Because of storage and transmission limitations, there is frequently a desire to reduce (or compress) the signal. Since the human visual system is much more sensitive to variations in brightness than color, a video system can be optimized by devoting more bandwidth to the luma component (usually denoted Y'), than to the color difference components Cb and Cr. In compressed images, for example, the 4:2:2 Y'CbCr scheme requires two-thirds the bandwidth of (4:4:4) R'G'B'. This reduction results in almost no visual difference as perceived by the viewer for photographs, although images produced digitally containing harsh lines and saturated colors will have significant artifacts.[citation needed]
How subsampling works
Because the human visual system is less sensitive to the position and motion of color than luminance,[2] bandwidth can be optimized by storing more luminance detail than color detail. At normal viewing distances, there is no perceptible loss incurred by sampling the color detail at a lower rate[vague]. In video systems, this is achieved through the use of color difference components. The signal is divided into a luma (Y') component and two color difference components (chroma).
In human vision there are two chromatic channels as well as a luminance channel, and in color science there are two chromatic dimensions as well as a luminance dimension. In neither the vision nor the science is there complete independence of the chromatic and the luminance. Luminance information can be gleaned from the chromatic information; e.g. the chromatic value implies a certain minimum for the luminance value. But there can be no question of color influencing luminance in the absence of a post-processing of the separate signals. In video, the luma and chroma components are formed as a weighted sum of gamma-corrected (tristimulus) R'G'B' components instead of linear (tristimulus) RGB components. As a result, luma must be distinguished from luminance. That there is some "bleeding" of luminance and color information between the luma and chroma components in video, the error being greatest for highly saturated colors and noticeable in between the magenta and green bars of a color bars test pattern (that has chroma subsampling applied), should not be attributed to this engineering approximation being used. Indeed similar bleeding can occur also with gamma = 1, whence the reversing of the order of operations between gamma correction and forming the weighted sum can make no difference. The chroma can influence the luma specifically at the pixels where the subsampling put no chroma. Interpolation may then put chroma values there which are incompatible with the luma value there, and further post-processing of that Y'CbCr into R'G'B' for that pixel is what ultimately produces false luminance upon display.
Original without color subsampling. 200% zoom.
Image after color subsampling (compressed with Sony Vegas DV codec, box filtering applied.)
Sampling systems and ratios
The subsampling scheme is commonly expressed as a three part ratio J:a:b (e.g. 4:2:2) or four parts if alpha channel is present (e.g. 4:2:2:4), that describe the number of luminance and chrominance samples in a conceptual region that is J pixels wide, and 2 pixels high. The parts are (in their respective order):
• J: horizontal sampling reference (width of the conceptual region). Usually, 4.
• a: number of chrominance samples (Cr, Cb) in the first row of J pixels.
• b: number of changes of chrominance samples (Cr, Cb) between first and second row of J pixels.
• Alpha: horizontal factor (relative to first digit). May be omitted if alpha component is not present, and is equal to J when present.
This notation is not valid for all combinations and has exceptions, e.g. 4:1:0 (where the height of the region is not 2 pixels but 4 pixels, so if 8 bits/component are used the media would be 9 bits/pixel) and 4:2:1.
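As a worked reading of the notation, take 4:2:2: the conceptual region is J = 4 pixels wide and 2 pixels high, with a = 2 chroma samples in the first row and b = 2 changes in the second, so chroma resolution is halved horizontally and kept at full vertical resolution.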
An explanatory image of different chroma subsampling schemes can be seen at the following link: http://lea.hamradio.si/~s51kq/subsample.gif (source: "Basics of Video": http://lea.hamradio.si/~s51kq/V-BAS.HTM) or in details in Chrominance Subsampling in Digital Images, by Douglas Kerr.
Scheme | J | a | b | Chroma resolution
4:1:1 | 4 | 1 | 1 | ¼ horizontal resolution, full vertical resolution
4:2:0 | 4 | 2 | 0 | ½ horizontal resolution, ½ vertical resolution
4:2:2 | 4 | 2 | 2 | ½ horizontal resolution, full vertical resolution
4:4:4 | 4 | 4 | 4 | full horizontal resolution, full vertical resolution
4:4:0 | 4 | 4 | 0 | full horizontal resolution, ½ vertical resolution
The mapping examples given are only theoretical and for illustration. Also note that the diagram does not indicate any chroma filtering, which should be applied to avoid aliasing.
To calculate required bandwidth factor relative to 4:4:4 (or 4:4:4:4), one needs to sum all the factors and divide the result by 12 (or 16, if alpha is present).
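For example, 4:2:0 works out to (4 + 2 + 0) / 12 = 1/2 the bandwidth of 4:4:4, while 4:2:2 gives (4 + 2 + 2) / 12 = 2/3, matching the one-third saving mentioned earlier.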
Types of subsampling
4:4:4 Y'CbCr
Each of the three Y'CbCr components have the same sample rate. This scheme is sometimes used in high-end film scanners and cinematic postproduction.
4:4:4 R'G'B' (no subsampling)
Note that "4:4:4" may instead be referring to R'G'B' color space, which implicitly does not have any chroma subsampling at all. Formats such as HDCAM SR can record 4:4:4 R'G'B' over dual-link HD-SDI.
4:2:2
The two chroma components are sampled at half the sample rate of luma, so the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.[citation needed]
Many high-end digital video formats and interfaces use this scheme.
4:2:1
This sampling mode is not expressible in J:a:b notation; '4:2:1' is a hangover from a previous notational scheme, and very few software or hardware codecs use it. Cb horizontal resolution is half that of Cr (and a quarter of the horizontal resolution of Y). This exploits the fact that the human eye has less spatial sensitivity to blue/yellow than to red/green. NTSC is similar, in using lower resolution for blue/yellow than for red/green, which in turn has less resolution than luma.
4:1:1
In 4:1:1 chroma subsampling, the horizontal color resolution is quartered, and the bandwidth is halved compared to no chroma subsampling. Initially, 4:1:1 chroma subsampling of the DV format was not considered to be broadcast quality and was only acceptable for low-end and consumer applications.[3][4] Currently, DV-based formats (some of which use 4:1:1 chroma subsampling) are used professionally in electronic news gathering and in playout servers. DV has also been sporadically used in feature films and in digital cinematography.
In the NTSC system, if the luma is sampled at 13.5 MHz, then the Cr and Cb signals will each be sampled at 3.375 MHz, which corresponds to a maximum Nyquist bandwidth of 1.6875 MHz, whereas a traditional "high-end broadcast analog NTSC encoder" would have a Nyquist bandwidth of 1.5 MHz and 0.5 MHz for the I/Q channels. However, in most equipment, especially cheap TV sets and VHS/Betamax VCRs, the chroma channels have only 0.5 MHz bandwidth for both Cr and Cb (or equivalently for I/Q). Thus the DV system actually provides superior color bandwidth compared to the best composite analog specifications for NTSC, despite having only 1/4 of the chroma bandwidth of a "full" digital signal.
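The arithmetic above is easy to check. A minimal sketch, with the constants taken straight from the paragraph (13.5 MHz is the Rec. 601 luma rate):

```python
luma_rate_mhz = 13.5                  # Rec. 601 luma sampling rate for NTSC-rate video
chroma_rate_mhz = luma_rate_mhz / 4   # 4:1:1: each chroma channel at a quarter the rate
nyquist_mhz = chroma_rate_mhz / 2     # representable bandwidth is half the sample rate

assert chroma_rate_mhz == 3.375
assert nyquist_mhz == 1.6875          # well above the ~0.5 MHz of consumer analog gear
```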
Formats that use 4:1:1 chroma subsampling include NTSC DV.
4:2:0
In 4:2:0, the horizontal sampling is doubled compared to 4:1:1, but as the Cb and Cr channels are only sampled on each alternate line in this scheme, the vertical resolution is halved; the data rate is thus the same. This fits reasonably well with the PAL color encoding system, since PAL has only half the vertical chrominance resolution of NTSC. It would also fit extremely well with the SECAM color encoding system, since like that format, 4:2:0 only stores and transmits one color channel per line (the other channel being recovered from the previous line). However, little equipment has actually been produced that outputs a SECAM analogue video signal. In general, SECAM territories either have to use a PAL-capable display or a transcoder to convert the PAL signal to SECAM for display.
Different variants of 4:2:0 chroma configuration are found in systems such as MPEG-2, JPEG/JFIF, H.261, MPEG-1 and DV.
Cb and Cr are each subsampled at a factor of 2 both horizontally and vertically.
There are three variants of 4:2:0 schemes, having different horizontal and vertical siting;[7] a minimal sketch of the underlying 2×2 chroma averaging follows the list below.
• In MPEG-2, Cb and Cr are co-sited horizontally. Cb and Cr are sited between pixels in the vertical direction (sited interstitially).
• In JPEG/JFIF, H.261, and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.
• In 4:2:0 DV, Cb and Cr are co-sited in the horizontal direction. In the vertical direction, they are co-sited on alternating lines.
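The siting details matter for codec interoperability, but the core data reduction is the same in all three variants: each 2×2 block of chroma values is reduced to one. A minimal NumPy sketch of that reduction (plain box averaging, ignoring siting and proper filtering; the function name is my own):

```python
import numpy as np

def subsample_420(chroma_plane):
    """Reduce a full-resolution chroma plane to 4:2:0 by 2x2 box averaging.

    Real encoders apply proper anti-alias filters and honor the siting
    conventions (MPEG-2, JPEG/JFIF or DV style); this only shows the reduction.
    """
    h, w = chroma_plane.shape
    assert h % 2 == 0 and w % 2 == 0, "sketch assumes even dimensions"
    blocks = chroma_plane.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

cb = np.arange(16, dtype=float).reshape(4, 4)
assert subsample_420(cb).shape == (2, 2)   # a quarter as many chroma samples
```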
Most digital video formats corresponding to PAL use 4:2:0 chroma subsampling, with the exception of DVCPRO25, which uses 4:1:1 chroma subsampling. Both the 4:1:1 and 4:2:0 schemes halve the bandwidth compared to no chroma subsampling.
With interlaced material, 4:2:0 chroma subsampling can result in motion artifacts if it is implemented the same way as for progressive material: the luma samples are derived from separate time intervals (the two fields) while the chroma samples would be derived from both time intervals, and it is this difference that can result in motion artifacts. The MPEG-2 standard allows for an alternate interlaced sampling scheme where 4:2:0 is applied to each field (not both fields at once). This solves the problem of motion artifacts, but it reduces the vertical chroma resolution by half and can introduce comb-like artifacts in the image.
[Figure: original image, single field; the moving text has some motion blur applied to it.]
[Figure: 4:2:0 progressive sampling applied to moving interlaced material, single field; note that the chroma leads and trails the moving text.]
[Figure: 4:2:0 interlaced sampling applied to moving interlaced material, single field.]
In the 4:2:0 interlaced scheme, however, the vertical resolution of the chroma is roughly halved, since the chroma samples effectively describe an area 2 samples wide by 4 samples tall instead of 2×2. As well, the spatial displacement between both fields can result in the appearance of comb-like chroma artifacts.
[Figure: original still image.]
[Figure: 4:2:0 progressive sampling applied to a still image; both fields are shown.]
[Figure: 4:2:0 interlaced sampling applied to a still image; both fields are shown.]
If the interlaced material is to be de-interlaced, the comb-like chroma artifacts (from 4:2:0 interlaced sampling) can be removed by blurring the chroma vertically.[8]
4:1:0
This ratio is possible, and some codecs support it, but it is not widely used. It uses one-fourth of the horizontal and half of the vertical color resolution, with only one-eighth of the bandwidth of the maximum color resolution. Uncompressed video in this format with 8-bit quantization uses 10 bytes for every 4×2-pixel macropixel: eight one-byte luma samples plus one Cb and one Cr sample. It has chrominance bandwidth equivalent to a PAL I signal decoded with a delay-line decoder, and is still much superior to NTSC.
• Some video codecs may operate at 4:1:0.5 or 4:1:0.25 as an option, so as to allow quality similar to VHS.
3:1:1
Used by Sony in their HDCAM high-definition recorders (not HDCAM SR). In the horizontal dimension, luma is sampled at three quarters of the full HD sampling rate: 1440 samples per row instead of 1920. Chroma is sampled at 480 samples per row, a third of the luma sampling rate.
In the vertical dimension, both luma and chroma are sampled at the full HD sampling rate (1080 samples vertically).
Out-of-gamut colors
One of the artifacts that can occur with chroma subsampling is that out-of-gamut colors can occur upon chroma reconstruction. Suppose the image consisted of alternating 1-pixel red and black lines and the subsampling omitted the chroma for the black pixels. Chroma from the red pixels will be reconstructed onto the black pixels, causing the new pixels to have positive red and negative green and blue values. As displays cannot output negative light (negative light does not exist), these negative values will effectively be clipped and the resulting luma value will be too high.[9] Similar artifacts arise in the less artificial example of gradation near a fairly sharp red/black boundary.
Filtering during subsampling can also cause colors to go out of gamut.
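This failure mode is easy to reproduce numerically. In the sketch below the Rec. 601 conversion constants are standard, and the red/black averaging scenario is the one from the text; the helper functions themselves are written just for this illustration:

```python
def to_ycbcr(rgb):
    """Rec. 601 R'G'B' (0..1) to Y'CbCr, with Cb/Cr centered on 0."""
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, (b - y) / 1.772, (r - y) / 1.402

def to_rgb(ycbcr):
    y, cb, cr = ycbcr
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

red_y, red_cb, red_cr = to_ycbcr((1.0, 0.0, 0.0))
black_y, black_cb, black_cr = to_ycbcr((0.0, 0.0, 0.0))

# Subsampling kept only one chroma pair; reconstruction averages it across the
# red/black boundary, landing on the black pixel whose luma is still 0.
smeared = (black_y, (red_cb + black_cb) / 2, (red_cr + black_cr) / 2)
print(to_rgb(smeared))  # ~(0.35, -0.15, -0.15): positive red, negative green/blue
```

Clipping the negative green and blue components to zero leaves the positive red component intact, so the displayed pixel is brighter than its luma value of 0 implies, exactly the "too high" luminance described above.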
Terminology
The term Y'UV refers to an analog encoding scheme while Y'CbCr refers to a digital encoding scheme.[citation needed] One difference between the two is that the scale factors on the chroma components (U, V, Cb, and Cr) are different. However, the term YUV is often used erroneously to refer to Y'CbCr encoding. Hence, expressions like "4:2:2 YUV" always refer to 4:2:2 Y'CbCr since there simply is no such thing as 4:x:x in analog encoding (such as YUV).
In a similar vein, the term luminance and the symbol Y are often used erroneously to refer to luma, which is denoted with the symbol Y'. Note that the luma (Y') of video engineering deviates from the luminance (Y) of color science (as defined by CIE). Luma is formed as the weighted sum of gamma-corrected (tristimulus) RGB components. Luminance is formed as a weighted sum of linear (tristimulus) RGB components.
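The difference is easy to demonstrate. The sketch below applies the same Rec. 601 weights to gamma-corrected and to linear components; using identical weights for both, and a simple power-law gamma of 2.2, are simplifying assumptions made only for illustration:

```python
WEIGHTS = (0.299, 0.587, 0.114)  # Rec. 601 weights
GAMMA = 2.2                      # simplified power-law transfer function

def luma(rgb_linear):
    """Y': weighted sum of gamma-corrected components (video engineering)."""
    return sum(w * c ** (1 / GAMMA) for w, c in zip(WEIGHTS, rgb_linear))

def luminance(rgb_linear):
    """Y: weighted sum of linear components (color science)."""
    return sum(w * c for w, c in zip(WEIGHTS, rgb_linear))

print(luma((0.5, 0.0, 0.0)))       # ~0.218
print(luminance((0.5, 0.0, 0.0)))  # ~0.150 -- the two notions really do differ
```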
In practice, the CIE symbol Y is often incorrectly used to denote luma. In 1993, SMPTE adopted Engineering Guideline EG 28, clarifying the two terms. Note that the prime symbol ' is used to indicate gamma correction.
Similarly, the chroma/chrominance of video engineering differs from the chrominance of color science. The chroma/chrominance of video engineering is formed from weighted tristimulus components, not linear components. In video engineering practice, the terms chroma, chrominance, and saturation are often used interchangeably to refer to chrominance.
History
Chroma subsampling was developed in the 1950s by Alda Bedford for the development of color television by RCA, which developed into the NTSC standard; luma-chroma separation had been developed earlier, in 1938, by Georges Valensi.
Through studies, Bedford showed that the human eye has high resolution only for black and white, somewhat less for "mid-range" colors like yellows and greens, and much less for colors at the ends of the spectrum, reds and blues. This knowledge allowed RCA to develop a system in which most of the blue signal is discarded after it comes from the camera, keeping most of the green and only some of the red; this is chroma subsampling in the YIQ color space, and is roughly analogous to 4:2:1 subsampling, in that it has decreasing resolution for luma, yellow/green, and red/blue respectively.
Effectiveness
While subsampling can easily reduce the size of an uncompressed image by 50% with minimal loss of quality, the final effect on the size of a compressed image is considerably less.[citation needed] This is because image compression algorithms also remove redundant chroma information. In fact, by applying something as rudimentary as chroma subsampling prior to compression, information is removed from the image that could be used by the compression algorithm to produce a higher quality result with no increase in size. For example, with wavelet compression methods, better results are obtained by dropping the highest frequency chroma layer inside the compression algorithm than by applying chroma subsampling prior to compression. This is because wavelet compression operates by repeatedly using wavelets as high and low pass filters to separate frequency bands in an image, and the wavelets do a better job than chroma subsampling does.[citation needed]
Compatibility issues
The details of chroma subsampling implementation cause considerable confusion. Is the upper-leftmost chroma value stored, or the rightmost, or the average of all the chroma values? This must be exactly specified in standards and followed by all implementors. Incorrect implementations cause the chroma of an image to be offset from the luma. Repeated compression/decompression can cause the chroma to "travel" in one direction. Different standards may use different versions of, for example, "4:2:0" with respect to how the chroma value is determined, making one version of "4:2:0" incompatible with another version of "4:2:0".
Proper upsampling of chroma can require knowing whether the source is progressive or interlaced, information which is often not available to the upsampler.
Chroma subsampling causes problems for filmmakers trying to do keying with blue or green screens. The chroma interpolation along edges produces noticeable haloing artifacts.
References
1. S. Winkler, C. J. van den Branden Lambrecht, and M. Kunt (2001). "Vision and Video: Models and Applications". In Christian J. van den Branden Lambrecht (ed.), Vision Models and Applications to Image and Video Processing. Springer. p. 209. ISBN 978-0-7923-7422-0.
2. Livingstone, Margaret (2002). "The First Stages of Processing Color and Luminance: Where and What". Vision and Art: The Biology of Seeing. New York: Harry N. Abrams. pp. 46–67. ISBN 0-8109-0406-3.
3. Jennings, Roger; Bertel Schmitt (1997). "DV vs. Betacam SP". DV Central. Retrieved 2008-08-29.
4. Wilt, Adam J. (2006). "DV, DVCAM & DVCPRO Formats". adamwilt.com. Retrieved 2008-08-29.
5. Clint DeBoer (2008-04-16). "HDMI Enhanced Black Levels, xvYCC and RGB". Audioholics. Retrieved 2013-06-02.
6. "Digital Color Coding" (PDF). Telairity. Retrieved 2013-06-02.
7. Poynton, Charles (2008). "Chroma Subsampling Notation". Retrieved 2008-10-01.
8. Munsil, Don; Stacey Spears (2003). "DVD Player Benchmark - Chroma Upsampling Error". Secrets of Home Theater & High Fidelity. Retrieved 2008-08-29.
9. Chan, Glenn. "Towards Better Chroma Subsampling". SMPTE Journal. Retrieved 2008-08-29.
• Poynton, Charles. "YUV and luminance considered harmful: A plea for precise terminology in video".
• Poynton, Charles. Digital Video and HDTV: Algorithms and Interfaces. U.S.: Morgan Kaufmann Publishers, 2003.
• Kerr, Douglas A. "Chrominance Subsampling in Digital Images".
# About Identity Proofing
Digital onboarding poses challenges that face-to-face onboarding does not. Companies need to verify remotely who their customers are and to detect potential risks, reducing the chance of being used for illegal activities.
Signicat's own Identity Proofing solution, Assure, drastically simplifies the onboarding process in industries like banking and finance. The Signicat Assure API integrates with multiple identification services to verify the identity of your customers. These can be both eIDV and eID services in combination (see the use case example below for more details). When choosing which identification services to use for identity assurance, you must consider multiple factors, such as the required level of assurance, the user experience and the nationality of end-users. This is typically dictated by the geographic markets that the company operates in and is often subject to local regulatory requirements.
# Use case example: Onboarding a new customer
This page is useful if you need more basic information about the usage context for identity proofing.
Picture a potential customer, let’s call her Claire, who wants to open a new online bank account. The bank might want to take several different steps to ensure that it is indeed Claire who is opening this account. The following is just one of many possible workflows.
# End-user perspective
1. Claire enters the bank’s online website and selects “Become a customer”.
2. The bank asks Claire to provide some basic information about herself, such as her full name, date of birth and address.
3. The bank asks Claire to identify herself with an ID method, e.g. the Dutch iDIN.
4. The bank asks Claire to identify herself using her passport and take a selfie.
5. The identity proofing is finished and Claire can continue as a new customer of the bank.
[Figure: process options.]
# Bank's perspective
1. A new user, Claire, wants to become a customer of the bank.
2. The bank initially collects information from Claire in a form on the website, including her full name, date of birth and address.
3. Based on the type of customer relationship, the risk profile and regulatory requirements, the bank decides on the necessary steps in the customer due diligence process.
4. The bank calls the Assure API to create a dossier, which is a container for all information needed to perform the identity proofing.
5. The bank submits relevant information about Claire as user data to Claire's dossier in the Assure API.
6. Since Claire is a citizen of the Netherlands, the bank uses iDIN as the eID scheme; this constitutes a strong identity proofing method in the Netherlands. The bank creates an iDIN process within the Assure API, which stores the results of the iDIN transaction in Claire's dossier.
7. Because of local regulations, iDIN in itself is not sufficient to meet the requirements. The bank therefore asks Claire to use her passport to further prove her identity. The bank creates a process using one of the available eIDV services, which returns the information from the identity document and the results of the face match and liveness detection to Claire's dossier.
8. The bank now has sufficient information about Claire and must corroborate the captured data. The bank uses the Assure API's corroboration functionality, e.g. matching, to compare the information stored in the user data with the data from the iDIN and eIDV processes; a rough sketch of these API calls follows this list. The bank parses the responses and, based on internal business logic, the bank's risk engine decides that it can continue the business process and make Claire a customer of the bank.
9. The necessary identity information is used to establish an identity in the bank’s identity store and the relevant proof is extracted from the Assure API. The bank then sends a delete request to permanently delete all of Claire's PII data from Signicat's platform.
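A rough sketch of this flow as HTTP calls is shown below. Note that the base URL, endpoint paths, payload fields and response shapes here are illustrative assumptions, not the documented Signicat Assure API surface; consult the API reference for the real contract.

```python
import requests

BASE = "https://api.signicat.example/assure"          # placeholder host, not the real one
HEADERS = {"Authorization": "Bearer <access-token>"}  # token acquisition omitted

# 4. Create a dossier: the container for everything gathered about Claire.
dossier = requests.post(f"{BASE}/dossiers", headers=HEADERS).json()
dossier_id = dossier["id"]

# 5. Submit the user data collected by the bank's web form.
requests.post(
    f"{BASE}/dossiers/{dossier_id}/user-data",
    headers=HEADERS,
    json={"fullName": "Claire ...", "dateOfBirth": "1990-01-01", "address": "..."},
)

# 6.-7. Start an eID (iDIN) process and an eIDV (passport + selfie) process.
for process_type in ("idin", "eidv"):
    requests.post(
        f"{BASE}/dossiers/{dossier_id}/processes",
        headers=HEADERS,
        json={"type": process_type},
    )

# 8. Ask the API to corroborate (match) the user data against the process results.
match = requests.post(f"{BASE}/dossiers/{dossier_id}/matching", headers=HEADERS).json()

# 9. After extracting the proof, permanently delete Claire's PII from the platform.
requests.delete(f"{BASE}/dossiers/{dossier_id}", headers=HEADERS)
```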
Last updated: 08/09/2023 10:49 UTC
Automated Testing with MSTest V2
MSTest V2 is a cross platform, open source testing framework. This course will teach you how to create automated tests to check your code is working as expected, create data driven tests, control test execution, and also how to extend the framework.
Course info: Rating (29), Level: Intermediate, Updated: May 18, 2018, Duration: 2h 2m
Table of contents
Course Overview
Getting Started
Using Asserts to Pass or Fail Tests
Controlling and Customizing Test Execution
Creating Data Driven Tests
Reducing Code Duplication and Increasing Test Readability
Description
MSTest V2 is the evolution of the original "in-box" testing framework from Microsoft and is now a cross platform, open source testing framework. In this course, Automated Testing with MSTest V2, you'll learn how to use the framework to create automated test to help check your code is working correctly. First, you'll discover how to use the various assert features such as checking numeric, string, and collection values. Next, you'll explore how to take control of the test execution lifecycle and also execute additional setup or cleanup code. Finally, you'll learn how to execute a test multiple times with data driven tests and also how to customize and extend the MSTest framework to create more maintainable and readable test code. By the end of this course, you'll know how to create readable, maintainable, and customized automated tests with MSTest V2.
About the author
With over 15 years experience, Jason Roberts is a Microsoft .NET MVP, freelance developer, and author.
More from the author
• Working with Files and Streams in C# (Intermediate, 3h 13m, 12 Oct 2018)
• Error Handling in C# with Exceptions (Intermediate, 1h 37m, 6 Aug 2018)
More courses by Jason Roberts
Section Introduction Transcripts
Course Overview
Hi everyone. My name is Jason Roberts. Welcome to my course, Automated Testing With MSTest V2. I'm a Microsoft MVP, freelance developer, and author. In this course, we're going to learn how to create automated tests with the MSTest V2 testing framework. Some of the major topics that we'll cover in this course include how to check the results of production code by using MSTest asserts, how to take control of and also customize how your tests are executed, and reduce test code duplication by creating data-driven tests, and using the customization and extensibility features of the MSTest framework. By the end of this course, you'll understand how to create automated tests to ensure your code is behaving as it should and help to reduce production errors. Before beginning the course, it's recommended that you have at least a basic understanding of C#. I hope you'll join me on this journey with the Automated Testing With MSTest V2 course at Pluralsight.
Getting Started
Hi everyone. My name is Jason Roberts from Pluralsight. Welcome to this course, Automated Testing with MSTest V2. MSTest has been around for many, many years and indeed was the first testing framework I ever used back in the day. MSTest V2 is the next iteration of the testing framework and is now open source and has support for testing .NET, .NET Core, desktop apps, and UWP store apps. The V2 framework is also delivered as a NuGet package rather than being tightly coupled to Visual Studio releases. In this module, we're going to learn how to get started with MSTest V2. We're going to kick off this module by looking at some of the benefits of automated tests, and we'll also get a high-level overview of the MSTest V2 testing framework. We'll learn about the high-level components of the testing framework, the essential NuGet packages that comprise the framework, and we'll also look at some of the framework's high-level features. We'll then head over to Visual Studio and create a new MSTest test project, and we'll write an initial test just so we can demonstrate how things work. We'll learn how we can run the tests within Visual Studio itself and also how we can execute tests at the command line.
Using Asserts to Pass or Fail Tests
Hi. Welcome back. In the previous module, we got started with MSTest, and we created our first test. In this module, we're going to be delving in on the different ways that we can assert that our production code is doing the correct thing. So we're going to kick off this module with a brief overview of the asserts that we'll be looking at, and then we're going to head over to Visual Studio. We'll learn how we can assert on Boolean and null results, we'll learn how we can assert that the correct numeric results are being returned from our production code, and we'll also learn how we can deal with double values by providing an assert delta. We'll learn about simple string equality asserts and also a number of specialized string assert methods. We'll learn some of the different ways we can assert that collections contain the correct items and how to check that the production code is throwing the correct exceptions at the correct time. Finally we'll learn how we can assert that object are of the correct type and also how to assert on object references. So let's kick off this module with a brief overview of the types of asserts we can make with MSTest.
Controlling and Customizing Test Execution
Hi. Welcome back. In the previous module, we learned how to use asserts to pass and fail tests. In this module, we're going to learn how we can control and customize the test execution lifecycle. So in this module, we're going to start off by learning how we can organize tests into arbitrary categories. We'll learn that we can do this at the individual test method level and also at the class level. And we'll learn how we can also execute only specific categories of tests. We'll learn that we can temporarily stop a test from running and also how we can output additional message during the text execution lifecycle. We'll also learn how we can run additional code during the test execution lifecycle, so we'll learn how we can execute code before and after test executes, before the first test in a test class executes, and also after the last test in a test class executes. And we'll also learn how to execute additional code before any test in a test assembly executes and also after the last test in a test assembly executes. Finally we're going to learn how we can share objects between tests during test execution. So without further ado, let's head over to Visual Studio, and we'll learn how we can categorize tests.
Creating Data Driven Tests
Hi. Welcome back. In the previous module, we learned how we can customize and control the test execution lifecycle. In this module, we're going to learn how we can execute tests multiple times with sets of test data. So in this module, we're going to start off by getting a high-level overview of why we might want to create data-driven tests, and then we'll jump into Visual Studio and we'll learn how we can create a data-driven test and specify test data at the TestMethod level. When we take this approach, however, we're not able to share the set of test data across multiple tests, so we'll go and see how we can actually do this. We'll learn how we can share test data across multiple tests by implementing a getter-only static property that returns the data or alternatively implementing a static method that returns the data. And we'll also learn that we can centralize this data in a separate class to make it available across multiple test classes. We'll then learn that in addition to hardcoding test data within our test code, we can also get this test data from an external source. So we're going to learn how we can get this data from a CSV file, but this data could also come from a database, Excel file, REST service, or some other data source. So let's kick off this module with a high-level overview of why we might want to create data-driven tests.
Reducing Code Duplication and Increasing Test Readability
Hi. Welcome back. In the previous module, we learned how we can start to reduce code duplication by making use of data-driven tests. In this module, we're going to continue this process and learn how we can further reduce code duplication and also increase test readability. So in this module, we're going to start off with an initial refactoring of the test code and start to reduce some of the duplication. We'll then look at some of the tests that we've already written, and we'll see that some of the asserts aren't as readable as they could be. To solve this, we're going to see how we can create custom asserts, so we're going to create a custom assert to check a numeric value is within a given range. We'll create an assert to check a string is not null or whitespace. We'll learn how we can create custom asserts that work on collections, such as checking all items in a collection of strings are not null or whitespace, asserting that all items within a collection satisfy a specified predicate, asserting that at least one item in the collection satisfies a predicate, and also a more generic assert to run an action on each item in a collection so we can make multiple asserts against each item. We'll then take a look at the test categories that we've defined, and we'll learn how we can create custom reusable test category attributes and how this improve readability and also remove duplicated magic strings. Finally we'll learn that if we're creating data-driven tests, we can actually go and create custom test data source attributes. Once again, this can increase readability and also remove any hardcoded values from the data access code. So let's head over to Visual Studio now, and we'll perform some initial test code refactoring.
|
__label__pos
| 0.522697 |
/*
 * Copyright (C) 1999 Antti Koivisto ([email protected])
 * Copyright (C) 2004, 2005, 2006, 2007, 2008, 2014 Apple Inc. All rights reserved.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Library General Public
 * License as published by the Free Software Foundation; either
 * version 2 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Library General Public License for more details.
 *
 * You should have received a copy of the GNU Library General Public License
 * along with this library; see the file COPYING.LIB. If not, write to
 * the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
 * Boston, MA 02110-1301, USA.
 *
 */

#include "config.h"
#include "FillLayer.h"

// Note: the two include targets below and the angle-bracket template arguments
// throughout this file were stripped by HTML extraction; they are restored here
// from the surrounding usage (arePointingToEqualData, TextStream, etc.).
#include <wtf/PointerComparison.h>
#include <wtf/text/TextStream.h>

namespace WebCore {

struct SameSizeAsFillLayer {
    FillLayer* next;
    RefPtr<StyleImage> image;
    Length x;
    Length y;
    LengthSize sizeLength;
    unsigned bitfields : 32;
    unsigned bitfields2 : 11;
};

COMPILE_ASSERT(sizeof(FillLayer) == sizeof(SameSizeAsFillLayer), FillLayer_should_stay_small);

FillLayer::FillLayer(FillLayerType type)
    : m_image(FillLayer::initialFillImage(type))
    , m_xPosition(FillLayer::initialFillXPosition(type))
    , m_yPosition(FillLayer::initialFillYPosition(type))
    , m_attachment(static_cast<unsigned>(FillLayer::initialFillAttachment(type)))
    , m_clip(static_cast<unsigned>(FillLayer::initialFillClip(type)))
    , m_origin(static_cast<unsigned>(FillLayer::initialFillOrigin(type)))
    , m_repeatX(static_cast<unsigned>(FillLayer::initialFillRepeatX(type)))
    , m_repeatY(static_cast<unsigned>(FillLayer::initialFillRepeatY(type)))
    , m_composite(FillLayer::initialFillComposite(type))
    , m_sizeType(static_cast<unsigned>(FillSizeType::None))
    , m_blendMode(static_cast<unsigned>(FillLayer::initialFillBlendMode(type)))
    , m_maskSourceType(static_cast<unsigned>(FillLayer::initialFillMaskSourceType(type)))
    , m_imageSet(false)
    , m_attachmentSet(false)
    , m_clipSet(false)
    , m_originSet(false)
    , m_repeatXSet(false)
    , m_repeatYSet(false)
    , m_xPosSet(false)
    , m_yPosSet(false)
    , m_backgroundXOriginSet(false)
    , m_backgroundYOriginSet(false)
    , m_backgroundXOrigin(static_cast<unsigned>(Edge::Left))
    , m_backgroundYOrigin(static_cast<unsigned>(Edge::Top))
    , m_compositeSet(type == FillLayerType::Mask)
    , m_blendModeSet(false)
    , m_maskSourceTypeSet(false)
    , m_type(static_cast<unsigned>(type))
{
}

FillLayer::FillLayer(const FillLayer& o)
    : m_next(o.m_next ? std::make_unique<FillLayer>(*o.m_next) : nullptr)
    , m_image(o.m_image)
    , m_xPosition(o.m_xPosition)
    , m_yPosition(o.m_yPosition)
    , m_sizeLength(o.m_sizeLength)
    , m_attachment(o.m_attachment)
    , m_clip(o.m_clip)
    , m_origin(o.m_origin)
    , m_repeatX(o.m_repeatX)
    , m_repeatY(o.m_repeatY)
    , m_composite(o.m_composite)
    , m_sizeType(o.m_sizeType)
    , m_blendMode(o.m_blendMode)
    , m_maskSourceType(o.m_maskSourceType)
    , m_imageSet(o.m_imageSet)
    , m_attachmentSet(o.m_attachmentSet)
    , m_clipSet(o.m_clipSet)
    , m_originSet(o.m_originSet)
    , m_repeatXSet(o.m_repeatXSet)
    , m_repeatYSet(o.m_repeatYSet)
    , m_xPosSet(o.m_xPosSet)
    , m_yPosSet(o.m_yPosSet)
    , m_backgroundXOriginSet(o.m_backgroundXOriginSet)
    , m_backgroundYOriginSet(o.m_backgroundYOriginSet)
    , m_backgroundXOrigin(o.m_backgroundXOrigin)
    , m_backgroundYOrigin(o.m_backgroundYOrigin)
    , m_compositeSet(o.m_compositeSet)
    , m_blendModeSet(o.m_blendModeSet)
    , m_maskSourceTypeSet(o.m_maskSourceTypeSet)
    , m_type(o.m_type)
{
}

FillLayer::~FillLayer()
{
    // Delete the layers in a loop rather than allowing recursive calls to the destructors.
    for (std::unique_ptr<FillLayer> next = WTFMove(m_next); next; next = WTFMove(next->m_next)) { }
}

FillLayer& FillLayer::operator=(const FillLayer& o)
{
    m_next = o.m_next ? std::make_unique<FillLayer>(*o.m_next) : nullptr;

    m_image = o.m_image;
    m_xPosition = o.m_xPosition;
    m_yPosition = o.m_yPosition;
    m_backgroundXOrigin = o.m_backgroundXOrigin;
    m_backgroundYOrigin = o.m_backgroundYOrigin;
    m_backgroundXOriginSet = o.m_backgroundXOriginSet;
    m_backgroundYOriginSet = o.m_backgroundYOriginSet;
    m_sizeLength = o.m_sizeLength;
    m_attachment = o.m_attachment;
    m_clip = o.m_clip;
    m_composite = o.m_composite;
    m_blendMode = o.m_blendMode;
    m_origin = o.m_origin;
    m_repeatX = o.m_repeatX;
    m_repeatY = o.m_repeatY;
    m_sizeType = o.m_sizeType;
    m_maskSourceType = o.m_maskSourceType;

    m_imageSet = o.m_imageSet;
    m_attachmentSet = o.m_attachmentSet;
    m_clipSet = o.m_clipSet;
    m_compositeSet = o.m_compositeSet;
    m_blendModeSet = o.m_blendModeSet;
    m_originSet = o.m_originSet;
    m_repeatXSet = o.m_repeatXSet;
    m_repeatYSet = o.m_repeatYSet;
    m_xPosSet = o.m_xPosSet;
    m_yPosSet = o.m_yPosSet;
    m_maskSourceTypeSet = o.m_maskSourceTypeSet;

    m_type = o.m_type;

    return *this;
}

bool FillLayer::operator==(const FillLayer& o) const
{
    // We do not check the "isSet" booleans for each property, since those are only used during initial construction
    // to propagate patterns into layers. All layer comparisons happen after values have all been filled in anyway.
    return arePointingToEqualData(m_image.get(), o.m_image.get())
        && m_xPosition == o.m_xPosition
        && m_yPosition == o.m_yPosition
        && m_backgroundXOrigin == o.m_backgroundXOrigin
        && m_backgroundYOrigin == o.m_backgroundYOrigin
        && m_attachment == o.m_attachment
        && m_clip == o.m_clip
        && m_composite == o.m_composite
        && m_blendMode == o.m_blendMode
        && m_origin == o.m_origin
        && m_repeatX == o.m_repeatX
        && m_repeatY == o.m_repeatY
        && m_sizeType == o.m_sizeType
        && m_maskSourceType == o.m_maskSourceType
        && m_sizeLength == o.m_sizeLength
        && m_type == o.m_type
        && ((m_next && o.m_next) ? *m_next == *o.m_next : m_next == o.m_next);
}

void FillLayer::fillUnsetProperties()
{
    FillLayer* curr;

    for (curr = this; curr && curr->isXPositionSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_xPosition = pattern->m_xPosition;
            if (pattern->isBackgroundXOriginSet())
                curr->m_backgroundXOrigin = pattern->m_backgroundXOrigin;
            if (pattern->isBackgroundYOriginSet())
                curr->m_backgroundYOrigin = pattern->m_backgroundYOrigin;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isYPositionSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_yPosition = pattern->m_yPosition;
            if (pattern->isBackgroundXOriginSet())
                curr->m_backgroundXOrigin = pattern->m_backgroundXOrigin;
            if (pattern->isBackgroundYOriginSet())
                curr->m_backgroundYOrigin = pattern->m_backgroundYOrigin;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isAttachmentSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_attachment = pattern->m_attachment;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isClipSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_clip = pattern->m_clip;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isCompositeSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_composite = pattern->m_composite;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isBlendModeSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_blendMode = pattern->m_blendMode;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isOriginSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_origin = pattern->m_origin;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isRepeatXSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_repeatX = pattern->m_repeatX;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isRepeatYSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_repeatY = pattern->m_repeatY;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }

    for (curr = this; curr && curr->isSizeSet(); curr = curr->next()) { }
    if (curr && curr != this) {
        // We need to fill in the remaining values with the pattern specified.
        for (FillLayer* pattern = this; curr; curr = curr->next()) {
            curr->m_sizeType = pattern->m_sizeType;
            curr->m_sizeLength = pattern->m_sizeLength;
            pattern = pattern->next();
            if (pattern == curr || !pattern)
                pattern = this;
        }
    }
}

void FillLayer::cullEmptyLayers()
{
    for (FillLayer* layer = this; layer; layer = layer->m_next.get()) {
        if (layer->m_next && !layer->m_next->isImageSet()) {
            layer->m_next = nullptr;
            break;
        }
    }
}

static inline FillBox clipMax(FillBox clipA, FillBox clipB)
{
    if (clipA == FillBox::Border || clipB == FillBox::Border)
        return FillBox::Border;
    if (clipA == FillBox::Padding || clipB == FillBox::Padding)
        return FillBox::Padding;
    if (clipA == FillBox::Content || clipB == FillBox::Content)
        return FillBox::Content;
    return FillBox::Text;
}

void FillLayer::computeClipMax() const
{
    Vector<const FillLayer*> layers;
    for (auto* layer = this; layer; layer = layer->m_next.get())
        layers.append(layer);

    FillBox computedClipMax = FillBox::Text;
    for (unsigned i = layers.size(); i; --i) {
        auto& layer = *layers[i - 1];
        computedClipMax = clipMax(computedClipMax, layer.clip());
        layer.m_clipMax = static_cast<unsigned>(computedClipMax);
    }
}

bool FillLayer::clipOccludesNextLayers(bool firstLayer) const
{
    if (firstLayer)
        computeClipMax();
    return m_clip == m_clipMax;
}

bool FillLayer::containsImage(StyleImage& image) const
{
    for (auto* layer = this; layer; layer = layer->m_next.get()) {
        if (layer->m_image && image == *layer->m_image)
            return true;
    }
    return false;
}

bool FillLayer::imagesAreLoaded() const
{
    for (auto* layer = this; layer; layer = layer->m_next.get()) {
        if (layer->m_image && !layer->m_image->isLoaded())
            return false;
    }
    return true;
}

bool FillLayer::hasOpaqueImage(const RenderElement& renderer) const
{
    if (!m_image)
        return false;

    if (m_composite == CompositeClear || m_composite == CompositeCopy)
        return true;

    return static_cast<BlendMode>(m_blendMode) == BlendMode::Normal && m_composite == CompositeSourceOver && m_image->knownToBeOpaque(&renderer);
}

bool FillLayer::hasRepeatXY() const
{
    return repeatX() == FillRepeat::Repeat && repeatY() == FillRepeat::Repeat;
}

bool FillLayer::hasImage() const
{
    for (auto* layer = this; layer; layer = layer->m_next.get()) {
        if (layer->image())
            return true;
    }
    return false;
}

bool FillLayer::hasFixedImage() const
{
    for (auto* layer = this; layer; layer = layer->m_next.get()) {
        if (layer->m_image && layer->attachment() == FillAttachment::FixedBackground)
            return true;
    }
    return false;
}

bool FillLayer::imagesIdentical(const FillLayer* layer1, const FillLayer* layer2)
{
    for (; layer1 && layer2; layer1 = layer1->next(), layer2 = layer2->next()) {
        if (!arePointingToEqualData(layer1->image(), layer2->image()))
            return false;
    }
    return !layer1 && !layer2;
}

TextStream& operator<<(TextStream& ts, FillSize fillSize)
{
    return ts << fillSize.type << " " << fillSize.size;
}

TextStream& operator<<(TextStream& ts, const FillLayer& layer)
{
    TextStream::GroupScope scope(ts);
    ts << "fill-layer";

    ts.startGroup();
    ts << "position " << layer.xPosition() << " " << layer.yPosition();
    ts.endGroup();

    ts.dumpProperty("size", layer.size());

    ts.startGroup();
    ts << "background-origin " << layer.backgroundXOrigin() << " " << layer.backgroundYOrigin();
    ts.endGroup();

    ts.startGroup();
    ts << "repeat " << layer.repeatX() << " " << layer.repeatY();
    ts.endGroup();

    ts.dumpProperty("clip", layer.clip());
    ts.dumpProperty("origin", layer.origin());
    ts.dumpProperty("composite", layer.composite());
    ts.dumpProperty("blend-mode", layer.blendMode());
    ts.dumpProperty("mask-type", layer.maskSourceType());

    if (layer.next())
        ts << *layer.next();

    return ts;
}

} // namespace WebCore
Forum Rule: Always post complete source code & details to reproduce any issue!
Posts by user rdm
1. Replies: 1, Views: 539
Doesn't it happen that someone has an idea or a clue what it might be?
Or does anyone have experience with the USB touchscreen library?
I would be very grateful for a tip on where to start.
...
2. Replies: 1, Views: 539
Touchscreen does not work with RAW HID
If you want to use the touchscreen with the RawHID function, it doesn't work.
Code is the slightly adapted Teensy "Single Finger Line Example"
elapsedMillis nextTouch;
void setup() {...
2. Show Off Your Uniqueness - In my previous posts, I have discussed the value of being distinct from others in your business. You must promote what makes you unique. What is it about YOU that will draw your audience to you? One of the greatest mistakes you can make is attempting to duplicate someone else in your business. Although they may be successful, you're not giving your target audience a reason to pick your services over theirs. That is why it is essential to show your audience why you're distinct, and to be consistent about it.
Are you struggling to get a proper understanding of an article rewriter or text spinner tool? Don't worry, we can help you. It is a simple tool that can work online or offline depending on its functionality. An article rewriter, article spinner or paraphrasing tool is a simple yet efficient tool that understands the text you enter and then rewrites it for you. An article rewriter allows you to have your article rewritten without having to go through any trouble. Any good, quality article spinner tool will not just save your time but also spare you the pain of sitting in front of your computer screen for hours on end to create unique and plagiarism-free content. An article spinner or article rewriter is just a tool that helps you do your work more efficiently. The real authority over your work remains with you: you can read the newly created article yourself and make any necessary changes. Sometimes the article created by an article rewriter doesn't make sense or the content could be plagiarized; therefore we highly recommend you proofread the article before using it.
It's a long, long job to research your keywords, keyword phrases, LSIs etc. and then to write content for your chosen search term that has the perfect keywords and the perfect density of those keywords. This plugin looks at the top 10 Google SERPs for your search term, compares them to each other, and then tells you the keywords that ALL of the top 10 pages are using in their content, PLUS the density of those keywords. In less than a minute you will know EXACTLY which keywords, and at what density, to include in your website copy so that it matches Google's exact keyword expectations. Click here for more details.
Anybody may do that; all you need is the ability to string a sentence together and a computer. You can write original articles, submit them to article submission sites and get paid. You will need to apply to these sites and may or may not get rejected. Should you happen to get rejected, just keep plugging away till you find somebody to "hire" you.
In writing, an article rewriter tool and an essay rewriter tool work almost the same way as paraphrasing, but as writers we need to know clearly what the differences between the two are. In article rewriting you can change a word or phrase while keeping the meaning the same, and you may also add a few more sentences to elaborate the matter more clearly for the reader. Rewritertools.com presents a few essay rewriter tools to make writing easier and more comfortable.
Manual spinning is still needed when you want to create ultimate spun and readable articles. You must spin paragraphs and sentences if you want to create the ultimate spun article. It used to be a pain to spin huge articles in paragraph and sentence mode; the nested text always made it difficult to find the right sentence. But now, with SpinnerChief 5's spin tree function, everything becomes clear and easy! All paragraphs and sentences are organized very well, and you can locate, modify and spin them easily.
A long time ago right now there lived a boy. He had blue eyes with blond hair as yellowish as gold. The young boy was small and fragile, but his mind was swift just like the wind. He could calculate the outcome of any situation in the blink of an eye. But that had not been his biggest talent. No. His biggest skill was perception. The young boy could understand and perceive anything quicker than anyone he previously ever met. Then, he fell in love with a beautiful princess. She acquired magnetic green eye and light brown locks. He loved her quite definitely and she loved him back more even. There was only 1 problem – the young boy was the son of a blacksmith. The princess’s father could not allow her to marry a straightforward child of a blacksmith. But their love was too strong. In one fell swoop, the young boy raised an army of undead and began a rebellion to overthrow the evil king who did not believe in true love. After months of fighting against the armies of his beloved’s father, the youthful boy was assassinated and the battle was over. No songs were sang.
But still, this tool is guaranteed to be very useful, and every writer can use it whenever they wish to do some rewriting, and this tool can be educational especially if you want to learn how to revise words the right way. This is a great teacher and helper for your work, and rest assured that using this will be as easy as opening up your program for writing!
Open Graph description is not detected on the main page of Spinner Chief. Lack of an Open Graph description can be counter-productive for their social media presence, as such a description allows converting a website homepage (or other pages) into good-looking, rich and well-structured posts when it is shared on Facebook and other social media. For example, adding a suitable code snippet into the HTML <head> tag would help to represent this web page correctly in social networks.
Welcome to the JVNP 2.0 Update featuring a primary JV invite courtesy of fellow JVNP 2.0 Partner Aaron Sustar (Spin Rewriter 9.0 launch affiliate program JV invite), Buzz Builders (Tradeology – Forex Growth Code Evergreen launch, Matt Clark + Jason Katzenback – Amazing Selling Machine 10 (ASMX) pre-launch), and More … in Today’s get behind Spin Rewriter #9 and earn lifetime recurring commission edition.
Article Spinner or Article Rewriter, as the name itself suggests, is a rewriting tool or a paraphrasing tool that allows people to rewrite their content while saving a significant amount of time. Many such article rewriter and article spinner tools are available over the internet. Some article spinner tools are completely free, requiring no installation or fee, while some article rewriter tools ask you to sign up or pay.
Yea mate, WordAI really is overpriced IMO too, which is why I didn’t rank it higher in the cost-effectiveness rankings. Now, it did change more than a few words, but I see how you would think it’s less than 80% unique. TBS is also great, but only in terms of functionality and the easy interface which allows manual spinning to be done quite fast. As for their thesaurus, I think they have a lot to improve.
You have seen this type of article before - e.g. "10 Tips To Lose Weight" The key factor about this type of article is that each tip (each paragraph) makes sense in its own right - independently of the rest of the article. So if you can get a bunch of these paragraphs together, then you can mix any number of paragraphs in order to make your new article. The SpinnerChief TIPS plugin will do this task effortlessly for you. Check the details by clicking here
For this purpose, humans have created numerous rewriting tools to save time and use all words available in the English dictionary. Rewriting tools use millions of synonyms made available to them through artificial intelligence and instructions given by the human mind. If you hire the services of a content writer to rewrite/create content for you, they will charge you a lot. Big organizations can afford this but small startups and students can never afford the services of professional writers.
Online marketing is one of the biggest tasks in the success of a website or portal, and it requires tools that let us work faster and more reliably. One of the most popular tools among users is the article rewriter, which is used to rework a sentence, a paragraph or a complete article. SpinnerChief is one of the most advanced programs in this field and is used by many users around the world. These SpinnerChief coupons are beneficial for content writers, who can use our discount coupons when buying this content tool.
Some of you may be wondering if we would class Spin Rewriter 9.0 as a scam? Well, the simple answer to that is no, I wouldn't say that this software is a scam at all. Yes, the software is quite expensive and the sales funnel is designed to make both the creators and the affiliate marketers as much money as possible, but I wouldn't say it's a scam.
Mastering an online article rewriter tool can prove quite tricky. When paraphrasing or rewriting any text or paragraph, many writers and students tend to get hold of technology that enables them to rewrite any written piece of content into a fresh piece while keeping the original meaning of the text the same. This approach is usually used to simplify a piece of writing, minimize the use of quotes or target an alternative audience. When an article spinner tool is used correctly, paraphrasing turns out to be much more concise than the original text, covering all the main points while preventing the risk of plagiarism. Whether you are a student or a writer, you can use this free article rewriter online to rewrite any text to save time and get a different version.
Does your website need quality content in numbers? The truth is, producing articles yourself restricts the amount of content that gets written on your site, and a single article can take any author hours. Hiring a writer can get expensive. Have you ever considered using an article spinner? Spinning content is an excellent way to keep your website fresh and relevant. Needless to say, not all article spinner applications are good, so it's important to do your homework first. We propose Spin Rewriter 9.0. You can rewrite your articles using their Rewrite system. In addition, you have a Bulk Rewrite option that lets you rewrite articles with one click. You can take one article and break it up into several terrific pieces of content. The choices are endless.
Conclusion: if it boils down to speed, money and the most readable spun content, SpinnerChief Ultimate is what you want. If you've just got to have a web-based solution, then WordAi is what you want. Remember, we are using the cream of the crop here: WordAi Turing Edition and SpinnerChief III Ultimate. I believe the majority of users will opt for SpinnerChief Ultimate over WordAi Turing due to WordAi's cost.
We know that you want more from your spinner so SpinnerChief 5 uses a new, unique, organic approach to spinning, one that grows with your needs. SpinnerChief 5's new method actually gets better the more it is used. No other spinner has it - it's the new, exclusive Statistical Replacement Technology (SRT). It works in a radical new way by selecting the statistically most appropriate synonym for any word or phrase. SRT works in a similar way to Google Translate - you may have noticed Google Translate is getting better over time with more accurate translations. This is because as the web grows, the sample size for Google Translate's database increases, and so becomes statistically more likely to use the correct wording when it translates. SpinnerChief 5 works in a similar way by polling its huge Cloud Thesaurus for the statistically best synonym. As the Cloud Thesaurus grows, so SpinnerChief 5 gets better and better at synonym replacement. Remember - it's exclusive, don't expect to see this kind of technology anywhere else. ONLY with SpinnerChief!
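Stripped of the marketing language, the approach described above amounts to frequency-based synonym selection. Here is a toy sketch of the idea; the corpus, counts and synonym table are invented for illustration, and SpinnerChief's actual SRT implementation is not public:

```python
from collections import Counter

# Toy corpus statistics; a real system would poll a large shared thesaurus/corpus.
corpus = "the quick brown fox jumps over the lazy dog the fast fox".split()
frequency = Counter(corpus)

synonyms = {"quick": ["fast", "rapid", "speedy"]}  # invented synonym table

def best_synonym(word):
    """Pick the statistically most common replacement, falling back to the word."""
    candidates = synonyms.get(word, [])
    return max(candidates, key=lambda w: frequency[w], default=word)

print(best_synonym("quick"))  # "fast": the only candidate that occurs in the corpus
```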
We tried and we tested. We spun and we unspun. After all is said and done here, I think you should already have made your choice about which spinner is best for you. If you will be doing a lot of manual spinning, then The Best Spinner is probably the best choice. On the other hand, if you will be using the spinner just for automatic spinning, and you have enough capital, WordAI is the spinning tool for you. But if you are on a budget, Spin Rewriter is your spinner. However, if you are going to be spinning text in languages other than English, you might want to go with either Spinner Chief or Chimp Rewriter. The choice is all yours. You can now make an informed decision.
Mastering an online article rewriter tool can prove quite tricky. When paraphrasing or rewriting any text or paragraph, many writers and students tend to get hold of technology that enables them to rewrite any written piece of content into a fresh piece while keeping the original meaning of the text the same. This approach is usually used to simplify a piece of writing, minimize the use of quotes or target an alternative audience. When an article spinner tool is used correctly, the paraphrase turns out to be much more concise than the original text, covering all the main points while preventing the risk of plagiarism. Whether you are a student or a writer, you can use this free article rewriter online to rewrite any text to save time and get a different version.
Great article and research, thanks so much. I was looking to speed up some of my content writing, but these are pretty bad. All of them do awkward things that would require proofreading and replacing words and phrases before publication. This was a simple short story, very readable. Professional industries with inelastic terms and vocabulary would not fare well. Just this one paragraph felt like a lot of work to fix. I've heard a lot about spinners and SEO, but now I see that it would not work in a professional industry; unless I'm judging too quickly without trying them, let me know. I don't see business or legal content doing well with these.
SpinnerChief is being hailed as the world's most advanced content creation, article spinning and article submission tool. This powerful program will enable you to produce, verify and post completely original content within minutes, passing and exceeding Copyscape scrutiny, as well as being 100% readable and syntax-correct. Here are just some of the inbuilt features:
- All the basic functions of normal article spinner software.
- Super fast/easy rewriting of your article using the unique "Super Replace" system.
- Huge expert-defined thesaurus, saved locally, so it is very fast.
- Get synonyms from the Google AdWords keywords tool.
- Paragraph/sentence edit mode, so you can rewrite paragraphs and sentences easily.
- Split/merge function: split an article into multiple smaller parts to make it much easier to write big articles.
- Import synonyms from text files; multiple formats supported.
- Batch-spin articles.
- Calculate the similarity among spun articles to ensure that they are unique from each other, not just unique from the original.
- Batch-compare article similarity function.
- Submit article(s) to over 10 blog platforms and many article directories directly from SpinnerChief.
- Export articles with other syntax formats like [ ] [/] %\%.
- Favorites function: add/manage your favorite synonym thesauri based on your own categories and niches.
- Quick Preview: easily preview the spun article and quickly locate the position you want to modify.
- Developer API: programmers can integrate SpinnerChief functions into their own programs.
- Protect important keywords in spun articles.
- Auto-add anchor text links in spun articles.
- "Uniqueness" percentage indicator.
- And many, many more functions not listed here.
All in this free product (not just a trial or lite version). In addition, there is a selection of optional paid-for Pro features which take this product to a whole new level.
Google is getting VERY good at catching spun content, so you'd better be talented at manual spins. If they DO catch you, of course they'll spank you... and hard. So anything you use spun content on is inherently AT RISK. Worse yet, Google gets better at finding spun/poor content every year, so in the long run you're screwed. I'd weigh your other options and see if you couldn't find a cheaper source for manually hand-written content.
Some writers search for mountains of inexpensive private label rights (PLR) articles and edit them before posting them. These PLR articles may not be of the finest quality, but rewriting can improve them somewhat. They are used because, even though the quality is poor, they have nevertheless helped attract visitors.
Not sure about version 1, but I'm testing version 2 right now and I have to say it's the best auto spinner I've seen bar none. The content is actually extremely readable and probably better than the manual spins a lot of people do out there. However, some of the content doesn't pass copyscape so make sure you play around with the settings. Minor edits should do the trick. I'm on the trial now but seriously thinking of keeping the Turing plan for my Tier 1 content.
There are lots of reviews about the product, and many people here are writers who have had a wonderful time doing their job with the aid of the tool. Most of these people usually say that you don't need to rely on the tool completely to get the job done, because that will fail most of the time. What you need to do, according to the reviewers, is learn how to edit the article a bit so that it won't look completely spun, since there are some clients that hate this tool.
When there are so many article rewriter or article spinner tools available, both online and offline, why should you use our tool? Don't get us wrong, we are not just bragging when we say that this is one of the best rewriting and text spinning tools out there. We are offering you the best article spinner, and what makes our article rewriter unique and distinct is its convenience and user-friendly interface. It is a simple yet efficient article spinner that doesn't require any sign-up, registration fee or installation. All you need is a good internet connection and you are good to go. Also, our paraphrasing tool is absolutely free, which makes it an ideal option for students and people who can't afford to pay writers, rewriting tools or websites heavily.
There is also the option to spin capitalized words (assumed to be proper nouns) as well as leave any number of words unchanged, depending on whatever you enter into the "ignore" field, separated by commas. You also have the option to only keep the sentences that were altered a minimum percentage, as indicated by the "Keep Sentences that Changed" option.
Unlike other spinners, WordAi fully understands what each word means in context. It doesn't view sentences as just lists of words; it views them as real things that interact with each other. This human-like understanding allows WordAi to automatically rewrite entire sentences from scratch. This high level of rewriting ensures that Google and Copyscape can't detect your content while it still remains human-readable!
Before signing up, make sure to open a different browser window, one in which you have not visited the Spin Rewriter sales page before. For example, if you're using Chrome, open a new browser, Firefox or IE, and copy our URL [ https://impromocoder.com/spin-rewriter-discount-promo-code-75-off/ ] into that page. This step is essential so that we will be assigned as your referral.
Language claimed in HTML meta tag should match the language actually used on the web page. Otherwise Spinnerchief.com can be misinterpreted by Google and other search engines. Our service has detected that English is used on the page, and it matches the claimed language. Our system also found out that Spinnerchief.com main page’s claimed encoding is utf-8. Use of this encoding format is the best practice as the main page visitors from all over the world won’t have any issues with symbol transcription.
"I have been using Spinner Chief for about 2 months now and find it to be the best spinner that I have used. Of the many unique features Spinner Chief has, but I think the Super Replace feature is my favorite and is totally unique to Spinner Chief. I have used it exclusively to product content for my site, BuyMazuGold.com, and has saved me a lot of time and effort. I highly recommend this product and look forward to the new updates."
Hello and welcome to another review from the Make Money Online Zone. Today we are looking at something that I didn't really expect to see ever again. But here we are, looking at another version of an article spinner that has been around for quite some time. Spin Rewriter from www.spinrewriter.com has just announced version 9.0 of its article creation software. Spin Rewriter 9.0 boasts lots of features that claim to make content creation easy. But is this software all it's cracked up to be, or should you avoid this one like the plague?
These days there is a lot of article rewriting software, and it is pretty hard to choose one that will suit all your needs (quality + price - no subscription, at least). When you look, for example, at the WordAI article spinner, you will notice that there is a pretty expensive subscription (50 dollars per month), while SpinnerChief costs only 77 dollars per year (which is about $6.5 per month).
I would say that with the exception of the first sentence in the quotes, the content generated by the “Standard” spinner of WordAI is phenomenal. I mean, I have delved into the depths of machine learning and human language data and I know a thing or two about trying to educate a machine on human language. So the results here can be thought of as written by someone who has started learning English a few months ago, but still manages to write understandable human readable content. That is all in my opinion.
Article Spinner or Article Rewriter, as the name itself gives you an idea, is a rewriting or paraphrasing tool that allows people to rewrite their content while saving a significant amount of time. Many such article rewriter and article spinner tools are available over the internet. Some article spinner tools are completely free, requiring no installation or fee, while some article rewriter tools ask you to sign up or pay.
I personally love this one. It creates amazing content and is extremely affordable. We use a combination of WordAI and Spin Rewriter for our SEO projects, because these two are just an amazing combination. I think Spin Rewriter beats the “Standard Spinner” of WordAI in terms of quality and human readability, but loses to its “Turing Spinner”. So what we do is use the “Turing Spinner” for Tier 1 projects and then Spin Rewriter for the lower tiers. Sometimes we might switch them up, but it really depends a lot on the situation.
I think it is better to just come up with your own content. Most of those type of programs that are designed to make things easier, really just make things more difficult. You are right. The more you write, the more practice you get, and the better you’ll become. I have been writing for years myself, and I have never needed any content generation programs.
Welcome to the world's best free article rewriter tool by SeoToolStation... There are unlimited opportunities to make money online, but at the same time, there's a lot of competition in this field. With millions of competitors out there, you need to stay ahead of the competition and make your website liked by search engines. Not to worry, though. With the help of this free SeoToolStation Article Rewriter Pro tool, the game has changed and the most complex task of producing unique content is now made super easy.
|
__label__pos
| 0.519433 |
author:    Teng Qin <[email protected]>  2017-06-02 21:03:53 -0700
committer: David S. Miller <[email protected]>  2017-06-04 21:58:15 -0400
commit:    41e9a8046c92e26a68fdf5a4cb831b7c60113602 (patch)
tree:      63fab2b95741918a420f0a8d7b6eb3966e043fdb /samples
parent:    f91840a32deef5cb1bf73338bc5010f843b01426 (diff)
download:  96b-common-41e9a8046c92e26a68fdf5a4cb831b7c60113602.tar.gz
samples/bpf: add tests for more perf event types
$ trace_event
tests attaching BPF program to HW_CPU_CYCLES, SW_CPU_CLOCK, HW_CACHE_L1D and other events. It runs 'dd' in the background while bpf program collects user and kernel stack trace on counter overflow. User space expects to see sys_read and sys_write in the kernel stack.

$ tracex6
tests reading of various perf counters from BPF program.

Both tests were refactored to increase coverage and be more accurate.

Signed-off-by: Teng Qin <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Diffstat (limited to 'samples')
-rw-r--r--  samples/bpf/bpf_helpers.h       |   3
-rw-r--r--  samples/bpf/trace_event_user.c  |  73
-rw-r--r--  samples/bpf/tracex6_kern.c      |  28
-rw-r--r--  samples/bpf/tracex6_user.c      | 180
4 files changed, 228 insertions, 56 deletions
diff --git a/samples/bpf/bpf_helpers.h b/samples/bpf/bpf_helpers.h
index 9a9c95f2c9fb..51e567bc70fc 100644
--- a/samples/bpf/bpf_helpers.h
+++ b/samples/bpf/bpf_helpers.h
@@ -31,7 +31,8 @@ static unsigned long long (*bpf_get_current_uid_gid)(void) =
(void *) BPF_FUNC_get_current_uid_gid;
static int (*bpf_get_current_comm)(void *buf, int buf_size) =
(void *) BPF_FUNC_get_current_comm;
-static int (*bpf_perf_event_read)(void *map, int index) =
+static unsigned long long (*bpf_perf_event_read)(void *map,
+ unsigned long long flags) =
(void *) BPF_FUNC_perf_event_read;
static int (*bpf_clone_redirect)(void *ctx, int ifindex, int flags) =
(void *) BPF_FUNC_clone_redirect;
diff --git a/samples/bpf/trace_event_user.c b/samples/bpf/trace_event_user.c
index fa4336423da5..7bd827b84a67 100644
--- a/samples/bpf/trace_event_user.c
+++ b/samples/bpf/trace_event_user.c
@@ -75,7 +75,10 @@ static void print_stack(struct key_t *key, __u64 count)
for (i = PERF_MAX_STACK_DEPTH - 1; i >= 0; i--)
print_addr(ip[i]);
}
- printf("\n");
+ if (count < 6)
+ printf("\r");
+ else
+ printf("\n");
if (key->kernstack == -EEXIST && !warned) {
printf("stackmap collisions seen. Consider increasing size\n");
@@ -105,7 +108,7 @@ static void print_stacks(void)
bpf_map_delete_elem(fd, &next_key);
key = next_key;
}
-
+ printf("\n");
if (!sys_read_seen || !sys_write_seen) {
printf("BUG kernel stack doesn't contain sys_read() and sys_write()\n");
int_exit(0);
@@ -122,24 +125,29 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
{
int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
int *pmu_fd = malloc(nr_cpus * sizeof(int));
- int i;
+ int i, error = 0;
/* open perf_event on all cpus */
for (i = 0; i < nr_cpus; i++) {
pmu_fd[i] = sys_perf_event_open(attr, -1, i, -1, 0);
if (pmu_fd[i] < 0) {
printf("sys_perf_event_open failed\n");
+ error = 1;
goto all_cpu_err;
}
assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
- assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE, 0) == 0);
+ assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE) == 0);
}
- system("dd if=/dev/zero of=/dev/null count=5000k");
+ system("dd if=/dev/zero of=/dev/null count=5000k status=none");
print_stacks();
all_cpu_err:
- for (i--; i >= 0; i--)
+ for (i--; i >= 0; i--) {
+ ioctl(pmu_fd[i], PERF_EVENT_IOC_DISABLE);
close(pmu_fd[i]);
+ }
free(pmu_fd);
+ if (error)
+ int_exit(0);
}
static void test_perf_event_task(struct perf_event_attr *attr)
@@ -150,12 +158,13 @@ static void test_perf_event_task(struct perf_event_attr *attr)
pmu_fd = sys_perf_event_open(attr, 0, -1, -1, 0);
if (pmu_fd < 0) {
printf("sys_perf_event_open failed\n");
- return;
+ int_exit(0);
}
assert(ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
- assert(ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0) == 0);
- system("dd if=/dev/zero of=/dev/null count=5000k");
+ assert(ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE) == 0);
+ system("dd if=/dev/zero of=/dev/null count=5000k status=none");
print_stacks();
+ ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
close(pmu_fd);
}
@@ -175,11 +184,56 @@ static void test_bpf_perf_event(void)
.config = PERF_COUNT_SW_CPU_CLOCK,
.inherit = 1,
};
+ struct perf_event_attr attr_hw_cache_l1d = {
+ .sample_freq = SAMPLE_FREQ,
+ .freq = 1,
+ .type = PERF_TYPE_HW_CACHE,
+ .config =
+ PERF_COUNT_HW_CACHE_L1D |
+ (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16),
+ .inherit = 1,
+ };
+ struct perf_event_attr attr_hw_cache_branch_miss = {
+ .sample_freq = SAMPLE_FREQ,
+ .freq = 1,
+ .type = PERF_TYPE_HW_CACHE,
+ .config =
+ PERF_COUNT_HW_CACHE_BPU |
+ (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ (PERF_COUNT_HW_CACHE_RESULT_MISS << 16),
+ .inherit = 1,
+ };
+ struct perf_event_attr attr_type_raw = {
+ .sample_freq = SAMPLE_FREQ,
+ .freq = 1,
+ .type = PERF_TYPE_RAW,
+ /* Intel Instruction Retired */
+ .config = 0xc0,
+ .inherit = 1,
+ };
+ printf("Test HW_CPU_CYCLES\n");
test_perf_event_all_cpu(&attr_type_hw);
test_perf_event_task(&attr_type_hw);
+
+ printf("Test SW_CPU_CLOCK\n");
test_perf_event_all_cpu(&attr_type_sw);
test_perf_event_task(&attr_type_sw);
+
+ printf("Test HW_CACHE_L1D\n");
+ test_perf_event_all_cpu(&attr_hw_cache_l1d);
+ test_perf_event_task(&attr_hw_cache_l1d);
+
+ printf("Test HW_CACHE_BPU\n");
+ test_perf_event_all_cpu(&attr_hw_cache_branch_miss);
+ test_perf_event_task(&attr_hw_cache_branch_miss);
+
+ printf("Test Instruction Retired\n");
+ test_perf_event_all_cpu(&attr_type_raw);
+ test_perf_event_task(&attr_type_raw);
+
+ printf("*** PASS ***\n");
}
@@ -209,7 +263,6 @@ int main(int argc, char **argv)
return 0;
}
test_bpf_perf_event();
-
int_exit(0);
return 0;
}
diff --git a/samples/bpf/tracex6_kern.c b/samples/bpf/tracex6_kern.c
index be479c4af9e2..e7d180305974 100644
--- a/samples/bpf/tracex6_kern.c
+++ b/samples/bpf/tracex6_kern.c
@@ -3,22 +3,36 @@
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"
-struct bpf_map_def SEC("maps") my_map = {
+struct bpf_map_def SEC("maps") counters = {
.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
.key_size = sizeof(int),
.value_size = sizeof(u32),
- .max_entries = 32,
+ .max_entries = 64,
+};
+struct bpf_map_def SEC("maps") values = {
+ .type = BPF_MAP_TYPE_HASH,
+ .key_size = sizeof(int),
+ .value_size = sizeof(u64),
+ .max_entries = 64,
};
-SEC("kprobe/sys_write")
+SEC("kprobe/htab_map_get_next_key")
int bpf_prog1(struct pt_regs *ctx)
{
- u64 count;
u32 key = bpf_get_smp_processor_id();
- char fmt[] = "CPU-%d %llu\n";
+ u64 count, *val;
+ s64 error;
+
+ count = bpf_perf_event_read(&counters, key);
+ error = (s64)count;
+ if (error <= -2 && error >= -22)
+ return 0;
- count = bpf_perf_event_read(&my_map, key);
- bpf_trace_printk(fmt, sizeof(fmt), key, count);
+ val = bpf_map_lookup_elem(&values, &key);
+ if (val)
+ *val = count;
+ else
+ bpf_map_update_elem(&values, &key, &count, BPF_NOEXIST);
return 0;
}
diff --git a/samples/bpf/tracex6_user.c b/samples/bpf/tracex6_user.c
index ca7874ed77f4..a05a99a0752f 100644
--- a/samples/bpf/tracex6_user.c
+++ b/samples/bpf/tracex6_user.c
@@ -1,73 +1,177 @@
-#include <stdio.h>
-#include <unistd.h>
-#include <stdlib.h>
-#include <stdbool.h>
-#include <string.h>
+#define _GNU_SOURCE
+
+#include <assert.h>
#include <fcntl.h>
-#include <poll.h>
-#include <sys/ioctl.h>
#include <linux/perf_event.h>
#include <linux/bpf.h>
-#include "libbpf.h"
+#include <sched.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/ioctl.h>
+#include <sys/resource.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
#include "bpf_load.h"
+#include "libbpf.h"
#include "perf-sys.h"
#define SAMPLE_PERIOD 0x7fffffffffffffffULL
-static void test_bpf_perf_event(void)
+static void check_on_cpu(int cpu, struct perf_event_attr *attr)
{
- int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
- int *pmu_fd = malloc(nr_cpus * sizeof(int));
- int status, i;
+ int pmu_fd, error = 0;
+ cpu_set_t set;
+ __u64 value;
- struct perf_event_attr attr_insn_pmu = {
+ /* Move to target CPU */
+ CPU_ZERO(&set);
+ CPU_SET(cpu, &set);
+ assert(sched_setaffinity(0, sizeof(set), &set) == 0);
+ /* Open perf event and attach to the perf_event_array */
+ pmu_fd = sys_perf_event_open(attr, -1/*pid*/, cpu/*cpu*/, -1/*group_fd*/, 0);
+ if (pmu_fd < 0) {
+ fprintf(stderr, "sys_perf_event_open failed on CPU %d\n", cpu);
+ error = 1;
+ goto on_exit;
+ }
+ assert(bpf_map_update_elem(map_fd[0], &cpu, &pmu_fd, BPF_ANY) == 0);
+ assert(ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0) == 0);
+ /* Trigger the kprobe */
+ bpf_map_get_next_key(map_fd[1], &cpu, NULL);
+ /* Check the value */
+ if (bpf_map_lookup_elem(map_fd[1], &cpu, &value)) {
+ fprintf(stderr, "Value missing for CPU %d\n", cpu);
+ error = 1;
+ goto on_exit;
+ }
+ fprintf(stderr, "CPU %d: %llu\n", cpu, value);
+
+on_exit:
+ assert(bpf_map_delete_elem(map_fd[0], &cpu) == 0 || error);
+ assert(ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE, 0) == 0 || error);
+ assert(close(pmu_fd) == 0 || error);
+ assert(bpf_map_delete_elem(map_fd[1], &cpu) == 0 || error);
+ exit(error);
+}
+
+static void test_perf_event_array(struct perf_event_attr *attr,
+ const char *name)
+{
+ int i, status, nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
+ pid_t pid[nr_cpus];
+ int err = 0;
+
+ printf("Test reading %s counters\n", name);
+
+ for (i = 0; i < nr_cpus; i++) {
+ pid[i] = fork();
+ assert(pid[i] >= 0);
+ if (pid[i] == 0) {
+ check_on_cpu(i, attr);
+ exit(1);
+ }
+ }
+
+ for (i = 0; i < nr_cpus; i++) {
+ assert(waitpid(pid[i], &status, 0) == pid[i]);
+ err |= status;
+ }
+
+ if (err)
+ printf("Test: %s FAILED\n", name);
+}
+
+static void test_bpf_perf_event(void)
+{
+ struct perf_event_attr attr_cycles = {
.freq = 0,
.sample_period = SAMPLE_PERIOD,
.inherit = 0,
.type = PERF_TYPE_HARDWARE,
.read_format = 0,
.sample_type = 0,
- .config = 0,/* PMU: cycles */
+ .config = PERF_COUNT_HW_CPU_CYCLES,
+ };
+ struct perf_event_attr attr_clock = {
+ .freq = 0,
+ .sample_period = SAMPLE_PERIOD,
+ .inherit = 0,
+ .type = PERF_TYPE_SOFTWARE,
+ .read_format = 0,
+ .sample_type = 0,
+ .config = PERF_COUNT_SW_CPU_CLOCK,
+ };
+ struct perf_event_attr attr_raw = {
+ .freq = 0,
+ .sample_period = SAMPLE_PERIOD,
+ .inherit = 0,
+ .type = PERF_TYPE_RAW,
+ .read_format = 0,
+ .sample_type = 0,
+ /* Intel Instruction Retired */
+ .config = 0xc0,
+ };
+ struct perf_event_attr attr_l1d_load = {
+ .freq = 0,
+ .sample_period = SAMPLE_PERIOD,
+ .inherit = 0,
+ .type = PERF_TYPE_HW_CACHE,
+ .read_format = 0,
+ .sample_type = 0,
+ .config =
+ PERF_COUNT_HW_CACHE_L1D |
+ (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16),
+ };
+ struct perf_event_attr attr_llc_miss = {
+ .freq = 0,
+ .sample_period = SAMPLE_PERIOD,
+ .inherit = 0,
+ .type = PERF_TYPE_HW_CACHE,
+ .read_format = 0,
+ .sample_type = 0,
+ .config =
+ PERF_COUNT_HW_CACHE_LL |
+ (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ (PERF_COUNT_HW_CACHE_RESULT_MISS << 16),
+ };
+ struct perf_event_attr attr_msr_tsc = {
+ .freq = 0,
+ .sample_period = 0,
+ .inherit = 0,
+ /* From /sys/bus/event_source/devices/msr/ */
+ .type = 7,
+ .read_format = 0,
+ .sample_type = 0,
+ .config = 0,
};
- for (i = 0; i < nr_cpus; i++) {
- pmu_fd[i] = sys_perf_event_open(&attr_insn_pmu, -1/*pid*/, i/*cpu*/, -1/*group_fd*/, 0);
- if (pmu_fd[i] < 0) {
- printf("event syscall failed\n");
- goto exit;
- }
-
- bpf_map_update_elem(map_fd[0], &i, &pmu_fd[i], BPF_ANY);
- ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE, 0);
- }
+ test_perf_event_array(&attr_cycles, "HARDWARE-cycles");
+ test_perf_event_array(&attr_clock, "SOFTWARE-clock");
+ test_perf_event_array(&attr_raw, "RAW-instruction-retired");
+ test_perf_event_array(&attr_l1d_load, "HW_CACHE-L1D-load");
- status = system("ls > /dev/null");
- if (status)
- goto exit;
- status = system("sleep 2");
- if (status)
- goto exit;
-
-exit:
- for (i = 0; i < nr_cpus; i++)
- close(pmu_fd[i]);
- close(map_fd[0]);
- free(pmu_fd);
+ /* below tests may fail in qemu */
+ test_perf_event_array(&attr_llc_miss, "HW_CACHE-LLC-miss");
+ test_perf_event_array(&attr_msr_tsc, "Dynamic-msr-tsc");
}
int main(int argc, char **argv)
{
+ struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
char filename[256];
snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+ setrlimit(RLIMIT_MEMLOCK, &r);
if (load_bpf_file(filename)) {
printf("%s", bpf_log_buf);
return 1;
}
test_bpf_perf_event();
- read_trace_pipe();
-
return 0;
}
|
__label__pos
| 0.995841 |
Puppet Class: kafkatee
Defined in:
modules/kafkatee/manifests/init.pp
Overview
Parameters:
• configure_rsyslog (Any) (defaults to: true)
# File 'modules/kafkatee/manifests/init.pp', line 17
class kafkatee(
$configure_rsyslog = true,
) {
require_package('kafkatee')
file { '/etc/kafkatee':
ensure => 'directory',
}
# Mask the (default) package provided systemd config
# The "real" ones are created with instance.pp
systemd::mask { 'kafkatee.service': }
if $configure_rsyslog {
# Basic logrotate.d configuration to rotate /var/log/kafkatee.log
logrotate::conf { 'kafkatee':
source => 'puppet:///modules/kafkatee/kafkatee_logrotate',
}
# Basic rsyslog configuration to create /var/log/kafkatee.log
rsyslog::conf { 'kafkatee':
source => 'puppet:///modules/kafkatee/kafkatee_rsyslog.conf',
priority => 70,
}
}
}
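For completeness, a minimal usage sketch (not part of the module's own docs; the node name and the choice to disable the rsyslog handling are hypothetical):

# Declare the class from a hypothetical node definition, opting out
# of the bundled rsyslog/logrotate configuration.
node 'logs1001.example.org' {
  class { 'kafkatee':
    configure_rsyslog => false,
  }
}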
|
__label__pos
| 0.995092 |
LaTeX Question
How to change font size of table in Rmarkdown, LaTeX and .pdf?
I have a document that I create, based on data from different banks and for staff across different locations and roles. My .Rmd file is as follows:
---
title: <center> <h1>Output Report</h1> </center>
mainfont: Arial
output:
pdf_document:
latex_engine: xelatex
sansfont: Arial
fig_crop: false
toc: true
classoption: landscape
fontsize: 11pt
geometry: margin=0.45in
header-includes:
- \usepackage{booktabs}
- \usepackage{sectsty} \sectionfont{\centering}
- \renewcommand{\contentsname}{}\vspace{-2cm}
---
<style>
.main-container {
max-width: 1200px !important;
}
</style>
```{r load myData, include=FALSE}
load("C:/Desktop/RawData.RData")
```
```{r global_options, R.options=knitr::opts_chunk$set(warning=FALSE, message=FALSE)}
```
\newpage
# Client One
```{r fig.width=18, fig.height=6, echo=FALSE, comment=" "}
library(ggplot2)
ggplot(subset(Durations, Client == "Client One")) +
aes(x = NumStart, y = Value) +
geom_rect(aes(xmin = NumStart, xmax = NumEnd,
ymin = 0, ymax = 0.8,
fill = Position)) +
geom_text(data=Inv[Inv$Client == "Client One",],
aes(x = Median, y = Value,
label = Symbol),
size = 4.5) +
scale_fill_manual(values=myColors) +
scale_x_continuous(expand = c(0.015, 0.015)) +
scale_y_continuous(expand = c(0, 0),limits = c(0, 0.8),
labels = c("F50", "FMID", "DMID", "D50"),
breaks = c(0.1, 0.3, 0.5, 0.7)) +
theme_classic() +
theme(legend.position = "bottom",
legend.title = element_blank(),
legend.key.size = unit(1.25, 'lines'),
legend.text = element_text(size = 12, colour = "black", face = "bold"),
axis.title.x = element_blank(),
axis.title.y = element_blank(),
axis.text.y = element_text(face = "bold", size = 15, colour = "black"),
axis.text.x = element_text(face = "bold", colour = "black", size = 10),
axis.ticks.y = element_blank(),
axis.line.y = element_blank(),
panel.spacing = unit(0.5, "lines"),
strip.text = element_text(color = "black", size = 15, face = "bold")) +
facet_wrap(~ Bank, nrow = 1, scales = "free_x")
```
<br>
<br>
<br>
<br>
```{r, echo=FALSE, results='asis'}
library(knitr)
library(xtable)
t1 <- kable({x <- subset(Inv_Location, Client=="Client One", select = Location:DE);
names(x) <- sprintf("\\textbf{%s}", names(x))
x},
format = "latex", booktabs = TRUE, row.names = FALSE,
align = 'c', escape = FALSE)
t2 <- kable({x <- subset(Inv_Role, Client == "Client One", select = Role:DE);
names(x) <- sprintf("\\textbf{%s}", names(x))
x},
format = "latex", booktabs = TRUE, row.names = FALSE,
align = 'c', escape = FALSE)
t3 <- kable({x <- subset(Inv_ByRotation, Client == "Client One", select = Rotation:IPM);
names(x) <- sprintf("\\textbf{%s}", names(x))
x},
format = "latex", booktabs = TRUE, row.names = FALSE,
align = 'c', escape = FALSE)
t4 <- kable({x <- subset(Season_InvByLocation, Client == "Client One", select = Location:DE);
names(x) <- sprintf("\\textbf{%s}", names(x))
x},
format = "latex", booktabs = TRUE, row.names = FALSE,
align = 'c', escape = FALSE)
t5 <- kable({x <- subset(Season_InvByRole, Client == "Client One", select = Role:DE);
names(x) <- sprintf("\\textbf{%s}", names(x))
x},
format = "latex", booktabs = TRUE, row.names = FALSE,
align = 'c', digits = 2, escape = FALSE)
cat(c("\\begin{table}[!htb]
\\centering
\\begin{minipage}{.20\\linewidth}
\\centering
\\textbf{Bank - By Location}
\\centering",
t1,
"\\end{minipage}%
\\centering
\\begin{minipage}{.35\\linewidth}
\\centering
\\textbf{Bank - By Role}",
t2,
"\\end{minipage}%
\\centering
\\begin{minipage}{.25\\linewidth}
\\centering
\\textbf{Bank - By Rotation}",
t3,
"\\end{minipage}
\\end{table}"
))
cat(c("\\begin{table}[!htb]
\\centering
\\begin{minipage}{.25\\linewidth}
\\centering
\\textbf{Bank - By Location }",
t4,
"\\end{minipage}%
\\centering
\\begin{minipage}{.35\\linewidth}
\\centering
\\textbf{Bank - By Role}",
t5,
"\\end{minipage}
\\end{table}"
))
```
I now wish to simply change the font size of the tables - they are simply too big and overlap onto separate pages.
Is it possible to do this, given my end output is a .pdf and I am using kable to create these tables?
Thank you.
Answer Source
Maybe you can try out the kableExtra package. To change font size, you can try
kable(x, "latex", booktabs = T) %>%
kable_styling(font_size = 7)
Or, you can try to use the scale_down option, which will automatically scale the table to page width using the LaTeX graphicx package.
kable(x, "latex", booktabs = T) %>%
kable_styling(latex_options = "scale_down")
You can read the full vignette here: http://haozhu233.github.io/kableExtra/awesome_table_in_pdf.pdf http://haozhu233.github.io/kableExtra/awesome_table_in_html.html
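For instance, applied to one of the tables from the question, a minimal sketch might look like this (assuming the kableExtra package is installed; it re-exports the %>% pipe, x is the subsetted data frame from the question, and the 7 pt size is just an example):
library(knitr)
library(kableExtra)
t1 <- kable(x, format = "latex", booktabs = TRUE, row.names = FALSE,
            align = 'c', escape = FALSE) %>%
  kable_styling(font_size = 7)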
|
__label__pos
| 0.999971 |
PerlMonks
Re: Detect STX and ETX hex in received string
by GrandFather (Sage)
on Nov 28, 2012 at 23:49 UTC (#1006128)
in reply to Detect STX and ETX hex in received string
I'd use a state machine and to avoid manifest global variables I'd wrap it up in a light weight object:
#!/usr/bin/perl
use strict;
use warnings;

use Win32::SerialPort;

my $kSTX = "\x02";
my $kETX = "\x03";

my $obj = bless {
    port => Win32::SerialPort->new('COM1'),
    idle => 1,
};

$obj->configurePort();

while (1) {
    my $crc = $obj->readSerialPort();
    next if !defined $crc;
    print $obj->{buffer};
}
continue {
    sleep 1;
}

sub configurePort {
    my ($self) = @_;
    $self->{port}->...;    # port setup elided in the original post
}

sub writeSerialPort {
    my ($self, $outStr) = @_;
    $self->{port}->write($outStr) || die "Serial port write failed: $!\n";
}

sub readSerialPort {
    my ($self) = @_;

    while (my $byte = $self->{port}->input()) {
        # While idle, discard everything until a STX arrives
        next if $self->{idle} && $byte ne $kSTX;

        if ($byte eq $kSTX) {
            $self->{buffer} = '';
            $self->{idle}   = undef;
            next;
        }

        if ($byte ne $kETX && 80 > length $self->{buffer}) {
            $self->{buffer} .= $byte;
            next;
        }

        # Got end of record: compute a simple 8-bit checksum of the payload
        my $crc = 0;
        $crc += ord($_) for split '', $self->{buffer};
        $crc &= 0xFF;
        $self->{idle} = 1;
        return $crc;
    }

    return;
}
The code assumes that the input function times out after some reasonable time and that nulls are not used in the payload data. None of this is tested code!
Note too the use of strictures - always use strictures (use strict; use warnings;)
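As an aside, the matching transmit side is trivial with this framing; a minimal sketch (the sendFrame name is mine, and any checksum byte your protocol appends is ignored here):

sub sendFrame {
    my ($self, $payload) = @_;
    # Wrap the payload in STX/ETX so the reader's state machine can frame it
    $self->writeSerialPort($kSTX . $payload . $kETX);
}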
True laziness is hard work
|
__label__pos
| 0.818656 |
Data Flow Transformation categories in SSIS
Aim: In this post we will learn about Data Flow Transformation categories in SSIS. Transformations are a core component of the Data Flow of a package in SSIS. They are the part of the data flow where we apply our business logic to manipulate and modify the input data into the required format before loading it to the destination. All Data Flow Transformations are broadly classified into 2 types:-
Type 1 Synchronous Transformations.
Type 2 Asynchronous Transformations.
What is the difference between Synchronous and Asynchronous transformations?
Synchronous Transformations | Asynchronous Transformations
Process each incoming row, modify it to the required format, and forward it. | Store all the rows in memory before beginning to modify the input data into the required output format.
No. of input rows = No. of output rows. | No. of input rows != No. of output rows.
Output rows are in sync with input rows, i.e. a 1:1 relationship. | Output rows are not in sync with input rows.
Less memory is required, as they work on a row-by-row basis. | More memory is required to store the whole data set, as input and output buffers do not use the same memory.
Do not block the data flow in the pipeline. | Also known as "blocking transformations", as they block the data flow in the pipeline until all the input rows are read into memory.
Run quite fast due to the low memory requirement. | Generally run slowly, as the memory requirement is very high.
E.g. the Data Conversion transformation: input rows flow into memory buffers, and the same buffers come out as output in the required data format. | E.g. the Sort transformation: the component has to process the complete set of rows in a single operation.
Asynchronous Transformations are further divided into 2 categories:
1. Partially blocking Transformations create new memory buffers for the output of the transformation, such as the Union All Transformation.
2. Fully blocking Transformations perform the same operation but cause a full block of the data, such as the Sort and Aggregate Transformations.
The Data Flow Transformation categories are as follows:
1. Row Transformations
• These transformations are used to update column values or create new columns.
• They transform each row present in the pipeline (input).
Transformation Name | Description
Character Map | Modifies strings, typically for changes involving code pages.
Copy Column | Copies columns to new output columns.
Data Conversion | Performs data casting.
Derived Column | Allows the definition of new columns, or the overriding of values in existing columns, based on expressions.
OLE DB Command | Executes a command against a connection manager for each row. This transformation can behave as a destination.
2. Rowset Transformations
• These transformations are also called asynchronous, as they "dam the flow" of data, i.e. they store all the rows in memory before beginning to modify the input data into the required output format.
• As a result, a block is caused in the pipeline of data until the operation is completed.
Transformation Name | Description
Aggregate | Aggregates (summarizes) numeric columns.
Percentage Sampling | Outputs a configured percentage of rows.
Row Sampling | Outputs a configured number of rows.
Sort | Sorts the data, and can be configured to remove duplicates.
Pivot | Pivots the data.
Unpivot | Unpivots the data.
3. Split and Join Transformations
• Distribute rows to different outputs.
• Create copies of the transformation inputs.
• Join multiple inputs into one output.
Transformation Name | Description
Conditional Split | Uses conditions to allocate rows to multiple outputs.
Lookup | Performs a lookup against a reference set of data. Typically used in fact table loading packages.
Merge | Unions two sorted inputs and retains sort order in the output.
Merge Join | Joins two sorted inputs, and can be configured as Inner, Left Outer or Full Outer.
Multicast | Broadcasts (duplicates) the rows to multiple outputs.
Union All | Unions two or more inputs to produce a single output.
4. Business Intelligence Transformations
• These are used to introduce data mining capabilities and data cleansing.
• Cleaning data includes identification and removal of duplicate rows based on approximate matches.
• These are only available with the Enterprise Edition.
5. Script Transformations
• Extends the capabilities of the data flow.
• Delivers optimized performance because it is precompiled.
• Similar to the Script Task, it delivers the ability to introduce custom logic into the data flow using VB.NET or C#.NET.
• It can be configured to behave as a Source, a Destination, or any type of Transformation, as the sketch below shows.
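To make the row-by-row model concrete, here is a minimal sketch of a synchronous Script Component in C# (our illustration, not from the original post: UserComponent and Input0Buffer are generated by the SSIS designer from the columns you select, and the Name/NameUpper columns are hypothetical):

using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

public class ScriptMain : UserComponent
{
    // Called once per input row: a synchronous transformation edits the
    // row in place and lets it flow straight on to the output.
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        if (!Row.Name_IsNull)
            Row.NameUpper = Row.Name.ToUpper();
    }
}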
6. Other Transformations
• Add audit information, such as when the package was run and by whom.
• Export and import data.
• Store the row count from the data flow into a variable.
Transformation Name | Description
Audit | Adds audit information as columns to the output.
Cache Transform | Prepares caches for use by the Lookup transformation.
Export and Import Column | Extracts or loads data from/to the file system.
Row Count | Stores the number of rows that have passed through the transformation into a variable.
Slowly Changing Dimension | Produces SCD logic for Type 1 and Type 2 changes.
This completes the basics of Data Flow Transformation categories in SSIS. You can have as many transformations as you need according to your business requirements with the use of the Script Transformation. This feature of creating custom transformations via the VB.NET or C# programming languages makes it a valuable tool in the competitive Business Intelligence market. Next we will look at the implementation of each Data Flow Transformation and see how each is configured to work according to our requirements. I hope you like my article on Data Flow Transformation categories in SSIS. Take some time to post your queries and comments below.
• Allen
Excellent explanation. Really appreciated.
• Thanks a lot Allen !!!
• Anupama
Very nice explanation of all the transformations, with crisp information.
• Bindu
Hi Chander Sharma,
Can you do a favour? I need a clear explanation of the Lookup and Cache transformations.
Could you please update it ASAP.
Thank you.
• Jerome
Hi Chander Sharma,
Good work – one place to see all the transforms.
I have three questions:
1. Is there a similar blog post that covers all the tasks that can be part of the control flow?
2. How does SSIS work in a multi-user environment? To give more detail: if I create a project, would it be visible to others, and vice versa?
3. Can I use a package that is in another project folder?
• gopal
Very nice and simple, great work.
• PRADEEP
Thank you for sharing very useful stuff.
|
__label__pos
| 0.533165 |
Star Delegate Example
The Star Delegate example shows how to create a delegate that can paint itself and that supports editing.
The Star Delegate Example
When displaying data in a QListView, QTableView, or QTreeView, the individual items are drawn by a delegate. Also, when the user starts editing an item (for example, by double-clicking the item), the delegate provides an editor widget that is placed on top of the item while editing takes place.
Delegates are subclasses of QAbstractItemDelegate. Qt provides QItemDelegate, which inherits QAbstractItemDelegate and handles the most common data types (notably int and QString). If we need to support custom data types, or want to customize the rendering or the editing for existing data types, we can subclass QAbstractItemDelegate or QItemDelegate. See Delegate Classes for more information about delegates, and Model/View Programming if you need a high-level introduction to Qt's model/view architecture (including delegates).
In this example, we will see how to implement a custom delegate to render and edit a "star rating" data type, which can store values such as "1 out of 5 stars".
The example consists of the following classes:
• StarRating is the custom data type. It stores a rating expressed as stars, such as "2 out of 5 stars" or "5 out of 6 stars".
• StarDelegate inherits QItemDelegate and provides support for StarRating (in addition to the data types already handled by QItemDelegate).
• StarEditor inherits QWidget and is used by StarDelegate to let the user edit a star rating using the mouse.
To show the StarDelegate in action, we will fill a QTableWidget with some data and install the delegate on it.
StarDelegate Class Definition
Here's the definition of the StarDelegate class:
class StarDelegate : public QStyledItemDelegate
{
Q_OBJECT
public:
StarDelegate(QWidget *parent = 0) : QStyledItemDelegate(parent) {}
void paint(QPainter *painter, const QStyleOptionViewItem &option,
const QModelIndex &index) const override;
QSize sizeHint(const QStyleOptionViewItem &option,
const QModelIndex &index) const override;
QWidget *createEditor(QWidget *parent, const QStyleOptionViewItem &option,
const QModelIndex &index) const override;
void setEditorData(QWidget *editor, const QModelIndex &index) const override;
void setModelData(QWidget *editor, QAbstractItemModel *model,
const QModelIndex &index) const override;
private slots:
void commitAndCloseEditor();
};
All public functions are reimplemented virtual functions from QItemDelegate to provide custom rendering and editing.
StarDelegate Class Implementation
The paint() function is reimplemented from QItemDelegate and is called whenever the view needs to repaint an item:
void StarDelegate::paint(QPainter *painter, const QStyleOptionViewItem &option,
const QModelIndex &index) const
{
if (index.data().canConvert<StarRating>()) {
StarRating starRating = qvariant_cast<StarRating>(index.data());
if (option.state & QStyle::State_Selected)
painter->fillRect(option.rect, option.palette.highlight());
starRating.paint(painter, option.rect, option.palette,
StarRating::ReadOnly);
} else {
QStyledItemDelegate::paint(painter, option, index);
}
}
The function is invoked once for each item, represented by a QModelIndex object from the model. If the data stored in the item is a StarRating, we paint it ourselves; otherwise, we let QItemDelegate paint it for us. This ensures that the StarDelegate can handle the most common data types.
If the item is a StarRating, we draw the background if the item is selected, and we draw the item using StarRating::paint(), which we will review later.
StartRatings can be stored in a QVariant thanks to the Q_DECLARE_METATYPE() macro appearing in starrating.h. More on this later.
The createEditor() function is called when the user starts editing an item:
QWidget *StarDelegate::createEditor(QWidget *parent,
const QStyleOptionViewItem &option,
const QModelIndex &index) const
{
if (index.data().canConvert<StarRating>()) {
StarEditor *editor = new StarEditor(parent);
connect(editor, &StarEditor::editingFinished,
this, &StarDelegate::commitAndCloseEditor);
return editor;
} else {
return QStyledItemDelegate::createEditor(parent, option, index);
}
}
If the item is a StarRating, we create a StarEditor and connect its editingFinished() signal to our commitAndCloseEditor() slot, so we can update the model when the editor closes.
Here's the implementation of commitAndCloseEditor():
void StarDelegate::commitAndCloseEditor()
{
StarEditor *editor = qobject_cast<StarEditor *>(sender());
emit commitData(editor);
emit closeEditor(editor);
}
When the user is done editing, we emit commitData() and closeEditor() (both declared in QAbstractItemDelegate), to tell the model that there is edited data and to inform the view that the editor is no longer needed.
The setEditorData() function is called when an editor is created to initialize it with data from the model:
void StarDelegate::setEditorData(QWidget *editor,
const QModelIndex &index) const
{
if (index.data().canConvert<StarRating>()) {
StarRating starRating = qvariant_cast<StarRating>(index.data());
StarEditor *starEditor = qobject_cast<StarEditor *>(editor);
starEditor->setStarRating(starRating);
} else {
QStyledItemDelegate::setEditorData(editor, index);
}
}
We simply call setStarRating() on the editor.
The setModelData() function is called to commit data from the editor to the model when editing is finished:
void StarDelegate::setModelData(QWidget *editor, QAbstractItemModel *model,
const QModelIndex &index) const
{
if (index.data().canConvert<StarRating>()) {
StarEditor *starEditor = qobject_cast<StarEditor *>(editor);
model->setData(index, QVariant::fromValue(starEditor->starRating()));
} else {
QStyledItemDelegate::setModelData(editor, model, index);
}
}
The sizeHint() function returns an item's preferred size:
QSize StarDelegate::sizeHint(const QStyleOptionViewItem &option,
const QModelIndex &index) const
{
if (index.data().canConvert<StarRating>()) {
StarRating starRating = qvariant_cast<StarRating>(index.data());
return starRating.sizeHint();
} else {
return QStyledItemDelegate::sizeHint(option, index);
}
}
We simply forward the call to StarRating.
StarEditor Class Definition
The StarEditor class was used when implementing StarDelegate. Here's the class definition:
class StarEditor : public QWidget
{
Q_OBJECT
public:
StarEditor(QWidget *parent = 0);
QSize sizeHint() const override;
void setStarRating(const StarRating &starRating) {
myStarRating = starRating;
}
StarRating starRating() { return myStarRating; }
signals:
void editingFinished();
protected:
void paintEvent(QPaintEvent *event) override;
void mouseMoveEvent(QMouseEvent *event) override;
void mouseReleaseEvent(QMouseEvent *event) override;
private:
int starAtPosition(int x);
StarRating myStarRating;
};
The class lets the user edit a StarRating by moving the mouse over the editor. It emits the editingFinished() signal when the user clicks on the editor.
The protected functions are reimplemented from QWidget to handle mouse and paint events. The private function starAtPosition() is a helper function that returns the number of the star under the mouse pointer.
StarEditor Class Implementation
Let's start with the constructor:
StarEditor::StarEditor(QWidget *parent)
: QWidget(parent)
{
setMouseTracking(true);
setAutoFillBackground(true);
}
We enable mouse tracking on the widget so we can follow the cursor even when the user doesn't hold down any mouse button. We also turn on QWidget's auto-fill background feature to obtain an opaque background. (Without the call, the view's background would shine through the editor.)
The paintEvent() function is reimplemented from QWidget:
void StarEditor::paintEvent(QPaintEvent *)
{
QPainter painter(this);
myStarRating.paint(&painter, rect(), this->palette(),
StarRating::Editable);
}
We simply call StarRating::paint() to draw the stars, just like we did when implementing StarDelegate.
void StarEditor::mouseMoveEvent(QMouseEvent *event)
{
int star = starAtPosition(event->x());
if (star != myStarRating.starCount() && star != -1) {
myStarRating.setStarCount(star);
update();
}
}
In the mouse event handler, we call setStarCount() on the private data member myStarRating to reflect the current cursor position, and we call QWidget::update() to force a repaint.
void StarEditor::mouseReleaseEvent(QMouseEvent * /* event */)
{
emit editingFinished();
}
When the user releases a mouse button, we simply emit the editingFinished() signal.
int StarEditor::starAtPosition(int x)
{
int star = (x / (myStarRating.sizeHint().width()
/ myStarRating.maxStarCount())) + 1;
if (star <= 0 || star > myStarRating.maxStarCount())
return -1;
return star;
}
The starAtPosition() function uses basic linear algebra to find out which star is under the cursor.
StarRating Class Definition
class StarRating
{
public:
enum EditMode { Editable, ReadOnly };
explicit StarRating(int starCount = 1, int maxStarCount = 5);
void paint(QPainter *painter, const QRect &rect,
const QPalette &palette, EditMode mode) const;
QSize sizeHint() const;
int starCount() const { return myStarCount; }
int maxStarCount() const { return myMaxStarCount; }
void setStarCount(int starCount) { myStarCount = starCount; }
void setMaxStarCount(int maxStarCount) { myMaxStarCount = maxStarCount; }
private:
QPolygonF starPolygon;
QPolygonF diamondPolygon;
int myStarCount;
int myMaxStarCount;
};
Q_DECLARE_METATYPE(StarRating)
The StarRating class represents a rating as a number of stars. In addition to holding the data, it is also capable of painting the stars on a QPaintDevice, which in this example is either a view or an editor. The myStarCount member variable stores the current rating, and myMaxStarCount stores the highest possible rating (typically 5).
The Q_DECLARE_METATYPE() macro makes the type StarRating known to QVariant, making it possible to store StarRating values in QVariant.
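A quick round-trip sketch (ours, not from the example) shows the effect:
QVariant variant = QVariant::fromValue(StarRating(3));
if (variant.canConvert<StarRating>()) {
    StarRating rating = qvariant_cast<StarRating>(variant);
    // rating.starCount() is now 3
}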
StarRating Class Implementation
The constructor initializes myStarCount and myMaxStarCount, and sets up the polygons used to draw stars and diamonds:
StarRating::StarRating(int starCount, int maxStarCount)
{
myStarCount = starCount;
myMaxStarCount = maxStarCount;
starPolygon << QPointF(1.0, 0.5);
for (int i = 1; i < 5; ++i)
starPolygon << QPointF(0.5 + 0.5 * std::cos(0.8 * i * 3.14),
0.5 + 0.5 * std::sin(0.8 * i * 3.14));
diamondPolygon << QPointF(0.4, 0.5) << QPointF(0.5, 0.4)
<< QPointF(0.6, 0.5) << QPointF(0.5, 0.6)
<< QPointF(0.4, 0.5);
}
The paint() function paints the stars in this StarRating object on a paint device:
void StarRating::paint(QPainter *painter, const QRect &rect,
const QPalette &palette, EditMode mode) const
{
painter->save();
painter->setRenderHint(QPainter::Antialiasing, true);
painter->setPen(Qt::NoPen);
if (mode == Editable) {
painter->setBrush(palette.highlight());
} else {
painter->setBrush(palette.foreground());
}
int yOffset = (rect.height() - PaintingScaleFactor) / 2;
painter->translate(rect.x(), rect.y() + yOffset);
painter->scale(PaintingScaleFactor, PaintingScaleFactor);
for (int i = 0; i < myMaxStarCount; ++i) {
if (i < myStarCount) {
painter->drawPolygon(starPolygon, Qt::WindingFill);
} else if (mode == Editable) {
painter->drawPolygon(diamondPolygon, Qt::WindingFill);
}
painter->translate(1.0, 0.0);
}
painter->restore();
}
We first set the pen and brush we will use for painting. The mode parameter can be either Editable or ReadOnly. If mode is editable, we use the Highlight color instead of the Foreground color to draw the stars.
Then we draw the stars. If we are in Edit mode, we paint diamonds in place of stars if the rating is less than the highest rating.
The sizeHint() function returns the preferred size for an area to paint the stars on:
QSize StarRating::sizeHint() const
{
return PaintingScaleFactor * QSize(myMaxStarCount, 1);
}
The preferred size is just enough to paint the maximum number of stars. The function is called by both StarDelegate::sizeHint() and StarEditor::sizeHint().
The main() Function
Here's the program's main() function:
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
QTableWidget tableWidget(4, 4);
tableWidget.setItemDelegate(new StarDelegate);
tableWidget.setEditTriggers(QAbstractItemView::DoubleClicked
| QAbstractItemView::SelectedClicked);
tableWidget.setSelectionBehavior(QAbstractItemView::SelectRows);
QStringList headerLabels;
headerLabels << "Title" << "Genre" << "Artist" << "Rating";
tableWidget.setHorizontalHeaderLabels(headerLabels);
populateTableWidget(&tableWidget);
tableWidget.resizeColumnsToContents();
tableWidget.resize(500, 300);
tableWidget.show();
return app.exec();
}
The main() function creates a QTableWidget and sets a StarDelegate on it. DoubleClicked and SelectedClicked are set as edit triggers, so that the editor is opened with a single click when the star rating item is selected.
The populateTableWidget() function fills the QTableWidget with data:
void populateTableWidget(QTableWidget *tableWidget)
{
static const struct {
const char *title;
const char *genre;
const char *artist;
int rating;
} staticData[] = {
{ "Mass in B-Minor", "Baroque", "J.S. Bach", 5 },
...
{ 0, 0, 0, 0 }
};
for (int row = 0; staticData[row].title != 0; ++row) {
QTableWidgetItem *item0 = new QTableWidgetItem(staticData[row].title);
QTableWidgetItem *item1 = new QTableWidgetItem(staticData[row].genre);
QTableWidgetItem *item2 = new QTableWidgetItem(staticData[row].artist);
QTableWidgetItem *item3 = new QTableWidgetItem;
item3->setData(0,
QVariant::fromValue(StarRating(staticData[row].rating)));
tableWidget->setItem(row, 0, item0);
tableWidget->setItem(row, 1, item1);
tableWidget->setItem(row, 2, item2);
tableWidget->setItem(row, 3, item3);
}
}
Notice the call to QVariant::fromValue() to convert a StarRating to a QVariant.
Possible Extensions and Suggestions
There are many ways to customize Qt's model/view framework. The approach used in this example is appropriate for most custom delegates and editors. Examples of possibilities not used by the star delegate and star editor are:
• It is possible to open editors programmatically by calling QAbstractItemView::edit(), instead of relying on edit triggers. This could be used to support other edit triggers than those offered by the QAbstractItemView::EditTrigger enum. For example, in the Star Delegate example, hovering over an item with the mouse might make sense as a way to pop up an editor.
• By reimplementing QAbstractItemDelegate::editorEvent(), it is possible to implement the editor directly in the delegate, instead of creating a separate QWidget subclass; a rough sketch of this approach follows below.
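A rough sketch of that second idea (our illustration, not part of the original example; InlineStarDelegate is a hypothetical QStyledItemDelegate subclass, and hover feedback and edge cases are omitted):
bool InlineStarDelegate::editorEvent(QEvent *event, QAbstractItemModel *model,
                                     const QStyleOptionViewItem &option,
                                     const QModelIndex &index)
{
    if (index.data().canConvert<StarRating>()
            && event->type() == QEvent::MouseButtonRelease) {
        QMouseEvent *mouseEvent = static_cast<QMouseEvent *>(event);
        StarRating rating = qvariant_cast<StarRating>(index.data());
        // Map the click position to a star count, as StarEditor does.
        int star = (mouseEvent->pos().x() - option.rect.x())
                   / (rating.sizeHint().width() / rating.maxStarCount()) + 1;
        if (star >= 1 && star <= rating.maxStarCount()) {
            rating.setStarCount(star);
            model->setData(index, QVariant::fromValue(rating));
        }
        return true;
    }
    return QStyledItemDelegate::editorEvent(event, model, option, index);
}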
|
__label__pos
| 0.971907 |
For collision testing I need to rasterize a line. The Bresenham algorithm works almost as desired, but has the flaw that it produces an 8-connected line, where the line can step diagonally. I need a 4-connected line, where every step moves along only one axis so that consecutive pixels always share an edge.
My current implementation (based on http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm#Simplification):
public boolean isInsideLine(int x1, int y1, int x2, int y2) {
final int dx = abs(x2 - x1), dy = abs(y2 - y1);
final int sx = x1 < x2 ? 1 : -1, sy = y1 < y2 ? 1 : -1;
int err = dx - dy;
while (true) {
if (isInside(x1, y1)) //Lookup in pixel array
return true;
if (x1 == x2 && y1 == y2)
break;
final int e2 = err << 1;
if (e2 > -dy) {
err -= dy;
x1 += sx;
}
if (e2 < dx) {
err += dx;
y1 += sy;
}
}
return false;
}
Is there another line rasterization algorithm I could use, or does anyone know how to modify the Bresenham algorithm?
It seems to me that the raw output of Bresenham is 8-connected but you require 4-connected. You could do a post processing of the raw output to detect a diagonal link and then decide which pixel the line is closest to. – koan Nov 24 '12 at 16:18
See stackoverflow.com/questions/5186939/… for what looks like the same question. – koan Nov 24 '12 at 16:19
Out of interest: why do you need to rasterize a line for collision detection? Can't you just calculate intersections? – Axel Nov 24 '12 at 18:26
It's for (nearly) pixel exact 2d collision. – DiddiZ Nov 24 '12 at 18:49
1 Answer
Thanks koan, sometimes you just lack the keywords to search for. Algorithm for drawing a 4-connected line seems to solve it: turning the second error check into an else if means at most one coordinate advances per iteration, so the line can never take a diagonal step.
public boolean isInsideLine(int x1, int y1, int x2, int y2) {
    final int dx = abs(x2 - x1), dy = abs(y2 - y1);
    final int sx = x1 < x2 ? 1 : -1, sy = y1 < y2 ? 1 : -1;
    int err = dx - dy;
    while (true) {
        if (isInside(x1, y1)) // Lookup in pixel array
            return true;
        if (x1 == x2 && y1 == y2)
            break;
        final int e2 = err << 1;
        if (e2 > -dy) {
            err -= dy;
            x1 += sx;
        } else if (e2 < dx) { // else if instead of if
            err += dx;
            y1 += sy;
        }
    }
    return false;
}
Chisel is an alternative to classic Hardware Description Languages (HDLs) like Verilog and VHDL. My experience with Verilog and its existing prevalence in both industry and academia indicate that it is a well-developed mature language.
Can someone who has used Chisel for some time share your experiences and help to explain Chisel's utility over classic HDLs?
My experiences so far, as an undergraduate IC design student, include:
• Verilog is dominant in industry and has wide simulation support
• Chisel does not support all constructs Verilog does, e.g., analog elements
• Chisel parameterization provides equivalent capabilities to Verilog
This is an incredibly difficult question to answer for a number of reasons. The Chisel community is attempting to unify on one concrete answer that is easy to understand. There are two main angles:
• Improved productivity through new language features and availability of libraries
• Improved specialization due to the hardware-compiler structure
Language Feature/Power Answer
Think about the following similar question, "What utility does Python provide over C?" A response indicating no utility would likely bring up the following points:
1. Everything I can write in C I can write in Python. Both have for, while, etc. and the code I write looks the same: printf("hello\n") vs. print("hello")
2. C has features that Python doesn't, like inline assembly
3. Both are Turing complete
The problem with this line of reasoning is that it's ignoring the fact that Python provides new programming paradigms that allow you to be more productive:
• Object oriented programming
• Functional programming
• Libraries
In effect, Python can be viewed as a more powerful language from a design productivity and code reuse perspective. Language power is notoriously difficult to objectively evaluate. Paul Graham describes this as the "Blub Paradox" in his "Beating the Averages" essay. Graham's thesis is that an engineer proficient in a less powerful language cannot evaluate the utility of a more powerful language.
Put differently, the existence of the paradigms listed above does not force you to use them. Hence, it's entirely possible to write Python that looks and feels just like C (and use that for justification to dismiss Python altogether!).
Chisel is a domain specific language (DSL) for describing hardware circuits, embedded in Scala. By its very nature, it provides constructs equivalent to Verilog's, like module, Input/Output, Reg, etc. Problematically for your question, this means that the most basic Chisel examples look exactly like Verilog, and it also allows people to write Chisel that looks just like Verilog. This is often used as an argument for dismissing Chisel in favor of Verilog. However, this is analogous to choosing a programming language based on the structure of its "Hello World" examples.
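For concreteness, here is a minimal module in chisel3 syntax (an illustrative sketch written for this answer, not code taken from the projects discussed below); note how directly it mirrors a Verilog module:

import chisel3._

class Accumulator extends Module {
  val io = IO(new Bundle {
    val in  = Input(UInt(8.W))
    val sum = Output(UInt(8.W))
  })
  val total = RegInit(0.U(8.W))
  total := total + io.in
  io.sum := total
}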
That means that a better question is: "What can I build with Python that is insurmountably hard with C?"
Answering this question requires case studies and is (very quickly) out of the scope of "Hello World" comparisons. Consider building machine learning libraries in C vs. Python, e.g., "How would I build Tensorflow in C?" This would be incredibly difficult due to the abstractions that something like Tensorflow requires.
You can think about similar exemplar comparisons by looking at mature Chisel projects. Two prominent examples are Rocket-Chip and Diplomacy. The former is a generator of arbitrary System-on-Chip (SoC) designs. The latter is used by Rocket Chip to deal with the problem of parameter negotiation: in an arbitrary SoC, I'd like the parameters of the N components I'm connecting together to be a function of what I'm connecting (e.g., address widths, internal logic, coherency protocol). Critically, both Rocket-Chip and Diplomacy can be used as libraries (with some difficulty that will be resolved in the future). Concretely, this means that a user is very close to being able to "just import a RISC-V microprocessor" in the same way that they "just import a graph library".
Due to language feature/power coupled with library availability, the real utility metrics become:
• How many lines of code do I need to describe some component?
• How powerful are the abstractions the language allows me to build?
• To what extent does the language enable code reuse?
For unsupported constructs, like negative edge triggered things and asynchronous resets (both FIRRTL and Chisel now support asynchronous resets), Chisel always provides an escape hatch kludge via Verilog blackboxes. However, all of these are really just features looking for a developer.
Hardware Compiler Answer
Chisel forms part of a Hardware Compiler Framework that looks very much like LLVM applied to hardware generation. The Chisel-to-Verilog process forms part of a multi-stage compiler. The "Chisel stage/front-end" compiles Chisel to a circuit intermediate representation called FIRRTL (Flexible Intermediate Representation for RTL). "FIRRTL stage/mid-end" then optimizes FIRRTL and applies user-custom transformations. Finally the "Verilog stage/back-end" emits Verilog based on the optimized FIRRTL.
While subtle, this compiler structure enables the following things:
• The front-end and back-ends are decoupled, meaning that other front-ends and back-ends can be written. A Verilog front-end exists through Yosys, and other languages can directly target FIRRTL, e.g., Magma is its own Chisel-like language in Python that can target FIRRTL. New front-ends get all the benefits of existing mid-end optimizations and available back-ends. New back-ends can also be written, e.g., a VHDL back-end just needs one motivated developer to write it.
• The introduction of a circuit IR enables automated specialization/transformation of circuits. This has been exploited to transform circuits into FPGA optimized versions that run faster than unoptimized versions (Midas and FireSim), enable hardware breakpoints/assertions (Dessert), and to add run-time configurable fault injection capabilities (Chiffre). Doing these optimizations directly on the Verilog would be insurmountably complex and brittle.
From this perspective, the question is closer to "What utility does C++ and LLVM provide over manually written assembly language?"
When discussing compilers, usually the question of Chisel (a hardware construction language) vs. high level synthesis (HLS) comes up. The differentiating factor here is that Chisel is still, fundamentally, a powerful language for describing circuits while HLS is a path for converting programs to circuits.
Conclusion
Summarily, a better way of thinking about this question is, "What does Chisel enable beyond Verilog?" That's a difficult question without a pithy answer and requires comparing the set of programming paradigms both languages provide. Currently, and through my own experience, the best way to tackle this is to do deep dives on mature Chisel code bases and try to use it yourself (and withholding judgment for several months). The difficulty with these approaches is that this takes time. Additionally, the requisite skill sets for reading these code bases and making these types of judgments are not commonly prevalent in hardware engineers. Hardware engineers are usually very proficient with C, but may have never seen (in any depth) object oriented programming, functional programming, or written complex projects exploiting modern software engineering principles. This tends to set up biases where people are looking for reasons to dismiss Chisel (or similar languages) as opposed to reasons to use it.
This is a fantastic answer, we should add this to the wiki – jkoenig Oct 26 '18 at 16:05
• Thanks for taking your spare time to answer my question. I quite understand the concept that Chisel and FIRRTL want to promote (decouple the front-end and back-end? Make Verilog a more primitive code that we don't need to revise, just like C and assembly. Don't know if I'm right lol). So, we decided to build some project out of it (e.g. a CNN accelerator or something lol). Hope that we could work it out haha – Eric Wu Oct 27 '18 at 6:20
• The synthesizable subset of Verilog, VHDL, etc. share qualities of both procedural languages (C) and low-level languages (assembly). This makes them poorly matched to solve classic hardware design problems: design re-use (we write the same modules over and over) and specialization (targeting different FPGAs, ASIC toolchains, VLSI technologies). Naturally, this isn't saying that you cannot build complex things with Verilog or VHDL, just that there are existing software engineering paradigms that facilitate this. I'm updating the answer with some references from others who motivated these ideas. – seldridge Oct 27 '18 at 15:54
Monster Hunter Rise
92
(2021)
Rise to the challenge and join the hunt! Hunt solo or in a party with friends to earn rewards that you can use to craft a huge variety of weapons and armor. Brand new gameplay systems such as the high-flying ‘Wire Action’ and your canine companion ‘Palamute’ will add exciting new layers to the already robust combat ...
GENRE
Role-playing (RPG) , Single player, Multiplayer, Co-operative, Role-playing (RPG), Hack and slash/Beat 'em up, Action, Survival
DEVELOPED BY
Capcom
PUBLISHED BY
Capcom
Monster Hunter Rise Tips & Tricks
Why is my ping so high in Monster Hunter Rise?
The terms "high ping" and "low ping" are commonly used in online gaming. You may already know that ping is the network latency between a gaming client and the game server. Ping is measured in milliseconds, where 1000 milliseconds is equivalent to 1 second. For gamers, low ping times are our best friend, because a low ping equates to low latency. This means there are fewer chances for lags and delays to happen, whereas a high ping causes a significant amount of lag. A ping higher than 100ms can already produce a severe amount of lag, and if you often experience pings above 100ms, chances are high that you suffer from ping spikes and jitter. This can be incredibly frustrating for an online gamer like you who plays Monster Hunter Rise. You might be wondering why your ping is so high in Monster Hunter Rise. Here are the possible reasons why:
• Your distance to the game server - the distance between your location and the game server has a big impact on the amount of delay you're experiencing. The further you are located from the game server, the higher your ping will be. Keep in mind that game servers are usually named by their geographic location, so check that first.
• Wireless connections - connecting to the internet via a wireless connection makes you prone to interference. This results in increased latency, packet loss, jitter, and all sorts of other network-related performance issues, all of which can increase your ping and slow down your game.
• Programs running in the background - every application running in the background while you play puts some strain on your network and computer. This can significantly affect your computer's performance, especially if the program consumes a lot of bandwidth. Be wary of streaming applications and downloads happening on your computer, as these increase latency between your computer and the game servers.
• Other devices connected to your network - it's not only running applications: other devices connected to your network, such as other computers, consoles, smartphones, and tablets, consume your network's resources as well. This can also result in a higher ping on your part, which will dramatically affect your gameplay.
• Your ISP - the problem might also lie with your ISP. Maybe your internet service provider does not route traffic optimally for gaming, which is the case with a lot of internet service providers. Many of our ISP friends provide dedicated gaming packages with wtfast included in the bundle, which gives you a stronger connection and all the power of wtfast's dedicated network boosting service, too!
• You're not using dedicated gaming software - using a gaming VPN like wtfast will improve your gaming performance by regulating and reducing your ping. Instead of just using the "normal" server connection, wtfast creates a custom connection that reduces the number of hops needed for you to reach the server. The quicker you get to the server, the faster your game will be!
What are ping spikes, and how do they affect Monster Hunter Rise?
Imagine playing Monster Hunter Rise, and you are about to make a critical move that requires the quickest reflexes. You're all ready. You aim. And... BAM! Your little brother runs up and shoves you out of your chair. Unsurprisingly, you miss your critical move entirely! Not only do you lose the match, but now you're pissed off with your little brother. A ping spike is like having your brother shove you out of your chair right as you're about to make that move in Monster Hunter Rise. A ping spike is what happens when your lag jumps suddenly, like when you go from a stable 10ms ping time to an unstable 300ms response. These drastic ping spikes are your enemy when it comes to online games and latency-sensitive applications where real-time interaction is expected. When your ping spikes suddenly, it almost always results in a missed move in games like first-person shooters, MOBAs, or fighting games. The more responsive and 'twitchy' the game mechanic, the more critical it is to make sure your connection is both fast AND smooth.
How is my ping time to Monster Hunter Rise calculated in WTFast?
WTFast calculates an average ping time between you and the Monster Hunter Rise server, across each of the hops. While we do our best to calculate every hop along the route, there are some 'hop points' that are behind firewalls or obfuscated (i.e., hidden from us) which make measuring the ping time to and from that location a bit trickier. By calculating the difference between the total ping time between you and the Monster Hunter Rise server, we can calculate an approximate time between obfuscated hop points, but it can be slightly less accurate than the non-obfuscated points.
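As a rough illustration (hypothetical numbers, not WTFast's actual implementation), per-segment times can be approximated from the cumulative round-trip times reported at each hop:

# Hypothetical cumulative round-trip times (ms) reported at each hop.
hop_rtts_ms = [4.2, 11.8, 35.1, 52.9]

# Approximate time spent on each segment between consecutive hops.
segments = [round(later - earlier, 1)
            for earlier, later in zip(hop_rtts_ms, hop_rtts_ms[1:])]
print(segments)  # [7.6, 23.3, 17.8]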
What is jitter, and what is its effect on Monster Hunter Rise?
You might already know about checking your ping and latency to improve your gaming experience. But there is one more factor that has a significant effect on the smooth gameplay you're supposed to have, and that is jitter. What is jitter? Jitter is an average of the change in ping over time, or how your latency score fluctuates. It is the variation in latency, and it's a problem because it makes the experience unpredictable. Average internet users commonly ignore it, but to online gamers, especially the ones who are playing multiplayer shooter games, this problem is a big deal. Jitter (or more accurately, latency fluctuation/flux) shows itself during games like Monster Hunter Rise through choppy gameplay. You're running around, and suddenly the world freezes. Once it unfreezes, everything has changed, as if time had stood still for you and then got caught up suddenly. Even if this only happened for half a second, you'll probably notice the stuttery gameplay. Jitter causes the latency to change rapidly, for example from 10ms to 80ms and back. In such a situation, it's tough for game servers to provide a fair environment for all players, which can be very frustrating.
Help! My Monster Hunter Rise lag is the worst. How do I fix it?
First off, you've come to the right place. We're anti-lag here at WTFast! Secondly, keep in mind that 'lag' is a general term and could apply to both slow down (e.g., frames per second/FPS), or reduced responsiveness during gameplay. For clarity, we're going to focus less on graphical lag and more on your network latency and performance. At WTFast, we make software to help you boost your network performance for latency-sensitive programs like Monster Hunter Rise. We can help improve your network latency!
How do I reduce my Monster Hunter Rise ping?
We know how hard it is to fight the never-ending battle between you and lag, ping spikes, and more. As a lot of gamers say: the lower the ping, the better your gaming experience. So first and foremost, you need to know what ping is and how it affects your Monster Hunter Rise gameplay. Ping is the response time between two computers; in online gaming, it is the response time between the client and the game server. Ping time is measured in milliseconds and tells you how long a data packet takes to travel back and forth between the client and the game server. Simply put, whenever a gamer connects to an online game, a reduced ping is a gamer's best friend, while a high ping is a gamer's worst enemy. Your ping time can literally be the difference between winning and losing.
So how can you reduce your Monster Hunter Rise ping? Before connecting to Monster Hunter Rise, check your ping beforehand; the WTFast Ping Test is one of the best ways to do so. Pings of less than 100ms are ideal for online gamers, while pings higher than 150ms will start to show lag. Even the best gaming rig in town won't reduce your ping unless you have a very strong internet connection, and surprisingly enough, sometimes even a strong internet connection isn't the solution to your ping problems. Reducing your ping can be done in several ways:
• Use a wired internet connection. Using a wireless connection may result in a lot of ping problems, which is why a wired connection is the best option for online gaming.
• Use your local servers, or the servers closest to your location. This can dramatically reduce your ping, as it allows better communication between the game server and your PC.
• Close all applications running in the background. This prevents your computer from multitasking: many running applications consume your computer's memory and affect the overall performance of your gameplay. Also close all software that demands bandwidth, as it leeches your bandwidth and will give you a terrible amount of delay and high ping times.
• Disable software updates before you start playing, as updates tend to consume a lot of bandwidth. You can turn updates back on after playing.
• If you are using a wireless connection, reduce the number of devices connected to the WiFi. If possible, disconnect all other devices on your local network so that you're the only one consuming the bandwidth. If not, the next best thing is to connect your PC to the router through an ethernet cable.
• Restart your router. Routers and modems tend to work non-stop, which results in a congested data registry; restarting your router refreshes your connection and can lower your ping. If your router has been around since the stone age, consider buying a newer replacement: replacing your router will noticeably improve your internet connection speed, strength, and consistency, and your ping will likely be lower too.
• Use dedicated gaming software. WTFast can lower your ping by reducing the number of hops it takes to get to your gaming server. By decreasing the number of hops between you and the Monster Hunter Rise game server, you'll see a significant reduction in your ping and have a much, much better online gaming experience! The advantage of WTFast is that you aren't limited to a single internet connection between you and the game router. Instead, WTFast gives you thousands and thousands of potential alternative routes between you and the game server, giving you a multitude of options and choices for improving your connection! With such a powerful utility, we encourage our users to experiment with different connection routes to find the path that's best for them and their favorite game.
How do I fix jitter or latency flux in Monster Hunter Rise?
Jitter is the sudden deviation in your ping whenever you are playing online games. For example, your average ping might be 45ms, but with jitter that 45ms might spike to 90ms or even 300ms for a short time before going back down. These sudden spikes in ping may throw off your game, causing you to miss that game-winning moment in games like Monster Hunter Rise. A simple thing like jitter can cause you to lose a game, making your gaming skill completely irrelevant. How can you fix jitter? The main cause of jitter is variation in the average latency of your packets, so you can fix your jitter by lowering your latency:
• Use a wired internet connection. This is highly advisable for online gaming, rather than a wireless connection: a wired connection prevents fluctuations and lost packets, which can significantly improve and lower your latency.
• Use a high-speed internet connection. Lower internet speeds may cause jitter or latency flux, especially when you're sharing the connection with other people or other devices. Switching to fiber, or simply upgrading your connection and increasing your bandwidth, can transform your gaming into smooth, jitter-free gameplay.
• Use a powerful router. Your router is the heart of your internet connection, so you'll want to invest in the best here. Maybe your router has been there for ages and isn't working well for your needs anymore. Look for a powerful router fit for your gaming needs; do some research and check reviews to verify its quality, and make sure its bandwidth capacity is high enough to handle the traffic your household produces. We suggest finding a router with WTFast built-in, so the router can handle all the complexity of finding the best route across the thousands and thousands of potential connection routes available in WTFast. Our friends at ASUS offer dedicated gaming routers with WTFast built-in, giving you peace of mind for gaming without headache.
• If you're still experiencing jitter and aren't ready to upgrade your router, you can boost your game with software that handles network optimization for you. That software is WTFast! WTFast uses a custom business-grade network infrastructure that is perfect for your gaming needs. Our optimization services provide a more reliable and responsive connection, which can help you reduce jitter and improve your network stability.
How does WTFast help improve my Monster Hunter Rise ping time?
WTFast is a utility that helps redirect your internet traffic from the 'regular' internet to a private connection. Think of WTFast a bit like a 'fast lane,' where there are fewer cars on the road or fewer planes in the sky. For instance, private airlines fly higher in the air than standard commercial airline routes. Operating in less busy airspace allows private airlines to fly faster than the usual commercial lines, often allowing business people to get to their destinations more quickly. There are multiple hops between your home internet connection and the Monster Hunter Rise server. WTFast allows you to reduce the number of hops to get much closer to your game server. When you use WTFast, you can select a Proxy Server that sends your traffic directly to the game server instead of leaving your connection up to chance. Standard internet traffic routing wasn't intended for low-latency gaming, like playing Monster Hunter Rise with fast ping times. Instead, it was designed to handle large quantities of data shuttling between places in the most cost-effective way possible. Bandwidth is expensive, especially when you're streaming Netflix in 4K. And since video streaming is such a popular activity on the internet, internet service providers have to optimize for the most popular uses of their services. Decreasing the number of hops between you and the Monster Hunter Rise game server can help, as well as routing your traffic through quieter and less populated proxy paths (what we call our "GPN Nodes".) We support thousands of games and over 60,000 combinations of GPN proxy servers to handle your connection across 190 countries. There are over 72 billion potential combinations of game, game server region, country, and proxy routes that we handle. If one path using WTFast doesn't work for you, consider trying different server route.
What is a traceroute, and how does it apply to Monster Hunter Rise?
Think of a traceroute as an 'audit trail' for your Monster Hunter Rise connection. Your data is usually routed through several servers between you and the Monster Hunter Rise servers. A traceroute helps you measure and visualize the route your traffic takes, showing you each gateway or 'hop' along the way. For your connection to Monster Hunter Rise, your data packets will usually travel across multiple 'hops' to get from your computer to the server. Your data will often change hands across various networks to get from Point A (your device) to Point B (the game server). A traceroute measures your data packets as they're set from your computer across all the various 'hops' between Point A and Point B. When you use a traceroute, your connection history is recorded as "round-trip time." A traceroute shows you a list of each of the points your connection hits as it travels between you and the Monster Hunter Rise server.
What is latency and how does it affect my response time in Monster Hunter Rise?
Latency is a term that is commonly used in online gaming. Latency (in an online gaming context) refers to the average total time that it takes for your computer to send data to the gaming server. Latency is measured in milliseconds, and a second is composed of 1000 milliseconds. On the other hand, your game response time is the time it takes for the data and the corresponding event to reach the game server and then travel back to your computer. Basically, your response time is 2x the latency, which means that if you lower your latency by 250ms, you'll also reduce your game response time by 500ms, which is half a second. If you lower your latency by 500ms, you'll lower your game response time by 1000ms, a full second, and so on. The lower your latency, the faster the data is delivered to the game server and the quicker the data returns to your computer. A low-latency connection brings a significant improvement to your gameplay, especially in fast-paced games like Monster Hunter Rise where you need to execute actions quickly. WTFast helps in reducing your latency, as well as protecting you from lag spikes and high ping times. We have dedicated servers across 190 different countries, which reduces the number of hops needed to transfer data between different servers. By using WTFast, you have more chances of lowering your latency and achieving a much better gaming experience.
In some cities, citizens worry that police officers use violent tactics far too often. How many officers really use excessive force, and what is the most effective way for cities to combat this?
In this lesson, students compare two distributions of complaints against police officers. They analyze the fraction of complaints that officers are responsible for and evaluate the effectiveness of policy proposals in each scenario.
Students will
• Given a histogram of excessive force complaints find the number of officers with a given range of complaints
• Use the distribution, mean, median and mode of a dataset to determine what value is “typical” of the data
• Decide whether a policy that targets the worst 10% of officers will effectively reduce the total number of complaints in each scenario
• Understand how the distribution of complaints affects a given remedy's effectiveness
Before you begin
Students should be able to calculate and use percentages, and they should also be able to read and make sense of a histogram.
Shoutouts
This American Life, Malcolm Gladwell
Using jquery ajax to download a binary file
I am attempting to use jquery ajax to download a binary audio file.
Normally I would just issue a command like this:
window.location.href = 'http://marksdomain(dot)com/audioFile.wav' ;
However, recently our server has been waiting too long to respond, and I get a nasty gateway timeout message.
It has been suggested that I use jquery-ajax instead, which makes sense since then I would have more control over the timeout.
Here is the code I have played with so far:
$.ajax({
    url: 'http://marksdomain(dot)com/audioFile.wav',
    timeout: 999999,
    dataType: 'binary',
    processData: false, // this one does not seem to do anything ?
    success: function(result) {
        console.log(result.length);
    },
    error: function(result, errStatus, errorMessage) {
        console.log(errStatus + ' -- ' + errorMessage);
    }
});
When I omit the "dataType", the binary file comes through about three times larger than it actually is on the server. However, when I make the dataType equal to "binary", ajax throws an error:
"No conversion from text to binary"
From some earlier posts, it sounds as if jquery-ajax cannot handle binary files in this manner.
I did discover Delivery.js which actually works quite well for what I am attempting, but I would rather not use a node solution if possible.
Any suggestions?
Answer
Just use XHR directly. This example is taken from MDN:
var oReq = new XMLHttpRequest();
oReq.open("GET", "/myfile.png", true);
oReq.responseType = "arraybuffer";

oReq.onload = function(oEvent) {
    var arrayBuffer = oReq.response;

    // if you want to access the bytes:
    var byteArray = new Uint8Array(arrayBuffer);
    // ...

    // If you want to use the image in your DOM:
    var blob = new Blob([arrayBuffer], {type: "image/png"}); // Blob takes an array of parts
    var url = URL.createObjectURL(blob);
    someImageElement.src = url;

    // whatever...
};

oReq.send();
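Since the question is about downloading an audio file rather than displaying an image, one possible follow-up in the onload handler (standard browser APIs; the file name here is just the one from the question) would be:

var blob = new Blob([oReq.response], { type: "audio/wav" });
var link = document.createElement("a");
link.href = URL.createObjectURL(blob);
link.download = "audioFile.wav"; // suggested file name for the save dialog
document.body.appendChild(link);
link.click();
URL.revokeObjectURL(link.href);
link.remove();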
Snake AI in Pygame
In this article we will show you how to create basic game AI (Artificial Intelligence), using the game snake as an example.
In this game (snake) both the computer and you play a snake, and the computer snake tries to catch you. In short: the opponent AI determines a destination point based on your location on the board and moves toward it.
Adding the computer player:
We extend the code with a new class called Computer which will be our computer player. This contains routines to draw and move the computer snake.
class Computer:
    x = [0]
    y = [0]
    step = 44
    direction = 0
    length = 3

    updateCountMax = 2
    updateCount = 0

    def __init__(self, length):
        self.length = length
        for i in range(0,2000):
            self.x.append(-100)
            self.y.append(-100)

        # initial positions, no collision.
        self.x[0] = 1*44
        self.y[0] = 4*44

    def update(self):
        self.updateCount = self.updateCount + 1
        if self.updateCount > self.updateCountMax:
            # update previous positions
            for i in range(self.length-1,0,-1):
                self.x[i] = self.x[i-1]
                self.y[i] = self.y[i-1]

            # update position of head of snake
            if self.direction == 0:
                self.x[0] = self.x[0] + self.step
            if self.direction == 1:
                self.x[0] = self.x[0] - self.step
            if self.direction == 2:
                self.y[0] = self.y[0] - self.step
            if self.direction == 3:
                self.y[0] = self.y[0] + self.step

            self.updateCount = 0

    def moveRight(self):
        self.direction = 0

    def moveLeft(self):
        self.direction = 1

    def moveUp(self):
        self.direction = 2

    def moveDown(self):
        self.direction = 3

    def draw(self, surface, image):
        for i in range(0,self.length):
            surface.blit(image,(self.x[i],self.y[i]))
We then call the update and draw methods of the computer.

def on_loop(self):
    self.player.update()
    self.computer.update()
    ....

def on_render(self):
    self._display_surf.fill((0,0,0))
    self.player.draw(self._display_surf, self._image_surf)
    self.apple.draw(self._display_surf, self._apple_surf)
    self.computer.draw(self._display_surf, self._image_surf)
    pygame.display.flip()
This will make the computer snake move and be drawn on the screen. It has the same properties as the human player.
Adding the intelligence to the computer player
Because this is a simple game, we do not need to create a complete thinking machine inside the game. We simply need some basic intelligence exhibited by our computer player. Intelligence in games is often quite limited because most of the time more complexity is not necessary, or there simply is not the time available to implement clever algorithms.
The algorithm we will add will simply go to the destination. It will neglect any obstacles (the human player).
def target(self,dx,dy):
    if self.x[0] > dx:
        self.moveLeft()
    if self.x[0] < dx:
        self.moveRight()

    if self.x[0] == dx:
        if self.y[0] < dy:
            self.moveDown()
        if self.y[0] > dy:
            self.moveUp()
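The target() method above chases the destination blindly, matching the tutorial's intent. As a possible extension (a sketch for illustration, not part of the original tutorial), the computer could skip moves that would enter a blocked cell; blocked here is a hypothetical set of (x, y) tuples occupied by the human player:

def target_avoiding(self, dx, dy, blocked):
    # candidate moves, each paired with the cell the head would enter
    candidates = [
        (self.moveLeft,  (self.x[0] - self.step, self.y[0])),
        (self.moveRight, (self.x[0] + self.step, self.y[0])),
        (self.moveUp,    (self.x[0], self.y[0] - self.step)),
        (self.moveDown,  (self.x[0], self.y[0] + self.step)),
    ]
    best = None
    for move, (nx, ny) in candidates:
        if (nx, ny) in blocked:
            continue
        dist = abs(nx - dx) + abs(ny - dy)  # Manhattan distance to the target
        if best is None or dist < best[0]:
            best = (dist, move)
    if best:
        best[1]()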
Complete code
We end up with this complete code:
from pygame.locals import *
from random import randint
import pygame
import time

class Apple:
    x = 0
    y = 0
    step = 44

    def __init__(self,x,y):
        self.x = x * self.step
        self.y = y * self.step

    def draw(self, surface, image):
        surface.blit(image,(self.x, self.y))

class Player:
    x = [0]
    y = [0]
    step = 44
    direction = 0
    length = 3

    updateCountMax = 2
    updateCount = 0

    def __init__(self, length):
        self.length = length
        for i in range(0,2000):
            self.x.append(-100)
            self.y.append(-100)

        # initial positions, no collision.
        self.x[0] = 1*44
        self.y[0] = 2*44

    def update(self):
        self.updateCount = self.updateCount + 1
        if self.updateCount > self.updateCountMax:
            # update previous positions
            for i in range(self.length-1,0,-1):
                self.x[i] = self.x[i-1]
                self.y[i] = self.y[i-1]

            # update position of head of snake
            if self.direction == 0:
                self.x[0] = self.x[0] + self.step
            if self.direction == 1:
                self.x[0] = self.x[0] - self.step
            if self.direction == 2:
                self.y[0] = self.y[0] - self.step
            if self.direction == 3:
                self.y[0] = self.y[0] + self.step

            self.updateCount = 0

    def moveRight(self):
        self.direction = 0

    def moveLeft(self):
        self.direction = 1

    def moveUp(self):
        self.direction = 2

    def moveDown(self):
        self.direction = 3

    def draw(self, surface, image):
        for i in range(0,self.length):
            surface.blit(image,(self.x[i],self.y[i]))

class Computer:
    x = [0]
    y = [0]
    step = 44
    direction = 0
    length = 3

    updateCountMax = 2
    updateCount = 0

    def __init__(self, length):
        self.length = length
        for i in range(0,2000):
            self.x.append(-100)
            self.y.append(-100)

        # initial positions, no collision.
        self.x[0] = 1*44
        self.y[0] = 4*44

    def update(self):
        self.updateCount = self.updateCount + 1
        if self.updateCount > self.updateCountMax:
            # update previous positions
            for i in range(self.length-1,0,-1):
                self.x[i] = self.x[i-1]
                self.y[i] = self.y[i-1]

            # update position of head of snake
            if self.direction == 0:
                self.x[0] = self.x[0] + self.step
            if self.direction == 1:
                self.x[0] = self.x[0] - self.step
            if self.direction == 2:
                self.y[0] = self.y[0] - self.step
            if self.direction == 3:
                self.y[0] = self.y[0] + self.step

            self.updateCount = 0

    def moveRight(self):
        self.direction = 0

    def moveLeft(self):
        self.direction = 1

    def moveUp(self):
        self.direction = 2

    def moveDown(self):
        self.direction = 3

    def target(self,dx,dy):
        if self.x[0] > dx:
            self.moveLeft()
        if self.x[0] < dx:
            self.moveRight()

        if self.x[0] == dx:
            if self.y[0] < dy:
                self.moveDown()
            if self.y[0] > dy:
                self.moveUp()

    def draw(self, surface, image):
        for i in range(0,self.length):
            surface.blit(image,(self.x[i],self.y[i]))

class Game:
    def isCollision(self,x1,y1,x2,y2,bsize):
        if x1 >= x2 and x1 <= x2 + bsize:
            if y1 >= y2 and y1 <= y2 + bsize:
                return True
        return False

class App:
    windowWidth = 800
    windowHeight = 600
    player = 0
    apple = 0

    def __init__(self):
        self._running = True
        self._display_surf = None
        self._image_surf = None
        self._apple_surf = None
        self.game = Game()
        self.player = Player(5)
        self.apple = Apple(8,5)
        self.computer = Computer(5)

    def on_init(self):
        pygame.init()
        self._display_surf = pygame.display.set_mode((self.windowWidth,self.windowHeight), pygame.HWSURFACE)
        pygame.display.set_caption('Pygame pythonspot.com example')
        self._running = True
        self._image_surf = pygame.image.load("pygame.png").convert()
        self._apple_surf = pygame.image.load("apple.png").convert()

    def on_event(self, event):
        if event.type == QUIT:
            self._running = False

    def on_loop(self):
        self.computer.target(self.apple.x, self.apple.y)
        self.player.update()
        self.computer.update()

        # does snake eat apple?
        for i in range(0,self.player.length):
            if self.game.isCollision(self.apple.x,self.apple.y,self.player.x[i], self.player.y[i],44):
                self.apple.x = randint(2,9) * 44
                self.apple.y = randint(2,9) * 44
                self.player.length = self.player.length + 1

        # does computer eat apple?
        for i in range(0,self.player.length):
            if self.game.isCollision(self.apple.x,self.apple.y,self.computer.x[i], self.computer.y[i],44):
                self.apple.x = randint(2,9) * 44
                self.apple.y = randint(2,9) * 44
                #self.computer.length = self.computer.length + 1

        # does snake collide with itself?
        for i in range(2,self.player.length):
            if self.game.isCollision(self.player.x[0],self.player.y[0],self.player.x[i], self.player.y[i],40):
                print("You lose! Collision: ")
                print("x[0] (" + str(self.player.x[0]) + "," + str(self.player.y[0]) + ")")
                print("x[" + str(i) + "] (" + str(self.player.x[i]) + "," + str(self.player.y[i]) + ")")
                exit(0)

        pass

    def on_render(self):
        self._display_surf.fill((0,0,0))
        self.player.draw(self._display_surf, self._image_surf)
        self.apple.draw(self._display_surf, self._apple_surf)
        self.computer.draw(self._display_surf, self._image_surf)
        pygame.display.flip()

    def on_cleanup(self):
        pygame.quit()

    def on_execute(self):
        if self.on_init() == False:
            self._running = False

        while( self._running ):
            pygame.event.pump()
            keys = pygame.key.get_pressed()

            if (keys[K_RIGHT]):
                self.player.moveRight()
            if (keys[K_LEFT]):
                self.player.moveLeft()
            if (keys[K_UP]):
                self.player.moveUp()
            if (keys[K_DOWN]):
                self.player.moveDown()
            if (keys[K_ESCAPE]):
                self._running = False

            self.on_loop()
            self.on_render()

            time.sleep(50.0 / 1000.0)
        self.on_cleanup()

if __name__ == "__main__" :
    theApp = App()
    theApp.on_execute()
Conclusion
You learned how to create a basic computer player using a very simple AI algorithm.
Game development with Pygame
Welcome to the first tutorial of the series: Building games with Pygame. Games you create with Pygame can be run on any machine that supports Python, including Windows, Linux and Mac OS.
In this tutorial we will explain the fundamental of building a game with Pygame. We’ll start of with the basics and will teach you how to create the basic framework. In the next tutorials you will learn how to make certain types of games.
PyGame introduction
You’ll end up with a program similar to the one on the right:
A game always starts in an order similar to this (pseudo code):
initialize()
while running():
    game_logic()
    get_input()
    update_screen()
deinitialize()
The game starts with initialization. All graphics are loaded, sounds are loaded, levels are loaded and any data that needs to be loaded. The game continues running until it receives a quit event. In this game loop we update the game, get input and update the screen. Depending on the game the implementation widely varies, but this fundamental structure is common in all games.
In Pygame we define this as:
import pygame
from pygame.locals import *

class App:
    windowWidth = 640
    windowHeight = 480
    x = 10
    y = 10

    def __init__(self):
        self._running = True
        self._display_surf = None
        self._image_surf = None

    def on_init(self):
        pygame.init()
        self._display_surf = pygame.display.set_mode((self.windowWidth,self.windowHeight), pygame.HWSURFACE)
        self._running = True
        self._image_surf = pygame.image.load("pygame.png").convert()

    def on_event(self, event):
        if event.type == QUIT:
            self._running = False

    def on_loop(self):
        pass

    def on_render(self):
        self._display_surf.blit(self._image_surf,(self.x,self.y))
        pygame.display.flip()

    def on_cleanup(self):
        pygame.quit()

    def on_execute(self):
        if self.on_init() == False:
            self._running = False

        while( self._running ):
            for event in pygame.event.get():
                self.on_event(event)
            self.on_loop()
            self.on_render()
        self.on_cleanup()

if __name__ == "__main__" :
    theApp = App()
    theApp.on_execute()
The Pygame program starts with the constructor __init__(). Once that is finished, on_execute() is called. This method runs the game: it updates the events and updates the screen. Finally, the game is deinitialized using on_cleanup().
In the initialisation phase we set the screen resolution and start the Pygame library:
def on_init(self):
    pygame.init()
    self._display_surf = pygame.display.set_mode((self.windowWidth,self.windowHeight), pygame.HWSURFACE)
We also load the image.
self._image_surf = pygame.image.load("pygame.png").convert()
This does not draw the image to the screen; that happens in on_render().
def on_render(self):
    self._display_surf.blit(self._image_surf,(self.x,self.y))
    pygame.display.flip()
The blit method draws the image (image_surf) at the coordinate (x,y). In Pygame the coordinates start from (0,0) in the top left and run to (windowWidth, windowHeight). The method call pygame.display.flip() updates the screen.
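As a small illustration of this coordinate system (not part of the tutorial's code), you could center the image on the window instead of drawing it at (x, y) by letting the image's rect do the math:

rect = self._image_surf.get_rect(center=(self.windowWidth // 2,
                                         self.windowHeight // 2))
self._display_surf.blit(self._image_surf, rect)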
Continue to the next tutorial and learn how to add game logic and build games :-)
Interviewer: why don't different return types count as method overloading?
Posted by ntg on Wed, 05 Jan 2022 06:13:53 +0100
This article is included in "Java common interview questions": https://gitee.com/mydb/interview
Method overloading means that multiple methods with the same name are defined in the same class, but the parameter type or number of parameters of each method is different.
For example, the following four methods overload one another, as shown in the following code:
public class OverloadExample {
    public void method() {
        // doSomething
    }

    public void method(String name) {
        // doSomething
    }

    public void method(Integer id) {
        // doSomething
    }

    public void method(Integer id, String name) {
        // doSomething
    }
}
Why do different return types not count as method overloading?
To answer this question, we must first cover a preliminary concept: the method signature.
Method signature is a unique value composed of method name + parameter type + number of parameters. This unique value is the method signature, and the JVM (Java virtual machine) determines which method to call through this method signature.
From the composition rules of the method signature, we can see that the return type of a method is not part of the method signature. Therefore, when there are multiple methods with the same method name and parameters but different return types in the same class, the JVM cannot judge which method to call through the method signature.
So why can't a return type be part of a method signature?
The reason is actually very simple. Imagine that if the return type of a method were also part of the method signature, then when a programmer writes code to call an "overloaded" method, the JVM could not distinguish which method to call, as shown in the following code:
public class OverloadExample {
    public static void main(String[] args) {
        OverloadExample example = new OverloadExample();
        example.method("Brother Lei"); // Which method should the JVM call?
    }

    public int method(String name) {
        // doSomething
        return 666;
    }

    public String method(String name) {
        // doSomething
        return "Brother Lei talking about programming";
    }
}
In the above case, the JVM cannot infer which method to call, so the return type of the method cannot be part of the method signature.
Usage scenario of method overloading
The classic usage scenario of method overloading is the valueOf method of the String type. There are 9 overloaded implementations of valueOf, covering boolean, char, char[] (a full array or a sub-range of one), double, float, int, long, and Object parameters.
It can convert arrays, objects, and primitive data types to the string type.
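A few quick illustrative calls (not from the original article) showing different overloads being selected:

String a = String.valueOf(42);                    // valueOf(int)     -> "42"
String b = String.valueOf(3.14);                  // valueOf(double)  -> "3.14"
String c = String.valueOf(true);                  // valueOf(boolean) -> "true"
String d = String.valueOf(new char[] {'h', 'i'}); // valueOf(char[])  -> "hi"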
Method overload matching principle
Method overload resolution follows a priority order, which the following code illustrates:
public class OverloadExample {
    public static void main(String[] args) {
        OverloadExample example = new OverloadExample();
        example.method(12);
    }

    public void method(int num) {
        System.out.println("call int method");
    }

    public void method(long num) {
        System.out.println("call long method");
    }

    public void method(Integer num) {
        System.out.println("call Integer method");
    }

    public void method(Object num) {
        System.out.println("call Object method");
    }

    public void method(int... num) { // Optional parameters
        System.out.println("call int... method");
    }
}
When method overloading occurs, which method does the program call? Running the above program prints "call int method".
Therefore, we can draw the following conclusions.
Matching principle 1: exact type matching
When the argument types are exactly the same as a method's parameter types, that method is called. This is the first-priority matching principle: exact type matching.
Matching principle 2: primitive types are automatically widened to larger primitive types
Next, we delete the exact-match method and observe what matches second. The implementation code is as follows:
public class OverloadExample {
    public static void main(String[] args) {
        OverloadExample example = new OverloadExample();
        example.method(12);
    }

    public void method(long num) {
        System.out.println("call long method");
    }

    public void method(Integer num) {
        System.out.println("call Integer method");
    }

    public void method(Object num) {
        System.out.println("call Object method");
    }

    public void method(int... num) { // Optional parameters
        System.out.println("call int... method");
    }
}
Running the above program prints "call long method".
Therefore, we can conclude that for a primitive argument, the second matching principle of method overloading is automatic widening to a larger primitive type.
Matching principle 3: automatic boxing / unboxing matching
Next, delete the long method in the second matching principle, and the implementation code is as follows:
public class OverloadExample {
    public static void main(String[] args) {
        OverloadExample example = new OverloadExample();
        example.method(12);
    }

    public void method(Integer num) {
        System.out.println("call Integer method");
    }

    public void method(Object num) {
        System.out.println("call Object method");
    }

    public void method(int... num) { // Optional parameters
        System.out.println("call int... method");
    }
}
Running the above program prints "call Integer method".
As the result shows, the third matching principle of method overloading is matching via automatic boxing or unboxing.
Matching principle 4: match up according to the inheritance route
At this time, delete the Integer method in the third matching principle, and the remaining code is as follows:
public class OverloadExample {
    public static void main(String[] args) {
        OverloadExample example = new OverloadExample();
        example.method(12);
    }

    public void method(Object num) {
        System.out.println("call Object method");
    }

    public void method(int... num) { // Optional parameters
        System.out.println("call int... method");
    }
}
Running the above program prints "call Object method".
As the result shows, the fourth matching principle of method overloading is matching upward along the inheritance chain to parent types.
Matching principle 5: variable parameter matching
Finally, delete the Object method from the code, leaving only the variable-parameter (varargs) version. The implementation code is as follows:
public class OverloadExample {
    public static void main(String[] args) {
        OverloadExample example = new OverloadExample();
        example.method(12);
    }

    public void method(int... num) { // Optional parameters
        System.out.println("call int... method");
    }
}
Running the above program prints "call int... method".
As the result shows, the fifth matching principle of method overloading is matching against variable-length (varargs) parameters.
summary
Defining multiple methods with the same name in the same class, where each method differs in parameter types or parameter count, is method overloading. The typical use scenario of method overloading is the valueOf method in String, which has nine implementations. The method return type cannot serve as a basis for method overloading because it is not part of the method signature. There are five matching principles for method overloading: exact matching, automatic widening of primitive types, automatic boxing/unboxing, upward matching along the inheritance chain, and varargs matching.
Reference: code out efficiency
Judge right and wrong for yourself, let others decide your praise and blame, and leave gains and losses to fate.
How to set a cell's text style
2021-07-25 18:54:42
Contents
Background
Setting the font size and bold

Background
In the previous article we covered how to set a cell's borders, and in the process we modified node-xlsx.
All of the style settings that follow rely on that modified node-xlsx!

Setting the font size and bold
Setting a cell's font size, and whether it is bold, is fairly simple, as follows:

const xlsx = require('node-xlsx');
const fs = require('fs')

let s = {
  font: {
    sz: 19, // font size; the unit is not px and renders a bit larger than px (exactly which unit it is, I don't know yet)
    bold: true, // bold
  },
};

let data = [
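The snippet is cut off at let data = [. For completeness, here is a guess at how it likely continues, based on the pattern of the series (this assumes the author's modified node-xlsx, where a cell can be written as an object of the form { v: value, s: style }; the sheet name and output file name are made up):

let data = [
  [{ v: 'Heading', s: s }, 'plain cell'],
];
let buffer = xlsx.build([{ name: 'sheet1', data: data }]);
fs.writeFileSync('output.xlsx', buffer);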
The code is as follows (the div's inline style attribute was stripped by the source site and shows as xss=removed):

<table>
  <tr>
    <td>
      <div xss=removed>
        <nobr title='place the text to display here'>
          place the text to display here
        </nobr>
      </div>
    </td>
  </tr>
</table>

The effect is as follows:
Setting cell formats with Aspose.Cells
2018-12-18 19:22:00
When working with Excel through Aspose.Cells, filling in a parameter like this gives a result you clearly don't want!
What you need is the equivalent of Excel's "Convert to Number" operation, and the powerful Aspose.Cells solves this problem with ease.
// default way of writing
worksheet.Cells["A1"].PutValue(10);
worksheet.Cells["A1"].PutValue("10");

// convert to a number
worksheet.Cells["A1"].PutValue(10, true);
worksheet.Cells["A1"].PutValue("10", true);

// set the cell format
Style style = worksheet.Cells["A1"].GetStyle();
style.Number = 1;
worksheet.Cells["A1"].SetStyle(style);

// fill the cell and set its format
worksheet.Cells["A1"].PutValue("10", true, style);
Source: Python中文社区 (Python Chinese Community)
OpenPyXL lets you style your cells in many different ways. Styling cells will give your spreadsheets pizzazz! It will help them stand apart from other spreadsheets. However, don't overuse it: if every cell had a different font and color, your spreadsheet would look like a mess.
You should use the skills you learn in this article sparingly. If you'd like to learn more about the styles OpenPyXL supports, you can check out their documentation:
https://openpyxl.readthedocs.io/en/stable/styles.html
In this article, you will learn about the following:
• Setting fonts
• Setting alignment
• Adding borders
• Changing the cell background color
• Inserting an image into a cell
• Styling merged cells
• Using built-in styles
• Creating custom named styles
Now it's time to start exploring how to work with fonts using OpenPyXL!
Setting the Font
You use fonts to style your text on a computer. A font controls the size, weight, color, and style of the text you see on screen or in print. There are thousands of fonts your computer can use. Microsoft includes many fonts with its Office products.
When you want to set a font with OpenPyXL, you need to import the Font class from openpyxl.styles. The following demonstrates how you would do the import:
from openpyxl.styles import Font
The Font class takes many parameters. According to OpenPyXL's documentation, this is the full list of the Font class's parameters:
class openpyxl.styles.fonts.Font(name=None, sz=None, b=None, i=None, charset=None, u=None,
strike=None, color=None, scheme=None, family=None, size=None, bold=None, italic=None,
strikethrough=None, underline=None, vertAlign=None, outline=None, shadow=None,
condense=None, extend=None)
The following list shows the parameters you are most likely to use, together with their default values:
• name='Calibri'
• size=11
• bold=False
• italic=False
• vertAlign=None
• underline='none'
• strike=False
• color='FF000000'
These parameters let you set most of what you need to make your text look nice. Note that color names in OpenPyXL use hexadecimal values that represent RGB (red, green, blue) color values. You can also set whether the text should be bold, italic, underlined, or struck through.
To learn how to use fonts in OpenPyXL, create a new file named font_sizes.py and add the following code to it:
# font_sizes.py

import openpyxl
from openpyxl.styles import Font

def font_demo(path):
    workbook = openpyxl.Workbook()
    sheet = workbook.active
    cell = sheet["A1"]
    cell.font = Font(size=12)
    cell.value = "Hello"

    cell2 = sheet["A2"]
    cell2.font = Font(name="Arial", size=14, color="00FF0000")
    sheet["A2"] = "from"

    cell2 = sheet["A3"]
    cell2.font = Font(name="Tahoma", size=16, color="00339966")
    sheet["A3"] = "OpenPyXL"

    workbook.save(path)

if __name__ == "__main__":
    font_demo("font_demo.xlsx")
This code uses three different fonts in three different cells. In A1, you use the default, which is Calibri. Then in A2, you set the font to Arial and increase the size to 14 points. Finally, in A3, you change the font to Tahoma and the font size to 16 points.
For the second and third fonts, you also change the text color. In A2, you set the color to red, and in A3, you set the color to green.
When you run this code, your output will look like this:
Try changing the code to use other fonts or colors. If you want to be adventurous, try making your text bold or italic.
Now you're ready to learn about text alignment.
Setting Alignment
You can set alignment in OpenPyXL by using openpyxl.styles.Alignment. You use this class to rotate text, set text wrapping, and set indentation.
Here are the default values the Alignment class uses:
• horizontal='general'
• vertical='bottom'
• text_rotation=0
• wrap_text=False
• shrink_to_fit=False
• indent=0
Open your Python editor and create a new file named alignment.py. Then add the following code to it:
# alignment.py

from openpyxl import Workbook
from openpyxl.styles import Alignment

def center_text(path, horizontal="center", vertical="center"):
    workbook = Workbook()
    sheet = workbook.active
    sheet["A1"] = "Hello"
    sheet["A1"].alignment = Alignment(horizontal=horizontal,
                                      vertical=vertical)
    sheet["A2"] = "from"
    sheet["A3"] = "OpenPyXL"
    sheet["A3"].alignment = Alignment(text_rotation=90)
    workbook.save(path)

if __name__ == "__main__":
    center_text("alignment.xlsx")
When you run this code, the string in A1 is centered both horizontally and vertically. Then you use the defaults for A2. Finally, for A3, you rotate the text 90 degrees.
Try running this code, and you will see something like the following:
It's worth taking the time to try out different text_rotation values. Then try changing the horizontal and vertical parameters to different values. Pretty soon you'll be able to align your text like a pro!
Now you're ready to learn how to add borders to your cells!
Adding Borders
OpenPyXL gives you the ability to style the borders of a cell. You can specify a different border style for each of the four sides of a cell.
You can use any of the following border styles:
• 'dashDot'
• 'dashDotDot'
• 'dashed'
• 'dotted'
• 'double'
• 'hair'
• 'medium'
• 'mediumDashDot'
• 'mediumDashDotDot'
• 'mediumDashed'
• 'slantDashDot'
• 'thick'
• 'thin'
Open your Python editor and create a new file named border.py. Then enter the following code in your file:
# border.py

from openpyxl import Workbook
from openpyxl.styles import Border, Side

def border(path):
    pink = "00FF00FF"
    green = "00008000"
    thin = Side(border_style="thin", color=pink)
    double = Side(border_style="double", color=green)

    workbook = Workbook()
    sheet = workbook.active

    sheet["A1"] = "Hello"
    sheet["A1"].border = Border(top=double, left=thin, right=thin, bottom=double)

    sheet["A2"] = "from"

    sheet["A3"] = "OpenPyXL"
    sheet["A3"].border = Border(top=thin, left=double, right=double, bottom=thin)

    workbook.save(path)

if __name__ == "__main__":
    border("border.xlsx")
This code adds borders to cells A1 and A3. The top and bottom of A1 use the 'double' border style and are green, while the sides of the cell use the 'thin' border style and are pink.
Cell A3 uses the same borders but swaps them, so the sides are now green and the top and bottom are pink.
You get this effect by creating Side objects with the border_style and the color you want to use. Then you pass those Side objects to the Border class, which lets you set each of the four sides of a cell individually. To apply a border to a cell, you set the cell's border attribute.
When you run this code, you will see the following result:
This image is zoomed in a lot so that you can easily see the cells' borders. If you try modifying this code with some of the other border styles mentioned at the beginning of this section, you can get a feel for what else is possible.
Changing the Cell Background Color
You can highlight a cell or a range of cells by changing their background color. In most cases, highlighting a cell is more eye-catching than changing the text's font or color. OpenPyXL gives you a class called PatternFill that you can use to change a cell's background color.
The PatternFill class takes the following parameters (defaults included below):
• patternType=None
• fgColor=Color()
• bgColor=Color()
• fill_type=None
• start_color=None
• end_color=None
There are several different fill types you can use. Here is a list of the currently supported fill types:
• 'none'
• 'solid'
• 'darkDown'
• 'darkGray'
• 'darkGrid'
• 'darkHorizontal'
• 'darkTrellis'
• 'darkUp'
• 'darkVertical'
• 'gray0625'
• 'gray125'
• 'lightDown'
• 'lightGray'
• 'lightGrid'
• 'lightHorizontal'
• 'lightTrellis'
• 'lightUp'
• 'lightVertical'
• 'mediumGray'
Now you have enough information to try setting a cell's background color with OpenPyXL. Open a new file in your Python editor and name it background_colors.py. Then add this code to your new file:
# background_colors.py
from openpyxl import Workbook
from openpyxl.styles import PatternFill

def background_colors(path):
    workbook = Workbook()
    sheet = workbook.active
    yellow = "00FFFF00"
    for rows in sheet.iter_rows(min_row=1, max_row=10, min_col=1, max_col=12):
        for cell in rows:
            if cell.row % 2:
                cell.fill = PatternFill(start_color=yellow, end_color=yellow,
                                        fill_type="solid")
    workbook.save(path)

if __name__ == "__main__":
    background_colors("bg.xlsx")
This example iterates over the first ten rows and twelve columns. It sets the background color of every cell in an odd-numbered row to yellow. The cells whose background color changes run from column A through column L.

When you want to set a cell's background color, you set the cell's fill attribute to an instance of PatternFill. In this example, you specify a start_color and an end_color, and you set the fill_type to "solid". OpenPyXL also supports using a GradientFill for the background.

Try running this code. After it runs, you will have a new Excel document with alternating yellow rows.

Here are some ideas that you can try out with this code:

• Change the number of rows or columns that are affected
• Change the colors
• Update the code to color the even rows instead
• Try out other fill types

Once you are done experimenting with background colors, you can learn how to insert an image in a cell!
Inserting an Image into a Cell
OpenPyXL makes inserting an image into your Excel spreadsheets straightforward. To make the magic happen, you use the Worksheet object's add_image() method. This method takes two arguments:

• img – the path to the image file that you are inserting
• anchor – provide a cell as a top-left anchor for the image (optional)

For this example, you will use the Mouse vs. Python logo.

The book's GitHub repository has the image available for you to use.

Once you have downloaded the image, create a new Python file and name it insert_image.py. Then add the following:
# insert_image.py
from openpyxl import Workbook
from openpyxl.drawing.image import Image

def insert_image(path, image_path):
    workbook = Workbook()
    sheet = workbook.active
    img = Image(image_path)
    sheet.add_image(img, "B1")
    workbook.save(path)

if __name__ == "__main__":
    insert_image("logo.xlsx", "logo.png")
Here, you pass in the path of the image that you want to insert. To insert the image, you call add_image(). In this example, you hard-code the use of cell B1 as the anchor cell. Then you save your Excel spreadsheet.

If you open your spreadsheet, you will see the image anchored at cell B1.

You probably won't need to insert an image into an Excel spreadsheet all that often, but it's an excellent skill to have.
Styling Merged Cells
Merged cells are cells where two or more adjacent cells are merged into one. If you want to set the value of a merged cell with OpenPyXL, you must use the top left-most cell of the merged cells.

You must also use that particular cell to set the style for the merged cell as a whole. You can use all the same styles and font settings on a merged cell that you used on an individual cell. However, you must apply the style to the top-left cell for it to apply to the entire merged cell.

You will understand how this works if you see some code. Go ahead and create a new file named style_merged_cell.py. Now enter this code in your file:
# style_merged_cell.py
from openpyxl import Workbook
from openpyxl.styles import Font, Border, Side, GradientFill, Alignment

def merge_style(path):
    workbook = Workbook()
    sheet = workbook.active
    sheet.merge_cells("A2:G4")
    top_left_cell = sheet["A2"]
    light_purple = "00CC99FF"
    green = "00008000"
    thin = Side(border_style="thin", color=light_purple)
    double = Side(border_style="double", color=green)
    top_left_cell.value = "Hello from PyOpenXL"
    top_left_cell.border = Border(top=double, left=thin, right=thin,
                                  bottom=double)
    top_left_cell.fill = GradientFill(stop=("000000", "FFFFFF"))
    top_left_cell.font = Font(b=True, color="FF0000", size=16)
    top_left_cell.alignment = Alignment(horizontal="center",
                                        vertical="center")
    workbook.save(path)

if __name__ == "__main__":
    merge_style("merged_style.xlsx")
Here you create a merged cell that starts at A2 (the top-left cell) and goes through G4. Then you set the cell's value, border, fill, font, and alignment.

When you run this code, your new spreadsheet will show the fully styled merged cell.

You should take some time to try out some different styles on your merged cell. For example, maybe you can come up with a nicer gradient than the gray one used here.

Now you are ready to learn about OpenPyXL's built-in styles!
Using Built-in Styles
OpenPyXL comes with multiple built-in styles that you can use as well. Rather than reproducing the complete list of built-in styles in this book, you should check out the official documentation, as it will be the most up-to-date source for the style names.

However, it is worth noting some of the styles. For example, here are the number format styles you can use:

'Comma', 'Comma [0]', 'Currency', 'Currency [0]', 'Percent'

You can also apply text styles. Here is a listing of those styles:

'Title', 'Headline 1', 'Headline 2', 'Headline 3', 'Headline 4', 'Hyperlink', 'Followed Hyperlink', 'Linked Cell'

OpenPyXL has several other groups of built-in styles. You should check out the documentation to learn about all the different styles that are supported.
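As a quick aside, a built-in number style is applied the same way as the text styles demonstrated below. This small snippet is an illustration added here, not part of the original article; the file name and cell value are arbitrary:

# currency_demo.py — hypothetical quick demo of a built-in number style
from openpyxl import Workbook

workbook = Workbook()
sheet = workbook.active
sheet["A1"] = 1234.5
sheet["A1"].style = "Currency"  # one of OpenPyXL's built-in number format styles
workbook.save("currency_demo.xlsx")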
Now that you know about some of the built-in styles you can use, it's time to write some code! Create a new file and name it builtin_styles.py. Then enter the following code:
# builtin_styles.py
from openpyxl import Workbook

def builtin_styles(path):
    workbook = Workbook()
    sheet = workbook.active
    sheet["A1"].value = "Hello"
    sheet["A1"].style = "Title"
    sheet["A2"].value = "from"
    sheet["A2"].style = "Headline 1"
    sheet["A3"].value = "OpenPyXL"
    sheet["A3"].style = "Headline 2"
    workbook.save(path)

if __name__ == "__main__":
    builtin_styles("builtin_styles.xlsx")
Here you apply three different styles to three different cells: "Title", "Headline 1", and "Headline 2".

When you run this code, you will end up with a spreadsheet showing the three styled cells.

As always, you should try out some of the other built-in styles. Trying them out is the only way to determine what they do and whether they will work for you.

But what if you want to create your own style? That's what you will cover in the next section!
Creating a Custom Named Style
You can also create your own custom styles with OpenPyXL. To create your own style, you use the NamedStyle class.

The NamedStyle class takes the following arguments (the defaults are included too):

name="Normal", font=Font(), fill=PatternFill(), border=Border(), alignment=Alignment(), number_format=None, protection=Protection(), builtinId=None, hidden=False, xfId=None

You should always provide your own name for your NamedStyle to keep it unique. Go ahead and create a new file and name it named_style.py. Then add the following code to it:
# named_style.py
from openpyxl import Workbook
from openpyxl.styles import Font, Border, Side, NamedStyle

def named_style(path):
    workbook = Workbook()
    sheet = workbook.active
    red = "00FF0000"
    font = Font(bold=True, size=22)
    thick = Side(style="thick", color=red)
    border = Border(left=thick, right=thick, top=thick, bottom=thick)
    named_style = NamedStyle(name="highlight", font=font, border=border)
    sheet["A1"].value = "Hello"
    sheet["A1"].style = named_style
    sheet["A2"].value = "from"
    sheet["A3"].value = "OpenPyXL"
    workbook.save(path)

if __name__ == "__main__":
    named_style("named_style.xlsx")
Here you create a Font(), a Side(), and a Border() instance to pass to your NamedStyle(). Once you have created your custom style, you can apply it to a cell by setting the cell's style attribute. Applying a custom style works the same way as applying a built-in style!

You apply the custom style to cell A1.

When you run this code, you will get a spreadsheet with the custom-styled cell.

Now it's your turn! Edit the code to use a different Side style, which will change your border. Or create multiple Side instances so you can make each side of the cell unique. Use different fonts or add a custom background color!
Wrapping Up
You can do a lot of different things with cells using OpenPyXL. The information in this article enables you to format your data in beautiful ways.

In this article, you learned about the following topics:

• Working with fonts
• Setting the alignment
• Adding borders
• Changing the cell background color
• Inserting images into cells
• Styling merged cells
• Using built-in styles
• Creating a custom named style

You can take what you learned in this article to make beautiful spreadsheets. You can highlight exciting data by changing a cell's background color or font. You can also change a cell's format by using a built-in style. OpenPyXL is powerful and valuable when working with cells.
1. Fixed cell width

This effect is achieved with CSS. Setting up a table cell (td) is a little different from a block (div): you cannot set the width on the cell directly. You first have to set the table layout (table-layout) to fixed, and then set the cell width; otherwise the text will still stretch the cell. The cell width only needs to be set on the header element to apply to the whole column.
<style>
/* hide the overflowing text */
table{
    /* set the table layout to fixed; the default is auto */
    table-layout: fixed;
}
.th_rname{
    /* custom cell width; setting it on the header applies to the whole column */
    width: 400px;
}
.td_rname{
    /* force no line wrapping */
    white-space: nowrap;
    /* show an ellipsis for the overflow */
    text-overflow: ellipsis;
    /* hide the overflowing content */
    overflow: hidden;
}
</style>
The effect is as follows; note how the second column, "Route Name" (线路名称), is displayed.

2. Fixed character count

Two implementations of the fixed-character-count effect are provided: one in jQuery and one in Thymeleaf.

2.1 jQuery implementation
<script>
$(function () {
    $("td[class='td_rname']").each(function(){
        let rname_str = $(this).text();
        // show at most 25 characters; replace the overflow with an ellipsis
        if(rname_str.length>25){
            $(this).text(rname_str.substring(0, 25)+"...");
        }
    });
});
</script>
2.2 Thymeleaf implementation

The implementation is fairly simple; a ternary expression is all you need.
<!-- title attribute: hovering the mouse shows the full text -->
<td th:text="${#strings.length(u.rname)>25}?${#strings.substring(u.rname, 0, 25)+'...'}:${u.rname}"
    th:title="${u.rname}" class="td_rname"></td>
Linux Crontab Installation and Usage

2015-05-30

The crontab command is found on Unix and Linux operating systems and is used to schedule commands for periodic execution. The command reads instructions from the standard input device and stores them in a "crontab" file for later reading and execution. Typically, the instructions stored in a crontab are activated by a daemon: crond usually runs in the background and checks every minute whether any scheduled jobs need to run. Such jobs are generally called cron jobs.
1. Installation
[root@CentOS ~]# yum -y install vixie-cron
[root@CentOS ~]# yum -y install crontabs
Notes:

The vixie-cron package is the main cron program; the crontabs package is the program used to install, uninstall, or list the tables that drive the cron daemon.
2. Configuration
cron is a built-in Linux service, but it does not start automatically. You can start and stop the service with the following commands:
service crond start //start the service
service crond stop //stop the service
service crond restart //restart the service
service crond reload //reload the configuration
service crond status //check the status of the crontab service
To make it start automatically at boot on a CentOS system: chkconfig --level 345 crond on
The main configuration file for cron is /etc/crontab, which includes the following lines:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
The first four lines are variables that configure the environment in which cron tasks run.

The value of the SHELL variable tells the system which shell environment to use (in this example, the bash shell); the PATH variable defines the path used to execute commands.

The output of cron tasks is emailed to the username defined by the MAILTO variable. If the MAILTO variable is defined as an empty string (MAILTO=""), no email is sent.

The HOME variable can be used to set the home directory used when executing commands or scripts.
Restricting the use of cron:

The /etc/cron.allow and /etc/cron.deny files are used to restrict access to cron.

The format of both access control files is one user per line, and whitespace is not permitted in either file. If an access control file is modified, the cron daemon (crond) does not have to be restarted: the access control files are read each time a user adds or deletes a cron task.

Regardless of what the access control files specify, root can always use cron.

If the cron.allow file exists, only the users listed in it are allowed to use cron, and the cron.deny file is ignored. If the cron.allow file does not exist, all users listed in cron.deny are forbidden from using cron.
3. The crontab command
Function: manages the per-user scheduled task (timer) entries.

Syntax: crontab [-u <username>] [configuration file] or crontab [-u <username>] [-elr]

Explanation: cron is a resident service that provides timer functionality, letting users execute preset commands or programs at specific times. As long as a user can edit the timer's configuration file, they can use the timer. The configuration file format is: Minute Hour Day Month DayOfWeek Command

Parameters:

-e edit the user's timer entries.
-l list the user's timer entries.
-r delete the user's timer entries.
-u <username> specify the user whose timer entries should be set.
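For example, a couple of typical invocations (the username "tom" below is just a placeholder):

crontab -e        # edit the current user's crontab
crontab -l        # list the current user's entries
crontab -u tom -l # list the entries of user tom (requires privileges)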
Format:

* * * * * command
minute hour day month weekday command

Field 1 is the minute, 0-59; "every minute" is written as * or */1
Field 2 is the hour, 0-23 (0 means midnight)
Field 3 is the day of the month, 1-31
Field 4 is the month, 1-12
Field 5 is the day of the week, 0-6 (0 means Sunday)
Field 6 is the command to run
Examples:

vi /etc/crontab

30 21 * * * /usr/local/etc/rc.d/lighttpd restart
The example above restarts lighttpd every night at 21:30.

45 4 1,10,22 * * /usr/local/etc/rc.d/lighttpd restart
The example above restarts lighttpd at 4:45 on the 1st, 10th, and 22nd of every month.

10 1 * * 6,0 /usr/local/etc/rc.d/lighttpd restart
The example above restarts lighttpd at 1:10 every Saturday and Sunday.

0,30 18-23 * * * /usr/local/etc/rc.d/lighttpd restart
The example above restarts lighttpd every 30 minutes between 18:00 and 23:00 every day.

0 23 * * 6 /usr/local/etc/rc.d/lighttpd restart
The example above restarts lighttpd every Saturday at 11:00 pm.

0 */1 * * * /usr/local/etc/rc.d/lighttpd restart
Restarts lighttpd once every hour.

0 23-7/1 * * * /usr/local/etc/rc.d/lighttpd restart
Restarts lighttpd every hour between 11 pm and 7 am.

0 11 4 * mon-wed /usr/local/etc/rc.d/lighttpd restart
Restarts lighttpd at 11:00 on the 4th of every month and on every Monday through Wednesday.

0 4 1 jan * /usr/local/etc/rc.d/lighttpd restart
Restarts lighttpd at 4:00 on January 1st.

*/30 * * * * /usr/sbin/ntpdate 210.72.145.44
Synchronizes the system time every half hour.
Exploring the Ethereum Layer 2 Protocols
Introduction:

In recent years, Ethereum has earned remarkable status as a platform for decentralized applications and smart contracts. Yet its escalating popularity has raised a significant scalability problem. As new applications and users are added to the Ethereum network, its ability to process a growing volume of transactions has been stretched, resulting in congestion and rising transaction fees. This has highlighted the need for solutions that improve Ethereum's scalability while preserving its security and decentralization.
Addressing Ethereum's Scalability Challenge with Ethereum Layer 2 Protocols

To work around the scalability bottlenecks of the Ethereum network, developers have turned to a set of protocols referred to collectively as Ethereum Layer 2 protocols. These protocols are designed to operate "on top of" the Ethereum mainnet, providing another layer that processes transactions more efficiently and decreases the load on the blockchain itself. In this way, Ethereum Layer 2 protocols help reduce congestion, mitigate slow confirmations and high gas prices, and improve the overall user experience.

One way to picture Ethereum Layer 2 protocols is as a set of express lanes running parallel to the main Ethereum road. While the primary road (the Ethereum mainnet) remains the anchor for security and coordination, the new lanes (the Layer 2 protocols) enable faster, more scalable transactions. They accomplish this with various mechanisms that speed up transaction processing while still relying on the Ethereum mainnet for ultimate security.

The Ethereum Layer 2 protocols comprise an array of technologies, each with its own distinctive approach to Ethereum's scalability puzzle. By using mechanisms like state channels, Plasma, Optimistic Rollups, ZK-Rollups, and Validium, Ethereum Layer 2 protocols boost the network's capacity to handle a far greater number of transactions per second.
What is the Ethereum Layer 2 Protocol?

An Ethereum Layer 2 protocol is a scalability solution built on top of the Ethereum blockchain, with the goal of improving the user experience. Layer 2s generally allow transactions to be faster and more affordable without compromising the level of security provided by Ethereum. In general, Layer 2s process transactions off-chain before recording them on the Ethereum mainnet. Layer 2 protocols often use native ERC tokens to pay gas costs, support governance and other uses, and allow traders to gain exposure to the Layer 2 protocols.
The Role of the Ethereum Layer 2 Protocol

Ethereum Layer 2 protocols work by moving batches of transactions off the main blockchain, processing them on the secondary layer, and periodically submitting a summary of the transactions to the mainnet. This results in a significant reduction in transaction cost and processing time. Furthermore, these protocols keep their ties to smart contracts, preserving the reliability of Ethereum's decentralized application ecosystem.

Ethereum Layer 2 protocols do not just address scalability; they also reinforce Ethereum's foundational principles of security and decentralization. Because the additional layer relies on the mainnet's security, any unauthorized activity or irregularity is detectable and verifiable on the Ethereum main blockchain. In essence, Ethereum Layer 2 protocols provide a clever answer to Ethereum's scalability trilemma, balancing security, scalability, and decentralization.

With these protocols, Ethereum aims to accommodate its growing number of users and broaden its scope of applications while remaining true to the principles that fueled its popularity.
Types of Ethereum Layer 2 Protocols:

Although the primary goals of all Layer 2 protocols are the same, there are a number of ways to achieve strong security, scalability, and efficiency. The following are some of the more commonly used Layer 2 approaches.

1. State Channels:

State channels improve the scalability of Ethereum by letting participants transact off-chain and settle the final result on the mainnet. This decreases congestion, reduces fees, and speeds up processing. The Raiden Network is a notable project using state channels. State channels work best for frequent interactions within a restricted group of participants, and participants must lock up funds as collateral.

2. Plasma:

Plasma is a Layer 2 solution that creates "child chains" connected to the Ethereum mainnet. These chains handle transactions off-chain and submit aggregated data to the mainnet on a regular basis, which improves throughput and lowers costs. OmiseGO is a project implementing Plasma. Security concerns stem from the need for participants to monitor the child chains and challenge fraudulent transactions.

3. Optimistic Rollups:

These rollups mix off-chain processing with on-chain security. Transactions are executed off-chain and are assumed to be valid unless proven otherwise by fraud proofs on the Ethereum mainnet. This increases scalability and is compatible with smart contracts. Synthetix uses Optimistic Rollups to improve the trading capabilities of its platform. However, ensuring data availability and managing latency (including withdrawal delays) remain challenges.

4. ZK-Rollups:

ZK-Rollups employ zero-knowledge proofs to move transactions off-chain while ensuring scalability. They decrease fees and speed up transactions by verifying batches of transactions with only minimal computation on-chain. Loopring is a project built on ZK-Rollups. However, the complexity of zero-knowledge proofs can create implementation and auditing challenges.

5. Validium:

Validium solutions emphasize transaction scalability by processing transactions off-chain and submitting the aggregated information to the Ethereum mainnet. This results in high throughput and a smoother user experience. Polygon (previously Matic) uses Validium-style techniques to improve its multi-chain scaling performance. Security researchers are concerned about centralization and data-availability issues arising from off-chain processing.
Conclusion:

In short, Ethereum Layer 2 protocols offer a key solution to Ethereum's scaling issues. These protocols, which include state channels, Plasma, Optimistic Rollups, ZK-Rollups, and Validium, complement the Ethereum mainnet to handle transactions more quickly, more cheaply, and more effectively. They preserve Ethereum's security and decentralization while substantially improving the user experience.

As Ethereum continues to expand and faces increasing demand, Layer 2 protocols are vital to supporting a larger number of users and expanding the network's capabilities. They balance scalability with Ethereum's fundamental principles while maintaining the security of the network. Ethereum Layer 2 protocols are an important innovation in the blockchain field, paving the way to a more scalable and decentralized future.
Intent-Centric Blockchain: Everything You Need to Know
April 11, 2024 (updated April 16, 2024) · 6 minute read
Have you ever visited a bustling market with a list of items to buy? Picture this scenario: a huge market, and you find yourself needing clarity on where to begin. Now, envision having a knowledgeable friend familiar with the market. You hand over your list, and effortlessly, your friend procures everything for you. Sounds convenient, doesn’t it? A concept known as “intents” is emerging in blockchains, offering a similar streamlined experience.
Just as you entrust your shopping list to a friend, you communicate your desires to the computer in blockchains, which takes care of the intricate tasks for you. This shift promises to revolutionize the way we interact with blockchain, simplifying the process for everyone involved. Let’s delve into the details!
Understanding Intent
Think of blockchains as colossal, global computers. Traditionally, users would furnish detailed instructions (e.g., use Uniswap to exchange token A for token B at a specific price), and the blockchain would execute these instructions step by step.
However, the landscape is changing with the advent of intents. In this new paradigm, users articulate their objectives (e.g., exchange A for B at the optimal price) without specifying the methodology, allowing the protocol to handle the intricacies.
Imagine it as akin to hailing a taxi. Traditional blockchain services resemble providing the driver with precise turn-by-turn directions, a potentially tedious and costly process, especially if the route involves twists or hard-to-find shortcuts. With intent, all you need to do is convey your destination to the taxi driver and then relax, trusting the driver to handle the journey.
A fresh wave of blockchains and protocols is already introducing intent-centric services to crypto users. Users can present a broad goal to these services, such as “exchange these tokens for the best price,” and rely on a third-party solver to manage it for a fee.
Types of Intent in Blockchain
The progression towards intent-centricity in blockchain technology has given rise to various intent categories, each designed to address distinct user goals and interactions within decentralized networks. A comprehensive understanding of these different types illuminates the versatility and applicability of this transformative shift in the crypto landscape.
• Transactional Intent: At the heart of intent-centricity lies the transactional dimension, where users convey their intents for specific transactions without delving into step-by-step execution details. Whether it involves token swaps, cross-chain asset transfers, or intricate transactions, users articulate their desired outcomes, enabling protocols and solvers to manage the intricacies.
• Trading and Exchange Intent: Intent-centric models significantly impact trading endeavors. Users can express broader intents, such as securing the best price for their trades or identifying optimal market conditions, empowering solvers to navigate the intricacies of diverse exchanges and protocols to fulfill these intentions.
• Protocol Interaction Intent: Blockchain protocols are increasingly embracing intent-centric approaches. Users express their interaction intents with these protocols, outlining their goals without micromanaging the protocol execution. This simplifies protocol interaction, providing users with a more seamless and efficient engagement experience.
• Optimization Intent: In blockchain, intents extend beyond transactional facets to optimization objectives. Users aim to optimize gas fees, maximize profits, or efficiently manage assets. Articulating these optimization intents enables solvers to navigate the diverse ecosystem to fulfill these objectives.
• Customized Use Case Intents: Intent-centricity in blockchain transcends predefined categories. Users can articulate highly specific intents tailored to their unique use cases. This adaptability allows for a wide range of intents, spanning from DeFi operations to NFT transactions, reflecting the diverse needs of crypto users.
These varied types of intents underscore the expansive scope of intent-centricity within the blockchain realm. Whether streamlining transactional activities, optimizing interactions, or customizing user experiences, blockchain intents redefine user engagement, promising a more intuitive and efficient blockchain ecosystem.
How Does Blockchain Intent Work?
Blockchain intent, built on the foundation of user-defined objectives rather than step-by-step instructions, represents a significant departure in the way transactions and interactions unfold within decentralized networks. But how does this concept function, and what makes it crucial in crypto?
Working of Blockchain Intent
Blockchain intent overturns the conventional transaction model. Instead of furnishing detailed step-by-step instructions for blockchain execution, users articulate their desired outcomes without delving into the specifics of the execution process.
• Expression of Intent: Users communicate their objectives, whether it involves swapping tokens, executing intricate transactions, optimizing asset management, or interacting with protocols by expressing their broader goals.
• Solver Network: These intents are then conveyed to solvers – a diverse range of entities, which may include AI bots, other protocols, or individuals. Solvers are tasked with carrying out the assignments, utilizing their capabilities and access to achieve the user’s intent.
• Execution without Micromanagement: By entrusting solvers to handle the intricacies, users adopt a hands-off approach, streamlining their interaction with the blockchain. This shift from micromanagement to goal-setting enhances user experience and efficiency.
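To make the flow above concrete, here is a minimal sketch of intent expression and solver selection. It is purely illustrative: the Intent and Quote fields and the scoring rule are invented for this example and are not part of any real protocol.

from dataclasses import dataclass

@dataclass
class Intent:
    """A user-declared goal, not a step-by-step transaction."""
    sell_token: str
    buy_token: str
    sell_amount: float
    min_buy_amount: float  # the user's constraint: the worst acceptable outcome

@dataclass
class Quote:
    solver_name: str
    buy_amount: float  # what this solver can deliver for the intent

def select_best_quote(intent, quotes):
    """Pick the quote that best fulfills the intent, if any satisfies it."""
    valid = [q for q in quotes if q.buy_amount >= intent.min_buy_amount]
    return max(valid, key=lambda q: q.buy_amount, default=None)

# The user expresses a goal; competing solvers return quotes.
intent = Intent("ETH", "USDC", 1.0, min_buy_amount=3000.0)
quotes = [Quote("solver-a", 3010.0), Quote("solver-b", 3025.5), Quote("solver-c", 2990.0)]
print(select_best_quote(intent, quotes))  # Quote(solver_name='solver-b', buy_amount=3025.5)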
Advantages of Intent-Centric Blockchain
• Enhanced Security:
Blockchain networks can implement more robust security measures by centering on the intent behind transactions. Smart contracts, which encapsulate the terms of agreements in code, execute only when specific conditions or intents are met. This approach significantly reduces the potential for fraudulent activities, ensuring a more secure environment for users.
• Efficient Decentralization:
Intent-centric blockchain networks empower Decentralized Applications (DApps) to operate more efficiently. Smart contracts, guided by specific intents, can automate processes without intermediaries, making transactions quicker, cheaper, and more reliable. This efficiency is pivotal for the widespread adoption of blockchain technology across various industries.
• Improved User Experience:
Users engaging with intent-centric blockchain applications encounter a more intuitive and seamless interface. Transactions align with their intentions, eliminating confusion and enhancing user satisfaction. This user-centric design approach fosters a positive experience, encouraging more individuals and businesses to embrace blockchain technology.
• Transparency and Traceability:
Intent-centric blockchain networks offer unparalleled transparency. Every transaction’s intent is recorded on the ledger, enabling users to trace the flow of assets and verify the legitimacy of transactions. This level of transparency builds trust among users, a crucial factor for the widespread adoption of blockchain in sectors like supply chain management and finance.
Challenges of Intent-Centric Blockchain
• Balance Between Solver Efficiency and User Privacy:
While it’s beneficial that solvers can optimize intent fulfillment, there’s a need to strike a balance. The more information solvers have, the better they can perform, but this may require users to share more, impacting their privacy.
• Technical Challenges in Implementation:
Developing a truly intent-centric model is challenging. It involves understanding user preferences and navigating the complex DeFi landscape efficiently. This necessitates advanced algorithms and a profound understanding of the DeFi space.
• Concerns About Centralization and Control:
If a few major players become the primary “solvers” of intents, it may lead to centralization, contradicting the decentralized nature of blockchain. There’s also a risk that dominant players could influence or control the intent-solving process, potentially not always aligning with the user’s best interests.
Conclusion: The Future of Blockchain Technology
The intent-centric paradigm signifies a crucial shift in the blockchain landscape. Its focus on user intention, heightened security, efficiency, and transparency establishes the groundwork for a future where blockchain seamlessly integrates into our daily lives.
Moving forward, it is imperative for developers, businesses, and policymakers to acknowledge the transformative potential of the intent-centric approach. Embracing this paradigm opens up new possibilities, fostering a digital ecosystem where trust, security, and user satisfaction take precedence.
To sum up, the ascent of the intent-centric paradigm stands as a noteworthy milestone in the progression of blockchain technology. Its impact reverberates across industries, shaping the dynamics of transactions, information sharing, and trust-building in the digital era. As we navigate this ever-evolving technological landscape, comprehending and harnessing the power of intent-centric blockchain is not merely an option; it is necessary for a future characterized by security, efficiency, and transparency.
Minecraft Help (Answered)
Just a quick question what does "User Not Premium" mean?
Discussion

mpilchfamily · 7 years ago:
It means you need to pay for the game if you want to play it. If you have paid for the game, then log in.
didexo · 7 years ago (reply):
I have bought the game but it won't let me play multiplayer.
mpilchfamily · 7 years ago (reply):
Have you logged in?
Be sure to direct this question to the help section of the Minecraft home page.
didexo · 7 years ago (reply):
I was logged in and I have sent them a complaint.
lemonie · 7 years ago:
This isn't Minecraft Help, it's a DIY website.
However, the answer to your question may be "you need to use a dictionary and think about it".
L
Practice Prime Factorisation to Find the Factors of 72
Clearing your basics is the most important part of better learning. To do that, you need to understand basic terminology. Prime numbers are natural numbers that are divisible only by 1 and by themselves; in other words, a prime cannot be written as m x n where m and n are both integers greater than 1. The prime numbers begin from 2, because 1 is considered neither prime nor composite. Euclid, the famous classical-era mathematician, recorded a proof that there is no "biggest" prime number — the primes go on forever.

Therefore, before we jump into the question at hand and look at the pair factors of 72, it is essential to look at prime numbers.
Finding smaller prime numbers
There are some basic steps you can easily follow to find small prime numbers.

Step 1: Write down a list of whole numbers from 2 to n, where n is the upper limit of the numbers you want to test.

Step 2: Next, starting from 2, erase every multiple of 2 greater than 2 itself.

Step 3: Repeat the same step with the next remaining number: erase all of its multiples greater than itself.

Step 4: The remaining numbers — those not divisible by any smaller number in the list — are your prime numbers.
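These steps are the classic Sieve of Eratosthenes. As an illustration, here is a small Python sketch of the procedure (the function name and the limit of 30 are chosen just for this example):

def primes_up_to(n):
    """Return all primes from 2 to n using the sieve described above."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # erase every multiple of p greater than p itself
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]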
What is Prime Factorisation?
Prime factorisation is a multi-step process that breaks any number down into its fundamental prime factors. It implies that every composite number can be written as a product of prime numbers.

Essentially, you perform prime factorisation by dividing a number, again and again, by the smallest prime that divides it, until all of its remaining factors are prime.

For instance, the number 35 is the product of two prime numbers: 5 and 7. You can calculate all the prime factors of any given number with the process of prime factorisation.
Performing prime factorisation:
You can follow these steps to perform prime factorisation on any number. Let’s say the number in question is represented by n.
Step 1: Find the smallest prime number by which the number n is divisible. For instance, if the number is 39 (n=39), it is not completely divisible by 2, but it is divisible by 3. Thus, 3 is the first prime number that can completely divide 39.
Step 2: This can easily be represented in the form of a division ladder, like below:

3 | 39

Step 3: The next step is to divide n by the prime number and put the quotient on the ladder's next line, like so:

3 | 39
  | 13

Step 4: Repeat steps 2 and 3 until the remaining factor is itself a prime number.
Step 5: Once the remaining factor is itself a prime number, you divide it by itself, leaving 1. This gives you all the prime factors of the number n. In the example used, the completed ladder looks like so:

3  | 39
13 | 13
   | 1

So the prime factors of 39 are 3 and 13.
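The same ladder procedure translates directly into a short Python sketch (the function name is arbitrary):

def prime_factors(n):
    """Trial division: repeatedly divide out the smallest prime factor."""
    factors = []
    divisor = 2
    while n > 1:
        while n % divisor == 0:  # divide out this factor completely
            factors.append(divisor)
            n //= divisor
        divisor += 1
    return factors

print(prime_factors(39))  # [3, 13]
print(prime_factors(72))  # [2, 2, 2, 3, 3]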
Learning more examples
Here are a few more numbers solved with the prime factorisation ladder:
Solve 28 for its prime factors.
Solution:
2 | 28
2 | 14
7 | 7
  | 1
The prime factors of 28 are 2, 2 and 7.
Solve 36 for its prime factors.
Solution:
2 | 36
2 | 18
3 | 9
3 | 3
  | 1
The prime factors of 36 are 2, 2, 3 and 3.
Solve 52 for its prime factors.
Solution:
2  | 52
2  | 26
13 | 13
   | 1
The prime factors of 52 are 2, 2 and 13.
Solve 68 for its prime factors.
Solution:
2  | 68
2  | 34
17 | 17
   | 1
The prime factors of 68 are 2, 2 and 17.
Solving prime factorisation for finding the factors of 72
Here is how you solve for the factors of 72:
Step 1: Find out what is the smallest prime number that can divide the number 72. Since it is divisible by 2, that is the smallest prime number that can divide 72.
Step 2: Enter the remaining factor value into a table. You can do this like so:
2 | 72
  | 36
Step 3: Since the remaining factor is 36, we know it is not a prime number. Therefore, you should divide it further with the smallest prime number. 36 is divisible by 2, and thus, that is the smallest prime number that can completely divide 36.
Step 4: Enter the remaining factor value into a table. You can do it like so:
2 | 72
2 | 36
  | 18
Step 5: Since the remaining factor is 18, we know it is not a prime number. Therefore, you should divide it further with the smallest prime number. 18 is divisible by 2, and thus, that is the smallest prime number that can completely divide 18.
Step 6: Enter the remaining factor value into a table. You can do it like so:
2 | 72
2 | 36
2 | 18
  | 9
Step 7: Since the remaining factor is 9, we know it is not a prime number. Therefore, you should divide it further with the smallest prime number. 9 is not divisible by 2; however, it is divisible by 3. Thus, 3 is the smallest prime number that can completely divide 9.
Step 8: Enter the remaining factor value into a table. You can do it like so:
2 | 72
2 | 36
2 | 18
3 | 9
  | 3
Step 9: Since the remaining factor is 3, we know that it is a prime number. Therefore, you can not further divide it with any other number but itself. Thus, 3 is the smallest number that can completely divide 3.
Step 10: Enter the remaining factor value into a table. You can do it like so:
2 | 72
2 | 36
2 | 18
3 | 9
3 | 3
  | 1
Step 11: Now that the remaining factor value is 1, we know that the prime factors of 72 are 2, 2, 2, 3 and 3. You can also represent these factors as an equation in the form of multiplication where,
2 x 2 x 2 x 3 x 3 = 72.
Conclusion:
From this exercise, we can conclude that the prime factors of 72 by the prime factorisation method are 2, 2, 2, 3, and 3. Similarly, you can calculate the prime factors of any composite number by the prime factorisation method.
// Self contained implementation in a class
#include <iostream>   // for cout/endl (header name stripped by HTML in the original)
#include <cmath>      // for M_PI (header name stripped by HTML in the original)
using namespace std;

class Planet{
public:
    void SetMassRadius(double mass, double radius){
        M = mass;
        R = radius;
        G = 6.67428e-11;   // gravitational constant, m^3 kg^-1 s^-2
    }
    double Density(){
        // mass / volume of a sphere, scaled by 1.0e-3 to give g/cm^3
        return 1.0e-3 * M/(4.0*M_PI*R*R*R/3);
    }
    double Gravity(){
        return G*M/(R*R);  // surface gravity, m/s^2
    }
private:
    double M, R, G;
};

int main(){
    Planet Mars;
    Mars.SetMassRadius(6.4e23, 3.4e6);  // mass in kg, radius in m
    cout << "Density = " << Mars.Density() << endl;
    cout << "Gravity = " << Mars.Gravity() << endl;
}
Subject: How to encrypt/decrypt during a query or before actually showing data?
Author: Chuck Belanger
Hello:
I'm working on an application that will have an associated DB and will
be distributed for desktop use. I'd like to encrypt some of the fields
and then have the application decrypt these as part of the query or
certainly before accessing them via various edits, memos, richedit, etc.
These fields (including text BLOBs) will not be written to, so there is
no need to encrypt before writing in the distributed application.
(Although, obviously, I need to encrypt as I create/pump into the
distributed DB.)
I've seen that TField has the Get/SetText events which, from a Borland KB
article, sound like a perfect match for what I'm trying to do. Is there
anything like it for IBObjects, for use in the IB_xx components,
especially the IB_DataPump? Or some other option that will do what I'm
trying to do in the query?
/////time elapsed
So, I found that IB_Query.FieldByName(asString).OnSetText/OnGetText exist
as events of TIB_Column, and that this is the equivalent of TField. But
how do I access and create an encryption/decryption routine in these
events? Are there any examples of how to use these events, especially for
what I'm trying to do?
I've been using the 4.2Ie version of IBO with FB 1.5 and D7.
Thank you,
Chuck Belanger
Richmond, CA
Use a value from a form in a SELECT statement

Question from Not applicable:
Hello All!
It seems like a pretty simple question, but I can't find the answer, or I'm not looking in the right area.
I have a form with an employee number field the user fills in, and a connection created to SQL Server 2005. I want to use that employee number — the text box is EmpNumTxt — as part of my SELECT statement, so that when an employee number is entered, the form displays the name of that person.
SELECT EmpName from EmpTable WHERE "the text box on my form" = EmpNumber.
How do I call that value in the text box?
Thanks!
Jim
Replies

Reply from Not applicable:
Hi Jim,
If I understand correctly, you are asking for the proper naming syntax for the textfield you added to your form.
1) In the Script Editor position your cursor where you want the textfield object proper name.
2) Move the mouse pointer over the textfield on your form and hold down the ctrl key. The mouse pointer will turn to a "V" shape. Click the textfield object and it will paste the proper naming syntax where the cursor is positioned in the Script Editor.
If I misunderstood what you were conveying, please explain further.
Thanks
Pam
Reply from Not applicable:
Hi Pam!
Thanks for the reply. I tried that, but it didn't work. Let me try to explain this better.
I have a form with an employee number on it. I have a table in SQL that holds employees' names, numbers, addresses, etc. What I need is: when a user enters employee number '55', the name Tom Jones appears in another text box.

The way I would normally do this is:
SELECT [Name]
FROM List2
WHERE [File] = @Numb
@Numb would equal the value in the employee number text box. How can I take that value from my employee number box and pass it to my SELECT statement, thus returning an employee name?

Thanks for your help!
Jim
Reply from Not applicable:
Adding more to this.
I found this, with no answer. But this is my problem as well.
I'm trying to connect to a SQL Server DB to automatically fill certain form fields. I can create the connection, but then I am presented with a screen to select the data collection method - either Table, Stored Procedure, or SQL Query. I choose Stored Proc, but when I try to proceed I get the following error:
Stored Procedure has non-optional parameters.
How do I pass the stored procedure the parameter?
Here is my SP
SELECT [Name]
FROM List2
WHERE [File] = @Numb
Thanks!
Jim
Reply from Not applicable:
Does anyone have a solution to Jim's question? I need to know how to create a variable for a user-entered field, and have that variable be used in a SQL Server 2005 query associated with other fields on the form. In my situation, I chose SQL Query rather than Stored Proc, but the answer should be the same for both methods.
Reply from Not applicable:
I am working on the same problem and have the same question. It seems like such a simple problem with a simple solution, and I am wondering why there is no response.
Reply from a Level 7 user:
You should be using JavaScript's "switch()" statement.
switch(DropDownList1.rawValue) {
    case "One":
        $.rawValue = "One";
        break;
    case "Two":
        $.rawValue = "Select 2";
        break;
    case "Three":
        $.rawValue = "3 Selected";
        break;
    default:
        $.rawValue = "None";
        break;
}
How to implement the ERROR.TYPE Function in Excel?

Excel ERROR.TYPE Function explained with examples, step by step

Excel's ERROR.TYPE function provides an easy way to write case-style logic that helps users take the right action in if-else scenarios. This is a clearly explained Excel tutorial about how to implement the ERROR.TYPE function. Misunderstanding how a formula behaves can create headaches; this post aims to take that unwanted pressure off data analysts and make the function easy to apply in projects.

In this tutorial, we answer the question "How to implement the ERROR.TYPE function in Excel?" with multiple examples, which helps in understanding where and why ERROR.TYPE should be used. Similar examples are shared to help you in your job or project.

The Excel ERROR.TYPE function returns a number that corresponds to a specific error value.
If error_val is    ERROR.TYPE returns
#NULL!             1
#DIV/0!            2
#VALUE!            3
#REF!              4
#NAME?             5
#NUM!              6
#N/A               7
#GETTING_DATA      8
Anything else      #N/A
Excel: ERROR.TYPE Function

What is the ERROR.TYPE Function?

The ERROR.TYPE function is an Information formula that returns an index number corresponding to a specific error value in Microsoft Excel. If there is no error in the target cell(s), the function returns the #N/A error value instead. This function can be especially useful for troubleshooting and error handling in your Excel models, as it can provide a quick overview of the existing issues. In this guide, we're going to show you how to use the ERROR.TYPE function and also go over some tips and error-handling methods.
How to add the ERROR.TYPE function in Excel?

Type the = (equal sign) followed by the function and its argument — for example, =ERROR.TYPE(A1) — and then press Enter (Windows) or Return (Mac). A formula can contain constants and operators of up to 8,192 characters.
Why is the ERROR.TYPE function indispensable to learn?

It is not used by many analysts unless they are dealing with a large dataset where they don't want to take chances of being wrong — or, in coding terms, where they need exception handling.
ERROR.TYPE Function step by step guided approach
Code solution
A sample example is shown below. The first formula shows a friendly message when cell A1 contains a #DIV/0! error (error type 2); the others simply return the error index.

=IF(ISERROR(A1),IF(ERROR.TYPE(A1)=2,"Missing value",A1),"")

=ERROR.TYPE(A1)
Result: 2 (A1 contains #DIV/0!)

=ERROR.TYPE(A2)
Result: 4 (A2 contains #REF!)

=ERROR.TYPE(A3)
Result: #N/A (A3 contains no error)
Low-code/No-code: what are the risks and how do we prevent them?
Zenity’s Software Engineer will lead you on a journey of the top 10 risks of low-code/no-code, explaining the proper way to secure your low-code/no-code-developed apps and automations, as well as how you can maintain compliance.
For anyone working with low-code/no-code technologies, this webinar is a must-see.
Rust Recoverable Errors
• Recoverable errors are errors that are not serious enough to stop the program entirely. Errors that can be handled are known as recoverable errors.
• They are represented by Result<T, E>. Result<T, E> is an enum that consists of two variants, i.e., Ok(T) and Err(E), and it describes the possible error.

Ok(T): 'T' is the type of value returned with the Ok variant in the success case. It is the expected outcome.
Err(E): 'E' is the type of error returned with the Err variant in the failure case. It is an unexpected outcome.
• In the above case, Result is the enum type, and Ok(T) & Err(E) are the variants of the enum, where 'T' and 'E' are generic type parameters.
• 'T' is the type of value that will be returned in the success case, while 'E' is the type of error that will be returned in the failure case.
• Because Result uses generic type parameters, the Result type and the functions defined on it in the standard library can be used in many different situations where the success and failure values may vary.
Let's see a simple example that returns the Result value:
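The code listing here was an image in the original; the following is a plausible reconstruction based on the explanation below (a u32 annotation is used to force the type mismatch):

use std::fs::File;

fn main() {
    // File::open returns Result<File, std::io::Error>, not a u32,
    // so this program does not compile.
    let f: u32 = File::open("hello.txt");
}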
Output: the program fails to compile with a "mismatched types" error (shown as a screenshot in the original).
In the above example, the Rust compiler shows that the types do not match: 'f' is declared as a u32, while File::open returns a Result<T, E> type. The compiler's message shows that the type of the success value is std::fs::File and the type of the error value is std::io::Error.
Note:
1. The return type of File::open is either a success value or a failure value. If File::open succeeds, it returns a file handle, and if File::open fails, it returns an error value. The Result enum conveys this information.
2. If File::open succeeds, then f will have an Ok variant that contains the file handle, and if File::open fails, then f will have an Err variant that contains the information related to the error.
Match Expression to handle the Result variants.
Let's see a simple example of match expression:
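The original listing was an image; a plausible reconstruction, matching the explanation below:

use std::fs::File;

fn main() {
    let f = File::open("hello.txt");
    let f = match f {
        // success: keep the file handle
        Ok(file) => file,
        // failure: stop the program and report the error
        Err(error) => panic!("There was a problem opening the file: {:?}", error),
    };
}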
Output: if hello.txt cannot be opened, the program panics and prints the error information (screenshot in the original).
Program Explanation
• In the above example, we can access the enum variants directly, without writing Result:: before the Ok and Err variants.
• If the result is Ok, the match returns the file handle and stores it in the 'f' variable. After the match, we can perform operations on the file, either reading or writing.
• The second arm of the match works on the Err value. If Result returns the Err value, panic! runs and stops the execution of the program.
Panic on Error: unwrap()
• Result<T, E> has many methods for various tasks. One of them is the unwrap() method. The unwrap() method is a shortcut for the match expression above; the unwrap() method and the match expression work the same way.
• If the Result value is the Ok variant, then the unwrap() method returns the value inside the Ok variant.
• If the Result value is the Err variant, then the unwrap() method calls the panic! macro.
Let's see a simple example:
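Reconstruction of the listing (an image in the original):

use std::fs::File;

fn main() {
    let f = File::open("hello.txt").unwrap();
}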
Output: if hello.txt cannot be opened, unwrap() panics and prints the error information (screenshot in the original).
In the above example, the unwrap() method automatically calls the panic! macro, and panic! displays the error information.
Panic on Error: expect()
• The expect() method behaves the same way as the unwrap() method, i.e., both methods call panic! to display the error information.
• The difference between expect() and unwrap() is that the error message is passed as a parameter to the expect() method, while the unwrap() method does not take a parameter. Therefore, we can say that the expect() method makes tracking down the source of the panic! easier.
Let's see a simple example of expect()
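Reconstruction of the listing (an image in the original); the message matches the one discussed below:

use std::fs::File;

fn main() {
    let f = File::open("hello.txt")
        .expect("Not able to find the file hello.txt");
}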
Output: if hello.txt cannot be opened, the program panics with the message "Not able to find the file hello.txt" (screenshot in the original).
In the above output, the error message displayed is the one we specified in our program, i.e., "Not able to find the file hello.txt", which makes it easier to find the code the error is coming from. If we use multiple unwrap() calls, it becomes difficult to find which unwrap() caused the panic!, as panic! shows the same style of message for all of them.
Propagating Errors
Propagating errors is a mechanism in which errors are forwarded from one function to another. Errors are propagated to the calling function, where more information is available, so that the error can be handled there. Suppose we have a file named 'a.txt' that contains the text "javaTpoint", and we want to create a program that reads this file. Let's work on this example.
Let's see a simple example:
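Reconstruction of the listing (an image in the original), following the step-by-step explanation below:

use std::fs::File;
use std::io;
use std::io::Read;

fn read_username_from_file() -> Result<String, io::Error> {
    let f = File::open("a.txt");
    // forward the error to the caller if the file cannot be opened
    let mut f = match f {
        Ok(file) => file,
        Err(e) => return Err(e),
    };
    let mut s = String::new();
    // forward any read error; otherwise return the file's text
    match f.read_to_string(&mut s) {
        Ok(_) => Ok(s),
        Err(e) => Err(e),
    }
}

fn main() {
    println!("{:?}", read_username_from_file());
}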
Output: with a.txt present, the program prints Ok("javaTpoint") (screenshot in the original).
Program Explanation
• The read_username_from_file() function returns a value of type Result<T, E>, where 'T' is String and 'E' is io::Error.
• If the function succeeds, it returns an Ok value that holds a String, and if the function fails, it returns an Err value.
• The function starts by calling File::open. If File::open fails, the second arm of the match returns the Err value, and if File::open succeeds, the file handle is stored in the variable f.
• If File::open succeeds, we then create a String variable. If the read_to_string() method succeeds, it returns the text of the file; otherwise, it returns the error information.
• Since the external file 'a.txt' contains the text "javaTpoint", this program reads the file and displays its content.
Shortcut for propagating errors: the '?' operator

Using the '?' operator reduces the length of the code. The '?' operator is a replacement for the match expressions, in the sense that the '?' operator does the same work the match expressions do. Again, suppose we have a file named 'a.txt' containing the text "javaTpoint" and we want to create a program that reads this file. Let's work on this example.
Let's see a simple example.
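Reconstruction of the listing (an image in the original); each '?' replaces one of the match expressions from the previous version:

use std::fs::File;
use std::io;
use std::io::Read;

fn read_username_from_file() -> Result<String, io::Error> {
    let mut f = File::open("a.txt")?;   // returns the error early on failure
    let mut s = String::new();
    f.read_to_string(&mut s)?;          // likewise for the read
    Ok(s)
}

fn main() {
    println!("{:?}", read_username_from_file());
}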
Output: Ok("javaTpoint"), as in the previous version (screenshot in the original).
In the above example, the '?' operator is placed after a value of the Result type. If the Result is Ok, the '?' operator yields the value inside the Ok variant, and if the Result is Err, the error information is returned from the function.
Difference between the '?' operator & the match expression
• Errors used with the '?' operator pass through the 'from' function, which is defined by the From trait in the standard library.
• When the '?' operator calls the 'from' function, the error type it receives is converted into the error type defined in the return type of the current function.
• If no error occurs, the '?' operator yields the Ok value; if an error occurs, the Err value is returned early from the function.
• It makes the implementation of the function simpler.
Chaining method calls after the '?' operator
We can even shorten the code of a program more by using the chaining method calls after the '?' operator.
Let's see a simple example:
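Reconstruction of the listing (an image in the original), with the two calls chained:

use std::fs::File;
use std::io;
use std::io::Read;

fn read_username_from_file() -> Result<String, io::Error> {
    let mut s = String::new();
    File::open("a.txt")?.read_to_string(&mut s)?;
    Ok(s)
}

fn main() {
    println!("{:?}", read_username_from_file());
}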
Output: Ok("javaTpoint"), the same as the previous versions (screenshot in the original).
Program Explanation
In the above example, we have chained the call to read_to_string() onto the result of the call to File::open("a.txt")?. We also place a '?' operator at the end of the read_to_string() call. The function returns the Ok value if both read_to_string() and File::open("a.txt") succeed; otherwise, it returns the error value.
Limitation of '?' operator
The '?' operator can only be used in functions that return a Result type value, because the '?' operator works like a match expression whose Err arm returns a Result from the enclosing function.
Let's understand this through a simple example.
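Reconstruction of the listing (an image in the original): here '?' is used in a main() that returns (), so the program does not compile. (Note that on recent Rust versions main can also be declared to return a Result, which would make this legal.)

use std::fs::File;

fn main() {
    let f = File::open("hello.txt")?;
}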
Output: a compile-time error stating that the '?' operator can only be used in a function that returns Result (error screenshot in the original).
Musings on missing data
I’ve been meaning to share an analysis I recently did to estimate the strength of the relationship between a young child’s ability to recognize emotions in others (e.g. teachers and fellow students) and her longer term academic success. The study itself is quite interesting (hopefully it will be published sometime soon), but I really wanted to write about it here as it involved the challenging problem of missing data in the context of heterogeneous effects (different across sub-groups) and clustering (by schools).
As I started to develop simulations to highlight key issues, I found myself getting bogged down in the data generation process. Once I realized I needed to be systematic about thinking how to generate various types of missingness, I thought maybe DAGs would help to clarify some of the issues (I’ve written a bit about DAGS before and provided some links to some good references). I figured that I probably wasn’t the first to think of this, and a quick search confirmed that there is indeed a pretty rich literature on the topic. I first found this blog post by Jake Westfall, which, in addition to describing many of the key issues that I want to address here, provides some excellent references, including this paper by Daniel et al and this one by Mohan et al.
I think the value I can add here is to provide some basic code to get the data generation processes going, in case you want to explore missing data methods for yourself.
Thinking systematically about missingness
In the world of missing data, it has proved to be immensely useful to classify different types of missing data. That is, there could various explanations of how the missingness came to be in a particular data set. This is important, because as in any other modeling problem, having an idea about the data generation process (in this case the missingness generation process) informs how you should proceed to get the “best” estimate possible using the data at hand.
Missingness can be recorded as a binary characteristic of a particular data point for a particular individual; the data point is missing or it is not. It seems to be the convention that the missingness indicator is \(R_{p}\) (where \(p\) is the variable), and \(R_{p} = 1\) if the data point \(p\) is missing and is \(0\) otherwise.
We say data are missing completely at random (MCAR) when \(P(R)\) is independent of all data, observed and missing. For example, if missingness depends on the flip of a coin, the data would be MCAR. Data are missing at random when \(P(R \ | \ D_{obs})\) is independent of \(D_{mis}\), the missing data. In this case, if older people tend to have more missing data, and we’ve recorded age, then the data are MAR. And finally, data are missing not at random (MNAR) when \(P(R \ | \ D_{obs}) = f(D_{mis})\), or missingness is related to the unobserved data even after conditioning on observed data. If missingness is related to the health of a person at follow-up and the outcome measurement reflects the health of a person, then the data are MNAR.
The missingness taxonomy in 3 DAGs
The Mohan et al paper suggests including the missing indicator \(R_p\) directly in the DAG to clarify the nature of dependence between the variables and the missingness. If we have missingness in the outcome \(Y\) (so that for at least one individual \(R_y = 1\)), there is an induced observed variable \(Y^*\) that equals \(Y\) if \(R_y = 0\), and is missing if \(R_y = 1\). \(Y\) represents the complete outcome data, which we don’t observe if there is any missingness. The question is, can we estimate the joint distribution \(P(A, Y)\) (or really any characteristic of the distribution, such as the mean of \(Y\) at different levels of \(A\), which would give us a measure of causal effect) using the observed data \((A, R_y, Y^*)\)? (For much of what follows, I am drawing directly from the Mohan et al paper.)
MCAR
First, consider when the missingness is MCAR, as depicted above. From the DAG, \(A \cup Y \perp \! \! \! \perp R_y\), since \(Y^*\) is a “collider”. It follows that \(P(A, Y) = P(A, Y \ | \ R_y)\), or more specifically \(P(A, Y) = P(A, Y \ | \ R_y=0)\). And when \(R_y = 0\), by definition \(Y = Y^*\). So we end up with \(P(A, Y) = P(A, Y^* \ | \ R_y = 0)\). Using observed data only, we can “recover” the underlying relationship between \(A\) and \(Y\).
A simulation may help to see this. First, we use the simstudy functions to define both the data generation and missing data processes:
library(simstudy)   # provides defData, defMiss, genData, genMiss, genObs
library(ggplot2)    # used for the plots below

def <- defData(varname = "a", formula = 0, variance = 1, dist = "normal")
def <- defData(def, "y", formula = "1*a", variance = 1, dist = "normal")
defM <- defMiss(varname = "y", formula = 0.2, logit.link = FALSE)
The complete data are generated first, followed by the missing data matrix, and ending with the observed data set.
set.seed(983987)
dcomp <- genData(1000, def)
dmiss <- genMiss(dcomp, defM, idvars = "id")
dobs <- genObs(dcomp, dmiss, "id")
head(dobs)
## id a y
## 1: 1 0.171 0.84
## 2: 2 -0.882 0.37
## 3: 3 0.362 NA
## 4: 4 1.951 1.62
## 5: 5 0.069 -0.18
## 6: 6 -2.423 -1.29
In this replication, about 22% of the \(Y\) values are missing:
dmiss[, mean(y)]
## [1] 0.22
If \(P(A, Y) = P(A, Y^* \ | \ R_y = 0)\), then we would expect that the mean of \(Y\) in the complete data set will equal the mean of \(Y^*\) in the observed data set. And indeed, they appear quite close:
round(c(dcomp[, mean(y)], dobs[, mean(y, na.rm = TRUE)]), 2)
## [1] 0.03 0.02
Going beyond the mean, we can characterize the joint distribution of \(A\) and \(Y\) using a linear model (which we know is true, since that is how we generated the data). Since the outcome data are missing completely at random, we would expect the relationship between \(A\) and \(Y^*\) to be very close to the true relationship represented by the complete (and not fully observed) data.
fit.comp <- lm(y ~ a, data = dcomp)
fit.obs <- lm(y ~ a, data = dobs)
broom::tidy(fit.comp)
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
##
## 1 (Intercept) -0.00453 0.0314 -0.144 8.85e- 1
## 2 a 0.964 0.0313 30.9 2.62e-147
broom::tidy(fit.obs)
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
##
## 1 (Intercept) -0.0343 0.0353 -0.969 3.33e- 1
## 2 a 0.954 0.0348 27.4 4.49e-116
And if we plot those lines over the actual data, they should be quite close, if not overlapping. In the plot below, the red points represent the true values of the missing data. We can see that missingness is scattered randomly across values of \(A\) and \(Y\) – this is what MCAR data looks like. The solid line represents the fitted regression line based on the full data set (assuming no data are missing) and the dotted line represents the fitted regression line using complete cases only.
dplot <- cbind(dcomp, y.miss = dmiss$y)
ggplot(data = dplot, aes(x = a, y = y)) +
geom_point(aes(color = factor(y.miss)), size = 1) +
scale_color_manual(values = c("grey60", "#e67c7c")) +
geom_abline(intercept = coef(fit.comp)[1],
slope = coef(fit.comp)[2]) +
geom_abline(intercept = coef(fit.obs)[1],
slope = coef(fit.obs)[2], lty = 2) +
theme(legend.position = "none",
panel.grid = element_blank())
MAR
This DAG is showing a MAR pattern, where \(Y \perp \! \! \! \perp R_y \ | \ A\), again because \(Y^*\) is a collider. This means that \(P(Y | A) = P(Y | A, R_y)\). If we decompose \(P(A, Y) = P(Y | A)P(A)\), you can see how that independence is useful. Substituting \(P(Y | A, R_y)\) for \(P(Y | A)\), \(P(A, Y) = P(Y | A, R_y)P(A)\). Going further, \(P(A, Y) = P(Y | A, R_y=0)P(A)\), which is equal to \(P(Y^* | A, R_y=0)P(A)\). Everything in this last decomposition is observable – \(P(A)\) from the full data set and \(P(Y^* | A, R_y=0)\) from the records with observed \(Y\)’s only.
This implies that, conceptually at least, we can estimate the conditional probability distribution of observed-only \(Y\)’s for each level of \(A\), and then pool the distributions across the fully observed distribution of \(A\). That is, under an assumption of data MAR, we can recover the joint distribution of the full data using observed data only.
To simulate, we keep the data generation process the same as under MCAR; the only thing that changes is the missingness generation process. \(P(R_y)\) now depends on \(A\):
defM <- defMiss(varname = "y", formula = "-2 + 1.5*a", logit.link = TRUE)
After generating the data as before, the proportion of missingness is unchanged (though the pattern of missingness certainly is):
dmiss[, mean(y)]
## [1] 0.22
We do not expect the marginal distribution of \(Y\) and \(Y^*\) to be the same (only the distributions conditional on \(A\) are close), so the means should be different:
round(c(dcomp[, mean(y)], dobs[, mean(y, na.rm = TRUE)]), 2)
## [1] 0.03 -0.22
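Even so, the pooling argument above says we can recover features of the full joint distribution from the observed data alone. A quick sketch of that idea (mine, not from the original post): fit the outcome model on complete cases only, then average its predictions over the full observed distribution of \(A\):

# complete-case model of E[Y|A]; valid under MAR since P(Y | A) = P(Y* | A, Ry = 0)
fit.cc <- lm(y ~ a, data = dobs[!is.na(y)])
# marginalize over the full distribution of A to recover the overall mean of Y
mean(predict(fit.cc, newdata = dobs))
# this should land close to the full-data mean shown above (0.03)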
However, since the conditional distribution of \((Y|A)\) is equivalent to \((Y^*|A, R_y = 0)\), we would expect a regression model \(E[Y] = \beta_0 + \beta_1 A\) to yield estimates very close to those of \(E[Y^*] = \beta_0^{*} + \beta_1^{*}A\). That is, we would expect \(\beta_1^{*} \approx \beta_1\).
fit.comp <- lm(y ~ a, data = dcomp)
fit.obs <- lm(y ~ a, data = dobs)
broom::tidy(fit.comp)
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
##
## 1 (Intercept) -0.00453 0.0314 -0.144 8.85e- 1
## 2 a 0.964 0.0313 30.9 2.62e-147
broom::tidy(fit.obs)
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
##
## 1 (Intercept) 0.00756 0.0369 0.205 8.37e- 1
## 2 a 0.980 0.0410 23.9 3.57e-95
The overlapping lines in the plot confirm the close model estimates. In addition, you can see here that missingness is associated with higher values of \(A\).
MNAR
In MNAR, there is no way to separate \(Y\) from \(R_y\). Reading from the DAG, \(P(Y) \neq P(Y^* | R_y)\), and \(P(Y|A) \neq P(Y^* | A, R_y)\). There is no way to recover the joint probability \(P(A,Y)\) with observed data. Mohan et al do show that under some circumstances it is possible to use observed data to recover the true distribution under MNAR (particularly when the missingness is related to the exposure measurement \(A\)), but not in this particular case.
Daniel et al have a different approach to determine whether the causal relationship of \(A\) and \(Y\) is identifiable under the different mechanisms. They do not use a variable like \(Y^*\), but instead introduce external nodes \(U_a\) and \(U_y\) representing unmeasured variability related to the exposure and the outcome (panel a of the diagram below).
In the case of MNAR, when you use complete cases only, you are effectively controlling for \(R_y\) (panel b). Since \(Y\) is a collider (and \(U_y\) is an ancestor of \(Y\)), this has the effect of inducing an association between \(A\) and \(U_y\), the common causes of \(Y\). By doing this, we have introduced unmeasured confounding that cannot be corrected, because \(U_y\), by definition, always represents the portion of unmeasured variation of \(Y\).
In the simulation, I explicitly generate \(U_y\), so we can see if we observe this association:
def <- defData(varname = "a", formula = 0, variance = 1, dist = "normal")
def <- defData(def, "u.y", formula = 0, variance = 1, dist = "normal")
def <- defData(def, "y", formula = "1*a + u.y", dist = "nonrandom")
This time around, we generate missingness of \(Y\) as a function of \(Y\) itself:
defM <- defMiss(varname = "y", formula = "-3 + 2*y", logit.link = TRUE)
Again, about 21% of the \(Y\) values are missing:
dmiss[, mean(y)]
## [1] 0.21
Indeed, \(A\) and \(U_y\) are virtually uncorrelated in the full data set, but are negatively correlated in the cases where \(Y\) is not missing, as theory would suggest:
round(c(dcomp[, cor(a, u.y)], dobs[!is.na(y), cor(a, u.y)]), 2)
## [1] -0.04 -0.23
The plot generated from these data shows diverging regression lines, the divergence a result of the induced unmeasured confounding.
In this MNAR example, we see that the missingness is indeed associated with higher values of \(Y\), although the proportion of missingness remains at about 21%, consistent with the earlier simulations.
There may be more down the road
I’ll close here, but in the near future, I hope to explore various (slightly more involved) scenarios under which complete case analysis is adequate, or where something like multiple imputation is more useful. Also, I would like to get back to the original motivation for writing about missingness, which was to describe how I went about analyzing the child emotional intelligence data. Both of these will be much easier now that we have the basic tools to think about how missing data can be generated in a systematic way.
References:
Daniel, Rhian M., Michael G. Kenward, Simon N. Cousens, and Bianca L. De Stavola. “Using causal diagrams to guide analysis in missing data problems.” Statistical methods in medical research 21, no. 3 (2012): 243-256.
Mohan, Karthika, Judea Pearl, and Jin Tian. “Graphical models for inference with missing data.” In Advances in neural information processing systems, pp. 1277-1285. 2013.
Westfall, Jake. “Using causal graphs to understand missingness and how to deal with it.” Cookie Scientist (blog). August 22, 2017. Accessed March 25, 2019. http://jakewestfall.org/blog/.
CURLINFO_SIZE_UPLOAD
NAME
CURLINFO_SIZE_UPLOAD − get the number of uploaded bytes
SYNOPSIS
#include <curl/curl.h>
CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_SIZE_UPLOAD, double *uploadp);
DESCRIPTION
Pass a pointer to a double to receive the total amount of bytes that were uploaded.
CURLINFO_SIZE_UPLOAD_T(3) is a newer replacement that returns a more sensible variable type.
PROTOCOLS
All
EXAMPLE
CURL *curl = curl_easy_init();
if(curl) {
  CURLcode res;
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com");
  /* Perform the request */
  res = curl_easy_perform(curl);
  if(!res) {
    double ul;
    res = curl_easy_getinfo(curl, CURLINFO_SIZE_UPLOAD, &ul);
    if(!res) {
      printf("Uploaded %.0f bytes\n", ul);
    }
  }
  curl_easy_cleanup(curl);
}
AVAILABILITY
Added in 7.4.1
RETURN VALUE
Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not.
SEE ALSO
curl_easy_getinfo(3), curl_easy_setopt(3), CURLINFO_SIZE_DOWNLOAD_T(3), CURLINFO_SIZE_UPLOAD_T(3)
Question: Is 1080ti Still Good In 2020?
Should I buy 2070 or 1080ti?
In conclusion: The 1080ti is a very solid card and it overpowers the 2070 but the gap isn’t enormous.
The 2070 is interesting if you want the latest and greatest in software support and want to try exploring new technologies.
It does consume less power than the 1080ti as well.
How much is a 1080 TI worth?
I would say a “fair” price for a GTX 1080 Ti is somewhere between $500-600, bearing in mind that the Galax HOF version is a premium branding; I don't know if it's worth the $700. The GTX 1080 is roughly equivalent to the RTX 2060, except it has 2 GB more VRAM and no RTX support. Its price, $400, isn't especially low.
Is the RTX 2080 better than 1080 TI?
Nvidia’s RTX 2080 is a better card utilizing newer technology and offering better, faster performance than the GTX 1080 Ti and usually at a much lower cost. There will be some games that perform better with the GTX 1080 Ti, but that advantage is not worth hundreds of dollars.
Can a GTX 1080 run 4k?
The GTX 1080 is much more powerful than the GPUs that came before it, and at 4K resolutions, nothing from the previous generation of Nvidia cards can touch it. It doesn’t quite reach the magic 60fps average at maximum settings for 4K, though, so there is room for improvement.
Is a 1080 TI still good?
Beyond being one of the most powerful consumer graphics cards of all time, the GTX 1080 Ti is a showcase of what Nvidia’s Pascal architecture was, and still is, capable of. Even in the face of Nvidia Turing cards like the RTX 2080 and 2080 Ti, the GTX 1080 Ti still stands up to the test of time.
Are 1080 TI discontinued?
Yep, 1080 Ti’s are discontinued.
Is the GTX 1080 outdated?
The 1080 has a multitude of ‘new’ and ‘current’ technologies that will keep it quite relevant for some time to come. If we look at the history of nVidia’s top end cards, they have generally had a pretty good lifespan of around 3–5 years and beyond.
CAN 1080 TI do ray tracing?
Now, Nvidia has announced it will extend ray tracing support to its older Pascal family. … the chances of seeing playable frame rates with ray tracing enabled, even on a GTX 1080 Ti, are not high. In fact, Nvidia’s entire blog post seems mostly calculated to persuade people not to run ray tracing workloads on GTX GPUs.
Should I upgrade my 1080ti?
Bottom line: Only consider upgrading from the GeForce GTX 1080 Ti to the RTX 2080 Ti if you have a high refresh-rate 1440p or 4K monitor. And, uh, a lot of disposable income. If you have a 4K/60 monitor already, sit tight with the GTX 1080 Ti.
Why is RTX 2080 TI so expensive?
No one really knows why the RTX 2080 Ti is so expensive in comparison to the 1080 ti at launch but there are a few well supported reasons why: … Increase in cost to cover the extra costs of R&D for Tensor and RTX cores.
How long will a 1080ti last?
The GTX 1080 TI has a 35% performance increase as compared to the 1080 which will last around 4–5 years and after that it will become a minimum requirement but the 1080TI can last at least 2 years longer.
Why are 1080ti so expensive right now?
The 1080 Ti is more expensive because of supply and demand. Supply is really short right now and demand quite high, even more so than for RTX cards because of the “RTX on” bashing.
How old is GTX 1080?
GeForce 10 (10xx) series
Model                 Launch            Core config
GeForce GTX 1080      May 27, 2016      2560:160:64
GeForce GTX 1080 Ti   March 10, 2017    3584:224:88
NVIDIA TITAN X        August 2, 2016    3584:224:96
(15 more rows)
WCF vs .NET Remoting
Remoting is about distributed objects, while WCF is about services. Remoting is capable of passing object instances between the client and the server; WCF is not. This distinction is much more fundamental than technicalities like which technology is more efficient, more dead, or whether it can connect .NET to Java, but I only fully realized it recently, when I was debating which technology to use for communication between two specific .NET apps that we had.
Remoting is conceptually similar to other distributed objects technologies like DCOM, CORBA, and Java RMI. The server holds a stateful object, which the client accesses via a proxy. The client may pass its own objects to the server, and the server may pass more objects to the client. Bidirectional connection is essential, and proxy lifetime issues are a big thing.
WCF is conceptually similar to data-exchanging technologies like HTML, REST, SQL and, to some extent, the Win32 API if we ignore callbacks. The client may call the server, but not necessarily vice versa. Only plain data can be exchanged between the client and the server. There are no proxies, and thus lifetime management issues don’t exist. If the server is complex and represents multiple entities, every call must identify the entity it refers to via some kind of moniker: it could be a URL, a table name, or a numeric handle like in the Win32 API.
So, Remoting and WCF are not really easily interchangeable except the simplest scenario when the server has only one object and all its methods deal only with plain data.
To illustrate, suppose you have a server that stores some kind of tree. With Remoting you can define the interface as follows:
public interface IRemotingTree
{
string Data { get; set; }
IRemotingTree AddChild(string data);
}
With WCF you cannot have properties or return callable object, so the interface would look something like this:
[ServiceContract]
public interface IWcfTree
{
[OperationContract]
string GetData(string nodeId);
[OperationContract]
void SetData(string nodeId, string value);
[OperationContract]
string AddChildNode(string data); // returns node id
}
This is where similarity with Win32 API comes in: we cannot return multiple objects from the call, so we must specify which tree node is being worked on by sending node ID. If this ID is some opaque token like 0x1234, then this would look like Win32 API. If this ID encodes a path from the root to the node, e.g. “/4/2” (second child of the 4th child of the root), then it would be more like REST.
In any case, unlike Remoting WCF is not object oriented and everything that gets passed around is plain data. What are the implications of that in real life? That’s going to be the topic of a future post.
7 Comments
1. Your Baktun Shell uses .Net Remoting. .Net Core will not support .Net Remoting. Instead, they recommend HTTP based protocols. So this post seems very relevant.
Any suggestions on how to make Baktun Shell work in .Net Core?
Thank you.
1. Kareem, thank you for the comment. You’d need to build a layer of communication based on WCF I suppose. But to be honest, I am not sure about the reasons to run Baktun shell on .NET core.
It’s a WPF application, so it will not run on Linux, and on Windows you have regular .NET. The only problem with regular .NET I can think of is that you cannot have multiple versions on the same machine. Do you have a particular scenario in mind when you’d really need Baktun shell on .NET core? I am sincerely curious, my imagination runs blank on that one, but I did not give it a very deep thought. Please advise.
1. Our need for .Net Core is not significant, really. The main value we hoped to achieve with .Net Core was moving to the new project format that .Net Standard and .Net Core support.
Doing so means we give up what .Net Framework offers. For example, .Net Remoting because we hope to use your ideas from Baktun Shell. (We are just now starting to see if it will work.)
The real problem that we want to solve is developing code as “modules” that are independently developable and deployable, meaning a team works on their code using versions of packages as needed. Modules have their own UI. However, the main product will run with all the modules, sort of like a plugin model, and will appear as running in one Window.
If we use one AppDomain, then the CLR will load only one version of an assembly, and thus likely break one of the modules.
We also like the option of having some modules run in their own process (for various reasons that are not relevant to this discussion).
Baktun Shell seems to be a way to solve both of our real needs.
2. After thinking about this some more, the WCF solution you mentioned does not seem a good way to go based on what you described in the post. .Net Remoting “remotes” the object, which somehow allows graphics rendered by WPF to go across a process. I am not sure how that can be done with WCF.
In the company I work for now, we have a home-grown equivalent of Baktun shell that uses WCF. I did not write it though, and I did not have time to investigate the details, but it proves it’s possible. Having said that, you can probably follow the “isolate the ugly stuff” strategy: have all your UI components in .NET Standard assemblies, and then one “dirty” loader DLL that knows about remoting and talks to Baktun shell. If I remember correctly, there is no requirement in Baktun shell that the User Control itself implements any of the remoting stuff, but I may be mistaken, it’s been a while.
1. I agree that it is possible. I just assumed it would be much more work. Baktun shell is a surprisingly small but elegant code base given what it accomplishes. (Not trying to be fawning.)
I did think about the design that you mentioned. The “top” layer would be .Net Framework.
I assumed that WPF won’t be in the .Net Standard.
2. Simple article which addresses complex questions. Many thanks for your time.
Java Multithreading + Divide-and-Conquer Summation
Summary: `shigen`, a blogger skilled in Java, Python, Vue and Shell, shares programming knowledge and growth experiences. After being caught off guard by high-concurrency questions in an interview, he dug into the topic, studied thread pools, and worked through a classic exercise: summing the numbers from 1 to 100 million. Using a divide-and-conquer strategy, `shigen` implemented a Java version of merge sort and contrasted it with a concise Python implementation. He then shows how to solve the big-sum problem efficiently with multithreading plus segmented summation, and introduces a recursive divide-and-conquer task to push performance further. A future post will cover the fork/join framework. Follow `shigen` and learn something new every day!
shigen is a blogger who keeps writing, skilled in Java, Python, Vue and Shell, and in building all kinds of applications and scripts. Recording growth, sharing insights, holding on to what moves us.
Personal IP: shigen
In a recent interview, shigen got absolutely grilled. My résumé said "familiar with high concurrency", but the interviewer didn't play by the script: I mentioned I had used CountDownLatch, and he asked whether I knew fork/join and how I would design an LRU cache... I went blank; it was embarrassingly awkward.
I quickly spent some time studying, looked at thread pools along the way, and ran into this classic exercise:
Compute the sum of 1 through 100,000,000.
That's right, no need to rub your eyes: the sum of one hundred million numbers. And don't tell me you would just add them up in a plain loop; if so, go home and wait for the call.
Good, that's the setup. Now look at this slick piece of code:
(Code example: shown as a screenshot in the original post.)
My goodness: tasks plus recursion, grinding away inside a thread pool.
This kind of splitting looks just like the legendary bisection method, a classic application of divide and conquer. Wait, why does this feel so familiar?!
Exactly: merge sort works the same way! I spent an hour writing the algorithm out in Java; shigen used to write his algorithms in Python.
Java merge sort
public class MergeSortDemo {

    // merge sort: recursively split, then merge
    static void mergeSort(int[] arr, int left, int right) {
        if (left < right) {
            int mid = (left + right) / 2;
            // sort each half around mid, then merge them
            mergeSort(arr, left, mid);
            mergeSort(arr, mid + 1, right);
            merge(arr, left, mid, right);
        }
    }

    private static void print(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            System.out.print(arr[i] + " ");
        }
        System.out.println();
    }

    private static void merge(int[] arr, int left, int mid, int right) {
        // temp array to hold the merged, ordered elements of arr[left..right]
        int[] temp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        // mind the loop bounds: we are merging two sorted runs,
        // e.g. [1,2,3] + [1,3,4,5,6] with mid = 4
        while (i <= mid && j <= right) {
            if (arr[i] < arr[j]) {
                temp[k++] = arr[i++];
            } else {
                temp[k++] = arr[j++];
            }
        }
        // append the leftovers; only one of these two loops will run
        while (i <= mid) {
            temp[k++] = arr[i++];
        }
        while (j <= right) {
            temp[k++] = arr[j++];
        }
        // copy temp[] back into arr[left..right]
        for (i = 0; i < temp.length; i++) {
            arr[left + i] = temp[i];
        }
    }

    public static void main(String[] args) {
        int[] arr = {1, 432, 1, 3243, 54, 32, -10, 43, 90};
        mergeSort(arr, 0, arr.length - 1);
        print(arr);
    }
}
It looks like a lot of code, and in fact none of it is trivial; the tricky points are noted in the comments. All I can say is that writing algorithms in Java is a real headache.
Python merge sort
(Python merge sort: shown as a screenshot in the original post.)
That's right, just four short lines. Much more concise.
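The screenshot itself isn't preserved here, so as a stand-in, here is one common way to get merge sort down to about four lines of function body in Python (my reconstruction, not necessarily the author's exact code); heapq.merge lazily merges two sorted iterables:

from heapq import merge

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    # merge the two recursively sorted halves
    return list(merge(merge_sort(arr[:mid]), merge_sort(arr[mid:])))

print(merge_sort([1, 432, 1, 3243, 54, 32, -10, 43, 90]))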
Now for the main point: how do we sum the numbers from 1 to 100 million? Multithreading plus segmentation is a good choice:
• 1-1_0000
• 1_0001-2_0000
• 2_0001-3_0000
• ……
• 9999_0000-10000_0000
That's the principle: sum the segments on multiple threads, then compute the overall total at the end. At least two ingredients are certain: a thread pool plus FutureTask.
Multithreaded summation
public class ThreadPoolDemo {
    // @SneakyThrows comes from Lombok; StopWatch is e.g. org.springframework.util.StopWatch
    @SneakyThrows
    public static void main(String[] args) {
        int[] arr = new int[10_0000];
        for (int i = 0; i < arr.length; i++) {
            arr[i] = i + 1;
        }
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        ExecutorService executor = Executors.newFixedThreadPool(10);
        long sum = 0; // long, because 1 + 2 + ... + 100_000 already overflows int
        int chunkSize = arr.length / 10;
        for (int i = 0; i < 10; i++) {
            int start = i * chunkSize;
            int end = (i == 9) ? arr.length : (start + chunkSize);
            // note: calling get() right after submit() blocks, so the chunks
            // actually run one after another here; submit all first for real parallelism
            sum += executor.submit(new SumTask(arr, start, end)).get();
        }
        executor.shutdown();
        stopWatch.stop();
        System.out.println("Sum of 1 to 100000 is: " + sum);
        System.out.println("Execution time: " + stopWatch.getLastTaskTimeMillis() + " ms");
    }
}

class SumTask implements Callable<Long> {
    private final int[] arr;
    private final int start;
    private final int end;

    public SumTask(int[] arr, int start, int end) {
        this.arr = arr;
        this.start = start;
        this.end = end;
    }

    @Override
    public Long call() {
        long sum = 0;
        for (int i = start; i < end; i++) {
            sum += arr[i];
        }
        return sum;
    }
}
It looks like a lot, but the core is just this part:
for (int i = 0; i < 10; i++) {
    int start = i * chunkSize;
    int end = (i == 9) ? arr.length : (start + chunkSize);
    sum += executor.submit(new SumTask(arr, start, end)).get();
}
Create the tasks -> submit them to the thread pool -> get the results -> shut down the thread pool.
But can this still be optimized? It can: the array segments are still too long and there are not enough computing threads, so performance is certainly not optimal.
Multithreading + divide-and-conquer summation
And now for today's protagonist: summation implemented with multithreading plus divide and conquer. Code first:
public class SumRecursive {

    public static class RecursiveSumTask implements Callable<Long> {

        // split granularity: segments shorter than this are summed directly
        public static final int THRESHOLD = 10_0000;

        int low;
        int high;
        int[] arr;
        ExecutorService executorService;

        RecursiveSumTask(ExecutorService executorService, int[] arr, int low, int high) {
            this.executorService = executorService;
            this.arr = arr;
            this.low = low;
            this.high = high;
        }

        @Override
        public Long call() throws Exception {
            long result = 0;
            if (high - low < THRESHOLD) {
                for (int i = low; i < high; i++) {
                    result += arr[i];
                }
            } else {
                int mid = (low + high) / 2;
                RecursiveSumTask leftTask = new RecursiveSumTask(executorService, arr, low, mid);
                RecursiveSumTask rightTask = new RecursiveSumTask(executorService, arr, mid, high);
                Future<Long> lr = executorService.submit(leftTask);
                Future<Long> rr = executorService.submit(rightTask);
                result = lr.get() + rr.get();
            }
            return result;
        }
    }

    @SneakyThrows
    public static void main(String[] args) {
        int[] arr = new int[10000_0000];
        for (int i = 0; i < arr.length; i++) {
            arr[i] = i + 1;
        }
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        ExecutorService executorService = Executors.newCachedThreadPool();
        RecursiveSumTask recursiveSumTask = new RecursiveSumTask(executorService, arr, 0, arr.length);
        Long result = executorService.submit(recursiveSumTask).get();
        executorService.shutdown();
        stopWatch.stop();
        System.out.println("Sum of 1 to 100000000 is: " + result);
        System.out.println("Execution time: " + stopWatch.getLastTaskTimeMillis() + " ms");
    }
}
Honestly, the code looks so good on the monitor that I couldn't resist sharing a screenshot.
(Code screenshot in the original post.)
The difference here is the divide-and-conquer idea: when a segment is shorter than the threshold, we compute its sum directly; when it is longer, we keep splitting it.
Overall, the design and logic really do mirror the MergeSort mentioned above: first split into small enough pieces, then combine them to get the final result.
Of course, this design is not ideal either: the thread pool is hard to size well. With newCachedThreadPool, the pool can grow to 2,147,483,647 threads behind a synchronous hand-off queue, so now we have to worry about OOM!
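That pain point is exactly what the JDK fork/join framework (teased below) addresses: a bounded work-stealing pool instead of one pool thread per blocked subtask. Here is a minimal sketch of the same summation as a RecursiveTask; this is my illustration, not code from the original post:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

class SumForkTask extends RecursiveTask<Long> {
    static final int THRESHOLD = 10_0000; // same split granularity as above
    private final int[] arr;
    private final int low, high;

    SumForkTask(int[] arr, int low, int high) {
        this.arr = arr;
        this.low = low;
        this.high = high;
    }

    @Override
    protected Long compute() {
        if (high - low < THRESHOLD) {
            long sum = 0;
            for (int i = low; i < high; i++) sum += arr[i];
            return sum;
        }
        int mid = (low + high) >>> 1;
        SumForkTask left = new SumForkTask(arr, low, mid);
        SumForkTask right = new SumForkTask(arr, mid, high);
        left.fork();                          // schedule the left half asynchronously
        return right.compute() + left.join(); // compute the right half here, then join
    }
}
// usage: long sum = new ForkJoinPool().invoke(new SumForkTask(arr, 0, arr.length));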
Next up I'll share fork/join; stay tuned! Click here for this article's code.
With shigen, every day is different!
(Java question)
If I reference a field in an inner class, does this cause a circular dependency between the enclosing class and the inner class?
How can I avoid this?
Here is an example:
public class Outer {
private Other o;
private Inner i;
public Outer() {
o = new Other();
i = new Inner() {
public void doSomething() {
o.foo();
}
};
}
}
Outer, Other, Inner do you mean to have three classes? – jjnguy Sep 22 '08 at 23:49
Yes, three classes. – j.davies Sep 22 '08 at 23:55
3 Answers
Accepted answer (9 votes)
Static vs instance class: If you declare the inner class as static, then instances of the inner class don't have any reference to the outer class. If it's not static, then your inner object effectively points to the outer object that created it (it has an implicit reference; in fact, if you use reflection over its constructors you'll see an extra parameter for receiving the outer instance).
Inner instance points to outer instance: A circular reference is the case where each instance points to the other one. A lot of times you use inner classes to elegantly implement some interface and access private fields, while not implementing the interface on the outer class. That means the inner instance points to the outer instance, but not the opposite. Not necessarily a circular reference.
Closing the circle: Anyway, there's nothing wrong with circular referencing in Java. Objects work nicely, and when they're no longer referenced they're garbage collected. It doesn't matter if they point to each other.
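To make this concrete, here is a small Java sketch (my example, not from the thread) contrasting the two kinds of nested classes:

public class Outer {
    private int field = 42;

    // non-static inner class: every instance carries an implicit reference
    // to the Outer instance that created it
    class Inner {
        int read() { return field; } // can access Outer's members directly
    }

    // static nested class: no hidden reference to any Outer instance
    static class Nested {
        int read(Outer o) { return o.field; } // must be handed one explicitly
    }
}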
The syntax you're using in the example is a little off: there is no declaration of the class or interface Inner. But there isn't anything wrong with the concept of the example. In Java it will work fine.
I'm not sure what you're doing here, but you may want to consider a more simple design for maintainability etc.
It's a common pattern for anonymous event handlers to reference elements of their parent class, so no reason to avoid it if that's the case, that's how Java was designed instead of having function pointers.
(Not sure if this is what you are asking...)
At runtime, the inner class has an implicit reference to the instance of the outer class it belongs to. So whenever you pass the inner class instance around, you are also passing the outer class instance around.
You can avoid that by declaring the inner class as "static", but that means that the inner class can't access member variables of the outer class. So in that case if you want to access a member of the outer class, you need to pass it explicitly to the inner class (using a setter or using the constructor of the inner class).
Non-Euclidean geometry and the shape of space
Geometry is not merely a scholarly exercise; for a long time it was also taken to be an exact portrayal of physical space. Still, in discussing the different branches of geometry, their definitions should not be confused with how physical space actually behaves. Geometry was first developed for practical purposes such as land surveying and construction, and rules attributed to Pythagoras were inherited and applied long before they were connected into a rigorous system.
Elliptic geometry has many practical applications. Elliptic geometry, which describes the surface of a circle or sphere, is used by ship captains and airline pilots as they circumnavigate the globe we live on. Working in elliptic geometry has some distinctly non-intuitive consequences: for example, the shortest flying route from the Philippine Islands to Florida passes near Alaska.
The non-Euclidean geometries also place different constraints on parallel lines. In Euclidean geometry we can prove that parallel lines are everywhere equidistant, yet in hyperbolic geometry this is not the case: all we can say is that parallel lines satisfy the constraint that they never intersect. Hyperbolic parallels also need not look straight in the usual models; they can approach each other asymptotically. Correspondingly, the larger a triangle's sides, the greater the deviation of the sum of its angles from 180 degrees, in both hyperbolic and elliptic geometry. Since a triangle's area depends on the sum of its angles, it also follows that there are no similar (non-congruent) triangles in these geometries.
The saddle and the Poincaré disc are two models of hyperbolic surfaces. The surface of a saddle curves in different directions, while the Poincaré disc is a two-dimensional model conceived by the French mathematician and philosopher Henri Poincaré, and it remains an enlightening prototype for students of the subject.
For a long time non-Euclidean geometries were regarded as mere curiosities, since they appeared to matter little for real space. Thanks largely to stellar cosmologists such as Einstein, they began to replace Euclidean geometry in many settings. Physics, for instance, had been framed largely in Euclidean terms, but that framing was toppled: descriptions built on Euclidean geometry never took the curvature of space into account, leading to fundamental problems with the equations of gravity and planetary motion. General relativity instead attributes gravity to an intrinsic curvature of space, in contrast with Newton's description of action at a distance. This intrinsic curvature, which describes how lines can have properties of curvature without appearing curved, is now used to explain why the orbit of a planet is really a straight path through curved spacetime.
Euclidean geometry was simple and intuitive, but it did not capture how space behaves at large scales. Some people, especially certain religious thinkers, accepted only Euclidean geometry and denied the existence of the elliptic and hyperbolic geometries, failing to consider that mathematics is a discipline governed by its own rules. What we can accept today is that Euclidean geometry gives an excellent description of the limited portion of the universe that we live in.
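The triangle claim can be stated compactly. For surfaces of curvature -1 and +1 respectively, the area of a triangle equals its angular defect or excess (a standard result, added here for reference, not taken from the original text):

\text{hyperbolic (curvature } -1\text{):}\quad \mathrm{Area}(\triangle) = \pi - (\alpha + \beta + \gamma)
\text{elliptic (curvature } +1\text{):}\quad \mathrm{Area}(\triangle) = (\alpha + \beta + \gamma) - \pi

where \alpha, \beta, \gamma are the triangle's interior angles in radians. Fixing the angles fixes the area, which is why no similar, non-congruent triangles exist in either geometry.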
Unity 3D AI: NavMesh Navigation
In this tutorial, we’ll dive into artificial intelligence with Unity 3D by introducing the built-in navigation system. I’ll show you how to define a NavMesh in your scene, make an agent intelligently walk through it to reach different targets, and how to connect separate areas of a scene.
NavMesh is a commonly-used technique in Game AI to define a walkable area in an environment. It’s also used to calculate a path between two points, make an NPC walk from its position to a goal, let an enemy reach a player or move the player to a desired destination (e.g. a point-and-click adventure game or an RTS).
Unity offers a built-in implementation of this navigation system, available in both free and pro versions. Some advanced features, like off-mesh links, are pro-only.
You can access the navigation system through the Navigation panel using Window – Navigation, and it will open next to the Inspector.
For the purpose of this tutorial we’re going to use a 3D project. Also, I’ll assume you have familiarity with the Unity editor and know basic operations like adding objects to a scene, adding components, and scripting in C#.
Note: This tutorial is based on Unity 4. The screenshots have been captured from version 4.5.1. As new releases are distributed, menus, names or layouts in the editor may differ.
Marking Scene objects and baking
First things first. In the Object tab, define whether the selected object is Navigation static or not. Walls, floors and platforms are all static objects.
Make sure to go through all your objects in the hierarchy and set them appropriately. You can use the Scene filter in the Navigation panel to show only Mesh Renderers or Terrains. When doing this, make sure to select the actual object with a mesh, not the parent.
Objects that are not Navigation static will be ignored when generating the NavMesh.
Next, define the Navigation Layer for the object using the related drop-down. You can choose between Default (walkable), Not Walkable and Jump. You can also define custom layers in case you want to have terrains with different traversal cost — more on this later.
We’re now ready to bake the nav mesh onto your scene. Here, you can see an initial scene without NavMesh:
After hitting the Bake button, you’ll see that the scene has a blue overlay in certain areas, that’s the NavMesh:
Actors will be able to walk only on the blue area of the map. Just note that every time you make a modification to the scene you’ll need to Bake the NavMesh again.
In this example, the floor is composed of smaller adjacent pieces. This will become handy when we’ll want to define different traversal cost for different areas of the level.
Setting NavMesh properties
You can customize your NavMesh in the Bake Tab.
Radius is the distance between the walls and the navigation mesh, and represents the agent’s radius. If you feel that your agent keeps bumping on walls or objects while moving, increase the radius to make it smoother.
Height represents the agent’s height and specifies the minimum height of areas where the agent will be able to venture into. Max slope specifies the maximum slope for a surface to be considered walkable, while Step Height is the height difference between two surfaces to be considered connected.
Under the Advanced group, you can set the width and height inaccuracy, which specify the approximation allowed when generating the NavMesh. Lower values will give a higher quality NavMesh, but are more computationally expensive. Generating a more accurate NavMesh will also take longer.
The NavMesh Agent
Now that we’ve got a NavMesh in our scene, we need a way to tell our character to walk through it. This is done using the NavMeshAgent component. You’ll find it in the Component Menu, under Navigation.
As you can see in the previous image, I have two cubes in my scene, the blue one represents the player, while the red one marks the position to reach.
Add a NavMeshAgent component to your player game object. This component is responsible for the agent pathfinding and its actual movement control.
There are many properties that you can set in the component. Again, Radius specifies the radius of the agent (and defines whether they can go through a narrow path), while Height is the agent’s height and defines whether they can pass under obstacles.
Speed, Acceleration and Angular Speed are self-explanatory. They define how your agent will move, while Stopping Distance defines how close the agent will get to the target position before stopping.
We now need to tell the agent what to do via scripting. Here’s a quick example of how to make the agent move to a target destination:
using UnityEngine;
using System.Collections;

public class AgentWalkScript : MonoBehaviour
{
    public Transform destination;

    private NavMeshAgent agent;

    void Start ()
    {
        agent = gameObject.GetComponent<NavMeshAgent>();

        agent.SetDestination(destination.position);
    }
}
When the scene starts, the script will retrieve a reference to the game object’s NavMeshAgent component, and will use SetDestination to tell it to move to the destination.
Attach the script to your player game object in the editor, and make sure to set the destination field to another game object in your scene (in my case, I used the red cube). As soon as you hit play, you’ll see your character moving to its target destination. If the agent can’t reach the target (e.g. if you put it in a closed space) it will still try to get as close as possible and stop in the best position available.
If you tick Auto Braking in the inspector, the agent will slow down before reaching the target, smoothly stopping at its position. If the path is broken for any reason, the agent can rebuild a new one in realtime. To do this, tick Auto Repath.
If you want to recreate RTS-style controls (i.e. moving the agent where you click in the scene), you’ll need to cast a ray from the screen point using the camera, and then check Raycast collision with the world using that ray. In case any collision is detected, the agent will attempt to move to that position:
void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Ray screenRay = Camera.main.ScreenPointToRay(Input.mousePosition);

        RaycastHit hit;
        if (Physics.Raycast(screenRay, out hit))
        {
            agent.SetDestination(hit.point);
        }
    }
}
Manually creating a path
There are cases in which you want to have more control over how your agent generates a path to the target and when it starts moving. Path-finding is a fairly expensive operation, and sometimes it’s better to calculate and store a path before using it. This can be done using CalculatePath:
agent = gameObject.GetComponent<NavMeshAgent>();

NavMeshPath path = new NavMeshPath();
bool hasFoundPath = agent.CalculatePath(destination.position, path);

if(path.status == NavMeshPathStatus.PathComplete)
{
    print("The agent can reach the destination");
}
else if(path.status == NavMeshPathStatus.PathPartial)
{
    print("The agent can only get close to the destination");
}
else if(path.status == NavMeshPathStatus.PathInvalid)
{
    print("The agent cannot reach the destination");
    print("hasFoundPath will be false");
}
In this example, I manually calculate a path and then check its status to see whether the agent can reach the destination. Note that in this case the agent won’t start moving. Later on, when you want the agent to start moving, you can use SetPath and pass the path object you generated before:
agent.SetPath(path);
This is very useful when lots of things happen in the game, and you don’t want to slow down the action using SetDestination, which will start a path-finding calculation.
If you want to clear the path, you can use ResetPath:
agent.ResetPath();
The agent will also stop if it’s moving, because it doesn’t have a valid path to follow.
Pausing, resuming and warping
While your agent is moving towards its target, you can temporarily stop it using Stop. By default, the game object will still decelerate and be affected by the avoidance system while stopping. To prevent this, pass true as paramenter:
if(shouldStopImmediately)
{
    agent.Stop(true); // Stops immediately
}
else
{
    agent.Stop(); // Decelerate
}
Later on you can use Resume to make the agent move again:
agent.Resume();
In some cases you may want to move the agent to another position. In this case, instead of applying the transformation to its Transform, you should use Warp, passing the new position:
agent.Warp(newPosition);
In this instance, the agent will also stop moving, so you’ll need to calculate a new path manually or by using SetDestination.
Layers cost
As we said before, each walkable mesh has a layer assigned, and a relative traversal cost. While the default layer has a cost of 1, custom layers can be defined with different costs, for example to mark areas that are dangerous to the agent.
To do that, let’s return to the Navigation panel and go to the Layers tab. Here, I’ve defined three custom layers with higher costs:
You can then return to the Object tab and use these new layers in the Navigation Layer dropdown. Areas with a higher cost will have a different color, just remember that you need to re-bake your NavMesh every time you change a layer to apply the changes.
For example, here I assigned the MoreExpensive layer to the light-brown portion of the platform:
In the current scenario depicted in the image, the blue agent will go all the way up and around the target in order to avoid the expensive area of the map.
You can check the cost of a navigation layer using GetLayerCost, passing the layer index. Default layer has an index of 0, and so on. For example, here I’m checking the cost of the Expensive layer:
float layerCost = agent.GetLayerCost(3);
You can also override layer cost using SetLayerCost:
agent.SetLayerCost(4, 10);
Here, I changed the MoreExpensive layer cost, using its index 4, from 4 to 10. Just note that when the agent is disabled this cost will be reset to its default value defined in the editor.
By default, a NavMeshAgent can walk on every layers defined in the NavMesh. If you want to prevent your agent from walking on certain layers, you can use the “NavMesh Walkable” dropdown menu located in the component properties.
Dynamic Obstacles
While it’s useful to have static objects defined in your map, your agent will also need to take into account and avoid dynamic objects in the scene. They can be moving objects, or objects generated at runtime that cannot be taken into account when baking the NavMesh. To do this, we’ll use NavMeshObstacle components.
In my example, I created another game object (the brown cube), and added a NavMeshObstacle to it:
You can change the obstacle radius and height from the inspector.
NavMesh obstacles won’t affect the NavMesh, but the agent will avoid them and go around their radius, or stop altogether if they cannot find a valid path as a result of the obstacle being in the way. For this reason, NavMesh obstacles are not taken into account when generating a path.
If you want to check for obstacles during path-finding, you can tick the Carve in the Inspector. In this case, the obstacles will create a hole in the NavMesh and the agent will try to find an alternative route in case this obstacle is blocking its path to the target.
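If you prefer to set this up from code, here is a minimal sketch that turns a game object into a dynamic obstacle at runtime. This is my example rather than the tutorial's, and it assumes the radius, height and carving properties that NavMeshObstacle exposes in later Unity versions:

using UnityEngine;

public class DynamicObstacle : MonoBehaviour
{
    void Start()
    {
        // Turn this game object into a NavMesh obstacle at runtime
        NavMeshObstacle obstacle = gameObject.AddComponent<NavMeshObstacle>();
        obstacle.radius = 1.5f;  // avoidance radius around the object
        obstacle.height = 2f;    // obstacle height
        obstacle.carving = true; // carve a hole in the NavMesh so paths re-route
    }
}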
By default, your agent’s Obstacle Avoidance is set to High Quality. This gives smooth movement between obstacles, but it’s also quite expensive. If you want to test lower quality but faster Obstacle Avoidance types you can change it from your game object’s NavMeshAgent component, in the Inspector. Setting it as None will disable Obstacle Avoidance altogether, and your agent will ignore dynamic obstacles in the scene.
Author: Attilio Carotenuto
// created by Minh Nguyen; // version 1.04; (function($) { // for zeptojs; $.isNumeric == null && ($.isNumeric = function(src) { return src != null && src.constructor === Number; }); $.isFunction == null && ($.isFunction = function(src) { return src != null && src instanceof Function; }); var $W = $(window); var $D = $(document); var layoutManager = { // default setting; defaultConfig: { animate: false, cache: true, // cache the size of blocks for performance; cellW: 100, // function(container) {return 100;} cellH: 100, // function(container) {return 100;} delay: 0, // slowdown active block; engine: 'giot', // 'giot' is a person name; fixSize: null, // resize + adjust = fill gap; //fixSize: 0, allow adjust size = no fill gap; //fixSize: 1, no resize + no adjust = no fill gap; gutterX: 15, // width spacing between blocks; gutterY: 15, // height spacing between blocks; selector: '> div', draggable: false, rightToLeft: false, bottomToTop: false, onGapFound: function() {}, onComplete: function() {}, onResize: function() {}, onBlockReady: function() {}, onBlockFinish: function() {}, onBlockActive: function() {} }, plugin: {}, totalGrid: 1, transition: false, loadBlock: function(item, setting) { var runtime = setting.runtime; var gutterX = runtime.gutterX; var gutterY = runtime.gutterY; var cellH = runtime.cellH; var cellW = runtime.cellW; var block = null; var $item = $(item); var active = $item.data("active"); var fixPos = $item.attr('data-position'); var fixSize = parseInt($item.attr('data-fixSize')); var blockId = runtime.lastId++ + '-' + runtime.totalGrid; //ignore dragging block; if ($item.hasClass('fw-float')) return; $item.attr({id: blockId, 'data-delay': item.index}); //remove animation for speed render; if (setting.animate && this.transition) { this.setTransition(item, ""); } isNaN(fixSize) && (fixSize = null); (fixSize == null) && (fixSize = setting.fixSize); var makeRound = (fixSize == 1) ? "ceil" : "round"; // store original size; $item.attr('data-height') == null && $item.attr('data-height', $item.height()); $item.attr('data-width') == null && $item.attr('data-width', $item.width()); var height = 1 * $item.attr('data-height'); var width = 1 * $item.attr('data-width'); if (!setting.cache) { item.style.width = ""; width = $item.width(); item.style.height = ""; height = $item.height(); } var col = !width ? 0 : Math[makeRound]((width + gutterX) / cellW); var row = !height ? 0 : Math[makeRound]((height + gutterY) / cellH); // estimate size; if (!fixSize && setting.cellH == 'auto') { $item.width(cellW * col - gutterX); item.style.height = ""; height = $item.height(); row = !height ? 0 : Math.round((height + gutterY) / cellH); } if (!fixSize && setting.cellW == 'auto') { $item.height(cellH * row - gutterY); item.style.width = ""; width = $item.width(); col = !width ? 
0 : Math.round((width + gutterX) / cellW); } // for none resize block; if ((fixSize != null) && (col > runtime.limitCol || row > runtime.limitRow)) { block = null; } else { // get smallest width and smallest height of block; // using for image runtime; row && row < runtime.minHoB && (runtime.minHoB = row); col && col < runtime.minWoB && (runtime.minWoB = col); // get biggest width and biggest height of block; row > runtime.maxHoB && (runtime.maxHoB = row); col > runtime.maxWoB && (runtime.maxWoB = col); width == 0 && (col = 0); height == 0 && (row = 0); block = { id: blockId, width: col, height: row, fixSize: fixSize }; // for fix position; if (fixPos) { fixPos = fixPos.split("-"); block.y = 1 * fixPos[0]; block.x = 1 * fixPos[1]; block.width = fixSize != null ? col : Math.min(col, runtime.limitCol - block.x); block.height = fixSize != null ? row : Math.min(row, runtime.limitRow - block.y); var holeId = block.y + "-" + block.x + "-" + block.width + "-" + block.height; if (active) { runtime.holes[holeId] = { id: block.id, top: block.y, left: block.x, width: block.width, height: block.height }; this.setBlock(block, setting); } else { delete runtime.holes[holeId]; } } } // for css animation; if ($item.attr("data-state") == null) { $item.attr("data-state", "init"); } else { $item.attr("data-state", "move"); } setting.onBlockReady.call(item, block, setting); return (fixPos && active) ? null : block; }, setBlock: function(block, setting) { var runtime = setting.runtime; var gutterX = runtime.gutterX; var gutterY = runtime.gutterY; var height = block.height; var width = block.width; var cellH = runtime.cellH; var cellW = runtime.cellW; var x = block.x; var y = block.y; if (setting.rightToLeft) { x = runtime.limitCol - x - width; } if (setting.bottomToTop) { y = runtime.limitRow - y - height; } var realBlock = { fixSize: block.fixSize, top: y * cellH, left: x * cellW, width: cellW * width - gutterX, height: cellH * height - gutterY }; realBlock.top = 1 * realBlock.top.toFixed(2); realBlock.left = 1 * realBlock.left.toFixed(2); realBlock.width = 1 * realBlock.width.toFixed(2); realBlock.height = 1 * realBlock.height.toFixed(2); //runtime.length += 1; block.id && (runtime.blocks[block.id] = realBlock); // for append feature; return realBlock; }, showBlock: function(item, setting) { var runtime = setting.runtime; var method = setting.animate && !this.transition ? 'animate' : 'css'; var block = runtime.blocks[item.id]; var $item = $(item); var self = this; var start = $item.attr("data-state") != "move"; var trans = start ? 
"width 0.5s, height 0.5s" : "top 0.5s, left 0.5s, width 0.5s, height 0.5s, opacity 0.5s"; item.delay && clearTimeout(item.delay); //ignore dragging block; if ($item.hasClass('fw-float')) return; // kill the old transition; self.setTransition(item, ""); item.style.position = "absolute"; setting.onBlockActive.call(item, block, setting); function action() { // start to arrange; start && $item.attr("data-state", "start"); // add animation by using css3 transition; if (setting.animate && self.transition) { self.setTransition(item, trans); } // for hidden block; if (!block) { //var position = $item.position(); <= make speed so slow; var height = parseInt(item.style.height) || 0; var width = parseInt(item.style.width) || 0; var left = parseInt(item.style.left) || 0; var top = parseInt(item.style.top) || 0; $item[method]({ left: left + width / 2, top: top + height / 2, width: 0, height: 0, opacity: 0 }); } else { if (block.fixSize) { block.height = 1 * $item.attr("data-height"); block.width = 1 * $item.attr("data-width"); } $item["css"]({ opacity: 1, width: block.width, height: block.height }); // for animating by javascript; $item[method]({ top: block.top, left: block.left }); if ($item.attr('data-nested') != null) { self.nestedGrid(item, setting); } } runtime.length -= 1; setting.onBlockFinish.call(item, block, setting); runtime.length == 0 && setting.onComplete.call(item, block, setting); } setting.delay > 0 ? (item.delay = setTimeout(action, setting.delay * $item.attr("data-delay"))) : action(); }, nestedGrid: function(item, setting) { var innerWall, $item = $(item), runtime = setting.runtime; var gutterX = $item.attr("data-gutterX") || setting.gutterX; var gutterY = $item.attr("data-gutterY") || setting.gutterY; var method = $item.attr("data-method") || "fitZone"; var nested = $item.attr('data-nested') || "> div"; var cellH = $item.attr("data-cellH") || setting.cellH; var cellW = $item.attr("data-cellW") || setting.cellW; var block = runtime.blocks[item.id]; if (block) { innerWall = new freewall($item); innerWall.reset({ cellH: cellH, cellW: cellW, gutterX: 1 * gutterX, gutterY: 1 * gutterY, selector: nested }); switch (method) { case "fitHeight": innerWall[method](block.height); break; case "fitWidth": innerWall[method](block.width); break; case "fitZone": innerWall[method](block.width, block.height); break; } } }, adjustBlock: function(block, setting) { var runtime = setting.runtime; var gutterX = runtime.gutterX; var gutterY = runtime.gutterY; var $item = $("#" + block.id); var cellH = runtime.cellH; var cellW = runtime.cellW; if (setting.cellH == 'auto') { $item.width(block.width * cellW - gutterX); $item[0].style.height = ""; block.height = Math.round(($item.height() + gutterY) / cellH); } }, adjustUnit: function(width, height, setting) { var gutterX = setting.gutterX; var gutterY = setting.gutterY; var runtime = setting.runtime; var cellW = setting.cellW; var cellH = setting.cellH; $.isFunction(cellW) && (cellW = cellW(width)); cellW = 1 * cellW; !$.isNumeric(cellW) && (cellW = 1); $.isFunction(cellH) && (cellH = cellH(height)); cellH = 1 * cellH; !$.isNumeric(cellH) && (cellH = 1); if ($.isNumeric(width)) { // adjust cell width via container; cellW < 1 && (cellW = cellW * width); // estimate total columns; var limitCol = Math.max(1, Math.floor(width / cellW)); // adjust unit size for fit width; if (!$.isNumeric(gutterX)) { gutterX = (width - limitCol * cellW) / Math.max(1, (limitCol - 1)); gutterX = Math.max(0, gutterX); } limitCol = Math.floor((width + gutterX) / cellW); runtime.cellW 
= (width + gutterX) / Math.max(limitCol, 1);
            runtime.cellS = runtime.cellW / cellW;
            runtime.gutterX = gutterX;
            runtime.limitCol = limitCol;
        }
        if ($.isNumeric(height)) {
            // adjust cell height via container;
            cellH < 1 && (cellH = cellH * height);
            // estimate total rows;
            var limitRow = Math.max(1, Math.floor(height / cellH));
            // adjust size unit for fit height;
            if (!$.isNumeric(gutterY)) {
                gutterY = (height - limitRow * cellH) / Math.max(1, (limitRow - 1));
                gutterY = Math.max(0, gutterY);
            }
            limitRow = Math.floor((height + gutterY) / cellH);
            runtime.cellH = (height + gutterY) / Math.max(limitRow, 1);
            runtime.cellS = runtime.cellH / cellH;
            runtime.gutterY = gutterY;
            runtime.limitRow = limitRow;
        }
        if (!$.isNumeric(width)) {
            // adjust cell width via cell height;
            cellW < 1 && (cellW = runtime.cellH);
            runtime.cellW = cellW != 1 ? cellW * runtime.cellS : 1;
            runtime.gutterX = gutterX;
            runtime.limitCol = 666666;
        }
        if (!$.isNumeric(height)) {
            // adjust cell height via cell width;
            cellH < 1 && (cellH = runtime.cellW);
            runtime.cellH = cellH != 1 ? cellH * runtime.cellS : 1;
            runtime.gutterY = gutterY;
            runtime.limitRow = 666666;
        }
    },

    resetGrid: function(runtime) {
        runtime.blocks = {};
        runtime.length = 0;
        runtime.cellH = 0;
        runtime.cellW = 0;
        runtime.lastId = 1;
        runtime.matrix = {};
        runtime.totalCol = 0;
        runtime.totalRow = 0;
    },

    setDraggable: function(item, option) {
        var isTouch = false;
        var config = {
            startX: 0, // start clientX;
            startY: 0,
            top: 0,
            left: 0,
            handle: null,
            onDrop: function() {},
            onDrag: function() {},
            onStart: function() {}
        };
        $(item).each(function() {
            var setting = $.extend({}, config, option);
            var handle = setting.handle || this;
            var ele = this;
            var $E = $(ele);
            var $H = $(handle);
            var posStyle = $E.css("position");
            posStyle != "absolute" && $E.css("position", "relative");

            function mouseDown(evt) {
                evt.stopPropagation();
                evt = evt.originalEvent;
                if (evt.touches) {
                    isTouch = true;
                    evt = evt.changedTouches[0];
                }
                if (evt.button != 2 && evt.which != 3) {
                    setting.onStart.call(ele, evt);
                    setting.startX = evt.clientX;
                    setting.startY = evt.clientY;
                    setting.top = parseInt($E.css("top")) || 0;
                    setting.left = parseInt($E.css("left")) || 0;
                    $D.bind("mouseup touchend", mouseUp);
                    $D.bind("mousemove touchmove", mouseMove);
                }
                return false;
            }

            function mouseMove(evt) {
                evt = evt.originalEvent;
                isTouch && (evt = evt.changedTouches[0]);
                $E.css({
                    top: setting.top - (setting.startY - evt.clientY),
                    left: setting.left - (setting.startX - evt.clientX)
                });
                setting.onDrag.call(ele, evt);
            }

            function mouseUp(evt) {
                evt = evt.originalEvent;
                isTouch && (evt = evt.changedTouches[0]);
                setting.onDrop.call(ele, evt);
                $D.unbind("mouseup touchend", mouseUp);
                $D.unbind("mousemove touchmove", mouseMove);
            }

            // ignore drag drop on text field;
            $E.find("iframe, form, input, textarea, .ignore-drag")
                .each(function() {
                    $(this).on("touchstart mousedown", function(evt) {
                        evt.stopPropagation();
                    });
                });
            $D.unbind("mouseup touchend", mouseUp);
            $D.unbind("mousemove touchmove", mouseMove);
            $H.unbind("mousedown touchstart").bind("mousedown touchstart", mouseDown);
        });
    },

    setTransition: function(item, trans) {
        var style = item.style;
        var $item = $(item);
        // remove animation;
        if (!this.transition && $item.stop) {
            $item.stop();
        } else if (style.webkitTransition != null) {
            style.webkitTransition = trans;
        } else if (style.MozTransition != null) {
            style.MozTransition = trans;
        } else if (style.msTransition != null) {
            style.msTransition = trans;
        } else if (style.OTransition != null) {
            style.OTransition = trans;
        } else {
            style.transition = trans;
        }
    },

    getFreeArea: function(t, l, runtime) {
        var maxY = Math.min(t + runtime.maxHoB, runtime.limitRow);
        var maxX = Math.min(l + runtime.maxWoB, runtime.limitCol);
        var minX = maxX;
        var minY = maxY;
        var matrix = runtime.matrix;
        // find limit zone by horizon;
        for (var y = t; y < minY; ++y) {
            for (var x = l; x < maxX; ++x) {
                if (matrix[y + '-' + x]) {
                    (l < x && x < minX) && (minX = x);
                }
            }
        }
        // find limit zone by vertical;
        for (var y = t; y < maxY; ++y) {
            for (var x = l; x < minX; ++x) {
                if (matrix[y + '-' + x]) {
                    (t < y && y < minY) && (minY = y);
                }
            }
        }
        return {
            top: t,
            left: l,
            width: minX - l,
            height: minY - t
        };
    },

    setWallSize: function(runtime, container) {
        var totalRow = runtime.totalRow;
        var totalCol = runtime.totalCol;
        var gutterY = runtime.gutterY;
        var gutterX = runtime.gutterX;
        var cellH = runtime.cellH;
        var cellW = runtime.cellW;
        var totalWidth = Math.max(0, cellW * totalCol - gutterX);
        var totalHeight = Math.max(0, cellH * totalRow - gutterY);
        container.attr({
            'data-total-col': totalCol,
            'data-total-row': totalRow,
            'data-wall-width': Math.ceil(totalWidth),
            'data-wall-height': Math.ceil(totalHeight)
        });
        if (runtime.limitCol < runtime.limitRow) {
            // do not set height with nesting grid;
            !container.attr("data-height") && container.height(Math.ceil(totalHeight));
        }
    }
};

var engine = {
    // Giot is just a person's name;
    giot: function(items, setting) {
        var runtime = setting.runtime,
            row = runtime.limitRow,
            col = runtime.limitCol,
            x = 0,
            y = 0,
            maxX = runtime.totalCol,
            maxY = runtime.totalRow,
            wall = {},
            holes = runtime.holes,
            block = null,
            matrix = runtime.matrix,
            bigLoop = Math.max(col, row),
            freeArea = null,
            misBlock = null,
            fitWidth = col < row ? 1 : 0,
            lastBlock = null,
            smallLoop = Math.min(col, row);

        // fill area with top, left, width, height;
        function fillMatrix(id, t, l, w, h) {
            for (var y = t; y < t + h;) {
                for (var x = l; x < l + w;) {
                    matrix[y + '-' + x] = id;
                    ++x > maxX && (maxX = x);
                }
                ++y > maxY && (maxY = y);
            }
        }

        // set a hole on the wall;
        for (var i in holes) {
            if (holes.hasOwnProperty(i)) {
                fillMatrix(holes[i]["id"] || true, holes[i]['top'], holes[i]['left'], holes[i]['width'], holes[i]['height']);
            }
        }

        for (var b = 0; b < bigLoop; ++b) {
            if (!items.length) break;
            fitWidth ? (y = b) : (x = b);
            lastBlock = null;
            for (var s = 0; s < smallLoop; ++s) {
                if (!items.length) break;
                fitWidth ? (x = s) : (y = s);
                if (runtime.matrix[y + '-' + x]) continue;
                freeArea = layoutManager.getFreeArea(y, x, runtime);
                block = null;
                for (var i = 0; i < items.length; ++i) {
                    if (items[i].height > freeArea.height) continue;
                    if (items[i].width > freeArea.width) continue;
                    block = items.splice(i, 1)[0];
                    break;
                }
                // trying resize the other block to fit gap;
                if (block == null && setting.fixSize == null) {
                    // resize near block to fill gap;
                    if (lastBlock && !fitWidth && runtime.minHoB > freeArea.height) {
                        lastBlock.height += freeArea.height;
                        fillMatrix(lastBlock.id, lastBlock.y, lastBlock.x, lastBlock.width, lastBlock.height);
                        layoutManager.setBlock(lastBlock, setting);
                        continue;
                    } else if (lastBlock && fitWidth && runtime.minWoB > freeArea.width) {
                        lastBlock.width += freeArea.width;
                        fillMatrix(lastBlock.id, lastBlock.y, lastBlock.x, lastBlock.width, lastBlock.height);
                        layoutManager.setBlock(lastBlock, setting);
                        continue;
                    } else {
                        // get other block fill to gap;
                        for (var i = 0; i < items.length; ++i) {
                            if (items[i]['fixSize'] != null) continue;
                            block = items.splice(i, 1)[0];
                            if (fitWidth) {
                                block.width = freeArea.width;
                                if (setting.cellH == 'auto') {
                                    layoutManager.adjustBlock(block, setting);
                                }
                                // for fitZone;
                                block.height = Math.min(block.height, freeArea.height);
                            } else {
                                block.height = freeArea.height;
                                // for fitZone;
                                block.width = Math.min(block.width, freeArea.width);
                            }
                            break;
                        }
                    }
                }
                if (block != null) {
                    wall[block.id] = {
                        id: block.id,
                        x: x,
                        y: y,
                        width: block.width,
                        height: block.height,
                        fixSize: block.fixSize
                    };
                    // keep success block for next round;
                    lastBlock = wall[block.id];
                    fillMatrix(lastBlock.id, lastBlock.y, lastBlock.x, lastBlock.width, lastBlock.height);
                    layoutManager.setBlock(lastBlock, setting);
                } else {
                    // get expect area;
                    var misBlock = {
                        x: x,
                        y: y,
                        fixSize: 0
                    };
                    if (fitWidth) {
                        misBlock.width = freeArea.width;
                        misBlock.height = 0;
                        var lastX = x - 1;
                        var lastY = y;
                        while (matrix[lastY + '-' + lastX]) {
                            matrix[lastY + '-' + x] = true;
                            misBlock.height += 1;
                            lastY += 1;
                        }
                    } else {
                        misBlock.height = freeArea.height;
                        misBlock.width = 0;
                        var lastY = y - 1;
                        var lastX = x;
                        while (matrix[lastY + '-' + lastX]) {
                            matrix[y + '-' + lastX] = true;
                            misBlock.width += 1;
                            lastX += 1;
                        }
                    }
                    setting.onGapFound(layoutManager.setBlock(misBlock, setting), setting);
                }
            }
        }
        runtime.matrix = matrix;
        runtime.totalRow = maxY;
        runtime.totalCol = maxX;
    }
};

window.freewall = function(selector) {
    var container = $(selector);
    if (container.css('position') == 'static') {
        container.css('position', 'relative');
    }
    var MAX = Number.MAX_VALUE;
    var klass = this;
    // increase the instance index;
    layoutManager.totalGrid += 1;
    var setting = $.extend({}, layoutManager.defaultConfig);
    var runtime = {
        blocks: {},        // store all items;
        events: {},        // store custom events;
        matrix: {},
        holes: {},         // forbidden zone;
        cellW: 0,
        cellH: 0,          // unit adjust;
        cellS: 1,          // unit scale;
        filter: '',        // filter selector;
        lastId: 0,
        length: 0,
        maxWoB: 0,         // max width of block;
        maxHoB: 0,
        minWoB: MAX,
        minHoB: MAX,       // min height of block;
        running: 0,        // flag to check layout arranging;
        gutterX: 15,
        gutterY: 15,
        totalCol: 0,
        totalRow: 0,
        limitCol: 666666,  // maximum column;
        limitRow: 666666,
        currentMethod: null,
        currentArguments: []
    };
    setting.runtime = runtime;
    runtime.totalGrid = layoutManager.totalGrid;

    // check browser support transition;
    var bodyStyle = document.body.style;
    if (!layoutManager.transition) {
        (bodyStyle.webkitTransition != null ||
            bodyStyle.MozTransition != null ||
            bodyStyle.msTransition != null ||
            bodyStyle.OTransition != null ||
            bodyStyle.transition != null) && (layoutManager.transition = true);
    }

    function setDraggable(item) {
        var gutterX = runtime.gutterX;
        var gutterY = runtime.gutterY;
        var cellH = runtime.cellH;
        var cellW = runtime.cellW;
        var $item = $(item);
        var handle = $item.find($item.attr("data-handle"));
        layoutManager.setDraggable(item, {
            handle: handle[0],
            onStart: function(event) {
                if (setting.animate && layoutManager.transition) {
                    layoutManager.setTransition(this, "");
                }
                $item.css('z-index', 9999).addClass('fw-float');
            },
            onDrag: function(evt, tracker) {
                var position = $item.position();
                var top = Math.round(position.top / cellH);
                var left = Math.round(position.left / cellW);
                var width = Math.round($item.width() / cellW);
                var height = Math.round($item.height() / cellH);
                top = Math.min(Math.max(0, top), runtime.limitRow - height);
                left = Math.min(Math.max(0, left), runtime.limitCol - width);
                klass.setHoles({top: top, left: left, width: width, height: height});
                klass.refresh();
            },
            onDrop: function() {
                var position = $item.position();
                var top = Math.round(position.top / cellH);
                var left = Math.round(position.left / cellW);
                var width = Math.round($item.width() / cellW);
                var height = Math.round($item.height() / cellH);
                top = Math.min(Math.max(0, top), runtime.limitRow - height);
                left = Math.min(Math.max(0, left), runtime.limitCol - width);
                $item.removeClass('fw-float');
                $item.css({
                    zIndex: "auto",
                    top: top * cellH,
                    left: left * cellW
                });
                // check old drag element;
                var x, y, key, oldDropId;
                for (y = 0; y < height; ++y) {
                    for (x = 0; x < width; ++x) {
                        key = (y + top) + "-" + (x + left);
                        oldDropId = runtime.matrix[key];
                        if (oldDropId && oldDropId != true) {
                            $("#" + oldDropId).removeAttr("data-position");
                        }
                    }
                }
                runtime.holes = {};
                $item.attr({
                    "data-width": $item.width(),
                    "data-height": $item.height(),
                    "data-position": top + "-" + left
                });
                klass.refresh();
            }
        });
    }

    $.extend(klass, {
        addCustomEvent: function(name, func) {
            var events = runtime.events;
            name = name.toLowerCase();
            !events[name] && (events[name] = []);
            func.eid = events[name].length;
            events[name].push(func);
            return this;
        },

        appendBlock: function(items) {
            var allBlock = $(items).appendTo(container);
            var block = null;
            var activeBlock = [];
            if (runtime.currentMethod) {
                allBlock.each(function(index, item) {
                    item.index = ++index;
                    if (block = layoutManager.loadBlock(item, setting)) {
                        activeBlock.push(block);
                    }
                });
                engine[setting.engine](activeBlock, setting);
                layoutManager.setWallSize(runtime, container);
                runtime.length = allBlock.length;
                allBlock.each(function(index, item) {
                    layoutManager.showBlock(item, setting);
                    if (setting.draggable || item.getAttribute('data-draggable')) {
                        setDraggable(item);
                    }
                });
            }
        },

        /*
        add one or more blank area (hole) on layout;
        example:
        wall.appendHoles({
            top: 10,
            left: 36,
            width: 2,
            height: 6
        });
        wall.appendHoles([
            { top: 16, left: 16, width: 8, height: 2 },
            { top: 10, left: 36, width: 2, height: 6 }
        ]);
        */
        appendHoles: function(holes) {
            var newHoles = [].concat(holes),
                h = {},
                i;
            for (i = 0; i < newHoles.length; ++i) {
                h = newHoles[i];
                runtime.holes[h.top + "-" + h.left + "-" + h.width + "-" + h.height] = h;
            }
            return this;
        },

        container: container,

        destroy: function() {
            var allBlock = container.find(setting.selector).removeAttr('id'),
                block = null,
                activeBlock = [];
            allBlock.each(function(index, item) {
                var $item = $(item);
                var width = 1 * $item.attr('data-width') || "";
                var height = 1 * $item.attr('data-height') || "";
                $item.width(width).height(height).css({
                    position: 'static'
                });
            });
        },

        fillHoles: function(holes) {
            if (arguments.length == 0) {
                runtime.holes = {};
            } else {
                var newHoles = [].concat(holes),
                    h = {},
                    i;
                for (i = 0; i < newHoles.length; ++i) {
                    h = newHoles[i];
                    delete runtime.holes[h.top + "-" + h.left + "-" + h.width + "-" + h.height];
                }
            }
            return this;
        },

        filter: function(filter) {
            runtime.filter = filter;
            runtime.currentMethod && this.refresh();
            return this;
        },

        fireEvent: function(name, object, setting) {
            var events = runtime.events;
            name = name.toLowerCase();
            if (events[name] && events[name].length) {
                for (var i = 0; i < events[name].length; ++i) {
                    events[name][i].call(this, object, setting);
                }
            }
            return this;
        },

        fitHeight: function(height) {
            var allBlock = container.find(setting.selector).removeAttr('id'),
                block = null,
                activeBlock = [];
            height = height ? height : container.height() || $W.height();
            runtime.currentMethod = arguments.callee;
            runtime.currentArguments = arguments;
            layoutManager.resetGrid(runtime);
            layoutManager.adjustUnit('auto', height, setting);
            if (runtime.filter) {
                allBlock.data('active', 0);
                allBlock.filter(runtime.filter).data('active', 1);
            } else {
                allBlock.data('active', 1);
            }
            allBlock.each(function(index, item) {
                var $item = $(item);
                item.index = ++index;
                if (block = layoutManager.loadBlock(item, setting)) {
                    $item.data("active") && activeBlock.push(block);
                }
            });
            klass.fireEvent('onGridReady', container, setting);
            engine[setting.engine](activeBlock, setting);
            layoutManager.setWallSize(runtime, container);
            klass.fireEvent('onGridArrange', container, setting);
            runtime.length = allBlock.length;
            allBlock.each(function(index, item) {
                layoutManager.showBlock(item, setting);
                if (setting.draggable || item.getAttribute('data-draggable')) {
                    setDraggable(item);
                }
            });
        },

        fitWidth: function(width) {
            var allBlock = container.find(setting.selector).removeAttr('id'),
                block = null,
                activeBlock = [];
            width = width ? width : container.width() || $W.width();
            runtime.currentMethod = arguments.callee;
            runtime.currentArguments = arguments;
            layoutManager.resetGrid(runtime);
            layoutManager.adjustUnit(width, 'auto', setting);
            if (runtime.filter) {
                allBlock.data('active', 0);
                allBlock.filter(runtime.filter).data('active', 1);
            } else {
                allBlock.data('active', 1);
            }
            allBlock.each(function(index, item) {
                var $item = $(item);
                item.index = ++index;
                if (block = layoutManager.loadBlock(item, setting)) {
                    $item.data("active") && activeBlock.push(block);
                }
            });
            klass.fireEvent('onGridReady', container, setting);
            engine[setting.engine](activeBlock, setting);
            layoutManager.setWallSize(runtime, container);
            klass.fireEvent('onGridArrange', container, setting);
            runtime.length = allBlock.length;
            allBlock.each(function(index, item) {
                layoutManager.showBlock(item, setting);
                if (setting.draggable || item.getAttribute('data-draggable')) {
                    setDraggable(item);
                }
            });
        },

        fitZone: function(width, height) {
            var allBlock = container.find(setting.selector).removeAttr('id'),
                block = null,
                activeBlock = [];
            height = height ? height : container.height() || $W.height();
            width = width ? width : container.width() || $W.width();
            runtime.currentMethod = arguments.callee;
            runtime.currentArguments = arguments;
            layoutManager.resetGrid(runtime);
            layoutManager.adjustUnit(width, height, setting);
            if (runtime.filter) {
                allBlock.data('active', 0);
                allBlock.filter(runtime.filter).data('active', 1);
            } else {
                allBlock.data('active', 1);
            }
            allBlock.each(function(index, item) {
                var $item = $(item);
                item.index = ++index;
                if (block = layoutManager.loadBlock(item, setting)) {
                    $item.data("active") && activeBlock.push(block);
                }
            });
            klass.fireEvent('onGridReady', container, setting);
            engine[setting.engine](activeBlock, setting);
            layoutManager.setWallSize(runtime, container);
            klass.fireEvent('onGridArrange', container, setting);
            runtime.length = allBlock.length;
            allBlock.each(function(index, item) {
                layoutManager.showBlock(item, setting);
                if (setting.draggable || item.getAttribute('data-draggable')) {
                    setDraggable(item);
                }
            });
        },

        /*
        set block with special position, the top and left are multiple of unit width/height;
        example:
        wall.fixPos({
            top: 0,
            left: 0,
            block: $('.free')
        });
        */
        fixPos: function(option) {
            $(option.block).attr({'data-position': option.top + "-" + option.left});
            return this;
        },

        /*
        set block with special size, the width and height are multiple of unit width/height;
        example:
        wall.fixSize({
            height: 5,
            width: 2,
            block: $('.free')
        });
        */
        fixSize: function(option) {
            option.height != null && $(option.block).attr({'data-height': option.height});
            option.width != null && $(option.block).attr({'data-width': option.width});
            return this;
        },

        prepend: function(items) {
            container.prepend(items);
            runtime.currentMethod && this.refresh();
            return this;
        },

        refresh: function() {
            var params = arguments.length ? arguments : runtime.currentArguments;
            runtime.currentMethod == null && (runtime.currentMethod = this.fitWidth);
            runtime.currentMethod.apply(this, Array.prototype.slice.call(params, 0));
            return this;
        },

        /*
        custom layout setting;
        example:
        wall.reset({
            selector: '.brick',
            animate: true,
            cellW: 160,
            cellH: 160,
            delay: 50,
            onResize: function() {
                wall.fitWidth();
            }
        });
        */
        reset: function(option) {
            $.extend(setting, option);
            return this;
        },

        /*
        create one or more blank area (hole) on layout;
        example:
        wall.setHoles({
            top: 2,
            left: 2,
            width: 2,
            height: 2
        });
        */
        setHoles: function(holes) {
            var newHoles = [].concat(holes),
                h = {},
                i;
            runtime.holes = {};
            for (i = 0; i < newHoles.length; ++i) {
                h = newHoles[i];
                runtime.holes[h.top + "-" + h.left + "-" + h.width + "-" + h.height] = h;
            }
            return this;
        },

        unFilter: function() {
            delete runtime.filter;
            this.refresh();
            return this;
        }
    });

    container.attr('data-min-width', Math.floor($W.width() / 80) * 80);

    // execute plugins;
    for (var i in layoutManager.plugin) {
        if (layoutManager.plugin.hasOwnProperty(i)) {
            layoutManager.plugin[i].call(klass, setting, container);
        }
    }

    // setup resize event;
    $W.resize(function() {
        if (runtime.running) return;
        runtime.running = 1;
        setTimeout(function() {
            runtime.running = 0;
            setting.onResize.call(klass, container);
        }, 122);
        container.attr('data-min-width', Math.floor($W.width() / 80) * 80);
    });
};

/*
add default setting;
example:
freewall.addConfig({
    offsetLeft: 0
});
*/
freewall.addConfig = function(newConfig) {
    // add default setting;
    $.extend(layoutManager.defaultConfig, newConfig);
};

/*
support create new arrange algorithm;
example:
freewall.createEngine({
    slice: function(items, setting) {
        // slice engine;
    }
});
*/
freewall.createEngine = function(engineData) {
    // create new engine;
    $.extend(engine, engineData);
};

/*
support create new plugin;
example:
freewall.createPlugin({
    centering: function(setting, container) {
        console.log(this);
        console.log(setting);
    }
});
*/
freewall.createPlugin = function(pluginData) {
    // register new plugin;
    $.extend(layoutManager.plugin, pluginData);
};

/*
support access helper function;
example:
freewall.getMethod('setBlock')(block, setting);
*/
freewall.getMethod = function(method) {
    // get helper method;
    return layoutManager[method];
};

})(window.Zepto || window.jQuery);
NAME
Filter::Simple - Simplified source filtering
SYNOPSIS
# in MyFilter.pm:
package MyFilter;
use Filter::Simple sub { ... };
# in user's code:
use MyFilter;
# this code is filtered
no MyFilter;
# this code is not
DESCRIPTION
The Problem
Source filtering is an immensely powerful feature of recent versions of Perl. It allows one to extend the language itself (e.g. the Switch module), to simplify the language (e.g. Language::Pythonesque), or to completely recast the language (e.g. Lingua::Romana::Perligata). Effectively, it allows one to use the full power of Perl as its own, recursively applied, macro language.
The excellent Filter::Util::Call module (by Paul Marquess) provides a usable Perl interface to source filtering, but it is often too powerful and not nearly as simple as it could be.
To use the module it is necessary to do the following:
1. Download, build, and install the Filter::Util::Call module. (If you are using Perl 5.7.1 or later, you already have Filter::Util::Call.)
2. Set up a module that does a use Filter::Util::Call.
3. Within that module, create an import subroutine.
4. Within the import subroutine do a call to filter_add, passing it either a blessed object or a subroutine reference.
5. Within the subroutine reference, call filter_read or filter_read_exact to "prime" $_ with source code data from the source file that will use your module. Check the status value returned to see if any source code was actually read in.
6. Process the contents of $_ to change the source code in the desired manner.
7. Return the status value.
8. If the act of unimporting your module (via a no) should cause source code filtering to cease, create an unimport subroutine, and have it call filter_del. Make sure that the call to filter_read or filter_read_exact in step 5 will not accidentally read past the no. Effectively this limits source code filters to line-by-line operation, unless the import subroutine does some fancy pre-pre-parsing of the source code it's filtering.
For example, here is a minimal source code filter in a module named BANG.pm. It simply converts every occurrence of the sequence BANG\s+BANG to the sequence die 'BANG' if $BANG in any piece of code following a use BANG; statement (until the next no BANG; statement, if any):
package BANG;

use Filter::Util::Call;

sub import {
    filter_add( sub {
        my $caller = caller;
        my ($status, $no_seen, $data);
        while ($status = filter_read()) {
            if (/^\s*no\s+$caller\s*;\s*$/) {
                $no_seen = 1;
                last;
            }
            $data .= $_;
            $_ = "";
        }
        $_ = $data;
        s/BANG\s+BANG/die 'BANG' if \$BANG/g
            unless $status < 0;
        $_ .= "no $caller;\n" if $no_seen;
        return 1;
    })
}

sub unimport {
    filter_del();
}

1;
This level of sophistication puts filtering out of the reach of many programmers.
A Solution
The Filter::Simple module provides a simplified interface to Filter::Util::Call; one that is sufficient for most common cases.
Instead of the above process, with Filter::Simple the task of setting up a source code filter is reduced to:
1. Set up a module that does a use Filter::Simple sub { ... }.
2. Within the anonymous subroutine passed to use Filter::Simple, process the contents of $_ to change the source code in the desired manner.
In other words, the previous example would become:
package BANG;

use Filter::Simple sub {
    s/BANG\s+BANG/die 'BANG' if \$BANG/g;
};

1;
How it works
The Filter::Simple module exports into the package that uses it (e.g. package "BANG" in the above example) two automagically constructed subroutines -- import and unimport -- which take care of all the nasty details.
In addition, the generated import subroutine passes its own argument list to the filtering subroutine, so the BANG.pm filter could easily be made parametric:
package BANG;

use Filter::Simple sub {
    my ($die_msg, $var_name) = @_;
    s/BANG\s+BANG/die '$die_msg' if \${$var_name}/g;
};

# and in some user code:

use BANG "BOOM", "BAM";  # "BANG BANG" becomes: die 'BOOM' if $BAM
The specified filtering subroutine is called every time a use BANG is encountered, and passed all the source code following that call, up to either the next no BANG; call or the end of the source file (whichever occurs first). Currently, any no BANG; call must appear by itself on a separate line, or it is ignored.
AUTHOR
Damian Conway ([email protected])
COPYRIGHT
Copyright (c) 2000, Damian Conway. All Rights Reserved.
This module is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.
Creating a Screensaver
There may be some variations to these instructions depending on your operating system.
PC
1. Save zipped image file to “My Pictures"
2. Go to “My Computer” and navigate to “My Pictures”
3. Right click on “1024” folder (may say 600 or 768) and select “Extract All”
4. Extract folder into “My Documents” not “My Pictures”. If you extract them into "My Pictures", that whole folder becomes your screensaver
5. Once you've got the images extracted and in a folder of their own inside "My Documents", open your control panel and select “Display”
6. Select the “Screensaver” tab
7. Select “My Pictures Slideshow” in the screensaver menu
8. Select “Options”
9. Select the “1024” folder in "My Documents" from the “Use pictures from this folder” menu
10. Select “OK”. You're done!
Mac
1. Save the zipped file to your desktop or someplace you can find
2. Navigate to the downloaded file on your computer. Double click on the zipped file icon to unzip the file
3. Open iPhoto
4. Select “Import to Library” from File menu
5. Navigate to the 1024 (or 600 or 768 depending on the size you downloaded) folder you just unzipped
6. Click “Import”
7. Click “New Album” in File menu
8. Name the album "Linfield"
9. Copy the newly imported images into the album (they’ll be under the “Last Import” option in the left menu, or just go down to the bottom of your iPhoto library)
10. Open up “Desktop & Screen Saver" in "System Preferences" (You're almost there!)
11. Select the "Screen Saver" tab
12. Select “Linfield” folder from left window. You're done!
I haven't rooted my mobile phone, but an antivirus is running. Without rooting, how will the antivirus come to know about the files accessed by an application?
Normally, in Linux distributions, this can be found under /proc/<pid>/fd, but in Android root privileges are required to access that folder, which is not the case in normal Linux distributions. So, without rooting, how does the antivirus come to know about the files accessed by an application? Is there any other way other than accessing /proc/<pid>/fd?
• 1. What version of Android are you running? 2. When you check under App > Permissions what permissions is the app using? – andDevW Aug 1 '17 at 18:47
It's simple - it doesn't. Antivirus on Android has very limited access. It can't check what a given app is doing, so it's not very useful. It may warn you if it finds something suspicious on the sdcard/external sdcard, or if one of your installed apps is marked as bad in its database. It can't scan the app data files, and it can't scan the main system, so it's pretty pointless to use it.
Anti-virus programs run in sandbox mode like every other application. They can access your storage. And they can also scan installed apps, because APK files are just zip archives.
• They can scan external storage (/sdcard) only, not installed apps. – Irfan Latif Sep 2 '19 at 7:07
Exploring the World of Virtual Reality Casinos: A Next-Level Gaming Experience
Introduction
Virtual Reality (VR) technology has revolutionized various industries, and the world of online gambling is no exception. Virtual Reality Casinos offer a next-level gaming experience, immersing players in a digital environment that replicates the atmosphere of a land-based casino. In this article, we will explore the exciting world of Virtual Reality Casinos, discussing the technology behind them, the benefits they offer to players, and the future potential of this emerging trend.
The Technology Behind Virtual Reality Casinos
Virtual Reality Casinos utilize cutting-edge technology to create a realistic and immersive gambling experience. Here are the key components of the technology:
VR Headsets
The centerpiece of the Virtual Reality experience is the VR headset. These headsets, such as the Oculus Rift, HTC Vive, or PlayStation VR, are worn on the head and provide a 360-degree view of the virtual environment. They track the movements of the player’s head, allowing them to look around and interact with the virtual world.
Hand Controllers
To enhance the level of interaction, hand controllers are used in Virtual Reality Casinos. These controllers allow players to manipulate objects within the virtual environment, such as picking up chips, spinning a roulette wheel, or dealing cards. The controllers provide a more immersive and realistic experience by simulating the sensation of touch and movement.
Spatial Audio
Spatial audio technology adds another layer of immersion to Virtual Reality Casinos. It creates a three-dimensional sound environment that changes in real-time based on the player’s position and orientation. This means that players can hear sounds coming from different directions, adding to the overall realism of the experience.
Room-Scale Tracking
Some Virtual Reality systems offer room-scale tracking, which allows players to move around in a physical space while wearing the VR headset. This feature provides a more immersive experience, as players can physically walk around a virtual casino, interact with objects, and even meet and chat with other players.
The Benefits of Virtual Reality Casinos
Virtual Reality Casinos offer several advantages over traditional online gambling platforms. Here are some of the key benefits that players can enjoy:
Realistic Casino Atmosphere
Virtual Reality Casinos aim to recreate the atmosphere of a land-based casino, providing players with a highly immersive experience. Players can explore beautifully designed virtual casino environments, complete with realistic sounds, visuals, and interactions. This adds a new level of excitement and engagement to the gambling experience.
Social Interaction
One of the biggest advantages of Virtual Reality Casinos is the ability to interact with other players in a virtual space. Players can meet and chat with other gamblers from around the world, creating a social environment similar to a physical casino. This social aspect adds a sense of community and enhances the overall enjoyment of the gambling experience.
Enhanced Gameplay
Virtual Reality technology allows for more interactive and engaging gameplay. Players can physically interact with objects, such as spinning the roulette wheel or throwing the dice, making the experience more tactile and immersive. Additionally, the use of hand controllers enables players to perform actions that were previously only possible in a physical casino, such as shuffling and dealing cards.
Privacy and Convenience
Virtual Reality Casinos offer the convenience of online gambling while maintaining the privacy of the player’s own home. Players can enjoy the thrills of casino gaming without the need to travel or be physically present in a crowded casino. This makes Virtual Reality Casinos a great option for those who prefer to gamble in a more private and comfortable environment.
The Future of Virtual Reality Casinos
The potential of Virtual Reality Casinos is still being explored, and the future looks promising. Here are some of the possibilities that the technology holds:
Expanded Game Selection
As Virtual Reality Casinos continue to evolve, we can expect to see an expanded selection of games. Currently, popular casino games such as blackjack, roulette, and slots are available in Virtual Reality format. However, as the technology becomes more widespread, we can anticipate the introduction of new and innovative games that take full advantage of the immersive capabilities of Virtual Reality.
Integration with Cryptocurrencies
Virtual Reality Casinos and cryptocurrencies, such as Bitcoin, are a natural fit. Both technologies offer enhanced security and privacy, making them an ideal combination for online gambling. We can expect to see Virtual Reality Casinos adopting cryptocurrencies as a form of payment, providing players with a secure and anonymous way to gamble online.
Virtual Reality Poker Tournaments
Poker is a game that heavily relies on social interaction and reading opponents’ reactions. Virtual Reality Casinos can take the poker experience to new heights by offering virtual poker tournaments. Players can sit at a virtual poker table, observe their opponents’ body language, and make strategic decisions, just as they would in a physical casino. This immersive poker experience has the potential to attract a large number of players and revolutionize the online poker industry.
Integration with Augmented Reality
Augmented Reality (AR) technology can be combined with Virtual Reality Casinos to create an even more interactive and engaging gambling experience. With AR, players can overlay virtual elements onto the real world, allowing them to play casino games in their own physical surroundings. This opens up possibilities for unique and personalized gambling experiences, where players can place virtual slot machines on their coffee table or play a game of blackjack on their kitchen counter.
Virtual Reality Casino Resorts
In the future, we may see the emergence of Virtual Reality Casino Resorts, where players can explore and gamble in virtual replicas of famous land-based casinos from around the world. These virtual resorts would offer a wide range of casino games, entertainment options, and social interactions, providing players with a complete virtual casino experience without leaving their homes.
Conclusion
Virtual Reality Casinos are pushing the boundaries of online gambling, offering players a highly immersive and realistic gaming experience. With technology advancements, such as VR headsets, hand controllers, and spatial audio, players can enjoy the atmosphere of a land-based casino, interact with other players, and experience enhanced gameplay. The future of Virtual Reality Casinos holds exciting possibilities, including expanded game selection, integration with cryptocurrencies, virtual poker tournaments, and the combination of Virtual Reality with Augmented Reality. As the technology continues to evolve, we can expect Virtual Reality Casinos to become an increasingly popular and innovative way to gamble online.
Visit here: https://casinobonus.fun/dash-online-casinos-in-canada/
levn
Light ECMAScript (JavaScript) Value Notation. Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human entered data (eg. config files, command line arguments).
Levn aims to concisely describe JavaScript values in text, and allow for the extraction and validation of those values. Levn uses type-check for its type format, and to validate the results. MIT license. Version 0.4.1.
How is this different than JSON? levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since this is not its purpose.
npm install levn
For updates on levn, follow me on twitter.
Quick Examples
var parse = require('levn').parse;
parse('Number', '2'); // 2
parse('String', '2'); // '2'
parse('String', 'levn'); // 'levn'
parse('String', 'a b'); // 'a b'
parse('Boolean', 'true'); // true
parse('Date', '#2011-11-11#'); // (Date object)
parse('Date', '2011-11-11'); // (Date object)
parse('RegExp', '/[a-z]/gi'); // /[a-z]/gi
parse('RegExp', 're'); // /re/
parse('Int', '2'); // 2
parse('Number | String', 'str'); // 'str'
parse('Number | String', '2'); // 2
parse('[Number]', '[1,2,3]'); // [1,2,3]
parse('(String, Boolean)', '(hi, false)'); // ['hi', false]
parse('{a: String, b: Number}', '{a: str, b: 2}'); // {a: 'str', b: 2}
// at the top level, you can omit surrounding delimiters
parse('[Number]', '1,2,3'); // [1,2,3]
parse('(String, Boolean)', 'hi, false'); // ['hi', false]
parse('{a: String, b: Number}', 'a: str, b: 2'); // {a: 'str', b: 2}
// wildcard - auto choose type
parse('*', '[hi,(null,[42]),{k: true}]'); // ['hi', [null, [42]], {k: true}]
Usage
require('levn'); returns an object that exposes three properties. VERSION is the current version of the library as a string. parse and parsedTypeParse are functions.
// parse(type, input, options);
parse('[Number]', '1,2,3'); // [1, 2, 3]
// parsedTypeParse(parsedType, input, options);
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
parse(type, input, options)
parse casts the string input into a JavaScript value according to the specified type in the type format (and taking account the optional options) and returns the resulting JavaScript value.
arguments
• type - String - the type written in the type format which to check against
• input - String - the value written in the levn format
• options - Maybe Object - an optional parameter specifying additional options
returns
* - the resulting JavaScript value
example
parse('[Number]', '1,2,3'); // [1, 2, 3]
parsedTypeParse(parsedType, input, options)
parsedTypeParse casts the string input into a JavaScript value according to the specified type which has already been parsed (and taking account the optional options) and returns the resulting JavaScript value. You can parse a type using the type-check library's parseType function.
arguments
• type - Object - the type in the parsed type format which to check against
• input - String - the value written in the levn format
• options - Maybe Object - an optional parameter specifying additional options
returns
* - the resulting JavaScript value
example
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
Levn Format
Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, parse('Number', '2') will produce the number 2, but parse('String', '2') will produce the string "2".
If you do not provide type information, and simply use *, levn will parse the input according to the unambiguous "explicit" mode, which we will now detail - you can also set the explicit option to true manually in the options.
• "string", 'string' are parsed as a String, eg. "a msg" is "a msg"
• #date# is parsed as a Date, eg. #2011-11-11# is new Date('2011-11-11')
• /regexp/flags is parsed as a RegExp, eg. /re/gi is /re/gi
• undefined, null, NaN, true, and false are all their JavaScript equivalents
• [element1, element2, etc] is an Array, and the casting procedure is recursively applied to each element. Eg. [1,2,3] is [1,2,3].
• (element1, element2, etc) is a tuple, and the casting procedure is recursively applied to each element. Eg. (1, a) is (1, a) (is [1, 'a']).
• {key1: val1, key2: val2, ...} is an Object, and the casting procedure is recursively applied to each property. Eg. {a: 1, b: 2} is {a: 1, b: 2}.
• Any text which does not fall under the above, and which does not contain special characters ([, ], (, ), {, }, :, or ,) is a string, eg. $12- blah is "$12- blah".
If you do provide type information, you can make your input more concise as the program already has some information about what it expects. Please see the type format section of type-check for more information about how to specify types. There are some rules about what levn can do with the information:
• If a String is expected, and only a String, all characters of the input (including any special ones) will become part of the output. Eg. [({})] is "[({})]", and "hi" is '"hi"'.
• If a Date is expected, the surrounding # can be omitted from date literals. Eg. 2011-11-11 is new Date('2011-11-11').
• If a RegExp is expected, no flags need to be specified, and if the regex does not use any of the special characters, the opening and closing / can be omitted - this will have the effect of setting the source of the regex to the input. Eg. regex is /regex/.
• If an Array is expected, and it is the root node (at the top level), the opening [ and closing ] can be omitted. Eg. 1,2,3 is [1,2,3].
• If a tuple is expected, and it is the root node (at the top level), the opening ( and closing ) can be omitted. Eg. 1, a is (1, a) (is [1, 'a']).
• If an Object is expected, and it is the root node (at the top level), the opening { and closing } can be omitted. Eg a: 1, b: 2 is {a: 1, b: 2}.
If you list multiple types (eg. Number | String), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus put String at the end of your list. In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list.
Whitespace between special characters and elements is inconsequential.
Options
Options is an object. It is an optional parameter to the parse and parsedTypeParse functions.
Explicit
A Boolean. By default it is false.
Example:
parse('RegExp', 're', {explicit: false}); // /re/
parse('RegExp', 're', {explicit: true}); // Error: ... does not type check...
parse('RegExp | String', 're', {explicit: true}); // 're'
explicit sets whether to be in explicit mode or not. Using * automatically activates explicit mode. For more information, read the levn format section.
customTypes
An Object. Empty {} by default.
Example:
var options = {
customTypes: {
Even: {
typeOf: 'Number',
validate: function (x) {
return x % 2 === 0;
},
cast: function (x) {
return {type: 'Just', value: parseInt(x)};
}
}
}
}
parse('Even', '2', options); // 2
parse('Even', '3', options); // Error: Value: "3" does not type check...
Another Example:
function Person(name, age){
this.name = name;
this.age = age;
}
var options = {
customTypes: {
Person: {
typeOf: 'Object',
validate: function (x) {
return x instanceof Person;
},
cast: function (value, options, typesCast) {
var name, age;
if ({}.toString.call(value).slice(8, -1) !== 'Object') {
return {type: 'Nothing'};
}
name = typesCast(value.name, [{type: 'String'}], options);
age = typesCast(value.age, [{type: 'Number'}], options);
return {type: 'Just', value: new Person(name, age)};
}
}
}
parse('Person', '{name: Laura, age: 25}', options); // Person {name: 'Laura', age: 25}
customTypes is an object whose keys are the name of the types, and whose values are an object with three properties, typeOf, validate, and cast. For more information about typeOf and validate, please see the custom types section of type-check.
cast is a function which receives three arguments, the value under question, options, and the typesCast function. In cast, attempt to cast the value into the specified type. If you are successful, return an object in the format {type: 'Just', value: CAST-VALUE}, if you know it won't work, return {type: 'Nothing'}. You can use the typesCast function to cast any child values. Remember to pass options to it. In your function you can also check for options.explicit and act accordingly.
Technical About
levn is written in LiveScript - a language that compiles to JavaScript. It uses type-check to both parse types and validate values. It also uses the prelude.ls library.
Guide to Disable Hardware in Linux
Determine Which Module to Disable
You should always be extremely careful, and double check what each module is used for, before making any changes. If you accidentally remove the module that manages your IDE bus and your hard drive is IDE, your system will become unresponsive immediately.
The first thing you need to do is figure out the name of the module. If your kernel is in the 2.6.x series, you can figure it out pretty easily with the command "sudo lspci -k".
With the “-k” option, you can easily see which modules and drivers are used by each hardware device. However, if you are using an older kernel, the “-k” option does not work. Instead, you have to figure it out from the command “lsmod”.
lsmod
As you can see, the lsmod command gives you a list of all the kernel modules loaded, but it does not have a “plain English” way of telling you what device uses which module. If you are unsure, do some research before removing or blacklisting a module.
Disabling the Hardware
Once you have figured out the correct module, you will edit the /etc/modprobe.d/blacklist.conf file. This file may also be called /etc/modprobe.d/blacklist depending on your distribution. The blacklist.conf file is owned by root, so either su to root, or use "sudo gedit" to make any changes. The format for blacklisting a module is "blacklist <module name>". For example, if I wanted to blacklist the "berry_charge" module, I would add the following two lines to my blacklist.conf file:
#No longer own my blackberry
blacklist berry_charge
Once you add an entry to the blacklist.conf file, you must reboot your system for the changes to take effect.
You can also unload the module from the running kernel by using the "rmmod" command (note that this only lasts until the module is loaded again, such as after a reboot). Using the berry_charge example, if I wanted to unload the module I would use the command:
sudo rmmod berry_charge
Some modules depend on other modules, so you may have to follow a dependency trail to actually get the results that you want.
Finally, if the driver is actually compiled into the kernel, you may have some difficulty actually disabling it. Sometimes you can use the /proc and /sys filesystems to disable the hardware. For example, if you want to disable a cpu you would use the following command:
echo 0 > /sys/devices/system/cpu/cpu1/online
Other times, you will have to use kernel command line arguments.
As you can see, disabling hardware is not a simple "point and click" process in Linux. There may be a time, possibly in the near future, when there will be a graphical system for it. Until then your best bet is to hope your hardware uses a kernel module, and blacklist it.
How to get current data attribute value using jquery?
I am using the following code in a PHP while loop:
<tr class="odd gradeX">
<td><?php echo $blog_id; ?></td>
<td><?php echo $blog_title; ?></td>
<td><?php echo $cat_name; ?></td>
<td class="center"><img width="50" src="<?php echo SITE_URL."assets/images/blog_images/$blog_image" ?>"/></td>
<td class="center"><?php echo $uname; ?></td>
<td class="center"><?php echo $added_date; ?></td>
<td class="center"><?php echo $status; ?></td>
<td class="center"><a data-toggle="modal" class="delete" data-id="<?php echo $blog_id; ?>" data-target="#myModal" href="<?php echo AD_SITE_URL."delete.php?name=blog&blog_id=$blog_id" ?>">Delete</td>
</tr>
When I click on the delete link, it shows me a popup box with the below code:
<form >
<div class="modal-footer">
<input type="submit" id="deletePost" class="btn btn-danger" name="submit" value="YES">
<input type="submit" class="btn btn-success" name="submit" value="NO" data-dismiss="modal">
</div>
</form>
Now when I press the Yes button, it should show me the data-id value from the delete link, but every time it shows me the same data-id value. How can I get each delete link's data-id value using jQuery?
I am using the following code:
<script>
$(document).ready(function() {
$("#deletePost").click(function() {
var id = $(".delete").attr("data-id");
alert(id);
});
});
</script>
Answer
You just need to create a hidden field in your modal and populate its value each time the user clicks on a delete link.
Here is a working demo:
$(function() {
$(".delete").click(function() {
var id = $(this).data('id');
$("#myModal #post-id").val(id);
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
<link href="https://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet" />
<table class="table">
<tr>
<th>Id</th>
<th>Title</th>
<th>Category</th>
<th>Desc</th>
<th></th>
</tr>
<tr class="odd gradeX">
<td>1</td>
<td>title 1</td>
<td>Category</td>
<td class="center">data</td>
<td class="center"><a data-toggle="modal" class="delete" data-id="1" data-target="#myModal" href="#">Delete</a>
</td>
</tr>
<tr class="odd gradeX">
<td>2</td>
<td>title 2</td>
<td>Category</td>
<td class="center">data</td>
<td class="center"><a data-toggle="modal" class="delete" data-id="2" data-target="#myModal" href="#">Delete</a>
</td>
</tr>
<tr class="odd gradeX">
<td>3</td>
<td>title 3</td>
<td>Category</td>
<td class="center">data</td>
<td class="center"><a data-toggle="modal" class="delete" data-id="3" data-target="#myModal" href="#">Delete</a>
</td>
</tr>
<tr class="odd gradeX">
<td>4</td>
<td>title 4</td>
<td>Category</td>
<td class="center">data</td>
<td class="center"><a data-toggle="modal" class="delete" data-id="4" data-target="#myModal" href="#">Delete</a>
</td>
</tr>
<table>
<div id="myModal" class="modal fade" tabindex="-1" role="dialog">
<div class="modal-dialog" role="document">
<div class="modal-content">
<form>
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span>
</button>
<h4 class="modal-title">Modal title</h4>
</div>
<div class="modal-body">
<p>Dou you want to delete</p>
this is your hidden field for id
<input type="text" name="id" id="post-id">
</div>
<div class="modal-footer">
<input type="submit" id="deletePost" class="btn btn-danger" name="submit" value="YES">
<input type="submit" class="btn btn-success" name="submit" value="NO" data-dismiss="modal">
</div>
</form>
</div>
<!-- /.modal-content -->
</div>
<!-- /.modal-dialog -->
</div>
<!-- /.modal -->
I hope it will help
The OJ system's new features are under testing. If you run into any problems, please contact 17865569030, 17865569180, or 17865571035; please try not to call during class hours.
1091: A+B for Input-Output Practice (IV)
Time Limit: 1 Sec Memory Limit: 64 MB
Description
Your task is to calculate the sum of some integers.
Input
Input contains multiple test cases. Each test case contains an integer N, followed by N integers on the same line. A test case starting with 0 terminates the input, and this test case is not to be processed.
Output
For each group of input integers you should output their sum on one line, with one line of output for each line of input.
Sample Input
4 1 2 3 4
5 1 2 3 4 5
0
Sample Output
10
15
HINT
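A minimal C++ sketch of the expected control flow (an illustration, not an official reference solution): read the leading count N, stop when it is 0, and otherwise sum the N integers that follow, printing one result per line.

#include <iostream>

int main() {
    int n;
    // Each test case begins with a count N; a leading 0 terminates the input.
    while (std::cin >> n && n != 0) {
        long long sum = 0;  // long long guards against overflow on large inputs
        for (int i = 0; i < n; ++i) {
            int value;
            std::cin >> value;
            sum += value;
        }
        // One line of output for each line of input.
        std::cout << sum << '\n';
    }
    return 0;
}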
src/cobalt/bindings/README.md
Script/bindings in depth
Bindings is the bridge that allows client JavaScript code to read/modify DOM objects. In order to better understand what this means, let's start with a simple example from the V8 embedder's guide, and build up to Cobalt bindings in their entirety.
Suppose we are working with a native application that deals with Point structure, representing two coordinates as integers. Now suppose that we also wanted to allow for the application to be scripted in JavaScript, meaning that JavaScript code is capable of manipulating our project specific Point struct.
struct Point {
int x;
int y;
};
This is accomplished by using embedder only native APIs to both
• Allow JavaScript objects to represent Points, by putting a pointer to the Point an object represents inside of a native-only internal field.
• Register native functions with the JavaScript engine to say “when JavaScript code does this, instead of doing what you normally do, call this function”.
In V8, this would look something like:
Local<ObjectTemplate> point_template = ObjectTemplate::New(isolate);
point_template->SetInternalFieldCount(1);
point_template->SetAccessor(String::NewFromUtf8(isolate, "x"), GetPointX, SetPointX);
point_template->SetAccessor(String::NewFromUtf8(isolate, "y"), GetPointY, SetPointY);
Point* p = new Point{0, 0};
Local<Object> obj = point_template->NewInstance();
obj->SetInternalField(0, External::New(isolate, p));
void GetPointX(Local<String> property,
const PropertyCallbackInfo<Value>& info) {
Local<Object> self = info.Holder();
Local<External> wrap = Local<External>::Cast(self->GetInternalField(0));
void* ptr = wrap->Value();
int value = static_cast<Point*>(ptr)->x;
info.GetReturnValue().Set(value);
}
// void GetPointY(...
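The corresponding setter, registered above as SetPointX, is elided in the original excerpt; a minimal sketch in the same style as the embedder's guide might look like the following (the Int32Value conversion matches the older V8 API used above, and error handling is omitted):

void SetPointX(Local<String> property, Local<Value> value,
               const PropertyCallbackInfo<void>& info) {
  // Recover the native Point from the internal field, exactly as in
  // GetPointX, but write through the pointer instead of reading from it.
  Local<Object> self = info.Holder();
  Local<External> wrap = Local<External>::Cast(self->GetInternalField(0));
  void* ptr = wrap->Value();
  static_cast<Point*>(ptr)->x = value->Int32Value();
}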
In the above example, we first create an ObjectTemplate. This is a structure that can later be fed into V8 that lets it know the “native shape” of the point object. In particular, the SetAccessor lines are the ones that tell it to call into our GetPointX function when the property “x” is accessed on a point object. After V8 calls into the GetPointX function, we find the associated native Point object, get its x field, convert that value to a JavaScript integer, and then pass that back to JavaScript as the return value. Note that in SpiderMonkey this process is conceptually identical to V8, however with slightly different APIs.
That pattern, of intercept, convert from JavaScript to native, perform native operations, and then convert the native result back to JavaScript, is the essence of bindings. Of course, it is far more complicated, in that there are many different types, and more operations that must be performed. It is important however, to keep this general pattern in mind.
Scaling up to full Cobalt bindings
How do we scale up from what we saw before into full Cobalt bindings? The first problem that must be addressed is what should be exposed to JavaScript within each target. Instead of manually writing an ObjectTemplate for each object we want to expose to JavaScript, we write an IDL file. Then, these IDL files are collected by the IDL compiler, and then combined with a jinja2 template file, that will generate C++ header and source files for each IDL.
The jinja2 template file is responsible for implementing, in terms of the engine's native APIs, the behavior that Web IDL assigns to each of the constructs that appear in the IDL files. The details behind these responsibilities are discussed in the Web IDL spec, which has opinions on how to map Web IDL defined concepts that you see in the IDL files to precise language bindings behavior. The first section discusses this mapping without a particular language in mind, however thankfully after that there is a section about ECMAScript bindings in particular.
The majority of the template file is about translating what the spec says in the ECMAScript bindings section of that spec, and calling the appropriate native APIs given a particular interface context. So for example, when we see
// document.idl
// ...
HTMLCollection getElementsByTagName(DOMString localName);
// ...
in an idl file, our IDL compiler will render the jinja template (relevant sections shown below)
// interface.cc.template
// …
{%- for operation in operations + static_operations %}
void {{operation.idl_name}}{{overload.overload_index}}(
const v8::FunctionCallbackInfo<v8::Value>& info) {
{{ function_implementation(overload) -}}
}
{% macro function_implementation(operation) %}
v8::Isolate* isolate = info.GetIsolate();
{% if operation.is_static %}
{{ static_function_prologue() }}
// ...
{% for operation in operations + static_operations %}
v8::Local<v8::String> name = NewInternalString(isolate, "{{operation.idl_name}}");
v8::Local<v8::FunctionTemplate> method_template = v8::FunctionTemplate::New(isolate,
{{operation.idl_name}}{{"Static" if operation.is_static else ""}}Method);
method_template->RemovePrototype();
method_template->SetLength({{operation.length}});
{% if operation.is_static %}
function_template->
{% elif operation.is_unforgeable or is_global_interface %}
instance_template->
{% else %}
prototype_template->
{% endif %}
Set(name, method_template);
and produce the following native code
void getElementsByTagNameMethod(const v8::FunctionCallbackInfo<v8::Value>& info) {
v8::Isolate* isolate = info.GetIsolate();
v8::Local<v8::Object> object = info.Holder();
V8cGlobalEnvironment* global_environment = V8cGlobalEnvironment::GetFromIsolate(isolate);
WrapperFactory* wrapper_factory = global_environment->wrapper_factory();
if (!WrapperPrivate::HasWrapperPrivate(object) ||
!V8cDocument::GetTemplate(isolate)->HasInstance(object)) {
V8cExceptionState exception(isolate);
exception.SetSimpleException(script::kDoesNotImplementInterface);
return;
}
// The name of the property is the identifier.
v8::Local<v8::String> name = NewInternalString(
isolate,
"getElementsByTagName");
// ...
script/
script/ implements various utility functions, abstraction classes, and wrapper classes that are both used directly by Cobalt, and to support bindings. The majority of these are either interfaces to abstract over important engine native APIs (such as forcing garbage collection, or getting heap statistics), or interfaces to allow Cobalt to interact with ECMAScript defined types that are implemented by the JavaScript engine, such as promises.
JavaScriptEngine is an abstract isolated instance of the JavaScript engine. It corresponds directly to the types JSRuntime and v8::Isolate. These types are about everything that has to do with JavaScript before the global object gets involved, so things like the heap and builtin functions. As this class owns the JavaScript heap, Cobalt must go through it when Cobalt wants to interact with the JavaScript heap, for things such as forcing garbage collection (which is used to lower memory footprint during suspend), reporting extra memory usage implied by the engine (so for example, when a JavaScript object being alive means that xhr data that lives in native Cobalt buffers is kept alive), and gathering information about the size of the heap (used in APIs such as window.performance.memory). Additionally, having a JavaScriptEngine is a prerequisite for creating a GlobalEnvironment, which is required for JavaScript execution.
GlobalEnvironment is an abstract JavaScript execution context, which effectively means the global object itself, as well as things that are very closely related to the global object (such as script evaluation). It corresponds to the types JSContext and v8::Context. Also note that in the case of Cobalt, there is only one GlobalEnvironment per JavaScriptEngine, because we don't support features that would require two (such as iframes). Cobalt will use this class when it wants to do things such as evaluate scripts, inspect execution state (such as getting a stack trace), or interact with garbage collection. Implementations of this class are also responsible for setting up the global environment.
Additionally, it contains interfaces for JavaScript (both ES6 and Web IDL) types that Cobalt DOM code needs to interact with, such as array buffers, callback functions, and promises. One worthy of special attention is ValueHandle, which represents an opaque JavaScript value (that could be anything); it can be used when interacting with an object is not the goal, but rather just holding onto it, and then likely passing it to someone else.
Each of these JavaScript value interface types, when used by Cobalt, must be held in a special wrapper type called ScriptValue. ScriptValue provides a common type to feed into additional ScriptValue wrapper types (ScriptValue::Reference and script::Handle) that manage the lifetime of the underlying JavaScript value held in the ScriptValue. This is done because the JavaScript object itself is owned by the JavaScript engine, so management of its lifetime must be done through the native engine APIs, which Cobalt cannot access directly.
script/$engine
Then, we designate a specific area of code to be our engine specific implementation of the interfaces established in script. We commit to an engine at gyp time, and then based on that, select the appropriate set of files. This is the only area of Cobalt code (except for bindings/*/$engine, which is essentially more of the same stuff) that is allowed to include engine specific headers (files in v8/ and third_party/mozjs-45/). Maintaining this abstraction has been useful throughout our multiple JavaScript engine migrations over the years.
A large portion of script/$engine is filling in the types discussed in the previous section. So V8cEngine implements JavaScriptEngine by wrapping a v8::Isolate, and V8cGlobalEnvironment implements GlobalEnvironment by wrapping a v8::Context. Note that these files are actually quite a bit bigger than thin wrappers over the V8 types, as they have more work to do in addition to implementing their script interfaces, such as maintaining state necessary for bindings (interface objects need to be owned somewhere), serving as a bridge between the Isolate and the Context, and dealing with garbage collection interaction (the engine-specific script::Tracer is implemented near them).
JavaScript value interface type implementations follow the pattern of creating a concrete implementation that wraps an appropriate v8::Value. They will also typically have a Create function if it makes sense to build them in Cobalt. In the case where they don't have a Create function (such as ValueHandle), the only way to gain access to one in Cobalt is to receive it from bindings code.
Another important area in this module is the set of utility functions that exist solely for bindings. In particular, the conversion helpers are implemented here: a giant file that implements functions to convert back and forth between native Cobalt types and JavaScript values. These conversion helper functions get called in the native callback implementations for the getters, setters, and functions that we saw at the beginning of this doc (so the stuff that would go inside of GetPointX). Because these conversion helpers primarily exist for parts of bindings defined by Web IDL, they're on the more complex side (Web IDL allows many conversion details to be configured, such as whether null objects are accepted, or whether integral types should be clamped), but they are also used throughout common script/ when it makes sense.
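To illustrate the shape of these helpers, here is a simplified sketch for std::string. The signatures, the ExceptionState parameter, and the error handling are assumptions patterned on the description above, not Cobalt's exact API.

#include <string>
#include "v8.h"

// Native string -> JavaScript string.
void ToJSValue(v8::Isolate* isolate, const std::string& in,
               v8::Local<v8::Value>* out_value) {
  *out_value = v8::String::NewFromUtf8(isolate, in.c_str(),
                                       v8::NewStringType::kNormal)
                   .ToLocalChecked();
}

// JavaScript value -> native string.
void FromJSValue(v8::Isolate* isolate, v8::Local<v8::Value> value,
                 int conversion_flags, ExceptionState* exception_state,
                 std::string* out_string) {
  // |conversion_flags| would carry the Web IDL knobs described above
  // (nullability, clamping, and so on).
  v8::Local<v8::String> str;
  if (!value->ToString(isolate->GetCurrentContext()).ToLocal(&str)) {
    // The real helpers report a TypeError through |exception_state| here.
    return;
  }
  v8::String::Utf8Value utf8(isolate, str);
  out_string->assign(*utf8, utf8.length());
}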
Finally, another critical thing that takes place in script/$engine is what we call WrapperPrivate (in both SpiderMonkey and V8). This is the special type that goes inside of the native-only internal object field that was discussed in the beginning of this doc. WrapperPrivate is a bit more complicated, however, as it has to both bridge between Cobalt DOM garbage collection and the JavaScript engine's garbage collection, and allow Cobalt to manipulate garbage collection from the JavaScript side as well (as in, add JavaScript objects to the root set of reachable objects).
Testing
Bindings is tested in isolation via the bindings_test target. Because bindings is the bridge from the JavaScript engine to the DOM, bindings_test works by having an entirely separate set of IDL files that contain minimal internal logic and are meant to stress Web IDL features. This is accomplished by parameterizing which IDL files get compiled at gyp time. All other parts of the bindings build pipeline (such as the IDL compiler and jinja templates) are shared entirely between bindings_test and cobalt. Note that bindings_test lives above the script/ interface, so no engine-specific APIs can be used within the tests.
Additionally, when it is convenient to implement a test entirely within JavaScript, certain script/ and bindings/ features are tested within layout_tests and web_platform_tests (see for example, platform-object-user-properties-survive-gc.html). These serve as higher level, more end-to-end tests, that are good for testing more complex examples that also involve Cobalt's usage of script/.
Testing User Interaction
Almost every test has a pattern of visiting a route, interacting with the page (using the helpers), and checking for expected changes in the DOM.
Example:
test('root lists first page of posts', function(assert){
visit('/posts');
andThen(function() {
assert.equal(find('ul.posts li').length, 3, 'The first page should have 3 posts');
});
});
The helpers that perform actions use a global promise object and automatically chain onto that promise object if it exists. This allows you to write your tests without worrying about async behaviour your helpers might trigger.
var application;
module('Acceptance: New Post Appears First', {
beforeEach: function() {
application = startApp();
},
afterEach: function() {
Ember.run(application, 'destroy');
}
});
test('add new post', function(assert) {
visit('/posts/new');
fillIn('input.title', 'My new post');
click('button.submit');
andThen(function() {
assert.equal(find('ul.posts li:first').text(), 'My new post');
});
});
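If you find yourself repeating the same interaction across tests, you can wrap it in a custom helper that joins the same promise chain. A sketch using Ember's Test.registerAsyncHelper (the helper name and selectors here are hypothetical):

Ember.Test.registerAsyncHelper('addPost', function(app, title) {
  // Each built-in helper chains onto the shared promise, so these run in order.
  visit('/posts/new');
  fillIn('input.title', title);
  return click('button.submit');
});

test('add new post via custom helper', function(assert) {
  addPost('My new post');
  andThen(function() {
    assert.equal(find('ul.posts li:first').text(), 'My new post');
  });
});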
Testing Transitions
Suppose we have an application which requires authentication. When a visitor visits a certain URL as an unauthenticated user, we expect them to be transitioned to a login page.
export default Ember.Route.extend({
beforeModel: function() {
var user = this.modelFor('application');
if (Ember.isEmpty(user)) {
this.transitionTo('login');
}
}
});
We could use the route helpers to ensure that the user would be redirected to the login page when the restricted URL is visited.
var application;
module('Acceptance: Transitions', {
beforeEach: function() {
application = startApp();
},
afterEach: function() {
Ember.run(application, 'destroy');
}
});
test('visiting /profile', function(assert) {
visit('/profile');
andThen(function() {
assert.equal(currentRouteName(), 'login');
assert.equal(currentPath(), 'login');
assert.equal(currentURL(), '/login');
});
});
Ajax Form Validation Using Spring and DWR, Revised
In "Ajax Form Validation Using Spring and DWR," I presented a design to use Ajax to perform client-side validation by invoking an application's server-side validation logic. The whole point was to easily leverage validation logic on the server to prevent duplication of code, to reduce the time and cost of development and maintenance, and to enhance the usability of a web application. Few non-trivial software designs get it right the first time. This design has been running in production systems for over a year (its publication is approaching the ten-month mark), and has shown some limitations. This article discusses the emerged limitations and then presents an improved design that not only addresses the limitations but provides added functionality. Conceptually, the two designs are very similar, and a familiarity with the previous article, or working knowledge of Ajax, DWR, and the Spring Web MVC framework, are assumed.
Before diving into the details, let's review the general flow of an Ajax validation request. When the user changes an input field, JavaScript is used to collect and bundle the necessary form input and transmit it to the server. On the server, some validation process is invoked. The results are returned to the browser, where they are interpreted and used to update the browser's DOM to visually display or hide validation messages. The browser's Ajax call is essentially asynchronously submitting a custom form, a JavaScript callback function is used to handle a custom response, and the custom response is used to dynamically update the page.
Reasons for Change
The previous design had limitations. First, to support being invoked through reflection, a design format was imposed on the Spring Validators. While creating a separate validation method for each input field is a good approach, on complex forms it led to a high number of short and therefore seemingly cluttered methods. Many readers provided feedback that they, for one reason or another, could not or would not refactor their existing Validators.
Second, the original design contained a fair amount of redundant boilerplate and hardcoded configuration. Every Ajax-validated form had to have its Validator's DWR proxy configured in the dwr-config.xml file, which became unwieldy in large applications. The JavaScript used to invoke validation had DWR proxy names embedded in the script, which required the JavaScript to be customized each time it was put in use on a new form.
Third, only single input field validation was supported. While this was acceptable for the vast majority of requirements, the fact remained that there are situations where validation of input is more complex and validation of a "dependent" input field requires the "parent" input field(s) to provide context to the validation process. Examples of this situation include validating that a user's zip code exists within their state (does 55431 exist in MN?), that a credit card number belongs to the specified card type (does the card number conform to the format of a Visa card?), and that a password confirmation input field matches what the user entered in the password input field. Since only a single input field was submitted for validation at a time, this type of complex validation was not possible.
Fourth, the previous design contained custom code that duplicated functionality available in the Spring and DWR projects.
With these weaknesses in mind, the goals of a redesign were to simplify the architecture, minimize configuration, replace custom code with existing Spring and DWR features, and support validation of multiple input fields.
Server-Side Validation Using Spring
The pain of having to configure a DWR proxy for every Validator led to the realization that the use of a single front controller would be a great improvement over a front controller per form. This would eliminate the majority of the configuration in both the Java and JavaScript layers and would lead to a much more reusable solution.
Because every Ajax validation request now runs through a single front controller, this front controller must be able to determine which Spring Web MVC Controller would have been used to handle a traditional submission from the same form. Once this Controller is known, Spring's flexible and powerful API provides access to all the functionality needed to dynamically perform the validation process.
Before we go into the details of how this is accomplished, let's quickly review how Spring Web MVC works. When configuring Spring Web MVC for use within your application, the DispatcherServlet (or similar) is configured to know which Controller is used to process a request for a given URI. A typical dispatcherServlet.xml configuration file contains something similar to:
...
<bean name="/shop/newAccount.do" class="AccountFormController">
    <property name="petStore" ref="petStore"/>
    <property name="validator" ref="accountValidator"/>
    <property name="successView" value="index"/>
</bean>
...
These mappings tell Spring Web MVC that when it receives a request URI of /shop/newAccount.do to delegate execution to the AccountFormController.
The core concept of the revised middle tier is that if the Ajax validation request included the value of the form's action attribute, the validation front controller could use the DispatcherServlet's configuration mappings to programmatically determine which Controller would be used to handle a normal form submission. Just as the DispatcherServlet receives a traditional form submission to an action URI of /shop/newAccount.do and delegates to the AccountFormController, the validation front controller can determine that a validation request that includes /shop/newAccount.do should be handled by AccountFormController as well. The following code from SpringAjaxController does just that:
...
protected BaseCommandController getController(String actionUri) {
    BaseCommandController baseCommandController = null;
    try {
        // Attempt to find the controller by bean name
        baseCommandController = (BaseCommandController) applicationContext.getBean(actionUri);
        LOG.debug("Found baseCommandController (by bean name): " + baseCommandController);
    } catch (NoSuchBeanDefinitionException nsbde) {
        LOG.debug("BaseCommandController not found by bean name, " + "searching AbstractUrlHandlerMappings");
    }
    if (baseCommandController == null) {
        Map<String, AbstractUrlHandlerMapping> map = applicationContext.getBeansOfType(AbstractUrlHandlerMapping.class);
        for (String mappingName : map.keySet()) {
            AbstractUrlHandlerMapping mapping = map.get(mappingName);
            Map<String, BaseCommandController> handleMap = mapping.getHandlerMap();
            baseCommandController = handleMap.get(actionUri);
            if (baseCommandController != null) {
                LOG.debug("Found baseCommandController (by " + "AbstractUrlHandlerMapping): " + baseCommandController);
                break;
            }
        }
    }
    return baseCommandController;
}
...
Although updated to take advantage of advanced Spring functionality, from this point the new design follows the same general process as the old design: dynamically instantiate an instance of the command class, bind the request input to the new command class instance, invoke all Validators associated with the Controller, and return the results to the browser. This code is also found in the SpringAjaxController:
...
LOG.debug("Instantiating command");
Class<Object> commandClass = controller.getCommandClass();
Object command = BeanUtils.instantiateClass(commandClass);

LOG.debug("Populating command object with request values");
BeanWrapper beanWrapper = new BeanWrapperImpl(command);
beanWrapper.setPropertyValues(nameValuePairsMap);

LOG.debug("Invoking validators");
String commandName = controller.getCommandName();
Errors errors = new BindException(command, commandName);
Validator validators[] = controller.getValidators();
for (Validator validator : validators) {
    ValidationUtils.invokeValidator(validator, command, errors);
}

// For each input, get the first error message and assemble
Locale locale = LocaleContextHolder.getLocale();
LOG.debug("Building output");
for (String inputId : nameValuePairsMap.keySet()) {
    String inputValue = nameValuePairsMap.get(inputId);
    String unqualifiedInputId = StringUtils.unqualify(inputId);
    String args[] = { unqualifiedInputId, inputValue };
    String message = validationMessageFormatter.getFieldErrorMessage(errors, inputId, args, locale);
    if (message == null) {
        // There was no validation message, return an empty String
        message = "";
    }
    resultMap.put(inputId, message);
}
return resultMap;
After instantiating an instance of the command class, the input values from the browser must be bound to this instance. Although the binding process is still manual, in the sense that Spring Web MVC does not do it for us, the use of Spring's BeanWrapper and its BeanWrapperImpl concrete implementation takes care of all the actual work, including the binding of multiple inputs. Because the BeanWrapper supports nested paths, the input ids and values transmitted from the browser can be used to populate complex domain objects that back the form. For example, if the validation request contained "user.firstname=Ted&user.lastname=Anderson&user.address.zipCode=55311", the BeanWrapper would follow JavaBean naming conventions and would set the firstname property of a User object to the value Ted, the lastname property to the value Anderson, and 55311 would be set to the zipCode property of the User's underlying Address object.
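To make the nested-path binding concrete, here is a minimal sketch. The User and Address classes are hypothetical JavaBeans with standard getters and setters; note that the nested Address object must already be instantiated for the nested path to resolve.

import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeanWrapperImpl;

public class BindingSketch {
    public static User bind() {
        // Wrap a fresh command/domain object.
        BeanWrapper wrapper = new BeanWrapperImpl(new User());
        wrapper.setPropertyValue("firstname", "Ted");
        wrapper.setPropertyValue("lastname", "Anderson");
        // Nested path: equivalent to user.getAddress().setZipCode("55311").
        wrapper.setPropertyValue("address.zipCode", "55311");
        return (User) wrapper.getWrappedInstance();
    }
}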
With the service layer to process validation messages in place, we next need to discuss how communication with the server has been revised. In the next section, you will see how DWR is used to expose and invoke the validation front controller.
Ajax Using DWR
As with the previous design, Direct Web Remoting (DWR) is used for Ajax communication between the client and middle tiers. DWR works by dynamically generating JavaScript based on Java classes. Using a servlet and the necessary JavaScript infrastructure, JavaScript calls made in the browser are transparently sent to the server and invoked on the Java class, with the results returned to the browser and available in JavaScript. This is how server-side logic--in this case, Spring Web MVC validation logic--is exposed to the client through Ajax.
With the goal of simplifying configuration, the updated design takes advantage of the newly released DWR 2.0 and its optional annotation-based configuration. Written by Maik Schreiber, the annotations can be used as a replacement for, or in combination with, the dwr.xml configuration file used in DWR 1.x. By using annotations, the problem of a large and potentially bloated dwr.xml configuration file can be avoided.
As described in the DWR annotation documentation, configuring your application to use DWR annotations is a three-step process. First, you must specify the DWR controller servlet in your web.xml file. Second, the fully qualified class name of each class that is to be exposed through DWR needs to be added to a comma-separated list in the web.xml file (a sketch of this wiring follows the class example below). Third, each of these classes needs to be decorated with DWR annotations. To do this, begin by decorating each class with a @RemoteProxy annotation. By default, the class name is used as the JavaScript scripting name (the name of the JavaScript proxy object). It's good practice to expose as little information as possible about the Java layer in JavaScript, and the name attribute of the @RemoteProxy annotation is used to explicitly specify the scripting name of the JavaScript proxy object. Because the class we want to expose to JavaScript is a Spring-managed bean, DWR also needs to be instructed to use a SpringCreator and to locate the bean in Spring's ApplicationContext by a specific name. Finally, decorate each method to be made available through remote access with a @RemoteMethod annotation. Naturally, methods without this annotation will not be remotely accessible. Let's see an example of the updated SpringAjaxController:
@RemoteProxy(name = "AjaxFormValidatorJS",
             creator = SpringCreator.class,
             creatorParams = @Param(name = "beanName", value = "ajaxFormValidator"))
public class SpringAjaxController implements ApplicationContextAware {
    ...
    @RemoteMethod
    public Map<String, String> validateString(String formActionUri, String inputIdValuePairs) {
        Map resultMap = new HashMap();
        BaseCommandController controller = getController(formActionUri);
        ...
    }
    ...
}
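For reference, here is a sketch of the web.xml wiring from steps one and two; the URL pattern and package name are assumptions.

<!-- DWR controller servlet with annotation-based configuration. -->
<servlet>
  <servlet-name>dwr-invoker</servlet-name>
  <servlet-class>org.directwebremoting.servlet.DwrServlet</servlet-class>
  <init-param>
    <!-- Comma-separated list of annotated classes to expose. -->
    <param-name>classes</param-name>
    <param-value>com.example.validation.SpringAjaxController</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/*</url-pattern>
</servlet-mapping>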
You can see that we're instructing DWR to use a SpringCreator, that we want to locate the bean by a bean name, and that the bean name we're looking for in the Spring ApplicationContext is ajaxFormValidator. DWR will generate a JavaScript object named AjaxFormValidatorJS to proxy the Java-based instance on the server. This object will have a validateString() function that, when invoked, executes the method of the same name on the server.
It's worth noting that decorating classes with DWR annotations creates an otherwise unnecessary compile-time dependency between the classes and DWR. This could become problematic if you want to reuse your Java layer on a project or in an environment where DWR cannot be used. While this will most likely be a small and academic issue for most developers, it's still a design issue to be aware of.
Once the validateString() method has completed, the Map of validation message(s) must be returned to the browser in a format usable in JavaScript. The previous design manually assembled the results into a String of name-value pairs and transmitted it to the browser, where it was manually parsed. Besides providing an excellent mechanism to invoke Java from JavaScript (and vice versa), DWR also assists in converting Java objects into a representation that can easily be used within JavaScript. Because the validateString() method returns a Map, DWR will use a built-in Converter to transparently convert the map into a JavaScript array. As you'll see in the next section, this array will be used by a callback function to dynamically update the page with validation error messages.
Controlling the Process with JavaScript
Although it has been updated to include the ability to submit multiple input id and value pairs, to eliminate hardcoded DWR proxy references, and to include the value of the form's action attribute in the Ajax request, conceptually the JavaScript used to support Ajax validation functions the same as the original version. Each Ajax-validated form input has an onChange() event listener registered that will invoke the validate() function when the value of that input field has been changed.
Let's look at the details:
<script type='text/javascript'>
function validate(inputArray) {
    var request = "";
    var formAction = "";
    if (inputArray.length == null) {
        <%-- Submit a single input field and value --%>
        request = formatInput(inputArray.id, inputArray.value);
        formAction = getFormAction(inputArray.id);
    } else {
        <%-- Submit multiple input field and value pairs --%>
        for (var i = 0; i < inputArray.length; i++) {
            var input = document.getElementById(inputArray[i].id);
            var nameValuePair = formatInput(input.id, input.value);
            request = request + nameValuePair;
            if (i < inputArray.length - 1) {
                request = request + "&";
            }
        }
        formAction = getFormAction(inputArray[0].id);
    }
    <%-- Invoke server-side logic --%>
    AjaxFormValidatorJS.validateString(formAction, request, handleValidationResponse);
}

function handleValidationResponse(response) {
    <%-- The response is an array of id/message value pairs --%>
    for (inputId in response) {
        var errorElementId = inputId + ".errors";
        var validationMessage = response[inputId];
        dwr.util.setValue(errorElementId, validationMessage);
    }
}

function getFormAction(inputId) {
    var currentElement = document.getElementById(inputId);
    while (currentElement != null) {
        if (currentElement.tagName.toLowerCase() == "form") {
            <%-- Drop the http://servername --%>
            var formAction = "";
            var locationFragments = currentElement.action.split("/");
            for (var i = 4; i < locationFragments.length; i++) {
                formAction = formAction + "/" + locationFragments[i];
            }
            return formAction;
        } else {
            currentElement = currentElement.parentNode;
        }
    }
}
...
The validate() function is responsible for initiating the validation process; its input will be either a single form input element or an array of form input elements. The length property of the argument indicates whether the input is a single input element or an array of input elements. In the case where the argument is a single form input element, the element's id and value are assembled into a single name-value pair. In the case where the argument is an array of form input elements, the id and value of each element in the array are assembled into a single name-value pair. Regardless of which case is executed, the end result is a String containing the id-value pairs of each form element, which will be transmitted to the server for validation. It is this ability to process either an individual element or an array of elements that provides client-side support for multi-input validation.
As we've seen from previous sections, the server-side validation logic needs the value of the form's action attribute included in the request. Looking at the previous sample code, the getFormAction() function starts from the input element that fired the onChange() event and navigates up the browser's DOM until the form element is found. With a reference to the form element, the form's action attribute is read and then included as part of the validation request transmitted to the server. By dynamically determining the value, the JavaScript is much more flexible and reusable. As such, this script will work with any form and can be globally included on all pages (tools such as Tiles or SiteMesh make this particularly easy).
Using the DWR-generated AjaxFormValidatorJS object, the request String (composed of the input name-value pairs and form action attribute value) is sent to the server for validation and the browser awaits a response.
When the response is returned from the server, the handleValidationResponse() callback function is invoked with the server-generated array of DOM element ids and validation messages. Every input element sent to the server will have a corresponding entry in the response array. The handleValidationResponse() function iterates through the array and passes each element id and validation message to DWR's dwr.util.setValue() function, which updates the particular element with the validation message. Browser DOM changes take effect immediately, and it is this show/hide effect that gives the appearance of client-side validation and greatly enhances the user experience of the web application.
Conclusion
This article discussed weaknesses in a previously presented design allowing easy integration of Ajax form validation into web-based applications. After discussing the weaknesses, an evolved but simplified design was presented that used a single front controller, DWR annotations and Converters, and updated JavaScript to dramatically reduce configuration and maximize reusability. By replacing custom code with Spring functionality, the code is not only simplified but now supports multiple input validation. Finally, by no longer imposing a design format on Spring Validators, existing application code can be Ajax-enabled without refactoring.
django.contrib.admin Example Code
The Django admin is an automatically-generated user interface for Django models. The admin interface can be heavily customized and the code examples below can help you understand how to implement some of the trickier parts of customization.
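All of the examples below build on the same minimal pattern: subclass admin.ModelAdmin and register it for a model. As a baseline sketch (the Book model and its fields are hypothetical):

# admin.py
from django.contrib import admin

from .models import Book  # hypothetical model


@admin.register(Book)
class BookAdmin(admin.ModelAdmin):
    list_display = ['title', 'published']  # columns on the change list page
    search_fields = ['title']              # adds a search box
    list_filter = ['published']            # adds a sidebar filter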
Example 1 from django-oscar
django-oscar (project website) is a framework for building e-commerce sites on top of Django. The code for the project is available open source under a custom license written by Tangent Communications PLC.
django-oscar / src / oscar / apps / address / admin.py
# admin.py
from django.contrib import admin
from oscar.core.loading import get_model
class UserAddressAdmin(admin.ModelAdmin):
readonly_fields = ('num_orders_as_billing_address', 'num_orders_as_shipping_address')
class CountryAdmin(admin.ModelAdmin):
list_display = [
'__str__',
'display_order'
]
list_filter = [
'is_shipping_country'
]
search_fields = [
'name',
'printable_name',
'iso_3166_1_a2',
'iso_3166_1_a3'
]
admin.site.register(get_model('address', 'useraddress'), UserAddressAdmin)
admin.site.register(get_model('address', 'country'), CountryAdmin)
Example 2 from heritagesites
heritagesites is a Django-based web app with a MySQL backend that displays UNESCO heritage sites. The project code is open source under the MIT license.
heritagesites / heritagesites / admin.py
# admin.py
from django.contrib import admin
import heritagesites.models as models
@admin.register(models.CountryArea)
class CountryAreaAdmin(admin.ModelAdmin):
fields = [
'country_area_name',
'iso_alpha3_code',
'm49_code',
'location',
'dev_status'
]
list_display = [
'country_area_name',
'location',
'iso_alpha3_code',
'm49_code',
'dev_status'
]
list_filter = ['location', 'dev_status']
@admin.register(models.DevStatus)
class DevStatusAdmin(admin.ModelAdmin):
fields = ['dev_status_name']
list_display = ['dev_status_name']
ordering = ['dev_status_name']
@admin.register(models.HeritageSite)
class HeritageSiteAdmin(admin.ModelAdmin):
fieldsets = (
(None, {
'fields': (
'site_name',
'heritage_site_category',
'description',
'justification',
'date_inscribed'
)
}),
('Location and Area', {
'fields': [
(
'longitude',
'latitude'
),
'area_hectares',
'transboundary'
]
})
)
list_display = (
'site_name',
'date_inscribed',
'area_hectares',
'heritage_site_category',
'country_area_display'
)
list_filter = (
'heritage_site_category',
'date_inscribed'
)
@admin.register(models.HeritageSiteCategory)
class HeritageSiteCategoryAdmin(admin.ModelAdmin):
fields = ['category_name']
list_display = ['category_name']
ordering = ['category_name']
@admin.register(models.IntermediateRegion)
class IntermediateRegionAdmin(admin.ModelAdmin):
fields = ['intermediate_region_name', 'sub_region']
list_display = ['intermediate_region_name', 'sub_region']
ordering = ['intermediate_region_name']
@admin.register(models.Region)
class RegionAdmin(admin.ModelAdmin):
fields = ['region_name', 'planet']
list_display = ['region_name', 'planet']
ordering = ['region_name', 'planet']
@admin.register(models.SubRegion)
class SubRegionAdmin(admin.ModelAdmin):
fields = ['sub_region_name', 'region']
list_display = ['sub_region_name', 'region']
ordering = ['sub_region_name']
@admin.register(models.Planet)
class Planet(admin.ModelAdmin):
"""New class added as a result of Mtg 5 database refactoring.
"""
fields = ['planet_name', 'unsd_name']
list_display = ['planet_name', 'unsd_name']
ordering = ['planet_name', 'unsd_name']
@admin.register(models.Location)
class Location(admin.ModelAdmin):
"""New class added as a result of Mtg 5 database refactoring.
"""
fields = ['planet', 'region', 'sub_region', 'intermediate_region']
list_display = ['planet', 'region', 'sub_region', 'intermediate_region']
ordering = ['planet', 'region', 'sub_region', 'intermediate_region']
Example 3 from viewflow
viewflow (project website) is a reusable workflow code library for organizing business logic in complex web applications. The code for the project is available under the GNU AGPL license.
viewflow / viewflow / admin.py
# admin.py
from django.contrib import admin, auth
from viewflow.models import Process, Task
class TaskInline(admin.TabularInline):
"""Task inline."""
model = Task
fields = ['flow_task', 'flow_task_type', 'status',
'token', 'owner']
readonly_fields = ['flow_task', 'flow_task_type', 'status',
'token']
def has_add_permission(self, request):
"""Disable manually task creation."""
return False
def has_delete_permission(self, request, obj=None):
"""Disable task deletion in the process inline."""
return False
class ProcessAdmin(admin.ModelAdmin):
"""List all of viewflow process."""
icon = '<i class="material-icons">assignment</i>'
actions = None
date_hierarchy = 'created'
list_display = ['pk', 'created', 'flow_class', 'status',
'participants']
list_display_links = ['pk', 'created', 'flow_class']
list_filter = ['status']
readonly_fields = ['flow_class', 'status', 'finished']
inlines = [TaskInline]
def has_add_permission(self, request):
"""Disable manually process creation."""
return False
def participants(self, obj):
"""List of users performed tasks on the process."""
user_ids = obj.task_set.exclude(owner__isnull=True).\
values('owner')
USER_MODEL = auth.get_user_model()
username_field = USER_MODEL.USERNAME_FIELD
users = USER_MODEL._default_manager.filter(pk__in=user_ids).\
values_list(username_field)
return ', '.join(user[0] for user in users)
class TaskAdmin(admin.ModelAdmin):
"""List all of viewflow tasks."""
icon = '<i class="material-icons">assignment_turned_in</i>'
actions = None
date_hierarchy = 'created'
list_display = ['pk', 'created', 'process', 'status',
'owner', 'owner_permission', 'token',
'started', 'finished']
list_display_links = ['pk', 'created', 'process']
list_filter = ['status']
readonly_fields = ['process', 'status', 'flow_task', 'started',
'finished', 'previous', 'token']
def has_add_permission(self, request):
"""Disable manually task creation."""
return False
admin.site.register(Process, ProcessAdmin)
admin.site.register(Task, TaskAdmin)
Example 4 from django-filer
django-filer (project documentation) is a file management library for uploading and organizing files and images in Django's admin interface.
The project's code is available under the BSD 3-Clause "New" or "Revised" open source license.
django-filer / filer / admin / fileadmin.py
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from django import forms
from django.contrib.admin.utils import unquote
from django.http import HttpResponseRedirect
from django.urls import reverse
from django.utils.safestring import mark_safe
from django.utils.translation import ugettext as _
from .. import settings
from ..models import File
from .permissions import PrimitivePermissionAwareModelAdmin
from .tools import AdminContext, admin_url_params_encoded, popup_status
class FileAdminChangeFrom(forms.ModelForm):
class Meta(object):
model = File
exclude = ()
class FileAdmin(PrimitivePermissionAwareModelAdmin):
list_display = ('label',)
list_per_page = 10
search_fields = ['name', 'original_filename', 'sha1', 'description']
raw_id_fields = ('owner',)
readonly_fields = ('sha1', 'display_canonical')
form = FileAdminChangeFrom
@classmethod
def build_fieldsets(cls, extra_main_fields=(), extra_advanced_fields=(),
extra_fieldsets=()):
fieldsets = (
(None, {
'fields': (
'name',
'owner',
'description',
) + extra_main_fields,
}),
(_('Advanced'), {
'fields': (
'file',
'sha1',
'display_canonical',
) + extra_advanced_fields,
'classes': ('collapse',),
}),
) + extra_fieldsets
if settings.FILER_ENABLE_PERMISSIONS:
fieldsets = fieldsets + (
(None, {
'fields': ('is_public',)
}),
)
return fieldsets
def response_change(self, request, obj):
"""
Overrides the default to be able to forward to the directory listing
instead of the default change_list_view
"""
if (
request.POST
and '_continue' not in request.POST
and '_saveasnew' not in request.POST
and '_addanother' not in request.POST
):
# Popup in pick mode or normal mode. In both cases we want to go
# back to the folder list view after save. And not the useless file
# list view.
if obj.folder:
url = reverse('admin:filer-directory_listing',
kwargs={'folder_id': obj.folder.id})
else:
url = reverse(
'admin:filer-directory_listing-unfiled_images')
url = "{0}{1}".format(
url,
admin_url_params_encoded(request),
)
return HttpResponseRedirect(url)
return super(FileAdmin, self).response_change(request, obj)
def render_change_form(self, request, context, add=False, change=False,
form_url='', obj=None):
info = self.model._meta.app_label, self.model._meta.model_name
extra_context = {'show_delete': True,
'history_url': 'admin:%s_%s_history' % info,
'is_popup': popup_status(request),
'filer_admin_context': AdminContext(request)}
context.update(extra_context)
return super(FileAdmin, self).render_change_form(
request=request, context=context, add=add, change=change,
form_url=form_url, obj=obj)
def delete_view(self, request, object_id, extra_context=None):
"""
Overrides the default to enable redirecting to the directory view after
deletion of a image.
we need to fetch the object and find out who the parent is
before super, because super will delete the object and make it
impossible to find out the parent folder to redirect to.
"""
try:
obj = self.get_queryset(request).get(pk=unquote(object_id))
parent_folder = obj.folder
except self.model.DoesNotExist:
parent_folder = None
if request.POST:
# Return to folder listing, since there is no usable file listing.
super(FileAdmin, self).delete_view(
request=request, object_id=object_id,
extra_context=extra_context)
if parent_folder:
url = reverse('admin:filer-directory_listing',
kwargs={'folder_id': parent_folder.id})
else:
url = reverse('admin:filer-directory_listing-unfiled_images')
url = "{0}{1}".format(
url,
admin_url_params_encoded(request)
)
return HttpResponseRedirect(url)
return super(FileAdmin, self).delete_view(
request=request, object_id=object_id,
extra_context=extra_context)
def get_model_perms(self, request):
"""
It seems this is only used for the list view. NICE :-)
"""
return {
'add': False,
'change': False,
'delete': False,
}
def display_canonical(self, instance):
canonical = instance.canonical_url
if canonical:
return mark_safe('<a href="%s">%s</a>' % (canonical, canonical))
else:
return '-'
display_canonical.allow_tags = True
display_canonical.short_description = _('canonical URL')
FileAdmin.fieldsets = FileAdmin.build_fieldsets()
Example 5 from gadget-board
gadget-board is a Django, Django REST Framework (DRF) and Angular web application that is open source under the Apache2 license.
gadget-board / web / authentication / admin.py
from django.contrib import admin
from .models import Account
@admin.register(Account)
class AccountAdmin(admin.ModelAdmin):
readonly_fields = ('created_at','updated_at',)
Q:
How can I use operators to perform operations on binary or hexadecimal numbers in PHP?
I'm new to PHP and currently learning about operators. I've come across binary and hexadecimal numbers in my code, and I understand that they use a different number system than decimal numbers. However, I'm not quite sure how to perform operations on binary or hexadecimal numbers using operators in PHP.
For example, let's say I have a binary number "1010" and I want to perform addition or subtraction on it. How can I go about doing that using PHP operators? Similarly, how can I perform operations on hexadecimal numbers such as "1A"?
I would appreciate any guidance or examples that can help me understand how operators can be used with binary or hexadecimal numbers in PHP.
All Replies
fkovacek
User 1: Hey there! I remember grappling with the same question when I first started coding in PHP. Working with binary and hexadecimal numbers may seem a bit daunting at first, but once you get the hang of it, it's pretty straightforward.
To perform operations on binary or hexadecimal numbers in PHP, you can use some specific operators and functions. Let's start with binary numbers.
To perform addition or subtraction on binary numbers, you can make use of the `+` and `-` operators. PHP automatically recognizes binary numbers if you prefix them with `0b`. For example, if you have two binary numbers `$num1 = 0b1010` and `$num2 = 0b0110`, you can add them using the `+` operator like this: `$result = $num1 + $num2;`. Similarly, subtraction can be done using the `-` operator.
When it comes to hexadecimal numbers, you also have dedicated operators and functions available. PHP identifies hexadecimal numbers if you prefix them with `0x`. So, if you have two hexadecimal numbers `$hex1 = 0x1A` and `$hex2 = 0x0F`, you can add them using the `+` operator: `$result = $hex1 + $hex2;`.
Alternatively, if you need to perform more complex operations on binary or hexadecimal numbers, PHP provides helpful functions like `bindec()`, `decbin()`, `hexdec()`, and `dechex()`. These functions can convert binary to decimal, decimal to binary, hexadecimal to decimal, and decimal to hexadecimal respectively. By utilizing these functions, you can easily perform arithmetic operations on binary or hexadecimal numbers.
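Here's a quick snippet pulling those together (the values are arbitrary):

<?php
// Binary and hexadecimal literals behave like ordinary integers:
$a = 0b1010;               // binary literal, decimal 10
$b = 0b0110;               // decimal 6
var_dump($a + $b);         // int(16)
var_dump(decbin($a - $b)); // string(3) "100"

$hex = 0x1A + 0x0F;        // hexadecimal literals, 26 + 15
var_dump(dechex($hex));    // string(2) "29"

// Converting user-supplied strings first:
var_dump(bindec('1010') + hexdec('1A')); // int(36)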
I hope this explanation helps you understand how to use operators to perform operations on binary and hexadecimal numbers in PHP. If you have any further questions, feel free to ask!
reid92
User 2: Greetings! I faced a similar dilemma while working with binary and hexadecimal numbers in PHP. Let me share with you what I've learned.
In PHP, performing operations on binary numbers is simplify thanks to the `bcmath` extension. This extension provides functions like `bcadd()`, `bcsub()`, `bcmul()`, and `bcdiv()` specifically designed for arbitrary precision arithmetic, including binary numbers. You can use these functions to add, subtract, multiply, and divide binary numbers with ease.
For instance, if you have two binary numbers `$bin1 = '1010'` and `$bin2 = '0110'`, you can add them using `bcadd($bin1, $bin2)`. The result will be returned as a binary string. Similarly, you can subtract, multiply, or divide binary numbers using the respective `bc` functions.
Now, when it comes to hexadecimal numbers, PHP provides built-in functions such as `hexdec()`, `dechex()`, `bcadd()`, and `bcsub()` which are quite handy. To add or subtract hexadecimal numbers, you can convert them to decimal using `hexdec()` function, then perform the calculations using the standard arithmetic operators (`+`, `-`, `*`, `/`), and finally convert the result back to hexadecimal using `dechex()`.
For example, if you have two hexadecimal numbers `$hex1 = '1A'` and `$hex2 = '0F'`, you can add them as follows:
$dec1 = hexdec($hex1);
$dec2 = hexdec($hex2);
$result = $dec1 + $dec2;
$finalResult = dechex($result);
By utilizing these functions and techniques, you can effortlessly perform operations on binary and hexadecimal numbers in PHP.
I hope this insight aids you in effectively using operators to operate on binary and hexadecimal numbers. If you have any more queries, feel free to ask!
The VisAO Camera
ShutterControlDioclient.h
/************************************************************
* ShutterControlDioclient.h
*
* Author: Jared R. Males ([email protected])
*
* Generic shutter controller as a client of the dioserver
*
* Developed as part of the Magellan Adaptive Optics system.
************************************************************/

/** \file ShutterControlDioclient.h
 * \author Jared R. Males ([email protected])
 * \brief Declarations for the shutter controller as a client of the dioserver.
 *
 */

#ifndef __ShutterControlDioclient_h__
#define __ShutterControlDioclient_h__

#include "ShutterControl.h"
#include "libvisao.h"
#include "VisAOApp_standalone.h"

#include "profileutils.h"

#include <math.h>
#include <time.h>
#include <sys/time.h>

namespace VisAO
{

class ShutterControlDioclient : public ShutterControl, public VisAOApp_standalone
{
public:
   ShutterControlDioclient(int argc, char **argv) throw (AOException);
   ShutterControlDioclient(std::string name, const std::string &conffile) throw (AOException);

   ~ShutterControlDioclient();

protected:
   int dio_ch_set; ///<The dioserver channel used for setting shutter state (open or shut)
   int dio_ch_get; ///<The dioserver channel used for getting shutter state (open or shut)
   std::string diofifo_path; ///<The base path of the dioserver fifos

   int initialize_ShutterControl() throw (AOException);

   int DO_OPENSHUT;
   char * glob_seqmsg;

   int power_outlet; ///<Configuration variable, setting which power outlet to monitor
   int *powerOutlet; ///<The power strip outlet controlling shutter power.

   power_status_board * psb;

   int getPowerStatus();

   sharedim_stack<char> sis; ///< The shared memory ring buffer for timestamp storage
   sharedim<char> * sim; ///< Pointer to a shared memory "image"

   int shmem_key; ///< The key for the ring buffer shared memory
   int num_timestamps; ///< The number of shared "images" available in the ring buffer

   int initSaveInterval;
   uint64 cycleNo;

   //Data logger
   pthread_t logger_th; ///<Thread which periodically logs data in the sharedim_stack data buffer
   double logger_pause; ///<Pause time of the logger thread, seconds
   double data_log_time_length; ///<Time duration of a shutter log file, seconds

   std::string save_path; ///<Path, relative to VISAO_ROOT, where to save data.

   ///Generate a unique data file name, using the standard VisAO timestamp.
   std::string getDataFileName();

public:
   virtual int Run();
   //Virtual functions, to be overridden by derived classes specific to shutter and controller
   virtual int start_ShutterControl();
   virtual int shutdown_ShutterControl();

   virtual int get_hw_state();
   std::string get_state_str();

protected:
   ///Send shutter open signal for initialization, but do not log or profile this.
   /** The shutter will almost always be open (it fails open, etc.), this is just a check,
    * so we don't want to count the cycle.
    */
   int init_shutter_open();
   virtual int do_shutter_open(void *adata);
   virtual int do_shutter_close(void *adata);

   ///Save the initialization data to disk
   int saveInit();

   ///Launches the data logger thread
   /** Sets to priority 0.
    * \retval x is the return value of pthread_create
    */
   int launchDataLogger();

public:
   ///Log data at intervals
   void dataLogger();

   std::string remote_command(std::string com, char * seqmsg = 0);
   std::string local_command(std::string com, char *seqmsg = 0);
   std::string script_command(std::string com, char * seqmsg = 0);
   std::string auto_command(std::string com, char * seqmsg);
   std::string common_command(std::string com, int calling_ctype);

   virtual int update_statusboard();

};

///Launches the data logger
/** \param Sctrl is a ShutterControlDioclient
 */
void * __launch_data_logger(void * Sctrl);

void shutter_auto_handler(int signum, siginfo_t *siginf, void *ucont);

} //namespace VisAO

#endif // __ShutterControlDioclient_h__
I've set up the SQLEXPRESS instance via the SQL Server 2008 R2 installation and added a security login with all server roles, one called 'sqluser'.
The server authentication is SQL Server and Windows Authentication mode. However, when I specify the -S option, the login fails.
There is no firewall enabled, and SQL Server even accepts connections from remote hosts.
C:\Users\user>sqlcmd -U sqluser -P qwerty -Q "Select * FROM testdb.dbo.testtable"
Output:
integer
-------
1
2
3
4
(4 rows affected)
However, when specifying 'localhost', the query fails... The question is: why?
C:\Users\user>sqlcmd -S localhost/sqlexpress -U cpt -P 1234 -Q "Select * FROM cpt.dbo.testme"
Output:
HResult 0x43, Level 16, State 1
Named Pipes Provider: Could not open a connection to SQL Server [67].
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : A network-related or instance-specific error .....
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Login timeout expired.
Replacing 'localhost' with '%COMPUTERNAME%' gives the same result, if anyone is wondering. The server is running as a LocalSystem instance.
1 Answer
You need to specify the -S switch to provide the instance name for sqlcmd to connect to, and it should be in the format "localhost\sqlexpress" instead of "localhost/sqlexpress".
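With the separator corrected, the failing command from the question becomes:

C:\Users\user>sqlcmd -S localhost\sqlexpress -U cpt -P 1234 -Q "Select * FROM cpt.dbo.testme"

The same form works with the machine name, e.g. -S %COMPUTERNAME%\sqlexpress.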
got it, thx - actually finally found out last night :) – mschr Nov 12 '12 at 15:41
glad you got it working, thanks for the answer. – Jason Cumberland Nov 12 '12 at 16:33
iT邦幫忙
Java for loops + arrays
public class Non_R_Array1 {
public static void main(String[] args) {
int[][] n=new int[3][];
n[0]=new int[] {1};
n[1]=new int[] {2, 3};
n[2]=new int[] {4, 5, 6};
for (int i = 0; i<n.length;i++){
for(int j =0;j < n[i].length;j++)
System.out.print(" "+ n[i][j]);
System.out.println();
}
}
}
On the first pass, i < 3 holds, so we enter the outer loop; then j < n[i].length holds, so we enter the inner loop, and n[0][0] prints 1.
But why doesn't j keep incrementing first on that pass? Why does it jump straight to i+1?
75105135 (iT邦新手 level 5) ‧ 2015-05-20 15:29:57
I couldn't even understand the one above, and this one is even harder = ="
If that's the case,
then you may not be suited to self-study.
I'd suggest spending the money on a teacher or a cram school;
only then will you manage to learn Java.
外獅佬 (iT邦大師 level 1) ‧ 2015-05-20 23:31:09
As I see it... the OP doesn't understand variable scope at all....
Every time the loop goes from the first level (for(i=....)) into the second level (for(j=0....)),
the variable j gets re-initialized.
1 Answer

marsyang1
(iT邦新手 level 4) ‧ 2015-05-20 21:34:58
Best answer
public class TempTest {
public static void main(String[] args) {
int[][] n = new int[3][];
n[0] = new int[]{1};
n[1] = new int[]{2, 3};
n[2] = new int[]{4, 5, 6};
for (int i = 0; i < n.length; i++) {
for (int j = 0; j < n[i].length; j++) {
System.out.println(" " + n[i][j]);
}
}
}
}
The screen prints:
1
2
3
4
5
6
I don't see i being incremented first. A loop always finishes the inner loop before continuing with the outer one. I added braces around the second for loop to make that clearer. Also, your program has an extra System.out.println() line that doesn't seem necessary. Want to go back and check your program?
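To make the iteration order concrete, here is a trace of the loop indices for the original Non_R_Array1 jagged array:

// i = 0: n[0].length == 1 -> j = 0          prints " 1",     then println()
// i = 1: n[1].length == 2 -> j = 0, 1       prints " 2 3",   then println()
// i = 2: n[2].length == 3 -> j = 0, 1, 2    prints " 4 5 6", then println()
// j is re-initialized to 0 each time the outer loop body reaches the inner loop.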
temo (iT邦新手 level 1) ‧ 2015-05-21 15:39:00
The extra System.out.println() is normal:
since the code before it uses print, println is needed to start a new line.
This code looks like the style from Liang Ge-ge's (良葛格) book....
The second loop (j) is nested inside the first loop (i), so j does get incremented inside it; but once the inner loop finishes printing, it ends and j goes back to 0,
and the run continues from the first loop.
75105135 (iT邦新手 level 5) ‧ 2015-05-30 02:21:58
Sincere thanks to both masters for the answers; I learned a lot.
CPM Homework
6-135.
Consider the equation .
1. If you graphed this equation, what shape would the graph have? How can you tell?
If there's an x and a y, and there are no exponents like x², what kind of graph does the equation make?
Remember that this equation could be written in y = mx + b form.
2. Without changing the form of the equation, find the coordinates of three points that must be on the graph of this equation. Then graph the equation on graph paper.
Make a table and substitute values for x and y until you have enough points to graph the line.
3. Solve the equation for y. Does your answer agree with your graph? If so, how do they agree? If not, check your work to find the error.
• Solve for y:
• Yes, they both have the same starting value (b) and growth (m).
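The equation from problem 6-135 is not preserved in this copy, but as a purely hypothetical illustration, with an assumed equation such as 3x + 2y = 6 the steps above look like this:

% Hypothetical example; the actual equation from the problem is not shown above.
\begin{aligned}
3x + 2y &= 6 \\
2y &= -3x + 6 \\
y &= -\tfrac{3}{2}x + 3
\end{aligned}
% growth (slope) m = -3/2, starting value (y-intercept) b = 3;
% three points on the graph: (0, 3), (2, 0), (4, -3)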
mod-inn-reach / MODINREACH-37
Create API Endpoint to CRUD a Mapping of FOLIO Tenant Libraries and Shelving Locations to INN-Reach Compatible Location Codes
Details
• Volaris Sprint 117
• 8
• Volaris
Description
Purpose/Overview:
INN-Reach uses a flat location structure that does not distinguish between branches ("libraries" in FOLIO), shelving locations ("locations" in FOLIO) or service points. Therefore, the FOLIO system administrator or other user with appropriate permissions must provide a mapping between these records in FOLIO and an INN-Reach compatible code/description combination. The INN-Reach location code/description options should be loaded/managed as reference data (MODINREACH-17) by a user with appropriate permissions and referenced by ID in the mapping model (MODINREACH-11)
Requirements/Scope:
1. Provide an endpoint via OKAPI to CRUD a mapping of FOLIO locations, locationUnits, and service points (MODINREACH-11)
2. Logical permissions:
1. All permissions
Approach:
Acceptance criteria:
• AC: A user with appropriate permissions can create, retrieve, update, and delete a mapping record, for each central server, of FOLIO location-like records (Libraries, Shelving Locations, and Service Points) to one of the specified location code/description items via OKAPI API endpoint
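The issue does not specify the payload shape; as a purely illustrative sketch (the names below are hypothetical, not the actual mod-inn-reach schema), a mapping record might carry fields like these:

// Hypothetical sketch only; not the actual mod-inn-reach data model.
import java.util.UUID;

public class InnReachLocationMapping {
    UUID id;                  // mapping record id
    UUID centralServerId;     // mappings are kept per central server
    UUID folioLibraryId;      // FOLIO library ("branch")
    UUID folioLocationId;     // FOLIO shelving location
    UUID folioServicePointId; // FOLIO service point
    UUID innReachLocationId;  // reference to the INN-Reach code/description reference data (MODINREACH-17)
}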
People: Dmytro Tkachenko (dmtkachenko), Brooks Travis (brookstravis)
Percentages, Fractions, and Decimal Values
75 percent of 400 is the same as what percent of 150?
Top Answer (Wiki User, answered 2012-12-16):
It is 200% of 150.
75% of 400 = 300
% rate = 300/150 × 100% = 200%
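As a quick check in symbols:

0.75 \times 400 = 300, \qquad \frac{300}{150} \times 100\% = 200\%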
Related Questions
75% of 400= 75% * 400= 0.75 * 400= 300
150 is 75% of what = 150 / 75% = 150 / 0.75 = 200
75% of 150= 75% * 150= 0.75 * 150= 112.5
150% of 75= 150% * 75= 1.5 * 75= 112.5
75 percent of 400 dollars is 300 dollars.
150/200 x 100 = 75 Therefore, 150 is 75 percent of 200.
25% off is the same as 75% of the price. 150 * 0.75 = 112.5
To find 75 percent of a number, multiply the number by 0.75. In this instance, 0.75 x 150 = 112.5. Therefore, 75 percent of 150 is equal to 112.5.
50 percent off 150.00 = 7550% off of 150= 50% discount applied to 150= 150 - (50% * 150)= 150 - (0.50 * 150)= 150 - 75= 75
$150 - $112.50(75%) = $37.50
It is: (150-75)/75 times 100 = 100% increase
If you have a number, for example, 100, and add 50 percent, this is equal to 100 + 50 = 150. If you then increase the number 150 by 50 percent, you will get 150 + 75 = 225. Therefore, this is not the same as doubling. (by ayaz khan)
50% of 150 = 50% * 150 = 0.5 * 150 = 75
% rate = 75% = 150/200 * 100% = 0.75 * 100% = 75%
75% of 150= 75% * 150= 0.75 * 150= 112.5
50% of 150= 50% * 150= 0.5 * 150= 75
Expressed as a percentage, 75/150 x 100 = 50 percent.
75 50% = 50/100 = 1/2 150/2 = 75
One answer: You do this by calculating 100/400 = 200%, so you take a quarter out of 200%, which = 175%, so the answer is 175%.
A different answer: 75 out of 400 is written in fraction form as 75/400. To convert to percent, divide the top number (75) by the bottom number (400) and then move the decimal 2 places to the right: 75 ÷ 400 = .1875 = 18.75%
75%= 75/100= 0.75 So, 400- 0.75= 399.25 ie, 400.00- 0.75 = 399.25
It is a FALSE statement.
75% means 75/100 = 0.750.75 * 200= 150=====
That's the same thing as one and a half of 50 one 50 is 50 half of 50 is 25 add 50 and 25 and you get 75 150% of 50 is 75
Exporting a Lucidpress file into Google Slides
1. Download your file as a PNG or JPEG file from Lucidpress
If you go to the editor in Lucidpress, there should be a set of blue buttons in the top right corner. If you don’t see the row of blue buttons in the top right corner of your editor, simply click on the blue hamburger button to expand them out. Once you download as an image, you will receive a Zip file that has each page as an image.
2. Convert PDF to Images Using Acrobat Reader or another conversion service
There are multiple ways you can convert your PDF to an image file. The easiest would be using Adobe’s Acrobat Reader.
If you have a subscription to Adobe Creative Cloud, Acrobat is included, and you can use that. I don't have access to Acrobat Reader, so, unfortunately, I won't be able to provide a visual explanation of the step-by-step process involved. But here's a video that will help you with the process.
If you have access to Acrobat Reader, here's what you need to do to convert:
1. Open the PDF file.
2. Go to the “File” menu
3. Click on the “Export to” option
4. Select “Images” and then the desired file format. Choose the quality of the images that you would want to export. I suggest going with the “JPEG” file format.
5. Select the location where you want to export the images, and click “Save”.
3. Extract the Zip Folder
As the last step, simply extract the ZIP folder that contains images of each page of your PDF document.
To do that, go to the downloaded compressed '.zip' file and right-click on it. This will open a drop-down menu. Click on the 'Extract Here' option to extract the pictures and store them in the folder where the downloaded compressed file is located.
4. Uploading Image to Google Slides
Once you have converted your PDF document to image using any of the aforementioned methods, the next step is to upload all the images to your Google Slides presentation so that you can complete the process of converting PDF to Google Slides.
Integer Overflow Classification
0x1 Truncating width conversions (widthness)
Seen from the high-level-language side, if a value exceeds the range its destination type can hold, the high-order bits are lost:
char buffer[80];
char *src = argv[1];
int len = atoi(argv[2]);
unsigned short i = len; // mov WORD PTR [ebp + i], len
if (i < 80) {
memcpy(buffer, src, len);
}
Running it:
$ ./width 65536 `python -c 'print "A"*100'`
Segmentation fault (core dumped)
0x2 Arithmetic overflow
void copy_data(int *data, unsigned int len)
{
int i;
int *buffer = malloc(len * sizeof(int)); // [1] overflow here
for (i = 0; i < len; i++) {
buffer[i] = data[i]; // [2] Out of bounds copy
}
}
The parameter to malloc is of type unsigned long; on 32-bit x86 that is 32 bits wide, so its maximum value is 0xFFFFFFFF.
If len is large enough, the multiplication overflows and a much smaller block is allocated:
copy_data("AA...", 0x40000008); // 0x40000008 * 4 > MAX_UINT; only 0x20 bytes end up allocated
0x3 Signed/unsigned confusion
Problems can arise when a signed value is used where an unsigned type is expected:
int process_data(char *data, int len)
{
char buffer[800];
if (len > sizeof(buffer)) { // [1] bypass with negative value
return -1;
}
memcpy(buffer, data, len); // [2] overflow here
return do_real_process_data(buffer, len);
}
The copy-size parameter of memcpy is an unsigned type; if the len passed in is negative, it bypasses the size check and ends up copying on the order of 4 GB of data:
process_data("AAA...", -1); // -1 => 0xFFFFFFFF
References
Basic Integer Overflow
Scanning for Peripherals
Once the CBCentralManager is up and powered on, you can create a function that scans for nearby peripherals.
Create a function called startScanning. Call the method scanForPeripherals(withServices:).
This method scans for peripherals that are advertising services. Now since the unique identifier is set up, add that reference to the method.
func startScanning() -> Void {
// Start Scanning
centralManager?.scanForPeripherals(withServices: [CBUUIDs.BLEService_UUID])
}
Now call startScanning() from centralManagerDidUpdateState so that scanning starts as soon as the app opens.
extension ViewController: CBCentralManagerDelegate {
func centralManagerDidUpdateState(_ central: CBCentralManager) {
switch central.state {
case .poweredOff:
print("Is Powered Off.")
case .poweredOn:
print("Is Powered On.")
startScanning()
case .unsupported:
print("Is Unsupported.")
case .unauthorized:
print("Is Unauthorized.")
case .unknown:
print("Unknown")
case .resetting:
print("Resetting")
@unknown default:
print("Error")
}
}
}
When you run your app, your device will start scanning for peripherals.
Discovering Peripherals
Now that scanning is started, what happens when a peripheral is discovered?
Every time a peripheral is discovered, the CBCentralManager will notify you by calling the centralManager(_:didDiscover:advertisementData:rssi:) function on its delegate.
This function provides the following information about the newly discovered peripheral:
• The discovered peripheral is recognized and can be stored as a CBPeripheral.
• The discovered peripheral has stored advertisement data.
• The current received signal strength indicator (RSSI) of the peripheral, in decibels.
Since you are interested in connecting to one peripheral, create an instance of a peripheral.
private var bluefruitPeripheral: CBPeripheral!
Implement the didDiscover delegate method. This tells the delegate that the central manager discovered a peripheral while scanning for devices.
func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral, advertisementData: [String : Any], rssi RSSI: NSNumber) {
bluefruitPeripheral = peripheral
bluefruitPeripheral.delegate = self
print("Peripheral Discovered: \(peripheral)")
print("Peripheral name: \(peripheral.name)")
print ("Advertisement Data : \(advertisementData)")
centralManager?.stopScan()
}
The implementation of this function performs the following actions:
• Sets the bluefruitPeripheral variable to the newly found peripheral.
• Sets the peripheral's delegate to self (ViewController).
• Prints the newly discovered peripheral's information in the console.
• Stops scanning for peripherals.
Next up: actually connecting to that peripheral.
The Lookup function finds matching data inside a workbook data element, either in the same table, or in an external table.
To understand how to apply the Lookup function without directly using the formula bar, see Add columns through Lookup.
All elements must be on the same data connection.
Syntax
Lookup(formula, local key 1, external key 1, [local key 2], [external key 2], ...)
This function has the following arguments:
formula (required): The formula to compute (or the target column to reference) for the row in the target element.
local key 1 (required): The column to use as a join key in the local data element.
external key 1 (required): The column to use as a join key in the target data element.
local key 2 (optional): The additional column to use as a join key in the local data element.
external key 2 (optional): The additional column to use as a join key in the target data element.
Examples
Lookup([Customers/Cust Name], [Cust Key], [Customers/Cust Key])
Data is inserted into the Sales table’s [Calc] column from Customer table’s [Cust Name] column.
These two tables are joined using a single column (join key) from each table. In this case, both join keys are named [Cust Key].
The function’s formula parameter directly references a column in the joined table.
Note: In formulas, columns from other tables should be referenced with the [table name/] prefix (e.g. [Customers/Cust Name]).
Lookup([Customers/Zip Code], [Cust Name], [Customers/Name], [Cust Key], [Customers/Cust Key])
Data is inserted into the Order Items table's [Calc] column from the Customer table's [Zip Code] column.
The function’s formula parameter directly references a column in the joined Customers table.
These two tables are joined using two sets of join keys:
• Order Items’ [Cust Name] column is joined with Customer’s [Name] column;
• Order Items’ [Cust Key] column is joined with Customer’s [Cust Key] column
Lookup(Sum([Sales Amounts/Sales Amount]), [Order Number], [Sales Amounts/Order Number])
Data is inserted into the Orders table’s [Calc] column from Sales Amount table’s [Sales Amount] column.
The function’s formula parameter uses the Sum function to aggregate values from the Sales Amount table’s [Sales Amount] column.
These two tables are joined using a single set of join keys: Orders' [Order Number] column is joined with Sales Amounts' [Order Number] column.
How to Use the Future Pattern - 2
By November 23, 2015

In "How to Use the Future Pattern" we implemented the Future pattern in plain Java; Android actually provides this mechanism as well.
Android's Future support is implemented inside Executor, so it can be used simply through an ExecutorService.
ExecutorService offers two bulk-invocation methods, invokeAll and invokeAny (both are described in "How to Use the Future Pattern").
invokeAll waits for all tasks to complete and then returns the results.
invokeAny, on the other hand, returns as soon as any one task completes: it cancels the remaining tasks and returns only the result of the task that finished.
Declare two Buttons and one TextView; one button exercises invokeAll, the other invokeAny, and the result is shown in the TextView.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent"
android:layout_height="match_parent" android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
android:paddingBottom="@dimen/activity_vertical_margin" tools:context=".MainActivity">
<Button
android:id="@+id/invoke_all"
android:text="Invoke All"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<Button
android:layout_toRightOf="@id/invoke_all"
android:id="@+id/invoke_any"
android:text="Invoke Any"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<TextView
android:layout_below="@id/invoke_all"
android:id="@+id/result"
android:text="Hello!"
android:layout_width="match_parent"
android:layout_height="wrap_content" />
</RelativeLayout>
The main program:
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MainActivity extends AppCompatActivity {
private Button sendInvokeAll;
private Button sendInvokeAny;
private TextView result;
private List<Callable<String>> tasks;
private StringBuffer strBuffer;
private ExecutorService executorService;
private final static int INVOTE_ALL = 0;
private final static int INVOTE_ANY = 1;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initData();
initView();
}
private void initView(){
sendInvokeAll = (Button) findViewById(R.id.invoke_all);
sendInvokeAny = (Button) findViewById(R.id.invoke_any);
result = (TextView) findViewById(R.id.result);
sendInvokeAll.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
executeTask(INVOTE_ALL);
}
});
sendInvokeAny.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
executeTask(INVOTE_ANY);
}
});
}
private void initData(){
tasks = new ArrayList<>();
strBuffer = new StringBuffer();
executorService = Executors.newCachedThreadPool();
}
private String emulateGetStringFromServer(int index){
String string = "task ";
try {
int r = (int)(Math.random() * 3 + 1);
Thread.sleep(r * 1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
return string + index + " is done.";
}
private void executeTask(final int flag){
new Thread(new Runnable() {
@Override
public void run() {
runOnUiThread(new Runnable() {
@Override
public void run() {
tasks.clear();
strBuffer.delete(0, strBuffer.length());
strBuffer.append("Loding..." + "\n");
result.setText(strBuffer.toString());
}
});
tasks.add(new Callable<String>() {
@Override
public String call() throws Exception {
return emulateGetStringFromServer(1);
}
});
tasks.add(new Callable<String>() {
@Override
public String call() throws Exception {
return emulateGetStringFromServer(2);
}
});
try {
if(flag == INVOTE_ALL) {
List<Future<String>> futures = executorService.invokeAll(tasks);
for(int i = 0; i < futures.size(); i++){
strBuffer.append(futures.get(i).get() + "\n");
}
} else {
String taskStr = executorService.invokeAny(tasks);
strBuffer.append(taskStr);
}
runOnUiThread(new Runnable() {
@Override
public void run() {
result.setText(strBuffer.toString());
}
});
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
}
}).start();
}
}
Above, you can see that invokeAll hands us back a List of Future objects:
List<Future<String>> futures = executorService.invokeAll(tasks);
The tasks themselves are implemented as Callables; a different argument is passed in to stand for each task's id:
tasks.add(new Callable<String>() {
@Override
public String call() throws Exception {
return emulateGetStringFromServer(1);
}
});
Here you can see that we actually use Thread.sleep to simulate a long-running task;
how long it runs depends on a random value.
private String emulateGetStringFromServer(int index){
String string = "task ";
try {
int r = (int)(Math.random() * 3 + 1);
Thread.sleep(r * 1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
return string + index + " is done.";
}
When we press the first button, all the tasks run to completion before anything is displayed:
sendInvokeAll.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
executeTask(INVOTE_ALL);
}
});
Since the second button's task durations are random, a string is returned as soon as any task completes;
you can see that sometimes task 1 finishes first and sometimes task 2 does.
sendInvokeAny.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
executeTask(INVOTE_ANY);
}
});
The execution results are shown in the screenshots of the original post (omitted here).
Source code
Fibonacci Sequence List Of Numbers
From the Fun Fact files, here is a Fun Fact at the Easy level: Fibonacci Number Formula. The Fibonacci numbers are generated by setting F0 = 0, F1 = 1, and then using the recursive equation Fn = Fn-1 + Fn-2 to get the rest.
It seems easy enough, but I don’t see it: The Fibonacci numbers Fn are defined as follows. giving a growth rate following the Fibonacci sequence. // So if there is a new crud population of 10.
Fibonacci takes its name from Leonardo Fibonacci, the 12th century Italian mathematician who discovered what is often referred to as the Golden Ratio — a sequence of numbers where each successive.
The Fibonacci sequence is a sequence of numbers defined by the. The Fibonacci sequence appears throughout the natural world, and it dates back over two.
Generating the nth Fibonacci number requires you to generate on the order of n bits. We define a lazy list corresponding to the FibonacciSequence. fibs = 0 : 1.
When we are faced with problems like finding the sum of a list of numbers, we usually try to use. Question: Implement an algorithm to find the nth number in the Fibonacci sequence. Answer: Lets use.
May 15, 2012. Compute any number in the Fibonacci Sequence easily!. Google lists 1.2 million references for “Fibonacci Series” and 2.1 million references.
About List of Fibonacci Numbers. This online Fibonacci numbers generator is used to generate first n (up to 201) Fibonacci numbers.
The Fibonacci sequence is a sequence of numbers where each number is the sum of the two previous numbers, except for the first two numbers, which are 0 and 1.
The Fibonacci numbers are also a Lucas sequence U_n(1,-1) , and are. A list of 47 generalized identities are given by Halton (1965). In terms of the Lucas.
@param args the command line arguments */ public static void main(String[] args) { //a method int num1 = 1; //first number in fibonacci sequence int num2 = 1; //second number in fibonacci sequence int.
1 2 5 13 34 89 233 610 as my output. I am missing some numbers like 3 and 8. Here is my code TITLE Fibonacci sequence with loop ; Prints first 12 numbers of fibonacci sequence with loop. INCLUDE.
My method was as follows: 1) Compile a list of all Panics, Recessions, and Depressions in US history. 2) Starting with 8, and going all the way up to 233 in the Fibonacci Sequence, add each number in.
Every number in the Fibonacci sequence (after the first two) is the sum of the two preceding numbers. Therefore, the Fibonacci numbers can be defined by the recurrence Fn = Fn-1 + Fn-2.
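As a minimal, purely illustrative Java sketch of that recurrence, printing the first ten terms:

public class Fib {
    public static void main(String[] args) {
        int n = 10;
        long a = 0, b = 1;             // F0 = 0, F1 = 1
        for (int i = 0; i < n; i++) {
            System.out.print(a + " "); // prints 0 1 1 2 3 5 8 13 21 34
            long next = a + b;         // F(n) = F(n-1) + F(n-2)
            a = b;
            b = next;
        }
    }
}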
Oct 24, 2018. The Fibonacci sequence is one of the most famous formulas in mathematics. Each number in the sequence is the sum of the two numbers that.
F(n+2) = number of binary sequences of length n that have no consecutive 0's. The Fibonacci sequence, like any additive sequence, naturally tends to be. Sequences, Correspondence and Postings to Math-Fun Mailing List, Nov 1996 to.
I know this maybe easy but I need help with creating a fibonacci sequence array from a to b. This is my code so far: def FibL(a,b): list = [] if a == 0: return 0 elif a == 1: return 1 else: return.
We’re going to round our rabbit to the nearest whole number. You’ll see as you write your rabbit. we’re now going to use all this information to create a Fibonacci sequence. After all, If you’re.
. generating the Fibonacci sequence. I would like to start at 30. Here is my program: #include <iostream> #include <conio> using namespace std; int main() { int Fibonacci, n; cout <<"Here is the.
For example: array[0] = 0 and array[1] = 1 There should be a function for when a number is added to the bag, the Fibonacci sequence is solved up to the position of the number that is added to the bag.
I guess I am not really understanding the code and how to word it properly. I am writing a program that will show the Fibonacci sequence numbers, ask the user to input the amount of numbers they want.
Dr. Sacks’s twins, with their power to see, hear, smell — somehow sense within minutes if a number was prime — would also have been on AsterCorp’s wish list. Something keeps. and what we think we.
Please excuse the inefficiency as I am a newbie to assembly. The program is supposed to calculate the first 10 numbers of the fibonacci sequence. However, the first two numbers, 0 and 1, can be.
In mathematics, the Fibonacci series is one type of number series. Number series is nothing but. Formula to form the Fibonacci sequence list: F(n) = F(n-1) + F(n-2).
Hi Sir, please help me with my code I’ve already finish the sequence. this is my part : #include <iostream.h> int main () { unsigned long int a = 0; unsigned long int b = 1; unsigned long int c; int y.
In mathematics, the Fibonacci numbers, commonly denoted Fn form a sequence, called the. lengths correspond to sequential Fibonacci numbers – by dividing the list so that the two parts have lengths in the approximate proportion φ.
Jan 24, 2019. Learn about the history and logic behind Fibonacci Numbers and their. A sequence in which each number is the sum of the two numbers that.
A super-Fibonacci sequence is a list of whole numbers with the property that, from the third term onwards, every term is the sum of all the previous terms.
Tool to compute numbers of Fibonacci. Fibonacci sequence is a sequence of integers, each term is the sum of the two previous ones.
In mathematics it’s the sequence of numbers you get when, starting with 0 and 1, the next number is always the previous two added together; in music it’s one of the most distinguished chamber groups.
The program needs to ask for n and then output the first n terms of the Fibonacci. sequence though. frrossk’s program kind of sucks anyway. >Also, can you tell me that the a and the i means in this.
The first 300 Fibonacci numbers fully factorized. Further pages have all the numbers up to the 500-th Fibonacci number with puzzles and investigations for.
We’ll call the list of fibonacci numbers li. The gotcha here is that the fibonacci sequence is defined as 1, 2, 3, 5, 8. and not 1, 1, 2, 3, 5, 8. In my haste, I disregarded that part of the problem.
Mar 22, 2013. list of Fibonacci numbers. The http://caml.inria.fr/ocaml/index.en.htmlOCaml program used to create this list can be found.
This Washington, D.C. playground is math-themed, taking its design inspiration from the Fibonacci sequence, a numeric pattern in which the next number is always the sum of the last two. The curves of.
The Fibonacci Sequence is the series of numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ... The next number is found by adding up the two numbers before it; the 2 is found by adding the two numbers before it (1 + 1). Here is a longer list:
i have been trying in vain to implement fibonacci in prolog. fibonacci number (ok , there is 2 = 1 + 1, and 1 = 1 + 0) as it also have equal previous values A and B, means there should be three.
. for calculating the numbers of the Fibonacci sequence. After that, the main function uses fib two times. The first time is to get a specific Fibonacci number, and the second time is to get a list.
Homework Help: math
Posted by pam on Friday, March 22, 2013 at 2:08pm.
Surface area of a right circular cylinder. The surface area of a right circular cylinder is given by the polynomial 2πrh + 2πr², where h is the height and r is the radius of the base. Suppose that a beverage can has a height of 6.2 inches and a radius of 1.3 inches. Evaluate the polynomial when h = 6.2 and r = 1.3 to find the area of the can. Use 3.14 for π. What is the surface area, in square inches?
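The polynomial appears garbled in the original post; assuming the standard surface-area polynomial 2πrh + 2πr² as reconstructed above, the evaluation works out to:

2\pi r h + 2\pi r^2 = 2(3.14)(1.3)(6.2) + 2(3.14)(1.3)^2 = 50.6168 + 10.6132 \approx 61.23 \text{ square inches}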
Login| Sign Up| Help| Contact|
Patent Searching and Data
Title:
DATA SYNCHRONIZATION
Document Type and Number:
WIPO Patent Application WO/2014/133489
Kind Code:
A1
Abstract:
An example for synchronizing data in accordance with aspects of the present disclosure includes monitoring a set of attributes at a plurality of devices on a network, selecting a group of data based on the monitored set of attributes for synchronization, assigning priority levels to each selected data and each device, prioritizing synchronization operations to be performed on the group of selected data based on the priority levels, and synchronizing the group of selected data in accordance with the prioritization of the synchronization.
Inventors:
SCHEER ROQUE LUIS (BR)
PORTO MAURICIO NUNES (BR)
SANTHIVEERAN SOMA SUNDARAM (US)
Application Number:
PCT/US2013/027848
Publication Date:
September 04, 2014
Filing Date:
February 27, 2013
Export Citation:
Click for automatic bibliography generation Help
Assignee:
HEWLETT PACKARD DEVELOPMENT CO (US)
International Classes:
G06F15/16; G06F11/30
Foreign References:
US8312096B22012-11-13
US20100257474A12010-10-07
US20050147130A12005-07-07
US7793002B22010-09-07
Attorney, Agent or Firm:
AZAKLI MAISAMI, Ceyda et al. (Intellectual Property Administration3404 E. Harmony Road,Mail Stop 3, Fort Collins Colorado, US)
Download PDF:
Claims:
WHAT IS CLAIMED IS:
1. A method for synchronizing data, comprising:
monitoring, by a processor, a set of attributes at a plurality of devices on a network;
selecting, by the processor, a group of data based on the monitored set of attributes for synchronization;
assigning, by the processor, priority levels to each selected data and each device;
prioritizing, by the processor, synchronization operations to be performed on the group of selected data based on the priority levels; and
synchronizing, by the processor, the group of selected data in accordance with the prioritization of the synchronization.
2. The method of claim 1 , wherein the set of attributes comprises static and dynamic attributes.
3. The method of claim 2, wherein the dynamic attributes comprises user activity, operations and actions of the system, battery power information, battery power consumption rates, storage space, wireless signal strength, and network
configuration.
4. The method of claim 1 , wherein assigning the priority level to each selected data and each device further comprises assigning the priority level dynamically based on the monitored set of attributes.
5. The method of claim 1 , wherein the priority level of each selected data and each device can be changed based on the monitored set of attributes.
6. The method of claim 1 , further comprising issuing commands to block the
synchronization based at least in part on at least one of battery power information, battery power consumption rates, storage space, wireless signal strength, and network details.
7. The method of claim 1 , wherein each group of selected data is associated with a device on the network, and data in each group identifies the data to be synchronized with the associated device.
8. The method of claim 1 , wherein monitoring the set of attributes at the plurality of devices on the network further comprises:
collecting groups of information associated with the set of attributes, each group identifying a device on the network; and
analyzing the collected groups of information.
9. The method of claim 1 , wherein selecting the group of data based on the monitored set of attributes for synchronization further comprises predicting the group of data having a likelihood of being requested by a user.
10. The method of claim 1 , wherein selecting the group of data based on the monitored set of attributes for synchronization further comprises:
receiving historical user activity; and
using the historical user activity to predict the groups of data having a likelihood of being requested by a user, wherein the user is associated with the plurality of the devices.
1 1 . The method of claim 1 , wherein prioritizing synchronization operations to be
performed on the group of selected data based on the priority level assigned to each selected data and each device further comprises:
generating a first queue associated with a first device having a first priority level, wherein the first queue comprises selected data to be transmitted to the first device; generating a second queue associated with a second device having a second priority level, wherein the second queue comprises selected data to be transmitted to the second device, and the second priority level is higher than the first priority level; and
prioritizing synchronization operations of the second queue over the first queue based on the second priority level being higher than the first priority level.
12. The method of claim 1 , wherein the set of attributes indicates an immediate request to access certain data on a certain device.
13. A system for synchronizing data, comprising:
a monitoring module to monitor a set of attributes at a plurality of devices on a network;
a prediction module to select a group of data based on the monitored set of attributes for synchronization and assign priority levels to each selected data and each device;
a data manager to prioritize synchronization operations to be performed on the group of selected data based on the priority levels; and
the data manager to synchronize the group of selected data in accordance with the prioritization of the synchronization.
14. The system of claim 13, further comprising a database to store the monitored set of attributes.
15. The system of claim 13, further comprising a prioritization manager coupled to the data manager, the prioritization manager assigning the priority level to each selected data and each device.
16. The system of claim 13, further comprising a synchronization manager coupled to the data manager, the synchronization manager blocking the synchronization operations in accordance with commands from the prediction module to block the synchronization based at least in part on at least one of battery power information, battery power consumption rates, storage space, wireless signal strength, and network details.
17. The system of claim 13,
wherein the monitoring module collects groups of information associated with the set of attributes, each group identifying a device on the network, and analyzes the collected groups of information; and
wherein the set of attributes comprises user activity, operations and actions of the system, battery power information, battery power consumption rates, storage space, wireless signal strength, and network details.
18. A non-transitory computer-readable medium comprising instructions which, when executed, cause a device to:
monitor a set of attributes at a plurality of devices on a network; select a group of data based on the monitored set of attributes for synchronization;
prioritize synchronization operations to be performed on the group of selected data based on a priority level assigned to each selected data and each device; and synchronize the group of selected data in accordance with the prioritization.
19. The non-transitory computer-readable medium of claim 18, comprising further
instructions, which when executed, cause the device to issue commands to block the synchronization based at least in part on at least one of battery power information, battery power consumption rates, storage space, wireless signal strength, and network details.
20. The non-transitory computer-readable medium of claim 18, wherein the priority level of each selected data and each device are assigned dynamically based on the monitored set of attributes.
Description:
DATA SYNCHRONIZATION
BACKGROUND
[0001] Content such as media files and documents may be created or acquired by users using multiple devices. On occasion, a user may desire to synchronize, transfer, upload or otherwise send content from one device to another. The content may be shared among users' other devices in a network in order to be readily consumed or edited when needed. For example, a user may decide to acquire a new mobile telephone with more advanced features (e.g., photo editing, advanced network settings, etc.) and desire to transfer the photos from the old mobile telephone to the new one. In another example, a user may desire to synchronize the video links on his mobile phone with the video links on his desktop computer in order to, for example, view the videos on a larger screen.
[0002] Devices and communication networks facilitate the collection and exchange of information. Such devices may include personal computer including attached peripherals, handheld/palmtop devices, portable computer, and/or the like, to collect, synchronize, transfer or exchange a variety of content, such as files, patches, graphics, and/or the like. For example, the files may contain large amount of content such as videos, photos, music, and documents. Such content may be created on or downloaded to one device and then replicated to the other devices.
[0003] Synchronization frameworks allow various devices or appliances to send content from a device or synchronization servers and/or external partners over network connectivity, such as via Internet. Synchronization may be accomplished through triggering synchronization of the content upon a user's request, periodic intervals, or in real-time. For example, the synchronization event can correspond to an expressed instruction or command issued by the user to synchronize information related to the content being presented (e.g., "record my last page read").
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Examples are described in the following detailed description and in reference to the drawings, in which:
[0005] Fig. 1 illustrates an example system in accordance with an implementation;
[0006] Fig. 2 illustrates an object sender of an example system in accordance with an implementation; [0007] Fig. 3 illustrates an example process flow diagram in accordance with another implementation; and
[0008] Fig. 4 illustrates an example process flow diagram in accordance with an implementation.
DETAILED DESCRIPTION
[0009] Various aspects of the present disclosure are generally directed to content synchronization. More specifically, various aspects of the present disclosure are generally directed to a manner by which content across various devices is synchronized. This approach allows for synchronization of content by a content synchronization framework, in which any connected device or appliance, such as a personal computer (PC), portable device, personal digital assistant (PDA), tablet, smartphone, scientific instrument, point of sales device, or the like, performs contextual synchronization over a wide variety of communication network topologies including both wired and wireless connections.
[00010] Aspects of the present disclosure described herein predict groups of data to be transmitted from one machine to another. Moreover, other aspects of the present disclosure prioritize the data within each transmission based on user activity and system information at each device. Further, other aspects identify user actions that indicate that the user require immediate access to the data on a given device and accordingly, synchronize new or modified data, available on the other devices, to that device in order to make it available just in time when the user needs it.
[00011] Among other things, this approach may present an efficient and effective manner to synchronize data and prevent unavailability of user data at the time of need in addition to preventing depletion of battery and waste of resources. This approach may also remove the need for manual user trigger, requiring the user to synchronize manually while maintaining a good user experience.
[00012] In one example in accordance with the present disclosure, a method for synchronizing data is provided. The method comprises monitoring a set of attributes at a plurality of devices on a network, selecting a group of data based on the monitored set of attributes for synchronization, assigning priority levels to each selected data and each device, prioritizing synchronization operations to be performed on the group of selected data based on the priority levels, and synchronizing the group of selected data in accordance with the prioritization of the synchronization. [00013] In another example in accordance with the present disclosure, a system is provided. The system comprises a monitoring module to monitor a set of attributes at a plurality of devices on a network, a prediction module to selecting a group of data based on the monitored set of attributes for synchronization and assign priority levels to each selected data and each device, a data manager to prioritize synchronization operations to be performed on the group of selected data based on the priority levels. The data manager also synchronizes the group of selected data in accordance with the prioritization of the synchronization.
[00014] In a further example in accordance with the present disclosure, a non- transitory computer readable medium is provided. The non-transitory computer-readable medium comprises instructions which, when executed, cause a device to (i) monitor a set of attributes at a plurality of devices on a network, (ii) select a group of data based on the monitored set of attributes for synchronization, (iii) prioritize synchronization operations to be performed on the group of selected data based on a priority level assigned to each selected data and each device, and (iv) synchronize the group of selected data in accordance with the prioritization of the synchronization.
[00015] Fig. 1 illustrates an example system 100 in accordance with an implementation. The system 100 may be a content synchronization system and may comprise a plurality of devices such as computing devices that are connected across a cloud-computing system or another network set-up. It should be readily apparent that the system 100 represents a generalized illustration and that other elements may be added or existing elements may be removed, modified, or rearranged without departing from the scope of the present disclosure. For example, while the system 100 illustrates that Fig. 1 includes devices 1 10, 120 and 130, the system may actually comprise less or more number of devices and three devices have been shown and described for simplicity.
[00016] The system 100 may be a peer-to-peer sharing environment, in which there may be distributed architecture without the need for central coordination, with user devices being at the same time both suppliers and consumers of objects. In another implementation, the system 100 may be a cloud-based sharing system, which uses computing resources (hardware and software) that are delivered as a service over a network (e.g., the Internet). The cloud-based system exposes data to be accessed by user devices as a service over a network, and accordingly the user devices may be connected to each other through the cloud. The cloud is indicated in FIG. 1 by the cloud 105. The cloud 105 may replace, supplement, or blend with features and capabilities provided by applications and software that run locally. Applications may include, for example, one or more of identity and directory services, device management and security, synchronized storage and data services across multiple devices or platforms, and services pertaining to activities and news. The cloud 105 may be provided under a variety of different business models including free, advertising-supported, and subscription-based models.
[00017] Alternatively or in addition, communications medium between the user devices may include, but is not limited to, any combination of wired and/or wireless resources. For example, communications medium may include resources provided by any combination of cable television networks, direct video broadcasting networks, satellite networks, cellular networks, wired telephony networks and/or data networks, wireless data networks (e.g., wireless local area networks, wireless personal area networks, etc.), and so forth.
[00018] The system 100 comprises the devices 1 10, 120 and 130. The device 1 10 comprises a monitoring engine (or monitoring module) 140, a log database 150, a prediction engine (prediction module) 160, an object receiver 170, an object database 180, and an object sender 190, each of which is described in greater detail below. In one implementation, an object receiver and an object sender may be data modules. In another implementation, the system 100 may comprise a queue manager and a prioritization manager (not shown in Fig. 1 ).
[00019] The devices 1 10, 120 and 130 may encompass all wired devices and wireless devices. In one implementation, the device 1 10 may comprise a desktop computer, set-top box, game console, laptop computer, pocket PC, PDA, smart phone, mobile phone, tablet, portable media players such as MP3 players (Moving Pictures Expert Group, MPEG- 1 , audio layer 3), e-book reader, handheld game device, ultra-mobile computer, or device comprising a combination of features provided therefrom.
[00020] In one implementation, a user may have the device 1 10, which may be a personal computer, the device 120, which may be a laptop computer, and the device 130, which may be a mobile device. The devices may be viewed as peer devices on a mesh where data may be moved across the devices (i.e., fetched). In another implementation, all devices may be arranged to connect to one another through the cloud 105. Depending on the implementation, security features/tools may be implemented in various ways such as by a firewall, one time passwords, encryption programs, digital certificates, user application security, etc. Various combinations of these and/or other security features may be used. In one implementation, these security approaches may be layered to provide a highly secure environment in which one device may interact with another. For example, the security features may require a user to log in before transferring data from one device to another. In other implementations, the security features may require the user to provide the user's credentials or identity which is trusted by the cloud 105 for transferring data over the cloud 105.
[00021] The devices 1 10, 120 and 130 may have differing features and capabilities. For example, the desktop computer and laptop computer may utilize different operating systems, respectively, including the Microsoft Windows operating system and the Apple Mac OS operating system. Alternatively or in addition, one device may be configured with fewer resources such as processing power, memory, and storage compared to the other devices.
[00022] For purposes of illustrative examples, the devices 1 10, 120 and 130 may correspond to, respectively, a desktop computer, a laptop and a mobile device. Each device stores objects (e.g., content). Such objects may include various forms of user perceptible content. Exemplary objects can refer to any object that can be directly or indirectly accessed by a user, including, but not limited to, multi-media data, digital images, digital video (e.g., television programming and movies), displayable text, audio data, electronic documents, electronic publications, computer-executable code, portions of the above, and/or the like. Further, the presentation of the content may vary as a function of the capabilities and configuration of the devices 1 10, 120 and 130 (e.g., a portable e-book reader vs. a mobile phone) and the format of the content (e.g., a digital publication vs. a video).
[00023] In one implementation, the devices 1 10, 120 and 130 associated with a user or a user account may have access to different representations of the content provided by content providers. For example, the device 1 10 may correspond to a desktop computer that has obtained a digital representation of content (e.g., a digital publication that can be presented on a computer) via a communication network (e.g., a wireless communication network). The device 120 may correspond to a laptop that has obtained a separate representation of the same content (e.g., a copy of the digital publication that can be presented on a laptop) via the same or a different communication network. Still further, the device 130 may correspond to a mobile device that has also obtained a separate representation of the same content (e.g., a copy of the digital publication that can be presented on a mobile device) from the same or a different communication network.
[00024] In another implementation, the devices 1 10, 120 and 130 may have different representations of the same content stored in each device. For example, the device 1 10 may correspond to a desktop computer that has a digital representation of content (e.g., a digital photograph that can be presented on a computer). The device 120 may correspond to a laptop that has a separate representation of the same content (e.g., a copy of the digital photograph that can be presented on a laptop). Still further, the device 130 may correspond to a mobile device that has also a separate representation of the same content (e.g., a copy of the photograph that can be presented on a mobile device).
[00025] In one implementation, a user of the device 1 10 may manually select content. This selection may involve the user browsing listings of content stored in the device 1 10. Alternatively or in addition, a user of the device 1 10 may request to obtain content from another device or a content provider through the cloud 105. The content provider may include any entities that can provide content for consumption by the user devices. Examples of content providers may include (but are not limited to) television broadcast stations, servers, peer-to-peer networking entities (e.g., other user devices 120, 130), and/or alike. Further, content can be variety of content such as files, patches, graphics, and/or the like. The files may include videos, audios and/or text.
[00026] The monitoring engine 140 may comprise collecting data related to operations and actions of the system, data that can be used to infer user activity in the network, as well as data that explicitly indicates activity. For example, the monitoring engine 140 may monitor content that is output by a user. Further, the monitoring engine 140 may collect data related to the presentation of content to the user via the device 1 10, and/or user content selection, which indicates content that the user has requested to receive and selected for output (e.g., viewing and/or listening). For example, the user may browse the files in the "My Project" folder on his desktop. The user may then start up a word processing application and begin to make edits to a particular file in the folder. The monitoring engine 140 may monitor such user activity on the device 1 10 and based on such activity, the monitoring engine 140 may generate user activity data, which may then be stored in the log database 150 and shared with the prediction engine 160.
[00027] Moreover, the monitoring engine 140 may monitor application data and track the data that the applications access. Applications may include any of a variety of typical applications that may run on a device to enhance productivity (e.g., word processing, spreadsheets), support communications (e.g., e-mail, web-browsing, and instant messaging), provide entertainment (e.g., games, multimedia players), and/or the like.
[00028] In another implementation, the monitoring engine 140 may monitor system information related to the device 1 10. Such system information may include various attributes of the device 1 10, including. The attributes may be static or dynamic. For example, dynamic attributes may include, but not limited to, battery power information, battery power consumption rates, memory/storage space, wireless signal strength, network details, overall device health information, device or ambient temperature values, and/or the like. The information can also include information related to the status of various input or output devices or information regarding a change in status of input or output devices. For example, the device information can include data indicative of the connection of the device 1 10 to another device (e.g., via a Bluetooth wireless connection or hardwire connection). Similarly, the device information can include an identification of any software application having functionality to present content to a user. In an illustrative implementation, the device information may be generated by hardware components (e.g., sensors) or software components on the device 1 10. Additionally, the device information may be obtained by the device 1 10 from external resources, such as attached hardware components, remote monitoring components/services or network data sources.
[00029] In one implementation, the device information can correspond to information (such as latitude and longitude information or other geographic coordinate information) from a global positioning system ("GPS") or cellular positioning system (e.g., triangulation). In one implementation, a GPS signal received from a GPS antenna may be processed by a GPS receiving part and may be input into the monitoring engine 140. The location information of the device 110 may be interpreted and translated into user behavior data that may be used to predict a set of content the user may need to access at that location. For example, the system may detect that the user is at a library, and that past user behavior shows that the user accesses the image files in a "My Photos" folder in the user's laptop when the user is at the library. Accordingly, the monitoring engine 140 may collect the information about the user's location in addition to the user's past and/or present behavior associated with the location and deliver it to the prediction engine 160. After analyzing the information, the prediction engine 160 may predict the objects the user may choose to access on the user's laptop at the library, which, for example, may be the image files in the "My Photos" folder in the user's laptop.
[00030] In another implementation, the types of data being monitored may vary based on device type. A device can identify specific types of data that should be monitored for determining synchronization events or establishing thresholds for establishing synchronization events, or both. These types of data may be classified as constraint events. For example, based on these types of data, the monitoring engine may detect patterns that indicate heavy usage of local resources (e.g., low battery, low network bandwidth, high network traffic). Accordingly, the system may switch to a resource savings mode and hold, reduce or block data transmission and data synchronization to limit the resources being utilized. For example, some devices, such as portable devices, can further limit the number of synchronization events that occur to preserve device battery life. In another example, the system 100 can specify a synchronization event for a portable device according to a minimum power level (e.g., transfer the synchronization information before the device powers off, loses communication capabilities, among other possibilities). In still another example, the system 100 can specify a synchronization event for a portable device based on a communication signal strength criterion (e.g., a minimum sustained communication signal strength).
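As a rough illustration of the threshold-based gating described above, the following sketch (in Java) checks a few monitored attributes before allowing a synchronization event; the class name, thresholds, and parameters are hypothetical assumptions, not part of the original description.

// Hypothetical sketch: gate synchronization events on monitored device constraints.
public class SyncGate {
    static final double MIN_BATTERY = 0.20;      // assumed threshold: 20% charge
    static final long MIN_BANDWIDTH_KBPS = 256;  // assumed threshold: 256 kbit/s

    // Returns true only when local resources are not under pressure.
    public static boolean allowSync(double batteryLevel, long bandwidthKbps,
                                    boolean meteredNetwork) {
        if (batteryLevel < MIN_BATTERY) return false;         // preserve battery life
        if (bandwidthKbps < MIN_BANDWIDTH_KBPS) return false; // low network bandwidth
        if (meteredNetwork) return false;                     // e.g., 3G or roaming traffic
        return true;                                          // no resource savings mode needed
    }
}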
[00031] The log database 150 may receive the collected data from the monitoring engine 140, which may implicitly indicate which object in the user's network may be currently being used or accessed by the user. Moreover, the log database 150 may develop a user history, for example, indicating that a user of the device 110 has watched two episodes of a television series two weeks in a row using the device 110. This data may be stored in the log database 150 along with the data related to the other current and past activities of the user and the device 110. In the log database 150, the data may be managed to a level of quality (measured in terms of accuracy, availability, usability, and resilience). Alternatively or in addition, the historical statistical data may be stored in a central log database 115 in the cloud 105, which may be connected to the device 110 via the prediction engine 160. In one implementation, the central log database may store logs from multiple devices (e.g., the device 120 and/or 130) in the system 100. In some implementations, the central log database and the log database 150 may be integrated into a single database, which can be combined with the device 110.

[00032] In one implementation, the monitoring engine 140 and the log database 150 may interoperate with the prediction engine 160 by monitoring user activity and system information at the device 110, as well as tracking user activity and system information in the form of historical statistics that are stored in the log database 150. In another example system, the monitoring engine 140 and the prediction engine 160 may be capable of connecting to each other directly. Alternatively or in addition, the prediction engine may be connected to a central prediction engine 125.
[00033] The prediction engine 160 may anticipate desired synchronizations of certain objects (e.g., content) between the device 110 and other devices (e.g., the devices 120 and 130) associated with the user of the device 110. In one implementation, the prediction engine 160 may generate predictive selections of content that would be desirable to the user at various devices. The generation of the predictive selections may involve making inferences based on various factors, such as previous content that was provided to the devices, and/or information provided in status information of the system 100. Further, making such inferences may involve the employment of one or more heuristics, which may be weighted in various ways, and/or data mining algorithms, which may be a set of heuristics and calculations that may create a data mining model from the data received from the log database 150. To create a model, the algorithm may first analyze the data provided by the log database 150 and determine specific types of patterns or trends. The algorithm may use the results of this analysis to define the optimal parameters for creating the mining model. These parameters may then be applied across the entire data set to extract actionable patterns and detailed statistics. Moreover, the prediction engine 160 may assign priority levels to objects (e.g., content) that are predictively selected. In some implementations, the prediction engine 160 may assign priority levels dynamically based on the attributes.
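A minimal sketch of that two-pass approach (first analyze the logged data to derive a parameter, then apply it across the full data set) is given below in Java; the mean-access-count heuristic and all names are assumptions for illustration only.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical two-pass pattern extraction over logged object-access counts.
public class AccessPatternMiner {

    // Pass 1: analyze the data to define a model parameter (here, a count threshold).
    static long deriveThreshold(Map<String, Long> accessCounts) {
        long total = 0;
        for (long count : accessCounts.values()) total += count;
        return Math.max(1, total / Math.max(1, accessCounts.size())); // mean access count
    }

    // Pass 2: apply the parameter across the entire data set to extract patterns.
    static List<String> frequentObjects(Map<String, Long> accessCounts) {
        long threshold = deriveThreshold(accessCounts);
        List<String> frequent = new ArrayList<>();
        for (Map.Entry<String, Long> e : accessCounts.entrySet()) {
            if (e.getValue() >= threshold) frequent.add(e.getKey());
        }
        return frequent;
    }
}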
[00034] In another implementation, the generation of predictive selections of content may be performed at the central prediction engine 125 as an alternative or addition to the prediction engine 160 in the device 110. The central prediction engine 125 may receive data related to the user behavior across all devices (e.g., devices 110, 120 and 130) and the system 100 from the log database 150 and the central log database 115, and generate predictive selections of content. Such predictive selections of content may be communicated to the prediction engine 160 or other associated local prediction engines.

[00035] In one implementation, the synchronization information can be propagated from the prediction engine 160 to the other devices, e.g., the devices 120 and 130, through a central source, e.g., the cloud 105. In other implementations, the synchronization information can be propagated from the device 110 directly to the device 120 and/or the device 130 without the need for a central source.
[00036] Alternatively or in addition to the synchronization of content, the system 100 may propagate content by distributing data from one to another device. In one implementation, such distribution may be managed according to propagation rules.
[00037] In one implementation, the prediction may be based on output habits (e.g., viewing and/or listening habits) on a specific device. For instance, increased outputting of content in a particular category or type at a particular device may result in selecting further content of similar category or type for synchronizing with that device. For example, if the device 120 user watches three or more episodes of shows of the same genre (e.g., situation comedies) on the device 120, recently recorded episodes in that genre may be selected and synchronized to the device 120.
[00038] In another implementation, the prediction may involve outputting (e.g., viewing and/or listening) habits across devices. For instance, viewing of particular content by a user of the device 110 may result in the selection of the same content for synchronizing with the device 120 based on the prediction that the user, who started viewing the content at the device 110, may choose to continue viewing that particular content at the device 120. For example, if a user of the device 110 (e.g., a desktop computer) starts viewing a movie on the device 110, but pauses halfway, the prediction engine 160 may conclude that the user may continue watching the movie at the device 120 (e.g., a laptop). Accordingly, that movie file may be selected to be synchronized to the device 120.
[00039] As another example, as discussed above, a user of the device 110 may listen to two podcasts of a podcast series for two weeks in a row using the device 110. Based on such user behavior, the prediction engine 160 (based on the user history) may select the third podcast of the same series to be synchronized to the device 110 on the third week. As a further example, a user may browse a set of word files in a "My Project" folder in the device 110. The user may then start up a word processing application and begin to make edits to a particular file in the folder. By receiving this user activity data from the log database 150, the prediction engine 160 may infer that other files in the folder may also be accessed and/or edited by the user. Accordingly, the files in the folder may be selected to be synchronized.
[00040] In further implementations, the prediction may be based on explicit user ratings of content (which may be input and received from users of the devices 110, 120 and 130). For instance, high user ratings of content on a specific device may cause the selection of similar content for synchronization with that specific device. For example, if a user rates two books highly in the "Books" folder at the device 110, other books may be selected to be synchronized.
[00041] In still further implementations, the prediction may be based on previous synchronization activities. For instance, previously synchronized content within a particular category and/or time interval with a specific device may cause the selection of similar content for synchronization with that specific device. As an example, if the device 120 user has explicitly synchronized photos three times from the same folder within two days of uploading the photos onto the device 110, subsequent uploads of photos onto that same folder at the device 110 may be selected to be synchronized.
[00042] In some implementations, the prediction may involve a user's search behavior. For instance, searching for a file at a specific device may result in the selection of the content of the search results for synchronization to another device. If a user of the device 110 searches for a wedding picture in a "My Pictures" folder at the device 110, the device 110 may display a list of files that are identified as wedding pictures in the "My Pictures" folder. The user may or may not open these files to view the pictures at the device 110. Regardless, the wedding pictures identified as the search results at the device 110 may be selected to be synchronized to the device 120 based on the prediction that the user may want to view the images at the device 120.
[00043] In another implementation, a list of search results may be influenced by data related to a user's behavior at a device. For example, a user's behavior may show that the user frequently listens to audio files at the device 110. Accordingly, the prediction engine 160 may assign higher priority to audio files at the device 110 based on the prediction that the user may access those files. If the user runs a search for the search keyword "birthday" at the device 120, the search results may list the audio files identified under the keyword "birthday" above images or word documents identified under the keyword "birthday" based on the higher priority given to the audio files at the device 110. The prediction engine 160 selects the audio files identified under the keyword "birthday" to be synchronized to the device 110.
[00044] The aforementioned prediction criteria are provided as examples, and not limitations. Accordingly, any number of criteria, algorithms or other methods, alone or in any combination or weighting, may be employed. As discussed above, predictive selections of content may be generated by the prediction engine 160 and/or the central prediction engine 125.
[00045] In other implementations, a synchronization event may be determined also based on one or more inputs associated with the device (e.g., power levels, wireless communication network strength, motion sensors, timers, activation of headphones/speakers, etc.). For instance, activation of a headphone on a specific device may result in the selection of audio content for synchronization. For example, if the device 110 user has activated a headphone on the device 110, audio files in a "My Music" folder may be selected to be synchronized. As another example, a user may perform an activity that may be computationally expensive and may require a lot of disk access or file system operations. For example, the user may be editing videos or recalculating a large spreadsheet using one or more applications. The prediction engine 160 may then lower the priority of those objects so that the synchronization operations do not put additional pressure on system resources which may already be being consumed at a high level.
[00046] Moreover, in one implementation, upon receipt of incoming synchronization information, the prediction engine 160 in the device 110 can incorporate additional synchronization information collected on that specific device, filter inconsistent information and/or perform other conflict resolution.
[00047] The object receiver 170 may be a component for obtaining objects based on synchronization information from various devices in the user's network such as the devices 120 and 130. In another implementation, the object receiver 170 may obtain objects from various content providers through a communications medium.
[00048] The object database 180 may receive and store objects (e.g., content) from the object receiver 170. For example, a synchronization operation may occur between the devices 110, 120 and 130 when the devices 110, 120 and 130 are available to each other. This may include content stored by the device 110 being copied into a storage medium within the device 120 and/or the device 130. Moreover, it may also include content stored by the device 120 being copied into a storage medium within the device 110 (e.g., the object database 180) and/or the device 130. Lastly, it may also include content stored by the device 130 being copied into a storage medium within the device 110 (e.g., the object database 180) and/or the device 120.
[00049] In one implementation, in addition to the objects discussed above, contextual information or dynamic attributes, such as device location, device type and any arbitrary attribute values with the central reference point, may also be stored in the object database 180. Other dynamic attributes may include client operating system, client locale, client device type, city, state abbreviation, zip code, language code, country code, area code, phone number, telephone country access code or the like.
[00050] In one implementation, the object database 180 may delete objects. Such deletions may be performed upon the occurrence of one or more conditions. Exemplary conditions include the content being stored in the object database 180 for longer than a predetermined time threshold (e.g., emails in a trash folder), a user manually selecting content to be deleted, the content having already been outputted (e.g., viewed) by a user, and/or receipt of an instruction from another device to delete the content. Implementations are not limited to these examples.
[00051] The object sender 190 associated with the device 110 may transmit objects to one or more devices (e.g., the devices 120 and 130) known to the device 110 and eligible to receive the synchronization information. For example, peer devices associated with the same user or user account may be considered eligible to receive content. In one implementation, the object sender 190 may navigate the transmission of the objects. In one implementation, the object sender 190 may transmit the objects to the receiving devices such as the device 120 or 130 directly. In another implementation, the object sender 190 may transmit the objects to the receiving devices via the cloud 105.
[00052] The object transmissions may be queued based on the device that the objects are to be delivered to. Each queue may be associated with a device and represent one or more objects to be delivered to the associated device. Such a queue may be called a peer queue. In one implementation, a queue may be serviced for processing based on the synchronization priority assigned to the associated device. The device numbers could be represented, for example, by a numeric value, such as 1, 2 . . . N. Synchronization priorities may also be represented by a numeric value, with higher values indicating higher priority for processing and lower values indicating lower priority. For example, a desktop computer may have a higher priority than a mobile device based on, for example, the processing speed of the device. Thus, the objects to be delivered to a device with higher priority may get processed before the objects that are to be delivered to a device with lower priority. Moreover, as mentioned above, each queue contains one or more objects. The objects within each queue may be ranked based on synchronization priorities that are associated with the objects. For example, an image file may have a higher priority than a video file for reasons that are discussed in detail above with respect to the prediction engine 160.
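A minimal sketch of this queue-of-queues arrangement follows, in Java, assuming simple integer priorities; the class and method names are hypothetical, since the original describes the behavior but not an implementation.

import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch: peer queues serviced by device priority; objects within
// each queue ranked by their own synchronization priority.
public class PeerQueues {
    record SyncObject(String name, int priority) {}

    static class PeerQueue {
        final String device;
        final int devicePriority; // higher value = serviced sooner
        final PriorityQueue<SyncObject> objects = new PriorityQueue<>(
                Comparator.comparingInt(SyncObject::priority).reversed());
        PeerQueue(String device, int devicePriority) {
            this.device = device;
            this.devicePriority = devicePriority;
        }
    }

    // Service the highest-priority device's queue first; within each queue,
    // transmit the highest-priority object first.
    static void service(List<PeerQueue> queues) {
        queues.sort(Comparator.comparingInt((PeerQueue q) -> q.devicePriority).reversed());
        for (PeerQueue q : queues) {
            while (!q.objects.isEmpty()) {
                SyncObject o = q.objects.poll();
                System.out.println("send " + o.name() + " to " + q.device);
            }
        }
    }
}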
[00053] In some implementations, a prioritization manager may assign the synchronization priorities (e.g., priority levels) to each object and each device. Moreover, the prioritization manager may modify the priority level of each object and each device based on the monitored information from the monitoring engine 140.
[00054] In one implementation, the system 100 may include an external data source interface component for obtaining external information from network-based resources, such as information corresponding to a user associated with the device 110 or selected receiving devices such as the devices 120 and 130. Alternatively or in addition, as discussed above in more detail, the device 110 may include a data processing component for processing the synchronization information obtained from the other devices. The data processing component may also determine one or more devices that may be receiving devices for specific synchronization information and, as appropriate, determine specific subsets of the synchronization information each receiving device may receive. The objects stored by the object database 180 may be utilized by the data processing component.
[00055] The system 100 and device 110 may include a number of additional components, systems and/or subsystems for facilitating communications with the other wireless or wired devices and/or the cloud 105. The additional components can include one or more mobile switching centers for establishing communications with the wireless devices via the wireless communication network, such as a cellular radio access network, a wireless network based on the family of IEEE 802.11 technical standards ("WiFi"), a wireless network based on IEEE 802.16 standards ("WiMax"), and other wireless networks. The mobile switching center may include interfaces for establishing various communications via a communication network, such as the Internet, intranets, private networks and point-to-point networks, generally referred to as the "network."
[00056] The device 110 may be implemented using any suitable combination of hardware and/or software. For example, in one implementation, the device 110 may comprise at least a processing device and a memory. The processing device may correspond to a device that generally retrieves and executes the instructions stored in the memory (e.g., a central processing unit (CPU), processor, microcontroller, or the like). The memory may correspond to any typical storage device that stores computer-implemented instructions. The device 110 may perform functions by using the processing unit(s) to execute instructions provided by the system memory. Further, the memory may store information that provides an operating system component, various program modules, program data, and/or other components. In another implementation, the device 110 may include storage media or machine-readable articles, which may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like. The storage medium may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the implementations. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. The device 110 may also include one or more input devices (keyboard, mouse device, specialized selection keys, etc.) and one or more output devices (displays, printers, audio output mechanisms, etc.).
[00057] Fig. 2 illustrates example components of the system 100 in accordance with an implementation. The object sender 190 comprises a peer-1 queue 210, peer-2 queue 220, and peer-N queue 230, each of which is described in greater detail below. Each peer queue is fed into an associated device and represents a group of objects to be transmitted to the associated device. For example, peer-1 240 may be a desktop computer, and peer-1 queue 210 may be a group of objects that are lined up to be transmitted to the desktop computer. Such objects may include content such as files, patches, graphics, and/or the like. The files may include video, audio and/or text. As described above in more detail with respect to Fig. 1, the object transmissions may be queued based on the device that the objects are to be delivered to, and a peer queue may be serviced for processing based on the synchronization priority assigned to the associated device. Alternatively or in addition, as also described above in more detail with respect to Fig. 1, the objects are lined up based on synchronization priorities that are associated with the objects.
[00058] In one implementation, the synchronization priority assignment may be based on content consumption behavior, the active application used on the device to identify file types, information related to the most recently reviewed content on another device, and a priority time to synchronize content.
[00059] For example, the peer-1 240 may be a desktop computer, the peer-2 250 may be a laptop, and the peer-N 260 may be a mobile device. The peer-1 240 may have a higher priority than the peer-2 250 and the peer-N 260 based on the speed of its processor. The desktop computer with the faster processor may have a higher priority because the peer-1 240 may have sufficient resources to synchronize large sets of objects, which may be more desirable to a user. Accordingly, the peer-1 queue 210 may be processed before the peer-2 queue 220 and the peer-N queue 230.
[00060] The device numbers could be represented, for example, by a numeric value, such as 1, 2 . . . N. Synchronization priorities (not shown in Fig. 2) may also be represented by a numeric value, with higher values indicating higher priority for processing and lower values indicating lower priority.
[00061] It should be readily apparent that the object sender 190 illustrated in Fig. 2 represents a generalized depiction and that other components may be added or existing components may be removed, modified, or rearranged without departing from a scope of the present disclosure. While Fig. 2 illustrates that the object sender 190 includes three queues, peer-1 queue 210, peer-2 queue 220 and peer-N queue 230, the system may actually comprise fewer or more queues. Three queues have been shown and described as an example implementation. Moreover, while Fig. 2 illustrates three devices, peer-1 240, peer-2 250 and peer-N 260, the system may actually comprise fewer or more devices. The number N in Fig. 2 illustrates that there may be a series of queues and devices in an object sender in accordance with an implementation.
[00062] Turning now to the operation of the system 100, Fig. 3 illustrates an example process flow diagram 300 in accordance with an implementation. More specifically, the illustrative process presents a "push" model for generating predictive selections of objects and transmitting the objects based on synchronization priorities. It should be readily apparent that the processes depicted in Fig. 3 represent generalized illustrations, and that other processes may be added or existing processes may be removed, modified, or rearranged without departing from the scope and spirit of the present disclosure. Further, it should be understood that the processes may represent executable instructions stored on memory that may cause a processing device to respond, to perform actions, to change states, and/or to make decisions. Thus, the described processes may be implemented as executable instructions and/or operations provided by a memory associated with a computing device, such as the device 110 as illustrated in Figs. 1 and 2. Furthermore, Fig. 3 is not intended to limit the implementation of the described implementations, but rather the figure illustrates functional information one skilled in the art could use to design/fabricate circuits, generate software, or use a combination of hardware and software to perform the illustrated processes.
[00063] The process 300 may begin at block 305, where the monitoring engine 140 monitors user activity and device information at the device 110. This process may include collecting data related to content requested, content received, content outputted, applications utilized, available storage capacity, network availability, network speed, network cost, battery capacity, charging status, location of the device, etc. In one implementation, the process of monitoring may entail collecting data related to all the user activities and device information. In another implementation, the process of monitoring may be executed based on a predetermined protocol such as pre-identified conditions or categories of data. For example, the monitoring engine 140 may be set to collect data related to activities that the user performs for a duration of more than one minute, and any activities that the user spends less than one minute on may be disregarded. Moreover, the monitoring engine 140 may perform filtering actions to identify a subset of the collected data and provide only the subset that the user may need.
[00064] At block 310, the collected data related to the user activities and device information may be provided to the log database 150 to be recorded. The log database 150 may store the records and, depending on the implementation, may organize the data and generate correlations between various types of data. In one implementation, a relational database structure may be used for organization of the data. That is, the data is collected into tables, related pieces of data may be grouped together in a single structure or record, and relationships can be defined between these structures and records. Such structures and records may be made available to the prediction engine 160, which pulls the stored data from the log database 150 and examines the data for further analysis. As discussed above with respect to Fig. 1, in one implementation, the data from the log database may be transmitted to the central log database 115. The central log database 115 consolidates and stores all the data received from the devices in the network (e.g., the devices 110, 120 and 130).
[00065] At block 315, the prediction engine 160 may determine whether there is additional data that may be considered from the central log database. If additional data is identified, at block 320, such data may be acquired. At block 325, the prediction engine anticipates and predictively selects objects (e.g., content) that the prediction engine 160 determines to be desirable to the user. The process of predictive selections may involve pulling data from the log database 150 and performing data analytics to interpret the data. As described above with respect to Fig. 1 , the prediction engine may create a prediction rule that predicts future demand for objects based on various factors, such as the user's historical data consumption patterns.
[00066] At block 330, the predictively selected objects are obtained from the object database 180. At block 335, the object sender 190 generates queues by grouping the selected objects in queues based on the devices that the objects are being transmitted to. For example, if an object a is to be transmitted to peer-1 240 (as illustrated in Fig. 2), the object a may be grouped in the peer-1 queue 210. Similarly, if an object b is to be transmitted to peer-2 250 (as illustrated in Fig. 2), the object b may be grouped in the peer-2 queue 220. In one implementation, there may be multiple objects that are to be transmitted to the same device. For example, in addition to the object a, objects c and d may also be set to be transmitted to the peer-1 240. Accordingly, the objects c and d may be grouped in the peer-1 queue 210 along with the object a. In that situation, at block 340, the process of grouping may also comprise ranking the objects a, c and d based on their synchronization priorities. For example, the object a may have a higher priority than objects c and d. Accordingly, object a may be set to be synchronized first and transmitted to the peer-1 240 before the objects c and d.
[00067] At block 345, the process of transmitting objects may start by processing the queue with the highest synchronization priority. For example, the peer-1 queue 210 may have a higher priority than peer-2 queue 220 and/or peer-N queue 230. Accordingly, the object sender 190 initiates the process of object transmission by processing the peer-1 queue 210 first. As discussed in more detail above with respect to Fig. 2, the synchronization priority of each queue may be based on the synchronization priority assigned to the device associated with the queue. For example, the synchronization priority of the peer-1 queue 210 may be determined based on the synchronization priority of the peer-1 240.
[00068] In one implementation, the synchronization priority of the peer queue may be adjusted based on certain data monitored by the monitoring engine 140. As discussed in detail above with respect to Fig. 1, the monitoring engine 140 may monitor various attributes of the device 110, including battery power information, battery power consumption rates, memory/storage space, wireless signal strength, network details, and/or the like. Based on data related to such attributes, the prediction engine 160 may conclude that the resources of the device 110 are being heavily utilized. For example, the monitoring engine 140 may detect that the battery of the device 110 is low, the network bandwidth is limited, and the network traffic is subject to billing (e.g., 3G or roaming). Accordingly, the prediction engine 160 may issue commands to hold, reduce or block transmission of objects to limit the usage of resources. In one implementation, the prediction engine 160 may lower the synchronization priority of the peer queues and/or the objects within such queues that may require, for example, more battery power, network bandwidth and/or storage space. In another implementation, the system may have the synchronization manager block the synchronization operations in accordance with commands from the prediction engine 160 to block the synchronization based on battery power information, battery power consumption rates, storage space, wireless signal strength, and network details.
[00069] In another implementation, after determining the peer-1 queue 210 has the highest synchronization priority based on the peer-1 240 associated with that queue, the device 110 may determine whether the peer-1 240 is available to the device 110. If so, the queue may be processed and the object is synchronized (e.g., copied) to the peer-1 240. However, if the device is not available, then the device 110 may prompt the user to make the device available. This prompting may be performed through a user interface (e.g., a display) of the device and/or of an output device connected to the device. Once the peer-1 240 is available, the queue can be processed.
[00070] Fig. 4 illustrates another example process flow diagram 400 in accordance with another implementation. More specifically, the illustrative process corresponds to a "pull" model for receiving objects from devices in a network for synchronization. For example, the device 110 can transmit a request, or initiate a transaction, that results in the downloading or streaming of objects (e.g., content) to the device 110. Typically, the content providing device would initiate the transfer upon receipt of the request from the device 110. It should be readily apparent that the processes depicted in Fig. 4 represent generalized illustrations, and that other processes may be added or existing processes may be removed, modified, or rearranged without departing from the scope and spirit of the present disclosure. Further, it should be understood that the processes may represent executable instructions stored on memory that may cause a processing device to respond, to perform actions, to change states, and/or to make decisions. Thus, the described processes may be implemented as executable instructions and/or operations provided by a memory associated with a computing device, such as the device 110 as illustrated in Figs. 1 and 2. Furthermore, Fig. 4 is not intended to limit the implementation of the described implementations, but rather the figure illustrates functional information one skilled in the art could use to design/fabricate circuits, generate software, or use a combination of hardware and software to perform the illustrated processes.
[00071] The process 400 may begin at block 405, where the monitoring engine 140 monitors user activity and device information at the device 110. At block 410, the monitoring engine 140 determines whether any of the monitored activity or system information indicates that a user of the device 110 may require immediate access to certain objects (e.g., content). In particular, this process may involve, e.g., performing a discovery process, during which the monitoring device may detect that some user activity indicates a need for immediate access to an object, which may be a specific file or a set of files. For example, the user may navigate to a specific folder of content, in which case the monitoring engine 140 may interpret this activity as an indication of an upcoming request to access the files within that specific folder. Other examples may include, but are not limited to, a user opening a specific application, connecting to a specific LAN connection, and searching for a specific file.
[00072] If it is determined that the user of the device 110 may require immediate access to the object, the synchronization process may be initiated by the prediction engine 160. At block 420, the prediction engine 160 creates a synchronization notification message. The synchronization notification message may identify the object (e.g., the specific file or the set of files), indicate that the object needs to be synchronized to the device 110 and include a request to pull content. Moreover, the synchronization notification message may request that the synchronization priority associated with the object be modified at the devices.
[00073] At block 425, the prediction engine 160 sends the synchronization notification message to the other devices associated with the user or user account. In particular, this process may involve identifying any devices associated with the user or user account and determining whether any of the associated devices may have a newer version of the object. For example, the object to be synchronized may be a digital publication file. The prediction engine 160 may specify that the devices 120 and 130 are associated with the user of the device 110 or user account, and that the devices 120 and 130 might have a copy of the object (possibly in a different format) corresponding to the digital publication file. Accordingly, the prediction engine 160 may send the synchronization notification message to the devices 120 and 130. In another implementation, the central prediction engine 125 may transmit the synchronization notification message to the devices associated with the user or user account, indicating that the object needs to be synchronized.
[00074] In one implementation, when a synchronization notification message is received, the request is preferably confirmed by a central reference point to verify that the request came from a valid device. This check preferably validates security information embedded in a message header or the like. This security information is preferably encrypted employing a key that only a valid device and server possess. However, any number of verification techniques may be used, such as public key encryption, digital signature certificates and/or the like, if desired. If the request is invalid, an error response is preferably sent back to the device sending the message (i.e., the device 110), indicating the device is not authorized to use the synchronization framework.
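One plausible realization of that shared-key header check is a message authentication code computed over the header, sketched below in Java with the standard javax.crypto API; the header layout and key handling are assumptions, not details from the original.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Hypothetical validation of security information embedded in a message header,
// using a key that only a valid device and the server possess.
public class MessageValidator {
    public static boolean isValid(byte[] header, byte[] claimedTag, byte[] sharedKey)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        byte[] expected = mac.doFinal(header);
        // Constant-time comparison avoids leaking information through timing.
        return MessageDigest.isEqual(expected, claimedTag);
    }
}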
[00075] In response to the synchronization notification message, the devices 120 and 130 process the notification. This process may involve determining whether a copy of the object exists in the object databases in the devices 120 and 130. If it is confirmed that the object exists, the synchronization priority associated with the object is increased at the device that the object exists in (e.g., the device 120, or the device 130, or both devices), and in accordance with its own configuration settings and network availability, at block 430, the device transmits the object to the device 110. At block 425, the object is received at the object receiver 170 in the device 110. In some implementations, the transmission may be performed directly from one device to another. In another implementation, the transmission of the object may be performed through the cloud 105.

[00076] In one implementation, the transmission of the object to the device 110 may be blocked or put on hold based on certain data monitored by the monitoring engine 140. As discussed in more detail above with respect to Fig. 1, the monitoring engine 140 may monitor various attributes of the device 110, including battery power information, battery power consumption rates, memory/storage space, wireless signal strength, network details, and/or the like. Based on data related to such attributes, the prediction engine 160 may conclude that the resources of the device 110 are being heavily utilized. For example, the monitoring engine 140 may detect that the battery of the device 110 is low, the network bandwidth is limited, and/or the network traffic is subject to billing (e.g., 3G or roaming). Accordingly, the prediction engine 160 may issue commands to hold or block the transmission of the object to limit the usage of resources.
[00077] At block 435, the object may be stored in the object database of the device 110. In one implementation, the devices 120 and 130 may store the synchronization notification message in the log databases in the devices 120 and 130, and may use such information to improve the historical synchronization data in each database. Moreover, in another implementation, the device 110 may perform conflict resolution of synchronization information received from multiple devices.
[00078] While the above disclosure has been shown and described with reference to the foregoing examples, it should be understood that other forms, details, and implementations may be made without departing from the spirit and scope of the disclosure that is defined in the following claims.
Using regex to add a single backslash to escape quotes, instead of 2 backslashes or 0
by lancer (Beadle)
on Jul 26, 2012 at 16:48 UTC (#983874=perlquestion)
lancer has asked for the wisdom of the Perl Monks concerning the following question:
Dear All,
I have a function like
sub sql_escape {
    my $text = shift;
    $text =~ s/'/\'/g;
    $text =~ s/"/\"/g;
    $text =~ s/\\/\\\\/g;
    return $text;
}
Outputs this: O'REILLY
When I print the result of that in a double quoted string, like so: my $escaped_text = sql_escape($raw_text); print "$escaped_text"; it prints the single quotes without the backslash.
But if I change the function to add another backslash, it starts printing 2 backslashes:
sub sql_escape {
    my $text = shift;
    $text =~ s/'/\\'/g;
    $text =~ s/"/\\"/g;
    $text =~ s/\\/\\\\/g;
    return $text;
}
Outputs this: O\\'REILLY
How can I change it to output only 1 backslash?
Re: Using regex to add a single backslash to escape quotes, instead of 2 backslashes or 0
by Corion (Pope) on Jul 26, 2012 at 17:02 UTC
See the ->quote method of DBI. Do not implement this routine yourself, and even better, use DBI placeholders instead, also documented in DBI.
Thanks for the suggestion, I was able to replace it with this:
sub sql_escape {
    my $text = shift;
    $text = DBD::_::db->quote($text);
    return $text;
}
Outputs: O''REILLY
(This format works with MySQL)
Why don't you use
$dbh->quote(...)
as the documentation suggests?
Re: Using regex to add a single backslash to escape quotes, instead of 2 backslashes or 0
by BillKSmith (Deacon) on Jul 26, 2012 at 18:43 UTC
Just for fun, here is a fix to your function.
use strict;
use warnings;

my $message = q(O'REILLY);
my $sql_message = sql_escape($message);
print "$sql_message";

sub sql_escape {
    my $text = shift;
    $text =~ s/'/\\'/g;
    $text =~ s/"/\\"/g;
    return $text;
}
Use the module!
For completeness, here's WHY this fixes your function so you can maybe avoid similar issues in the future.
$text = "O'REILLY"; $text =~ s/'/\\'/g; # $text is now O\'REILLY $text =~ s/"/\\"/g; # Unchanged $text =~ s/\\/\\\\/g; #$text is not O\\'REILLY
So you properly replace it with a single backslash, but then you tell it to replace all single backslashes with double backslashes. If you wanted to do this properly, move the 3rd line above the 1st. But everyone is right, use the module.
Wow, thanks for that. Such a basic mistake. :)
Re: Using regex to add a single backslash to escape quotes, instead of 2 backslashes or 0
by Anonymous Monk on Jul 26, 2012 at 17:28 UTC
Pay attention to what Corion has suggested. By implementing this yourself, you are very likely to miss edge cases and grey areas that the DBI implementation has already addressed.
Thanks Anonymous Monk, I've used Corion's suggestion. I posted it above.
Calling Java Code from C/C++ Transparently
I recently needed to create a C/C++ API for some existing Java code. The first thing that came to mind was to use the Java Native Interface (JNI). However, that can get ugly, mainly because JNI was designed to do the reverse (call C code from Java) and because it adds additional compilation steps. For example, you need to generate the JNI layer using the JDK’s javah tool, implement and compile the native C/C++ side of things, and then keep it all in sync as the Java code changes. Fortunately, there is an alternative.
Inside Java Native Access (JNA)
The JNA library is written in Java, and provides Java applications easy access to native code using 100% pure Java. JNA implements a small, built-in, multi-purpose JNI library stub layer to dynamically invoke native code, similar to Java’s reflection. As a result, JNA hides all the ugly details of JNI, and instead allows you to call directly into native code using natural Java syntax. Overall, it removes the overhead of configuring and building JNI code for multiple platforms (e.g. Windows, Mac OS X, Linux, Solaris, and so on), which is a huge time saver.
JNA has been around for a while. It's open source, available in Git under the GNU LGPL license. Additionally, JNA is used by popular third-party software, such as NetBeans, IntelliJ, Apache Cassandra, GStreamer, and others. JNA also supports multiple platforms and architectures (e.g., Windows, Unix, x86, and ARM).
JNA Example: From Java to C/C++
To begin, let’s explore a simple JNA example of Java calling into native code. The example starts with a simple C++ class (see Listing 1) that, for demonstration purposes, performs a variety of functions:
1. Return the result of a calculation on an integer passed as a parameter: calc(int i)
2. Return a text string in response to a request: getname()
3. Log a text string passed into the native code as a parameter: log(string s)
4. Call back into the Java code via a callback definition: callme(callback func)
#include <string>
using namespace std;

typedef void (*callback) (int); // for callback

class MyLibClass {
public:
    MyLibClass();
    virtual ~MyLibClass();
    int calc(int i);
    string getname(void);
    void callme(callback func);
    void log(string s);
private:
    int value = 0; // counter handed to the callback; declaration assumed, as Listing 2 references it
};
Listing 1 – A native class definition that performs a variety of common functions.
All of the methods described above are defined in a C++ header file, along with the callback definition. In this case, the callback will provide an integer value to the caller at some time in the future, perhaps multiple times (typical callback pattern). The actual C++ code is very normal and very boring (see Listing 2), which is good in this case as it illustrates that using JNA requires no changes to the native code.
#include <iostream>
#include "MyLibClass.h" // assumed header name for the class in Listing 1

MyLibClass::MyLibClass() { /* ... */ }
MyLibClass::~MyLibClass() { /* ... */ }
int MyLibClass::calc(int i) {
return i * 2;
}
string MyLibClass::getname() {
return "Eric J. Bruno";
}
void MyLibClass::callme(callback func) {
if ( func ) {
func( ++value );
}
}
void MyLibClass::log(string s) {
cout << s;
cout << "\n";
}
Listing 2 - The native implementation of the MyLibClass class.
Before we use the native class MyLibClass in a Java application, let’s look at a very simple example of calling native code from Java, as shown in Listing 3.
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Platform;
public class SimpleTest {
public interface CLibrary extends Library {
CLibrary INSTANCE = (CLibrary)
Native.loadLibrary((Platform.isWindows() ? "msvcrt" : "c"),
CLibrary.class);
void printf(String format, Object... args);
}
public static void main(String[] args) {
CLibrary.INSTANCE.printf("Hello, World!\n");
}
}
Listing 3 - Simple native code invocation from Java.
First, this Java code uses JNA’s Library class to load a native library via a nested interface named CLibrary. In this case, the library is the standard C library. The JNA code is written so that it will work on Windows and Unix platforms equally. It also defines Java methods that match the native methods you wish to call from Java. The one line of code in the Java main() routine illustrates how simple and easy it is to call into native code using JNA.
The result is 100% pure Java code, which is mostly straightforward and readable. In this example, the JNA calling technique is similar to Java’s reflection, or any type of indirect method invocation. However, there can be a slight performance penalty per call when compared to native or plain Java method calls. To address this, JNA defines a technique called Direct Method Mapping to eliminate this penalty. Listing 4 contains the Java code that calls the native class MyLibClass (defined in Listing 1) with this optimization.
import com.sun.jna.Callback;
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Platform;
public class JNATest {
public static class MyCLib {
public static native int calc(int x);
public static native void log(String s);
public static native String getname();
interface callback extends Callback {
void invoke(int val);
}
public static native void callme(callback func);
static {
Native.register("MyCLib");
}
}
static int value = 0;
static MyCLib.callback func = new MyCLib.callback() {
public void invoke(int val) {
JNATest.value = val;
}
};
public static void main(String[] args) {
MyCLib.log("Hello, " + MyCLib.getname() );
MyCLib.log("The result of calc(2) = " + MyCLib.calc(2) );
MyCLib.callme(func);
MyCLib.log("The value from the callback was " + value);
}
}
Listing 4 - Using JNA Direct Method Invocation to optimally call native code from Java.
There are two key concepts defined in this code: the use of JNA's Direct Method Invocation, and the syntax to define callbacks. The static class MyCLib defines Java methods for each of the matching native MyLibClass methods we wish to call (which happens to be all of them in this case). The syntax is simple: Java primitives match C/C++ primitive types, and a Java String is marshaled as a C-style string that the native code can consume.
For the callback used in the method callme(), we start with a Java interface that defines a method named invoke, which accepts the same parameters as the native callback method. This interface—named callback in this example, although you can change that—extends the JNA Callback class, and is then defined as the parameter func in the Java version of the callme() method. To complete the JNA callback pattern, the code implements the func object’s invoke method, which will be called when the native code follows through with the callback. In this example, the callback code simply stores the given value.
The static method main() calls the native code from Java, which includes the callme method where the callback is provided. Finally, the result of the value given back from the native code from the callback is logged. Admittedly, there is some mental bookkeeping involved with this JNA code, especially when callbacks are involved, but the end result as shown in main() is straightforward.
All of the code for the samples above is available in Git for download here:
1. MyCLib
2. JNATest
Calling Java from C/C++ Using JNA
The JNA callback pattern is the key to allow native code to transparently call Java code. By transparently, I mean that no special C/C++ code is needed to make this work and, in fact, the native developer doesn’t have to know that Java is even being invoked. To make this work, each native API method is routed back to the Java implementation through a matching JNA callback. This approach supports passing parameters, and routing each method’s return value back through as the callback parameters and return value, respectively. Let’s follow this process visually.
First, the Java interface MyJavaAPI is defined and implemented (see Figure 1). The methods and functionality are similar to those explored in the earlier example.
Figure 1 - A normal Java interface and implementation.
To make this code invokable from the native C/C++ side, JNA callbacks are defined for each API call, along with Java methods to register the callbacks from the native side (see Figure 2).
Figure 2 - The JNA code for callbacks that match each Java API method.
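As a rough sketch of the layer Figure 2 describes, the JNA code might define one callback interface per Java API method plus a native registration hook. All names below (the interfaces, registerCallbacks, and the library name) are illustrative assumptions, not the article's actual identifiers.

import com.sun.jna.Callback;
import com.sun.jna.Native;

// Hypothetical JNA layer: one callback interface per Java API method.
public class MyJavaAPIBridge {
    public interface GetNameCallback extends Callback {
        String invoke();                 // matches String getName() on the Java side
    }
    public interface CalcCallback extends Callback {
        int invoke(int value);           // matches int calc(int) on the Java side
    }
    public interface LogCallback extends Callback {
        void invoke(String msg);         // matches void log(String) on the Java side
    }

    // Native routine (in the matching C library) that stores the callbacks
    // so later C/C++ API calls can be routed back into Java.
    public static native void registerCallbacks(
            GetNameCallback getName, CalcCallback calc, LogCallback log);

    static {
        Native.register("MyJavaLibrary_C"); // assumed native library name
    }
}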
Next, a matching native library is written to match the methods in the Java API one-for-one. You can even create a tool that generates this automatically from the Java API. The Java JNA code is updated to call into this native code, providing the JNA callback methods (see Figure 3).
Figure 3 - The JNA code loads the matching native library and provides the callbacks.
Next, the native library code is updated to implement the API methods, which (through the JNA callback technique) then call into the matching Java code. Each native method checks whether the JVM has been launched and, if it hasn't, launches it before invoking the method call through the callback (see Figure 4).
Figure 4 - Each native method calls the matching Java API through the JNA callbacks.
With that, the native-to-Java invocation machinery is in place. All of this is put in motion once the native developer loads the C/C++ library code and invokes one of the methods. At that point, the JVM is loaded, the callbacks are registered, and each of the Java calls are routed through the matching callbacks. The diagram in Figure 5 shows the process in action as each C++ call propagates through the layers.
Figure 5 - The C++ application calls the C++ version of the Java API through the layers.
Following the arrows, you can see how all of the components are involved. Showing it this way can make it seem more involved than it really is, but it helps to understand the process in detail. To support C as well as C++ applications, the C++ library defers its calls to the C implementation (via the functions prefixed with extern “C”). C applications can call the C layer directly, as shown in Figure 6.
Figure 6 - Both C and C++ applications are supported in this JNA pattern.
Executing this sample code results in the following output:
$ ./mytestcapp
In C++: getName()
create_vm------------------------------
Creating the JVM with classpath=-Djava.class.path=/Users/ericjbruno/Dropbox/dev/MyJavaLibrary_JNA/build/classes:/Users/ericjbruno/Dropbox/dev/MyJavaLibrary/build/classes:/Users/ericjbruno/Dropbox/dev/MyJavaLibrary_JNA/dist/lib/jna-4.2.2.jar:/Users/ericjbruno/Dropbox/dev/MyJavaLibrary_JNA/dist/lib/jna-platform-4.2.2.ja:/Users/ericjbruno/Dropbox/dev/MyJavaLibrary_C/dist/Debug/GNU-MacOSX:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jli:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/server
Looking for myjavalibrary_jna/MyJavaLibrary_JNA
MyJavaLibrary created
MyJavaLibrary_JNA(): callbacks registered, [email protected]
Getting main
Getting constructor
--------------------------------------
In MyJavaLibrary: getName(), return=Eric J. Bruno
In C++: log(msg=Eric J. Bruno)
MyJavaLibrary.log: Eric J. Bruno
In C++: calc(value=2)
In MyJavaLibrary: calc(value=2), return=4
In C: value returned: 4
In C++: calc(value=4)
In MyJavaLibrary: calc(value=4), return=8
In C: value returned: 8
In C++: calc(value=8)
In MyJavaLibrary: calc(value=8), return=16
In C: value returned: 16
In C++: calc(value=16)
In MyJavaLibrary: calc(value=16), return=32
In C: value returned: 32
In C++: calc(value=32)
In MyJavaLibrary: calc(value=32), return=64
In C: value returned: 64
You can follow the steps of processing via the logging inserted throughout the code. As mentioned above, both the native and JNA layers that match the Java API code could conceivably be generated automatically by a tool that analyzes the Java interface. For now this is left as an exercise for the reader.
Download the four sample projects from Git here:
1. MyJavaLibrary
2. MyJavaLibrary_JNA
3. MyJavaLibrary_C
4. MyTestCApp
Launching the JVM from Native Code
A critical step to make the calls from native to Java code transparent to C/C++ developers is to launch the JVM automatically from the native side. To do so, the JVM launch code is contained within a single header file JNABootstrap.h, summarized in Listing 5.
#include <jni.h>
#include <stdio.h>
char* OPTION_CP = (char* )"-Djava.class.path=";
// Path to the Java API code, and the JNA layer for it
char* CUSTOM_LIB = (char* )...;
// Paths to the JNA components
char* JNA_LIB = (char* )...;
char* JNA_PLATFORM_LIB = (char* )...;
// Path to the native version of the library
char* CUSTOM_C_LIB = (char* )...;
// Path to parts of the standard Java installation
char* JAVA_LIB = (char* )...;
char* JLI_LIB = (char* )...;
char* SERVER_LIB = (char* )...;
JavaVM* jvm = NULL;
JNIEnv* env = NULL;
jobject object = NULL;
JNIEnv* create_vm(JavaVM ** jvm) {
JavaVMInitArgs vm_args;
JavaVMOption options;
// Path to the java source code
options.optionString = ...
printf("create_vm------------------------------\n");
vm_args.version = JNI_VERSION_1_6;
vm_args.nOptions = 1;
vm_args.options = &options;
vm_args.ignoreUnrecognized = 0;
JNI_CreateJavaVM(jvm, (void**)&env, &vm_args);
char* classname = (char* )"myjavalibrary_jna/MyJavaLibrary_JNA";
jclass cls = env->FindClass(classname);
jmethodID main_method = env->GetStaticMethodID(
cls, "main", "([Ljava/lang/String;)V");
jmethodID constructor = env->GetMethodID(cls, "<init>", "()V");
return env;
}
Listing 5 - The code to launch the JVM and load the JNA code automatically.
Note that many of the details have been left out of the code above for brevity. Overall, the create_vm method uses the paths to the JDK installation, the Java API, the JNA code, and the native implementation to create the JVM with everything it needs to load all of these components. Once it loads the JVM, it loads the JNA code and finds the constructor for the main class, which it loads. With the JVM running and the JNA class loaded, calls into the native library will be successfully routed to the Java implementation—all of which happens under the hood.
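The instantiation step elided from Listing 5 presumably comes down to a standard JNI call along these lines (a sketch, not the article's verbatim code):

// Create the MyJavaLibrary_JNA instance using the constructor found above
object = env->NewObject(cls, constructor);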
Other Important Details
For Java to call a native library, with or without JNA, you need to provide the JVM with the path to it. When invoking a Java application, you do this on the command line via the JVM parameter -Djava.library.path=... However, since this process begins when a native application is invoked, that isn't possible here. Therefore, you need to define the corresponding environment variable LD_LIBRARY_PATH to contain the path to the native library that matches the Java API, as shown here:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/Users/ebruno/dev/MyJavaLibrary_C/dist/Debug/GNU-MacOSX
The JVM uses this environment variable to load the library.
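On the Java side, JNA then resolves the library by name. With JNA 4.x that can look like this (the interface name here is an assumption):

// LD_LIBRARY_PATH (or -Djna.library.path) tells JNA where
// libMyJavaLibrary_C.dylib lives
MyJavaLibraryC lib = (MyJavaLibraryC)
    Native.loadLibrary("MyJavaLibrary_C", MyJavaLibraryC.class);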
Additionally, on the native side, the C/C++ library version of the Java API that you generate, once compiled, must be in the path for your native application to find. An easy way to ensure this is to copy the library to the same directory as your application. For instance, the script I use to run my native test application, as shown below, copies the latest version of the native library file to the current directory each time.
cp /Users/ebruno/dev/MyJavaLibrary_C/dist/Debug/GNU-MacOSX/libMyJavaLibrary_C.dylib .
./mytestcapp
Conclusion
JNA allows you to easily call native code from Java using nothing but pure Java—no custom JNI code is needed. However, using JNA’s callback technique along with some glue code to automatically and transparently invoke the JVM from a native application, you can easily call Java code from a C/C++ application. With this approach, the native coder uses a thin native definition of the Java code being invoked. Under the hood, the JVM is launched, the Java API is loaded, and all callbacks are registered automatically. All of this works while C/C++ developers write nothing but native code, and Java developers write nothing but Java code.
Eric Bruno is a contributing editor to multiple publications with more than 20 years of experience in the information technology community. He is a highly requested speaker and writer for topics spanning the technology spectrum, from mobile to the data center. Eric is a regular contributor at Fixate IO.
MLIR 19.0.0git
IRDLVerifiers.cpp
1 //===- IRDLVerifiers.cpp - IRDL verifiers ------------------------- C++ -*-===//
2 //
3 // This file is licensed under the Apache License v2.0 with LLVM Exceptions.
4 // See https://llvm.org/LICENSE.txt for license information.
5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6 //
7 //===----------------------------------------------------------------------===//
8 //
9 // Verifiers for objects declared by IRDL.
10 //
11 //===----------------------------------------------------------------------===//
12
13 #include "mlir/Dialect/IRDL/IRDLVerifiers.h"
14 #include "mlir/IR/Attributes.h"
15 #include "mlir/IR/Block.h"
16 #include "mlir/IR/BuiltinAttributes.h"
17 #include "mlir/IR/Diagnostics.h"
18 #include "mlir/IR/ExtensibleDialect.h"
19 #include "mlir/IR/Location.h"
20 #include "mlir/IR/Region.h"
21 #include "mlir/IR/Value.h"
22 #include "llvm/ADT/STLExtras.h"
23 #include "llvm/Support/FormatVariadic.h"
24
25 using namespace mlir;
26 using namespace mlir::irdl;
27
28 ConstraintVerifier::ConstraintVerifier(
29 ArrayRef<std::unique_ptr<Constraint>> constraints)
30 : constraints(constraints), assigned() {
31 assigned.resize(this->constraints.size());
32 }
33
34 LogicalResult
35 ConstraintVerifier::verify(function_ref<InFlightDiagnostic()> emitError,
36 Attribute attr, unsigned variable) {
37
38 assert(variable < constraints.size() && "invalid constraint variable");
39
40 // If the variable is already assigned, check that the attribute is the same.
41 if (assigned[variable].has_value()) {
42 if (attr == assigned[variable].value()) {
43 return success();
44 }
45 if (emitError)
46 return emitError() << "expected '" << assigned[variable].value()
47 << "' but got '" << attr << "'";
48 return failure();
49 }
50
51 // Otherwise, check the constraint and assign the attribute to the variable.
52 LogicalResult result = constraints[variable]->verify(emitError, attr, *this);
53 if (succeeded(result))
54 assigned[variable] = attr;
55
56 return result;
57 }
58
59 LogicalResult IsConstraint::verify(function_ref<InFlightDiagnostic()> emitError,
60 Attribute attr,
61 ConstraintVerifier &context) const {
62 if (attr == expectedAttribute)
63 return success();
64
65 if (emitError)
66 return emitError() << "expected '" << expectedAttribute << "' but got '"
67 << attr << "'";
68 return failure();
69 }
70
71 LogicalResult
72 BaseAttrConstraint::verify(function_ref<InFlightDiagnostic()> emitError,
73 Attribute attr, ConstraintVerifier &context) const {
74 if (attr.getTypeID() == baseTypeID)
75 return success();
76
77 if (emitError)
78 return emitError() << "expected base attribute '" << baseName
79 << "' but got '" << attr.getAbstractAttribute().getName()
80 << "'";
81 return failure();
82 }
83
84 LogicalResult
85 BaseTypeConstraint::verify(function_ref<InFlightDiagnostic()> emitError,
86 Attribute attr, ConstraintVerifier &context) const {
87 auto typeAttr = dyn_cast<TypeAttr>(attr);
88 if (!typeAttr) {
89 if (emitError)
90 return emitError() << "expected type, got attribute '" << attr;
91 return failure();
92 }
93
94 Type type = typeAttr.getValue();
95 if (type.getTypeID() == baseTypeID)
96 return success();
97
98 if (emitError)
99 return emitError() << "expected base type '" << baseName << "' but got '"
100 << type.getAbstractType().getName() << "'";
101 return failure();
102 }
103
104 LogicalResult DynParametricAttrConstraint::verify(
105 function_ref<InFlightDiagnostic()> emitError, Attribute attr,
106 ConstraintVerifier &context) const {
107
108 // Check that the base is the expected one.
109 auto dynAttr = dyn_cast<DynamicAttr>(attr);
110 if (!dynAttr || dynAttr.getAttrDef() != attrDef) {
111 if (emitError) {
112 StringRef dialectName = attrDef->getDialect()->getNamespace();
113 StringRef attrName = attrDef->getName();
114 return emitError() << "expected base attribute '" << attrName << '.'
115 << dialectName << "' but got '" << attr << "'";
116 }
117 return failure();
118 }
119
120 // Check that the parameters satisfy the constraints.
121 ArrayRef<Attribute> params = dynAttr.getParams();
122 if (params.size() != constraints.size()) {
123 if (emitError) {
124 StringRef dialectName = attrDef->getDialect()->getNamespace();
125 StringRef attrName = attrDef->getName();
126 emitError() << "attribute '" << dialectName << "." << attrName
127 << "' expects " << params.size() << " parameters but got "
128 << constraints.size();
129 }
130 return failure();
131 }
132
133 for (size_t i = 0, s = params.size(); i < s; i++)
134 if (failed(context.verify(emitError, params[i], constraints[i])))
135 return failure();
136
137 return success();
138 }
139
140 LogicalResult DynParametricTypeConstraint::verify(
141 function_ref<InFlightDiagnostic()> emitError, Attribute attr,
142 ConstraintVerifier &context) const {
143 // Check that the base is a TypeAttr.
144 auto typeAttr = dyn_cast<TypeAttr>(attr);
145 if (!typeAttr) {
146 if (emitError)
147 return emitError() << "expected type, got attribute '" << attr;
148 return failure();
149 }
150
151 // Check that the type base is the expected one.
152 auto dynType = dyn_cast<DynamicType>(typeAttr.getValue());
153 if (!dynType || dynType.getTypeDef() != typeDef) {
154 if (emitError) {
155 StringRef dialectName = typeDef->getDialect()->getNamespace();
156 StringRef attrName = typeDef->getName();
157 return emitError() << "expected base type '" << dialectName << '.'
158 << attrName << "' but got '" << attr << "'";
159 }
160 return failure();
161 }
162
163 // Check that the parameters satisfy the constraints.
164 ArrayRef<Attribute> params = dynType.getParams();
165 if (params.size() != constraints.size()) {
166 if (emitError) {
167 StringRef dialectName = typeDef->getDialect()->getNamespace();
168 StringRef attrName = typeDef->getName();
169 emitError() << "attribute '" << dialectName << "." << attrName
170 << "' expects " << params.size() << " parameters but got "
171 << constraints.size();
172 }
173 return failure();
174 }
175
176 for (size_t i = 0, s = params.size(); i < s; i++)
177 if (failed(context.verify(emitError, params[i], constraints[i])))
178 return failure();
179
180 return success();
181 }
182
183 LogicalResult
184 AnyOfConstraint::verify(function_ref<InFlightDiagnostic()> emitError,
185 Attribute attr, ConstraintVerifier &context) const {
186 for (unsigned constr : constraints) {
187 // We do not pass the `emitError` here, since we want to emit an error
188 // only if none of the constraints are satisfied.
189 if (succeeded(context.verify({}, attr, constr))) {
190 return success();
191 }
192 }
193
194 if (emitError)
195 return emitError() << "'" << attr << "' does not satisfy the constraint";
196 return failure();
197 }
198
199 LogicalResult
200 AllOfConstraint::verify(function_ref<InFlightDiagnostic()> emitError,
201 Attribute attr, ConstraintVerifier &context) const {
202 for (unsigned constr : constraints) {
203 if (failed(context.verify(emitError, attr, constr))) {
204 return failure();
205 }
206 }
207
208 return success();
209 }
210
211 LogicalResult
212 AnyAttrConstraint::verify(function_ref<InFlightDiagnostic()> emitError,
213 Attribute attr,
214 ConstraintVerifier &context) const {
215 return success();
216 }
217
218 LogicalResult RegionConstraint::verify(mlir::Region &region,
219 ConstraintVerifier &constraintContext) {
220 const auto emitError = [parentOp = region.getParentOp()](mlir::Location loc) {
221 return [loc, parentOp] {
222 InFlightDiagnostic diag = mlir::emitError(loc);
223 // If we already have been given location of the parent operation, which
224 // might happen when the region location is passed, we do not want to
225 // produce the note on the same location
226 if (loc != parentOp->getLoc())
227 diag.attachNote(parentOp->getLoc()).append("see the operation");
228 return diag;
229 };
230 };
231
232 if (blockCount.has_value() && *blockCount != region.getBlocks().size()) {
233 return emitError(region.getLoc())()
234 << "expected region " << region.getRegionNumber() << " to have "
235 << *blockCount << " block(s) but got " << region.getBlocks().size();
236 }
237
238 if (argumentConstraints.has_value()) {
239 auto actualArgs = region.getArguments();
240 if (actualArgs.size() != argumentConstraints->size()) {
241 const mlir::Location firstArgLoc =
242 actualArgs.empty() ? region.getLoc() : actualArgs.front().getLoc();
243 return emitError(firstArgLoc)()
244 << "expected region " << region.getRegionNumber() << " to have "
245 << argumentConstraints->size() << " arguments but got "
246 << actualArgs.size();
247 }
248
249 for (auto [arg, constraint] : llvm::zip(actualArgs, *argumentConstraints)) {
250 mlir::Attribute type = TypeAttr::get(arg.getType());
251 if (failed(constraintContext.verify(emitError(arg.getLoc()), type,
252 constraint))) {
253 return failure();
254 }
255 }
256 }
257 return success();
258 }
Student Support Forum > General > "Calculation time of Solve[]"
Original Message (ID '38346') By Bill Simpson:
This is fast with a fairly simple solution.

Simplify[Reduce[{(y1 - y0)^2*a^2 == a^2*b^2 - (x1 - x0)^2*b^2}, {a, b, x0, y0}]]

This slows down with a much bigger solution.

Simplify[Reduce[{(y1 - y0)^2*a^2 == a^2*b^2 - (x1 - x0)^2*b^2,
                 (y2 - y0)^2*a^2 == a^2*b^2 - (x2 - x0)^2*b^2}, {a, b, x0, y0}]]

This takes so long that I give up waiting for it.

Simplify[Reduce[{(y1 - y0)^2*a^2 == a^2*b^2 - (x1 - x0)^2*b^2,
                 (y2 - y0)^2*a^2 == a^2*b^2 - (x2 - x0)^2*b^2,
                 (y3 - y0)^2*a^2 == a^2*b^2 - (x3 - x0)^2*b^2}, {a, b, x0, y0}]]

And you can probably guess how much slower and bigger the result might be with four equations. Simplifying equations or getting rid of square roots or denominators before trying to solve, if you are certain that this is valid, will sometimes help speed things up, but not always.
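As an illustration of that last suggestion (a sketch, not part of the original reply): solve the first equation for b^2 with Solve, substitute the result, and Reduce has one fewer unknown to juggle:

bsq = b^2 /. First@Solve[(y1 - y0)^2*a^2 == a^2*b^2 - (x1 - x0)^2*b^2, b^2];
Simplify[Reduce[{(y2 - y0)^2*a^2 == a^2*bsq - (x2 - x0)^2*bsq}, {a, x0, y0}]]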
Converting metric units of length
Learn to convert between kilometers, meters, and centimeters. Created by Sal Khan.
Video transcript
What I want to do in this video is convert this amount of kilometers into meters and centimeters. So we'll first start with the 2 kilometers. And I encourage you to now pause this video and try to do this on your own. Well, the one thing that we know is that a kilometer literally means 1,000 meters. So you could literally view this is as 2 times 1,000 meters. Let me write that down. So this is going to be equal to 2 times 1,000 meters, which is equal to 2,000 meters. If we wanted to convert the 11 kilometers into meters, it's the same thing. 11 kilometers-- this right over here-- means 1,000 meters. So you could think of it as 11 times 1,000 meters. So 11 times 1,000 is going to be 11,000 meters. Now let's convert these distances into centimeters. And here we just have to remember that 1 meter is equal to 100 centimeters. Let me write that down. 1 meter is equal to 100 centimeters. And that's because the prefix centi means one hundredth. Another way you can write it is that one centimeter is equal to one hundredth of a meter. But here we have a certain number of meters, and each of those meters are going to be equivalent to 100 centimeters. So if we wanted to write 2,000 meters as centimeters, we could say, well, we have 2,000 meters. Each of those are going to be equivalent to 100 centimeters. And so this is going to be equal to 2,000 times 100. Well, that's going to be 2, and since where we have the three zeroes from the 2,000. And then every time you multiply by 10, you're going to add another zero. Or you're going to have another zero at the end of it. And we're going to be at multiplying by 100 is equivalent to multiplying by 10 twice. So we're going to have two more zeroes. So this is going to be 200,000 centimeters. Now let's do the same thing right over here with the kilometers, the 11 kilometers, which are 11,000 meters. And once again, I encourage you to pause the video and try to convert it into centimeters. Well, same idea. You have 11,000 meters, and each of them are equivalent to 100 centimeters. So this is going to be 11, and let's see, we have one, two, three, four, five zeroes. So this gets us to 1,100,000 centimeters.
Clojure Destructuring
what & why
Destructuring makes binding values convenient. Without it, extracting values requires lots of seq operations such as next and first, which is inconvenient, especially for nested collections, where getting at the values is even more troublesome.
With destructuring, pulling values out of a collection becomes very simple and convenient. Besides let, it can also be used for function parameter binding and other binding positions.
Clojure's destructuring feature provides a concise syntax for declaratively picking certain elements out of a collection and binding those elements to local let bindings. And because destructuring is provided by let, it can be used anywhere that indirectly uses let, such as fn, defn, and loop.
how
1 Destructuring lists and vectors
(let [[a b c] [1 2 3]]
(println a ", " b ", " c))
;=> 1, 2, 3
; you can see how the brackets in the let binding correspond position by position
[
[a b c]
[1 2 3]
]
(let [[a [b [c]]] [1 [2 [3]]]]
(println a ", " b ", " c))
;=> 1, 2, 3
[
[a [b [c]]]
[1 [2 [3]]]
]
If a value is not needed but its position still has to be filled, the Clojure idiom is the underscore _, which takes up the position without using the value. Use underscores to ignore one or more values you do not care about.
(let [[a [_ [c]]] [1 [2 [3]]]]
(println a ", " c))
;=> 1, 2, 3
As you can see, as long as the destructuring form matches the shape of the original data structure, all the values can be picked out in one step, which is very convenient.
2 Using & to capture the remaining values: the & symbol gathers all the remaining values into b as a list,
(let [[a & b] [1 2 3]]
(println a ", " b))
;=> 1, (2 3)
3 :as lets you get a reference to the whole collection being destructured by using :as inside the destructuring form
(let [[a & b :as c] [1 2 3]]
(println a ", " b ", " c))
;=> 1, (2 3), [1 2 3]
4 Destructuring maps
(let [{a :a b :b c :c} {:a 1 :b 2 :c 3}]
(println a ", " b ", " c))
;=> 1, 2, 3
Map destructuring matches the keywords against the keys of the map on the right, then binds the corresponding values to the symbols (here a, b, c).
Like sequential destructuring, map destructuring can also handle nested maps:
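For example (an illustrative snippet in the same style):
(let [{{e :e} :d} {:d {:e 10 :f 11}}]
  (println e))
;=> 10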
Note that the keys used in map destructuring are not limited to keywords; any type of value can be used as a key, for example strings:
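For example (a small standalone illustration; a fuller example appears below):
(let [{foo "foo"} {"foo" 88 :a 1}]
  (println foo))
;=> 88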
When a key is not found, the binding defaults to nil; the :or keyword can be used to supply a default value.
:or can distinguish whether a key was never given a value, or whether the value it was given is logical false (nil or false).
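For example (an illustrative snippet):
(let [{a :a, x :x, :or {x 42}} {:a 1}]
  (println a ", " x))
;=> 1, 42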
A convenient shorthand for map bindings
(let [{:keys [a b c]} {:a 1 :b 2 :c 3}]
(println a ", " b ", " c))
;=> 1, 2, 3
(let [{foo "foo" aaa :bbb :or {aaa 50}} m]
(println foo aaa))
The :keys keyword tells Clojure that what follows is a vector; each element in it is treated as a keyword, looked up in the map on the right, and the value found is bound directly to that symbol.
There are similar keywords :strs and :syms.
:keys means the keys are keywords; :strs means the keys are strings; :syms means the keys are symbols.
Map destructuring also supports :as all.
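For example (an illustrative snippet):
(let [{a :a :as all} {:a 1 :b 2}]
  (println a ", " all))
;=> 1, {:a 1, :b 2}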
What about the & keyword with maps? The & keyword cannot be used in map destructuring.
5 Vectors can also be destructured in the map style
(let [{b 1 c 2 } [1 2 3]]
(println b ", " c))
;=> 2, 3
(def m {:a 5 :b 6 :c [7 8 9] :d {:e 10 :f 11} "foo" 88 42 false})
(let [{foo "foo"} m]
(println foo))
(let [{f 42} m]
(println f))
(let [{v 42} m]
(if v 1 0))
(let [{{e :e} :d} m]
(println e))
; combining sequential destructuring with map destructuring
(let [{[x y _] :c} m]
(println x y))
Match each index of the vector with the symbol it should bind to, as key/value pairs in a map, and the value at that index is bound to the symbol.
Clojure provides an even better way: use map destructuring directly on the rest of the collection. If the remaining part has an even number of elements, sequential destructuring treats it as a map. Neat, right?
(let [[username account-year & {:keys [name city]}] user-info]
(format "%s is in %s" name city))
;=> "Bob is in Boston"
2018-12-30 08:55
For now, I have a class with fields.
@Entity
public class Fuel {
@Id @GeneratedValue
private Long id;
private boolean diesel;
private boolean gasoline;
private boolean etanhol;
private boolean cng;
private boolean electric;
public Fuel() {
// this form used by Hibernate
}
public List<String> getDeclaredFields() {
List<String> fieldList = new ArrayList<String>();
for(Field field : Fuel.class.getDeclaredFields()){
if(!field.getName().contains("_") && !field.getName().equals("id") && !field.getName().equals("serialVersionUID") ) {
fieldList.add(field.getName());
}
}
Collections.sort(fieldList); // sort once, after the loop
return fieldList;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public boolean isDiesel() {
return diesel;
}
public void setDiesel(boolean diesel) {
this.diesel = diesel;
}
public boolean isGasoline() {
return gasoline;
}
public void setGasoline(boolean gasoline) {
this.gasoline = gasoline;
}
public boolean isEtanhol() {
return etanhol;
}
public void setEtanhol(boolean etanhol) {
this.etanhol = etanhol;
}
public boolean isCng() {
return cng;
}
public void setCng(boolean cng) {
this.cng = cng;
}
public boolean isElectric() {
return electric;
}
public void setElectric(boolean electric) {
this.electric = electric;
}
}
I think it makes sense, but when I asked another question (maybe a stupid example, since a gearbox can only be either automatic or manual), selectOneMenu from Declared Fields list in POJO, a user recommended that I use enums instead, like this:
public enum Fuel {
DIESEL("diesel"),
GASOLINE("gasoline"),
ETANHOL("etanhol"),
CNG("cng"),
ELECTRIC("electric");
private String label;
private Fuel(String label) {
this.label = label;
}
public String getLabel() {
return label;
}
}
However, since hybrids exist on the market (like the Toyota Prius), the parent class would declare the boolean-based version this way:
private Fuel fuel = new Fuel();
and, if using an enum, a list this way:
private List<Fuel> fuelList = new ArrayList<Fuel>();
What is the best practice? Keep in mind that I might have 100 different fuels (just as an example =). Do not forget that it is an entity and hence persisted in a database.
Thanks in advance =)
3 Answers
Accepted answer (5 upvotes):
It sounds to me like you want an EnumSet, yes, definitely over a bunch of bools.
This reminds me a lot of the design patterns for flags and I recently posted an SO question on exactly that: Proper design pattern for passing flags to an object
This supports having 100 different fuel types easily. However it doesn't support a car using 100 different fuel types simultaneously easily. But that to me sounds perfectly fine - it would be very hard to build such a car and this is perfectly reflected in the programmatic complexity of coding this :) (Unless of course it really was just supporting all corn-based fuels - in which you might prefer a polymorphic pattern.)
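A sketch of what that could look like (class and method names made up for illustration):

import java.util.EnumSet;

public class Car {
    private final EnumSet<Fuel> fuels;

    public Car(EnumSet<Fuel> fuels) {
        this.fuels = fuels.clone(); // EnumSet is a compact bit vector under the hood
    }

    public boolean uses(Fuel fuel) {
        return fuels.contains(fuel);
    }
}

// a hybrid:
Car prius = new Car(EnumSet.of(Fuel.GASOLINE, Fuel.ELECTRIC));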
You beat me to it. – JAB Aug 1 '12 at 19:37
Why would EnumSet not support the use of 100 fuels simultaneously? You can simply call EnumSet.allOf(Fuel.class) and that will provide an EnumSet<Fuel> which contains every Fuel type. – Bobulous Aug 1 '12 at 19:56
@user1515834 that will break when new fuel types are added but the car isn't re-engineered. – djechlin Aug 1 '12 at 19:57
Hi, thanks for your answers. However, I think I go for class of booleans for objects which has multiple selectors and enums for objects which has single selectors. Take a look at: stackoverflow.com/questions/11771993/enum-set-jpa-2-0 – kungcc Aug 2 '12 at 18:03
Google "premature optimization" then try posting a "which is faster?" question and see what sorts of replies you'll get. – djechlin Aug 3 '12 at 1:06
You should definitely use enums.
Imagine you want to get the fuel type of an object.
If you would use bools you would end up with something like this:
if (myClass.IsGasoline())
else if (myClass.IsOtherFuel())
else if
...
If you use enums you can simply do something like:
Fuel fuel = myClass.GetFuelType()
(This is just pseudo-code ;))
If the number of hybrids is low (and I guess it will be), it is better to use enums and include hybrids as separate cases. Otherwise you will have to manage the logic in a way that can become cumbersome: when you set a certain fuel to true, you will most likely also have to set the one currently true to false. I am saying this because you have setters for your fuel categories and don't only define them at construction.
EDIT: the way you would ask for the type of fuel in use is also an argument in favor of enums.
Question & Answer
QUESTION
What is the line segment joining any two points on a circle called?
(a) arc of the chord
(b) radius of the circle
(c) chord of the circle
(d) tangent of the circle
ANSWER (Verified)
Hint: Recall the definitions of the arc of the chord, radius of the circle, chord of the circle, and tangent of the circle. A line segment joining any two points on a conic section is called a chord.
Complete Step-by-Step solution:
A circle is a round-shaped figure having no corners or edges. It is a closed figure.
The center of a circle is defined as the fixed point inside the circle from which all the points on the circle are at equal distance.
The distance from the center of the circle to any point on the circle is called the radius of the circle.
A circle is completely represented by its center point and the radius of the circle.
The tangent to a circle is a straight line that touches the circle at exactly one point. The tangent of the circle is perpendicular to the radius of the circle at the point of contact.
A chord of the circle is the line segment joining any two points on the circle. The part of the circumference subtended by the chord is called the arc of the chord. A chord divides the circle into a minor arc and a major arc. The longest chord is the diameter of the circle.
Hence, the line segment joining any two points on a circle is called the chord.
Hence, the correct answer is option (c).
Note: You might choose the arc of the chord as the correct answer, but the arc of a chord is a curve and not a straight line. The chord is a straight line.
Enterprise Key Drop Off: Simplifying Secure Access Management
Short answer enterprise key drop off: Enterprise key drop-off refers to the process of returning or transferring encryption keys used for securing sensitive information within an organization. This involves securely delivering the keys to authorized individuals or entities in order to ensure data protection and access management.
The Importance of Enterprise Key Drop Off: Why Secure Key Management Matters
In the world of data security, enterprise key drop off is a crucial component that often goes unnoticed. But make no mistake – secure key management matters more than you might realize. It is the backbone of any organization’s ability to protect sensitive information from hackers or unauthorized access. In this blog post, we will delve into the significance of enterprise key drop off and shed light on why robust and secure key management practices are essential.
To grasp the importance of secure key management, it is necessary to understand what keys are in an enterprise context. Keys are like digital passports that grant access to encrypted data. They have the power to lock or unlock valuable information stored within databases, applications, or systems. Without appropriate protection and control over these keys, cybercriminals can easily gain unauthorized access to sensitive data, wreaking havoc on an organization’s reputation and financial stability.
So why should enterprises care about secure key management? Here are some compelling reasons:
1. Protect Confidential Information: Every company possesses critical information—customer records, trade secrets, financial data—that must be kept confidential at all costs. Secure key management ensures that only authorized individuals with authenticated credentials can decrypt and access this vital information.
2. Regulations and Compliance: Many industries such as healthcare (HIPAA) or finance (PCI DSS) have strict compliance requirements surrounding data protection and privacy laws. Adequate key management plays a pivotal role in meeting these standards effectively while ensuring organizations avoid hefty fines and legal consequences.
3. Mitigating Insider Threats: An insider threat can be just as dangerous as external attackers when it comes to compromising sensitive information within an organization. Implementing strong security measures like enterprise key drop off significantly reduces the risk of internal actors misusing their authorized privileges or unintentionally causing a breach.
4. Prevention against Data Breaches: Data breaches dominate news headlines regularly; however, many organizations fail to acknowledge the central role played by insecure key management practices in facilitating such incidents. By adopting secure key management protocols, companies can fortify their security posture and significantly reduce the likelihood of costly data breaches.
5. Data Recovery and Business Continuity: Secure key management is not solely about preventing unauthorized access; it also ensures smooth data recovery in case of system failures or disasters. By storing keys securely and having robust backup mechanisms in place, organizations can minimize downtime and swiftly restore critical services.
Now that we understand why enterprise key drop off matters, how can organizations ensure they have a sound key management strategy? Here are some helpful tips:
1. Use Trusted Encryption Algorithms: Select encryption algorithms with proven security standards, such as AES (Advanced Encryption Standard) for secure key generation and storage.
2. Implement Role-Based Access Controls: Establish strict role-based access controls to limit key accessibility to authorized personnel only. This prevents unauthorized users from obtaining keys through social engineering or other means.
3. Regular Key Rotation: Set up regular key rotation schedules to enhance security levels by rendering old keys useless after a certain timeframe. This reduces the risk of compromised keys being utilized to decrypt sensitive information.
4. Inventory Management: Keep track of all deployed keys through comprehensive inventory management practices. This allows organizations to monitor their key usage efficiently and trace any potential vulnerabilities or anomalies effectively.
5. Employ Hardware Security Modules (HSMs): HSMs provide a hardware-based root of trust for generating and storing keys securely. These physical devices are resistant to tampering attempts, adding an extra layer of protection against external threats seeking unauthorized access to cryptographic materials.
In conclusion, it is crucial for enterprises to prioritize secure key management within their broader cybersecurity strategies. Neglecting this aspect exposes them to significant risks such as data breaches, non-compliance penalties, reputational damage, and financial losses. By implementing robust enterprise key drop off practices while following industry best standards, organizations can safeguard confidential information effectively while ensuring business continuity in an increasingly hostile digital landscape.
How to Simplify Enterprise Key Drop Off: Step-by-Step Guide
Have you ever wondered how to simplify the process of enterprise key drop off? It might sound like a simple task, but in reality, it can be quite complicated and time-consuming if not done efficiently. In this step-by-step guide, we will walk you through the process of simplifying enterprise key drop off, ensuring that it becomes a hassle-free experience for both employees and administrators.
Step 1: Assess Current Process
The first step towards simplifying enterprise key drop off is to evaluate your current process. Take a close look at how keys are currently managed – Are there multiple designated drop-off points? Is there a logbook or system in place to keep track of who drops off which keys? Identify any bottlenecks or areas for improvement so that you can streamline the process effectively.
Step 2: Define Drop-off Locations
Once you have assessed the current process, the next step is to define specific drop-off locations. Determine centralized areas where employees can conveniently return their keys. Ideally, these locations should be easily accessible yet secure to prevent any unauthorized access. Consider using lockboxes or secure cabinets with individual compartments for each employee’s keys.
Step 3: Implement Key Tracking System
To simplify enterprise key drop off, implementing a key tracking system is essential. This can be achieved by introducing an electronic logbook or using specialized software solutions specifically designed for this purpose. With such systems in place, administrators can easily monitor and manage all key drop-offs without relying on manual processes that are prone to errors.
Step 4: Communicate Guidelines & Reminders
One crucial aspect often overlooked is clearly communicating guidelines and reminders about enterprise key drop offs to all employees. Create a comprehensive guideline document outlining the correct procedure alongside any specific requirements such as labeling or packaging instructions. Send regular reminders through email or post notices around your office premises to ensure everyone stays informed and follows the protocol consistently.
Step 5: Regular Auditing & Maintenance
To maintain simplicity in the long run, it’s important to conduct regular auditing and maintenance of the enterprise key drop-off process. This includes reviewing the key tracking system, assessing drop-off locations’ efficiency, and seeking feedback from employees. By regularly evaluating this process, you can identify any potential issues or areas for improvement before they become major hurdles.
In conclusion, simplifying enterprise key drop off is an achievable goal with a well-defined step-by-step approach. Assessing the current process, defining drop-off locations, implementing a key tracking system, communicating guidelines effectively, and conducting regular audits are crucial steps towards achieving a hassle-free experience for both employees and administrators. By following these recommendations, you will streamline the entire process while ensuring security and compliance within your organization’s key management practices.
Frequently Asked Questions about Enterprise Key Drop Off Answered
Title: Frequently Asked Questions About Enterprise Key Drop Off Answered with Expert Insights
Introduction:
At Enterprise, we strive to provide our customers with convenient and efficient solutions. One such solution is our key drop off service, designed to simplify the rental return process. We understand that you may have questions regarding this service, which is why we have compiled a comprehensive list of frequently asked questions along with in-depth answers from our industry experts.
1) How does the key drop off service work?
Our key drop off service allows you to conveniently return your rented vehicle outside regular business hours. To make use of this option, simply locate the designated key drop box or slot at your nearest Enterprise branch. Seal your keys in an envelope provided by us, ensuring it is properly labeled with your name and reservation details. Then securely deposit the envelope into the slot. Rest assured that our branch personnel will collect the keys promptly when they resume their duties.
2) Is there any additional cost associated with utilizing the key drop off service?
No, there is no extra charge for using our secure key drop off service after business hours. We understand that flexibility is crucial for many travelers, so providing this convenient option helps ensure a seamless experience without any hidden fees.
3) Can I request electronic notification once my vehicle has been checked in?
Unfortunately, at this time we do not offer electronic notifications specifically for key drop offs. However, rest assured that once our branch staff verifies and processes your returned vehicle, they will update your file accordingly. If you require immediate confirmation or documentation regarding your return, we recommend contacting our customer support team directly.
4) What security measures are in place to safeguard my keys during after-hours drops?
We prioritize the security of all dropped-off keys at Enterprise locations. Our branches deploy various security measures such as surveillance cameras and secure lockboxes for storing these keys until they can be retrieved by authorized personnel. Additionally, all employees who handle these drops are extensively trained in ensuring the safe and proper handling of keys.
5) What should I do if I accidentally drop off my keys without properly sealing them?
In the event that you forget to seal your keys in the provided envelope, please contact our customer support immediately. Our agents will guide you through the necessary steps to ensure a smooth resolution, preventing any potential inconvenience or delays during the return process.
6) Can I drop off my keys at a different Enterprise branch than where I rented the vehicle?
We highly recommend returning your rental vehicle and key at the same location from which you initially rented it. However, if you encounter unexpected circumstances that require using a different branch for key drop off, we suggest contacting both branches in advance to discuss possibilities. While flexibility is important to us, ensuring availability and coordination between branches may be essential for a seamless experience.
In conclusion:
Enterprise’s key drop off service is designed with convenience and peace of mind in mind. By providing secure after-hours options without any additional cost, we aim to make returning your rental stress-free. Should any questions arise about this service or any other aspect of your rental experience, please don’t hesitate to reach out to our knowledgeable customer support team who are always ready to assist you.
Ensuring Security in Enterprise Key Drop Off: Best Practices and Tips
Title: Ensuring Security in Enterprise Key Drop Off: Best Practices and Tips
Introduction:
The security of sensitive information and physical assets is paramount in enterprise settings. When it comes to key drop off systems, ensuring their integrity becomes crucial for maintaining an organization’s security. In this blog post, we will discuss the best practices and tips for enhancing security in enterprise key drop off processes. With a blend of professionalism, wit, and cleverness, let’s dive into the world of securing keys like secret agents safeguarding classified documents.
1. Implement Secure Lockboxes or Containers:
Just leaving keys lying around on a desk is akin to handing them over to a scheming villain who can easily gain access to restricted areas. To tackle this issue head-on, having secure lockboxes or containers is essential. These specialized devices make unauthorized access virtually impossible by utilizing advanced encryption mechanisms that even James Bond himself would have trouble cracking.
2. Utilize Cutting-Edge Biometric Systems:
The days of simple locks and standard keycards should be left behind in the dustbin of outdated technology. Embrace state-of-the-art biometric systems like fingerprint or retina scanners that would make even Mission Impossible’s Ethan Hunt proud! By relying on unique biological features, these systems provide an impenetrable fortress against any imposters trying their luck with fake keys.
3. Employ Video Surveillance:
In the world of secure key exchanges, visual evidence always works to your advantage. Install video surveillance cameras near the drop-off point to capture any potential suspicious activity or unauthorized access attempts. The footage not only aids investigations but also acts as intimidating eye candy for mischievous individuals considering dipping their toes into prohibited territories.
4. Set Up Strict Access Controls:
To maintain order among employees and prevent rogue agents from causing chaos within your organization, strict access controls are vital components of successful key drop-off operations. Develop a comprehensive policy that dictates who can request or collect keys and ensure proper identification is mandatory for anyone involved. Remember, weeding out the good guys from potential double-crossers requires diligence.
5. Establish Dual Verification Processes:
Even superheroes have sidekicks for a reason – two layers of security are better than one! Implement dual verification procedures to fortify your key drop off system. For instance, require an access code alongside an employee ID card to retrieve keys, providing an extra layer of protection against unauthorized individuals with misplaced intentions.
6. Regularly Schedule Audits and Inspections:
Secret agents engage in rigorous training and regular equipment checks to ensure they are always at the top of their game. Similarly, conducting periodic audits and inspections on your enterprise key drop off process is imperative to identify vulnerabilities and address them swiftly. Stay one step ahead by adopting the mindset of continuous improvement and adapting your security measures accordingly.
Conclusion:
In the ever-changing landscape of enterprise security, it is crucial to stay vigilant and adapt emerging technologies to protect sensitive assets like keys compellingly. By implementing secure lockboxes, utilizing cutting-edge biometric systems, employing video surveillance, setting up strict access controls, establishing dual verification processes, and regularly scheduling audits and inspections, enterprises can create a formidable defense against any attempts at compromising their key drop off processes. Remember – securing keys may not involve fighting villains or dodging lasers like in action movies, but following these best practices will certainly make you feel like a superhero in maintaining enterprise security!
Exploring the Benefits of Automating Enterprise Key Drop Off Processes
As technology continues to advance at a rapid pace, businesses are continually looking for ways to streamline their processes and improve efficiency. One area that often goes overlooked is the enterprise key drop off process, which can be time-consuming and prone to errors if not handled properly. However, by automating this process, businesses can unlock a host of benefits that enhance security, save time, and boost overall productivity.
One of the primary advantages of automating the enterprise key drop off process is improved security. Traditional methods of key exchange often involve physical handoffs or leaving keys in unsecured areas such as lockboxes or reception desks. This leaves room for unauthorized access and potential breaches in security. By implementing an automated system, businesses can ensure that only authorized personnel have access to keys by using secure authentication measures such as biometrics or access codes.
Time savings is another significant benefit that automation brings to the table. In today’s fast-paced business environment, every minute counts. With manual key drop-off processes, valuable time is wasted on coordinating schedules between employees or waiting for someone responsible for accepting the keys to become available. Automation eliminates these time-consuming steps by providing 24/7 access to key drop-offs through self-service key cabinets or lockers. Authorized individuals can conveniently deposit their keys at any time without needing direct interaction with others involved in the process.
Moreover, an automated system provides a detailed audit trail and eliminates human error during the key handoff process. Instead of relying on paperwork or manual record-keeping, digital records automatically track who dropped off or picked up keys at specific times and dates. This transparency not only helps reduce disputes or miscommunications but also assists in regulatory compliance efforts.
Beyond security and efficiency gains, automation brings efficiency benefits into play too. The speed offered by an automated system means employees spend less time waiting around for key exchanges and more time focusing on essential tasks—boosting overall productivity across the organization.
However, it’s important to note that automating the enterprise key drop off process is a smart investment. The initial costs of implementing an automated system might seem high at first glance, but the long-term benefits far outweigh the expense. By reducing theft or damage risks associated with manual processes, businesses can save money in the long run on potential security incidents or lost productivity due to misplaced keys.
In conclusion, exploring and embracing the benefits of automating enterprise key drop off processes presents an opportunity for businesses to enhance security, improve time management, reduce errors and ultimately boost productivity. Investing in automation technology ensures a seamless and efficient key exchange mechanism while also bringing peace of mind to both business owners and employees. Don’t let outdated manual systems hold your organization back – take advantage of the latest innovations and unlock substantial operational gains through automated key drop off processes today.
Planning for a Successful Enterprise Key Drop Off: Essential Considerations and Strategies
Planning for a Successful Enterprise Key Drop Off: Essential Considerations and Strategies
Handing over the keys to your enterprise to someone else can be a nerve-wracking experience. Whether it’s transitioning responsibilities to a new team member or ensuring secure access for external service providers, there are several essential considerations and strategies to keep in mind when planning for a successful enterprise key drop off. In this blog post, we will delve into the intricacies of this process, offering professional insights mixed with a touch of wit and cleverness.
1. Assessing Access Needs:
Before diving into the logistics of key drop off, it is crucial to assess your enterprise’s access requirements thoroughly. Identify who needs access and why they need it. By understanding the roles, responsibilities, and specific permissions required by each individual or group of individuals, you can streamline the key distribution process.
2. Establishing Security Protocols:
Security should be at the forefront of your planning efforts. Develop robust protocols that ensure authorized individuals receive keys while unauthorized personnel are kept at bay. Consider implementing multi-factor authentication measures or assigning unique access codes/passwords that limit entry only to those who are entitled.
3. Identifying Reliable Channels:
Choosing dependable channels for key drop off is paramount. Opt for options that offer traceability and accountability, such as registered mail or signed courier services. Leverage technology solutions like secure digital passcodes or biometric systems where feasible.
4. Creating Transparent Documentation:
A crucial component of successful key drop-offs is maintaining transparent documentation throughout the process. Create an inventory list detailing all distributed keys, including necessary details like recipient names, dates received/returned, purpose of access, and any outstanding obligations associated with each key holder.
5. Implementing Effective Communication:
While effective communication might seem obvious, its importance cannot be stressed enough when orchestrating an enterprise key handover successfully. Inform all stakeholders involved well in advance about any impending changes in access arrangements or personnel transitions. This foresight helps minimize surprises and aids in adjusting schedules accordingly.
6. Training and Education:
To ensure a smooth transition, provide adequate training and education to key recipients. Familiarize them with key usage guidelines, security protocols, and any relevant policies or procedures they need to follow. By doing so, you empower them to utilize the keys responsibly while maintaining the highest level of security for your enterprise.
7. Maintaining Regular Audits:
Even after the key drop off is completed, it is essential to conduct regular audits periodically. These audits will help identify any discrepancies or potential breaches in access control that may have occurred unnoticed. It’s better to be proactive instead of reactive when it comes to safeguarding your enterprise.
In conclusion, planning for a successful enterprise key drop off requires meticulous attention to detail coupled with effective strategies and considerations. By assessing access needs accurately, establishing robust security protocols, selecting reliable channels, creating transparent documentation, implementing effective communication methods, providing proper training, and conducting regular audits – you are well on your way towards a seamless and secure handover process. Remember: balancing professionalism with a dash of wit always makes for an engaging read but never compromises on the importance of taking these matters seriously!
Setting Up an iPhone Development Environment, Step by Step
This article walks through the full process of setting up an iPhone development environment, in this case on a PowerPC Mac that the official SDK does not support.
1. Confirm your Mac OS version

Version: Mac OS X 10.5.5. CPU: 1 GHz PowerPC G4. iPhone SDK: iphone_sdk_for_iphone_os_2.2.19m2621afinal.dmg.

Note: each iPhone SDK release is compatible with different Mac OS versions; this SDK supports 10.5.3 and later. Also, Intel and PowerPC CPUs need different configuration, and even different models within the same CPU family differ (mind the PowerPC G3/G4/G5 distinction); details below. Most importantly: the official iPhone SDK only supports Intel Macs with OS X 10.5.

2. Install the iPhone SDK

1. The iPhone SDK comes as a .dmg file. Double-click it, click "Skip" in the dialog that pops up, and you will see three items inside: About iPhone SDK.pdf, iPhone SDK.mpkg, and Packages (see the figure).

2. Double-click iPhone SDK.mpkg and follow the installer to the "Custom Install on 'Macintosh HD'" step. You will see that the iPhone SDK item cannot be selected; don't worry, there is a workaround below. Select everything that is selectable and install those packages to their default paths. Note the install path of the Developer Tools Essentials package (default /Developer); the iPhone SDK packages must later be installed to the same path. This step already installs Xcode 3.1.

3. Install the iPhone SDK packages. Go into the Packages folder inside the .dmg (the one shown in the first figure), and install, one by one, the 5 packages whose names start with iPhone*. Note: install them to the same directory as Developer Tools Essentials, i.e. /Developer. If they were already installed somewhere else, there is no need to reinstall: go to that install directory (default /Platforms) and copy iPhone.platform and iPhoneSimulator.platform into /Developer/Platforms; do not copy the whole /Platforms folder. At this point you still cannot build an iPhone project; even if you create one, compilation fails with:

No architectures to compile for (ONLY_ACTIVE_ARCH = YES, active arch = ppc, i386 VALID_ARCHS =)

So the configuration file needs further changes.

4. Configure the Architectures.xcspec file. Because the official iPhone SDK only supports Intel Macs with OS X 10.5, all the Intel-specific settings have to be changed. Go to /Developer/Platforms/iPhoneSimulator.platform/Developer/Library/Xcode/Specifications/; you will see three files there, and the one to modify is Architectures.xcspec. Note: do not copy-paste the code from a web page (the encodings may not match); type the changes in by hand. The modified file:

(// 32-Bit
{ Type = Architecture;
Identifier = Standard;
Name = "Standard (iPhone Simulator: i386, ppc)";
Description = "32-bit iPhone Simulator architectures";
ListInEnum = YES;
SortNumber = 1;
RealArchitectures = ( i386, ppc );
ArchitectureSetting = "ARCHS_STANDARD_32_BIT";
},
// Old-style Debug
{ Type = Architecture;
Identifier = Native;
Name = "Native Architecture of Build Machine";
Description = "32-bit for build machine";
ListInEnum = YES;
SortNumber = 101;
ArchitectureSetting = "NATIVE_ARCH";
},
// G3
{ Type = Architecture;
Identifier = ppc;
Name = "Minimal (32-bit PowerPC only)";
Description = "32-bit PowerPC ";
PerArchBuildSettingName = "PowerPC";
ByteOrder = big;
ListInEnum = No;
SortNumber = 201;
},
// G4 (this is the model of my machine)
{ Type = Architecture;
Identifier = ppc7400;
Name = "PowerPC G4";
Description = "32-bit PowerPC for G4 processor";
ByteOrder = big;
ListInEnum = NO;
SortNumber = 202;
},
// G5 32-bit
{ Type = Architecture;
Identifier = ppc970;
Name = "PowerPC G5 32-bit";
Description = "32-bit PowerPC for G5 processor";
ByteOrder = big;
ListInEnum = NO;
SortNumber = 203;
},
// Intel
{ Type = Architecture;
Identifier = i386;
Name = "Intel";
Description = "32-bit Intel";
PerArchBuildSettingName = "Intel";
ByteOrder = little;
ListInEnum = NO;
SortNumber = 105;
},
)

Note: do not copy-paste the code above; type it in by hand, or it will fail! With that, the setup is complete.

=====================================================================

A simple Interface Builder project: Hello World.

Double-click /Developer/Applications/Xcode.app. Remember to look for the newly launched Xcode in the Dock at the bottom, then choose File -> New Project and select View-Based Application. Name the project. Expand the Resources group under Groups & Files and double-click MainWindow.xib; the xib main window and the Library window pop up (if they don't, open them from the Tools menu). The Library contains the various controls, which can be dragged onto the View window; click a control to select it and choose Tools -> Inspector to open the property window and edit the control's attributes. Then File -> Save, Build -> Build and Run. Done!
How to use async functions with gapi, the Google Drive client, and the file picker
Use Promises and async functions instead of callbacks with Google libraries
Tamás Sallai
Quite some time has passed since I last worked with Google APIs, but recently I had a project that used them. I needed a solution that involved choosing a folder inside Google Drive and listing the contents inside it. Simple enough, but when I looked at the reference implementations I felt like going back a few years.
Async/await is mainstream now, providing a code structure that is simpler and easier to read, but all official code examples used a convoluted mess of callbacks for the various async tasks such as initializing a library or waiting for a result.
Sure, the example codes work and that is their primary purpose. But it took me some time to clean up everything and come up with a solution I’m happy with. This article walks through a GDrive-based webapp and describes how each part can work with async functions and Promises to provide a modern application structure.
Functionality
What I needed was a simple GDrive-based workflow. First, the user needs to log in, granting access to read the files. Then the app shows a file picker to allow choosing a folder. Finally, it fetches a list of all files inside the selected folder.
This yields 3 distinct steps in regard to functionality, but there are several more async steps along the way, most of them for initializing or loading a component.
The full async workflow looks like this:
[Flowchart, "List a folder in Google Drive": load gapi; log in (init auth2; if not already logged in, show the login button and wait for login); select a folder (init picker, show the Picker, pick a folder); list contents (get a page of results, repeat while there are more, then finish).]
Load the gapi
The first step is to load the library script called gapi. This is the main entry point for all Google-related API calls and it is in a js file loaded from https://apis.google.com/js/api.js.
Loading scripts is usually a trivial thing: just add the <script> tag and the file is available to the scripts following it. But third-party scripts should be loaded asynchronously so that they don't block everything else, which is done by adding the async property.
But how to make sure the gapi is loaded when it is needed?
Most examples use the onload to call an init function and put the code inside that function, such as:
<script>
function init() {
// gapi is loaded here
}
</script>
<script async defer src="https://apis.google.com/js/api.js"
onload="init()"
>
This is a “push” model, as the script loading calls the function, which has some drawbacks. What if there are multiple scripts to load before running the code? In that case, it requires some additional code to make sure everything is loaded when the init script is run. Or what if some code can be run before the gapi is needed? In that case, the push model slows down execution. And what about error handling and graceful degradation?
Fortunately, it's not hard to make a Promise that resolves when the gapi is loaded. It follows the deferred pattern, which was widely used before async/await went mainstream; it works by extracting the resolve/reject callbacks of the Promise constructor so that they can be called from the outside.
This is how to wrap a Promise around the gapi loading:
<script>
const gapiPromise = new Promise((res, rej) => {
window.gapiLoaded = res;
window.gapiLoadError = rej;
});
</script>
<script async defer src="https://apis.google.com/js/api.js"
onload="window.gapiLoaded()"
onerror="window.gapiLoadError(event)"
>
With this structure, a simple await gapiPromise ensures that the gapi is loaded, properly providing errors where they can be handled, and does not need to restructure the app around an init function.
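For instance (a minimal sketch; the main function name is arbitrary), the rest of the application can await the promise before touching the gapi, with a single place to handle load failures:

const main = async () => {
  try {
    await gapiPromise;
    // the gapi is guaranteed to be loaded past this point
  } catch (e) {
    // the script failed to load; degrade gracefully
    console.error("Failed to load the gapi script", e);
  }
};
main();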
Login
The login flow can go in one of two ways. Since the state is managed by Google, the user might be already logged in. In that case, the app only needs to check this.
But when the user is not logged in, the app needs to show a button that initiates the auth flow and wait until the sign-in happens.
Let's see how to write a Promise that handles all this; when it resolves, the user is logged in!
Init the auth2 client
First, anything related to authentication needs the auth2 client loaded and initialized.
The first part uses gapi.load, which is strictly callback-based. The examples usually pass a single function that is called when the client is loaded, but gapi.load also accepts an object with callback and onerror handlers. Using that allows proper error propagation.
The second part is the gapi.client.init that provides a then-able result, compatible with await:
await new Promise((res, rej) => {
gapi.load("client:auth2", {callback: res, onerror: rej});
});
await gapi.client.init({
apiKey: API_KEY,
clientId: CLIENT_ID,
discoveryDocs: DISCOVERY_DOCS,
scope: SCOPES
});
Wait for login
The next step is to wait for the user to sign in. Since the auth flow has to be initiated by a user action, we need a button:
<button id="authorize_button" style="display: none;">Authorize</button>
Then there are two scenarios. First, if the user is already logged in, just move on. If not, then show the button, and wait for a change in the signed in status.
// do the auth when the button is clicked
authorizeButton.onclick = () => gapi.auth2.getAuthInstance().signIn();
await new Promise((res) => {
const initialSignedIn = gapi.auth2.getAuthInstance().isSignedIn.get();
if (initialSignedIn) {
// the user is already signed in
res();
}else {
// show the button
authorizeButton.style.display = "block";
// watch the event
gapi.auth2.getAuthInstance().isSignedIn.listen((signedIn) => {
if (signedIn) {
res();
}
});
}
});
// logged in, hide the button
authorizeButton.style.display = "none";
When this code finishes the user is signed in.
Show the file picker
Same as the authorization, the picker needs to be loaded first. It uses the same gapi.load call with the callback object:
await new Promise((res, rej) => {
gapi.load("picker", {callback: res, onerror: rej});
});
The next step is to construct and show the picker. Most of it is configuration to allow selecting folders, add the auth token, and show the component:
const folder = await new Promise((res, rej) => {
const view = new google.picker.DocsView()
// allow selecting only folders
.setIncludeFolders(true)
.setMimeTypes("application/vnd.google-apps.folder")
.setSelectFolderEnabled(true);
const picker = new google.picker.PickerBuilder()
.enableFeature(google.picker.Feature.NAV_HIDDEN)
.setOAuthToken(gapi.auth2.getAuthInstance().currentUser.get().getAuthResponse().access_token)
.addView(view)
.setCallback((data) => {
if (data[google.picker.Response.ACTION] == google.picker.Action.PICKED) {
// a folder is selected
res(data[google.picker.Response.DOCUMENTS][0]);
}else if (data[google.picker.Response.ACTION] == google.picker.Action.CANCEL) {
// cancelled
rej();
}
})
.build();
picker.setVisible(true);
});
The interesting part is the setCallback function. This is called when the user selects something or cancels the dialog. When that happens it resolves the Promise with the selected item or rejects it.
When this promise settles, folder holds the selected folder; if the dialog was dismissed without selecting anything, the promise rejects instead.
Get folder contents
The last part of the solution is to issue a list call to get the objects from GDrive.
It supports the q parameter that lets you define the search term. To list only files that are in the selected folder, use:
const res = await gapi.client.drive.files.list({
q: `'${folder.id}' in parents`,
// ...
});
But the list does not necessarily return all the files that match the query. It is a paginated operation, which means it returns only a limited number of results along with a nextPageToken if there are more. Then a separate list call with the token can continue the fetch, until no items remain.
Because of this, multiple calls might be needed to reliably get all the matched items.
Fortunately, it is the same pattern as other paginated operations use, such as the AWS API, so the same solution using an async generator works here also:
const getPaginatedResults = async (fn) => {
const EMPTY = Symbol("empty");
const res = [];
for await (const lf of (async function*() {
let NextMarker = EMPTY;
while (NextMarker || NextMarker === EMPTY) {
const {marker, results} = await fn(NextMarker !== EMPTY ? NextMarker : undefined);
yield* results;
NextMarker = marker;
}
})()) {
res.push(lf);
}
return res;
};
And use it to get all the files:
const files = await getPaginatedResults(async (NextMarker) => {
const res = await gapi.client.drive.files.list({
pageToken: NextMarker,
q: `'${folder.id}' in parents`,
pageSize: 20,
fields: "nextPageToken, files(id, name)",
});
return {
marker: res.result.nextPageToken,
results: res.result.files,
};
});
The fields argument specifies what properties will be included in the response. A file object is quite large with lots of values, but it’s likely your app only needs a few of them. Use this argument to remove what is not needed and that cuts the response to a fraction of its original size. But make sure to include the nextPageToken as that is needed for the pagination.
Conclusion
With a few Promises, the Google libraries can be fit into a modern async/await workflow. And it is well worth the effort, as the code becomes a lot cleaner and easier to understand and maintain.
28 July 2020
How to set up FreeBSD as a VPS host?
FKEinternet
Member
I'm not sure which Forum I should be asking this.
When I do a Web search for "FreeBSD set up VPS" I get lots of results about how to set up FreeBSD on somebody else's VPS, but that's not what I'm trying to do: I want to set up my FreeBSD system as a VPS host. For example, I want to host example.com, and when example.com's owner logs in as root, they should only get access to their VPS, not to my whole system.
I'm pretty sure this isn't done using jails; my understanding is that users in a jail are the same as users in the base system, and IIRC when I log in to a jail as root, I have access to all of the base system's files, etc.
Do I have to set up virtual machines to be a VPS host? What would be the best software to use? What other requirements am I facing, e.g., does each VPS need its own IP address?
Is there a published resource somewhere I can refer to before asking more questions?
drhowarddrfine
Son of Beastie
Actually, jails are the way to go. While jail users can have root, that root only has access to the jail and its contents.
But I'm out of touch with all that and I'll let someone more knowledgeable jump in with more and better explanations.
gkontos
Daemon
I'm pretty sure this isn't done using jails, my understanding is that users in a jail are the same as users in the base system, and IIRC when I log in to a jail as root, I have access to the whole base system's files, etc.
Actually it is quite the opposite: a jail is an isolated instance of the OS, and any user who logs in to the jail has access only to the jail's files. The only limitation is that jails can only run FreeBSD. So, if you are looking for more complete virtualization, you might want to have a look at bhyve.
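For illustration, a minimal /etc/jail.conf entry might look like this (the hostname, address, and path are all hypothetical):

# /etc/jail.conf, minimal sketch
example {
    host.hostname = "example.com";           # hypothetical jail hostname
    ip4.addr = 192.0.2.10;                   # hypothetical address on the host
    path = "/usr/jail/example";              # hypothetical jail root directory
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}

Starting it with jail -c example then gives the customer a root login that is confined to /usr/jail/example.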
Submit form to different script & window
Discussion in 'Javascript' started by Terence Parker, Aug 17, 2004.
1. I have a form which enables users to type in some text in a <TEXTAREA>,
allowing them to use HTML. I have defined two submit buttons - one to
submit as usual, but one which I would like to popup another window and
submit the typed text to different action/script to give a preview.
So far I have done this (removing the excess formatting HTML for clarity):
----------
<SCRIPT language="JavaScript" type="text/javascript">
function submitForm(t) {
window.open("", t, "width=500,height=450");
return true
}
</SCRIPT>
<form action="user_message.php" method="POST" name="UserMessage">
<textarea cols="55" rows="15" name="content"></textarea>
<INPUT type="submit" value="Modify Message">
</form>
<form method="POST" name="Preview" action="preview_message.php"
onsubmit="return submitForm(this.target)" target="preview">
<input type="hidden" name="content" value="">
<input type="submit" name="submit" value="Preview Message">
</form>
-----------
The normal submit obviously works ; the 'preview' submit successfully
pops up another window but because it is submitting a different form it
doesn't contain the value of my textarea.
I have tried adding:
document.form.Preview.content.value =
document.form.UserMessage.content.value;
- but this did not work. In fact, this resulted in the popup losing its
attributes too (i.e. not Hx500;Wx450).
I've also considered putting the 'preview' button as part of the first
form and using an 'OnClick' instead - but couldn't get this working either.
Anyone have any suggestions on how to achieve what I want to do?
Thanks,
Terence
Terence Parker, Aug 17, 2004
#1
2. On Tue, 17 Aug 2004 13:17:28 +0800, Terence Parker
<> wrote:
[snip]
> Anyone have any suggestions on how to achieve what I want to do?
I would suggest NOT trying what you want to do. You have server-side
capabilities, so use them.
Detect on the server whether the form is intended for submission or
preview. If it's the latter, produce a page similar to the editing page,
but with a section that contains a preview of the message. No needless
JavaScript dependency and no reliance on (possibly disabled) pop-ups.
JavaScript, and everything you can do with it, should be considered an
optional component. Relying on it can break a site.
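A rough sketch of that server-side approach in PHP (the script and button names are hypothetical; adjust to your form):

<?php
// preview_or_save.php, sketch only
if (isset($_POST['preview'])) {
    // Re-render the editing page with the message shown inline.
    // The form deliberately allows HTML, so the content is echoed
    // as-is here; sanitise it if that is not what you want.
    $content = isset($_POST['content']) ? $_POST['content'] : '';
    echo '<div class="preview">' . $content . '</div>';
    // ... re-emit the form with the textarea pre-filled ...
} else {
    // Normal submission: store the message.
}
?>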
By the way, you probably don't want to do this:
<textarea cols="55" rows="15" name="content"></textarea>
<!-- .... -->
<input type="hidden" name="content" value="">
That is likely[1] to produce a collection of 'content' values, not one.
Mike
[1] I say "likely", because a user agent doesn't have to submit form
controls that have no value. However, some obviously will. I also don't
know if you alter the value with JavaScript.
--
Michael Winter
Replace ".invalid" with ".uk" to reply by e-mail
Michael Winter, Aug 17, 2004
#2
I have a Samsung Series 9 that I needed to do a system restore on. After restore everything looked good so I rebooted it and it went into this endless loop. It shows the Samsung logo, then goes into a Windows 8 "Preparing Automatic Repair" screen, then into a BSOD, then reboots. Over and over and over.
I have tried getting into the BIOS to see if I can boot it from a USB stick but it goes into the Windows screens immediately so it never reads the keyboard.
Any idea how I can fix this thing?
I've had something similar happen to me with a tower once; the problem was a dying power supply.
The easiest way to check is to disconnect the power supply, plug in another one and see if it works. Normally that's no problem, but since you're using a laptop it's not so easy to check.
So rule out the software first: take a Linux distro such as Ubuntu, burn it to disc or install it on a flash drive, and try to run it live. If it crashes as well, it's not a software issue.
At that point your options are to disassemble the laptop yourself and manually check each piece of hardware, or to take it to a local shop.
For relevant info (although it relates to towers), check this question out:
Computer loop restarts on Windows loading
Get Request Header Flask With Code Examples
In this session, we'll solve the "get a request header in Flask" puzzle using Python. The code that follows serves to illustrate this point.
from flask import request
request.headers.get('your-header-name')
The same problem can also be solved in a different way, shown in the code example below.
from flask import request
import sys #for printing in log console
print(request.headers, file=sys.stderr)
The sections below answer common follow-up questions with further examples.
How do you get headers from request Flask?
Use flask.request.headers to get the HTTP headers of a request within a Flask view function:

import flask

app = flask.Flask(__name__)

@app.route("/")
def index():
    headers = flask.request.headers
    return "Request headers:\n" + str(headers)

app.run(host="0.0.0.0", port=8080)
What is header in Flask?
Headers is a class within the flask.app module of the Flask web framework, imported from the datastructures module of the Werkzeug project. Headers handles the HTTP headers of requests and responses for Flask web applications.
How do you put a header on a Flask?
How to set response headers using Flask in Python
import flask

app = flask.Flask(__name__)

@app.route("/")
def index():
    response = flask.Response()
    response.headers["Access-Control-Allow-Origin"] = "*"
    return response

app.run(host="0.0.0.0", port=8080)
How do you get responses in Flask?
Flask provides a method called make_response() that we can use to send custom headers, as well as change properties (like status_code, mimetype, etc.) of the response. We can import make_response from flask. make_response() accepts a string as a parameter, then creates and returns a response object.
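A quick sketch of that (the X-Example header name here is made up for illustration):

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def index():
    response = make_response("Hello")
    response.headers["X-Example"] = "value"  # hypothetical custom header
    response.status_code = 200
    return response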
What are headers in HTTP requests?
An HTTP header is a field of an HTTP request or response that passes additional context and metadata about the request or response. For example, a request message can use headers to indicate its preferred media formats, while a response can use headers to indicate the media format of the returned body.
How do you use Jsonify in Flask?
How does jsonify Work in Flask?
• When passing a single argument, jsonify passes it as-is to the dumps() function.
• When passing multiple arguments, jsonify converts them to an array before passing the result to the dumps() function.
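For example (a minimal sketch; the route and data are made up):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/user")
def user():
    # a single dict argument becomes a JSON object response
    return jsonify({"name": "Alice", "id": 1})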
What is get and post in Flask?
A GET message is sent, and the server returns data. POST is used to send HTML form data to the server. The data received by the POST method is not cached by the server.
What is request authorization in Flask?
Use flask.request.authorization to access the parsed Authorization header within a request context; the raw value is also available via request.headers.get('Authorization').
What is request and response in Flask?
Requests and responses are all in the JSON API format, so each request must include an Accept header whose value is application/vnd. api+json and any request that contains content must include a Content-Type header whose value is application/vnd. api+json. If they do not, the client will receive an error response.
How do I change the authorization header on my Flask?
1 Answer
• In the login() method, you make a requests.get() call.
• Authorization is a request header; you don't use it in responses to the client.
• Once your client has the token, it should be sent in the Authorization header as a Bearer token, as you said.
Ruby Basics : Code Execution
In this article, we will answer the question: when does Ruby execute the code it encounters?
At the Top Level
Open the editor and print 'hi' to the standard output.
puts 'hi'
This prints:
hi
Ruby encountered the puts() method and it executed the instruction. You see the output in the terminal.
Inside a Class
Define a class and print something to the standard output within the class.
class Rabbit
puts "I am within Rabbit class"
end
Running this program prints:
I am within Rabbit class
This dynamic nature of Ruby surprises developers who are familiar with other languages.
Inside a Module
Define a module and print something to the standard output within the module.
module Rabbit
puts "I am within Rabbit module"
end
Running this program prints:
I am within Rabbit module
Inside a Method in a Class
Let's now add a method to the Rabbit class:
class Rabbit
def speak
puts "My name is Bugs Bunny"
end
end
Running this program does not print anything to the standard output.
Invoking the Instance Method
Why? Because we need an instance of Rabbit to send the speak() message to it.
class Rabbit
def speak
puts "My name is Bugs Bunny"
end
end
bugs = Rabbit.new
bugs.speak
Running this program prints:
My name is Bugs Bunny
Inside a Method in a Module
What happens when we define a method in a module?
module Rabbit
def speak
puts "My name is Bugs Bunny"
end
end
Running this program prints nothing to the standard output.
Mixin the Module
We can mixin the Rabbit module to the top level and invoke the speak() method.
module Rabbit
def speak
puts "My name is Bugs Bunny"
end
end
include Rabbit
speak
Running this program prints:
My name is Bugs Bunny
Summary
In this article, we learned that Ruby executes code as it encounters code:
1. At the top level.
2. Inside a class.
3. Inside a module.
But it does not execute the code inside an instance method defined in a class, or inside a method defined in a module, as it encounters it. We need an object to execute an instance method defined in a class, or to mix in the module and then call its method, as the sketch below shows.
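Here is a small sketch that ties both cases together (the Greeter module name is made up):

module Greeter
  def greet
    puts "Hello from #{self.class}"
  end
end

class Rabbit
  include Greeter   # this line runs when Ruby reads the class body
end

Rabbit.new.greet    # the greet method body runs only now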
1. 50
1.
2. 32
They’re more popular than ever, as AWS Lambda.
1. 27
If I was good at writing satire, I would write a satire article that is a CGI scripting tutorial, but never use the word CGI and instead call it a “new Open Source function as a service library”.
Similar twist: “Hi folks, I wrote a new library that converts JVM byte code to WebAssembly. For the first time ever, we can write Java that runs in the browser! Never before seen!”
1. 11
They are both stack machines, so it should be simple enough I guess. Don’t give me ideas.
1. 5
Do it.
2. 3
I started trying to write this article a while back, not as satire, but as a direct comparison to the evolution of serverless. But then I realized it’s probably been done better than I could do and aborted.
1. 7
I have literally never not published something just because I think it might have been done. If I see something that is almost entirely what I wrote, sure, I’ll axe it. (Even retroactively, in one case, where I read someone else with a better take and thought, “oh, never mind then.”) But if I haven’t specifically read an article of what I want to write, then:
1. I might have a unique take after all.
2. Even if my take isn’t unique, it might be different enough to be helpful to someone else.
3. Even if it’s neither unique nor different enough, if I’m not aware of it (and after deliberately cursory search can’t find it, if applicable), it will likely reach a different audience.
In the draft post you’ve got, I think you are heading in a good direction, and it might be worth continuing. I’d suggest dropping the FastCGI/SCGI/WSGI/Rack section in favor of diving a lot more into early attempts to speed up CGI requests and how those relate to lambdas (you touch on mod_perl, but I’d also at least touch on PHP in particular, and quite possibly AOLServer, as close peers), highlighting similar issues with startup time and how lambdas are trying to solve them in their own ways/differently.
The other way to approach this kind of thing, incidentally, which I like for my equivalent writings on these axes, is to walk through how trying to solve the problems with the old old-and-busted resulted in the new old-and-busted. You can write that kind of article sarcastically, but you definitely don’t have to; my article comparing Win32 to Flux has a bit of humor in it, but I deliberately avoided anything past that. If you went that route, the FastCGI/SCGI section fits better, but also pairs very nicely with talking about things like the Danga Interactive products (Gearman, memcache, Perlbal, etc.), which turn out to be necessarily reinvented whenever a PaaS-like environment is used.
Anyway, all this to say: I’d love to see someone actually write a post along these lines. If you really don’t want to finish yours, you’ve given me half a mind to take my own stab.
2. 3
I have that as a laptop sticker. I don’t know if commercial advertising on lobste.rs is appropriate (even for an enterprise which makes me on the order of $2/month) so I won’t link it directly, but you could probably find it quite easily by searching redbubble.com for “serverless cgi-bin”
There is the reasonable objection that using a FaaS platform you have the expectation that it will autoscale to performance levels far in excess of anything that cgi-bin of yore managed, but really, that’s a implementation detail not an attribute of the API
1. 2
Kelsey Hightower made a similar comparison at Gophercon: https://youtu.be/U7glyWYj4qg
1. 2
“This is xinetd… the new hotness”
Love it.
3. 24
I am confused by the use of the past tense here.
1. 8
One of the best things in CGI in that era was CGI.pm which lived in Perl’s standard distribution from 1997 to 2015. The docs alone are still pretty amazing. Combined with taint mode it could sometimes lead to rather safe programming practices. And yes it is funny how much aws lambda, etc feel like a rehash, but ideally in a good way.
docs: https://metacpan.org/pod/release/LDS/CGI.pm-3.05/CGI.pm
perl taint: https://docstore.mik.ua/orelly/linux/cgi/ch08_04.htm
For truly learning about how simple CGI is I think writing an unsafe cgi handler in c is a good way as per this example: http://jkorpela.fi/forms/cgic.html
1. 3
Combined with taint mode it could sometimes lead to rather safe programming practices.
Until programming turned into copy and paste from Stack Exchange and everyone “fixed” tainted input with =~ /.*/
1. 2
Before that, programming was copy and past from Matt’s Script Archives, so it’s not like things have gotten significantly worse.
2. 6
While the mechanisms for running them now allow the code to stay resident in memory between requests, Python’s WSGI – which is the standard gateway interface for Python web apps – is still very CGI-ish.
The signature of a WSGI application is:
def application(environ, start_response)
Where:
• environ is a dict (hash table) of the request environment, using many of the same standard keys as CGI.
• start_response is a function that’s called to send the response status and headers.
• The return value is an iterable which will yield the response body.
Python web frameworks tend to hide this behind their own request/response abstractions and nicer life-cycle management, but under the hood it’s still the CGI programming model.
1. 8
I’m not sure if it’s appropriate to talk about CGI like it’s dead. OpenBSD ships slowcgi(8) and their man page viewer in mandoc is a genuine CGI application. The BCHS people endorse pure CGI with C, too.
(Whether these are good ideas is another story, but they exist right now.)
1. 5
I use them for a couple things (the order form on https://atreus.technomancy.us being one of them), and they’re fantastically useful for sites like that which can be 99% static. I wish more people realized that not everything has to be full of moving parts everywhere.
1. 3
Whenever I have some sort of local web thing, I still use CGI. It’s easy and, like you say, works well when the site is mostly static. Adding all that other stuff seems like a complete waste.
2. 4
Author does write
CGI scripting was undoubtedly useful and continues to be useful for small scale web applications, such as developer utilities, simple form data collection and local intranet tools.
(my emphasis)
1. 1
Oh, I see. My bad. I skimmed the article too quickly.
2. 2
In the first draft of this article, I used the term “near obsolescence” rather than just the term “obsolescence”, because you’re right, there are still people out there using CGI scripts (I am one of them).
Ultimately, I removed the weakening word. Determining when a technology is obsolete is a tough call, and often opinion based. In this case, it’s my opinion. If the definition of obsolete is “no longer produced or used; out of date.”, then there are very few technologies that can truly and in all cases be described as obsolete. In the case of CGI scripts as they were used in the late 90’s, I think it’s safe to say that that train left the station a long time ago.
1. 4
First I thought no one needs this article, but then I realized I’m now old. ;)
By the way, why your website keeps making requests to https://rickcarlino.com/owa/blank.php ?
1. 1
I realized I’m now old
Same.
making requests to https://rickcarlino.com/owa/blank.php
Open Web Analytics
1. 2
It sounds like a rather intrusive approach, and for people on mobile, it’s not free of charge either.
3. 4
I’m still using cgit. CGI is alive and well ;-)
1. 3
Is it possible to optimize operating systems, so creating new processes would be cheap and connections to database fast? Context switching is not an issue, as most modern web apps use threads instead of processes and perform well. I don’t understand why database connections must be pooled and reused instead of closed and opened again, as there’s no intercontinental latency between webapp and database usually. And to add ability to start quick for languages? (some caching? The main issue seems to be is re-initialization of all dependencies).
Long-living processes of todays web applications are not absolute necessity. Trendy “serverless” environments (which are proprietary and non-standard) show that it’s possible, but maybe it’s possible with regular unix processes too?
1. 5
Yes, it is possible, and was done years ago. CGI isn’t as slow as people assume anymore (even back in the day, it wasn’t CGI that was slow per se, it was frequently the interpreter people used with it), and the isolated processed can give good reliability.
That said though, I rarely use the original version anymore, since scgi can do most the same things and does legit intend to be a lil faster…
2. 3
This article is a nice trip down memory lane to what first got me into web programming – Perl scripts in my cgi-bin directory of my ISP, to customize web forum software, and to get early blog software, like CuteNews and MovableType, up-and-running.
I’m surprised that the article doesn’t really go into detail as to how the main thing that “killed” CGI were new application-webserver interfaces that were specifically developed by the popular web frameworks to supplant CGI. That is, in languages beyond Perl, you had Struts / Spring MVC (Java), Sinatra / Ruby-on-Rails (Ruby) and web.py / Django (Python). To support these frameworks, a higher-level abstraction than CGI was developed and employed, namely the Servlet API in Java (earliest to market, I think), Rack in Ruby, and WSGI in Python. Python’s interface for this, WSGI, was written in a “RFC format” similar to the Servlet API, and is well described on Wikipedia – there, you’ll see it was directly influenced by Java’s Servlet API.
The basic change was from process isolation semantics (one-process-per-HTTP-request, which was common in simple/naive CGI implementations) to live webapp semantics (with a single process, or a pool of processes with minimal shared in-memory state, doing one-function-call-per-HTTP-request).
Obviously moving away from one-process-per-HTTP-request meant that these web frameworks could scale much more easily, both in terms of requests per second per single process, and in scale-out scenarios across multiple nodes. It also made it easier for these frameworks to handle common web programming tasks like routing, session state, and cookies. I personally remember that routing was the user-perceivable benefit here: it let you easily have web apps with very pretty public URLs.
Adrian Holovaty, one of the creators of Django, covers this history in a bit more depth here: http://www.holovaty.com/writing/well-crafted-websites/
1. 2
Another cause of CGI scripting’s demise was an inability to handle the complexity of large scale applications.
I don’t think this is quite fair; when I first learned Rails in the 0.9 days, CGI was the easiest way to get started with Rails, but the main reason people stopped using it had more to do with Rails than CGI. Basically the problem was that Rails took so long to start up. For runtimes that offer a quick boot, that isn’t a problem at all.
For Rails, FastCGI eventually displaced CGI, but it was much worse in almost every regard except startup time.
1. 1
were??? in the past? They are still around. I still use them a lot. They work great.
1. 1
You can similarly create a fantastically easy secure web service using stunnel and inetd/micro_inetd, or even the newest websocketd.
1. 1
I still use CGI and SSI too.
As another poster said, great for filling in small pieces of an otherwise largely static site.
Also standard across most web servers.
1. 1
I thought it entertaining when I noticed cgi-bin in the url of my local councils web portal.
|
__label__pos
| 0.63508 |
Python struct pack, unpack
The Python struct module performs conversions between Python values and C structs, which are represented as Python bytes objects.
Python Struct
• Python struct module can be used in handling binary data stored in files, database or from network connections etc.
• It uses format Strings as compact descriptions of the layout of the C structs and the intended conversion to/from Python values.
Python Struct Functions
There are five important functions in struct module – pack(), unpack(), calcsize(), pack_into() and unpack_from(). In all these functions, we have to provide the format of the data to be converted into binary.
Some of the popular format characters are:
?: boolean
h: short
l: long
i: int
f: float
q: long long int
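For instance, here is a small sketch combining a few of these codes (the '<' prefix forces little-endian byte order with no padding):

import struct

# '?': bool (1 byte), 'f': float (4 bytes), 'q': long long (8 bytes)
packed = struct.pack('<?fq', True, 2.5, 1 << 40)
print(len(packed))                    # 13 bytes: 1 + 4 + 8, no padding
print(struct.unpack('<?fq', packed))  # (True, 2.5, 1099511627776)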
You can get the complete list of format characters here. Let’s start looking into struct module functions one by one.
Python struct.pack()
This function packs a list of values into a packed bytes representation of the specified format. The arguments must match the values required by the format exactly. Let's quickly look at a struct pack() example:
import struct
var = struct.pack('hhl', 5, 10, 15)
print(var)
var = struct.pack('iii', 10, 20, 30)
print(var)
When we run this script, we get the following representation:
[Screenshot: struct.pack() output]
Note that the 'b' prefix in the output marks a Python bytes literal.
Python struct.unpack()
This function unpacks the packed value into its original representation with the specified format. This function always returns a tuple, even if there is only one element. Let’s quickly look at struct unpack() function example:
import struct
var = struct.pack('hhl', 5, 10, 15)
print(var)
print(struct.unpack('hhl', var))
When we run this script, we get back our original representation:
[Screenshot: struct.unpack() output, showing the original values]
Clearly, we must tell the Python interpreter the format we need to unpack the values into.
Python struct calcsize()
This function calculates and returns the size, in bytes, of the packed representation for a given format.
Let's quickly look at an example code snippet:
import struct
var = struct.pack('hhl', 5, 10, 15)
print(var)
print("Size of String representation is {}.".format(struct.calcsize('hhl')))
When we run this script, we get the following representation:
[Screenshot: struct.calcsize() output]
Python struct pack_into(), unpack_from()
These functions allow us to pack values into a string buffer and unpack them from a string buffer. They were introduced in Python 2.5.
import struct
# ctypes is imported to create a string buffer
import ctypes
# As shown in previous example
size = struct.calcsize('hhl')
print(size)
# Buffer 'buff' is created from ctypes
buff = ctypes.create_string_buffer(size)
# struct.pack_into() packs data into buff in place; it doesn't return a value
struct.pack_into('hhl', buff, 0, 5, 10, 15)
# struct.unpack_from() unpacks data from buff, returning a tuple of values
print(struct.unpack_from('hhl', buff, 0))
When we run this script, we get the following representation:
[Screenshot: pack_into()/unpack_from() output]
That’s all for a short introduction of python struct module.
Comments
1. Pete says:
I agree, the way the struct var prints is ridiculous. How did it get past the checking process?
2. wally says:
is it possible to switch off this feature and get it ‘uneasy’ ?
3. Richard Ng says:
why 10 and 13 return
b'\n\x00\r\x00'
1. Rafael says:
10 and 13 correspond to newline and carriage return in the ASCII table. python is trying to make it easy an represent those special characters as \n and \r.