Wiener deconvolution
Definition
Given a system y(t) = (h ∗ x)(t) + n(t), where ∗ denotes convolution and:

x(t) is the original signal (unknown) at time t,
h(t) is the known impulse response of a linear time-invariant system,
n(t) is unknown additive noise, independent of x(t),
y(t) is the observed signal.

Our goal is to find some g(t) so that we can estimate x(t) as follows: x̂(t) = (g ∗ y)(t), where x̂(t) is an estimate of x(t) that minimizes the mean square error ϵ(t) = E|x(t) − x̂(t)|², with E denoting the expectation.
The Wiener deconvolution filter provides such a g(t). The filter is most easily described in the frequency domain:

G(f) = H*(f) S(f) / (|H(f)|² S(f) + N(f))

where:

G(f) and H(f) are the Fourier transforms of g(t) and h(t),
S(f) = E|X(f)|² is the mean power spectral density of the original signal x(t),
N(f) = E|V(f)|² is the mean power spectral density of the noise n(t),
X(f), Y(f), and V(f) are the Fourier transforms of x(t), y(t), and n(t), respectively,
the superscript * denotes complex conjugation.

The filtering operation may either be carried out in the time domain, as above, or in the frequency domain: X̂(f) = G(f) Y(f), followed by an inverse Fourier transform on X̂(f) to obtain x̂(t). Note that in the case of images, the arguments t and f above become two-dimensional; however, the result is the same.
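As a concrete illustration, the frequency-domain filtering can be sketched in a few lines of NumPy. This is a minimal, self-contained example with a synthetic signal, an assumed Gaussian blur kernel, and white noise; the spectra S(f) and N(f) are taken as exactly known here, which is rarely true in practice (all names and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: original signal x, a known Gaussian blur kernel h,
# and additive white noise n.
n_samples = 1024
t = np.arange(n_samples)
x = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16)
h = np.zeros(n_samples)
h[:32] = np.exp(-0.5 * ((np.arange(32) - 8) / 3.0) ** 2)
h /= h.sum()
noise_std = 0.05

# Observed signal y = h * x + n (circular convolution via the FFT).
H = np.fft.fft(h)
X = np.fft.fft(x)
y = np.real(np.fft.ifft(H * X)) + rng.normal(0, noise_std, n_samples)

# Wiener filter G(f) = H*(f) S(f) / (|H(f)|^2 S(f) + N(f)).
# S and N are assumed known here; in practice they must be estimated.
S = np.abs(X) ** 2                                   # signal power spectrum
N = np.full(n_samples, n_samples * noise_std ** 2)   # E|V(f)|^2 for white noise
G = np.conj(H) * S / (np.abs(H) ** 2 * S + N)

# Estimate x by filtering in the frequency domain: X_hat(f) = G(f) Y(f).
x_hat = np.real(np.fft.ifft(G * np.fft.fft(y)))

# Naive inverse filtering divides by H(f) and explodes where |H(f)| is tiny.
x_inv = np.real(np.fft.ifft(np.fft.fft(y) / H))
print(np.mean((x - x_hat) ** 2) < np.mean((x - x_inv) ** 2))  # True
```

The comparison with naive inverse filtering shows the point of the filter: where |H(f)| is small, dividing by H(f) amplifies the noise enormously, while the Wiener filter attenuates those frequencies instead.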
Interpretation
The operation of the Wiener filter becomes apparent when the filter equation above is rewritten:

G(f) = [1 / H(f)] · [1 / (1 + 1/(|H(f)|² SNR(f)))]

Here, 1/H(f) is the inverse of the original system, SNR(f) = S(f)/N(f) is the signal-to-noise ratio, and |H(f)|² SNR(f) is the ratio of the pure filtered signal to the noise spectral density. When there is zero noise (i.e. infinite signal-to-noise ratio), the term inside the square brackets equals 1, which means that the Wiener filter is simply the inverse of the system, as we might expect. However, as the noise at certain frequencies increases, the signal-to-noise ratio drops, so the term inside the square brackets also drops. This means that the Wiener filter attenuates frequencies according to their filtered signal-to-noise ratio.
The Wiener filter equation above requires us to know the spectral content of a typical image, and also that of the noise. Often, we do not have access to these exact quantities, but we may be in a situation where good estimates can be made. For instance, in the case of photographic images, the signal (the original image) typically has strong low frequencies and weak high frequencies, while in many cases the noise content will be relatively flat with frequency.
Derivation
As mentioned above, we want to produce an estimate of the original signal that minimizes the mean square error, which may be expressed: ϵ(f) = E|X(f) − X̂(f)|². The equivalence to the previous definition of ϵ can be derived using Plancherel's theorem or Parseval's theorem for the Fourier transform.
If we substitute in the expression for X̂(f), the above can be rearranged to

ϵ(f) = E|X(f) − G(f)Y(f)|²
     = E|X(f) − G(f)[H(f)X(f) + V(f)]|²
     = E|[1 − G(f)H(f)]X(f) − G(f)V(f)|²

If we expand the quadratic, we get the following:

ϵ(f) = [1 − G(f)H(f)][1 − G(f)H(f)]* E|X(f)|²
     − [1 − G(f)H(f)] G*(f) E{X(f)V*(f)}
     − G(f)[1 − G(f)H(f)]* E{V(f)X*(f)}
     + G(f)G*(f) E|V(f)|²

However, we are assuming that the noise is independent of the signal, therefore:

E{X(f)V*(f)} = E{V(f)X*(f)} = 0

Substituting the power spectral densities S(f) and N(f), we have:

ϵ(f) = [1 − G(f)H(f)][1 − G(f)H(f)]* S(f) + G(f)G*(f) N(f)

To find the minimum error value, we calculate the Wirtinger derivative with respect to G(f) and set it equal to zero.
dϵ(f)/dG(f) = 0 ⇒ G*(f)N(f) − H(f)[1 − G(f)H(f)]* S(f) = 0

Taking the complex conjugate of this equality and solving for G(f) gives the Wiener filter:

G(f) = H*(f) S(f) / (|H(f)|² S(f) + N(f))
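The minimization can also be checked numerically: at each frequency, ϵ(f) = |1 − G H|² S + |G|² N is a real-valued function of the complex variable G, and the Wiener solution should not be improved by any perturbation. A quick sketch (the test values for H, S, and N are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-frequency quantities at one fixed f (arbitrary illustrative values).
H = 0.8 - 0.3j   # system response H(f)
S = 2.0          # signal PSD S(f)
N = 0.5          # noise PSD N(f)

def eps(G):
    """Mean-square error spectrum: |1 - G H|^2 S + |G|^2 N."""
    return abs(1 - G * H) ** 2 * S + abs(G) ** 2 * N

# Wiener solution G(f) = H*(f) S / (|H|^2 S + N).
G_wiener = np.conj(H) * S / (abs(H) ** 2 * S + N)

# No random complex perturbation should reduce the error.
deltas = rng.normal(0, 0.1, 100) + 1j * rng.normal(0, 0.1, 100)
perturbed = [eps(G_wiener + d) for d in deltas]
print(all(eps(G_wiener) <= p for p in perturbed))  # True
```

Because ϵ is a strictly convex quadratic in the real and imaginary parts of G, the Wiener value is the unique minimizer, which the perturbation test reflects.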
PSR J0901–4046
PSR J0901–4046 is an ultra-long-period pulsar. Its period, 75.9 seconds, is the longest for any known neutron star pulsar (some objects believed to be white dwarf pulsars, such as AR Scorpii, have longer periods). Its period is more than three times longer than that of PSR J0250+5854, the previous long-period record-holder. The pulses are narrow; radio emission is seen from PSR J0901–4046 for only 0.5% of its rotation period.

PSR J0901–4046 was discovered serendipitously on September 27, 2020, by the MeerTRAP team, when a single pulse from it was noticed during MeerKAT observations of Vela X-1 (which is less than 1/4 degree away from PSR J0901–4046 on the sky). After that pulse was detected, further examination of the data revealed that 14 weaker pulses were present in the ~30-minute-long data set, but they had been missed by the real-time detection software. The deepest image of the MeerKAT field showed a diffuse shell-like structure that may be a supernova remnant associated with the birth of the neutron star.

PSR J0901–4046's period, combined with its period derivative of 2.25×10⁻¹³ seconds per second, implies a characteristic age of 5.3 million years. The discovery of PSR J0901–4046 challenges the understanding of how neutron stars evolve.
Pitofenone
Pitofenone is an antispasmodic; it relieves pain and spasms of smooth muscles. Pitofenone is typically used in combination with fenpiverinium bromide and metamizole sodium. Previously produced as Baralgin by Sanofi Aventis, the drug is currently sold in Eastern Europe under various trade names, including Spasmalgon (Actavis, Bulgaria), Revalgin (Shreya, India), Spasgan (Wockhardt, India), Bral (Micro Labs, India), and others.
Systolic array
In parallel computer architectures, a systolic array is a homogeneous network of tightly coupled data processing units (DPUs) called cells or nodes. Each node or DPU independently computes a partial result as a function of the data received from its upstream neighbours, stores the result within itself and passes it downstream. Systolic arrays were first used in Colossus, which was an early computer used to break German Lorenz ciphers during World War II. Due to the classified nature of Colossus, they were independently invented or rediscovered by H. T. Kung and Charles Leiserson who described arrays for many dense linear algebra computations (matrix product, solving systems of linear equations, LU decomposition, etc.) for banded matrices. Early applications include computing greatest common divisors of integers and polynomials. They are sometimes classified as multiple-instruction single-data (MISD) architectures under Flynn's taxonomy, but this classification is questionable because a strong argument can be made to distinguish systolic arrays from any of Flynn's four categories: SISD, SIMD, MISD, MIMD, as discussed later in this article.
The parallel input data flows through a network of hard-wired processor nodes, which combine, process, merge or sort the input data into a derived result. The name systolic was coined from medical terminology: the wave-like propagation of data through a systolic array resembles the pulse of the human circulatory system, systole being the regular pumping of blood by the heart.
Applications
Systolic arrays are often hard-wired for specific operations, such as "multiply and accumulate", to perform massively parallel integration, convolution, correlation, matrix multiplication or data sorting tasks. They are also used for dynamic programming algorithms, which are used in DNA and protein sequence analysis.
Architecture
A systolic array typically consists of a large monolithic network of primitive computing nodes which can be hardwired or software-configured for a specific application. The nodes are usually fixed and identical, while the interconnect is programmable. The more general wavefront processors, by contrast, employ sophisticated and individually programmable nodes which may or may not be monolithic, depending on the array size and design parameters. The other distinction is that systolic arrays rely on synchronous data transfers, while wavefront arrays tend to work asynchronously.
Unlike the more common Von Neumann architecture, where program execution follows a script of instructions stored in common memory, addressed and sequenced under the control of the CPU's program counter (PC), the individual nodes within a systolic array are triggered by the arrival of new data and always process the data in exactly the same way. The actual processing within each node may be hard-wired or block-microcoded, in which case the common node personality can be block-programmable.
The systolic array paradigm, with data streams driven by data counters, is the counterpart of the Von Neumann architecture, with an instruction stream driven by a program counter. Because a systolic array usually sends and receives multiple data streams, and multiple data counters are needed to generate these data streams, it supports data parallelism.
Goals and benefits
A major benefit of systolic arrays is that all operand data and partial results are stored within (passing through) the processor array. There is no need to access external buses, main memory or internal caches during each operation as is the case with Von Neumann or Harvard sequential machines. The sequential limits on parallel performance dictated by Amdahl's Law also do not apply in the same way, because data dependencies are implicitly handled by the programmable node interconnect and there are no sequential steps in managing the highly parallel data flow.
Systolic arrays are therefore extremely good at artificial intelligence, image processing, pattern recognition, computer vision and other tasks that animal brains do particularly well. Wavefront processors in general can also be very good at machine learning by implementing self configuring neural nets in hardware.
Classification controversy
While systolic arrays are officially classified as MISD, their classification is somewhat problematic. Because the input is typically a vector of independent values, the systolic array is definitely not SISD. Since these input values are merged and combined into the result(s) and do not maintain their independence as they would in a SIMD vector processing unit, the array cannot be classified as such. Consequently, the array cannot be classified as a MIMD either, because MIMD can be viewed as a mere collection of smaller SISD and SIMD machines.
Finally, because the data swarm is transformed as it passes through the array from node to node, the multiple nodes are not operating on the same data, which makes the MISD classification a misnomer. The other reason why a systolic array should not qualify as a MISD is the same as the one which disqualifies it from the SISD category: The input data is typically a vector not a single data value, although one could argue that any given input vector is a single item of data.
In spite of all of the above, systolic arrays are often offered as a classic example of MISD architecture in textbooks on parallel computing and in engineering classes. If the array is viewed from the outside as atomic, it should perhaps be classified as SFMuDMeR = single function, multiple data, merged result(s). Systolic arrays use a pre-defined computational flow graph that connects their nodes. Kahn process networks use a similar flow graph, but differ in that the nodes of a systolic array work in lock-step, whereas in a Kahn network there are FIFO queues between nodes.
Detailed description
A systolic array is composed of matrix-like rows of data processing units called cells. Data processing units (DPUs) are similar to central processing units (CPUs), except for the usual lack of a program counter, since operation is transport-triggered, i.e., by the arrival of a data object. Each cell shares information with its neighbors immediately after processing. The systolic array is often rectangular, with data flowing across the array between neighbouring DPUs, often with different data flowing in different directions. The data streams entering and leaving the ports of the array are generated by auto-sequencing memory units (ASMs). Each ASM includes a data counter. In embedded systems a data stream may also be input from and/or output to an external source.
An example of a systolic algorithm might be designed for matrix multiplication. One matrix is fed in a row at a time from the top of the array and is passed down the array; the other matrix is fed in a column at a time from the left-hand side of the array and passes from left to right. Dummy values are then passed in until each processor has seen one whole row and one whole column. At this point, the result of the multiplication is stored in the array and can now be output a row or a column at a time, flowing down or across the array.

Systolic arrays are arrays of DPUs which are connected to a small number of nearest-neighbour DPUs in a mesh-like topology. DPUs perform a sequence of operations on data that flows between them. Because traditional systolic array synthesis methods are based on algebraic transformations, only uniform arrays with linear pipes can be obtained, so that the architectures are the same in all DPUs. The consequence is that only applications with regular data dependencies can be implemented on classical systolic arrays. Like SIMD machines, clocked systolic arrays compute in "lock-step", with each processor alternating between compute and communicate phases. Systolic arrays with asynchronous handshaking between DPUs are instead called wavefront arrays.
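The streaming matrix multiplication above can be simulated cycle by cycle. The sketch below is a minimal, illustrative output-stationary model in NumPy (the function name and the exact skewing scheme are my own choices, not a fixed standard): one operand streams in from the left and the other from the top, with row i and column j each delayed by i and j cycles respectively so that matching operands meet in the right cell.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-accurate sketch of an output-stationary systolic array.

    Cell (i, j) holds an accumulator for C[i, j]; values of A stream in
    from the left (row i delayed by i cycles) and values of B from the
    top (column j delayed by j cycles). The skew keeps A[i, t] and
    B[t, j] arriving at cell (i, j) on the same cycle.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    a_reg = np.zeros((n, m))  # horizontal register held in each cell
    b_reg = np.zeros((n, m))  # vertical register held in each cell

    for cycle in range(k + n + m):  # enough cycles to drain the pipeline
        # Data moves one cell right / one cell down per cycle.
        a_reg[:, 1:] = a_reg[:, :-1].copy()
        b_reg[1:, :] = b_reg[:-1, :].copy()
        # Inject skewed inputs at the array edges (zeros once exhausted).
        for i in range(n):
            t = cycle - i
            a_reg[i, 0] = A[i, t] if 0 <= t < k else 0.0
        for j in range(m):
            t = cycle - j
            b_reg[0, j] = B[t, j] if 0 <= t < k else 0.0
        # Every cell multiplies and accumulates in lock-step.
        C += a_reg * b_reg
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(systolic_matmul(A, B), A @ B))  # True
```

Each cell only ever talks to its immediate neighbours and repeats the same multiply-accumulate every cycle, which is the defining systolic behaviour.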
One well-known systolic array is Carnegie Mellon University's iWarp processor, which was manufactured by Intel. An iWarp system has a linear array processor connected by data buses going in both directions.
History
Systolic arrays were first described by H. T. Kung and Charles E. Leiserson, who published the first paper describing systolic arrays in 1979. However, the first machine known to have used a similar technique was the Colossus Mark II in 1944.
Application example
Polynomial evaluation

Horner's rule for evaluating a polynomial is:

y = (…(((a_n·x + a_(n−1))·x + a_(n−2))·x + a_(n−3))·x + … + a_1)·x + a_0

This maps onto a linear systolic array in which the processors are arranged in pairs: one multiplies its input by x and passes the result to the right; the next adds a_j and passes the result to the right.
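The pipelined behaviour of that linear array can be sketched in Python. This is an illustrative simulation (the function name and token format are my own): each cell holds one coefficient, a partial result flows one cell right per cycle, and several evaluation points stream through so that every cell works on a different evaluation once the pipeline fills.

```python
def horner_pipeline(coeffs, xs):
    """Simulate a linear systolic pipeline evaluating a polynomial by
    Horner's rule at several points.

    coeffs = [a_n, ..., a_1, a_0]. A token (x, partial) enters at the
    left once per cycle; each cell applies partial*x + coeff (one
    multiply cell plus one add cell) as the token moves right.
    """
    n = len(coeffs)
    pipe = [None] * n   # pipe[i]: token just processed by cell i
    results = []
    feed = iter(xs)
    for _ in range(len(xs) + n):     # run until the pipeline drains
        # Advance tokens one cell to the right, applying multiply-add.
        for i in range(n - 1, 0, -1):
            tok = pipe[i - 1]
            pipe[i] = (tok[0], tok[1] * tok[0] + coeffs[i]) if tok else None
        # A new evaluation point enters the first cell each cycle.
        x = next(feed, None)
        pipe[0] = (x, coeffs[0]) if x is not None else None
        # A token leaving the last cell carries a finished value.
        if pipe[n - 1] is not None:
            results.append(pipe[n - 1][1])
    return results

# 3x^2 + 2x + 1 evaluated at x = 2, 0, -1
print(horner_pipeline([3, 2, 1], [2, 0, -1]))  # [17, 1, 2]
```

After a fill latency of n cycles, one finished evaluation emerges per cycle, which is the throughput benefit of the systolic arrangement.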
Advantages and disadvantages
Pros:
- Faster than general-purpose processors
- Scalable

Cons:
- Expensive, due to low economy of scale
- Highly specialized: custom, often application-specific hardware is required
- Not widely implemented
- Limited code base of programs and algorithms (not all algorithms can be implemented as systolic arrays; often tricks are needed to map such algorithms onto a systolic array)
Implementations
- The Cisco PXF network processor is internally organized as a systolic array.
- Google's TPU is designed around a systolic array.
- Paracel FDF4T TestFinder text search system
- Paracel FDF4G GeneMatcher biological (DNA and protein) search system
- Amazon Web Services' Inferentia chip
- MIT's Eyeriss, a systolic array accelerator for convolutional neural networks
Portable toilet
A portable or mobile toilet (colloquial terms: thunderbox, porta-john or porta-potty) is any type of toilet that can be moved around, some by one person, some by mechanical equipment such as a truck and crane. Most types do not require any pre-existing services or infrastructure, such as sewerage, but are completely self-contained. The portable toilet is used in a variety of situations, for example in urban slums of developing countries, at festivals, for camping, on boats, on construction sites, and at film locations and large outdoor gatherings where there are no other facilities. Most portable toilets are unisex single units with privacy ensured by a simple lock on the door. Some portable toilets are small molded plastic or fiberglass portable rooms with a lockable door and a receptacle to catch the human excreta in a container. A portable toilet is not connected to a hole in the ground (like a pit latrine), nor to a septic tank, nor is it plumbed into a municipal system leading to a sewage treatment plant. The chemical toilet is probably the most well-known type of portable toilet, but other types also exist, such as urine-diversion dehydration toilets, composting toilets, container-based toilets, bucket toilets, freezing toilets and incineration toilets. A bucket toilet is a very simple type of portable toilet.
Types
Chemical toilets

A chemical toilet collects human excreta in a holding tank and uses chemicals to minimize odors. These chemicals may either mask the odor or contain biocides that hinder odor-causing bacteria from multiplying, keeping the smell to a minimum. Chemical toilets include those on planes and trains (although many of these are now vacuum toilets), as well as much simpler ones.
Portable camping toilets

A simpler type of portable toilet may be used in travel trailers (caravans, camper vans) and on small boats. They are also referred to as "cassette toilet" or "camping toilet", or under brand names that have become generic trademarks. The Oxford English Dictionary lists "Porta Potti" ("with arbitrary respelling") as "A proprietary name for: a portable chemical toilet, as used by campers", and gives mostly American examples from 1968. The OED gives this proprietary name a second meaning, "a small prefabricated unit containing a toilet, designed for easy transportation and temporary installation esp. outdoors", which Wikipedia covers under chemical toilet.
The other name common in British English is "Elsan", which dates back to 1925. According to the Camping and Caravanning Club, "Today you will often see campsites refer to their Chemical Disposal Points as Elsan Disposal Points because of the history and popularity of the brand." The Canal and River Trust uses both brand names, in lieu of any unbranded term.

One colloquialism for these simple toilets is the "bucket and chuck it" system, although in fact they no longer resemble an open bucket (see bucket toilet). These are designed to be emptied into sanitary stations connected to the regular sewage system. These toilets are not to be confused with the types that are plumbed into the vehicle and need to be pumped out at holding tank dump stations.
Urine-diversion dehydration toilets

Portable urine-diversion dehydration toilets (UDDTs) are self-contained dry toilets sometimes referred to as "mobile" or "stand-alone" units. They are identifiable by their one-piece molded plastic shells or, in the case of DIY versions, simple plywood box construction. Most users of self-contained UDDTs rely upon a post-treatment process to ensure pathogen reduction. This post-treatment may consist of long-term storage, addition to an existing or purpose-built compost pile, or some combination thereof. A post-treatment step is unnecessary in the case of very modest seasonal use.
Others

A commode chair (a chair enclosing a chamber pot) is a basic portable toilet that was used, for example, in 19th-century Europe.
History
The close stool, built as an article of furniture, is one of the earliest forms of portable toilet. They can still be seen in historic house museums such as Sir George-Étienne Cartier National Historic Site in Old Montreal, Canada. The velvet upholstered close stool used by William III is on display at Hampton Court Palace; see Groom of the Stool.
Early versions of the "Elsan chemical closet" ("closet" meaning a small room; see water closet, WC, and earth closet) were sold at Army & Navy Stores. Their use in World War II bomber aircraft is described at some length by the Bomber Command Museum of Canada; in brief, they were not popular with either the flying crew or the ground crew.

African-Americans living under Jim Crow laws (i.e. before the Civil Rights Act of 1964) faced dangerous challenges. Public toilets were segregated by race, and many restaurants and gas stations refused to serve black people, so some travellers carried a portable toilet in the trunk of their car.

Since 1974, Grand Canyon guides have used ammo boxes to defecate, according to the Museum of Northern Arizona in Flagstaff, Arizona.
Society and culture
A slang term, now dated or historic, is a "thunder-box" (Oxford English Dictionary: "a portable commode; by extension, any lavatory"). The term was used particularly in British India; travel writer Stephen McClarence called it "a crude sort of colonial lavatory". One features to comic effect in Evelyn Waugh's novel Men at Arms: "If you must know, it's my thunderbox." ... He...dragged out the treasure, a brass-bound, oak cube... On the inside of the lid was a plaque bearing the embossed title Connolly's Chemical Closet.
Drag reduction system
In motor racing, the drag reduction system (DRS) is a form of driver-adjustable bodywork aimed at reducing aerodynamic drag in order to increase top speed and promote overtaking. It is an adjustable rear wing of the car, which moves in response to driver commands. DRS often comes with conditions, such as the requirement in Formula 1 that the pursuing car must be within one second (when both cars cross the detection point) for DRS to be activated.
DRS was introduced in Formula One in 2011. The use of DRS is an exception to the rule banning moving parts whose primary purpose is to affect the aerodynamics of the car. The system has also been used in Formula Renault 3.5 since 2012, the Deutsche Tourenwagen Masters since 2013, Super Formula since 2014, the GP2 Series (later the FIA Formula 2 Championship) since 2015, and the GP3 Series (later the FIA Formula 3 Championship) since 2017. An adjustable wing was also used by the Nissan DeltaWing at the 2012 24 Hours of Le Mans, although with unrestricted usage.
Formula One
In Formula One, the DRS opens an adjustable flap on the rear wing of the car in order to reduce drag, giving a pursuing car an overtaking advantage over the car in front. The FIA estimates the speed increase to be between 10–12 km/h (6.2–7.5 mph) by the end of the activation zone, while others, such as technical staff at racecar-engineering.com, cite a much lower figure of 4–5 km/h (2.5–3.1 mph). When the DRS is deactivated (closed), the wing produces more downforce, giving better cornering.
The device can only be used during a race after two racing laps have been completed, and when the pursuing car enters a designated "activation" zone defined by the FIA. Drivers must likewise wait two laps after a safety car restart.
In 2011, the FIA increased the number of DRS zones to two on some circuits featuring multiple long straights. In Valencia and in Montreal, two zones were endorsed on consecutive long straights, while in Monza and in Buddh, two zones were created on separate parts of the circuit. Two zones had originally been planned for every race with multiple long straights from Montreal onwards (depending on Montreal/Valencia success), but this was not implemented. However, at the penultimate round of the 2011 season, two zones on consecutive long straights saw a return at Yas Marina.
With DRS remaining legal for the 2012 season, a second zone was added to the opening round's track in Melbourne. A third DRS zone was added during the 2018 and 2019 seasons in Australia, Bahrain, Canada, Austria, Singapore, and Mexico. In the 2022 season, a fourth zone was initially added for the track in Melbourne after the circuit redevelopment, before being removed for safety reasons; in the 2023 season the zone was re-added. Bahrain, Jeddah, Melbourne, Baku and Miami had their DRS zones adjusted based on whether the FIA deemed that DRS had made overtaking at these five circuits too easy or too hard in 2022.
Functional description

The horizontal elements of the rear wing consist of the main plane and the flap. The DRS allows the flap to lift a maximum of 85 millimetres (3.3 in) from the fixed main plane. This reduces opposition (drag) to airflow against the wing and results in less downforce. In the absence of significant lateral forces (i.e. on a straight line), less downforce allows faster acceleration and a higher potential top speed, unless limited by the top gear ratio and engine rev limiter. Sam Michael, sporting director of the McLaren team, believes that DRS in qualifying will be worth about half a second per lap.
The effectiveness of the DRS will vary from track to track and, to a lesser extent, from car to car. The system's effectiveness was reviewed in 2011 to see if overtaking could be made easier, but not to the extent that driver skill is sidelined. The effectiveness of DRS seems likely to be determined by the level of downforce at a given circuit (where the cars are in low drag trim at circuits like Monza, the effects may be smaller), length of the activation zone, and characteristics of the track immediately after the DRS zone.
Rules on use

Use of DRS is restricted by the F1 rules; it is permitted only when both of the following hold:

- The following car is within one second of the car to be overtaken, which may be a car being lapped. The FIA may alter this parameter race by race.
- The following car is in an overtaking zone as defined by the FIA before the race (commonly known as the DRS zone).

Further restrictions apply:

- The system may not be activated on the first two laps after the race start, a restart, or a safety car deployment. For example, at the 2021 Belgian Grand Prix no driver could have activated DRS, because the entire race took place behind a safety car before being terminated due to bad weather.
- The system cannot be used by the defending driver, unless that driver is also within one second of another car in front.
- The system may not be enabled if racing conditions are deemed dangerous by the race director, for example due to rain, as was the case at the 2011 Canadian Grand Prix.

A dashboard light notifies the driver when the system is enabled (the driver can also see the system deploy in the wing mirrors). The system is deactivated when the driver releases the button or uses the brakes.
There are lines on the track to show the area where the one-second proximity is being detected (the detection point) and a line later on the track (the activation point), along with a sign vertically marked "DRS" where the DRS zone itself begins.
Reception
There has been a mixed reaction to the introduction of DRS in Formula One amongst both fans and drivers. Some believe it is the solution to the lack of overtaking in F1 in recent years, while others believe it has made overtaking too easy. Former Formula One and Team Penske IndyCar Series driver Juan Pablo Montoya described it as "like giving Picasso Photoshop". The principal argument of the opponents of DRS is that the driver in front does not have an equal chance of defending their position, because they are not allowed to deploy DRS to defend. The tightening of the rules for a leading driver defending their position has added to this controversy. In 2018, then-Scuderia Ferrari driver Sebastian Vettel stated that he preferred throwing bananas Mario Kart-style over the use of DRS, arguing that DRS is "artificial".
Road car use
The McLaren P1 coupé was the first road car to incorporate an F1-style rear-wing drag reduction system. The Porsche 911 (992) GT3 RS followed suit, introducing a similar system in 2022.
Cohesive bandage
A self-adhering bandage or cohesive bandage (coban) is a type of bandage or wrap that coheres to itself but does not adhere well to other surfaces. "Coban" by 3M is commonly used as a wrap on limbs because it will stick to itself and not loosen. Due to its elastic qualities, coban is often used as a compression bandage.

It is used both on humans and animals. For animal use, it is marketed under a variety of trade names such as "Vetrap" by 3M. It is commonly used on horses and other animals because it will not stick to hair, so it is easily removed.
Convolution
In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions (f and g) that produces a third function ( f∗g ) that expresses how the shape of one is modified by the other. The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). The integral is evaluated for all values of shift, producing the convolution function.
Some features of convolution are similar to cross-correlation: for real-valued functions, of a continuous or discrete variable, convolution ( f∗g ) differs from cross-correlation ( f⋆g ) only in that either f(x) or g(x) is reflected about the y-axis in convolution; thus it is a cross-correlation of g(−x) and f(x), or f(−x) and g(x). For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator.
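For real sequences, the relationship between convolution and cross-correlation is easy to check numerically, e.g. with NumPy's `convolve` and `correlate` (the sequences below are arbitrary illustrative values):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])

# Cross-correlating f with g is the same as convolving f with g reversed.
corr = np.correlate(f, g, mode="full")
conv_rev = np.convolve(f, g[::-1])
print(np.allclose(corr, conv_rev))  # True

# Convolution itself is commutative; cross-correlation is not.
print(np.allclose(np.convolve(f, g), np.convolve(g, f)))  # True
```

This is exactly the reflection described above: flipping one of the two inputs converts one operation into the other.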
Convolution has applications that include probability, statistics, acoustics, spectroscopy, signal processing and image processing, geophysics, engineering, physics, computer vision and differential equations.The convolution can be defined for functions on Euclidean space and other groups (as algebraic structures). For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 18 at DTFT § Properties.) A discrete convolution can be defined for functions on the set of integers.
Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing.

Computing the inverse of the convolution operation is known as deconvolution.
Definition
The convolution of f and g is written f∗g, denoting the operator with the symbol ∗. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform:

(f ∗ g)(t) := ∫_{−∞}^{∞} f(τ) g(t − τ) dτ.

An equivalent definition is (see commutativity):

(f ∗ g)(t) := ∫_{−∞}^{∞} f(t − τ) g(τ) dτ.
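For discrete sequences, the defining sum can be written out directly and checked against a library implementation. The following is a minimal illustrative sketch (the function name is my own):

```python
import numpy as np

def convolve_direct(f, g):
    """Discrete analogue of (f*g)(t) = sum over tau of f(tau) g(t - tau)."""
    n = len(f) + len(g) - 1
    out = np.zeros(n)
    for t in range(n):
        for tau in range(len(f)):
            if 0 <= t - tau < len(g):   # g is zero outside its support
                out[t] += f[tau] * g[t - tau]
    return out

f = np.array([1.0, 2.0, 3.0])
g = np.array([4.0, 5.0])
print(convolve_direct(f, g))  # [ 4. 13. 22. 15.]
print(np.allclose(convolve_direct(f, g), np.convolve(f, g)))  # True
```

Swapping f and g in `convolve_direct` gives the same output, which mirrors the commutativity of the two integral definitions above.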
While the symbol t is used above, it need not represent the time domain. At each t, the convolution formula can be described as the area under the function f(τ) weighted by the function g(−τ) shifted by the amount t. As t changes, the weighting function g(t − τ) emphasizes different parts of the input function f(τ): if t is a positive value, then g(t − τ) is equal to g(−τ) shifted along the τ-axis toward the right (toward +∞) by the amount t, while if t is a negative value, then g(t − τ) is equal to g(−τ) shifted toward the left (toward −∞) by the amount |t|.
For functions f, g supported on only [0, ∞) (i.e., zero for negative arguments), the integration limits can be truncated, resulting in: (f∗g)(t) = ∫0t f(τ)g(t−τ) dτ for f, g : [0, ∞) → R. For the multi-dimensional formulation of convolution, see domain of definition (below).
Notation
A common engineering notational convention is: f(t)∗g(t) := ∫−∞∞ f(τ)g(t−τ) dτ = (f∗g)(t), which has to be interpreted carefully to avoid confusion. For instance, f(t)∗g(t − t0) is equivalent to (f∗g)(t − t0), but f(t − t0)∗g(t − t0) is in fact equivalent to (f∗g)(t − 2t0).
Relations with other transforms Given two functions f(t) and g(t) with bilateral Laplace transforms (two-sided Laplace transform) F(s)=∫−∞∞e−suf(u)du and G(s)=∫−∞∞e−svg(v)dv respectively, the convolution operation (f∗g)(t) can be defined as the inverse Laplace transform of the product of F(s) and G(s) . More precisely, F(s)⋅G(s)=∫−∞∞e−suf(u)du⋅∫−∞∞e−svg(v)dv=∫−∞∞∫−∞∞e−s(u+v)f(u)g(v)dudv Let t=u+v such that F(s)⋅G(s)=∫−∞∞∫−∞∞e−stf(u)g(t−u)dudt=∫−∞∞e−st∫−∞∞f(u)g(t−u)du⏟(f∗g)(t)dt=∫−∞∞e−st(f∗g)(t)dt Note that F(s)⋅G(s) is the bilateral Laplace transform of (f∗g)(t) . A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform).
The convolution operation also describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms.
Historical developments
One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754. Also, an expression of the type: ∫f(u)⋅g(x−u)du is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, which is the last of three volumes of the encyclopedic series Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800. Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral. Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses. The operation: ∫0t φ(s)ψ(t−s) ds, 0 ≤ t < ∞, is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913.
Circular convolution
When a function gT is periodic, with period T, then for functions, f, such that f ∗ gT exists, the convolution is also periodic and identical to: (f∗gT)(t)≡∫t0t0+T[∑k=−∞∞f(τ+kT)]gT(t−τ)dτ, where t0 is an arbitrary choice. The summation is called a periodic summation of the function f. When gT is a periodic summation of another function, g, then f ∗ gT is known as a circular or cyclic convolution of f and g. And if the periodic summation above is replaced by fT, the operation is called a periodic convolution of fT and gT.
Discrete convolution
For complex-valued functions f, g defined on the set Z of integers, the discrete convolution of f and g is given by: (f∗g)[n]=∑m=−∞∞f[m]g[n−m], or equivalently (see commutativity) by: (f∗g)[n]=∑m=−∞∞f[n−m]g[m].
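The discrete definition translates directly into code; a minimal direct implementation for finitely supported sequences (indices starting at 0, zero outside the stored range) might read:

```python
# Direct O(len(f)·len(g)) discrete convolution of finite sequences.
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for n in range(len(out)):
        for m in range(len(f)):
            if 0 <= n - m < len(g):
                out[n] += f[m] * g[n - m]
    return out

print(conv([1, 2, 3], [0, 1, 0.5]))  # [0, 1, 2.5, 4, 1.5]
```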
The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences.
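The Cauchy-product connection can be checked directly: multiplying polynomials convolves their coefficient lists. A short sketch (names illustrative):

```python
# Coefficients of the product of two polynomials = convolution of their
# coefficient sequences (the Cauchy product), lowest degree first.
def cauchy_product(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + 2x)(3 + x + x²) = 3 + 7x + 3x² + 2x³
print(cauchy_product([1, 2], [3, 1, 1]))  # [3, 7, 3, 2]
```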
Thus when g has finite support in the set {−M, −M+1, …, M−1, M} (representing, for instance, a finite impulse response), a finite summation may be used: (f∗g)[n] = ∑m=−MM f[n−m]g[m].
Circular discrete convolution
When a function gN is periodic, with period N, then for functions, f, such that f∗gN exists, the convolution is also periodic and identical to: (f∗gN)[n] ≡ ∑m=0N−1 (∑k=−∞∞ f[m+kN]) gN[n−m]. The summation on k is called a periodic summation of the function f. If gN is a periodic summation of another function, g, then f∗gN is known as a circular convolution of f and g. When the non-zero durations of both f and g are limited to the interval [0, N − 1], f∗gN reduces to this common form: (f∗Ng)[n] = ∑m=0N−1 f[m] gN[n−m].   (Eq.1) The notation (f ∗N g) for cyclic convolution denotes convolution over the cyclic group of integers modulo N. Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm.
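In code, the only change from ordinary discrete convolution is that indices wrap modulo N; a minimal sketch:

```python
# Circular (cyclic) convolution over Z_N: indices are taken modulo N.
def circular_conv(f, g):
    N = len(f)
    assert len(g) == N, "both sequences must have period N"
    return [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]

print(circular_conv([1, 2, 3, 4], [1, 0, 0, 1]))  # [3, 5, 7, 5]
```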
Fast convolution algorithms In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (Knuth 1997, §4.3.3.C; von zur Gathen & Gerhard 2003, §8.2).
Eq.1 requires N arithmetic operations per output value and N2 operations for N outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O(N log N) complexity.
The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform, use fast Fourier transforms in other rings.
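The FFT route described above can be sketched in pure Python with a textbook radix-2 Cooley–Tukey transform (function names are illustrative; a production implementation would use an optimized FFT library):

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 Cooley–Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    sign = 1 if invert else -1
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + w, even[k] - w
    return out

def fast_linear_conv(f, g):
    # Zero-extend to a power of two >= len(f)+len(g)-1 so that the circular
    # convolution computed via the FFT equals the linear convolution.
    size = 1
    while size < len(f) + len(g) - 1:
        size *= 2
    F = fft([complex(x) for x in f] + [0j] * (size - len(f)))
    G = fft([complex(x) for x in g] + [0j] * (size - len(g)))
    inv = fft([a * b for a, b in zip(F, G)], invert=True)
    return [(v / size).real for v in inv][: len(f) + len(g) - 1]

print([round(x) for x in fast_linear_conv([1, 2, 3], [4, 5, 6])])  # [4, 13, 28, 27, 18]
```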
If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available. Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method. A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.
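The overlap–add idea can be sketched as follows: split the long signal into blocks, convolve each block with the kernel, and add the overlapping tails back into the output (a toy sketch with a direct per-block convolution; real implementations convolve each block with an FFT):

```python
# Overlap–add convolution of a long signal with a short FIR kernel.
def direct_conv(f, g):
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

def overlap_add(signal, kernel, block=4):
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for start in range(0, len(signal), block):
        piece = direct_conv(signal[start:start + block], kernel)
        for k, v in enumerate(piece):   # add the block's tail into the output
            out[start + k] += v
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
h = [1, -1, 0.5]
print(overlap_add(x, h) == direct_conv(x, h))  # True: the block decomposition is exact
```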
Domain of definition
The convolution of two complex-valued functions on Rd is itself a complex-valued function on Rd, defined by: (f∗g)(x)=∫Rdf(y)g(x−y)dy=∫Rdf(x−y)g(y)dy, and is well-defined only if f and g decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in g at infinity can be easily offset by sufficiently rapid decay in f. The question of existence thus may involve different conditions on f and g: Compactly supported functions If f and g are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous (Hörmander 1983, Chapter 1). More generally, if either function (say f) is compactly supported and the other is locally integrable, then the convolution f∗g is well-defined and continuous.
Convolution of f and g is also well defined when both functions are locally square integrable on R and supported on an interval of the form [a, +∞) (or both supported on (−∞, a]).
Integrable functions The convolution of f and g exists if f and g are both Lebesgue integrable functions in L1(Rd), and in this case f∗g is also integrable (Stein & Weiss 1971, Theorem 1.3). This is a consequence of Tonelli's theorem. This is also true for sequences in ℓ1(Z) under the discrete convolution, or more generally for the convolution on any group.
Likewise, if f ∈ L1(Rd) and g ∈ Lp(Rd) where 1 ≤ p ≤ ∞, then f∗g ∈ Lp(Rd), and ‖f∗g‖p ≤ ‖f‖1‖g‖p. In the particular case p = 1, this shows that L1 is a Banach algebra under the convolution (and equality of the two sides holds if f and g are non-negative almost everywhere). More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable Lp spaces. Specifically, if 1 ≤ p, q, r ≤ ∞ satisfy: 1/p + 1/q = 1/r + 1, then ‖f∗g‖r ≤ ‖f‖p‖g‖q, f ∈ Lp, g ∈ Lq, so that the convolution is a continuous bilinear mapping from Lp×Lq to Lr. The Young inequality for convolution is also true in other contexts (circle group, convolution on Z). The preceding inequality is not sharp on the real line: when 1 < p, q, r < ∞, there exists a constant Bp,q < 1 such that: ‖f∗g‖r ≤ Bp,q‖f‖p‖g‖q, f ∈ Lp, g ∈ Lq. The optimal value of Bp,q was discovered in 1975 and independently in 1976; see Brascamp–Lieb inequality. A stronger estimate is true provided 1 < p, q, r < ∞: ‖f∗g‖r ≤ Cp,q‖f‖p‖g‖q,w, where ‖g‖q,w is the weak Lq norm. Convolution also defines a bilinear continuous map Lp,w×Lq,w→Lr,w for 1 < p, q, r < ∞, owing to the weak Young inequality: ‖f∗g‖r,w ≤ Cp,q‖f‖p,w‖g‖q,w.
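Young's inequality can be spot-checked discretely; with p = q = r = 1 it is exactly the Banach-algebra bound ‖f∗g‖1 ≤ ‖f‖1‖g‖1, with equality for non-negative sequences (a sketch, values chosen for illustration):

```python
# Discrete check of ||f*g||_1 <= ||f||_1 ||g||_1 (Young with p = q = r = 1).
def conv(f, g):
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

norm1 = lambda a: sum(abs(v) for v in a)

f, g = [1.0, -2.0, 0.5], [3.0, 1.0]
print(norm1(conv(f, g)) <= norm1(f) * norm1(g))       # True (strict here)
fp, gp = [1.0, 2.0, 0.5], [3.0, 1.0]
print(norm1(conv(fp, gp)) == norm1(fp) * norm1(gp))   # True: equality for f, g >= 0
```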
Functions of rapid decay In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if f and g both decay rapidly, then f∗g also decays rapidly. In particular, if f and g are rapidly decreasing functions, then so is the convolution f∗g. Combined with the fact that convolution commutes with differentiation (see Properties below), it follows that the class of Schwartz functions is closed under convolution (Stein & Weiss 1971, Theorem 3.3).
Distributions If f is a smooth function that is compactly supported and g is a distribution, then f∗g is a smooth function defined by ∫Rd f(y)g(x−y) dy = (f∗g)(x) ∈ C∞(Rd). More generally, it is possible to extend the definition of the convolution in a unique way, with φ the same as f above, so that the associative law f∗(g∗φ) = (f∗g)∗φ remains valid in the case where f is a distribution, and g a compactly supported distribution (Hörmander 1983, §4.2). Measures The convolution of any two Borel measures μ and ν of bounded variation is the measure μ∗ν defined by (Rudin 1962) ∫Rd f(x) d(μ∗ν)(x) = ∫Rd∫Rd f(x+y) dμ(x) dν(y). In particular, (μ∗ν)(A) = ∫Rd×Rd 1A(x+y) d(μ×ν)(x,y), where A ⊂ Rd is a measurable set and 1A is the indicator function of A. This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L1 functions when μ and ν are absolutely continuous with respect to the Lebesgue measure. The convolution of measures also satisfies the following version of Young's inequality: ‖μ∗ν‖ ≤ ‖μ‖‖ν‖, where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions.
Properties
Algebraic properties The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative associative algebra without identity (Strichartz 1994, §3.3). Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative associative algebras.
Commutativity: f∗g = g∗f. Proof: By definition, (f∗g)(t) = ∫−∞∞ f(τ)g(t−τ) dτ; changing the variable of integration to u = t−τ gives ∫−∞∞ f(t−u)g(u) du = (g∗f)(t).
Associativity: f∗(g∗h) = (f∗g)∗h. Proof: This follows from using Fubini's theorem (i.e., double integrals can be evaluated as iterated integrals in either order).
Distributivity: f∗(g+h) = (f∗g)+(f∗h). Proof: This follows from linearity of the integral.
Associativity with scalar multiplication: a(f∗g) = (af)∗g for any real (or complex) number a.
Multiplicative identity: No algebra of functions possesses an identity for the convolution. The lack of identity is typically not a major inconvenience, since most collections of functions on which the convolution is performed can be convolved with a delta distribution (a unitary impulse, centered at zero) or, at the very least (as is the case of L1), admit approximations to the identity. The linear space of compactly supported distributions does, however, admit an identity under the convolution. Specifically, f∗δ = f, where δ is the delta distribution.
Inverse element: Some distributions S have an inverse element S−1 for the convolution, which then must satisfy S−1∗S = δ, from which an explicit formula for S−1 may be obtained. The set of invertible distributions forms an abelian group under the convolution.
Complex conjugation: The complex conjugate of f∗g equals the convolution of the complex conjugates of f and g.
Relationship with differentiation: (f∗g)′ = f′∗g = f∗g′. Proof: (f∗g)′ = d/dt ∫−∞∞ f(τ)g(t−τ) dτ = ∫−∞∞ f(τ) ∂/∂t g(t−τ) dτ = ∫−∞∞ f(τ)g′(t−τ) dτ = f∗g′.
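The algebraic identities above can be verified numerically for discrete convolution, where they hold exactly for integer sequences (a spot-check, not a proof):

```python
# Spot-check of commutativity, associativity and distributivity for the
# discrete convolution of finitely supported sequences.
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

f, g, h = [1, 2], [3, -1, 4], [2, 5]
add = lambda a, b: [x + y for x, y in zip(a, b)]

print(conv(f, g) == conv(g, f))                    # commutativity
print(conv(conv(f, g), h) == conv(f, conv(g, h)))  # associativity
print(conv(f, add(g, [0, 0, 1])) == add(conv(f, g), conv(f, [0, 0, 1])))  # distributivity
```

All three checks print True.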
Relationship with integration: If F(t) = ∫−∞t f(τ) dτ and G(t) = ∫−∞t g(τ) dτ, then (F∗g)(t) = (f∗G)(t). Integration: If f and g are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals: ∫Rd (f∗g)(x) dx = (∫Rd f(x) dx)(∫Rd g(x) dx). This follows from Fubini's theorem. The same result holds if f and g are only assumed to be nonnegative measurable functions, by Tonelli's theorem. Differentiation: In the one-variable case, d/dx (f∗g) = (df/dx)∗g = f∗(dg/dx), where d/dx is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative: ∂/∂xi (f∗g) = (∂f/∂xi)∗g = f∗(∂g/∂xi). A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable as many times as f and g are in total. These identities hold under the precise condition that f and g are absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence of Young's convolution inequality. For instance, when f is continuously differentiable with compact support, and g is an arbitrary locally integrable function, d/dx (f∗g) = (df/dx)∗g. These identities also hold much more broadly in the sense of tempered distributions if one of f or g is a rapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution. In the discrete case, the difference operator D f(n) = f(n + 1) − f(n) satisfies an analogous relationship: D(f∗g) = (Df)∗g = f∗(Dg).
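The discrete difference identity D(f∗g) = (Df)∗g can be checked directly, provided the supports are tracked consistently: the forward difference of a sequence supported on 0…L−1 is supported on −1…L−1, so both sides below share the same index offset (a sketch, names illustrative):

```python
# Check D(f*g) = (Df)*g for the forward difference D f[n] = f[n+1] − f[n].
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

def diff(f):
    # Forward difference of a finitely supported sequence; padding with a
    # zero on each side captures the jumps at both edges of the support.
    padded = [0] + f + [0]
    return [padded[k + 1] - padded[k] for k in range(len(padded) - 1)]

f, g = [1, 2, 5], [3, 4]
print(diff(conv(f, g)) == conv(diff(f), g))  # True
```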
Convolution theorem The convolution theorem states that F{f∗g} = F{f}⋅F{g}, where F{f} denotes the Fourier transform of f. Convolution in other types of transformations: Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform. Convolution on matrices: If W is the Fourier transform matrix, then W(C(1)x∗C(2)y) = (WC(1)∙WC(2))(x⊗y) = WC(1)x∘WC(2)y, where ∙ is the face-splitting product, ⊗ denotes the Kronecker product, and ∘ denotes the Hadamard product (this result follows from properties of the count sketch). Translational equivariance: The convolution commutes with translations, meaning that τx(f∗g) = (τxf)∗g = f∗(τxg), where τxf is the translation of the function f by x, defined by (τxf)(y) = f(y−x). If f is a Schwartz function, then τxf is the convolution with a translated Dirac delta function, τxf = f∗τxδ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution.
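The circular form of the theorem, DFT(f∗g) = DFT(f)·DFT(g), is easy to verify with a naive O(N²) discrete Fourier transform (a sketch; function names are illustrative):

```python
import cmath

def dft(a):
    # Naive O(N^2) discrete Fourier transform.
    N = len(a)
    return [sum(a[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circ_conv(f, g):
    N = len(f)
    return [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]

f, g = [1, 2, 3, 4], [5, 6, 7, 8]
lhs = dft(circ_conv(f, g))                     # DFT of the circular convolution
rhs = [a * b for a, b in zip(dft(f), dft(g))]  # pointwise product of the DFTs
print(all(abs(x - y) < 1e-6 for x, y in zip(lhs, rhs)))  # True
```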
Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds: Suppose that S is a bounded linear operator acting on functions which commutes with translations: S(τxf) = τx(Sf) for all x. Then S is given as convolution with a function (or distribution) gS; that is, Sf = gS∗f. Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function gS is the impulse response of the transformation S.
A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every translation-invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every translation-invariant continuous linear operator on Lp for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers.
Convolutions on groups
If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by (f∗g)(x)=∫Gf(y)g(y−1x)dλ(y).
It is not commutative in general. In typical cases of interest G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as ∫f(xy−1)g(y)dλ(y). The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group: Lh(f∗g)=(Lhf)∗g.
Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former. On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T): Tf(x) = (1/2π)∫T f(y)g(x−y) dy. The operator T is compact. A direct calculation shows that its adjoint T* is convolution with g¯(−y).
By the commutativity property cited above, T is normal: T* T = TT* . Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {hk} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have hk(x)=eikx,k∈Z, which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above.
A discrete example is a finite cyclic group of order n. Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform. A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform.
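The circulant-matrix picture can be made concrete: the matrix whose (i, j) entry is g[(i − j) mod N] acts on a vector exactly as circular convolution with g (a sketch with pure-Python linear algebra):

```python
# A circulant matrix built from g acts on x as circular convolution with g.
def circulant(g):
    N = len(g)
    return [[g[(i - j) % N] for j in range(N)] for i in range(N)]

def matvec(M, x):
    return [sum(r * v for r, v in zip(row, x)) for row in M]

def circ_conv(f, g):
    N = len(f)
    return [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]

g, x = [1, 0, 2, 3], [4, 1, 2, 0]
print(matvec(circulant(g), x) == circ_conv(x, g))  # True
```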
Convolution of measures
Let G be a (multiplicatively written) topological group. If μ and ν are finite Borel measures on G, then their convolution μ∗ν is defined as the pushforward measure of the group action and can be written as (μ∗ν)(E) = ∬1E(xy) dμ(x) dν(y) for each measurable subset E of G. The convolution is also a finite measure, whose total variation satisfies ‖μ∗ν‖ ≤ ‖μ‖‖ν‖. In the case when G is locally compact with (left-)Haar measure λ, and μ and ν are absolutely continuous with respect to λ, so that each has a density function, then the convolution μ∗ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. If μ and ν are probability measures on the topological group (R,+), then the convolution μ∗ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν.
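The probabilistic interpretation is easy to compute with: the distribution of the sum of two independent discrete random variables is the convolution of their probability mass functions, for example two fair dice:

```python
from fractions import Fraction

# Distribution of X + Y for independent X, Y: convolve the two pmfs.
die = {k: Fraction(1, 6) for k in range(1, 7)}

def convolve_pmf(p, q):
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, Fraction(0)) + px * qy
    return out

two_dice = convolve_pmf(die, die)
print(two_dice[7])             # 1/6: seven is the most likely total
print(sum(two_dice.values()))  # 1: the result is again a probability distribution
```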
Infimal convolution
In convex analysis, the infimal convolution of proper (not identically +∞) convex functions f1, …, fm on Rn is defined by: (f1∗⋯∗fm)(x) = inf{ f1(x1) + ⋯ + fm(xm) : x1 + ⋯ + xm = x }. It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform played instead by the Legendre transform: we have (f1∗⋯∗fm)* = f1* + ⋯ + fm*.
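A discretized sketch (grid and example functions chosen for illustration): evaluating the two-function case inf over y of f(x − y) + g(y) on a grid, for f(x) = |x| and g(y) = y²:

```python
# Grid approximation of the infimal convolution inf_y { f(x − y) + g(y) }.
def inf_conv(f, g, xs):
    return [min(f(x - y) + g(y) for y in xs) for x in xs]

xs = [i / 10 for i in range(-30, 31)]
h = inf_conv(abs, lambda y: y * y, xs)
# Both inputs are convex, non-negative, and vanish only at 0, so the
# infimal convolution is non-negative and attains its minimum 0 at x = 0.
print(min(h), h[xs.index(0.0)])  # 0.0 0.0
```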
Bialgebras
Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ: X → X are functions that respect all algebraic structure of X, then the convolution φ∗ψ is defined as the composition X→ΔX⊗X→ϕ⊗ψX⊗X→∇X.
The convolution appears notably in the definition of Hopf algebras (Kassel 1995, §III.3). A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that idX∗S = S∗idX = η∘ε.
Applications
Convolution and related operations are found in many applications in science, engineering and mathematics.
Convolutional neural networks apply multiple cascaded convolution kernels with applications in machine vision and artificial intelligence (though in most cases these are actually cross-correlations rather than convolutions).
In non-neural-network-based image processing, convolutional filtering plays an important role in many important algorithms in edge detection and related processes (see Kernel (image processing)).
In optics, an out-of-focus photograph is a convolution of the sharp image with a lens function; the photographic term for this is bokeh. In image processing, convolution is used in applications such as blurring.
In analytical chemistry, Savitzky–Golay smoothing filters are used for the analysis of spectroscopic data; they can improve signal-to-noise ratio with minimal distortion of the spectra.
In statistics, a weighted moving average is a convolution.
In acoustics, reverberation is the convolution of the original sound with echoes from objects surrounding the sound source. In digital signal processing, convolution is used to map the impulse response of a real room onto a digital audio signal.
In electronic music, convolution is the imposition of a spectral or rhythmic structure on a sound; often this envelope or structure is taken from another sound. The convolution of two signals is the filtering of one through the other.
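As a small image-processing illustration (the image and kernel values here are arbitrary examples), a 3×3 Laplacian kernel convolved with a grayscale image responds strongly at edges:

```python
# 2-D convolution with zero padding, flipping the kernel as convolution requires.
def conv2d(img, ker):
    H, W = len(img), len(img[0])
    kh, kw = len(ker), len(ker[0])
    ch, cw = kh // 2, kw // 2
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    y, x = i - (a - ch), j - (b - cw)  # kernel flip
                    if 0 <= y < H and 0 <= x < W:       # zero padding at borders
                        s += img[y][x] * ker[a][b]
            out[i][j] = s
    return out

img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
laplace = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
edges = conv2d(img, laplace)
print(edges[1][1])  # -18.0: strong response at the corner of the bright square
```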
In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear time-invariant system (LTI). At any given moment, the output is an accumulated effect of all the prior values of the input function, with the most recent values typically having the most influence (expressed as a multiplicative factor). The impulse response function provides that factor as a function of the elapsed time since each input value occurred.
In physics, wherever there is a linear system with a "superposition principle", a convolution operation makes an appearance. For instance, in spectroscopy line broadening due to the Doppler effect on its own gives a Gaussian spectral line shape and collision broadening alone gives a Lorentzian line shape. When both effects are operative, the line shape is a convolution of Gaussian and Lorentzian, a Voigt function.
In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse.
In computational fluid dynamics, the large eddy simulation (LES) turbulence model uses the convolution operation to lower the range of length scales necessary in computation, thereby reducing computational cost.
In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions.
In kernel density estimation, a distribution is estimated from sample points by convolution with a kernel, such as an isotropic Gaussian.
In radiotherapy treatment planning systems, most modern calculation codes apply a convolution-superposition algorithm.
In structural reliability, the reliability index can be defined based on the convolution theorem. The definition of the reliability index for limit state functions with nonnormal distributions can be established corresponding to the joint distribution function; in fact, the joint distribution function can be obtained using convolution theory.
In smoothed-particle hydrodynamics, simulations of fluid dynamics are calculated using particles, each with surrounding kernels. For any given particle i, some physical quantity Ai is calculated as a convolution of Aj with a weighting function, where j denotes the neighbors of particle i: those that are located within its kernel. The convolution is approximated as a summation over each neighbor.
In fractional calculus, convolution is instrumental in various definitions of the fractional integral and fractional derivative.
Single-track road
A single-track road or one-lane road is a road that permits two-way travel but is not wide enough in most places to allow vehicles to pass one another (although sometimes two compact cars can pass). This kind of road is common in rural areas across the United Kingdom and elsewhere. To accommodate two-way traffic, many single-track roads, especially those officially designated as such, are provided with passing places (United Kingdom) or pullouts or turnouts (United States), or simply wide spots in the road, which may be scarcely longer than a typical car using the road. The distance between passing places varies considerably, depending on the terrain and the volume of traffic on the road. The railway equivalent for passing places are passing loops.
In Scotland
The term is widely used in Scotland, particularly the Highlands, to describe such roads. Passing places are generally marked with a diamond-shaped white sign with the words "passing place" on it. New signs tend to be square rather than diamond-shaped, as diamond signs are also used for instructions to tram drivers in cities. On some roads, especially in Argyll and Bute, passing places are marked with black-and-white-striped posts. Signs remind drivers of slower vehicles to pull over into a passing place (or opposite it, if it is on the opposite side of the road) to let following vehicles pass, and most drivers oblige. The same system is found very occasionally in rural England and Wales, as well as Sai Kung District in the New Territories. Sometimes two small vehicles can pass one another at a place other than a designated passing place.
Some A-class and B-class roads in the Highlands are still single-track, although many sections have been widened for the sake of faster travel. In 2009, the A830 "Road to the Isles" and the A851 on the Isle of Skye had their single-track sections replaced with higher-quality single-carriageway road.
In mountains
In remote backcountry areas around the world, particularly in mountains, many roads are single-track and unmarked. These include many U.S. Forest Service and logging roads in the United States. In Peru, the second of two overland transportation routes between Cuzco and Madre de Dios Region, a 300 km heavy-truck route, is a single-track road of gravel and dirt.